
Mathematics for Data Science - 2

Professor Sarang S. Sane


Department of Mathematics
Indian Institute of Technology, Madras
Lecture No. 31
Linear Independence – Part 2

Hello, and welcome to the online B.Sc. program on data science and programming. In this video
we are going to talk about linear independence, continuing from our previous video on the same
topic. Recall that a set of vectors is linearly independent if it is not linearly dependent; that is,
the only linear combination of the vectors which equals 0 is the one in which all the
coefficients are 0.

(Refer Slide Time: 00:44)

So, just to recall, here is the last example from our previous video. We have the three
vectors (1, 1, 2), (1, 2, 0) and (0, 2, 1) in ℝ³, and we take unknown coefficients a, b and c for these
vectors: a(1,1,2) + b(1,2,0) + c(0,2,1) = (0,0,0). This is what we assume, and then we work
out a, b and c. And indeed, if we equate the coordinates on each side of this equation,
write down the resulting equations in a, b and c and solve them, we get a = b = c = 0.
(Refer Slide Time: 01:24)

So, we can now ask the same question in general: if you have a set of n vectors in ℝᵐ, how do
we check whether they are linearly independent? Let us go back one step and ask what we did
in ℝ³. In ℝ³, we took arbitrary coefficients — I will underline the word arbitrary,
or unknown — and then checked what the possible solutions for these coefficients are if the
linear combination is to equal 0. This is a general template, so in ℝᵐ you do the same thing.

So, suppose the jᵗʰ vector vⱼ has coordinates v₁ⱼ, v₂ⱼ, …, vₘⱼ — it is in ℝᵐ, so there are
m coordinates — and you write down these coordinates for each of your vectors
v₁, v₂, …, vₙ. Now write the linear combination of these vectors with arbitrary or unknown
coefficients a₁, a₂, …, aₙ. We want to determine the coefficients satisfying the equation
a₁v₁ + a₂v₂ + ⋯ + aₙvₙ = 0. So, we equate this linear combination on the left to 0.

Now, remember this is in ℝᵐ, so both sides can be expressed in terms of coordinates. The first
coordinate on the left is v₁₁a₁ + v₁₂a₂ + ⋯ + v₁ₙaₙ; the first coordinate on the right-hand
side is 0, since the right-hand side is the 0 vector. Similarly, the second coordinate gives
v₂₁a₁ + v₂₂a₂ + ⋯ + v₂ₙaₙ = 0, and we can do this all the way up to the mᵗʰ coordinate —
remember, there are m coordinates.

Now, notice that we have written this system in a particular way: the vᵢⱼ's on the left and
the aⱼ's on the right. Why do we do that? The reason is that here the unknowns are the
coefficients a₁, a₂, …, aₙ, and our notation for a system Ax = b is to write the iᵗʰ equation
as ∑ⱼ aᵢⱼ xⱼ = bᵢ. So here the xⱼ's are our coefficients aⱼ, and the matrix entries aᵢⱼ
are the vᵢⱼ's. That is why it is written in a slightly altered way.

And now, we know this is a system of linear equations — in fact, a homogeneous system of
linear equations, because the right-hand side is 0 — and we know exactly how to solve it; this
is something we have done in the previous weeks. The most general method is what we call
Gaussian elimination, though there are situations where you can do better by just looking at
the determinant, and so on.

(Refer Slide Time: 04:26)

So, since the aᵢ's are arbitrary or unknown, we can treat this as a homogeneous system of linear
equations with coefficients vᵢⱼ and unknowns aᵢ. This is exactly what we discussed. So, for linear
independence, we have to check whether the only choice of aᵢ satisfying the above equations
is aᵢ = 0 for each i.

So, remember, when we study linear independence, we are not really interested in asking which
coefficients give you the linear combination 0; we do not want to explicitly compute the set
of solutions. All we want to know is whether the only solution is all 0's, or whether a
non-zero solution — meaning one with at least one non-zero coefficient — exists. So, we need
not solve the entire system. That is the point.
So, equivalently, in terms of the homogeneous system of linear equations, we have to check whether
the only solution is the 0 solution, meaning each of the aᵢ's is 0. The conclusion — and this is
the main point — is: to check whether v₁, v₂, …, vₙ in ℝᵐ are linearly independent, we check
whether the homogeneous system of linear equations Vx = 0 has only the trivial solution, meaning
the solution where x is 0. And what is this V? V is the matrix we form out of these vectors,
whose jᵗʰ column is the vector vⱼ: you take the coordinates of vⱼ and write them in as the
jᵗʰ column of V. That is how you get the matrix V.
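The procedure just described — form V with the vⱼ as columns and check whether Vx = 0 forces x = 0 — can be sketched in code. This is an illustrative implementation of my own, not from the lecture; it row-reduces V with exact rational arithmetic and reports independence exactly when the rank of V equals the number of vectors.

```python
from fractions import Fraction

def is_linearly_independent(vectors):
    """vectors: a list of tuples, each a vector in R^m.
    Returns True iff Vx = 0 has only the trivial solution,
    i.e. iff rank(V) equals the number of vectors n."""
    m, n = len(vectors[0]), len(vectors)
    # Matrix V: the j-th column holds the coordinates of v_j.
    V = [[Fraction(vectors[j][i]) for j in range(n)] for i in range(m)]
    row = 0
    for col in range(n):
        # Gaussian elimination: look for a pivot in this column.
        pivot = next((r for r in range(row, m) if V[r][col] != 0), None)
        if pivot is None:
            continue  # free column => a free unknown => non-trivial solutions
        V[row], V[pivot] = V[pivot], V[row]
        for r in range(m):
            if r != row and V[r][col] != 0:
                factor = V[r][col] / V[row][col]
                V[r] = [a - factor * b for a, b in zip(V[r], V[row])]
        row += 1
    return row == n  # rank == n  <=>  only the trivial solution
```

For instance, on the recap example (1, 1, 2), (1, 2, 0), (0, 2, 1) this returns True, matching the conclusion a = b = c = 0.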

(Refer Slide Time: 06:29)

So, let us do a bunch of examples to put these ideas into place, starting with a 2 by 2
example. Consider the two vectors (5, 2) and (1, 3) in ℝ². Write the linear combination of
these two vectors with unknown coefficients x₁ and x₂ and equate it to 0: x₁(5, 2) + x₂(1, 3) =
(0, 0). Note that I am now using x₁ and x₂ in place of a₁ and a₂.

So, we have the system of linear equations 5x₁ + x₂ = 0 and 2x₁ + 3x₂ = 0, and we want to
know whether the only solution is x₁ = x₂ = 0 or whether there are other solutions. Remember
that, this being a homogeneous system, the trivial solution always exists — this is something
we discussed in the previous week.

Since it is a 2 by 2 matrix, we can look at the corresponding determinant; here the
determinant is enough to decide whether or not the only solution is (0, 0). In this case, the
determinant is 5·3 − 1·2 = 13, which is non-zero, so the matrix is invertible. And for an
invertible matrix, we know that the only solution is the trivial solution. So, the system has
the unique solution x₁ = x₂ = 0, and the upshot is that the vectors (5, 2) and (1, 3) are
linearly independent. I hope you have understood how to settle the question of whether two
vectors are linearly independent, at least in this case.
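As a quick sanity check — my own illustration, not part of the lecture — the 2 by 2 determinant computation can be written out directly:

```python
def det2(M):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    return a * d - b * c

# Columns of V are the vectors (5, 2) and (1, 3).
V = [[5, 1],
     [2, 3]]
print(det2(V))  # 5*3 - 1*2 = 13: non-zero, so the vectors are linearly independent
```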

(Refer Slide Time: 08:23)

We are going to do a bunch of examples. Here is one where you have a 3 by 2 matrix. Consider
the two vectors (1, 2, 0) and (3, 3, 5) in ℝ³. Let us first understand what "3 by 2" means:
your vectors are in ℝ³ and you have two of them, so you get a 3 by 2 matrix, which has two
columns. Remember, the columns correspond to the vectors, and the size of each vector is the
number of rows — that tells you it is ℝ³.

So, in general, if you have an m by n matrix, you have n columns corresponding to n vectors,
and the size of the vectors is m, which means they are in ℝᵐ. We saw this a few slides ago.
So, you have these two vectors (1, 2, 0) and (3, 3, 5). Before we go ahead, note that we
already understand linear independence for two vectors: two non-zero vectors are linearly
independent exactly when they are not multiples of each other. Here you can see that these
are not multiples of each other, so they are linearly independent. You already know this
fact, but let us redo it in terms of our matrix.

So, write the linear combination of these two vectors with unknown coefficients x₁ and x₂ and
equate it to 0: x₁(1, 2, 0) + x₂(3, 3, 5) = (0, 0, 0). From here we get a system of linear
equations by equating the corresponding coordinates. The first coordinate gives x₁ + 3x₂ =
0, the second gives 2x₁ + 3x₂ = 0, and the third gives 0x₁ + 5x₂ = 0.

And now, in this case, it is easy to check that the only solution has x₁ and x₂ both
0: the third equation gives 5x₂ = 0, so x₂ must be 0, and putting that into the first
equation tells you x₁ must be 0. You need not cross-check with the second equation,
because we already know that x₁ = x₂ = 0 is a solution — this is a homogeneous system.

So, we have a unique solution, namely x₁ and x₂ both 0. If you do not find this ad hoc
way of doing it agreeable, you can do it the standard way, namely Gaussian elimination —
and we will soon see an example where we actually have to use Gaussian elimination. So,
what is the net result? The net result is what we observed right at the start, because they
are not multiples of each other: the vectors (1, 2, 0) and (3, 3, 5) are linearly independent.
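The back-substitution argument above can be spot-checked by brute force over a small grid of integer coefficients — an illustrative check of my own, not from the lecture (a search over integers does not replace the algebraic argument, but it agrees with it):

```python
from itertools import product

# Try small integer pairs (x1, x2) against all three equations;
# only the trivial solution survives.
solutions = [(x1, x2)
             for x1, x2 in product(range(-3, 4), repeat=2)
             if x1 + 3 * x2 == 0 and 2 * x1 + 3 * x2 == 0 and 5 * x2 == 0]
print(solutions)  # [(0, 0)]
```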
(Refer Slide Time: 11:30)

Let us do a 2 by 3 example. What does this mean? It means I have three vectors in ℝ²: let
us take the three vectors (1, 2), (1, 3) and (3, 4) in ℝ². Remember that three vectors being
linearly dependent is the same as saying that one of them can be written as a linear
combination of the others, so one way of checking would be to ask whether one of these is a
linear combination of the other two. But a more direct method is to take unknown
coefficients x₁, x₂ and x₃ and write down the equation x₁(1, 2) + x₂(1, 3) + x₃(3, 4) =
(0, 0).

Let us equate the coordinates. If you do, you get 1x₁ + 1x₂ + 3x₃ = 0 and 2x₁ + 3x₂ +
4x₃ = 0. Those are your two equations; this is a system of linear equations, and now we can
use Gaussian elimination. The augmented matrix for this system is

[ 1  1  3 | 0 ]
[ 2  3  4 | 0 ]

Let us do Gaussian elimination, which means we row reduce. If you row reduce, it is very
easy to check that the solutions are of the form x₁ = −5c, x₂ = 2c and x₃ = c, where
c ∈ ℝ. The point is that as c varies, the solutions vary, which means there are infinitely many
solutions. So the net result is that the vectors (1, 2), (1, 3) and (3, 4) are linearly dependent.
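We can verify this family of solutions directly — again a small check of my own, not from the lecture — by substituting x₁ = −5c, x₂ = 2c, x₃ = c back into both coordinates of the combination for a few values of c:

```python
for c in (1, -2, 7):
    x1, x2, x3 = -5 * c, 2 * c, c
    combo = (x1 * 1 + x2 * 1 + x3 * 3,   # first coordinates of (1,2), (1,3), (3,4)
             x1 * 2 + x2 * 3 + x3 * 4)   # second coordinates
    print(c, combo)  # every c gives (0, 0): infinitely many solutions
```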

So, here, let us observe what happened: you had 3 unknowns and only 2 equations. We have
already seen in the previous week what happens in these cases — if a homogeneous system has
more unknowns than equations, it always has non-trivial solutions. Keep that in mind; we
will make this precise later on.

(Refer Slide Time: 13:57)

Finally, let us see a 3 by 3 example, meaning three vectors in ℝ³. Let us take the vectors
(1, 2, 0), (0, 2, 4) and (3, 0, 0). Actually, right away you can see that these are linearly
independent: if they were not, one of them could be written as a linear combination of the
other two, and just by observation you can see that is not possible. But let us go through
our usual method.

So, let us take unknown coefficients x₁, x₂ and x₃ and equate the linear combination to 0:
x₁(1, 2, 0) + x₂(0, 2, 4) + x₃(3, 0, 0) = (0, 0, 0). Equating the corresponding
coordinates on each side, we get 1x₁ + 0x₂ + 3x₃ = 0, then 2x₁ + 2x₂ + 0x₃ = 0,
and then 0x₁ + 4x₂ + 0x₃ = 0.

Now, you could use Gaussian elimination, or — because it is a 3 by 3 case — you can consider
the determinant, which is what we do in this solution, or you can just do it by observation:
the last equation is 4x₂ = 0, so x₂ = 0; substituting that in the second equation gives
x₁ = 0; and substituting x₁ = 0 in the first equation gives x₃ = 0. Alternatively, you can
look at the corresponding matrix, whose rows are (1, 0, 3), (2, 2, 0) and (0, 4, 0), and note
that it has non-zero determinant.
Indeed, the determinant here is 24, so this is an invertible matrix, which tells us that the
system has the unique solution (0, 0, 0). So, the upshot is that these vectors are linearly
independent — we observed this informally at the start of the slide, and now we have
explicitly proved it.
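The 3 by 3 determinant can be checked by cofactor expansion along the first row — again just an illustrative computation, not part of the lecture:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Columns of V are the vectors (1, 2, 0), (0, 2, 4), (3, 0, 0).
V = [[1, 0, 3],
     [2, 2, 0],
     [0, 4, 0]]
print(det3(V))  # 24: non-zero, so the vectors are linearly independent
```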

(Refer Slide Time: 16:32)

So, now let us address the comment I made earlier. Remember we had three vectors in ℝ² —
the 2 by 3 case. Suppose now we have more than two vectors in ℝ², say n vectors where n is
at least 3. What happens? To check linear independence, we have to check whether the
corresponding homogeneous linear system Vx = 0 has the unique solution x = 0.

But on the other hand, since n is at least 3, which is strictly bigger than 2, this is a
homogeneous system with more unknowns than equations, and we have seen in the previous week
that Gaussian elimination will yield infinitely many solutions. So, any set of n vectors in
ℝ² with n at least 3 — three vectors, four vectors, 20 vectors, 100 vectors — is going to be
a linearly dependent set of vectors. That is the conclusion.

So, only if you have a single vector or two vectors in ℝ² do they have a chance of being
linearly independent; more than two vectors will always be linearly dependent. You can see
that this number somehow picks up the fact that we are in ℝ² — we are in ℝ², and that fact
is being detected by linear independence. This is important, and we will see it again
shortly in our next few videos.

(Refer Slide Time: 18:12)

We can generalize this. Suppose you are in ℝⁿ and you have more than n vectors — for
example, 4 vectors in ℝ³, or 5 vectors in ℝ⁴, or, say, 20 vectors in ℝ⁸. You can make the
same argument as in the previous slide for ℝ²: you will get a homogeneous system of linear
equations with more unknowns than equations, and hence it always has a non-trivial
solution, which tells you that the vectors are linearly dependent.

So, if you have more than n vectors in ℝⁿ, they are always going to be linearly dependent —
an important point. Again, this is picking up that number n in some way.
(Refer Slide Time: 19:17)

Let us do an example in ℝ³. Consider the four vectors (1, 2, 0), (0, 2, 4), (3, 0, 0) and (1, 2, 3)
in ℝ³. Remember, we have already checked that the first three are linearly independent; now
we are introducing a new vector, (1, 2, 3). So, let us do our usual thing and write down
three equations by equating the corresponding coordinates.

So, we have the equation x₁(1, 2, 0) + x₂(0, 2, 4) + x₃(3, 0, 0) + x₄(1, 2, 3) = (0, 0, 0).
Looking at the corresponding coordinates, we get 1x₁ + 0x₂ + 3x₃ + 1x₄ = 0, 2x₁ + 2x₂ + 0x₃ +
2x₄ = 0, and 0x₁ + 4x₂ + 0x₃ + 3x₄ = 0.

Now, of course, there are lots of 0s here, and you may be able to manipulate and solve this
by hand. But, as we know, the general, fastest and cleanest method is Gaussian elimination.
So, we write down the augmented matrix:

[ 1  0  3  1 | 0 ]
[ 2  2  0  2 | 0 ]
[ 0  4  0  3 | 0 ]
(Refer Slide Time: 20:53)

Let us do row reduction. If you do, you get the reduced augmented matrix

[ 1  0  0  1/4 | 0 ]
[ 0  1  0  3/4 | 0 ]
[ 0  0  1  1/4 | 0 ]

So, what have we obtained? Remember the idea of which variables are independent and which
are dependent: here x₄ is the independent variable and x₁, x₂, x₃ are dependent. So, put
x₄ = c; then from the first equation you get x₁ = −c/4, from the second x₂ = −3c/4, and
from the third x₃ = −c/4.

So, as c varies, this gives your set of solutions. As an example, take c to be, say, 4:
you get −1(1, 2, 0) − 3(0, 2, 4) − 1(3, 0, 0) + 4(1, 2, 3), and you can check that this is
actually equal to (0, 0, 0). So, I have given you an explicit example of a linear combination
of these four vectors which yields 0 and whose coefficients are not all 0 — in fact, in this
case none of them is 0, but it is enough that not all of them are 0.
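The claimed dependence can be confirmed numerically (an illustrative check of my own, not part of the lecture): with the coefficients obtained at c = 4, the combination really is the zero vector.

```python
vectors = [(1, 2, 0), (0, 2, 4), (3, 0, 0), (1, 2, 3)]
coeffs = [-1, -3, -1, 4]  # the solution family evaluated at c = 4

# Sum coefficient * vector, coordinate by coordinate.
combo = tuple(sum(a * v[i] for a, v in zip(coeffs, vectors)) for i in range(3))
print(combo)  # (0, 0, 0): a non-trivial combination equal to 0, so the set is dependent
```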

So, the net upshot is that these four vectors are linearly dependent. We already knew this,
because, remember, we said that if you have more than three vectors in ℝ³ — meaning four or
more — then they are going to be linearly dependent, and this example shows you why that
happens.
(Refer Slide Time: 23:08)

So, finally, let us talk about the relationship with the determinant; we have already seen
some examples involving it. Suppose you have a set of n vectors in ℝⁿ and we want to check
whether they are linearly independent. What do we do? We take those vectors, express them in
terms of their coordinates, and form the corresponding matrix V.

That is now an n by n matrix whose jᵗʰ column corresponds to the jᵗʰ vector vⱼ. Then you
look at the corresponding homogeneous system of linear equations Vx = 0 and ask whether the
only solution is x = 0; that will determine whether or not the set is linearly independent.

If the only solution is 0, the vectors are linearly independent; if there are non-zero
solutions, they are not linearly independent, meaning they are linearly dependent. I can
check this by looking at whether or not V is invertible, and to check whether V is invertible
I can compute the determinant of V. If the determinant is 0, these vectors are linearly
dependent; if the determinant is non-zero, they are linearly independent.

So, the system has a unique solution if and only if this matrix V is invertible (on the
slide the matrix is written as A; it should be V). Recall that V being invertible means
there exists V⁻¹ such that V·V⁻¹ = V⁻¹·V is the identity, and so the determinant is
non-zero. And we can reverse this: if you recall how we went the other way, when the
determinant is non-zero you can look at the minors and the adjugate matrix to construct the
inverse.

(Refer Slide Time: 25:40)

So, let us do an example. Look at these three vectors in ℝ³: (1, 4, 2), (0, 4, 3) and
(1, 1, 0). The corresponding matrix is

[ 1  0  1 ]
[ 4  4  1 ]
[ 2  3  0 ]

What we do is put (1, 4, 2) into the first column, (0, 4, 3) into the second column and
(1, 1, 0) into the third column — note the vectors go along the columns, not the rows. Then
let us look at the determinant of V (on the slide this is labelled A; it should be V).

The determinant is 1, so the matrix V is invertible, and so the vectors (1, 4, 2), (0, 4, 3)
and (1, 1, 0) are linearly independent. This is an example of the previous idea: if you have
n vectors in ℝⁿ, you can decide whether or not they are linearly independent just by putting
the jᵗʰ vector into the jᵗʰ column, creating the matrix, and looking at the determinant. If
the determinant is 0, they are linearly dependent; if the determinant is non-zero, they are
linearly independent.
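Here is the same check in code for this last example (my own illustration, not from the lecture): place each vector into a column of V and expand the determinant along the first row.

```python
# Columns of V are the vectors (1, 4, 2), (0, 4, 3), (1, 1, 0).
V = [[1, 0, 1],
     [4, 4, 1],
     [2, 3, 0]]

# Cofactor expansion along the first row.
(a, b, c), (d, e, f), (g, h, i) = V
det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
print(det)  # 1: non-zero, so the three vectors are linearly independent
```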

So, let us summarize what we have seen in this video. Checking linear independence reduces
to solving a system of linear equations, where the coefficients of the linear combination
are the unknowns and the matrix V comes from the vectors by putting the jᵗʰ vector into the
jᵗʰ column. All of this, of course, is for ℝᵐ; if you are in some other vector space, what I
am describing does not apply directly, and you have to work it out by hand. We will see
examples of that later on and in the tutorials.

So, then you look at the system of linear equations Vx = 0, which is a homogeneous system.
If it has a non-trivial solution — meaning a solution where x is not 0 — then the set of
vectors is linearly dependent. If the only possible solution is the trivial solution x = 0,
then the set of vectors v₁, v₂, …, vₙ is linearly independent.

And note that if the number of vectors is bigger than n, the exponent of the vector space
ℝⁿ in which one is working — so you have n + 1 or more vectors in ℝⁿ — then they are always
linearly dependent. And if you have exactly n vectors in ℝⁿ, you can check linear dependence
or independence by looking at the determinant of the corresponding matrix V formed by
putting the jᵗʰ vector into the jᵗʰ column. So, this is more or less everything that we
have discussed in this video. Thank you.
