
MATH1550

Lecture 12: Linear independence


Last updated: September 24, 2018

Warning: these notes are for reference only. They may contain typos. Read at your own risk.
The lecture is based on
Beezer, A First Course in Linear Algebra, Ver 3.5,
downloadable at http://linear.ups.edu/download.html.
The print version can be downloaded at
http://linear.ups.edu/download/fcla-3.50-print.pdf.

Textbook Reading:
Beezer, Ver 3.5 Section LI (print version p95 - p104)
Strang, Section 2.3
Exercises
Exercises with solutions can be downloaded at
http://linear.ups.edu/download/fcla-3.50-solution-manual.pdf
Section LI (p.40-48)
Strang, Section 2.3

Linear independence is one of the most fundamental conceptual ideas in linear algebra, along with the notion of a span. So this lecture, and the subsequent ones, will explore this new idea.

1 Linearly Independent Sets of Vectors


Definition 1 (Relation of Linear Dependence for Column Vectors) Given a set
of vectors S = {u1 , u2 , u3 , . . . , un }, a true statement of the form
α1 u1 + α2 u2 + α3 u3 + · · · + αn un = 0

is a relation of linear dependence on S. If this statement is formed in a trivial fashion, i.e., αi = 0, 1 ≤ i ≤ n, then we say it is the trivial relation of linear dependence on S.
Definition 2 (Linear Independence of Column Vectors) The set of vectors S = {u1 , u2 , u3 , . . . , un } is linearly dependent if there is a relation of linear dependence on S that is not trivial. In the case where the only relation of linear dependence on S is the trivial one, S is a linearly independent set of vectors.

Example 1 Linearly dependent set in R5
Consider the set of n = 4 vectors from R5,

S = \left\{ \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ -1 \\ 5 \\ 2 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ -3 \\ 6 \\ 1 \end{bmatrix}, \begin{bmatrix} -6 \\ 7 \\ -1 \\ 0 \\ 1 \end{bmatrix} \right\}

To determine linear independence we first form a relation of linear dependence,

\alpha_1 \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 2 \end{bmatrix} + \alpha_2 \begin{bmatrix} 1 \\ 2 \\ -1 \\ 5 \\ 2 \end{bmatrix} + \alpha_3 \begin{bmatrix} 2 \\ 1 \\ -3 \\ 6 \\ 1 \end{bmatrix} + \alpha_4 \begin{bmatrix} -6 \\ 7 \\ -1 \\ 0 \\ 1 \end{bmatrix} = \mathbf{0}
We know that α1 = α2 = α3 = α4 = 0 is a solution to this equation, but that is of no
interest whatsoever. That is always the case, no matter what four vectors we might have
chosen. We are curious to know if there are other, nontrivial, solutions. Row-reducing the matrix whose columns are the vectors of S,

A = \begin{bmatrix} 2 & 1 & 2 & -6 \\ -1 & 2 & 1 & 7 \\ 3 & -1 & -3 & -1 \\ 1 & 5 & 6 & 0 \\ 2 & 2 & 1 & 1 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
We could solve this homogeneous system completely, but for this example all we need
is one nontrivial solution. Setting the lone free variable to any nonzero value, such as
x4 = 1, yields the nontrivial solution

x = \begin{bmatrix} 2 \\ -4 \\ 3 \\ 1 \end{bmatrix}
Completing our application of Lecture 7 Theorem 2, we have

2 \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 2 \end{bmatrix} + (-4) \begin{bmatrix} 1 \\ 2 \\ -1 \\ 5 \\ 2 \end{bmatrix} + 3 \begin{bmatrix} 2 \\ 1 \\ -3 \\ 6 \\ 1 \end{bmatrix} + 1 \begin{bmatrix} -6 \\ 7 \\ -1 \\ 0 \\ 1 \end{bmatrix} = \mathbf{0}

This is a relation of linear dependence on S that is not trivial, so we conclude that S is linearly dependent. □
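For readers who want to check such a computation by machine, here is a minimal sketch using SymPy (my addition; the notes themselves do no programming). Matrix.rref() and Matrix.nullspace() are standard SymPy calls that reproduce the row reduction and the nontrivial solution found above.

from sympy import Matrix

# Columns are the vectors of S from Example 1.
A = Matrix([
    [ 2,  1,  2, -6],
    [-1,  2,  1,  7],
    [ 3, -1, -3, -1],
    [ 1,  5,  6,  0],
    [ 2,  2,  1,  1],
])

R, pivots = A.rref()        # R matches the RREF displayed above; pivots == (0, 1, 2)
alpha = A.nullspace()[0]    # Matrix([2, -4, 3, 1]), the nontrivial solution found above
assert A * alpha == Matrix([0, 0, 0, 0, 0])   # a nontrivial relation of linear dependence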
Example 2 Linearly independent set in R5
Consider the set of n = 4 vectors from R5,

T = \left\{ \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ -1 \\ 5 \\ 2 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ -3 \\ 6 \\ 1 \end{bmatrix}, \begin{bmatrix} -6 \\ 7 \\ -1 \\ 1 \\ 1 \end{bmatrix} \right\}
To determine linear independence we first form a relation of linear dependence,

\alpha_1 \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 2 \end{bmatrix} + \alpha_2 \begin{bmatrix} 1 \\ 2 \\ -1 \\ 5 \\ 2 \end{bmatrix} + \alpha_3 \begin{bmatrix} 2 \\ 1 \\ -3 \\ 6 \\ 1 \end{bmatrix} + \alpha_4 \begin{bmatrix} -6 \\ 7 \\ -1 \\ 1 \\ 1 \end{bmatrix} = \mathbf{0}

We know that α1 = α2 = α3 = α4 = 0 is a solution to this equation, but that is of no
interest whatsoever. Row-reducing the matrix whose columns are the vectors of T,

B = \begin{bmatrix} 2 & 1 & 2 & -6 \\ -1 & 2 & 1 & 7 \\ 3 & -1 & -3 & -1 \\ 1 & 5 & 6 & 1 \\ 2 & 2 & 1 & 1 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}
From the form of this matrix, we see that there are no free variables, so the solution
is unique, and because the system is homogeneous, this unique solution is the trivial
solution. So we now know that there is but one way to combine the four vectors of T
into a relation of linear dependence, and that one way is the easy and obvious way. In
this situation we say that the set, T , is linearly independent. □
The above examples relied on solving a homogeneous system of equations to determine
linear independence. We can codify this process in a time-saving theorem.
Theorem 3 (Linearly Independent Vectors and Homogeneous Systems) Suppose
that S = {v1 , v2 , v3 , . . . , vn } ⊆ Rm is a set of vectors and A is the m × n matrix whose
columns are the vectors in S. Then S is a linearly independent set if and only if the
homogeneous system LS(A, 0) has a unique solution.
Proof. (⇐) Suppose that LS(A, 0) has a unique solution. Since it is a homogeneous
system, this solution must be the trivial solution x = 0. This means that the only
relation of linear dependence on S is the trivial one. So S is linearly independent.
(⇒) We will prove the contrapositive. Suppose that LS(A, 0) does not have a unique solution. Since it is a homogeneous system, it is consistent, and so it must have infinitely many solutions (Lecture 10 Theorem 13). One of these infinitely many solutions must be nontrivial (in fact, almost all of them are), so choose one. This nontrivial solution gives a nontrivial relation of linear dependence on S, so we can conclude that S is a linearly dependent set. □
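Theorem 3 translates directly into code. The following sketch (my addition, again assuming SymPy) builds the matrix whose columns are the given vectors and checks whether every column of its reduced row-echelon form is a pivot column, i.e., whether LS(A, 0) has a unique solution.

from sympy import Matrix

def is_linearly_independent(vectors):
    """True iff the given column vectors form a linearly independent set (Theorem 3)."""
    A = Matrix.hstack(*[Matrix(v) for v in vectors])
    _, pivots = A.rref()
    # LS(A, 0) has a unique solution exactly when every column is a pivot column.
    return len(pivots) == A.cols

# The sets S and T from Examples 1 and 2, each vector written as a list:
S = [[2, -1, 3, 1, 2], [1, 2, -1, 5, 2], [2, 1, -3, 6, 1], [-6, 7, -1, 0, 1]]
T = [[2, -1, 3, 1, 2], [1, 2, -1, 5, 2], [2, 1, -3, 6, 1], [-6, 7, -1, 1, 1]]
print(is_linearly_independent(S))   # False: linearly dependent
print(is_linearly_independent(T))   # True: linearly independent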

Since the above theorem is an equivalence, we can use it to determine the linear
independence or dependence of any set of column vectors, just by creating a matrix and
analyzing the row-reduced form. Let us illustrate this with two more examples.
Example 3 Linearly independent, homogeneous system
Is the set of vectors

S = \left\{ \begin{bmatrix} 2 \\ -1 \\ 3 \\ 4 \\ 2 \end{bmatrix}, \begin{bmatrix} 6 \\ 2 \\ -1 \\ 3 \\ 4 \end{bmatrix}, \begin{bmatrix} 4 \\ 3 \\ -4 \\ 5 \\ 1 \end{bmatrix} \right\}

linearly independent or linearly dependent?
Answer. Theorem 3 suggests we study the matrix, A, whose columns are the vectors in S. Specifically, we are interested in the size of the solution set for the homogeneous system LS(A, 0), so we row-reduce A.

A = \begin{bmatrix} 2 & 6 & 4 \\ -1 & 2 & 3 \\ 3 & -1 & -4 \\ 4 & 3 & 5 \\ 2 & 4 & 1 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

Now, r = 3, so there are n − r = 3 − 3 = 0 free variables and we see that LS(A, 0) has a unique solution. By Theorem 3, the set S is linearly independent. □
Example 4 Linearly dependent, homogeneous system
Is the set of vectors

S = \left\{ \begin{bmatrix} 2 \\ -1 \\ 3 \\ 4 \\ 2 \end{bmatrix}, \begin{bmatrix} 6 \\ 2 \\ -1 \\ 3 \\ 4 \end{bmatrix}, \begin{bmatrix} 4 \\ 3 \\ -4 \\ -1 \\ 2 \end{bmatrix} \right\}

linearly independent or linearly dependent?
Answer. Theorem 3 suggests we study the matrix, A, whose columns are the vectors in S. Specifically, we are interested in the size of the solution set for the homogeneous system LS(A, 0), so we row-reduce A.

A = \begin{bmatrix} 2 & 6 & 4 \\ -1 & 2 & 3 \\ 3 & -1 & -4 \\ 4 & 3 & -1 \\ 2 & 4 & 2 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

Now, r = 2, so there is n − r = 3 − 2 = 1 free variable and we see that LS(A, 0) has infinitely many solutions. By Theorem 3, the set S is linearly dependent. □
As an equivalence, Theorem 3 gives us a straightforward way to determine if a set of
vectors is linearly independent or dependent.
Review the previous two examples. They are very similar, differing only in the last two slots of the third vector. This resulted in slightly different matrices when row-reduced, and slightly different values of r, the number of nonzero rows. Notice, too, that we are less interested in the actual solution set, and more interested in its form or size. These observations allow us to make a slight improvement in Theorem 3.
Theorem 4 (Linearly Independent Vectors, r and n) Suppose that
S = {v1 , v2 , v3 , . . . , vn } ⊆ Rm
is a set of vectors and A is the m × n matrix whose columns are the vectors in S. Let
B be a matrix in reduced row-echelon form that is row-equivalent to A and let r denote
the number of pivot columns in B. Then S is linearly independent if and only if n = r.
Proof. Theorem 3 says the linear independence of S is equivalent to the homogeneous linear system LS(A, 0) having a unique solution. Since the zero vector is a solution of LS(A, 0), the system is consistent, so we can apply Lecture 7 Theorem 3 to see that the solution is unique exactly when n = r. □
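In computational terms, Theorem 4 reduces the whole question to comparing r with n. A sketch (my addition, assuming SymPy, whose Matrix.rank() computes exactly the r above), applied to the matrices of Examples 3 and 4:

from sympy import Matrix

# Example 3: r = 3 = n, so the columns form an independent set.
A3 = Matrix([[2, 6, 4], [-1, 2, 3], [3, -1, -4], [4, 3, 5], [2, 4, 1]])
print(A3.rank() == A3.cols)   # True

# Example 4: r = 2 < 3 = n, so the columns form a dependent set.
A4 = Matrix([[2, 6, 4], [-1, 2, 3], [3, -1, -4], [4, 3, -1], [2, 4, 2]])
print(A4.rank() == A4.cols)   # False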

Here, then, is an example of the most straightforward way to determine whether a set of column vectors is linearly independent or linearly dependent. While this method can be quick and easy, do not forget the logical progression from the definition of linear independence through homogeneous systems of equations that makes it possible.
Example 5 Linearly dependent, r and n
Is the set of vectors

S = \left\{ \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 0 \\ 3 \end{bmatrix}, \begin{bmatrix} 9 \\ -6 \\ -2 \\ 3 \\ 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -3 \\ 1 \\ 4 \\ 2 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 6 \\ -2 \\ 1 \\ 4 \\ 3 \\ 2 \end{bmatrix} \right\}

linearly independent or linearly dependent?
Answer. Theorem 4 suggests we place these vectors into a matrix as columns and analyze the row-reduced version of the matrix,

\begin{bmatrix} 2 & 9 & 1 & -3 & 6 \\ -1 & -6 & 1 & 1 & -2 \\ 3 & -2 & 1 & 4 & 1 \\ 1 & 3 & 0 & 2 & 4 \\ 0 & 2 & 0 & 1 & 3 \\ 3 & 1 & 1 & 2 & 2 \end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}

Now we need only compute that r = 4 < 5 = n to recognize, via Theorem 4, that S is a linearly dependent set. Boom! □
Example 6 Large linearly dependent set in R4
Consider the set of n = 9 vectors from R4,

R = \left\{ \begin{bmatrix} -1 \\ 3 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 7 \\ 1 \\ -3 \\ 6 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ -1 \\ -2 \end{bmatrix}, \begin{bmatrix} 0 \\ 4 \\ 2 \\ 9 \end{bmatrix}, \begin{bmatrix} 5 \\ -2 \\ 4 \\ 3 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ -6 \\ 4 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ -3 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 5 \\ 3 \end{bmatrix}, \begin{bmatrix} -6 \\ -1 \\ 1 \\ 1 \end{bmatrix} \right\}.

To employ Theorem 3, we form a 4 × 9 matrix, C, whose columns are the vectors in


R  
−1 7 1 0 5 2 3 1 −6
3 1 2 4 −2 1 0 1 −1
C=  1 −3 −1 2 4 −6 −3 5 1  .

2 6 −2 9 3 4 1 3 1
To determine if the homogeneous system LS(C, 0) has a unique solution or not,
we would normally row-reduce this matrix. But in this particular example, we can do
better. Lecture 7 Theorem 6 tells us that since the system is homogeneous with n = 9
variables in m = 4 equations, and n > m, there must be infinitely many solutions. Since
there is not a unique solution, Theorem 3 says the set is linearly dependent. □
We then have the following theorem.
Theorem 5 (More Vectors than Size implies Linear Dependence) Suppose that
S = {u1 , u2 , u3 , . . . , un } ⊆ Rm and n > m. Then S is a linearly dependent set.

Proof. Form the m × n matrix A whose columns are ui , 1 ≤ i ≤ n. Consider the homogeneous system LS(A, 0). By Lecture 8 Theorem 4 this system has infinitely many solutions. Since the system does not have a unique solution, Theorem 3 says the columns of A form a linearly dependent set, as desired. □
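Theorem 5 is worth encoding as a shortcut: when n > m, no row reduction is needed at all. A sketch in plain Python (my addition, not from Beezer):

def dependent_by_size(vectors):
    """True when Theorem 5 alone forces linear dependence (more vectors than entries)."""
    n = len(vectors)        # number of vectors in the set
    m = len(vectors[0])     # the vectors live in R^m
    return n > m

# The nine vectors of R from Example 6 live in R^4, so 9 > 4 settles the question:
R_cols = [[-1, 3, 1, 2], [7, 1, -3, 6], [1, 2, -1, -2], [0, 4, 2, 9], [5, -2, 4, 3],
          [2, 1, -6, 4], [3, 0, -3, 1], [1, 1, 5, 3], [-6, -1, 1, 1]]
print(dependent_by_size(R_cols))   # True: linearly dependent by Theorem 5

Note that the test is one-way: a False answer decides nothing, and one must fall back on Theorem 3 or Theorem 4.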

2 Linear Independence and Nonsingular Matrices


We will now specialize to sets of n vectors from Rn . This will put Theorem 5 off-limits,
while Theorem 3 will involve square matrices.
Example 7 Linearly dependent columns
Consider the matrix

A = \begin{bmatrix} 1 & -1 & 2 \\ 2 & 1 & 1 \\ 1 & 1 & 0 \end{bmatrix}

Do the columns of this matrix form a linearly independent or dependent set?
Answer. We can show that A is singular. According to the definition of nonsingular matrices, the homogeneous system LS(A, 0) has infinitely many solutions. So by Theorem 3, the columns of A form a linearly dependent set. □
Example 8 Linearly independent columns
Consider the matrix

B = \begin{bmatrix} -7 & -6 & -12 \\ 5 & 5 & 7 \\ 1 & 0 & 4 \end{bmatrix}

Do the columns of this matrix form a linearly independent or dependent set?
Answer. We can show that B is nonsingular. According to the definition of nonsingular matrices, the homogeneous system LS(B, 0) has a unique solution. So by Theorem 3, the columns of B form a linearly independent set. □
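The claims "A is singular" and "B is nonsingular" in Examples 7 and 8 are easy to confirm by machine; for a square matrix a nonzero determinant is one convenient criterion. A sketch (my addition, assuming SymPy):

from sympy import Matrix

A = Matrix([[1, -1, 2], [2, 1, 1], [1, 1, 0]])
B = Matrix([[-7, -6, -12], [5, 5, 7], [1, 0, 4]])
print(A.det())   # 0  -> A is singular:    columns linearly dependent
print(B.det())   # -2 -> B is nonsingular: columns linearly independent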
That the above two examples have opposite properties for the columns of their coefficient matrices is no accident. Here is the theorem, and then we will update our equivalences for nonsingular matrices.

Theorem 6 (Nonsingular Matrices have Linearly Independent Columns) Suppose that A is a square matrix. Then A is nonsingular if and only if the columns of A form a linearly independent set.

Proof. This is a proof where we can chain together equivalences, rather than proving the two halves separately.

A nonsingular ⇐⇒ LS(A, 0) has a unique solution
              ⇐⇒ columns of A are linearly independent □
Here is the update to Lecture 8 Theorem 12.
Theorem 7 (Nonsingular Matrix Equivalences, Round 2) Suppose that A is a
square matrix. The following are equivalent.
1. A is nonsingular.

2. A row-reduces to the identity matrix.
3. The null space of A contains only the zero vector, N (A) = {0}.
4. The linear system LS(A, b) has a unique solution for every possible choice of b.
5. The columns of A form a linearly independent set.
Proof. Theorem 6 is yet another equivalence for a nonsingular matrix, so we can add it to the list in Lecture 8 Theorem 12. □

3 Null Spaces, Spans, Linear Independence


In this section, we will find a linearly independent set that spans a null space. In Lecture
10 Section 2, we proved Lecture 10 Theorem 4, which provided n − r vectors that could
be used with the span construction to build the entire null space of a matrix.
Example 9 Linear independence of null space basis
Suppose that we are interested in the null space of a 3 × 7 matrix, A, which row-reduces to

B = \begin{bmatrix} 1 & 0 & -2 & 4 & 0 & 3 & 9 \\ 0 & 1 & 5 & 6 & 0 & 7 & 1 \\ 0 & 0 & 0 & 0 & 1 & 8 & -5 \end{bmatrix}

The set F = {3, 4, 6, 7} is the set of indices for our four free variables that would be
used in a description of the solution set for the homogeneous system LS(A, 0). Applying
Lecture 10 Theorem 4, we can begin to construct a set of four vectors whose span is the null space of A, a set of vectors we will reference as T.

N(A) = \langle T \rangle = \langle \{ z_1, z_2, z_3, z_4 \} \rangle = \left\langle \left\{ \begin{bmatrix} {} \\ {} \\ 1 \\ 0 \\ {} \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} {} \\ {} \\ 0 \\ 1 \\ {} \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} {} \\ {} \\ 0 \\ 0 \\ {} \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} {} \\ {} \\ 0 \\ 0 \\ {} \\ 0 \\ 1 \end{bmatrix} \right\} \right\rangle
So far, we have constructed as much of these individual vectors as we can, based just
on the knowledge of the contents of the set F . This has allowed us to determine the
entries in slots 3, 4, 6 and 7, while we have left slots 1, 2 and 5 blank. Without doing any
more, let us ask whether T is linearly independent. Begin with a relation of linear dependence on T, and see what we can learn about the scalars,

\mathbf{0} = \alpha_1 z_1 + \alpha_2 z_2 + \alpha_3 z_3 + \alpha_4 z_4

\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \alpha_1 \begin{bmatrix} {} \\ {} \\ 1 \\ 0 \\ {} \\ 0 \\ 0 \end{bmatrix} + \alpha_2 \begin{bmatrix} {} \\ {} \\ 0 \\ 1 \\ {} \\ 0 \\ 0 \end{bmatrix} + \alpha_3 \begin{bmatrix} {} \\ {} \\ 0 \\ 0 \\ {} \\ 1 \\ 0 \end{bmatrix} + \alpha_4 \begin{bmatrix} {} \\ {} \\ 0 \\ 0 \\ {} \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} {} \\ {} \\ \alpha_1 \\ \alpha_2 \\ {} \\ \alpha_3 \\ \alpha_4 \end{bmatrix}
Applying the equalities of vectors, we see that α1 = α2 = α3 = α4 = 0. So the only
relation of linear dependence on the set T is a trivial one. By the definition of linear
independence, the set T is linearly independent. The important feature of this example
is how the pattern of zeros and ones in the four vectors led to the conclusion of linear
independence. □
The proof of Theorem 8 below is quite straightforward, and relies on the pattern of zeros and ones that arises in the vectors zi , 1 ≤ i ≤ n − r, in the entries corresponding to the locations of the non-pivot columns.
Theorem 8 (Basis for Null Spaces) Suppose that A is an m × n matrix, and B is
a row-equivalent matrix in reduced row-echelon form with r pivot columns. Let D =
{d1 , d2 , d3 , . . . , dr } and F = {f1 , f2 , f3 , . . . , fn−r } be the sets of column indices where
B does and does not (respectively) have pivot columns. Construct the n − r vectors zj ,
1 ≤ j ≤ n − r of size n as

1
 if i ∈ F , i = fj
[zj ]i = 0 if i ∈ F , i 6= fj

− [B]
k,fj if i ∈ D, i = dk

(In fact zj corresponding to the solution xfj = 1 and xfk = 0 for k 6= j.)
Define the set S = {z1 , z2 , z3 , . . . , zn−r }.Then
1. N (A) = hSi.
2. S is a linearly independent set.
Proof. Study the above example. You can skip the proof for now.
Notice first that the vectors zj , 1 ≤ j ≤ n − r, are exactly the same as the n − r vectors defined in Lecture 10 Theorem 4. Also, the hypotheses of Lecture 10 Theorem 4 are the same as the hypotheses of the theorem we are currently proving. So it is then simply the conclusion of Lecture 10 Theorem 4 that tells us that N(A) = ⟨S⟩. That was the easy half, but the second part is not much harder. What is new here is the claim that S is a linearly independent set.

To prove the linear independence of a set, we need to start with a relation of linear
dependence and somehow conclude that the scalars involved must all be zero, i.e., that
the relation of linear dependence only happens in the trivial fashion. So to establish the
linear independence of S, we start with
α1 z1 + α2 z2 + α3 z3 + · · · + αn−r zn−r = 0.
For each j, 1 ≤ j ≤ n − r, consider the equality of the individual entries of the vectors
on both sides of this equality in position fj ,
0 = [0]fj
  = [α1 z1 + α2 z2 + α3 z3 + · · · + αn−r zn−r ]fj
  = [α1 z1 ]fj + [α2 z2 ]fj + [α3 z3 ]fj + · · · + [αn−r zn−r ]fj
  = α1 [z1 ]fj + α2 [z2 ]fj + · · · + αj [zj ]fj + · · · + αn−r [zn−r ]fj
  = α1 (0) + α2 (0) + · · · + αj (1) + · · · + αn−r (0)        (definition of zj)
  = αj
So for all j, 1 ≤ j ≤ n − r, we have αj = 0, which is the conclusion that tells us that the only relation of linear dependence on S = {z1 , z2 , z3 , . . . , zn−r } is the trivial one. Hence, by the definition of linear independence, the set is linearly independent, as desired. □
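SymPy's nullspace() carries out exactly this construction, setting each free variable to 1 in turn with the others 0. A sketch (my addition) applied to the matrix B of Example 9; since B is row-equivalent to A, N(A) = N(B), so we may work with B directly:

from sympy import Matrix

B = Matrix([[1, 0, -2, 4, 0, 3,  9],
            [0, 1,  5, 6, 0, 7,  1],
            [0, 0,  0, 0, 1, 8, -5]])

for z in B.nullspace():      # the vectors z_j of Theorem 8
    print(z.T)
# Matrix([[ 2, -5, 1, 0,  0, 0, 0]])   <- note the 0/1 pattern in
# Matrix([[-4, -6, 0, 1,  0, 0, 0]])      slots 3, 4, 6, 7, which is
# Matrix([[-3, -7, 0, 0, -8, 1, 0]])      what forces the set to be
# Matrix([[-9, -1, 0, 0,  5, 0, 1]])      linearly independent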

Example 10 Find the null space of

A = \begin{bmatrix} -2 & -1 & -2 & -4 & 4 \\ -6 & -5 & -4 & -4 & 6 \\ 10 & 7 & 7 & 10 & -13 \\ -7 & -5 & -6 & -9 & 10 \\ -4 & -3 & -4 & -6 & 6 \end{bmatrix}

Answer. Row-reducing A to reduced row-echelon form,

B = \begin{bmatrix} 1 & 0 & 0 & 1 & -2 \\ 0 & 1 & 0 & -2 & 2 \\ 0 & 0 & 1 & 2 & -1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}
x4 and x5 are the free variables.
z1 corresponds to x4 = 1, x5 = 0:

z_1 = \begin{bmatrix} -1 \\ 2 \\ -2 \\ 1 \\ 0 \end{bmatrix}

z2 corresponds to x4 = 0, x5 = 1:

z_2 = \begin{bmatrix} 2 \\ -2 \\ 1 \\ 0 \\ 1 \end{bmatrix}

Hence

N(A) = \left\langle \left\{ \begin{bmatrix} -1 \\ 2 \\ -2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ -2 \\ 1 \\ 0 \\ 1 \end{bmatrix} \right\} \right\rangle
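As a machine check (my addition, again assuming SymPy), nullspace() applied to the original matrix A should return exactly z1 and z2, given the RREF computed above:

from sympy import Matrix

A = Matrix([[-2, -1, -2, -4,   4],
            [-6, -5, -4, -4,   6],
            [10,  7,  7, 10, -13],
            [-7, -5, -6, -9,  10],
            [-4, -3, -4, -6,   6]])

print(A.nullspace())
# Expected, per the hand computation:
# [Matrix([[-1], [2], [-2], [1], [0]]), Matrix([[2], [-2], [1], [0], [1]])]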


