1550 Lect 12
Warning: these notes are for reference only. They may contain typos. Read at your own risk.
The lecture is based on
Beezer, A first course in Linear algebra. Ver 3.5
Downloadable at http://linear.ups.edu/download.html.
The print version can be downloaded at
http://linear.ups.edu/download/fcla-3.50-print.pdf.
Textbook Reading:
Beezer, Ver 3.5 Section LI (print version p95 - p104)
Strang, Section 2.3
Exercises
Exercises with solutions can be downloaded at
http://linear.ups.edu/download/fcla-3.50-solution-manual.pdf
Section LI (p.40-48)
Strang, Section 2.3
To determine linear independence we first form a relation of linear dependence,
\[
\alpha_1 \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 2 \end{bmatrix}
+ \alpha_2 \begin{bmatrix} 1 \\ 2 \\ -1 \\ 5 \\ 2 \end{bmatrix}
+ \alpha_3 \begin{bmatrix} 2 \\ 1 \\ -3 \\ 6 \\ 1 \end{bmatrix}
+ \alpha_4 \begin{bmatrix} -6 \\ 7 \\ -1 \\ 0 \\ 1 \end{bmatrix}
= \mathbf{0}
\]
We know that α1 = α2 = α3 = α4 = 0 is a solution to this equation, but that is of no
interest whatsoever. That is always the case, no matter what four vectors we might have
chosen. We are curious to know if there are other, nontrivial, solutions. Row-reducing
\[
A = \begin{bmatrix} 2 & 1 & 2 & -6 \\ -1 & 2 & 1 & 7 \\ 3 & -1 & -3 & -1 \\ 1 & 5 & 6 & 0 \\ 2 & 2 & 1 & 1 \end{bmatrix}
\xrightarrow{\text{RREF}}
\begin{bmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\]
We could solve this homogeneous system completely, but for this example all we need
is one nontrivial solution. Setting the lone free variable to any nonzero value, such as
x4 = 1, yields the nontrivial solution
\[
x = \begin{bmatrix} 2 \\ -4 \\ 3 \\ 1 \end{bmatrix}
\]
Completing our application of Lecture 7 Theorem 2, we have
\[
2 \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 2 \end{bmatrix}
+ (-4) \begin{bmatrix} 1 \\ 2 \\ -1 \\ 5 \\ 2 \end{bmatrix}
+ 3 \begin{bmatrix} 2 \\ 1 \\ -3 \\ 6 \\ 1 \end{bmatrix}
+ 1 \begin{bmatrix} -6 \\ 7 \\ -1 \\ 0 \\ 1 \end{bmatrix}
= \mathbf{0}
\]
This is a relation of linear dependence on S that is not trivial, so we conclude that S
is linearly dependent.
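This computation is easy to verify by machine. A sketch using SymPy (the code and variable names are ours, not part of the notes):

```python
from sympy import Matrix

# Columns are the four vectors of S from Example 1.
A = Matrix([
    [ 2,  1,  2, -6],
    [-1,  2,  1,  7],
    [ 3, -1, -3, -1],
    [ 1,  5,  6,  0],
    [ 2,  2,  1,  1],
])

# Row-reduce: the pivot columns are 0, 1, 2, so x4 is the lone free variable.
R, pivots = A.rref()
print(pivots)  # (0, 1, 2)

# The nontrivial solution found by setting the free variable x4 = 1.
x = Matrix([2, -4, 3, 1])
print(A * x)   # the zero vector: a nontrivial relation of linear dependence
```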
Example 2 Linearly independent set in R5
Here T is the set of four vectors appearing as the columns of the matrix B below; begin with a relation of linear dependence α1 v1 + α2 v2 + α3 v3 + α4 v4 = 0. We know that α1 = α2 = α3 = α4 = 0 is a solution to this equation, but that is of no interest whatsoever. Row-reducing the matrix of the system,
\[
B = \begin{bmatrix} 2 & 1 & 2 & -6 \\ -1 & 2 & 1 & 7 \\ 3 & -1 & -3 & -1 \\ 1 & 5 & 6 & 1 \\ 2 & 2 & 1 & 1 \end{bmatrix}
\xrightarrow{\text{RREF}}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\]
From the form of this matrix, we see that there are no free variables, so the solution
is unique, and because the system is homogeneous, this unique solution is the trivial
solution. So we now know that there is but one way to combine the four vectors of T
into a relation of linear dependence, and that one way is the easy and obvious way. In
this situation we say that the set, T , is linearly independent.
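The same check for Example 2, again as a SymPy sketch (not part of the original notes): the homogeneous system for B has no free variables, so its null space is trivial.

```python
from sympy import Matrix

# Columns are the four vectors of T from Example 2; they differ from
# Example 1 only in the fourth entry of the last vector (1 instead of 0).
B = Matrix([
    [ 2,  1,  2, -6],
    [-1,  2,  1,  7],
    [ 3, -1, -3, -1],
    [ 1,  5,  6,  1],
    [ 2,  2,  1,  1],
])

# Every column is a pivot column ...
print(B.rref()[1])    # (0, 1, 2, 3)
# ... so the homogeneous system has only the trivial solution.
print(B.nullspace())  # []
```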
The above examples relied on solving a homogeneous system of equations to determine
linear independence. We can codify this process in a time-saving theorem.
Theorem 3 (Linearly Independent Vectors and Homogeneous Systems) Suppose
that S = {v1 , v2 , v3 , . . . , vn } ⊆ Rm is a set of vectors and A is the m × n matrix whose
columns are the vectors in S. Then S is a linearly independent set if and only if the
homogeneous system LS(A, 0) has a unique solution.
Proof. (⇐) Suppose that LS(A, 0) has a unique solution. Since it is a homogeneous
system, this solution must be the trivial solution x = 0. This means that the only
relation of linear dependence on S is the trivial one. So S is linearly independent.
(⇒) We will prove the contrapositive. Suppose that LS(A, 0) does not have a unique
solution. Since it is a homogeneous system, it is consistent. And so must have infinitely
many solutions (Lecture 10 Theorem 13). One of these infinitely many solutions must
be nontrivial (in fact, almost all of them are), so choose one. This nontrivial solution
will give a nontrivial relation of linear dependence on S, so we can conclude that S is a
linearly dependent set.
Since the above theorem is an equivalence, we can use it to determine the linear
independence or dependence of any set of column vectors, just by creating a matrix and
analyzing the row-reduced form. Let us illustrate this with two more examples.
Example 3 Linearly independent, homogeneous system
Is the set of vectors
\[
S = \left\{ \begin{bmatrix} 2 \\ -1 \\ 3 \\ 4 \\ 2 \end{bmatrix}, \begin{bmatrix} 6 \\ 2 \\ -1 \\ 3 \\ 4 \end{bmatrix}, \begin{bmatrix} 4 \\ 3 \\ -4 \\ 5 \\ 1 \end{bmatrix} \right\}
\]
linearly independent or linearly dependent?
Answer. The above theorem suggests we study the matrix, A, whose columns are
the vectors in S. Specifically, we are interested in the size of the solution set for the
homogeneous system LS(A, 0), so we row-reduce A.
\[
A = \begin{bmatrix} 2 & 6 & 4 \\ -1 & 2 & 3 \\ 3 & -1 & -4 \\ 4 & 3 & 5 \\ 2 & 4 & 1 \end{bmatrix}
\xrightarrow{\text{RREF}}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
Now, r = 3, so there are n − r = 3 − 3 = 0 free variables and we see that LS(A, 0) has a unique solution. By the above theorem, the set S is linearly independent.
Example 4 Linearly dependent, homogeneous system
Is the set of vectors
\[
S = \left\{ \begin{bmatrix} 2 \\ -1 \\ 3 \\ 4 \\ 2 \end{bmatrix}, \begin{bmatrix} 6 \\ 2 \\ -1 \\ 3 \\ 4 \end{bmatrix}, \begin{bmatrix} 4 \\ 3 \\ -4 \\ -1 \\ 2 \end{bmatrix} \right\}
\]
linearly independent or linearly dependent?
Answer. Theorem 3 suggests we study the matrix, A, whose columns are the vectors
in S. Specifically, we are interested in the size of the solution set for the homogeneous
system LS(A, 0), so we row-reduce A.
\[
A = \begin{bmatrix} 2 & 6 & 4 \\ -1 & 2 & 3 \\ 3 & -1 & -4 \\ 4 & 3 & -1 \\ 2 & 4 & 2 \end{bmatrix}
\xrightarrow{\text{RREF}}
\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
Now, r = 2, so there is n − r = 3 − 2 = 1 free variable and we see that LS(A, 0) has infinitely many solutions. By Theorem 3, the set S is linearly dependent.
As an equivalence, Theorem 3 gives us a straightforward way to determine if a set of
vectors is linearly independent or dependent.
Review the previous two examples. They are very similar, differing only in the last
two slots of the third vector. This resulted in slightly different matrices when row-
reduced, and slightly different values of r, the number of nonzero rows. Notice, too, that
we are less interested in the actual solution set, and more interested in its form or size.
These observations allow us to make a slight improvement in Theorem 3.
Theorem 4 (Linearly Independent Vectors, r and n) Suppose that
S = {v1 , v2 , v3 , . . . , vn } ⊆ Rm
is a set of vectors and A is the m × n matrix whose columns are the vectors in S. Let
B be a matrix in reduced row-echelon form that is row-equivalent to A and let r denote
the number of pivot columns in B. Then S is linearly independent if and only if n = r.
Proof. Theorem 3 says the linear independence of S is equivalent to the homogeneous
linear system LS(A, 0) having a unique solution. Since the zero vector is a solution of LS(A, 0), the system is consistent, so we can apply Lecture 7 Theorem 3 to see that the solution is unique exactly when n = r.
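Theorem 4 reduces the whole test to comparing r with n. A sketch in SymPy (the helper function name is ours, not from the text):

```python
from sympy import Matrix

def is_linearly_independent(vectors):
    """Theorem 4: the vectors are independent iff r = n, where r is the
    number of pivot columns of the matrix whose columns they form."""
    A = Matrix.hstack(*[Matrix(v) for v in vectors])
    r = len(A.rref()[1])   # r = number of pivot columns
    return r == A.cols     # n = number of vectors

# The sets from Examples 3 and 4 differ only in the last vector.
print(is_linearly_independent([[2, -1, 3, 4, 2], [6, 2, -1, 3, 4], [4, 3, -4, 5, 1]]))   # True
print(is_linearly_independent([[2, -1, 3, 4, 2], [6, 2, -1, 3, 4], [4, 3, -4, -1, 2]]))  # False
```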
So now here is an example of the most straightforward way to determine if a set of
column vectors is linearly independent or linearly dependent. While this method can
be quick and easy, do not forget the logical progression from the definition of linear
independence through homogeneous systems of equations that makes it possible.
Example 5 Linearly dependent, r and n
Is the set of vectors
\[
S = \left\{ \begin{bmatrix} 2 \\ -1 \\ 3 \\ 1 \\ 0 \\ 3 \end{bmatrix}, \begin{bmatrix} 9 \\ -6 \\ -2 \\ 3 \\ 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -3 \\ 1 \\ 4 \\ 2 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 6 \\ -2 \\ 1 \\ 4 \\ 3 \\ 2 \end{bmatrix} \right\}
\]
linearly independent or linearly dependent?
Answer. Theorem 4 suggests we place these vectors into a matrix as columns and analyze the row-reduced version of the matrix,
\[
\begin{bmatrix} 2 & 9 & 1 & -3 & 6 \\ -1 & -6 & 1 & 1 & -2 \\ 3 & -2 & 1 & 4 & 1 \\ 1 & 3 & 0 & 2 & 4 \\ 0 & 2 & 0 & 1 & 3 \\ 3 & 1 & 1 & 2 & 2 \end{bmatrix}
\xrightarrow{\text{RREF}}
\begin{bmatrix} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}
\]
Now we need only compute that r = 4 < 5 = n to recognize, via Theorem 4, that S is a linearly dependent set. Boom!
Example 6 Large linearly dependent set in R4
Consider the set of n = 9 vectors from R4 ,
\[
R = \left\{ \begin{bmatrix} -1 \\ 3 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 7 \\ 1 \\ -3 \\ 6 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ -1 \\ -2 \end{bmatrix}, \begin{bmatrix} 0 \\ 4 \\ 2 \\ 9 \end{bmatrix}, \begin{bmatrix} 5 \\ -2 \\ 4 \\ 3 \end{bmatrix}, \begin{bmatrix} 2 \\ 1 \\ -6 \\ 4 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ -3 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 5 \\ 3 \end{bmatrix}, \begin{bmatrix} -6 \\ -1 \\ 1 \\ 1 \end{bmatrix} \right\}.
\]
Since these are n = 9 vectors from R4 and 9 > 4, the following theorem tells us that R must be a linearly dependent set.
Theorem 5 (More Vectors than Size implies Linear Dependence) Suppose that S = {u1 , u2 , u3 , . . . , un } ⊆ Rm is a set of vectors and n > m. Then S is linearly dependent.
Proof. Form the m × n matrix A whose columns are ui , 1 ≤ i ≤ n. Consider the ho-
mogeneous system LS(A, 0). By Lecture 8 Theorem 4 this system has infinitely many
solutions. Since the system does not have a unique solution, Theorem 3 says the columns
of A form a linearly dependent set, as desired.
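This argument can be sanity-checked on the vectors of Example 6 (a SymPy sketch, not part of the original notes): r can never exceed the number of rows m = 4, so with n = 9 columns the homogeneous system must have free variables.

```python
from sympy import Matrix

# The nine vectors of R from Example 6 as columns of a 4 x 9 matrix.
A = Matrix([
    [-1,  7,  1, 0,  5,  2,  3, 1, -6],
    [ 3,  1,  2, 4, -2,  1,  0, 1, -1],
    [ 1, -3, -1, 2,  4, -6, -3, 5,  1],
    [ 2,  6, -2, 9,  3,  4,  1, 3,  1],
])

# r <= m = 4 < 9 = n, so LS(A, 0) has free variables: R is linearly dependent.
r = len(A.rref()[1])
print(r <= 4 and r < A.cols)  # True
```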
Theorem 6 (Nonsingular Matrices have Linearly Independent Columns) Suppose that A is a square matrix. Then A is nonsingular if and only if the columns of A form a linearly independent set.
Proof. This is a proof where we can chain together equivalences, rather than proving the two halves separately.
Theorem 7 (Nonsingular Matrix Equivalences) Suppose that A is a square matrix. The following conditions are equivalent.
1. A is a nonsingular matrix.
2. A row-reduces to the identity matrix.
3. The null space of A contains only the zero vector, N (A) = {0}.
4. The linear system LS(A, b) has a unique solution for every possible choice of b.
5. The columns of A form a linearly independent set.
Proof. The above theorem is yet another equivalence for a nonsingular matrix, so we
can add it to the list in Lecture 8 Theorem 12.
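These equivalences can be spot-checked computationally. A sketch using SymPy on a small nonsingular matrix of our own choosing (not from the notes):

```python
from sympy import Matrix, eye

# A hypothetical 3 x 3 nonsingular matrix chosen for illustration.
A = Matrix([[1, 2, 0],
            [0, 1, 3],
            [1, 0, 1]])

assert A.det() != 0               # A is nonsingular
assert A.rref()[0] == eye(3)      # A row-reduces to the identity matrix
assert A.nullspace() == []        # N(A) = {0}
b = Matrix([1, 2, 3])
assert A.solve(b) is not None     # LS(A, b) has a (unique) solution
assert len(A.rref()[1]) == A.cols # columns linearly independent (r = n)
print("all equivalences hold")
```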
The set F = {3, 4, 6, 7} is the set of indices for our four free variables that would be
used in a description of the solution set for the homogeneous system LS(A, 0). Applying
Lecture 8 Theorem 4 we can begin to construct a set of four vectors whose span is the
null space of A, a set of vectors we will reference as T .
\[
N(A) = \langle T \rangle = \langle \{ z_1, z_2, z_3, z_4 \} \rangle = \left\langle \left\{ \begin{bmatrix} \ast \\ \ast \\ 1 \\ 0 \\ \ast \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \ast \\ \ast \\ 0 \\ 1 \\ \ast \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \ast \\ \ast \\ 0 \\ 0 \\ \ast \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} \ast \\ \ast \\ 0 \\ 0 \\ \ast \\ 0 \\ 1 \end{bmatrix} \right\} \right\rangle
\]
(entries marked ∗ are not yet determined)
So far, we have constructed as much of these individual vectors as we can, based just
on the knowledge of the contents of the set F . This has allowed us to determine the
entries in slots 3, 4, 6 and 7, while we have left slots 1, 2 and 5 blank. Without doing any
more, let us ask whether T is linearly independent. Begin with a relation of linear dependence
on T , and see what we can learn about the scalars,
\[
\mathbf{0} = \alpha_1 z_1 + \alpha_2 z_2 + \alpha_3 z_3 + \alpha_4 z_4
= \alpha_1 \begin{bmatrix} \ast \\ \ast \\ 1 \\ 0 \\ \ast \\ 0 \\ 0 \end{bmatrix}
+ \alpha_2 \begin{bmatrix} \ast \\ \ast \\ 0 \\ 1 \\ \ast \\ 0 \\ 0 \end{bmatrix}
+ \alpha_3 \begin{bmatrix} \ast \\ \ast \\ 0 \\ 0 \\ \ast \\ 1 \\ 0 \end{bmatrix}
+ \alpha_4 \begin{bmatrix} \ast \\ \ast \\ 0 \\ 0 \\ \ast \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} \ast \\ \ast \\ \alpha_1 \\ \alpha_2 \\ \ast \\ \alpha_3 \\ \alpha_4 \end{bmatrix}
\]
Applying the equalities of vectors, we see that α1 = α2 = α3 = α4 = 0. So the only
relation of linear dependence on the set T is a trivial one. By the definition of linear
independence, the set T is linearly independent. The important feature of this example
is how the pattern of zeros and ones in the four vectors led to the conclusion of linear
independence.
The proof of Theorem 8 is really quite straightforward, and relies on the pattern of zeros and ones that arises in the entries of the vectors zi , 1 ≤ i ≤ n − r, at the locations of the non-pivot columns.
Theorem 8 (Basis for Null Spaces) Suppose that A is an m × n matrix, and B is
a row-equivalent matrix in reduced row-echelon form with r pivot columns. Let D =
{d1 , d2 , d3 , . . . , dr } and F = {f1 , f2 , f3 , . . . , fn−r } be the sets of column indices where
B does and does not (respectively) have pivot columns. Construct the n − r vectors zj ,
1 ≤ j ≤ n − r of size n as
\[
[z_j]_i = \begin{cases}
1 & \text{if } i \in F,\ i = f_j \\
0 & \text{if } i \in F,\ i \neq f_j \\
-[B]_{k, f_j} & \text{if } i \in D,\ i = d_k
\end{cases}
\]
(In fact, zj corresponds to the solution xfj = 1 and xfk = 0 for k ≠ j.)
Define the set S = {z1 , z2 , z3 , . . . , zn−r }. Then
1. N (A) = ⟨S⟩.
2. S is a linearly independent set.
Proof. Study the above example. You can skip the proof for now.
Notice first that the vectors zj , 1 ≤ j ≤ n − r are exactly the same as the n − r vectors
defined in Lecture 10 Theorem 4. Also, the hypotheses of Lecture 10 Theorem 4 are the
same as the hypotheses of the theorem we are currently proving. So it is then simply
the conclusion of Lecture 8 Theorem 4 that tells us that N (A) = ⟨S⟩. That was the
easy half, but the second part is not much harder. What is new here is the claim that
S is a linearly independent set.
To prove the linear independence of a set, we need to start with a relation of linear
dependence and somehow conclude that the scalars involved must all be zero, i.e., that
the relation of linear dependence only happens in the trivial fashion. So to establish the
linear independence of S, we start with
α1 z1 + α2 z2 + α3 z3 + · · · + αn−r zn−r = 0.
For each j, 1 ≤ j ≤ n − r, consider the equality of the individual entries of the vectors
on both sides of this equality in position fj ,
\begin{align*}
0 &= [\mathbf{0}]_{f_j} \\
&= [\alpha_1 z_1 + \alpha_2 z_2 + \alpha_3 z_3 + \cdots + \alpha_{n-r} z_{n-r}]_{f_j} \\
&= [\alpha_1 z_1]_{f_j} + [\alpha_2 z_2]_{f_j} + [\alpha_3 z_3]_{f_j} + \cdots + [\alpha_{n-r} z_{n-r}]_{f_j} \\
&= \alpha_1 [z_1]_{f_j} + \cdots + \alpha_{j-1} [z_{j-1}]_{f_j} + \alpha_j [z_j]_{f_j} + \alpha_{j+1} [z_{j+1}]_{f_j} + \cdots + \alpha_{n-r} [z_{n-r}]_{f_j} \\
&= \alpha_1 (0) + \cdots + \alpha_{j-1} (0) + \alpha_j (1) + \alpha_{j+1} (0) + \cdots + \alpha_{n-r} (0) && \text{definition of } z_j \\
&= \alpha_j
\end{align*}
So for all j, 1 ≤ j ≤ n − r, we have αj = 0, which is the conclusion that tells us
that the only relation of linear dependence on S = {z1 , z2 , z3 , . . . , zn−r } is the trivial
one. Hence, by the definition of linear independence, the set is linearly independent, as
desired.
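Theorem 8's construction is exactly what SymPy's nullspace() produces. A sketch on a small hypothetical matrix of our own (not from the notes) makes the zero-one pattern visible:

```python
from sympy import Matrix

# A hypothetical matrix already in reduced row-echelon form, with
# pivot columns D = {1, 2} and non-pivot columns F = {3, 4}.
B = Matrix([[1, 0, 2,  3],
            [0, 1, 1, -1]])

# Theorem 8's vectors z_j: negated entries of the free columns in the
# pivot slots, and the zero-one identity pattern in the free slots.
z1, z2 = B.nullspace()
print(z1.T)  # slot 3 is 1, slot 4 is 0
print(z2.T)  # slot 3 is 0, slot 4 is 1

# Both vectors really lie in the null space of B.
assert B * z1 == Matrix([0, 0]) and B * z2 == Matrix([0, 0])
```

The identity pattern in slots 3 and 4 is what makes {z1, z2} linearly independent, just as in the proof above.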
The vector z2 corresponds to the solution with x4 = 0, x5 = 1:
\[
z_2 = \begin{bmatrix} 2 \\ -2 \\ 1 \\ 0 \\ 1 \end{bmatrix}
\]
Hence
\[
N(L) = \left\langle \left\{ \begin{bmatrix} -1 \\ 2 \\ -2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ -2 \\ 1 \\ 0 \\ 1 \end{bmatrix} \right\} \right\rangle
\]
0 1