
Week 8

5.2 Independence & Dimension


Definition:
A set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p\}$ in $\mathbb{R}^n$ is linearly independent if the vector equation
$$x_1\vec{v}_1 + x_2\vec{v}_2 + \cdots + x_p\vec{v}_p = \vec{0}$$
has just the trivial solution ($x_1 = 0, x_2 = 0, \ldots, x_p = 0$).
The set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p\}$ is linearly dependent if the above equation has non-trivial solutions.
Example 1:

If $\vec{v}_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$, $\vec{v}_2 = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}$, $\vec{v}_3 = \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}$, determine if $\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$ is a linearly independent or dependent set.
Solution

Note: The vector equation $x_1\vec{v}_1 + x_2\vec{v}_2 + x_3\vec{v}_3 = \vec{0}$ is equivalent to the matrix equation $A\vec{x} = \vec{0}$, where the vectors $\vec{v}_1, \vec{v}_2, \vec{v}_3$ are placed as the columns of a matrix $A$. Row reducing $A$ produces only two pivots, so the system $A\vec{x} = \vec{0}$ has a free variable and hence non-trivial solutions. The vectors are therefore linearly dependent.
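This can be confirmed with a short computation, e.g. using Python's sympy library:

```python
from sympy import Matrix

# The vectors v1, v2, v3 of Example 1, placed as the columns of A.
A = Matrix([[1, 4, 2],
            [2, 5, 1],
            [3, 6, 0]])

print(A.rank())       # 2: fewer pivots than columns, so a free variable exists
print(A.nullspace())  # a non-trivial solution to A x = 0, namely (2, -1, 1)
```

Indeed, $2\vec{v}_1 - \vec{v}_2 + \vec{v}_3 = \vec{0}$ is a non-trivial dependency.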
Invertible Matrix Theorem
Let $A$ be an $n \times n$ matrix. TFAE (the following are equivalent):
1. $A$ is invertible.
2. The RREF of $A$ is $I_n$.
3. $A\vec{x} = \vec{0}$ has just the trivial solution.
4. $A\vec{x} = \vec{b}$ has exactly one solution for each vector $\vec{b}$.
5. $\det(A) \neq 0$.
6. $A^T$ is invertible.
7. The columns of $A$ are linearly independent.
8. The columns of $A$ span $\mathbb{R}^n$.
9. The rows of $A$ are linearly independent.
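For a concrete instance, these conditions can all be observed at once on a small invertible matrix (an illustrative sympy check, not a proof of the theorem):

```python
from sympy import Matrix, eye

A = Matrix([[1, 4], [2, 5]])   # det = -3, so A is invertible

assert A.det() != 0
assert A.rref()[0] == eye(2)                    # RREF is the identity
assert A.nullspace() == []                      # only the trivial solution to A x = 0
assert A.columnspace() == [A.col(0), A.col(1)]  # both columns are pivot columns
```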

Linear Independence Proofs

Prove the following:
1. $\{\vec{0}\}$ is linearly dependent.

2. $\{\vec{v}\}$, $(\vec{v} \neq \vec{0})$, is linearly independent.

3. A set of two vectors $\{\vec{u}, \vec{v}\}$ is linearly dependent iff one of the vectors is a scalar multiple of the other.

4. A set $S = \{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly dependent iff at least one of the vectors can be written as a linear combination of the others.

5. If $\{\vec{x}, \vec{y}\}$ is linearly independent, then $\{2\vec{x} + 3\vec{y},\ \vec{x} - \vec{y}\}$ is linearly independent. (A worked sketch is given below.)

6. If $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly independent, and $\vec{v} \notin \text{span}\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$, then $\{\vec{v}, \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ is linearly independent.

This last result will be useful in creating a largest linearly independent set of vectors by adding
vectors that do not lie in the span of the original set.
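As an illustration of these arguments, here is a sketch of #5. Suppose
$$c_1(2\vec{x} + 3\vec{y}) + c_2(\vec{x} - \vec{y}) = \vec{0}.$$
Regrouping gives $(2c_1 + c_2)\vec{x} + (3c_1 - c_2)\vec{y} = \vec{0}$. Since $\{\vec{x}, \vec{y}\}$ is independent, both coefficients must vanish: $2c_1 + c_2 = 0$ and $3c_1 - c_2 = 0$. Adding these equations gives $5c_1 = 0$, so $c_1 = 0$ and then $c_2 = 0$. Only the trivial solution exists, so the set is linearly independent.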

Basis and Dimension:

A set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_p\}$ in $\mathbb{R}^n$ is called a basis for a subspace $W$ of $\mathbb{R}^n$ if the vectors are linearly independent and span $W$. The number of vectors in a basis for $W$ is called the dimension of $W$.
A basis is the most efficient way to describe a subspace: not too little, not too much.
It is the smallest set of vectors that still spans the subspace, and it is the largest set that is still linearly independent.
In other words, if we removed a vector, the remaining vectors wouldn't span the subspace, and if we added another vector, the resulting set would no longer be independent.
Examples:
1. The columns of the $n \times n$ identity matrix $I_n$ form a basis for $\mathbb{R}^n$. This set of vectors $\{\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n\}$ is called the standard basis for $\mathbb{R}^n$, and since $n$ vectors form a basis for $\mathbb{R}^n$, we say $\mathbb{R}^n$ has dimension $n$.

2. Any invertible $n \times n$ matrix $A$ has columns that form a basis for $\mathbb{R}^n$. What happens if we have a non-square matrix? Could its columns form a basis for $\mathbb{R}^n$?

3. Lines through the origin in $\mathbb{R}^3$ have dimension 1 since $L = \text{span}\{\vec{d}\}$, $\vec{d} \neq \vec{0}$.

4. Planes through the origin in $\mathbb{R}^3$ have dimension 2 since $P = \text{span}\{\vec{u}, \vec{v}\}$, where $\vec{u}, \vec{v} \neq \vec{0}$ and $\vec{u} \neq k\vec{v}$ for any scalar $k$.
Fundamental Theorem
If a subspace $U$ has a basis $B = \{\vec{u}_1, \vec{u}_2, \ldots, \vec{u}_m\}$, then any set of vectors in $U$ containing more than $m$ vectors must be linearly dependent.
Proof:
Let $\{\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_k\}$ be a set in $U$ with $k > m$ vectors.
Since $B$ is a basis for $U$, $B$ spans $U$ and each $\vec{w}_i \in U$ can be written as a linear combination of vectors in $B$. Thus,
$$\vec{w}_1 = a_{11}\vec{u}_1 + a_{12}\vec{u}_2 + \cdots + a_{1m}\vec{u}_m$$
$$\vec{w}_2 = a_{21}\vec{u}_1 + a_{22}\vec{u}_2 + \cdots + a_{2m}\vec{u}_m$$
$$\vdots$$
$$\vec{w}_k = a_{k1}\vec{u}_1 + a_{k2}\vec{u}_2 + \cdots + a_{km}\vec{u}_m$$
We want to show that $\{\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_k\}$ is linearly dependent, so we need to show the following equation has non-trivial solutions:
$$c_1\vec{w}_1 + c_2\vec{w}_2 + \cdots + c_k\vec{w}_k = \vec{0} \qquad (1)$$
Substituting the expressions for each $\vec{w}_i$ above, we have
$$c_1(a_{11}\vec{u}_1 + a_{12}\vec{u}_2 + \cdots + a_{1m}\vec{u}_m) + c_2(a_{21}\vec{u}_1 + a_{22}\vec{u}_2 + \cdots + a_{2m}\vec{u}_m) + \cdots + c_k(a_{k1}\vec{u}_1 + a_{k2}\vec{u}_2 + \cdots + a_{km}\vec{u}_m) = \vec{0}$$
or
$$(c_1 a_{11} + c_2 a_{21} + \cdots + c_k a_{k1})\vec{u}_1 + (c_1 a_{12} + c_2 a_{22} + \cdots + c_k a_{k2})\vec{u}_2 + \cdots + (c_1 a_{1m} + c_2 a_{2m} + \cdots + c_k a_{km})\vec{u}_m = \vec{0}$$
But $\{\vec{u}_1, \vec{u}_2, \ldots, \vec{u}_m\}$ is a basis, and so must be linearly independent. This tells us the above equation has only the trivial solution, so all of those coefficients must equal 0:
$$c_1 a_{11} + c_2 a_{21} + \cdots + c_k a_{k1} = 0$$
$$c_1 a_{12} + c_2 a_{22} + \cdots + c_k a_{k2} = 0$$
$$\vdots$$
$$c_1 a_{1m} + c_2 a_{2m} + \cdots + c_k a_{km} = 0$$
This is a homogeneous system of $m$ equations in $k$ unknowns, where $k > m$. A homogeneous system with more unknowns than equations always has non-trivial solutions, so there are non-trivial solutions for $c_1, c_2, \ldots, c_k$, and thus equation (1) has non-trivial solutions, as required.
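For a concrete instance of the theorem: $\mathbb{R}^2$ has a basis of two vectors, so any three vectors in $\mathbb{R}^2$ must be dependent. A short sympy check:

```python
from sympy import Matrix

# Three vectors in R^2, placed as columns: more vectors than dim(R^2) = 2.
M = Matrix([[1, 0, 2],
            [0, 1, 3]])

# The system M c = 0 has 3 unknowns but rank at most 2, so a
# non-trivial dependency c always exists.
print(M.nullspace())  # one basis vector: (-2, -3, 1)
```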
Theorem 2:
Let $U \neq \{\vec{0}\}$ be a subspace of $\mathbb{R}^n$. Then
(a) Any independent set of vectors in $U$ can be extended to a basis of $U$ by adding vectors to the set.
(b) If $B = \{\vec{u}_1, \vec{u}_2, \ldots, \vec{u}_k\}$ spans $U$, then $B$ can be cut down to a basis of $U$ by deleting vectors from $B$.
Proof:
(a) Let $S = \{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m\}$ be a set of independent vectors in $U$. If $S$ already spans $U$, then $S$ is a basis and we are done.
Otherwise, there exists some vector $\vec{u} \notin \text{span}\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m\}$.
If we add this $\vec{u}$ to $S$, the resulting set will still be linearly independent. (See Linear Independence Proof #6 from last day.)
We continue this process until $S$ spans $U$.
(b) If $B$ is already linearly independent, it is a basis and we are done.
Otherwise, it is dependent and at least one of the vectors in $B$ can be written as a linear combination of the others. (Linear Independence Proof #4 from last day.)
Remove this vector. The resulting set of vectors will still span $U$.
That is, if say $\vec{u}_1$ is a linear combination of $\{\vec{u}_2, \ldots, \vec{u}_k\}$, then $\text{span}\{\vec{u}_1, \vec{u}_2, \ldots, \vec{u}_k\} = \text{span}\{\vec{u}_2, \ldots, \vec{u}_k\}$ (by Theorem 1 from Week 7 - Span).
Continue this process until $B$ is linearly independent.
Theorem 3:
Let $U$ be a subspace of $\mathbb{R}^n$ with $\dim(U) = m$.
(a) Any set of $m$ linearly independent vectors in $U$ automatically spans $U$.
(b) Any set of $m$ vectors that spans $U$ is automatically linearly independent.

Proof:
(a) (By contradiction) If $m$ linearly independent vectors did not span $U$, we could add another vector not in the span of these $m$ vectors. This would give us $m + 1$ linearly independent vectors in a subspace of dimension $m$, which contradicts the Fundamental Theorem.
(b) (By contradiction) If the $m$ vectors were linearly dependent, we could remove dependent vectors until we have a basis for $U$ (Theorem 2b). We would then have fewer than $m$ vectors forming a basis for $U$, which has dimension $m$. This contradicts the definition of dimension, since every basis for $U$ must contain exactly $m$ vectors.
Theorem 4:
Let $U$ and $W$ be two subspaces of $\mathbb{R}^n$ with $U \subseteq W$.
(a) $\dim(U) \leq \dim(W)$
(b) If $\dim(U) = \dim(W)$, then $U = W$.
Proof:
Let $\dim(W) = k$ and let $B$ be a basis for $U$.
Then $U = \text{span}(B)$, so $B$ is an independent set of vectors in $U$.
(a) (By contradiction) If $\dim(U) > k$, then $B$ is an independent set with more than $k$ vectors from $U$ (and thus in $W$). This contradicts the Fundamental Theorem.
(b) If $\dim(U) = k$ ($= \dim(W)$), then $B$ (a basis for $U$) is an independent set of vectors in $U$ (and in $W$) containing $k$ vectors. Thus, $B$ automatically spans $W$ by Theorem 3a, and so $W = \text{span}(B) = U$.
Example:

Find a basis for the subspace $U$, where $U = \text{span}\left\{ \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 6 \\ 2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 16 \\ 5 \end{bmatrix} \right\}$.
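One way to organize the computation, using sympy as before (the pivot columns of the RREF pick out a basis from the spanning set):

```python
from sympy import Matrix

# The three spanning vectors of U, placed as the columns of M.
M = Matrix([[2, 6,  0],
            [2, 2, 16],
            [1, 0,  5]])

# rref() returns the reduced matrix together with the pivot column
# indices; the corresponding columns of M form a basis for U.
_, pivots = M.rref()
basis = [M.col(i) for i in pivots]
print(basis)
```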

5.4 Row Space, Column Space and Null Space of a Matrix


We will be interested in finding a basis for each of these three subspaces introduced in the last section. The row space and column space are by definition spanned by a given set of vectors, so we only need to make this set independent by removing the dependent vectors. By writing our solution to a homogeneous system $A\vec{x} = \vec{0}$ as a linear combination of basic vectors, we will have our basis for the Null Space. We will illustrate the process for finding a basis for each of these three subspaces in the following example.

Find the Row Space, Column Space and Null Space of $A = \begin{bmatrix} 1 & 2 & 2 & 1 \\ 3 & 6 & 5 & 0 \\ 1 & 2 & 1 & -2 \end{bmatrix}$.
Solution:
Recall that $\text{Row}(A) = \text{span}\{\text{rows of } A\}$ and $\text{Col}(A) = \text{span}\{\text{columns of } A\}$.
To find a basis for the row space, we already have a set of vectors that span the row space, but they may not be linearly independent. However, by performing EROs we can find the rows that are linear combinations of the others by identifying zero rows.
When we reduce $A$ above by the sequence $R_2 - 3R_1$, $R_3 - R_1$, $R_3 - R_2$, and finally $-R_2$, we get the matrix
$$R = \begin{bmatrix} 1 & 2 & 2 & 1 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
If we treat the original rows of $A$ as row vectors $\vec{r}_i$, we have $\vec{r}_3 - \vec{r}_1 - (\vec{r}_2 - 3\vec{r}_1) = \vec{0}$, or $\vec{r}_3 = \vec{r}_2 - 2\vec{r}_1$.
Since $\vec{r}_3$ can be written as a linear combination of $\vec{r}_2$ and $\vec{r}_1$, the three vectors are linearly dependent, but if we remove the vector $\vec{r}_3$, the remaining rows will be linearly independent and still span $\text{Row}(A)$.
Thus, a basis for the Row Space of $A$ is $\{[1\ 2\ 2\ 1],\ [0\ 0\ 1\ 3]\}$, and $\dim(\text{Row}(A)) = 2$.
In general we should take the rows from the reduced matrix, just in case one of our EROs was a row swap and we accidentally remove the wrong vector from our set.
To find a basis for the column space, we can again use the reduced matrix $R$ above. If we continue to reduce $R$ to RREF, we get the matrix
$$\begin{bmatrix} 1 & 2 & 0 & -5 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
This tells us that the column vectors of $A$ in columns 1 and 3 are linearly independent and that columns 2 and 4 are making the set dependent. In fact, we can see that both columns 2 and 4 can be written as linear combinations of columns 1 and 3.

A basis for the column space of $A$ is $\left\{ \begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 5 \\ 1 \end{bmatrix} \right\}$ and $\dim(\text{Col}(A)) = 2$.

Note that we took the corresponding columns from the matrix $A$, not from $R$. Columns 1 and 3 of $R$ will be linearly independent, but they do NOT span the column space. To see this, try to express any column of $A$ using only columns 1 and 3 of $R$: we will always get a 0 in the bottom entry.

Theorem:
If $A$ is any $m \times n$ matrix, then
$$\dim(\text{Row}(A)) = \dim(\text{Col}(A)) = \text{Rank}(A)$$
This is because on the left side we are counting non-zero rows in the RREF, and in the middle we are counting leading ones. These counts are the same, since each non-zero row of the RREF has exactly one leading one. In Chapter 1, we referred to the number of leading ones as the Rank of $A$.
Returning to our example, we can use the RREF of $A$ one more time to find a basis for the Null Space of $A$. Recall that the Null Space of $A$ is the set of all solutions to the homogeneous system $A\vec{x} = \vec{0}$. We know the augmented matrix in reduced form looks like
$$\left[\begin{array}{cccc|c} 1 & 2 & 0 & -5 & 0 \\ 0 & 0 & 1 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$$
We have two leading variables and two free variables. Let $x_2 = s$ and $x_4 = t$. Then the solutions to the homogeneous system are
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -2s + 5t \\ s \\ -3t \\ t \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} s + \begin{bmatrix} 5 \\ 0 \\ -3 \\ 1 \end{bmatrix} t$$
A basis for the Null Space of $A$ is $\left\{ \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 5 \\ 0 \\ -3 \\ 1 \end{bmatrix} \right\}$ and the dimension of the Null Space is 2 (the number of free variables or parameters).
Theorem:
If $A$ is an $m \times n$ matrix, then
$$\dim(\text{Null}(A)) + \text{Rank}(A) = n$$
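Both theorems can be checked on the worked example above; the following sketch uses Python's sympy library, whose rowspace, columnspace and nullspace methods return a basis for each subspace:

```python
from sympy import Matrix

A = Matrix([[1, 2, 2,  1],
            [3, 6, 5,  0],
            [1, 2, 1, -2]])

row_basis  = A.rowspace()     # a basis for Row(A)
col_basis  = A.columnspace()  # a basis for Col(A): the pivot columns of A
null_basis = A.nullspace()    # a basis for Null(A)

# dim(Row(A)) = dim(Col(A)) = Rank(A) = 2, and Rank(A) + dim(Null(A)) = n = 4.
assert len(row_basis) == len(col_basis) == A.rank() == 2
assert A.rank() + len(null_basis) == A.cols
```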
Examples
Find bases for the row space, column space and null space of the following matrices:
$$A = \begin{bmatrix} 3 & 6 & 1 & -1 & -7 \\ 1 & 2 & 2 & 3 & 1 \\ 2 & 4 & 5 & 8 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 4 & 0 & 2 & 1 \\ 3 & 12 & 1 & 5 & 5 \\ 2 & 8 & 1 & 3 & 2 \\ 5 & 20 & 2 & 8 & 8 \end{bmatrix}$$
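These can be worked by hand as in the previous example, or checked with sympy; for instance, for $B$:

```python
from sympy import Matrix

B = Matrix([[1,  4, 0, 2, 1],
            [3, 12, 1, 5, 5],
            [2,  8, 1, 3, 2],
            [5, 20, 2, 8, 8]])

rref_B, pivots = B.rref()
print(pivots)         # (0, 2, 4): pivots in columns 1, 3 and 5
print(B.nullspace())  # two basis vectors, one for each free variable
```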

Invertible Matrix Theorem

Let $A$ be an $n \times n$ matrix. TFAE:
1. $A$ is invertible.
2. The RREF of $A$ is $I_n$.
3. $A\vec{x} = \vec{0}$ has just the trivial solution.
4. $A\vec{x} = \vec{b}$ has exactly one solution for each vector $\vec{b}$.
5. $\det(A) \neq 0$.
6. $A^T$ is invertible.
7. The columns of $A$ are linearly independent.
8. The columns of $A$ span $\mathbb{R}^n$.
9. The columns of $A$ form a basis for $\mathbb{R}^n$.
10. The rows of $A$ are linearly independent.
11. The rows of $A$ span $\mathbb{R}^n$.
12. The rows of $A$ form a basis for $\mathbb{R}^n$.
13. $\text{Rank}(A) = n$.
14. $\dim(\text{Null}(A)) = 0$.

Solution to last Example

Basis for Null(A) $= \left\{ \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ -2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ -2 \\ 0 \\ 1 \end{bmatrix} \right\}$.

Basis for Col(A) $= \left\{ \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix} \right\}$.

Basis for Row(A) $= \{[1\ 2\ 0\ -1\ -3],\ [0\ 0\ 1\ 2\ 2]\}$.

Basis for Null(B) $= \left\{ \begin{bmatrix} -4 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -2 \\ 0 \\ 1 \\ 1 \\ 0 \end{bmatrix} \right\}$.

Basis for Col(B) $= \left\{ \begin{bmatrix} 1 \\ 3 \\ 2 \\ 5 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 1 \\ 5 \\ 2 \\ 8 \end{bmatrix} \right\}$.

Basis for Row(B) $= \{[1\ 4\ 0\ 2\ 0],\ [0\ 0\ 1\ -1\ 0],\ [0\ 0\ 0\ 0\ 1]\}$.
