
l10 - Linear Algebra - Matrix Spaces

The document discusses the row space, column space, and nullspace of a matrix A. The row space is the vector space spanned by the row vectors of A. The column space is the vector space spanned by the column vectors of A. The nullspace is the vector space of solutions to the homogeneous system Ax = 0. Matrices that are row equivalent have the same row space. The nonzero row vectors of a matrix in row echelon form form a basis for the row space. Examples are provided to demonstrate finding bases for row spaces, column spaces, and nullspaces.

Uploaded by

Harshini M

4.6 Rank of a Matrix and Systems of Linear Equations

 In this section, three vector spaces associated with a matrix A are investigated
 Row space: the vector space spanned by the row vectors of a matrix A
 Column space: the vector space spanned by the column vectors of a matrix A
 Nullspace: the vector space consisting of all solutions of the homogeneous system of linear equations Ax = 0
 Next, the basis and the dimension of each vector space are discussed
 Finally, the relationship between the solutions of Ax = b and Ax = 0 will be discussed
 To begin, the notation below is used for the row and column vectors of an m×n matrix A
 row vectors: (with size 1×n) row vectors of A

$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}
=\begin{bmatrix} A_{(1)} \\ A_{(2)} \\ \vdots \\ A_{(m)} \end{bmatrix},\qquad
A_{(i)}=(a_{i1}\ a_{i2}\ \cdots\ a_{in})$$
※ So, the row vectors are vectors in Rn
 column vectors: (with size m×1) column vectors of A

$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}
=\begin{bmatrix} A^{(1)} & A^{(2)} & \cdots & A^{(n)} \end{bmatrix},\qquad
A^{(j)}=\begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{bmatrix}$$
※ So, the column vectors are vectors in Rm
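As a quick illustration of this notation (a hypothetical 2×3 example, not from the slides), when a matrix is stored as a list of rows, the row vectors are the rows themselves while the column vectors must be read down each column:

```python
# A hypothetical 2x3 matrix (m = 2, n = 3), stored as a list of rows.
A = [[1, 2, 3],
     [4, 5, 6]]

# Row vectors A_(1), A_(2): each is a vector in R^n (here, R^3).
rows = [tuple(r) for r in A]

# Column vectors A^(1), A^(2), A^(3): each is a vector in R^m (here, R^2).
cols = [tuple(row[j] for row in A) for j in range(len(A[0]))]

print(rows)  # [(1, 2, 3), (4, 5, 6)]
print(cols)  # [(1, 4), (2, 5), (3, 6)]
```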
Let A be an m×n matrix:
 Row space of A is a subspace of Rn spanned by the row vectors of A:

$$RS(A)=\{\beta_1 A_{(1)}+\beta_2 A_{(2)}+\cdots+\beta_m A_{(m)} \mid \beta_1,\beta_2,\dots,\beta_m\in R\}$$


(If A(1), A(2), …, A(m) are linearly independent, A(1), A(2), …, A(m) can form a basis for RS(A))

 Column space of A is a subspace of Rm spanned by the column vectors of A:

$$CS(A)=\{\alpha_1 A^{(1)}+\alpha_2 A^{(2)}+\cdots+\alpha_n A^{(n)} \mid \alpha_1,\alpha_2,\dots,\alpha_n\in R\}$$


(If A(1), A(2), …, A(n) are linearly independent, A(1), A(2), …, A(n) can form a basis for CS(A))

 Notes:
(1) The definitions of RS(A) and CS(A) satisfy automatically the closure conditions of
vector addition and scalar multiplication
(2) dim(RS(A)) (or dim(CS(A))) equals the number of linearly independent row (or column) vectors of A
 In Ex 5 of Section 4.4, S = {(1, 2, 3), (0, 1, 2), (–2, 0, 1)} spans R3. Use these vectors as row vectors to construct A

$$A=\begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ -2 & 0 & 1 \end{bmatrix} \Rightarrow RS(A)=R^3$$

(Since (1, 2, 3), (0, 1, 2), (–2, 0, 1) are linearly independent, they can form a basis for RS(A))

 Since S1 = {(1, 2, 3), (0, 1, 2), (–2, 0, 1), (1, 0, 0)} also spans R3,

$$A_1=\begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ -2 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \Rightarrow RS(A_1)=R^3$$

(Since (1, 2, 3), (0, 1, 2), (–2, 0, 1), (1, 0, 0) are not linearly independent, they cannot form a basis for RS(A1))
 Notes: dim(RS(A)) = 3 and dim(RS(A1)) = 3
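The claim dim(RS(A)) = dim(RS(A1)) = 3 can be checked numerically. Below is a minimal sketch (not part of the original slides; `rref_rank` is a helper name of my choosing) that computes the rank as the number of nonzero rows after Gauss-Jordan elimination, using exact rational arithmetic:

```python
from fractions import Fraction

def rref_rank(M):
    """Rank = number of nonzero rows in the reduced row-echelon form,
    computed with exact Fractions to avoid floating-point error."""
    M = [[Fraction(x) for x in row] for row in M]
    n_rows, n_cols = len(M), len(M[0])
    r = 0
    for c in range(n_cols):
        # Find a pivot in column c at or below row r.
        pivot = next((i for i in range(r, n_rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        # Eliminate column c from all other rows.
        for i in range(n_rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == n_rows:
            break
    return r

A  = [[1, 2, 3], [0, 1, 2], [-2, 0, 1]]
A1 = [[1, 2, 3], [0, 1, 2], [-2, 0, 1], [1, 0, 0]]
print(rref_rank(A), rref_rank(A1))  # 3 3
```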

Theorem 4.13: Row-equivalent matrices have the same row space
If an m×n matrix A is row equivalent to an m×n matrix B,
then the row space of A is equal to the row space of B

Pf:
(1) Since B can be obtained from A by elementary row operations, the row vectors of B can be expressed as linear combinations of the row vectors of A ⇒ linear combinations of row vectors in B must be linear combinations of row vectors in A ⇒ any vector in RS(B) lies in RS(A) ⇒ RS(B) ⊆ RS(A)

(2) Since A can be obtained from B by elementary row operations, the row vectors of A can be written as linear combinations of the row vectors of B ⇒ linear combinations of row vectors in A must be linear combinations of row vectors in B ⇒ any vector in RS(A) lies in RS(B) ⇒ RS(A) ⊆ RS(B)

⇒ RS(A) = RS(B)
• Notes:
(1) The row space of a matrix is not changed by elementary
row operations
RS(r(A)) = RS(A), where r is any elementary row operation
(2) But elementary row operations will change the column space

Theorem 4.14: Basis for the row space of a matrix
If a matrix A is row equivalent to a matrix B in the (reduced) row-echelon form, then
the nonzero row vectors of B form a basis for the row space of A
1. The row space of A is the same as the row space of B (Thm. 4.13), and it is spanned by the row vectors of B
2. For the row space of B, it can be constructed by the linear combinations of only nonzero row
vectors since it is impossible to generate more combinations when taking zero row vectors into
consideration (i.e., nonzero row vectors span the row space of B)
3. Since it is impossible to express a nonzero row vector as the linear combination of other nonzero
row vectors in a row-echelon form matrix (see Ex. 2 on the next slide), according to Thm. 4.8, we
can conclude that the nonzero row vectors in B are linearly independent
4. As a consequence, since the nonzero row vectors in B are linearly independent and span the row
space of B, according to the definition on Slide 4.57, they form a basis for the row space of B and
for the row space of A as well
 Ex 2: Finding a basis for a row space

Find a basis of the row space of

$$A=\begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ -3 & 0 & 6 & -1 \\ 3 & 4 & -2 & 1 \\ 2 & 0 & -4 & -2 \end{bmatrix}$$

Sol:

$$A=\begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ -3 & 0 & 6 & -1 \\ 3 & 4 & -2 & 1 \\ 2 & 0 & -4 & -2 \end{bmatrix}
\xrightarrow{\text{G. E.}}
B=\begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

(the nonzero rows of B are w1, w2, w3; the columns of A are denoted a1, a2, a3, a4 and the columns of B are b1, b2, b3, b4)

a basis for RS(A) = {the nonzero row vectors of B} (Thm 4.14)
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}

(Check: w1, w2, w3 are linearly independent, i.e., aw1 + bw2 + cw3 = 0 has only the trivial solution, or equivalently it is impossible to express any one of them as a linear combination of the others (Theorem 4.8))
• Notes:
Although row operations can change the column space of a matrix (mentioned
in Slide 4.77), they do not change the dependency relationships among
columns

(1) {b1, b2, b4} is L.I. (because these columns contain the leading 1's)
⇒ {a1, a2, a4} is L.I.
(2) b3 = −2b1 + b2 ⇒ a3 = −2a1 + a2
(The linear combination relationships among column vectors in B still hold for column vectors in A)
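The preserved dependency among the columns of A from Ex 2 can be verified by direct arithmetic (a quick check, not part of the slides):

```python
# Columns a1, a2, a3 of the 5x4 matrix A in Ex 2, read top to bottom.
a1 = [1, 0, -3, 3, 2]
a2 = [3, 1, 0, 4, 0]
a3 = [1, 1, 6, -2, -4]

# The relationship b3 = -2*b1 + b2 found in the echelon form B
# carries over to the columns of A:
combo = [-2 * x + y for x, y in zip(a1, a2)]
print(combo == a3)  # True
```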
• Ex 3: Finding a basis for a subspace using Thm. 4.14
Find a basis for the subspace of R3 spanned by
S = {(1, −2, −5), (3, 0, 3), (5, 1, 8)} = {v1, v2, v3}
Sol:

$$A=\begin{bmatrix} 1 & -2 & -5 \\ 3 & 0 & 3 \\ 5 & 1 & 8 \end{bmatrix}
\xrightarrow{\text{G. E.}}
B=\begin{bmatrix} 1 & -2 & -5 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix}$$

(the rows of A are v1, v2, v3 and the nonzero rows of B are w1, w2)
(Construct A such that RS(A) = span(S))

a basis for span({v1, v2, v3})
= a basis for RS(A)
= {the nonzero row vectors of B} (Thm 4.14)
= {w1, w2}
= {(1, –2, –5), (0, 1, 3)}
• Ex 4: Finding a basis for the column space of a matrix
Find a basis for the column space of the matrix A given in Ex 2

$$A=\begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ -3 & 0 & 6 & -1 \\ 3 & 4 & -2 & 1 \\ 2 & 0 & -4 & -2 \end{bmatrix}$$

Sol. 1:
 Since CS(A) = RS(AT), finding a basis for the column space of A is equivalent to finding a basis for the row space of AT

$$A^{T}=\begin{bmatrix} 1 & 0 & -3 & 3 & 2 \\ 3 & 1 & 0 & 4 & 0 \\ 1 & 1 & 6 & -2 & -4 \\ 3 & 0 & -1 & 1 & -2 \end{bmatrix}
\xrightarrow{\text{G. E.}}
B=\begin{bmatrix} 1 & 0 & -3 & 3 & 2 \\ 0 & 1 & 9 & -5 & -6 \\ 0 & 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

(the nonzero rows of B are w1, w2, w3)

⇒ a basis for CS(A)
= a basis for RS(AT)
= {the nonzero row vectors of B}
= {w1, w2, w3}
= {(1, 0, −3, 3, 2)T, (0, 1, 9, −5, −6)T, (0, 0, 1, −1, −1)T} (a basis for the column space of A)

Sol. 2:

$$A=\begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ -3 & 0 & 6 & -1 \\ 3 & 4 & -2 & 1 \\ 2 & 0 & -4 & -2 \end{bmatrix}
\xrightarrow{\text{G. E.}}
B=\begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

(the columns of A are a1, a2, a3, a4 and the columns of B are b1, b2, b3, b4)

Leading 1's ⇒ {b1, b2, b4} is a basis for CS(B) (not for CS(A))
⇒ {a1, a2, a4} is a basis for CS(A)
※ This method uses the fact that B has the same dependency relationships among columns as A (mentioned on Slides 4.77 and 4.79), which does NOT mean CS(B) = CS(A)

 Notes:
The bases for the column space derived from Sol. 1 and Sol. 2 are different.
However, both these bases span the same CS(A), which is a subspace of R5
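That the two bases from Sol. 1 and Sol. 2 span the same 3-dimensional subspace CS(A) of R5 can be confirmed numerically: stacking all six basis vectors as rows still gives rank 3, so neither basis adds a direction outside the other's span. A sketch, not part of the slides (`rank` is a helper name of my choosing):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gauss-Jordan elimination with exact Fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

basis_sol1 = [[1, 0, -3, 3, 2], [0, 1, 9, -5, -6], [0, 0, 1, -1, -1]]
basis_sol2 = [[1, 0, -3, 3, 2], [3, 1, 0, 4, 0], [3, 0, -1, 1, -2]]  # a1, a2, a4

# Each set has rank 3 (linearly independent), and the union is still rank 3,
# so both sets span the same 3-dimensional column space.
print(rank(basis_sol1), rank(basis_sol2), rank(basis_sol1 + basis_sol2))  # 3 3 3
```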

Theorem 4.16: The definition of the nullspace
If A is an m×n matrix, then the set of all solutions of the homogeneous system of linear equations Ax = 0 is a subspace of Rn called the nullspace of A, which is denoted as

NS(A) = {x ∈ Rn | Ax = 0}

Pf:
NS(A) ⊆ Rn
NS(A) is not empty (since A0 = 0)
Let x1, x2 ∈ NS(A) (i.e., Ax1 = 0 and Ax2 = 0)
Then (1) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0 (closure under addition)
(2) A(cx1) = c(Ax1) = c(0) = 0 (closure under scalar multiplication)

Thus NS(A) is a subspace of Rn

Notes: The nullspace of A is also called the solution space of the homogeneous
system Ax = 0
 Ex 6: Finding the solution space (or the nullspace) of a homogeneous system with the coefficient matrix

$$A=\begin{bmatrix} 1 & 2 & -2 & 1 \\ 3 & 6 & -5 & 4 \\ 1 & 2 & 0 & 3 \end{bmatrix}$$

Sol: The nullspace of A is the solution space of Ax = 0

$$\left[\begin{array}{cccc|c} 1 & 2 & -2 & 1 & 0 \\ 3 & 6 & -5 & 4 & 0 \\ 1 & 2 & 0 & 3 & 0 \end{array}\right]
\xrightarrow{\text{G.-J. E.}}
\left[\begin{array}{cccc|c} 1 & 2 & 0 & 3 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$$

⇒ x1 = −2s − 3t, x2 = s, x3 = −t, x4 = t

$$\mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=\begin{bmatrix} -2s-3t \\ s \\ -t \\ t \end{bmatrix}
= s\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + t\begin{bmatrix} -3 \\ 0 \\ -1 \\ 1 \end{bmatrix}
= s\mathbf{v}_1 + t\mathbf{v}_2$$

⇒ NS(A) = {sv1 + tv2 | s, t ∈ R}
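It is easy to confirm that v1 and v2 really lie in NS(A), i.e. that Av1 = 0 and Av2 = 0 (a quick check, not part of the slides; `matvec` is a helper name of my choosing):

```python
def matvec(A, x):
    """Matrix-vector product: entry i is the dot product of row i with x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 2, -2, 1],
     [3, 6, -5, 4],
     [1, 2, 0, 3]]
v1 = [-2, 1, 0, 0]
v2 = [-3, 0, -1, 1]

# Both spanning vectors of NS(A) satisfy Ax = 0:
print(matvec(A, v1) == [0, 0, 0])  # True
print(matvec(A, v2) == [0, 0, 0])  # True
```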

Theorem 4.15: Row and column space have equal dimensions
If A is an mn matrix, then the row space and the column
space of A have the same dimension
dim(RS(A)) = dim(CS(A))

※ You can verify this result numerically by comparing Ex 2 (basis for the row space) with Ex 4 (bases for the column space) in this section. In these two examples, A is a 5×4 matrix, dim(RS(A)) = #(basis for RS(A)) = 3, and dim(CS(A)) = #(basis for CS(A)) = 3
 Rank :
The dimension of the row (or column) space of a matrix A
is called the rank of A
rank(A) = dim(RS(A)) = dim(CS(A))

 Nullity :
The dimension of the nullspace of A is called the nullity of A
nullity(A) = dim(NS(A))

Theorem 4.17: Dimension of the solution space
If A is an m×n matrix of rank r, then the dimension of
the solution space of Ax = 0 (the nullspace of A) is n − r, i.e.,
n = rank(A) + nullity(A) = r + (n − r)
(n is the number of columns or the number of unknown variables)

Pf:
Since rank(A) = r, the reduced row-echelon form of [A | 0] after G.-J. E. should be (assuming, for notational simplicity, that the leading 1's occur in the first r columns)

$$\left[\begin{array}{ccccccc|c}
1 & \cdots & 0 & c_{11} & c_{12} & \cdots & c_{1,n-r} & 0 \\
\vdots & \ddots & \vdots & \vdots & \vdots & & \vdots & \vdots \\
0 & \cdots & 1 & c_{r1} & c_{r2} & \cdots & c_{r,n-r} & 0 \\
0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
\vdots & & \vdots & \vdots & \vdots & & \vdots & \vdots \\
0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0
\end{array}\right]$$

(the first r rows are nonzero and the remaining m − r rows are zero)

1. From Thm 4.14, the nonzero row vectors of B, the (reduced) row-echelon form of A, form a basis for RS(A)
2. From the definition, rank(A) = dim(RS(A)) = r
※ According to the above two facts, the reduced row-echelon form should have exactly r nonzero rows, as in the matrix above
Therefore, the corresponding system of linear equations is

$$\begin{aligned}
x_1 &+ c_{11}x_{r+1} + c_{12}x_{r+2} + \cdots + c_{1,n-r}x_n = 0 \\
x_2 &+ c_{21}x_{r+1} + c_{22}x_{r+2} + \cdots + c_{2,n-r}x_n = 0 \\
&\ \ \vdots \\
x_r &+ c_{r1}x_{r+1} + c_{r2}x_{r+2} + \cdots + c_{r,n-r}x_n = 0
\end{aligned}$$

Solving for the first r variables in terms of the last n − r parametric variables produces n − r vectors in the basis of the solution space. Consequently, the dimension of the solution space of Ax = 0 is n − r because there are n − r free parametric variables. (For instance, in Ex 6, there are two parametric variables, so dim(NS(A)) = 2)

Notes:
(1) rank(A) can be viewed as the number of leading ones (or nonzero rows)
in the reduced row-echelon form for solving Ax = 0
(2) nullity(A) can be viewed as the number of free variables in the reduced
row-echelon form for solving Ax = 0
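These two notes can be illustrated with the coefficient matrix of Ex 6: its reduced row-echelon form has 2 leading ones and 2 free variables, so rank(A) = 2 and nullity(A) = n − r = 4 − 2 = 2. A minimal numerical check (not part of the slides; `rank` is a helper name of my choosing):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gauss-Jordan elimination with exact Fractions,
    i.e. the number of leading ones in the reduced row-echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

A = [[1, 2, -2, 1],
     [3, 6, -5, 4],
     [1, 2, 0, 3]]
r = rank(A)
n = len(A[0])
print(r, n - r)  # 2 2  ->  rank(A) + nullity(A) = n, per Thm. 4.17
```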
• Ex 7: Rank and nullity of a matrix
Let the column vectors of the matrix A be denoted by a1, a2, a3, a4, and a5

$$A=\begin{bmatrix} 1 & 0 & -2 & 1 & 0 \\ 0 & -1 & -3 & 1 & 3 \\ -2 & -1 & 1 & -1 & 3 \\ 0 & 3 & 9 & 0 & -12 \end{bmatrix}$$

(a) Find the rank and nullity of A
(b) Find a subset of the column vectors of A that forms a basis for the column space of A
(c) If possible, write the third column of A as a linear combination of the first two columns

Sol: Derive B to be the reduced row-echelon form of A:

$$A=\begin{bmatrix} 1 & 0 & -2 & 1 & 0 \\ 0 & -1 & -3 & 1 & 3 \\ -2 & -1 & 1 & -1 & 3 \\ 0 & 3 & 9 & 0 & -12 \end{bmatrix}
\xrightarrow{\text{G.-J. E.}}
B=\begin{bmatrix} 1 & 0 & -2 & 0 & 1 \\ 0 & 1 & 3 & 0 & -4 \\ 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

(the columns of A are a1, …, a5 and the columns of B are b1, …, b5)

(a) rank(A) = 3 (by Theorems 4.13 and 4.14, rank(A) = dim(RS(A)) = dim(RS(B)) = the number of nonzero rows in B)

nullity(A) = n − rank(A) = 5 − 3 = 2 (by Thm. 4.17)

(b) Leading 1's ⇒ {b1, b2, b4} is a basis for CS(B)
⇒ {a1, a2, a4} is a basis for CS(A)

$$\mathbf{a}_1=\begin{bmatrix} 1 \\ 0 \\ -2 \\ 0 \end{bmatrix},\
\mathbf{a}_2=\begin{bmatrix} 0 \\ -1 \\ -1 \\ 3 \end{bmatrix},\ \text{and}\
\mathbf{a}_4=\begin{bmatrix} 1 \\ 1 \\ -1 \\ 0 \end{bmatrix}$$

(c) b3 = −2b1 + 3b2 ⇒ a3 = −2a1 + 3a2
(due to the fact that elementary row operations do not change the dependency relationships among columns)
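Part (c) can be double-checked by direct arithmetic on the columns of A (a quick check, not part of the slides):

```python
# Columns a1, a2, a3 of the matrix A in Ex 7, read top to bottom.
a1 = [1, 0, -2, 0]
a2 = [0, -1, -1, 3]
a3 = [-2, -3, 1, 9]

# a3 = -2*a1 + 3*a2, mirroring b3 = -2*b1 + 3*b2 in the reduced form B:
combo = [-2 * x + 3 * y for x, y in zip(a1, a2)]
print(combo == a3)  # True
```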
 Theorem 4.18: Solutions of a nonhomogeneous linear system
If xp is a particular solution of the nonhomogeneous system Ax = b, then every solution of this system can be written in the form x = xp + xh, where xh is a solution of the corresponding homogeneous system Ax = 0

Pf:
Let x be any solution of Ax = b
⇒ A(x − xp) = Ax − Axp = b − b = 0
⇒ (x − xp) is a solution of Ax = 0
Let xh = x − xp, a solution of Ax = 0
⇒ x = xp + xh
• Ex 8: Finding the solution set of a nonhomogeneous system
Find the set of all solution vectors of the system of linear equations

x1 − 2x3 + x4 = 5
3x1 + x2 − 5x3 = 8
x1 + 2x2 − 5x4 = −9

Sol:

$$\left[\begin{array}{cccc|c} 1 & 0 & -2 & 1 & 5 \\ 3 & 1 & -5 & 0 & 8 \\ 1 & 2 & 0 & -5 & -9 \end{array}\right]
\xrightarrow{\text{G.-J. E.}}
\left[\begin{array}{cccc|c} 1 & 0 & -2 & 1 & 5 \\ 0 & 1 & 1 & -3 & -7 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$$

⇒ x1 = 2s − t + 5, x2 = −s + 3t − 7, x3 = s, x4 = t

$$\mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=\begin{bmatrix} 2s-t+5 \\ -s+3t-7 \\ s \\ t \end{bmatrix}
= s\begin{bmatrix} 2 \\ -1 \\ 1 \\ 0 \end{bmatrix}
+ t\begin{bmatrix} -1 \\ 3 \\ 0 \\ 1 \end{bmatrix}
+ \begin{bmatrix} 5 \\ -7 \\ 0 \\ 0 \end{bmatrix}
= s\mathbf{u}_1 + t\mathbf{u}_2 + \mathbf{x}_p$$

i.e., xp = (5, −7, 0, 0)T is a particular solution vector of Ax = b,
and xh = su1 + tu2 is a solution of Ax = 0 (you can replace the constant vector with a zero vector to check this result)
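The decomposition x = xp + su1 + tu2 can be verified directly: xp solves Ax = b, while u1 and u2 solve the homogeneous system Ax = 0 (a quick check, not part of the slides; `matvec` is a helper name of my choosing):

```python
def matvec(A, x):
    """Matrix-vector product: entry i is the dot product of row i with x."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

# Coefficient matrix and right-hand side of the system in Ex 8.
A = [[1, 0, -2, 1],
     [3, 1, -5, 0],
     [1, 2, 0, -5]]
b  = [5, 8, -9]
xp = [5, -7, 0, 0]
u1 = [2, -1, 1, 0]
u2 = [-1, 3, 0, 1]

print(matvec(A, xp) == b)          # True: xp is a particular solution
print(matvec(A, u1) == [0, 0, 0])  # True: u1 is in NS(A)
print(matvec(A, u2) == [0, 0, 0])  # True: u2 is in NS(A)
```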
 For any n×n matrix A with det(A) ≠ 0
※ If det(A) ≠ 0, Ax = b has a unique solution (xp) and Ax = 0 has only the trivial solution (xh = 0). According to Theorem 4.18, the solution of Ax = b can be written as x = xp + xh. The result xh = 0 implies that there is only one solution for Ax = b, namely x = xp + 0 = xp
※ In this scenario, nullity(A) = dim(NS(A)) = dim({0}) = 0.
Furthermore, according to Theorem 4.17 that n = rank(A) +
nullity(A) , we can conclude that rank(A) = n
※ Finally, according to the definition of rank(A) = dim(RS(A)) =
dim(CS(A)), we can further obtain that dim(RS(A)) = dim(CS(A)) =
n, which implies that there are n rows (and n columns) of A which
are linearly independent (see the definitions on Slide 4.74)
※ The relationship between the solutions of Ax = b and Ax = 0 for an n×n matrix A (the solution of Ax = b is x = xp + xh, as shown in Thm. 4.18):

If det(A) ≠ 0:
 For Ax = 0: only the trivial solution xh = 0
 For Ax = b: xp must exist, and there is only one solution x = xp + 0 = xp

If det(A) = 0:
 For Ax = 0: infinitely many solutions xh
 For Ax = b: if xp exists, there are infinitely many solutions x = xp + xh; if xp does not exist, the solution x = xp + xh does not exist
• Theorem 4.19: Solution of a system of linear equations
The system of linear equations Ax = b is consistent if and only if b is in the column space of A (i.e., b can be expressed as a linear combination of the column vectors of A)
Pf:
Let

$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},\quad
\mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},\quad \text{and}\quad
\mathbf{b}=\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$$

be the coefficient matrix, the unknown vector, and the constant-term vector, respectively, of the system Ax = b
Then

$$A\mathbf{x}=\begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{bmatrix}
= x_1\begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix}
+ x_2\begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix}
+ \cdots
+ x_n\begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix}
= x_1 A^{(1)} + x_2 A^{(2)} + \cdots + x_n A^{(n)} = \mathbf{b}$$

Hence, Ax = b is consistent (x has solutions) if and only if b is a linear combination of the column vectors of A. In other words, the system is consistent if and only if b is in the subspace of Rm spanned by the column vectors of A
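The identity Ax = x1A(1) + x2A(2) + … + xnA(n) used in this proof can be demonstrated numerically: computing the product row by row agrees with summing the scaled columns (a sketch, not part of the slides; the helper names and the test vector x are my choices, with A taken from Ex 9 below):

```python
def matvec(A, x):
    """Row-oriented product: entry i is the dot product of row i with x."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def column_combo(A, x):
    """Column-oriented product: x1*A^(1) + x2*A^(2) + ... + xn*A^(n)."""
    m, n = len(A), len(A[0])
    out = [0] * m
    for j in range(n):
        for i in range(m):
            out[i] += x[j] * A[i][j]
    return out

A = [[1, 1, -1], [1, 0, 1], [3, 2, -1]]  # coefficient matrix of Ex 9
x = [3, -4, 0]
print(matvec(A, x) == column_combo(A, x))  # True
```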
• Ex 9: Consistency of a system of linear equations depending on whether b is in the column space of A

x1 + x2 − x3 = −1
x1 + x3 = 3
3x1 + 2x2 − x3 = 1

Sol:

$$[A\ \mathbf{b}]=\left[\begin{array}{ccc|c} 1 & 1 & -1 & -1 \\ 1 & 0 & 1 & 3 \\ 3 & 2 & -1 & 1 \end{array}\right]
\xrightarrow{\text{G.-J. E.}}
\left[\begin{array}{ccc|c} 1 & 0 & 1 & 3 \\ 0 & 1 & -2 & -4 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

(the columns of [A b] are a1, a2, a3, b and the columns of the reduced matrix are w1, w2, w3, v)

v = 3w1 − 4w2
⇒ b = 3a1 − 4a2 + 0a3 (due to the fact that elementary row operations do not change the dependency relationships among columns)

(In other words, b is in the column space of A)

⇒ The system of linear equations is consistent
 Check for Ex. 9:
rank(A) = rank([A | b]) = 2
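Both the consistency check rank(A) = rank([A|b]) = 2 and the combination b = 3a1 − 4a2 can be reproduced numerically (a sketch, not part of the slides; `rank` is a helper name of my choosing):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gauss-Jordan elimination with exact Fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return r

A = [[1, 1, -1], [1, 0, 1], [3, 2, -1]]  # coefficient matrix of Ex 9
b = [-1, 3, 1]
Ab = [row + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]

print(rank(A), rank(Ab))  # 2 2 -> appending b adds no independent column

combo = [3 * row[0] - 4 * row[1] for row in A]  # 3*a1 - 4*a2
print(combo == b)  # True: b is in CS(A)
```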

 A property that can be inferred:
If rank(A) = rank([A|b]), then the system Ax = b is consistent

The above property can be analyzed as follows:


(1) By Theorem 4.19 in which Ax = b is consistent if and only if b is a
linear combination of the column vectors of A, we can infer that
appending b to the right of A does NOT increase the number of
linearly independent columns, so dim(CS(A)) = dim(CS([A|b]))
(2) By definition of the rank on Slide 4.86, rank(A) = dim(CS(A)) and
rank([A|b]) = dim(CS([A|b]))
(3) By combining (1) and (2), we can obtain rank(A) = rank([A|b]) if and
only if Ax = b is consistent
• Summary of equivalent conditions for square matrices:
If A is an n×n matrix, then the following conditions are equivalent
(1) A is invertible

(2) Ax = b has a unique solution for any n×1 matrix b

(3) Ax = 0 has only the trivial solution

(4) A is row-equivalent to In

(5) det(A) ≠ 0


(The above five statements are from Slide 3.39)
(6) rank(A) = n

(7) There are n row vectors of A which are linearly independent

(8) There are n column vectors of A which are linearly independent

(The last three statements are from the arguments on Slide 4.95)
