l10 - Linear Algebra - Matrix Spaces
Notes:
(1) The definitions of RS(A) and CS(A) automatically satisfy the closure conditions of
vector addition and scalar multiplication
(2) dim(RS(A)) (or dim(CS(A)) equals the number of linearly independent row (or
column) vectors of A
In Ex 5 of Section 4.4, S = {(1, 2, 3), (0, 1, 2), (–2, 0, 1)} spans R^3. Use these vectors as
row vectors to construct A:

A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ -2 & 0 & 1 \end{bmatrix} \quad\Rightarrow\quad RS(A) = R^3

(Since (1, 2, 3), (0, 1, 2), (–2, 0, 1) are linearly independent, they form a basis for RS(A))
Since S1 = {(1, 2, 3), (0, 1, 2), (–2, 0, 1), (1, 0, 0)} also spans R^3,

A_1 = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ -2 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \quad\Rightarrow\quad RS(A_1) = R^3

(Since (1, 2, 3), (0, 1, 2), (–2, 0, 1), (1, 0, 0) are not linearly independent, they cannot form a
basis for RS(A1))
Notes: dim(RS(A)) = 3 and dim(RS(A1)) = 3
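Both rank claims can be checked numerically. A minimal sketch using sympy (the names A and A1 follow the text above):

```python
import sympy as sp

# Rows of A are the vectors of S; A1 appends the extra vector (1, 0, 0).
A  = sp.Matrix([[1, 2, 3], [0, 1, 2], [-2, 0, 1]])
A1 = sp.Matrix([[1, 2, 3], [0, 1, 2], [-2, 0, 1], [1, 0, 0]])

# dim(RS(A)) equals the number of linearly independent rows, i.e. the rank.
assert A.rank() == 3    # RS(A) = R^3; the three rows form a basis
assert A1.rank() == 3   # RS(A1) = R^3, but the four rows are dependent
```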
Theorem 4.13: Row-equivalent matrices have the same row space
If an m×n matrix A is row equivalent to an m×n matrix B,
then the row space of A is equal to the row space of B
Pf:
(1) Since B can be obtained from A by elementary row operations, the row vectors of B
can be expressed as linear combinations of the row vectors of A. ⇒ Linear
combinations of row vectors of B are still linear combinations of row vectors of A
⇒ any vector in RS(B) lies in RS(A) ⇒ RS(B) ⊆ RS(A)
(2) Since A can be obtained from B by elementary row operations, the row vectors of A
can be written as linear combinations of the row vectors of B. ⇒ Linear
combinations of row vectors of A are still linear combinations of row vectors of B
⇒ any vector in RS(A) lies in RS(B) ⇒ RS(A) ⊆ RS(B)
∴ RS(A) = RS(B)
• Notes:
(1) The row space of a matrix is not changed by elementary
row operations
RS(r(A)) = RS(A), where r is any elementary row operation
(2) But elementary row operations will change the column space
Theorem 4.14: Basis for the row space of a matrix
If a matrix A is row equivalent to a matrix B in the (reduced) row-echelon form, then
the nonzero row vectors of B form a basis for the row space of A
1. The row space of A is the same as the row space of B (Thm. 4.13), and it is spanned by the row vectors of B
2. The row space of B is spanned by its nonzero row vectors alone, since including the zero row
vectors generates no additional linear combinations (i.e., the nonzero row vectors span the row
space of B)
3. Since it is impossible to express a nonzero row vector as the linear combination of other nonzero
row vectors in a row-echelon form matrix (see Ex. 2 on the next slide), according to Thm. 4.8, we
can conclude that the nonzero row vectors in B are linearly independent
4. As a consequence, since the nonzero row vectors in B are linearly independent and span the row
space of B, according to the definition on Slide 4.57, they form a basis for the row space of B and
for the row space of A as well
Ex 2: Finding a basis for a row space
Find a basis for the row space of

A = \begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ -3 & 0 & 6 & -1 \\ 3 & 4 & -2 & 1 \\ 2 & 0 & -4 & -2 \end{bmatrix}

Sol:

A = \begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ -3 & 0 & 6 & -1 \\ 3 & 4 & -2 & 1 \\ 2 & 0 & -4 & -2 \end{bmatrix} \xrightarrow{\text{G.E.}} B = \begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{matrix} \mathbf{w}_1 \\ \mathbf{w}_2 \\ \mathbf{w}_3 \\ {} \\ {} \end{matrix}

(columns of A: a1, a2, a3, a4; columns of B: b1, b2, b3, b4)
a basis for RS(A) = {the nonzero row vectors of B} (Thm 4.14)
= {w1, w2, w3} = {(1, 3, 1, 3), (0, 1, 1, 0), (0, 0, 0, 1)}
(Check: w1, w2, w3 are linearly independent, i.e., aw1 + bw2 + cw3 = 0 has only the
trivial solution; equivalently, none of them can be expressed as a linear
combination of the others (Theorem 4.8))
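As a numerical cross-check (a sympy sketch, not part of the original solution), one can verify that the three nonzero rows of B are independent and span the same space as the rows of A:

```python
import sympy as sp

A = sp.Matrix([[ 1, 3,  1,  3],
               [ 0, 1,  1,  0],
               [-3, 0,  6, -1],
               [ 3, 4, -2,  1],
               [ 2, 0, -4, -2]])
W = sp.Matrix([[1, 3, 1, 3],     # w1
               [0, 1, 1, 0],     # w2
               [0, 0, 0, 1]])    # w3

# w1, w2, w3 are linearly independent ...
assert W.rank() == 3
# ... and lie in RS(A): stacking them onto A does not raise the rank,
# so they span RS(A) and hence form a basis for it.
assert A.rank() == 3
assert sp.Matrix.vstack(A, W).rank() == 3
```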
• Notes:
Although row operations can change the column space of a matrix (mentioned
in Slide 4.77), they do not change the dependency relationships among
columns
(1) {b1, b2, b4} is L.I. (because these columns contain the leading 1's)
⇒ {a1, a2, a4} is L.I.
(2) b3 = –2b1 + b2 ⇒ a3 = –2a1 + a2
(The linear-combination relationships among the column vectors of B still hold for the
column vectors of A)
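This preservation of column dependencies can be spot-checked with sympy's reduced row-echelon form (a sketch; the RREF differs from the echelon form B above only in its already-reduced rows, and the same column relations hold):

```python
import sympy as sp

A = sp.Matrix([[ 1, 3,  1,  3],
               [ 0, 1,  1,  0],
               [-3, 0,  6, -1],
               [ 3, 4, -2,  1],
               [ 2, 0, -4, -2]])
R, pivots = A.rref()   # Gauss-Jordan elimination

# The pivot (leading-1) columns are columns 1, 2 and 4.
assert pivots == (0, 1, 3)
# The same dependency holds among the columns of R and of A:
assert R.col(2) == -2*R.col(0) + R.col(1)   # b3 = -2 b1 + b2
assert A.col(2) == -2*A.col(0) + A.col(1)   # a3 = -2 a1 + a2
```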
• Ex 3: Finding a basis for a subspace using Thm. 4.14
Find a basis for the subspace of R3 spanned by
S = {v1, v2, v3} = {(–1, 2, 5), (3, 0, 3), (5, 1, 8)}

Sol:

A = \begin{bmatrix} -1 & 2 & 5 \\ 3 & 0 & 3 \\ 5 & 1 & 8 \end{bmatrix} \begin{matrix} \mathbf{v}_1 \\ \mathbf{v}_2 \\ \mathbf{v}_3 \end{matrix} \xrightarrow{\text{G.E.}} B = \begin{bmatrix} 1 & -2 & -5 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix} \begin{matrix} \mathbf{w}_1 \\ \mathbf{w}_2 \\ {} \end{matrix}

(Construct A such that RS(A) = span(S))
⇒ a basis for span(S) = {w1, w2} = {(1, –2, –5), (0, 1, 3)}
Ex 4: Finding a basis for the column space of a matrix
Find a basis for the column space of the matrix A in Ex 2.
Sol. 1: Since CS(A) = RS(A^T), finding a basis for the column space of A is
equivalent to finding a basis for the row space of A^T.
A^T = \begin{bmatrix} 1 & 0 & -3 & 3 & 2 \\ 3 & 1 & 0 & 4 & 0 \\ 1 & 1 & 6 & -2 & -4 \\ 3 & 0 & -1 & 1 & -2 \end{bmatrix} \xrightarrow{\text{G.E.}} B = \begin{bmatrix} 1 & 0 & -3 & 3 & 2 \\ 0 & 1 & 9 & -5 & -6 \\ 0 & 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{matrix} \mathbf{w}_1 \\ \mathbf{w}_2 \\ \mathbf{w}_3 \\ {} \end{matrix}

⇒ a basis for CS(A)
= a basis for RS(A^T)
= {the nonzero row vectors of B}
= {w1, w2, w3}
= \left\{ \begin{bmatrix} 1 \\ 0 \\ -3 \\ 3 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 9 \\ -5 \\ -6 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \\ -1 \end{bmatrix} \right\} (a basis for the column space of A)
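Sol. 1 can be reproduced in sympy by row-reducing A^T (a sketch; rref yields the fully reduced form, whose nonzero rows are a different but equally valid basis for RS(A^T) = CS(A)):

```python
import sympy as sp

A = sp.Matrix([[ 1, 3,  1,  3],
               [ 0, 1,  1,  0],
               [-3, 0,  6, -1],
               [ 3, 4, -2,  1],
               [ 2, 0, -4, -2]])
R, _ = A.T.rref()
# Nonzero rows of the reduced form of A^T, transposed back into columns of R^5
basis = [R.row(i).T for i in range(A.T.rank())]
assert len(basis) == 3          # dim(CS(A)) = 3
for v in basis:
    # every basis vector lies in CS(A): appending it does not raise the rank
    assert sp.Matrix.hstack(A, v).rank() == A.rank()
```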
Sol. 2:

A = \begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ -3 & 0 & 6 & -1 \\ 3 & 4 & -2 & 1 \\ 2 & 0 & -4 & -2 \end{bmatrix} \xrightarrow{\text{G.E.}} B = \begin{bmatrix} 1 & 3 & 1 & 3 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}

(columns of A: a1, a2, a3, a4; columns of B: b1, b2, b3, b4)
Leading 1's ⇒ {b1, b2, b4} is a basis for CS(B) (not for CS(A))
⇒ {a1, a2, a4} is a basis for CS(A)
※ This method relies on the fact that B has the same dependency relationships among
columns as A (mentioned on Slides 4.77 and 4.79); it does NOT mean CS(B) = CS(A)
Notes:
The bases for the column space derived from Sol. 1 and Sol. 2 are different.
However, both these bases span the same CS(A), which is a subspace of R5
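That the two different bases span the same subspace can be confirmed with a sympy sketch: each basis has rank 3, and putting all six vectors side by side still gives rank 3.

```python
import sympy as sp

A = sp.Matrix([[ 1, 3,  1,  3],
               [ 0, 1,  1,  0],
               [-3, 0,  6, -1],
               [ 3, 4, -2,  1],
               [ 2, 0, -4, -2]])
# Sol. 2 basis: the columns of A in the pivot positions of its RREF
_, pivots = A.rref()
basis2 = sp.Matrix.hstack(*[A.col(j) for j in pivots])
# Sol. 1 basis: nonzero rows of the reduced form of A^T, as columns
R, _ = A.T.rref()
basis1 = R[:A.rank(), :].T
# Different bases, but together they still span only a 3-dimensional
# space: both span the same subspace CS(A) of R^5.
assert basis1.rank() == 3 and basis2.rank() == 3
assert sp.Matrix.hstack(basis1, basis2).rank() == 3
```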
Theorem 4.16: The definition of the nullspace
If A is an m×n matrix, then the set of all solutions of the homogeneous system of linear
equations Ax = 0 is a subspace of R^n called the nullspace of A, which is denoted as

NS(A) = \{ \mathbf{x} \in R^n \mid A\mathbf{x} = \mathbf{0} \}
Pf:
NS(A) ⊆ R^n
NS(A) is not empty (since A0 = 0)
Let x1, x2 ∈ NS(A) (i.e., Ax1 = 0 and Ax2 = 0)
Then (1) A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0 (closure under addition)
(2) A(cx1) = c(Ax1) = c(0) = 0 (closure under scalar multiplication)
Thus NS(A) is a subspace of R^n
Notes: The nullspace of A is also called the solution space of the homogeneous
system Ax = 0
Ex 6: Finding the solution space (or the nullspace) of a homogeneous system with the coefficient matrix
A as follows.
A = \begin{bmatrix} 1 & 2 & -2 & 1 \\ 3 & 6 & -5 & 4 \\ 1 & 2 & 0 & 3 \end{bmatrix}
Sol: The nullspace of A is the solution space of Ax = 0
augmented matrix: \begin{bmatrix} 1 & 2 & -2 & 1 & 0 \\ 3 & 6 & -5 & 4 & 0 \\ 1 & 2 & 0 & 3 & 0 \end{bmatrix} \xrightarrow{\text{G.-J. E.}} \begin{bmatrix} 1 & 2 & 0 & 3 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}

Let x2 = s and x4 = t (the free variables). Then
x1 = –2s – 3t, x2 = s, x3 = –t, x4 = t
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -2s - 3t \\ s \\ -t \\ t \end{bmatrix} = s \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + t \begin{bmatrix} -3 \\ 0 \\ -1 \\ 1 \end{bmatrix} = s\mathbf{v}_1 + t\mathbf{v}_2

⇒ NS(A) = \{ s\mathbf{v}_1 + t\mathbf{v}_2 \mid s, t \in R \}
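The nullspace computation of Ex 6 can be reproduced directly (a sympy sketch; `nullspace()` returns one basis vector per free variable):

```python
import sympy as sp

A = sp.Matrix([[1, 2, -2, 1],
               [3, 6, -5, 4],
               [1, 2,  0, 3]])
ns = A.nullspace()               # a basis for NS(A)
assert len(ns) == 2              # two parameters s, t -> dim(NS(A)) = 2
for v in ns:
    assert A * v == sp.zeros(3, 1)   # every basis vector solves Ax = 0
```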
Theorem 4.15: Row and column space have equal dimensions
If A is an m×n matrix, then the row space and the column
space of A have the same dimension
dim(RS(A)) = dim(CS(A))
※You can verify this result numerically through comparing Ex 2 (bases for
the row space) with Ex 4 (bases for the column space) in this section. In
these two examples, A is a 5×4 matrix, dim(RS(A)) = #(basis for RS(A)) =
3, and dim(CS(A)) = #(basis for CS(A)) = 3
Rank :
The dimension of the row (or column) space of a matrix A
is called the rank of A
rank(A) = dim(RS(A)) = dim(CS(A))
Nullity :
The dimension of the nullspace of A is called the nullity of A
nullity(A) = dim(NS(A))
Theorem 4.17: Dimension of the solution space
If A is an m×n matrix of rank r, then the dimension of
the solution space of Ax = 0 (the nullspace of A) is n – r, i.e.,
n = rank(A) + nullity(A) = r + (n – r)
(n is the number of columns or the number of unknown variables)
Pf:
Since rank(A) = r, the reduced row-echelon form of [A | 0] obtained by G.-J. E. has the form

\left[ \begin{array}{cccccc|c}
1 & 0 & \cdots & 0 & c_{11} \; \cdots \; c_{1,n-r} & & 0 \\
0 & 1 & \cdots & 0 & c_{21} \; \cdots \; c_{2,n-r} & & 0 \\
\vdots & & \ddots & & \vdots & & \vdots \\
0 & 0 & \cdots & 1 & c_{r1} \; \cdots \; c_{r,n-r} & & 0 \\
0 & 0 & \cdots & 0 & 0 \; \cdots \; 0 & & 0 \\
\vdots & & & & \vdots & & \vdots \\
0 & 0 & \cdots & 0 & 0 \; \cdots \; 0 & & 0
\end{array} \right]

(r pivot columns, n – r free columns, and m – r zero rows)

1. From Thm 4.14, the nonzero row vectors of B, which is the (reduced) row-echelon form of A, form a basis for RS(A)
2. By definition, rank(A) = dim(RS(A)) = r
※ According to these two facts, the reduced row-echelon form must have exactly r nonzero rows, as shown above

Therefore, the corresponding system of linear equations is

x_1 + c_{11} x_{r+1} + c_{12} x_{r+2} + \cdots + c_{1,n-r} x_n = 0
x_2 + c_{21} x_{r+1} + c_{22} x_{r+2} + \cdots + c_{2,n-r} x_n = 0
⋮
x_r + c_{r1} x_{r+1} + c_{r2} x_{r+2} + \cdots + c_{r,n-r} x_n = 0

Solving for the leading variables in terms of the n – r free variables x_{r+1}, …, x_n shows that
every solution is a linear combination of n – r linearly independent vectors (one per free
variable), so dim(NS(A)) = n – r, i.e., nullity(A) = n – r
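The count n = rank(A) + nullity(A) can be verified on the matrix of Ex 6 (a sympy sketch):

```python
import sympy as sp

A = sp.Matrix([[1, 2, -2, 1],
               [3, 6, -5, 4],
               [1, 2,  0, 3]])
n = A.cols                           # number of columns / unknowns
r = A.rank()                         # number of leading 1's in the reduced form
assert r == 2
assert len(A.nullspace()) == n - r   # nullity(A) = n - r = 4 - 2 = 2
```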
Ex: Finding the rank of a matrix and a basis for its column space

A = \begin{bmatrix} 1 & 0 & -2 & 1 & 0 \\ 0 & -1 & -3 & 1 & 3 \\ -2 & -1 & 1 & -1 & 3 \\ 0 & 3 & 9 & 0 & -12 \end{bmatrix} \xrightarrow{\text{G.-J. E.}} B = \begin{bmatrix} 1 & 0 & -2 & 0 & 1 \\ 0 & 1 & 3 & 0 & -4 \\ 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}

(columns of A: a1, a2, a3, a4, a5; columns of B: b1, b2, b3, b4, b5)

Since the leading 1's of B appear in columns 1, 2, and 4, {b1, b2, b4} is L.I., and hence
a1, a2, and a4, i.e.,

\left\{ \begin{bmatrix} 1 \\ 0 \\ -2 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ -1 \\ -1 \\ 3 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ -1 \\ 0 \end{bmatrix} \right\},

form a basis for CS(A)

(c) b3 = –2b1 + 3b2 ⇒ a3 = –2a1 + 3a2
(due to the fact that elementary row operations do not change the dependency
relationships among columns)
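Assuming the reconstructed entries of this example's matrices (some signs were illegible in the scan), the claims can be verified with sympy:

```python
import sympy as sp

A = sp.Matrix([[ 1,  0, -2,  1,   0],
               [ 0, -1, -3,  1,   3],
               [-2, -1,  1, -1,   3],
               [ 0,  3,  9,  0, -12]])
B, pivots = A.rref()
assert A.rank() == 3                            # rank(A) = 3
assert pivots == (0, 1, 3)                      # {a1, a2, a4} is a basis for CS(A)
assert A.col(2) == -2*A.col(0) + 3*A.col(1)     # a3 = -2 a1 + 3 a2
assert B.col(2) == -2*B.col(0) + 3*B.col(1)     # b3 = -2 b1 + 3 b2
```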
Theorem 4.18: Solutions of a nonhomogeneous linear system
If xp is a particular solution of the nonhomogeneous system Ax = b, then
every solution of this system can be written in the form x = xp + xh, where xh
is a solution of the corresponding homogeneous system Ax = 0
Pf:
Let x be any solution of Ax = b. Then A(x – xp) = Ax – Axp = b – b = 0, so
xh = x – xp is a solution of the corresponding homogeneous system, and x = xp + xh

Ex: Finding the solution set of a nonhomogeneous system
Find the set of all solution vectors of the system
x1 + 2x3 – x4 = 5
3x1 + x2 + 5x3 = 8
x1 + 2x2 + 5x4 = –9
Sol:
[A \mid \mathbf{b}] = \begin{bmatrix} 1 & 0 & 2 & -1 & 5 \\ 3 & 1 & 5 & 0 & 8 \\ 1 & 2 & 0 & 5 & -9 \end{bmatrix} \xrightarrow{\text{G.-J. E.}} \begin{bmatrix} 1 & 0 & 2 & -1 & 5 \\ 0 & 1 & -1 & 3 & -7 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}

Let x3 = s and x4 = t. Then x1 = 5 – 2s + t, x2 = –7 + s – 3t
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 5 - 2s + t \\ -7 + s - 3t \\ s \\ t \end{bmatrix} = \begin{bmatrix} 5 \\ -7 \\ 0 \\ 0 \end{bmatrix} + s \begin{bmatrix} -2 \\ 1 \\ 1 \\ 0 \end{bmatrix} + t \begin{bmatrix} 1 \\ -3 \\ 0 \\ 1 \end{bmatrix} = \mathbf{x}_p + s\mathbf{u}_1 + t\mathbf{u}_2
i.e., xp is a particular solution vector of Ax = b,
0
0
and xh = su1 + tu2 is a solution of Ax = 0 (you can replace the constant vector
with a zero vector to check this result)
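The decomposition x = xp + su1 + tu2 can be checked directly (a sympy sketch of this example):

```python
import sympy as sp

A  = sp.Matrix([[1, 0, 2, -1],
                [3, 1, 5,  0],
                [1, 2, 0,  5]])
b  = sp.Matrix([5, 8, -9])
xp = sp.Matrix([5, -7, 0, 0])
u1 = sp.Matrix([-2, 1, 1, 0])
u2 = sp.Matrix([1, -3, 0, 1])

assert A * xp == b                  # xp is a particular solution of Ax = b
assert A * u1 == sp.zeros(3, 1)     # u1 and u2 solve the homogeneous system
assert A * u2 == sp.zeros(3, 1)
s, t = sp.symbols('s t')
# every x = xp + xh is a solution of Ax = b
assert sp.expand(A * (xp + s*u1 + t*u2)) == b
```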
For any n×n matrix A with det(A) ≠ 0
※ If det(A) ≠ 0, Ax = b is with the unique solution (xp) and Ax = 0 has
only the trivial solution (xh = 0). According to Theorem 4.18, the
solution of Ax = b can be written as x = xp + xh. The result of xh = 0
implies that there is only one solution for Ax = b and the solution
is x = xp + 0 = xp
※ In this scenario, nullity(A) = dim(NS(A)) = dim({0}) = 0.
Furthermore, according to Theorem 4.17 that n = rank(A) +
nullity(A) , we can conclude that rank(A) = n
※ Finally, according to the definition of rank(A) = dim(RS(A)) =
dim(CS(A)), we can further obtain that dim(RS(A)) = dim(CS(A)) =
n, which implies that there are n rows (and n columns) of A which
are linearly independent (see the definitions on Slide 4.74)
※ The relationship between the solutions of Ax = b and Ax = 0 for an n×n matrix A:

For Ax = 0:  det(A) ≠ 0 → only the trivial solution xh = 0;  det(A) = 0 → infinitely many solutions xh
For Ax = b:  det(A) ≠ 0 → exactly one solution x = xp;  det(A) = 0 → either no solution or infinitely many solutions x = xp + xh
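For the det(A) ≠ 0 column, a quick numpy illustration (the 2×2 matrix here is a made-up example, not from the text):

```python
import numpy as np

# An invertible matrix chosen for illustration: det(A) = 5 != 0
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
assert np.linalg.det(A) != 0
xp = np.linalg.solve(A, b)            # the unique solution of Ax = b
xh = np.linalg.solve(A, np.zeros(2))  # Ax = 0 has only the trivial solution
assert np.allclose(A @ xp, b)
assert np.allclose(xh, np.zeros(2))
```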
Ex: Consistency of a system of linear equations
(equivalently, determining whether b lies in the column space of A)
x1 + x2 – x3 = –1
x1 + x3 = 3
3x1 + 2x2 – x3 = 1
Sol:
[A \mid \mathbf{b}] = \begin{bmatrix} 1 & 1 & -1 & -1 \\ 1 & 0 & 1 & 3 \\ 3 & 2 & -1 & 1 \end{bmatrix} \xrightarrow{\text{G.-J. E.}} \begin{bmatrix} 1 & 0 & 1 & 3 \\ 0 & 1 & -2 & -4 \\ 0 & 0 & 0 & 0 \end{bmatrix}

(columns of [A | b]: a1, a2, a3, b; columns of the reduced matrix: w1, w2, w3, v)
v = 3w1 – 4w2
⇒ b = 3a1 – 4a2 + 0a3, so b lies in CS(A) and the system is consistent
(due to the fact that elementary row operations do not change the dependency
relationships among columns)
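The conclusion b = 3a1 – 4a2 + 0a3 (and hence consistency) can be checked with sympy:

```python
import sympy as sp

A = sp.Matrix([[1, 1, -1],
               [1, 0,  1],
               [3, 2, -1]])
b = sp.Matrix([-1, 3, 1])
R, pivots = sp.Matrix.hstack(A, b).rref()   # Gauss-Jordan on [A | b]
assert 3 not in pivots                      # last column is not a pivot -> consistent
assert b == 3*A.col(0) - 4*A.col(1)         # b = 3 a1 - 4 a2 + 0 a3
```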
(4) A is row-equivalent to In
(The last three statements are from the arguments on Slide 4.95)