        a_1 x_1 + ··· + a_n x_n = y

is called a linear equation in n unknowns (over F). The scalars a_i are called
the coefficients of the unknowns, and y is called the constant term of the
equation. A vector (c_1, . . . , c_n) ∈ F^n is called a solution vector of this equa-
tion if and only if

        a_1 c_1 + ··· + a_n c_n = y
in which case we say that (c_1, . . . , c_n) satisfies the equation. The set of all
such solutions is called the solution set (or the general solution).
Now consider the following system of m linear equations in n
unknowns:
        a_{11} x_1 + ··· + a_{1n} x_n = y_1
        a_{21} x_1 + ··· + a_{2n} x_n = y_2
                      ⋮
        a_{m1} x_1 + ··· + a_{mn} x_n = y_m
If we let S_i denote the solution set of the equation Σ_j a_{ij} x_j = y_i for each i, then
the solution set S of the system is given by the intersection S = S_1 ∩ ··· ∩ S_m. In
other words, if (c_1, . . . , c_n) ∈ F^n is a solution of the system of equations, then it is a
solution of each of the m equations in the system.
Example 3.1 Consider this system of two equations in three unknowns over
the real field ℝ:

        2x_1 - 3x_2 +  x_3 = 6
         x_1 + 5x_2 - 2x_3 = 12

The vector (3, 1, 3) ∈ ℝ^3 satisfies the first equation since

        2(3) - 3(1) + 3 = 6

while it is not a solution of the system because

        3 + 5(1) - 2(3) = 2 ≠ 12 .
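As a quick numerical check (a Python/NumPy sketch, not part of the original text; the coefficients are copied from the system above), a vector is a solution of the system precisely when it satisfies every equation simultaneously:

```python
import numpy as np

# Coefficient matrix and constant terms of the system in Example 3.1
A = np.array([[2.0, -3.0,  1.0],
              [1.0,  5.0, -2.0]])
y = np.array([6.0, 12.0])

c = np.array([3.0, 1.0, 3.0])   # candidate solution vector (3, 1, 3)

print(A @ c)                    # [6. 2.]  -- satisfies the first equation only
print(np.isclose(A @ c, y))     # [ True False]  -- so (3, 1, 3) is not a solution
```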
In general, we will use the term matrix to denote any array such as the
array A shown above. This matrix has m rows and n columns, and hence is
referred to as an m x n matrix, or a matrix of size m x n. By convention, an
element a_{ij} ∈ F of A is labeled with the first index referring to the row and the
second index referring to the column. The scalar a_{ij} is usually called the i, jth
entry (or element) of the matrix A. We will frequently denote the matrix A
by the symbol (a_{ij}).
Another rather general way to define a matrix is as a mapping from a sub-
set of all ordered pairs of positive integers into the field F. In other words, we
define the mapping A by A(i, j) = a_{ij} for every 1 ≤ i ≤ m and 1 ≤ j ≤ n. In this
sense, a matrix is actually a mapping, and the m x n array written above is just
a representation of this mapping.
Before proceeding with the general theory, let us give a specific example
demonstrating how to solve a system of linear equations.
Example 3.2 Let us attempt to solve the following system of linear equa-
tions:
        2x_1 +  x_2 - 2x_3 = -3
         x_1 - 3x_2 +  x_3 =  8
        4x_1 -  x_2 - 2x_3 =  3
That our approach is valid in general will be proved in our first theorem
below.
Multiply the first equation by 1/2 to get the coefficient of x_1 equal to 1:
Multiply the first equation by -1 and add it to the second to obtain a new sec-
ond equation, then multiply the first by -4 and add it to the third to obtain a
new third equation:
        x_1 + (1/2)x_2 -  x_3 = -3/2
            - (7/2)x_2 + 2x_3 = 19/2
              -  3x_2 + 2x_3 =  9
Multiply the second by -2/7 to get the coefficient of x_2 equal to 1, then mul-
tiply this new second equation by 3 and add to the third:
Multiply the third by 7/2, then add 4/7 times this new equation to the second:
Add the third equation to the first, then add -1/2 times the second equation to
the new first to obtain
        x_1 =  2
        x_2 = -1
        x_3 =  3
This is now a solution of our system of equations. While this system could
have been solved in a more direct manner, we wanted to illustrate the system-
atic approach that will be needed below.
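For comparison, the same system can be handed to a linear-algebra library. This short NumPy sketch (an illustration, not part of the original text) reproduces the solution found above by elimination:

```python
import numpy as np

# The system of Example 3.2
A = np.array([[2.0,  1.0, -2.0],
              [1.0, -3.0,  1.0],
              [4.0, -1.0, -2.0]])
y = np.array([-3.0, 8.0, 3.0])

x = np.linalg.solve(A, y)
print(x)                       # [ 2. -1.  3.]  -- i.e. x1 = 2, x2 = -1, x3 = 3
print(np.allclose(A @ x, y))   # True
```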
in which

        a′_{2i} = a_{21} a_{1i} - a_{11} a_{2i}
        b′_2   = a_{21} b_1 - a_{11} b_2 .
        L_1 = b_1
        L_2 = b_2                                            (1)

while (2) is just

        L_1 = b_1
        a_{21} L_1 - a_{11} L_2 = a_{21} b_1 - a_{11} b_2            (2)

If (x_1, . . . , x_n) is a solution of (1), then L_1 = b_1 and L_2 = b_2 are true
equations, and hence

        a_{21} L_1 = a_{21} b_1
        a_{11} L_2 = a_{11} b_2

and hence also

        a_{21} L_1 - a_{11} L_2 = a_{21} b_1 - a_{11} b_2
are all true equations. Therefore every solution of (1) also satisfies (2).
Conversely, suppose that we have a solution (x_1, . . . , x_n) to the system
(2). Then clearly

        a_{21} L_1 = a_{21} b_1

is a true equation. Hence, subtracting the second of (2) from this gives us
        a_{21} L_1 - (a_{21} L_1 - a_{11} L_2) = a_{21} b_1 - (a_{21} b_1 - a_{11} b_2)

or

        a_{11} L_2 = a_{11} b_2 .

Thus L_2 = b_2 is also a true equation (since a_{11} ≠ 0). This shows that any solution of (2) is a
solution of (1) also.
We point out that in the proof of Theorem 3.1 (as well as in Example 3.2),
it was only the coefficients themselves that were of any direct use to us. The
unknowns x_i were never actually used in any of the manipulations. This is the
reason that we defined the matrix of coefficients (a_{ij}). What we now proceed
to do is to generalize the above method of solving systems of equations in a
manner that utilizes this matrix explicitly.
Exercises
2. Determine whether or not each of the following pairs of systems is equiva-
lent (over the complex field):
(a)     x -  y = 0        and        3x + y = 0
       2x +  y = 0                    x + y = 0
and
Note that (a) was not used in Example 3.2, but it would have been necessary if
the coefficient of x_1 in the first equation had been 0. The reason for this is that
we want the equations put into echelon form as defined below.
We now see how to use the matrix aug A as a tool in solving a system of
linear equations. In particular, we define the following so-called elementary
row operations (or transformations) as applied to the augmented matrix:
(α)  Interchange two rows.
(β)  Multiply one row by a nonzero scalar.
(γ)  Add a scalar multiple of one row to another row.

It should be clear that operations (α) and (β) have no effect on the solution set
of the system and, in view of Theorem 3.1, that operation (γ) also has no
effect.
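The three operations are easy to express directly on the rows of an array. The following sketch (hypothetical helper functions written with NumPy, not part of the text) applies each operation to an augmented matrix in place:

```python
import numpy as np

def swap_rows(M, i, j):          # type (alpha): interchange rows i and j
    M[[i, j]] = M[[j, i]]

def scale_row(M, i, c):          # type (beta): multiply row i by a nonzero scalar c
    M[i] = c * M[i]

def add_multiple(M, i, j, c):    # type (gamma): add c times row j to row i
    M[i] = M[i] + c * M[j]

# Augmented matrix of Example 3.2
aug = np.array([[2.0,  1.0, -2.0, -3.0],
                [1.0, -3.0,  1.0,  8.0],
                [4.0, -1.0, -2.0,  3.0]])

scale_row(aug, 0, 1/2)           # make the leading coefficient of row 1 equal to 1
add_multiple(aug, 1, 0, -1.0)    # new second row
add_multiple(aug, 2, 0, -4.0)    # new third row
print(aug)
```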
The next two examples show what happens both in the case where there is
no solution to a system of linear equations, and in the case of an infinite
number of solutions. In performing these operations on a matrix, we will let R_i
denote the ith row. We leave it to the reader to repeat Example 3.2 using this
notation.
Example 3.3 Consider this system of linear equations over the field ℝ:
x + 3y + 2z = 7
2x +  y -  z = 5
-x + 2y + 3z = 4
" 1 3 2 7%
$ '
$ 2 1 !1 5 '
$ '
#!1 2 3 4 &
and the reduction proceeds as follows. We first perform the following elemen-
tary row operations:
                      [ 1   3   2    7 ]
        R_2 - 2R_1 →  [ 0  -5  -5   -9 ]
        R_3 +  R_1 →  [ 0   5   5   11 ]
It is clear that the equation 0z = 2 has no solution, and hence this system has
no solution.
Example 3.4 Let us solve the following system over the field ℝ:
                      [ 1  2  -2  1  14 ]
                      [ 0  1  -1  5   2 ]
        (-1/15)R_3 →  [ 0  0   1  3   5 ]
        (-1/12)R_4 →  [ 0  0   1  3   5 ]
We see that the third and fourth equations are identical, and hence we have
three equations in four unknowns:
(1) All zero rows (if any) occur below all nonzero rows.
(2) The first nonzero entry (reading from the left) in each row is equal to
1.
(3) If the first nonzero entry in the ith row is in the j_i th column, then
every other entry in the j_i th column is 0.
(4) If the first nonzero entry in the ith row is in the j_i th column, then
j_1 < j_2 < ··· .
We will call the first (or leading) nonzero entries in each row of a row-
echelon matrix the distinguished elements of the matrix. Thus, a matrix is in
reduced row-echelon form if the distinguished elements are each equal to 1,
and they are the only nonzero entries in their respective columns.
The algorithm detailed in the proof of our next theorem introduces a tech-
nique generally known as Gaussian elimination.
Proof This is essentially obvious from Example 3.4. The detailed description
which follows is an algorithm for determining the reduced row-echelon form
of a matrix.
Suppose that we first put A into the form where the leading entry in each
nonzero row is equal to 1, and where every other entry in the column contain-
ing this first nonzero entry is equal to 0. (This is called simply the row-
reduced form of A.) If this can be done, then all that remains is to perform a
finite number of row interchanges to achieve the final desired reduced row-
echelon form.
To obtain the row-reduced form we proceed as follows. First consider row
1. If every entry in row 1 is equal to 0, then we do nothing with this row. If
row 1 is nonzero, then let j_1 be the smallest positive integer for which a_{1j_1} ≠ 0
and multiply row 1 by (a_{1j_1})⁻¹. Next, for each i ≠ 1 we add -a_{ij_1} times row 1 to
row i. This leaves us with the leading entry a_{1j_1} of row 1 equal to 1, and every
other entry in the j_1 th column equal to 0.
Now consider row 2 of the matrix we are left with. Again, if row 2 is equal
to 0 there is nothing to do. If row 2 is nonzero, assume that the first nonzero
entry occurs in column j_2 (where j_2 ≠ j_1 by the results of the previous para-
graph). Multiply row 2 by (a_{2j_2})⁻¹ so that the leading entry in row 2 is equal to
1, and then add -a_{ij_2} times row 2 to row i for each i ≠ 2. Note that these opera-
tions have no effect on either column j_1, or on columns 1, . . . , j_1 of row 1.
It should now be clear that we can continue this process a finite number of
times to achieve the final row-reduced form. We leave it to the reader to take
an arbitrary matrix (a_{ij}) and apply successive elementary row transformations
to achieve the desired final form.
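The algorithm just described is short enough to state as code. The sketch below (an illustration using exact fractions, not the author's notation) carries a matrix to reduced row-echelon form using the three elementary operations:

```python
from fractions import Fraction

def rref(matrix):
    """Return the reduced row-echelon form of `matrix` (a list of rows)."""
    M = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for j in range(cols):
        # find a row at or below pivot_row with a nonzero entry in column j
        pivot = next((r for r in range(pivot_row, rows) if M[r][j] != 0), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]      # interchange (type alpha)
        lead = M[pivot_row][j]
        M[pivot_row] = [x / lead for x in M[pivot_row]]      # scale (type beta)
        for r in range(rows):                                # clear column j (type gamma)
            if r != pivot_row and M[r][j] != 0:
                factor = M[r][j]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

print(rref([[1, 3, 2, 7], [2, 1, -1, 5], [-1, 2, 3, 4]]))
```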
While we have shown that every matrix is row equivalent to at least one
reduced row-echelon matrix, it is not obvious that this equivalence is unique.
However, we shall show in the next section that this reduced row-echelon
matrix is in fact unique. Because of this, the reduced row-echelon form of a
matrix is often called the row canonical form.
Exercises
1. Show that row equivalence defines an equivalence relation on the set of all
matrices.
2. For each of the following matrices, first reduce to row-echelon form, and
then to row canonical form:
" 1 !3 !1 !2 %
$ '
$ 0 !1 !5 !3'
(c)!!
$ 2 !5 !!3 !1 '
$ '
# 4 !1 !!1 !5 &
3. For each of the following systems, find a solution or show that no solution
exists:
(g)    2x -  y + 5z = 19
x + 5y ! 3z = 4
3x + 2y + 4z = 25
4. Let f_1, f_2 and f_3 be elements of F[ℝ] (i.e., the space of all real-valued func-
tions defined on ℝ).
(a) Given a set {x_1, x_2, x_3} of real numbers, define the 3 x 3 matrix F(x) =
(f_i(x_j)) where the rows are labelled by i and the columns are labelled by j.
Prove that the set {f_i} is linearly independent if the rows of the matrix F(x)
are linearly independent.
(b) Now assume that each f_i has first and second derivatives defined on
some interval (a, b) ⊂ ℝ, and let f_i^{(j)} denote the jth derivative of f_i (where
f_i^{(0)} is just f_i). Define the matrix W(x) = (f_i^{(j-1)}(x)) where 1 ≤ i, j ≤ 3.
Prove that {f_i} is linearly independent if the rows of W(x) are independent
for some x ∈ (a, b). (The determinant of W(x) is called the Wronskian of
the set of functions {f_i}.)
5. Let
" 3 !1 !2 %
$ '
A = $ 2 !!1 !1 '!!.
$ '
# 1 !3 !0 &
" !3 !6 2 !1%
$ '
$ !2 !4 1 !3 '
A= !!.
$ !0 !0 1 !1 '
$ '
# !1 !2 1 !0 &
We will find it extremely useful to consider the rows and columns of an arbi-
trary m x n matrix as vectors in their own right. In particular, the rows of A
are to be viewed as vector n-tuples A_1, . . . , A_m where each A_i = (a_{i1}, . . . ,
a_{in}) ∈ F^n. Similarly, the columns of A are to be viewed as vector m-tuples
A^1, . . . , A^n where each A^j = (a_{1j}, . . . , a_{mj}) ∈ F^m. For notational clarity, we
should write A^j as the column vector

        [ a_{1j} ]
        [   ⋮    ]
        [ a_{mj} ]
but it is typographically easier to write this horizontally whenever possible.
Note that we label the row vectors of A by subscripts, and the columns of A
by superscripts.
Since each row A_i is an element of F^n, the set of all rows of a matrix can
be used to generate a new vector space V over F. In other words, V is the
space spanned by the rows A_i, and hence any v ∈ V may be written as

        v = Σ_{i=1}^m c_i A_i
Theorem 3.4 Let A and Ã be row equivalent m x n matrices. Then the row
space of A is equal to the row space of Ã, and hence rr(A) = rr(Ã).
Furthermore, we also have cr(A) = cr(Ã). (However, note that the column
space of A is not necessarily the same as the column space of Ã.)

Proof Let V be the row space of A and Ṽ the row space of Ã. Since A and Ã
are row equivalent, Ã may be obtained from A by applying successive ele-
mentary row operations. But then each row of Ã is a linear combination of
rows of A, and hence Ṽ ⊆ V. On the other hand, A may be obtained from Ã
in a similar manner so that V ⊆ Ṽ. Therefore V = Ṽ and hence rr(A) = rr(Ã).
Now let W be the column space of A and W̃ the column space of Ã.
Under elementary row operations, it will not be true in general that W = W̃,
but we will show it is still always true that dim W = dim W̃. Let us define the
mapping f: W → W̃ by
"n % n
f $$! ci Ai '' = ! ci A! i !!.
# i=1 & i=1
In other words, if we are given any linear combination of the columns of A,
then we look at the same linear combination of columns of Ã. In order to
show that this is well-defined, we must show that if Σ_i a_i A^i = Σ_i b_i A^i, then
f(Σ_i a_i A^i) = f(Σ_i b_i A^i). This is equivalent to showing that if Σ_i c_i A^i = 0 then
f(Σ_i c_i A^i) = 0, because if Σ_i (a_i - b_i)A^i = 0 and f(Σ_i (a_i - b_i)A^i) = 0, then
        A_i = (a_{i1}, . . . , a_{in})

and

                [ a_{1i} ]
        A^i  =  [   ⋮    ]  .
                [ a_{mi} ]
Since

        cA_j = (ca_{j1}, . . . , ca_{jn}) ,

if

        Σ_i c_i A^i = 0

then

        Σ_i c_i a_{ji} = 0

for every j = 1, . . . , m and hence we see that f(Σ_i c_i A^i) = 0. This shows that f is
well-defined for type β transformations. Conversely, if

        f(Σ_i c_i A^i) = 0

then we see that again

        Σ_i c_i a_{ji} = 0
        Σ_i c_i Ã^i = 0

then

        Σ_i c_i a_{ji} = 0
for j = 2, . . . , m, and this then shows that Σ_i c_i a_{1i} = 0 also. Thus Σ_i c_i Ã^i = 0
implies that Σ_i c_i A^i = 0, and hence Σ_i c_i A^i = 0 if and only if f(Σ_i c_i A^i) = 0. This
shows that Ker f = {0} for type γ transformations also, and f is well-defined.
In summary, by constructing an explicit isomorphism in each case, we
have shown that the column spaces W and W̃ are isomorphic under all three
types of elementary row operations, and hence it follows that the column
spaces of row equivalent matrices must have the same dimension.
Proof This was shown explicitly in the proof of Theorem 3.4 for each type of
elementary row operation.
Proof From the four properties of a reduced row-echelon matrix, we see that
if R has r nonzero rows, then there exist integers j_1, . . . , j_r with each j_i ≤ n
and j_1 < ··· < j_r such that R has a 1 in the ith row and j_i th column, and every
other entry in the j_i th column is 0 (it may help to refer to Example 3.5 for
visualization). If we denote these nonzero row vectors by R_1, . . . , R_r then any
arbitrary vector

        v = Σ_{i=1}^r c_i R_i

has c_i as its j_i th coordinate (note that v may have more than r coordinates if r <
n). Therefore, if v = 0 we must have each coordinate of v equal to 0, and
hence c_i = 0 for each i = 1, . . . , r. But this means that the R_i are linearly
independent, and since {R_i} spans the row space by definition, we see that
they must in fact form a basis.
Proof In Theorem 3.4 we showed that A and R have the same row space.
The corollary now follows from Theorem 3.5.
Example 3.6 Let us determine whether or not the following matrices have
the same row space:
" 1 !2 !1 !3 %
$ ' " 1 2 !4 11%
A = $ 2 !4 !!1 !2 '!!!!!!!!!!!!!!!!!B = $ '!!.
$ ' # 2 4 !5 14 &
# 3 !6 !3 !7 &
We leave it to the reader to show (and you really should do it) that the reduced
row-echelon form of these matrices is
" 1 !2 !0 !!1 / 3 %
$ ' " 1 !2 !0 !1 / 3 %
A = $ 0 !0 !1 !8 / 3'!!!!!!!!!!!!!!!!!B = $ '!!.
$ ' # 0 !0 !1 !8 / 3&
# 0 !0 !0 !!0 &
Since the nonzero rows of the reduced row-echelon form of A and B are
identical, the row spaces must be the same.
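A computer algebra system gives the same conclusion. This sketch (using SymPy's rref routine; it is an illustration, not part of the original example) reduces both matrices and compares their nonzero rows:

```python
from sympy import Matrix

A = Matrix([[1, 2, -1,  3],
            [2, 4,  1, -2],
            [3, 6,  3, -7]])
B = Matrix([[1, 2, -4, 11],
            [2, 4, -5, 14]])

rA, _ = A.rref()   # reduced row-echelon form of A
rB, _ = B.rref()

print(rA)          # Matrix([[1, 2, 0, 1/3], [0, 0, 1, -8/3], [0, 0, 0, 0]])
print(rB)          # Matrix([[1, 2, 0, 1/3], [0, 0, 1, -8/3]])
```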
Now that we have a better understanding of the row space of a matrix, let
us go back and show that the reduced row-echelon form of a given matrix is
unique. We first prove a preliminary result dealing with the row-echelon form
of two matrices having the same row space.
Theorem 3.4). But b_{ij} = 0 for every i, and hence a_{1j} = 0 which contradicts the
assumption that a_{1j} is a distinguished element of A (and must be nonzero by
definition). We are thus forced to conclude that j ≥ k. However, we could
clearly have started with the assumption that k < j, in which case we would
have been led to conclude that k ≥ j. This shows that we must actually have
j = k.
Now let A′ and B′ be the matrices which result from deleting the first row
of A and B respectively. If we can show that A′ and B′ have the same row
space, then they will also satisfy the hypotheses of the theorem, and our con-
clusion follows at once by induction.
Let R = (a_1, a_2, . . . , a_n) be any row of A′ (and hence a row of A), and let
B_1, . . . , B_m be the rows of B. Since A and B have the same row space, we
again have

        R = Σ_{i=1}^m d_i B_i

for some set of scalars d_i. Since R is not the first row of A and A is in row-
echelon form, it follows that a_i = 0 for i ≤ j = k. In addition, the fact that B is
in row-echelon form means that every entry in the kth column of B must be 0
except for the first, i.e., b_{1k} ≠ 0, b_{2k} = ··· = b_{mk} = 0. But then

        0 = a_k = Σ_{i=1}^m d_i b_{ik} = d_1 b_{1k}

which implies that d_1 = 0 since b_{1k} ≠ 0. This shows that R is actually a linear
combination of the rows of B′, and hence (since R was arbitrary) the row
space of A′ must be a subspace of the row space of B′. This argument can
clearly be repeated to show that the row space of B′ is a subspace of the row
space of A′, and hence we have shown that A′ and B′ have the same row
space.
Proof Since it is obvious that A and B have the same row space if they have
the same nonzero rows, we need only prove the converse. So, suppose that A
and B have the same row space. Then if A_i is an arbitrary nonzero row of A,
we may write

        A_i = Σ_r c_r B_r                                    (1)

where the B_r are the nonzero rows of B. The proof will be finished if we can
show that c_r = 0 for r ≠ i and c_i = 1.
To show that c_i = 1, let a_{ij_i} be the first nonzero entry in A_i, i.e., a_{ij_i} is the
distinguished element of the ith row of A. Looking at the j_i th component of
(1) we see that

        a_{ij_i} = Σ_r c_r b_{rj_i}                          (2)

(see the proof of Theorem 3.4). From Theorem 3.6 we know that b_{ij_i} is the
distinguished element of the ith row of B, and hence it is the only nonzero
entry in the j_i th column of B (by definition of a reduced row-echelon matrix).
This means that (2) implies a_{ij_i} = c_i b_{ij_i}. In fact, it must be true that a_{ij_i} = b_{ij_i} =
1 since A and B are reduced row-echelon matrices, and therefore c_i = 1.
Now let b_{kj_k} be the first nonzero entry of B_k (where k ≠ i). From (1) again
we have

        a_{ij_k} = Σ_r c_r b_{rj_k} .                        (3)
Suppose that two people are given the same matrix A and asked to trans-
form it to reduced row-echelon form R. The chances are quite good that they
will each perform a different sequence of elementary row operations to
achieve the desired result. Let R and R′ be the reduced row-echelon matrices
that our two students obtain. We claim that R = R′. Indeed, since row equiva-
lence defines an equivalence relation, we see from Theorem 3.4 that the row
spaces of R and R′ will be the same. Therefore Theorem 3.7 shows us that the
rows of R must equal the rows of R′. Hence we are justified in calling the
reduced row-echelon form of a matrix the row canonical form as mentioned
earlier.
Exercises
1. In the proof of Theorem 3.4, show that Ker f = {0} for a type α operation.
2. Determine whether or not the following matrices have the same row
space:
" 1 !1 3 %
"1 !2 !1% "1 !1 2 % $ '
A=$ '!!!!!!!!B = $ '!!!!!!!!C = $ 2 !1 10 '!!.
# 3 !4 5 & # 3 3 !1& $ '
# 3 !5 1 &
5. (a) Suppose we are given an m x n matrix A = (a_{ij}), and suppose that one
of the columns of A, say A^i, is a linear combination of the others. Show
that under any elementary row operation resulting in a new matrix Ã, the
column Ã^i is the same linear combination of the columns of Ã that A^i is of
the columns of A. In other words, show that all linear relations between
columns are preserved by elementary row operations.
(b) Use this result to give another proof of Theorem 3.4.
(c) Use this result to give another proof of Theorem 3.7.
It is important for the reader to realize that there is nothing special about the
rows of a matrix. Everything that we have done up to this point in discussing
elementary row operations could just as easily have been done with columns
instead. In particular, this means that Theorems 3.4 and 3.5 are equally valid
for column spaces if we apply our elementary transformations to columns
instead of rows. This observation leads us to our next fundamental result.
Our next theorem forms the basis for a practical method of finding the rank of
a matrix.
Theorem 3.9 If A is any matrix, then r(A) is equal to the number of nonzero
rows in the (reduced) row-echelon matrix row equivalent to A. (Alternatively,
r(A) is the number of nonzero columns in the (reduced) column-echelon
matrix column equivalent to A.)
Proof Noting that the number of nonzero rows in the row-echelon form is the
same as the number of nonzero rows in the reduced row-echelon form, we see
that this theorem follows directly from the corollary to Theorem 3.5.
              [ 1  0  0  ···  0 ]
              [ 0  1  0  ···  0 ]
        I  =  [ ⋮  ⋮  ⋮       ⋮ ]  .
              [ 0  0  0  ···  1 ]
Proof This follows from the definition of a reduced row-echelon matrix and
Theorem 3.9.
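In practice this is exactly how the rank is found. As a brief illustration (not part of the text; the matrix is the one from Example 3.6, and NumPy's built-in rank routine is used in place of an explicit row reduction):

```python
import numpy as np

A = np.array([[1, 2, -1,  3],
              [2, 4,  1, -2],
              [3, 6,  3, -7]])

# Equals the number of nonzero rows in the (reduced) row-echelon form of A
print(np.linalg.matrix_rank(A))   # 2
```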
To do this, we will apply Theorem 3.9 to columns instead of rows (just for
variety). Proceeding with the elementary transformations, we obtain the fol-
lowing sequence of matrices:
" !!1 !!0 !!0 %
$ '
$ !!2 !3 !!6 '
$ !2 !!3 !3 '
$ '
# !1 !!6 !5 &
A2-2A1 A3 + 3A1
! 1 !!0 0 $
# &
#0 !!1 0 &
# 0 !!0 1 &
# &
" 3 !!1 / 3 7 / 3%
A1 + 2A2 -(A2 - A3)
!1 0 0$
# &
#0 1 0&
#0 0 1&
# &
"0 0 0%
Exercises
3. Using elementary row operations, find the rank of each of the following
matrices:
" !!1 !3 %
$ ' " 5 !1 !!1%
$ !!0 !2 ' $ '
(c)!! (d)!!$ 2 !!1 !2 '
$ !!5 !1' $ '
$ ' # 0 !7 12 &
# !2 !!3&
We now apply the results of the previous section to the determination of some
general characteristics of the solution set to systems of linear equations. We
will have more to say on this subject after we have discussed determinants in
the next chapter.
To begin with, a system of linear equations of the form
        Σ_{j=1}^n a_{ij} x_j = 0 ,        i = 1, . . . , m
        Y = (y_1, . . . , y_m) ∈ F^m .
From our discussion in the proof of Theorem 3.4, we see that a_{ij}x_j is just x_j
times the ith component of the jth column A^j ∈ F^m. Thus our system of non-
homogeneous equations may be written in the form
Proof Let us write our system as Σ_j a_{ij}x_j = 0. We first note that S ≠ ∅ since
(0, . . . , 0) ∈ F^n is the trivial solution of our system. If u = (u_1, . . . , u_n) ∈ F^n
and v = (v_1, . . . , v_n) ∈ F^n are both solutions, then

        Σ_j a_{ij}(u_j + v_j) = Σ_j a_{ij}u_j + Σ_j a_{ij}v_j = 0

so that u + v ∈ S. Similarly, for any scalar c we have

        Σ_j a_{ij}(cu_j) = c Σ_j a_{ij}u_j = 0

so that cu ∈ S.
Proof By writing the system in the form Σ_j x_j A^j = 0, it is clear that a non-
trivial solution exists if and only if the n column vectors A^j ∈ F^m are linearly
dependent. Since the rank of A is equal to the dimension of its column space,
we must therefore have r(A) < n.
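So a nontrivial solution exists exactly when r(A) < n. A brief SymPy check (illustrative only; the matrix here is an arbitrary example, not one from the text) finds a basis of the solution space directly:

```python
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [2, 4, -2]])      # r(A) = 1 < n = 3, so nontrivial solutions exist

print(A.rank())               # 1
for v in A.nullspace():       # basis vectors of the solution space of AX = 0
    print(v.T)
```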
(note the upper limit on this sum differs from the previous equation). Next we
observe that the solution set S consists of all vectors x ∈ F^n such that

        Σ_{j=1}^n x_j A^j = 0
        b^{(i)} = (b_{i1}, . . . , b_{in})
if and only if c_{k+1} = ··· = c_n = 0. This shows that the b^{(i)} are linearly inde-
pendent. (This should have been obvious from their form shown above.)
Now suppose that d = (d_1, . . . , d_n) is any solution of

        Σ_{j=1}^n x_j A^j = 0 .
and since {A^1, . . . , A^k} is linearly independent, this implies that y_j = 0 for
each j = 1, . . . , k. Hence y = 0 so that

        d = - Σ_{i=k+1}^n d_i b^{(i)}
and we see that any solution may be expressed as a linear combination of the
b^{(i)}.
Since the b^{(i)} are linearly independent and we just showed that they span
S, they must form a basis for S.
where the entries b_{i k+1}, . . . , b_{in} are the values of the remaining unknowns
in the solution vector v_i. Since the matrix C is in row-echelon form, its rows
are independent and hence r(C) = k. However, C is column-equivalent to B,
and therefore r(B) = k also (by Theorem 3.4 applied to columns). But the rows
of B consist precisely of the k solution vectors v_i, and thus these solution vec-
tors must be independent as claimed.
         x + 2y - 4z + 3w -  t = 0
         x + 2y - 2z + 2w +  t = 0
        2x + 4y - 2z + 3w + 4t = 0

This system reduces to the equivalent system

         x + 2y - 4z + 3w -  t = 0
                   2z -  w + 2t = 0                        (*)
It is obvious that the rank of the matrix of coefficients is 2, and hence the
dimension of the solution space is 5 - 2 = 3. The free variables are clearly y,
w and t. The solution vectors v_i are obtained by choosing (y = 1, w = 0, t = 0),
(y = 0, w = 1, t = 0) and (y = 0, w = 0, t = 1). Using each of these in the
system (*), we obtain the solutions
        v_1 = (-2, 1, 0, 0, 0)
        v_2 = (-1, 0, 1/2, 1, 0)
        v_3 = (-3, 0, -1, 0, 1)
Thus the vectors v_1, v_2 and v_3 form a basis for the solution space of the homo-
geneous system.
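It is easy to confirm this numerically. The following sketch (not in the original text; the coefficients are those of the homogeneous system above) checks that each of v_1, v_2, v_3 satisfies all three equations:

```python
import numpy as np

# Coefficient matrix of the homogeneous system in Example 3.8
A = np.array([[1, 2, -4, 3, -1],
              [1, 2, -2, 2,  1],
              [2, 4, -2, 3,  4]], dtype=float)

basis = [np.array([-2, 1,  0.0, 0, 0]),
         np.array([-1, 0,  0.5, 1, 0]),
         np.array([-3, 0, -1.0, 0, 1])]

for v in basis:
    print(A @ v)        # each product is the zero vector
```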
We emphasize that the corollary to Theorem 3.4 shows us that the solution
set of a homogeneous system of equations is unchanged by elementary row
operations. It is this fact that allows us to proceed as we did in Example 3.8.
We now turn our attention to the solutions of a nonhomogeneous system
of equations Σ_j a_{ij}x_j = y_i .
        Σ_j c_j A^j = Y

        u + S = {u + v : v ∈ S}

        Σ_j a_{ij}(w_j - u_j) = Σ_j a_{ij}w_j - Σ_j a_{ij}u_j = y_i - y_i = 0
Example 3.9 Let us find the complete solution set over the real numbers of
the nonhomogeneous system
The first thing we must do is determine r(A). Since the proof of Theorem 3.13
dealt with columns, we choose to construct a new matrix B by applying ele-
mentary column operations to A. Thus we define
" 1 !1 !!!3 !1 %
$ '
$ 0 !4 !!7 !!7 '!!.
$ 0 !8 !14 14 '
$ '
# 0 12 !21 21 &
It is now clear that the first two rows of this matrix are independent, and that
the third and fourth rows are each multiples of the second. Therefore r(A) = 2
as above.)
We now follow the first part of the proof of Theorem 3.13. Observe that
since r(A) = 2 and the first two columns of A are independent, we may write
        A^3 = (5/4)A^1 - (7/4)A^2

and

        A^4 = (3/4)A^1 + (7/4)A^2 .
which are independent solutions of the homogeneous system and span the
solution space S. Therefore the general solution set to the nonhomogeneous
system is given by
Exercises
1. Find the dimension and a basis for the solution space of each of the fol-
lowing systems of linear equations over ℝ:

        U = {(a, b, c, d) ∈ ℝ^4 : b + c + d = 0}
        V = {(a, b, c, d) ∈ ℝ^4 : a + b = 0 and c = 2d} .
3. Find the complete solution set of each of the following systems of linear
equations over ℝ:
        (A + B)_{ij} = a_{ij} + b_{ij}

obtained by adding the corresponding entries of each matrix. Note that both A
and B must be of the same size. We also say that A equals B if a_{ij} = b_{ij} for all
i and j. It is obvious that
A+B = B+A
and that
A + (B + C) = (A + B) + C
for any other m x n matrix C. We also define the zero matrix 0 as that matrix
for which A + 0 = A. In other words, (0)_{ij} = 0 for every i and j. Given a matrix
A = (a_{ij}), we define its negative (or additive inverse)

        -A = (-a_{ij})

such that A + (-A) = 0. Finally, for any scalar c we define the product of c and
A to be the matrix

        cA = (ca_{ij}) .
where the m x n matrix E_{ij} is defined as having a 1 in the (i, j)th position and
0s elsewhere, and there are clearly mn such matrices. We denote the space of
all m x n matrices over the field F by M_{m×n}(F). The particular case of m = n
defines the space M_n(F) of all square matrices of size n. We will often refer
to a matrix in M_n(F) as an n-square matrix.
Now let A ∈ M_{m×n}(F) be an m x n matrix, B ∈ M_{r×m}(F) be an r x m
matrix, and consider the two systems of linear equations

        Σ_{j=1}^n a_{ij} x_j = y_i ,        i = 1, . . . , m

and

        Σ_{j=1}^m b_{ij} y_j = z_i ,        i = 1, . . . , r

Substituting the first of these into the second, we obtain

        z_i = Σ_j b_{ij} y_j = Σ_j Σ_k b_{ij} a_{jk} x_k = Σ_k c_{ik} x_k

where c_{ik} = Σ_j b_{ij} a_{jk}.
Thus the (i, k)th entry of C = BA is given by the standard scalar product
        (BA)_{ik} = B_i · A^k

of the ith row of B with the kth column of A (where both are considered as
vectors in F^m). Note that matrix multiplication is generally not commutative,
i.e., AB ≠ BA. Indeed, the product AB may not even be defined.
" 1 6 !2 % " 2 !9 %
$ ' $ '
A =!$ 3 4 !5 '!!!!!!!!!!!!!!!!!!!!B =!$ 6 !!1'!!.
$ ' $ '
# 7 0 !8 & # 1 !3&
" 36 !!!3 %
$ '
=!$ 35 !38 '!!.
$ '
# 22 !87 &
For example, if

              [ 1  2 ]                    [ 0  1 ]
        A  =  [ 3  4 ]        and   B  =  [ 1  0 ]

then

               [ 1  2 ] [ 0  1 ]     [ 2  1 ]
        AB  =  [ 3  4 ] [ 1  0 ]  =  [ 4  3 ]

while

               [ 0  1 ] [ 1  2 ]     [ 3  4 ]
        BA  =  [ 1  0 ] [ 3  4 ]  =  [ 1  2 ]  ≠  AB .
Example 3.11 Two other special cases of matrix multiplication are worth ex-
plicitly mentioning. Let X ∈ F^n be the column vector
              [ x_1 ]
        X  =  [  ⋮  ]  .
              [ x_n ]
               [ a_{11} ]                [ a_{1n} ]
        AX  =  [   ⋮    ] x_1 + ··· +    [   ⋮    ] x_n
               [ a_{m1} ]                [ a_{mn} ]
                                  [ a_{11}  ···  a_{1n} ]
        YA  =  (y_1, . . . , y_m) [   ⋮             ⋮   ]
                                  [ a_{m1}  ···  a_{mn} ]

            =  (y_1 a_{11} + ··· + y_m a_{m1}, . . . , y_1 a_{1n} + ··· + y_m a_{mn})

            =  (Y·A^1, . . . , Y·A^n) .

This again yields the expected form of the product with entries Y·A^i.
This example suggests the following commonly used notation for systems
of linear equations. Consider the system
        Σ_{j=1}^n a_{ij} x_j = y_i

and define the column vectors

              [ x_1 ]                           [ y_1 ]
        X  =  [  ⋮  ]  ∈ F^n       and    Y  =  [  ⋮  ]  ∈ F^m .
              [ x_n ]                           [ y_m ]

Then this system may be written as the single matrix equation

        AX = Y .

Note that the ith row vector of A is A_i = (a_{i1}, . . . , a_{in}), so that the expression
Σ_j a_{ij} x_j = y_i may be written as the standard scalar product

        A_i · X = y_i .
        AI = IA = A .
Even if A and B are both square matrices (i.e., matrices of the form m x m),
the product AB will not generally be the same as BA unless A and B are
diagonal matrices (see Exercise 3.6.4). However, we do have the following.
Theorem 3.17 For matrices of proper size (so that these operations are
defined), we have:
(a) (AB)C = A(BC) (associative law).
(b) A(B + C) = AB + AC (left distributive law).
(c) (B + C)A = BA + CA (right distributive law).
(d) k(AB) = (kA)B = A(kB) for any scalar k.
Proof (a)  [(AB)C]_{ij} = Σ_k (AB)_{ik} c_{kj} = Σ_{r,k} (a_{ir} b_{rk}) c_{kj} = Σ_{r,k} a_{ir} (b_{rk} c_{kj})
                        = Σ_r a_{ir} (BC)_{rj} = [A(BC)]_{ij} .
then A^T is given by

        [ 1  4 ]
        [ 2  5 ]  .
        [ 3  6 ]
We now wish to relate this algebra to our previous results dealing with the
rank of a matrix. Before doing so, let us first make some elementary observa-
tions dealing with the rows and columns of a matrix product. Assume that
A ∈ M_{m×n}(F) and B ∈ M_{n×r}(F) so that the product AB is defined. Since the
(i, j)th entry of AB is given by (AB)_{ij} = Σ_k a_{ik} b_{kj}, we see that the ith row of AB
is given by a linear combination of the rows of B:

        (AB)_i = (Σ_k a_{ik} b_{k1}, . . . , Σ_k a_{ik} b_{kr}) = Σ_k a_{ik} B_k .

Similarly, for the columns of a product we find that the jth column of AB is a
linear combination of the columns of A:

                   [ Σ_k a_{1k} b_{kj} ]     [ a_{11}  ···  a_{1n} ] [ b_{1j} ]
        (AB)^j  =  [        ⋮          ]  =  [   ⋮             ⋮   ] [   ⋮    ]  =  AB^j .
                   [ Σ_k a_{mk} b_{kj} ]     [ a_{m1}  ···  a_{mn} ] [ b_{nj} ]
Theorem 3.20 If A and B are any matrices for which the product AB is
defined, then the row space of AB is a subspace of the row space of B, and the
column space of AB is a subspace of the column space of A.
Proof Using (AB)_i = Σ_k a_{ik} B_k, we see that the ith row of AB is in the space
spanned by the rows of B, and hence the row space of AB is a subspace of the
row space of B.
Now note that the column space of AB is just the row space of (AB)^T =
B^T A^T, which is a subspace of the row space of A^T by the first part of the
theorem. But the row space of A^T is just the column space of A.
Proof Let V_A be the row space of A, and let W_A be the column space of A.
Then

        r(AB) = dim V_{AB} ≤ dim V_B = r(B)
while

        r(AB) = dim W_{AB} ≤ dim W_A = r(A) .
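A quick numerical illustration of the inequality r(AB) ≤ min{r(A), r(B)} (the random matrices below are arbitrary examples, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 3)).astype(float)
B = rng.integers(-3, 4, size=(3, 5)).astype(float)

rA  = np.linalg.matrix_rank(A)
rB  = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)

print(rA, rB, rAB)
print(rAB <= min(rA, rB))    # True
```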
Exercises
        [ a_{11}    0      0    ···    0    ]
        [   0     a_{22}   0    ···    0    ]
        [   ⋮       ⋮      ⋮            ⋮   ]
        [   0       0      0    ···  a_{nn} ]
(b) Prove that the only n x n matrices which commute with every n x n
diagonal matrix are diagonal matrices.
              [ 0  1  0  0  ···  0  0 ]
              [ 0  0  1  0  ···  0  0 ]
              [ 0  0  0  1  ···  0  0 ]
        A  =  [ ⋮  ⋮  ⋮  ⋮       ⋮  ⋮ ]  .
              [ 0  0  0  0  ···  0  1 ]
              [ 0  0  0  0  ···  0  0 ]
12. Find a basis {A_i} for the space M_n(F) that consists only of matrices with
the property that A_i² = A_i (such matrices are called idempotent or
projections). [Hint: The matrices
        [ 1  0 ]      [ 1  1 ]      [ 0  0 ]      [ 0  0 ]
        [ 0  0 ]      [ 0  0 ]      [ 0  1 ]      [ 1  1 ]
13. Show that it is impossible to find a basis for M_n(F) such that every pair
of matrices in the basis commutes with each other.
14. (a) Show that the set of all nonsingular n x n matrices forms a spanning
set for M_n(F). Exhibit a basis of such matrices.
(b) Repeat part (a) with the set of all singular matrices.
15. Show that the set of all matrices of the form AB - BA does not span
M_n(F). [Hint: Use the trace.]
Theorem 3.21 A matrix A ∈ M_n(F) has a right (left) inverse if and only if A
is nonsingular. This right (left) inverse is also a left (right) inverse, and hence
is an inverse of A.
        (AB)^j = AB^j = E^j
Proof The fact that A and B are nonsingular means that A⁻¹ and B⁻¹ exist.
We therefore see that

        (B⁻¹A⁻¹)(AB) = B⁻¹IB = B⁻¹B = I

Similarly, since (A⁻¹)^T A^T = (AA⁻¹)^T = I^T = I, the uniqueness of the inverse
tells us that (A^T)⁻¹ = (A⁻¹)^T. Note this also shows that A^T is nonsingular.
A major problem that we have not yet discussed is how to actually find the
inverse of a matrix. One method involves the use of determinants as we will
see in the next chapter. However, let us show another approach based on the
fact that a nonsingular matrix is row-equivalent to the identity matrix
(Theorem 3.10). This method has the advantage that it is algorithmic, and
hence is easily implemented on a computer.
Since the jth column of a product AB is AB^j, we see that considering the
particular case of AA⁻¹ = I leads to

        (AA⁻¹)^j = A(A⁻¹)^j = E^j
                  [ a_{11}  ···  a_{1n}   1 ]
        aug A  =  [ a_{21}  ···  a_{2n}   0 ]
                  [   ⋮            ⋮      ⋮ ]
                  [ a_{n1}  ···  a_{nn}   0 ]
        [ 1  0  0  ···  0   c_{11} ]
        [ 0  1  0  ···  0   c_{21} ]
        [ ⋮  ⋮  ⋮       ⋮     ⋮    ]
        [ 0  0  0  ···  1   c_{n1} ]
for some set of scalars c_{ij}. This means that the solution to the system is x_1 =
c_{11}, x_2 = c_{21}, . . . , x_n = c_{n1}. But X = (A⁻¹)^1 = the first column of A⁻¹, and
therefore this last matrix may be written as
" 1 ! 0 a !111 %
$ '
$" " " '!!.
$ 0 ! 1 a !1 '
# n1 &
coefficients is independent of this last column, it follows that we may solve all
n systems simultaneously by reducing the single matrix
        [ a_{11}  ···  a_{1n}   1  ···  0 ]
        [   ⋮            ⋮      ⋮       ⋮ ]  .
        [ a_{n1}  ···  a_{nn}   0  ···  1 ]
" !1 !2 !1 %
$ '
!$ !!0 !3 !2 '
$ '
# !!2 !1 !!0 &
We leave it as an exercise for the reader to show that the reduced row-echelon
form of
" !1 !2 !1 1 0 0%
$ '
$ !!0 !3 !2 0 1 0'
$ !!2 !1 !0 0 0 1 '&
#
is
"1 0 0 1 / 6 1 / 12 7 / 12 %
$ '
$0 1 0 1/ 3 1/ 6 1/ 6 '
$0 0 1 1 / 2 !1 / 4 1 / 4 '&
#
"1 / 6 !1 / 12 7 / 12 %
$ '
!$1 / 3 !!1 / 6 1 / 6 '!!.
$ '
#1 / 2 !1 / 4 1 / 4 &
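The reduction of [A | I] is easy to automate. This sketch (an illustration using SymPy, not the author's code) forms the augmented matrix for the matrix above, row reduces it, and reads off the inverse from the right-hand block:

```python
from sympy import Matrix, eye

A = Matrix([[-1,  2,  1],
            [ 0,  3, -2],
            [ 2, -1,  0]])

# Row reduce the augmented matrix [A | I]; the right half becomes the inverse
aug = A.row_join(eye(3))
R, _ = aug.rref()

A_inv = R[:, 3:]
print(A_inv)        # Matrix([[1/6, 1/12, 7/12], [1/3, 1/6, 1/6], [1/2, -1/4, 1/4]])
print(A * A_inv)    # the identity matrix
```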
Exercises
3. Show that a matrix is not invertible if it has any zero row or column.
5. Use the inverse of the matrix in Exercise 4(c) above to find the solutions
of each of the following systems:
        [ 1  2  3  4 ]
        [ 0  2  3  4 ]
        [ 0  0  3  4 ]  .
        [ 0  0  0  4 ]
9. (a) Let A be any 2 x 1 matrix, and let B be any 1 x 2 matrix. Prove that
AB is not invertible.
10. Summarize several of our results by proving the equivalence of the fol-
lowing statements for any n x n matrix A:
(a) A is invertible.
(b) The homogeneous system AX = 0 has only the zero solution.
(c) The system AX = Y has a solution X for every n x 1 matrix Y.
12. A matrix A is called a left zero divisor if there exists a nonzero matrix B
such that AB = 0, and A is called a right zero divisor if there exists a
nonzero matrix C such that CA = 0. If A is an m x n matrix, prove that:
(a) If m < n, then A is a left zero divisor.
(b) If m > n, then A is a right zero divisor.
(c) If m = n, then A is both a left and a right zero divisor if and only if A
is singular.
Proof We must verify this equation for each of the three types of elementary
row operations. First consider an operation of type α. In particular, let e be
the interchange of rows i and j. Then

        [e(A)]_k = A_k   for k ≠ i, j
while

        [e(A)]_i = A_j   and   [e(A)]_j = A_i .

On the other hand, the kth row of e(I)A is

        [e(I)A]_k = [e(I)]_k A .

For k ≠ i, j this gives

        [e(I)]_k A = I_k A = A_k

while

        [e(I)]_i A = I_j A = A_j   and   [e(I)]_j A = I_i A = A_i .

This verifies the theorem for transformations of type α. (It may be helpful for
the reader to write out e(I) and e(I)A to see exactly what is going on.)
There is essentially nothing to prove for type β transformations, so we go
on to those of type γ. Hence, let e be the addition of c times row j to row i.
Then

        [e(I)]_k = I_k   for k ≠ i

and

        [e(I)]_i = I_i + cI_j .

Therefore

        [e(I)]_i A = (I_i + cI_j)A = A_i + cA_j = [e(A)]_i

and for k ≠ i

        [e(I)]_k A = I_k A = A_k = [e(A)]_k .
to row i. Thus all three types of elementary row operations have inverses
which are also elementary row operations.
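The relationship e(A) = e(I)A is easy to see numerically. In this sketch (illustrative NumPy code, not part of the text), each elementary matrix is built by applying the corresponding operation to the identity, and left multiplication then performs that operation on A:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# Elementary matrices obtained by applying each operation to the identity I
E_swap = np.eye(3); E_swap[[0, 1]] = E_swap[[1, 0]]    # interchange rows 1 and 2
E_scale = np.eye(3); E_scale[2, 2] = 5.0               # multiply row 3 by 5
E_add = np.eye(3); E_add[0, 2] = 2.0                   # add 2 times row 3 to row 1

print(E_swap @ A)    # rows 1 and 2 of A interchanged
print(E_scale @ A)   # row 3 of A multiplied by 5
print(E_add @ A)     # 2 * (row 3) added to row 1
```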
By way of nomenclature, a square matrix A = (a_{ij}) is said to be diagonal if
a_{ij} = 0 for i ≠ j. The most common example of a diagonal matrix is the identity
matrix.
        [e(I)]⁻¹ = e⁻¹(I) .
Proof By definition, e(I) is row equivalent to I and hence has the same rank
as I (Theorem 3.4). Thus e(I) is nonsingular since r(I) = n, and hence e(I)⁻¹
exists. Since it was shown above that e⁻¹ is an elementary row operation, we
apply Theorem 3.22 to the matrix e(I) to obtain

        e⁻¹(I)e(I) = e⁻¹(e(I)) = I

and similarly

        e(I)e⁻¹(I) = e(e⁻¹(I)) = I .
        [e(I)]_{ii} = 1 = [e(I)]_{jj}

        [e(I)]_{rs} = 0   if r ≠ s

and

        [e(I)]_{rr} = 1 .
We now come to the main result dealing with elementary matrices. For
ease of notation, we denote an elementary matrix by E rather than by e(I). In
other words, the result of applying the elementary row operation e to I will be
denoted by the matrix E = e(I).
so that

        A = E_1⁻¹ ··· E_r⁻¹ I = E_1⁻¹ ··· E_r⁻¹ .

The theorem now follows if we note that each E_i⁻¹ is an elementary matrix ac-
cording to Theorem 3.23 (since E_i⁻¹ = [e_i(I)]⁻¹ = e_i⁻¹(I) and e_i⁻¹ is an elementary
row operation).
Note this corollary provides another proof that the method given in the
previous section for finding A⁻¹ is valid.
There is one final important property of elementary matrices that we will
need in a later chapter. Let E be an n x n elementary matrix representing any
Now recall that the kth column of AB is given by (AB)^k = AB^k. We then have

        (AE)^k = A^k   for k ≠ j   and   (AE)^j = A^j + cA^i .

This is the same relationship as that found between the rows of EA where
(EA)_k = A_k and (EA)_i = A_i + cA_j (see the proof of Theorem 3.22).
Exercises
2. Write down 4 x 4 elementary matrices that will induce the following ele-
mentary operations in a 4 x 4 matrix when used as left multipliers. Verify
that your answers are correct.
(a) Interchange the 2nd and 4th rows of A.
(b) Interchange the 2nd and 3rd rows of A.
(c) Multiply the 4th row of A by 5.
(d) Add k times the 4th row of A to the 1st row of A.
(e) Add k times the 1st row of A to the 4th row of A.
3. Show that any e_α(A) may be written as a product of e_β(A)'s and e_γ(A)'s.
(The notation should be obvious.)
4. Pick any 4 x 4 matrix A and multiply it from the right by each of the ele-
mentary matrices found in the previous problem. What is the effect on A?
to the reduced row-echelon form R, and write the elementary matrix cor-
responding to each of the elementary row operations required. Find a
nonsingular matrix P such that PA = R by taking the product of these ele-
mentary matrices.
The remaining problems are all connected, and should be worked in the given
order.
              [ 1  0  ···  0  ···  0  0 ]
              [ 0  1  ···  0  ···  0  0 ]
              [ ⋮  ⋮       ⋮       ⋮  ⋮ ]
        C  =  [ 0  0  ···  1  ···  0  0 ]
              [ 0  0  ···  0  ···  0  0 ]
              [ ⋮  ⋮       ⋮       ⋮  ⋮ ]
              [ 0  0  ···  0  ···  0  0 ]
where the first k entries on the main diagonal are 1s, and the rest are 0s.
11. From the previous problem and Theorem 3.3, show that every m x n
matrix A of rank k can be reduced by elementary row and column opera-
tions to the form C. We call the matrix C the canonical form of A.
14. Prove that two m x n matrices A and B are r.c.e. if and only if there exists
a nonsingular m x m matrix P and a nonsingular n x n matrix Q such that
PAQ = B.