LA 3
The presence of any zero row or column will produce a zero determinant (this is easy
to verify: just use the Laplace expansion along that zero row or column).
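For instance (a one-line check on a generic matrix with a zero second row; any zero row or column works the same way), expanding along that row makes every term vanish:
$$\begin{vmatrix} a & b & c \\ 0 & 0 & 0 \\ d & e & f \end{vmatrix} = -0\begin{vmatrix} b & c \\ e & f \end{vmatrix} + 0\begin{vmatrix} a & c \\ d & f \end{vmatrix} - 0\begin{vmatrix} a & b \\ d & e \end{vmatrix} = 0.$$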
In the $(2 \times 2)$ case we obtained a zero determinant when one row was a multiple of the other (or the same as the other), and this remains true in the $(3 \times 3)$ case. A more subtle case is when one row is a linear combination of the other two (i.e., we can write one row as the sum of some multiple of a second row plus some multiple of the third). For instance, take the matrix
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 2 & 4 \\ 3 & 3 & 5 \end{pmatrix}$$
which you can verify has a zero determinant. Here the third row is equal to the first
row plus half of the second row:
$$(3 \;\; 3 \;\; 5) = (1 \;\; 2 \;\; 3) + \tfrac{1}{2}(4 \;\; 2 \;\; 4).$$
(Alternatively, we can say that the first row is the third row minus half of the second row:
$$(1 \;\; 2 \;\; 3) = (3 \;\; 3 \;\; 5) - \tfrac{1}{2}(4 \;\; 2 \;\; 4),$$
or that the second row is two times the third row minus two times the first.)
We say that the three row vectors that make up the matrix are “linearly dependent”.
This “linear dependence” between the rows leads to a zero determinant because it
means that row operations can be applied to ultimately create a zero row.
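As a quick numerical check (a minimal sketch, assuming the numpy library is available; A is the example matrix above):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 2.0, 4.0],
              [3.0, 3.0, 5.0]])

# The determinant is zero (up to floating-point rounding).
print(np.linalg.det(A))

# Subtracting (row 1 + half of row 2) from row 3 creates a zero row.
print(A[2] - (A[0] + 0.5 * A[1]))   # [0. 0. 0.]
```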
More generally, three vectors $a_1$, $a_2$, $a_3$ are linearly dependent if we can find constants $c_1$, $c_2$, $c_3$, not all zero, such that $c_1 a_1 + c_2 a_2 + c_3 a_3 = (0 \;\; 0 \;\; 0)$. If the only case where this is true is $c_1 = c_2 = c_3 = 0$, then the vectors are said to be linearly independent. For now, define the rank of a matrix as the number of linearly independent row vectors that it contains.
Examples
1. Let $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 2 & 1 & 3 \end{pmatrix}$. Let $a_1 = (1 \;\; 2 \;\; 3)$, $a_2 = (2 \;\; 4 \;\; 6)$, and $a_3 = (2 \;\; 1 \;\; 3)$.
Then these vectors are linearly dependent, since $2a_1 - 1a_2 + 0a_3 = (0 \;\; 0 \;\; 0)$.
The second row is just twice the first row (or the first is half of the second). Removing one of the dependent vectors, say $a_2 = (2 \;\; 4 \;\; 6)$, the remaining vectors $a_1$ and $a_3$ are linearly independent: the only constants $c_1$ and $c_2$ such that $c_1(1 \;\; 2 \;\; 3) + c_2(2 \;\; 1 \;\; 3) = (0 \;\; 0 \;\; 0)$ are $c_1 = c_2 = 0$. The matrix has two linearly independent row vectors, and is said to be of rank 2.
Different combinations of the three vectors will therefore result in new vectors that all
lie on a plane (a two-dimensional space), and cannot ‘span’ or cover the entire 3-d
space.
2. Let $A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 3 & 6 \\ 2 & 1 & 3 \end{pmatrix}$. Let $a_1 = (1 \;\; 2 \;\; 3)$, $a_2 = (3 \;\; 3 \;\; 6)$, and $a_3 = (2 \;\; 1 \;\; 3)$.
Then these vectors are linearly dependent, since $1a_1 - 1a_2 + 1a_3 = (0 \;\; 0 \;\; 0)$.
The second row “depends” on the other two in that it is the sum of $a_1$ and $a_3$.
3. Let $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}$. Let $a_1 = (1 \;\; 2 \;\; 3)$, $a_2 = (2 \;\; 4 \;\; 6)$, and $a_3 = (3 \;\; 6 \;\; 9)$.
Then these vectors are linearly dependent, since $1a_1 + 1a_2 - 1a_3 = (0 \;\; 0 \;\; 0)$. In this case, there is only one linearly independent vector. If we remove $a_3$, we find that the remaining two are still linearly dependent: $a_2$ is twice $a_1$. The rank of this matrix is one: $\operatorname{rank}(A) = 1$.
Geometrically, all three vectors lie on a single line. The vector $a_2$ is twice $a_1$, and $a_3$ is three times $a_1$. Combinations of these vectors therefore will only create new vectors that are also extensions of $a_1$:
$$c_1 a_1 + c_2 a_2 + c_3 a_3 = c_1 a_1 + c_2(2a_1) + c_3(3a_1) = (c_1 + 2c_2 + 3c_3)\,a_1.$$
4. Let $A = \begin{pmatrix} 0 & 0 & 0 \\ 2 & 4 & 6 \\ 2 & 1 & 3 \end{pmatrix}$. Let $a_1 = (0 \;\; 0 \;\; 0)$, $a_2 = (2 \;\; 4 \;\; 6)$, and $a_3 = (2 \;\; 1 \;\; 3)$.
Then these vectors are linearly dependent, since $c_1 a_1 + 0a_2 + 0a_3 = (0 \;\; 0 \;\; 0)$ for any $c_1$. Removing vector $a_1$ leaves two linearly independent vectors. The rank of this matrix is two.
5. Finally, take a matrix whose rows have no such relationship, for instance the identity matrix $A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. Then all three row vectors are linearly independent; it is impossible to find $c_1$, $c_2$ and $c_3$, not all zero, such that $c_1 a_1 + c_2 a_2 + c_3 a_3 = (0 \;\; 0 \;\; 0)$ (here that sum is just $(c_1 \;\; c_2 \;\; c_3)$). This matrix is of “full rank”.
A square matrix will have a non-zero determinant if and only if (‘iff’) it has full rank.
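The ranks and determinants in the examples above can be spot-checked numerically. Here is a sketch assuming numpy, whose matrix_rank function computes the rank (via a singular value decomposition):

```python
import numpy as np

examples = [
    np.array([[1, 2, 3], [2, 4, 6], [2, 1, 3]]),   # Example 1: rank 2
    np.array([[1, 2, 3], [3, 3, 6], [2, 1, 3]]),   # Example 2: rank 2
    np.array([[1, 2, 3], [2, 4, 6], [3, 6, 9]]),   # Example 3: rank 1
    np.array([[0, 0, 0], [2, 4, 6], [2, 1, 3]]),   # Example 4: rank 2
    np.eye(3),                                      # Example 5: rank 3 (full)
]
for A in examples:
    # Only the full-rank matrix has a non-zero determinant.
    print(np.linalg.matrix_rank(A), round(np.linalg.det(A), 10))
```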
There are many ways to determine the rank of a matrix. One way is to do “Gaussian Elimination” (the row operations) and see how many non-zero pivots you get (see Section 2 for “pivots”). Other ways will be discussed in later sections, or in more advanced classes. For the time being we will proceed by observation.
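To illustrate the Gaussian-elimination approach, here is a minimal pure-Python sketch (the function name rank_by_elimination is ours, not a library routine): it row-reduces a copy of the matrix and counts the non-zero pivots.

```python
def rank_by_elimination(rows, tol=1e-12):
    """Row-reduce a copy of the matrix and count the non-zero pivots."""
    A = [list(map(float, r)) for r in rows]   # work on a copy
    m, n = len(A), len(A[0])
    rank, pivot_row = 0, 0
    for col in range(n):
        # Find a row at or below pivot_row with a non-zero entry in this column.
        swap = next((r for r in range(pivot_row, m) if abs(A[r][col]) > tol), None)
        if swap is None:
            continue                          # no pivot in this column
        A[pivot_row], A[swap] = A[swap], A[pivot_row]
        # Eliminate the entries below the pivot.
        for r in range(pivot_row + 1, m):
            factor = A[r][col] / A[pivot_row][col]
            A[r] = [x - factor * p for x, p in zip(A[r], A[pivot_row])]
        rank += 1
        pivot_row += 1
        if pivot_row == m:
            break
    return rank

# The matrix from the start of this section has two non-zero pivots.
print(rank_by_elimination([[1, 2, 3], [4, 2, 4], [3, 3, 5]]))   # 2
```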
We conclude this section with several remarks regarding rank. Earlier, we viewed a
matrix as a stack of row vectors. The rank concept that we discussed could be called
‘row rank’. But we can also view a matrix as the concatenation (joining together) of
three column vectors
$$A = (a_1 \;\; a_2 \;\; a_3) = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
1. Everything that we said here follows for column vectors as for row vectors, and
in fact a matrix will have the same number of linearly independent column vectors as
there are linearly independent row vectors: if A has two linearly independent row
vectors, it will also have two linearly independent column vectors; if it has only one
linearly independent row vector, then it will have only one linearly independent
column vector, and so on. A matrix’s row rank is the same as its “column rank”.
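A quick numerical illustration of this fact (a sketch assuming numpy): transposing a matrix swaps its rows and columns, yet the rank is unchanged.

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 4, 6], [2, 1, 3]])   # Example 1: row rank 2
print(np.linalg.matrix_rank(A))      # 2, from the row point of view
print(np.linalg.matrix_rank(A.T))    # 2, from the column point of view
```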
2. We can also speak of the rank of non-square matrices. Take for example
$$A = \begin{pmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \end{pmatrix}$$
This is in fact the last two rows of an earlier example we were looking at. Looking at the rows, we see that these two rows are linearly independent (verify!). The “row rank” is two. What if we view this matrix as a concatenation of three two-dimensional vectors? Note that three two-dimensional vectors can only span a two-dimensional space, so at most two of them can be linearly independent; the “column rank” is again two.
Because the row and column ranks are always the same, we can always speak unambiguously of “the rank of a matrix”. Furthermore, the rank of an $(m \times n)$ matrix can never exceed $\min(m, n)$.
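Again this can be checked numerically (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[2, 4, 6],
              [2, 1, 3]])                 # the (2 x 3) matrix above
print(np.linalg.matrix_rank(A))           # 2 = min(2, 3)
print(np.linalg.matrix_rank(A.T))         # 2: the column rank agrees
```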
3. Finally, we often need to find the rank of a product. Here are two very useful results:
3a. If $A$ is $(m \times n)$ and $B$ is $(n \times p)$, then $\operatorname{rank}(AB) \le \min\{\operatorname{rank}(A), \operatorname{rank}(B)\}$.
3b. If $A$ is $(m \times n)$ and $B$ is $(n \times n)$ and of full rank, then $\operatorname{rank}(AB) = \operatorname{rank}(A)$. This extends to the product $CAB$, where both $C$ and $B$ are of full rank: $\operatorname{rank}(CAB) = \operatorname{rank}(A)$.
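Both results can be spot-checked numerically. In the sketch below (assuming numpy), A is the rank-one matrix from Example 3, while B and C are full-rank square matrices chosen arbitrarily for illustration:

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 4, 6], [3, 6, 9]])   # rank 1 (Example 3)
B = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]])   # full rank: det = 1
C = np.array([[2, 0, 0], [1, 1, 0], [0, 0, 3]])   # full rank: det = 6

rank = np.linalg.matrix_rank
print(rank(A @ B) <= min(rank(A), rank(B)))   # True  (result 3a)
print(rank(C @ A @ B) == rank(A))             # True  (result 3b)
```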
Exercises
1. Find the determinants of the following matrices using the Laplace expansion. If you find the determinant of a matrix to be zero, find out how many linearly independent rows the matrix has.
(i) $\begin{pmatrix} 4 & 0 & 1 \\ 19 & 1 & 3 \\ 7 & 1 & 0 \end{pmatrix}$ (ii) $\begin{pmatrix} 4 & 3 & 0 \\ 19 & 1 & 0 \\ 7 & 1 & 0 \end{pmatrix}$ (iii) $\begin{pmatrix} 4 & 3 & 0 \\ 1 & 1 & 2 \\ 7 & 6 & 6 \end{pmatrix}$ (iv) $\begin{pmatrix} 1 & 3 & 1 \\ 2 & 6 & 2 \\ 4 & 12 & 4 \end{pmatrix}$