
Section 11 Rank and Linear Dependence

In an earlier section, we learnt that the number of solutions of a system of linear
equations depends on the number of “independent” equations in the system relative to
the number of unknowns. From Cramer’s Rule, we learnt that a system of $n$ linear
equations in $n$ unknowns will have a unique solution only when the determinant of
the coefficient matrix is not zero. When will the determinant of a matrix be zero? This
has a lot to do with linear dependence in the coefficient matrix.

The presence of any zero row or column will produce a zero determinant (this is easy
to verify: just use the Laplace expansion along that zero row or column).

In the (2 × 2) case we obtained a zero determinant when one row was a multiple of
the other (or is the same as the other), and this remains true of the (3 × 3) case. A
more subtle case is when one row is a linear combination of the other two (i.e., we
can write one row as the sum of some multiple of a second row plus some multiple of
the third). For instance, take the matrix
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 2 & 4 \\ 3 & 3 & 5 \end{pmatrix}$$

which you can verify has a zero determinant. Here the third row is equal to the first
row plus half of the second row:
$$(3 \;\; 3 \;\; 5) = (1 \;\; 2 \;\; 3) + \tfrac{1}{2}(4 \;\; 2 \;\; 4).$$
(Alternatively, we can say that the first row is the third row minus half of the second,
$$(1 \;\; 2 \;\; 3) = (3 \;\; 3 \;\; 5) - \tfrac{1}{2}(4 \;\; 2 \;\; 4),$$
or that the second row is two times the third row minus two times the first.)
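These claims are easy to check numerically. A small sketch in Python (the helper `det3` is our own, implementing the standard cofactor expansion of a (3 × 3) determinant) verifies both the zero determinant and the row relationship:

```python
# Check that the matrix above has a zero determinant,
# and that row 3 = row 1 + (1/2) * row 2.
def det3(m):
    """3x3 determinant via cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 2, 3],
     [4, 2, 4],
     [3, 3, 5]]

print(det3(A))                                       # 0
print([A[0][j] + 0.5 * A[1][j] for j in range(3)])   # [3.0, 3.0, 5.0]
```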

We say that the three row vectors that make up the matrix are “linearly dependent”.
This “linear dependence” between the rows leads to a zero determinant because it
means that row operations can be applied to ultimately create a zero row.

A (3 × 3) matrix will have a non-zero determinant only if it has three linearly
independent rows (or columns). More generally, think of a (3 × 3) square matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$

as a stack of three row vectors

Matrix Algebra Notes 11-1


Anthony Tay
$$A = \begin{pmatrix} \mathbf{a}_1 \\ \mathbf{a}_2 \\ \mathbf{a}_3 \end{pmatrix} = \begin{pmatrix} (a_{11} \;\; a_{12} \;\; a_{13}) \\ (a_{21} \;\; a_{22} \;\; a_{23}) \\ (a_{31} \;\; a_{32} \;\; a_{33}) \end{pmatrix}$$

We say that three (column or row) vectors $\mathbf{a}_1$, $\mathbf{a}_2$, and $\mathbf{a}_3$ are linearly dependent if
there are constants $c_1$, $c_2$, and $c_3$, not all zero, such that
$$c_1\mathbf{a}_1 + c_2\mathbf{a}_2 + c_3\mathbf{a}_3 = \mathbf{0}.$$

If the only case where this is true is $c_1 = c_2 = c_3 = 0$, then the vectors are said to be
linearly independent. For now, define the rank of a matrix as the number of linearly
independent row vectors that it contains.

Examples
1 2 3
1. Let A   2 4 6  . Let a1  1 2 3 , a 2   2 4 6 , and a3   2 1 3 .
 2 1 3

Then these vectors are linearly dependent, since 2a1  1.a 2  0.a3   0 0 0 .
The second row is just twice that of the first row (or the first is half of the
second). Removing one of the dependent vectors, say a 2   2 4 6 , the
remaining vectors a1 and a3 are linearly independent: the only constants c1 and
c2 such that c1 1 2 3  c2  2 1 3   0 0 0 are c1  c2  0 . The matrix
has two linear independent vectors, and is said to be of rank 2.

If we imagine these three vectors in the usual representation as ‘arrows’ in three-dimensional co-ordinate space, we will see that the vector $\mathbf{a}_2$ is just an extension of
$\mathbf{a}_1$. Combinations of the three vectors are effectively combinations of only two
vectors:
$$c_1\mathbf{a}_1 + c_2\mathbf{a}_2 + c_3\mathbf{a}_3 = c_1\mathbf{a}_1 + c_2(2\mathbf{a}_1) + c_3\mathbf{a}_3 = (c_1 + 2c_2)\mathbf{a}_1 + c_3\mathbf{a}_3.$$

Different combinations of the three vectors will therefore result in new vectors that all
lie on a plane (a two-dimensional space), and cannot ‘span’ or cover the entire 3-d
space.
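A quick numerical check of Example 1 (plain Python, using nothing from the notes beyond the three row vectors themselves):

```python
# Example 1: verify the dependence 2*a1 - 1*a2 + 0*a3 = (0 0 0),
# and that a2 is an 'extension' (a scalar multiple) of a1.
a1, a2, a3 = (1, 2, 3), (2, 4, 6), (2, 1, 3)

print(tuple(2 * x - y for x, y in zip(a1, a2)))  # (0, 0, 0)
print(tuple(2 * x for x in a1) == a2)            # True
```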

1 2 3
2. Let A   3 3 6  . Let a1  1 2 3 , a 2  3 3 6 , and a3   2 1 3 .
 2 1 3

Then these vectors are linearly dependent, since 1a1  1.a 2  1.a3   0 0 0 .
The second row “depends” on the other two in that it is the sum of a1 and a3 .

(Equivalently, $\mathbf{a}_3$ “depends on” $\mathbf{a}_1$ and $\mathbf{a}_2$ because it is $\mathbf{a}_2 - \mathbf{a}_1$.) Removing one
of these ‘dependent’ vectors (any one of them) leaves two independent vectors.
For instance, removing $\mathbf{a}_3$, the only constants $c_1$ and $c_2$ such that
$$c_1\mathbf{a}_1 + c_2\mathbf{a}_2 = c_1(1 \;\; 2 \;\; 3) + c_2(3 \;\; 3 \;\; 6) = (0 \;\; 0 \;\; 0)$$
are $c_1 = c_2 = 0$. The matrix has two linearly independent row vectors, and is said to be
of rank 2.

If we imagine these three vectors geometrically in three-dimensional co-ordinate


space, we will see that although no one vector is an extension of another, all three
vectors nonetheless lie in a single plane, so again combinations of the three vectors
will ‘create’ new vectors that also lie on that plane. The three vectors cannot ‘span’,
or cover, the entire 3-d space.
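This coplanarity is also easy to verify numerically. The scalar triple product is not introduced in these notes, but $\mathbf{a}_1 \cdot (\mathbf{a}_2 \times \mathbf{a}_3)$ equals $\det A$ and vanishes exactly when the three vectors lie in one plane, so a hedged sketch:

```python
# Example 2: a2 = a1 + a3, and all three rows lie in a single plane.
a1, a2, a3 = (1, 2, 3), (3, 3, 6), (2, 1, 3)
print(tuple(x + z for x, z in zip(a1, a3)) == a2)  # True

# Scalar triple product a1 . (a2 x a3) = det(A); zero <=> coplanar rows.
cross = (a2[1] * a3[2] - a2[2] * a3[1],
         a2[2] * a3[0] - a2[0] * a3[2],
         a2[0] * a3[1] - a2[1] * a3[0])
triple = sum(u * v for u, v in zip(a1, cross))
print(triple)  # 0
```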

1 2 3
3. Let A   2 4 6  . Let a1  1 2 3 , a 2   2 4 6 , and a3  3 6 9 .
 3 6 9 

Then these vectors are linearly dependent, since 1a1  1.a 2  1.a3   0 0 0 . In
this case, there is only one linearly independent vector. If we remove a3 , we find
that the remaining two are still linearly dependent: a 2 is twice that of a1 . The
rank of this matrix is one: rank ( A)  1 .

Geometrically, all three vectors lie on a single line. The vector $\mathbf{a}_2$ is twice $\mathbf{a}_1$,
and $\mathbf{a}_3$ is three times $\mathbf{a}_1$. Combinations of these vectors will therefore only
create new vectors that are also extensions of $\mathbf{a}_1$:
$$c_1\mathbf{a}_1 + c_2\mathbf{a}_2 + c_3\mathbf{a}_3 = c_1\mathbf{a}_1 + c_2(2\mathbf{a}_1) + c_3(3\mathbf{a}_1) = (c_1 + 2c_2 + 3c_3)\mathbf{a}_1.$$
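A small check of this collapse onto a single line (the particular coefficients $c_1, c_2, c_3$ below are an arbitrary choice of ours):

```python
# Example 3: every row is a multiple of a1, so combinations stay on one line.
a1, a2, a3 = (1, 2, 3), (2, 4, 6), (3, 6, 9)
print(a2 == tuple(2 * x for x in a1), a3 == tuple(3 * x for x in a1))  # True True

c1, c2, c3 = 5, -1, 2   # arbitrary coefficients (our own choice)
combo = tuple(c1 * x + c2 * y + c3 * z for x, y, z in zip(a1, a2, a3))
# combo should equal (c1 + 2*c2 + 3*c3) * a1:
print(combo == tuple((c1 + 2 * c2 + 3 * c3) * x for x in a1))  # True
```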

0 0 0
4. Let A   2 4 6  . Let a1   0 0 0 , a 2   2 4 6 , and a3   2 1 3 .
 2 1 3

Then these vectors are linearly dependent, since c1.a1  0.a 2  0.a3   0 0 0
for any c1 . Removing vector a1 leaves two linearly independent vectors. The
rank of this matrix is two.

1 0 0 
5. Let A  0 2 4  . Let a1  1 0 0 , a 2   0 2 4 , and a3   0 2 1 .
0 2 1 

Then all three row vectors are linearly independent; it is impossible to find c1 , c2
and c3 , not all zero, such that c1a1  c2a 2  c3a3   0 0 0 . This matrix is of
“full rank”.

A square matrix will have a non-zero determinant if and only if (‘iff’) it has full rank.
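We can illustrate both directions of the ‘iff’ with the matrices from Examples 5 and 1 (`det3` is our own helper, implementing the standard (3 × 3) cofactor expansion):

```python
def det3(m):
    """3x3 determinant via cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3([[1, 0, 0], [0, 2, 4], [0, 2, 1]]))  # -6  (full rank: non-zero)
print(det3([[1, 2, 3], [2, 4, 6], [2, 1, 3]]))  # 0   (rank 2: zero)
```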

There are many ways to determine the rank of a matrix. One way is to do ‘Gaussian
Elimination’ (the row operations) and see how many non-zero pivots you get (see
Section 2 for ‘pivots’). Other ways will be discussed in later sections, or in more
advanced classes. For the time being we will proceed by observation.
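As a sketch of the pivot-counting approach, here is a minimal Gaussian-elimination implementation (the `rank` function is our own, not part of the notes; it uses exact `Fraction` arithmetic to avoid rounding issues), applied to the five example matrices above:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination: count the non-zero pivots."""
    m = [[Fraction(x) for x in r] for r in rows]
    piv = 0
    for col in range(len(m[0])):
        # find a row at or below 'piv' with a non-zero entry in this column
        r = next((i for i in range(piv, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue                 # no pivot in this column
        m[piv], m[r] = m[r], m[piv]  # swap the pivot row into place
        for i in range(piv + 1, len(m)):
            f = m[i][col] / m[piv][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[piv])]
        piv += 1
    return piv

# The five example matrices from this section:
print(rank([[1, 2, 3], [2, 4, 6], [2, 1, 3]]))  # 2
print(rank([[1, 2, 3], [3, 3, 6], [2, 1, 3]]))  # 2
print(rank([[1, 2, 3], [2, 4, 6], [3, 6, 9]]))  # 1
print(rank([[0, 0, 0], [2, 4, 6], [2, 1, 3]]))  # 2
print(rank([[1, 0, 0], [0, 2, 4], [0, 2, 1]]))  # 3
```

Each printed value matches the rank we found above by observation.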

We conclude this section with several remarks regarding rank. Earlier, we viewed a
matrix as a stack of row vectors. The rank concept that we discussed could be called
‘row rank’. But we can also view a matrix as the concatenation (joining together) of
three column vectors
$$A = (\mathbf{a}_1 \;\; \mathbf{a}_2 \;\; \mathbf{a}_3) = \left( \begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \end{pmatrix} \;\; \begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \end{pmatrix} \;\; \begin{pmatrix} a_{13} \\ a_{23} \\ a_{33} \end{pmatrix} \right)$$

1. Everything that we said here for row vectors holds for column vectors as well, and
in fact a matrix will have the same number of linearly independent column vectors as
it has linearly independent row vectors: if $A$ has two linearly independent row
vectors, it will also have two linearly independent column vectors; if it has only one
linearly independent row vector, then it will have only one linearly independent
column vector, and so on. A matrix’s row rank is the same as its “column rank”.
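A numerical illustration of “row rank = column rank”, using a small Gaussian-elimination rank function of our own on the matrix from Example 1 and its transpose:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination: count the non-zero pivots."""
    m = [[Fraction(x) for x in r] for r in rows]
    piv = 0
    for col in range(len(m[0])):
        r = next((i for i in range(piv, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[piv], m[r] = m[r], m[piv]
        for i in range(piv + 1, len(m)):
            f = m[i][col] / m[piv][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[piv])]
        piv += 1
    return piv

A = [[1, 2, 3], [2, 4, 6], [2, 1, 3]]   # the matrix from Example 1
At = [list(col) for col in zip(*A)]     # transpose: columns become rows
print(rank(A), rank(At))  # 2 2
```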

2. We can also speak of the rank of non-square matrices. Take for example
$$A = \begin{pmatrix} 2 & 4 & 6 \\ 2 & 1 & 3 \end{pmatrix}.$$
This is in fact the last two rows of an earlier example. Looking at
the rows, we see that these two rows are linearly independent (verify!). The ‘row
rank’ is two. What if we view this matrix as a concatenation of three two-dimensional
vectors? Note that three two-dimensional vectors can only span a two-dimensional

space. Draw three two-dimensional vectors on an “$x$-$y$” co-ordinate plane on a piece
of paper. Can you generate a three-dimensional space from this? (The answer is no.)

If we observe the three column vectors
$$\begin{pmatrix} 2 \\ 2 \end{pmatrix}, \;\; \begin{pmatrix} 4 \\ 1 \end{pmatrix}, \;\; \begin{pmatrix} 6 \\ 3 \end{pmatrix},$$
we see that these do span the entire two-dimensional plane. So the column rank
of this matrix is also 2. The principle that a matrix’s row and column ranks are the
same continues to hold here, and holds generally.

As another illustration, consider
$$A = \begin{pmatrix} 2 & 4 & 6 \\ 1 & 2 & 3 \end{pmatrix}.$$
It is easy to see that the row rank of this matrix is one, and the column rank of this
matrix is also one.

Because the row and column ranks are always the same, we can always speak
unambiguously of “the rank of a matrix”. Furthermore,

$$\operatorname{rank}(A) \le \min(\#\text{rows}, \#\text{columns}).$$

3. Finally, we often need to find the rank of a product. Here are two very useful
results:

3a If $A$ is $(m \times n)$ and $B$ is $(n \times p)$, then
$$\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B)).$$

Proof Let $C = AB$. Every column of $C$ is some linear combination of the
columns of $A$, therefore the columns of $C$ cannot span a space of dimension
greater than the column rank of $A$. Similarly, every row of $C$ is some linear
combination of the rows of $B$, therefore the rows of $C$ cannot span a space of
dimension greater than the row rank of $B$. Since row rank and column rank are
the same, we have our result.
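A numerical sketch of result 3a (the `rank` and `matmul` helpers are our own; `A` is the rank-one (2 × 3) matrix from Remark 2's second illustration, and `B` is the full-rank matrix from Example 5):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination: count the non-zero pivots."""
    m = [[Fraction(x) for x in r] for r in rows]
    piv = 0
    for col in range(len(m[0])):
        r = next((i for i in range(piv, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[piv], m[r] = m[r], m[piv]
        for i in range(piv + 1, len(m)):
            f = m[i][col] / m[piv][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[piv])]
        piv += 1
    return piv

def matmul(X, Y):
    """Naive matrix product (no external libraries)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 4, 6], [1, 2, 3]]               # (2 x 3), rank 1
B = [[1, 0, 0], [0, 2, 4], [0, 2, 1]]    # (3 x 3), rank 3
print(rank(matmul(A, B)), min(rank(A), rank(B)))  # 1 1
```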

3b If $A$ is $(m \times n)$ and $B$ is $(n \times n)$ and full rank, then
$$\operatorname{rank}(AB) = \operatorname{rank}(A).$$

Proof If $x \le \min(a, b)$, then obviously $x \le a$ and $x \le b$ are both true. Since
$$\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B)),$$
we have
$$\operatorname{rank}(A) = \operatorname{rank}(ABB^{-1}) \quad [\,B^{-1} \text{ exists since } B \text{ is full rank}\,]$$
$$\le \min(\operatorname{rank}(AB), \operatorname{rank}(B^{-1})) \le \operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B)) \le \operatorname{rank}(A).$$
Since the chain starts and ends with $\operatorname{rank}(A)$, every inequality must hold with equality; in particular $\operatorname{rank}(AB) = \operatorname{rank}(A)$.

This extends to the product $CAB$ where both $C$ and $B$ are full rank:
$$\operatorname{rank}(CAB) = \operatorname{rank}(A).$$
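And a sketch of this extension, with a full-rank $C$ and $B$ chosen by us (the helpers are again our own):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination: count the non-zero pivots."""
    m = [[Fraction(x) for x in r] for r in rows]
    piv = 0
    for col in range(len(m[0])):
        r = next((i for i in range(piv, len(m)) if m[i][col] != 0), None)
        if r is None:
            continue
        m[piv], m[r] = m[r], m[piv]
        for i in range(piv + 1, len(m)):
            f = m[i][col] / m[piv][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[piv])]
        piv += 1
    return piv

def matmul(X, Y):
    """Naive matrix product (no external libraries)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 4, 6], [1, 2, 3]]               # (2 x 3), rank 1
B = [[1, 0, 0], [0, 2, 4], [0, 2, 1]]    # (3 x 3), full rank
C = [[1, 1], [0, 1]]                     # (2 x 2), full rank
CAB = matmul(C, matmul(A, B))
print(rank(CAB), rank(A))  # 1 1
```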
Exercises

1. Find the determinants of the following matrices using the Laplace expansion. If
you find the determinant of a matrix to be zero, find out how many linearly
independent rows the matrix has.

(i) $\begin{pmatrix} 4 & 0 & 1 \\ 19 & 1 & 3 \\ 7 & 1 & 0 \end{pmatrix}$ (ii) $\begin{pmatrix} 4 & 3 & 0 \\ 19 & 1 & 0 \\ 7 & 1 & 0 \end{pmatrix}$ (iii) $\begin{pmatrix} 4 & 3 & 0 \\ 1 & 1 & 2 \\ 7 & 6 & 6 \end{pmatrix}$ (iv) $\begin{pmatrix} 1 & 3 & 1 \\ 2 & 6 & 2 \\ 4 & 12 & 4 \end{pmatrix}$

2. Show using the formula
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$$

that the determinant of a $(3 \times 3)$ matrix will be zero if


(i) one row is a multiple of another;
(ii) one row is a linear combination of the other two.

3. Show that $\operatorname{rank}(A^{\mathsf{T}}A) = \operatorname{rank}(A)$.
