
MA1140 : Linear Algebra

Lecture 04

Dr. Mrinmoy Datta


Department of Mathematics
IIT Hyderabad
[email protected]

January 2024



Linear Dependence

Let n ∈ N. We shall work entirely with row vectors in R1×n (of length n), or entirely
with column vectors in Rn×1 (of length n), both of which will be referred to as
‘vectors’.
We have already considered a linear combination

α1 a1 + · · · + αm am

of vectors a1 , . . . , am , where α1 , . . . , αm are scalars.


A set S of vectors is called linearly dependent if there is m ∈ N, there are (distinct)
vectors a1 , . . . , am in S and there are scalars α1 , . . . , αm , not all zero, such that

α1 a1 + · · · + αm am = 0.

It can be seen that S is linearly dependent ⇐⇒ either 0 ∈ S or a vector in S is a linear combination of other vectors in S.



Examples
(i) Let S := { [1 2]ᵀ, [2 1]ᵀ, [1 −1]ᵀ } ⊂ R2×1. Then the set S is linearly dependent since
[2 1]ᵀ = [1 2]ᵀ + [1 −1]ᵀ. Clearly, −[2 1]ᵀ + [1 2]ᵀ + [1 −1]ᵀ = [0 0]ᵀ.
(ii) Let S := { [1 2 3], [2 3 1], [3 1 2], [0 −3 3] } ⊂ R1×3. Then the set S is linearly dependent since
[0 −3 3] = [1 2 3] − 2 [2 3 1] + [3 1 2]. Clearly,
[1 2 3] − 2 [2 3 1] + [3 1 2] − [0 −3 3] = [0 0 0].
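A dependence relation like the one in example (i) can be found mechanically: stack the vectors as the columns of a matrix A and compute a basis of the null space {x : Ax = 0}. A minimal sketch using sympy (my own illustration, not part of the lecture):

    from sympy import Matrix

    # Columns are the vectors [1 2]^T, [2 1]^T, [1 -1]^T from example (i).
    A = Matrix([[1, 2, 1],
                [2, 1, -1]])

    # A nonzero null-space vector gives the coefficients of a dependence relation.
    print(A.nullspace())   # [Matrix([[1], [-1], [1]])] => v1 - v2 + v3 = 0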

In (i) above, S is a set of 3 vectors in R2×1 , and in (ii) above, S is a set of 4 vectors
in R1×3 . These examples illustrate an important phenomenon to which we now turn.
First we prove the following crucial result.



Proposition
Let S be a set of s vectors, each of which is a linear combination of elements of a
(fixed) set of r vectors. If s > r , then the set S is linearly dependent.

Proof. Let S := {x1, . . . , xs}, and suppose each vector in S is a linear combination of elements of the set {y1, . . . , yr} of r vectors, where s > r. Then

    xj = Σ_{k=1}^{r} ajk yk for j = 1, . . . , s, where ajk ∈ R.

Let A := [ajk] ∈ Rs×r. Then AT ∈ Rr×s. Since r < s, the homogeneous linear system AT x = 0 has more unknowns than equations, and hence has a nonzero solution, that is, there are α1, . . . , αs, not all zero, such that

    AT [α1 · · · αs]ᵀ = [0 · · · 0]ᵀ ∈ Rr×1,   where the (k, j) entry of AT is ajk,



that is, Σ_{j=1}^{s} ajk αj = 0 for k = 1, . . . , r. It follows that

    Σ_{j=1}^{s} αj xj = Σ_{j=1}^{s} αj ( Σ_{k=1}^{r} ajk yk ) = Σ_{k=1}^{r} ( Σ_{j=1}^{s} ajk αj ) yk = 0.

Since not all of α1, . . . , αs are zero, S is linearly dependent.

Corollary
Let n ∈ N and S be a set of vectors of length n. If S has more than n elements, then
S is linearly dependent.

Proof. If S is a set of column vectors of length n, then each element of S is a linear combination of the n column vectors e1, . . . , en. Similarly, if S is a set of row vectors of length n, then each element of S is a linear combination of the n row vectors e1ᵀ, . . . , enᵀ. Hence the desired result follows from the crucial result we just proved.
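As a quick illustration of the corollary (my own sketch, not from the lecture), any three vectors of length 2 must be linearly dependent; sympy confirms this by exhibiting a rank deficiency and an explicit dependence relation:

    from sympy import Matrix

    # Any 3 vectors in R^{2x1} are dependent; stack them as columns.
    A = Matrix.hstack(Matrix([3, 1]), Matrix([-2, 5]), Matrix([7, 0]))

    print(A.rank())          # 2 < 3 columns, so the columns are dependent
    print(A.nullspace()[0])  # coefficients of an explicit dependence relation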



Linear Independence

A set S of vectors is called linearly independent if it is not linearly dependent, that is,

α1 a1 + · · · + αm am = 0 =⇒ α1 = · · · = αm = 0,

whenever a1, . . . , am are (distinct) vectors in S and α1, . . . , αm are scalars. We may also say that the vectors in S are linearly independent.
Linearly independent sets are important because each vector in such a set gives us data that we cannot obtain from any linear combination of the others. In this sense, each element of a linearly independent set is indispensable!
Examples
(i) The empty set is linearly independent vacuously.



(ii) Let S be the subset of Rn×1 consisting of the basic column vectors e1 , . . . , en .
Then S is linearly independent. To see this, let α1 , . . . , αn ∈ R and
α1 e1 + · · · + αn en = 0. Then the jth entry αj of α1 e1 + · · · + αn en is equal to 0 for
j = 1, . . . , n.
(iii) Let S denote the subset of R1×4 consisting of the vectors
[1 0 0 0], [1 1 0 0], [1 1 1 0] and [1 1 1 1].
Then S is linearly independent. To see this, let α1, α2, α3, α4 ∈ R be such that
α1 [1 0 0 0] + α2 [1 1 0 0] + α3 [1 1 1 0] + α4 [1 1 1 1] = [0 0 0 0].
Then α1 + α2 + α3 + α4 = 0, α2 + α3 + α4 = 0, α3 + α4 = 0 and α4 = 0, that is,
α4 = α3 = α2 = α1 = 0.



How to Decide Linear Independence of Column Vectors?

Suppose c1, . . . , cn are column vectors, each of length m. Let A ∈ Rm×n be the matrix having c1, . . . , cn as its n columns. Then for x1, . . . , xn ∈ R,

    x1 c1 + · · · + xn cn = [c1 · · · cn] [x1 · · · xn]ᵀ = Ax.

Hence the subset S := {c1, . . . , cn} of Rm×1 is linearly independent
⇐⇒ the linear system Ax = 0 has only the zero solution
⇐⇒ the RREF A′ of A has n nonzero rows.
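This test is easy to carry out by machine. A minimal sketch with sympy (my own illustration, not from the lecture), applied to the vectors of example (iii) written as columns: rref() returns the RREF together with the pivot columns, and the columns are independent exactly when every column is pivotal.

    from sympy import Matrix

    # Columns are the vectors of example (iii), transposed into columns of A.
    A = Matrix([[1, 1, 1, 1],
                [0, 1, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 0, 1]])

    A_rref, pivot_cols = A.rref()
    print(len(pivot_cols) == A.cols)   # True => the columns are independent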



Row Rank of a Matrix

Definition

Let A ∈ Rm×n . The row rank of A is the maximum number of linearly independent
row vectors of A.
Thus the row rank of A is equal to r
⇐⇒ there is a linearly independent set of r rows of A and any set of r + 1 rows of A is linearly dependent.
Let r be the row rank of A.
1. Then r = 0 if and only if A = O.
2. Since the total number of rows of A is m, we see that r ≤ m. Also, since the row vectors of A form a subset of R1×n, no more than n of them can be linearly independent, by the corollary above. Thus r ≤ n.

Let a1, . . . , am be any m vectors in R1×n. Clearly, they are linearly independent if and only if the matrix A formed with these vectors as row vectors has row rank m, and they are linearly dependent if and only if the row rank of A is less than m.



Examples
 
(i) Let

    A := [  3    0    2    2 ]
         [ −3    0   12   27 ]
         [ −21  21    0   15 ].

The row vectors of A are a1 := [3 0 2 2], a2 := [−3 0 12 27] and a3 := [−21 21 0 15].
Let α1, α2, α3 ∈ R be such that α1 a1 + α2 a2 + α3 a3 = 0. This means

    [α1 α2 α3] [  3    0    2    2 ]
               [ −3    0   12   27 ]  =  [0 0 0 0],
               [ −21  21    0   15 ]

that is,

    3α1 − 3α2 − 21α3 = 0
    21α3 = 0
    2α1 + 12α2 = 0
    2α1 + 27α2 + 15α3 = 0.

Clearly, α3 = 0, and then the two equations 3α1 − 3α2 = 0 and 2α1 + 12α2 = 0 show that α1 = α2 = 0 as well. Thus the 3 rows of A are linearly independent. Hence the row rank of A is 3.



 
(ii) Let

    A := [  3    0    2    2 ]
         [ −3   21   12   27 ]
         [ −21  21    0   15 ].

Here a1 := [3 0 2 2], a2 := [−3 21 12 27], a3 := [−21 21 0 15].
Observe that 6a1 − a2 + a3 = 0. Hence a1, a2, a3 are linearly dependent.
But the set {a1, a2} is linearly independent. Hence the row rank of A is 2.
We used the relation 6a1 − a2 + a3 = 0 above to determine the row rank of A. It is difficult to think of such a relation out of nowhere. We shall develop a systematic approach to find the row rank of a matrix; a machine check is sketched below.
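In the meantime, here is a quick cross-check of examples (i) and (ii) with sympy (my own sketch, not from the slides):

    from sympy import Matrix

    A1 = Matrix([[3, 0, 2, 2], [-3, 0, 12, 27], [-21, 21, 0, 15]])   # example (i)
    A2 = Matrix([[3, 0, 2, 2], [-3, 21, 12, 27], [-21, 21, 0, 15]])  # example (ii)

    print(A1.rank(), A2.rank())   # 3 2, matching the hand computations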



First we prove the following preliminary results.

Proposition
If a matrix A is transformed to a matrix A′ by elementary row operations, then the
row ranks of A and A′ are equal, that is, EROs do not alter the row rank of a matrix.

Proof. Ri ←→ Rj with i ̸= j: A and A′ have the same set of row vectors. So there is
nothing to prove.
Rj −→ αRj with α ̸= 0: {aj, aj1, . . . , ajs} is linearly independent
⇐⇒ {αaj, aj1, . . . , ajs} is linearly independent.
Thus the maximum number of linearly independent rows of A is the same as the
maximum number of linearly independent rows of A′ . That is, the row ranks of A and
A′ are equal.
Ri −→ Ri + αRj with i ̸= j: Suppose the set {a1, . . . , ai, . . . , aj, . . . , am} of all row vectors of A contains a linearly independent subset S := {aj1, . . . , ajs} having s elements. We claim that the set {a1, . . . , ai + αaj, . . . , aj, . . . , am} of all row vectors of A′ also contains a linearly independent set S′ having s elements. If ai ̸∈ S, then we let S′ := S. Next, suppose ai ∈ S. Then we may replace ai in S suitably, either by ai + αaj or by aj, to form a linearly independent set S′. The last statement follows by considering the cases ai + αaj = 0 and aj = 0, and by observing that if both ai + αaj and aj were linear combinations of vectors in S \ {ai}, then so would be ai = (ai + αaj) − αaj, and this would contradict the linear independence of S. The converse claim also holds, since the inverse operation Ri −→ Ri − αRj is again an elementary row operation; hence the row ranks of A and A′ are equal.



Proposition
Let a matrix A′ be in RREF. Then the nonzero rows of A′ are linearly independent, so
the row rank of A′ equals the number of nonzero rows of A′ .

Proof. Let the number of nonzero rows of A′ be r . Let the pivots in these rows be in
columns k1 , . . . , kr , where 1 ≤ k1 < · · · < kr ≤ n. Suppose

α1 a1 + · · · + αr ar = 0,

where α1, . . . , αr ∈ R.
The kj-th component of α1 a1 + · · · + αr ar is equal to αj, since all entries in the kj-th column other than the pivot are equal to 0.
Hence αj = 0 for each j, that is, α1 = · · · = αr = 0. This shows that the first r rows of A′ are
linearly independent.
Also, since the last m − r rows of A′ are zero rows, any r + 1 row vectors of A′ will
contain the vector 0, and so they will be linearly dependent. Hence the row rank of A′ is r. This completes the proof.



We have now obtained an important result which tells us how to find the row rank of
a matrix.

Proposition
The row rank of a matrix is equal to the number of nonzero rows in any row reduced
echelon form of the matrix.



Example
It can be seen that the matrix

    A := [ 2  −1   3   2 ]
         [ 1   4   0  −1 ]
         [ 2   6  −1   5 ]

can be transformed to the row reduced echelon form

    A′ := [ 1   0   0   17/3 ]
          [ 0   1   0   −5/3 ]
          [ 0   0   1  −11/3 ].

By our discussions above, the row rank of A is 3.
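This computation is exactly what sympy's rref() performs; a minimal sketch (my own illustration, not part of the slides):

    from sympy import Matrix

    A = Matrix([[2, -1, 3, 2],
                [1, 4, 0, -1],
                [2, 6, -1, 5]])

    A_rref, pivots = A.rref()
    print(A_rref)        # rows [1, 0, 0, 17/3], [0, 1, 0, -5/3], [0, 0, 1, -11/3]
    print(len(pivots))   # 3 = number of nonzero rows = row rank of A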



Column Rank of a Matrix

Definition
Let A ∈ Rm×n . The column rank of A is the maximum number of linearly independent
column vectors of A.

Clearly, column-rank(A) = row-rank(AT ).

Proposition
The column rank of a matrix is equal to its row rank.

Proof. Let A := [ajk] ∈ Rm×n. Let r := row-rank(A) and s := column-rank(A). We will show that s ≤ r and r ≤ s.
Let a1, . . . , am denote the row vectors of A, so that

    aj := [aj1 · · · ajk · · · ajn] for j = 1, . . . , m.

Among these m row vectors of A, there are r row vectors aj1 , . . . , ajr which are linearly
independent, and each row vector of A is a linear combination of these r row vectors.



For simplicity, let b1 := aj1, . . . , br := ajr. Then for j = 1, . . . , m and ℓ = 1, . . . , r, there is αjℓ ∈ R such that

    aj = αj1 b1 + · · · + αjℓ bℓ + · · · + αjr br for j = 1, . . . , m.

If bℓ := [bℓ1 · · · bℓk · · · bℓn] for ℓ = 1, . . . , r, then the above equations can be written componentwise as follows:

    a1k = α11 b1k + · · · + α1ℓ bℓk + · · · + α1r brk
     ⋮
    ajk = αj1 b1k + · · · + αjℓ bℓk + · · · + αjr brk
     ⋮
    amk = αm1 b1k + · · · + αmℓ bℓk + · · · + αmr brk

for k = 1, . . . , n. These m equations show that the kth column of A can be written as follows:



       
    [a1k · · · ajk · · · amk]ᵀ
        = b1k [α11 · · · αj1 · · · αm1]ᵀ + b2k [α12 · · · αj2 · · · αm2]ᵀ
          + · · · + brk [α1r · · · αjr · · · αmr]ᵀ

for k = 1, . . . , n. Thus each of the n columns of A is a linear combination of the r column vectors [α11 · · · αj1 · · · αm1]ᵀ, [α12 · · · αj2 · · · αm2]ᵀ, . . . , [α1r · · · αjr · · · αmr]ᵀ.
By a crucial result we have proved earlier, no more than r columns of A can be linearly independent. Hence s ≤ r, i.e., column-rank(A) ≤ row-rank(A).
Applying the above result to AT in place of A, we obtain r ≤ s. Thus s = r, as desired.

Definition: Rank of A := Row rank of A = Column rank of A.

In view of the above result, we define the rank of a matrix A to be the common value of the row rank of A and the column rank of A, and we denote it by rank A. Consequently, rank AT = rank A.
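A one-line sanity check of this equality with sympy (my own sketch), using the matrix of example (ii) above:

    from sympy import Matrix

    A = Matrix([[3, 0, 2, 2], [-3, 21, 12, 27], [-21, 21, 0, 15]])
    print(A.rank(), A.T.rank())   # 2 2: row rank equals column rank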



Let us restate some results proved earlier in terms of the newly introduced notion of
the rank of a matrix.
Let A ∈ Rm×n .
1. If A is transformed to A′ by EROs, then rank A′ = rank A.
2. If A is transformed to A′ by EROs and A′ is in RREF, then

    rank A = number of nonzero rows of A′ = number of pivotal columns of A′.

(Note: Each nonzero row has exactly one pivot, and a zero row has no pivot.)
3. Let m = n. Then
A is invertible ⇐⇒ rank A = n.
(Recall: A square matrix is invertible if and only if it can be transformed to the
identity matrix by EROs.)
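Item 3 in code (a minimal sketch of my own, using an arbitrary invertible example):

    from sympy import Matrix, eye

    A = Matrix([[2, -1], [4, 3]])      # det = 10, so A is invertible
    print(A.rank() == A.rows)          # True <=> A is invertible
    print(A.inv() * A == eye(2))       # True, consistency check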
