Linear Algebra Week 2

The document outlines the lecture topics on linear algebra for one week: rank and solvability of linear systems; subspaces, linear spans, and the dimension of vector spaces; and homogeneous and inhomogeneous linear systems (Lectures 4-6). It provides definitions and theorems on the rank of matrices and the solvability of linear systems, proving properties of the rank and the existence, uniqueness, and non-uniqueness of solutions, and gives examples of solving systems of equations by Gaussian elimination.


Outline of the week

1 Rank and solvability (lec 4)


2 Rn , Subspaces, Linear spans (lec 5)
3 Linear dependence/independence, basis and dimension (lec 5)
4 Row/Column ranks of a matrix, equality (lec 6)
5 Homogeneous/Inhomogeneous linear systems (lec 6)

January 9, 2017


Rank of a matrix

Definition 1 (Rank)
Let A be an m × n matrix and Â be any of its row echelon forms. The
rank of A is the number of pivots in Â.

This definition is ad hoc and not yet rigorously justified, since a REF is
not unique.

The justification will be given in due course. We will denote the rank of A
by rank(A). It is immediate that

rank(A) ≤ m and rank(A) ≤ n.
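As a quick numerical sanity check (a sketch assuming Python with numpy; the matrix here is invented for illustration), numpy's matrix_rank computes the same number, via the SVD rather than a REF:

```python
import numpy as np

# A hypothetical 3x4 matrix: row 2 is twice row 1, so one pivot is lost.
A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [0., 1., 1., 1.]])

# matrix_rank (computed via the SVD) agrees with the pivot count of any REF.
r = np.linalg.matrix_rank(A)
print(r)                                     # 2
print(r <= A.shape[0], r <= A.shape[1])      # rank(A) <= m and rank(A) <= n
```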



Solvability of a linear system

Theorem 2
Let Ax = b be a given system of linear equations. Let A+ = [A|b] denote
the augmented matrix.
1 (Existence) The solution set is non-empty if and only if
rank(A) = rank(A+).
2 (Uniqueness) The system has a unique solution if and only if
rank(A) = rank(A+) = n.
3 (Non-uniqueness) The system has infinitely many solutions if and only
if rank(A) = rank(A+) < n.
4 (Gauss elimination or Completeness) If rank(A) = rank(A+), the Gauss
elimination method gives the complete set of solutions.



Proof of the theorem

Let A+ = [A|b] be the augmented matrix and Â+ be a row echelon form
of A+. Then Â+ = [Â|b̂] where, perforce, Â is a REF of A.

(i) (Existence.) We must have

rank(A+) = rank(A) or rank(A) + 1.

If rank(A+) = rank(A), then there can be no pivot in the last column
(the augmented part) and we can solve the system by back substitution.
Conversely, if rank(A+) = rank(A) + 1, then the last pivot is in the
augmented column. The corresponding equation reads

0 = pivot ≠ 0

and hence the system is unsolvable.



Proof of the theorem contd.

(ii) (Uniqueness.) If rank(A) = rank(A+) = n = the number of columns in
A, then all the pivots are in A and the reduced REF will look like

Â+ = [ In  b̂ ]
     [ 0   0 ]

and the unique solution is x = b̂. In the obvious notation, xj = b̂j.

(iii) (Non-uniqueness.) If rank(A) = rank(A+) = r < n, then n − r of the
variables x1, ..., xn are free to take any values, thereby giving infinitely
many solutions (non-uniqueness).
The (n − r) free variables correspond to the pivot-free columns.

(iv) (Gauss elimination or completeness.) Since each row operation is
reversible, the system in REF is equivalent to the original. Hence we get
all the solutions by GEM.



Solvability in a tabular form

Let Ax = b be a given system of linear equations. Let A+ = [A|b] denote
the m × (n + 1) augmented matrix.

rank A   rank A+   Case     Solution set
r        r + 1              Empty
r        r         r < n    Infinite set
r        r         r = n    Singleton set
r        r         r > n    ????
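The classification above can be checked numerically; a sketch assuming numpy, where `classify` is a helper name invented here and the ranks are computed via the SVD rather than by GEM:

```python
import numpy as np

def classify(A, b):
    """Classify the solution set of Ax = b by comparing rank A, rank A+ and n."""
    A = np.asarray(A, dtype=float)
    Aplus = np.hstack([A, np.asarray(b, dtype=float).reshape(-1, 1)])
    r = np.linalg.matrix_rank(A)
    rplus = np.linalg.matrix_rank(Aplus)
    n = A.shape[1]
    if r != rplus:
        return "empty"
    return "singleton" if r == n else "infinite"

# One invented example per row of the table.
print(classify([[1., 1.], [1., 1.]], [1., 2.]))   # empty: rank A+ = rank A + 1
print(classify([[1., 0.], [0., 1.]], [3., 4.]))   # singleton: r = n = 2
print(classify([[1., 1.]], [2.]))                 # infinite: r = 1 < n = 2
```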



Homogeneous systems

A very important sub-class of linear systems Ax = b consists of the
homogeneous systems, where the RHS is taken to be 0 ∈ Rm. Solving the
simpler system Ax = 0 throws much light on the given system Ax = b. In
fact, the former is called the associated homogeneous system of the latter.

The homogeneous system is ALWAYS solvable by x = 0.

Definition 3 (Null space)


Given an m × n matrix A, the set N (A) = {x ∈ Rn |Ax = 0} is called the
null space of A.

If x0 is a “particular” solution of Ax = b, the complete set of solutions is


x0 + N (A).
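The fact that the full solution set is x0 + N(A) can be checked numerically; a sketch assuming numpy, with a one-equation system invented for illustration (the null-space basis is read off from the SVD):

```python
import numpy as np

A = np.array([[1., 1., -1.]])               # hypothetical system: x + y - z = 2
b = np.array([2.])

x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
r = np.linalg.matrix_rank(A)
_, _, Vh = np.linalg.svd(A)
null_basis = Vh[r:]                         # rows spanning N(A); here 2 vectors

# Every x0 + (linear combination of null-space vectors) solves Ax = b.
for c in [np.array([1., 0.]), np.array([-2., 3.]), np.array([0.5, 0.5])]:
    x = x0 + c @ null_basis
    assert np.allclose(A @ x, b)
print("every element of x0 + N(A) solves Ax = b")
```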



Problems

Suppose the homogeneous system has infinitely many solutions. Some
practical problems arise:

How to list the solutions?
How to define the “sizes” of the solution sets?
How to describe the location of the solution sets in the variables’ space?

The theory that solves the above problems also suggests, as a by-product,
a natural definition of the rank of a matrix.
It also lets you list the solutions of the non-homogeneous equations.



Example

Example 4
Solve the following system of linear equations in the unknowns x1, . . . , x5
by GEM:

2x3 − 2x4 + x5 = 2
2x2 − 8x3 + 14x4 − 5x5 = 2
x2 + 3x3 + x5 = α

The augmented matrix is

[ 0  0   2  −2   1 | 2 ]
[ 0  2  −8  14  −5 | 2 ]
[ 0  1   3   0   1 | α ]



Example
 
A REF is

[ 0  1  3   0  1 | α ]
[ 0  0  2  −2  1 | 2 ]
[ 0  0  0   0  0 | 16 − 2α ]

Hence there are no solutions if α ≠ 8.

For α = 8, the REF is

[ 0  1  3   0  1 | 8 ]
[ 0  0  2  −2  1 | 2 ]
[ 0  0  0   0  0 | 0 ]

and the solution set is

x3 = 1 + x4 − x5 /2, x2 = 5 − 3x4 + x5 /2; x1 , x4 , x5 being arbitrary.


Remark:
N = N (A) = {x ∈ R5 |x3 = x4 − x5 /2, x2 = −3x4 + x5 /2}, a particular
solution is p = [0 5 1 0 0]T and the solution set is

S = p + N.
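A numerical spot-check of Example 4 for α = 8 (a sketch assuming numpy): we verify the particular solution p and a couple of members of the parametrized family.

```python
import numpy as np

# The system of Example 4 with alpha = 8 (the solvable case).
A = np.array([[0., 0.,  2., -2.,  1.],
              [0., 2., -8., 14., -5.],
              [0., 1.,  3.,  0.,  1.]])
b = np.array([2., 2., 8.])

p = np.array([0., 5., 1., 0., 0.])      # the particular solution from the slide
assert np.allclose(A @ p, b)

# Spot-check the family: x3 = 1 + x4 - x5/2, x2 = 5 - 3*x4 + x5/2.
for x1, x4, x5 in [(7., 1., 2.), (-1., 0.5, 3.)]:
    x = np.array([x1, 5 - 3*x4 + x5/2, 1 + x4 - x5/2, x4, x5])
    assert np.allclose(A @ x, b)
print("Example 4 checks out")
```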

Example
Example 5
For a < b, consider the system of equations:

x + y + z = 1
ax + by + 2z = 3
a²x + b²y + 4z = 9.

Find the pairs (a, b) for which the system has infinitely many solutions.

Solution: Row reduction of the augmented matrix:

A+ = [ 1   1   1 | 1 ]      [ 1    1      1    |  1    ]
     [ a   b   2 | 3 ]  ↦   [ 0   b−a    2−a   |  3−a  ]
     [ a²  b²  4 | 9 ]      [ 0  b²−a²  4−a²   |  9−a² ]

     [ 1    1        1       |  1          ]
 ↦   [ 0   b−a      2−a      |  3−a        ]
     [ 0    0   (2−a)(2−b)   |  (3−a)(3−b) ]


Example contd.

For non-uniqueness, the 3rd entry in the 3rd row must vanish, and THEN,
for existence, the last entry of the third row should also be zero. Therefore

(2 − a)(2 − b) = 0 AND (3 − a)(3 − b) = 0.

Now a < b forces a = 2, b = 3.

Discussion:
(i) If (2 − a)(2 − b) ≠ 0, there is exactly one solution.
(ii) If (2 − a)(2 − b) = 0 and (3 − a)(3 − b) ≠ 0, then there will be NO solution.
Hence both must vanish. [1.0]
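The rank conditions of Example 5 can be confirmed numerically; a sketch assuming numpy, where `ranks` is a helper name invented here and the parameter pairs are chosen to hit each case:

```python
import numpy as np

def ranks(a, b):
    """Return (rank A, rank A+) for the system of Example 5 with parameters a, b."""
    A = np.array([[1., 1., 1.], [a, b, 2.], [a*a, b*b, 4.]])
    rhs = np.array([[1.], [3.], [9.]])
    return np.linalg.matrix_rank(A), np.linalg.matrix_rank(np.hstack([A, rhs]))

print(ranks(2, 3))   # (2, 2): infinitely many solutions, as derived
print(ranks(1, 4))   # (3, 3): unique solution, since (2-a)(2-b) != 0
print(ranks(2, 5))   # (2, 3): no solution, since (3-a)(3-b) != 0
```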

Comparing solution sets

Example 6
Let x + y − z = 0 be one system and

x + 2y − 2z = 0
2x − y = 0

be another system. Compare the solution sets.

Solutions:
First system: x = z − y ; (y , z) arbitrary.
Second system: x = 2z/5, y = 4z/5; z arbitrary.
Both solution sets are infinite, so which is larger?
Since the first solution set is a plane and second is a line we feel that the
first system has “more” solutions or the set is “larger”.

Rn , vector spaces in Rn

As usual we continue to think of elements of Rn as n × 1 columns.


Definition 7 (Vector space)
A subset V ⊆ Rn is called a vector space if

v, w ∈ V , a, b ∈ R =⇒ av + bw ∈ V .

Example: For any m × n matrix A,


• its null space N (A) is a vector space in Rn ,
• its range space ARn is a vector space in Rm .
Remark: Let V ⊆ Rn be a vector space.
Then V = N (A) for some suitable m × n matrix A. (Can arrange
m = n i.e. a square matrix.)
Also V = BRm -the range of B : Rm −→ Rn for some n × m matrix
B. (Can arrange m = n i.e. a square matrix.)

Linear span

A third, and last, method to produce vector spaces is via linear spans.

Definition 8 (Linear span)


Let S = {v1 , v2 , ..., vk } ⊂ Rn be a finite set of vectors in Rn . Let

L(S) = {c1 v1 + c2 v2 + · · · + ck vk | scalars c1 , c2 , ..., ck ∈ R}.

Then L(S) is called the linear span of S.

Theorem 9
L(S) is a vector space in Rn .

Redundancy in S

Example 10
Let v1 = [2 −4]T , v2 = [1 9]T , v3 = [3 5]T be 3 vectors in R2 . Let
S = {v1 , v2 , v3 }. Show that

1  L(S) = L({v1 , v2 }) = L({v2 , v3 }) = L({v1 , v3 }) = R2 .
2  L({vj }) ≠ R2 for any j.

There could be redundancy in S when spanning L(S).
Any one of the vj can be dropped without affecting the linear span.
But two of them can not be dropped. The linear span shrinks to a line.
[1.5]
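The redundancy claim of Example 10 can be verified directly; a sketch assuming numpy: v1 and v2 are independent (nonzero 2 × 2 determinant), so they span R2, and solving a 2 × 2 system exhibits v3 as a combination of them.

```python
import numpy as np

v1, v2, v3 = np.array([2., -4.]), np.array([1., 9.]), np.array([3., 5.])

# v1, v2 are independent, so L({v1, v2}) = R^2 already ...
assert np.linalg.det(np.column_stack([v1, v2])) != 0

# ... and v3 is redundant: solve c1*v1 + c2*v2 = v3.
c = np.linalg.solve(np.column_stack([v1, v2]), v3)
assert np.allclose(c[0]*v1 + c[1]*v2, v3)
print(c)   # [1. 1.]: v3 = v1 + v2, so L(S) = L({v1, v2})
```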
Linear independence

Definition 11 (Linear combinations)


Let S = {v1 , v2 , ..., vk } be a finite set of vectors in Rn and c1 , c2 , ..., ck be
real scalars. The vector c1 v1 + c2 v2 + · · · + ck vk is called a linear
combination of v1 , v2 , ..., vk .

Remark: The set of all the linear combinations above is the span L(S) of
S. Verify that S ⊂ L(S).

Definition 12 (Linear independence)


A finite set S = {v1 , v2 , ..., vk } of vectors in Rn is called linearly
INdependent if

c1 v1 + c2 v2 + · · · + ck vk = 0 =⇒ c1 = c2 = · · · = ck = 0.

Exercise: The above is equivalent to saying that no vj can be expressed as
a linear combination of the remaining ones.
Examples

Remark: It is to be noted that the definition of linear
dependence/independence does not use the row or column representation of
vectors; hence it is “intrinsic”.
Remark: The exercise above shows that no element of a linearly
INdependent S is dispensable for L(S), i.e. no redundancy!

Example: Let v1 = [1 0 0]T , v2 = [1 1 0]T and v3 = [1 1 1]T . Then
S = {v1 , v2 , v3 } is a l.i. set in R3 .

Example: S = {[2 −4]T , [1 9]T , [3 5]T } ⊂ R2 is NOT a l.i. set in R2 :

c1 [2 −4]T + c2 [1 9]T + c3 [3 5]T = [0 0]T for c1 = c2 = −c3 = 1.

We say S is a l.d. (linearly dependent) set of vectors.

Bases and dimensions

Definition 13 (Basis)
Let V ⊆ Rn be any vector space. A (finite) subset B ⊂ V is called a basis
of V if
1 B is a linearly independent set and
2 L(B) = V i.e. each v ∈ V is a linear combination of elements of B.

Definition 14 (Dimension)
The number of elements in a basis of V is called the dimension of V .
This is exactly the “size” of a vector space we were looking for.
The systems x + y − z = 0 and {x + 2y − 2z = 0, 2x − y = 0} have
solution spaces of dimensions 2 and 1 respectively, as intuitively felt.
Bases are {[−1 1 0]T , [1 0 1]T } and {[2 4 5]T }. [2.0]

Row-rank of a matrix

It is immediately obvious that the concepts of linear span, linear
dependence/independence, basis and dimension are equally applicable to the
1 × n row vectors. The following definitions are well justified. Let

A = [ A1 ]
    [ A2 ]
    [ .. ]
    [ Am ]

be an m × n real matrix where each Aj = [aj1 aj2 · · · ajn ] is a 1 × n row.

Definition 15 (Row space)



The linear span R(A) := L {A1 , A2 , ..., Am } of the rows of the matrix A is called
row-space of A.

Definition 16 (Row-rank)
The dimension of the row space R(A) of A is called the row-rank of A.

Column rank

Definition 17 (Column rank)


Let A = [A1 A2 ...An ] be m × n where Ak denotes the k th column of A
which, of course is a column vector in Rm . The column rank of A, denoted
rankc (A) is the maximum number of independent columns out of
{A1 , A2 , ..., An }.

Obviously, rankc (A) equals the dimension of the vector space


C(A) = L {A1 , A2 , ..., An } ⊂ Rm , called the column space of A. It is
interesting to observe that the column space is same as the range of A
when viewed as a map A : Rn −→ Rm . In other words,

C(A) = ARn .

Theorem 18 (Equality of ranks)


For any m × n matrix A, rank(A) = rankc (A).
Invariance under the row operations-row rank

Theorem 19
Let A be an m × n real matrix. If B is obtained from A by an elementary
row operation, then rank(A) = rank(B).

Proof:
Clearly each row of B is in R(A)-the row space of A. Hence
R(B) ⊆ R(A) =⇒ rank(B) ≤ rank(A). Since A can be recovered from B
by the inverse row operations, rank(B) ≥ rank(A). Proved.

Corollary 20
If  denotes a row echelon form of A, then rank(A) = rank(Â). Moreover,
rank(Â) equals the number of pivots in Â.

This neatly ties up our ad hoc rank with the standard definition.

Invariance under the row operations-col. rank

Theorem 21
Let A be an m × n real matrix. If B is obtained from A by an elementary
row operation, then rankc (A) = rankc (B).

Proof:
If B = EA, then B k = EAk holds for the columns of A and B. If
{Ak1 , Ak2 , ..., Akr } is a l.i. set of columns of A, then so is
{B k1 , B k2 , ..., B kr } and vice versa (by the inverse row operation
applied to B). Hence rankc (A) = rankc (B). (Indeed, c1 B k1 + · · · + cr B kr = 0 =⇒
E (c1 Ak1 + · · · + cr Akr ) = 0 =⇒ c1 Ak1 + · · · + cr Akr = 0 =⇒ each cj = 0
by l.i. of {Ak1 , ..., Akr }.)
Proved.
Corollary 22
If  denotes a row echelon form of A, then rankc (A) = rankc (Â).

Equality of row-rank and column-rank

Theorem 23
For any matrix A, its row rank equals its column rank.

Proof:
Consider the reduced REF Â. Let r = rank(A). The pivotal columns are
{e1 , e2 , ..., er } and they are clearly linearly independent. We conclude

rankc (A) ≥ rank(A).

Applying the above logic to AT , we get rankc (AT ) ≥ rank(AT ).


But obviously, rankc (AT ) = rank(A) and rank(AT ) = rankc (A).
Hence rank(A) ≥ rankc (A) and equality holds.

Equality of row and column ranks

In particular, the column rank of A equals that of any of its REFs Â. Since
rank(A) = rankc (A), the pivotal columns form a maximal linearly
independent set of columns of Â. So do the corresponding columns of A.

This justifies the following method of extracting a linearly INdependent set


out of any finite set of column vectors of Rn .

Linear (in)dependence and row echelon form
If {v1 , v2 , ..., vk } is a set of vectors in Rn , then form the obvious n × k
matrix V = [v1 v2 ... vk ]. Suppose a REF of V has r pivots. Then
If r < k the set is linearly dependent.
If r = k the set is linearly INdependent.
If r > k then what? Does the test FAIL?
Example 24

For S = {[0 0 1]T , [1 −2 0]T } ⊂ R3 :

V = [ 0   1 ]      [ 1   0 ]      [ 1   0 ]
    [ 0  −2 ]  ↦   [ 0  −2 ]  ↦   [ 0  −2 ]
    [ 1   0 ]      [ 0   1 ]      [ 0   0 ]

Conclusion: k = 2, r = 2. Linearly INdependent.
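The r-versus-k test can be sketched with numpy, where matrix_rank plays the role of counting pivots and `is_independent` is a name invented here. (As for the quiz: r can never exceed k, since the rank of an n × k matrix is at most k.)

```python
import numpy as np

def is_independent(vectors):
    """Test linear independence by comparing r (the rank) with k."""
    V = np.column_stack(vectors)
    return np.linalg.matrix_rank(V) == V.shape[1]

# Example 24: k = 2, r = 2, so independent.
print(is_independent([np.array([0., 0., 1.]), np.array([1., -2., 0.])]))  # True
# The earlier set of three vectors in R^2: r <= 2 < k = 3, so dependent.
print(is_independent([np.array([2., -4.]), np.array([1., 9.]),
                      np.array([3., 5.])]))                               # False
```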


Extracting linearly independent subsets

Suppose {v1 , v2 , ..., vk } is a linearly DEPendent set, as discovered by
r < k. Then DROP the non-pivotal vectors and you get a linearly
INdependent set.

Example 25

S = { v1 = [2 −4]T , v2 = [1 9]T , v3 = [3 5]T } ⊂ R2 .

V = [  2  1  3 ]      [ 2   1   3 ]
    [ −4  9  5 ]  ↦   [ 0  11  11 ]

Conclusions: (i) r = 2 < k = 3 =⇒ linear DEPendence.

(ii) Since there is no pivot in column 3, drop v3 . Therefore v1 , v2 form a
linearly INdependent subset such that the linear span is unchanged.
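numpy has no built-in pivot-column routine, so here is a minimal Gaussian-elimination sketch (the name `pivot_columns` and the tolerance are invented, and the code is not tuned for numerical robustness) that reproduces the conclusion of Example 25:

```python
import numpy as np

def pivot_columns(V, tol=1e-12):
    """Return the indices of the pivot columns of V, found by Gaussian
    elimination with partial pivoting."""
    M = np.array(V, dtype=float)
    m, n = M.shape
    row, pivots = 0, []
    for col in range(n):
        if row >= m:
            break
        p = row + np.argmax(np.abs(M[row:, col]))
        if abs(M[p, col]) < tol:
            continue                      # pivot-free column: skip it
        M[[row, p]] = M[[p, row]]         # swap the pivot row up
        M[row+1:] -= np.outer(M[row+1:, col] / M[row, col], M[row])
        pivots.append(col)
        row += 1
    return pivots

V = np.array([[2., 1., 3.], [-4., 9., 5.]])   # the matrix of Example 25
print(pivot_columns(V))   # [0, 1]: drop v3, keep v1 and v2
```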

Invertibility via rank

Definition 26 (Full rank)


An m × n matrix A is said to be of full rank if its rank equals min{m, n}.

min{m, n} is the maximal possible rank for any m × n matrix.

Theorem 27
A square matrix A is invertible if and only if it is of full rank.

Proof: Gauss-Jordan method of finding A−1 shows that the inverse will
exist if and only if the reduced row echelon form of the n × n matrix A
becomes the identity matrix In which has n pivots.
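Theorem 27 in code (a sketch assuming numpy; `invertible` is a helper name invented here):

```python
import numpy as np

def invertible(A):
    """A square matrix is invertible iff it is of full rank (Theorem 27)."""
    A = np.asarray(A, dtype=float)
    return A.shape[0] == A.shape[1] and np.linalg.matrix_rank(A) == A.shape[0]

print(invertible([[1., 2.], [3., 4.]]))   # True: rank 2 = n
print(invertible([[1., 2.], [2., 4.]]))   # False: rank 1 < n = 2
print(invertible([[1., 2., 3.]]))         # False: not square
```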

Nullity and the rank-nullity theorem

Definition 28 (Nullity)
If A is any m × n real matrix, the dimension of the null-space N (A) of A is
called the nullity of A and is denoted null(A).

Theorem 29 (Rank-nullity theorem)


Let A be any m × n real matrix. Let null(A) and rank(A) be respectively
the nullity and rank of A. Then

rank(A) + null(A) = n = no. of columns of A

Caution: The title may suggest rank−nullity. It is rank+nullity
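A numerical illustration of rank + nullity = n (a sketch assuming numpy; the 3 × 5 matrix is invented, and a null-space basis is read off from the SVD):

```python
import numpy as np

# A hypothetical 3x5 matrix: row 3 = row 1 + row 2, so rank(A) = 2.
A = np.array([[1., 2., 0., 1., 0.],
              [0., 0., 1., 1., 0.],
              [1., 2., 1., 2., 0.]])

r = np.linalg.matrix_rank(A)
_, _, Vh = np.linalg.svd(A)
null_basis = Vh[r:]                      # orthonormal basis of N(A)

assert np.allclose(A @ null_basis.T, 0)  # each basis vector solves Ax = 0
print(r, len(null_basis), r + len(null_basis))   # 2 3 5
```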

Proof of the rank-nullity theorem

Lemma 30
The rank of A is the number of pivots in any of its REF.

Proof: Let  be a REF of A with r pivots. The first r rows of  form a


basis of the row-space R(Â) of Â. But R(Â) = R(A) Hence r = rank(A).
Proof of Theorem ?? : Let  be a row-echelon form of A. Then there will
be rank(A) pivots and n − rank(A) pivot-free columns. The corresponding
n − rank(A) free variables describe the solutions of Ax = 0 i.e. the
null-space of A. Hence

null(A) = n − rank(A).

On ”solving” we get
rank(A) + null(A) = n.

An example of the rank-nullity equation

Consider the augmented matrix of a homogeneous system:

[ 1  3   1  −2  −3 | 0 ]
[ 1  3   3  −1  −4 | 0 ]
[ 2  6  −4  −7  −3 | 0 ]
[ 3  9   1  −7  −8 | 0 ]

Performing EROs yields:

E21 (−1), E31 (−2), E41 (−3):

[ 1  3   1  −2  −3 | 0 ]
[ 0  0   2   1  −1 | 0 ]
[ 0  0  −6  −3   3 | 0 ]
[ 0  0  −2  −1   1 | 0 ]

Example contd.

 
E32 (3), E42 (1):

[ 1  3  1  −2  −3 | 0 ]
[ 0  0  2   1  −1 | 0 ]      (REF)
[ 0  0  0   0   0 | 0 ]
[ 0  0  0   0   0 | 0 ]

The solution set is

N (A) = { (1/2) [−6x2 + 5x4 + 5x5 , 2x2 , −x4 + x5 , 2x4 , 2x5 ]T } ⊂ R5 .

Here x2 , x4 , x5 take arbitrary values, and we note that the second, fourth
and fifth columns are pivot-free.
Finally, null(A) = 3, rank(A) = 2 and null(A) + rank(A) = 5, which is the
no. of variables or the number of columns of A.

Example extended

 
Given

A = [ 1  3   1  −2  −3 ]
    [ 1  3   3  −1  −4 ]
    [ 2  6  −4  −7  −3 ]
    [ 3  9   1  −7  −8 ],

find a basis of the null space N (A) of A. The null space is described on
the previous slide. Since the real triple (x2 , x4 , x5 ) can take any value in
R3 , let us assign the values (1, 0, 0), (0, 1, 0) and (0, 0, 1) successively
(clearing the factor 1/2 by scaling by 2) to find

v1 = [−6 2 0 0 0]T , v2 = [5 0 −1 2 0]T , v3 = [5 0 1 0 2]T

as three linearly independent elements of N (A), which form a basis.
We have parametrized N (A) by R3 and also used its standard basis to find
a basis of N (A).
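The basis just found can be verified mechanically (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[1., 3.,  1., -2., -3.],
              [1., 3.,  3., -1., -4.],
              [2., 6., -4., -7., -3.],
              [3., 9.,  1., -7., -8.]])

v1 = np.array([-6., 2.,  0., 0., 0.])
v2 = np.array([ 5., 0., -1., 2., 0.])
v3 = np.array([ 5., 0.,  1., 0., 2.])
B = np.column_stack([v1, v2, v3])

assert np.allclose(A @ B, 0)             # each vj lies in N(A)
assert np.linalg.matrix_rank(B) == 3     # the vj are linearly independent
assert np.linalg.matrix_rank(A) == 5 - 3 # rank(A) = n - nullity, so dim N(A) = 3
print("basis of N(A) verified")
```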

A geometric viewpoint

Rank-nullity theorem: Let A be an m × n matrix. As a map A : Rn −→ Rm , A
‘transmits’ Rn linearly into Rm . Then null(A) is the number of dimensions which
vanish (transmission losses) and rank(A) = dim ARn is the number of dimensions
received. Naturally the sum rank(A) + null(A) = n = dim Rn is the total number
of dimensions transmitted. (A conservation law?)

Solvability of Ax = b: Given the linear map A : Rn −→ Rm and b ∈ Rm , we can
solve Ax = b for the unknown x ∈ Rn if and only if b ∈ ARn = C(A).
This geometrically obvious statement is equivalent to rank(A) = rank(A+) for the
solvability of the linear systems.
Adjoining b to the columns of A does not increase the linear span, viz.
L({A1 , A2 , ..., An }) = L({A1 , A2 , ..., An , b}) ⇐⇒ b ∈ C(A).

Solution set: If p ∈ Rn is any particular solution, i.e. Ap = b, then the set

S = p + N (A) = {p + v | v ∈ N (A)}

is the full set of solutions of Ax = b.

The solution set is a parallelly displaced null space N (A). [3.0]
