
Applied Linear Algebra

Miguel Angel Mota

York University
Definition
v is a vector of size n (an element of R^n) iff v ∈ M_{1×n} or v ∈ M_{n×1} (that is, iff v is either a row or a column). So,

$$\begin{pmatrix} 4 \\ \pi \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 & 7 \end{pmatrix}$$

are both vectors (one is an element of R^2 and the other is an element of R^3).

Note that, seen as matrices, $\begin{pmatrix} 3 & 5 \end{pmatrix}$ and $\begin{pmatrix} 3 \\ 5 \end{pmatrix}$ are not equal (since one of them is a 1 × 2 matrix while the other is a 2 × 1 matrix). However, both of them can be used to represent the ordered pair whose first entry is 3 and whose second entry is 5.

Finally, note that if v is a vector, then it is usual to write $\vec{v}$ instead of v.
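As a computational aside (not part of the original slides), the row/column distinction is visible in NumPy array shapes; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# A 2 x 1 column and a 1 x 3 row: both are "vectors" in the sense above.
col = np.array([[4], [np.pi]])     # an element of M_{2x1}, i.e., of R^2
row = np.array([[1, 0, 7]])        # an element of M_{1x3}, i.e., of R^3

print(col.shape, row.shape)        # (2, 1) (1, 3)

# Seen as matrices, (3 5) and its transpose are not equal,
# even though both represent the ordered pair (3, 5).
a = np.array([[3, 5]])
print(np.array_equal(a, a.T))      # False: shapes (1, 2) vs (2, 1) differ
```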
A linear system with m equations and n variables is a system of the form

$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2 \\ \quad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m \end{cases}$$

We say that the vector (t_1, t_2, . . . , t_n) ∈ R^n is a solution of the system iff (t_1, t_2, . . . , t_n) is a solution of each of these m equations. So,

• the corresponding solution set is always a subset of R^n, and

• when there are no solutions, of course S = ∅.
Note that the above system can be represented as follows:

$$\left(\begin{array}{cccc|c} a_{11} & a_{12} & \dots & a_{1n} & b_1 \\ a_{21} & a_{22} & \dots & a_{2n} & b_2 \\ \vdots & & & \vdots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} & b_m \end{array}\right)$$

This matrix is known as the augmented matrix of the system.

The vector
$$\vec{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}$$
(i.e., the last column of the augmented matrix) is known as the vector of independent terms.

Finally, the submatrix obtained by omitting the vector of independent terms is called the coefficient matrix of the system.
So,
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}$$
is the coefficient matrix of the system, and the augmented matrix of the system can be represented as

$$(A \mid \vec{b})$$
Example
The augmented matrix of the system

$$\begin{cases} w + 3x + 2y + 6z = 7 \\ w + 8x + z = -1 \\ w + x + y + z = 1 \end{cases}$$

is

$$\left(\begin{array}{cccc|c} 1 & 3 & 2 & 6 & 7 \\ 1 & 8 & 0 & 1 & -1 \\ 1 & 1 & 1 & 1 & 1 \end{array}\right)$$

Also,

$$\begin{pmatrix} 7 \\ -1 \\ 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 3 & 2 & 6 \\ 1 & 8 & 0 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}$$

are respectively the vector of independent terms and the coefficient matrix of the system.
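For readers following along in code, the same bookkeeping can be done by stacking the coefficient matrix and the vector of independent terms; a minimal sketch in NumPy (our illustration, not part of the original example):

```python
import numpy as np

A = np.array([[1, 3, 2, 6],
              [1, 8, 0, 1],
              [1, 1, 1, 1]])     # coefficient matrix
b = np.array([[7], [-1], [1]])   # vector of independent terms

augmented = np.hstack([A, b])    # the augmented matrix (A | b)
print(augmented)
```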
Now, we come back to the general question of how to find the solution set of a linear system. To do so, note that the systems

$$\begin{cases} w + 3x + 2y + 6z = 7 \\ w + 8x + z = -1 \\ w + x + y + z = 1 \end{cases} \quad\text{and}\quad \begin{cases} w + 3x + 2y + 6z = 7 \\ w + x + y + z = 1 \\ w + 8x + z = -1 \end{cases}$$

are clearly equivalent, and therefore it makes sense to write

$$\left(\begin{array}{cccc|c} 1 & 3 & 2 & 6 & 7 \\ 1 & 8 & 0 & 1 & -1 \\ 1 & 1 & 1 & 1 & 1 \end{array}\right) \sim_{R_2 \leftrightarrow R_3} \left(\begin{array}{cccc|c} 1 & 3 & 2 & 6 & 7 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 8 & 0 & 1 & -1 \end{array}\right)$$
So, we have three different types of elementary operations on matrices which preserve the solution set of the underlying linear systems:

• Those of type 1 are performed by interchanging two rows.

• Those of type 2 are performed by multiplying a row by a nonzero real number.

• Those of type 3 are performed by applying the method of elimination (which consists in adding a multiple of a row to another one with the aim of killing variables).
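Each of the three types is a one-line array manipulation. A minimal sketch in NumPy; the helper names (swap_rows, scale_row, add_multiple) are ours, chosen only to mirror the three types:

```python
import numpy as np

def swap_rows(M, i, j):        # type 1: R_i <-> R_j
    M[[i, j]] = M[[j, i]]

def scale_row(M, i, c):        # type 2: replace R_i by c * R_i (c != 0)
    M[i] = c * M[i]

def add_multiple(M, c, i, j):  # type 3: replace R_j by c * R_i + R_j
    M[j] = c * M[i] + M[j]

M = np.array([[1., 3, 2, 6, 7],
              [1, 8, 0, 1, -1],
              [1, 1, 1, 1, 1]])
swap_rows(M, 1, 2)             # reproduces the R_2 <-> R_3 step above
print(M)
```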
Example. Find the solution set of the system

$$\begin{cases} 2x + y + 3z = 0 \\ x + y + z = 1 \\ 2x + y - z = 4 \end{cases}$$

$$\left(\begin{array}{ccc|c} 2 & 1 & 3 & 0 \\ 1 & 1 & 1 & 1 \\ 2 & 1 & -1 & 4 \end{array}\right) \sim_{R_1 \leftrightarrow R_2} \left(\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 2 & 1 & 3 & 0 \\ 2 & 1 & -1 & 4 \end{array}\right) \sim_{-2R_1 + R_2} \left(\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & -1 & 1 & -2 \\ 2 & 1 & -1 & 4 \end{array}\right)$$

$$\sim_{-2R_1 + R_3} \left(\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & -1 & 1 & -2 \\ 0 & -1 & -3 & 2 \end{array}\right) \sim_{-R_2} \left(\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & -1 & 2 \\ 0 & -1 & -3 & 2 \end{array}\right) \sim_{1R_2 + R_3}$$
$$\left(\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & -4 & 4 \end{array}\right) \sim_{-\frac{1}{4}R_3} \left(\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 1 & -1 \end{array}\right) \sim_{1R_3 + R_2}$$

$$\left(\begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -1 \end{array}\right) \sim_{-1R_3 + R_1} \left(\begin{array}{ccc|c} 1 & 1 & 0 & 2 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -1 \end{array}\right) \sim_{-1R_2 + R_1} \left(\begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -1 \end{array}\right)$$

So, our solution set is S = {(1, 1, −1)}.
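The whole reduction above can be checked in one call with SymPy's rref (the library call is real; using it to verify this example is our addition):

```python
from sympy import Matrix

aug = Matrix([[2, 1,  3, 0],
              [1, 1,  1, 1],
              [2, 1, -1, 4]])
R, pivots = aug.rref()   # R is the RREF, pivots are the pivot columns
print(R)                 # Matrix([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, -1]])
print(pivots)            # (0, 1, 2)
```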


The order and choice of the elementary operations that we considered in the previous example are not unique, but they follow a certain logic. Essentially, we try to bring the original matrix to its reduced row echelon form. This concept will be formally defined later, but we can anticipate that, to reach that form, we seek to create (by means of elementary operations of the three types) a descending staircase of maximal height.

We first apply the elimination method, from top to bottom and from left to right, to kill all components below the start of each of the future steps. Elementary operations of type 2 are used at convenience and serve, among other things, to ensure that if a row has at least one nonzero component, then its first nonzero component is equal to one. This entry is called a leading one. Finally, we apply the elimination method again, but this time from bottom to top and from right to left, in order to guarantee that if a column has a leading one, then all other entries in that column are zero.
Definition
A linear system with m equations and n variables is said to be homogeneous iff its corresponding vector of independent terms is the zero vector of R^m.

Definition
A linear system is said to be consistent iff it has at least one solution (and so, a system is said to be inconsistent iff S = ∅).

Proposition
Every homogeneous system is consistent.
Proof. It suffices to note that if a homogeneous system has m equations and n variables, then the zero vector of R^n belongs to its solution set.

Since the zero vector of R^n is always a solution of a homogeneous system with n variables, we say that the zero vector of R^n is the trivial solution of any of those systems.
Of course, some homogeneous systems have many solutions (this is the case of the system whose only equation is x − y = 0 and whose solution set is {(x, x) | x ∈ R}) and others have only one solution.

Example. Find the solution set of the homogeneous system

$$\begin{cases} 2y + 3z = 0 \\ 2x - 3y + 2z = 0 \\ 3x + \frac{1}{2}y - z = 0 \end{cases}$$

$$\left(\begin{array}{ccc|c} 0 & 2 & 3 & 0 \\ 2 & -3 & 2 & 0 \\ 3 & \frac{1}{2} & -1 & 0 \end{array}\right) \sim_{R_1 \leftrightarrow R_2} \left(\begin{array}{ccc|c} 2 & -3 & 2 & 0 \\ 0 & 2 & 3 & 0 \\ 3 & \frac{1}{2} & -1 & 0 \end{array}\right) \sim_{-\frac{3}{2}R_1 + R_3} \left(\begin{array}{ccc|c} 2 & -3 & 2 & 0 \\ 0 & 2 & 3 & 0 \\ 0 & 5 & -4 & 0 \end{array}\right)$$
$$\sim_{-\frac{5}{2}R_2 + R_3} \left(\begin{array}{ccc|c} 2 & -3 & 2 & 0 \\ 0 & 2 & 3 & 0 \\ 0 & 0 & -\frac{23}{2} & 0 \end{array}\right) \sim_{-\frac{2}{23}R_3} \left(\begin{array}{ccc|c} 2 & -3 & 2 & 0 \\ 0 & 2 & 3 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) \sim_{-3R_3 + R_2}$$

$$\left(\begin{array}{ccc|c} 2 & -3 & 2 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) \sim_{-2R_3 + R_1} \left(\begin{array}{ccc|c} 2 & -3 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) \sim_{\frac{1}{2}R_2} \left(\begin{array}{ccc|c} 2 & -3 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right)$$

$$\sim_{3R_2 + R_1} \left(\begin{array}{ccc|c} 2 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right) \sim_{\frac{1}{2}R_1} \left(\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right)$$

So, our solution set is S = {(0, 0, 0)}.
Definition
A matrix is said to be in reduced row echelon form (RREF) iff it satisfies the following 4 conditions:
(1) All the zero rows (if any) appear at the bottom of the matrix.
(2) The first nonzero entry of a nonzero row is equal to 1. This entry is called the leading one of the row.
(3) For each nonzero row, its leading one appears to the right and below any leading ones in preceding rows.
(4) If a column contains a leading one, then all other entries in that column are zero.

If a matrix satisfies the first three conditions (but possibly not the fourth one), then that matrix is said to be in row echelon form (REF).
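The four conditions translate directly into a small checking routine. A minimal sketch (the function is_rref is ours, written only to mirror the definition; it assumes exact entries, e.g. integers or fractions):

```python
import numpy as np

def is_rref(M: np.ndarray) -> bool:
    """Check conditions (1)-(4) of the definition above."""
    last_lead = -1
    seen_zero_row = False
    for row in M:
        nonzero = np.flatnonzero(row)
        if nonzero.size == 0:
            seen_zero_row = True           # a zero row
            continue
        if seen_zero_row:
            return False                   # (1) zero rows must be at the bottom
        lead = nonzero[0]
        if row[lead] != 1:
            return False                   # (2) leading entry must equal 1
        if lead <= last_lead:
            return False                   # (3) leading ones move strictly right
        last_lead = lead
        if np.count_nonzero(M[:, lead]) != 1:
            return False                   # (4) leading one alone in its column
    return True

print(is_rref(np.array([[1, 0, 5], [0, 1, 7]])))        # True
print(is_rref(np.array([[0, 1, 8, 0], [0, 0, 1, 0]])))  # False: fails (4)
```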
Example
The following matrices are in RREF:

$$A = \begin{pmatrix} 1 & 0 & 5 \\ 0 & 1 & 7 \end{pmatrix} \quad B = \begin{pmatrix} 0 & 1 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \quad I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \quad \Theta = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

The following matrices are not in RREF:

$$C = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \quad D = \begin{pmatrix} 7 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \quad E = \begin{pmatrix} 0 & 1 & 5 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} \quad F = \begin{pmatrix} 0 & 1 & 8 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$

This is because C does not satisfy (1), D does not satisfy (2), E does not satisfy (3), and F does not satisfy (4) (but F is in row echelon form).

In general, a leading entry of a nonzero row is the leftmost nonzero entry in that row. So, D is not in RREF because the leading entry of its first row is not equal to 1.
Theorem (Gauss-Jordan)
If A ∈ M_{m×n}, there is a unique matrix B such that B is in RREF and B is obtained from A by applying elementary operations. Such a matrix B is known as the reduced row echelon form of A.

This theorem implies that, given the augmented matrix of a linear system, it is always possible to get its solution set by applying suitable elementary operations.

In particular, the so-called method of Gauss-Jordan (which consists in reducing a matrix to its RREF) always answers the question of whether or not a system is consistent.
Example
Using the Gauss-Jordan method we will verify that the system

$$\begin{cases} x + 2y = 0 \\ -x - 2y = 4 \end{cases}$$

is inconsistent. To see this, note that

$$\left(\begin{array}{cc|c} 1 & 2 & 0 \\ -1 & -2 & 4 \end{array}\right) \sim_{1R_1 + R_2} \left(\begin{array}{cc|c} 1 & 2 & 0 \\ 0 & 0 & 4 \end{array}\right) \sim_{\frac{1}{4}R_2} \left(\begin{array}{cc|c} 1 & 2 & 0 \\ 0 & 0 & 1 \end{array}\right)$$

The second row of the last matrix is saying that 0x + 0y = 1, and therefore the system does not have any solution.
Definition
• A pivot position in a matrix is the location of a leading one in its RREF.
• A pivot column in a matrix is a column containing a pivot position.
• The rank of a matrix is equal to the number of leading ones in its RREF.

So, the rank of a matrix is equal to the number of nonzero rows in its REF (because each of those rows has exactly one leading one).

Example
Since

$$\begin{pmatrix} 1 & 2 \\ -1 & -2 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} \quad\text{and}\quad \left(\begin{array}{cc|c} 1 & 2 & 0 \\ -1 & -2 & 4 \end{array}\right) \sim \left(\begin{array}{cc|c} 1 & 2 & 0 \\ 0 & 0 & 1 \end{array}\right)$$

the only pivot position of the coefficient matrix of the previous system is position (1,1), and that matrix has rank 1.
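In SymPy, rref returns the pivot columns alongside the reduced matrix, so pivot positions and rank can be read off directly; a minimal sketch using the two matrices above:

```python
from sympy import Matrix

coeff = Matrix([[1, 2], [-1, -2]])
aug   = Matrix([[1, 2, 0], [-1, -2, 4]])

_, pivots_coeff = coeff.rref()
_, pivots_aug = aug.rref()
print(pivots_coeff, len(pivots_coeff))  # (0,) 1   -> rank 1
print(pivots_aug, len(pivots_aug))      # (0, 2) 2 -> rank 2
```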
In the case of the corresponding augmented matrix, its pivot positions are (1,1) and (2,3), its pivot columns are the first and the last one, and it has rank 2.

So, the rank of the augmented matrix of this inconsistent system is equal to

2 = 1 + 1 = rank of the coefficient matrix + 1.

More generally, we have the following result.

Theorem
A linear system is inconsistent iff the rank of its augmented matrix is equal to the rank of its coefficient matrix plus one.
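The theorem gives a purely numerical consistency test: compare the two ranks. A minimal sketch with NumPy's matrix_rank (a floating-point rank, which is adequate for small examples like this one):

```python
import numpy as np

A = np.array([[1, 2], [-1, -2]])
b = np.array([[0], [4]])

rank_coeff = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.hstack([A, b]))
print(rank_coeff, rank_aug)        # 1 2
print(rank_aug == rank_coeff + 1)  # True -> the system is inconsistent
```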
It is also worth mentioning that the Gauss-Jordan method shows whether or not a system has an infinite number of solutions.

Example
Find the solution set of the system

$$\begin{cases} x + 2y - 3z = -4 \\ 2x + y - 3z = 4 \end{cases}$$

$$\left(\begin{array}{ccc|c} 1 & 2 & -3 & -4 \\ 2 & 1 & -3 & 4 \end{array}\right) \sim_{-2R_1 + R_2} \left(\begin{array}{ccc|c} 1 & 2 & -3 & -4 \\ 0 & -3 & 3 & 12 \end{array}\right) \sim_{-\frac{1}{3}R_2} \left(\begin{array}{ccc|c} 1 & 2 & -3 & -4 \\ 0 & 1 & -1 & -4 \end{array}\right) \sim_{-2R_2 + R_1} \left(\begin{array}{ccc|c} 1 & 0 & -1 & 4 \\ 0 & 1 & -1 & -4 \end{array}\right)$$
So, x − z = 4, y − z = −4, and z is a free variable (i.e., it does not have any restriction). Isolating x and y in terms of this free variable, we get

S = {(4 + z, −4 + z, z) | z ∈ R}

and therefore there is an infinite number of solutions. Some of them are

. . . , (4 − 1, −4 − 1, −1), (4 + 0, −4 + 0, 0), (4 + 1, −4 + 1, 1), . . .

Terminology. Since the general form of the solution set is expressed in terms of free variables, free variables are also known as parameters.
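SymPy's linsolve returns exactly this parametrized form, leaving the free variable as the parameter; a minimal sketch (the call signature is real, the example is ours):

```python
from sympy import Matrix, linsolve, symbols

x, y, z = symbols('x y z')
aug = Matrix([[1, 2, -3, -4],
              [2, 1, -3,  4]])   # augmented matrix of the system
print(linsolve(aug, x, y, z))    # {(z + 4, z - 4, z)}
```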
Definition
Suppose that L is a linear system with m equations and n variables and that x is one of those variables. We say that
(•) x is a basic variable (restricted variable) of L iff the column of x in the corresponding REF has a leading one.
(◦) x is a free variable of L iff x is not a basic variable of L.

For instance, in the case of the system

$$\begin{cases} x + 2y - 3z = -4 \\ 2x + y - 3z = 4 \end{cases}$$

(•) x and y are basic variables.

(◦) z is a free variable.
Of course, every linear system is either inconsistent or it has at least one solution. In this second case, there are also two subcases to be considered, depending on whether or not there are free variables. This very easy argument justifies the following result.

Proposition
If L is a linear system, then one and only one of the following alternatives holds:
(a) L is inconsistent. That is, its REF contains a row of the form

$$(0 \;\; 0 \;\; \dots \;\; 0 \mid c),$$

where c is a nonzero real number.

(b) L has exactly one solution. So, (a) does not hold and the REF of L does not have free variables.

(c) L has an infinite number of solutions. That is, (a) does not hold and the REF of L has at least one free variable.
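Combining this proposition with the rank theorem above, the three alternatives can be told apart from the ranks alone. A minimal sketch in SymPy (the function classify is ours, not a library routine):

```python
from sympy import Matrix

def classify(A: Matrix, b: Matrix) -> str:
    """Decide which of alternatives (a), (b), (c) holds for (A | b)."""
    rank_coeff = A.rank()
    rank_aug = A.row_join(b).rank()
    if rank_aug == rank_coeff + 1:
        return "(a) inconsistent"
    if rank_coeff == A.cols:           # rank equals n: no free variables
        return "(b) exactly one solution"
    return "(c) infinitely many solutions"

print(classify(Matrix([[1, 2], [-1, -2]]), Matrix([0, 4])))
# (a) inconsistent
print(classify(Matrix([[1, 2, -3], [2, 1, -3]]), Matrix([-4, 4])))
# (c) infinitely many solutions
```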
Proposition
If H is a homogeneous linear system having more variables than equations, then H has an infinite number of solutions.

Proof.
Suppose that H has m equations and n variables. By hypothesis, m < n. Also, H is homogeneous and hence consistent. Moreover, since

#basic variables of H = #leading ones ≤ m < n,

H has at least n − m > 0 free variables, and those free variables give rise to an infinite number of solutions.
Example
The homogeneous system

$$\begin{cases} \pi x + 8.9y + 99z = 0 \\ 26x + 666y - 3.9z = 0 \end{cases}$$

has an infinite number of solutions, since it is consistent and has at least one free variable.

Question. What happens in the case of homogeneous linear systems satisfying that the number of equations is equal to the number of variables?
The proof of the following proposition is essentially contained in its own statement.

Proposition
Let H be a homogeneous linear system with n equations and n variables (and so, its coefficient matrix is an n × n matrix). The following are equivalent (TFAE).
(1) H has a unique solution.
(2) H does not have free variables.
(3) The REF of the coefficient matrix of H has n leading ones.
(4) The RREF of the coefficient matrix of H is the n × n identity matrix.
Example
Consider the system

$$\begin{cases} x - y + z = 0 \\ 2x - 2y + z = 0 \\ 3x - 3y + z = 0 \end{cases}$$

and note that this homogeneous system has infinitely many solutions (to see this, it is enough to take z = 0 and x = y). So, the RREF of

$$\begin{pmatrix} 1 & -1 & 1 \\ 2 & -2 & 1 \\ 3 & -3 & 1 \end{pmatrix}$$

is not the identity matrix.
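The same conclusion can be confirmed mechanically (the SymPy check is our addition):

```python
from sympy import Matrix, eye

A = Matrix([[1, -1, 1],
            [2, -2, 1],
            [3, -3, 1]])
R, pivots = A.rref()
print(R)            # Matrix([[1, -1, 0], [0, 0, 1], [0, 0, 0]])
print(R == eye(3))  # False, so the system has nontrivial solutions (y is free)
```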
