Lecture 2-3 (Spring 2024)

Lecture II.

Linear Equations and Solutions

Examples of linear equations:

(1) $x_1 + 2x_2 = 1$, solutions $(x_1, x_2) = (1 - 2t,\ t)$, $t \in \mathbb{R}$. (There are infinitely many solutions!)

(2) $\begin{cases} x_1 + 2x_2 = 1 \\ 2x_1 + 2x_2 = 4 \end{cases}$, exactly one solution $(x_1, x_2) = (3, -1)$.

(3) $\begin{cases} x_1 + x_2 = 4 \\ 2x_1 + 2x_2 = 6 \end{cases}$. No solutions!
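The three behaviors above can be checked numerically. The sketch below is illustrative Python (not part of the lecture); the helper name `residual` is ours:

```python
# Illustrative sketch (not part of the lecture): plug the claimed solutions
# of the three opening systems back in and check the residuals.

def residual(A, x, b):
    # entries of A x - b; all zeros iff x solves the system
    return [sum(a * xi for a, xi in zip(row, x)) - bi
            for row, bi in zip(A, b)]

# (1) x1 + 2 x2 = 1: every (1 - 2t, t) is a solution
for t in range(-3, 4):
    assert residual([[1, 2]], [1 - 2 * t, t], [1]) == [0]

# (2) exactly one solution (3, -1)
assert residual([[1, 2], [2, 2]], [3, -1], [1, 4]) == [0, 0]

# (3) inconsistent: doubling x1 + x2 = 4 forces 2 x1 + 2 x2 = 8, never 6
assert 2 * 4 != 6
```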

In general we consider a system of $m$ linear equations in $n$ unknowns $x_1, x_2, \ldots, x_n$ described by

$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\ \qquad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m \end{cases} \tag{2.0}$$

in which $a_{ij}, b_k \in \mathbb{R}$ are known scalars. Regarding the solvability of general linear equations, we have the following fact, which is to be proved later on.


Fact 2.0.1: A system of linear equations has no solutions, or has exactly one solution, or has infinitely many
solutions.

Definition 2.0.2: A system of linear equations that has no solution is said to be inconsistent; if the system has at least one solution, it is called consistent.

2-1. Procedures for Solving Linear Equations

To further investigate solvability (i.e., to establish Fact 2.0.1), we need a systematic approach to solving linear equations, one phrased in vector-matrix notation and leveraging the matrix operations introduced in Lecture I.

Recall: How do we solve linear equations at the high-school level?

Answer: Successive elimination of unknowns via algebraic manipulations of the equations.
⇒ A more systematic approach via matrix operations: the so-called Gauss (or Gauss-Jordan) elimination, which is the focus of this subsection.

Each left-hand side of (2.0) is a row-column product, e.g.
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n.$$

Put in matrix form, (2.0) reads
$$\underbrace{\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}}_{A_{m\times n}} \underbrace{\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}}_{x_{n\times 1}} = \underbrace{\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}}_{b_{m\times 1}}$$

The "augmented matrix" associated with the above linear equation set is defined to be
$$\underbrace{[A \mid b]}_{m\times(n+1)} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{bmatrix}$$

Unknown elimination directly via equation manipulations involves operations like

(1) Multiply an equation through by a nonzero constant.


(2) Add a multiple of one equation to another.
(3) Interchange two equations (when necessary).

This amounts to performing “elementary row operations” on the augmented matrix, leading to the so-called
Gauss elimination.

“Elementary row operations”:


(1) Multiply a row through by a nonzero constant.
(2) Add a multiple of one row to another.
(3) Interchange two rows.

Example 2.1.1: (Gauss elimination) Consider
$$\begin{cases} x + y + 2z = 9 & (1) \\ 2x + 4y - 3z = 1 & (2) \\ 3x + 6y - 5z = 0 & (3) \end{cases}, \qquad \text{augmented matrix } \begin{bmatrix} 1 & 1 & 2 & 9 \\ 2 & 4 & -3 & 1 \\ 3 & 6 & -5 & 0 \end{bmatrix} \begin{matrix} \leftarrow r_1 \\ \leftarrow r_2 \\ \leftarrow r_3 \end{matrix}$$

Use (1) to eliminate the unknown $x$ in (2) and (3):

(2) − (1)×2, i.e., $r_2' = r_2 - 2r_1$: the second equation becomes $2y - 7z = -17$ (2a), the second row becomes $[\,0 \;\; 2 \;\; {-7} \mid {-17}\,]$.

(2a)×(1/2), i.e., $r_2'' = \tfrac{1}{2}r_2'$: $y - \tfrac{7}{2}z = -\tfrac{17}{2}$ (2b), row $[\,0 \;\; 1 \;\; {-\tfrac{7}{2}} \mid {-\tfrac{17}{2}}\,]$.

(3) − (1)×3, i.e., $r_3' = r_3 - 3r_1$: $3y - 11z = -27$ (3a), row $[\,0 \;\; 3 \;\; {-11} \mid {-27}\,]$.

Use (2b) to eliminate the unknown $y$ in (3a):

(3a) − (2b)×3, i.e., $r_3'' = r_3' - 3r_2''$: $-\tfrac{1}{2}z = -\tfrac{3}{2}$ (3b), row $[\,0 \;\; 0 \;\; {-\tfrac{1}{2}} \mid {-\tfrac{3}{2}}\,]$.

(3b)×(−2), i.e., $r_3''' = -2\,r_3''$: $z = 3$ (3c), row $[\,0 \;\; 0 \;\; 1 \mid 3\,]$. The augmented matrix is now
$$\begin{bmatrix} 1 & 1 & 2 & 9 \\ 0 & 1 & -\tfrac{7}{2} & -\tfrac{17}{2} \\ 0 & 0 & 1 & 3 \end{bmatrix}$$

Note: So far only elementary row operations of the first and second kinds are involved.

Use back substitution to find the solution:
From (3c) we have $z = 3$;
(2b) ⇒ $y = -\tfrac{17}{2} + \tfrac{7}{2}\cdot 3 = 2$;
(1) ⇒ $x = 9 - 2 - 2\cdot 3 = 1$.
Summary of the Gauss elimination procedure for solving $Ax = b$:
(1) Form the augmented matrix $[A \mid b]$.
(2) Perform elementary row operations to put $[A \mid b]$ in row-echelon form (refer to the definition below).
(3) Find solutions by back substitution.
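The procedure above can be sketched in code. The following is an illustrative Python implementation (not from the lecture; the helper name `gauss_solve` is ours), using exact rational arithmetic so the fractions of Example 2.1.1 come out exactly:

```python
# A minimal sketch of Gauss elimination with back substitution,
# using exact arithmetic via Fraction. Applied to Example 2.1.1.
from fractions import Fraction

def gauss_solve(aug):
    """Solve a square system with a unique solution, given [A | b]."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for k in range(n):
        # interchange rows if the pivot position is zero (operation (3))
        if m[k][k] == 0:
            swap = next(i for i in range(k + 1, n) if m[i][k] != 0)
            m[k], m[swap] = m[swap], m[k]
        # scale the pivot row so the leading entry is 1 (operation (1))
        m[k] = [x / m[k][k] for x in m[k]]
        # eliminate the unknown below the pivot (operation (2))
        for i in range(k + 1, n):
            m[i] = [a - m[i][k] * p for a, p in zip(m[i], m[k])]
    # back substitution on the row-echelon form
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))
    return x

print(gauss_solve([[1, 1, 2, 9], [2, 4, -3, 1], [3, 6, -5, 0]]))
```

Running this on the augmented matrix of Example 2.1.1 reproduces the solution $(x, y, z) = (1, 2, 3)$.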

Definition 2.1.2: A matrix is in row-echelon form if
(1) In each nonzero row, the first nonzero entry (the leading entry, or pivot) equals one.
(2) All zero rows (if any) are at the bottom of the matrix.
(3) The leading entries of the nonzero rows, as one moves from top to bottom, lie successively further to the right. (Reading rows from top to bottom, the leading 1's move from left to right.) □

Example 2.1.3: The following three matrices are in row-echelon form
$$\begin{bmatrix} 1 & 4 & -3 & 7 \\ 0 & 1 & 6 & 3 \\ 0 & 0 & 1 & 5 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 & 2 & 6 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \qquad \text{(check (1)~(3))}$$
but the two matrices
$$\begin{bmatrix} 1 & 4 & -3 & 7 \\ 0 & 0 & 1 & 5 \\ 0 & 1 & 6 & 3 \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
are not: the first fails to satisfy (3), the second fails to satisfy (2).

Recall Gauss elimination:
$$[A \mid b] \xrightarrow{\text{elementary row operations}} \text{row-echelon form} \;\Rightarrow\; \text{solution by back substitution.}$$

How about
$$[A \mid b] \xrightarrow{\text{more elementary row operations}} \begin{pmatrix}\text{a form simpler than} \\ \text{the row-echelon form}\end{pmatrix} \;\Rightarrow\; \text{an easier way of finding the solution?}$$

Approach: the Gauss-Jordan elimination, illustrated by the next example.
Example 2.1.4: (continued from Example 2.1.1; Gauss-Jordan elimination) We start from the row-echelon form obtained before:
$$\begin{cases} x + y + 2z = 9 & (1) \\ y - \tfrac{7}{2}z = -\tfrac{17}{2} & (2) \\ z = 3 & (3) \end{cases}, \qquad \begin{bmatrix} 1 & 1 & 2 & 9 \\ 0 & 1 & -\tfrac{7}{2} & -\tfrac{17}{2} \\ 0 & 0 & 1 & 3 \end{bmatrix}$$
Use (2) to eliminate the unknown $y$ in (1): $r_1 \leftarrow r_1 - r_2$ gives
$$x + \tfrac{11}{2}z = \tfrac{35}{2} \;\; (1a), \qquad \begin{bmatrix} 1 & 0 & \tfrac{11}{2} & \tfrac{35}{2} \\ 0 & 1 & -\tfrac{7}{2} & -\tfrac{17}{2} \\ 0 & 0 & 1 & 3 \end{bmatrix}$$
Use (3) to eliminate the unknown $z$ in (1a) and (2): $r_1 \leftarrow r_1 - \tfrac{11}{2}r_3$ and $r_2 \leftarrow r_2 + \tfrac{7}{2}r_3$ give
$$\begin{cases} x = 1 \\ y = 2 \\ z = 3 \end{cases}, \qquad \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 3 \end{bmatrix}$$
which yields the solution! □

Summary of the Gauss-Jordan elimination procedure for solving $Ax = b$:
(1) Form the augmented matrix $[A \mid b]$.
(2) Perform elementary row operations to put $[A \mid b]$ in reduced row-echelon form (refer to the definition below).
(3) Find solutions by inspection.
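The full Gauss-Jordan reduction can also be sketched in code. The following illustrative Python (not from the lecture; the helper name `rref` is ours) reduces any matrix to reduced row-echelon form with exact arithmetic:

```python
# Sketch of Gauss-Jordan elimination to reduced row-echelon form,
# with exact arithmetic via Fraction.
from fractions import Fraction

def rref(mat):
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # find a row with a nonzero entry in this column, at or below pivot_row
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # scale so the leading entry is 1
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]
        # zero out the entries below AND above the pivot
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                m[r] = [a - m[r][col] * p for a, p in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

print(rref([[1, 1, 2, 9], [2, 4, -3, 1], [3, 6, -5, 0]]))
```

Applied to the augmented matrix of Example 2.1.1, it returns $[I \mid (1, 2, 3)]$, so the solution is read off by inspection.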

Definition 2.1.5: A matrix is in reduced row-echelon form if and only if it is in row-echelon form and further satisfies
(4) All entries below and above each leading entry are zero. □

Example 2.1.6: The matrices
$$\begin{bmatrix} 0 & 1 & -2 & 0 & 1 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 3 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$
are in reduced row-echelon form, but
$$\begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} 1 & 0 & 5 \\ 0 & 3 & 2 \end{bmatrix}$$
are not (the first fails (4), the second fails (1)).

The example considered above has exactly one solution, which was found via the Gauss(-Jordan) elimination process. Linear equations can also have more than one solution, or no solutions at all. Gauss(-Jordan) elimination remains quite useful in determining whether a solution exists (and in describing the solution set whenever one does); this is illustrated by the next two examples.

Example 2.1.7: (linear equations with infinitely many solutions)
$$\begin{cases} x - 2y + z = 1 \\ 2x - 4y - 3z = -8 \end{cases}, \qquad \text{augmented matrix } \begin{bmatrix} 1 & -2 & 1 & 1 \\ 2 & -4 & -3 & -8 \end{bmatrix}.$$
Perform elementary row operations:
$$\begin{bmatrix} 1 & -2 & 1 & 1 \\ 2 & -4 & -3 & -8 \end{bmatrix} \to \begin{bmatrix} 1 & -2 & 1 & 1 \\ 0 & 0 & -5 & -10 \end{bmatrix} \to \begin{bmatrix} 1 & -2 & 1 & 1 \\ 0 & 0 & 1 & 2 \end{bmatrix} \to \begin{bmatrix} 1 & -2 & 0 & -1 \\ 0 & 0 & 1 & 2 \end{bmatrix}, \; \text{the reduced row-echelon form.}$$
Hence
$$\begin{cases} x - 2y = -1 \\ z = 2 \end{cases} \;\Rightarrow\; \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2t - 1 \\ t \\ 2 \end{bmatrix} = t \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} + \begin{bmatrix} -1 \\ 0 \\ 2 \end{bmatrix}, \quad t \in \mathbb{R}. \;\; \square$$
     
Example 2.1.8: (linear equations with no solutions)
$$\begin{cases} x - 2y + z = 1 \\ 2x - 4y + 2z = 3 \end{cases}, \qquad \begin{bmatrix} 1 & -2 & 1 & 1 \\ 2 & -4 & 2 & 3 \end{bmatrix} \to \begin{bmatrix} 1 & -2 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
There do not exist $(x, y, z)$ s.t. $0\cdot x + 0\cdot y + 0\cdot z = 1$. □

2.2. Elementary Row Operations as Matrix Multiplication

Recall the three kinds of elementary row operations on the augmented matrix $[A \mid b]$ used to solve the linear equation $Ax = b$:
(1) Interchange two rows.
(2) Multiply a row through by a nonzero constant.
(3) Add a scalar multiple of one row to another.

Each of the three operations can be characterized by an associated "elementary matrix"!

Example 2.2.1: Let $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \in \mathbb{R}^{3\times 3}$, $I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, and
$$P_{12} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{(obtained by interchanging row 1 and row 2 of } I\text{)}.$$
Then
$$P_{12}A = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \quad \text{(interchanging row 1 and row 2 of } A\text{!)}$$
Similarly we can form
$$P_{13} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \qquad P_{23} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix};$$
$P_{12}$, $P_{23}$, $P_{13}$ are called elementary matrices of the first kind. Next we consider
$$S_1 = \begin{bmatrix} r & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad S_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & r & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad S_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & r \end{bmatrix} \quad (r \neq 0),$$
$$S_1 A = \begin{bmatrix} r a_{11} & r a_{12} & r a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \quad \text{(multiply the 1st row by } r\text{)}.$$
$S_1$, $S_2$, $S_3$ are called elementary matrices of the second kind. The last type of elementary matrices is
$$T_{12} = \begin{bmatrix} 1 & 0 & 0 \\ a & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad T_{13} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ b & 0 & 1 \end{bmatrix}, \qquad T_{23} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & c & 1 \end{bmatrix},$$
$$T_{23} A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ c a_{21} + a_{31} & c a_{22} + a_{32} & c a_{23} + a_{33} \end{bmatrix} \quad \text{(add the 2nd row multiplied by } c \text{ to the 3rd row)}.$$
$T_{12}$, $T_{13}$, and $T_{23}$ are called elementary matrices of the third kind. □

Important observations:
Performing an elementary row operation on a matrix is equivalent to multiplying the matrix from the left by an elementary matrix!
Elementary matrix multiplication thus provides an analytical expression of elementary row operations.

Example 2.2.2: Let $A = \begin{bmatrix} 0 & 1 & 1 \\ 3 & 6 & 1 \\ 2 & 4 & 2 \end{bmatrix}$. Then
$$A \xrightarrow[P_{13}]{\text{row1} \leftrightarrow \text{row3}} \begin{bmatrix} 2 & 4 & 2 \\ 3 & 6 & 1 \\ 0 & 1 & 1 \end{bmatrix} \xrightarrow[S_1]{\frac{1}{2}\cdot\text{row1}} \begin{bmatrix} 1 & 2 & 1 \\ 3 & 6 & 1 \\ 0 & 1 & 1 \end{bmatrix} \xrightarrow[T_{12}]{\text{row2} - 3\cdot\text{row1}} \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & -2 \\ 0 & 1 & 1 \end{bmatrix}$$
$$\xrightarrow[S_2]{(-2)\cdot\text{row2}} \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 4 \\ 0 & 1 & 1 \end{bmatrix} \xrightarrow[P_{23}]{\text{row2} \leftrightarrow \text{row3}} \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 4 \end{bmatrix} = B$$
$$\Rightarrow\; P_{23}\,S_2\,T_{12}\,S_1\,P_{13}\,A = B$$
(here $S_1$ has $r = \tfrac{1}{2}$, $T_{12}$ has $a = -3$, and $S_2$ has $r = -2$). □
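The observation that left-multiplication by an elementary matrix performs the corresponding row operation is easy to verify numerically. A small illustrative sketch (not from the lecture; `matmul` is our helper, and the sample matrix is arbitrary):

```python
# Sketch: each elementary row operation equals left-multiplication by an
# elementary matrix (3x3 case of Example 2.2.1; plain-list matrix product).

def matmul(P, A):
    return [[sum(P[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(P))]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

P12 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # first kind: interchange rows 1, 2
S1  = [[5, 0, 0], [0, 1, 0], [0, 0, 1]]   # second kind: multiply row 1 by r = 5
T23 = [[1, 0, 0], [0, 1, 0], [0, 2, 1]]   # third kind: add 2 x (row 2) to row 3

assert matmul(P12, A) == [[4, 5, 6], [1, 2, 3], [7, 8, 9]]
assert matmul(S1, A)  == [[5, 10, 15], [4, 5, 6], [7, 8, 9]]
assert matmul(T23, A) == [[1, 2, 3], [4, 5, 6], [15, 18, 21]]
```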

2.3 Solvability of Linear Equations

Recall: To solve $Ax = b$,
(1) Form the augmented matrix $[A \mid b]$.
(2) Perform elementary row operations on $[A \mid b]$ to obtain the reduced row-echelon form $[G \mid h]$.
(3) Find the solutions (if they exist) by inspection.

Question: When does $Ax = b$ have a solution?

The reduced row-echelon form plays a key role in answering this question. Let us first work through a number of examples to help infer the important properties.

Example 2.3.1:
$$\begin{cases} x + y + 2z = 9 \\ 2x + 4y - 3z = 1 \\ 3x + 6y - 5z = 0 \end{cases}, \qquad \text{augmented matrix } \begin{bmatrix} 1 & 1 & 2 & 9 \\ 2 & 4 & -3 & 1 \\ 3 & 6 & -5 & 0 \end{bmatrix},$$
$$\text{reduced row-echelon form } [G \mid h] = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 3 \end{bmatrix}, \quad \text{exactly one solution } \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}.$$
Note that (1) # of pivots of $G$ = # of pivots of $[G \mid h]$ (= 3).
(2) # of unknowns (= 3) = # of pivots of $[G \mid h]$. □

Example 2.3.2:
$$\begin{cases} 2x + 2y + z + 7w = 6 \\ x + y + 2z + 8w = 0 \\ x + y + 2w = 4 \end{cases}, \qquad \text{augmented matrix } \begin{bmatrix} 2 & 2 & 1 & 7 & 6 \\ 1 & 1 & 2 & 8 & 0 \\ 1 & 1 & 0 & 2 & 4 \end{bmatrix},$$
$$\text{reduced row-echelon form } [G \mid h] = \begin{bmatrix} 1 & 1 & 0 & 2 & 4 \\ 0 & 0 & 1 & 3 & -2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad \text{which yields } \begin{cases} x + y + 2w = 4 \\ z + 3w = -2 \end{cases}$$
Let $w = a$ and $y = b$ (free). Then $x = 4 - 2a - b$ and $z = -2 - 3a$, i.e.,
$$\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix} = \begin{bmatrix} 4 - 2a - b \\ b \\ -2 - 3a \\ a \end{bmatrix} = \begin{bmatrix} 4 \\ 0 \\ -2 \\ 0 \end{bmatrix} + a \begin{bmatrix} -2 \\ 0 \\ -3 \\ 1 \end{bmatrix} + b \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad a, b \in \mathbb{R}.$$
There are infinitely many solutions.
Note that (1) # of pivots of $G$ = # of pivots of $[G \mid h]$ (= 2).
(2) # of unknowns (= 4) > # of pivots of $[G \mid h]$ (= 2).
(3) The expression of the solution needs 2 free parameters.
(4) 2 = # of free parameters = # of unknowns (= 4) − # of pivots (= 2) in the reduced row-echelon form.
This is true in general: if a solution exists, the number $q$ of free parameters = (# of unknowns) − (# of pivots in the reduced row-echelon form). □

Example 2.3.4:
$$\begin{cases} x + 2y + z = 1 \\ 2x + 4y + 2z = 3 \end{cases}, \qquad \begin{bmatrix} 1 & 2 & 1 & 1 \\ 2 & 4 & 2 & 3 \end{bmatrix} \to \text{reduced row-echelon form } [G \mid h] = \begin{bmatrix} 1 & 2 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \; \text{no solutions!}$$
Note that # of pivots of $G$ (= 1) $\neq$ # of pivots of $[G \mid h]$ (= 2). □
 

Theorem 2.3.5: Consider the linear equation $Ax = b$. Let $[G \mid h]$ be the row-echelon form of the augmented matrix $[A \mid b]$. Then the linear equation has a solution, hence is consistent, if and only if $G$ and $[G \mid h]$ have the same number of pivots.
[Proof]: (⇐) Follow the procedures we described (Gauss elimination + back substitution).
(⇒) Note that # of pivots of $[G \mid h]$ ≥ # of pivots of $G$ (why?). If the strict inequality holds, we must have one equation of the form $[0\ 0\ \cdots\ 0]\,x = a$ with $a \neq 0$ ⇒ $Ax = b$ has no solutions. Thus if $Ax = b$ has solutions, the equality must hold. □
Definition 2.3.6: The number of pivots (or leading entries) in the reduced row-echelon form $G$ of a matrix $A$ is called the rank of $A$. □

Theorem 2.3.7: (Solvability of the linear equation $Ax = b$)

Consider the linear equation $Ax = b$. Then exactly one of the following possibilities must hold:
(1) If rank of $[A \mid b]$ > rank of $A$, then $Ax = b$ has no solutions.
(2) If rank of $[A \mid b]$ = rank of $A$ = number of unknowns, then $Ax = b$ has a unique solution.
(3) If rank of $[A \mid b]$ = rank of $A$ < number of unknowns, then $Ax = b$ has infinitely many solutions.
[Proof]: Let
$r$ = rank of $A$, $s$ = rank of $[A \mid b]$, and $n$ = number of unknowns = number of columns of $A$.
We note that
(i) Since # of pivots of $G$ ≤ # of pivots of $[G \mid h]$, we have $r \leq s$.
(ii) Since the leading entries of $G$ lie successively to the right (hence in different columns), the total number of pivots of $G$ does not exceed the number of columns of $G$ (or $A$). Hence $r \leq n$.
Regarding solvability of the equation, there are only three possibilities:
$$\begin{cases} r < s & \text{(no solutions, Case I)} \\[4pt] r = s \text{ (solvable)} & \begin{cases} r = s = n & \text{(Case II)} \\ r = s < n & \text{(Case III)} \end{cases} \end{cases}$$
(1) Case I: $Ax = b$ has no solutions, as shown above.
(2) Case II: The reduced row-echelon form of $[A \mid b]$ reads
$$[G \mid h] = \begin{bmatrix} 1 & 0 & \cdots & 0 & h_1 \\ 0 & 1 & \cdots & 0 & h_2 \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & h_n \\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & & & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \end{bmatrix} \;\Rightarrow\; \text{unique solution.}$$
(3) Case III: The unknowns corresponding to the non-pivot columns can be assigned arbitrarily; so $Ax = b$ has infinitely many solutions. □

Remark: Hence $Ax = b$ is consistent iff rank of $A$ = rank of $[A \mid b]$. □
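The three cases of Theorem 2.3.7 can be decided mechanically from the two ranks. An illustrative sketch (not from the lecture; the helper names `rank` and `classify` are ours), tested on Examples 2.3.1, 2.3.2, and 2.3.4:

```python
# Sketch of Theorem 2.3.7: classify Ax = b by comparing rank(A),
# rank([A|b]) and the number of unknowns. Exact arithmetic via Fraction.
from fractions import Fraction

def rank(mat):
    # number of pivots after forward elimination
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols, r = len(m), len(m[0]), 0
    for col in range(cols):
        pr = next((i for i in range(r, rows) if m[i][col] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, rows):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * p for a, p in zip(m[i], m[r])]
        r += 1
    return r

def classify(A, b):
    n = len(A[0])                                # number of unknowns
    aug = [row + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    rA, rAb = rank(A), rank(aug)
    if rAb > rA:
        return "no solutions"                    # Case I
    return "unique solution" if rA == n else "infinitely many solutions"

assert classify([[1, 1, 2], [2, 4, -3], [3, 6, -5]], [9, 1, 0]) == "unique solution"
assert classify([[2, 2, 1, 7], [1, 1, 2, 8], [1, 1, 0, 2]], [6, 0, 4]) == "infinitely many solutions"
assert classify([[1, 2, 1], [2, 4, 2]], [1, 3]) == "no solutions"
```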


2.4 Structure of the Solution Set

If $Ax = b$ is solvable, how can we characterize the solution set, namely $\{x \mid Ax = b\}$? We have the following theorem.

Theorem 2.4.1: Suppose $x_0$ is some solution of $Ax = b$, i.e., $Ax_0 = b$. Then the set of all solutions to $Ax = b$ is $\{x_0 + w \mid Aw = 0\}$. □
[Proof]: Our purpose is to show $\{x \mid Ax = b\} = \{x_0 + w \mid Aw = 0\}$.
(i) $\{x \mid Ax = b\} \subseteq \{x_0 + w \mid Aw = 0\}$: Let $x$ be a solution, i.e., $Ax = b$. Given $x_0$, we can decompose $x$ into $x = x_0 + (x - x_0)$. Since $Ax_0 = b$, we have
$$A(x - x_0) = Ax - Ax_0 = b - b = 0.$$
This immediately implies $x \in \{x_0 + w \mid Aw = 0\}$.
(ii) $\{x_0 + w \mid Aw = 0\} \subseteq \{x \mid Ax = b\}$: Obviously holds, since $A(x_0 + w) = Ax_0 + Aw = b + 0 = b$. That is, $x_0 + w$ is a solution to $Ax = b$. □

Remark: Given a matrix $A$, $Ax = 0$ is often called the homogeneous equation, whose solutions are called homogeneous solutions.

Corollary 2.4.2:
(a) $Ax = b$ has at most one solution if and only if the only solution to $Aw = 0$ is $w = 0$.
(b) If $Ax = b$ has two distinct solutions, then it has infinitely many solutions.
[Proof]:
(a) Note that the homogeneous equation $Aw = 0$ always has the solution $w = 0$. Hence $w = 0$ is the only solution to $Aw = 0$ if and only if $Ax = b$ has at most one solution (from Theorem 2.4.1).
(b) Suppose $y$ and $z$, $y \neq z$, both solve $Ax = b$. Then $A(y - z) = 0$, so $A(k(y - z)) = 0$ for all $k \in \mathbb{R}$. Thus $y + k(y - z)$, $k \in \mathbb{R}$, are also solutions, and they are all distinct since $y \neq z$. □

Example 2.4.3: Consider again
$$\begin{cases} 2x + 2y + z + 7w = 6 \\ x + y + 2z + 8w = 0 \\ x + y + 2w = 4 \end{cases} \;\to\; \text{reduced row-echelon form } \begin{bmatrix} 1 & 1 & 0 & 2 & 4 \\ 0 & 0 & 1 & 3 & -2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad \text{thus } \begin{cases} x + y + 2w = 4 \\ z + 3w = -2 \end{cases}$$
Let $w = a$, $y = b$. Then
$$\begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix} = \begin{bmatrix} 4 - 2a - b \\ b \\ -2 - 3a \\ a \end{bmatrix} = \begin{bmatrix} 4 \\ 0 \\ -2 \\ 0 \end{bmatrix} + a \begin{bmatrix} -2 \\ 0 \\ -3 \\ 1 \end{bmatrix} + b \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad a, b \in \mathbb{R}.$$
$$\text{particular solution } x_0 = \begin{bmatrix} 4 \\ 0 \\ -2 \\ 0 \end{bmatrix};$$
the homogeneous solutions (i.e., those solving the homogeneous equation $Aw = 0$) are
$$\left\{ a \begin{bmatrix} -2 \\ 0 \\ -3 \\ 1 \end{bmatrix} + b \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \;\middle|\; a, b \in \mathbb{R} \right\}. \;\; \square$$

Gauss(-Jordan) elimination (elementary row operations and the reduced row-echelon form) proves useful in characterizing the solvability of linear equations. To gain further insight into the elimination process, let us consider again the above example:
$$\begin{cases} 2x + 2y + z + 7w = 6 \\ x + y + 2z + 8w = 0 \\ x + y + 2w = 4 \end{cases} \;\xrightarrow{\text{Gauss-Jordan elimination}}\; \begin{bmatrix} 1 & 1 & 0 & 2 & 4 \\ 0 & 0 & 1 & 3 & -2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; \begin{cases} x + y + 2w = 4 \\ z + 3w = -2 \end{cases}$$
Note that we begin with a set of three equations. Going through Gauss-Jordan elimination, it turns out that essentially only two of them are "effective"; that is to say, one of the original three equations is, in a way, redundant. In light of this observation, Gauss-Jordan elimination is basically a "redundancy removal" procedure. In particular, it helps to find the "core" equations, based on which we can investigate the solvability of the original equation set.
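The particular-plus-homogeneous decomposition of Example 2.4.3 can be checked numerically. An illustrative sketch (not from the lecture; `matvec` is our helper):

```python
# Check of Example 2.4.3: A x0 = b, and A w = 0 for the homogeneous
# directions, so x0 + a*w1 + b*w2 solves A x = b for all a, b
# (Theorem 2.4.1).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 2, 1, 7],
     [1, 1, 2, 8],
     [1, 1, 0, 2]]
b = [6, 0, 4]

x0 = [4, 0, -2, 0]     # particular solution
w1 = [-2, 0, -3, 1]    # homogeneous solution (coefficient a)
w2 = [-1, 1, 0, 0]     # homogeneous solution (coefficient b)

assert matvec(A, x0) == b
assert matvec(A, w1) == [0, 0, 0]
assert matvec(A, w2) == [0, 0, 0]
```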

2.5 Solvability of Linear Equations: A Geometric Perspective

So far we have investigated the solvability of linear equations using purely algebraic manipulations (Gauss elimination, row-echelon forms, pivots, ...). There is an alternative, and very important, geometric interpretation of the solvability of linear equations. To see this, let us start from linear equations in the matrix form

$$\underbrace{\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}}_{A_{m\times n}} \underbrace{\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}}_{x_{n\times 1}} = \underbrace{\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}}_{b_{m\times 1}} \tag{2.5.1}$$

Recall that multiplying a matrix $A$ from the right by a column vector $x$ simply performs a linear combination of the columns of $A$, with coefficients given by the entries of $x$. Hence we can alternatively rewrite (2.5.1) as
$$x_1 \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2 \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} \tag{2.5.2}$$

From (2.5.2) we have the following important observations:
 The linear equation (2.5.1) has a solution if and only if $b$ can be written as a linear combination of the columns of $A$.
 (2.5.1) has a unique solution if and only if there is exactly one way of expressing $b$ as a linear combination of the columns of $A$.
 (2.5.1) has infinitely many solutions if and only if there are infinitely many ways of linearly combining the columns of $A$ to yield $b$.
 The linear equation (2.5.1) has no solution if and only if $b$ cannot be written as a linear combination of the columns of $A$.
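The identity behind (2.5.2), that $Ax$ equals the column combination $x_1 a_1 + \cdots + x_n a_n$, is easy to demonstrate. An illustrative sketch (not from the lecture; helper names are ours):

```python
# Sketch of observation (2.5.2): A x equals the linear combination of the
# columns of A with the entries of x as coefficients.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def column_combination(A, x):
    m, n = len(A), len(A[0])
    out = [0] * m
    for j in range(n):            # add x_j times the j-th column of A
        for i in range(m):
            out[i] += x[j] * A[i][j]
    return out

A = [[1, -2, 1], [2, -4, -3]]
x = [3, 1, 2]
assert matvec(A, x) == column_combination(A, x)
```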

 1 0  3  1  0  3 a 
            
   x1  x
 1

3
        
Example: Consider 2 , solution x     , thus 3   0  2  1  2 . In fact,
 0 1 x   for any b  ,

 0 0  
  2
 0  2  2  
 0
   
 0  0
 
 0
 
         
1 0 a   1 0  3
 x   a    
 
   
1  

x 1 


    x1   
where a, b   ,  0 always have a solution: x    . However,  0 1 x 
1 x  b  2 does not
   2     2  b    2  
0 0  0    0 0    
      2
 1  0  3
     
     
have a solution, since no linear combinations of  0 and 1 can give 2 .
     
 0  0 2 
     

13
Lecture III. Inverse of Square Matrices

We are now able to use Gauss elimination to solve the linear equation $Ax = b$, in which the coefficient matrix $A$ has dimension $m \times n$ (thus $m$ equations in $n$ unknowns). For the particular case where $A$ is square ($m = n$, the same number of equations as unknowns) and $Ax = b$ has a unique solution, the procedure for finding the solution leads to the construction of the so-called "inverse" of $A$. We will take a deeper look at the inverse of square matrices in this lecture.

3.1 Definition and Properties of the Inverse

Definition 3.1.1: Given a square matrix $A \in \mathbb{R}^{n\times n}$, any matrix $X \in \mathbb{R}^{n\times n}$ for which $XA = AX = I$ is called an inverse of the matrix $A$. □

Notes 3.1.2: Let $A \in \mathbb{R}^{n\times n}$. Then
(1) $A$ has at most one inverse (if it exists).
[Proof]: Suppose $Y$ and $Z$ are both inverses of $A$; then $YA = AY = I$ and $ZA = AZ = I$. This then implies
$$Z = Z(AY) = (ZA)Y = IY = Y.$$
(2) The inverse of $A$, if it exists, is denoted by $A^{-1}$.
(3) If $A$ has an inverse, $A$ is said to be invertible. □

Definition 3.1.3: A square matrix $A \in \mathbb{R}^{n\times n}$ is nonsingular (or a nonsingular matrix) if $A$ is invertible; $A$ is singular if it does not have an inverse. □

All three kinds of elementary matrices introduced in Lecture II are nonsingular. For the $3 \times 3$ case,
$$P_{12} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad P_{12}^{-1} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = P_{12};$$
$$S_1 = \begin{bmatrix} r & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad S_1^{-1} = \begin{bmatrix} r^{-1} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix};$$
$$T_{12} = \begin{bmatrix} 1 & 0 & 0 \\ a & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad T_{12}^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ -a & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Also observe that, if $E$ is an elementary matrix, so is $E^{-1}$ (prove this fact as an exercise).

Fact 3.1.4: If $A \in \mathbb{R}^{n\times n}$ is nonsingular, then the only solution to the homogeneous equation $Ax = 0$ is $x = 0$. □
[Proof]: Suppose there were another $x_0 \neq 0$ with $Ax_0 = 0$. Then
$$x_0 = A^{-1}(Ax_0) = A^{-1}\cdot 0 = 0,$$
which is a contradiction. □

Based on Fact 3.1.4 we have the following important corollary.

Corollary 3.1.5: If $A \in \mathbb{R}^{n\times n}$ is nonsingular, then $Ax = b$ has the unique solution $x = A^{-1}b$. □
[Proof]: The vector $x = A^{-1}b$ solves $Ax = b$ since $A(A^{-1}b) = (AA^{-1})b = b$. From Theorem 2.4.1, the solution set is
$$S = \{A^{-1}b + w \mid Aw = 0\}.$$
Fact 3.1.4 then implies
$$S = \{A^{-1}b + w \mid Aw = 0\} = \{A^{-1}b\}. \;\; \square$$

Example 3.1.6: Consider $\begin{bmatrix} 8 & 3 \\ 5 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$. By computation, $A^{-1} = \begin{bmatrix} 2 & -3 \\ -5 & 8 \end{bmatrix}$ (check: $A^{-1}A = I$).
For any $\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \in \mathbb{R}^2$, the equation has the unique solution $\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 & -3 \\ -5 & 8 \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$. □

Theorem 3.1.7: (Properties of the matrix inverse)

Let $A$ and $B$ be nonsingular matrices (in $\mathbb{R}^{n\times n}$ or $\mathbb{C}^{n\times n}$). Then
(a) $AB$ is nonsingular, and $(AB)^{-1} = B^{-1}A^{-1}$.
(b) $A^{-1}$ is nonsingular, and $(A^{-1})^{-1} = A$.
(c) $A^T$ and $A^*$ are nonsingular, and $(A^T)^{-1} = (A^{-1})^T$, $(A^*)^{-1} = (A^{-1})^*$.
[Proof]:
(a) Since $A^{-1}$ and $B^{-1}$ exist, $Y = B^{-1}A^{-1}$ is well-defined. Hence we have
$$(AB)Y = (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I,$$
$$Y(AB) = (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I,$$
so $(AB)Y = Y(AB) = I$.
(b) Let $Y = A$. Then $YA^{-1} = AA^{-1} = I$ and $A^{-1}Y = A^{-1}A = I$. This implies $(A^{-1})^{-1} = Y = A$.
(c) Let $Y = (A^{-1})^T$. Then $A^T Y = A^T (A^{-1})^T = (A^{-1}A)^T = I^T = I$ and $Y A^T = (A^{-1})^T A^T = (AA^{-1})^T = I^T = I$. The statement for $A^*$ follows in the same way. □
3.2 Construction of the Inverse

Suppose $A \in \mathbb{R}^{n\times n}$ is nonsingular. Finding the inverse of $A$ is the same as finding $X \in \mathbb{R}^{n\times n}$ such that $AX = XA = I$. It suffices to consider
$$AX = I. \tag{3.2.1}$$
Write $X = [x_1 \ \cdots \ x_n]$, where $x_i \in \mathbb{R}^{n\times 1}$ is the $i$th column of $X$, and $I = [e_1 \ \cdots \ e_n]$, where $e_i = [0 \ \cdots \ 0 \ 1 \ 0 \ \cdots \ 0]^T$ (with the 1 in the $i$th entry) is the $i$th column of $I$. Then (3.2.1) becomes
$$A[x_1 \ \cdots \ x_n] = [e_1 \ \cdots \ e_n], \quad \text{or} \quad Ax_1 = e_1, \;\; Ax_2 = e_2, \;\; \ldots, \;\; Ax_n = e_n. \tag{3.2.2}$$
We could solve all $n$ sets of equations in (3.2.2) to obtain the $x_i$'s and put them together to form $A^{-1} = [x_1 \ \cdots \ x_n]$, e.g., by Gauss elimination on the augmented matrices $[A \mid e_1], [A \mid e_2], \ldots, [A \mid e_n]$ to obtain the respective reduced row-echelon forms $[I \mid x_1], [I \mid x_2], \ldots, [I \mid x_n]$.

Fact 3.2.1: If $A \in \mathbb{R}^{n\times n}$ is nonsingular, then the reduced row-echelon form of $A$ is the identity matrix.
[Proof]: Exercise. □

The operations performed above, however, are completely determined by the coefficients of the equations (i.e., the entries of $A$) and are independent of the right-hand sides $e_i$. Hence we can perform the elimination with all the right-hand sides at the same time, that is, perform elementary row operations (e.r.o.) on $[A \mid I]$:
$$[A \mid I] \xrightarrow{\text{e.r.o.}} [I \mid B], \quad \text{then } A^{-1} = B.$$

Example 3.2.2:
$$A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \\ 3 & 7 & 4 \end{bmatrix}: \quad [A \mid I] = \begin{bmatrix} 1 & 2 & 1 & 1 & 0 & 0 \\ 2 & 4 & 3 & 0 & 1 & 0 \\ 3 & 7 & 4 & 0 & 0 & 1 \end{bmatrix} \to \begin{bmatrix} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & -2 & 1 & 0 \\ 0 & 1 & 1 & -3 & 0 & 1 \end{bmatrix}$$
$$\to \begin{bmatrix} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & -3 & 0 & 1 \\ 0 & 0 & 1 & -2 & 1 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 2 & 0 & 3 & -1 & 0 \\ 0 & 1 & 0 & -1 & -1 & 1 \\ 0 & 0 & 1 & -2 & 1 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 0 & 5 & 1 & -2 \\ 0 & 1 & 0 & -1 & -1 & 1 \\ 0 & 0 & 1 & -2 & 1 & 0 \end{bmatrix} = [I \mid A^{-1}]$$
$$\Rightarrow \; A^{-1} = \begin{bmatrix} 5 & 1 & -2 \\ -1 & -1 & 1 \\ -2 & 1 & 0 \end{bmatrix}.$$
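The $[A \mid I] \to [I \mid A^{-1}]$ procedure can be sketched in code. An illustrative Python version (not from the lecture; the helper name `inverse` is ours, and it assumes $A$ is nonsingular):

```python
# Sketch: invert a nonsingular A by row-reducing [A | I] to [I | A^{-1}],
# with exact arithmetic via Fraction.
from fractions import Fraction

def inverse(A):
    n = len(A)
    # build the augmented matrix [A | I]
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # pick a row with a nonzero pivot (exists since A is nonsingular)
        pr = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pr] = m[pr], m[col]
        m[col] = [x / m[col][col] for x in m[col]]
        # clear the column below and above the pivot
        for r in range(n):
            if r != col and m[r][col] != 0:
                m[r] = [a - m[r][col] * p for a, p in zip(m[r], m[col])]
    return [row[n:] for row in m]   # right half is A^{-1}

print(inverse([[1, 2, 1], [2, 4, 3], [3, 7, 4]]))
```

On the matrix of Example 3.2.2 this reproduces the inverse computed above.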

3.3 Inverse as a Product of Elementary Matrices

We further recall that performing elementary row operations on the augmented matrix $[A \mid I]$ amounts to multiplying $[A \mid I]$ from the left by a sequence of elementary matrices:
$$[A \mid I] \xrightarrow{\text{e.r.o.}} \underbrace{E_g E_{g-1} \cdots E_1}_{\text{elementary matrices}} [A \mid I] = [I \mid B].$$
Hence we have the following key relations:
$$E_g E_{g-1} \cdots E_1 A = I \;\Rightarrow\; A^{-1} = E_g E_{g-1} \cdots E_1$$
$$\Rightarrow\; A = (E_g E_{g-1} \cdots E_1)^{-1} = E_1^{-1} \cdots E_{g-1}^{-1} E_g^{-1}.$$
Hence, if $A$ is nonsingular, both $A$ and $A^{-1}$ can be written as products of elementary matrices!

Example 3.3.1:
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 5 & 3 \\ 1 & 0 & 8 \end{bmatrix}: \quad [A \mid I] = \begin{bmatrix} 1 & 2 & 3 & 1 & 0 & 0 \\ 2 & 5 & 3 & 0 & 1 & 0 \\ 1 & 0 & 8 & 0 & 0 & 1 \end{bmatrix}$$
$$\xrightarrow{E_1 = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}} \begin{bmatrix} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 1 & 0 & 8 & 0 & 0 & 1 \end{bmatrix} \xrightarrow{E_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}} \begin{bmatrix} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 0 & -2 & 5 & -1 & 0 & 1 \end{bmatrix}$$
$$\xrightarrow{E_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix}} \begin{bmatrix} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 0 & 0 & -1 & -5 & 2 & 1 \end{bmatrix} \xrightarrow{E_4 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}} \begin{bmatrix} 1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 0 & 0 & 1 & 5 & -2 & -1 \end{bmatrix}$$
$$\xrightarrow{E_5 = \begin{bmatrix} 1 & -2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}} \begin{bmatrix} 1 & 0 & 9 & 5 & -2 & 0 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 0 & 0 & 1 & 5 & -2 & -1 \end{bmatrix} \xrightarrow{E_6 = \begin{bmatrix} 1 & 0 & -9 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}} \begin{bmatrix} 1 & 0 & 0 & -40 & 16 & 9 \\ 0 & 1 & -3 & -2 & 1 & 0 \\ 0 & 0 & 1 & 5 & -2 & -1 \end{bmatrix}$$
$$\xrightarrow{E_7 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix}} \begin{bmatrix} 1 & 0 & 0 & -40 & 16 & 9 \\ 0 & 1 & 0 & 13 & -5 & -3 \\ 0 & 0 & 1 & 5 & -2 & -1 \end{bmatrix} = [I \mid A^{-1}]$$
Therefore
$$[I \mid A^{-1}] = E_7 E_6 E_5 E_4 E_3 E_2 E_1 [A \mid I], \qquad A^{-1} = E_7 E_6 E_5 E_4 E_3 E_2 E_1 = \begin{bmatrix} -40 & 16 & 9 \\ 13 & -5 & -3 \\ 5 & -2 & -1 \end{bmatrix},$$
$$I = E_7 \cdots E_1 A \;\Rightarrow\; A = E_1^{-1} \cdots E_7^{-1}.$$

The following conditions on a square matrix $A \in \mathbb{R}^{n\times n}$ are equivalent (each characterizes nonsingularity):
(1) The inverse of $A$ exists.
(2) The only solution to $Ax = 0$ is $x = 0$.
(3) $Ax = b$ has a unique solution for every $b \in \mathbb{R}^n$.
(4) The reduced row-echelon form of $A$ is the identity matrix.
(5) rank($A$) $= n$.
(6) $A$ can be expressed as a product of elementary matrices.
