Linear Algebra and Differential Equations: Sartaj Ul Hasan

(1) The document discusses the Invertible Matrix Theorem (TIMT), which states that for a square matrix A the following are equivalent: (a) A is invertible, (b) the homogeneous system AX = 0 has only the trivial solution, (c) A is row equivalent to the identity matrix, and (d) A is expressible as a product of elementary matrices. (2) It provides a proof of TIMT by showing the implications (a) implies (b) implies (c) implies (d) implies (a). (3) It presents the method of calculating the inverse matrix by forming the augmented matrix [A I] and performing row operations until the A part becomes the identity, at which point the I part has been transformed into A−1.


Linear Algebra and Differential Equations

Sartaj Ul Hasan

Department of Mathematics
Indian Institute of Technology Jammu
Jammu, India - 181221

Email: [email protected]

Sartaj Ul Hasan (IIT Jammu) SMD002U1M 0 / 18


Lecture 05
(March 16, 2021)



The Invertible Matrix Theorem (TIMT)

(This is an important theorem; try to memorise it!)

Theorem 1
The following are equivalent for an m × m square matrix A:
(a) A is invertible.
(b) The homogeneous system AX = 0 has only the trivial solution.
(c) A is row equivalent to the identity matrix.
(d) A is expressible as a product of elementary matrices.

Note: We will further extend this theorem as we go deeper into the theory
of vector spaces and related concepts.



Proof of The Invertible Matrix Theorem (TIMT)

We will proceed as follows:

(a) =⇒ (b) =⇒ (c) =⇒ (d) =⇒ (a)


Proof:
(a) =⇒ (b)
Suppose A is invertible. We need to show that the system AX = 0 has
only the trivial solution. Let Y be any solution of the homogeneous
system, so AY = 0. Multiplying on the left by A−1, we get:

(A−1 A)Y = A−1 0 =⇒ IY = 0 =⇒ Y = 0.



Proof of TIMT (Conti . . . )

(b) =⇒ (c)
[Note: This is actually Proposition 4 (see Lecture 4, slide 2, or
Lecture 3, slide 8), but we will now give a proof.]
Suppose AX = 0 has only the trivial solution X = 0. Let R be the
RREF matrix of A. Then the system RX = 0 has no free variables, so
every variable is a basic variable. Since the number of variables =
the number of columns = the number of rows (as A is square), there
must be a leading 1 in each row and in each column. Therefore R = I,
as required.



Proof of TIMT (Conti . . . )

(c) =⇒ (d)
Assume that A is row equivalent to the identity matrix, i.e.
Im = (ek ek−1 . . . e1)A for some finite sequence e1, . . . , ek of
elementary row operations. If E1, . . . , Ek are the corresponding
elementary matrices, then Im = (Ek . . . E1)A. Each Ei being
invertible, we can write A = (Ek . . . E1)−1 Im = E1−1 . . . Ek−1.
Hence A is a product of elementary matrices.
(d) =⇒ (a)
Assume that A is a product of elementary matrices. Since each
elementary matrix is invertible and a product of invertible matrices is
invertible, A is also invertible.
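The role of elementary matrices above can be illustrated numerically (a sketch of mine, not part of the slides): applying a row operation to I yields an elementary matrix E, and left-multiplying any A by E performs that same operation on A. Function names here (`identity`, `swap_rows`, `matmul`) are my own.

```python
from fractions import Fraction

def identity(n):
    # n x n identity matrix with exact Fraction entries
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def swap_rows(M, i, j):
    # return a copy of M with rows i and j interchanged
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

def matmul(A, B):
    # plain triple-loop matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[Fraction(x) for x in row] for row in ([0, 1, 2], [1, 0, 3], [4, -3, 8])]
E = swap_rows(identity(3), 0, 1)           # elementary matrix of "swap rows 1 and 2"
assert matmul(E, A) == swap_rows(A, 0, 1)  # E A = the same row operation applied to A
```

The same holds for the other two kinds of row operations (scaling a row, adding a multiple of one row to another), which is why a sequence of row operations corresponds to a product of elementary matrices.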



Calculation of the Inverse Matrix
In order to calculate the inverse of a matrix, we use the following
observation:
Observation: If A is an invertible matrix, then any sequence of row
operations that reduces A to I also transforms I into A−1.
Proof of the above observation: If A is invertible, then by The
Invertible Matrix Theorem (TIMT), A is row equivalent to the
identity matrix, i.e. I = (ek ek−1 . . . e1)A for some sequence
e1, . . . , ek of elementary row operations. If E1, . . . , Ek are the
corresponding elementary matrices, then I = (Ek . . . E1)A. Each Ei
being invertible, we can write A = (Ek . . . E1)−1 I = E1−1 . . . Ek−1.
Hence A is a product of elementary matrices. Furthermore,

A−1 = (E1−1 . . . Ek−1)−1 = Ek . . . E1 = (Ek . . . E1)I = (ek ek−1 . . . e1)I

In other words, the same sequence of row operations that reduces A
to I also reduces I to A−1.
Calculation of the Inverse Matrix (Conti . . . )
Method: Form the augmented matrix [A I] (this is sometimes
known as the enlarged matrix of A) and carry out elementary row
operations till the A part becomes I. The final result has the form
[I A−1].

Example: Find the inverse of the matrix A, if it exists, where

        [ 0   1   2 ]
    A = [ 1   0   3 ]
        [ 4  −3   8 ]

            [ 0   1   2 | 1  0  0 ]     [ 1  0  0 | −9/2   7   −3/2 ]
    [A I] = [ 1   0   3 | 0  1  0 ]  ∼  [ 0  1  0 |  −2    4    −1  ]
            [ 4  −3   8 | 0  0  1 ]     [ 0  0  1 |  3/2  −2   1/2  ]

Since A ∼ I, we conclude by TIMT that A is invertible, and

          [ −9/2   7   −3/2 ]
    A−1 = [  −2    4    −1  ]
          [  3/2  −2   1/2  ]
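The [A I] method is mechanical enough to code directly. Below is a minimal Gauss-Jordan sketch (my own illustration, not from the slides; the function name `inverse` is mine), using exact rational arithmetic so the entries match the hand computation:

```python
from fractions import Fraction

def inverse(A):
    """Row-reduce [A | I]; if the A part becomes I, the I part becomes A^-1.

    Returns None when a pivot cannot be found, i.e. when A is not row
    equivalent to I (and hence, by TIMT, not invertible).
    """
    n = len(A)
    # build the augmented matrix [A | I] with exact rational entries
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # find a row at or below the diagonal with a nonzero entry in this column
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None                      # A is singular
        M[col], M[pivot] = M[pivot], M[col]  # row swap
        p = M[col][col]
        M[col] = [x / p for x in M[col]]     # scale to a leading 1
        for r in range(n):
            if r != col and M[r][col] != 0:  # clear the rest of the column
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]            # the right half is now A^-1

A = [[0, 1, 2], [1, 0, 3], [4, -3, 8]]       # the matrix from the example
Ainv = inverse(A)
```

On the example matrix this reproduces the inverse computed by hand, with first row (−9/2, 7, −3/2).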



Invertible Matrices (Conti . . . )

Corollary 1.1: If A has a left inverse or a right inverse, then it has an
inverse.
Proof: (a) Suppose A has a left inverse; then there exists a matrix C
such that CA = I. Now consider the homogeneous system AX = 0.
Multiplying on the left by C, we get:

(CA)X = C 0 =⇒ IX = 0 =⇒ X = 0.

In short, the homogeneous system has only the trivial solution. Hence,
by TIMT, A is invertible. Furthermore, CA = I, so multiplying on the
right by A−1 gives C = A−1.
(b) Now suppose A has a right inverse; then there exists a matrix D
such that AD = I. In other words, D has a left inverse (namely A), so
D is invertible by part (a). Hence (AD)D−1 = ID−1, i.e. A = D−1.
Thus A, being the inverse of an invertible matrix, is itself invertible,
and A−1 = D.



Invertible Matrices (Conti . . . )
Corollary 1.2: Suppose a square matrix A is factored as a product of
square matrices, i.e. A = A1 A2 . . . An (all square matrices). Then A is
invertible if and only if each Ai is invertible. (Note that this corollary
applies only if the matrices Ai are square.)
Proof: The fact that a product of invertible matrices is invertible was
covered previously. So we only have to show that if A is invertible, then
each Ai is invertible. We first show that the last matrix in the product,
i.e. An, is invertible. Consider the homogeneous system An X = 0.
Multiplying on the left by A1 A2 . . . An−1, we get A1 A2 . . . An−1 An X = 0,
i.e. AX = 0. Since A is invertible, multiplying on the left by A−1 gives
(A−1 A)X = 0 =⇒ IX = 0 =⇒ X = 0. In short, the homogeneous
system An X = 0 has only the trivial solution. Hence, by TIMT, An is
invertible. Now, writing A1 A2 . . . An−1 An = A and multiplying on the right
by An^−1, we get A1 A2 . . . An−1 = A An^−1 = B (say). The matrix B, being a
product of two invertible matrices, is invertible. Thus, by the argument we
have given above, An−1 is invertible. Repeating this step, we get that each
Ai is invertible.
Uniqueness of RREF
Corollary 1.3: Uniqueness of the RREF matrix of any matrix A.

Proof of Corollary 1.3: We may assume that A ≠ [0], i.e., A is non-zero.
Let A be any m × n matrix. We keep m fixed, and prove the result by
induction on n.
Base case: n = 1. Then the only possible non-zero RREF matrix is the
m × 1 column vector (1, 0, . . . , 0)T, so uniqueness holds.
Induction step: Suppose n ≥ 2 and the result holds for all matrices
with n − 1 columns. Let A be an m × n matrix, and suppose, by way of
contradiction, that A has two distinct RREF matrices, B and C. Let A′
be the matrix obtained from A by deleting the n-th column.



Proof of uniqueness of RREF (Conti · · · )
Proof of Corollary 1.3 (Conti · · · ): Because the row-reduction
algorithm proceeds downwards and rightwards, column by column, the
sequence of operations that reduces A to an RREF matrix also reduces A′
to an RREF matrix. However, by the induction hypothesis, the RREF
matrix of A′ is unique. Hence B and C can differ only in the n-th column.
So, since B ≠ C, there exists some j-th row, 1 ≤ j ≤ m, such that the
n-th entries of B and C differ, i.e., bjn ≠ cjn. Now, let v be any vector
such that Bv = 0, and we may assume v ≠ 0 (else the system Ax = 0 has
only the trivial solution, which forces both B and C to have In on top
with zero rows below). Now, Bv = 0 =⇒ Cv = 0 =⇒ (B − C)v = 0.
But the first (n − 1) columns of B − C are zero, so by considering the
j-th component (coordinate) of (B − C)v, we get (bjn − cjn)vn = 0.
Since bjn ≠ cjn, we get vn = 0. Thus, if B ≠ C, then any solution
v = (v1, . . . , vn)T of Bx = 0 or Cx = 0 must have vn = 0.
Proof of uniqueness of RREF (Conti · · · )

Proof of Corollary 1.3 (Conti · · · ):

It follows that xn cannot be a free variable. Recall that if any xi is a free
variable, then we insert a dummy equation xi = xi, and so we get a
solution vector that contains a 1 in its i-th coordinate. Therefore, xn has
to be a basic variable. In that case, the n-th column of B must contain a
leading 1 in the first zero row of the RREF matrix of A′ (since A′ can
have at most n − 1 basic variables). But then the n-th column of B
consists of a 1 with 0's above and below. The same is true of C. Hence
B = C, contradicting our assumption that B ≠ C. The result follows.



Column form of a Matrix

Before we prove the other equivalences in TIMT, let us represent a
matrix in the form of columns. Given an m × n matrix, we can regard it
as consisting of n columns, each of which is an m-vector, i.e., given

        [ b11  b12  . . .  b1n ]
    B = [  ..   ..  . . .   .. ]
        [ bm1  bm2  . . .  bmn ]

we can write it in the form B = [v1, v2, . . . , vn], where

    v1 = (b11, . . . , bm1)T,  v2 = (b12, . . . , bm2)T,  . . . ,  vn = (b1n, . . . , bmn)T.



Column form of a Matrix (Conti . . . )

Similarly, given an ordered list of n vectors v1, v2, . . . , vn, not
necessarily distinct, we can construct a matrix by taking these as
columns, i.e. B = [v1, v2, . . . , vn].

Matrix product in the column form: If A is a k × m matrix (so that
the product C = AB is well-defined), then C can be easily represented in
column form as follows:

C = AB = A[v1, v2, . . . , vn] = [Av1, Av2, . . . , Avn],

i.e. C is the matrix whose columns are Av1, Av2, . . . , Avn.
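This columnwise description of the product can be checked numerically; the following is an illustrative sketch of mine (not from the slides), with helper names `matmul` and `column` my own:

```python
def matmul(A, B):
    # plain matrix product of A (k x m) and B (m x n)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def column(M, j):
    # the j-th column of M as a flat list
    return [row[j] for row in M]

A = [[1, 2], [3, 4], [5, 6]]          # 3 x 2
B = [[1, 0, 2], [0, 1, 3]]            # 2 x 3, with columns v1, v2, v3
C = matmul(A, B)

# each column of C = AB is A applied to the corresponding column of B
for j in range(3):
    Avj = matmul(A, [[x] for x in column(B, j)])   # A v_j as a 3 x 1 matrix
    assert column(C, j) == [row[0] for row in Avj]
```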



The Invertible Matrix Theorem (TIMT)
(yet another version!)

Theorem 1
The following are equivalent for an m × m square matrix A:
(a) A is invertible.
(b) The homogeneous system AX = 0 has only the trivial solution.
(c) A is row equivalent to the identity matrix.
(d) A is expressible as a product of elementary matrices.
(e) AX = b is consistent for every b in Rm.
(f) AX = b has exactly one solution for every b in Rm.



Proof of The Invertible Matrix Theorem (TIMT)
We will proceed as follows:

(a) =⇒ (f) =⇒ (e) =⇒ (a)

(a) =⇒ (f)
Proof: Suppose that the matrix A is invertible. Since A(A−1 b) = b,
it follows that X = A−1 b is a solution of AX = b. To show that this
is the only solution, we assume that u is an arbitrary solution and
then show that u must be the solution A−1 b.
If u is any solution of AX = b, then Au = b. Multiplying both sides
of this equation on the left by A−1, we obtain u = A−1 b. In short,
the system AX = b has the unique solution A−1 b.
(f) =⇒ (e)
Trivial.
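The unique solution X = A−1 b from (a) =⇒ (f) can be checked on the worked example from the inverse-calculation slide (a numerical sketch of mine; A−1 and the choice of b are taken respectively from that example and picked arbitrarily):

```python
from fractions import Fraction

A = [[0, 1, 2], [1, 0, 3], [4, -3, 8]]
Ainv = [[Fraction(-9, 2), Fraction(7), Fraction(-3, 2)],   # A^-1 from the
        [Fraction(-2), Fraction(4), Fraction(-1)],          # worked example
        [Fraction(3, 2), Fraction(-2), Fraction(1, 2)]]
b = [1, 2, 3]

# the unique solution of AX = b is X = A^-1 b
X = [sum(Ainv[i][k] * b[k] for k in range(3)) for i in range(3)]

# substituting X back into the system recovers b
assert [sum(A[i][k] * X[k] for k in range(3)) for i in range(3)] == b
```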



Proof of The Invertible Matrix Theorem (TIMT)

(e) =⇒ (a)
Proof: Suppose the system AX = b has a solution for every b ∈ Rm.
For i = 1, 2, . . . , m, let ui be a solution of the system AX = ei,
where ei denotes the column vector having 1 in the i-th position and 0
elsewhere. Let B be the matrix whose columns are the ui, i.e.,
B = [u1, u2, . . . , um]. Then:

AB = A[u1 , u2 , . . . , um ] = [Au1 , Au2 , . . . , Aum ] = [e1 , e2 , . . . , em ] = I .

Since A has a right inverse, by Corollary 1.1, it is invertible.
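The construction in this proof can be made concrete with the example matrix used earlier (a numerical sketch of mine, not from the slides; here the solutions ui of AX = ei are read off from the known inverse rather than solved for):

```python
from fractions import Fraction

A = [[0, 1, 2], [1, 0, 3], [4, -3, 8]]

# u_i solves A u_i = e_i (taken from the worked example's inverse)
u1 = [Fraction(-9, 2), Fraction(-2), Fraction(3, 2)]
u2 = [Fraction(7), Fraction(4), Fraction(-2)]
u3 = [Fraction(-3, 2), Fraction(-1), Fraction(1, 2)]

B = [list(row) for row in zip(u1, u2, u3)]   # B = [u1, u2, u3], columns are the u_i

# AB = [Au1, Au2, Au3] = [e1, e2, e3] = I, so B is a right inverse of A
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert AB == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```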



Vector Formulation

A system of linear equations can also be expressed in vector form:

X1 v1 + X2 v2 + · · · + Xn vn = b,

where the Xi are scalar unknowns and the vi are column vectors formed
from the coefficients of the original linear system.
This formulation can be interpreted as follows: if we can find scalars
Xi satisfying the equation, then the given vector b can be expressed in
terms of the given vectors vi. This formulation is not useful for
solving the system, but it will become very important when we start
working with vector spaces.
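The identity behind this formulation, AX = X1 v1 + · · · + Xn vn where the vi are the columns of A, can be checked numerically (an illustrative sketch of mine, not from the slides, reusing the earlier example matrix with an arbitrary X):

```python
A = [[0, 1, 2], [1, 0, 3], [4, -3, 8]]   # coefficient matrix from the earlier example
X = [2, -1, 3]                            # an arbitrary choice of scalars X1, X2, X3

# left side: the matrix-vector product AX
b = [sum(A[i][j] * X[j] for j in range(3)) for i in range(3)]

# right side: the linear combination X1 v1 + X2 v2 + X3 v3 of the columns of A
cols = [[A[i][j] for i in range(3)] for j in range(3)]
combo = [sum(X[j] * cols[j][i] for j in range(3)) for i in range(3)]

assert b == combo    # the two formulations agree
```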

