
Department of Mathematics

Indian Institute of Technology, Bombay

MA 106 : Linear Algebra Handout


Spring 2017 AR, SK

0.1 One-sided Inverse


Definition 0.1.1 (Left inverse). Let A be an m × n matrix. An n × m matrix B is called a left
inverse of A if BA = I_n, the n × n identity matrix.

It should be remarked that if rank(A) < n, then a left inverse cannot exist, while if
rank(A) = n < m, then infinitely many left inverses exist.
The analogous definition and remarks can be made for a right inverse.
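A minimal numerical sketch of these remarks, assuming NumPy (the matrix A below is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 3.0]])               # 4 x 2 with rank 2 = n < m = 4

B = np.linalg.pinv(A)                    # one of the infinitely many left inverses
print(np.allclose(B @ A, np.eye(2)))     # True:  BA = I_2
print(np.allclose(A @ B, np.eye(4)))     # False: AB = I_4 is impossible, rank(AB) <= 2
```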

0.1.1 Left inverse of square matrices

The situation is most interesting for square matrices. First we note the following. Let A and
B be n × n matrices.

• If B is a two-sided inverse of A, then it is unique. It is denoted A^{-1}.

• If A and B are invertible, then so is AB, and (AB)^{-1} = B^{-1} A^{-1}.

• Each elementary row matrix is invertible.

Theorem 0.1.1. If a square matrix A has another square matrix B as a left inverse, then B
is also a right inverse and it is unique.

Proof:
It is given that BA = I. Let B̂ = E_N ··· E_2 E_1 B be the reduced row echelon form of B. Its last
row cannot be 0: otherwise the same would be true of B̂A = E_N ··· E_2 E_1 (BA) = E_N ··· E_2 E_1,
which is impossible because a product of elementary matrices is invertible.
Hence we may assume that B̂ = I.

B̂ = I  =⇒  B̂A = A
       =⇒  E_N ··· E_2 E_1 (BA) = A,  where BA = I
       =⇒  E_N ··· E_2 E_1 = A.

But then A = E_N ··· E_2 E_1 is a product of invertibles and therefore itself invertible, and hence
has a unique inverse, namely E_1^{-1} E_2^{-1} ··· E_N^{-1}. Also,

A = E_N ··· E_2 E_1  =⇒  B E_N ··· E_2 E_1 = I
                     =⇒  B = E_1^{-1} E_2^{-1} ··· E_N^{-1}.

Therefore, B is the inverse of A.
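A small numerical illustration of the theorem, assuming NumPy (the random A is an illustrative stand-in for an arbitrary invertible matrix): a left inverse B, obtained by solving A^T B^T = I, turns out to be a right inverse as well.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # generically invertible

# BA = I  <=>  A^T B^T = I, so solve the transposed system for B^T.
B = np.linalg.solve(A.T, np.eye(4)).T

print(np.allclose(B @ A, np.eye(4)))     # True: B is a left inverse
print(np.allclose(A @ B, np.eye(4)))     # True: ... and automatically a right inverse
```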

0.2 Dimension, row rank=column rank, pivots

0.2.1 Dimension

Theorem 0.2.1. Let B = {v_1, ..., v_k} and B′ = {w_1, ..., w_ℓ} be two bases of a vector space
V. Then k = ℓ. Hence dim V is well defined.

Proof (Sketch): Assume to the contrary that k > ℓ.

Observe that none of the vectors in B and B′ can be 0. We replace the vectors of
B by those of B′ one by one.
Write w_1 = a_1 v_1 + ··· + a_k v_k with at least one a_j ≠ 0. W.l.o.g. a_1 ≠ 0.
Claim: B_1 = {w_1, v_2, ..., v_k} is a basis of V.
Proof of the claim:

• v_1 = (1/a_1)(w_1 − a_2 v_2 − ··· − a_k v_k)  =⇒  v_1 ∈ L(B_1)  =⇒  L(B_1) = V.

• Suppose b_1 w_1 + b_2 v_2 + ··· + b_k v_k = 0. Substituting for w_1,

  b_1 (a_1 v_1 + ··· + a_k v_k) + b_2 v_2 + ··· + b_k v_k = 0.

  Comparing coefficients with respect to the basis B, the coefficient of v_1 gives
  b_1 a_1 = 0  =⇒  b_1 = 0  (∵ a_1 ≠ 0), and then b_2 = ··· = b_k = 0.
  This proves the L.I. of B_1.

Next, assuming that B_r = {w_1, ..., w_r, v_{r+1}, ..., v_k} is a basis, we show that
B_{r+1} = {w_1, ..., w_{r+1}, v_{r+2}, ..., v_k} is a basis. Using the basis B_r,

w_{r+1} = b_1 w_1 + ··· + b_r w_r + c_{r+1} v_{r+1} + ··· + c_k v_k.

In the above, at least one c_j ≠ 0, else B′ would become L.D. So w.l.o.g. let c_{r+1} ≠ 0. Then
B_{r+1} is also a basis. This can be proved just as above:

• Write v_{r+1} as a linear combination of the vectors in B_{r+1} to show that L(B_{r+1}) = V.

• Use c_{r+1} ≠ 0 to show the L.I. of B_{r+1}.

Continuing, we find a basis B_ℓ = {w_1, ..., w_ℓ, v_{ℓ+1}, ..., v_k} = B′ ∪ {v_{ℓ+1}, ..., v_k}. But v_{ℓ+1}
is already in L(B′), a contradiction to the L.I. of B_ℓ. Hence v_{ℓ+1} cannot exist, and k > ℓ is impossible.
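The exchange argument above is effectively an algorithm. Below is a sketch of it assuming NumPy (the helper name `exchange` and both example bases are illustrative assumptions): the vectors of B′ are swapped into B one at a time, and each intermediate family is checked to still be a basis.

```python
import numpy as np

def exchange(B, Bp):
    """Swap the columns of the basis matrix Bp into the basis matrix B."""
    B = B.astype(float).copy()
    for r in range(Bp.shape[1]):
        coeffs = np.linalg.solve(B, Bp[:, r])      # write w_{r+1} in the basis B_r
        # Some coefficient with index >= r is nonzero, else Bp would be L.D.
        j = r + int(np.argmax(np.abs(coeffs[r:]) > 1e-12))
        B[:, [r, j]] = B[:, [j, r]]                # reorder: the "w.l.o.g." step
        B[:, r] = Bp[:, r]                         # the exchange itself
        assert np.linalg.matrix_rank(B) == B.shape[0]   # B_{r+1} is still a basis
    return B

B  = np.eye(3)                                     # the basis {v_1, v_2, v_3}
Bp = np.array([[1., 0., 1.],
               [1., 1., 0.],
               [0., 1., 1.]])                      # another basis {w_1, w_2, w_3}
exchange(B, Bp)                                    # runs without tripping the assert
```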

0.2.2 Row rank=Column rank

Short proof from Kreyszig, Advanced Engineering Mathematics (8th ed.), p. 333: If A has r = rank(A) linearly
independent rows {A_{j_1}, ..., A_{j_r}}, then writing all the m rows as linear combinations of these gives
an m × r matrix C s.t.

C · [A_{j_1}; ...; A_{j_r}] = A = [A^1 ··· A^n],   (the rows A_{j_1}, ..., A_{j_r} stacked)

Now writing J = {j_1, ..., j_r} and A_J^k = (a_{j_1 k}, ..., a_{j_r k})^T (the partial k-th column of A) we find

C · [A_{j_1}; ...; A_{j_r}] = C [A_J^1 ··· A_J^n] = [C A_J^1 ··· C A_J^n] = [A^1 ··· A^n],  with each C A_J^k ∈ C(C).

This implies that each A^k ∈ C(C)  =⇒  C(A) ⊂ C(C)  =⇒  rank_c(A) ≤ rank_c(C) ≤ r = rank(A).
Therefore rank_c(A) ≤ rank(A). Now invoke A^T for the reverse inequality. (Recall that Mx ∈ C(M) for any p × q
matrix M and q × 1 column vector x.)
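The factorization in this proof can be checked numerically. A sketch assuming NumPy (the matrix A and the choice of independent rows are illustrative):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])            # rank 2: the second row is twice the first

R = A[[0, 2], :]                        # r = 2 linearly independent rows
C = A @ np.linalg.pinv(R)               # m x r matrix of row coefficients
print(np.allclose(C @ R, A))            # True: A = C R, so C(A) sits inside C(C)
print(np.linalg.matrix_rank(A))         # 2: column rank equals row rank
```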

0.2.3 Position of the pivots in REFs

Theorem 0.2.2. If Â and Ã are two REFs of the same matrix A, then their pivots occur in exactly
the same places.

Proof: Let A have n columns and rank(A) = r. By the definition of REF, the pivots occur
in row numbers 1, 2, ..., r. So let them occur in column numbers k_1 < k_2 < ··· < k_r and
ℓ_1 < ℓ_2 < ··· < ℓ_r respectively. Suppose t is the least index such that k_t ≠ ℓ_t. Assume w.l.o.g. that
k_t < ℓ_t. Delete the last n − k_t columns from A, Â and Ã to get A′, Â′ and Ã′ respectively.
Then the latter two are REFs of A′. But Â′ has t pivots while Ã′ has only t − 1, which
gives two different ranks for A′, a contradiction.
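A quick experimental confirmation, assuming SymPy (the matrix A and the random row operations are illustrative): rref() reports the pivot columns, and pre-multiplying A by invertible matrices, i.e. performing arbitrary row operations first, never changes them.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 1],
               [3, 6, 1, 4]])             # rank 2; third row = first + second

pivots = A.rref()[1]                       # pivot column indices of the RREF
for _ in range(5):
    G = sp.randMatrix(3, 3, min=-3, max=3)
    while G.det() == 0:                    # insist on an invertible G
        G = sp.randMatrix(3, 3, min=-3, max=3)
    assert (G * A).rref()[1] == pivots     # same pivot positions every time
print(pivots)                              # (0, 2)
```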

0.3 Properties of the determinant


The determinant function enjoys three crucial properties:

D-1 Skew-symmetry

D-2 Row-wise linearity

D-3 Normalization

These properties characterize the determinant function.
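A 3 × 3 numerical spot-check of D-1, D-2, D-3, assuming NumPy (the matrices and coefficients are arbitrary illustrative choices):

```python
import numpy as np

det = np.linalg.det
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])

# D-1 Skew-symmetry: swapping two rows flips the sign.
print(np.isclose(det(A[[1, 0, 2]]), -det(A)))

# D-2 Row-wise linearity: linear in one row while the others stay fixed.
B, C = A.copy(), A.copy()
B[0], C[0] = [1., 0., 0.], [0., 5., 7.]
A2 = A.copy()
A2[0] = 2.0 * B[0] + 3.0 * C[0]
print(np.isclose(det(A2), 2.0 * det(B) + 3.0 * det(C)))

# D-3 Normalization: det(I) = 1.
print(np.isclose(det(np.eye(3)), 1.0))
```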

0.3.1 Additional useful properties

In addition to the three crucial properties, the determinant satisfies two more useful ones:

• Multiplicative property: |AB| = |A||B|.

• Invariance under transposition: |A^T| = |A|.

We proceed to prove these additional properties.

0.3.2 Determinants of product and transpose

Lemma 0.3.1. Let E be an elementary row matrix. Then

1. |E| = |E^T| =  −1,  if E = P_{jk}, j ≠ k,
                   1,  if E = E_{jk}(c), j ≠ k,
                   λ,  if E = M_j(λ), λ ≠ 0.

2. Let A be any square matrix of the same size as E. Then |EA| = |E||A|.

3. Let X be any square matrix with the last column 0. Then |X| = 0.

Proof:

1. • The first is skew-symmetry applied to I_n (the identity matrix), together with P_{jk}^T = P_{jk}.

   • For the second, assume j < k for definiteness and expand |E_{jk}(c)| by the j-th row to
     obtain

     0 + ··· + 1·|I_{n−1}| + ··· + c·0 + ··· + 0 = 1    (j-th term: 1·|I_{n−1}|; k-th term: c·0),

     and E_{jk}(c)^T = E_{kj}(c).

   • The third is by expanding diag{1, ..., λ, ..., 1} by the j-th row to get

     |M_j(λ)| = 0 + ··· + λ|I_{n−1}| + ··· + 0 = λ,  and M_j(λ)^T = M_j(λ).


2. • E = P_{jk}  =⇒  |P_{jk} A| = −|A| (skew-symmetry) = |P_{jk}| |A|.

   • E = E_{jk}(c)  =⇒  |E_{jk}(c) A| = |A| + c·0 = |E_{jk}(c)| |A|, due to linearity in the j-th row;
     the second term is a determinant whose j-th row equals its k-th row (j ≠ k), hence vanishes.

   • E = M_j(λ)  =⇒  |M_j(λ) A| = λ|A| = |M_j(λ)| |A|, due to linearity in the j-th row (or by
     expanding by the j-th row).

3. Let X be n × n and let M_{jk} be the (j,k)-th minor of X. By the induction hypothesis M_{1k} = 0
   for 1 ≤ k < n, since each such minor retains the zero last column. Expanding by the first row,
   |X| = 0 + ··· + 0 + 0·M_{1n} = 0.
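A spot-check of the lemma for the three kinds of elementary 3 × 3 row matrices, assuming NumPy (the test matrix A is an arbitrary illustrative choice):

```python
import numpy as np

det = np.linalg.det
P = np.eye(3)[[1, 0, 2]]              # P_12: swap rows 1 and 2
E = np.eye(3); E[0, 2] = 5.0          # E_13(5): add 5 times row 3 to row 1
M = np.diag([1.0, 1.0, 7.0])          # M_3(7): scale row 3 by 7

A = np.array([[2., 2., 3.],
              [4., 6., 6.],
              [7., 8., 10.]])
for Emat, d in [(P, -1.0), (E, 1.0), (M, 7.0)]:
    assert np.isclose(det(Emat), d)                       # part 1 of the lemma
    assert np.isclose(det(Emat @ A), det(Emat) * det(A))  # part 2 of the lemma
```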

Corollary 0.3.1. Let A and B be n × n matrices. Then

|AB| = |A||B|  and  |A^T| = |A|.

Proof:
• First we consider A invertible. An invertible matrix is a product of ERMs, so let A =
E_N ··· E_2 E_1. Then, applying the lemma repeatedly,

|AB| = |E_N ··· E_2 E_1 B| = |E_N| |E_{N−1} ··· E_2 E_1 B| = ··· = |E_N| ··· |E_2| |E_1| |B|.

Also, by the same logic, |A| = |AI| = |E_N| ··· |E_2| |E_1| |I| = |E_N| ··· |E_2| |E_1|, so |AB| = |A||B|.
Further, A^T = E_1^T ··· E_N^T is again a product of ERMs, and

|A^T| = |E_1^T| ··· |E_N^T| = |E_1| ··· |E_N| = |A|.

• If A is not invertible, then there are ERMs E_1, E_2, ..., E_N such that the last row of
E_N ··· E_2 E_1 A = GA, say, is 0. Expansion by the last row shows that

0 = |GA| = |G||A| (∵ G is invertible)  =⇒  |A| = 0,  ∵ |G| ≠ 0.

Now (GA)B also has the last row 0, hence

0 = |GAB| = |G||AB|  =⇒  |AB| = 0 = 0·|B| = |A||B|.

Next, (GA)^T = A^T G^T has the last column 0, so that |A^T G^T| = 0 by part 3 of the lemma. Then

0 = |A^T G^T| = |A^T| |G^T|  =⇒  |A^T| = 0 = |A|,  ∵ |G^T| ≠ 0.
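A quick numerical confirmation of the corollary, assuming NumPy (random matrices as illustrative inputs):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                       # True
```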

Remark 0.3.1. Invertible matrices are dense in M_n(R), and X ↦ |XB| − |X||B| is a con-
tinuous function (a polynomial of degree n in the n² variables x_{ij}); hence if |XB| − |X||B| = 0
for all invertible matrices X, then it is 0 for all matrices X.

Remark 0.3.2. Likewise, by the continuity of X ↦ |X^T| − |X|, which vanishes for invertibles,
|X^T| = |X| for all X.

Corollary 0.3.2.

• Expansion of the determinant by the rows implies expansion by the columns, and vice versa.

• The determinant function is column-wise linear and skew-symmetric; in other words,
the column analogues of the properties D-1 and D-2 hold.

0.4 Definition of determinant
We indicate how an n × n determinant can be expanded by any row. We use induction on n.
So we assume that (n − 1) × (n − 1) determinants are well defined. Moreover, we assume that the
theorem has been verified for n = 2. So let n > 2. We expand the determinant of an n × n matrix
A by its first and its second row and show equality.

Theorem 0.4.1. Let A be an n × n matrix. Let |A|_1 and |A|_2 be the expansions of the determinant
by the first and the second row respectively. Then |A|_1 = |A|_2.

Proof:
For J, K ⊆ {1, ..., n} we use the notation A_{JK} to denote the submatrix of A obtained by taking
the rows and the columns corresponding to J and K respectively. Thus, e.g., M_{jk} = |A_{{j}^c {k}^c}|.

|A|_1 = Σ_{1≤k≤n} (−1)^{1+k} a_{1k} M_{1k}
|A|_2 = Σ_{1≤k≤n} (−1)^{k} a_{2k} M_{2k}

Next, M_{1k} = |A_{{1}^c {k}^c}| on expansion by the first row gives

M_{1k} = Σ_{1≤ℓ≠k≤n} (−1)^{1+??} a_{2ℓ} |A_{{1,2}^c {k,ℓ}^c}|
       = Σ_{1≤ℓ<k} (−1)^{1+ℓ} a_{2ℓ} |A_{{1,2}^c {k,ℓ}^c}| + Σ_{k<ℓ≤n} (−1)^{1+(ℓ−1)} a_{2ℓ} |A_{{1,2}^c {k,ℓ}^c}|

∴ |A|_1 = Σ_{k=1}^{n} Σ_{ℓ=1}^{k−1} (−1)^{k+ℓ} a_{1k} a_{2ℓ} |A_{{1,2}^c {k,ℓ}^c}| + Σ_{k=1}^{n} Σ_{ℓ=k+1}^{n} (−1)^{k+ℓ−1} a_{1k} a_{2ℓ} |A_{{1,2}^c {k,ℓ}^c}|.

Similarly, on expanding M_{2k} by the first row we get

M_{2k} = Σ_{1≤ℓ<k} (−1)^{1+ℓ} a_{1ℓ} |A_{{1,2}^c {k,ℓ}^c}| + Σ_{k<ℓ≤n} (−1)^{ℓ} a_{1ℓ} |A_{{1,2}^c {k,ℓ}^c}|

∴ |A|_2 = Σ_{k=1}^{n} Σ_{ℓ=1}^{k−1} (−1)^{k+ℓ+1} a_{2k} a_{1ℓ} |A_{{1,2}^c {k,ℓ}^c}| + Σ_{k=1}^{n} Σ_{ℓ=k+1}^{n} (−1)^{k+ℓ} a_{2k} a_{1ℓ} |A_{{1,2}^c {k,ℓ}^c}|.

On re-arranging the sum for |A|_2, we get

|A|_2 = Σ_{ℓ=1}^{n} Σ_{k=ℓ+1}^{n} (−1)^{k+ℓ+1} a_{2k} a_{1ℓ} |A_{{1,2}^c {k,ℓ}^c}| + Σ_{ℓ=1}^{n} Σ_{k=1}^{ℓ−1} (−1)^{k+ℓ} a_{2k} a_{1ℓ} |A_{{1,2}^c {k,ℓ}^c}|

      = Σ_{ℓ=1}^{n} Σ_{k=1}^{ℓ−1} (−1)^{k+ℓ} a_{2k} a_{1ℓ} |A_{{1,2}^c {k,ℓ}^c}| + Σ_{ℓ=1}^{n} Σ_{k=ℓ+1}^{n} (−1)^{k+ℓ+1} a_{2k} a_{1ℓ} |A_{{1,2}^c {k,ℓ}^c}|

Finally, on interchanging k and ℓ,

|A|_2 = Σ_{k=1}^{n} Σ_{ℓ=1}^{k−1} (−1)^{k+ℓ} a_{2ℓ} a_{1k} |A_{{1,2}^c {k,ℓ}^c}| + Σ_{k=1}^{n} Σ_{ℓ=k+1}^{n} (−1)^{k+ℓ−1} a_{2ℓ} a_{1k} |A_{{1,2}^c {k,ℓ}^c}|,

the same as that for |A|_1.


Some explanations:

• Consider the ordered set of 7 elements {1, 2, 3, 4, 6, 7, 8} (5 is missing). Then 1, 2, 3, 4
are the first, second, third and fourth elements respectively, but 6, 7, 8 are the fifth,
sixth and seventh. This explains the (??) in the exponent and its subsequent resolution.
• Σ_{k=1}^{n} Σ_{ℓ=1}^{k−1} = Σ_{ℓ=1}^{n} Σ_{k=ℓ+1}^{n} is like Fubini's theorem

  ∫_{x=0}^{1} ( ∫_{y=0}^{x} f(x, y) dy ) dx = ∫_{y=0}^{1} ( ∫_{x=y}^{1} f(x, y) dx ) dy.

  Likewise for Σ_{k=1}^{n} Σ_{ℓ=k+1}^{n} = Σ_{ℓ=1}^{n} Σ_{k=1}^{ℓ−1}.

  When we rewrite the sum for |A|_2, the first double sum has k < ℓ, hence it is the
  second double sum for |A|_1, and vice versa.

Exercise: Write the expansion |A|_j by the j-th row and compare it with |A|_1 for j > 2. This
will complete the proof of the consistency of the definition. One should expand the minors M_{1k} by
their (j − 1)-th row, which is the j-th row of A. This is valid due to the induction hypothesis:
an (n − 1) × (n − 1) determinant can be expanded by any of its rows.
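A recursive sketch of this consistency check, assuming NumPy (the helper `det_by_row` and the test matrix are illustrative): cofactor expansion along every row returns the same value.

```python
import numpy as np

def det_by_row(A, j):
    """Cofactor expansion of det(A) along row j (0-indexed)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        minor = np.delete(np.delete(A, j, axis=0), k, axis=1)   # the minor M_{jk}
        total += (-1) ** (j + k) * A[j, k] * det_by_row(minor, 0)
    return total

A = np.array([[1., 2., 3., 1.],
              [0., 1., 4., 2.],
              [5., 6., 0., 3.],
              [2., 1., 1., 0.]])

vals = [det_by_row(A, j) for j in range(4)]
print(np.allclose(vals, np.linalg.det(A)))   # True: every row gives the same answer
```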
