Linear Algebra: Matrices, Vectors, Determinants. Linear Systems

This document summarizes key concepts from Chapter 7 of an engineering mathematics textbook, including: 1) solutions of linear systems can be determined using Gaussian elimination and depend on the rank of the coefficient matrix A; 2) homogeneous linear systems always have the trivial solution and may have additional nontrivial solutions forming a vector space; 3) solutions of nonhomogeneous linear systems are the sum of a particular solution and a solution of the corresponding homogeneous system.


2016 Spring

Engineering Mathematics 1

Ch 7. Linear Algebra:
Matrices, Vectors, Determinants.
Linear Systems – Part 2
Kyungchun Lee
Dept. of EIE, SeoulTech
Overview

- Ch. 7 Part 1: Matrices, Vectors; Matrix Multiplication; Linear Systems; Rank; Vector Space
- Ch. 7 Part 2 & 3: Solutions of Linear Systems; Determinants; Inverse of a Matrix
- Related chapters: Ch. 8 Matrix Eigenvalue Problems; Ch. 4 Systems of ODEs; Ch. 2 & 3
- Applications: Signal Processing, Communication Engineering, Control Systems
Outline
 Solutions of Linear Systems: Existence, Uniqueness
 For Reference: Second- and Third-Order Determinants
 Determinants. Cramer's Rule
 Inverse of a Matrix. Gauss-Jordan Elimination
 Summary

7.5 Solutions of Linear Systems: Existence,
Uniqueness
 Theorem 1: Fundamental Theorem for Linear Systems
(1)  a11 x1 + ... + a1n xn = b1
     a21 x1 + ... + a2n xn = b2
     ..........................
     am1 x1 + ... + amn xn = bm

     A = [a11 a12 ... a1n; a21 a22 ... a2n; ...; am1 am2 ... amn]
     Ã = [a11 a12 ... a1n b1; a21 a22 ... a2n b2; ...; am1 am2 ... amn bm]   (augmented matrix)

– No solution if and only if
  rank A ≠ rank Ã
(a) Existence (consistent) if and only if
  rank A = rank Ã
(b) Unique solution if and only if
  rank A = rank Ã = n
(c) Infinitely many solutions if and only if
  rank A = rank Ã < n
– Gauss elimination
  • If solutions exist, they can all be obtained by Gauss elimination.
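As a quick numerical illustration of these rank conditions, here is a minimal sketch using NumPy; the matrix and right-hand side are made-up examples, not taken from the slides.

```python
import numpy as np

# Hypothetical 3x3 example: the second row is twice the first, so rank A = 2 < n = 3.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
b = np.array([1., 2., 0.])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix [A | b]
n = A.shape[1]

if rank_A != rank_Ab:
    print("No solution (rank A != rank A~)")
elif rank_A == n:
    print("Unique solution:", np.linalg.solve(A, b))
else:
    print(f"Infinitely many solutions with {n - rank_A} free parameter(s)")
```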
Solutions of Linear Systems: Existence, Uniqueness
Proof) (a) Existence
 We can write (1) in terms of the column vectors c(1), c(2), ..., c(n) of A:
(2)  c(1) x1 + c(2) x2 + ... + c(n) xn = b
 The augmented matrix is
    Ã = [A  b]   (*)
  – rank Ã = rank A or rank Ã = rank A + 1   (by Theorem 3 in Sec. 7.4)
 If a solution x exists, (2) shows that b must be a linear combination of those column
  vectors; appending b to the columns of A therefore does not increase the rank:
    rank Ã = rank A
 Conversely, if
    rank Ã = rank A,
  then b must be a linear combination of the column vectors of A, say
    b = α1 c(1) + α2 c(2) + ... + αn c(n)
  ⇒ A solution is
    x1 = α1, x2 = α2, ..., xn = αn
Solutions of Linear Systems: Existence, Uniqueness
(b) Uniqueness
 If rank A = n,
  – the n column vectors in (2) are linearly independent (Theorem 3 in Sec. 7.4).
 Suppose the solution is not unique, i.e. x and x̃ are two solutions. Subtracting their
  representations (2) gives
    (x1 − x̃1) c(1) + (x2 − x̃2) c(2) + ... + (xn − x̃n) c(n) = 0
  ⇒ x1 − x̃1 = 0, x2 − x̃2 = 0, ..., xn − x̃n = 0   (by linear independence)
  ⇒ The solution x1, x2, ..., xn is uniquely determined.
Solutions of Linear Systems: Existence, Uniqueness
(c) Infinitely many solutions
 If rank A = rank Ã = r < n,
  – there is a linearly independent set K of r column vectors of A (after suitable
    renumbering),
    K = {ĉ(1), ĉ(2), ..., ĉ(r)}
  – The other n − r column vectors {ĉ(r+1), ĉ(r+2), ..., ĉ(n)} can be expressed as linear
    combinations of {ĉ(1), ĉ(2), ..., ĉ(r)}.
  – Substituting these combinations into (2) and collecting terms gives
    ĉ(1) y1 + ĉ(2) y2 + ... + ĉ(r) yr = b,   where yj = x̂j + βj
    and βj results from the n − r terms ĉ(r+1) x̂r+1, ..., ĉ(n) x̂n.
  – The y1, ..., yr are uniquely determined because {ĉ(1), ĉ(2), ..., ĉ(r)} are linearly
    independent.
  – However, there are infinitely many choices of x̂r+1, ..., x̂n; each choice fixes the βj
    and hence x̂j = yj − βj (j = 1, ..., r).
  – ⇒ Infinitely many solutions. ■
Summary
 For an n×n matrix A, the following statements are equivalent:
  – A has n linearly independent rows (columns)   (definition of rank & Theorem 4, Sec. 7.4)
  – rank A = n
  – the dimension of the row (column) space of A is n   (Theorem 6, Sec. 7.4)
  – Ax = b has a unique solution   (Theorem 1, Sec. 7.5)
Homogeneous Linear System
 Theorem 2: Homogeneous Linear System
    a11 x1 + ... + a1n xn = 0
    a21 x1 + ... + a2n xn = 0
    ........................
    am1 x1 + ... + amn xn = 0
  – Always has the trivial solution x = 0.
  – Nontrivial solutions exist if and only if
    rank A = r < n
  – The nontrivial solutions together with x = 0 form a vector space, called the null
    space of A (or solution space).
    • A linear combination of two solution vectors is again a solution vector.
  – Nullity
    • The dimension of the null space (= n − r)
    • Note: rank A + nullity A = n
Homogeneous Linear System
Proof)
 b = 0  ⇒  rank Ã = rank A
  ⇒ A homogeneous system is always consistent.
 If rank A = n,
  – the trivial solution is the unique solution (Theorem 1(b)).
 If rank A < n,
  – nontrivial solutions exist (Theorem 1(c)).
 If x(1) and x(2) are solutions, then Ax(1) = 0 and Ax(2) = 0, so
    A(x(1) + x(2)) = Ax(1) + Ax(2) = 0
    A(c x(1)) = c Ax(1) = 0
  – ⇒ The solutions form a vector space.
Homogeneous Linear System
 If rank A = r < n,
  – we can choose n − r suitable unknowns, call them xr+1, ..., xn, in an arbitrary
    fashion (Theorem 1(c)).
  – A basis y(1), y(2), ..., y(n−r) for the solution space (nullity = n − r) can be obtained
    by choosing xr+1, ..., xn as follows:
    • for y(1):  xr+1 = 1, xr+2 = 0, ..., xn = 0
    • for y(2):  xr+1 = 0, xr+2 = 1, ..., xn = 0
    • ...
    • for y(n−r):  xr+1 = 0, ..., xn−1 = 0, xn = 1
    The first r components of each y(j) are then determined by these chosen values of
    xr+1, ..., xn. ■
Homogeneous Linear System
 Theorem 3: Homogeneous Linear System with Fewer Equations than Unknowns
  – A homogeneous linear system with fewer equations than unknowns always has
    nontrivial solutions.

Proof)
 A : an m×n matrix
  – m = number of equations
  – n = number of unknowns
 rank A ≤ m
 If m < n, then rank A < n
  ⇒ Nontrivial solutions exist (Theorem 2). ■
Nonhomogeneous Linear System
 Theorem 4: Nonhomogeneous Linear System
  – Assume that a nonhomogeneous linear system Ax = b is consistent. Then
  – all of its solutions are obtained as
        x = x0 + xh
    where x0 is any fixed solution of the nonhomogeneous system and xh runs through all
    the solutions of the corresponding homogeneous system Ax = 0.

Proof)
 For an arbitrary solution x of Ax = b,
    A(x − x0) = Ax − Ax0 = b − b = 0
  ⇒ xh = x − x0 is a solution of the corresponding homogeneous system
  ⇒ x = x0 + xh ■
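A numerical sketch of Theorem 4 with a made-up consistent system of rank 2 < n = 3: np.linalg.lstsq returns one particular solution x0 here (the system is consistent), and adding any multiple of a homogeneous solution xh still solves Ax = b.

```python
import numpy as np

# Hypothetical consistent system with infinitely many solutions (rank A = 2, n = 3).
A = np.array([[1., 1., 1.],
              [0., 1., 2.],
              [1., 2., 3.]])   # row 3 = row 1 + row 2
b = np.array([6., 8., 14.])    # b3 = b1 + b2, so the system is consistent

x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution of Ax = b
xh = np.array([1., -2., 1.])                # A @ xh = 0: a homogeneous solution

print(np.allclose(A @ x0, b))               # True: x0 solves Ax = b
print(np.allclose(A @ xh, 0))               # True: xh solves Ax = 0
for c in (0.0, 1.0, -3.5):
    print(np.allclose(A @ (x0 + c * xh), b))   # True for every c
```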
7.6 For Reference: Second- and Third-Order
Determinants
 Determinant of Second Order
    D = det A = |a11 a12; a21 a22| = a11 a22 − a12 a21
    (here |a11 a12; a21 a22| lists the determinant row by row)
 Cramer's rule for
    a11 x1 + a12 x2 = b1
    a21 x1 + a22 x2 = b2          (D ≠ 0)

(3) x1 = |b1 a12; b2 a22| / D = (b1 a22 − a12 b2) / D
    x2 = |a11 b1; a21 b2| / D = (a11 b2 − b1 a21) / D

 Proof)
  – Eliminating x2 (multiply the first equation by a22 and the second by −a12, then add)
    gives D x1 = b1 a22 − a12 b2; eliminating x1 similarly gives D x2 = a11 b2 − b1 a21.
  – Dividing by D ≠ 0, we get (3). ■
Second- and Third-Order Determinants
 Determinant of Third Order
    D = det A = |a11 a12 a13; a21 a22 a23; a31 a32 a33|
      = a11 |a22 a23; a32 a33| − a21 |a12 a13; a32 a33| + a31 |a12 a13; a22 a23|
      = a11 a22 a33 − a11 a23 a32 − a21 a12 a33 + a21 a13 a32 + a31 a12 a23 − a31 a13 a22
 Cramer's rule for
    a11 x1 + a12 x2 + a13 x3 = b1
    a21 x1 + a22 x2 + a23 x3 = b2          (D ≠ 0)
    a31 x1 + a32 x2 + a33 x3 = b3

    x1 = D1 / D,   x2 = D2 / D,   x3 = D3 / D

    D1 = |b1 a12 a13; b2 a22 a23; b3 a32 a33|
    D2 = |a11 b1 a13; a21 b2 a23; a31 b3 a33|
    D3 = |a11 a12 b1; a21 a22 b2; a31 a32 b3|
7.7 Determinants. Cramer’s Rule
 Determinant of Order n
    D = det A = |a11 a12 ... a1n; a21 a22 ... a2n; ...; an1 an2 ... ann|
  – If n = 1,
    D = a11
  – If n ≥ 2,
    D = Σ_{k=1}^n a_jk C_jk = a_j1 C_j1 + a_j2 C_j2 + ... + a_jn C_jn   (j = 1, 2, ..., n)
    or
    D = Σ_{j=1}^n a_jk C_jk = a_1k C_1k + a_2k C_2k + ... + a_nk C_nk   (k = 1, 2, ..., n)
  – The cofactor C_jk of a_jk in D:
    C_jk = (−1)^(j+k) M_jk
    M_jk : the minor of a_jk, i.e. the determinant of the (n−1)×(n−1) submatrix of A
    obtained from A by omitting the row and column of the entry a_jk
Determinants. Cramer’s Rule
 This definition is unambiguous:
  – the expansion gives the same value for D no matter which row or column we choose.
 The determinant may also be written in terms of minors:
    D = Σ_{k=1}^n (−1)^(j+k) a_jk M_jk   (j = 1, 2, ..., or n)
    D = Σ_{j=1}^n (−1)^(j+k) a_jk M_jk   (k = 1, 2, ..., or n)
Determinants
 Example 1)
    D = det A = |a11 a12 a13; a21 a22 a23; a31 a32 a33|
      = a11 M11 − a21 M21 + a31 M31
      = a11 |a22 a23; a32 a33| − a21 |a12 a13; a32 a33| + a31 |a12 a13; a22 a23|

    Signs follow the checkerboard pattern
      + − +
      − + −
      + − +
Determinants
 Example 2)
  – Expansion by the first row:
    D = |1 3 0; 2 6 4; −1 0 2|
      = 1·|6 4; 0 2| − 3·|2 4; −1 2| + 0·|2 6; −1 0|
      = 1(12 − 0) − 3(4 + 4) + 0(0 + 6) = −12
  – Expansion by the third column gives the same value:
    D = 0·|2 6; −1 0| − 4·|1 3; −1 0| + 2·|1 3; 2 6|
      = 0 − 12 + 0 = −12
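A direct (if inefficient, O(n!)) implementation sketch of this cofactor expansion along the first row, checked against Example 2; the function name and test matrix layout are ours.

```python
def det(M):
    """Determinant by cofactor expansion along the first row (educational, O(n!))."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for k in range(n):
        # Minor M_1k: delete row 0 and column k.
        minor = [row[:k] + row[k+1:] for row in M[1:]]
        total += (-1) ** k * M[0][k] * det(minor)   # (-1)**k matches the sign (-1)^(1+k)
    return total

A = [[1, 3, 0],
     [2, 6, 4],
     [-1, 0, 2]]
print(det(A))   # -12, as in Example 2
```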
Determinants
 Example 3) Determinant of a Triangular Matrix
  – For a triangular matrix, repeated expansion along the first column (or first row)
    leaves only the product of the diagonal entries.
  – Inspired by this, can you formulate a little theorem on determinants of triangular
    matrices?
General Properties of Determinants
 Theorem 1: Behavior of an nth-Order Determinant under Elementary Row Operations
  (a) Interchange of two rows:
      D′ = −D
  (b) Addition of a multiple of a row to another row:
      D′ = D   (no change)
  (c) Multiplication of a row by a nonzero constant c:
      D′ = cD
General Properties of Determinants
Proof) (a)
 Prove by induction.
 The statement holds for n = 2 because
    D = |a b; c d| = ad − bc,   but   D′ = |c d; a b| = bc − ad = −D
 We now show that if (a) holds for determinants of order n − 1 ≥ 2, then it holds for
  order n.
  – D: a determinant of order n
  – E: obtained from D by an interchange of two rows
  – Expand D and E by a row that is not one of those interchanged:
(5) D = Σ_{k=1}^n (−1)^(j+k) a_jk M_jk,    E = Σ_{k=1}^n (−1)^(j+k) a_jk N_jk
    where M_jk and N_jk are the minors of a_jk in D and E, respectively.
General Properties of Determinants
 For example, expand by the first row (j = 1) and let E be obtained from D by
  interchanging rows 2 and 3:
    D = |a11 a12 ... a1n; a21 a22 ... a2n; a31 a32 ... a3n; ...; an1 an2 ... ann|
    E = |a11 a12 ... a1n; a31 a32 ... a3n; a21 a22 ... a2n; ...; an1 an2 ... ann|
  – Each minor N_1k of E is obtained from the corresponding minor M_1k of D by one row
    interchange, so by the induction hypothesis,
    N_1k = −M_1k
  ⇒ E = −D
 This argument is easily generalized to the interchange of any two rows:
    D′ = −D
General Properties of Determinants
(b)
 Let D′ be obtained from D by adding c times row i to row j. Expanding D′ by row j,
    D′ = Σ_{k=1}^n (a_jk + c a_ik) C_jk = Σ_{k=1}^n a_jk C_jk + c Σ_{k=1}^n a_ik C_jk
       = D + c D2
  where D2 is the determinant obtained from D by replacing row j with a copy of row i.
 D2 has two identical rows (rows i and j). Interchanging these two rows gives the same
  matrix, but by (a) it also reverses the sign:
    D2 = −D2  ⇒  D2 = 0  ⇒  D′ = D
General Properties of Determinants
(c)
 Let D′ be obtained from D by multiplying row j by c. Expanding D′ by row j,
    D′ = Σ_{k=1}^n c a_jk C_jk = c Σ_{k=1}^n a_jk C_jk = cD ■

 Caution: det(cA) = c^n det A   (not c det A)
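The caution det(cA) = c^n det A is easy to confirm numerically; a quick sketch with a random matrix (our own example, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 4, 3.0
A = rng.standard_normal((n, n))

print(np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A)))   # True
print(np.isclose(np.linalg.det(c * A), c    * np.linalg.det(A)))   # False (in general)
```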
General Properties of Determinants
 Example 4) Evaluation of a Determinant by Reduction to Triangular Form

    D = |  2  0 −4   6 |
        |  4  5  1   0 |
        |  0  2  6  −1 |
        | −3  8  9   1 |

      = |  2  0  −4     6    |
        |  0  5   9   −12    |   Row 2 − 2 Row 1
        |  0  2   6    −1    |
        |  0  8   3    10    |   Row 4 + 1.5 Row 1

      = |  2  0  −4     6    |
        |  0  5   9   −12    |
        |  0  0   2.4   3.8  |   Row 3 − 0.4 Row 2
        |  0  0 −11.4  29.2  |   Row 4 − 1.6 Row 2

      = |  2  0  −4     6    |
        |  0  5   9   −12    |
        |  0  0   2.4   3.8  |
        |  0  0   0    47.25 |   Row 4 + 4.75 Row 3

      = 2 · 5 · 2.4 · 47.25 = 1134
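The same procedure sketched in code: Gaussian elimination with partial pivoting (an addition for numerical safety; the by-hand example above does not pivot), tracking the sign changes from row interchanges (Theorem 1(a)) and multiplying the diagonal of the triangular factor. Checked against Example 4.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via reduction to upper triangular form (partial pivoting)."""
    U = np.array(A, dtype=float)
    n = len(U)
    sign = 1.0
    for j in range(n):
        p = j + np.argmax(np.abs(U[j:, j]))          # pivot row
        if np.isclose(U[p, j], 0.0):
            return 0.0                               # no usable pivot -> det = 0
        if p != j:
            U[[j, p]] = U[[p, j]]                    # row interchange flips the sign
            sign = -sign
        for i in range(j + 1, n):
            U[i, j:] -= (U[i, j] / U[j, j]) * U[j, j:]   # row addition: det unchanged
    return sign * np.prod(np.diag(U))

A = [[ 2, 0, -4,  6],
     [ 4, 5,  1,  0],
     [ 0, 2,  6, -1],
     [-3, 8,  9,  1]]
print(det_by_elimination(A))   # ≈ 1134, as in Example 4
```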
General Properties of Determinants
 Theorem 2: Further Properties of nth-Order Determinants
  (a)-(c) in Theorem 1 hold also for columns.
  (d) Transposition:
      det(Aᵀ) = det A   (no change)
  (e) A zero row or column:
      D = 0
  (f) Proportional rows or columns:
      D = 0
General Properties of Determinants
 Theorem 3: Rank in Terms of Determinants
  1) A : m×n matrix
     – rank A = r (≥ 1) if and only if
       • some r×r submatrix of A has a nonzero determinant, and
       • the determinant of every p×p submatrix of A with p > r is zero.
  2) A : n×n matrix
     – rank A = n if and only if
       det A ≠ 0
General Properties of Determinants
Proof) A rough proof:
 Elementary row operations alter neither the rank nor the property of a determinant
  being nonzero.
 Therefore, consider the row echelon form of a matrix of rank r:
  – its r nonzero rows contain an r×r triangular submatrix with a nonzero determinant;
  – every (r+1)×(r+1) submatrix must use at least one of the zero rows, so its
    determinant is zero.
Cramer’s Rule
 Theorem 4: Cramer's Theorem
    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2      A: n×n coefficient matrix
    ...                                      b = [b1 b2 ... bn]ᵀ
    an1 x1 + an2 x2 + ... + ann xn = bn

  – If D = det A ≠ 0,
    • a unique solution exists, given by Cramer's rule:
      x1 = D1/D,  x2 = D2/D,  ...,  xn = Dn/D
    • Dk : the determinant obtained from D by replacing the kth column of D by the
      column vector b
  – For a homogeneous system (b = 0) with D ≠ 0,
    • only the trivial solution exists:
      x1 = 0, x2 = 0, ..., xn = 0
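A direct implementation sketch of Cramer's rule (fine for small n; see the later remark on practicality). The function name and test system are ours.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule (requires det A != 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    if np.isclose(D, 0.0):
        raise ValueError("det A = 0: Cramer's rule does not apply")
    x = np.empty(len(b))
    for k in range(len(b)):
        Dk = A.copy()
        Dk[:, k] = b                 # replace the kth column of A by b
        x[k] = np.linalg.det(Dk) / D
    return x

A = [[2., 1.], [1., 3.]]
b = [5., 10.]
print(cramer_solve(A, b))            # [1. 3.]
print(np.linalg.solve(A, b))         # same result
```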
Cramer’s Rule
Proof)
 The augmented matrix Ã = [A  b] has size n×(n+1), so
    rank Ã ≤ n
 If D = det A ≠ 0, then
    rank A = n   (by Theorem 3)
 Since rank A ≤ rank Ã ≤ n, we have
    rank Ã = rank A = n
  ⇒ A unique solution exists (Theorem 1 of Sec. 7.5).
 The proof of the Cramer formulas themselves is omitted. ■
Cramer’s Rule
 Cramer's rule is not practical for numerical computation; Gauss elimination is the
  suitable tool there.
 However, Cramer's rule is of theoretical interest in differential equations
  (Secs. 2.10 and 3.3) and in other theoretical work in engineering applications.
Summary
 For an n×n matrix A, the following statements are equivalent:
  – A has n linearly independent rows (columns)   (definition of rank & Theorem 4, Sec. 7.4)
  – rank A = n
  – the dimension of the row (column) space of A is n   (Theorem 6, Sec. 7.4)
  – det A ≠ 0   (Theorem 3, Sec. 7.7)
  – Ax = b has a unique solution   (Theorem 1, Sec. 7.5)
7.8 Inverse of a Matrix. Gauss-Jordan Elimination
 Inverse of a Square Matrix
    A⁻¹ : the inverse of an n×n matrix A = [a_jk]
    A A⁻¹ = A⁻¹ A = I
  – Nonsingular matrix: the matrix has an inverse.
  – Singular matrix: the matrix has no inverse.

 If the matrix has an inverse, the inverse is unique.
  Proof)
  – Let B and C both be inverses of A, i.e. AB = I and CA = I. Then
    B = IB = (CA)B = C(AB) = CI = C
Existence of the inverse
 Theorem 1: Existence of the Inverse
  – A = [a_jk] : an n×n matrix
    • Nonsingular if and only if
      rank A = n   (equivalently, det A ≠ 0)
    • Singular if and only if
      rank A < n   (equivalently, det A = 0)

1) Proof of '⇒'
 Consider a linear system
(2) Ax = b
 If the inverse A⁻¹ exists, then
    A⁻¹Ax = x = A⁻¹b
  – This solution x is unique, because for another solution u,
    Au = b  ⇒  u = A⁻¹b = x
  ⇒ rank A = n   (Theorem 1 in Sec. 7.5)
Existence of the inverse
2) Proof of '⇐'
 Conversely, let rank A = n.
  – Then (2) has a unique solution x for any b.
  – By back substitution following the Gauss elimination, x can be expressed as
(3) x = Bb
 Substituting (3) into (2):
    Ax = A(Bb) = (AB)b = b   for any b
  ⇒ AB = I
 Similarly, substituting (2) into (3):
    x = Bb = B(Ax) = (BA)x   for any x
  ⇒ BA = I
 Hence B = A⁻¹.
Summary
 For an n×n matrix A, the following statements are equivalent:
  – A has n linearly independent rows (columns)   (definition of rank & Theorem 4, Sec. 7.4)
  – rank A = n
  – the dimension of the row (column) space of A is n   (Theorem 6, Sec. 7.4)
  – det A ≠ 0   (Theorem 3, Sec. 7.7)
  – A is nonsingular, i.e. A⁻¹ exists   (Theorem 1, Sec. 7.8)
  – Ax = b has a unique solution   (Theorem 1, Sec. 7.5)
Determination of the Inverse by the Gauss-Jordan
Method
 Consider the matrix equation
    AX = I
 We find the inverse X = A⁻¹ by the Gauss-Jordan method:
    [A  I]  →  [U  H]        Gauss elimination (U: upper triangular)
    [U  H]  →  [I  K]        reducing U to I
  – [I  K] corresponds to IX = K, so X = K
  ⇒ A⁻¹ = K
Determination of the Inverse by the Gauss-Jordan
Method
 Example 1) Finding the Inverse by Gauss-Jordan Elimination

    A = [−1 1 2; 3 −1 1; −1 3 4]

    [A  I] = [ −1  1  2 |  1  0  0 ]
             [  3 −1  1 |  0  1  0 ]
             [ −1  3  4 |  0  0  1 ]

           → [ −1  1  2 |  1  0  0 ]
             [  0  2  7 |  3  1  0 ]   Row 2 + 3 Row 1
             [  0  2  2 | −1  0  1 ]   Row 3 − Row 1

           → [ −1  1  2 |  1  0  0 ]
             [  0  2  7 |  3  1  0 ]
             [  0  0 −5 | −4 −1  1 ]   Row 3 − Row 2
Determination of the Inverse by the Gauss-Jordan
Method
 1 1 2 1 0 0  1 1 2 1 0 0   Row 1
 0 2 7 3 1 0   0 1 3.5 1.5 0.5 0  0.5 Row 2
   
 0 0 5 4 1 1  0 0 1 0.8 0.2 0.2  0.2 Row 3

1 1 0 0.6 0.4 0.4  Row 1  2 Row 3


 0 1 0 1.3 0.2 0.7  Row 2  3.5 Row 3
 
0 0 1 0.8 0.2 0.2 

1 0 0 0.7 0.2 0.3  Row 1  Row 2


 0 1 0 1.3 0.2 0.7 
 
0 0 1 0.8 0.2 0.2 

 A 1
Check  1 1 2   0.7 0.2 0.3  1 0 0 
AA 1   3 1 1   1.3 0.2 0.7   0 1 0 
    
 1 3 4   0.8 0.2 0.2  0 0 1 
40
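The same Gauss-Jordan procedure sketched in code (partial pivoting is added for numerical safety and is not part of the by-hand example), checked against Example 1:

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])               # augmented matrix [A | I]
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))     # pivot row (partial pivoting)
        if np.isclose(M[p, j], 0.0):
            raise ValueError("matrix is singular")
        M[[j, p]] = M[[p, j]]
        M[j] /= M[j, j]                         # make the pivot 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]          # clear column j in all other rows
    return M[:, n:]                             # right half is the inverse

A = [[-1,  1, 2],
     [ 3, -1, 1],
     [-1,  3, 4]]
print(inverse_gauss_jordan(A))
# ≈ [[-0.7, 0.2, 0.3], [-1.3, -0.2, 0.7], [0.8, 0.2, -0.2]], as in Example 1
```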
Formulas for inverses
 Theorem 2: Inverse of a Matrix
    A⁻¹ = (1/det A) [C_jk]ᵀ
        = (1/det A) [C11 C21 ... Cn1; C12 C22 ... Cn2; ...; C1n C2n ... Cnn]
  • C_jk : the cofactor of a_jk in det A
  • Note: in A⁻¹, the cofactor C_jk occupies the same place as a_kj (not a_jk) does in A.
Formulas for inverses
Proof)
 Define
    B = (1/det A) [C11 C21 ... Cn1; C12 C22 ... Cn2; ...; C1n C2n ... Cnn]
    and G = [g_kl] = BA.
 Then
    g_kl = (1/det A) Σ_{j=1}^n C_jk a_jl
  – For l = k, the sum Σ_j a_jk C_jk is the expansion of det A by the kth column, so
    g_kk = det A / det A = 1.
  – For l ≠ k, the sum Σ_j a_jl C_jk is the expansion of a determinant with two identical
    columns, so it is 0 (by Theorem 2(f) in Sec. 7.7).
  ⇒ G = I  ⇒  B = A⁻¹ ■
Formulas for inverses
 Theorem 2 is useful for theoretical considerations.
– Not recommended for actually determining inverse matrices.
– Gauss-Jordan elimination can be a more efficient means for finding
inverse matrices.

 Example 2)
    A = [3 1; 2 4],   A⁻¹ = (1/10) [4 −1; −2 3] = [0.4 −0.1; −0.2 0.3]
Formulas for inverses
 Example 3)
    A = [−1 1 2; 3 −1 1; −1 3 4]
    det A = −1·(−7) − 1·13 + 2·8 = 10

    C11 =  |−1 1; 3 4| = −7     C21 = −|1 2; 3 4| = 2      C31 =  |1 2; −1 1| = 3
    C12 = −|3 1; −1 4| = −13    C22 =  |−1 2; −1 4| = −2    C32 = −|−1 2; 3 1| = 7
    C13 =  |3 −1; −1 3| = 8     C23 = −|−1 1; −1 3| = 2     C33 =  |−1 1; 3 −1| = −2

    A⁻¹ = (1/10) [C11 C21 C31; C12 C22 C32; C13 C23 C33]
        = [−0.7 0.2 0.3; −1.3 −0.2 0.7; 0.8 0.2 −0.2]
Formulas for inverses
 The inverse of a diagonal matrix (all a_jj ≠ 0)
    A = diag(a11, a22, ..., ann)   ⇒   A⁻¹ = diag(1/a11, 1/a22, ..., 1/ann)
 Proof)
  – By Theorem 2, the (1,1) entry of A⁻¹ is C11 / det A = (a22 a33 ... ann) / (a11 a22 ... ann)
    = 1/a11, and similarly for the other diagonal entries.
  – Every off-diagonal minor M_jk (j ≠ k) of a diagonal matrix contains a zero row, so the
    off-diagonal entries of A⁻¹ are 0.
Formulas for inverses
 Products
    (AC)⁻¹ = C⁻¹A⁻¹
    (AC ... PQ)⁻¹ = Q⁻¹P⁻¹ ... C⁻¹A⁻¹
 Proof)
    AC(AC)⁻¹ = I
  ⇒ A⁻¹AC(AC)⁻¹ = A⁻¹I = A⁻¹
  ⇒ C(AC)⁻¹ = A⁻¹
  ⇒ C⁻¹C(AC)⁻¹ = C⁻¹A⁻¹
  ⇒ (AC)⁻¹ = C⁻¹A⁻¹
 The inverse of the inverse
    (A⁻¹)⁻¹ = A
Unusual Properties of Matrix Multiplication.
Cancellation Laws
 Theorem 3: Cancellation Laws
  – A, B, C : n×n matrices
  (a) If rank A = n and AB = AC, then B = C.
  (b) If rank A = n, then AB = 0 implies B = 0.
  (b′) If AB = 0 but A ≠ 0 and B ≠ 0, then rank A < n and rank B < n.
  (c) If A is singular, then BA and AB are also singular.

 Note that (a) and (b) do not hold if rank A < n.
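A tiny numerical illustration of (b′), i.e. of why the cancellation laws fail for singular matrices; the two rank-1 matrices below are made-up examples.

```python
import numpy as np

# Both A and B are nonzero, but A is singular (rank 1), and AB = 0.
A = np.array([[1., 1.],
              [2., 2.]])
B = np.array([[ 1., -3.],
              [-1.,  3.]])

print(A @ B)                                                 # [[0. 0.], [0. 0.]]
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))    # 1 1  (both < n = 2)
```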
Unusual Properties of Matrix Multiplication.
Cancellation Laws
Proof) (a)
 The inverse of A exists by Theorem 1.
    AB = AC
  ⇒ A⁻¹AB = A⁻¹AC
  ⇒ B = C

(b)
    AB = 0
  ⇒ A⁻¹AB = A⁻¹0
  ⇒ B = 0
Unusual Properties of Matrix Multiplication.
Cancellation Laws
(c1)
 A is singular.
  ⇒ rank A < n by Theorem 1.
  ⇒ Ax = 0 has nontrivial solutions (Theorem 2 in Sec. 7.5).
  ⇒ These solutions are also nontrivial solutions of BAx = 0.
  ⇒ rank(BA) < n (Theorem 2 in Sec. 7.5).
  ⇒ BA is singular by Theorem 1.
(c2)
 Aᵀ is singular (Theorem 2(d), Sec. 7.7).
  ⇒ (AB)ᵀ = BᵀAᵀ is singular by (c1).
  ⇒ AB is singular. ■
Determinant of Matrix Products
 Theorem 4: Determinant of a Product of Matrices
  – A, B: n×n matrices
    det(AB) = det(BA) = det A det B

Proof)
 Case 1) A or B is singular.
  – Then det A det B = 0.
  – AB and BA are also singular by Theorem 3(c).
  ⇒ det(AB) = det(BA) = 0
Determinant of Matrix Products
 Case 2) Both A and B are nonsingular.
  – Reduce A to a diagonal matrix Â = [â_jj] by Gauss-Jordan steps.
  – det A retains its value, det(Â) = det A. (Row interchanging may cause a sign
    reversal; however, this has no effect on the result of this proof. Why?)
  – The same operations reduce AB to ÂB.
  – Taking out â11, â22, ..., ânn from the rows of ÂB,
    det(AB) = det(ÂB) = â11 â22 ... ânn det(B) = det(Â) det(B) = det(A) det(B) ■
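A quick numerical confirmation of Theorem 4 with random matrices (a sketch, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = np.linalg.det(A @ B)
print(np.isclose(lhs, np.linalg.det(B @ A)))                   # True
print(np.isclose(lhs, np.linalg.det(A) * np.linalg.det(B)))    # True
```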
Summary of Chapter 7
 Transpose Aᵀ of a matrix A
  – Rows become columns and conversely.
 A is called
  – symmetric if A = Aᵀ
  – skew-symmetric if A = −Aᵀ
 Linear systems of equations
(2) Ax = b
 Gauss elimination
  – reduces the system to "triangular" form by elementary row operations.
  – The set of solutions is unchanged.
Summary of Chapter 7
 Cramer’s rule
– represents the unknowns in a system (2) of n equations in n unknowns
as quotients of determinants
 Inverse A⁻¹ of a square matrix A
    AA⁻¹ = A⁻¹A = I
  – exists if and only if det A ≠ 0
  – can be computed by Gauss–Jordan elimination
 Rank r of a matrix A
– The maximum number of linearly independent rows or columns of A
 Homogeneous system
Ax = 0
– Nontrivial solutions x ≠ 0 if and only if
rank A < n
