Chapter 1
Systems of Linear Equations and Matrices
[Figure: graphs of the lines ℓ1: 𝑥 + 𝑦 = 1 and ℓ2: 𝑥 − 𝑦 = 2 in the 𝑥𝑦-plane; the point (2,0) lies on ℓ2.]
❖ The augmented matrix for the system
𝑥1 − 𝑥2 + 2𝑥3 = 1
−𝑥1 + 4𝑥2 − 3𝑥3 = 0
2𝑥1 + 7𝑥2 − 𝑥3 = −3
is
[ 1 −1 2 1 ]
[ −1 4 −3 0 ]
[ 2 7 −1 −3 ] .
❖ The augmented matrix for the system
𝑥 − 2𝑦 − 2𝑧 = 1
2𝑥 + 3𝑧 = 0
−𝑥 + 𝑦 − 2𝑧 = −1
is
[ 1 −2 −2 1 ]
[ 2 0 3 0 ]
[ −1 1 −2 −1 ] .
In solving a system of linear equations we usually transform the given system into a new (equivalent) system that has the same solution set but is easier to solve. This can be done in a series of steps (elimination of unknowns), as follows.
1. Multiply any equation through by a nonzero constant
2. Interchange any two equations
3. Add a multiple of one equation to another equation
In the associated augmented matrix these operations (called elementary row
operations) are as follows.
1. Multiply any row through by a nonzero constant
2. Interchange any two rows
3. Add a multiple of one row to another row
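These three row operations are easy to express in code. Below is a minimal Python sketch (the helper names `scale_row`, `swap_rows`, and `add_multiple` are illustrative, not standard library functions), using exact rational arithmetic so no rounding occurs:

```python
from fractions import Fraction

# The three elementary row operations on a matrix stored as a list of rows.
# Helper names are illustrative, not from the text.

def scale_row(M, i, c):
    """Multiply row i through by a nonzero constant c."""
    assert c != 0
    M[i] = [c * x for x in M[i]]

def swap_rows(M, i, j):
    """Interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

def add_multiple(M, src, dst, c):
    """Add c times row src to row dst."""
    M[dst] = [a + c * b for a, b in zip(M[dst], M[src])]

# Apply -2*R1 + R2 to the augmented matrix of Example 2 below.
M = [[Fraction(1), 1, 2, 9], [Fraction(2), 4, -3, 1], [Fraction(3), 6, -5, 0]]
add_multiple(M, src=0, dst=1, c=-2)   # row 2 becomes [0, 2, -7, -17]
```

Each operation is reversible (scale by 1/c, swap again, add −c times the row), which is why it produces an equivalent system.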
Example 2: Consider the system of linear equations
𝑥 + 𝑦 + 2𝑧 = 9
2𝑥 + 4𝑦 − 3𝑧 = 1
3𝑥 + 6𝑦 − 5𝑧 = 0
Solution: The augmented matrix is
[ 1 1 2 9 ]
[ 2 4 −3 1 ]
[ 3 6 −5 0 ] .
−2R1 + R2 and −3R1 + R3:
[ 1 1 2 9 ]
[ 0 2 −7 −17 ]
[ 0 3 −11 −27 ]
(1/2)R2:
[ 1 1 2 9 ]
[ 0 1 −7/2 −17/2 ]
[ 0 3 −11 −27 ]
−R2 + R1 and −3R2 + R3:
[ 1 0 11/2 35/2 ]
[ 0 1 −7/2 −17/2 ]
[ 0 0 −1/2 −3/2 ]
−2R3:
[ 1 0 11/2 35/2 ]
[ 0 1 −7/2 −17/2 ]
[ 0 0 1 3 ]
−(11/2)R3 + R1 and (7/2)R3 + R2:
[ 1 0 0 1 ]
[ 0 1 0 2 ]
[ 0 0 1 3 ] ,
with equivalent system 𝑥 = 1, 𝑦 = 2, 𝑧 = 3.
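The hand reduction above can be automated. Here is a minimal Gauss-Jordan sketch in Python (the function name `gauss_jordan` is illustrative), using `fractions.Fraction` so the arithmetic stays exact:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix (list of rows) to reduced row echelon form."""
    M = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols - 1):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]                      # interchange rows
        M[pivot_row] = [x / M[pivot_row][col] for x in M[pivot_row]]   # scale to leading 1
        for r in range(rows):                                          # clear rest of column
            if r != pivot_row and M[r][col] != 0:
                c = M[r][col]
                M[r] = [a - c * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return M

# The system of Example 2:
rref = gauss_jordan([[1, 1, 2, 9], [2, 4, -3, 1], [3, 6, -5, 0]])
# The last column of rref now holds the solution x = 1, y = 2, z = 3.
```

The loop performs exactly the three elementary row operations listed earlier, one pivot column at a time.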
Example 1:
(i) The matrices
[ 1 0 0 3 ]      [ 1 0 0 ]      [ 0 1 −4 0 3 ]      [ 0 0 ]
[ 0 1 0 −1 ]  ,  [ 0 1 0 ]  ,   [ 0 0 0 1 1 ]   ,   [ 0 0 ]
[ 0 0 1 −2 ]     [ 0 0 1 ]      [ 0 0 0 0 0 ]
                                [ 0 0 0 0 0 ]
are in r.r.e.f.
(ii) The matrices
[ 1 2 4 5 ]      [ 1 1 0 ]      [ 0 1 6 3 0 ]
[ 0 1 −1 2 ]  ,  [ 0 1 0 ]  ,   [ 0 0 1 −1 0 ]
[ 0 0 1 3 ]      [ 0 0 0 ]      [ 0 0 0 0 1 ]
are in r.e.f.
Example 2: Suppose that the augmented matrix for a system of linear equations
has been reduced by row operations to the given r.r.e.f. Solve the system.
(a) [ 1 0 0 2 ]        (b) [ 1 0 0 4 −1 ]
    [ 0 1 0 −3 ]           [ 0 1 0 3 7 ]
    [ 0 0 1 4 ]            [ 0 0 1 2 5 ]

(c) [ 1 6 0 0 1 −2 ]   (d) [ 1 0 0 0 ]
    [ 0 0 1 0 2 1 ]        [ 0 1 −1 0 ]
    [ 0 0 0 1 3 4 ]        [ 0 0 0 1 ]
    [ 0 0 0 0 0 0 ]
Solution:
(a) The corresponding system of equations is
𝑥1 = 2
𝑥2 = −3
𝑥3 = 4
Thus, the system has only one (unique) solution 𝑥1 = 2, 𝑥2 = −3, 𝑥3 = 4.
(b) The corresponding system of equations is
𝑥1 + 4𝑥4 = −1
𝑥2 + 3𝑥4 = 7
𝑥3 + 2𝑥4 = 5
Here 𝑥4 is a free variable. Letting 𝑥4 = 𝑡 be arbitrary, the system has infinitely many solutions: 𝑥1 = −1 − 4𝑡, 𝑥2 = 7 − 3𝑡, 𝑥3 = 5 − 2𝑡, 𝑥4 = 𝑡.
(d) The last equation of the corresponding system is 0𝑥1 + 0𝑥2 + 0𝑥3 = 1. This equation cannot be satisfied (no solution). So, the system has no solutions (it is inconsistent).
Example 3: Reduce the matrix
[ 1 −1 0 −2 3 ]
[ 2 −2 2 −2 4 ]
[ 0 0 1 1 −1 ]
[ 1 −1 2 0 1 ]
to a r.r.e.f.
Solution:
−2R1 + R2 and −R1 + R4:
[ 1 −1 0 −2 3 ]
[ 0 0 2 2 −2 ]
[ 0 0 1 1 −1 ]
[ 0 0 2 2 −2 ]
−R2 + R4:
[ 1 −1 0 −2 3 ]
[ 0 0 2 2 −2 ]
[ 0 0 1 1 −1 ]
[ 0 0 0 0 0 ]
(1/2)R2:
[ 1 −1 0 −2 3 ]
[ 0 0 1 1 −1 ]
[ 0 0 1 1 −1 ]
[ 0 0 0 0 0 ]
−R2 + R3:
[ 1 −1 0 −2 3 ]
[ 0 0 1 1 −1 ]
[ 0 0 0 0 0 ]
[ 0 0 0 0 0 ] ,
which is in r.r.e.f.
The involved matrices are row equivalent.
𝑧 = 3, as we have obtained before.
Example 6: Solve the system by Gauss-Jordan elimination.
𝑥 − 𝑦 − 2𝑤 = 3
2𝑥 − 2𝑦 + 2𝑧 − 2𝑤 = 4
𝑧 + 𝑤 = −1
𝑥 − 𝑦 + 2𝑧 = 1
Solution: The augmented matrix
[ 1 −1 0 −2 3 ]
[ 2 −2 2 −2 4 ]
[ 0 0 1 1 −1 ]
[ 1 −1 2 0 1 ]
has the r.r.e.f. (as computed in Example 3)
[ 1 −1 0 −2 3 ]
[ 0 0 1 1 −1 ]
[ 0 0 0 0 0 ]
[ 0 0 0 0 0 ] ,
which corresponds to the system
𝑥 − 𝑦 − 2𝑤 = 3        or        𝑥 = 3 + 𝑦 + 2𝑤
𝑧 + 𝑤 = −1                      𝑧 = −1 − 𝑤
Letting 𝑦 = 𝑠 and 𝑤 = 𝑡 be arbitrary, the solutions are 𝑥 = 3 + 𝑠 + 2𝑡, 𝑦 = 𝑠, 𝑧 = −1 − 𝑡, 𝑤 = 𝑡.
[ 1 0 0 −1 2 −1 0 ]
[ 0 1 0 5 −1 −1 0 ]
[ 0 0 1 2 0 −3 0 ]
[ 0 0 0 0 0 0 0 ] .
The corresponding system is
𝑥1 − 𝑥4 + 2𝑥5 − 𝑥6 = 0
𝑥2 + 5𝑥4 − 𝑥5 − 𝑥6 = 0
𝑥3 + 2𝑥4 − 3𝑥6 = 0
Solving for the leading variables:
𝑥1 = 𝑥4 − 2𝑥5 + 𝑥6
𝑥2 = −5𝑥4 + 𝑥5 + 𝑥6
𝑥3 = −2𝑥4 + 3𝑥6
Let 𝑥4 = 𝑟, 𝑥5 = 𝑠, 𝑥6 = 𝑡 be arbitrary. Then 𝑥1 = 𝑟 − 2𝑠 + 𝑡, 𝑥2 = −5𝑟 + 𝑠 + 𝑡, 𝑥3 = −2𝑟 + 3𝑡, 𝑥4 = 𝑟, 𝑥5 = 𝑠, 𝑥6 = 𝑡.
where 𝑥𝑘1, 𝑥𝑘2, …, 𝑥𝑘𝑟 are the leading variables and Σ( ) denotes sums involving the 𝑛 − 𝑟 remaining variables. Thus, the system has infinitely many solutions in which 𝑛 − 𝑟 of the unknowns are arbitrary.
Example: Find the values of 𝜆 for which the homogeneous system
(𝜆 − 1)𝑥 + 𝑦 = 0
−4𝑥 + (𝜆 + 3)𝑦 = 0
has nontrivial solutions.
Solution:
[ 𝜆 − 1   1       0 ]
[ −4      𝜆 + 3   0 ]
−(1/4)R2:
[ 𝜆 − 1   1            0 ]
[ 1       −(𝜆 + 3)/4   0 ]
R1 ↔ R2:
[ 1       −(𝜆 + 3)/4   0 ]
[ 𝜆 − 1   1            0 ]
−(𝜆 − 1)R1 + R2:
[ 1   −(𝜆 + 3)/4                  0 ]
[ 0   (4 + (𝜆 − 1)(𝜆 + 3))/4      0 ]
Thus, the system has nontrivial solutions iff (4 + (𝜆 − 1)(𝜆 + 3))/4 = 0 iff 𝜆² + 2𝜆 + 1 = 0 iff (𝜆 + 1)² = 0 iff 𝜆 = −1.
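As a quick numerical check of this conclusion (assuming the coefficient matrix rows [𝜆 − 1, 1] and [−4, 𝜆 + 3] as in the reduction above):

```python
# Check of the condition derived above: nontrivial solutions exist exactly
# when 4 + (lam - 1)*(lam + 3) = 0, i.e. (lam + 1)^2 = 0, i.e. lam = -1.

def pivot_quantity(lam):
    return 4 + (lam - 1) * (lam + 3)   # equals lam**2 + 2*lam + 1

# Nonzero for a sample of other values of lam, zero only at lam = -1:
assert all(pivot_quantity(lam) != 0 for lam in [-3, -2, 0, 1, 2])
assert pivot_quantity(-1) == 0

# At lam = -1 the pair (x, y) = (1, 2) is a nontrivial solution:
lam, x, y = -1, 1, 2
assert (lam - 1) * x + y == 0 and -4 * x + (lam + 3) * y == 0
```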
[ 2 −1 ]                        [ 𝜋 √𝜋 𝑒 ]
[ 0 4 ]  ,  [1 1 −2 0 5]  ,     [ 0 −𝑒 1 ]    ,  [2],  and  [5]  are matrices.
[ 5 −2 ]                        [ 0 0 0.5 ]
If 𝐴 is a matrix with 𝑚 rows (horizontal lines) and 𝑛 columns (vertical lines), then
the size of 𝐴 is 𝑚 × 𝑛.
Example 4: If
𝐴 = [ 1 3 1 ]
    [ −1 5 2 ]
    [ 6 7 3 ] ,
then
3𝐴 = [ 3 9 3 ]
     [ −3 15 6 ]
     [ 18 21 9 ]
and
(−1)𝐴 = [ −1 −3 −1 ]
        [ 1 −5 −2 ]
        [ −6 −7 −3 ] .
can be replaced by the single matrix equation 𝐴𝑥 = 𝑏, where 𝐴 = [𝑎𝑖𝑗]𝑚×𝑛 (called the coefficient matrix),
𝑥 = [ 𝑥1 ]              𝑏 = [ 𝑏1 ]
    [ 𝑥2 ]                  [ 𝑏2 ]
    [ ⋮ ]       and         [ ⋮ ]
    [ 𝑥𝑛 ] (𝑛 × 1)          [ 𝑏𝑚 ] (𝑚 × 1).
Example 9: Let
𝐴 = [ 2 0 ]
    [ 1 3 ]
    [ 0 −1 ] (3 × 2)
and
𝐵 = [ 1 1 ]
    [ −1 −3 ] (2 × 2).
❖ The 2nd column of 𝐴𝐵 is
[ 2 0 ]   [ 1 ]     [ 2 ]
[ 1 3 ]   [ −3 ]  = [ −8 ] .
[ 0 −1 ]            [ 3 ]
❖ The 1st row of 𝐴𝐵 is
[2 0] [ 1 1 ]
      [ −1 −3 ]  = [2 2].
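The two facts above (a column of 𝐴𝐵 is 𝐴 times the corresponding column of 𝐵; a row of 𝐴𝐵 is the corresponding row of 𝐴 times 𝐵) can be checked in Python (`matmul` is an illustrative helper, not a library routine):

```python
def matmul(A, B):
    """Product of two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 0], [1, 3], [0, -1]]
B = [[1, 1], [-1, -3]]
AB = matmul(A, B)

# 2nd column of AB = A times the 2nd column of B:
col2 = [row[0] for row in matmul(A, [[b[1]] for b in B])]
assert col2 == [row[1] for row in AB]        # [2, -8, 3], as in the example

# 1st row of AB = (1st row of A) times B:
row1 = matmul([A[0]], B)[0]
assert row1 == AB[0]                         # [2, 2], as in the example
```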
Definition 10: If 𝐴 = [𝑎𝑖𝑗]𝑚×𝑛, then the transpose of 𝐴, denoted by 𝐴𝑡, is the 𝑛 × 𝑚 matrix 𝐴𝑡 = [𝑎𝑗𝑖]𝑛×𝑚.
Example 11:
(a) If
𝐴 = [ 𝑎11 𝑎12 𝑎13 𝑎14 ]
    [ 𝑎21 𝑎22 𝑎23 𝑎24 ]
    [ 𝑎31 𝑎32 𝑎33 𝑎34 ] ,
then
𝐴𝑡 = [ 𝑎11 𝑎21 𝑎31 ]
     [ 𝑎12 𝑎22 𝑎32 ]
     [ 𝑎13 𝑎23 𝑎33 ]
     [ 𝑎14 𝑎24 𝑎34 ] .
(b) If
𝐵 = [ 1 2 ]
    [ −1 0 ]
    [ 4 3 ] ,
then
𝐵𝑡 = [ 1 −1 4 ]
     [ 2 0 3 ] .
(c) If 𝐶 = [1 −1 0], then
𝐶𝑡 = [ 1 ]
     [ −1 ]
     [ 0 ] .
(d) If
𝐷 = [ 2 −1 3 ]
    [ −1 4 5 ]
    [ 3 5 1 ] ,
then
𝐷𝑡 = [ 2 −1 3 ]
     [ −1 4 5 ]
     [ 3 5 1 ]  ⇒ 𝐷𝑡 = 𝐷.
(e) If 𝐸 = [5], then 𝐸 𝑡 = [5] = 𝐸.
Example 1: Let
𝐴 = [ 1 1 ]         𝐵 = [ 2 2 ]
    [ −1 −1 ] ,         [ 1 1 ] .
Then
𝐴𝐵 = [ 3 3 ]             𝐵𝐴 = [ 0 0 ]
     [ −3 −3 ] , while        [ 0 0 ] ,
i.e., 𝐴𝐵 ≠ 𝐵𝐴. Thus, 𝐴 and 𝐵 do not commute.
Theorem 2: Assuming that the sizes of the matrices are such that the indicated
operations can be performed, the following are valid:
(a) 𝐴 + 𝐵 = 𝐵 + 𝐴 (Commutative law for addition)
(b) 𝐴 + (𝐵 + 𝐶) = (𝐴 + 𝐵) + 𝐶 (Associative law for matrix addition)
(c) 𝐴(𝐵 𝐶) = (𝐴 𝐵) 𝐶 (Associative law for matrix multiplication)
(d) 𝐴(𝐵 + 𝐶) = 𝐴 𝐵 + 𝐴 𝐶 (Left distributive law of multiplication)
(e) (𝐵 + 𝐶) 𝐴 = 𝐵 𝐴 + 𝐶 𝐴 (Right distributive law of multiplication)
(f) 𝐴(𝐵 − 𝐶) = 𝐴 𝐵 − 𝐴 𝐶
(g) (𝐵 − 𝐶) 𝐴 = 𝐵 𝐴 − 𝐶 𝐴
(h) 𝑎(𝐵 + 𝐶) = 𝑎 𝐵 + 𝑎 𝐶(a is a scalar)
(i) 𝑎(𝐵 − 𝐶) = 𝑎 𝐵 − 𝑎 𝐶
(j) (𝑎 + 𝑏) 𝐶 = 𝑎 𝐶 + 𝑏 𝐶 (a, b are scalars)
(k) (𝑎 − 𝑏) 𝐶 = 𝑎 𝐶 − 𝑏 𝐶
(l) 𝑎(𝑏 𝐶) = (𝑎 𝑏) 𝐶
(m) 𝑎(𝐵𝐶) = (𝑎𝐵)𝐶 = 𝐵(𝑎𝐶)
Example 3: Let
𝐴 = [ 1 −1 ]
    [ 0 1 ]          𝐵 = [ 1 3 ]         𝐶 = [ −2 0 ]
    [ 2 3 ] ,            [ 1 4 ] ,           [ 1 5 ] .
Show that (𝐴𝐵)𝐶 = 𝐴(𝐵𝐶).
Solution:
𝐴𝐵 = [ 0 −1 ]
     [ 1 4 ]
     [ 5 18 ] ,
so
(𝐴𝐵)𝐶 = [ −1 −5 ]
         [ 2 20 ]
         [ 8 90 ] .
𝐵𝐶 = [ 1 15 ]
     [ 2 20 ] ,
so
𝐴(𝐵𝐶) = [ −1 −5 ]
         [ 2 20 ]
         [ 8 90 ] .
⇒ (𝐴𝐵)𝐶 = 𝐴(𝐵𝐶).
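The same check can be scripted; a short Python verification of associativity for these particular matrices (`matmul` is an illustrative helper):

```python
def matmul(A, B):
    """Product of two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, -1], [0, 1], [2, 3]]
B = [[1, 3], [1, 4]]
C = [[-2, 0], [1, 5]]

# Both groupings give the same 3x2 product:
assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C)) == [[-1, -5], [2, 20], [8, 90]]
```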
Definition 4: A matrix, all of whose entries are zero, is called a zero matrix, denoted by 𝟎 = [0𝑖𝑗], where 0𝑖𝑗 = 0.
Example 5: Let
𝐴 = [ 0 1 ]     𝐵 = [ 1 1 ]     𝐶 = [ 2 5 ]     𝐷 = [ 3 7 ]
    [ 0 2 ] ,       [ 3 4 ] ,       [ 3 4 ] ,       [ 0 0 ] .
Note that
𝐴𝐵 = 𝐴𝐶 = [ 3 4 ]
          [ 6 8 ] ,
yet 𝐵 ≠ 𝐶 and 𝐴 ≠ 𝟎 (the cancellation law does not hold).
Also, 𝐴𝐷 = 𝟎, yet 𝐴 ≠ 𝟎 and 𝐷 ≠ 𝟎 (there are zero divisors among matrices).
Remark 8:
(1) Sometimes we write 𝐼 instead of 𝐼𝑛 .
(2) If 𝐴 is 𝑚 × 𝑛, then 𝐴𝐼𝑛 = 𝐼𝑚𝐴 = 𝐴 (verify).
Example 10: If
𝐴 = [ 3 2 ]
    [ 8 5 ] ,
then its inverse is
𝐵 = [ −5 2 ]
    [ 8 −3 ] .
Note that 𝐴𝐵 = 𝐵𝐴 = 𝐼2.
Example 11:
[ 1 2 ]
[ 2 4 ] is not invertible. Why?
Example 13: If
𝐴 = [ 𝑎 𝑏 ]
    [ 𝑐 𝑑 ]
and 𝑎𝑑 − 𝑏𝑐 ≠ 0, then 𝐴 is invertible and
𝐴−1 = (1/(𝑎𝑑 − 𝑏𝑐)) [ 𝑑 −𝑏 ]     [ 𝑑/(𝑎𝑑 − 𝑏𝑐)    −𝑏/(𝑎𝑑 − 𝑏𝑐) ]
                     [ −𝑐 𝑎 ]  =  [ −𝑐/(𝑎𝑑 − 𝑏𝑐)   𝑎/(𝑎𝑑 − 𝑏𝑐) ] .
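The 2 × 2 inverse formula translates directly into code; a sketch with exact arithmetic (`inverse_2x2` is an illustrative name), applied to the matrix of Example 10:

```python
from fractions import Fraction

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the formula above; requires ad - bc != 0."""
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

# The matrix of Example 10 (here ad - bc = 15 - 16 = -1):
inv = inverse_2x2(3, 2, 8, 5)
assert inv == [[-5, 2], [8, -3]]
```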
Theorem 14: If 𝐴 and 𝐵 are invertible matrices of the same size, then
(a) 𝐴𝐵 is invertible
(b) (𝐴𝐵)−1 = 𝐵 −1 𝐴−1
Theorem 18: If 𝐴 is invertible, then
(a) 𝐴−1 is invertible and (𝐴−1)−1 = 𝐴.
(b) 𝐴𝑛 is invertible and (𝐴𝑛)−1 = (𝐴−1)𝑛.
(c) If 𝑘 ≠ 0 (a scalar), then 𝑘𝐴 is invertible and (𝑘𝐴)−1 = (1/𝑘)𝐴−1.
Proof:
(a) Since 𝐴𝐴−1 = 𝐴−1𝐴 = 𝐼, 𝐴−1 is invertible and (𝐴−1)−1 = 𝐴.
(c) (𝑘𝐴)((1/𝑘)𝐴−1) = (1/𝑘)(𝑘𝐴)𝐴−1 = ((1/𝑘)𝑘)𝐴𝐴−1 = 1𝐼 = 𝐼.
Similarly, ((1/𝑘)𝐴−1)(𝑘𝐴) = 𝐼. So 𝑘𝐴 is invertible and (𝑘𝐴)−1 = (1/𝑘)𝐴−1.
Theorem 3: If the elementary matrix 𝐸 results from performing a certain row operation on 𝐼𝑚 and if 𝐴 is an 𝑚 × 𝑛 matrix, then the product 𝐸𝐴 is the matrix obtained when the same row operation is performed on 𝐴.
Example 4: Let
𝐸 = [ 1 0 2 ]            𝐴 = [ 𝑎11 𝑎12 𝑎13 ]
    [ 0 1 0 ]   and          [ 𝑎21 𝑎22 𝑎23 ] .
    [ 0 0 1 ]                [ 𝑎31 𝑎32 𝑎33 ]
Then 𝐸 is obtained from 𝐼3 by adding 2 times the 3rd row to the 1st row of 𝐼3. Now,
𝐸𝐴 = [ 𝑎11 + 2𝑎31   𝑎12 + 2𝑎32   𝑎13 + 2𝑎33 ]
     [ 𝑎21          𝑎22          𝑎23 ]
     [ 𝑎31          𝑎32          𝑎33 ] .
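Theorem 3 can be illustrated numerically. Below, 𝐸 is the elementary matrix of Example 4; the numeric 𝐴 is a sample matrix chosen here, not one from the text:

```python
def matmul(A, B):
    """Product of two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# E: add 2 times row 3 to row 1 of I3 (as in Example 4).
E = [[1, 0, 2], [0, 1, 0], [0, 0, 1]]
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # sample numeric A (not from the text)

# EA equals A with 2*(row 3) added to row 1, rows 2 and 3 unchanged:
expected = [[a + 2 * c for a, c in zip(A[0], A[2])], A[1], A[2]]
assert matmul(E, A) == expected
```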
Definition 6: Two matrices of the same size are called row equivalent if one can
be obtained by applying a finite sequence of elementary row
operations on the other.
Thus, the augmented matrix of (1) has as its r.r.e.f. the augmented matrix of (2). Hence, 𝐴 is row equivalent to 𝐼𝑛.
(c) ⇒ (d): Assume 𝐴 is row equivalent to 𝐼𝑛 . Then there are elementary matrices
𝐸1 , 𝐸2 , … , 𝐸𝑘 such that 𝐸𝑘 ⋯ 𝐸2 𝐸1 𝐴 = 𝐼𝑛
⇒𝐴−1 = 𝐸𝑘 ⋯ 𝐸2 𝐸1⇒𝐴 = (𝐴−1 )−1 = (𝐸𝑘 ⋯ 𝐸2 𝐸1 )−1 .
But the inverses of E1 , 𝐸2 , … , 𝐸𝑘 are elementary matrices. Hence, 𝐴 =
𝐸1−1 𝐸2−1 ⋯ 𝐸𝑘−1 is a product of elementary matrices.
Remark 8: Part (c) says that 𝐼𝑛 is the r.r.e.f. of 𝐴. From the proof of the theorem
we have 𝐴−1 = 𝐸𝑘 ⋯ 𝐸2 𝐸1 and so 𝐴−1 = 𝐸𝑘 ⋯ 𝐸2 𝐸1 𝐼𝑛 , that is 𝐴−1 can
be obtained from 𝐼𝑛 by a sequence of elementary row operations. This
sequence is the same one which reduces 𝐴 to 𝐼𝑛 .
Example 9: Find the inverse of
𝐴 = [ 2 0 1 ]
    [ 2 1 −1 ]
    [ 3 1 −1 ]
if it exists.
Solution: We start with [𝐴|𝐼3] and reduce it (if possible) to [𝐼3|𝐵], so we get 𝐴−1 = 𝐵.
[𝐴|𝐼3] = [ 2 0 1 | 1 0 0 ]
         [ 2 1 −1 | 0 1 0 ]
         [ 3 1 −1 | 0 0 1 ]
(1/2)R1:
[ 1 0 1/2 | 1/2 0 0 ]
[ 2 1 −1 | 0 1 0 ]
[ 3 1 −1 | 0 0 1 ]
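The reduction of [𝐴|𝐼3] can be carried to completion in code. A sketch (`invert` is an illustrative helper name) that inverts the matrix of Example 9 and also detects the singular matrix of Example 10:

```python
from fractions import Fraction

def invert(A):
    """Invert A by reducing [A | I] to [I | A^-1]; returns None if A is singular."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pr = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pr is None:
            return None                       # no usable pivot: A is not invertible
        M[col], M[pr] = M[pr], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                c = M[r][col]
                M[r] = [a - c * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[2, 0, 1], [2, 1, -1], [3, 1, -1]]      # the matrix of Example 9
assert invert(A) == [[0, -1, 1], [1, 5, -4], [1, 2, -2]]

# The matrix of Example 10 is singular, so no inverse exists:
assert invert([[2, 0, 1], [3, 1, -1], [-2, -2, 4]]) is None
```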
Example 10: Find the inverse (if it exists) of the matrix
𝐴 = [ 2 0 1 ]
    [ 3 1 −1 ]
    [ −2 −2 4 ] .
Solution:
[𝐴|𝐼3] = [ 2 0 1 | 1 0 0 ]
         [ 3 1 −1 | 0 1 0 ]
         [ −2 −2 4 | 0 0 1 ]
(1/2)R1:
[ 1 0 1/2 | 1/2 0 0 ]
[ 3 1 −1 | 0 1 0 ]
[ −2 −2 4 | 0 0 1 ]
Remark 11: If 𝐴 is invertible, then the homogeneous system 𝐴𝑥 = 𝟎 has only the
trivial solution. This can be seen as follows:
𝐴𝑥 = 𝟎⇒𝐴−1 (𝐴𝑥) = 𝐴−1 𝟎⇒(𝐴−1 𝐴)𝑥 = 𝟎⇒𝐼𝑛 𝑥 = 𝟎⇒𝑥 = 𝟎.
Proof: We prove part (a). The proof of part (b) is an exercise. Assume 𝐵𝐴 = 𝐼.
We show that 𝐴 is invertible by proving that 𝐴𝑥 = 𝟎 has only the trivial
solution. If 𝐴𝑥 = 𝟎, then 𝐵(𝐴𝑥) = 𝐵𝟎. So,(𝐵𝐴)𝑥 = 𝟎 or 𝐼𝑥 = 𝟎. That
is,𝑥 = 𝟎. Hence, 𝐴 is invertible.
From 𝐵𝐴 = 𝐼 we get 𝐵𝐴𝐴−1 = 𝐼𝐴−1 , and so 𝐵 = 𝐴−1 .
𝑥1 + 𝑥2 − 𝑥3 = 𝑏1
−2𝑥1 + 𝑥2 +3𝑥3 = 𝑏2
−𝑥1 +2𝑥2 +2𝑥3 = 𝑏3
Solution:
[ 1 1 −1 𝑏1 ]
[ −2 1 3 𝑏2 ]
[ −1 2 2 𝑏3 ]
2R1 + R2 and R1 + R3:
[ 1 1 −1 𝑏1 ]
[ 0 3 1 2𝑏1 + 𝑏2 ]
[ 0 3 1 𝑏1 + 𝑏3 ]
−R2 + R3:
[ 1 1 −1 𝑏1 ]
[ 0 3 1 2𝑏1 + 𝑏2 ]
[ 0 0 0 −𝑏1 − 𝑏2 + 𝑏3 ] .
So, the system is consistent if and only if −𝑏1 − 𝑏2 + 𝑏3 = 0, i.e., if and only if 𝑏3 = 𝑏1 + 𝑏2.
(3) 𝐴 is lower triangular if it is of the form
𝐴 = [ 𝑎11 0 0 ⋯ 0 ]
    [ 𝑎21 𝑎22 0 ⋯ 0 ]
    [ ⋮ ⋮ 𝑎33 ⋱ ⋮ ]
    [ ⋮ ⋮ ⋮ ⋱ 0 ]
    [ 𝑎𝑛1 ⋯ ⋯ ⋯ 𝑎𝑛𝑛 ] .
That is, all the entries above the main diagonal of 𝐴 are zero (i.e., 𝐴 = [𝑎𝑖𝑗] is lower triangular ⇔ 𝑎𝑖𝑗 = 0 for 𝑖 < 𝑗).
❖ Note that diagonal matrices are both upper and lower triangular.
Example 2:
(1) 𝐴 = [ 2 0 0 ]
        [ 0 1 0 ]
        [ 0 0 6 ] is diagonal. So, 𝐴 is both upper and lower triangular.
(2) 𝐴 = [ 2 0 4 ]
        [ 0 1 0 ]
        [ 0 0 6 ] is upper triangular.
(3) 𝐴 = [ 2 0 0 0 ]
        [ 𝑒 1 0 0 ]
        [ 4 0 6 0 ]
        [ 3 0 𝜋 −1 ] is lower triangular.
Theorem 3:
(a) The transpose of a lower triangular matrix is upper triangular, and the transpose
of an upper triangular matrix is lower triangular.
(b) The product of lower triangular matrices is lower triangular, and the product of
upper triangular matrices is upper triangular.
(c) A triangular matrix is invertible if and only if its diagonal entries are all
nonzero.
(d) The inverse of an invertible lower triangular matrix is lower triangular, and the
inverse of an invertible upper triangular matrix is upper triangular.
Example 4: Let
𝐴 = [ 1 3 −1 ]           𝐵 = [ 3 −2 2 ]
    [ 0 2 4 ]   and          [ 0 0 −1 ] .
    [ 0 0 5 ]                [ 0 0 1 ]
Then 𝐴 and 𝐵 are upper triangular. Also,
𝐴−1 = [ 1 −3/2 7/5 ]              𝐴𝐵 = [ 3 −2 −2 ]
      [ 0 1/2 −2/5 ]   and            [ 0 0 2 ]
      [ 0 0 1/5 ]                     [ 0 0 5 ]
are upper triangular.
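Parts (b) and (d) of Theorem 3 can be spot-checked on the matrices of this example (`matmul` and `is_upper_triangular` are illustrative helpers):

```python
from fractions import Fraction

def matmul(A, B):
    """Product of two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def is_upper_triangular(M):
    return all(M[i][j] == 0 for i in range(len(M)) for j in range(len(M)) if i > j)

A = [[1, 3, -1], [0, 2, 4], [0, 0, 5]]
B = [[3, -2, 2], [0, 0, -1], [0, 0, 1]]

# Product of upper triangular matrices is upper triangular:
AB = matmul(A, B)
assert AB == [[3, -2, -2], [0, 0, 2], [0, 0, 5]]
assert is_upper_triangular(AB)

# The inverse stated in the example is correct and upper triangular:
A_inv = [[Fraction(1), Fraction(-3, 2), Fraction(7, 5)],
         [0, Fraction(1, 2), Fraction(-2, 5)],
         [0, 0, Fraction(1, 5)]]
assert matmul(A, A_inv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert is_upper_triangular(A_inv)
```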
Definition 5:
(a) A square matrix 𝐴 is said to be symmetric if 𝐴 = 𝐴𝑡 .
(b) A square matrix 𝐴 is said to be skew-symmetric if 𝐴 = −𝐴𝑡 .
Remark 6: An 𝑛 × 𝑛 matrix 𝐴 = [𝑎𝑖𝑗 ] is:
(a) symmetric if 𝑎𝑖𝑗 = 𝑎𝑗𝑖 for all 𝑖, 𝑗 = 1,2, … , 𝑛.
(b) skew-symmetric if 𝑎𝑖𝑗 = −𝑎𝑗𝑖 for all 𝑖, 𝑗 = 1,2, … , 𝑛. Note that the diagonal
entries of a skew-symmetric matrix are 0.
Example 7:
(1) 𝐴 = [ 7 −3 ]
        [ −3 8 ]
is symmetric because 𝐴𝑡 = [ 7 −3 ]
                          [ −3 8 ] = 𝐴.
(2) 𝐴 = [ 1 4 5 ]
        [ 4 −5 0 ]
        [ 5 0 7 ]
is symmetric because 𝐴𝑡 = 𝐴.
(3) 𝐴 = [ 𝑑1 0 0 0 ]
        [ 0 𝑑2 0 0 ]
        [ 0 0 𝑑3 0 ]
        [ 0 0 0 𝑑4 ]
is symmetric because 𝐴𝑡 = 𝐴.
(4) 𝐴 = [ 0 −4 5 ]
        [ 4 0 0 ]
        [ −5 0 0 ]
is skew-symmetric because 𝐴𝑡 = [ 0 4 −5 ]
                               [ −4 0 0 ]
                               [ 5 0 0 ] = −𝐴.
(5) 𝐴 = [ 0 0 ]
        [ 0 0 ]
is both symmetric and skew-symmetric because 𝐴𝑡 = 𝐴 and 𝐴𝑡 = −𝐴.
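The defining conditions 𝐴𝑡 = 𝐴 and 𝐴𝑡 = −𝐴 are mechanical to test; a small Python sketch (helper names illustrative), using the matrices of Example 7:

```python
def transpose(M):
    """Transpose of a matrix stored as a list of rows."""
    return [list(col) for col in zip(*M)]

def is_symmetric(M):
    return transpose(M) == M

def is_skew_symmetric(M):
    return transpose(M) == [[-x for x in row] for row in M]

assert is_symmetric([[7, -3], [-3, 8]])                       # Example 7(1)
assert is_skew_symmetric([[0, -4, 5], [4, 0, 0], [-5, 0, 0]]) # Example 7(4)

# The 2x2 zero matrix is both (Example 7(5)):
Z = [[0, 0], [0, 0]]
assert is_symmetric(Z) and is_skew_symmetric(Z)
```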
Theorem 9: If 𝐴 and 𝐵 are symmetric matrices with the same size, and if 𝑘 is any
scalar, then
(a) 𝐴𝑡 is symmetric.
(b) 𝐴 + 𝐵 and 𝐴 − 𝐵 are symmetric.
(c) 𝑘𝐴 is symmetric
Proof: 𝐴 and 𝐵 are symmetric ⇒ 𝐴𝑡 = 𝐴 and 𝐵𝑡 = 𝐵.
(a) (𝐴𝑡 )𝑡 = 𝐴 = 𝐴𝑡 ⇒(𝐴𝑡 )𝑡 = 𝐴𝑡 ⇒𝐴𝑡 is symmetric.
(b) (𝐴 + 𝐵)𝑡 = 𝐴𝑡 + 𝐵 𝑡 = 𝐴 + 𝐵⇒𝐴 + 𝐵 is symmetric.
(𝐴 − 𝐵)𝑡 = 𝐴𝑡 − 𝐵 𝑡 = 𝐴 − 𝐵⇒𝐴 − 𝐵 is symmetric.
(c) (𝑘𝐴)𝑡 = 𝑘𝐴𝑡 = 𝑘𝐴⇒𝑘𝐴 is symmetric.
Theorem 11: The product of two symmetric matrices is symmetric if and only if
the matrices commute.
Proof: Let 𝐴, 𝐵 be symmetric matrices of the same size ⇒ 𝐴𝑡 = 𝐴 and 𝐵𝑡 = 𝐵.
Then
𝐴𝐵 is symmetric ⇔(𝐴𝐵)𝑡 = 𝐴𝐵⇔𝐵 𝑡 𝐴𝑡 = 𝐴𝐵⇔𝐵𝐴 = 𝐴𝐵
⇔𝐴, 𝐵 commute.
❖ The Products 𝑨𝑨𝒕 and 𝑨𝒕 𝑨 are symmetric: Matrix products of the form 𝐴𝐴𝑡
and 𝐴𝑡 𝐴 arise in a variety of applications. If 𝐴 is an 𝑚 × 𝑛 matrix, then 𝐴𝑡 is an
𝑛 × 𝑚 matrix, so the products 𝐴𝐴𝑡 and 𝐴𝑡 𝐴 are both square matrices. The
matrix 𝐴𝐴𝑡 has size 𝑚 × 𝑚, and the matrix 𝐴𝑡 𝐴 has size 𝑛 × 𝑛. Such products
are always symmetric since
➢ (𝐴𝐴𝑡 )𝑡 = (𝐴𝑡 )𝑡 𝐴𝑡 = 𝐴𝐴𝑡 (that is, 𝐴𝐴𝑡 is symmetric)
(𝐴𝑡 𝐴)𝑡 = 𝐴𝑡 (𝐴𝑡 )𝑡 = 𝐴𝑡 𝐴 (that is, 𝐴𝑡 𝐴 is symmetric).
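A quick check of these two identities on a sample matrix (the 2 × 3 matrix below is chosen here, not taken from the text):

```python
def matmul(A, B):
    """Product of two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 2, 0], [3, -1, 4]]      # sample 2x3 matrix
AAt = matmul(A, transpose(A))    # 2x2 product
AtA = matmul(transpose(A), A)    # 3x3 product

# Both products are square and symmetric, as argued above:
assert AAt == transpose(AAt)
assert AtA == transpose(AtA)
assert len(AAt) == 2 and len(AtA) == 3
```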
Theorem 13: If 𝐴 is an invertible matrix, then 𝐴𝐴𝑡 and 𝐴𝑡 𝐴 are also invertible.
Proof: Since 𝐴 is invertible, so is 𝐴𝑡 . Thus, 𝐴𝐴𝑡 and 𝐴𝑡 𝐴 are invertible, since they
are the products of invertible matrices.