6 Determinants
This chapter may be covered at any time after Chapter 1.
Overview In this chapter we introduce the idea of the determinant of a square matrix. We also
investigate some of the properties of the determinant. For example, a square matrix is
singular if and only if its determinant is zero.
We also consider applications of determinants in matrix theory. For instance, we
describe Cramer’s Rule for solving Ax = b, see how to express A−1 in terms of the
adjoint matrix, and show how the Wronskian can be used as a device for determining
linear independence of a set of functions.
6.1 INTRODUCTION
Determinants have played a major role in the historical development of matrix theory,
and they possess a number of properties that are theoretically pleasing. For example, in
terms of linear algebra, determinants can be used to characterize nonsingular matrices,
to express solutions of nonsingular systems Ax = b, and to calculate the dimension of
subspaces. In analysis, determinants are used to express vector cross products, to express
the conversion factor (the Jacobian) when a change of variables is needed to evaluate a
multiple integral, to serve as a convenient test (the Wronskian) for linear independence
of sets of functions, and so on. We explore the theory and some of the applications of
determinants in this chapter.
The material in Sections 6.2 and 6.3 duplicates the material in Sections 4.2 and 4.3
in order to present a contiguous coverage of determinants. The treatment is slightly
different because the material in Chapter 6 is self-contained, whereas Chapter 4 uses a
result (Theorem 6.13) that is stated in Chapter 4 but actually proved in Chapter 6. Hence,
the reader who has seen the results of Sections 4.2 and 4.3 might want to proceed directly
to Section 6.4.
For notational purposes the determinant is often expressed by using vertical bars:
det(A) = | a11  a12; a21  a22 |.
Solution
det(A) = | 1 2; −1 3 | = 1 · 3 − 2(−1) = 5;
det(B) = | 4 1; 2 1 | = 4 · 1 − 1 · 2 = 2;
det(C) = | 3 4; 6 8 | = 3 · 8 − 4 · 6 = 0.
Definition 2  Let A = (aij) be an (n × n) matrix, and let Mrs denote the [(n − 1) × (n − 1)]
matrix obtained by deleting the rth row and sth column from A. Then Mrs is
called a minor matrix of A, and the number det(Mrs) is the minor of the (r, s)th
entry, ars. In addition, the numbers Ars = (−1)^(r+s) det(Mrs) are called the
cofactors of A.
Example 2 Determine the minor matrices M11 , M23 , and M32 for the matrix A given by
A = [1 −1 2; 2 3 −3; 4 5 1].
Determinants are defined only for square matrices. Note also the inductive nature
of the definition. For example, if A is (3 × 3), then det(A) = a11 A11 + a12 A12 + a13 A13 ,
and the cofactors A11 , A12 , and A13 can be evaluated from Definition 1. Similarly, the
determinant of a (4 × 4) matrix is the sum of four (3 × 3) determinants, where each
(3 × 3) determinant is in turn the sum of three (2 × 2) determinants.
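The recursion just described is easy to state in MATLAB (the language used in the MATLAB exercises at the end of this chapter). The function name cofactor_det and the test call below are our own choices; the sketch simply mirrors the definition and is not meant as an efficient algorithm. Save it in a file cofactor_det.m:

function d = cofactor_det(A)
% Determinant of a square matrix by cofactor expansion along the first row.
n = size(A, 1);
if n == 1
    d = A(1, 1);
    return
end
d = 0;
for j = 1:n
    M = A(2:n, [1:j-1, j+1:n]);      % minor matrix M1j: delete row 1 and column j
    d = d + A(1, j) * (-1)^(1 + j) * cofactor_det(M);
end
end

For instance, cofactor_det([1 2; -1 3]) returns 5, in agreement with the 2 × 2 example above. Because each call spawns n smaller calls, the work grows roughly like n!, which is why the elimination methods of Sections 6.3 and 6.5 are preferred for computation.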
| a11 a12 a13; a21 a22 a23; a31 a32 a33 | = a11a22a33 − a11a23a32 − a12a21a33 + a12a23a31 + a13a21a32 − a13a22a31.
Since there are n! different permutations when S = {1, 2, . . . , n}, you can see why this definition is not
suitable for calculation. For example, calculating the determinant of a (10 × 10) matrix requires us to
evaluate 10! = 3,628,800 different terms of the form ±a1j1 a2j2 . . . a10j10 . The permutation definition is
useful for theoretical purposes, however. For instance, the permutation definition gives immediately that
det(A) = 0 when A has a row of zeros.
In detail,
A11 = | 2 3 1; 2 −1 0; −3 −2 1 |
    = 2 | −1 0; −2 1 | − 3 | 2 0; −3 1 | + 1 | 2 −1; −3 −2 | = −15;
A12 = − | −1 3 1; −3 −1 0; 2 −2 1 |
    = −( −1 | −1 0; −2 1 | − 3 | −3 0; 2 1 | + 1 | −3 −1; 2 −2 | ) = −18;
A14 = − | −1 2 3; −3 2 −1; 2 −3 −2 |
    = −( −1 | 2 −1; −3 −2 | − 2 | −3 −1; 2 −2 | + 3 | −3 2; 2 −3 | ) = −6.
The definition of det(A) given in Definition 3 and used in Examples 3 and 4 is based
on a cofactor expansion along the first row of A. In Section 6.5 (see Theorem 13), we
prove that the value det(A) can be calculated from a cofactor expansion along any row
or any column.
Also, note in Example 4 that the calculation of the (4×4) determinant was simplified
because of the zero entry in the (1, 3) position. Clearly, if we had some procedure
for creating zero entries, we could simplify the computation of determinants since the
cofactor of a zero entry need not be calculated. We will develop such simplifications in
the next section.
Solution We have det(T ) = t11 T11 + t12 T12 + t13 T13 + t14 T14 . Since t12 = 0, t13 = 0, and t14 = 0,
the calculation simplifies to
det(T) = t11T11 = 3 | 2 0 0; 3 2 0; 4 5 1 | = 3 · 2 | 2 0; 5 1 | = 3 · 2 · 2 · 1 = 12.
In Example 5, we saw that the determinant of the lower-triangular matrix T was the
product of the diagonal entries, det(T ) = t11 t22 t33 t44 . This simple relationship is valid
for any lower-triangular matrix.
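As a quick numerical illustration, MATLAB's built-in det agrees with the product of the diagonal entries; the matrix below has the same diagonal as T in Example 5, but the entries below the diagonal are arbitrary choices of our own:

T = [3 0 0 0; 2 2 0 0; 1 3 2 0; 1 4 5 1];   % lower triangular; diagonal entries 3, 2, 2, 1
det(T)                                       % 12, up to roundoff
prod(diag(T))                                % exactly 3*2*2*1 = 12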
6.2 EXERCISES
In Exercises 1–8, evaluate the determinant of the given matrix. If the determinant is zero, find a nonzero vector x such that Ax = θ. (We will see later that det(A) = 0 if and only if A is singular.)
1. [1 3; 2 1]   2. [6 7; 7 3]   3. [2 4; 4 8]   4. [1 3; 0 2]
5. [4 3; 1 7]   6. [2 −1; 1 1]   7. [4 1; −2 1]   8. [1 3; 2 6]
In Exercises 9–14, calculate the cofactors A11, A12, A13, and A33 for the given matrix A.
9. A = [1 2 1; 0 1 3; 2 1 1]   10. A = [1 4 0; 1 0 2; 3 1 2]
11. A = [2 −1 3; −1 2 2; 3 2 1]   12. A = [1 1 1; 1 1 2; 2 1 1]
13. A = [−1 1 −1; 2 1 0; 0 1 3]   14. A = [4 2 1; 4 3 1; 0 0 2]
In Exercises 15–20, use the results of Exercises 9–14 to find det(A), where:
15. A is in Exercise 9.   16. A is in Exercise 10.
17. A is in Exercise 11.   18. A is in Exercise 12.
19. A is in Exercise 13.   20. A is in Exercise 14.
In Exercises 21–24, calculate det(A).
21. A = [2 1 −1 2; 3 0 0 1; 2 1 2 0; 3 1 1 2]
22. A = [1 −1 1 2; 1 0 1 3; 0 0 2 4; 1 1 −1 1]
23. A = [2 0 2 0; 1 3 1 2; 0 1 2 1; 0 3 1 4]
24. A = [1 2 1 1; 0 2 0 3; 1 4 1 2; 0 2 1 3]
In Exercises 25 and 26, show that the quantities det(A), a21A21 + a22A22 + a23A23, and a31A31 + a32A32 + a33A33 are all equal. (This is a special case of a general result given later in Theorem 13.)
25. A = [1 3 2; −1 4 1; 2 2 3]   26. A = [2 4 1; 3 1 3; 2 3 2]
In Exercises 27 and 28, show that a11A21 + a12A22 + a13A23 = 0 and a11A31 + a12A32 + a13A33 = 0. (This is a special case of a general result given later in the lemma to Theorem 14.)
27. A as in Exercise 25   28. A as in Exercise 26
In Exercises 29 and 30, form the (3 × 3) matrix of cofactors C, where cij = Aij, and then calculate BA, where B = CT. How can you use this result to find A−1?
29. A as in Exercise 25   30. A as in Exercise 26
31. Verify that det(A) = 0 when
A = [0 a12 a13; 0 a22 a23; 0 a32 a33].
32. Use the result of Exercise 31 to prove that if U = (uij) is a (4 × 4) upper-triangular matrix, then det(U) = u11u22u33u44.
33. Let A = (aij) be a (2 × 2) matrix. Show that det(AT) = det(A).
34. An (n × n) symmetric matrix A is called positive definite if xTAx > 0 for all x in R^n, x ≠ θ. Let A be a (2 × 2) symmetric matrix. Prove the following:
a) If A is positive definite, then a11 > 0 and det(A) > 0.
b) If a11 > 0 and det(A) > 0, then A is positive definite.
[Hint: For part a), consider x = e1. Then consider x = [u, v]T and use the fact that A is symmetric.]
35. a) Let A be an (n × n) matrix. If n = 3, det(A) can be found by evaluating three (2 × 2) determinants. If n = 4, det(A) can be found by evaluating twelve (2 × 2) determinants. Give a formula, H(n), for the number of (2 × 2) determinants necessary to find det(A) for an arbitrary n.
b) Suppose you can perform additions, subtractions, multiplications, and divisions each at a rate of one per second. How long does it take to evaluate H(n) determinants of order (2 × 2) when n = 2, n = 5, and n = 10?
Once Eq. (1) is formally established, we will immediately know that the theorems
for column operations are also valid for row operations. (Row operations on A are
precisely mirrored by column operations on AT .) Therefore the following theorems are
stated in terms of elementary row operations, as well as elementary column operations,
although the row results will not be truly established until Theorem 12 is proved.
Elementary Operations
Our purpose is to describe how the determinant of a matrix A changes when an elementary
column operation is applied to A. The description will take the form of a series of
theorems. Because of the technical nature of the first three theorems, we defer their
proofs to the end of the section.
Our first result relating to elementary operations is given in Theorem 2. This theorem
asserts that a column interchange (or a row interchange) will change the sign of the
determinant.
Solution Let B denote the matrix obtained by interchanging the first and second columns of A.
Thus B is given by
B = [a12  a11; a22  a21].
Now det(B) = a12 a21 −a11 a22 , and det(A) = a11 a22 −a12 a21 . Thus det(B) = − det(A).
Clearly, det(A′) = ca11a22 − ca21a12 = c(a11a22 − a21a12) = c det(A). Similarly, if the
second column of A is the one multiplied by c, the determinant is a11(ca22) − a21(ca12) =
c(a11a22 − a21a12) = c det(A).
The determinant of A is −10. Use the fact that det(A) = −10 to find the determinants
of G, H , and J , where
G = [2 3 1; 4 0 4; 2 2 3],   H = [2 −3 1; 4 0 4; 2 −2 3],   and   J = [2 −3 2; 4 0 8; 2 −2 6].
Solution  Clearly, det(A) = −7. Therefore, by the corollary, det(3A) = 3^2 det(A) = −63. As a
check, note that the matrix 3A is given by
3A = [3 6; 12 3].
Theorem 4 If A, B, and C are (n×n) matrices that are equal except that the sth column (or row) of A is
equal to the sum of the sth columns (or rows) of B and C, then det(A) = det(B) + det(C).
As before, the proof of Theorem 4 is somewhat technical and is deferred to the end of
this section.
The case in which A, B, and C have the same first column is left as an exercise.
Example 7 Given that det(B) = 22 and det(C) = 29, find det(A), where
A = [1 3 2; 0 4 7; 2 1 8],   B = [1 1 2; 0 2 7; 2 0 8],   and   C = [1 2 2; 0 2 7; 2 1 8].
Theorem 5 Let A be an (n × n) matrix. If the j th column (or row) of A is a multiple of the kth
column (or row) of A, then det(A) = 0.
Theorem 6 If A is an (n × n) matrix, and if a multiple of the kth column (or row) is added to the j th
column (or row), then the determinant is not changed.
Example 8 Use elementary column operations to simplify finding the determinant of the (4 × 4)
matrix A:
A = [1 2 0 2; −1 2 3 1; −3 2 −1 0; 2 −3 −2 1].
Solution In Example 4 of Section 6.2, a laborious cofactor expansion showed that det(A) = −63.
In column form, A = [A1 , A2 , A3 , A4 ], and clearly we can introduce a zero into the (1, 2)
position by replacing A2 by A2 − 2A1 . Similarly, replacing A4 by A4 − 2A1 creates a
zero in the (1, 4) entry. Moreover, by Theorem 6, the determinant is unchanged. The
details are
det(A) = | 1 2 0 2; −1 2 3 1; −3 2 −1 0; 2 −3 −2 1 | = | 1 0 0 2; −1 4 3 1; −3 8 −1 0; 2 −7 −2 1 |
       = | 1 0 0 0; −1 4 3 3; −3 8 −1 6; 2 −7 −2 −3 |.
Thus it follows that det(A) is given by
det(A) = | 4 3 3; 8 −1 6; −7 −2 −3 |.
We now wish to create zeros in the (1, 2) and (1, 3) positions of this (3 × 3) determi-
nant. To avoid using fractions, we multiply the second and third columns by 4 (using
Theorem 3), and then add −3 times column 1 to columns 2 and 3:
det(A) = | 4 3 3; 8 −1 6; −7 −2 −3 | = (1/16) | 4 12 12; 8 −4 24; −7 −8 −12 | = (1/16) | 4 0 0; 8 −28 0; −7 13 9 |.
Thus we again find det(A) = −63.
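This is essentially how determinants are computed in practice: reduce to triangular form, keep track of interchanges, and multiply the pivots. A short MATLAB check for the matrix of this example (the use of the built-in lu factorization is our own illustration, not part of the text):

A = [1 2 0 2; -1 2 3 1; -3 2 -1 0; 2 -3 -2 1];
[L, U, P] = lu(A);              % P*A = L*U, with L unit lower triangular
det(P) * prod(diag(U))          % -63, matching the hand computation above
det(A)                          % the built-in det gives the same value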
Solution As in Gaussian elimination, column interchanges are sometimes desirable and serve to
keep order in the computations. Consider
det(A) = | 0 1 3 1; 1 −2 −2 2; 3 4 2 −2; 4 3 −1 1 | = − | 1 0 3 1; −2 1 −2 2; 4 3 2 −2; 3 4 −1 1 |.
det(N1s) = − det(M1s),   s ≠ i or j.
For definiteness let us suppose that i > j. Note that N1i contains no entries from the
original jth column. Furthermore, the columns of N1i can be rearranged to be the same
as the columns of M1j by i − j − 1 successive interchanges of adjacent columns. By
the induction hypothesis, each such interchange causes a sign change, and so
det(N1i) = (−1)^(i−j−1) det(M1j).
Therefore,
det(B) = Σ_{s=1, s≠i,j}^{n} a1s(−1)^(1+s) det(N1s) + a1j(−1)^(i+1) det(N1i) + · · ·
Proof of Theorem 3 Again, the proof is by induction. The case k = 2 was proved in Example 3.
Assuming the result is valid for (k × k) matrices, 2 ≤ k ≤ n − 1, let B be the (n × n)
matrix, where
B = [A1 , . . . , As−1 , cAs , As+1 , . . . , An ].
Let M1j and N1j be minor matrices of A and B, respectively, for 1 ≤ j ≤ n.
If j ≠ s, then N1j = M1j except that one column of N1j is multiplied by c. By the
induction hypothesis,
det(N1j) = c det(M1j),   1 ≤ j ≤ n, j ≠ s.
Moreover, N1s = M1s. Hence
det(B) = Σ_{j=1, j≠s}^{n} a1j(−1)^(1+j) det(N1j) + ca1s(−1)^(1+s) det(N1s)
       = Σ_{j=1, j≠s}^{n} a1j(−1)^(1+j) c det(M1j) + ca1s(−1)^(1+s) det(M1s)
       = c Σ_{j=1}^{n} a1j(−1)^(1+j) det(M1j) = c det(A).
Proof of Theorem 4 We use induction where the case k = 2 is done in Example 6. Assuming the result is
true for (k × k) matrices for 2 ≤ k ≤ n − 1, let
A = [A1 , A2 , . . . , An ], B = [A1 , . . . , As−1 , Bs , As+1 , . . . , An ], and
C = [A1 , . . . , As−1 , Cs , As+1 , . . . , An ],
where As = Bs + Cs , or
ais = bis + cis , for 1 ≤ i ≤ n.
Let M1j , N1j , and P1j be minor matrices of A, B, and C, respectively, for 1 ≤ j ≤ n.
If j ≠ s, then M1j, N1j, and P1j are equal except in one column, which we designate
as the rth column. Now the rth columns of N1j and P1j sum to the rth column of M1j.
Hence, by the induction hypothesis,
det(M1j) = det(N1j) + det(P1j),   1 ≤ j ≤ n, j ≠ s.
Clearly, if j = s, then M1s = N1s = P1s. Hence
det(B) + det(C) = Σ_{j=1, j≠s}^{n} a1j(−1)^(1+j) det(N1j) + b1s(−1)^(1+s) det(N1s)
                + Σ_{j=1, j≠s}^{n} a1j(−1)^(1+j) det(P1j) + c1s(−1)^(1+s) det(P1s)
                = Σ_{j=1, j≠s}^{n} a1j(−1)^(1+j) [det(N1j) + det(P1j)] + · · ·
6.3 EXERCISES
In Exercises 1–6, use elementary column operations to create zeros in the last two entries in the first row and then calculate the determinant of the original matrix.
1. [1 2 1; 2 0 1; 1 −1 1]   2. [2 4 −2; 0 2 3; 1 1 2]
3. [0 1 2; 3 1 2; 2 0 3]   4. [2 2 4; 1 0 1; 2 1 2]
5. [0 1 3; 2 1 2; 1 1 2]   6. [1 1 1; 2 1 2; 3 0 2]
Suppose that A = [A1, A2, A3, A4] is a (4 × 4) matrix, where det(A) = 3. In Exercises 7–12, find det(B).
7. B = [2A1, A2, A4, A3]
8. B = [A2, 3A3, A1, −2A4]
9. B = [A1 + 2A2, A2, A3, A4]
10. B = [A1, A1 + 2A2, A3, A4]
11. B = [A1 + 2A2, A2 + 3A3, A3, A4]
12. B = [2A1 − A2, 2A2 − A3, A3, A4]
In Exercises 13–15, use only column interchanges to produce a triangular matrix and then give the determinant of the original matrix.
13. [1 0 0 0; 2 0 0 3; 1 1 0 1; 1 4 2 2]   14. [0 0 2 0; 0 0 1 3; 0 4 1 3; 2 1 5 6]
15. [0 1 0 0; 0 2 0 3; 2 1 0 6; 3 2 2 4]
In Exercises 16–18, use elementary column operations to create zeros in the (1, 2), (1, 3), (1, 4), (2, 3), and (2, 4) positions. Then evaluate the original determinant.
16. [1 2 0 3; 2 5 1 1; 2 0 4 3; 0 1 6 2]   17. [2 4 −2 −2; 1 3 1 2; 1 3 1 3; −1 2 1 2]
18. [1 1 2 1; 0 1 4 1; 2 1 3 0; 2 2 1 2]
19. Use elementary row operations on the determinant in Exercise 16 to create zeros in the (2, 1), (3, 1), (4, 1), (3, 2), and (4, 2) positions. Assuming the column results in this section also hold for rows, give the value of the original determinant to verify that it is the same as in Exercise 16.
20. Repeat Exercise 19, using the determinant in Exercise 17.
21. Repeat Exercise 19, using the determinant in Exercise 18.
22. Find a (2 × 2) matrix A and a (2 × 2) matrix B, where det(A + B) is not equal to det(A) + det(B). Find a different A and B, both nonzero, such that det(A + B) = det(A) + det(B).
25. Let U be an (n × n) upper-triangular matrix and consider the cofactors U1j, 2 ≤ j ≤ n. Show that U1j = 0, 2 ≤ j ≤ n. [Hint: Some column in U1j is always the zero column.]
26. Use the result of Exercise 25 to prove inductively that det(U) = u11u22 . . . unn, where U = (uij) is an (n × n) upper-triangular matrix.
27. Let y = mx + b be the equation of the line through the points (x1, y1) and (x2, y2) in the plane. Show that the equation is given also by
| x y 1; x1 y1 1; x2 y2 1 | = 0.
28. Let (x1, y1), (x2, y2), and (x3, y3) be the vertices of a triangle in the plane where these vertices are numbered counterclockwise. Prove that the area of the triangle is given by
Area = (1/2) | x1 y1 1; x2 y2 1; x3 y3 1 |.
6.4 CRAMER'S RULE
In Section 6.3, we saw how to calculate the effect that a column operation or a row
operation has on a determinant. In this section, we use that information to analyze
the relationships between determinants, nonsingular matrices, and solutions of systems
Ax = b. We begin with the following lemma, which will be helpful in the proof of the
subsequent theorems.
Lemma 1 Let A = [A1 , A2 , . . . , An ] be an (n × n) matrix, and let b be any vector in R n . For each
i, 1 ≤ i ≤ n, let Bi be the (n × n) matrix Bi = [A1, . . . , Ai−1, b, Ai+1, . . . , An]. If the
system Ax = b is consistent and x1, x2, . . . , xn is any solution, then
xi det(A) = det(Bi),   1 ≤ i ≤ n.   (1)
Proof To keep the notation simple, we give the proof of Eq. (1) only for i = 1. Since the
system Ax = b is assumed to be consistent, there are values x1 , x2 , . . . , xn such that
x1A1 + x2A2 + · · · + xnAn = b.
Substituting this expression for b into the first column of [b, A2, . . . , An] and using
Theorems 3, 4, and 5 (each term det[Aj, A2, . . . , An] with j ≥ 2 has a repeated column
and is therefore zero), we obtain
x1 det(A) = det[b, A2, . . . , An];
and this equality verifies Eq. (1) for i = 1. Clearly, the same argument is valid for
any i.
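Lemma 1 is the computational core of Cramer's rule: when det(A) ≠ 0, each unknown is xi = det(Bi)/det(A). A minimal MATLAB sketch follows; the function name cramer_solve and the sample system are our own choices, and the code is meant only to mirror the formula. Save it in a file cramer_solve.m:

function x = cramer_solve(A, b)
% Solve Ax = b by Cramer's rule; assumes det(A) is nonzero.
n = size(A, 1);
dA = det(A);
x = zeros(n, 1);
for i = 1:n
    Bi = A;
    Bi(:, i) = b;          % Bi = [A1, ..., A(i-1), b, A(i+1), ..., An]
    x(i) = det(Bi) / dA;
end
end

For example, cramer_solve([2 1; 1 3], [5; 5]) returns [2; 1]. For large systems this is far more expensive than Gaussian elimination, but it renders the determinant formula faithfully.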
As the following theorem shows, one consequence of Lemma 1 is that a singular
matrix has determinant zero.
Lemma 2 Let A and B be (n × n) matrices and let C = AB. Let Ĉ denote the result of applying
an elementary column operation to C and let B̂ denote the result of applying the same
column operation to B. Then Ĉ = AB̂.
The proof of Lemma 2 is left to the exercises. The intent of the lemma is given schemat-
ically in Fig. 6.1.
Figure 6.1  Applying a column operation to the product AB produces the same matrix as applying the operation to B (producing B̂) and then forming AB̂.
Lemma 2 tells us that the same result is produced whether we apply a column
operation to the product AB or whether we apply the operation to B first (producing B̂)
and then form the product AB̂. For example, suppose that A and B are (3 × 3) matrices.
Consider the operation of interchanging column 1 and column 3:
B = [B1, B2, B3] → B̂ = [B3, B2, B1],   and so   AB̂ = [AB3, AB2, AB1];
AB = [AB1, AB2, AB3] → [AB3, AB2, AB1] = AB̂.
Proof of Theorem 8 Suppose that A and B are (n×n) matrices. If B is singular, then Theorem 8 is immediate,
for in this case AB is also singular. Thus, by Theorem 7, det(B) = 0 and det(AB) = 0.
Consequently, det(AB) = det(A) det(B).
Next, suppose that B is nonsingular. In this case, B can be transformed to the
(n × n) identity matrix I by a sequence of elementary column operations. (To see
this, note that B T is nonsingular by Theorem 17, property 4, of Section 1.9. It now
follows from Theorem 16 of Section 1.9 that B T can be reduced to I by a sequence
of elementary row operations. But performing row operations on B T is equivalent to
performing column operations on B.) Therefore, det(B) = k det(I ) = k, where k is
determined entirely by the sequence of elementary column operations. By Lemma 2,
the same sequence of operations reduces the matrix AB to the matrix AI = A. Thus,
det(AB) = k det(A) = det(B) det(A) = det(A) det(B).
Example 1 Show by direct calculation that det(AB) = det(A) det(B) for the matrices
A = [2 1; 1 3]   and   B = [−1 3; 2 −2].
Solution We have det(A) = 5 and det(B) = −4. Since AB is given by
AB = [0 4; 5 −3],
it follows that det(AB) = −20 = (5)(−4) = det(A) det(B).
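The same identity is easy to spot-check numerically for randomly generated matrices (this check is our own addition, not part of the text):

A = round(10*rand(4) - 5);
B = round(10*rand(4) - 5);
det(A*B) - det(A)*det(B)        % zero, up to roundoff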
The following theorem is now an easy consequence of Theorem 8.
Theorems 7 and 9 show that an (n×n) matrix A is singular if and only if det(A) = 0.
This characterization of singular matrices is especially useful when we want to examine
matrices that depend on a parameter. The next example illustrates one such application.
Example 2 Find all values λ such that the matrix B(λ) is singular, where
B(λ) = [2 − λ  0  0; 2  3 − λ  4; 1  2  1 − λ].
Solution By Theorems 7 and 9, B(λ) is singular if and only if det[B(λ)] = 0. The equation
det[B(λ)] = 0 is determined by
0 = det[B(λ)]
= (2 − λ)[(3 − λ)(1 − λ) − 8]
= (2 − λ)[λ2 − 4λ − 5]
= (2 − λ)(λ − 5)(λ + 1).
Thus, B(λ) is singular if and only if λ is one of the values λ = 2, λ = 5, or λ = −1.
The three matrices discovered by solving det[B(λ)] = 0 are listed next. As we can
see, each of these matrices is singular:
B(2) = [0 0 0; 2 1 4; 1 2 −1],   B(5) = [−3 0 0; 2 −2 4; 1 2 −4],   B(−1) = [3 0 0; 2 4 4; 1 2 2].
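A quick numerical confirmation of Example 2 (the anonymous function below is our own device for building B(λ)):

B = @(lambda) [2-lambda 0 0; 2 3-lambda 4; 1 2 1-lambda];
for lambda = [2 5 -1]
    det(B(lambda))              % 0 in each case, so each matrix is singular
end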
CRAMER’S RULE In 1750, Gabriel Cramer (1704–1752) published a work in which, in the
appendix, he stated the determinant procedure named after him for solving n linear equations in n
unknowns. The first discoverer of this rule, however, was almost surely the Scottish mathematician Colin
Maclaurin (1698–1746). It appeared in a paper of Maclaurin’s in 1748, published two years after his
death. This perhaps compensates for the fact that the famous series named after Maclaurin was not first
discovered by him. (Ironically, the Maclaurin series is a special case of a Taylor series, named after the
English mathematician Brook Taylor. However, as with the Maclaurin series, Taylor was not the first
discoverer of the Taylor series!)
6.4 EXERCISES
In Exercises 1–3, use column operations to reduce the given matrix A to lower-triangular form. Find the determinant of A.
1. A = [0 1 3; 1 2 1; 3 4 1]   2. A = [1 2 1; 2 4 3; 2 1 3]   3. A = [2 2 4; 1 3 4; −1 2 1]
In Exercises 4–6, use column operations to reduce the given matrix A to the identity matrix. Find the determinant of A.
4. A = [1 0 1; 2 1 1; 1 2 1]   5. A = [1 0 −2; 3 1 3; 0 1 2]   6. A = [2 2 2; 4 3 4; 2 1 2]
7. Let A and B be (3 × 3) matrices such that det(A) = 2 and det(B) = 3. Find the value of each of the following.
a) det(AB)   b) det(AB^2)   c) det(A−1B)   d) det(2A−1)   e) det[(2A)−1]
8. Show that the matrices
[sin θ  −cos θ; cos θ  sin θ]   and   [sin θ  −cos θ  2; cos θ  sin θ  3; 0 0 1]
are nonsingular for all values of θ.
In Exercises 9–14, find all values λ such that the given matrix B(λ) is singular.
9. B(λ) = [λ 0; 3 2 − λ]   10. B(λ) = [λ 1; 1 λ]   11. B(λ) = [2 λ; λ 2]
12. B(λ) = [1 λ λ^2; 1 1 1; 1 3 9]   13. B(λ) = [λ 1 1; 1 λ 1; 1 1 λ]   14. B(λ) = [2 − λ  0  3; 2  λ  1; 1  0  −λ]
In Exercises 15–21, use Cramer's rule to solve the given system.
15. x1 + x2 = 3
    x1 − x2 = −1
16. x1 + 3x2 = 4
    x1 − x2 = 0
17. x1 − 2x2 + x3 = −1
    x1 + x3 = 3
    x1 − 2x2 = 0
18. x1 + x2 + x3 = 2
    x1 + 2x2 + x3 = 2
    x1 + 3x2 − x3 = −4
19. x1 + x2 + x3 − x4 = 2
    x2 − x3 + x4 = 1
    x3 − x4 = 0
    x3 + 2x4 = 3
20. 2x1 − x2 + x3 = 3
    x1 + x2 = 3
    x2 − x3 = 1
21. x1 + x2 + x3 = a
    x2 + x3 = b
    x3 = c
22. Suppose that A is an (n × n) matrix such that A^2 = I. Show that |det(A)| = 1.
23. Prove Lemma 2. [Hint: Let B = [B1, B2, . . . , Bi, . . . , Bj, . . . , Bn] and consider the matrix B̂ produced by interchanging column i and column j. Also consider the matrix B̂ produced by replacing Bi by Bi + aBj.]
24. We know that AB and BA are not usually equal. However, show that if A and B are (n × n), then det(AB) = det(BA).
25. Suppose that S is a nonsingular (n × n) matrix, and suppose that A and B are (n × n) matrices such that SAS−1 = B. Prove that det(A) = det(B).
26. Suppose that A is (n × n) and A^2 = A. What is det(A)?
27. If det(A) = 3, what is det(A^5)?
28. Let A be a nonsingular matrix and suppose that all the entries of both A and A−1 are integers. Prove that det(A) = ±1. [Hint: Use Theorem 9.]
29. Let A and C be square matrices, and let Q be a matrix of the form
Q = [A  O; B  C].
Convince yourself that det(Q) = det(A) det(C). [Hint: Reduce C to lower-triangular form with column operations; then reduce A.]
30. Verify the result in Exercise 29 for the matrix
Q = [1 2 0 0 0; 2 1 0 0 0; 3 5 1 2 2; 7 2 3 5 1; 1 8 1 4 1].
At this point we know that Theorems 2–6 of Section 6.3 are valid for rows as well as
for columns. In particular, we can use row operations to reduce a matrix A to a triangular
matrix T and conclude that det(A) = ± det(T ).
Example 1 We return to the (4 × 4) matrix A in Example 8 of Section 6.3, where det(A) = −63:
det(A) = | 1 2 0 2; −1 2 3 1; −3 2 −1 0; 2 −3 −2 1 |.
By using row operations, we can reduce det(A) to
det(A) = | 1 2 0 2; 0 4 3 3; 0 8 −1 6; 0 −7 −2 −3 |.
Now we switch rows 2 and 3 and then switch columns 2 and 3 in order to get the number
−1 into the pivot position. Following this switch, we create zeros in the (2, 3) and (2, 4)
positions with row operations; and we find
det(A) = | 1 0 2 2; 0 −1 8 6; 0 3 4 3; 0 −2 −7 −3 | = | 1 0 2 2; 0 −1 8 6; 0 0 28 21; 0 0 −23 −15 |.
(The sign of the first determinant above is the same as det(A) because the first determinant
is the result of two interchanges.) A quick calculation shows that the last determinant
has the value −63.
The next theorem shows that we can evaluate det(A) by using an expansion along
any row or any column we choose. Computationally, this ability is useful when some
row or column contains a number of zero entries.
Proof We establish only Eq. (1), which is an expansion of det(A) along the ith row. Expansion
of det(A) along the j th column in Eq. (2) is proved the same way.
Form a matrix B from A in the following manner: Interchange row i first with row
i − 1 and then with row i − 2; continue until row i is the top row of B. In other words,
bring row i to the top and push the other rows down so that they retain their same relative
ordering. This procedure requires i − 1 interchanges; so det(A) = (−1)i−1 det(B). An
inspection shows that the cofactors B11 , B12 , . . . , B1n are also related to the cofactors
Ai1 , Ai2 , . . . , Ain by B1k = (−1)i−1 Aik . To see this relationship, one need only observe
that if M is the minor of the (1, k) entry of B, then M is the minor of the (i, k) entry
of A. Therefore, B1k = (−1)k+1 M and Aik = (−1)i+k M, which shows that B1k =
(−1)i−1 Aik . With this equality and Definition 2 of Section 6.2,
det(B) = b11 B11 + b12 B12 + · · · + b1n B1n
= ai1 B11 + ai2 B12 + · · · + ain B1n
= (−1)i−1 (ai1 Ai1 + ai2 Ai2 + · · · + ain Ain ).
Since det(A) = (−1)i−1 det(B), formula (1) is proved.
Lemma  If A is an (n × n) matrix and if i ≠ k, then ai1Ak1 + ai2Ak2 + · · · + ainAkn = 0.
Proof  For i and k given, i ≠ k, let B be the (n × n) matrix obtained from A by deleting the
kth row of A and replacing it by the ith row of A; that is, B has two equal rows, the ith
and kth, and B is the same as A for all rows but the kth.
In this event it is clear that det(B) = 0, that the cofactor Bkj is equal to Akj , and
that the entry bkj is equal to aij . Putting these together gives
0 = det(B) = bk1 Bk1 + bk2 Bk2 + · · · + bkn Bkn
= ai1 Ak1 + ai2 Ak2 + · · · + ain Akn ;
thus the lemma is proved.
This lemma can be used to derive a formula for A−1 . In particular, let A be an (n×n)
matrix, and let C denote the matrix of cofactors; C = (cij ) is (n × n), and cij = Aij .
The adjoint matrix of A, denoted Adj(A), is equal to C T . With these preliminaries, we
prove Theorem 14.
and by the lemma and Theorem 13, bij = 0 when i ≠ j, while bii = det(A). Therefore,
B = det(A)I, and the theorem is proved.
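The proof shows that A Adj(A) = det(A)I, so A−1 = (1/det(A)) Adj(A) whenever det(A) ≠ 0. The MATLAB sketch below (our own illustration) builds the cofactor matrix entry by entry, using the matrix A of Example 7, for which det(A) = 51:

A = [1 3 2; 0 4 7; 2 1 8];
n = size(A, 1);
C = zeros(n);                                % matrix of cofactors, C(i,j) = Aij
for i = 1:n
    for j = 1:n
        M = A;  M(i, :) = [];  M(:, j) = []; % minor matrix Mij
        C(i, j) = (-1)^(i + j) * det(M);
    end
end
adjA = C';                                   % Adj(A) is the transpose of C
A * adjA                                     % det(A)*I, here 51*eye(3), up to roundoff
adjA / det(A)                                % agrees with inv(A)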
The Wronskian
As a final application of determinant theory, we develop a simple test for the linear
independence of a set of functions. Suppose that f0 (x), f1 (x), . . . , fn (x) are real-valued
functions defined on an interval [a, b]. If there exist scalars a0 , a1 , . . . , an (not all of
which are zero) such that
a0f0(x) + a1f1(x) + · · · + anfn(x) = 0   (4)
for all x in [a, b], then {f0 (x), f1 (x), . . . , fn (x)} is a linearly dependent set of functions
(see Section 5.4). If the only scalars for which Eq. (4) holds for all x in [a, b] are
a0 = a1 = · · · = an = 0, then the set is linearly independent.
A test for linear independence can be formulated from Eq. (4) as follows: If
a0 , a1 , . . . , an are scalars satisfying Eq. (4) and if the functions fi (x) are sufficiently dif-
ferentiable, then we can differentiate both sides of the identity (4) and have a0 f0(i) (x) +
a1 f1(i) (x) + · · · + an fn(i) (x) = 0, 1 ≤ i ≤ n. In matrix terms, these equations are
[ f0(x)  f1(x)  · · ·  fn(x); f0'(x)  f1'(x)  · · ·  fn'(x); . . . ; f0^(n)(x)  f1^(n)(x)  · · ·  fn^(n)(x) ] [ a0; a1; . . . ; an ] = [ 0; 0; . . . ; 0 ].
If we denote the coefficient matrix above as W (x), then det[W (x)] is called the Wron-
skian for {f0 (x), f1 (x), . . . , fn (x)}. If there is a point x0 in [a, b] such that det[W (x0 )] ≠ 0,
then the matrix W (x) is nonsingular at x = x0 , and the implication is that a0 = a1 =
· · · = an = 0. In summary, if the Wronskian is nonzero at any point in [a, b], then
{f0 (x), f1 (x), . . . , fn (x)} is a linearly independent set of functions. Note, however, that
det[W (x)] = 0 for all x in [a, b] does not imply linear dependence (see Example 4).
WRONSKIANS Wronskians are named after the Polish mathematician Josef Maria
Hoëné-Wroński (1778–1853). Unfortunately, the violent character of his personal life often detracted
from the respect he was due from his mathematical work. The Wronskian provides a partial test for linear
independence. If the Wronskian is nonzero for some x0 in [a, b], then f0 (x), f1 (x), . . . , fn (x) are
linearly independent (see the first part of Example 4). If the Wronskian is zero for all x in [a, b], then the
test gives no information (see the second part of Example 4).
The Wronskian does provide a complete test for linear independence, however, when
f0 (x), f1 (x), . . . , fn (x) are solutions of an (n + 1)st-order linear differential equation of the form
y^(n+1) + gn(x)y^(n) + · · · + g1(x)y' + g0(x)y = 0,
where g0 (x), g1 (x), . . . , gn (x) are all continuous on (a, b). In this case, f0 (x), f1 (x), . . . , fn (x) are
linearly independent if and only if the Wronskian is never zero for any x in (a, b).
Example 4 Let F1 = {x, cos x, sin x} and F2 = {sin2 x, | sin x| sin x} for −1 ≤ x ≤ 1. The
respective Wronskians are
w1(x) = det [ x  cos x  sin x; 1  −sin x  cos x; 0  −cos x  −sin x ] = x
and
w2(x) = det [ sin^2 x   |sin x| sin x; sin 2x   |sin 2x| ] = 0.
Since w1 (x) ≠ 0 for x ≠ 0, F1 is linearly independent. Even though w2 (x) = 0 for all
x in [−1, 1], F2 is also linearly independent, for if a1 sin2 x + a2 | sin x| sin x = 0, then
at x = 1, a1 + a2 = 0; and at x = −1, a1 − a2 = 0; so a1 = a2 = 0.
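The Wronskian w1(x) of Example 4 can also be checked symbolically; the sketch below assumes MATLAB's Symbolic Math Toolbox is available:

syms x
f = [x, cos(x), sin(x)];
W = [f; diff(f, x); diff(f, x, 2)];   % rows: the functions and their first two derivatives
simplify(det(W))                       % returns x, which is nonzero for x ~= 0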
The next theorem shows how elementary matrices can be used to represent elemen-
tary column operations as matrix products.
Theorem 15 Let E be the (n × n) elementary matrix that results from performing a certain column
operation on the (n × n) identity. If A is any (n × n) matrix, then AE is the matrix that
results when this same column operation is performed on A.
Proof We prove Theorem 15 only for the case in which the column operation is to add c times
column i to column j . The rest of the proof is left to the exercises.
Let E denote the elementary matrix derived by adding c times the ith column of I
to the j th column of I . Since I is given by I = [e1 , e2 , . . . , ei , . . . , ej , . . . , en ], we can
represent the elementary matrix E in column form as
E = [e1 , e2 , . . . , ei , . . . , ej + cei , . . . , en ].
Consequently, in column form, AE is the matrix
AE = [Ae1 , Ae2 , . . . , Aei , . . . , A(ej + cei ), . . . , Aen ].
Next, if A = [A1 , A2 , . . . , An ], then Aek = Ak , 1 ≤ k ≤ n. Therefore, AE has the form
AE = [A1 , A2 , . . . , Ai , . . . , Aj + cAi , . . . , An ].
From this column representation for AE, it follows that AE is the matrix that results
when c times column i of A is added to column j .
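A small numerical check of Theorem 15 for the operation "add c times column i to column j" (the size, the indices, and the random test matrix are arbitrary choices of ours):

n = 4;  i = 2;  j = 4;  c = 3;
E = eye(n);  E(i, j) = c;               % E = [e1, ..., ej + c*ei, ..., en]
A = round(10*rand(n));
C = A;  C(:, j) = C(:, j) + c*A(:, i);  % the column operation applied directly to A
norm(A*E - C)                           % 0: AE is the result of the column operation on A
det(E)                                  % 1, so det(AE) = det(A), as Theorem 6 requires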
We now use Theorem 15 to prove Theorem 11. Let A be an (n × n) matrix. Then A
can be reduced to a lower-triangular matrix L by using a sequence of column operations.
Equivalently, by Theorem 15, there is a sequence of elementary matrices E1 , E2 , . . . , Er
such that
AE1 E2 · · · Er = L. (5)
In Eq. (5), an elementary matrix Ek represents either a column interchange or the addition
of a multiple of one column to another. It can be shown that:
(a) If Ek represents a column interchange, then Ek is symmetric.
(b) If Ek represents the addition of a multiple of column i to column j , where
i < j , then Ek is an upper-triangular matrix with all main diagonal entries
equal to 1.
Now in Eq. (5), let Q denote the matrix Q = E1 E2 · · · Er and observe that Q is nonsin-
gular because each Ek is nonsingular. To complete the proof of Theorem 11, we need
to verify that det(QT ) = det(Q).
From the remarks in (a) and (b) above, det(EkT ) = det(Ek ), 1 ≤ k ≤ r, since each
matrix Ek is either symmetric or triangular. Thus
det(QT ) = det(ErT · · · E2T E1T )
= det(ErT ) · · · det(E2T ) det(E1T )
= det(Er ) · · · det(E2 ) det(E1 )
= det(Q).
An illustration of the discussion above is provided by the next example.
6.5 EXERCISES
In Exercises 1–4, use row operations to reduce the given determinant to upper-triangular form and determine the value of the original determinant.
1. | 1 2 1; 2 3 2; 1 2 2 |   2. | 0 3 1; 1 2 1; 0 1 2 |
3. | 0 1 3; 1 2 2; 3 1 0 |   4. | 1 0 1; 0 2 4; 3 2 1 |
In Exercises 5–10, find the adjoint matrix for the given matrix A. Next, use Theorem 14 to calculate the inverse of the given matrix.
5. [1 2; 3 4]   6. [a b; c d]
7. [1 0 1; 2 1 2; 1 1 2]   8. [2 1 0; 3 0 1; 0 1 1]
9. [1 1 1; −1 4 1; 1 3 1]   10. [1 2 3; 2 −2 2; 0 0 1]
In Exercises 11–16, calculate the Wronskian. Also, determine whether the given set of functions is linearly independent on the interval [−1, 1].
11. {1, x, x^2}   12. {e^x, e^(2x), e^(3x)}
13. {1, cos^2 x, sin^2 x}   14. {1, cos x, cos 2x}
15. {x^2, x|x|}   16. {x^2, 1 + x^2, 2 − x^2}
In Exercises 17–20, find elementary matrices E1, E2, and E3 such that AE1E2E3 = L, where L is lower triangular. Calculate the product Q = E1E2E3 and verify that AQ = L and det(Q) = det(QT).
17. A = [0 1 3; 1 2 4; 2 2 1]   18. A = [0 −1 2; 1 3 −1; 1 2 1]
19. A = [1 2 −1; 3 5 1; 4 0 2]   20. A = [2 4 −6; 1 1 1; 3 2 1]
In Exercises 21–24, calculate det[A(x)] and show that the given matrix A(x) is nonsingular for any real value of x. Use Theorem 14 to find an expression for the inverse of A(x).
21. A(x) = [x 1; −1 x]   22. A(x) = [1 x; −x 2]
23. A(x) = [2 x 0; −x 2 x; 0 −x 2]   24. A(x) = [sin x  0  cos x; 0 1 0; −cos x  0  sin x]
25. Let L and U be the (3 × 3) matrices
L = [1 0 0; a 1 0; b c 1]   and   U = [1 a b; 0 1 c; 0 0 1].
Use Theorem 14 to show that L−1 is lower triangular and U−1 is upper triangular.
26. Let L be a nonsingular (4 × 4) lower-triangular matrix. Show that L−1 is also a lower-triangular matrix. [Hint: Consider a variation of Exercise 25.]
27. Let A be an (n × n) matrix, where det(A) = 1 and A contains only integer entries. Show that A−1 contains only integer entries.
28. Let E denote the (n × n) elementary matrix corresponding to an interchange of the ith and jth columns of I. Let A be any (n × n) matrix.
a) Show that the matrix AE is equal to the result of interchanging columns i and j of A.
b) Show that the matrix E is symmetric.
29. An (n × n) matrix A is called skew symmetric if AT = −A. Show that if A is skew symmetric, then det(A) = (−1)^n det(A). If n is odd, show that A must be singular.
30. An (n × n) real matrix A is orthogonal provided that AT = A−1. If A is an orthogonal matrix, prove that det(A) = ±1.
31. Let A be an (n × n) nonsingular matrix. Prove that det[Adj(A)] = [det(A)]^(n−1). [Hint: Use Theorem 14.]
32. Let A be an (n × n) nonsingular matrix.
a) Show that [Adj(A)]−1 = (1/det(A)) A. [Hint: Use Theorem 14.]
b) Show that Adj(A−1) = (1/det(A)) A. [Hint: Use Theorem 14 to obtain a formula for (A−1)−1.]
SUPPLEMENTARY EXERCISES
1. Express
| a11 + b11   a12 + b12; a21 + b21   a22 + b22 |
as a sum of four determinants in which there are no sums in the entries.
2. Let A = [A1, A2, . . . , An] be an (n × n) matrix and let B = [An, An−1, . . . , A1]. How are det(A) and det(B) related when n is odd? When n is even?
3. If A is an (n × n) matrix such that A^3 = A, then list all possible values for det(A).
4. If A is a nonsingular (2 × 2) matrix and c is a scalar such that AT = cA, what are the possible values for c? If A is a nonsingular (3 × 3) matrix, what are the possible values for c?
5. Let A = (aij) be a (3 × 3) matrix such that det(A) = 2, and let Aij denote the ijth cofactor of A. If
B = [A31 A21 A11; A32 A22 A12; A33 A23 A13],
then calculate AB.
6. Let A = (aij) be a (3 × 3) matrix with a11 = 1, a12 = 2, and a13 = −1. Let
C = [−7 5 4; −4 3 2; 9 −7 −5]
be the matrix of cofactors for A. (That is, A11 = −7, A12 = 5, and so on.) Find A.
7. Let b = [b1, b2, . . . , bn]T.
a) For 1 ≤ i ≤ n, let Ai be the (n × n) matrix Ai = [e1, . . . , ei−1, b, ei+1, . . . , en]. Apply Cramer's rule to the system In x = b to show that det(Ai) = bi.
b) If B is the (n × n) matrix B = [b, . . . , b], then use part a) and Theorem 4 to determine a formula for det(B + I).
8. If the Wronskian for {f0(x), f1(x), f2(x)} is (x^2 + 1)e^x, then calculate the Wronskian for {xf0(x), xf1(x), xf2(x)}.
CONCEPTUAL EXERCISES
In Exercises 1–8, answer true or false. Justify your answer by providing a counterexample if the statement is false or an outline of a proof if the statement is true.
1. If A, B, and C are (n × n) matrices such that AB = AC and det(A) ≠ 0, then B = C.
2. If A and B are (n × n) matrices, then det(AB) = det(BA).
3. If A is an (n × n) matrix and c is a scalar, then det(cIn − A) = c^n − det(A).
4. If A is an (n × n) matrix and c is a scalar, then det(cA) = c det(A).
5. If A is an (n × n) matrix such that A^k = O for some positive integer k, then det(A) = 0.
6. If A1, A2, . . . , Am are (n × n) matrices such that B = A1A2 . . . Am is nonsingular, then each Ai is nonsingular.
7. If the matrix A is symmetric, then so is Adj(A).
8. If A is an (n × n) matrix such that det(A) = 1, then Adj[Adj(A)] = A.
In Exercises 9–15, give a brief answer.
9. Show that A^2 + I = O is not possible if A is an (n × n) matrix and n is odd.
10. Let A and B be (n × n) matrices such that AB = I. Prove that BA = I. [Hint: Show that det(A) ≠ 0 and conclude that A−1 exists.]
11. If A is an (n × n) matrix and c is a scalar, show that det(AT − cI) = det(A − cI).
12. Let A and B be (n × n) matrices such that B is nonsingular, and let c be a scalar.
a) Show that det(A − cI) = det(B−1AB − cI).
b) Show that det(AB − cI) = det(BA − cI).
13. If A is a nonsingular (n × n) matrix, then prove that Adj(A) is also nonsingular. [Hint: Consider the product A[Adj(A)].]
14. a) If A and B are nonzero (n × n) matrices such that AB = O, then prove that both A and B are singular. [Hint: What would you conclude if either A or B were nonsingular?]
b) Use part a) to prove that if A is a singular (n × n) matrix, then Adj(A) is also a singular matrix. [Hint: Consider the product A[Adj(A)].]
15. If A = (aij) is an (n × n) orthogonal matrix (that is, AT = A−1), then prove that Aij = aij det(A), where Aij is the ijth cofactor of A. [Hint: Express A−1 in terms of Adj(A).]
MATLAB EXERCISES
Exercises 1–6 will illustrate some properties of the determinant and help you sharpen your
skills using MATLAB to manipulate matrices and perform matrix surgery. These exercises
also reinforce the theoretical properties of the determinant that you learned in Chapter 6.
2. Use matrix A from Exercise 1 (or a similarly randomly generated matrix) to illustrate
Theorems 2, 3, and the corollary to Theorem 3.
7. How common are singular matrices? Because of the emphasis on singular matrices
in matrix theory, it might seem that they are quite common. In this exercise, randomly
generate 100 matrices, calculate the determinant of each, and then make a rough assessment
as to how likely encountering a singular matrix would be.
The following MATLAB loop will generate the determinant values for 100 randomly
chosen matrices:
determ = zeros(1,100);
for i = 1 : 100
A = round(20*rand(5,5) - 10*ones(5,5));
determ(1,i) = det(A);
end
After executing this loop, list the vector determ to display the 100 determinant
values calculated. Are any of the 100 matrices singular? Repeat the experiment using 1000
randomly generated matrices instead of 100. Rather than listing the vector determ, use
the min(abs(determ)) command to find the smallest determinant in absolute value.
Did you encounter any singular matrices?
8. Generating integer matrices with integer inverses For certain simulations, it is con-
venient to have a collection of randomly-generated matrices that have integer entries and
whose inverses also have integer entries. Argue, using Theorem 14, that an integer matrix
with determinant equal to 1 or −1 will have an integer inverse.
One easy way to create an integer matrix A with determinant equal to 1 or −1 is to set
A = LU where L is a lower-triangular integer matrix with 1’s and −1’s on its diagonal
and where U is an upper-triangular integer matrix with 1’s and −1’s on its diagonal. Then,
since det(A) = det(L) det(U ), we see that both A and A−1 will be integer matrices.
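The following lines sketch this construction (the particular entry ranges are arbitrary choices of ours, not part of the exercise):

n = 5;
d1 = sign(rand(n, 1) - 0.5);                      % random diagonal of 1's and -1's
d2 = sign(rand(n, 1) - 0.5);
L = tril(round(4*rand(n) - 2), -1) + diag(d1);    % lower-triangular integer matrix, det(L) = 1 or -1
U = triu(round(4*rand(n) - 2),  1) + diag(d2);    % upper-triangular integer matrix, det(U) = 1 or -1
A = L*U;                                          % integer matrix with det(A) = 1 or -1
Ainv = round(inv(A));                             % rounding removes the floating-point error
max(max(abs(A*Ainv - eye(n))))                    % 0, so Ainv is the (integer) inverse of A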
Use these ideas to create a set of ten randomly generated (5 × 5) integer matrices with
integer inverses. For each matrix A created, use the MATLAB inv command to generate
the inverse for A. Note, because of roundoff error, that the MATLAB inverse for A is
not always an integer matrix. To eliminate the roundoff error, you can use the command
round(inv(A)) in order to round the entries of A−1 to the nearest integer. Check, by
direct multiplication, that this will produce the inverse.