
Homework 8 Solutions

Chapter 5.1
     
13. For λ = 1: A − 1I = \begin{bmatrix} 4 & 0 & 1 \\ -2 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 0 & 1 \\ -2 & 0 & 0 \\ -2 & 0 & 0 \end{bmatrix}.
The equations for (A − I)x = 0 are easy to solve: 3x1 + x3 = 0 and −2x1 = 0. Row operations hardly seem necessary. Obviously x1 is zero, and hence x3 is also zero. There are three variables, so x2 is free. The general solution of (A − I)x = 0 is x2 e2, where e2 = (0, 1, 0)^T, and so e2 provides a basis for the eigenspace.

For λ = 2: A − 2I = \begin{bmatrix} 4 & 0 & 1 \\ -2 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 0 & 1 \\ -2 & -1 & 0 \\ -2 & 0 & -1 \end{bmatrix}. Row reducing [A − 2I  0]:
\begin{bmatrix} 2 & 0 & 1 & 0 \\ -2 & -1 & 0 & 0 \\ -2 & 0 & -1 & 0 \end{bmatrix} \sim \begin{bmatrix} 2 & 0 & 1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1/2 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
So x1 = −(1/2)x3 and x2 = x3, with x3 free. The general solution of (A − 2I)x = 0 is x3(−1/2, 1, 1)^T. A nice basis vector for the eigenspace is (−1, 2, 2)^T.

For λ = 3: A − 3I = \begin{bmatrix} 4 & 0 & 1 \\ -2 & 1 & 0 \\ -2 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 \\ -2 & -2 & 0 \\ -2 & 0 & -2 \end{bmatrix}. Row reducing [A − 3I  0]:
\begin{bmatrix} 1 & 0 & 1 & 0 \\ -2 & -2 & 0 & 0 \\ -2 & 0 & -2 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & -2 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
So x1 = −x3 and x2 = x3, with x3 free. A basis vector for the eigenspace is (−1, 1, 1)^T.
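As a quick numerical check of these hand computations (a sketch, assuming NumPy is available; np.linalg.eig returns unit-length eigenvectors, so they will be scalar multiples of the basis vectors found above):

```python
import numpy as np

A = np.array([[ 4, 0, 1],
              [-2, 1, 0],
              [-2, 0, 1]])

# The eigenvalues should come out as 1, 2, 3.
print(np.sort(np.linalg.eigvals(A).real))   # [1. 2. 3.]

# Each hand-computed basis vector satisfies Av = lambda * v.
for lam, v in [(1, [0, 1, 0]), (2, [-1, 2, 2]), (3, [-1, 1, 1])]:
    v = np.array(v, dtype=float)
    assert np.allclose(A @ v, lam * v)
```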
16. For λ = 4: A − 4I = \begin{bmatrix} 3 & 0 & 2 & 0 \\ 1 & 3 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix} - \begin{bmatrix} 4 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 4 \end{bmatrix} = \begin{bmatrix} -1 & 0 & 2 & 0 \\ 1 & -1 & 1 & 0 \\ 0 & 1 & -3 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. Row reducing [A − 4I  0]:
\begin{bmatrix} -1 & 0 & 2 & 0 & 0 \\ 1 & -1 & 1 & 0 & 0 \\ 0 & 1 & -3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 0 & 0 \\ 0 & 1 & -3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
So x1 = 2x3 and x2 = 3x3, with x3 and x4 free variables. The general solution of (A − 4I)x = 0 is
x = (x1, x2, x3, x4)^T = (2x3, 3x3, x3, x4)^T = x3(2, 3, 1, 0)^T + x4(0, 0, 0, 1)^T.
A basis for the eigenspace is {(2, 3, 1, 0)^T, (0, 0, 0, 1)^T}.
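The two-dimensional eigenspace can be confirmed numerically (a sketch, assuming NumPy; the dimension of Nul(A − 4I) is 4 minus the rank of A − 4I):

```python
import numpy as np

A = np.array([[3, 0, 2, 0],
              [1, 3, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 4]])

# dim Nul(A - 4I) = 4 - rank(A - 4I) = 2, matching the two basis vectors.
print(4 - np.linalg.matrix_rank(A - 4 * np.eye(4)))   # 2

for v in ([2, 3, 1, 0], [0, 0, 0, 1]):
    v = np.array(v, dtype=float)
    assert np.allclose(A @ v, 4 * v)
```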
17. The eigenvalues of \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 5 \\ 0 & 0 & -1 \end{bmatrix} are 0, 2, and −1, the entries on the main diagonal, by Theorem 1.
21. a. False. The equation Ax = λx must have a nontrivial solution.
b. True. See the paragraph after Example 5.
c. True. See the discussion of equation (3).
d. True. See Example 2 and the paragraph preceding it. Also, see the Numerical Note.
e. False. See the warning after Example 3.

27. Use the Hint in the text to write, for any λ, (A − λI)^T = A^T − (λI)^T = A^T − λI. Since (A − λI)^T is invertible if and only if A − λI is invertible (by Theorem 6(c) in Section 2.2), it follows that A^T − λI is not invertible if and only if A − λI is not invertible. That is, λ is an eigenvalue of A^T if and only if λ is an eigenvalue of A.
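The result is easy to illustrate numerically (a sketch, assuming NumPy; the random test matrix here is not from the exercise, just an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 4)).astype(float)

# A and A^T have the same characteristic polynomial, hence the same eigenvalues.
ev_A  = np.sort_complex(np.linalg.eigvals(A))
ev_AT = np.sort_complex(np.linalg.eigvals(A.T))
assert np.allclose(ev_A, ev_AT)
```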

28. If A is lower triangular, then A^T is upper triangular and has the same diagonal entries as A. Hence, by the part of Theorem 1 already proved in the text, these diagonal entries are eigenvalues of A^T. By Exercise 27, they are also eigenvalues of A.

33. (The solution is given in the text.)
a. Replace k by k + 1 in the definition of x_k to obtain x_{k+1} = c1 λ^{k+1} u + c2 μ^{k+1} v.
b. Ax_k = A(c1 λ^k u + c2 μ^k v)
        = c1 λ^k Au + c2 μ^k Av          (by linearity)
        = c1 λ^k λu + c2 μ^k μv          (since u and v are eigenvectors)
        = c1 λ^{k+1} u + c2 μ^{k+1} v = x_{k+1}.

Chapter 5.2
9. det(A − λI) = det \begin{bmatrix} 1-\lambda & 0 & -1 \\ 2 & 3-\lambda & -1 \\ 0 & 6 & -\lambda \end{bmatrix}. From the special formula for 3 × 3 determinants, the characteristic polynomial is
det(A − λI) = (1 − λ)(3 − λ)(−λ) + 0 + (−1)(2)(6) − 0 − (6)(−1)(1 − λ) − 0
            = (λ^2 − 4λ + 3)(−λ) − 12 + 6(1 − λ)
            = −λ^3 + 4λ^2 − 3λ − 12 + 6 − 6λ
            = −λ^3 + 4λ^2 − 9λ − 6.
(This polynomial has one irrational zero and two imaginary zeros.) Another way to evaluate the determinant is to interchange rows 1 and 2 (which reverses the sign of the determinant) and then make one row replacement:
det \begin{bmatrix} 1-\lambda & 0 & -1 \\ 2 & 3-\lambda & -1 \\ 0 & 6 & -\lambda \end{bmatrix} = −det \begin{bmatrix} 2 & 3-\lambda & -1 \\ 1-\lambda & 0 & -1 \\ 0 & 6 & -\lambda \end{bmatrix}
= −det \begin{bmatrix} 2 & 3-\lambda & -1 \\ 0 & 0 + (0.5\lambda - 0.5)(3-\lambda) & -1 + (0.5\lambda - 0.5)(-1) \\ 0 & 6 & -\lambda \end{bmatrix}.
Next, expand by cofactors down the first column. The quantity above equals
−2 det \begin{bmatrix} (0.5\lambda - 0.5)(3-\lambda) & -0.5 - 0.5\lambda \\ 6 & -\lambda \end{bmatrix} = −2[(0.5λ − 0.5)(3 − λ)(−λ) − (−0.5 − 0.5λ)(6)]
= (1 − λ)(3 − λ)(−λ) − (1 + λ)(6) = (λ^2 − 4λ + 3)(−λ) − 6 − 6λ = −λ^3 + 4λ^2 − 9λ − 6.
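The coefficients can be double-checked numerically (a sketch, assuming NumPy; np.poly returns the monic polynomial det(λI − A), which is −1 times the polynomial computed above):

```python
import numpy as np

A = np.array([[1, 0, -1],
              [2, 3, -1],
              [0, 6,  0]])

# Coefficients of det(lambda*I - A), highest power first:
# lambda^3 - 4 lambda^2 + 9 lambda + 6  =  -det(A - lambda*I).
print(np.poly(A))   # approximately [ 1. -4.  9.  6.]
```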

12. Make a cofactor expansion along the third row:
det(A − λI) = det \begin{bmatrix} -1-\lambda & 0 & 1 \\ -3 & 4-\lambda & 1 \\ 0 & 0 & 2-\lambda \end{bmatrix} = (2 − λ) · det \begin{bmatrix} -1-\lambda & 0 \\ -3 & 4-\lambda \end{bmatrix}
= (2 − λ)(−1 − λ)(4 − λ) = −λ^3 + 5λ^2 − 2λ − 8.

15. Use the fact that the determinant of a triangular matrix is the product of the diagonal entries:
det(A − λI) = det \begin{bmatrix} 4-\lambda & -7 & 0 & 2 \\ 0 & 3-\lambda & -4 & 6 \\ 0 & 0 & 3-\lambda & -8 \\ 0 & 0 & 0 & 1-\lambda \end{bmatrix} = (4 − λ)(3 − λ)^2(1 − λ).
The eigenvalues are 4, 3, 3, and 1.

19. Since the equation det(A − λI) = (λ1 − λ)(λ2 − λ) · · · (λn − λ) holds for all λ, set λ = 0
and conclude that det A = λ1 λ2 · · · λn .
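The identity is easy to confirm numerically (a sketch, assuming NumPy; the matrix from Exercise 9 is reused here, and the product of eigenvalues is real up to round-off because complex eigenvalues of a real matrix come in conjugate pairs):

```python
import numpy as np

A = np.array([[1, 0, -1],
              [2, 3, -1],
              [0, 6,  0]])   # the matrix from Exercise 9

# Setting lambda = 0 in the identity gives det A = product of the eigenvalues.
assert np.isclose(np.linalg.det(A), np.prod(np.linalg.eigvals(A)).real)
```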

20. det(A^T − λI) = det(A^T − λI^T)
                  = det((A − λI)^T)    (transpose property)
                  = det(A − λI)        (Theorem 3(c))

21. a. False. See Example 1.
b. False. See Theorem 3.
c. True. See Theorem 3.
d. False. See the solution of Example 4.

23. If A = QR, with Q invertible, and if A1 = RQ, then write A1 = Q^{-1}QRQ = Q^{-1}AQ, which shows that A1 is similar to A.

25. Example 5 of Section 4.9 showed that Av1 = v1 , which means that v1 is an eigenvector
of A corresponding to the eigenvalue 1.

Chapter 5.3
5. By the Diagonalization Theorem, the eigenvectors form the columns of the left factor, and they correspond respectively to the eigenvalues on the diagonal of the middle factor:
λ = 5: (1, 1, 1)^T;  λ = 1: (1, 0, −1)^T and (2, −1, 0)^T.
11. For λ = 3: A − 3I = \begin{bmatrix} -4 & 4 & -2 \\ -3 & 1 & 0 \\ -3 & 1 & 0 \end{bmatrix}, and row reducing [A − 3I  0] yields \begin{bmatrix} 1 & 0 & -1/4 & 0 \\ 0 & 1 & -3/4 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. The general solution is x3(1/4, 3/4, 1)^T, and a nice basis vector for the eigenspace is v1 = (1, 3, 4)^T.
For λ = 2: A − 2I = \begin{bmatrix} -3 & 4 & -2 \\ -3 & 2 & 0 \\ -3 & 1 & 1 \end{bmatrix}, and row reducing [A − 2I  0] yields \begin{bmatrix} 1 & 0 & -2/3 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. The general solution is x3(2/3, 1, 1)^T, and a nice basis vector for the eigenspace is v2 = (2, 3, 3)^T.
For λ = 1: A − I = \begin{bmatrix} -2 & 4 & -2 \\ -3 & 3 & 0 \\ -3 & 1 & 2 \end{bmatrix}, and row reducing [A − I  0] yields \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. The general solution is x3(1, 1, 1)^T, and a basis vector for the eigenspace is v3 = (1, 1, 1)^T.
From v1, v2, and v3 construct
P = [v1 v2 v3] = \begin{bmatrix} 1 & 2 & 1 \\ 3 & 3 & 1 \\ 4 & 3 & 1 \end{bmatrix}. Then set D = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix},
where the eigenvalues in D correspond to v1, v2, and v3 respectively.
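Although A itself is not displayed intact above, it can be recovered as A = (A − 3I) + 3I. A quick check that P and D really diagonalize it (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[-1, 4, -2],   # reconstructed from A - 3I above
              [-3, 4,  0],
              [-3, 1,  3]])
P = np.array([[1, 2, 1],
              [3, 3, 1],
              [4, 3, 1]])
D = np.diag([3, 2, 1])

# The factorization A = P D P^{-1} reproduces A.
assert np.allclose(P @ D @ np.linalg.inv(P), A)
```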

13. The eigenvalues of A are given to be


 5 and 1.  
−3 2 −1 [ ] 1 0 1 0
For λ = 5: A−5I =  1 −2 −1, and row reducing A − 5I 0 yields 0 1 1 0.
−1 −2 −3
  0 0 0 0
−1 −1
The general solution is x3 −1, and a basis for the eigenspace is v1 = −1.
 1  1 
1 2 −1 [ ] 1 2 −1 0
For λ = 1: A−1I =  1 2 −1, and row reducing A − I 0 yields 0 0 0 0.
−1 −2  1   0 0 0 0
−2 1
  
The general solution is x2 1 + x3 0, and a basis for the eigenspace is {v2 , v3 } =
    0 1
 −2 1 
 1  , 0 .
 
0 1  
[ ] −1 −2 1
From v1 ,v2 and v3 construct P = v1 v2 v3 = −1 1 0. Then set D =
  1 0 1
5 0 0
0 1 0, where the eigenvalues in D correspond to v1 , v2 and v3 respectively.
0 0 1

17. Since A is triangular, its eigenvalues are obviously 4 and 5.
For λ = 4: A − 4I = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, and row reducing [A − 4I  0] yields \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. The general solution is x2(0, 1, 0)^T, and a basis for the eigenspace is v1 = (0, 1, 0)^T.
Since λ = 5 must have only a one-dimensional eigenspace, we can find at most 2 linearly independent eigenvectors for A, so A is not diagonalizable.
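Recovering A = (A − 4I) + 4I, the failure to diagonalize shows up numerically in the geometric multiplicity (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[4, 0, 0],     # reconstructed from A - 4I above
              [1, 4, 0],
              [0, 0, 5]])

# Geometric multiplicity of lambda = 4 is 3 - rank(A - 4I) = 1,
# so A has at most 2 linearly independent eigenvectors in total.
print(3 - np.linalg.matrix_rank(A - 4 * np.eye(3)))   # 1
```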

21. a. False. The symbol D does not automatically denote a diagonal matrix.
b. True. See the remark after the statement of the Diagonalization Theorem.
c. False. The 3 × 3 matrix in Example 4 has 3 eigenvalues, counting multiplicities,
but it is not diagonalizable.
d. False. Invertibility depends on 0 not being an eigenvalue. (See the Invertible Ma-
trix Theorem.) A diagonalizable matrix may or may not have 0 as an eigenvalue.
See Examples 3 and 5 for both possibilities.

26. Yes, if the third eigenspace is only one-dimensional. In this case, the sum of the dimensions of the eigenspaces will be six, whereas the matrix is 7 × 7. See Theorem 7(b). An argument similar to that for Exercise 24 can also be given.

28. If A has n linearly independent eigenvectors, then by the Diagonalization Theorem, A = PDP^{-1} for some invertible P and diagonal D. Using properties of transposes,
A^T = (PDP^{-1})^T = (P^{-1})^T D^T P^T = (P^T)^{-1} D P^T = QDQ^{-1}, where Q = (P^T)^{-1}.
Thus A^T is diagonalizable. By the Diagonalization Theorem, the columns of Q are n linearly independent eigenvectors of A^T.

29. The diagonal entries in D1 are reversed from those in D. So interchange the (eigenvector) columns of P to make them correspond properly to the eigenvalues in D1. In this case,
P1 = \begin{bmatrix} 1 & 1 \\ -2 & -1 \end{bmatrix} and D1 = \begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix}.
Although the first column of P1 must be an eigenvector corresponding to the eigenvalue 3, there is nothing to prevent us from selecting some multiple of (1, −2)^T, say (−3, 6)^T, and letting P2 = \begin{bmatrix} -3 & 1 \\ 6 & -1 \end{bmatrix}. We now have three different factorizations or “diagonalizations” of A: A = PDP^{-1} = P1 D1 P1^{-1} = P2 D1 P2^{-1}.
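The exercise's own P and D are not reproduced above, but the column interchange described implies P = [[1, 1], [−1, −2]] and D = diag(5, 3); under that assumption, all three factorizations can be checked to agree (a sketch, assuming NumPy):

```python
import numpy as np

# P and D inferred from the solution text: P1 is P with its columns interchanged.
P  = np.array([[1, 1], [-1, -2]])
D  = np.diag([5, 3])
A  = P @ D @ np.linalg.inv(P)

P1 = np.array([[1, 1], [-2, -1]])
P2 = np.array([[-3, 1], [6, -1]])
D1 = np.diag([3, 5])

# All three factorizations reproduce the same A.
for Q, E in [(P, D), (P1, D1), (P2, D1)]:
    assert np.allclose(Q @ E @ np.linalg.inv(Q), A)
```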

Chapter 5.4
3. a. T(e1) = 0b1 − 1b2 + b3, T(e2) = −1b1 − 0b2 − 1b3, T(e3) = 1b1 − 1b2 + 0b3.
b. [T(e1)]_B = (0, −1, 1)^T, [T(e2)]_B = (−1, 0, −1)^T, [T(e3)]_B = (1, −1, 0)^T.
c. The matrix for T relative to E and B is
[[T(e1)]_B [T(e2)]_B [T(e3)]_B] = \begin{bmatrix} 0 & -1 & 1 \\ -1 & 0 & -1 \\ 1 & -1 & 0 \end{bmatrix}.

5. a. T(p) = (t + 5)(2 − t + t^2) = 10 − 3t + 4t^2 + t^3.
b. Let p and q be polynomials in P2, and let c be any scalar. Then
T(p(t) + q(t)) = (t + 5)[p(t) + q(t)] = (t + 5)p(t) + (t + 5)q(t) = T(p(t)) + T(q(t)),
T(c · p(t)) = (t + 5)[c · p(t)] = c · (t + 5)p(t) = c · T(p(t)),
so T is a linear transformation.
c. Let B = {1, t, t^2} and C = {1, t, t^2, t^3}. Since T(b1) = T(1) = (t + 5)(1) = t + 5, [T(b1)]_C = (5, 1, 0, 0)^T. Likewise, since T(b2) = T(t) = (t + 5)(t) = t^2 + 5t, [T(b2)]_C = (0, 5, 1, 0)^T, and since T(b3) = T(t^2) = (t + 5)(t^2) = t^3 + 5t^2, [T(b3)]_C = (0, 0, 5, 1)^T. Thus the matrix for T relative to B and C is
[[T(b1)]_C [T(b2)]_C [T(b3)]_C] = \begin{bmatrix} 5 & 0 & 0 \\ 1 & 5 & 0 \\ 0 & 1 & 5 \\ 0 & 0 & 1 \end{bmatrix}.
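The matrix can be checked by letting it act on coordinate vectors (a sketch, assuming NumPy; polynomials are represented by their coefficient vectors relative to B and C, lowest degree first):

```python
import numpy as np

# Matrix of T: p(t) |-> (t + 5) p(t) relative to B = {1, t, t^2}, C = {1, t, t^2, t^3}.
M = np.array([[5, 0, 0],
              [1, 5, 0],
              [0, 1, 5],
              [0, 0, 1]])

# p(t) = 2 - t + t^2 has B-coordinates (2, -1, 1); part (a) says
# T(p) = 10 - 3t + 4t^2 + t^3, i.e. C-coordinates (10, -3, 4, 1).
p = np.array([2, -1, 1])
print(M @ p)   # [10 -3  4  1]
```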
7. Since T(b1) = T(1) = 3 + 5t, [T(b1)]_B = (3, 5, 0)^T. Likewise, since T(b2) = T(t) = −2t + 4t^2, [T(b2)]_B = (0, −2, 4)^T, and since T(b3) = T(t^2) = t^2, [T(b3)]_B = (0, 0, 1)^T. Thus the matrix representation of T relative to the basis B is
[[T(b1)]_B [T(b2)]_B [T(b3)]_B] = \begin{bmatrix} 3 & 0 & 0 \\ 5 & -2 & 0 \\ 0 & 4 & 1 \end{bmatrix}.

13. Start by diagonalizing A. The characteristic polynomial is λ^2 − 4λ + 3 = (λ − 1)(λ − 3), so the eigenvalues of A are 1 and 3.
For λ = 1: A − I = \begin{bmatrix} -1 & 1 \\ -3 & 3 \end{bmatrix}. The equation (A − I)x = 0 amounts to −x1 + x2 = 0, so x1 = x2 with x2 free. A basis vector for the eigenspace is thus v1 = (1, 1)^T.
For λ = 3: A − 3I = \begin{bmatrix} -3 & 1 \\ -3 & 1 \end{bmatrix}. The equation (A − 3I)x = 0 amounts to −3x1 + x2 = 0, so x1 = (1/3)x2 with x2 free. A basis vector for the eigenspace is thus v2 = (1, 3)^T.
From v1 and v2 we may construct P = [v1 v2] = \begin{bmatrix} 1 & 1 \\ 1 & 3 \end{bmatrix}, which diagonalizes A. By Theorem 8, the basis B = {v1, v2} has the property that the B-matrix of the transformation x ↦ Ax is a diagonal matrix.

17. a. We compute that
Ab1 = \begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix} = 2b1,
so b1 is an eigenvector of A corresponding to the eigenvalue 2. The characteristic polynomial of A is λ^2 − 4λ + 4 = (λ − 2)^2, so 2 is the only eigenvalue for A. Now A − 2I = \begin{bmatrix} -1 & 1 \\ -1 & 1 \end{bmatrix}, which implies that the eigenspace corresponding to the eigenvalue 2 is one-dimensional. Thus the matrix A is not diagonalizable.
b. Following Example 4, if P = [b1 b2], then the B-matrix for T is
P^{-1}AP = \begin{bmatrix} -4 & 5 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ -1 & 3 \end{bmatrix}\begin{bmatrix} 1 & 5 \\ 1 & 4 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix}.
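Part (b) can be confirmed directly (a sketch, assuming NumPy; b2 = (5, 4)^T is read off from the second column of P in the display above):

```python
import numpy as np

A = np.array([[1, 1], [-1, 3]])
P = np.array([[1, 5], [1, 4]])   # columns b1 and b2

# The B-matrix of x |-> Ax is P^{-1} A P; it is triangular rather than
# diagonal because A is not diagonalizable.
print(np.linalg.inv(P) @ A @ P)  # [[ 2. -1.]
                                 #  [ 0.  2.]]
```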

25. If A = PBP^{-1}, then tr(A) = tr((PB)P^{-1}) = tr(P^{-1}(PB)) (by the trace property) = tr(P^{-1}PB) = tr(IB) = tr(B).
If B is diagonal, then the diagonal entries of B must be the eigenvalues of A, by the Diagonalization Theorem (Theorem 5 in Section 5.3). So tr A = tr B = {sum of the eigenvalues of A}.

26. If A = PDP^{-1} for some P, then the general trace property from Exercise 25 shows that tr A = tr[(PD)P^{-1}] = tr[P^{-1}PD] = tr D. (Or, one can use the result of Exercise 25 that since A is similar to D, tr A = tr D.) Since the eigenvalues of A are on the main diagonal of D, tr D is the sum of the eigenvalues of A.

27. For each j, I(bj) = bj. Since the standard coordinate vector of any vector in R^n is just the vector itself, [I(bj)]_E = bj. Thus the matrix for I relative to B and the standard basis E is simply [b1 b2 · · · bn]. This matrix is precisely the change-of-coordinates matrix P_B defined in Section 4.4.

Chapter 5.5
1. A = \begin{bmatrix} 1 & -2 \\ 1 & 3 \end{bmatrix}, A − λI = \begin{bmatrix} 1-\lambda & -2 \\ 1 & 3-\lambda \end{bmatrix}, and det(A − λI) = (1 − λ)(3 − λ) − (−2) = λ^2 − 4λ + 5. Use the quadratic formula to find the eigenvalues: λ = (4 ± √(16 − 20))/2 = 2 ± i. Example 2 gives a shortcut for finding one eigenvector, and Example 5 shows how to write the other eigenvector with no effort.
For λ = 2 + i: A − (2 + i)I = \begin{bmatrix} -1-i & -2 \\ 1 & 1-i \end{bmatrix}. The equation (A − λI)x = 0 gives
(−1 − i)x1 − 2x2 = 0
x1 + (1 − i)x2 = 0
As in Example 2, the two equations are equivalent: each determines the same relation between x1 and x2. So use the second equation to obtain x1 = −(1 − i)x2, with x2 free. The general solution is x2(−1 + i, 1)^T, and the vector v1 = (−1 + i, 1)^T provides a basis for the eigenspace.
For λ = 2 − i: Let v2 = v̄1 = (−1 − i, 1)^T. The remark prior to Example 5 shows that v2 is automatically an eigenvector for 2 − i. In fact, calculations similar to those above would show that {v2} is a basis for the eigenspace. (In general, for a real matrix A, it can be shown that the set of complex conjugates of the vectors in a basis of the eigenspace for λ is a basis of the eigenspace for λ̄.)
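A quick numerical confirmation (a sketch, assuming NumPy, which handles complex eigendata directly):

```python
import numpy as np

A = np.array([[1, -2], [1, 3]])

# The eigenvalues are the conjugate pair 2 + i and 2 - i.
print(np.linalg.eigvals(A))   # [2.+1.j 2.-1.j]

# v1 and its conjugate are eigenvectors for the conjugate eigenvalues.
v1 = np.array([-1 + 1j, 1])
assert np.allclose(A @ v1, (2 + 1j) * v1)
assert np.allclose(A @ v1.conj(), (2 - 1j) * v1.conj())
```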

8. A = \begin{bmatrix} \sqrt{3} & 3 \\ -3 & \sqrt{3} \end{bmatrix}. From Example 6, the eigenvalues are √3 ± 3i. The scale factor for the transformation x ↦ Ax is r = |λ| = √((√3)^2 + 3^2) = 2√3. From trigonometry, the angle of rotation φ is arctan(b/a) = arctan(−3/√3) = −π/3 radians.

15. A = \begin{bmatrix} 1 & 5 \\ -2 & 3 \end{bmatrix}. From Exercise 3, the eigenvalues of A are λ = 2 ± 3i, and the eigenvector v = (1 + 3i, 2)^T corresponds to λ = 2 − 3i. By Theorem 9,
P = [Re v  Im v] = \begin{bmatrix} 1 & 3 \\ 2 & 0 \end{bmatrix} and C = P^{-1}AP = -\frac{1}{6}\begin{bmatrix} 0 & -3 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} 1 & 5 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} 1 & 3 \\ 2 & 0 \end{bmatrix} = \begin{bmatrix} 2 & -3 \\ 3 & 2 \end{bmatrix}.
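Note that C has the rotation-scaling form [[a, −b], [b, a]] with a = 2 and b = 3. A quick check (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[1, 5], [-2, 3]])
P = np.array([[1, 3], [2, 0]])   # [Re v  Im v] for v = (1 + 3i, 2)

C = np.linalg.inv(P) @ A @ P
print(C)                         # [[ 2. -3.]
                                 #  [ 3.  2.]]

# Scale factor r = |2 + 3i| = sqrt(13), and A = P C P^{-1}.
print(abs(2 + 3j))               # 3.6055...
assert np.allclose(P @ C @ np.linalg.inv(P), A)
```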

21. The first equation in (2) is (−0.3 + 0.6i)x1 − 0.6x2 = 0. We solve this for x2 to find that x2 = ((−0.3 + 0.6i)/0.6)x1 = ((−1 + 2i)/2)x1. Letting x1 = 2, we find that y = (2, −1 + 2i)^T is an eigenvector for the matrix A. Since
y = (2, −1 + 2i)^T = ((−1 + 2i)/5)(−2 − 4i, 5)^T = ((−1 + 2i)/5)v1,
the vector y is a complex multiple of the vector v1 used in Example 2.

23. (a) properties of conjugates and the fact that \overline{x^T} = x̄^T
(b) \overline{Ax} = A x̄, since A is real
(c) x^T A x̄ is a scalar and hence may be viewed as a 1 × 1 matrix
(d) properties of transposes
(e) A^T = A and the definition of q

24. x̄^T Ax = x̄^T(λx) = λ · x̄^T x because x is an eigenvector. It is easy to see that x̄^T x is real (and positive) because z̄z is nonnegative for every complex number z. Since x̄^T Ax is real, by Exercise 23, so is λ. Next, write x = u + iv, where u and v are real vectors. Then Ax = A(u + iv) = Au + iAv and λx = λu + iλv. The real part of Ax is Au because the entries in A, u, and v are all real. The real part of λx is λu because λ and the entries in u and v are real. Since Ax and λx are equal, their real parts are equal, too. (Apply the corresponding statement about complex numbers to each entry of Ax.) Thus Au = λu, which shows that the real part of x is an eigenvector of A.

26. a. If λ = a − bi, then
Av = λv = (a − bi)(Re v + i Im v) = (a Re v + b Im v) + i(a Im v − b Re v).
By Exercise 25,
A(Re v) = Re(Av) = a Re v + b Im v,
A(Im v) = Im(Av) = −b Re v + a Im v.
b. Let P = [Re v  Im v]. By (a), A(Re v) = P \begin{bmatrix} a \\ b \end{bmatrix} and A(Im v) = P \begin{bmatrix} -b \\ a \end{bmatrix}. So
AP = [A(Re v)  A(Im v)] = \left[ P \begin{bmatrix} a \\ b \end{bmatrix}  \; P \begin{bmatrix} -b \\ a \end{bmatrix} \right] = P \begin{bmatrix} a & -b \\ b & a \end{bmatrix} = PC.
