
Student Solutions Manual

for

Web Sections

Elementary Linear Algebra


4th Edition

Stephen Andrilli

David Hecker
Copyright © 2013, Elsevier Inc. All rights reserved.
Table of Contents
Lines and Planes and the Cross Product in ℝ³ ………………1

Change of Variables and the Jacobian ………………………6

Function Spaces ……………………………………………..9

Max-Min Problems in ℝⁿ and the Hessian Matrix ………...12

Jordan Canonical Form …………………………………….14




Lines and Planes and the Cross Product in ℝ³


(1) (a) Let (x₀, y₀, z₀) = (3, 1, 0) and [a, b, c] = [0, 1, −4]. Then, from Theorem 1, parametric equations for the line are x = 3 + 0t, y = 1 + 1t, z = 0 − 4t (t ∈ ℝ); that is, x = 3, y = 1 + t, z = −4t (t ∈ ℝ).
(c) Let (x₀, y₀, z₀) = (6, 2, 1) and (x₁, y₁, z₁) = (4, −3, 7). Then a vector in the direction of the line is [x₁ − x₀, y₁ − y₀, z₁ − z₀] = [−2, −5, 6]. Let [a, b, c] = [−2, −5, 6]. Then, from Theorem 1, parametric equations for the line are x = 6 − 2t, y = 2 − 5t, z = 1 + 6t (t ∈ ℝ). (Reversing the order of the points gives (x₀, y₀, z₀) = (4, −3, 7) and [a, b, c] = [2, 5, −6], and so another valid set of parametric equations for the line is: x = 4 + 2t, y = −3 + 5t, z = 7 − 6t (t ∈ ℝ).)
(e) Let (x₀, y₀, z₀) = (1, 5, 7). A vector in the direction of the given line is [a, b, c] = [−2, 1, 0] (since these values are the respective coefficients of the parameter t), and therefore this is a vector in the direction of the desired line as well. Then, from Theorem 1, parametric equations for the line are: x = 1 − 2t, y = 5 + t, z = 7 + 0t = 7 (t ∈ ℝ).

(2) (a) Setting the expressions for x equal gives −6 + 6t = 9 + 3s, which means s = 2t − 5. Setting the expressions for y equal gives 3 − 2t = 13 + 4s, which means 2s + t = −5. Substituting 2t − 5 for s into 2s + t = −5 leads to t = 1 and s = −3. Plugging these values into both expressions for x, y, and z in turn results in the same values: x = 0, y = 1, and z = 7. Hence, the given lines intersect at a single point: (0, 1, 7).
(c) Setting the expressions for x equal gives 4 − 6t = 8 − 4s, which means s = 1 + (3/2)t. (The same result is obtained by setting the expressions for y equal.) Setting the expressions for z equal gives 6 − 3t = 9 − 2s, which means s = (3/2)t + 3/2. Since it is impossible for these two expressions for s to be equal (1 + (3/2)t ≠ (3/2)t + 3/2) for any value of t, the given lines do not intersect.
(e) Setting the expressions for x equal gives 4t − 7 = 8s − 19, which means t = 2s − 3. Similarly, setting the expressions for y equal and setting the expressions for z equal also gives t = 2s − 3. (In other words, substituting 2s − 3 for t in the expressions for the first given line for x, y, z, respectively, gives the expressions for the second given line for x, y, z, respectively.) Thus, the lines are actually identical. The intersection of the lines is therefore the set of all points on (either) line; that is, the intersection consists of all points on the line x = 4t − 7, y = 3t + 2, z = t − 5 (t ∈ ℝ).
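As a quick numerical check (not part of the original solution), the simultaneous equations in Exercise 2(a) can be handed to SymPy; the two parametric lines below are written from the x- and y-equations quoted in the solution above, and the z-expressions would still have to be checked as in the text.

    # A minimal sketch using SymPy to find the parameter values where the lines meet.
    from sympy import symbols, solve

    t, s = symbols('t s')
    # Line 1: x = -6 + 6t, y = 3 - 2t;  Line 2: x = 9 + 3s, y = 13 + 4s
    sol = solve([(-6 + 6*t) - (9 + 3*s), (3 - 2*t) - (13 + 4*s)], [t, s], dict=True)
    print(sol)   # [{s: -3, t: 1}], matching t = 1, s = -3 above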

(3) (b) A vector in the direction of l₁ is a = [−3, 0, 4]. A vector in the direction of l₂ is b = [−3, 5, 4]. If θ is the angle between vectors a and b, then cos θ = (a · b)/(‖a‖‖b‖) = (9 + 0 + 16)/(5√50) = 25/(25√2) = √2/2. Then θ = cos⁻¹(√2/2) = 45°.

(4) (b) Let (x₀, y₀, z₀) = (6, 2, 1). In the solution to Exercise 1(c) above, we found that [a, b, c] = [−2, −5, 6] is a vector in the direction of the line. Substituting these values into the formula for the symmetric equations of the line gives (x − 6)/(−2) = (y − 2)/(−5) = (z − 1)/6. (If, instead, we had considered the points in reverse order, and chosen (x₀, y₀, z₀) = (4, −3, 7) and calculated [a, b, c] = [2, 5, −6] as a vector in the direction of the line, we would have obtained another valid form for the symmetric equations of the line: (x − 4)/2 = (y + 3)/5 = (z − 7)/(−6).)

(5) (a) Let (x₀, y₀, z₀) = (1, 7, −2), and let [a, b, c] = [6, 1, 6]. Then by Theorem 2, the equation of the plane is 6x + 1y + 6z = (6)(1) + (1)(7) + (6)(−2), which simplifies to 6x + y + 6z = 1.


(d) This plane goes through the origin (x₀, y₀, z₀) = (0, 0, 0) with normal vector [a, b, c] = [0, 0, 1]. Then by Theorem 2, the equation for the plane is 0x + 0y + 1z = (0)(0) + (0)(0) + (1)(0), which simplifies to z = 0.
(6) (a) A normal vector to the given plane is a = [2, 1, 2]. Since ‖a‖ = 3, a unit normal vector to the plane is given by [2/3, 1/3, 2/3].
(7) (a) Normal vectors to the given planes are, respectively, a = [7, −7, 8] and b = [4, 5, 11]. If θ is the angle between vectors a and b, then cos θ = (a · b)/(‖a‖‖b‖) = (28 − 35 + 88)/(√162 · √162) = 81/162 = 1/2. Then θ = cos⁻¹(1/2) = 60°.
(8) (a) Let x = [x₁, x₂, x₃] = [1, 2, −1] and y = [y₁, y₂, y₃] = [3, 7, 0]. Then [x₂y₃ − x₃y₂, x₃y₁ − x₁y₃, x₁y₂ − x₂y₁] = [(2)(0) − (−1)(7), (−1)(3) − (1)(0), (1)(7) − (2)(3)] = [7, −3, 1].
(g) First note that [2, 0, −1] × [−1, 2, 0] = [(0)(0) − (−1)(2), (−1)(−1) − (2)(0), (2)(2) − (0)(−1)] = [2, 1, 4]. Thus, [1, 2, −3] · ([2, 0, −1] × [−1, 2, 0]) = [1, 2, −3] · [2, 1, 4] = (1)(2) + (2)(1) + (−3)(4) = −8.
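The component formula used in Exercise 8 is easy to test numerically. The following sketch (not part of the manual) implements it directly and compares it with numpy.cross.

    import numpy as np

    def cross(x, y):
        # [x2*y3 - x3*y2, x3*y1 - x1*y3, x1*y2 - x2*y1]
        x1, x2, x3 = x
        y1, y2, y3 = y
        return np.array([x2*y3 - x3*y2, x3*y1 - x1*y3, x1*y2 - x2*y1])

    x, y = np.array([1, 2, -1]), np.array([3, 7, 0])
    print(cross(x, y), np.cross(x, y))                          # both give [ 7 -3  1]
    print(np.dot([1, 2, -3], cross([2, 0, -1], [-1, 2, 0])))    # -8, as in part (g)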
(15) (a) Let A = (1, 3, 0), B = (−1, 4, 1), C = (3, 2, 2). Let x be the vector from A to B. Then x = [−1 − 1, 4 − 3, 1 − 0] = [−2, 1, 1]. Let y be the vector from A to C. Then y = [3 − 1, 2 − 3, 2 − 0] = [2, −1, 2]. Hence, a normal vector to the plane is x × y = [3, 6, 0]. Therefore, the equation of the plane is 3x + 6y + 0z = (3)(1) + (6)(3) + (0)(0), or 3x + 6y = 21, which reduces to x + 2y = 7.
(17) (b) From the solution to Exercise 3(b) above, we know that a vector in the direction of l₁ is [−3, 0, 4], and a vector in the direction of l₂ is [−3, 5, 4]. The cross product of these vectors gives [−20, 0, −15] as a normal vector to the plane. Dividing by −5, we find that [4, 0, 3] is also a normal vector to the plane.
We next find a point of intersection for the lines. Setting the expressions for y equal gives 1 = 5s − 4, which means that s = 1. Setting the expressions for x equal gives 9 − 3t = 3 − 3s, which means s = t − 2. Substituting 1 for s into s = t − 2 leads to t = 3. Plugging these values for s and t into both expressions for x, y and z, respectively, yields the same values: x = 0, y = 1, z = 13. Hence, the given lines intersect at a single point: (0, 1, 13).
Finally, the equation for the plane is 4x + 0y + 3z = (4)(0) + (0)(1) + (3)(13), which reduces to 4x + 3z = 39.
(18) (c) The normal vector for the plane x + z = 6 is [1, 0, 1], and the normal vector for the plane y + z = 19 is [0, 1, 1]. The cross product [−1, −1, 1] of these vectors produces a vector in the direction of the line of intersection. To find a point of intersection of the planes, choose z = 0 to obtain x = 6 and y = 19. That is, one point where the planes intersect is (6, 19, 0). Thus, the line of intersection is: x = 6 − t, y = 19 − t, z = t (t ∈ ℝ).
(19) (a) A vector in the direction of the line l is [a, b, c] = [−2, 1, 2]. By setting t = 0, we obtain the point (x₀, y₀, z₀) = (4, 1, 5) on l. The vector from this point to the given point (x₁, y₁, z₁) = (3, 1, 2) is [x₁ − x₀, y₁ − y₀, z₁ − z₀] = [−1, 0, −3]. Finally, using Theorem 6, the shortest distance from the given point to l is:
‖[−2, 1, 2] × [−1, 0, −3]‖ / √((−2)² + 1² + 2²) = ‖[−3, −8, 1]‖ / 3 = √74/3 ≈ 2.867.
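The distance formula of Theorem 6, d = ‖v × w‖/‖v‖, is equally easy to evaluate with NumPy. The names v and w below follow the solution to Exercise 19(a); this is only an illustrative sketch.

    import numpy as np

    v = np.array([-2, 1, 2])      # direction vector of l
    w = np.array([-1, 0, -3])     # from the point (4, 1, 5) on l to the given point (3, 1, 2)
    d = np.linalg.norm(np.cross(v, w)) / np.linalg.norm(v)
    print(d)   # 2.8674..., i.e. sqrt(74)/3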

(21) (a) Since the given point is (x₁, y₁, z₁) = (5, 2, 0), and the given plane is 2x − y − 2z = 12, we have a = 2, b = −1, c = −2, and d = 12. From the formula in Theorem 7, the shortest distance from the given point to the given plane is:
|ax₁ + by₁ + cz₁ − d| / √(a² + b² + c²) = |2·5 + (−1)·2 + (−2)·0 − 12| / √(2² + (−1)² + (−2)²) = |−4|/3 = 4/3.

(c) The given point is (x₁, y₁, z₁) = (5, 0, −3). First, we find the equation of the plane through the points A = (3, 1, 5), B = (1, −1, 2), and C = (4, 3, 5). Note that the vector from A to B is [−2, −2, −3] and the vector from A to C is [1, 2, 0]. Thus, a normal vector to the plane is the cross product [6, −3, −2] of these vectors. Hence, the equation of the plane is 6x + (−3)y + (−2)z = (6)(3) + (−3)(1) + (−2)(5), or 6x − 3y − 2z = 5. That is, a = 6, b = −3, c = −2, and d = 5. From the formula in Theorem 7, the shortest distance from the given point to the given plane is:
|ax₁ + by₁ + cz₁ − d| / √(a² + b² + c²) = |6·5 + (−3)·0 + (−2)·(−3) − 5| / √(6² + (−3)² + (−2)²) = 31/7 ≈ 4.429.
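For comparison, here is a small sketch (not from the manual) evaluating the Theorem 7 distance |ax₁ + by₁ + cz₁ − d| / √(a² + b² + c²) for Exercise 21(c).

    import numpy as np

    n = np.array([6, -3, -2])     # normal vector of the plane 6x - 3y - 2z = 5
    p = np.array([5, 0, -3])      # the given point
    d = 5
    print(abs(n @ p - d) / np.linalg.norm(n))   # 4.4285..., i.e. 31/7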

(23) (a) A vector in the direction of the line l₁ is [−2, 2, 1], and a vector in the direction of the line l₂ is [0, 1, 4]. (Note that the lines are not parallel.) The cross product of these vectors is v = [7, 8, −2]. By setting t = 0 and s = 0, respectively, we find that (x₁, y₁, z₁) = (5, 3, −1) is a point on l₁, and (x₂, y₂, z₂) = (2, 5, 0) is a point on l₂. Hence, the vector from (x₁, y₁, z₁) to (x₂, y₂, z₂) is w = [x₂ − x₁, y₂ − y₁, z₂ − z₁] = [−3, 2, 1]. Finally, from the formula given in this exercise, we find that the shortest distance between the given lines is
|v · w| / ‖v‖ = |−21 + 16 − 2| / √(7² + 8² + (−2)²) = 7/√117 = 7√13/39 ≈ 0.647.
(c) A vector in the direction of the line l₁ is [−7, −4, 0], and a vector in the direction of the line l₂ is [−1, −3, 3]. (Note that the lines are not parallel.) The cross product of these vectors is v = [−12, 21, 17]. By setting t = 0 and s = 0, respectively, we find that (x₁, y₁, z₁) = (3, −1, 8) is a point on l₁, and (x₂, y₂, z₂) = (−1, 4, −1) is a point on l₂. Hence, the vector from (x₁, y₁, z₁) to (x₂, y₂, z₂) is w = [x₂ − x₁, y₂ − y₁, z₂ − z₁] = [−4, 5, −9]. Finally, from the formula given in this exercise, we find that the shortest distance between the given lines is
|v · w| / ‖v‖ = |48 + 105 − 153| / √((−12)² + 21² + 17²) = 0/√874 = 0.
That is, the given lines actually intersect. (In fact, it can easily be shown that the lines intersect at (−4, −5, 8).)

(24) (a) Labeling the first plane as ax + by + cz = d₁ and the second plane as ax + by + cz = d₂, we have [a, b, c] = [3, −1, 4], and d₁ = 10, d₂ = 7. From the formula in Exercise 24, the shortest distance between the given planes is
|d₁ − d₂| / √(a² + b² + c²) = |10 − 7| / √(3² + (−1)² + 4²) = 3/√26 = 3√26/26 ≈ 0.588.

(c) Divide the equation of the first plane by 2, and divide the equation of the second plane by 3 to obtain equations for the planes whose coefficients agree: 2x + 3y − 4z = 9/2 and 2x + 3y − 4z = −5/3. Labeling the first of these new equations as ax + by + cz = d₁ and the second as ax + by + cz = d₂, we have [a, b, c] = [2, 3, −4], and d₁ = 9/2, d₂ = −5/3. From the formula in Exercise 24, the shortest distance between the given planes is
|d₁ − d₂| / √(a² + b² + c²) = |9/2 − (−5/3)| / √(2² + 3² + (−4)²) = (37/6)/√29 = 37√29/174 ≈ 1.145.

(25) (a) Let (x₁, y₁, z₁) = (2, 1, 0), (x₂, y₂, z₂) = (3, 0, 1), and (x₃, y₃, z₃) = (2, −2, 7). Then we have [x₂ − x₁, y₂ − y₁, z₂ − z₁] = [1, −1, 1] and [x₃ − x₁, y₃ − y₁, z₃ − z₁] = [0, −3, 7]. Hence, from the formula in this exercise, the area of the triangle with vertices (2, 1, 0), (3, 0, 1), and (2, −2, 7) is
(1/2)‖[x₂ − x₁, y₂ − y₁, z₂ − z₁] × [x₃ − x₁, y₃ − y₁, z₃ − z₁]‖ = ‖[−4, −7, −3]‖/2 = √74/2 ≈ 4.301.
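The area formula of Exercise 25 translates directly into a one-line NumPy computation; the following sketch simply repeats the calculation above.

    import numpy as np

    P1, P2, P3 = np.array([2, 1, 0]), np.array([3, 0, 1]), np.array([2, -2, 7])
    area = 0.5 * np.linalg.norm(np.cross(P2 - P1, P3 - P1))
    print(area)   # 4.3011..., i.e. sqrt(74)/2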

(29) (a) The vector r = [6, 3, 2] ft, and the vector ω = [0, 0, 2π/12] = [0, 0, π/6] rad/sec. Hence, the velocity vector v = ω × r = [−π/2, π, 0] ft/sec; speed = ‖v‖ = π√5/2 ≈ 3.512 ft/sec.
(32) (a) Let r = [x, y, z] represent the direction vector from the origin to a point (x, y, z) on the Earth's surface at latitude θ. Since the rotation is counterclockwise about the z-axis and one rotation of 2π radians occurs every 24 hours = 86400 seconds, we find that ω = [0, 0, 2π/86400] = [0, 0, π/43200] rad/sec. Then,
v = ω × r = [−πy/43200, πx/43200, 0].
By considering the right triangle with hypotenuse = Earth's radius = 6369 km, and altitude z, we see that z = 6369 sin θ. But since ‖r‖ = √(x² + y² + z²) = 6369, we must have x² + y² + (6369 sin θ)² = (6369)², which means x² + y² = (6369 cos θ)². Thus,
‖v‖ = √((−πy/43200)² + (πx/43200)² + 0²) = (π/43200)√(x² + y²) = (6369π/43200) cos θ ≈ 0.4632 cos θ km/sec.
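As an illustrative sketch (not part of the exercise), the speed ‖ω × r‖ from Exercise 32(a) can be checked numerically at one latitude; the latitude 40° used below is an arbitrary sample value.

    import numpy as np

    theta = np.radians(40.0)                      # sample latitude
    omega = np.array([0.0, 0.0, 2*np.pi/86400])   # rad/sec
    R = 6369.0                                    # km
    r = np.array([R*np.cos(theta), 0.0, R*np.sin(theta)])
    v = np.cross(omega, r)
    print(np.linalg.norm(v), 0.4632*np.cos(theta))   # both about 0.355 km/sec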


(33) (a) False. If [a, b, c] is a direction vector for a given line, then any nonzero scalar multiple of [a, b, c] is another direction vector for that same line. For example, the line x = t, y = 0, z = 0 (t ∈ ℝ) (that is, the x-axis) has many distinct direction vectors, such as [1, 0, 0] and [2, 0, 0].
(b) False. If the lines are skew lines, they do not intersect each other, and the angle between the lines is not defined. For example, the lines in Exercise 2(c) do not intersect, and hence, there is no angle defined between them.
(c) True. Two distinct nonparallel planes always intersect each other (in a line), and the angle between the planes is the (minimal) angle between a pair of normal vectors to the planes.
(d) True. The vector [a, b, c] is normal to the given plane, and the vector [−a, −b, −c] is a nonzero scalar multiple of [a, b, c], so [−a, −b, −c] is also perpendicular to the given plane.
(e) True. For all vectors x, y, and z, we have: (x + y) × z = −(z × (x + y)) (by part (1) of Theorem 3) = −((z × x) + (z × y)) (by part (5) of Theorem 3) = −(z × x) − (z × y) = (x × z) + (y × z) (by part (1) of Theorem 3).
(f) True. For all vectors x, y, and z, we have: y · (z × x) = (y × z) · x, by part (7) of Theorem 3.
(g) False. From Corollary 5, if x and y are parallel, then x × y = 0. For example, [1, 0, 0] × [2, 0, 0] = [0, 0, 0].
(h) True. From Theorem 4, if x and y are nonzero, then ‖x × y‖ = ‖x‖‖y‖ sin θ, and dividing both sides by ‖x‖‖y‖ (which is nonzero) gives the desired result.
(i) False. k × i = j, while i × k = −j. (Since the results of the cross product operations are nonzero here, the anti-commutative property from part (1) of Theorem 3 also shows that the given statement is false.)
(j) True. The vectors x, y, and x × y, in that order, form a right-handed system, as in Figure 8, where we can imagine the vectors x, y, and x × y lying along the positive x-, y-, and z-axes, respectively. Then, moving the (right) hand in that figure, so that its fingers are curling instead from the vector x × y (positive z-axis) toward the vector x (positive x-axis), will cause the thumb of the (right) hand to point in the direction of y (positive y-axis). Thus, the vectors x × y, x, and y, taken in that order, also form a right-handed system.
(k) True. This is precisely the method outlined before Theorem 7 for finding the distance from a point to a plane.
(l) False. If ω is the angular velocity, v is the velocity, and r is the position vector, then v = ω × r, but it is not generally true that v × r = ω. For example, in Example 16, v = [−π/2, −π/4, 0], r = [−1, 2, 2], and ω = [0, 0, π/4], so v × r = [−π/2, −π/4, 0] × [−1, 2, 2] = [−π/2, π, −5π/4] ≠ ω.


Change of Variables and the Jacobian


" #
@x @x
@u @v 1 1
(1) (a) The Jacobian matrix J = @y @y
= . Therefore, dx dy = jJj du dv = 2 du dv.
1 1
@u @v
" #
@x @x
@u @v 2u 2v
(c) The Jacobian matrix J = @y @y
= . Therefore,
2v 2u
@u @v

dx dy = jJj du dv = 4(u2 + v 2 ) du dv.


" #
@x @x
(e) The Jacobian matrix J = @u @y
@v
@y
=
2 @u @v 3
((u+1)2 +v2 )(2) 2u(2(u+1)) 4uv
6 2 2
((u+1) +v ) 2 2 2
((u+1) +v ) 2
7
4 5 . Simplifying yields
((u+1)2 +v2 )( 2u) (1 (u2 +v2 ))(2(u+1)) ((u+1)2 +v2 )( 2v) (1 (u2 +v2 ))(2v)
((u+1)2 +v 2 )2 ((u+1)2 +v 2 )2
" #
(u + 1)2 + v 2 2u(u + 1) 2uv
2
J= ((u+1)2 +v 2 )2 2
u (u + 1) + v 2 (u + 1) 1 u2 + v 2 2uv 2v
" #
2
u2 + v 2 + 1 2uv
= ((u+1)2 +v 2 )2
.
u2 2u 1 + v2 2uv 2v
Now, for a 2 2 matrix A, jkAj = k 2 jAj, and so,
4
jJj = ((u+1)2 +v 2 )4
u2 + v 2 + 1 ( 2uv 2v) ( 2uv) u2 2u 1 + v2
4
= ((u+1)2 +v 2 )4
2u3 v 2uv 3 2uv + 2u2 v 2v 3 2v 2u3 v 4u2 v 2uv + 2uv 3
4 8
= ((u+1)2 +v 2 )4
4uv 2u2 v 2v 3 2v = ((u+1)2 +v 2 )4
u2 v + 2uv + v 3 + v
8 8v
= ((u+1)2 +v 2 )4
(u + 1)2 + v 2 v = ((u+1)2 +v 2 )3
.
8jvj
Hence, dx dy = jJj du dv = ((u+1)2 +v 2 )3
du dv.
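The computation in part (e) is error-prone by hand, so a symbolic check is useful. The sketch below (not part of the manual) assumes the change of variables x = 2u/((u+1)² + v²), y = (1 − u² − v²)/((u+1)² + v²), which is the map whose partial derivatives appear in the Jacobian matrix above.

    from sympy import symbols, Matrix, simplify, factor

    u, v = symbols('u v', real=True)
    D = (u + 1)**2 + v**2
    x = 2*u/D
    y = (1 - u**2 - v**2)/D
    J = Matrix([x, y]).jacobian(Matrix([u, v]))
    print(factor(simplify(J.det())))   # -8*v/((u+1)**2 + v**2)**3 (denominator may print expanded)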
(2) (a) The Jacobian matrix J = [[∂x/∂u, ∂x/∂v, ∂x/∂w], [∂y/∂u, ∂y/∂v, ∂y/∂w], [∂z/∂u, ∂z/∂v, ∂z/∂w]] = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]. Thus, |J| = (1)(1)(1) + (1)(1)(1) + (0)(0)(0) − (0)(1)(1) − (1)(1)(0) − (1)(0)(1) = 2. Therefore, dx dy dz = |J| du dv dw = 2 du dv dw.
(c) The Jacobian matrix J = [[∂x/∂u, ∂x/∂v, ∂x/∂w], [∂y/∂u, ∂y/∂v, ∂y/∂w], [∂z/∂u, ∂z/∂v, ∂z/∂w]]
= (1 / (u² + v² + w²)²) [[ u² + v² + w² − 2u², −2uv, −2uw ], [ −2uv, u² + v² + w² − 2v², −2vw ], [ −2uw, −2vw, u² + v² + w² − 2w² ]].
To make this expression simpler, let A = u² + v² + w². Now, for a 3 × 3 matrix M, |kM| = k³|M|, and so by basketweaving,
|J| = (1/A⁶)[ (A − 2u²)(A − 2v²)(A − 2w²) + (−2uv)(−2vw)(−2uw) + (−2uw)(−2uv)(−2vw) − (−2uw)(A − 2v²)(−2uw) − (A − 2u²)(−2vw)(−2vw) − (−2uv)(−2uv)(A − 2w²) ]
= (1/A⁶)[ A³ − 2u²A² − 2v²A² − 2w²A² + 4u²v²A + 4u²w²A + 4v²w²A − 8u²v²w² − 8u²v²w² − 8u²v²w² − 4u²w²A + 8u²v²w² − 4v²w²A + 8u²v²w² − 4u²v²A + 8u²v²w² ]
= (1/A⁶)[ A³ − 2(u² + v² + w²)A² ] = (1/A⁶)( A³ − 2A³ ) = −1/A³.
Therefore, dx dy dz = |J| du dv dw = (1 / (u² + v² + w²)³) du dv dw.

(4) (a) For the given region R, r² = x² + y² ranges from 1 to 9, and so r ranges from 1 to 3. Since R is restricted to the first quadrant, θ ranges from 0 to π/2. Therefore, using x = r cos θ, y = r sin θ, and dx dy = r dr dθ yields
∫∫_R (x + y) dx dy = ∫∫_R (r cos θ + r sin θ) r dr dθ = ∫₀^{π/2} ∫₁³ (r cos θ + r sin θ) r dr dθ
= ∫₀^{π/2} [r³/3]₁³ (cos θ + sin θ) dθ = ∫₀^{π/2} (26/3)(cos θ + sin θ) dθ = (26/3)[sin θ − cos θ]₀^{π/2} = 52/3.
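The same integral can be verified symbolically; this short sketch (not part of the manual) evaluates the polar-coordinate form of Exercise 4(a) with SymPy.

    from sympy import symbols, integrate, cos, sin, pi

    r, theta = symbols('r theta', positive=True)
    val = integrate((r*cos(theta) + r*sin(theta))*r, (r, 1, 3), (theta, 0, pi/2))
    print(val)   # 52/3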
(c) For the given sphere R, ρ ranges from 0 to 1. Since R is restricted to the region above the xy-plane, φ ranges from 0 to π/2. However, θ ranges from 0 to 2π, since R is the entire upper half of the sphere; that is, all the way around the z-axis. Therefore, using z = ρ cos φ and dx dy dz = ρ² sin φ dρ dφ dθ, we get
∫∫∫_R z dx dy dz = ∫∫∫_R ρ cos φ (ρ² sin φ) dρ dφ dθ = ∫₀^{2π} ∫₀^{π/2} ∫₀¹ ρ³ cos φ sin φ dρ dφ dθ
= ∫₀^{2π} ∫₀^{π/2} [ρ⁴/4]₀¹ cos φ sin φ dφ dθ = (1/4) ∫₀^{2π} ∫₀^{π/2} cos φ sin φ dφ dθ
= (1/4) ∫₀^{2π} [sin²φ/2]₀^{π/2} dθ = (1/4) ∫₀^{2π} (1/2) dθ = (1/8)(2π) = π/4.
(e) Since r² = x² + y², the condition x² + y² ≤ 4 on the region R means that r ranges from 0 to 2 and θ ranges from 0 to 2π. Therefore, using x² + y² = r² and dx dy dz = r dr dθ dz produces
∫∫∫_R (x² + y² + z²) dx dy dz = ∫_{−3}^{5} ∫₀^{2π} ∫₀² (r² + z²) r dr dθ dz
= ∫_{−3}^{5} ∫₀^{2π} [r⁴/4 + r²z²/2]₀² dθ dz = ∫_{−3}^{5} ∫₀^{2π} (4 + 2z²) dθ dz
= ∫_{−3}^{5} 2π(4 + 2z²) dz = ∫_{−3}^{5} (8π + 4πz²) dz
= [8πz + (4π/3)z³]_{−3}^{5} = (40π + 500π/3) − (−24π − 36π) = 800π/3.

(5) (a) True. This is because all of the partial derivatives in the Jacobian matrix will be constants, since
they are derivatives of linear functions. Hence, the determinant of the Jacobian will be a constant.


" #
@x @x
@u @v
(b) True. The determinant of the Jacobian matrix J = @y @y
is ( 1); however,
@u @v

dx dy = jJj du dv = j 1j du dv = du dv.
(c) True. This is explained and illustrated in the Jacobian section, just after Example 3, in Figure 3,
and again in Example 4.
(d) False. The scaling factor is the absolute value of the determinant of the Jacobian matrix, which
will not equal the determinant of the Jacobian matrix when that determinant is negative. For a
counterexample, see the solution to part (b) of this Exercise.


Function Spaces
(1) (a) To test for linear independence, we must determine whether ae^x + be^{2x} + ce^{3x} = 0 has any nontrivial solutions. Using this equation we substitute the following values for x:
Letting x = 0 ⇒ a + b + c = 0
Letting x = 1 ⇒ a(e) + b(e²) + c(e³) = 0
Letting x = 2 ⇒ a(e²) + b(e⁴) + c(e⁶) = 0.
But the augmented matrix [[1, 1, 1, 0], [e, e², e³, 0], [e², e⁴, e⁶, 0]] row reduces to [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], showing that the system has only the trivial solution a = b = c = 0. Hence, the given set is linearly independent.
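The "substitute sample values of x" technique in Exercise 1(a) amounts to checking that a coefficient matrix is nonsingular, which is easy to automate. The following sketch (not from the manual) builds that matrix with SymPy; full rank proves independence, while a rank deficiency is inconclusive (compare Exercise 6(d) below).

    from sympy import Matrix, exp

    samples = [0, 1, 2]
    M = Matrix([[exp(x), exp(2*x), exp(3*x)] for x in samples])
    print(M.rank())   # 3, so a = b = c = 0 is the only solution and the set is independent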
(c) To test for linear independence, we must check for nontrivial solutions to
a (5x − 1)/(1 + x²) + b (3x + 1)/(2 + x²) + c (7x³ − 3x² + 17x − 5)/(x⁴ + 3x² + 2) = 0.
Using this equation we substitute the following values for x:
Letting x = 0 ⇒ −a + (1/2)b − (5/2)c = 0
Letting x = 1 ⇒ 2a + (4/3)b + (8/3)c = 0
Letting x = −1 ⇒ −3a − (2/3)b − (16/3)c = 0.
But the augmented matrix [[−1, 1/2, −5/2, 0], [2, 4/3, 8/3, 0], [−3, −2/3, −16/3, 0]] row reduces to [[1, 0, 2, 0], [0, 1, −1, 0], [0, 0, 0, 0]], which has nontrivial solution a = −2, b = 1, c = 1. Algebraic simplification verifies that
(−2) (5x − 1)/(1 + x²) + (1) (3x + 1)/(2 + x²) + (1) (7x³ − 3x² + 17x − 5)/(x⁴ + 3x² + 2) = 0
is a functional identity. Hence, since a nontrivial linear combination of the elements of S produces 0, the set S is linearly dependent.
(4) (a) First, to simplify matters, we eliminate any functions that we can see by inspection are linear combinations of other functions in S. The familiar trigonometric identities sin²x + cos²x = 1 and sin x cos x = (1/2) sin 2x show that the last two functions in S are linear combinations of earlier functions. Thus, the subset S₁ = {sin 2x, cos 2x, sin²x, cos²x} has the same span as S. However, the identity cos²x = cos 2x + sin²x (more commonly stated as cos 2x = cos²x − sin²x) shows that B = {sin 2x, cos 2x, sin²x} also has the same span. We suspect that we have now eliminated all of the functions that we can, and that B is linearly independent, making it a basis for span(S). We verify the linear independence of B by considering the equation a sin 2x + b cos 2x + c sin²x = 0.
Using this equation we substitute the following values for x:
Letting x = 0 ⇒ b = 0
Letting x = π/6 ⇒ (√3/2)a + (1/2)b + (1/4)c = 0
Letting x = π/4 ⇒ a + (1/2)c = 0.
Next, row reduce the augmented matrix [[0, 1, 0, 0], [√3/2, 1/2, 1/4, 0], [1, 0, 1/2, 0]] to [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]].
Since the system has only the trivial solution, B is linearly independent, and is thus the desired basis for span(S).
(c) First, to simplify matters, we eliminate any functions that we can see by inspection are linear combinations of other functions in S. We start with the familiar trigonometric identities sin(a + b) = sin a cos b + sin b cos a and cos(a + b) = cos a cos b − sin a sin b. Hence,
sin(x + 1) = (sin x)(cos 1) + (sin 1)(cos x),
cos(x + 1) = (cos x)(cos 1) − (sin x)(sin 1),
sin(x + 2) = (sin x)(cos 2) + (sin 2)(cos x), and
cos(x + 2) = (cos x)(cos 2) − (sin x)(sin 2).
Therefore, each of the elements of S is a linear combination of sin x and cos x. That is, span(S) ⊆ span({sin x, cos x}). Thus, dim(span(S)) ≤ 2. Now, if we can find two linearly independent vectors in S, they form a basis for span(S). We claim that the subset B = {sin(x + 1), cos(x + 1)} of S is linearly independent, and hence is a basis contained in S.
To show that B is linearly independent, we plug two values for x into a sin(x + 1) + b cos(x + 1) = 0:
Letting x = 0 ⇒ (sin 1)a + (cos 1)b = 0
Letting x = −1 ⇒ b = 0.
Using back substitution and the fact that sin 1 ≠ 0 shows that a = b = 0, proving that B is linearly independent.
(5) (a) Since v = 5e^x + 0e^{2x} + (−7)e^{3x}, [v]_B = [5, 0, −7] by the definition of [v]_B.
(c) We start with the familiar trigonometric identity sin(a + b) = sin a cos b + sin b cos a, which yields
sin(x + 1) = (sin x)(cos 1) + (sin 1)(cos x), and
sin(x + 2) = (sin x)(cos 2) + (sin 2)(cos x).
Dividing the first equation by cos 1 and the second by cos 2, and then subtracting and rearranging terms yields (1/cos 1) sin(x + 1) − (1/cos 2) sin(x + 2) = (sin 1/cos 1 − sin 2/cos 2) cos x. Now, sin 1/cos 1 − sin 2/cos 2 = (sin 1 cos 2 − sin 2 cos 1)/(cos 1 cos 2) = sin(1 − 2)/(cos 1 cos 2) = −sin 1/(cos 1 cos 2). Hence, cos x = −(cos 2/sin 1) sin(x + 1) + (cos 1/sin 1) sin(x + 2). Therefore, the definition of [v]_B yields [v]_B = [−cos 2/sin 1, cos 1/sin 1], or approximately [0.4945, 0.6421].
An alternate approach is to solve for a and b in the equation cos x = a sin(x + 1) + b sin(x + 2). Plug in the following values for x:
Letting x = −1 ⇒ cos 1 = (sin 1)b
Letting x = −2 ⇒ cos 2 = −(sin 1)a,
producing the same results for a and b as above.


(6) (a) True. In any vector space, S = {v₁, v₂} is linearly dependent if and only if either of the vectors in S can be expressed as a linear combination of the other. Since there are only two vectors in the set, neither of which is zero, this happens precisely when v₁ is a nonzero constant multiple of v₂.
(b) True. Since the polynomials all have different degrees, none of them is a linear combination of the others.
(c) False. By the definition of linear independence, the set of vectors would be linearly independent, not linearly dependent.
(d) False. It is possible that the three values chosen for x do not produce equations proving linear independence, but choosing a different set of three values might. This principle is illustrated directly after Example 1 in the Function Spaces section. We are only assured of linear dependence when we have found values of a, b, and c, not all zero, such that af₁(x) + bf₂(x) + cf₃(x) = 0 is a functional identity.


Max-Min Problems in ℝⁿ and the Hessian Matrix


(1) (a) First, ∇f = [3x² + 2x + 2y − 3, 2x + 2y]. We find critical points by setting ∇f = 0. Setting the second coordinate of ∇f equal to zero yields y = −x. Plugging this into 3x² + 2x + 2y − 3 = 0 produces 3x² − 3 = 0, which has solutions x = 1 and x = −1. This gives the two critical points (1, −1) and (−1, 1). Next, H = [[6x + 2, 2], [2, 2]]. At the first critical point, H = [[8, 2], [2, 2]], which is positive definite because its determinant is 12, which is positive, and its (1, 1) entry is 8, which is also positive. (See the comment just before Example 3 in the Hessian section for easily verified necessary and sufficient conditions for a 2 × 2 matrix to be either positive definite or negative definite.) So, f has a local minimum at (1, −1). At the second critical point, H = [[−4, 2], [2, 2]]. We refer to this matrix as A. Now A is neither positive definite nor negative definite, since its determinant is negative. Also note that p_A(x) = x² + 2x − 12, which, by the Quadratic Formula, has roots −1 ± √13. Since one of these is positive and the other is negative, we see that f does not have a local extremum at (−1, 1).
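A numerical version of this test is sketched below. It assumes f(x, y) = x³ + x² + 2xy + y² − 3x, one function whose gradient matches ∇f above (the exercise statement itself is not reproduced in this manual), and classifies each critical point by the signs of the eigenvalues of the Hessian.

    import numpy as np

    def hessian(x, y):
        return np.array([[6*x + 2, 2.0],
                         [2.0,     2.0]])

    for pt in [(1, -1), (-1, 1)]:
        print(pt, np.linalg.eigvalsh(hessian(*pt)))
    # (1, -1): both eigenvalues positive -> local minimum
    # (-1, 1): one positive, one negative -> no local extremum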
(c) First, ∇f = [4x + 2y + 2, 2x + 2y − 2]. We find critical points by setting ∇f = 0. This corresponds to a linear system with two equations and two variables which has the unique solution (−2, 3). Next, H = [[4, 2], [2, 2]]. H is positive definite since its determinant is 4, which is positive, and its (1, 1) entry is 4, which is positive. Hence, the critical point (−2, 3) is a local minimum.
(e) First, ∇f = [4x + 2y + 2z, 2x + 4y³ + 12y²z + 12yz² − 2y + 4z³ − 4z, 2x + 4y³ + 12y²z + 12yz² − 4y + 4z³ − 2z]. We find critical points by setting ∇f = 0. To make things easier, notice that at a critical point, ∂f/∂y = ∂f/∂z. This yields y = z. With this, and ∂f/∂x = 0, we obtain x = −y. Substituting for y and z into ∂f/∂y = 0 and solving produces 8x − 32x³ = 0. Hence, 8x(1 − 4x²) = 8x(1 − 2x)(1 + 2x) = 0. This yields the three critical points (0, 0, 0), (1/2, −1/2, −1/2), and (−1/2, 1/2, 1/2). Next,
H = [[4, 2, 2], [2, 12y² + 24yz + 12z² − 2, 12y² + 24yz + 12z² − 4], [2, 12y² + 24yz + 12z² − 4, 12y² + 24yz + 12z² − 2]].
At the first critical point, H = [[4, 2, 2], [2, −2, −4], [2, −4, −2]]. We refer to this matrix as A. Then p_A(x) = x³ − 36x + 64 = (x − 2)(x² + 2x − 32). Hence, the eigenvalues of A are 2 and −1 ± √33. Since A has both positive and negative eigenvalues, (0, 0, 0) is not an extreme point.
At the second critical point, H = [[4, 2, 2], [2, 10, 8], [2, 8, 10]]. We refer to this matrix as B. Then p_B(x) = x³ − 24x² + 108x − 128 = (x − 2)(x² − 22x + 64). Hence, the eigenvalues of B are all positive (2 and 11 ± √57). So f has a local minimum at (1/2, −1/2, −1/2) by Theorem 10.4.
At the third critical point, H is the same matrix as at the second critical point, and so f also has a local minimum at (−1/2, 1/2, 1/2).


(4) (a) True. We know from calculus that if a real-valued function f on ℝⁿ has continuous second partial derivatives then ∂²f/∂xᵢ∂xⱼ = ∂²f/∂xⱼ∂xᵢ for all i, j.
(b) False. For example, the matrix [[1, 0], [0, −1]] represents neither a positive definite nor a negative definite quadratic form since it has both a positive and a negative eigenvalue.
(c) True. In part (a) we noted that the Hessian matrix is symmetric. Theorems 6.18 and 6.20 then establish that the matrix is orthogonally diagonalizable, hence diagonalizable.
(d) True. We established that a symmetric 2 × 2 matrix A with |A| > 0 and a₁₁ > 0 represents a positive definite quadratic form (see the comment just before Example 3 in the Hessian section). In this problem, |A| = 1 > 0 and a₁₁ = 5 > 0.
(e) False. The eigenvalues of the given matrix are clearly 3, 9, and −4. Hence, the given matrix has both a positive and a negative eigenvalue, and so cannot represent a positive definite quadratic form.


Jordan Canonical Form


(1) (c) By parts (a) and (b), (A − λI_k)e₁ = 0_k, and (A − λI_k)eᵢ = e_{i−1} for 2 ≤ i ≤ k. We use induction to prove that (A − λI_k)^i eᵢ = 0_k for all i, 1 ≤ i ≤ k. The Base Step holds since we already noted that (A − λI_k)e₁ = 0_k. For the Inductive Step, we assume that (A − λI_k)^{i−1} e_{i−1} = 0_k and show that (A − λI_k)^i eᵢ = 0_k. But (A − λI_k)^i eᵢ = (A − λI_k)^{i−1} (A − λI_k) eᵢ = (A − λI_k)^{i−1} e_{i−1} = 0_k, completing the induction proof.
Using (A − λI_k)^i eᵢ = 0_k, we see that (A − λI_k)^k eᵢ = (A − λI_k)^{k−i} (A − λI_k)^i eᵢ = (A − λI_k)^{k−i} 0_k = 0_k for all i, 1 ≤ i ≤ k. Now, for 1 ≤ i ≤ k, the ith column of (A − λI_k)^k equals (A − λI_k)^k eᵢ. Hence, we have shown that every column of (A − λI_k)^k is zero, and so (A − λI_k)^k = O_k.
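A small numerical illustration of this result (not part of the exercise): for a k × k Jordan block with eigenvalue λ, the powers of A − λI_k vanish exactly at the kth power.

    import numpy as np

    k, lam = 4, 3.0
    A = lam*np.eye(k) + np.diag(np.ones(k - 1), 1)   # the Jordan block J_k(lam)
    N = A - lam*np.eye(k)
    print(np.linalg.matrix_power(N, k - 1))   # nonzero: a single 1 in the upper-right corner
    print(np.linalg.matrix_power(N, k))       # the zero matrix O_k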
(4) Note for all the parts below that two matrices in Jordan canonical form are similar to each other if and only if they contain the same Jordan blocks, rearranged in any order.
(a) [[2, 0, 0], [0, 2, 0], [0, 0, −1]], [[2, 0, 0], [0, −1, 0], [0, 0, 2]], [[−1, 0, 0], [0, 2, 0], [0, 0, 2]];
[[2, 1, 0], [0, 2, 0], [0, 0, −1]], [[−1, 0, 0], [0, 2, 1], [0, 0, 2]].
The three matrices on the first line are all similar to each other. The two matrices on the second line are similar to each other, but not to those on the first line.
(c) Using the quadratic formula, x² + 6x + 10 has roots −3 + i and −3 − i, each having multiplicity 1; [[−3 + i, 0], [0, −3 − i]] and [[−3 − i, 0], [0, −3 + i]], which are similar to each other.
(d) x⁴ + 4x³ + 4x² = x²(x + 2)², so the eigenvalues are 0 and −2, each with algebraic multiplicity 2. Grouping the possible Jordan canonical forms by similarity class:
a 2 × 2 Jordan block for 0 and a 2 × 2 Jordan block for −2, in either order, giving [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, −2, 1], [0, 0, 0, −2]] and [[−2, 1, 0, 0], [0, −2, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]], which are similar to each other but to none of the others;
two 1 × 1 Jordan blocks [0] and one 2 × 2 Jordan block for −2, arranged in any of three orders along the diagonal (for example [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, −2, 1], [0, 0, 0, −2]]), giving three matrices that are similar to each other but to none of the others;
one 2 × 2 Jordan block for 0 and two 1 × 1 Jordan blocks [−2], arranged in any of three orders along the diagonal (for example [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, −2, 0], [0, 0, 0, −2]]), giving three matrices that are similar to each other but to none of the others;
four 1 × 1 Jordan blocks [0], [0], [−2], [−2], that is, the six diagonal matrices whose diagonal entries are 0, 0, −2, −2 in some order, which are similar to each other but to none of the others.
(f) x⁴ − 3x² − 4 = (x² − 4)(x² + 1) = (x − 2)(x + 2)(x − i)(x + i);
There are 24 possible Jordan forms. Because each eigenvalue has algebraic multiplicity 1, all of the Jordan blocks have size 1 × 1. Hence, any Jordan canonical form matrix with these blocks is diagonal with the 4 eigenvalues 2, −2, i, and −i on the main diagonal. The 24 possibilities result from all of the possible orders in which these 4 eigenvalues can appear on the diagonal. All 24 possible Jordan canonical form matrices are similar to each other because they all have the same four 1 × 1 Jordan blocks, only in a different order.

(6) (a) Consider the matrix
B = [[−9, 5, 8], [−4, 3, 4], [−8, 4, 7]].
We find a Jordan canonical form for B. You can quickly calculate that p_B(x) = x³ − x² − x + 1 = (x − 1)²(x + 1). Thus, the eigenvalues for B are λ₁ = 1 and λ₂ = −1. We must find the sizes of the Jordan blocks corresponding to these eigenvalues and a fundamental sequence of generalized eigenvectors corresponding to each block. The following is one possible solution. (Other answers are possible if, for example, the eigenvalues are taken in a different order or if different generalized eigenvectors are chosen.)
The Cayley-Hamilton Theorem tells us that p_B(B) = (B − I₃)²(B + I₃) = O₃.
Step A: We begin with the eigenvalue λ₁ = 1.
Step A1: Let
D = (B + I₃) = [[−8, 5, 8], [−4, 4, 4], [−8, 4, 8]].
Then (B − I₃)²D = O₃.
Step A2: Next, we search for the smallest positive integer k such that (B − I₃)^k D = O₃. Now,
B − I₃ = [[−10, 5, 8], [−4, 2, 4], [−8, 4, 6]],
and so,
(B − I₃)D = [[−4, 2, 4], [−8, 4, 8], [0, 0, 0]] ≠ O₃,
while, as we have seen, (B − I₃)²D = O₃. Hence, k = 2.
Step A3: We choose a maximal linearly independent subset of the columns of (B − I₃)^{k−1} D = (B − I₃)D to get as many linearly independent generalized eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. Thus, v₁₁ = [−4, −8, 0]. We decide whether to simplify v₁₁ (by multiplying every entry by −1/4) after v₁₂ is determined.
Step A4: Next, we work backwards through the products of the form (B − I₃)^{k−j} D for j running from 2 up to k, choosing the same column in which we found the generalized eigenvector v₁₁. Because k = 2, the only value we need to consider here is j = 2. Hence, we let v₁₂ = [−8, −4, −8], the first column of (B − I₃)^{2−2} D = D. Since the entries of v₁₂ are exactly divisible by 4, we simplify the entries of both v₁₁ and v₁₂ by multiplying by −1/4. Hence, we will use v₁₁ = [1, 2, 0] and v₁₂ = [2, 1, 2].
Steps A5 and A6: Now by construction, (B − I₃)v₁₂ = v₁₁ and (B − I₃)v₁₁ = 0. Since the total number of generalized eigenvectors we have found for λ₁ equals the algebraic multiplicity of λ₁, which is 2, we can stop our work for λ₁. We therefore have a fundamental sequence {v₁₁, v₁₂} of generalized eigenvectors corresponding to a 2 × 2 Jordan block associated with λ₁ = 1 in a Jordan canonical form for B.
To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ₂ = −1. We repeat Step A for this eigenvalue.
Step A1: Let
D = (B − I₃)² = [[16, −8, −12], [0, 0, 0], [16, −8, −12]].
Then (B + I₃)D = O₃.
Step A2: Next, we search for the smallest positive integer k such that (B + I₃)^k D = O₃. However, it is obvious here that k = 1.
Step A3: Since k − 1 = 0, (B + I₃)^{k−1} D = D. Hence, each nonzero column of D = (B − I₃)² is a generalized eigenvector for B corresponding to λ₂ = −1. In particular, the first column of (B − I₃)² serves nicely as a generalized eigenvector v₂₁ for B corresponding to λ₂ = −1. The other columns of (B − I₃)² are scalar multiples of the first column.
Steps A4, A5, and A6: No further work for λ₂ is needed here because λ₂ = −1 has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ₂ is required. However, we can simplify v₂₁ by multiplying the first column of (B − I₃)² by 1/16, yielding v₂₁ = [1, 0, 1]. Thus, {v₂₁} is a fundamental sequence of generalized eigenvectors corresponding to the 1 × 1 Jordan block associated with λ₂ = −1 in a Jordan canonical form for B.
Thus, we have completed Step A for both eigenvalues.
Step B: Finally, we now have an ordered basis (v₁₁, v₁₂, v₂₁) comprised of fundamental sequences of generalized eigenvectors for B. Letting P be the matrix whose columns are these basis vectors, we find that
A = P⁻¹BP = [[1, 0, −1], [−2, 1, 2], [4, −2, −3]] [[−9, 5, 8], [−4, 3, 4], [−8, 4, 7]] [[1, 2, 1], [2, 1, 0], [0, 2, 1]]
= [[1, 1, 0], [0, 1, 0], [0, 0, −1]],
which gives a Jordan canonical form for B. Note that we could have used the generalized eigenvectors in the order v₂₁, v₁₁, v₁₂ instead, which would have resulted in the other possible Jordan canonical form for B,
[[−1, 0, 0], [0, 1, 1], [0, 0, 1]].
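As a check on this example (not part of the manual's method), SymPy's jordan_form can be applied to the matrix B as written above.

    from sympy import Matrix

    B = Matrix([[-9, 5, 8],
                [-4, 3, 4],
                [-8, 4, 7]])
    P, J = B.jordan_form()
    print(J)                      # a 2 x 2 Jordan block for eigenvalue 1 and a 1 x 1 block for -1
    print(P.inv() * B * P - J)    # the zero matrix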
(d) Consider the matrix
B = [[8, 5, 0], [−5, −3, −1], [10, 7, 0]].
We find a Jordan canonical form for B. You can quickly calculate that p_B(x) = x³ − 5x² + 8x − 6 = (x − 3)(x − (1 − i))(x − (1 + i)). Thus, the eigenvalues for B are λ₁ = 3, λ₂ = 1 − i, and λ₃ = 1 + i. Because each eigenvalue has algebraic multiplicity 1, each of the three Jordan blocks will be a 1 × 1 block. Hence, the Jordan canonical form matrix will be diagonal. We could just solve this problem using our method for diagonalization from Section 5.6 of the textbook. However, we shall use the Method of this section here. In what follows, we consider the eigenvalues in the order given above. (Other answers are possible if, for example, the eigenvalues are taken in a different order or if different eigenvectors are chosen.)
The Cayley-Hamilton Theorem tells us that
p_B(B) = (B − 3I₃)(B − (1 − i)I₃)(B − (1 + i)I₃) = O₃.
Step A: We begin with the eigenvalue λ₁ = 3.
Step A1: Let
D = (B − (1 − i)I₃)(B − (1 + i)I₃) = [[25, 15, −5], [−25, −15, 5], [25, 15, −5]].
Then (B − 3I₃)D = O₃.
Step A2: Next, we search for the smallest positive integer k such that (B − 3I₃)^k D = O₃. Clearly, k = 1.
Step A3: We choose a maximal linearly independent subset of the columns of (B − 3I₃)^{k−1} D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. We can multiply this column by 1/25, obtaining v₁₁ = [1, −1, 1].
Steps A4, A5, and A6: No further work for λ₁ is needed here because λ₁ = 3 has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ₁ is required. Thus, {v₁₁} is a fundamental sequence of generalized eigenvectors corresponding to the 1 × 1 Jordan block associated with λ₁ = 3 in a Jordan canonical form for B.
Thus, we have completed Step A for this eigenvalue. To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ₂ and λ₃. We now repeat Step A for λ₂ = 1 − i.
Step A1: Let
D = (B − 3I₃)(B − (1 + i)I₃) = [[10 − 5i, 5 − 5i, −5], [−15 + 5i, −8 + 6i, 7 + i], [5 − 10i, 1 − 7i, −4 + 3i]].
Then (B − (1 − i)I₃)D = O₃.
Step A2: Next, we search for the smallest positive integer k such that (B − (1 − i)I₃)^k D = O₃. Clearly, k = 1.
Step A3: We choose a maximal linearly independent subset of the columns of (B − (1 − i)I₃)^{k−1} D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. (The second column is (3/5 − (1/5)i) times the first column, and the third column is (−2/5 − (1/5)i) times the first column. We can discover these scalars by dividing the first entry in each column by the first entry in the first column, and then verify these scalars are correct for the remaining entries.) We can multiply this column by 1/5, obtaining v₂₁ = [2 − i, −3 + i, 1 − 2i].
Steps A4, A5, and A6: No further work for λ₂ is needed here because λ₂ = 1 − i has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ₂ is required. Thus, {v₂₁} is a fundamental sequence of generalized eigenvectors corresponding to the 1 × 1 Jordan block associated with λ₂ = 1 − i in a Jordan canonical form for B.
Thus, we have completed Step A for this eigenvalue. To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ₃. We now repeat Step A for λ₃ = 1 + i.
Step A1: Let
D = (B − 3I₃)(B − (1 − i)I₃) = [[10 + 5i, 5 + 5i, −5], [−15 − 5i, −8 − 6i, 7 − i], [5 + 10i, 1 + 7i, −4 − 3i]].
Then (B − (1 + i)I₃)D = O₃.
Step A2: Next, we search for the smallest positive integer k such that (B − (1 + i)I₃)^k D = O₃. Clearly, k = 1.
Step A3: We choose a maximal linearly independent subset of the columns of (B − (1 + i)I₃)^{k−1} D = D to get as many linearly independent eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. (The second column is (3/5 + (1/5)i) times the first column, and the third column is (−2/5 + (1/5)i) times the first column.) We can multiply this column by 1/5, obtaining v₃₁ = [2 + i, −3 − i, 1 + 2i].
Steps A4, A5, and A6: No further work for λ₃ is needed here because λ₃ = 1 + i has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ₃ is required. Thus, {v₃₁} is a fundamental sequence of generalized eigenvectors corresponding to the 1 × 1 Jordan block associated with λ₃ = 1 + i in a Jordan canonical form for B.
Thus, we have completed Step A for all three eigenvalues.
Step B: Finally, we now have an ordered basis (v₁₁, v₂₁, v₃₁) comprised of fundamental sequences of generalized eigenvectors for B. Letting P be the matrix whose columns are these basis vectors, we find that
A = P⁻¹BP = (1/2) [[10, 6, −2], [−1 − 2i, −1 − i, i], [−1 + 2i, −1 + i, −i]] [[8, 5, 0], [−5, −3, −1], [10, 7, 0]] [[1, 2 − i, 2 + i], [−1, −3 + i, −3 − i], [1, 1 − 2i, 1 + 2i]]
= [[3, 0, 0], [0, 1 − i, 0], [0, 0, 1 + i]],
which gives a Jordan canonical form for B.


(g) We are given that p_B(x) = x⁵ + 5x⁴ + 10x³ + 10x² + 5x + 1 = (x + 1)⁵. Therefore, the only eigenvalue for B is λ = −1. (Other answers are possible if, for example, different generalized eigenvectors are chosen or they are used in a different order.)
Step A: We begin by finding generalized eigenvectors for λ = −1.
Step A1: p_B(B) = (B + I₅)⁵ = (B + I₅)⁵ I₅ = O₅. We let D = I₅.
Step A2: We calculate as follows:
(B + I₅)D = (B + I₅) = [[−2, 2, 1, −5, −3], [8, 3, 7, −2, 1], [0, −3, −3, 6, 3], [4, 1, 3, 0, 1], [0, −2, −2, 4, 2]],
and (B + I₅)²D = (B + I₅)² = O₅. Hence k = 2.
Step A3: Each nonzero column of (B + I₅)D is a generalized eigenvector for B corresponding to λ = −1. We let v₁₁ = [−2, 8, 0, 4, 0], the first column of (B + I₅)D. (We do not divide by 2 here since not all of the entries of the vector v₁₂ calculated in the next step are divisible by 2.) The second column of (B + I₅)D is not a scalar multiple of v₁₁, so we let v₂₁ = [2, 3, −3, 1, −2], the second column of (B + I₅)D. Row reduction shows that the remaining columns of (B + I₅)D are linear combinations of the first two (see the adjustment step, below).
Step A4: Next, we let v₁₂ = [1, 0, 0, 0, 0] and v₂₂ = [0, 1, 0, 0, 0], the first and second columns of D (= I₅), respectively. Thus, (B + I₅)v₁₂ = v₁₁, and (B + I₅)v₂₂ = v₂₁. This gives us our first two fundamental sequences of generalized eigenvectors, {v₁₁, v₁₂} and {v₂₁, v₂₂}, corresponding to two 2 × 2 Jordan blocks for λ = −1.
Steps A5 and A6: We have two fundamental sequences of generalized eigenvectors consisting of a total of 4 vectors. But, since the algebraic multiplicity of λ is 5, we must still find another generalized eigenvector. We do this by making an adjustment to the matrix D.
Recall that each column of (B + I₅)D is a linear combination of v₁₁ and v₂₁. In fact, the row reduction we did in Step A3 shows that
1st column of (B + I₅)D = 1v₁₁ + 0v₂₁ = f₁v₁₁ + g₁v₂₁
2nd column of (B + I₅)D = 0v₁₁ + 1v₂₁ = f₂v₁₁ + g₂v₂₁
3rd column of (B + I₅)D = (1/2)v₁₁ + 1v₂₁ = f₃v₁₁ + g₃v₂₁
4th column of (B + I₅)D = (1/2)v₁₁ + (−2)v₂₁ = f₄v₁₁ + g₄v₂₁
and 5th column of (B + I₅)D = (1/2)v₁₁ + (−1)v₂₁ = f₅v₁₁ + g₅v₂₁,
where the fᵢ's and gᵢ's represent the respective coefficients of v₁₁ and v₂₁ for each column of (B + I₅)D. We create a new matrix H whose ith column is fᵢv₁₂ + gᵢv₂₂. Thus,
H = [[1, 0, 1/2, 1/2, 1/2], [0, 1, 1, −2, −1], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]].
Then, since (B + I₅)v₁₂ = v₁₁ and (B + I₅)v₂₂ = v₂₁, we get that (B + I₅)H = (B + I₅)D.
Let D₁ = D − H = I₅ − H. Clearly, (B + I₅)D₁ = O₅. We now revisit Steps A2 through A6 using the matrix D₁ instead of D. The purpose of this adjustment to the matrix D is to attempt to eliminate the effects of the two fundamental sequences of length 2, thus unmasking shorter fundamental sequences.
Step A2: We have
D₁ = [[0, 0, −1/2, −1/2, −1/2], [0, 0, −1, 2, 1], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]]
and (B + I₅)D₁ = O₅. Hence k = 1.
Step A3: We look for new generalized eigenvectors among the columns of D₁. We must choose columns of D₁ that are not only linearly independent of each other, but also of our previously computed generalized eigenvectors (at the first level). We check this using the Independence Test Method by row reducing
[[−2, 2, −1/2, −1/2, −1/2], [8, 3, −1, 2, 1], [0, −3, 1, 0, 0], [4, 1, 0, 1, 0], [0, −2, 0, 0, 1]],
where the first two columns are v₁₁ and v₂₁, and the other columns are the nonzero columns of D₁. The resulting reduced row echelon form matrix is
[[1, 0, 0, 1/4, 1/8], [0, 1, 0, 0, −1/2], [0, 0, 1, 0, −3/2], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]].
Hence, the third column of D₁ is linearly independent of v₁₁ and v₂₁, and the remaining columns of D₁ are linear combinations of v₁₁, v₂₁, and the third column of D₁. Therefore, we let v₃₁ = [1, 2, −2, 0, 0], which is −2 times the 3rd column of D₁.
Steps A4, A5, and A6: Because k = 1, the generalized eigenvector v₃₁ for λ = −1 corresponds to a 1 × 1 Jordan block. This gives the sequence {v₃₁}, and we do not need to find more vectors for this sequence.
We now have the following three fundamental sequences of generalized eigenvectors for λ = −1: {v₁₁, v₁₂}, {v₂₁, v₂₂}, and {v₃₁}. Since we have now found 5 generalized eigenvectors for B corresponding to λ = −1, and since the algebraic multiplicity of λ is 5, we are finished with Step A.
Step B: We now have the ordered basis (v₁₁, v₁₂, v₂₁, v₂₂, v₃₁) for C⁵ consisting of 3 sequences of generalized eigenvectors for B. We let
P = [[−2, 1, 2, 0, 1], [8, 0, 3, 1, 2], [0, 0, −3, 0, −2], [4, 0, 1, 0, 0], [0, 0, −2, 0, 0]],
the matrix whose columns are the vectors in this ordered basis. Using this, we obtain the following Jordan canonical form for B:
A = P⁻¹BP
= (1/8) [[0, 0, 0, 2, 1], [8, 0, 4, 4, 4], [0, 0, 0, 0, −4], [0, 8, 8, −16, −8], [0, 0, −4, 0, 6]] [[−3, 2, 1, −5, −3], [8, 2, 7, −2, 1], [0, −3, −4, 6, 3], [4, 1, 3, −1, 1], [0, −2, −2, 4, 1]] [[−2, 1, 2, 0, 1], [8, 0, 3, 1, 2], [0, 0, −3, 0, −2], [4, 0, 1, 0, 0], [0, 0, −2, 0, 0]]
= [[−1, 1, 0, 0, 0], [0, −1, 0, 0, 0], [0, 0, −1, 1, 0], [0, 0, 0, −1, 0], [0, 0, 0, 0, −1]].

(7) (b) Consider the matrix
B = [[4, 4, 7], [−3, −5, −13], [1, 3, 8]].
We find a Jordan canonical form for B. You can quickly calculate that p_B(x) = x³ − 7x² + 16x − 12 = (x − 2)²(x − 3). Thus, the eigenvalues for B are λ₁ = 2 and λ₂ = 3. We must find the sizes of the Jordan blocks corresponding to these eigenvalues and a fundamental sequence of generalized eigenvectors corresponding to each block. Now, the Cayley-Hamilton Theorem tells us that p_B(B) = (B − 2I₃)²(B − 3I₃) = O₃.
Step A: We begin with the eigenvalue λ₁ = 2.
Step A1: Let
D = (B − 3I₃) = [[1, 4, 7], [−3, −8, −13], [1, 3, 5]].
Then (B − 2I₃)²D = O₃.
Step A2: Next, we search for the smallest positive integer k such that (B − 2I₃)^k D = O₃. Now,
(B − 2I₃)D = [[−3, −3, −3], [5, 5, 5], [−2, −2, −2]] ≠ O₃,
while, as we have seen, (B − 2I₃)²D = O₃. Hence, k = 2.
Step A3: We choose a maximal linearly independent subset of the columns of (B − 2I₃)^{k−1} D = (B − 2I₃)D to get as many linearly independent generalized eigenvectors as possible. Since all of the columns are multiples of the first, the first column alone suffices. Thus, u₁₁ = [−3, 5, −2].
Step A4: Next, we work backwards through the products of the form (B − 2I₃)^{k−j} D for j running from 2 up to k, choosing the same column in which we found the generalized eigenvector u₁₁. Because k = 2, the only value we need to consider here is j = 2. Hence, we let u₁₂ = [1, −3, 1], the first column of (B − 2I₃)^{2−2} D = D.
Steps A5 and A6: Now by construction, (B − 2I₃)u₁₂ = u₁₁ and (B − 2I₃)u₁₁ = 0. Hence, {u₁₁, u₁₂} forms a fundamental sequence of generalized eigenvectors for λ₁ = 2. Since the total number of generalized eigenvectors we have found for λ₁ equals the algebraic multiplicity of λ₁, which is 2, we can stop our work for λ₁. We therefore have a fundamental sequence {u₁₁, u₁₂} of generalized eigenvectors corresponding to a 2 × 2 Jordan block associated with λ₁ = 2 in a Jordan canonical form for B.
To complete this example, we still must find a fundamental sequence of generalized eigenvectors corresponding to λ₂ = 3. We repeat Step A for this eigenvalue.
Step A1: Let
D = (B − 2I₃)² = [[−1, 1, 4], [2, −2, −8], [−1, 1, 4]].
Then (B − 3I₃)D = O₃.
Step A2: Next, we search for the smallest positive integer k such that (B − 3I₃)^k D = O₃. However, it is obvious here that k = 1.
Step A3: Since k − 1 = 0, (B − 3I₃)^{k−1} D = D. Hence, each nonzero column of D = (B − 2I₃)² is a generalized eigenvector for B corresponding to λ₂ = 3. In particular, the first column of (B − 2I₃)² serves nicely as a generalized eigenvector u₂₁ = [−1, 2, −1] for B corresponding to λ₂ = 3. The other columns of (B − 2I₃)² are scalar multiples of the first column.
Steps A4, A5 and A6: No further work for λ₂ is needed here because λ₂ = 3 has algebraic multiplicity 1, and hence only one generalized eigenvector corresponding to λ₂ is required. Thus, {u₂₁} is a fundamental sequence of generalized eigenvectors corresponding to the 1 × 1 Jordan block associated with λ₂ = 3 in a Jordan canonical form for B.
Thus, we have completed Step A for both eigenvalues.
Step B: Finally, we now have an ordered basis (u₁₁, u₁₂, u₂₁) comprised of fundamental sequences of generalized eigenvectors for B. Letting Q be the matrix whose columns are these basis vectors, we find that
A = Q⁻¹BQ = [[−1, 0, 1], [−1, −1, −1], [1, −1, −4]] [[4, 4, 7], [−3, −5, −13], [1, 3, 8]] [[−3, 1, −1], [5, −3, 2], [−2, 1, −1]]
= [[2, 1, 0], [0, 2, 0], [0, 0, 3]],
which gives a Jordan canonical form for B.


(10) (a) (A + aIₙ)(A + bIₙ) = (A + aIₙ)A + (A + aIₙ)(bIₙ)
= A² + aIₙA + A(bIₙ) + (aIₙ)(bIₙ)
= A² + aA + bA + abIₙ = A² + bA + aA + baIₙ
= A² + bIₙA + A(aIₙ) + (bIₙ)(aIₙ)
= (A + bIₙ)A + (A + bIₙ)(aIₙ) = (A + bIₙ)(A + aIₙ).
(15) Let A, B, P, and q(x) be as given in the statement of the problem. We proceed using induction on k, the degree of the polynomial q.
Base Step: Suppose k = 0. Then q(x) = c for some c ∈ C. Hence, q(A) = cIₙ = q(B). Therefore, P⁻¹q(A)P = P⁻¹(cIₙ)P = cP⁻¹IₙP = cP⁻¹P = cIₙ = q(B).
Inductive Step: Suppose that s(B) = P⁻¹s(A)P for every kth degree polynomial s(x). We need to prove that q(B) = P⁻¹q(A)P, where q(x) has degree k + 1.
Suppose that q(x) = a_{k+1}x^{k+1} + a_k x^k + ⋯ + a₁x + a₀. Then q(A) = a_{k+1}A^{k+1} + a_k A^k + ⋯ + a₁A + a₀Iₙ = (a_{k+1}A^k + a_k A^{k−1} + ⋯ + a₁Iₙ)A + a₀Iₙ = s(A)A + a₀Iₙ, where s(x) = a_{k+1}x^k + a_k x^{k−1} + ⋯ + a₁. Similarly, q(B) = s(B)B + a₀Iₙ for the same polynomial s(x). Therefore,
P⁻¹q(A)P = P⁻¹(s(A)A + a₀Iₙ)P = P⁻¹s(A)AP + P⁻¹(a₀Iₙ)P = P⁻¹s(A)(PP⁻¹)AP + a₀P⁻¹IₙP = (P⁻¹s(A)P)(P⁻¹AP) + a₀P⁻¹P = s(B)(P⁻¹AP) + a₀Iₙ (by the inductive hypothesis) = s(B)B + a₀Iₙ = q(B).
(16) (a) We compute the (i, j) entry of A². First consider the case in which i ≤ k and j ≤ k. The (i, j) entry of A² = a_{i1}a_{1j} + ⋯ + a_{ik}a_{kj} + a_{i(k+1)}a_{(k+1)j} + ⋯ + a_{i(k+m)}a_{(k+m)j} = ((i, j) entry of A₁₁²) + ((i, j) entry of A₁₂A₂₁).
If i > k and j ≤ k, then the (i, j) entry of A² = a_{i1}a_{1j} + ⋯ + a_{ik}a_{kj} + a_{i(k+1)}a_{(k+1)j} + ⋯ + a_{i(k+m)}a_{(k+m)j} = ((i, j) entry of A₂₁A₁₁) + ((i, j) entry of A₂₂A₂₁).
The other two cases are handled similarly.
(19) (g) The number of Jordan blocks having size exactly k × k is the number having size at least k × k minus the number of size at least (k + 1) × (k + 1). By part (f), r_{k−1}(A) − r_k(A) gives the total number of Jordan blocks having size at least k × k corresponding to λ. Similarly, the number of Jordan blocks having size at least (k + 1) × (k + 1) is r_k(A) − r_{k+1}(A). Hence the number of Jordan blocks having size exactly k × k equals (r_{k−1}(A) − r_k(A)) − (r_k(A) − r_{k+1}(A)) = r_{k−1}(A) − 2r_k(A) + r_{k+1}(A).
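The block-counting formula is straightforward to implement. In the sketch below (an illustration, not part of the exercise), r(j) stands for rank((A − λI)^j), with r(0) = n, and the sample matrix has one 2 × 2 and one 1 × 1 Jordan block for the eigenvalue 2.

    import numpy as np

    def blocks_of_size(A, lam, k):
        n = A.shape[0]
        def r(j):
            if j == 0:
                return n
            return np.linalg.matrix_rank(np.linalg.matrix_power(A - lam*np.eye(n), j))
        return r(k - 1) - 2*r(k) + r(k + 1)

    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 2.0]])
    print(blocks_of_size(A, 2.0, 1), blocks_of_size(A, 2.0, 2))   # 1 1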
(20) (a) False. For example, any matrix in Jordan canonical form that has more than one Jordan block cannot be similar to a matrix with a single Jordan block. This follows from the uniqueness statement in Theorem 1. The matrix [[1, 0], [0, 2]] is a specific example of such a matrix.
(b) True. This is stated in the first paragraph of the subsection entitled “Finding A Jordan Canonical Form.”
(c) False. The geometric multiplicity of an eigenvalue equals the dimension of the eigenspace corresponding to that eigenvalue, not the dimension of the generalized eigenspace. For a specific example, consider the matrix A = [[0, 1], [0, 0]], whose only eigenvalue is λ = 0. The eigenspace for λ is spanned by e₁, and so the dimension of the eigenspace, which equals the geometric multiplicity of λ = 0, is 1. However, the generalized eigenspace for A is {v ∈ C² | (A − 0I₂)^k v = 0 for some positive integer k}. But A − 0I₂ = A, so (A − 0I₂)² = O₂. Thus, the generalized eigenspace for A is C², which has dimension 2. (Note, by the way, that p_A(x) = x², and so the eigenvalue 0 has algebraic multiplicity 2.)
(d) False. According to Theorem 1, a Jordan canonical form for a matrix is unique, except for the order in which the Jordan blocks appear on the main diagonal. However, if there are two or more Jordan blocks, and if they are all identical, then all of the Jordan canonical form matrices are the same, since changing the order of the blocks in such a case does not change the matrix. One example is the Jordan canonical form matrix I₂, which has two identical 1 × 1 Jordan blocks.
(e) False. For a simple counterexample, suppose A = J = I₂, and let P and Q be any two distinct nonsingular matrices, such as I₂ and −I₂.
(f) True. Theorem 1 asserts that there is a nonsingular matrix P such that P⁻¹AP is in Jordan canonical form. The comments just before Example 5 show that the columns of P are a basis for Cⁿ consisting of generalized eigenvectors for A. Since these columns span Cⁿ, every vector in Cⁿ can be expressed as a linear combination of these column vectors.
