Answers to Exercises
Linear Algebra
Jim Hefferon
[Cover figure: three boxes formed by pairs of vectors —
    (1, 3) and (2, 1), with determinant |1 2; 3 1|;
    x·(1, 3) and (2, 1), with determinant |x·1 2; x·3 1|;
    (6, 8) and (2, 1), with determinant |6 2; 8 1|.]
Notation
    R                          real numbers
    N                          natural numbers: {0, 1, 2, . . .}
    C                          complex numbers
    {. . . | . . .}            set of . . . such that . . .
    ⟨. . .⟩                    sequence; like a set but order matters
    V, W, U                    vector spaces
    ~v, ~w                     vectors
    ~0, ~0_V                   zero vector, zero vector of V
    B, D                       bases
    E_n = ⟨~e1, . . . , ~en⟩   standard basis for R^n
    ~β, ~δ                     basis vectors
    Rep_B(~v)                  matrix representing the vector
    P_n                        set of n-th degree polynomials
    M_n×m                      set of n×m matrices
    [S]                        span of the set S
    M ⊕ N                      direct sum of subspaces
    V ≅ W                      isomorphic spaces
    h, g                       homomorphisms, linear maps
    H, G                       matrices
    t, s                       transformations; maps from a space to itself
    T, S                       square matrices
    Rep_B,D(h)                 matrix representing the map h
    h_i,j                      matrix entry from row i, column j
    |T|                        determinant of the matrix T
    R(h), N(h)                 rangespace and nullspace of the map h
    R_∞(h), N_∞(h)             generalized rangespace and nullspace
Cover. This is Cramer’s Rule for the system x + 2y = 6, 3x + y = 8. The size of the first box is the determinant
shown (the absolute value of the size is the area). The size of the second box is x times that, and equals the size
of the final box. Hence, x is the final determinant divided by the first determinant.
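A small Python sketch of this computation (illustrative only; it simply redoes the arithmetic described above with 2×2 determinants):

    # Cramer's Rule for x + 2y = 6, 3x + y = 8.
    def det2(a, b, c, d):
        """Determinant of the matrix with rows (a, b) and (c, d)."""
        return a*d - b*c

    d_first = det2(1, 2, 3, 1)   # the first box: -5
    d_final = det2(6, 2, 8, 1)   # the final box: -10
    print(d_final / d_first)     # x = 2.0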
These are answers to the exercises in Linear Algebra by J. Hefferon. Corrections or comments are
very welcome; email to jim@joshua.smcvt.edu.
An answer labeled here as, for instance, One.II.3.4, matches the question numbered 4 from the first
chapter, second section, and third subsection. The Topics are numbered separately.
Contents
Chapter One: Linear Systems
    Subsection One.I.1: Gauss’ Method
    Subsection One.I.2: Describing the Solution Set
    Subsection One.I.3: General = Particular + Homogeneous
    Subsection One.II.1: Vectors in Space
    Subsection One.II.2: Length and Angle Measures
    Subsection One.III.1: Gauss-Jordan Reduction
    Subsection One.III.2: Row Equivalence
    Topic: Computer Algebra Systems
    Topic: Input-Output Analysis
    Topic: Accuracy of Computations
    Topic: Analyzing Networks
One.I.1.16 Gauss’ method can be performed in different ways, so these simply exhibit one possible
way to get the answer.
(a) Gauss’ method

    −(1/2)ρ1+ρ2   2x + 3y = 13
    −→               −(5/2)y = −15/2

gives that the solution is y = 3 and x = 2.
(b) Gauss’ method here

    −3ρ1+ρ2   x −  z = 0   −ρ2+ρ3   x −  z = 0
    −→        y + 3z = 1   −→       y + 3z = 1
    ρ1+ρ3          y = 4                −3z = 3

gives x = −1, y = 4, and z = −1.
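These solutions can be spot-checked mechanically. Here is a sketch with SymPy; the systems given to solve() are the ones reconstructed from the reductions above, so they are only as reliable as that reading:

    from sympy import symbols, solve

    x, y, z = symbols('x y z')
    # (a): 2x + 3y = 13 and x - y = -1
    print(solve([2*x + 3*y - 13, x - y + 1], [x, y]))              # {x: 2, y: 3}
    # (b): x - z = 0, 3x + y = 1, -x + y + z = 4
    print(solve([x - z, 3*x + y - 1, -x + y + z - 4], [x, y, z]))  # {x: -1, y: 4, z: -1}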
One.I.1.17 (a) Gaussian reduction

    −(1/2)ρ1+ρ2   2x + 2y = 5
    −→                −5y = −5/2

shows that y = 1/2 and x = 2 is the unique solution.
(b) Gauss’ method

    ρ1+ρ2   −x + y = 1
    −→          2y = 3

gives y = 3/2 and x = 1/2 as the only solution.
(c) Row reduction

    −ρ1+ρ2   x − 3y + z = 1
    −→           4y + z = 13

shows, because the variable z is not a leading variable in any row, that there are many solutions.
(d) Row reduction

    −3ρ1+ρ2   −x − y = 1
    −→             0 = −1

shows that there is no solution.
(e) Gauss’ method

    ρ1↔ρ4    x +  y −  z = 10   −2ρ1+ρ2   x + y −  z =  10   −(1/4)ρ2+ρ3   x + y −  z =  10
    −→      2x − 2y +  z =  0   −ρ1+ρ3      −4y + 3z = −20   ρ2+ρ4            −4y + 3z = −20
             x       +  z =  5   −→          −y + 2z =  −5   −→                 (5/4)z =   0
            4y       +  z = 20               4y +  z =  20                          4z =   0

gives the unique solution (x, y, z) = (5, 5, 0).
(f) Here Gauss’ method gives

    −(3/2)ρ1+ρ3   2x + z + w = 5                  −ρ2+ρ4   2x + z + w = 5
    −2ρ1+ρ4            y − w = −1                 −→            y − w = −1
    −→            −(5/2)z − (5/2)w = −15/2                 −(5/2)z − (5/2)w = −15/2
                       y − w = −1                                        0 = 0

which shows that there are many solutions.
One.I.1.18 (a) From x = 1 − 3y we get that 2(1 − 3y) + y = −3, giving y = 1.
(b) From x = 1 − 3y we get that 2(1 − 3y) + 2y = 0, leading to the conclusion that y = 1/2.
Users of this method must check any potential solutions by substituting back into all the equations.
One.I.1.27 We take three cases: first that a ≠ 0, second that a = 0 and c ≠ 0, and third that both
a = 0 and c = 0.
For the first, we assume that a ≠ 0. Then the reduction

    −(c/a)ρ1+ρ2   ax + by = j
    −→            (−(cb/a) + d)y = −(cj/a) + k

shows that this system has a unique solution if and only if −(cb/a) + d ≠ 0; remember that a ≠ 0
so that back substitution yields a unique x (observe, by the way, that j and k play no role in the
conclusion that there is a unique solution, although if there is a unique solution then they contribute
to its value). But −(cb/a) + d = (ad − bc)/a and a fraction is not equal to 0 if and only if its numerator
is not equal to 0. Thus, in this first case, there is a unique solution if and only if ad − bc ≠ 0.
In the second case, if a = 0 but c ≠ 0, then we swap

    cx + dy = k
         by = j

to conclude that the system has a unique solution if and only if b ≠ 0 (we use the case assumption that
c ≠ 0 to get a unique x in back substitution). But — where a = 0 and c ≠ 0 — the condition “b ≠ 0”
is equivalent to the condition “ad − bc ≠ 0”. That finishes the second case.
Finally, for the third case, if both a and c are 0 then the system

    0x + by = j
    0x + dy = k

might have no solutions (if the second equation is not a multiple of the first) or it might have infinitely
many solutions (if the second equation is a multiple of the first then for each y satisfying both equations,
any pair (x, y) will do), but it never has a unique solution. Note that a = 0 and c = 0 gives that
ad − bc = 0.
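A quick numeric illustration of the criterion (a sketch; the coefficients here are made up):

    import numpy as np

    A = np.array([[2.0, 3.0],
                  [4.0, 1.0]])       # ad - bc = 2*1 - 3*4 = -10, nonzero
    b = np.array([7.0, 9.0])
    print(np.linalg.det(A))          # approximately -10
    print(np.linalg.solve(A, b))     # the unique solution [2. 1.]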
One.I.1.28 Recall that if a pair of lines share two distinct points then they are the same line. That’s
because two points determine a line, so these two points determine each of the two lines, and so they
are the same line.
Thus the lines can share one point (giving a unique solution), share no points (giving no solutions),
or share at least two points (which makes them the same line).
One.I.1.29 For the reduction operation of multiplying ρi by a nonzero real number k, we have that
(s1, . . . , sn) satisfies this system

    a1,1 x1 + a1,2 x2 + · · · + a1,n xn = d1
              ⋮
    kai,1 x1 + kai,2 x2 + · · · + kai,n xn = kdi
              ⋮
    am,1 x1 + am,2 x2 + · · · + am,n xn = dm

if and only if

    a1,1 s1 + a1,2 s2 + · · · + a1,n sn = d1
              ⋮
    and kai,1 s1 + kai,2 s2 + · · · + kai,n sn = kdi
              ⋮
    and am,1 s1 + am,2 s2 + · · · + am,n sn = dm

by the definition of ‘satisfies’. But, because k ≠ 0, that’s true if and only if

    a1,1 s1 + a1,2 s2 + · · · + a1,n sn = d1
              ⋮
    and ai,1 s1 + ai,2 s2 + · · · + ai,n sn = di
              ⋮
    and am,1 s1 + am,2 s2 + · · · + am,n sn = dm
One.I.1.31 Yes. This sequence of operations swaps rows i and j

    ρi+ρj   −ρj+ρi   ρi+ρj   −1ρi
    −→      −→       −→      −→

so the row-swap operation is redundant in the presence of the other two.
One.I.1.32 Swapping rows is reversed by swapping back.

    a1,1 x1 + · · · + a1,n xn = d1   ρi↔ρj   ρj↔ρi   a1,1 x1 + · · · + a1,n xn = d1
              ⋮                      −→      −→                ⋮
    am,1 x1 + · · · + am,n xn = dm                   am,1 x1 + · · · + am,n xn = dm

Multiplying both sides of a row by k ≠ 0 is reversed by dividing by k.

    a1,1 x1 + · · · + a1,n xn = d1   kρi   (1/k)ρi   a1,1 x1 + · · · + a1,n xn = d1
              ⋮                      −→    −→                  ⋮
    am,1 x1 + · · · + am,n xn = dm                   am,1 x1 + · · · + am,n xn = dm

Adding k times a row to another is reversed by adding −k times that row.

    a1,1 x1 + · · · + a1,n xn = d1   kρi+ρj   −kρi+ρj   a1,1 x1 + · · · + a1,n xn = d1
              ⋮                      −→       −→                  ⋮
    am,1 x1 + · · · + am,n xn = dm                      am,1 x1 + · · · + am,n xn = dm

Remark: observe for the third case that if we were to allow i = j then the result wouldn’t hold.

    3x + 2y = 7   2ρ1+ρ1   9x + 6y = 21   −2ρ1+ρ1   −9x − 6y = −21
                  −→                      −→
One.I.1.33 Let p, n, and d be the number of pennies, nickels, and dimes. For variables that are real
numbers, this system
    p +  n +   d = 13   −ρ1+ρ2   p + n +  d = 13
    p + 5n + 10d = 83   −→           4n + 9d = 70
has infinitely many solutions. However, it has a limited number of solutions in which p, n, and d are
non-negative integers. Running through d = 0, . . . , d = 8 shows that (p, n, d) = (3, 4, 6) is the only
sensible solution.
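The run-through can be done by brute force; this sketch simply enumerates the possibilities just described:

    # Non-negative integer solutions of p + n + d = 13, p + 5n + 10d = 83.
    for d in range(0, 9):
        for n in range(0, 18):
            p = 13 - n - d
            if p >= 0 and p + 5*n + 10*d == 83:
                print((p, n, d))       # prints (3, 4, 6) only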
One.I.1.34 Solving the system
(1/3)(a + b + c) + d = 29
(1/3)(b + c + d) + a = 23
(1/3)(c + d + a) + b = 21
(1/3)(d + a + b) + c = 17
we obtain a = 12, b = 9, c = 3, d = 21. Thus the second item, 21, is the correct answer.
One.I.1.35 This is how the answer was given in the cited source. A comparison of the units and
hundreds columns of this addition shows that there must be a carry from the tens column. The tens
column then tells us that A < H, so there can be no carry from the units or hundreds columns. The
five columns then give the following five equations.
A+E =W
2H = A + 10
H =W +1
H + T = E + 10
A+1=T
The five linear equations in five unknowns, if solved simultaneously, produce the unique solution: A =
4, T = 5, H = 7, W = 6 and E = 2, so that the original example in addition was 47474+5272 = 52746.
One.I.1.36 This is how the answer was given in the cited source. Eight commissioners voted for B.
To see this, we will use the given information to study how many voters chose each order of A, B, C.
The six orders of preference are ABC, ACB, BAC, BCA, CAB, CBA; assume they receive a, b,
c, d, e, f votes respectively. We know that
a + b + e = 11
d + e + f = 12
a + c + d = 14
(d) This reduction

    ( 2  1 −1 | 2 )   −ρ1+ρ2        ( 2    1   −1 |  2 )   −(3/2)ρ2+ρ3   ( 2  1    −1 |    2 )
    ( 2  0  1 | 3 )   −(1/2)ρ1+ρ3   ( 0   −1    2 |  1 )   −→            ( 0 −1     2 |    1 )
    ( 1 −1  0 | 0 )   −→            ( 0 −3/2  1/2 | −1 )                 ( 0  0  −5/2 | −5/2 )

shows that the solution set is a singleton set (column vectors are written here as tuples).

    {(1, 1, 1)}
(e) This reduction is easy

    ( 1  2 −1 0 | 3 )   −2ρ1+ρ2   ( 1  2 −1 0 |  3 )   −ρ2+ρ3   ( 1  2 −1 0 |  3 )
    ( 2  1  0 1 | 4 )   −ρ1+ρ3    ( 0 −3  2 1 | −2 )   −→       ( 0 −3  2 1 | −2 )
    ( 1 −1  1 1 | 1 )   −→        ( 0 −3  2 1 | −2 )            ( 0  0  0 0 |  0 )

and ends with x and y leading, while z and w are free. Solving for y gives y = (2 + 2z + w)/3 and
substitution shows that x + 2(2 + 2z + w)/3 − z = 3 so x = (5/3) − (1/3)z − (2/3)w, making the
solution set

    {(5/3, 2/3, 0, 0) + (−1/3, 2/3, 1, 0)z + (−2/3, 1/3, 0, 1)w | z, w ∈ R}.
(f) The reduction

    ( 1 0 1  1 | 4 )   −2ρ1+ρ2   ( 1 0  1  1 |  4 )   −ρ2+ρ3   ( 1 0  1  1 |  4 )
    ( 2 1 0 −1 | 2 )   −3ρ1+ρ3   ( 0 1 −2 −3 | −6 )   −→       ( 0 1 −2 −3 | −6 )
    ( 3 1 1  0 | 7 )   −→        ( 0 1 −2 −3 | −5 )            ( 0 0  0  0 |  1 )

shows that there is no solution — the solution set is empty.
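The parametrization in (e) can be spot-checked numerically; this sketch assumes the system as read off from the reduction above:

    import numpy as np

    A = np.array([[1., 2., -1., 0.],
                  [2., 1.,  0., 1.],
                  [1., -1., 1., 1.]])
    b = np.array([3., 4., 1.])
    part = np.array([5/3, 2/3, 0, 0])
    vz = np.array([-1/3, 2/3, 1, 0])
    vw = np.array([-2/3, 1/3, 0, 1])
    for z, w in [(0, 0), (1, -2), (3.5, 0.25)]:
        assert np.allclose(A @ (part + z*vz + w*vw), b)   # every instance satisfies the system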
One.I.2.19 (a) This reduction

    ( 2  1 −1 | 1 )   −2ρ1+ρ2   ( 2  1 −1 | 1 )
    ( 4 −1  0 | 3 )   −→        ( 0 −3  2 | 1 )

ends with x and y leading while z is free. Solving for y gives y = (1 − 2z)/(−3), and then substitution
2x + (1 − 2z)/(−3) − z = 1 shows that x = ((4/3) + (1/3)z)/2. Hence the solution set is

    {(2/3, −1/3, 0) + (1/6, 2/3, 1)z | z ∈ R}.
(b) This application of Gauss’ method

    ( 1 0 −1  0 | 1 )   −ρ1+ρ3   ( 1 0 −1  0 | 1 )   −2ρ2+ρ3   ( 1 0 −1  0 | 1 )
    ( 0 1  2 −1 | 3 )   −→       ( 0 1  2 −1 | 3 )   −→        ( 0 1  2 −1 | 3 )
    ( 1 2  3 −1 | 7 )            ( 0 2  4 −1 | 6 )             ( 0 0  0  1 | 0 )

leaves x, y, and w leading. The solution set is

    {(1, 3, 0, 0) + (1, −2, 1, 0)z | z ∈ R}.
(c) This row reduction

    ( 1 −1 1  0 | 0 )   −3ρ1+ρ3   ( 1 −1 1  0 | 0 )   −ρ2+ρ3   ( 1 −1 1 0 | 0 )
    ( 0  1 0  1 | 0 )   −→        ( 0  1 0  1 | 0 )   ρ2+ρ4    ( 0  1 0 1 | 0 )
    ( 3 −2 3  1 | 0 )             ( 0  1 0  1 | 0 )   −→       ( 0  0 0 0 | 0 )
    ( 0 −1 0 −1 | 0 )             ( 0 −1 0 −1 | 0 )            ( 0  0 0 0 | 0 )

ends with z and w free. The solution set is

    {(0, 0, 0, 0) + (−1, 0, 1, 0)z + (−1, −1, 0, 1)w | z, w ∈ R}.
(b) Plug in with a = 3, b = 1, and c = −2.

    {(−7, 5, 15, 0) + (−5, 3, 10, 1)w | w ∈ R}
One.I.2.24 Leaving the comma out, say by writing a123 , is ambiguous because it could mean a1,23 or
a12,3 .
One.I.2.25 (a)
    ( 2 3 4 5 )
    ( 3 4 5 6 )
    ( 4 5 6 7 )
    ( 5 6 7 8 )
(b)
    (  1 −1  1 −1 )
    ( −1  1 −1  1 )
    (  1 −1  1 −1 )
    ( −1  1 −1  1 )
One.I.2.26 (a)
    ( 1 4 )
    ( 2 5 )
    ( 3 6 )
(b)
    (  2 1 )
    ( −3 1 )
(c)
    (  5 10 )
    ( 10  5 )
(d)
    ( 1 1 0 )
One.I.2.27 (a) Plugging in x = 1 and x = −1 gives

    a + b + c = 2   −ρ1+ρ2   a + b + c = 2
    a − b + c = 6   −→             −2b = 4

so the set of functions is {f(x) = (4 − c)x² − 2x + c | c ∈ R}.
(b) Putting in x = 1 gives

    a + b + c = 2

so the set of functions is {f(x) = (2 − b − c)x² + bx + c | b, c ∈ R}.
One.I.2.28 On plugging in the five pairs (x, y) we get a system with the five equations and six unknowns
a, . . . , f . Because there are more unknowns than equations, if no inconsistency exists among the
equations then there are infinitely many solutions (at least one variable will end up free).
But no inconsistency can exist because a = 0, . . . , f = 0 is a solution (we are only using this zero
solution to show that the system is consistent — the prior paragraph shows that there are nonzero
solutions).
One.I.2.29 (a) Here is one — the fourth equation is redundant but still OK.
x+y− z+ w=0
y− z =0
2z + 2w = 0
z+ w=0
(b) Here is one.
x+y−z+w=0
w=0
w=0
w=0
(c) This is one.
x+y−z+w=0
x+y−z+w=0
x+y−z+w=0
x+y−z+w=0
One.I.2.30 This is how the answer was given in the cited source.
(a) Formal solution of the system yields

    x = (a³ − 1)/(a² − 1)    y = (−a² + a)/(a² − 1).

If a + 1 ≠ 0 and a − 1 ≠ 0, then the system has the single solution

    x = (a² + a + 1)/(a + 1)    y = −a/(a + 1).

If a = −1, or if a = +1, then the formulas are meaningless; in the first instance we arrive at the
system

    −x + y = 1
     x − y = 1

which is a contradictory system. In the second instance we have

    x + y = 1
    x + y = 1

which has an infinite number of solutions (for example, for x arbitrary, y = 1 − x).
(b) Solution of the system yields

    x = (a⁴ − 1)/(a² − 1)    y = (−a³ + a)/(a² − 1).

Here, if a² − 1 ≠ 0, the system has the single solution x = a² + 1, y = −a. For a = −1 and a = 1,
we obtain the systems

    −x + y = −1        x + y = 1
     x − y =  1        x + y = 1

both of which have an infinite number of solutions.
One.I.3.15 For the arithmetic to these, see the answers from the prior subsection.
(a) The solution set is

    {(6, 0) + (−2, 1)y | y ∈ R}.

Here the particular solution and the solution set for the associated homogeneous system are

    (6, 0)   and   {(−2, 1)y | y ∈ R}.

(b) The solution set is

    {(0, 1)}.

The particular solution and the solution set for the associated homogeneous system are

    (0, 1)   and   {(0, 0)}.

(c) The solution set is

    {(4, −1, 0) + (−1, 1, 1)x3 | x3 ∈ R}.

A particular solution and the solution set for the associated homogeneous system are

    (4, −1, 0)   and   {(−1, 1, 1)x3 | x3 ∈ R}.

(d) The solution set is a singleton

    {(1, 1, 1)}.

A particular solution and the solution set for the associated homogeneous system are

    (1, 1, 1)   and   {(0, 0, 0)t | t ∈ R}.

(e) The solution set is

    {(5/3, 2/3, 0, 0) + (−1/3, 2/3, 1, 0)z + (−2/3, 1/3, 0, 1)w | z, w ∈ R}.

A particular solution and the solution set for the associated homogeneous system are

    (5/3, 2/3, 0, 0)   and   {(−1/3, 2/3, 1, 0)z + (−2/3, 1/3, 0, 1)w | z, w ∈ R}.
(f) This system’s solution set is empty. Thus, there is no particular solution. The solution set of the
associated homogeneous system is

    {(−1, 2, 1, 0)z + (−1, 3, 0, 1)w | z, w ∈ R}.
One.I.3.16 The answers from the prior subsection show the row operations.
(a) The solution set is

    {(2/3, −1/3, 0) + (1/6, 2/3, 1)z | z ∈ R}.

A particular solution and the solution set for the associated homogeneous system are

    (2/3, −1/3, 0)   and   {(1/6, 2/3, 1)z | z ∈ R}.

(b) The solution set is

    {(1, 3, 0, 0) + (1, −2, 1, 0)z | z ∈ R}.

A particular solution and the solution set for the associated homogeneous system are

    (1, 3, 0, 0)   and   {(1, −2, 1, 0)z | z ∈ R}.

(c) The solution set is

    {(0, 0, 0, 0) + (−1, 0, 1, 0)z + (−1, −1, 0, 1)w | z, w ∈ R}.

A particular solution and the solution set for the associated homogeneous system are

    (0, 0, 0, 0)   and   {(−1, 0, 1, 0)z + (−1, −1, 0, 1)w | z, w ∈ R}.

(d) The solution set is

    {(1, 0, 0, 0, 0) + (−5/7, −8/7, 1, 0, 0)c + (−3/7, −2/7, 0, 1, 0)d + (−1/7, 4/7, 0, 0, 1)e | c, d, e ∈ R}.

A particular solution and the solution set for the associated homogeneous system are

    (1, 0, 0, 0, 0)   and   {(−5/7, −8/7, 1, 0, 0)c + (−3/7, −2/7, 0, 1, 0)d + (−1/7, 4/7, 0, 0, 1)e | c, d, e ∈ R}.
One.I.3.17 Just plug them in and see if they satisfy all three equations.
(a) No.
(b) Yes.
(c) Yes.
One.I.3.18 Gauss’ method on the associated homogeneous system gives

    ( 1 −1  0  1 | 0 )   −2ρ1+ρ2   ( 1 −1  0  1 | 0 )   −(1/5)ρ2+ρ3   ( 1 −1   0    1 | 0 )
    ( 2  3 −1  0 | 0 )   −→        ( 0  5 −1 −2 | 0 )   −→            ( 0  5  −1   −2 | 0 )
    ( 0  1  1  1 | 0 )             ( 0  1  1  1 | 0 )                 ( 0  0  6/5  7/5 | 0 )
(a) That vector is indeed a particular solution so the required general solution is

    {(0, 0, 0, 4) + (−5/6, 1/6, −7/6, 1)w | w ∈ R}.

(b) That vector is a particular solution so the required general solution is

    {(−5, 1, −7, 10) + (−5/6, 1/6, −7/6, 1)w | w ∈ R}.
(c) That vector is not a solution of the system since it does not satisfy the third equation. No such
general solution exists.
One.I.3.19 The first is nonsingular while the second is singular. Just do Gauss’ method and see if the
echelon form result has non-0 numbers in each entry on the diagonal.
One.I.3.20 (a) Nonsingular:

    −ρ1+ρ2   ( 1 2 )
    −→       ( 0 1 )

ends with each row containing a leading entry.
(b) Singular:

    3ρ1+ρ2   ( 1 2 )
    −→       ( 0 0 )

ends with row 2 without a leading entry.
(c) Neither. A matrix must be square for either word to apply.
(d) Singular.
(e) Nonsingular.
One.I.3.21 In each case we must decide if the vector is a linear combination of the vectors in the
set.
(a) Yes. Solve

    c1 (1, 4) + c2 (1, 5) = (2, 3)

with

    ( 1 1 | 2 )   −4ρ1+ρ2   ( 1 1 |  2 )
    ( 4 5 | 3 )   −→        ( 0 1 | −5 )

to conclude that there are c1 and c2 giving the combination.
(b) No. The reduction

    ( 2 1 | −1 )   −(1/2)ρ1+ρ2   ( 2    1 |  −1 )   2ρ2+ρ3   ( 2    1 |  −1 )
    ( 1 0 |  0 )   −→            ( 0 −1/2 | 1/2 )   −→       ( 0 −1/2 | 1/2 )
    ( 0 1 |  1 )                 ( 0    1 |   1 )            ( 0    0 |   2 )

shows that

    c1 (2, 1, 0) + c2 (1, 0, 1) = (−1, 0, 1)

has no solution.
(c) Yes. The reduction

    ( 1 2 3 4 | 1 )   −4ρ1+ρ3   ( 1  2   3   4 |  1 )   3ρ2+ρ3   ( 1 2  3  4 | 1 )
    ( 0 1 3 2 | 3 )   −→        ( 0  1   3   2 |  3 )   −→       ( 0 1  3  2 | 3 )
    ( 4 5 0 1 | 0 )             ( 0 −3 −12 −15 | −4 )            ( 0 0 −3 −9 | 5 )

shows that there are infinitely many ways

    {(c1, c2, c3, c4) = (−10, 8, −5/3, 0) + (−9, 7, −3, 1)c4 | c4 ∈ R}

to write

    (1, 3, 0) = c1 (1, 0, 4) + c2 (2, 1, 5) + c3 (3, 3, 0) + c4 (4, 2, 1).
(d) No. Look at the third components.
One.I.3.22 Because the matrix of coefficients is nonsingular, Gauss’ method ends with an echelon form
where each variable leads an equation. Back substitution gives a unique solution.
(Another way to see the solution is unique is to note that with a nonsingular matrix of coefficients
the associated homogeneous system has a unique solution, by definition. Since the general solution is
the sum of a particular solution with each homogeneous solution, the general solution has (at most)
one element.)
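In floating point the same fact shows up as a solver returning a single vector; a sketch (the matrix is made up, but any nonsingular one behaves this way):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])       # nonsingular
    b = np.array([5.0, 10.0])
    print(np.linalg.solve(A, b))     # the one and only solution [1. 3.]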
One.I.3.23 In this case the solution set is all of Rⁿ, and can be expressed in the required form

    {c1 (1, 0, . . . , 0) + c2 (0, 1, . . . , 0) + · · · + cn (0, 0, . . . , 1) | c1, . . . , cn ∈ R}.
One.I.3.24 Assume that ~s, ~t ∈ Rⁿ and write

    ~s = (s1, . . . , sn)   and   ~t = (t1, . . . , tn).

Also let ai,1 x1 + · · · + ai,n xn = 0 be the i-th equation in the homogeneous system.
(a) The check is easy:

    ai,1 (s1 + t1) + · · · + ai,n (sn + tn) = (ai,1 s1 + · · · + ai,n sn) + (ai,1 t1 + · · · + ai,n tn) = 0 + 0.

(b) This one is similar:

    ai,1 (3s1) + · · · + ai,n (3sn) = 3(ai,1 s1 + · · · + ai,n sn) = 3 · 0 = 0.

(c) This one is not much harder:

    ai,1 (ks1 + mt1) + · · · + ai,n (ksn + mtn) = k(ai,1 s1 + · · · + ai,n sn) + m(ai,1 t1 + · · · + ai,n tn) = k · 0 + m · 0.
What is wrong with that argument is that any linear combination of the zero vector yields the zero
vector again.
One.I.3.25 First the proof.
Gauss’ method will use only rationals (e.g., −(m/n)ρi +ρj ). Thus the solution set can be expressed
using only rational numbers as the components of each vector. Now the particular solution is all
rational.
There are infinitely many (rational vector) solutions if and only if the associated homogeneous sys-
tem has infinitely many (real vector) solutions. That’s because setting any parameters to be rationals
will produce an all-rational solution.
Note that this system
−2 + 7t = 1
1 + 9t = 0
1 − 2t = 2
0 + 4t = 1
has no solution. Thus the given point is not in the line.
One.II.1.4 (a) Note that

    (2, 2, 2, 0) − (1, 1, 5, −1) = (1, 1, −3, 1)   and   (3, 1, 0, 4) − (1, 1, 5, −1) = (2, 0, −5, 5)

and so the plane is this set.

    {(1, 1, 5, −1) + (1, 1, −3, 1)t + (2, 0, −5, 5)s | t, s ∈ R}
(b) No; this system
1 + 1t + 2s = 0
1 + 1t =0
5 − 3t − 5s = 0
−1 + 1t + 5s = 0
has no solution.
One.II.1.5 The vector

    (2, 0, 3)

is not in the line. Because

    (2, 0, 3) − (−1, 0, −4) = (3, 0, 7)

that plane can be described in this way.

    {(−1, 0, −4) + m(1, 1, 2) + n(3, 0, 7) | m, n ∈ R}
One.II.1.6 The points of coincidence are solutions of this system.
t = 1 + 2m
t + s = 1 + 3k
t + 3s = 4m
Gauss’ method

    ( 1 0  0 −2 | 1 )   −ρ1+ρ2   ( 1 0  0 −2 |  1 )   −3ρ2+ρ3   ( 1 0  0 −2 |  1 )
    ( 1 1 −3  0 | 1 )   −ρ1+ρ3   ( 0 1 −3  2 |  0 )   −→        ( 0 1 −3  2 |  0 )
    ( 1 3  0 −4 | 0 )   −→       ( 0 3  0 −2 | −1 )             ( 0 0  9 −8 | −1 )

gives k = −(1/9) + (8/9)m, so s = −(1/3) + (2/3)m and t = 1 + 2m. The intersection is this.

    {(1, 1, 0) + (0, 3, 0)(−1/9 + (8/9)m) + (2, 0, 4)m | m ∈ R}  =  {(1, 2/3, 0) + (2, 8/3, 4)m | m ∈ R}
One.II.1.7 (a) The system

    1     = 1
    1 + t = 3 + s
    2 + t = −2 + 2s

gives s = 6 and t = 8, so this is the solution set.

    {(1, 9, 10)}

(b) This system

    2 + t = 0
        t = s + 4w
    1 − t = 2s + w

gives t = −2, w = −1, and s = 2 so their intersection is this point.

    (0, −2, 3)
One.II.1.8 (a) The vector shown
One.II.2.15 (a) We can use the x-axis.

    arccos( ( (1)(1) + (0)(1) ) / ( √1 √2 ) ) ≈ 0.79 radians

(b) Again, use the x-axis.

    arccos( ( (1)(1) + (0)(1) + (0)(1) ) / ( √1 √3 ) ) ≈ 0.96 radians

(c) The x-axis worked before and it will work again.

    arccos( ( (1)(1) + · · · + (0)(1) ) / ( √1 √n ) ) = arccos(1/√n)

(d) Using the formula from the prior item, lim_{n→∞} arccos(1/√n) = π/2 radians.
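These angles are easy to reproduce numerically (a sketch):

    import numpy as np

    def angle(u, v):
        """Angle between u and v, in radians."""
        return float(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))

    print(angle(np.array([1., 0.]), np.array([1., 1.])))              # about 0.785
    print(angle(np.array([1., 0., 0.]), np.array([1., 1., 1.])))     # about 0.955
    print([float(np.arccos(1/np.sqrt(n))) for n in (10, 100, 10000)])  # tends to pi/2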
One.II.2.16 Clearly u1 u1 + · · · + un un is zero if and only if each ui is zero. So only ~0 ∈ Rn is
perpendicular to itself.
One.II.2.17 Assume that ~u, ~v, ~w ∈ Rⁿ have components u1, . . . , un, v1, . . . , wn.
(a) Dot product is right-distributive.

    (~u + ~v)·~w = [(u1, . . . , un) + (v1, . . . , vn)]·(w1, . . . , wn)
                 = (u1 + v1, . . . , un + vn)·(w1, . . . , wn)
                 = (u1 + v1)w1 + · · · + (un + vn)wn
                 = (u1 w1 + · · · + un wn) + (v1 w1 + · · · + vn wn)
                 = ~u·~w + ~v·~w

(b) Dot product is also left distributive: ~w·(~u + ~v) = ~w·~u + ~w·~v. The proof is just like the prior
one.
(c) Dot product commutes.

    (u1, . . . , un)·(v1, . . . , vn) = u1 v1 + · · · + un vn = v1 u1 + · · · + vn un = (v1, . . . , vn)·(u1, . . . , un)

(d) Because ~u·~v is a scalar, not a vector, the expression (~u·~v)·~w makes no sense; the dot product
of a scalar and a vector is not defined.
(e) This is a vague question so it has many answers. Some are (1) k(~u·~v) = (k~u)·~v and k(~u·~v) =
~u·(k~v), (2) k(~u·~v) ≠ (k~u)·(k~v) (in general; an example is easy to produce), and (3) ‖k~v‖² = k²‖~v‖²
(the connection between norm and dot product is that the square of the norm is the dot product of
a vector with itself).
One.II.2.18 (a) Verifying that (k~x)·~y = k(~x·~y) = ~x·(k~y) for k ∈ R and ~x, ~y ∈ Rⁿ is easy. Now, for
k ∈ R and ~v, ~w ∈ Rⁿ, if ~u = k~v then ~u·~v = (k~v)·~v = k(~v·~v), which is k times a nonnegative real.
The ~v = k~u half is similar (actually, taking the k in this paragraph to be the reciprocal of the k
above gives that we need only worry about the k = 0 case).
(b) We first consider the ~u·~v ≥ 0 case. From the Triangle Inequality we know that ~u·~v = ‖~u‖‖~v‖ if
and only if one vector is a nonnegative scalar multiple of the other. But that’s all we need because
the first part of this exercise shows that, in a context where the dot product of the two vectors
is positive, the two statements ‘one vector is a scalar multiple of the other’ and ‘one vector is a
nonnegative scalar multiple of the other’ are equivalent.
We finish by considering the ~u·~v < 0 case. Because 0 < |~u·~v| = −(~u·~v) = (−~u)·~v and
‖~u‖‖~v‖ = ‖−~u‖‖~v‖, we have that 0 < (−~u)·~v = ‖−~u‖‖~v‖. Now the prior paragraph applies to
give that one of the two vectors −~u and ~v is a scalar multiple of the other. But that’s equivalent to
the assertion that one of the two vectors ~u and ~v is a scalar multiple of the other, as desired.
One.II.2.19 No. These give an example.

    ~u = (1, 0)   ~v = (1, 0)   ~w = (1, 1)
and then this computation works.

    ‖~u + ~v‖² + ‖~u − ~v‖² = (u1 + v1)² + · · · + (un + vn)² + (u1 − v1)² + · · · + (un − vn)²
                            = u1² + 2u1v1 + v1² + · · · + un² + 2unvn + vn²
                              + u1² − 2u1v1 + v1² + · · · + un² − 2unvn + vn²
                            = 2(u1² + · · · + un²) + 2(v1² + · · · + vn²)
                            = 2‖~u‖² + 2‖~v‖²
One.II.2.26 We will prove this by demonstrating that the contrapositive statement holds: if ~x ≠ ~0 then
there is a ~y with ~x·~y ≠ 0.
Assume that ~x ∈ Rⁿ. If ~x ≠ ~0 then it has a nonzero component, say the i-th one xi. But the
vector ~y ∈ Rⁿ that is all zeroes except for a one in component i gives ~x·~y = xi. (A slicker proof just
considers ~x·~x.)
One.II.2.27 Yes; we can prove this by induction.
Assume that the vectors are in some Rk . Clearly the statement applies to one vector. The Triangle
Inequality is this statement applied to two vectors. For an inductive step assume the statement is true
for n or fewer vectors. Then this

    ‖~u1 + · · · + ~un + ~un+1‖ ≤ ‖~u1 + · · · + ~un‖ + ‖~un+1‖

follows by the Triangle Inequality for two vectors. Now the inductive hypothesis, applied to the first
summand on the right, gives that as less than or equal to ‖~u1‖ + · · · + ‖~un‖ + ‖~un+1‖.
One.II.2.28 By definition

    (~u·~v) / (‖~u‖‖~v‖) = cos θ

where θ is the angle between the vectors. Thus the ratio is |cos θ|.
One.II.2.29 So that the statement ‘vectors are orthogonal iff their dot product is zero’ has no excep-
tions.
One.II.2.30 The angle between (a) and (b) is found (for a, b ≠ 0) with

    arccos( ab / ( √(a²) √(b²) ) ).

If a or b is zero then the angle is π/2 radians. Otherwise, if a and b are of opposite signs then the
angle is π radians, else the angle is zero radians.
One.II.2.31 The angle between ~u and ~v is acute if ~u ~v > 0, is right if ~u ~v = 0, and is obtuse if
~u ~v < 0. That’s because, in the formula for the angle, the denominator is never negative.
One.II.2.32 Suppose that ~u, ~v ∈ Rⁿ. If ~u and ~v are perpendicular then

    ‖~u + ~v‖² = (~u + ~v)·(~u + ~v) = ~u·~u + 2 ~u·~v + ~v·~v = ~u·~u + ~v·~v = ‖~u‖² + ‖~v‖²

(the third equality holds because ~u·~v = 0).
One.II.2.33 Where ~u, ~v ∈ Rⁿ, the vectors ~u + ~v and ~u − ~v are perpendicular if and only if 0 =
(~u + ~v)·(~u − ~v) = ~u·~u − ~v·~v, which shows that those two are perpendicular if and only if ~u·~u = ~v·~v.
That holds if and only if ‖~u‖ = ‖~v‖.
One.II.2.34 Suppose ~u ∈ Rⁿ is perpendicular to both ~v ∈ Rⁿ and ~w ∈ Rⁿ. Then, for any k, m ∈ R
we have this.

    ~u·(k~v + m~w) = k(~u·~v) + m(~u·~w) = k(0) + m(0) = 0
One.II.2.35 We will show something more general: if ‖~z1‖ = ‖~z2‖ for ~z1, ~z2 ∈ Rⁿ, then ~z1 + ~z2 bisects
the angle between ~z1 and ~z2

    [figure: two vectors of equal length and their sum, which runs along the diagonal of the rhombus they form]

(we ignore the case where ~z1 and ~z2 are the zero vector).
The ~z1 + ~z2 = ~0 case is easy. For the rest, by the definition of angle, we will be done if we show
this.

    ( ~z1·(~z1 + ~z2) ) / ( ‖~z1‖ ‖~z1 + ~z2‖ ) = ( ~z2·(~z1 + ~z2) ) / ( ‖~z2‖ ‖~z1 + ~z2‖ )

But distributing inside each expression gives

    ( ~z1·~z1 + ~z1·~z2 ) / ( ‖~z1‖ ‖~z1 + ~z2‖ )   and   ( ~z2·~z1 + ~z2·~z2 ) / ( ‖~z2‖ ‖~z1 + ~z2‖ )

and ~z1·~z1 = ‖~z1‖² = ‖~z2‖² = ~z2·~z2, so the two are equal.
One.II.2.36 We can show the two statements together. Let ~u, ~v ∈ Rⁿ, write ~u = (u1, . . . , un) and
~v = (v1, . . . , vn), and calculate.

    cos θ = ( ku1v1 + · · · + kunvn ) / ( √((ku1)² + · · · + (kun)²) √(v1² + · · · + vn²) )
          = (k/|k|) · (~u·~v) / ( ‖~u‖ ‖~v‖ )
          = ± (~u·~v) / ( ‖~u‖ ‖~v‖ )
One.II.2.37 Let

    ~u = (u1, . . . , un),   ~v = (v1, . . . , vn),   ~w = (w1, . . . , wn)
as required.
One.II.2.39 This is how the answer was given in the cited source. The actual velocity ~v of the wind
is the sum of the ship’s velocity and the apparent velocity of the wind. Without loss of generality we
may assume ~a and ~b to be unit vectors, and may write
where s and t are undetermined scalars. Take the dot product first by ~a and then by ~b to obtain
Multiply the second by ~a ~b, subtract the result from the first, and find
then holds in the n + 1 case. Start with the right-hand side

    ( Σ_{1≤j≤n+1} aj² )( Σ_{1≤j≤n+1} bj² ) − Σ_{1≤k<j≤n+1} (ak bj − aj bk)²
      = [ ( Σ_{1≤j≤n} aj² ) + an+1² ][ ( Σ_{1≤j≤n} bj² ) + bn+1² ]
          − [ Σ_{1≤k<j≤n} (ak bj − aj bk)² + Σ_{1≤k≤n} (ak bn+1 − an+1 bk)² ]
      = ( Σ_{1≤j≤n} aj² )( Σ_{1≤j≤n} bj² ) + ( Σ_{1≤j≤n} bj² ) an+1² + ( Σ_{1≤j≤n} aj² ) bn+1² + an+1² bn+1²
          − [ Σ_{1≤k<j≤n} (ak bj − aj bk)² + Σ_{1≤k≤n} (ak bn+1 − an+1 bk)² ]
      = ( Σ_{1≤j≤n} aj² )( Σ_{1≤j≤n} bj² ) − Σ_{1≤k<j≤n} (ak bj − aj bk)²
          + ( Σ_{1≤j≤n} bj² ) an+1² + ( Σ_{1≤j≤n} aj² ) bn+1² + an+1² bn+1²
          − Σ_{1≤k≤n} (ak bn+1 − an+1 bk)²

and apply the inductive hypothesis

      = ( Σ_{1≤j≤n} aj bj )² + ( Σ_{1≤j≤n} bj² ) an+1² + ( Σ_{1≤j≤n} aj² ) bn+1² + an+1² bn+1²
          − [ Σ_{1≤k≤n} ak² bn+1² − 2 Σ_{1≤k≤n} ak bn+1 an+1 bk + Σ_{1≤k≤n} an+1² bk² ]
      = ( Σ_{1≤j≤n} aj bj )² + 2( Σ_{1≤k≤n} ak bk ) an+1 bn+1 + an+1² bn+1²
      = [ ( Σ_{1≤j≤n} aj bj ) + an+1 bn+1 ]²

to derive the left-hand side.
One.III.1.7 These answers show only the Gauss-Jordan reduction. With it, describing the solution
set is easy.
(a)
    ( 1  1 | 2 )   −ρ1+ρ2   ( 1  1 |  2 )   −(1/2)ρ2   ( 1 1 | 2 )   −ρ2+ρ1   ( 1 0 | 1 )
    ( 1 −1 | 0 )   −→       ( 0 −2 | −2 )   −→         ( 0 1 | 1 )   −→       ( 0 1 | 1 )
(b)
    ( 1 0 −1 | 4 )   −2ρ1+ρ2   ( 1 0 −1 |  4 )   (1/2)ρ2   ( 1 0 −1 |    4 )
    ( 2 2  0 | 1 )   −→        ( 0 2  2 | −7 )   −→        ( 0 1  1 | −7/2 )
(c)
    ( 3 −2 |   1 )   −2ρ1+ρ2   ( 3 −2 |    1 )   (1/3)ρ1   ( 1 −2/3 |   1/3 )   (2/3)ρ2+ρ1   ( 1 0 |  2/15 )
    ( 6  1 | 1/2 )   −→        ( 0  5 | −3/2 )   (1/5)ρ2   ( 0    1 | −3/10 )   −→           ( 0 1 | −3/10 )
(d) A row swap here makes the arithmetic easier.

    ( 2 −1  0 | −1 )   −(1/2)ρ1+ρ2   ( 2  −1   0 |   −1 )   ρ2↔ρ3   ( 2  −1   0 |   −1 )
    ( 1  3 −1 |  5 )   −→            ( 0 7/2  −1 | 11/2 )   −→      ( 0   1   2 |    5 )
    ( 0  1  2 |  5 )                 ( 0   1   2 |    5 )           ( 0 7/2  −1 | 11/2 )

    −(7/2)ρ2+ρ3   ( 2 −1  0 |  −1 )   (1/2)ρ1    ( 1 −1/2 0 | −1/2 )
    −→            ( 0  1  2 |   5 )   −(1/8)ρ3   ( 0    1 2 |    5 )
                  ( 0  0 −8 | −12 )   −→         ( 0    0 1 |  3/2 )

    −2ρ3+ρ2   ( 1 −1/2 0 | −1/2 )   (1/2)ρ2+ρ1   ( 1 0 0 | 1/2 )
    −→        ( 0    1 0 |    2 )   −→           ( 0 1 0 |   2 )
              ( 0    0 1 |  3/2 )                ( 0 0 1 | 3/2 )
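The Gauss-Jordan form can also be checked with a computer algebra system; for instance, SymPy's rref() on the matrix of part (d) (a sketch):

    from sympy import Matrix

    M = Matrix([[2, -1, 0, -1],
                [1,  3, -1, 5],
                [0,  1,  2, 5]])
    print(M.rref()[0])   # Matrix([[1, 0, 0, 1/2], [0, 1, 0, 2], [0, 0, 1, 3/2]])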
(c) This Jordan half

    ρ2+ρ1   ( 1 0 1 1 | 0 )
    −→      ( 0 1 0 1 | 0 )
            ( 0 0 0 0 | 0 )
            ( 0 0 0 0 | 0 )

gives

    {(0, 0, 0, 0) + (−1, 0, 1, 0)z + (−1, −1, 0, 1)w | z, w ∈ R}

(of course, the zero vector could be omitted from the description).
(d) The “Jordan” half

    −(1/7)ρ2   ( 1 2   3   1    −1 | 1 )   −2ρ2+ρ1   ( 1 0 5/7 3/7  1/7 | 1 )
    −→         ( 0 1 8/7 2/7 −4/7 | 0 )    −→        ( 0 1 8/7 2/7 −4/7 | 0 )

ends with this solution set.

    {(1, 0, 0, 0, 0) + (−5/7, −8/7, 1, 0, 0)c + (−3/7, −2/7, 0, 1, 0)d + (−1/7, 4/7, 0, 0, 1)e | c, d, e ∈ R}
One.III.1.10 Routine Gauss’ method gives one:

    −3ρ1+ρ2       ( 2   1   1    3 )   −(9/2)ρ2+ρ3   ( 2 1    1  3 )
    −(1/2)ρ1+ρ3   ( 0   1  −2   −7 )   −→            ( 0 1   −2 −7 )
    −→            ( 0 9/2 1/2  7/2 )                 ( 0 0 19/2 35 )

and any cosmetic change, like multiplying the bottom row by 2,

    ( 2 1  1  3 )
    ( 0 1 −2 −7 )
    ( 0 0 19 70 )

gives another.
One.III.1.11 In the cases listed below, we take a, b ∈ R. Thus, some canonical forms listed below
actually include infinitely many cases. In particular, they include the cases a = 0 and b = 0.
(a)
    ( 0 0 )   ( 1 a )   ( 0 1 )   ( 1 0 )
    ( 0 0 ),  ( 0 0 ),  ( 0 0 ),  ( 0 1 )
(b)
    ( 0 0 0 )   ( 1 a b )   ( 0 1 a )   ( 0 0 1 )   ( 1 0 a )   ( 1 a 0 )   ( 0 1 0 )
    ( 0 0 0 ),  ( 0 0 0 ),  ( 0 0 0 ),  ( 0 0 0 ),  ( 0 1 b ),  ( 0 0 1 ),  ( 0 0 1 )
(c)
    ( 0 0 )   ( 1 a )   ( 0 1 )   ( 1 0 )
    ( 0 0 ),  ( 0 0 ),  ( 0 0 ),  ( 0 1 )
    ( 0 0 )   ( 0 0 )   ( 0 0 )   ( 0 0 )
(d)
    ( 0 0 0 )   ( 1 a b )   ( 0 1 a )   ( 0 0 1 )   ( 1 0 a )   ( 1 a 0 )   ( 1 0 0 )
    ( 0 0 0 ),  ( 0 0 0 ),  ( 0 0 0 ),  ( 0 0 0 ),  ( 0 1 b ),  ( 0 0 1 ),  ( 0 1 0 )
    ( 0 0 0 )   ( 0 0 0 )   ( 0 0 0 )   ( 0 0 0 )   ( 0 0 0 )   ( 0 0 0 )   ( 0 0 1 )
One.III.1.12 A nonsingular homogeneous linear system has a unique solution. So a nonsingular matrix
must reduce to a (square) matrix that is all 0’s except for 1’s down the upper-left to lower-right diagonal,
e.g.,

    ( 1 0 )        ( 1 0 0 )
    ( 0 1 )   or   ( 0 1 0 ) ,   etc.
                   ( 0 0 1 )
One.III.1.13 (a) The ρi ↔ ρi operation does not change A.
(b) For instance,

    ( 1 2 )   −ρ1+ρ1   ( 0 0 )   ρ1+ρ1   ( 0 0 )
    ( 3 4 )   −→       ( 3 4 )   −→      ( 3 4 )

leaves the matrix changed.
(c) If i ≠ j then

    (       ⋮          )   kρi+ρj   (            ⋮                   )
    ( ai,1 · · · ai,n  )   −→       ( ai,1  · · ·  ai,n              )
    (       ⋮          )            (            ⋮                   )
    ( aj,1 · · · aj,n  )            ( kai,1 + aj,1 · · · kai,n + aj,n )
    (       ⋮          )            (            ⋮                   )

    −kρi+ρj   (                  ⋮                            )
    −→        ( ai,1  · · ·  ai,n                             )
              (                  ⋮                            )
              ( −kai,1 + kai,1 + aj,1 · · · −kai,n + kai,n + aj,n )
              (                  ⋮                            )

does indeed give A back. (Of course, if i = j then the third matrix would have entries of the form
−k(kai,j + ai,j) + kai,j + ai,j.)
One.III.2.13 (a) They have the form

    ( a 0 )
    ( b 0 )

where a, b ∈ R.
(b) They have this form (for a, b ∈ R).

    ( 1a 2a )
    ( 1b 2b )

(c) They have the form

    ( a b )
    ( c d )

(for a, b, c, d ∈ R) where ad − bc ≠ 0. (This is the formula that determines when a 2×2 matrix is
nonsingular.)
One.III.2.14 Infinitely many. For instance, in

    ( 1 k )
    ( 0 0 )

each k ∈ R gives a different class.
One.III.2.15 No. Row operations do not change the size of a matrix.
One.III.2.16 (a) A row operation on a zero matrix has no effect. Thus each zero matrix is alone in
its row equivalence class.
(b) No. Any nonzero entry can be rescaled.
One.III.2.17 Here are two.

    ( 1 1 0 )        ( 1 0 0 )
    ( 0 0 1 )   and  ( 0 0 1 )
One.III.2.18 Any two n×n nonsingular matrices have the same reduced echelon form, namely the
matrix with all 0’s except for 1’s down the diagonal.

    ( 1 0 · · · 0 )
    ( 0 1 · · · 0 )
    (      ⋱      )
    ( 0 0 · · · 1 )

Two 2×2 singular matrices need not be row equivalent.

    ( 1 1 )        ( 1 0 )
    ( 0 0 )   and  ( 0 0 )
One.III.2.19 Since there is one and only one reduced echelon form matrix in each class, we can just
list the possible reduced echelon form matrices.
For that list, see the answer for Exercise 11.
One.III.2.20 (a) If there is a linear relationship where c0 is not zero then we can subtract c0~β0 and
divide both sides by c0 to get ~β0 as a linear combination of the others. (Remark. If there are no
others — if the relationship is, say, ~0 = 3 · ~0 — then the statement is still true because zero is by
definition the sum of the empty set of vectors.)
If ~β0 is a combination of the others ~β0 = c1~β1 + · · · + cn~βn then subtracting ~β0 from both sides
gives a relationship where one of the coefficients is nonzero, specifically, the coefficient is −1.
(b) The first row is not a linear combination of the others for the reason given in the proof: in the
equation of components from the column containing the leading entry of the first row, the only
nonzero entry is the leading entry from the first row, so its coefficient must be zero. Thus, from the
prior part of this question, the first row is in no linear relationship with the other rows. Hence, to
see if the second row can be in a linear relationship with the other rows, we can leave the first row
out of the equation. But now the argument just applied to the first row will apply to the second
row. (Technically, we are arguing by induction here.)
One.III.2.21 (a) As in the base case we will argue that ℓ2 isn’t less than k2 and that it also isn’t
greater. To obtain a contradiction, assume that ℓ2 ≤ k2 (the k2 ≤ ℓ2 case, and the possibility that
either or both is a zero row, are left to the reader). Consider the i = 2 version of the equation
that gives each row of B as a linear combination of the rows of D. Focus on the ℓ1 -th and ℓ2 -th
component equations.
b2,ℓ1 = c2,1 d1,ℓ1 + c2,2 d2,ℓ1 + · · · + c2,m dm,ℓ1 b2,ℓ2 = c2,1 d1,ℓ2 + c2,2 d2,ℓ2 + · · · + c2,m dm,ℓ2
(c) First, ℓr+1 < kr+1 is impossible. In the columns of D to the left of column kr+1 the entries
are all zeroes (as dr+1,kr+1 leads row r + 1) and so if ℓr+1 < kr+1 then the equation of entries
from column ℓr+1 would be br+1,ℓr+1 = sr+1,1 · 0 + · · · + sr+1,m · 0, but br+1,ℓr+1 isn’t zero since it
leads its row. A symmetric argument shows that kr+1 < ℓr+1 also is impossible.
One.III.2.23 The zero rows could have nonzero coefficients, and so the statement would not be true.
One.III.2.24 We know that 4s + c + 10d = 8.45 and that 3s + c + 7d = 6.30, and we’d like to know
what s + c + d is. Fortunately, s + c + d is a linear combination of 4s + c + 10d and 3s + c + 7d. Calling
the unknown price p, we have this reduction.
    ( 4 1 10 | 8.45 )   −(3/4)ρ1+ρ2   ( 4   1    10 | 8.45       )   −3ρ2+ρ3   ( 4   1    10 | 8.45     )
    ( 3 1  7 | 6.30 )   −(1/4)ρ1+ρ3   ( 0 1/4  −1/2 | −0.037 5   )   −→        ( 0 1/4  −1/2 | −0.037 5 )
    ( 1 1  1 | p    )   −→            ( 0 3/4  −3/2 | p − 2.112 5 )            ( 0   0     0 | p − 2.00 )

The price paid is $2.00.
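The linear combination is easy to exhibit directly; this sketch solves for the weights on the two given receipts (the numbers are the ones above):

    import numpy as np

    # weights (x, y) with x*(4s + c + 10d) + y*(3s + c + 7d) = s + c + d
    M = np.array([[4., 3.], [1., 1.], [10., 7.]])   # s, c, d coefficients
    weights, *_ = np.linalg.lstsq(M, np.array([1., 1., 1.]), rcond=None)
    print(weights)                                  # [-2.  3.]
    print(weights @ np.array([8.45, 6.30]))         # 2.00, the price p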
One.III.2.25 If multiplication of a row by zero were allowed then Lemma 2.6 would not hold. That
is, where

    ( 1 3 )   0ρ2   ( 1 3 )
    ( 2 1 )   −→    ( 0 0 )

all the rows of the second matrix can be expressed as linear combinations of the rows of the first, but
the converse does not hold. The second row of the first matrix is not a linear combination of the rows
of the second matrix.
One.III.2.26 (1) An easy answer is this:

    0 = 3.

For a less wise-guy-ish answer, solve the system:

    ( 3 −1 | 8 )   −(2/3)ρ1+ρ2   ( 3  −1 |    8 )
    ( 2  1 | 3 )   −→            ( 0 5/3 | −7/3 )

gives y = −7/5 and x = 11/5. Now any equation not satisfied by (11/5, −7/5) will do, e.g.,
5x + 5y = 3.
(2) Every equation can be derived from an inconsistent system. For instance, here is how to derive
“3x + 2y = 4” from “0 = 5”. First,

    0 = 5   (3/5)ρ1   0 = 3   xρ1   0 = 3x
            −→                −→

(validity of the x = 0 case is separate but clear). Similarly, 0 = 2y. Ditto for 0 = 4. But now,
0 + 0 = 0 gives 3x + 2y = 4.
One.III.2.27 Define linear systems to be equivalent if their augmented matrices are row equivalent.
The proof that equivalent systems have the same solution set is easy.
One.III.2.28 (a) The three possible row swaps are easy, as are the three possible rescalings. One of
the six possible pivots is kρ1 + ρ2:

    (    1        2        3    )
    ( k·1 + 3  k·2 + 0  k·3 + 3 )
    (    1        4        5    )

and again the first and second columns add to the third. The other five pivots are similar.
(b) The obvious conjecture is that row operations do not change linear relationships among columns.
(c) A case-by-case proof follows the sketch given in the first item.
as does Maple [3 − 2 t1 + t2 , t1 , t2 , −2 + 3 t1 − 2 t2 ].
(f ) The solution set is empty and Maple replies to the linsolve(A,u) command with no returned
solutions.
4 In response to this prompting
> A:=array( [[a,c],
[b,d]] );
> u:=array([p,q]);
> linsolve(A,u);
Downloaded by Zhandos Zhandos ([email protected])
lOMoARcPSD|9950715
Maple thought for perhaps twenty seconds and gave this reply.

    [ −(−d p + q c)/(−b c + a d) ,  (−b p + a q)/(−b c + a d) ]
1 Scientific notation is convenient to express the two-place restriction. We have .25 × 10² + .67 × 10⁰ =
.25 × 10². The 2/3 has no apparent effect.
2 The reduction

    −3ρ1+ρ2   x + 2y = 3
    −→           −8y = −7.992

gives a solution of (x, y) = (1.002, 0.999).
3 (a) The fully accurate solution is that x = 10 and y = 0.
(b) The four-digit conclusion is quite different.

    −(.3454/.0003)ρ1+ρ2   ( .0003 1.556 |  1.569 )
    −→                    ( 0      1789 |  −1805 )   =⇒   x = 10460, y = −1.009
4 (a) For the first one, first, (2/3) − (1/3) is .666 666 67 − .333 333 33 = .333 333 34 and so (2/3) +
((2/3) − (1/3)) = .666 666 67 + .333 333 34 = 1.000 000 0. For the other one, first ((2/3) + (2/3)) =
.666 666 67 + .666 666 67 = 1.333 333 3 and so ((2/3) + (2/3)) − (1/3) = 1.333 333 3 − .333 333 33 =
.999 999 97.
(b) The first equation is .333 333 33 · x + 1.000 000 0 · y = 0 while the second is .666 666 67 · x +
2.000 000 0 · y = 0.
5 (a) This calculation

    −(2/3)ρ1+ρ2   ( 3        2             1        |       6 )
    −(1/3)ρ1+ρ3   ( 0  −(4/3) + 2ε   −(2/3) + 2ε    | −2 + 4ε )
    −→            ( 0  −(2/3) + 2ε   −(1/3) − ε     |  −1 + ε )

    −(1/2)ρ2+ρ3   ( 3        2             1        |       6 )
    −→            ( 0  −(4/3) + 2ε   −(2/3) + 2ε    | −2 + 4ε )
                  ( 0        ε           −2ε        |      −ε )

gives a third equation of y − 2z = −1. Substituting into the second equation gives ((−10/3) + 6ε)·z =
(−10/3) + 6ε so z = 1 and thus y = 1. With those, the first equation says that x = 1.
(b) The solution with two digits kept

    ( .30 × 10¹  .20 × 10¹   .10 × 10¹  | .60 × 10¹ )
    ( .20 × 10¹  .20 × 10⁻³  .20 × 10⁻³ | .20 × 10¹ )
    ( .10 × 10¹  .20 × 10⁻³ −.10 × 10⁻³ | .10 × 10¹ )

    −(2/3)ρ1+ρ2   ( .30 × 10¹   .20 × 10¹    .10 × 10¹  |  .60 × 10¹ )
    −(1/3)ρ1+ρ3   ( 0          −.13 × 10¹   −.67 × 10⁰  | −.20 × 10¹ )
    −→            ( 0          −.67 × 10⁰   −.33 × 10⁰  | −.10 × 10¹ )

    −(.67/1.3)ρ2+ρ3   ( .30 × 10¹   .20 × 10¹    .10 × 10¹  |  .60 × 10¹ )
    −→                ( 0          −.13 × 10¹   −.67 × 10⁰  | −.20 × 10¹ )
                      ( 0           0            .15 × 10⁻² |  .31 × 10⁻² )

comes out to be z = 2.1, y = 2.6, and x = −.43.
1 (a) The total resistance is 7 ohms. With a 9 volt potential, the flow will be 9/7 amperes. Inciden-
tally, the voltage drops will then be: 27/7 volts across the 3 ohm resistor, and 18/7 volts across each
of the two 2 ohm resistors.
(b) One way to do this network is to note that the 2 ohm resistor on the left has a voltage drop
across it of 9 volts (and hence the flow through it is 9/2 amperes), and the remaining portion on
the right also has a voltage drop of 9 volts, and so is analyzed as in the prior item.
We can also use linear systems.
−→ −→
i0 i2
i1 ↓
i3
←−
(c) Another analysis like the prior ones gives i2 = 20/r2, i1 = 20/r1, and i0 = 20(r1 + r2)/(r1 r2), all
in amperes. So the parallel portion is acting like a single resistor of size 20/i0 = r1 r2/(r1 + r2) ohms.
(This equation is often stated as: the equivalent resistance r satisfies 1/r = (1/r1) + (1/r2).)
3 (a) The circuit looks like this.

    [figure: the traffic circle, with streets marked west and east]

(b) Because 50 cars leave via Main while 25 cars enter, i1 − 25 = i2. Similarly Pier’s in/out balance
means that i2 = i3 and North gives i3 + 25 = i1. We have this system.
i1 − i2 = 25
i2 − i3 = 0
−i1 + i3 = −25
(c) The row operations ρ1+ρ2 and ρ2+ρ3 lead to the conclusion that there are infinitely many
solutions. With i3 as the parameter,

    {(25 + i3, i3, i3) | i3 ∈ R}

of course, since the problem is stated in number of cars, we might restrict i3 to be a natural number.
(d) If we picture an initially-empty circle with the given input/output behavior, we can superimpose
i3-many cars circling endlessly to get a new solution.
(e) A suitable restatement might be: the number of cars entering the circle must equal the number
of cars leaving. The reasonableness of this one is not as clear. Over the five minute time period it
could easily work out that a half dozen more cars entered than left, although the in/out table
in the problem statement does satisfy the property. In any event, it is of no help in
getting a unique solution since for that we need to know the number of cars circling endlessly.
6 (a) Here is a variable for each unknown block; each known block has the flow shown.

    [figure: the street grid, with unknown flows i1, . . . , i7 and known flows 75, 40, 5, 50, 80, 30, and 70]

We apply Kirchhoff’s principle that the flow into the intersection of Willow and Shelburne must equal
the flow out to get i1 + 25 = i2 + 125. Doing the intersections from right to left and top to bottom
gives these equations.
    i1 − i2           =  10
    −i1      + i3     =  15
    i2 + i4 − i5      =   5
    −i3 − i4 + i6     = −50
    i5           − i7 = −10
    −i6 + i7          =  30

The row operations ρ1+ρ2, then ρ2+ρ3, then ρ3+ρ4 and ρ4+ρ5 and finally ρ5+ρ6 result in
this system.

    i1 − i2           =  10
    −i2 + i3          =  25
    i3 + i4 − i5      =  30
    −i5 + i6          = −20
    i6 − i7           = −30
                    0 =   0
Since the free variables are i4 and i7 we take them as parameters.
i6 = i7 − 30
i5 = i6 + 20 = (i7 − 30) + 20 = i7 − 10
i3 = −i4 + i5 + 30 = −i4 + (i7 − 10) + 30 = −i4 + i7 + 20
i2 = i3 − 25 = (−i4 + i7 + 20) − 25 = −i4 + i7 − 5
i1 = i2 + 10 = (−i4 + i7 − 5) + 10 = −i4 + i7 + 5
Obviously i4 and i7 have to be positive, and in fact the first equation shows that i7 must be at least
30. If we start with i7 , then the i2 equation shows that 0 ≤ i4 ≤ i7 − 5.
(b) We cannot take i7 to be zero or else i6 will be negative (this would mean cars going the wrong
way on the one-way street Jay). We can, however, take i7 to be as small as 30, and then there are
many suitable i4 ’s. For instance, the solution
(i1 , i2 , i3 , i4 , i5 , i6 , i7 ) = (35, 25, 50, 0, 20, 0, 30)
results from choosing i4 = 0.
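The back-substitution above packages up into a small function; this sketch just restates it and reproduces the sample solution:

    def flows(i4, i7):
        """The traffic flows (i1, ..., i7) in terms of the free variables."""
        i6 = i7 - 30
        i5 = i7 - 10
        i3 = -i4 + i7 + 20
        i2 = -i4 + i7 - 5
        i1 = -i4 + i7 + 5
        return (i1, i2, i3, i4, i5, i6, i7)

    print(flows(0, 30))   # (35, 25, 50, 0, 20, 0, 30)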
Two.I.1.17 (a) 0 + 0x + 0x² + 0x³
(b) the 2×4 zero matrix
    ( 0 0 0 0 )
    ( 0 0 0 0 )
(c) The constant function f(x) = 0
(d) The constant function f(n) = 0
Two.I.1.18 (a) 3 + 2x − x²
(b)
    ( −1 +1 )
    (  0 −3 )
(c) −3eˣ + 2e⁻ˣ
Two.I.1.19 Most of the conditions are easy to check; use Example 1.3 as a guide. Here are some
comments.
(a) This is just like Example 1.3; the zero element is 0 + 0x.
(b) The zero element of this space is the 2×2 matrix of zeroes.
(c) The zero element is the vector of zeroes.
(d) Closure of addition involves noting that the sum

    (x1, y1, z1, w1) + (x2, y2, z2, w2) = (x1 + x2, y1 + y2, z1 + z2, w1 + w2)

is in L because (x1 + x2) + (y1 + y2) − (z1 + z2) + (w1 + w2) = (x1 + y1 − z1 + w1) + (x2 + y2 − z2 + w2) = 0 + 0.
Closure of scalar multiplication is similar. Note that the zero element, the vector of zeroes, is in L.
Two.I.1.20 In each item the set is called Q. For some items, there are other correct ways to show that
Q is not a vector space.
(a) It is not closed under addition.

    (1, 0, 0), (0, 1, 0) ∈ Q   but   (1, 1, 0) ∉ Q

(b) It is not closed under addition.

    (1, 0, 0), (0, 1, 0) ∈ Q   but   (1, 1, 0) ∉ Q

(c) It is not closed under addition.

    ( 0 1 )  ( 1 1 )         ( 1 2 )
    ( 0 0 ), ( 0 0 ) ∈ Q     ( 0 0 ) ∉ Q

(d) It is not closed under scalar multiplication.

    1 + 1x + 1x² ∈ Q   but   −1 · (1 + 1x + 1x²) ∉ Q

(e) It is empty.
Two.I.1.21 The usual operations (v0 + v1 i) + (w0 + w1 i) = (v0 + w0 ) + (v1 + w1 )i and r(v0 + v1 i) =
(rv0 ) + (rv1 )i suffice. The check is easy.
Two.I.1.22 No, it is not closed under scalar multiplication since, e.g., π · (1) is not a rational number.
Two.I.1.23 The natural operations are (v1 x + v2 y + v3 z) + (w1 x + w2 y + w3 z) = (v1 + w1 )x + (v2 +
w2 )y + (v3 + w3 )z and r · (v1 x + v2 y + v3 z) = (rv1 )x + (rv2 )y + (rv3 )z. The check that this is a vector
space is easy; use Example 1.3 as a guide.
Two.I.1.24 The ‘+’ operation is not commutative; producing two members of the set witnessing this
assertion is easy.
Downloaded by Zhandos Zhandos ([email protected])
lOMoARcPSD|9950715
Two.I.1.33 (a) Let V be a vector space, assume that ~v ∈ V, and assume that ~w ∈ V is the additive
inverse of ~v so that ~w + ~v = ~0. Because addition is commutative, ~0 = ~w + ~v = ~v + ~w, so therefore ~v
is also the additive inverse of ~w.
(b) Let V be a vector space and suppose ~v, ~s, ~t ∈ V. The additive inverse of ~v is −~v so ~v + ~s = ~v + ~t
gives that −~v + ~v + ~s = −~v + ~v + ~t, which says that ~0 + ~s = ~0 + ~t and so ~s = ~t.
Two.I.1.34 Addition is commutative, so in any vector space, for any vector ~v we have that ~v = ~v + ~0 =
~0 + ~v.
Two.I.1.35 It is not a vector space since addition of two matrices of unequal sizes is not defined, and
thus the set fails to satisfy the closure condition.
Two.I.1.36 Each element of a vector space has one and only one additive inverse.
For, let V be a vector space and suppose that ~v ∈ V. If ~w1, ~w2 ∈ V are both additive inverses of
~v then consider ~w1 + ~v + ~w2. On the one hand, we have that it equals ~w1 + (~v + ~w2) = ~w1 + ~0 = ~w1.
On the other hand we have that it equals (~w1 + ~v) + ~w2 = ~0 + ~w2 = ~w2. Therefore, ~w1 = ~w2.
Two.I.1.37 (a) Every such set has the form {r · ~v + s · ~w | r, s ∈ R} where either or both of ~v, ~w may
be ~0. With the inherited operations, closure of addition (r1~v + s1~w) + (r2~v + s2~w) = (r1 + r2)~v +
(s1 + s2)~w and scalar multiplication c(r~v + s~w) = (cr)~v + (cs)~w are easy. The other conditions are
also routine.
(b) No such set can be a vector space under the inherited operations because it does not have a zero
element.
Two.I.1.38 Assume that ~v ∈ V is not ~0.
(a) One direction of the if and only if is clear: if r = 0 then r · ~v = ~0. For the other way, let r be a
nonzero scalar. If r~v = ~0 then (1/r) · r~v = (1/r) · ~0 shows that ~v = ~0, contrary to the assumption.
(b) Where r1, r2 are scalars, r1~v = r2~v holds if and only if (r1 − r2)~v = ~0. By the prior item, then
r1 − r2 = 0.
(c) A nontrivial space has a vector ~v ≠ ~0. Consider the set {k · ~v | k ∈ R}. By the prior item this
set is infinite.
(d) The solution set is either trivial, or nontrivial. In the second case, it is infinite.
Two.I.1.39 Yes. A theorem of first semester calculus says that a sum of differentiable functions is
differentiable and that (f +g)′ = f ′ +g ′ , and that a multiple of a differentiable function is differentiable
and that (r · f )′ = r f ′ .
Two.I.1.40 The check is routine. Note that ‘1’ is 1 + 0i and the zero elements are these.
(a) (0 + 0i) + (0 + 0i)x + (0 + 0i)x²
(b)
    ( 0 + 0i  0 + 0i )
    ( 0 + 0i  0 + 0i )
Two.I.1.41 Notably absent from the definition of a vector space is a distance measure.
Two.I.1.42 (a) A small rearrangement does the trick.
(~v1 + (~v2 + ~v3 )) + ~v4 = ((~v1 + ~v2 ) + ~v3 ) + ~v4
= (~v1 + ~v2 ) + (~v3 + ~v4 )
= ~v1 + (~v2 + (~v3 + ~v4 ))
= ~v1 + ((~v2 + ~v3 ) + ~v4 )
Each equality above follows from the associativity of three vectors that is given as a condition in
the definition of a vector space. For instance, the second ‘=’ applies the rule (~w1 + ~w2) + ~w3 =
~w1 + (~w2 + ~w3) by taking ~w1 to be ~v1 + ~v2, taking ~w2 to be ~v3, and taking ~w3 to be ~v4.
~ 1 to be ~v1 + ~v2 , taking w ~ 2 to be ~v3 , and taking w~ 3 to be ~v4 .
(b) The base case for induction is the three vector case. This case ~v1 + (~v2 + ~v3 ) = (~v1 + ~v2 ) + ~v3 is
required of any triple of vectors by the definition of a vector space.
For the inductive step, assume that any two sums of three vectors, any two sums of four vectors,
. . . , any two sums of k vectors are equal no matter how the sums are parenthesized. We will show
that any sum of k + 1 vectors equals this one ((· · · ((~v1 + ~v2 ) + ~v3 ) + · · · ) + ~vk ) + ~vk+1 .
Any parenthesized sum has an outermost ‘+’. Assume that it lies between ~vm and ~vm+1 so the
sum looks like this.
(· · · ~v1 · · · ~vm · · · ) + (· · · ~vm+1 · · · ~vk+1 · · · )
The second half involves fewer than k + 1 additions, so by the inductive hypothesis we can re-
parenthesize it so that it reads left to right from the inside out, and in particular, so that its
Two.I.2.20 By Lemma 2.9, to see if each subset of M2×2 is a subspace, we need only check if it is
nonempty and closed.
(a) Yes, it is easily checked to be nonempty and closed. This is a parametrization.

    { a ( 1 0 ) + b ( 0 0 ) | a, b ∈ R }
        ( 0 0 )     ( 0 1 )

By the way, the parametrization also shows that it is a subspace: it is given as the span of the
two-matrix set, and any span is a subspace.
(b) Yes; it is easily checked to be nonempty and closed. Alternatively, as mentioned in the prior
answer, the existence of a parametrization shows that it is a subspace. For the parametrization, the
condition a + b = 0 can be rewritten as a = −b. Then we have this.

    { ( −b 0 ) | b ∈ R }  =  { b ( −1 0 ) | b ∈ R }
      (  0 b )                   (  0 1 )

(c) No. It is not closed under addition. For instance,

    ( 5 0 )   ( 5 0 )   ( 10 0 )
    ( 0 0 ) + ( 0 0 ) = (  0 0 )

is not in the set. (This set is also not closed under scalar multiplication; for instance, it does not
contain the zero matrix.)
(d) Yes.

    { b ( −1 0 ) + c ( 0 1 ) | b, c ∈ R }
        (  0 1 )     ( 0 0 )
Two.I.2.21 No, it is not closed. In particular, it is not closed under scalar multiplication because it
does not contain the zero polynomial.
Two.I.2.22 (a) Yes, solving the linear system arising from

    r1 (1, 0, 0) + r2 (0, 0, 1) = (2, 0, 1)

gives r1 = 2 and r2 = 1.
(b) Yes; the linear system arising from r1(x²) + r2(2x + x²) + r3(x + x³) = x − x³

    2r2 + r3 =  1
    r1 + r2  =  0
          r3 = −1

gives that −1(x²) + 1(2x + x²) − 1(x + x³) = x − x³.
(c) No; any combination of the two given matrices has a zero in the upper right.
Two.I.2.23 (a) Yes; it is in that span since 1 · cos²x + 1 · sin²x = f(x).
(b) No, since r1 cos²x + r2 sin²x = 3 + x² has no scalar solutions that work for all x. For instance,
setting x to be 0 and π gives the two equations r1 · 1 + r2 · 0 = 3 and r1 · 1 + r2 · 0 = 3 + π², which
are not consistent with each other.
(c) No; consider what happens on setting x to be π/2 and 3π/2.
(d) Yes, cos(2x) = 1 · cos²(x) − 1 · sin²(x).
Two.I.2.24 (a) Yes, for any x, y, z ∈ R this equation

    r1 (1, 0, 0) + r2 (0, 2, 0) + r3 (0, 0, 3) = (x, y, z)

has the solution r1 = x, r2 = y/2, and r3 = z/3.
(b) Yes, the equation

    r1 (2, 0, 1) + r2 (1, 1, 0) + r3 (0, 0, 1) = (x, y, z)

gives rise to this

    2r1 + r2      = x   −(1/2)ρ1+ρ3   2r1 + r2 = x
          r2      = y   (1/2)ρ2+ρ3          r2 = y
    r1       + r3 = z   −→                  r3 = −(1/2)x + (1/2)y + z

so that, given any x, y, and z, we can compute that r3 = (−1/2)x + (1/2)y + z, r2 = y, and
r1 = (1/2)x − (1/2)y.
(c) No. In particular, the vector

    (0, 0, 1)

cannot be gotten as a linear combination since the two given vectors both have a third component
of zero.
(d) Yes. The equation

    r1 (1, 0, 1) + r2 (3, 1, 0) + r3 (−1, 0, 0) + r4 (2, 1, 5) = (x, y, z)

leads to this reduction.

    ( 1 3 −1 2 | x )   −ρ1+ρ3   ( 1  3 −1 2 |      x )   3ρ2+ρ3   ( 1 3 −1 2 |           x )
    ( 0 1  0 1 | y )   −→       ( 0  1  0 1 |      y )   −→       ( 0 1  0 1 |           y )
    ( 1 0  0 5 | z )            ( 0 −3  1 3 | −x + z )            ( 0 0  1 6 | −x + 3y + z )

We have infinitely many solutions. We can, for example, set r4 to be zero and solve for r3, r2, and
r1 in terms of x, y, and z by the usual methods of back-substitution.
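Span membership questions like (b) reduce to solving a linear system, which can be done numerically; a sketch (the target vector is arbitrary):

    import numpy as np

    V = np.array([[2., 1., 0.],      # columns are the three given vectors
                  [0., 1., 0.],
                  [1., 0., 1.]])
    xyz = np.array([1., 2., 3.])
    r = np.linalg.solve(V, xyz)
    print(r)                         # [-0.5  2.   3.5]
    print(np.allclose(V @ r, xyz))   # True: (1, 2, 3) is in the span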
(d) Parametrize the description as {−a1 + a1 x + a3 x² + a3 x³ | a1, a3 ∈ R} to get {−1 + x, x² + x³}.
(e) {1, x, x², x³, x⁴}
(f)
    { ( 1 0 )  ( 0 1 )  ( 0 0 )  ( 0 0 ) }
    { ( 0 0 ), ( 0 0 ), ( 1 0 ), ( 0 1 ) }
Two.I.2.27 Technically, no. Subspaces of R³ are sets of three-tall vectors, while R² is a set of two-tall
vectors. Clearly though, R² is “just like” this subspace of R³.

    {(x, y, 0) | x, y ∈ R}
Two.I.2.28 Of course, the addition and scalar multiplication operations are the ones inherited from
the enclosing space.
(a) This is a subspace. It is not empty as it contains at least the two example functions given. It is
closed because if f1, f2 are even and c1, c2 are scalars then we have this.

    (c1 f1 + c2 f2)(−x) = c1 f1(−x) + c2 f2(−x) = c1 f1(x) + c2 f2(x) = (c1 f1 + c2 f2)(x)
(b) This is also a subspace; the check is similar to the prior one.
Two.I.2.29 It can be improper. If ~v = ~0 then this is a trivial subspace. At the opposite extreme, if
the vector space is R1 and ~v 6= ~0 then the subspace is all of R1 .
Two.I.2.30 No, such a set is not closed. For one thing, it does not contain the zero vector.
Two.I.2.31 No. The only subspaces of R¹ are the space itself and its trivial subspace. Any subspace S
of R¹ that contains a nonzero member ~v must contain the set of all of its scalar multiples {r · ~v | r ∈ R}.
But this set is all of R¹.
Two.I.2.32 Item (1) is checked in the text.
Item (2) has five conditions. First, for closure, if c ∈ R and ~s ∈ S then c · ~s ∈ S as c · ~s = c · ~s + 0 · ~0.
Second, because the operations in S are inherited from V , for c, d ∈ R and ~s ∈ S, the scalar product
(c + d) · ~s in S equals the product (c + d) · ~s in V , and that equals c · ~s + d · ~s in V , which equals
c · ~s + d · ~s in S.
The checks for the third, fourth, and fifth conditions are similar to the second condition’s check
just given.
Two.I.2.33 An exercise in the prior subsection shows that every vector space has only one zero vector
(that is, there is only one vector that is the additive identity element of the space). But a trivial space
has only one element and that element must be this (unique) zero vector.
Two.I.2.34 As the hint suggests, the basic reason is the Linear Combination Lemma from the first
chapter. For the full proof, we will show mutual containment between the two sets.
The first containment [[S]] ⊇ [S] is an instance of the more general, and obvious, fact that for any
subset T of a vector space, [T ] ⊇ T .
For the other containment, that [[S]] ⊆ [S], take m vectors from [S], namely c1,1~s1,1 + · · · + c1,n1~s1,n1,
. . . , cm,1~sm,1 + · · · + cm,nm~sm,nm, and note that any linear combination of those

    r1 (c1,1~s1,1 + · · · + c1,n1~s1,n1) + · · · + rm (cm,1~sm,1 + · · · + cm,nm~sm,nm)

is a linear combination of elements of S

    = (r1 c1,1)~s1,1 + · · · + (r1 c1,n1)~s1,n1 + · · · + (rm cm,1)~sm,1 + · · · + (rm cm,nm)~sm,nm

and so is in [S]. That is, simply recall that a linear combination of linear combinations (of members
of S) is a linear combination (again of members of S).
Two.I.2.35 (a) It is not a subspace because these are not the inherited operations. For one thing,
in this space,

    0 · (x, y, z) = (1, 0, 0)

while this does not, of course, hold in R³.
(b) We can combine the argument showing closure under addition with the argument showing closure
under scalar multiplication into one single argument showing closure under linear combinations of
two vectors. If r1 , r2 , x1 , x2 , y1 , y2 , z1 , z2 are in R then
x1 x2 r1 x1 − r1 + 1 r2 x2 − r2 + 1 r1 x1 − r1 + r2 x2 − r2 + 1
r1 y1 + r2 y2 = r1 y1 + r2 y2 = r1 y1 + r2 y2
z1 z2 r1 z1 r2 z2 r1 z1 + r2 z2
(note that the first component of the last vector does not say ‘+2’ because addition of vectors in this
space has the first components combine in this way: (r1 x1 − r1 + 1) + (r2 x2 − r2 + 1) − 1). Adding the
three components of the last vector gives r1 (x1 −1+y1 +z1 )+r2 (x2 −1+y2 +z2 )+1 = r1 ·0+r2 ·0+1 = 1.
Most of the other checks of the conditions are easy (although the oddness of the operations keeps
them from being routine). Commutativity of addition goes like this.
\begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 - 1 \\ y_1 + y_2 \\ z_1 + z_2 \end{pmatrix} = \begin{pmatrix} x_2 + x_1 - 1 \\ y_2 + y_1 \\ z_2 + z_1 \end{pmatrix} = \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} + \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}
(such that x1 + y1 + z1 = 1 and x2 + y2 + z2 = 1) move them back by 1 to place them in P and add
as usual,
\begin{pmatrix} x_1 - 1 \\ y_1 \\ z_1 \end{pmatrix} + \begin{pmatrix} x_2 - 1 \\ y_2 \\ z_2 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 - 2 \\ y_1 + y_2 \\ z_1 + z_2 \end{pmatrix} \quad (\text{in } P)
and then move the result back out by 1 along the x-axis.
\begin{pmatrix} x_1 + x_2 - 1 \\ y_1 + y_2 \\ z_1 + z_2 \end{pmatrix}
Scalar multiplication is similar.
(c) For the subspace to be closed under the inherited scalar multiplication, where ~v is a member of
that subspace,
0 \cdot \vec v = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
must also be a member.
The converse does not hold. Here is a subset of R3 that contains the origin
\{ \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \}
(this subset has only two elements) but is not a subspace.
Two.I.2.36 (a) (~v1 + ~v2 + ~v3 ) − (~v1 + ~v2 ) = ~v3
(b) (~v1 + ~v2 ) − (~v1 ) = ~v2
(c) Surely, ~v1 .
(d) Taking the one-long sum and subtracting gives (~v1 ) − ~v1 = ~0.
Two.I.2.37 Yes; any space is a subspace of itself, so each space contains the other.
Two.I.2.38 (a) The union of the x-axis and the y-axis in R2 is one.
(b) The set of integers, as a subset of R1 , is one.
(c) The subset {~v } of R2 is one, where ~v is any nonzero vector.
Two.I.2.39 Because vector space addition is commutative, a reordering of summands leaves a linear
combination unchanged.
Two.I.2.40 We always consider that span in the context of an enclosing space.
Two.I.2.41 It is both ‘if’ and ‘only if’.
For ‘if’, let S be a subset of a vector space V and assume ~v ∈ S satisfies ~v = c1~s1 + · · · + cn~sn
where c1 , . . . , cn are scalars and ~s1 , . . . , ~sn ∈ S. We must show that [S ∪ {~v }] = [S].
Containment one way, [S] ⊆ [S ∪ {~v }] is obvious. For the other direction, [S ∪ {~v }] ⊆ [S], note
that if a vector is in the set on the left then it has the form d0~v + d1~t1 + · · · + dm~tm where the d’s are
scalars and the ~t ’s are in S. Rewrite that as d0 (c1~s1 + · · · + cn~sn ) + d1~t1 + · · · + dm~tm and note that
the result is a member of the span of S.
The ‘only if’ is clearly true — adding ~v enlarges the span to include at least ~v .
Two.I.2.42 (a) Always.
Assume that A, B are subspaces of V . Note that their intersection is not empty as both contain
the zero vector. If ~v , w~ ∈ A ∩ B and r, s are scalars then r~v + sw~ ∈ A because each vector is in A and so a linear combination is in A, and r~v + sw~ ∈ B for the same reason. Thus the intersection is closed. Now Lemma 2.9 applies.
(b) Sometimes (more precisely, only if A ⊆ B or B ⊆ A).
To see the answer is not ‘always’, take V to be R3 , take A to be the x-axis, and B to be the
y-axis. Note that
\begin{pmatrix} 1 \\ 0 \end{pmatrix} \in A \text{ and } \begin{pmatrix} 0 \\ 1 \end{pmatrix} \in B \text{ but } \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \notin A \cup B
as the sum is in neither A nor B.
The answer is not ‘never’ because if A ⊆ B or B ⊆ A then clearly A ∪ B is a subspace.
To show that A ∪ B is a subspace only if one subspace contains the other, we assume that A ⊄ B and B ⊄ A and prove that the union is not a subspace. The assumption that A is not a subset
Two.II.1.18 For each of these, when the subset is independent it must be proved, and when the subset
is dependent an example of a dependence must be given.
(a) It is dependent. Considering
c_1 \begin{pmatrix} 1 \\ -3 \\ 5 \end{pmatrix} + c_2 \begin{pmatrix} 2 \\ 2 \\ 4 \end{pmatrix} + c_3 \begin{pmatrix} 4 \\ -4 \\ 14 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
gives rise to this linear system.
c1 + 2c2 + 4c3 = 0
−3c1 + 2c2 − 4c3 = 0
5c1 + 4c2 + 14c3 = 0
Gauss’ method
\begin{pmatrix} 1 & 2 & 4 & 0 \\ -3 & 2 & -4 & 0 \\ 5 & 4 & 14 & 0 \end{pmatrix} \xrightarrow{\substack{3\rho_1+\rho_2 \\ -5\rho_1+\rho_3}} \begin{pmatrix} 1 & 2 & 4 & 0 \\ 0 & 8 & 8 & 0 \\ 0 & -6 & -6 & 0 \end{pmatrix} \xrightarrow{(3/4)\rho_2+\rho_3} \begin{pmatrix} 1 & 2 & 4 & 0 \\ 0 & 8 & 8 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
yields a free variable, so there are infinitely many solutions. For an example of a particular depen-
dence we can set c3 to be, say, 1. Then we get c2 = −1 and c1 = −2.
(b) It is dependent. The linear system that arises here
\begin{pmatrix} 1 & 2 & 3 & 0 \\ 7 & 7 & 7 & 0 \\ 7 & 7 & 7 & 0 \end{pmatrix} \xrightarrow{\substack{-7\rho_1+\rho_2 \\ -7\rho_1+\rho_3}} \xrightarrow{-\rho_2+\rho_3} \begin{pmatrix} 1 & 2 & 3 & 0 \\ 0 & -7 & -14 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
has infinitely many solutions. We can get a particular solution by taking c3 to be, say, 1, and
back-substituting to get the resulting c2 and c1 .
(c) It is linearly independent. The system
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ -1 & 4 & 0 \end{pmatrix} \xrightarrow{\rho_1\leftrightarrow\rho_3} \xrightarrow{\rho_2\leftrightarrow\rho_3} \begin{pmatrix} -1 & 4 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
has only the solution c1 = 0 and c2 = 0. (We could also have gotten the answer by inspection — the
second vector is obviously not a multiple of the first, and vice versa.)
(d) It is linearly dependent. The linear system
\begin{pmatrix} 9 & 2 & 3 & 12 & 0 \\ 9 & 0 & 5 & 12 & 0 \\ 0 & 1 & -4 & -1 & 0 \end{pmatrix}
has more unknowns than equations, and so Gauss’ method must end with at least one variable free
(there can’t be a contradictory equation because the system is homogeneous, and so has at least the
solution of all zeroes). To exhibit a combination, we can do the reduction
\begin{pmatrix} 9 & 2 & 3 & 12 & 0 \\ 9 & 0 & 5 & 12 & 0 \\ 0 & 1 & -4 & -1 & 0 \end{pmatrix} \xrightarrow{-\rho_1+\rho_2} \xrightarrow{(1/2)\rho_2+\rho_3} \begin{pmatrix} 9 & 2 & 3 & 12 & 0 \\ 0 & -2 & 2 & 0 & 0 \\ 0 & 0 & -3 & -1 & 0 \end{pmatrix}
and take, say, c4 = 1. Then we have that c3 = −1/3, c2 = −1/3, and c1 = −31/27.
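As a cross-check, a CAS finds the same dependence as in part (a); this is a hypothetical Python/sympy sketch (the library choice is our assumption, not the text's).
    # Sketch: rank < 3 shows the three vectors of part (a) are dependent,
    # and the nullspace exhibits particular coefficients.
    from sympy import Matrix
    A = Matrix([[1, 2, 4], [-3, 2, -4], [5, 4, 14]])  # vectors as columns
    print(A.rank())        # prints 2, so the columns are dependent
    print(A.nullspace())   # basis vector (-2, -1, 1): c1 = -2, c2 = -1, c3 = 1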
Two.II.1.19 In the cases of independence, that must be proved. Otherwise, a specific dependence must
be produced. (Of course, dependences other than the ones exhibited here are possible.)
(a) This set is independent. Setting up the relation c1 (3 − x + 9x2 ) + c2 (5 − 6x + 3x2 ) + c3 (1 + 1x − 5x2 ) = 0 + 0x + 0x2 gives a linear system
\begin{pmatrix} 3 & 5 & 1 & 0 \\ -1 & -6 & 1 & 0 \\ 9 & 3 & -5 & 0 \end{pmatrix} \xrightarrow{\substack{(1/3)\rho_1+\rho_2 \\ -3\rho_1+\rho_3}} \xrightarrow{3\rho_2} \xrightarrow{-(12/13)\rho_2+\rho_3} \begin{pmatrix} 3 & 5 & 1 & 0 \\ 0 & -13 & 4 & 0 \\ 0 & 0 & -152/13 & 0 \end{pmatrix}
with only one solution: c1 = 0, c2 = 0, and c3 = 0.
(b) This set is independent. We can see this by inspection, straight from the definition of linear
independence. Obviously neither is a multiple of the other.
Two.II.1.25 (a) Assume that the set {~u, ~v , w~ } is linearly independent, so that any relationship d0 ~u + d1~v + d2 w~ = ~0 leads to the conclusion that d0 = 0, d1 = 0, and d2 = 0.
Consider the relationship c1 (~u) + c2 (~u + ~v ) + c3 (~u + ~v + w~ ) = ~0. Rewrite it to get (c1 + c2 + c3 )~u + (c2 + c3 )~v + (c3 )w~ = ~0. Taking d0 to be c1 + c2 + c3 , taking d1 to be c2 + c3 , and taking d2 to be c3 we have this system.
c1 + c2 + c3 = 0
c2 + c3 = 0
c3 = 0
Conclusion: the c’s are all zero, and so the set is linearly independent.
(b) The second set is dependent.
1 · (~u − ~v ) + 1 · (~v − w~ ) + 1 · (w~ − ~u) = ~0
Beyond that, the two statements are unrelated in the sense that any of the first set could be either
independent or dependent. For instance, in R3 , we can have that the first is independent while the
second is not
\{\vec u, \vec v, \vec w\} = \{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \} \qquad \{\vec u - \vec v, \vec v - \vec w, \vec w - \vec u\} = \{ \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \}
or that both are dependent.
\{\vec u, \vec v, \vec w\} = \{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \} \qquad \{\vec u - \vec v, \vec v - \vec w, \vec w - \vec u\} = \{ \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \}
(The first set is dependent since w~ = ~u + ~v , and the second is dependent as in part (b).)
Two.II.1.26 (a) A singleton set {~v } is linearly independent if and only if ~v ≠ ~0. For the ‘if’ direction, with ~v ≠ ~0, we can apply Lemma 1.4 by considering the relationship c · ~v = ~0 and noting that the only solution is the trivial one: c = 0. For the ‘only if’ direction, just recall that Example 1.11 shows that {~0} is linearly dependent, and so if the set {~v } is linearly independent then ~v ≠ ~0.
(Remark. Another answer is to say that this is the special case of Lemma 1.16 where S = ∅.)
(b) A set with two elements is linearly independent if and only if neither member is a multiple of the
other (note that if one is the zero vector then it is a multiple of the other, so this case is covered).
This is an equivalent statement: a set is linearly dependent if and only if one element is a multiple
of the other.
The proof is easy. A set {~v1 , ~v2 } is linearly dependent if and only if there is a relationship
c1~v1 + c2~v2 = ~0 with either c1 6= 0 or c2 6= 0 (or both). That holds if and only if ~v1 = (−c2 /c1 )~v2 or
~v2 = (−c1 /c2 )~v1 (or both).
Two.II.1.27 This set is linearly dependent because it contains the zero vector.
Two.II.1.28 The ‘if’ half is given by Lemma 1.14. The converse (the ‘only if’ statement) does not
hold. An example is to consider the vector space R2 and these vectors.
\vec x = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \vec y = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \vec z = \begin{pmatrix} 1 \\ 1 \end{pmatrix}
Two.II.1.29 (a) The linear system arising from
c_1 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
has the unique solution c1 = 0 and c2 = 0.
(b) The linear system arising from
c_1 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \\ 0 \end{pmatrix}
has the unique solution c1 = 8/3 and c2 = −1/3.
(c) Suppose that S is linearly independent. Suppose that we have both ~v = c1~s1 + · · · + cn~sn and
~v = d1~t1 + · · · + dm~tm (where the vectors are members of S). Now,
c1~s1 + · · · + cn~sn = ~v = d1~t1 + · · · + dm~tm
can be rewritten in this way.
c1~s1 + · · · + cn~sn − d1~t1 − · · · − dm~tm = ~0
To see that no set with five or more vectors can be independent, set up
c_1 \begin{pmatrix} a_{1,1} \\ a_{2,1} \\ a_{3,1} \\ a_{4,1} \end{pmatrix} + c_2 \begin{pmatrix} a_{1,2} \\ a_{2,2} \\ a_{3,2} \\ a_{4,2} \end{pmatrix} + c_3 \begin{pmatrix} a_{1,3} \\ a_{2,3} \\ a_{3,3} \\ a_{4,3} \end{pmatrix} + c_4 \begin{pmatrix} a_{1,4} \\ a_{2,4} \\ a_{3,4} \\ a_{4,4} \end{pmatrix} + c_5 \begin{pmatrix} a_{1,5} \\ a_{2,5} \\ a_{3,5} \\ a_{4,5} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}
(c) It is linearly dependent if and only if either vector is a multiple of the other. That is, it is not independent iff
\begin{pmatrix} a \\ d \\ g \end{pmatrix} = r \cdot \begin{pmatrix} b \\ e \\ h \end{pmatrix} \quad\text{or}\quad \begin{pmatrix} b \\ e \\ h \end{pmatrix} = s \cdot \begin{pmatrix} a \\ d \\ g \end{pmatrix}
(or both) for some scalars r and s. Eliminating r and s in order to restate this condition only in terms of the given letters a, b, d, e, g, h, we have that it is not independent, that is, it is dependent, iff ae − bd = 0, ah − gb = 0, and dh − ge = 0.
(d) Dependence or independence is a function of the indices, so there is indeed a formula (although
at first glance a person might think the formula involves cases: “if the first component of the first
vector is zero then . . . ”, this guess turns out not to be correct).
Two.II.1.39 Recall that two vectors from Rn are perpendicular if and only if their dot product is
zero.
(a) Assume that ~v and w~ are perpendicular nonzero vectors in Rn , with n > 1. With the linear relationship c~v + dw~ = ~0, take the dot product of ~v with both sides to conclude that c · k~v k2 + d · 0 = 0. Because ~v ≠ ~0 we have that c = 0. A similar application of w~ shows that d = 0.
(b) Two vectors in R1 are perpendicular if and only if at least one of them is zero.
We define R0 to be a trivial space, and so both ~v and w ~ are the zero vector.
(c) The right generalization is to look at a set {~v1 , . . . , ~vn } ⊆ Rk of vectors that are mutually
orthogonal (also called pairwise perpendicular ): if i ≠ j then ~vi is perpendicular to ~vj . Mimicking the proof of the first item above shows that such a set of nonzero vectors is linearly independent.
Two.II.1.40 (a) This check is routine.
(b) The summation is infinite (has infinitely many summands). The definition of linear combination
involves only finite sums.
(c) No nontrivial finite sum of members of {g, f0 , f1 , . . .} adds to the zero object: assume that
c0 · (1/(1 − x)) + c1 · 1 + · · · + cn · xn = 0
(any finite sum uses a highest power, here n). Multiply both sides by 1 − x to get the polynomial equation c0 + c1 · (1 − x) + · · · + cn · xn (1 − x) = 0. A polynomial describes the zero function only when it is the zero polynomial, so every coefficient is zero and hence each ci is zero.
Two.II.1.41 It is both ‘if’ and ‘only if’.
Let T be a subset of the subspace S of the vector space V . The assertion that any linear relationship
c1~t1 + · · · + cn~tn = ~0 among members of T must be the trivial relationship c1 = 0, . . . , cn = 0 is a
statement that holds in S if and only if it holds in V , because the subspace S inherits its addition and
scalar multiplication operations from V .
Two.III.1.16 By Theorem 1.12, each is a basis if and only if each vector in the space can be given in
a unique way as a linear combination of the given vectors.
(a) Yes this is a basis. The relation
c_1 \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + c_2 \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} + c_3 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix}
gives
\begin{pmatrix} 1 & 3 & 0 & x \\ 2 & 2 & 0 & y \\ 3 & 1 & 1 & z \end{pmatrix} \xrightarrow{\substack{-2\rho_1+\rho_2 \\ -3\rho_1+\rho_3}} \xrightarrow{-2\rho_2+\rho_3} \begin{pmatrix} 1 & 3 & 0 & x \\ 0 & -4 & 0 & -2x+y \\ 0 & 0 & 1 & x-2y+z \end{pmatrix}
which has the unique solution c3 = x − 2y + z, c2 = x/2 − y/4, and c1 = −x/2 + 3y/4.
(b) This is not a basis. Setting it up as in the prior item
c_1 \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + c_2 \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix}
(c) Rep_{E_4}\left( \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix} \right) = \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}_{E_4}
Two.III.1.18 A natural basis is h1, x, x2 i. There are bases for P2 that do not contain any polynomials
of degree one or degree zero. One is h1 + x + x2 , x + x2 , x2 i. (Every basis has at least one polynomial
of degree two, though.)
Two.III.1.19 The reduction
\begin{pmatrix} 1 & -4 & 3 & -1 & 0 \\ 2 & -8 & 6 & -2 & 0 \end{pmatrix} \xrightarrow{-2\rho_1+\rho_2} \begin{pmatrix} 1 & -4 & 3 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}
gives that the only condition is that x1 = 4x2 − 3x3 + x4 . The solution set is
\{ \begin{pmatrix} 4x_2-3x_3+x_4 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} \mid x_2, x_3, x_4 \in \mathbb{R} \} = \{ x_2 \begin{pmatrix} 4 \\ 1 \\ 0 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} -3 \\ 0 \\ 1 \\ 0 \end{pmatrix} + x_4 \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} \mid x_2, x_3, x_4 \in \mathbb{R} \}
and so the obvious candidate for the basis is this.
\langle \begin{pmatrix} 4 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -3 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} \rangle
We’ve shown that this spans the space, and showing it is also linearly independent is routine.
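A hypothetical sympy sketch (our library choice, not the text's) recovers the same basis as the nullspace of the coefficient matrix.
    from sympy import Matrix
    A = Matrix([[1, -4, 3, -1], [2, -8, 6, -2]])
    for v in A.nullspace():
        print(v.T)   # (4, 1, 0, 0), (-3, 0, 1, 0), (1, 0, 0, 1)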
Two.III.1.20 There are many bases. This is a natural one.
\langle \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \rangle
Two.III.1.21 For each item, many answers are possible.
(a) One way to proceed is to parametrize by expressing the a2 as a combination of the other two
a2 = 2a1 + a0 . Then a2 x2 + a1 x + a0 is (2a1 + a0 )x2 + a1 x + a0 and
\{ (2a_1 + a_0)x^2 + a_1 x + a_0 \mid a_1, a_0 \in \mathbb{R} \} = \{ a_1 \cdot (2x^2 + x) + a_0 \cdot (x^2 + 1) \mid a_1, a_0 \in \mathbb{R} \}
suggests \langle 2x^2 + x, x^2 + 1 \rangle. This only shows that it spans, but checking that it is linearly independent is routine.
(b) Parametrize \{ \begin{pmatrix} a & b & c \end{pmatrix} \mid a + b = 0 \} to get \{ \begin{pmatrix} -b & b & c \end{pmatrix} \mid b, c \in \mathbb{R} \}, which suggests using the sequence \langle \begin{pmatrix} -1 & 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 \end{pmatrix} \rangle. We’ve shown that it spans, and checking that it is linearly independent is easy.
(c) Rewriting
\{ \begin{pmatrix} a & b \\ 0 & 2b \end{pmatrix} \mid a, b \in \mathbb{R} \} = \{ a \cdot \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b \cdot \begin{pmatrix} 0 & 1 \\ 0 & 2 \end{pmatrix} \mid a, b \in \mathbb{R} \}
suggests this for the basis.
\langle \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 2 \end{pmatrix} \rangle
Two.III.1.22 We will show that the second is a basis; the first is similar. We will show this straight
from the definition of a basis, because this example appears before Theorem 1.12.
To see that it is linearly independent, we set up c1 · (cos θ − sin θ) + c2 · (2 cos θ + 3 sin θ) = 0 cos θ + 0 sin θ. Taking θ = 0 and θ = π/2 gives this system
\begin{aligned} c_1 \cdot 1 + c_2 \cdot 2 &= 0 \\ c_1 \cdot (-1) + c_2 \cdot 3 &= 0 \end{aligned} \;\xrightarrow{\rho_1+\rho_2}\; \begin{aligned} c_1 + 2c_2 &= 0 \\ 5c_2 &= 0 \end{aligned}
which shows that c1 = 0 and c2 = 0.
The calculation for span is also easy; for any x, y ∈ R, we have that c1 · (cos θ − sin θ) + c2 · (2 cos θ + 3 sin θ) = x cos θ + y sin θ gives that c2 = x/5 + y/5 and that c1 = 3x/5 − 2y/5, and so the span is the entire space.
Two.III.1.23 (a) Asking which a0 + a1 x + a2 x2 can be expressed as c1 · (1 + x) + c2 · (1 + 2x) gives
rise to three linear equations, describing the coefficients of x2 , x, and the constants.
c1 + c2 = a0
c1 + 2c2 = a1
0 = a2
Gauss’ method with back-substitution shows, provided that a2 = 0, that c2 = −a0 + a1 and c1 = 2a0 − a1 . Thus, with a2 = 0, we can compute appropriate c1 and c2 for any a0 and a1 . So the span is the entire set of linear polynomials \{ a_0 + a_1 x \mid a_0, a_1 \in \mathbb{R} \}. Parametrizing that set \{ a_0 \cdot 1 + a_1 \cdot x \mid a_0, a_1 \in \mathbb{R} \} suggests a basis h1, xi (we’ve shown that it spans; checking linear independence is easy).
(b) With
a0 + a1 x + a2 x2 = c1 · (2 − 2x) + c2 · (3 + 4x2 ) = (2c1 + 3c2 ) + (−2c1 )x + (4c2 )x2
we get this system.
\begin{aligned} 2c_1 + 3c_2 &= a_0 \\ -2c_1 &= a_1 \\ 4c_2 &= a_2 \end{aligned} \;\xrightarrow{\rho_1+\rho_2}\; \xrightarrow{(-4/3)\rho_2+\rho_3}\; \begin{aligned} 2c_1 + 3c_2 &= a_0 \\ 3c_2 &= a_0 + a_1 \\ 0 &= (-4/3)a_0 - (4/3)a_1 + a_2 \end{aligned}
Thus, the only quadratic polynomials a0 + a1 x + a2 x2 with associated c’s are the ones such that 0 = (−4/3)a0 − (4/3)a1 + a2 . Hence the span is \{ (-a_1 + (3/4)a_2) + a_1 x + a_2 x^2 \mid a_1, a_2 \in \mathbb{R} \}. Parametrizing gives \{ a_1 \cdot (-1 + x) + a_2 \cdot ((3/4) + x^2) \mid a_1, a_2 \in \mathbb{R} \}, which suggests \langle -1 + x, (3/4) + x^2 \rangle.
Two.III.1.31 Here is a subset of R2 that is not a basis, and two different linear combinations of its elements that sum to the same vector.
\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \end{pmatrix} \} \qquad 2 \cdot \begin{pmatrix} 1 \\ 2 \end{pmatrix} + 0 \cdot \begin{pmatrix} 2 \\ 4 \end{pmatrix} = 0 \cdot \begin{pmatrix} 1 \\ 2 \end{pmatrix} + 1 \cdot \begin{pmatrix} 2 \\ 4 \end{pmatrix}
Subsets that are not bases can possibly have unique linear combinations. Linear combinations are
unique if and only if the subset is linearly independent. That is established in the proof of the theorem.
Two.III.1.32 (a) Describing the vector space as
\{ \begin{pmatrix} a & b \\ b & c \end{pmatrix} \mid a, b, c \in \mathbb{R} \}
suggests this for a basis.
\langle \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \rangle
Verification is easy.
(b) This is one possible basis.
\langle \begin{pmatrix} 1&0&0 \\ 0&0&0 \\ 0&0&0 \end{pmatrix}, \begin{pmatrix} 0&0&0 \\ 0&1&0 \\ 0&0&0 \end{pmatrix}, \begin{pmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&1 \end{pmatrix}, \begin{pmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&0 \end{pmatrix}, \begin{pmatrix} 0&0&1 \\ 0&0&0 \\ 1&0&0 \end{pmatrix}, \begin{pmatrix} 0&0&0 \\ 0&0&1 \\ 0&1&0 \end{pmatrix} \rangle
(c) As in the prior two questions, we can form a basis from two kinds of matrices. First are the matrices with a single one on the diagonal and all other entries zero (there are n of those matrices). Second are the matrices in which two opposed off-diagonal entries are ones and all other entries are zeros. (That is, all entries in M are zero except that mi,j and mj,i are one.)
Two.III.1.33 (a) Any four vectors from R3 are linearly related because the vector equation
c_1 \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + c_2 \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} + c_3 \begin{pmatrix} x_3 \\ y_3 \\ z_3 \end{pmatrix} + c_4 \begin{pmatrix} x_4 \\ y_4 \\ z_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
gives rise to a linear system
x1 c1 + x2 c2 + x3 c3 + x4 c4 = 0
y1 c1 + y2 c2 + y3 c3 + y4 c4 = 0
z1 c1 + z2 c2 + z3 c3 + z4 c4 = 0
that is homogeneous (and so has a solution) and has four unknowns but only three equations, and
therefore has nontrivial solutions. (Of course, this argument applies to any subset of R3 with four
or more vectors.)
(b) Given x1 , . . . , z2 ,
S = \{ \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix}, \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} \}
to decide which vectors
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
are in the span of S, set up
c_1 \begin{pmatrix} x_1 \\ y_1 \\ z_1 \end{pmatrix} + c_2 \begin{pmatrix} x_2 \\ y_2 \\ z_2 \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix}
and row reduce the resulting system.
and row reduce the resulting system.
x1 c1 + x2 c2 = x
y1 c1 + y2 c2 = y
z1 c1 + z2 c2 = z
There are two variables c1 and c2 but three equations, so when Gauss’ method finishes, on the
bottom row there will be some relationship of the form 0 = m1 x + m2 y + m3 z. Hence, vectors in
the span of the two-element set S must satisfy some restriction. Hence the span is not all of R3 .
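A hypothetical numpy sketch (the library is our assumption) illustrates the rank bound behind this argument.
    import numpy as np
    S = np.random.rand(3, 2)          # two generic vectors in R^3, as columns
    print(np.linalg.matrix_rank(S))   # at most 2 < 3, so the span is not all of R^3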
Two.III.1.34 We have (using these peculiar operations with care)
\{ \begin{pmatrix} 1-y-z \\ y \\ z \end{pmatrix} \mid y, z \in \mathbb{R} \} = \{ \begin{pmatrix} -y+1 \\ y \\ 0 \end{pmatrix} + \begin{pmatrix} -z+1 \\ 0 \\ z \end{pmatrix} \mid y, z \in \mathbb{R} \} = \{ y \cdot \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + z \cdot \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \mid y, z \in \mathbb{R} \}
Two.III.2.18 The bases for these spaces are developed in the answer set of the prior subsection.
for some a3 , . . . , an ∈ R. Therefore,
\frac{1}{y-x} \bigl( \sigma_1(\vec v) - \sigma_2(\vec v) \bigr) = \begin{pmatrix} -1 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
is in V . That is, ~e2 − ~e1 ∈ V , where ~e1 , ~e2 , . . . , ~en is the standard basis for Rn . Similarly, ~e3 − ~e2 ,
. . . , ~en − ~e1 are all in V . It is easy to see that the vectors ~e2 − ~e1 , ~e3 − ~e2 , . . . , ~en − ~e1 are linearly
independent (that is, form a linearly independent set), so dim V ≥ n − 1.
Finally, we can write
~v = x1~e1 + x2~e2 + · · · + xn~en
= (x1 + x2 + · · · + xn )~e1 + x2 (~e2 − ~e1 ) + · · · + xn (~en − ~e1 )
This shows that if x1 + x2 + · · · + xn = 0 then ~v is in the span of ~e2 − ~e1 , . . . , ~en − ~e1 (that is, is in the span of the set of those vectors); similarly, each σ(~v ) will be in this span, so V will equal this span and dim V = n − 1. On the other hand, if x1 + x2 + · · · + xn ≠ 0 then the above equation shows that ~e1 ∈ V and thus ~e1 , . . . , ~en ∈ V , so V = Rn and dim V = n.
Two.III.3.16 (a) \begin{pmatrix} 2 & 3 \\ 1 & 1 \end{pmatrix} (b) \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix} (c) \begin{pmatrix} 1 & 6 \\ 4 & 7 \\ 3 & 8 \end{pmatrix} (d) \begin{pmatrix} 0 & 0 & 0 \end{pmatrix} (e) \begin{pmatrix} -1 \\ -2 \end{pmatrix}
Two.III.3.17 (a) Yes. To see if there are c1 and c2 such that c_1 \cdot \begin{pmatrix} 2 & 1 \end{pmatrix} + c_2 \cdot \begin{pmatrix} 3 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \end{pmatrix} we solve
\begin{aligned} 2c_1 + 3c_2 &= 1 \\ c_1 + c_2 &= 0 \end{aligned}
and get c1 = −1 and c2 = 1. Thus the vector is in the row space.
(b) No. The equation c_1 \begin{pmatrix} 0 & 1 & 3 \end{pmatrix} + c_2 \begin{pmatrix} -1 & 0 & 1 \end{pmatrix} + c_3 \begin{pmatrix} -1 & 2 & 7 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \end{pmatrix} has no solution.
\begin{pmatrix} 0 & -1 & -1 & 1 \\ 1 & 0 & 2 & 1 \\ 3 & 1 & 7 & 1 \end{pmatrix} \xrightarrow{\rho_1\leftrightarrow\rho_2} \xrightarrow{-3\rho_1+\rho_3} \xrightarrow{\rho_2+\rho_3} \begin{pmatrix} 1 & 0 & 2 & 1 \\ 0 & -1 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{pmatrix}
Thus, the vector is not in the row space.
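A hypothetical sympy sketch (library choice assumed) confirms part (b) by solving for the coefficients directly.
    from sympy import Matrix, linsolve, symbols
    c1, c2, c3 = symbols('c1 c2 c3')
    A = Matrix([[0, -1, -1], [1, 0, 2], [3, 1, 7]])   # the row vectors, as columns
    b = Matrix([1, 1, 1])
    print(linsolve((A, b), [c1, c2, c3]))   # EmptySet: not in the row space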
Two.III.3.18 (a) No. To see if there are c1 , c2 ∈ R such that
c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}
we can use Gauss’ method on the resulting linear system.
c1 + c2 = 1 −ρ1 +ρ2 c1 + c2 = 1
−→
c1 + c2 = 3 0=2
There is no solution and so the vector is not in the column space.
(b) Yes. From this relationship
c_1 \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 3 \\ 0 \\ -3 \end{pmatrix} + c_3 \begin{pmatrix} 1 \\ 4 \\ -3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
we get a linear system that, when Gauss’ method is applied,
\begin{pmatrix} 1 & 3 & 1 & 1 \\ 2 & 0 & 4 & 0 \\ 1 & -3 & -3 & 0 \end{pmatrix} \xrightarrow{\substack{-2\rho_1+\rho_2 \\ -\rho_1+\rho_3}} \xrightarrow{-\rho_2+\rho_3} \begin{pmatrix} 1 & 3 & 1 & 1 \\ 0 & -6 & 2 & -2 \\ 0 & 0 & -6 & 1 \end{pmatrix}
yields a solution. Thus, the vector is in the column space.
leading to this basis.
\langle \begin{pmatrix} 1 & 0 & 1 \\ 3 & 1 & -1 \end{pmatrix}, \begin{pmatrix} 0 & 0 & 2 \\ -1 & 0 & 5 \end{pmatrix} \rangle
Two.III.3.22 Only the zero matrices have rank of zero. The only matrices of rank one have the form
\begin{pmatrix} k_1 \cdot \rho \\ \vdots \\ k_m \cdot \rho \end{pmatrix}
where ρ is some nonzero row vector, and not all of the ki ’s are zero. (Remark. We can’t simply say
that all of the rows are multiples of the first because the first row might be the zero row. Another
Remark. The above also applies with ‘column’ replacing ‘row’.)
Two.III.3.23 If a ≠ 0 then a choice of d = (c/a)b will make the second row be a multiple of the first, specifically, c/a times the first. If a = 0 and b = 0 then a choice of d = 1 will ensure that the second row is nonzero. If a = 0 and b ≠ 0 and c = 0 then any choice for d will do, since the matrix will automatically have rank one (even with the choice of d = 0). Finally, if a = 0 and b ≠ 0 and c ≠ 0 then no choice for d will suffice because the matrix is sure to have rank two.
Two.III.3.24 The column rank is two. One way to see this is by inspection — the column space consists
of two-tall columns and so can have a dimension of at least two, and we can easily find two columns
that together form a linearly independent set (the fourth and fifth columns, for instance). Another
way to see this is to recall that the column rank equals the row rank, and to perform Gauss’ method,
which leaves two nonzero rows.
Two.III.3.25 We apply Theorem 3.13. The number of columns of a matrix of coefficients A of a linear
system equals the number n of unknowns. A linear system with at least one solution has at most one
solution if and only if the space of solutions of the associated homogeneous system has dimension zero
(recall: in the ‘General = Particular + Homogeneous’ equation ~v = p~ + ~h, provided that such a p~ exists,
the solution ~v is unique if and only if the vector ~h is unique, namely ~h = ~0). But that means, by the
theorem, that n = r.
Two.III.3.26 The set of columns must be dependent because the rank of the matrix is at most five
while there are nine columns.
Two.III.3.27 There is little danger of their being equal since the row space is a set of row vectors
while the column space is a set of columns (unless the matrix is 1×1, in which case the two spaces
must be equal).
Remark. Consider
A = \begin{pmatrix} 1 & 3 \\ 2 & 6 \end{pmatrix}
and note that the row space is the set of all multiples of \begin{pmatrix} 1 & 3 \end{pmatrix} while the column space consists of multiples of
\begin{pmatrix} 1 \\ 2 \end{pmatrix}
so we also cannot argue that the two spaces must be simply transposes of each other.
Two.III.3.28 First, the vector space is the set of four-tuples of real numbers, under the natural oper-
ations. Although this is not the set of four-wide row vectors, the difference is slight — it is “the same”
as that set. So we will treat the four-tuples like four-wide vectors.
With that, one way to see that (1, 0, 1, 0) is not in the span of the first set is to note that this
reduction
\begin{pmatrix} 1 & -1 & 2 & -3 \\ 1 & 1 & 2 & 0 \\ 3 & -1 & 6 & -6 \end{pmatrix} \xrightarrow{\substack{-\rho_1+\rho_2 \\ -3\rho_1+\rho_3}} \xrightarrow{-\rho_2+\rho_3} \begin{pmatrix} 1 & -1 & 2 & -3 \\ 0 & 2 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}
and this one
\begin{pmatrix} 1 & -1 & 2 & -3 \\ 1 & 1 & 2 & 0 \\ 3 & -1 & 6 & -6 \\ 1 & 0 & 1 & 0 \end{pmatrix} \xrightarrow{\substack{-\rho_1+\rho_2 \\ -3\rho_1+\rho_3 \\ -\rho_1+\rho_4}} \xrightarrow{\substack{-\rho_2+\rho_3 \\ -(1/2)\rho_2+\rho_4}} \xrightarrow{\rho_3\leftrightarrow\rho_4} \begin{pmatrix} 1 & -1 & 2 & -3 \\ 0 & 2 & 0 & 3 \\ 0 & 0 & -1 & 3/2 \\ 0 & 0 & 0 & 0 \end{pmatrix}
yield matrices differing in rank. This means that addition of (1, 0, 1, 0) to the set of the first three
four-tuples increases the rank, and hence the span, of that set. Therefore (1, 0, 1, 0) is not already in
the span.
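The same rank comparison can be scripted; this is a hypothetical numpy sketch (the library is our assumption).
    import numpy as np
    S = np.array([[1, -1, 2, -3], [1, 1, 2, 0], [3, -1, 6, -6]])
    v = np.array([[1, 0, 1, 0]])
    print(np.linalg.matrix_rank(S))                  # 2
    print(np.linalg.matrix_rank(np.vstack([S, v])))  # 3, so v is outside the span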
Two.III.3.32 It cannot be bigger.
Two.III.3.33 The number of rows in a maximal linearly independent set cannot exceed the number
of rows. A better bound (the bound that is, in general, the best possible) is the minimum of m and
n, because the row rank equals the column rank.
Two.III.3.34 Because the rows of a matrix A are turned into the columns of Atrans the dimension of
the row space of A equals the dimension of the column space of Atrans . But the dimension of the row
space of A is the rank of A and the dimension of the column space of Atrans is the rank of Atrans . Thus
the two ranks are equal.
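A quick hypothetical numpy check (library assumed) of this equality:
    import numpy as np
    A = np.random.rand(4, 7)
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)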
Two.III.3.35 False. The first is a set of columns while the second is a set of rows.
This example, however,
A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad A^{trans} = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}
indicates that as soon as we have a formal meaning for “the same”, we can apply it here:
Columnspace(A) = [\{ \begin{pmatrix} 1 \\ 4 \end{pmatrix}, \begin{pmatrix} 2 \\ 5 \end{pmatrix}, \begin{pmatrix} 3 \\ 6 \end{pmatrix} \}]
while
Rowspace(A^{trans}) = [\{ \begin{pmatrix} 1 & 4 \end{pmatrix}, \begin{pmatrix} 2 & 5 \end{pmatrix}, \begin{pmatrix} 3 & 6 \end{pmatrix} \}]
are “the same” as each other.
Two.III.3.36 No. Here, Gauss’ method does not change the column space.
\begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix} \xrightarrow{-3\rho_1+\rho_2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
Both column spaces are all of R2 .
Two.III.3.37 A linear system
c1~a1 + · · · + cn~an = d~
has a solution if and only if d~ is in the span of the set {~a1 , . . . , ~an }. That’s true if and only if the
column rank of the augmented matrix equals the column rank of the matrix of coefficients. Since rank
equals the column rank, the system has a solution if and only if the rank of its augmented matrix
equals the rank of its matrix of coefficients.
Two.III.3.38 (a) Row rank equals column rank so each is at most the minimum of the number of
rows and columns. Hence both can be full only if the number of rows equals the number of columns.
(Of course, the converse does not hold: a square matrix need not have full row rank or full column
rank.)
(b) If A has full row rank then, no matter what the right-hand side, Gauss’ method on the augmented
matrix ends with a leading one in each row and none of those leading ones in the furthest right column
(the “augmenting” column). Back substitution then gives a solution.
On the other hand, if the linear system lacks a solution for some right-hand side it can only be because Gauss’ method leaves some row that is all zeroes to the left of the “augmenting” bar and has a nonzero entry on the right. Thus, if some right-hand side admits no solution then A does not have full row rank, because some of its rows reduce to zero.
(c) The matrix A has full column rank if and only if its columns form a linearly independent set.
That’s equivalent to the existence of only the trivial linear relationship.
(d) The matrix A has full column rank if and only if the set of its columns is a linearly independent set, and so forms a basis for its span. That’s equivalent to the existence of a unique linear representation of all vectors in that span.
Two.III.3.39 Instead of the row spaces being the same, the row space of B would be a subspace of the row space of A (possibly equal to it).
Two.III.3.40 Clearly rank(A) = rank(−A) as Gauss’ method allows us to multiply all rows of a matrix by −1. In the same way, when k ≠ 0 we have rank(A) = rank(kA).
Addition is more interesting. The rank of a sum can be smaller than the rank of the summands.
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} -1 & -2 \\ -3 & -4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}
The rank of a sum can be bigger than the rank of the summands.
\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
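Both examples check out numerically; a hypothetical numpy sketch (library assumed):
    import numpy as np
    A = np.array([[1, 2], [3, 4]])
    print(np.linalg.matrix_rank(A + (-A)))   # 0, smaller than rank(A) = 2
    B = np.array([[1, 2], [0, 0]]); C = np.array([[0, 0], [3, 4]])
    print(np.linalg.matrix_rank(B + C))      # 2, bigger than rank(B) = rank(C) = 1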
Two.III.4.22 It is. Showing that these two are subspaces is routine. To see that the space is the direct
sum of these two, just note that each member of P2 has the unique decomposition m + nx + px2 =
(m + px2 ) + (nx).
Two.III.4.23 To show that they are subspaces is routine. We will argue they are complements with
Lemma 4.15. The intersection E ∩ O is trivial because the only polynomial satisfying both conditions
p(−x) = p(x) and p(−x) = −p(x) is the zero polynomial. To see that the entire space is the sum of
the subspaces E + O = Pn , note that the polynomials p0 (x) = 1, p2 (x) = x2 , p4 (x) = x4 , etc., are in
E and also note that the polynomials p1 (x) = x, p3 (x) = x3 , etc., are in O. Hence any member of Pn
is a combination of members of E and O.
Two.III.4.24 Each of these is R3 .
(a) These are broken into lines for legibility.
W1 + W2 + W3 , W1 + W2 + W3 + W4 , W1 + W2 + W3 + W5 , W1 + W2 + W3 + W4 + W5 ,
W1 + W2 + W4 , W1 + W2 + W4 + W5 , W1 + W2 + W5 ,
W1 + W3 + W4 , W1 + W3 + W5 , W1 + W3 + W4 + W5 ,
W1 + W4 , W1 + W4 + W5 ,
W1 + W5 ,
W2 + W3 + W4 , W2 + W3 + W4 + W5 ,
W2 + W4 , W2 + W4 + W5 ,
W3 + W4 , W3 + W4 + W5 ,
W4 + W5
(b) W1 ⊕ W2 ⊕ W3 , W1 ⊕ W4 , W1 ⊕ W5 , W2 ⊕ W4 , W3 ⊕ W4
Two.III.4.25 Clearly each is a subspace. The bases Bi = hx^i i for the subspaces, when concatenated, form a basis for the whole space.
Two.III.4.26 It is W2 .
Two.III.4.27 True by Lemma 4.8.
Two.III.4.28 Two distinct direct sum decompositions of R4 are easy to find. Two such are W1 =
[{~e1 , ~e2 }] and W2 = [{~e3 , ~e4 }], and also U1 = [{~e1 }] and U2 = [{~e2 , ~e3 , ~e4 }]. (Many more are possible,
for example R4 and its trivial subspace.)
In contrast, any partition of R1 ’s single-vector basis will give one basis with no elements and another
with a single element. Thus any decomposition involves R1 and its trivial subspace.
Two.III.4.29 Set inclusion one way is easy: \{ \vec w_1 + \cdots + \vec w_k \mid \vec w_i \in W_i \} is a subset of [W_1 \cup \ldots \cup W_k] because each \vec w_1 + \cdots + \vec w_k is a sum of vectors from the union.
For the other inclusion, to any linear combination of vectors from the union apply commutativity
of vector addition to put vectors from W1 first, followed by vectors from W2 , etc. Add the vectors
from W1 to get a w ~ 1 ∈ W1 , add the vectors from W2 to get a w ~ 2 ∈ W2 , etc. The result has the desired
form.
Two.III.4.30 One example is to take the space to be R3 , and to take the subspaces to be the xy-plane,
the xz-plane, and the yz-plane.
Two.III.4.31 Of course, the zero vector is in all of the subspaces, so the intersection contains at least that one vector. By the definition of direct sum the set {W1 , . . . , Wk } is independent and so no nonzero vector of Wi is a multiple of a member of Wj , when i ≠ j. In particular, no nonzero vector from Wi equals a member of Wj .
Two.III.4.32 It can contain a trivial subspace; this set of subspaces of R3 is independent: {{~0}, x-axis}.
No nonzero vector from the trivial space {~0} is a multiple of a vector from the x-axis, simply because
the trivial space has no nonzero vectors to be candidates for such a multiple (and also no nonzero
vector from the x-axis is a multiple of the zero vector from the trivial subspace).
Two.III.4.33 Yes. For any subspace of a vector space we can take any basis h~ω1 , . . . , ~ωk i for that subspace and extend it to a basis h~ω1 , . . . , ~ωk , β~k+1 , . . . , β~n i for the whole space. Then the complement of the original subspace has this for a basis: hβ~k+1 , . . . , β~n i.
Two.III.4.34 (a) It must. Any member of W1 + W2 can be written w~1 + w~2 where w~1 ∈ W1 and w~2 ∈ W2 . As S1 spans W1 , the vector w~1 is a combination of members of S1 . Similarly w~2 is a combination of members of S2 .
Two.III.4.39 It happens when at least one of W1 , W2 is trivial. But that is the only way it can happen.
To prove this, assume that both are non-trivial, select nonzero vectors w~1 , w~2 from each, and consider w~1 + w~2 . This sum is not in W1 because w~1 + w~2 = ~v ∈ W1 would imply that w~2 = ~v − w~1 is in W1 , which violates the assumption of the independence of the subspaces. Similarly, w~1 + w~2 is not in W2 . Thus there is an element of V that is not in W1 ∪ W2 .
Two.III.4.40 (a) The set
\{ \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \mid \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \cdot \begin{pmatrix} x \\ 0 \end{pmatrix} = 0 \text{ for all } x \in \mathbb{R} \}
is easily seen to be the y-axis.
(b) The yz-plane.
(c) The z-axis.
(d) Assume that U is a subspace of some Rn . Because U ⊥ contains the zero vector, since that vector is perpendicular to everything, we need only show that the orthocomplement is closed under linear combinations of two elements. If w~1 , w~2 ∈ U ⊥ then w~1 · ~u = 0 and w~2 · ~u = 0 for all ~u ∈ U . Thus (c1 w~1 + c2 w~2 ) · ~u = c1 (w~1 · ~u) + c2 (w~2 · ~u) = 0 for all ~u ∈ U and so U ⊥ is closed under linear combinations.
(e) The only vector orthogonal to itself is the zero vector.
(f ) This is immediate.
(g) To prove that the dimensions add, it suffices by Corollary 4.13 and Lemma 4.15 to show that
U ∩ U ⊥ is the trivial subspace {~0}. But this is one of the prior items in this problem.
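For a concrete feel, the orthocomplement is computable as a nullspace; a hypothetical scipy sketch (library assumed), taking U to be the xy-plane in R^3:
    import numpy as np
    from scipy.linalg import null_space
    U_rows = np.array([[1, 0, 0], [0, 1, 0]])   # rows spanning the xy-plane
    print(null_space(U_rows))                   # one column, along (0, 0, 1): the z-axis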
Two.III.4.41 Yes. The left-to-right implication is Corollary 4.13. For the other direction, assume that
dim(V ) = dim(W1 ) + · · · + dim(Wk ). Let B1 , . . . , Bk be bases for W1 , . . . , Wk . As V is the sum of the
subspaces, any ~v ∈ V can be written ~v = w ~1 + · · · + w ~ k and expressing each w ~ i as a combination of
vectors from the associated basis Bi shows that the concatenation B1 ⌢ · · · ⌢ Bk spans V . Now, that
concatenation has dim(W1 ) + · · · + dim(Wk ) members, and so it is a spanning set of size dim(V ). The
concatenation is therefore a basis for V . Thus V is the direct sum.
Two.III.4.42 No. The standard basis for R2 does not split into bases for the complementary subspaces
the line x = y and the line x = −y.
Two.III.4.43 (a) Yes, W1 + W2 = W2 + W1 for all subspaces W1 , W2 because each side is the span
of W1 ∪ W2 = W2 ∪ W1 .
(b) This one is similar to the prior one — each side of that equation is the span of (W1 ∪ W2 ) ∪ W3 =
W1 ∪ (W2 ∪ W3 ).
(c) Because this is an equality between sets, we can show that it holds by mutual inclusion. Clearly W ⊆ W + W . For W + W ⊆ W just recall that every subspace is closed under addition so any sum of the form w~1 + w~2 is in W .
(d) In each vector space, the identity element with respect to subspace addition is the trivial subspace.
(e) Neither of left or right cancelation needs to hold. For an example, in R3 take W1 to be the
xy-plane, take W2 to be the x-axis, and take W3 to be the y-axis.
Two.III.4.44 (a) They are equal because for each, V is the direct sum if and only if each ~v ∈ V can be written in a unique way as a sum ~v = w~1 + w~2 and ~v = w~2 + w~1 .
(b) They are equal because for each, V is the direct sum if and only if each ~v ∈ V can be written in a unique way as a sum of a vector from each ~v = (w~1 + w~2 ) + w~3 and ~v = w~1 + (w~2 + w~3 ).
(c) Any vector in R3 can be decomposed uniquely into the sum of a vector from each axis.
(d) No. For an example, in R2 take W1 to be the x-axis, take W2 to be the y-axis, and take W3 to
be the line y = x.
(e) In any vector space the trivial subspace acts as the identity element with respect to direct sum.
(f ) In any vector space, only the trivial subspace has a direct-sum inverse (namely, itself). One way
to see this is that dimensions add, and so increase.
Topic: Fields
1 These checks are all routine; most consist only of remarking that the property is so familiar that it does not need to be proved.
Topic: Crystals
1 Each fundamental unit is 3.34 × 10−10 cm, so there are about 0.1/(3.34 × 10−10 ) such units. That
gives 2.99 × 108 , so there are something like 300, 000, 000 (three hundred million) units.
2 (a) We solve
c_1 \begin{pmatrix} 1.42 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 1.23 \\ 0.71 \end{pmatrix} = \begin{pmatrix} 5.67 \\ 3.14 \end{pmatrix} \implies \begin{aligned} 1.42c_1 + 1.23c_2 &= 5.67 \\ 0.71c_2 &= 3.14 \end{aligned}
to get c2 ≈ 4.42 and c1 ≈ 0.16.
(b) Here is the point located in the lattice. In the picture on the left, superimposed on the unit cell are the two basis vectors β~1 and β~2 , and a box showing the offset of 0.16β~1 + 4.42β~2 . The picture on the right shows where that appears inside of the crystal lattice, taking as the origin the lower left corner of the hexagon in the lower left.
So this point is in the next column of hexagons over, and either one hexagon up or two hexagons up, depending on how you count them.
(c) This second basis
\langle \begin{pmatrix} 1.42 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1.42 \end{pmatrix} \rangle
makes the computation easier
c_1 \begin{pmatrix} 1.42 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 1.42 \end{pmatrix} = \begin{pmatrix} 5.67 \\ 3.14 \end{pmatrix} \implies \begin{aligned} 1.42c_1 &= 5.67 \\ 1.42c_2 &= 3.14 \end{aligned}
(we get c2 ≈ 2.21 and c1 ≈ 3.99), but it doesn’t seem to have to do much with the physical structure
that we are studying.
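A hypothetical numpy sketch (library assumed) of the two coordinate computations:
    import numpy as np
    B1 = np.array([[1.42, 1.23], [0.0, 0.71]])   # basis vectors as columns
    B2 = np.array([[1.42, 0.0], [0.0, 1.42]])
    p = np.array([5.67, 3.14])
    print(np.linalg.solve(B1, p))   # approximately (0.16, 4.42)
    print(np.linalg.solve(B2, p))   # approximately (3.99, 2.21)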
3 In terms of the basis the locations of the corner atoms are (0, 0, 0), (1, 0, 0), . . . , (1, 1, 1). The locations
of the face atoms are (0.5, 0.5, 1), (1, 0.5, 0.5), (0.5, 1, 0.5), (0, 0.5, 0.5), (0.5, 0, 0.5), and (0.5, 0.5, 0). The
locations of the atoms a quarter of the way down from the top are (0.75, 0.75, 0.75) and (0.25, 0.25, 0.25).
The atoms a quarter of the way up from the bottom are at (0.75, 0.25, 0.25) and (0.25, 0.75, 0.25).
Converting to Ångstroms is easy.
4 (a) 195.08/(6.02 × 10^23 ) = 3.239 × 10^{−22}
(b) 4
(c) 4 · 3.239 × 10^{−22} = 1.296 × 10^{−21}
(d) 1.296 × 10^{−21} /21.45 = 6.042 × 10^{−23} cubic centimeters
(e) (6.042 × 10^{−23} )^{1/3} = 3.924 × 10^{−8} centimeters.
(f ) \langle \begin{pmatrix} 3.924 \times 10^{-8} \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 3.924 \times 10^{-8} \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 3.924 \times 10^{-8} \end{pmatrix} \rangle
1 The mock election corresponds to the table on page 150 in the way shown in the first table, and after cancellation the result is the second table.

before cancellation               after cancellation
positive spin    negative spin    positive spin    negative spin
D > R > T        T > R > D        D > R > T        T > R > D
5 voters         2 voters         3 voters         –
R > T > D        D > T > R        R > T > D        D > T > R
8 voters         4 voters         4 voters         –
T > D > R        R > D > T        T > D > R        R > D > T
8 voters         2 voters         6 voters         –

All three come from the same side, the left, as the result from this Topic says must happen. Tallying the election can now proceed, using the cancelled numbers, 3 · (D > R > T ) + 4 · (R > T > D) + 6 · (T > D > R); in the cycle diagram this gives margins of 5 for D over R, 1 for R over T , and 7 for T over D.
3 (a) A two-voter election can have a majority cycle in two ways. First, the two voters could be
opposites, resulting after cancellation in the trivial election (with the majority cycle of all zeroes).
Second, the two voters could have the same spin but come from different rows, as here.
1 · (the first-row cycle) + 1 · (the second-row cycle) + 0 · (the third-row cycle), which sums to the cycle with margins 0 for D over R, 2 for R over T , and 0 for T over D.
(b) There are two cases. An even number of voters can split half and half into opposites, e.g., half
the voters are D > R > T and half are T > R > D. Then cancellation gives the trivial election. If
the number of voters is greater than one and odd (of the form 2k + 1 with k > 0) then using the
cycle diagram from the proof,
D D D D
-a a b -b c c -a+b+c a-b+c
T R + T R + T R = T R
a b -c a+b-c
we can take a = k and b = k and c = 1. Because k > 0, this is a majority cycle.
4 This is one example that yields a non-rational preference order for a single voter.
             character    experience    policies
Democrat     most         middle        least
Republican   middle       least         most
Third        least        most          middle
The Democrat is preferred to the Republican for character and experience. The Republican is preferred
to the Third for character and policies. And, the Third is preferred to the Democrat for experience
and policies.
5 First, compare the D > R > T decomposition that was done out in the Topic with the decomposition of the opposite T > R > D voter.
\begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix} = \frac{1}{3} \cdot \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} + \frac{2}{3} \cdot \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + \frac{2}{3} \cdot \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 \\ -1 \\ -1 \end{pmatrix} = d_1 \cdot \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} + d_2 \cdot \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + d_3 \cdot \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}
Obviously, the second is the negative of the first, and so d1 = −1/3, d2 = −2/3, and d3 = −2/3. This principle holds for any pair of opposite voters, and so we need only do the computation for a voter from the second row, and a voter from the third row. For a positive spin voter in the second row,
c1 − c2 − c3 = 1 c1 − c2 − c3 = 1
−ρ1 +ρ2 (−1/2)ρ2 +ρ3
c1 + c2 = 1 −→ −→ 2c2 + c3 = 0
−ρ1 +ρ3
c1 + c3 = −1 (3/2)c3 = −2
gives c3 = −4/3, c2 = 2/3, and c1 = 1/3. For a positive spin voter in the third row,
c1 − c2 − c3 = 1 c1 − c2 − c3 = 1
−ρ1 +ρ2 (−1/2)ρ2 +ρ3
c1 + c2 = −1 −→ −→ 2c2 + c3 = −2
−ρ1 +ρ3
c1 + c3 = 1 (3/2)c3 = 1
gives c3 = 2/3, c2 = −4/3, and c1 = 1/3.
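These two solves are easy to script; a hypothetical numpy sketch (library assumed):
    import numpy as np
    M = np.array([[1, -1, -1], [1, 1, 0], [1, 0, 1]])
    print(np.linalg.solve(M, [1, 1, -1]))   # (1/3, 2/3, -4/3), the second-row voter
    print(np.linalg.solve(M, [1, -1, 1]))   # (1/3, -4/3, 2/3), the third-row voter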
6 It is nonempty because it contains the zero vector. To see that it is closed under linear combinations
of two of its members, suppose that ~v1 and ~v2 are in U ⊥ and consider c1~v1 + c2~v2 . For any ~u ∈ U ,
(c1~v1 + c2~v2 ) ~u = c1 (~v1 ~u) + c2 (~v2 ~u) = c1 · 0 + c2 · 0 = 0
and so c1~v1 + c2~v2 ∈ U ⊥ .
gives rise to this linear system
p1 + p 2 + p3 + p 5 =0
0=0
−p3 − 2p5 + p6 = 0
(note that there is no restriction on p4 ). The natural parametrization uses the free variables to give p3 = −2p5 + p6 and p1 = −p2 + p5 − p6 . The resulting description of the solution set
\{ \begin{pmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \end{pmatrix} = p_2 \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} + p_4 \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} + p_5 \begin{pmatrix} 1 \\ 0 \\ -2 \\ 0 \\ 1 \\ 0 \end{pmatrix} + p_6 \begin{pmatrix} -1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} \mid p_2, p_4, p_5, p_6 \in \mathbb{R} \}
gives {y/x, θ, gx/v0^2 , v0 t/x} as a complete set of dimensionless products (recall that “complete” in this context does not mean that there are no other dimensionless products; it simply means that the set is a basis). This is, however, not the set of dimensionless products that the question asks for.
There are two ways to proceed. The first is to fiddle with the choice of parameters, hoping to
hit on the right set. For that, we can do the prior paragraph in reverse. Converting the given
dimensionless products gt/v0 , gx/v02 , gy/v02 , and θ into vectors gives this description (note the ? ’s
where the parameters will go).
\{ \begin{pmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \end{pmatrix} = \,?\, \begin{pmatrix} 0 \\ 0 \\ -1 \\ 0 \\ 1 \\ 1 \end{pmatrix} + \,?\, \begin{pmatrix} 1 \\ 0 \\ -2 \\ 0 \\ 1 \\ 0 \end{pmatrix} + \,?\, \begin{pmatrix} 0 \\ 1 \\ -2 \\ 0 \\ 1 \\ 0 \end{pmatrix} + p_4 \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} \mid \text{parameters} \in \mathbb{R} \}
The p4 is already in place. Examining the rows shows that we can also put in place p6 , p1 , and p2 .
The second way to proceed, following the hint, is to note that the given set is of size four in
a four-dimensional vector space and so we need only show that it is linearly independent. That
is easily done by inspection, by considering the sixth, first, second, and fourth components of the
vectors.
(b) The first equation can be rewritten
\frac{gx}{v_0^2} = \frac{gt}{v_0} \cos\theta
so that Buckingham’s function is f1 (Π1 , Π2 , Π3 , Π4 ) = Π2 − Π1 cos(Π4 ). The second equation can be rewritten
\frac{gy}{v_0^2} = \frac{gt}{v_0} \sin\theta - \frac{1}{2} \left( \frac{gt}{v_0} \right)^2
and Buckingham’s function here is f2 (Π1 , Π2 , Π3 , Π4 ) = Π3 − Π1 sin(Π4 ) + (1/2)Π1 2 .
2 We consider
(L0 M 0 T −1 )p1 (L1 M −1 T 2 )p2 (L−3 M 0 T 0 )p3 (L0 M 1 T 0 )p4 = (L0 M 0 T 0 )
which gives these relations among the powers.
p2 − 3p3 =0 −p1 + 2p2 =0
ρ1 ↔ρ3 ρ2 +ρ3
−p2 + p4 = 0 −→ −→ −p2 + p4 = 0
−p1 + 2p2 =0 −3p3 + p4 = 0
This is the solution space (because we wish to express k as a function of the other quantities, p2 is taken as the parameter).
\{ \begin{pmatrix} 2 \\ 1 \\ 1/3 \\ 1 \end{pmatrix} p_2 \mid p_2 \in \mathbb{R} \}
Thus, Π1 = ν^2 k N^{1/3} m is the dimensionless combination, and we have that k equals ν^{−2} N^{−1/3} m^{−1} times a constant (the function fˆ is constant since it has no arguments).
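The exponent vector is the nullspace of the matrix of dimensional exponents; a hypothetical sympy sketch (library assumed), with columns ordered p1 , . . . , p4 :
    from sympy import Matrix
    D = Matrix([[0, 1, -3, 0],    # L exponents
                [0, -1, 0, 1],    # M exponents
                [-1, 2, 0, 0]])   # T exponents
    print(D.nullspace())   # one vector, (2, 1, 1/3, 1), matching the solution above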
Denoting the torque by τ , the rotation rate by r, the volume of air by V , and the density of air by
d we have that Π1 = τ r−2 V −5/3 d−1 , and so the torque is r2 V 5/3 d times a constant.
4 (a) These are the dimensional formulas.
quantity                           dimensional formula
speed of the wave v                L^1 M^0 T^{−1}
separation of the dominoes d       L^1 M^0 T^0
height of the dominoes h           L^1 M^0 T^0
acceleration due to gravity g      L^1 M^0 T^{−2}
(b) The relationship
(L^1 M^0 T^{−1})^{p_1} (L^1 M^0 T^0)^{p_2} (L^1 M^0 T^0)^{p_3} (L^1 M^0 T^{−2})^{p_4} = (L^0 M^0 T^0)
gives this linear system.
\begin{aligned} p_1 + p_2 + p_3 + p_4 &= 0 \\ 0 &= 0 \\ -p_1 - 2p_4 &= 0 \end{aligned} \;\xrightarrow{\rho_1+\rho_3}\; \begin{aligned} p_1 + p_2 + p_3 + p_4 &= 0 \\ p_2 + p_3 - p_4 &= 0 \end{aligned}
Taking p3 and p4 as parameters, the solution set is described in this way.
\{ \begin{pmatrix} 0 \\ -1 \\ 1 \\ 0 \end{pmatrix} p_3 + \begin{pmatrix} -2 \\ 1 \\ 0 \\ 1 \end{pmatrix} p_4 \mid p_3, p_4 \in \mathbb{R} \}
That gives {Π1 = h/d, Π2 = dg/v 2 } as a complete set.
(c) Buckingham’s Theorem says that v 2 = dg · fˆ(h/d), and so, since g is a constant, if h/d is fixed then v is proportional to √d .
5 Checking the conditions in the definition of a vector space is routine.
6 (a) The dimensional formula of the circumference is L, that is, L1 M 0 T 0 . The dimensional formula
of the area is L2 .
(b) One is C + A = 2πr + πr2 .
(c) One example is this formula relating the length of arc subtended by an angle to the radius and the angle measure in radians: ℓ − rθ = 0. Both terms in that formula have dimensional formula L1 . The relationship holds for some unit systems (inches and radians, for instance) but not for all unit systems (inches and degrees, for instance).
(b) Yes, this is an isomorphism.
It is one-to-one:
\text{if } f\begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix} = f\begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix} \text{ then } \begin{pmatrix} a_1+b_1+c_1+d_1 \\ a_1+b_1+c_1 \\ a_1+b_1 \\ a_1 \end{pmatrix} = \begin{pmatrix} a_2+b_2+c_2+d_2 \\ a_2+b_2+c_2 \\ a_2+b_2 \\ a_2 \end{pmatrix}
gives that a1 = a2 , and that b1 = b2 , and that c1 = c2 , and that d1 = d2 .
It is onto, since this shows
\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = f\begin{pmatrix} w & z-w \\ y-z & x-y \end{pmatrix}
that any four-tall vector is the image of a 2×2 matrix.
Finally, it preserves combinations
\begin{aligned} f( r_1 \cdot \begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix} + r_2 \cdot \begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix} ) &= f\begin{pmatrix} r_1 a_1 + r_2 a_2 & r_1 b_1 + r_2 b_2 \\ r_1 c_1 + r_2 c_2 & r_1 d_1 + r_2 d_2 \end{pmatrix} \\ &= \begin{pmatrix} r_1 a_1 + \cdots + r_2 d_2 \\ r_1 a_1 + \cdots + r_2 c_2 \\ r_1 a_1 + \cdots + r_2 b_2 \\ r_1 a_1 + r_2 a_2 \end{pmatrix} \\ &= r_1 \cdot \begin{pmatrix} a_1 + \cdots + d_1 \\ a_1 + \cdots + c_1 \\ a_1 + b_1 \\ a_1 \end{pmatrix} + r_2 \cdot \begin{pmatrix} a_2 + \cdots + d_2 \\ a_2 + \cdots + c_2 \\ a_2 + b_2 \\ a_2 \end{pmatrix} \\ &= r_1 \cdot f\begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix} + r_2 \cdot f\begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix} \end{aligned}
and so item (2) of Lemma 1.9 shows that it preserves structure.
(c) Yes, it is an isomorphism.
To show that it is one-to-one, we suppose that two members of the domain have the same image
under f .
f\begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix} = f\begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix}
This gives, by the definition of f , that c1 + (d1 + c1 )x + (b1 + a1 )x2 + a1 x3 = c2 + (d2 + c2 )x + (b2 +
a2 )x2 + a2 x3 and then the fact that polynomials are equal only when their coefficients are equal
gives a set of linear equations
c1 = c2
d1 + c1 = d2 + c2
b1 + a1 = b2 + a2
a1 = a2
that has only the solution a1 = a2 , b1 = b2 , c1 = c2 , and d1 = d2 .
To show that f is onto, we note that p + qx + rx2 + sx3 is the image under f of this matrix.
\begin{pmatrix} s & r-s \\ p & q-p \end{pmatrix}
We can check that f preserves structure by using item (2) of Lemma 1.9.
\begin{aligned} f( r_1 \cdot \begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix} + r_2 \cdot \begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix} ) &= f\begin{pmatrix} r_1 a_1 + r_2 a_2 & r_1 b_1 + r_2 b_2 \\ r_1 c_1 + r_2 c_2 & r_1 d_1 + r_2 d_2 \end{pmatrix} \\ &= (r_1 c_1 + r_2 c_2) + (r_1 d_1 + r_2 d_2 + r_1 c_1 + r_2 c_2)x \\ &\quad + (r_1 b_1 + r_2 b_2 + r_1 a_1 + r_2 a_2)x^2 + (r_1 a_1 + r_2 a_2)x^3 \\ &= r_1 \cdot \bigl( c_1 + (d_1 + c_1)x + (b_1 + a_1)x^2 + a_1 x^3 \bigr) + r_2 \cdot \bigl( c_2 + (d_2 + c_2)x + (b_2 + a_2)x^2 + a_2 x^3 \bigr) \\ &= r_1 \cdot f\begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix} + r_2 \cdot f\begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix} \end{aligned}
Three.I.1.20 If n ≥ 1 then Pn−1 ∼= Rn . (If we take P−1 and R0 to be trivial vector spaces, then the relationship extends one dimension lower.) The natural isomorphism between them is this.
a_0 + a_1 x + \cdots + a_{n-1} x^{n-1} \mapsto \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{pmatrix}
Checking that it is an isomorphism is straightforward.
Three.I.1.21 This is the map, expanded.
f (a0 + a1 x + a2 x2 + a3 x3 + a4 x4 + a5 x5 ) = a0 + a1 (x − 1) + a2 (x − 1)2 + a3 (x − 1)3
+ a4 (x − 1)4 + a5 (x − 1)5
= a0 + a1 (x − 1) + a2 (x2 − 2x + 1)
+ a3 (x3 − 3x2 + 3x − 1)
+ a4 (x4 − 4x3 + 6x2 − 4x + 1)
+ a5 (x5 − 5x4 + 10x3 − 10x2 + 5x − 1)
= (a0 − a1 + a2 − a3 + a4 − a5 )
+ (a1 − 2a2 + 3a3 − 4a4 + 5a5 )x
+ (a2 − 3a3 + 6a4 − 10a5 )x2 + (a3 − 4a4 + 10a5 )x3
+ (a4 − 5a5 )x4 + a5 x5
This map is a correspondence because it has an inverse, the map p(x) 7→ p(x + 1).
To finish checking that it is an isomorphism, we apply item (2) of Lemma 1.9 and show that it
preserves linear combinations of two polynomials. Briefly, the check goes like this.
f (c · (a0 + a1 x + · · · + a5 x5 ) + d · (b0 + b1 x + · · · + b5 x5 ))
= · · · = (ca0 − ca1 + ca2 − ca3 + ca4 − ca5 + db0 − db1 + db2 − db3 + db4 − db5 ) + · · · + (ca5 + db5 )x5
= · · · = c · f (a0 + a1 x + · · · + a5 x5 ) + d · f (b0 + b1 x + · · · + b5 x5 )
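A hypothetical sympy sketch (library assumed) of the inverse-map check:
    from sympy import symbols, expand
    x = symbols('x')
    p = 1 + 2*x + 5*x**3
    f  = lambda q: expand(q.subs(x, x - 1))   # the map p(x) |-> p(x - 1)
    fi = lambda q: expand(q.subs(x, x + 1))   # its inverse p(x) |-> p(x + 1)
    assert fi(f(p)) == expand(p)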
Three.I.1.22 No vector space has the empty set underlying it. We can take ~v to be the zero vector.
Three.I.1.23 Yes; where the two spaces are {~a} and {~b}, the map sending ~a to ~b is clearly one-to-one
and onto, and also preserves what little structure there is.
Three.I.1.24 A linear combination of n = 0 vectors adds to the zero vector and so Lemma 1.8 shows
that the three statements are equivalent in this case.
Three.I.1.25 Consider the basis h1i for P0 and let f (1) ∈ R be k. For any a ∈ P0 we have that
f (a) = f (a · 1) = af (1) = ak and so f ’s action is multiplication by k. Note that k ≠ 0 or else the map is not one-to-one. (Incidentally, any such map a 7→ ka is an isomorphism, as is easy to check.)
Three.I.1.26 In each item, following item (2) of Lemma 1.9, we show that the map preserves structure by showing that it preserves linear combinations of two members of the domain.
(a) The identity map is clearly one-to-one and onto. For linear combinations the check is easy.
id(c1 · ~v1 + c2 · ~v2 ) = c1~v1 + c2~v2 = c1 · id(~v1 ) + c2 · id(~v2 )
(b) The inverse of a correspondence is also a correspondence (as stated in the appendix), so we need only check that the inverse preserves linear combinations. Assume that w~1 = f (~v1 ) (so f −1 (w~1 ) = ~v1 ) and assume that w~2 = f (~v2 ).
\begin{aligned} f^{-1}(c_1 \cdot \vec w_1 + c_2 \cdot \vec w_2) &= f^{-1}\bigl( c_1 \cdot f(\vec v_1) + c_2 \cdot f(\vec v_2) \bigr) \\ &= f^{-1}( f( c_1 \vec v_1 + c_2 \vec v_2 ) ) \\ &= c_1 \vec v_1 + c_2 \vec v_2 \\ &= c_1 \cdot f^{-1}(\vec w_1) + c_2 \cdot f^{-1}(\vec w_2) \end{aligned}
(c) The composition of two correspondences is a correspondence (as stated in the appendix), so we
need only check that the composition map preserves linear combinations.
\begin{aligned} g \circ f( c_1 \cdot \vec v_1 + c_2 \cdot \vec v_2 ) &= g\bigl( f(c_1 \vec v_1 + c_2 \vec v_2) \bigr) \\ &= g\bigl( c_1 \cdot f(\vec v_1) + c_2 \cdot f(\vec v_2) \bigr) \\ &= c_1 \cdot g( f(\vec v_1) ) + c_2 \cdot g( f(\vec v_2) ) \\ &= c_1 \cdot g \circ f(\vec v_1) + c_2 \cdot g \circ f(\vec v_2) \end{aligned}
The map fℓ sends the vector at angle θ to the vector at angle φ − (θ − φ). To convert to rectangular coordinates, we will use some trigonometric formulas, as we did in the prior item. First observe that cos φ and sin φ can be determined from the slope k of the line: the right triangle with legs x and kx has hypotenuse x√(1 + k^2), which gives that cos φ = 1/√(1 + k^2) and sin φ = k/√(1 + k^2). Now,
and ~q = q1 β~1 + q2 β~2 + q3 β~3 , we have this.
\begin{aligned} \text{Rep}_B(c \cdot \vec p + d \cdot \vec q) &= \text{Rep}_B( (cp_1 + dq_1)\vec\beta_1 + (cp_2 + dq_2)\vec\beta_2 + (cp_3 + dq_3)\vec\beta_3 ) \\ &= \begin{pmatrix} cp_1 + dq_1 \\ cp_2 + dq_2 \\ cp_3 + dq_3 \end{pmatrix} = c \cdot \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} + d \cdot \begin{pmatrix} q_1 \\ q_2 \\ q_3 \end{pmatrix} \\ &= c \cdot \text{Rep}_B(\vec p\,) + d \cdot \text{Rep}_B(\vec q\,) \end{aligned}
(d) Use any basis B for P2 whose first two members are x + x2 and 1 − x, say B = hx + x2 , 1 − x, 1i.
Three.I.1.34 See the next subsection.
Three.I.1.35 (a) Most of the conditions in the definition of a vector space are routine. We here
sketch the verification of part (1) of that definition.
For closure of U × W , note that because U and W are closed, we have that ~u1 + ~u2 ∈ U and w~1 + w~2 ∈ W and so (~u1 + ~u2 , w~1 + w~2 ) ∈ U × W . Commutativity of addition in U × W follows from commutativity of addition in U and W .
(~u1 , w~1 ) + (~u2 , w~2 ) = (~u1 + ~u2 , w~1 + w~2 ) = (~u2 + ~u1 , w~2 + w~1 ) = (~u2 , w~2 ) + (~u1 , w~1 )
The check for associativity of addition is similar. The zero element is (~0U , ~0W ) ∈ U × W and the additive inverse of (~u, w~ ) is (−~u, −w~ ).
The checks for the second part of the definition of a vector space are also straightforward.
(b) This is a basis
\langle\, (1, \begin{pmatrix} 0 \\ 0 \end{pmatrix}),\; (x, \begin{pmatrix} 0 \\ 0 \end{pmatrix}),\; (x^2, \begin{pmatrix} 0 \\ 0 \end{pmatrix}),\; (0, \begin{pmatrix} 1 \\ 0 \end{pmatrix}),\; (0, \begin{pmatrix} 0 \\ 1 \end{pmatrix}) \,\rangle
because there is one and only one way to represent any member of P2 × R2 with respect to this set; here is an example.
(3 + 2x + x^2, \begin{pmatrix} 5 \\ 4 \end{pmatrix}) = 3 \cdot (1, \begin{pmatrix} 0 \\ 0 \end{pmatrix}) + 2 \cdot (x, \begin{pmatrix} 0 \\ 0 \end{pmatrix}) + (x^2, \begin{pmatrix} 0 \\ 0 \end{pmatrix}) + 5 \cdot (0, \begin{pmatrix} 1 \\ 0 \end{pmatrix}) + 4 \cdot (0, \begin{pmatrix} 0 \\ 1 \end{pmatrix})
The dimension of this space is five.
(c) We have dim(U × W ) = dim(U ) + dim(W ) as this is a basis.
\langle (\vec\mu_1, \vec 0_W), \ldots, (\vec\mu_{\dim(U)}, \vec 0_W), (\vec 0_U, \vec\omega_1), \ldots, (\vec 0_U, \vec\omega_{\dim(W)}) \rangle
(d) We know that if V = U ⊕ W then each ~v ∈ V can be written as ~v = ~u + w~ in one and only one way. This is just what we need to prove that the given function is an isomorphism.
First, to show that f is one-to-one we can show that if f ((~u1 , w~1 )) = f ((~u2 , w~2 )), that is, if ~u1 + w~1 = ~u2 + w~2 , then ~u1 = ~u2 and w~1 = w~2 . But the statement ‘each ~v is such a sum in only one way’ is exactly what is needed to make this conclusion. Similarly, the argument that f is onto is completed by the statement that ‘each ~v is such a sum in at least one way’.
This map also preserves linear combinations
\begin{aligned} f( c_1 \cdot (\vec u_1, \vec w_1) + c_2 \cdot (\vec u_2, \vec w_2) ) &= f( (c_1 \vec u_1 + c_2 \vec u_2,\; c_1 \vec w_1 + c_2 \vec w_2) ) \\ &= c_1 \vec u_1 + c_2 \vec u_2 + c_1 \vec w_1 + c_2 \vec w_2 \\ &= c_1 \vec u_1 + c_1 \vec w_1 + c_2 \vec u_2 + c_2 \vec w_2 \\ &= c_1 \cdot f( (\vec u_1, \vec w_1) ) + c_2 \cdot f( (\vec u_2, \vec w_2) ) \end{aligned}
and so it is an isomorphism.
Three.I.2.8 Each pair of spaces is isomorphic if and only if the two have the same dimension. We can,
when there is an isomorphism, state a map, but it isn’t strictly necessary.
(a) No, they have different dimensions.
(b) No, they have different dimensions.
(b) A natural definition is this.
D\left( \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix} \right) = \begin{pmatrix} a_1 \\ 2a_2 \\ 3a_3 \\ 0 \end{pmatrix}
Three.I.2.23 Yes.
Assume that V is a vector space with basis B = hβ~1 , . . . , β~n i and that W is another vector space such that the map f : B → W is a correspondence. Consider the extension fˆ: V → W of f .
fˆ(c1 β~1 + · · · + cn β~n ) = c1 f (β~1 ) + · · · + cn f (β~n )
The map fˆ is an isomorphism.
First, fˆ is well-defined because every member of V has one and only one representation as a linear
combination of elements of B.
Second, fˆ is one-to-one because every member of W has only one representation as a linear combination of elements of hf (β~1 ), . . . , f (β~n )i. The map fˆ is onto because every member of W has at least one representation as a linear combination of members of hf (β~1 ), . . . , f (β~n )i.
Finally, preservation of structure is routine to check. For instance, here is the preservation of
addition calculation.
fˆ( (c1 β
~1 + · · · + cn β
~n ) + (d1 β~1 + · · · + dn β ~n ) ) = fˆ( (c1 + d1 )β ~1 + · · · + (cn + dn )β~n )
= (c1 + d1 )f (β ~1 ) + · · · + (cn + dn )f (β
~n )
~1 ) + · · · + cn f (β~n ) + d1 f (β
= c1 f (β ~1 ) + · · · + d n f ( β
~n )
= fˆ(c1 β
~1 + · · · + cn β
~n ) + +fˆ(d1 β ~1 + · · · + dn β~n ).
Preservation of scalar multiplication is similar.
Three.I.2.24 Because V1 ∩ V2 = {~0V } and f is one-to-one we have that f (V1 ) ∩ f (V2 ) = {~0U }. To
finish, count the dimensions: dim(U ) = dim(V ) = dim(V1 ) + dim(V2 ) = dim(f (V1 )) + dim(f (V2 )), as
required.
Three.I.2.25 Rational numbers have many representations, e.g., 1/2 = 3/6, and the numerators can
vary among representations.
Three.II.1.20 Each of these projections is a homomorphism. Projection to the xz-plane and to the yz-plane are these maps.
\begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} x \\ 0 \\ z \end{pmatrix} \qquad \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} 0 \\ y \\ z \end{pmatrix}
Projection to the x-axis, to the y-axis, and to the z-axis are these maps.
\begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} x \\ 0 \\ 0 \end{pmatrix} \qquad \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} 0 \\ y \\ 0 \end{pmatrix} \qquad \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} 0 \\ 0 \\ z \end{pmatrix}
And projection to the origin is this map.
\begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
Verification that each is a homomorphism is straightforward. (The last one, of course, is the zero
transformation on R3 .)
Three.II.1.21 The first is not onto; for instance, there is no polynomial that is sent to the constant polynomial p(x) = 1. The second is not one-to-one; both of these members of the domain
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
are mapped to the same member of the codomain, 1 ∈ R.
Three.II.1.22 Yes; in any space id(c · ~v + d · w~ ) = c · ~v + d · w~ = c · id(~v ) + d · id(w~ ).
Three.II.1.23 (a) This map does not preserve structure since f (1 + 1) = 3, while f (1) + f (1) = 2.
Three.II.1.23 (a) This map does not preserve structure since f (1 + 1) = 3, while f (1) + f (1) = 2.
(b) The check is routine.
\begin{aligned} f( r_1 \cdot \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + r_2 \cdot \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} ) &= f\begin{pmatrix} r_1 x_1 + r_2 x_2 \\ r_1 y_1 + r_2 y_2 \end{pmatrix} \\ &= (r_1 x_1 + r_2 x_2) + 2(r_1 y_1 + r_2 y_2) \\ &= r_1 \cdot (x_1 + 2y_1) + r_2 \cdot (x_2 + 2y_2) \\ &= r_1 \cdot f\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + r_2 \cdot f\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} \end{aligned}
Three.II.1.24 Yes. Where h : V → W is linear, h(~u − ~v ) = h(~u + (−1) · ~v ) = h(~u) + (−1) · h(~v ) =
h(~u) − h(~v ).
Three.II.1.25 (a) Let ~v ∈ V be represented with respect to the basis as ~v = c1 β~1 + · · · + cn β~n . Then h(~v ) = h(c1 β~1 + · · · + cn β~n ) = c1 h(β~1 ) + · · · + cn h(β~n ) = c1 · ~0 + · · · + cn · ~0 = ~0.
(b) This argument is similar to the prior one. Let ~v ∈ V be represented with respect to the basis as ~v = c1 β~1 + · · · + cn β~n . Then h(c1 β~1 + · · · + cn β~n ) = c1 h(β~1 ) + · · · + cn h(β~n ) = c1 β~1 + · · · + cn β~n = ~v .
(c) As above, only c1 h(β~1 ) + · · · + cn h(β~n ) = c1 rβ~1 + · · · + cn rβ~n = r(c1 β~1 + · · · + cn β~n ) = r~v .
Three.II.1.26 That it is a homomorphism follows from the familiar rules that the logarithm of a
product is the sum of the logarithms ln(ab) = ln(a) + ln(b) and that the logarithm of a power is the
multiple of the logarithm ln(ar ) = r ln(a). This map is an isomorphism because it has an inverse,
namely, the exponential map, so it is a correspondence, and therefore it is an isomorphism.
Three.II.1.27 Where x̂ = x/2 and ŷ = y/3, the image set is
\{ \begin{pmatrix} \hat x \\ \hat y \end{pmatrix} \mid \frac{(2\hat x)^2}{4} + \frac{(3\hat y)^2}{9} = 1 \} = \{ \begin{pmatrix} \hat x \\ \hat y \end{pmatrix} \mid \hat x^2 + \hat y^2 = 1 \}
the unit circle in the x̂ŷ-plane.
Three.II.1.28 The circumference function r 7→ 2πr is linear. Thus we have 2π · (rearth + 6) − 2π ·
(rearth ) = 12π. Observe that it takes the same amount of extra rope to raise the circle from tightly
wound around a basketball to six feet above that basketball as it does to raise it from tightly wound
around the earth to six feet above the earth.
nontrivial relationship in the range: h(~0V) = ~0W = h(c1~v1 + ··· + cn~vn) = c1h(~v1) + ··· + cnh(~vn) = c1w~1 + ··· + cnw~n.
(b) Not necessarily. For instance, the transformation of R2 given by
(x, y) ↦ (x + y, x + y)
sends this linearly independent set in the domain to a linearly dependent image.
{~v1, ~v2} = {(1, 0), (1, 1)} ↦ {(1, 1), (2, 2)} = {w~1, w~2}
(c) Not necessarily. An example is the projection map π: R³ → R²
(x, y, z) ↦π (x, y)
and this set that does not span the domain but maps to a set that does span the codomain.
{(1, 0, 0), (0, 1, 0)} ↦π {(1, 0), (0, 1)}
(d) Not necessarily. For instance, the injection map ι: R² → R³ sends the standard basis E2 for the domain to a set that does not span the codomain. (Remark. However, the set of w~'s does span the range. A proof is easy.)
Three.II.1.34 Recall that the entry in row i and column j of the transpose of M is the entry mj,i
from row j and column i of M . Now, the check is routine.
The i, j entry of (r·A + s·B)trans is the j, i entry of r·A + s·B, which is r·aj,i + s·bj,i. And r·aj,i + s·bj,i is also the i, j entry of r·(Atrans) + s·(Btrans), so the two sides agree entry by entry.
The domain is Mm×n while the codomain is Mn×m .
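This linearity is easy to spot-check numerically. Here is a sketch in Octave (in the spirit of the Topic on computer algebra systems); the particular matrices and scalars are ours, chosen arbitrarily.
    r = 2; s = -3;                     % arbitrary scalars
    A = [1 2 3; 4 5 6];                % arbitrary 2x3 matrices
    B = [0 1 0; 2 0 2];
    disp((r*A + s*B)')                 % the transpose of the combination ...
    disp(r*A' + s*B')                  % ... equals the combination of transposes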
Three.II.1.35 (a) For any homomorphism h : Rn → Rm we have
h(ℓ) = {h(t·~u + (1 − t)·~v) | t ∈ [0..1]} = {t·h(~u) + (1 − t)·h(~v) | t ∈ [0..1]}
which is the line segment from h(~u) to h(~v ).
(b) We must show that if a subset of the domain is convex then its image, as a subset of the range, is
also convex. Suppose that C ⊆ Rn is convex and consider its image h(C). To show h(C) is convex
we must show that for any two of its members, d~1 and d~2 , the line segment connecting them
ℓ = {t·d~1 + (1 − t)·d~2 | t ∈ [0..1]}
is a subset of h(C).
Fix any member t̂ · d~1 + (1 − t̂) · d~2 of that line segment. Because the endpoints of ℓ are in the
image of C, there are members of C that map to them, say h(~c1 ) = d~1 and h(~c2 ) = d~2 . Now, where
t̂ is the scalar that is fixed in the first sentence of this paragraph, observe that h(t̂ ·~c1 + (1 − t̂) ·~c2 ) =
t̂·h(~c1) + (1 − t̂)·h(~c2) = t̂·d~1 + (1 − t̂)·d~2. Thus, any member of ℓ is a member of h(C), and so
h(C) is convex.
One half is easy — by Theorem 2.21, if h is singular then its nullspace is nontrivial (contains more
than just the zero vector). So, where ~v ≠ ~0V is in that nullspace, the singleton set {~v} is independent
while its image {h(~v )} = {~0W } is not.
For the other half, assume that h is nonsingular and so by Theorem 2.21 has a trivial nullspace.
Then for any ~v1 , . . . , ~vn ∈ V , the relation
~0W = c1 · h(~v1 ) + · · · + cn · h(~vn ) = h(c1 · ~v1 + · · · + cn · ~vn )
implies the relation c1 · ~v1 + · · · + cn · ~vn = ~0V . Hence, if a subset of V is independent then so is its
image in W .
Remark. The statement is that a linear map is nonsingular if and only if it preserves independence
for all sets (that is, if a set is independent then its image is also independent). A singular map may
well preserve some independent sets. An example is this singular map from R3 to R2 .
(x, y, z) ↦ (x + y + z, 0)
Linear independence is preserved for this set
{(1, 0, 0)} ↦ {(1, 0)}
and (in a somewhat more tricky example) also for this set
{(1, 0, 0), (0, 1, 0)} ↦ {(1, 0)}
(recall that in a set, repeated elements do not appear twice). However, there are sets whose independence is not preserved under this map;
{(1, 0, 0), (0, 2, 0)} ↦ {(1, 0), (2, 0)}
and so not all sets have independence preserved.
Three.II.2.36 (We use the notation from Theorem 1.9.) Fix a basis ⟨β~1, ..., β~n⟩ for V and a basis ⟨w~1, ..., w~k⟩ for W. If the dimension k of W is less than or equal to the dimension n of V then the theorem gives a linear map from V to W determined in this way.
β~1 ↦ w~1, ..., β~k ↦ w~k    and    β~k+1 ↦ w~k, ..., β~n ↦ w~k
We need only to verify that this map is onto.
Any member of W can be written as a linear combination of basis elements c1·w~1 + ··· + ck·w~k. This vector is the image, under the map described above, of c1·β~1 + ··· + ck·β~k + 0·β~k+1 + ··· + 0·β~n.
Thus the map is onto.
Three.II.2.37 By assumption, h is not the zero map and so a vector ~v ∈ V exists that is not in the
nullspace. Note that hh(~v )i is a basis for R, because it is a size one linearly independent subset of R.
Consequently h is onto, as for any r ∈ R we have r = c · h(~v ) for some scalar c, and so r = h(c~v ).
Thus the rank of h is one. Because the nullity is given as n, the dimension of the domain of h (the
vector space V) is n + 1. We can finish by showing {~v, β~1, ..., β~n} is linearly independent, as it is a size n + 1 subset of a dimension n + 1 space. Because {β~1, ..., β~n} is linearly independent we need only show that ~v is not a linear combination of the other vectors. But c1β~1 + ··· + cnβ~n = ~v would give −~v + c1β~1 + ··· + cnβ~n = ~0 and applying h to both sides would give a contradiction.
Three.II.2.38 Yes. For the transformation of R2 given by
(x, y) ↦h (0, x)
we have this.
N(h) = {(0, y) | y ∈ R} = R(h)
Remark. We will see more of this in the fifth chapter.
The map Φ sending h to the column vector (h(β~1), ..., h(β~n)) is an isomorphism from V∗ to Rn.
To see that Φ is one-to-one, assume that h1 and h2 are members of V ∗ such that Φ(h1 ) = Φ(h2 ).
Then
(h1(β~1), ..., h1(β~n)) = (h2(β~1), ..., h2(β~n))
and consequently, h1 (β~1 ) = h2 (β
~1 ), etc. But a homomorphism is determined by its action on a basis,
so h1 = h2 , and therefore Φ is one-to-one.
To see that Φ is onto, consider an arbitrary member (x1, ..., xn) of Rn. This function h from V to R
c1β~1 + ··· + cnβ~n ↦h c1x1 + ··· + cnxn
is easily seen to be linear, and to be mapped by Φ to the given vector in Rn , so Φ is onto.
The map Φ also preserves structure: where
c1β~1 + ··· + cnβ~n ↦h1 c1h1(β~1) + ··· + cnh1(β~n)
c1β~1 + ··· + cnβ~n ↦h2 c1h2(β~1) + ··· + cnh2(β~n)
we have
(r1h1 + r2h2)(c1β~1 + ··· + cnβ~n) = c1(r1h1(β~1) + r2h2(β~1)) + ··· + cn(r1h1(β~n) + r2h2(β~n))
= r1(c1h1(β~1) + ··· + cnh1(β~n)) + r2(c1h2(β~1) + ··· + cnh2(β~n))
so Φ(r1 h1 + r2 h2 ) = r1 Φ(h1 ) + r2 Φ(h2 ).
Three.II.2.43 Let h: V → W be linear and fix a basis ⟨β~1, ..., β~n⟩ for V. Consider these n maps from V to W
h1(~v) = c1·h(β~1), h2(~v) = c2·h(β~2), ..., hn(~v) = cn·h(β~n)
for any ~v = c1β~1 + ··· + cnβ~n. Clearly h is the sum of the hi's. We need only check that each hi is linear: where ~u = d1β~1 + ··· + dnβ~n we have hi(r~v + s~u) = (rci + sdi)·h(β~i) = r·hi(~v) + s·hi(~u).
Three.II.2.44 Either yes (trivially) or no (nearly trivially).
If V ‘is homomorphic to’ W is taken to mean there is a homomorphism from V into (but not
necessarily onto) W , then every space is homomorphic to every other space as a zero map always
exists.
If V ‘is homomorphic to’ W is taken to mean there is an onto homomorphism from V to W then the
relation is not an equivalence. For instance, there is an onto homomorphism from R3 to R2 (projection
is one) but no homomorphism from R2 onto R3 by Corollary 2.17, so the relation is not reflexive.∗
Three.II.2.45 That they form the chains is obvious. For the rest, we show here that R(tj+1 ) = R(tj )
implies that R(tj+2 ) = R(tj+1 ). Induction then applies.
Assume that R(tj+1 ) = R(tj ). Then t : R(tj+1 ) → R(tj+2 ) is the same map, with the same
domain, as t : R(tj ) → R(tj+1 ). Thus it has the same range: R(tj+2 ) = R(tj+1 ).
∗ More information on equivalence relations is in the appendix.
(a) The basis vectors from the domain have these images
1 ↦ 0    x ↦ 1    x² ↦ 2x    ...
and these images are represented with respect to the codomain's basis in this way.
RepB(0) = (0, 0, ..., 0)    RepB(1) = (1, 0, ..., 0)    RepB(2x) = (0, 2, 0, ..., 0)    ...    RepB(nx^(n−1)) = (0, ..., 0, n, 0)
The matrix
RepB,B(d/dx) = [0 1 0 ... 0; 0 0 2 ... 0; ...; 0 0 0 ... n; 0 0 0 ... 0]
has n + 1 rows and columns.
(b) Once the images under this map of the domain's basis vectors are determined
1 ↦ x    x ↦ x²/2    x² ↦ x³/3    ...
then they can be represented with respect to the codomain's basis
RepBn+1(x) = (0, 1, 0, ..., 0)    RepBn+1(x²/2) = (0, 0, 1/2, ..., 0)    ...    RepBn+1(x^(n+1)/(n + 1)) = (0, ..., 0, 1/(n + 1))
and put together to make the matrix.
RepBn,Bn+1(∫) = [0 0 ... 0 0; 1 0 ... 0 0; 0 1/2 ... 0 0; ...; 0 0 ... 0 1/(n + 1)]
(c) The images of the basis vectors of the domain are
1 ↦ 1    x ↦ 1/2    x² ↦ 1/3    ...
and they are represented with respect to the codomain's basis as
RepE1(1) = 1    RepE1(1/2) = 1/2    ...
so the matrix is
RepB,E1(∫) = (1 1/2 ··· 1/n 1/(n + 1))
(this is a 1×(n + 1) matrix).
(d) Here, the images of the domain's basis vectors are
1 ↦ 1    x ↦ 3    x² ↦ 9    ...
and they are represented in the codomain as
RepE1(1) = 1    RepE1(3) = 3    RepE1(9) = 9    ...
and so the matrix is this.
(1 3 9 ··· 3^n)
(e) The images of the basis vectors from the domain are
1 ↦ 1    x ↦ x + 1 = 1 + x    x² ↦ (x + 1)² = 1 + 2x + x²    x³ ↦ (x + 1)³ = 1 + 3x + 3x² + x³    ...
which are represented as
RepB(1) = (1, 0, 0, ..., 0)    RepB(1 + x) = (1, 1, 0, ..., 0)    RepB(1 + 2x + x²) = (1, 2, 1, 0, ..., 0)    ...
and consequently, the matrix representing the transformation is this.
[0 0 0 0; 1 0 0 0; 0 1 0 0; 0 0 1 0]
(b)
[0 0 0 0; 1 0 0 0; 0 0 0 0; 0 0 1 0]
(c)
[0 0 0 0; 1 0 0 0; 0 1 0 0; 0 0 0 0]
Three.III.1.21 (a) The picture of ds: R² → R² is this.
[Figure: ~u and ~v, and their images ds(~u) and ds(~v) under the dilation ds.]
This map's effect on the vectors in the standard basis for the domain is
(1, 0) ↦ds (s, 0)    (0, 1) ↦ds (0, s)
and those images are represented with respect to the codomain's basis (again, the standard basis) by themselves.
RepE2((s, 0)) = (s, 0)    RepE2((0, s)) = (0, s)
Thus the representation of the dilation map is this.
RepE2,E2(ds) = [s 0; 0 s]
(b) The picture of fℓ: R² → R² is this.
[Figure: reflection fℓ of a vector over the line ℓ.]
Three.III.2.9 (a) Yes; we are asking if there are scalars c1 and c2 such that
c1·(2, 2) + c2·(1, 5) = (1, −3)
which gives rise to a linear system
2c1 + c2 = 1
2c1 + 5c2 = −3
that −ρ1+ρ2 reduces to
2c1 + c2 = 1
4c2 = −4
and Gauss’ method produces c2 = −1 and c1 = 1. That is, there is indeed such a pair of scalars and
so the vector is indeed in the column space of the matrix.
(b) No; we are asking if there are scalars c1 and c2 such that
c1·(4, 2) + c2·(−8, −4) = (0, 1)
and one way to proceed is to consider the resulting linear system
4c1 − 8c2 = 0
2c1 − 4c2 = 1
that is easily seen to have no solution. Another way to proceed is to note that any linear combination
of the columns on the left has a second component half as big as its first component, but the vector
on the right does not meet that criterion.
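For readers with Octave at hand (as in the Topic on computer algebra systems), part (a) can be spot-checked; the variable names here are ours.
    A = [2 1; 2 5];                    % columns are the two given vectors
    v = [1; -3];
    c = A \ v                          % gives c1 = 1, c2 = -1
    disp(A*c - v)                      % zero residual: v is in the column space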
So, for this first one, we are asking whether there are scalars such that
c1·(1, 0) + c2·(1, 1) + c3·(3, 4) = (1, 3)
that is, whether the vector is in the column space of the matrix.
(a) Yes. We can get this conclusion by setting up the resulting linear system and applying Gauss’
method, as usual. Another way to get it is to note by inspection of the equation of columns that
taking c3 = 3/4, and c1 = −5/4, and c2 = 0 will do. Still a third way to get this conclusion is to
note that the rank of the matrix is two, which equals the dimension of the codomain, and so the
map is onto — the range is all of R2 and in particular includes the given vector.
(b) No; note that all of the columns in the matrix have a second component that is twice the first,
while the vector does not. Alternatively, the column space of the matrix is
{c1·(2, 4) + c2·(0, 0) + c3·(3, 6) | c1, c2, c3 ∈ R} = {c·(1, 2) | c ∈ R}
(which is the fact already noted, but was arrived at by calculation rather than inspiration), and the
given vector is not in this set.
Three.III.2.11 (a) The first member of the basis, (0, 1) = ((1, 0))_B, is mapped to ((1/2, −1/2))_D, which is this member of the codomain.
(1/2)·(1, 1) − (1/2)·(1, −1) = (0, 1)
(b) The second member of the basis, (1, 0) = ((0, 1))_B, is mapped to ((1/2, 1/2))_D, which is this member of the codomain.
(1/2)·(1, 1) + (1/2)·(1, −1) = (1, 0)
(c) Because the map that the matrix represents is the identity map on the basis, it must be the identity on all members of the domain. We can come to the same conclusion in another way by considering
(x, y) = ((y, x))_B
stretches vectors by a factor of three in the x direction and by a factor of two in the y direction. The second map
(x, y) = ((x, y))_E2 ↦ ((y, x))_E2 = (y, x)
interchanges first and second components (that is, it is a reflection about the line y = x). The last
(x, y) = ((x, y))_E2 ↦ ((x + 3y, y))_E2 = (x + 3y, y)
slides vectors parallel to the x axis, by an amount equal to three times their distance from that axis (this is a skew).
Three.III.2.20 (a) This is immediate from Theorem 2.3.
(b) Yes. This is immediate from the prior item.
To give a specific example, we can start with E3 as the basis for the domain, and then we require a basis D for the codomain R³. The matrix H gives the action of the map as this
(1, 0, 0) = ((1, 0, 0))_E3 ↦ ((1, 2, 0))_D    (0, 1, 0) = ((0, 1, 0))_E3 ↦ ((0, 0, 1))_D    (0, 0, 1) = ((0, 0, 1))_E3 ↦ ((0, 0, 0))_D
the matrix with entries g1, ..., gn has the desired action.
~v = (v1, ..., vn) ↦ g1v1 + ··· + gnvn
(c) No. If ~x has any nonzero entries then h~x cannot be the zero map (and if ~x is the zero vector then
h~x can only be the zero map).
Three.III.2.22 See the following section.
Rotating rx first and then ry is different than rotating ry first and then rx . In particular, rx (~e3 ) = −~e2
so ry ◦ rx (~e3 ) = −~e2 , while ry (~e3 ) = ~e1 so rx ◦ ry (~e3 ) = ~e1 , and hence the maps do not commute.
Three.IV.2.27 It doesn’t matter (as long as the spaces have the appropriate dimensions).
For associativity, suppose that F is m × r, that G is r × n, and that H is n × k. We can take
any r dimensional space, any m dimensional space, any n dimensional space, and any k dimensional
space — for instance, Rr , Rm , Rn , and Rk will do. We can take any bases A, B, C, and D, for those
spaces. Then, with respect to C, D the matrix H represents a linear map h, with respect to B, C the
matrix G represents a g, and with respect to A, B the matrix F represents an f . We can use those
maps in the proof.
The second half is done similarly, except that G and H are added and so we must take them to
represent maps with the same domain and codomain.
Three.IV.2.28 (a) The product of rank n matrices can have rank less than or equal to n but not
greater than n.
To see that the rank can fall, consider the maps πx , πy : R2 → R2 projecting onto the axes. Each
is rank one but their composition πx ◦πy , which is the zero map, is rank zero. That can be translated
over to matrices representing those maps in this way.
RepE2,E2(πx) · RepE2,E2(πy) = [1 0; 0 0][0 0; 0 1] = [0 0; 0 0]
To prove that the product of rank n matrices cannot have rank greater than n, we can apply the
map result that the image of a linearly dependent set is linearly dependent. That is, if h : V → W
and g : W → X both have rank n then a set in the range R(g ◦ h) of size larger than n is the image
under g of a set in W of size larger than n and so is linearly dependent (since the rank of h is n).
Now, the image of a linearly dependent set is dependent, so any set of size larger than n in the range
is dependent. (By the way, observe that the rank of g was not mentioned. See the next part.)
(b) Fix spaces and bases and consider the associated linear maps f and g. Recall that the dimension
of the image of a map (the map’s rank) is less than or equal to the dimension of the domain, and
consider the arrow diagram.
V −f→ R(f) −g→ R(g ◦ f)
First, the image of R(f ) must have dimension less than or equal to the dimension of R(f ), by the
prior sentence. On the other hand, R(f ) is a subset of the domain of g, and thus its image has
dimension less than or equal the dimension of the domain of g. Combining those two, the rank of a
composition is less than or equal to the minimum of the two ranks.
The matrix fact follows immediately.
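A quick Octave sketch of this rank fact, using the axis projections mentioned above (the matrices are their standard representations).
    A = [1 0; 0 0];                    % projection onto the x-axis, rank one
    B = [0 0; 0 1];                    % projection onto the y-axis, rank one
    rank(A), rank(B), rank(A*B)        % prints 1, 1, 0: the product's rank fell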
and their product (in either order) is the zero matrix.
Three.IV.2.31 Note that (S + T )(S − T ) = S 2 − ST + T S − T 2 , so a reasonable try is to look at
matrices that do not commute, so that −ST and TS don't cancel: with
S = [1 2; 3 4]    T = [5 6; 7 8]
we have the desired inequality.
(S + T)(S − T) = [−56 −56; −88 −88]    S² − T² = [−60 −68; −76 −84]
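This inequality is easy to reproduce in Octave with the same two matrices.
    S = [1 2; 3 4]; T = [5 6; 7 8];
    disp((S+T)*(S-T))                  % [-56 -56; -88 -88]
    disp(S^2 - T^2)                    % [-60 -68; -76 -84], a different matrix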
Three.IV.2.32 Because the identity map acts on the basis B as β~1 ↦ β~1, ..., β~n ↦ β~n, the representation is the identity matrix.
[1 0 ... 0; 0 1 ... 0; ...; 0 0 ... 1]
The second part of the question is obvious from Theorem 2.6.
Three.IV.2.33 Here are four solutions.
T = [±1 0; 0 ±1]
Three.IV.2.34 (a) The vector space M2×2 has dimension four. The set {T⁴, ..., T, I} has five elements and thus is linearly dependent.
(b) Where T is n×n, generalizing the argument from the prior item shows that there is such a polynomial of degree n² or less, since {T^(n²), ..., T, I} is an (n² + 1)-member subset of the n²-dimensional space Mn×n.
(c) First compute the powers
T² = [1/2 −√3/2; √3/2 1/2]    T³ = [0 −1; 1 0]    T⁴ = [−1/2 −√3/2; √3/2 −1/2]
(observe that rotating by π/6 three times results in a rotation by π/2, which is indeed what T³ represents). Then set c4T⁴ + c3T³ + c2T² + c1T + c0I equal to the zero matrix
c4·[−1/2 −√3/2; √3/2 −1/2] + c3·[0 −1; 1 0] + c2·[1/2 −√3/2; √3/2 1/2] + c1·[√3/2 −1/2; 1/2 √3/2] + c0·[1 0; 0 1] = [0 0; 0 0]
(the first equality holds by using the distributive law to multiply the g’s through, the second equality
represents the use of associativity of reals, the third follows by commutativity of reals, and the fourth
comes from using the distributive law to factor the v’s out).
Three.IV.3.23 (a) The second matrix has its first row multiplied by 3 and its second row multiplied by 0.
[3 6; 0 0]
(b) The second matrix has its first row multiplied by 4 and its second row multiplied by 2.
[4 8; 6 8]
(c) The second matrix undergoes the pivot operation of replacing the second row with −2 times the first row added to the second.
[1 2; 1 0]
(d) The first matrix undergoes the column operation of: the second column is replaced by −1 times the first column plus the second.
[1 1; 3 1]
(e) The first matrix has its columns swapped.
[2 1; 4 3]
Three.IV.3.24 (a) The incidence matrix is this (e.g., the first row shows that there is only one
connection including Burlington, the road to Winooski).
0 0 0 0 1
0 0 1 1 1
0 1 0 1 0
0 1 1 0 0
1 1 0 0 0
(b) Because these are two-way roads, any road connecting city i to city j gives a connection between
city j and city i.
(c) The square of the incidence matrix tells how cities are connected by trips involving two roads.
Three.IV.3.25 The pay due each person appears in the matrix product of the two arrays.
Three.IV.3.26 The product is the identity matrix (recall that cos2 θ + sin2 θ = 1). An explanation
is that the given matrix represents, with respect to the standard bases, a rotation in R2 of θ radians
while the transpose represents a rotation of −θ radians. The two cancel.
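A numeric illustration in Octave, for an arbitrarily chosen angle:
    theta = pi/6;                      % any angle gives the same conclusion
    R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
    disp(R * R')                       % the identity matrix, up to rounding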
Three.IV.3.27 The set of diagonal matrices is nonempty as the zero matrix is diagonal. Clearly it is
closed under scalar multiples and sums. Therefore it is a subspace. The dimension is n; here is a basis.
{[1 0 ... 0; 0 0 ... 0; ...; 0 0 ... 0], ..., [0 0 ... 0; ...; 0 0 ... 1]}
that is, the n matrices that are all zeroes except for a single diagonal 1.
Three.IV.3.28 No. In P1, with respect to the unequal bases B = ⟨1, x⟩ and D = ⟨1 + x, 1 − x⟩, the identity transformation is represented by this matrix.
RepB,D(id) = [1/2 1/2; 1/2 −1/2]_B,D
Three.IV.3.29 For any scalar r and square matrix H we have (rI)H = r(IH) = rH = r(HI) =
(Hr)I = H(rI).
There are no other such matrices; here is an argument for 2×2 matrices that is easily extended to
n×n. If a matrix commutes with all others then it commutes with this unit matrix.
[0 a; 0 c] = [a b; c d][0 1; 0 0]    and    [0 1; 0 0][a b; c d] = [c d; 0 0]
From this we first conclude that the upper left entry a must equal its lower right entry d. We also
conclude that the lower left entry c is zero. The argument for the upper right entry b is similar.
Three.V.1.6 For the matrix to change bases from D to E2 we need that RepE2 (id(~δ1 )) = RepE2 (~δ1 )
and that RepE2 (id(~δ2 )) = RepE2 (~δ2 ). Of course, the representation of a vector in R2 with respect to
the standard basis is easy.
RepE2(~δ1) = (2, 1)    RepE2(~δ2) = (−2, 4)
Concatenating those two together to make the columns of the change of basis matrix gives this.
RepD,E2(id) = [2 −2; 1 4]
The change of basis matrix in the other direction can be gotten by calculating RepD (id(~e1 )) = RepD (~e1 )
and RepD (id(~e2 )) = RepD (~e2 ) (this job is routine) or it can be found by taking the inverse of the above
matrix. Because of the formula for the inverse of a 2×2 matrix, this is easy.
RepE2,D(id) = (1/10)·[4 2; −1 2] = [4/10 2/10; −1/10 2/10]
Three.V.1.7 In each case, the columns RepD(id(β~1)) = RepD(β~1) and RepD(id(β~2)) = RepD(β~2) are concatenated to make the change of basis matrix RepB,D(id).
(a) [0 1; 1 0]    (b) [2 −1/2; −1 1/2]    (c) [1 1; 2 4]    (d) [1 −1; −1 2]
Three.V.1.8 One way to go is to find RepB(~δ1) and RepB(~δ2), and then concatenate them into the columns of the desired change of basis matrix. Another way is to find the inverse of the matrices that answer Exercise 7.
(a) [0 1; 1 0]    (b) [1 1; 2 4]    (c) [2 −1/2; −1 1/2]    (d) [2 1; 1 1]
Three.V.1.9 The column vector representations RepD(id(β~1)) = RepD(β~1), RepD(id(β~2)) = RepD(β~2), and RepD(id(β~3)) = RepD(β~3) make the change of basis matrix RepB,D(id).
(a) [0 0 1; 1 0 0; 0 1 0]    (b) [1 −1 0; 0 1 −1; 0 0 1]    (c) [1 −1 1/2; 1 1 −1/2; 0 2 0]
E.g., for the first column of the first matrix, 1 = 0 · x2 + 1 · 1 + 0 · x.
Three.V.1.10 A matrix changes bases if and only if it is nonsingular.
(a) This matrix is nonsingular and so changes bases. Finding to what basis E2 is changed means finding D such that
RepE2,D(id) = [5 0; 0 4]
and by the definition of how a matrix represents a linear map, we have this.
RepD(id(~e1)) = RepD(~e1) = (5, 0)    RepD(id(~e2)) = RepD(~e2) = (0, 4)
Where
D = ⟨(x1, y1), (x2, y2)⟩
~δ2 = (sin(2θ), −cos(2θ))
This map reflects vectors over that line. Since reflections are self-inverse, the answer to the question
is: the original map reflects about the line through the origin with angle of elevation θ. (Of course, it
does this to any basis.)
Three.V.1.14 The appropriately-sized identity matrix.
Three.V.1.15 Each is true if and only if the matrix is nonsingular.
Three.V.1.16 What remains to be shown is that left multiplication by a reduction matrix represents a change from another basis to B = ⟨β~1, ..., β~n⟩.
Application of a row-multiplication matrix Mi(k) translates a representation with respect to the basis ⟨β~1, ..., kβ~i, ..., β~n⟩ to one with respect to B, as here.
~v = c1·β~1 + ··· + ci·(kβ~i) + ··· + cn·β~n ↦ c1·β~1 + ··· + (kci)·β~i + ··· + cn·β~n = ~v
Applying a row-swap matrix Pi,j translates a representation with respect to ⟨β~1, ..., β~j, ..., β~i, ..., β~n⟩ to one with respect to ⟨β~1, ..., β~i, ..., β~j, ..., β~n⟩. Finally, applying a row-combination matrix Ci,j(k) changes a representation with respect to ⟨β~1, ..., β~i + kβ~j, ..., β~j, ..., β~n⟩ to one with respect to B.
~v = c1·β~1 + ··· + ci·(β~i + kβ~j) + ··· + cjβ~j + ··· + cn·β~n ↦ c1·β~1 + ··· + ci·β~i + ··· + (kci + cj)·β~j + ··· + cn·β~n = ~v
(As in the part of the proof in the body of this subsection, the various conditions on the row operations, e.g., that the scalar k is nonzero, assure that these are all bases.)
(As in the part of the proof in the body of this subsection, the various conditions on the row operations,
e.g., that the scalar k is nonzero, assure that these are all bases.)
Three.V.1.17 Taking H as a change of basis matrix H = RepB,En(id), its columns are
(h1,i, ..., hn,i) = RepEn(id(β~i)) = RepEn(β~i)
and, because representations with respect to the standard basis are transparent, we have this.
(h1,i, ..., hn,i) = β~i
That is, the basis is the one composed of the columns of H.
Three.V.1.18 (a) We can change the starting vector representation to the ending one through a sequence of row operations. The proof tells us how the bases change. We start by swapping the first and second rows of the representation with respect to B to get a representation with respect to a new basis B1.
RepB1(1 − x + 3x² − x³) = (1, 0, 1, 2)_B1    B1 = ⟨1 − x, 1 + x, x² + x³, x² − x³⟩
We next add −2 times the third row of the vector representation to the fourth row.
RepB2(1 − x + 3x² − x³) = (1, 0, 1, 0)_B2    B2 = ⟨1 − x, 1 + x, 3x² − x³, x² − x³⟩
(The third element of B2 is the third element of B1 minus −2 times the fourth element of B1.) Now we can finish by doubling the third row.
RepD(1 − x + 3x² − x³) = (1, 0, 2, 0)_D    D = ⟨1 − x, 1 + x, (3x² − x³)/2, x² − x³⟩
Three.VI.1.7 Each is a straightforward application of the formula from Definition 1.1.
(a) ((2, 1)·(3, −2))/((3, −2)·(3, −2)) · (3, −2) = (4/13)·(3, −2) = (12/13, −8/13)
(b) ((2, 1)·(3, 0))/((3, 0)·(3, 0)) · (3, 0) = (6/9)·(3, 0) = (2, 0)
(c) ((1, 1, 4)·(1, 2, −1))/((1, 2, −1)·(1, 2, −1)) · (1, 2, −1) = (−1/6)·(1, 2, −1) = (−1/6, −1/3, 1/6)
(d) ((1, 1, 4)·(3, 3, 12))/((3, 3, 12)·(3, 3, 12)) · (3, 3, 12) = (1/3)·(3, 3, 12) = (1, 1, 4)
Three.VI.1.8 (a) ((2, −1, 4)·(−3, 1, −3))/((−3, 1, −3)·(−3, 1, −3)) · (−3, 1, −3) = (−19/19)·(−3, 1, −3) = (3, −1, 3)
(b) Writing the line as {c·(1, 3) | c ∈ R} gives this projection.
((−1, −1)·(1, 3))/((1, 3)·(1, 3)) · (1, 3) = (−4/10)·(1, 3) = (−2/5, −6/5)
Three.VI.2.9 (a)
~κ1 = (1, 1)
~κ2 = (2, 1) − proj[~κ1]((2, 1)) = (2, 1) − ((2, 1)·(1, 1))/((1, 1)·(1, 1)) · (1, 1) = (2, 1) − (3/2)·(1, 1) = (1/2, −1/2)
(b)
~κ1 = (0, 1)
~κ2 = (−1, 3) − proj[~κ1]((−1, 3)) = (−1, 3) − ((−1, 3)·(0, 1))/((0, 1)·(0, 1)) · (0, 1) = (−1, 3) − 3·(0, 1) = (−1, 0)
(c)
~κ1 = (0, 1)
~κ2 = (−1, 0) − proj[~κ1]((−1, 0)) = (−1, 0) − ((−1, 0)·(0, 1))/((0, 1)·(0, 1)) · (0, 1) = (−1, 0) − 0·(0, 1) = (−1, 0)
The corresponding orthonormal bases for the three parts of this question are these.
⟨(1/√2, 1/√2), (√2/2, −√2/2)⟩    ⟨(0, 1), (−1, 0)⟩    ⟨(0, 1), (−1, 0)⟩
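The Gram-Schmidt steps translate directly into Octave; this sketch redoes part (a).
    b1 = [1; 1]; b2 = [2; 1];
    k1 = b1;
    k2 = b2 - (b2'*k1)/(k1'*k1)*k1     % gives (1/2, -1/2)
    disp(k1'*k2)                       % zero: the pair is orthogonal
    disp([k1/norm(k1) k2/norm(k2)])    % the orthonormal pair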
So we take the basis
⟨(−1, −2, 1, 0), (0, 1, 0, 1)⟩
go through the Gram-Schmidt process
~κ1 = (−1, −2, 1, 0)
~κ2 = (0, 1, 0, 1) − proj[~κ1]((0, 1, 0, 1)) = (0, 1, 0, 1) − ((0, 1, 0, 1)·(−1, −2, 1, 0))/((−1, −2, 1, 0)·(−1, −2, 1, 0)) · (−1, −2, 1, 0) = (0, 1, 0, 1) − (−2/6)·(−1, −2, 1, 0) = (−1/3, 1/3, 1/3, 1)
and finish by normalizing.
⟨(−1/√6, −2/√6, 1/√6, 0), (−√3/6, √3/6, √3/6, √3/2)⟩
Three.VI.2.13 A linearly independent subset of Rn is a basis for its own span. Apply Theorem 2.7.
Remark. Here’s why the phrase ‘linearly independent’ is in the question. Dropping the phrase
would require us to worry about two things. The first thing to worry about is that when we do the
Gram-Schmidt process on a linearly dependent set then we get some zero vectors. For instance, with
S = {(1, 2), (3, 6)}
we would get this.
~κ1 = (1, 2)    ~κ2 = (3, 6) − proj[~κ1]((3, 6)) = (0, 0)
The vector ~v − (proj[~κ1](~v) + proj[~κ2](~v)) lies on the dotted line connecting the black vector to
gray one, that is, it is orthogonal to the xy-plane.
(c) This diagram is gotten by following the hint.
The dashed triangle has a right angle where the gray vector 1 · ~e1 + 2 · ~e2 meets the vertical dashed
line ~v − (1 · ~e1 + 2 · ~e2 ); this is what was proved in the first item of this question. The Pythagorean
theorem then gives that the hypotenuse — the segment from ~v to any other vector — is longer than
the vertical dashed line.
More formally, writing proj[~κ1 ] (~v ) + · · · + proj[~vk ] (~v ) as c1 · ~κ1 + · · · + ck · ~κk , consider any other
vector in the span d1 · ~κ1 + · · · + dk · ~κk . Note that
~v − (d1·~κ1 + ··· + dk·~κk)
= (~v − (c1·~κ1 + ··· + ck·~κk)) + ((c1·~κ1 + ··· + ck·~κk) − (d1·~κ1 + ··· + dk·~κk))
and that
(~v − (c1·~κ1 + ··· + ck·~κk)) · ((c1·~κ1 + ··· + ck·~κk) − (d1·~κ1 + ··· + dk·~κk)) = 0
(because the first item shows the ~v − (c1 · ~κ1 + · · · + ck · ~κk ) is orthogonal to each ~κ and so it is
orthogonal to this linear combination of the ~κ’s). Now apply the Pythagorean Theorem (i.e., the
Triangle Inequality).
Three.VI.2.16 One way to proceed is to find a third vector so that the three together make a basis
for R3 , e.g.,
β~3 = (1, 0, 0)
(the second vector is not dependent on the third because it has a nonzero second component, and the
first is not dependent on the second and third because of its nonzero third component), and then apply
~κi+1 = β~i+1 − (β~i+1·~κ1)/(~κ1·~κ1) · ~κ1 − (β~i+1·~κ2)/(~κ2·~κ2) · (linear combination of β~1, β~2) − ··· − (β~i+1·~κi)/(~κi·~κi) · (linear combination of β~1, ..., β~i)
The fractions are scalars so this is a linear combination of linear combinations of β~1, ..., β~i+1. It is therefore just a linear combination of β~1, ..., β~i+1. Now, (i) it cannot sum to the zero vector because the equation would then describe a nontrivial linear relationship among the β~'s that are given as members of a basis (the relationship is nontrivial because the coefficient of β~i+1 is 1). Also, (ii) the equation gives ~κi+1 as a combination of β~1, ..., β~i+1. Finally, for (iii), consider ~κj·~κi+1; as in the i = 3 case, the dot product of ~κj with ~κi+1 = β~i+1 − proj[~κ1](β~i+1) − ··· − proj[~κi](β~i+1) can be rewritten to give two kinds of terms, ~κj·(β~i+1 − proj[~κj](β~i+1)) (which is zero because the projection is orthogonal) and ~κj·proj[~κm](β~i+1) with m ≠ j and m < i + 1 (which is zero because by the hypothesis (iii) the vectors ~κj and ~κm are orthogonal).
an orthogonal basis for the space h~κ1 , . . . , ~κk , ~κk+1 , . . . , ~κn i such that the first half h~κ1 , . . . , ~κk i is a
basis for M and the second half is a basis for M ⊥ . The proof also checks that each vector in the
space is the sum of its orthogonal projections onto the lines spanned by these basis vectors.
~v = proj[~κ1 ] (~v ) + · · · + proj[~κn ] (~v )
Because ~v ∈ (M ⊥ )⊥ , it is perpendicular to every vector in M ⊥ , and so the projections in the second
half are all zero. Thus ~v = proj[~κ1 ] (~v ) + · · · + proj[~κk ] (~v ), which is a linear combination of vectors
from M , and so ~v ∈ M . (Remark. Here is a slicker way to do the second half: write the space both
as M ⊕ M ⊥ and as M ⊥ ⊕ (M ⊥ )⊥ . Because the first half showed that M ⊆ (M ⊥ )⊥ and the prior
sentence shows that the dimension of the two subspaces M and (M ⊥ )⊥ are equal, we can conclude
that M equals (M ⊥ )⊥ .)
(b) Because M ⊆ N , any ~v that is perpendicular to every vector in N is also perpendicular to every
vector in M . But that sentence simply says that N ⊥ ⊆ M ⊥ .
(c) We will again show that the sets are equal by mutual inclusion. The first direction is easy; any ~v perpendicular to every vector in M + N = {m~ + ~n | m~ ∈ M, ~n ∈ N} is perpendicular to every vector of the form m~ + ~0 (that is, every vector in M) and every vector of the form ~0 + ~n (every vector in N), and so (M + N)⊥ ⊆ M⊥ ∩ N⊥. The second direction is also routine; any vector ~v ∈ M⊥ ∩ N⊥ is perpendicular to any vector of the form cm~ + d~n because ~v·(cm~ + d~n) = c·(~v·m~) + d·(~v·~n) = c·0 + d·0 = 0.
Three.VI.3.24 (a) The representation of
(v1, v2, v3) ↦f 1v1 + 2v2 + 3v3
is this.
RepE3,E1(f) = (1 2 3)
By the definition of f
N(f) = {(v1, v2, v3) | 1v1 + 2v2 + 3v3 = 0} = {(v1, v2, v3) | (1, 2, 3)·(v1, v2, v3) = 0}
Data on the progression of the world’s records (taken from the Runner’s World web site) is below.
1 As with the first example discussed above, we are trying to find a best m to “solve” this system.
8m = 4
16m = 9
24m = 13
32m = 17
40m = 20
Projecting into the linear subspace gives this
((4, 9, 13, 17, 20)·(8, 16, 24, 32, 40))/((8, 16, 24, 32, 40)·(8, 16, 24, 32, 40)) · (8, 16, 24, 32, 40) = (1832/3520)·(8, 16, 24, 32, 40)
so the slope of the line of best fit is approximately 0.52.
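The slope is one line of Octave; the data vectors below are the two columns of the system above.
    v = [8; 16; 24; 32; 40];
    b = [4; 9; 13; 17; 20];
    m = (b'*v)/(v'*v)                  % 1832/3520, about 0.52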
[Plot: the five data points and the best-fit line through the origin.]
Progression of Men's Mile Record | Progression of Men's 1500 Meter Record | Progression of Women's Mile Record
time name date time name date time name date
4:52.0 Cadet Marshall (GBR) 02Sep52 4:09.0 John Bray (USA) 30May00 6:13.2 Elizabeth Atkinson (GBR) 24Jun21
4:45.0 Thomas Finch (GBR) 03Nov58 4:06.2 Charles Bennett (GBR) 15Jul00 5:27.5 Ruth Christmas (GBR) 20Aug32
4:40.0 Gerald Surman (GBR) 24Nov59 4:05.4 James Lightbody (USA) 03Sep04 5:24.0 Gladys Lunn (GBR) 01Jun36
4:33.0 George Farran (IRL) 23May62 3:59.8 Harold Wilson (GBR) 30May08 5:23.0 Gladys Lunn (GBR) 18Jul36
4:29 3/5 Walter Chinnery (GBR) 10Mar68 3:59.2 Abel Kiviat (USA) 26May12 5:20.8 Gladys Lunn (GBR) 08May37
4:28 4/5 William Gibbs (GBR) 03Apr68 3:56.8 Abel Kiviat (USA) 02Jun12 5:17.0 Gladys Lunn (GBR) 07Aug37
4:28 3/5 Charles Gunton (GBR) 31Mar73 3:55.8 Abel Kiviat (USA) 08Jun12 5:15.3 Evelyne Forster (GBR) 22Jul39
4:26.0 Walter Slade (GBR) 30May74 3:55.0 Norman Taber (USA) 16Jul15 5:11.0 Anne Oliver (GBR) 14Jun52
4:24 1/2 Walter Slade (GBR) 19Jun75 3:54.7 John Zander (SWE) 05Aug17 5:09.8 Enid Harding (GBR) 04Jul53
4:23 1/5 Walter George (GBR) 16Aug80 3:53.0 Paavo Nurmi (FIN) 23Aug23 5:08.0 Anne Oliver (GBR) 12Sep53
4:19 2/5 Walter George (GBR) 03Jun82 3:52.6 Paavo Nurmi (FIN) 19Jun24 5:02.6 Diane Leather (GBR) 30Sep53
4:18 2/5 Walter George (GBR) 21Jun84 3:51.0 Otto Peltzer (GER) 11Sep26 5:00.3 Edith Treybal (ROM) 01Nov53
4:17 4/5 Thomas Conneff (USA) 26Aug93 3:49.2 Jules Ladoumegue (FRA) 05Oct30 5:00.2 Diane Leather (GBR) 26May54
4:17.0 Fred Bacon (GBR) 06Jul95 3:49.0 Luigi Beccali (ITA) 17Sep33 4:59.6 Diane Leather (GBR) 29May54
4:15 3/5 Thomas Conneff (USA) 28Aug95 3:48.8 William Bonthron (USA) 30Jun34 4:50.8 Diane Leather (GBR) 24May55
4:15 2/5 John Paul Jones (USA) 27May11 3:47.8 Jack Lovelock (NZL) 06Aug36 4:45.0 Diane Leather (GBR) 21Sep55
4:14.4 John Paul Jones (USA) 31May13 3:47.6 Gunder Hagg (SWE) 10Aug41 4:41.4 Marise Chamberlain (NZL) 08Dec62
4:12.6 Norman Taber (USA) 16Jul15 3:45.8 Gunder Hagg (SWE) 17Jul42 4:39.2 Anne Smith (GBR) 13May67
4:10.4 Paavo Nurmi (FIN) 23Aug23 3:45.0 Arne Andersson (SWE) 17Aug43 4:37.0 Anne Smith (GBR) 03Jun67
4:09 1/5 Jules Ladoumegue (FRA) 04Oct31 3:43.0 Gunder Hagg (SWE) 07Jul44 4:36.8 Maria Gommers (HOL) 14Jun69
4:07.6 Jack Lovelock (NZL) 15Jul33 3:42.8 Wes Santee (USA) 04Jun54 4:35.3 Ellen Tittel (FRG) 20Aug71
4:06.8 Glenn Cunningham (USA) 16Jun34 3:41.8 John Landy (AUS) 21Jun54 4:34.9 Glenda Reiser (CAN) 07Jul73
4:06.4 Sydney Wooderson (GBR) 28Aug37 3:40.8 Sandor Iharos (HUN) 28Jul55 4:29.5 Paola Pigni-Cacchi (ITA) 08Aug73
4:06.2 Gunder Hagg (SWE) 01Jul42 3:40.6 Istvan Rozsavolgyi (HUN) 03Aug56 4:23.8 Natalia Marasescu (ROM) 21May77
4:04.6 Gunder Hagg (SWE) 04Sep42 3:40.2 Olavi Salsola (FIN) 11Jul57 4:22.1 Natalia Marasescu (ROM) 27Jan79
4:02.6 Arne Andersson (SWE) 01Jul43 3:38.1 Stanislav Jungwirth (CZE) 12Jul57 4:21.7 Mary Decker (USA) 26Jan80
4:01.6 Arne Andersson (SWE) 18Jul44 3:36.0 Herb Elliott (AUS) 28Aug58 4:20.89 Lyudmila Veselkova (SOV) 12Sep81
4:01.4 Gunder Hagg (SWE) 17Jul45 3:35.6 Herb Elliott (AUS) 06Sep60 4:18.08 Mary Decker-Tabb (USA) 09Jul82
3:59.4 Roger Bannister (GBR) 06May54 3:33.1 Jim Ryun (USA) 08Jul67 4:17.44 Maricica Puica (ROM) 16Sep82
3:58.0 John Landy (AUS) 21Jun54 3:32.2 Filbert Bayi (TAN) 02Feb74 4:15.8 Natalya Artyomova (SOV) 05Aug84
3:57.2 Derek Ibbotson (GBR) 19Jul57 3:32.1 Sebastian Coe (GBR) 15Aug79 4:16.71 Mary Decker-Slaney (USA) 21Aug85
3:54.5 Herb Elliott (AUS) 06Aug58 3:31.36 Steve Ovett (GBR) 27Aug80 4:15.61 Paula Ivan (ROM) 10Jul89
3:54.4 Peter Snell (NZL) 27Jan62 3:31.24 Sydney Maree (usa) 28Aug83 4:12.56 Svetlana Masterkova (RUS) 14Aug96
3:54.1 Peter Snell (NZL) 17Nov64 3:30.77 Steve Ovett (GBR) 04Sep83
3:53.6 Michel Jazy (FRA) 09Jun65 3:29.67 Steve Cram (GBR) 16Jul85
3:51.3 Jim Ryun (USA) 17Jul66 3:29.46 Said Aouita (MOR) 23Aug85
3:51.1 Jim Ryun (USA) 23Jun67 3:28.86 Noureddine Morceli (ALG) 06Sep92
3:51.0 Filbert Bayi (TAN) 17May75 3:27.37 Noureddine Morceli (ALG) 12Jul95
3:49.4 John Walker (NZL) 12Aug75 3:26.00 Hicham el Guerrouj (MOR) 14Jul98
3:49.0 Sebastian Coe (GBR) 17Jul79
3:48.8 Steve Ovett (GBR) 01Jul80
[Plot: the record times against the years, 1850 to 2000.]
3 With this input (the years are zeroed at 1900)
A = [1 .38; 1 .54; ... ; 1 92.71; 1 95.54]    b = (249.0, 246.2, ..., 208.86, 207.37)
(the dates have been rounded to months, e.g., for a September record, the decimal .71 ≈ (8.5/12) was used), Maple gives an intercept of b = 243.1590327 and a slope of m = −0.401647703. The slope given in the body of this Topic for the men's mile is quite close to this.
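The same least-squares coefficients can be found in Octave with the backslash operator; this sketch shows the setup with only the four rows listed above (the full data set has one row per record).
    A = [1 0.38; 1 0.54; 1 92.71; 1 95.54];
    b = [249.0; 246.2; 208.86; 207.37];
    A \ b                              % least-squares intercept and slope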
[Plot: the 1500 meter data with its regression line, 1900 to 2000.]
4 With this input (the years are zeroed at 1900)
A = [1 21.46; 1 32.63; ... ; 1 89.54; 1 96.63]    b = (373.2, 327.5, ..., 255.61, 252.56)
(the dates have been rounded to months, e.g., for a September record, the decimal .71 ≈ (8.5/12) was
used), MAPLE gave an intercept of b = 378.7114894 and a slope of m = −1.445753225.
[Plot: the women's mile data with its regression line, 1900 to 2000.]
[Plot: all three data sets on common axes, 1850 to 2000.]
6 (a) A computer algebra system like MAPLE or MuPAD will give an intercept of b = 4259/1398 ≈ 3.239628 and a slope of m = −71/2796 ≈ −0.025393419. Plugging x = 31 into the equation yields a predicted number of O-ring failures of y = 2.45 (rounded to two places). Plugging in y = 4 and solving gives a temperature of x = −29.94°F.
(b) On the basis of this information
A = [1 53; 1 75; ... ; 1 80; 1 81]    b = (3, 2, ..., 0, 0)
MAPLE gives the intercept b = 187/40 = 4.675 and the slope m = −73/1200 ≈ −0.060833. Here,
plugging x = 31 into the equation predicts y = 2.79 O-ring failures (rounded to two places). Plugging
in y = 4 failures gives a temperature of x = 11◦ F.
[Plot: O-ring failures against launch temperature, 40°F to 80°F.]
7 (a) The plot is nonlinear.
[Plot: the planetary distances against position; the trend curves upward.]
(b) Here is the plot.
[Plot: the log of the distance against planet position.]
There is perhaps a jog up between planet 4 and planet 5.
(c) This plot seems even more linear.
[Plot: the same log data replotted; the trend appears more nearly linear.]
(d) With this input
A = [1 1; 1 2; 1 3; 1 4; 1 6; 1 7; 1 8]    b = (−0.40893539, −0.1426675, 0, 0.18184359, 0.71600334, 0.97954837, 1.2833012)
MuPAD gives that the intercept is b = −0.6780677466 and the slope is m = 0.2372763818.
(e) Plugging x = 9 into the equation y = −0.6780677466 + 0.2372763818x from the prior item gives
that the log of the distance is 1.4574197, so the expected distance is 28.669472. The actual distance
is about 30.003.
(f ) Plugging x = 10 into the same equation gives that the log of the distance is 1.6946961, so the
expected distance is 49.510362. The actual distance is about 39.503.
8 (a) With this input
A = [1 306; 1 329; 1 356; 1 367; 1 396; 1 427; 1 415; 1 424]    b = (975, 969, 948, 910, 890, 906, 900, 899)
MAPLE gives the intercept b = 34009779/28796 ≈ 1181.0591 and the slope m = −19561/28796 ≈
−0.6793.
[Plot: the data points and the regression line.]
1 (a) To represent H, recall that rotation counterclockwise by θ radians is represented with respect
to the standard basis in this way.
RepE2,E2(h) = [cos θ −sin θ; sin θ cos θ]
produces the identity matrix so there is no need for column-swapping operations to end with a
partial-identity.
(b) The reduction is expressed in matrix multiplication as
[1 −1; 0 1][2/√2 0; 0 1/√2][1 0; 1 1] H = I
(note that composition of the Gaussian operations is performed from right to left).
(c) Taking inverses
H = [1 0; −1 1][√2/2 0; 0 √2][1 1; 0 1] · I
(the product of the three matrices on the left is P)
gives the desired factorization of H (here, the partial identity is I, and Q is trivial, that is, it is also
an identity matrix).
(d) Reading the composition from right to left (and ignoring the identity matrices as trivial) gives that H has the same effect as first performing this skew
(x, y) ↦ (x + y, y)
followed by a dilation that multiplies all first components by √2/2 (this is a "shrink" in that √2/2 ≈ 0.707) and all second components by √2, followed by another skew.
(x, y) ↦ (x, −x + y)
[Figures: ~u and ~v, and their images under each of the three maps in turn.]
For instance, the effect of H on the unit vector whose angle with the x-axis is π/3 is this.
Carried through the three maps, that vector is taken to this image.
(√2(√3 + 1)/4, √2(1 − √3)/4)
Verifying that the resulting vector has unit length and forms an angle of −π/6 with the x-axis is
routine.
2 We will first represent the map with a matrix H, perform the row operations and, if needed, column operations to reduce it to a partial-identity matrix. We will then translate that into a factorization H = PBQ. Substituting into the general matrix
RepE2,E2(rθ) = [cos θ −sin θ; sin θ cos θ]
gives this representation.
RepE2,E2(r2π/3) = [−1/2 −√3/2; √3/2 −1/2]
Gauss' method is routine.
√3ρ1+ρ2 gives [−1/2 −√3/2; 0 −2]; then −2ρ1 and (−1/2)ρ2 give [1 √3; 0 1]; then −√3ρ2+ρ1 gives [1 0; 0 1]
That translates to a matrix equation in this way.
[1 −√3; 0 1][−2 0; 0 −1/2][1 0; √3 1][−1/2 −√3/2; √3/2 −1/2] = I
Taking inverses to solve for H yields this factorization.
[−1/2 −√3/2; √3/2 −1/2] = [1 0; −√3 1][−1/2 0; 0 −2][1 √3; 0 1] · I
3 This Gaussian reduction
[1 2 1; 3 6 0; 1 2 2] →(−3ρ1+ρ2, −ρ1+ρ3) [1 2 1; 0 0 −3; 0 0 1] →((1/3)ρ2+ρ3) [1 2 1; 0 0 −3; 0 0 0] →((−1/3)ρ2) [1 2 1; 0 0 1; 0 0 0] →(−ρ2+ρ1) [1 2 0; 0 0 1; 0 0 0]
gives the reduced echelon form of the matrix. Now the two column operations of taking −2 times the first column and adding it to the second, and then of swapping columns two and three produce this partial identity.
B = [1 0 0; 0 1 0; 0 0 0]
All of that translates into matrix terms as: where
P = [1 −1 0; 0 1 0; 0 0 1][1 0 0; 0 −1/3 0; 0 0 1][1 0 0; 0 1 0; 0 1/3 1][1 0 0; 0 1 0; −1 0 1][1 0 0; −3 1 0; 0 0 1]
and
Q = [1 −2 0; 0 1 0; 0 0 1][0 1 0; 1 0 0; 0 0 1]
the given matrix factors as P BQ.
[0.90 0.01; 0.10 0.99] (pT(n), pC(n)) = (pT(n + 1), pC(n + 1))
7 (a) They must satisfy this condition because the total probability of a state transition (including
back to the same state) is 100%.
(b) See the answer to the third item.
(c) We will do the 2×2 case; bigger-sized cases are just notational problems. This product
[a1,1 a1,2; a2,1 a2,2][b1,1 b1,2; b2,1 b2,2] = [a1,1b1,1 + a1,2b2,1, a1,1b1,2 + a1,2b2,2; a2,1b1,1 + a2,2b2,1, a2,1b1,2 + a2,2b2,2]
has these two column sums
(a1,1 b1,1 + a1,2 b2,1 ) + (a2,1 b1,1 + a2,2 b2,1 ) = (a1,1 + a2,1 ) · b1,1 + (a1,2 + a2,2 ) · b2,1 = 1 · b1,1 + 1 · b2,1 = 1
and
(a1,1 b1,2 + a1,2 b2,2 ) + (a2,1 b1,2 + a2,2 b2,2 ) = (a1,1 + a2,1 ) · b1,2 + (a1,2 + a2,2 ) · b2,2 = 1 · b1,2 + 1 · b2,2 = 1
as required.
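An Octave spot-check of this closure property, pairing the transition matrix above with one arbitrary column-stochastic factor of ours.
    A = [0.90 0.01; 0.10 0.99];
    B = [0.50 0.25; 0.50 0.75];        % columns also sum to one
    disp(sum(A*B))                     % [1 1]: the product is again stochastic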
1 (a) Yes.
(b) No, the columns do not have length one.
(c) Yes.
2 Some of these are nonlinear, because they involve a nontrivial translation.
(a) (x, y) ↦ (x·cos(π/6) − y·sin(π/6), x·sin(π/6) + y·cos(π/6)) + (0, 1) = (x·(√3/2) − y·(1/2), x·(1/2) + y·(√3/2) + 1)
(b) The line y = 2x makes an angle of arctan(2/1) with the x-axis. Thus sin θ = 2/√5 and cos θ = 1/√5.
(x, y) ↦ (x·(1/√5) − y·(2/√5), x·(2/√5) + y·(1/√5))
(c) (x, y) ↦ (x·(1/√5) − y·(−2/√5), x·(−2/√5) + y·(1/√5)) + (1, 1) = (x/√5 + 2y/√5 + 1, −2x/√5 + y/√5 + 1)
3 (a) Let f be distance-preserving and consider f −1 . Any two points in the codomain can be written
as f (P1 ) and f (P2 ). Because f is distance-preserving, the distance from f (P1 ) to f (P2 ) equals the
distance from P1 to P2 . But this is exactly what is required for f −1 to be distance-preserving.
(b) Any plane figure F is congruent to itself via the identity map id : R2 → R2 , which is obviously
distance-preserving. If F1 is congruent to F2 (via some f ) then F2 is congruent to F1 via f −1 , which
is distance-preserving by the prior item. Finally, if F1 is congruent to F2 (via some f ) and F2 is
congruent to F3 (via some g) then F1 is congruent to F3 via g ◦ f , which is easily checked to be
distance-preserving.
4 The first two components of each are ax + cy + e and bx + dy + f .
5 (a) The Pythagorean Theorem gives that three points are collinear if and only if (for some ordering of them into P1, P2, and P3), dist(P1, P2) + dist(P2, P3) = dist(P1, P3). Of course, where f is distance-preserving, this holds if and only if dist(f(P1), f(P2)) + dist(f(P2), f(P3)) = dist(f(P1), f(P3)), which, again by Pythagoras, is true if and only if f(P1), f(P2), and f(P3) are collinear.
The argument for betweenness is similar (above, P2 is between P1 and P3).
If the figure F is a triangle then it is the union of three line segments P1 P2 , P2 P3 , and P1 P3 .
The prior two paragraphs together show that the property of being a line segment is invariant. So
f (F ) is the union of three line segments, and so is a triangle.
A circle C centered at P and of radius r is the set of all points Q such that dist(P, Q) = r.
Applying the distance-preserving map f gives that the image f (C) is the set of all f (Q) subject to
the condition that dist(P, Q) = r. Since dist(P, Q) = dist(f (P ), f (Q)), the set f (C) is also a circle,
with center f (P ) and radius r.
(b) Here are two that are easy to verify: (i) the property of being a right triangle, and (ii) the
property of two lines being parallel.
(c) One that was mentioned in the section is the ‘sense’ of a figure. A triangle whose vertices read
clockwise as P1 , P2 , P3 may, under a distance-preserving map, be sent to a triangle read P1 , P2 , P3
counterclockwise.
[Figure: the parallelogram inside its bounding rectangle, divided into regions A through F.]
by taking the area of the entire rectangle and subtracting the area of A the upper-left rectangle, B
the upper-middle triangle, D the upper-right triangle, C the lower-left triangle, E the lower-middle
triangle, and F the lower-right rectangle (x1 +x2 )(y1 +y2 )−x2 y1 −(1/2)x1 y1 −(1/2)x2 y2 −(1/2)x2 y2 −
(1/2)x1 y1 − x2 y1 . Simplification gives the determinant formula.
This determinant is the negative of the one above; the formula distinguishes whether the second
column is counterclockwise from the first.
Four.I.1.15 The computation for 2×2 matrices, using the formula quoted in the preamble, is easy. It
does also hold for 3×3 matrices; the computation is routine.
Four.I.1.16 No. Recall that constants come out one row at a time.
det([2 4; 2 6]) = 2·det([1 2; 2 6]) = 2·2·det([1 2; 1 3])
This contradicts linearity (here we didn’t need S, i.e., we can take S to be the zero matrix).
Four.I.1.17 Bring out the c’s one row at a time.
Four.I.1.18 There are no real numbers θ that make the matrix singular because the determinant of the matrix, cos²θ + sin²θ, is never 0; it equals 1 for all θ. Geometrically, with respect to the standard basis, this matrix represents a rotation of the plane through an angle of θ. Each such map is one-to-one — for one thing, it is invertible.
Four.I.1.19 This is how the answer was given in the cited source. Let P be the sum of the three positive terms of the determinant and −N the sum of the three negative terms. The maximum value of P is
9·8·7 + 6·5·4 + 3·2·1 = 630.
The minimum value of N consistent with P is
9·6·1 + 8·5·2 + 7·4·3 = 218.
Any change in P would result in lowering that sum by more than 4. Therefore 412 is the maximum value for the determinant, and one form for the determinant is
|9 4 2; 3 8 6; 5 1 7|.
Four.I.2.7 (a) |3 1 2; 3 1 0; 0 1 4| = |3 1 2; 0 0 −2; 0 1 4| = −|3 1 2; 0 1 4; 0 0 −2| = 6
(b) |1 0 0 1; 2 1 1 0; −1 0 1 0; 1 1 1 0| = |1 0 0 1; 0 1 1 −2; 0 0 1 1; 0 1 1 −1| = |1 0 0 1; 0 1 1 −2; 0 0 1 1; 0 0 0 1| = 1
Four.I.3.25 (a) For the column index of the entry in the first row there are five choices. Then, for
the column index of the entry in the second row there are four choices (the column index used in the
first row cannot be used here). Continuing, we get 5 · 4 · 3 · 2 · 1 = 120. (See also the next question.)
(b) Once we choose the second column in the first row, we can choose the other entries in 4·3·2·1 = 24
ways.
Four.I.3.26 n · (n − 1) · · · 2 · 1 = n!
Four.I.3.27 In |A| = |Atrans | = | − A| = (−1)n |A| the exponent n must be even.
Four.I.3.28 Showing that no placement of three zeros suffices is routine. Four zeroes do suffice; put
them all in the same row or column.
Four.I.3.29 The n = 3 case shows what to do. The pivot operations of −x1ρ2+ρ3 and −x1ρ1+ρ2 give this.
|1 1 1; x1 x2 x3; x1² x2² x3²| = |1 1 1; x1 x2 x3; 0 (−x1+x2)x2 (−x1+x3)x3| = |1 1 1; 0 −x1+x2 −x1+x3; 0 (−x1+x2)x2 (−x1+x3)x3|
Then the pivot operation of −x2ρ2+ρ3 gives the desired result.
= |1 1 1; 0 −x1+x2 −x1+x3; 0 0 (−x1+x3)(−x2+x3)| = (x2 − x1)(x3 − x1)(x3 − x2)
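The closed form is easy to test in Octave for sample values of x1, x2, x3 (ours, chosen arbitrarily).
    x = [2 3 5];
    V = [1 1 1; x; x.^2];
    disp(det(V))                                   % 6
    disp((x(2)-x(1))*(x(3)-x(1))*(x(3)-x(2)))      % also 6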
Four.II.1.8 For each, find the determinant and take the absolute value.
(a) 7 (b) 0 (c) 58
Four.II.1.9 Solving
c1·(3, 3, 1) + c2·(2, 6, 1) + c3·(1, 0, 5) = (4, 1, 2)
gives the unique solution c3 = 11/57, c2 = −40/57 and c1 = 99/57. Because c1 > 1, the vector is not in the box.
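In Octave, with the three vectors as the columns of a matrix:
    A = [3 2 1; 3 6 0; 1 1 5];
    c = A \ [4; 1; 2]                  % c1 is about 1.74 > 1, so outside the box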
Four.II.1.10 Move the parallelepiped to start at the origin, so that it becomes the box formed by
⟨(3, 0), (2, 1)⟩
and now the absolute value of this determinant is easily computed as 3.
|3 2; 0 1| = 3
[Figure: the standard basis ~e1, ~e2, ~e3 and another basis β~1, β~2, β~3 with the same orientation.]
In R3 positive orientation is sometimes called ‘right hand orientation’ because if a person’s right
hand is placed with the fingers curling from ~e1 to ~e2 then the thumb will point with ~e3 .
Four.II.1.28 We will compare det(~s1, ..., ~sn) with det(t(~s1), ..., t(~sn)) to show that the second differs from the first by a factor of |T|. We represent the ~s's with respect to the standard bases
RepEn(~si) = (s1,i, s2,i, ..., sn,i)
and then we represent the map application with matrix-vector multiplication
RepEn(t(~si)) = [t1,1 t1,2 ... t1,n; t2,1 t2,2 ... t2,n; ...; tn,1 tn,2 ... tn,n] · (s1,j, s2,j, ..., sn,j)
= s1,j·(t1,1, t2,1, ..., tn,1) + s2,j·(t1,2, t2,2, ..., tn,2) + ··· + sn,j·(t1,n, t2,n, ..., tn,n)
= s1,j~t1 + s2,j~t2 + ··· + sn,j~tn
where ~ti is column i of T. Then det(t(~s1), ..., t(~sn)) equals det(s1,1~t1 + s2,1~t2 + ··· + sn,1~tn, ..., s1,n~t1 + s2,n~t2 + ··· + sn,n~tn).
As in the derivation of the permutation expansion formula, we apply multilinearity, first splitting along the sum in the first argument
det(s1,1~t1, ..., s1,n~t1 + s2,n~t2 + ··· + sn,n~tn) + ··· + det(sn,1~tn, ..., s1,n~t1 + s2,n~t2 + ··· + sn,n~tn)
and then splitting each of those n summands along the sums in the second arguments, etc. We end with, as in the derivation of the permutation expansion, n^n summand determinants, each of the form det(si1,1~ti1, si2,2~ti2, ..., sin,n~tin). Factor out each of the si,j's to get si1,1·si2,2 ··· sin,n · det(~ti1, ~ti2, ..., ~tin).
Swap the columns in det(~tφ(1) , . . . , ~tφ(n) ) to get the matrix T back, which changes the sign by a factor
of sgn φ, and then factor out the determinant of T .
= Σφ sφ(1),1 ··· sφ(n),n · det(~t1, ..., ~tn) · sgn φ = det(T) · Σφ sφ(1),1 ··· sφ(n),n · sgn φ
As in the proof that the determinant of a matrix equals the determinant of its transpose, we commute
the s’s so they are listed by ascending row number instead of by ascending column number (and we
substitute sgn(φ−1 ) for sgn(φ)).
= det(T) · Σφ s1,φ⁻¹(1) ··· sn,φ⁻¹(n) · sgn φ⁻¹ = det(T) · det(~s1, ~s2, ..., ~sn)
The box will have a nonzero volume unless the triangle formed by the ends of the three is degenerate. That only happens (assuming that (x2, y2) ≠ (x3, y3)) if (x, y) lies on the line through the other two.
(b) This is how the answer was given in the cited source. The altitude through (x1, y1) of a triangle with vertices (x1, y1), (x2, y2), and (x3, y3) is found in the usual way from the normal form of the above:
(1/√((x2 − x3)² + (y2 − y3)²)) · |x1 x2 x3; y1 y2 y3; 1 1 1|.
where the sum is over all n-permutations φ such that φ(n) = n. To show that T̂i,j is the minor Ti,j ,
we need only show that if φ is an n-permutation such that φ(n) = n and σ is an n − 1-permutation
with σ(1) = φ(1), . . . , σ(n − 1) = φ(n − 1) then sgn(σ) = sgn(φ). But that’s true because φ and σ
have the same number of inversions.
Back to the general i, j case. Swap adjacent rows until the i-th is last and swap adjacent columns
until the j-th is last. Observe that the determinant of the i, j-th minor is not affected by these
adjacent swaps because inversions are preserved (since the minor has the i-th row and j-th column
omitted). On the other hand, the sign of |T | and T̂i,j is changed n − i plus n − j times. Thus
T̂i,j = (−1)n−i+n−j |Ti,j | = (−1)i+j |Ti,j |.
Four.III.1.28 This is obvious for the 1×1 base case.
For the inductive case, assume that the determinant of a matrix equals the determinant of its
transpose for all 1×1, . . . , (n−1)×(n−1) matrices. Expanding on row i gives |T | = ti,1 Ti,1 +. . . +ti,n Ti,n
and expanding on column i gives |T trans | = t1,i (T trans )1,i + · · · + tn,i (T trans )n,i Since (−1)i+j = (−1)j+i
the signs are the same in the two summations. Since the j, i minor of T trans is the transpose of the i, j
minor of T , the inductive hypothesis gives |(T trans )i,j | = |Ti,j |.
Four.III.1.29 This is how the answer was given in the cited source. Denoting the above determinant by Dn, it is seen that D2 = 1, D3 = 2. It remains to show that Dn = Dn−1 + Dn−2, n ≥ 4. In Dn subtract the (n−3)-th column from the (n−1)-th, the (n−4)-th from the (n−2)-th, ..., the first from the third, obtaining
Dn = |1 −1 0 0 0 0 ...; 1 1 −1 0 0 0 ...; 0 1 1 −1 0 0 ...; 0 0 1 1 −1 0 ...; ...|.
By expanding this determinant with reference to the first row, there results the desired relation.
1 (a) Under Octave, rank(rand(5)) finds the rank of a 5×5 matrix whose entries are uniformly distributed in the interval [0..1). This loop, which runs the test 5000 times,
octave:1> for i=1:5000
> if rank(rand(5))<5 printf("That’s one."); endif
> endfor
produces, after a few seconds, no output: it just returns the prompt.
The Octave script
function elapsed_time = detspeed (size)
a=rand(size);
tic();
for i=1:10
det(a);
endfor
elapsed_time=toc();
endfunction
leads to this session.
octave:1> detspeed(5)
ans = 0.019505
octave:2> detspeed(15)
ans = 0.0054691
octave:3> detspeed(25)
ans = 0.0097431
octave:4> detspeed(35)
ans = 0.017398
(b) Here is the data (rounded a bit), and the graph.
matrix rows 15 25 35 45 55 65 75 85 95
time per ten 0.0034 0.0098 0.0675 0.0285 0.0443 0.0663 0.1428 0.2282 0.1686
(This data is from an average of twenty runs of the above script, because of the possibility that
the randomly chosen matrix happens to take an unusually long or short time. Even so, the timing
cannot be relied on too heavily; this is just an experiment.)
[Plot: time per ten determinants against matrix size, 20 to 100 rows.]
2 The number of operations depends on exactly how the operations are carried out.
(a) The determinant is −11. To row reduce takes a single pivot with two multiplications (−5/2
times 2 plus 5, and −5/2 times 1 plus −3) and the product down the diagonal takes one more
multiplication. The permutation expansion takes two multiplications (2 times −3 and 5 times 1).
(b) The determinant is −39. Counting the operations is routine.
(c) The determinant is 4.
3 One way to get started is to compare these under Octave: det(rand(10));, versus det(hilb(10));, versus det(eye(10));, versus det(zeros(10));. You can time them as in tic(); det(rand(10)); toc().
4 This is a simple one.
DO 5 ROW=1, N
PIVINV=1.0/A(ROW,ROW)
DO 10 I=ROW+1, N
FACTOR=A(I,ROW)*PIVINV
DO 20 J=ROW, N
A(I,J)=A(I,J)-FACTOR*A(ROW,J)
20 CONTINUE
10 CONTINUE
5 CONTINUE
5 Yes, because the J is in the innermost loop.
[Figure omitted: labeled points T1, U1, V1 and T2, U2, V2.]
Five.II.2.6 Because the basis vectors are chosen arbitrarily, many different answers are possible. However, here is one way to go; to diagonalize
$$T=\begin{pmatrix} 4 & -2 \\ 1 & 1 \end{pmatrix}$$
take it as the representation of a transformation with respect to the standard basis T = RepE2,E2(t) and look for B = ⟨β~1, β~2⟩ such that
$$\operatorname{Rep}_{B,B}(t)=\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$
that is, such that t(β~1) = λ1·β~1 and t(β~2) = λ2·β~2.
$$\begin{pmatrix} 4 & -2 \\ 1 & 1 \end{pmatrix}\vec\beta_1=\lambda_1\cdot\vec\beta_1 \qquad\qquad \begin{pmatrix} 4 & -2 \\ 1 & 1 \end{pmatrix}\vec\beta_2=\lambda_2\cdot\vec\beta_2$$
We are looking for scalars x such that this equation
$$\begin{pmatrix} 4 & -2 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}=x\cdot\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$$
has solutions b1 and b2 that are not both zero.
system.
$$\begin{aligned} (5-x)\cdot b_1+6\cdot b_2+2\cdot b_3&=0 \\ (-1-x)\cdot b_2-8\cdot b_3&=0 \\ b_1+(-2-x)\cdot b_3&=0 \end{aligned}$$
Plugging in x = λ1 = 4 gives
$$\begin{array}{rl} b_1+6b_2+2b_3&=0 \\ -5b_2-8b_3&=0 \\ b_1-6b_3&=0 \end{array}
\;\xrightarrow{-\rho_1+\rho_3}\;
\begin{array}{rl} b_1+6b_2+2b_3&=0 \\ -5b_2-8b_3&=0 \\ -6b_2-8b_3&=0 \end{array}$$
Five.II.3.26 λ = 1 with $\begin{pmatrix} 0&0\\ 0&1 \end{pmatrix}$ and $\begin{pmatrix} 2&3\\ 1&0 \end{pmatrix}$; λ = −2 with $\begin{pmatrix} -1&0\\ 1&0 \end{pmatrix}$; λ = −1 with $\begin{pmatrix} -2&1\\ 1&0 \end{pmatrix}$.
Five.II.3.27 Fix the natural basis B = ⟨1, x, x², x³⟩. The map's action is 1 ↦ 0, x ↦ 1, x² ↦ 2x, and x³ ↦ 3x², and its representation is easy to compute.
$$T=\operatorname{Rep}_{B,B}(d/dx)=\begin{pmatrix} 0&1&0&0\\ 0&0&2&0\\ 0&0&0&3\\ 0&0&0&0 \end{pmatrix}_{B,B}$$
We find the eigenvalues with this computation.
$$0=|T-xI|=\begin{vmatrix} -x&1&0&0\\ 0&-x&2&0\\ 0&0&-x&3\\ 0&0&0&-x \end{vmatrix}=x^4$$
Thus the map has the single eigenvalue λ = 0. To find the associated eigenvectors, we solve
$$\begin{pmatrix} 0&1&0&0\\ 0&0&2&0\\ 0&0&0&3\\ 0&0&0&0 \end{pmatrix}_{B,B}\begin{pmatrix} b_1\\ b_2\\ b_3\\ b_4 \end{pmatrix}_B=0\cdot\begin{pmatrix} b_1\\ b_2\\ b_3\\ b_4 \end{pmatrix}_B \implies b_2=0,\; b_3=0,\; b_4=0$$
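A quick numerical check of this computation in Octave (an addition here, not part of the source answer):
T = [0 1 0 0; 0 0 2 0; 0 0 0 3; 0 0 0 0];
eig(T)    % four zero eigenvalues
null(T)   % a single basis vector e1, i.e. the constant polynomials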
Five.III.1.8 For the zero transformation, no matter what the space, the chain of rangespaces is V ⊃
{~0} = {~0} = · · · and the chain of nullspaces is {~0} ⊂ V = V = · · · . For the identity transformation
the chains are V = V = V = · · · and {~0} = {~0} = · · · .
Five.III.1.9 (a) Iterating t0 twice, a + bx + cx² ↦ b + cx² ↦ cx², gives
$$a+bx+cx^2 \;\stackrel{t_0^2}{\longmapsto}\; cx^2$$
and any higher power is the same map. Thus, while R(t0) is the space of quadratic polynomials with no linear term {p + rx² | p, r ∈ C}, and R(t0²) is the space of purely-quadratic polynomials {rx² | r ∈ C}, this is where the chain stabilizes: R∞(t0) = {rx² | r ∈ C}. As for nullspaces, N(t0) is the space of purely-linear quadratic polynomials {qx | q ∈ C}, and N(t0²) is the space of quadratic polynomials with no x² term {p + qx | p, q ∈ C}, and this is the end: N∞(t0) = N(t0²).
(b) The second power
$$\begin{pmatrix} a\\ b \end{pmatrix} \stackrel{t_1}{\longmapsto} \begin{pmatrix} 0\\ a \end{pmatrix} \stackrel{t_1}{\longmapsto} \begin{pmatrix} 0\\ 0 \end{pmatrix}$$
is the zero map. Consequently, the chain of rangespaces
$$\mathbb{R}^2 \supset \{\begin{pmatrix} 0\\ p \end{pmatrix} \mid p\in\mathbb{C}\} \supset \{\vec 0\,\} = \cdots$$
and the chain of nullspaces
$$\{\vec 0\,\} \subset \{\begin{pmatrix} q\\ 0 \end{pmatrix} \mid q\in\mathbb{C}\} \subset \mathbb{R}^2 = \cdots$$
each has length two. The generalized rangespace is the trivial subspace and the generalized nullspace is the entire space.
(c) Iterates of this map cycle around
$$a+bx+cx^2 \stackrel{t_2}{\longmapsto} b+cx+ax^2 \stackrel{t_2}{\longmapsto} c+ax+bx^2 \stackrel{t_2}{\longmapsto} a+bx+cx^2 \;\cdots$$
and the chains of rangespaces and nullspaces are trivial.
$$\mathcal{P}_2=\mathcal{P}_2=\cdots \qquad\qquad \{\vec 0\,\}=\{\vec 0\,\}=\cdots$$
Thus, obviously, the generalized spaces are R∞(t2) = P2 and N∞(t2) = {~0}.
(d) We have
$$\begin{pmatrix} a\\ b\\ c \end{pmatrix} \mapsto \begin{pmatrix} a\\ a\\ b \end{pmatrix} \mapsto \begin{pmatrix} a\\ a\\ a \end{pmatrix} \mapsto \begin{pmatrix} a\\ a\\ a \end{pmatrix} \mapsto \cdots$$
and so the chain of rangespaces
$$\mathbb{R}^3 \supset \{\begin{pmatrix} p\\ p\\ r \end{pmatrix} \mid p,r\in\mathbb{C}\} \supset \{\begin{pmatrix} p\\ p\\ p \end{pmatrix} \mid p\in\mathbb{C}\} = \cdots$$
and the chain of nullspaces
$$\{\vec 0\,\} \subset \{\begin{pmatrix} 0\\ 0\\ r \end{pmatrix} \mid r\in\mathbb{C}\} \subset \{\begin{pmatrix} 0\\ q\\ r \end{pmatrix} \mid q,r\in\mathbb{C}\} = \cdots$$
each has length two. The generalized spaces are the final ones shown above in each chain.
Five.III.1.10 Each maps x 7→ t(t(t(x))).
Five.III.1.11 Recall that if W is a subspace of V then any basis BW for W can be enlarged to make
a basis BV for V . From this the first sentence is immediate. The second sentence is also not hard: W
is the span of BW and if W is a proper subspace then V is not the span of BW , and so BV must have
at least one vector more than does BW .
Five.III.1.12 It is both ‘if’ and ‘only if’. We have seen earlier that a linear map is nonsingular if and
only if it preserves dimension, that is, if the dimension of its range equals the dimension of its domain.
With a transformation t : V → V that means that the map is nonsingular if and only if it is onto:
R(t) = V (and thus R(t2 ) = V , etc).
Five.III.2.17 Three. It is at least three because ℓ²((1, 1, 1)) = (0, 0, 1) ≠ ~0. It is at most three because (x, y, z) ↦ (0, x, y) ↦ (0, 0, x) ↦ (0, 0, 0).
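In matrix terms the same check is quick in Octave (a sketch added here, not from the source answer):
N = [0 0 0; 1 0 0; 0 1 0];   % matrix of (x,y,z) |-> (0,x,y) with respect to the standard basis
N^2 * [1; 1; 1]              % gives (0; 0; 1), which is not the zero vector
N^3                          % the zero matrix, so the index of nilpotency is three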
Five.III.2.18 (a) The domain has dimension four. The map's action is that any vector in the space c1·β~1 + c2·β~2 + c3·β~3 + c4·β~4 is sent to c1·β~2 + c2·~0 + c3·β~4 + c4·~0 = c1·β~2 + c3·β~4. The first application of the map sends two basis vectors β~2 and β~4 to zero, and therefore the nullspace has dimension two and the rangespace has dimension two. With a second application, all four basis vectors are sent to zero and so the nullspace of the second power has dimension four while the rangespace of the second power has dimension zero. Thus the index of nilpotency is two. This is the canonical form.
$$\begin{pmatrix} 0&0&0&0\\ 1&0&0&0\\ 0&0&0&0\\ 0&0&1&0 \end{pmatrix}$$
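The index can be confirmed with a one-liner in Octave (an addition, not part of the source answer):
N = [0 0 0 0; 1 0 0 0; 0 0 0 0; 0 0 1 0];
N^2    % the zero matrix, confirming that the index of nilpotency is two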
(b) The dimension of the domain of this map is six. For the first power the dimension of the
nullspace is four and the dimension of the rangespace is two. For the second power the dimension of
the nullspace is five and the dimension of the rangespace is one. Then the third iteration results in
a nullspace of dimension six and a rangespace of dimension zero. The index of nilpotency is three,
and this is the canonical form.
$$\begin{pmatrix} 0&0&0&0&0&0\\ 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0 \end{pmatrix}$$
(c) The dimension of the domain is three, and the index of nilpotency is three. The first power’s
null space has dimension one and its range space has dimension two. The second power’s null space
has dimension two and its range space has dimension one. Finally, the third power’s null space has
dimension three and its range space has dimension zero. Here is the canonical form matrix.
$$\begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&1&0 \end{pmatrix}$$
Five.III.2.19 By Lemma 1.3 the nullity has grown as large as possible by the n-th iteration where n
is the dimension of the domain. Thus, for the 2×2 matrices, we need only check whether the square
is the zero matrix. For the 3×3 matrices, we need only check the cube.
(a) Yes, this matrix is nilpotent because its square is the zero matrix.
(b) No, the square is not the zero matrix.
$$\begin{pmatrix} 3&1\\ 1&3 \end{pmatrix}^2=\begin{pmatrix} 10&6\\ 6&10 \end{pmatrix}$$
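The same check is immediate in Octave (my addition, not from the source):
A = [3 1; 1 3];
A^2    % not the zero matrix, so A is not nilpotent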
$$\begin{array}{c|c|c}
p & N^p & N(N^p) \\ \hline
1 & \begin{pmatrix} -1&1&-1\\ 1&0&1\\ 1&-1&1 \end{pmatrix} & \{\begin{pmatrix} u\\ 0\\ -u \end{pmatrix} \mid u\in\mathbb{C}\} \\[1em]
2 & \begin{pmatrix} 1&0&1\\ 0&0&0\\ -1&0&-1 \end{pmatrix} & \{\begin{pmatrix} u\\ v\\ -u \end{pmatrix} \mid u,v\in\mathbb{C}\} \\[1em]
3 & \text{--zero matrix--} & \mathbb{C}^3
\end{array}$$
shows that any map represented by this matrix must act on a string basis in this way.
$$\vec\beta_1\mapsto\vec\beta_2\mapsto\vec\beta_3\mapsto\vec 0$$
Therefore, this is the canonical form.
$$\begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&1&0 \end{pmatrix}$$
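These powers are easy to verify in Octave (an addition here, not part of the source answer):
N = [-1 1 -1; 1 0 1; 1 -1 1];
N^2    % gives [1 0 1; 0 0 0; -1 0 -1]
N^3    % the zero matrix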
Five.III.2.23 A couple of examples
$$\begin{pmatrix} 0&0\\ 1&0 \end{pmatrix}\begin{pmatrix} a&b\\ c&d \end{pmatrix}=\begin{pmatrix} 0&0\\ a&b \end{pmatrix} \qquad\qquad \begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&1&0 \end{pmatrix}\begin{pmatrix} a&b&c\\ d&e&f\\ g&h&i \end{pmatrix}=\begin{pmatrix} 0&0&0\\ a&b&c\\ d&e&f \end{pmatrix}$$
suggest that left multiplication by a block of subdiagonal ones shifts the rows of a matrix downward. Distinct blocks
$$\begin{pmatrix} 0&0&0&0\\ 1&0&0&0\\ 0&0&0&0\\ 0&0&1&0 \end{pmatrix}\begin{pmatrix} a&b&c&d\\ e&f&g&h\\ i&j&k&l\\ m&n&o&p \end{pmatrix}=\begin{pmatrix} 0&0&0&0\\ a&b&c&d\\ 0&0&0&0\\ i&j&k&l \end{pmatrix}$$
act to shift down distinct parts of the matrix.
Right multiplication does an analogous thing to columns. See Exercise 17.
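A small Octave demonstration of both effects (my addition, not from the source):
S = [0 0 0; 1 0 0; 0 1 0];   % a single block of subdiagonal ones
M = [1 2 3; 4 5 6; 7 8 9];
S*M                          % rows of M shifted down one, top row zeroed
M*S                          % columns of M shifted left one, last column zeroed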
Five.III.2.29 We must check that B ∪ Ĉ ∪ {~v1, ..., ~vj} is linearly independent where B is a t-string basis for R(t), where Ĉ is a basis for N(t), and where t(~v1) = β~1, ..., t(~vj) = β~j. Write
$$\vec 0 = c_{1,-1}\vec v_1 + c_{1,0}\vec\beta_1 + c_{1,1}t(\vec\beta_1) + \cdots + c_{1,h_1}t^{h_1}(\vec\beta_1) + c_{2,-1}\vec v_2 + \cdots + c_{j,h_j}t^{h_j}(\vec\beta_j)$$
and apply t.
$$\vec 0 = c_{1,-1}\vec\beta_1 + c_{1,0}t(\vec\beta_1) + \cdots + c_{1,h_1-1}t^{h_1}(\vec\beta_1) + c_{1,h_1}\vec 0 + c_{2,-1}\vec\beta_2 + \cdots + c_{j,h_j-1}t^{h_j}(\vec\beta_j) + c_{j,h_j}\vec 0$$
Conclude that the coefficients c_{1,−1}, ..., c_{1,h_1−1}, c_{2,−1}, ..., c_{j,h_j−1} are all zero, as B ∪ Ĉ is a basis. Substitute back into the first displayed equation to conclude that the remaining coefficients are zero also.
Five.III.2.30 For any basis B, a transformation n is nilpotent if and only if N = RepB,B(n) is a nilpotent matrix. This is because only the zero matrix represents the zero map and so n^j is the zero map if and only if N^j is the zero matrix.
Five.III.2.31 It can be of any size greater than or equal to one. To have a transformation that is
nilpotent of index four, whose cube has rangespace of dimension k, take a vector space, a basis for
Five.IV.1.13 For each, the minimal polynomial must have a leading coefficient of 1 and Theorem 1.8,
the Cayley-Hamilton Theorem, says that the minimal polynomial must contain the same linear factors
as the characteristic polynomial, although possibly of lower degree but not of zero degree.
(a) The possibilities are m1 (x) = x − 3, m2 (x) = (x − 3)2 , m3 (x) = (x − 3)3 , and m4 (x) = (x − 3)4 .
Note that the 8 has been dropped because a minimal polynomial must have a leading coefficient of
one. The first is a degree one polynomial, the second is degree two, the third is degree three, and
the fourth is degree four.
(b) The possibilities are m1 (x) = (x+1)(x−4), m2 (x) = (x+1)2 (x−4), and m3 (x) = (x+1)3 (x−4).
The first is a quadratic polynomial, that is, it has degree two. The second has degree three, and the
third has degree four.
(c) We have m1 (x) = (x − 2)(x − 5), m2 (x) = (x − 2)2 (x − 5), m3 (x) = (x − 2)(x − 5)2 , and
m4 (x) = (x − 2)2 (x − 5)2 . They are polynomials of degree two, three, three, and four.
(d) The possibilities are m1 (x) = (x + 3)(x − 1)(x − 2), m2 (x) = (x + 3)2 (x − 1)(x − 2), m3 (x) =
(x + 3)(x − 1)(x − 2)2 , and m4 (x) = (x + 3)2 (x − 1)(x − 2)2 . The degree of m1 is three, the degree
of m2 is four, the degree of m3 is four, and the degree of m4 is five.
Five.IV.1.14 In each case we will use the method of Example 1.12.
(a) Because T is triangular, T − xI is also triangular
$$T-xI=\begin{pmatrix} 3-x&0&0\\ 1&3-x&0\\ 0&0&4-x \end{pmatrix}$$
so the characteristic polynomial is easy: c(x) = |T − xI| = (3 − x)²(4 − x) = −1·(x − 3)²(x − 4). There are only two possibilities for the minimal polynomial, m1(x) = (x − 3)(x − 4) and m2(x) = (x − 3)²(x − 4). (Note that the characteristic polynomial has a negative sign but the minimal polynomial does not, since it must have a leading coefficient of one.) Because m1(T) is not the zero matrix
$$(T-3I)(T-4I)=\begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&0&1 \end{pmatrix}\begin{pmatrix} -1&0&0\\ 1&-1&0\\ 0&0&0 \end{pmatrix}=\begin{pmatrix} 0&0&0\\ -1&0&0\\ 0&0&0 \end{pmatrix}$$
the minimal polynomial is m(x) = m2(x), as this computation confirms.
$$(T-3I)^2(T-4I)=(T-3I)\cdot\bigl((T-3I)(T-4I)\bigr)=\begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&0&1 \end{pmatrix}\begin{pmatrix} 0&0&0\\ -1&0&0\\ 0&0&0 \end{pmatrix}=\begin{pmatrix} 0&0&0\\ 0&0&0\\ 0&0&0 \end{pmatrix}$$
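These products can be double-checked in Octave (a sketch added here, not from the source answer):
T = [3 0 0; 1 3 0; 0 0 4];
(T - 3*eye(3)) * (T - 4*eye(3))      % not the zero matrix, so m1 fails
(T - 3*eye(3))^2 * (T - 4*eye(3))    % the zero matrix, so m(x) = (x-3)^2 (x-4)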
(b) As in the prior item, the fact that the matrix is triangular makes computation of the characteristic polynomial easy.
$$c(x)=|T-xI|=\begin{vmatrix} 3-x&0&0\\ 1&3-x&0\\ 0&0&3-x \end{vmatrix}=(3-x)^3=-1\cdot(x-3)^3$$
There are three possibilities for the minimal polynomial: m1(x) = (x − 3), m2(x) = (x − 3)², and m3(x) = (x − 3)³. We settle the question by computing m1(T)
$$T-3I=\begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&0&0 \end{pmatrix}$$
and m2(T).
$$(T-3I)^2=\begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&0&0 \end{pmatrix}\begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&0&0 \end{pmatrix}=\begin{pmatrix} 0&0&0\\ 0&0&0\\ 0&0&0 \end{pmatrix}$$
Because m2(T) is the zero matrix, m2(x) is the minimal polynomial.
(c) Again, the matrix is triangular.
$$c(x)=|T-xI|=\begin{vmatrix} 3-x&0&0\\ 1&3-x&0\\ 0&1&3-x \end{vmatrix}=(3-x)^3=-1\cdot(x-3)^3$$
$$(T-1I)^2=\begin{pmatrix} 0&0&2&6\\ 0&0&0&6\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix} \qquad m_2(x)=(x-1)^2,$$
$$(T-1I)^3=\begin{pmatrix} 0&0&0&6\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix} \qquad m_3(x)=(x-1)^3,$$
and m4(x) = (x − 1)^4. Because m1, m2, and m3 are not right, m4 must be right, as is easily verified.
In the case of a general n, the representation is an upper triangular matrix with ones on the diagonal. Thus the characteristic polynomial is c(x) = (x − 1)^{n+1}. One way to verify that the minimal polynomial equals the characteristic polynomial is to argue something like this: say that an upper triangular matrix is 0-upper triangular if there are nonzero entries on the diagonal, that it is 1-upper triangular if the diagonal contains only zeroes and there are nonzero entries just above the diagonal, etc. As the above example illustrates, an induction argument will show that if T is 1-upper triangular with only nonnegative entries then T^j is j-upper triangular. That argument is left to the reader.
Five.IV.1.19 The map twice is the same as the map once: π ◦ π = π, that is, π² = π and so the minimal polynomial is of degree at most two since m(x) = x² − x will do. The fact that no linear polynomial will do follows from applying the maps on the left and right side of c1·π + c0·id = z (where z is the zero map) to these two vectors.
$$\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} \qquad\qquad \begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix}$$
Thus the minimal polynomial is m.
Five.IV.1.20 This is one answer.
$$\begin{pmatrix} 0&0&0\\ 1&0&0\\ 0&0&0 \end{pmatrix}$$
Five.IV.1.21 The x must be a scalar, not a matrix.
Five.IV.1.22 The characteristic polynomial of
$$T=\begin{pmatrix} a&b\\ c&d \end{pmatrix}$$
is (a − x)(d − x) − bc = x² − (a + d)x + (ad − bc). Substitute
$$\begin{pmatrix} a&b\\ c&d \end{pmatrix}^2 - (a+d)\begin{pmatrix} a&b\\ c&d \end{pmatrix} + (ad-bc)\begin{pmatrix} 1&0\\ 0&1 \end{pmatrix}$$
$$=\begin{pmatrix} a^2+bc & ab+bd\\ ac+cd & bc+d^2 \end{pmatrix} - \begin{pmatrix} a^2+ad & ab+bd\\ ac+cd & ad+d^2 \end{pmatrix} + \begin{pmatrix} ad-bc & 0\\ 0 & ad-bc \end{pmatrix}$$
and just check each entry sum to see that the result is the zero matrix.
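For instance, the diagonal entries cancel like this (a worked check added here, not in the source):
$$(a^2+bc)-(a^2+ad)+(ad-bc)=0 \qquad\qquad (bc+d^2)-(ad+d^2)+(ad-bc)=0$$
and the two off-diagonal entries already cancel between the first two matrices.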
Five.IV.1.23 By the Cayley-Hamilton theorem the degree of the minimal polynomial is less than or equal to the degree of the characteristic polynomial, n. Example 1.6 shows that degree n can happen.
Five.IV.1.24 Suppose that t’s only eigenvalue is zero. Then the characteristic polynomial of t is xn .
Because t satisfies its characteristic polynomial, it is a nilpotent map.
Five.IV.1.25 A minimal polynomial must have leading coefficient 1, and so if the minimal polynomial of a map or matrix were a degree zero polynomial then it would be m(x) = 1. Then, since a map or matrix satisfies its minimal polynomial, the identity map or matrix would equal the zero map or matrix. But that happens only on a trivial vector space.
and the nullities show that the action of t − 6 on a string basis is β~1 ↦ β~2 ↦ β~3 ↦ ~0 and β~4 ↦ ~0. The Jordan form is
$$\begin{pmatrix} 6&0&0&0\\ 1&6&0&0\\ 0&1&6&0\\ 0&0&0&6 \end{pmatrix}$$
Five.IV.2.22 There are two eigenvalues, λ1 = −2 and λ2 = 1. The restriction of t + 2 to N∞(t + 2) could have either of these actions on an associated string basis.
$$\vec\beta_1\mapsto\vec\beta_2\mapsto\vec 0 \qquad\qquad \vec\beta_1\mapsto\vec 0 \quad \vec\beta_2\mapsto\vec 0$$
The restriction of t − 1 to N∞(t − 1) could have either of these actions on an associated string basis.
$$\vec\beta_3\mapsto\vec\beta_4\mapsto\vec 0 \qquad\qquad \vec\beta_3\mapsto\vec 0 \quad \vec\beta_4\mapsto\vec 0$$
In combination, that makes four possible Jordan forms: the two first actions, the second and first, the first and second, and the two second actions.
$$\begin{pmatrix} -2&0&0&0\\ 1&-2&0&0\\ 0&0&1&0\\ 0&0&1&1 \end{pmatrix} \quad \begin{pmatrix} -2&0&0&0\\ 0&-2&0&0\\ 0&0&1&0\\ 0&0&1&1 \end{pmatrix} \quad \begin{pmatrix} -2&0&0&0\\ 1&-2&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix} \quad \begin{pmatrix} -2&0&0&0\\ 0&-2&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix}$$
Five.IV.2.23 The restriction of t + 2 to N∞(t + 2) can have only the action β~1 ↦ ~0. The restriction of t − 1 to N∞(t − 1) could have any of these three actions on an associated string basis.
$$\vec\beta_2\mapsto\vec\beta_3\mapsto\vec\beta_4\mapsto\vec 0 \qquad \begin{array}{l}\vec\beta_2\mapsto\vec\beta_3\mapsto\vec 0\\ \vec\beta_4\mapsto\vec 0\end{array} \qquad \begin{array}{l}\vec\beta_2\mapsto\vec 0\\ \vec\beta_3\mapsto\vec 0\\ \vec\beta_4\mapsto\vec 0\end{array}$$
Taken together there are three possible Jordan forms, the one arising from the first action by t − 1 (along with the only action from t + 2), the one arising from the second action, and the one arising from the third action.
$$\begin{pmatrix} -2&0&0&0\\ 0&1&0&0\\ 0&1&1&0\\ 0&0&1&1 \end{pmatrix} \quad \begin{pmatrix} -2&0&0&0\\ 0&1&0&0\\ 0&1&1&0\\ 0&0&0&1 \end{pmatrix} \quad \begin{pmatrix} -2&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix}$$
Five.IV.2.24 The action of t + 1 on a string basis for N∞(t + 1) must be β~1 ↦ ~0. Because of the power of x − 2 in the minimal polynomial, a string basis for t − 2 has length two and so the action of t − 2 on N∞(t − 2) must be of this form.
$$\vec\beta_2\mapsto\vec\beta_3\mapsto\vec 0 \qquad \vec\beta_4\mapsto\vec 0$$
Therefore there is only one Jordan form that is possible.
$$\begin{pmatrix} -1&0&0&0\\ 0&2&0&0\\ 0&1&2&0\\ 0&0&0&2 \end{pmatrix}$$
Five.IV.2.25 There are two possible Jordan forms. The action of t + 1 on a string basis for N∞(t + 1) must be β~1 ↦ ~0. There are two actions for t − 2 on a string basis for N∞(t − 2) that are possible with this characteristic polynomial and minimal polynomial.
$$\begin{array}{l}\vec\beta_2\mapsto\vec\beta_3\mapsto\vec 0\\ \vec\beta_4\mapsto\vec\beta_5\mapsto\vec 0\end{array} \qquad\qquad \begin{array}{l}\vec\beta_2\mapsto\vec\beta_3\mapsto\vec 0\\ \vec\beta_4\mapsto\vec 0\\ \vec\beta_5\mapsto\vec 0\end{array}$$
The resulting Jordan form matrices are these.
$$\begin{pmatrix} -1&0&0&0&0\\ 0&2&0&0&0\\ 0&1&2&0&0\\ 0&0&0&2&0\\ 0&0&0&1&2 \end{pmatrix} \qquad \begin{pmatrix} -1&0&0&0&0\\ 0&2&0&0&0\\ 0&1&2&0&0\\ 0&0&0&2&0\\ 0&0&0&0&2 \end{pmatrix}$$
Five.IV.2.28 Yes. Each has the characteristic polynomial (x + 1)². Calculations of the powers of T1 + 1·I and T2 + 1·I give these two null spaces.
$$N(t_1+1)=\{\begin{pmatrix} y/2\\ y \end{pmatrix} \mid y\in\mathbb{C}\} \qquad\qquad N(t_2+1)=\{\begin{pmatrix} 0\\ y \end{pmatrix} \mid y\in\mathbb{C}\}$$
(Of course, for each the null space of the square is the entire space.) The way that the nullities rise shows that each is similar to this Jordan form matrix
$$\begin{pmatrix} -1&0\\ 1&-1 \end{pmatrix}$$
and they are therefore similar to each other.
Five.IV.2.29 Its characteristic polynomial is c(x) = x² + 1, which factors over the complex numbers as x² + 1 = (x + i)(x − i). Because the roots are distinct, the matrix is diagonalizable and its Jordan form is that diagonal matrix.
$$\begin{pmatrix} -i&0\\ 0&i \end{pmatrix}$$
To find an associated basis we compute the null spaces.
$$N(t+i)=\{\begin{pmatrix} -iy\\ y \end{pmatrix} \mid y\in\mathbb{C}\} \qquad\qquad N(t-i)=\{\begin{pmatrix} iy\\ y \end{pmatrix} \mid y\in\mathbb{C}\}$$
For instance,
$$T+i\cdot I=\begin{pmatrix} i&-1\\ 1&i \end{pmatrix}$$
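A quick Octave confirmation (my addition; the matrix T = [0 -1; 1 0] is read off from the display of T + i·I above):
T = [0 -1; 1 0];
eig(T)    % returns the distinct values i and -i, so T is diagonalizable over C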