Finite Dimensional Linear Algebra, 1st Edition (Gockenbach): Solution Manual
2.1 Fields

5. (a) By definition, i² = (0 + 1 · i)(0 + 1 · i) = (0 · 0 − 1 · 1) + (0 · 1 + 1 · 0)i = −1 + 0 · i = −1.


(b) We will prove only the existence of multiplicative inverses, the other properties of a field being
straightforward (although possibly tedious) to verify. Let a + bi be a nonzero complex number
(which means that a ≠ 0 or b ≠ 0). We must prove that there exists a complex number c + di such
that (a + bi)(c + di) = (ac − bd) + (ad + bc)i equals 1 = 1 + 0 · i. This is equivalent to the pair of
equations ac − bd = 1, ad + bc = 0. If a ≠ 0, then the second equation yields d = −bc/a; substituting
into the first equation yields ac + b²c/a = 1, or c = a/(a² + b²). Then d = −bc/a = −b/(a² + b²).
This solution is well-defined even if a = 0 (since in that case b ≠ 0), and it can be directly verified
that it satisfies (a + bi)(c + di) = 1 in that case also. Thus each nonzero complex number has a
multiplicative inverse.
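This inverse formula is easy to spot-check numerically. Below is a minimal sketch (our own illustration, not part of the original solution; the helper name inverse is ours) confirming that (a + bi)(c + di) = 1, including the case a = 0:

```python
# Sanity check (illustration only): c = a/(a^2 + b^2), d = -b/(a^2 + b^2)
# should satisfy (a + bi)(c + di) = 1 whenever (a, b) != (0, 0).
def inverse(a, b):
    s = a * a + b * b
    return a / s, -b / s

for a, b in [(3.0, 4.0), (0.0, 2.0), (-1.0, 0.0)]:
    c, d = inverse(a, b)
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    print(a * c - b * d, a * d + b * c)  # expect 1.0 and 0.0 each time
```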
6. Let F be a field, and let α, β, γ ∈ F with γ ≠ 0. Suppose αγ = βγ. Then, multiplying both sides by γ⁻¹,
we obtain (αγ)γ⁻¹ = (βγ)γ⁻¹, or α(γγ⁻¹) = β(γγ⁻¹), which then yields α · 1 = β · 1, or α = β.
7. Let F be a field and let α, β be elements of F . We wish to show that the equation α + x = β has a unique
solution. The proof has two parts. First, if x satisfies α + x = β, then adding −α to both sides shows
that x must equal −α + β = β − α. This shows that the equation has at most one solution. On the other
hand, x = −α + β is a solution since α + (−α + β) = (α − α) + β = 0 + β = β. Therefore, α + x = β has
a unique solution, namely, x = −α + β.
8. Let F be a field, and let α, β ∈ F . We wish to determine if the equation αx = β always has a unique
solution. The answer is no; in fact, there are three possible cases. First, if α = 0 and β = 0, then αx = β
is satisfied by every element of F , and there are multiple solutions in this case. Second, if α = 0, β = 0,
then αx = β has no solution (since αx = 0 for all x ∈ F in this case). Third, if α = 0, then αx = β
has the unique solution x = α−1 β. (Existence and uniqueness is proved in this case as in the previous
exercise.)
9. Let F be a field. Let α, β, γ, δ ∈ F, with β, δ ≠ 0. We wish to show that

α/β + γ/δ = (αδ + βγ)/(βδ),    (α/β) · (γ/δ) = (αγ)/(βδ).

Using the definition of division, the commutative and associative properties of multiplication, and finally
the distributive property, we obtain
α/β + γ/δ = αβ⁻¹ + γδ⁻¹ = (α · 1)β⁻¹ + (γ · 1)δ⁻¹
= (α(δδ⁻¹))β⁻¹ + (γ(ββ⁻¹))δ⁻¹ = ((αδ)δ⁻¹)β⁻¹ + ((γβ)β⁻¹)δ⁻¹
= (αδ)(δ⁻¹β⁻¹) + (γβ)(β⁻¹δ⁻¹) = (αδ)(δβ)⁻¹ + (γβ)(βδ)⁻¹
= (αδ)(βδ)⁻¹ + (γβ)(βδ)⁻¹ = ((αδ) + (γβ))(βδ)⁻¹
= (αδ + βγ)/(βδ).
Similarly,

(α/β) · (γ/δ) = (αβ⁻¹)(γδ⁻¹) = ((αβ⁻¹)γ)δ⁻¹ = (α(β⁻¹γ))δ⁻¹ = (α(γβ⁻¹))δ⁻¹
= ((αγ)β⁻¹)δ⁻¹ = (αγ)(β⁻¹δ⁻¹) = (αγ)(βδ)⁻¹ = (αγ)/(βδ).

Now assuming that β, γ, δ ≠ 0, we wish to show that

(α/β)/(γ/δ) = (αδ)/(βγ).
72.1. FIELDS CHAPTER 2. FIELDS AND VECTOR SPACES7

Using the fact that (δ⁻¹)⁻¹ = δ, we have

(α/β)/(γ/δ) = (αβ⁻¹)(γδ⁻¹)⁻¹ = (αβ⁻¹)(γ⁻¹δ) = ((αβ⁻¹)γ⁻¹)δ
= (α(β⁻¹γ⁻¹))δ = (α(βγ)⁻¹)δ = α((βγ)⁻¹δ)
= α(δ(βγ)⁻¹) = (αδ)(βγ)⁻¹ = (αδ)/(βγ).

10. Let F be a field, and let α ∈ F be given. We wish to prove that, for any β1, . . . , βn ∈ F, α(β1 + · · · + βn) =
αβ1 + · · · + αβn. We argue by induction on n. For n = 1, the result is simply αβ1 = αβ1. Suppose that for
some n ≥ 2, α(β1 + · · · + βn−1) = αβ1 + · · · + αβn−1 for any β1, . . . , βn−1 ∈ F. Let β1, . . . , βn−1, βn ∈ F.
Then

α(β1 + · · · + βn) = α((β1 + · · · + βn−1) + βn) = α(β1 + · · · + βn−1) + αβn = αβ1 + · · · + αβn−1 + αβn.

(In the last step, we applied the induction hypothesis, and in the step preceding that, the distributive
property of multiplication over addition.) This shows that α(β1 + · · · + βn) = αβ1 + · · · + αβn, and the
general result now follows by induction.

11. (a) The space Z is not a field because multiplicative inverses do not exist in general. For example, 2 ≠ 0,
yet there exists no n ∈ Z such that 2n = 1.
(b) The space Q of rational numbers is a field. Assuming the usual definitions for addition and multipli-
cation, all of the defining properties of a field are straightforward to verify.
(c) The space of positive real numbers is not a field because there is no additive identity. For any
z ∈ (0, ∞), x + z > x for all x ∈ (0, ∞).

12. Let F = {(α, β) : α, β ∈ R}, and define addition and multiplication on F by (α, β)+(γ, δ) = (α+γ, β+δ),
(α, β) · (γ, δ) = (αγ, βδ). With these definitions, F is not a field because multiplicative inverses do not
exist. It is straightforward to verify that (0, 0) is an additive identity and (1, 1) is a multiplicative identity.
Then (1, 0) ≠ (0, 0), yet (1, 0) · (α, β) = (α, 0) ≠ (1, 1) for all (α, β) ∈ F. Since F contains a nonzero
element with no multiplicative inverse, F is not a field.

13. Let F = (0, ∞), and define addition and multiplication on F by x ⊕ y = xy, x ⊙ y = x^(ln y). We wish
to show that F is a field. Commutativity and associativity of addition follow immediately from these
properties for ordinary multiplication of real numbers. Obviously 1 is an additive identity, and the additive
inverse of x ∈ F is its reciprocal 1/x. The properties of multiplication are less obvious, but note that
x ⊙ y = x^(ln y) = e^(ln(x) ln(y)), and this formula makes both commutativity and associativity easy to verify.
We also see that e is a multiplicative identity: x ⊙ e = x^(ln e) = x¹ = x for all x ∈ F. For any x ∈ F with
x ≠ 1, y = e^(1/ln(x)) is a multiplicative inverse. Finally, for any x, y, z ∈ F, x ⊙ (y ⊕ z) = x ⊙ (yz) = x^(ln(yz)) =
x^(ln(y)+ln(z)) = x^(ln(y)) x^(ln(z)) = (x ⊙ y)(x ⊙ z) = (x ⊙ y) ⊕ (x ⊙ z). Thus the distributive property holds.
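These axioms can also be spot-checked numerically. The following sketch (our own illustration, assuming the operations ⊕ and ⊙ defined above) tests the key identities on random samples:

```python
import math, random

# Spot-check of the field axioms for F = (0, inf) with
# x (+) y = x*y and x (.) y = x**ln(y)  (illustration only).
add = lambda x, y: x * y
mul = lambda x, y: x ** math.log(y)

random.seed(1)
for _ in range(100):
    x, y, z = (random.uniform(0.1, 5.0) for _ in range(3))
    assert math.isclose(mul(x, y), mul(y, x))                          # commutativity
    assert math.isclose(mul(x, mul(y, z)), mul(mul(x, y), z))          # associativity
    assert math.isclose(mul(x, add(y, z)), add(mul(x, y), mul(x, z)))  # distributivity
    assert math.isclose(add(x, 1.0), x) and math.isclose(mul(x, math.e), x)  # identities
print("axioms hold on all sampled triples")
```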

14. Suppose F is a set on which are defined two operations, addition and multiplication, such that all the
properties of a field are satisfied except that addition is not assumed to be commutative. We wish to
show that, in fact, addition must be commutative, and therefore F must be a field. We first note that
it is possible to prove that 0 · γ = 0, −1 · γ = −γ, and −(−γ) = γ for all γ ∈ F without invoking
commutativity of addition. Moreover, for all α, β ∈ F , −β + (−α) = −(α + β) since (α + β) + (−β +
(−α)) = ((α + β) + (−β)) + (−α) = (α + (β + (−β))) + (−α) = (α + 0) + (−α) = α + (−α) = 0. We
therefore conclude that −1 · (α + β) = −β + (−α) for all α, β ∈ F . But, by the distributive property,
−1 · (α + β) = −1 · α + (−1) · β = −α + (−β), and therefore −α + (−β) = −β + (−α) for all α, β ∈ F .
Applying this property to −α, −β in place of α, β, respectively, yields α + β = β + α for all α, β ∈ F ,
which is what we wanted to prove.
82.1. FIELDS CHAPTER 2. FIELDS AND VECTOR SPACES8

15. (a) In Z2 = {0, 1}, we have 0 + 0 = 0, 1 + 0 = 0 + 1 = 1, and 1 + 1 = 0. This shows that −0 = 0 (as
always) and −1 = 1. Also, 0 · 0 = 0 · 1 = 1 · 0 = 0, 1 · 1 = 1, and 1⁻¹ = 1 (as in any field).
(b) The addition and multiplication tables for Z3 = {0, 1, 2} are
+ | 0 1 2        · | 0 1 2
--+------        --+------
0 | 0 1 2        0 | 0 0 0
1 | 1 2 0        1 | 0 1 2
2 | 2 0 1        2 | 0 2 1

We see that −0 = 0, −1 = 2, −2 = 1, 1⁻¹ = 1, 2⁻¹ = 2.


The addition and multiplication tables for Z5 = {0, 1, 2, 3, 4} are

+ | 0 1 2 3 4        · | 0 1 2 3 4
--+----------        --+----------
0 | 0 1 2 3 4        0 | 0 0 0 0 0
1 | 1 2 3 4 0        1 | 0 1 2 3 4
2 | 2 3 4 0 1        2 | 0 2 4 1 3
3 | 3 4 0 1 2        3 | 0 3 1 4 2
4 | 4 0 1 2 3        4 | 0 4 3 2 1

We see that −0 = 0, −1 = 4, −2 = 3, −3 = 2, −4 = 1, 1⁻¹ = 1, 2⁻¹ = 3, 3⁻¹ = 2, 4⁻¹ = 4.
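Tables such as these can be generated mechanically; the short helper below (our own illustration, not from the text) reproduces them for any modulus p:

```python
# Print the addition and multiplication tables of Z_p (illustration only).
def tables(p):
    for name, op in (("+", lambda a, b: (a + b) % p),
                     ("*", lambda a, b: (a * b) % p)):
        print(name, *range(p))
        for a in range(p):
            print(a, *(op(a, b) for b in range(p)))
        print()

tables(3)  # matches the Z_3 tables above
tables(5)  # matches the Z_5 tables above
```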


16. Let p be a positive integer that is prime. We wish to show that Zp is a field. The commutativity of addition
and multiplication in Zp follows immediately from the commutativity of addition and multiplication of
integers, and similarly for associativity and the distributive property. Obviously 0 is an additive identity
and 1 is a multiplicative identity. Also, 1 ≠ 0. For any α ∈ Zp, the integer p − α, regarded as an element
of Zp, satisfies α + (p − α) = 0 in Zp; therefore every element of Zp has an additive inverse. It remains only
to prove that every nonzero element of Zp has a multiplicative inverse. Suppose α ∈ Zp, α ≠ 0. Since
Zp has only finitely many elements, α, α², α³, . . . cannot all be distinct; there must exist positive integers
k, ℓ, with k > ℓ, such that α^k = α^ℓ in Zp. This means that the integers α^k, α^ℓ satisfy α^k = α^ℓ + np for
some positive integer n, which in turn yields α^k − α^ℓ = np, or α^ℓ(α^(k−ℓ) − 1) = np. Now, a basic theorem
from number theory states that if a prime p divides a product of integers, then it must divide one of the
integers in the product. In this case, p must divide α^ℓ or α^(k−ℓ) − 1. Since 0 < α < p, p does not divide α,
and hence p does not divide α^ℓ; therefore p divides α^(k−ℓ) − 1. Therefore, α^(k−ℓ) − 1 = sp, where s is a
positive integer; this is equivalent to α^(k−ℓ) = 1 in Zp. Finally, this means that α · α^(k−ℓ−1) = 1 in Zp, and
therefore α has a multiplicative inverse, namely, α^(k−ℓ−1). This completes the proof that Zp is a field.
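The argument is constructive enough to implement directly. The sketch below (our own illustration; the function name is an assumption) finds the first exponent m with α^m = 1 and returns α^(m−1) as the inverse, exactly as in the proof:

```python
# Find the inverse of alpha in Z_p (p prime) via the repeated-powers
# argument above (illustration only): some power alpha^m equals 1,
# and alpha^(m-1) is then the multiplicative inverse.
def inverse_mod_p(alpha, p):
    power, m = alpha % p, 1
    while power != 1:
        power = (power * alpha) % p
        m += 1
    return pow(alpha, m - 1, p)

p = 7
for alpha in range(1, p):
    inv = inverse_mod_p(alpha, p)
    assert (alpha * inv) % p == 1
    print(alpha, "has inverse", inv, "in Z_7")
```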
17. Let p be a positive integer that is not prime. It is easy to see that 1 is a multiplicative identity in Zp.
Since p is not prime, there exist integers m, n satisfying 1 < m, n < p and mn = p. But then, if m and
n are regarded as elements of Zp, m, n ≠ 0 and mn = 0, which is impossible in a field. Therefore, Zp is
not a field when p is not prime.
18. Let F be a finite field.
(a) Consider the elements 1, 1 + 1, 1 + 1 + 1, . . . in F. Since F contains only finitely many elements, there
must exist two terms in this sequence that are equal, say 1 + 1 + · · · + 1 (ℓ terms) and 1 + 1 + · · · + 1
(k terms), where k > ℓ. We can then add −1 to both sides ℓ times to show that 1 + 1 + · · · + 1 (k − ℓ
terms) equals 0 in F. Since at least one term of the sequence 1, 1 + 1, 1 + 1 + 1, . . . equals 0, we can define
n to be the smallest integer greater than 1 such that 1 + 1 + · · · + 1 = 0 (n terms). We call n the
characteristic of the field.
(b) Given that the characteristic of F is n, for any α ∈ F , we have α + α + · · · + α = α(1 + 1 + · · · + 1) =
α · 0 = 0 if the sum has n terms.
(c) We now wish to show that the characteristic n is prime. Suppose, by way of contradiction, that
n = kℓ, where 1 < k, ℓ < n. Define α = 1 + 1 + · · · + 1 (k terms) and β = 1 + 1 + · · · + 1 (ℓ terms).
Then αβ = 1 + 1 + · · · + 1 (n terms), so that αβ = 0. But this implies that α = 0 or β = 0, which
contradicts the definition of the characteristic n. This contradiction shows that n must be prime.
92.1. FIELDS CHAPTER 2. FIELDS AND VECTOR SPACES9

19. We are given that H represents the space of quaternions and the definitions of addition and multiplication
in H. The first two parts of the exercise are purely computational.
(a) i² = j² = k² = −1, ij = k, ik = −j, jk = i, ji = −k, ki = j, kj = −i, ijk = −1.
(b) xx̄ = x̄x = x1² + x2² + x3² + x4².
(c) The additive identity in H is 0 = 0 + 0i + 0j + 0k. The additive inverse of x = x1 + x2 i + x3 j + x4 k
is −x = −x1 − x2 i − x3 j − x4 k.
(d) The calculations above show that multiplication is not commutative; for instance, ij = k, ji = −k.
(e) It is easy to verify that 1 = 1 + 0i + 0j + 0k is a multiplicative identity for H.
(f) If x ∈ H is nonzero, then x̄x = x1² + x2² + x3² + x4² is also nonzero. It follows that

x̄/(x̄x) = x1/(x̄x) − (x2/(x̄x))i − (x3/(x̄x))j − (x4/(x̄x))k

is a multiplicative inverse for x:

x · (x̄/(x̄x)) = (xx̄)/(x̄x) = 1.
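The identities in parts (a), (b), and (d) can be checked mechanically. The sketch below (our own illustration) implements the quaternion product componentwise using the standard Hamilton rules, which we assume agree with the text's definition:

```python
# Quaternion product of x = x1 + x2 i + x3 j + x4 k and
# y = y1 + y2 i + y3 j + y4 k (illustration only).
def qmul(x, y):
    x1, x2, x3, x4 = x
    y1, y2, y3, y4 = y
    return (x1*y1 - x2*y2 - x3*y3 - x4*y4,
            x1*y2 + x2*y1 + x3*y4 - x4*y3,
            x1*y3 - x2*y4 + x3*y1 + x4*y2,
            x1*y4 + x2*y3 - x3*y2 + x4*y1)

one, i, j, k = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
neg = lambda q: tuple(-c for c in q)
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == neg(one)  # i^2 = j^2 = k^2 = -1
assert qmul(i, j) == k and qmul(j, i) == neg(k)            # ij = k, ji = -k
assert qmul(qmul(i, j), k) == neg(one)                     # ijk = -1
```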
20. In order to solve the first two parts of this problem, it is convenient to prove the following result. Suppose
F is a field under operations +, ·, G is a nonempty subset of F, and G is a field under operations ⊕, ⊙.
Moreover, suppose that for all x, y ∈ G, x + y = x ⊕ y and x · y = x ⊙ y. Then G is a subfield of F.
We already know (since the operations on F reduce to the operations on G when the operands belong
to G) that G is closed under + and ·. We have to prove that the additive and multiplicative identities
of F belong to G, which we will do by showing that 0F = 0G (that is, the additive identity of F equals
the additive identity of G under ⊕) and 1F = 1G (which has the analogous meaning). To prove the first,
notice that 0G + 0G = 0G ⊕ 0G since 0G ∈ G, and therefore 0G + 0G = 0G . Adding the additive inverse
(in F ) −0G to both sides of this equation yields 0G = 0F . A similar proof shows that 1F = 1G . Thus
0F , 1F ∈ G. We next show that if x ∈ G and −x denotes the additive inverse of x in F , then −x ∈ G.
We write ⊖x for the additive inverse of x in G. We have x ⊕ (⊖x) = 0, which implies that x + (⊖x) = 0.
But then, adding −x to both sides, we obtain ⊖x = −x, and therefore −x ∈ G. Similarly, if x ∈ G,
x ≠ 0, and x⁻¹ denotes the multiplicative inverse of x in F, then x⁻¹ ∈ G. This completes the proof.
(a) We wish to show that R is a subfield of C. It suffices to prove that addition and multiplication
in C reduce to the usual addition and multiplication in R when the operands are real numbers. If
x, y ∈ R ⊂ C, then (x + 0i) + (y + 0i) = (x + y) + (0 + 0)i = (x + y) + 0i = x + y. Similarly,
(x + 0i)(y + 0i) = (xy − 0 · 0) + (x · 0 + 0 · y)i = xy + 0i = xy. Thus the operations on C reduce to
the operations on R when the operands are elements of R, and therefore R is a subfield of C.
(b) We now wish to show that C is a subfield of H by showing that the operations of H reduce to the
operations on C when the operands belong to C. Let x = x1 + x2 i, y = y1 + y2 i belong to C, so
that x = x1 + x2 i + 0j + 0k, y = y1 + y2 i + 0j + 0k can be regarded as elements of H. By definition,
x + y = (x1 + x2 i + 0j + 0k) + (y1 + y2 i + 0j + 0k)
= (x1 + y1 ) + (x2 + y2 )i + (0 + 0)j + (0 + 0)k
= (x1 + y1 ) + (x2 + y2 )i,
xy =(x1 + x2 i + 0j + 0k)(y1 + y2 i + 0j + 0k)
=(x1 y1 − x2 y2 − 0 · 0 − 0 · 0)+
(x1 y2 + x2 y1 + 0 · 0 − 0 · 0)i+
(x1 · 0 − x2 · 0 + 0 · y1 + 0 · y2 )j+
(x1 · 0 + x2 · 0 − 0 · y2 + 0 · y1 )k
=(x1 y1 − x2 y2 ) + (x1 y2 + x2 y1 )i.
Thus both operations on H reduce to the usual operations on C, which shows that C is a subfield
of H.
2.1. FIELDS
10 CHAPTER 2. FIELDS AND VECTOR SPACES10

(c) Consider the subset S = {a + bi + cj : a, b, c ∈ R} of H. We wish to determine whether S is a
subfield of H. In fact, S is not a subfield because it is not closed under multiplication. For example,
i, j ∈ S, but ij = k ∉ S.

2.2 Vector spaces


1. Let F be a field, and let V = {0} , with addition and scalar multiplication on V defined by 0 + 0 = 0,
α · 0 = 0 for all α ∈ F . We wish to prove that V is a vector space over F . This is a straightforward
verification of the defining properties. The commutative property of addition holds trivially, since V
contains a single element. We have (0 + 0) + 0 = 0 + 0 = 0 + (0 + 0), so the associative property holds. The
definition 0 + 0 = 0 shows both that 0 is an additive identity and that 0 is the additive inverse of 0,
the only vector in V . Next, for all α, β ∈ F , we have α(β · 0) = α · 0 = 0 = (αβ) · 0, so the associative
property of scalar multiplication is satisfied. Also, α(0 + 0) = α · 0 = 0 = 0 + 0 = α · 0 + α · 0 and
(α + β) · 0 = 0 = α · 0 + β · 0, so both distributive properties hold. Finally, 1 · 0 = 0 by definition, so the
final property of a vector space holds. Thus V is a vector space over F .

2. Let F be an infinite field, and let V be a nontrivial vector space over F. We wish to show that V contains
infinitely many vectors. By definition, V contains a nonzero vector u. It suffices to show that, for all
α, β ∈ F, α ≠ β implies αu ≠ βu, since then V contains the infinite subset {αu : α ∈ F}. Suppose
α, β ∈ F. Then αu = βu if and only if αu − βu = 0, that is, if and only if (α − β)u = 0. Since
u ≠ 0 by assumption, this implies that α − β = 0 by Theorem 5. Thus αu = βu implies α = β, which
completes the proof.

3. Let V be a vector space over a field F .

(a) Suppose z ∈ V is an additive identity. Then z + 0 = z (since 0 is an additive identity) and 0 + z = 0


(since z is an additive identity). Then z = z + 0 = 0 + z = 0, which shows that 0 is the only additive
identity in V .
(b) Let u ∈ V . If u + v = 0, then −u + (u + v) = −u + 0, which implies that (−u + u) + v = −u, or
0 + v = −u, or finally v = −u. Thus the additive inverse −u of u is unique.
(c) Suppose u, v ∈ V . Then (u + v) + (−u + (−v)) = ((u + v) + (−u)) + (−v) = (u + (v + (−u))) + (−v) =
(u + (−u + v)) + (−v) = ((u + (−u)) + v) + (−v) = (0 + v) + (−v) = v + (−v) = 0. By the preceding
result, this shows that −u + (−v) = −(u + v).
(d) Suppose u, v, w ∈ V and u + v = u + w. Then −u + (u + v) = −u + (u + w), which implies that
(−u + u) + v = (−u + u) + w, or 0 + v = 0 + w, or finally v = w.
(e) Suppose α ∈ F and 0 is the zero vector in V . Then α0 + α0 = α(0 + 0) = α0; adding −(α0) to both
sides yields α0 = 0, as desired.
(f) Suppose α ∈ F, u ∈ V, and αu = 0. If α ≠ 0, then α⁻¹ exists and α⁻¹(αu) = α⁻¹ · 0, which implies
(α⁻¹α)u = 0 (applying the last result), which in turn yields 1 · u = 0 or finally u = 0. Therefore,
αu = 0 implies that α = 0 or u = 0.
(g) Suppose u ∈ V . Then 0 · u + 0 · u = (0 + 0) · u = 0 · u. Adding −(0 · u) to both sides yields 0 · u = 0.
We then have u + (−1)u = 1 · u + (−1)u = (1 + (−1))u = 0 · u = 0, which shows that (−1)u = −u.

4. We are to prove that if F is a field, then F n is a vector space over F . This is a straightforward verification
of the defining properties of a vector space, which follow in this case from the analogous properties of the
field F . The details are omitted.

5. We are to prove that F [a, b] (the space of all functions f : [a, b] → R) is a vector space over R. Like the
last exercise, this straightforward verification is omitted.

6. (a) Let p be a prime and n a positive integer. Since each of the n components of x ∈ Zₚⁿ can take on
any of the p values 0, 1, . . . , p − 1, there are pⁿ distinct vectors in Zₚⁿ.
2.2. VECTOR SPACES 11

(b) The elements of Z22 are (0, 0), (0, 1), (1, 0), (1, 1). We have (0, 0) + (0, 0) = (0, 0), (0, 0) + (0, 1) =
(0, 1), (0, 0) + (1, 0) = (1, 0), (0, 0) + (1, 1) = (1, 1), (0, 1) + (0, 1) = (0, 0), (0, 1) + (1, 0) = (1, 1),
(0, 1) + (1, 1) = (1, 0), (1, 0) + (1, 0) = (0, 0), (1, 0) + (1, 1) = (0, 1), (1, 1) + (1, 1) = (0, 0).
7. (a) The elements of P1 (Z2 ) are the polynomials 0, 1, x, 1 + x, which define distinct functions on Z2 .
We have 0 + 0 = 0, 0 + 1 = 1, 0 + x = x, 0 + (1 + x) = 1 + x, 1 + 1 = 0, 1 + x = 1 + x, 1 + (1 + x) = x,
x+ x = (1 + 1)x = 0x = 0, x+(1 +x) = 1+(x+x) = 1, (1+x)+(1+x) = (1+1)+(x+x) = 0 + 0 = 0.
(b) Nominally, the elements of P2(Z2) are 0, 1, x, 1 + x, x², 1 + x², x + x², 1 + x + x². However, since
these elements are interpreted as functions mapping Z2 into Z2, it turns out that the last four
functions equal the first four. In particular, x² = x (as functions), since 0² = 0 and 1² = 1. Then
1 + x² = 1 + x, x + x² = x + x = 0, and 1 + x + x² = 1 + x + x = 1 + 0 = 1. Thus we see that the function
spaces P2(Z2) and P1(Z2) are the same.
(c) Let V be the vector space consisting of all functions from Z2 into Z2 . To specify f ∈ V means to
specify the two values f (0) and f (1). There are exactly four ways to do this: f (0) = 0, f (1) = 0
(so f (x) = 0); f (0) = 1, f (1) = 1 (so f (x) = 1); f (0) = 0, f (1) = 1 (so f (x) = x); and f (0) = 1,
f (1) = 0 (so f (x) = 1 + x). Thus we see that V = P1 (Z2 ).
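This collapse of P2(Z2) onto P1(Z2) can be confirmed by brute force. The sketch below (our own illustration) enumerates every coefficient choice and compares the resulting functions on Z2:

```python
from itertools import product

# Each coefficient tuple (c0, c1, ...) defines the polynomial function
# x -> c0 + c1*x + c2*x^2 + ... (mod 2); a function on Z_2 is determined
# by its values at 0 and 1 (illustration only).
def as_function(coeffs):
    return tuple(sum(c * x**e for e, c in enumerate(coeffs)) % 2 for x in (0, 1))

deg2 = {as_function(c) for c in product((0, 1), repeat=3)}  # P2(Z2)
deg1 = {as_function(c) for c in product((0, 1), repeat=2)}  # P1(Z2)
print(deg2 == deg1, len(deg2))  # True 4
```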
8. Let V = (0, ∞), with addition ⊕ and scalar multiplication ⊙ defined by u ⊕ v = uv for all u, v ∈ V
and α ⊙ u = u^α for all α ∈ R and all u ∈ V. We will prove that V is a vector space over R. First of
all, ⊕ is commutative and associative (because multiplication of real numbers has these properties). For
all u ∈ V, u ⊕ 1 = u · 1 = u, so there is an additive identity. Also, if u ∈ V, then 1/u ∈ V satisfies
u ⊕ (1/u) = u(1/u) = 1, so each vector has an additive inverse. Next, if α, β ∈ R and u, v ∈ V, then
α ⊙ (β ⊙ u) = α ⊙ (u^β) = (u^β)^α = u^(αβ) = (αβ) ⊙ u, so the associative property of scalar multiplication
holds. Also, α ⊙ (u ⊕ v) = α ⊙ (uv) = (uv)^α = u^α v^α = (α ⊙ u) ⊕ (α ⊙ v) and (α + β) ⊙ u = u^(α+β) = u^α u^β =
(α ⊙ u) ⊕ (β ⊙ u). Thus both distributive properties hold. Finally, 1 ⊙ u = u¹ = u. This completes the
proof that V is a vector space over R.
9. Let V = R2 with the usual scalar multiplication and the following nonstandard vector addition: u ⊕v =
(u1 + v1 , u2 + v2 + 1) for all u, v ∈ R2 . It is easy to check that commutativity and associativity of ⊕
hold, that (0, −1) is an additive identity, and that each u = (u1 , u2 ) has an additive inverse, namely,
(−u1 , −u2 − 2). Also, α(βu) = (αβ)u for all u ∈ V , α, β ∈ R (since scalar multiplication is defined in the
standard way). However, if α ∈ R, then α(u ⊕ v) = α(u1 + v1, u2 + v2 + 1) = (αu1 + αv1, αu2 + αv2 + α),
while αu ⊕ αv = (αu1, αu2) ⊕ (αv1, αv2) = (αu1 + αv1, αu2 + αv2 + 1), and these are unequal if α ≠ 1.
Thus the first distributive property fails to hold, and V is not a vector space over R. (In fact, the second
distributive property also fails.)
10. Let V = R2 with the usual scalar multiplication, and with addition defined by u ⊕ v = (α1u1 + β1v1, α2u2 +
β2v2), where α1, α2, β1, β2 ∈ R are fixed. We wish to determine what values of α1, α2, β1, β2 will make V a
vector space over R. We first note that ⊕ is commutative if and only if α1 = β1, α2 = β2. We therefore
redefine u ⊕ v as (α1u1 + α1v1, α2u2 + α2v2) = (α1(u1 + v1), α2(u2 + v2)). Next, we have (u ⊕ v) ⊕ w =
(α1²u1 + α1²v1 + α1w1, α2²u2 + α2²v2 + α2w2), u ⊕ (v ⊕ w) = (α1u1 + α1²v1 + α1²w1, α2u2 + α2²v2 + α2²w2).
From this, it is easy to show that (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w) for all u, v, w ∈ V if and only if α1² = α1
and α2² = α2, that is, if and only if α1 = 0 or α1 = 1, and similarly for α2. However, if α1 = 0 or α2 = 0,
then no additive identity can exist. For suppose α1 = 0. Then u ⊕ v = (0, α2(u2 + v2)) for all u, v ∈ V,
and no z ∈ V can satisfy u ⊕ z = u if u1 ≠ 0. Similarly, if α2 = 0, then no additive identity can exist.
Therefore, if V is to be a vector space over R, then we must have α1 = β1 = α2 = β2 = 1, and V reduces to R2
under the usual vector space operations.
11. Suppose V is the set of all polynomials (over R) of degree exactly two, together with the zero polynomial.
Addition and scalar multiplication are defined on V in the usual fashion. Then V is not a vector space
over R because it is not closed under addition. For example, 1 + x + x² ∈ V, 1 + x − x² ∈ V, but
(1 + x + x²) + (1 + x − x²) = 2 + 2x ∉ V.
12. (a) We wish to find a function lying in C(0, 1) but not in C[0, 1]. A suitable function with a discontinuity
at one of the endpoints provides an example. For example, f (x) = 1/x satisfies f ∈ C(0, 1) and
f ∉ C[0, 1], as does f(x) = 1/(1 − x) or f(x) = 1/(x − x²). A different type of example is provided
by f (x) = sin (1/x).
(b) The function f(x) = |x| belongs to C[−1, 1] but not to C¹[−1, 1].
13. Let V be the space of all infinite sequences of real numbers, and define {xn } + {yn } = {xn + yn },
α{xn } = {αxn }. The proof that V is a vector space is a straightforward verification of the defining
properties, no different than for Rn , and will not be given here.
14. Let V be the set of all piecewise continuous functions f : [a, b] → R, with addition and scalar multi-
plication defined as usual for functions. We wish to show that V is a vector space over R. Most of
the properties of a vector space are automatically satisfied by V because it is a subset of the space of
all real-valued functions on [a, b], which is known to be a vector space. Specifically, commutativity and
associativity of addition, the associative property of scalar multiplication, the two distributive laws, and
the fact that 1 · u = u for all u ∈ V are all obviously satisfied. Moreover, the 0 function is continuous
and hence by definition piecewise continuous, and therefore 0 ∈ V . It remains only to show that V
is closed under addition and scalar multiplication (then, since −u = −1 · u for any function u, each
function u ∈ V must have an additive inverse in V). Let u ∈ V, α ∈ R, and suppose u has points of
discontinuity x1 < x2 < · · · < xk−1, where x1 > x0 = a and xk−1 < xk = b. Then u is continuous on
each interval (xi−1, xi), i = 1, 2, . . . , k, and therefore, by a simple theorem of calculus (any multiple of a
continuous function is continuous), αu is also continuous on each (xi−1, xi). The one-sided limits of αu
at x0, x1, . . . , xk exist since, for example,

lim(x→xi⁺) αu(x) = α lim(x→xi⁺) u(x)

(and similarly for left-hand limits). Therefore, αu is piecewise continuous and therefore αu ∈ V. Now
suppose u, v belong to V. Let {x1, x2, . . . , xℓ−1} be the union of the sets of points of discontinuity of u
and of v, ordered so that a = x0 < x1 < · · · < xℓ−1 < xℓ = b. Then, since both u and v are continuous
at all other points in (a, b), u + v is continuous on every interval (xi−1, xi). Also, at each xi, either
lim(x→xi) u(x) exists (if u is continuous at xi, that is, if xi is a point of discontinuity only for v), or the
one-sided limits lim(x→xi⁺) u(x) and lim(x→xi⁻) u(x) both exist. In the first case, the two one-sided limits
exist (and are equal), so in any case the two one-sided limits exist. The same is true for v. Thus, for each
xi, i = 0, 1, . . . , ℓ − 1,

lim(x→xi⁺) (u(x) + v(x)) = lim(x→xi⁺) u(x) + lim(x→xi⁺) v(x),

and similarly for the left-hand limits at x1, x2, . . . , xℓ. This shows that u + v is piecewise continuous, and
therefore belongs to V. This completes the proof.
15. Suppose U and V are vector spaces over a field F , and define addition and scalar multiplication on U × V
by (u, v) + (w, z) = (u + w, v + z), α(u, v) = (αu, αv). We wish to prove that U × V is a vector space
over F . In fact, the verifications of all the defining properties of a vector space are straightforward. For
instance, (u, v) + (w, z) = (u + w, v + z) = (w + u, z + v) = (w, z) + (u, v) (using the commutativity of
addition in U and V ), and therefore addition in U × V is commutative. Note that the additive identity
in U × V is (0, 0), where the first 0 is the zero vector in U and the second is the zero vector in V . We
will not verify the remaining properties here.

2.3 Subspaces
1. Let V be a vector space over F .
(a) Let S = {0}. Then 0 ∈ S, S is closed under addition since 0 + 0 = 0 ∈ S, and S is closed under
scalar multiplication since α · 0 = 0 ∈ S for all α ∈ F . Thus S is a subspace of V .
(b) The entire space V is a subspace of V since 0 ∈ V and V is closed under addition and scalar
multiplication by definition.

2. Suppose we adopt an alternate definition of subspace, in which “0 ∈ S” is replaced with “S is nonempty.”


We wish to show that the alternate definition is equivalent to the original definition. If S is a subspace
according to the original definition, then 0 ∈ S, and therefore S is nonempty. Hence S is a subspace
according to the alternate definition. Conversely, suppose S satisfies the alternate definition. Then S is
nonempty, so there exists x ∈ S. Since S is closed under scalar multiplication and addition, it follows that
−x = −1 · x ∈ S, and hence 0 = −x + x ∈ S. Therefore, S satisfies the original definition of subspace.
3. Let V be a vector space over R, and let v ∈ V be nonzero. We wish to prove that S = {0, v} is not a
subspace of V. If S were a subspace, then 2v would lie in S. But 2v ≠ 0 by Theorem 5, and 2v ≠ v
(since otherwise adding −v to both sides would imply that v = 0). Hence 2v ∉ S, and therefore S is not
a subspace of V.
4. We wish to determine which of the given subsets are subspaces of Z₂³. Notice that since Z2 = {0, 1}, if S
contains the zero vector, then it is automatically closed under scalar multiplication. Therefore, we need
only check whether the given subset contains (0, 0, 0) and is closed under addition.
(a) S = {(0, 0, 0), (1, 0, 0)}. This set contains (0, 0, 0) and is closed under addition since (0, 0, 0) +
(0, 0, 0) = (0, 0, 0), (0, 0, 0) + (1, 0, 0) = (1, 0, 0), and (1, 0, 0) + (1, 0, 0) = (0, 0, 0). Thus S is a
subspace.
(b) S = {(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)}. This set contains (0, 0, 0) and it can be verified that it is
closed under addition (for instance, (0, 1, 0) + (1, 0, 1) = (1, 1, 1), (1, 1, 1) + (0, 1, 0) = (1, 0, 1), etc.).
Thus S is a subspace.
(c) S = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)} is not a subspace because it is not closed under addition:
(1, 0, 0) + (0, 1, 0) = (1, 1, 0) ∉ S.

5. Suppose S is a subset of Z₂ⁿ. We wish to show that S is a subspace of Z₂ⁿ if and only if 0 ∈ S and S is
closed under addition. Of course, the “only if” direction is trivial. The other direction follows as in the
preceding exercise: If 0 ∈ S, then S is automatically closed under scalar multiplication, since 0 and 1 are
the only elements of the field Z2 , and 0 · v = 0 for all v ∈ S, 1 · v = v for all v ∈ S.
6. Define S = {x ∈ R2 : x1 ≥ 0, x2 ≥ 0}. Then S is not a subspace of R2, since it is not closed under
scalar multiplication. For instance, (1, 1) ∈ S but −1 · (1, 1) = (−1, −1) ∉ S.
7. Define S = {x ∈ R2 : ax1 + bx2 = 0}, where a, b ∈ R are constants. We will show that S is a subspace
of R2 . First, (0, 0) ∈ S, since a · 0 + b · 0 = 0. Next, suppose x ∈ S and α ∈ R. Then ax1 + bx2 = 0,
and therefore a(αx1 ) + b(αx2 ) = α(ax1 + bx2 ) = α · 0 = 0. This shows that αx ∈ S, and therefore S is
closed under scalar multiplication. Finally, suppose x, y ∈ S, so that ax1 + bx2 = 0 and ay1 + by2 = 0.
Then a(x1 + y1 ) + b(x2 + y2 ) = (ax1 + bx2 ) + (ay1 + by2 ) = 0 + 0 = 0, which shows that x + y ∈ S, and
therefore that S is closed under addition. This completes the proof.
8. (a) The set A = {x ∈ R2 : x1 = 0 or x2 = 0} is closed under scalar multiplication but not addition.
Closure under scalar multiplication holds since if x1 = 0, then (αx)1 = αx1 = α · 0 = 0, and similarly
for the second component. The set is not closed under addition; for instance, (1, 0), (0, 1) ∈ A, but
(1, 0) + (0, 1) = (1, 1) ∉ A.
(b) The set Q = {x ∈ R2 : x1 ≥ 0, x2 ≥ 0} is closed under addition but not scalar multiplication. Since
(1, 1) ∈ Q but −1 · (1, 1) = (−1, −1) ∉ Q, we see that Q is not closed under scalar multiplication.
On the other hand, if x, y ∈ Q, so that x1 , x2 , y1 , y2 ≥ 0, we see that (x + y)1 = x1 + y1 ≥ 0 + 0 = 0
and (x + y)2 = x2 + y2 ≥ 0 + 0 = 0. This shows that x + y ∈ Q, and therefore Q is closed under
addition.
9. Let V be a vector space over a field F, let u ∈ V, and define S = {αu : α ∈ F}. We will show that S
is a subspace of V. First, 0 ∈ S because 0 = 0 · u. Next, suppose x ∈ S and β ∈ F. Since x ∈ S, there
exists α ∈ F such that x = αu. Therefore, βx = β(αu) = (βα)u (using the associative property of scalar
multiplication), which shows that βx belongs to S. Thus S is closed under scalar multiplication. Finally,
suppose x, y ∈ S; then there exist α, β ∈ F such that x = αu, y = βu, and x + y = αu + βu = (α + β)u

by the second distributive property. Therefore S is closed under addition, and we have shown that S is
a subspace.

10. Let R be regarded as a vector space over R. We wish to prove that R has no proper subspaces. It suffices
to prove that if S is a nontrivial subspace of R, then S = R. So suppose S is a nontrivial subspace,
which means that there exists x ≠ 0 belonging to S. But then, given any y ∈ R, y = (yx⁻¹)x belongs to
S because S is closed under scalar multiplication. Thus R ⊂ S, and hence S = R.

11. We wish to describe all proper subspaces of R2. We claim that every nontrivial proper subspace of R2 has
the form {αx : α ∈ R}, where x ∈ R2 is a nonzero vector (geometrically, such a set is a line through the
origin). To prove this, let us suppose S is a nontrivial proper subspace of R2. Then there exists x ∈ S,
x ≠ 0. Since S is closed under scalar multiplication, every vector of the form αx, α ∈ R, must belong to S.
Therefore, S contains the set {αx : α ∈ R}. Let us suppose that there exists y ∈ S such that y cannot be
written as y = αx for some α ∈ R. In this case, we argue that every z ∈ R2 belongs to S, and hence S is
not a proper subspace of R2. To justify this conclusion, we first note that, since y is not a multiple of x,
x1y2 − x2y1 ≠ 0. Let z ∈ R2 be given and consider the equation αx + βy = z. It can be verified directly
that α = (y2z1 − y1z2)/(x1y2 − x2y1), β = (x1z2 − x2z1)/(x1y2 − x2y1) satisfy this equation, from which
it follows that z ∈ S (since S is closed under addition and scalar multiplication). Therefore, if S contains
any vector not lying in {αx : α ∈ R}, then S consists of all of R2, and S is not a proper subspace
of R2.

12. We wish to find a proper subspace of R3 that is not a plane. One such subspace is the x1 -axis: S = {x ∈
R3 : x2 = x3 = 0}. It is easy to verify that S is a subspace of R3 , and geometrically, S is a line.
More generally, using the result of Exercise 9, we can show that {αx : α ∈ R}, where x ≠ 0 is a given
vector, is a proper subspace of R3. Such a subspace represents a line through the origin.

13. Consider the subset Rn of Cn . Although Rn contains the zero vector and is closed under addition,
it is not closed under scalar multiplication, and hence is not a subspace of Cn . Here the scalars are
complex numbers (since Cn is a vector space over C), and, for example, (1, 0, . . . , 0) ∈ Rn , i ∈ C, and
i(1, 0, . . . , 0) = (i, 0, . . . , 0) does not belong to Rn .

14. Let S = {u ∈ C[a, b] : u(a) = u(b) = 0}. Then S is a subspace of C[a, b]. The zero function clearly
belongs to S. Suppose u ∈ S and α ∈ R. Then (αu)(a) = αu(a) = α · 0 = 0, and similarly (αu)(b) = 0.
It follows that αu ∈ S, and S is closed under scalar multiplication. If u, v ∈ S, then (u + v)(a) =
u(a) + v(a) = 0 + 0 = 0, and similarly (u + v)(b) = 0. Therefore S is closed under addition, and we have
shown that S is a subspace of C[a, b].

15. Let S = {u ∈ C[a, b] : u(a) = 1}. Then S is not a subspace of C[a, b] because the zero function does not
belong to S.
16. Let S = {u ∈ C[a, b] : ∫ₐᵇ u(x) dx = 0}. We will show that S is a subspace of C[a, b]. First, since the
integral of the zero function is zero, we see that the zero function belongs to S. Next, suppose u ∈ S
and α ∈ R. Then ∫ₐᵇ (αu)(x) dx = ∫ₐᵇ αu(x) dx = α ∫ₐᵇ u(x) dx = α · 0 = 0, and therefore αu ∈ S. Finally,
suppose u, v ∈ S. Then ∫ₐᵇ (u + v)(x) dx = ∫ₐᵇ (u(x) + v(x)) dx = ∫ₐᵇ u(x) dx + ∫ₐᵇ v(x) dx = 0 + 0 = 0. This
shows that u + v ∈ S, and we have proved that S is a subspace of C[a, b].

17. Let V be the vector space of all (infinite) sequences of real numbers.

(a) Define Z = {{xn} ∈ V : limn→∞ xn = 0}. Clearly the zero sequence converges to zero, and hence
belongs to Z. If {xn} ∈ Z and α ∈ R, then limn→∞ αxn = α limn→∞ xn = α · 0 = 0, which implies
that α{xn} = {αxn} belongs to Z, and therefore Z is closed under scalar multiplication. Now
suppose {xn}, {yn} both belong to Z. Then limn→∞ (xn + yn) = limn→∞ xn + limn→∞ yn = 0 + 0 = 0.
Therefore {xn} + {yn} = {xn + yn} belongs to Z, Z is closed under addition, and we have shown
that Z is a subspace of V.

(b) Define S = {{xn} ∈ V : ∑xn converges}, where all sums run from n = 1 to ∞. From calculus, we
know that if ∑xn converges, then so does ∑αxn = α∑xn for any α ∈ R. Similarly, if ∑xn and ∑yn
converge, then so does ∑(xn + yn) = ∑xn + ∑yn. Using these facts, it is straightforward to show that S
is closed under addition and scalar multiplication. Obviously the zero sequence belongs to S.

(c) Define L = {{xn} ∈ V : ∑xn² < ∞}. Here it is obvious that the zero sequence belongs to L
and that L is closed under scalar multiplication. To prove that L is closed under addition, notice
that, for any x, y ∈ R, (x − y)² ≥ 0 and (x + y)² ≥ 0 together imply that |xy| ≤ (x² + y²)/2. It
follows that (xn + yn)² = xn² + 2xnyn + yn² ≤ 2(xn² + yn²). Hence if ∑xn² and ∑yn² both converge,
then so does ∑(xn + yn)², with ∑(xn + yn)² ≤ 2∑xn² + 2∑yn². From
this we see that L is closed under addition, and thus L is a subspace.



By a common theorem of calculus, we know that if ∑xn converges, then limn→∞ xn = 0, and the
same is true if ∑xn² converges. Therefore, S and L are subspaces of Z. However, the converse of this
result is not true (if the sequence converges to zero, this does not imply that the corresponding series
converges). Therefore, S and L are proper subspaces of Z. We know that L is not a subspace of S; for
instance ∑(1/n²) converges, but ∑(1/n) does not, which shows that {1/n} belongs to L but not
to S. Also, S is not a subspace of L, since {(−1)ⁿ/√n} belongs to S (by the alternating series test) but
not to L.

18. Let V be a vector space over a field F , and let X and Y be subspaces of V .
(a) We will show that X ∩ Y is also a subspace of V . First of all, since 0 ∈ X and 0 ∈ Y , it follows
that 0 ∈ X ∩ Y . Next, suppose x ∈ X ∩ Y and α ∈ F . Then, by definition of intersection, x ∈ X
and x ∈ Y . Since X and Y are subspaces, both are closed under scalar multiplication and therefore
αx ∈ X and αx ∈ Y, from which it follows that αx ∈ X ∩ Y. Thus X ∩ Y is closed under scalar
multiplication. Finally, suppose x, y ∈ X ∩ Y . Then x, y ∈ X and x, y ∈ Y . Since X and Y are
closed under addition, we have x + y ∈ X and x + y ∈ Y , from which we see that x + y ∈ X ∩ Y .
Therefore, X ∩ Y is closed under addition, and we have proved that X ∩ Y is a subspace of V .
(b) It is not necessarily the case that X ∪ Y is a subspace of V. For instance, let V = R2, and define
X = {x ∈ R2 : x2 = 0}, Y = {x ∈ R2 : x1 = 0}. Then X ∪ Y is not closed under addition, and
hence is not a subspace of R2: (1, 0) ∈ X ⊂ X ∪ Y and (0, 1) ∈ Y ⊂ X ∪ Y; however,
(1, 0) + (0, 1) = (1, 1) ∉ X ∪ Y.
19. Let V be a vector space over a field F , and let S be a nonempty subset of V . Define T to be the
intersection of all subspaces of V that contain S.

(a) We wish to show that T is a subspace of V . First, 0 belongs to every subspace of V that contains S,
and therefore 0 belongs to the intersection T . Next, suppose x ∈ T and α ∈ F . Then x belongs to
every subspace of V containing S. Since each of these subspaces is closed under scalar multiplication,
it follows that αx also belongs to each subspace, and therefore αx ∈ T . Therefore, T is closed under
scalar multiplication. Finally, suppose x, y ∈ T . Then both x and y belong to every subspace of V
containing S. Since each subspace is closed under addition, it follows that x + y belongs to every
subspace of V containing S. Therefore x + y ∈ T , T is closed under addition, and we have shown
that T is a subspace.
(b) Now suppose U is any subspace of V containing S. Then U is one of the sets whose intersection
defines T , and therefore every element of T belongs to U by definition of intersection. It follows that
T ⊂ U . This means that T is the smallest subspace of V containing S.

20. Let V be a vector space over a field F , and let S, T be subspaces of V . Define S +T = {s+t : s ∈ S, t ∈ T }.

We wish to show that S + T is a subspace of V . First of all, 0 ∈ S and 0 ∈ T because S and T are
subspaces. Therefore, 0 = 0+0 ∈ S +T . Next, suppose x ∈ S +T and α ∈ F . Then, by definition of S +T ,
there exist s ∈ S, t ∈ T such that x = s + t. Since S and T are subspaces, they are closed under scalar
multiplication, and therefore αs ∈ S and αt ∈ T . It follows that αx = α(s + t) = αs + αt ∈ S + T . Thus
S + T is closed under scalar multiplication. Finally, suppose x, y ∈ S + T . Then there exist s1 , s2 ∈ S,
t1, t2 ∈ T such that x = s1 + t1, y = s2 + t2. Since S and T are closed under addition, we see that
s1 + s2 ∈ S, t1 + t2 ∈ T, and therefore x + y = (s1 + t1) + (s2 + t2) = (s1 + s2) + (t1 + t2) ∈ S + T. It
follows that S + T is closed under addition, and we have shown that S + T is a subspace of V.

2.4 Linear combinations and spanning sets
1. Write u1 = (−1, −2, 4, −2), u2 = (0, 1, −5, 4).
(a) With v = (−1, 0, −6, 6), the equation α1 u1 + α2 u2 = v has a (unique) solution: α1 = 1, α2 = 2.
This shows that v ∈ sp{u1 , u2 }.
(b) With v = (1, 1, 1, 1), the equation α1 u1 + α2 u2 = v has no solution, and therefore v ∉ sp{u1, u2}.
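Membership in sp{u1, u2} amounts to a small linear system, so it can be checked numerically. A sketch using numpy (an assumption, not part of the text): solve by least squares and test whether the residual vanishes.

```python
import numpy as np

u1, u2 = [-1, -2, 4, -2], [0, 1, -5, 4]
A = np.column_stack([u1, u2])  # columns are u1 and u2
for v in ([-1, 0, -6, 6], [1, 1, 1, 1]):
    alpha, *_ = np.linalg.lstsq(A, np.array(v, dtype=float), rcond=None)
    print(v, "in span:", bool(np.allclose(A @ alpha, v)), "alpha:", alpha.round(6))
    # expect: first vector in the span (alpha = (1, 2)), second not in the span
```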
2. Let S = sp{eˣ, e⁻ˣ} ⊂ C[0, 1].
(a) The function f(x) = cosh(x) belongs to S because cosh(x) = (1/2)eˣ + (1/2)e⁻ˣ.
(b) The function f(x) = 1 does not belong to S because there are no scalars α1, α2 satisfying α1eˣ +
α2e⁻ˣ = 1 for all x ∈ [0, 1]. To prove this, note that any solution α1, α2 would have to satisfy the
equations that result from substituting any three values of x from the interval [0, 1]. For instance,
if we choose x = 0, x = 1/2, x = 1, then we obtain the equations

α1 + α2 = 1,
α1e^(1/2) + α2e^(−1/2) = 1,
α1e + α2e⁻¹ = 1.

A direct calculation shows that this system is inconsistent. Therefore no solution α1, α2 exists, and
f ∉ S.
3. Let S = sp{1 + 2x + 3x², x − x²} ⊂ P2.
(a) There is a (unique) solution α1 = 2, α2 = 1 to α1(1 + 2x + 3x²) + α2(x − x²) = 2 + 5x + 5x².
Therefore, 2 + 5x + 5x² ∈ S.
(b) There is no solution α1, α2 to α1(1 + 2x + 3x²) + α2(x − x²) = 1 − x + x². Therefore, 1 − x + x² ∉ S.
4. Let u1 = (1 + i, i, 2), u2 = (1, 2i, 2 − i), and define S = sp{u1 , u2 } ⊂ C3 . The vector v = (2 + 3i, −2 +
2i, 5 + 2i) belongs to S because 2u1 + iu2 = v.
5. Let S = sp{(1, 2, 0, 1), (2, 0, 1, 2)} ⊂ Z₃⁴.
(a) The vector (1, 1, 1, 1) belongs to S because 2(1, 2, 0, 1) + (2, 0, 1, 2) = (1, 1, 1, 1).
(b) The vector (1, 0, 1, 1) does not belong to S because α1 (1, 2, 0, 1) + α2 (2, 0, 1, 2) = (1, 0, 1, 1) has no
solution.
6. Let S = sp{1 + x, x + x², 2 + x + x²} ⊂ P3(Z3).
(a) If p(x) = 1 + x + x², then 0(1 + x) + 2(x + x²) + 2(2 + x + x²) = p(x), and therefore p ∈ S.
(b) Let q(x) = x³. Recalling that P3(Z3) is a space of polynomial functions, we notice that q(0) = 0,
q(1) = 1, q(2) = 2, which means that q(x) = x for all x ∈ Z3. We have 1(1 + x) + 2(x + x²) + 1(2 + x
+ x²) = x = q(x), and therefore q ∈ S.
7. Let u = (1, 1, −1), v = (1, 0, 2) be vectors in R3 . We wish to show that S = sp{u, v} is a plane in R3 .
First note that if S = {x ∈ R3 : ax1 + bx2 + cx3 = 0}, then (taking x = u, x = v) we see that a, b, c
must satisfy a + b − c = 0, a + 2c = 0. One solution is a = 2, b = −3, c = −1. We will now prove
that S = {x ∈ R3 : 2x1 − 3x2 − x3 = 0}. First, suppose x ∈ S. Then there exist α, β ∈ R such that x
= αu+βv = α(1, 1, −1)+β(1, 0, 2) = (α+β, α, −α+2β), and 2x1 −3x2 −x3 = 2(α+β)−3α−(−α+2β) =
2α + 2β − 3α + α − 2β = 0. Therefore, x ∈ {x ∈ R3 : 2x1 − 3x2 − x3 = 0}. Conversely, suppose


x ∈ {x ∈ R3 : 2x1 − 3x2 − x3 = 0}. If we solve the equation αu + βv = x, we see that it has the solution
α = x2 , β = x1 −x2 , and therefore x ∈ S. (Notice that x2 (1, 1, −1)+(x1 −x2 )(1, 0, 2) = (x1 , x2 , 2x1 −3x2 ),
and the assumption 2x1 − 3x2 − x3 = 0 implies that 2x1 − 3x2 = x3 .) This completes the proof.
8. The previous exercise does not hold true for every choice of u, v ∈ R3 . For instance, if u = (1, 1, 1),
v = (2, 2, 2), then S = sp{u, v} is not a plane; in fact, S is easily seen to be the line passing through
(0, 0, 0) and (1, 1, 1).
9. Let v1 = (1, −2, 1, 2), v2 = (−1, 1, 2, 1), and v3 = (−7, 9, 8, 1) be vectors in R4 , and let S = sp{v1 , v2 , v3 }.
Suppose x ∈ S, say x = β1 v1 + β2 v2 + β3 v3 = (β1 − β2 − 7β3 , −2β1 + β2 + 9β3 , β1 + 2β2 + 8β3 , 2β1 + β2 + β3 ).
The equation α1 v1 + α2 v2 = (β1 − β2 − 7β3 , −2β1 + β2 + 9β3 , β1 + 2β2 + 8β3 , 2β1 + β2 + β3 ) has a unique
solution, namely, α1 = β1 − 2β3 , α2 = β2 + 5β3 . This shows that x is a linear combination of v1 , v2 alone.
Alternate solution: We can solve α1 v1 + α2 v2 + α3 v3 = 0 to obtain α1 = −2, α2 = 5, α3 = −1, which
means that −2v1 + 5v2 − v3 = 0 or v3 = −2v1 + 5v2 . Now suppose x ∈ S, say x = β1 v1 + β2 v2 + β3 v3 .
It follows that x = β1 v1 + β2 v2 + β3 (−2v1 + 5v2 ) = (β1 − 2β3 )v1 + (β2 + 5β3 )v2 , and therefore x can be
written as a linear combination of v1 and v2 alone.
10. Let u1 = (1, 1, 1), u2 = (1, −1, 1), u3 = (1, 0, 1), and define S1 = sp{u1 , u2 }, S2 = sp{u1 , u2 , u3 }.
We wish to prove that S1 = S2. We first note that if x ∈ S1, then there exist scalars α1, α2 such
that x = α1 u1 + α2 u2 . But then x can be written as x = α1 u1 + α2 u2 + 0 · u3 , which shows that
x is a linear combination of u1 , u2 , u3 , and hence x ∈ S2 . Conversely, suppose that x ∈ S2 , say x =
β1 u1 + β2 u2 + β3 u3 = (β1 + β2 + β3 , β1 − β2 , β1 + β2 + β3 ). We wish to show that x can be written as a
linear combination of u1 , u2 alone, that is, that there exist scalars α1 , α2 such that α1 u1 + α2 u2 = x. A
direct calculation shows that this equation has a unique solution, namely, α1 = β1 + β3 /2, α2 = β2 +β3 /2.
This shows that x ∈ sp{u1 , u2 } = S1 , and the proof is complete. (The second part of the proof can be
done as in the previous solution, by first showing that u3 = (1/2)u1 + (1/2)u2 .)
11. Let S = sp{(−1, −3, 3), (−1, −4, 3), (−1, −1, 4)} ⊂ R3. We wish to determine if S = R3 or if S is a proper
subspace of R3. Given an arbitrary x ∈ R3, we solve α1(−1, −3, 3) + α2(−1, −4, 3) + α3(−1, −1, 4) =
(x1, x2, x3) and find that there is a unique solution, namely, α1 = −13x1 + x2 − 3x3, α2 = 9x1 − x2 + 2x3,
α3 = 3x1 + x3. This shows that every x ∈ R3 lies in S, and therefore S = R3.
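This exercise and the next both reduce to a rank computation, sketched below with numpy (an assumption, not part of the text): three vectors span R3 exactly when the matrix having them as columns has rank 3.

```python
import numpy as np

A = np.array([[-1, -1, -1],
              [-3, -4, -1],
              [ 3,  3,  4]])   # columns: the spanning set of Exercise 11
B = np.array([[-1,  3,  1],
              [-5, 14,  4],
              [ 1, -4, -2]])   # columns: the spanning set of Exercise 12
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 3 2
```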
12. Let S = sp{(−1, −5, 1), (3, 14, −4), (1, 4, −2)}. Given an arbitrary x ∈ R3, if we try to solve α1(−1, −5, 1) +
α2(3, 14, −4) + α3(1, 4, −2) = (x1, x2, x3), we find that there is a solution if and only if 6x1 − x2 + x3 = 0.
Since not all x ∈ R3 satisfy this condition, S is a proper subspace of R3.
13. Let S = sp{1 − x, 2 − 2x + x², 1 − 3x²} ⊂ P2. We wish to determine if S is a proper subspace of P2.
Given any p ∈ P2, say p(x) = c0 + c1x + c2x², we try to solve α1(1 − x) + α2(2 − 2x + x²) + α3(1 − 3x²) =
c0 + c1x + c2x². We find that there is a unique solution, α1 = −6c0 − 7c1 − 2c2, α2 = 3c0 + 3c1 + c2,
α3 = c0 + c1. Therefore, each p ∈ P2 belongs to S, and therefore S = P2.
14. Suppose V is a vector space over a field F and S is a subspace of V . We wish to prove that u1 , . . . , uk ∈ S,
α1 , . . . , αk ∈ F imply that α1 u1 + · · · + αk uk ∈ S. We argue by induction on k. For k = 1, we have that
α1 u1 ∈ S because S is a subspace and therefore closed under scalar multiplication. Now suppose that, for
some k ≥ 2, α1 u1 + · · · + αk−1 uk−1 ∈ S for any u1 , . . . , uk−1 ∈ S, α1 , . . . , αk−1 ∈ F . Let u1 , . . . , uk ∈ S,
α1 , . . . , αk ∈ F be arbitrary. Then α1 u1 + . . . + αk uk = (α1 u1 + · · · + αk−1 uk−1 ) + αk uk . By the induction
hypothesis, α1 u1 + · · · + αk−1 uk−1 ∈ S, and αk uk ∈ S because S is closed under scalar multiplication.
But then α1 u1 + . . . + αk uk = (α1 u1 + · · · + αk−1 uk−1 ) + αk uk ∈ S because S is closed under addition.
Therefore, by induction, the result holds for all k ≥ 1, and the proof is complete.
15. Let V be a vector space over a field F, and let u ∈ V, u ≠ 0, α ∈ F. We wish to prove that sp{u} =
sp{u, αu}. First, if x ∈ sp{u}, then x = βu for some β ∈ F, in which case we can write x = βu + 0(αu),
which shows that x also belongs to sp{u, αu}. Conversely, if x ∈ sp{u, αu}, then there exist scalars
β, γ ∈ F such that x = βu + γ(αu). But then x = (β + γα)u, and therefore x ∈ sp{u}. Thus
sp{u} = sp{u, αu}.
18 CHAPTER 2. FIELDS AND VECTOR SPACES

16. Let V be a vector space over a field F, and suppose x, u1, . . . , uk, v1, . . . , vℓ are vectors in V. Assume
x ∈ sp{u1, . . . , uk} and uj ∈ sp{v1, . . . , vℓ} for j = 1, . . . , k. We wish to show that x ∈ sp{v1, . . . , vℓ}.
Since uj ∈ sp{v1, . . . , vℓ}, there exist scalars βj,1, . . . , βj,ℓ such that uj = βj,1v1 + · · · + βj,ℓvℓ. This
is true for each uj, j = 1, . . . , k. Also, x ∈ sp{u1, . . . , uk}, so there exist α1, . . . , αk ∈ F such that
x = α1u1 + · · · + αkuk. It follows that

x = α1(β1,1v1 + · · · + β1,ℓvℓ) + α2(β2,1v1 + · · · + β2,ℓvℓ) + · · · + αk(βk,1v1 + · · · + βk,ℓvℓ)
= α1β1,1v1 + · · · + α1β1,ℓvℓ + α2β2,1v1 + · · · + α2β2,ℓvℓ + · · · + αkβk,1v1 + · · · + αkβk,ℓvℓ
= (α1β1,1 + α2β2,1 + · · · + αkβk,1)v1 + (α1β1,2 + α2β2,2 + · · · + αkβk,2)v2 + · · · +
(α1β1,ℓ + α2β2,ℓ + · · · + αkβk,ℓ)vℓ.

This shows that x ∈ sp{v1, . . . , vℓ}.


17. (a) Let V be a vector space over R, and let u, v be any two vectors in V . We wish to prove that
sp{u, v} = sp{u + v, u − v}. We first suppose that x ∈ sp{u + v, u − v}, say x = α(u + v) + β(u − v).
Then x = αu + αv + βu − βv = (α + β)u + (α − β)v, which shows that x ∈ sp{u, v}. Conversely,
suppose that x ∈ sp{u, v}, say x = αu + βv. We notice that u = (1/2)(u + v) + (1/2)(u − v), and
v = (1/2)(u + v) − (1/2)(u − v). Therefore, x = α((1/2)(u + v) + (1/2)(u − v)) + β((1/2)(u + v) −
(1/2)(u − v)) = (α/2 + β/2)(u + v) + (α/2 − β/2)(u − v), which shows that x ∈ sp{u + v, u − v}.
Therefore, x ∈ sp{u, v} if and only if x ∈ sp{u + v, u − v}, and hence the two subspaces are equal.
(b) The result just proved does not necessarily hold if V is a vector space over an arbitrary field F.
More specifically, the first part of the proof is always valid, and therefore sp{u + v, u − v} ⊂ sp{u, v}
always holds. However, it is not always possible to write u and v in terms of u + v and u − v, and
therefore sp{u, v} ⊂ sp{u + v, u − v} need not hold. For example, if F = Z2, V = Z₂², u = (1, 0),
v = (0, 1), then we have u + v = (1, 1) and u − v = (1, 1) (since −1 = 1 in Z2). It follows that
sp{u + v, u − v} = {(0, 0), (1, 1)}, and hence u, v ∉ sp{u + v, u − v}, which in turn means that
sp{u, v} ⊄ sp{u + v, u − v}.

2.5 Linear independence


1. Let V be a vector space over a field F, and let u1, u2 ∈ V. We wish to prove that {u1, u2} is linearly
dependent if and only if one of these vectors is a multiple of the other. Suppose first that {u1, u2} is linearly
dependent. Then there exist scalars α1, α2, not both zero, such that α1u1 + α2u2 = 0. Suppose α1 ≠ 0;
then α1⁻¹ exists, and we have α1u1 + α2u2 = 0 ⇒ α1u1 = −α2u2 ⇒ u1 = −α1⁻¹α2u2. Therefore, in this
case, u1 is a multiple of u2. Similarly, if α2 ≠ 0, we can show that u2 is a multiple of u1. Conversely,
suppose one of u1, u2 is a multiple of the other, say u1 = αu2. We can then write u1 − αu2 = 0, or
1 · u1 + (−α)u2 = 0, which, since 1 ≠ 0, shows that {u1, u2} is linearly dependent. A similar proof shows
that if u2 is a multiple of u1, then {u1, u2} is linearly dependent. This completes the proof.
2. Let V be a vector space over a field F, and suppose v ∈ V. We wish to prove that {v} is linearly
independent if and only if v ≠ 0. First, if v ≠ 0, then αv = 0 implies that α = 0 by Theorem 5. It follows
that {v} is linearly independent if v ≠ 0. On the other hand, if v = 0, then 1 · v = 0, which shows that
{v} is linearly dependent (there is a nontrivial solution to αv = 0). Thus {v} is linearly independent if
and only if v ≠ 0.

3. Let V be a vector space over a field F, and let u1, . . . , un ∈ V. Suppose ui = 0 for some i, 1 ≤ i ≤ n,
and define scalars α1, . . . , αn ∈ F by αk = 0 if k ≠ i, αi = 1. Then α1u1 + · · · + αnun = 0 · u1 + · · · + 0 ·
ui−1 + 1 · 0 + 0 · ui+1 + · · · + 0 · un = 0, and hence there is a nontrivial solution to α1u1 + · · · + αnun = 0.
This shows that {u1, . . . , un} is linearly dependent.

4. Let V be a vector space over a field F, let {u1, . . . , uk} be a linearly independent subset of V, and
assume v ∈ V, v ∉ sp{u1, . . . , uk}. We wish to show that {u1, . . . , uk, v} is also linearly independent. We
argue by contradiction and assume that {u1, . . . , uk, v} is linearly dependent. Then there exist scalars
α1, . . . , αk, β, not all zero, such that α1u1 + · · · + αkuk + βv = 0. We now consider two cases. First, if β = 0,
then not all of α1, . . . , αk are zero, and we see that α1u1 + · · · + αkuk = α1u1 + · · · + αkuk + 0 · v = 0.
This contradicts the fact that {u1, . . . , uk} is linearly independent. Second, if β ≠ 0, then we can
solve α1u1 + · · · + αkuk + βv = 0 to obtain v = −β⁻¹α1u1 − · · · − β⁻¹αkuk, which contradicts the fact that
v ∉ sp{u1, . . . , uk}. Thus, in either case, we obtain a contradiction, and the proof is complete.

5. We wish to determine whether each of the following sets is linearly independent or not.

(a) The set {(1, 2), (1, −1)} ⊂ R2 is linearly independent by Exercise 1, since neither vector is a multiple
of the other.
(b) The set {(−1, −1, 4), (−4, −4, 17), (1, 1, −3)} is linearly dependent. Solving

α1 (−1, −1, 4) + α2 (−4, −4, 17) + α3 (1, 1, −3) = 0

shows that α1 = 5, α2 = −1, α3 = 1 is a nontrivial solution.


(c) The set {(−1, 3, −2), (3, −10, 7), (−1, 3, −1)} is linearly independent. Solving

α1 (−1, 3, −2) + α2 (3, −10, 7) + α3 (−1, 3, −1) = 0

shows that the only solution is α1 = α2 = α3 = 0.
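Each of these determinations is again a rank computation. A numpy sketch (our own illustration, not part of the text): a set of vectors is linearly independent exactly when the matrix with those vectors as columns has full column rank.

```python
import numpy as np

sets = {
    "(a)": [[1, 2], [1, -1]],
    "(b)": [[-1, -1, 4], [-4, -4, 17], [1, 1, -3]],
    "(c)": [[-1, 3, -2], [3, -10, 7], [-1, 3, -1]],
}
for label, vectors in sets.items():
    A = np.column_stack(vectors)
    print(label, "independent:", np.linalg.matrix_rank(A) == len(vectors))
    # expect: (a) True, (b) False, (c) True
```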

6. We wish to determine whether each of the following sets of polynomials is linearly independent or not.

(a) The set {1 − x², x + x², 3 + 3x − 4x²} ⊂ P2 is linearly independent since the only solution to
α1(1 − x²) + α2(x + x²) + α3(3 + 3x − 4x²) = 0 is α1 = α2 = α3 = 0.
(b) The set {1 + x², 4 + 3x² + 3x³, 3 − x + 10x³, 1 + 7x² − 18x³} ⊂ P3 is linearly dependent. Solving
α1(1 + x²) + α2(4 + 3x² + 3x³) + α3(3 − x + 10x³) + α4(1 + 7x² − 18x³) = 0 yields a nontrivial
solution α1 = −25, α2 = 6, α3 = 0, α4 = 1.

7. The set {eˣ, e⁻ˣ, cosh(x)} ⊂ C[0, 1] is linearly dependent since (1/2)eˣ + (1/2)e⁻ˣ − cosh(x) = 0 for all
x ∈ [0, 1].

8. The subset {(0, 1, 2), (1, 2, 0), (2, 0, 1)} of Z₃³ is linearly dependent because 1 · (0, 1, 2) + 1 · (1, 2, 0) + 1 ·
(2, 0, 1) = (0, 0, 0).

9. We wish to show that {1, x, x²} is linearly dependent in P2(Z2). The equation α1 · 1 + α2x + α3x² = 0
has the nontrivial solution α1 = 0, α2 = 1, α3 = 1. To verify this, we must simply verify that x + x² is
the zero function in P2(Z2). Substituting x = 0, we obtain 0 + 0² = 0 + 0 = 0, and with x = 1, we obtain
1 + 1² = 1 + 1 = 0.

10. The set {(i, 1, 2i), (1, 1+i, i), (1, 3+5i, −4+3i)} ⊂ C3 is linearly dependent, because α1 (i, 1, 2i)+ α2 (1, 1+
i, i) + α3 (1, 3 + 5i, −4 + 3i) = (0, 0, 0) has the nontrivial solution α1 = −2i, α2 = −3, α3 = 1.

11. We have already seen that {(3, 2, 2, 3), (3, 2, 1, 2), (3, 2, 0, 1)} ⊂ R4 is linearly dependent, because (3, 2, 2, 3)−
2(3, 2, 1, 2) + (3, 2, 0, 1) = (0, 0, 0, 0).

(a) We can solve this equation for any one of the vectors in terms of the other two; for instance,
(3, 2, 2, 3) = 2(3, 2, 1, 2) − (3, 2, 0, 1).
(b) We can show that (−3, −2, 2, 1) ∈ sp{(3, 2, 2, 3), (3, 2, 1, 2), (3, 2, 0, 1)} by solving α1 (3, 2, 2, 3) +
α2 (3, 2, 1, 2) + α3 (3, 2, 0, 1) = (−3, −2, 2, 1). One solution is (−3, −2, 2, 1) = 3(3, 2, 2, 3) − 4(3, 2, 1, 2).
Substituting (3, 2, 2, 3) = 2(3, 2, 1, 2) − (3, 2, 0, 1), we obtain another solution: (−3, −2, 2, 1) =
2(3, 2, 1, 2) − 3(3, 2, 0, 1).
12. We wish to show that {(−1, 1, 3), (1, −1, −2), (−3, 3, 13)} ⊂ R3 is linearly dependent by writing one of
the vectors as a linear combination of the others. We will try to solve for the third vector in terms of the
other two. (There is an element of trial and error involved here: Even if the three vectors form a linearly
dependent set, there is no guarantee that this will work; it could be, for instance, that the first two
vectors form a linearly dependent set and the third vector does not lie in the span of the first two.) Solving
α1 (−1, 1, 3) + α2(1, −1, −2) = (−3, 3, 13) yields a unique solution: (−3, 3, 13) = 7(−1, 1, 3) + 4(1, −1, −2).
This shows that the set is linearly dependent.
Alternate solution: We begin by solving α1 (−1, 1, 3) + α2 (1, −1, −2) + α3 (−3, 3, 13) = (0, 0, 0) to obtain
7(−1, 1, 3)+4(1, −1, −2)−(−3, 3, 13) = (0, 0, 0). We can easily solve this for the third vector: (−3, 3, 13) =
7(−1, 1, 3) + 4(1, −1, −2).
13. We wish to show that {p1 , p2 , p3 }, where p1 (x) = 1 − x2 , p2 (x) = 1 + x − 6x2 , p3 (x) = 3 − 2x2 ,
is linearly independent and spans P2 . We first verify that the set is linearly independent by solving
α1 (1 − x2 ) + α2 (1 + x − 6x2 ) + α3 (3 − 2x2 ) = 0. This equation is equivalent to the system α1 + α2 + 3α3 = 0,
α2 = 0, −α1 − 6α2 − 2α3 = 0, and a direct calculation shows that the only solution is α1 = α2 = α3 =
0. To show that the set spans P2 , we take an arbitrary p ∈ P2 , say p(x) = c0 + c1 x + c2 x2 , and
solve α1 (1 − x2 ) + α2 (1 + x − 6x2 ) + α3 (3 − 2x2 ) = c0 + c1 x + c2 x2 . This is equivalent to the system
α1 + α2 + 3α3 = c0 , α2 = c1 , −α1 − 6α2 − 2α3 = c2 . There is a unique solution: α1 = −2c0 − 16c1 − 3c2 ,
α2 = c1 , α3 = c0 + 5c1 + c2 . This shows that p ∈ sp{p1 , p2 , p3 }, and, since p was arbitrary, that {p1 , p2 , p3 }
spans all of P2 .
14. Let V be a vector space over a field F and let {u1 , . . . , uk } be a linearly independent subset of V . Suppose
u, v ∈ V , {u, v} is linearly independent, and u, v ∈ sp{u1 , . . . , uk }. We wish to determine whether
{u1 , . . . , uk , u, v} is necessarily linearly independent. In fact, this set need not be linearly independent.
For example, take V = R4 , F = R, k = 3, and u1 = (1, 0, 0, 0), u2 = (0, 1, 0, 0), u3 = (0, 0, 1, 0). With
u = (0, 0, 0, 1), v = (1, 1, 1, 1), we see immediately that {u, v} is linearly independent (neither vector is a
multiple of the other), and that neither u nor v belongs to sp{u1 , u2 , u3 }. Nevertheless, {u1 , u2 , u3 , u, v}
is linearly dependent because v = u1 + u2 + u3 + u.
15. Let V be a vector space over a field F , and suppose S and T are subspaces of V satisfying S ∩ T =
{0}. Suppose {s1 , . . . , sk } ⊂ S and {t1 , . . . , tℓ } ⊂ T are both linearly independent sets. We wish to
prove that {s1 , . . . , sk , t1 , . . . , tℓ } is linearly independent. Suppose scalars α1 , . . . , αk , β1 , . . . , βℓ satisfy
α1 s1 + · · · + αk sk + β1 t1 + · · · + βℓ tℓ = 0. We can rearrange this equation to read α1 s1 + · · · + αk sk =
−β1 t1 − · · · − βℓ tℓ . If v is the vector represented by these two expressions, then v ∈ S (since v is a
linear combination of s1 , . . . , sk ) and v ∈ T (since v is a linear combination of t1 , . . . , tℓ ). But the only
vector in S ∩ T is the zero vector, and hence α1 s1 + · · · + αk sk = 0, −β1 t1 − · · · − βℓ tℓ = 0. The
first equation implies that α1 = · · · = αk = 0 (since {s1 , . . . , sk } is linearly independent), while the
second equation implies that β1 = · · · = βℓ = 0 (since {t1 , . . . , tℓ } is linearly independent). Therefore,
α1 s1 + · · · + αk sk + β1 t1 + · · · + βℓ tℓ = 0 implies that all the scalars are zero, and hence {s1 , . . . , sk , t1 , . . . , tℓ }
is linearly independent.
16. Let V be a vector space over a field F , and let {u1 , . . . , uk } and {v1 , . . . , vℓ } be two linearly independent
subsets of V . We wish to find a condition that implies that {u1 , . . . , uk , v1 , . . . , vℓ } is linearly independent.
By the previous exercise, a sufficient condition for {u1 , . . . , uk , v1 , . . . , vℓ } to be linearly independent is
that S = sp{u1 , . . . , uk }, T = sp{v1 , . . . , vℓ } satisfy S ∩ T = {0}. We will prove that this condition is
also necessary. Suppose {u1 , . . . , uk } and {v1 , . . . , vℓ } are linearly independent subsets of V , and that
{u1 , . . . , uk , v1 , . . . , vℓ } is also linearly independent. Define S and T as above. If x ∈ S ∩ T , then there
exist scalars α1 , . . . , αk ∈ F such that x = α1 u1 + · · · + αk uk (since x ∈ S), and also scalars β1 , . . . , βℓ ∈ F
such that x = β1 v1 + · · · + βℓ vℓ (since x ∈ T ). But then α1 u1 + · · · + αk uk = β1 v1 + · · · + βℓ vℓ , which
implies that α1 u1 + · · · + αk uk − β1 v1 − · · · − βℓ vℓ = 0. Since {u1 , . . . , uk , v1 , . . . , vℓ } is linearly independent
by assumption, this implies that α1 = · · · = αk = β1 = · · · = βℓ = 0, which in turn shows that x = 0.
Therefore S ∩ T = {0}, and the proof is complete.
17. (a) Let V be a vector space over R, and suppose {x, y, z} is a linearly independent subset of V . We wish
to show that {x + y, y + z, x + z} is also linearly independent. Let α1 , α2 , α3 ∈ R satisfy
α1 (x + y) + α2 (y + z) + α3 (x + z) = 0. This equation is equivalent to (α1 + α3 )x + (α1 + α2 )y + (α2 + α3 )z = 0.
Since {x, y, z} is linearly independent, it follows that α1 + α3 = α1 + α2 = α2 + α3 = 0. This system
can be solved directly to show that α1 = α2 = α3 = 0, which proves that {x + y, y + z, x + z} is
linearly independent.
(b) We now show, by example, that the previous result is not necessarily true if V is a vector space
over some field F ≠ R. Let V = Z2^3 , and define x = (1, 0, 0), y = (0, 1, 0), and z = (0, 0, 1).
Obviously {x, y, z} is linearly independent. On the other hand, we have (x + y) + (y + z) + (x + z) =
(1, 1, 0) + (0, 1, 1) + (1, 0, 1) = (1 + 0 + 1, 1 + 1 + 0, 0 + 1 + 1) = (0, 0, 0), which shows that {x + y, y + z, x + z} is
linearly dependent.

18. Let U and V be vector spaces over a field F , and define W = U × V . Suppose {u1 , . . . , uk } ⊂ U and
{v1 , . . . , vℓ } ⊂ V are linearly independent. We wish to show that {(u1 , 0), . . . , (uk , 0), (0, v1 ), . . . , (0, vℓ )}
is also linearly independent. Suppose α1 , . . . , αk , β1 , . . . , βℓ ∈ F satisfy α1 (u1 , 0) + · · · + αk (uk , 0) +
β1 (0, v1 ) + · · · + βℓ (0, vℓ ) = (0, 0). This reduces to (α1 u1 + · · · + αk uk , β1 v1 + · · · + βℓ vℓ ) = (0, 0),
which holds if and only if α1 u1 + · · · + αk uk = 0 and β1 v1 + · · · + βℓ vℓ = 0. Since {u1 , . . . , uk } is
linearly independent, the first equation implies that α1 = · · · = αk = 0, and, since {v1 , . . . , vℓ } is linearly
independent, the second implies that β1 = · · · = βℓ = 0. Since all the scalars are necessarily zero, we see
that {(u1 , 0), . . . , (uk , 0), (0, v1 ), . . . , (0, vℓ )} is linearly independent.
19. Let V be a vector space over a field F , and let u1 , u2 , . . . , un be vectors in V . Suppose a nonempty subset
of {u1 , u2 , . . . , un }, say {ui1 , . . . , uik }, is linearly dependent. (Here 1 ≤ k < n and i1 , . . . , ik are distinct
integers each satisfying 1 ≤ ij ≤ n.) We wish to prove that {u1 , u2 , . . . , un } itself is linearly dependent. By
assumption, there exist scalars αi1 , . . . , αik ∈ F , not all zero, such that αi1 ui1 + · · · + αik uik = 0. For each i
∈ {1, . . . , n} \ {i1 , . . . , ik }, define αi = 0. Then we have α1 u1 + · · · + αn un = 0 + αi1 ui1 +· · · + αik uik = 0,
and not all of α1 , . . . , αn are zero since at least one αij is nonzero. This shows that {u1 , . . . , un } is linearly
dependent.
20. Let V be a vector space over a field F , and suppose {u1 , u2 , . . . , un } is a linearly independent subset of
V . We wish to prove that every nonempty subset of {u1 , u2 , . . . , un } is also linearly independent. The
result to be proved is simply the contrapositive of the statement in the previous exercise, and therefore
holds by the previous proof.
21. Let V be a vector space over a field F , and suppose {u1 , u2 , . . . , un } is linearly dependent. We wish to
prove that, given any i, 1 ≤ i ≤ n, either ui is a linear combination of u1 , . . . , ui−1 , ui+1 , . . . , un or these
vectors form a linearly dependent set. By assumption, there exist scalars α1 , . . . , αn ∈ F , not all zero, such
that α1 u1 + · · · + αi ui + · · · + αn un = 0. We now consider two cases. If αi ≠ 0, then we can solve the latter
equation for ui to obtain ui = −αi^{−1} α1 u1 − · · · − αi^{−1} αi−1 ui−1 − αi^{−1} αi+1 ui+1 − · · · − αi^{−1} αn un . In this case,

ui is a linear combination of the remaining vectors. The second case is that αi = 0, in which case at least
one of α1 , . . . , αi−1 , αi+1 , . . . , αn is nonzero, and we have α1 u1 +· · ·+αi−1 ui−1 +αi+1 ui+1 +· · ·+αn un = 0.
This shows that {u1 , . . . , ui−1 , ui+1 , . . . , un } is linearly dependent.

2.6 Basis and dimension


1. Suppose {v1 , v2 , . . . , vn } is a basis for a vector space V .
(a) We wish to show that if any vj is removed from the basis, the resulting set of n − 1 vectors does not
span V and hence is not a basis. This follows from Theorem 24: Since {v1 , v2 , . . . , vn } is linearly
independent, no vj , j = 1, 2, . . . , n, can be written as a linear combination of the remaining vectors.
Therefore,
vj ∉ sp{v1 , . . . , vj−1 , vj+1 , . . . , vn },
which proves the desired result.

(b) Now we wish to show that if any vector u ∈ V , u ∉ {v1 , v2 , . . . , vn }, is added to the basis, the
resulting set of n + 1 vectors is linearly dependent. This is immediate from Theorem 34: Since
the dimension of V is n, every set containing more than n vectors is linearly dependent. Since
{v1 , v2 , . . . , vn , u} contains n + 1 vectors, it must be linearly dependent.

2. Consider the following vectors in R3 : v1 = (−1, 4, −2), v2 = (5, −20, 9), v3 = (2, −7, 6). We wish to
determine if {v1 , v2 , v3 } is a basis for R3 . If we solve α1 v1 + α2 v2 + α3 v3 = x for an arbitrary x ∈ R3 , we
find a unique solution: α1 = 57x1 + 12x2 − 5x3 , α2 = 10x1 + 2x2 − x3 , α3 = 4x1 + x2 . By Theorem 28,
this implies that {v1 , v2 , v3 } is a basis for R3 .
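The three formulas are just the rows of the inverse of the matrix whose columns are v1 , v2 , v3 , so they can be checked mechanically; a sketch assuming NumPy is available:

    import numpy as np

    V = np.array([[-1,   5,  2],
                  [ 4, -20, -7],
                  [-2,   9,  6]])  # columns are v1, v2, v3

    # The rows of V^{-1} give alpha_1, alpha_2, alpha_3 in terms of x1, x2, x3.
    print(np.linalg.inv(V))
    # approximately [[57, 12, -5], [10, 2, -1], [4, 1, 0]], up to roundoff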

3. We now repeat the previous exercise for the vectors v1 = (−1, 3, −1), v2 = (1, −2, −2), v3 = (−1, 7, −13).
If we try to solve α1 v1 + α2 v2 + α3 v3 = x for an arbitrary x ∈ R3 , we find that this equation is equivalent
to the following system:

−α1 + α2 − α3 = x1
α2 + 4α3 = 3x1 + x2
0 = 8x1 + 3x2 + x3 .

Since this system is inconsistent for most x ∈ R3 (the system is consistent only if x happens to satisfy
8x1 + 3x2 + x3 = 0), {v1 , v2 , v3 } does not span R3 and therefore is not a basis.

4. Let S = sp{ex , e−x } be regarded as a subspace of C(R). We will show that {ex , e−x }, {cosh (x), sinh (x)}
are two different bases for S. First, to verify that {ex , e−x } is a basis, we merely need to verify that it is
linearly independent (since it spans S by definition). This can be done as follows: If c1 ex + c2 e−x = 0,
where 0 represents the zero function, then the equation must hold for all values of x ∈ R. So choose
x = 0 and x = ln 2; then c1 and c2 must satisfy

c1 + c2 = 0,
2c1 + (1/2)c2 = 0.
It is straightforward to show that the only solution of this system is c1 = c2 = 0, and hence {ex , e−x } is
linearly independent.
Next, since
cosh (x) = (1/2)ex + (1/2)e−x , sinh (x) = (1/2)ex − (1/2)e−x ,

we see that cosh (x), sinh (x) ∈ S = sp{ex , e−x }. We can verify that {cosh (x), sinh (x)} is linearly
independent directly: If c1 cosh (x) + c2 sinh (x) = 0, then, substituting x = 0 and x = ln 2, we obtain the
system
1 · c1 + 0 · c2 = 0, (5/4)c1 + (3/4)c2 = 0,

and the only solution is c1 = c2 = 0. Thus {cosh (x), sinh (x)} is linearly independent. Finally, let f be
any function in S. Then, by definition, f can be written as f (x) = α1 ex + α2 e−x for some α1 , α2 ∈ R.
We must show that f ∈ sp{cosh (x), sinh (x)}, that is, that there exist c1 , c2 ∈ R such that

c1 cosh (x) + c2 sinh (x) = f (x).

This equation can be manipulated as follows:

c1 cosh (x) + c2 sinh (x) = f (x)
⇔ c1 ((1/2)ex + (1/2)e−x ) + c2 ((1/2)ex − (1/2)e−x ) = α1 ex + α2 e−x
⇔ ((1/2)c1 + (1/2)c2 )ex + ((1/2)c1 − (1/2)c2 )e−x = α1 ex + α2 e−x .
Since {ex , e−x } is linearly independent, Theorem 26 implies that the last equation can hold only if
(1/2)c1 + (1/2)c2 = α1 , (1/2)c1 − (1/2)c2 = α2 .
This last system has a unique solution:

c1 = α1 + α2 , c2 = α1 − α2 .

This shows that f ∈ sp{cosh (x), sinh (x)}, and the proof is complete.
5. Let p1 (x) = 1 − 4x + 4x2, p2 (x) = x + x2 , p3 (x) = −2 + 11x − 6x2. We will determine whether {p1 , p2 , p3 }
is a basis for P2 or not by solving α1 p1 (x) + α2 p2 (x) + α3 p3 (x) = c0 + c1 x + c2 x2 , where c0 + c1 x + c2 x2
is an arbitrary element of P2 . A direct calculation shows that there is a unique solution for α1 , α2 , α3 :

α1 = 17c0 + 2c1 − 2c2 , α2 = 3c2 − 2c1 − 20c0 , α3 = 8c0 + c1 − c2 .

By Theorem 28, it follows that {p1 , p2 , p3 } is a basis for P2 .


6. Let p1 (x) = 1 − x2 , p2 (x) = 2 + x, p3 (x) = x + 2x2 . We will determine whether {p1 , p2 , p3 } a basis for P2
by trying to solve
α1 p1 (x) + α2 p2 (x) + α3 p3 (x) = c0 + c1 x + c2 x2 ,
where c0 , c1 , c2 are arbitrary real numbers. This equation is equivalent to the system

α1 + 2α2 = c0 ,
α 2 + α 3 = c1 ,
−α1 + 2α3 = c2 .

Solving this system by elimination leads to

α1 + 2α2 = c0 ,
α 2 + α 3 = c1 ,
0 = c0 − 2c1 + c2 ,
which is inconsistent for most values of c0 , c1 , c2 . Therefore, {p1 , p2 , p3 } does not span P2 and hence is
not a basis for P2 .
7. Consider the subspace S = sp{p1 , p2 , p3 , p4 , p5 } of P3 , where

p1 (x) = −1 + 4x − x2 + 3x3 , p2 (x) = 2 − 8x + 2x2 − 5x3 ,


p3 (x) = 3 − 11x + 3x2 − 8x3 , p4 (x) = −2 + 8x − 2x2 − 3x3 ,
p5 (x) = 2 − 8x + 2x2 + 3x3 .

(a) The set {p1 , p2 , p3 , p4 , p5 } is linearly dependent (by Theorem 34) because it contains five elements
and the dimension of P3 is only four.
(b) As illustrated in Example 39, we begin by solving

α1 p1 (x) + α2 p2 (x) + α3 p3 (x) + α4 p4 (x) + α5 p5 (x) = 0;

this is equivalent to the system

−α1 + 2α2 + 3α3 − 2α4 + 2α5 = 0,


4α1 − 8α2 − 11α3 + 8α4 − 8α5 = 0,
−α1 + 2α2 + 3α3 − 2α4 + 2α5 = 0,
3α1 − 5α2 − 8α3 − 3α4 + 3α5 = 0,
which reduces to

α1 = 16α4 − 16α5 ,
α2 = 9α4 − 9α5 ,
α3 = 0.

Since there are nontrivial solutions, {p1 , p2 , p3 , p4 , p5 } is linearly dependent (which we already knew),
but we can deduce more than that. By taking α4 = 1, α5 = 0, we see that α1 = 16, α2 = 9, α3 = 0,
α4 = 1, α5 = 0 is one solution, which means that

16p1 (x) + 9p2 (x) + p4 (x) = 0 ⇒ p4 (x) = −16p1 (x) − 9p2 (x).

This shows that p4 ∈ sp{p1 , p2 } ⊂ sp{p1 , p2 , p3 }. Similarly, taking α4 = 0, α5 = 1, we find that

−16p1 (x) − 9p2 (x) + p5 (x) = 0 ⇒ p5 (x) = 16p1 (x) + 9p2 (x),

and hence p5 ∈ sp{p1 , p2 } ⊂ sp{p1 , p2 , p3 }. It follows from Lemma 19 that sp{p1 , p2 , p3 , p4 , p5 } =


sp{p1 , p2 , p3 }. Our calculations above show that {p1 , p2 , p3 } is linearly independent (if α4 = α5 = 0,
then also α1 = α2 = α3 = 0). Therefore, {p1 , p2 , p3 } is a linearly independent spanning set of S and
hence a basis for S.

8. We wish to find a basis for sp{(1, 2, 1), (0, 1, 1), (1, 1, 0)} ⊂ R3 . We will name the vectors v1 , v2 , v3 ,
respectively, and begin by testing the linear independence of {v1 , v2 , v3 }. The equation α1 v1 + α2 v2 +
α3 v3 = 0 is equivalent to

α1 + α3 = 0,
2α1 + α2 + α3 = 0,
α1 + α2 = 0,
which reduces to
α1 = −α3 , α2 = α3 .
One solution is α1 = −1, α2 = 1, α3 = 1, which shows that −v1 + v2 + v3 = 0, or v3 = v1 − v2 . This
in turn shows that sp{v1 , v2 , v3 } = sp{v1 , v2 } (by Lemma 19). Clearly {v1 , v2 } is linearly independent
(since neither vector is a multiple of the other), and hence {v1 , v2 } is a basis for sp{v1 , v2 , v3 }.

9. We wish to find a basis for S = sp{(1, 2, 1, 2, 1), (1, 1, 2, 2, 1), (0, 1, 2, 0, 2)} in Z3^5 . The equation

α1 (1, 2, 1, 2, 1) + α2 (1, 1, 2, 2, 1) + α3 (0, 1, 2, 0, 2) = (0, 0, 0, 0, 0)

is equivalent to the system

α1 + α2 = 0,
2α1 + α2 + α3 = 0,
α1 + 2α2 + 2α3 = 0,
2α1 + 2α2 = 0,
α1 + α2 + 2α3 = 0.

Reducing this system by Gaussian elimination (in modulo 3 arithmetic), we obtain

α1 = α2 = α3 = 0,

which shows that the given vectors form a linearly independent set and therefore a basis for S.
10. We will show that {1 + x + x2 , 1 − x + x2 , 1 + x + 2x2 } is a basis for P2 (Z3 ) by showing that there is a
unique solution to
α1 (1 + x + x2 ) + α2 (1 − x + x2 ) + α3 (1 + x + 2x2 ) = c0 + c1 x + c2 x2 .
We first note that 1 − x + x2 = 1 + 2x + x2 in P2 (Z3 ), so we can write our equation as
α1 (1 + x + x2 ) + α2 (1 + 2x + x2 ) + α3 (1 + x + 2x2 ) = c0 + c1 x + c2 x2 .
We rearrange the previous equation in the form
(α1 + α2 + α3 ) + (α1 + 2α2 + α3 )x + (α1 + α2 + 2α3 )x2 = c0 + c1 x + c2 x2 .
Since the polynomials involved are of degree 2 and the field Z3 contains 3 elements, this last equation
is equivalent to the system
α 1 + α 2 + α 3 = c0 ,
α1 + 2α2 + α3 = c1 ,
α1 + α2 + 2α3 = c2
(cf. the discussion on page 45 of the text). Applying Gaussian elimination (modulo 3) shows that there
is a unique solution:
α1 = 2c1 + 2c2 ,
α2 = 2c0 + c1 ,
α3 = 2c0 + c2 .
This in turn proves (by Theorem 28) that the given polynomials form a basis for P2 (Z3 ).
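Because Z3 contains only 27 triples (c0 , c1 , c2 ), the solution formulas can also be verified exhaustively; a plain-Python sketch:

    from itertools import product

    for c0, c1, c2 in product(range(3), repeat=3):
        a1 = (2*c1 + 2*c2) % 3
        a2 = (2*c0 + c1) % 3
        a3 = (2*c0 + c2) % 3
        # Check the three equations of the system, modulo 3.
        assert (a1 + a2 + a3) % 3 == c0
        assert (a1 + 2*a2 + a3) % 3 == c1
        assert (a1 + a2 + 2*a3) % 3 == c2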
11. Suppose F is a finite field with q distinct elements.
(a) Assume n ≤ q − 1. We wish to show that {1, x, x2 , . . . , xn } is a linearly independent subset of Pn (F ).
(Since {1, x, x2 , . . . , xn } clearly spans Pn (F ), this will show that it is a basis for Pn (F ), and hence
that dim(Pn (F )) = n + 1 in the case that n ≤ q − 1.) The desired conclusion follows from the
discussion on page 45 of the text. If c0 · 1 + c1 x + · · · + cn xn = 0 (where 0 is the zero function), then
every element of F is a root of c0 · 1 + c1 x + · · · + cn xn . Since F contains more than n elements
and a nonzero polynomial of degree n can have at most n distinct roots, this is impossible unless
c0 = c1 = . . . = cn = 0. Thus {1, x, . . . , xn } is linearly independent.
(b) Now suppose that n ≥ q. The reasoning above shows that {1, x, x2 , . . . , xq−1 } is linearly independent
in Pn (F ) (c0 · 1 + c1 x + · · · + cq−1 xq−1 has at most q − 1 distinct roots, and F contains more than

q − 1 elements, etc.). This implies that dim (Pn (F )) ≥ q in the case n ≥ q.


12. Suppose V is a vector space over a field F , and S, T are two n-dimensional subspaces of V . We wish to
prove that if S ⊂ T , then in fact S = T . Let {s1 , s2 , . . . , sn } be a basis for S. Since S ⊂ T , this implies
that {s1 , s2 , . . . , sn } is a linearly independent subset of T . We will now show that {s1 , s2 , . . . , sn } also
spans T . Let t ∈ T be arbitrary. Since T has dimension n, the set {s1 , s2 , . . . , sn , t} is linearly dependent
by Theorem 34. But then, by Lemma 33, t must be a linear combination of s1 , s2 , . . . , sn (since no sk is a
linear combination of s1 , s2 , . . . , sk−1 ). This shows that t ∈ sp{s1 , s2 , . . . , sn }, and hence we have shown
that {s1 , s2 , . . . , sn } is a basis for T . But then
T = sp{s1 , s2 , . . . , sn } = S,
as desired.
13. Suppose V is a vector space over a field F , and S, T are two finite-dimensional subspaces of V with
S ⊂ T . We are asked to prove that dim(S) ≤ dim(T ). Let {s1 , s2 , . . . , sn } be a basis for S. Since S ⊂ T ,
it follows that {s1 , s2 , . . . , sn } is a linearly independent subset of T , and hence, by Theorem 34, any basis
for T must have at least n vectors. It follows that dim(T ) ≥ n = dim(S).
14. (Note: This exercise belongs in Section 2.7 since the most natural solution uses Theorem
43.) Let V be a vector space over a field F , and let S and T be finite-dimensional subspaces of V . We
wish to prove that
dim(S + T ) = dim(S) + dim(T ) − dim(S ∩ T ).
We know from Exercise 2.3.19 that S ∩ T is a subspace of V , and since it is a subset of S, dim(S ∩ T ) ≤
dim(S). Since S is finite-dimensional by assumption, it follows that S ∩ T is also finite-dimensional, and
therefore either S ∩ T = {0} or S ∩ T has a basis.
Suppose first that S ∩ T = {0}, so that dim(S ∩ T ) = 0. Let {s1 , s2 , . . . , sm } be a basis for S and
{t1 , t2 , . . . , tn } be a basis for T . We will show that {s1 , . . . , sm , t1 , . . . , tn } is a basis for S + T , from which
it follows that

dim(S + T ) = m + n = m + n − 0 = dim(S) + dim(T ) − dim(S ∩ T ).

The set {s1 , . . . , sm , t1 , . . . , tn } is linearly independent by Exercise 2.5.15. Given any v ∈ S + T , there
exist s ∈ S, t ∈ T such that v = s + t. But since s ∈ S, there exist scalars α1 , . . . , αm ∈ F such that
s = α1 s1 + · · · + αm sm . Similarly, since t ∈ T , there exist β1 , . . . , βn ∈ F such that t = β1 t1 + · · · + βn tn .
But then
v = s + t = α 1 s 1 + · · · + α m s m + β1 t 1 + · · · + βn tn ,
which shows that v ∈ sp{s1 , . . . , sm , t1 , . . . , tn }. Thus we have shown that {s1 , . . . , sm , t1 , . . . , tn } is a
basis for S + T , which completes the proof in the case that S ∩ T = {0}.
Now suppose S ∩ T is nontrivial, with basis {v1 , . . . , vk }. Since S ∩ T is a subset of S, {v1 , . . . , vk } is a lin-
early independent subset of S and hence, by Theorem 43, can be extended to a basis {v1 , . . . , vk , s1 , . . . , sp }
of S. Similarly, {v1 , . . . , vk } can be extended to a basis {v1 , . . . , vk , t1 , . . . , tq } of T . We will show that
{v1 , . . . , vk , s1 , . . . , sp , t1 , . . . , tq } is a basis of S + T . Then we will have

dim(S) = k + p, dim(T ) = k + q, dim(S ∩ T ) = k


and
dim(S + T ) = k + p + q = (k + p) + (k + q) − k = dim(S) + dim(T ) − dim(S ∩ T ),
as desired. First, suppose v ∈ S + T . Then, by definition of S + T , there exist s ∈ S and t ∈ T such that
v = s + t. Since {v1 , . . . , vk , s1 , . . . , sp } is a basis for S, there exist scalars α1 , . . . , αk , β1 , . . . , βp ∈ F such
that
s = α 1 v 1 + · · · + αk v k + β 1 s 1 + · · · + β p s p .
Similarly, there exist γ1 , . . . , γk , δ1 , . . . , δq ∈ F such that

t = γ1 v1 + · · · + γk vk + δ1 t1 + · · · + δq tq .
But then

v = s + t = α 1 v 1 + · · · + αk v k + β 1 s 1 + · · · + β p s p +
γ1 v1 + · · · + γk vk + δ1 s1 + · · · + δq sq
= (α1 + γ1 )v1 + · · · + (αk + γk )vk + β1 s1 + · · · + βp sp +
δ 1 t1 + · · · + δ q t q
∈ sp{v1 , . . . , vk , s1 , . . . , sp , t1 , . . . , tq }.

This shows that {v1 , . . . , vk , s1 , . . . , sp , t1 , . . . , tq } spans S+T . Now suppose α1 , . . . , αk , β1 , . . . , βp , γ1 , . . . , γq ∈


F satisfy
α1 v1 + · · · + αk vk + β1 s1 + · · · + βp sp + γ1 t1 + · · · + γq tq = 0.
This implies that
α1 v1 + · · · + αk vk + β1 s1 + · · · + βp sp = −γ1 t1 − · · · − γq tq .
The vector on the left belongs to S, while the vector on the right belongs to T ; hence both vectors (which
are really the same) belong to S ∩ T . But then −γ1 t1 − · · · − γq tq can be written in terms of the basis
{v1 , . . . , vk } of S ∩ T , say
−γ1 t1 − · · · − γq tq = δ1 v1 + · · · + δk vk .
But this gives two representations of the vector −γ1 t1 − · · · − γq tq ∈ T in terms of the basis

{v1 , . . . , vk , t1 , . . . , tq }.

Since each vector in T must be uniquely represented as a linear combination of the basis vectors, this is
possible only if γ1 = · · · = γq = δ1 = · · · = δk = 0. But then

α1 v1 + · · · + αk vk + β1 s1 + · · · + βp sp = 0,

and the linear independence of {v1 , . . . , vk , s1 , . . . , sp } implies that α1 = · · · = αk = β1 = · · · = βp = 0.


We have thus shown that
{v1 , . . . , vk , s1 , . . . , sp , t1 , . . . , tq }
is linearly independent, which completes the proof.
15. Let V be a vector space over a field F , and let S and T be finite-dimensional subspaces of V . Consider
the four subspaces
X1 = S, X2 = T, X3 = S + T, X4 = S ∩ T.
For every choice of i, j with 1 ≤ i < j ≤ 4, we wish to determine if dim(Xi ) ≤ dim(Xj ) or dim(Xi ) ≥ dim(Xj ) (or neither) must hold. First of all,
since S and T are arbitrary subspaces, it is obvious that there need be no particular relationship between
the dimensions of S and T . However, S ⊂ S + T since each s ∈ S can be written as s = s + 0 ∈ S + T
(0 ∈ T because every subspace contains the zero vector). Therefore, by Exercise 13, dim(S) ≤ dim(S +T ).
By the same reasoning, T ⊂ S + T and hence dim(T ) ≤ dim(S + T ). Next, S ∩ T ⊂ S, S ∩ T ⊂ T , and
hence dim(S ∩ T ) ≤ dim(S), dim(S ∩ T ) ≤ dim(T ). Finally, we have S ∩ T ⊂ S ⊂ S + T , and hence
dim(S ∩ T ) ≤ dim(S + T ).
16. Let V be a vector space over a field F , and suppose S and T are subspaces of V satisfying S ∩ T = {0}.
Suppose {s1 , s2 , . . . , sk } ⊂ S and {t1 , t2 , . . . , tℓ } ⊂ T are bases for S and T , respectively. We wish to
prove that
{s1 , s2 , . . . , sk , t1 , t2 , . . . , tℓ }
is a basis for S + T . This was done in the course of proving the result in Exercise 14.
17. Let U and V be vector spaces over a field F , and let {u1 , . . . , un } and {v1 , . . . , vm } be bases for U and
V , respectively. We are asked to prove that

{(u1 , 0), . . . , (un , 0), (0, v1 ), . . . , (0, vm )}

is a basis for U × V . First, let (u, v) be an arbitrary vector in U × V . Then u ∈ U and there exist
α1 , . . . , αn ∈ F such that u = α1 u1 + · · · + αn un . Similarly, v ∈ V and there exist β1 , . . . , βm ∈ F such
that v = β1 v1 + · · · + βm vm . It follows that

(u, v) = (u, 0) + (0, v)


= (α1 u1 + · · · + αn un , 0) + (0, β1 v1 + · · · + βm vm )
= α1 (u1 , 0) + · · · + αn (un , 0) + β1 (0, v1 ) + · · · + βm (0, vm ).

This shows that {(u1 , 0), . . . , (un , 0), (0, v1 ), . . . , (0, vm )} spans U × V . Next, suppose α1 , . . . , αn ∈ F ,
β1 , . . . , βm ∈ F satisfy

α1 (u1 , 0) + · · · + αn (un , 0) + β1 (0, v1 ) + · · · + βm (0, vm ) = 0.


Since the zero vector in U × V is (0, 0), this yields

(α1 u1 + · · · + αn un , 0) + (0, β1 v1 + · · · + βm vm ) = (0, 0),


which is equivalent to
α1 u1 + · · · + αn un = 0, β1 v1 + · · · + βm vm = 0.
Since both {u1 , . . . , un } and {v1 , . . . , vm } are linearly independent, it follows that α1 = · · · = αn = β1 =
· · · = βm = 0. This shows that {(u1 , 0), . . . , (un , 0), (0, v1 ), . . . , (0, vm )} is linearly independent, and the
proof is complete.
18. We will prove that the number of elements in a finite field must be pn , where p is a prime number and n
is a positive integer.
Let F be a finite field.
(a) Let p be the characteristic of F . Then

0, 1, 1 + 1, 1 + 1 + 1, . . . , 1 + 1 + · · · + 1

(p − 1 terms in the last sum) are distinct elements of F , while 1 + 1 + · · · + 1 (p terms) is 0. We


will write 2 = 1 + 1, 3 = 1 + 1 + 1, and so forth, thus labeling p distinct elements of F , namely,
0, 1, . . . , p − 1. We can then show that {0, 1, 2, . . . , p − 1} ⊂ F is a subfield of F isomorphic to Zp .
Writing out a formal proof is difficult, because the symbols 0, 1, 2, . . . , p − 1 have now three different
meanings (they are elements of Z, elements of Zp , and now elements of F ). For the purposes of this
proof, we will temporarily write 0F , 1F , . . . , (p − 1)F for the elements of F , 0Zp , 1Zp , . . . , (p − 1)Zp for
the elements of Zp , and 0, 1, . . . , p − 1 for the elements of Z. Let us define G = {0F , 1F , . . . , (p − 1)F }
and φ : G → Zp by φ(kF ) = kZp . If k + ℓ < p, then kF + ℓF is the sum of k + ℓ copies of 1F , which
is (k + ℓ)F by definition. Similarly, kZp + ℓZp = (k + ℓ)Zp by definition of addition in Zp . Therefore,

φ(kF + ℓF ) = φ((k + ℓ)F ) = (k + ℓ)Zp = kZp + ℓZp = φ(kF ) + φ(ℓF ).

On the other hand, if 0 ≤ k, ℓ ≤ p − 1 and k + ℓ ≥ p, then kF + ℓF is the sum of k + ℓ copies of 1F ,
which can be written (by the associative property of addition) as the sum of p copies of 1F plus the
sum of k + ℓ − p copies of 1F . This reduces to 0F + (k + ℓ − p)F = (k + ℓ − p)F . Similarly, by the
definition of addition in Zp , kZp + ℓZp = (k + ℓ − p)Zp , and therefore, in this case also, we see

φ(kF + ℓF ) = φ((k + ℓ − p)F ) = (k + ℓ − p)Zp = kZp + ℓZp = φ(kF ) + φ(ℓF ).

Therefore, φ preserves addition.


Now, by the distributive law, kF ℓF can be written as the sum of kℓ copies of 1F (a careful proof of
this would require induction). If kℓ = qp + r, where q ≥ 0 and 0 ≤ r ≤ p − 1, then, by the associative
property of addition, we can write kF ℓF as the sum of q + 1 sums, the first q of them consisting of
p copies of 1F (and thus each equalling 0F ) and the last consisting of r copies of 1F . It follows that
kF ℓF = rF . By definition of multiplication in Zp , we similarly have kZp ℓZp = rZp , and hence

φ(kF ℓF ) = φ(rF ) = rZp = kZp ℓZp = φ(kF )φ(ℓF ).

Therefore, φ also preserves multiplication, and we have shown that G and Zp are isomorphic as
fields.
(b) We now drop the subscripts and identify Zp with the subfield G of F . We wish to show that
F is a vector space over Zp . We already know that addition in F is commutative, associative, and
has an identity, and that each element of F has an additive inverse in F . The associative property
of scalar multiplication and the two distributive properties of scalar multiplication reduce to the
associative property of multiplication and the distributive property in F . Finally, 1 · u = u for all
u ∈ F since 1 is nothing more than the multiplicative identity in F . This verifies that F is a vector
space over Zp .
(c) Since F has only a finite number of elements, it must be a finite-dimensional vector space over Zp .
Let the dimension be n, and let {f1 , . . . , fn } be a basis for F . Then every element of F can be
written uniquely in the form
α 1 f1 + α 2 f 2 + · · · + α n f n , (2.1)
where α1 , α2 , . . . , αn ∈ Zp . Conversely, for each choice of α1 , α2 , . . . , αn in Zp , (2.1) defines an
element of F . Therefore, the number of elements of F is precisely the number of different ways to
choose α1 , α2 , . . . , αn ∈ Zp , which is pn .

2.7 Properties of bases


1. Consider the following vectors in R3 : v1 = (1, 5, 4), v2 = (1, 5, 3), v3 = (17, 85, 56), v4 = (1, 5, 2),
v5 = (3, 16, 13).
(a) We wish to show that {v1 , v2 , v3 , v4 , v5 } spans R3 . Given an arbitrary x ∈ R3 , the equation
α1 v 1 + α2 v 2 + α 3 v 3 + α 4 v 4 + α 5 v 5 = x
is equivalent to the system
α1 + α2 + 17α3 + α4 + 3α5 = x1 ,
5α1 + 5α2 + 85α3 + 5α4 + 16α5 = x2 ,
4α1 + 3α2 + 56α3 + 2α4 + 13α5 = x3 .
Applying Gaussian elimination, this system reduces to
α1 = 17x1 − 4x2 + x3 − 5α3 + α4 ,
α2 = x2 − x1 − x3 − 12α3 − 2α4 ,
α5 = x2 − 5x1 .
This shows that there are solutions regardless of the value of x; that is, each x ∈ R3 can be written
as a linear combination of v1 , v2 , v3 , v4 , v5 . Therefore, {v1 , v2 , v3 , v4 , v5 } spans R3 .
(b) Now we wish to find a subset of {v1 , v2 , v3 , v4 , v5 } that is a basis for R3 . According to the calculations
given above, each x ∈ R3 can be written as a linear combination of {v1 , v2 , v5 } (just take α3 = α4 = 0
in the system solved above). Since dim(R3 ) = 3, any three vectors spanning R3 form a basis for R3
(by Theorem 45). Hence {v1 , v2 , v5 } is a basis for R3 .
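Extracting such a subset mechanically amounts to locating the pivot columns of the matrix whose columns are v1 , . . . , v5 ; a sketch using SymPy (assuming it is available):

    import sympy as sp

    V = sp.Matrix([[1, 1, 17, 1,  3],
                   [5, 5, 85, 5, 16],
                   [4, 3, 56, 2, 13]])  # columns are v1, ..., v5
    _, pivots = V.rref()
    print(pivots)  # (0, 1, 4): the pivot columns v1, v2, v5 form a basis of R^3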
2. Consider the following vectors in R4 :
u1 = (1, 3, 5, −1), u2 = (1, 4, 9, 0), u3 = (4, 9, 7, −5).
(a) We wish to show that {u1 , u2 , u3 } is linearly independent, which we do by solving α1 u1 + α2 u2 +
α3 u3 = 0. This is equivalent to the system
α1 + α2 + 4α3 = 0,
3α1 + 4α2 + 9α3 = 0,
5α1 + 9α2 + 7α3 = 0,
−α1 − 5α3 = 0.
Applying Gaussian elimination, we find that the only solution is α1 = α2 = α3 = 0. Thus {u1 , u2 , u3 }
is linearly independent.
(b) Since sp{u1 , u2 , u3 } is a three-dimensional subspace of R4 , and hence a very small part of R4 , almost
every vector in R4 does not belong to sp{u1 , u2 , u3 } and hence would be a valid fourth vector for a
basis. We choose u4 = (0, 0, 0, 1), and test whether {u1 , u2 , u3 , u4 } is linearly independent. A direct
calculation shows that α1 u1 + α2 u2 + α3 u3 + α4 u4 = 0 has only the trivial solution, and hence (by
Theorem 45) {u1 , u2 , u3 , u4 } is a basis for R4 .
3. Let p1 (x) = 2 − 5x, p2 (x) = 2 − 5x + 4x2 .


(a) Obviously {p1 , p2 } is linearly independent, because neither polynomial is a multiple of the other.
(b) Now we wish to find a polynomial p3 ∈ P2 such that {p1 , p2 , p3 } is a basis for P2 . Since sp{p1 , p2 }
is a two-dimensional subspace of the three-dimensional space P2 , almost any polynomial will do; we
choose p3 (x) = 1. We then test for linear independence by solving c1 p1 (x) + c2 p2 (x) + c3 p3 (x) = 0.
This equation is equivalent to
(2c1 + 2c2 + c3 ) + (−5c1 − 5c2 )x + 4c2 x2 = 0,
which in turn is equivalent to the system
2c1 + 2c2 + c3 = 0,
−5c1 − 5c2 = 0,
4c2 = 0.
A direct calculation shows that the only solution is c1 = c2 = c3 = 0, and hence {p1 , p2 , p3 } is
linearly independent. It follows from Theorem 45 that {p1 , p2 , p3 } is a basis for P2 .
4. Define p1 , p2 , p3 , p4 , p5 ∈ P2 by
p1 (x) = x, p2 (x) = 1 + x, p3 (x) = 3 + 5x,
p4 (x) = 5 + 8x, p5 (x) = 3 + x − x2 .
(a) We first show that {p1 , p2 , p3 , p4 , p5 } spans P2 . Given an arbitrary q(x) = a0 + a1 x + a2 x2 in P2 ,
the equation c1 p1 (x) + c2 p2 (x) + c3 p3 (x) + c4 p4 (x) + c5 p5 (x) = q(x) is equivalent to
(c2 + 3c3 + 5c4 + 3c5 ) + (c1 + c2 + 5c3 + 8c4 + c5 )x − c5 x2 = a0 + a1 x + a2 x2 ,
and hence to the system
c2 + 3c3 + 5c4 + 3c5 = a0 ,
c1 + c2 + 5c3 + 8c4 + c5 = a1 ,
−c5 = a2 .
Applying Gaussian elimination, we obtain the reduced system
c1 = a1 − a0 − 2a2 − 2c3 − 3c4 ,
c2 = a0 + 3a2 − 3c3 − 5c4 ,
c5 = −a2 .
We see that, regardless of the values of a0 , a1 , a2 , there is a solution, and hence {p1 , p2 , p3 , p4 , p5 }
spans P2 .
(b) We now find a subset of {p1 , p2 , p3 , p4 , p5 } that forms a basis for P2 . From the above calculation,
we see that every q ∈ P2 can be written as a linear combination of p1 , p2 , p5 . Since dim(P2 ) = 3,
Theorem 45 implies that {p1 , p2 , p5 } is a basis for P2 .
5. Let u1 = (1, 4, 0, −5, 1), u2 = (1, 3, 0, −4, 0), u3 = (0, 4, 1, 1, 4) be vectors in R5 .
(a) To show that {u1 , u2 , u3 } is linearly independent, we solve the equation α1 u1 + α2 u2 + α3 u3 = 0,
which is equivalent to the system
α1 + α2 = 0,
4α1 + 3α2 + 4α3 = 0,
α3 = 0,
−5α1 − 4α2 + α3 = 0,
α1 + 4α3 = 0.
A direct calculation shows that this system has only the trivial solution.
(b) To extend {u1 , u2 , u3 } to a basis for R5 , we need two more vectors. We will try u4 = (0, 0, 0, 1, 0)
and u5 = (0, 0, 0, 0, 1). We solve α1 u1 + α2 u2 + α3 u3 + α4 u4 + α5 u5 = 0 and find that the only
solution is the trivial one. This implies that {u1 , u2 , u3 , u4 , u5 } is linearly independent and hence,
by Theorem 45, a basis for R5 .

6. Consider the following vectors in R5 :


u1 = (1, 2, 0, 1, 1), u2 = (−1, 3, 2, 1, −1), u3 = (1, 7, 2, 3, 1),
u4 = (1, −2, −1, 1, 1), u5 = (2, 10, 3, 6, 2).

Let S = sp{u1 , u2 , u3 , u4 , u5 }. We wish to find a subset of {u1 , u2 , u3 , u4 , u5 } that is a basis for S. We


let x be an arbitrary vector in R5 and solve α1 u1 + α2 u2 + α3 u3 + α4 u4 + α5 u5 = x. This equation is
equivalent to the system

α1 − α2 + α3 + α4 + 2α5 = x1 ,
2α1 + 3α2 + 7α3 − 2α4 + 10α5 = x2 ,
2α2 + 2α3 − α4 + 3α5 = x3 ,
α1 + α2 + 3α3 + α4 + 6α5 = x4 ,
α1 − α2 + α3 + α4 + 2α5 = x5 .

Applying Gaussian elimination, this system is equivalent to

α1 = (3/2)x1 + x3 − (1/2)x4 − 2α3 − 3α5 ,
α2 = −(1/2)x1 + (1/2)x4 − α3 − 2α5 ,
α4 = −x1 − x3 + x4 − α5 ,
0 = −(7/2)x1 + x2 − 4x3 + (3/2)x4 ,
0 = −x1 + x5 .

We see first of all that S is a proper subspace of R5 , since x ∉ S unless

−(7/2)x1 + x2 − 4x3 + (3/2)x4 = 0 and −x1 + x5 = 0.
We also see that any x ∈ S can be represented as a linear combination of u1 , u2 , u4 by taking α3 = α5 = 0 in
the above equations. Finally, it can be verified directly that {u1 , u2 , u4 } is linearly independent and hence
a basis for S.

7. Consider the following polynomials in P3 :

p1 (x) = 1 − 4x + x2 + x3 , p2 (x) = 3 − 11x + x2 + 4x3 ,


p3 (x) = −x + 2x2 − x3 , p4 (x) = −x2 + 2x3 ,
p5 (x) = 5 − 18x + 2x2 + 5x3 .
We wish to determine the dimension of S = sp{p1 , p2 , p3 , p4 , p5 }. We solve c1 p1 (x) + c2 p2 (x) + c3 p3 (x) +


c4 p4 (x) + c5 p5 (x) = 0 to determine the linear dependence relationships among these vectors (notice that
we already know that {p1 , p2 , p3 , p4 , p5 } is linearly dependent because the dimension of P3 is only 4).
This equation is equivalent to the system
c1 + 3c2 + 5c5 = 0,
−4c1 − 11c2 − c3 − 18c5 = 0,
c1 + c2 + 2c3 − c4 + 2c5 = 0,
2c1 + 4c2 − c3 + 2c4 + 5c5 = 0.

Applying Gaussian elimination yields

c1 = 0, c2 = −(5/3)c5 , c3 = (1/3)c5 , c4 = c5 .
Choosing c5 = 1, we see that

−(5/3)p2 (x) + (1/3)p3 (x) + p4 (x) + p5 (x) = 0.
We can solve this equation for p5 (x), which shows that p5 ∈ sp{p2 , p3 , p4 } ⊂ sp{p1 , p2 , p3 , p4 }. Therefore, S
= sp{p1 , p2 , p3 , p4 }. The above calculations also show that the only solution of c1 p1 (x) + c2 p2 (x) +
c3 p3 (x) + c4 p4 (x) + c5 p5 (x) = 0 with c5 = 0 is the trivial solution, that is, the only solution of c1 p1 (x) +
c2 p2 (x) + c3 p3 (x) + c4 p4 (x) = 0 is the trivial solution. Thus {p1 , p2 , p3 , p4 } is linearly independent and
hence a basis for S. (This also shows that S is four-dimensional and hence equals all of P3 .)

8. Let S = sp{v1 , v2 , v3 , v4 } ⊂ C3 , where

v1 = (1 − i, 3 + i, 1 + i), v2 = (1, 1 − i, 3),


v3 = (i, −2 − 2i, 2 − i), v4 = (2 − i, 7 + 3i, 2 + 5i).

We will find a basis for S. We begin by solving α1 v1 + α2 v2 + α3 v3 + α4 v4 = 0, which is equivalent to


the system

(1 − i)α1 + α2 + iα3 + (2 − i)α4 = 0,


(3 + i)α1 + (1 − i)α2 + (−2 − 2i)α3 + (7 + 3i)α4 = 0,
(1 + i)α1 + 3α2 + (2 − i)α3 + (2 + 5i)α4 = 0.

Applying Gaussian elimination yields

α1 = α3 − 2α4 , α2 = −α3 − iα4 .

Taking α3 = 1, α4 = 0, we see that v3 can be written as a linear combination of v1 , v2 , and taking α3 = 0,


α4 = 1, we see that v4 can be written as a linear combination of v1 , v2 . Thus both v3 and v4 belong to
sp{v1 , v2 }, which shows that S = sp{v1 , v2 }. Since neither v1 nor v2 is a multiple of the other, we see
that {v1 , v2 } is linearly independent and hence is a basis for S.

9. Consider the vectors u1 = (3, 1, 0, 4) and u2 = (1, 1, 1, 4) in Z5^4 .

(a) It is obvious that {u1 , u2 } is linearly independent, since neither vector is a multiple of the other.
(b) To extend {u1 , u2 } to a basis for Z5^4 , we must find vectors u3 , u4 such that {u1 , u2 , u3 , u4 } is linearly
independent. We try u3 = (0, 0, 1, 0) and u4 = (0, 0, 0, 1). A direct calculation then shows that
α1 u1 + α2 u2 + α3 u3 + α4 u4 = 0 has only the trivial solution. Therefore {u1 , u2 , u3 , u4 } is linearly
independent and hence, since dim(Z5^4 ) = 4, it is a basis for Z5^4 .
10. Let S = sp{v1 , v2 , v3 } ⊂ Z3^3 , where

v1 = (1, 2, 1), v2 = (2, 1, 2), v3 = (1, 0, 1).

We wish to find a subset of {v1 , v2 , v3 } that is a basis for S. As usual, we proceed by solving α1 v1 +
α2 v2 + α3 v3 = 0 to find the linear dependence relationships (if any). This equation is equivalent to the
system

α1 + 2α2 + α3 = 0,
2α1 + α2 = 0,
α1 + 2α2 + α3 = 0,

which reduces, by Gaussian elimination, to

α1 = α2 , α3 = 0.

It follows that v1 + v2 = 0, or v2 = 2v1 (notice that −1 = 2 in Z3 ). Therefore sp{v1 , v3 } = sp{v1 , v2 , v3 } =


S. It is obvious that {v1 , v3 } is linearly independent (since neither vector is a multiple of the other), and
therefore {v1 , v3 } is a basis for S.
11. Let F be a field. We will show how to produce different bases for a nontrivial, finite-dimensional vector
space V over F .
(a) Let V be a 1-dimensional vector space over F , and let {u1 } be a basis for V . Then {αu1 } is a basis
for V for any α ≠ 0. To prove this, we first note that u1 is nonzero since {u1 } is linearly independent
by assumption; therefore α1 (αu1 ) = 0 implies that (α1 α)u1 = 0 and hence (by Theorem 5, part 6)
that α1 α = 0. Since α ≠ 0, this in turn yields α1 = 0, and hence {αu1 } is linearly independent.
Now suppose v ∈ V . Since {u1 } is a basis for V , there exists β ∈ F such that βu1 = v. But then
βα−1 (αu1 ) = v, which shows that {αu1 } spans V . Thus {αu1 } is a basis for V .
(b) Now let V be a 2-dimensional vector space over F , and let {u1 , u2 } be a basis for V . We wish to prove
that {αu1 , βu1 + γu2 } is a basis for V for any α ≠ 0, γ ≠ 0. First, suppose α1 (αu1 ) + α2 (βu1 + γu2 ) =
0. We can rewrite this equation as (αα1 + βα2 )u1 + (γα2 )u2 = 0 and, since {u1 , u2 } is linearly
independent, this implies that

αα1 + βα2 = 0,
γα2 = 0.

Since γ ≠ 0 by assumption, the second equation implies that α2 = 0. Then the first equation
simplifies to αα1 = 0, which implies (since α ≠ 0) that α1 = 0. This shows that {αu1 , βu1 + γu2 }
is linearly independent.
Now suppose v ∈ V . Since {u1 , u2 } is a basis for V , there exist β1 , β2 ∈ F such that v = β1 u1 + β2 u2 .
We wish to find α1 , α2 ∈ F such that α1 (αu1 ) + α2 (βu1 + γu2 ) = v. We can rearrange this last
equation to read (αα1 + βα2 )u1 + (γα2 )u2 = v, which then yields (αα1 + βα2 )u1 + (γα2 )u2 = β1 u1 + β2 u2 .
Since v has a unique representation as a linear combination of the basis vectors u1 , u2 , it follows
that

αα1 + βα2 = β1 ,
γα2 = β2 .

This system can be solved to yield a unique solution:

α1 = α−1 (β1 − βγ −1 β2 ), α2 = γ −1 β2 .

This shows that v ∈ sp{αu1 , βu1 + γu2 }, and hence that {αu1 , βu1 + γu2 } spans V . Therefore,
{αu1 , βu1 + γu2 } is a basis for V .
(c) Let V be a vector space over F with basis {u1 , . . . , un }. We wish to generalize the previous parts of
this exercise to show how to produce a collection of different bases for V . We choose any scalars

αij , i = 1, 2, . . . , n, j = 1, 2, . . . , i,

with αii ≠ 0 for all i = 1, 2, . . . , n. Then the set

{α11 u1 , α21 u1 + α22 u2 , . . . , αn1 u1 + αn2 u2 + · · · + αnn un } (2.2)

is a basis for V . We can prove this by induction on n. We have already done the case n = 1 (and
also n = 2). Let us assume that the construction leads to a basis if the dimension of the vector
space is n − 1, and suppose that V has dimension n, with basis {u1 , u2 , . . . , un }. By the induction
hypothesis,
{α11 u1 , α21 u1 + α22 u2 , . . . , αn−1,1 u1 + · · · + αn−1,n−1 un−1 } (2.3)
is a basis for S = sp{u1 , u2 , . . . , un−1 }. We now show that (2.2) is a basis for V by showing that
each v ∈ V can be uniquely represented as a linear combination of the vectors in (2.2). So let v be
any vector in V , say
v = β1 u 1 + β2 u 2 + · · · + β n u n .
Notice that

βn un = βn αnn^{−1} (αn1 u1 + · · · + αnn un ) − (βn αnn^{−1} αn1 u1 + · · · + βn αnn^{−1} αn,n−1 un−1 ).

The vector

βn αnn^{−1} αn1 u1 + · · · + βn αnn^{−1} αn,n−1 un−1

belongs to S and, by the induction hypothesis, can be written uniquely as

γ1 (α11 u1 ) + γ2 (α21 u1 + α22 u2 ) + · · · + γn−1 (αn−1,1 u1 + · · · + αn−1,n−1 un−1 ).


Also,

β1 u1 + · · · + βn−1 un−1
= δ1 (α11 u1 ) + δ2 (α21 u1 + α22 u2 ) + · · · +
δn−1 (αn−1,1 u1 + · · · + αn−1,n−1 un−1 ).

Putting this all together, we obtain

v = β1 u1 + · · · + βn−1 un−1 + βn un
= δ1 (α11 u1 ) + δ2 (α21 u1 + α22 u2 ) + · · · + δn−1 (αn−1,1 u1 + · · · + αn−1,n−1 un−1 )
+ βn αnn^{−1} (αn1 u1 + · · · + αnn un ) − (γ1 (α11 u1 ) + γ2 (α21 u1 + α22 u2 ) + · · · + γn−1 (αn−1,1 u1 + · · · + αn−1,n−1 un−1 ))
= (δ1 − γ1 )(α11 u1 ) + (δ2 − γ2 )(α21 u1 + α22 u2 ) + · · · + (δn−1 − γn−1 )(αn−1,1 u1 + · · · + αn−1,n−1 un−1 )
+ βn αnn^{−1} (αn1 u1 + · · · + αnn un ).

This shows that each v can be written as a linear combination of the vectors in (2.2). Uniqueness
follows from the induction hypothesis and the fact that there is only one way to write βn un as a
multiple of αn1 u1 + · · · + αnn un plus a vector from S.
12. We wish to prove that every nontrivial subspace of a finite-dimensional vector space has a basis (and
hence is finite-dimensional). Let V be a finite-dimensional vector space, let dim(V ) = n, and suppose S
is a nontrivial subspace of V . Since S is nontrivial, it contains a nonzero vector v1 . Then either {v1 }
spans S, or there exists v2 ∈ S \ sp{v1 }. In the first case, {v1 } is a basis for S (since any set containing
a single nonzero vector is linearly independent by Exercise 2.5.2). In the second case, {v1 , v2 } is linearly
independent by Exercise 2.5.4. Either {v1 , v2 } spans S, in which case it is a basis for S, or we can find
v3 ∈ S \ sp{v1 , v2 }. We continue to add vectors in this fashion until we obtain a basis for S. We know
that the process will end with a linearly independent spanning set for S, containing at most n vectors,
because S ⊂ V , and any linearly independent set with n vectors spans all of V by Theorem 45.
13. Let F be a finite field with q distinct elements, and let n be positive integer, n ≥ q. We wish to prove
that dim (Pn (F )) = q by showing that {1, x, . . . , xq−1 } is a basis for Pn (F ). We have already seen (in
Exercise 2.6.11) that {1, x, . . . , xq−1 } is linearly independent. Let α1 , α2 , . . . , αq denote the distinct
elements of F , and consider the following vectors in F q :

v1 = (1, 1, . . . , 1), v2 = (α1 , α2 , . . . , αq ), v3 = (α1^2 , α2^2 , . . . , αq^2 ), . . . , vq = (α1^{q−1} , α2^{q−1} , . . . , αq^{q−1} ).

The equation c1 v1 + c2 v2 + · · · + cq vq = 0 is equivalent to the q equations

c1 · 1 + c2 αi + · · · + cq αi^{q−1} = 0, i = 1, 2, . . . , q,

which collectively are equivalent to the statement

c1 · 1 + c2 x + · · · + cq xq−1 = 0 for all x ∈ F.

Since {1, x, . . . , xq−1 } is linearly independent, this equation implies that c1 = c2 = · · · = cq = 0, and
hence we have shown that {v1 , v2 , . . . , vq } is linearly independent in F q . Since we know that dim(F q ) = q,
Theorem 45 implies that {v1 , v2 , . . . , vq } also spans F q . Now let p be any polynomial in Pn (F ), and define
u = (p(α1 ), p(α2 ), . . . , p(αq )) ∈ F q .

Since {v1 , v2 , . . . , vq } spans F q , there exist scalars c1 , c2 , . . . , cq ∈ F such that c1 v1 + c2 v2 + · · · + cq vq = u.


This last equation is equivalent to
c1 · 1 + c2 αi + · · · + cq αi^{q−1} = p(αi ), i = 1, 2, . . . , q,

and hence to
c1 · 1 + c2 x + · · · + cq xq−1 = p(x) for all x ∈ F.
This shows that p ∈ sp{1, x, . . . , xq−1 }, and hence that {1, x, . . . , xq−1 } is a basis for Pn (F ). Thus
dim(Pn (F )) = q, as desired.
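For a concrete instance, take F = Z5 , so q = 5; the q × q matrix whose rows are (1, αi , αi^2 , αi^3 , αi^4 ) then has nonzero determinant modulo 5, confirming that {v1 , . . . , vq } is linearly independent. A sketch using SymPy:

    import sympy as sp

    q = 5  # the field Z_5, whose distinct elements are 0, 1, 2, 3, 4
    M = sp.Matrix([[pow(a, j, q) for j in range(q)] for a in range(q)])
    print(M.det() % q)  # 3, which is nonzero mod 5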
14. Let V be an n-dimensional vector space over a field F , and suppose S and T are subspaces of V satisfying
S ∩ T = {0}. Suppose that {s1 , s2 , . . . , sk } is a basis for S, {t1 , t2 , . . . , tℓ } is a basis for T , and k + ℓ = n.
We wish to prove that {s1 , s2 , . . . , sk , t1 , t2 , . . . , tℓ } is a basis for V . This follows immediately from
Theorem 45, since we have already shown in Exercise 2.5.15 that {s1 , s2 , . . . , sk , t1 , t2 , . . . , tℓ } is linearly
independent.
15. Let V be a vector space over a field F , and let {u1 , . . . , un } be a basis for V . Let v1 , . . . , vk be vectors in
V , and suppose
vj = α1,j u1 + . . . + αn,j un , j = 1, 2, . . . , k.
Define the vectors x1 , . . . , xk in F n by


xj = (α1,j , . . . , αn,j ), j = 1, 2, . . . , k.
(a) We first prove that {v1 , . . . , vk } is linearly independent if and only if {x1 , . . . , xk } is linearly indepen-
dent. We will do this by showing that c1 v1 +· · ·+ck vk = 0 in V is equivalent to c1 x1 +· · ·+ck xk = 0 in
F n . Then the first equation has only the trivial solution if and only if the second equation does, and
the result follows. The proof is a direct manipulation, for which summation notation is convenient:
∑_{j=1}^{k} cj vj = 0 ⇔ ∑_{j=1}^{k} cj ( ∑_{i=1}^{n} αij ui ) = 0
⇔ ∑_{j=1}^{k} ∑_{i=1}^{n} cj αij ui = 0
⇔ ∑_{i=1}^{n} ∑_{j=1}^{k} cj αij ui = 0
⇔ ∑_{i=1}^{n} ( ∑_{j=1}^{k} cj αij ) ui = 0.

Since {u1 , . . . , un } is linearly independent, the last equation is equivalent to


∑_{j=1}^{k} cj αij = 0, i = 1, 2, . . . , n,

which, by definition of xj and of addition in F n , is equivalent to


∑_{j=1}^{k} cj xj = 0.

This completes the proof.

(b) Now we show that {v1 , . . . , vk } spans V if and only if {x1 , . . . , xk } spans F n . Since each vector in V
can be represented uniquely as a linear combination of u1 , . . . , un , there is a one-to-one correspon-
dence between V and F n :
w = c1 u1 + · · · + cn un ∈ V ←→ x = (c1 , . . . , cn ) ∈ F n .
Mimicking the manipulations in the first part of the exercise, we see that
∑_{j=1}^{k} cj vj = w ⇔ ∑_{j=1}^{k} cj xj = x.

Thus the first equation has a solution for every w ∈ V if and only if the second equation has a
solution for every x ∈ F n . The result follows.
16. Consider the polynomials p1 (x) = −1 + 3x + 2x2 , p2 (x) = 3 − 8x − 4x2 , and p3 (x) = −1 + 4x + 5x2 in
P2 . We wish to use the result of the previous exercise to determine if {p1 , p2 , p3 } is linearly independent.
The standard basis for P2 is {u1 , u2 , u3 } = {1, x, x2 }. In terms of this basis, p1 , p2 , p3 correspond to
x1 = (−1, 3, 2), x2 = (3, −8, −4), x3 = (−1, 4, 5) ∈ R3 ,

respectively. A direct calculation shows that c1 x1 +c2 x2 +c3 x3 = 0 has only the trivial solution. Therefore
{x1 , x2 , x3 } and {p1 , p2 , p3 } are both linearly independent.
2.8 Polynomial interpolation and the Lagrange basis


1. (a) The Lagrange polynomials for the interpolation nodes x0 = 1, x1 = 2, x2 = 3 are

L0 (x) = (x − 2)(x − 3)/((1 − 2)(1 − 3)) = (1/2)(x − 2)(x − 3),
L1 (x) = (x − 1)(x − 3)/((2 − 1)(2 − 3)) = −(x − 1)(x − 3),
L2 (x) = (x − 1)(x − 2)/((3 − 1)(3 − 2)) = (1/2)(x − 1)(x − 2).

(b) The quadratic polynomial interpolating (1, 0), (2, 2), (3, 1) is

p(x) = 0L0 (x) + 2L1 (x) + L2 (x)


= −2(x − 1)(x − 3) + (1/2)(x − 1)(x − 2)
= −(3/2)x2 + (13/2)x − 5.
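The same computation is easy to script; the plain-Python sketch below evaluates the interpolant in Lagrange form and reproduces the data (1, 0), (2, 2), (3, 1).

    def lagrange_eval(nodes, values, x):
        # Evaluate the interpolating polynomial at x using the Lagrange form.
        total = 0.0
        for i, xi in enumerate(nodes):
            li = 1.0  # L_i(x)
            for j, xj in enumerate(nodes):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            total += values[i] * li
        return total

    nodes, values = [1, 2, 3], [0, 2, 1]
    print([lagrange_eval(nodes, values, t) for t in nodes])  # [0.0, 2.0, 1.0]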

2. (a) The Lagrange polynomials for the interpolation nodes x0 = −2, x1 = −1, x2 = 0, x3 = 1, x4 = 2
are
L0 (x) = (x + 1)x(x − 1)(x − 2)/((−2 + 1)(−2 − 0)(−2 − 1)(−2 − 2)) = (1/24)x(x + 1)(x − 1)(x − 2),
L1 (x) = (x + 2)x(x − 1)(x − 2)/((−1 + 2)(−1 − 0)(−1 − 1)(−1 − 2)) = −(1/6)x(x + 2)(x − 1)(x − 2),
L2 (x) = (x + 2)(x + 1)(x − 1)(x − 2)/((0 + 2)(0 + 1)(0 − 1)(0 − 2)) = (1/4)(x + 2)(x + 1)(x − 1)(x − 2),
L3 (x) = (x + 2)(x + 1)x(x − 2)/((1 + 2)(1 + 1)(1 − 0)(1 − 2)) = −(1/6)x(x + 2)(x + 1)(x − 2),
L4 (x) = (x + 2)(x + 1)x(x − 1)/((2 + 2)(2 + 1)(2 − 0)(2 − 1)) = (1/24)x(x + 2)(x + 1)(x − 1).

(b) Using the Lagrange basis, we find the interpolating polynomial passing through the points (−2, 10),
(−1, −3), (0, 2), (1, 7), (2, 18) to be

p(x) = 10L0 (x) − 3L1 (x) + 2L2 (x) + 7L3 (x) + 18L4 (x)
= (5/12)x(x + 1)(x − 1)(x − 2) + (1/2)x(x + 2)(x − 1)(x − 2) +
(1/2)(x + 2)(x + 1)(x − 1)(x − 2) − (7/6)x(x + 2)(x + 1)(x − 2) +
(3/4)x(x + 2)(x + 1)(x − 1).
A tedious calculation shows that p(x) = x4 − x3 − x2 + 6x + 2, which is the same result obtained in
Example 48.
3. Consider the data (1, 5), (2, −4), (3, −4), (4, 2). We wish to find the cubic polynomial interpolating these
points.
(a) Using the standard basis, we write p(x) = c0 + c1 x + c2 x2 + c3 x3 . The equations

p(1) = 5, p(2) = −4, p(3) = −4, p(4) = 2


are equivalent to the system


c0 + c1 + c2 + c3 = 5,
c0 + 2c1 + 4c2 + 8c3 = −4,
c0 + 3c1 + 9c2 + 27c3 = −4,
c0 + 4c1 + 16c2 + 64c3 = 2.
Gaussian elimination yields
c0 = 26, c1 = −28, c2 = 15/2, c3 = −1/2,
and thus
p(x) = 26 − 28x + (15/2)x2 − (1/2)x3 .
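In matrix form this is a Vandermonde system, and solving it numerically reproduces the coefficients above; a sketch assuming NumPy is available:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([5.0, -4.0, -4.0, 2.0])
    V = np.vander(x, increasing=True)  # columns are 1, x_i, x_i^2, x_i^3
    print(np.linalg.solve(V, y))       # [ 26.  -28.    7.5  -0.5]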

(b) The Lagrange polynomials for these interpolation nodes are


L0 (x) = (x − 2)(x − 3)(x − 4)/((1 − 2)(1 − 3)(1 − 4)) = −(1/6)(x − 2)(x − 3)(x − 4),
L1 (x) = (x − 1)(x − 3)(x − 4)/((2 − 1)(2 − 3)(2 − 4)) = (1/2)(x − 1)(x − 3)(x − 4),
L2 (x) = (x − 1)(x − 2)(x − 4)/((3 − 1)(3 − 2)(3 − 4)) = −(1/2)(x − 1)(x − 2)(x − 4),
L3 (x) = (x − 1)(x − 2)(x − 3)/((4 − 1)(4 − 2)(4 − 3)) = (1/6)(x − 1)(x − 2)(x − 3),

and the interpolating polynomial is


p(x) = 5L0 (x) − 4L1 (x) − 4L2 (x) + 2L3 (x)
= −(5/6)(x − 2)(x − 3)(x − 4) − 2(x − 1)(x − 3)(x − 4) +
2(x − 1)(x − 2)(x − 4) + (1/3)(x − 1)(x − 2)(x − 3).
A tedious calculation shows that this is the same polynomial computed in the first part.
4. Let {L0 , L1 , . . . , Ln } be the Lagrange basis constructed on the interpolation nodes x0 , x1 , . . . , xn ∈ F .
We wish to prove that, for all p ∈ Pn (F ),
p(x) = p(x0 )L0 (x) + p(x1 )L1 (x) + . . . + p(xn )Ln (x).
Let q(x) be the polynomial on the right. By the definition of the Lagrange polynomials, we see that
q(xi ) = p(xi ) for i = 0, 1, . . . , n. If we define the polynomial r(x) = p(x) − q(x), then we see that r has
n + 1 roots:
r(xi ) = p(xi ) − q(xi ) = p(xi ) − p(xi ) = 0, i = 0, 1, . . . , n.
However, r is a polynomial of degree at most n, and therefore this is impossible unless r is the zero
polynomial (compare the discussion on page 45 in the text). This shows that q = p, as desired.
5. We wish to write p(x) = 2 + x − x2 as a linear combination of the Lagrange polynomials constructed
on the nodes x0 = −1, x1 = 1, x2 = 3. The graph of p passes through the points (−1, p(−1)), (1, p(1)),
(3, p(3)), that is, (−1, 0), (1, 2), (3, −4). The Lagrange polynomials are
L0 (x) = (x − 1)(x − 3)/((−1 − 1)(−1 − 3)) = (1/8)(x − 1)(x − 3),
L1 (x) = (x + 1)(x − 3)/((1 + 1)(1 − 3)) = −(1/4)(x + 1)(x − 3),
L2 (x) = (x + 1)(x − 1)/((3 + 1)(3 − 1)) = (1/8)(x + 1)(x − 1),
and therefore,

p(x) = 0L0 (x) + 2L1 (x) − 4L2 (x)


= −(1/2)(x + 1)(x − 3) − (1/2)(x + 1)(x − 1).

6. Let F be a field and suppose (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ) are points in F 2 . We wish to show that the poly-
nomial interpolation problem has at most one solution, assuming the interpolation nodes x0 , x1 , . . . , xn
are distinct. This follows from reasoning we have seen before. If p, q ∈ Pn (F ) both interpolate the given
data, then r = p − q is a polynomial of degree at most n having n + 1 distinct roots x0 , x1 , . . . , xn . The
only polynomial of degree n (or less) having n + 1 distinct roots is the zero polynomial; thus r = 0, that
is, p = q, and there is at most one interpolating polynomial.
7. Let F be a field and suppose (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ) are points in F 2 . We wish to show that the poly-
nomial interpolation problem has at most one solution, assuming the interpolation nodes x0 , x1 , . . . , xn
are distinct. Suppose p, q ∈ Pn (F ) both interpolate the data, and let {L0 , L1 , . . . , Ln } be the basis for
Pn (F ) of Lagrange polynomials for the given interpolation nodes. Both p and q can be written in terms
of this basis:
p = ∑_{i=0}^{n} αi Li , q = ∑_{i=0}^{n} βi Li .
Now, we know that the Lagrange polynomials satisfy

Li (xj ) = 1 if j = i, and Li (xj ) = 0 if j ≠ i.

It follows that
p(xj ) = ∑_{i=0}^{n} αi Li (xj ) = αj , q(xj ) = ∑_{i=0}^{n} βi Li (xj ) = βj .

But, since p and q interpolate the given data, p(xj ) = q(xj ) = yj . This shows that αj = βj , j = 0, 1, . . . , n,
and hence that p = q.
8. Suppose x0 , x1 , . . . , xn are distinct real numbers. We wish to prove that, for any real numbers y0 , y1 , . . . , yn ,
the system
c0 + c1 x0 + c2 x0^2 + . . . + cn x0^n = y0 ,
c0 + c1 x1 + c2 x1^2 + . . . + cn x1^n = y1 ,
...
c0 + c1 xn + c2 xn^2 + . . . + cn xn^n = yn
has a unique solution c0 , c1 , . . . , cn . This follows immediately from our work on interpolating polynomials:
c0 , c1 , . . . , cn solves the given system if and only if p(x) = c0 + c1 x + · · · + cn xn interpolates the data
(x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ). Since there is a unique interpolating polynomial p ∈ Pn , and since this
polynomial can be uniquely represented in terms of the standard basis {1, x, . . . , xn }, it follows that the
given system of equations has a unique solution.
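The equivalence is also easy to see in code. Here is a minimal sketch (Python with NumPy; the nodes and values are arbitrary illustrative data) that forms the coefficient matrix of the system above, which is a Vandermonde matrix, and checks that its solution interpolates:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])   # distinct nodes (arbitrary)
    y = np.array([1.0, 2.0, 0.0, 5.0])   # values to interpolate (arbitrary)

    V = np.vander(x, increasing=True)    # V[i, k] = x_i^k, so V c = y is the system above
    c = np.linalg.solve(V, y)            # unique solution since the x_i are distinct

    # the polynomial c0 + c1 x + ... + cn x^n passes through every data point
    assert np.allclose(np.polyval(c[::-1], x), y)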
9. We wish to represent every function f : Z2 → Z2 by a polynomial in P1 (Z2 ). There are exactly four
different functions f : Z2 → Z2 , as defined in the following table:

x f1 (x) f2 (x) f3 (x) f4 (x)


0 0 1 0 1
1 0 0 1 1

We have f1 (x) = 0 (the zero polynomial), f2 (x) = 1 + x, f3 (x) = x, and f4 (x) = 1.



10. The following table defines three functions mapping Z3 → Z3 . We wish to find a polynomial in P2 (Z3 )
representing each one.

x f1 (x) f2 (x) f3 (x)


0 1 0 2
1 2 0 2
2 0 1 1

We can compute f1 , f2 , and f3 as interpolating polynomials. The Lagrange polynomials for the given
interpolation nodes are

L0(x) = (x − 1)(x − 2)/((0 − 1)(0 − 2)) = 2x^2 + 1,

L1(x) = x(x − 2)/((1 − 0)(1 − 2)) = 2x^2 + 2x,

L2(x) = x(x − 1)/((2 − 0)(2 − 1)) = 2x^2 + x.

(Here we have used the arithmetic of Z3 to simplify the polynomials: −1 = 2, −2 = 1, 2^(−1) = 2, etc.).
We then have

f1(x) = L0(x) + 2L1(x) = 2x^2 + 1 + x^2 + x = 1 + x,
f2(x) = L2(x) = 2x^2 + x,
f3(x) = 2L0(x) + 2L1(x) + L2(x) = x^2 + 2 + x^2 + x + 2x^2 + x = x^2 + 2x + 2.
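These claims can be confirmed by exhaustive checking, since Z3 has only three elements; a minimal sketch in Python:

    # check each interpolant against its table of values, working modulo 3
    f1 = {0: 1, 1: 2, 2: 0}
    f2 = {0: 0, 1: 0, 2: 1}
    f3 = {0: 2, 1: 2, 2: 1}

    for x in range(3):
        assert (1 + x) % 3 == f1[x]              # f1(x) = 1 + x
        assert (2*x*x + x) % 3 == f2[x]          # f2(x) = 2x^2 + x
        assert (x*x + 2*x + 2) % 3 == f3[x]      # f3(x) = x^2 + 2x + 2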

11. Consider a secret sharing scheme in which five individuals will receive information about the secret, and
any two of them, working together, will have access to the secret. Assume that the secret is a two-
digit integer, and that p is chosen to be 101. The degree of the polynomial will be one, since then the
polynomial will be uniquely determined by two data points. Let us suppose that the secret is N = 42
and we choose the polynomial to be p(x) = N + c1 x, where c1 = 71 (recall that c1 is chosen at random).
We also choose the five interpolation nodes at random to obtain x1 = 9, x2 = 14, x3 = 39, x4 = 66, and
x5 = 81. We then compute

y1 = p(x1 ) = 42 + 71 · 9 = 75,
y2 = p(x2 ) = 42 + 71 · 14 = 26,
y3 = p(x3 ) = 42 + 71 · 39 = 84,
y4 = p(x4 ) = 42 + 71 · 66 = 82,
y5 = p(x5 ) = 42 + 71 · 81 = 36

(notice that all arithmetic is done modulo 101). The data points, to be distributed to the five individuals,
are (9, 75), (14, 26), (39, 84), (66, 82), (81, 36).
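The share-generation step is only a few lines of code. Here is a minimal sketch in Python (the secret, coefficient, and nodes are fixed to the values chosen above, though in practice they would be drawn at random):

    p_mod = 101                      # the prime modulus
    N, c1 = 42, 71                   # secret and random linear coefficient
    nodes = [9, 14, 39, 66, 81]      # the five randomly chosen nodes

    shares = [(x, (N + c1 * x) % p_mod) for x in nodes]
    # shares == [(9, 75), (14, 26), (39, 84), (66, 82), (81, 36)]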

12. An integer N satisfying 1 ≤ N ≤ 256 represents a secret to be shared among five individuals. Any
three of the individuals are allowed access to the information. The secret is encoded in a polynomial
p, according to the secret sharing scheme described in Section 2.8.1, lying in P2 (Z257 ). Suppose three
of the individuals get together, and their data points are (15, 13), (114, 94), and (199, 146). We wish to
determine the secret. We begin by finding the Lagrange polynomials for the interpolation nodes 15, 114,

and 199:
L0(x) = (x − 114)(x − 199)/((15 − 114)(15 − 199)) = 58x^2 + 93x + 205,

L1(x) = (x − 15)(x − 199)/((114 − 15)(114 − 199)) = 74x^2 + 98x + 127,

L2(x) = (x − 15)(x − 114)/((199 − 15)(199 − 114)) = 125x^2 + 66x + 183.
The interpolating polynomial p is

p(x) = 13L0 (x) + 94L1 (x) + 146L2 (x).


We need only compute p(0):

p(0) = 13L0 (0) + 94L1 (0) + 146L2 (0) = 13 · 205 + 94 · 127 + 146 · 183 = 201.
Thus the secret is N = 201. (The polynomial p(x) simplifies to p(x) = 3x^2 + 11x + 201.)
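Since only the value p(0) is needed, the whole computation can be organized around evaluating the Lagrange polynomials at 0. A minimal sketch in Python (relying on pow(a, -1, m) for modular inverses, available in Python 3.8 and later):

    q = 257
    shares = [(15, 13), (114, 94), (199, 146)]

    secret = 0
    for i, (xi, yi) in enumerate(shares):
        li = 1                                    # accumulates L_i(0) mod q
        for j, (xj, _) in enumerate(shares):
            if j != i:
                li = li * (-xj) * pow(xi - xj, -1, q) % q
        secret = (secret + yi * li) % q
    # secret == 201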
13. We wish to solve the following interpolation problem: Given v1 , v2 , d1 , d2 ∈ R, find p ∈ P3 such that
p(0) = v1, p(1) = v2, p′(0) = d1, p′(1) = d2.
(a) If we represent p as p(x) = c0 + c1 x + c2 x2 + c3 x3 , then the given conditions
p(0) = v1,
p′(0) = d1,
p(1) = v2,
p′(1) = d2
are equivalent to the system
c0 = v1 ,
c1 = d1 ,
c0 + c1 + c2 + c3 = v2 ,
c1 + 2c2 + 3c3 = d2 .
It is straightforward to solve this system:
c0 = v1 , c1 = d1 , c2 = 3v2 − 3v1 − 2d1 − d2 , c3 = 2v1 − 2v2 + d1 + d2 .
(b) We now find the special basis {q1 , q2 , q3 , q4 } of P3 satisfying the following conditions:
q1(0) = 1, q1′(0) = 0, q1(1) = 0, q1′(1) = 0,
q2(0) = 0, q2′(0) = 1, q2(1) = 0, q2′(1) = 0,
q3(0) = 0, q3′(0) = 0, q3(1) = 1, q3′(1) = 0,
q4(0) = 0, q4′(0) = 0, q4(1) = 0, q4′(1) = 1.
We can use the result of the first part of this exercise to write down the solutions immediately:
q1(x) = 1 − 3x^2 + 2x^3,
q2(x) = x − 2x^2 + x^3,
q3(x) = 3x^2 − 2x^3,
q4(x) = −x^2 + x^3.
In terms of the basis {q1 , q2 , q3 , q4 }, the solution to the interpolation problem is
p(x) = v1 q1 (x) + d1 q2 (x) + v2 q3 (x) + d2 q4 (x).
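A minimal numerical check of these formulas (Python; the data values below are arbitrary):

    v1, v2, d1, d2 = 3.0, -1.0, 0.5, 2.0    # arbitrary interpolation data

    c0, c1 = v1, d1
    c2 = 3*v2 - 3*v1 - 2*d1 - d2
    c3 = 2*v1 - 2*v2 + d1 + d2

    p  = lambda x: c0 + c1*x + c2*x**2 + c3*x**3
    dp = lambda x: c1 + 2*c2*x + 3*c3*x**2        # p'(x)

    assert abs(p(0) - v1) < 1e-12 and abs(p(1) - v2) < 1e-12
    assert abs(dp(0) - d1) < 1e-12 and abs(dp(1) - d2) < 1e-12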

14. We are given n + 1 interpolation nodes, x0 , x1 , . . . , xn . Given v0 , v1 , . . . , vn ∈ R, d0 , d1 , . . . , dn ∈ R, we


wish to find p ∈ P2n+1 such that

p(xi) = vi,    p′(xi) = di,    i = 0, 1, . . . , n.

We define (in terms of the Lagrange polynomials L0 , L1 , . . . , Ln for the nodes x0 , x1 , . . . , xn )

Ai(x) = (1 − 2(x − xi)Li′(xi)) Li(x)^2,

Bi(x) = (x − xi) Li(x)^2

for i = 0, 1, . . . , n.

(a) Since each Lagrange polynomial Li has degree exactly n, we see that Bi has degree 2n + 1 for each
i = 0, 1, . . . , n, while Ai has degree 2n + 1 if Li′(xi) ≠ 0 and degree 2n if Li′(xi) = 0. Thus, in every
case, Ai, Bi ∈ P2n+1 for all i = 0, 1, . . . , n.
(b) If j ≠ i, then Li(xj) = 0 and

Ai(xj) = (1 − 2(xj − xi)Li′(xi)) Li(xj)^2 = (1 − 2(xj − xi)Li′(xi)) · 0 = 0.

If j = i, then Li(xj) = 1 and

Ai(xj) = (1 − 2(xi − xi)Li′(xi)) Li(xi)^2 = 1 · 1 = 1.

Therefore, Ai(xj) = 1 if i = j, and Ai(xj) = 0 if i ≠ j.
Also,

Ai′(x) = 2(1 − 2(x − xi)Li′(xi)) Li(x) Li′(x) − 2Li′(xi) Li(x)^2.

If j ≠ i, then Li(xj) = 0 and therefore Ai′(xj) = 0 (since both terms contain a factor of Li(xj)). If
j = i, then Li(xj) = 1 and Ai′(xj) simplifies to

Ai′(xj) = 2Li′(xj) − 2Li′(xj) = 0.

Thus Ai′(xj) = 0, j = 0, 1, . . . , n.


(c) Since either xj − xi = 0 (if j = i) or Li(xj) = 0 (if j ≠ i), it follows that Bi(xj) = 0 for all
j = 0, 1, . . . , n. Now,

Bi′(x) = Li(x)^2 + 2(x − xi) Li(x) Li′(x).

If j ≠ i, then

Bi′(xj) = Li(xj)^2 + 2(xj − xi) Li(xj) Li′(xj) = 0 + 0 = 0

since Li(xj) = 0. Also,

Bi′(xi) = Li(xi)^2 + 2(xi − xi) Li(xi) Li′(xi) = 1 + 0 = 1.

Therefore, Bi′(xj) = 1 if i = j, and Bi′(xj) = 0 if i ≠ j.

(d) We now wish to prove that {A0, . . . , An, B0, . . . , Bn} is a basis for P2n+1. Since dim(P2n+1) = 2n + 2
and the proposed basis contains 2n + 2 elements, it suffices to prove that the set spans P2n+1. Given
any p ∈ P2n+1, define vj = p(xj), dj = p′(xj), j = 0, 1, . . . , n. Then

q(x) = v0 A0(x) + · · · + vn An(x) + d0 B0(x) + · · · + dn Bn(x)

agrees with p at each xj, and q′ agrees with p′ at each xj (see below). Define r = p − q. Then r is
a polynomial of degree at most 2n + 1, and r(xj) = r′(xj) = 0 for j = 0, 1, . . . , n. Using elementary
properties of polynomials, this implies that r(x) can be factored as

r(x) = f(x)(x − x0)^2 (x − x1)^2 · · · (x − xn)^2,

where f (x) is a polynomial. But then deg(r(x)) ≥ 2n + 2 unless f = 0. Since we know that
deg(r(x)) ≤ 2n + 1, it follows that f = 0, in which case r = 0 and hence q = p. Thus p ∈
sp{A0 , . . . , An , B0 , . . . , Bn }. This proves that the given set is a basis for P2n+1 .

Using the properties of A0 , A1 , . . . , An , B0 , B1 , . . . , Bn derived above, the solution to the Hermite inter-
polation problem is
p(x) = v0 A0(x) + · · · + vn An(x) + d0 B0(x) + · · · + dn Bn(x).

To verify this, notice that


p(xj) = v0 A0(xj) + · · · + vn An(xj) + d0 B0(xj) + · · · + dn Bn(xj).

Every term in the second sum vanishes, as do all terms in the first sum except vj Aj(xj) = vj · 1 = vj.
Thus p(xj) = vj. Also,

p′(xj) = v0 A0′(xj) + · · · + vn An′(xj) + d0 B0′(xj) + · · · + dn Bn′(xj).

Now every term in the first sum vanishes, as do all terms in the second sum except dj Bj′(xj) = dj · 1 = dj.
Therefore p′(xj) = dj, and p is the desired interpolating polynomial.
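The construction translates directly into code. Here is a minimal sketch (Python, using numpy.polynomial.Polynomial; the nodes and data are arbitrary illustrative choices) that builds the Ai and Bi and verifies the interpolation conditions:

    import numpy as np
    from numpy.polynomial import Polynomial as Poly

    nodes = [0.0, 0.5, 1.0]        # x0, ..., xn (arbitrary, distinct)
    v = [1.0, -1.0, 2.0]           # prescribed values
    d = [0.0, 3.0, -2.0]           # prescribed derivatives

    def L(i):
        q = Poly([1.0])
        for j, xj in enumerate(nodes):
            if j != i:
                q = q * Poly([-xj, 1.0]) / (nodes[i] - xj)   # times (x - xj)/(xi - xj)
        return q

    p = Poly([0.0])
    for i, xi in enumerate(nodes):
        Li = L(i)
        Ai = (Poly([1.0]) - 2 * Poly([-xi, 1.0]) * Li.deriv()(xi)) * Li**2
        Bi = Poly([-xi, 1.0]) * Li**2
        p = p + v[i] * Ai + d[i] * Bi

    assert np.allclose([p(x) for x in nodes], v)             # p(xj) = vj
    assert np.allclose([p.deriv()(x) for x in nodes], d)     # p'(xj) = dj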

2.9 Continuous piecewise polynomial functions


1. The following table shows the maximum errors obtained in approximating f (x) = ex on the interval [0, 1]
by polynomial interpolation and by piecewise linear interpolation, each on a uniform grid with n nodes.

n Poly. interp. err. PW linear interp. err.


1  2.1187 · 10^−1   2.1187 · 10^−1
2  1.4420 · 10^−2   6.6617 · 10^−2
3  9.2390 · 10^−4   3.2055 · 10^−2
4  5.2657 · 10^−5   1.8774 · 10^−2
5  2.6548 · 10^−6   1.2312 · 10^−2
6  1.1921 · 10^−7   8.6902 · 10^−3
7  4.8075 · 10^−9   6.4596 · 10^−3
8  1.7565 · 10^−10  4.9892 · 10^−3
9  5.8575 · 10^−12  3.9692 · 10^−3
10 1.8119 · 10^−13  3.2328 · 10^−3

For this example, polynomial interpolation is very effective.
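Results like these can be reproduced with a few lines of code. Here is a minimal sketch (Python with NumPy, interpreting n as the polynomial degree, so the grid has n + 1 nodes; exact values may differ slightly depending on how finely the error is sampled):

    import numpy as np

    f = np.exp
    a, b = 0.0, 1.0
    xs = np.linspace(a, b, 2001)               # fine grid for measuring the error

    for n in range(1, 11):
        nodes = np.linspace(a, b, n + 1)       # n + 1 uniformly spaced nodes
        coef = np.polyfit(nodes, f(nodes), n)  # degree-n fit; interpolation with n + 1 points
        poly_err = np.max(np.abs(np.polyval(coef, xs) - f(xs)))
        pw_err = np.max(np.abs(np.interp(xs, nodes, f(nodes)) - f(xs)))
        print(n, poly_err, pw_err)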

2. The following table shows the maximum errors obtained in approximating f (x) = 1/(1 + x2 ) on the
interval [−5, 5] by polynomial interpolation and by piecewise linear interpolation, each on a uniform grid
with n nodes.

n Poly. interp. err. PW linear interp. err.


1  9.6154 · 10^−1   9.6154 · 10^−1
2  6.4615 · 10^−1   4.1811 · 10^−1
3  7.0701 · 10^−1   7.3529 · 10^−1
4  4.3836 · 10^−1   1.8021 · 10^−1
5  4.3269 · 10^−1   5.0000 · 10^−1
6  6.1695 · 10^−1   6.2304 · 10^−2
7  2.4736 · 10^−1   3.3784 · 10^−1
8  1.0452           6.3898 · 10^−2
9  3.0028 · 10^−1   2.3585 · 10^−1
10 1.9156           6.7431 · 10^−2

Here we see that polynomial interpolation is not effective (cf. Figure 2.6 in the text). Also, we see that
piecewise linear interpolation is much more effective with n even, since then there is an interpolation
node at x = 0, which is the peak of the graph of f .

3. We now repeat the previous two exercises, using piecewise quadratic interpolation instead of piecewise
linear interpolation. We use a uniform grid of 2n + 1 nodes.

(a) Here the function is f (x) = ex on [0, 1].


n Poly. interp. err. PW quad. interp. err.
1 1.4416 · 10^−2   1.4416 · 10^−2
2 5.2637 · 10^−5   2.2079 · 10^−3
3 1.1921 · 10^−7   7.0115 · 10^−4
4 1.7565 · 10^−10  3.0632 · 10^−4
5 1.8119 · 10^−13  1.6017 · 10^−4
Once again, polynomial interpolation is quite effective for this function. We only go to n = 5, since
for larger n we reach the limits of the finite precision arithmetic (in standard double precision).
(b) Here f (x) = 1/(1 + x2 ) on [−5, 5].
n Poly. interp. err. PW quad. interp. err.
1 6.4615 · 10^−1   6.4615 · 10^−1
2 4.3818 · 10^−1   8.5472 · 10^−2
3 6.1667 · 10^−1   2.3570 · 10^−1
4 1.0452           9.7631 · 10^−2
5 1.9156           8.5779 · 10^−2
6 3.6629           7.4693 · 10^−2
7 7.1920           3.4671 · 10^−2
8 1.4392 · 10^1    4.7781 · 10^−2
For n > 8, it is not possible to solve the linear system defining the polynomial interpolant accurately
in standard double precision arithmetic. Once again, for this example, we see the advantage of using
piecewise polynomials to approximate f .

4. Let x0 , x1 , . . . , xn define a uniform mesh on [a, b] (that is, xi = a + ih, i = 0, 1, . . . , n, where h = (b−a)/n).
We wish to prove that
max over x ∈ [a, b] of  |(x − x0)(x − x1) · · · (x − xn)| / (n + 1)!  ≤  h^(n+1) / (2(n + 1)).

Let x ∈ [a, b] be given. If x equals any of the nodes x0 , x1 , . . . , xn , then the left-hand side is zero, and
thus the inequality holds. So let us assume that x ∈ (xi−1 , xi ) for some i = 1, 2, . . . , n. The distance
from x to the nearer of xi−1 , xi is at most h/2 and the distance to the further of the two is at most h. If
we then list the remaining n − 1 nodes in order of distance from x (nearest to furthest), we see that the

distances are at most 2h, 3h, . . . , nh. Therefore,

|(x − x0)(x − x1) · · · (x − xn)| = |x − x0||x − x1| · · · |x − xn| ≤ (h/2)(h)(2h) · · · (nh) = (1/2) n! h^(n+1).

Therefore,

|(x − x0)(x − x1) · · · (x − xn)| / (n + 1)! ≤ (1/2) n! h^(n+1) / (n + 1)! = h^(n+1) / (2(n + 1)).
Since this holds for each x ∈ [a, b], the proof is complete.
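A brute-force check of the bound for one particular case (Python with NumPy; the interval and n are arbitrary choices):

    import numpy as np
    from math import factorial

    a, b, n = 0.0, 1.0, 6
    h = (b - a) / n
    nodes = np.linspace(a, b, n + 1)
    xs = np.linspace(a, b, 20001)

    w = np.ones_like(xs)
    for xi in nodes:
        w *= xs - xi                          # build (x - x0)(x - x1)...(x - xn)
    lhs = np.max(np.abs(w)) / factorial(n + 1)
    rhs = h**(n + 1) / (2 * (n + 1))
    assert lhs <= rhs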

5. We wish to derive a bound on the error in piecewise quadratic interpolation in the case of a uniform mesh.
We use the notation of the text, and consider an arbitrary element [x2i−2 , x2i ] (with x2i − x2i−2 = h).
Let f ∈ C^3[a, b] be approximated on this element by the quadratic q that interpolates f at
x2i−2, x2i−1, x2i. Then the remainder term is given by

f(x) − q(x) = (f^(3)(cx)/3!) (x − x2i−2)(x − x2i−1)(x − x2i),    x ∈ [x2i−2, x2i],

where cx ∈ [x2i−2, x2i]. We assume |f^(3)(x)| ≤ M for all x ∈ [a, b]. We must maximize

|(x − x2i−2)(x − x2i−1)(x − x2i)|

on [x2i−2, x2i]. This is equivalent, by a simple change of variables, to maximizing |p(x)| on [0, h], where
p(x) = x(x − h/2)(x − h). This is a simple problem in single-variable calculus, and we can easily verify
that

|p(x)| ≤ h^3/(12√3) for all x ∈ [0, h].
Therefore,

|f(x) − q(x)| ≤ (|f^(3)(cx)|/3!) · h^3/(12√3) = (|f^(3)(cx)|/(72√3)) h^3    for all x ∈ [x2i−2, x2i].

Since this is valid over every element in the mesh, we obtain


|f(x) − q(x)| ≤ (M/(72√3)) h^3    for all x ∈ [a, b].
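The O(h^3) behavior is easy to observe experimentally. A minimal sketch (Python with NumPy, for f(x) = e^x on [0, 1], where |f'''(x)| ≤ e):

    import numpy as np

    f = np.exp
    a, b = 0.0, 1.0
    M = np.e                                   # bound on |f'''| over [0, 1]

    for n in (4, 8, 16):                       # number of elements
        h = (b - a) / n
        nodes = np.linspace(a, b, 2 * n + 1)   # two subintervals per element
        err = 0.0
        for i in range(n):
            xe = nodes[2*i:2*i + 3]            # the three nodes of element i
            c = np.polyfit(xe, f(xe), 2)       # local quadratic interpolant
            xs = np.linspace(xe[0], xe[2], 200)
            err = max(err, np.max(np.abs(np.polyval(c, xs) - f(xs))))
        assert err <= M * h**3 / (72 * np.sqrt(3))
        print(n, err)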