Finite Dimensional Linear Algebra 1st Gockenbach Solution Manual
2.1 Fields
Using the definition of division, the commutative and associative properties of multiplication, and finally
the distributive property, we obtain

α/β + γ/δ = αβ⁻¹ + γδ⁻¹ = (α · 1)β⁻¹ + (γ · 1)δ⁻¹
          = (α(δδ⁻¹))β⁻¹ + (γ(ββ⁻¹))δ⁻¹ = ((αδ)δ⁻¹)β⁻¹ + ((γβ)β⁻¹)δ⁻¹
          = (αδ)(δ⁻¹β⁻¹) + (γβ)(β⁻¹δ⁻¹) = (αδ)(δβ)⁻¹ + (γβ)(βδ)⁻¹
          = (αδ)(βδ)⁻¹ + (γβ)(βδ)⁻¹ = ((αδ) + (γβ))(βδ)⁻¹
          = (αδ + βγ)/(βδ).
Similarly,

α/β · γ/δ = (αβ⁻¹)(γδ⁻¹) = ((αβ⁻¹)γ)δ⁻¹ = (α(β⁻¹γ))δ⁻¹ = (α(γβ⁻¹))δ⁻¹
          = ((αγ)β⁻¹)δ⁻¹ = (αγ)(β⁻¹δ⁻¹) = (αγ)(βδ)⁻¹ = (αγ)/(βδ),

and

(α/β)/(γ/δ) = (αβ⁻¹)(γδ⁻¹)⁻¹ = (αβ⁻¹)(γ⁻¹δ) = ((αβ⁻¹)γ⁻¹)δ
            = (α(β⁻¹γ⁻¹))δ = (α(βγ)⁻¹)δ = α((βγ)⁻¹δ)
            = α(δ(βγ)⁻¹) = (αδ)(βγ)⁻¹ = (αδ)/(βγ).
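These three identities can be spot-checked in the familiar field Q using Python's exact rational arithmetic (a quick sketch with arbitrarily chosen nonzero elements; the variable names are ours, not the text's):

```python
from fractions import Fraction

# Arbitrary nonzero elements of Q playing the roles of α, β, γ, δ.
a, b, g, d = Fraction(3), Fraction(4), Fraction(-2), Fraction(7)

# The sum, product, and quotient formulas derived above.
assert a / b + g / d == (a * d + b * g) / (b * d)
assert (a / b) * (g / d) == (a * g) / (b * d)
assert (a / b) / (g / d) == (a * d) / (b * g)
```

Because `Fraction` arithmetic is exact, the equalities hold with `==` rather than a floating-point tolerance.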
10. Let F be a field, and let α ∈ F be given. We wish to prove that, for any β1, . . . , βn ∈ F, α(β1 + · · · + βn) =
αβ1 + · · · + αβn. We argue by induction on n. For n = 1, the result is simply αβ1 = αβ1. Suppose that for
some n ≥ 2, α(β1 + · · · + βn−1) = αβ1 + · · · + αβn−1 for any β1, . . . , βn−1 ∈ F. Let β1, . . . , βn−1, βn ∈ F.
Then

α(β1 + · · · + βn) = α((β1 + · · · + βn−1) + βn) = α(β1 + · · · + βn−1) + αβn = αβ1 + · · · + αβn−1 + αβn.

(In the last step, we applied the induction hypothesis, and in the step preceding that, the distributive
property of multiplication over addition.) This shows that α(β1 + · · · + βn) = αβ1 + · · · + αβn, and the
general result now follows by induction.
11. (a) The space Z is not a field because multiplicative inverses do not exist in general. For example, 2 ≠ 0,
yet there exists no n ∈ Z such that 2n = 1.
(b) The space Q of rational numbers is a field. Assuming the usual definitions for addition and multipli-
cation, all of the defining properties of a field are straightforward to verify.
(c) The space of positive real numbers is not a field because there is no additive identity. For any
z ∈ (0, ∞), x + z > x for all x ∈ (0, ∞).
12. Let F = {(α, β) : α, β ∈ R}, and define addition and multiplication on F by (α, β)+(γ, δ) = (α+γ, β+δ),
(α, β) · (γ, δ) = (αγ, βδ). With these definitions, F is not a field because multiplicative inverses do not
exist. It is straightforward to verify that (0, 0) is the additive identity and (1, 1) is the multiplicative identity.
Now (1, 0) ≠ (0, 0), yet (1, 0) · (α, β) = (α, 0) ≠ (1, 1) for all (α, β) ∈ F. Since F contains a nonzero
element with no multiplicative inverse, F is not a field.
13. Let F = (0, ∞), and define addition and multiplication on F by x ⊕ y = xy, x ⊙ y = x^(ln y). We wish
to show that F is a field. Commutativity and associativity of addition follow immediately from these
properties for ordinary multiplication of real numbers. Obviously 1 is the additive identity, and the additive
inverse of x ∈ F is its reciprocal 1/x. The properties of multiplication are less obvious, but note that
x ⊙ y = e^(ln(x^(ln y))) = e^(ln(x) ln(y)), and this formula makes both commutativity and associativity easy to verify.
We also see that e is a multiplicative identity: x ⊙ e = x^(ln e) = x¹ = x for all x ∈ F. For any x ∈ F with
x ≠ 1, y = e^(1/ln(x)) is a multiplicative inverse: x ⊙ y = x^(1/ln(x)) = e^(ln(x)/ln(x)) = e. Finally, for any
x, y, z ∈ F, x ⊙ (y ⊕ z) = x ⊙ (yz) = x^(ln(yz)) = x^(ln(y)+ln(z)) = x^(ln y) x^(ln z) = (x ⊙ y)(x ⊙ z) = (x ⊙ y) ⊕ (x ⊙ z).
Thus the distributive property holds.
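The axioms verified above lend themselves to a quick numeric sanity check. This is a sketch only: floating-point arithmetic with tolerance-based comparison, sample points chosen arbitrarily, and `add`/`mul` are our own helper names for ⊕ and ⊙:

```python
import math

def add(x, y):          # x ⊕ y = xy
    return x * y

def mul(x, y):          # x ⊙ y = x^(ln y) = e^(ln x · ln y)
    return x ** math.log(y)

x, y, z = 2.5, 0.7, 4.0   # arbitrary elements of F = (0, ∞)
close = lambda a, b: math.isclose(a, b, rel_tol=1e-9)

assert close(add(x, 1.0), x)                              # 1 is the additive identity
assert close(add(x, 1.0 / x), 1.0)                        # 1/x is the additive inverse
assert close(mul(x, math.e), x)                           # e is the multiplicative identity
assert close(mul(x, math.exp(1 / math.log(x))), math.e)   # multiplicative inverse of x ≠ 1
assert close(mul(x, mul(y, z)), mul(mul(x, y), z))        # associativity of ⊙
assert close(mul(x, y), mul(y, x))                        # commutativity of ⊙
assert close(mul(x, add(y, z)), add(mul(x, y), mul(x, z)))  # distributivity
```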
14. Suppose F is a set on which are defined two operations, addition and multiplication, such that all the
properties of a field are satisfied except that addition is not assumed to be commutative. We wish to
show that, in fact, addition must be commutative, and therefore F must be a field. We first note that
it is possible to prove that 0 · γ = 0, −1 · γ = −γ, and −(−γ) = γ for all γ ∈ F without invoking
commutativity of addition. Moreover, for all α, β ∈ F , −β + (−α) = −(α + β) since (α + β) + (−β +
(−α)) = ((α + β) + (−β)) + (−α) = (α + (β + (−β))) + (−α) = (α + 0) + (−α) = α + (−α) = 0. We
therefore conclude that −1 · (α + β) = −β + (−α) for all α, β ∈ F . But, by the distributive property,
−1 · (α + β) = −1 · α + (−1) · β = −α + (−β), and therefore −α + (−β) = −β + (−α) for all α, β ∈ F .
Applying this property to −α, −β in place of α, β, respectively, yields α + β = β + α for all α, β ∈ F ,
which is what we wanted to prove.
15. (a) In Z2 = {0, 1}, we have 0 + 0 = 0, 1 + 0 = 0 + 1 = 1, and 1 + 1 = 0. This shows that −0 = 0 (as
always) and −1 = 1. Also, 0 · 0 = 0 · 1 = 1 · 0 = 0, 1 · 1 = 1, and 1⁻¹ = 1 (as in any field).
(b) The addition and multiplication tables for Z3 = {0, 1, 2} are
 + | 0 1 2        · | 0 1 2
---+-------      ---+-------
 0 | 0 1 2        0 | 0 0 0
 1 | 1 2 0        1 | 0 1 2
 2 | 2 0 1        2 | 0 2 1
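The same tables can be generated for any Zₚ with a few lines of Python (a sketch; `tables` is our own helper name):

```python
def tables(p):
    """Return the addition and multiplication tables for Z_p as nested lists."""
    add = [[(i + j) % p for j in range(p)] for i in range(p)]
    mul = [[(i * j) % p for j in range(p)] for i in range(p)]
    return add, mul

add3, mul3 = tables(3)
# These match the Z_3 tables displayed above.
assert add3 == [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
assert mul3 == [[0, 0, 0], [0, 1, 2], [0, 2, 1]]
```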
19. We are given that H represents the space of quaternions and the definitions of addition and multiplication
in H. The first two parts of the exercise are purely computational.
(a) i2 = j 2 = k 2 = −1, ij = k, ik = −j, jk = i, ji = −k, ki = j, kj = −i, ijk = −1.
(b) x x̄ = x̄ x = x1² + x2² + x3² + x4², where x̄ = x1 − x2 i − x3 j − x4 k is the conjugate of x.
(c) The additive identity in H is 0 = 0 + 0i + 0j + 0k. The additive inverse of x = x1 + x2 i + x3 j + x4 k
is −x = −x1 − x2 i − x3 j − x4 k.
(d) The calculations above show that multiplication is not commutative; for instance, ij = k, ji = −k.
(e) It is easy to verify that 1 = 1 + 0i + 0j + 0k is a multiplicative identity for H.
(f) If x ∈ H is nonzero, then x x̄ = x1² + x2² + x3² + x4² is also nonzero. It follows that

x⁻¹ = x̄/(x x̄) = x1/(x x̄) − (x2/(x x̄)) i − (x3/(x x̄)) j − (x4/(x x̄)) k

is a multiplicative inverse for x:

x · (x̄/(x x̄)) = (x x̄)/(x x̄) = 1.
20. In order to solve the first two parts of this problem, it is convenient to prove the following result. Suppose
F is a field under operations +, ·, G is a nonempty subset of F, and G is a field under operations ⊕, ⊙.
Moreover, suppose that for all x, y ∈ G, x + y = x ⊕ y and x · y = x ⊙ y. Then G is a subfield of F.
We already know (since the operations on F reduce to the operations on G when the operands belong
to G) that G is closed under + and ·. We have to prove that the additive and multiplicative identities
of F belong to G, which we will do by showing that 0F = 0G (that is, the additive identity of F equals
the additive identity of G under ⊕) and 1F = 1G (which has the analogous meaning). To prove the first,
notice that 0G + 0G = 0G ⊕ 0G since 0G ∈ G, and therefore 0G + 0G = 0G . Adding the additive inverse
(in F ) −0G to both sides of this equation yields 0G = 0F . A similar proof shows that 1F = 1G . Thus
0F, 1F ∈ G. We next show that if x ∈ G and −x denotes the additive inverse of x in F, then −x ∈ G.
We write ⊖x for the additive inverse of x in G. We have x ⊕ (⊖x) = 0, which implies that x + (⊖x) = 0.
But then, adding −x to both sides, we obtain ⊖x = −x, and therefore −x ∈ G. Similarly, if x ∈ G,
x ≠ 0, and x⁻¹ denotes the multiplicative inverse of x in F, then x⁻¹ ∈ G. This completes the proof.
(a) We wish to show that R is a subfield of C. It suffices to prove that addition and multiplication
in C reduce to the usual addition and multiplication in R when the operands are real numbers. If
x, y ∈ R ⊂ C, then (x + 0i) + (y + 0i) = (x + y) + (0 + 0)i = (x + y) + 0i = x + y. Similarly,
(x + 0i)(y + 0i) = (xy − 0 · 0) + (x · 0 + 0 · y)i = xy + 0i = xy. Thus the operations on C reduce to
the operations on R when the operands are elements of R, and therefore R is a subfield of C.
(b) We now wish to show that C is a subfield of H by showing that the operations of H reduce to the
operations on C when the operands belong to C. Let x = x1 + x2 i, y = y1 + y2 i belong to C, so
that x = x1 + x2 i + 0j + 0k, y = y1 + y2 i + 0j + 0k can be regarded as elements of H. By definition,
x + y = (x1 + x2 i + 0j + 0k) + (y1 + y2 i + 0j + 0k)
= (x1 + y1 ) + (x2 + y2 )i + (0 + 0)j + (0 + 0)k
= (x1 + y1 ) + (x2 + y2 )i,
xy =(x1 + x2 i + 0j + 0k)(y1 + y2 i + 0j + 0k)
=(x1 y1 − x2 y2 − 0 · 0 − 0 · 0)+
(x1 y2 + x2 y1 + 0 · 0 − 0 · 0)i+
(x1 · 0 − x2 · 0 + 0 · y1 + 0 · y2 )j+
(x1 · 0 + x2 · 0 − 0 · y2 + 0 · y1 )k
=(x1 y1 − x2 y2 ) + (x1 y2 + x2 y1 )i.
Thus both operations on H reduce to the usual operations on C, which shows that C is a subfield
of H.
2.2 Vector Spaces
2. Let F be an infinite field, and let V be a nontrivial vector space over F. We wish to show that V contains
infinitely many vectors. By definition, V contains a nonzero vector u. It suffices to show that, for all
α, β ∈ F, α ≠ β implies αu ≠ βu, since then V contains the infinite subset {αu : α ∈ F}. Suppose
α, β ∈ F satisfy αu = βu. Then αu − βu = 0, that is, (α − β)u = 0. Since u ≠ 0 by assumption, Theorem 5
implies that α − β = 0, that is, α = β. Thus αu = βu implies α = β, which
completes the proof.
4. We are to prove that if F is a field, then F n is a vector space over F . This is a straightforward verification
of the defining properties of a vector space, which follow in this case from the analogous properties of the
field F . The details are omitted.
5. We are to prove that F [a, b] (the space of all functions f : [a, b] → R) is a vector space over R. Like the
last exercise, this straightforward verification is omitted.
6. (a) Let p be a prime and n a positive integer. Since each of the n components of x ∈ Zₚⁿ can take on
any of the p values 0, 1, . . . , p − 1, there are pⁿ distinct vectors in Zₚⁿ.
(b) The elements of Z22 are (0, 0), (0, 1), (1, 0), (1, 1). We have (0, 0) + (0, 0) = (0, 0), (0, 0) + (0, 1) =
(0, 1), (0, 0) + (1, 0) = (1, 0), (0, 0) + (1, 1) = (1, 1), (0, 1) + (0, 1) = (0, 0), (0, 1) + (1, 0) = (1, 1),
(0, 1) + (1, 1) = (1, 0), (1, 0) + (1, 0) = (0, 0), (1, 0) + (1, 1) = (0, 1), (1, 1) + (1, 1) = (0, 0).
7. (a) The elements of P1 (Z2 ) are the polynomials 0, 1, x, 1 + x, which define distinct functions on Z2 .
We have 0 + 0 = 0, 0 + 1 = 1, 0 + x = x, 0 + (1 + x) = 1 + x, 1 + 1 = 0, 1 + x = 1 + x, 1 + (1 + x) = x,
x+ x = (1 + 1)x = 0x = 0, x+(1 +x) = 1+(x+x) = 1, (1+x)+(1+x) = (1+1)+(x+x) = 0 + 0 = 0.
(b) Nominally, the elements of P2 (Z2 ) are 0, 1, x, 1 + x, x2 , 1 + x2 , x + x2 , 1 + x + x2 . However, since
these elements are interpreted as functions mapping Z2 into Z2 , it turns out that the last four
functions equal the first four. In particular, x2 = x (as functions), since 02 = 0 and 12 = 1. Then
1 + x2 = 1 + x, x + x2 = x + x = 0, and 1 + x + x2 = 1 + 0 = 1. Thus we see that the function
spaces P2 (Z2 ) and P1 (Z2 ) are the same.
(c) Let V be the vector space consisting of all functions from Z2 into Z2 . To specify f ∈ V means to
specify the two values f (0) and f (1). There are exactly four ways to do this: f (0) = 0, f (1) = 0
(so f (x) = 0); f (0) = 1, f (1) = 1 (so f (x) = 1); f (0) = 0, f (1) = 1 (so f (x) = x); and f (0) = 1,
f (1) = 0 (so f (x) = 1 + x). Thus we see that V = P1 (Z2 ).
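The counting in parts (a)–(c) can be confirmed by brute force: enumerate coefficient tuples over Z2, reduce each polynomial to the function it defines on Z2, and compare the resulting function spaces (a sketch; `as_function` is our own helper name):

```python
from itertools import product

p = 2
points = range(p)

def as_function(coeffs):
    """Evaluate a0 + a1*x + a2*x^2 + ... at every point of Z_p."""
    return tuple(sum(c * t**n for n, c in enumerate(coeffs)) % p for t in points)

# All polynomials of degree <= 2 and <= 1, viewed as functions Z_2 -> Z_2.
deg2_functions = {as_function(c) for c in product(range(p), repeat=3)}  # P2(Z2)
deg1_functions = {as_function(c) for c in product(range(p), repeat=2)}  # P1(Z2)

assert deg2_functions == deg1_functions   # P2(Z2) = P1(Z2) as function spaces
assert len(deg1_functions) == p ** p      # all four functions from Z2 to Z2
```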
8. Let V = (0, ∞), with addition ⊕ and scalar multiplication ⊙ defined by u ⊕ v = uv for all u, v ∈ V
and α ⊙ u = u^α for all α ∈ R and all u ∈ V. We will prove that V is a vector space over R. First of
all, ⊕ is commutative and associative (because multiplication of real numbers has these properties). For
all u ∈ V, u ⊕ 1 = u · 1 = u, so 1 is an additive identity. Also, if u ∈ V, then 1/u ∈ V satisfies
u ⊕ (1/u) = u(1/u) = 1, so each vector has an additive inverse. Next, if α, β ∈ R and u, v ∈ V, then
α ⊙ (β ⊙ u) = α ⊙ (u^β) = (u^β)^α = u^(αβ) = (αβ) ⊙ u, so the associative property of scalar multiplication
holds. Also, α ⊙ (u ⊕ v) = α ⊙ (uv) = (uv)^α = u^α v^α = (α ⊙ u) ⊕ (α ⊙ v) and (α + β) ⊙ u = u^(α+β) = u^α u^β =
(α ⊙ u) ⊕ (β ⊙ u). Thus both distributive properties hold. Finally, 1 ⊙ u = u¹ = u. This completes the
proof that V is a vector space over R.
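A numeric spot check of these vector-space axioms (a sketch: floating-point with tolerance-based comparison; sample vectors and scalars are arbitrary, and `vadd`/`smul` are our own names for ⊕ and ⊙):

```python
import math

vadd = lambda u, v: u * v          # u ⊕ v = uv
smul = lambda a, u: u ** a         # α ⊙ u = u^α

u, v = 3.0, 0.25                   # arbitrary "vectors" in (0, ∞)
a, b = -1.5, 2.0                   # arbitrary scalars in R
close = lambda s, t: math.isclose(s, t, rel_tol=1e-9)

assert close(vadd(u, 1.0), u)                         # 1 is the zero vector
assert close(vadd(u, 1.0 / u), 1.0)                   # 1/u is the additive inverse
assert close(smul(a, smul(b, u)), smul(a * b, u))     # α(βu) = (αβ)u
assert close(smul(a, vadd(u, v)), vadd(smul(a, u), smul(a, v)))
assert close(smul(a + b, u), vadd(smul(a, u), smul(b, u)))
assert close(smul(1.0, u), u)                         # 1 · u = u
```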
9. Let V = R2 with the usual scalar multiplication and the following nonstandard vector addition: u ⊕v =
(u1 + v1 , u2 + v2 + 1) for all u, v ∈ R2 . It is easy to check that commutativity and associativity of ⊕
hold, that (0, −1) is an additive identity, and that each u = (u1 , u2 ) has an additive inverse, namely,
(−u1, −u2 − 2). Also, α(βu) = (αβ)u for all u ∈ V, α, β ∈ R (since scalar multiplication is defined in the
standard way). However, if α ∈ R, then α(u ⊕ v) = α(u1 + v1, u2 + v2 + 1) = (αu1 + αv1, αu2 + αv2 + α),
while (αu) ⊕ (αv) = (αu1, αu2) ⊕ (αv1, αv2) = (αu1 + αv1, αu2 + αv2 + 1), and these are unequal unless α = 1.
Thus the first distributive property fails to hold, and V is not a vector space over R. (In fact, the second
distributive property also fails.)
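The failing distributive law is easy to exhibit concretely (a sketch with an arbitrary choice of u, v and α = 2; `oplus`/`smul` are our own helper names):

```python
def oplus(u, v):                     # nonstandard addition on R^2
    return (u[0] + v[0], u[1] + v[1] + 1)

def smul(a, u):                      # usual scalar multiplication
    return (a * u[0], a * u[1])

u, v, a = (1.0, 2.0), (3.0, -1.0), 2.0
lhs = smul(a, oplus(u, v))           # α(u ⊕ v)
rhs = oplus(smul(a, u), smul(a, v))  # (αu) ⊕ (αv)
assert lhs == (8.0, 4.0)
assert rhs == (8.0, 3.0)
assert lhs != rhs                    # the distributive law fails for α ≠ 1
```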
10. Let V = R2 with the usual scalar multiplication, and with addition defined by u ⊕v = (α1 u1 +β1 v1 , α2 u2 +
β2 v2 ), where α1 , α2 , β1 , β2 ∈ R are fixed. We wish to determine what values of α1 , α2 , β1 , β2 will make V a
vector space over R. We first note that ⊕ is commutative if and only if α1 = β1 , α2 = β2 . We therefore
redefine u ⊕ v as (α1 u1 + α1 v1, α2 u2 + α2 v2) = (α1(u1 + v1), α2(u2 + v2)). Next, we have
(u ⊕ v) ⊕ w = (α1²u1 + α1²v1 + α1 w1, α2²u2 + α2²v2 + α2 w2) and
u ⊕ (v ⊕ w) = (α1 u1 + α1²v1 + α1²w1, α2 u2 + α2²v2 + α2²w2).
From this, it is easy to show that (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w) for all u, v, w ∈ R² if and only if α1² = α1
and α2² = α2, that is, if and only if α1 = 0 or α1 = 1, and similarly for α2. However, if α1 = 0 or α2 = 0,
then no additive identity can exist. For suppose α1 = 0. Then u ⊕ v = (0, α2(u2 + v2)) for all u, v ∈ V,
and no z ∈ V can satisfy u ⊕ z = u if u1 ≠ 0. Similarly, if α2 = 0, then no additive identity can exist.
Therefore, if V is to be a vector space over R, then we must have α1 = β1 = α2 = β2 = 1, and V reduces to R²
under the usual vector space operations.
11. Suppose V is the set of all polynomials (over R) of degree exactly two, together with the zero polynomial.
Addition and scalar multiplication are defined on V in the usual fashion. Then V is not a vector space
over R because it is not closed under addition. For example, 1 + x + x² ∈ V and 1 + x − x² ∈ V, but
(1 + x + x²) + (1 + x − x²) = 2 + 2x ∉ V.
12. (a) We wish to find a function lying in C(0, 1) but not in C[0, 1]. A suitable function with a discontinuity
at one of the endpoints provides an example. For example, f (x) = 1/x satisfies f ∈ C(0, 1) and
f ∉ C[0, 1], as does f(x) = 1/(1 − x) or f(x) = 1/(x − x²). A different type of example is provided
by f (x) = sin (1/x).
(b) The function f (x) = |x| belongs to C[−1, 1] but not to C 1 [−1, 1].
13. Let V be the space of all infinite sequences of real numbers, and define {xn } + {yn } = {xn + yn },
α{xn } = {αxn }. The proof that V is a vector space is a straightforward verification of the defining
properties, no different than for Rn , and will not be given here.
14. Let V be the set of all piecewise continuous functions f : [a, b] → R, with addition and scalar multi-
plication defined as usual for functions. We wish to show that V is a vector space over R. Most of
the properties of a vector space are automatically satisfied by V because it is a subset of the space of
all real-valued functions on [a, b], which is known to be a vector space. Specifically, commutativity and
associativity of addition, the associative property of scalar multiplication, the two distributive laws, and
the fact that 1 · u = u for all u ∈ V are all obviously satisfied. Moreover, the 0 function is continuous
and hence by definition piecewise continuous, and therefore 0 ∈ V . It remains only to show that V
is closed under addition and scalar multiplication (then, since −u = −1 · u for any function u, each
function u ∈ V must have an additive inverse in V ). Let u ∈ V , α ∈ R, and suppose u has points of
discontinuity x1 < x2 < · · · < xk−1 , where x1 > x0 = a and xk−1 < xk = b. Then u is continuous on
each interval (xi−1 , xi ), i = 1, 2, . . . , k, and therefore, by a simple theorem of calculus (any multiple of a
continuous function is continuous), αu is also continuous on each (xi−1 , xi ). The one-sided limits of αu
at x0, x1, . . . , xk exist since, for example,

lim_{x→xi+} αu(x) = α lim_{x→xi+} u(x)
(and similarly for left-hand limits). Therefore, αu is piecewise continuous and therefore αu ∈ V . Now
suppose u, v belong to V. Let {x1, x2, . . . , xℓ−1} be the union of the sets of points of discontinuity of u
and of v, ordered so that a = x0 < x1 < · · · < xℓ−1 < xℓ = b. Then, since both u and v are continuous
at all other points in (a, b), u + v is continuous on every interval (xi−1, xi). Also, at each xi, either
limx→xi u(x) exists (if u is continuous at xi, that is, if xi is a point of discontinuity only for v), or the
one-sided limits lim_{x→xi+} u(x) and lim_{x→xi−} u(x) both exist. In the first case, the two one-sided limits
exist (and are equal), so in any case the two one-sided limits exist. The same is true for v. Thus, for each
xi, i = 0, 1, . . . , ℓ − 1,

lim_{x→xi+} (u(x) + v(x)) = lim_{x→xi+} u(x) + lim_{x→xi+} v(x),

and similarly for the left-hand limits at x1, x2, . . . , xℓ. This shows that u + v is piecewise continuous, and
therefore belongs to V. This completes the proof.
15. Suppose U and V are vector spaces over a field F , and define addition and scalar multiplication on U × V
by (u, v) + (w, z) = (u + w, v + z), α(u, v) = (αu, αv). We wish to prove that U × V is a vector space
over F . In fact, the verifications of all the defining properties of a vector space are straightforward. For
instance, (u, v) + (w, z) = (u + w, v + z) = (w + u, z + v) = (w, z) + (u, v) (using the commutativity of
addition in U and V ), and therefore addition in U × V is commutative. Note that the additive identity
in U × V is (0, 0), where the first 0 is the zero vector in U and the second is the zero vector in V . We
will not verify the remaining properties here.
2.3 Subspaces
1. Let V be a vector space over F .
(a) Let S = {0}. Then 0 ∈ S, S is closed under addition since 0 + 0 = 0 ∈ S, and S is closed under
scalar multiplication since α · 0 = 0 ∈ S for all α ∈ F . Thus S is a subspace of V .
(b) The entire space V is a subspace of V since 0 ∈ V and V is closed under addition and scalar
multiplication by definition.
5. Suppose S is a subset of Z2n . We wish to show that S is a subspace of Z2n if and only if 0 ∈ S and S is
closed under addition. Of course, the “only if” direction is trivial. The other direction follows as in the
preceding exercise: If 0 ∈ S, then S is automatically closed under scalar multiplication, since 0 and 1 are
the only elements of the field Z2 , and 0 · v = 0 for all v ∈ S, 1 · v = v for all v ∈ S.
6. Define S = {x ∈ R² : x1 ≥ 0, x2 ≥ 0}. Then S is not a subspace of R², since it is not closed under
scalar multiplication. For instance, (1, 1) ∈ S but −1 · (1, 1) = (−1, −1) ∉ S.
7. Define S = {x ∈ R² : ax1 + bx2 = 0}, where a, b ∈ R are constants. We will show that S is a subspace
of R2 . First, (0, 0) ∈ S, since a · 0 + b · 0 = 0. Next, suppose x ∈ S and α ∈ R. Then ax1 + bx2 = 0,
and therefore a(αx1 ) + b(αx2 ) = α(ax1 + bx2 ) = α · 0 = 0. This shows that αx ∈ S, and therefore S is
closed under scalar multiplication. Finally, suppose x, y ∈ S, so that ax1 + bx2 = 0 and ay1 + by2 = 0.
Then a(x1 + y1 ) + b(x2 + y2 ) = (ax1 + bx2 ) + (ay1 + by2 ) = 0 + 0 = 0, which shows that x + y ∈ S, and
therefore that S is closed under addition. This completes the proof.
8. (a) The set A = {x ∈ R2 : x1 = 0 or x2 = 0} is closed under scalar multiplication but not addition.
Closure under scalar multiplication holds since if x1 = 0, then (αx)1 = αx1 = α · 0 = 0, and similarly
for the second component. The set is not closed under addition; for instance, (1, 0), (0, 1) ∈ A, but
(1, 0) + (0, 1) = (1, 1) ∉ A.
(b) The set Q = {x ∈ R2 : x1 ≥ 0, x2 ≥ 0} is closed under addition but not scalar multiplication. Since
(1, 1) ∈ Q but −1 · (1, 1) = (−1, −1) ∉ Q, we see that Q is not closed under scalar multiplication.
On the other hand, if x, y ∈ Q, so that x1 , x2 , y1 , y2 ≥ 0, we see that (x + y)1 = x1 + y1 ≥ 0 + 0 = 0
and (x + y)2 = x2 + y2 ≥ 0 + 0 = 0. This shows that x + y ∈ Q, and therefore Q is closed under
addition.
9. Let V be a vector space over a field F, let u ∈ V, and define S = {αu : α ∈ F}. We will show that S
is a subspace of V. First, 0 ∈ S because 0 = 0 · u. Next, suppose x ∈ S and β ∈ F. Since x ∈ S, there
exists α ∈ F such that x = αu. Therefore, βx = β(αu) = (βα)u (using the associative property of scalar
multiplication), which shows that βx belongs to S. Thus S is closed under scalar multiplication. Finally,
suppose x, y ∈ S; then there exist α, β ∈ F such that x = αu, y = βu, and x + y = αu + βu = (α + β)u
by the second distributive property. Therefore S is closed under addition, and we have shown that S is
a subspace.
10. Let R be regarded as a vector space over R. We wish to prove that R has no proper subspaces. It suffices
to prove that if S is a nontrivial subspace of R, then S = R. So suppose S is a nontrivial subspace,
which means that there exists x ≠ 0 belonging to S. But then, given any y ∈ R, y = (yx⁻¹)x belongs to
S because S is closed under scalar multiplication. Thus R ⊂ S, and hence S = R.
11. We wish to describe all proper subspaces of R². We claim that every nontrivial proper subspace of R² has the form
{αx : α ∈ R}, where x ∈ R² is nonzero (geometrically, such a set is a line through the origin).
To prove this, let us suppose S is a nontrivial proper subspace of R². Then there exists x ∈ S, x ≠ 0. Since S
is closed under scalar multiplication, every vector of the form αx, α ∈ R, must belong to S. Therefore,
S contains the set {αx : α ∈ R}. Let us suppose that there exists y ∈ S such that y cannot be
written as y = αx for some α ∈ R. In this case, we argue that every z ∈ R² belongs to S, and hence S is
not a proper subspace of R². To justify this conclusion, we first note that, since y is not a multiple of x,
x1 y2 − x2 y1 ≠ 0. Let z ∈ R² be given and consider the equation αx + βy = z. It can be verified directly
that α = (y2 z1 − y1 z2)/(x1 y2 − x2 y1), β = (x1 z2 − x2 z1)/(x1 y2 − x2 y1) satisfy this equation, from which
it follows that z ∈ S (since S is closed under addition and scalar multiplication). Therefore, if S contains
any vector not lying in {αx : α ∈ R}, then S is all of R², and S is not a proper subspace
of R².
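The formulas for α and β are Cramer's rule for the 2×2 system αx + βy = z, and they can be spot-checked numerically (a sketch; the sample vectors are arbitrary):

```python
x = (1.0, 2.0)     # a nonzero vector spanning the line
y = (3.0, 1.0)     # a vector that is not a multiple of x
z = (4.0, -7.0)    # an arbitrary target vector

det = x[0] * y[1] - x[1] * y[0]
assert det != 0                      # y is not a multiple of x

alpha = (y[1] * z[0] - y[0] * z[1]) / det
beta = (x[0] * z[1] - x[1] * z[0]) / det

# alpha*x + beta*y reproduces z, so z lies in the subspace containing x and y.
assert all(abs(alpha * xi + beta * yi - zi) < 1e-12
           for xi, yi, zi in zip(x, y, z))
```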
12. We wish to find a proper subspace of R3 that is not a plane. One such subspace is the x1 -axis: S = {x ∈
R3 : x2 = x3 = 0}. It is easy to verify that S is a subspace of R3 , and geometrically, S is a line.
More generally, using the result of Exercise 9, we can show that {αx : α ∈ R}, where x ≠ 0 is a given
vector, is a proper subspace of R³. Such a subspace represents a line through the origin.
13. Consider the subset Rn of Cn . Although Rn contains the zero vector and is closed under addition,
it is not closed under scalar multiplication, and hence is not a subspace of Cn . Here the scalars are
complex numbers (since Cn is a vector space over C), and, for example, (1, 0, . . . , 0) ∈ Rn , i ∈ C, and
i(1, 0, . . . , 0) = (i, 0, . . . , 0) does not belong to Rn .
14. Let S = {u ∈ C[a, b] : u(a) = u(b) = 0}. Then S is a subspace of C[a, b]. The zero function clearly
belongs to S. Suppose u ∈ S and α ∈ R. Then (αu)(a) = αu(a) = α · 0 = 0, and similarly (αu)(b) = 0.
It follows that αu ∈ S, and S is closed under scalar multiplication. If u, v ∈ S, then (u + v)(a) =
u(a) + v(a) = 0 + 0 = 0, and similarly (u + v)(b) = 0. Therefore S is closed under addition, and we have
shown that S is a subspace of C[a, b].
15. Let S = {u ∈ C[a, b] : u(a) = 1}. Then S is not a subspace of C[a, b] because the zero function does not
belong to S.
16. Let S = {u ∈ C[a, b] : ∫ₐᵇ u(x) dx = 0}. We will show that S is a subspace of C[a, b]. First, since the
integral of the zero function is zero, we see that the zero function belongs to S. Next, suppose u ∈ S
and α ∈ R. Then ∫ₐᵇ (αu)(x) dx = ∫ₐᵇ αu(x) dx = α ∫ₐᵇ u(x) dx = α · 0 = 0, and therefore αu ∈ S. Finally,
suppose u, v ∈ S. Then ∫ₐᵇ (u + v)(x) dx = ∫ₐᵇ (u(x) + v(x)) dx = ∫ₐᵇ u(x) dx + ∫ₐᵇ v(x) dx = 0 + 0 = 0. This
shows that u + v ∈ S, and we have proved that S is a subspace of C[a, b].
17. Let V be the vector space of all (infinite) sequences of real numbers.
(a) Define Z = {{xn} ∈ V : limn→∞ xn = 0}. Clearly the zero sequence converges to zero, and hence
belongs to Z. If {xn } ∈ Z and α ∈ R, then limn→∞ αxn = α limn→∞ xn = α · 0 = 0, which implies
that α{xn } = {αxn } belongs to Z, and therefore Z is closed under scalar multiplication. Now
suppose {xn }, {yn } both belong to Z. Then lim n→∞ (xn +yn ) = limn→∞ xn +limn→∞ yn = 0+0 = 0.
Therefore {xn } + {yn } = {xn + yn } belongs to Z, Z is closed under addition, and we have shown
that Z is a subspace of V .
(b) Define S = {{xn} ∈ V : Σ_{n=1}^∞ xn converges}. From calculus, we know that if Σ_{n=1}^∞ xn converges, then
so does Σ_{n=1}^∞ αxn = α Σ_{n=1}^∞ xn for any α ∈ R. Similarly, if Σ_{n=1}^∞ xn and Σ_{n=1}^∞ yn converge, then so
does Σ_{n=1}^∞ (xn + yn) = Σ_{n=1}^∞ xn + Σ_{n=1}^∞ yn. Using these facts, it is straightforward to show that S
is closed under addition and scalar multiplication. Obviously the zero sequence belongs to S.
(c) Define L = {{xn} ∈ V : Σ_{n=1}^∞ xn² < ∞}. Here it is obvious that the zero sequence belongs to L
and that L is closed under scalar multiplication. To prove that L is closed under addition, notice
that, for any x, y ∈ R, (x − y)² ≥ 0 and (x + y)² ≥ 0 together imply that |xy| ≤ (x² + y²)/2. It
follows that (xn + yn)² = xn² + 2xn yn + yn² ≤ 2(xn² + yn²). Therefore, if Σ_{n=1}^∞ xn² and Σ_{n=1}^∞ yn²
both converge, then so does Σ_{n=1}^∞ (xn + yn)², with Σ_{n=1}^∞ (xn + yn)² ≤ 2 Σ_{n=1}^∞ xn² + 2 Σ_{n=1}^∞ yn². From
this it follows that {xn} + {yn} ∈ L, and the proof is complete.
18. Let V be a vector space over a field F , and let X and Y be subspaces of V .
(a) We will show that X ∩ Y is also a subspace of V . First of all, since 0 ∈ X and 0 ∈ Y , it follows
that 0 ∈ X ∩ Y . Next, suppose x ∈ X ∩ Y and α ∈ F . Then, by definition of intersection, x ∈ X
and x ∈ Y . Since X and Y are subspaces, both are closed under scalar multiplication and therefore
αx ∈ X and αx ∈ Y, from which it follows that αx ∈ X ∩ Y. Thus X ∩ Y is closed under scalar
multiplication. Finally, suppose x, y ∈ X ∩ Y . Then x, y ∈ X and x, y ∈ Y . Since X and Y are
closed under addition, we have x + y ∈ X and x + y ∈ Y , from which we see that x + y ∈ X ∩ Y .
Therefore, X ∩ Y is closed under addition, and we have proved that X ∩ Y is a subspace of V .
(b) It is not necessarily the case that X ∪ Y is a subspace of V. For instance, let V = R², and define
X = {x ∈ R² : x2 = 0}, Y = {x ∈ R² : x1 = 0}. Then X ∪ Y is not closed under addition, and
hence is not a subspace of R²: we have (1, 0) ∈ X ⊂ X ∪ Y and (0, 1) ∈ Y ⊂ X ∪ Y; however,
(1, 0) + (0, 1) = (1, 1) ∉ X ∪ Y.
19. Let V be a vector space over a field F , and let S be a nonempty subset of V . Define T to be the
intersection of all subspaces of V that contain S.
(a) We wish to show that T is a subspace of V . First, 0 belongs to every subspace of V that contains S,
and therefore 0 belongs to the intersection T . Next, suppose x ∈ T and α ∈ F . Then x belongs to
every subspace of V containing S. Since each of these subspaces is closed under scalar multiplication,
it follows that αx also belongs to each subspace, and therefore αx ∈ T . Therefore, T is closed under
scalar multiplication. Finally, suppose x, y ∈ T . Then both x and y belong to every subspace of V
containing S. Since each subspace is closed under addition, it follows that x + y belongs to every
subspace of V containing S. Therefore x + y ∈ T , T is closed under addition, and we have shown
that T is a subspace.
(b) Now suppose U is any subspace of V containing S. Then U is one of the sets whose intersection
defines T , and therefore every element of T belongs to U by definition of intersection. It follows that
T ⊂ U . This means that T is the smallest subspace of V containing S.
20. Let V be a vector space over a field F , and let S, T be subspaces of V . Define S +T = {s+t : s ∈ S, t ∈ T }.
We wish to show that S + T is a subspace of V . First of all, 0 ∈ S and 0 ∈ T because S and T are
subspaces. Therefore, 0 = 0+0 ∈ S +T . Next, suppose x ∈ S +T and α ∈ F . Then, by definition of S +T ,
there exist s ∈ S, t ∈ T such that x = s + t. Since S and T are subspaces, they are closed under scalar
multiplication, and therefore αs ∈ S and αt ∈ T . It follows that αx = α(s + t) = αs + αt ∈ S + T . Thus
S + T is closed under scalar multiplication. Finally, suppose x, y ∈ S + T . Then there exist s1 , s2 ∈ S,
t1 , t2 ∈ T such that x = s1 + t1 , y = s2 + t2 . Since S and T are closed under addition, we see that
s1 + s2 ∈ S, t1 + t2 ∈ T , and therefore x + y = (s1 + t1 ) + (s2 + t2 ) = (s1 + s2 ) + (t1 + t2 ) ∈ S + T . It
follows that S + T is closed under addition, and we have shown that S + T is a subspace of V.
α1 + α2 = 1,
α1 e^(1/2) + α2 e^(−1/2) = 1,
α1 e + α2 e^(−1) = 1.

A direct calculation shows that this system is inconsistent. Therefore no solution α1, α2 exists, and
f ∉ S.
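The claimed inconsistency can be confirmed numerically: solve the first two equations for α1, α2 and check that the third equation then fails (a sketch):

```python
import math

e = math.e
# First two equations:
#   α1 + α2 = 1
#   α1·e^(1/2) + α2·e^(-1/2) = 1
a, b = math.sqrt(e), 1 / math.sqrt(e)
alpha1 = (1 - b) / (a - b)       # unique solution of the 2x2 subsystem
alpha2 = 1 - alpha1

assert math.isclose(alpha1 * a + alpha2 * b, 1.0)
# The third equation is violated, so the 3x2 system is inconsistent:
assert not math.isclose(alpha1 * e + alpha2 / e, 1.0)
```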
3. Let S = sp{1 + 2x + 3x², x − x²} ⊂ P2.
(a) There is a (unique) solution α1 = 2, α2 = 1 to α1(1 + 2x + 3x²) + α2(x − x²) = 2 + 5x + 5x².
Therefore, 2 + 5x + 5x² ∈ S.
(b) There is no solution α1, α2 to α1(1 + 2x + 3x²) + α2(x − x²) = 1 − x + x². Therefore, 1 − x + x² ∉ S.
4. Let u1 = (1 + i, i, 2), u2 = (1, 2i, 2 − i), and define S = sp{u1 , u2 } ⊂ C3 . The vector v = (2 + 3i, −2 +
2i, 5 + 2i) belongs to S because 2u1 + iu2 = v.
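This membership claim is a one-line check with Python's built-in complex numbers (a sketch):

```python
u1 = (1 + 1j, 1j, 2)
u2 = (1, 2j, 2 - 1j)
v = (2 + 3j, -2 + 2j, 5 + 2j)

# Form the linear combination 2*u1 + i*u2 componentwise.
combo = tuple(2 * a + 1j * b for a, b in zip(u1, u2))
assert combo == v          # hence v ∈ sp{u1, u2}
```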
5. Let S = sp{(1, 2, 0, 1), (2, 0, 1, 2)} ⊂ Z₃⁴.
(a) The vector (1, 1, 1, 1) belongs to S because 2(1, 2, 0, 1) + (2, 0, 1, 2) = (1, 1, 1, 1).
(b) The vector (1, 0, 1, 1) does not belong to S because α1 (1, 2, 0, 1) + α2 (2, 0, 1, 2) = (1, 0, 1, 1) has no
solution.
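Both parts can be verified by brute force over the nine scalar pairs (α1, α2) ∈ Z3 × Z3 (a sketch; `combo` is our own helper name):

```python
from itertools import product

p = 3
u1 = (1, 2, 0, 1)
u2 = (2, 0, 1, 2)

def combo(a1, a2):
    """Compute a1*u1 + a2*u2 componentwise mod p."""
    return tuple((a1 * x + a2 * y) % p for x, y in zip(u1, u2))

span = {combo(a1, a2) for a1, a2 in product(range(p), repeat=2)}

assert combo(2, 1) == (1, 1, 1, 1)      # part (a): 2*u1 + u2 = (1,1,1,1)
assert (1, 1, 1, 1) in span
assert (1, 0, 1, 1) not in span         # part (b): no combination works
```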
6. Let S = sp{1 + x, x + x2 , 2 + x + x2 } ⊂ P3 (Z3 ).
(a) If p(x) = 1 + x + x², then 0(1 + x) + 2(x + x²) + 2(2 + x + x²) = p(x), and therefore p ∈ S.
(b) Let q(x) = x³. Recalling that P3(Z3) is a space of polynomial functions, we notice that q(0) = 0,
q(1) = 1, q(2) = 2, which means that q(x) = x for all x ∈ Z3. We have 1(1 + x) + 2(x + x²) + 1(2 + x
+ x²) = x = q(x), and therefore q ∈ S.
7. Let u = (1, 1, −1), v = (1, 0, 2) be vectors in R³. We wish to show that S = sp{u, v} is a plane in R³.
First note that if S = {x ∈ R³ : ax1 + bx2 + cx3 = 0}, then (taking x = u, x = v) we see that a, b, c
must satisfy a + b − c = 0, a + 2c = 0. One solution is a = 2, b = −3, c = −1. We will now prove
that S = {x ∈ R³ : 2x1 − 3x2 − x3 = 0}. First, suppose x ∈ S. Then there exist α, β ∈ R such that
x = αu + βv = α(1, 1, −1) + β(1, 0, 2) = (α + β, α, −α + 2β), and 2x1 − 3x2 − x3 = 2(α + β) − 3α − (−α + 2β) = 0.
2.4. LINEAR COMBINATIONS AND SPANNING SETS 17
16. Let V be a vector space over a field F , and suppose x, u1 , . . . , uk , v1 , . . . , vℓ are vectors in V . Assume
x ∈ sp{u1 , . . . , uk } and uj ∈ sp{v1 , . . . , vℓ } for j = 1, . . . , k. We wish to show that x ∈ sp{v1 , . . . , vℓ }.
Since uj ∈ sp{v1 , . . . , vℓ }, there exist scalars βj,1 , . . . , βj,ℓ such that uj = βj,1 v1 + · · · + βj,ℓ vℓ . This
is true for each uj , j = 1, . . . , k. Also, x ∈ sp{u1 , . . . , uk }, so there exist α1 , . . . , αk ∈ F such that
x = α1 u1 + · · · + αk uk . It follows that
x = α1 (β1,1 v1 + · · · + β1,ℓ vℓ ) + · · · + αk (βk,1 v1 + · · · + βk,ℓ vℓ )
= (α1 β1,1 + · · · + αk βk,1 )v1 + · · · + (α1 β1,ℓ + · · · + αk βk,ℓ )vℓ ∈ sp{v1 , . . . , vℓ }.
3. Let V be a vector space over a field F , and let u1 , . . . , un ∈ V . Suppose ui = 0 for some i, 1 ≤ i ≤ n,
and define scalars α1 , . . . , αn ∈ F by αk = 0 if k ≠ i, αi = 1. Then α1 u1 + · · · + αn un = 0 · u1 + · · · + 0 ·
ui−1 + 1 · 0 + 0 · ui+1 + · · · + 0 · un = 0, and hence there is a nontrivial solution to α1 u1 + · · · + αn un = 0.
This shows that {u1 , . . . , un } is linearly dependent.
4. Let V be a vector space over a field F , let {u1 , . . . , uk } be a linearly independent subset of V , and
assume v ∈ V , v ∉ sp{u1 , . . . , uk }. We wish to show that {u1 , . . . , uk , v} is also linearly independent. We
argue by contradiction and assume that {u1 , . . . , uk , v} is linearly dependent. Then there exist scalars
2.5. LINEAR INDEPENDENCE
α1 , . . . , αk , β, not all zero, such that α1 u1 +· · ·+αk uk +βv = 0. We now consider two cases. First, if β = 0,
then not all of α1 , . . . , αk are zero, and we see that α1 u1 + · · · + αk uk = α1 u1 + · · · + αk uk + 0 · v = 0.
This contradicts the fact that {u1 , . . . , uk } is linearly independent. Second, if β ≠ 0, then we can
solve α1 u1 + · · · + αk uk + βv = 0 to obtain v = −β −1 α1 u1 − · · · − β −1 αk uk , which contradicts that
v ∉ sp{u1 , . . . , uk }. Thus, in either case, we obtain a contradiction, and the proof is complete.
5. We wish to determine whether each of the following sets is linearly independent or not.
(a) The set {(1, 2), (1, −1)} ⊂ R2 is linearly independent by Exercise 1, since neither vector is a multiple
of the other.
(b) The set {(−1, −1, 4), (−4, −4, 17), (1, 1, −3)} is linearly dependent. Solving α1 (−1, −1, 4) +
α2 (−4, −4, 17) + α3 (1, 1, −3) = (0, 0, 0) yields the nontrivial solution α1 = 5, α2 = −1, α3 = 1.
6. We wish to determine whether each of the following sets of polynomials is linearly independent or not.
(a) The set {1 − x2 , x + x2 , 3 + 3x − 4x2 } ⊂ P2 is linearly independent since the only solution to
α1 (1 − x2 ) + α2 (x + x2 ) + α3 (3 + 3x − 4x2 ) = 0 is α1 = α2 = α3 = 0.
(b) The set {1 + x2 , 4 + 3x2 + 3x3 , 3 − x + 10x3 , 1 + 7x2 − 18x3 } ⊂ P3 is linearly dependent. Solving
α1 (1 + x2 ) + α2 (4 + 3x2 + 3x3 ) + α3 (3 − x + 10x3 ) + α4 (1 + 7x2 − 18x3 ) = 0 yields a nontrivial
solution α1 = −25, α2 = 6, α3 = 0, α4 = 1.
7. The set {ex , e−x , cosh (x)} ⊂ C[0, 1] is linearly dependent since (1/2)ex + (1/2)e−x − cosh (x) = 0 for all
x ∈ [0, 1].
8. The subset {(0, 1, 2), (1, 2, 0), (2, 0, 1)} of Z33 is linearly dependent because 1 · (0, 1, 2) + 1 · (1, 2, 0) + 1 ·
(2, 0, 1) = (0, 0, 0).
10. The set {(i, 1, 2i), (1, 1+i, i), (1, 3+5i, −4+3i)} ⊂ C3 is linearly dependent, because α1 (i, 1, 2i)+ α2 (1, 1+
i, i) + α3 (1, 3 + 5i, −4 + 3i) = (0, 0, 0) has the nontrivial solution α1 = −2i, α2 = −3, α3 = 1.
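The stated relation in exercise 10 can be confirmed with Python's built-in complex arithmetic; this check is an informal supplement to the solution:

```python
# Verify -2i*u1 - 3*u2 + u3 = 0 in C^3 (exercise 10).
u1 = (1j, 1, 2j)
u2 = (1, 1 + 1j, 1j)
u3 = (1, 3 + 5j, -4 + 3j)
residual = tuple(-2j * a - 3 * b + c for a, b, c in zip(u1, u2, u3))
ok = all(z == 0 for z in residual)
print(ok)  # True
```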
11. We have already seen that {(3, 2, 2, 3), (3, 2, 1, 2), (3, 2, 0, 1)} ⊂ R4 is linearly dependent, because (3, 2, 2, 3)−
2(3, 2, 1, 2) + (3, 2, 0, 1) = (0, 0, 0, 0).
(a) We can solve this equation for any one of the vectors in terms of the other two; for instance,
(3, 2, 2, 3) = 2(3, 2, 1, 2) − (3, 2, 0, 1).
(b) We can show that (−3, −2, 2, 1) ∈ sp{(3, 2, 2, 3), (3, 2, 1, 2), (3, 2, 0, 1)} by solving α1 (3, 2, 2, 3) +
α2 (3, 2, 1, 2) + α3 (3, 2, 0, 1) = (−3, −2, 2, 1). One solution is (−3, −2, 2, 1) = 3(3, 2, 2, 3) − 4(3, 2, 1, 2).
Substituting (3, 2, 2, 3) = 2(3, 2, 1, 2) − (3, 2, 0, 1), we obtain another solution: (−3, −2, 2, 1) =
2(3, 2, 1, 2) − 3(3, 2, 0, 1).
2.6. BASIS AND DIMENSION
20 CHAPTER 2. FIELDS AND VECTOR SPACES
20
12. We wish to show that {(−1, 1, 3), (1, −1, −2), (−3, 3, 13)} ⊂ R3 is linearly dependent by writing one of
the vectors as a linear combination of the others. We will try to solve for the third vector in terms of the
other two. (There is an element of trial and error involved here: Even if the three vectors form a linearly
dependent set, there is no guarantee that this will work; it could be, for instance, that the first two
vectors form a linearly dependent set and the third vector does not lie in the span of the first two.) Solving
α1 (−1, 1, 3) + α2(1, −1, −2) = (−3, 3, 13) yields a unique solution: (−3, 3, 13) = 7(−1, 1, 3) + 4(1, −1, −2).
This shows that the set is linearly dependent.
Alternate solution: We begin by solving α1 (−1, 1, 3) + α2 (1, −1, −2) + α3 (−3, 3, 13) = (0, 0, 0) to obtain
7(−1, 1, 3)+4(1, −1, −2)−(−3, 3, 13) = (0, 0, 0). We can easily solve this for the third vector: (−3, 3, 13) =
7(−1, 1, 3) + 4(1, −1, −2).
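A one-line check of the dependence relation in exercise 12 (added here for convenience):

```python
# Confirm (-3, 3, 13) = 7*(-1, 1, 3) + 4*(1, -1, -2) from exercise 12.
v1, v2, target = (-1, 1, 3), (1, -1, -2), (-3, 3, 13)
ok = tuple(7 * a + 4 * b for a, b in zip(v1, v2)) == target
print(ok)  # True
```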
13. We wish to show that {p1 , p2 , p3 }, where p1 (x) = 1 − x2 , p2 (x) = 1 + x − 6x2 , p3 (x) = 3 − 2x2 ,
is linearly independent and spans P2 . We first verify that the set is linearly independent by solving
α1 (1 − x2 ) + α2 (1 + x − 6x2 ) + α3 (3 − 2x2 ) = 0. This equation is equivalent to the system α1 + α2 + 3α3 = 0,
α2 = 0, −α1 − 6α2 − 2α3 = 0, and a direct calculation shows that the only solution is α1 = α2 = α3 =
0. To show that the set spans P2 , we take an arbitrary p ∈ P2 , say p(x) = c0 + c1 x + c2 x2 , and
solve α1 (1 − x2 ) + α2 (1 + x − 6x2 ) + α3 (3 − 2x2 ) = c0 + c1 x + c2 x2 . This is equivalent to the system
α1 + α2 + 3α3 = c0 , α2 = c1 , −α1 − 6α2 − 2α3 = c2 . There is a unique solution: α1 = −2c0 − 16c1 − 3c2 ,
α2 = c1 , α3 = c0 + 5c1 + c2 . This shows that p ∈ sp{p1 , p2 , p3 }, and, since p was arbitrary, that {p1 , p2 , p3 }
spans all of P2 .
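The closed-form solution of exercise 13 can be sanity-checked on a grid of coefficient triples; this script is our supplement, with each polynomial combination reduced to its coefficient vector:

```python
# Check exercise 13's closed-form solution on a grid of (c0, c1, c2).
def coeffs(a1, a2, a3):
    # Coefficients of a1*(1 - x^2) + a2*(1 + x - 6x^2) + a3*(3 - 2x^2).
    return (a1 + a2 + 3 * a3, a2, -a1 - 6 * a2 - 2 * a3)

ok = all(
    coeffs(-2 * c0 - 16 * c1 - 3 * c2, c1, c0 + 5 * c1 + c2) == (c0, c1, c2)
    for c0 in range(-2, 3) for c1 in range(-2, 3) for c2 in range(-2, 3)
)
print(ok)  # True
```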
14. Let V be a vector space over a field F and let {u1 , . . . , uk } be a linearly independent subset of V . Suppose
u, v ∈ V , {u, v} is linearly independent, and u, v ∉ sp{u1 , . . . , uk }. We wish to determine whether
{u1 , . . . , uk , u, v} is necessarily linearly independent. In fact, this set need not be linearly independent.
For example, take V = R4 , F = R, k = 3, and u1 = (1, 0, 0, 0), u2 = (0, 1, 0, 0), u3 = (0, 0, 1, 0). With
u = (0, 0, 0, 1), v = (1, 1, 1, 1), we see immediately that {u, v} is linearly independent (neither vector is a
multiple of the other), and that neither u nor v belongs to sp{u1 , u2 , u3 }. Nevertheless, {u1 , u2 , u3 , u, v}
is linearly dependent because v = u1 + u2 + u3 + u.
15. Let V be a vector space over a field F , and suppose S and T are subspaces of V satisfying S ∩ T =
{0}. Suppose {s1 , . . . , sk } ⊂ S and {t1 , . . . , tℓ } ⊂ T are both linearly independent sets. We wish to
prove that {s1 , . . . , sk , t1 , . . . , tℓ } is linearly independent. Suppose scalars α1 , . . . , αk , β1 , . . . , βℓ satisfy
α1 s1 + · · · + αk sk + β1 t1 + · · · + βℓ tℓ = 0. We can rearrange this equation to read α1 s1 + · · · + αk sk =
−β1 t1 − · · · − βℓ tℓ . If v is the vector represented by these two expressions, then v ∈ S (since v is a
linear combination of s1 , . . . , sk ) and v ∈ T (since v is a linear combination of t1 , . . . , tℓ ). But the only
vector in S ∩ T is the zero vector, and hence α1 s1 + · · · + αk sk = 0, −β1 t1 − · · · − βℓ tℓ = 0. The
first equation implies that α1 = · · · = αk = 0 (since {s1 , . . . , sk } is linearly independent), while the
second equation implies that β1 = · · · = βℓ = 0 (since {t1 , . . . , tℓ } is linearly independent). Therefore,
α1 s1 +· · ·+αk sk +β1 t1 +· · ·+βℓ tℓ = 0 implies that all the scalars are zero, and hence {s1 , . . . , sk , t1 , . . . , tℓ }
16. Let V be a vector space over a field F , and let {u1 , . . . , uk } and {v1 , . . . , vℓ } be two linearly independent
subsets of V . We wish to find a condition that implies that {u1 , . . . , uk , v1 , . . . , vℓ } is linearly independent.
By the previous exercise, a sufficient condition for {u1 , . . . , uk , v1 , . . . , vℓ } to be linearly independent is
that S = sp{u1 , . . . , uk }, T = sp{v1 , . . . , vℓ } satisfy S ∩ T = {0}. We will prove that this condition is
also necessary. Suppose {u1 , . . . , uk } and {v1 , . . . , vℓ } are linearly independent subsets of V , and that
{u1 , . . . , uk , v1 , . . . , vℓ } is also linearly independent. Define S and T as above. If x ∈ S ∩ T , then there
exist scalars α1 , . . . , αk ∈ F such that x = α1 u1 + · · · + αk uk (since x ∈ S), and also scalars β1 , . . . , βℓ ∈ F
such that x = β1 v1 + · · · + βℓ vℓ (since x ∈ T ). But then α1 u1 + · · · + αk uk = β1 v1 + · · · + βℓ vℓ , which
implies that α1 u1 + · · ·+ αk uk − β1 v1 − · · ·− βℓ vℓ = 0. Since {u1 , . . . , uk , v1 , . . . , vℓ } is linearly independent
by assumption, this implies that α1 = · · · = αk = β1 = · · · = βℓ = 0, which in turn shows that x = 0.
Therefore S ∩ T = {0}, and the proof is complete.
17. (a) Let V be a vector space over R, and suppose {x, y, z} is a linearly independent subset of V . We wish
to show that {x + y, y + z, x + z} is also linearly independent. Let α1 , α2 , α3 ∈ R satisfy α1 (x + y) +
α2 (y + z) + α3 (x + z) = 0. Rearranging, we obtain (α1 + α3 )x + (α1 + α2 )y + (α2 + α3 )z = 0, and the
linear independence of {x, y, z} implies that α1 + α3 = 0, α1 + α2 = 0, α2 + α3 = 0. The only solution
of this system is α1 = α2 = α3 = 0, and hence {x + y, y + z, x + z} is linearly independent.
18. Let U and V be vector spaces over a field F , and define W = U × V . Suppose {u1 , . . . , uk } ⊂ U and
{v1 , . . . , vℓ } ⊂ V are linearly independent. We wish to show that {(u1 , 0), . . . , (uk , 0), (0, v1 ), . . . , (0, vℓ )}
is also linearly independent. Suppose α1 , . . . , αk , β1 , . . . , βℓ ∈ F satisfy α1 (u1 , 0) + · · · + αk (uk , 0) +
β1 (0, v1 ) + · · · + βℓ (0, vℓ ) = (0, 0). This reduces to (α1 u1 + · · · + αk uk , β1 v1 + · · · + βℓ vℓ ) = (0, 0),
which holds if and only if α1 u1 + · · · + αk uk = 0 and β1 v1 + · · · + βℓ vℓ = 0. Since {u1 , . . . , uk } is
linearly independent, the first equation implies that α1 = · · · = αk = 0, and, since {v1 , . . . , vℓ } is linearly
independent, the second implies that β1 = · · · = βℓ = 0. Since all the scalars are necessarily zero, we see
that {(u1 , 0), . . . , (uk , 0), (0, v1 ), . . . , (0, vℓ )} is linearly independent.
19. Let V be a vector space over a field F , and let u1 , u2 , . . . , un be vectors in V . Suppose a nonempty subset
of {u1 , u2 , . . . , un }, say {ui1 , . . . , uik }, is linearly dependent. (Here 1 ≤ k < n and i1 , . . . , ik are distinct
integers each satisfying 1 ≤ ij ≤ n.) We wish to prove that {u1 , u2 , . . . , un } itself is linearly dependent. By
assumption, there exist scalars αi1 , . . . , αik ∈ F , not all zero, such that αi1 ui1 + · · · + αik uik = 0. For each i
∈ {1, . . . , n} \ {i1 , . . . , ik }, define αi = 0. Then we have α1 u1 + · · · + αn un = 0 + αi1 ui1 +· · · + αik uik = 0,
and not all of α1 , . . . , αn are zero since at least one αij is nonzero. This shows that {u1 , . . . , un } is linearly
dependent.
20. Let V be a vector space over a field F , and suppose {u1 , u2 , . . . , un } is a linearly independent subset of
V . We wish to prove that every nonempty subset of {u1 , u2 , . . . , un } is also linearly independent. The
result to be proved is simply the contrapositive of the statement in the previous exercise, and therefore
holds by the previous proof.
21. Let V be a vector space over a field F , and suppose {u1 , u2 , . . . , un } is linearly dependent. We wish to
prove that, given any i, 1 ≤ i ≤ n, either ui is a linear combination of u1 , . . . , ui−1 , ui+1 , . . . , un or these
vectors form a linearly dependent set. By assumption, there exist scalars α1 , . . . , αn ∈ F , not all zero, such
that α1 u1 + · · · + αi ui + · · · + αn un = 0. We now consider two cases. If αi ≠ 0, then we can solve the latter
equation for ui to obtain ui = −αi^−1 (α1 u1 + · · · + αi−1 ui−1 + αi+1 ui+1 + · · · + αn un ). In this case,
ui is a linear combination of the remaining vectors. The second case is that αi = 0, in which case at least
one of α1 , . . . , αi−1 , αi+1 , . . . , αn is nonzero, and we have α1 u1 +· · ·+αi−1 ui−1 +αi+1 ui+1 +· · ·+αn un = 0.
This shows that {u1 , . . . , ui−1 , ui+1 , . . . , un } is linearly dependent.
(b) Now we wish to show that if any vector u ∈ V , u ∉ {v1 , v2 , . . . , vn }, is added to the basis, the
resulting set of n + 1 vectors is linearly dependent. This is immediate from Theorem 34: Since
the dimension of V is n, every set containing more than n vectors is linearly dependent. Since
{v1 , v2 , . . . , vn , u} contains n + 1 vectors, it must be linearly dependent.
2. Consider the following vectors in R3 : v1 = (−1, 4, −2), v2 = (5, −20, 9), v3 = (2, −7, 6). We wish to
determine if {v1 , v2 , v3 } is a basis for R3 . If we solve α1 v1 + α2 v2 + α3 v3 = x for an arbitrary x ∈ R3 , we
find a unique solution: α1 = 57x1 + 12x2 − 5x3 , α2 = 10x1 + 2x2 − x3 , α3 = 4x1 + x2 . By Theorem 28,
this implies that {v1 , v2 , v3 } is a basis for R3 .
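The formulas in exercise 2 can be spot-checked numerically; the test vectors below are arbitrary choices of ours:

```python
# Spot check of exercise 2: for several x in R^3, the computed alphas
# reproduce x as a combination of v1, v2, v3.
v1, v2, v3 = (-1, 4, -2), (5, -20, 9), (2, -7, 6)

def recovers(x):
    a1 = 57 * x[0] + 12 * x[1] - 5 * x[2]
    a2 = 10 * x[0] + 2 * x[1] - x[2]
    a3 = 4 * x[0] + x[1]
    return all(a1 * v1[i] + a2 * v2[i] + a3 * v3[i] == x[i] for i in range(3))

ok = all(recovers(x) for x in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (3, -2, 7)])
print(ok)  # True
```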
3. We now repeat the previous exercise for the vectors v1 = (−1, 3, −1), v2 = (1, −2, −2), v3 = (−1, 7, −13).
If we try to solve α1 v1 + α2 v2 + α3 v3 = x for an arbitrary x ∈ R3 , we find that this equation is equivalent
to the following system:
−α1 + α2 − α3 = x1
α2 + 4α3 = 3x1 + x2
0 = 8x1 + 3x2 + x3 .
Since this system is inconsistent for most x ∈ R3 (the system is consistent only if x happens to satisfy
8x1 + 3x2 + x3 = 0), {v1 , v2 , v3 } does not span R3 and therefore is not a basis.
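The conclusion of exercise 3 can be illustrated by verifying that each vi lies in the plane 8x1 + 3x2 + x3 = 0, so the span does too (an informal check, not part of the original argument):

```python
# Each v_i satisfies 8x1 + 3x2 + x3 = 0, so sp{v1, v2, v3} lies in this
# plane and cannot equal R^3.
vs = [(-1, 3, -1), (1, -2, -2), (-1, 7, -13)]
ok = all(8 * x1 + 3 * x2 + x3 == 0 for x1, x2, x3 in vs)
print(ok)  # True
```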
4. Let S = sp{ex , e−x } be regarded as a subspace of C(R). We will show that {ex , e−x }, {cosh (x), sinh (x)}
are two different bases for S. First, to verify that {ex , e−x } is a basis, we merely need to verify that it is
linearly independent (since it spans S by definition). This can be done as follows: If c1 ex + c2 e−x = 0,
where 0 represents the zero function, then the equation must hold for all values of x ∈ R. So choose
x = 0 and x = ln 2; then c1 and c2 must satisfy
c1 + c2 = 0,
2c1 + (1/2)c2 = 0.
It is straightforward to show that the only solution of this system is c1 = c2 = 0, and hence {ex , e−x } is
linearly independent.
Next, since
cosh (x) = (1/2)ex + (1/2)e−x , sinh (x) = (1/2)ex − (1/2)e−x ,
we see that cosh (x), sinh (x) ∈ S = sp{ex , e−x }. We can verify that {cosh (x), sinh (x)} is linearly
independent directly: If c1 cosh (x) + c2 sinh (x) = 0, then, substituting x = 0 and x = ln 2, we obtain the
system
1 · c1 + 0 · c2 = 0, (5/4)c1 + (3/4)c2 = 0,
and the only solution is c1 = c2 = 0. Thus {cosh (x), sinh (x)} is linearly independent. Finally, let f be
any function in S. Then, by definition, f can be written as f (x) = α1 ex + α2 e−x for some α1 , α2 ∈ R.
We must show that f ∈ sp{cosh (x), sinh (x)}, that is, that there exist c1 , c2 ∈ R such that
c1 cosh (x) + c2 sinh (x) = α1 ex + α2 e−x for all x ∈ R, that is,
((1/2)c1 + (1/2)c2 )ex + ((1/2)c1 − (1/2)c2 )e−x = α1 ex + α2 e−x .
2.6. BASIS AND DIMENSION
23 CHAPTER 2. FIELDS AND VECTOR SPACES
23
Since {ex , e−x } is linearly independent, Theorem 26 implies that the last equation can hold only if
(1/2)c1 + (1/2)c2 = α1 , (1/2)c1 − (1/2)c2 = α2 .
This last system has a unique solution:
c1 = α1 + α2 , c2 = α1 − α2 .
This shows that f ∈ sp{cosh (x), sinh (x)}, and the proof is complete.
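The change-of-basis formulas c1 = α1 + α2, c2 = α1 − α2 can be spot-checked numerically; the sample coefficients below are arbitrary choices of ours:

```python
import math

# Exercise 4, checked numerically: with c1 = a1 + a2 and c2 = a1 - a2,
# a1*e^x + a2*e^(-x) agrees with c1*cosh(x) + c2*sinh(x).
a1, a2 = 2.0, -5.0
c1, c2 = a1 + a2, a1 - a2
max_err = max(
    abs((a1 * math.exp(x) + a2 * math.exp(-x))
        - (c1 * math.cosh(x) + c2 * math.sinh(x)))
    for x in (-1.0, 0.0, 0.5, 2.0)
)
print(max_err < 1e-12)  # True
```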
5. Let p1 (x) = 1 − 4x + 4x2, p2 (x) = x + x2 , p3 (x) = −2 + 11x − 6x2. We will determine whether {p1 , p2 , p3 }
is a basis for P2 or not by solving α1 p1 (x) + α2 p2 (x) + α3 p3 (x) = c0 + c1 x + c2 x2 , where c0 + c1 x + c2 x2
is an arbitrary element of P2 . A direct calculation shows that there is a unique solution for α1 , α2 , α3 ,
and therefore, by Theorem 28, {p1 , p2 , p3 } is a basis for P2 .
6. For the polynomials p1 , p2 , p3 of this exercise, the equation α1 p1 (x) + α2 p2 (x) + α3 p3 (x) = c0 + c1 x + c2 x2
is equivalent to the system
α1 + 2α2 = c0 ,
α2 + α3 = c1 ,
−α1 + 2α3 = c2 ,
which reduces to
α1 + 2α2 = c0 ,
α 2 + α 3 = c1 ,
0 = c0 − 2c1 + c2 ,
which is inconsistent for most values of c0 , c1 , c2 . Therefore, {p1 , p2 , p3 } does not span P2 and hence is
not a basis for P2 .
7. Consider the subspace S = sp{p1 , p2 , p3 , p4 , p5 } of P3 , where
(a) The set {p1 , p2 , p3 , p4 , p5 } is linearly dependent (by Theorem 34) because it contains five elements
and the dimension of P3 is only four.
(b) As illustrated in Example 39, we begin by solving α1 p1 (x) + α2 p2 (x) + α3 p3 (x) + α4 p4 (x) + α5 p5 (x) = 0,
which reduces to
α1 = 16α4 − 16α5 ,
α2 = 9α4 − 9α5 ,
α3 = 0.
Since there are nontrivial solutions, {p1 , p2 , p3 , p4 , p5 } is linearly dependent (which we already knew),
but we can deduce more than that. By taking α4 = 1, α5 = 0, we see that α1 = 16, α2 = 9, α3 = 0,
α4 = 1, α5 = 0 is one solution, which means that
16p1 (x) + 9p2 (x) + p4 (x) = 0 ⇒ p4 (x) = −16p1 (x) − 9p2 (x).
Similarly, taking α4 = 0, α5 = 1 yields α1 = −16, α2 = −9, α3 = 0, and hence
−16p1 (x) − 9p2 (x) + p5 (x) = 0 ⇒ p5 (x) = 16p1 (x) + 9p2 (x),
so p4 and p5 both belong to sp{p1 , p2 , p3 } and S = sp{p1 , p2 , p3 }.
8. We wish to find a basis for sp{(1, 2, 1), (0, 1, 1), (1, 1, 0)} ⊂ R3 . We will name the vectors v1 , v2 , v3 ,
respectively, and begin by testing the linear independence of {v1 , v2 , v3 }. The equation α1 v1 + α2 v2 +
α3 v3 = 0 is equivalent to
α1 + α3 = 0,
2α1 + α2 + α3 = 0,
α1 + α2 = 0,
which reduces to
α1 = −α3 , α2 = α3 .
One solution is α1 = −1, α2 = 1, α3 = 1, which shows that −v1 + v2 + v3 = 0, or v3 = v1 − v2 . This
in turn shows that sp{v1 , v2 , v3 } = sp{v1 , v2 } (by Lemma 19). Clearly {v1 , v2 } is linearly independent
(since neither vector is a multiple of the other), and hence {v1 , v2 } is a basis for sp{v1 , v2 , v3 }.
9. We wish to find a basis for S = sp{(1, 2, 1, 2, 1), (1, 1, 2, 2, 1), (0, 1, 2, 0, 2)} in Z35 . The equation
α1 (1, 2, 1, 2, 1) + α2 (1, 1, 2, 2, 1) + α3 (0, 1, 2, 0, 2) = (0, 0, 0, 0, 0) is equivalent to the system
α1 + α2 = 0,
2α1 + α2 + α3 = 0,
α1 + 2α2 + 2α3 = 0,
2α1 + 2α2 = 0,
α1 + α2 + 2α3 = 0.
The only solution of this system is α1 = α2 = α3 = 0,
which shows that the given vectors form a linearly independent set and therefore a basis for S.
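Since the field is Z3, the independence claim in exercise 9 can be confirmed exhaustively; this brute-force check is our addition:

```python
from itertools import product

# Exhaustive check over Z_3 that only the trivial combination of the
# three vectors in Z_3^5 from exercise 9 gives the zero vector.
vs = [(1, 2, 1, 2, 1), (1, 1, 2, 2, 1), (0, 1, 2, 0, 2)]
solutions = [
    (a1, a2, a3)
    for a1, a2, a3 in product(range(3), repeat=3)
    if all((a1 * vs[0][i] + a2 * vs[1][i] + a3 * vs[2][i]) % 3 == 0
           for i in range(5))
]
print(solutions)  # [(0, 0, 0)]
```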
2.6. BASIS AND DIMENSION
25 CHAPTER 2. FIELDS AND VECTOR SPACES
25
10. We will show that {1 + x + x2 , 1 − x + x2 , 1 + x + 2x2 } is a basis for P2 (Z3 ) by showing that there is a
unique solution to
α1 (1 + x + x2 ) + α2 (1 − x + x2 ) + α3 (1 + x + 2x2 ) = c0 + c1 x + c2 x2 .
We first note that 1 − x + x2 = 1 + 2x + x2 in P2 (Z3 ), so we can write our equation as
α1 (1 + x + x2 ) + α2 (1 + 2x + x2 ) + α3 (1 + x + 2x2 ) = c0 + c1 x + c2 x2 .
We rearrange the previous equation in the form
(α1 + α2 + α3 ) + (α1 + 2α2 + α3 )x + (α1 + α2 + 2α3 )x2 = c0 + c1 x + c2 x2 .
Since the polynomials involved are of degree 2 and the field Z3 contains 3 elements, this last equation
is equivalent to the system
α 1 + α 2 + α 3 = c0 ,
α1 + 2α2 + α3 = c1 ,
α1 + α2 + 2α3 = c2
(cf. the discussion on page 45 of the text). Applying Gaussian elimination (modulo 3) shows that there
is a unique solution:
α1 = 2c1 + 2c2 ,
α2 = 2c0 + c1 ,
α3 = 2c0 + c2 .
This in turn proves (by Theorem 28) that the given polynomials form a basis for P2 (Z3 ).
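The basis property in exercise 10 can also be confirmed by brute force over Z3: the 27 coefficient triples must map onto all 27 polynomials. This check is a supplement to the Gaussian-elimination argument:

```python
from itertools import product

# Brute-force confirmation for exercise 10: over Z_3, the 27 coefficient
# triples (a1, a2, a3) produce all 27 polynomials c0 + c1 x + c2 x^2.
def to_coeffs(a1, a2, a3):
    # a1*(1 + x + x^2) + a2*(1 + 2x + x^2) + a3*(1 + x + 2x^2) in Z_3,
    # using 1 - x + x^2 = 1 + 2x + x^2.
    return ((a1 + a2 + a3) % 3, (a1 + 2 * a2 + a3) % 3, (a1 + a2 + 2 * a3) % 3)

images = {to_coeffs(*t) for t in product(range(3), repeat=3)}
n = len(images)
print(n)  # 27, so the map is a bijection and the set is a basis
```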
11. Suppose F is a finite field with q distinct elements.
(a) Assume n ≤ q − 1. We wish to show that {1, x, x2 , . . . , xn } is a linearly independent subset of Pn (F ).
(Since {1, x, x2 , . . . , xn } clearly spans Pn (F ), this will show that it is a basis for Pn (F ), and hence
that dim(Pn (F )) = n + 1 in the case that n ≤ q − 1.) The desired conclusion follows from the
discussion on page 45 of the text. If c0 · 1 + c1 x + · · · + cn xn = 0 (where 0 is the zero function), then
every element of F is a root of c0 · 1 + c1 x + · · · + cn xn . Since F contains more than n elements
and a nonzero polynomial of degree n can have at most n distinct roots, this is impossible unless
c0 = c1 = . . . = cn = 0. Thus {1, x, . . . , xn } is linearly independent.
(b) Now suppose that n ≥ q. The reasoning above shows that {1, x, x2 , . . . , xq−1 } is linearly independent
in Pn (F ) (c0 · 1 + c1 x + · · · + cq−1 xq−1 has at most q − 1 distinct roots, while F contains q > q − 1
elements).
14. (Note: This exercise belongs in Section 2.7 since the most natural solution uses Theorem
43.) Let V be a vector space over a field F , and let S and T be finite-dimensional subspaces of V . We
wish to prove that
dim(S + T ) = dim(S) + dim(T ) − dim(S ∩ T ).
We know from Exercise 2.3.19 that S ∩ T is a subspace of V , and since it is a subset of S, dim(S ∩ T ) ≤
dim(S). Since S is finite-dimensional by assumption, it follows that S ∩ T is also finite-dimensional, and
therefore either S ∩ T = {0} or S ∩ T has a basis.
Suppose first that S ∩ T = {0}, so that dim(S ∩ T ) = 0. Let {s1 , s2 , . . . , sm } be a basis for S and
{t1 , t2 , . . . , tn } be a basis for T . We will show that {s1 , . . . , sm , t1 , . . . , tn } is a basis for S + T , from which
it follows that dim(S + T ) = m + n = dim(S) + dim(T ) = dim(S) + dim(T ) − dim(S ∩ T ).
The set {s1 , . . . , sm , t1 , . . . , tn } is linearly independent by Exercise 2.5.15. Given any v ∈ S + T , there
exist s ∈ S, t ∈ T such that v = s + t. But since s ∈ S, there exist scalars α1 , . . . , αm ∈ F such that
s = α1 s1 + · · · + αm sm . Similarly, since t ∈ T , there exist β1 , . . . , βn ∈ F such that t = β1 t1 + · · · + βn tn .
But then
v = s + t = α 1 s 1 + · · · + α m s m + β1 t 1 + · · · + βn tn ,
which shows that v ∈ sp{s1 , . . . , sm , t1 , . . . , tn }. Thus we have shown that {s1 , . . . , sm , t1 , . . . , tn } is a
basis for S + T , which completes the proof in the case that S ∩ T = {0}.
Now suppose S ∩ T is nontrivial, with basis {v1 , . . . , vk }. Since S ∩ T is a subset of S, {v1 , . . . , vk } is a lin-
early independent subset of S and hence, by Theorem 43, can be extended to a basis {v1 , . . . , vk , s1 , . . . , sp }
of S. Similarly, {v1 , . . . , vk } can be extended to a basis {v1 , . . . , vk , t1 , . . . , tq } of T . We will show that
{v1 , . . . , vk , s1 , . . . , sp , t1 , . . . , tq } is a basis of S + T . Then we will have
dim(S + T ) = k + p + q = (k + p) + (k + q) − k = dim(S) + dim(T ) − dim(S ∩ T ).
Given any v ∈ S + T , there exist s ∈ S, t ∈ T with v = s + t, and we can write
s = α1 v1 + · · · + αk vk + β1 s1 + · · · + βp sp ,
t = γ1 v1 + · · · + γk vk + δ1 t1 + · · · + δq tq .
But then
v = s + t = α 1 v 1 + · · · + αk v k + β 1 s 1 + · · · + β p s p +
γ1 v1 + · · · + γk vk + δ1 t1 + · · · + δq tq
= (α1 + γ1 )v1 + · · · + (αk + γk )vk + β1 s1 + · · · + βp sp +
δ 1 t1 + · · · + δ q t q
∈ sp{v1 , . . . , vk , s1 , . . . , sp , t1 , . . . , tq }.
This shows that the proposed basis spans S + T . To prove linear independence, suppose
α1 v1 + · · · + αk vk + β1 s1 + · · · + βp sp + γ1 t1 + · · · + γq tq = 0,
which we rearrange as
α1 v1 + · · · + αk vk + β1 s1 + · · · + βp sp = −γ1 t1 − · · · − γq tq .
The vector on the left belongs to S, while the vector on the right belongs to T ; hence both vectors (which
are really the same) belong to S ∩ T . But then −γ1 t1 − · · · − γq tq can be written in terms of the basis
{v1 , . . . , vk } of S ∩ T , say
−γ1 t1 − · · · − γq tq = δ1 v1 + · · · + δk vk .
But this gives two representations of the vector −γ1 t1 − · · · − γq tq ∈ T in terms of the basis
{v1 , . . . , vk , t1 , . . . , tq }.
Since each vector in T must be uniquely represented as a linear combination of the basis vectors, this is
possible only if γ1 = · · · = γq = δ1 = · · · = δk = 0. But then
α1 v1 + · · · + αk vk + β1 s1 + · · · + βp sp = 0,
and, since {v1 , . . . , vk , s1 , . . . , sp } is a basis for S and hence linearly independent, all of the remaining
scalars are zero as well. Therefore {v1 , . . . , vk , s1 , . . . , sp , t1 , . . . , tq } is a basis of S + T , and the proof
is complete.
15. Let S and T be subspaces of a finite-dimensional vector space V , and let Xi , Xj be any two of the
spaces S, T , S ∩ T , S + T . We wish to determine if dim(Xi ) ≤ dim(Xj ) or dim(Xi ) ≥ dim(Xj ) (or
neither) must hold. First of all,
since S and T are arbitrary subspaces, it is obvious that there need be no particular relationship between
the dimensions of S and T . However, S ⊂ S + T since each s ∈ S can be written as s = s + 0 ∈ S + T
(0 ∈ T because every subspace contains the zero vector). Therefore, by Exercise 13, dim(S) ≤ dim(S +T ).
By the same reasoning, T ⊂ S + T and hence dim(T ) ≤ dim(S + T ). Next, S ∩ T ⊂ S, S ∩ T ⊂ T , and
hence dim(S ∩ T ) ≤ dim(S), dim(S ∩ T ) ≤ dim(T ). Finally, we have S ∩ T ⊂ S ⊂ S + T , and hence
dim(S ∩ T ) ≤ dim(S + T ).
16. Let V be a vector space over a field F , and suppose S and T are subspaces of V satisfying S ∩ T = {0}.
Suppose {s1 , s2 , . . . , sk } ⊂ S and {t1 , t2 , . . . , tℓ } ⊂ T are bases for S and T , respectively. We wish to
prove that
{s1 , s2 , . . . , sk , t1 , t2 , . . . , tℓ }
is a basis for S + T . This was done in the course of proving the result in Exercise 14.
17. Let U and V be vector spaces over a field F , and let {u1 , . . . , un } and {v1 , . . . , vm } be bases for U and
V , respectively. We are asked to prove that
{(u1 , 0), . . . , (un , 0), (0, v1 ), . . . , (0, vm )}
is a basis for U × V . First, let (u, v) be an arbitrary vector in U × V . Then u ∈ U and there exist
α1 , . . . , αn ∈ F such that u = α1 u1 + · · · + αn un . Similarly, v ∈ V and there exist β1 , . . . , βm ∈ F such
that v = β1 v1 + · · · + βm vm . It follows that
(u, v) = α1 (u1 , 0) + · · · + αn (un , 0) + β1 (0, v1 ) + · · · + βm (0, vm ).
This shows that {(u1 , 0), . . . , (un , 0), (0, v1 ), . . . , (0, vm )} spans U × V . Next, suppose α1 , . . . , αn ∈ F ,
β1 , . . . , βm ∈ F satisfy
0, 1, 1 + 1, 1 + 1 + 1, . . . , 1 + 1 + · · · + 1
Therefore, φ also preserves multiplication, and we have shown that G and Zp are isomorphic as
fields.
(b) We now drop the subscripts and identify Zp with the subfield G of F . We wish to show that
F is a vector space over Zp . We already know that addition in F is commutative, associative, and
has an identity, and that each element of F has an additive inverse in F . The associative property
of scalar multiplication and the two distributive properties of scalar multiplication reduce to the
associative property of multiplication and the distributive property in F . Finally, 1 · u = u for all
u ∈ F since 1 is nothing more than the multiplicative identity in F . This verifies that F is a vector
space over Zp .
2.7. PROPERTIES OF BASES
(c) Since F has only a finite number of elements, it must be a finite-dimensional vector space over Zp .
Let the dimension be n, and let {f1 , . . . , fn } be a basis for F . Then every element of F can be
written uniquely in the form
α 1 f1 + α 2 f 2 + · · · + α n f n , (2.1)
where α1 , α2 , . . . , αn ∈ Zp . Conversely, for each choice of α1 , α2 , . . . , αn in Zp , (2.1) defines an
element of F . Therefore, the number of elements of F is precisely the number of different ways to
choose α1 , α2 , . . . , αn ∈ Zp , which is pn .
(b) To extend {u1 , u2 , u3 } to a basis for R5 , we need two more vectors. We will try u4 = (0, 0, 0, 1, 0)
and u5 = (0, 0, 0, 0, 1). We solve α1 u1 + α2 u2 + α3 u3 + α4 u4 + α5 u5 = 0 and find that the only
solution is the trivial one. This implies that {u1 , u2 , u3 , u4 , u5 } is linearly independent and hence,
by Theorem 45, a basis for R5 .
u4 = (1, −2, −1, 1, 1), u5 = (2, 10, 3, 6, 2).
The equation α1 u1 + α2 u2 + α3 u3 + α4 u4 + α5 u5 = x is equivalent to the system
α1 − α2 + α3 + α4 + 2α5 = x1 ,
2α1 + 3α2 + 7α3 − 2α4 + 10α5 = x2 ,
2α2 + 2α3 − α4 + 3α5 = x3 ,
α1 + α2 + 3α3 + α4 + 6α5 = x4 ,
α1 − α2 + α3 + α4 + 2α5 = x5 .
which reduces to
α1 = (3/2)x1 + x3 − (1/2)x4 − 2α3 − 3α5 ,
α2 = −(1/2)x1 + (1/2)x4 − α3 − 2α5 ,
α4 = −x1 − x3 + x4 − α5 ,
0 = −(7/2)x1 + x2 − 4x3 + (3/2)x4 ,
0 = −x1 + x5 .
Thus x ∈ S if and only if
−(7/2)x1 + x2 − 4x3 + (3/2)x4 = 0, −x1 + x5 = 0.
We also see that any x ∈ S can be represented as a linear combination of u1 , u2 , u4 by taking α3 = α5 = 0 in
the above equations. Finally, it can be verified directly that {u1 , u2 , u4 } is linearly independent and hence
a basis for S.
c1 = 0, c2 = −(5/3)c5 , c3 = (1/3)c5 , c4 = c5 .
Choosing c5 = 1, we see that
−(5/3)p2 (x) + (1/3)p3 (x) + p4 (x) + p5 (x) = 0.
We can solve this equation for p5 (x), which shows that p5 ∈ sp{p2 , p3 , p4 } ⊂ sp{p1 , p2 , p3 , p4 }. Therefore, S
= sp{p1 , p2 , p3 , p4 }. The above calculations also show that the only solution of c1 p1 (x) + c2 p2 (x) +
c3 p3 (x) + c4 p4 (x) + c5 p5 (x) = 0 with c5 = 0 is the trivial solution, that is, the only solution of c1 p1 (x) +
c2 p2 (x) + c3 p3 (x) + c4 p4 (x) = 0 is the trivial solution. Thus {p1 , p2 , p3 , p4 } is linearly independent and
hence a basis for S. (This also shows that S is four-dimensional and hence equals all of P3 .)
(a) It is obvious that {u1 , u2 } is linearly independent, since neither vector is a multiple of the other.
(b) To extend {u1 , u2 } to a basis for Z54 , we must find vectors u3 , u4 such that {u1 , u2 , u3 , u4 } is linearly
independent. We try u3 = (0, 0, 1, 0) and u4 = (0, 0, 0, 1). A direct calculation then shows that
α1 u1 + α2 u2 + α3 u3 + α4 u4 = 0 has only the trivial solution. Therefore {u1 , u2 , u3 , u4 } is linearly
independent and hence, since dim(Z54 ) = 4, it is a basis for Z54 .
2.7. PROPERTIES OF BASES
33 CHAPTER 2. FIELDS AND VECTOR SPACES
33
We wish to find a subset of {v1 , v2 , v3 } that is a basis for S. As usual, we proceed by solving α1 v1 +
α2 v2 + α3 v3 = 0 to find the linear dependence relationships (if any). This equation is equivalent to the
system
α1 + 2α2 + α3 = 0,
2α1 + α2 = 0,
α1 + 2α2 + α3 = 0,
which reduces to α1 = α2 , α3 = 0. Taking α1 = α2 = 1, α3 = 0 gives v1 + v2 = 0, that is, v2 = 2v1 .
Therefore sp{v1 , v2 , v3 } = sp{v1 , v3 }, and since {v1 , v3 } is linearly independent, {v1 , v3 } is a basis for S.
11. (a) Let V be a vector space over a field F with basis {u1 , u2 }, and let α, β, γ ∈ F with α ≠ 0 and
γ ≠ 0. We wish to show that {αu1 , βu1 + γu2 } is also a basis for V . If α1 (αu1 ) + α2 (βu1 + γu2 ) = 0,
then, since {u1 , u2 } is linearly independent, it follows that
αα1 + βα2 = 0,
γα2 = 0.
Since γ = 0 by assumption, the second equation implies that α2 = 0. Then the first equation
simplifies to αα1 = 0, which implies (since α = 0) that α1 = 0. This shows that {αu1 , βu1 + γu2 }
is linearly independent.
Now suppose v ∈ V . Since {u1 , u2 } is a basis for V , there exist β1 , β2 ∈ F such that v = β1 u1 + β2 u2 .
We wish to find α1 , α2 ∈ F such that α1 (αu1 ) + α2 (βu1 + γu2 ) = v. We can rearrange this last
equation to read (αα1 + βα2 )u1 + (γα2 )u2 = v, which then yields (αα1 + βα2 )u1 + (γα2 )u2 = β1 u1 + β2 u2 .
Since v has a unique representation as a linear combination of the basis vectors u1 , u2 , it follows
that
αα1 + βα2 = β1 ,
γα2 = β2 .
The solution is α1 = α−1 (β1 − βγ −1 β2 ), α2 = γ −1 β2 .
This shows that v ∈ sp{αu1 , βu1 + γu2 }, and hence that {αu1 , βu1 + γu2 } spans V . Therefore,
{αu1 , βu1 + γu2 } is a basis for V .
(c) Let V be a vector space over F with basis {u1 , . . . , un }. We wish to generalize the previous parts of
this exercise to show how to produce a collection of different bases for V . We choose any scalars
αij , i = 1, 2, . . . , n, j = 1, 2, . . . , i,
with αii ≠ 0 for each i, and claim that
{α11 u1 , α21 u1 + α22 u2 , . . . , αn1 u1 + · · · + αnn un } (2.2)
is a basis for V . We can prove this by induction on n. We have already done the case n = 1 (and
also n = 2). Let us assume that the construction leads to a basis if the dimension of the vector
space is n − 1, and suppose that V has dimension n, with basis {u1 , u2 , . . . , un }. By the induction
hypothesis,
{α11 u1 , α21 u1 + α22 u2 , . . . , αn−1,1 u1 + · · · + αn−1,n−1 un−1 } (2.3)
is a basis for S = sp{u1 , u2 , . . . , un−1 }. We now show that (2.2) is a basis for V by showing that
each v ∈ V can be uniquely represented as a linear combination of the vectors in (2.2). So let v be
any vector in V , say
v = β1 u 1 + β2 u 2 + · · · + β n u n .
Notice that
βn un = βn αnn^−1 (αn1 u1 + · · · + αnn un ) − βn αnn^−1 (αn1 u1 + · · · + αn,n−1 un−1 ).
The vector βn αnn^−1 (αn1 u1 + · · · + αn,n−1 un−1 ) belongs to S, and hence, by the induction hypothesis,
can be written as γ1 (α11 u1 ) + γ2 (α21 u1 + α22 u2 ) + · · · + γn−1 (αn−1,1 u1 + · · · + αn−1,n−1 un−1 ). Similarly,
β1 u1 + · · · + βn−1 un−1
= δ1 (α11 u1 ) + δ2 (α21 u1 + α22 u2 ) + · · · +
δn−1 (αn−1,1 u1 + · · · + αn−1,n−1 un−1 ).
It follows that
v = β1 u1 + · · · + βn−1 un−1 + βn un
= δ1 (α11 u1 ) + δ2 (α21 u1 + α22 u2 ) + · · · +
δn−1 (αn−1,1 u1 + · · · + αn−1,n−1 un−1 )+
βn αnn^−1 (αn1 u1 + · · · + αnn un ) − (γ1 (α11 u1 ) + γ2 (α21 u1 + α22 u2 )
+ · · · + γn−1 (αn−1,1 u1 + · · · + αn−1,n−1 un−1 ))
= (δ1 − γ1 )(α11 u1 ) + (δ2 − γ2 )(α21 u1 + α22 u2 ) + · · · +
(δn−1 − γn−1 )(αn−1,1 u1 + · · · + αn−1,n−1 un−1 )+
βn αnn^−1 (αn1 u1 + · · · + αnn un ).
This shows that each v can be written as a linear combination of the vectors in (2.2). Uniqueness
follows from the induction hypothesis and the fact that there is only one way to write βn un as a
multiple of αn1 u1 + · · · + αnn un plus a vector from S.
12. We wish to prove that every nontrivial subspace of a finite-dimensional vector space has a basis (and
hence is finite-dimensional). Let V be a finite-dimensional vector space, let dim(V ) = n, and suppose S
is a nontrivial subspace of V . Since S is nontrivial, it contains a nonzero vector v1 . Then either {v1 }
spans S, or there exists v2 ∈ S \ sp{v1 }. In the first case, {v1 } is a basis for S (since any set containing
a single nonzero vector is linearly independent by Exercise 2.5.2). In the second case, {v1 , v2 } is linearly
independent by Exercise 2.5.4. Either {v1 , v2 } spans S, in which case it is a basis for S, or we can find
v3 ∈ S \ sp{v1 , v2 }. We continue to add vectors in this fashion until we obtain a basis for S. We know
that the process will end with a linearly independent spanning set for S, containing at most n vectors,
because S ⊂ V , and any linearly independent set with n vectors spans all of V by Theorem 45.
13. Let F be a finite field with q distinct elements, and let n be a positive integer, n ≥ q. We wish to prove
that dim (Pn (F )) = q by showing that {1, x, . . . , xq−1 } is a basis for Pn (F ). We have already seen (in
Exercise 2.6.11) that {1, x, . . . , xq−1 } is linearly independent. Let α1 , α2 , . . . , αq denote the distinct
elements of F , and consider the following vectors in F q :
v1 = (1, 1, . . . , 1), v2 = (α1 , α2 , . . . , αq ), v3 = (α1^2 , α2^2 , . . . , αq^2 ), . . . , vq = (α1^(q−1) , α2^(q−1) , . . . , αq^(q−1) ).
If c1 v1 + c2 v2 + · · · + cq vq = 0, then c1 + c2 αi + · · · + cq αi^(q−1) = 0 for each i = 1, . . . , q, that is,
c1 · 1 + c2 x + · · · + cq xq−1 = 0 for all x ∈ F .
Since {1, x, . . . , x^(q−1) } is linearly independent, this equation implies that c1 = c2 = · · · = cq = 0, and
hence we have shown that {v1 , v2 , . . . , vq } is linearly independent in F^q . Since we know that dim(F^q ) = q,
Theorem 45 implies that {v1 , v2 , . . . , vq } also spans F^q . Now let p be any polynomial in Pn (F ), and define

u = (p(α1 ), p(α2 ), . . . , p(αq )) ∈ F^q .

Since {v1 , v2 , . . . , vq } spans F^q , there exist scalars c1 , c2 , . . . , cq such that c1 v1 + c2 v2 + · · · + cq vq = u.
This equation is equivalent to

c1 + c2 αi + · · · + cq αi^(q−1) = p(αi ), i = 1, 2, . . . , q,

and hence to

c1 · 1 + c2 x + · · · + cq x^(q−1) = p(x) for all x ∈ F.
This shows that p ∈ sp{1, x, . . . , x^(q−1) }, and hence that {1, x, . . . , x^(q−1) } is a basis for Pn (F ). Thus
dim(Pn (F )) = q, as desired.
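The spanning argument above can be checked concretely for F = Z3 (so q = 3); the sketch below verifies that the value tables of the polynomials of degree less than q exhaust all q^q functions F → F, and that a higher power such as x^3 defines the same function as x.

```python
# Sketch: check the spanning argument of exercise 13 for F = Z_3 (q = 3).
# The value tables of the q^q polynomials of degree < q cover every
# function F -> F, and a higher power such as x^3 equals x as a function.
from itertools import product

q = 3

def values(coeffs):
    """Value table over Z_q of c0 + c1*x + ... (coefficients low degree first)."""
    return tuple(sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
                 for x in range(q))

tables = {values(c) for c in product(range(q), repeat=q)}
assert len(tables) == q ** q                   # all 27 functions Z_3 -> Z_3 are hit
assert values((0, 0, 0, 1)) == values((0, 1))  # x^3 and x agree on Z_3
```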
14. Let V be an n-dimensional vector space over a field F , and suppose S and T are subspaces of V satisfying
S ∩ T = {0}. Suppose that {s1 , s2 , . . . , sk } is a basis for S, {t1 , t2 , . . . , tℓ } is a basis for T , and k + ℓ = n.
We wish to prove that {s1 , s2 , . . . , sk , t1 , t2 , . . . , tℓ } is a basis for V . This follows immediately from
Theorem 45, since we have already shown in Exercise 2.5.15 that {s1 , s2 , . . . , sk , t1 , t2 , . . . , tℓ } is linearly
independent.
15. Let V be a vector space over a field F , and let {u1 , . . . , un } be a basis for V . Let v1 , . . . , vk be vectors in
V , and suppose
vj = α1,j u1 + . . . + αn,j un , j = 1, 2, . . . , k.
(a) For j = 1, 2, . . . , k, define xj = (α1,j , α2,j , . . . , αn,j ) ∈ F^n . We first show that {v1 , . . . , vk } is
linearly independent if and only if {x1 , . . . , xk } is linearly independent. We have

c1 v1 + · · · + ck vk = 0 ⇔ Σ_{j=1}^k cj ( Σ_{i=1}^n αi,j ui ) = 0
⇔ Σ_{j=1}^k Σ_{i=1}^n cj αi,j ui = 0
⇔ Σ_{i=1}^n ( Σ_{j=1}^k cj αi,j ) ui = 0.

Since {u1 , . . . , un } is linearly independent, the last equation holds if and only if Σ_{j=1}^k cj αi,j = 0
for i = 1, 2, . . . , n, that is, if and only if c1 x1 + · · · + ck xk = 0. Hence c1 v1 + · · · + ck vk = 0 has
only the trivial solution if and only if c1 x1 + · · · + ck xk = 0 does.
(b) Now we show that {v1 , . . . , vk } spans V if and only if {x1 , . . . , xk } spans F n . Since each vector in V
can be represented uniquely as a linear combination of u1 , . . . , un , there is a one-to-one correspon-
dence between V and F n :
w = c1 u1 + · · · + cn un ∈ V ←→ x = (c1 , . . . , cn ) ∈ F n .
Mimicking the manipulations in the first part of the exercise, we see that
Σ_{j=1}^k cj vj = w ⇔ Σ_{j=1}^k cj xj = x.
Thus the first equation has a solution for every w ∈ V if and only if the second equation has a
solution for every x ∈ F n . The result follows.
16. Consider the polynomials p1 (x) = −1 + 3x + 2x2 , p2 (x) = 3 − 8x − 4x2 , and p3 (x) = −1 + 4x + 5x2 in
P2 . We wish to use the result of the previous exercise to determine if {p1 , p2 , p3 } is linearly independent.
The standard basis for P2 is {u1 , u2 , u3 } = {1, x, x2 }. In terms of this basis, p1 , p2 , p3 correspond to
x1 = (−1, 3, 2), x2 = (3, −8, −4), x3 = (−1, 4, 5) ∈ R^3 ,
respectively. A direct calculation shows that c1 x1 +c2 x2 +c3 x3 = 0 has only the trivial solution. Therefore
{x1 , x2 , x3 } and {p1 , p2 , p3 } are both linearly independent.
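The "direct calculation" can be reproduced numerically. A minimal sketch (assuming NumPy is available) checks that the coordinate matrix with columns x1, x2, x3 is nonsingular, so c1 x1 + c2 x2 + c3 x3 = 0 forces c1 = c2 = c3 = 0.

```python
# Sketch: the coordinate vectors x1, x2, x3 form the columns of A;
# a nonzero determinant means the homogeneous system has only the
# trivial solution, i.e. {x1, x2, x3} is linearly independent.
import numpy as np

A = np.array([[-1,  3, -1],
              [ 3, -8,  4],
              [ 2, -4,  5]], dtype=float)
det = np.linalg.det(A)
assert abs(det) > 1e-9
```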
2.8. POLYNOMIAL INTERPOLATION AND THE LAGRANGE BASIS
1. (a) The Lagrange polynomials for the interpolation nodes x0 = 1, x1 = 2, x2 = 3 are

L0 (x) = (x − 2)(x − 3)/((1 − 2)(1 − 3)) = (1/2)(x − 2)(x − 3),
L1 (x) = (x − 1)(x − 3)/((2 − 1)(2 − 3)) = −(x − 1)(x − 3),
L2 (x) = (x − 1)(x − 2)/((3 − 1)(3 − 2)) = (1/2)(x − 1)(x − 2).
(b) The quadratic polynomial interpolating (1, 0), (2, 2), (3, 1) is

p(x) = 0 · L0 (x) + 2L1 (x) + 1 · L2 (x) = −2(x − 1)(x − 3) + (1/2)(x − 1)(x − 2) = −(3/2)x^2 + (13/2)x − 5.
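As a check, a short Python sketch evaluates the interpolant directly from the Lagrange form and confirms that it reproduces the data (1, 0), (2, 2), (3, 1).

```python
# Sketch: evaluate the interpolating polynomial from the Lagrange form
# and check that it reproduces the given data points exactly.
nodes = [1.0, 2.0, 3.0]
ys = [0.0, 2.0, 1.0]

def lagrange_eval(x, nodes, ys):
    """Evaluate the interpolating polynomial at x via the Lagrange basis."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, ys)):
        Li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += yi * Li
    return total

for xi, yi in zip(nodes, ys):
    assert abs(lagrange_eval(xi, nodes, ys) - yi) < 1e-12
# agrees with the simplified form -(3/2)x^2 + (13/2)x - 5:
assert abs(lagrange_eval(2.5, nodes, ys) - (-1.5 * 2.5**2 + 6.5 * 2.5 - 5)) < 1e-12
```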
2. (a) The Lagrange polynomials for the interpolation nodes x0 = −2, x1 = −1, x2 = 0, x3 = 1, x4 = 2
are
L0 (x) = (x + 1)x(x − 1)(x − 2)/((−2 + 1)(−2 − 0)(−2 − 1)(−2 − 2)) = (1/24)x(x + 1)(x − 1)(x − 2),
L1 (x) = (x + 2)x(x − 1)(x − 2)/((−1 + 2)(−1 − 0)(−1 − 1)(−1 − 2)) = −(1/6)x(x + 2)(x − 1)(x − 2),
L2 (x) = (x + 2)(x + 1)(x − 1)(x − 2)/((0 + 2)(0 + 1)(0 − 1)(0 − 2)) = (1/4)(x + 2)(x + 1)(x − 1)(x − 2),
L3 (x) = (x + 2)(x + 1)x(x − 2)/((1 + 2)(1 + 1)(1 − 0)(1 − 2)) = −(1/6)x(x + 2)(x + 1)(x − 2),
L4 (x) = (x + 2)(x + 1)x(x − 1)/((2 + 2)(2 + 1)(2 − 0)(2 − 1)) = (1/24)x(x + 2)(x + 1)(x − 1).
(b) Using the Lagrange basis, we find the interpolating polynomial passing through the points (−2, 10),
(−1, −3), (0, 2), (1, 7), (2, 18) to be
p(x) = 10L0 (x) − 3L1 (x) + 2L2 (x) + 7L3 (x) + 18L4 (x)
= (5/12)x(x + 1)(x − 1)(x − 2) + (1/2)x(x + 2)(x − 1)(x − 2)
+ (1/2)(x + 2)(x + 1)(x − 1)(x − 2) − (7/6)x(x + 2)(x + 1)(x − 2)
+ (3/4)x(x + 2)(x + 1)(x − 1)
= x^4 − x^3 − x^2 + 6x + 2.
3. Consider the data (1, 5), (2, −4), (3, −4), (4, 2). We wish to find the cubic polynomial interpolating these
points.
(a) Using the standard basis, we write p(x) = c0 + c1 x + c2 x^2 + c3 x^3 . The equations p(1) = 5,
p(2) = −4, p(3) = −4, p(4) = 2 become

c0 + c1 + c2 + c3 = 5,
c0 + 2c1 + 4c2 + 8c3 = −4,
c0 + 3c1 + 9c2 + 27c3 = −4,
c0 + 4c1 + 16c2 + 64c3 = 2,

and solving this system yields c0 = 26, c1 = −28, c2 = 15/2, c3 = −1/2, that is,
p(x) = −(1/2)x^3 + (15/2)x^2 − 28x + 26.

(b) The Lagrange polynomials for the interpolation nodes 1, 2, 3, 4 are

L0 (x) = (x − 2)(x − 3)(x − 4)/((1 − 2)(1 − 3)(1 − 4)) = −(1/6)(x − 2)(x − 3)(x − 4),
L1 (x) = (x − 1)(x − 3)(x − 4)/((2 − 1)(2 − 3)(2 − 4)) = (1/2)(x − 1)(x − 3)(x − 4),
L2 (x) = (x − 1)(x − 2)(x − 4)/((3 − 1)(3 − 2)(3 − 4)) = −(1/2)(x − 1)(x − 2)(x − 4),
L3 (x) = (x − 1)(x − 2)(x − 3)/((4 − 1)(4 − 2)(4 − 3)) = (1/6)(x − 1)(x − 2)(x − 3),

and the interpolating polynomial is p(x) = 5L0 (x) − 4L1 (x) − 4L2 (x) + 2L3 (x).
The Lagrange polynomials for the interpolation nodes −1, 1, 3 are

L0 (x) = (x − 1)(x − 3)/((−1 − 1)(−1 − 3)) = (1/8)(x − 1)(x − 3),
L1 (x) = (x + 1)(x − 3)/((1 + 1)(1 − 3)) = −(1/4)(x + 1)(x − 3),
L2 (x) = (x + 1)(x − 1)/((3 + 1)(3 − 1)) = (1/8)(x + 1)(x − 1).
6. Let F be a field and suppose (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ) are points in F 2 . We wish to show that the poly-
nomial interpolation problem has at most one solution, assuming the interpolation nodes x0 , x1 , . . . , xn
are distinct. This follows from reasoning we have seen before. Suppose p, q ∈ Pn (F ) both interpolate the
given data, and let r = p − q. Then r is a polynomial of degree at most n having the n + 1 distinct roots
x0 , x1 , . . . , xn . The only polynomial of degree n (or less) having n + 1 roots is the zero polynomial; thus
r = 0, that is, p = q, and there is
at most one interpolating polynomial.
7. Let F be a field and suppose (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ) are points in F 2 . We wish to show that the poly-
nomial interpolation problem has at most one solution, assuming the interpolation nodes x0 , x1 , . . . , xn
are distinct. Suppose p, q ∈ Pn (F ) both interpolate the data, and let {L0 , L1 , . . . , Ln } be the basis for
Pn (F ) of Lagrange polynomials for the given interpolation nodes. Both p and q can be written in terms
of this basis:

p = Σ_{i=0}^n αi Li , q = Σ_{i=0}^n βi Li .
Now, we know that the Lagrange polynomials satisfy
Li (xj ) = 1 if j = i, and Li (xj ) = 0 if j ≠ i.
It follows that p(xj ) = αj and q(xj ) = βj for j = 0, 1, . . . , n.
But, since p and q interpolate the given data, p(xj ) = q(xj ) = yj . This shows that αj = βj , j = 0, 1, . . . , n,
and hence that p = q.
8. Suppose x0 , x1 , . . . , xn are distinct real numbers. We wish to prove that, for any real numbers y0 , y1 , . . . , yn ,
the system
c0 + c1 x0 + c2 x0^2 + · · · + cn x0^n = y0 ,
c0 + c1 x1 + c2 x1^2 + · · · + cn x1^n = y1 ,
...
c0 + c1 xn + c2 xn^2 + · · · + cn xn^n = yn
has a unique solution c0 , c1 , . . . , cn . This follows immediately from our work on interpolating polynomials:
c0 , c1 , . . . , cn solves the given system if and only if p(x) = c0 + c1 x + · · · + cn xn interpolates the data
(x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ). Since there is a unique interpolating polynomial p ∈ Pn , and since this
polynomial can be uniquely represented in terms of the standard basis {1, x, . . . , xn }, it follows that the
given system of equations has a unique solution.
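The equivalence is easy to exercise numerically. The sketch below (assuming NumPy) builds the Vandermonde matrix of the system, which is nonsingular for distinct nodes, and solves it for an arbitrary right-hand side; the nodes and data are arbitrary choices for illustration.

```python
# Sketch: the system of exercise 8 is a Vandermonde system; with distinct
# nodes its matrix is nonsingular, so it has a unique solution for any
# right-hand side.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0])   # distinct nodes (arbitrary choice)
y = np.array([1.0, -2.0, 0.0, 5.0])  # arbitrary data
V = np.vander(x, increasing=True)    # rows: 1, x_i, x_i^2, x_i^3
c = np.linalg.solve(V, y)            # unique coefficient vector
assert np.allclose(V @ c, y)
assert np.allclose(np.polyval(c[::-1], x), y)  # p(x_i) = y_i
```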
9. We wish to represent every function f : Z2 → Z2 by a polynomial in P1 (Z2 ). There are exactly four
different functions f : Z2 → Z2 , determined by the values f (0), f (1) ∈ Z2 . Writing p(x) = c0 + c1 x, the
conditions p(0) = f (0) and p(1) = f (1) give c0 = f (0) and c1 = f (0) + f (1) (recall that −1 = 1 in Z2 ).
Thus the four functions, listed as (f (0), f (1)) = (0, 0), (0, 1), (1, 0), (1, 1), are represented by the
polynomials 0, x, 1 + x, and 1, respectively.
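The correspondence can be enumerated in a few lines of Python, recovering c0 and c1 from f(0) and f(1) exactly as in the solution.

```python
# Sketch for exercise 9: enumerate all four functions f : Z_2 -> Z_2 and
# match each to a polynomial c0 + c1*x in P_1(Z_2).
from itertools import product

for f0, f1 in product((0, 1), repeat=2):  # the four functions, as (f(0), f(1))
    c0, c1 = f0, (f0 + f1) % 2            # solve c0 = f(0), c0 + c1 = f(1)
    assert (c0 + c1 * 0) % 2 == f0
    assert (c0 + c1 * 1) % 2 == f1
```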
10. The following table defines three functions mapping Z3 → Z3 . We wish to find a polynomial in P2 (Z3 )
representing each one.
We can compute f1 , f2 , and f3 as interpolating polynomials. The Lagrange polynomials for the given
interpolation nodes are
L0 (x) = (x − 1)(x − 2)/((0 − 1)(0 − 2)) = 2x^2 + 1,
L1 (x) = x(x − 2)/((1 − 0)(1 − 2)) = 2x^2 + 2x,
L2 (x) = x(x − 1)/((2 − 0)(2 − 1)) = 2x^2 + x.

(Here we have used the arithmetic of Z3 to simplify the polynomials: −1 = 2, −2 = 1, 2^(−1) = 2, etc.)
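A quick modular-arithmetic check confirms the simplified forms (with L0 = 2x^2 + 1): each Li takes the value 1 at its own node and 0 at the other two.

```python
# Sketch: verify the simplified Lagrange polynomials over Z_3 for the
# nodes 0, 1, 2 by checking L_i(x_j) = 1 if i == j else 0 (mod 3).
p = 3
coeffs = [(1, 0, 2),   # L0 = 2x^2 + 1      (constant, x, x^2)
          (0, 2, 2),   # L1 = 2x^2 + 2x
          (0, 1, 2)]   # L2 = 2x^2 + x

def ev(c, x):
    return (c[0] + c[1] * x + c[2] * x * x) % p

for i, c in enumerate(coeffs):
    for j in range(3):
        assert ev(c, j) == (1 if i == j else 0)
```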
We then have
11. Consider a secret sharing scheme in which five individuals will receive information about the secret, and
any two of them, working together, will have access to the secret. Assume that the secret is a two-
digit integer, and that p is chosen to be 101. The degree of the polynomial will be one, since then the
polynomial will be uniquely determined by two data points. Let us suppose that the secret is N = 42
and we choose the polynomial to be p(x) = N + c1 x, where c1 = 71 (recall that c1 is chosen at random).
We also choose the five interpolation nodes at random to obtain x1 = 9, x2 = 14, x3 = 39, x4 = 66, and
x5 = 81. We then compute
y1 = p(x1 ) = 42 + 71 · 9 = 75,
y2 = p(x2 ) = 42 + 71 · 14 = 26,
y3 = p(x3 ) = 42 + 71 · 39 = 84,
y4 = p(x4 ) = 42 + 71 · 66 = 82,
y5 = p(x5 ) = 42 + 71 · 81 = 36
(notice that all arithmetic is done modulo 101). The data points, to be distributed to the five individuals,
are (9, 75), (14, 26), (39, 84), (66, 82), (81, 36).
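The share computation is a one-liner per participant; this sketch reproduces the five data points modulo 101.

```python
# Sketch of exercise 11's share computation: p(x) = 42 + 71x over Z_101.
P, N, c1 = 101, 42, 71
nodes = [9, 14, 39, 66, 81]
shares = [(x, (N + c1 * x) % P) for x in nodes]
assert shares == [(9, 75), (14, 26), (39, 84), (66, 82), (81, 36)]
```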
12. An integer N satisfying 1 ≤ N ≤ 256 represents a secret to be shared among five individuals. Any
three of the individuals are allowed access to the information. The secret is encoded in a polynomial
p, according to the secret sharing scheme described in Section 2.8.1, lying in P2 (Z257 ). Suppose three
of the individuals get together, and their data points are (15, 13), (114, 94), and (199, 146). We wish to
determine the secret. We begin by finding the Lagrange polynomials for the interpolation nodes 15, 114,
and 199:
L0 (x) = (x − 114)(x − 199)/((15 − 114)(15 − 199)) = 58x^2 + 93x + 205,
L1 (x) = (x − 15)(x − 199)/((114 − 15)(114 − 199)) = 74x^2 + 98x + 127,
L2 (x) = (x − 15)(x − 114)/((199 − 15)(199 − 114)) = 125x^2 + 66x + 183.
Evaluating the interpolating polynomial p(x) = 13L0 (x) + 94L1 (x) + 146L2 (x) at x = 0 gives

p(0) = 13L0 (0) + 94L1 (0) + 146L2 (0) = 13 · 205 + 94 · 127 + 146 · 183 = 201.
Thus the secret is N = 201. (The polynomial p(x) simplifies to p(x) = 3x2 + 11x + 201).
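The recovery step never needs p(x) in full: only the values Li(0) are required. A sketch in Python, using `pow(a, -1, m)` (available since Python 3.8) for modular inverses:

```python
# Sketch of exercise 12's recovery step: evaluate the interpolant at 0
# over Z_257 directly from the Lagrange form.
P = 257
pts = [(15, 13), (114, 94), (199, 146)]
secret = 0
for i, (xi, yi) in enumerate(pts):
    Li0 = 1                                    # L_i(0) mod P
    for j, (xj, _) in enumerate(pts):
        if j != i:
            Li0 = Li0 * (-xj) * pow(xi - xj, -1, P) % P
    secret = (secret + yi * Li0) % P
assert secret == 201
```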
13. We wish to solve the following interpolation problem: Given v1 , v2 , d1 , d2 ∈ R, find p ∈ P3 such that
p(0) = v1 , p(1) = v2 , p′(0) = d1 , p′(1) = d2 .
(a) If we represent p as p(x) = c0 + c1 x + c2 x^2 + c3 x^3 , then the given conditions

p(0) = v1 ,
p′(0) = d1 ,
p(1) = v2 ,
p′(1) = d2
are equivalent to the system
c0 = v1 ,
c1 = d1 ,
c0 + c1 + c2 + c3 = v2 ,
c1 + 2c2 + 3c3 = d2 .
It is straightforward to solve this system:
c0 = v1 , c1 = d1 , c2 = 3v2 − 3v1 − 2d1 − d2 , c3 = 2v1 − 2v2 + d1 + d2 .
(b) We now find the special basis {q1 , q2 , q3 , q4 } of P3 satisfying the following conditions:
q1 (0) = 1, q1′(0) = 0, q1 (1) = 0, q1′(1) = 0,
q2 (0) = 0, q2′(0) = 1, q2 (1) = 0, q2′(1) = 0,
q3 (0) = 0, q3′(0) = 0, q3 (1) = 1, q3′(1) = 0,
q4 (0) = 0, q4′(0) = 0, q4 (1) = 0, q4′(1) = 1.
We can use the result of the first part of this exercise to write down the solutions immediately:
q1 (x) = 1 − 3x^2 + 2x^3 ,
q2 (x) = x − 2x^2 + x^3 ,
q3 (x) = 3x^2 − 2x^3 ,
q4 (x) = −x^2 + x^3 .
In terms of the basis {q1 , q2 , q3 , q4 }, the solution to the interpolation problem is
p(x) = v1 q1 (x) + d1 q2 (x) + v2 q3 (x) + d2 q4 (x).
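The four conditions on each basis polynomial are easy to confirm numerically; this sketch checks values directly and derivatives by a central difference.

```python
# Sketch: check the cubic Hermite basis (q1, q2 match value/derivative
# at 0; q3, q4 at 1) against its defining conditions.
def q1(x): return 1 - 3 * x**2 + 2 * x**3
def q2(x): return x - 2 * x**2 + x**3
def q3(x): return 3 * x**2 - 2 * x**3
def q4(x): return -x**2 + x**3

def d(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# conditions: (value at 0, derivative at 0, value at 1, derivative at 1)
want = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
for q, (v0, d0, v1, d1) in zip([q1, q2, q3, q4], want):
    assert abs(q(0) - v0) < 1e-6 and abs(d(q, 0) - d0) < 1e-6
    assert abs(q(1) - v1) < 1e-6 and abs(d(q, 1) - d1) < 1e-6
```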
2.9. CONTINUOUS PIECEWISE POLYNOMIAL FUNCTIONS
1. We wish to solve the Hermite interpolation problem: given distinct interpolation nodes x0 , x1 , . . . , xn
and values v0 , . . . , vn , d0 , . . . , dn , find p ∈ P2n+1 such that

p(xi ) = vi , p′(xi ) = di , i = 0, 1, . . . , n.

Define

Ai (x) = (1 − 2L′i(xi )(x − xi ))Li (x)^2 , Bi (x) = (x − xi )Li (x)^2 , i = 0, 1, . . . , n,

where L0 , L1 , . . . , Ln are the Lagrange polynomials for the given nodes.
(a) Since each Lagrange polynomial Li has degree exactly n, we see that Bi has degree 2n + 1 for each
i = 0, 1, . . . , n, while Ai has degree 2n + 1 if L′i(xi ) ≠ 0 and degree 2n if L′i(xi ) = 0. Thus, in every
case, Ai , Bi ∈ P2n+1 for all i = 0, 1, . . . , n.
(b) If j ≠ i, then Li (xj ) = 0 and hence Ai (xj ) = 0, while Ai (xi ) = (1 − 0) · Li (xi )^2 = 1. Differentiating,

A′i(x) = (1 − 2L′i(xi )(x − xi )) · 2Li (x)L′i(x) − 2L′i(xi )Li (x)^2 .

If j ≠ i, then Li (xj ) = 0 and therefore A′i(xj ) = 0 (since both terms contain a factor of Li (xj )). If
j = i, then Li (xi ) = 1 and A′i(xi ) simplifies to

A′i(xi ) = 2L′i(xi ) − 2L′i(xi ) = 0.
(c) Similarly, Bi (xj ) = (xj − xi )Li (xj )^2 = 0 for all i and j, and

B′i(x) = Li (x)^2 + 2(x − xi )Li (x)L′i(x).

If j ≠ i, both terms contain a factor of Li (xj ) = 0, while B′i(xi ) = Li (xi )^2 = 1. Therefore,

B′i(xj ) = 1 if i = j, and B′i(xj ) = 0 if i ≠ j.
(d) We now wish to prove that {A0 , . . . , An , B0 , . . . , Bn } is a basis for P2n+1 . Since dim(P2n+1 ) = 2n+ 2
and the proposed basis contains 2n + 2 elements, it suffices to prove that the set spans P2n+1 . Given
any p ∈ P2n+1 , define vj = p(xj ), dj = p′(xj ), j = 0, 1, . . . , n. Then

q(x) = Σ_{i=0}^n vi Ai (x) + Σ_{i=0}^n di Bi (x)
agrees with p at each xj , and q′ agrees with p′ at each xj (see below). Define r = p − q. Then r is
a polynomial of degree at most 2n + 1, and r(xj ) = r′(xj ) = 0 for j = 0, 1, . . . , n. Using elementary
properties of polynomials, this implies that r(x) can be factored as
r(x) = f (x)(x − x0 )2 (x − x1 )2 · · · (x − xn )2 ,
where f (x) is a polynomial. But then deg(r(x)) ≥ 2n + 2 unless f = 0. Since we know that
deg(r(x)) ≤ 2n + 1, it follows that f = 0, in which case r = 0 and hence q = p. Thus p ∈
sp{A0 , . . . , An , B0 , . . . , Bn }. This proves that the given set is a basis for P2n+1 .
Using the properties of A0 , A1 , . . . , An , B0 , B1 , . . . , Bn derived above, the solution to the Hermite inter-
polation problem is
p(x) = Σ_{i=0}^n vi Ai (x) + Σ_{i=0}^n di Bi (x).
Evaluating at xj , every term in the second sum vanishes, as do all terms in the first sum except
vj Aj (xj ) = vj · 1 = vj .
Thus p(xj ) = vj . Also,
p′(xj ) = Σ_{i=0}^n vi A′i(xj ) + Σ_{i=0}^n di B′i(xj ).
Now every term in the first sum vanishes, as do all terms in the second sum except dj B′j(xj ) = dj · 1 = dj .
Therefore p′(xj ) = dj , and p is the desired interpolating polynomial.
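To see the construction in action, the sketch below builds p from the Ai and Bi for three sample nodes with data chosen purely for illustration, and checks the interpolation conditions; derivatives are approximated by central differences.

```python
# Sketch: Hermite interpolation via A_i = (1 - 2L_i'(x_i)(x - x_i))L_i(x)^2
# and B_i = (x - x_i)L_i(x)^2, checked on sample data.
nodes = [0.0, 1.0, 2.0]
vs = [1.0, -1.0, 2.0]   # target values (arbitrary)
ds = [0.5, 3.0, -2.0]   # target derivatives (arbitrary)

def L(i, x):
    """i-th Lagrange polynomial for the nodes."""
    out = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            out *= (x - xj) / (nodes[i] - xj)
    return out

def dnum(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

def p(x):
    total = 0.0
    for i, xi in enumerate(nodes):
        dLi = dnum(lambda t: L(i, t), xi)
        Ai = (1 - 2 * dLi * (x - xi)) * L(i, x) ** 2
        Bi = (x - xi) * L(i, x) ** 2
        total += vs[i] * Ai + ds[i] * Bi
    return total

for xi, vi, di in zip(nodes, vs, ds):
    assert abs(p(xi) - vi) < 1e-5      # p(x_j) = v_j
    assert abs(dnum(p, xi) - di) < 1e-4  # p'(x_j) = d_j
```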
2. The following table shows the maximum errors obtained in approximating f (x) = 1/(1 + x2 ) on the
interval [−5, 5] by polynomial interpolation and by piecewise linear interpolation, each on a uniform grid
with n nodes.
Here we see that polynomial interpolation is not effective (cf. Figure 2.6 in the text). Also, we see that
piecewise linear interpolation is much more effective with n even, since then there is an interpolation
node at x = 0, which is the peak of the graph of f .
3. We now repeat the previous two exercises, using piecewise quadratic interpolation instead of piecewise
linear interpolation. We use a uniform grid of 2n + 1 nodes.
4. Let x0 , x1 , . . . , xn define a uniform mesh on [a, b] (that is, xi = a + ih, i = 0, 1, . . . , n, where h = (b−a)/n).
We wish to prove that
max_{x∈[a,b]} |(x − x0 )(x − x1 ) · · · (x − xn )| / (n + 1)! ≤ h^(n+1) / (2(n + 1)).
Let x ∈ [a, b] be given. If x equals any of the nodes x0 , x1 , . . . , xn , then the left-hand side is zero, and
thus the inequality holds. So let us assume that x ∈ (xi−1 , xi ) for some i = 1, 2, . . . , n. The distance
from x to the nearer of xi−1 , xi is at most h/2 and the distance to the further of the two is at most h. If
we then list the remaining n − 1 nodes in order of distance from x (nearest to furthest), we see that the
kth node in this list is at distance at most (k + 1)h from x, k = 1, 2, . . . , n − 1. It follows that

|(x − x0 )(x − x1 ) · · · (x − xn )| ≤ (h/2) · h · (2h)(3h) · · · (nh) = (n!/2) h^(n+1).
Therefore,
|(x − x0 )(x − x1 ) · · · (x − xn )| / (n + 1)! ≤ (1/2) n! h^(n+1) / (n + 1)! = h^(n+1) / (2(n + 1)).
Since this holds for each x ∈ [a, b], the proof is complete.
5. We wish to derive a bound on the error in piecewise quadratic interpolation in the case of a uniform mesh.
We use the notation of the text, and consider an arbitrary element [x2i−2 , x2i ] (with x2i − x2i−2 = h).
Let f ∈ C^3 [a, b] be approximated on this element by the quadratic q that interpolates f at
x2i−2 , x2i−1 , x2i . Then the remainder term is given by
f (x) − q(x) = (f^(3) (cx ) / 3!) (x − x2i−2 )(x − x2i−1 )(x − x2i ), x ∈ [x2i−2 , x2i ],

where cx ∈ [x2i−2 , x2i ]. We assume |f^(3) (x)| ≤ M for all x ∈ [a, b]. We must maximize
|(x − x2i−2 )(x − x2i−1 )(x − x2i )| on [x2i−2 , x2i ]. This is equivalent, by a simple change of variables,
to maximizing |p(x)| on [0, h], where
p(x) = x(x − h/2)(x − h). This is a simple problem in single-variable calculus, and we can easily verify
that
|p(x)| ≤ h^3 / (12√3) for all x ∈ [0, h].
Therefore,
|f (x) − q(x)| ≤ (|f^(3) (cx )| / 3!) · (h^3 / (12√3)) ≤ (M / (72√3)) h^3 for all x ∈ [x2i−2 , x2i ].
BY
C. PHILLIPPS-WOLLEY
AUTHOR OF 'SPORT IN THE CRIMEA AND CAUCASUS' ETC.
WITH THIRTEEN ILLUSTRATIONS BY H. G. WILLINK
NEW EDITION
LONDON
LONGMANS, GREEN, AND CO.
AND NEW YORK: 15 EAST 16th STREET
1892
TO SMALL CLIVE.
CHAPTER
I. FERNHALL v. LOAMSHIRE
II. 'MOP FAUCIBUS HÆSIT'
III. SNAP'S REDEMPTION
IV. THE FERNHALL GHOST
V. THE ADMIRAL'S 'SOCK-DOLLAGER'
VI. THE BLOW FALLS
VII. LEAVE LIVERPOOL
VIII. THE MANIAC
IX. 'THAT BAKING POWDER'
X. AFTER SCRUB CATTLE
XI. BRINGING HOME THE BEAR
XII. BRANDING THE 'SCRUBBER'
XIII. WINTER COMES WITH THE 'WAVIES'
XIV. A NIGHT OF ADVENTURE
XV. FOUNDING 'BULL PINE' FIRM
XVI. BEARS
XVII. IN THE BRÛLÉ
XVIII. THE LOSS OF 'THE CRADLE'
XIX. THE GAMBLERS 'PUT UP'
XX. LONE MOUNTAIN
XXI. AT THE TOP
XXII. AT THE END OF THE ROPE
XXIII. READING THE WILL
XXIV. SNAP'S SACRIFICE
XXV. THE FLIGHT OF THE CROWS
XXVI. SNAP'S STORY
XXVII. CONCLUSION
ILLUSTRATIONS.
IN THE WOOD
IN THE BRÛLÉ
'HANDS UP'
'GOOD-BYE, PARD'
SNAP'S SACRIFICE
SNAP
CHAPTER I
FERNHALL v. LOAMSHIRE
'What on earth shall we do, Winthrop?' asked one of the Fernhall Eleven
of a big fair-faced lad, who seemed to be its captain.
'Ah, well,' suggested another of the group, 'though Hales did very well for
the Twenty-two, it isn't quite the same thing bowling against such a team as
Loamshire brings down; he might not "come off" after all, don't you know.'
A quiet grin spread over the captain's face. No one knew better than he
did the spirit which prompted Poynter's last remark.
Good bowler though he was, Poynter had often been a sad thorn in
Winthrop's side. If you put him on first with the wind in his favour, Poynter
would be beautifully good-tempered, and bowl sometimes like a very
Spofforth. Only then sometimes he wouldn't! Sometimes an irreverent
batsman from Loamshire who had never heard of Poynter's break from the
leg would hit him incontinently for six, and perhaps do it twice in one over.
Then Poynter got angry. His arms began to work like a windmill. He tried to
bowl rather faster than Spofforth ever did; about three times as fast as Nature
ever meant John Poynter to. The result of this was always the same. First he
pitched them short, and the delighted batsman cut them for three; then he
pitched them up, and that malicious person felt a thrill of pleasure go
through his whole body as he either drove them or got them away to square
leg. Then Winthrop had to take him off. This was when the trouble began.
Sullenly Poynter would take his place in the field—and it was not every
place in the field which suited him. If you put him in the deep field, he
growled at the folly which risked straining a bowler's arm by shying. If you
put him close in, he grumbled at the risk he ran of having those dexterous
fingers of his damaged by a sharp cut or a 'sweet' drive. For of course he
always expected to be put on again, and from the time that he reached his
place until the time that he was again put into possession of the ball he did
nothing but watch his rival with malicious envy, making a mental bowling
analysis for him, in which he took far more note of the hits (or wides if there
were any) than he did of the maiden overs which were bowled.
But Frank Winthrop was a diplomatist, as a cricket captain should be, so,
though he grinned, he only replied, 'That's true enough, Poynter, but I must
have some ordinary straight stuff, such as Hales's, to rest you and Rolles, and
put these fellows off their guard against your curly ones.'
'Yes, I suppose it is a mistake to bowl a fellow good balls all the time. It
makes him play too carefully,' replied the self-satisfied Poynter.
'Well, but, Winthrop,' insisted the first speaker, 'if you don't do without a
change bowler, what will you do? That other fellow in the Twenty-two
doesn't bowl well enough, but there are lots of them useful bats.'
'I know all that, but I've made up my mind,' replied the young autocrat. 'I
shall play a man short, if I can't persuade Trout' (an irreverent sobriquet for
their head-master) 'to let Snap Hales off in time.'
When a captain of a school eleven says that he has made up his mind, the
intervention of anyone less than a head-master is useless, so that no one
protested.
As the group broke up Wyndham put his arm through Winthrop's, and
together they strolled towards the door of the school-house.
'Then, as he had just got into the eleven,' continued Winthrop, 'he didn't
like to give up his half-hour with the professional; the result of all which was
that yesterday old Cube asked him for his lines and was told—
'"Not time! Why, I myself saw you playing cricket to-day for a good half-
hour. What do you mean by telling me you had not time?" asked Cube.
'"I had not time, sir, because——" Snap tried to say, but Cube stopped
him with that abominable trick of his, you know it.
'"Yēēs, Hales, yēēs! Yēēs, Hales, yēēs! So you had no time, Hales! Yēēs,
Hales, yēēs!"
'Here Snap's beastly temper gave out, and instead of waiting till he got a
chance of telling his story properly to old Cube, who, although he loves
mathematics and hates a lie, is a good chap after all, he deliberately
mimicked the old chap with—
'Of course the other fellows went into fits of laughter, and old Cube had
fits too, only of another kind, and I expect I shall get "fits" from the Head for
trying to get the young idiot off for this match. But I really don't see how we
can get on without him,' Winthrop added, as he left his friend at the door,
and plodded with a heavy heart up to the head-master's sanctum.
What happened there the narrator of this truthful story does not pretend to
know. The inside of a headmaster's library was to him a place too sacred for
intrusion, and it was only through the foolish persistence of certain unwise
under-masters that he was ever induced to enter it. Whenever he did, he left
it with a note of recommendation from that excellent man to the school-
sergeant. It was not quite a testimonial to character, but still something like
it, and always contained an allusion to one of the most graceful of forest
trees, the mournful, beautiful birch. I am told that this is the favourite tree of
the Russian peasant. I dare say. I am told he is still uneducated. It was
education which, I think, taught me to dislike the birch.
He was a good fellow, our Head, and from Winthrop's face as he came
downstairs I expect that he thought so.
I was quite right about that leave out of bounds. The head-master felt, no
doubt quite properly, that on such a day as the day of the Loamshire match,
when there were sure to be lots of visitors about, it would not do for one of
the school's chief ornaments to be absent. It was very hard upon me because,
you see, I could only buy twelve tarts for my shilling at the tuckshop,
whereas if I had got leave out of bounds I could have got thirteen for the
same money, only four miles from school! That sense of duty to the public
which no doubt will lead me some day to take a seat in the House of
Commons enabled me to bear up under my trouble, and about two o'clock I
was watching the match with my fellows on the Fernhall playing fields.
Ah, me! those Fernhall playing fields! with their long level stretches of
green velvet, their June sunshine and wonderful blue skies! What has life
like them nowadays? On this day they were looking their very best, and,
though I have wandered many a thousand miles since then, I have never seen
a fairer sight. Forty acres there were, all in a ring fence, of level greensward,
every yard of it good enough for a match wicket, and the ring-fence itself
nothing but a tall rampart of green turf, twelve or fourteen feet high, and
broad enough at the top for two boys to walk upon it abreast.
Out in the middle of this great meadow the wickets were pitched, and I
really believe that I have since played billiards on a surface less level than
the two-and-twenty yards which they enclosed. The lines of the crease
gleamed brightly against the surrounding green, and the strong sun blazed
down upon the long white coats of the umpires, the Fernhall eleven (or
rather ten, for Snap was still absent), and two of the strongest bats in
Loamshire.
But, though fourteen figures had the centre of the ground to themselves,
there was plenty of vigorous, young life round its edges. There, where the
sun was the warmest, with their backs up against the bank which enclosed
the master's garden, sat or lay some four hundred happy youngsters,
anxiously watching every turn of the match, keen critics, although
thoroughgoing partisans. Like young lizards, warmed through with the sun,
lying soft against the mossy bank, the scent of the flowers came to them over
the garden hedge, and the soft salt breeze came up from the neighbouring
sea. You could hear the lip and roll of its waves quite plainly where you lay,
if you listened for it, for after all it was only just beyond that green bulwark
of turf behind the pavilion. Many and many a time have we boys seen the
white foam flying in winter across those very playing-fields, and gathered
sea-wrack from the hedges three miles inland. By-and-by, when the match
was over, most of the two-and-twenty players in it would race down to the
golden sands and roll like young dolphins in the blue waves, for Fernhall
boys swam like fishes in those good old days, and such a sea in such
sunshine would have tempted the veriest coward to a plunge.
But the match was not over yet, although yellow-headed Frank Winthrop
began to think that it might almost as well be. He was beginning to despair.
It was a one-day match: the school had only made 156, while the county had
only two wickets down for 93; of course there was no chance of a second
innings; the two best bats in Loamshire seemed set for a century apiece;
Poynter had lost his temper and seemed trying rather to hurt his men than to
bowl them, and everyone else had been tried and had failed. What on earth
was an unfortunate captain to do? Just then a figure in a long cassock and
college cap, a fine portly figure with a kindly face, turned round, and, using
the back of a trembling small boy for a desk, wrote a note and despatched
the aforesaid small boy with it to the rooms of the Rev. Erasmus Cube-Root.
A minute or two before, Winthrop had found time to exchange half-a-dozen
words with 'the Head' whilst in the long field, and now he turned and raised
his cap to him, while an expression of thankfulness overspread his features.
The two Loamshire men at the wickets were Grey and Hawker, both names
well known on all the cricket-fields of England, and one of them known and
a little feared by our cousins at the Antipodes. This man, Hawker, had been
heard to say that he was coming to Fernhall to get up his average and have
an afternoon's exercise. It looked very much as if he would justify his boast.
He was an aggravating bat to bowl to, for more reasons than one. One of his
tricks, indeed, seemed to have been invented for the express purpose of
chaffing the bowler.
As he stood at the wicket his bat was almost concealed from sight behind
his pads, his wicket appeared to be undefended, and all three stumps plainly
visible to his opponent. Alas! as the ball came skimming down the pitch the
square-built little athlete straightened himself, the bat came out from its
ambush, and you had the pleasure of knowing that another six spoiled the
look of your analysis. If he was in very high spirits, and you in very poor
form, he would indulge in the most bewildering liberties, spinning round on
his heels in a way known to few but himself, so as to hit a leg ball into the
'drives.' Altogether he was, as the boys knew, a perfect Tartar to deal with if
he once got 'set.'
Grey, the other bat, was quite as exasperating in his way as Hawker, only
it was quite another way. He it was who had broken poor Poynter's heart.
You did not catch him playing tricks. You did not catch him hitting sixes, or
even threes; but neither did you catch him giving the field a chance,
launching out at a yorker, or interfering with a 'bumpy' one. Oh, no! It didn't
matter what you bowled him, it was always the same story. 'Up went his
shutter,' as Poynter feelingly remarked, 'and you had to pick up that blessed
leather and begin again.' Sometimes he placed a ball so as to get one run for
it, sometimes he turned round and sped a parting ball to leg, and sometimes
he snicked one for two. He was a slow scorer, but he seemed to possess the
freehold of the ground he stood upon. No one could give him notice to quit.
Such were the men at the wicket, and such the state of the game, when a tall,
slight figure came racing on to the ground in very new colours, and with
fingers which, on close inspection, would have betrayed a more intimate
acquaintance with the ink-pot than with the cricket-ball. Although it would
have been nearer to have passed right under the head-master's nose, the new-
comer went a long way round, eyeing that dignitary with nervous suspicion,
and raising his cap with great deference when the eye of authority rested
upon him. As soon as he came on to the ground he dropped naturally into his
place, and anyone could have seen at a glance that, whatever his other merits
might or might not be, Snap Hales was a real keen cricketer. When a ball
came his way there was no waiting for it to reach him on his part. He had
watched it, as a hawk does a young partridge, from the moment it left the
bowler's hands, and was halfway to meet it already. Like a flash he had it
with either hand—both were alike to him—and in the same second it was
sent back straight and true, a nice long hop, arriving in the wicket-keeper's
hands at just about the level of the bails.
But Winthrop had other work for Snap to do, and at the end of the over
sent him to replace Rolles at short-slip.
'By George, Towzer, they are going to put on Snap Hales,' said one
youngster to another on the rugs under the garden hedge.
'About time, too,' replied his companion; 'if he can't bowl better than
those two fellows he ought to be kicked.'
'Well, I dare say both you and he will be, if he doesn't come off to-day. I
expect it was your brother who got him off his lines to-day, and he won't be
a pleasant companion for either of you if the school gets beaten with half-a-
dozen wickets to spare.'
Towzer, the boy addressed, was brother to the captain of the eleven, and
his fag. Snap Hales, when at home, lived near the Winthrops, so that in the
school, generally, they were looked upon as being of one clan, of which, of
course, Frank Winthrop was the chief. Willy Winthrop was Towzer's proper
name, or at least the name he was christened by; but anyone looking at the
fair-haired jolly-looking little fellow would have doubted whether his
godfathers were wiser than his schoolfellows. No one would ever have
dreamed of him as a future scholar of Balliol, nor, on the other hand, as a
sour-visaged failure. He was a bright, impertinent Scotch terrier of a boy,
and his discerning contemporaries called him Towzer.
But we must leave Towzer for the present and stick to Snap. Everyone
was watching him now, and none more closely or more kindly than the man
whom Snap considered chief of his born enemies, 'the Head.' 'Yes, he is a
fine lad,' muttered that great man, 'I wish I knew how to manage him. He has
stuff in him for anything.' And indeed he might have, though he was hardly
good-looking. Tall and spare, with a lean, game look about the head, the first
impression he made upon you was that he was a perfect athlete, one of
Nature's chosen children. Every movement was so easy and so quick that
you knew instinctively that he was strong, though he hardly looked it; but his
face puzzled you. It was a dark, sad-looking face, certainly not handsome,
with firm jaw and somewhat rugged outlines, and yet there was a light
sometimes in the big dark eyes which gave all the rest the lie, and made you
feel that his masters might be right, after all, when they said, 'There is no
misdoing at Fernhall of which "that Hales" is not the leader.'
'Round the wicket, sir?' asked the umpire as Snap took the ball in hand.
'No, Charteris, over,' was the short reply, as Hales turned to measure his
run behind the sticks.
'What! a new bowler?' asked Hawker of the wicket-keeper as he took a
fresh guard; 'who is he?'
'An importation from the Twenty-two; got his colours last week,'
answered Wyndham, and a smile spread over Hawker's face, as he saw in
fancy a timid beginner pitching him half-volleys to be lifted over the garden
hedge, or leg-balls with which to break the slates on the pavilion.
But Hawker had to reserve his energy for a while, being much too good a
cricketer to hit wildly at anything. With a quiet, easy action the new bowler
sent down an ordinary good-length ball, too straight to take liberties with,
and that was all. Hawker played it back to him confidently, but still carefully,
and another, and another, of almost identical pitch and pace, followed the
first. 'Not so much to be made off this fellow after all,' thought Hawker, 'but
he will get loose like the rest by-and-by, no doubt.' Still it was not as good
fun as he had expected. The fourth ball of Snap's first over was delivered
with exactly the same action as its predecessors, but the pace was about
double that of the others and Hawker was only just in time to stop it. It was
so very nearly too much for the great man that for a moment it shook his
confidence in his own infallibility. That momentary want of confidence
ruined him. The last ball of the over was not nearly up to the standard of the
other four; it was short-pitched and off the wicket, but it had a lot of 'kick' in
it, and Hawker had not come far enough out for it. There was an ominous
click as the ball just touched the shoulder of his bat, and next moment, as
long-slip remarked, he found it revolving in his hands 'like a stray planet.'
Don't talk to me of the lungs of the British tar, of the Irish stump orator,
or even of the 'Grand Old Man' himself! They are nothing, nothing at all, to
the lungs we had in those days. It was Snap's first wicket for the school, and
Snap was the school's favourite, as the scapegrace of a family usually is, and
caps flew up and fellows shouted until even Hawker didn't much regret his
discomfiture if it gave the boys such pleasure. He was very fond of Fernhall
boys, that sinewy man from the North, and, next to their own heroes,
Fernhall liked him better than most men. Even now they show the window
through which he jumped on all fours, and many a neck is nearly dislocated
in trying to follow his example.
In the next over from his end Hales had to deal with Grey, and he found
his match. He tried him with slow ones, he tried him with fast ones, he tried
to seduce him from the paths of virtue with the luscious lob, to storm him
with the Eboracian pilule or ball from York. It was not a bit of good, up went
the shutter, and a maiden over left Snap convinced that the less he had to do
with Grey the better for him, and left Grey convinced that Fernhall had got a
bowler at last who bowled with his head. Was it wilfully, I wonder, that Snap
gave Grey on their next meeting a ball which that steady player hit for one?
It may not have been, and yet there was a grin all over the boy's dark face as
he saw Grey trot up to his end. That run cost Loamshire two batsmen in four
balls—one bowled leg before wicket, and the other clean-bowled with an
ordinary good-length ball rather faster than its fellows.
Those old fields rang with Hales's name that afternoon, and at 6.30,
thanks chiefly to his superb bowling, the county had still two to score to win,
and two wickets to fall. One of the men still in was Grey. At the end of the
over the stumps would be drawn, and the game drawn against the school,
even if (as he might do) Snap should bowl a maiden. That, however, could
hardly be; even Grey would hit out at such a crisis. At the very first ball the
whole school trembled with excitement. The Loamshire man played well
back and stopped a very ugly one, fast and well pitched, but it would not be
altogether denied, and curled in until it lay quiet and inoffensive, absolutely
touching the stumps.
Ah, gentlemen of Loamshire! if you want to win this match why can't you
keep quiet? Don't you think the sight of that fatal little ball, nestling close up
to his wicket, is enough to disconcert any batsman in the last over of a good
match? And yet you cry, 'Steady, Thompson, steady!' Poor chap, you can see
that he is all abroad, and the boy's eyes at the other end are glittering with
repressed excitement. He is fighting his first great battle in public, and
knows it is a winning one. There is a sting and 'devil' in the fourth ball which
would have made even Grace pull himself together. It sent Thompson's bails
over the long-stop's head, and mowed down his wicket like ripe corn before
a thunder-shower.
And now the chivalry of good cricket was apparent; Loamshire had no
desire to 'play out the time.' Even as Thompson was bowled, another
Loamshire man left the pavilion, ready for the fray. If it had been 'cricket,'
Hawker, the Loamshire captain, would have gladly played out the match. As
it was, his man was ready to finish the over. As the two men passed each
other the new-comer gave his defeated friend a playful dig in the ribs, and
remarked, 'Here goes for the score of the match, Edward Anson, duck, not
out!'
As there was only one more ball to be bowled, and only two runs to be
made to secure a win for Loamshire, I'm afraid Anson hardly meant what he
said. Unless it shot underground or was absolutely out of reach, that young
giant, who 'could hit like anything, though not much of a bat,' meant at any
rate to hit that one ball for four. By George, how he opened his shoulders!
how splendidly he lunged out! you could see the great muscles swell as he
made the bat sing through the air, you could almost see the ball going
seaward; and yet—and yet——
The school had risen like one man; they had heard that rattle among the
timber; they knew that Snap's last 'yorker' had done the trick; cool head and
quick hand had pulled the match out of the fire, and even his rival Poynter
was one of the crowd who caught young Hales, tossed him on to their
shoulders, and bore him in triumph to the pavilion, whilst the chapel clock
struck the half-hour.
CHAPTER II
Boys in the fifth form at Fernhall shared a study with one companion.
Monitors of course lived in solitary splendour, with a bed which would stand
on its head, and allowed itself to be shut up in a cupboard in the corner.
Small boys who had not attained even to the fringe of the school aristocracy
lived in herds in bare and exceedingly untidy rooms round the inner quads.
Even in those days there were monitors who were worshippers of art. Some
of them had curtains in their rooms of rich and varied colouring; one of them
had a plate hung up which he declared was a piece of undoubted old
Worcester. Tomlinson was a great authority on objects of virtù, and a rare
connoisseur, but we changed his plate for one which we bought for sixpence
at Newby's, and he never knew the difference. Then there was one fellow
who had several original oil paintings. These represented farmyard scenes
and were attributed indifferently to Landseer, Herring, and a number of other
celebrated artists. Whoever painted them, these pictures were the objects of
more desperate forays than any other property within the school limits. I
remember them well as adorning the room of a certain man of muscle, to
whom, of course, they belonged merely as the spoils of war. The rightful
owner lived three doors off, but I don't think that he ever had the pluck to
attempt to regain his own.
However, in the small boys' rooms there were none of these luxuries of an
effete civilisation. There was a book-shelf full of ragged books, none of
which by any chance ever bore the name of anyone in that study; there was a
table, a gas-burner, a frying-pan, and a kettle. These last-named articles
might have been seen in every study at Fernhall, from the study of the
monitor to that of the pauper, as we called that unfortunate being who had
not yet emerged from the lower school. In the long nights of winter, when
the wild sea roared just beyond the limits of their quad, and the spray came
flying over the sea-wall to be dashed against their study windows, all
Fernhall boys had a common consolation. They called it brewing: not the
brewing of beer or of any intoxicating liquor, but of that cheering cup of tea
which consoles so many thousands, from the London charwoman to the pig-
tailed Chinaman, from the enervated Indian to the half-frozen Russian exile
in Siberia. At first the headmasters of Fernhall tried hard to put down this
practice. Sergeants lurked about our passages, confiscated our kettles,
carried away the frying-pans full of curly rashers from under our longing
eyes, and 'lines' and flagellations were all we got in exchange. At last a new
era began. A great reformer arrived, a 'Head' of liberal leanings and wide
sympathy. This man frowned on coercion, and, instead of taking away our
kettles, gave us a huge range of stoves on which to boil them. From a cook's
point of view, no doubt, the range of stoves was a great improvement on the
old gas-burner, but, in spite of the liberality of the 'Head,' small clusters of
boys still stood night after night on those old study tables and patiently fried
their bacon over the gas.