Vector Spaces
One of my favorite dictionaries (the one from Oxford) defines a vector as "A quantity having direction as well as magnitude, denoted by a line drawn from its original to its final position." What is useful about this definition is that we can draw pictures and use our spatial intuition. An objection to using this as the definition of a vector is: It's not very precise, and thus will be hard to compute with to any degree of accuracy.
The first section of this chapter makes the phrase "a quantity having direction and magnitude" more precise and at the same time develops an algebraic
structure. That is, the addition of one vector to another and the multiplication
of vectors by numbers will be defined, and various properties of these operations
will be stated and proved.
In order to avoid confusing vectors with numbers, all vectors will be in
boldface type.
2.1 R2 through Rn
Fix some point (think of it as the origin in the Euclidean plane), draw two short
rays from this point, and put arrowheads at the tips of the rays (see Figure 2.1).
[Figure 2.1: two vectors, A and B, drawn from a common origin.]
You've just drawn two vectors. Now label one of them A and the other B. The
magnitudes of these vectors are their lengths.
In many applications, vectors are used to represent forces. Since several
forces may act on an object, and the result of this will be the same as if a single
force, called the resultant or sum, acted on the object, we define the sum of
two vectors in order to model how forces combine. Thus, if we want the vector
A + B , we draw a dashed line segment starting at the tip of A , parallel to B ,
with the same length as B ; cf. Figure 2.2a and b. Label the end of this dashed
line c and now draw the vector C ; i.e., draw a line segment from the origin to c
and put an arrowhead there; cf. Figure 2.2a. The vector C is called A + B , and
this way of combining A and B is referred to as the parallelogram law of vector
addition. Note that the construction of B + A will be different from that of
A + B ; cf. Figure 2.2b. However, a little geometry convinces us that we get the
same two vectors. We repeat: the fact that A + B = B + A is something that has to be proved; its truth is not self-evident.
There is another operation (scalar multiplication) that we can perform on a vector, and that is to multiply it by a number. If A is a vector, 2A's meaning is clear. It is the same as A + A; i.e., 2A is a vector twice as long as A, and with the same direction. Thus, we define cA (c ≥ 0) to be the vector pointing in the same direction as A but whose magnitude is c times the magnitude of A. If c is negative, cA points in the direction opposite to A.
Since we will be doing a lot of computing with vectors, we need a method
for doing so other than using a ruler, compass, and protractor. Following in the
footsteps of Descartes and others, we assign a pair of numbers to each vector and
then see how these pairs should be combined in order to model vector addition
and scalar multiplication.
[Figure 2.2: (a) the construction of A + B; (b) the construction of B + A.]
Let's go back to Figure 2.1. This time we label our initial point (0,0),
the origin in the Euclidean plane. We also draw two perpendicular lines that
intersect at (0,0). We call the horizontal line the x1 axis and the vertical line
the x2 axis. We next associate a number with each point on these axes. This
number indicates the directed distance of the point from the origin; that is,
the point on the x1 axis labeled 2 is 2 units to the right of the origin while the point labeled −2 is 2 units to the left. How large a distance the number
1 represents is arbitrary. For the x2 axis, a positive number indicates that the
point lies above the origin while a negative number means that the point lies
below the origin. We now associate an ordered pair of numbers with each point
in the plane. The first number tells us the directed distance of the point from
the x2 axis along a line parallel to the x1 axis and the second number gives us
the directed distance of the point from the x1 axis along a line parallel to the
x2 axis; cf. Figure 2.5. Every vector, which we picture as an arrow emanating
from the origin, is uniquely determined once we know where its tip is located.
This means that every vector can be uniquely associated with an ordered pair
of numbers. In other words, two vectors are equal if and only if their respective
coordinates are the same.
[Figure 2.3: (a) A and 2A; (b) cA for c > 0; (c) cA for c < 0.]
Our next task is to determine how the number pairs, i.e., the coordinates,
of two vectors should be combined to give their sum. Let A = (a1, a2) and B = (b1, b2) be two vectors. Then a simple proof using congruent triangles yields that A + B = (a1 + b1, a2 + b2); cf. Figure 2.6a. Similar arguments show us that cA = (ca1, ca2) if c is a rational number; cf. Figure 2.6b. We haven't talked about subtracting one vector from another yet, but A − B should be equal to a vector such that (A − B) + B = A. Thus if A = (a1, a2) and B = (b1, b2), then A − B = (a1 − b1, a2 − b2).
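The component formulas above are easy to check numerically. Python and NumPy are not part of the text; the following is only a sketch, with all names our own:

```python
import numpy as np

# Vectors as coordinate pairs, combined by the component formulas above.
A = np.array([3.0, -1.0])
B = np.array([2.0, 5.0])

S = A + B          # A + B = (a1 + b1, a2 + b2)
half = 0.5 * A     # cA = (c*a1, c*a2)
D = A - B          # A - B is the vector satisfying (A - B) + B = A
```

NumPy applies +, −, and scalar * componentwise, which is exactly the coordinate model being developed here.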
[Figure 2.4: the coordinate axes, with the points (−2, 0), (0, 0), (2, 0), and (0, 1) marked.]
[Figure 2.5: the coordinates (a, b) of a point, located via (a, 0) and (0, b); the signs of a and b in each of the four quadrants.]
We now formally define R2 along with vector addition and scalar multiplication for these particular vectors. R2 is the classical example of a two-dimensional
vector space.
[Figure 2.6: (a) A + B = (a1 + b1, a2 + b2); (b) cA = (ca1, ca2).]
= (−3 + 12, −4 + 2) = (9, −2)
Thus
X = (1/3)(9, −2) = (3, −2/3)
Example 3. Given any vector in R2 write it as the sum of two vectors that are
parallel to the coordinate axes.
Solution. Let A = (a1 , a2 ) be any vector in R2 . To say that a vector X is parallel
to the x1 axis is to say that the second component of X's representation as an
ordered pair of numbers is zero. A similar comment applies to vectors parallel
to the x2 axis. Thus,
A = (a1 , a2 ) = (a1 , 0) + (0, a2 )
The first vector, (a1 , 0), is parallel to the x1 axis and the second vector, (0, a2 ),
is parallel to the x2 axis. We could also write
A = (a1 , 0) + (0, a2 ) = a1 (1, 0) + a2 (0, 1)
The vectors (1, 0) and (0, 1) are often denoted by i and j, respectively, and in this notation (a1, a2) = a1 i + a2 j.
[Figure 2.7: the decomposition (a1, a2) = a1 i + a2 j, with a1 i = (a1, 0) and a2 j = (0, a2).]
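The decomposition (a1, a2) = a1 i + a2 j can be mirrored in code; a small sketch (the function name is ours):

```python
import numpy as np

i = np.array([1.0, 0.0])   # the vector (1, 0)
j = np.array([0.0, 1.0])   # the vector (0, 1)

def axis_parts(a):
    """Split a = (a1, a2) into its parts a1*i and a2*j parallel to the axes."""
    a1, a2 = a
    return a1 * i, a2 * j

p1, p2 = axis_parts(np.array([3.0, -2.0]))   # parts parallel to the x1 and x2 axes
```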
The theorem below lists some of the algebraic properties that vector addition
and scalar multiplication in R2 satisfy.
Theorem 2.1. Let A = (a1 , a2 ), B = (b1 , b2 ), and C = (c1 , c2 ) be three arbitrary
vectors in R2 . Let a and b be any two numbers. Then the following equations
are true.
1. A + B = B + A
2. (A + B) + C = A + (B + C)
3. Let 0 = (0, 0); then A + 0 = A [zero vector]
4. For every A there is a −A such that A + (−A) = 0 [−A = (−a1, −a2)]
5. a(A + B) = aA + aB
6. (a + b)A = aA + bA
7. (ab)A = a(bA)
8. 1A = A
Proof. We verify equations 1, 4, 5, and 8, leaving the others for the reader.
1. A + B = (a1 , a2 ) + (b1 , b2 ) = (a1 + b1 , a2 + b2 ) = (b1 + a1 , b2 + a2 )
= (b1 , b2 ) + (a1 , a2 ) = B + A
4. A + (−A) = (a1, a2) + (−a1, −a2) = (a1 − a1, a2 − a2) = (0, 0) = 0
5. a(A + B) = a[(a1, a2) + (b1, b2)] = a(a1 + b1, a2 + b2)
= (a(a1 + b1), a(a2 + b2)) = (aa1 + ab1, aa2 + ab2)
= (aa1, aa2) + (ab1, ab2) = a(a1, a2) + a(b1, b2)
= aA + aB
8. 1A = 1(a1, a2) = (1a1, 1a2) = (a1, a2) = A
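The eight properties of Theorem 2.1 can be spot-checked on random vectors. This is only numerical evidence, not a proof; NumPy and the names below are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(3, 2))   # three random vectors in R^2
a, b = 2.0, -3.0

checks = {
    "commutativity": np.allclose(A + B, B + A),               # property 1
    "associativity": np.allclose((A + B) + C, A + (B + C)),   # property 2
    "distributes_vectors": np.allclose(a * (A + B), a * A + a * B),  # property 5
    "distributes_scalars": np.allclose((a + b) * A, a * A + b * A),  # property 6
    "scalar_assoc": np.allclose((a * b) * A, a * (b * A)),    # property 7
    "identity": np.allclose(1 * A, A),                        # property 8
}
```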
We next discuss the standard three-dimensional space R3 . As with R2 , we
first picture arrows starting at some fixed point and ending at any other point
in three-dimensional space. We draw three mutually perpendicular coordinate
axes x1 , x2 , and x3 and impose a distance scale on each of the axes. To each
point P in three space we associate an ordered triple of numbers (a, b, c), where a
denotes the directed distance from P to the x2 , x3 plane, b the directed distance
from P to the x1 , x3 plane, and c the directed distance from P to the x1 , x2
plane. Just as we did for R2 , we now think of vectors in three space both as
arrows and as ordered triples of numbers.
Example 4. For each of the triples of numbers below sketch the vector they
represent.
a. (1,0,0), (0,1,0), (0,0,1): these three vectors are commonly denoted by i , j ,
and k , respectively.
[Figure: the standard vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1) drawn on the x1, x2, x3 axes.]
b. (1, 2, 1)
[Figure: the vector (1, 2, 1), located via the points (1, 0, 0), (0, 2, 0), and (1, 2, 0).]
Definition 2.2 defines R3 and the algebraic operations of vector addition and
scalar multiplication for this vector space.
Definition 2.2. R3 = {(x1 , x2 , x3 ) : x1 , x2 , and x3 are any real numbers}.
Vector addition and scalar multiplication are defined as follows:
1. A + B = (a1, a2, a3) + (b1, b2, b3) = (a1 + b1, a2 + b2, a3 + b3), for any vectors A and B in R3.
2. cA = c(a1, a2, a3) = (ca1, ca2, ca3), for any vector A and any number c.
Example 5. Let A = (1, −1, 0), B = (0, 1, 2). Compute the following vectors:
a. A + B = (1, −1, 0) + (0, 1, 2) = (1, 0, 2)
b. 2A = 2(1, −1, 0) = (2, −2, 0)
Having defined R2 and R3 , we now define Rn , i.e., the set of ordered n-tuples
of real numbers.
Definition 2.3. Rn = {(x1, x2, . . . , xn) : x1, x2, . . . , xn are arbitrary real numbers}. If A and B are any two vectors in Rn and a is any real number, we define vector addition and scalar multiplication in Rn as follows:
1. A + B = (a1, a2, . . . , an) + (b1, b2, . . . , bn) = (a1 + b1, a2 + b2, . . . , an + bn)
2. aA = a(a1, a2, . . . , an) = (aa1, aa2, . . . , aan)
We can also write (1, 6, 4) as a linear combination of the three vectors i , j , and
k.
(1, 6, 4) = (1, 0, 0) + 6(0, 1, 0) + 4(0, 0, 1) = i + 6j + 4k.
A common mistake is to think that R2 is a subset of R3 ; that is, W =
{(x1 , x2 , 0) : x1 and x2 arbitrary} is equated with R2 . Clearly W is not R2 ,
since W consists of triples of numbers, while R2 consists of pairs of numbers.
Example 7. Solve the following vector equation in R5:
2(−1, 4, 2, 0, −1) + 6X = 3(2, 0, 6, 1, 1)
Solution.
6X = (6, 0, 18, 3, 3) − 2(−1, 4, 2, 0, −1) = (8, −8, 14, 3, 5)
Thus,
X = (1/6)(8, −8, 14, 3, 5) = (4/3, −4/3, 7/3, 1/2, 5/6)
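This bookkeeping can be done exactly with rational arithmetic. We read the example's equation as 2u + 6X = 3v with u = (−1, 4, 2, 0, −1) and v = (2, 0, 6, 1, 1); the signs are reconstructed, so treat the data as illustrative:

```python
from fractions import Fraction

# Solving 2u + 6X = 3v for X in R^5.
u = [Fraction(c) for c in (-1, 4, 2, 0, -1)]
v = [Fraction(c) for c in (2, 0, 6, 1, 1)]

# Componentwise: X = (3v - 2u) / 6
X = [(3 * vk - 2 * uk) / 6 for uk, vk in zip(u, v)]
```

Using Fraction instead of floats keeps the answer in the exact form (4/3, −4/3, 7/3, 1/2, 5/6) that hand computation gives.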
b. A + B, A − B
c. 2A − 3B
3. Let A = (1, 1). Sketch all vectors of the form tA, where t is an arbitrary real number.
4. Let A = (1, 2). Let B = (2, 6). Find X such that 6X + 2A = 3B.
5. Let A and B be the same vectors as in problem 4. Sketch vectors of the
form X = c1A + c2B for various values of c1 and c2 . Which vectors in R2
can be written in this manner?
6. Prove Theorem 2.2 for n = 3.
7. Let A = (1, 1, 2), B = (0, 1, 0).
a. Find all vectors x in R3 such that x = t1A + t2B for some constants
t1 and t2 .
b. Is the vector (0,0,1) one of the vectors you found in part a?
8. Let x = (1, 0), y = (0, 1). Give a geometrical description of the following
sets of vectors.
a. {tx + (1 − t)y : 0 ≤ t ≤ 1}
b. {t1 x + t2 y : 0 ≤ t1 ≤ 1, 0 ≤ t2 ≤ 1}
c. {tx + (1 − t)y : t any real number}
d. {t1 x + t2 y : 0 ≤ t1, 1 ≤ t2}
9. Let x = (1, 2), y = (1, 1). Describe the following sets of vectors.
a. {tx + (1 − t)y : 0 ≤ t ≤ 1}
b. {t1 x + t2 y : 0 ≤ t1 ≤ 1, 0 ≤ t2 ≤ 1}
d. {t1 x + t2 y : 0 ≤ t1, 1 ≤ t2}
10. Let x = (1, 1, 0) and let y = (1, 1, 1). Describe the following sets of vectors:
a. {tx + (1 − t)y : 0 ≤ t ≤ 1}
b. {t1 x + t2 y : 0 ≤ t1 ≤ 1, 0 ≤ t2 ≤ 1}
c. {tx + (1 − t)y : t any real number}
W2 = {(x1, x2, x3) : x1 = x2}
W3 = {(x1, x2, x3) : x1 + x2 = 0}
Graph each of the following sets:
a. W1 ∩ W2
b. W2 ∩ W3
c. W1 ∩ W2 ∩ W3

2.2
10. 1x = x
Thus
P0 = {a : a is any real number} = R1
We remind the reader that a vector space is not just a set, but a set with
two operations: vector addition and scalar multiplication. If we change any one
of these three, all ten properties must again be checked before we can say that
we still have a vector space.
In the future when we deal with any of the standard vector spaces Rn ,
Pn , Mmn , or the vector space in Example 7, the operations of vector addition
and scalar multiplication will not be explicitly stated. The reader, unless told
otherwise, should assume the standard operations in these spaces.
10. Let V = {A, B, . . . , Z}; that is, V is the set of capital letters. Define
A + B = C, B + C = D, A + E = F ; i.e., the sum of two letters is the first
letter larger than either of the summands, unless Z is one of the letters
to be added. In that case the sum will always be A. Can you define a
way to multiply the elements in V by real numbers in such a way that V
becomes a vector space?
11. Let V = {(x, 0, y) : x and y are arbitrary real numbers}. Define addition
and scalar multiplication as follows:
(x1 , 0, y1 ) + (x2 , 0, y2 ) = (x1 + x2 , y1 + y2 )
c(x, 0, y) = (cx, cy)
Is V a vector space?
12. Let Vc = {(x1 , x2 ) : x1 + 2x2 = c}.
a. For each value of c sketch Vc .
b. For what values of c, if any, is Vc a vector space?
13. Let Vc be the set of all 2 × 3 matrices [a1 a2 a3; a4 a5 a6] such that a1 + a2 + a3 = c. For what values of c is Vc a vector space?
14. Let Vc = {p : p is in P4 and p(c) = 0}. For what values of c is Vc a vector space?
15. Let V = {A : A is in M23 and Σ_{i,j} aij = 0}. That is, A is in V if A is a 2 × 3 matrix whose entries sum to zero. Is V a vector space?
16. Let V1 = {(x1, x2) : x1^2 + x2^2 = 1}, V2 = {(x1, x2) : x1^2 + x2^2 ≤ 1}, V3 = {(x1, x2) : x1 ≥ 0, x2 ≥ 0}, and V4 = {(x1, x2) : x1 + x2 ≥ 0}. Which of these subsets of R2 is a vector space?
17. Let V = {p : p is in P17 and p(1) + p(6) − p(2) = 0}. Is V a vector space? Does the subscript 17 have any bearing on the matter? What if the constraint is p(1) + p(6) − p(2) = 1?
2.3
The material covered so far has been relatively concrete and easy to absorb, but
we now have to start thinking about an abstract concept, vector spaces. The
only rules that we may use are those listed in Definition 2.4 and any others we
are able to deduce from them. This, especially for the novice, is not easy and
is somewhat tedious, but well worth the effort.
In the definition of a vector space, we have as axioms the existence of a
zero vector (axiom 5) and an additive inverse (axiom 6) for every vector. One
question that occurs is, how many zero vectors and inverses are there? Let's see.
Quite often when the statement of a theorem or definition is read for the first
time, its meaning is lost in a maze of strange words and concepts. The reader
is strongly urged to always go back to R2 or R3 and try to understand what the
statement means in this perhaps friendlier setting. For example, before proving
Theorem 2.3 we analyzed the special case V = R2 .
Given a collection of vectors {xk : k = 1, . . . , n}, we often wish to form arbitrary sums of these vectors; that is, we wish to look at all vectors x of the form
x = c1x1 + c2x2 + · · · + cnxn = Σ_{k=1}^{n} ck xk    (2.1)
where the ck are arbitrary real numbers. Such sums are called linear combinations of the vectors xk.
Example 1. Let x1 = (1, 2, 0), x2 = (−1, 1, 0). Determine all linear combinations of these two vectors.
Solution. Any linear combination of these two vectors must be of the form
x = c1x1 + c2x2 = c1(1, 2, 0) + c2(−1, 1, 0) = (c1, 2c1, 0) + (−c2, c2, 0) = (c1 − c2, 2c1 + c2, 0)
Thus, no matter what values the constants c1 and c2 take, the third component of x will always be zero, and it seems likely that as c1 and c2 vary over all pairs of real numbers, so too will the terms c1 − c2 and 2c1 + c2. We conjecture that the set of all linear combinations of these two vectors will be the set of vectors S in R3 whose third component is zero. To prove this, we only have to show that if x is in S, then there are constants c1 and c2 such that x = c1x1 + c2x2. Thus suppose x is in S. Then x = (a, b, 0) for some numbers a and b. We want to find constants c1 and c2 such that
(a, b, 0) = c1(1, 2, 0) + c2(−1, 1, 0) = (c1 − c2, 2c1 + c2, 0)
Equating components gives c1 − c2 = a and 2c1 + c2 = b. The solution is
c1 = (a + b)/3    c2 = (−2a + b)/3
Thus (a, b, 0) is a linear combination of (1, 2, 0) and (−1, 1, 0).
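Finding the constants c1 and c2 is a 2 × 2 linear system, which can be handed to a solver; a sketch, taking the second spanning vector to be (−1, 1, 0) as in the solution above:

```python
import numpy as np

# (a, b, 0) = c1*(1, 2, 0) + c2*(-1, 1, 0) reduces to the 2x2 system
#   c1 - c2 = a
#   2*c1 + c2 = b
M = np.array([[1.0, -1.0],
              [2.0,  1.0]])

a, b = 5.0, 2.0
c1, c2 = np.linalg.solve(M, np.array([a, b]))   # c1 = (a+b)/3, c2 = (-2a+b)/3
```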
The set of all linear combinations of a set of vectors will occur frequently
enough that we give it a special name and notation.
xk : k = 1, 2, . . . , n} in some
Definition 2.5. Given a set of vectors A = {x
vector space V , the set of all linear combinations of these vectors will be called
their linear span, or span, and denoted by S[A].
The result of the previous example restated in this notation is
S[(1, 2, 0), (−1, 1, 0)] = {(x1, x2, 0) : x1 and x2 are arbitrary real numbers}.
Example 2. Find the span of the vector (1, −1).
[Figure 2.8: (a) the vector (1, −1); (b) its span, the line x1 + x2 = 0.]
[Figure 2.9: (a) the vectors (1, 0, 0) and (1, 1, 0); (b) S[A] = the x1, x2 plane.]
The span of a nonempty set of vectors has some properties that we state and
prove in the next theorem.
Theorem 2.5. Let S[A] be the linear span of the set A = {xk : k = 1, 2, . . . , n}. Then S[A] satisfies axioms 1 through 10 of the definition of a vector space. That is, S[A] is a vector space.
Proof.
1. Let y and z be in S[A]. Then there are constants aj and bj, j = 1, 2, . . . , n, such that
y = Σ_{j=1}^{n} aj xj    z = Σ_{j=1}^{n} bj xj
Thus
y + z = Σ_{j=1}^{n} aj xj + Σ_{j=1}^{n} bj xj = Σ_{j=1}^{n} (aj xj + bj xj) = Σ_{j=1}^{n} (aj + bj) xj
and y + z is in S[A].
2. If x = Σ_{j=1}^{n} cj xj is in S[A] and k is any number, then
kx = k Σ_{j=1}^{n} cj xj = Σ_{j=1}^{n} (kcj) xj
and we have that kx is in S[A].
3. and 4. Commutativity and associativity of vector addition are true because we are in a vector space to begin with.
5. and 6. The zero vector is in S[A], since 0 = Σ_{j=1}^{n} 0 xj; and if x = Σ_{j=1}^{n} cj xj, then −x = Σ_{j=1}^{n} (−cj) xj is in S[A] as well.
7, 8, 9, and 10 are true because they are true for all the vectors in V, and hence these properties are true for the vectors in S[A]. Remember, everything in S[A] is automatically in V, since V is a vector space and is closed under linear combinations.
This fact, that the span of a set of vectors is itself a vector space contained in
the original vector space, is expressed by saying that the span is a subspace of
V.
Definition 2.6. Let V be a vector space. Let W be a nonempty subset of V .
Then if W (using the same operations of vector addition and scalar multiplication as in V ) is also a vector space, we say that W is a subspace of V .
Example 4. V = R3. Let W = {(r, 0, 0) : r is any real number}. Show that W is a subspace of V.
[Figure 2.10: W is the x1 axis in R3.]
Solution. Since the operations of vector addition and scalar multiplication will
be unchanged, we know that W satisfies properties 3, 4, 7, 8, 9, and 10. Hence,
we merely need to verify that W satisfies 1, 2, 5, and 6 of Definition 2.4. Thus
suppose that x and y are in W . Then x = (r, 0, 0) and y = (s, 0, 0) for some
numbers r and s, and
x + y = (r, 0, 0) + (s, 0, 0) = (r + s, 0, 0)
which is in W. For any number c, cx = c(r, 0, 0) = (cr, 0, 0) is in W; in particular 0 = (0, 0, 0) (take r = 0) and −x = (−r, 0, 0) are in W. Thus W is a subspace of V = R3.
Example 5. Let V be any vector space and let A be any nonempty subset of
V . Then S[A] is a subspace of V . This is merely Theorem 2.5 restated.
Example 6. Let x = (a, b, c) be any nonzero vector in R3. Then S[x], which is a subspace of R3, is just the straight line passing through the origin and the point with coordinates (a, b, c). See Figure 2.11. The reader should verify the details of this example.
To verify that a subset of a vector space is a subspace can be tedious. The
following theorem shows that it is sufficient to verify axioms 1 and 2 of Definition 2.4.
[Figure 2.11: the line S[x] through the origin and the point x = (a, b, c).]
A few row operations on the augmented matrix of this system show that it is row equivalent to the matrix
[1 3 0 | 2]
[0 1 0 | 0]
[0 0 1 | 4]
[0 0 0 | 5]
The nonhomogeneous system, which has the above matrix as its augmented matrix, has no solution. This means that W does not contain the given vector.
Example 8. Let V = R2. Let W = {(x, sin x) : x is a real number}. Show that while W contains 0 and the additive inverse −x of every vector x in W, it still is not a subspace.
Solution. To see that 0 = (0, 0) is in W, we only need observe that sin 0 = 0. Thus (0, sin 0) = (0, 0) is in W. If x is in W, then x = (r, sin r) for some number r. But then
−x = −(r, sin r) = (−r, −sin r) = (−r, sin(−r))
must also be in W. To see that W is not a subspace, we note that if W were a subspace, c(1, sin 1) must be in W for any choice of the scalar c. Since sin 1 > 0, by making c large enough we will have c sin 1 > 1. Since −1 ≤ sin x ≤ 1 for all x, we cannot have sin x = c sin 1 for any x. Thus, c(1, sin 1) = (c, c sin 1) is not in W. Hence, W is not closed under scalar multiplication, and it cannot be a subspace.
[Figure 2.12: the graph of W = {(x, sin x)}.]
Example 9. Let A be an m × n matrix. Let K = {x : x is in Rn and Ax = 0}. Thus K is the solution set of the system of linear homogeneous equations whose coefficient matrix is A. Show that K is a subspace of Rn. Notice that Rn is being thought of as the set of n × 1 matrices.
Solution. We first note that 0 is in K, since A0 = 0. Thus K is nonempty. Suppose now that x1 and x2 are in K. Then
A(x1 + x2) = Ax1 + Ax2 = 0 + 0 = 0
and K is closed under addition. To see that K is closed under scalar multiplication we have, if x is in K and c is any number:
A(cx) = cA(x) = c0 = 0
Hence, by Theorem 2.6, K is a subspace of Rn .
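The closure argument in Example 9 can be watched in action on a concrete matrix; the matrix and vectors below are our own illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0]])   # a 2x3 matrix; K = {x in R^3 : Ax = 0}

x1 = np.array([1.0, 0.0, 1.0])    # A @ x1 = 0, so x1 is in K
x2 = np.array([2.0, -1.0, 0.0])   # A @ x2 = 0, so x2 is in K

# Closure under addition and scalar multiplication, as in the proof:
closed_add = np.allclose(A @ (x1 + x2), 0)
closed_mul = np.allclose(A @ (3.7 * x1), 0)
```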
a2 = c1
a3 = c3 + c4
a4 = c3
Hence,
[a1 a2]   =  (−a2) [1 −1]  +  a4 [0 0]  +  (a1 + a2) [1 0]  +  (a3 − a4) [0 0]
[a3 a4]            [0  0]        [1 1]               [0 0]               [1 0]
13. Let A = [1 0 3; 4 4 6]. Let K be the solution set of Ax = 0, for x in R3. From Example 9 we know that K is a subspace of R3. Find a vector x0 such that S[x0] = K.
Σ_{j=1}^{3} aij = 0 for i = 1, 2}
17. Let V be the vector space of all n × n matrices. Show that the following subsets of V are subspaces:
a. W = all n × n scalar matrices = {cIn : c is any number}
18. Let W be the set of all n × n invertible matrices. Show that W is not a subspace of the vector space of all n × n matrices.
19. Let W be any subspace of the vector space Mmn of all m × n matrices. Show that W^T, that subset of Mnm consisting of the transposes of all the matrices in W, is a subspace of Mnm.
20. Let P2 be all polynomials of degree 2 or less. Let W = {a + bt : a and b
arbitrary real numbers}. Is W a subspace of P2 ?
21. Let V be a vector space. Let W1 and W2 be any two subspaces of V. Let W = W1 ∩ W2 = {x : x belongs to both W1 and W2}. Show that W is a subspace of V.
22. Let V, W1, and W2 be as in problem 21. Let W = W1 ∪ W2 = {x : x belongs to W1 or W2}. Show that W need not be a subspace of V.
23. Let V, W1, and W2 be as in problem 21. Define W1 + W2 as that subset of V which consists of all possible sums of the vectors in W1 and W2, that is
W1 + W2 = {u + v : u and v any vectors in W1 and W2}
Show W1 + W2 is a subspace of V and that any other subspace of V which contains W1 and W2 must also contain W1 + W2. Thus, W1 + W2 is the smallest subspace of V containing W1 ∪ W2.
24. Let V = C[a, b]. Let W be that subset of V which consists of all polynomials. Is W a subspace of V ?
25. Let W = {. . . , −2, −1, 0, 1, 2, . . .}. Is W a subspace of R1?
26. Let W = {(x1, x2) : x1 or x2 equals zero}. Let W1 = {(x1, 0)} and W2 = {(0, x2)}. Thus, W = W1 ∪ W2. Show that both W1 and W2 are subspaces of R2, and that W is not a subspace.
27. Let W = {(x1, x2) : x1^2 + x2^2 ≤ c}. For which values of c is W a subspace of R2?
28. Let W1 = {p : p is in P2 and p(1) = 0}. Let W2 = {p : p is in P2 and p(2) = 0}.
a. Find a spanning set for W1 W2 .
b. Find spanning sets for W1 and W2 each of which contains the spanning set you found in part a.
29. Example 9 shows that the solution sets of homogeneous systems of equations are subspaces. Find spanning sets for each of the solution spaces of
the following systems of equations:
a. 2x1 + 6x2 − x3 = 0
b. 2x1 + 6x2 − x3 = 0, x2 + 4x3 = 0
c. 2x1 + 6x2 − x3 = 0, x2 + 4x3 = 0, x1 + x2 = 0
30. Let A = [2 6 −1; 3 1 0]. Let V = {B : B is in M34 and AB = 0_{2×4}}.
a. Show V is a subspace of M34.
b. Find a spanning set for V.
2.4 Linear Independence
In this section we discuss what it means to say that a set of vectors is linearly independent. We encourage the reader to go over this material several times. The
ideas discussed here are important, but hard to digest, thus needing rumination.
Consider the two vectors x = (1, 0, 1) and y = (0, 1, 1) in R3 . They have
the following property: Any vector z in their span can be written in only one
way as a linear combination of these two vectors. Let's check the details of this. Suppose there are constants c1, c2, c3, and c4 such that
z = c1x + c2y = c3x + c4y    (2.2)
0 = Σ_{k=1}^{n} ck xk
then
c1 = c2 = · · · = cn = 0
That is, the zero vector can be written in only one way as a linear combination of these vectors.
A set of vectors is said to be linearly dependent if it is not linearly independent. That is, a set of vectors {xk : k = 1, . . . , n} is linearly dependent if and only if there are constants c1, c2, . . . , cn, not all zero, such that 0 = c1x1 + c2x2 + · · · + cnxn.
Example 1. Show that the vectors {(1, 0, 1), (2, 0, 3), (0, 0, 1)} are linearly dependent.
Solution. We need to find three constants c1, c2, c3, not all zero, such that
c1(1, 0, 1) + c2(2, 0, 3) + c3(0, 0, 1) = (0, 0, 0)
Thus we have to find a nontrivial solution to the following system of equations:
c1 + 2c2 = 0
c1 + 3c2 + c3 = 0
This system has many nontrivial solutions. A particular one is c1 = 2, c2 = −1, and c3 = 1. Since we have found a nontrivial linear combination of these vectors, namely, 2(1, 0, 1) − (2, 0, 3) + (0, 0, 1), which equals zero, they are linearly dependent.
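Dependence can also be detected mechanically: n vectors are linearly dependent exactly when the matrix having them as rows has rank less than n. A sketch using the vectors of Example 1 (NumPy is our choice of tool, not the text's):

```python
import numpy as np

# The three vectors from Example 1, as rows of a matrix.
V = np.array([[1.0, 0.0, 1.0],
              [2.0, 0.0, 3.0],
              [0.0, 0.0, 1.0]])

# Rank < number of vectors <=> linearly dependent.
dependent = np.linalg.matrix_rank(V) < 3

# The particular nontrivial combination found above: 2*v1 - v2 + v3 = 0.
combo = 2 * V[0] - V[1] + V[2]
```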
Example 2. Determine whether the set
{(1, 0, 1, 1, 2), (0, 1, 1, 2, 3), (1, 1, 0, 1, 0)}
is linearly dependent or independent.
Solution. We have to decide whether or not there are constants c1, c2, c3, not all zero, such that
(0, 0, 0, 0, 0) = c1(1, 0, 1, 1, 2) + c2(0, 1, 1, 2, 3) + c3(1, 1, 0, 1, 0)
Equating components gives the system
c1 + c3 = 0
c2 + c3 = 0
c1 + c2 = 0
c1 + 2c2 + c3 = 0
2c1 + 3c2 = 0
The only solution is the trivial one, c1 = c2 = c3 = 0. Thus the vectors are linearly independent.
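The same rank test confirms Example 2; full rank means the homogeneous system has only the trivial solution:

```python
import numpy as np

# The three vectors from Example 2, in R^5, as rows.
V = np.array([[1.0, 0.0, 1.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 2.0, 3.0],
              [1.0, 1.0, 0.0, 1.0, 0.0]])

# Rank equal to the number of vectors <=> linearly independent.
independent = np.linalg.matrix_rank(V) == 3
```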
To say that a set of vectors is linearly independent is to say that the zero
vector can be written in only one way as a linear combination of these vectors;
that is, every coefficient ck must equal zero. The following theorem shows that
even more is true for a linearly independent set of vectors.
Theorem 2.7. If A = {xk : k = 1, . . . , p} is a linearly independent set of vectors, then every vector in S[A] can be written in only one way as a linear combination of the vectors in A.
Proof. Let y be any vector in S[A]. Suppose that we have
y = Σ_{k=1}^{p} ak xk = Σ_{k=1}^{p} bk xk
Then
0 = Σ_{k=1}^{p} ak xk − Σ_{k=1}^{p} bk xk = Σ_{k=1}^{p} (ak − bk) xk
Since the vectors xk are linearly independent, every coefficient must equal zero:
a1 − b1 = 0, a2 − b2 = 0, . . . , ap − bp = 0
Thus, we can write any vector in S[A] as a linear combination of the linearly independent vectors xk in one and only one way.
The unique constants ak that are associated with any vector x in S[A] are
given a special name.
Definition 2.8. Given a linearly independent set of vectors A = {xk : k = 1, . . . , n}. If x = Σ_{k=1}^{n} ak xk, the coefficients ak, 1 ≤ k ≤ n, are called the coordinates of x with respect to A. Note that there is an explicit ordering of the vectors in the set A.
Example 3. Show that the set A = {(1, 1, 1), (0, 1, 0), (1, 0, 2)} is linearly
independent, and then find the coordinates of (2, 4, 6) with respect to A.
Solution. To see that A is linearly independent suppose
c1 (1, 1, 1) + c2 (0, 1, 0) + c3 (1, 0, 2) = (0, 0, 0)
c1 + c3 = 0, c2 = 0
Since the only solution to this system is the trivial one, F is a linearly independent subset of P2.
Theorem 2.8. A set of vectors {xk : k = 1, . . . , n} is linearly dependent if and only if one of its vectors can be written as a linear combination of the remaining n − 1 vectors.
Proof. Suppose one of the vectors, say x 1 , can be written as a linear combination
of the others. Then we have
x 1 = c2x 2 + c3x 3 + + cnx n
for some constants ck . But then
0 = x1 − c2x2 − · · · − cnxn
and we have found a nontrivial (the coefficient of x1 is 1) linear combination of
these n vectors that equals the zero vector. Thus, this set of vectors is linearly
dependent. Suppose now that there is a nontrivial linear combination of these
vectors that equals 0 ; that is
0 = c1x 1 + + cnx n
and not all the constants ck are zero. We may suppose without loss of generality
that c1 is not zero (merely rename the vectors). Solving the above equation for
x 1 , we have
x1 = (1/c1)(−c2x2 − · · · − cnxn) = −(c2/c1)x2 − · · · − (cn/c1)xn.
Hence, we have written one of the vectors in our linearly dependent set as a
linear combination of the remaining vectors.
If a linearly dependent set contains exactly two vectors, then one of them
must be a multiple of the other. In R3 , three nonzero vectors are linearly
dependent if and only if they are coplanar, i.e., one of them lies in the plane
spanned by the other two. For example, the three vectors (1,0,1), (2,0,3), and
(0,0,1), as we saw in Example 1, are linearly dependent and all three of them
lie in the plane generated by (1,0,1) and (2,0,3); that is, the plane x2 = 0.
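Theorem 2.8 in miniature, using the dependent set from Example 1: the relation 2(1, 0, 1) − (2, 0, 3) + (0, 0, 1) = 0 can be solved for the middle vector (the code is our own sketch):

```python
import numpy as np

v1, v2, v3 = np.array([[1.0, 0.0, 1.0],
                       [2.0, 0.0, 3.0],
                       [0.0, 0.0, 1.0]])

# From 2*v1 - v2 + v3 = 0, solving for v2 gives v2 = 2*v1 + v3,
# exhibiting v2 as a linear combination of the remaining vectors.
v2_rebuilt = 2 * v1 + v3
```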
6. Show that any set of vectors containing the zero vector must be linearly
dependent.
7. Suppose that A is a linearly dependent set of vectors and B is any set
containing A. Show that B must also be linearly dependent.
8. Show that any nonempty subset of a linearly independent set is linearly
independent.
9. Show that any set of four or more vectors in R3 must be linearly dependent.
10. Let A = {(0, 1, 1), (1, 0, 1), (1, 1, 0)}. Is A linearly dependent? S[A] =?
11. Let A = {1 + t, 1 − t^2, t^2}. Is A a linearly independent subset of P2?
S[A] =?
12. Let A = {sin t, cos t, sin(t + π)}. Is A a linearly independent subset of C[0, 1]? Does A span C[0, 1]?
13. Let A = {(x1, x2) : x1^2 + x2^2 = 1}. Is A a linearly independent subset of R2? Is A a spanning subset of R2?
14. Show that F = {1 + t, t^2 − t, 2} is a linearly independent subset of P2. Show S[F] = P2.
15. Show that {sin t, sin 2t, cos t} is a linearly independent subset of C[0, 1].
Does it span C[0, 1]?
16. Let V = {p(t) : p is in P2 and ∫_{0}^{1} p(t) dt = 0}. Find a spanning set of V that has exactly two vectors in it. Show that your set is also linearly independent.
17. Let V = {p(t) : p is in P3 and p(0) = 0, p(1) − p(0) = 0}. Find a spanning set of V that has exactly two vectors. Is the set also linearly independent?
19. Let A be a subset of a vector space V. We say that A is linearly independent if, whenever {x1, . . . , xp} is a finite subset of A and c1x1 + · · · + cpxp = 0, then each ck = 0.
a. Show that if A is a finite set of vectors to begin with, then Definition 2.7 and the above definition are equivalent.
b. Show Theorem 2.7 is also true using this definition of linear independence. Note: S[A] is the collection of all possible finite linear combinations of the vectors in A.
20. Let V = C[0, 1], the space of real-valued continuous functions defined on
[0,1]. Let A = {1, t, . . . , tn , . . .}.
a. Show that A is a linearly independent set as defined in problem 19.
b. What is S[A]?
21. Let V be the set of all polynomials with only even powers of t (constants
have even degree). Thus 1 + t is not in V , 1 + t + t2 is not in V , but 1 + t2
and t4 + 5t8 are in V . Let A = {1, t2 , t4 , . . . , t2n , . . .}.
a. Show that V is a vector space and that A is a linearly independent
subset of V .
b. S[A] =?
2.5 Bases
Proof. Suppose there are constants c, c1, . . . , cn such that
0 = cy + c1x1 + · · · + cnxn    (2.6)
= {(x1, x2, x3) : x2 − 2x1 = x3}
From this last description of S[A], it is easy to find vectors that are not in
the span of A. For example, (1,0,0) is one such vector. Since A is linearly
independent, Lemma 2.2 now tells us that the set
{(1, 0, 0), (1, 1, −1), (0, 1, 1)}
is also linearly independent. The reader may easily verify, and should do so,
that this set does indeed span R3 .
Definition 2.9. Let V be a vector space. A subset B of V is called a basis of
V if it satisfies:
1. B is linearly independent.
2. B is a spanning set for V ; that is, every vector in V can be written as a
linear combination of the vectors in B.
It can be shown that every vector space containing more than the zero vector
has a basis. We shall not prove this theorem for arbitrary spaces but instead
will exhibit a basis for most of the vector spaces discussed in this text.
Example 3.
The set {ej : 1 ≤ j ≤ n} is a basis for Rn, as the reader can easily show. This particular set is the standard basis of Rn.
Theorem 2.9. Let V be a vector space that has a basis consisting of n vectors.
Then:
a. Any set with more than n vectors is linearly dependent.
b. Any spanning set has at least n vectors.
c. Any linearly independent set has at most n vectors.
d. Every basis of V contains exactly n vectors.
Proof. Part a. Suppose that A = {f1 , . . . , fn} is the given basis of V with n vectors. Let B = {x1 , . . . , xn , xn+1} be any subset of V with n + 1 vectors. We want to show that B is linearly dependent. Since A is a basis of V, it is a spanning set. Hence, there are constants ajk such that

xk = a1k f1 + ... + ank fn
We want to find constants ck, 1 ≤ k ≤ n + 1, not all zero, such that

0 = c1x1 + ... + cn+1xn+1 = Σ_{k=1}^{n+1} ck xk
  = Σ_{k=1}^{n+1} ck ( Σ_{j=1}^{n} ajk fj )
  = Σ_{j=1}^{n} ( Σ_{k=1}^{n+1} ajk ck ) fj
If we pick the constants ck in such a manner that each of the coefficients multiplying fj is zero, this linear combination of the vectors in B will equal the zero vector. In other words, we wish to find a nontrivial solution of the following system of equations:

aj1 c1 + aj2 c2 + ... + aj(n+1) cn+1 = 0        for j = 1, 2, . . . , n
x2 + x4 = 0        (2.7)
We have two free parameters, x3 and x4, which we set equal to 1 and 0 to get one solution. To get a second solution, which is linearly independent of the first, we set x3 and x4 equal to 0 and 1, respectively. Thus we have the set S = {(1/2, 0, 1, 0), (-1/2, -1, 0, 1)}, which is contained in the solution space of (2.7). Clearly S is linearly independent. [The last two slots look like (1, 0) and (0, 1).] Moreover, every solution of (2.7) can be written as a linear combination of these two vectors; for suppose (x1, x2, x3, x4) satisfies (2.7). Then we have

(x1, x2, x3, x4) = ( (x3 - x4)/2, -x4, x3, x4 )
                 = x3 (1/2, 0, 1, 0) + x4 (-1/2, -1, 0, 1)

Thus, S is a basis for the solution space of (2.7), and the dimension of this subspace of R4 is 2.
Example 6. Consider the system of linear equations

x1 - 2x2 + x3 = 0
     x2 - x3 = 0
2x1 + x2 + x3 = 0

The coefficient matrix

[ 1  -2   1 ]
[ 0   1  -1 ]
[ 2   1   1 ]
of this system is row equivalent to
[ 1  -2   1 ]
[ 0   1  -1 ]
[ 0   0   4 ]
This second system has only the trivial solution. Therefore, the solution space
K consists of just the zero vector and we have dim(K) = 0.
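The row reduction above can be mimicked in a few lines of Python. The sketch below (the `rank` helper is ours, not from the text) computes the rank of the coefficient matrix by Gaussian elimination over the rationals; rank 3 with 3 unknowns confirms that only the trivial solution exists:

```python
from fractions import Fraction as F

def rank(rows):
    """Rank of a matrix, by Gaussian elimination over the rationals."""
    m = [[F(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, -2, 1], [0, 1, -1], [2, 1, 1]]
# rank(A) = 3 = number of unknowns, so AX = 0 has only the trivial
# solution and dim(K) = 3 - rank(A) = 0.
print(rank(A))  # 3
```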
Figure 2.13
Lemma 2.2 implies that the set consisting of A and y must be linearly independent. This contradicts part a of Theorem 2.9. Therefore, any linearly independent set with n vectors must also span the vector space; hence it is a basis. Suppose next that A is a spanning set of n vectors. Then, by Lemma 2.1, if A is not linearly independent, we may discard a vector from A, obtaining a set with fewer than n vectors which still spans V. This contradicts part b of Theorem 2.9.
Example 8. Let

A = [ 3   2   16   23 ]
    [ 2   1   10   13 ]

Find a basis for the solution space of AX = 0. If this is not a basis for R4, find a basis of R4 that contains the basis of the solution space.
Solution. The matrix A is row equivalent to the matrix

[ 1   0   4   3 ]
[ 0   1   2   7 ]

Thus, if X = [x1, x2, x3, x4]^T solves AX = 0, we must have x2 = -2x3 - 7x4 and x1 = -4x3 - 3x4. Setting x3 = 1, x4 = 0, and then x3 = 0, x4 = 1, we have two linearly independent solutions to our system:

f1 = (-4, -2, 1, 0)        and        f2 = (-3, -7, 0, 1)
Clearly the set {f1, f2} is a basis for the solution space. To extend this set to a basis of R4 we have to find two more vectors g1 and g2 such that {f1, f2, g1, g2} is linearly independent. R4 has as a basis {e1, e2, e3, e4}, where e1 = (1, 0, 0, 0), etc. Two of these ek's will not depend linearly on f1 and f2. We need to find such a pair. A quick inspection of f1 and f2 shows us that e1 and e2 will work. We leave the details of showing that {f1, f2, e1, e2} is linearly independent to the reader.
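Those details can be checked mechanically. This Python sketch (with the signs of f1 and f2 restored so that Af = 0 actually holds; the `det` helper is ours) verifies both that f1 and f2 solve the system and that {f1, f2, e1, e2} is linearly independent, via a nonzero determinant:

```python
# Example 8, with the signs restored so that A f = 0 holds.
A = [[3, 2, 16, 23], [2, 1, 10, 13]]
f1 = [-4, -2, 1, 0]
f2 = [-3, -7, 0, 1]

# f1 and f2 really solve AX = 0.
for f in (f1, f2):
    assert all(sum(a * x for a, x in zip(row, f)) == 0 for row in A)

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

e1, e2 = [1, 0, 0, 0], [0, 1, 0, 0]
# A nonzero determinant shows {f1, f2, e1, e2} is linearly independent,
# hence a basis of R^4.
assert det([f1, f2, e1, e2]) != 0
print("{f1, f2, e1, e2} is a basis of R^4")
```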
Example 9. Let A = {(2, 0, 1, 0, 0), (1, 1, 0, 1, 1)}; we wish to extend A to a basis of R5. Form the matrix whose rows are the two vectors of A followed by the standard basis vectors e1, . . . , e5:

A = [ 2   0   1   0   0 ]
    [ 1   1   0   1   1 ]
    [ 1   0   0   0   0 ]
    [ 0   1   0   0   0 ]
    [ 0   0   1   0   0 ]
    [ 0   0   0   1   0 ]
    [ 0   0   0   0   1 ]
Now use elementary row operations but not row interchanges to find a matrix
that is row equivalent to A and that has two rows of zeros. Zeroing out rows
corresponds to discarding those vectors in the standard basis which depend
linearly on the remaining vectors. After a few row operations, we have A row
equivalent to the matrix
[ 2   0   1   0   0 ]
[ 1   1   0   1   1 ]
[ 1   0   0   0   0 ]
[ 0   1   0   0   0 ]
[ 0   0   0   0   0 ]
[ 0   0   0   1   0 ]
[ 0   0   0   0   0 ]
Thus the set {(2, 0, 1, 0, 0), (1, 1, 0, 1, 1), e1 , e 2 , e 4 } is a basis of R5 that contains our original set A.
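The "discard what depends on the rest" procedure of this example can be phrased as a greedy algorithm: walk through the standard basis vectors in order and keep each one that is independent of what has been collected so far. A Python sketch (the `independent` helper is ours, not from the text):

```python
from fractions import Fraction as F

def independent(vs):
    """True when the given vectors are linearly independent."""
    m = [[F(x) for x in v] for v in vs]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r == len(vs)

basis = [(2, 0, 1, 0, 0), (1, 1, 0, 1, 1)]
for k in range(5):                       # try e1, e2, ..., e5 in order
    e = tuple(int(i == k) for i in range(5))
    if independent(basis + [e]):
        basis.append(e)

print(basis)  # keeps e1, e2, e4 and skips e3, e5
```

This reproduces the basis found by row operations: e3 and e5 are the two standard vectors that get discarded.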
Example 10. Find a basis for V = {p : p is in P2, p(1) = 0}. If p is in P2, then p(t) = a0 + a1t + a2t2, and p(1) = 0 implies

p(1) = 0 = a0 + a1 + a2

Thus a2 = -a0 - a1, and F = {1 - t2, t - t2} is a subset of V. F is easily shown to be linearly independent. To see that F spans V, let p be in V. Then

p(t) = a0 + a1t + a2t2
     = a0 + a1t - (a0 + a1)t2
     = a0(1 - t2) + a1(t - t2)
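The claim that F spans V can be confirmed by a short computation on coefficient triples; this Python sketch is an illustration, not part of the text:

```python
# Represent p(t) = a0 + a1*t + a2*t^2 by its coefficient triple (a0, a1, a2).
def p_at(p, t):
    a0, a1, a2 = p
    return a0 + a1 * t + a2 * t * t

f1 = (1, 0, -1)   # 1 - t^2
f2 = (0, 1, -1)   # t - t^2
assert p_at(f1, 1) == 0 and p_at(f2, 1) == 0   # both lie in V

# Every p in V has a2 = -a0 - a1 and equals a0*f1 + a1*f2.
for a0 in range(-2, 3):
    for a1 in range(-2, 3):
        p = (a0, a1, -a0 - a1)
        combo = tuple(a0 * x + a1 * y for x, y in zip(f1, f2))
        assert combo == p
print("F spans V")
```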
12. Let K = {(x1, x2, x3) : x1 - x2 = x3, x2 + 2x3 = 0}.
a. Find a basis B for K.
b. dim(K) =?
c. How many vectors have to be added to B to get a basis for R3 ?
13. Let V be the vector space of all n × n matrices. Then we know that dim(V) = n2. Find a basis and determine the dimension of each of the following subspaces of V.
a. Scalar matrices
b. Diagonal matrices
c. Upper triangular matrices
d. Lower triangular matrices
25. Let A = {(1, 0, 0), (0, 1, 1), (0, 2, 3), (0, 3, 4), (1, 1, 3)}.
a. Show that A is linearly dependent.
b. What is the largest number of vectors in A that can form a linearly
independent set?
c. For which vectors x in A is it true that S[A] = S[A\{x}]?
2.6
Coordinates of a Vector
We have seen in an earlier section (Theorem 2.7) that given a basis for a vector
space, every vector can be uniquely written as a linear combination of the basis
vectors. The constants that are used in this sum are called the coordinates of the
vector with respect to the basis. In this section we show how these coordinates
change when we change our bases. For convenience, we again state the definition
of the coordinates of a vector.
Definition 2.11. Let B = {f1, f2, . . . , fn} be a basis of a vector space V, dim(V) = n. Then the coordinates of a vector x in V with respect to the basis B are the constants needed to write x as a linear combination of the basis vectors. Thus if

x = c1f1 + c2f2 + ... + cnfn

then [x]B = [c1, c2, . . . , cn]. Note that the ordering of the vectors in B is important. If we change this order, we change the order in which we list the coordinates.
Example 1. Let V = R3.
a. Let S = {e1, e2, e3}. Find the coordinates of x = (1, -1, 2) with respect to S. Clearly,

x = (1, -1, 2) = (1, 0, 0) - (0, 1, 0) + 2(0, 0, 1)

Thus,

[x]S = [1, -1, 2]

Note that since S is the standard basis, the coordinates of x with respect to S are the same three numbers used to define x.
b. Let G = {(0, 0, 1), (0, 1, 0), (1, 0, 0)}. Thus, G is just a reordering of the standard basis of R3. Then

[x]G = [(1, -1, 2)]G = [2, -1, 1]
c. Let G = {(2, -1, 3), (0, 1, 1), (1, -1, 0)}. Find the coordinates of x = (1, -1, 2) with respect to G. We need to find constants c1, c2, and c3 such that

(1, -1, 2) = c1(2, -1, 3) + c2(0, 1, 1) + c3(1, -1, 0)

The system of equations derived from this vector equation is

 1 = 2c1      + c3
-1 = -c1 + c2 - c3
 2 = 3c1 + c2

The solution is c1 = 1, c2 = -1, c3 = -1. Hence

[x]G = [1, -1, -1]
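Finding coordinates with respect to a basis is always the same linear-system computation. The sketch below (plain Python, Gauss-Jordan over the rationals; the `coords` helper is ours, and G carries the sign conventions reconstructed above) reproduces the coordinates found in part c:

```python
from fractions import Fraction as F

def coords(basis, x):
    """Coordinates of x in the given basis, by Gauss-Jordan elimination."""
    n = len(basis)
    # Augmented matrix whose columns are the basis vectors, then x.
    m = [[F(basis[j][i]) for j in range(n)] + [F(x[i])] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if m[i][c] != 0)
        m[c], m[piv] = m[piv], m[c]
        m[c] = [a / m[c][c] for a in m[c]]
        for i in range(n):
            if i != c:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[c])]
    return [m[i][n] for i in range(n)]

G = [(2, -1, 3), (0, 1, 1), (1, -1, 0)]
print(coords(G, (1, -1, 2)))  # the coordinates 1, -1, -1
```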
Given two bases F = {f1, . . . , fn} and G = {g1, . . . , gn}, each fj can be written in terms of G:

fj = Σ_{k=1}^{n} pkj gk        (2.8)

Substituting gk = Σ_{i=1}^{n} qik fi gives

fj = Σ_{k=1}^{n} pkj ( Σ_{i=1}^{n} qik fi ) = Σ_{i=1}^{n} ( Σ_{k=1}^{n} qik pkj ) fi        (2.9)

Since the fi are linearly independent, the coefficients must satisfy

Σ_{k=1}^{n} qik pkj = δij        for i, j = 1, 2, . . . , n        (2.10)

In matrix terms, (2.10) says QP = I; that is, P = Q^{-1}.
Here the columns of Q are the vectors of the basis G = {(2, -1, 3), (0, 1, 1), (1, -1, 0)} of Example 1c, written with respect to the standard basis F = {e1, e2, e3}:

Q = [ 2   0   1 ]
    [-1   1  -1 ]
    [ 3   1   0 ]
We next want to find the coordinates of fj with respect to the basis G. There are two ways to go; one is to actually find the pij such that f1 = p11g1 + p21g2 + p31g3, etc., or we can just compute Q^{-1}. The first way is instructive; so we will find the first column of P by computing the coordinates of f1 with respect to G. We want to find constants ck, k = 1, 2, 3, such that

(1, 0, 0) = c1(2, -1, 3) + c2(0, 1, 1) + c3(1, -1, 0)

The solutions are c1 = -1/2, c2 = 3/2, c3 = 2. Thus (1, 0, 0) = (-1/2)g1 + (3/2)g2 + 2g3. If we write out the above equations for the cj as a matrix equation we have
[ 2   0   1 ] [c1]   [1]
[-1   1  -1 ] [c2] = [0]
[ 3   1   0 ] [c3]   [0]

or

Q [c1]   [1]
  [c2] = [0]
  [c3]   [0]
Thus [c1, c2, c3]^T = Q^{-1}[1, 0, 0]^T = [-1/2, 3/2, 2]^T, and clearly the first column of P must be this vector. In fact,

P = Q^{-1} = [ -1/2  -1/2   1/2 ]
             [  3/2   3/2  -1/2 ]
             [  2     1    -1   ]
We next discuss how to use the change of basis matrix P to express the coordinates of x with respect to G in terms of the coordinates of x with respect to F. Thus let F = {f1, . . . , fn} and G = {g1, . . . , gn}. Suppose [x]F = [x1, . . . , xn]. Then
x = x1f1 + x2f2 + ... + xnfn
  = x1 Σ_{j=1}^{n} pj1 gj + x2 Σ_{j=1}^{n} pj2 gj + ... + xn Σ_{j=1}^{n} pjn gj
  = Σ_{k=1}^{n} xk ( Σ_{j=1}^{n} pjk gj )
  = Σ_{j=1}^{n} ( Σ_{k=1}^{n} pjk xk ) gj
Thus,

[x]G = [ Σ_{k=1}^{n} p1k xk ,  Σ_{k=1}^{n} p2k xk ,  . . . ,  Σ_{k=1}^{n} pnk xk ]        (2.11)

Note that the jth coordinate of x with respect to G is the jth row of the matrix product P([x1, . . . , xn])^T. Thus, (2.11) implies the following equation:

[x]G^T = P[x]F^T        (2.12)
In the sequel we use equation (2.12) to specify P; that is, we ask for the change of basis matrix P such that [x]G^T = P[x]F^T. The following summarizes these relationships:
F = {f1, . . . , fn}                    G = {g1, . . . , gn}

fj = Σ_{k=1}^{n} pkj gk                 gj = Σ_{k=1}^{n} qkj fk

[x]F = [x1, . . . , xn]                 [x]G = [y1, . . . , yn]

yj = Σ_{k=1}^{n} pjk xk

[x]G^T = P[x]F^T                        [x]F^T = P^{-1}[x]G^T        (2.13)
Now let F1, F2, and F3 be three bases of a vector space, and let P and Q be the change of basis matrices satisfying

[x]F1^T = P[x]F2^T

and

[x]F3^T = Q[x]F2^T        (2.14)

Find the change of basis matrix R, relating F1 and F3, such that

[x]F3^T = R[x]F1^T

From (2.13) and (2.14) we have [x]F3^T = Q[x]F2^T = QP^{-1}[x]F1^T. By problem 11 at the end of this section we must have

R = QP^{-1}
The reader might find this fact useful when computing change of basis matrices,
when neither basis is the standard basis. We illustrate this below.
Example 4. Let V = R2. Let F = {(7, -8), (9, 20)}. Let G = {(6, 5), (1, -1)}. Find the change of basis matrix R such that

[x]G^T = R[x]F^T

Let S be the standard basis of R2. Let P be such that [x]S^T = P[x]F^T. Let Q be such that [x]S^T = Q[x]G^T. Thus, the columns of P and Q are the coordinates of the vectors in F and G, respectively, with respect to the standard basis. Hence,

P = [  7   9 ]        and        Q = [ 6   1 ]
    [ -8  20 ]                       [ 5  -1 ]

and we have

[x]G^T = Q^{-1}[x]S^T = Q^{-1}(P[x]F^T) = (Q^{-1}P)[x]F^T

Thus,

R = Q^{-1}P = (1/11) [ 1   1 ] [  7   9 ]  = (1/11) [ -1   29 ]
                     [ 5  -6 ] [ -8  20 ]           [ 83  -75 ]

To check our computations, we verify that the first column of R consists of the coordinates of f1 with respect to G:

(-1/11)(6, 5) + (83/11)(1, -1) = (77/11, -88/11) = (7, -8) = f1.
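The whole example can be replayed numerically. In this Python sketch (the `inv2` and `matmul` helpers are ours; the signed entries of F and G follow the reconstruction above), R comes out as (1/11) times [[-1, 29], [83, -75]], matching the computation:

```python
from fractions import Fraction as F

def inv2(m):
    """Inverse of a 2 x 2 matrix, over the rationals."""
    (a, b), (c, d) = m
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[7, 9], [-8, 20]]   # columns are the F-vectors (7, -8) and (9, 20)
Q = [[6, 1], [5, -1]]    # columns are the G-vectors (6, 5) and (1, -1)

R = matmul(inv2(Q), P)   # R = Q^{-1} P, as in the example
print(R)  # (1/11) times [[-1, 29], [83, -75]]
```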
a. Find [x]F and [x]G.
b. Show that [x]G^T = P[x]F^T.
11. Let A and B be two m × n matrices. Suppose that Ax = Bx for every x in Rn. Show A = B. Note that it will be sufficient to show that if Ax = 0 for every x in Rn, then A must be the m × n zero matrix. Why?
12. Let F = {(1, 1), (1, 2)}, G = {(1, 2), (1, 1)}.
a. Find the change of basis matrix P such that [x]G^T = P[x]F^T.
b. If x = (1, 1), find [x]F and [x]G.
c. Show [x]F^T = P^{-1}[x]G^T.
13. Let S be the standard basis of R3 and let G = {(6, 0, 1), (1, 1, 1), (0, 3, 1)}.
a. Show that G is a basis of R3.
b. Find the change of basis matrix P such that [x]G^T = P[x]S^T.
c. Write ek, k = 1, 2, 3, as linear combinations of the vectors in G.
14. Let x = (5, 3, 4). Let S and G be the bases in problem 13. Find [x]G in two different ways.
15. Let F = {f1, f2} be a basis of R2. Let G = {f2, f1}. Find the change of basis matrix P such that [x]G^T = P[x]F^T.
16. Let F be any basis of some n-dimensional vector space. Let G consist of
the same vectors that belong to F , but in perhaps a different order. Show
that the matrix P relating these two bases is a permutation matrix; cf.
problem 13 in Section 1.5.
17. Let V = M22. Let

F = { [ 1  0 ] , [ 0  1 ] , [ 0  0 ] , [ 0  0 ] }
      [ 0  0 ]   [ 0  0 ]   [ 1  0 ]   [ 0  1 ]

Let

G = { [ 0  1 ] , [ 1  0 ] , [ 1  1 ] , [ 1  1 ] }
      [ 1  1 ]   [ 1  1 ]   [ 0  1 ]   [ 1  0 ]

a. Let x = [ 2  6 ]. [x]F = ? [x]G = ?
           [ 7  5 ]
b. Find the change of basis matrix P such that [x]G^T = P[x]F^T.
c. Check that [x]G^T = P[x]F^T for the vector x of part a.
d. If [x]G^T = [1, 3, 2, 4], then x equals?
18. Let V = P3. Let F = {2, t + t2, t2 - 1, t3 + 1}. Let S denote the standard basis of V.
a. Verify that F is a basis of V.
b. Find the change of basis matrix P such that [x]S^T = P[x]F^T.
19. Let S = {(1, 0), (0, 1)} and let B = {(2, 3), (-1, 6)}.
a. Show that both S and B are bases of R2.
b. Find the coordinates of (6, 3) with respect to S.
c. Find the coordinates of (6, 3) with respect to B.
20. Let S and B be the same sets as in problem 19. The coordinates of (2, 3) and (-1, 6) with respect to S are [2, 3] and [-1, 6], respectively. Show that the coordinates of (1, 0) and (0, 1) with respect to B are [2/5, -1/5] and [1/15, 2/15], respectively. Let

P = [ 2  -1 ]        and        Q = [  2/5   1/15 ]
    [ 3   6 ]                       [ -1/5   2/15 ]
Supplementary Problems
1. Define each of the following terms and then give at least two examples of
each one:
a. A vector space of dimension 5
b. Linearly independent set
c. Basis
d. A vector space of dimension 100 = 10^2
e. Spanning set
2. Let V be the vector space of all polynomials in t. Show that there is no
finite set of vectors in V that spans V .
11. Let x1 and x2 be any two vectors. Let y1, y2, and y3 be any three vectors in S[x1, x2]. Show that these three vectors form a linearly dependent set.
12. Let {x1, . . . , xr} be any set of vectors. Let F be any set of m vectors contained in the span of the given set of r vectors. Show that if m > r, then F must be a linearly dependent set of vectors.
13. Let V = M22. Let C = {A : AB = BA for every B in V}. That is, C consists of all 2 × 2 matrices that commute with every other 2 × 2 matrix.
a. Show that C is a subspace of V.
b. Find a basis of C and hence determine dim(C).
c. Let V = Mnn, define C as above, and repeat parts a and b.
14. For which numbers x are the vectors (2,3) and (1, x) linearly independent?