
Lecture 4

Now we have the first of our two main subspace theorems. It says we can extend a basis for a subspace to a basis for the full space.
Theorem 0.1 (One subspace theorem). Let W be a subspace of a finite-dimensional vector space V. If BW is a basis for W, there exists a basis B of V containing BW.
Proof. Consider all linearly independent subsets of V that contain BW (there is at least one, BW itself!) and choose one, S, of maximal size. We know that #S ≤ dim V, and if #S = dim V then S must be a basis and we are done, so assume that #S = k < dim V. We must then have Span(S) ≠ V, so choose a vector v ∈ V \ Span(S). We claim that S ∪ {v} is linearly independent, contradicting maximality of S. To see this, write S = {v1, . . . , vk} and suppose
a1 v1 + ⋯ + ak vk + bv = 0.
If b ≠ 0 then we can solve for v, getting v ∈ Span(S), a contradiction, so we must have b = 0. But then a1 v1 + ⋯ + ak vk = 0, and linear independence of S gives ai = 0 for all i, a contradiction.
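For intuition in the concrete case V = Rn, the greedy idea of the proof can be mimicked numerically: start from a basis of W and append candidate vectors whenever they enlarge the span, stopping at dim V vectors. Below is a minimal NumPy sketch, not from the lecture; the helper name extend_basis and the choice of standard basis vectors as the candidate pool are my own.

import numpy as np

def extend_basis(basis_W, n, tol=1e-10):
    """Extend a list of linearly independent vectors in R^n to a basis of R^n
    by greedily appending standard basis vectors that increase the rank."""
    vectors = [np.asarray(v, dtype=float) for v in basis_W]
    for i in range(n):
        e_i = np.zeros(n)
        e_i[i] = 1.0
        candidate = vectors + [e_i]
        # keep e_i only if it is not already in the span of the current vectors
        if np.linalg.matrix_rank(np.array(candidate), tol=tol) > len(vectors):
            vectors.append(e_i)
        if len(vectors) == n:
            break
    return vectors

# a basis of the plane W = {(x, y, 0)} extended to a basis of R^3
print(extend_basis([[1, 0, 0], [1, 1, 0]], 3))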
The second subspace theorem will follow from a dimension theorem.
Theorem 0.2. Let W1, W2 be subspaces of V, a finite-dimensional vector space. Then
dim(W1 + W2) + dim(W1 ∩ W2) = dim(W1) + dim(W2).
Proof. Let B be a basis for the intersection W1 ∩ W2. By the one subspace theorem we can find bases B1 and B2 of W1 and W2 respectively that both contain B. Write

B = {v1, . . . , vk}
B1 = {v1, . . . , vk, vk+1, . . . , vl}
B2 = {v1, . . . , vk, wk+1, . . . , wm}.

We will now show that B′ = B1 ∪ B2 is a basis for W1 + W2. This will prove the theorem, since then dim(W1 + W2) + dim(W1 ∩ W2) = (l + m − k) + k = l + m = dim(W1) + dim(W2).
To show that B′ is a basis for W1 + W2 we first must prove Span(B′) = W1 + W2. Since B′ ⊆ W1 + W2, we have Span(B′) ⊆ Span(W1 + W2) = W1 + W2. On the other hand, each vector in W1 + W2 can be written as w1 + w2 for w1 ∈ W1 and w2 ∈ W2. Because B′ contains a basis for each of W1 and W2, these vectors w1 and w2 can be written in terms of vectors in B′, so w1 + w2 ∈ Span(B′).
Next we show that B′ is linearly independent. We set a linear combination equal to zero:

a1 v1 + ⋯ + ak vk + ak+1 vk+1 + ⋯ + al vl + bk+1 wk+1 + ⋯ + bm wm = 0.    (1)

Moving the v terms to the other side of (1), we find that bk+1 wk+1 + ⋯ + bm wm = −(a1 v1 + ⋯ + al vl) ∈ W1. But this sum is already in W2, so it must be in the intersection. As B is a basis for the intersection we can write

bk+1 wk+1 + ⋯ + bm wm = c1 v1 + ⋯ + ck vk

for some ci ∈ F. Moving all terms to one side and using linear independence of B2 gives bk+1 = ⋯ = bm = 0. Therefore (1) reads

a1 v1 + ⋯ + al vl = 0.

Using linear independence of B1 gives ai = 0 for all i, and thus B′ is linearly independent.
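The dimension formula can be checked numerically for concrete subspaces of Rn given as column spans. Here is a small sketch, not from the lecture: it assumes SciPy is available for null_space, and the particular matrices A and B (whose columns span the two coordinate planes used later in the notes) are my own choice. The intersection dimension is computed independently of the formula, via the null space of [A −B].

import numpy as np
from scipy.linalg import null_space

# W1 = column space of A, W2 = column space of B (subspaces of R^3)
A = np.array([[1., 1.], [0., 1.], [0., 0.]])   # W1 = {(x, y, 0)}
B = np.array([[1., 0.], [0., 0.], [1., 1.]])   # W2 = {(x, 0, z)}

dim_W1 = np.linalg.matrix_rank(A)
dim_W2 = np.linalg.matrix_rank(B)
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))         # dim(W1 + W2)

# vectors in W1 ∩ W2 are A x with A x = B y; solve [A  -B] (x, y) = 0
N = null_space(np.hstack([A, -B]))                         # columns are (x, y) pairs
dim_int = np.linalg.matrix_rank(A @ N[:A.shape[1], :])     # dim(W1 ∩ W2)

assert dim_sum + dim_int == dim_W1 + dim_W2                # 3 + 1 == 2 + 2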
The proof of Theorem 0.2 in fact gives:

Theorem 0.3 (Two subspace theorem). If W1, W2 are subspaces of a finite-dimensional vector space V, there exists a basis of V that contains bases of W1 and W2.

Proof. Use the proof of the last theorem to get a basis for W1 + W2 containing bases of W1 and W2. Then use the one subspace theorem to extend it to a basis of V.
Note the difference from the one subspace theorem. We are not claiming that you can
extend any given bases of W1 and W2 to a basis of V . We are just claiming there exists at
least one basis of V such that part of this basis is a basis for W1 and part is a basis for W2 .
In fact, given bases of W1 and W2 we cannot generally find a basis of V containing these bases. Take

V = R3,  W1 = {(x, y, 0) : x, y ∈ R},  W2 = {(x, 0, z) : x, z ∈ R}.

If we take bases B1 = {(1, 0, 0), (1, 1, 0)} and B2 = {(1, 0, 1), (0, 0, 1)}, there is no basis of V = R3 containing both B1 and B2: their union consists of four distinct vectors, while any basis of the 3-dimensional space V has exactly three elements.
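Numerically, the obstruction is simply that the four vectors of B1 ∪ B2 are linearly dependent; a quick check with the vectors from the example:

import numpy as np

union = np.array([[1, 0, 0], [1, 1, 0],      # B1
                  [1, 0, 1], [0, 0, 1]])     # B2
# rank 3 < 4 vectors, so B1 ∪ B2 is linearly dependent and cannot
# sit inside any basis of the 3-dimensional space R^3
print(np.linalg.matrix_rank(union))          # 3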
We now move on to the main subject of the course, linear transformations.

Linear transformations

Definition 0.4. Let V and W be vector spaces over the same field F. A function T : V → W is called a linear transformation if

T(v1 + v2) = T(v1) + T(v2) and T(cv1) = cT(v1) for all v1, v2 ∈ V and c ∈ F.

As usual, we only need to check the single condition

T(cv1 + v2) = cT(v1) + T(v2) for v1, v2 ∈ V and c ∈ F.
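When a map is given concretely on Rn, this single condition can be spot-checked on random inputs. The helper below is my own sanity-check sketch, not part of the lecture, and passing it does not prove linearity (though failing it disproves it).

import numpy as np

def looks_linear(T, n, trials=100, tol=1e-9):
    """Spot-check T(c v1 + v2) == c T(v1) + T(v2) on random inputs."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        v1, v2 = rng.standard_normal(n), rng.standard_normal(n)
        c = rng.standard_normal()
        if not np.allclose(T(c * v1 + v2), c * T(v1) + T(v2), atol=tol):
            return False
    return True

print(looks_linear(lambda v: 3 * v, 4))   # True: scaling is linear
print(looks_linear(lambda v: v + 1, 4))   # False: translation is not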

Examples

1. Consider C as a vector space over itself. Then if T : C → C is linear, we can write

T(z) = T(z · 1) = zT(1),

so T is completely determined by its value at 1.
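As a concrete instance (the numbers are my own, not from the lecture): if T(1) = 2 + i, then linearity forces T(3 − i) = (3 − i)(2 + i) = 7 + i. In code:

T1 = 2 + 1j            # the value T(1); everything else is forced
T = lambda z: z * T1   # T(z) = z * T(1)
print(T(3 - 1j))       # (7+1j)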

2. Let V be finite dimensional and B = {v1, . . . , vn} a basis for V. Each v ∈ V can be written uniquely as

v = a1 v1 + ⋯ + an vn for ai ∈ F.

So define T : V → Fn by T(v) = (a1, . . . , an). This is called the coordinate map relative to B. It is linear because if v = a1 v1 + ⋯ + an vn, w = b1 v1 + ⋯ + bn vn and c ∈ F, then

cv + w = (ca1 + b1)v1 + ⋯ + (can + bn)vn

is one representation of cv + w in terms of the basis. But this representation is unique, so we get

T(cv + w) = (ca1 + b1, . . . , can + bn)
= c(a1, . . . , an) + (b1, . . . , bn)
= cT(v) + T(w).
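In Fn the coordinates relative to a basis are found by solving a linear system: if the basis vectors are the columns of a matrix P, then T(v) is the solution a of P a = v. A small sketch (the particular basis below is my own example):

import numpy as np

# columns of P form a basis B = {v1, v2} of R^2
P = np.array([[1., 1.],
              [0., 1.]])

def coords(v):
    """Coordinate map relative to B: returns (a1, a2) with v = a1*v1 + a2*v2."""
    return np.linalg.solve(P, v)

v = np.array([3., 2.])
w = np.array([1., -1.])
c = 5.0
# linearity: T(c v + w) == c T(v) + T(w)
print(np.allclose(coords(c * v + w), c * coords(v) + coords(w)))   # True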

3. Given any m × n matrix A with entries from F (the notation from the homework is A ∈ Mm,n(F)), we can define linear transformations LA : Fn → Fm and RA : Fm → Fn by

LA(v) = A v and RA(v) = v A.

Here we are using matrix multiplication: in the first case v is represented as a column vector, and in the second as a row vector.
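With NumPy both maps are just matrix products, with v treated as a column vector for LA and a row vector for RA. A minimal sketch; the 2 × 3 matrix A is my own example:

import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 3.]])        # A is 2 x 3, so L_A : R^3 -> R^2 and R_A : R^2 -> R^3

L_A = lambda v: A @ v               # v as a column vector in R^3
R_A = lambda v: v @ A               # v as a row vector in R^2

print(L_A(np.array([1., 1., 1.])))  # [3. 4.]
print(R_A(np.array([1., 1.])))      # [1. 3. 3.]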

4. In fact, the set of linear transformations from V to W, written L(V, W), forms a vector space! Since the space of all functions from V to W is a vector space, it suffices to check that L(V, W) is a subspace. So given T, U ∈ L(V, W) and c ∈ F, we must show that cT + U is a linear transformation. So let v, w ∈ V and c′ ∈ F:

(cT + U)(c′v + w) = (cT)(c′v + w) + U(c′v + w)
= c(T(c′v + w)) + U(c′v + w)
= c(c′T(v) + T(w)) + c′U(v) + U(w)
= c′(cT(v) + U(v)) + cT(w) + U(w)
= c′(cT + U)(v) + (cT + U)(w).
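The operations on L(V, W) are defined pointwise, which is easy to mirror in code: (cT + U)(v) := c T(v) + U(v). A sketch, not from the lecture; the helper combine and the two maps on R^2 are my own illustration.

import numpy as np

def combine(c, T, U):
    """Return the map cT + U, defined pointwise on vectors."""
    return lambda v: c * T(v) + U(v)

T = lambda v: np.array([v[0] + v[1], 0.0])   # a linear map R^2 -> R^2
U = lambda v: np.array([0.0, 2.0 * v[0]])    # another linear map R^2 -> R^2
S = combine(3.0, T, U)                       # S = 3T + U

v, w, c = np.array([1., 2.]), np.array([0., 1.]), 4.0
# S is again linear: S(c v + w) == c S(v) + S(w)
print(np.allclose(S(c * v + w), c * S(v) + S(w)))   # True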
