Lecture 6 - Vector Spaces, Linear Maps, and Dual Spaces
February 9, 2009
1 Vector spaces
A vector space $V$ with scalars $F$ is defined to be a commutative group $(V, +)$ whose scalars form a division ring with identity and operate on $V$ in a way satisfying (here $\alpha, \beta \in F$ and $v, w \in V$):
\begin{align*}
(\alpha + \beta)v &= \alpha v + \beta v \\
\alpha(v + w) &= \alpha v + \alpha w \\
\alpha(\beta v) &= (\alpha\beta)v \\
1v &= v \quad \text{where } 1 \in F \text{ is the identity element}
\end{align*}
If $o \in V$ is the identity of the group $(V, +)$ (i.e., the origin of the vector space $V$), it is an exercise to show that these axioms imply $0v = o$ and $\alpha o = o$.
In our class, we will exclusively be concerned with real vector spaces, meaning $F$ is the field $\mathbb{R}$.
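To make this concrete, here is a minimal numerical sketch (an added illustration, not part of the original notes) checking the axioms for $V = \mathbb{R}^2$, $F = \mathbb{R}$, using NumPy arrays as vectors:
\begin{verbatim}
import numpy as np

# Sketch: check the vector space axioms for V = R^2, F = R,
# with NumPy arrays as vectors and floats as scalars.
alpha, beta = 2.0, -3.0
v = np.array([1.0, 4.0])
w = np.array([-2.0, 0.5])
o = np.zeros(2)  # the origin, i.e. the identity of (V, +)

assert np.allclose((alpha + beta) * v, alpha * v + beta * v)  # (a+b)v = av + bv
assert np.allclose(alpha * (v + w), alpha * v + alpha * w)    # a(v+w) = av + aw
assert np.allclose(alpha * (beta * v), (alpha * beta) * v)    # a(bv) = (ab)v
assert np.allclose(1.0 * v, v)                                # 1v = v
assert np.allclose(0.0 * v, o)                                # derived fact: 0v = o
assert np.allclose(alpha * o, o)                              # derived fact: a*o = o
\end{verbatim}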
2 Linear maps
If $V$ and $W$ are vector spaces with the same field of scalars, a linear map $A$ is defined to be a map $A : V \to W$ satisfying
\[
A(\alpha v_1 + \beta v_2) = \alpha A(v_1) + \beta A(v_2)
\]
where $\alpha, \beta$ are scalars and $v_1, v_2 \in V$. After bases $\{v_1, \dots, v_n\}$ for $V$ and $\{w_1, \dots, w_m\}$ for $W$ are chosen, it is possible to express $A$ as a matrix. Specifically, we define the numbers $A^j_i$ implicitly by
\[
A(v_i) = A^1_i w_1 + A^2_i w_2 + \dots + A^m_i w_m.
\]
Then if $v = \alpha^1 v_1 + \dots + \alpha^n v_n$, we have
\begin{align*}
A(v) &= A(\alpha^1 v_1 + \alpha^2 v_2 + \dots + \alpha^n v_n) \\
&= \alpha^1 A(v_1) + \alpha^2 A(v_2) + \dots + \alpha^n A(v_n) \\
&= \alpha^1 \left( A^1_1 w_1 + A^2_1 w_2 + \dots + A^m_1 w_m \right)
 + \alpha^2 \left( A^1_2 w_1 + A^2_2 w_2 + \dots + A^m_2 w_m \right) + \dots \\
&\quad + \alpha^n \left( A^1_n w_1 + A^2_n w_2 + \dots + A^m_n w_m \right) \\
&= \left( \alpha^1 A^1_1 + \alpha^2 A^1_2 + \dots + \alpha^n A^1_n \right) w_1
 + \left( \alpha^1 A^2_1 + \alpha^2 A^2_2 + \dots + \alpha^n A^2_n \right) w_2 + \dots \\
&\quad + \left( \alpha^1 A^m_1 + \alpha^2 A^m_2 + \dots + \alpha^n A^m_n \right) w_m.
\end{align*}
Thus if we write $v$ and $A(v)$ in vector notation, then by our calculations we have:
\[
v = \begin{pmatrix} \alpha^1 \\ \alpha^2 \\ \vdots \\ \alpha^n \end{pmatrix}_{\{v_i\}}
\qquad
A(v) = \begin{pmatrix}
\alpha^1 A^1_1 + \alpha^2 A^1_2 + \dots + \alpha^n A^1_n \\
\alpha^1 A^2_1 + \alpha^2 A^2_2 + \dots + \alpha^n A^2_n \\
\vdots \\
\alpha^1 A^m_1 + \alpha^2 A^m_2 + \dots + \alpha^n A^m_n
\end{pmatrix}_{\{w_i\}}
\]
which means that $A$ is an $m \times n$ matrix:
\[
A = \begin{pmatrix}
A^1_1 & A^1_2 & \dots & A^1_n \\
A^2_1 & A^2_2 & \dots & A^2_n \\
\vdots & & & \vdots \\
A^m_1 & A^m_2 & \dots & A^m_n
\end{pmatrix}_{\{w_i\}\{v_i\}}
\]
and the action of $A$ is given by matrix multiplication on the left.
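For a concrete feel, here is an added sketch (with made-up matrix entries) building such an $m \times n$ coordinate matrix and applying it by left multiplication:
\begin{verbatim}
import numpy as np

# Sketch: a linear map A : R^3 -> R^2 represented in chosen bases.
# Column i holds the coordinates of A(v_i) in the basis {w_1, w_2},
# i.e. the entry at row j, column i is A^j_i.  Values are made up.
A = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 0.0]])   # m = 2 rows, n = 3 columns

alpha = np.array([1.0, -1.0, 2.0])  # coordinates of v in {v_1, v_2, v_3}

# The action of A is matrix multiplication on the left.
print(A @ alpha)  # coordinates of A(v) in {w_1, w_2}: [5. 2.]
\end{verbatim}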
Example
Let $V$ be the vector space of quadratic polynomials with basis $e_1 = 1$, $e_2 = x$, $e_3 = x^2$, and let $W$ be the vector space of cubic polynomials with basis $f_1 = 1$, $f_2 = x$, $f_3 = x^2$, and $f_4 = x^3$. Let $A : V \to W$ be the map $A(P) = (1 + 2x)P$.
To express $A$ as a matrix, we see where it sends the basis vectors:
\begin{align*}
A(e_1) &= (1 + 2x) \cdot 1 = f_1 + 2f_2 \\
A(e_2) &= (1 + 2x) \cdot x = f_2 + 2f_3 \\
A(e_3) &= (1 + 2x) \cdot x^2 = f_3 + 2f_4
\end{align*}
Thus
\[
A = \begin{pmatrix}
1 & 0 & 0 \\
2 & 1 & 0 \\
0 & 2 & 1 \\
0 & 0 & 2
\end{pmatrix}_{\{f_i\}\{e_i\}}.
\]
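The example can be checked numerically; the sketch below (an added illustration, storing coefficients lowest degree first) verifies that this matrix implements multiplication by $1 + 2x$:
\begin{verbatim}
import numpy as np

# The 4x3 matrix of A(P) = (1 + 2x)P in the monomial bases
# {1, x, x^2} and {1, x, x^2, x^3}.
A = np.array([[1, 0, 0],
              [2, 1, 0],
              [0, 2, 1],
              [0, 0, 2]])

p = np.array([3, 0, 5])  # an arbitrarily chosen polynomial, 3 + 5x^2

# Compare the matrix action with direct polynomial multiplication
# (np.polymul expects highest-degree coefficients first, hence the flips).
direct = np.polymul([2, 1], p[::-1])[::-1]  # coefficients of (1 + 2x)(3 + 5x^2)
print(A @ p)    # [ 3  6  5 10]
print(direct)   # [ 3  6  5 10]
\end{verbatim}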
3 Dual spaces
Assume $V$ is a vector space with scalar field $F$ (in our class, $F$ will almost always be just the reals, $\mathbb{R}$). A linear functional on a vector space $V$ is a linear map $f : V \to F$. It is simple to prove that $A(o) = 0$ whenever $A$ is a linear functional:
\[
A(o) = A(0 \cdot v) = 0 \cdot A(v) = 0.
\]
The space of linear functionals on a vector space $V$ is called its dual vector space, denoted $V^*$. Given a basis $\{v_1, \dots, v_n\}$ for $V$, we define functionals $v_i : V \to \mathbb{R}$ by setting $v_i(v_j) = \delta_{ij}$ and extending linearly. To be more explicit, if $v = \alpha^1 v_1 + \dots + \alpha^n v_n$, then
\begin{align*}
v_i(v) &= v_i(\alpha^1 v_1 + \dots + \alpha^n v_n) \\
&= \alpha^1 v_i(v_1) + \dots + \alpha^i v_i(v_i) + \dots + \alpha^n v_i(v_n) \\
&= \alpha^1 \cdot 0 + \dots + \alpha^i \cdot 1 + \dots + \alpha^n \cdot 0 \\
&= \alpha^i.
\end{align*}
It is easy to verify that $v_i$ is linear.
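As an added aside: when the basis vectors of $\mathbb{R}^n$ are assembled as the columns of a matrix $B$, the dual basis functionals are exactly the rows of $B^{-1}$, since row $i$ of $B^{-1}$ applied to column $j$ of $B$ gives $\delta_{ij}$. A minimal sketch with a made-up basis:
\begin{verbatim}
import numpy as np

# Columns of B are a (made-up) basis {v_1, v_2} of R^2.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Rows of B^{-1} represent the dual basis functionals.
B_inv = np.linalg.inv(B)
print(np.allclose(B_inv @ B, np.eye(2)))  # True: v_i(v_j) = delta_ij

# Applying the i-th dual functional to v extracts the coordinate alpha^i.
alpha = np.array([3.0, -1.0])        # coordinates of v in {v_1, v_2}
v = B @ alpha                        # v as an element of R^2
print(B_inv @ v)                     # recovers [ 3. -1.]
\end{verbatim}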
Theorem 3.1 If $\dim(V) = n < \infty$, then also $\dim(V^*) = n$.
Pf We only have to prove that what we called the dual basis (which consists of $n$ many elements) is indeed a basis. Let $\{v_1, \dots, v_n\}$ be a basis for $V$, and $\{v_1, \dots, v_n\}$ its dual basis. We must prove that the $v_i$ are linearly independent, and that they indeed span $V^*$.
First, if $0 = \beta_1 v_1 + \dots + \beta_n v_n$ for some constants $\beta_i$, then by plugging in $v_j$ to both sides we get
\[
0 = \beta_j.
\]
Since $j$ was arbitrary, this proves that all the coefficients are $0$. Thus the $v_i$ are independent.
To prove that the $v_i$ span $V^*$, let $A \in V^*$ be arbitrary, and set $A_i = A(v_i)$. We claim that $A = A_1 v_1 + A_2 v_2 + \dots + A_n v_n$: for let $v = \alpha^1 v_1 + \dots + \alpha^n v_n$ be a generic element in $V$; then
\[
A(v) = A(\alpha^1 v_1 + \dots + \alpha^n v_n) = \alpha^1 A(v_1) + \dots + \alpha^n A(v_n) = \alpha^1 A_1 + \dots + \alpha^n A_n
\]
while
\[
(A_1 v_1 + \dots + A_n v_n)(v) = A_1 v_1(v) + \dots + A_n v_n(v) = A_1 \alpha^1 + \dots + A_n \alpha^n.
\]
Since these agree for every $v \in V$, indeed $A = A_1 v_1 + \dots + A_n v_n$, proving the span. $\square$

Notice, however, that we have written $A = A_1 v_1 + \dots + A_n v_n$. This violates the usual motif of summing over upper-lower index pairs, indicating that the dual basis should probably be written with upper indices. From now on we will do this: we will write $v^i$, not $v_i$.
Thus the basis dual to $\{v_1, \dots, v_n\}$ will be written $\{v^1, \dots, v^n\}$, with the same definition:
\begin{align*}
v^i &: V \to \mathbb{R} \\
v^i(v_j) &= \delta^i_j.
\end{align*}
Note that this means dual vectors (elements of $V^*$) are expressed as row vectors: alongside
\[
v = \begin{pmatrix} \alpha^1 \\ \alpha^2 \\ \vdots \\ \alpha^n \end{pmatrix}_{\{v_i\}}
\qquad \text{we write} \qquad
A = \begin{pmatrix} A_1 & A_2 & \dots & A_n \end{pmatrix}_{\{v^i\}}.
\]
As usual, we can express the action of $A$ on $v$ via matrix multiplication:
\begin{align*}
A(v) &= A(\alpha^1 v_1 + \dots + \alpha^n v_n) \\
&= \alpha^1 A(v_1) + \dots + \alpha^n A(v_n) \\
&= \alpha^1 A_1 + \alpha^2 A_2 + \dots + \alpha^n A_n = \sum_{i=1}^n \alpha^i A_i \\
&= \begin{pmatrix} A_1 & A_2 & \dots & A_n \end{pmatrix}
\begin{pmatrix} \alpha^1 \\ \alpha^2 \\ \vdots \\ \alpha^n \end{pmatrix}.
\end{align*}
Similarly, for a linear map $A : V \to V$ with matrix entries $A^i_j$, the action is
\[
\begin{pmatrix}
A^1_1 & A^1_2 & \dots & A^1_n \\
A^2_1 & A^2_2 & \dots & A^2_n \\
\vdots & & & \vdots \\
A^n_1 & A^n_2 & \dots & A^n_n
\end{pmatrix}
\begin{pmatrix} \alpha^1 \\ \alpha^2 \\ \vdots \\ \alpha^n \end{pmatrix}_{\{v_i\}}
=
\begin{pmatrix}
\alpha^1 A^1_1 + \alpha^2 A^1_2 + \dots + \alpha^n A^1_n \\
\alpha^1 A^2_1 + \alpha^2 A^2_2 + \dots + \alpha^n A^2_n \\
\vdots \\
\alpha^1 A^n_1 + \alpha^2 A^n_2 + \dots + \alpha^n A^n_n
\end{pmatrix}_{\{v_i\}}
\]
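To summarize the two matrix pictures numerically, here is an added sketch (hypothetical entries): a functional acts as a row vector on the left, an operator as a square matrix:
\begin{verbatim}
import numpy as np

# A functional A in V* acts as a row vector on coordinate columns.
A_row = np.array([[2.0, 0.0, -1.0]])      # (A_1 A_2 A_3) in the dual basis {v^i}
alpha = np.array([[1.0], [3.0], [2.0]])   # coordinates of v in {v_i}, as a column
print(A_row @ alpha)                       # [[0.]] : the scalar A(v)

# An operator A : V -> V acts as a square matrix on the same columns.
A_op = np.array([[1.0, 2.0, 0.0],
                 [0.0, 1.0, 1.0],
                 [1.0, 0.0, 1.0]])        # entries A^i_j at row i, column j
print(A_op @ alpha)                        # coordinates of A(v) in {v_i}
\end{verbatim}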
This is a lot of writing. But we can express the same information more compactly:
\[
v = \sum_{i=1}^n \alpha^i v_i, \qquad A(v_i) = \sum_{j=1}^n A^j_i v_j,
\]
\[
A(v) = A\Big(\sum_{i=1}^n \alpha^i v_i\Big)
= \sum_{i=1}^n \alpha^i A(v_i)
= \sum_{i=1}^n \alpha^i \sum_{j=1}^n A^j_i v_j
= \sum_{i=1}^n \sum_{j=1}^n \alpha^i A^j_i v_j.
\]
If we just leave off the summation symbol, we can write this even more compactly:
\[
v = \alpha^i v_i, \qquad A(v) = A(\alpha^i v_i) = \alpha^i A(v_i) = \alpha^i A^j_i v_j.
\]
This is the Einstein summation convention: the summation symbol is left off, and any repeated upper and lower indices are summed over.
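NumPy's einsum mirrors this convention: the subscript string names the indices, and any index that disappears from the output is summed over. A small added sketch with arbitrary values:
\begin{verbatim}
import numpy as np

# Einstein summation with np.einsum: repeated indices are summed over.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # entries A^j_i (row j, column i), made-up values
alpha = np.array([5.0, -1.0])   # coordinates alpha^i of v

# A(v)^j = alpha^i A^j_i, summed over the repeated index i.
result = np.einsum('ji,i->j', A, alpha)
print(result)                          # [ 3. 11.]
print(np.allclose(result, A @ alpha))  # True: same as matrix multiplication
\end{verbatim}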