Diffgeom7 PDF
CHAPTER 22. TENSOR ALGEBRAS
ϕ(u)(y) = (u, y)
ψ(v)(x) = (x, v),
Proposition 22.1 For every nondegenerate pairing, (−, −) : E × F → K, the induced maps
ϕ : E → F ∗ and ψ : F → E ∗ are linear and injective. Furthermore, if E and F are finite
dimensional, then ϕ : E → F ∗ and ψ : F → E ∗ are bijective.
Remark: When we use the term “canonical isomorphism” we mean that such an isomor-
phism is defined independently of any choice of bases. For example, if E is a finite dimensional
vector space and (e1, . . . , en) is any basis of E, we have the dual basis, (e∗1, . . . , e∗n), of E∗ (where e∗i(ej) = δij), and thus, the map ei ↦ e∗i is an isomorphism between E and E∗. This isomorphism is not canonical.
On the other hand, if ⟨−, −⟩ is an inner product on E, then Proposition 22.1 shows that the nondegenerate pairing, ⟨−, −⟩, induces a canonical isomorphism between E and E∗. This isomorphism is often denoted ♭ : E → E∗, and we usually write u♭ for ♭(u), with u ∈ E.
Given any basis, (e1, . . . , en), of E (not necessarily orthonormal), if we let gij = ⟨ei, ej⟩, then for every u = Σ_{i=1}^n u^i ei, since u♭(v) = ⟨u, v⟩ for all v ∈ E, we get

u♭ = Σ_{i=1}^n ωi e∗i,   with   ωi = Σ_{j=1}^n gij u^j.
If we use the convention that coordinates of vectors are written using superscripts (u = Σ_{i=1}^n u^i ei) and coordinates of one-forms (covectors) are written using subscripts (ω = Σ_{i=1}^n ωi e∗i), then the map, ♭, has the effect of lowering (flattening!) indices. The inverse of ♭ is denoted ♯ : E∗ → E. If we write ω ∈ E∗ as ω = Σ_{i=1}^n ωi e∗i and ω♯ ∈ E as ω♯ = Σ_{j=1}^n (ω♯)^j ej, since

ωi = ω(ei) = ⟨ω♯, ei⟩ = Σ_{j=1}^n (ω♯)^j gij,   1 ≤ i ≤ n,
we get
(ω♯)^i = Σ_{j=1}^n g^{ij} ωj,
where (g^{ij}) is the inverse of the matrix (gij). The inner product, ⟨−, −⟩, on E induces an inner product on E∗, also denoted ⟨−, −⟩, and given by

⟨ω1, ω2⟩ = ⟨ω1♯, ω2♯⟩,   for all ω1, ω2 ∈ E∗;

that is, in the basis (e∗1, . . . , e∗n), the inner product on E∗ is represented by the matrix (g^{ij}), the inverse of the matrix (gij).
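As a quick numerical sanity check, the flat and sharp maps above are, in coordinates, just multiplication by the Gram matrix and its inverse. A minimal sketch in Python/NumPy (the matrix G and the vectors are illustrative test data, not from the text):

```python
import numpy as np

# Flat/sharp in coordinates, assuming an inner product <u, v> = u^T G v
# given by a symmetric positive definite Gram matrix G.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
G = A @ A.T + 3 * np.eye(3)      # Gram matrix g_ij = <e_i, e_j>
G_inv = np.linalg.inv(G)         # g^{ij}, the inverse matrix

u = rng.normal(size=3)           # a vector, components u^i
omega = G @ u                    # flat: omega_i = sum_j g_ij u^j
u_back = G_inv @ omega           # sharp: (omega^sharp)^i = sum_j g^{ij} omega_j

# u^flat(v) = <u, v> for every v, and sharp inverts flat
v = rng.normal(size=3)
assert np.isclose(omega @ v, u @ G @ v)
assert np.allclose(u_back, u)
```

Lowering then raising an index recovers the original components, exactly as ♯ ◦ ♭ = id requires.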
The inner product on a finite-dimensional vector space also yields a natural isomorphism between
the space, Hom(E, E; K), of bilinear forms on E and the space, Hom(E, E), of linear maps
from E to itself. Using this isomorphism, we can define the trace of a bilinear form in an
intrinsic manner. This technique is used in differential geometry, for example, to define the
divergence of a differential one-form.
Proposition 22.2 If ⟨−, −⟩ is an inner product on a finite-dimensional vector space, E, (over a field, K), then for every bilinear form, f : E × E → K, there is a unique linear map, f♯ : E → E, such that

f(u, v) = ⟨f♯(u), v⟩,   for all u, v ∈ E.

The map, f ↦ f♯, is a linear isomorphism between Hom(E, E; K) and Hom(E, E).
Proof. For every g ∈ Hom(E, E), the map given by

f(u, v) = ⟨g(u), v⟩,   for all u, v ∈ E,

is clearly bilinear. It is also clear that the above defines a linear map from Hom(E, E) to Hom(E, E; K). This map is injective because if f(u, v) = 0 for all u, v ∈ E, as ⟨−, −⟩ is an inner product, we get g(u) = 0 for all u ∈ E. Furthermore, both spaces Hom(E, E) and Hom(E, E; K) have the same dimension, so our linear map is an isomorphism.
If (e1, . . . , en) is an orthonormal basis of E, then we check immediately that the trace of a linear map, g, (which is independent of the choice of a basis) is given by

tr(g) = Σ_{i=1}^n ⟨g(ei), ei⟩,

and we define the trace of the bilinear form, f, by

tr(f) = tr(f♯),

for any orthonormal basis, (e1, . . . , en), of E. We can also check directly that the above expression is independent of the choice of an orthonormal basis.
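In matrix terms, if the inner product is ⟨u, v⟩ = uᵀGv and the bilinear form is f(u, v) = uᵀFv in the standard coordinates (an assumption made here for illustration), then f♯ is represented by G⁻¹Fᵀ, and tr(f) = tr(f♯) can be computed from any ⟨−,−⟩-orthonormal basis. A hedged sketch with made-up matrices:

```python
import numpy as np

# Assumptions (not from the text): <u, v> = u^T G v and f(u, v) = u^T F v.
# Then the unique linear map f_sharp with f(u, v) = <f_sharp(u), v> is G^{-1} F^T.
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
G = M @ M.T + 3 * np.eye(3)          # inner product matrix (SPD)
F = rng.normal(size=(3, 3))          # an arbitrary bilinear form

f_sharp = np.linalg.inv(G) @ F.T     # solves <f_sharp(u), v> = f(u, v)

u, v = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(u @ F @ v, (f_sharp @ u) @ G @ v)

# columns of inv(L).T are <,>-orthonormal when G = L L^T (Cholesky factor)
B = np.linalg.inv(np.linalg.cholesky(G)).T
trace_f = sum((f_sharp @ B[:, i]) @ G @ B[:, i] for i in range(3))
assert np.isclose(trace_f, np.trace(f_sharp))   # tr(f) = tr(f_sharp)
```

The last assertion checks basis independence: the sum Σ ⟨f♯(bi), bi⟩ over an orthonormal basis equals the ordinary matrix trace of f♯.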
We will also need the following Proposition to show that various families are linearly
independent.
Proposition 22.3 Let E and F be two nontrivial vector spaces and let (ui )i∈I be any family
of vectors ui ∈ E. The family, (ui )i∈I , is linearly independent iff for every family, (vi )i∈I , of
vectors vi ∈ F , there is some linear map, f : E → F , so that f (ui ) = vi , for all i ∈ I.
f = f⊗ ◦ ϕ.

Equivalently, there is a unique linear map f⊗ such that the following diagram commutes:

                     ϕ
    E1 × · · · × En -----> T
              \            |
             f \           | f⊗
                \          v
                 `-------> F
First, we show that any two tensor products (T1 , ϕ1 ) and (T2 , ϕ2 ) for E1 , . . . , En , are
isomorphic.
Proposition 22.4 Given any two tensor products (T1 , ϕ1 ) and (T2 , ϕ2 ) for E1 , . . . , En , there
is an isomorphism h : T1 → T2 such that
ϕ2 = h ◦ ϕ1 .
Proof. Focusing on (T1, ϕ1), we have a multilinear map ϕ2 : E1 × · · · × En → T2, and thus there is a unique linear map (ϕ2)⊗ : T1 → T2 such that

ϕ2 = (ϕ2)⊗ ◦ ϕ1.

Similarly, focusing on (T2, ϕ2), there is a unique linear map (ϕ1)⊗ : T2 → T1 such that

ϕ1 = (ϕ1)⊗ ◦ ϕ2.

Consequently,

ϕ1 = (ϕ1)⊗ ◦ (ϕ2)⊗ ◦ ϕ1

and

ϕ2 = (ϕ2)⊗ ◦ (ϕ1)⊗ ◦ ϕ2.

On the other hand, focusing on (T1, ϕ1), we have a multilinear map ϕ1 : E1 × · · · × En → T1, but the unique linear map h : T1 → T1, with

ϕ1 = h ◦ ϕ1,

is h = id, and since (ϕ1)⊗ ◦ (ϕ2)⊗ is linear, as a composition of linear maps, we must have (ϕ1)⊗ ◦ (ϕ2)⊗ = id. Similarly, (ϕ2)⊗ ◦ (ϕ1)⊗ = id, so h = (ϕ2)⊗ is the desired isomorphism with ϕ2 = h ◦ ϕ1.

Next, we prove the existence of tensor products (Theorem 22.5): a tensor product (E1 ⊗ · · · ⊗ En, ϕ) can be constructed, and denoting ϕ(u1, . . . , un) as u1 ⊗ · · · ⊗ un, for every multilinear map f : E1 × · · · × En → F, the unique linear map f⊗ with f = f⊗ ◦ ϕ is defined by

f⊗(u1 ⊗ · · · ⊗ un) = f(u1, . . . , un),

on the generators u1 ⊗ · · · ⊗ un of E1 ⊗ · · · ⊗ En.
Proof. Given any set, I, viewed as an index set, let K^(I) be the set of all functions, f : I → K, such that f(i) ≠ 0 only for finitely many i ∈ I. As usual, denote such a function by (fi)i∈I; it is a family of finite support. We make K^(I) into a vector space by defining addition and scalar multiplication componentwise:

(fi)i∈I + (gi)i∈I = (fi + gi)i∈I   and   λ(fi)i∈I = (λfi)i∈I.

The family, (ei)i∈I, is defined such that (ei)j = 0 if j ≠ i and (ei)i = 1. It is a basis of the vector space K^(I), so that every w ∈ K^(I) can be uniquely written as a finite linear combination of the ei. There is also an injection, ι : I → K^(I), such that ι(i) = ei for every i ∈ I. Furthermore, it is easy to show that for any vector space, F, and for any function, f : I → F, there is a unique linear map, f̄ : K^(I) → F, such that

f = f̄ ◦ ι.
Now, apply the above to the index set I = E1 × · · · × En, let M = K^(E1×···×En) be the corresponding free vector space, and let N be the subspace of M generated by the vectors enforcing multilinearity, namely, for all i, all uj, vi ∈ Ej, and all λ ∈ K, the vectors

(u1, . . . , ui + vi, . . . , un) − (u1, . . . , ui, . . . , un) − (u1, . . . , vi, . . . , un)

and

(u1, . . . , λui, . . . , un) − λ(u1, . . . , ui, . . . , un),

identified with their images under ι in M. We let E1 ⊗ · · · ⊗ En be the quotient M/N of the free vector space M by N, π : M → M/N be the quotient map, and set

ϕ = π ◦ ι.

By construction, ϕ is multilinear, and since π is surjective and the ι(i) = ei generate M, where i is of the form i = (u1, . . . , un) ∈ E1 × · · · × En, the ϕ(u1, . . . , un) generate M/N. Thus, if we denote ϕ(u1, . . . , un) as u1 ⊗ · · · ⊗ un, the tensor product E1 ⊗ · · · ⊗ En is generated by the vectors u1 ⊗ · · · ⊗ un, where u1 ∈ E1, . . . , un ∈ En.
For every multilinear map f : E1 × · · · × En → F , if a linear map f⊗ : E1 ⊗ · · · ⊗ En → F
exists such that f = f⊗ ◦ ϕ, since the vectors u1 ⊗ · · · ⊗ un generate E1 ⊗ · · · ⊗ En , the map
f⊗ is uniquely defined by
f⊗ (u1 ⊗ · · · ⊗ un ) = f (u1 , . . . , un ).
On the other hand, because M = K^(E1×···×En) is free on I = E1 × · · · × En, there is a unique linear map f̄ : K^(E1×···×En) → F, such that

f = f̄ ◦ ι,

as in the diagram below:

                        ι
    E1 × · · · × En ---------> K^(E1×···×En)
              \                  |
             f \                 | f̄
                \                v
                 `-------------> F
Because f is multilinear, note that we must have f̄(w) = 0 for every w ∈ N. But then, f̄ : M → F induces a linear map h : M/N → F, such that

f = h ◦ π ◦ ι,
by defining h([z]) = f (z), for every z ∈ M , where [z] denotes the equivalence class in M/N
of z ∈ M :
                      π ◦ ι
    E1 × · · · × En ---------> K^(E1×···×En)/N
              \                  |
             f \                 | h
                \                v
                 `-------------> F
Indeed, the fact that f̄ vanishes on N ensures that h is well defined on M/N, and it is clearly linear by definition. However, we showed that such a linear map h is unique, and thus it
agrees with the linear map f⊗ defined by
f⊗ (u1 ⊗ · · · ⊗ un ) = f (u1 , . . . , un )
on the generators of E1 ⊗ · · · ⊗ En .
What is important about Theorem 22.5 is not so much the construction itself but the
fact that it produces a tensor product with the universal mapping property with respect to
multilinear maps. Indeed, Theorem 22.5 yields a canonical isomorphism,
L(E1 ⊗ · · · ⊗ En, F) ≅ L(E1, . . . , En; F),

between the vector space of linear maps, L(E1 ⊗ · · · ⊗ En, F), and the vector space of multilinear maps, L(E1, . . . , En; F), via the linear map − ◦ ϕ defined by

h ↦ h ◦ ϕ,

where h ∈ L(E1 ⊗ · · · ⊗ En, F). Using the Hom notation, this isomorphism is also written

Hom(E1 ⊗ · · · ⊗ En, F) ≅ Hom(E1, . . . , En; F).
Remarks:
(1) To be very precise, since the tensor product depends on the field, K, we should subscript
the symbol ⊗ with K and write
E1 ⊗K · · · ⊗K En.
(2) For F = K, the base field, we obtain a canonical isomorphism between the vector
space L(E1 ⊗ · · · ⊗ En , K), and the vector space of multilinear forms L(E1 , . . . , En ; K).
However, L(E1 ⊗ · · · ⊗ En , K) is the dual space, (E1 ⊗ · · · ⊗ En )∗ , and thus, the vector
space of multilinear forms L(E1 , . . . , En ; K) is canonically isomorphic to (E1 ⊗· · ·⊗En )∗ .
We write
L(E1, . . . , En; K) ≅ (E1 ⊗ · · · ⊗ En)∗.
The fact that the map ϕ : E1 × · · · × En → E1 ⊗ · · · ⊗ En is multilinear can also be expressed as follows:

u1 ⊗ · · · ⊗ (ui + vi) ⊗ · · · ⊗ un = (u1 ⊗ · · · ⊗ ui ⊗ · · · ⊗ un) + (u1 ⊗ · · · ⊗ vi ⊗ · · · ⊗ un),

u1 ⊗ · · · ⊗ (λui) ⊗ · · · ⊗ un = λ(u1 ⊗ · · · ⊗ ui ⊗ · · · ⊗ un).
Of course, this is just what we wanted! Tensors in E1 ⊗ · · · ⊗ En are also called n-tensors,
and tensors of the form u1 ⊗ · · · ⊗ un , where ui ∈ Ei , are called simple (or indecomposable)
n-tensors. Those n-tensors that are not simple are often called compound n-tensors.
Not only do tensor products act on spaces, but they also act on linear maps (they are
functors). Given two linear maps f : E → E′ and g : F → F′, we can define h : E × F → E′ ⊗ F′ by

h(u, v) = f(u) ⊗ g(v).

It is immediately verified that h is bilinear, and thus, it induces a unique linear map

f ⊗ g : E ⊗ F → E′ ⊗ F′,

such that

(f ⊗ g)(u ⊗ v) = f(u) ⊗ g(v).

If we also have linear maps f′ : E′ → E′′ and g′ : F′ → F′′, then

(f′ ◦ f) ⊗ (g′ ◦ g) = (f′ ⊗ g′) ◦ (f ⊗ g).
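In coordinates, the tensor product of linear maps is the Kronecker product of their matrices, and the functoriality identity above becomes the mixed-product property. A small sketch with random matrices (purely illustrative):

```python
import numpy as np

# With matrices, f ⊗ g is np.kron(f, g), and u ⊗ v is np.kron(u, v).
rng = np.random.default_rng(2)
f  = rng.normal(size=(3, 2));  g  = rng.normal(size=(4, 2))
f2 = rng.normal(size=(2, 3));  g2 = rng.normal(size=(2, 4))   # f', g'

# (f ⊗ g)(u ⊗ v) = f(u) ⊗ g(v)
u, v = rng.normal(size=2), rng.normal(size=2)
assert np.allclose(np.kron(f, g) @ np.kron(u, v), np.kron(f @ u, g @ v))

# functoriality: (f' ∘ f) ⊗ (g' ∘ g) = (f' ⊗ g') ∘ (f ⊗ g)
assert np.allclose(np.kron(f2 @ f, g2 @ g), np.kron(f2, g2) @ np.kron(f, g))
```

Both assertions are instances of the Kronecker mixed-product rule kron(AC, BD) = kron(A, B) kron(C, D).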
for some family of scalars (v^k_j)j∈Ik. Let F be any nontrivial vector space. We show that for every family

(wi1,...,in)(i1,...,in)∈I1×···×In,

of vectors in F, there is some linear map h : E1 ⊗ · · · ⊗ En → F, such that h(u^1_{i1} ⊗ · · · ⊗ u^n_{in}) = wi1,...,in. We define the function f : E1 × · · · × En → F as follows:

f( Σ_{j1∈I1} v^1_{j1} u^1_{j1}, . . . , Σ_{jn∈In} v^n_{jn} u^n_{jn} ) = Σ_{j1∈I1,...,jn∈In} v^1_{j1} · · · v^n_{jn} wj1,...,jn.

It is easily verified that f is multilinear, and the linear map f⊗ given by the universal mapping property is the desired map h. Then, by Proposition 22.3, the family of vectors u^1_{i1} ⊗ · · · ⊗ u^n_{in} is linearly independent. However, since (u^k_i)i∈Ik is a basis for Ek, the u^1_{i1} ⊗ · · · ⊗ u^n_{in} also generate E1 ⊗ · · · ⊗ En, and thus, they form a basis of E1 ⊗ · · · ⊗ En. It follows that every tensor z ∈ E1 ⊗ · · · ⊗ En can be written uniquely as

z = Σ λi1,...,in u^1_{i1} ⊗ · · · ⊗ u^n_{in},

for some unique family of scalars λi1,...,in ∈ K, all zero except for a finite number.
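This basis description can be checked concretely in small dimensions: realizing ei ⊗ ej as the Kronecker product of standard basis vectors, the products of basis vectors are linearly independent and the coordinates of a tensor are exactly the scalars λij. A sketch for E1 = R², E2 = R³:

```python
import numpy as np

# e_i ⊗ e_j realized as Kronecker products gives a basis of R^6 ≅ E1 ⊗ E2.
e2, e3 = np.eye(2), np.eye(3)
basis = [np.kron(e2[i], e3[j]) for i in range(2) for j in range(3)]
assert np.linalg.matrix_rank(np.stack(basis)) == 6   # independent, hence a basis

lam = np.arange(6.0).reshape(2, 3)                   # scalars lambda_{ij}
z = sum(lam[i, j] * basis[3 * i + j] for i in range(2) for j in range(3))
assert np.allclose(z, lam.reshape(-1))               # coordinates are recovered
```

The dimension of the tensor product is the product of the dimensions (2 · 3 = 6), in line with the basis count above.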
22.3. SOME USEFUL ISOMORPHISMS FOR TENSOR PRODUCTS
Proposition 22.7 Given any three vector spaces, E, F, G, there exist canonical isomorphisms

(1) E ⊗ F ≅ F ⊗ E
(2) (E ⊗ F) ⊗ G ≅ E ⊗ (F ⊗ G) ≅ E ⊗ F ⊗ G
(3) (E ⊕ F) ⊗ G ≅ (E ⊗ G) ⊕ (F ⊗ G)
(4) K ⊗ E ≅ E

such that, respectively,

(a) u ⊗ v ↦ v ⊗ u
(b) (u ⊗ v) ⊗ w ↦ u ⊗ (v ⊗ w) ↦ u ⊗ v ⊗ w
(c) (u, v) ⊗ w ↦ (u ⊗ w, v ⊗ w)
(d) λ ⊗ u ↦ λu.
Proof. These isomorphisms are proved using the universal mapping property of tensor products. We illustrate the proof method on (2). Fix some w ∈ G. The map

(u, v) ↦ u ⊗ v ⊗ w

is bilinear, and thus it induces a linear map from E ⊗ F to E ⊗ F ⊗ G sending u ⊗ v to u ⊗ v ⊗ w. Since this map depends linearly on w, we get a bilinear map from (E ⊗ F) × G to E ⊗ F ⊗ G, and hence a linear map

f : (E ⊗ F) ⊗ G → E ⊗ F ⊗ G,

with f((u ⊗ v) ⊗ w) = u ⊗ v ⊗ w. Similarly, there is a linear map

g : E ⊗ F ⊗ G → (E ⊗ F) ⊗ G,

such that g(u ⊗ v ⊗ w) = (u ⊗ v) ⊗ w. Clearly, f ◦ g and g ◦ f are identity maps (being the identity on generators), and thus, f and g are isomorphisms. The other cases are similar.
We also have the canonical isomorphism

Hom(E, F; G) ≅ Hom(E, Hom(F, G)).

Indeed, any bilinear map, f : E × F → G, gives the linear map, ϕ(f) ∈ Hom(E, Hom(F, G)), where ϕ(f)(u) is the linear map in Hom(F, G) given by

ϕ(f)(u)(v) = f(u, v).

Conversely, given a linear map, g ∈ Hom(E, Hom(F, G)), we get the bilinear map, ψ(g), given by

ψ(g)(u, v) = g(u)(v),

and it is clear that ϕ and ψ are mutual inverses. Consequently, we have the important corollary:
Proposition 22.8 For any three vector spaces, E, F, G, we have the canonical isomorphism,
Hom(E ⊗ F, G) ≅ Hom(E, Hom(F, G)).

There is also a canonical multilinear map,

(v1∗, . . . , vn∗) ↦ ((u1, . . . , un) ↦ v1∗(u1) · · · vn∗(un)),

from E1∗ × · · · × En∗ to Hom(E1 ⊗ · · · ⊗ En, K), which extends to a linear map, L, from E1∗ ⊗ · · · ⊗ En∗ to Hom(E1 ⊗ · · · ⊗ En, K). However, in view of the isomorphism,

Hom(U ⊗ V, W) ≅ Hom(U, Hom(V, W)),

it can be shown that, when the Ei are finite dimensional, L yields a canonical isomorphism

(E1 ⊗ · · · ⊗ En)∗ ≅ E1∗ ⊗ · · · ⊗ En∗.
Proposition 22.9 If E and F are vector spaces with E of finite dimension, then the linear
map, α⊗ : E ∗ ⊗ F → Hom(E, F ), is a canonical isomorphism.
Proof. Let (ej)1≤j≤n be a basis of E and, as usual, let e∗j ∈ E∗ be the linear form defined by

e∗j(ek) = δj,k,

where δj,k = 1 iff j = k and 0 otherwise. We know that (e∗j)1≤j≤n is a basis of E∗ (this is where we use the finite dimension of E). Now, for any linear map, f ∈ Hom(E, F), for every
x = x1 e1 + · · · + xn en ∈ E, we have
f(x) = f(x1 e1 + · · · + xn en) = x1 f(e1) + · · · + xn f(en) = e∗1(x)f(e1) + · · · + e∗n(x)f(en).

Consequently, α⊗(e∗1 ⊗ f(e1) + · · · + e∗n ⊗ f(en)) = f, which shows that α⊗ is surjective. Moreover, every tensor in E∗ ⊗ F can be written as e∗1 ⊗ f1 + · · · + e∗n ⊗ fn for some f1, . . . , fn ∈ F, and if α⊗(e∗1 ⊗ f1 + · · · + e∗n ⊗ fn) = f, then necessarily fi = f(ei). Since the fi and fi′ in any two such expressions are uniquely determined by the linear map, f, we must have fi = fi′ and α⊗ is injective. Therefore, α⊗ is a bijection.
Note that in Proposition 22.9, the space F may have infinite dimension but E has finite
dimension. In view of the canonical isomorphism
Hom(E1, . . . , En; F) ≅ Hom(E1 ⊗ · · · ⊗ En, F)

and the canonical isomorphism (E1 ⊗ · · · ⊗ En)∗ ≅ E1∗ ⊗ · · · ⊗ En∗, where the Ei’s are finite-dimensional, Proposition 22.9 yields the canonical isomorphism

Hom(E1, . . . , En; F) ≅ E1∗ ⊗ · · · ⊗ En∗ ⊗ F.
also denoted T • (V ), to avoid confusion with the tangent bundle. This is an interesting object
because we can define a multiplication operation on it which makes it into an algebra called
the tensor algebra of V . When V is of finite dimension n, this space corresponds to the
algebra of polynomials with coefficients in K in n noncommuting variables.
Let us recall the definition of an algebra over a field. Let K denote any (commutative)
field, although for our purposes, we may assume that K = R (and occasionally, K = C).
Since we will only be dealing with associative algebras with a multiplicative unit, we only
define algebras of this kind.
For example, the ring, Mn (K), of all n × n matrices over a field, K, is a K-algebra.
There is an obvious notion of ideal of a K-algebra: an ideal of A is a linear subspace of A that is also a two-sided ideal with respect to multiplication in A. If the field K is
understood, we usually simply say an algebra instead of a K-algebra.
We would like to define a multiplication operation on T(V) which makes it into a K-algebra. As

T(V) = ⊕_{i≥0} V⊗i,

every element of T(V) is a finite sum v1 + · · · + vk, where vi ∈ V⊗ni and the ni are natural numbers with ni ≠ nj if i ≠ j; so to define multiplication in T(V), using bilinearity, it is enough to define multiplication operations, · : V⊗m × V⊗n −→ V⊗(m+n), which, using the isomorphisms, V⊗n ≅ ιn(V⊗n), yield multiplication operations, · : ιm(V⊗m) × ιn(V⊗n) −→ ιm+n(V⊗(m+n)). More precisely, we use the canonical isomorphism,

V⊗m ⊗ V⊗n ≅ V⊗(m+n),

which defines a bilinear operation,

V⊗m × V⊗n −→ V⊗(m+n),
Indeed,

V⊗m ⊗ V⊗n ≅ V⊗m ⊗ (V ⊗ · · · ⊗ V)   (with n factors V)

and

V⊗m ⊗ (V ⊗ · · · ⊗ V) ≅ V⊗(m+n)   (with n factors V),

which can be shown using methods similar to those used to prove associativity. Of course,
the multiplication, V ⊗m × V ⊗n −→ V ⊗(m+n) , is defined so that
(v1 ⊗ · · · ⊗ vm ) · (w1 ⊗ · · · ⊗ wn ) = v1 ⊗ · · · ⊗ vm ⊗ w1 ⊗ · · · ⊗ wn .
(This has to be made rigorous by using isomorphisms involving the associativity of tensor products; for details, see Atiyah and Macdonald [9].)
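Concretely, if we realize a simple tensor in V⊗k (with V = Rᵈ) as a k-dimensional array, the multiplication above is the outer product, which simply concatenates the factors. A sketch under that identification (the vectors are arbitrary test data):

```python
import numpy as np

# Simple tensors as multi-dimensional arrays; the tensor-algebra product
# (v1 ⊗ v2) · w1 = v1 ⊗ v2 ⊗ w1 becomes an iterated outer product.
rng = np.random.default_rng(3)
v1, v2, w1 = rng.normal(size=(3, 4))      # three vectors in V = R^4

t = np.multiply.outer(np.multiply.outer(v1, v2), w1)   # (v1 ⊗ v2) · w1
assert t.shape == (4, 4, 4)               # lands in V^{⊗(2+1)}

# associativity: grouping the outer products differently gives the same 3-tensor
assert np.allclose(t, np.multiply.outer(v1, np.multiply.outer(v2, w1)))
```

The shape (4, 4, 4) reflects the grading V⊗2 · V⊗1 ⊆ V⊗3, and the second assertion is the associativity that the isomorphisms above make rigorous.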
Proposition 22.10 Given any K-algebra, A, for any linear map, f : V → A, there is a unique K-algebra homomorphism, f̄ : T(V) → A, so that

f = f̄ ◦ i,

as in the diagram below:

            i
    V ---------> T(V)
      \            |
     f \           | f̄
        \          v
         `-------> A
Proof. Left as an exercise (use Theorem 22.5).
Most algebras of interest arise as well-chosen quotients of the tensor algebra T(V). This is true for the exterior algebra, ⋀(V) (also called Grassmann algebra), where we take the
quotient of T (V ) modulo the ideal generated by all elements of the form v ⊗ v, where v ∈ V ,
and for the symmetric algebra, Sym(V ), where we take the quotient of T (V ) modulo the
ideal generated by all elements of the form v ⊗ w − w ⊗ v, where v, w ∈ V .
Algebras such as T(V) are graded, in the sense that there is a sequence of subspaces, V⊗n ⊆ T(V), such that

T(V) = ⊕_{n≥0} V⊗n,

and the multiplication, ⊗, behaves well w.r.t. the grading, i.e., ⊗ : V⊗m × V⊗n → V⊗(m+n). Generally, a K-algebra, E, is said to be a graded algebra iff there is a sequence of subspaces, E^n ⊆ E, such that

E = ⊕_{n≥0} E^n,

and the multiplication, ·, respects the grading, that is, E^m · E^n ⊆ E^{m+n}.
Definition 22.4 Given a vector space, V, for any pair of nonnegative integers, (r, s), the tensor space, T r,s(V), of type (r, s), is the tensor product

T r,s(V) = V⊗r ⊗ (V∗)⊗s = V ⊗ · · · ⊗ V ⊗ V∗ ⊗ · · · ⊗ V∗   (r factors V and s factors V∗),

with T 0,0(V) = K. We also define the tensor algebra, T •,•(V), as the coproduct

T •,•(V) = ⊕_{r,s≥0} T r,s(V).
Note that tensors in T r,0(V) are just our “old tensors” in V⊗r. We make T •,•(V) into an algebra by defining multiplication operations,

T r1,s1(V) × T r2,s2(V) −→ T r1+r2,s1+s2(V),

induced by the tensor product. There is also a canonical pairing,

T r,s(V∗) × T r,s(V) −→ K,
as follows: If
v ∗ = v1∗ ⊗ · · · ⊗ vr∗ ⊗ ur+1 ⊗ · · · ⊗ ur+s ∈ T r,s (V ∗ )
and
u = u1 ⊗ · · · ⊗ ur ⊗ vr+1∗ ⊗ · · · ⊗ vr+s∗ ∈ T r,s(V),
then
(v∗, u) = v1∗(u1) · · · vr+s∗(ur+s).
This is a nondegenerate pairing and thus, we get a canonical isomorphism,
(T r,s(V))∗ ≅ T r,s(V∗).
We also have the canonical isomorphism

T r,s(V∗) ≅ Hom(V^r, (V∗)^s; K).
Remark: The tensor spaces, T r,s(V), are also denoted T^r_s(V). A tensor, α ∈ T r,s(V), is
said to be contravariant in the first r arguments and covariant in the last s arguments.
This terminology refers to the way tensors behave under coordinate changes. Given a basis,
(e1 , . . . , en ), of V , if (e∗1 , . . . , e∗n ) denotes the dual basis, then every tensor α ∈ T r,s (V ) is
given by an expression of the form
α = Σ_{i1,...,ir, j1,...,js} a^{i1,...,ir}_{j1,...,js} ei1 ⊗ · · · ⊗ eir ⊗ e∗j1 ⊗ · · · ⊗ e∗js.
The tradition in classical tensor notation is to use lower indices on vectors and upper indices
on linear forms and in accordance to Einstein summation convention (or Einstein notation)
the position of the indices on the coefficients is reversed. Einstein summation convention is
to assume that a summation is performed for all values of every index that appears simul-
taneously once as an upper index and once as a lower index. According to this convention,
the tensor α above is written
α = a^{i1,...,ir}_{j1,...,js} ei1 ⊗ · · · ⊗ eir ⊗ e^{j1} ⊗ · · · ⊗ e^{js}.
Definition 22.5 For all r, s ≥ 1, the contraction, ci,j : T r,s(V) → T r−1,s−1(V), with 1 ≤ i ≤ r and 1 ≤ j ≤ s, is the linear map defined on generators by

ci,j(u1 ⊗ · · · ⊗ ur ⊗ v1∗ ⊗ · · · ⊗ vs∗) = vj∗(ui) u1 ⊗ · · · ⊗ ûi ⊗ · · · ⊗ ur ⊗ v1∗ ⊗ · · · ⊗ v̂j∗ ⊗ · · · ⊗ vs∗,

where the hats indicate that the corresponding factors are omitted. As

c1,1(ei ⊗ e∗j) = δi,j,

for every h = Σ_{i,j} a^i_j ei ⊗ e∗j ∈ T 1,1(V), we get

c1,1(h) = Σ_{i=1}^n a^i_i = tr(h),
where tr(h) is the trace of h, where h is viewed as the linear map given by the matrix, (aij ).
Actually, since c1,1 is defined independently of any basis, c1,1 provides an intrinsic definition
of the trace of a linear map, h ∈ Hom(V, V ).
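This intrinsic trace is easy to check in coordinates: np.einsum implements exactly the Einstein-convention contraction, and the result is unchanged under a change of basis (under which the coefficients of a type (1, 1) tensor transform as P⁻¹aP). A sketch with made-up data:

```python
import numpy as np

# c_{1,1} of h = a^i_j e_i ⊗ e*_j is the trace of the matrix (a^i_j).
rng = np.random.default_rng(4)
a = rng.normal(size=(5, 5))               # coefficients a^i_j

assert np.isclose(np.einsum('ii->', a), np.trace(a))

# basis independence: mixed-tensor coefficients become P^{-1} a P,
# and the contraction is unchanged
P = rng.normal(size=(5, 5)) + 5 * np.eye(5)
a2 = np.linalg.inv(P) @ a @ P
assert np.isclose(np.einsum('ii->', a2), np.trace(a))
```

This is precisely the statement that c1,1 gives a basis-free definition of the trace of h ∈ Hom(V, V).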
In Einstein notation, if

α = a^{i1,...,ir}_{j1,...,js} ei1 ⊗ · · · ⊗ eir ⊗ e^{j1} ⊗ · · · ⊗ e^{js},

then

ck,l(α) = a^{i1,...,ik−1,i,ik+1,...,ir}_{j1,...,jl−1,i,jl+1,...,js} ei1 ⊗ · · · ⊗ êik ⊗ · · · ⊗ eir ⊗ e^{j1} ⊗ · · · ⊗ ê^{jl} ⊗ · · · ⊗ e^{js},

where the hats indicate the omitted factors (and a summation over the repeated index i is understood).
If E and F are two K-algebras, we know that their tensor product, E ⊗ F , exists as a
vector space. We can make E ⊗ F into an algebra as well. Indeed, we have the multilinear
map
E × F × E × F −→ E ⊗ F
given by (a, b, c, d) �→ (ac) ⊗ (bd), where ac is the product of a and c in E and bd is the
product of b and d in F . By the universal mapping property, we get a linear map,
E ⊗ F ⊗ E ⊗ F −→ E ⊗ F.

Using the isomorphism,

E ⊗ F ⊗ E ⊗ F ≅ (E ⊗ F) ⊗ (E ⊗ F),

we get a linear map, (E ⊗ F) ⊗ (E ⊗ F) −→ E ⊗ F, and hence a bilinear multiplication on E ⊗ F such that

(a ⊗ b) · (c ⊗ d) = (ac) ⊗ (bd).
A multilinear map, f : E^n → F, is symmetric iff

f(uσ(1), . . . , uσ(n)) = f(u1, . . . , un),

for all ui ∈ E and all permutations, σ : {1, . . . , n} → {1, . . . , n}. The group of permutations
on {1, . . . , n} (the symmetric group) is denoted Sn . The vector space of all symmetric
multilinear maps, f : E n → F , is denoted by Sn (E; F ). Note that S1 (E; F ) = Hom(E, F ).
We could proceed directly as in Theorem 22.5, and construct symmetric tensor products
from scratch. However, since we already have the notion of a tensor product, there is a more
economical method. First, we define symmetric tensor powers.
First, we show that any two symmetric n-th tensor powers (S1 , ϕ1 ) and (S2 , ϕ2 ) for E,
are isomorphic.
Proposition 22.11 Given any two symmetric n-th tensor powers (S1 , ϕ1 ) and (S2 , ϕ2 ) for
E, there is an isomorphism h : S1 → S2 such that
ϕ2 = h ◦ ϕ1 .
Proof . Replace tensor product by n-th symmetric tensor power in the proof of Proposition
22.4.
We now give a construction that produces a symmetric n-th tensor power of a vector
space E.
Theorem 22.12 Given a vector space E, a symmetric n-th tensor power (Symn (E), ϕ)
for E can be constructed (n ≥ 1). Furthermore, denoting ϕ(u1 , . . . , un ) as u1 ⊙ · · · ⊙ un ,
the symmetric tensor power Symn (E) is generated by the vectors u1 ⊙ · · · ⊙ un , where
u1 , . . . , un ∈ E, and for every symmetric multilinear map f : E n → F , the unique linear
map f⊙ : Symn (E) → F such that f = f⊙ ◦ ϕ, is defined by
f⊙ (u1 ⊙ · · · ⊙ un ) = f (u1 , . . . , un ),
Proof . The tensor power E ⊗n is too big, and thus, we define an appropriate quotient. Let
C be the subspace of E ⊗n generated by the vectors of the form
u1 ⊗ · · · ⊗ un − uσ(1) ⊗ · · · ⊗ uσ(n) ,
for all ui ∈ E, and all permutations σ : {1, . . . , n} → {1, . . . , n}. We claim that the quotient
space (E ⊗n )/C does the job.
Let p : E ⊗n → (E ⊗n )/C be the quotient map. Let ϕ : E n → (E ⊗n )/C be the map
(u1 , . . . , un ) �→ p(u1 ⊗ · · · ⊗ un ),
which shows that h and f⊙ agree. Thus, Symn (E) = (E ⊗n )/C and ϕ constitute a symmetric
n-th tensor power of E.
Again, the actual construction is not important. What is important is that the symmetric
n-th power has the universal mapping property with respect to symmetric multilinear maps.
Remark: The notation ⊙ for the commutative multiplication of symmetric tensor powers is not standard. Another notation commonly used is ·. We often abbreviate “symmetric tensor power” as “symmetric power”. The symmetric power, Symn(E), is also denoted Symn E or S^n(E). To be consistent with the use of ⊙, we could have used the notation ⊙^n E. Clearly, Sym1(E) ≅ E and it is convenient to set Sym0(E) = K.
The fact that the map ϕ : E^n → Symn(E) is symmetric and multilinear can also be expressed as follows:
u1 ⊙ · · · ⊙ (ui + vi ) ⊙ · · · ⊙ un = (u1 ⊙ · · · ⊙ ui ⊙ · · · ⊙ un )
+ (u1 ⊙ · · · ⊙ vi ⊙ · · · ⊙ un ),
u1 ⊙ · · · ⊙ (λui ) ⊙ · · · ⊙ un = λ(u1 ⊙ · · · ⊙ ui ⊙ · · · ⊙ un ),
uσ(1) ⊙ · · · ⊙ uσ(n) = u1 ⊙ · · · ⊙ un ,
for all permutations σ ∈ Sn. Theorem 22.12 yields a canonical isomorphism

Hom(Symn(E), F) ≅ Sn(E; F),

between the vector space of linear maps Hom(Symn(E), F), and the vector space of symmetric multilinear maps Sn(E; F), via the linear map − ◦ ϕ defined by

h ↦ h ◦ ϕ,

where h ∈ Hom(Symn(E), F). In particular, when F = K, we get a canonical isomorphism

(Symn(E))∗ ≅ Sn(E; K).
Symmetric tensors in Symn (E) are also called symmetric n-tensors, and tensors of the
form u1 ⊙ · · · ⊙ un , where ui ∈ E, are called simple (or decomposable) symmetric n-tensors.
Those symmetric n-tensors that are not simple are often called compound symmetric n-
tensors.
Given two linear maps f : E → E′ and g : E → E′, we can define h : E × E → Sym2(E′) by

h(u, v) = f(u) ⊙ g(v).

It is immediately verified that h is symmetric bilinear, and thus, it induces a unique linear map

f ⊙ g : Sym2(E) → Sym2(E′),

such that

(f ⊙ g)(u ⊙ v) = f(u) ⊙ g(v).

If we also have linear maps f′ : E′ → E′′ and g′ : E′ → E′′, then

(f′ ◦ f) ⊙ (g′ ◦ g) = (f′ ⊙ g′) ◦ (f ⊙ g).
�
In other words, if i∈I M (i) = n and dom(M ) = {i1 , . . . , ik },1 any function η ∈ JM specifies
a sequence of length n, consisting of M (i1 ) occurrences of i1 , M (i2 ) occurrences of i2 , . . .,
M (ik ) occurrences of ik . Intuitively, any η defines a “permutation” of the sequence (of length
n)
�i1 , . . . , i1 , i2 , . . . , i2 , . . . , ik , . . . , ik �.
� �� � � �� � � �� �
M (i1 ) M (i2 ) M (ik )
1
Note that must have k ≤ n.
For any u ∈ E, we also write

u ⊙ · · · ⊙ u   (k factors)

as u⊙k.
We can now prove the following Proposition.
Proposition 22.13 Given a vector space E, if (ui)i∈I is a basis for E, then the family of vectors

( u_{i1}^{⊙M(i1)} ⊙ · · · ⊙ u_{ik}^{⊙M(ik)} )   with M ∈ N^(I), Σ_{i∈I} M(i) = n, {i1, . . . , ik} = dom(M),

is a basis of the symmetric n-th tensor power Symn(E).
Proof. The proof is very similar to that of Proposition 22.6. For any nontrivial vector space F, for any family of vectors

(wM)   with M ∈ N^(I), Σ_{i∈I} M(i) = n,

we show the existence of a linear map h : Symn(E) → F, such that for every M ∈ N^(I) with Σ_{i∈I} M(i) = n, we have

h( u_{i1}^{⊙M(i1)} ⊙ · · · ⊙ u_{ik}^{⊙M(ik)} ) = wM,

where {i1, . . . , ik} = dom(M). To do so, we define a function f : E^n → F in a manner analogous to the proof of Proposition 22.6, using the coordinates with respect to the basis (ui)i∈I. It is not difficult to verify that f is symmetric and multilinear. By the universal mapping property of the symmetric tensor product, the linear map f⊙ : Symn(E) → F such that f = f⊙ ◦ ϕ, is the desired map h. Then, by Proposition 22.3, it follows that the family

( u_{i1}^{⊙M(i1)} ⊙ · · · ⊙ u_{ik}^{⊙M(ik)} )   with M ∈ N^(I), Σ_{i∈I} M(i) = n, {i1, . . . , ik} = dom(M),
is linearly independent. Using the commutativity of ⊙, we can also show that these vectors
generate Symn (E), and thus, they form a basis for Symn (E). The details are left as an
exercise.
As a consequence, when I is finite, say of size p = dim(E), the dimension of Symn(E) is the number of finite multisets (j1, . . . , jp), such that j1 + · · · + jp = n, jk ≥ 0. We leave as an exercise to show that this number is the binomial coefficient (p+n−1 choose n). Thus, if dim(E) = p, then the dimension of Symn(E) is (p+n−1 choose n). Compare with the dimension of E⊗n, which is p^n. In particular, when p = 2, the dimension of Symn(E) is n + 1. This can also be seen directly.
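The multiset count can be verified mechanically; the enumeration below (using Python's itertools purely as a check) confirms that the number of multisets of size n drawn from p basis elements is the binomial coefficient (p+n−1 choose n):

```python
import math
from itertools import combinations_with_replacement

# dim Sym^n(E) = number of multisets of size n from p elements = C(p+n-1, n)
for p in range(1, 6):
    for n in range(0, 6):
        n_multisets = sum(1 for _ in combinations_with_replacement(range(p), n))
        assert n_multisets == math.comb(p + n - 1, n)

# p = 2: dimension n + 1, versus 2^n for the full tensor power E^{⊗n}
assert [math.comb(2 + n - 1, n) for n in range(5)] == [1, 2, 3, 4, 5]
```

For p = 2 the dimensions 1, 2, 3, 4, 5 grow linearly, while dim E⊗n = 2ⁿ grows exponentially, illustrating how much smaller the symmetric power is.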
Remark: The number (p+n−1 choose n) is also the number of homogeneous monomials

X1^{j1} · · · Xp^{jp}
of total degree n in p variables (we have j1 + · · · + jp = n). This is not a coincidence!
Symmetric tensor products are closely related to polynomials (for more on this, see the next
remark).
Given a vector space E and a basis (ui )i∈I for E, Proposition 22.13 shows that every
symmetric tensor z ∈ Symn(E) can be written in a unique way as

z = Σ_M λM u_{i1}^{⊙M(i1)} ⊙ · · · ⊙ u_{ik}^{⊙M(ik)},

where the sum ranges over all M ∈ N^(I) with Σ_{i∈I} M(i) = n and {i1, . . . , ik} = dom(M), for some unique family of scalars λM ∈ K, all zero except for a finite number.
This looks like a homogeneous polynomial of total degree n, where the monomials of total
degree n are the symmetric tensors
u_{i1}^{⊙M(i1)} ⊙ · · · ⊙ u_{ik}^{⊙M(ik)},
in the “indeterminates” ui , where i ∈ I (recall that M (i1 ) + · · · + M (ik ) = n). Again, this
is not a coincidence. Polynomials can be defined in terms of symmetric tensors.
Note that the expression on the right-hand side is “almost” the determinant, det(vj∗ (ui )),
except that the sign sgn(σ) is missing (where sgn(σ) is the signature of the permutation
σ, that is, the parity of the number of transpositions into which σ can be factored). Such
an expression is called a permanent. It is easily checked that this expression is symmetric
w.r.t. the ui’s and also w.r.t. the vj∗. For any fixed (v1∗, . . . , vn∗) ∈ (E∗)^n, we get a symmetric multilinear map,

l_{v1∗,...,vn∗} : (u1, . . . , un) ↦ Σ_{σ∈Sn} vσ(1)∗(u1) · · · vσ(n)∗(un),

from E^n to K. The map l_{v1∗,...,vn∗} extends uniquely to a linear map, L_{v1∗,...,vn∗} : Symn(E) → K.
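The permanent can be computed directly from its defining sum over Sn; the sketch below also computes the signed sum for comparison with the determinant (the helper names are my own, not from the text):

```python
import numpy as np
from itertools import permutations

def permanent(m):
    """Sum over all permutations of products of entries, without signs."""
    n = m.shape[0]
    return sum(np.prod([m[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def determinant_by_sum(m):
    """The same sum, with sgn(s); included for comparison with np.linalg.det."""
    n = m.shape[0]
    total = 0.0
    for s in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i) if s[j] > s[i])
        total += (-1) ** inversions * np.prod([m[i, s[i]] for i in range(n)])
    return total

m = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(permanent(m), 1 * 4 + 2 * 3)              # 10, no signs
assert np.isclose(determinant_by_sum(m), np.linalg.det(m))  # -2, with signs
```

The two helpers differ only in the factor sgn(σ), which is exactly the difference between the pairing above and the determinant.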
We also have the symmetric multilinear map,

(v1∗, . . . , vn∗) ↦ L_{v1∗,...,vn∗},

from (E∗)^n to Hom(Symn(E), K), which extends to a linear map, L, from Symn(E∗) to Hom(Symn(E), K). However, in view of the isomorphism,

Hom(U ⊗ V, W) ≅ Hom(U, Hom(V, W)),

we can view L as a pairing between Symn(E∗) and Symn(E). Now, this pairing is nondegenerate. This can be shown using bases and we leave it as an exercise to the reader (see Knapp [89], Appendix A). Therefore, we get a canonical isomorphism,
(Symn(E))∗ ≅ Symn(E∗).

Since we also have the canonical isomorphism

(Symn(E))∗ ≅ Sn(E; K),

we get a canonical isomorphism

Symn(E∗) ≅ Sn(E; K).
Remark: The isomorphism, µ : Symn(E∗) ≅ Sn(E; K), discussed above can be described explicitly as the linear extension of the map given by

µ(v1∗ ⊙ · · · ⊙ vn∗)(u1, . . . , un) = Σ_{σ∈Sn} vσ(1)∗(u1) · · · vσ(n)∗(un).
σ · z = z, for all σ ∈ Sn
As the right hand side is clearly symmetric, we get a linear map, ι : Symn (E) → E ⊗n .
Clearly, ι(Symn (E)) is the set of symmetrized tensors in E ⊗n . If we consider the map,
S = ι ◦ π : E⊗n −→ E⊗n, it is easy to check that S ◦ S = S. Therefore, S is a projection and by linear algebra, we know that

E⊗n = Im S ⊕ Ker S.

It turns out that Ker S = E⊗n ∩ I = Ker π, where I is the two-sided ideal of T(E) generated
by all tensors of the form u ⊗ v − v ⊗ u ∈ E ⊗2 (for example, see Knapp [89], Appendix A).
Therefore, ι is injective,
and the symmetric tensor power, Symn (E), is naturally embedded into E ⊗n .
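The idempotence S ◦ S = S is easy to test numerically. The sketch below assumes the symmetrization is normalized by 1/n! (which is what makes it a projection in characteristic 0) and symmetrizes an order-3 array by averaging over permutations of its slots:

```python
import numpy as np
from itertools import permutations

def symmetrize(t):
    """Average t over all permutations of its tensor slots (1/n! normalization)."""
    perms = list(permutations(range(t.ndim)))
    return sum(np.transpose(t, p) for p in perms) / len(perms)

rng = np.random.default_rng(5)
z = rng.normal(size=(3, 3, 3))            # a 3-tensor over V = R^3
S_z = symmetrize(z)
assert np.allclose(symmetrize(S_z), S_z)              # S ∘ S = S
assert np.allclose(S_z, np.transpose(S_z, (1, 0, 2))) # the image is symmetric
```

The image of S consists exactly of the symmetrized tensors, matching the embedding of Symⁿ(E) into E⊗n described above.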
called the symmetric tensor algebra of V . We could adapt what we did in Section 22.5 for
general tensor powers to symmetric tensors but since we already have the algebra, T (V ),
we can proceed faster. If I is the two-sided ideal generated by all tensors of the form
u ⊗ v − v ⊗ u ∈ V ⊗2 , we set
Sym• (V ) = T (V )/I.
Then, Sym•(V) automatically inherits a multiplication operation which is commutative, and since T(V) is graded, that is,

T(V) = ⊕_{m≥0} V⊗m,

we have

Sym•(V) = ⊕_{m≥0} V⊗m/(I ∩ V⊗m).

However, it is easy to check that

Symm(V) ≅ V⊗m/(I ∩ V⊗m),

so

Sym•(V) ≅ Sym(V).
When V is of finite dimension, n, Sym(V) corresponds to the algebra of polynomials with coefficients in K in n variables (this can be seen from Proposition 22.13). When V is of infinite dimension and (ui)i∈I is a basis of V, the algebra, Sym(V), corresponds to the algebra of polynomials in infinitely many variables indexed by I. What’s nice about the symmetric
tensor algebra, Sym(V ), is that it provides an intrinsic definition of a polynomial algebra in
any set, I, of variables.
It is also easy to see that Sym(V ) satisfies the following universal mapping property:
Proposition 22.14 Given any commutative K-algebra, A, for any linear map, f : V → A, there is a unique K-algebra homomorphism, f̄ : Sym(V) → A, so that

f = f̄ ◦ i.
The answer is yes! The solution is to define this multiplication such that, for f ∈ Sm (E, K)
and g ∈ Sn (E, K),
(f · g)(u1, . . . , um+n) = Σ_{σ∈shuffle(m,n)} f(uσ(1), . . . , uσ(m)) g(uσ(m+1), . . . , uσ(m+n)),

where shuffle(m, n) consists of all (m, n)-“shuffles”, that is, permutations, σ, of {1, . . . , m + n}, such that σ(1) < · · · < σ(m) and σ(m + 1) < · · · < σ(m + n). We urge the reader to check
this fact.
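An (m, n)-shuffle is determined by choosing which m of the m + n positions receive the first block, so there are (m+n choose m) of them. A sketch enumerating them (the helper `shuffles` is hypothetical, not from the text; indices start at 0):

```python
import math
from itertools import combinations

def shuffles(m, n):
    """All permutations of {0,...,m+n-1} increasing on the first m slots
    and on the last n slots, each determined by the image of the first block."""
    result = []
    for first in combinations(range(m + n), m):       # positions of sigma(0..m-1)
        rest = [p for p in range(m + n) if p not in first]
        result.append(list(first) + rest)             # sigma as a list
    return result

assert len(shuffles(2, 3)) == math.comb(5, 2)         # C(m+n, m) shuffles
for s in shuffles(2, 3):
    assert s[0] < s[1] and s[2] < s[3] < s[4]         # increasing on both blocks
    assert sorted(s) == list(range(5))                # each is a permutation
```

Summing over shuffles instead of all of Sm+n avoids the redundant terms that the symmetry of f and g would otherwise repeat m!·n! times.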
Another useful canonical isomorphism (of K-algebras) is

Sym(E ⊕ F) ≅ Sym(E) ⊗ Sym(F).
f(. . . , ui, . . . , uj, . . .) = 0 whenever ui = uj.
Moreover, if our field, K, has characteristic different from 2, then every skew-symmetric
multilinear map is alternating.
However, by (ii),

f(u1, . . . , un) = sgn(σ)f(uσ(1), . . . , uσ(n)) = 0.

Now, when f is skew-symmetric, if σ is the transposition swapping the two equal arguments ui and ui+1 = ui, as sgn(σ) = −1, we get

f(. . . , ui, ui, . . .) = −f(. . . , ui, ui, . . .),
so that
2f (. . . , ui , ui , . . .) = 0,
and in every characteristic except 2, we conclude that f (. . . , ui , ui , . . .) = 0, namely, f is
alternating.
Proposition 22.15 shows that in every characteristic except 2, alternating and skew-
symmetric multilinear maps are identical. Using Proposition 22.15 we easily deduce the
following crucial fact:
Proposition 22.16 Let f : E n → F be an alternating multilinear map. For any families of
vectors, (u1 , . . . , un ) and (v1 , . . . , vn ), with ui , vi ∈ E, if
vj = Σ_{i=1}^n aij ui,   1 ≤ j ≤ n,

then

f(v1, . . . , vn) = ( Σ_{σ∈Sn} sgn(σ) aσ(1),1 · · · aσ(n),n ) f(u1, . . . , un) = det(A) f(u1, . . . , un),

where A is the n × n matrix A = (aij).
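Taking f = det (an alternating multilinear map of the columns), this identity is just multiplicativity of the determinant, since vj = Σᵢ aij ui says V = UA column by column. A quick numerical check with made-up matrices:

```python
import numpy as np

# f = det of the columns; then f(v_1,...,v_n) = det(A) f(u_1,...,u_n)
# is det(U A) = det(A) det(U).
rng = np.random.default_rng(6)
U = rng.normal(size=(4, 4))               # columns u_1, ..., u_4
A = rng.normal(size=(4, 4))               # coefficients a_ij
V = U @ A                                 # column j of V is sum_i a_ij u_i

assert np.isclose(np.linalg.det(V), np.linalg.det(A) * np.linalg.det(U))
```

The general proposition says every alternating multilinear map transforms with this same det(A) factor, which is why det is essentially the unique alternating n-form up to scale.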
First, we show that any two n-th exterior tensor powers (A1 , ϕ1 ) and (A2 , ϕ2 ) for E, are
isomorphic.
Proposition 22.17 Given any two n-th exterior tensor powers (A1 , ϕ1 ) and (A2 , ϕ2 ) for E,
there is an isomorphism h : A1 → A2 such that
ϕ2 = h ◦ ϕ1 .
Proof. Replace tensor product by n-th exterior tensor power in the proof of Proposition 22.4.
We now give a construction that produces an n-th exterior tensor power of a vector space
E.
Theorem 22.18 Given a vector space E, an n-th exterior tensor power (⋀n(E), ϕ) for E can be constructed (n ≥ 1). Furthermore, denoting ϕ(u1, . . . , un) as u1 ∧ · · · ∧ un, the exterior tensor power ⋀n(E) is generated by the vectors u1 ∧ · · · ∧ un, where u1, . . . , un ∈ E, and for every alternating multilinear map f : E^n → F, the unique linear map f∧ : ⋀n(E) → F such that f = f∧ ◦ ϕ, is defined by

f∧(u1 ∧ · · · ∧ un) = f(u1, . . . , un),

on the generators u1 ∧ · · · ∧ un of ⋀n(E).
Proof sketch. We can give a quick proof using the tensor algebra, T(E). Let Ia be the two-sided ideal of T(E) generated by all tensors of the form u ⊗ u ∈ E⊗2. Then, let

⋀n(E) = E⊗n/(Ia ∩ E⊗n)

and let π : E⊗n → ⋀n(E) be the projection. If we let u1 ∧ · · · ∧ un = π(u1 ⊗ · · · ⊗ un), it is easy to check that (⋀n(E), ∧) satisfies the conditions of Theorem 22.18.
the exterior algebra of E. This is the skew-symmetric counterpart of Sym(E) and we will
study it a little later.
� �
For simplicity of notation, we may write� n E for n (E). We also abbreviate � “exterior
tensor power” as “exterior power”. Clearly, 1 ∼
(E) = E and it is convenient to set 0 (E) =
K.
The fact that the map $\varphi : E^n \to \bigwedge^n(E)$ is alternating and multilinear can also be expressed as follows:
$$u_1 \wedge \cdots \wedge (u_i + v_i) \wedge \cdots \wedge u_n = (u_1 \wedge \cdots \wedge u_i \wedge \cdots \wedge u_n) + (u_1 \wedge \cdots \wedge v_i \wedge \cdots \wedge u_n),$$
$$u_1 \wedge \cdots \wedge (\lambda u_i) \wedge \cdots \wedge u_n = \lambda\,(u_1 \wedge \cdots \wedge u_i \wedge \cdots \wedge u_n),$$
$$u_{\sigma(1)} \wedge \cdots \wedge u_{\sigma(n)} = \mathrm{sgn}(\sigma)\; u_1 \wedge \cdots \wedge u_n,$$
for all $\sigma \in S_n$.
Theorem 22.18 yields a canonical isomorphism
$$\mathrm{Hom}\Big(\bigwedge\nolimits^n(E), F\Big) \cong \mathrm{Alt}^n(E; F)$$
between the vector space of linear maps $\mathrm{Hom}(\bigwedge^n(E), F)$ and the vector space of alternating multilinear maps $\mathrm{Alt}^n(E; F)$, via the linear map $- \circ \varphi$ defined by
$$h \mapsto h \circ \varphi,$$
where $h \in \mathrm{Hom}(\bigwedge^n(E), F)$. In particular, when $F = K$, we get a canonical isomorphism
$$\Big(\bigwedge\nolimits^n(E)\Big)^* \cong \mathrm{Alt}^n(E; K).$$
Tensors $\alpha \in \bigwedge^n(E)$ are called alternating n-tensors or alternating tensors of degree n and we write $\deg(\alpha) = n$. Tensors of the form $u_1 \wedge \cdots \wedge u_n$, where $u_i \in E$, are called simple (or decomposable) alternating n-tensors. Those alternating n-tensors that are not simple are often called compound alternating n-tensors. Simple tensors $u_1 \wedge \cdots \wedge u_n \in \bigwedge^n(E)$ are also called n-vectors, and tensors in $\bigwedge^n(E^*)$ are often called (alternating) n-forms.
Given two linear maps $f : E \to E'$ and $g : E \to E'$, we can define $h : E \times E \to \bigwedge^2(E')$ by
$$h(u, v) = f(u) \wedge g(v).$$
It is immediately verified that h is alternating bilinear, and thus it induces a unique linear map
$$f \wedge g : \bigwedge\nolimits^2(E) \to \bigwedge\nolimits^2(E'),$$
such that
$$(f \wedge g)(u \wedge v) = f(u) \wedge g(v).$$
Proposition 22.19 Given any vector space E, if E has finite dimension $d = \dim(E)$, then for all $n > d$, the exterior power $\bigwedge^n(E)$ is trivial, that is, $\bigwedge^n(E) = (0)$. Otherwise, for every ordered basis $((u_i)_{i \in \Sigma}, \le)$, the family $(u_I)$ is a basis of $\bigwedge^n(E)$, where I ranges over the finite nonempty subsets of $\Sigma$ of size $|I| = n$.
Proof. First, assume that E has finite dimension $d = \dim(E)$ and that $n > d$. We know that $\bigwedge^n(E)$ is generated by the tensors of the form $v_1 \wedge \cdots \wedge v_n$, with $v_i \in E$. If $u_1, \ldots, u_d$ is a basis of E, as every $v_i$ is a linear combination of the $u_j$, when we expand $v_1 \wedge \cdots \wedge v_n$ using multilinearity, we get a linear combination of the form
$$v_1 \wedge \cdots \wedge v_n = \sum_{(j_1, \ldots, j_n)} \lambda_{(j_1, \ldots, j_n)}\; u_{j_1} \wedge \cdots \wedge u_{j_n},$$
where each $(j_1, \ldots, j_n)$ is some sequence of integers $j_k \in \{1, \ldots, d\}$. As $n > d$, each sequence $(j_1, \ldots, j_n)$ must contain two identical elements. By alternation, $u_{j_1} \wedge \cdots \wedge u_{j_n} = 0$, and so $v_1 \wedge \cdots \wedge v_n = 0$. It follows that $\bigwedge^n(E) = (0)$.
Now, assume that either $\dim(E) = d$ and $n \le d$, or that E is infinite dimensional. The argument below shows that the $u_I$ are nonzero and linearly independent. As usual, let $u_i^* \in E^*$ be the linear form given by
$$u_i^*(u_j) = \delta_{ij}.$$
For any nonempty subset $I = \{i_1, \ldots, i_n\} \subseteq \Sigma$, with $i_1 < \cdots < i_n$, let $l_I$ be the map given by
$$l_I(v_1, \ldots, v_n) = \det\big(u_{i_j}^*(v_k)\big),$$
for all $v_k \in E$. As $l_I$ is alternating multilinear, it induces a linear map $L_I : \bigwedge^n(E) \to K$. Observe that for any nonempty finite subset $J \subseteq \Sigma$ with $|J| = n$, we have
$$L_I(u_J) = \begin{cases} 1 & \text{if } I = J \\ 0 & \text{if } I \ne J. \end{cases}$$
Note that when $\dim(E) = d$ and $n \le d$, the forms $u_{i_1}^*, \ldots, u_{i_n}^*$ are all distinct, so the above does hold. Since $L_I(u_I) = 1$, we conclude that $u_I \ne 0$. Now, if we have a linear combination
$$\sum_I \lambda_I u_I = 0,$$
where the above sum is finite and involves nonempty finite subsets $I \subseteq \Sigma$ with $|I| = n$, for every such I, when we apply $L_I$ we get
$$\lambda_I = 0,$$
proving linear independence.
As a corollary, if E is finite dimensional, say $\dim(E) = d$, and if $1 \le n \le d$, then we have
$$\dim\Big(\bigwedge\nolimits^n(E)\Big) = \binom{d}{n},$$
and if $n > d$, then $\dim(\bigwedge^n(E)) = 0$.
Remark: When $n = 0$, if we set $u_\emptyset = 1$, then $(u_\emptyset) = (1)$ is a basis of $\bigwedge^0(V) = K$.
It follows from Proposition 22.19 that the family $(u_I)_I$, where $I \subseteq \Sigma$ ranges over finite subsets of $\Sigma$, is a basis of $\bigwedge(V) = \bigoplus_{n \ge 0} \bigwedge^n(V)$.
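These dimension counts are easy to confirm by enumerating the basis elements $e_I$ directly. A small sketch (the parameters d, n are chosen arbitrarily):

```python
from itertools import combinations
from math import comb

d, n = 5, 3
# basis of the n-th exterior power of K^d: one wedge e_I per increasing index set I of size n
basis = list(combinations(range(1, d + 1), n))
assert len(basis) == comb(d, n)
# summing over all degrees recovers the dimension of the full exterior algebra: 2^d
assert sum(comb(d, k) for k in range(d + 1)) == 2 ** d
```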
As a corollary of Proposition 22.19 we obtain the following useful criterion for linear
independence:
Proposition 22.20 For any vector space E, the vectors $u_1, \ldots, u_n \in E$ are linearly independent iff $u_1 \wedge \cdots \wedge u_n \ne 0$.
Proof. If $u_1 \wedge \cdots \wedge u_n \ne 0$, then $u_1, \ldots, u_n$ must be linearly independent. Otherwise, some $u_i$ would be a linear combination of the other $u_j$'s (with $j \ne i$), and then, as in the proof of Proposition 22.19, $u_1 \wedge \cdots \wedge u_n$ would be a linear combination of wedges in which two vectors are identical, and thus zero.
Conversely, assume that $u_1, \ldots, u_n$ are linearly independent. Then we have the linear forms $u_i^* \in E^*$ such that
$$u_i^*(u_j) = \delta_{i,j}, \qquad 1 \le i, j \le n.$$
As in the proof of Proposition 22.19, we have a linear map $L_{u_1, \ldots, u_n} : \bigwedge^n(E) \to K$ given by
$$L_{u_1, \ldots, u_n}(v_1 \wedge \cdots \wedge v_n) = \det\big(u_j^*(v_i)\big),$$
for all $v_1 \wedge \cdots \wedge v_n \in \bigwedge^n(E)$. As
$$L_{u_1, \ldots, u_n}(u_1 \wedge \cdots \wedge u_n) = 1,$$
we conclude that $u_1 \wedge \cdots \wedge u_n \ne 0$.
Proposition 22.20 shows that, geometrically, every nonzero wedge, u1 ∧ · · · ∧ un , corre-
sponds to some oriented version of an n-dimensional subspace of E.
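In coordinates, the components of $u_1 \wedge \cdots \wedge u_p$ in the basis $(e_I)$ are the $p \times p$ minors of the matrix whose columns are the $u_j$, one minor per increasing index set I; the vectors are independent iff some minor is nonzero. The following sketch (our own helper names) illustrates Proposition 22.20 in $\mathbb{R}^4$:

```python
import itertools

def det(M):
    # cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def wedge_coords(vectors):
    # coordinates of u_1 ∧ ... ∧ u_p in the basis (e_I): the p x p minors
    p, d = len(vectors), len(vectors[0])
    return {I: det([[vectors[j][i] for j in range(p)] for i in I])
            for I in itertools.combinations(range(d), p)}

u1, u2 = [1, 0, 2, 0], [0, 1, 1, 0]
dependent = [u1, u2, [1, 1, 3, 0]]       # third vector = u1 + u2
independent = [u1, u2, [0, 0, 0, 1]]

assert all(c == 0 for c in wedge_coords(dependent).values())    # u1 ∧ u2 ∧ (u1+u2) = 0
assert any(c != 0 for c in wedge_coords(independent).values())  # the wedge is nonzero
```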
Next, consider the map
$$(E^*)^n \times E^n \longrightarrow K,$$
given by
$$(v_1^*, \ldots, v_n^*, u_1, \ldots, u_n) \mapsto \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma)\; v_{\sigma(1)}^*(u_1) \cdots v_{\sigma(n)}^*(u_n) = \det\big(v_j^*(u_i)\big).$$
It is easily checked that this expression is alternating w.r.t. the $u_i$'s and also w.r.t. the $v_j^*$'s. For any fixed $(v_1^*, \ldots, v_n^*) \in (E^*)^n$, we get an alternating multilinear map
$$(u_1, \ldots, u_n) \mapsto \det\big(v_j^*(u_i)\big)$$
from $E^n$ to K. By the argument used in the symmetric case, we get a bilinear map
$$\bigwedge\nolimits^n(E^*) \times \bigwedge\nolimits^n(E) \longrightarrow K.$$
Now, this pairing is nondegenerate. This can be shown using bases and we leave it as an exercise to the reader. Therefore, we get a canonical isomorphism
$$\Big(\bigwedge\nolimits^n(E)\Big)^* \cong \bigwedge\nolimits^n(E^*).$$
Remark: Variants of our isomorphism $\mu$ are found in the literature. For example, there is a version $\mu'$, where
$$\mu' = \frac{1}{n!}\, \mu,$$
with the factor $\frac{1}{n!}$ added in front of the determinant. Each version has its own merits and drawbacks. Morita [114] uses $\mu'$ because it is more convenient than $\mu$ when dealing with characteristic classes. On the other hand, when using $\mu'$, some extra factor is needed in defining the wedge operation of alternating multilinear forms (see Section 22.15) and for exterior differentiation. The version $\mu$ is the one adopted by Warner [147], Knapp [89], Fulton and Harris [57], and Cartan [29, 30].
If $f : E \to F$ is any linear map, by transposition we get a linear map $f^\top : F^* \to E^*$ given by
$$f^\top(v^*) = v^* \circ f, \qquad v^* \in F^*.$$
Consequently, we have
as claimed.
The map $\bigwedge^p f^\top$ is often denoted $f^*$, although this is an ambiguous notation since p is dropped. Proposition 22.21 gives us the behavior of $f^*$ under the identification of $\bigwedge^p E^*$ and $\mathrm{Alt}^p(E; K)$ via the isomorphism $\mu$.
As in the case of symmetric powers, the map from $E^n$ to $\bigwedge^n(E)$ given by $(u_1, \ldots, u_n) \mapsto u_1 \wedge \cdots \wedge u_n$ yields a surjection $\pi : E^{\otimes n} \to \bigwedge^n(E)$. Now, this map has some section, so there is some injection $\iota : \bigwedge^n(E) \to E^{\otimes n}$ with $\pi \circ \iota = \mathrm{id}$. If our field K has characteristic 0, then there is a special section having a natural definition involving an antisymmetrization process.
Recall that we have a left action of the symmetric group $S_n$ on $E^{\otimes n}$. The tensors $z \in E^{\otimes n}$ such that
$$\sigma \cdot z = \mathrm{sgn}(\sigma)\, z, \qquad \text{for all } \sigma \in S_n,$$
are called antisymmetrized tensors. We define the map $\iota : E^n \to E^{\otimes n}$ by
$$\iota(u_1, \ldots, u_n) = \frac{1}{n!} \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma)\; u_{\sigma(1)} \otimes \cdots \otimes u_{\sigma(n)}.$$
As the right hand side is clearly an alternating map, we get a linear map $\iota : \bigwedge^n(E) \to E^{\otimes n}$. Clearly, $\iota(\bigwedge^n(E))$ is the set of antisymmetrized tensors in $E^{\otimes n}$. If we consider the map $A = \iota \circ \pi : E^{\otimes n} \to E^{\otimes n}$, it is easy to check that $A \circ A = A$. Therefore, A is a projection and, by linear algebra, we know that
$$E^{\otimes n} = A(E^{\otimes n}) \oplus \mathrm{Ker}\, A = \iota\Big(\bigwedge\nolimits^n(E)\Big) \oplus \mathrm{Ker}\, A.$$
It turns out that $\mathrm{Ker}\, A = E^{\otimes n} \cap I_a = \mathrm{Ker}\, \pi$, where $I_a$ is the two-sided ideal of T(E) generated by all tensors of the form $u \otimes u \in E^{\otimes 2}$ (for example, see Knapp [89], Appendix A). Therefore, $\iota$ is injective,
$$E^{\otimes n} = \iota\Big(\bigwedge\nolimits^n(E)\Big) \oplus (E^{\otimes n} \cap I_a) = \iota\Big(\bigwedge\nolimits^n(E)\Big) \oplus \mathrm{Ker}\, \pi,$$
and the exterior tensor power $\bigwedge^n(E)$ is naturally embedded into $E^{\otimes n}$.
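The projection property $A \circ A = A$ is easy to test on a random tensor, representing an order-n tensor by its coordinate function on index tuples. This is only a finite-dimensional sketch with our own helper names (exact rational arithmetic avoids rounding issues):

```python
import itertools
import random
from fractions import Fraction
from math import factorial

def perm_sign(p):
    return (-1) ** sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def antisymmetrize(T, n, d):
    # (A T)(i_1,...,i_n) = (1/n!) sum_sigma sgn(sigma) T(i_sigma(1),...,i_sigma(n))
    scale = Fraction(1, factorial(n))
    return {idx: scale * sum(perm_sign(p) * T[tuple(idx[k] for k in p)]
                             for p in itertools.permutations(range(n)))
            for idx in itertools.product(range(d), repeat=n)}

random.seed(1)
n, d = 3, 3
T = {idx: Fraction(random.randint(-5, 5))
     for idx in itertools.product(range(d), repeat=n)}

AT = antisymmetrize(T, n, d)
assert antisymmetrize(AT, n, d) == AT        # A is a projection: A ∘ A = A
# antisymmetrized tensors change sign under a transposition of two slots
assert all(AT[(a, b, c)] == -AT[(b, a, c)]
           for a in range(d) for b in range(d) for c in range(d))
```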
The direct sum $\bigwedge(V) = \bigoplus_{m \ge 0} \bigwedge^m(V)$ is called the exterior algebra (or Grassmann algebra) of V. We mimic the procedure used for symmetric powers. If $I_a$ is the two-sided ideal generated by all tensors of the form $u \otimes u \in V^{\otimes 2}$, we set
$$\bigwedge\nolimits^\bullet(V) = T(V)/I_a.$$
Then, $\bigwedge^\bullet(V)$ automatically inherits a multiplication operation, called wedge product, and since T(V) is graded, that is,
$$T(V) = \bigoplus_{m \ge 0} V^{\otimes m},$$
we have
$$\bigwedge\nolimits^\bullet(V) = \bigoplus_{m \ge 0} V^{\otimes m}/(I_a \cap V^{\otimes m}),$$
so
$$\bigwedge\nolimits^\bullet(V) \cong \bigwedge(V).$$
When V has finite dimension d, we actually have a finite coproduct
$$\bigwedge(V) = \bigoplus_{m=0}^{d} \bigwedge\nolimits^m(V),$$
and since each $\bigwedge^m(V)$ has dimension $\binom{d}{m}$, we deduce that
$$\dim\Big(\bigwedge(V)\Big) = 2^d = 2^{\dim(V)}.$$
The multiplication $\wedge : \bigwedge^m(V) \times \bigwedge^n(V) \to \bigwedge^{m+n}(V)$ is skew-symmetric in the following precise sense:
Proposition 22.22 For all $\alpha \in \bigwedge^m(V)$ and all $\beta \in \bigwedge^n(V)$, we have
$$\beta \wedge \alpha = (-1)^{mn}\, \alpha \wedge \beta.$$
where shuffle(m, n) consists of all (m, n)-"shuffles", that is, permutations $\sigma$ of $\{1, \ldots, m+n\}$ such that $\sigma(1) < \cdots < \sigma(m)$ and $\sigma(m+1) < \cdots < \sigma(m+n)$. For example, when $m = n = 1$, we have
$$(f \wedge g)(u, v) = f(u)g(v) - g(u)f(v).$$
When $m = 1$ and $n \ge 2$, check that
$$(f \wedge g)(u_1, \ldots, u_{n+1}) = \sum_{i=1}^{n+1} (-1)^{i-1} f(u_i)\, g(u_1, \ldots, \widehat{u_i}, \ldots, u_{n+1}),$$
where the hat over the argument $u_i$ means that it should be omitted.
As a result of all this, the coproduct
$$\mathrm{Alt}(E) = \bigoplus_{n \ge 0} \mathrm{Alt}^n(E; K)$$
is an algebra under the above multiplication, and this algebra is isomorphic to $\bigwedge(E^*)$. For the record, we state
Proposition 22.24 When E is finite dimensional, the maps $\mu : \bigwedge^n(E^*) \to \mathrm{Alt}^n(E; K)$ induced by the linear extensions of the maps given by
$$\mu(v_1^* \wedge \cdots \wedge v_n^*)(u_1, \ldots, u_n) = \det\big(v_j^*(u_i)\big)$$
yield a canonical isomorphism of algebras $\mu : \bigwedge(E^*) \to \mathrm{Alt}(E)$, where the multiplication in $\mathrm{Alt}(E)$ is given by, for $f \in \mathrm{Alt}^m(E; K)$ and $g \in \mathrm{Alt}^n(E; K)$,
$$(f \wedge g)(u_1, \ldots, u_{m+n}) = \sum_{\sigma \in \mathrm{shuffle}(m,n)} \mathrm{sgn}(\sigma)\; f(u_{\sigma(1)}, \ldots, u_{\sigma(m)})\, g(u_{\sigma(m+1)}, \ldots, u_{\sigma(m+n)}),$$
where shuffle(m, n) consists of all (m, n)-"shuffles", that is, permutations $\sigma$ of $\{1, \ldots, m+n\}$ such that $\sigma(1) < \cdots < \sigma(m)$ and $\sigma(m+1) < \cdots < \sigma(m+n)$.
Remark: The algebra $\bigwedge(E)$ is a graded algebra. Given two graded algebras E and F, we can make a new tensor product $E \,\widehat{\otimes}\, F$, where $E \,\widehat{\otimes}\, F$ is equal to $E \otimes F$ as a vector space, but with a skew-commutative multiplication given by
$$(a \otimes b) \cdot (c \otimes d) = (-1)^{\deg(b)\deg(c)}\, (ac) \otimes (bd),$$
for homogeneous elements a, c of E and b, d of F.
Let V be any Euclidean vector space of dimension n, and let k be any integer with $0 \le k \le n$. If $\langle -, - \rangle$ denotes the inner product on V, we define an inner product on $\bigwedge^k V$, also denoted $\langle -, - \rangle$, by setting
$$\langle u_1 \wedge \cdots \wedge u_k,\; v_1 \wedge \cdots \wedge v_k \rangle = \det\big(\langle u_i, v_j \rangle\big),$$
for all $u_i, v_i \in V$, and extending $\langle -, - \rangle$ by bilinearity.
It is easy to show that if $(e_1, \ldots, e_n)$ is an orthonormal basis of V, then the basis of $\bigwedge^k V$ consisting of the $e_I$ (where $I = \{i_1, \ldots, i_k\}$, with $1 \le i_1 < \cdots < i_k \le n$) is an orthonormal basis of $\bigwedge^k V$. Since the inner product on V induces an inner product on $V^*$ (recall that $\langle \omega_1, \omega_2 \rangle = \langle \omega_1^\sharp, \omega_2^\sharp \rangle$, for all $\omega_1, \omega_2 \in V^*$), we also get an inner product on $\bigwedge^k V^*$.
If, in addition, V is oriented, we define a linear map $* : \bigwedge^k V \to \bigwedge^{n-k} V$, called the Hodge ∗-operator, as follows: For any choice of a positively oriented orthonormal basis $(e_1, \ldots, e_n)$ of V, set
$$*(e_1 \wedge \cdots \wedge e_k) = e_{k+1} \wedge \cdots \wedge e_n.$$
In particular,
$$*(1) = e_1 \wedge \cdots \wedge e_n \qquad \text{and} \qquad *(e_1 \wedge \cdots \wedge e_n) = 1.$$
It is easy to see that the definition of ∗ does not depend on the choice of positively oriented orthonormal basis.
The Hodge ∗-operators $* : \bigwedge^k V \to \bigwedge^{n-k} V$ induce a linear bijection $* : \bigwedge(V) \to \bigwedge(V)$. We also have Hodge ∗-operators $* : \bigwedge^k V^* \to \bigwedge^{n-k} V^*$.
The following proposition is easy to show:
Proposition 22.25 If V is any oriented vector space of dimension n, then for every k with $0 \le k \le n$, we have
(i) $** = (-\mathrm{id})^{k(n-k)}$.
(ii) $\langle x, y \rangle = *(x \wedge *y) = *(y \wedge *x)$, for all $x, y \in \bigwedge^k V$.
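On basis elements, ∗ sends $e_I$ to $\pm\, e_{I^c}$, the sign being that of the permutation listing I followed by its complement; property (i) can then be checked exhaustively for a small n. A sketch with our own encoding (0-based indices):

```python
import itertools

def perm_sign(seq):
    # sign of the permutation sending (0,...,n-1) to seq, via its inversion count
    return (-1) ** sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
                       if seq[i] > seq[j])

def hodge(I, n):
    # *e_I = sign * e_{I^c}, where the sign is that of the permutation (I, I^c)
    Ic = tuple(i for i in range(n) if i not in I)
    return perm_sign(I + Ic), Ic

n = 5
for k in range(n + 1):
    for I in itertools.combinations(range(n), k):
        s1, Ic = hodge(I, n)
        s2, I2 = hodge(Ic, n)
        assert I2 == I and s1 * s2 == (-1) ** (k * (n - k))   # Proposition 22.25 (i)
```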
We now introduce two interior products, the left hook
$$\lrcorner : \bigwedge\nolimits^p E \times \bigwedge\nolimits^{p+q} E^* \longrightarrow \bigwedge\nolimits^q E^* \qquad \text{(left hook)},$$
and
$$\llcorner : \bigwedge\nolimits^{p+q} E^* \times \bigwedge\nolimits^p E \longrightarrow \bigwedge\nolimits^q E^* \qquad \text{(right hook)},$$
as well as the versions obtained by replacing E by $E^*$ and $E^{**}$ by E. We begin with the left interior product or left hook, $\lrcorner$.
Let $u \in \bigwedge^p E$. For any q such that $p + q \le n$, multiplication on the right by u is a linear map
$$\wedge_R(u) : \bigwedge\nolimits^q E \longrightarrow \bigwedge\nolimits^{p+q} E,$$
given by
$$v \mapsto v \wedge u,$$
where $v \in \bigwedge^q E$. The transpose of $\wedge_R(u)$ yields a linear map
$$(\wedge_R(u))^\top : \Big(\bigwedge\nolimits^{p+q} E\Big)^* \longrightarrow \Big(\bigwedge\nolimits^{q} E\Big)^*,$$
which, using the isomorphisms $(\bigwedge^{p+q} E)^* \cong \bigwedge^{p+q} E^*$ and $(\bigwedge^{q} E)^* \cong \bigwedge^{q} E^*$, can be viewed as a map
$$(\wedge_R(u))^\top : \bigwedge\nolimits^{p+q} E^* \longrightarrow \bigwedge\nolimits^{q} E^*,$$
given by
$$z^* \mapsto z^* \circ \wedge_R(u),$$
where $z^* \in \bigwedge^{p+q} E^*$. We denote $z^* \circ \wedge_R(u)$ by
$$u \,\lrcorner\, z^*.$$
In terms of our pairing, the q-vector $u \,\lrcorner\, z^*$ is uniquely defined by
$$\langle u \,\lrcorner\, z^*, v \rangle = \langle z^*, v \wedge u \rangle, \qquad \text{for all } u \in \bigwedge\nolimits^p E,\; v \in \bigwedge\nolimits^q E \text{ and } z^* \in \bigwedge\nolimits^{p+q} E^*.$$
It is immediately verified that
$$(u \wedge v) \,\lrcorner\, z^* = u \,\lrcorner\, (v \,\lrcorner\, z^*).$$
In order to proceed any further, we need some combinatorial properties of the basis of $\bigwedge^p E$ constructed from a basis $(e_1, \ldots, e_n)$ of E. Recall that for any (nonempty) subset $I = \{i_1, \ldots, i_p\} \subseteq \{1, \ldots, n\}$ with $i_1 < \cdots < i_p$, we let
$$e_I = e_{i_1} \wedge \cdots \wedge e_{i_p}.$$
For any two disjoint subsets $H, L \subseteq \{1, \ldots, n\}$, let
$$\rho_{H,L} = (-1)^\nu, \qquad \text{where } \nu = |\{(h, l) \in H \times L \mid h > l\}|.$$
Proposition 22.26 For any basis $(e_1, \ldots, e_n)$ of E, the following properties hold: If $H \cap L = \emptyset$, then
$$e_H \wedge e_L = \rho_{H,L}\, e_{H \cup L},$$
and we have
$$e_H \,\lrcorner\, e_L^* = \begin{cases} 0 & \text{if } H \not\subseteq L \\ \rho_{L-H,H}\, e_{L-H}^* & \text{if } H \subseteq L. \end{cases}$$
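Both formulas can be verified mechanically on basis elements: represent $e_H$ by its increasing index tuple, compute $e_H \wedge e_L$ by counting inversions, and recover the hook from its defining adjunction $\langle e_H \,\lrcorner\, e_L^*, e_K \rangle = \langle e_L^*, e_K \wedge e_H \rangle$. A sketch with our own encoding (0-based indices):

```python
import itertools

def perm_sign(seq):
    return (-1) ** sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
                       if seq[i] > seq[j])

def rho(H, L):
    # rho_{H,L} = (-1)^nu with nu = #{(h,l) in H x L : h > l}
    return (-1) ** sum(1 for h in H for l in L if h > l)

def wedge(H, L):
    # e_H ∧ e_L = sign * e_{H ∪ L}, or 0 when H and L overlap
    if set(H) & set(L):
        return 0, ()
    return perm_sign(H + L), tuple(sorted(H + L))

def hook(H, L, n):
    # e_H ⌟ e*_L, computed from <e_H ⌟ e*_L, e_K> = <e*_L, e_K ∧ e_H>
    q = len(L) - len(H)
    return {K: s for K in itertools.combinations(range(n), q)
            for s, M in [wedge(K, H)] if s and M == L}

n = 5
for H in itertools.combinations(range(n), 2):
    for L in itertools.combinations(range(n), 2):
        s, M = wedge(H, L)
        if s:
            assert s == rho(H, L)                          # e_H ∧ e_L = rho_{H,L} e_{H∪L}
    for L in itertools.combinations(range(n), 3):
        if set(H) <= set(L):
            rest = tuple(sorted(set(L) - set(H)))
            assert hook(H, L, n) == {rest: rho(rest, H)}   # rho_{L-H,H} e*_{L-H}
        else:
            assert hook(H, L, n) == {}                     # 0 when H ⊄ L
```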
Similar formulae hold for $\lrcorner : \bigwedge^p E^* \times \bigwedge^{p+q} E \to \bigwedge^q E$. Using Proposition 22.26, we have the following proposition:
Proposition 22.27 For every $u \in E$, we have
$$u \,\lrcorner\, (x^* \wedge y^*) = (-1)^s\, (u \,\lrcorner\, x^*) \wedge y^* + x^* \wedge (u \,\lrcorner\, y^*),$$
where $y^* \in \bigwedge^s E^*$.
Proof. We can prove the above identity assuming that $x^*$ and $y^*$ are of the form $e_I^*$ and $e_J^*$ using Proposition 22.26, but this is rather tedious. There is also a proof involving determinants; see Warner [147], Chapter 2.
Thus, $\lrcorner$ is almost an anti-derivation, except that the sign $(-1)^s$ is applied to the wrong factor.
It is also possible to define a right interior product or right hook, $\llcorner$, using multiplication on the left rather than multiplication on the right. Then, $\llcorner$ defines a right action
$$\llcorner : \bigwedge\nolimits^{p+q} E^* \times \bigwedge\nolimits^p E \longrightarrow \bigwedge\nolimits^q E^*,$$
such that
$$\langle z^*, u \wedge v \rangle = \langle z^* \,\llcorner\, u, v \rangle, \qquad \text{for all } u \in \bigwedge\nolimits^p E,\; v \in \bigwedge\nolimits^q E, \text{ and } z^* \in \bigwedge\nolimits^{p+q} E^*.$$
Similarly, exchanging the roles of E and $E^*$, we get a right hook $\llcorner : \bigwedge^{p+q} E \times \bigwedge^p E^* \to \bigwedge^q E$ such that
$$\langle u^* \wedge v^*, z \rangle = \langle v^*, z \,\llcorner\, u^* \rangle, \qquad \text{for all } u^* \in \bigwedge\nolimits^p E^*,\; v^* \in \bigwedge\nolimits^q E^*, \text{ and } z \in \bigwedge\nolimits^{p+q} E.$$
Since the left hook $\lrcorner : \bigwedge^p E \times \bigwedge^{p+q} E^* \to \bigwedge^q E^*$ is defined by
$$\langle u \,\lrcorner\, z^*, v \rangle = \langle z^*, v \wedge u \rangle, \qquad \text{for all } u \in \bigwedge\nolimits^p E,\; v \in \bigwedge\nolimits^q E \text{ and } z^* \in \bigwedge\nolimits^{p+q} E^*,$$
the right hook by
$$\langle z^* \,\llcorner\, u, v \rangle = \langle z^*, u \wedge v \rangle, \qquad \text{for all } u \in \bigwedge\nolimits^p E,\; v \in \bigwedge\nolimits^q E, \text{ and } z^* \in \bigwedge\nolimits^{p+q} E^*,$$
and $v \wedge u = (-1)^{pq}\, u \wedge v$, we conclude that
$$u \,\lrcorner\, z^* = (-1)^{pq}\; z^* \,\llcorner\, u,$$
where $u \in \bigwedge^p E$ and $z^* \in \bigwedge^{p+q} E^*$.
Using the above property and Proposition 22.27 we get the following version of Proposi-
tion 22.27 for the right hook:
Thus, $\llcorner$ is an anti-derivation.
For $u \in E$, the right hook $z^* \,\llcorner\, u$ is also denoted $i(u)z^*$ and called the insertion operator or interior product. This operator plays an important role in differential geometry. If we view $z^* \in \bigwedge^{n+1}(E^*)$ as an alternating multilinear map in $\mathrm{Alt}^{n+1}(E; K)$, then $i(u)z^* \in \mathrm{Alt}^n(E; K)$ is given by
$$(i(u)z^*)(v_1, \ldots, v_n) = z^*(u, v_1, \ldots, v_n).$$
Note that certain authors, such as Shafarevitch [138], denote our right hook $z^* \,\llcorner\, u$ (which is also the right hook in Bourbaki [21] and Fulton and Harris [57]) by $u \,\lrcorner\, z^*$.
Using the two versions of $\llcorner$, we can define linear maps $\gamma : \bigwedge^p E \to \bigwedge^{n-p} E^*$ and $\delta : \bigwedge^p E^* \to \bigwedge^{n-p} E$. For any basis $(e_1, \ldots, e_n)$ of E, if we let $M = \{1, \ldots, n\}$, $e = e_1 \wedge \cdots \wedge e_n$, and $e^* = e_1^* \wedge \cdots \wedge e_n^*$, then
$$\gamma(u) = u \,\lrcorner\, e^* \qquad \text{and} \qquad \delta(u^*) = u^* \,\lrcorner\, e,$$
for all $u \in \bigwedge^p E$ and all $u^* \in \bigwedge^p E^*$.
Proof. Using Proposition 22.26, for any subset $J \subseteq \{1, \ldots, n\} = M$ such that $|J| = p$, we have
$$\gamma(e_J) = e_J \,\lrcorner\, e^* = \rho_{M-J,J}\, e_{M-J}^* \qquad \text{and} \qquad \delta(e_J^*) = e_J^* \,\lrcorner\, e = \rho_{M-J,J}\, e_{M-J}.$$
Thus,
$$\delta \circ \gamma(e_J) = \rho_{M-J,J}\, \rho_{J,M-J}\, e_J = (-1)^{p(n-p)}\, e_J.$$
A similar result holds for $\gamma \circ \delta$. This implies that
$$\delta \circ \gamma = (-1)^{p(n-p)}\, \mathrm{id} \qquad \text{and} \qquad \gamma \circ \delta = (-1)^{p(n-p)}\, \mathrm{id}.$$
$$e_j \wedge z = 0 \qquad \text{for } j = 1, \ldots, n.$$
By wedging $z = \sum_I \lambda_I e_I$ with each $e_j$, as $n > p$, we deduce $\lambda_I = 0$ for all I, so $z = 0$, a contradiction. Therefore, $n = p$ and z is decomposable.
In Proposition 22.31, we can let $u^*$ range over a basis of $\bigwedge^{p-1} E^*$, and then the conditions are
$$(e_H^* \,\lrcorner\, z) \wedge z = 0$$
for all $H \subseteq \{1, \ldots, n\}$ with $|H| = p - 1$. Since $(e_H^* \,\lrcorner\, z) \wedge z \in \bigwedge^{p+1} E$, this is equivalent to
$$e_J^*\big((e_H^* \,\lrcorner\, z) \wedge z\big) = 0$$
for all $H, J \subseteq \{1, \ldots, n\}$ with $|H| = p - 1$ and $|J| = p + 1$. Then, for all $I, I' \subseteq \{1, \ldots, n\}$ with $|I| = |I'| = p$, we can show that
$$e_J^*\big((e_H^* \,\lrcorner\, e_I) \wedge e_{I'}\big) = 0,$$
unless there is some $i \in \{1, \ldots, n\}$ such that
$$I - H = \{i\}, \qquad J - I' = \{i\}.$$
In this case,
$$e_J^*\big((e_H^* \,\lrcorner\, e_{H \cup \{i\}}) \wedge e_{J - \{i\}}\big) = \rho_{\{i\},H}\, \rho_{\{i\},J-\{i\}}.$$
If we let
$$\epsilon_{i,J,H} = \rho_{\{i\},H}\, \rho_{\{i\},J-\{i\}},$$
we have $\epsilon_{i,J,H} = +1$ if the parity of the number of $j \in J$ such that $j < i$ is the same as the parity of the number of $h \in H$ such that $h < i$, and $\epsilon_{i,J,H} = -1$ otherwise.
Finally, we obtain the following criterion in terms of quadratic equations (Plücker's equations) for the decomposability of an alternating tensor:
Proposition 22.32 (Grassmann-Plücker's Equations) For $z = \sum_I \lambda_I e_I \in \bigwedge^p E$, the conditions for $z \ne 0$ to be decomposable are
$$\sum_{i \in J - H} \epsilon_{i,J,H}\; \lambda_{H \cup \{i\}}\, \lambda_{J - \{i\}} = 0,$$
for all $H, J \subseteq \{1, \ldots, n\}$ with $|H| = p - 1$ and $|J| = p + 1$.
Using these criteria, it is a good exercise to prove that if $\dim(E) = n$, then every tensor in $\bigwedge^{n-1}(E)$ is decomposable. This can also be shown directly.
It should be noted that the equations given by Proposition 22.32 are not independent. For example, when $\dim(E) = n = 4$ and $p = 2$, these equations reduce to the single equation
$$\lambda_{12}\lambda_{34} - \lambda_{13}\lambda_{24} + \lambda_{14}\lambda_{23} = 0.$$
When the field K is the field of complex numbers, this is the homogeneous equation of a quadric in $\mathbb{CP}^5$ known as the Klein quadric. The points on this quadric are in one-to-one correspondence with the lines in $\mathbb{CP}^3$.
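For $n = 4$, $p = 2$ the relation is easy to check numerically: the coordinates $\lambda_{ij}$ of $u \wedge v$ are the $2 \times 2$ minors of the $4 \times 2$ matrix with columns u, v, and they satisfy the quadratic relation, while a sum such as $e_1 \wedge e_2 + e_3 \wedge e_4$ does not. A sketch (0-based indices, helper names ours):

```python
import itertools

def plucker(u, v):
    # coordinates lambda_{ij} (i < j) of u ∧ v in R^4: the 2x2 minors
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i, j in itertools.combinations(range(4), 2)}

def klein(lam):
    # the single Grassmann-Plücker relation for n = 4, p = 2
    return (lam[(0, 1)] * lam[(2, 3)] - lam[(0, 2)] * lam[(1, 3)]
            + lam[(0, 3)] * lam[(1, 2)])

lam = plucker([1, 2, 0, -1], [3, 1, 4, 2])
assert klein(lam) == 0                       # simple 2-vectors lie on the Klein quadric

lam2 = {k: 0 for k in lam}
lam2[(0, 1)] = lam2[(2, 3)] = 1              # e1 ∧ e2 + e3 ∧ e4
assert klein(lam2) == 1                      # nonzero: not decomposable
```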
Note that F may have infinite dimension. This isomorphism allows us to view the tensors in $\bigwedge^n(E^*) \otimes F$ as vector-valued alternating forms, a point of view that is useful in differential geometry. If $(f_1, \ldots, f_r)$ is a basis of F, every tensor $\omega \in \bigwedge^n(E^*) \otimes F$ can be written as some linear combination
$$\omega = \sum_{i=1}^r \alpha_i \otimes f_i,$$
with $\alpha_i \in \bigwedge^n(E^*)$. We also let
$$\bigwedge(E; F) = \Big(\bigoplus_{n=0}^{\infty} \bigwedge\nolimits^n(E^*)\Big) \otimes F = \bigwedge(E^*) \otimes F.$$
Given a bilinear map $\Phi : F \times G \to H$, we can define a multiplication on these spaces by
$$(\alpha \otimes f) \wedge_\Phi (\beta \otimes g) = (\alpha \wedge \beta) \otimes \Phi(f, g).$$
As in Section 22.15 (following H. Cartan [30]), we can also define a multiplication
$$\wedge_\Phi : \mathrm{Alt}^m(E; F) \times \mathrm{Alt}^n(E; G) \longrightarrow \mathrm{Alt}^{m+n}(E; H)$$
directly on alternating multilinear maps as follows: For $f \in \mathrm{Alt}^m(E; F)$ and $g \in \mathrm{Alt}^n(E; G)$,
$$(f \wedge_\Phi g)(u_1, \ldots, u_{m+n}) = \sum_{\sigma \in \mathrm{shuffle}(m,n)} \mathrm{sgn}(\sigma)\; \Phi\big(f(u_{\sigma(1)}, \ldots, u_{\sigma(m)}),\, g(u_{\sigma(m+1)}, \ldots, u_{\sigma(m+n)})\big),$$
where shuffle(m, n) consists of all (m, n)-"shuffles", that is, permutations $\sigma$ of $\{1, \ldots, m+n\}$ such that $\sigma(1) < \cdots < \sigma(m)$ and $\sigma(m+1) < \cdots < \sigma(m+n)$.
In general, not much can be said about $\wedge_\Phi$ unless $\Phi$ has some additional properties. In particular, $\wedge_\Phi$ is generally not associative. We also have the map
$$\mu : \Big(\bigwedge\nolimits^n(E^*)\Big) \otimes F \longrightarrow \mathrm{Alt}^n(E; F),$$
defined on generators by
$$\mu\big((v_1^* \wedge \cdots \wedge v_n^*) \otimes a\big)(u_1, \ldots, u_n) = \det\big(v_j^*(u_i)\big)\, a.$$
Proposition 22.33 The map
$$\mu : \Big(\bigwedge\nolimits^n(E^*)\Big) \otimes F \longrightarrow \mathrm{Alt}^n(E; F)$$
is an isomorphism.
Applying the above combination to each $(e_{i_1}, \ldots, e_{i_n})$ (where $I = \{i_1, \ldots, i_n\}$, $i_1 < \cdots < i_n$), we get the linear combination
$$\sum_j \lambda_{I,j} f_j = 0,$$
and by linear independence of the $f_j$'s, we get $\lambda_{I,j} = 0$ for all I and all j. Therefore, the $\mu(e_I^* \otimes f_j)$ are linearly independent and we are done. The second part of the proposition is easily checked (a simple computation).
A special case of interest is the case where $F = G = H$ is a Lie algebra and $\Phi(a, b) = [a, b]$ is the Lie bracket of F. In this case, using a basis $(f_1, \ldots, f_r)$ of F, if we write $\omega = \sum_i \alpha_i \otimes f_i$ and $\eta = \sum_j \beta_j \otimes f_j$, we have
$$[\omega, \eta] = \sum_{i,j} \alpha_i \wedge \beta_j \otimes [f_i, f_j].$$
Consequently,
$$[\eta, \omega] = (-1)^{mn+1}\, [\omega, \eta].$$
The following proposition will be useful in dealing with vector-valued differential forms:
Proposition 22.34 If $(e_1, \ldots, e_p)$ is any basis of E, then every element $\omega \in \big(\bigwedge^n(E^*)\big) \otimes F$ can be written in a unique way as
$$\omega = \sum_I e_I^* \otimes f_I, \qquad f_I \in F.$$
Define the product $\cdot : \mathrm{Alt}^n(E; \mathbb{R}) \times F \to \mathrm{Alt}^n(E; F)$ as follows: For all $\omega \in \mathrm{Alt}^n(E; \mathbb{R})$ and all $f \in F$,
$$(\omega \cdot f)(u_1, \ldots, u_n) = \omega(u_1, \ldots, u_n) f,$$
for all $u_1, \ldots, u_n \in E$. Then, it is immediately verified that for every $\omega \in \big(\bigwedge^n(E^*)\big) \otimes F$ of the form
$$\omega = u_1^* \wedge \cdots \wedge u_n^* \otimes f,$$
we have
$$\mu(u_1^* \wedge \cdots \wedge u_n^* \otimes f) = \mu(u_1^* \wedge \cdots \wedge u_n^*) \cdot f.$$
Then, Proposition 22.34 yields
Proposition 22.35 If $(e_1, \ldots, e_p)$ is any basis of E, then every element $\omega \in \mathrm{Alt}^n(E; F)$ can be written in a unique way as
$$\omega = \sum_I e_I^* \cdot f_I, \qquad f_I \in F.$$
It is possible to salvage certain properties of tensor products holding for vector spaces by restricting the class of modules under consideration. For example, projective modules have a pretty good behavior w.r.t. tensor products.
A free R-module, F, is a module that has a basis (i.e., there is a family $(e_i)_{i \in I}$ of linearly independent vectors in F that span F). Projective modules have many equivalent characterizations. Here is one that is best suited for our needs: An R-module, P, is projective if there is some free R-module, F, and some R-module, Q, so that
$$F = P \oplus Q.$$
Given any R-module M, we let $M^* = \mathrm{Hom}_R(M, R)$ be its dual. We have the following proposition:
Proposition 22.36 For any finitely-generated projective R-module P and any R-module Q, we have the isomorphisms:
$$P^{**} \cong P, \qquad \mathrm{Hom}_R(P, Q) \cong P^* \otimes_R Q.$$
Sketch of proof. We only consider the second isomorphism. Since P is projective, we have some R-modules $P_1$, F with
$$P \oplus P_1 = F,$$
where F is some free module. Now, we know that for any R-modules U, V, W, we have
$$\mathrm{Hom}_R(U \oplus V, W) \cong \mathrm{Hom}_R(U, W) \times \mathrm{Hom}_R(V, W) \cong \mathrm{Hom}_R(U, W) \oplus \mathrm{Hom}_R(V, W),$$
so
$$P^* \oplus P_1^* \cong F^*, \qquad \mathrm{Hom}_R(P, Q) \oplus \mathrm{Hom}_R(P_1, Q) \cong \mathrm{Hom}_R(F, Q).$$
By tensoring with Q and using the fact that tensor distributes w.r.t. coproducts, we get
$$(P^* \otimes_R Q) \oplus (P_1^* \otimes_R Q) \cong (P^* \oplus P_1^*) \otimes_R Q \cong F^* \otimes_R Q.$$
Now, the proof of Proposition 22.9 goes through because F is free and finitely generated, so
$$\alpha_\otimes : (P^* \otimes_R Q) \oplus (P_1^* \otimes_R Q) \cong F^* \otimes_R Q \longrightarrow \mathrm{Hom}_R(F, Q) \cong \mathrm{Hom}_R(P, Q) \oplus \mathrm{Hom}_R(P_1, Q)$$
is an isomorphism.
In Section 11.2, we will need to consider a slightly weaker version of the universal mapping property of tensor products. The situation is this: We have a commutative R-algebra S, where R is a field (or even a commutative ring), and we have two R-modules U and V; moreover, U is a right S-module and V is a left S-module. In Section 11.2, this corresponds to $R = \mathbb{R}$, $S = C^\infty(B)$, $U = A^i(\xi)$ and $V = \Gamma(\xi)$, where $\xi$ is a vector bundle. Then we can form the tensor product $U \otimes_R V$, and we let $U \otimes_S V$ be the quotient module $(U \otimes_R V)/W$, where W is the submodule of $U \otimes_R V$ generated by the elements of the form
$$us \otimes_R v - u \otimes_R sv.$$
Since S is commutative, $U \otimes_S V$ is an S-module under the action
$$s(u \otimes_S v) = us \otimes_S v.$$
It is immediately verified that this S-module is isomorphic to the tensor product of U and V as S-modules, and the following universal mapping property holds:
Note that the linear map $\widetilde{f} : U \otimes_S V \to Z$ is only R-linear; it is not S-linear in general.
Recall that every real skew-symmetric matrix $A \in \mathfrak{so}(2n)$ can be written as
$$A = P D P^\top,$$
where P is an orthogonal matrix and D is a block diagonal matrix
$$D = \mathrm{diag}(D_1, \ldots, D_n), \qquad D_i = \begin{pmatrix} 0 & -a_i \\ a_i & 0 \end{pmatrix}.$$
For a proof, see Horn and Johnson [79], Corollary 2.5.14, Gantmacher [61], Chapter IX, or Gallier [58], Chapter 11.
Since $\det(D_i) = a_i^2$ and $\det(A) = \det(P D P^\top) = \det(D) = \det(D_1) \cdots \det(D_n)$, we get
$$\det(A) = (a_1 \cdots a_n)^2.$$
The Pfaffian is a polynomial in the entries of A whose square is this determinant:
$$\mathrm{Pf}(A)^2 = \det(A).$$
The Pfaffian shows up in the definition of the Euler class of a vector bundle. There is a simple way to define the Pfaffian using some exterior algebra. Let $(e_1, \ldots, e_{2n})$ be any basis of $\mathbb{R}^{2n}$. For any matrix $A \in \mathfrak{so}(2n)$, let
$$\omega(A) = \sum_{i < j} a_{ij}\, e_i \wedge e_j,$$
where $A = (a_{ij})$. Then, the n-fold wedge $\omega(A)^{\wedge n}$ is of the form $C\, e_1 \wedge e_2 \wedge \cdots \wedge e_{2n}$ for some constant $C \in \mathbb{R}$.
Definition 22.10 For every skew symmetric matrix $A \in \mathfrak{so}(2n)$, the Pfaffian polynomial or Pfaffian is the degree n polynomial $\mathrm{Pf}(A)$ defined by
$$\omega(A)^{\wedge n} = n!\; \mathrm{Pf}(A)\; e_1 \wedge e_2 \wedge \cdots \wedge e_{2n}.$$
Clearly, Pf(A) is independent of the basis chosen. If A is the block diagonal matrix D, a simple calculation shows that
$$\omega(D) = -\sum_{i=1}^n a_i\; e_{2i-1} \wedge e_{2i}$$
and that
$$\omega(D)^{\wedge n} = (-1)^n\, n!\; a_1 \cdots a_n\; e_1 \wedge e_2 \wedge \cdots \wedge e_{2n},$$
and so
$$\mathrm{Pf}(D) = (-1)^n\, a_1 \cdots a_n.$$
Since $\mathrm{Pf}(D)^2 = (a_1 \cdots a_n)^2 = \det(A)$, we seem to be on the right track.
Proposition 22.38 For every skew symmetric matrix $A \in \mathfrak{so}(2n)$ and every arbitrary matrix B, we have:
(i) $\mathrm{Pf}(A)^2 = \det(A)$
(ii) $\mathrm{Pf}(B A B^\top) = \mathrm{Pf}(A)\, \det(B)$.
Proof. If we assume that (ii) is proved, then, since we can write $A = P D P^\top$ for some orthogonal matrix P and some block diagonal matrix D as above, as $\det(P) = \pm 1$ and $\mathrm{Pf}(D)^2 = \det(A)$, we get
$$\mathrm{Pf}(A)^2 = \mathrm{Pf}(P D P^\top)^2 = \mathrm{Pf}(D)^2 \det(P)^2 = \det(A),$$
which proves (i).
For (ii), let $f_i = \sum_{k=1}^{2n} b_{ki}\, e_k$ for $i = 1, \ldots, 2n$ (the columns of B expressed in the basis $(e_1, \ldots, e_{2n})$), and let
$$\tau = \sum_{i,j} a_{ij}\; f_i \wedge f_j.$$
Then
$$\tau = 2\, \omega(B A B^\top).$$
Consequently,
$$\tau^{\wedge n} = 2^n\, \omega(B A B^\top)^{\wedge n} = 2^n\, n!\; \mathrm{Pf}(B A B^\top)\; e_1 \wedge e_2 \wedge \cdots \wedge e_{2n}.$$
On the other hand, expanding $\tau^{\wedge n}$ directly in terms of the $f_i$ gives
$$\tau^{\wedge n} = 2^n\, n!\; \mathrm{Pf}(A)\; f_1 \wedge f_2 \wedge \cdots \wedge f_{2n}.$$
If B is singular, then the $f_i$ are linearly dependent, which implies that $f_1 \wedge f_2 \wedge \cdots \wedge f_{2n} = 0$, in which case
$$\mathrm{Pf}(B A B^\top) = 0 = \mathrm{Pf}(A)\, \det(B).$$
If B is nonsingular, then $f_1 \wedge f_2 \wedge \cdots \wedge f_{2n} = \det(B)\; e_1 \wedge e_2 \wedge \cdots \wedge e_{2n}$, so
$$\tau^{\wedge n} = 2^n\, n!\; \mathrm{Pf}(A)\, \det(B)\; e_1 \wedge e_2 \wedge \cdots \wedge e_{2n},$$
and as
$$\tau^{\wedge n} = 2^n\, n!\; \mathrm{Pf}(B A B^\top)\; e_1 \wedge e_2 \wedge \cdots \wedge e_{2n},$$
we get
$$\mathrm{Pf}(B A B^\top) = \mathrm{Pf}(A)\, \det(B),$$
as claimed.
Remark: It can be shown that the polynomial Pf(A) is the unique polynomial with integer coefficients such that $\mathrm{Pf}(A)^2 = \det(A)$ and $\mathrm{Pf}(\mathrm{diag}(S, \ldots, S)) = +1$, where
$$S = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix};$$
see Milnor and Stasheff [110] (Appendix C, Lemma 9). There is also an explicit formula for Pf(A), namely:
$$\mathrm{Pf}(A) = \frac{1}{2^n\, n!} \sum_{\sigma \in S_{2n}} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{\sigma(2i-1)\,\sigma(2i)}.$$
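The explicit formula is directly programmable (it is exponential in n, but fine for small matrices). The sketch below (helper names are ours) uses it to confirm $\mathrm{Pf}(D) = (-1)^n a_1 \cdots a_n$ and $\mathrm{Pf}(A)^2 = \det(A)$ on $4 \times 4$ examples:

```python
import itertools
from fractions import Fraction
from math import factorial

def perm_sign(p):
    return (-1) ** sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def pfaffian(A):
    # Pf(A) = (1 / (2^n n!)) * sum over sigma in S_2n of sgn(sigma) prod_i a_{sigma(2i-1) sigma(2i)}
    m = len(A)
    n = m // 2
    total = 0
    for p in itertools.permutations(range(m)):
        term = perm_sign(p)
        for i in range(n):
            term *= A[p[2 * i]][p[2 * i + 1]]
        total += term
    return Fraction(total, 2 ** n * factorial(n))

def det(M):
    # cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

a1, a2 = 3, 5
D = [[0, -a1, 0, 0], [a1, 0, 0, 0], [0, 0, 0, -a2], [0, 0, a2, 0]]
assert pfaffian(D) == (-1) ** 2 * a1 * a2       # Pf(D) = (-1)^n a_1 a_2

A = [[0, 2, -1, 4], [-2, 0, 3, 1], [1, -3, 0, -2], [-4, -1, 2, 0]]
assert pfaffian(A) ** 2 == det(A)               # Pf(A)^2 = det(A)
```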
Beware, some authors use a different sign convention and require the Pfaffian to have the value +1 on the matrix $\mathrm{diag}(S', \ldots, S')$, where
$$S' = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$$
For example, if $\mathbb{R}^{2n}$ is equipped with an inner product $\langle -, - \rangle$, then some authors define $\omega(A)$ as
$$\omega(A) = \sum_{i < j} \langle A e_i, e_j \rangle\; e_i \wedge e_j,$$
where $A = (a_{ij})$. But then, $\langle A e_i, e_j \rangle = a_{ji}$ and not $a_{ij}$, and this Pfaffian takes the value +1 on the matrix $\mathrm{diag}(S', \ldots, S')$. This version of the Pfaffian differs from our version by the factor $(-1)^n$. In this respect, Madsen and Tornehave [100] seem to have an incorrect sign in Proposition B6 of Appendix C.
We will also need another property of Pfaffians. Recall that the ring $M_n(\mathbb{C})$ of $n \times n$ matrices over $\mathbb{C}$ is embedded in the ring $M_{2n}(\mathbb{R})$ of $2n \times 2n$ matrices with real coefficients, using the injective homomorphism that maps every entry $z = a + ib \in \mathbb{C}$ to the $2 \times 2$ matrix
$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}.$$
If $A \in M_n(\mathbb{C})$, let $A_{\mathbb{R}} \in M_{2n}(\mathbb{R})$ denote the real matrix obtained by the above process. Observe that every skew Hermitian matrix $A \in \mathfrak{u}(n)$ (i.e., with $A^* = \bar{A}^\top = -A$) yields a matrix $A_{\mathbb{R}} \in \mathfrak{so}(2n)$.
Proposition 22.39 For every skew Hermitian matrix $A \in \mathfrak{u}(n)$, we have
$$\mathrm{Pf}(A_{\mathbb{R}}) = i^n \det(A).$$
Proof. It is well-known that a skew Hermitian matrix can be diagonalized with respect to a unitary matrix U and that the eigenvalues are pure imaginary or zero, so we can write
$$A = U\, \mathrm{diag}(i a_1, \ldots, i a_n)\, U^*,$$
for some reals $a_j \in \mathbb{R}$. Consequently,
$$A_{\mathbb{R}} = U_{\mathbb{R}}\, \mathrm{diag}(D_1, \ldots, D_n)\, U_{\mathbb{R}}^\top,$$
where
$$D_i = \begin{pmatrix} 0 & -a_i \\ a_i & 0 \end{pmatrix},$$
and, by Proposition 22.38 (ii), since $\det(U_{\mathbb{R}}) = 1$,
$$\mathrm{Pf}(A_{\mathbb{R}}) = \mathrm{Pf}(\mathrm{diag}(D_1, \ldots, D_n)) = (-1)^n\, a_1 \cdots a_n,$$
as we saw before. On the other hand,
$$\det(A) = \det(\mathrm{diag}(i a_1, \ldots, i a_n)) = i^n\, a_1 \cdots a_n,$$
so
$$i^n \det(A) = i^{2n}\, a_1 \cdots a_n = (-1)^n\, a_1 \cdots a_n = \mathrm{Pf}(A_{\mathbb{R}}).$$
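Proposition 22.39 can also be confirmed numerically: realify a small skew Hermitian matrix and compare its Pfaffian with $i^n \det(A)$. The sketch below (our own helpers; floating point, so the comparison uses a tolerance) does this for n = 2:

```python
import itertools
from math import factorial

def perm_sign(p):
    return (-1) ** sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def pfaffian(A):
    # explicit formula: (1 / (2^n n!)) sum_sigma sgn(sigma) prod_i a_{sigma(2i-1) sigma(2i)}
    m = len(A)
    n = m // 2
    total = 0.0
    for p in itertools.permutations(range(m)):
        term = perm_sign(p)
        for i in range(n):
            term *= A[p[2 * i]][p[2 * i + 1]]
        total += term
    return total / (2 ** n * factorial(n))

def realify(A):
    # embed M_n(C) into M_2n(R):  z = a + ib  ->  [[a, -b], [b, a]]
    n = len(A)
    R = [[0.0] * (2 * n) for _ in range(2 * n)]
    for r in range(n):
        for c in range(n):
            z = complex(A[r][c])
            R[2 * r][2 * c], R[2 * r][2 * c + 1] = z.real, -z.imag
            R[2 * r + 1][2 * c], R[2 * r + 1][2 * c + 1] = z.imag, z.real
    return R

A = [[2j, 1 + 1j], [-1 + 1j, 3j]]                       # skew Hermitian: A^* = -A
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]            # det(A) = -4
expected = (1j ** 2) * detA                             # i^n det(A) with n = 2
assert abs(expected.imag) < 1e-9
assert abs(pfaffian(realify(A)) - expected.real) < 1e-9
```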
Beware that Madsen and Tornehave [100] state Proposition 22.39 using the factor $(-i)^n$, which is wrong.
Bibliography
[1] Ralph Abraham and Jerrold E. Marsden. Foundations of Mechanics. Addison Wesley,
second edition, 1978.
[2] George E. Andrews, Richard Askey, and Ranjan Roy. Special Functions. Cambridge
University Press, first edition, 2000.
[3] Vincent Arsigny. Processing Data in Lie Groups: An Algebraic Approach. Application
to Non-Linear Registration and Diffusion Tensor MRI. PhD thesis, École Polytech-
nique, Palaiseau, France, 2006. Thèse de Sciences.
[4] Vincent Arsigny, Olivier Commowick, Xavier Pennec, and Nicholas Ayache. A fast
and log-euclidean polyaffine framework for locally affine registration. Technical report,
INRIA, 2004, route des Lucioles, 06902 Sophia Antipolis Cedex, France, 2006. Report
No. 5865.
[5] Vincent Arsigny, Pierre Fillard, Xavier Pennec, and Nicholas Ayache. Geometric means
in a novel vector space structure on symmetric positive-definite matrices. SIAM J. on
Matrix Analysis and Applications, 29(1):328–347, 2007.
[6] Vincent Arsigny, Xavier Pennec, and Nicholas Ayache. Polyrigid and polyaffine trans-
formations: a novel geometrical tool to deal with non-rigid deformations–application
to the registration of histological slices. Medical Image Analysis, 9(6):507–523, 2005.
[8] Andreas Arvanitoyeorgos. An Introduction to Lie Groups and the Geometry of Homogeneous Spaces. SML, Vol. 22. AMS, first edition, 2003.
[11] Michael F Atiyah, Raoul Bott, and Arnold Shapiro. Clifford modules. Topology, 3,
Suppl. 1:3–38, 1964.
[12] Sheldon Axler, Paul Bourdon, and Wade Ramey. Harmonic Function Theory. GTM
No. 137. Springer Verlag, second edition, 2001.
[13] Andrew Baker. Matrix Groups. An Introduction to Lie Group Theory. SUMS. Springer,
2002.
[14] Ronen Basri and David W. Jacobs. Lambertian reflectance and linear subspaces. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 25(2):228–233, 2003.
[15] Marcel Berger. Géométrie 1. Nathan, 1990. English edition: Geometry 1, Universitext,
Springer Verlag.
[17] Marcel Berger and Bernard Gostiaux. Géométrie différentielle: variétés, courbes et
surfaces. Collection Mathématiques. Puf, second edition, 1992. English edition: Dif-
ferential geometry, manifolds, curves, and surfaces, GTM No. 115, Springer Verlag.
[19] Raoul Bott and Loring W. Tu. Differential Forms in Algebraic Topology. GTM No. 82. Springer Verlag, first edition, 1986.
[22] Nicolas Bourbaki. Elements of Mathematics. Lie Groups and Lie Algebras, Chapters
1–3. Springer, first edition, 1989.
[25] T. Bröcker and T. tom Dieck. Representation of Compact Lie Groups. GTM, Vol. 98.
Springer Verlag, first edition, 1985.
[26] R.L. Bryant. An introduction to Lie groups and symplectic geometry. In D.S. Freed and
K.K. Uhlenbeck, editors, Geometry and Quantum Field Theory, pages 5–181. AMS,
Providence, Rhode Island, 1995.
[27] N. Burgoyne and R. Cushman. Conjugacy classes in linear groups. Journal of Algebra,
44:339–362, 1977.
[29] Henri Cartan. Cours de Calcul Différentiel. Collection Méthodes. Hermann, 1990.
[31] Roger Carter, Graeme Segal, and Ian Macdonald. Lectures on Lie Groups and Lie
Algebras. Cambridge University Press, first edition, 1995.
[32] Sheung H. Cheng, Nicholas J. Higham, Charles Kenney, and Alan J. Laub. Approx-
imating the logarithm of a matrix to specified accuracy. SIAM Journal on Matrix
Analysis and Applications, 22:1112–1125, 2001.
[34] Claude Chevalley. Theory of Lie Groups I. Princeton Mathematical Series, No. 8.
Princeton University Press, first edition, 1946. Eighth printing.
[35] Claude Chevalley. The Algebraic Theory of Spinors and Clifford Algebras. Collected
Works, Vol. 2. Springer, first edition, 1997.
[38] Morton L. Curtis. Matrix Groups. Universitext. Springer Verlag, second edition, 1984.
[39] James F. Davis and Paul Kirk. Lecture Notes in Algebraic Topology. GSM, Vol. 35.
AMS, first edition, 2001.
[40] Anton Deitmar. A First Course in Harmonic Analysis. UTM. Springer Verlag, first
edition, 2002.
[41] C. R. DePrima and C. R. Johnson. The range of $A^{-1}A^*$ in GL(n, C). Linear Algebra and Its Applications, 9:209–222, 1974.
[42] Jean Dieudonné. Sur les Groupes Classiques. Hermann, third edition, 1967.
[43] Jean Dieudonné. Special Functions and Linear Representations of Lie Groups. Regional
Conference Series in Mathematics, No. 42. AMS, first edition, 1980.
[44] Jean Dieudonné. Éléments d’Analyse, Tome V. Groupes de Lie Compacts, Groupes de
Lie Semi-Simples. Edition Jacques Gabay, first edition, 2003.
[45] Jean Dieudonné. Éléments d’Analyse, Tome VI. Analyse Harmonique. Edition Jacques
Gabay, first edition, 2003.
[46] Jean Dieudonné. Éléments d’Analyse, Tome VII. Équations Fonctionnelles Linéaires.
Première partie, Opérateurs Pseudo-Différentiels. Edition Jacques Gabay, first edition,
2003.
[47] Jean Dieudonné. Éléments d’Analyse, Tome II. Chapitres XII à XV. Edition Jacques
Gabay, first edition, 2005.
[48] Dragomir Djokovic. On the exponential map in classical lie groups. Journal of Algebra,
64:76–88, 1980.
[49] Manfredo P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice Hall,
1976.
[50] Manfredo P. do Carmo. Riemannian Geometry. Birkhäuser, second edition, 1992.
[51] B.A. Dubrovin, A.T. Fomenko, and S.P. Novikov. Modern Geometry–Methods and
Applications. Part I. GTM No. 93. Springer Verlag, second edition, 1985.
[52] B.A. Dubrovin, A.T. Fomenko, and S.P. Novikov. Modern Geometry–Methods and
Applications. Part II. GTM No. 104. Springer Verlag, first edition, 1985.
[53] J.J. Duistermaat and J.A.C. Kolk. Lie Groups. Universitext. Springer Verlag, first
edition, 2000.
[54] Gerald B. Folland. A Course in Abstract Harmonic Analysis. CRC Press, first edition,
1995.
[55] Joseph Fourier. Théorie Analytique de la Chaleur. Edition Jacques Gabay, first edition,
1822.
[56] William Fulton. Algebraic Topology, A first course. GTM No. 153. Springer Verlag,
first edition, 1995.
[57] William Fulton and Joe Harris. Representation Theory, A first course. GTM No. 129.
Springer Verlag, first edition, 1991.
[58] Jean H. Gallier. Geometric Methods and Applications, For Computer Science and
Engineering. TAM, Vol. 38. Springer, first edition, 2000.
[59] Jean H. Gallier. Logarithms and square roots of real matrices. Technical report,
University of Pennsylvania, Levine Hall, 3330 Walnut Street, Philadelphia, PA 19104,
2008. Report No. MS-CIS-08-12.
[60] S. Gallot, D. Hulin, and J. Lafontaine. Riemannian Geometry. Universitext. Springer
Verlag, second edition, 1993.
[61] F.R. Gantmacher. The Theory of Matrices, Vol. I. AMS Chelsea, first edition, 1977.
[62] Christopher Michael Geyer. Catadioptric Projective Geometry: Theory and Applica-
tions. PhD thesis, University of Pennsylvania, 200 South 33rd Street, Philadelphia,
PA 19104, 2002. Dissertation.
[63] André Gramain. Topologie des Surfaces. Collection Sup. Puf, first edition, 1971.
[64] Robin Green. Spherical harmonic lighting: The gritty details. In Archives of the Game
Developers’ Conference, pages 1–47, 2003.
[65] Marvin J. Greenberg and John R. Harper. Algebraic Topology: A First Course. Addison
Wesley, first edition, 1981.
[66] Phillip Griffiths and Joseph Harris. Principles of Algebraic Geometry. Wiley-
Interscience, first edition, 1978.
[67] Cindy M. Grimm. Modeling Surfaces of Arbitrary Topology Using Manifolds. PhD
thesis, Department of Computer Science, Brown University, Providence, Rhode Island,
USA, 1996. Dissertation.
[68] Cindy M. Grimm and John F. Hughes. Modeling surfaces of arbitrary topology using
manifolds. In Proceedings of the 22nd ACM Annual Conference on Computer Graphics
and Interactive Techniques (SIGGRAPH'95), pages 359–368. ACM, August 6–11, 1995.
[69] Victor Guillemin and Alan Pollack. Differential Topology. Prentice Hall, first edition,
1974.
[70] Brian Hall. Lie Groups, Lie Algebras, and Representations. An Elementary
Introduction. GTM No. 222. Springer Verlag, first edition, 2003.
[71] Allen Hatcher. Algebraic Topology. Cambridge University Press, first edition, 2002.
[72] Sigurdur Helgason. Groups and Geometric Analysis. Integral Geometry, Invariant
Differential Operators and Spherical Functions. MSM, Vol. 83. AMS, first edition,
2000.
[73] Sigurdur Helgason. Differential Geometry, Lie Groups, and Symmetric Spaces. GSM,
Vol. 34. AMS, first edition, 2001.
[74] Nicholas J. Higham. The scaling and squaring method of the matrix exponential
revisited. SIAM Journal on Matrix Analysis and Applications, 26:1179–1193, 2005.
[75] D. Hilbert and S. Cohn-Vossen. Geometry and the Imagination. Chelsea Publishing
Co., 1952.
[76] Morris W. Hirsch. Differential Topology. GTM No. 33. Springer Verlag, first edition,
1976.
[78] Harry Hochstadt. The Functions of Mathematical Physics. Dover, first edition, 1986.
[79] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press,
first edition, 1990.
[80] Roger Howe. Very basic Lie theory. American Mathematical Monthly, 90:600–623,
1983.
[81] James E. Humphreys. Introduction to Lie Algebras and Representation Theory. GTM
No. 9. Springer Verlag, first edition, 1972.
[82] Dale Husemoller. Fiber Bundles. GTM No. 20. Springer Verlag, third edition, 1994.
[83] Jürgen Jost. Riemannian Geometry and Geometric Analysis. Universitext. Springer
Verlag, fourth edition, 2005.
[84] Charles S. Kenney and Alan J. Laub. Condition estimates for matrix functions. SIAM
Journal on Matrix Analysis and Applications, 10:191–209, 1989.
[85] A.A. Kirillov. Spinor representations of orthogonal groups. Technical report, University
of Pennsylvania, Math. Department, Philadelphia, PA 19104, 2001. Course Notes for
Math 654.
[86] A.A. Kirillov. Lectures on the Orbit Method. GSM, Vol. 64. AMS, first edition, 2004.
[87] A.A. Kirillov (Ed.). Representation Theory and Noncommutative Harmonic Analysis.
Encyclopaedia of Mathematical Sciences, Vol. 22. Springer Verlag, first edition, 1994.
[88] Wilhelm Klingenberg. Riemannian Geometry. de Gruyter & Co, second edition, 1995.
[90] Shoshichi Kobayashi and Katsumi Nomizu. Foundations of Differential Geometry, II.
Wiley Classics. Wiley-Interscience, first edition, 1996.
[92] Jacques Lafontaine. Introduction Aux Variétés Différentielles. PUG, first edition, 1996.
[93] Serge Lang. Real and Functional Analysis. GTM 142. Springer Verlag, third edition,
1996.
[94] Serge Lang. Undergraduate Analysis. UTM. Springer Verlag, second edition, 1997.
[95] Serge Lang. Fundamentals of Differential Geometry. GTM No. 191. Springer Verlag,
first edition, 1999.
[96] Blaine H. Lawson and Marie-Louise Michelsohn. Spin Geometry. Princeton Math.
Series, No. 38. Princeton University Press, 1989.
[97] N. N. Lebedev. Special Functions and Their Applications. Dover, first edition, 1972.
[98] John M. Lee. Introduction to Smooth Manifolds. GTM No. 218. Springer Verlag, first
edition, 2006.
[99] Pertti Lounesto. Clifford Algebras and Spinors. LMS No. 286. Cambridge University
Press, second edition, 2001.
[100] Ib Madsen and Jørgen Tornehave. From Calculus to Cohomology. De Rham
Cohomology and Characteristic Classes. Cambridge University Press, first edition, 1998.
[101] Paul Malliavin. Géométrie Différentielle Intrinsèque. Enseignement des Sciences, No.
14. Hermann, first edition, 1972.
[102] Jerrold E. Marsden and T.S. Ratiu. Introduction to Mechanics and Symmetry. TAM,
Vol. 17. Springer Verlag, first edition, 1994.
[103] William S. Massey. Algebraic Topology: An Introduction. GTM No. 56. Springer
Verlag, second edition, 1987.
[104] William S. Massey. A Basic Course in Algebraic Topology. GTM No. 127. Springer
Verlag, first edition, 1991.
[106] John W. Milnor. Morse Theory. Annals of Math. Series, No. 51. Princeton University
Press, third edition, 1969.
[108] John W. Milnor. Topology from the Differentiable Viewpoint. The University Press of
Virginia, second edition, 1969.
[109] John W. Milnor. Curvatures of left invariant metrics on Lie groups. Advances in
Mathematics, 21:293–329, 1976.
[110] John W. Milnor and James D. Stasheff. Characteristic Classes. Annals of Math. Series,
No. 76. Princeton University Press, first edition, 1974.
[111] R. Mneimné and F. Testard. Introduction à la Théorie des Groupes de Lie Classiques.
Hermann, first edition, 1997.
[112] Jules Molk. Encyclopédie des Sciences Mathématiques Pures et Appliquées. Tome I
(premier volume), Arithmétique. Gauthier-Villars, first edition, 1916.
[115] James R. Munkres. Topology, a First Course. Prentice Hall, first edition, 1975.
[118] Mitsuru Nishikawa. On the exponential map of the group O(p, q)_0. Memoirs of the
Faculty of Science, Kyushu University, Ser. A, 37:63–69, 1983.
[120] Xavier Pennec. Intrinsic statistics on Riemannian Manifolds: Basic tools for geometric
measurements. Journal of Mathematical Imaging and Vision, 25:127–154, 2006.
[121] Peter Petersen. Riemannian Geometry. GTM No. 171. Springer Verlag, second edition,
2006.
[122] L. Pontryagin. Topological Groups. Princeton University Press, first edition, 1939.
[123] L. Pontryagin. Topological Groups. Gordon and Breach, second edition, 1960.
[124] Ian R. Porteous. Topological Geometry. Cambridge University Press, second edition,
1981.
[126] Marcel Riesz. Clifford Numbers and Spinors. Kluwer Academic Publishers, first edition,
1993. Edited by E. Folke Bolinder and Pertti Lounesto.
[127] Wulf Rossmann. Lie Groups. An Introduction Through Linear Groups. Graduate Texts
in Mathematics. Oxford University Press, first edition, 2002.
[128] Joseph J. Rotman. Introduction to Algebraic Topology. GTM No. 119. Springer Verlag,
first edition, 1988.
[129] Arthur A. Sagle and Ralph E. Walde. Introduction to Lie Groups and Lie Algebras.
Academic Press, first edition, 1973.
[130] Takashi Sakai. Riemannian Geometry. Mathematical Monographs No. 149. AMS, first
edition, 1996.
[131] Hans Samelson. Notes on Lie Algebras. Universitext. Springer, second edition, 1990.
[134] D.H. Sattinger and O.L. Weaver. Lie Groups and Algebras with Applications to Physics,
Geometry, and Mechanics. Applied Math. Science, Vol. 61. Springer Verlag, first
edition, 1986.
[135] Laurent Schwartz. Analyse II. Calcul Différentiel et Equations Différentielles. Collec-
tion Enseignement des Sciences. Hermann, 1992.
[136] Jean-Pierre Serre. Lie Algebras and Lie Groups. Lecture Notes in Mathematics, No.
1500. Springer, second edition, 1992.
[137] Jean-Pierre Serre. Complex Semisimple Lie Algebras. Springer Monographs in
Mathematics. Springer, first edition, 2000.
[138] Igor R. Shafarevich. Basic Algebraic Geometry 1. Springer Verlag, second edition,
1994.
[140] Marcelo Siqueira, Dianna Xu, and Jean Gallier. Construction of C^∞-surfaces from
triangular meshes using parametric pseudo-manifolds. Technical report, University
of Pennsylvania, Levine Hall, Philadelphia, PA 19104, 2008. PDF file available from
http://repository.upenn.edu/cis_reports/877.
[141] Norman Steenrod. The Topology of Fibre Bundles. Princeton Math. Series, No. 14.
Princeton University Press, 1956.
[142] Elias M. Stein and Guido Weiss. Introduction to Fourier Analysis on Euclidean Spaces.
Princeton Math. Series, No. 32. Princeton University Press, 1971.
[143] S. Sternberg. Lectures On Differential Geometry. AMS Chelsea, second edition, 1983.
[146] N.J. Vilenkin. Special Functions and the Theory of Group Representations.
Translations of Mathematical Monographs No. 22. AMS, first edition, 1968.
[147] Frank Warner. Foundations of Differentiable Manifolds and Lie Groups. GTM No. 94.
Springer Verlag, first edition, 1983.
[148] André Weil. Foundations of Algebraic Geometry. Colloquium Publications, Vol. XXIX.
AMS, second edition, 1946.
[149] André Weil. L’Intégration dans les Groupes Topologiques et ses Applications. Hermann,
second edition, 1979.
[150] R.O. Wells. Differential Analysis on Complex Manifolds. GTM No. 65. Springer Verlag,
second edition, 1980.
[151] Hermann Weyl. The Classical Groups. Their Invariants and Representations.
Princeton Mathematical Series, No. 1. Princeton University Press, second edition, 1946.