5 Tensor Products
We have so far encountered vector fields and the derivatives of smooth functions as
analytical objects on manifolds. These are examples of a general class of objects
called tensors which we shall encounter in more generality. The starting point is pure
linear algebra.
In fact, it is the properties of the vector space V ⊗ W which are more important
than what it is (and after all what is a real number? Do we always think of it as an
equivalence class of Cauchy sequences of rationals?).
Proposition 5.1 The tensor product V ⊗ W has the universal property that if B : V × W → U is a bilinear map to a vector space U then there is a unique linear map
β : V ⊗ W → U
such that B(v, w) = β(v ⊗ w).
There are various ways to define V ⊗ W . In the finite-dimensional case we can say
that V ⊗ W is the dual space of the space of bilinear forms on V × W : i.e. maps
B : V × W → R such that
B(λv_1 + µv_2, w) = λB(v_1, w) + µB(v_2, w),
B(v, λw_1 + µw_2) = λB(v, w_1) + µB(v, w_2).
Given v ∈ V and w ∈ W we then define v ⊗ w ∈ V ⊗ W as the map
(v ⊗ w)(B) = B(v, w).
A bilinear form B is uniquely determined by its values B(v_i, w_j) on basis vectors v_1, . . . , v_m for V and w_1, . . . , w_n for W, which means the dimension of the vector space of bilinear forms is mn, as is that of its dual space V ⊗ W. In fact, we can easily see that the mn vectors
v_i ⊗ w_j
form a basis for V ⊗ W. It is important to remember, though, that a typical element of V ⊗ W cannot in general be written as a single product v ⊗ w, but only as a sum
Σ_{i,j} a_{ij} v_i ⊗ w_j.
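To make this concrete, here is a small numerical sketch in Python (an illustration of ours, not part of the notes): identifying V and W with R^m and R^n, the basis elements v_i ⊗ w_j become the elementary rank-one matrices, a general element of V ⊗ W is a sum of them, and a bilinear form B factors through the linear map β of Proposition 5.1.

import numpy as np

m, n = 2, 3
e = np.eye(m)          # basis v_1, ..., v_m of V = R^m
f = np.eye(n)          # basis w_1, ..., w_n of W = R^n

# v ⊗ w realised as the rank-one matrix v w^T; then (v ⊗ w)(B) = B(v, w) = v^T B w
# for a bilinear form B given by an m×n matrix.
def tensor(v, w):
    return np.outer(v, w)

# A generic element of V ⊗ W is a *sum* of such products: sum_{i,j} a_ij v_i ⊗ w_j.
a = np.random.randn(m, n)
T = sum(a[i, j] * tensor(e[i], f[j]) for i in range(m) for j in range(n))
assert np.allclose(T, a)    # the a_ij are the coordinates in the basis v_i ⊗ w_j

# Universal property: the bilinear map B factors through the linear map
# beta(T) = sum_{i,j} T_ij B_ij on V ⊗ W, i.e. B(v, w) = beta(v ⊗ w).
B = np.random.randn(m, n)
v, w = np.random.randn(m), np.random.randn(n)
assert np.isclose(v @ B @ w, np.sum(tensor(v, w) * B))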
For the most part we shall be interested in only a quotient of this algebra, called the exterior algebra. A down-to-earth treatment of this is in the b3 Projective Geometry notes on the Mathematical Institute website.
Definition 19 The exterior algebra of V is the quotient
Λ*V = T(V)/I(V).
Proposition 5.3 If dim V = n then dim Λ^n V = 1.

(In the proof, writing w_1, . . . , w_n in terms of a basis of V as the columns of a matrix M, the alternating n-multilinear form B(w_1, . . . , w_n) = det M spans the space of such forms, whose dual is Λ^n V.)

Proposition 5.4 Let v_1, . . . , v_n be a basis for V. Then the n!/(p!(n − p)!) elements v_{i_1} ∧ v_{i_2} ∧ . . . ∧ v_{i_p} for i_1 < i_2 < . . . < i_p form a basis for Λ^p V.
Proof: By reordering and changing the sign we can get any exterior product of the
vi ’s so these elements clearly span Λp V . Suppose then that
Σ a_{i_1 ... i_p} v_{i_1} ∧ v_{i_2} ∧ . . . ∧ v_{i_p} = 0.
Because i1 < i2 < . . . < ip , each term is uniquely indexed by the subset {i1 , i2 , . . . , ip } =
I ⊆ {1, 2, . . . , n}, and we can write
Σ_I a_I v_I = 0.    (8)
Proof: If v is linearly dependent on v_1, . . . , v_p then v = Σ_i a_i v_i, and expanding
v_1 ∧ v_2 ∧ . . . ∧ v_p ∧ v = v_1 ∧ v_2 ∧ . . . ∧ v_p ∧ (Σ_{i=1}^p a_i v_i)
gives terms with repeated vi , which therefore vanish. If not, then v1 , v2 . . . , vp , v can
be extended to a basis and Proposition 5.4 tells us that the product is non-zero. 2
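As a concrete illustration (a sketch of ours with a hypothetical helper, not anything from the notes), the coordinates of w_1 ∧ . . . ∧ w_p in the basis of Proposition 5.4 for V = R^n are the p × p minors of the matrix whose columns are the w_i, indexed by the subsets i_1 < . . . < i_p; the wedge product vanishes exactly when the vectors are linearly dependent.

import numpy as np
from itertools import combinations

def wedge_coords(W):
    """W is n×p; return the coordinates of w_1 ∧ ... ∧ w_p in Λ^p R^n,
    one p×p minor for each subset i_1 < ... < i_p of rows."""
    n, p = W.shape
    return {I: np.linalg.det(W[list(I), :]) for I in combinations(range(n), p)}

W = np.random.randn(4, 2)                      # two generic vectors in R^4
coords = wedge_coords(W)
assert len(coords) == 6                        # dim Λ^2 R^4 = 4!/(2!2!) = 6

# Appending a vector lying in span(w_1, w_2) makes w_1 ∧ w_2 ∧ v vanish.
v = 2 * W[:, 0] - 3 * W[:, 1]
W3 = np.column_stack([W, v])
assert all(abs(c) < 1e-9 for c in wedge_coords(W3).values())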
actually defines what Λ^p A is on basis vectors but doesn't prove it is independent of the choice of basis. But the universal property of tensor products gives us
⊗^p A : ⊗^p V → ⊗^p W
But
Av_i = Σ_j A_{ji} v_j
and so
Av_1 ∧ Av_2 ∧ . . . ∧ Av_n = Σ A_{j_1 1} v_{j_1} ∧ A_{j_2 2} v_{j_2} ∧ . . . ∧ A_{j_n n} v_{j_n}
                          = Σ_{σ∈S_n} A_{σ(1)1} v_{σ(1)} ∧ A_{σ(2)2} v_{σ(2)} ∧ . . . ∧ A_{σ(n)n} v_{σ(n)}
where the sum runs over all permutations σ: any term in which two of the j_k coincide has a repeated factor and so vanishes. But a transposition changes the sign of the term v_{σ(1)} ∧ v_{σ(2)} ∧ . . . ∧ v_{σ(n)}, so
Av_1 ∧ Av_2 ∧ . . . ∧ Av_n = (Σ_{σ∈S_n} sgn σ A_{σ(1)1} A_{σ(2)2} . . . A_{σ(n)n}) v_1 ∧ . . . ∧ v_n,
and the scalar factor on the right is the permutation expansion of det A.
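Here is a quick Python check (assuming V = R^n with the standard basis; an illustration, not part of the notes) that the signed permutation sum just obtained is the familiar determinant.

import numpy as np
from itertools import permutations

def sign(perm):
    # sign of a permutation, counted by inversions
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det_by_permutations(A):
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[p[i], i] for i in range(n)])
               for p in permutations(range(n)))

A = np.random.randn(4, 4)
assert np.isclose(det_by_permutations(A), np.linalg.det(A))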
6 Differential forms
for i_1 < i_2 < . . . < i_p form a basis for Λ^p T_x^*. The n!/(p!(n − p)!) coefficients of α ∈ Λ^p T_x^* then give local coordinates for the bundle Λ^p T^*.
When p = 1 this is just the coordinate chart we used for the cotangent bundle:
Φ_U(x, Σ_i y_i dx_i) = (x_1, . . . , x_n, y_1, . . . , y_n)
For the p-th exterior power we need to replace the Jacobian matrix
J = (∂x̃_i/∂x_j)
by its induced action on Λ^p.
Examples:
1. A zero-form is a section of Λ^0 T^*, which by convention is just a smooth function f.
2. A 1-form is a section of the cotangent bundle T^*. From our definition of the derivative of a function, it is clear that df is an example of a 1-form. We can write
in a coordinate system
df = Σ_j (∂f/∂x_j) dx_j.
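A short sympy sketch of this formula (an illustration of ours, with an arbitrary choice of f): the coefficients of df in the basis dx_j are just the partial derivatives of f.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 * sp.sin(x2) + sp.exp(x3)

df_coeffs = [sp.diff(f, v) for v in (x1, x2, x3)]   # coefficients of dx1, dx2, dx3
print(df_coeffs)   # [2*x1*sin(x2), x1**2*cos(x2), exp(x3)]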
By using a bump function we can extend a locally-defined p-form like dx1 ∧ dx2 ∧
. . . ∧ dxp to the whole of M , so sections always exist. In fact, it will be convenient
at various points to show that any function, form, or vector field can be written as a
sum of these local ones. This involves the concept of partition of unity.
• ϕi ≥ 0
• {supp ϕi : i ∈ I} is locally finite
• Σ_i ϕ_i = 1
Here locally finite means that for each x ∈ M there is a neighbourhood U which
intersects only finitely many supports supp ϕi .
Theorem 6.1 Given any open covering {Vα } of a manifold M there exists a partition
of unity {ϕi } on M such that supp ϕi ⊂ Vα(i) for some α(i).
Here let us just note that in the case when M is compact, life is much easier: for each point x ∈ M we take a coordinate neighbourhood U_x contained in some V_α and a bump function which is 1 on a neighbourhood V_x of x and whose support lies in U_x. Compactness says we can extract a finite subcovering of the {V_x}_{x∈M}, and so we get smooth functions ψ_i ≥ 0 for i = 1, . . . , N, each equal to 1 on V_{x_i}. In particular the sum is positive, and defining
ϕ_i = ψ_i / Σ_{j=1}^N ψ_j
gives the partition of unity.
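The compact-case construction is easy to imitate numerically. The sketch below is a one-dimensional toy version with our own choice of bump profile and centres (it uses plain bumps rather than bumps equal to 1 near the centre, but the normalisation step ϕ_i = ψ_i / Σ_j ψ_j is the same) and produces non-negative smooth functions summing to 1.

import numpy as np

def bump(t, c, r):
    # smooth bump built from exp(-1/(1 - s^2)): positive on (c - r, c + r), zero outside
    s = (t - c) / r
    out = np.zeros_like(t)
    inside = np.abs(s) < 1
    out[inside] = np.exp(-1.0 / (1.0 - s[inside] ** 2))
    return out

t = np.linspace(0, 3, 601)                                # a compact interval standing in for M
psis = [bump(t, c, 1.0) for c in (0.0, 1.0, 2.0, 3.0)]    # psi_i >= 0, supports cover [0, 3]
total = sum(psis)
assert np.all(total > 0)                                  # the sum is positive everywhere
phis = [psi / total for psi in psis]                      # phi_i = psi_i / sum_j psi_j
assert np.allclose(sum(phis), 1.0)                        # a partition of unity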
Now, not only can we create global p-forms by taking local ones, multiplying by ϕi
and extending by zero, but conversely if α is any p-form, we can write it as
α = (Σ_i ϕ_i) α = Σ_i (ϕ_i α)
At this point, it may not be clear why we insist on introducing these complicated
exterior algebra objects, but there are two motivations. One is that the algebraic
theory of determinants is, as we have seen, part of exterior algebra, and multiple
integrals involve determinants. We shall later be able to integrate p-forms over p-
dimensional manifolds.
The other is the appearance of the skew-symmetric cross product in ordinary three-
dimensional calculus, giving rise to the curl differential operator taking vector fields
to vector fields. As we shall see, to do this in a coordinate-free way, and in all
dimensions, we have to dispense with vector fields and work with differential forms
instead.
6.3 Working with differential forms
We defined a differential form in Definition 21 as a section of a vector bundle. In a
local coordinate system it looks like this:
α = Σ_{i_1<i_2<...<i_p} a_{i_1 i_2 ... i_p}(x) dx_{i_1} ∧ dx_{i_2} ∧ . . . ∧ dx_{i_p}    (9)
where the coefficients are smooth functions. If x(y) is a different coordinate system,
then we write the derivatives
dx_{i_k} = Σ_j (∂x_{i_k}/∂y_j) dy_j
Example: Let M = R² and consider the 2-form ω = dx1 ∧ dx2. Now change to polar coordinates on the open set (x1, x2) ≠ (0, 0):
x1 = r cos θ,  x2 = r sin θ.
We have
dx1 = cos θ dr − r sin θ dθ,  dx2 = sin θ dr + r cos θ dθ
so that
ω = (cos θ dr − r sin θ dθ) ∧ (sin θ dr + r cos θ dθ) = r dr ∧ dθ.
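A sympy check of this example (ours, not from the notes): pulling dx1 ∧ dx2 back through (r, θ) ↦ (r cos θ, r sin θ) multiplies it by the Jacobian determinant, which is r.

import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x1, x2 = r * sp.cos(theta), r * sp.sin(theta)

J = sp.Matrix([[sp.diff(x1, r), sp.diff(x1, theta)],
               [sp.diff(x2, r), sp.diff(x2, theta)]])
print(sp.simplify(J.det()))   # r, i.e. dx1 ∧ dx2 = r dr ∧ dθ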
The derivative of this gives at each point x ∈ M a linear map
DF_x : T_x M → T_{F(x)} N
but if we have a section of the tangent bundle TM – a vector field X – then DF_x(X_x) doesn't in general define a vector field on N – it doesn't tell us what to choose in T_a N if a ∈ N is not in the image of F.
On the other hand suppose α is a section of Λ^p T^*N – a p-form on N. Then the dual map
DF_x′ : T*_{F(x)} N → T*_x M
defines
Λ^p(DF_x′) : Λ^p T*_{F(x)} N → Λ^p T*_x M
and then
Λ^p(DF_x′)(α_{F(x)})
is defined for all x and is a section of Λ^p T^*M – a p-form on M.
Examples:
1. The pull-back of a 0-form f ∈ C ∞ (N ) is just the composition f ◦ F .
2. Let F : R³ → R² be given by
F(x1, x2, x3) = (x1 x2, x2 + x3)
and take
α = x dx ∧ dy.
Then
F*α = (x ◦ F) d(x ◦ F) ∧ d(y ◦ F)
    = x1 x2 d(x1 x2) ∧ d(x2 + x3)
    = x1 x2 (x1 dx2 + x2 dx1) ∧ d(x2 + x3)
    = x1^2 x2 dx2 ∧ dx3 + x1 x2^2 dx1 ∧ dx2 + x1 x2^2 dx1 ∧ dx3
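This computation can be verified in sympy (a sketch of ours): the coefficient of dx_i ∧ dx_j in F*α is (x ◦ F) times the 2 × 2 Jacobian minor ∂(x ◦ F, y ◦ F)/∂(x_i, x_j).

import sympy as sp
from itertools import combinations

x1, x2, x3 = sp.symbols('x1 x2 x3')
u, v = x1 * x2, x2 + x3            # u = x ◦ F, v = y ◦ F

coeff = {}
for i, j in combinations(range(3), 2):
    xi, xj = (x1, x2, x3)[i], (x1, x2, x3)[j]
    minor = sp.diff(u, xi) * sp.diff(v, xj) - sp.diff(u, xj) * sp.diff(v, xi)
    coeff[(i, j)] = sp.expand(u * minor)    # (x ◦ F) = u multiplies the minor

print(coeff)
# {(0, 1): x1*x2**2, (0, 2): x1*x2**2, (1, 2): x1**2*x2}, i.e.
# F*α = x1 x2^2 dx1 ∧ dx2 + x1 x2^2 dx1 ∧ dx3 + x1^2 x2 dx2 ∧ dx3, as above.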
From the algebraic properties of the maps
Λ^p A : Λ^p V → Λ^p V
we have the following properties of the pull-back:
• (F ◦ G)∗ α = G∗ (F ∗ α)
• F ∗ (α + β) = F ∗ α + F ∗ β
• F ∗ (α ∧ β) = F ∗ α ∧ F ∗ β
d : Ω^p(M) → Ω^{p+1}(M)
2. d² = 0
3. d(α ∧ β) = dα ∧ β + (−1)^p α ∧ dβ if α ∈ Ω^p(M)
Examples: Before proving the theorem, let’s look at M = R3 , following the rules
of the theorem, to see d in all cases p = 0, 1, 2.
p = 0: by definition
df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 + (∂f/∂x3) dx3
which we normally would write as grad f .
p = 1: take a 1-form
α = a1 dx1 + a2 dx2 + a3 dx3
then applying the rules we have
d(a1 dx1 + a2 dx2 + a3 dx3) = da1 ∧ dx1 + da2 ∧ dx2 + da3 ∧ dx3
  = (∂a1/∂x1 dx1 + ∂a1/∂x2 dx2 + ∂a1/∂x3 dx3) ∧ dx1 + . . .
  = (∂a1/∂x3 − ∂a3/∂x1) dx3 ∧ dx1 + (∂a2/∂x1 − ∂a1/∂x2) dx1 ∧ dx2 + (∂a3/∂x2 − ∂a2/∂x3) dx2 ∧ dx3.
The coefficients of this define what we would call the curl of the vector field a but
a has now become a 1-form α and not a vector field and dα is a 2-form, not a
vector field. The geometrical interpretation has changed. Note nevertheless that the
invariant statement d² = 0 is equivalent to curl grad f = 0.
p = 2: now we have a 2-form
β = b1 dx2 ∧ dx3 + b2 dx3 ∧ dx1 + b3 dx1 ∧ dx2
and
dβ = ∂b1/∂x1 dx1 ∧ dx2 ∧ dx3 + ∂b2/∂x2 dx1 ∧ dx2 ∧ dx3 + ∂b3/∂x3 dx1 ∧ dx2 ∧ dx3
   = (∂b1/∂x1 + ∂b2/∂x2 + ∂b3/∂x3) dx1 ∧ dx2 ∧ dx3
which would be the divergence of a vector field b but in our case is applied to a 2-form
β. Again d² = 0 is equivalent to div curl b = 0.
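Translating the three cases into sympy (an illustration, not part of the notes), d² = 0 becomes exactly the two classical identities curl grad f = 0 and div curl b = 0.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

def grad(f):                     # coefficients of df
    return [sp.diff(f, xi) for xi in X]

def curl(a):                     # coefficients of dα for α = a1 dx1 + a2 dx2 + a3 dx3
    return [sp.diff(a[2], x2) - sp.diff(a[1], x3),
            sp.diff(a[0], x3) - sp.diff(a[2], x1),
            sp.diff(a[1], x1) - sp.diff(a[0], x2)]

def div(b):                      # coefficient of dβ for β = b1 dx2∧dx3 + b2 dx3∧dx1 + b3 dx1∧dx2
    return sum(sp.diff(b[i], X[i]) for i in range(3))

f = sp.Function('f')(x1, x2, x3)
a = [sp.Function(name)(x1, x2, x3) for name in ('a1', 'a2', 'a3')]

assert all(sp.simplify(c) == 0 for c in curl(grad(f)))   # d² = 0 on 0-forms
assert sp.simplify(div(curl(a))) == 0                    # d² = 0 on 1-forms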
Here we see familiar formulas, but acting on unfamiliar objects. The fact that we can
pull differential forms around by smooth maps will give us a lot more power, even in
three dimensions, than if we always considered these things as vector fields.
Let us now return to Theorem 6.2 and give its proof.
and define
dα = Σ_{i_1<i_2<...<i_p} da_{i_1 i_2 ... i_p} ∧ dx_{i_1} ∧ dx_{i_2} ∧ . . . ∧ dx_{i_p}.
When p = 0, this is just the derivative, so the first property of the theorem holds.
The term
∂²a_{i_1 i_2 ... i_p} / ∂x_j ∂x_k
is symmetric in j, k but it multiplies dx_k ∧ dx_j in the formula, which is skew-symmetric in j and k, so the expression vanishes identically and d²α = 0 as required.
So, using one coordinate system we have defined an operation d which satisfies the
three conditions of the theorem. Now represent α in coordinates y1 , . . . , yn :
α = Σ_{i_1<i_2<...<i_p} b_{i_1 i_2 ... i_p} dy_{i_1} ∧ dy_{i_2} ∧ . . . ∧ dy_{i_p}
and define in the same way
d′α = Σ_{i_1<i_2<...<i_p} db_{i_1 i_2 ... i_p} ∧ dy_{i_1} ∧ dy_{i_2} ∧ . . . ∧ dy_{i_p}.
From (1) and (2), d²y_{i_1} = 0, and continuing similarly with the right hand term, we get zero in all terms.
Thus on each coordinate neighbourhood U,
dα = Σ_{i_1<i_2<...<i_p} db_{i_1 i_2 ... i_p} ∧ dy_{i_1} ∧ dy_{i_2} ∧ . . . ∧ dy_{i_p} = d′α,
and dα is thus globally well-defined. 2
d(F*α) = F*(dα).
Proof: Recall that the derivative DF_x : T_x M → T_{F(x)} N was defined in (11) by
Now if
α = Σ_{i_1<i_2<...<i_p} a_{i_1 i_2 ... i_p}(x) dx_{i_1} ∧ dx_{i_2} ∧ . . . ∧ dx_{i_p},
F*α = Σ_{i_1<i_2<...<i_p} a_{i_1 i_2 ... i_p}(F(x)) F*dx_{i_1} ∧ F*dx_{i_2} ∧ . . . ∧ F*dx_{i_p}
by the multiplicative property of pull-back and then using the properties of d and
(10)
d(F*α) = Σ_{i_1<i_2<...<i_p} d(a_{i_1 i_2 ... i_p}(F(x))) ∧ F*dx_{i_1} ∧ F*dx_{i_2} ∧ . . . ∧ F*dx_{i_p}
       = Σ_{i_1<i_2<...<i_p} F*da_{i_1 i_2 ... i_p} ∧ F*dx_{i_1} ∧ F*dx_{i_2} ∧ . . . ∧ F*dx_{i_p}
       = F*(dα).
i(X) : Ω^p(M) → Ω^{p−1}(M)
• i(X)df = X(f )
The proposition tells us exactly how to work out an interior product: if
X = Σ_i a_i ∂/∂x_i,
In particular
Example: Suppose
α = dx ∧ dy,    X = x ∂/∂x + y ∂/∂y
then
i(X)α = xdy − ydx.
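Here is a direct check of this example (a sketch of ours, using the identification of dx ∧ dy with the alternating form dx ⊗ dy − dy ⊗ dx): represent the 2-form at a point by its antisymmetric coefficient matrix and contract with X in the first slot.

import sympy as sp

x, y = sp.symbols('x y')
A = sp.Matrix([[0, 1], [-1, 0]])   # α = dx ∧ dy: coefficient of dx⊗dy is 1, of dy⊗dx is −1
X = sp.Matrix([x, y])              # X = x ∂/∂x + y ∂/∂y

contraction = X.T * A              # row vector of coefficients of (dx, dy)
print(contraction)                 # Matrix([[-y, x]]), i.e. i(X)α = x dy − y dx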
The interior product is just a linear algebra construction. Above we have seen how
to work it out when we write down a form as a sum of basis vectors. We just need to
prove that it is well-defined and independent of the way we do that, which motivates
the following more abstract proof:
Proof: In Remark 5.1 we defined Λp V as the dual space of the space of alternating
p-multilinear forms on V . If M is an alternating (p − 1)-multilinear form on V and
ξ a linear form on V then
(i(ξ)α)(M ) = α(ξM ).
L_X α = d(i(X)α) + i(X)dα.
Proof: Consider the right hand side: writing R_X α = d(i(X)α) + i(X)dα,
we have
R_X(α ∧ β) = (R_X α) ∧ β + α ∧ R_X β.
On the other hand
ϕ_t^*(dα) = d(ϕ_t^* α)
so differentiating at t = 0, we get
L_X dα = d(L_X α)
and
ϕ_t^*(α ∧ β) = ϕ_t^* α ∧ ϕ_t^* β
and differentiating this, we have
L_X(α ∧ β) = L_X α ∧ β + α ∧ L_X β.
Thus both L_X and R_X preserve degree, commute with d and satisfy the same Leibniz identity. Hence, if we write a p-form as
α = Σ_{i_1<i_2<...<i_p} a_{i_1 i_2 ... i_p}(x) dx_{i_1} ∧ dx_{i_2} ∧ . . . ∧ dx_{i_p}
so they do agree. 2
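The formula can also be checked on the example α = dx ∧ dy, X = x ∂/∂x + y ∂/∂y used earlier (a small sympy sketch of ours; the fact that the flow of X is φ_t(x, y) = (e^t x, e^t y) is an elementary computation, not something taken from the notes).

import sympy as sp

x, y, t = sp.symbols('x y t')

# Right hand side: dα = 0 and i(X)α = x dy − y dx = P dx + Q dy, so
# d(i(X)α) + i(X)dα has dx ∧ dy coefficient ∂Q/∂x − ∂P/∂y.
P, Q = -y, x
rhs_coeff = sp.diff(Q, x) - sp.diff(P, y)              # = 2

# Left hand side: φ_t*(dx ∧ dy) = d(e^t x) ∧ d(e^t y) = e^{2t} dx ∧ dy,
# and L_X α is the t-derivative of the coefficient at t = 0.
lhs_coeff = sp.diff(sp.exp(2 * t), t).subs(t, 0)       # = 2

assert lhs_coeff == rhs_coeff    # L_X(dx ∧ dy) = 2 dx ∧ dy either way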
6.6 de Rham cohomology
In textbooks on vector calculus, you may read not only that curl grad f = 0, but also
that if a vector field a satisfies curl a = 0, then it can be written as a = grad f for
some function f . Sometimes the statement is given with the proviso that the open
set of R3 on which a is defined satisfies the topological condition that it is simply
connected (any closed path can be contracted to a point).
In the language of differential forms on a manifold, the analogue of the above state-
ment would say that if a 1-form α satisfies dα = 0, and M is simply-connected, there
is a function f such that df = α.
While this is true, the criterion of simple connectedness is far too strong. We want
to know when the kernel of
d : Ω^1(M) → Ω^2(M)
is equal to the image of
d : Ω^0(M) → Ω^1(M).
Since d²f = 0, the second vector space is contained in the first and what we shall do
is simply to study the quotient, which becomes a topological object in its own right,
with an algebraic structure which can be used to say many things about the global
topology of a manifold.
Remark:
1. Although we call it the cohomology group, it is simply a real vector space. There
are analogous structures in algebraic topology where the additive group structure is
more interesting.
2. Since there are no forms of degree −1, the group H^0(M) is the space of functions f such that df = 0. Now each connected component M_i of M is an open set of M
and hence a manifold. The mean value theorem tells us that on any open ball in a
coordinate neighbourhood of Mi , df = 0 implies that f is equal to a constant c, and
the subset of Mi on which f = c is open and closed and hence equal to Mi .
Thus if M is connected, the de Rham cohomology group H^0(M) is naturally isomorphic to R: the constant value c of the function f. In general H^0(M) is the vector space of real-valued functions on the set of components. Our assumption that M
has a countable basis of open sets means that there are at most countably many
components. When M is compact, there are only finitely many, since components
provide an open covering. The cohomology groups for all p of a compact manifold
are finite-dimensional vector spaces, though we shall not prove that here.
• H^p(M) = 0 if p > n
F^* : H^p(N) → H^p(M)
For the product, this comes directly from the exterior product of forms. If a = [α], b =
[β] we define
ab = [α ∧ β]
but we need to check that this really does define a cohomology class. Firstly, since
α, β are closed,
d(α ∧ β) = dα ∧ β + (−1)^p α ∧ dβ = 0
so there is a class defined by α and β. Suppose we now choose a different representative α′ = α + dγ for a. Then
α′ ∧ β = (α + dγ) ∧ β = α ∧ β + d(γ ∧ β)
dF*α = F*dα
F*(α ∧ β) = F*α ∧ F*β
Perhaps the most important property of the de Rham cohomology, certainly the one that links it to algebraic topology, is the deformation invariance of the induced maps F^*. We show that if F_t is a smooth family of smooth maps, then the effect on
cohomology is independent of t. As a matter of terminology (because we have only
defined smooth maps of manifolds) we shall say that a map
F : M × [a, b] → N
is smooth if it is the restriction of a smooth map on the product with some slightly bigger open interval M × (a − ε, b + ε).
Theorem 6.7 Let F : M × [0, 1] → N be a smooth map. Set F_t(x) = F(x, t) and consider the induced map on de Rham cohomology F_t^* : H^p(N) → H^p(M). Then
F_1^* = F_0^*.
F*α = β + dt ∧ γ    (13)
where β is a p-form on M (also depending on t) and γ is a (p−1)-form on M , depending
on t. In a coordinate system it is clear how to do this, but more invariantly, the form
β is just Ft∗ α. To get γ in an invariant manner, we can think of
(x, s) ↦ (x, s + t)
So the closed forms F_1^*α and F_0^*α differ by an exact form and so define the same class in H^p(M).
Proposition 6.8 The de Rham cohomology groups of M = Rn are zero for p > 0.
Proof: Consider the smooth map F : R^n × [0, 1] → R^n given by F(x, t) = tx. Then F_1(x) = x is the identity.
But F_0(x) = 0, which is a constant map. In particular its derivative vanishes, so the pull-back of any form of degree greater than zero is the zero map. So for p > 0 the induced map F_0^* on H^p(R^n) vanishes.
From Theorem 6.7, F_0^* = F_1^*, and we deduce that H^p(R^n) vanishes for p > 0. Of course R^n is connected, so H^0(R^n) ≅ R. 2
Example: Show that the previous proposition holds for a star-shaped region in R^n: an open set U with a point a ∈ U such that for each x ∈ U the straight-line segment from a to x lies in U. This is usually called the Poincaré lemma.
We are in no position yet to calculate many other de Rham cohomology groups, but
here is one non-trivial example. Consider the case of R/Z, diffeomorphic to the circle.
In the atlas given earlier, we had ϕ_1 ϕ_0^{-1}(x) = x or ϕ_1 ϕ_0^{-1}(x) = x − 1, so the 1-form
dx = d(x − 1) is well-defined, and nowhere zero. It is not the derivative of a function,
however, since R/Z is compact and any function must have a minimum where df = 0.
We deduce that
H^1(R/Z) ≠ 0.
To get more information we need to study the other aspect of differential forms:
integration.