DIFFERENTIABLE MANIFOLDS
Spring 2009
CONTENTS
I. Manifolds
1. Topological manifolds
2. Differentiable manifolds and differentiable structures
3. Immersions, submersions and embeddings
V. De Rham cohomology
17. Definition of De Rham cohomology
18. Homotopy invariance of cohomology
I. Manifolds
1. Topological manifolds
Basically, an m-dimensional (topological) manifold is a topological space M which is locally homeomorphic to R^m. A more precise definition is:
[Figure: a chart; an open neighborhood U of p in M mapped homeomorphically onto an open subset of R^m.]
A neighborhood of p is a set containing an open neighborhood of p. Here we will always assume that a neighborhood is an open set.
[Figure: overlapping charts (U_i, ϕ_i) and (U_j, ϕ_j) on M, with transition map ϕ_{ij} = ϕ_j ∘ ϕ_i^{-1} between V_i ⊂ R^m and V_j ⊂ R^m.]
and the domain is V_1 = (−1, 1). It is clear that ϕ_i and ϕ_i^{-1} are continuous, and therefore the maps ϕ_i are homeomorphisms. With these choices we have found an atlas for S^1 consisting of four charts.
(b): (Stereographic projection) Consider the two charts
U_1 = S^1 \ {(0, 1)}, and U_2 = S^1 \ {(0, −1)}.
The coordinate mappings are given by
ϕ_1(p) = 2p_1/(1 − p_2), and ϕ_2(p) = 2p_1/(1 + p_2),
which are continuous maps from U_i to R. For example,
ϕ_1^{-1}(x) = ( 4x/(x^2 + 4), (x^2 − 4)/(x^2 + 4) ),
which is continuous from R to U_1.
[Figure: stereographic projection of S^1 from the north pole N_p and the south pole S_p; a point (p_1, p_2) is mapped to x, and (p_1′, p_2′) to x′.]
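Since this example is purely computational, the formulas for ϕ_1 and its inverse can be checked symbolically. The following is a minimal sketch, assuming sympy is available; the names phi1 and phi1_inv are ad hoc and not from the notes.

```python
# Check that phi1_inv(x) lands on S^1 and inverts phi1 (stereographic projection).
import sympy as sp

x, p1, p2 = sp.symbols('x p1 p2', real=True)

phi1 = 2*p1 / (1 - p2)                               # projection from the north pole (0, 1)
phi1_inv = (4*x/(x**2 + 4), (x**2 - 4)/(x**2 + 4))

# The image of phi1_inv lies on the unit circle:
assert sp.simplify(phi1_inv[0]**2 + phi1_inv[1]**2 - 1) == 0

# phi1 composed with phi1_inv is the identity on R:
assert sp.simplify(phi1.subs({p1: phi1_inv[0], p2: phi1_inv[1]})) == x
print("phi1_inv parametrizes U_1 and inverts phi1")
```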
(c): (Polar coordinates) Consider the two charts U_1 = S^1 \ {(1, 0)}, and U_2 = S^1 \ {(−1, 0)}. The homeomorphisms are ϕ_1(p) = θ ∈ (0, 2π) (polar angle, counterclockwise rotation), and ϕ_2(p) = θ ∈ (−π, π) (complex argument). For example,
ϕ_1^{-1}(θ) = (cos(θ), sin(θ)). I
J 1.10 Example. M = PR^n, the real projective spaces. Consider the following equivalence relation on points in R^{n+1}\{0}: for any two x, y ∈ R^{n+1}\{0} define x ∼ y if and only if y = λx for some λ ∈ R\{0}. Define PR^n = {[x] : x ∈ R^{n+1}\{0}} as the set of equivalence classes. One can think of PR^n as the set of lines through the origin in R^{n+1}. Consider the natural map π : R^{n+1}\{0} → PR^n, via π(x) = [x]. A set U ⊂ PR^n is open if π^{-1}(U) is open in R^{n+1}\{0}. This makes PR^n a compact Hausdorff space. In order to verify that we are dealing with an n-dimensional manifold we need to describe an atlas for PR^n. For i = 1, · · · , n + 1, define V_i ⊂ R^{n+1}\{0} as the set of points x for which x_i ≠ 0, and define U_i = π(V_i). Furthermore, for any [x] ∈ U_i define
ϕ_i([x]) = ( x_1/x_i, · · · , x_{i−1}/x_i, x_{i+1}/x_i, · · · , x_{n+1}/x_i ),
which is continuous, and its continuous inverse is given by
ϕ_i^{-1}(z_1, · · · , z_n) = (z_1, · · · , z_{i−1}, 1, z_i, · · · , z_n).
The examples 1.6 and 1.8 above are special in the sense that they are subsets of
some Rm , which, as topological spaces, are given a manifold structure.
Define
Hm = {(x1 , · · · , xm ) | xm ≥ 0},
as the standard Euclidean half-space.
So far we have seen manifolds and manifolds with boundary. A manifold can
be either compact or non-compact, which we refer to as closed, or open manifolds
respectively. Manifolds with boundary are also either compact, or non-compact. In
both cases the boundary can be compact. Open subsets of a topological manifold
are called open submanifolds, and can be given a manifold structure again.
Let N and M be manifolds, and let f : N → M be a continuous mapping. A
mapping f is called a homeomorphism between N and M if f is continuous and
has a continuous inverse f −1 : M → N. In this case the manifolds N and M are said
to be homeomorphic. Using charts (U, ϕ), and (V, ψ) for N and M respectively,
we can give a coordinate expression for f , i.e. f˜ = ψ ◦ f ◦ ϕ−1 .
Recall the subspace topology. Let X be a topological space and let S ⊂ X be
any subset, then the subspace, or relative topology on S (induced by the topology
on X) is defined as follows. A subset U ⊂ S is open if there exists an open set
V ⊂ X such that U = V ∩ S. In this case S is called a (topological) subspace of X.
A (topological) embedding is a continuous injective mapping f : X → Y , which is
a homeomorphism onto its image f (X) ⊂ Y with respect to the subspace topology.
Let f : N → M be an embedding, then its image f (N) ⊂ M is called a submanifold
of M. Notice that an open submanifold is the special case when f = i : U ,→ M is
an inclusion mapping.
ϕ_{ij} = ϕ_j ∘ ϕ_i^{-1} : ϕ_i(U_i ∩ U_j) → ϕ_j(U_i ∩ U_j),
x ≠ 0, ±∞, and y ≠ 0, ±∞. Clearly, ϕ_{12} and its inverse are differentiable functions from ϕ_1(U_1 ∩ U_2) = R\{0} to ϕ_2(U_1 ∩ U_2) = R\{0}. I
J 2.9 Example. The real projective spaces PRn , see exercises Chapter VI. I
J 2.10 Example. The generalizations of projective spaces PRn , the so-called (k, n)-
Grassmannians Gk Rn are examples of smooth manifolds. I
[Figure: the two stereographic charts on S^1, x = ϕ_1(p) from the north pole N_p and y = ϕ_2(p) from the south pole S_p, with transition map ϕ_{12}.]
Theorem 2.11. Given a set M, a collection {U_α}_{α∈A} of subsets, and injective mappings ϕ_α : U_α → R^m, such that the following conditions are satisfied:
(i) ϕ_α(U_α) ⊂ R^m is open for all α;
(ii) ϕ_α(U_α ∩ U_β) and ϕ_β(U_α ∩ U_β) are open in R^m for any pair α, β ∈ A;
(iii) for U_α ∩ U_β ≠ ∅, the mappings ϕ_α ∘ ϕ_β^{-1} : ϕ_β(U_α ∩ U_β) → ϕ_α(U_α ∩ U_β) are diffeomorphisms for any pair α, β ∈ A;
(iv) countably many sets U_α cover M;
(v) for any pair p ≠ q ∈ M, either p, q ∈ U_α, or there are disjoint sets U_α, U_β such that p ∈ U_α and q ∈ U_β.
Then M has a unique differentiable manifold structure and (U_α, ϕ_α) are smooth charts.
Proof: Let us give a sketch of the proof. Let the sets ϕ_α^{-1}(V), V ⊂ R^m open, form a basis for a topology on M. Indeed, if p ∈ ϕ_α^{-1}(V) ∩ ϕ_β^{-1}(W) then the latter is again of the same form by (ii) and (iii). Combining this with (i) and (iv) we establish a topological manifold over R^m. Finally from (iii) we derive that {(U_α, ϕ_α)} is a smooth atlas.
The above definition also holds true for mappings defined on open subsets of N, i.e. let W ⊂ N be an open subset, and f : W ⊂ N → M; then smoothness on W is defined in the same way via charts.
[Figure: two overlapping charts (U, ϕ) and (V, ψ) on M with transition maps ϕ̃ = ϕ ∘ ψ^{-1} and ϕ̃^{-1} = ψ ∘ ϕ^{-1} on R^m.]
and a similar expression for V2 . Clearly, f is continuous and the local expressions
also prove that f is a differentiable mapping using Theorem 2.15. I
[Figure: a manifold M ⊂ R^ℓ with overlapping charts (U, ϕ) and (W, ψ); the transition maps ψ ∘ ϕ^{-1} and ϕ ∘ ψ^{-1} relate the coordinate domains V, V′ ⊂ R^m.]
J 2.21 Example. Let us consider the cone M = C described in Example 1 (see also Example 2 in Section 1). We already established that C is a manifold homeomorphic to R^2, and moreover C is a differentiable manifold, whose smooth structure is defined via a one-chart atlas. However, C is not a smooth manifold with respect to the induced smooth structure as a subset of R^3. Indeed, following the definition in the above remark, we have U = C, and coordinate homeomorphism ϕ(p) = (p_1, p_2) = x. By the definition of smooth maps it easily follows that ϕ is smooth. The inverse is given by ϕ^{-1}(x) = ( x_1, x_2, √(x_1^2 + x_2^2) ), which is clearly not differentiable at the cone-top (0, 0, 0). The cone C is not a smoothly embedded submanifold of R^3 (topological embedding). I
J f|_x = ( ∂f_i/∂x_j )_{i=1,···,k; j=1,···,n},   Jg|_y = ( ∂g_i/∂y_j )_{i=1,···,m; j=1,···,k},
and J(g ∘ f)|_x = Jg|_{y=f(x)} · J f|_x (chain rule). The commutative diagram for the maps f, g and g ∘ f yields a commutative diagram for the Jacobians:
U --f--> V --g--> W, with composite g ∘ f : U → W, and correspondingly
R^n --J f|_x--> R^k --Jg|_y--> R^m, with composite J(g ∘ f)|_x : R^n → R^m.
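The chain rule for Jacobians can be verified symbolically. The following is a small sketch, assuming sympy; the maps f and g are ad hoc examples, not taken from the notes.

```python
# Verify J(g∘f)|_x = Jg|_{y=f(x)} · Jf|_x for example maps f: R^2 -> R^3, g: R^3 -> R^2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
y1, y2, y3 = sp.symbols('y1 y2 y3', real=True)

f = sp.Matrix([x1*x2, sp.sin(x1), x2**2])    # example f
g = sp.Matrix([y1 + y3, y2*y3])              # example g

Jf = f.jacobian([x1, x2])
Jg = g.jacobian([y1, y2, y3])

subs_map = dict(zip([y1, y2, y3], list(f)))
J_composite = g.subs(subs_map).jacobian([x1, x2])   # J(g∘f)|_x computed directly
J_chain = Jg.subs(subs_map) * Jf                    # Jg|_{y=f(x)} · Jf|_x

assert sp.simplify(J_composite - J_chain) == sp.zeros(2, 2)
print("chain rule verified")
```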
This definition is independent of the chosen charts, see Figure 12. Via the commutative diagram in Figure 12 we see that f̃′ = (ψ′ ∘ ψ^{-1}) ∘ f̃ ∘ (ϕ′ ∘ ϕ^{-1})^{-1}, and by the chain rule J f̃′|_{x′} = J(ψ′ ∘ ψ^{-1})|_y · J f̃|_x · J(ϕ′ ∘ ϕ^{-1})^{-1}|_{x′}. Since ψ′ ∘ ψ^{-1} and ϕ′ ∘ ϕ^{-1} are diffeomorphisms it easily follows that rk(J f̃)|_x = rk(J f̃′)|_{x′}, which shows that our notion of rank is well-defined. If a map has constant rank for all p ∈ N we simply write rk(f). These are called constant rank mappings. Let us now consider the various types of constant rank mappings between manifolds.
J 3.6 Example. Let N = S^1 be defined via the atlas A = {(U_i, ϕ_i)}, with ϕ_1^{-1}(t) = (cos(t), sin(t)), and ϕ_2^{-1}(t) = (sin(t), cos(t)), and t ∈ (−π/2, 3π/2). Furthermore, let M = R^2, and the mapping f : N → M is given in local coordinates; in U_1 as in Example 1. Restricted to S^1 ⊂ R^2 the map f can also be described by
f(x, y) = (2xy, x).
This then yields for U_2 that f̃(t) = (sin(2t), sin(t)). As before rk(f) = 1, which shows that f is an immersion of S^1. However, this immersion is not injective at the origin in R^2, indicating the subtle differences between these two examples, see Figures 13 and 14. I
J 3.14 Example. Let N = M = R^2 and consider the mapping f(x, y) = (x^2, y)^t. The Jacobian is
J f|_{(x,y)} = ( 2x 0 ; 0 1 ),
and rk(J f)|_{(x,y)} = 2 for x ≠ 0, and rk(J f)|_{(x,y)} = 1 for x = 0. See Figure 18. I
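As a small illustration (assuming sympy; the check is ad hoc and not part of the notes), the rank drop in Example 3.14 can be computed directly:

```python
# The rank of Jf drops from 2 to 1 exactly on the line x = 0.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Matrix([x**2, y])
Jf = f.jacobian([x, y])

print(Jf.subs(x, 3).rank())   # 2: f is a local diffeomorphism away from x = 0
print(Jf.subs(x, 0).rank())   # 1: the rank drops on the line x = 0
```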
FIGURE 19. Take a k-slice W. On the left the image ϕ(W) corresponds to the Euclidean subspace R^k ⊂ R^m.
To get an idea let us show that N is a topological manifold. Axioms (i) and
(iii) are of course satisfied. Consider the projection π : Rm → Rn , and an inclusion
j : Rn → Rm defined by
π(x1 , · · · , xn , xn+1 , · · · , xm ) = (x1 , · · · , xn ),
j(x1 , · · · , xn ) = (x1 , · · · , xn , 0, · · · , 0).
Now set Z = (π ◦ ϕ)(W ) ⊂ Rn , and ϕ̄ = (π ◦ ϕ)|W , then ϕ̄−1 = (ϕ−1 ◦ j)|Z , and
ϕ̄ : W → Z is a homeomorphism. Therefore, pairs (W, ϕ̄) are charts for N, which
form an atlas for N. The inclusion i : N ,→ M is a topological embedding.
Given slice charts (U, ϕ) and (U′, ϕ′) and associated charts (W, ϕ̄) and (W′, ϕ̄′) for N, the transition maps satisfy ϕ̄′ ∘ ϕ̄^{-1} = π ∘ ϕ′ ∘ ϕ^{-1} ∘ j, which are diffeomorphisms, and this defines a smooth atlas for N. The inclusion i : N ↪ M can
be expressed in local coordinates;
ĩ = ϕ ∘ i ∘ ϕ̄^{-1},   (x_1, · · · , x_n) ↦ (x_1, · · · , x_n, 0, · · · , 0),
which is an injective immersion. Since i is also a topological embedding it is thus
a smooth embedding. It remains to prove that the smooth structure is unique, see
Lee Theorem 8.2.
Theorem 3.22. 15 The image of an embedding is a smooth embedded submanifold.
Proof: By assumption rk(f) = n and thus for any p ∈ N it follows from Theorem 3.3 that
f̃(x_1, · · · , x_n) = (x_1, · · · , x_n, 0, · · · , 0),
for appropriate coordinates (U, ϕ) for p and (V, ψ) for f(p), with f(U) ⊂ V. Consequently, f(U) is a slice in V, since ψ(f(U)) satisfies Definition 3.19. By assumption f(U) is open in f(N) and thus f(U) = A ∩ f(N) for some open set A ⊂ M. By replacing V by V′ = A ∩ V and restricting ψ to V′, (V′, ψ) is a slice chart with slice V′ ∩ f(N) = V′ ∩ f(U).
Summarizing we conclude that embedded submanifolds are the images of
smooth embeddings.
A famous result by Whitney says that considering embeddings into R^m is not really a restriction for defining smooth manifolds.
Theorem 3.23. 16 Any smooth n-dimensional manifold M can be (smoothly) em-
bedded into R2n+1 .
A subset N ⊂ M is called an immersed submanifold if N is a smooth n-dimensional manifold, and the mapping i : N ↪ M is a (smooth) immersion. This means that we can endow N with an appropriate manifold topology and smooth structure for which the inclusion is an immersion.
15See Lee, Thm’s 8.3
16See Lee, Ch. 10.
g : N = R^n → R^{n−m} × R^m ≅ R^n,
by g(ξ) = (Lξ, f(ξ) − q)^t, where L : N = R^n → R^{n−m} is any linear map which is invertible on the subspace ker J f|_p ⊂ R^n. Clearly, Jg|_p = L ⊕ J f|_p, which, by construction, is an invertible (linear) map on R^n. Applying Theorem 2.22 (Inverse Function Theorem) to g we conclude that a sufficiently small neighborhood U of p maps diffeomorphically onto a neighborhood V of (L(p), 0). Since g is a diffeomorphism it holds that g^{-1} maps (R^{n−m} × {0}) ∩ V onto f^{-1}(q) ∩ U (the 0 ∈ R^m corresponds to q). This says exactly that every point p ∈ S admits an (n − m)-slice and S is therefore an (n − m)-dimensional submanifold in N = R^n (codim S = m).
J 3.30 Example. Let us start with an explicit illustration of the above proof. Let N = R^2, M = R, and f(p_1, p_2) = p_1^2 + p_2^2. Consider the regular value q = 2; then J f|_p = (2p_1  2p_2), and f^{-1}(2) = {p : p_1^2 + p_2^2 = 2}, the circle with radius √2. We have ker J f|_p = span{(p_2, −p_1)^t}, which is always isomorphic to R. For example, fix the point (1, 1) ∈ S; then ker J f|_p = span{(1, −1)^t} and define
g(ξ) = ( L(ξ), f(ξ) − 2 )^t = ( ξ_1 − ξ_2, ξ_1^2 + ξ_2^2 − 2 )^t,   Jg|_p = ( 1 −1 ; 2 2 ),
where the linear map is L = (1 −1). The map g is a local diffeomorphism and on S ∩ U this map is given by
g( ξ_1, √(2 − ξ_1^2) ) = ( ξ_1 − √(2 − ξ_1^2), 0 )^t,
any q > 0 is a regular value since then the rank is equal to 1. Figure 24 shows the level set f^{-1}(0) (not an embedded manifold), and f^{-1}(1) (an embedded circle). I
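The construction in Example 3.30 can be checked symbolically. The sketch below assumes sympy; the variable names are ad hoc.

```python
# At p = (1, 1) on the level set f^{-1}(2), the map g(xi) = (L(xi), f(xi) - 2) has an
# invertible Jacobian, so g is a local diffeomorphism straightening the level set
# into the slice {second coordinate = 0}.
import sympy as sp

xi1, xi2 = sp.symbols('xi1 xi2', real=True)
f = xi1**2 + xi2**2
g = sp.Matrix([xi1 - xi2, f - 2])            # L = (1 -1) is invertible on ker Jf|_(1,1)

Jg = g.jacobian([xi1, xi2])
print(Jg.subs({xi1: 1, xi2: 1}))             # Matrix([[1, -1], [2, 2]])
print(Jg.subs({xi1: 1, xi2: 1}).det())       # 4 != 0: g is a local diffeomorphism

# Points of the level set near (1, 1) are mapped into the slice:
on_level_set = g.subs(xi2, sp.sqrt(2 - xi1**2))
print(sp.simplify(on_level_set[1]))          # 0
```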
4. Tangent spaces
For an (embedded) manifold M ⊂ R^ℓ the tangent space T_pM at a point p ∈ M can be pictured as a hyperplane tangent to M. In Figure 25 we consider the parametrizations x + te_i in R^m. These parametrizations yield curves γ_i(t) = ϕ^{-1}(x + te_i) on M whose velocity vectors are given by
γ_i′(0) = (d/dt)|_{t=0} ϕ^{-1}(x + te_i) = Jϕ^{-1}|_x(e_i).
The vectors p + γ_i′(0) are tangent to M at p and span an m-dimensional affine linear subspace M_p of R^ℓ. Since the vectors Jϕ^{-1}|_x(e_i) span T_pM the affine subspace is given by
M_p := p + T_pM ⊂ R^ℓ,
which is tangent to M at p.
The considerations are rather intuitive in the sense that we consider only embedded manifolds, and so the tangent spaces are tangent m-dimensional affine subspaces of R^ℓ. One can also define the notion of tangent space for abstract smooth
manifolds. There are many ways to do this. Let us describe one possible way (see
e.g. Lee, or Abraham, Marsden and Ratiu) which is based on the above considera-
tions.
Let a < 0 < b and consider a smooth mapping γ : I = (a, b) ⊂ R → M, such that γ(0) = p. This mapping is called a (smooth) curve on M, and is parametrized by t ∈ I. If the mapping γ (between the manifolds N = I and M) is an immersion, then γ is called an immersed curve. For such curves the 'velocity vector' Jγ̃|_t = (ϕ ∘ γ)′(t) in R^m is nowhere zero. We build the concept of tangent spaces in order to define the notion of velocity vector of a curve γ.
Let (U, ϕ) be a chart at p. Then, two curves γ and γ̃ are equivalent, γ̃ ∼ γ, if (ϕ ∘ γ̃)′(0) = (ϕ ∘ γ)′(0). A tangent vector is an equivalence class [γ] of curves through p, which is an element of T_pM.
[Figure: a curve γ on M through p = γ(0), and its coordinate representation ϕ ∘ γ with velocity vector (ϕ ∘ γ)′(0) in R^m.]
The above definition does not depend on the choice of charts at p ∈ M. Let (U′, ϕ′) be another chart at p ∈ M. Then, using that (ϕ ∘ γ̃)′(0) = (ϕ ∘ γ)′(0), for (ϕ′ ∘ γ)′(0) we have
(ϕ′ ∘ γ)′(0) = [ (ϕ′ ∘ ϕ^{-1}) ∘ (ϕ ∘ γ) ]′(0)
            = J(ϕ′ ∘ ϕ^{-1})|_x (ϕ ∘ γ)′(0)
            = J(ϕ′ ∘ ϕ^{-1})|_x (ϕ ∘ γ̃)′(0)
            = [ (ϕ′ ∘ ϕ^{-1}) ∘ (ϕ ∘ γ̃) ]′(0) = (ϕ′ ∘ γ̃)′(0),
which proves that the equivalence relation does not depend on the particular choice of charts at p ∈ M.
One can prove that T_pM ≅ R^m. Indeed, T_pM can be given a linear structure as follows; given two equivalence classes [γ_1] and [γ_2], then
[γ_1] + [γ_2] := { γ : (ϕ ∘ γ)′(0) = (ϕ ∘ γ_1)′(0) + (ϕ ∘ γ_2)′(0) },
and similarly for scalar multiplication. The above argument shows that these operations are well-defined, i.e. independent of the chosen chart at p ∈ M, and the operations yield non-empty equivalence classes. The mapping
τ_ϕ : T_pM → R^m,   τ_ϕ[γ] = (ϕ ∘ γ)′(0),
is a linear isomorphism and τ_{ϕ′} = J(ϕ′ ∘ ϕ^{-1})|_x ∘ τ_ϕ. Indeed, by considering curves γ_i(t) = ϕ^{-1}(x + te_i), i = 1, ..., m, it follows that [γ_i] ≠ [γ_j], i ≠ j, since
(ϕ ∘ γ_i)′(0) = e_i ≠ e_j = (ϕ ∘ γ_j)′(0).
This proves the surjectivity of τ_ϕ. As for injectivity one argues as follows. Suppose (ϕ ∘ γ)′(0) = (ϕ ∘ γ̃)′(0); then by definition [γ] = [γ̃], proving injectivity.
Given a smooth mapping f : N → M we can define how tangent vectors in Tp N
are mapped to tangent vectors in Tq M, with q = f (p). Choose charts (U, ϕ) for
p ∈ N, and (V, ψ) for q ∈ M. We define the tangent map or pushforward of f as
follows, see Figure 27. For a given tangent vector Xp = [γ] ∈ Tp N,
d f p = f∗ : Tp N → Tq M, f∗ ([γ]) = [ f ◦ γ].
The following commutative diagram shows that f_* is a linear map and its definition does not depend on the charts chosen at p ∈ N, or q ∈ M:

    T_pN --f_*--> T_qM
     |τ_ϕ          |τ_ψ
     v             v
    R^n --J(ψ ∘ f ∘ ϕ^{-1})|_x--> R^m
FIGURE 27. Tangent vectors X_p ∈ T_pN yield tangent vectors f_*X_p ∈ T_qM under the pushforward of f.
τ_id ∘ ϕ_* = τ_ϕ,   ϕ_* = τ_id^{-1} ∘ τ_ϕ.
Lemma 4.5. 21 Let f : N → M, and g : M → P be smooth mappings, and let p ∈ N,
then
(i) f∗ : Tp N → T f (p) M, and g∗ : T f (p) M → T(g◦ f )(p) P are linear maps (homo-
morphisms),
(ii) (g ◦ f )∗ = g∗ · f∗ : Tp N → T(g◦ f )(p) P,
(iii) (id)∗ = id : Tp N → Tp N,
(iv) if f is a diffeomorphism, then the pushforward f_* is an isomorphism from T_pN to T_{f(p)}M.
Proof: We have that f∗ ([γ]) = [ f ◦ γ], and g∗ ([ f ◦ γ]) = [g ◦ f ◦ γ], which defines
the mapping (g ◦ f )∗ ([γ]) = [g ◦ f ◦ γ]. Now
[g ◦ f ◦ γ] = [g ◦ ( f ◦ γ)] = g∗ ([ f ◦ γ]) = g∗ ( f∗ ([γ])),
which shows that (g ◦ f )∗ = g∗ · f∗ .
A parametrization ϕ^{-1} : R^m → M coming from a chart (U, ϕ) is a local diffeomorphism, and can be used to find a canonical basis for T_pM. Choosing local coordinates x = (x_1, · · · , x_m) = ϕ(p), and the standard basis vectors e_i for R^m, we define
∂/∂x_i|_p := ϕ^{-1}_*(e_i).
By definition ∂/∂x_i|_p ∈ T_pM, and since the vectors e_i form a basis for R^m, the vectors ∂/∂x_i|_p form a basis for T_pM. An arbitrary tangent vector X_p ∈ T_pM can now be written with respect to the basis {∂/∂x_i|_p}:
X_p = ϕ^{-1}_*(X_i e_i) = X_i ∂/∂x_i|_p,
where the notation X_i ∂/∂x_i|_p = Σ_i X_i ∂/∂x_i|_p denotes the Einstein summation convention, and (X_i) is a vector in R^m!
We now define the directional derivative of a smooth function h : M → R in the direction of X_p ∈ T_pM by
X_p h := h_* X_p = [h ∘ γ].
In fact we have that X_p h = (h ∘ γ)′(0), with X_p = [γ]. For the basis vectors of T_pM this yields the following. Let γ_i(t) = ϕ^{-1}(x + te_i), and X_p = ∂/∂x_i|_p, then
(2)   X_p h = ∂/∂x_i|_p h = [ (h ∘ ϕ^{-1}) ∘ (ϕ ∘ γ_i) ]′(0) = ∂h̃/∂x_i,
in local coordinates, which explains the notation for tangent vectors. In particular, for general tangent vectors X_p, X_p h = X_i ∂h̃/∂x_i.
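The coordinate formula X_p h = X_i ∂h̃/∂x_i can be checked on an example. The sketch below assumes sympy; the function h̃ and the vector components are ad hoc.

```python
# Directional derivative in local coordinates versus differentiation along a curve.
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t', real=True)
h_tilde = x1**2 * sp.exp(x2)                  # h in local coordinates
X = (3, -2)                                   # components (X_i) of a tangent vector

# Einstein-summation formula X_i ∂h~/∂x_i ...
Xh = sum(Xi * sp.diff(h_tilde, xi) for Xi, xi in zip(X, (x1, x2)))

# ... agrees with (h ∘ γ)'(0) for the straight curve γ(t) = x + t(X_1, X_2) at x = (1, 0):
p = {x1: 1, x2: 0}
curve = h_tilde.subs({x1: 1 + X[0]*t, x2: 0 + X[1]*t})
assert sp.diff(curve, t).subs(t, 0) == Xh.subs(p)
```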
21Lee, Lemma 3.5.
Let us go back to the curve γ : N = (a, b) → M and express velocity vectors. Consider the chart (N, id) for N, with coordinates t; then
d/dt|_{t=0} := id^{-1}_*(1).
∂/∂x_i|_p = (∂x′_j/∂x_i) ∂/∂x′_j|_p.
Let us prove this formula by looking at a slightly more general situation. Let N, M be smooth manifolds and f : N → M a smooth mapping. Let q = f(p) and let (V, ψ) be a chart for M containing q. The vectors ∂/∂y_j|_q form a basis for T_qM. If we write X_p = X_i ∂/∂x_i|_p, then for the basis vectors ∂/∂x_i|_p we have
f_*( ∂/∂x_i|_p ) h = ∂/∂x_i|_p (h ∘ f) = ∂/∂x_i ( (h ∘ f)˜ ) = ∂/∂x_i ( h ∘ f ∘ ϕ^{-1} )
                   = ∂/∂x_i ( h̃ ∘ ψ ∘ f ∘ ϕ^{-1} ) = ∂/∂x_i ( h̃ ∘ f̃ )
                   = (∂h̃/∂y_j)(∂f̃_j/∂x_i) = (∂f̃_j/∂x_i) ∂/∂y_j|_q h.
5. Cotangent spaces
In linear algebra it is often useful to study the space of linear functions on a
given vector space V . This space is denoted by V ∗ and called the dual vector space
to V — again a linear vector space. So the elements of V ∗ are linear functions
θ : V → R. As opposed to vectors v ∈ V , the elements, or vectors in V ∗ are called
covectors.
Lemma 5.1. Let V be an n-dimensional vector space with basis {v_1, · · · , v_n}; then there exist covectors {θ^1, · · · , θ^n} such that θ^i(v_j) = δ^i_j (the dual basis).
By definition the cotangent space T_p^*M is also m-dimensional and it has a canonical basis as described in Lemma 5.1. As before we have the canonical basis vectors ∂/∂x_i|_p for T_pM; the associated dual basis vectors for T_p^*M are denoted dx^i|_p. Let us now describe this dual basis for T_p^*M and explain the notation.
The covectors dx^i|_p are called differentials and we now show that these are indeed related to dh_p. Let h : M → R be a smooth function; then h_* : T_pM → R and h_* ∈ T_p^*M. Since the differentials dx^i|_p form a basis of T_p^*M we have that h_* = λ_i dx^i|_p, and therefore
h_*( ∂/∂x_i|_p ) = λ_j dx^j|_p · ∂/∂x_i|_p = λ_j δ^j_i = λ_i = ∂h̃/∂x_i,
and thus
(4)   dh_p = h_* = (∂h̃/∂x_i) dx^i|_p.
Choose h such that h_* satisfies the identity in Lemma 5.1, i.e. let h̃ = x_i (h = x_i ∘ ϕ = ⟨ϕ, e_i⟩). These linear functions h of course span T_p^*M, and
h_* = (x_i ∘ ϕ)_* = d(x_i ∘ ϕ)_p = dx^i|_p.
Cotangent vectors are of the form
θ_p = θ_i dx^i|_p.
The pairing between a tangent vector X_p and a cotangent vector θ_p is expressed componentwise as follows:
θ_p · X_p = θ_i X_j δ^i_j = θ_i X_i.
In the case of tangent spaces a mapping f : N → M pushes forward to a linear map f_* : T_pN → T_qM for each p ∈ N. For cotangent spaces one expects a similar construction. Let q = f(p); then for a given cotangent vector θ_q ∈ T_q^*M define f^*θ_q ∈ T_p^*N by
(f^*θ_q) · X_p = θ_q · (f_*X_p),
for any tangent vector X_p ∈ T_pN. The homomorphism f^* : T_q^*M → T_p^*N, defined by θ_q ↦ f^*θ_q, is called the pullback of f at p. It is a straightforward consequence from linear algebra that f^* defined above is indeed the dual homomorphism of f_*, also called the adjoint, or transpose (see Lee, Ch. 6, for more details, and compare the definition of the transpose of a matrix). If we expand the definition of pullback in local coordinates, using (3), we obtain
f^*( dy^j|_q ) · ( X_i ∂/∂x_i|_p ) = dy^j|_q · f_*( X_i ∂/∂x_i|_p )
                                  = dy^j|_q ( X_i (∂f̃_j/∂x_i) ∂/∂y_j|_q ) = (∂f̃_j/∂x_i) X_i.
Using this relation we obtain that
f^*( σ_j dy^j|_q ) · ( X_i ∂/∂x_i|_p ) = σ_j (∂f̃_j/∂x_i) X_i = σ_j (∂f̃_j/∂x_i) dx^i|_p ( X_i ∂/∂x_i|_p ),
which produces the local formula
(5)   f^*( σ_j dy^j|_q ) = σ_j (∂f̃_j/∂x_i) dx^i|_p = (σ_j ∘ f̃) (∂f̃_j/∂x_i) dx^i|_p.
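Formula (5) is easy to evaluate in examples. The sketch below assumes sympy; the map f̃ and the 1-form components σ_j are ad hoc choices, not from the notes.

```python
# Pullback of a 1-form sigma_j dy^j under f, computed as (sigma_j ∘ f)(∂f_j/∂x_i) dx^i.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)

f = {y1: x1*x2, y2: x1 + x2**2}               # f~ in local coordinates
sigma = {y1: y2, y2: y1*y2}                   # components sigma_j of a 1-form on M

# Pullback components: (f^* sigma)_i = sum_j (sigma_j ∘ f) * ∂f_j/∂x_i
pullback = [sum(sigma[yj].subs(f) * sp.diff(f[yj], xi) for yj in (y1, y2))
            for xi in (x1, x2)]
print([sp.expand(c) for c in pullback])       # components in dx^1, dx^2
```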
Lemma 5.3. Let f : N → M, and g : M → P be smooth mappings, and let p ∈ N, then
For the composition g ∘ f it holds that (g ∘ f)^* ω_{(g∘f)(p)} · X_p = ω_{(g∘f)(p)} · (g ∘ f)_*(X_p). Using Lemma 4.5 we obtain
6. Vector bundles
The abstract notion of vector bundle consists of topological spaces E (total
space) and M (the base) and a projection π : E → M (surjective). To be more
precise:
Definition 6.1. A triple (E, M, π) is called a real vector bundle of rank k over M if
(i) for each p ∈ M, the set E p = π−1 (p) is a real k-dimensional linear vector
space, called the fiber over p, such that
(ii) for every p ∈ M there exists an open neighborhood U ∋ p, and a homeomorphism Φ : π^{-1}(U) → U × R^k;
(a) π ◦ Φ−1 (p, ξ) = p, for all ξ ∈ Rk ;
If there is no ambiguity about the base space M we often denote a vector bundle
by E for short. Another way to denote a vector bundle that is common in the
literature is π : E → M. It is clear from the above definition that if M is a topological
manifold then so is E. Indeed, via the homeomorphisms Φ it follows that E is Hausdorff and has a countable basis of open sets. Define ϕ̃ = (ϕ × Id_{R^k}) ∘ Φ; then ϕ̃ : π^{-1}(U) → V × R^k is a homeomorphism. Figure 28 explains the choice of bundle charts for E related to charts for M.
For two trivializations Φ : π^{-1}(U) → U × R^k and Ψ : π^{-1}(V) → V × R^k, we have that the transition map Ψ ∘ Φ^{-1} : (U ∩ V) × R^k → (U ∩ V) × R^k has the following form
Ψ ∘ Φ^{-1}(p, ξ) = (p, τ(p)ξ),
where on the fibers Φ^{-1}(p, ξ) = A(p)ξ and Ψ^{-1}(p, σ) = B(p)σ; if A(p)ξ = B(p)σ, then σ(p, ξ) = B^{-1}Aξ =: τ(p)ξ. Continuity is clear from the assumptions in Definition 6.1.
If both E and M are smooth manifolds and π is a smooth projection, such that the
local trivializations can be chosen to be diffeomorphisms, then (E, M, π) is called
a smooth vector bundle. In this case the maps τ are smooth. The following result
allows us to construct smooth vector bundles and is important for the special vector
bundles used in this course.
Φ_β ∘ Φ_α^{-1}(p, ξ) = (p, τ_{αβ}(p)ξ).
Mappings between smooth vector bundles are called bundle maps, and are defined as follows. Given vector bundles π : E → M and π′ : E′ → M′ and smooth mappings F : E → E′ and f : M → M′, such that the following diagram commutes

    E --F--> E′
    |π       |π′
    v        v
    M --f--> M′

and F|_{E_p} : E_p → E′_{f(p)} is linear, then the pair (F, f) is called a smooth bundle map.
A section, or cross section of a bundle E is a continuous mapping σ : M → E,
such that π ◦ σ = IdM . A section is smooth if σ is a smooth mapping. The space of
is called the tangent bundle of M. We now show that TM is in fact a smooth vector bundle over M.
Theorem 6.8. The tangent bundle TM is a smooth vector bundle over M of rank m, and as such TM is a smooth 2m-dimensional manifold.
Proof: Let (U, ϕ) be a smooth chart for p ∈ M. Define
Φ( X_i ∂/∂x_i|_p ) = ( p, (X_i) ),
Φ_β ∘ Φ_α^{-1}(p, X_p) = (p, τ_{αβ}(p)X_p),
where τ_{αβ} : U_α ∩ U_β → Gl(m, R). It remains to show that τ_{αβ} is smooth. We have
ϕ̃_β ∘ ϕ̃_α^{-1} = (ϕ_β × Id) ∘ Φ_β ∘ Φ_α^{-1} ∘ (ϕ_α^{-1} × Id). Using the change of coordinates formula derived in Section 4 we have that
ϕ̃_β ∘ ϕ̃_α^{-1}( x, (X_i) ) = ( x′_1(x), · · · , x′_m(x), (∂x′_j/∂x_i) X_i ),
which proves the smoothness of τ_{αβ}. Theorem 6.4 can now be applied, showing that TM is a smooth vector bundle over M. From the charts (π^{-1}(U), ϕ̃) we conclude that TM is a smooth 2m-dimensional manifold.
X : M → T M,
with the property that π ◦ X = idM . In other words X is a smooth (cross) section in
the vector bundle T M, see Figure 31. The space of smooth vector fields on M is
denoted by F(M).
FIGURE 31. A smooth vector field X on a manifold M [right], and as a 'curve', or section in the vector bundle TM [left].
The union T^*M = ∪_{p∈M} T_p^*M is called the cotangent bundle of M.
Theorem 6.12. 23 The cotangent bundle T ∗ M is a smooth vector bundle over M of
rank m, and as such T ∗ M is a smooth 2m-dimensional manifold.
Proof: The proof is more or less identical to the proof for TM, and is left to the reader as an exercise.
The differential dh : M → T^*M is an example of a smooth section. The above considerations give the coordinatewise expression for dh.
Using the formulas in (6) we can also obtain (5) in a rather straightforward way. Let g = ψ_j = ⟨ψ, e_j⟩ = y_j, and ω = dg = dy^j|_q in local coordinates; then
f^*( (σ_j ∘ ψ) ω ) = (σ_j ∘ ψ ∘ f) f^*dg = (σ_j ∘ ψ ∘ f) d(g ∘ f) = (σ_j ∘ f̃) (∂f̃_j/∂x_i)|_x dx^i|_p,
where the last step follows from (4).
The notation for tangent vectors was motivated by the fact that functions on a
manifold can be differentiated in tangent directions. The notation for the cotangent
vectors was partly motivated as the ‘reciprocal’ of the partial derivative. The in-
troduction of line integral will give an even better motivation for the notation for
cotangent vectors. Let N = R, and θ a 1-form on N given in local coordinates by
θt = h(t)dt, which can be identified with a function h. The notation makes sense
because θ can be integrated over any interval [a, b] ⊂ R:
∫_{[a,b]} θ := ∫_a^b h(t) dt.
The latter can be seen by combining some of the notions introduced above:
γ^*θ · (d/dt) = (γ^*θ)_t = θ_{γ(t)} · γ_*(d/dt) = θ_{γ(t)} · γ′(t).
Therefore, γ^*θ = (γ^*θ)_t dt = θ_{γ(t)} · γ′(t) dt = θ_i(γ(t)) γ′_i(t) dt, and
∫_γ θ = ∫_{[a,b]} γ^*θ = ∫_{[a,b]} θ_{γ(t)} · γ′(t) dt = ∫_{[a,b]} θ_i(γ(t)) γ′_i(t) dt.
If γ′ is nowhere zero then the map γ : [a, b] → N is either an immersion or an embedding. For example in the embedded case this gives an embedded submanifold γ ⊂ N with boundary ∂γ = {γ(a), γ(b)}. Let θ = dg be an exact 1-form, then
∫_γ dg = g|_{∂γ} = g(γ(b)) − g(γ(a)).
Indeed,
∫_γ dg = ∫_{[a,b]} γ^*dg = ∫_{[a,b]} d(g ∘ γ) = ∫_a^b (g ∘ γ)′(t) dt = g(γ(b)) − g(γ(a)).
This identity is called the Fundamental Theorem for Line Integrals and is a spe-
cial case of the Stokes Theorem (see Section 16).
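The endpoint formula is also easy to confirm numerically. The sketch below assumes numpy and scipy are available; the potential g and the curve γ are ad hoc examples.

```python
# Fundamental theorem for line integrals: the integral of dg over gamma depends
# only on the endpoints, g(gamma(b)) - g(gamma(a)).
import numpy as np
from scipy.integrate import quad

g = lambda x, y: x**2 * y + np.sin(y)                       # a potential g on R^2
grad_g = lambda x, y: np.array([2*x*y, x**2 + np.cos(y)])   # components of dg

gamma = lambda t: np.array([np.cos(t), t**2])               # a curve on [0, 2]
dgamma = lambda t: np.array([-np.sin(t), 2*t])

integrand = lambda t: grad_g(*gamma(t)) @ dgamma(t)         # (dg)_gamma(t) . gamma'(t)
line_integral, _ = quad(integrand, 0.0, 2.0)

print(line_integral)                   # ~ g(gamma(2)) - g(gamma(0))
print(g(*gamma(2.0)) - g(*gamma(0.0)))
```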
Now consider the special case that W = R, then T becomes a multilinear func-
tion, or form, and a generalization of linear functions. If in addition V1 = · · · =
Vr = V , then
T : V × · · · ×V → R,
is a multilinear function on V , and is called a covariant r-tensor on V . The number
of copies r is called the rank of T . The space of covariant r-tensors on V is denoted
by T r (V ), which clearly is a real vector space using the multilinearity property in
Definition 7.1. In particular we have that T 0 (V ) ∼= R, T 1 (V ) = V ∗ , and T 2 (V ) is
the space of bilinear forms on V . If we consider the case V1 = · · · = Vr = V ∗ , then
T : V ∗ × · · · ×V ∗ → R,
T (x, y) = x × y ∈ R3 ,
Since multilinear functions on V can be multiplied, i.e. given vector spaces V, W and tensors T ∈ T^r(V), and S ∈ T^s(W), the multilinear function
R(v_1, · · · , v_r, w_1, · · · , w_s) = T(v_1, · · · , v_r) S(w_1, · · · , w_s)
is well-defined and is a multilinear function on V^r × W^s. This brings us to the following definition. Let T ∈ T^r(V), and S ∈ T^s(W), then
T ⊗ S : V^r × W^s → R,
is given by
T ⊗ S(v_1, · · · , v_r, w_1, · · · , w_s) = T(v_1, · · · , v_r) S(w_1, · · · , w_s).
This product is called the tensor product. By taking V = W, T ⊗ S is a covariant (r + s)-tensor on V, which is an element of the space T^{r+s}(V) and ⊗ : T^r(V) × T^s(V) → T^{r+s}(V).
Lemma 7.3. Let T ∈ T^r(V), S, S′ ∈ T^s(V), and R ∈ T^t(V), then
(i) (T ⊗ S) ⊗ R = T ⊗ (S ⊗ R) (associative),
(ii) T ⊗ (S + S′) = T ⊗ S + T ⊗ S′ (distributive),
(iii) T ⊗ S ≠ S ⊗ T (non-commutative in general).
The tensor product is also defined for contravariant tensors and mixed tensors.
As a special case of the latter we also have the product between covariant and
contravariant tensors.
J 7.4 Example. The last property can easily be seen by the following example. Let
V = R2 , and T, S ∈ T 1 (R2 ), given by T (v) = v1 + v2 , and S(w) = w1 − w2 , then
T ⊗ S(1, 1, 1, 0) = 2 6= 0 = S ⊗ T (1, 1, 1, 0),
which shows that ⊗ is not commutative in general. I
The following theorem shows that the tensor product can be used to build the
tensor space T r (V ) from elementary building blocks.
Theorem 7.5. 26 Let {v1 , · · · , vn } be a basis for V , and let {θ1 , · · · , θn } be the dual
basis for V ∗ . Then the set
B = { θ^{i_1} ⊗ · · · ⊗ θ^{i_r} : 1 ≤ i_1, · · · , i_r ≤ n },
is a basis for T^r(V).
which shows by using the multilinearity of tensors that T can be expanded in the basis B as follows;
T = T_{i_1···i_r} θ^{i_1} ⊗ · · · ⊗ θ^{i_r},
where T_{i_1···i_r} = T(v_{i_1}, · · · , v_{i_r}), the components of the tensor T. Linear independence follows from the same calculation.
J 7.6 Example. Consider the 2-tensors T(x, y) = x_1y_1 + x_2y_2, T′(x, y) = x_1y_2 + x_2y_2 and T″(x, y) = x_1y_1 + x_2y_2 + x_1y_2 on R^2. With respect to the standard dual basis θ^1(x) = x_1, θ^2(x) = x_2, we have
θ^1 ⊗ θ^1(x, y) = x_1y_1,   θ^1 ⊗ θ^2(x, y) = x_1y_2,
θ^2 ⊗ θ^1(x, y) = x_2y_1,   θ^2 ⊗ θ^2(x, y) = x_2y_2.
Using this the components of T are given by T_{11} = 1, T_{12} = 0, T_{21} = 0, and T_{22} = 1. Also notice that T′ = S ⊗ S′, where S(x) = x_1 + x_2, and S′(y) = y_2. Observe that not every tensor T ∈ T^2(R^2) is of the form T = S ⊗ S′. For example T″ ≠ S ⊗ S′, for any S, S′ ∈ T^1(R^2). I
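On R^2 a covariant 2-tensor is determined by its component matrix T_{ij} = T(e_i, e_j), and the tensor product of two covectors corresponds to the outer product of their component vectors. The following sketch assumes numpy; the vectors chosen are ad hoc.

```python
# Components of S ⊗ S' as an outer product, and non-commutativity of ⊗.
import numpy as np

S  = np.array([1.0, 1.0])      # S(x)  = x1 + x2
Sp = np.array([0.0, 1.0])      # S'(y) = y2

T_prime = np.outer(S, Sp)      # components of T' = S ⊗ S': T'(x, y) = x1 y2 + x2 y2
print(T_prime)                 # [[0. 1.] [0. 1.]]

x, y = np.array([1.0, 1.0]), np.array([1.0, 0.0])
print(x @ T_prime @ y)         # T'(x, y)
print(x @ np.outer(Sp, S) @ y) # S' ⊗ S (x, y): differs, so ⊗ is not commutative
```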
In Lee, Ch. 11, the notion of tensor product between arbitrary vector spaces is
explained. Here we will discuss a simplified version of the abstract theory. Let
V and W be two (finite dimensional) real vector spaces, with bases {v1 , · · · vn }
and {w1 , · · · wm } respectively, and for their dual spaces V ∗ and W ∗ we have the
dual bases {θ1 , · · · , θn } and {σ1 , · · · , σm } respectively. If we use the identification
(V^*)^* ≅ V, and (W^*)^* ≅ W, we can define V ⊗ W as follows:
Definition 7.7. The tensor product of V and W is the real vector space of (finite) linear combinations
V ⊗ W := { λ^{ij} v_i ⊗ w_j : λ^{ij} ∈ R } = [ v_i ⊗ w_j ]_{i,j},
where v_i ⊗ w_j(v^*, w^*) := v^*(v_i) w^*(w_j), using the identification v_i(v^*) := v^*(v_i), and w_j(w^*) := w^*(w_j), with (v^*, w^*) ∈ V^* × W^*.
To get a feeling of what the tensor product of two vector spaces represents con-
sider the tensor product of the dual spaces V^* and W^*. We obtain the real vector space of (finite) linear combinations
V^* ⊗ W^* := { λ_{ij} θ^i ⊗ σ^j : λ_{ij} ∈ R } = [ θ^i ⊗ σ^j ]_{i,j},
where θ^i ⊗ σ^j(v, w) = θ^i(v) σ^j(w) for any (v, w) ∈ V × W. One can show that V^* ⊗ W^* is isomorphic to the space of bilinear maps from V × W to R. In particular elements v^* ⊗ w^* all lie in V^* ⊗ W^*, but not all elements in V^* ⊗ W^* are of this form. The isomorphism is easily seen as follows. Let v = ξ_i v_i, and w = η_j w_j; then for a given bilinear form b it holds that b(v, w) = ξ_i η_j b(v_i, w_j). By definition of the dual basis we have that ξ_i η_j = θ^i(v) σ^j(w) = θ^i ⊗ σ^j(v, w), which shows the isomorphism by setting λ_{ij} = b(v_i, w_j).
In the case V ∗ ⊗ W the tensors represent linear maps from V to W . Indeed,
from the previous we know that elements in V ∗ ⊗W represent bilinear maps from
V ×W ∗ to R. For an element b ∈ V ∗ ⊗W this means that b(v, ·) : W ∗ → R, and thus
b(v, ·) ∈ (W ∗ )∗ ∼
= W.
J 7.8 Example. Consider covectors a^* ∈ V^* and b^* ∈ W^*; then a^* ⊗ (b^*)^* can be identified with a matrix, i.e. a^* ⊗ (b^*)^*(v, ·) = a^*(v)(b^*)^*(·) ≅ a^*(v)b. For example let a^*(v) = a_1v_1 + a_2v_2 + a_3v_3, and
Av = a^*(v)b = ( a_1b_1v_1 + a_2b_1v_2 + a_3b_1v_3 ; a_1b_2v_1 + a_2b_2v_2 + a_3b_2v_3 ) = ( a_1b_1  a_2b_1  a_3b_1 ; a_1b_2  a_2b_2  a_3b_2 ) ( v_1 ; v_2 ; v_3 ),
which shows how a vector and a covector can be 'tensored' to become a matrix. Note that it also holds that A = b · a^*. I
Elements in this space are called (r, s)-mixed tensors on V — r copies of V^*, and s copies of V. Of course the tensor product described above is defined in general for tensors T ∈ T^r_s(V), and S ∈ T^{r′}_{s′}(V):
⊗ : T^r_s(V) × T^{r′}_{s′}(V) → T^{r+r′}_{s+s′}(V).
The analogue of Theorem 7.5 can also be established for mixed tensors. In the next
sections we will see various special classes of covariant, contravariant and mixed
tensors.
J 7.10 Example. The inner product on a vector space V is an example of a covariant
2-tensor. This is also an example of a symmetric tensor. I
with respect to the bases {θi1 ⊗· · ·⊗θir } and {σ j1 ⊗· · ·⊗σ jr } for T r (W ) and T r (V )
respectively.
J 7.12 Remark. The direct sum
T^*(V) = ⊕_{r=0}^∞ T^r(V),
consisting of finite sums of covariant tensors, is called the covariant tensor algebra of V with multiplication given by the tensor product ⊗. Similarly, one defines the contravariant tensor algebra
T_*(V) = ⊕_{r=0}^∞ T_r(V).
T (v1 , · · · , vi , · · · , v j , · · · , vr ) = T (v1 , · · · , v j , · · · , vi , · · · , vr ),
where a({1, · · · , r}) = {a(1), · · · , a(r)}. From this notation we have that for two
permutations a, b ∈ Sr , b (a T ) = ba T . Define
Sym T = (1/r!) Σ_{a∈S_r} aT.
S · T = Sym (S ⊗ T ).
Lemma 8.4. Let {v1 , · · · , vn } be a basis for V , and let {θ1 , · · · , θn } be the dual
basis for V ∗ . Then the set
B_Σ = { θ^{i_1} · · · θ^{i_r} : 1 ≤ i_1 ≤ · · · ≤ i_r ≤ n },
Another important class of tensors are alternating tensors and are defined as
follows.
55
As before we define
Alt T = (1/r!) Σ_{a∈S_r} (−1)^a aT,
where (−1)^a is +1 for even permutations, and −1 for odd permutations. We say that Alt T is the alternating projection of a tensor T, and Alt T is of course an alternating tensor.
J 8.6 Example. Let T, T 0 ∈ T 2 (R2 ) be defined as follows: T (x, y) = x1 y2 , and
T 0 (x, y) = x1 y2 − x2 y1 . Clearly, T is not alternating and T 0 (x, y) = −T 0 (y, x) is
alternating. We have that
Alt T(x, y) = ½ T(x, y) − ½ T(y, x) = ½ x_1y_2 − ½ y_1x_2 = ½ T′(x, y),
which clearly is alternating. If we do the same thing for T′ we obtain:
Alt T′(x, y) = ½ T′(x, y) − ½ T′(y, x) = ½ x_1y_2 − ½ x_2y_1 − ½ y_1x_2 + ½ y_2x_1 = T′(x, y),
showing that the operation Alt applied to alternating tensors produces the same tensor again. Notice that T′(x, y) = det(x, y). I
This brings us to the fundamental product of alternating tensors called the wedge product. Let S ∈ Λ^r(V) and T ∈ Λ^s(V) be alternating tensors, then
S ∧ T = ((r + s)!/(r!s!)) Alt(S ⊗ T).
The wedge product of alternating tensors is anti-commutative, which follows directly from the definition:
S ∧ T(v_1, · · · , v_{r+s}) = (1/(r!s!)) Σ_{a∈S_{r+s}} (−1)^a S(v_{a(1)}, · · · , v_{a(r)}) T(v_{a(r+1)}, · · · , v_{a(r+s)}).
(i) (T ∧ S) ∧ R = T ∧ (S ∧ R);
(ii) (T + T 0 ) ∧ S = T ∧ S + T 0 ∧ S;
(iii) T ∧ S = (−1)rs S ∧ T , for T ∈ Λr (V ) and S ∈ Λs (V );
(iv) T ∧ T = 0.
The latter is a direct consequence of the definition of Alt . In order to prove these
properties we have the following lemma.
Lemma 8.7. Let T ∈ T r (V ) and S ∈ T s (V ), then
where the latter equality is due to the fact that r! terms are identical under the
definition of Alt ((Alt T ) ⊗ S).
Property (i) can now be proved as follows. Clearly Alt (Alt (T ⊗S)−T ⊗S) = 0,
and thus from Lemma 8.7 we have that
By definition
(T ∧ S) ∧ R = ((r + s + t)!/((r + s)! t!)) Alt((T ∧ S) ⊗ R)
            = ((r + s + t)!/((r + s)! t!)) · ((r + s)!/(r! s!)) Alt( Alt(T ⊗ S) ⊗ R )
            = ((r + s + t)!/(r! s! t!)) Alt(T ⊗ S ⊗ R).
The same formula holds for T ∧ (S ∧ R), which proves associativity. More generally it holds that for T_i ∈ Λ^{r_i}(V),
T_1 ∧ · · · ∧ T_k = ((r_1 + · · · + r_k)!/(r_1! · · · r_k!)) Alt(T_1 ⊗ · · · ⊗ T_k).
Property (iii) can be seen as follows. Each term in T ∧ S can be found in S ∧ T. This can be done by linking the permutations a and a′. To be more precise, count how many transpositions are needed to change
a ↔ (i_1, · · · , i_r, j_{r+1}, · · · , j_{r+s}) into a′ ↔ (j_{r+1}, · · · , j_{r+s}, i_1, · · · , i_r).
This clearly requires rs transpositions, which shows Property (iii).
J 8.8 Example. Consider the 1-tensors S(x) = x_1 + x_2, and T(y) = y_2. As before S ⊗ T(x, y) = x_1y_2 + x_2y_2, and T ⊗ S(x, y) = x_2y_1 + x_2y_2. Now compute
Alt(S ⊗ T)(x, y) = ½ x_1y_2 + ½ x_2y_2 − ½ y_1x_2 − ½ x_2y_2
                 = ½ x_1y_2 − ½ x_2y_1 = ½ S ∧ T(x, y).
Similarly,
Alt(T ⊗ S)(x, y) = ½ x_2y_1 + ½ x_2y_2 − ½ y_2x_1 − ½ x_2y_2
                 = −½ x_1y_2 + ½ x_2y_1 = ½ T ∧ S(x, y),
which gives that S ∧ T = −T ∧ S. Note that if T = e_1^*, i.e. T(x) = x_1, and S = e_2^*, i.e. S(x) = x_2, then
T ∧ S(x, y) = x_1y_2 − x_2y_1 = det(x, y).
I
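The operations Alt and ∧ (in the determinant convention used above) are simple enough to implement directly. The sketch below assumes standard-library Python only; the helper names are ad hoc.

```python
# Alt and the wedge product for covectors on R^n, reproducing Example 8.8.
from itertools import permutations
from math import factorial

def sign(perm):
    # parity of a permutation given as a tuple of indices (count inversions)
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def alt(T, r):
    # alternating projection of an r-tensor T (a function of r vectors)
    def AltT(*vs):
        total = sum(sign(a) * T(*(vs[i] for i in a)) for a in permutations(range(r)))
        return total / factorial(r)
    return AltT

def wedge(S, r, T, s):
    # S ∧ T = (r+s)!/(r! s!) Alt(S ⊗ T)   (determinant convention)
    tensor = lambda *vs: S(*vs[:r]) * T(*vs[r:])
    A = alt(tensor, r + s)
    c = factorial(r + s) / (factorial(r) * factorial(s))
    return lambda *vs: c * A(*vs)

S = lambda x: x[0] + x[1]      # S(x) = x1 + x2
T = lambda y: y[1]             # T(y) = y2

x, y = (1.0, 0.0), (0.0, 1.0)
print(wedge(S, 1, T, 1)(x, y), -wedge(T, 1, S, 1)(x, y))   # equal: S ∧ T = -T ∧ S
```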
J 8.9 Remark. Some authors use the more logical definition
S ∧̄ T = Alt(S ⊗ T),
which is in accordance with the definition of the symmetric product. This definition is usually called the alt convention for the wedge product, and our definition is usually referred to as the determinant convention. For computational purposes the determinant convention is more appropriate. I
If {e_1^*, · · · , e_n^*} is the standard dual basis for (R^n)^*, then for vectors a_1, · · · , a_n ∈ R^n,
det(a_1, · · · , a_n) = e_1^* ∧ · · · ∧ e_n^*(a_1, · · · , a_n).
Using the multilinearity the more general statement reads
(7)   β_1 ∧ · · · ∧ β_n(a_1, · · · , a_n) = det( β_i(a_j) ),
is a basis for Λ^r(V), and dim Λ^r(V) = n!/((n − r)! r!). In particular, dim Λ^r(V) = 0 for r > n.
Proof: From Theorem 7.5 we know that any alternating tensor T ∈ Λ^r(V) can be written as
T = T_{j_1···j_r} θ^{j_1} ⊗ · · · ⊗ θ^{j_r}.
We have that Alt T = T, and so
T = T_{j_1···j_r} Alt(θ^{j_1} ⊗ · · · ⊗ θ^{j_r}) = (1/r!) T_{j_1···j_r} θ^{j_1} ∧ · · · ∧ θ^{j_r}.
In the expansion the terms with j_k = j_ℓ are zero since θ^{j_k} ∧ θ^{j_ℓ} = 0. If we order the indices in increasing order we obtain
T = ± (1/r!) T_{i_1···i_r} θ^{i_1} ∧ · · · ∧ θ^{i_r},
which shows that B_Λ spans Λ^r(V).
Linear independence can be proved as follows. Let 0 = λ_{i_1···i_r} θ^{i_1} ∧ · · · ∧ θ^{i_r}; evaluating at (v_{i_1}, · · · , v_{i_r}) gives λ_{i_1···i_r} = 0, which proves linear independence.
It is immediately clear that B_Λ consists of (n choose r) elements.
which are called the covariant r-tensor bundle, contravariant s-tensor bundle,
and the mixed (r, s)-tensor bundle on M. As for the tangent and cotangent bundle
the tensor bundles are also smooth manifolds. In particular, T 1 M = T ∗ M, and
T1 M = T M. Recalling the symmetric and alternating tensors as introduced in the
previous section we also define the tensor bundles Σr M and Λr M.
Theorem 9.1. The tensor bundles T r M, Tr M and Tsr M are smooth vector bundles.
Proof: Using Section 6 the theorem is proved by choosing appropriate local
trivializations Φ. For coordinates x = ϕ_α(p) and x′ = ϕ_β(p) we recall that
∂/∂x_i|_p = (∂x′_j/∂x_i) ∂/∂x′_j|_p,   dx′^j|_p = (∂x′_j/∂x_i) dx^i|_p.
For a covariant tensor T ∈ T^rM this implies the following. The components are defined in local coordinates by
T = T_{i_1···i_r} dx^{i_1} ⊗ · · · ⊗ dx^{i_r},   T_{i_1···i_r} = T( ∂/∂x_{i_1}, · · · , ∂/∂x_{i_r} ).
The change of coordinates x → x′ then gives
T_{i_1···i_r} = T( (∂x′_{j_1}/∂x_{i_1}) ∂/∂x′_{j_1}, · · · , (∂x′_{j_r}/∂x_{i_r}) ∂/∂x′_{j_r} ) = T′_{j_1···j_r} (∂x′_{j_1}/∂x_{i_1}) · · · (∂x′_{j_r}/∂x_{i_r}).
Define
Φ( T_{i_1···i_r} dx^{i_1} ⊗ · · · ⊗ dx^{i_r} ) = ( p, (T_{i_1···i_r}) ),
then
Φ′ ∘ Φ^{-1}( p, (T′_{j_1···j_r}) ) = ( p, T′_{j_1···j_r} (∂x′_{j_1}/∂x_{i_1}) · · · (∂x′_{j_r}/∂x_{i_r}) ).
The products of the partial derivatives are smooth functions. We can now apply Theorem 6.4 as we did in Theorems 6.8 and 6.12.
On tensor bundles we also have the natural projection
π : Tsr M → M,
defined by π(p, T ) = p. A smooth section in Tsr M is a smooth mapping
σ : M → Tsr M,
such that π ◦ σ = idM . The space of smooth sections in Tsr M is denoted by Fsr (M).
For the co- and contravariant tensors these spaces are denoted by Fr (M) and Fs (M)
respectively. Smooth sections in these tensor bundles are also called smooth tensor
fields. Clearly, vector fields and 1-forms are examples of tensor fields. Sections in
tensor bundles can be expressed in coordinates as follows:
σ = σ_{i_1···i_r} dx^{i_1} ⊗ · · · ⊗ dx^{i_r},   σ ∈ F^r(M),
σ = σ^{j_1···j_s} ∂/∂x_{j_1} ⊗ · · · ⊗ ∂/∂x_{j_s},   σ ∈ F_s(M),
σ = σ^{j_1···j_s}_{i_1···i_r} dx^{i_1} ⊗ · · · ⊗ dx^{i_r} ⊗ ∂/∂x_{j_1} ⊗ · · · ⊗ ∂/∂x_{j_s},   σ ∈ F^r_s(M).
Tensor fields are often denoted by the component functions. The tensor and tensor
fields in this course are, except for vector fields, all covariant tensors and covariant
tensor fields. Smoothness of covariant tensor fields can be described in terms of
the component functions σi1 ···ir .
Lemma 9.2. 27 A covariant tensor field σ is smooth at p ∈ U if and only if
(i) the coordinate functions σi1 ···ir : U → R are smooth, or equivalently if and
only if
27See Lee, Lemma 11.6.
(ii) for smooth vector fields X_1, · · · , X_r defined on any open set U ⊂ M, the function σ(X_1, · · · , X_r) : U → R, given by p ↦ σ_p( X_1(p), · · · , X_r(p) ),
is smooth.
The same equivalences hold for contravariant and mixed tensor fields.
Proof: Use the identities in the proof of Theorem 9.1 and then the proof goes as
Lemma 6.11.
J 9.3 Example. Let M = R^2 and let σ = dx^1 ⊗ dx^1 + x_1^2 dx^2 ⊗ dx^2. If X = ξ_1(x) ∂/∂x_1 + ξ_2(x) ∂/∂x_2 and Y = η_1(x) ∂/∂x_1 + η_2(x) ∂/∂x_2 are arbitrary smooth vector fields, with X(x), Y(x) ∈ T_xR^2 ≅ R^2, then
σ(X, Y) = ξ_1(x)η_1(x) + x_1^2 ξ_2(x)η_2(x),
For covariant tensors we can also define the notion of pullback of a mapping f between smooth manifolds. Let f : N → M be a smooth mapping; then the pullback f^* : T^r(T_{f(p)}M) → T^r(T_pN) is defined as
(f^*T)(X_1, · · · , X_r) := T(f_*X_1, · · · , f_*X_r),
J 9.5 Example. Let us continue with the previous example and let N = M = R2 .
Consider the mapping f : N → M defined by
For given tangent vectors X, Y ∈ T_xN, given by X = ξ_1 ∂/∂x_1 + ξ_2 ∂/∂x_2 and Y = η_1 ∂/∂x_1 + η_2 ∂/∂x_2, we can compute the pushforward
f_* = ( 2 0 ; −1 3x_2^2 ),   and
f_*X = 2ξ_1 ∂/∂y_1 + (−ξ_1 + 3x_2^2 ξ_2) ∂/∂y_2,
f_*Y = 2η_1 ∂/∂y_1 + (−η_1 + 3x_2^2 η_2) ∂/∂y_2.
Let σ be given by σ = dy^1 ⊗ dy^1 + y_1^2 dy^2 ⊗ dy^2. This then yields
We have to point out here that f_*X and f_*Y are tangent vectors in T_{f(x)}M and not necessarily vector fields, although we can use this calculation to compute f^*σ, which clearly is a smooth covariant 2-tensor field on N. I
Here we used the fact that computing the differential of a mapping to R produces
the pushforward to a 1-form on N. I
σ = a_{11} dy^1 ⊗ dy^1 + a_{12} dy^1 ⊗ dy^2 + a_{21} dy^2 ⊗ dy^1 + a_{22} dy^2 ⊗ dy^2.
Then,
f^*σ = (4a_{11} − 2a_{12} − 2a_{21} + a_{22}) dx^1 ⊗ dx^1 + (6a_{12} − 3a_{22}) x_2^2 dx^1 ⊗ dx^2
     + (6a_{21} − 3a_{22}) x_2^2 dx^2 ⊗ dx^1 + 9x_2^4 a_{22} dx^2 ⊗ dx^2,
which produces the following matrix if we identify T^2(T_xN) and T^2(T_yM) with R^4:
f^* = ( 4  −2      −2      1
        0  6x_2^2   0      −3x_2^2
        0   0      6x_2^2  −3x_2^2
        0   0       0      9x_2^4 ),
which is clearly equal to the tensor product of the matrices (Jf)^*, i.e.
f^* = (Jf)^* ⊗ (Jf)^* = ( 2 −1 ; 0 3x_2^2 ) ⊗ ( 2 −1 ; 0 3x_2^2 ).
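In terms of component matrices, the pullback of a 2-tensor acts as A ↦ (Jf)^T A (Jf), which is exactly the Kronecker product acting on the four components. The sketch below assumes numpy and fixes x_2 to a number; the component matrix A is an ad hoc example.

```python
# Pullback of a 2-tensor as a Kronecker product of (Jf)^T with itself.
import numpy as np

x2 = 0.5
Jf = np.array([[2.0, 0.0],
               [-1.0, 3*x2**2]])

A = np.array([[1.0, 2.0],          # components a_ij of sigma = a_ij dy^i ⊗ dy^j
              [3.0, 4.0]])

pullback_matrix = Jf.T @ A @ Jf    # components of f^* sigma in dx^i ⊗ dx^j
kron_version = (np.kron(Jf.T, Jf.T) @ A.reshape(4)).reshape(2, 2)

print(np.allclose(pullback_matrix, kron_version))   # True
```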
(f^*σ)_p(X_1, · · · , X_r) := σ_{f(p)}(f_*X_1, · · · , f_*X_r),
for tangent vectors X_1, · · · , X_r ∈ T_pN.
Lemma 9.8. 29 Let f : N → M, g : M → P be smooth mappings, and let h ∈ C∞ (M),
σ ∈ Fr (M), and τ ∈ Fr (N), then:
(i) f ∗ : Fr (M) → Fr (N) is linear;
(ii) f ∗ (hσ) = (h ◦ f ) f ∗ σ;
(iii) f ∗ (σ ⊗ τ) = f ∗ σ ⊗ f ∗ τ;
(iv) f ∗ σ is a smooth covariant tensor field;
(v) (g ◦ f )∗ = f ∗ ◦ g∗ ;
(vi) id∗M σ = σ;
Proof: Combine the Lemmas 9.4 and 9.2.
and the components σi1 ···ir are smooth functions. An r-form σ acts on vector fields
X1 , · · · , Xr as follows:
Contraction is also linear in X, i.e. for vector fields X,Y it holds that
iX+Y σ = iX σ + iY σ, iλX σ = λ · iX σ.
Lemma 10.2. 30 Let σ ∈ Γ^r(M) and X ∈ F(M) be a smooth vector field, then
(i) iX σ ∈ Γr−1 (M) (smooth (r − 1)-form);
(ii) iX ◦ iX = 0;
(iii) iX is an anti-derivation, i.e. for σ ∈ Γr (M) and ω ∈ Γs (M),
This can be proved as follows. From the definition of the wedge product and Lemma 9.8 it follows that f^*(dy^1 ∧ · · · ∧ dy^m) = f^*dy^1 ∧ · · · ∧ f^*dy^m, and f^*dy^j = (∂f̃_j/∂x_i) dx^i = dF_j, where F = ψ ∘ f and f̃ = ψ ∘ f ∘ ϕ^{-1}. Now
11. Orientations
In order to explain orientations on manifolds we first start with orientations of
finite-dimensional vector spaces. Let V be a real m-dimensional vector space. Two
ordered bases {v_1, · · · , v_m} and {v′_1, · · · , v′_m} are said to be consistently oriented if the transition matrix A = (a_{ij}), defined by the relation
v_i = a_{ij} v′_j,
Let us now describe orientations for smooth manifolds M. We will assume that
dim M = m ≥ 1 here. For each point p ∈ M we can choose an orientation O p for
the tangent space Tp M, making (Tp M, O p ) oriented vector spaces. The collection
{O_p}_{p∈M} is called a pointwise orientation. Without any relation between these
choices this concept is not very useful. The following definition relates the choices
of orientations of Tp M, which leads to the concept of orientation of a smooth man-
ifold.
and is not a continuous function of p! It remains to verify that (ϕ̃2 )∗ (X) > 0 for
some neighborhood of N p. First, at p = N p,
J ϕ̃_2|_p(X(p)) = 1 > 0.
The choice of X(p) at p = N_p comes from the canonical choice ∂/∂x|_p with respect to ϕ̃_2. Secondly,
J ϕ̃_2|_p(X(p)) = ½ (1 − p_2) ( −2/(1 + p_2)   2p_1/(1 + p_2)^2 ) ( −p_2 ; p_1 ) = (1 − p_2)/(1 + p_2),
What the above example shows us is that if we choose X(p) = ∂/∂x_i|_p for all charts, it remains to be verified whether we have the proper collection of charts, i.e. does [{Jϕ_β ∘ Jϕ_α^{-1}(e_i)}] = [{e_i}] hold? This is equivalent to having det(Jϕ_β ∘ Jϕ_α^{-1}) > 0.
These observations lead to the following theorem.
Theorem 11.5. A smooth manifold M is orientable if and only if there exists an atlas A = {(U_α, ϕ_α)} such that for any p ∈ M
As we have seen for vector spaces m-forms can be used to define an orientation
on a vector space. This concept can also be used for orientations on manifolds.
Theorem 11.6. 31 Let M be a smooth m-dimensional manifold. A nowhere van-
ishing differential m-form θ ∈ Γm (M) determines a unique orientation O on M for
which θ_p is positively oriented at each T_pM. Conversely, given an orientation O
on M, then there exists a nowhere vanishing m-form θ ∈ Γm (M) that is positively
oriented at each Tp M.
Since g_*(e_1) × g_*(e_2)|_{θ=0} = −g_*(e_1) × g_*(e_2)|_{θ=2π}, the pullback form g^*σ cannot be nowhere vanishing on R/2πZ × R, which is a contradiction. This proves that the Möbius band is a non-orientable manifold. I
Let us now look at manifolds with boundary; (M, ∂M). For a point p ∈ ∂M we
distinguish three types of tangent vectors:
(i) tangent boundary vectors X ∈ Tp (∂M) ⊂ Tp M, which form an (m − 1)-
dimensional subspace of Tp M;
(ii) outward vectors; let ϕ^{-1} : W ⊂ H^m → M, then X ∈ T_pM is called an outward vector if ϕ^{-1}_*(Y) = X, for some Y = (y_1, · · · , y_m) with y_1 < 0;
(iii) inward vectors; let ϕ^{-1} : W ⊂ H^m → M, then X ∈ T_pM is called an inward vector if ϕ^{-1}_*(Y) = X, for some Y = (y_1, · · · , y_m) with y_1 > 0.
Using this concept we can now introduce the notion of induced orientation on ∂M.
Let p ∈ ∂M and choose a basis {X1 , · · · , Xm } for Tp M such that [X1 , · · · , Xm ] = O p ,
{X2 , · · · , Xm } are tangent boundary vectors, and X1 is an outward vector. In this case
[X2 , · · · , Xm ] = (∂O) p determines an orientation for Tp (∂M), which is consistent,
and therefore ∂O = {(∂O) p } p∈∂M is an orientation on ∂M induced by O. Thus for
an oriented manifold M, with orientation O, ∂M has an orientation ∂O, called the
induced orientation on ∂M.
J 11.8 Example. Any open set M ⊂ Rm (or Hm ) is an orientable manifold. I
J 11.9 Example. Consider a smooth embedded co-dimension 1 manifold
M = {p ∈ R^{m+1} : f(p) = 0},   f : R^{m+1} → R,   rk(f)|_p = 1, p ∈ M.
Then M is an orientable manifold. Indeed, M = ∂N, where N = {p ∈ R^{m+1} : f(p) > 0}, which is an open set in R^{m+1} and thus an oriented manifold. Since M = ∂N the manifold M inherits an orientation from N and hence it is orientable. I
Since Λ^m(R^m) ≅ R, a smooth m-form on R^m is determined by its single component ω_x(e_1, · · · , e_m), and over a domain of integration D we set
∫_D ω := ∫_D ω_x(e_1, · · · , e_m) dμ.
D
we can define ω ∈ Γm m m m
c (U). Clearly, Γc (U) ⊂ Γc (R ), and can be viewed as a
linear subspace via zero extension to Rm . For any open set U ⊂ Rm there exists a
domain of integration D such that U ⊃ D ⊃ supp(ω) (see Exercises).
If U ⊂ H^m is open, then
∫_U ω := ∫_{D∩H^m} ω.
The next theorem is the first step towards defining integrals on m-dimensional
manifolds M.
Theorem 12.3. Let U, V ⊂ R^m be open sets, f : U → V an orientation preserving diffeomorphism, and let ω ∈ Γ^m_c(V). Then,
∫_V ω = ∫_U f^*ω.
If f is orientation reversing, then ∫_U f^*ω = −∫_V ω.
Proof: Assume that f is an orientation preserving diffeomorphism from U to V. Let E be a domain of integration for ω; then D = f^{-1}(E) is a domain of integration for f^*ω. We now prove the theorem for the domains D and E. We use coordinates {x_i} and {y_i} on D and E respectively. We start with ω = g(y_1, · · · , y_m) dy^1 ∧ · · · ∧ dy^m. Using the change of variables formula for integrals and the pullback formula in (11) we obtain
∫_E ω = ∫_E g(y) dy_1 · · · dy_m   (Definition)
      = ∫_D (g ∘ f)(x) det(Jf̃|_x) dx_1 · · · dx_m
      = ∫_D (g ∘ f)(x) det(Jf̃|_x) dx^1 ∧ · · · ∧ dx^m
      = ∫_D f^*ω   (Definition).
Definition 13.1. Let M be a smooth m-manifold with atlas A = {(U_i, ϕ_i)}_{i∈I}. A partition of unity subordinate to A is a collection of smooth functions {λ_i : M → R}_{i∈I} satisfying the following properties:
(i) 0 ≤ λi (p) ≤ 1 for all p ∈ M and for all i ∈ I;
(ii) supp(λi ) ⊂ Ui ;
(iii) the set of supports {supp(λi )}i∈I is locally finite;
(iv) ∑i∈I λi (p) = 1 for all p ∈ M.
Condition (iii) says that every p ∈ M has a neighborhood U ∋ p such that only finitely many λ_i's are nonzero in U. As a consequence of this the sum in Condition (iv) is always finite.
Theorem 13.2. For any smooth m-dimensional manifold M with atlas A there
exists a partition of unity {λi } subordinate to A.
In order to prove this theorem we start with a series of auxiliary results and
notions.
Lemma 13.3. There exists a smooth function h : Rm → R such that 0 ≤ h ≤ 1 on
Rm , and h|B1 (0) ≡ 1 and supp(h) ⊂ B2 (0).
Proof: Define the function f_1 : R → R as follows:
f_1(t) = e^{−1/t} for t > 0, and f_1(t) = 0 for t ≤ 0.
One can easily prove that f_1 is a C^∞-function on R. If we set
f_2(t) = f_1(2 − t) / ( f_1(2 − t) + f_1(t − 1) ),
then this function has the property that f_2(t) ≡ 1 for t ≤ 1, 0 < f_2(t) < 1 for 1 < t < 2, and f_2(t) ≡ 0 for t ≥ 2. Moreover, f_2 is smooth. To construct h we simply write h(x) = f_2(|x|) for x ∈ R^m \{0}. Clearly, h is smooth on R^m \{0}, and since h|_{B_1(0)} ≡ 1 it follows that h is smooth on R^m.
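The construction of Lemma 13.3 translates directly into code. The sketch below assumes numpy; the function names mirror the proof but are otherwise ad hoc.

```python
# A bump function on R^m: h ≡ 1 on the unit ball, h ≡ 0 outside B_2(0), and h is C^∞.
import numpy as np

def f1(t):
    # smooth, vanishes to infinite order at t = 0
    return np.where(t > 0, np.exp(-1.0/np.where(t > 0, t, 1.0)), 0.0)

def f2(t):
    return f1(2.0 - t) / (f1(2.0 - t) + f1(t - 1.0))

def h(x):
    # radial bump function
    return f2(np.linalg.norm(x))

print(h(np.array([0.3, 0.4])))   # 1.0 (inside B_1(0))
print(h(np.array([1.0, 1.0])))   # strictly between 0 and 1
print(h(np.array([2.0, 1.0])))   # 0.0 (outside B_2(0))
```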
An atlas A gives an open covering for M. The sets U_i in the atlas need not be compact, nor locally finite. We say that a covering U = {U_i} of M is locally finite if every point p ∈ M has a neighborhood that intersects only finitely many U_i ∈ U. If there exists another covering V = {V_j} such that every V_j ∈ V is contained in some U_i ∈ U, then V is called a refinement of U. A topological space X for which each open covering admits a locally finite refinement is called paracompact.
Lemma 13.6. Any topological manifold M allows a countable, locally finite cov-
ering by precompact open sets.
Proof of Theorem 13.2: From Lemma 13.8 there exists a regular refinement W for A. By construction we have Z_i = ϕ_i^{-1}(B_1(0)), and we define
Ẑ_i := ϕ_i^{-1}(B_2(0)).
The functions
λ̂_j(p) = μ_j(p) / Σ_i μ_i(p)
are, due to the local finiteness of W_i and the fact that the denominator is positive for each p ∈ M, smooth functions on M. Moreover, 0 ≤ λ̂_j ≤ 1, and Σ_j λ̂_j ≡ 1. Since W is a refinement of A we can choose an index k_j such that W_j ⊂ U_{k_j}. Therefore we can group together some of the functions λ̂_j:
λ_i = Σ_{j : k_j = i} λ̂_j,
(i) 0 ≤ f ≤ 1 on M;
(ii) f^{-1}(1) = A;
(iii) supp( f ) ⊂ U.
Such a function f is called a bump function for A supported in U.
By considering functions g = c(1 − f) ≥ 0 we obtain smooth functions for which g^{-1}(0) can be an arbitrary closed subset of M.
Theorem 13.11. 34 Let A ⊂ M be a closed subset, and f : A → Rk be a smooth
mapping.35 Then for any open subset U ⊂ M containing A there exists a smooth
mapping f † : M → Rk such that f † |A = f , and supp( f † ) ⊂ U.
Let ω ∈ Γ^m_c(M), with supp(ω) ⊂ U for some chart U in A. Now define the integral of ω over M as follows:
∫_M ω := ∫_{ϕ(U)} (ϕ^{-1})^*ω,
which shows that the definition is independent of the chosen chart. We crucially use the fact that A ∪ A′ is an oriented atlas for M. To be more precise, the mappings ϕ′ ∘ ϕ^{-1} are orientation preserving.
By choosing a partition of unity as described in the previous section we can now
define the integral over M of arbitrary m-forms ω ∈ Γm c (M).
Definition 14.3. Let ω ∈ Γ^m_c(M) and let A_I = {(U_i, ϕ_i)}_{i∈I} ⊂ A be a finite subcovering of supp(ω) coming from an oriented atlas A for M. Let {λ_i} be a partition of unity subordinate to A_I. Then the integral of ω over M is defined as
∫_M ω := Σ_i ∫_M λ_i ω,
where the integrals ∫_M λ_i ω are integrals of forms that have support in a single chart as defined above.
We need to show now that the integral is indeed well-defined, i.e. the sum is
finite and independent of the atlas and partition of unity. Since supp(ω) is compact
AI exists by default, and thus the sum is finite.
Lemma 14.4. The above definition is independent of the chosen partition of unity
and covering AI .
Proof: Let A′_J ⊂ A′ be another finite covering of supp(ω), where A′ is a compatible oriented atlas for M, i.e. A ∪ A′ is an oriented atlas. Let {λ′_j} be a partition of unity subordinate to A′_J. We have
∫_M λ_i ω = ∫_M Σ_j λ′_j λ_i ω = Σ_j ∫_M λ′_j λ_i ω.
J 14.6 Remark. If M is a compact manifold, then for any ω ∈ Γ^m(M) it holds that supp(ω) ⊂ M is a compact set, and therefore Γ^m_c(M) = Γ^m(M) in the case of compact manifolds. I
∫_U f^*ω = ∫_{ϕ(U)} (ϕ^{-1})^* f^*ω = ∫_{ϕ(U)} (ϕ^{-1})^* f^* ψ^* (ψ^{-1})^*ω
         = ∫_{ψ(U′)} (ψ^{-1})^*ω = ∫_{U′} ω.
Proof: As before it suffices to prove the above theorem for a single chart U, i.e. supp(ω) ⊂ U. One can choose U to have a boundary ∂U so that ϕ(∂U) has
Since the interiors of the sets C_i (and thus A_i) are disjoint it holds that
∫_M ω = ∫_K (ϕ^{-1})^*ω = Σ_i ∫_{C_i} (ϕ^{-1})^*ω
      = Σ_i ∫_{B_i} g_i^*ω = Σ_i ∫_{D_i} g_i^*ω,
This mapping can be viewed as a covering map. From this expression we derive
various charts for S2 . Given the 2-form ω = zdx ∧ dz let us compute the pullback
and thus
g^*ω = −cos(ϕ) sin^2(ϕ) sin(θ) dϕ ∧ dθ.
The latter gives that ∫_{S^2} ω = −∫_0^{2π} ∫_0^π cos(ϕ) sin^2(ϕ) sin(θ) dϕ dθ = 0, which shows
The pullback form g^*ω = sin(ϕ) dϕ ∧ dθ, which shows that ω is a volume form on S^2. I
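The pullback computation for ω = z dx ∧ dz can be reproduced symbolically. The sketch below assumes the standard spherical parametrization g(ϕ, θ) = (sin ϕ cos θ, sin ϕ sin θ, cos ϕ); the parametrization used in the notes may be written slightly differently.

```python
# Pullback of omega = z dx ∧ dz to the sphere and its integral over S^2.
import sympy as sp

phi, theta = sp.symbols('phi theta', real=True)
x = sp.sin(phi)*sp.cos(theta)
y = sp.sin(phi)*sp.sin(theta)
z = sp.cos(phi)

# g^* omega = z * (x_phi * z_theta - x_theta * z_phi) dphi ∧ dtheta
coeff = z * (sp.diff(x, phi)*sp.diff(z, theta) - sp.diff(x, theta)*sp.diff(z, phi))
print(sp.simplify(coeff))                       # -sin(phi)**2 * sin(theta) * cos(phi)

integral = sp.integrate(coeff, (phi, 0, sp.pi), (theta, 0, 2*sp.pi))
print(integral)                                 # 0
```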
then we define
(iii) d ◦ d = 0;
(iv) if ω ∈ Γm (M), then dω = 0.
This operation d is called the exterior derivative on differential forms, and is a
unique anti-derivation (of degree 1) on Γ(M) with d 2 = 0.
Proof: Let us start by showing the existence of d on a chart U ⊂ M. We have
local coordinates x = ϕ(p), p ∈ U, and we define
via (14). Let us write d instead of dU for notational convenience. We have that
d( f dxI ) = d f ∧ dxI . Due to the linearity of d it holds that
which proves (ii). As for (iii) we argue as follows. In the case of a 0-form we have that
d(df) = d( (∂f/∂x_i) dx^i ) = (∂^2 f/∂x_j ∂x_i) dx^j ∧ dx^i
      = Σ_{i<j} ( ∂^2 f/∂x_i ∂x_j − ∂^2 f/∂x_j ∂x_i ) dx^i ∧ dx^j = 0.
where we used (ii). From (ii) it also follows that d̃(dx^I) = (−1)^j dx^{i_1} ∧ · · · ∧ d̃(dx^{i_j}) ∧ · · · ∧ dx^{i_k} = 0. By (i) d̃(ϕ(p)_{i_j}) = dx^{i_j}, and thus by (iv) d̃(dx^{i_j}) = d̃ ∘ d̃(ϕ(p)_{i_j}) = 0, which proves the latter. From (i) it also follows that d̃ω_I = dω_I, and therefore
d̃ω = d̃ω_I ∧ dx^I + ω_I d̃(dx^I) = dω_I ∧ dx^I = dω,
which shows that the above definition is independent of the chosen chart.
The last step is to show that d is also uniquely defined on Γk (M). Let p ∈ U, a
coordinate chart, and consider the restriction
ω|_U = ω_I dx^I,
Using (i) we have that d(g x_i)|_W = dx^i|_W, and therefore ω̃|_W = ω|_W. Set η = ω̃ − ω; then η|_W = 0. Let p ∈ W and h ∈ C^∞(M) satisfying h(p) = 1, and supp(h) ⊂ W.
Thus, hη ≡ 0 on M, and
0 = d(hη) = dh ∧ η + h dη.
This implies that (dη)_p = −(dh ∧ η)_p = 0, which proves that (dω̃)|_W = (dω)|_W.
If we use (ii)-(iii) then
dω̃ = d( gω_I d(g x_{i_1}) ∧ · · · ∧ d(g x_{i_k}) )
    = d(gω_I) ∧ d(g x_{i_1}) ∧ · · · ∧ d(g x_{i_k}) + gω_I d( d(g x_{i_1}) ∧ · · · ∧ d(g x_{i_k}) )
    = d(gω_I) ∧ d(g x_{i_1}) ∧ · · · ∧ d(g x_{i_k}).
The exterior derivative has other important properties with respect to restrictions
and pullbacks that we now list here.
Theorem 15.5. Let M be a smooth m-dimensional manifold, and let ω ∈ Γk (M),
k ≥ 0. Then
(i) in each chart (U, ϕ) for M, dω in local coordinates is given by (14);
(ii) if ω = ω0 on some open set U ⊂ M, then also dω = dω0 on U;
(iii) if U ⊂ M is open, then d(ω|U ) = (dω)|U ;
(iv) if f : N → M is a smooth mapping, then
f ∗ (dω) = d( f ∗ ω),
Similarly,
ω = ω_i dx^1 ∧ · · · ∧ dx̂^i ∧ · · · ∧ dx^m, where dx̂^i indicates that the factor dx^i is omitted.
Then,
Z Z
∂ωi
dω = (−1)i+1 dx1 · · · dxm
Hm IRm ∂xi
Z
i+1
= (−1) m−1
ωi |x i =R − ω i |x i =0 dx1 · · · dx
ci · · · dxm .
IR
If supp(ω)∩∂Hm 6= ∅, then ωi |xi =R −ωi |xi =0 = 0 for all i ≤ m−1, and ωm |xm =R = 0.
Therefore,
    ∫_{H^m} dω = (−1)^{i+1} ∫_{R^m} ∂ω_i/∂x^i dx^1 ⋯ dx^m
               = (−1)^{i+1} ∫_{R^{m−1}} ( ω_i|_{x^i=R} − ω_i|_{x^i=0} ) dx^1 ⋯ dx̂^i ⋯ dx^m
               = (−1)^{m+1} ∫_{R^{m−1}} −ω_m|_{x^m=0} dx^1 ⋯ dx^{m−1}
               = (−1)^m ∫_{R^{m−1}} ω_m|_{x^m=0} dx^1 ⋯ dx^{m−1}.
j∗ (ei ) = ei , i = 1, · · · , m − 1,
where the ei ’s on the left hand side are the unit vectors in Rm−1 , and the bold face
ek are the unit vectors in Rm . We have that the induced orientation for ∂Hm is
obtained by the rotation e1 → −em , em → e1 , and therefore
∂O = [e2 , · · · , em−1 , e1 ].
Under ϕ this corresponds with the orientation [e2 , · · · , em−1 , e1 ] of Rm−1 , which is
indeed the standard orientation for m even, and the opposite orientation for m odd.
The pullback form on ∂Hm , using the induced orientation on ∂Hm , is given by
Proof of Theorem 16.1: Let us start with the case that supp(ω) ⊂ U, where
(U, ϕ) is an oriented chart for M. Then by the definition of the integral we have
that
    ∫_M dω = ∫_{ϕ(U)} (ϕ^{-1})*(dω) = ∫_{H^m} (ϕ^{-1})*(dω) = ∫_{H^m} d( (ϕ^{-1})*ω ),

where the latter equality follows from Theorem 15.5. It follows from Lemma 16.2
that if supp(ω) ⊂ int(H^m), then the latter integral is zero and thus ∫_M dω = 0. Also,
    curl F = ∇ × F = ( ∂h/∂y − ∂g/∂z ) ∂/∂x + ( ∂f/∂z − ∂h/∂x ) ∂/∂y + ( ∂g/∂x − ∂f/∂y ) ∂/∂z.

Furthermore set

    ds = (dx, dy, dz)^T,    dS = (dydz, dzdx, dxdy)^T,
then from Stokes’ Theorem we can write the following surface and line integrals:
    ∫_S ∇ × F · dS = ∫_{∂S} F · ds,
which is usually referred to as the classical Stokes' Theorem in R³. The version in
Theorem 16.1 is the general Stokes' Theorem. Finally, let M = Ω be a closed subset
of R³ with a smooth boundary ∂Ω, and consider a 2-form ω = f dy ∧ dz + g dz ∧
dx + h dx ∧ dy. Then
    dω = ( ∂f/∂x + ∂g/∂y + ∂h/∂z ) dx ∧ dy ∧ dz.
Write
    div F = ∇ · F = ∂f/∂x + ∂g/∂y + ∂h/∂z,    dV = dxdydz,
then from Stokes’ Theorem we obtain
    ∫_Ω ∇ · F dV = ∫_{∂Ω} F · dS,
which is referred to as the Gauss Divergence Theorem. I
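As a concrete sanity check of the Divergence Theorem one can compare both sides symbolically. The sketch below is a hedged illustration, not part of the notes: the domain Ω = [0, 1]³ and the field F = (x, y², z³) are arbitrary choices.

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    coords = (x, y, z)
    F = [x, y**2, z**3]

    div_F = sum(sp.diff(F[i], c) for i, c in enumerate(coords))
    lhs = sp.integrate(div_F, (x, 0, 1), (y, 0, 1), (z, 0, 1))

    # Flux of F through the six faces of the unit cube, outward normals ±e_i.
    flux = 0
    for i, c in enumerate(coords):
        others = [(v, 0, 1) for v in coords if v != c]
        flux += sp.integrate(F[i].subs(c, 1), *others)   # face c = 1, normal +e_i
        flux -= sp.integrate(F[i].subs(c, 0), *others)   # face c = 0, normal -e_i

    print(lhs, flux)   # both evaluate to 3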
V. De Rham cohomology
and in particular
Bk (M) ⊂ Z k (M).
The sets Z k and Bk are real vector spaces, with Bk a vector subspace of Z k . This
leads to the following definition.
It is immediate from this definition that Z^0(M) consists of the smooth functions on M
that are constant on each connected component of M. Therefore, when M is con-
nected, then H^0_dR(M) ≅ R. Since Γ^k(M) = {0} for k > m = dim M, we have
that H^k_dR(M) = 0 for all k > m. For k < 0, we set H^k_dR(M) = 0.
J 17.2 Remark. The de Rham groups defined above are in fact real vector spaces,
and thus groups under addition in particular. The reason we refer to de Rham
cohomology groups instead of de Rham vector spaces is because (co)homology
theories produce abelian groups. I
An equivalence class [ω] ∈ H^k_dR(M) is called a cohomology class, and two forms
ω, ω′ ∈ Z^k(M) are cohomologous if [ω] = [ω′]. This means in particular that ω and
ω′ differ by an exact form, i.e.

    ω′ = ω + dσ.
Now let us consider a smooth mapping f : N → M; then the pullback f* acts as
follows: f*: Γ^k(M) → Γ^k(N). From Theorem 15.5 it follows that

    d f*ω = f* dω = 0,   and   f* dσ = d( f*σ),

and therefore the closed forms Z^k(M) get mapped to Z^k(N), and the exact forms
B^k(M) get mapped to B^k(N). Now define

    f*[ω] = [ f*ω],

which is well-defined by

    f*ω′ = f*ω + f* dσ = f*ω + d( f*σ)
    f*: H^k_dR(M) → H^k_dR(N),

    g* ∘ f* = ( f ∘ g)*: H^k_dR(K) → H^k_dR(N),

    id* = f* ∘ ( f^{-1})* = ( f^{-1})* ∘ f*,
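The compatibility d( f*ω) = f*(dω) underlying these definitions is again easy to test on an example. The sketch below is an illustration only: the map f(u, v) = (u², u + v) and the 1-form ω = x dy on R² are arbitrary choices, not taken from the notes; it compares the du ∧ dv coefficients of d( f*ω) and f*(dω).

    import sympy as sp

    u, v = sp.symbols('u v')
    fx, fy = u**2, u + v                 # components of f : R^2 -> R^2

    # f*ω for ω = x dy, written as a du + b dv
    a = fx * sp.diff(fy, u)
    b = fx * sp.diff(fy, v)
    d_of_pullback = sp.simplify(sp.diff(b, u) - sp.diff(a, v))   # du∧dv coefficient of d(f*ω)

    # dω = dx ∧ dy, whose pullback has coefficient det(Jf)
    pullback_of_d = sp.Matrix([[sp.diff(fx, u), sp.diff(fx, v)],
                               [sp.diff(fy, u), sp.diff(fy, v)]]).det()

    print(d_of_pullback, sp.simplify(pullback_of_d))   # both print 2*u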
Using the notion of smooth homotopies we will prove the following crucial
property of cohomology:
Theorem 18.2. Let f , g : N → M be two smoothly homotopic maps. Then for k ≥ 0
it holds for f*, g*: H^k_dR(M) → H^k_dR(N) that

    f* = g*.
J 18.3 Remark. It can in fact be proved that the above result holds for two homo-
topic (smooth) maps f and g. This is achieved by constructing a smooth homotopy
from a homotopy between the maps. I
Proof of Theorem 18.2: A map h : Γk (M) → Γk−1 (N) is called a homotopy map
between f ∗ and g∗ if
(17) dh(ω) + h(dω) = g∗ ω − f ∗ ω, ω ∈ Γk (M).
Now consider the embedding i_t : N → N × I, and the trivial homotopy between
i_0 and i_1 (just the identity map). Let ω ∈ Γ^k(N × I), and define the mapping

    h(ω) = ∫_0^1 i_{∂/∂t} ω dt,
VI. Exercises
A number of the exercises given here are taken from the Lecture notes by J. Bochnak.
Manifolds
Topological Manifolds
+ 1 Given the function g : R → R², g(t) = (cos(t), sin(t)). Show that g(R) is a mani-
fold.
+ 2 Given the set T² = {p = (p1 , p2 , p3 ) | 16(p1² + p2²) = (p1² + p2² + p3² + 3)²} ⊂ R³,
called the 2-torus.
(i ) Consider the product manifold S¹ × S¹ = {q = (q1 , q2 , q3 , q4 ) | q1² + q2² = 1, q3² + q4² = 1},
and the mapping f : S¹ × S¹ → T², given by

    f (q) = ( q1 (2 + q3 ), q2 (2 + q3 ), q4 ).

Show that f is onto and f^{-1}(p) = ( p1 /r, p2 /r, r − 2, p3 ), where r = (|p|² + 3)/4.
4 Show that
    A_{4,4} = {(p1 , p2 ) ∈ R² | p1⁴ + p2⁴ = 1},

is a manifold and A_{4,4} ≅ S¹ (homeomorphic).
+ 6 Show that
(i ) {A ∈ M_{2,2}(R) | det(A) = 1} is a manifold.
(ii ) Gl(n, R) = {A ∈ M_{n,n}(R) | det(A) ≠ 0} is a manifold.
(iii ) Determine the dimensions of the manifolds in (i) and (ii).
8 Show that in Definition 1.1 an open set U′ ⊂ R^n can be replaced by an open disc
D^n ⊂ R^n.
10 Define the Grassmann manifold Gk Rn as the set of all k-dimensional linear sub-
spaces in Rn . Show that Gk Rn is a manifold.
Differentiable manifolds
+ 17 Which of the atlases for S¹ given in Example 2, Sect. 1, are compatible? If not
compatible, are they diffeomorphic?
19 Prove that taking the union of C∞-atlases defines an equivalence relation (Hint:
use Figure 7 in Section 2).
+ 22 Show that the map f defined in Exer. 2 of 1.1 yields a smooth embedding
of the torus T² in R³.
+ 25 Let f : R^{n+1}\{0} → R^{k+1}\{0} be a smooth mapping such that f (λx) = λ^d f (x),
d ∈ N, for all λ ∈ R\{0}. This is called a homogeneous mapping of degree d.
Show that fe : PRn → PRk , defined by fe([x]) = [ f (x)], is a smooth mapping.
28 Prove Theorem 3.24 (Hint: prove the steps indicated above Theorem 3.24).
+ 32 Show that the map f : Sn → PRn defined by f (x1 , · · · , xn+1 ) = [(x1 , · · · , xn+1 )] is
smooth and everywhere of rank n (see Boothby).
+ 34 Consider the map f : R² → R given by f (x1 , x2 ) = x1³ + x1 x2 + x2³. Which level sets
of f are smooth embedded submanifolds of R²?
35 Let M be a smooth m-dimensional manifold with boundary ∂M (see Lee, pg. 25).
Show that ∂M is an (m − 1)-dimensional manifold (no boundary), and that there
exists a unique differentiable structure on ∂M such that i : ∂M ,→ M is a (smooth)
embedding.
+ 37 Use Theorem 3.27 to show that the 2-torus T2 is a smooth embedded manifold in
R4 .
Tangent spaces
(iii ) X = (x² + y²) ∂/∂x.
Cotangent spaces
+ 49 (Lee) Compute d f in coordinates, and determine the set of points where d f vanishes.
(i ) M = {(p1 , p2 ) ∈ R² : p2 > 0}, and f (p) = p1 /|p|² — standard coordinates in R².
(i ) θ = x² dx + (x + y)dy;
(i ) Let γ(t) = (t,t,t), t ∈ [0, 1], and compute ∫_γ α and ∫_γ ω.
(ii ) Let γ be a piecewise smooth curve going from (0, 0, 0) to (1, 0, 0) to (1, 1, 0) to
(1, 1, 1), and compute the above integrals.
Vector bundles
57 Show that the open Möbius band is a smooth rank-1 vector bundle over S1 .
Tensors
73 Similarly, show that T_s M is a smooth manifold in R^{ℓ+ℓ^s}, and T^r_s M is a smooth
manifold in R^{ℓ+ℓ^r+ℓ^s}.
74 Prove that the tensor bundles introduced in Section 7 are smooth manifolds.
Differential forms
+ 80 Given the differential form σ = dx1 ∧ dx2 − dx2 ∧ dx3 on R³. For each of the fol-
lowing vector fields X compute i_X σ:
(i ) X = ∂/∂x1 + ∂/∂x2 − ∂/∂x3 ;
(ii ) X = x1 ∂/∂x1 − x2² ∂/∂x3 ;
(iii ) X = x1 x2 ∂/∂x1 − sin(x3 ) ∂/∂x2 ;
Orientations
+ 86 Prove that the Klein bottle and the projective space PR2 are non-orientable.
89 Show that the projective spaces PRn are non-orientable for n = 2k, k ≥ 1.
Integration on manifolds
Integrating m-forms on Rm
+ 90 Let U ⊂ R^m be open and let K ⊂ U be compact. Show that there exists a domain
of integration D such that K ⊂ D ⊂ U.
Partitions of unity
+ 93 If U is an open covering of M for which each set U_i intersects only finitely many other
sets in U, prove that U is locally finite.
+ 95 Let S2 = ∂B3 ⊂ R3 oriented via the standard orientation of R3 , and consider the
2-form
ω = xdy ∧ dz + ydz ∧ dx + zdx ∧ dy.
Given the parametrization

    F(ϕ, θ) = (x, y, z) = ( sin(ϕ) cos(θ), sin(ϕ) sin(θ), cos(ϕ) ),

for S², compute ∫_{S²} ω.
+ 101 Let (x, y, z) ∈ R3 . Compute the exterior derivative dω, where ω is given as:
(i ) ω = e^{xyz} ;
(ii ) ω = x² + z sin(y);
(vi ) ω = dx ∧ dz.
+ 103 Verify which of the following forms ω on R2 are exact, and if so write ω = dσ:
(i ) ω = xdy − ydx;
(ii ) ω = xdy + ydx;
(iii ) ω = dx ∧ dy;
(iv) ω = (x² + y³) dx ∧ dy;
+ 104 Verify which of the following forms ω on R3 are exact, and if so write ω = dσ:
(i ) ω = xdx + ydy + zdz;
(ii ) ω = x² dx ∧ dy + z³ dx ∧ dz;
(iii ) ω = x²y dx ∧ dy ∧ dz.
106 Find a 2-form on R3 \{(0, 0, 0)} which is closed but not exact.
Stokes’ Theorem
108 Let Ω ⊂ Rn be an n-dimensional domain. Prove the analogue of the previous prob-
lem.
111 Use the standard polar coordinates for the sphere S² ⊂ R³ with radius r to compute

    ∫_{S²} ( x dy ∧ dz − y dx ∧ dz + z dx ∧ dy ) / (x² + y² + z²)^{3/2},

and use the answer to compute the volume of the r-ball B_r.
Extra’s
where f : R3 → R, and F : R3 → R3 .
+ 117 Let C = ∂∆ be the boundary of the triangle OAB in R², where O = (0, 0), A =
(π/2, 0), and B = (π/2, 1). Orient the triangle by traversing the boundary counter-
clockwise. Given the integral

    ∫_C (y − sin(x)) dx + cos(y) dy.
(i ) Compute the integral directly.
(ii ) Compute the integral via Green’s Theorem.
De Rham cohomology
120 Let M = ⋃_j M_j be a (countable) disjoint union of smooth manifolds. Prove the
isomorphism

    H^k_dR(M) ≅ ∏_j H^k_dR(M_j).
is well-defined.
for k ≥ 2.
for k ≥ 2.
VII. Solutions
1 Note that (x, y) ∈ g(R) if and only if x2 + y2 = 1. So g(R) is just the circle S1 in
the plane. The fact that this is a manifold is in the lecture notes.
2 We will show that f has an inverse by showing that the function suggested in the
exercise is indeed the inverse of f . It follows that f is a bijection and therefore it
is onto. By continuity of f and its inverse, it follows that f is a homeomorphism.
We first show that f maps S1 × S1 into T2 : We use the parametrisation of S1 × S1
given by
(s,t) 7→ (cos s, sin s, cost, sint),
where s,t ∈ [0, 2π). Suppose that q = (cos s, sin s, cost, sint) ∈ S1 × S1 . Then
We have
3 (i ) Given co-ordinate charts (U, ϕ) and (V, ψ) for S1 , we define a new co-ordinate
chart for T2 as follows; W = f (U) × f (V ), ξ : W → ϕ(U) × ψ(V ) is defined by
ξ(p) = (ϕ(x), ψ(y)) where (x, y) = f −1 (p) and p ∈ W . This shows how to make
an atlas for T2 using an atlas for S1 .
(ii ) A parametrisation for S1 × S1 was already given in the solution of 2. Using the
map f , we obtain the following parametrisation for T2 ;
(s,t) 7→ cos s · (2 + cost), sin s · (2 + cost), sint ,
where s,t ∈ [0, 2π).
(ii ) Let (U, ϕ) and (V, ψ) be charts for N and M respectively. The map ξ : U ×V →
ϕ(U) × ψ(V ) is defined by ξ(x, y) = (ϕ(x), ψ(y)). This is clearly a homeomor-
phism. This shows how to make an atlas for N × M, given atlases for N and M.
Note that if N is n-dimensional and M is m-dimensional, then N × M is n + m-
dimensional.
(ii ) Note that det : M_{n,n}(R) → R is a continuous function and that Gl(n, R) =
det^{-1}(R \ {0}). It follows that Gl(n, R) is an open subset of the space M_{n,n}(R).
But M_{n,n}(R) is homeomorphic to R^{n²}, which is a manifold, and therefore Gl(n, R)
is an open subset of a manifold. It follows from 5 that this space is a manifold. The
dimension of Gl(n, R) is n².
12 The space M is not second countable; observe that every second countable space
is separable. To see that M is not separable, note that it contains an uncountable
collection of non-empty pairwise disjoint open subsets.
15 As usual, π : R^{n+1} \ {0} → PR^n is the quotient mapping, given by π(x) = [x]. For
i = 1, . . . , n + 1, we have V_i = {x ∈ R^{n+1} \ {0} : x_i ≠ 0} and U_i = π(V_i ). For [x] ∈ U_i
we define

    ϕ_i([x]) = ( x1 /x_i , . . . , x_{i−1} /x_i , x_{i+1} /x_i , . . . , x_{n+1} /x_i ),

and its inverse is given by

    ϕ_i^{-1}(z1 , . . . , z_n ) = [(z1 , . . . , z_{i−1} , 1, z_i , . . . , z_n )].

The transition maps are given by (for i < j)

    ϕ_i ◦ ϕ_j^{-1}(z1 , . . . , z_n ) = ϕ_i ([z1 , . . . , z_{j−1} , 1, z_j , . . . , z_n ])
                                  = ( z1 /z_i , . . . , z_{i−1} /z_i , z_{i+1} /z_i , . . . , z_{j−1} /z_i , 1/z_i , z_j /z_i , . . . , z_n /z_i ).

This function is clearly a diffeomorphism. Note that

    ϕ_i (U_i ∩ U_j ) = {z ∈ R^n : z_{j−1} ≠ 0},
    ϕ_j (U_i ∩ U_j ) = {z ∈ R^n : z_i ≠ 0}.
17 All the atlases are compatible. We will compute some of the transition maps. We
list three co-ordinate charts, one from each atlas;
(1) The map ψ ◦ ϕ^{-1} : (−1, 1) \ {0} → (−∞, −2) ∪ (2, ∞) is given by

    x ↦ 2x / ( 1 − √(1 − x²) ).

(2) The map ϕ ◦ ψ^{-1} : (−∞, −2) ∪ (2, ∞) → (−1, 1) \ {0} is given by

    x ↦ 4x / (x² + 4).

(3) The map ϕ ◦ ξ^{-1} : (0, π) → (−1, 1) is given by

    θ ↦ cos θ.
All the functions mentioned above are C∞ -functions. If we check this for all tran-
sition maps, then we have shown that the atlases are compatible.
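A quick numerical spot check (illustrative only; the sample points 3.0 and 0.5 are arbitrary) confirms that the maps in (1) and (2) invert each other on their common domain:

    from math import sqrt

    def phi_psi(x):            # map (2): (-inf,-2) ∪ (2,inf) -> (-1,1)\{0}
        return 4*x / (x**2 + 4)

    def psi_phi(x):            # map (1): (-1,1)\{0} -> (-inf,-2) ∪ (2,inf)
        return 2*x / (1 - sqrt(1 - x**2))

    print(psi_phi(phi_psi(3.0)))    # 3.0 up to rounding
    print(phi_psi(psi_phi(0.5)))    # 0.5 up to rounding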
that ψ ◦ f ◦ ϕ^{-1} and ξ ◦ g ◦ (ψ′)^{-1} are diffeomorphisms. Also note that since N is a
differentiable manifold, the map ψ′ ◦ ψ^{-1} is a diffeomorphism. Now note that

    ξ ◦ g ◦ f ◦ ϕ^{-1} = (ξ ◦ g ◦ (ψ′)^{-1}) ◦ (ψ′ ◦ ψ^{-1}) ◦ (ψ ◦ f ◦ ϕ^{-1}).
The map on the right hand side is a composition of diffeomorphisms, so it is again a
diffeomorphism. It follows that the map on the left hand side is a diffeomorphism.
Since p ∈ M was arbitrary, this shows that the composition of diffeomorphisms
between manifolds is again a diffeomorphism; i.e. M and O are diffeomorphic. In
this proof we have implicitly used part (iii) of Theorem 2.14.
We have that

    J f̃|_{x=ϕ(p)} = ( 1 + y                       x                )
                    ( −x(2 + y)(1 − x²)^{1/2}     (1 − x²)^{1/2}   )
                    ( 0                           −y(1 − y²)^{1/2} )

It is not hard to verify that the matrix J f̃|_{x=ϕ(p)} has rank 2 for all p ∈ U. For example,
if p = ϕ^{-1}(0, 0), then

    J f̃|_{(0,0)} = ( 1  0 )
                  ( 0  1 )
                  ( 0  0 )
This shows that f is of rank 2 at all p ∈ U. Of course, we can also prove this for
other similar charts. So it follows that rk( f ) = 2, so it is an immersion. Since f is
a homeomorphism onto its image, it follows that f is a smooth embedding.
By assumption, f (z_i) ≠ 0, so there is some j such that f_j(z_i) ≠ 0. The fact that g̃ is
smooth follows from the smoothness of all co-ordinate functions fk of f .
32 Let U_i = {x ∈ S^n : x_i > 0} and ϕ be the projection on all co-ordinates but the i-th.
We have that

    ϕ^{-1}(z) = (z1 , . . . , z_{i−1} , √(1 − |z|²), z_i , . . . , z_n ),

where |z| < 1. Next we let V_i = π[U_i ] and

    ψ_i([x]) = ( x1 /x_i , . . . , x_{i−1} /x_i , x_{i+1} /x_i , . . . , x_{n+1} /x_i ).

We calculate f in local co-ordinates, so f̃ = ψ ◦ f ◦ ϕ^{-1},

    f̃(z) = 1/√(1 − |z|²) · (z1 , . . . , z_{i−1} , z_{i+1} , . . . , z_n ).
We note that

    ∂/∂z_i ( z_j / √(1 − |z|²) ) = (1 + 2z_i)/(1 − |z|²)^{3/2}   if i = j,
                                   2z_i/(1 − |z|²)^{3/2}          if i ≠ j,

so

    J f̃_z = (1 − |z|²)^{−3/2} ( 1 + 2z1   2z1       ⋯     2z1      )
                              ( 2z2       1 + 2z2   ⋯     2z2      )
                              ( ⋮         ⋮         ⋱     ⋮        )
                              ( 2z_n      ⋯         2z_n  1 + 2z_n )
For all z with |z| < 1, this matrix has rank n.
We compute the rank of this matrix for various values of a and (x, y) ∈ Ma ;
34 So the question really is: which values q ∈ R are regular values of f ? For this, we
compute the Jacobian. Let q ∈ R and (x, y) ∈ f^{-1}(q). The rank of J f|_{(x,y)} is 0 if and only if:

    3x² + y = 3y² + x = 0.

This gives

    y = −3x²,    y² = −x/3,

and thus

    0 = 3x² + y = 3(−3y²)² + y = 27y⁴ + y = y(27y³ + 1).
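The critical points and critical values can also be located with sympy. The sketch below is an optional illustration consistent with the computation above; it solves the two equations and evaluates f at the real solutions.

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**3 + x*y + y**3
    crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
    real_crit = [c for c in crit if all(v.is_real for v in c.values())]
    print(real_crit)                          # real critical points: (0, 0) and (-1/3, -1/3)
    print([f.subs(c) for c in real_crit])     # critical values: 0 and 1/27

Hence every level set f^{-1}(q) with q ∉ {0, 1/27} is a regular level set and therefore a smooth embedded submanifold of R².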
So for example, at the point p = (1, 0, 0) ∈ S, this matrix is of rank 1 and not of
rank 2. So q is not a regular value of f .
    Jg|_p = L ⊕ J f|_p ,

and this map is onto. As in the proof of Theorem 3.28, g is invertible on a neigh-
bourhood V of (L(p), 0) such that

    g^{-1} : (R^{n−m} × {0}) ∩ V → f^{-1}(y).

Note that if p ∈ f^{-1}(0, 3), then p1² + p1 p3 = p2² + 2p2 p3 and therefore

    p1 (p1 + p3 ) = p2 (p2 + 2p3 ),
    2p1 + p3 = p2 + 3.
It is left to the reader to verify that in this case, the rank of the Jacobian of f at p is
2.
Tp M = {(x, y, z) ∈ R3 : 2x + 2y + 3z = 0 & 2x − y + z = 0}
45 We recall that

    ∂/∂x = (1, 0),   ∂/∂r = (cos θ, sin θ),
    ∂/∂y = (0, 1),   ∂/∂θ = (−r sin θ, r cos θ).

So if r ≠ 0, then

    ∂/∂x = cos θ · ∂/∂r − (1/r) sin θ · ∂/∂θ,
    ∂/∂y = sin θ · ∂/∂r + (1/r) cos θ · ∂/∂θ.
Also recall that x = r cos θ and y = r sin θ. We obtain the following: (i )

    x ∂/∂x + y ∂/∂y = r cos θ ( cos θ · ∂/∂r − (1/r) sin θ · ∂/∂θ )
                      + r sin θ ( sin θ · ∂/∂r + (1/r) cos θ · ∂/∂θ )
                    = r cos²θ · ∂/∂r + r sin²θ · ∂/∂r
                    = r ∂/∂r.
(ii )

    −y ∂/∂x + x ∂/∂y = −r sin θ ( cos θ · ∂/∂r − (1/r) sin θ · ∂/∂θ )
                       + r cos θ ( sin θ · ∂/∂r + (1/r) cos θ · ∂/∂θ )
                     = sin²θ · ∂/∂θ + cos²θ · ∂/∂θ
                     = ∂/∂θ.
(iii )

    (x² + y²) ∂/∂x = (r² cos²θ + r² sin²θ) ( cos θ · ∂/∂r − (1/r) sin θ · ∂/∂θ )
                   = r² cos θ · ∂/∂r − r sin θ · ∂/∂θ.
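These conversions can be double-checked by inverting the Jacobian of the polar coordinate map. The sympy sketch below is illustrative and not part of the original solution; it returns the (∂/∂r, ∂/∂θ) components of each Cartesian vector field.

    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    x, y = r*sp.cos(th), r*sp.sin(th)
    J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
                   [sp.diff(y, r), sp.diff(y, th)]])

    def polar_components(Vx, Vy):
        # solve J * (a, b)^T = (Vx, Vy)^T, i.e. express V as a ∂/∂r + b ∂/∂θ
        a, b = J.solve(sp.Matrix([Vx, Vy]))
        return sp.simplify(a), sp.simplify(b)

    print(polar_components(x, y))              # (r, 0):  x ∂x + y ∂y = r ∂r
    print(polar_components(-y, x))             # (0, 1): -y ∂x + x ∂y = ∂θ
    print(polar_components(x**2 + y**2, 0))    # (r² cos θ, -r sin θ)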
49 (i )

    f (p) = f̃(x, y) = p1 /(p1² + p2²) = x/(x² + y²),

    d f = ∂f/∂x dx + ∂f/∂y dy
        = (y² − x²)/(x² + y²)² dx + (−2xy)/(x² + y²)² dy.

(ii )

    f (p) = f̃(r, θ) = r cos θ/r² = cos θ/r,

    d f = ∂f̃/∂r dr + ∂f̃/∂θ dθ
        = (−cos θ)/r² dr + (−sin θ)/r dθ.

(iv)

    f (p) = |p|²,   f̃(x1 , . . . , x_n ) = x1² + . . . + x_n²,

    d f = 2x_i dx^i.
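A short sympy computation (an optional cross-check, not in the original solution) reproduces the differentials in (i) and (iv):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x / (x**2 + y**2)
    print(sp.simplify(sp.diff(f, x)), sp.simplify(sp.diff(f, y)))
    # -> (y**2 - x**2)/(x**2 + y**2)**2  and  -2*x*y/(x**2 + y**2)**2

    xs = sp.symbols('x1:4')                   # (iv) with n = 3: f = |p|^2
    g = sum(c**2 for c in xs)
    print([sp.diff(g, c) for c in xs])        # [2*x1, 2*x2, 2*x3], i.e. d f = 2 x^i dx^i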
(i ) We obtain:

    ω = r² cos²θ (cos θ dr − r sin θ dθ) + (r cos θ + r sin θ)(sin θ dr + r cos θ dθ)
      = (r² cos³θ + r cos θ sin θ + r sin²θ) dr + (r² cos²θ + r² sin θ cos θ − r³ cos²θ sin θ) dθ.

(ii ) We obtain:

    ω = r cos θ (cos θ dr − r sin θ dθ) + r sin θ (sin θ dr + r cos θ dθ)
      = r dr.
52 We use the fact that ( f ◦ γ)∗ = γ∗ f ∗ , see Lemma 6.3. We have the following se-
quence of equalities:
    ∫_γ f*ω = ∫_{[a,b]} γ* f*ω = ∫_{[a,b]} ( f ◦ γ)*ω = ∫_{f ◦γ} ω.
53 (i )

    ∫_γ α = ∫_{[0,1]} γ*α = ∫_{[0,1]} ( −4t/(t² + 1)² + 2t/(t² + 1) + 2t/(t² + 1) ) dt
          = ∫_{[0,1]} −4t/(t² + 1)² dt + ∫_{[0,1]} 4t/(t² + 1) dt
          = [ 2/(t² + 1) ]_0^1 + [ 2 ln(t² + 1) ]_0^1 = 2 ln 2 − 1,
and:

    ∫_γ ω = ∫_{[0,1]} γ*ω = ∫_{[0,1]} ( −4t²/(t² + 1)² + 2t/(t² + 1) + 2/(t² + 1) ) dt
          = ∫_{[0,1]} ( 2(t² + 1) − 4t² )/(t² + 1)² dt + ∫_{[0,1]} 2t/(t² + 1) dt
          = [ 2t/(t² + 1) ]_0^1 + [ ln(t² + 1) ]_0^1 = 1 + ln 2.
(ii ) We can split γ into three pieces; γ1 , γ2 and γ3 all ranging from [0, 1] to R3 where
γ1 (t) = (t, 0, 0), γ2 (t) = (1,t, 0) and γ3 (t) = (1, 1,t). We now compute the integrals
as follows:
    ∫_γ α = ∫_{γ1} α + ∫_{γ2} α + ∫_{γ3} α
          = ∫_{[0,1]} 4·0/(t² + 1)² dt + ∫_{[0,1]} 2t/(t² + 1) dt + ∫_{[0,1]} 2/(1 + 1) dt
          = ∫_{[0,1]} 2t/(t² + 1) dt + ∫_{[0,1]} 1 dt = [ ln(t² + 1) ]_0^1 + [ t ]_0^1 = ln 2 + 1,
and:

    ∫_γ ω = ∫_{γ1} ω + ∫_{γ2} ω + ∫_{γ3} ω = ∫_{[0,1]} 2t/(t² + 1) dt + ∫_{[0,1]} 1 dt = ln 2 + 1.
(iii ) and (iv) : Recall that a 1-form is exact if it is of the form dg for some smooth
function g. The Fundamental Theorem for Line Integrals states that the integral of
an exact 1-form over a path γ only depends on the end-points of γ. So if either α
or ω is exact, then the answers in (i) and (ii) do not differ. So we can conclude
immediately that α is not exact. So what about ω? Well, we guess that it is exact,
but we have to find a function g : R3 → R such that ω = dg. This function g is
given by
    g(x, y, z) = 2z/(x² + 1) + ln(y² + 1).
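Taking ω = dg for this candidate potential (an assumption based on the formula just found), a short sympy check confirms that the line integral over γ(t) = (t, t, t) equals g(1, 1, 1) − g(0, 0, 0) = 1 + ln 2, matching the values computed in (i) and (ii):

    import sympy as sp

    x, y, z, t = sp.symbols('x y z t')
    g = 2*z/(x**2 + 1) + sp.log(y**2 + 1)
    omega = [sp.diff(g, c) for c in (x, y, z)]      # components of ω = dg

    gamma = (t, t, t)
    integrand = sum(w.subs({x: t, y: t, z: t}) * sp.diff(c, t)
                    for w, c in zip(omega, gamma))
    print(sp.integrate(integrand, (t, 0, 1)))                        # -> 1 + log(2)
    print(g.subs({x: 1, y: 1, z: 1}) - g.subs({x: 0, y: 0, z: 0}))   # same value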
58 Let {θ1 , . . . , θn } be the standard basis for (Rn )∗ . So we have that for x ∈ Rn , θi (x) =
xi . Then the inner product on Rn is given by
θ1 ⊗ θ1 + . . . + θn ⊗ θn .
    Sym T = (1/r!) ∑_{σ∈S_r} T^σ ,

    (λS + µT )^σ = λS^σ + µT^σ ,    (Sym T )^σ = Sym T^σ .

    Sym T^π = (1/r!) ∑_{σ∈S_r} (T^π )^σ = (1/r!) ∑_{σ∈S_r} T^{σπ}
            = (1/r!) ∑_{σπ∈S_r} T^{σπ} = (1/r!) ∑_{σ∈S_r} T^σ = Sym T.

    Alt T = (1/r!) ∑_{σ∈S_r} (−1)^σ T^σ ,

where (−1)^σ is 1 if σ is even and −1 if σ is odd. Recall that (−1)^{σ^{−1}} = (−1)^σ and
that (−1)^{σπ} = (−1)^σ (−1)^π . As in the symmetric case, note that,

    (Alt T )^π = (1/r!) ∑_{σ∈S_r} (−1)^σ T^{πσ}
               = (1/r!) ∑_{σ∈S_r} (−1)^π (−1)^π (−1)^σ T^{πσ}
               = (−1)^π Alt T.

    Alt T^π = (1/r!) ∑_{σ∈S_r} (−1)^σ T^{σπ} = . . . = (−1)^π Alt T.
75 The set Σr (Tp M) consists of all symmetric r-linear forms on Tp M and Σr M is the
union of these forms over all p ∈ M, so
Σr M = {(p, σ) : p ∈ M and σ ∈ Σr (Tp M)}.
78 Recall that the matrix for the pull-back is just the transpose of the Jacobian matrix
for f in p, so we have

    f* = (J f|_p )* = ( 2p1   −1   )*  =  ( 2p1    1    )
                      ( 1     6p2² )      ( −1     6p2² )

    f*σ_p (ξ, ϕ) = σ_q ( f_*ξ, f_*ϕ)
                 = σ_q ( (2p1 ξ1 − ξ2 , ξ1 + 6p2² ξ2 ), (2p1 ϕ1 − ϕ2 , ϕ1 + 6p2² ϕ2 ) )
                 = (2p1 ξ1 − ξ2 )(ϕ1 + 6p2² ϕ2 ) + (p1² − p2 )(ξ1 + 6p2² ξ2 )(ϕ1 + 6p2² ϕ2 )
                 = (p1² + 2p1 − p2 )ξ1 ϕ1 + (6p2² p1² − 6p2³ + 12p1 p2² )ξ1 ϕ2
                   + (6p2² p1² − 6p2³ − 1)ξ2 ϕ1 + (36p2⁴ p1² − 36p2⁵ − 6p2² )ξ2 ϕ2

    f*σ = (p1² + 2p1 − p2 ) dx1 ⊗ dx1 + (6p2² p1² − 6p2³ + 12p1 p2² ) dx1 ⊗ dx2
          + (6p2² p1² − 6p2³ − 1) dx2 ⊗ dx1 + (36p2⁴ p1² − 36p2⁵ − 6p2² ) dx2 ⊗ dx2

For vector fields X = X1 ∂/∂x + X2 ∂/∂y and Y = Y1 ∂/∂x + Y2 ∂/∂y, we get that

    f*σ(X,Y ) = (p1² + 2p1 − p2 )X1 Y1 + (6p2² p1² − 6p2³ + 12p1 p2² )X1 Y2 + etc.
The fact that both σ and f ∗ σ are smooth follows from the fact that the component
functions of these 2-tensor fields are smooth.
81

    − 2p1 dx1 − dx2 ⊗ ( cos(p1 + p2 )dx1 + cos(p1 + p2 )dx2 ) ]
    = sin²(p1 + p2 ) [ 2p1 cos(p1 + p2 ) dx1 ∧ dx1 + 2p1 cos(p1 + p2 ) dx2 ∧ dx1
                       − cos(p1 + p2 ) dx1 ∧ dx2 − cos(p1 + p2 ) dx2 ∧ dx2 ]
    = sin²(p1 + p2 ) cos(p1 + p2 ) [ 2p1 dx2 ∧ dx1 − dx1 ∧ dx2 ]
    = − sin²(p1 + p2 ) cos(p1 + p2 )(1 − 2p1 ) dx1 ∧ dx2 .
82 (i ) Consider the component functions of iX σ and note that these are all smooth.
The second equality follows from the fact that σ ∈ Γr (M) and hence it is alternat-
ing. We have just proved that for all X1 , . . . , Xr−2 ∈ F(M), we have
But this means that all component functions of i_X ◦ i_X σ are identically 0, and hence
i_X ◦ i_X σ = 0. Since σ was arbitrary, it follows that i_X ◦ i_X = 0.
    σ_i(X) ∧ σ1 ∧ . . . ∧ σ̂_i ∧ . . . ∧ σ_r = (−1)^{i−1} σ1 ∧ . . . ∧ σ_i(X) ∧ . . . ∧ σ_r .
Now the proof is by induction on r using (iii) of Lemma 10.2. If r = 2, then this
is just (iii) of Lemma 10.2. Now assume that it is true for r and consider the case
r + 1. Using (iii) of Lemma 10.2 we have
    i_X σ = i_X (σ1 ∧ . . . ∧ σ_r ∧ σ_{r+1} )
          = i_X (σ1 ∧ . . . ∧ σ_r ) ∧ σ_{r+1} + (−1)^r σ1 ∧ . . . ∧ σ_r ∧ (i_X σ_{r+1} )
          = ∑_i (−1)^{i−1} σ_i(X) ∧ σ1 ∧ . . . ∧ σ̂_i ∧ . . . ∧ σ_{r+1} .
84 We assume that n ≥ 2, the case for n = 1 is done in the lecture notes. Using
stereographic projections, it is possible to find two charts (U, ϕ) and (V, ψ) for Sn
which together form an atlas for Sn and such that U ∩ V is connected. Since the
determinant is a continuous function, it follows that the function f : U ∩ V → R
defined by
f (p) = det J(ψ ◦ ϕ−1 ) p .
136
is also continuous. Note that f (p) 6= 0 for all p and since U ∩ V is connected it
follows that either f (p) > 0 or f (p) < 0 for all p ∈ U ∩V . In the first case we are
done. If f (p) < 0 for all p ∈ U ∩V , then let ψ0 be defined by
ψ0 (x1 , x2 , . . . , xn ) = ψ(−x1 , x2 , . . . , xn ).
It is now easily verified that the Jacobian of the transition map ψ0 ◦ ϕ−1 has positive
determinant everywhere, hence Sn is orientable.
86 We only do the case for the Klein bottle. Note that if a manifold M is an open
subset of an orientable manifold N, then M is orientable. So non-orientability of
the Klein bottle follows from the fact that the Möbius strip is a non-orientable open
subspace of the Klein bottle.
87 We shall give an atlas for which the Jacobian of the transition maps has positive
determinant everywhere. This gives an induced orientation. So recall that PR3 is
a quotient space of R⁴ \ {0} with projection map π. We let U_i = {x ∈ R⁴ : x_i ≠ 0}
and V_i = π(U_i ). The homeomorphisms ϕ_i : V_i → R³ are given by (THESE ARE
NOT CORRECT: FIX THIS)

    ϕ1 ([x]) = ( x2 /x1 , −x3 /x1 , −x4 /x1 ),    ϕ1^{-1}(z) = (−1, −z1 , z2 , z3 ),
    ϕ2 ([x]) = ( x1 /x2 , −x3 /x2 , −x4 /x2 ),    ϕ2^{-1}(z) = (−z1 , −1, z2 , z3 ),
    ϕ3 ([x]) = ( x1 /x3 , −x2 /x3 , −x4 /x3 ),    ϕ3^{-1}(z) = (−z1 , z2 , −1, z3 ),
    ϕ4 ([x]) = ( x1 /x4 , −x2 /x4 , −x3 /x4 ),    ϕ4^{-1}(z) = (−z1 , z2 , z3 , −1).
This matrix has positive determinant. The computations for the other transition
maps are naturally left to the reader.
90 Since the open rectangles form a basis for the topology on R^m, we may fix an open
covering U of K such that ∪U ⊂ U and every member of U is a rectangle in R^m.
Since K is compact, we may pick a finite collection V ⊂ U such that ∪V also covers
K. Now let D = ∪V. Then D is the finite union of rectangles and therefore its
95 For completeness, note that F : D → S2 where D = [0, π]×[0, 2π]. We first compute
F ∗ ω. Note that
    F*ω = sin ϕ cos θ (cos ϕ sin θ dϕ + sin ϕ cos θ dθ) ∧ (− sin ϕ dϕ)
          + sin ϕ sin θ (− sin ϕ dϕ) ∧ (cos ϕ cos θ dϕ − sin ϕ sin θ dθ)
          + cos ϕ (cos ϕ cos θ dϕ − sin ϕ sin θ dθ) ∧ (cos ϕ sin θ dϕ + sin ϕ cos θ dθ)
        = − sin³ϕ cos²θ dθ ∧ dϕ + sin³ϕ sin²θ dϕ ∧ dθ
          − cos²ϕ sin ϕ sin²θ dθ ∧ dϕ + cos²ϕ sin ϕ cos²θ dϕ ∧ dθ
        = sin ϕ dϕ ∧ dθ.
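The same pullback, and the value of the integral, can be re-derived symbolically. The sketch below (illustrative, not part of the original solution) uses the 2×2 Jacobian minors of the parametrization:

    import sympy as sp

    phi, theta = sp.symbols('phi theta')
    X = [sp.sin(phi)*sp.cos(theta), sp.sin(phi)*sp.sin(theta), sp.cos(phi)]

    def minor(i, j):
        return sp.Matrix([[sp.diff(X[i], phi), sp.diff(X[i], theta)],
                          [sp.diff(X[j], phi), sp.diff(X[j], theta)]]).det()

    # coefficients of dy∧dz, dz∧dx, dx∧dy are x, y and z respectively
    coeff = sp.simplify(X[0]*minor(1, 2) + X[1]*minor(2, 0) + X[2]*minor(0, 1))
    print(coeff)                                                      # -> sin(phi)
    print(sp.integrate(coeff, (phi, 0, sp.pi), (theta, 0, 2*sp.pi)))  # -> 4*pi

This gives ∫_{S²} ω = 4π for this orientation, consistent with 3 · Vol(B³).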
96 We first compute F ∗ ω.
    F*ω = sin ϕ cos θ (cos ϕ sin θ dϕ + sin ϕ cos θ dθ) ∧ (− sin ϕ dϕ)
          + cos ϕ (cos ϕ sin θ dϕ + sin ϕ cos θ dθ) ∧ (cos ϕ cos θ dϕ − sin ϕ sin θ dθ)
        = − sin³ϕ cos²θ dθ ∧ dϕ − cos²ϕ sin²θ sin ϕ dϕ ∧ dθ + cos²ϕ sin ϕ cos²θ dθ ∧ dϕ
        = (sin³ϕ cos²θ − sin ϕ cos²ϕ) dϕ ∧ dθ,
and thus:
    ∫_{S²} ω = ∫_D F*ω = ∫_0^{2π} ∫_0^π ( sin³ϕ cos²θ − sin ϕ cos²ϕ ) dϕ dθ
             = ∫_0^{2π} ( cos²θ [ −cos ϕ + (1/3) cos³ϕ ]_0^π + [ (1/3) cos³ϕ ]_0^π ) dθ
             = ∫_0^{2π} ( (4/3) cos²θ − 2/3 ) dθ
             = [ (4/3)( θ/2 + sin 2θ/4 ) − (2/3)θ ]_0^{2π}
             = 0.
We can avoid this calculation if we use Stokes’ Theorem: first note that
dω = dx ∧ dy ∧ dz + dz ∧ dy ∧ dx = 0,
so we obtain:

    ∫_{S²} ω = ∫_{B³} dω = ∫_{B³} 0 = 0.
100 We have

    g*dx1 = dr,    g*dx3 = dt,
    g*dx2 = ds,    g*dx4 = 2(2r − t)(2dr − dt).
Let ω be the form to be integrated. We have
g∗ ω = sds ∧ dt ∧ (2(2r − t)(2dr − dt)) + 2rtdr ∧ ds ∧ dt
= 4s(2r − t)ds ∧ dt ∧ dr + 2rtdr ∧ ds ∧ dt
= (8rs − 4st + 2rt)dr ∧ ds ∧ dt.
We now compute the integral:

    ∫_M ω = ∫_D g*ω
          = ∫_0^1 ∫_0^1 ∫_0^1 (8rs − 4st + 2rt) dr ds dt
          = ∫_0^1 ∫_0^1 (4s − 4st + t) ds dt
          = ∫_0^1 (2 − t) dt = 3/2.
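The triple integral can be confirmed with sympy (an optional check, mirroring the computation above):

    import sympy as sp

    r, s, t = sp.symbols('r s t')
    print(sp.integrate(8*r*s - 4*s*t + 2*r*t, (r, 0, 1), (s, 0, 1), (t, 0, 1)))   # -> 3/2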
101 (i ) dω = yze^{xyz} dx + xze^{xyz} dy + xye^{xyz} dz, (ii ) dω = 2x dx + z cos y dy + sin y dz,
(iii ) dω = 0 and ω = dσ, where σ = x² + y², (iv) dω = dx ∧ dy − dx ∧ dz, (v) dω =
104 All of these forms are exact: (i ) σ = (1/2)(x² + y² + z²), (ii ) σ = (1/3)x³ dy + xz³ dz,
(iii ) σ = (1/3)x³y dy ∧ dz.
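Each potential can be verified by computing dσ; for instance, for 104(ii) the sketch below (illustrative; it simply encodes a 1-form on R³ by its three component functions) recovers ω = x² dx ∧ dy + z³ dx ∧ dz.

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    P, Q, R = sp.Integer(0), x**3/3, x*z**3      # σ = P dx + Q dy + R dz

    d_sigma = {'dx^dy': sp.diff(Q, x) - sp.diff(P, y),
               'dx^dz': sp.diff(R, x) - sp.diff(P, z),
               'dy^dz': sp.diff(R, y) - sp.diff(Q, z)}
    print(d_sigma)    # {'dx^dy': x**2, 'dx^dz': z**3, 'dy^dz': 0}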
So we obtain:
114
    f*ω = s² cos t sin t [ (cos t ds − s sin t dt) ∧ (sin t ds + s cos t dt) ∧ du ]
        = s³ cos t sin t (sin²t + cos²t) ds ∧ dt ∧ du
        = s³ cos t sin t ds ∧ dt ∧ du.
= 0.
This shows that ω is closed. Now, to see that ω is not exact, note that by 107 we
have

    ∫_{S²} ω = 3 · Vol(B³) > 0.
Now it follows from Stokes’ Theorem that if ω is exact, this integral is zero, so it
follows that ω is not exact.
116 Denote the parametrization mapping by f . The pullback of the first form is given
by

So we obtain

    ∫_M dx1 ∧ dx2 ∧ dx4 = − ∫_{[0,1]³} ds ∧ dt ∧ du = −1.
117 For the direct computation, let f : [0, π/2] → R² be given by f (t) = (t, (2/π)t), i.e.,
f is a (reversed) parametrization of the line BO. Let