MATH3031 CH 3 Notes
3.1: Differential 1-Forms

These are the “linear combinations” of vector fields that we will be concerned with.
Example 3.2: The first example is arguably the most important. Let φ ∈ C ∞ (U ), i.e.
φ : U → R is a differentiable function. We define dφ ∈ Ω1 (U ) as follows: Given any
X ∈ Vect(U ), let dφ(X) : U → R be the function,
dφ(X)(p) = Dφ(p)(X(p)), p ∈ U.
We will see in later parts of this Chapter how it naturally extends to an operator on differential forms of all degrees. For now, let’s note some of the basic properties of this operator, including its relation to the gradient operator ∇ : C∞(U) → Vect(U):

Lemma 3.3: For any φ, ψ ∈ C∞(U) and any X ∈ Vect(U), the following identities hold:

(a) d(φψ) = φ dψ + ψ dφ ∈ Ω1(U);

(b) dφ(X) = ⟨∇φ, X⟩ ∈ C∞(U);

(c) dφ = Σⁿᵢ₌₁ (∂φ/∂x^i) dx^i ∈ Ω1(U).
Proof: For (a), we have to show that the expressions on both sides give us the same map
from Vect(U ) to C ∞ (U ). To do this, let X ∈ Vect(U ) and p ∈ U be given. Then using
the definition of d from Example 3.2, and the product rule for derivatives of functions in
C ∞ (U ):
d(φψ)(X)(p) = D(φψ)(p)(X(p))
= [φ(p)Dψ(p) + ψ(p)Dφ(p)](X(p))
= φ(p)Dψ(p)(X(p)) + ψ(p)Dφ(p)(X(p))
= φdψ(X)(p) + ψdφ(X)(p).
For (b), we have to show that the expressions on both sides give us the same func-
tion in C∞(U), so let p ∈ U be given. Then, since X(p) ∈ Rⁿ, we have X(p) = (X¹(p), . . . , Xⁿ(p))ᵀ for X¹(p), . . . , Xⁿ(p) ∈ R, and using this we calculate

$$\begin{aligned}
d\varphi(X)(p) &= D\varphi(p)(X(p)) \\
&= \begin{pmatrix} D_1\varphi(p) & \cdots & D_n\varphi(p) \end{pmatrix} \begin{pmatrix} X^1(p) \\ \vdots \\ X^n(p) \end{pmatrix} \\
&= \sum_{i=1}^n D_i\varphi(p)\, X^i(p) \\
&= \langle \nabla\varphi(p),\, X(p) \rangle \\
&= \langle \nabla\varphi,\, X \rangle(p).
\end{aligned}$$
Finally, to prove (c) we must show that both sides give us the same map from Vect(U )
to C ∞ (U ), but since dφ, dxi ∈ Ω1 (U ) we know that both maps respect linear combinations
of vector fields. However, any X ∈ Vect(U ) can be written as
X = (X¹, . . . , Xⁿ) = X¹E₁ + . . . + XⁿEₙ,
where X j ∈ C ∞ (U ) and Ej ∈ Vect(U ) is given by Ej (p) = ej ∈ Rn (the jth standard
basis vector). Hence, it suffices to prove that
$$d\varphi(E_j) = \left( \sum_{i=1}^n \frac{\partial \varphi}{\partial x^i}\, dx^i \right)(E_j)$$
holds for each j = 1, . . . , n. However, for each p ∈ U, we see, using (b) and the definition of E_j, that dφ(E_j)(p) = ⟨∇φ(p), e_j⟩ = ∂φ/∂x^j(p), while applying (b) to the coordinate function x^i gives dx^i(E_j)(p) = ⟨∇x^i(p), e_j⟩ = ⟨e_i, e_j⟩ = δ^i_j, so the right-hand side also evaluates to ∂φ/∂x^j(p), as required. (QED)
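These identities are easy to check on a computer. Here is a small sympy sketch (my own illustration, not part of the notes' formal development, with hypothetical sample functions φ, ψ on U = R²) verifying the product rule (a), with d represented by its coefficient tuple from (c):

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = x**2 * y            # hypothetical sample function in C^inf(U), U = R^2
psi = sp.sin(x) + y       # hypothetical sample function

# represent d(f) by its coefficient tuple (f_x, f_y) in the basis dx, dy,
# as in Lemma 3.3(c)
def d(f):
    return (sp.diff(f, x), sp.diff(f, y))

# check (a): d(phi*psi) = phi*d(psi) + psi*d(phi), coefficient by coefficient
lhs = d(phi * psi)
rhs = tuple(phi*dp + psi*df for dp, df in zip(d(psi), d(phi)))
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
```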
Example 3.4 (First-Year Calculus): It turns out you already know the above identities
and have been using them for at least the past three years. If n = 1 and U = (a, b) ⊂ R
is an open interval, then you are familiar with the technique of “u-substitution” used to
calculate integrals. For example, to compute an integral like this,
$$\int_a^b (1 - \sin^2 x)\cos x \, dx,$$
the obvious thing to try is the substitution u = sin x (this is only a legitimate substitution if (a, b) is an interval where sin x is one-to-one; otherwise we split the interval into such sub-intervals), and use the identity du = cos x dx to simplify the integral as
$$\int_{\sin a}^{\sin b} (1 - u^2)\, du,$$
which can be easily calculated. As justification, you might have been told something like,
“since du/dx = cos x, we can multiply by dx on both sides to get du = cos x dx”, but if your
lecturers were honest with you they should have admitted that “multiplying by dx” is
nonsense. Now we can see that it actually is possible to make sense of this identity, but
as an identity between differential 1-forms on U = (a, b). By Lemma 3.3(c), we see that
for the function u = sin x ∈ C ∞ (U ), we have
$$du = \frac{du}{dx}\, dx \in \Omega^1(U).$$
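If you want to see this identity at work computationally, here is a small sympy sketch (with hypothetical endpoints a, b chosen so that sin is one-to-one on (a, b)) confirming that the two integrals above agree:

```python
import sympy as sp

x, u = sp.symbols('x u')
a, b = 0, sp.pi/3   # hypothetical endpoints; sin is one-to-one on (0, pi/3)

lhs = sp.integrate((1 - sp.sin(x)**2) * sp.cos(x), (x, a, b))
rhs = sp.integrate(1 - u**2, (u, sp.sin(a), sp.sin(b)))
assert sp.simplify(lhs - rhs) == 0
```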
This example suggests that what we’ve secretly been doing, since First-Year Calculus,
is integrating differential 1-forms on intervals in the real line. Likewise, the path integrals
defined in Multivariable Calculus (and the Contour Integrals in Complex Analysis) should
really be thought of as integrals of differential 1-forms on 1-manifolds (curves). Before
explaining this, we define differential 1-forms on a general manifold M , as well as the
operator d : C ∞ (M ) → Ω1 (M ):
Given φ ∈ C∞(M), we define dφ ∈ Ω1(M) just as before: dφ(X)(p) = Dφ(p)(X(p)) ∈ R, for any X ∈ Vect(M) and p ∈ M.
One of the things that makes differential 1-forms much better to work with than vector
fields, is that there is a natural way to “move them around” using differentiable maps.
This is called the “pullback operation”:

Definition 3.6 (Pullback): Given a differentiable map ϕ : M → N and a differential 1-form ω ∈ Ω1(N), the pullback ϕ*ω ∈ Ω1(M) is defined by

$$\varphi^*\omega(X)(p) := \omega(\varphi(p))\big(D\varphi(p)(X(p))\big) \in \mathbb{R},$$

for any X ∈ Vect(M) and any p ∈ M, where Dϕ(p) : TₚM → T₍ϕ(p)₎N is given by Definition 2.22, and ω(ϕ(p)) : T₍ϕ(p)₎N → R is the (linear) map given by letting ω(ϕ(p))(v) = ω(Y)(ϕ(p)) for Y ∈ Vect(N) any vector field with Y(ϕ(p)) = v ∈ T₍ϕ(p)₎N.
Given a differentiable path γ : I → U, where I = (a, b) ⊂ R is an open interval and U ⊂ Rⁿ is open, and given ω = ω₁dx¹ + . . . + ωₙdxⁿ ∈ Ω1(U), we can compute the pullback γ*ω ∈ Ω1(I) by evaluating it on the vector field E₁ ∈ Vect(I) (recall that E₁(t) = e₁ ∈ R for all t ∈ I). Using Definition 3.6, we calculate, for t ∈ I,

$$\gamma^*\omega(E_1)(t) = \omega(\gamma(t))\big(D\gamma(t)(e_1)\big) = \omega(\gamma(t))(\dot\gamma(t)) = \sum_{i=1}^n \omega_i(\gamma(t))\,\dot\gamma^i(t).$$
Now, if we write t = x1 ∈ C ∞ (I) for the 1st (and only) coordinate function on the
real interval I = (a, b), then we have dt(E1 ) = 1, the constant function. Hence, the above
calculation shows that for the vector field X = Xω = ω1 E1 + . . . + ωn En ∈ Vect(U ) which
has the same coordinate functions as the functions used to define ω, we have
$$\gamma^*\omega = \langle X \circ \gamma,\, \dot\gamma \rangle\, dt \in \Omega^1(I).$$
This means, in particular, that the vector path integral of X along γ equals the integral
of the pullback of ω by γ:
$$\int_\gamma X \cdot d\gamma := \int_a^b \langle X(\gamma(t)),\, \dot\gamma(t) \rangle\, dt = \int_I \gamma^*\omega.$$
This is not just a more elegant way to write the formula for the vector path integral. It
also allows us to use the fact that pullbacks and exterior derivatives commute, a fact
which we state without proof to end this section.

Lemma 3.8: For any differentiable map ϕ : M → N and any f ∈ C∞(N), with pullback ϕ*f := f ∘ ϕ ∈ C∞(M), we have

$$d(\varphi^* f) = \varphi^*(df) \in \Omega^1(M).$$
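To see the pullback machinery in action, here is a small sympy sketch (a hypothetical example, not one worked in these notes): pulling back ω = −y dx + x dy along the unit circle γ(t) = (cos t, sin t) and integrating over I = (0, 2π):

```python
import sympy as sp

t = sp.symbols('t')
# hypothetical example: omega = -y dx + x dy on R^2, gamma(t) = (cos t, sin t)
gx, gy = sp.cos(t), sp.sin(t)
P = lambda x, y: -y     # coefficient omega_1 of dx
Q = lambda x, y: x      # coefficient omega_2 of dy

# gamma* omega = <X(gamma(t)), gamma'(t)> dt, as derived above
integrand = P(gx, gy)*sp.diff(gx, t) + Q(gx, gy)*sp.diff(gy, t)
print(sp.integrate(integrand, (t, 0, 2*sp.pi)))   # -> 2*pi
```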
3.1 TUT Problems

Consider the subsets U = {(r, θ) | r > 0} ⊂ R², V = {(x, y) | (x, y) ≠ (0, 0)} ⊂ R², the
map f : U → V defined by
Calculate the pullback f ∗ ω ∈ Ω1 (U ), using the fact that any differential 1-form η ∈
Ω1(U) can be expressed as η = g dr + h dθ, for some differentiable real-valued functions
g = g(r, θ), h = h(r, θ) ∈ C ∞ (U ).
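Assuming f is the polar coordinate map f(r, θ) = (r cos θ, r sin θ), such pullback computations can be mechanised; the following sympy sketch is only an illustration (the 1-form used below is a hypothetical example, not necessarily the ω of the problem):

```python
import sympy as sp

r, th, X, Y = sp.symbols('r theta x y')
fx, fy = r*sp.cos(th), r*sp.sin(th)   # assumed polar map f(r, theta)

def pullback(P, Q):
    """Pull back omega = P dx + Q dy along f, returning (g, h) with
    f*omega = g dr + h dtheta."""
    Pf, Qf = P.subs({X: fx, Y: fy}), Q.subs({X: fx, Y: fy})
    g = Pf*sp.diff(fx, r)  + Qf*sp.diff(fy, r)
    h = Pf*sp.diff(fx, th) + Qf*sp.diff(fy, th)
    return sp.simplify(g), sp.simplify(h)

# e.g. the "angle form" omega = (-y dx + x dy)/(x^2 + y^2) on V:
print(pullback(-Y/(X**2 + Y**2), X/(X**2 + Y**2)))  # -> (0, 1), i.e. f*omega = dtheta
```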
Use the definitions and the Chain Rule to prove Lemma 3.8:
d(f ∗ φ) = f ∗ dφ ∈ Ω1 (U ).
Xω = ω1 E1 + . . . + ωn En ∈ Vect(U ).
3.2: Tensor Algebra
From Chapter 3.1, we have some idea that differential 1-forms are good for integra-
tion, at least if we are integrating on the real line or along a path. In this section, we are
going to develop the algebra needed to generalise from differential 1-forms to differential
k-forms, for all integers k ≥ 0. Besides being the “right” things to integrate over higher-
dimensional manifolds, this algebra and the concepts in differential geometry related to it
turn out to be useful in a wide range of geometric and physical settings (if you’ve taken
a course in general relativity, you will know that you end up drowning in tensors).
Before we dive into the algebra, for motivation it’s worth thinking about the ge-
ometric significance of the determinant. Recall from Linear Algebra (or take it as a
challenge/exercise to prove) that if v1 , v2 ∈ R2 are two vectors and
$$P^2 = P(v_1, v_2) = \{\alpha_1 v_1 + \alpha_2 v_2 \mid 0 \le \alpha_1, \alpha_2 \le 1\}$$
is the parallelogram in R2 with v1 and v2 as its sides, then the area of P 2 is given by
the absolute value of the determinant of the 2 × 2 matrix with v1 and v2 as its column
vectors. Similarly, if v₁, v₂, v₃ ∈ R³, then the volume of the parallelepiped (the 3-dimensional analogue of a parallelogram) spanned by them,

$$P^3 = P(v_1, v_2, v_3) = \{\alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 \mid 0 \le \alpha_1, \alpha_2, \alpha_3 \le 1\},$$

is given by

$$\mathrm{vol}(P^3) = |\det(v_1, v_2, v_3)|.$$
On the other hand, recall from TUT problems Ch. 2.1, Question 1, and TUT problems
Ch. 2.2-4, Question 3, that the determinant of n × n matrices can be thought of as a
multilinear map
det : (Rn )n → R.
This is geometric evidence that the algebra of multilinear maps, or tensor algebra, will be
important for integration in higher dimensions.
Definition 3.9 (k-Tensors): Let V be a real vector space and let k ≥ 1 be an integer. A k-tensor on V is a map T : Vᵏ → R which is multilinear, i.e. linear in each of its k input vectors separately.

If it helps, think of a k-tensor on V as any map that “eats k vectors and spits out a
number”, and does this in a way that is linear with respect to each of the vectors it eats.
The set of all k-tensors on V is denoted T k (V ). Since its elements are maps that take
values in the real numbers, we can add any two k-tensors together, and we can multiply
them by scalars: for T, S ∈ T k (V ) and α, β ∈ R, we can define αT + βS ∈ T k (V ) by
letting
(αT + βS)(v₁, . . . , vₖ) = αT(v₁, . . . , vₖ) + βS(v₁, . . . , vₖ) ∈ R
for any input vectors v1 , . . . , vk ∈ V .
Also, if W is another vector space and L : V → W is a linear map, we can define the
pullback map L* : Tᵏ(W) → Tᵏ(V): Given T ∈ Tᵏ(W), define L*T ∈ Tᵏ(V) by letting

$$L^*T(v_1, \dots, v_k) := T(Lv_1, \dots, Lv_k) \in \mathbb{R}$$

for any input vectors v₁, . . . , vₖ ∈ V.
For k = 0, we simply set T⁰(V) := R.
This can either be taken as a definition, or deduced by applying Definition 3.9 in the vacuous case when there are 0 input vectors.
Examples: For V = Rⁿ, the Euclidean inner product is a 2-tensor, ⟨·, ·⟩ ∈ T²(V): given any v, w ∈ Rⁿ as input, the output is ⟨v, w⟩ = Σⁿᵢ₌₁ vⁱwⁱ ∈ R. Likewise, the determinant, regarded as a function of the n columns of an n × n matrix, is an n-tensor, det ∈ Tⁿ(V). (See Question 1 of Ch. 2.1 TUTs and Question 3 of Ch. 2.2-4 TUTs.)
Finally, any differential 1-form ω ∈ Ω1(M) on a manifold M determines, at each point p ∈ M, a 1-tensor ω(p) ∈ T¹(TₚM): given v ∈ TₚM, choose any X ∈ Vect(M) with X(p) = v and let

ω(p)(v) = ω(X)(p).
(It is a fact that such an X can always be chosen, and that the value of ω(p)(v) does not
depend on which vector field X is chosen, as long as X(p) = v; this is an exercise you can
do if you feel like it.)
While Definition 3.9 gives us a lot of examples, we need more structure to be able to work and compute with them. For one thing, we want to focus on k-tensors that have more “symmetry”, and we also want to have a kind of “multiplication” on the tensors we work with:
Definition (Alternating Tensors): A k-tensor T ∈ Tᵏ(V) is called alternating if interchanging any two of its input vectors changes the sign of the output: for any v₁, . . . , vₖ ∈ V and any 1 ≤ i < j ≤ k,

T(v₁, . . . , vᵢ, . . . , vⱼ, . . . , vₖ) = −T(v₁, . . . , vⱼ, . . . , vᵢ, . . . , vₖ).

The set of all alternating k-tensors on V is denoted Λᵏ(V). We call an element ω ∈ Λᵏ(V) an alternating k-tensor
or (linear) k-form. The number k is called the degree of ω.
For V = Rn , the properties of the determinant tell us that det ∈ Λn (V ), since the
sign of the determinant changes whenever we interchange two columns (or rows) of a
n × n matrix. In contrast, the Euclidean inner product ⟨·, ·⟩ ∈ T²(V) is not an element of Λ²(V), because

⟨v, w⟩ = ⟨w, v⟩, v, w ∈ V,

whereas for an alternating 2-form ω ∈ Λ²(V) we must have (by definition) ω(v, w) = −ω(w, v).
Besides being a linear subspace of T k (V ), the set of linear k-forms Λk (V ) has an addi-
tional structure that we’ll make use of, which allows us to multiply two alternating tensors
and get a third alternating tensor (the degrees add). The following Theorem tells us how
this is done, and gives some consequences. We will not worry about the proof in this
course, nor about how to define the product in general, since the stated properties of the product suffice for most of the calculations and further results we need, and since we can give the definitions explicitly in the (low-dimensional) cases that are of most interest to us:
Theorem 3.12 (Wedge Product): For any linear k-form ω ∈ Λk (V ) and linear l-form
η ∈ Λl (V ), there is a linear (k + l)-form ω ∧ η ∈ Λk+l (V ). This defines a product (called
the wedge product) on the space of all alternating tensors of all degrees, and this wedge
product has the following properties (for any ω, ωi ∈ Λk (V ), η, ηi ∈ Λl (V ), θ ∈ Λj (V ),
α, β ∈ R):
(a) (αω1 + βω2 ) ∧ η = α(ω1 ∧ η) + β(ω2 ∧ η);
(b) ω ∧ (αη1 + βη2 ) = α(ω ∧ η1 ) + β(ω ∧ η2 );
(c) (ω ∧ η) ∧ θ = ω ∧ (η ∧ θ);
(d) ω ∧ η = (−1)kl η ∧ ω.
Moreover, if L : V → W is any linear map between two vector spaces, and ω ∈ Λk (W ),
η ∈ Λl (W ), we have
(e) L∗ (ω ∧ η) = (L∗ ω) ∧ (L∗ η) ∈ Λk+l (V ).
To see how these properties allow us to do most calculations of interest, first note
that Λ0 (V ) = T 0 (V ) = R, and Λ1 (V ) = T 1 (V ) (every 1-tensor is, vacuously, alternating
since it is not possible to interchange the order of two input vectors as a 1-tensor only
eats one vector from V as input). But Λ1 (V ) = T 1 (V ) can be assigned a basis once we
are given a basis for V . If v1 , . . . , vn is any given basis for V , then we define elements
ϕ1 , . . . , ϕn ∈ T 1 (V ) by letting, for i = 1, . . . , n, the linear map ϕi : V → R be given by
$$\varphi^i(v_j) := \delta^i_j = \begin{cases} 1 & \text{if } i = j; \\ 0 & \text{if } i \ne j; \end{cases}$$
and then extending this definition linearly, i.e. for any vector w ∈ V , we write w =
α1 v1 + . . . + αn vn for α1 , . . . , αn ∈ R and let
$$\varphi^i(w) = \sum_{j=1}^n \alpha^j \varphi^i(v_j) = \sum_{j=1}^n \alpha^j \delta^i_j = \alpha^i.$$
This is well-defined by the properties of a basis, and the maps ϕi : V → R are clearly
linear, so ϕ1 , . . . , ϕn ∈ T 1 (V ) = Λ1 (V ). In fact, it is a Theorem that the elements
ϕ1 , . . . , ϕn form a basis of Λ1 (V ), which is referred to as the dual basis corresponding to
the basis v1 , . . . , vn .
It turns out that by combining the elements of this dual basis, using the wedge product
of Theorem 3.12, we can form bases of Λk (V ) for all k ≥ 0, and calculate the dimensions
of these as vector spaces. Before stating the general result, let’s look at a few examples
for low-dimensional vector spaces. Since any two vector spaces of the same dimension are
isomorphic (once we’ve chosen a basis), we’ll just look at Rn for small values of n, and
we’ll fix ideas by using the standard basis e1 , . . . , en . For reasons that will become clear in
the next section, we denote the dual basis in this case by dx1 , . . . , dxn (or even by dx, dy
or dx, dy, dz when n = 2, 3). In other words, dxⁱ = ϕⁱ, so the defining identities are dxⁱ(eⱼ) = δ^i_j, for i, j = 1, . . . , n.
The key fact in our calculations below will be the identities (a) and (b) of Theorem
3.12, as well as identity (d) for the case k = l = 1, which tells us that for i, j = 1, . . . , n,
we have
dxi ∧ dxj = −dxj ∧ dxi .
(In particular, dxi ∧ dxi = 0). On the other hand, for ω1 , ω2 ∈ Λ1 (V ), there is a simple
formula we could guess as the definition of ω1 ∧ ω2 ∈ Λ2 (V ) that has these properties:
Given v, w ∈ V, guess that

$$\omega_1 \wedge \omega_2(v, w) := \omega_1(v)\,\omega_2(w) - \omega_1(w)\,\omega_2(v).$$
One can check that this really does define an element of Λ2 (V ), and that the relevant
properties of Theorem 3.12 are satisfied.
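Here is a quick numerical sanity check (my own sketch, not from the notes) of this guessed formula on V = R², representing 1-forms by their coefficient vectors; it also anticipates the identity dx ∧ dy = det proved in Example 3.13 below:

```python
import numpy as np

dx, dy = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # dual basis as row vectors

def wedge(w1, w2):
    """(w1 ^ w2)(v, w) = w1(v) w2(w) - w1(w) w2(v), with wi(v) = wi . v"""
    return lambda v, w: (w1 @ v)*(w2 @ w) - (w1 @ w)*(w2 @ v)

rng = np.random.default_rng(0)
v, w = rng.standard_normal(2), rng.standard_normal(2)
# dx ^ dy evaluates to the determinant of the matrix with columns v, w:
assert np.isclose(wedge(dx, dy)(v, w), np.linalg.det(np.column_stack([v, w])))
```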
Example 3.13 (n = 2): For V = R², any linear 1-form ω ∈ Λ¹(R²) can be written in terms of the dual basis as

ω = ωₓ dx + ω_y dy, for ωₓ, ω_y ∈ R.

Hence, using this notation for any ω, η ∈ Λ¹(R²), we calculate their wedge product, using dx ∧ dx = dy ∧ dy = 0 and dy ∧ dx = −dx ∧ dy:

$$\omega \wedge \eta = (\omega_x\, dx + \omega_y\, dy) \wedge (\eta_x\, dx + \eta_y\, dy) = (\omega_x \eta_y - \omega_y \eta_x)\, dx \wedge dy.$$
On the other hand, let v = v¹e₁ + v²e₂, w = w¹e₁ + w²e₂ ∈ R² be any given vectors. Then using the formula for the wedge product of 1-forms discussed above, and the defining identities for dx, dy, we calculate:

$$dx \wedge dy(v, w) = dx(v)\,dy(w) - dx(w)\,dy(v) = v^1 w^2 - w^1 v^2 = \det(v, w).$$
This shows that dx∧dy = det ∈ Λ2 (R2 ). Finally, we’ll show that every element θ ∈ Λ2 (R2 )
is a scalar multiple of dx ∧ dy = det, which shows that Λ2 (R2 ) is a 1-dimensional vector
space with this element giving a basis. Given any θ ∈ Λ2 (R2 ), define
$$\theta_{xy} := \theta(e_1, e_2) \in \mathbb{R}.$$

Then, for any v = v¹e₁ + v²e₂, w = w¹e₁ + w²e₂ ∈ R², multilinearity and the alternating property give:

$$\begin{aligned}
\theta(v, w) &= \theta(v^1 e_1 + v^2 e_2,\; w^1 e_1 + w^2 e_2) \\
&= v^1 w^1\, \theta(e_1, e_1) + v^1 w^2\, \theta(e_1, e_2) + v^2 w^1\, \theta(e_2, e_1) + v^2 w^2\, \theta(e_2, e_2) \\
&= v^1 w^1 (0) + v^1 w^2 (\theta_{xy}) + v^2 w^1 (-\theta_{xy}) + v^2 w^2 (0) \\
&= \theta_{xy}(v^1 w^2 - w^1 v^2) \\
&= \theta_{xy} \det(v, w).
\end{aligned}$$
Finally, we can show that Λk (R2 ) = {0} for all k ≥ 3. To do this, let µ ∈ Λk (R2 ), and let
u, v, w ∈ R2 be arbitrary vectors. Then writing u, v and w in terms of the basis vectors
e1 , e2 , we can show that µ(u, v, w, . . .) = 0 by using the fact that µ is multi-linear and
alternating, with a calculation similar to the one above.
Example 3.14 (n = 3): For V = R³, any ω ∈ Λ¹(R³) can be written as ω = ω₁dx + ω₂dy + ω₃dz, and for ω, η ∈ Λ¹(R³) the same kind of calculation as in Example 3.13 gives

$$\omega \wedge \eta = (\omega_2\eta_3 - \omega_3\eta_2)\, dy \wedge dz + (\omega_3\eta_1 - \omega_1\eta_3)\, dz \wedge dx + (\omega_1\eta_2 - \omega_2\eta_1)\, dx \wedge dy.$$

On the other hand, given any linear 2-form θ ∈ Λ²(R³), then defining the real coefficients

$$\theta_{yz} := \theta(e_2, e_3), \qquad \theta_{zx} := \theta(e_3, e_1), \qquad \theta_{xy} := \theta(e_1, e_2),$$

we can show, using a similar calculation as was done in Example 3.13, that

$$\theta = \theta_{yz}\, dy \wedge dz + \theta_{zx}\, dz \wedge dx + \theta_{xy}\, dx \wedge dy,$$

which shows us that Λ²(R³) is a 3-dimensional vector space with dy ∧ dz, dz ∧ dx, dx ∧ dy
forming a basis. Note that Λ1 (R3 ) and Λ2 (R3 ) are both 3-dimensional, as is V = R3 . We
fix isomorphisms between them by identifying their bases in the following way:
Φ : R³ → Λ¹(R³) is given by mapping

e₁ ↦ Φ(e₁) = dx; e₂ ↦ Φ(e₂) = dy; e₃ ↦ Φ(e₃) = dz;

while Ψ : R³ → Λ²(R³) is given by mapping

e₁ ↦ Ψ(e₁) = dy ∧ dz; e₂ ↦ Ψ(e₂) = dz ∧ dx; e₃ ↦ Ψ(e₃) = dx ∧ dy.
Then using these identifications, and the expression for ω ∧ η ∈ Λ²(R³) calculated above, we see that for any v, w ∈ R³, the following identity holds, which relates the wedge product to the cross product × of vectors in R³:

$$\Phi(v) \wedge \Phi(w) = \Psi(v \times w) \in \Lambda^2(\mathbb{R}^3).$$
Finally, it can be shown that

$$dx \wedge dy \wedge dz = \det \in \Lambda^3(\mathbb{R}^3),$$
and that Λk (R3 ) = {0} for k > 3. This can be done using similar calculations and
arguments that were used in Example 3.13, although the calculations are slightly more
complicated.
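The wedge/cross-product identity above is easy to spot-check numerically; the following numpy sketch (my own illustration) evaluates both sides on random vectors:

```python
import numpy as np

def Phi(a):   # Phi(a) = a^1 dx + a^2 dy + a^3 dz, a 1-form on R^3
    return lambda v: a @ v

def Psi(a):   # Psi(a) = a^1 dy^dz + a^2 dz^dx + a^3 dx^dy; Psi(a)(v,w) = <a, v x w>
    return lambda v, w: a @ np.cross(v, w)

def wedge(alpha, beta):   # wedge of two 1-forms, as in Examples 3.13/3.14
    return lambda v, w: alpha(v)*beta(w) - alpha(w)*beta(v)

rng = np.random.default_rng(1)
a, b, v, w = rng.standard_normal((4, 3))
# Phi(a) ^ Phi(b) = Psi(a x b), evaluated on random input vectors:
assert np.isclose(wedge(Phi(a), Phi(b))(v, w), Psi(np.cross(a, b))(v, w))
```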
Theorem 3.15: For V = Rⁿ and any integer 1 ≤ k ≤ n, the linear k-forms dx^{i₁} ∧ . . . ∧ dx^{iₖ}, indexed by the strictly increasing multi-indices 1 ≤ i₁ < . . . < iₖ ≤ n, form a basis of Λᵏ(Rⁿ); in particular dim Λᵏ(Rⁿ) = n!/(k!(n − k)!), and Λᵏ(Rⁿ) = {0} for k > n.

Theorem 3.16: For V = Rⁿ and dx¹, . . . , dxⁿ ∈ Λ¹(Rⁿ) the dual basis of the standard basis vectors, we have
dx1 ∧ . . . ∧ dxn = det ∈ Λn (Rn ),
and this element forms a basis for Λn (Rn ).
We will not worry about giving proofs of either of these Theorems this year, since to
do so requires a more general formula for the wedge product. I will provide an outline of
the steps involved as optional TUT problems, for those who are interested.
3.2 TUT Problems
1. Let V = Rn with the standard basis denoted by e1 , . . . , en . Let the dual basis be
denoted by dx1 , . . . , dxn , which means that for any i, j = 1, . . . , n we have
$$dx^i(e_j) = \delta^i_j = \begin{cases} 1 & \text{if } i = j; \\ 0 & \text{if } i \ne j, \end{cases}$$
(a) Given ω ∈ Λ¹(V), define ωᵢ := ω(eᵢ) ∈ R for i = 1, . . . , n. Show that

$$\omega = \omega_1\, dx^1 + \dots + \omega_n\, dx^n \in \Lambda^1(V), \qquad (8)$$

by showing that both sides of (8) give the same real value when evaluated on an arbitrary vector v ∈ V. (Hint: You can simplify your calculations by first showing that it suffices to show that they have the same value for v = eⱼ, where j = 1, . . . , n is arbitrary, since both sides of (8) are linear.)
(b) Show that if α₁, . . . , αₙ ∈ R are any real coefficients, and

$$\alpha_1\, dx^1 + \dots + \alpha_n\, dx^n = 0 \in \Lambda^1(V)$$

(where the right-hand side means the “zero map” 0 : V → R which sends every vector v ∈ V to 0 ∈ R), then α₁ = α₂ = . . . = αₙ = 0. Hence, the linear 1-forms dx¹, . . . , dxⁿ form a basis for Λ¹(V).
(c) Given ω, η ∈ Λ1 (V ), use the properties (a), (b) and (c) of Theorem 3.12 to show
that
$$\omega \wedge \eta = \sum_{1 \le i < j \le n} (\omega_i \eta_j - \omega_j \eta_i)\, dx^i \wedge dx^j \in \Lambda^2(V), \qquad (9)$$
where ω = ω1 dx1 + . . . + ωn dxn and η = η1 dx1 + . . . + ηn dxn are the unique ways to express
ω and η in terms of the basis dx1 , . . . , dxn .
(d)* Given any linear 2-form θ ∈ Λ2 (V ) and any pair of indices 1 ≤ i < j ≤ n, define
θi,j := θ(ei , ej ) ∈ R. Prove that
$$\theta = \sum_{1 \le i < j \le n} \theta_{i,j}\, dx^i \wedge dx^j \in \Lambda^2(V), \qquad (10)$$
by showing that both sides of (10) give the same real value when evaluated on an ar-
bitrary (ordered) pair of vectors v, w ∈ V . (Hint: You can simplify your calculations
by first showing that it suffices to prove that both sides give the same real value when
evaluated on an ordered pair of the form v = ek , w = el for some 1 ≤ k, l ≤ n, because
both sides of (10) are multi-linear.)
(e)* Show that if α₁,₂, α₁,₃, . . . , α₂,₃, . . . , α₍ₙ₋₁₎,ₙ ∈ R are any real coefficients, and

$$\sum_{1 \le i < j \le n} \alpha_{i,j}\, dx^i \wedge dx^j = 0 \in \Lambda^2(V), \qquad (11)$$

(where the right-hand side 0 ∈ Λ²(V) is the linear 2-form given by 0(v, w) = 0 for all v, w ∈ V), then α₁,₂ = . . . = α₍ₙ₋₁₎,ₙ = 0. Hence, the linear 2-forms dxⁱ ∧ dxʲ, for 1 ≤ i < j ≤ n, form a basis for Λ²(V).
(c) Denote by dx1 , . . . , dxn , dp1 , . . . , dpn ∈ Λ1 (V ) the linear 1-forms that are dual to
the standard basis e₁, . . . , e₂ₙ of V. In other words, for any v = v¹e₁ + . . . + v²ⁿe₂ₙ ∈ V,

$$dx^i(v) = v^i \quad\text{and}\quad dp^i(v) = v^{n+i}, \qquad i = 1, \dots, n.$$

Show that
$$\omega = \sum_{i=1}^n dp^i \wedge dx^i \in \Lambda^2(V). \qquad (12)$$
(d)* For n = 1, 2, 3, use the properties of the wedge product given by Theorem 3.12
to verify the identity
$$\frac{\omega^n}{n!} = -\, dx^1 \wedge \dots \wedge dx^n \wedge dp^1 \wedge \dots \wedge dp^n \in \Lambda^{2n}(V), \qquad (13)$$
where ω n = ω ∧ . . . ∧ ω (n times).
Given ω, η ∈ Λ¹(V), define the map ω ∧ η : V × V → R by letting

$$\omega \wedge \eta(v, w) := \omega(v)\eta(w) - \omega(w)\eta(v), \qquad v, w \in V.$$

Show that this map is a linear 2-form, i.e. ω ∧ η ∈ Λ²(V), and prove that the map

∧ : Λ¹(V) × Λ¹(V) → Λ²(V)
determined by this wedge product satisfies properties (a), (b), (d) and (e) of Theorem
3.12 with k = l = 1.
Similarly, given ω, η, ϕ ∈ Λ¹(V), define the map ω ∧ η ∧ ϕ : V³ → R by letting

$$\begin{aligned}
\omega \wedge \eta \wedge \varphi(u, v, w) := \;& \omega(u)\eta(v)\varphi(w) - \omega(u)\eta(w)\varphi(v) + \omega(v)\eta(w)\varphi(u) \\
& - \omega(v)\eta(u)\varphi(w) + \omega(w)\eta(u)\varphi(v) - \omega(w)\eta(v)\varphi(u).
\end{aligned}$$

Prove that this defines a linear 3-form ω ∧ η ∧ ϕ ∈ Λ³(V), and that in this way we get
wedge product maps
∧ : Λ1 (V ) × Λ2 (V ) → Λ3 (V ) and ∧ : Λ2 (V ) × Λ1 (V ) → Λ3 (V )
that satisfy the properties (a), (b), (d) and (e) of Theorem 3.12 (with k = 1, l = 2 or
k = 2, l = 1).
(c)* Following the pattern of (a) and (b) above, define for any linear 1-forms ω₁, . . . , ωₖ ∈ Λ¹(V), the map ω₁ ∧ . . . ∧ ωₖ : Vᵏ → R by letting

$$\omega_1 \wedge \dots \wedge \omega_k(v_1, \dots, v_k) = \sum_{\sigma \in S_k} \mathrm{sgn}(\sigma)\, \omega_1(v_{\sigma(1)})\, \omega_2(v_{\sigma(2)}) \cdots \omega_k(v_{\sigma(k)}),$$
where Sk is the group of all permutations of the integers {1, 2, . . . , k}, which has k!
elements, and for σ ∈ Sk we have sgn(σ) = 1 if σ can be expressed as a composition of an
even number of transpositions of any two integers and sgn(σ) = −1 if σ can be expressed
as a composition of an odd number of transpositions of any two integers. This formula
can be used to form the wedge products of any linear forms, which can be shown to satisfy
the properties of Theorem 3.12. Using this formula, and Theorem 3.15, prove Theorem
3.16: if V = Rn , then
dx1 ∧ . . . ∧ dxn = det ∈ Λn (V ).
(Hint: Using the dimension of Λn (V ), show that the identity follows if it can be shown that
both sides give the same real value when evaluated on the n-tuple (e1 , e2 , . . . , en ) ∈ V n ,
and carry out that computation.)
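For those who want to experiment before attempting the proof, the permutation-sum formula can be implemented directly; the following numpy sketch (my own illustration) checks the identity of Theorem 3.16 on random vectors in R⁴:

```python
import numpy as np
from itertools import permutations

def sgn(sigma):
    """Sign of a permutation (tuple of 0..k-1), via inversion count."""
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def wedge_dx(vectors):
    """(dx^1 ^ ... ^ dx^n)(v_1, ..., v_n) via the permutation-sum formula,
    where dx^i picks out the i-th component of its input vector."""
    n = len(vectors)
    return sum(sgn(s) * np.prod([vectors[s[i]][i] for i in range(n)])
               for s in permutations(range(n)))

rng = np.random.default_rng(2)
vs = list(rng.standard_normal((4, 4)))   # four random vectors in R^4
M = np.column_stack(vs)                  # matrix with columns v_1, ..., v_4
assert np.isclose(wedge_dx(vs), np.linalg.det(M))
```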
3.3: Differential Forms and the Exterior Derivative
In this Section, we use the tensor algebra from Chapter 3.2 to define differential k-
forms on a general manifold, and work out ways of calculating with them, including the
notion of the exterior derivative of differential forms. While our initial definitions are
done for general manifolds, the most important example to keep in mind is for an open
subset of some Rn , and that is where we’ll do most actual calculations.
Definition (Differential k-Forms): Let M be a differentiable manifold and let k ≥ 1 be an integer. A differential k-form on M, written ω ∈ Ωᵏ(M), can be described in either of the following two equivalent ways:

(a) A map ω : Vect(M)ᵏ → C∞(M) which respects linear combinations of vector fields in each of its k inputs, and is alternating: if we interchange the order in which any two vector fields are inserted as inputs of ω, then the output changes sign.
(b) A rule that assigns, for each p ∈ M , a linear k-form ω(p) ∈ Λk (Tp M ), and does
so in a differentiable way, meaning that for any differentiable vector fields X1 , . . . , Xk ∈
Vect(M), the real-valued function ω(X₁, . . . , Xₖ) : M → R that’s defined by

$$\omega(X_1, \dots, X_k)(p) := \omega(p)(X_1(p), \dots, X_k(p)), \qquad p \in M,$$

is differentiable.
Note: To see that both of these definitions of a differential k-form are equivalent, we
need to know how to go from an object ω as defined in (a) to one as defined in (b),
and vice-versa. To go from (b) to (a) is the easier part: Given a differential k-form ω
as in (b), for any differentiable vector fields X1 , . . . , Xk ∈ Vect(M ) we get a function
ω(X1 , . . . , Xk ) : M → R by using the formula in (b), and by definition this function is
in C ∞ (M ). Also, the fact that ω(p) ∈ Λk (Tp M ) allows us to easily prove that the map
Vect(M )k → C ∞ (M ) we get in this way is multilinear and alternating as defined in (a).
To go from (a) to (b) is more complicated, but can be done.
Lemma 3.18: Let U ⊂ Rⁿ be an open subset and let k ≥ 1. Any differential k-form ω ∈ Ωᵏ(U) can be written as

$$\omega = \sum_{1 \le i_1 < \dots < i_k \le n} \omega_{i_1,\dots,i_k}\, dx^{i_1} \wedge \dots \wedge dx^{i_k},$$

for uniquely determined differentiable real-valued functions ω₍ᵢ₁,...,ᵢₖ₎ ∈ C∞(U) which are determined, for each set of strictly increasing indices 1 ≤ i₁ < i₂ < . . . < iₖ ≤ n, by

$$\omega_{i_1,\dots,i_k} = \omega(E_{i_1}, \dots, E_{i_k}),$$
where Ei ∈ Vect(U ) is the constant vector field given by Ei (p) = ei ∈ Rn (the ith standard
basis vector) for all p ∈ U .
Proof: Let p ∈ U be given. Since ω(p) ∈ Λᵏ(Rⁿ), Theorem 3.15 allows us to write

$$\omega(p) = \sum_{1 \le i_1 < \dots < i_k \le n} \alpha_{i_1,i_2,\dots,i_k}\, dx^{i_1} \wedge \dots \wedge dx^{i_k},$$

for some real coefficients α₍ᵢ₁,ᵢ₂,...,ᵢₖ₎ (we can have a different coefficient for EACH strictly increasing index set 1 ≤ i₁ < . . . < iₖ ≤ n, so our sum above potentially includes a term for each such index, and the coefficients depend on p). Defining a function ω₍ᵢ₁,...,ᵢₖ₎ : U → R by

$$\omega_{i_1,\dots,i_k}(p) := \alpha_{i_1,\dots,i_k},$$

we can use the identity dxⁱ(Eⱼ) = δ^i_j as well as the alternating property to show that if 1 ≤ i₁ < . . . < iₖ ≤ n and 1 ≤ j₁ < . . . < jₖ ≤ n are two strictly increasing sets of indices, then

$$dx^{i_1} \wedge \dots \wedge dx^{i_k}(E_{j_1}, \dots, E_{j_k}) = \begin{cases} 1 & \text{if } (i_1, \dots, i_k) = (j_1, \dots, j_k); \\ 0 & \text{otherwise.} \end{cases}$$

From this, and the above expression for ω(p), it follows that

$$\omega_{i_1,\dots,i_k}(p) = \omega(E_{i_1}, \dots, E_{i_k})(p)$$

for all p ∈ U, which concludes the proof, since ω(E₍ᵢ₁₎, . . . , E₍ᵢₖ₎) ∈ C∞(U) by definition of differentiability of ω. (QED)
Definition 3.19 (Exterior Derivative): We can use the expression given in Lemma
3.18 to define the exterior derivative of differential forms on an open subset U ⊂ Rn :
(a) If ω ∈ Ωk (U ) is of the form ω = gdxi1 ∧ . . . ∧ dxik for some set of strictly increasing
indices 1 ≤ i₁ < . . . < iₖ ≤ n, then its exterior derivative dω ∈ Ωᵏ⁺¹(U) is

$$d\omega := dg \wedge dx^{i_1} \wedge \dots \wedge dx^{i_k} \in \Omega^{k+1}(U),$$

where dg ∈ Ω1(U) is the differential of the function g from Example 3.2.

(b) For a general ω ∈ Ωᵏ(U), written as the sum from Lemma 3.18, dω ∈ Ωᵏ⁺¹(U) is defined by applying (a) to each term of the sum and adding up the results.
Theorem 3.20: Definition 3.19 defines operators d : Ωᵏ(U) → Ωᵏ⁺¹(U), for all k = 0, 1, 2, . . .. The following properties hold, for any ω ∈ Ωᵏ(U), η ∈ Ωˡ(U) and α, β ∈ R:

(a) d(αω + βη) = α dω + β dη when k = l (linearity);

(b) d(ω ∧ η) = dω ∧ η + (−1)ᵏ ω ∧ dη (the Leibniz rule);

(c) d(dω) = 0, i.e. d ∘ d = 0;

(d) for k = 0, dφ agrees with the operator d : C∞(U) → Ω1(U) of Example 3.2.
Example 3.21 (Green’s Theorem): Let U ⊂ R² be open and let ω = f dx + g dy ∈ Ω1(U), for some f, g ∈ C∞(U). By Lemma 3.18, dω ∈ Ω²(U) can be written as dω = h dx ∧ dy for some h ∈ C∞(U), and we can calculate h using Definition 3.19 and the wedge product formula from Example 3.13:

$$\begin{aligned}
d\omega &= df \wedge dx + dg \wedge dy \\
&= \Big(\frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy\Big) \wedge dx + \Big(\frac{\partial g}{\partial x}\,dx + \frac{\partial g}{\partial y}\,dy\Big) \wedge dy \\
&= \det\begin{pmatrix} \partial f/\partial x & \partial f/\partial y \\ 1 & 0 \end{pmatrix} dx \wedge dy + \det\begin{pmatrix} \partial g/\partial x & \partial g/\partial y \\ 0 & 1 \end{pmatrix} dx \wedge dy \\
&= \Big(\frac{\partial g}{\partial x} - \frac{\partial f}{\partial y}\Big)\, dx \wedge dy,
\end{aligned}$$
which shows that h = ∂g/∂x − ∂f/∂y. This expression should remind you of something: Green’s Theorem, which is often written in the form

$$\int_{\partial D} f\,dx + g\,dy = \iint_D \Big(\frac{\partial g}{\partial x} - \frac{\partial f}{\partial y}\Big)\, dx\,dy$$
for a bounded domain D ⊂ R2 with piecewise smooth boundary ∂D. The calculation
above shows that we can restate Green’s Theorem as saying that for any differential
1-form ω ∈ Ω1(D),

$$\int_{\partial D} \omega = \iint_D d\omega.$$
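As a concrete sanity check of this restatement, here is a small sympy sketch (a hypothetical example: ω = −y dx + x dy on the unit disk D) computing both sides of Green’s Theorem:

```python
import sympy as sp

t, x, y, r = sp.symbols('t x y r')
f, g = -y, x        # hypothetical omega = -y dx + x dy on the unit disk D

# left side: integrate omega over the boundary circle gamma(t) = (cos t, sin t)
gx, gy = sp.cos(t), sp.sin(t)
lhs = sp.integrate(f.subs({x: gx, y: gy})*sp.diff(gx, t)
                   + g.subs({x: gx, y: gy})*sp.diff(gy, t), (t, 0, 2*sp.pi))

# right side: integrate h = g_x - f_y over D, in polar coordinates
h = sp.diff(g, x) - sp.diff(f, y)   # = 2
rhs = sp.integrate(h.subs({x: r*sp.cos(t), y: r*sp.sin(t)})*r,
                   (r, 0, 1), (t, 0, 2*sp.pi))
assert sp.simplify(lhs - rhs) == 0  # both equal 2*pi
```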
Example 3.22: Now let U ⊂ R³ be an open subset, and let ω = f dx + g dy + h dz ∈ Ω1(U) for some f, g, h ∈ C∞(U). Since the strictly increasing index pairs 1 ≤ i₁ < i₂ ≤ 3 are i₁ = 1, i₂ = 2 and i₁ = 1, i₂ = 3 and i₁ = 2, i₂ = 3, we see from Lemma 3.18 that dω, like any differential 2-form on U, can be expressed as a combination of dy ∧ dz, dz ∧ dx and dx ∧ dy (with coefficients given by some differentiable real-valued functions on U).
To find the coefficients, we calculate as before, but now using the formula for the wedge product derived in Example 3.14 (we’ll save some space by writing partial derivatives as fₓ = ∂f/∂x, etc.):
dω = df ∧ dx + dg ∧ dy + dh ∧ dz
= (fx dx + fy dy + fz dz) ∧ dx
+ (gx dx + gy dy + gz dz) ∧ dy
+ (hx dx + hy dy + hz dz) ∧ dz
= fz dz ∧ dx − fy dx ∧ dy
+ gx dx ∧ dy − gz dy ∧ dz
+ hy dy ∧ dz − hx dz ∧ dx
= (hy − gz )dy ∧ dz + (fz − hx )dz ∧ dx + (gx − fy )dx ∧ dy.
Note that these coefficients correspond to the coefficients of the curl of a vector field: If X ∈ Vect(U) is a vector field expressed as X = f E₁ + g E₂ + h E₃, then in the notation of Example 3.14 we have ω = Φ(X) and dω = Ψ(∇ × X). But, since any vector field X ∈ Vect(U) can be written in this form, we have proven the following:

Theorem 3.23: Let U ⊂ R³ be an open subset. For any vector field X ∈ Vect(U), we have d(Φ(X)) = Ψ(∇ × X) ∈ Ω²(U).
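Applying the formula above to ω = dφ = Φ(∇φ) (using Lemma 3.3(c)), the coefficients of d(dφ) are exactly the components of ∇ × (∇φ), which vanish. Here is a small sympy check of the coefficient formulas derived above (my own sketch, with a hypothetical sample φ):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = sp.exp(x*y) * sp.cos(z)       # hypothetical sample function

# d(phi) = phi_x dx + phi_y dy + phi_z dz
f, g, h = sp.diff(phi, x), sp.diff(phi, y), sp.diff(phi, z)

# coefficients of d(d(phi)) in the basis dy^dz, dz^dx, dx^dy, by the
# calculation above (i.e. the components of curl(grad phi)):
ddphi = (sp.diff(h, y) - sp.diff(g, z),
         sp.diff(f, z) - sp.diff(h, x),
         sp.diff(g, x) - sp.diff(f, y))
assert all(sp.simplify(c) == 0 for c in ddphi)
```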
Example 3.24: With U ⊂ R3 open and the same notation as above, now we calculate
formulas for the exterior derivative of differential 2-forms, d : Ω2 (U ) → Ω3 (U ). If θ ∈
Ω2 (U ), then as noted above we can write θ = f dy ∧ dz + gdz ∧ dx + hdx ∧ dy for some
f, g, h ∈ C ∞ (U ). Since dθ ∈ Ω3 (U ), then using Lemma 3.18 as in the previous examples
we can express dθ as the product of a differentiable real-valued function with dx ∧ dy ∧ dz,
and the value of the coefficient function can be calculated as above:
dθ = df ∧ dy ∧ dz + dg ∧ dz ∧ dx + dh ∧ dx ∧ dy
= (fx dx + fy dy + fz dz) ∧ dy ∧ dz
+ (gx dx + gy dy + gz dz) ∧ dz ∧ dx
+ (hx dx + hy dy + hz dz) ∧ dx ∧ dy
= fx dx ∧ dy ∧ dz + gy dy ∧ dz ∧ dx + hz dz ∧ dx ∧ dy
= (fx + gy + hz )dx ∧ dy ∧ dz.
The coefficient in this case corresponds to the divergence of a vector field. We can make
this exact by introducing the bijection Ξ : C ∞ (U ) → Ω3 (U ) by letting
Ξ(φ) = φdx ∧ dy ∧ dz ∈ Ω3 (U ), φ ∈ C ∞ (U ).
Then we have proven the following:

Theorem 3.25: Let U ⊂ R³ be an open subset. For any vector field X ∈ Vect(U), we have d(Ψ(X)) = Ξ(∇ · X) ∈ Ω³(U).
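Similarly, applying the divergence formula to the 2-form Ψ(∇ × X) = d(Φ(X)) shows that the coefficient of d(d(Φ(X))) is ∇ · (∇ × X), which vanishes; a small sympy check (my own sketch, with a hypothetical X):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f, g, h = y*z, sp.sin(x*z), x**2*y   # hypothetical components of X

# Theorem 3.23: d(Phi(X)) = Psi(curl X); components of curl X:
a = sp.diff(h, y) - sp.diff(g, z)
b = sp.diff(f, z) - sp.diff(h, x)
c = sp.diff(g, x) - sp.diff(f, y)

# Theorem 3.25 applied to curl X: d(Psi(curl X)) = Xi(div(curl X)),
# and div(curl X) = 0:
assert sp.simplify(sp.diff(a, x) + sp.diff(b, y) + sp.diff(c, z)) == 0
```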
Definition (Pullback of Differential Forms): Let f : M → N be a differentiable map between manifolds, and let ω ∈ Ωᵏ(N). The pullback f*ω ∈ Ωᵏ(M) is defined by letting

$$f^*\omega(p) := Df(p)^*\big(\omega(f(p))\big) \in \Lambda^k(T_pM)$$

for any p ∈ M.
Note: In the definition above, we are using the fact that for each p ∈ M , the derivative of
f at p is a linear map Df (p) : Tp M → Tf (p) N , and so since ω(f (p)) ∈ Λk (Tf (p) N ) (by the
definition of a differential k-form on N ), we have Df (p)∗ (ω(f (p))) ∈ Λk (Tp M ) as needed,
using the definition of pullback by a linear map given in Definition 3.9 (with V = Tp M
and W = T₍f(p)₎N). This means that the map f*ω(p) : (TₚM)ᵏ → R is given explicitly by

$$f^*\omega(p)(v_1, \dots, v_k) = \omega(f(p))\big(Df(p)(v_1), \dots, Df(p)(v_k)\big), \qquad v_1, \dots, v_k \in T_pM.$$
One can verify that for k = 0, this gives the same definition for the pullback of a differentiable 0-form on N (which is simply a differentiable real-valued function on N) as
in Lemma 3.8, while for k = 1 it gives us the pullback of differentiable 1-forms as defined
in Definition 3.6.
Example 3.27: Let V ⊂ R³ be open, let F ∈ Vect(V) be a vector field, and let θ_F := Ψ(F) ∈ Ω²(V) be the corresponding differential 2-form, so that θ_F(p)(v, w) = det(F(p), v, w) for p ∈ V and v, w ∈ R³. Let U ⊂ R² be open, and let r : U → V be a differentiable parametrisation of a surface S = r(U) (i.e. a local coordinate system for S). We want to calculate r*θ_F ∈ Ω²(U) in order to illustrate the concept and show how it recovers an important quantity from the theory of surface integrals (see Chapter 1.1).
To carry out this calculation, we denote the coordinate functions on U by u, v, and their
exterior derivatives du, dv ∈ Ω1 (U ). We have already seen that any differential 2-form on
U can be expressed as the product of du ∧ dv with a differentiable real-valued function on
U , so r∗ θF = hdu ∧ dv for some h ∈ C ∞ (U ). To find a formula for h, note that for any
point q ∈ U we have
du ∧ dv(q)(e1 , e2 ) = 1,
and hence
h(q) = r∗ θF (q)(e1 , e2 )
= θF (r(q))(Dr(q)(e1 ), Dr(q)(e2 ))
= det(F (r(q)), D1 r(q), D2 r(q))
= ⟨F(r(q)), D₁r(q) × D₂r(q)⟩,
where for the last equality we have used the following identity, which relates the determinant and the cross product of vectors in R³:

$$\det(u, v, w) = \langle u,\, v \times w \rangle, \qquad u, v, w \in \mathbb{R}^3.$$
(This identity, if unfamiliar, can easily be derived from the formula defining the cross
product.) This shows that the function h ∈ C ∞ (U ) is given by
$$h = \Big\langle F \circ r,\; \frac{\partial r}{\partial u} \times \frac{\partial r}{\partial v} \Big\rangle.$$
The expression on the right-hand side should look familiar: it is essentially the same thing as the integrand (over a region D ⊂ R²) used to define the vector surface integral ∬_S F · da of a parametrised surface (see Definition 1.7 in Chapter 1.1).
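The two expressions for h obtained above can be compared symbolically; the following sympy sketch does this for a hypothetical parametrised patch r and a hypothetical field F:

```python
import sympy as sp

u, v = sp.symbols('u v')
# hypothetical spherical patch r(u, v) and field F evaluated along r
r = sp.Matrix([sp.cos(u)*sp.sin(v), sp.sin(u)*sp.sin(v), sp.cos(v)])
F = sp.Matrix([r[1], r[2], r[0]])

ru, rv = r.diff(u), r.diff(v)
h_det   = sp.Matrix.hstack(F, ru, rv).det()   # det(F(r(q)), D1 r(q), D2 r(q))
h_cross = F.dot(ru.cross(rv))                 # <F o r, dr/du x dr/dv>
assert sp.simplify(h_det - h_cross) == 0
```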
Theorem 3.28: Let U ⊂ Rⁿ and V ⊂ Rᵐ be open subsets and let f : U → V be a differentiable map. Then for any ω ∈ Ωᵏ(V), we have d(f*ω) = f*(dω) ∈ Ωᵏ⁺¹(U).

Note: It is possible, using Theorem 3.28, to define the exterior derivative of differential
forms on manifolds using the pullback to local coordinate systems, and to show that
the definition is independent of the choice of local coordinate system used. In this way
we get operators d : Ωk (M ) → Ωk+1 (M ), for a general n-manifold M ⊂ Rn+r and any
k = 0, 1, . . .. These operators can be shown to have the properties of Theorem 3.20 and
Theorem 3.28. As stated above, we will not go into the details here since they add more
complexity than is needed or justified for our aims in this course.
3.3 TUT Problems
(b) Use Theorem 3.23, Theorem 3.28 and Green’s Theorem (as expressed in Example
3.21 using differential forms on an open subset of R2 ) to prove Stokes’ Theorem (Thm.
1.9) in this setting:

$$\iint_S (\nabla \times F) \cdot da = \int_{\partial S} F \cdot d\gamma.$$
(c)* Modify the above procedure, using Theorem 3.25, to express Gauss’ Diver-
gence Theorem (Thm. 1.24) as an identity of integrals involving Ψ(F ) ∈ Ω2 (U ) and
dΨ(F ) ∈ Ω3 (U ).
2. Let U = R3 − {(0, 0, 0)} be the open subset of R3 obtained by removing the origin,
and let θ be the differential 2-form
$$\theta = \frac{x\, dy \wedge dz + y\, dz \wedge dx + z\, dx \wedge dy}{(x^2 + y^2 + z^2)^{3/2}} \in \Omega^2(U).$$
(a) Show that dθ = 0.
(b) Using part (a), prove that if U, V ⊂ Rn are open subsets and f : U → V is a
differentiable map, then for any differential n-form ω ∈ Ωn (V ), its pullback is given by
f ∗ ω = det(Df )ω ◦ f ∈ Ωn (U ).
This means that if we write ω = g dx¹ ∧ dx² ∧ . . . ∧ dxⁿ for some differentiable real-valued function g ∈ C∞(V) (which can always be done, according to Lemma 3.18), then

$$f^*\omega = (g \circ f)\, \det(Df)\, dx^1 \wedge \dots \wedge dx^n \in \Omega^n(U).$$
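For part (a) of the previous problem, those who want to check their hand computation against the computer can use the coefficient formula of Example 3.24; a small sympy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
rho = (x**2 + y**2 + z**2)**sp.Rational(3, 2)
f, g, h = x/rho, y/rho, z/rho   # theta = f dy^dz + g dz^dx + h dx^dy

# by Example 3.24, d(theta) = (f_x + g_y + h_z) dx^dy^dz:
assert sp.simplify(sp.diff(f, x) + sp.diff(g, y) + sp.diff(h, z)) == 0
```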
The first set of Maxwell’s equations, for a time-dependent electric field E = (E¹, E², E³) and magnetic field B = (B¹, B², B³) (regarded as vector fields depending on the coordinates (x, y, z, t) of R⁴), reads:

$$\nabla \cdot B = 0; \qquad (18)$$

$$\nabla \times E + \frac{\partial B}{\partial t} = 0. \qquad (19)$$
Define a differential 2-form B ∈ Ω2 (R4 ) and a differential 1-form E ∈ Ω1 (R4 ) by
B = B 1 dy ∧ dz + B 2 dz ∧ dx + B 3 dx ∧ dy;
E = E 1 dx + E 2 dy + E 3 dz;
and define the differential 2-form F = B + E ∧ dt ∈ Ω²(R⁴).
Prove that equations (18) and (19) are equivalent to dF = 0. (The Second Set of Maxwell’s
Equations can also be reformulated using differential forms, but the algebra needed is more
complicated and tedious. Doing this is important not only to show that differential forms
give us a nice way of writing physics equations; it’s also a necessary step for understanding
how Electromagnetism can be formulated as a classical gauge theory, which is what Yang
and Mills did when they introduced more general gauge theories in the 1950s.)