MATH3031 Chapter 3 Notes

Chapter 3: Differential Forms

3.1: Differential 1-Forms


From Chapter 1, and Multivariable Calculus, we know about vector fields in Rn , in-
cluding how to define their integrals along (oriented) paths. In general, for U ⊂ Rn an
open subset, a vector field on U is given by a map X : U → Rn . We usually only consider
vector fields that are differentiable, often C ∞ , and we let Vect(U ) denote the set of differ-
entiable vector fields on U . Our aim in this section is to introduce the set of differential
1-forms on U , denoted Ω1 (U ), which turns out to be better suited for generalising to
manifolds, and for integrating on higher-dimensional subsets.

First, note that if we let C ∞ (U ) denote the set of differentiable real-valued functions f : U →


R, then we can add elements of Vect(U ) and multiply them by elements of C ∞ (U ) as
follows: Given X, Y ∈ Vect(U ) and f, g ∈ C ∞ (U ), define f X + gY ∈ Vect(U ) by letting

(f X + gY )(p) = f (p)X(p) + g(p)Y (p), p ∈ U.

These are the “linear combinations” of vector fields that we will be concerned with.

Definition 3.1 (1-Forms): Let U ⊂ Rn be an open subset. A differential 1-form on U ,


ω ∈ Ω1 (U ), is any map ω : Vect(U ) → C ∞ (U ) that is linear with respect to the above
operations on vector fields, i.e. such that

ω(f X + gY ) = f ω(X) + gω(Y )

for all f, g ∈ C ∞ (U ), X, Y ∈ Vect(U ). The set of all differential 1-forms on U is denoted


Ω1 (U ), and we can add elements of Ω1 (U ), or multiply them by elements of C ∞ (U ), in
the same way as we do with vector fields.

Example 3.2: The first example is arguably the most important. Let φ ∈ C ∞ (U ), i.e.
φ : U → R is a differentiable function. We define dφ ∈ Ω1 (U ) as follows: Given any
X ∈ Vect(U ), let dφ(X) : U → R be the function,

dφ(X)(p) = Dφ(p)(X(p)), p ∈ U.

This clearly gives us a well-defined map dφ : Vect(U ) → C ∞ (U ), so to see that dφ ∈ Ω1 (U )


as claimed, we have to show that linearity (as in Def. 3.1) holds. For this, let f, g ∈ C ∞ (U )
and X, Y ∈ Vect(U ) be given. Then we calculate, for any p ∈ U :

dφ(f X + gY )(p) = Dφ(p)((f X + gY )(p))


= Dφ(p)(f (p)X(p) + g(p)Y (p))
= f (p)Dφ(p)(X(p)) + g(p)Dφ(p)(Y (p)), as Dφ(p) : Rn → R is linear,
= (f dφ(X) + gdφ(Y ))(p).

Note: The differential 1-form dφ ∈ Ω1 (U ) from Example 3.2 gives us an operator


d : C ∞ (U ) → Ω1 (U ). This is called the exterior derivative, and we will see in the
later parts of this Chapter how it naturally extends to an operator on differential forms
of all degrees. For now, let’s note some of the basic properties of this operator, including
its relation to the gradient operator ∇ : C ∞ (U ) → Vect(U ):

Lemma 3.3 (Properties of d): Let U ⊂ Rn be an open subset, and denote by


xi : U → R the ith coordinate function on U , i = 1, . . . , n, defined by xi (p) = pi for
any p ∈ U . Then:

(a) For φ, ψ ∈ C ∞ (U ), we have

d(φψ) = φ dψ + ψ dφ ∈ Ω1 (U ).

(b) For φ ∈ C ∞ (U ) and X ∈ Vect(U ), we have

dφ(X) = ⟨∇φ, X⟩ ∈ C ∞ (U ).

(c) For φ ∈ C ∞ (U ), we have

dφ = Σ_{i=1}^n (∂φ/∂x^i) dx^i ∈ Ω1 (U ).

Proof: For (a), we have to show that the expressions on both sides give us the same map
from Vect(U ) to C ∞ (U ). To do this, let X ∈ Vect(U ) and p ∈ U be given. Then using
the definition of d from Example 3.2, and the product rule for derivatives of functions in
C ∞ (U ):
d(φψ)(X)(p) = D(φψ)(p)(X(p))
= [φ(p)Dψ(p) + ψ(p)Dφ(p)](X(p))
= φ(p)Dψ(p)(X(p)) + ψ(p)Dφ(p)(X(p))
= φdψ(X)(p) + ψdφ(X)(p).
For (b), we have to show that the expressions on both sides give us the same func-
tion in C ∞ (U ), so let p ∈ U be given. Then, since X(p) ∈ Rn , we have X(p) =
(X^1(p), . . . , X^n(p))^T for X^1(p), . . . , X^n(p) ∈ R, and using this we calculate

dφ(X)(p) = Dφ(p)(X(p))
         = (D_1 φ(p) · · · D_n φ(p)) (X^1(p), . . . , X^n(p))^T
         = Σ_{i=1}^n D_i φ(p) X^i(p)
         = ⟨∇φ(p), X(p)⟩
         = ⟨∇φ, X⟩(p).
Finally, to prove (c) we must show that both sides give us the same map from Vect(U )
to C ∞ (U ), but since dφ, dxi ∈ Ω1 (U ) we know that both maps respect linear combinations
of vector fields. However, any X ∈ Vect(U ) can be written as
X = (X^1 , . . . , X^n) = X^1 E_1 + . . . + X^n E_n ,

where X^j ∈ C ∞ (U ) and E_j ∈ Vect(U ) is given by E_j(p) = e_j ∈ Rn (the jth standard
basis vector). Hence, it suffices to prove that

dφ(E_j) = (Σ_{i=1}^n (∂φ/∂x^i) dx^i)(E_j)

holds for each j = 1, . . . , n. However, for each p ∈ U , we see, using (b) and the definition
of Ej :

dφ(E_j)(p) = ⟨∇φ(p), e_j⟩ = D_j φ(p) = ∂φ/∂x^j (p).
On the other hand, we have ∇x^i(p) = e_i , and thus:

(Σ_{i=1}^n (∂φ/∂x^i) dx^i)(E_j)(p) = Σ_{i=1}^n (∂φ/∂x^i)(p) dx^i(E_j)(p)
                                   = Σ_{i=1}^n (∂φ/∂x^i)(p) ⟨e_i , e_j⟩
                                   = ∂φ/∂x^j (p).
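As a quick sanity check of Lemma 3.3(a), here is a minimal sympy sketch (our own illustration, not part of the notes); by Lemma 3.3(c) it suffices to compare the coefficient of each dx^i, i.e. the partial derivatives, and the choices of φ and ψ below are arbitrary:

```python
# Hedged check of the product rule d(phi psi) = phi dpsi + psi dphi on R^2.
# We compare the coefficient of each dx^i on both sides.
import sympy as sp

x, y = sp.symbols('x y')
phi = x**2 * sp.sin(y)     # arbitrary test functions
psi = sp.exp(x * y)

for v in (x, y):
    lhs = sp.diff(phi * psi, v)                          # coeff of dv in d(phi psi)
    rhs = phi * sp.diff(psi, v) + psi * sp.diff(phi, v)  # coeff of dv in phi dpsi + psi dphi
    assert sp.simplify(lhs - rhs) == 0
```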

Example 3.4 (First-Year Calculus): It turns out you already know the above identities
and have been using them for at least the past three years. If n = 1 and U = (a, b) ⊂ R
is an open interval, then you are familiar with the technique of “u-substitution” used to
calculate integrals. For example, to compute an integral like this,
∫_a^b (1 − sin^2 x) cos x dx,

the obvious thing to try is the substitution u = sin x (this is only a legitimate substi-
tution if (a, b) is an interval on which sin x is one-to-one; otherwise one splits the interval
into such sub-intervals), and use the identity du = cos x dx to simplify the integral as
∫_{sin a}^{sin b} (1 − u^2) du,

which can be easily calculated. As justification, you might have been told something like,
“since du/dx = cos x, we can multiply by dx on both sides to get du = cos x dx”, but if your
lecturers were honest with you they should have admitted that “multiplying by dx” is
nonsense. Now we can see that it actually is possible to make sense of this identity, but
as an identity between differential 1-forms on U = (a, b). By Lemma 3.3(c), we see that
for the function u = sin x ∈ C ∞ (U ), we have
du = (du/dx) dx ∈ Ω1 (U ).
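For a concrete check, the following sympy snippet (our own illustration, with the arbitrary interval (0, 1)) verifies that the two integrals in the substitution above agree:

```python
# Check the u-substitution of Example 3.4:
# int_a^b (1 - sin^2 x) cos x dx equals int_{sin a}^{sin b} (1 - u^2) du.
import sympy as sp

x, u = sp.symbols('x u')
a, b = 0, 1                      # arbitrary choice of interval
I1 = sp.integrate((1 - sp.sin(x)**2) * sp.cos(x), (x, a, b))
I2 = sp.integrate(1 - u**2, (u, sp.sin(a), sp.sin(b)))
assert sp.simplify(I1 - I2) == 0
```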

This example suggests that what we’ve secretly been doing, since First-Year Calculus,
is integrating differential 1-forms on intervals in the real line. Likewise, the path integrals

defined in Multivariable Calculus (and the Contour Integrals in Complex Analysis) should
really be thought of as integrals of differential 1-forms on 1-manifolds (curves). Before
explaining this, we define differential 1-forms on a general manifold M , as well as the
operator d : C ∞ (M ) → Ω1 (M ):

Definition 3.5: Let M ⊂ Rn+k be an n-manifold. A differentiable vector field on M is a


differentiable map X : M → Rn+k with the property that X(p) ∈ Tp M for all p ∈ M . The
set of all differentiable vector fields on M is denoted Vect(M ). Just as for an open subset
U ⊂ Rn , we can add elements of Vect(M ) and multiply them by elements of C ∞ (M ), the
set of differentiable real-valued functions on M . Then

(a) A differential 1-form on M , denoted ω ∈ Ω1 (M ), is any map ω : Vect(M ) →


C ∞ (M ) that is linear with respect to addition of vector fields and multiplication of vector
fields by differentiable real-valued functions.

(b) The operator d : C ∞ (M ) → Ω1 (M ), called the exterior derivative, is defined, for


φ ∈ C ∞ (M ), by letting dφ : Vect(M ) → C ∞ (M ) be the map

dφ(X)(p) = Dφ(p)(X(p)) ∈ R,

for any X ∈ Vect(M ), p ∈ M , where Dφ(p) : Tp M → R is the derivative of φ at p, as in


Definition 2.22. (Note that Tφ(p) R = R.)

One of the things that makes differential 1-forms much better to work with than vector
fields, is that there is a natural way to “move them around” using differentiable maps.
This is called the “pullback operation”:

Definition 3.6: Suppose M ⊂ Rn+k is an n-manifold, N ⊂ Rm+l is an m-manifold, and let


ϕ : M → N be a differentiable map. If ω ∈ Ω1 (N ) is a differential 1-form on N , then its
pullback by ϕ, denoted ϕ∗ ω ∈ Ω1 (M ), is the differential 1-form on M defined by letting

(ϕ∗ ω)(X)(p) = ω(ϕ(p))(Dϕ(p)(X(p))) ∈ R,

for any X ∈ Vect(M ) and any p ∈ M , where Dϕ(p) : Tp M → Tϕ(p) N is given by Defini-
tion 2.22, and ω(ϕ(p)) : Tϕ(p) N → R is the (linear) map given by letting ω(ϕ(p))(v) =
ω(Y )(ϕ(p)) for Y ∈ Vect(N ) any vector field with Y (ϕ(p)) = v ∈ Tϕ(p) N .

Example 3.7: Let I = (a, b) ⊂ R be an open interval, and γ : I → U be a differentiable


curve defined on some open subset U ⊂ Rn . Let ω ∈ Ω1 (U ) be a differential 1-form on U ,
and write
ω = (ω1 , . . . , ωn ) = ω1 dx1 + . . . + ωn dxn
for some ω1 , . . . , ωn ∈ C ∞ (U ), where dx1 , . . . , dxn ∈ Ω1 (U ) are the exterior derivatives
of the coordinate functions on U (it is possible to show that this can always be done;
see the TUT problems). We want to derive a formula for γ ∗ ω ∈ Ω1 (I). To simplify the
derivation, note that any X ∈ Vect(I) is given by X = f E1 for some f ∈ C ∞ (I). Hence,
since η(f E1 ) = f η(E1 ) for a 1-form η ∈ Ω1 (I), we only need to calculate the formula for

the vector field E1 ∈ Vect(I) (recall that E1 (t) = e1 ∈ R for all t ∈ I). Using Definition
3.6, we calculate, for t ∈ I,

(γ ∗ ω)(E_1)(t) = ω(γ(t))(Dγ(t)(E_1(t)))
              = (ω_1 dx^1 + . . . + ω_n dx^n)(γ(t))(Dγ(t)(e_1))
              = (ω_1(γ(t)) dx^1(γ(t)) + . . . + ω_n(γ(t)) dx^n(γ(t)))((γ̇^1(t), . . . , γ̇^n(t))^T)
              = ω_1(γ(t)) γ̇^1(t) + . . . + ω_n(γ(t)) γ̇^n(t),

where in the last line we’ve used the fact that

dx^i(p)(e_j) = δ^i_j = 1 if i = j, and 0 if i ≠ j,

for any p ∈ U and i, j = 1, . . . , n. This identity follows from Lemma 3.3(c).

Now, if we write t = x1 ∈ C ∞ (I) for the 1st (and only) coordinate function on the
real interval I = (a, b), then we have dt(E1 ) = 1, the constant function. Hence, the above
calculation shows that for the vector field X = Xω = ω1 E1 + . . . + ωn En ∈ Vect(U ) which
has the same coordinate functions as the functions used to define ω, we have

γ ∗ ω = ⟨X ◦ γ, γ̇⟩ dt ∈ Ω1 (I).

This means, in particular, that the vector path integral of X along γ equals the integral
of the pullback of ω by γ:
∫_γ X · dγ := ∫_a^b ⟨X(γ(t)), γ̇(t)⟩ dt = ∫_I γ ∗ ω.
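To make this concrete, here is a small sympy sketch of ours (not from the notes) that carries out the computation of Example 3.7 for the arbitrary choice ω = −y dx + x dy, pulled back along the unit circle γ(t) = (cos t, sin t):

```python
# Pullback of a 1-form along a curve, as in Example 3.7:
# gamma* omega = <X_omega(gamma(t)), gamma'(t)> dt, for omega = -y dx + x dy.
import sympy as sp

t = sp.symbols('t')
gamma = (sp.cos(t), sp.sin(t))                    # the unit circle
X_along = (-gamma[1], gamma[0])                   # X_omega = (-y, x) at gamma(t)
gamma_dot = tuple(sp.diff(c, t) for c in gamma)   # velocity vector

coeff = sum(a * b for a, b in zip(X_along, gamma_dot))  # coefficient of dt
print(sp.simplify(coeff))                               # -> 1, so gamma* omega = dt
print(sp.integrate(coeff, (t, 0, 2 * sp.pi)))           # path integral of X = 2*pi
```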

This is not just a more elegant way to write the formula for vector path integral. It
also allows us to use the fact that pullbacks and exterior derivatives commute, a fact
which we state without proof to end this section.

Lemma 3.8: Let M ⊂ Rn+k be an n-manifold, N ⊂ Rm+l an m-manifold, and ϕ : M → N a


differentiable map between them. If f ∈ C ∞ (N ) is any differentiable real-valued function
on N , then letting ϕ∗ f = f ◦ ϕ ∈ C ∞ (M ) be its pullback, we have

d(ϕ∗ f ) = ϕ∗ (df ) ∈ Ω1 (M ).

3.1 TUT Problems

1. Let U ⊂ Rn and V ⊂ Rm be open subsets, and f : U → V a differentiable function.


Recall that if ω ∈ Ω1 (V ) is a differential 1-form, then its pullback f ∗ ω ∈ Ω1 (U ) is defined,
for any X ∈ Vect(U ) and any p ∈ U , by

(f ∗ ω)(X)(p) = ω(f (p))(Df (p)(X(p))).

Consider the subsets U = {(r, θ) | r > 0} ⊂ R2 , V = {(x, y) | (x, y) ≠ (0, 0)} ⊂ R2 , the
map f : U → V defined by

f (r, θ) = (r cos θ, r sin θ), (r, θ) ∈ U,

and the differential 1-form ω ∈ Ω1 (V ) defined by


ω(x, y) = −y/(x^2 + y^2) dx + x/(x^2 + y^2) dy.

Calculate the pullback f ∗ ω ∈ Ω1 (U ), using the fact that any differential 1-form η ∈
Ω1 (U ) can be expressed as η = g dr + h dθ, for some differentiable real-valued functions
g = g(r, θ), h = h(r, θ) ∈ C ∞ (U ).

2. For U ⊂ Rn , V ⊂ Rm open and f : U → V , if φ ∈ C ∞ (V ) is a differentiable real-valued


function on V , its pullback f ∗ φ ∈ C ∞ (U ) is defined by

(f ∗ φ)(p) = φ(f (p)), p ∈ U.

Use the definitions and the Chain Rule to prove Lemma 3.8:

For any φ ∈ C ∞ (V ), we have

d(f ∗ φ) = f ∗ dφ ∈ Ω1 (U ).

3. Let U ⊂ Rn be an open subset and φ ∈ C ∞ (U ) a differentiable real-valued function on


U . Use the notation of Example 3.7 from the Chapter 3.1 lecture notes: for a differential
1-form ω ∈ Ω1 (U ) expressed as ω = ω1 dx1 + . . . + ωn dxn for ω1 , . . . , ωn ∈ C ∞ (U ), define

Xω = ω1 E1 + . . . + ωn En ∈ Vect(U ).

(a) Show that Xdφ = ∇φ ∈ Vect(U ).


(b) Use Example 3.7, Lemma 3.8, and the result of part (a), to give a quick proof of
the “Fundamental Theorem of Vector Calculus”: If γ : [a, b] → U is a differentiable path,
and X ∈ Vect(U ) is a gradient vector field with potential function φ ∈ C ∞ (U ), then
∫_γ X · dγ = φ(γ(b)) − φ(γ(a)).

3.2: Tensor Algebra
From Chapter 3.1, we have some idea that differential 1-forms are good for integra-
tion, at least if we are integrating on the real line or along a path. In this section, we are
going to develop the algebra needed to generalise from differential 1-forms to differential
k-forms, for all integers k ≥ 0. Besides being the “right” things to integrate over higher-
dimensional manifolds, this algebra and the concepts in differential geometry related to it
turn out to be useful in a wide range of geometric and physical settings (if you’ve taken
a course in general relativity, you will know that you end up drowning in tensors).

Before we dive into the algebra, for motivation it’s worth thinking about the ge-
ometric significance of the determinant. Recall from Linear Algebra (or take it as a
challenge/exercise to prove) that if v1 , v2 ∈ R2 are two vectors and

P 2 = P (v1 , v2 ) = {α1 v1 + α2 v2 | 0 ≤ α1 , α2 ≤ 1}

is the parallelogram in R2 with v1 and v2 as its sides, then the area of P 2 is given by

area(P 2 ) = | det(v1 , v2 )|,

the absolute value of the determinant of the 2 × 2 matrix with v1 and v2 as its column
vectors. Similarly, if v1 , v2 , v3 ∈ R3 , then the volume of the parallelepiped (3-dimensional
analog of a parallelogram) spanned by them,

P 3 = P (v1 , v2 , v3 ) = {α1 v1 + α2 v2 + α3 v3 | 0 ≤ αi ≤ 1},

is given by
vol(P 3 ) = | det(v1 , v2 , v3 )|.
On the other hand, recall from TUT problems Ch. 2.1, Question 1, and TUT problems
Ch. 2.2-4, Question 3, that the determinant of n × n matrices can be thought of as a
multilinear map
det : (Rn )n → R.
This is geometric evidence that the algebra of multilinear maps, or tensor algebra, will be
important for integration in higher dimensions.

Definition 3.9 (k-Tensors): Let V be a (finite-dimensional, real) vector space, and


k ≥ 0 an integer. A k-tensor on V is any map T : V k → R that is multilinear. In other
words, for any vectors v1 , . . . , vk , w ∈ V , any real scalars α, β ∈ R, and any i = 1, . . . , k,
we have

T (v1 , . . . , vi−1 , αvi + βw, vi+1 , . . . , vk ) = αT (v1 , . . . , vi−1 , vi , vi+1 , . . . , vk )


+ βT (v1 , . . . , vi−1 , w, vi+1 , . . . , vk ).

If it helps, think of a k-tensor on V as any map that “eats k vectors and spits out a
number”, and does this in a way that is linear with respect to each of the vectors it eats.

The set of all k-tensors on V is denoted T k (V ). Since its elements are maps that take
values in the real numbers, we can add any two k-tensors together, and we can multiply
them by scalars: for T, S ∈ T k (V ) and α, β ∈ R, we can define αT + βS ∈ T k (V ) by
letting
(αT + βS)(v1 , . . . , vk ) = αT (v1 , . . . , vk ) + βS(v1 , . . . , vk ) ∈ R
for any input vectors v1 , . . . , vk ∈ V .

Also, if W is another vector space and L : V → W is a linear map, we can define the
pullback map L∗ : T k (W ) → T k (V ): Given T ∈ T k (W ), define L∗ T ∈ T k (V ) by letting

L∗ T (v1 , . . . , vk ) = T (Lv1 , . . . , Lvk ) ∈ R

for any input vectors v1 , . . . , vk ∈ V .

Example 3.10: (a) If k = 0, we just get the real numbers:

T 0 (V ) = R.

This can either be taken as the definition or by applying Definition 3.9 in the vacuous
case when there are 0 input vectors.

(b) For V = Rn , the Euclidean inner product ⟨·, ·⟩ is a 2-tensor on V ,

⟨·, ·⟩ ∈ T 2 (V ).

Given any v, w ∈ Rn as input, the output is ⟨v, w⟩ = Σ_{i=1}^n v^i w^i ∈ R.

(c) For V = Rn , the determinant is an n-tensor:

det ∈ T n (V ).

(See Question 1 of Ch. 2.1 TUTs and Question 3 of Ch. 2.2-4 TUTs.)

(d) If M ⊂ Rn+k is an n-manifold and ω ∈ Ω1 (M ) is a differential 1-form (see Def.


3.5(a) of Ch. 3.1), then for any p ∈ M and V = Tp M we can think of ω as defining a
1-tensor ω(p) ∈ T 1 (V ). To get the linear map ω(p) : Tp M → R, let v ∈ Tp M be given.
Then choose a vector field X ∈ Vect(M ) with X(p) = v, and define

ω(p)(v) = ω(X)(p).

(It is a fact that such an X can always be chosen, and that the value of ω(p)(v) does not
depend on which vector field X is chosen, as long as X(p) = v; this is an exercise you can
do if you feel like it.)
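A numerical spot-check of the multilinearity in Definition 3.9, for the determinant of Example 3.10(c); this is a sketch of ours using numpy, with randomly chosen vectors:

```python
# Check that det : (R^3)^3 -> R is linear in (say) its first slot.
import numpy as np

rng = np.random.default_rng(0)
v1, v2, v3, w = rng.standard_normal((4, 3))
a, b = 2.0, -3.0

T = lambda u1, u2, u3: np.linalg.det(np.column_stack([u1, u2, u3]))
assert np.isclose(T(a * v1 + b * w, v2, v3),
                  a * T(v1, v2, v3) + b * T(w, v2, v3))
```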

While Definition 3.9 gives us a lot of examples, we need more to be able to work
with and compute with. For one thing, we want to focus on k-tensors that have more
“symmetry”, and we also want to have a kind of “multiplication” on the tensors we work
with:

Definition 3.11 (Alternating Tensors): A k-tensor on V is called alternating iff inter-


changing any two input vectors changes the sign of the output. The set of all alternating
k-tensors on V is denoted Λk (V ). We call an element ω ∈ Λk (V ) an alternating k-tensor
or (linear) k-form. The number k is called the degree of ω.

Note: Clearly, Λk (V ) ⊂ T k (V ), and in fact it is not very hard to prove that Λk (V ) is a


linear subspace (i.e. closed under addition and scalar multiplication).

For V = Rn , the properties of the determinant tell us that det ∈ Λn (V ), since the
sign of the determinant changes whenever we interchange two columns (or rows) of an
n × n matrix. In contrast, the Euclidean inner product ⟨·, ·⟩ ∈ T 2 (V ) is not an element
of Λ2 (V ), because

⟨v, w⟩ = ⟨w, v⟩, v, w ∈ V,

whereas for an alternating 2-form ω ∈ Λ2 (V ) we must have (by definition),

ω(v, w) = −ω(w, v).

Besides being a linear subspace of T k (V ), the set of linear k-forms Λk (V ) has an addi-
tional structure that we’ll make use of: a way of multiplying two alternating tensors to
get a third alternating tensor (the degrees add). The following Theorem tells us how this
is done, and gives some consequences. We will not worry about the proof in this course,
nor about how to define the product in general: the properties listed below suffice for
most of our calculations and further results, and in the (low-dimensional) cases of most
interest to us we can give the definitions explicitly:

Theorem 3.12 (Wedge Product): For any linear k-form ω ∈ Λk (V ) and linear l-form
η ∈ Λl (V ), there is a linear (k + l)-form ω ∧ η ∈ Λk+l (V ). This defines a product (called
the wedge product) on the space of all alternating tensors of all degrees, and this wedge
product has the following properties (for any ω, ωi ∈ Λk (V ), η, ηi ∈ Λl (V ), θ ∈ Λj (V ),
α, β ∈ R):
(a) (αω1 + βω2 ) ∧ η = α(ω1 ∧ η) + β(ω2 ∧ η);
(b) ω ∧ (αη1 + βη2 ) = α(ω ∧ η1 ) + β(ω ∧ η2 );
(c) (ω ∧ η) ∧ θ = ω ∧ (η ∧ θ);
(d) ω ∧ η = (−1)kl η ∧ ω.
Moreover, if L : V → W is any linear map between two vector spaces, and ω ∈ Λk (W ),
η ∈ Λl (W ), we have
(e) L∗ (ω ∧ η) = (L∗ ω) ∧ (L∗ η) ∈ Λk+l (V ).

To see how these properties allow us to do most calculations of interest, first note
that Λ0 (V ) = T 0 (V ) = R, and Λ1 (V ) = T 1 (V ) (every 1-tensor is, vacuously, alternating
since it is not possible to interchange the order of two input vectors as a 1-tensor only
eats one vector from V as input). But Λ1 (V ) = T 1 (V ) can be assigned a basis once we
are given a basis for V . If v1 , . . . , vn is any given basis for V , then we define elements
ϕ^1 , . . . , ϕ^n ∈ T 1 (V ) by letting, for i = 1, . . . , n, the linear map ϕ^i : V → R be given by

ϕ^i(v_j) := δ^i_j = 1 if i = j, and 0 if i ≠ j,

and then extending this definition linearly, i.e. for any vector w ∈ V , we write w =
α^1 v_1 + . . . + α^n v_n for α^1 , . . . , α^n ∈ R and let

ϕ^i(w) = Σ_{j=1}^n α^j ϕ^i(v_j) = Σ_{j=1}^n α^j δ^i_j = α^i .

This is well-defined by the properties of a basis, and the maps ϕi : V → R are clearly
linear, so ϕ1 , . . . , ϕn ∈ T 1 (V ) = Λ1 (V ). In fact, it is a Theorem that the elements
ϕ1 , . . . , ϕn form a basis of Λ1 (V ), which is referred to as the dual basis corresponding to
the basis v1 , . . . , vn .

It turns out that by combining the elements of this dual basis, using the wedge product
of Theorem 3.12, we can form bases of Λk (V ) for all k ≥ 0, and calculate the dimensions
of these as vector spaces. Before stating the general result, let’s look at a few examples
for low-dimensional vector spaces. Since any two vector spaces of the same dimension are
isomorphic (once we’ve chosen a basis), we’ll just look at Rn for small values of n, and
we’ll fix ideas by using the standard basis e1 , . . . , en . For reasons that will become clear in
the next section, we denote the dual basis in this case by dx1 , . . . , dxn (or even by dx, dy
or dx, dy, dz when n = 2, 3). In other words, dxi = ϕi , so the defining identities are

dx^i(e_j) = δ^i_j .

The key fact in our calculations below will be the identities (a) and (b) of Theorem
3.12, as well as identity (d) for the case k = l = 1, which tells us that for i, j = 1, . . . , n,
we have
dx^i ∧ dx^j = −dx^j ∧ dx^i .
(In particular, dx^i ∧ dx^i = 0). On the other hand, for ω1 , ω2 ∈ Λ1 (V ), there is a simple
formula we could guess as the definition of ω1 ∧ ω2 ∈ Λ2 (V ) that has these properties:
Given v, w ∈ V , guess that

(ω1 ∧ ω2 )(v, w) := ω1 (v)ω2 (w) − ω1 (w)ω2 (v) ∈ R.

One can check that this really does define an element of Λ2 (V ), and that the relevant
properties of Theorem 3.12 are satisfied.

Example 3.13 (Exterior Algebra of V = R2 ): For V = R2 , let’s look at Λk (V ) for


k = 0, 1, 2, . . .. First of all, we always have Λ0 (V ) = T 0 (V ) = R. Second, Λ1 (V ) = T 1 (V ),
and by the above discussion we have a basis given by the dual basis to the standard basis,
which is denoted dx, dy. In other words, any linear 1-form ω ∈ Λ1 (R2 ) can be written as

ω = ωx dx + ωy dy, for ωx , ωy ∈ R.

Hence, using this notation for any ω, η ∈ Λ1 (R2 ), we calculate their wedge product:

ω ∧ η = (ωx dx + ωy dy) ∧ (ηx dx + ηy dy)
      = ωx ηx dx ∧ dx + ωx ηy dx ∧ dy + ωy ηx dy ∧ dx + ωy ηy dy ∧ dy
      = (ωx ηy − ωy ηx ) dx ∧ dy
      = det( ωx ωy ; ηx ηy ) dx ∧ dy,

where det( a b ; c d ) denotes the determinant of the 2 × 2 matrix with rows (a, b) and (c, d).

On the other hand, let v = v^1 e_1 + v^2 e_2 , w = w^1 e_1 + w^2 e_2 ∈ R2 be any given vectors. Then
using the formula for the wedge product of 1-forms discussed above, and the defining
identities for dx, dy, we calculate:

(dx ∧ dy)(v, w) = dx(v) dy(w) − dx(w) dy(v)
               = v^1 w^2 − w^1 v^2
               = det(v, w).

This shows that dx∧dy = det ∈ Λ2 (R2 ). Finally, we’ll show that every element θ ∈ Λ2 (R2 )
is a scalar multiple of dx ∧ dy = det, which shows that Λ2 (R2 ) is a 1-dimensional vector
space with this element giving a basis. Given any θ ∈ Λ2 (R2 ), define

θxy := θ(e1 , e2 ) ∈ R.

Then given any v = v^1 e_1 + v^2 e_2 , w = w^1 e_1 + w^2 e_2 ∈ R2 , we calculate, using the fact
that θ is multi-linear and alternating:

θ(v, w) = θ(v^1 e_1 + v^2 e_2 , w^1 e_1 + w^2 e_2)
        = v^1 w^1 θ(e_1 , e_1) + v^1 w^2 θ(e_1 , e_2) + v^2 w^1 θ(e_2 , e_1) + v^2 w^2 θ(e_2 , e_2)
        = v^1 w^1 (0) + v^1 w^2 θxy + v^2 w^1 (−θxy) + v^2 w^2 (0)
        = θxy (v^1 w^2 − w^1 v^2)
        = θxy det(v, w).

Since v, w ∈ R2 are arbitrary vectors, this shows that

θ = θxy det = θxy dx ∧ dy ∈ Λ2 (R2 ).

Finally, we can show that Λk (R2 ) = {0} for all k ≥ 3. To do this, let µ ∈ Λk (R2 ), and let
u, v, w ∈ R2 be arbitrary vectors. Then writing u, v and w in terms of the basis vectors
e1 , e2 , we can show that µ(u, v, w, . . .) = 0 by using the fact that µ is multi-linear and
alternating, with a calculation similar to the one above.
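The identity dx ∧ dy = det is easy to test numerically; below is a minimal sketch of ours implementing the guessed wedge formula for 1-forms and comparing against numpy's determinant for random vectors:

```python
# (dx ^ dy)(v, w) = dx(v) dy(w) - dx(w) dy(v) should equal det(v, w) on R^2.
import numpy as np

dx = lambda v: v[0]                  # dual basis of the standard basis of R^2
dy = lambda v: v[1]
wedge = lambda w1, w2: (lambda v, w: w1(v) * w2(w) - w1(w) * w2(v))

rng = np.random.default_rng(1)
v, w = rng.standard_normal((2, 2))
assert np.isclose(wedge(dx, dy)(v, w),
                  np.linalg.det(np.column_stack([v, w])))
```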

Example 3.14 (Exterior Algebra of V = R3 ): Let V = R3 , and denote by dx, dy, dz


the 1-forms that make the dual basis to the standard basis e1 , e2 , e3 . As before, any 1-
form ω ∈ Λ1 (R3 ) can be expressed as ω = ωx dx + ωy dy + ωz dz, and we calculate, for
ω, η ∈ Λ1 (R3 ):

ω ∧ η = (ωx dx + ωy dy + ωz dz) ∧ (ηx dx + ηy dy + ηz dz)


= (ωy ηz − ωz ηy )dy ∧ dz + (ωz ηx − ωx ηz )dz ∧ dx + (ωx ηy − ωy ηx )dx ∧ dy.

On the other hand, given any linear 2-form θ ∈ Λ2 (R3 ), then defining the real coefficients

θyz := θ(e2 , e3 ), θzx := θ(e3 , e1 ), θxy := θ(e1 , e2 ) ∈ R,

we can show using a similar calculation as was done in Example 3.13 that

θ = θyz dy ∧ dz + θzx dz ∧ dx + θxy dx ∧ dy ∈ Λ2 (R3 ),

which shows us that Λ2 (R3 ) is a 3-dimensional vector space with dy ∧ dz, dz ∧ dx, dx ∧ dy
forming a basis. Note that Λ1 (R3 ) and Λ2 (R3 ) are both 3-dimensional, as is V = R3 . We
fix isomorphisms between them by identifying their bases in the following way:

Φ : R3 → Λ1 (R3 ) is given by mapping

e_1 ↦ Φ(e_1) = dx, e_2 ↦ Φ(e_2) = dy, e_3 ↦ Φ(e_3) = dz;

and Ψ : R3 → Λ2 (R3 ) is given by mapping

e_1 ↦ Ψ(e_1) = dy ∧ dz, e_2 ↦ Ψ(e_2) = dz ∧ dx, e_3 ↦ Ψ(e_3) = dx ∧ dy.

Then using these identifications, and the expression for ω ∧ η ∈ Λ2 (R3 ) calculated
above, we see that for any v, w ∈ R3 , the following identity holds, which relates the wedge
product to the cross product × of vectors in R3 :

Φ(v) ∧ Φ(w) = Ψ(v × w) ∈ Λ2 (R3 ). (7)

The description of the exterior algebra of R3 is completed by showing that

dx ∧ dy ∧ dz = det ∈ Λ3 (R3 ),

and that Λk (R3 ) = {0} for k > 3. This can be done using similar calculations and
arguments that were used in Example 3.13, although the calculations are slightly more
complicated.
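Identity (7) can likewise be spot-checked numerically. In the sketch below (our own illustration), the coefficients of Φ(v) ∧ Φ(w) in the basis dy ∧ dz, dz ∧ dx, dx ∧ dy are computed from the wedge formula of this example and compared with v × w:

```python
# Check Phi(v) ^ Phi(w) = Psi(v x w) by comparing coefficients in the
# basis (dy^dz, dz^dx, dx^dy) of Lambda^2(R^3).
import numpy as np

def wedge_coeffs(v, w):
    # coefficients of Phi(v) ^ Phi(w), read off as in Example 3.14
    return np.array([v[1] * w[2] - v[2] * w[1],
                     v[2] * w[0] - v[0] * w[2],
                     v[0] * w[1] - v[1] * w[0]])

rng = np.random.default_rng(2)
v, w = rng.standard_normal((2, 3))
assert np.allclose(wedge_coeffs(v, w), np.cross(v, w))
```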

Theorem 3.15: Let V be an n-dimensional vector space, and suppose that v1 , . . . , vn is a


basis for V . Let ϕ1 , . . . , ϕn be the dual basis for T 1 (V ) = Λ1 (V ). Then, for every integer
k ≥ 0, the set of linear k-forms,

{ϕi1 ∧ ϕi2 ∧ . . . ∧ ϕik | 1 ≤ i1 < i2 < . . . < ik ≤ n}

forms a basis for Λk (V ). In particular, the dimension of Λk (V ) is given by the number of
elements in this set, which is the binomial coefficient

C(n, k) = n!/(k!(n − k)!).
As a result, Λn (V ) is 1-dimensional, and Λk (V ) = {0} for all k > n.

Theorem 3.16: For V = Rn and dx^1 , . . . , dx^n ∈ Λ1 (Rn ) the dual basis to the standard
basis, we have
dx1 ∧ . . . ∧ dxn = det ∈ Λn (Rn ),
and this element forms a basis for Λn (Rn ).

We will not worry about giving proofs of either of these Theorems this year, since to
do so requires a more general formula for the wedge product. I will provide an outline of
the steps involved as optional TUT problems, for those who are interested.

3.2 TUT Problems

1. Let V = Rn with the standard basis denoted by e1 , . . . , en . Let the dual basis be
denoted by dx^1 , . . . , dx^n , which means that for any i, j = 1, . . . , n we have

dx^i(e_j) = δ^i_j = 1 if i = j, and 0 if i ≠ j,

and therefore if v = (v^1 , . . . , v^n) = v^1 e_1 + . . . + v^n e_n ∈ Rn is any vector, then dx^i(v) = v^i .


This problem shows how to use the dxi and the wedge product to form bases of Λ1 (V )
and Λ2 (V ).

(a) Given any linear 1-form ω ∈ Λ1 (V ), define ω1 , . . . , ωn ∈ R by letting ωi := ω(ei )


for each i = 1, . . . , n. Prove that

ω = ω1 dx1 + . . . + ωn dxn ∈ Λ1 (V ) (8)

by showing that both sides of (8) give the same real value when evaluated on an arbitrary
vector v ∈ V . (Hint: You can simplify your calculations by first showing that it suffices
to show that they have the same value for v = ej , where j = 1, . . . , n is arbitrary, since
both sides of (8) are linear.)

(b) Show that if α1 , . . . , αn ∈ R are any real coefficients, and

α1 dx1 + . . . + αn dxn = 0 ∈ Λ1 (V )

(where the right-hand side means the “zero map” 0 : V → R which sends every vector
v ∈ V to 0 ∈ R), then α1 = α2 = . . . = αn = 0. Hence, the linear 1-forms dx1 , . . . , dxn
form a basis for Λ1 (V ).

(c) Given ω, η ∈ Λ1 (V ), use the properties (a), (b) and (c) of Theorem 3.12 to show
that
ω ∧ η = Σ_{1≤i<j≤n} (ω_i η_j − ω_j η_i) dx^i ∧ dx^j ∈ Λ2 (V ), (9)

where ω = ω1 dx1 + . . . + ωn dxn and η = η1 dx1 + . . . + ηn dxn are the unique ways to express
ω and η in terms of the basis dx1 , . . . , dxn .

(d)* Given any linear 2-form θ ∈ Λ2 (V ) and any pair of indices 1 ≤ i < j ≤ n, define
θi,j := θ(ei , ej ) ∈ R. Prove that
θ = Σ_{1≤i<j≤n} θ_{i,j} dx^i ∧ dx^j ∈ Λ2 (V ), (10)

by showing that both sides of (10) give the same real value when evaluated on an ar-
bitrary (ordered) pair of vectors v, w ∈ V . (Hint: You can simplify your calculations
by first showing that it suffices to prove that both sides give the same real value when
evaluated on an ordered pair of the form v = ek , w = el for some 1 ≤ k, l ≤ n, because
both sides of (10) are multi-linear.)

(e)* Show that if α1,2 , α1,3 , . . . , α2,3 , . . . , αn−1,n ∈ R are any real coefficients, and

α1,2 dx1 ∧ dx2 + α2,3 dx2 ∧ dx3 + . . . + αn−1,n dxn−1 ∧ dxn = 0 ∈ Λ2 (V )

(where the right-hand side 0 ∈ Λ2 (V ) is the linear 2-form given by 0(v, w) = 0 for all
v, w ∈ V ), then α1,2 = . . . = αn−1,n = 0. Hence, the linear 2-forms

dxi ∧ dxj , 1 ≤ i < j ≤ n

form a basis for Λ2 (V ). (How many of these are there?)

2. Let V = R2n for n = 1, 2, . . .. Define a map ω : V × V → R by letting, for v =
(v^1 , . . . , v^{2n}) = v^1 e_1 + . . . + v^{2n} e_{2n} and w = (w^1 , . . . , w^{2n}) = w^1 e_1 + . . . + w^{2n} e_{2n} ∈ V
any two vectors,

ω(v, w) := Σ_{i=1}^n (v^{i+n} w^i − v^i w^{i+n}) ∈ R. (11)

(a) Prove that this defines a linear 2-form ω ∈ Λ2 (V ).

(b) Prove that ω is non-degenerate: If ω(v, w) = 0 for all w ∈ V , then v = 0.

(c) Denote by dx1 , . . . , dxn , dp1 , . . . , dpn ∈ Λ1 (V ) the linear 1-forms that are dual to
the standard basis e1 , . . . , e2n of V . In other words, for any v = v 1 e1 + . . . + v 2n e2n ∈ V ,

dx^i(v) = v^i and dp^i(v) = v^{i+n} , i = 1, . . . , n.

Show that

ω = Σ_{i=1}^n dp^i ∧ dx^i ∈ Λ2 (V ). (12)

(d)* For n = 1, 2, 3, use the properties of the wedge product given by Theorem 3.12
to verify the identity

ω^n / n! = (−1)^{n(n+1)/2} dx^1 ∧ . . . ∧ dx^n ∧ dp^1 ∧ . . . ∧ dp^n ∈ Λ2n (V ), (13)

where ω^n = ω ∧ . . . ∧ ω (n times). (Note the sign: it is −1 for n = 1, 2 and +1 for n = 3.)

3. Let V be any finite-dimensional vector space, and let ω, η ∈ Λ1 (V ) be two linear


1-forms.

(a) Define a map ω ∧ η : V × V → R by letting, for any v, w ∈ V ,

ω ∧ η(v, w) := ω(v)η(w) − ω(w)η(v).

Show that this map is a linear 2-form, i.e. ω ∧ η ∈ Λ2 (V ), and prove that the map

∧ : Λ1 (V ) × Λ1 (V ) → Λ2 (V )

determined by this wedge product satisfies properties (a), (b), (d) and (e) of Theorem
3.12 with k = l = 1.

(b)* Given a linear 2-form θ ∈ Λ2 (V ), we can write

θ = Σ_{i<j} θ_{i,j} ϕ^i ∧ ϕ^j

for some linear 1-forms ϕ1 , . . . , ϕn ∈ Λ1 (V ) (for n the dimension of V ) in a way similar


to what was done in Question 1(d)-(e). Therefore, for any ω ∈ Λ1 (V ) we can define
ω ∧ θ = Σ_{i<j} θ_{i,j} ω ∧ ϕ^i ∧ ϕ^j ∈ Λ3 (V )

as long as we have a proper definition of ω ∧ η ∧ ϕ ∈ Λ3 (V ) for any linear 1-forms


ω, η, ϕ ∈ Λ1 (V ). To do this, define a map V 3 → R by letting, for any u, v, w ∈ V ,

ω ∧ η ∧ ϕ(u, v, w) = ω(u)η(v)ϕ(w) + ω(v)η(w)ϕ(u) + ω(w)η(u)ϕ(v)


− ω(v)η(u)ϕ(w) − ω(u)η(w)ϕ(v) − ω(w)η(v)ϕ(u).

Prove that this defines a linear 3-form ω ∧ η ∧ ϕ ∈ Λ3 (V ), and that in this way we get
wedge product maps

∧ : Λ1 (V ) × Λ2 (V ) → Λ3 (V ) and ∧ : Λ2 (V ) × Λ1 (V ) → Λ3 (V )

that satisfy the properties (a), (b), (d) and (e) of Theorem 3.12 (with k = 1, l = 2 or
k = 2, l = 1).

(c)* Following the pattern of (a) and (b) above, define for any linear 1-forms
ω_1 , . . . , ω_k ∈ Λ1 (V ) the map ω_1 ∧ . . . ∧ ω_k : V^k → R by letting

ω_1 ∧ . . . ∧ ω_k (v_1 , . . . , v_k) = Σ_{σ∈S_k} sgn(σ) ω_1(v_{σ(1)}) ω_2(v_{σ(2)}) · · · ω_k(v_{σ(k)}),

where Sk is the group of all permutations of the integers {1, 2, . . . , k}, which has k!
elements, and for σ ∈ Sk we have sgn(σ) = 1 if σ can be expressed as a composition of an
even number of transpositions of any two integers and sgn(σ) = −1 if σ can be expressed
as a composition of an odd number of transpositions of any two integers. This formula
can be used to form the wedge products of any linear forms, which can be shown to satisfy
the properties of Theorem 3.12. Using this formula, and Theorem 3.15, prove Theorem
3.16: if V = Rn , then
dx1 ∧ . . . ∧ dxn = det ∈ Λn (V ).
(Hint: Using the dimension of Λn (V ), show that the identity follows if it can be shown that
both sides give the same real value when evaluated on the n-tuple (e1 , e2 , . . . , en ) ∈ V n ,
and carry out that computation.)
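The permutation formula above is straightforward to implement; the following Python sketch of ours uses it to spot-check Theorem 3.16 numerically for n = 4 (random vectors, compared against numpy's determinant). It does not replace the proof asked for in the problem:

```python
# Evaluate dx^1 ^ ... ^ dx^n on (v_1, ..., v_n) via the permutation formula
# and compare with det of the matrix having the v_i as columns.
import numpy as np
from itertools import permutations

def sgn(sigma):
    # sign of a permutation via its inversion count
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def top_wedge(vectors):
    n = len(vectors)
    # sum over permutations: sgn(s) * dx^1(v_{s(1)}) * ... * dx^n(v_{s(n)})
    return sum(sgn(s) * np.prod([vectors[s[i]][i] for i in range(n)])
               for s in permutations(range(n)))

rng = np.random.default_rng(3)
vs = list(rng.standard_normal((4, 4)))
assert np.isclose(top_wedge(vs), np.linalg.det(np.column_stack(vs)))
```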

3.3: Differential Forms and the Exterior Derivative
In this Section, we use the tensor algebra from Chapter 3.2 to define differential k-
forms on a general manifold, and work out ways of calculating with them, including the
notion of the exterior derivative of differential forms. While our initial definitions are
done for general manifolds, the most important example to keep in mind is for an open
subset of some Rn , and that is where we’ll do most actual calculations.

Definition 3.17 (Differential k-Forms): Let M ⊂ Rn+r be an n-manifold, and k =


0, 1, . . . a non-negative integer. A differential k-form on M , denoted ω ∈ Ωk (M ) is either
of the following equivalent objects:

(a) A map ω : Vect(M )k → C ∞ (M ) that is multilinear and alternating. In other


words, if X1 , . . . , Xk , Y ∈ Vect(M ) and f, g ∈ C ∞ (M ), then for any i = 1, . . . , k,

ω(X1 , . . . , f Xi + gY, . . . , Xk ) = f ω(X1 , . . . , Xi , . . . , Xk ) + gω(X1 , . . . , Y, . . . , Xk ),

and if we interchange the order in which any two vector fields are inserted as inputs of ω,
then the output changes sign.

(b) A rule that assigns, for each p ∈ M , a linear k-form ω(p) ∈ Λk (Tp M ), and does
so in a differentiable way, meaning that for any differentiable vector fields X1 , . . . , Xk ∈
Vect(M ), the real-valued function ω(X1 , . . . , Xk ) : M → R that’s defined by

ω(X1 , . . . , Xk )(p) := ω(p)(X1 (p), . . . , Xk (p)) ∈ R, p ∈ M,

is differentiable.

Note: To see that both of these definitions of a differential k-form are equivalent, we
need to know how to go from an object ω as defined in (a) to one as defined in (b),
and vice-versa. To go from (b) to (a) is the easier part: Given a differential k-form ω
as in (b), for any differentiable vector fields X1 , . . . , Xk ∈ Vect(M ) we get a function
ω(X1 , . . . , Xk ) : M → R by using the formula in (b), and by definition this function is
in C ∞ (M ). Also, the fact that ω(p) ∈ Λk (Tp M ) allows us to easily prove that the map
Vect(M )k → C ∞ (M ) we get in this way is multilinear and alternating as defined in (a).
To go from (a) to (b) is more complicated, but can be done.

Lemma 3.18: Let U ⊂ Rn be an open subset, with standard coordinate functions


denoted x^1 , . . . , x^n ∈ C ∞ (U ). If ω ∈ Ωk (U ) is any differential k-form on U , then we can
write

ω = Σ_{1≤i_1<i_2<...<i_k≤n} ω_{i_1,...,i_k} dx^{i_1} ∧ . . . ∧ dx^{i_k} , (14)

for uniquely determined differentiable real-valued functions ωi1 ,...,ik ∈ C ∞ (U ) which are
determined, for each set of strictly increasing indices 1 ≤ i1 < i2 < . . . < ik ≤ n, by

ωi1 ,...,ik := ω(Ei1 , . . . , Eik ) ∈ C ∞ (U ),

where Ei ∈ Vect(U ) is the constant vector field given by Ei (p) = ei ∈ Rn (the ith standard
basis vector) for all p ∈ U .

Proof: Since U ⊂ Rn is open, we have Tp U = Rn for all p ∈ U . Therefore, using Definition


3.17(b), we have ω(p) ∈ Λk (Rn ) for each p ∈ U . For e1 , . . . , en the standard basis of Rn ,
the dual basis of Λ1 (Rn ) is given by dx1 , . . . , dxn , the exterior derivatives of the coordinate
functions, since
dx^i(E_j)(p) = dx^i(e_j) = δ^i_j , i, j = 1, . . . , n.
Hence, using Theorem 3.15 we can write

ω(p) = Σ_{1≤i_1<i_2<...<i_k≤n} α_{i_1,i_2,...,i_k} dx^{i_1} ∧ dx^{i_2} ∧ . . . ∧ dx^{i_k}

for some real coefficients αi1 ,i2 ,...,ik (we can have a different coefficient for EACH strictly
increasing index set 1 ≤ i1 < . . . < ik ≤ n, so our sum above potentially includes a term
for each such index). Defining a function ωi1 ,...,ik : U → R by

ωi1 ,...,ik (p) = αi1 ,...,ik ,

we can use the identity dx^i(E_j) = δ^i_j as well as the alternating property to show that if
1 ≤ i1 < . . . < ik ≤ n and 1 ≤ j1 < . . . < jk ≤ n are two strictly increasing sets of indices,
then

dx^{i_1} ∧ . . . ∧ dx^{i_k} (E_{j_1} , . . . , E_{j_k}) = δ^{i_1}_{j_1} δ^{i_2}_{j_2} · · · δ^{i_k}_{j_k}
                                        = 1 if i_1 = j_1 , i_2 = j_2 , . . . , i_k = j_k , and 0 otherwise.

From this, and the above expression for ω(p), it follows that

ωi1 ,...,ik (p) = αi1 ,...,ik = ω(Ei1 , . . . , Eik )(p)

for all p ∈ U , which concludes the proof, since ω(Ei1 , . . . , Eik ) ∈ C ∞ (U ) by definition of
differentiability of ω. (QED)

Definition 3.19 (Exterior Derivative): We can use the expression given in Lemma
3.18 to define the exterior derivative of differential forms on an open subset U ⊂ Rn :

(a) If ω ∈ Ωk (U ) is of the form ω = gdxi1 ∧ . . . ∧ dxik for some set of strictly increasing
indices 1 ≤ i1 < . . . < ik ≤ n, then its exterior derivative dω ∈ Ωk+1 (U ) is

dω = d(gdxi1 ∧ . . . ∧ dxik ) = dg ∧ dxi1 ∧ . . . ∧ dxik .

(b) If ω ∈ Ωk (U ) is a general differential k-form, then expand ω as in (14), and define


its exterior derivative dω ∈ Ωk+1 (U ) by (a) and linearity:
dω = Σ_{1≤i_1<...<i_k≤n} dω_{i_1,...,i_k} ∧ dx^{i_1} ∧ . . . ∧ dx^{i_k} .

Theorem 3.20: Definition 3.19 defines operators d : Ωk (U ) → Ωk+1 (U ), for all k =
0, 1, 2, . . .. The following properties hold:

(a) If ω, η ∈ Ωk (U ), then d(ω + η) = dω + dη;

(b) If ω ∈ Ωk (U ) and η ∈ Ωl (U ), then d(ω ∧ η) = dω ∧ η + (−1)k ω ∧ dη;

(c) If ω ∈ Ωk (U ), then d(dω) = 0.

Furthermore, these properties determine d uniquely: if d′ : Ωk (U ) → Ωk+1 (U ) are operators
for all k = 0, 1, 2, . . . with properties (a)-(c), and d′φ = dφ ∈ Ω1 (U ) for all φ ∈ Ω0 (U ) =
C ∞ (U ), then d′ω = dω for all ω ∈ Ωk (U ) and all k = 1, 2, . . . as well.

Example 3.21: Let U ⊂ R2 be an open subset and ω ∈ Ω1 (U ) a differential 1-form.


As in Chapters 3.1 and 3.2 we denote x = x1 , y = x2 for the coordinate functions and
dx = dx^1 , dy = dx^2 for both the linear 1-forms on R2 and the differential 1-forms on
U . By Lemma 3.18 we can write ω = ωx dx + ωy dy for two differentiable real-valued
functions ωx , ωy ∈ C ∞ (U ). We’ll write f = ωx and g = ωy to get the more familiar
form ω = f dx + gdy. On the other hand, since dω ∈ Ω2 (U ) Lemma 3.18 also tells us
that dω = hdx ∧ dy for some h ∈ C ∞ (U ), since the only strictly increasing set of indices
1 ≤ i1 < i2 ≤ 2 is i1 = 1, i2 = 2. To calculate the function h, we use Definition 3.19,
Lemma 3.3(c), and the formula calculated in Example 3.13:

dω = df ∧ dx + dg ∧ dy
   = ((∂f/∂x) dx + (∂f/∂y) dy) ∧ dx + ((∂g/∂x) dx + (∂g/∂y) dy) ∧ dy
   = det( ∂f/∂x ∂f/∂y ; 1 0 ) dx ∧ dy + det( ∂g/∂x ∂g/∂y ; 0 1 ) dx ∧ dy
   = (∂g/∂x − ∂f/∂y) dx ∧ dy,

which shows that h = ∂g/∂x − ∂f/∂y. This expression should remind you of something: Green’s
Theorem, which is often written in the form
∫_{∂D} (f dx + g dy) = ∬_D (∂g/∂x − ∂f/∂y) dx dy

for a bounded domain D ⊂ R2 with piecewise smooth boundary ∂D. The calculation
above shows that we can restate Green’s Theorem as saying that for any differential
1-form ω ∈ Ω1 (D),

∫_{∂D} ω = ∬_D dω.
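This reformulation can be tested on a simple region. The sympy sketch below (our own example, on the unit square D = [0, 1]² with an arbitrary ω = f dx + g dy) computes both sides of ∫_{∂D} ω = ∬_D dω:

```python
# Green's Theorem as int_{boundary} omega = int_D d omega, on the unit square.
import sympy as sp

x, y = sp.symbols('x y')
f = x * y**2                      # arbitrary choice of omega = f dx + g dy
g = sp.sin(x) + y

h = sp.diff(g, x) - sp.diff(f, y)                    # d omega = h dx ^ dy
area_integral = sp.integrate(h, (x, 0, 1), (y, 0, 1))

# boundary integral over the four edges, traversed counterclockwise
bottom = sp.integrate(f.subs(y, 0), (x, 0, 1))       # dy = 0 along y = 0
right = sp.integrate(g.subs(x, 1), (y, 0, 1))        # dx = 0 along x = 1
top = -sp.integrate(f.subs(y, 1), (x, 0, 1))         # reversed orientation
left = -sp.integrate(g.subs(x, 0), (y, 0, 1))        # reversed orientation
assert sp.simplify(area_integral - (bottom + right + top + left)) == 0
```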

Example 3.22: Let U ⊂ R3 be open and ω ∈ Ω1 (U ) a differential 1-form. We denote


the coordinate functions by x, y, z and thus, as above, ω can be expressed in terms of
their exterior derivatives dx, dy, dz ∈ Ω1 (U ), and we’ll write ω = f dx + gdy + hdz. Since
dz ∧ dx = −dx ∧ dz, and the only strictly increasing sets of two indices 1 ≤ i1 < i2 ≤ 3
are i1 = 1, i2 = 2 and i1 = 1, i2 = 3 and i1 = 2, i2 = 3, we see from Lemma 3.18 that dω,
like any differential 2-form on U , can be expressed as a combination of dy ∧ dz, dz ∧ dx
and dx ∧ dy (with coefficients given by some differentiable real-valued functions on U ).
To find the coefficients, we calculate as before, but now using the formula for the wedge
product derived in Example 3.14 (we’ll save some space by writing partial derivatives as
fx = ∂f/∂x, etc.):
dω = df ∧ dx + dg ∧ dy + dh ∧ dz
= (fx dx + fy dy + fz dz) ∧ dx
+ (gx dx + gy dy + gz dz) ∧ dy
+ (hx dx + hy dy + hz dz) ∧ dz
= fz dz ∧ dx − fy dx ∧ dy
+ gx dx ∧ dy − gz dy ∧ dz
+ hy dy ∧ dz − hx dz ∧ dx
= (hy − gz )dy ∧ dz + (fz − hx )dz ∧ dx + (gx − fy )dx ∧ dy.
Note that these coefficients correspond to the coefficients of the curl of a vector field: If
X ∈ Vect(U ) is a vector field expressed as X = f E1 + gE2 + hE3 , then in the notation
of Example 3.14 we have ω = Φ(X) and dω = Ψ(∇ × X). But, since any vector field
X ∈ Vect(U ) can be written in this form, we have proven the following:

Theorem 3.23: If U ⊂ R3 is an open subset, and we denote by Φ : Vect(U ) → Ω1 (U )


and Ψ : Vect(U ) → Ω2 (U ) the bijections given by applying the maps from Example
3.14 point-by-point, then the exterior derivative d : Ω1 (U ) → Ω2 (U ) and curl operator
∇× : Vect(U ) → Vect(U ) are related by
dΦ(X) = Ψ(∇ × X) ∈ Ω2 (U ) (15)
for all X ∈ Vect(U ).
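A symbolic spot-check of Theorem 3.23, using sympy.vector's curl (our own illustration; the field X below is an arbitrary choice):

```python
# Compare the coefficients of d Phi(X), computed as in Example 3.22,
# with the components of curl X.
import sympy as sp
from sympy.vector import CoordSys3D, curl

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z
f, g, h = y * z, x**2, sp.sin(x * y)        # X = f E1 + g E2 + h E3

d_omega = (sp.diff(h, y) - sp.diff(g, z),   # coefficient of dy ^ dz
           sp.diff(f, z) - sp.diff(h, x),   # coefficient of dz ^ dx
           sp.diff(g, x) - sp.diff(f, y))   # coefficient of dx ^ dy

cX = curl(f * C.i + g * C.j + h * C.k)
components = (cX.dot(C.i), cX.dot(C.j), cX.dot(C.k))
assert all(sp.simplify(a - b) == 0 for a, b in zip(d_omega, components))
```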

Example 3.24: With U ⊂ R3 open and the same notation as above, now we calculate
formulas for the exterior derivative of differential 2-forms, d : Ω2 (U ) → Ω3 (U ). If θ ∈
Ω2 (U ), then as noted above we can write θ = f dy ∧ dz + gdz ∧ dx + hdx ∧ dy for some
f, g, h ∈ C ∞ (U ). Since dθ ∈ Ω3 (U ), then using Lemma 3.18 as in the previous examples
we can express dθ as the product of a differentiable real-valued function with dx ∧ dy ∧ dz,
and the value of the coefficient function can be calculated as above:
dθ = df ∧ dy ∧ dz + dg ∧ dz ∧ dx + dh ∧ dx ∧ dy
   = (fx dx + fy dy + fz dz) ∧ dy ∧ dz
+ (gx dx + gy dy + gz dz) ∧ dz ∧ dx
+ (hx dx + hy dy + hz dz) ∧ dx ∧ dy
= fx dx ∧ dy ∧ dz + gy dy ∧ dz ∧ dx + hz dz ∧ dx ∧ dy
= (fx + gy + hz )dx ∧ dy ∧ dz.
The coefficient in this case corresponds to the divergence of a vector field. We can make
this exact by introducing the bijection Ξ : C ∞ (U ) → Ω3 (U ) by letting
Ξ(φ) = φdx ∧ dy ∧ dz ∈ Ω3 (U ), φ ∈ C ∞ (U ).

Then we have proven the following:

Theorem 3.25: If U ⊂ R3 is open, and Ψ : Vect(U ) → Ω2 (U ), Ξ : C ∞ (U ) → Ω3 (U )


are the bijections defined above, then the exterior derivative d : Ω2 (U ) → Ω3 (U ) and
divergence operator ∇· : Vect(U ) → C ∞ (U ) are related by

dΨ(X) = Ξ(∇ · X) (16)

for all X ∈ Vect(U ).
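And the analogous spot-check of Theorem 3.25, comparing the coefficient computed in Example 3.24 with sympy.vector's divergence (again an arbitrary field of our choosing):

```python
# The coefficient of dx ^ dy ^ dz in d Psi(X) should be div X = f_x + g_y + h_z.
import sympy as sp
from sympy.vector import CoordSys3D, divergence

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z
f, g, h = x * y, y * z**2, sp.exp(x)

coeff = sp.diff(f, x) + sp.diff(g, y) + sp.diff(h, z)   # from Example 3.24
assert sp.simplify(coeff - divergence(f * C.i + g * C.j + h * C.k)) == 0
```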

To conclude this Chapter, we introduce the notion of pullback of differential forms:

Definition 3.26: If M ⊂ Rn+r is an n-manifold, N ⊂ Rm+s an m-manifold, and f : M → N


is a differentiable map, then for any differential k-form on N , ω ∈ Ωk (N ), its pullback to
M , f ∗ ω ∈ Ωk (M ) is defined by letting

f ∗ ω(p) = Df (p)∗ (ω(f (p))) ∈ Λk (Tp M )

for any p ∈ M .

Note: In the definition above, we are using the fact that for each p ∈ M , the derivative of
f at p is a linear map Df (p) : Tp M → Tf (p) N , and so since ω(f (p)) ∈ Λk (Tf (p) N ) (by the
definition of a differential k-form on N ), we have Df (p)∗ (ω(f (p))) ∈ Λk (Tp M ) as needed,
using the definition of pullback by a linear map given in Definition 3.9 (with V = Tp M
and W = Tf (p) N ). This means that the map

f ∗ ω(p) : (Tp M )k → R

is given, for any tangent vectors v1 , . . . , vk ∈ Tp M , by

f ∗ ω(p)(v1 , . . . , vk ) = ω(f (p))(Df (p)(v1 ), . . . , Df (p)(vk )) ∈ R.

One can verify that for k = 0, this gives us the same definition for the pullback of a
differentiable 0-form on N (which is simply a differentiable real-valued function on N ) as
in Lemma 3.8, while for k = 1 it gives us the pullback of differentiable 1-forms as defined
in Definition 3.6.

Example 3.27: Let S ⊂ R3 be a 2-manifold, i.e. a general regular surface. Given


a differentiable vector field F : S → R3 (which, for this example, does NOT need to
be tangent to the surface S at each point), we can define a differential 2-form on S,
θF ∈ Ω2 (S), as follows: For any p ∈ S and any tangent vectors v, w ∈ Tp S, let

θF (p)(v, w) := det(F (p), v, w) ∈ R.

It is not too hard to verify that θF satisfies Definition 3.17 for k = 2.

Now consider an open subset U ⊂ R2 and let r : U → S be a differentiable map (the


case to keep in mind is when r defines a (regular) parametrisation of S or some subset of
S). We want to calculate r∗ θF ∈ Ω2 (U ) in order to illustrate the concept and show how
it recovers an important quantity from the theory of surface integrals (see Chapter 1.1).
To carry out this calculation, we denote the coordinate functions on U by u, v, and their
exterior derivatives du, dv ∈ Ω1 (U ). We have already seen that any differential 2-form on
U can be expressed as the product of du ∧ dv with a differentiable real-valued function on
U , so r∗ θF = hdu ∧ dv for some h ∈ C ∞ (U ). To find a formula for h, note that for any
point q ∈ U we have
du ∧ dv(q)(e1 , e2 ) = 1,
and hence

h(q) = r ∗ θF (q)(e_1 , e_2)
     = θF (r(q))(Dr(q)(e_1), Dr(q)(e_2))
     = det(F (r(q)), D_1 r(q), D_2 r(q))
     = ⟨F (r(q)), D_1 r(q) × D_2 r(q)⟩,

where for the last equality we have used the following identity which relates the determi-
nant and the cross product of vectors in R3 :

det(u, v, w) = ⟨u, v × w⟩, u, v, w ∈ R3 .

(This identity, if unfamiliar, can easily be derived from the formula defining the cross
product.) This shows that the function h ∈ C ∞ (U ) is given by

h = ⟨F ◦ r, ∂r/∂u × ∂r/∂v⟩,

and hence the pullback of θF is given by

r ∗ θF = ⟨F ◦ r, ∂r/∂u × ∂r/∂v⟩ du ∧ dv ∈ Ω2 (U ). (17)

The expression on the right-hand side should look familiar: it is essentially the same
thing as the integrand (over a region D ⊂ R2 ) used to define the vector surface integral
∬_S F · da of a parametrised surface (see Definition 1.7 in Chapter 1.1).
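As an illustration of formula (17) (a worked example of our own, not from the notes), take S the unit sphere with the standard angular parametrisation and F the radial field F(p) = p; then h = ⟨F ◦ r, ∂r/∂u × ∂r/∂v⟩ reduces to the familiar spherical area factor:

```python
# Pullback coefficient h = <F o r, r_u x r_v> for the unit sphere and F(p) = p.
import sympy as sp

u, v = sp.symbols('u v')
r = sp.Matrix([sp.cos(u) * sp.cos(v),
               sp.sin(u) * sp.cos(v),
               sp.sin(v)])                 # unit sphere, angles u, v
F = r                                      # radial field evaluated along r

h = F.dot(r.diff(u).cross(r.diff(v)))      # coefficient of du ^ dv in r* theta_F
print(sp.simplify(h))                      # -> cos(v)
```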

Theorem 3.28: If U ⊂ Rn and V ⊂ Rm are open subsets, and f : U → V is a differen-


tiable map, then the pullback and exterior derivative commute, i.e. for any differentiable
k-form on V , ω ∈ Ωk (V ),
d(f ∗ ω) = f ∗ (dω) ∈ Ωk+1 (U ).

Note: It is possible, using Theorem 3.28, to define the exterior derivative of differential
forms on manifolds using the pullback to local coordinate systems, and to show that
the definition is independent of the choice of local coordinate system used. In this way
we get operators d : Ωk (M ) → Ωk+1 (M ), for a general n-manifold M ⊂ Rn+r and any
k = 0, 1, . . .. These operators can be shown to have the properties of Theorem 3.20 and
Theorem 3.28. As stated above, we will not go into the details here since they add more
complexity than is needed or justified for our aims in this course.

3.3 TUT Problems

1. Let U ⊂ R3 be an open subset and F ∈ Vect(U ) a differentiable vector field on U ,


which we can write as

F = (F^1 , F^2 , F^3 )^T = F^1 E_1 + F^2 E_2 + F^3 E_3

for some F 1 , F 2 , F 3 ∈ C ∞ (U ). We have defined the differential 2-forms Ψ(F ), θF ∈ Ω2 (U )


by letting
Ψ(F ) = F 1 dy ∧ dz + F 2 dz ∧ dx + F 3 dx ∧ dy,
and letting
θF (p)(v, w) = det(F (p), v, w) ∈ R
for any point p ∈ U and any vectors v, w ∈ Tp U = R3 .

(a) Show that Ψ(F ) = θF ∈ Ω2 (U ).

Using the calculations done in Example 3.7, if I = [a, b] ⊂ R is an interval and γ : I → U


is a (piecewise differentiable) parametrised curve in U ⊂ R3 , then the vector path integral
of F along γ can be expressed as
∫_γ F · dγ = ∫_I γ ∗ Φ(F ),

where Φ(F ) ∈ Ω1 (U ) is the differential 1-form given by


Φ(F ) = F 1 dx + F 2 dy + F 3 dz.
Now suppose that S ⊂ U is a surface which has a (regular) parametrisation r : D → S
for some domain D ⊂ R2 , and suppose that the boundary of S is parametrised by γ:
∂S = r(∂D) = γ(I). Using the identity (17) shown in Example 3.27, and the identity
shown in part (a), we can express the vector surface integral of F over S as

∬_S F · da = ∬_D r ∗ Ψ(F ).

(b) Use Theorem 3.23, Theorem 3.28 and Green’s Theorem (as expressed in Example
3.21 using differential forms on an open subset of R2 ) to prove Stokes’ Theorem (Thm.
1.9) in this setting:

∬_S (∇ × F ) · da = ∫_{∂S} F · dγ.
(c)* Modify the above procedure, using Theorem 3.25, to express Gauss’ Diver-
gence Theorem (Thm. 1.24) as an identity of integrals involving Ψ(F ) ∈ Ω2 (U ) and
dΨ(F ) ∈ Ω3 (U ).

2. Let U = R3 − {(0, 0, 0)} be the open subset of R3 obtained by removing the origin,
and let θ be the differential 2-form

θ = (x dy ∧ dz + y dz ∧ dx + z dx ∧ dy) / (x^2 + y^2 + z^2)^{3/2} ∈ Ω2 (U ).

(a) Show that dθ = 0.

(b)* Calculate the pullback f ∗ θ ∈ Ω2 (V ), where V = {(r, u, v) | r > 0} ⊂ R3 and


f : V → U is given by

f (r, u, v) = (r cos u cos v, r sin u cos v, r sin v), (r, u, v) ∈ V.

3. (a) Let L : Rn → Rn be a linear map and ω ∈ Λn (Rn ) a linear n-form on Rn . Prove


that
L∗ ω = det(L)ω ∈ Λn (Rn ).
(Hint: Use Theorem 3.15, in particular dim(Λn (Rn )) = 1, and the fact that det ∈ Λn (Rn )
is a non-zero element.)

(b) Using part (a), prove that if U, V ⊂ Rn are open subsets and f : U → V is a
differentiable map, then for any differential n-form ω ∈ Ωn (V ), its pullback is given by

f ∗ ω = det(Df ) (ω ◦ f ) ∈ Ωn (U ).

This means that if we write ω = gdx1 ∧ dx2 ∧ . . . ∧ dxn for some differentiable real-valued
function g ∈ C ∞ (V ) (which can always be done, according to Lemma 3.18), then

f ∗ (gdx1 ∧ . . . ∧ dxn ) = det(Df )(g ◦ f )dx1 ∧ . . . ∧ dxn .

4. Let B = (B 1 , B 2 , B 3 ) and E = (E 1 , E 2 , E 3 ) be two time-dependent vector fields on R3


(or some open subset of R3 ). This means that B i , E i : R4 → R for each i = 1, 2, 3. We
use x, y, z for the (“spacelike”) coordinates of R3 , and t for the additional (“timelike”)
coordinate in R4 . The First Set of Maxwell’s Equations for Electromagnetism are

∇ · B = 0; (18)

∇ × E + ∂B/∂t = 0. (19)
Define a differential 2-form B ∈ Ω2 (R4 ) and a differential 1-form E ∈ Ω1 (R4 ) by

B = B 1 dy ∧ dz + B 2 dz ∧ dx + B 3 dx ∧ dy;
E = E 1 dx + E 2 dy + E 3 dz;

and let F be the differential 2-form

F = B + E ∧ dt ∈ Ω2 (R4 ).

Prove that equations (18) and (19) are equivalent to dF = 0. (The Second Set of Maxwell’s
Equations can also be reformulated using differential forms, but the algebra needed is more
complicated and tedious. Doing this is important not only to show that differential forms
give us a nice way of writing physics equations; it’s also a necessary step for understanding
how Electromagnetism can be formulated as a classical gauge theory, which is what Yang
and Mills did when they introduced more general gauge theories in the 1950s.)

