
Integration with differential forms

Jessica J. Zhang
October 8, 2020

Roughly speaking, a differential form is just “something we can integrate.” When we evaluate the integral $\int x\,dx$, the $x\,dx$ part is itself a differential form. In a line integral, where we suddenly have $\int_C f\,dx + g\,dy$, the entire integrand, namely $f\,dx + g\,dy$, is now a differential form. Later on, when working with higher dimensions, we can have more complicated integrands like $f\,dx\,dy + g\,dx\,dz$, which would actually be more accurately written with wedges, i.e., as $f\,dx \wedge dy + g\,dx \wedge dz$.
Now, it isn’t immediately obvious why it’s useful to talk about the integrands on their own, or why it is
useful to suddenly invent a new operation (called a wedge product).
The main motivation comes when we are trying to integrate over spaces more complex than Euclidean
space. These spaces are called smooth manifolds, and it is often difficult to integrate over them using the
traditional, coordinate-based approach.¹
But another, equally important, motivation is that the relationship between integration and changes of
coordinate is often unnecessarily complicated. In general, even linear changes of coordinate require that we
multiply the integrand by some determinant, called the Jacobian. For example, let's try to integrate x^2 + y^2 over, say, the unit disk D. In Cartesian coordinates, the integral we would need to evaluate² is
$$\iint_D (x^2 + y^2)\, dA = \int_{-1}^{1}\int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} (x^2 + y^2)\, dx\, dy = \frac{\pi}{2}.$$

But in polar coordinates, we can't simply substitute x = r cos θ and y = r sin θ to find that the integral is
$$\iint_D r^2\, dA = \int_0^{2\pi}\!\!\int_0^1 r^2\, dr\, d\theta = \frac{2\pi}{3}.$$

In particular, as we learn in calculus, the dA part also changes upon a change of coordinates. To figure out how it changes, we take the determinant of a matrix determined by the partial derivatives of x and y with respect to r and θ and, basically, get that we need to multiply the whole thing by a factor of r. This gives us that the integral is actually
$$\iint_D r^2\, dA = \int_0^{2\pi}\!\!\int_0^1 r^3\, dr\, d\theta = \frac{\pi}{2},$$
which is what we expect.
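As a quick sanity check (a sketch that is not part of the paper and assumes NumPy and SciPy are available), we can confirm numerically that the Cartesian integral and the polar integral with the Jacobian factor r agree:

```python
# Numerically verify that integrating x^2 + y^2 over the unit disk gives pi/2,
# both in Cartesian coordinates and in polar coordinates with the Jacobian factor r.
import numpy as np
from scipy.integrate import dblquad

# Cartesian: y runs over [-1, 1], x over [-sqrt(1 - y^2), sqrt(1 - y^2)].
cart, _ = dblquad(lambda x, y: x**2 + y**2,
                  -1, 1,
                  lambda y: -np.sqrt(1 - y**2),
                  lambda y: np.sqrt(1 - y**2))

# Polar: integrand r^2 times the Jacobian factor r, with 0 <= r <= 1, 0 <= theta <= 2*pi.
polar, _ = dblquad(lambda r, theta: r**2 * r,
                   0, 2 * np.pi,
                   lambda theta: 0.0,
                   lambda theta: 1.0)

print(cart, polar, np.pi / 2)  # all three agree (about 1.5708)
```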
Finding the Jacobian when working with R2 isn’t the worst thing in the world. But it gets a bit
cumbersome in higher dimensions. And, more importantly, it makes it rather difficult to do algebraic
manipulations with integrals. Differential forms effectively encode the change without relying on an outside
multiplier (e.g., the Jacobian). In particular, we should think of differential forms (i.e., integrands) as a
way to assign a function to each point in such a way that the assigned functions change as the coordinates
change. This eliminates many of the complications which arise from changes of coordinate.
Differential forms take some buildup to introduce, but they're actually quite familiar things. In fact, the first terms in each of the three equations above, which take the integral with respect to some vague, amorphous entity “A,” show us exactly how we want differential forms to work! In particular, transferring from Cartesian to polar coordinates doesn't mess up dA; instead, it's that the relationship between dA and dx dy is different from that between dA and dr dθ. The problem is that dA doesn't really encode the information we need, because it doesn't tell us how to change dr dθ to be equal to dx dy. Our goal is to figure out how to make the dA have meaning and, in the process, make dx dy and dr dθ behave just as nicely as the dA does.

¹ This basically comes from the fact that manifolds have less structure than R^n. The standard formula for a line integral, for example, involves some notion of a metric—which not all manifolds have and which we do not always want to require. And taking integrals the traditional way basically always requires a choice of coordinates, which often proves unnecessarily cumbersome. Differential forms bypass this all—while retaining much of the traditional notation (helpful for intuition, consistency, and general happiness).
² Warning: Do not try to integrate this by hand.

1 Tensors
As we said, differential forms are, first and foremost, ways to assign functions. In particular, they assign to
each point in a given space a specific type of function, known as an alternating tensor.
Let V be a vector space. Then a (covariant) k-tensor on V is a function

$$\alpha \colon \underbrace{V \times \cdots \times V}_{k \text{ copies}} \to \mathbb{R}$$

which is multilinear. To be multilinear is to be linear with respect to each coordinate. In particular, we must have
$$\alpha(av_1 + a'v_1', v_2, \ldots, v_k) = a\,\alpha(v_1, v_2, \ldots, v_k) + a'\,\alpha(v_1', v_2, \ldots, v_k),$$
and similarly for each of the other coordinates.
Example 1.1. Many familiar functions on vector spaces, such as the dot product and the determinant, are
real-valued multilinear functions, and therefore are tensors. The cross product in R3 , on the other hand, is
multilinear (technically, bilinear), but is vector-valued and thus isn’t a tensor. In general, a 0-tensor is just
a constant and a 1-tensor is a real-valued linear map V → R, i.e., an element of the dual space V ∗ .
The main operation on tensors is called the tensor product. In particular, if α is a covariant k-tensor on V and β is a covariant ℓ-tensor on V, then we define their tensor product to be the function given by
$$\alpha \otimes \beta \colon (v_1, \ldots, v_k, w_1, \ldots, w_\ell) \mapsto \alpha(v_1, \ldots, v_k)\,\beta(w_1, \ldots, w_\ell).$$
One can show that α ⊗ β is multilinear and is therefore a covariant (k + ℓ)-tensor on V.
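To make this concrete, here is a minimal sketch (not from the paper) that models covariant tensors as ordinary Python functions of several vectors and implements the tensor product exactly as defined above; the helper name tensor_product is ours:

```python
# Covariant tensors as Python callables; the tensor product just multiplies values.
import numpy as np

def tensor_product(alpha, k, beta, l):
    """Return the (k + l)-tensor (alpha ⊗ beta)(v_1, ..., v_k, w_1, ..., w_l)."""
    def product(*vectors):
        assert len(vectors) == k + l
        return alpha(*vectors[:k]) * beta(*vectors[k:])
    return product

# Example: the dot product is a 2-tensor on R^3; "first coordinate" is a 1-tensor.
dot = lambda v, w: float(np.dot(v, w))
first = lambda v: float(v[0])

gamma = tensor_product(dot, 2, first, 1)   # a covariant 3-tensor on R^3
u, v, w = np.array([1., 2., 3.]), np.array([0., 1., 0.]), np.array([4., 0., 0.])
print(gamma(u, v, w))  # dot(u, v) * first(w) = 2 * 4 = 8.0
```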

1.1 Alternating tensors


Generally, we shouldn’t expect a tensor to behave in any predictable way if we swap two coordinates. Certain
tensors, such as the dot product, remain the same if their arguments are swapped. These are called symmetric
tensors; although they are very useful, they are not central to our discussion of differential forms, and so we
now will happily forget about their existence.
More useful for us are tensors that are similar to the determinant. In particular, the determinant is
an n-tensor on Rn , and it has the special property that it switches signs whenever two coordinates (i.e.,
rows/columns of a matrix) are swapped. In general, we say that a k-tensor is alternating if swapping two
arguments changes the sign. The set of all alternating k-tensors on a vector space V is denoted Λk (V ∗ ).3
Note that it is not particularly surprising that the kind of tensor which proves useful for building a theory
of integration is also the kind of tensor which mimics the determinant. After all, the determinant is indeed
a clean way to calculate areas; that is, it gives a coordinate-invariant formula to calculate an area, which is
precisely what we want differential forms to be able to do.
Example 1.2. All 0-tensors and 1-tensors are alternating. A 2-tensor is alternating if and only if α(v, w) =
−α(w, v). For example, if V = R2 , the map

$$\alpha(v, w) = v^1 w^2 - v^2 w^1$$

is alternating. (Note that we index different vectors with subscripts, while indexing the coordinates of each
vector with superscripts; we will stick to this convention throughout this handout.)
³ The reason for using the dual V∗, rather than V, comes from a different, more abstract definition of tensors as elements of the so-called tensor product of vector spaces. In this case, a covariant tensor is simply an element of the tensor product V∗ ⊗ · · · ⊗ V∗.

Even if α and β are alternating tensors, there is no guarantee that their tensor product α ⊗ β is.
Example 1.3. Suppose α is the 2-tensor in Example 1.2 and β is the 1-tensor taking a vector to its first coordinate, so that β(w) = w^1. Then
$$(\alpha \otimes \beta)(u, v, w) = \alpha(u, v)\,w^1.$$
However, if we swap u and w, then we find that
$$(\alpha \otimes \beta)(w, v, u) = \alpha(w, v)\,u^1,$$
which bears no particular relationship to α(u, v) w^1, so α ⊗ β is not alternating.


There is, however, a projection from the set of all k-tensors to the set of alternating k-tensors. If α is a k-tensor on V, then its alternation is given by
$$\operatorname{Alt}(\alpha)(v_1, \ldots, v_k) = \frac{1}{k!} \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \alpha(v_{\sigma(1)}, \ldots, v_{\sigma(k)}),$$
where S_k is the set of all permutations of k elements. By sgn σ, we mean the sign of the permutation; this is either +1 or −1.
The actual formula for Alt(α) is not critical to remember. The main idea is that it is obtained by applying α to every possible permutation of the v_i's, and then taking an alternating sum of the results. When k = 2, this gives
$$\operatorname{Alt}(\alpha)(v, w) = \frac{1}{2}\bigl(\alpha(v, w) - \alpha(w, v)\bigr).$$
Note that Alt(α) is indeed alternating in this case, because
$$\operatorname{Alt}(\alpha)(w, v) = \frac{1}{2}\bigl(\alpha(w, v) - \alpha(v, w)\bigr) = -\operatorname{Alt}(\alpha)(v, w).$$
When k = 3, we have
$$\operatorname{Alt}(\alpha)(u, v, w) = \frac{1}{6}\bigl(\alpha(u, v, w) - \alpha(v, u, w) - \alpha(u, w, v) - \alpha(w, v, u) + \alpha(v, w, u) + \alpha(w, u, v)\bigr).$$
It is relatively simple, though tedious, to check in this case, too, that Alt(α) is alternating. Indeed, we
generally have the following proposition.

Proposition 1.4. The alternation of a k-tensor α is an alternating k-tensor. That is, Alt(α) ∈ Λk (V ∗ ).
Thus we should think of Alt(α) as simply an alternating tensor defined by α. The particulars of the
definition are not very important to us.
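Here is a small sketch of the alternation operator as just defined (again, code of ours rather than anything from the paper): it sums α over all permutations of its arguments with signs and divides by k!, and the result changes sign when two arguments are swapped.

```python
# The alternation Alt(alpha) of a k-tensor, implemented directly from the formula.
from itertools import permutations
from math import factorial
import numpy as np

def sign(perm):
    """Sign of a permutation (a tuple of indices), computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def alt(alpha, k):
    """Return Alt(alpha) for a k-tensor alpha given as a function of k vectors."""
    def alternated(*vectors):
        total = sum(sign(p) * alpha(*(vectors[i] for i in p))
                    for p in permutations(range(k)))
        return total / factorial(k)
    return alternated

# A non-alternating 2-tensor on R^2 and its alternation.
alpha = lambda v, w: float(v[0] * w[0] + 2 * v[0] * w[1])
A = alt(alpha, 2)
v, w = np.array([1., 2.]), np.array([3., 5.])
print(A(v, w), A(w, v))  # -1.0 and 1.0: swapping the arguments flips the sign
```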

1.2 Wedge products


We already defined the tensor product of two tensors. However, we can define an even more useful operation, known as the wedge product, on alternating tensors. If α ∈ Λ^k(V∗) and β ∈ Λ^ℓ(V∗), then we write
$$\alpha \wedge \beta = \frac{(k+\ell)!}{k!\,\ell!}\, \operatorname{Alt}(\alpha \otimes \beta).$$
More explicitly, we have
$$(\alpha \wedge \beta)(v_1, \ldots, v_{k+\ell}) = \frac{1}{k!\,\ell!} \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn}\sigma)\, \alpha(v_{\sigma(1)}, \ldots, v_{\sigma(k)})\, \beta(v_{\sigma(k+1)}, \ldots, v_{\sigma(k+\ell)}).$$
Note that α ⊗ β is a (k + ℓ)-tensor, and so its alternation is an alternating (k + ℓ)-tensor.

Proposition 1.5. The wedge product is bilinear, associative, and anticommutative, which means that α ∧ β = (−1)^{kℓ} β ∧ α. Moreover, suppose V (and therefore V∗) is an n-dimensional vector space. Let V∗ have basis {ε^1, . . . , ε^n}. Then each ε^i is a 1-tensor and
$$\{\varepsilon^{i_1} \wedge \cdots \wedge \varepsilon^{i_k} : 1 \le i_1 < \cdots < i_k \le n\}$$
is a basis for Λ^k(V∗).
The main takeaway of this proposition is just that the wedge product behaves nicely, is anticommutative, and allows us to construct a natural basis for the vector space of all alternating k-tensors on V. Often, we denote ε^{i_1} ∧ · · · ∧ ε^{i_k} as ε^I, where I = (i_1, . . . , i_k) is an ascending multi-index, i.e., where i_1 < · · · < i_k.
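Continuing the sketch (reusing tensor_product and alt from the code above), the wedge product can be implemented straight from its definition; as a check, ε^1 ∧ ε^2 on R^2 recovers the familiar 2 × 2 determinant, and swapping the factors flips the sign.

```python
# Wedge product of alternating tensors, built from the tensor product and Alt.
from math import factorial
import numpy as np

def wedge(alpha, k, beta, l):
    """Wedge product of a k-tensor alpha and an l-tensor beta (both alternating)."""
    coeff = factorial(k + l) / (factorial(k) * factorial(l))
    alternated = alt(tensor_product(alpha, k, beta, l), k + l)
    return lambda *vectors: coeff * alternated(*vectors)

eps1 = lambda v: float(v[0])   # dual basis covector picking out the 1st coordinate
eps2 = lambda v: float(v[1])   # dual basis covector picking out the 2nd coordinate

omega = wedge(eps1, 1, eps2, 1)
v, w = np.array([1., 2.]), np.array([3., 5.])
print(omega(v, w), v[0] * w[1] - v[1] * w[0])         # both -1.0
print(omega(v, w) == -wedge(eps2, 1, eps1, 1)(v, w))  # anticommutativity: True
```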

An important note
The constant multiple in the definition of the wedge product is technically not necessary, and is not always
included, but is helpful because it provides a particularly nice (i.e., constant-free) basis for Λk (V ∗ ).
Recall that determinants are alternating tensors themselves. In fact, determinants basically define our
basis elements ε^I. In particular, the typical definition of ε^I is actually as the k-tensor such that
$$\varepsilon^I(v_1, \ldots, v_k) = \det \begin{pmatrix} \varepsilon^{i_1}(v_1) & \cdots & \varepsilon^{i_1}(v_k) \\ \vdots & \ddots & \vdots \\ \varepsilon^{i_k}(v_1) & \cdots & \varepsilon^{i_k}(v_k) \end{pmatrix}.$$
Recall that ε^{i_1} is a 1-tensor, meaning that it takes in a vector and returns some real number, so this definition makes sense.
Generally, we then prove that the collection of all εI for ascending multi-indices I is a basis for Λk (V ∗ ).
We finally show that εi1 ∧ · · · ∧ εik = εI , where I = (i1 , . . . , ik ).
The proof of each of these propositions is relatively involved and not particularly enlightening. However,
it is worth being aware of this determinant definition of the basis vectors for Λk (V ∗ ).
In particular, it gives us the following determinant-based interpretation for wedge products of 1-tensors (and hence, pointwise, for the 1-forms we define in the next section).
Proposition 1.6. Given 1-tensors ω^1, . . . , ω^k and vectors v_1, . . . , v_k, we have
$$(\omega^1 \wedge \cdots \wedge \omega^k)(v_1, \ldots, v_k) = \det\bigl(\omega^j(v_i)\bigr). \tag{1}$$
Proof. When ω^i = ε^i, this is true by the note above that ε^{i_1} ∧ · · · ∧ ε^{i_k} = ε^I. Now simply observe that both sides are linear in each ω^i, from which the proposition follows.
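Proposition 1.6 is easy to check numerically with the same helpers (a sketch; the covectors and vectors below are random and purely illustrative):

```python
# Check that a triple wedge of covectors on R^3 agrees with the 3x3 determinant
# det(omega^j(v_i)), as in Proposition 1.6. Reuses wedge from the sketch above.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))        # coefficients of three covectors on R^3
omegas = [lambda v, a=a: float(a @ v),
          lambda v, b=b: float(b @ v),
          lambda v, c=c: float(c @ v)]
vs = rng.standard_normal((3, 3))             # three vectors v_1, v_2, v_3 (as rows)

# Left side: (omega^1 ∧ omega^2 ∧ omega^3)(v_1, v_2, v_3), built by wedging twice.
lhs = wedge(wedge(omegas[0], 1, omegas[1], 1), 2, omegas[2], 1)(*vs)

# Right side: determinant of the matrix whose (i, j) entry is omega^j(v_i).
rhs = float(np.linalg.det(np.array([[om(v) for om in omegas] for v in vs])))
print(np.isclose(lhs, rhs))  # True
```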

2 Differential forms
Differential forms basically assign an alternating tensor to each point of the space over which we are per-
forming our desired integration.
To be more specific, let M ⊂ Rn be a domain of integration. This just means that it is a bounded subset
of Euclidean space which doesn’t include its boundary.4 At each point p ∈ Rn , let Tp Rn denote the tangent
space. This basically consists of all the possible directions we can go from p, which we should think of as
just a copy of R^n whose origin is at p. If R^n has coordinates x^1, . . . , x^n, then we write the basis of T_pR^n as
$$\left.\frac{\partial}{\partial x^1}\right|_p, \ldots, \left.\frac{\partial}{\partial x^n}\right|_p.$$

For simplicity, we often identify this space with R^n by writing
$$v^1 \left.\frac{\partial}{\partial x^1}\right|_p + \cdots + v^n \left.\frac{\partial}{\partial x^n}\right|_p = (v^1, \ldots, v^n) \in \mathbb{R}^n.$$
⁴ We technically also need that M has measure zero boundary. This basically translates to a requirement that the boundary of M (which isn't part of M, but is part of R^n!) has dimension less than n.

Note that (Tp Rn )∗ , which we typically just denote as Tp∗ Rn , also has dimension n.
Then a (differential) k-form defined on M is just a smooth alternating k-tensor field, i.e., a smooth
function ω which assigns to each point p ∈ M an alternating k-tensor ωp = ω(p) ∈ Λk (Tp∗ Rn ). We denote
the set of all differential k-forms on M as Ωk (M ).
The last part of the definition that we haven’t explained yet is the word “smooth.” In general, the issue
of smoothness only comes into play if we are trying to check if something is or isn’t a differential form; we
won’t worry ourselves about that here, and will just assume that everything that looks like a differential
form is one, and so we can safely ignore the smoothness criterion.5

Example 2.1. Let f(x, y) = x^2 + y^2. This is a smooth function. This gives a smooth 2-form ω taking p to
$$\omega_p = f(x, y)\,\alpha,$$
where α is the 2-tensor in Example 1.2 taking (v, w) to v^1 w^2 − v^2 w^1 and (x, y) are the Cartesian coordinates of p. The unit disk (without boundary) in R^2 is an example of a domain of integration. These two will be our prototypical examples in the remainder of this paper.

2.1 A basis for Ωk (M )


In this subsection, we focus on 0-forms, which are just smooth functions, and 1-forms, which are smooth
maps taking a point p ∈ M to a linear functional ωp : Tp Rn → R. We often refer to 1-forms as covector
fields, and the linear functionals ωp as covectors.
Our goal is to define and understand the differential of a function; this will allow us in Section 2.2 to
define a differential operator.
Given a smooth function f : M → R, we define the differential of f to be the map df taking a point p ∈ M to the map
$$df_p \colon T_p\mathbb{R}^n \to T_{f(p)}\mathbb{R} \cong \mathbb{R}, \qquad v \mapsto \sum_{i=1}^{n} \left.\frac{\partial f}{\partial x^i}\right|_p v^i, \tag{2}$$
where x^1, . . . , x^n are the coordinate axes of R^n. This is a 1-form.


This definition of the differential is analogous to the traditional definition of a total derivative; it effectively
provides a map taking the tangent space at p to the tangent space at f (p). As expected, it satisfies several
natural properties. In particular, the differential function is linear; satisfies the product (hence quotient)
rule, as well as a version of the chain rule; and is zero when f is a constant map.
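As a concrete illustration of Equation (2) (again a sketch of ours, with the partial derivatives approximated by finite differences), here is the differential of f(x, y) = x^2 + y^2 acting on a tangent vector:

```python
# The differential df_p as a covector: df_p(v) = sum_i (∂f/∂x^i at p) * v^i.
import numpy as np

def differential(f, p, h=1e-6):
    """Return the covector df_p, acting on tangent vectors v at the point p."""
    p = np.asarray(p, dtype=float)
    grad = np.array([(f(p + h * e) - f(p - h * e)) / (2 * h) for e in np.eye(len(p))])
    return lambda v: float(grad @ np.asarray(v, dtype=float))

f = lambda q: q[0]**2 + q[1]**2
dfp = differential(f, [1.0, 2.0])
print(dfp([3.0, -1.0]))           # 2*1*3 + 2*2*(-1) = 2 (up to rounding)
print(dfp([1, 0]), dfp([0, 1]))   # the partials 2x = 2 and 2y = 4 at p = (1, 2)
```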
If x^j : R^n → R denotes the projection of p = (p^1, . . . , p^n) to its j-th coordinate, then (dx^j)_p is simply the map with
$$(dx^j)_p(v) = \sum_{i=1}^{n} \left.\frac{\partial x^j}{\partial x^i}\right|_p v^i = v^j,$$
where v = (v^1, . . . , v^n). Note that x^j is a map and x^i is a coordinate, so dx^j is the differential 1-form which assigns to each point p the same covector, namely projection onto the j-th coordinate.
Observe, moreover, that {dx^j} is a basis for T∗R^n, which is shorthand for saying that {(dx^1)_p, . . . , (dx^n)_p} is a basis for T_p∗R^n for every p ∈ M. After all, the basis of T_p∗R^n can be seen to be equal⁶ to the collection of functions f^1, . . . , f^n such that
$$f^j\!\left(\left.\frac{\partial}{\partial x^i}\right|_p\right) = \delta_{ij} = \begin{cases} 0 & \text{if } i \neq j, \\ 1 & \text{otherwise.} \end{cases}$$
⁵ To actually make this definition, we basically need to put a smooth structure on $\coprod_p \Lambda^k(T_p^*\mathbb{R}^n)$. But that's kind of annoying, and not very interesting. As long as you don't try to break this definition, it won't break.
⁶ That is, the typical proof that the dual space has the same dimension as the original space uses this as its basis.

Yet the definition of dx^j shows that it satisfies this property, and so {(dx^j)_p} is indeed a basis for T_p∗R^n. In particular, every 1-form can be written as
$$\omega = \sum_{j=1}^{n} \omega_j\, dx^j,$$
where
$$\omega_p = \sum_{j=1}^{n} \omega_j(p)\,(dx^j)_p.$$

Note that ωp denotes the alternating 1-tensor ω(p), while ωj denotes the j-th “coordinate” of ω.
In Proposition 1.5, we said that a basis for Λ^k(T_p∗R^n) was given by the wedges of basis elements for T_p∗R^n. Hence, taking all p ∈ M at once, it follows by definition that a basis for Ω^k(M) is given by the wedges of basis elements for Ω^1(M), where everything is taken pointwise. In other words, every ω ∈ Ω^k(M) has the property that there exist unique maps ω_I such that for each p ∈ M we can write
$$\omega_p = \sum_{I=(i_1,\ldots,i_k)} \omega_I(p)\, (dx^{i_1})_p \wedge \cdots \wedge (dx^{i_k})_p.$$
We are summing over all ascending k-indices, i.e., indices I = (i_1, . . . , i_k) where i_1 < · · · < i_k. As shorthand, we write
$$\omega = \sum_I \omega_I\, dx^I.$$
Here dx^I denotes the wedge of all the dx^{i_j}'s, not the differential of some function x^I. Note that we are using a slightly different definition of the wedge product, taken between differential forms rather than tensors: the wedge product of two differential forms ω and η is defined to be the differential form ω ∧ η which takes p to ω_p ∧ η_p.
Example 2.2. Suppose we have coordinates (x, y) on R^2. (Note that we can do this exercise with any coordinates on R^2, e.g., polar coordinates.) Then dx is the map taking v to its x-coordinate, and similarly for dy. Thus, at every point p, the 2-form dx ∧ dy takes (v, w) to
$$2\operatorname{Alt}\bigl[(dx)_p \otimes (dy)_p\bigr](v, w) = 2 \cdot \frac{1}{2}\bigl[(dx \otimes dy)(v, w) - (dx \otimes dy)(w, v)\bigr] = dx(v)\,dy(w) - dx(w)\,dy(v).$$
This is exactly equal to
$$(dx \wedge dy)(v, w) = v^1 w^2 - w^1 v^2.$$
Hence we can write ω from Example 2.1 in terms of the basis elements:
$$\omega = (x^2 + y^2)\, dx \wedge dy.$$
(Note that, in this case, there is only one basis element.)
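Putting Examples 2.1 and 2.2 together in code (a sketch reusing the wedge helper from above): a differential 2-form is just a rule assigning to each point p an alternating 2-tensor, here ω_p = (x^2 + y^2) dx ∧ dy.

```python
# The prototypical 2-form as a map p -> (alternating 2-tensor at p).
import numpy as np

dx = lambda v: float(v[0])           # the covector dx, the same at every point
dy = lambda v: float(v[1])           # the covector dy
dx_wedge_dy = wedge(dx, 1, dy, 1)    # the constant 2-tensor dx ∧ dy

def omega(p):
    """Return the alternating 2-tensor omega_p at the point p = (x, y)."""
    coeff = p[0]**2 + p[1]**2
    return lambda v, w: coeff * dx_wedge_dy(v, w)

p = np.array([1.0, 2.0])
v, w = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(omega(p)(v, w))  # (1 + 4) * (1*1 - 0*0) = 5.0
```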

2.2 Exterior derivatives


This differential map on 0-forms, which is closely related to the definition of a derivative, can be generalized to k-forms, namely to functions d : Ω^k(M) → Ω^{k+1}(M) for all integers k ≥ 0. We call this natural differential on smooth forms the exterior derivative.
Let ω ∈ Ω^k(M) and write ω = Σ_I ω_I dx^I. Then we define the exterior derivative of ω to be
$$d\omega = \sum_I d\omega_I \wedge dx^I \in \Omega^{k+1}(M).$$
Recall that ω_I is just a function, so dω_I can be computed as in Equation (2). Moreover, when writing dx^I, the x^I does not denote some function; instead, we have dx^I = dx^{i_1} ∧ · · · ∧ dx^{i_k}. To be more explicit, we have
$$d\left(\sum_I \omega_I\, dx^I\right) = \sum_I \left[\left(\sum_{i=1}^{n} \frac{\partial \omega_I}{\partial x^i}\, dx^i\right) \wedge dx^{i_1} \wedge \cdots \wedge dx^{i_k}\right].$$

Note that d really denotes infinitely many derivative functions. If we were being particularly scrupulous,
we would have d0 taking 0-forms to 1-forms, and d1 taking 1-forms to 2-forms, and so on. But we can ignore
the subscripts because which d we are using is implied automatically by the form of which we are taking
the exterior derivative (i.e., if ω ∈ Ωk (M ), then the d in dω is clearly “dk ,” where we borrow our slightly
tongue-in-cheek definitions of d0 , d1 , etc.).
For convenience, we collect a few properties of exterior differentiation below.
Proposition 2.3. The exterior derivative satisfies the following properties:
• If ω, η ∈ Ω^k(M) and a, b ∈ R, then d(aω + bη) = a dω + b dη;
• If ω ∈ Ω^k(M) and η ∈ Ω^ℓ(M), then d(ω ∧ η) = dω ∧ η + (−1)^k ω ∧ dη; and
• For any ω ∈ Ω^k(M), we have (d ◦ d)(ω) = 0 ∈ Ω^{k+2}(M).⁷
Example 2.4. Recall our “prototypical example” of ω = (x^2 + y^2) dx ∧ dy. Its exterior derivative is
$$d\omega = d(x^2 + y^2) \wedge (dx \wedge dy) = (2x\, dx + 2y\, dy) \wedge (dx \wedge dy).$$
But dx ∧ dx = 0 and dy ∧ dy = 0 (a consequence of anticommutativity), so both terms vanish and dω = 0. Indeed, this had to happen: every 3-form on R^2 is zero, since any wedge of three of the two covectors dx and dy must repeat one of them.
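As a small symbolic check of the rule d ∘ d = 0 from Proposition 2.3 (a sketch using SymPy, which the paper itself does not use): for the 0-form f = x^2 + y^2, the 1-form df has coefficients (2x, 2y), and the dx ∧ dy coefficient of d(df) vanishes.

```python
# Verify d(df) = 0 for f = x^2 + y^2 using SymPy.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

# df = P dx + Q dy with P = ∂f/∂x and Q = ∂f/∂y.
P, Q = sp.diff(f, x), sp.diff(f, y)
print(P, Q)                                        # 2*x 2*y

# d(P dx + Q dy) = (∂Q/∂x - ∂P/∂y) dx ∧ dy, which must vanish since d ∘ d = 0.
print(sp.simplify(sp.diff(Q, x) - sp.diff(P, y)))  # 0
```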

3 Integrating differential forms


Before we are able to fully introduce the theory of integration, we must briefly discuss the idea of an
orientation. For Rn , this amounts to choosing a positive direction for each of the n coordinate axes. A
domain of integration M always has an orientation defined by the orientation of R^n. In three dimensions, for example, the right-hand rule relates the orientation of a lower-dimensional space (such as a surface) to a choice of normal vector. There is also an opposite orientation of M, obtained by reversing the positive direction of one of the coordinate axes (or, for a surface, by negating the normal vector); we denote M with this opposite orientation by −M.
The important takeaway from this digression on orientation is that all domains of integration have two
orientations. For example, the interval [0, 1] has a positive orientation which moves rightward, and a negative
one which corresponds to an arrow pointing to the left. A circle on the xy-plane in R3 has two possible
orientations: The normal can point upward or downward.
Let ω be a differential n-form defined on M. Note that there is only one ascending multi-index in this case—namely I = (1, 2, . . . , n)—and so we can write
$$\omega = f\, dx^I = f\, dx^1 \wedge \cdots \wedge dx^n$$
for some smooth function f : M → R. Then we define the integral of ω over M as
$$\int_M \omega = \int_M f\, dx^1 \wedge \cdots \wedge dx^n = \int_M f\, dx^1 \cdots dx^n = \int_M f\, dV.$$
In other words, to calculate the integral of a differential form, we simply “erase the wedges”!
Maybe this is slightly anticlimactic; the integral of a differential form is, after all, basically the exact
same as a normal integral. It isn’t immediately obvious that smooth maps won’t complicate this formula
(and it isn’t obvious that this formula isn’t just a trick of notation). But the following theorem shows why
differential forms are in fact the “natural” integrand.
Theorem 3.1. Suppose M and N are domains of integration. Suppose that G : M → N is an orientation-preserving or orientation-reversing diffeomorphism. If ω is an n-form on N, then
$$\int_M G^*\omega = \begin{cases} \displaystyle \int_N \omega & \text{if } G \text{ is orientation-preserving, and} \\[2ex] \displaystyle -\int_N \omega & \text{if } G \text{ is orientation-reversing.} \end{cases}$$
The statement of this theorem needs some untangling. A diffeomorphism is just a bijective smooth (infinitely differentiable) map whose inverse is also smooth.⁸ And G∗ is something called the pullback map, which is important enough that we'll spend the next section talking about it.

⁷ This observation actually leads to something called de Rham cohomology, which is a cohomology theory based on the spaces Ω^k(M) and the functions d.
⁸ If “bicontinuous” means “continuous with continuous inverse,” then can we say that a diffeomorphism is “bismooth”?

3.1 Pullbacks of differential forms


Suppose M, N ⊆ R^n are domains of integration. Let ω = f dx^1 ∧ · · · ∧ dx^n be a differential n-form on N ⊆ R^n and let G : M → N be a smooth map. We define the pullback of ω by G as the n-form
$$G^*\omega = (f \circ G)\, d(x^1 \circ G) \wedge \cdots \wedge d(x^n \circ G).$$
In general, if ω is a k-form, then the pullback is defined as
$$G^*\omega = G^*\!\left(\sum_{I=(i_1,\ldots,i_k)} \omega_I\, dx^I\right) = \sum_{I=(i_1,\ldots,i_k)} (\omega_I \circ G)\, d(x^{i_1} \circ G) \wedge \cdots \wedge d(x^{i_k} \circ G).$$

This is a rather unintuitive definition, but should be thought of as the natural way to pull a differential
form on N back into one on M .
Example 3.2. As before, let ω = (x^2 + y^2) dx ∧ dy be a 2-form on R^2. Consider the substitution x = r cos θ, y = r sin θ. This amounts to a diffeomorphism G on R^2, where we consider the domain under rθ-coordinates and the codomain under xy-coordinates. More specifically, G takes (r, θ) to (x, y) = (r cos θ, r sin θ). Then we find that G∗ω is
$$\bigl((x^2 + y^2) \circ G\bigr)\, d(x \circ G) \wedge d(y \circ G).$$

We can compute d(x ◦ G) as follows:
$$d(x \circ G) = d(r\cos\theta) = \frac{\partial(r\cos\theta)}{\partial r}\, dr + \frac{\partial(r\cos\theta)}{\partial\theta}\, d\theta = \cos\theta\, dr - r\sin\theta\, d\theta.$$

We can similarly compute d(y ◦ G) to find that
$$\bigl((x^2 + y^2) \circ G\bigr)\, d(x \circ G) \wedge d(y \circ G) = r^2 (\cos\theta\, dr - r\sin\theta\, d\theta) \wedge (\sin\theta\, dr + r\cos\theta\, d\theta).$$
Using properties of the wedge product—namely, the facts that dr ∧ dr = dθ ∧ dθ = 0 and that the wedge product is a bilinear operation—we find that this is equal to
$$r^2\bigl(r\cos^2\theta\, dr \wedge d\theta - r\sin^2\theta\, d\theta \wedge dr\bigr) = r^3\, dr \wedge d\theta.$$

Notice the extra factor of r, just as we’d have in the formula for changing an integral from Cartesian to polar
coordinates! (Hint: This isn’t a coincidence.)
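The same computation can be checked symbolically (a sketch using SymPy): by Theorem 3.4 below, the coefficient of G∗ω is (f ∘ G) det DG, which for f = x^2 + y^2 and G(r, θ) = (r cos θ, r sin θ) works out to r^3.

```python
# The pulled-back coefficient (f ∘ G) * det(DG) for the polar-coordinate map G.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x, y = r * sp.cos(theta), r * sp.sin(theta)          # G(r, theta) = (x, y)

f_pullback = x**2 + y**2                             # f ∘ G, written in (r, theta)
DG = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
                [sp.diff(y, r), sp.diff(y, theta)]])
print(sp.simplify(f_pullback * DG.det()))            # r**3
```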
The pullback is helpfully well-behaved with respect to wedge products and exterior differentiation. In fact, you can try to verify the following proposition.

Proposition 3.3. Suppose G : M → N is smooth.


(a) For every k, G∗ : Ωk (N ) → Ωk (M ) is an R-linear map.
(b) Pullbacks distribute over wedge products: G∗ (ω ∧ η) = (G∗ ω) ∧ (G∗ η).

(c) Pullbacks commute with the exterior differentiation operator: G∗ (dω) = d(G∗ ω).
The most helpful part of the pullback is that, for n-forms on R^n, we have the following theorem, which shows that the pullback accounts for the Jacobian, thus encoding a change of variables that would otherwise have to appear as a separate factor in our integrand. This is effectively the heart of integrating with differential forms.

Theorem 3.4. Suppose M, N ⊆ R^n are domains of integration. Suppose, moreover, that the R^n in which M is embedded has coordinates (x^i), while the R^n in which N is embedded has coordinates (y^i). If G : M → N is a smooth map, then
$$G^*(f\, dy^1 \wedge \cdots \wedge dy^n) = (f \circ G)(\det DG)\, dx^1 \wedge \cdots \wedge dx^n, \tag{3}$$
where DG represents the Jacobian matrix of G in these coordinates, namely
$$DG = \begin{pmatrix} \dfrac{\partial G^1}{\partial x^1} & \cdots & \dfrac{\partial G^1}{\partial x^n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial G^n}{\partial x^1} & \cdots & \dfrac{\partial G^n}{\partial x^n} \end{pmatrix}.$$
Proof. To show that the two sides of the equation are the same, we just need to show that they behave the same on the tuple of vectors (∂/∂x^1|_p, . . . , ∂/∂x^n|_p) for every p. For simplicity, we omit the subscripts p in the rest of the proof.
The definition of the pullback tells us that
$$G^*(f\, dy^1 \wedge \cdots \wedge dy^n) = (f \circ G)\, d(y^1 \circ G) \wedge \cdots \wedge d(y^n \circ G).$$
Note that y^i ◦ G is, by definition, the i-th coordinate G^i of G written in the x-coordinates of M. Hence this is equal to
$$(f \circ G)\, dG^1 \wedge \cdots \wedge dG^n.$$
But Proposition 1.6 implies that
$$(dG^1 \wedge \cdots \wedge dG^n)\!\left(\frac{\partial}{\partial x^1}, \ldots, \frac{\partial}{\partial x^n}\right) = \det\!\left(dG^j\!\left(\frac{\partial}{\partial x^i}\right)\right) = \det\!\left(\frac{\partial G^j}{\partial x^i}\right) = \det DG,$$

where the second equality follows from the definition of the differential of a function. This implies that the
left side of Equation (3) takes the vector of partial derivative functions to

(f ◦ G)(det DG).

Basically by definition, the right side of the equation also takes (∂/∂x1 , . . . , ∂/∂xn ) to (f ◦ G)(det DG).

Again, all this really says is that the pullback takes into account the Jacobian determinant and, as such,
it adjusts to changes of coordinate. Compare this to Example 3.2.

3.2 Why differential forms are good


Now recall our main theorem:
Theorem 3.1. Suppose M and N are domains of integration. Suppose that G : M → N is an orientation-preserving or orientation-reversing diffeomorphism. If ω is an n-form on N, then
$$\int_M G^*\omega = \begin{cases} \displaystyle \int_N \omega & \text{if } G \text{ is orientation-preserving, and} \\[2ex] \displaystyle -\int_N \omega & \text{if } G \text{ is orientation-reversing.} \end{cases}$$

Proof. Let's use (x^1, . . . , x^n) for the coordinates on M and (y^1, . . . , y^n) for those on N. Write ω = f dy^1 ∧ · · · ∧ dy^n. Recall the change of variables formula, which tells us that
$$\int_N \omega = \int_N f\, dV = \int_M (f \circ G)\,|\det DG|\, dV.$$

If G is orientation-preserving, the Jacobian determinant det DG is nonnegative. Thus, using Theorem 3.4, we find that this is equal to
$$\int_M (f \circ G)(\det DG)\, dx^1 \wedge \cdots \wedge dx^n = \int_M G^*\omega.$$

Otherwise, if G is orientation-reversing, the determinant det DG is negative, so taking the absolute value
of it swaps the sign of the integral. Other than that, it is the same proof.
Example 3.5. One final time, let's look at ω = (x^2 + y^2) dx ∧ dy. Let M and N be the unit disk with polar and Cartesian coordinates, respectively. Recall from our previous example that G∗ω = r^3 dr ∧ dθ. Now observe that the left side of the equation in Theorem 3.1 is
$$\int_M G^*\omega = \int_M r^3\, dr \wedge d\theta = \int_M r^3\, dr\, d\theta.$$
We calculated this integral in the very beginning: π/2. On the right side of the equation, we have
$$\int_N \omega = \int_N (x^2 + y^2)\, dx \wedge dy = \int_N (x^2 + y^2)\, dx\, dy = \frac{\pi}{2}.$$

Thus the two expressions are indeed equal.


Really, at the heart of this all is the fact that wedges behave like determinants (as mentioned in the
“important note” at the end of Section 1). The proof of Theorem 3.1 is just a direct consequence of this.
The pullback gives a natural way to encode the Jacobian, which thus allows us to work with changes of
variables without worrying about extra factors.
