MATH3031 CH 1 Notes
(b) We call a parametric surface S as defined in (a) regular if, in addition, the function r is differentiable; the vectors ∂r/∂u|(u,v) and ∂r/∂v|(u,v) are linearly independent (in R³) for all (u, v) ∈ D; and the function r : D → S is one-to-one, with continuous inverse.
(c) A (general, regular) surface is any subset S ⊂ R3 that can be “patched together”
out of regular parametric surfaces.
Example 1.2: (a) A parabolic cylinder S = {(u, v, u2 ) | u ∈ (−3, 3), v ∈ (0, 3)} ⊂ R3
is a parametric surface: just let D = {(u, v) | u ∈ (−3, 3), v ∈ (0, 3)} and define
r(u, v) = (u, v, u2 ). It can be easily checked that this parametric surface is regular.
(b) In general, an explicit surface in R³ (that is, a subset defined by an equation of the form {(x, y, z) | z = f(x, y)} for some continuous function f : R² → R) is a parametric surface: for a parametrisation, simply take r : R² → R³ to be r(u, v) = (u, v, f(u, v)). An explicit parametric surface like this will be regular whenever the function f defining it is differentiable.
(c) In contrast, a cylinder in R³ is a general regular surface, but it takes more than one regular parametrisation to describe it. Consider, for example, the vertical cylinder of radius 3, defined by the equation x² + y² = 9. If we let D1 = {(u, v) | u ∈ (0, 2π), v ∈ R},
let D2 = {(u, v) | u ∈ (π, 3π), v ∈ R}, and define ri : Di → R3 by letting ri (u, v) =
(3 cos u, 3 sin u, v), then it is clear (or easy to verify) that r1 and r2 are both regular
parametrisations, and that each point on the cylinder lies either in S1 = r1 (D1 ) or in
S2 = r2 (D2 ). This shows that the cylinder is a general regular surface. However, it is
NOT possible to find a single regular parametrisation whose image is the entire cylinder.
This is what we mean by saying that it must be “patched together” out of regular para-
metric surfaces.
(d) In general, a level surface (also called implicit surface) in R3 is given by considering
a function f : R3 → R, a constant c ∈ R, and considering the preimage of c, i.e. the
subset (in R3 ),
f −1 (c) := {(x, y, z) | f (x, y, z) = c}
(provided this subset is nonempty). If the function f is differentiable, and if the gradient
vector field ∇f|(x,y,z) is non-zero for all points (x, y, z) in the level set f −1 (c), then it is
a theorem that the level set is a general regular surface. (This theorem is basically a
consequence of the inverse/implicit function theorem of Real Analysis, and the example
given here generalises to arbitrary dimensions to give a very important class of examples
of manifolds, as we will explain in Chapter 2.)
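(As a concrete illustration of this criterion, added here for reference: take f(x, y, z) = x² + y² + z² and c = 1. Then f⁻¹(1) is the unit sphere, and ∇f|(x,y,z) = (2x, 2y, 2z), which is non-zero at every point of the sphere, since (x, y, z) ≠ (0, 0, 0) there. The theorem therefore tells us that the unit sphere is a general regular surface; compare Problem 1 below.)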
Definition 1.3: Let S ⊂ R³ be a general regular surface and let p ∈ S be a point.
(a) A tangent vector to S at p is any vector in R³ that can be written as γ′(0) for some
differentiable curve γ : (−ε, ε) → S with γ(0) = p. The set of all tangent vectors to S at
p is denoted Tp S and called the tangent space (of S) at p. It can be shown that it is a
2-dimensional linear subspace of R3 .
Theorem 1.4: Let S ⊂ R3 be a general regular surface, p ∈ S any point, and suppose
that r : D → R3 is a regular parametrisation for S with p ∈ r(D). Then, for (u0 , v0 ) ∈ D
the (unique) point such that r(u0 , v0 ) = p, we have
Tp S = span{ ∂r/∂u|(u0,v0), ∂r/∂v|(u0,v0) }.

Moreover, any normal vector to S at p is a scalar multiple of the vector ∂r/∂u|(u0,v0) × ∂r/∂v|(u0,v0).
(Part of the proof will be outlined in the TUT problems; the remainder will be done
in Chapter 2, when we generalise this to arbitrary dimensions.)
Example 1.5: (a) For an explicit surface in R³ given by z = f(x, y), as in Example 1.2(b), a tangent vector at a general point r(u, v) = (u, v, f(u, v)) can be expressed as
α ∂r/∂u + β ∂r/∂v = (α, β, α ∂f/∂u + β ∂f/∂v)
for some scalars α, β ∈ R.
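For instance (an added illustration using the parabolic cylinder of Example 1.2(a), where f(u, v) = u²): here ∂r/∂u = (1, 0, 2u) and ∂r/∂v = (0, 1, 0), so at the point p = r(1, 1) = (1, 1, 1) the tangent space is Tp S = span{(1, 0, 2), (0, 1, 0)}, and by Theorem 1.4 every normal vector at p is a scalar multiple of (1, 0, 2) × (0, 1, 0) = (−2, 0, 1).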
Definition 1.6: Let f : S → R be a continuous function on a surface S ⊂ R³. Suppose first that S is a regular parametric surface, S = r(D) for some region D ⊂ R² and some (differentiable, one-to-one) function r : D → R³. In this case, define
∫∫_S f da = ∫∫_D f(r(u, v)) ‖∂r/∂u × ∂r/∂v‖ du dv.
It can be shown (using the change of variables formula for double integrals, and the chain rule) that this definition is independent of the parametrisation, i.e. that if r̃ : D̃ → R³ is any other regular parametrisation of the surface S, then calculating ∫∫_S f da using r̃ and D̃ gives us the same value.
This means that we can also give a well-defined meaning for ∫∫_S f da if S is any general regular surface: simply choose a number of local parametrisations that “patch together” to cover all points of S, calculate ∫∫ f da “locally” on each of these pieces, and then add them together, making sure not to count more than once the contribution from portions of S where multiple local parametrisations overlap. Although the practical calculations could be very difficult (and tedious) to carry out if lots of local parametrisations are needed, theoretically this poses no problem other than the possibility that the integral could become unbounded (this won’t happen, though, for example if S is compact).
Intuitively, it may help to think of the meaning of ∫∫_S f da as follows: if we think of S as being constructed from a very thin material whose density at a given point is described by the function f, then ∫∫_S f da is the total mass of S. In particular, ∫∫_S 1 da is the area of the surface S. Keeping this, and the formula above, in mind, we refer to the formal symbols inside the integrand,

da = ‖∂r/∂u × ∂r/∂v‖ du dv,
as the “area element” of a parametrised surface S. Even though its formula is given using a choice of parametrisation, the fact that the integral we get as a result does not depend on this choice suggests that this formula is really just a choice of representation for something that actually has
an independent existence on the surface S. This turns out to be true, and it is one of the
things that will be clarified later on in the course when we introduce differential forms as
the natural objects to integrate on manifolds.
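(As a quick added illustration, using the cylinder parametrisation r(u, v) = (3 cos u, 3 sin u, v) of Example 1.2(c): ∂r/∂u = (−3 sin u, 3 cos u, 0) and ∂r/∂v = (0, 0, 1), so ∂r/∂u × ∂r/∂v = (3 cos u, 3 sin u, 0), which has length 3, and hence da = 3 du dv. Taking f = 1 on the portion of the cylinder with 0 ≤ v ≤ 5 gives area ∫₀^{2π} ∫₀^5 3 dv du = 30π, as expected for a cylinder of radius 3 and height 5.)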
whenever S = r(D) is a regular parametric surface, for any choice of positive parametrisa-
tion r : D → R3 . We then verify that this formula is independent of the choice of positive
parametrisation and “patch together” on a general regular surface, using the method dis-
cussed in Definition 1.6.
(c) If S is a general regular surface that has been given an orientation, then there is a naturally induced orientation on its boundary, ∂S, which in general is a disjoint union of curves. The simplest way to picture and understand this is to use the “right hand rule”: curl your right hand into a loose fist with the thumb pointing up, and rest the bottom of the hand (the side with the pinky finger) on the surface so that the thumb points in the same direction as the normal vectors that determine the surface’s orientation; the fingers other than the thumb then curl in an anti-clockwise direction relative to the thumb. If you place the hand so that the tops of these fingers curl along the boundary ∂S, then the direction in which they point tells you which way the arrows orienting the curves of ∂S should point.
Example 1.8: Consider the vertical cylinder S of radius 3 with 0 ≤ z ≤ 5, with ori-
entation given by choosing normal vectors that point away from the z-axis. Then the
boundary ∂S consists of the disjoint union of two circles, both of radius 3 and lying in
horizontal planes parallel to the (x, y)-plane: one at the “top” of the cylinder with z = 5,
and one at the “bottom” of the cylinder with z = 0. Write ∂S = Γ1 ∪Γ2 , where Γ1 denotes
the circle at the top and Γ2 denotes the one at the bottom. To figure out which way to
draw the arrows on these circles, we simply rest our right hand on the surface with the
thumb pointing away from the z-axis, first so that the fingers curl along the top circle,
then so they curl along the bottom circle. As you see by trying this yourself, when the
fingers are curled along the top circle they will point from the y-axis toward the x-axis
(as long as you have plotted the x-, y- and z-axes according to the standard conventions);
but when they are curled along the bottom circle they will point from the x-axis toward
the y-axis. Hence, the following parametrisations of the curves Γ1 and Γ2 are positive
with respect to the induced orientation on ∂S:
Γ1 = {γ1(t) = (3 sin t, 3 cos t, 5) | 0 ≤ t ≤ 2π}

and

Γ2 = {γ2(t) = (3 cos t, 3 sin t, 0) | 0 ≤ t ≤ 2π}.
(The proof uses Green’s Theorem and the Chain Rule to rewrite the LHS with a series
of manipulations and calculations to get the RHS. If you want you can go through it from
the past years’ MC II notes - see the “Background Resources” folder on Ulwazi - but it
is not terribly enlightening. It will be better, for now, to accept it and then derive the
proof as a special case of the General Stokes’ Theorem that we will prove in Chapter 4.)
Example 1.10: (a) Let Γ ⊂ R³ be any closed curve for which there exists a compact, oriented general regular surface S such that Γ = ∂S, and suppose that F : R³ → R³ is a gradient vector field, i.e. F = ∇f for some function f : R³ → R. Prove that ∫_Γ F · dγ = 0.
(b) If S ⊂ R³ is a closed surface (meaning S is compact and ∂S = ∅, for example a sphere or a torus), then ∫∫_S (∇ × F) · da = 0 for any differentiable vector field F : S → R³.
(c) Let S be the portion of the cylinder x2 + y 2 = 4 lying between the planes z = 0 and
z = 1. Verify Stokes’ Theorem for the vector field F (x, y, z) = (y(1 + z), −x(1 + z), z).
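(For parts (a) and (b), a brief sketch of the kind of reasoning intended, assuming Stokes’ Theorem: in (a), ∫_Γ F · dγ = ∫∫_S (∇ × F) · da = 0, because the curl of a gradient vector field vanishes; in (b), the boundary integral in Stokes’ Theorem is over ∂S = ∅ and so is zero. Part (c) is a direct computation; (a) and (c) also appear as Problems 2 and 4 below.)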
1. Let
S = {(x, y, z) | x2 + y 2 + z 2 = 1}
be the unit sphere in R3 .
(b) Verify explicitly that S is a general regular surface by parametrising it (you will
need at least two parametrisations to cover all of S).
(c) Using Example 1.5(c), give a simple equation for the tangent space Tp S of the
sphere at an arbitrary point p = (x, y, z) ∈ S.
2. Let Γ ⊂ R3 be any closed curve for which there exists a compact, oriented general
regular surface S such that Γ = ∂S, and suppose that F : R3 → R3 is a gradient vector
field, i.e. F = ∇f for some function f : R³ → R. Using Stokes’ Theorem, prove that ∫_Γ F · dγ = 0.
4. Let S be the portion of the cylinder x2 + y 2 = 4 lying between the planes z = 0 and
z = 1. Verify Stokes’ Theorem for the vector field F (x, y, z) = (y(1 + z), −x(1 + z), z).
2-dimensional linear subspace of R3 , and equivalent ways of describing it:
for the value c := f(p) ∈ R. Show that the subspace ∇f(p)⊥ ⊂ R³, defined by

∇f(p)⊥ := {v ∈ R³ | ∇f(p) · v = 0}

(where · denotes the usual dot product), contains the tangent space Tp S.
(c) Conclude that if the assumptions of (a) and (b) both hold for p ∈ S, then Tp S is
a 2-dimensional linear subspace of R3 , and that we have equalities:
span{ ∂r/∂u|(u0,v0), ∂r/∂v|(u0,v0) } = Tp S = ∇f(p)⊥.
(In fact, it can be proven that the assumptions of (a) and (b) are always true for all points
of a general regular surface S ⊂ R3 , which we will discuss in Chapter 2 when we introduce
the higher-dimensional generalisations of general regular surfaces.)
1.2: Gradient and Conservative Vector Fields
Definition 1.11: (a) A domain in Rn is an open, connected subset D ⊂ Rn . Connected
means that if p, q ∈ D are any two points of the subset, there is a continuous curve γ : [0, 1] → D with γ(0) = p and γ(1) = q. Open means open with respect to the standard
Euclidean metric on Rn (see Chapter 2 of Intermediate Analysis III).
(b) A vector field F : D → Rn is called a gradient vector field iff there exists a function
f : D → R such that F = ∇f . If this is true, the function f is called a potential for the
vector field F . (Note that if F is a gradient vector field, then the difference between any
two potentials for F is a constant function.)
(c) A domain D ⊂ Rn is called simply connected iff every closed curve in D can be
continuously deformed to a point (more precisely, to a constant curve that never leaves
that point) without leaving the domain D and while keeping all of the deformed curves
closed.
(d) A vector field F : D → Rⁿ is called conservative iff the vector path integral ∫_Γ F · dγ over a path Γ in D depends only on the endpoints of Γ.
Example 1.12: (a) Consider the vector field F : R³ → R³ given by F(x, y, z) = (yz, xz, xy). It can be verified that the function f(x, y, z) = xyz is a potential for F, since ∇f = (yz, xz, xy) = F; this shows that F is a gradient vector field.
(b) If p ∈ Rⁿ is a fixed point and R > 0 a positive number, then the open ball with centre p and radius R,

B(p, R) := {x ∈ Rⁿ | ‖x − p‖ < R},

is a simply connected domain in Rⁿ. To see this, let γ : [a, b] → B(p, R) be any continuous curve with γ(a) = γ(b) = x0 ∈ B(p, R). We can continuously deform γ to a constant curve
that never leaves the centre point p ∈ B(p, R) as follows: define H : [0, 1]×[a, b] → B(p, R)
by letting
H(s, t) = γ_s(t) = (1 − s)γ(t) + sp
for any s ∈ [0, 1], t ∈ [a, b]. The function H is clearly continuous, and H(s, t) ∈ B(p, R) for
all inputs (s, t) ∈ [0, 1] × [a, b], since γ(t) ∈ B(p, R) and p ∈ B(p, R), so the line segment
between them also lies in B(p, R). Fixing any s ∈ [0, 1] therefore gives us a continuous, closed curve γ_s : [a, b] → B(p, R), and the curves vary continuously with the parameter s, deforming the closed curve γ = γ_0 to the curve γ_1, which is clearly a constant curve
that never leaves the centre p.
(c) Starting with the domain in (b), if we pick another positive number r > 0 with
r < R, we can define the open annulus with centre p, inner radius r, and outer radius R,

A(p, r, R) := {x ∈ Rⁿ | r < ‖x − p‖ < R},
which will be a domain in Rn for any n ≥ 2 (for n = 1, the open annulus will not be
connected, because it consists of two disjoint open intervals). In contrast to the open ball,
the open annulus in the plane R2 is not simply connected, because it has a “hole” in the
middle, meaning that a closed curve that wraps around this hole cannot be continuously
deformed to a single point. For example, consider the curve γ : [0, 2π] → A(p, r, R) ⊂ R2
given by
γ(t) = ( p1 + ((r + R)/2) cos t, p2 + ((r + R)/2) sin t ),

where p = (p1, p2) are the coordinates of the centre point p and the radius (r + R)/2 is chosen so that the circle lies inside the annulus A(p, r, R). This is clearly a closed,
continuous curve in A(p, r, R), but it is impossible to continuously deform γ, via closed
curves, to a constant curve. While this seems “obvious”, proving it rigorously takes some work.
Theorem 1.13: Let F be a vector field on a domain D of Rn . Then the following are
equivalent:
(a) ∫_Γ F · dγ = 0 for all closed curves Γ in D.

(b) F is conservative, i.e. ∫_Γ F · dγ depends only on the endpoints of the curve Γ.
Proof:
(a) ⇒ (b): Let Γ1 and Γ2 be two curves in D with the same endpoints, say starting at x0
and ending at x1 . We need to show that
∫_Γ1 F · dγ = ∫_Γ2 F · dγ.

To see this, let Γ be the closed curve obtained by first traversing Γ1 and then traversing Γ2 with its orientation reversed. By (a), ∫_Γ F · dγ = 0; but this integral equals ∫_Γ1 F · dγ − ∫_Γ2 F · dγ, which gives the required equality.
(b) ⇒ (a): Let Γ be any closed curve in D. This means that Γ starts and ends at the
same point, say x0 ∈ D. The curve Γ−, with orientation reversed, also starts and ends at x0, and hence it has the same endpoints as Γ. By (b), therefore, ∫_Γ F · dγ = ∫_{Γ−} F · dγ. But ∫_{Γ−} F · dγ = − ∫_Γ F · dγ, so combining these equations gives us

∫_Γ F · dγ = − ∫_Γ F · dγ,

which implies ∫_Γ F · dγ = 0.
Theorem 1.14 (Fundamental Theorem of Vector Calculus): Let F be a continuous
vector field on a domain D ⊂ Rn . If F is a gradient vector field with potential function
f : D → R, and Γ is a (piecewise differentiable, continuous) curve in D which starts at
x0 and ends at x1 , then Z
F · dγ = f (x1 ) − f (x0 ).
Γ
In particular, every gradient vector field is conservative.
Proof: For simplicity, we consider the case when γ : [a, b] → D is a differentiable curve
starting at x0 and ending at x1 (the case of a piecewise differentiable curve can be handled
by induction). This means that γ(a) = x0 and γ(b) = x1 . We define a curve h : [a, b] → R
by letting h(t) = f(γ(t)) for all t ∈ [a, b]. By the Chain Rule, we have

h′(t) = ∇f(γ(t)) · γ′(t) = F(γ(t)) · γ′(t),

since f is a potential for F. Thus, using the definition of the vector path integral, we
calculate, using the standard Fundamental Theorem of Calculus:
∫_Γ F · dγ = ∫_a^b F(γ(t)) · γ′(t) dt = ∫_a^b h′(t) dt = h(b) − h(a).
But h(b) = f (γ(b)) = f (x1 ) and h(a) = f (γ(a)) = f (x0 ), which proves the identity
claimed.
as in Example 1.12(a). If Γ is any curve in R³ from the point (1, 2, 3) to (3, 4, 5), then we can calculate ∫_Γ F · dγ without parametrising the curve Γ (or even knowing what it is). Namely, we know that f(x, y, z) = xyz is a potential function for F, so by Theorem 1.14,

∫_Γ F · dγ = f(3, 4, 5) − f(1, 2, 3) = 60 − 6 = 54.
If the domain D is a simply connected domain of R³, then the converse is also true: any curl-free vector field on D is conservative, and hence is a gradient vector field. In other words, if D is simply connected, then ∇ × F = 0 implies F = ∇f for some potential function f : D → R.
“Conservative ⇒ Gradient”: We suppose F is conservative, so the value of ∫_Γ F · dγ
depends only on the endpoints of a curve Γ in D. We need to find a potential function
for F , i.e. f : D → R such that ∇f = F . To come up with a candidate for the
potential function we use a geometrically simple idea: first fix a base-point x0 ∈ D and
set f (x0 ) = 0; then, for any other point x ∈ D, we determine the value f (x) ∈ R by
letting Γ be any continuous curve in D that starts at x0 and ends at x, and define
f(x) := ∫_Γ F · dγ ∈ R.
This gives us a real value f (x) for each x ∈ D, and this value does not depend on the
choice of path Γ, because F is conservative. The complicated part of the proof is now to
show that this function f is a potential for F , i.e. that ∇f = F or, equivalently, that
∂f /∂xi = F i for each i = 1, . . . , n, where F = (F 1 , . . . , F n ) are the component functions
of the vector field. I will discuss how this is done in the lectures, or you can read about
it on your own in the MC notes posted under “Background Resources” on Ulwazi.
Once we know that conservative vector fields are gradient vector fields, it follows that in dimension 3 they are curl-free, since ∇ × F = ∇ × ∇f = 0: the curl of a gradient vector field always vanishes (see MC).
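(As a quick check of this last fact, added here and assuming f has continuous second partial derivatives so that mixed partials commute: the first component of ∇ × ∇f is

∂/∂y (∂f/∂z) − ∂/∂z (∂f/∂y) = 0,

and the other two components vanish for the same reason.)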
“Curl-Free ⇒ Conservative”: For this direction, we need to assume that the domain
D ⊂ R³ is simply connected. If F is a curl-free vector field on D, our approach is to show that F is conservative by showing that ∫_Γ F · dγ = 0 for any closed curve Γ in D (using
Theorem 1.13). Now, if Γ is a closed curve in D, then since D is simply connected it is
possible to continuously deform Γ, via a family of closed curves inside D, to a constant
curve that stays put at a fixed point. We can interpret this continuous deformation as
“tracing out” a parametric surface S in D with ∂S = Γ (this is actually being a little
loose, since it may not be true that the deformation really defines a regular parametric
surface as in Definition 1.1, but it turns out to work). Then, using Stokes’ Theorem and
the assumption that ∇ × F = 0, we calculate
∫_Γ F · dγ = ∫_{∂S} F · dγ = ∫∫_S (∇ × F) · da = 0.
(a) Verify that F is a gradient vector field and that the function f (x, y) = 3x2 y 2 − xy 3
is a potential for F .
(b) Hence, calculate the value of ∫_Γ F · dγ, where Γ is any path from (0, 1) to (1, 0).
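(A quick sanity check of (b), assuming that the function f from part (a) really is a potential for F: by Theorem 1.14, ∫_Γ F · dγ = f(1, 0) − f(0, 1) = 0 − 0 = 0, regardless of which path from (0, 1) to (1, 0) is chosen.)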
1.3: Triple Integrals and Gauss’ Theorem
Definition 1.17: Let B ⊂ R3 be a bounded region in three-dimensional Euclidean space,
and f : B → R a continuous function on B. To define the triple integral of f over B, we
follow the same method that was used to define single and double integrals (in BA and
MC): for each positive integer N, divide the region B into N sub-regions B_1^N, B_2^N, . . . , B_N^N in such a way that the diameter of each B_i^N approaches 0 as N → ∞. Then, for each N and each i = 1, . . . , N, choose (x_i^N, y_i^N, z_i^N) ∈ B_i^N, and we define the (three-dimensional) Riemann sums and triple integrals as:

S³(f; N) := Σ_{i=1}^{N} f(x_i^N, y_i^N, z_i^N) vol(B_i^N);

∫∫∫_B f dV := lim_{N→∞} S³(f; N).
Note: (a) ∫∫∫_B 1 dV gives us the volume of B.
(b) There are clearly some issues that need to be dealt with in order to make sure
that this is a mathematically proper definition. For example, we need to ensure that
this limit exists and that its value is independent of the ways we chose to sub-divide B
into sub-regions B_1^N, . . . , B_N^N for each N. We’re not going to go through all of that in
this course, for one thing because the principles involved are the same as those used to
rigorously develop the integral of functions on the real line as in Basic Analysis. Also,
in this course we will generally only be concerned with “well-behaved” domains B and
functions f , for which the following (Fubini’s Theorem) applies, and so the triple integral
is well-defined and can be calculated using iterated single integrals. Specifically, suppose that the region B can be described as

B = {(x, y, z) | a1 ≤ x ≤ b1, a2(x) ≤ y ≤ b2(x), a3(x, y) ≤ z ≤ b3(x, y)},

where a1, b1 ∈ R are constants with a1 ≤ b1, a2(x), b2(x) are functions of the variable x ∈ [a1, b1] with a2(x) ≤ b2(x) for all x, etc. Then the triple integral of f over B exists,
and satisfies
∫∫∫_B f dV = ∫_{a1}^{b1} ( ∫_{a2(x)}^{b2(x)} ( ∫_{a3(x,y)}^{b3(x,y)} f(x, y, z) dz ) dy ) dx.
Remark: This means that we can calculate a triple integral by repeating single in-
tegrals, whenever the region B can be described by giving the limits of the z-coordinate
in terms of x and y, the limits of the y-coordinate in terms of x, and the limits of the
x-coordinate are given by constants. In practice, it means you first integrate with respect
to z, thinking of the x and y coordinates as fixed; the result will be a function of only x
and y, which we then integrate with respect to y, thinking of x as fixed and giving us a
function of just x; finally, we integrate this with respect to x, giving us a number.
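(As a small added illustration of this procedure: let B be the tetrahedron bounded by the coordinate planes and the plane x + y + z = 1, so that 0 ≤ x ≤ 1, 0 ≤ y ≤ 1 − x and 0 ≤ z ≤ 1 − x − y. Then

vol(B) = ∫_0^1 ∫_0^{1−x} ∫_0^{1−x−y} 1 dz dy dx = ∫_0^1 ∫_0^{1−x} (1 − x − y) dy dx = ∫_0^1 (1 − x)²/2 dx = 1/6,

and, as required, the final answer is a number with no variables left over.)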
Of course, there is nothing important about the order of the coordinates: For example,
if B can be written with constant limits on the z coordinate, limits on the y-coordinate in
terms of z, and limits on the x-coordinate in terms of z and y, then we could calculate the
triple integral by “reversing” the order of integration above: first integrate with respect to
x, then with respect to y, and lastly with respect to z. Other orders of integration are also
possible for other regions, and there are regions which allow different orders of integration.
Rather than write out all the details abstractly, it’s more clarifying to look at examples
of how this is done. In any of the examples, there is a clear way of checking that you have
done the repeated integrals in an order that is acceptable: you have to get a number as
your final answer, i.e., no variables x, y or z are allowed to appear in your final expression.
These single integrals can all be calculated, and they give you a final answer of 324/35. You
should try this yourself (you have to use integration by parts at some point), or look at
the details given in the old MC notes (Chapter 3), which have been posted on Ulwazi.
Example 1.20: Find the volume of the region B ⊂ R3 , where B is bounded by the
planes z = y + 1, z = −x − 1, and by the cylinder x2 + y 2 = 1.
Solution: Try to at least start this yourself, and then attend the lecture where I go over
the main steps.
Theorem 1.21 (Change of Variables): Suppose that B, B̃ ⊂ R3 are two bounded
regions, and that T : B̃ → B is a continuously differentiable bijection (so T (B̃) = B).
Then if f : B → R is a continuous function on B,
∫∫∫_B f dV = ∫∫∫_{B̃} (f ◦ T) |det T′| dV.
Note: (a) It is also common to write f (x, y, z)dxdydz for f dV if (x, y, z) are the
coordinates on our region. In this notation, the change of variables formula becomes
∫∫∫_B f(x, y, z) dx dy dz = ∫∫∫_{B̃} f(T(u, v, w)) |det T′(u, v, w)| du dv dw,
which makes it a little clearer what is happening: T lets us “change coordinates” from (u, v, w) on B̃ to (x, y, z) on B, and the theorem tells us that by doing this we convert the integral over B to an integral over B̃.
(b) T 0 denotes the derivative of T . For each point (u, v, w) ∈ B̃, T 0 (u, v, w) : R3 → R3
is a linear map, usually identified with the 3 × 3 matrix of partial derivatives of T , and
so we can take its determinant, which gives us a real number. Thus, the factor appearing
in the change of variables formula, | det T 0 |, is a function from B̃ to R, and this function
will be continuous since T is assumed to be continuously differentiable. The factor |det T′| is called the Jacobian of T, and is sometimes denoted by ∂(x, y, z)/∂(u, v, w), which highlights the fact
that it gives some information about the dependence of the (x, y, z) variables upon the
(u, v, w) variables. We will discuss more about the geometry of this quantity in Chapter
3; in particular, we show that it is an expression for the volume of the parallelepiped (the 3-dimensional analogue of a parallelogram in 2 dimensions) spanned by the vectors
∂T/∂u, ∂T/∂v, ∂T/∂w.
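(For example, a standard computation added here for reference; the naming and ordering of the spherical coordinates below is just one common convention: if T(ρ, φ, θ) = (ρ sin φ cos θ, ρ sin φ sin θ, ρ cos φ), with ρ ≥ 0, 0 ≤ φ ≤ π and 0 ≤ θ ≤ 2π, then a direct calculation of the 3 × 3 determinant gives

det T′(ρ, φ, θ) = ρ² sin φ,

so the Jacobian factor in the change of variables formula is |det T′| = ρ² sin φ. This is the change of coordinates that is most useful for the ball-shaped regions appearing in the examples below.)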
This is now a pretty straightforward repeated integral to calculate. I can go over it in a video or in tutorials if there are questions (warning: I think the old MC notes have a mistake in the calculation).
(a) The volume of the 3-dimensional ball of radius R > 0, given by the inequality x² + y² + z² ≤ R².
(b) The integral ∫∫∫_B (x² + y²)³ z dx dy dz, where B ⊂ R³ is that portion of the ball of
(x2 + y 2 )3 zdxdydz, where B ⊂ R3 is that portion of the ball of
radius 3, x2 + y 2 + z 2 ≤ 9, lying in the octant x ≥ 0, y ≥ 0, z ≥ 0.
I will discuss some of the ideas in the proof in the lectures, and you can also consult
the old MC notes for the proof in a simple case. However, the rigorous proof is not going
to be emphasised at this point because the aim of this course is to understand Gauss’
Theorem, as well as Stokes’s Theorem, as a special case of the Generalised Stokes’ The-
orem which will be proven in Chapter 4. For now, the thing to pay attention to is that
Gauss’s Theorem has the same broad form as the Fundamental Theorem of Calculus,
as well as Green’s and Stokes’s theorems: it expresses the integral of some “thing” (in
Gauss’s Theorem a vector field) over a boundary as the integral of the “derivative” of
that “thing” (in Gauss’s Theorem the divergence of a vector field) over the interior space
that is bounded.
Example 1.25: (a) Prove that if B ⊂ R3 is a bounded region whose boundary is the
(general, regular) surface S ⊂ R3 , then the volume of B can be calculated as
vol(B) = ∫∫_S xi ei · da,
for each i = 1, 2, 3, where xi : R3 → R is the function given by taking the ith coordinate
(so x1 (x, y, z) = x, x2 (x, y, z) = y, etc.) and ei ∈ R3 is the ith standard basis vector.
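(A possible one-line argument, using Gauss’ Theorem: the vector field xi ei has divergence ∇ · (xi ei) = ∂xi/∂xi = 1, since its other components are zero, so Gauss’ Theorem gives ∫∫_S xi ei · da = ∫∫∫_B ∇ · (xi ei) dV = ∫∫∫_B 1 dV = vol(B).)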
where I : R3 → R3 is the identity vector field, defined by I(x, y, z) = (x, y, z).
Try this one on your own, too. I will go over the solution in lecture.
for each i = 1, 2, 3.
B = {(x, y, z) | x2 + y 2 + z 2 ≤ R2 },
the 3-dimensional ball of radius R, for an arbitrary R > 0, whose boundary is the sphere
of radius R. Calculate both integrals to show that both sides give (4/3)πR³.
(Hint: The volume of B is calculated most easily by using the change of coordinates