
Elementary Differential Geometry:

Lecture Notes

Gilbert Weinstein
Contents

Preface 5
Chapter 1. Curves 7
1. Preliminaries 7
2. Local Theory for Curves in R3 8
3. Plane Curves 10
4. Fenchel’s Theorem 14
Exercises 16
Chapter 2. Local Surface Theory 19
1. Surfaces 19
2. The First Fundamental Form 21
3. The Second Fundamental Form 23
4. Examples 25
5. Lines of Curvature 28
6. More Examples 30
7. Surface Area 33
8. Bernstein’s Theorem 37
9. Theorema Egregium 39
Exercises 41
Chapter 3. Local Intrinsic Geometry of Surfaces 45
1. Riemannian Surfaces 45
2. Lie Derivative 47
3. Covariant Differentiation 48
4. Geodesics 50
5. The Riemann Curvature Tensor 53
6. The Second Variation of Arclength 56
Exercises 59

Index 61

Preface

These notes are for a beginning graduate level course in differential geometry.
It is assumed that this is the students’ first course in the subject. Thus the choice
of subjects and presentation has been made to facilitate as much as possible a
concrete picture. For those interested in a deeper study, a second course would take
a more abstract point of view, and in particular, could go further into Riemannian
geometry.
Much of the material is borrowed from the following sources, but has been
adapted according to my own taste:
[1] M. P. do Carmo, Differential geometry of curves and surfaces, Prentice-Hall.
[2] L. P. Eisenhart, An introduction to differential geometry with use of the tensor calculus, Princeton University Press.
[3] W. Klingenberg, A course in differential geometry, Springer-Verlag.
[4] B. O'Neill, Elementary differential geometry, Academic Press.
[5] M. Spivak, A comprehensive introduction to differential geometry, Publish or Perish.
[6] J. J. Stoker, Differential geometry, Wiley & Sons.
The prerequisites for this course are: linear algebra, preferably with some ex-
posure to multilinear algebra; calculus up to and including the inverse and implicit
function theorem; the fundamental theorem of ordinary differential equations con-
cerning existence of solutions, uniqueness, and continuous dependence on parame-
ters, and some knowledge of linear systems of ordinary differential equations; linear
first order partial differential equations; complex analysis including Liouville’s the-
orem; and some elementary topology.
It is highly recommended that students complete all the exercises included in these notes.

Gilbert Weinstein
Birmingham, Alabama
April 2000

CHAPTER 1

Curves

1. Preliminaries
Definition 1.1. A parametrized curve is a smooth (C∞) function γ : I → Rⁿ. A curve is regular if γ′ ≠ 0.
When the interval I is closed, we say that γ is C ∞ on I if there is an interval
J and a C ∞ function β on J which agrees with γ on I.
Definition 1.2. Let γ : I → Rⁿ be a parametrized curve, and let β : J → Rⁿ be another parametrized curve. We say that β is a reparametrization (orientation-preserving reparametrization) of γ if there is a smooth map τ : J → I with τ′ > 0 such that β = γ ∘ τ.
Note that the relation β is a reparametrization of γ is an equivalence relation.
A curve is an equivalence class of parametrized curves. Furthermore, if γ is regular
then every reparametrization of γ is also regular, so we may speak of regular curves.
Definition 1.3. Let γ : I → Rⁿ be a regular curve. For any compact interval [a, b] ⊂ I, the arclength of γ over [a, b] is given by:

    Lγ([a, b]) = ∫ₐᵇ |γ′| dt.

Note that if β is a reparametrization of γ then γ and β have the same length. More specifically, if β = γ ∘ τ, then

    Lγ([τ(c), τ(d)]) = Lβ([c, d]).
Definition 1.4. Let γ be a regular curve. We say that γ is parametrized by arclength if |γ′| = 1.
Note that this is equivalent to the condition that for all t ∈ I = [a, b] we have:

    Lγ([a, t]) = t − a.
Furthermore, any regular curve can be parametrized by arclength. Indeed, if γ is a regular curve, then the function

    s(t) = ∫ₐᵗ |γ′|

is strictly monotone increasing. Thus, s(t) has an inverse function τ(s), satisfying:

    dτ/ds = 1/|γ′|.

It is now straightforward to check that β = γ ∘ τ is parametrized by arclength.
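For example, for the circle of radius r traversed as γ(t) = (r cos t, r sin t), t ∈ [0, 2π], we have |γ′| = r, hence s(t) = rt and τ(s) = s/r, and the arclength reparametrization is β(s) = (r cos(s/r), r sin(s/r)), defined for s ∈ [0, 2πr].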

2. Local Theory for Curves in R3


We will assume throughout this section that γ : I → R³ is a regular curve in R³ parametrized by arclength and that γ″ ≠ 0. Note that γ′ · γ″ = 0.
Definition 1.5. Let γ : I → R³ be a curve in R³. The unit vector T = γ′ is called the unit tangent of γ. The curvature κ is the scalar κ = |γ″|. The unit vector N = κ⁻¹T′ is called the principal normal. The binormal is the unit vector B = T × N. The positively oriented orthonormal frame (T, N, B) is called the Frenet frame of γ.
It is not difficult to see that N′ + κT is perpendicular to both T and N, hence we can define the torsion τ of γ by: N′ + κT = τB. Note that the torsion, unlike the curvature, is signed. Finally, it is easy to check that B′ = −τN. Let X denote the 3 × 3 matrix whose columns are (T, N, B). We will call X also the Frenet frame of γ. Define the rotation matrix of γ:

               ( 0   −κ    0 )
(1.1)    ω :=  ( κ    0   −τ )
               ( 0    τ    0 )
Proposition 1.1 (Frenet frame equations). The Frenet frame X = (T, N, B) of a curve in R³ satisfies:

(1.2)    X′ = Xω.

The Frenet frame equations, Equation (1.2), form a system of nine linear ordinary differential equations.
Definition 1.6. A rigid motion of R³ is a function of the form R(x) = x₀ + Qx where Q is orthogonal with det Q = 1.
Note that if X is the Frenet frame of γ and R(x) = x₀ + Qx is a rigid motion of R³, then QX is the Frenet frame of R ∘ γ. This follows easily from the fact that Q preserves the inner product and orientation of R³.
Theorem 1.2 (Fundamental Theorem). Let κ > 0 and τ be smooth scalar
functions on the interval [0, L]. Then there is a regular curve γ parametrized by
arclength, unique up to a rigid motion of R3 , whose curvature is κ and torsion is
τ.
Proof. Let ω be given by (1.1). The initial value problem

    X′ = Xω,    X(0) = I

can be solved uniquely on [0, L]. The solution X is an orthogonal matrix with det X = 1 on [0, L]. Indeed, since ω is anti-symmetric, the matrix A = XXᵗ is constant:

    A′ = XωXᵗ + XωᵗXᵗ = X(ω + ωᵗ)Xᵗ = 0,

and since A(0) = I, we conclude that A ≡ I, and X is orthogonal. Furthermore, det X is continuous, and det X(0) = 1, so det X = 1 on [0, L]. Let (T, N, B) be the columns of X, and let γ = ∫ T, then (T, N, B) is orthonormal and positively oriented on [0, L]. Thus, γ is parametrized by arclength, γ′ = T, and N = κ⁻¹T′ is
the principal normal of γ. Similarly B is the binormal, and consequently, κ is the curvature of γ and τ its torsion.
Now suppose that γ̃ is another curve with curvature κ and torsion τ, and let X̃ be its Frenet frame. Then there is a rigid motion R(x) = Qx + x₀ of R³ such that Rγ(0) = γ̃(0), and QX(0) = X̃(0). By the remark preceding the theorem, QX is the Frenet frame of the curve R ∘ γ, and thus both QX and X̃ satisfy the initial value problem:

    Y′ = Yω,    Y(0) = QX(0).

By the uniqueness of solutions of the initial value problem, it follows that QX = X̃. In particular, (R ∘ γ)′ = γ̃′, and since R ∘ γ(0) = γ̃(0) we conclude R ∘ γ ≡ γ̃. □

Assuming γ(0) = 0, the Taylor expansion of γ of order 3 at s = 0 is:

    γ(s) = γ′(0)s + ½γ″(0)s² + ⅙γ‴(0)s³ + O(s⁴).

Denote T₀ = T(0), N₀ = N(0), B₀ = B(0), κ₀ = κ(0), and τ₀ = τ(0). We have γ′(0) = T₀, γ″(0) = κ₀N₀, and γ‴(0) = κ′(0)N₀ + κ₀(−κ₀T₀ + τ₀B₀). Substituting these into the equation above, decomposing into T, N, and B components, and retaining only the leading order terms, we get:

    γ(s) = (s + O(s³)) T + ((κ₀/2) s² + O(s³)) N + ((κ₀τ₀/6) s³ + O(s⁴)) B.
The planes spanned by pairs of vectors in the Frenet frame are given special
names:
(1) T and N — the osculating plane;
(2) N and B — the normal plane;
(3) T and B — the rectifying plane.
We see that to second order the curve stays within its osculating plane, where it traces a parabola y = (κ/2) s². The projection onto the normal plane is to third order a cusp: x = (3τ/2) y^(2/3). The projection onto the rectifying plane is to second order a line, whence its name.
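As a concrete illustration of the Frenet apparatus, consider the circular helix γ(s) = (a cos(s/c), a sin(s/c), bs/c) with a > 0 and c = √(a² + b²), which is parametrized by arclength. A direct computation gives κ = |γ″| = a/c² and τ = b/c², so both the curvature and the torsion are constant. By the Fundamental Theorem, conversely, every curve with constant curvature κ > 0 and constant torsion τ is, up to a rigid motion of R³, such a helix.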
Here are a few simple applications of the Frenet frame.
Theorem 1.3. Let γ be a regular curve with κ ≡ 0. Then γ is a straight line.
Proof. Since |T′| = κ = 0, it follows that T is constant and γ is linear. □

Theorem 1.4. Let γ be a regular curve with κ > 0 and τ = 0. Then γ is planar.
Proof. Since B′ = 0, B is constant. Thus the function ξ = (γ − γ(0)) · B vanishes identically:

    ξ(0) = 0,    ξ′ = T · B = 0.

It follows that γ remains in the plane through γ(0) perpendicular to B. □

Theorem 1.5. Let γ be a regular curve with κ constant and τ = 0. Then γ is


a circle.

Proof. Let β = γ + κ⁻¹N. Then

    β′ = T + κ⁻¹(−κT + τB) = 0.

Thus β is constant, and |γ − β| = κ⁻¹. It follows that γ lies in the intersection between a plane and a sphere, thus γ is a circle. □

3. Plane Curves
3.1. Local Theory. Let γ : [a, b] → R² be a regular plane curve parametrized by arclength, and let κ be its curvature. Note that κ is signed, and in fact changes sign (but not magnitude) when the orientation of γ is reversed. The Frenet frame equations are:

    e₁′ = κe₂,    e₂′ = −κe₁.

Proposition 1.6. Let γ : [a, b] → R² be a regular curve with |γ′| = 1. Then there exists a differentiable function θ : [a, b] → R such that

(1.3)    e₁ = (cos θ, sin θ).

Moreover, θ is unique up to a constant integer multiple of 2π, and in particular θ(b) − θ(a) is independent of the choice of θ. The derivative of θ is the curvature: θ′ = κ.
Proof. Let a = t0 < t1 < · · · < tn = b be a partition of [a, b] so that the
diameter of e1 ([ti−1 , ti ]) is less than 2, i.e., e1 restricted to each subinterval maps
into a semi-circle. Such a partition exists since e1 is uniformly continuous on [a, b].
Choose θ(a) so that (1.3) holds at a, and proceed by induction on i: if θ is defined
at ti then there is a unique continuous extension so that (1.3) holds. If ψ is any
other continuous function satisfying (1.3), then k = (1/2π)(θ − ψ) is a continuous
integer-valued function, hence is constant. Finally, e₂ = (−sin θ, cos θ), hence

    e₁′ = κe₂ = θ′(−sin θ, cos θ),

and we obtain θ′ = κ. □
3.2. Global Theory.
Definition 1.7. A curve γ : [a, b] → Rⁿ is closed if γ⁽ᵏ⁾(a) = γ⁽ᵏ⁾(b) for all k ≥ 0. A closed curve γ : [a, b] → Rⁿ is simple if γ|(a,b) is one-to-one. The rotation number of a smooth closed curve is:

(1.4)    nγ = (1/2π)(θ(b) − θ(a)),

where θ is the function defined in Proposition 1.6.
We note that the rotation number is always an integer. For reference, we also note that the rotation number of a curve is the winding number of the map e₁. Finally, in view of the last statement in Proposition 1.6, we have:

    nγ = (1/2π) ∫_[a,b] κ ds.
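For example, for the circle of radius r parametrized by arclength, β(s) = (r cos(s/r), r sin(s/r)), s ∈ [0, 2πr], we may take θ(s) = s/r + π/2, so κ = θ′ = 1/r and nγ = (1/2π)(θ(2πr) − θ(0)) = 1.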
Theorem 1.7 (Rotation Theorem). Let γ : [0, L] → R² be a smooth, regular, simple, closed curve. Then nγ = ±1. In particular

    (1/2π) ∫_[0,L] κ ds = ±1.

For the proof we will need the following technical lemma. We say that a set ∆ ⊂ Rⁿ is star-shaped with respect to x₀ ∈ ∆ if for every y ∈ ∆ the line segment x₀y lies in ∆.
Lemma 1.8. Let ∆ ⊂ Rⁿ be star-shaped with respect to x₀ ∈ ∆, and let e : ∆ → S¹ be a continuous function. Then there exists a continuous function θ : ∆ → R such that:

(1.5)    e = (cos θ, sin θ).

Moreover, if ψ is another continuous function satisfying (1.5), then θ − ψ = 2πk where k is a constant integer.
In fact, it is sufficient to assume that ∆ is simply connected, but we will not prove this more general result here.
Proof. Define θ(x0 ) so that (1.5) holds at x0 . For each x ∈ ∆ define θ
continuously along the line segment x0 x as in the proof of Proposition 1.6. Since
∆ is star-shaped with respect to x0 , this defines θ everywhere in ∆. It remains
to show that θ is continuous. Let y₀ ∈ ∆. Since x₀y₀ is compact, it is possible to choose δ small enough that the following holds: y′ ∈ x₀y₀ and |y − y′| < δ implies |e(y) − e(y′)| < 2, or equivalently, e(y) and e(y′) are not antipodal. Let 0 < ε < π. Then there exists a neighborhood U ⊂ B_δ(y₀) of y₀ such that y ∈ U implies θ(y) − θ(y₀) = 2πk(y) + ε₀(y), where |ε₀(y)| < ε and k(y) is integer-valued. It remains to prove that k ≡ 0. Let y ∈ U and consider the continuous function:

    φ(s) = θ(x₀ + s(y − x₀)) − θ(x₀ + s(y₀ − x₀)),    0 ≤ s ≤ 1.

Since

    |(x₀ + s(y − x₀)) − (x₀ + s(y₀ − x₀))| = |s(y − y₀)| < δ,

it follows from our choice of δ that e(x₀ + s(y − x₀)) and e(x₀ + s(y₀ − x₀)) are not antipodal. Thus, φ(s) ≠ π for all 0 ≤ s ≤ 1, and since φ(0) = 0 we conclude that |φ| < π. In particular

    |2πk(y) + ε₀(y)| = |θ(y) − θ(y₀)| = |φ(1)| < π,

and it follows that

    |2πk(y)| ≤ |2πk(y) + ε₀(y)| + |ε₀(y)| < 2π.

Since k(y) is integer-valued this implies k(y) = 0. □
Proof of the Rotation Theorem. Pick a line which intersects the curve
γ and pick a last point p on this line, i.e., a point with the property that one ray
of the line from p has no other intersection points with γ. Let h be the unit vector
pointing in the direction of that ray. We assume without loss of generality that γ
is parametrized by arclength, with γ(0) = γ(L) = 0. Now, let ∆ = {(t₁, t₂) ∈ R² : 0 ≤ t₁ ≤ t₂ ≤ L}, and note that ∆ is star-shaped. Define the S¹-valued function:

    e(t₁, t₂) = γ′(t₁)                                if t₁ = t₂;
    e(t₁, t₂) = −γ′(0)                                if (t₁, t₂) = (0, L);
    e(t₁, t₂) = (γ(t₂) − γ(t₁)) / |γ(t₂) − γ(t₁)|      otherwise.
It is straightforward to check that e is continuous on ∆. By the Lemma, there
is a continuous function θ : ∆ → R such that e = (cos θ, sin θ). We claim that
θ(L, L) − θ(0, 0) = ±2π which proves the theorem, since θ(t, t) is a continuous
function satisfying (1.3) in Proposition 1.6, and thus can be used on the right-hand
side of (1.4) to compute the rotation number.
To prove this claim, note that, for any 0 < t < L, the unit vector

    e(0, t) = (γ(t) − γ(0)) / |γ(t) − γ(0)|

is never equal to h. Hence, there is some value α such that θ(0, t) − θ(0, 0) ≠ α + 2πk for any integer k. Thus, |θ(0, t) − θ(0, 0)| < 2π, and since e(0, L) = −e(0, 0) it follows that θ(0, L) − θ(0, 0) = ±π.
Since the curves e(0, t) and e(t, L) are related via a rigid motion, i.e., e(t, L) = Re(0, t) where R is rotation by π, it follows that ψ(t) = (θ(t, L) − θ(0, L)) − (θ(0, t) − θ(0, 0)) is a constant. Since clearly ψ(0) = 0, we get θ(0, L) − θ(0, 0) = θ(L, L) − θ(0, L), and we conclude:

    θ(L, L) − θ(0, 0) = (θ(L, L) − θ(0, L)) + (θ(0, L) − θ(0, 0)) = ±2π.    □
Definition 1.8. A piecewise smooth curve is a continuous function γ : [a, b] → Rⁿ such that there is a partition of [a, b]:

    a = a₀ < a₁ < · · · < aₙ = b

such that for each 1 ≤ j ≤ n the curve segment γⱼ = γ|[aⱼ₋₁, aⱼ] is smooth.
The points γ(aⱼ) are called the corners of γ. The directed angle −π < ψⱼ ≤ π from γ′(aⱼ−) to γ′(aⱼ+) is called the exterior angle at the j-th corner. Define θⱼ : [aⱼ₋₁, aⱼ] → R as in Proposition 1.6, i.e., so that γⱼ′ = (cos θⱼ, sin θⱼ). The rotation number of γ is given by:

    nγ = (1/2π) Σⱼ₌₁ⁿ (θⱼ(aⱼ) − θⱼ(aⱼ₋₁)) + (1/2π) Σⱼ₌₁ⁿ ψⱼ.

Again, nγ is an integer, and we have:

    nγ = (1/2π) ∫_[a,b] κ ds + (1/2π) Σⱼ₌₁ⁿ ψⱼ.
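For example, for the boundary of a rectangle traversed counterclockwise, each of the four sides is a straight segment, so κ = 0 and each θⱼ is constant, while each of the four exterior angles is ψⱼ = π/2; hence nγ = (1/2π)(4 · π/2) = 1, consistent with Theorem 1.9 below.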

The Rotation Theorem can be generalized to piecewise smooth curves provided


corners are taken into account.
Theorem 1.9. Let γ : [0, L] → R2 be a piecewise smooth, regular, simple, closed
curve, and assume that none of the exterior angles are equal to π. Then nγ = ±1.
3.3. Convexity.
Definition 1.9. Let γ : [0, L] → R² be a regular closed plane curve. We say that γ is convex if for each t₀ ∈ [0, L] the curve lies on one side only of its tangent at t₀, i.e., if one of the following inequalities holds:

    (γ − γ(t₀)) · e₂ ≤ 0,    or    (γ − γ(t₀)) · e₂ ≥ 0.
Theorem 1.10. Let γ : [0, L] → R² be a regular simple closed plane curve, and let κ be its curvature. Then γ is convex if and only if either κ ≥ 0 or κ ≤ 0.
We note that an orientation reversing reparametrization of γ changes κ ≥ 0 into κ ≤ 0 and vice versa. Thus, ignoring orientation, those two conditions are equivalent. We also note that the theorem fails if γ is not assumed simple.
Proof. We may assume without loss of generality that |γ′| = 1. Let θ : [0, L] → R be the continuous function given in Proposition 1.6 satisfying:

    e₁ = (cos θ, sin θ),

and θ′ = κ.
Suppose that γ is convex. We will show that θ is weakly monotone, i.e., if t₁ < t₂ and θ(t₁) = θ(t₂) then θ is constant on [t₁, t₂]. First, we note that since γ is simple, we have nγ = ±1 by the Rotation Theorem, and it follows that e₁ is onto S¹, see Exercise 1.5. Thus, there is t₃ ∈ [0, L] such that

    e₁(t₃) = −e₁(t₁) = −e₁(t₂).
By convexity, the three parallel tangents at t1 , t2 , and t3 cannot be distinct, hence
at least two must coincide. Let p1 = γ(s1 ) and p2 = γ(s2 ), s1 < s2 denote these
two points, then the line p1 p2 is contained in γ. Otherwise, if q is a point on p1 p2
not on γ, then the line through q perpendicular to p1 p2 intersects γ in at least
two points r and s, which by convexity must lie on one side of p1 p2 . Without
loss of generality, assume that r is the closer of the two to p1 p2 . Then r lies in
the interior of the triangle p1 p2 s. Regardless of the inclination of the tangent at
r, the three points p1 , p2 and s, all belonging to γ, cannot all lie on one side of
the tangent, in contradiction to convexity. If p₁p₂ ≠ {γ(s) : s₁ ≤ s ≤ s₂}, then p₁p₂ = {γ(s) : s₂ ≤ s ≤ L} ∪ {γ(s) : 0 ≤ s ≤ s₁}. However, in that case, we would have θ(s₂) − θ(s₁) = θ(L) − θ(0) = 2π, a contradiction. Thus, we have

    p₁p₂ = {γ(s) : s₁ ≤ s ≤ s₂} = {γ(t) : t₁ ≤ t ≤ t₂}.

In particular θ(t) = θ(t₁) = θ(t₂) for t₁ ≤ t ≤ t₂.
Conversely, suppose that γ is not convex. Then, there is t₀ ∈ [0, L] such that the function φ = (γ − γ(t₀)) · e₂ changes sign. We will show that θ′ also changes sign. Let t₊, t₋ ∈ [0, L] be such that

    min_[0,L] φ = φ(t₋) < 0 = φ(t₀) < φ(t₊) = max_[0,L] φ.

Note that the three tangents at t₋, t₊ and t₀ are parallel but distinct. Since φ′(t₋) = φ′(t₊) = 0, we have that e₁(t₋) and e₁(t₊) are both equal to ±e₁(t₀). Thus, at least two of these vectors are equal. We may assume, after reparametrization, that there exists 0 < s < L such that e₁(0) = e₁(s). This implies that

    θ(s) − θ(0) = 2πk,    θ(L) − θ(s) = 2πk′

with k, k′ ∈ Z. By the Rotation Theorem, nγ = k + k′ = ±1. Since γ(0) and γ(s) do not lie on a line parallel to e₁(t₀), it follows that θ is not constant on either [0, s] or [s, L]. If k = 0 then θ′ changes sign on [0, s], and similarly if k′ = 0 then θ′ changes sign on [s, L]. If kk′ ≠ 0, then since k + k′ = ±1, it follows that kk′ < 0 and θ′ changes sign on [0, L]. □
Definition 1.10. Let γ : [0, L] → R2 be a regular plane curve. A vertex of γ
is a critical point of the curvature κ.
Theorem 1.11 (The Four Vertex Theorem). A regular simple convex closed
curve has at least four vertices.

Proof. Clearly, κ has a maximum and minimum on [0, L], hence γ has at least
two vertices. We will assume, without loss of generality, that γ is parametrized by
arclength, has its minimum at t = 0, its maximum at t = t0 where 0 < t0 < L,
that γ(0) and γ(t0 ) lie on the x-axis, and that γ enters the upper-half plane in
the interval [0, t0 ]. All these properties can be achieved by reparametrizing and
rotating γ.
We now claim that p = γ(0) and q = γ(t0 ) are the only points of γ on the
x-axis. Indeed, suppose that there is another point r = γ(t1 ) on the x-axis, then
one of these points lies between the other two, and the tangent at that point must,
by convexity, contain the other two. Thus, by the argument used in the proof of
Theorem 1.10 the segment between the outer two is contained in γ, and in particular
pq is contained in γ. It follows that κ = 0 at p and q where κ has its minimum
and maximum, hence κ ≡ 0, a contradiction since then γ is a line and cannot be
closed. We conclude that γ remains in the upper half-plane in the interval [0, t0 ]
and remains in the lower half-plane in the interval [t0 , L].
Suppose now by contradiction that γ(0) and γ(t₀) are the only vertices of γ. Then it follows that:

    κ′ ≥ 0 on [0, t₀],    κ′ ≤ 0 on [t₀, L].

Thus, if we write γ = (x, y), then we have κ′y ≥ 0 on [0, L], and x″ = −κy′, hence:

    0 = ∫₀ᴸ x″ ds = −∫₀ᴸ κy′ ds = ∫₀ᴸ κ′y ds.

Since the integrand in the last integral is non-negative, we conclude that κ′y ≡ 0, hence y ≡ 0, again a contradiction.
It follows that κ has another point where κ′ changes sign, i.e., an extremum. Since extrema come in pairs, κ has at least four extrema. □
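For example, for the ellipse γ(t) = (a cos t, b sin t) with a > b > 0, the curvature is κ(t) = ab/(a² sin²t + b² cos²t)^(3/2), whose critical points are exactly t = 0, π/2, π, 3π/2. Thus the ellipse has exactly four vertices, the points where it meets its axes, so the bound in the Four Vertex Theorem is sharp.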

4. Fenchel’s Theorem
We will use without proof the fact that the shortest path between two points
on a sphere is always an arc of a great circle. We also use the notation γ1 + γ2 to
denote the curve γ1 followed by the curve γ2 .
Definition 1.11. Let γ : [0, L] → Rⁿ be a regular curve parametrized by arclength. The spherical image of γ is the curve γ′ : [0, L] → Sⁿ⁻¹. The total curvature of γ : [0, L] → Rⁿ is:

    Kγ = ∫_[0,L] |γ″| ds.
We note that the total curvature is simply the length of the spherical image.
Theorem 1.12. Let γ be a regular simple closed curve in Rⁿ parametrized by arclength. Then the total curvature of γ is at least 2π:

    Kγ ≥ 2π,

with equality if and only if γ is planar and convex.
The proof will follow from two lemmata which are interesting in their own right.
Lemma 1.13. Let γ : [0, L] → Rⁿ be a regular closed curve parametrized by arclength. Then the spherical image of γ cannot map into an open hemisphere. If γ′ maps into a closed hemisphere, then γ′ maps into an equator.
Proof. Suppose, by contradiction, that there is v ∈ Sⁿ⁻¹ such that γ′ · v > 0. Then

    0 = γ · v|_L − γ · v|_0 = ∫₀ᴸ γ′ · v ds > 0.

If γ′ · v ≥ 0, then the same inequality shows that γ′ · v ≡ 0, hence γ lies in the plane perpendicular to v through γ(0), and γ′ maps into the equator perpendicular to v. □

Lemma 1.14. Let n ≥ 3, and let γ : [0, L] → Sⁿ⁻¹ be a regular closed curve on the unit sphere parametrized by arclength.
(1) If the arclength of γ is less than 2π then γ is contained in an open hemisphere.
(2) If the arclength of γ is equal to 2π then γ is contained in a closed hemisphere.
Proof. (1) First observe that no closed piecewise smooth curve of arclength less than 2π contains two antipodal points p and q. Otherwise the two segments of the curve between p and q would each have length at least π, and hence the length of the curve would have to be at least 2π. Now pick a point p on γ and let q on γ be chosen so that the two segments γ₁ and γ₂ from p to q along γ have equal length. Note that p and q cannot be antipodal. Let v be the midpoint along the shorter of the two segments of the great circle between p and q. Suppose that γ₁ intersects the equator, the great circle v · x = 0. Let γ̃₁ be the reflection of γ₁ with respect to v; then the length of γ₁ + γ̃₁ is the same as the length of γ, hence is less than 2π. But γ₁ + γ̃₁ contains two antipodal points, a contradiction. Thus, γ₁ cannot intersect the equator. Similarly, γ₂ cannot intersect the equator, and we conclude γ stays in the open hemisphere v · x > 0.
(2) If the arclength of γ is 2π, we refine the above argument. If p and q are antipodal, then both γ₁ and γ₂ are great semi-circles; thus, γ stays in a closed hemisphere (in fact, since γ is smooth, γ₁ and γ₂ are contained in the same great circle, and hence γ is itself a great circle). So we can assume that p and q are not antipodal and proceed as before, defining v to be the midpoint on the shorter arc of the great circle between p and q. Now, if γ₁ crosses the equator, then γ₁ + γ̃₁ contains two antipodal points on the equator, and the two segments joining these points enter both hemispheres. Thus, these segments are not semi-circles, and consequently both have arclength strictly greater than π. Thus the arclength of γ₁ + γ̃₁ is strictly larger than 2π, a contradiction. Similarly, γ₂ does not cross the equator, and we conclude that γ stays in the closed hemisphere v · x ≥ 0. □

Proof of Fenchel’s Theorem. Note that the total curvature is simply the arclength of the spherical image of γ. By Lemma 1.13, γ′ is not contained in an open hemisphere, so by Lemma 1.14

    Kγ = ∫_[0,L] |γ″| ds ≥ 2π.

If Kγ = 2π, then the arclength of the spherical image γ′ is 2π, so by Lemma 1.14, γ′ is contained in a closed hemisphere, and by Lemma 1.13, γ′ maps into an equator. If n > 3, we may proceed by induction until we obtain that γ is planar. Once we have that γ is planar, the Rotation Theorem gives nγ = ±1. Without loss of generality, reversing the orientation of γ if necessary, we may assume that nγ = 1. Hence

    0 ≤ ∫_[0,L] (|κ| − κ) ds = Kγ − 2π = 0,

and it follows that κ = |κ| ≥ 0, which by Theorem 1.10 implies that γ is convex. □
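For example, for a circle of radius r parametrized by arclength we have κ ≡ 1/r and L = 2πr, so Kγ = ∫|γ″| ds = 2π: the circle is planar and convex, and it attains equality in Fenchel’s Theorem.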

Exercises
Exercise 1.1. A regular space curve γ : [a, b] → R3 is a helix if there is a fixed
unit vector u ∈ R3 such that e1 · u is constant. Let κ and τ be the curvature and
torsion of a regular space curve γ, and suppose that κ 6= 0. Prove that γ is a helix
if and only if τ = cκ for some constant c.

Exercise 1.2. Let γ : I → R⁴ be a smooth curve parameterized by arclength such that γ′, γ″, γ‴ are linearly independent. Prove the existence of a Frenet frame, i.e., a positively oriented orthonormal frame X = (e₁, e₂, e₃, e₄) satisfying e₁ = γ′, and X′ = Xω, where ω is anti-symmetric, tri-diagonal, and ωᵢ,ᵢ₊₁ > 0 for i ≤ n − 2. The curvatures of γ are the three functions κᵢ = ωᵢ,ᵢ₊₁, i = 1, 2, 3. Note that κ₁, κ₂ > 0, but κ₃ has a sign.

Exercise 1.3. Prove the Fundamental Theorem for curves in R4 : Given func-
tions κ1 , κ2 , κ3 on I with κ1 , κ2 > 0, there is a smooth curve γ parameterized by
arclength on I such that κ1 , κ2 , κ3 are the curvatures of γ. Furthermore, γ is unique
up to a rigid motion of R4 .

Exercise 1.4. Let γ : [a, b] → R² be a regular plane curve with non-zero curvature κ ≠ 0, and let β = γ + κ⁻¹N be the locus of the centers of curvature of γ.
(1) Prove that β is regular provided that κ′ ≠ 0.
(2) Prove that each tangent ℓ of β intersects γ at a right angle.
A curve satisfying (1) and (2) is called an evolute of γ.
(3) Prove that each regular plane curve γ : [a, b] → R² has at most one evolute.

Exercise 1.5. A convex plane curve γ : [a, b] → R2 is strictly convex if κ 6= 0.


Prove that if γ : [a, b] → R2 is a strictly convex simple closed curve, then for every
v ∈ S1 , there is a unique t ∈ [a, b] such that e1 (t) = v.

Exercise 1.6. Let γ : [0, L] → R2 be a strictly convex simple closed curve. The
width w(t) of γ at t ∈ [0, L] is the distance between the tangent line at γ(t) and the
tangent line at the unique point γ(t0 ) satisfying e1 (t0 ) = −e1 (t) (see Exercise 1.5).
A curve has constant width if w is independent of t. Prove that if γ has constant
width then:
(1) The line between γ(t) and γ(t0 ) is perpendicular to the tangent lines at
those points.
(2) The curve γ has length L = πw.
2Reversing the orientation of γ if necessary.

Exercise 1.7. Let γ : [0, L] → R² be a simple closed curve. By the Jordan Curve Theorem, the complement of γ has two connected components, one of which is bounded. The area enclosed by γ is the area of this component, and according to Green’s Theorem, it is given by:

    A = ∫_γ x dy = ∫_γ xy′ dt,

where the orientation is chosen so that the normal e₂ points into the bounded component. Let L be the length of γ, and let β be a circle of width 2r equal to some width of γ. Prove:
(1) A = ½ ∫_γ (xy′ − yx′) dt.
(2) A + πr² ≤ Lr.
(3) The isoperimetric inequality: 4πA ≤ L².
(4) If equality holds in (3) then γ is a circle.

Exercise 1.8. Prove that if a convex simple closed curve has four vertices,
then it cannot meet any circle in more than four points.
CHAPTER 2

Local Surface Theory

1. Surfaces
Definition 2.1. A parametric surface patch is a smooth mapping:

    X : U → R³,

where U ⊂ R² is open, and the Jacobian dX is non-singular.
Write X = (x¹, x², x³), with each xⁱ = xⁱ(u¹, u²); then the Jacobian has the matrix representation:

           ( x¹₁  x¹₂ )
    dX =   ( x²₁  x²₂ )
           ( x³₁  x³₂ )

where we have used the notation fᵢ = f_{uⁱ} = ∂f/∂uⁱ. According to the definition, we are requiring that this matrix has rank 2, or equivalently that the vectors X₁ = (x¹₁, x²₁, x³₁) and X₂ = (x¹₂, x²₂, x³₂) are linearly independent. Another equivalent requirement is that dX : R² → R³ is injective.
Example 2.1. Let U ⊂ R² be open, and suppose that f : U → R is smooth. Define the graph of f as the parametric surface X(u¹, u²) = (u¹, u², f(u¹, u²)). To verify that X is indeed a parametric surface, note that:

           ( 1    0  )
    dX =   ( 0    1  )
           ( f₁   f₂ )

so that dX clearly has rank 2.
A diffeomorphism between open sets U, V ⊂ R2 is a map φ : U → V which is
smooth, one-to-one, and whose inverse is also smooth. If det(dφ) > 0, then we say
that φ is an orientation-preserving diffeomorphism.
Definition 2.2. Let X : U → R3 , and X̃ : Ũ → R3 be parametric surfaces.
We say that X̃ is reparametrization of X if X̃ = X ◦ φ, where φ : Ũ → U is a
diffeomorphism. If φ is an orientation-preserving diffeomorphism, then X̃ is an
orientation-preserving reparametrization.
Clearly, the inverse of a diffeomorphism is a diffeomorphism. Thus, if X̃ is a
reparametrization of X, then X is a reparametrization of X̃.
Definition 2.3. The tangent space Tu X of the parametric surface X : U → R3
at u ∈ U is the 2-dimensional linear subspace of R3 spanned by the two vectors X1
and X2 .1
1Note that the tangent plane to the surface X(U ) at u is actually the affine subspace X(u) +
Tu X. However, it will be very convenient to have the tangent space as a linear subspace of R3 .


If Y ∈ Tu X, then it can be expressed as a linear combination of X₁ and X₂:

    Y = y¹X₁ + y²X₂ = Σᵢ₌₁² yⁱXᵢ,

where the yⁱ ∈ R are the components of the vector Y in the basis X₁, X₂ of Tu X. We will use the Einstein Summation Convention: every index which appears twice in any product, once as a subscript (covariant) and once as a superscript (contravariant), is summed over its range. For example, the above equation will be written Y = yⁱXᵢ. The next proposition shows that the tangent space is invariant under reparametrization, and gives the law of transformation for the components of a tangent vector. Note that covariant and contravariant indices have different transformation laws, cf. (2.1) and (2.2).
Proposition 2.1. Let X : U → R³ be a parametric surface, and let X̃ = X ∘ φ be a reparametrization of X. Then T_{φ(ũ)}X = T_ũ X̃. Furthermore, if Z ∈ T_ũ X̃, and Z = zⁱXᵢ = z̃ʲX̃ⱼ, then:

(2.1)    zⁱ = z̃ʲ ∂uⁱ/∂ũʲ,

where dφ = (∂uⁱ/∂ũʲ).
Proof. By the chain rule, we have:

(2.2)    X̃ⱼ = (∂uⁱ/∂ũʲ) Xᵢ.

Thus T_ũ X̃ ⊂ T_{φ(ũ)}X, and since we can interchange the roles of X and X̃, we conclude that T_ũ X̃ = T_{φ(ũ)}X. Substituting (2.2) in z̃ʲX̃ⱼ, we find:

    zⁱXᵢ = z̃ʲ (∂uⁱ/∂ũʲ) Xᵢ,

and (2.1) follows. □
Definition 2.4. A vector field along a parametric surface X : U → R³ is a smooth mapping Y : U → R³.² A vector field Y is tangent to X if Y(u) ∈ Tu X for all u ∈ U. A vector field Y is normal to X if Y(u) ⊥ Tu X for all u ∈ U.
Example 2.2. The vector fields X₁ and X₂ are tangent to the surface. The vector field X₁ × X₂ is normal to the surface.
We call the unit vector field

    N = (X₁ × X₂) / |X₁ × X₂|

the unit normal. Note that the triple (X₁, X₂, N), although not necessarily orthonormal, is positively oriented. In particular, we can see that the choice of an orientation on X, e.g., X₁ → X₂, fixes a unit normal, and vice-versa, the choice of a unit normal fixes the orientation. Here we chose to use the orientation inherited from the orientation u¹ → u² on U.
Definition 2.5. We call the map N : U → S² the Gauss map.
²We often visualize Y(u) as being attached at X(u), i.e. belonging to the tangent space of R³ at X(u); cf. footnote 1.

The Gauss map is invariant under orientation-preserving reparametrization.


Proposition 2.2. Let X : U → R³ be a parametric surface, and let N : U → S² be its Gauss map. Let X̃ = X ∘ φ be an orientation-preserving reparametrization of X. Then the Gauss map of X̃ is N ∘ φ.
Proof. Let v ∈ Ũ. The unit normal Ñ(v) of X̃ at v is perpendicular to Tv X̃. By Proposition 2.1, we have T_{φ(v)}X = Tv X̃. Thus, Ñ(v) is perpendicular to T_{φ(v)}X, as is N(φ(v)). It follows that the two vectors are co-linear, and hence Ñ(v) = ±N(φ(v)). But since φ is orientation preserving, the two pairs (X₁, X₂) and (X̃₁, X̃₂) have the same orientation in the plane Tv X̃. Since also the two triples (X₁(φ(v)), X₂(φ(v)), N(φ(v))) and (X̃₁(v), X̃₂(v), Ñ(v)) have the same orientation in R³, it follows that N(φ(v)) = Ñ(v). □

2. The First Fundamental Form


Definition 2.6. A symmetric bilinear form on a vector space V is a function B : V × V → R satisfying:
(1) B(aX + bY, Z) = aB(X, Z) + bB(Y, Z), for all X, Y, Z ∈ V and a, b ∈ R.
(2) B(X, Y) = B(Y, X), for all X, Y ∈ V.
The symmetric bilinear form B is positive definite if B(X, X) ≥ 0, with equality if and only if X = 0.
With any symmetric bilinear form B on a vector space, there is associated a quadratic form Q(X) = B(X, X). Let V and W be vector spaces and let T : V → W be a linear map. If B is a symmetric bilinear form on W, we can define a symmetric bilinear form T*B on V by T*B(X, Y) = B(TX, TY). We call T*B the pull-back of B by T. The map T is then an isometry between the inner-product spaces (V, T*B) and (W, B).
Example 2.3. Let V = R³ and define B(X, Y) = X · Y; then B is a positive definite symmetric bilinear form. The associated quadratic form is Q(X) = |X|².
Example 2.4. Let A be a symmetric 2 × 2 matrix, and let B(X, Y ) = AX · Y ,
then B is a symmetric bilinear form which is positive definite if and only if the
eigenvalues of A are both positive.
Definition 2.7. Let X : U → R³ be a parametric surface. The first fundamental form is the symmetric bilinear form g defined on each tangent space Tu X by:

    g(Y, Z) = Y · Z,    ∀Y, Z ∈ Tu X.

Thus, g is simply the restriction of the Euclidean inner product in Example 2.3 to each tangent space of X. We say that g is induced by the Euclidean inner product.
Let gij = g(Xi, Xj), and let Y = yⁱXᵢ and Z = zⁱXᵢ be two vectors in Tu X, then

(2.3)    g(Y, Z) = gij yⁱzʲ.

Thus, the so-called coordinate representation (gij) of g is at each point u₀ ∈ U an instance of Example 2.4. In fact, if A = (gij), and B(ξ, η) = ξ · Aη for ξ, η ∈ R² as in Example 2.4, then B is the pull-back by dXu : R² → Tu X of the restriction of the Euclidean inner product on Tu X.

The classical (Gauss) notation for the first fundamental form is g₁₁ = E, g₁₂ = g₂₁ = F, and g₂₂ = G, i.e.,

    (gij) = ( E  F )
            ( F  G )

Clearly, F² < EG, and another condition equivalent to the condition that X₁ and X₂ are linearly independent is that det(gij) = EG − F² > 0. The first fundamental form is also sometimes written:

    ds² = gij duⁱ duʲ = E (du¹)² + 2F du¹ du² + G (du²)².

Note that the gij's are functions of u. The reason for the notation ds² is that the square root of the first fundamental form can be used to compute the length of curves on X. Indeed, if γ : [a, b] → R³ is a curve on X, then γ = X ∘ β, where β is a curve in U. Let β(t) = (β¹(t), β²(t)), and denote time derivatives by a dot; then

    Lγ([a, b]) = ∫ₐᵇ |γ̇| dt = ∫ₐᵇ √(gij β̇ⁱβ̇ʲ) dt.

Accordingly, ds is also called the line element of the surface X.
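For example, for the graph X(u¹, u²) = (u¹, u², f(u¹, u²)) of Example 2.1 we have X₁ = (1, 0, f₁) and X₂ = (0, 1, f₂), hence E = 1 + f₁², F = f₁f₂, G = 1 + f₂², so that

    ds² = (1 + f₁²)(du¹)² + 2f₁f₂ du¹ du² + (1 + f₂²)(du²)²,

and det(gij) = 1 + f₁² + f₂² > 0.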


Note that g contains all the intrinsic geometric information about the surface X. The distance between any two points on the surface is given by:

    d(p, q) = inf{Lγ : γ is a curve on X between p and q}.

Also the angle θ between two vectors Y, Z ∈ Tu X is given by:

    cos θ = g(Y, Z) / √(g(Y, Y) g(Z, Z)),

and the angle between two curves β and γ on X is the angle between their tangents β̇ and γ̇. Intrinsic geometry is all the information which can be obtained from the three functions gij and their derivatives.
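As a simple instance, the angle between the two coordinate curves through a point, whose tangents are X₁ and X₂, is given by cos θ = g₁₂/√(g₁₁g₂₂) = F/√(EG); in particular, the coordinate curves meet orthogonally exactly when F = 0.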
Clearly, the first fundamental form is invariant under reparametrization. The
next proposition shows how the gij ’s change under reparametrization.

Proposition 2.3. Let X : U → R³ be a parametric surface, and let X̃ = X ∘ φ be a reparametrization of X. Let gij be the coordinate representation of the first fundamental form of X, and let g̃ij be the coordinate representation of the first fundamental form of X̃. Then, we have:

(2.4)    g̃ij = (∂uᵏ/∂ũⁱ)(∂uˡ/∂ũʲ) gkl,

where dφ = (∂uⁱ/∂ũʲ).
Proof. In view of (2.2), we have:

    g̃ij = g(X̃i, X̃j) = g((∂uᵏ/∂ũⁱ)Xk, (∂uˡ/∂ũʲ)Xl) = (∂uᵏ/∂ũⁱ)(∂uˡ/∂ũʲ) g(Xk, Xl) = (∂uᵏ/∂ũⁱ)(∂uˡ/∂ũʲ) gkl.    □

3. The Second Fundamental Form


We now turn to the second fundamental form. First, we need to prove a technical proposition. Let Y and Z be vector fields along X, and suppose that Y = yⁱXᵢ is tangential. We define the directional derivative of Z along Y by:

    ∂_Y Z = yⁱZᵢ = yⁱ ∂Z/∂uⁱ.

Note that the value of ∂_Y Z at u depends only on the value of Y at u, but depends on the values of Z in a neighborhood of u. In addition, ∂_Y Z is reparametrization invariant, but even if Z is tangent, ∂_Y Z is not necessarily tangent. Indeed, if we write Y = ỹⁱX̃ᵢ, then we see that:

    ỹⁱ ∂̃ᵢZ = ỹⁱ ∂Z/∂ũⁱ = yʲ (∂ũⁱ/∂uʲ)(∂uᵏ/∂ũⁱ) ∂Z/∂uᵏ = yʲ∂ⱼZ.

The commutator of Y and Z can now be defined as the vector field:

    [Y, Z] = ∂_Y Z − ∂_Z Y.
Proposition 2.4. Let X : U → R³ be a surface, and let N be its unit normal.
(1) If Y and Z are tangential vector fields then [Y, Z] ∈ Tu X.
(2) If Y, Z ∈ Tu X then ∂_Y N · Z = ∂_Z N · Y.
Proof. Note first that since X is smooth, we have Xij = Xji, where we have used the notation Xij = ∂²X/∂uⁱ∂uʲ. Now, write Y = yⁱXᵢ and Z = zʲXⱼ, and compute:

    ∂_Y Z − ∂_Z Y = yⁱzʲXⱼᵢ + yⁱ(∂ᵢzʲ)Xⱼ − yⁱzʲXᵢⱼ − zʲ(∂ⱼyⁱ)Xᵢ
                  = (yⁱ∂ᵢzʲ − zⁱ∂ᵢyʲ) Xⱼ.

To prove (2), extend Y and Z to be vector fields in a neighborhood of u, and use (1):

    ∂_Y N · Z − ∂_Z N · Y = −N · (∂_Y Z − ∂_Z Y) = 0.    □

Note that while proving the proposition, we have established the following formula for the commutator:

(2.5)    [Y, Z] = (yⁱ∂ᵢzʲ − zⁱ∂ᵢyʲ) Xⱼ.
Definition 2.8. Let X : U → R3 be a surface, and let N : U → S2 be its
unit normal. The second fundamental form of X is the symmetric bilinear form k
defined on each tangent space Tu X by:
(2.6) k(Y, Z) = −∂Y N · Z.
We remark that since N · N = 1, we have ∂Y N · N = 0, hence ∂Y N is tangen-
tial. Thus, according to (2.6), the second fundamental form is minus the tangential
directional derivative of the unit normal, and hence measures the turning of the tan-
gent plane as one moves about on the surface. Note that part (2) of the proposition
guarantees that k is indeed a symmetric bilinear form. Note that it is not neces-
sarily positive definite. Furthermore, if we set kij = k(Xi , Xj ) to be the coordinate
representation of the second fundamental form, then we have:
(2.7) kij = Xij · N.

This equation leads to another representation. Consider the Taylor expansion of X at a point, say 0 ∈ U:

    X(u) = X(0) + Xᵢ(0) uⁱ + ½ Xᵢⱼ(0) uⁱuʲ + O(|u|³).

Thus, the elevation of X above its tangent plane at u is given up to second-order terms by:

    (X(u) − X(0) − Xᵢ(0)uⁱ) · N = ½ kij(0) uⁱuʲ + O(|u|³).

The paraboloid on the right-hand side of the equation above is called the osculating paraboloid. A point u of the surface is called elliptic, hyperbolic, parabolic, or planar, depending on whether this paraboloid is elliptic, hyperbolic, cylindrical, or a plane.
In classical notation the second fundamental form is:

    (kij) = ( L  M )
            ( M  N )

Clearly, the second fundamental form is invariant under orientation-preserving reparametrizations. Furthermore, the kij's, the coordinate representation of k, change like the first fundamental form under orientation-preserving reparametrization:

    k̃ij = k(X̃i, X̃j) = (∂uᵐ/∂ũⁱ)(∂uˡ/∂ũʲ) kml.
Yet another interpretation of the second fundamental form is obtained by con-
sidering curves on the surface. The following theorem is essentially due to Euler.
Theorem 2.5. Let γ = X ◦ β : [a, b] → R3 be a curve on a parametric surface
X : U → R3 , where β : [a, b] → U . Let κ be the curvature of γ, and let θ be the
angle between the unit normal N of X, and the principal normal e2 of γ. Then:
(2.8) κ cos θ = k(γ̇, γ̇).
Proof. We may assume that γ is parametrized by arclength. We have:

    γ̇ = β̇ⁱXᵢ,

and

    κe₂ = γ̈ = β̈ⁱXᵢ + β̇ⁱβ̇ʲXᵢⱼ.

The theorem now follows by taking the inner product with N, and taking (2.7) into account. □

The quantity κ cos θ is called the normal curvature of γ. It is particularly inter-


esting to consider normal sections, i.e., curves γ on X which lie on the intersection
of the surface with a normal plane. We may always orient such a plane so that the
normal e₂ to γ in the plane coincides with the unit normal N of the surface. In that
case, we obtain the simpler result:
κ = k(γ̇, γ̇).
Thus, the second fundamental form measures the signed curvature of normal sec-
tions in the normal plane equipped with the appropriate orientation.

Definition 2.9. Let X : U → R³ be a parametric surface, and let k be its second fundamental form. Denote the unit circle in the tangent space at u by Su X = {Y ∈ Tu X : |Y| = 1}. For u ∈ U, define the principal curvatures of X at u by:

    k₁ = min_{Y ∈ Su X} k(Y, Y),    k₂ = max_{Y ∈ Su X} k(Y, Y).

The unit vectors Y ∈ Su X along which the principal curvatures are achieved are called the principal directions. The mean curvature H and the Gauss curvature K of X at u are given by:

    H = ½(k₁ + k₂),    K = k₁k₂.
If we consider the tangent space Tu X with the inner product g and the unique linear transformation ℓ : Tu X → Tu X satisfying:

(2.9)    g(ℓ(Y), Z) = k(Y, Z),    ∀Z ∈ Tu X,

then k₁ ≤ k₂ are the eigenvalues of ℓ and the principal directions are the eigenvectors of ℓ. If k₁ = k₂ then k = λg and every direction is a principal direction. A point where this holds is called an umbilical point. Otherwise, the principal directions are perpendicular. We have that H is half the trace and K the determinant of ℓ. Let (gⁱʲ) be the inverse of the 2 × 2 matrix (gij):

    gⁱᵐ gmj = δⁱⱼ.

Set ℓ(Xᵢ) = ℓʲᵢ Xⱼ; then since kij = g(ℓ(Xᵢ), Xⱼ) = ℓᵐᵢ gmj, we find:

    ℓʲᵢ = kim gᵐʲ.

It is customary to say that g raises the index of k and to write the new object kᵢʲ = kim gᵐʲ. Here, since kij is symmetric, it is not necessary to keep track of the position of the indices, and hence we write: ℓʲᵢ = kᵢʲ. In particular, we have:

(2.10)    H = ½ kᵢⁱ,    K = det(kij) / det(gij).

Now, kⁱʲ = gⁱᵐ gʲˡ klm, and we have

    |k|² = kij kⁱʲ = tr ℓ² = k₁² + k₂² = 4H² − 2K.

Hence, we conclude

(2.11)    K = 2H² − ½|k|².
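As an illustration of (2.10), consider again the graph X = (u¹, u², f) of Example 2.1, and write W = √(1 + f₁² + f₂²). Then N = (−f₁, −f₂, 1)/W, and kij = Xij · N = fij/W, so that

    K = (f₁₁f₂₂ − f₁₂²)/W⁴,    H = ((1 + f₂²)f₁₁ − 2f₁f₂f₁₂ + (1 + f₁²)f₂₂)/(2W³).

In particular, by (2.10) the graph has vanishing mean curvature exactly when f satisfies (1 + f₂²)f₁₁ − 2f₁f₂f₁₂ + (1 + f₁²)f₂₂ = 0.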
4. Examples
In this section, we use u¹ = u and u² = v in order to simplify the notation.

4.1. Planes. Let U ⊂ R2 be open, and let X : U → R3 be a linear function:


X(u, v) = Au + Bv,
with A, B ∈ R3 linearly independent. Then X is a plane. After reparametrization,
we may assume that A and B are orthonormal. In that case, the first fundamental
form is:
ds2 = du2 + dv 2 .

Furthermore, |A × B| = 1, and N = A × B is constant, hence k = 0. In particular,


all the points of X are planar, and we have for the mean and Gauss curvatures:
H = K = 0.
It is of interest to note that if all the points of a parametric surface are planar,
then X(U ) is contained in a plane. We will later prove a stronger result: X has a
reparametrization which is linear.
Proposition 2.6. Let X : U → R3 be a parametric surface, and suppose that
its second fundamental form k = 0. Then, there is a fixed vector A and a constant
b such that X · A = b, i.e., X is contained in a plane.
Proof. Let A be the unit normal N of X. Let 1 ≤ i ≤ 2, and note that Nᵢ is tangential. Indeed, N · N = 1, and differentiating along uⁱ, we get N · Nᵢ = 0. However, since k = 0 it follows from (2.6) that Nᵢ · Xⱼ = −kij = 0. Thus, Nᵢ = 0 for i = 1, 2, and we conclude that N is constant. Consequently, (X · N)ᵢ = Xᵢ · N = 0, and X · N is also constant, which proves the proposition. □

4.2. Spheres. Let U = (0, π) × (0, 2π) ⊂ R², and let X : U → R³ be given by:

    X(u, v) = (sin u cos v, sin u sin v, cos u).

The surface X is a parametric representation of the unit sphere. A straightforward calculation shows that the first fundamental form is:

    ds² = du² + sin²u dv²,

and the unit normal is N = X. Thus, Nᵢ = Xᵢ, and consequently kij = −Nᵢ · Xⱼ = −Xᵢ · Xⱼ = −gij, i.e., k = −g. In particular, the principal curvatures are both equal to −1 and all the points are umbilical. We have for the mean and Gauss curvatures:

    H = −1,    K = 1.
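More generally, for the sphere of radius r, X(u, v) = (r sin u cos v, r sin u sin v, r cos u), the outward unit normal is N = X/r, so k = −(1/r)g, every point is umbilical, and H = −1/r, K = 1/r². The signs reflect the choice of the outward normal: a great circle parametrized by arclength has curvature 1/r and its principal normal e₂ = −N points toward the center, so by Theorem 2.5 its normal curvature is κ cos θ = −1/r, in agreement with k(γ̇, γ̇) = −(1/r)g(γ̇, γ̇) = −1/r.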
Proposition 2.7. Let X : U → R3 be a parametric surface and suppose that
all the points of X are umbilical. Then, X(U ) is either contained in a plane or a
sphere.
Proof. By hypothesis, we have
(2.12) Ni = λXi .
We first show that λ is a constant. Differentiating (2.12), we get Nij = λj Xi +λXij .
Interchanging i and j, subtracting these two equations, and taking into account
Nij − Nji = Xij − Xji = 0, we obtain λi Xj − λj Xi = 0, e.g.,
λ1 X2 − λ2 X1 = 0.
Since X1 and X2 are linearly independent, we conclude that λ1 = λ2 = 0 and
it follows that λ is constant. Now, if λ = 0 then all points are planar, and by
Proposition 2.6, X is contained in a plane. Otherwise, let A = X − λ−1 N , then A
is constant:
Ai = Xi − λ−1 Ni = 0,
−1
and |X − A| = |λ| is also constant, hence X is contained in a sphere. 

4.3. Ruled Surfaces. A ruled surface is a parametric surface of the form:

    X(u, v) = γ(u) + vY(u)

for a curve γ : [a, b] → R³, and a vector field Y : [a, b] → R³ along γ. The curve γ is the directrix, and the lines γ(u) + tY(u) for u fixed are the generators of X. We may assume that Y is a unit vector field. We will also assume that Ẏ ≠ 0. In this case, it is possible to arrange by reparametrization that γ̇ · Ẏ = 0, in which case γ is said to be a line of striction. Indeed, if this is not the case, then we can set φ = −γ̇ · Ẏ/|Ẏ|², and note that the curve

    α = γ + φY

lies on the surface X, and satisfies α̇ · Ẏ = 0. Consequently, the surface:

    X̃(s, t) = α(s) + tY(s)
is a reparametrization of X. Furthermore, there is only one line of striction on X. Indeed, if β and γ are two lines of striction, then since β is a curve on X we may write β = γ + φY for some function φ, and consequently:

    β̇ = γ̇ + φ̇Y + φẎ.

Taking the inner product with Ẏ and using the fact that Y is a unit vector, we obtain φ|Ẏ|² = 0, which implies that φ = 0 and thus β = γ.
We have Xᵤ = γ̇ + vẎ, Xᵥ = Y, and Xᵥᵥ = 0. Thus, the first fundamental form is:

    (gij) = ( 1 + v²|Ẏ|²   γ̇ · Y )
            ( γ̇ · Y          1   )

and

    det(gij) = 1 + v²|Ẏ|² − (γ̇ · Y)² ≥ v²|Ẏ|².

Hence, dX is non-singular except possibly on the line of striction. Furthermore, kᵥᵥ = N · Xᵥᵥ = 0, hence det(kij) = −kᵤᵥ², and if det(kij) = 0 then Nᵥ · Xᵤ = Nᵥ · Xᵥ = 0, i.e., N is constant along generators. We have proved the following proposition.
Proposition 2.8. Let X be a ruled surface. Then X has non-positive Gauss curvature K ≤ 0, and K(u) = 0 if and only if N is constant along the generator through u.
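A standard example is the helicoid X(u, v) = (v cos u, v sin u, bu) with b ≠ 0, a ruled surface with directrix γ(u) = (0, 0, bu) (the x³-axis) and unit generator field Y(u) = (cos u, sin u, 0). Here γ̇ · Ẏ = 0, so the axis is the line of striction, and Xᵤ × Xᵥ = (−b sin u, b cos u, −v) is nowhere zero and not constant along any generator; hence, by Proposition 2.8, the helicoid has K < 0 everywhere.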
4.3.1. Cylinders. Let γ : [a, b] → R³ be a planar curve parametrized by arclength, and let A be a unit normal to the plane which contains γ. Define X : [a, b] × R → R³ by:

    X(u, v) = γ(u) + vA.

The surface X is a cylinder. The first fundamental form is:

    ds² = du² + dv²,

and we see that for a cylinder dX is always non-singular. After possibly reversing the orientation of A, the unit normal is N = e₂. Clearly, Nᵥ = 0, and Nᵤ = −κe₁. Thus, the second fundamental form is:

    k = κ du².

The principal curvatures are 0 and κ. We have for the mean and Gauss curvatures:

    H = ½κ,    K = 0.

A surface on which K = 0 is called developable.


4.3.2. Tangent Surfaces. Let γ : [a, b] → R³ be a curve parametrized by arclength with nonzero curvature κ ≠ 0. Its tangent surface is the ruled surface:

    X(u, v) = γ(u) + vγ̇(u).

Since γ̇ · γ̈ = 0, the curve γ is the line of striction of its tangent surface. We have Xᵤ = e₁ + vκe₂ and Xᵥ = e₁, hence the first fundamental form is:

    (gij) = ( 1 + v²κ²   1 )
            ( 1           1 )

The unit normal is N = −e₃, and clearly Nᵥ = 0. Thus, N is constant along the generators, and by Proposition 2.8 the Gauss curvature vanishes: K = 0, i.e., the tangent surface is developable.
4.3.3. Hyperboloid. Let γ : (0, 2π) → R³ be the unit circle in the x¹x²-plane: γ(t) = (cos t, sin t, 0). Define a ruled surface X : (0, 2π) × R → R³ by:

    X(u, v) = γ(u) + v(γ̇(u) + e₃) = (cos u − v sin u, sin u + v cos u, v).

Note that (x¹)² + (x²)² − (x³)² = 1, so that X(U) is a hyperboloid of one sheet. A straightforward calculation gives:

    N = (1/√(1 + 2v²)) (cos u − v sin u, sin u + v cos u, −v),

and

    |Nᵥ|² = 2/(1 + 4v² + 4v⁴).

It follows from Proposition 2.8 that X has Gauss curvature K < 0.

5. Lines of Curvature
Definition 2.10. A curve γ on a parametric surface X is called a line of
curvature if γ̇ is a principal direction.
The following proposition, due to Rodriguez, characterizes lines of curvature
as those curves whose tangents are parallel to the tangent of their spherical image
under the Gauss map.
Proposition 2.9. Let γ be a curve on a parametric surface X with unit normal N, and let β = N ∘ γ be its spherical image under the Gauss map. Then γ is a line of curvature if and only if

(2.13)    β̇ + λγ̇ = 0

for some function λ.
Proof. Suppose that (2.13) holds; then we have:

    ∂_γ̇ N + λγ̇ = 0.

Let ℓ be the linear transformation on Tu X associated with k as defined by (2.9). Then, we have for every Y ∈ Tu X:

    g(ℓ(γ̇), Y) = k(γ̇, Y) = −∂_γ̇ N · Y = g(λγ̇, Y).

Thus, ℓ(γ̇) = λγ̇, and γ̇ is a principal direction. The proof of the converse is similar. □

It is clear from the proof that λ in (2.13) is the associated principal curvature.
The coordinate curves of a parametric surface X are the two families of curves γ_c(t) = X(t, c) and β_c(t) = X(c, t). A surface is parametrized by lines of curvature if the coordinate curves of X are lines of curvature. We will now show that any non-umbilical point has a neighborhood in which the surface can be reparametrized by lines of curvature. We first prove the following lemma, which is also of independent interest.

Lemma 2.10. Let X : U → R3 be a parametric surface, and let Y1 and Y2 be


linearly independent vector fields. The following statements are equivalent:
(1) Any point u0 ∈ U has a neighborhood U0 and a reparametrization φ : V0 →
U0 such that if X̃ = X ◦ φ then X̃i = Yi ◦ φ.
(2) [Y1 , Y2 ] = 0.

Proof. Suppose that (1) holds. Then Equation (2.5) shows that [X̃₁, X̃₂] = 0. However, since the commutator is invariant under reparametrization, it follows that [Y₁, Y₂] = 0.
Conversely, suppose that [Y₁, Y₂] = 0. Express Xᵢ = aʲᵢYⱼ and Yᵢ = bʲᵢXⱼ, and note that (bʲᵢ) is the inverse of (aʲᵢ). We now calculate:

    0 = [Xᵢ, Xⱼ] = [aᵏᵢYₖ, aˡⱼYₗ]
      = (aˡᵢ ∂_{Yₗ} aᵏⱼ − aˡⱼ ∂_{Yₗ} aᵏᵢ) Yₖ + aᵏᵢ aˡⱼ [Yₖ, Yₗ]
      = (aˡᵢ bᵐₗ ∂ₘ aᵏⱼ − aˡⱼ bᵐₗ ∂ₘ aᵏᵢ) Yₖ
      = (∂ᵢ aᵏⱼ − ∂ⱼ aᵏᵢ) Yₖ.

Since Y₁ and Y₂ are linearly independent, we conclude that:

(2.14)    ∂ᵢ aᵏⱼ − ∂ⱼ aᵏᵢ = 0.
Now, fix 1 ≤ k ≤ 2, and consider the over-determined system:

    ∂ũᵏ/∂uⁱ = aᵏᵢ,    i = 1, 2.

The integrability condition for this system is exactly (2.14), hence there is a solution in a neighborhood of u₀. Furthermore, since the Jacobian of the map ψ(u¹, u²) = (ũ¹, ũ²) is dψ = (aᵏᵢ), and det(aᵏᵢ) ≠ 0, it follows from the inverse function theorem that, perhaps on yet a smaller neighborhood, ψ is a diffeomorphism. Let φ = ψ⁻¹; then φ is a diffeomorphism on a neighborhood V₀ of ψ(u₀), and if we set X̃ = X ∘ φ, then:

    X̃ᵢ = Xⱼ ∂uʲ/∂ũⁱ = Xⱼ bʲᵢ = Yᵢ.    □


Proposition 2.11. Let X : U → R³ be a parametric surface, and let Y₁ and Y₂ be linearly independent vector fields. Then for any point u₀ ∈ U there is a neighborhood of u₀ and a reparametrization X̃ = X ∘ φ such that X̃ᵢ = fᵢ Yᵢ ∘ φ for some functions fᵢ.
Proof. By Lemma 2.10 it suffices to show that there are functions fᵢ such that f₁Y₁ and f₂Y₂ commute. Write [Y₁, Y₂] = a₁Y₁ − a₂Y₂, and compute:

    [f₁Y₁, f₂Y₂] = f₁f₂(a₁Y₁ − a₂Y₂) + f₁(∂_{Y₁}f₂)Y₂ − f₂(∂_{Y₂}f₁)Y₁.

Thus, the commutator [f₁Y₁, f₂Y₂] vanishes if and only if the following two equations are satisfied:

    ∂_{Y₂}f₁ − a₁f₁ = 0,
    ∂_{Y₁}f₂ − a₂f₂ = 0.

We can rewrite those as:

    ∂_{Y₂} log f₁ = a₁,
    ∂_{Y₁} log f₂ = a₂.

Each of these equations is a linear first-order partial differential equation, and can be solved for a positive solution in a neighborhood of u₀. □
In a neighborhood of a non-umbilical point, the principal directions define two
orthogonal unit vector fields. Thus, we obtain the following Theorem as a corollary
to the above proposition.
Theorem 2.12. Let X : U → R³ be a parametric surface, and let u₀ be a non-umbilical point. Then there is a neighborhood U₀ of u₀ and a diffeomorphism φ : Ũ₀ → U₀ such that X̃ = X ∘ φ is parametrized by lines of curvature.
If X is parametrized by lines of curvature, then the second fundamental form has the coordinate representation:

    (kij) = ( k₁g₁₁    0     )
            ( 0        k₂g₂₂ )
Definition 2.11. A curve γ on a parametric surface X is called an asymptotic
line if it has zero normal curvature, i.e., k(γ̇, γ̇) = 0.
The term asymptotic stems from the fact that those curve have their tangent γ̇
along the asymptotes of the Dupin indicatrix , the conic section kij ξ i ξ j = 1 in the
tangent space. Since the Dupin indicatrix has no asymptotes when K > 0, we see
that the Gauss curvature must be non-positive along any asymptotic line.
The following Theorem can be proved by the same method as used above to
obtain Theorem 2.12.
Theorem 2.13. Let X : U → R³ be a parametric surface, and let u₀ be a hyperbolic point. Then there is a neighborhood U₀ of u₀ and a diffeomorphism φ : Ũ₀ → U₀ such that X̃ = X ∘ φ is parametrized by asymptotic lines.

6. More Examples
A surface of revolution is a parametric surface of the form:

    X(u, v) = (f(u) cos v, f(u) sin v, g(u)),

where (f(t), g(t)) is a regular curve, called the generator, which satisfies f(t) ≠ 0. Without loss of generality, we may assume that f(t) > 0. The curves

    γᵥ(t) = (f(t) cos v, f(t) sin v, g(t)),    v fixed,

are called meridians and the curves

    βᵤ(t) = (f(u) cos t, f(u) sin t, g(u)),    u fixed,
are called parallels. Note that every meridian is a planar curve congruent to the
generator and is furthermore also a normal section, and every parallel is a circle
of radius f (u). It is not difficult to see that parallels and meridians are lines of
curvature. Indeed, let γv be a meridian, then choosing as in the paragraph following
Theorem 2.5 the correct orientation in the plane of γv , its spherical image under the
Gauss map is σv = N ◦ γv = e2 , and by the Frenet equations, σ̇v = −κe1 = −κγ̇v .
Thus, using Proposition 2.9 and the comment immediately following it, we see that γᵥ is a line of curvature with associated principal curvature κ. Since the parallels βᵤ are perpendicular to the meridians γᵥ, it follows immediately that they are also lines of curvature. We now show that this also follows from Proposition 2.9, and we furthermore obtain the associated principal curvature. A straightforward computation gives that the spherical image of βᵤ under the Gauss map is:

    τᵤ = N ∘ βᵤ = cβᵤ + B,

where B ∈ R³ is a constant vector and c ∈ R is a constant. Thus, τ̇ᵤ = cβ̇ᵤ, and βᵤ is a line of curvature with associated principal curvature −c.
The plane, the sphere, the cylinder, and the hyperboloid are all surfaces of
revolution. We discuss one more example.
The catenoid is the parametric surface of revolution obtained from the generating curve (cosh t, t):

    X(u, v) = (cosh u cos v, cosh u sin v, u).

The normal N is easily calculated:

    N(u, v) = (−cos v/cosh u, −sin v/cosh u, sinh u/cosh u).

If γᵥ(t) is a meridian, then σᵥ(t) = N(t, v) is its spherical image under the Gauss map, and differentiating with respect to t, we get the principal curvature associated with meridians: κ(u, v) = −1/cosh²u. Similarly, the principal curvature associated with parallels is 1/cosh²u. Thus, we conclude that

    H = 0,    K = −1/cosh⁴u.
Definition 2.12. A parametric surface X is minimal if it has vanishing mean
curvature H = 0.
For example, the catenoid is a minimal surface. The justification for the ter-
minology will be given in the next section. The following proposition is immediate
from (2.11).
Proposition 2.14. Let X be a minimal surface. Then X has non-positive
Gauss curvature K 6 0, and K(u) = 0 if and only if u is a planar point.
We will set out to construct a large class of minimal surfaces. We will use the
Weierstrass Representation.
Definition 2.13. A parametric surface X is conformal if the first fundamental
form satisfies g11 = g22 and g12 = 0. A parametric surface X is harmonic if
∆X = X11 + X22 = 0.
32 2. LOCAL SURFACE THEORY

Proposition 2.15. Let X : U → R3 be a parametric surface which is both


conformal and harmonic. Then X is a minimal surface.
 ij

Proof. We can write the first
 fundamental form gij , its inverse g , and
the second fundamental form kij as:
! ! !
 λ 0 ij
 λ−1 0  X11 · N X12 · N
gij = , g = , kij = .
0 λ 0 λ−1 X12 · N X22 · N
Thus, the mean curvature vanishes:
H = g ij kij = λ−1 X11 + X22 · N = 0.


In order to construct parametric surfaces which are both conformal and har-
monic,
√ we will use complex analysis in the domain U . Let ζ = u+iv where i denotes
−1, and let f (ζ) and h(ζ) be two complex analytic functions on U . Define
F1 = f 2 − h2 , F 2 = i f 2 + h2 ,

F3 = 2f h.
We have:
2 2 2
F1 + F2 + F3 = 0.
If we write Fj = ξj + iηj , then this can be written as:
3 h 3
X 2  2 i2 X
ξj − ηj + 2i ξj ηj = 0.
j=1 j=1

Now, in any simply connected


 subset of U , we can always find analytic functions
Gj = xj + iyj satisfying Gj ζ = Fj . We let X = (x1 , x2 , x3 ). Then X is conformal
and harmonic. Indeed, xj being the real parts of complex analytic  functions, are
harmonic, and hence X is harmonic. Furthermore, we have xj u = ξj , and by the
 
Cauchy-Riemann equations xj v = − yj u = −ηj . Thus, we see that
3 h
X 2 2 i2
Xu · Xu − Xv · Xv = ξj − ηj = 0,
j=1

and
3
X
Xu · Xv = − ξj ηj = 0,
j=1

and hence, X is conformal.3 Since X is real analytic, the zeroes of det Xi · Xj are


isolated. Removing the set Z of those zeroes from U , we get that X : U \ Z → R3


is a harmonic and conformal parametric surface, hence X is a minimal surface4.
If we carry out this procedure starting with the complex analytic functions
f (ζ) = 1 and h(ζ) = 1/ζ, then X is another parametrization of the catenoid,
cf. 2.6.

3Of course, Y = (y , y , y ) is also conformal, cf. 2.5.


1 2 3
4X is also said to be a branched minimal surface on U . The zeroes of det g  are called
ij
branched points.
7. SURFACE AREA 33

7. Surface Area
In this section we will give interpretations of the Gauss curvature and the mean
curvature. Both of these involve the concept of surface area. Before introducing
the definition, we first prove a proposition which will show that the definition is
reparametrization invariant.
Proposition 2.16. Let X : U → R3 be a parametric surface with first funda-
mental form gij , and V ⊂ U . Let X̃ : Ũ → R3 be a reparametrization of X, let
Ṽ = φ−1 (V ), and let g̃ij be the coordinate representation of the first fundamental


form of X̃. Then, we have:


Z q Z q
det g̃ij dũ1 dũ2 = det gij du1 du2 .
 
(2.15)
Ṽ V

Proof. By (2.4) we have


q  q
det g̃ij = det gij det φij
 

where φij = ∂ui /∂ ũj . Thus, for any open subset V ⊂ U , and Ṽ = φ−1 (V ), we have:
Z q Z q Z q
det g̃ij dũ1 dũ2 = det gij det φij dũ1 dũ2 = det gij du1 du2
   
Ṽ Ṽ V

Thus, the integral on the right-hand side of (2.15) is reparametrization invari-
ant. This justifies the following definition.

Definition 2.14. Let X : U → R3 be a parametric surface and let gij be its
first fundamental form. The surface area element of X is:
q
dA = det gij du1 du2 .


If V ⊂ U is open then the surface area of X over V is:


Z Z q
det gij du1 du2

(2.16) AX (V ) = dA =
V V

By Proposition 2.16, the surface area of X over V is reparametrization invari-


ant, and we can thus speak of the surface area of X(V ).
Definition 2.15. Let X : U → R3 be a parametric surface, and let V ⊂ U be
open. The total curvature of X over V is:
Z
KX (V ) = K dA.
V

It is easy to show, as in the proof of Proposition 2.16 that the total curvature
of X over V is invariant under reparametrization. We now introduce the signed
surface area, a variant of Definition 2.14 which allows for smooth maps Y into a
surface X, with Jacobian dY not necessarily everywhere non-singular, and which
also accounts for multiplicity.
Definition 2.16. Let X : U → R3 be a parametric surface, and let Y : U →
X(U ) be a smooth map. Define σ(u) to be 1, −1, or 0, according to whether the
pair Y1 (u), Y2 (u) has the same orientation as the pair X1 (u), X2 (u), the opposite
34 2. LOCAL SURFACE THEORY

orientation, or is linearly dependent, and let hij = Yi · Yj . If V ⊂ U is open then


the signed surface area of Y over V is:
Z q
σ det hij du1 du2

ÂY (V ) =
V
For a regular parametric surface, this definition reduces to Definition 2.14.
Next, we prove that the total curvature of a surface X over an open set U is the
area of the image of U under the Gauss map counted with multiplicity.
Theorem 2.17. Let X : U → R3 be a parametric surface, and let V ⊂ U be
open. Let N : U → S2 be the Gauss map of X, then:
KX (V ) = ÂN (V ).
Proof. We first derive a formula which is of independent interest:
(2.17) Ni = −kij Xj
To verify this formula, it suffices to check that the inner product of both sides with
the three linearly independent vectors X1 , X2 , N are equal. Since N · N = 1, we
have N · Ni = 0 = −kij Xj · N = 0, and −kij Xj · Xl = −kij gjl = −kij = −Ni · Xk .
In particular, if hij = Ni · Nj , then we find:
hij = kim Xm · kjn Xn = kim kjn gmn = kim kjn g mn .
 

In particular,
2
 det kij
det hij = 
det gij
Note also that Equation (2.17) implies
 that the pair N1 , N2 has the same orientation
as X1 , X2 if and only if det kij > 0. Furthermore, since N (u) is also the outward
normal to the unit sphere at N (u), and since X1 , X2 , N is positively oriented in
R3 , it follows that X1 (u), X2 (u) also gives the positive orientation
 on the tangent
space to the S2 at N (u). Thus, we deduce that sign det kij = σ. Consequently, in
view of Equation (2.10), we obtain:
 
q  sign det kij det hij q 
σ det hij = q  = K det gij
det gij
The proposition follows by integrating over V . 
We now turn to an interpretation of the mean curvature. Let X : U → be a
parametric surface. A variation of X is a smooth family F (u; t) : U × (−ε, ε) → R3
such that F (u; 0) = X. Note that since dF (u; 0) is non-singular, the same is true
of dF (u; t0 ) for any fixed u0 , perhaps after shrinking the interval (−ε, ε). Thus, all
the maps F (u; t0 ) for t0 close enough to 0 are parametric surfaces. The generator
of the variation is the vector field dF/dt(u; 0). The variation is compactly supported
if F (u; t) = X(u) outside a compact subset of U . The smallest such compact set is
called the support of the variation F . Clearly, if a variation is compactly supported,
then the support of its generator is compact in U . We say that a variation is
tangential if the generator is tangential; we say it is normal if the generator is
normal. Suppose now that the closure V is compact in U . We consider the area
AF (V ) of F (u; t) as a function of t. The next proposition shows that the derivative
of this function depends only on the generator, and in fact is a linear functional in
the generator.
7. SURFACE AREA 35

Proposition 2.18. Let X : U → R3 be a parametric surface, and let F (u; t) be


a variation with generator Y . Then:
Z
dAF (V )
(2.18) = g ij Xi · Yj dA
dt t=0 V
We first need the following lemma from linear algebra. We denote by S n×n
n×n
the space of n × n symmetric matrices, and by S+ the subset of those which are
positive definite.
n×n
Lemma 2.19. Let B : (a, b) → S+ be continuously differentiable. Then we
have:
0
log det B = tr B −1 B 0 .

(2.19)
Proof. First note that (2.19) follows directly if we assume that B is diagonal.
Next, suppose that B is symmetric with distinct eigenvalues. Then there is a
continuously differentiable orthogonal matrix Q such that B = Q−1 DQ, where D
is diagonal. Note that dQ−1 /dt = −Q−1 (dQ/dt)Q, hence:
B −1 B 0 = −Q−1 D−1 Q0 Q−1 DQ + Q−1 D−1 D0 Q + Q−1 Q0 ,
and in view of tr AB) = tr(BA), we obtain:
tr B −1 B 0 = tr D−1 D0 .
 

We also have that det B = det D. Thus taking into the account that (2.19) holds
for for D:
0 0
log det B = log det D = tr D−1 D0 = tr B −1 B 0 .
 

In order to prove the general case, it is more convenient to look at the equivalent
identity:
0
det B = tr (det B)B −1 B 0 .

(2.20)
Note that by Kramer’s rule, the matrix (det B)B −1 is the matrix of co-factors
of B, hence its components being determinants of minors of B, are multivariate
polynomials in the components of B. Thus, both sides of the identity (2.20) are
linear polynomials
n
X n
X
p(B 0 ; B) = pij (B)b0ij , q(B 0 ; B) = qij (B)b0ij ,
i,j=1 i,j=1

in the components b0ij 0


of B , whose coefficients pij (B) and qij (B) are themselves
multivariate polynomials in the components bij of B. Since the set of matrices
n×n
with distinct eigenvalues is an open set U ⊂ S+ , we have already proved that
0 0 0
p(B ; B) = q(B ; B) holds for all values of B , and all B ∈ U . For each such
B ∈ U the equality p(B 0 ; B) = q(B 0 ; B) for all B 0 implies that pij (B) = qij (B) for
i, j = 1, . . . , n. Since this holds for all B in an open set, we conclude that pij = qij ,
and hence p = q. 
We remark that the more general identity (2.20) in fact holds, as easily shown,
for all square matrices B. An immediate consequence of the proposition is that:
√ 0 1 √
(2.21) det B = tr B −1 B 0 det B,
2
for any continuously differentiable family of symmetric positive definite matrices
B. We are now ready to prove the proposition.
36 2. LOCAL SURFACE THEORY

Proof of Proposition 2.18. Differentiating the area (2.16) under the inte-
gral sign, and using (2.21), we get:
Z Z
dAF (V ) 1 dgij 1 dgij
q
g ij det gij du1 du2 = g ij

= dA.
dt 2 V dt 2 V dt
Since Y is smooth, we have at t = 0 that dFi /dt = (dF/dt)i = Yi , and thus
dgij
g ij = g ij Yi · Xj + Xi · Yj = 2g ij Xi · Yj .

dt
This completes the proof of the proposition. 
Since the variation of the area dAF (V )/dt is a linear functional in the generator
dF/dt of the variation, it is possible to decompose any variation into tangential and
normal components. We begin by showing that the area doesn’t change under a
tangential variation. This is simply the infinitesimal version of Proposition (2.16).
Proposition 2.20. Let X : U → R3 be a parametric surface, and let F (u; t) be
a compactly supported tangential variation. If V ⊂ U is open with V compact in
U , and the support of F contained in V , then dAF (V )/dt = 0.
Proof. Let Y be the generator of F (u; t). We will show that there is a smooth
family of diffeomorphisms φ : U × (−δ, δ) → U such that Y is also the generator of
the variation G = X ◦ φ. This proves the proposition since Proposition 2.16 gives
that AG (U ) is constant. Since Y is tangential, we can write Y = y i Xi . Consider
the initial value problem:
dv i
= y i (v), v i (0) = ui .
dt
Since the y i ’s are compactly supported, a solution v = v(u; t) exists for all t.
Defining φ(u; t) = v(u; t), then an application of the inverse function theorem shows
that φ(u; t) is a diffeomorphism for t in some small interval (−δ, δ). Finally, we see
that:
dX ◦ φ dv i
= Xi = Xi y i = Y.
dt dt

Our next theorem gives an interpretation of the mean curvature as a measure
of surface area variation under normal perturbations.
Theorem 2.21. Let X : U → R3 be a parametric surface, and let F (u; t) be a
compactly supported variation with generator Y . If V ⊂ U is open with V compact
in U , and the support of F contained in V , then
Z
dAF (V )
(2.22) = −2 (Y · N ) H dA.
dt V
Proof. By Propositions 2.18 and 2.20, it suffices to consider normal variations
with generator Y = f N . In that case, we find that Yj = fj N + f Nj , so that
g ij Xi · Yj = f g ij Xi · Nj = −f kii = −2f H. The theorem follows by substituting
into (2.18). 
Definition 2.17. A parametric surface X is area minimizing if AX (U ) 6
AX̃(U ) for any parametric surface X̃ such that X̃ = X on the boundary of U .
A parametric surface X : U → R3 is locally area minimizing if for any compactly
supported variation F (u; t), the area AF (U ) has a local minimum at t = 0.
8. BERNSTEIN’S THEOREM 37

Clearly, an area-minimizing surface is locally area-minimizing. The following


theorem is an immediate corollary of Theorem 2.21.
Theorem 2.22. A locally area minimizing surface is a minimal surface.
Note that in general a minimal surface is only a stationary point of the area
functional.

8. Bernstein’s Theorem
In this section, we prove Bernstein’s Theorem: A minimal surface which is a
graph over an entire plane must itself be a plane. We say that a surface X is a
graph over a plane Y : R2 → R3 , where Y is linear, if there is a function f : R2 → R
such that X = Y + f N where N is the unit normal of Y .
Theorem 2.23 (Bernstein’s Theorem). Let X be a minimal surface which is a
graph over an entire plane. Then X is a plane.
We may without loss of generality assume that  X is a graph over the plane
Y (u, v) = (u, v, 0), i.e. X(u, v) = u, v, f (u, v) as in example 2.1. It is then
straightforward to check that X is a minimal surface if and only if f satisfies the
non-parametric minimal surface equation:
(2.23) (1 + q 2 )pu − 2pqpv + (1 + p2 )qv = 0,
where we have used the classical notation: p = fu , q = fv . We say that a solution
of a partial differential equation defined on the whole (u, v)-plane is entire. Thus,
to prove Bernstein’s Theorem, it suffices to prove that any entire solution of (2.23)
is linear.
Proposition 2.24. Let f be an entire solution of (2.23). Then f is a linear
function.
By Exercise 2.7, if f satisfies (2.23), then p and q satisfy the following equations:
! !
∂ 1 + q2 ∂ pq
(2.24) p = p ,
∂u 1 + p2 + q 2 ∂v 1 + p2 + q 2
! !
∂ pq ∂ 1 + p2
(2.25) p = p .
∂u 1 + p2 + q 2 ∂v 1 + p2 + q 2
Since the entire plane is simply connected, Equation (2.25) implies that there exists
a function ξ satisfying:
1 + p2 pq
ξu = p , ξv = p ,
1 + p2 + q 2 1 + p2 + q 2
and Equation (2.24) implies that there exists a function η satisfying:
pq 1 + q2
ηu = p , ηv = p .
1 + p2 + q 2 1 + p2 + q 2
Furthermore, ξv = ηu , hence there is a function h so that hu = ξ, hv = η. The
Hessian of the function h is:
! !
 huu huv ξu ξv
hij = = ,
hvu hvv ηu ηv
38 2. LOCAL SURFACE THEORY

hence h satisfies the Monge-Ampère equation:



(2.26) det hij = 1.

In addition, h11 > 0, thus hij is positive definite, and we say that h is convex .
Proposition 2.24 now follows from the following result due to Nitsche.
Proposition 2.25. Let h ∈ C 2 (R2 ) be an entire convex solution of the Monge-
Ampère Equation (2.26). Then h is a quadratic function.
Proof. The proof uses the following transformation introduced by H. Lewy:
ϕ : (u, v) 7→ (ξ, η) = (u + p, v + q)
where p = hu , and q = hv . Clearly, ϕ is continuously differentiable, and its Jacobian
is: !
1+r s
dϕ = ,
s 1+t

where r = huu , s = huv , and t = hvv . Since det dϕ = 2 + r + t > 0, it follows
from the inverse function theorem that ϕ is a local diffeomorphism, i.e., each point
has a neighborhood on which ϕ is a diffeomorphism. In particular, ϕ is open.
In view of the convexity of the function h, we have, according to Exercise 2.8:
   
u2 − u1 (ξ2 − ξ1 + v2 − v1 η2 − η1
2 2    
= u2 − u1 + v2 − v1 + u2 − u1 p2 − p1 + v2 − v1 q2 − q1
2 2
> u2 − u1 + v2 − v1 ,
and therefore:
2 2 2 2
u2 − u1 + v2 − v1 6 ξ2 − ξ1 + η2 − η1 ,
i.e., ϕ is an expanding map. This implies immediately that ϕ is one-to-one. Ac-
cording to Exercise 2.9, ϕ is also onto. Thus, ϕ has an inverse (u, v) = ϕ−1 (ξ, η)
which is also a diffeomorphism. Consider now the function
f (ξ + iη) = u − p − i(v − q) = 2u − ξ + i(−2v + η),

where i = −1. In view of
! !
−1
uξ uη 1 1+t −s
dϕ = = ,
vξ vη 2+r+t −s 1+r
it is straightforward to check that f satisfies the Cauchy-Riemann equations, and
consequently f is analytic. In fact, f is an entire functions and so is f 0 . Further-
more,
(t − r) + 2is 2 4
f 0 (σ) = , |f 0 (σ)| = 1 − < 1,
2+r+t 2+r+t
and Liouville’s Theorem gives that f 0 is constant. Finally, the relations:
2
−i f 0 − f¯0 2

|1 − f 0 | |1 + f 0 |
r= 2 , s = 2 , t = 2,
1 − |f 0 | 1 − |f 0 | 1 − |f 0 |
show that r, s, t are constants. 
9. THEOREMA EGREGIUM 39

9. Theorema Egregium
In this section, we prove that the Gauss curvature can be computed in terms
of the first fundamental form and its derivatives. We then prove the Fundamental
Theorem for surfaces in R3 , analogous to Theorem 1.2 for curves, which states that a
parametric surface is uniquely determined by its first and second fundamental form.
Partial derivatives with respect to ui will be denoted by a subscript i following a
comma, unless there is no ambiguity in which case the comma may be omitted.
Proposition 2.26. Let X : U → R3 be a parametric surface. Then the follow-
ing equations hold:
(2.27) Xij = Γm
ij Xm + kij N,

where,
1 mn
Γm

(2.28) ij = g gni,j + gnj,i − gij,n ,
2
 
and gij and kij are the coordinate representations of its first and second fun-
damental form.
Proof. Clearly, Xij can be expanded in the basis X1 , X2 , N of R3 . We al-
ready saw in Equation (2.7), that the component of Xij along N is kij , hence
Equation (2.27) holds with the coefficients Γm
ij given by

Xij · Xm = Γnij gmn .


In order to derive (2.28), we differentiate gij = Xi · Xj , and substitute the above
equation to obtain:
(2.29) gij,m = Γnim gnj + Γnjm gni .
Now, permute cyclically the indices i, j, m, add the first two equations and subtract
the last one:
gij,m + gmi,j − gjm,i = 2Γnjm gni .
Multiplying by g il and dividing by 2 yields (2.28). 
5
The coefficients Γm
ij are called the Christoffel symbols of the second kind . It
is important to note that the Christofell symbols can be computed from the first
fundamental form and its first derivatives. Furthermore, they are not invariant
under reparametrization.
Theorem 2.27. Let X : U → R3 be a parametric surface. Then the following
equations hold:
Γm m n m n m mn

(2.30) ij,l − Γil,j + Γij Γnl − Γil Γnj = g kij kln − kil kjn ,
(2.31) kij,l − kil,j + Γm m
ij klm − Γil kjm = 0.

Proof. If we differentiate (2.27), we get:


Xijl = Γm m m
 
ij Xm l + kij N l = Γij,l Xm + Γij Xml + kij,l N + kij Nl .

5The Christoffel symbols of the first kind are: Γ 1 


ijm = gim,j + gjm,i − gij,m .
2
40 2. LOCAL SURFACE THEORY

Substituting Xml from (2.27) and Nl from (2.17), and decomposing into tangential
and normal components, we obtain:
Xijl = Am
ijl Xm + Bijl N,

where:
Am m n m
ijl = Γij,l + Γij Γnl − g
mn
kij kln ,
Bijl = kij,l + Γm
ij klm .

Taking note of the fact that Xijl = Xilj , we now interchange j and l and subtract
to obtain (2.30) and (2.31). 
Equation (2.30) is called the Gauss Equation, and Equation (2.31) is called
the Codazzi Equation. The Gauss Equation has the following corollary which has
been coined Theorema Egregium. It’s discovery marked the beginning of intrinsic
geometry, the geometry of the first fundamental form.
Corollary 2.28. Let X : U → R3 be a parametric surface. Then the Gauss
curvature K of X can be computed in terms of only its first fundamental form gij
and its derivatives up to second order:
1
K = g ij Γm m n m n m

ij,m − Γim,j + Γij Γnm − Γim Γnj ,
2
where Γmij are the Christoffel symbols of the first kind.

Proof. Combine (2.30) and (2.11). 


We now show, in a manner quite analogous to Theorem 1.2, that provided
they satisfy the Gauss-Codazzi Equations, the first and second fundamental form
uniquely determine the parametric surface up to rigid motion.
Theorem 2.29  (Fundamental Theorem). Let U ⊂ R2 be open and simply-
2×2

connected, let gij : U → S+ and kij : U → S 2×2 be smooth, and suppose that
they satisfy the Gauss-Codazzi Equations
 (2.30)–(2.31).
 Then there is a parametric
surface X : U → R3 such that gij and kij are its first and second fundamental
forms. Furthermore, X is unique up to rigid motion: if X̃ is another parametric
surface with the same first and second fundamental forms, then there is a rigid
motion R of R3 such that X̃ = R ◦ X.
Proof. We consider the following over-determined system of partial differen-
tial equations for X1 , X2 , N :6
(2.32) Xi,j = Γm
ij Xm + kij N,

(2.33) Ni = −kij g jm Xm ,

where Γm ij is defined in terms of gij by (2.28). The integrability conditions for
this system are:
Γm m
 
(2.34) ij Xm + kij N l = Γil Xm + kil N j

kij g jm Xm l = klj g jm Xm i .
 
(2.35)

6Here X is not to be understood as the derivative of X with respect to ui until later in the
i
proof.
EXERCISES 41

The proof of Theorem 2.27 also shows that the Gauss-Codazzi Equations (2.30)–
(2.31) imply (2.34) if Xi and N satisfy (2.32) and (2.33). We now check that (2.31)
also implies (2.35). First note that since Γm
ij is defined by (2.28), we have

1
Γm

ij gmn = gni,j + gnj,i − gij,n .
2
Interchanging n and i and adding, we get (2.29). Now, differentiate (2.33), and
taking into account that g,lij = −g ia gab,l g bj , substitute (2.29) to get:

Ni,l = −kij,l g jm Xm + kij g ja Γnal gnb + Γnbl gna g bm Xm




− kij g jm (Γaml Xa + kml g jm N = −kij,l + kin Γnjl g jm Xm + kij kml g jm N.


 

Note that the last term is symmetric in i and l so that interchanging i and l, and
subtracting, we get:
Ni,l − Nl,i = −kij,l + kil,j − Γnij kln + Γnil kjn g jm Xm


which vanishes by (2.31). Thus, it follows that (2.35) is satisfied. We conclude that
given values for X1 , X2 , N at a point u0 ∈ U there is a unique solution of (2.32)–
(2.33) in U . We can choose the initial values to that Xi · Xj = gij , N · Xi = 0, and
N · N = 1 at u0 . Using (2.32) and (2.33), it is straightforward to check that the
functions hij = Xi · Xj , pi = N · Xi and q = N · N , satisfy the differential equations:
hij,l = Γnil hnj + Γnjl hni + kil pj + kjl pi ,

pi,j = −kjl g lm hmi + Γm


ij pm + kij q,

qi = −2kij g jm pm .
However, the functions hij = gij , pi = 0 and q = 1 also satisfy these equations, as
well as the same initial conditions as hij = Xi · Xj , pi = N · Xi and q = N · N at u0 .
Thus, by the uniqueness statement mentioned above, it follows that Xi · Xj = gij ,
N · Xi = 0, and N · N = 1. Clearly, in view of (2.32) we have Xi,j = Xj,i , hence
3
there is a function X : U → R whose partial derivatives are Xi , cf. foonote 6. Since
gij is positive definite we have that X1 , X2 are linearly  independent, hence X is a
parametric surface with first fundamental form gij . Furthermore, it is easy to see
that the unit normal of X is N , and Ni · Xj = −N · Xij = −kij , hence the second
fundamental form of X is kij . This completes the proof of the existence statement.
Assume now that X̃ is another surface with the same first and second fun-
damental forms. Since X and X̃ have the same first fundamental form, it fol-
lows that there is a rigid motion R(x) = Qx + y with Q ∈ SO(n; R) such that
R X(u0 ) = X̃(u0 ), QXi (u0 ) = X̃i (u0 ), QN (u0 ) = Ñ (u0 ). Let X̂ = R ◦ X. Since
the two triples (X̃1 , X̃2 , Ñ ) and (X̂1 , X̂2 , N̂ ) both satisfy the same partial differen-
tial equations (2.32) and (2.33), it follows follows that they are equal everywhere,
and consequently X̃ = X̂ = R ◦ X. 

Exercises
Exercise 2.1. Let X : U → R3 and X̃ : Ũ → R3 be two parametric surfaces.
The angle θ between them is the angle between their unit normals: cos θ = N · Ñ .
Let γ be a regular curve which lies on both X and X̃, and suppose that the angle
42 2. LOCAL SURFACE THEORY

between X and X̃ is constant along γ. Show that γ is a line of curvature of X if


and only if it is a line of curvature of X̃.

Exercise 2.2. Let X : U → R3 be a parametric surface, and √ let γ be an


asymptotic line with curvature κ 6= 0, and torsion τ . Show that |τ | = −K

Exercise 2.3. Denote by SO(n) the set of orthogonal n × n matrices, and


by D(n) the set of n × n diagonal matrices. Let A : (a, b) → S n×n be a C k func-
tion, and suppose that A maps into the set of matrices with distinct eigenvalues.
Show that there exist C k functions Q : (a, b) → SO(n) and Λ : (a, b) → D(n) such
that Q−1 AQ = Λ. Conclude the matrix function A has C k eigenvector fields
e1 , . . . , en : (a, b) → Rn , Aej = λj ej . Give a counter-example to show that this last
conclusion can fail the eigenvalues of A are allowed to coincide.

Exercise 2.4. Let M n×n be the space of all n×n matrices, and let B : (a, b) →
n×n
M be continuously differentiable. Prove that:
0
det B = tr(B ∗ B 0 ),
where B ∗ is the matrix of co-factors of B.

Exercise 2.5. Two harmonic surfaces X, Y : U → R3 are called conjugate, if


they satisfy the Cauchy-Riemann Equations:
Xu = Yv , Xv = −Yu ,
where (u, v) denote the coordinates in U . Prove that if X is conformal then Y is
also conformal. Let X and Y be conformal conjugate minimal surfaces. Prove that
for any t:
Z = X cos t + Y sin t
is also a minimal surface. Show that all the surfaces Z above have the same first
fundamental form.

Exercise 2.6. Prove that setting f (ζ) = 1, g(ζ) = 1/ζ in the Weierstrass
representation, we get the catenoid. Find the conjugate harmonic surface of the
catenoid.

Exercise 2.7. Let U ⊂ R2 , let f : U → R be a smooth function, and let


X : U → R3 be given by (u, v, f (u, v)), where (u, v) denote the variables in U .
Show that X is a minimal surface if and only if it satisfies the non-parametric
minimal surface equation:
(1 + q 2 )pu − 2pqpv + (1 + p2 )qv = 0,
where we have used the classical notation: p = fu , q = fv . Show that if f satisfies
the equation above then the following equations are also satisfied:
! !
∂ 1 + q2 ∂ pq
p = p ,
∂u 1 + p2 + q 2 ∂v 1 + p2 + q 2
! !
∂ pq ∂ 1 + p2
p = p .
∂u 1 + p2 + q 2 ∂v 1 + p2 + q 2
EXERCISES 43

Exercise 2.8. Let f ∈ C 2 (U ) be a convex function defined on a convex open


set U , and let ∇f = (p, q) : U → R2 denote the gradient of f . Prove that for any
u1 , u2 ∈ U the following inequality holds:
 
u2 − u1 · ∇f (u2 ) − ∇f (u1 ) > 0.

Exercise 2.9. Let U ⊂ Rn be open. A map ϕ : U → Rn is expanding if


|x − y| 6 |ϕ(x) − ϕ(y)| for all x, y ∈ U . Let ϕ : U → Rn be an open expanding map.
Show that the image of the ball BR (x0 ) of radius R centered at x0 ∈ U contains
the disk BR ϕ(x0 ) of radius R centered at ϕ(x0 ). Conclude that if U = Rn , then
ϕ is onto Rn .
CHAPTER 3

Local Intrinsic Geometry of Surfaces

In this chapter, we change our point of view, and study intrinsic geometry, in
which the starting point is the first fundamental form. Thus, given a parametric
surface, we will ignore all information which cannot be recovered from the first
fundamental form and its derivatives only. In particular, we will ignore the Gauss
map and the second fundamental form. Thanks to Gauss’ Theorema Egregium, we
will still be able to take the Gauss curvature into account.

1. Riemannian Surfaces
Definition 3.1. Let U ⊂ R2 be open. A Riemannian metric on U is a smooth
function g : U → S2×2
+ . A Riemannian surface patch is an open set U equipped
with a Riemannian metric.
The tangent space of U at u ∈ U is R2 . The Riemannian metric g defines an
inner-product on each tangent space by:
g(Y, Z) = gij y i z j ,
where y i and z j are the components of Y and Z with respect to the standard
2
basis of R2 . We will write |Y |g = g(Y, Y ), and omit the subscript g when it is not
ambiguous.
Two Riemannian surface patches (U, g) and (Ũ , g̃) are isometric if there is a
diffeomorphism φ : Ũ → U such that
(3.1) g̃ij = glm φli φm
j ,

where φli = ∂ul /∂ ũi . In fact, Equation (3.1) reads:


dφ∗ g = g̃,
where dφ∗ g is the pull-back of g by the Jacobian of φ at ũ. We then say that φ is
an isometry between (U, g) and (Ũ , g̃). As before, we denote by g ij the inverse of
the matrix gij .
As in Chapter 2, we also denote the Riemannian metric:
ds2 = gij dui duj ,
and at times refer to it as a line element. The arclength of a curve γ : [a, b] → U is
then given by:
Z bq
Lγ = gij γ̇ i γ̇ j dt.
a
p
Note that the arclength is simply the integral of g(γ̇, γ̇).
45
46 3. LOCAL INTRINSIC GEOMETRY OF SURFACES


Example 3.1. Let U ⊂ R2 be open, and let δij be the identity matrix,
then (U, δ) is a Riemannian surface. The Riemannian metric δ will be called the
Euclidean metric.
Example 3.2. Let X : U → R3 be a parametric surface, and let g be the
coordinate representation of its first fundamental form, then (U, g) is a Riemannian
surface patch. We say that the metric g is induced by the parametric surface X. If
X̃ = X ◦ φ : Ũ → R3 is a reparametrization of X and g̃ the coordinate representation
of its first fundamental form, then (Ũ , g̃) is isometric to (U, g).
Example 3.3 (The Poincaré Disk). Let D = {(u, v) : u2 + v 2 < 1} be the unit
disk in R2 , and let
4
gij = δij
(1 − r2 )2

where r = u2 + v 2 is the Euclidean distance to the origin. We can write this line
element also as
du2 + dv 2
(3.2) ds2 = 4 .
(1 − u2 − v 2 )2
The Riemannian surface (D, g) is called the Poincaré Disk. Let U = {(x, y) : y > 0}
be the upper half-plane, and let
1
hij = 2 δij .
y
 
Then it is not difficult to see that D, gij and U, hij are isometric with the
isometry given by:
1 − u2 − v 2
 
2v
φ : (u, v) 7→ (x, y) = , .
(1 + u)2 + v 2 (1 + u2 ) + v 2
In fact, a good bookkeeping technique to check this type of identity is to compute
the differentials :
v(1 + u) (1 + u)2 − v 2
dx = −4 2 du + 2 2 dv
(1 + u)2 + v 2 (1 + u)2 + v 2
(1 + u)2 − v 2 v(1 + u)
dy = −2 2 du + 4 2 dv,
2
(1 + u) + v 2 (1 + u)2 + v 2
substitute into
dx2 + dy 2
,
y2
and then simplify using du dv = dv du to obtain (3.2). It is not difficult to see that
this is equivalent to checking (3.1).
Definition 3.2. Let (U, g) be a Riemannian surface. The Christoffel symbols
of the second kind of g are defined by:
1 mn
Γm

(3.3) ij = g gni,j + gnj,i − gij,n .
2
The Gauss curvature of g is defined by:
1
K = g ij Γm m n m n m

(3.4) ij,m − Γim,j + Γij Γnm − Γim Γnj .
2
2. LIE DERIVATIVE 47

If (U, g) is induced by the parametric surface X : U → R3 , then these definitions


agree with those of Section 9.

2. Lie Derivative
In this section, we study the Lie derivative. We denote the standard basis on
R2 by ∂1 , ∂2 . Let f be a smooth function on U , and let Y = y i ∂i ∈ Tu U be a vector
at u ∈ U . The directional derivative of f along Y is:
(3.5) ∂Y f = y i ∂i f = y i fi .
Since y i = ∂Y ui where (u1 , u2 ) are the coordinates on U , we see that Y = Z
follows from ∂Y = ∂Z as operators. The next proposition shows that the directional
derivative of a function is reparametrization invariant.
Proposition 3.1. Let φ : Ũ → U be a diffeomorphism, and let Ỹ be a vector
at ũ ∈ Ũ . Then for any smooth function f on U , we have:

∂dφ(Ỹ ) f ◦ φ = ∂Ỹ (f ◦ φ).

Proof. Denoting the coordinates on U by uj and the coordinates on Ũ by ũi ,


we let φji = ∂uj /∂ ũi , and we find, by the chain rule:
∂Ỹ (f ◦ φ) = ỹ i ∂i (f ◦ φ) = ỹ i (∂j f )φji = ∂dφ(Ỹ ) f ◦ φ.



i i
We define the commutator of two tangent vector fields Y = y ∂i and Z = z ∂i ,
as in Section (3), Equation (2.5):
[Y, Z] = y i ∂i z j − z i ∂i y j ∂j .

(3.6)
Note that
(3.7) ∂[Y,Z] f = ∂Y ∂Z f − ∂Z ∂Y f.
This observation together with Proposition 3.1 are now used to show that the
commutator is reparametrization invariant.
Proposition 3.2. Let Ỹ and Z̃ be vector fields on Ũ , and let φ : Ũ → U be a
diffeomorphism, then   
dφ [Ỹ , Z̃] = dφ(Ỹ ), dφ(Z̃) .
Proof. For any smooth function f on U , we have:
(3.8) ∂ [Ỹ ,Z̃] (f ◦ φ) = ∂Ỹ ∂Z̃ (f ◦ φ) − ∂Z̃ ∂Ỹ (f ◦ φ)
f = ∂
dφ [Ỹ ,Z̃]
 
= ∂Ỹ ∂dφ(Z̃) f ◦ φ − ∂Z̃ ∂dφ(Ỹ ) f ◦ φ = ∂dφ(Ỹ ) ∂dφ(Z̃) f − ∂dφ(Z̃) ∂dφ(Ỹ ) f
= ∂  f,
dφ(Ỹ ),dφ(Z̃)

and the proposition follows. 


We note for future reference that in the proofs of propositions 3.1 and 3.2, only
the smoothness of the map φ is used, and not the fact that it is a diffeomorphism.
The operator Z 7→ LY Z = [Y, Z], also called the Lie derivative, is a differential
operator, in the sense that it is linear and satisfies a Leibniz identity: LY (f Z) =
(∂Y f )Z + f LY Z. However, LY Z depends on the values of Y in a neighborhood of
a point as can be seen from the fact that it is not linear over functions in Y , but
48 3. LOCAL INTRINSIC GEOMETRY OF SURFACES

rather satisfies Lf Y Z = f LY Z − (∂Z f )Y . Hence the Lie derivative cannot be used


as an intrinsic directional derivative of a vector field Z, which should only depend
on the direction vector Y at a single point1.

3. Covariant Differentiation
Definition 3.3. Let (U, g) be a Riemannian metric, and let Z be a vector field
on U . The covariant derivative of Z along ∂i is:
∇i Z = ∂i z j + Γjik z k ∂j .

(3.9)
Let Y ∈ Tu U , the covariant derivative of Z along Y is:
∇Y Z = y i Z;i .
We write the components of ∇i Z as:
(3.10) z j ;i = z j ,i + Γjik z k ,
so that ∇Y Z = y i z j ;i ∂j . Furthermore, note that
(3.11) ∇i ∂j = Γkij ∂k .
Our first task is to show that covariant differentiation is reparametrization
invariant. However, since the metric g was used in the definition of the covari-
ant derivative, it stands to reason that it would be invariant only under those
reparametrization which preserve the metric, i.e., under isometries.
Proposition 3.3. Let φ : (Ũ , g̃) → (U, g) be an isometry. Let Ỹ ∈ Tũ Ũ , and
let Z̃ be a vector field on Ũ . Then
(3.12) ˜ Z̃) = ∇
dφ(∇ Ỹ dφ(Ỹ ) dφ(Z̃).

Proof. This proof, although tedious, is quite straightforward, and is relegated


to the exercises. 
˜ is that
Note that on the left hand-side of (3.12), the covariant derivative ∇
obtained from the metric g̃.
Our next observation, which follows almost immediately from (2.27), gives
an interpretation of the covariant derivative when the metric g is induced by a
parametric surface X.
Proposition 3.4. Let the Riemannian metric g be induced by the parametric
surface X. Then the image under dX of the covariant derivative dX(∇i Z) is the
projection of ∂i Z onto the tangent space.
Proof. Note that dX(∂i ) = Xi . Thus, if Z = z j ∂j then we find:
dX(∇i Z) = z j ;i Xj = z j ,i Xj + Γjik z k Xj = ∂i z j Xj − kij z j N,


which proves the proposition. 

We now show that covariant differentiation is in addition well-adapted to the


metric g.

1Indeed ∂ Z as defined in Chapter 2 does depend only on the value of Y at a single point
Y
and satisfies ∂f Y Z = f ∂Y Z.
3. COVARIANT DIFFERENTIATION 49

Proposition 3.5. Let (U, g) be a Riemannian surface, and let Y and Z be


vector fields on U . Then, we have
(3.13) ∂i g(Y, Z) = g(∇i Y, Z) + g(∇i Y, Z).
Proof. We first note that, as in the proof of Theorem 2.29, the definition of
the Christoffel symbols (3.3) implies (2.29):
(3.14) gij,l = Γkil gkj + Γkjl gki .
Now, setting Y = y i ∂i and Z = z i ∂i , we compute:

∂i g(Y, Z) = ∂i gjk y j z k = Γm j k m j k j k j k
ji gkm y z + Γki gmj y z + gjk y ,i z + gjk y z ,i

= gjk (y j ,i + Γjmi y m )z k + gjk y j (z k ,i + Γkmi z m ) = g(Y;i , Z) + g(Y, Z;i ).


This completes the proof of (3.13) and of the proposition. 

Definition 3.4. Let Y = y i ∂i be a vector field on the Riemannian surface


(U, g). Its divergence is the function:
div Y = ∇i y i = ∂i y i + Γiij y j .
Note that:
1 im 1 p
Γiij = g (gmi,j + gmj,i − gij,m ) = g im gim,j = ∂j log det g.
2 2
Thus, we see that:
1 p 
(3.15) div Y = √ ∂i det g y i
det g
Observe that this implies
Z Z p 
div Y dA = ∂i det g y i du1 du2 .
U U
Thus, Green’s Theorem in the plane implies the following proposition.
Proposition 3.6. Let Y be a compactly supported vector field on the Riemann-
ian surface (U, g). Then, we have:
Z
div Y dA = 0.
U

Definition 3.5. If f : U → R is a smooth function on the Riemannian surface


(U, g), its gradient ∇f is the unique vector field which satisfies g(∇f, Y ) = ∂Y f .
The Laplacian of f if the divergence of the gradient of f :
∆f = div ∇f.
It is easy to see that ∇f = g ij fj ∂j , hence
1  p 
(3.16) ∆f = √ ∂i g ij det g fj .
det g
Thus, in view of Proposition 3.6, if f is compactly supported, we have:
Z
∆f dA = 0.
U
50 3. LOCAL INTRINSIC GEOMETRY OF SURFACES

4. Geodesics
Definition 3.6. Let (U, g) be a Riemannian surface, and let γ : I → U be
a curve. A vector field along γ is a smooth function Y : I → R2 . The covariant
derivative of Y = y i ∂i along γ is the vector field:
∇γ̇ Y = ẏ i + Γijk y j γ̇ k ∂i .


Note that if Z is any extension of Y , i.e., a any vector field defined on a


neighborhood V of the image γ(I) of γ in U , then we have:
∇γ̇ Y = ∇γ̇ Z = γ̇ i Z;i .
Thus, any result proved concerning the usual covariant differentiation, in particular
Proposition 3.5 holds also for the covariant differentiation along a curve.
Definition 3.7. A vector field Y along a curve γ is said to be parallel along
γ if ∇γ̇ Y = 0.
Note that if Y and Z are parallel along γ, then g(Y, Z) is constant. This follows
from Proposition 3.5:
∂γ̇ g(Y, Z) = g(∇γ̇ Y, Z) + g(Y, ∇γ̇ Z) = 0.
Proposition 3.7. Let γ : [a, b] → U be a curve into the Riemannian surface
(U, g), let u0 ∈ U , and let Y0 ∈ Tu0 U . Then there is a unique vector field Y along
γ which is parallel along γ and satisfies Y (a) = Y0 .
Proof. The condition that Y is parallel along γ is a pair of linear first-order
ordinary differential equations:
ẏ i = −Γijk γ γ j y j .


Given initial conditions y i (a) = y0i , the existence and uniqueness of a solution on
[a, b] follows from the theory of ordinary differential equations. 
The proposition together with the comment preceding it shows that parallel
translation along a curve γ is an isometry between inner-product spaces Pγ : Ta U →
Tb U .
Definition 3.8. A curve γ is a geodesic if its tangent γ̇ is parallel along γ:
∇γ̇ γ̇ = 0.
If γ is a geodesic, then |γ̇| is constant and hence, every geodesic is parametrized
proportionally to arclength. In particular, if β = γ ◦ φ is a reparametrization of γ,
then β is not a geodesic unless φ is a linear map.
Proposition 3.8. Let (U, g) be a Riemannian surface, let u0 ∈ U and let
0 6= Y0 ∈ Tu0 U . Then there is and ε > 0, and a unique geodesic γ : (−ε, ε) → U ,
such that γ(0) = u0 , and γ̇(0) = Y0 .
Proof. We have:
∇γ̇ γ̇ = γ̈ i + Γijk γ̇ j γ̇ k ∂i .


Thus, the condition that γ is a geodesic can written as a pair of non-linear second-
order ordinary differential equations:
γ̈ i = −Γijk (γ(t))γ̇ j γ̇ k .
Given initial conditions γ i (0) = ui0 , γ̇ i (0) = y0i , there is a unique solution on defined
on a small enough interval (−ε, ε). 
4. GEODESICS 51

Definition 3.9. Let γ : [a, b] → U be a curve. We say that γ is length-


minimizing, or L-minimizing, if:
Lγ 6 Lβ
for all curves β in U such that β(a) = γ(a) and β(b) = γ(b).
Let γ : [a, b] → U be a curve. A variation of γ is a smooth family of curves
σ(t; s) : [a, b]×(−ε, ε) → I such that σ(t; 0) = γ(t) for all t ∈ [a, b]. For convenience,
we will denote derivatives with respect to t as usual by a dot, and derivatives
with respect to s by a prime. The generator of a variation σ is the vector field
Y (t) = σ 0 (t; 0) along γ. We say that σ is a fixed-endpoint variation, if σ(a; s) = γ(a),
and σ(b; s) = γ(b) for all s ∈ (−ε, ε). Note that the generator of a fixed-endpoint
variation vanishes at the end points. We say that a variation σ is normal if its
generator Y is perpendicular to γ: g(γ̇, Y ) = 0. A curve γ is locally L-minimizing
if Z bp
Lσ (s) = g(σ̇, σ̇) dt
a
has a local minimum at s = 0 for all fixed-endpoint variations σ. Clearly, an
L-minimizing curve is locally L-minimizing.
If γ is locally L-minimizing, then any reparametrization β = γ ◦ φ of γ is also
locally L-minimizing. Indeed, if σ is any fixed-endpoint variation of β, then τ (t; s) =
σ(φ−1 (t); s) is a fixed-endpoint variation of γ, and since reparametrization leaves
arclength invariant, we see that Lτ (s) = Lσ (s) which implies that Lσ also has
a local minimum at at s = 0. Thus, local minimizers of the functional L are
not necessarily parametrized proportionally to arclength. This helps clarify the
following comment: a locally length-minimizing curve is not necessarily a geodesic,
but according to the next theorem that is only because it may not be parametrized
proportionally to arclength.
Theorem 3.9. A locally length-minimizing curve has a geodesic reparametriza-
tion.
To prove this theorem, we introduce the energy functional:
1 b
Z
Eγ = g(γ̇, γ̇) dt
2 a
We may now speak of energy-minimizing and locally energy-minimizing curves.
Our first lemma shows the advantage of using the energy rather than the arclength
functional: minimizers of E are parametrized proportionally to arclength.
Lemma 3.10. A locally energy-minimizing curve is a geodesic.
Proof. Suppose that γ is a locally energy-minimizing curve. We first note
that if Y is any vector field along γ which vanishes at the endpoints, then setting
σ(t; s) = γ(t) + sY (t), we see that there is a fixed-endpoint variation of γ whose
generator is Y . Since γ is locally energy-minimizing, we have:
Z b
0 1 0
Eσ (0) = g(σ̇, σ̇) s=0 dt = 0.
a 2
We now observe that:
0 d j 0 d j 0
σ̇ j s=0 = σ s=0
= σ s=0
= ẏ j .
dt dt
52 3. LOCAL INTRINSIC GEOMETRY OF SURFACES

where Y = y i ∂i is the generator of the fixed-endpoint variation σ, and:


0 0
gij s=0 = gij,k σ k s=0 = gij,k y k .
Thus, we have:
1 0 1 0 1 0 0
g(σ̇, σ̇) s=0 = gij σ̇ i σ̇ j s=0 = gij s=0 σ̇ i σ̇ j + gij σ̇ i σ̇ j s=0
2 2 2
1
= gij,k y k γ̇ i γ̇ j + gij γ̇ i ẏ j .
2
Since Y vanishes at the endpoints, we can substitute into Eσ0 (0), and integrate by
parts the second term to get:
Z b 
0 d  1
Eσ (0) = − gij γ̇ − gik,j γ̇ γ̇ y j .
i i k
a dt 2
Since:
d 1
gij γ̇ i = gij γ̈ i + gij,k γ̇ i γ̇ k = gij γ̈ i + gij,k + gkj,i γ̇ i γ̇ k ,
 
dt 2
We now see that:
Z b  Z b
1
Eσ0 (0) = − gij γ̈ i + gmj,k + gkj,m − gmk,j γ̇ m γ̇ k y j dt = −

g(∇γ̇ γ̇, Y ) dt.
a 2 a
Since Eσ0 (0) = 0 for all vector fields Y along γ which vanish at the endpoints, we
conclude that ∇γ̇ γ̇ = 0, and γ is a geodesic. 
The Schwartz inequality implies the following inequality between the length
and energy functional for a curve γ.
Lemma 3.11. For any curve γ, we have
L2γ 6 2Eγ (b − a),
with equality if and only if γ is parametrized proportionally to arclength.
Finally, the last lemma we state to prove Theorem 3.9, exhibits the relationship
between the L and E functionals.
Lemma 3.12. A locally energy-minimizing curve is locally length-minimizing.
Furthermore, if γ is locally length-minimizing and β is a reparametrization of γ by
arclength, then β is locally energy-minimizing.
Proof. Suppose that γ is locally energy-minimizing, and let σ be a fixed-
endpoint variation of γ. For each s, let βs (t) : [a, b] → U be a reparametrization
of the curve t 7→ σ(t; s) proportionally to arclength. Let τ (t; s) = βs (t), then it is
not difficult to see, using say the theorem on continuous dependence on parameters
for ordinary differential equations, that τ is also smooth. By Lemma 3.10, γ is a
geodesic, hence by Lemma 3.11, L2γ = 2Eγ (b − a). It follows that:
L2σ (0) = L2γ = 2Eγ (b − a) = 2Eτ (0)(b − a) 6 2Eτ (s)(b − a) = L2τ (s) = L2σ (s).
Thus, γ is locally length-minimizing proving the first statement in the lemma.
Now suppose that γ is locally length-minimizing, and let β be a reparametriza-
tion of γ by arclength. Then β is also locally length-minimizing, hence for any
fixed-endpoint variation σ of β, we have:
L2β L2σ (s)
Eσ (0) = Eβ = 6 6 Eσ (s).
2(b − a) 2(b − a)
5. THE RIEMANN CURVATURE TENSOR 53

Thus, β is locally energy-minimizing. 


We note that the same lemma holds if we replace locally energy-minimizing by
energy-minimizing. The proof of Theorem 3.9 can now be easily completed with
the help of Lemmas 3.10 and 3.12.
Proof of Theorem 3.9. Let β be a reparametrization of γ by arclength. By
Lemma 3.12, β is locally energy-minimizing. By Lemma 3.10, β is a geodesic. 

5. The Riemann Curvature Tensor


Definition 3.10. Let X, Y, Z, W be vector fields on a Riemannian surface
(U, g). The Riemann curvature tensor is given by:
  
R(W, Z, X, Y ) = g ∇X , ∇Y Z − ∇[X,Y ] Z, W .
We first prove that R is indeed a tensor , i.e., it is linear over functions. Clearly,
R is linear in W , additive in each of the other three variables, and anti-symmetric
in X and Y . Thus, it suffices to prove the following lemma.
Lemma 3.13. Let X, Y, Z, W be vector fields on a Riemannian surface (U, g).
Then we have:
R(W, Z, f X, Y ) = R(W, f Z, X, Y ) = f R(W, Z, X, Y ).
Proof. We have:

∇f X ∇Y Z−∇Y ∇f X Z−∇[f X,Y ] Z = f ∇X ∇Y X −∇Y f ∇X Z −∇f [X,Y ]−(∂Y f )X Z
 
= f ∇X ∇Y Z − ∂Y f ∇X Z − f ∇Y ∇X Z − f ∇[X,Y ] Z + ∂Y f ∇X Z

= f ∇X ∇Y Z − ∇Y ∇X Z − ∇[X,Y ] Z .
The first identity follows by taking inner product with W . In order to prove the
second identity, note that:
  
∇X ∇Y f Z = ∇X ∂Y f Z + ∇X f ∇Y Z
    
= ∂X ∂Y f Z + ∂Y f ∇X Z + ∂X f ∇Y Z + f ∇X ∇Y Z.
Interchanging X and Y and subtracting we get:
    
∇X , ∇Y f Z = ∂[X,Y ] f Z + f ∇X , ∇Y Z.
On the other hand, we have also:

∇[X,Y ] f Z = ∂[X,Y ] f Z + f ∇[X,Y ] Z.
Thus, we conclude:
    
∇X , ∇Y f Z − ∇[X,Y ] f Z = f ∇X , ∇Y Z − ∇[X,Y ] Z .
The second identity now follows by taking inner product with W . 
Let
Rijkl = R(∂i , ∂j , ∂k , ∂l ),
be the components of the Riemann tensor. The previous proposition shows that if
X = xi ∂i , Y = y i ∂i , Z = z i ∂i , W = wi ∂i , then
R(W, Z, X, Y ) = wi z j xk y l Rijkl ,
that is, the value of R(W, Z, X, Y ) at a point u depends only on the values of W ,
Z, X, and Y at u.
54 3. LOCAL INTRINSIC GEOMETRY OF SURFACES

Proposition 3.14. The components Rijkl of the Riemann curvature tensor of


any metric g satisfy the following identities:
(3.17) Rijkl = −Rijlk = −Rjikl = Rklij
(3.18) Rijkl + Riljk + Riklj = 0.
Proof. We first prove (3.18). Since [∂k , ∂l ] = 0, it suffices to prove
     
(3.19) ∇k , ∇l ∂j + ∇j , ∇k ∂l + ∇l , ∇j ∂k = 0.
Note that (3.11) together with the symmetry Γm m
lj = Γjl imply that ∇l ∂j = ∇j ∂l .
Thus, we can write:  
∇k , ∇l ∂j = ∇k ∇j ∂l − ∇l ∇k ∂j .
Permuting the indices cyclically, and adding, we get (3.19). The first identity
in (3.23) is obvious from Definition 3.10. We now prove the identity:
Rijkl = −Rjikl .
Using Proposition 3.5 repeatedly, we observe that:
g(∇k ∇l ∂j , ∂i ) = ∂k g(∇l ∂j , ∂i ) − g(∇l ∂j , ∇k ∂i )

= ∂k ∂l gji − g(∂j , ∇l ∂i ) − ∂l g(∂j , ∇k ∂i ) + g(∂j , ∇l ∇k ∂i )
= ∂k ∂l gji − ∂k g(∂j , ∇l ∂i ) − ∂l g(∂j , ∇k ∂i ) + g(∂j , ∇l ∇k ∂i ).
It is easy to see that the first term, and the next two taken together, are symmetric
in k and l. Thus, interchanging k and l, and subtracting, we get:
        
Rijkl = g ∇k , ∇l ∂j , ∂i = g ∂j , ∇l , ∇k ∂i = −g ∇k , ∇l ∂i , ∂j = −Rjikl .
The last identity in (3.17) now follows from the first two and (3.18). We prove that
Bijkl = Rijkl − Rklij = 0. Note that Bijkl satisfies (3.17) as well as Bijkl = −Bklij .
Now, in view of the identities already established, we see that:
Rijkl = −Riljk − Riklj = −Rlikj − Riklj = Rljik + Rlkji − Riklj = Bljik + Rklij ,
hence Bijkl = Bljik . Using the symmetries of Bijkl , we can rewrite this identity as:
(3.20) Bijkl + Biklj = 0.
We now permute the first three indices cyclically:
(3.21) Bkijl + Bkjli = 0,
(3.22) Bjkil + Bjilk = 0,
add (3.20) to (3.21) and subtract (3.22) to get, using the symmetries of Bijkl :
Bijkl + Biklj + Biklj + Bkjli − Bkjli − Bijkl = 2Biklj = 0.
This completes the proof of the proposition. 
It follows, that all the non-zero components of the Riemann tensor are deter-
mined by R1212 :
R1212 = −R2112 = R2121 = −R1221 ,
and all other components are zero. The proposition also implies that for any vectors
X, Y, Z, W , the following identities hold:
(3.23) R(W, Z, X, Y ) = −R(W, Z, Y, Z) = −R(Z, W, X, Y ) = R(X, Y, W, Z),
(3.24) R(W, Z, X, Y ) + R(W, Y, Z, X) + R(W, X, Y, Z) = 0.
5. THE RIEMANN CURVATURE TENSOR 55

Proposition 3.15. The components Rijkl of the Riemann curvature tensor of


any metric g satisfy:

(3.25) g mj Rimkl = Γjik,l − Γjil,k + Γnik Γjnl − Γnil Γjnk .

Furthermore, we have:
R1212
(3.26) K= ,
det(g)

where K is the Gauss curvature of g.


j
Proof. Denote the right-hand side of (3.25) by Sikl . We have:

∇l ∇k ∂i = ∇k Γjik ∂j = Γjik,l + Γnik Γjnl ∂j ,


 

or equivalently:
Γjik,l + Γnik Γjnl = g jm g(∇l ∇k ∂i , ∂m ).

Interchanging k and l and subtracting we get:


j
= g jm g ∇l , ∇k ∂i , ∂m = g jm Rmilk = g jm Rimkl .
  
Sikl

According to 3.4 and (3.25), we have:


1 ik j 1
K= g Sikj = g ik g jl Rijkl .
2 2
In view of the comment following Proposition 3.14, the only non-zero terms in this
sum are:
1 11 22
g g R1212 + g 12 g 21 R1221 + g 21 g 12 R2112 + g 22 g 11 R2121 = det g −1 R1212 ,
 
K=
2
which implies (3.26) 

Corollary 3.16. The Riemann curvature tensor of any metric g on a surface


is given by:

(3.27) Rijkl = K gik gjl − gil gjk .

Proof. Denote the right-hand side of (3.27) by Sijkl , and note that it satis-
fies (3.17). Thus, the same comment which follows Proposition 3.14 applies and
the only non-zero components of Sijkl are determined by S1212 :

S1212 = −S2112 = S2121 = −S1221 .

In view of (3.27), we have R1212 = S1212 , thus it follows that Rijkl = Sijkl 

In particular, we conclude that:



(3.28) R(Z, W, X, Y ) = K g(W, X) g(Z, Y ) − g(W, Y ) g(Z, X) .
56 3. LOCAL INTRINSIC GEOMETRY OF SURFACES

6. The Second Variation of Arclength


In this section, we study the additional condition Eσ00 (0) > 0 necessary for a
minimum. This leads to the notion of Jacobi fields and conjugate points.
Proposition 3.17. Let γ : [a, b] → U be a geodesic parametrized by arclength
on the Riemannian surface (U, g), and let σ be a fixed-endpoint variation of γ with
generator Y . Then, we have:
Z b

Eσ00 (0) = ∇γ̇ Y 2 − K ◦ γ |Y |2 − g(γ̇, Y )2 dt,

(3.29)
a

where K is the Gauss curvature of g.


Before we prove this proposition, we offer a second proof of the first variation
formula:
Z b
(3.30) Eσ0 (0) = − g(∇γ̇ γ̇, Y ) dt,
a

which is more in spirit with our derivation of the second variation formula. First
note that if σ is a fixed-endpoint variation of γ with generator σ 0 = Y , and with
σ̇ = X, then [X, Y ] = 0. Here Y denotes the vector field σ 0 along σ rather than
just along γ. Indeed, since X = dσ(d/dt) and Y = dσ(d/ds), it follows, as in
Propositions 3.1 and 3.2, that for any smooth function f on U , we have
 
d d
∂[X,Y ] f = , f ◦ σ = 0.
dt ds
In view of the symmetry Γijk = Γikj , this implies:

∇Y X − ∇X Y = [X, Y ] = 0.
We can now calculate:
Z Z b Z b
1
Eσ0 (s) = ∂Y g(X, X) dt = g(∇Y X, X) dt = g(∇X Y, X) dt
2 a a
Z b Z b Z b
d
= g(Y, X) dt − g(Y, ∇X X) dt = g(Y, X)|ba − g(Y, ∇X X) dt
a dt a a

Setting s = 0, (3.30) follows.

Proof of Proposition 3.17. We compute:


1 b
Z Z b Z b
00
Eσ = ∂Y ∂Y g(X, X) dt = ∂Y g(∇Y X, X) dt = ∂Y g(∇X Y, X) dt
2 a a a
Z b

= g(∇Y ∇X Y, X) + g(∇X Y, ∇Y X) dt
a
Z b 
  
= g(∇X ∇Y Y, X) + g ∇Y , ∇X Y, X + g(∇X Y, ∇X Y ) dt
a
Z b 
d
= g(∇Y Y, X) − g(∇Y Y, ∇X X) + R(X, Y, Y, X) + g(∇X Y, ∇X Y ) dt,
a dt
6. THE SECOND VARIATION OF ARCLENGTH 57

where as above X = σ̇, and Y = σ 0 . Now, the first term integrates to g(∇Y Y, X)|ba =
0, and when we set s = 0, the second term also vanishes since ∇X X = ∇γ̇ γ̇ = 0.
Furthermore, the last term becomes g(∇γ̇ Y, ∇γ̇ Y ). Hence, we conclude:
Z b 
Eσ00 (0) = ∇γ̇ Y 2 − R(X, Y, X, Y ) dt.

(3.31)
a
The proposition now follows from (3.28). 
Thus, Eσ00 (0) can be viewed as a quadratic form in the generator Y . The
corresponding symmetric bilinear form is called the index form of γ:
Z b

I(Y, Z) = g(∇γ̇ Y, ∇γ̇ Z) − K ◦ γ g(Y, Z) − g(γ̇, Y ) g(γ̇, Z) dt.
a
It is the Hessian of the functional E, and if E has a local minimum, I is positive
semi-definite. We will also write I(Y ) = I(Y, Y ).
Definition 3.11. Let γ be a geodesic parametrized by arclength on the Rie-
mannian surface (U, g). A vector field Y along γ is called a Jacobi field , if it satisfies
the following differential equation:
∇γ̇ ∇γ̇ Y + K (Y − g(γ̇, Y )γ̇) = 0.
Two points γ(a) and γ(b) along a geodesic γ are called conjugate along γ if there
is a non-zero Jacobi field along γ which vanishes at those two points.
The Jacobi field equation is a linear system of second-order differential equa-
tions. Hence given initial data specifying the initial value and initial derivative of
Y , a unique solution exists along the entire geodesic γ.
Proposition 3.18. Let γ be a geodesic on the Riemannian surface (U, g). Then
given two vectors Z1 , Z2 ∈ Tγ(a) U , there is a unique Jacobi field Y along γ such
that Y (a) = Z1 , and ∇γ̇ Y (a) = Z2 .
In particular, any Jacobi field which is tangent to γ is a linear combination of
γ̇ and tγ̇. The significance of Jacobi fields is seen in the following two propositions.
We say that σ is a variation of γ through geodesics if the curves t 7→ σ(t; s) are
geodesics for all s.
Proposition 3.19. Let γ be a geodesic, and let σ be a variation of γ through
geodesics. Then the generator Y = σ 0 of σ is a Jacobi field.
Proof. As before, denote X = σ̇ and Y = σ 0 . We first prove the following
identity:   
∇Y , ∇X X = −K Y − g(X, Y )X .
Indeed, in the proof of Lemma 3.13, it was seen that the left-hand side above is
a tensor, i.e., is linear over functions, and hence depends only on the values of
the vector fields X and Y at one point. Fix that point. If X and Y are linearly
dependent, then both sides of the equation above are zero. Otherwise, X and Y
are linearly independent, and it suffices to check the inner product of the identity
against X and Y . Taking inner product with X, both sides are zero, and equa-
tion (3.27) implies that the inner products with Y are equal. Since ∇X X = 0, we
get:
  
0 = ∇Y ∇X X = ∇X ∇Y X + ∇Y , ∇X X = ∇X ∇X Y − K Y − g(X, Y )X .
Thus, Y is a Jacobi field. 
58 3. LOCAL INTRINSIC GEOMETRY OF SURFACES

We see that Jacobi fields are infinitesimal generators of variations through


geodesics. If there is a non-trivial fixed endpoint variation of γ through geodesics,
then the endpoints of γ are conjugate along γ. Unfortunately, the converse is not
true but nevertheless, a non-zero Jacobi field which vanishes at the endpoints can
be perceived as a non-trivial infinitesimal fixed-endpoint variation of γ through
geodesics. This makes the next proposition all the more important.
Proposition 3.20. Let γ be a geodesic, and let Y be a Jacobi field. Then, for
any vector field Z along γ, we have:
(3.32) I(Y, Z) = g(∇γ̇ Y, Z)|ba .
In particular, if either Y or Z vanishes at the endpoints, then I(Y, Z) = 0.
Proof. Multiplying the Jacobi equation by Z and integrating, we obtain:
Z b

0= g(∇γ̇ ∇γ̇ Y, Z) − K g(Y, Z) − g(γ̇, Y ) g(γ̇, Z) dt
a
Z b  
d 
= g ∇γ̇ Y, Z) − g(∇γ̇ Y, ∇γ̇ Z) − K g(Y, Z) − g(γ̇, Y ) g(γ̇, Z) dt
a dt
= g(∇γ̇ Y, Z)|ba − I(Y, Z). 
Thus, a Jacobi field which vanishes at the endpoints lies in the null space of
the index form I acting on vector fields which vanish at the endpoints.
Theorem 3.21. Let γ : [a, b] → (U, g) be a geodesic parametrized by arclength,
and suppose that there is a point γ(c) with a < c < b which is conjugate to γ(a).
Then there is a vector field Z along γ such that I(Z) < 0. Consequently, γ is not
locally-length minimizing.
Proof. Define: (
Y a6t6c
V =
0 c6t6b
and let W be a vector field supported in a small neighborhood of c which satisfies
W (c) = −∇γ̇ Y (c) 6= 0. We denote the index form of γ on [a, c] by I1 , and the index
form on [c, b] by I2 . Since V is piecewise smooth, we have, in view of (3.32):
I(V, W ) = I1 (V, W ) + I2 (V, W ) = I1 (Y, W ) = −|∇γ̇ Y (c)|2 < 0
It follows that:
I(V + εW, V + εW ) = I(V ) + 2εI(V, W ) + ε2 I(W ) = 2εI(V, W ) + ε2 I(W )
is negative if ε > 0 is small enough. Although V + εW is not smooth, there is for
any δ > 0 a smooth vector field Zδ , satisfying |Y |2 + |∇γ̇ Zδ |2 6 C uniformly in
δ > 0, which differs from V + εW only on (c − δ, c + δ). Since the contribution
of this interval to both I(V + εW, V + εW ) and I(Zδ , Zδ ) tends to zero with δ,
it follows that also I(Zδ , Zδ ) < 0 for δ > 0 small enough. Thus, γ is not locally
energy-minimizing. Since it is parametrized by arclength, if it was locally length-
minimizing, it would by Lemma 3.12 also be locally energy-minimizing. Thus, γ
cannot be locally length-minimizing. 
A partial converse is also true: the absence of conjugate points along γ guar-
antees that the index form is positive definite.
EXERCISES 59

Theorem 3.22. Let γ : [a, b] → (U, g) be a geodesic parametrized by arclength,


and suppose that no point γ(t), a < t 6 b, is conjugate to γ(a) along γ. Then the
index form I is positive definite.
Proof. Let X = σ̇, and let Y be a Jacobi field which is perpendicular to X,
and vanishes at t = a. Note that the space of such Jacobi fields is 1-dimensional,
hence Y is determined up to sign if we also require that |Ẏ (a)| = 1. Since Y is
perpendicular to X, it satisfies the equation:
∇X ∇X Y + KY = 0.
Furthermore, since no point γ(t), a < t ≤ b, is conjugate to γ(a), Y does not vanish
on (a, b], and hence the vectors X and Y span Tγ(t) U for all t ∈ (a, b]. Thus, if Z
is any vector field along γ which vanishes at the
endpoints, then we can write Z = f X + hY for some functions f and h. Note that
f (a) = f (b) = h(b) = 0 and hY (a) = 0. We then have:
I(Z, Z) = I(f X, f X) + 2I(f X, hY ) + I(hY, hY ).
Since R(X, f X, X, f X) = 0 and ∇X (f X) = ḟ X, it follows from (3.31) that:
I(f X, f X) = ∫_a^b g(ḟ X, ḟ X) dt = ∫_a^b ḟ² dt.
Furthermore,
I(f X, hY) = ∫_a^b g(ḟ X, ∇X (hY)) dt
           = g(ḟ X, hY)|_a^b − ∫_a^b g(∇X (ḟ X), hY) dt = −∫_a^b g(f̈ X, hY) dt = 0,
since hY vanishes at the endpoints and Y is perpendicular to X.
Finally, since |∇X (hY)|² = g(∇X Y, ∇X (h²Y)) + ḣ²|Y|², it follows from Proposi-
tion 3.20 that:
I(hY, hY) = ∫_a^b ḣ²|Y|² dt + I(Y, h²Y) = ∫_a^b ḣ²|Y|² dt.
Thus, we conclude that:
I(Z, Z) = ∫_a^b ( ḟ² + ḣ²|Y|² ) dt ≥ 0.
If I(Z, Z) = 0, then ḟ = 0 and ḣY = 0 on [a, b]. Since Y ≠ 0 on (a, b], we conclude
that ḣ = 0 on (a, b], and in view of h(b) = f(b) = 0, we get that Z = 0. Thus, I is
positive definite. □
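The dichotomy in Theorems 3.21 and 3.22 is easy to test numerically when the
curvature K is constant. The following minimal sketch is not part of the notes, and
the function name and discretization are ad hoc: for fields of the form z(t)E(t),
with E parallel and perpendicular to γ̇, vanishing at the endpoints, the index form
reduces to ∫_0^L (ż² − Kz²) dt, so positive definiteness is governed by the smallest
Dirichlet eigenvalue of −d²/dt² − K on (0, L), namely (π/L)² − K, which changes
sign exactly as L passes π/√K, the distance to the first conjugate point.

import numpy as np

def smallest_index_eigenvalue(L, K, n=400):
    # Smallest eigenvalue of -z'' - K z on (0, L) with z(0) = z(L) = 0,
    # discretized on n interior points by standard second-order finite differences.
    h = L / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2 - K * np.eye(n)
    return np.linalg.eigvalsh(A)[0]

K = 1.0  # constant Gauss curvature, e.g. the unit sphere
for L in (0.5 * np.pi, np.pi, 1.5 * np.pi):
    print(L, smallest_index_eigenvalue(L, K), (np.pi / L) ** 2 - K)

For L = π/2 both printed values are positive, while for L = 3π/2 both are negative,
in line with Theorems 3.22 and 3.21 respectively.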
Exercises
Exercise 3.1. Two Riemannian metrics g and g̃ on an open set U ⊂ R² are
conformal if g̃ = e^{2λ} g for some smooth function λ.
(1) Prove that a parametric surface X : U → R³ is conformal if and only if its
first fundamental form g is conformal to the Euclidean metric δ on U.
(2) Let g̃ = e^{2λ} g be conformal metrics on U, and let Γ^k_{ij} and Γ̃^k_{ij} be their
Christoffel symbols. Prove that:
Γ̃^k_{ij} = Γ^k_{ij} + δ^k_i λ_j + δ^k_j λ_i − g_{ij} g^{km} λ_m .
(3) Let g̃ and g be two conformal metrics on U, g̃ = e^{2λ} g, and let K and K̃
be their Gauss curvatures. Prove that:
K̃ = e^{−2λ} (K − ∆λ),
where ∆ denotes the Laplacian with respect to g.
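A rough symbolic sanity check of parts (2) and (3), not part of the notes, can be
run in a computer algebra system; the helper functions christoffel, gauss_curvature
and dlam below are ad hoc. The script computes Christoffel symbols and the Gauss
curvature directly from a metric, verifies the sign convention on the round unit
sphere, and then checks both formulas in the conformal-to-flat case g = δ, where
Γ^k_{ij} = 0, K = 0, and ∆ is the ordinary Laplacian.

import sympy as sp

u, v = sp.symbols('u v')
coords = (u, v)

def christoffel(g):
    # Gamma^k_ij = (1/2) g^{km} (d_i g_{jm} + d_j g_{im} - d_m g_{ij})
    ginv = g.inv()
    return [[[sp.simplify(sum(ginv[k, m] * (sp.diff(g[j, m], coords[i])
                                            + sp.diff(g[i, m], coords[j])
                                            - sp.diff(g[i, j], coords[m]))
                              for m in range(2)) / 2)
              for j in range(2)]
             for i in range(2)]
            for k in range(2)]

def gauss_curvature(g):
    # K = R_0101 / det g, with R^l_kij = d_i Gamma^l_jk - d_j Gamma^l_ik
    #                                    + Gamma^l_im Gamma^m_jk - Gamma^l_jm Gamma^m_ik
    Gam = christoffel(g)
    def Rup(l, k, i, j):
        e = sp.diff(Gam[l][j][k], coords[i]) - sp.diff(Gam[l][i][k], coords[j])
        e += sum(Gam[l][i][m] * Gam[m][j][k] - Gam[l][j][m] * Gam[m][i][k]
                 for m in range(2))
        return e
    R0101 = sum(g[0, m] * Rup(m, 1, 0, 1) for m in range(2))
    return sp.simplify(R0101 / g.det())

# check of the sign convention: the round unit sphere has K = 1
print(gauss_curvature(sp.Matrix([[1, 0], [0, sp.sin(u)**2]])))

# conformal-to-flat case: g = delta, g~ = e^{2 lambda} delta
lam = sp.Function('lambda')(u, v)
gt = sp.exp(2 * lam) * sp.eye(2)
Gam_t = christoffel(gt)

def dlam(i):
    return sp.diff(lam, coords[i])   # lambda_i

# part (2): here Gamma~^k_ij should equal delta^k_i lambda_j + delta^k_j lambda_i - delta_ij lambda_k
for k in range(2):
    for i in range(2):
        for j in range(2):
            pred = int(k == i) * dlam(j) + int(k == j) * dlam(i) - int(i == j) * dlam(k)
            print(sp.simplify(Gam_t[k][i][j] - pred))   # prints 0 eight times

# part (3): here K~ should equal -e^{-2 lambda} (Laplacian of lambda)
Kt = gauss_curvature(gt)
print(sp.simplify(Kt + sp.exp(-2 * lam) * (sp.diff(lam, u, 2) + sp.diff(lam, v, 2))))   # prints 0

For a concrete instance, λ = −log v on the upper half-plane {v > 0} gives
g̃ = v^{−2} δ, and part (3) yields K̃ = v²(0 − 1/v²) = −1, the hyperbolic metric of
constant curvature −1.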
Index

angle, 22
   between surfaces, 41
   exterior, 12
arclength, 7, 45
area minimizing, 36
asymptotic line, 30, 42

Bernstein's Theorem, 37
binormal, 8

catenoid, 31, 42
Cauchy-Riemann equations, 32, 38, 42
Christoffel symbols, 39, 46
Codazzi Equation, 40
commutator, 23, 47
conformal, 31, 59
conjugate, 42
conjugate points, 56, 57
convex, 12, 38
   strictly, 16
corner, 12
curvature
   center of, 16
   Gauss, 25
   line of, 29, 42
   mean, 25
   normal, 24
   of a curve, 8
   principal, see also principal curvature
   Riemann tensor, 53
   total, see also total curvature
curvature, line of, 28
curve
   closed, 10
   parametrized, 7
   piecewise smooth, 12
   regular, 7
   simple, 10
cylinder, 27

derivative
   covariant, 48
   directional, 23, 47
   Lie, 47
developable, 28
diffeomorphism, 19
differentials, 46
directrix, 27
distance, 22
divergence, 49
Dupin indicatrix, 30

Einstein summation convention, 20
entire, 37
Euclidean metric, 46
Euler, 24
evolute, 16
expanding map, 38, 43

Fenchel's Theorem, 14
form
   first fundamental, 21
   quadratic, 21
   second fundamental, 23
   symmetric bilinear, 21
Four Vertex Theorem, 13
Frenet frame, 8
Frenet frame equation, 8
Fundamental Theorem
   for curves in R³, 8
   for surfaces, 39, 40

Gauss curvature, 46
Gauss Equation, 40
Gauss map, 20
generator, 27
geodesic, 50
geometry
   intrinsic, 22
gradient, 43, 49
graph, 19, 37

harmonic, 31
helix, 16
Hessian, 37
hyperboloid, 28

index
   contravariant, 20
   covariant, 20
   raise, 25
index form, 57
induced metric, 46
intrinsic geometry, 45
isometry, 21, 45
isoperimetric inequality, 17

Jacobi field, 56, 57

Laplacian, 49
Leibniz, 47
length-minimizing, 51
   locally, 51
line element, 22, 45

meridian, 31
minimal surface, 31
   non-parametric, 37, 42
Monge Ampère equation, 38

Nitsche, 38
normal section, 24

orientation, 7, 19, 20
osculating paraboloid, 24

parallel, 31
plane, 25
   normal, 9
   osculating, 9
   rectifying, 9
Poincaré Disk, 46
point
   elliptic, 24
   hyperbolic, 24
   parabolic, 24
   planar, 24, 26
   umbilical, 25, 26
principal
   curvature, 25
   direction, 25
principal normal, 8
pull-back, 21

reparametrization
   of curves, 7
   of surfaces, 19
representation
   coordinate, 21
Riemannian metric, 45
Riemannian surface, 45
Rodriguez, 28
rotation number, 10, 12
Rotation Theorem, 10, 12

sphere, 26
spherical image
   of a curve, 14
   under Gauss map, 28
star-shaped, 11
striction, line of, 27
surface
   minimal, see also minimal surface
   of revolution, 30
      generator, 30
   parametric, 19
   ruled, 27
   tangent, 28
surface area, 33
   element, 33
   signed, 34

tangent plane, 19
tangent space, 19
tensor, 53
Theorema Egregium, 39, 40
torsion, 8
total curvature
   of a curve, 14
   of a surface, 33

unit normal, 20
unit tangent, 8
upper half-plane, 46

variation, 34, 51
   fixed-endpoint, 51
vector field, 20
vertex, 13

Weierstrass representation, 31, 42
width, 16