Lecture Notes on General Relativity, by Sean M. Carroll. December 1997.
5 More Geometry
With an understanding of how the laws of physics adapt to curved spacetime, it is undeniably tempting to start in on applications. However, a few extra mathematical techniques will simplify our task a great deal, so we will pause briefly to explore the geometry of manifolds some more.

When we discussed manifolds in section 2, we introduced maps between two different manifolds and how maps could be composed. We now turn to the use of such maps in carrying along tensor fields from one manifold to another. We therefore consider two manifolds $M$ and $N$, possibly of different dimension, with coordinate systems $x^\mu$ and $y^\alpha$, respectively. We imagine that we have a map $\phi : M \to N$ and a function $f : N \to \mathbf{R}$.
[Figure: the map $\phi : M \to N$ and the function $f : N \to \mathbf{R}$; the composition $f \circ \phi = \phi^* f$ maps $M$ to $\mathbf{R}$.]
It is obvious that we can compose $\phi$ with $f$ to construct a map $(f \circ \phi) : M \to \mathbf{R}$, which is simply a function on $M$. Such a construction is sufficiently useful that it gets its own name; we define the pullback of $f$ by $\phi$, denoted $\phi^* f$, by

    $\phi^* f = (f \circ \phi)$ .   (5.1)

The name makes sense, since we think of $\phi^*$ as ``pulling back'' the function $f$ from $N$ to $M$.
We can pull functions back, but we cannot push them forward. If we have a function $g : M \to \mathbf{R}$, there is no way we can compose $g$ with $\phi$ to create a function on $N$; the arrows don't fit together correctly. But recall that a vector can be thought of as a derivative operator that maps smooth functions to real numbers. This allows us to define the pushforward of a vector; if $V(p)$ is a vector at a point $p$ on $M$, we define the pushforward vector $\phi_* V$ at the point $\phi(p)$ on $N$ by giving its action on functions on $N$:

    $(\phi_* V)(f) = V(\phi^* f)$ .   (5.2)

So to push forward a vector field we say ``the action of $\phi_* V$ on any function is simply the action of $V$ on the pullback of that function.''
This is a little abstract, and it would be nice to have a more concrete description. We know that a basis for vectors on $M$ is given by the set of partial derivatives $\partial_\mu = \partial / \partial x^\mu$, and a basis on $N$ is given by the set of partial derivatives $\partial_\alpha = \partial / \partial y^\alpha$. Therefore we would like to relate the components of $V = V^\mu \partial_\mu$ to those of $(\phi_* V) = (\phi_* V)^\alpha \partial_\alpha$. We can find the sought-after relation by applying the pushed-forward vector to a test function and using the chain rule (2.3):

    $(\phi_* V)^\alpha \partial_\alpha f = V^\mu \partial_\mu (\phi^* f)$
    $\qquad = V^\mu \partial_\mu (f \circ \phi)$
    $\qquad = V^\mu \frac{\partial y^\alpha}{\partial x^\mu} \partial_\alpha f$ .   (5.3)
This simple formula makes it irresistible to think of the pushforward operation $\phi_*$ as a matrix operator, $(\phi_* V)^\alpha = (\phi_*)^\alpha{}_\mu V^\mu$, with the matrix being given by

    $(\phi_*)^\alpha{}_\mu = \frac{\partial y^\alpha}{\partial x^\mu}$ .   (5.4)

The behavior of a vector under a pushforward thus bears an unmistakable resemblance to the vector transformation law under change of coordinates. In fact it is a generalization, since when $M$ and $N$ are the same manifold the constructions are (as we shall discuss) identical; but don't be fooled, since in general $\mu$ and $\alpha$ have different allowed values, and there is no reason for the matrix $\partial y^\alpha / \partial x^\mu$ to be invertible.
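As a concrete illustration (mine, not from the notes), we can let a computer algebra system do the chain-rule bookkeeping. The sketch below takes a hypothetical map $\phi : \mathbf{R}^2 \to \mathbf{R}^3$, $(u, v) \mapsto (u, v, u^2 + v^2)$, builds the matrix $\partial y^\alpha / \partial x^\mu$, and pushes a vector forward by matrix multiplication as in (5.4):

```python
import sympy as sp

u, v = sp.symbols('u v')

# A hypothetical map phi : M = R^2 -> N = R^3 (a paraboloid embedding)
phi = sp.Matrix([u, v, u**2 + v**2])

# The pushforward matrix (phi_*)^alpha_mu = dy^alpha / dx^mu, as in (5.4)
J = phi.jacobian([u, v])

# Push forward the coordinate basis vector V = d/du, components (1, 0)
V = sp.Matrix([1, 0])
pushforward = J * V       # components of (phi_* V)^alpha on N

print(pushforward.T)      # Matrix([[1, 0, 2*u]])
```

Note that the matrix is 3-by-2, not square, echoing the point that there is no reason for it to be invertible.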
It is a rewarding exercise to convince yourself that, although you can push vectors forward from $M$ to $N$ (given a map $\phi : M \to N$), you cannot in general pull them back; just keep trying to invent an appropriate construction until the futility of the attempt becomes clear. Since one-forms are dual to vectors, you should not be surprised to hear that one-forms can be pulled back (but not in general pushed forward). To do this, remember that one-forms are linear maps from vectors to the real numbers. The pullback $\phi^* \omega$ of a one-form $\omega$ on $N$ can therefore be defined by its action on a vector $V$ on $M$, by equating it with the action of $\omega$ itself on the pushforward of $V$:

    $(\phi^* \omega)(V) = \omega(\phi_* V)$ .   (5.5)
Once again, there is a simple matrix description of the pullback operator on forms, $(\phi^* \omega)_\mu = (\phi^*)_\mu{}^\alpha \omega_\alpha$, which we can derive using the chain rule. It is given by

    $(\phi^*)_\mu{}^\alpha = \frac{\partial y^\alpha}{\partial x^\mu}$ .   (5.6)

That is, it is the same matrix as the pushforward (5.4), but of course a different index is contracted when the matrix acts to pull back one-forms.
There is a way of thinking about why pullbacks and pushforwards work on some objects but not others, which may or may not be helpful. If we denote the set of smooth functions on $M$ by $\mathcal{F}(M)$, then a vector $V(p)$ at a point $p$ on $M$ (i.e., an element of the tangent space $T_p M$) can be thought of as an operator from $\mathcal{F}(M)$ to $\mathbf{R}$. But we already know that the pullback operator on functions maps $\mathcal{F}(N)$ to $\mathcal{F}(M)$ (just as $\phi$ itself maps $M$ to $N$, but in the opposite direction). Therefore we can define the pushforward $\phi_*$ acting on vectors simply by composing maps, as we first defined the pullback of functions:

[Figure: the pushforward as composition, $\phi_*(V(p)) = V(p) \circ \phi^* : \mathcal{F}(N) \to \mathbf{R}$.]
Similarly, if $T_q N$ is the tangent space at a point $q$ on $N$, then a one-form $\omega$ at $q$ (i.e., an element of the cotangent space $T^*_q N$) can be thought of as an operator from $T_q N$ to $\mathbf{R}$. Since the pushforward $\phi_*$ maps $T_p M$ to $T_{\phi(p)} N$, the pullback $\phi^*$ of a one-form can also be thought of as mere composition of maps:

[Figure: the pullback as composition, $\phi^*(\omega) = \omega \circ \phi_* : T_p M \to \mathbf{R}$.]

If this is not helpful, don't worry about it. But do keep straight what exists and what doesn't; the actual concepts are simple, it's just remembering which map goes what way that leads to confusion.
You will recall further that a $(0, l)$ tensor (one with $l$ lower indices and no upper ones) is a linear map from the direct product of $l$ vectors to $\mathbf{R}$. We can therefore pull back not only one-forms, but tensors with an arbitrary number of lower indices. The definition is simply the action of the original tensor on the pushed-forward vectors:

    $(\phi^* T)(V^{(1)}, V^{(2)}, \ldots, V^{(l)}) = T(\phi_* V^{(1)}, \phi_* V^{(2)}, \ldots, \phi_* V^{(l)})$ ,   (5.7)
where $T_{\alpha_1 \cdots \alpha_l}$ is a $(0, l)$ tensor on $N$. We can similarly push forward any $(k, 0)$ tensor $S^{\mu_1 \cdots \mu_k}$ by acting it on pulled-back one-forms:

    $(\phi_* S)(\omega^{(1)}, \omega^{(2)}, \ldots, \omega^{(k)}) = S(\phi^* \omega^{(1)}, \phi^* \omega^{(2)}, \ldots, \phi^* \omega^{(k)})$ .   (5.8)
Fortunately, the matrix representations of the pushforward (5.4) and pullback (5.6) extend to the higher-rank tensors simply by assigning one matrix to each index; thus, for the pullback of a $(0, l)$ tensor, we have

    $(\phi^* T)_{\mu_1 \cdots \mu_l} = \frac{\partial y^{\alpha_1}}{\partial x^{\mu_1}} \cdots \frac{\partial y^{\alpha_l}}{\partial x^{\mu_l}} T_{\alpha_1 \cdots \alpha_l}$ ,   (5.9)

while for the pushforward of a $(k, 0)$ tensor we have

    $(\phi_* S)^{\alpha_1 \cdots \alpha_k} = \frac{\partial y^{\alpha_1}}{\partial x^{\mu_1}} \cdots \frac{\partial y^{\alpha_k}}{\partial x^{\mu_k}} S^{\mu_1 \cdots \mu_k}$ .   (5.10)
Our complete picture is therefore:

[Figure: $(k, 0)$ tensors are pushed forward from $M$ to $N$ by $\phi_*$, while $(0, l)$ tensors are pulled back from $N$ to $M$ by $\phi^*$.]

Note that tensors with both upper and lower indices can generally be neither pushed forward nor pulled back.
This machinery becomes somewhat less imposing once we see it at work in a simple example. One common occurrence of a map between two manifolds is when $M$ is actually a submanifold of $N$; then there is an obvious map from $M$ to $N$ which just takes an element of $M$ to the same element of $N$. Consider our usual example, the two-sphere embedded in $\mathbf{R}^3$, as the locus of points a unit distance from the origin. If we put coordinates $x^\mu = (\theta, \phi)$ on $M = S^2$ and $y^\alpha = (x, y, z)$ on $N = \mathbf{R}^3$, the map $\phi : M \to N$ is given by

    $\phi(\theta, \phi) = (\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)$ .   (5.11)
In the past we have considered the metric $ds^2 = dx^2 + dy^2 + dz^2$ on $\mathbf{R}^3$, and said that it ``induces'' a metric $d\theta^2 + \sin^2\theta \, d\phi^2$ on $S^2$, just by substituting (5.11) into this flat metric on $\mathbf{R}^3$. We didn't really justify such a statement at the time, but now we can do so. (Of course it would be easier if we worked in spherical coordinates on $\mathbf{R}^3$, but doing it the hard way is more illustrative.) The matrix of partial derivatives is given by

    $\frac{\partial y^\alpha}{\partial x^\mu} = \begin{pmatrix} \cos\theta \cos\phi & \cos\theta \sin\phi & -\sin\theta \\ -\sin\theta \sin\phi & \sin\theta \cos\phi & 0 \end{pmatrix}$ .   (5.12)
The metric on $S^2$ is obtained by simply pulling back the metric from $\mathbf{R}^3$,

    $(\phi^* g)_{\mu\nu} = \frac{\partial y^\alpha}{\partial x^\mu} \frac{\partial y^\beta}{\partial x^\nu} g_{\alpha\beta} = \begin{pmatrix} 1 & 0 \\ 0 & \sin^2\theta \end{pmatrix}$ ,   (5.13)

as you can easily check. Once again, the answer is the same as you would get by naive substitution, but now we know why.
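The ``easily check'' step can also be delegated to a computer algebra system. The following sketch (mine, not part of the notes) builds the Jacobian (5.12) of the embedding (5.11) and contracts it with the flat metric as in (5.13):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

# The embedding (5.11) of S^2 into R^3
y = sp.Matrix([sp.sin(theta)*sp.cos(phi),
               sp.sin(theta)*sp.sin(phi),
               sp.cos(theta)])

# The matrix of partial derivatives dy^alpha / dx^mu, as in (5.12)
J = y.jacobian([theta, phi])

# Pull back the flat metric g_ab = delta_ab on R^3, as in (5.13):
# (phi^* g)_{mu nu} = (dy^a/dx^mu)(dy^b/dx^nu) delta_ab
g_flat = sp.eye(3)
g_sphere = sp.simplify(J.T * g_flat * J)

print(g_sphere)   # Matrix([[1, 0], [0, sin(theta)**2]])
```

The result is the round metric $d\theta^2 + \sin^2\theta \, d\phi^2$, recovering the naive substitution.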
We have been careful to emphasize that a map $\phi : M \to N$ can be used to push certain things forward and pull other things back. The reason why it generally doesn't work both ways can be traced to the fact that $\phi$ might not be invertible. If $\phi$ is invertible (and both $\phi$ and $\phi^{-1}$ are smooth, which we always implicitly assume), then it defines a diffeomorphism between $M$ and $N$. In this case $M$ and $N$ are the same abstract manifold. The beauty of diffeomorphisms is that we can use both $\phi$ and $\phi^{-1}$ to move tensors from $M$ to $N$; this will allow us to define the pushforward and pullback of arbitrary tensors. Specifically, for a $(k, l)$ tensor field $T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}$ on $M$, we define the pushforward by
    $(\phi_* T)(\omega^{(1)}, \ldots, \omega^{(k)}, V^{(1)}, \ldots, V^{(l)}) = T(\phi^* \omega^{(1)}, \ldots, \phi^* \omega^{(k)}, [\phi^{-1}]_* V^{(1)}, \ldots, [\phi^{-1}]_* V^{(l)})$ ,   (5.14)

where the $\omega^{(i)}$'s are one-forms on $N$ and the $V^{(i)}$'s are vectors on $N$. In components this becomes
    $(\phi_* T)^{\alpha_1 \cdots \alpha_k}{}_{\beta_1 \cdots \beta_l} = \frac{\partial y^{\alpha_1}}{\partial x^{\mu_1}} \cdots \frac{\partial y^{\alpha_k}}{\partial x^{\mu_k}} \frac{\partial x^{\nu_1}}{\partial y^{\beta_1}} \cdots \frac{\partial x^{\nu_l}}{\partial y^{\beta_l}} T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}$ .   (5.15)

The appearance of the inverse matrix $\partial x^\nu / \partial y^\beta$ is legitimate because $\phi$ is invertible. Note that we could also define the pullback in the obvious way, but there is no need to write separate equations because the pullback $\phi^*$ is the same as the pushforward via the inverse map, $[\phi^{-1}]_*$.
We are now in a position to explain the relationship between diffeomorphisms and coordinate transformations. The relationship is that they are two different ways of doing precisely the same thing. If you like, diffeomorphisms are ``active coordinate transformations'', while traditional coordinate transformations are ``passive''. Consider an $n$-dimensional manifold $M$ with coordinate functions $x^\mu : M \to \mathbf{R}^n$. To change coordinates we can either simply introduce new functions $y^\mu : M \to \mathbf{R}^n$ (``keep the manifold fixed, change the coordinate maps''), or we could just as well introduce a diffeomorphism $\phi : M \to M$, after which the coordinates would just be the pullbacks $(\phi^* x)^\mu : M \to \mathbf{R}^n$ (``move the points on the manifold, and then evaluate the coordinates of the new points''). In this sense, (5.15) really is the tensor transformation law, just thought of from a different point of view.
[Figure: changing coordinates on $M$ either by new coordinate functions $y^\mu$ or by pulling back $x^\mu$ along a diffeomorphism $\phi : M \to M$, both landing in $\mathbf{R}^n$.]
Since a diffeomorphism allows us to pull back and push forward arbitrary tensors, it provides another way of comparing tensors at different points on a manifold. Given a diffeomorphism $\phi : M \to M$ and a tensor field $T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}(x)$, we can form the difference between the value of the tensor at some point $p$ and $\phi^*[T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}(\phi(p))]$, its value at $\phi(p)$ pulled back to $p$. This suggests that we could define another kind of derivative operator on tensor fields, one which categorizes the rate of change of the tensor as it changes under the diffeomorphism. For that, however, a single discrete diffeomorphism is insufficient; we require a one-parameter family of diffeomorphisms, $\phi_t$. This family can be thought of as a smooth map $\mathbf{R} \times M \to M$, such that for each $t \in \mathbf{R}$, $\phi_t$ is a diffeomorphism and $\phi_s \circ \phi_t = \phi_{s+t}$. Note that this last condition implies that $\phi_0$ is the identity map.
One-parameter families of diffeomorphisms can be thought of as arising from vector fields (and vice-versa). If we consider what happens to the point $p$ under the entire family $\phi_t$, it is clear that it describes a curve in $M$; since the same thing will be true of every point on $M$, these curves fill the manifold (although there can be degeneracies where the diffeomorphisms have fixed points). We can define a vector field $V^\mu(x)$ to be the set of tangent vectors to each of these curves at every point, evaluated at $t = 0$. An example on $S^2$ is provided by the diffeomorphism $\phi_t(\theta, \phi) = (\theta, \phi + t)$.
We can reverse the construction to define a one-parameter family of diffeomorphisms from any vector field. Given a vector field $V^\mu(x)$, we define the integral curves of the vector field to be those curves $x^\mu(t)$ which solve

    $\frac{dx^\mu}{dt} = V^\mu$ .   (5.16)
Note that this familiar-looking equation is now to be interpreted in the opposite sense from our usual way: we are given the vectors, from which we define the curves. Solutions to (5.16) are guaranteed to exist as long as we don't do anything silly like run into the edge of our manifold; any standard differential geometry text will have the proof, which amounts to finding a clever coordinate system in which the problem reduces to the fundamental theorem of ordinary differential equations. Our diffeomorphisms $\phi_t$ represent ``flow down the integral curves,'' and the associated vector field is referred to as the generator of the diffeomorphism. (Integral curves are used all the time in elementary physics, just not given the name. The ``lines of magnetic flux'' traced out by iron filings in the presence of a magnet are simply the integral curves of the magnetic field vector $\mathbf{B}$.)
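As a small check of this correspondence, consider the rotation flow on $\mathbf{R}^2$, $\phi_t(x, y) = (x\cos t - y\sin t, x\sin t + y\cos t)$ (my example, not the notes'). The sketch below verifies symbolically that this family obeys the group law $\phi_s \circ \phi_t = \phi_{s+t}$ and that its generator, the tangent to the integral curve at $t = 0$, is the vector field $V^\mu = (-y, x)$:

```python
import sympy as sp

t, s, x0, y0 = sp.symbols('t s x0 y0')

def flow(tt, x, y):
    # Rotation by angle tt: a one-parameter family of diffeomorphisms of R^2
    return (x*sp.cos(tt) - y*sp.sin(tt), x*sp.sin(tt) + y*sp.cos(tt))

# Group law: phi_s composed with phi_t equals phi_{s+t}
xt, yt = flow(t, x0, y0)
xst, yst = flow(s, xt, yt)
assert sp.simplify(xst - flow(s + t, x0, y0)[0]) == 0
assert sp.simplify(yst - flow(s + t, x0, y0)[1]) == 0

# Generator: the tangent vector to the integral curve at t = 0, as in (5.16)
V = sp.Matrix([sp.diff(xt, t), sp.diff(yt, t)]).subs(t, 0)
print(V.T)   # Matrix([[-y0, x0]]), i.e. V^mu = (-y, x)
```

This is the same rotation generator that will reappear below as a Killing vector of the flat plane.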
Given a vector field $V^\mu(x)$, then, we have a family of diffeomorphisms parameterized by $t$, and we can ask how fast a tensor changes as we travel down the integral curves. For each $t$ we can define this change as

    $\Delta_t T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}(p) = \phi_t^*[T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}(\phi_t(p))] - T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}(p)$ .   (5.17)

Note that both terms on the right hand side are tensors at $p$.
[Figure: the tensor $T(\phi_t(p))$ at the point $\phi_t(p)$ along the integral curve $x^\mu(t)$ is pulled back by $\phi_t^*$ and compared with $T(p)$ at $p$.]
We then define the Lie derivative of the tensor along the vector field as

    $\mathcal{L}_V T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} = \lim_{t \to 0} \left( \frac{\Delta_t T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}}{t} \right)$ .   (5.18)
The Lie derivative is a map from $(k, l)$ tensor fields to $(k, l)$ tensor fields, which is manifestly independent of coordinates. Since the definition essentially amounts to the conventional definition of an ordinary derivative applied to the component functions of the tensor, it should be clear that it is linear,

    $\mathcal{L}_V (aT + bS) = a \mathcal{L}_V T + b \mathcal{L}_V S$ ,   (5.19)

and obeys the Leibniz rule,

    $\mathcal{L}_V (T \otimes S) = (\mathcal{L}_V T) \otimes S + T \otimes (\mathcal{L}_V S)$ ,   (5.20)

where $S$ and $T$ are tensors and $a$ and $b$ are constants. The Lie derivative is in fact a more primitive notion than the covariant derivative, since it does not require specification of a connection (although it does require a vector field, of course). A moment's reflection shows that it reduces to the ordinary derivative on functions,

    $\mathcal{L}_V f = V(f) = V^\mu \partial_\mu f$ .   (5.21)
To discuss the action of the Lie derivative on tensors in terms of other operations we know, it is convenient to choose a coordinate system adapted to our problem. Specifically, we will work in coordinates $x^\mu$ for which $x^1$ is the parameter along the integral curves (and the other coordinates are chosen any way we like). Then the vector field takes the form $V = \partial / \partial x^1$; that is, it has components $V^\mu = (1, 0, 0, \ldots, 0)$. The magic of this coordinate system is that a diffeomorphism by $t$ amounts to a coordinate transformation from $x^\mu$ to $y^\mu = (x^1 + t, x^2, \ldots, x^n)$. Thus, from (5.6) the pullback matrix is simply

    $(\phi_t^*)_\mu{}^\nu = \delta_\mu^\nu$ ,   (5.22)
and the components of the tensor pulled back from $\phi_t(p)$ to $p$ are simply

    $\phi_t^*[T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}(\phi_t(p))] = T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}(x^1 + t, x^2, \ldots, x^n)$ .   (5.23)
In this coordinate system, then, the Lie derivative becomes

    $\mathcal{L}_V T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} = \frac{\partial}{\partial x^1} T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l}$ ,   (5.24)

and specifically the derivative of a vector field $U^\mu(x)$ is

    $\mathcal{L}_V U^\mu = \frac{\partial U^\mu}{\partial x^1}$ .   (5.25)
Although this expression is clearly not covariant, we know that the commutator $[V, U]$ is a well-defined tensor, and in this coordinate system

    $[V, U]^\mu = V^\nu \partial_\nu U^\mu - U^\nu \partial_\nu V^\mu = \frac{\partial U^\mu}{\partial x^1}$ .   (5.26)

Therefore the Lie derivative of $U$ with respect to $V$ has the same components in this coordinate system as the commutator of $V$ and $U$; but since both are vectors, they must be equal in any coordinate system:

    $\mathcal{L}_V U^\mu = [V, U]^\mu$ .   (5.27)

As an immediate consequence, we have $\mathcal{L}_V W = -\mathcal{L}_W V$. It is because of (5.27) that the commutator is sometimes called the ``Lie bracket.''
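Equation (5.27) can be exercised directly in components. The sketch below (an illustration of mine, using the rotation and translation fields on $\mathbf{R}^2$) computes $[V, U]^\mu = V^\nu \partial_\nu U^\mu - U^\nu \partial_\nu V^\mu$ and checks the antisymmetry behind $\mathcal{L}_V W = -\mathcal{L}_W V$:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

def commutator(V, U):
    # [V, U]^mu = V^nu d_nu U^mu - U^nu d_nu V^mu, as in (5.26)
    return sp.Matrix([
        sum(V[n]*sp.diff(U[m], coords[n]) - U[n]*sp.diff(V[m], coords[n])
            for n in range(len(coords)))
        for m in range(len(coords))])

V = sp.Matrix([-y, x])   # generator of rotations
U = sp.Matrix([1, 0])    # generator of x-translations

print(commutator(V, U).T)   # Matrix([[0, -1]]): minus the y-translation
assert commutator(V, U) == -commutator(U, V)   # Lie bracket antisymmetry
```

Translating in $x$ and then comparing with the rotated frame picks up a $y$-translation, which is why the bracket of these two symmetry generators is nonzero.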
To derive the action of $\mathcal{L}_V$ on a one-form $\omega_\mu$, begin by considering the action on the scalar $\omega_\mu U^\mu$ for an arbitrary vector field $U^\mu$. First use the fact that the Lie derivative with respect to a vector field reduces to the action of the vector itself when applied to a scalar:

    $\mathcal{L}_V (\omega_\mu U^\mu) = V(\omega_\mu U^\mu)$
    $\qquad = V^\nu \partial_\nu (\omega_\mu U^\mu)$
    $\qquad = V^\nu (\partial_\nu \omega_\mu) U^\mu + V^\nu \omega_\mu (\partial_\nu U^\mu)$ .   (5.28)

Then use the Leibniz rule on the original scalar:

    $\mathcal{L}_V (\omega_\mu U^\mu) = (\mathcal{L}_V \omega)_\mu U^\mu + \omega_\mu (\mathcal{L}_V U)^\mu$
    $\qquad = (\mathcal{L}_V \omega)_\mu U^\mu + \omega_\mu V^\nu \partial_\nu U^\mu - \omega_\mu U^\nu \partial_\nu V^\mu$ .   (5.29)

Setting these expressions equal to each other and requiring that equality hold for arbitrary $U^\mu$, we see that

    $(\mathcal{L}_V \omega)_\mu = V^\nu \partial_\nu \omega_\mu + (\partial_\mu V^\nu) \omega_\nu$ ,   (5.30)

which (like the definition of the commutator) is completely covariant, although not manifestly so.
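Formula (5.30) can be cross-checked symbolically: for arbitrary component functions on $\mathbf{R}^2$, the scalar identity $V(\omega_\mu U^\mu) = (\mathcal{L}_V \omega)_\mu U^\mu + \omega_\mu [V, U]^\mu$ that drove the derivation should hold identically. The specific fields below are illustrative choices of mine:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
n = len(coords)

def lie_form(V, w):
    # (L_V w)_mu = V^nu d_nu w_mu + (d_mu V^nu) w_nu, as in (5.30)
    return sp.Matrix([
        sum(V[nu]*sp.diff(w[mu], coords[nu]) + sp.diff(V[nu], coords[mu])*w[nu]
            for nu in range(n))
        for mu in range(n)])

def commutator(V, U):
    # [V, U]^mu = V^nu d_nu U^mu - U^nu d_nu V^mu, i.e. (L_V U)^mu by (5.27)
    return sp.Matrix([
        sum(V[nu]*sp.diff(U[mu], coords[nu]) - U[nu]*sp.diff(V[mu], coords[nu])
            for nu in range(n))
        for mu in range(n)])

# Illustrative fields: a vector V, a test vector U, and a one-form w
V = sp.Matrix([-y, x])
U = sp.Matrix([x, y**2])
w = sp.Matrix([x*y, x])

scalar = (w.T * U)[0]                                  # the scalar w_mu U^mu
lhs = sum(V[nu]*sp.diff(scalar, coords[nu]) for nu in range(n))
rhs = (lie_form(V, w).T * U)[0] + (w.T * commutator(V, U))[0]

print(sp.simplify(lhs - rhs))   # 0
```

The difference vanishes identically, which is exactly the statement that (5.30) is forced by the Leibniz rule.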
By a similar procedure we can define the Lie derivative of an arbitrary tensor field. The answer can be written

    $\mathcal{L}_V T^{\mu_1 \mu_2 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} = V^\sigma \partial_\sigma T^{\mu_1 \mu_2 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} - (\partial_\lambda V^{\mu_1}) T^{\lambda \mu_2 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} - (\partial_\lambda V^{\mu_2}) T^{\mu_1 \lambda \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} - \cdots$
    $\qquad + (\partial_{\nu_1} V^\lambda) T^{\mu_1 \cdots \mu_k}{}_{\lambda \nu_2 \cdots \nu_l} + (\partial_{\nu_2} V^\lambda) T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \lambda \cdots \nu_l} + \cdots$ .   (5.31)
Once again, this expression is covariant, despite appearances. It would undoubtedly be comforting, however, to have an equivalent expression that looked manifestly tensorial. In fact it turns out that we can write

    $\mathcal{L}_V T^{\mu_1 \mu_2 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} = V^\sigma \nabla_\sigma T^{\mu_1 \mu_2 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} - (\nabla_\lambda V^{\mu_1}) T^{\lambda \mu_2 \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} - (\nabla_\lambda V^{\mu_2}) T^{\mu_1 \lambda \cdots \mu_k}{}_{\nu_1 \cdots \nu_l} - \cdots$
    $\qquad + (\nabla_{\nu_1} V^\lambda) T^{\mu_1 \cdots \mu_k}{}_{\lambda \nu_2 \cdots \nu_l} + (\nabla_{\nu_2} V^\lambda) T^{\mu_1 \cdots \mu_k}{}_{\nu_1 \lambda \cdots \nu_l} + \cdots$ ,   (5.32)
where $\nabla_\sigma$ represents any symmetric (torsion-free) covariant derivative (including, of course, one derived from a metric). You can check that all of the terms which would involve connection coefficients if we were to expand (5.32) would cancel, leaving only (5.31). Both versions of the formula for a Lie derivative are useful at different times. A particularly useful formula is for the Lie derivative of the metric:

    $\mathcal{L}_V g_{\mu\nu} = V^\sigma \nabla_\sigma g_{\mu\nu} + (\nabla_\mu V^\lambda) g_{\lambda\nu} + (\nabla_\nu V^\lambda) g_{\mu\lambda} = \nabla_\mu V_\nu + \nabla_\nu V_\mu = 2 \nabla_{(\mu} V_{\nu)}$ ,   (5.33)

where $\nabla_\mu$ is the covariant derivative derived from $g_{\mu\nu}$.
Let's put some of these ideas into the context of general relativity. You will often hear it proclaimed that GR is a ``diffeomorphism invariant'' theory. What this means is that, if the universe is represented by a manifold $M$ with metric $g_{\mu\nu}$ and matter fields $\psi$, and $\phi : M \to M$ is a diffeomorphism, then the sets $(M, g_{\mu\nu}, \psi)$ and $(M, \phi^* g_{\mu\nu}, \phi^* \psi)$ represent the same physical situation. Since diffeomorphisms are just active coordinate transformations, this is a highbrow way of saying that the theory is coordinate invariant. Although such a statement is true, it is a source of great misunderstanding, for the simple fact that it conveys very little information. Any semi-respectable theory of physics is coordinate invariant, including those based on special relativity or Newtonian mechanics; GR is not unique in this regard. When people say that GR is diffeomorphism invariant, more likely than not they have one of two (closely related) concepts in mind: the theory is free of ``prior geometry'', and there is no preferred coordinate system for spacetime. The first of these stems from the fact that the metric is a dynamical variable, and along with it the connection and volume element and so forth. Nothing is given to us ahead of time, unlike in classical mechanics or SR. As a consequence, there is no way to simplify life by sticking to a specific coordinate system adapted to some absolute elements of the geometry. This state of affairs forces us to be very careful; it is possible that two purportedly distinct configurations (of matter and metric) in GR are actually ``the same'', related by a diffeomorphism. In a path integral approach to quantum gravity, where we would like to sum over all possible configurations, special care must be taken not to overcount by allowing physically indistinguishable configurations to contribute more than once. In SR or Newtonian mechanics, meanwhile, the existence of a preferred set of coordinates saves us from such ambiguities. The fact that GR has no preferred coordinate system is often garbled into the statement that it is coordinate invariant (or ``generally covariant''); both things are true, but one has more content than the other.
On the other hand, the fact of diffeomorphism invariance can be put to good use. Recall that the complete action for gravity coupled to a set of matter fields $\psi^i$ is given by a sum of the Hilbert action for GR plus the matter action,

    $S = \frac{1}{8\pi G} S_H[g_{\mu\nu}] + S_M[g_{\mu\nu}, \psi^i]$ .   (5.34)
The Hilbert action $S_H$ is diffeomorphism invariant when considered in isolation, so the matter action $S_M$ must also be if the action as a whole is to be invariant. We can write the variation in $S_M$ under a diffeomorphism as

    $\delta S_M = \int d^n x \, \frac{\delta S_M}{\delta g_{\mu\nu}} \, \delta g_{\mu\nu} + \int d^n x \, \frac{\delta S_M}{\delta \psi^i} \, \delta \psi^i$ .   (5.35)
We are not considering arbitrary variations of the fields, only those which result from a diffeomorphism. Nevertheless, the matter equations of motion tell us that the variation of $S_M$ with respect to $\psi^i$ will vanish for any variation (since the gravitational part of the action doesn't involve the matter fields). Hence, for a diffeomorphism invariant theory the first term on the right hand side of (5.35) must vanish. If the diffeomorphism is generated by a vector field $V^\mu(x)$, the infinitesimal change in the metric is simply given by its Lie derivative along $V^\mu$; by (5.33) we have

    $\delta g_{\mu\nu} = \mathcal{L}_V g_{\mu\nu} = 2 \nabla_{(\mu} V_{\nu)}$ .   (5.36)
Setting $\delta S_M = 0$ then implies

    $0 = \int d^n x \, \frac{\delta S_M}{\delta g_{\mu\nu}} \, \nabla_\mu V_\nu = - \int d^n x \, \sqrt{-g} \, V_\nu \, \nabla_\mu \left( \frac{1}{\sqrt{-g}} \frac{\delta S_M}{\delta g_{\mu\nu}} \right)$ ,   (5.37)

where we are able to drop the symmetrization of $\nabla_{(\mu} V_{\nu)}$ since $\delta S_M / \delta g_{\mu\nu}$ is already symmetric. Demanding that (5.37) hold for diffeomorphisms generated by arbitrary vector fields $V^\mu(x)$, and using the definition (4.70) of the energy-momentum tensor, we obtain precisely the law of energy-momentum conservation,

    $\nabla_\mu T^{\mu\nu} = 0$ .   (5.38)
This is why we claimed earlier that the conservation of $T^{\mu\nu}$ was more than simply a consequence of the Principle of Equivalence; it is much more secure than that, resting only on the diffeomorphism invariance of the theory.
There is one more use to which we will put the machinery we have set up in this section: symmetries of tensors. We say that a diffeomorphism $\phi$ is a symmetry of some tensor $T$ if the tensor is invariant after being pulled back under $\phi$:

    $\phi^* T = T$ .   (5.39)

Although symmetries may be discrete, it is more common to have a one-parameter family of symmetries $\phi_t$. If the family is generated by a vector field $V^\mu(x)$, then (5.39) amounts to

    $\mathcal{L}_V T = 0$ .   (5.40)
By (5.25), one implication of a symmetry is that, if $T$ is symmetric under some one-parameter family of diffeomorphisms, we can always find a coordinate system in which the components of $T$ are all independent of one of the coordinates (the integral curve coordinate of the vector field). The converse is also true; if all of the components are independent of one of the coordinates, then the partial derivative vector field associated with that coordinate generates a symmetry of the tensor.
The most important symmetries are those of the metric, for which $\phi^* g_{\mu\nu} = g_{\mu\nu}$. A diffeomorphism of this type is called an isometry. If a one-parameter family of isometries is generated by a vector field $V^\mu(x)$, then $V^\mu$ is known as a Killing vector field. The condition that $V^\mu$ be a Killing vector is thus

    $\mathcal{L}_V g_{\mu\nu} = 0$ ,   (5.41)

or from (5.33),

    $\nabla_{(\mu} V_{\nu)} = 0$ .   (5.42)
This last version is Killing's equation. If a spacetime has a Killing vector, then we know we can find a coordinate system in which the metric is independent of one of the coordinates. By far the most useful fact about Killing vectors is that Killing vectors imply conserved quantities associated with the motion of free particles. If $x^\mu(\lambda)$ is a geodesic with tangent vector $U^\mu = dx^\mu / d\lambda$, and $V^\mu$ is a Killing vector, then

    $U^\nu \nabla_\nu (V_\mu U^\mu) = U^\nu U^\mu \nabla_\nu V_\mu + V_\mu U^\nu \nabla_\nu U^\mu = 0$ ,   (5.43)

where the first term vanishes from Killing's equation and the second from the fact that $x^\mu(\lambda)$ is a geodesic. Thus, the quantity $V_\mu U^\mu$ is conserved along the particle's worldline. This can be understood physically: by definition the metric is unchanging along the direction of the Killing vector. Loosely speaking, therefore, a free particle will not feel any ``forces'' in this direction, and the component of its momentum in that direction will consequently be conserved.
Long ago we referred to the concept of a space with maximal symmetry, without offering a rigorous definition. The rigorous definition is that a maximally symmetric space is one which possesses the largest possible number of Killing vectors, which on an $n$-dimensional manifold is $n(n+1)/2$. We will not prove this statement, but it is easy to understand at an informal level. Consider the Euclidean space $\mathbf{R}^n$, where the isometries are well known to us: translations and rotations. In general there will be $n$ translations, one for each direction we can move. There will also be $n(n-1)/2$ rotations; for each of $n$ dimensions there are $n-1$ directions in which we can rotate it, but we must divide by two to prevent overcounting (rotating $x$ into $y$ and rotating $y$ into $x$ are two versions of the same thing). We therefore have

    $n + \frac{n(n-1)}{2} = \frac{n(n+1)}{2}$   (5.44)

independent Killing vectors. The same kind of counting argument applies to maximally symmetric spaces with curvature (such as spheres) or a non-Euclidean signature (such as Minkowski space), although the details are marginally different.
Although it may or may not be simple to actually solve Killing's equation in any given spacetime, it is frequently possible to write down some Killing vectors by inspection. (Of course a generic metric has no Killing vectors at all, but to keep things simple we often deal with metrics with high degrees of symmetry.) For example in $\mathbf{R}^2$ with metric $ds^2 = dx^2 + dy^2$, independence of the metric components with respect to $x$ and $y$ immediately yields two Killing vectors:

    $X^\mu = (1, 0)$ ,
    $Y^\mu = (0, 1)$ .   (5.45)

These clearly represent the two translations. The one rotation would correspond to the vector $R = \partial / \partial \theta$ if we were in polar coordinates; in Cartesian coordinates this becomes

    $R^\mu = (-y, x)$ .   (5.46)

You can check for yourself that this actually does solve Killing's equation.
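One way to do that check (a sketch of mine, not part of the notes) is symbolic: in flat $\mathbf{R}^2$ with Cartesian coordinates the covariant derivative is just the partial derivative and indices are lowered trivially, so Killing's equation (5.42) reduces to $\partial_\mu V_\nu + \partial_\nu V_\mu = 0$:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

def killing_lhs(V_lower):
    # In flat R^2, (5.42) reduces to d_mu V_nu + d_nu V_mu = 0
    # (partial derivatives replace covariant derivatives)
    return sp.Matrix(2, 2, lambda m, n:
                     sp.diff(V_lower[n], coords[m]) + sp.diff(V_lower[m], coords[n]))

# The rotation vector (5.46); lowering an index with delta_{mu nu} changes nothing
R = sp.Matrix([-y, x])
print(killing_lhs(R))        # Matrix([[0, 0], [0, 0]]): Killing's equation holds

# The translations (5.45) are Killing vectors as well
assert killing_lhs(sp.Matrix([1, 0])) == sp.zeros(2, 2)
assert killing_lhs(sp.Matrix([0, 1])) == sp.zeros(2, 2)
```

The off-diagonal entry is $\partial_x(x) + \partial_y(-y) = 1 - 1 = 0$, which is where the antisymmetric form of $R^\mu$ earns its keep.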
Note that in $n \geq 2$ dimensions, there can be more Killing vectors than dimensions. This is because a set of Killing vector fields can be linearly independent, even though at any one point on the manifold the vectors at that point are linearly dependent. It is trivial to show (so you should do it yourself) that a linear combination of Killing vectors with constant coefficients is still a Killing vector (in which case the linear combination does not count as an independent Killing vector), but this is not necessarily true with coefficients which vary over the manifold. You will also show that the commutator of two Killing vector fields is a Killing vector field; this is very useful to know, but it may be the case that the commutator gives you a vector field which is not linearly independent (or it may simply vanish). The problem of finding all of the Killing vectors of a metric is therefore somewhat tricky, as it is sometimes not clear when to stop looking.
