Contents

1 The …
2 Lagrangian mechanics
2.1 The scope of Lagrangian mechanics
2.2 Constrained systems
2.3 Symmetries
3 Interlude: Conic sections
4 The …
5 Small oscillations
5.1 Forced oscillations
5.2 Damped and forced oscillations
5.3 Several degrees of freedom
6 Rotation and rigid bodies
6.1 Rotations
6.2 Rotating coordinate systems
6.3 The inertia tensor
6.4 Euler's equations
6.5 The Lagrangian description
8 The Hamiltonian formulation
8.1 Hamilton's equations and Hamiltonian flows
8.2 Kets and bras and all that
8.3 The symplectic form
8.4 The algebraic structure of mechanics
8.5 The sphere as a phase space
8.6 Infinitesimal canonical transformations
8.7 The symplectic one-form
8.8 General transformation theory
Sir Isaac Newton was a Master of the Mint. He also formulated three celebrated
laws of mechanics, which we can paraphrase as follows:
1. A particle not subject to any force moves on a straight line at constant
speed.
2. In the presence of a force, the position of a particle obeys the equation of motion
$$ m\ddot{x}_i = F_i(x, \dot{x}) . \qquad (1.1) $$
3. The force exerted by a particle on another is equal in magnitude, but opposite in direction, to the force exerted by the other particle on the first.
A particle is here thought of as an entity characterized by its mass m, its location in space, and by nothing else.¹ The aim of Newton's mechanics is to predict the location at arbitrary times, given the position and velocity at some initial time. This is done by means of a solution of the differential equations above.
An overdot denotes differentiation with respect to the time parameter t (this notation, as well as the Differential Calculus itself, was invented by Newton), and $x_i$ may denote a vector in three-dimensional space. Sometimes it will be understood that we are dealing with a set of N particles, and moreover we often suppress indices. Then the force $F_i(x, \dot{x})$ is a $3N$-component function of the $3N$ variables $x_i$ and their $3N$ derivatives $\dot{x}_i$. Since the index notation may be a bit unfamiliar, let me note that whenever indices occur in a formula, it is understood that they can take any of a specified set of integer values.
¹ For further discussion see A. Jenkins, On the title of Moriarty's "Dynamics of an Asteroid", eprint arXiv:1302.5855.
If $i \in \{1, 2, \dots, n\}$ then eq. (1.1) stands for n separate equations. Throughout we employ Einstein's summation convention, which means that whenever a certain index occurs twice in a particular term, a sum over all its allowed values is understood, e.g.
$$ x_i y_i \equiv \sum_{i=1}^{n} x_i y_i = \sum_{j=1}^{n} x_j y_j = x_j y_j . \qquad (1.2) $$
It does not matter which letter is used for a repeated index. To avoid confusion, the same index never occurs three or more times in a single term. In section 8.2 we will introduce index notation in a more sophisticated tensorial way, but for the time being this is all there is to it. By the way, index notation is not always the best choice (it does not, for instance, make use of any special properties of three-dimensional space), but it has the advantage that it can be used for everything, which is why I always use it.
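If you want to experiment with the convention on a computer, numpy's einsum function implements exactly this bookkeeping. A minimal sketch (my own illustration, not part of the text):

```python
import numpy as np

# The repeated index i in x_i y_i is summed over; the letter does not matter.
n = 3
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

explicit = sum(x[i] * y[i] for i in range(n))   # sum_i x_i y_i, written out
einstein = np.einsum('i,i->', x, y)             # repeated index i is summed
print(explicit, einstein)                       # both give 32.0
```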
Newton's Third Law implies that the total momentum of a system of particles is conserved:
$$ \frac{dP_i}{dt} \equiv \frac{d}{dt} \sum_{\text{particles}} m\dot{x}_i = 0 , \qquad (1.3) $$
where the sum is over all the particles in the system. The existence of such a conserved vector is clearly an interesting fact. By the way, this formulation is quite superior when we deal with time dependent masses, say with rockets (see exercise 3). Now consider the function
$$ E = T + V = \frac{m\dot{x}^2}{2} + V(x) , \qquad (1.4) $$
where V is some function of x, known as the potential energy. The function T is called the kinetic energy, while E itself is the energy of the system. Clearly
$$ \dot{E} = \dot{x}_i \left( m\ddot{x}_i + \partial_i V(x) \right) . \qquad (1.5) $$
If the force is given in terms of the potential by
$$ F_i = -\partial_i V(x) , \qquad (1.6) $$
then the energy of the system is conserved. Systems for which a conserved
energy function exists are called conservative. In our example, and indeed in
many interesting cases, the energy can be divided into kinetic and potential
parts, and the equation of motion is given by
$$ m\ddot{x}_i = -\partial_i V(x) . \qquad (1.7) $$
This move is typical of analytical mechanics, where vectors are usually derived
from scalar functions.
Analytical mechanics devises methods to derive the differential equations describing a given system, strategies for solving them, and ways of describing the solutions if they cannot be obtained in explicit form.
We will tentatively restrict ourselves to conservative systems only. If you like, this is a strengthening of the third law, and it is believed that all isolated systems in Nature are of this type.
What we are trying to do is to find some properties that all the Laws of
Physics, and in particular all allowed equations of motion, have in common.
Now the philosopher Leibniz, who was the other of the two inventors of the Differential Calculus, argued that we live in the best of all possible worlds.
Is it evident from eq. (1.7) that this is so? Indeed it is, as was first realized half a century after the publication of Newton's Principia. The inspiration came from optics, and the laws of reflection and refraction. It is known that the angle of reflection is equal to the angle of incidence, and it was observed by the Greeks that this implies that light always travels on the shortest path
available between two points A and B, subject to the restriction that it should
be reflected against the surface. If the angle of reflection were to differ from
the angle of incidence, the distance covered by light in going from A to B
would be greater than it has to be. For refraction, we have Snell's Law. Any medium can be assigned an index of refraction n, and the angle of refraction is related to the angle of incidence through
$$ n_1 \sin i = n_2 \sin r . \qquad (1.8) $$
Fermat realized that both laws follow from a single principle. The index of refraction is related to the speed of light v in the medium by
$$ n = \frac{c}{v} . \qquad (1.9) $$
Consider now the integral
$$ I = c\int dt = \int \frac{c\,ds}{v} = \int n(x(s))\,ds , \qquad (1.10) $$
evaluated along an arbitrary path x(s) between A and B. Then the path actually taken
by light in going from A to B through the medium is that specific path which
results in the smallest possible value of the integral I. This path may well not
be a straight line. The question is how to do the optimization. We will soon
come to it.
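Fermat's principle is easy to test numerically. A minimal sketch (my own illustration; the geometry and the indices of refraction are arbitrary choices): minimize the travel time of a ray crossing a flat interface, and check that the minimizer obeys Snell's law (1.8).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Light goes from A = (0, h1) in medium 1 to B = (d, -h2) in medium 2,
# crossing the interface y = 0 at (x, 0).  Fermat: the actual crossing
# point minimizes the travel time, which should reproduce Snell's law.
n1, n2 = 1.0, 1.5          # indices of refraction (arbitrary)
h1, h2, d = 1.0, 1.0, 2.0  # geometry (arbitrary)

def travel_time(x):
    # time = n * path length / c; the constant c is dropped
    return n1 * np.hypot(x, h1) + n2 * np.hypot(d - x, h2)

x = minimize_scalar(travel_time, bounds=(0.0, d), method='bounded').x
sin_i = x / np.hypot(x, h1)            # sine of the angle of incidence
sin_r = (d - x) / np.hypot(d - x, h2)  # sine of the angle of refraction
print(n1 * sin_i, n2 * sin_r)          # agree to optimizer precision
```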
Is there a similar principle underlying mechanics? At least for systems obeying eq. (1.7), there is. Consider two points A and B, and suppose that a particle
starts out at A at time t = t1 , and then moves along an arbitrary path from
A to B with whatever speed that is consistent with the requirement that it
should arrive at B at the time t = t2 . In mathematical terms we are dealing
with a function x(t) such that
$$ x(t_1) = x_A , \qquad x(t_2) = x_B , \qquad (1.11) $$
but otherwise arbitrary. For any such function x(t) we can evaluate the integral
$$ S[x(t)] = \int_{t_1}^{t_2} dt\,(T - V) = \int_{t_1}^{t_2} dt \left( \frac{m\dot{x}^2}{2} - V(x) \right) . \qquad (1.12) $$
The value of S depends on the function x(t) as a whole; it is not a function of t, hence the square bracket notation. On the other hand it is a function of $x_A$, $x_B$, $t_1$, and $t_2$, but this is rarely written out explicitly.
The statement, to be verified in the next section, is that the action functional (1.12) has an extremum (not necessarily a minimum) for precisely that function x(t) which obeys the differential equation (1.7). This is known as Hamilton's Principle, or, with less than perfect historical accuracy, as the Principle of Least Action.
Hamiltonian mechanics deals with those, and only those, equations of motion which can be derived from Hamilton's Principle, for some choice of the action functional.
This is a much more general class than that given by eq. (1.7), but it does exclude some cases of physical interest. Hamiltonian mechanics forms only a part of analytical mechanics, namely that part that we will focus on.
Note once again what is going on. The original task of mechanics was to
predict the trajectory of a particle, given a small set of data concerning its
state at some initial time t. We claim that there exists another formulation of
the problem, where we can deduce the trajectory given half as much data at
each of two different times. So there seems to be a local, causal way of looking
at things, and an at first sight quite different global, teleological viewpoint.
The claim begins to look reasonable when we observe that the amount of free data in the two formulations is the same. Moreover, if the two times t1
and t2 approach each other infinitesimally closely, then what we are in effect
specifying is the position and the velocity at time t1 , just as in the causal
approach.
Why do principles like Fermat's and Hamilton's work? In both cases, we are
extremizing a quantity evaluated along a path, and the path actually taken
by matter in nature is the one which makes the quantity in question assume
an extremal value. The point about extrema (not only minima) is that if the path is varied slightly away from the extremal path, to a path which differs to order $\epsilon$ from the extremal one, then the value of the path dependent quantity suffers a change which is of order $\epsilon^2$. At an extremum the first derivative
vanishes. In the case of optics, we know that the description of light as a bundle
of rays is valid only in the approximation where the wavelength of light is
much less than the distance between A and B. In the wave theory, in a way,
every path between A and B is allowed. If we vary the path slightly, the time
taken by light to arrive from A to B changes, and this means that it arrives
out of phase with the light arriving along the first path. If the wavelength
is very small, phases from light arriving by different paths will be randomly
distributed, and will cancel each other out through destructive interference.
This argument fails precisely for the extremal paths: for them, neighbouring
paths take approximately the same time, light from all neighbouring paths will
arrive with the same phase, and constructive interference takes place. Thus,
whenever the wavelength is negligibly small, it will appear that light always
travels along extremal paths.
To make Hamilton's Principle precise we must understand how to extremize the action
$$ S[x(t)] = \int_{t_1}^{t_2} dt \left( \frac{m\dot{x}^2}{2} - V(x) \right) \qquad (1.13) $$
over a space of functions. Recall first how one finds the extrema of an ordinary function: one looks at the change of the function under a small variation of its argument,
$$ \delta f = f(x + \delta x) - f(x) = \delta x\, f'(x) . \qquad (1.14) $$
We assume that $\delta x$ is so small that second order terms can be ignored. If the derivative is zero at the point x, the function has a minimum, or a maximum, or at least an inflection point there. For a function of several variables, the condition for an extremum (a minimum, a maximum, or a saddle point) is that
$$ \delta f(x) = \sum_i \delta x_i \frac{\partial f}{\partial x_i}(x_1, \dots, x_N) = 0 \qquad (1.15) $$
for arbitrary choices of the $\delta x_i$, which means that all the N partial derivatives
have to vanish at the extremal points. Now a functional of a function x(t)
can be regarded as a function of an infinite number of variables, say of the
Fourier coefficients of the original function. You can also regard t as a label of
the infinite number of variables on which the functional depends (a kind of continuous index) and then what we have to do is to replace the sum in eq. (1.15) with an integral. Like this:
$$ \delta S = S[x(t) + \delta x(t)] - S[x(t)] = \int_{t_1}^{t_2} dt\,\delta x(t)\,\frac{\delta S}{\delta x}(t) . \qquad (1.16) $$
In the calculation we write the variation in the form
$$ \delta x(t) = \epsilon f(t) , \qquad (1.17) $$
where f(t) is, for the time being, an arbitrary function while $\epsilon$ is an infinitesimally small constant. It is important for the following argument that f(t) is arbitrary, or nearly so. That $\epsilon$ is infinitesimally small simply means that we will neglect terms of quadratic and higher orders in $\epsilon$ in the calculation which follows:
$$ \delta S = S[x(t) + \delta x(t)] - S[x(t)] = \int_{t_1}^{t_2} dt \left( \frac{m}{2}\left( \dot{x} + \delta\dot{x} \right)^2 - V(x + \delta x) \right) - S[x(t)] = $$
$$ = \int_{t_1}^{t_2} dt \left( \frac{m}{2}\dot{x}^2 + m\dot{x}\,\delta\dot{x} - V(x) - \delta x\,\partial_x V(x) \right) + o(\epsilon^2) - S[x(t)] = \qquad (1.18) $$
$$ = \int_{t_1}^{t_2} dt \left( m\dot{x}\,\delta\dot{x} - \delta x\,\partial_x V(x) \right) + o(\epsilon^2) . $$
The action functional has an extremum at the particular function x(t) for which this expression vanishes to first order in $\epsilon$. What we want to see is what kind of restrictions this requirement sets on the function. To see this, we perform a partial integration:
$$ \delta S = \int_{t_1}^{t_2} dt \left( -\delta x \left( m\ddot{x} + \partial_x V(x) \right) + \frac{d}{dt}\left( m\dot{x}\,\delta x \right) \right) . \qquad (1.19) $$
Unfortunately this is not quite of the form (1.16), due to the presence of the total derivative in the integrand. Therefore we impose a restriction on the so far arbitrary function f(t) that went into the definition of $\delta x$, so that
$$ \delta x(t_1) = \delta x(t_2) = 0 . \qquad (1.20) $$
This is a way of saying that we are interested only in functions x(t) that have certain preassigned starting and end points at specified times. With this restriction, the total derivative in eq. (1.19) goes away. The first term has to vanish for all allowed choices of the functions $\delta x(t)$. After a moment's reflection, we see that this can happen only if the factor multiplying $\delta x$ in the integrand is zero! Hence we have proved that the action functional has an extremum, among all possible functions obeying
$$ x(t_1) = x_A , \qquad x(t_2) = x_B , \qquad (1.21) $$
precisely for those functions that obey Newton's equation of motion,
$$ m\ddot{x} = -\partial_x V(x) . \qquad (1.22) $$
Before developing the formalism further, let us consider the problem of actually solving equations of motion. The simplest case is a first order differential equation for a single variable,
$$ \dot{x} = f(x) . \qquad (1.23) $$
If we can compute the integral
$$ t(x) = \int_c^x \frac{dx'}{f(x')} , \qquad (1.24) $$
and then invert the resulting function t(x) to obtain the function x(t), we
have solved the equation. We will regard eq. (1.24) as an implicit definition
of x(t), and eq. (1.23) is soluble in this sense. This is reasonable, since the
manipulations required to extract t(x) can be easily done on a computer, to any
desired accuracy, even if we cannot express the integral in terms of elementary
functions. But there are some limitations here: It may not be possible to invert
the function t(x) except for small times.
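The quadrature-and-inversion procedure is easy to mechanize. A minimal numerical sketch (my own illustration; the function f and the grid are arbitrary choices): tabulate t(x) by eq. (1.24), then invert by interpolation.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Solve xdot = f(x) with x(0) = c by quadrature, eq. (1.24):
# t(x) = integral from c to x of dx'/f(x'), then invert t(x) -> x(t).
f = lambda x: 1.0 + x**2          # example; the exact solution is x = tan(t)
c = 0.0                           # initial condition x(0) = c
x = np.linspace(c, 10.0, 20001)   # grid on which t(x) is monotonic
t_of_x = cumulative_trapezoid(1.0 / f(x), x, initial=0.0)

def x_of_t(t):
    # inversion by interpolation; valid only for t below the limit of t(x),
    # which here is pi/2: the "small times" caveat of the text
    return np.interp(t, t_of_x, x)

print(x_of_t(1.0), np.tan(1.0))   # should agree to grid accuracy
```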
Next consider a second order equation, such as the equation of motion for
a harmonic oscillator:
$$ m\ddot{x} = -a x . \qquad (1.25) $$
This is a linear equation, and we know how to express the solution in terms of trigonometric functions, but our third example, a pendulum of length l, is already somewhat worse:
$$ ml^2\ddot{\theta} = -gml\sin\theta . \qquad (1.26) $$
A general trick is to trade a second order equation for a pair of coupled first order equations, say
$$ \dot{p} = -a x , \qquad m\dot{x} = p . \qquad (1.27) $$
The second equation defines the new variable p. Unfortunately coupled first order equations are difficult to solve, except in the linear case when they can be decoupled through a Fourier transformation.
The number of degrees of freedom of a dynamical system is defined to be one
half times the number of first order differential equations needed to describe
the evolution. It will turn out that, for systems whose equations of motion are
derivable from the action principle, the number of first order equations will
always be even, so the number of degrees of freedom is always an integer for
such systems. A system with n degrees of freedom will be described by a set of
2n in general coupled first order equations, and the difficulties one encounters
in trying to solve them will rapidly become severe.
In the cases at hand, with one degree of freedom only, one uses the fact that
these are conservative systems, which will enable us to reduce the problem to
that of solving a single first order equation. For the harmonic oscillator the
conserved quantity is
$$ E = \frac{m\dot{x}^2}{2} + \frac{a x^2}{2} . \qquad (1.28) $$
The number E does not depend on t. Equivalently
$$ \dot{x}^2 = \frac{2E}{m} - \frac{a}{m}\,x^2 . \qquad (1.29) $$
Taking a square root we are back to the situation we know, and we proceed
as before:
$$ dt = dx\,\sqrt{\frac{m}{2E - a x^2}} \quad\Rightarrow\quad t(x) = \int dx\,\sqrt{\frac{m}{2E - a x^2}} . \qquad (1.30) $$
Inverting the function defined by the integral, we find the solution x(t). The
answer is a trigonometric function, with two arbitrary constants E and c determining its phase and its amplitude. For our purposes the trigonometric
function is defined by this procedure!
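For reference, the integral in eq. (1.30) can also be done in closed form (standard calculus, spelled out here as a check):
$$ t(x) = \int \frac{\sqrt{m}\,dx}{\sqrt{2E - ax^2}} = \sqrt{\frac{m}{a}}\arcsin\left( x\sqrt{\frac{a}{2E}} \right) + \text{const.} \quad\Rightarrow\quad x(t) = \sqrt{\frac{2E}{a}}\,\sin\left( \sqrt{\frac{a}{m}}\,(t - c) \right) , $$
so the amplitude is $\sqrt{2E/a}$ and the angular frequency is $\sqrt{a/m}$, as expected.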
We can play the same trick with the non-linear equation for the pendulum,
and we end up with
t() =
d
q
2
(E
ml2
(1.31)
+ gml cos )
We integrate, and we invert. This defines the function (t). We could leave it
at that, but since our example is a famous one, we manipulate the integral a
bit further for the fun of it. Make the substitution
sin
k sin
2
2k cos d
d = p
.
1 k2 sin2
(1.32)
l
2g
()
2k cos d
q
E
1 k2 sin2 gml
+ 1 2 sin2
(1.33)
E
+1 .
gml
(1.34)
p
t() =
=
sin
x
g c
1 k2 sin2
s Z
l x()
dx
p
=
.
g c
(1 x2 )(1 k2 x2 )
(1.35)
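Taken over a quarter period, the integral in eq. (1.35) is the complete elliptic integral K(k), and at the turning point eq. (1.34) gives $k = \sin(\theta_0/2)$, with $\theta_0$ the amplitude. A small numerical sketch (my own illustration; note that scipy's ellipk takes the parameter $m = k^2$, not k itself):

```python
import numpy as np
from scipy.special import ellipk

# Period of the pendulum: T = 4 * sqrt(l/g) * K(k), with k = sin(theta0/2).
g, l = 9.81, 1.0  # arbitrary pendulum, SI units

def period(theta0):
    k = np.sin(theta0 / 2)
    return 4 * np.sqrt(l / g) * ellipk(k**2)

print(period(0.01))                # ~ 2*pi*sqrt(l/g): the harmonic limit
print(2 * np.pi * np.sqrt(l / g))  # small-angle prediction, for comparison
print(period(3.0))                 # grows without bound as theta0 -> pi
```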
Figure 1.1. A Lissajous figure, and how to draw it; x = cos t, y = sin 2t.
Consider next two independent harmonic oscillators, with trajectories
$$ x = a\cos(\omega_1 t + \delta_1) , \qquad y = b\cos(\omega_2 t + \delta_2) . \qquad (1.36) $$
The trajectory in the x-y-plane is a Lissajous figure. Fig. 1.1 explains how to
draw them; further examples are readily produced with a computer. If $\omega_1 = \omega_2$ the trajectory is an ellipse, with circles and straight lines as special cases. More generally, if there exist integers m and n such that
$$ m\omega_1 = n\omega_2 , \qquad (1.37) $$
the trajectory is a closed curve. If there are no such integers the trajectory
eventually fills a rectangle densely, and never closes on itself. Now put yourself into the position of an experimentalist trying to determine by means of
measurements whether the trajectory will be closed or not!
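The figures are easy to produce; a sketch (my own illustration, with arbitrary frequency choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Lissajous figures x = cos(t), y = cos(w2*t): if w2 is rational the curve
# closes, if not it densely fills a rectangle.  Numerically every float is
# rational, which is exactly the experimentalist's predicament noted above.
t = np.linspace(0, 200, 100000)
for w2 in (2.0, np.sqrt(2)):          # a closed curve versus a dense one
    plt.figure()
    plt.plot(np.cos(t), np.cos(w2 * t), lw=0.3)
    plt.title(f'x = cos t,  y = cos {w2:.3f} t')
plt.show()
```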
Another example of this type is a particle moving on a straight line in a plane, but confined to a square box and bouncing elastically off its walls.
Let us ask whether the trajectory is periodic or whether it will eventually come
arbitrarily close to any point in the box. The answer depends on the initial
condition for the direction of motion. If the angle between this direction and
one of the walls is called , the question is whether tan is a rational number
or not. Theoretically this is fine, but for someone who wants to decide the
question by means of measurements of the initial velocity it is not!
The examples above fit into a general scheme. A dynamical system is described by a set of phase space coordinates
$$ z_i , \qquad 1 \le i \le N , \qquad (1.38) $$
whose time evolution is governed by a set of coupled first order differential equations,
$$ \dot{z}_i = f_i(z_1, \dots, z_N) . \qquad (1.39) $$
There are theorems that guarantee the existence and uniqueness of solutions of such systems for some range of the parameter t. Thus
$$ z_i = z_i(z_{01}, \dots, z_{0N}, t) , \qquad (1.40) $$
where $z_{0i}$ are the initial values of $z_i$. The space on which the $z_i$ serve as coordinates is called the phase space of the system.
This is the first of several abstract spaces that we will encounter, and you
must get used to the idea of abstract spaces.
A particle moving in space has a 6 dimensional phase space, because its
position (3 numbers) and its velocity (3 numbers) at a given time determine
its position at all times, given Newton's laws. Anything else can either be computed from these numbers (this is true for its acceleration) or else it can be ignored (this would be true for how it smells, if it does). The particle also
has a mass, but this number is not included in phase space because it is given
once and for all. Two particles moving in space have a 12 dimensional phase
space, so high dimensional phase spaces are often encountered. We will have
to picture them as best we may.
Now consider time evolution according to eq. (1.39). Because of the theorems
I alluded to, we know that through any point $z_0$ there passes a unique curve $z_i(t)$, with a unique tangent vector $\dot{z}_i$. These curves never cross each other.
When the system is at a definite point in phase space, it knows where it is
going. The curves are called trajectories, and their tangent vectors define a
vector field on phase space called the phase space flow. Imagine that we can
see such a flow. Then there are some interesting things to be observed. We
say that the flow has a fixed point wherever the tangent vectors vanish. If the
system starts out at a fixed point at t = 0, it stays there forever. There is an
important distinction to be made between stable and unstable fixed points. If
you start out a system close to an unstable fixed point it starts to move away
from it, while in the stable case it will stay close forever. The stable fixed point
may be an attractor, in which case a system that starts out close to the fixed
point will start moving towards it. The region of phase space which is close
enough for this to happen is called the basin of attraction for the attractor.
Consider a one dimensional phase space, with the first order system
$$ \dot{z} = f(z) . \qquad (1.41) $$
For generic choices of the function f all fixed points are either stable attractors,
or unstable repellors, but for special choices of f we can have fixed points
that are approached by the flow only on one side. The latter are structurally
unstable, in the sense that the smallest change in f will either turn them into pairs of attractors and repellors, or cause them to disappear altogether.
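A crude numerical version of this classification (my own sketch; the tolerance h is an arbitrary choice):

```python
import numpy as np

# Classify a fixed point z0 of the 1-d flow zdot = f(z), eq. (1.41):
# f'(z0) < 0 gives an attractor, f'(z0) > 0 a repellor, and f'(z0) = 0
# the structurally unstable case (e.g. a double zero of f).
def classify(f, z0, h=1e-6):
    slope = (f(z0 + h) - f(z0 - h)) / (2 * h)  # numerical f'(z0)
    if slope < -h: return 'attractor'
    if slope > h:  return 'repellor'
    return 'structurally unstable'

f = lambda z: z * (1 - z) ** 2   # fixed points at z = 0 and z = 1
print(classify(f, 0.0))          # repellor
print(classify(f, 1.0))          # structurally unstable (double zero)
```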
In two dimensions there are more possibilities. We can have sources and
sinks, as well as stable elliptic and unstable hyperbolic fixed points. To see
what the latter two look like, we return to the examples given in section 1.3.
The phase space of the harmonic oscillator is R2 , and it contains one elliptic
fixed point. It is elliptic because it is surrounded by closed trajectories, and
hence it is stable. In the case of the pendulum phase space has a non-trivial
topology: since the coordinate $\theta$ is a periodic angle, phase space is the surface
of an infinitely long cylinder. It contains two fixed points. One of them is
elliptic, and the other (the state where the pendulum is pointing upwards) is
hyperbolic. What is special about the hyperbolic fixed point is that there are
Figure 1.2. A one dimensional phase space, containing one stable and one unstable fixed point, as well as one fixed point which is structurally unstable.
Figure 1.3. Fixed points in a two dimensional phase space: a source, a sink, a
limit cycle, an elliptic fixed point, and a hyperbolic fixed point.
two trajectories leading into it, and two leading out of it. The length of the tangent vectors decreases as the fixed point is approached. Taking the global structure of phase space into account we see that a trajectory leaving the fixed point is in fact identical to one of the incoming ones. Hence there are really only two special trajectories. A striking fact about them is that they divide phase space into regions with qualitatively different behaviour: one region where the trajectories go around the elliptic fixed point, and two regions where the trajectories go around the cylinder. For this reason the special trajectories are called separatrices, and the regions into which they divide phase space are called invariant sets; by definition an invariant set in phase space is a region that one cannot leave by following the phase space flow.
It is very important that you see how to relate this abstract discussion of
the phase space of the pendulum to known facts about real pendula. Do this!
It is not by accident that the phase space of the pendulum is free of sources
and sinks. The reason is, as we will see in section 8.1, that only elliptic or
hyperbolic fixed points can occur in Hamiltonian mechanics. Real pendula
tend to have some amount of dissipation present (because they are imperfectly
isolated from the environment), and then the situation changes; see exercise
11. Speaking of Hamiltonian systems it is worthwhile to point out that the
example of the two harmonic oscillators in eq. (1.36) is less frivolous than it
may appear. The phase space is four dimensional, but there are two conserved quantities,
$$ 2E_1 = p_1^2 + \omega_1^2 x_1^2 , \qquad 2E_2 = p_2^2 + \omega_2^2 x_2^2 . \qquad (1.42) $$
This means that any given trajectory will be confined to a two dimensional
surface in phase space, labelled by $E_1$ and $E_2$. This surface is a torus, with topology $S^1 \times S^1$. In a sense to be made precise later, non-chaotic motion in
a Hamiltonian system always takes place on a torus in phase space.
Finally we observe that we have the beginnings of a strategy to understand
any given dynamical system. We begin by locating the fixed points of the
phase space flow. Then we try to determine the nature of these fixed points. If
the equations are linear this is straightforward. If not, we can try linearization
of the equations around the fixed points. There is a theorem we can lean on
here:
The Hartman-Grobman theorem: The nature of the fixed points is unchanged
by linearization, as long as the fixed points are isolated and as long as no
elliptic fixed points occur.
The caveat in the statement will be explained presently.
Now consider the pendulum. Its phase space is a cylinder described by the coordinates $(\theta, p_\theta)$. To see if the phase space flow has any fixed points, you set
$$ \dot{\theta} = \frac{1}{ml^2}\,p_\theta = 0 , \qquad \dot{p}_\theta = -gml\sin\theta = 0 . \qquad (1.43) $$
Hence there are fixed points at $(\theta, p_\theta) = (0, 0)$ and $(\pi, 0)$. Linearizing around them you find the former to be elliptic and the latter to be hyperbolic. If this remains true for the non-linear equations you can easily draw a qualitatively correct picture of the phase space flow. No integration is needed.
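The linearization can be done numerically as well; a sketch (my own illustration, using the flow written in the form (1.43)):

```python
import numpy as np

# Linearize the pendulum flow around its fixed points and read off their
# nature from the eigenvalues of the Jacobian: a pure imaginary pair
# signals an elliptic fixed point, a real +/- pair a hyperbolic one.
g, l, m = 9.81, 1.0, 1.0

def flow(state):
    theta, p = state                  # phase space coordinates (theta, p)
    return np.array([p / (m * l**2), -g * m * l * np.sin(theta)])

def jacobian(state, h=1e-6):
    J = np.empty((2, 2))
    for j in range(2):
        dz = np.zeros(2); dz[j] = h
        J[:, j] = (flow(state + dz) - flow(state - dz)) / (2 * h)
    return J

for fp in (np.array([0.0, 0.0]), np.array([np.pi, 0.0])):
    print(fp[0], np.linalg.eigvals(jacobian(fp)))
# (0,0): +/- i*sqrt(g/l), elliptic;  (pi,0): +/- sqrt(g/l), hyperbolic
```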
Were we justified in assuming that the fixed points are elliptic? To see what
can go wrong, consider the non-linear equation
$$ \ddot{x} + x^2\dot{x} + x = 0 . \qquad (1.44) $$
As an example of what can happen in three dimensions, consider the system
$$ \dot{z}_1 = -a z_1 + a z_2 , \qquad \dot{z}_2 = b z_1 - z_2 - z_1 z_3 , \qquad \dot{z}_3 = -c z_3 + z_1 z_2 , \qquad (1.45) $$
known as the Lorenz equations.
Problem 1.1
Newton's Second Law says that the position and the velocity of a particle can be freely specified; then the trajectory x(t), and therefore all derivatives
of order higher than one, is determined by the equation of motion. Suppose instead
that either
a) only the position can be freely specified, and that the equation of motion determines
all the derivatives, or
b) position, velocity and acceleration can be freely specified, and that the equation of
motion determines all derivatives of order higher than two.
Discuss these assumptions in the light of Newton's First Law.
Problem 1.3
Show that for a rocket, whose mass m(t) decreases as it ejects exhaust, Newton's Second Law is replaced by
$$ m\frac{dv_i}{dt} = F_i + \frac{dm}{dt}\,u_i , \qquad (1.46) $$
where $F_i$ is an external force, $v_i$ is the velocity of the rocket, and $u_i$ is the exhaust velocity (relative to the rocket).
Problem 1.4
Prove Snell's Law of optics, starting from Fermat's principle. Also argue for it using properties of plane waves.
Problem 1.5
An elastic bar extends between x = 0 and x = L. It resists bending, has a load per unit length given by $\rho(x)$, and is subject to gravity. We may therefore assume that its energy is given by
$$ E = \int_0^L dx \left( \frac{k}{2}\,(y'')^2 - \rho(x)\,y \right) , $$
where the primes denote differentiation with respect to x and k is a constant. The
bar will minimize its energy. Analyse the variational problem to see what equation
determines the equilibrium position, and what conditions one must impose on the end
of the bar in order to obtain a unique solution. Archers want their bows to bend like
circles. Conclude that bows must have a value of k that depends on x.
Problem 1.6
Is it true that a once differentiable function x(t) is a solution
of eq. (1.25) if and only if it is a solution of eq. (1.29)? If not, find a non-trivial
counterexample.
Problem 1.7
Using the general solution for the pendulum, eq. (1.35), solve for $\theta(t)$ in the special case k = 1. Physically, what does this solution correspond to?
Problem 1.8
Use Mathematica to compare the solutions for the pendulum
to those of the harmonic oscillator, for various values of the energy (which you adjust
so that E = 0 corresponds to the stable fixed point in both cases).
Problem 1.9
Consider a projectile that is fired straight up in a gravitational
field (V = GM/r), reaches a maximum height rmax , and falls back again. Prove
that the solution has the parametric form
rmax
r=
(1 cos ) ,
2
rmax
t=
2
rmax
( sin ) .
2GM
Show that the resulting curve in the tr plane is a cycloid, the curve followed by a
point on the perimeter of a circular disk rolling without slipping on the taxis.
Problem 1.10
Using the construction sketched in Fig. 1.1, draw Lissajous figures for $(x, y) = (\cos t, \cos(2t + \delta))$, for $\delta = 0, \pi/4, \pi/2$. What is the first of these called?
Problem 1.11 Linearize the pendulum around its fixed points, and then draw a careful picture of its phase space. Add a friction term to the equation, $\ddot{\theta} + \epsilon\dot{\theta} + \sin\theta = 0$, and see in qualitative terms what this does to the phase space flow.
Problem 1.12
Give a simple example where linearization around a fixed
point gives an erroneous impression of its nature because the fixed point does not
stay isolated.
2 Lagrangian mechanics
With the agreement that the action integral is an important object, we give
a name also to its integrand, and call it the Lagrangian. In the examples that
we considered so far, and in fact in most cases of interest, the Lagrangian is a
function of a set of n variables $q_i$ and their n first order derivatives $\dot{q}_i$:
$$ S[q(t)] = \int_{t_1}^{t_2} dt\,L(q_i, \dot{q}_i) . \qquad (2.1) $$
Demanding that the action be stationary under variations that vanish at the endpoints yields the Euler-Lagrange equations
$$ \frac{\partial L}{\partial q_i} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} = 0 , \qquad (2.2) $$
since the variation of the action is
$$ \delta S = \int_{t_1}^{t_2} dt \left( \delta q\,\frac{\partial L}{\partial q} + \delta\dot{q}\,\frac{\partial L}{\partial \dot{q}} \right) = \int_{t_1}^{t_2} dt\,\delta q \left( \frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} \right) + \int_{t_1}^{t_2} dt\,\frac{d}{dt}\left( \delta q\,\frac{\partial L}{\partial \dot{q}} \right) . \qquad (2.3) $$
The total derivative term gives rise to a boundary term that vanishes because we are only varying functions whose values at $t_1$ and $t_2$ are kept fixed, so that $\delta q$ is zero at the boundary. The Euler-Lagrange equations follow as advertised.
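The variation leading to eq. (2.2) can also be carried out symbolically; sympy even ships a helper for it. A minimal sketch (my own illustration, for the one dimensional Lagrangian $L = m\dot{q}^2/2 - V(q)$):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# Derive the Euler-Lagrange equation (2.2) for L = m*qdot**2/2 - V(q).
t = sp.symbols('t')
m = sp.symbols('m', positive=True)
q = sp.Function('q')
V = sp.Function('V')

L = m * q(t).diff(t)**2 / 2 - V(q(t))
print(euler_equations(L, [q(t)], [t]))
# one equation, equivalent to -m*q''(t) - V'(q(t)) = 0: Newton's eq. (1.7)
```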
The question is to what extent the equations of motion that actually occur in
physics are of this form.
There are some that cannot be brought to quite this form by any means, including some of considerable physical interest; most of them involve dissipation of energy of some sort. An example is that of a white elephant sliding down a hillside covered with flowers. But then frictional forces are not fundamental
forces. A complete description of the motion of the elephant would involve the
motion of the atoms in the elephant and in the flowers, both being heated by
friction. It is believed that all complete, fundamental equations are derivable from Hamilton's principle, and hence that they fall within the scope of Lagrangian mechanics, or of quantum mechanics, which is structurally similar in this regard.
Generally speaking we expect Lagrangian mechanics to be applicable whenever there is no dissipation of energy. For many simple mechanical systems the
Lagrangian equals the difference between the kinetic and the potential energy,
$$ L(x, \dot{x}) = T(\dot{x}) - V(x) . \qquad (2.4) $$
For instance,
$$ L = \frac{m\dot{x}^2}{2} - V(x) \quad\Rightarrow\quad \frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = -\frac{\partial V}{\partial x} - m\ddot{x} . \qquad (2.5) $$
Even in some situations where there is no conservation of energy, analytical
mechanics applies. The simplest examples involve Lagrangians which depend
explicitly on the time t. Dissipation is not involved because we keep careful
track of the way that energy is entering or leaving the system.
Now for an example where the Lagrangian formalism is useful. Suppose we wish to describe a free particle in spherical polar coordinates,
$$ x = r\cos\phi\sin\theta , \qquad y = r\sin\phi\sin\theta , \qquad z = r\cos\theta . \qquad (2.6) $$
That is to say, we wish to derive the equations of motion for r, $\theta$, and $\phi$. This requires us to first express the Lagrangian in terms of these coordinates and their time derivatives.
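That computation is mechanical, so it can be left to a computer algebra system. A sketch (my own illustration, not part of the text) that produces the free particle Lagrangian in the coordinates (2.6):

```python
import sympy as sp

# Substitute (2.6) into L = m*(xdot^2 + ydot^2 + zdot^2)/2 and simplify;
# the expected result is (m/2)*(rdot^2 + r^2*thdot^2 + r^2*sin(th)^2*phdot^2).
t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r, th, ph = sp.Function('r'), sp.Function('theta'), sp.Function('phi')

x = r(t) * sp.cos(ph(t)) * sp.sin(th(t))
y = r(t) * sp.sin(ph(t)) * sp.sin(th(t))
z = r(t) * sp.cos(th(t))

L = m * (x.diff(t)**2 + y.diff(t)**2 + z.diff(t)**2) / 2
print(sp.simplify(L))
```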
A more demanding example is that of a particle of charge e moving in an external electromagnetic field. It obeys the Lorentz equation
$$ m\ddot{x}_i = e\left( E_i(x, t) + \epsilon_{ijk}\dot{x}_j B_k(x, t) \right) . \qquad (2.8) $$
The epsilon tensor occurring here may be unfamiliar (but see exercise 1). For the moment let me just say that the second term on the right hand side means the cross product of the velocity and the magnetic field. With this hint you should be able to follow the argument at least in outline, so we proceed. This example is more tricky than the previous ones, since the force depends not only on the position but also on the velocity of the particle (as well as explicitly on time, but this is no big deal). It turns out that in order to derive the Lorentz equation from a Lagrangian, we need not only one but four potentials, as follows:
$$ E_i(x, t) = -\partial_i\phi(x, t) - \partial_t A_i(x, t) , \qquad B_i(x, t) = \epsilon_{ijk}\partial_j A_k(x, t) . \qquad (2.9) $$
Here $\phi$ is known as the scalar potential and $A_i$ as the vector potential. (They are both parts of a relativistic four vector.) It is possible to show that the following action yields the Lorentz equation when varied with respect to x:
$$ S[x(t)] = \int dt \left( \frac{m\dot{x}^2}{2} + e\dot{x}_i A_i(x, t) - e\phi(x, t) \right) . \qquad (2.10) $$
In the static case, with a time independent scalar potential and vanishing magnetic field, the equation of motion reduces to
$$ m\ddot{x}_i = -e\,\partial_i\phi(x) . \qquad (2.11) $$
This has the same form as Newton's Law of Gravity, if the potential is specified correctly. The reason why the full Lorentz equation is much more complicated has to do with the special relativity theory; the magnetic field is a relativistic effect.

Now we turn to systems which are subject to constraints. As a first example, consider a particle which is confined to the unit sphere,
$$ x^2 + y^2 + z^2 = 1 , \qquad (2.12) $$
but is otherwise free, so that the action is
$$ S = \int dt\,\frac{m}{2}\left( \dot{x}^2 + \dot{y}^2 + \dot{z}^2 \right) . \qquad (2.13) $$
One way to proceed is to solve the constraint,
$$ z = z(x, y) = \sqrt{1 - x^2 - y^2} , \qquad (2.15) $$
and insert the result back into the action that describes the free particle, i.e.
$$ S[x, y] = \int dt\,\frac{m}{2}\left( \dot{x}^2 + \dot{y}^2 + \dot{z}(x, y)^2 \right) . \qquad (2.16) $$
Now we can vary x and y freely, except that they are not allowed to exceed one in absolute value. The variations in z are now
$$ \delta z = \delta x\,\frac{\partial z}{\partial x} + \delta y\,\frac{\partial z}{\partial y} , \qquad (2.17) $$
and the equations of motion can be derived at the expense of some effort.
There are some unavoidable weaknesses here. From eq. (2.16) it appears as if the configuration space were the unit disk in the plane, since x and y are not allowed to take values outside this disk. Or perhaps the configuration space is two copies of the unit disk, since there are two branches of the square root? But the true configuration space is a sphere. What we see is a reflection of the known fact that it is impossible to cover a sphere with a single coordinate system; our equations have only a local validity. This kind of difficulty will become more pronounced in the general problem we are heading for: Consider a Lagrangian $L_0$ defined on an n dimensional configuration space, with coordinates $q_1, \dots, q_n$, and suppose that the system is confined to live in the $(n - m)$ dimensional submanifold defined by the m conditions
$$ \Phi_I(q_1, \dots, q_n) = 0 , \qquad 1 \le I \le m . \qquad (2.18) $$
Rather than extremizing the action
$$ S_0[q] = \int dt\,L_0(q, \dot{q}) , \qquad (2.19) $$
we extremize
$$ S[q, \lambda] = \int dt \left( L_0(q, \dot{q}) + \lambda_1\Phi_1(q) + \dots + \lambda_m\Phi_m(q) \right) \qquad (2.20) $$
under arbitrary variations of the functions q and $\lambda$. The $\lambda$s are the Lagrange multipliers, and are treated as new dynamical variables.
Indeed, when the action (2.20) is varied with respect to the $\lambda$s we obtain the constraints (2.18) as equations of motion. When we vary with respect to the qs the resulting equations will contain the otherwise undetermined Lagrange multipliers, and it is not obvious that these equations have anything to do with the problem we wanted to consider. But they do. Consider the analogous problem encountered in trying to find the extrema of an ordinary function f(q) of the n variables q, subject to the m conditions $\Phi(q) = 0$. (Remember suppression of indices!) First suppose that we use the constraints to solve for m of the qs (it will not matter which ones) and call them y, leaving $n - m$ independent variables x. The extrema of f(q) may be found through the equations
$$ 0 = \delta f = \delta x\,\partial_x f + \delta y\,\partial_y f , \qquad (2.21) $$
where, however, the variations $\delta y$ are not independent variations, but have to be consistent with the constraints. In fact they are linear functions of the $\delta x$s, given by the conditions
$$ 0 = \delta\Phi = \delta x\,\partial_x\Phi + \delta y\,\partial_y\Phi . \qquad (2.22) $$
This equation has to be solved for $\delta y$ and the result inserted into eq. (2.21), which is therefore really an expression of the form $\delta x\,(\partial_x f + \text{something else}) = 0$. It does not imply $\partial_x f = 0$.
Since $\delta\Phi = 0$ for the variations we consider, nothing prevents us from rewriting eq. (2.21) in the form
$$ 0 = \delta f = \delta f + \lambda\,\delta\Phi = \delta x \left( \partial_x f + \lambda\,\partial_x\Phi \right) + \delta y \left( \partial_y f + \lambda\,\partial_y\Phi \right) , \qquad (2.23) $$
where the $\lambda$s are arbitrary functions. The $\delta y$s are still given in terms of the $\delta x$s, so it would seem at first sight that we cannot conclude that $\partial_x f + \lambda\partial_x\Phi = 0$. But (and here comes the punch line) in fact we can, provided we choose the so far arbitrary functions $\lambda$ in such a way that $\partial_y f + \lambda\,\partial_y\Phi = 0$. Since the division of the qs into xs and ys was arbitrary, we see that the restricted way of finding the extrema (making variations consistent with the constraints) is equivalent to solving the n + m equations
$$ \Phi(q) = 0 , \qquad \partial_q f + \lambda\,\partial_q\Phi = 0 \qquad (2.24) $$
for q and $\lambda$. But these are precisely the equations that we obtain from the Lagrange multiplier method, in which we do not care about the constraints while varying the action! In all fairness though, we have not solved the equations, we have just derived them in a convenient way.
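As a concrete finite dimensional illustration of eqs. (2.24) (my own example, not from the text), extremize f = xy on the unit circle by solving the n + m = 3 equations directly:

```python
import sympy as sp

# Solve Phi = 0 together with grad f + lambda * grad Phi = 0, eq. (2.24),
# without ever using the constraint to eliminate a variable.
x, y, lam = sp.symbols('x y lam', real=True)
f = x * y
Phi = x**2 + y**2 - 1

eqs = [Phi,
       sp.diff(f, x) + lam * sp.diff(Phi, x),
       sp.diff(f, y) + lam * sp.diff(Phi, y)]
print(sp.solve(eqs, [x, y, lam]))
# four extrema, at x = +/- 1/sqrt(2), y = +/- 1/sqrt(2)
```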
As long as the constraints depend only on q (and not on $\dot{q}$) it is straightforward to generalize the argument from functions to functionals. From the action
$$ S[q, \lambda] = S_0[q] + \int dt\,\lambda\,\Phi(q) \qquad (2.25) $$
we derive
$$ \frac{\delta S_0}{\delta q_i} + \lambda\,\frac{\partial\Phi}{\partial q_i} = 0 . \qquad (2.26) $$
This is the analogue of the second equation (2.24). Written out, if $L = L(q, \dot{q})$, it reads
$$ \frac{\partial L_0}{\partial q_i} - \frac{d}{dt}\frac{\partial L_0}{\partial \dot{q}_i} + \lambda\,\frac{\partial\Phi}{\partial q_i} = 0 , \qquad (2.27) $$
to be solved together with the constraint
$$ \Phi(q) = 0 . \qquad (2.28) $$
For the particle on the sphere the method gives
$$ L = \frac{m}{2}\left( \dot{x}^2 + \dot{y}^2 + \dot{z}^2 \right) + \lambda\left( x^2 + y^2 + z^2 - 1 \right) , \qquad (2.29) $$
with the equations of motion
$$ m\ddot{x} = 2\lambda x , \qquad m\ddot{y} = 2\lambda y , \qquad m\ddot{z} = 2\lambda z . \qquad (2.30) $$
Among the constants of the motion are the components of the angular momentum,
$$ J_x = y\dot{z} - z\dot{y} , \qquad J_y = z\dot{x} - x\dot{z} , \qquad J_z = x\dot{y} - y\dot{x} . \qquad (2.31) $$
At this point we go over to polar coordinates, using eqs. (2.6) with r = 1. The constants of the motion become
$$ J_x = -\sin\phi\,\dot{\theta} - \sin\theta\cos\theta\cos\phi\,\dot{\phi} , \quad J_y = \cos\phi\,\dot{\theta} - \sin\theta\cos\theta\sin\phi\,\dot{\phi} , \quad J_z = \sin^2\theta\,\dot{\phi} . \qquad (2.32) $$
It is possible to check directly, using the equations of motion for $\theta$ and $\phi$, that these are constants of the motion, but only $J_z$ is obviously conserved. The coordinate system $(\theta, \phi)$ somehow hides the others.
By the way, the kinetic energy can be expressed as
$$ T = \frac{m}{2}\left( \dot{\theta}^2 + \sin^2\theta\,\dot{\phi}^2 \right) = \frac{m}{2}\left( J_x^2 + J_y^2 + J_z^2 \right) . \qquad (2.33) $$
2.3 Symmetries
Let us return to Newton's Third Law. It amounts to a restriction on the kind of forces that are allowed in the second law, and implies that there exists a set of constants of the motion, namely the momenta. (The terminology is a little unfortunate, since we will soon introduce something called canonical momenta. They are indeed identical with the conserved momenta in simple cases, but logically there need be no connection.) Constants of the motion are useful when trying to solve the equations of motion, and Emmy Noether proved a theorem explaining when and why they exist. We present the proof for a Lagrangian of the general form $L = L(q, \dot{q})$, and afterwards we discuss a simple example. Let us say at the outset that the argument is quite subtle.
Consider first an arbitrary variation of the action. According to eq. (2.3) the result is
$$ \delta S = \int_{t_1}^{t_2} dt\,\delta q \left( \frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} \right) + \left[ \delta q\,\frac{\partial L}{\partial \dot{q}} \right]_{t_1}^{t_2} . \qquad (2.34) $$
In deriving the equations of motion the variations $\delta q(t)$ were restricted in such a way that the boundary terms vanish. This time we do something different. The variations are left unrestricted, but we assume that the function q(t) that we vary around obeys the Euler-Lagrange equations. Then the only non-vanishing term is the boundary term, and
$$ \delta S = Q(t_2) - Q(t_1) , \qquad (2.35) $$
where
$$ Q(t) \equiv \delta q\,\frac{\partial L}{\partial \dot{q}} . \qquad (2.36) $$
Now suppose that there exist special variations, say $\delta q = \epsilon f(q, \dot{q})$, for which the variation of the action vanishes,
$$ \delta S = 0 . \qquad (2.37) $$
It is understood that the Lagrangian is such that eq. (2.37) holds as an identity, regardless of the choice of q(t), for the special variations $\delta q$. (Note that, given a Lagrangian, it is not always the case that such variations exist. But sometimes they do.)
Next comes the crux of the argument. Consider variations of the particular kind that makes eq. (2.37) hold as an identity, so that $\delta q = \epsilon f$ is a known function, and restrict attention to q(t)s that obey the equations of motion. With both these restrictions in force, we can combine eqs. (2.37) and (2.35) to conclude that
$$ 0 = \delta S = Q(t_2) - Q(t_1) . \qquad (2.38) $$
The times $t_1$ and $t_2$ are arbitrary, and therefore we can conclude that $Q = Q(q, \dot{q})$ is a constant of the motion.
What this theorem does for us is to transform the problem of looking for
constants of the motion to the problem of looking for variations under which
the variation of the action is identically zero. Before we turn to examples
we generalize the argument slightly, and state the theorem properly. Thus,
suppose that there exists a special form of $\delta q$, such that
$$ \delta S = \int_{t_1}^{t_2} dt\,\frac{d}{dt}\Lambda(q, \dot{q}) . \qquad (2.39) $$
Here $\Lambda$ can be any function; the important and unusual thing is that the integrand is a total time derivative. Then the quantity Q, defined by
$$ Q(q, \dot{q}) = \delta q_i\,\frac{\partial L}{\partial \dot{q}_i}(q, \dot{q}) - \Lambda , \qquad (2.40) $$
is a constant of the motion. This is easy to see along the lines we followed above.
The theorem can now be stated as follows:
Noether's theorem: To any variation for which $\delta S$ takes the form (2.39), there corresponds a constant of the motion given by eq. (2.40).
We will have to investigate whether Lagrangians can be found for which such
variations exist, otherwise the theorem is empty. Fortunately it is by no means
empty, indeed eventually we will see that all useful constants of the motion
arise in this way.
For now, one example (but one that has many symmetries) will have to suffice. Consider a free particle described by
$$ L = \frac{m}{2}\dot{x}_i\dot{x}_i . \qquad (2.41) $$
Since only $\dot{x}$ appears in the Lagrangian, we can choose
$$ \delta x_i = \epsilon_i , \qquad (2.42) $$
where $\epsilon_i$ is independent of time. Then the variation of the action is automatically zero, Noether's theorem applies, and we obtain a vector's worth of conserved charges
$$ P_i = \frac{\partial L}{\partial \dot{x}_i} = m\dot{x}_i . \qquad (2.43) $$
We use the letter P rather than Q because this is the familiar conserved momentum vector whose presence is postulated in Newton's Third Law. Another set of three conserved charges can be found easily, since
$$ \delta x_i = \epsilon_{ijk}\epsilon_j x_k \quad\Rightarrow\quad \delta S = 0 . \qquad (2.44) $$
Here $\epsilon_i$ is again independent of t, and $\epsilon_{ijk}$ is the totally anti-symmetric epsilon tensor. Noether's theorem now implies the existence of another conserved vector, namely
$$ L_i = m\,\epsilon_{ijk}x_j\dot{x}_k . \qquad (2.45) $$
Finally, consider the variation
$$ \delta x_i = \epsilon\dot{x}_i \quad\Rightarrow\quad \delta S = \epsilon\int dt\,\frac{d}{dt}\left( \frac{m}{2}\dot{x}^2 \right) . \qquad (2.46) $$
This is of the form (2.39), and the corresponding conserved charge is
$$ E = \frac{m}{2}\dot{x}_i\dot{x}_i . \qquad (2.47) $$
This is the conserved energy of the particle. There is yet another conserved quantity that differs from the others in being an explicit function of time, but its total time derivative vanishes since it also depends on the time dependent dynamical variables. Thus
$$ \delta x_i = \epsilon_i t \quad\Rightarrow\quad \delta S = \int dt\,\frac{d}{dt}\left( m\epsilon_i x_i \right) . \qquad (2.48) $$
The corresponding conserved charge is
$$ Q_i = m\dot{x}_i\,t - m x_i , \qquad (2.49) $$
and it is easy to check that its total time derivative vanishes as a consequence
of the equations of motion. Our analysis of the free particle ends here, but we
will return to it in a moment, to show that the conserved quantities have a clear physical meaning.
What does it all mean? What does it mean for an action S[q(t)] to admit variations $\delta q(t)$ leaving the action unaffected? To see this, select a solution q(t) of the equations of motion. We know that this gives an extremum of the action. Then consider $q'(t) = q(t) + \delta q(t)$, where the variation is of the special kind that leaves the value of the action unchanged. Obviously then $S[q'(t)] = S[q(t)]$, so that the extremum is not an isolated point in the space of all qs, but rather occurs for a set of qs that can be reached from each other by means of iteration of the special variation $\delta q(t)$. In other words, given a particular solution of the equations of motion, we can get a whole set of new solutions if we apply the special variation, without going through the work of solving the equations of motion again. This leads to an important definition:
A symmetry transformation is any transformation of the space of functions
q(t) having the property that it maps solutions of the equations of motion to
other solutions.
This is not a property of the individual solutions, but of the set of all solutions. The special variations occurring in the statement of Noether's theorem are examples of symmetry transformations. Granted the converse of the statement that we proved, namely that any constant of the motion gives rise to a special variation of the kind considered by Noether, we may say that any constant of the motion arises because of the presence of a symmetry. (We will come to this converse in section 8.6.)
Let us interpret the symmetry transformations that we found for the free particle, beginning with eq. (2.42). This is clearly a translation in space. Therefore momentum conservation is a consequence of translation invariance. It is immediate that we can iterate the infinitesimal translations used in Noether's theorem to obtain finite translations, and the statement is that given a solution to the equations of motion, all trajectories that can be obtained by translating this solution are solutions, too. To be definite, given that (vt, 0, 0) is a solution for constant v, (a + vt, b, c) is a solution too, for all real values of (a, b, c). Translation invariance acquires more content when used in the fashion of Newton's third law, which we can restate as: the action for a set of particles has translation symmetry. For free particles this is automatic. When interactions between two particles are added, the law becomes a restriction on the kind of potentials that are admitted in
$$ L = \frac{m_1}{2}\dot{x}_1^2 + \frac{m_2}{2}\dot{x}_2^2 - V(x_1, x_2) . \qquad (2.50) $$
Indeed invariance under (2.42) requires that $V(x_1, x_2) = V(x_1 - x_2)$, which is a strong restriction.
Eq. (2.44) expresses the fact that the Lagrangian has rotation symmetry, while eq. (2.46) is an infinitesimal translation in time: Given a solution x(t), the function
$$ x'(t) = x(t + \epsilon) \qquad (2.51) $$
is a solution too.
Problem 2.1
Problem 2.2
Problem 2.3
Problem 2.4
Consider the Lagrangian
$$ L = \frac{1}{2}M_{ij}(q)\,\dot{q}_i\dot{q}_j , $$
where the matrix elements of $M_{ij}$ depend on the configuration space coordinates, and the matrix is assumed to have an inverse $M^{-1}_{ij}$. Write down the Euler-Lagrange equations and solve for the accelerations.
Problem 2.5
Write down the Lagrangian for a double pendulum. (The rod
of the second is attached to the bob of the first. Bobs are heavy, the rigid rods not.)
How many constants of the motion can you find?
Problem 2.7
Check that the result from eq. (2.17) is the same as that
obtained from the recipe in eq. (2.22). Beware of changes in notation!
Problem 2.8
Consider a particle with kinetic energy $T = m(\dot{x}^2 + \dot{y}^2 - \dot{z}^2)/2$, and constrain it to the hyperboloid $x^2 + y^2 - z^2 = -1$, z > 0. Treat this both with coordinates adapted to the hyperboloid and with the Lagrange multiplier method. Show that the kinetic energy is positive and find three constants of the motion.
Problem 2.9
In the brachistochrone problem one considers a particle sliding
along a curve in the x-z-plane (z is vertical) under the influence of gravity. Choose
this curve so that the time of descent from (x, z) = (x0 , z0 ) to the origin is minimal.
Problem 2.10
Consider the Lagrangian
$$ L = \frac{1}{2}\dot{q}^2 - \lambda\,q^n , $$
where $\lambda$ is a real number and n is an integer. Determine those values of n for which the Lagrangian transforms into a total derivative under
$$ \delta q = t\dot{q} - \frac{q}{2} . $$
This is known as conformal symmetry.
Problem 2.11
Consider the action of a free particle,
$$ S = \int_{t_1}^{t_2} \frac{m}{2}\dot{x}^2\,dt . $$
Evaluate this integral for an x(t) that solves the equations of motion, and express the
answer as a function of t1 , t2 , and the initial and final positions x1 and x2 . Repeat
the exercise for a harmonic oscillator.
3 Interlude: Conic sections
The theory of conic sections was one of the crowning achievements of the
Greeks. Their results will be important in the gravitational two body problem,
but the theory is no longer as well known as it deserves to be, so here is a brief
account.
By definition a conic section is the intersection of a circular cone with a plane. The straight lines running through the apex of the cone are called its generators, and we will consider a cone that extends in both directions from its apex. If you like, it is the set of one dimensional subspaces in a three
dimensional vector space. Generically, the plane will intersect the cone in such
a way that every generator crosses the plane once, or in such a way that exactly
two of the generators miss the plane. Apollonius proved that the intersection is
an ellipse in the first case, and a hyperbola in the second. There is a borderline
case when exactly one generator is missing. Then the intersection is a parabola.
We ignore the uninteresting case when the plane goes through the apex of the
cone.
This is all very easy if we use the machinery of analytic geometry. For simplicity, choose a cone with circular base, symmetry axis orthogonal to the base, and opening angle 90 degrees. It consists of all points obeying
$$ x^2 + y^2 - z^2 = 0 . \qquad (3.1) $$
A plane that misses the apex can, after a suitable rotation and rescaling, be taken to be
$$ z = cx + 1 . \qquad (3.2) $$
Inserting the solution for z in the equation it is easy to see that the intersection is either an ellipse, a hyperbola, or a parabola, provided you recognize their equations, as I assume. The section is a circle if c = 0 and a parabola if c = 1.
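The classification is easy to verify symbolically; a sketch (my own check, using the plane of eq. (3.2) as written above):

```python
import sympy as sp

# Substitute z = c*x + 1 into the cone x^2 + y^2 - z^2 = 0 and inspect the
# quadratic form in x: the coefficient of x^2 decides the type of conic.
x, y, c = sp.symbols('x y c', real=True)
section = sp.expand(x**2 + y**2 - (c*x + 1)**2)
print(section)                 # x^2*(1 - c^2) - 2*c*x + y^2 - 1, reordered
print(section.coeff(x, 2))     # 1 - c**2: ellipse if c^2 < 1,
                               # parabola if c^2 = 1, hyperbola if c^2 > 1
```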
It is an interesting exercise to prove this in the style of Apollonius. Let the
cone have arbitrary opening angle. Take the case when the plane intersects
every generator once in the upper half of the cone. Place two spheres inside
the cone, one above and one below the plane, and let them grow until each
touches the plane in a point and the cone in a circle. (See Fig. 3.1.) This
clearly defines the spheres uniquely. Denote the points by F1 and F2 , and the
circles by C1 and C2 . Now consider a point P in the intersection of the cone
Figure 3.1. A vertical cross section through Apollonius' proof; to understand the proof you have to think in three dimensions.
and the plane. The generator passing through P intersects the circles C1 and
C2 in the points Q1 and Q2 . Now the trick is to prove that the distance P F1
equals the distance P Q1 , and similarly the distance P F2 equals the distance
P Q2 . This is true because the distances measure the lengths of two tangents
to the sphere, meeting at the same point. It then follows that the sum of the
distances P F1 and P F2 is constant and equal to the length of the segment of
the generator between the circles C1 and C2 , independently of which point P
on the intersection we choose. This property defines the ellipse. This is the
proof that the intersection between the cone and the plane is an ellipse with
its foci at F1 and F2 . If you are unable to see this, consult an old fashioned
geometry book.
There is an alternative definition of an ellipse, in terms of a focus and a line called the directrix: the ellipse consists of all points whose distance r from the focus is e times their distance from the directrix, where the eccentricity obeys $0 \le e < 1$. Place the focus at the origin and the directrix to the right of it, and let p denote the value of r straight above the focus. Then
$$ r = e\left( \frac{p}{e} - x \right) = p - ex . \qquad (3.3) $$
(To see this, note that the distance from the focus to the directrix is p/e.) Otherwise expressed,
$$ \frac{p}{r} = 1 + e\cos\phi , \qquad (3.4) $$
where $\phi = 0$ gives the point closest to the directrix. For a general point on the ellipse we find
$$ r = p - er\cos\phi \quad\Rightarrow\quad x^2 + y^2 = r^2 = (p - ex)^2 \quad\Rightarrow\quad \frac{(x + ea)^2}{a^2} + \frac{y^2}{b^2} = 1 , \qquad (3.5) $$
where
$$ a = \frac{p}{1 - e^2} , \qquad b^2 = pa = (1 - e^2)a^2 . \qquad (3.6) $$
The major axis of the ellipse has length 2a, and the minor axis has length 2b. The eccentricity is given in terms of these by
$$ e^2 = \frac{a^2 - b^2}{a^2} . \qquad (3.7) $$
Finally the distance between the centre and the focus equals ea. To see this we set $\phi = 0$ and $\phi = \pi$ in eq. (3.4), and calculate
$$ \frac{r(\pi) - r(0)}{2} = \frac{p}{2}\left( \frac{1}{1 - e} - \frac{1}{1 + e} \right) = ea . \qquad (3.8) $$
Now describe the ellipse, centred at the origin of the complex plane, by means of the complex coordinate w = x + iy:
$$ w(t) = a\cos t + i\,b\sin t . \qquad (3.9) $$
The parameter t must not be confused with the angle between the radius vector and the x-axis. Surprisingly, if we square this ellipse we obtain an ellipse
with its focus at the origin. First we see that
$$ Z(t) = w^2 = \frac{a^2 + b^2}{2}\cos 2t + i\,ab\sin 2t + \frac{a^2 - b^2}{2} . \qquad (3.10) $$
Using eq. (3.7) we see that the eccentricity E of the new ellipse is
$$ E = \frac{a^2 - b^2}{a^2 + b^2} . \qquad (3.11) $$
So we can rewrite the equation for the new ellipse as
$$ Z(t) = A\cos 2t + i\,B\sin 2t + EA , \qquad (3.12) $$
where
$$ A \equiv \frac{a^2 + b^2}{2} , \qquad B \equiv ab , \qquad EA = \sqrt{A^2 - B^2} . \qquad (3.13) $$
E is the eccentricity of an ellipse with semi-major axis A and semi-minor axis B, and consequently EA is the distance between its focus and its centre. This is again an ellipse, but shifted to be centred at one of its foci (in the sense that the angle is now seen from a focus) and traversed twice as the original ellipse is traversed once.
This trick was introduced by Karl Bohlin, working at the Pulkovo observatory in Russia in 1911. His point was that the transformation $w = \sqrt{Z}$ is not analytic at the origin. This enabled him to deal with collisions between point particles, thought of as limiting cases of elliptical orbits whose eccentricity approaches 1. At the collision the particle presumably reverses its direction, but this is a rather singular occurrence. In terms of the variable w it is an undramatic event.
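The squaring is easy to visualize numerically; a sketch (my own illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot the ellipse w(t) = a*cos(t) + i*b*sin(t) of eq. (3.9) and its square
# Z = w^2, eq. (3.12): the first is centred on the origin, the second has
# a focus there, and is traversed twice per revolution of w.
a, b = 2.0, 1.0
t = np.linspace(0, 2 * np.pi, 400)
w = a * np.cos(t) + 1j * b * np.sin(t)
Z = w**2

plt.plot(w.real, w.imag, label='w: centred at origin')
plt.plot(Z.real, Z.imag, label='Z = w^2: focus at origin')
plt.scatter([0], [0], color='k')   # the origin
plt.axis('equal'); plt.legend(); plt.show()
```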
Problem 3.1
Using the definition in terms of the directrix, prove that the sum of the distances from the two foci to a point on an ellipse is constant. Also prove that the area of an ellipse equals $\pi ab$.
Problem 3.2 Place a lamp at one focus of an ellipse, and let the circumference
of the ellipse act as a mirror. Prove that all planar light rays reconverge at the same
time at the other focus. Try to do this in two different ways: by means of a calculation,
and by means of an argument that makes it all obvious.
Problem 3.3
Consider a family of highly eccentric Kepler ellipses Z(t), and take the limit representing colliding particles. Exactly what happens at the collision when it is described by the Hooke ellipse $w(t) = \sqrt{Z(t)}$?
Johannes Kepler spent his life pondering the observations of the solar system
made by Tycho Brahe, and found that the motion of the planets around the
sun follows three simple rules:
1. A planet moves along an ellipse with the sun in one of the foci.
2. The radius vector covers equal areas in equal times.
3. The square of the period of all the planets is proportional to the cube of
their major axes.
To appreciate Kepler's work fully, note that there are important facts about the solar system (such as what the distances are) that do not follow simple rules. Moreover the observational data gave the planetary orbits projected on a sphere centred at a point which itself moves along an ellipse around the sun, so it was not obvious that they admitted of a simple description at all.
Newton derived Kepler's laws from his own Laws, with the necessary assumption that the force between the planets and the sun is directed along the radius vector (the force is central) and is inversely proportional to the square of the distance. This remains the number one success story of physics, so we should be clear about why this is so. Naively Kepler's laws may seem simpler than Newton's, but this is not so, for at least two reasons. One is that Newton's laws unify a large body of phenomena, from the motion of planets to the falling of stones close to the Earth. The other reason is that improved observations reveal that Kepler's laws are not quite exact, and the corrections can be worked out mathematically from Newton's laws.
For Mercury (which is hard to observe) the eccentricity e = 0.21, for the Earth e = 0.02, and for Mars e = 0.09. Kepler's main concern was with Mars. If e = 0 the ellipse becomes a circle. Unbound motion through the solar system is described by hyperbolas with e > 1, but Kepler did not know this.
We assume that the sun and the planets are small compared to the distances between them, so that they can be approximated as being pointlike. (In the Principia, Newton proved from properties of the inverse square law that this approximation is exact for spherical bodies.) Later on, we can go back to these assumptions and see if we can relax them; this will give the corrections referred to above.
So we have decided that the configuration space of our problem has six dimensions, spanned by the positions of the sun (X) and one planet ($x_P$), and we try the Lagrangian
$$ L = \frac{M}{2}\dot{X}_i\dot{X}_i + \frac{m_P}{2}\dot{x}_{Pi}\dot{x}_{Pi} - V(X, x_P) . \qquad (4.1) $$
Depending on the form of the function V we may have to exclude the points $X = x_P$ from the configuration space; we insist that the function V takes finite values only as a function on configuration space. Anyway this will give six coupled second order differential equations. In general they will not admit any simple solutions. Kepler's work implies that the solutions should be simple,
so we try to build some symmetries into the problem. We aim for at least six conserved quantities, one for each degree of freedom, since this should result in a soluble problem. Conservation of momentum and energy is already postulated, so we need two more. The answer is rotational symmetry. It is not an objection that an ellipse is not symmetric under rotations. All we need is that if a particular ellipse is a solution, then any ellipse which can be obtained from it by means of rotations is a solution too, even if there is no planet moving along it due to the choice of initial conditions. It might seem that rotational symmetry is overdoing it, since it will yield three conserved quantities, but, for reasons fully explained by Hamilton-Jacobi theory, only two of these are really useful.
With translational and rotational symmetry in place, we find
$$ V(X, x_P) = V(X - x_P) = V(|X - x_P|) . \qquad (4.2) $$
This works for any function V of one variable. Next we introduce coordinates $x_i = x_{Pi} - X_i$, which are invariant under translations, together with coordinates describing the centre of mass. Then the centre of mass coordinates decouple, and their equations can be solved and set aside. There remains a Lagrangian for a one-body problem, involving only three degrees of freedom:
$$ L = \frac{m}{2}\dot{x}_i^2 - V(|x|) . \qquad (4.3) $$
Here m is the reduced mass, almost equal to the mass of the planet since the sun is very heavy in comparison. The coordinate $x_i$ vanishes at the centre of mass of the system, which is well inside the sun, and can be approximately identified with the centre of the sun. This manoeuvre should be familiar from elementary mechanics; I just want to emphasize that it is translational symmetry in action. Rotational symmetry implies the existence of a conserved vector,
$$ L_i = m\epsilon_{ijk}x_j\dot{x}_k . \qquad (4.4) $$
The motion is therefore confined to the plane orthogonal to this vector. Introduce polar coordinates $(r, \phi)$ in this plane. The magnitude of the angular momentum is then
$$ l = \frac{\partial L}{\partial \dot{\phi}} = mr^2\dot{\phi} . \qquad (4.6) $$
It follows that
$$ \dot{A} = \frac{1}{2}r^2\dot{\phi} = \frac{l}{2m} = \text{constant} , \qquad (4.7) $$
where $\dot{A}$ is the area covered by the radius vector per unit time. But this is Kepler's Second Law, which therefore holds for all central forces. We are on the right track!
Together with Kepler's second law, energy conservation is enough to solve
the problem. Using eq. (4.6) the conserved energy is
$$ E = \frac{m}{2}\left( \dot{r}^2 + r^2\dot{\phi}^2 \right) + V(r) = \frac{m\dot{r}^2}{2} + \frac{l^2}{2mr^2} + V(r) = \text{constant} . \qquad (4.8) $$
This gives the formal solution
$$ dt = \frac{dr}{\sqrt{\frac{2}{m}\left( E - \frac{l^2}{2mr^2} - V(r) \right)}} . \qquad (4.9) $$
For the angular variable we then have
$$ d\phi = \frac{l\,dt}{mr^2} = \frac{l\,dr}{r^2\sqrt{2m\left( E - \frac{l^2}{2mr^2} - V(r) \right)}} . \qquad (4.10) $$
This equation determines $\phi(r(t))$, and the central force problem is thereby fully solved at the formal level. If we are only interested in the form of the orbits, and not the time development, eq. (4.10) is all we need; it will give us $\phi(r)$, and after inversion $r(\phi)$, which is the equation for the form of the orbit.
The radial motion is governed by the effective potential
$$ V_{\text{eff}}(r) = \frac{l^2}{2mr^2} + V(r) . \qquad (4.11) $$
Circular orbits correspond to radii where the effective potential has an extremum,
$$ V'_{\text{eff}}(r) = 0 . \qquad (4.12) $$
For the inverse square law force, with
$$ V(r) = -\frac{k}{r} , \qquad (4.13) $$
this happens at
$$ r = \frac{l^2}{km} . \qquad (4.14) $$
The circular orbit is stable against small radial perturbations provided that the extremum is a minimum,
$$ V''_{\text{eff}}(r) > 0 , \qquad (4.15) $$
which is indeed the case here. Thus stability of the circular orbit requires that the force does not fall off too quickly with distance.
There is still the question whether small departures from the circular orbit
will give rise to ellipses, or to something more complicated. This is really a
question about the ratio between the time it takes for the planet to complete
a radial oscillation and the time it takes for the angle $\phi$ to increase by $2\pi$. The angle covered while r runs from $r_{\min}$ to $r_{\max}$ is
$$ \Delta\phi = \int_{r_{\min}}^{r_{\max}} \frac{l\,dr}{r^2\sqrt{2m\left( E - V_{\text{eff}}(r) \right)}} , \qquad (4.17) $$
where $r_{\min}$ and $r_{\max}$ are the turning points of the radial motion.
Note that for the inverse square law the effective potential
$$ V_{\text{eff}}(r) = \frac{l^2}{2mr^2} - \frac{k}{r} $$
is in fact bounded from below whenever $l \neq 0$. The case when l = 0 is indeed troublesome from a physical point of view: the two bodies will collide, and we do not have a prescription for what is to happen after the collision. The case of coinciding particles is not included in our configuration space.
(4.20)
w
= w
|w|
2 + |w|2 = 2 .
41
(4.22)
d
= constant .
dt
(4.23)
This relates the parameter t to the angle between the radius vector and the
major axis. The idea now is to introduce a new time , related to t in such a
way that Keplers second law holds also for the ellipse we get when we square
the Hooke ellipse. Thus, remembering that the phase of Z is 2, we require
2|Z|2
d
= constant .
d
(4.24)
d
1 d
=
.
d
|w|2 dt
(4.25)
1 dw2
|w|2 dt
2 d
|w|2 dt
w
Z
= = 4 3 ,
w
|Z|
(4.26)
where is the constant energy of the Hooke ellipse. This is precisely Newtons
force law for gravity. So we conclude that Keplers First and Second Laws
together imply the inverse square law, with the potential
V (r) =
k
.
r
(4.27)
There is no other solution. Keplers First and Second Laws hold if and only if
the inverse square law holds for the force. The argument is watertight because
every Kepler ellipse can be obtained from a Hooke ellipse using Bohlins trick,
and a Hooke ellipse arises only in the harmonic oscillator potential.
To confirm our conclusion let us go back to eq. (4.10), which gives a formal
solution for the form of the orbit. We choose eq. (4.27) for V (r), and we also
perform the substitution
u=
The result is
1
r
du =
dr
.
r2
(4.28)
42
ldu
d =
.
2mE l2 u2 + 2mku
(4.29)
= 0 arccos q
1+
2El2
mk2
(4.30)
1+
2El2
.
mk2
(4.32)
p
l2 mk2
k
=
=
.
2
2
1e
mk 2|E|l
2|E|
(4.33)
The solution remains valid also for E > 0, in which case it describes a hyperbola with e > 1. Physically this is an unbound trajectory, like that of a
spaceship heading for the stars.
43
ma
q
k
rdr
l2
2m|E|
+ 2ar r 2
ma
rdr
p
.
2
2
k
a e (r a)2
(4.36)
We can do the integral if we can find a substitution that simplifies the integral
Z
rdr
,
1 (r 1)2
(4.37)
(4.38)
Now we can do the integral. With a suitable choice of the integration constant
we obtain
t=
ma3
( + e sin ) .
k
(4.39)
ma3
.
k
(4.40)
dq
dt
2
V (q) ,
(4.41)
where we do not use the dot notation because we will soon have two different
time parameters to reckon with. Assume that the potential is homogeneous
of degree , meaning that there exists a real number such that for any real
non-zero number
44
V (q) = V (q) .
(4.42)
There could be several variables qi . For simplicity I write only q. Let us also
change the time scale, and define a new function q by
q(t) q (t ) = q(t) ,
t =
2
2
t.
(4.43)
It follows that
dq
dq
dt d
.
= (q(t)) = 2
dt
dt dt
dt
(4.44)
We can now check that our rescalings represent a symmetry because, under
this transformation,
dq
L q,
dt
dq
L q,
d
dq
= L q,
dt
(4.45)
This has the effect of changing the value of the action with a constant factor,
and it follows that q (t ) is an extremum of S[q (t )] if q(t) is an extremum of
S[q(t)]. In this sense rescaling is a symmetry of the action. (Compare problem
2.10. If you find the argument difficult, you can check directly that q (t ) is a
solution whenever q(t) is.)
The harmonic oscillator has a potential V q 2 , homogeneous with = 2.
Scaling symmetry is present with t = t. Given a solution q(t) there is another
solution that is a blown up version of this, with amplitude a factor of larger.
Because t = t the period of the oscillations are unaffected by the scaling,
and we seewithout looking at any explicit solutionsthat the period of
the oscillations are independent of their amplitudes. Galilei first made this
observation while celebrating mass in the cathedral of Pisa.
Newtons law of gravity uses a homogeneous potential with = 1, so
similarity holds with t t = 3/2 t. Two ellipses with the same shape (and
the planetary orbits are all close to circular) will therefore have their periods
and their axes related by
R R = R
T T = 3/2 T
T 2
R3
=
.
T2
R3
(4.46)
dt f (t ) .
t0
(4.47)
45
For the argument to follow it is important that the time average of the derivative of a bounded function is zero, i.e.
df
dt
1
(f (t) f (t0 )) = 0
t t
= lim
(4.48)
(4.49)
xi i V (x) = V (x) .
(4.50)
(Proof: Take the derivative with respect to , and then set = 1.)
Now the calculation is easy:
2 hT i = mx
d
(mxi x i ) mxi x
i
dt
= hxi m
xi i =
(4.51)
(4.52)
hEi = hT + V i = hT i 0 .
(4.53)
This is the familiar fact that motion bounded by gravity can take place only
if the total energy is negative. For the harmonic oscillator we deduce that the
time averages hT i and hV i are equal.
The virial theorem has been used by astronomers to estimate the masses
46
of clusters of stars and clusters of galaxies, assuming that they are gravitationally bound. This led to the first evidence for dark matter.1 Note also the
counterintuitive fact that if energy leaves a self-gravitating cluster of particles
its energy hEi becomes more negative, which means that its average kinetic
energy hT i grows. In some sense the gas becomes hotter when energy is lost.
The calculation in eq. (4.51) is of interest even for non-potential forces, if
we break it off after the first line:
2 hT i = hxi Fi i .
(4.54)
If the forces are the constraint forces keeping an ideal gas contained inside a
box, we can use this relation to deduce the ideal gas law. We turn the sum
into an integral, recall the definition of the pressure P as force per unit area,
and apply Gauss law to the result:
2 hT i = P
dAi xi = P
dV i xi = 3P V .
(4.55)
(4.56)
z1 = m2
(4.57)
The pioneering work on the Coma cluster is described, very readably, in F. Zwicky, On the Masses
of Nebulae, and Clusters of Nebulae, Astrophys. J. 86 (1937) 217.
47
(4.58)
w2 = z1 z3 ,
w3 = z2 z1 .
(4.59)
w
1 = m
(4.60)
where m = m1 + m2 + m3 and
a=
w1
w2
w3
+
+
.
|w1 |3 |w2 |3 |w3 |3
(4.61)
This time we are looking for a special solution, not at the general case. So let
us assume that the triangle is an equilateral one,
w2 = e2i/3 w1 ,
w3 = e4i/3 w1 .
(4.62)
A glance at eqs. (4.60) shows that this property can be preserved in time.
Then we have that a = 0, and the only equation we need to solve is
w
1 = m
This we know how to do.
To interpret the solution, solve for
mz1 = m3 w2 m2 w3
w1
.
|w1 |3
(4.63)
(4.64)
z1 =
(m22 + m2 m3 + m23 ) 2 z1
,
m2
|z1 |3
(4.65)
and similarly for the other two particles. Hence the particles are all being
accelerated towards their common center of mass, with effective masses that
take an unexpected form. Each individual particle travels on an ellipse, but
the tree of them do so in unison, in such a way that they always span an
equilateral triangle.
48
Figure 4.1. The two stable Lagrange points in Jupiters orbit. The Greek and
Trojan asteroids lie within roughly one astronomical unit from the Lagrange
points.
49
With the advent of the computer it has become possible to follow a large
number of solutions to the three body problem on the screen, with no special
effort. The zoo of solutions include ones where the third body escapes from
the system, leaving the remaining pair more tightly bound than before.
Problem 4.1
The conservation of angular momentum is used in the gravitational two body problem to show that the trajectory is confined to a plane. Now
consider an electrically charged particle moving in the electromagnetic field of a magnetic monopole, that is to say a hypothetical particle which, if placed at the origin,
gives rise to the magnetic field
xi
.
r3
Here b is the magnetic charge of the monopole, and the electrically charged particle
obeys Lorentz Law. Compute the time derivative of the particles angular momentum.
Find a conserved quantity which is a modified version of the angular momentum, and
use the existence of this quantity to draw a qualitative conclusion about the trajectory
of the particle.
Bi = b
Problem 4.2
Consider the Earth-Moon system. Because of the tides some
dissipation of energy takes place. How does this affect the distance of the moon from
the earth?
Problem 4.3
er
,
k>0, >0.
r
What can you say about the existence and stability of circular orbits?
V (r) = k
Problem 4.4
In the theory of black holes one encounters the following
equation for particles orbiting the black hole,
2
2m
L
r 2 + 1
+
1
= E2 ,
r
r2
where r(t) is related to the distance to the event horizon at r = 2m, t is related
to time, m is the mass of the black hole, E is the energy of the particle, and L its
angular momentum. The equations make sense only if r > 2m. You can choose L2 and
E 2 freely. Compute the smallest possible value of r for which a (marginally) stable
circular orbit (with r = constant) exists.
Problem 4.5
If the Sun is flattened at its poles it will have a quadrupole
moment, and the potential is
q
k
+ 3 .
r
r
The orbits will no longer be closed. Compute the angle by which the perihelion moves
during one revolution, to first order in q.
V (r) =
50
Problem 4.6
For the Newtonian potential (4.27) the two body problem
admits an additional conserved vector
k
xi .
r
This is known as the Runge-Lenz vector. Check that it is indeed conserved. In what
direction does it point?
Mi = ijk x j Lk
Problem 4.7
The mass of Jupiter is 2 1027 kg and its distance from the Sun
is 8 1011 m. Suppose the Sun suddenly disappears. How long would it take for the
Trojan asteroids to crash onto Jupiter?
Problem 4.8
Let
w
= w|w|a1 ,
Z = w .
Choose a time parameter = (t) so that Keplers Second Law holds for Z( ), and
prove that
d2 Z
= cZ|Z|A1 ,
d 2
where c is a constant and
=
a+3
,
2
(a + 3)(A + 3) = 4 .
5 Small oscillations
An important class of equations that we can actually solve are the linear
ones. Their importance stems from the fact that departures from equilibrium
are described by equations that are linear to first ordernear a minimum,
most smooth potentials look like a collection of harmonic oscillators. When
left alone they are too simple to be interesting, but once we couple them to
external forces many delightful things happen. Or harmful things, depending
on your point of view. Either way the subject is of interest to engineers and
physicists alike.
It is perhaps worth remarking that once we allow the external forces to
depend on time the phase space picture is affected in a significant way
the phase space trajectories are now allowed to cross themselves, because the
external conditions may have changed by the time the trajectory returns to
the initial point.
x
+ 2 x = 0 ,
k
.
m
(5.1)
The assumption that the string constant k is positive was slipped in when we
wrote 2 = k/m. The general solution is
where
x(t) = a1 cos t + a2 sin t = A cos (t + ) = Re eit ,
= Aei ,
A=
a21 + a22 ,
tan =
a2
.
a1
(5.2)
(5.3)
52
Small oscillations
still a solution. It does not matter if we take the real part before or after the
additionbut care must be exercised if we multiply two solutions together.
The total energy of the oscillator is
m 2 k 2 m 2 2
x + x = A .
(5.4)
2
2
2
It is manifestly independent of time. And the subject is exhausted.
To make matters more interesting we introduce an external force, and consider the forced oscillator
E=
1
F (t) .
(5.5)
m
To solve this equation, with some specified function F (t), we appeal to the
general theory of ordinary differential equations: it is enough to find one particular solution and then add the general solution of the homogeneous equation
(the one with F = 0).
It goes without saying that some forces are more important than others. An
interesting example is the periodically varying force
m
x + kx = F (t)
x
+ 2x =
F (t) = f cos (t + ) ,
(5.6)
which we will solve under the assumption that 6= . For a particular solution
we try the Ansatz
xpart (t) = B cos (t + ) .
(5.7)
Plugging this into the equation will determine B. Adding the general homogeneous solution we find the noteworthy general solution
x(t) = A cos (t + ) +
f
cos (t + ) .
2 )
m( 2
(5.8)
The remarkable thing is that the amplitude will get very large if the system is
driven by a periodic force whose frequency is close to the natural frequency of
the system. This phenomenon is called resonance. In fact, it may well be that
resonance drives the system out of the regime in which the harmonic oscillator
approximation is valid. (For the special case = , do exercise 2.)
The solution is a superposition of two harmonics with different frequencies.
Close to resonance the frequencies of the two harmonics almost coincide, and
we will observe the phenomenon of beats. Recall that
(1 2 )t
(1 + 2 )t
cos
.
(5.9)
2
2
If 1 2 this can be regarded as a vibration with frequency 1 2 ,
but with an amplitude modulated by a sine-wave of very low frequency. Our
case is a bit more complicated because of the differing amplitudes and phases
cos 1 t + cos 2 t = 2 cos
53
(5.10)
(5.11)
(5.12)
z = x + ix ,
(5.13)
1
Im [z(t)] .
(5.15)
it
z(t) = e
1
z0 +
m
it
F (t )e
dt
(5.16)
(5.17)
54
Small oscillations
There is a net transfer of energy if the external force has a Fourier component
corresponding to the intrinsic frequency of the oscillator. If the force acts for
a short timeas compared with the natural time scale in the problem, which
is set by the exponential in the integrand can be ignored, and the energy
imparted
to the system is the kinetic energy associated to the momentum
R
F dt.
5.2 Damped and forced oscillations
A particle encountering air resistance, or more generally a particle moving in
a viscous medium, is really a system with very many degrees of freedom. It
often happens that the frequencies associated with the environment are very
much larger than the frequencies associated with the system of interest. In
many such cases we can introduce friction as an effective force. The coupling
is expected to grow with velocity, so we try the equation
m
x+ x+kx
=0
2
x
+2x+
0x = 0 ,
02 =
k
,
m
. (5.18)
2m
1
q=0.
C
(5.19)
= i
02 2 .
2 2
2 2
x(t) = a1 et+i 0 t + a2 eti 0 t .
(5.20)
(5.21)
There are two qualitatively distinct cases to consider. If the damping is weak,
that is if 02 > 2 , the general solution is a damped oscillation
x(t) = et a1 eieff t + a2 eieff t .
(5.22)
55
Note that the effective frequency eff is always smaller than the bare frequency 0 , as is reasonable. If the damping is strong enough so that 02 < 2
the solution decays exponentially without oscillations.
It is instructive to consider the critical case 02 = 2 . In the limit we obtain
the solution aet , but this cannot be the general solution since it contains
only one integration constant. In fact the general solution is
x(t) = (a + bt)et .
(5.23)
This must be the general solution because it contains two integration constants.
Finally we come to the forced and damped oscillator
m
x + x + kx = F (t) = f cos (t) ,
(5.24)
f it
e ,
m
(5.25)
(5.26)
insert in the equation, cancel the exponential, and solve for B. The result is
(2 + 2i + 02 )B =
f
m
B=
1
f
.
2
2
m 0 + 2i
(5.27)
b=
1
f
p
,
m (02 2 )2 + 42 2
tan =
2
.
02
(5.28)
x(t) = b cos (t + ) .
(5.29)
Of course we could add the general homogeneous solution, but since this decays
to zero we ignore it. When the force has acted for some time the transient part
will be effectively zero, and a steady state described by the above solution sets
in.
Because of the damping the amplitude no longer goes to infinity at resonance, but it still has a pronounced maximum. The phase shows that the
system does not oscillate in phase with the external force. When is very
small so is , meaning that the oscillations follow the external force with no
phase shift. The sign of is negative, but its tangent switches sign at = 0 .
Small oscillations
56
Indeed always lies between 0 and , meaning that the system always lags
behind the force. For very large the phase shift is close to , because the
accelerations are large, and the acceleration of an oscillator is 180 out of phase
with its displacement.
We should look at the energy budget of the damped oscillator. We begin
with the damped but free oscillator (F = 0). Then the time derivative of the
energy function must be negative. Indeed
d
E =
dt
m 2 k 2
x + x
2
2
= x(m
x + kx) = x 2 = 2mx 2 .
(5.30)
Now consider the forced and damped oscillator in steady state. It is still losing
energy to the frictional forces at exactly this rate. This energy must then be
supplied as work by the external force, which is something we may want to
know about. Inserting the solution (5.29) we obtain the energy supplied by
the external force per unit time as
= 2mb2 2 sin2 (t + ) .
|E|
(5.31)
For most purposes it will be enough to know the time average of this quantity.
This time average is a function of the frequency ; recalling from (5.28) how
the amplitude b depends on we obtain
2
2
=f
I() h|E|i
.
m (02 2 )2 + 42 2
(5.32)
f 2 1
.
4m 2 + 2
(5.33)
This is the famous Lorentzian line shape function, giving the sharp resonant
response of a low-loss system. Among many other applications it expains the
intrinsic width of spectral absorption (and emission) linesalthough the dominating effect when astronomers observe these is the broadening due to the
Doppler shift, since the atoms are in thermal motion.
We see that the width of the Lorentzian line shape function grows with
the damping . On the other hand the total area under the curve is (almost)
independent of , as follows from the calculation
Z
f
I()d =
I()d
I()d =
4m
0
1 d
f
=
. (5.34)
2
4m
1 + 2
57
(5.35)
with some prescribed function 2 (t). This leads to the theory of parametric
resonancea child setting a swing in motion belongs to this class. Another
interesting class is given by
x
(t) + 2 x(t) + gx(t c) = 0 .
(5.36)
Provided that 0 < c < the force is in phase with the velocity, and any initial
oscillation tends to grow. Energy is being absorbed from the environment. In
the limit of small c this is a negatively damped oscillator. See ex. 5. The
Tacoma Narrows bridgea famous example in the theory of small, and not so
small, oscillationsbelongs here.1
1 i, j N .
(5.37)
(5.39)
(5.40)
For a nice account, with many historical glimpses, see A. Jenkins, Self-oscillation, Phys. Rep. 525
(2013) 167.
58
Small oscillations
xi = i eit .
(5.41)
Inserting this into the equation, and then cancelling the exponential, gives the
matrix equation
( 2 mij + kij )j = 0 .
(5.42)
There is a non-zero solution for the vector i if and only if the determinant of
the matrix vanishes, that is if and only if
kij 2 mij = 0 .
(5.43)
1
q1 = (1 + 2 ) ,
2
The Lagrangian becomes
1
q2 = (1 2 ) .
2
(5.45)
59
1 2
q1 + q22 q12 2 q22 ,
2 = 1 + 2k .
(5.46)
2
The normal modes have a simple interpretation. If q2 = 0 the two pendula
move in phase with the original frequency (equal to 1), if q1 = 0 the pendula
move in opposite phase with a frequency increased by the spring. As an afterthought we observe that, in this case, the two normal modes could have
been identified from the outset using only symmetry considerations.
The general solution for 1 , 2 is thus a sum of two harmonicsand if the
difference in frequency between the two is small we should be able to observe
the interesting phenomenon of beats, as in eq. (5.11). See exercise 6.
When we couple a set of harmonic oscillators to an external force it is the
normal modes that count. As the simplest example, consider
L=
x
1 + 02 x1 + k(x1 x2 ) = f cos t
x
2 + 02 x2 k(x1 x2 ) = 0
(5.47)
.
Each normal mode gives rise to its own resonance frequency. The mathematical problem has been reduced to that of studying twoor in general N
independent forced oscillators.
Finally, let us consider eq. (5.40) for N degrees of freedom. We can decouple
the degrees of freedom in three steps: First we diagonalise the symmetric matrix mij , using rotations. Its eigenvalues must be positive (to prevent exercise
1.2 from becoming relevant). In the next step we rescale its eigenvectors so
that the diagonalized matrix becomes the identity matrix. These coordinate
changes are conveniently described on the Lagrangian level,
L=
1X
1X
(x i mij x j xi kij yj ) =
(y i i ij y j yi kij
yj ) =
2 i,j
2 i,j
=
1X
1 1
p zj
zi ij zj zi kij
2 i,j
i
j
(5.49)
.
For once we did not use the Einstein summation convention, since it would not
p ,
kij
= kij
i
j
(5.50)
60
Small oscillations
1X
(u i u i i2 ui ui ) .
2 i
(5.51)
The normal modes ui are now decoupled, and the equations of motion trivial
to solve.
If you remember your linear algebra, you may be a little surprised by our
success. In general the matrices mij and kij do not commute, so how could we
bring both of them to diagonal form? The answer of course lies in the rescaling
of the eigenvectors. We did bring both matrices to diagonal form, but not by
means of rotations only.
Problem 5.1
Verify eqs. (5.3). By the way you are supposed to verify all
equations as simple to derive as those two.
Problem 5.2
Problem 5.3
integral.
Problem 5.4
Consider a bouncing ball, obeying z = g but with a floor
at z = 0. At each bounce the absolute value of its velocity decreases by a factor ,
0 < < 1. Assume it left the floor with velocity v0 at time t = 0 and that bounces
are instantaneous. At what time does the motion stop? Sketch the motion in the zt
plane.
Problem 5.5
Consider the self-oscillator described by eq. (5.36). Assume the
time lag c is small. Expand the last term to first order in c and check that you get a
negatively damped oscillator.
Problem 5.6
Consider the two pendula connected by the spring, and let the
spring be very weak (k << 1). Choose initial data 1 = 2 = 0, 1 = v, 2 = 0. Show
that the amplitudes of the two pendula are modulated in such a way that after some
time T the first pendulum is stationary and all the energy has gone to the second.
Problem 5.7
Consider a particle in a plane, fastened to the corners of an
equilateral triangle by means of identical springs (using Hookes law). How will it
move?
We have all played with spinning tops, and know that their dynamics is very
rich. The subject is of considerable practical importance, say to spacecraft
designers. To understand it we must understand the Lie group of rotations in
three dimensional space, and this is where we begin the story.
6.1 Rotations
In the plane, the mathematics of rotations is fairly trivial. Any rotation takes
place around a fixed point, and is uniquely characterized by its angle of rotation
. Its action on the coordinates is
x
y
cos sin
sin cos
x
y
(6.1)
What does this mean? Actually it can mean two quite different things: A
passive coordinate transformation changing the coordinates used to describe
a given point, or an active rotation moving a given point to another point
described by different values of the coordinates. These are two very different
operations, although the formul describing them are mostly identical. One
must stay awake during calculations.
This attended to, we observe that rotations form a group. A group is a set of
objects g1 , g2 , etc, together with a rule for combining them, so that g1 g2 = g3
is again a member of the set. This rule must be associative. One of the elements
is the identity element e, and has the property that e g = g e = g for any
other element g, and finally every element g possesses an inverse g1 such that
g g1 = g1 g = e. In general it may or may not be true that g1 g2 = g2 g1 .
For the two dimensional rotation group the elements can be written g(), so
the number of elements are continuously infinite. The set of all group elements
thus forms an abstract space, called the group manifold. In this case the group
manifold is a circle, coordinatized by . It is also a Lie group. This means that
the continuous parameters of a product are given by a analytic functions of
the parameters of its factors. Thus
g(1 ) g(2 ) = g(3 (1 , 2 )) .
(6.2)
62
(6.3)
which is certainly analytic. The surprising thing about Lie groups is that they
can be almost completely understood through a careful study of their properties in a small neighbourhood of the identity element. All rotation groups are
Lie groups.
Rotations in three dimensional space are hard to understand, in the first
place because rotations do not commute in general. Rotations can be represented by matrices acting on vectors,
xi = Rij xj .
(6.4)
(6.5)
Hence the matrix Rij must be subject to some restrictions. On the other hand
it is immediate from the definition that rotations form a group. (Why?) To
see how the matrix is restricted, we observe that scalar products must be
preserved too, and then we perform a little calculation:
xi yi = Rki xi Rkj yj = xi Rki Rkj yj = xi yi
Here ij is the Kronecker delta. Since the vectors are arbitrary this is equivalent
to
Rki Rkj = ij .
(6.7)
RT R = RRT = 1 .
(6.8)
The group properties can be now be checked on the level of matrices. The
columns, and the rows, of the matrix R must form an orthonormal triplet of
vectors. Such matrices are called orthogonal. Matrix multiplication is associative, and moreover
R1 R2 = R3
(6.9)
det R = 1 .
(6.10)
The group can be restricted by insisting that det R = 1. This restriction can
be formulated in terms of the epsilon tensor:
6.1 Rotations
63
Figure 6.1. To coordinatize the set of all rotations (the group manifold of the
rotation group), choose a rotation axis and a number along it. Note that the
two endpoints of the axis represent the same rotation.
(6.11)
where the definition (sic!) of the determinant was used. In N dimensions the restricted rotation group is called the special orthogonal group, denoted SO(N ).
Special refers to the restriction that the determinant equals one. If matrices with determinant 1 are admitted the group is called O(N ), and includes
reflections.
Eulers theorem states that any rotation in 3-space is a rotation around a
fixed axis. To prove it, note that an orthogonal matrix is unitary, and the
eigenvalues of a unitary matrix always take the form
= ei
(6.12)
for some angle . One can prove this by diagonalizing the unitary matrix. But
an orthogonal matrix is a real matrix too, which means that
det (R 1) = 0
(6.13)
Hence complex eigenvalues, if they occur, must occur in pairs because their
complex conjugates are eigenvalues too. Since the number of eigenvalues of an
SO(3) matrix is odd, one of them must be real, and in fact it must equal one
because the determinant does. The corresponding eigenvector is the fixed axis
of rotation. Note that rotations in even dimensions (like two dimensions) work
differently. Note also that rotations can be hard to grasp: if the rotation axes
of R1 and R2 are known, what is the rotation axis of R1 R2 ?
The group manifold of SO(3), that is to say the set of all rotations, is easy
to visualize. The set of all rotation axes can be identified with the surface of
a sphere, or more precisely with the pairs of antipodal points where the axis
cuts the sphere. An arbitrary rotation is determined by its axis and an angle
, hence we can think of the set of all rotations as a solid ball with the identity
matrix at its center, with as a radial coordinate, and with antipodal points on
its surface identified because of the periodicity of . It sounds simple, but there
64
are some subtleties. The topology of this group manifold is such that there
are closed curves, starting and ending at the identity element, that cannot be
shrunk to zero in a continuous way. There is a famous trick one can play with
a pair of scissors sliding along a belt, to verify that this property has tangible
consequences.
We will need a coordinate system on the group manifold, and we choose
the Euler angles for this purpose. From our present perspective it is a little
difficult to make them appear natural. We introduce them by brute force, as
follows: Define
cos sin 0
R = sin cos 0
0
0
1
Compute
1
0
0
R = 0 cos sin
0 sin cos
cos sin 0
R = sin cos 0
0
0
1
R(, , ) = R R R =
(6.14)
(6.15)
sin sin
cos sin .
cos
In the absence of the argument that makes this construction appear natural,
how do we know that this is correct in the sense that an arbitrary rotation
matrix can be expressed in terms of the Euler angles? Recall that every orthogonal matrix can be thought of as three orthonormal column vectors. By
inspection we see that and can be chosen so that the third column agrees
with any unit vector, and further inspection of the remaining columns shows
that they are restricted only to the extent needed for them to fill out a right
handed orthonormal triad. All this is true provided that
0 < 2 ,
0 < 2 ,
0 .
(6.16)
We accept this, and now we have a coordinate system on the group manifold
SO(3) available whenever we need one.
In writing eq. (6.15) we proved that an arbitrary rotation can be effected by
first rotating around the z-axis, then around the x-axis, and finally around the
z-axis again. There are 3 2 2 = 12 different ways of choosing the axes here,
leading to 12 different ways of parametrizing an arbitrary rotation matrix.
This has the consequence that whenever Euler angles are encountered in the
literature, one must check which of the 12 possible definitions that was used.
6.1 Rotations
65
1 2
A + ... .
2!
(6.17)
(6.18)
R(0) = 1 .
(6.19)
We would like to claim that every rotation group element can be written as
etA , for some choice of anti-symmetric matrix A. This is actually so for any
SO(3) rotation, but not for the additional reflections present in O(3). To see
how it works, we begin by using the definition (6.17) to conclude that
0 1
cos t sin t
exp t
=
.
1 0
sin t cos t
(6.20)
So the statement is true for any two dimensional rotation matrix with unit
determinant. But we can use Eulers theorem to reduce the three dimensional
case to the two dimensional onewe simply adapt our coordinates so that one
of the axes points along the eigenvector of the given but otherwise arbitrary
rotation matrix.
We can now see by means of a second order Taylor expansion what the noncommutativity means in terms of what goes on close to the identity element:
R1 (t1 )R2 (t2 )R11 (t1 )R21 (t2 )
(1 + t1 A1 +
t21 2
t2
t2
t2
A1 )(1 + t2 A2 + 2 A22 )(1 t1 A1 + 1 A21 )(1 t2 A2 + 2 A22 )
2
2
2
2
(6.21)
1 + t1 t2 (A1 A2 A2 A1 ) .
66
0 0
A1 = 0 0
0 1
0
1
0
0 0
A2 = 0 0
1 0
1
0
0
0
A3 = 1
0
1 0
0 0 . (6.22)
0 0
They obey
[A1 , A2 ] = A3
[A2 , A3 ] = A1
[A3 , A1 ] = A2 .
(6.23)
They form a basis for the Lie algebraand would perhaps look more familiar
had we multiplied the Lie algebra elements Ai with an imaginary factor of
i, to make them Hermitian matrices. The idea here (a grand one!) is that it
works backwards toofrom a Lie algebra of commutators one can reconstruct
a Lie group.
In index notation an arbitrary element of the SO(3) Lie algebra can be
written as
Aij = ikj k ,
(6.24)
(6.25)
(6.26)
67
assume that the angular velocity vector i is constant. The time derivatives
are related by
X i = Rij x j + R ij xj .
(6.27)
(6.28)
The dot-notation can get confusing at this point, so it may be helpful to write
this as an operator relation
X i = Dtij xj =
d
ij + ikj k xj ,
dt
(6.29)
(6.31)
This is what Newtons second law looks like on the roundabout. The extra
terms on the right hand side are known as inertial or fictitious forcesbut
they feel real enough.
The third term on the right hand side of eq. (6.31) is known as the centrifugal force. It is responsible for the repulsive part of the effective potential
(4.8) in the two body problem. The second term is the Coriolis force. It is perpendicular both to the velocity x i and to the angular velocity i . To see that
such a term must be there, consider a free particle moving out from the center
on a frictionless rotating disk, and anchor the coordinate system to the disk.
Alternatively, consider the Foucault pendulum somewhere on earth. Choose a
coordinate system with a vertical z-axis, and let the pendulum perform small
oscillations in the x-y-plane. We ignore all terms of second order in the angular
velocity i of the earth. In particular we ignore the centrifugal force, but the
angular velocity enters the equations through the Coriolis term:
x
= k2 x + 2z y
y = k2 y 2z x .
(6.32)
2 + k2 + 2iz = 0 .
(6.33)
68
Figure 6.2. This picture illustrates Einsteins article on Beers Law, so presumably this is what the great mans tea cup looked like.
(6.34)
Therefore
(6.35)
69
law does not apply to small and bending rivers, where the centrifugal force
dominates.
Even more dramatically, fix a coordinate system in the earth so that one
of its axes point at the sun. It will make one revolution per year, and create
a centrifugal force just balancing the gravitational attraction felt by the sun.
This enables the sun to stand still. Now let the coordinate system follow the
earth in its daily rotation. As a result a huge centrifugal force will act on
the sun, completely overwhelming the gravitational force. In this coordinate
system the sun appears to be in rapid motion (x i 6= 0), and the Coriolis force
steps in to the rescue and prevents the sun from disappearing into outer space.
(6.36)
(6.37)
(6.38)
70
T =
Xm
2
(Vi + ikj k xj )2 =
(6.39)
m
=
V 2 + mikj Vi k xj + (ikj k xj )2 .
2
2
(The sum runs over all the particles in the body, even though no explicit
indices have been placed on them.) This can be rewritten in terms of the total
mass M as
X m
X
M 2 1X
mxj .
(6.40)
V +
m(x2 ij xi xj )i j + ikj Vi k
2
2
If O, the origin within the body, is placed at the centre of mass the last term
vanishes. It also vanishes if the origin coincides with a point that is fixed in
space (when Vi = 0). Now define the inertia tensor with respect to O,
T =
Iij =
m(x2 ij xi xj ) .
(6.41)
Again the sum runs over all the particles in the body. We observe that the
angular momentum with respect to O is
Li =
ijk xj mx k =
(6.42)
Unlike the angular velocity, both the angular momentum and the inertia tensor
change if we shift the position of O. Let us assume that O is placed at the
center of mass. Then
1
1
1
1
T = M V 2 + Iij i j = M 1 Pi Pi + Iij1 Li Lj .
(6.43)
2
2
2
2
The inertia tensor describes the resistance of the body to changes of its rotation, just as the mass describes its resistance to changes of its translational
state. But the former is a tensor, not a scalar, and hence much harder to grasp.
Written out, the inertia tensor is
P
m(x22 + x23 )
P
I=
mx2 x1
P
mx3 x1
mx1 x2
mx3 x2
m(x21
x23 )
P
P
mx1 x3
mx2 x3
m(x21 + x22 )
(6.44)
I1 0 0
I = 0 I2 0 .
0 0 I3
(6.45)
71
The eigenvalues are known as moments of inertia. The eigenvectors are known
as principal axes of the body, and are orthogonal to each other.
The moments of inertia are all positive since, for an arbitrary vector ni ,
ni Iij nj =
m n2 x2 (ni xi )2 0 .
(6.46)
Indeed the individual terms are all positive. A vanishing moment of inertia can
occur only if all the particles lie on a line (because it would be necessary that
(nx)2 = n2 x2 for each individual particlerecall that the notation suppresses
the sum over all particles in the body, but in this case all the individual terms
are positive or zero.) Once we have adapted the coordinates so that the inertia
tensor is diagonal we see immediately that
I1 + I2 =
m(x21 + x22 ) = I3 .
(6.47)
Hence no moment of inertia can exceed the sum of the other two. Equality
happens if and only if x3 = 0 for all the particles, that is for a planar body.
Some general facts about the inertia tensor can be stated:
If the body is symmetric under reflection in a plane, two of the principal
axes lie in it.
This must be so because the reflection must preserve the principal axes. The
only way to arrange this is to let two of them lie in the plane. The third axis
is then orthogonal to the plane and is taken into itself by the reflection. (Note
that this theorem is enough to identify the principal axes of an ellipsoid.)
The body can also be symmetric under rotations around some axis through
an angle which is some fixed fraction of 2:
If the body has a symmetry axis of order higher than two this axis must be
a principal axis, and the other two moments of inertia are equal (because
the corresponding eigenvectors cannot be unique).
A body with two equal moments of inertia is called a symmetrical top. If there
are at least three higher order symmetry axesthis is true for a cube, sayit
follows that all the eigenvalues are equal, so in fact every axis is a principal
axis.
The shape of the body is reflected in the inertia tensor. In the gravitational
N -body problem Newton proved that if all bodies are spherical, they can be
regarded as mass points. Now we see that if a body is rigid it can be regarded as
a homogeneous ellipsoid, since every tensor of intertia can be thus reproduced.
Nothing else matters as far as the response to arbitrary forces is concerned.
The inertia tensor depends on the point with respect to which it is computed:
Steiners theorem: For a body of total mass M , let Iij be its inertia tensor
relative to the centre of mass, and Iij its inertia tensor relative to a point
translated from the centre of mass by the vector ai . Then
72
(6.48)
Roughly speaking it is easiest to rotate the body around its centre of mass.
The proof is easy, once we recall that
X
mai xi = ai
mxi = 0 ,
(6.49)
1i4.
(6.50)
This permits us to define three vectors x, y, z in R4 . We define a fourth auxiliary vector whose components are equal,
vT = (1, 1, 1, 1) .
(6.51)
x1
x2
[x y z v] =
x3
x4
y1
y2
y3
y4
z1
z2
z3
z4
1
1
,
1
1
(6.52)
where the row index labels the four particles. Now we insist that the inertia
tensor be diagonal and that the centre of mass is at the origin. This translates
into the six conditions
xy =xz=yz=vx=vy =vz=0 .
(6.53)
In other words the four vectors must be mutually orthogonal, which is easily
arranged in R4 . It remains to check that we can reproduce the most general
diagonal inertia tensor; the solution is given in exercise 4.
73
When its centre of mass is fixed, the equation of motion for a rigid body
subject to a torque i is
L i =
mijk xj Fk = i ,
(6.54)
where both the angular momentum and the torque are computed with respect
to the centre of mass. The equation is complicated by the fact that the inertia
tensor is time dependent,
L i = Iij j + Iij j .
(6.55)
(6.56)
(6.57)
.
2 = 2
3 = 3 ,
(6.58)
I2
2 +
(I1 I3 )(I1 I2 ) 2
0 2
I3
=0
I3
3 +
(I2 I1 )(I3 I1 ) 2
0 3
I2
=0
(6.59)
.
74
Figure 6.3. The inertia ellipsoid, and its intersections (as curves) with spheres
of three different sizes.
The perturbation will grow, and the solution will be unstable, unless
I1 > I2 , I3
or
I1 < I2 , I3 .
(6.60)
We conclude that rotation around the largest and smallest of the principal
axes is stable, rotation around the remaining principal axis is unstable.
This can be seen more elegantly. The energy surface is the ellipsoid
J12 J22 J32
+
+
= 2E .
I1
I2
I3
(6.61)
The angular momentum vector itself is changing (in this coordinate system),
but its magnitude remains constant:
J12 + J22 + J32 = constant .
(6.62)
This is a sphere, intersecting the energy surface along one dimensional curves.
The system must move along them. It is seen that there are elliptic fixed points
at the major and minor axes, and hyperbolic fixed points at the middle axis.
Although it says nothing about the speed of the motion, this analysis does say
more than the perturbative calculation since it describes all solutions exactly.
Let us think a little bit more about this. The Euler equations define a dynamical system whose trajectories are confined to a three dimensional space,
for which we can use the coordinates J1 , J2 , J3 . A solution of the Euler equations is a curve in this three dimensional space, and curves in a three dimensional space can be very unruly indeed. But the Euler equations are special
because of the existence of two well behaved constants of the motion. Each of
them defines a set of easy-to-describe surfaces that fill phase space, and any
solution must lie where two such surfaces intersect. Hence the solutions are
also easy to describe. But this is in fact a quite special property of the Euler equations. The Lorenz equations (1.45) behave quite differently: for them
there are no well behaved constants of the motion, the solutions are unruly,
75
and the system is chaotic. We will come back to this kind of issues in chapter
9.
We still lack a complete description of the motion in space. The Poinsot
construction provides this. It describes how the angular velocity vector moves,
relative to the body, and relative to absolute space. The argument may be a bit
hard to follow, so please begin by looking at Fig. 6.3. From the point of view
of the top the angular momentum vector is moving along one of the curves
shown, but from the point of view of Absolute Space the angular momentum
vector is fixed while the topand its inertia ellipsoidis moving. We want to
turn this simple observation into a more detailed argument.
Using inertial coordinates we define the ellipsoid of inertia
Xi Iij Xj = 2E .
(6.63)
(6.64)
Since the angular momentum vector is fixed in absolute space we can assume
that it points along the X3 -axis, and then 3 is a constant, which means that
the herpolhode is a plane curve. The plane to which it is confined is known
as the invariable plane, and is orthogonal to Li . Now the normal vector of the
ellipsoid of inertia is
ni (X) = 2Iij Xj .
(6.65)
(6.66)
This coincides with the normal of the invariable plane. Because of eq. (6.64)
the distance from the center of the ellipsoid of inertia to the invariable plane
is constant, and having placed the latter appropriately we conclude that the
ellipsoid of inertia rolls on the invariable plane. It rolls without slipping because
the point of tangency lies on the instantaneous rotation axis, so its velocity
equals zero. Hence the polhode rolls without slipping on the herpolhode. The
polhode is always a closed curve, but the herpolhode need not bewhen the
point of tangency has made one full revolution on the ellipsoid, the body will
have turned through some angle around the X3 -axis. Hence there are two
frequencies involved, and if they are not commensurable in the sense of eq.
76
(1.37) the herpolhode never closes. This is quite reminiscent of the Lissajous
figures.
Things simplify for a symmetrical top because then both the polhode and
the herpolhode are circles. Moreover the symmetry axis of the top coincides
with a principal axis of the ellipsoid of inertia, which means that the tip of the
symmetry axis also traces out a circle around Li . This motion is called regular
precession, and is not to be confused with the precession of a top placed in
a gravitational field, or with the precession of the equinoxes caused by the
spinning earth. Note that Eulers equations (6.57) for a symmetrical top are
easily integrated in terms of trigonometric functions.
The Earth is a symmetrical top, with I1 6= I2 = I3 and
1
I1 I3
.
I1
305
(6.67)
This is the ellipticity of the Earth. Moreover its rotation axis differs slightly
from its geometrical symmetry axis. Judging from Eulers equations, in particular eqs. (6.59) with 0 = 2/day, we expect the rotation axis to move at the
rate of one revolution per 305 days, and the polhode to be a circle surrounding
the geometrical North Pole. Although smallthe polhode is only about 15 meters acrosssuch a motion is indeed observed, and is known as the Chandler
wobble. However, the polhode is not a circle, and there is a period of close to
14 months as well as an annual periodicity. The latter is presumably caused
by metereological disturbances, while the longer period is due to the effect we
have discussed. The discrepancy with our prediction arises because the Earth
is not perfectly rigid, and can be explained if the Earth has an elasticity approximating that of steel. Poincare produced arguments showing that a fluid
core inside a rigid shell need not invalidate the argument.
Incidentally, from this example one can see how difficult it would be to
devise an experimental test of rigid body dynamics. What experiments can do
is to estimate the importance of friction, the departure from rigidity, and so
on, and to guide mathematical modelling of such effects.
A = 1 A1 + 2 A2 + 3 A3 ,
(6.68)
77
where we use the basis introduced in eqs. (6.22) and i is the angular velocity
vector. Furthermore
R = etA
1 .
A = RR
(6.69)
A minus sign was used because we are now considering an active rotation of the
body, rather than a passive change to a rotating coordinate system. Once we
have A we can read off the angular velocity vector by comparing to eq. (6.68).
And at the expense of a minor amount of work we can express A in terms of
the Euler angles, introduced in section 6.1 as an example of an explicit set of
coordinates on the group manifold of SO(3). This is how we will express the
Lagrangian of the top in terms of coordinates. The calculation goes as follows:
1 =
A = RR
d
(R R R )(R R R )1 =
dt
= R R1 + R R R1 R1 + R R R R1 R1 R1 =
(6.70)
3 (cos
= A
A1 + sin A2 ) (sin
(sin A1 cos A2 ) + cos A3 ) .
In the final step we fell back on the explicit formul (6.14). Comparing to eq.
(6.68) we read off that
1 = cos + sin sin
2 = sin sin cos
3 = + cos
(6.71)
.
(6.72)
78
L=
I3
I1
21 + 22 + 23 V () =
2
2
(6.73)
I3
I1 2 2 2
=
+ sin + ( + cos )2 M gl cos .
2
2
For the symmetrical top neither nor appear in the Lagrangian, which
means thatenergy includedwe will get three constants of the motion,
enough to solve the equations of motion explicitly.
To be precise, we find the constants of motion
Lz =
L
= I1 sin2 + I3 cos ( + cos )
(6.74)
L
= I3 ( + cos ) .
(6.75)
L3 =
They are the angular momentum along the vertical axis (in absolute space) and
along the symmetry axis of the top, respectively. We can solve these equations
for the velocities:
L3
+ cos =
I3
(6.76)
Lz L3 cos
=
.
I1 sin2
(6.77)
I1 2
I3
( + 2 sin2 ) + ( + cos )2 + M gl cos =
2
2
(6.78)
2
L23
I1 2 (Lz L3 cos )
+ M gl cos +
.
+
2
2I3
2I1 sin2
This leads to a first order differential equation which can be explicitly solved
in terms of elliptic functions. Inserting the result successively in eqs. (6.77)
and (6.76) will lead us to the complete solution for the heavy symmetrical
top.
As usual one can go a long way with qualitative arguments. Eq. (6.78),
which governs the nutation of the top, can be described as one dimensional
motion in an effective potential. It can be written on the form
(b a cos )2
= 2 +
+ cos ,
sin2
where
(6.79)
79
2EI3 L23
,
I1 I3
=
b=
Lz
,
I1
a=
L3
,
I1
2M gl
>0.
I1
(6.80)
(6.81)
(6.82)
The velocity can change sign only when the right hand side vanishes. The
polynomial on the right hand side goes from to , and
u = 1
u 2 = (b a)2 .
(6.83)
80
These considerations are applicable to the Earth, which is an oblate symmetrical top subject to tidal forces (primarily from the Moon) trying to decrease the angle between its axis and the normal of the ecliptic. The monotonic
precession of the Earth was known to the Greeks. It has a period of 26 000
yearswhich incidentally means that the position of the Sun relative to the
Zodiac has drifted noticeably since the Greeks determined it, with no apparent
consequences to astrology. See ex. 10. The nutation of the Earth was reported
by Bradley in the 18th century, who first observed it for a complete period,
18.7 years. A lesson, perhaps, for modern astronomers. It is a small effect, but
significantly greater than the Chandler wobble.
Problem 6.1 You are given an SO(3) matrix explicitly. You know it describes
a rotation by an angle through some axis, but you are not told what axis. What is
the quickest way to compute ?
Problem 6.2
Compute the inertia tensor with respect to the centre of mass
for a sphere, a cube, a circular cylinder and a circular cone, all of them having constant
density.
Problem 6.3
Place four equal masses at the corners of a regular tetrahedron
and compute the inertia tensor with respect to their centre of mass. To what extent
is the result obvious?
Problem 6.4
Complete the proof that the inertia tensor of any body can be
reproduced by placing four equal masses at appropriate distances from each other.
Problem 6.5
The mass of the Sun is 2 1030 kg, its equatorial radius is 7 108
m, and its sidereal rotation period is 25 days. Approximate the Sun as a homogeneous
sphere and compute its angular momentum. The mass of Jupiter is 21027 kg, its semimajor axis is 8 1011 m, and its orbital period is 4332 days. Approximate Jupiter as a
point mass in circular orbit and compute its orbital angular momentum. Comment?
Repeat the same exercise for the Earth-Moon system. The mass of the Earth is 61024
kg and its equatorial radius is 6 106 m, the mass of the Moon is 7 1022 kg and its
semi-major axis is 4 108 m.
Problem 6.6
Suspend a large key-ring in a twisted thread and let go. What
happens? Analyze the situation using Eulers equations.
Problem 6.7
Solve Eulers equations for a symmetrical top, I2 = I3 , and
draw the analogue of Fig. 6.3.
Problem 6.8
valid definition
Repeat the derivation leading to eqs. (6.71), but use the equally
A = R1 R .
What difference does it make?
Problem 6.9
81
a solution for a sleeping top, that is a top spinning around the vertical axis. Show
that this solution is stable if and only if
L2z > 4M glI1 .
(6.84)
The top wakes up when friction has diminished its spin so that this bound is
violated.
Problem 6.10
Around 150 B.C. Hipparchos established the dates when the
Sun is in Capricorn. Given that the period of the precession is around 26 000 years,
use your understanding of the symmetrical spinning top to establish the direction in
which this assignment has been drifting since then, i.e. establish in which sign the
Sun actually is when it says in the astrology column that the Sun is in Capricorn.
(7.1)
This is not strictly necessary (see exercise 1) but convenient for our purposes.
A function obeying this condition is said to be convex. The Legendre transform
of the convex function f is a function g defined as
g(p) = maxx xp f (x) .
(7.2)
df
.
dx
(7.3)
Precisely because we imposed condition (7.1) the derivative of f is a monotoneously increasing function of x, which means that eq. (7.3) can be solved to
give the unique solution x = x(p). Inserting this into the right hand side of
eq. (7.2) defines the function g(p) uniquely.
A function and its Legendre transform are related by
83
f (x) + g(p) = xp .
(7.4)
If eq. (7.3) holds this equation defines the Legendre transform g(p) once the
original function f (x) is known. But the equation is fully symmetric between
f and g, which means that if
x=
dg
dp
(7.5)
then eq. (7.4) defines f (x) as the Legendre transform of the function g(p). To
make sure that this claim is valid one must show that the function g is convex
because the function f is. But this is so because
d2 g
dx
=
=
2
dp
dp
dp
dx
1
d2 f
dx2
1
>0.
(7.6)
Hence eq. (7.5) can be solved for p = p(x). We may conclude that all the
information in the function f is present in its Legendre transform g, and also
thatunlike the Fourier transformation, saythe Legendre transformation is
its own inverse. Generalization to an arbitrary number of variables is quite
straightforward.
84
There is an interesting geometric interpretation of the Legendre transformation, and I hope this can be deciphered from Fig. 7.1: to find out the value
of the function g at p0 , start by drawing the straight line p0 x, and then use the
definition of the Legendre transform to find g(p0 ) as an intercept of a tangent
of the curve with the ordinate axis. If the original function describes a curve
by giving the set of points on the curve, the Legendre transform describes it
by giving the set of lines that are tangent to it. This is a rather deep idea, but
one that belongs to another story.
Problem 7.1
1x
f (x) =
0
x2
, x<1
, 1x2
, 2<x
(7.7)
Any set of ordinary differential equations can be written in first order form,
provided we introduce enough extra variables. The Hamiltonian formulation is
a special way of doing this to the Euler-Lagrange equations, and reveals that
the latter have a very special form. The central features of mechanicsthose
that classical and quantum mechanics have in commonare brought out very
clearly by the Hamiltonian formulation.
(8.1)
These are second order ODEs. One obvious way to turn them into first order
equations is to define the canonical momenta
pi
L
.
qi
(8.2)
The expression on the right hand side will be equal to mqi in simple cases, and
in general it will be some function of q and q. We already know that the right
hand side is of some importance; it occurs when one sets boundary conditions
in the variational principle, and in connection with Noethers theorem. See
eq. (2.34). So the canonical momenta are the extra variables to be used in
turning the Euler-Lagrange equations into first order form. The use of q for
generalized coordinates and p for their momenta goes back to Jacobi, and
was solidified by Whittaker in a famous textbook.
Eqs. (8.1) now take the first order form
pi =
L
.
qi
(8.3)
But we need equations for q too. To this end we assume that eqs. (8.2) can be
inverted, that is
86
pi = pi (q, q)
qi = qi (q, p) .
(8.4)
(8.5)
H
pi
pi =
H
.
qi
(8.6)
These are Hamiltons equations, and the function H is known as the Hamiltonian. The variables pi are known as the canonical momenta. By means of the
Legendre transformation we have traded the coordinate q for the coordinate
p.
In practice, the Legendre transformation is usually easy to perform. We
know that
m 2
1 2
q
H=
p .
(8.7)
2
2m
An apparently more complicated case, involving many degrees of freedom, is
L=
1
(8.8)
L = Mij (q)qi qj V (q) .
2
Here the matrix M depends on the configuration space variables. See exercise
2.4, or for a concrete example see the Lagrangian for a spinning top in section
6.5. Using matrix notation the Hamiltonian is immediately found to be
1
H = pj Mij1 pj + V (q) .
(8.9)
2
We only have to check that the matrix M is invertible everywhere in configuration space. This in fact amounts to checking that the Lagrangian is a convex
function of the variables qi .
Our phase space is spanned by the 2n variables qi , pi . If we compare to
the general phase spaces discussed in section 1.4 we see that this is already
a restriction: the phase space of a Hamiltonian system is always even dimensional. But there is another and more dramatic difference. In section 1.4 time
87
evolution was described by the equations zi = fi (z), so that the general case
is obtained by choosing 2n independent functions fi . In the Hamiltonian case
the time evolution is determined by a single function H(q, p). This is indeed
a very strong restriction, but it is one that Nature seems to respect for all her
fundamental equations.
There are consequences. One of them is that
H
H
H
H H
H H
H
H
=
=
= 0 . (8.10)
+ pi
+
+
H = qi
qi
pi
t
pi qi
qi pi
t
t
Hence the Hamiltonian is always a conserved quantity, unless it depends explicitly on time. The set of points in phase space obeying
H(q, p) = E
(8.11)
is called the energy surface. Time evolution takes place within the energy
surface. In two dimensions (for one degree of freedom) the energy surface
coincides with the one dimensional trajectories; hence Hamiltonian systems
with one degree of freedom are always soluble.
Next let us imagine time evolution as the flow of a fluid. The flow lines
are defined by the little arrows in phase space. The fluid is incompressible
if, for any fixed finite volume in phase space, the amount of fluid going out
through its surface equals the amount that is going in. By Gauss theorem the
difference between them is equal to the integral of the divergence of the flow
over the volume. Since this must vanish for every volume we conclude that the
flow is that of an incompressible fluid if and only if its divergence vanishes.
For Hamiltonian time evolution this is indeed so:
divz = z = qi qi (q, p) + pi pi (q, p) = qi pi H + pi (qi H) = 0 . (8.12)
This conclusion is worth stating as a theorem.
Liouvilles theorem: In Hamiltonian mechanics the phase space flow preserves
volume.
Liouvilles theorem is the origin of the claimmade in section 1.4that
sources and sinks do not occur in Hamiltonian systems. In two dimensions
the only fixed points that occur are either elliptic or hyperbolic.
On reflection one sees that the behaviour allowed by Liouvilles theorem
can still be very complex. Already in two dimensions the shape of a small
piece of the phase space fluid passing close to a hyperbolic fixed point will
be squeezed in one direction and stretched in another. In the end the original
volume element can acquire a very involved shape, and for an observer with
limited resolution it may in fact seem as if it has been smeared all over the
energy surface, even though on microscopic scales it does preserve its volume.
88
(8.13)
(8.14)
is defined for all kets |x⟩. Dirac calls this a bracket, which explains the names
kets and bras. (It is believed that Dirac did not know that "bra" already
had a meaning in English.) An added complication in quantum mechanics is
that the vector spaces are often infinite dimensional function spaces, but we
do not bother with this here. We do need to check that the bras form a linear
space. But this is so because the map to the real numbers is linear,

( a_1 ⟨u_1| + a_2 ⟨u_2| ) |x⟩ = a_1 ⟨u_1|x⟩ + a_2 ⟨u_2|x⟩ .    (8.15)

Hence ⟨u_3| = a_1 ⟨u_1| + a_2 ⟨u_2| is a bra.
At this point we have two vector spaces, and a priori they are different. The
equation ⟨u| = |x⟩ makes no sense at all. Apples and pears are never equal.
However, there is a final twist to the story, because there does exist a one-to-one correspondence between the kets and the bras. If you have a ket written out
as a column vector with respect to some basis, there is a unique bra obtained
by transposing the column to a row (and taking the complex conjugates of
all the numbers, but this is irrelevant here since all our components are real).
This is an extra piece of structure, giving rise to a one-to-one correspondence
⟨x| ↔ |x⟩. It means that we can associate a unique number to any ket, namely
the real number ⟨x|x⟩. This number is the length squared of the ket |x⟩.
How can we make the distinction between bras and kets in the index notation? The answer is simple. Let us denote all the original vectors, the kets,
by x^\alpha. We are using Greek rather than Latin indices, and will continue to do
so whenever it is our intention to use the indices in a correct tensorial way
(that is, in the way I am just going to explain). When the indices are only
used to label a set of objects we continue to use Latin indices. The important thing is that we place the index on a ket upstairs. Such a vector is
called contravariant. A bra will be denoted by u_\alpha, that is to say with its index
downstairs, and is called a covariant vector. This should represent a linear
map from the space of all kets to the real numbers, and we define this map as

u(x) = u_\alpha x^\alpha = a ,    (8.16)
|x⟩ = |e_\alpha⟩ x^\alpha ,    (8.17)

where the vectors |e_\alpha⟩ form a basis (and must not be confused with the components u_\alpha of a bra vector!). A matrix operating on the ket vector space will
be written as a mixed tensor A^\alpha{}_\beta. Then the equations

y^\alpha = A^\alpha{}_\beta x^\beta    (8.18)

and

v_\alpha = u_\beta B^\beta{}_\alpha    (8.19)
is a new vector, obtained by transforming the old vector |x⟩. This is an active
transformation of the vector. On the other hand the vector

|x⟩ = |e_\alpha⟩ x^\alpha = |e_\alpha⟩ (A^{-1})^\alpha{}_\beta A^\beta{}_\gamma x^\gamma = |e'_\beta⟩ y^\beta    (8.20)

is the same vector, now expanded in a transformed basis |e'_\beta⟩ = |e_\alpha⟩(A^{-1})^\alpha{}_\beta. For the number u_\alpha x^\alpha to stay the same, the components of a bra must transform according to

u_\alpha → u_\beta (A^{-1})^\beta{}_\alpha .    (8.21)

This is part of the origin of the names contravariant and covariant vectors:
they transform in opposite directions, to ensure that u_\alpha x^\alpha remains unchanged.
We can go on to define tensors with more than one index in the same way.
A tensor with k indices running from 1 to n is defined as a collection of n^k
components transforming in a specific way under changes of basis. Examples
include

T^{\alpha\beta\gamma} → A^\alpha{}_\delta A^\beta{}_\epsilon A^\gamma{}_\zeta T^{\delta\epsilon\zeta} .    (8.22)
Students are usually disturbed by the fact that the components of a tensor
with more than two indices cannot be displayed as a matrix, but really the
definition does not require this.
Still something is missing. The whole point about chapter 6 was to discuss
the special matrices that preserve the length of the vectors. To do so here we
need to set up a one-to-one correspondence between the set of x^\alpha and the set
of u_\alpha, and then define the analogue of ⟨x|x⟩.
The Kronecker delta is the key. We write it as a covariant tensor, with both
indices downstairs, as \delta_{\alpha\beta}. In fact we can be a bit more general. We introduce
a metric tensor. By definition this is any covariant symmetric tensor with two
indices downstairs,

g_{\alpha\beta} = g_{\beta\alpha} ,    (8.23)

which moreover is invertible, so that there exists a contravariant tensor g^{\alpha\beta} obeying

g^{\alpha\beta} g_{\beta\gamma} = \delta^\alpha{}_\gamma .    (8.24)

Here the left hand side defines an operation which acts as the identity on both
our vector spaces,

\delta^\alpha{}_\beta x^\beta = x^\alpha    and    u_\beta \delta^\beta{}_\alpha = u_\alpha .    (8.25)
Figure 8.2. Bras and kets: kets are arrows, bras collections of parallel planes
(or measuring tapes). Their lengths are undefined but one can multiply them
with real numbers, and the quantity u_\alpha x^\alpha has a clear meaning.
Using the metric we can set up a one-to-one correspondence between covariant and contravariant vectors, namely

u_\alpha \to u^\alpha = g^{\alpha\beta} u_\beta .    (8.26)

Rotations can then be introduced as the transformations that preserve the metric,

g_{\alpha\beta} A^\alpha{}_\gamma A^\beta{}_\delta = g_{\gamma\delta} .    (8.27)

If the metric is the Kronecker delta, this equation says that the rotation is done
by means of an orthogonal matrix. In this way we recover the full content of
section 6.1.
There is a simple geometrical picture of kets and bras that may be helpful.
Represent the kets as arrows pointing from the origin. If the ket x is multiplied
with (say) 2, the length of the arrow in the picture doubles. Represent the bras
with measuring tapes through the origin, or more accurately as a set of parallel
hyperplanes with constant spacing (level curves of a linearly rising function).
The bras transform oppositely to the kets, so if the ket is multiplied by 2 the
bra must be multiplied with 1/2. The number u_\alpha x^\alpha should stay unchanged.
(8.28)
(8.29)
But geometrically it is obvious that there is another number that we can assign
to the pair, namely the (oriented) area A that they span. Let us adapt our
coordinates so that the two vectors have only two non-zero components each.
Then A is given by a determinant
A = \begin{vmatrix} x^1 & y^1 \\ x^2 & y^2 \end{vmatrix} .    (8.30)

Figure 8.3. Two ways of associating a number to a pair of vectors: the angle
they subtend, and the area they span.
A = \omega_{\alpha\beta} y^\alpha x^\beta ,    (8.31)

where

\omega_{\alpha\beta} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} .    (8.32)
Like the scalar product the area is a bilinear function of the vectors.
The next step is to liberate eq. (8.31) from its origins. Any anti-symmetric
and invertible tensor can serve as a symplectic form on a vector space, just
as any symmetric invertible tensor can serve as a metric. In other words eq.
(8.31) will be taken to be a meaningful number associated to any pair of
vectors, regardless of the dimension of the vector space, provided only that
the symplectic form obeys
\omega_{\alpha\beta} = - \omega_{\beta\alpha} ,    (8.33)

and is invertible, with an inverse \omega^{\alpha\beta} obeying

\omega_{\alpha\beta} \omega^{\beta\gamma} = \delta_\alpha{}^\gamma .    (8.34)
The latter equation is analogous to equation (8.24) for the metric. But it
happens that an anti-symmetric N × N matrix has determinant zero if N
is odd. If the determinant is zero its inverse cannot exist. Hence symplectic
forms exist only on even dimensional vector spaces, while metrics exist on
vector spaces of all dimensions. (This fact gave us Euler's theorem in section
6.1: the antisymmetric matrix defining an infinitesimal rotation in 3-space
necessarily has a zero eigenvalue, and hence the rotation it generates has a
fixed axis.)
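For the record, the determinant fact follows in one line from anti-symmetry, \omega^T = -\omega:

\det \omega = \det \omega^T = \det(-\omega) = (-1)^N \det \omega ,

so that \det \omega = 0 whenever N is odd.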
Just as we may wish to define metrics on curved spaces, not only on vector
spaces, so we may wish to define symplectic forms on more general phase
spaces. A symplectic form on an arbitrary phase space will be represented by an
invertible and anti-symmetric tensor \omega_{\alpha\beta}, but in principle its components may
depend on the particular point of phase space where it sits. This cannot happen
in a quite arbitrary fashion though; in fact the third and final requirement on
a symplectic form is

\partial_\alpha \omega_{\beta\gamma} + \partial_\beta \omega_{\gamma\alpha} + \partial_\gamma \omega_{\alpha\beta} = 0 .    (8.35)
We will mostly be interested in the case when the components of \omega are constant, and then this extra requirement is trivial. For now we only remark that
the equation is recognisable as one of Maxwell's equations in electrodynamics.
There it guarantees the existence of the vector potential. We will see that a
similar object arises in Hamiltonian mechanics, once we come to section 8.8.
The most common situation is that the symplectic form is constant and
block diagonal, that is to say it has the form
\omega_{\alpha\beta} = \begin{pmatrix} 0 & -1 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & -1 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 & 0 \\ \vdots & & & & \ddots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & -1 \\ 0 & 0 & 0 & 0 & \cdots & 1 & 0 \end{pmatrix} .    (8.36)
In exercise 4 you will show that, on a vector space, one can always choose
the coordinates in such a way that any symplectic form takes this form. In
fact there is a stronger statement known as Darboux's theorem: On any phase
space with topology R^{2n}, coordinates can always be introduced so that the
symplectic form takes this standard form. Although we do not need to go into
it now, it is interesting to know that metric tensors behave in a completely
different way in this respect.
Because the symplectic form has an inverse, it can be used to relate contravariant vectors with covariant ones in a unique manner, just as a metric
can. The archetypical contravariant vector is the little arrow \dot{z}^\alpha that gives
the time evolution of a system, while the archetypical covariant vector is the
gradient of a function, say \partial_\alpha H. Let us relate them:

\dot{z}^\alpha = \omega^{\alpha\beta} \partial_\beta H .    (8.37)
Let us further suppose that the symplectic form takes the standard form (8.36),
and let us adapt the description of the coordinates according to
z^\alpha = \begin{pmatrix} z^1 \\ z^2 \\ z^3 \\ z^4 \\ \vdots \\ z^{2n-1} \\ z^{2n} \end{pmatrix} = \begin{pmatrix} q_1 \\ p_1 \\ q_2 \\ p_2 \\ \vdots \\ q_n \\ p_n \end{pmatrix} .    (8.38)
Then the equation \dot{z}^\alpha = \omega^{\alpha\beta} \partial_\beta H splits into the pairs

\dot{q}_i = \frac{\partial H}{\partial p_i} ,    (8.39)

\dot{p}_i = - \frac{\partial H}{\partial q_i} .    (8.40)
The commutator [A, B] = AB - BA of two linear operators obeys

[A, a_1 B_1 + a_2 B_2] = a_1 [A, B_1] + a_2 [A, B_2] ,    (8.41)

[A, B] = - [B, A] ,    (8.42)

[A, [B, C]] + [C, [A, B]] + [B, [C, A]] = 0 ,    (8.43)

where a_1, a_2 are arbitrary numbers. These relations also characterize the Lie
bracket occurring in the study of Lie groups (see section 6.1); the last relation
is known as the Jacobi identity.
In classical mechanics all functions commute, so at first sight there does not
seem to be a classical analogue of the commutator. But there is one! Using the
symplectic form we define the Poisson bracket of two arbitrary phase space
functions as
\{A, B\} \equiv \partial_\alpha A \, \omega^{\alpha\beta} \partial_\beta B .    (8.44)
It is linear in each of its arguments,

\{A, a_1 B + a_2 C\} = a_1 \{A, B\} + a_2 \{A, C\} ,    (8.45)

and it is anti-symmetric,

\{A, B\} = - \{B, A\} .    (8.46)
It is less obvious, but true (see problem 5) that it also obeys the Jacobi identity
{A, {B, C}} + {C, {A, B}} + {B, {C, A}} = 0 .
(8.47)
In this language, time evolution is generated by the Hamiltonian: for any phase space function A,

\dot{A} = \{A, H\} .    (8.48)

In quantum mechanics time evolution is similarly generated by the Hamiltonian operator, through the Heisenberg equation of motion

i\hbar \dot{\hat{A}} = [\hat{A}, \hat{H}] .    (8.49)
The classical and the quantum mechanical Hamiltonians are analogous too.
These observations led Dirac, in the course of a Sunday walk in Cambridge,
to the belief that any classical system can be quantized by setting up a
correspondence between functions on phase space and operators on a (complex) linear space, and by replacing the Poisson brackets with commutators
according to the rule
\{A, B\} \;\longrightarrow\; \frac{1}{i\hbar} [\hat{A}, \hat{B}] .    (8.50)
On the whole, although various complications do arise, Dirac's idea has proved
to be correct.
If the symplectic form takes the standard form (8.36) we can use eq. (8.38)
to split the 2n coordinates z into n pairs (qi , pi ). Then the Poisson bracket
reads
\{A, B\} = \frac{\partial A}{\partial q_i} \frac{\partial B}{\partial p_i} - \frac{\partial A}{\partial p_i} \frac{\partial B}{\partial q_i} .    (8.51)
(Yes, as always a sum over repeated indices is understood.) Moreover eq. (8.48)
becomes

\dot{q}_i = \{q_i, H\} ,  \qquad  \dot{p}_i = \{p_i, H\} .    (8.52)

(8.53)

The Poisson bracket also obeys the product rule

\{A, BC\} = \{A, B\} C + B \{A, C\} .    (8.54)

In this sense, which is a very important one, it acts like a kind of derivative. Using this the evaluation of any Poisson bracket can ultimately be made
starting from the fundamental Poisson brackets
\{q_i, p_j\} = \delta_{ij} ,  \qquad  \{q_i, q_j\} = \{p_i, p_j\} = 0 .    (8.55)
We can now confirm that the Poisson bracket acts like a derivative in the
sense that
\{q_i, A\} = \frac{\partial A}{\partial p_i} ,  \qquad  \{p_i, A\} = - \frac{\partial A}{\partial q_i} ,    (8.56)
where A = A(q, p) is any function on phase space. Note that the quantum
mechanical commutator behaves in a similar way, except that in quantum
mechanics we have to care about the order in which we place things. The
ordering on the right hand side of eq. (8.54) would be the correct one to use
there, while ordering does not matter in classical mechanics.
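These properties are easy to check by machine. Here is a minimal sympy sketch for one degree of freedom; the test functions A, B, C are arbitrary choices made for the example:

    import sympy as sp

    q, p = sp.symbols('q p')

    def pb(A, B):
        """Poisson bracket of eq. (8.51), for one degree of freedom."""
        return sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)

    A, B, C = q**2*p, sp.cos(q) + p, q*p**3
    print(pb(q, p))                    # 1, as in eq. (8.55)
    print(pb(q, A) - sp.diff(A, p))    # 0, as in eq. (8.56)
    print(sp.simplify(pb(A, pb(B, C)) + pb(C, pb(A, B)) + pb(B, pb(C, A))))  # 0: Jacobi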
J_1 = J \sin\theta \cos\phi ,  \quad  J_2 = J \sin\theta \sin\phi ,  \quad  J_3 = J \cos\theta .    (8.57)
If we write out Euler's equations in terms of J_i = I_i \Omega_i we find, after a calculation, that they become

\dot{\theta} = J \left( \frac{1}{I_1} - \frac{1}{I_2} \right) \sin\theta \sin\phi \cos\phi ,

\dot{\phi} = J \left( \frac{1}{I_1} - \frac{1}{I_3} \right) \cos^2\phi \cos\theta - J \left( \frac{1}{I_3} - \frac{1}{I_2} \right) \sin^2\phi \cos\theta .    (8.58)
The energy provides the Hamiltonian,

H(\theta, \phi) = \frac{J_1^2}{2I_1} + \frac{J_2^2}{2I_2} + \frac{J_3^2}{2I_3} = \frac{J^2}{2} \left( \sin^2\theta \, \frac{\cos^2\phi}{I_1} + \sin^2\theta \, \frac{\sin^2\phi}{I_2} + \frac{\cos^2\theta}{I_3} \right) ,    (8.59)

while the symplectic form is

\omega_{\alpha\beta} = \begin{pmatrix} 0 & J \sin\theta \\ - J \sin\theta & 0 \end{pmatrix} ,  \qquad  \omega^{\alpha\beta} = \frac{1}{J \sin\theta} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} .    (8.60)

(The indexing here is such that \omega_{\theta\phi} = J \sin\theta.) A calculation now verifies that
Euler's equations for fixed angular momentum squared take the Hamiltonian
form

\dot{\theta} = \omega^{\theta\alpha} \partial_\alpha H ,  \qquad  \dot{\phi} = \omega^{\phi\alpha} \partial_\alpha H .    (8.61)
The Poisson brackets of the components J_i then come out as

\{J_1, J_2\} = - J_3 ,  \quad  \{J_2, J_3\} = - J_1 ,  \quad  \{J_3, J_1\} = - J_2 .    (8.62)
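Since the signs in these formulas depend on conventions, it may be reassuring to verify by machine that the bracket coming from the symplectic form (8.60) really reproduces Euler's equations. A sympy sketch (the overall sign of the bracket is an assumption of this sketch; with different conventions it moves around):

    import sympy as sp

    theta, phi, J = sp.symbols('theta phi J', positive=True)
    I1, I2, I3 = sp.symbols('I1 I2 I3', positive=True)

    J1 = J*sp.sin(theta)*sp.cos(phi)
    J2 = J*sp.sin(theta)*sp.sin(phi)
    J3 = J*sp.cos(theta)

    def pb(A, B):
        # bracket on the sphere coming from omega_{theta phi} = J sin(theta)
        return (sp.diff(A, phi)*sp.diff(B, theta)
                - sp.diff(A, theta)*sp.diff(B, phi))/(J*sp.sin(theta))

    H = J1**2/(2*I1) + J2**2/(2*I2) + J3**2/(2*I3)

    # Euler's equations: J1dot = (1/I3 - 1/I2) J2 J3, and cyclic permutations
    print(sp.simplify(pb(J1, H) - (1/I3 - 1/I2)*J2*J3))   # 0
    print(sp.simplify(pb(J2, H) - (1/I1 - 1/I3)*J3*J1))   # 0
    print(sp.simplify(pb(J3, H) - (1/I2 - 1/I1)*J1*J2))   # 0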
The pair

q = \phi ,  \qquad  p = \cos\theta    (8.63)

form a canonical pair, and they are good coordinates on the sphere everywhere except at the poles. In these coordinates the area element on the sphere
becomes

dA = J \sin\theta \, d\theta \, d\phi = J \, dq \, dp .    (8.64)

Hence the canonical coordinates are suitable if you want a map of the sphere to
display areas correctly. (By all means draw a picture to illustrate this result!)
Closer examination of this example reveals that generalization to any even-dimensional sphere is non-trivial. The point is that we cannot use any anti-symmetric non-degenerate tensor as a symplectic form. According to the definition it must also solve eq. (8.35). In the two-dimensional case this is a trivial
point, but as it turns out there is simply no solution for any higher dimensional sphere. Other, non-spherical, examples of phase spaces with non-trivial
topologies do exist in higher dimensions.
An infinitesimal canonical transformation changes the coordinates and the phase space functions by small amounts,

q_i \to q_i' = q_i + \delta q_i(q, p) ,  \qquad  p_i \to p_i' = p_i + \delta p_i(q, p) ,    (8.65)

A(q, p) \to A'(q', p') = A(q, p) .    (8.66)

(The new function takes the same value at the new point as the old function
does at the old point.)
The transformation is canonical if it preserves all Poisson brackets, that is to say if

\{A, B\} = C  \quad \Rightarrow \quad  \{A', B'\} = C' .    (8.67)

For an infinitesimal transformation we write

A' = A + \delta A ,    (8.68)

where the function \delta A must be chosen so that eq. (8.67) holds, and is assumed to be
small so that terms of second order in \delta A can be ignored when this property is
checked. To obtain a candidate \delta A we choose a function F = F(q, p) on phase
space, and set

\delta A = \epsilon \{A, F\} .    (8.69)

Then

B' = B + \epsilon \{B, F\} ,    (8.70)
and so on. Computing to first order in \epsilon and using the Jacobi identity we find
that

\{A', B'\} = \{A, B\} + \epsilon \left( \{\{A, F\}, B\} + \{A, \{B, F\}\} \right) = C - \epsilon \{F, \{A, B\}\} = C + \epsilon \{C, F\} = C + \delta C = C'    (8.71)
(to that order). This works for any phase space function F. In principle it is
possible to integrate eq. (8.69) to obtain a finite canonical transformation;
the equation has the same form as Hamilton's equations of motion, and the
only catch is that it may be difficult to do the integration explicitly.
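A quick first-order check by machine may be reassuring; the generating function F below is an arbitrary example:

    import sympy as sp

    q, p, eps = sp.symbols('q p epsilon')

    def pb(A, B):
        return sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)

    F = q**2*p                      # an arbitrary generating function
    q1 = q + eps*pb(q, F)           # transformed coordinates, as in eq. (8.69)
    p1 = p + eps*pb(p, F)
    print(sp.expand(pb(q1, p1)))    # 1 - 4*epsilon**2*q**2 : canonical to first order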
Again, note the similarity to quantum mechanics: in quantum mechanics
any Hermitian operator generates a unitary transformation, in classical mechanics any phase space function generates a canonical transformation. At the
same time there is an interesting difference between canonical and unitary
transformations: there are many more phase space functions than Hermitian
operators, hence there are many more canonical transformations. This has
to do with the fact that the unitary transformations also preserve a scalar
product, and in this sense they are analogous to rotations. If the space has
a finite dimension, there are indeed rather few rotations, but the set of
canonical transformations is always infinite dimensional because the space of
all functions has infinite dimensions. (The discerning reader may think that
the comparison is unfair, since rotations take place in a vector space, while
the kind of transformation we now allow are more general than that. However,
it can be shown that the set of transformations leaving a given metric tensor
invariant is at most as large as the set of translations and rotations in a vector
space, and indeed often smaller than that. So the objection has no force.)
Let us revisit the discussion of Noether's theorem in section 2.3. We used
it already to motivate the definition of the canonical momenta, but there is
more to say about it. Let us choose spatial rotation, eq. (2.44), as an example
of a transformation that we will want to make. It is easy to see that the
angular momentum L_i = \epsilon_{ijk} x_j p_k generates the rotation, with parameters \omega_j:

\{x_i, \omega_j L_j\} = \epsilon_{ijk} \omega_j x_k = \delta x_i .    (8.72)
The Noether charge generates the transformation via the Poisson bracket!
More is true. The \epsilon-tensor obeys the identity

\epsilon_{ijk} \epsilon_{kmn} + \epsilon_{ink} \epsilon_{kjm} + \epsilon_{imk} \epsilon_{knj} = 0 .    (8.73)

From this it follows that

\{L_i, L_j\} = \epsilon_{ijk} L_k .    (8.74)
The educated way of describing this result is to say that the rotational Noether
charges form a Poisson bracket representation of the Lie algebra of the rotation
group. In quantum mechanics, we have a commutator representation of the
same Lie algebra.
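A short sympy check of the algebra (the canonical bracket now runs over three coordinate-momentum pairs):

    import sympy as sp

    x = sp.symbols('x1:4')
    p = sp.symbols('p1:4')

    def pb(A, B):
        return sum(sp.diff(A, x[i])*sp.diff(B, p[i])
                   - sp.diff(A, p[i])*sp.diff(B, x[i]) for i in range(3))

    L1 = x[1]*p[2] - x[2]*p[1]
    L2 = x[2]*p[0] - x[0]*p[2]
    L3 = x[0]*p[1] - x[1]*p[0]
    print(sp.simplify(pb(L1, L2) - L3))   # 0 : {L1, L2} = L3, and cyclically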
(8.75)
Again the Noether charge generates the transformation to which it owes its
existence.
\partial_\alpha u_\beta - \partial_\beta u_\alpha = 0 .    (8.76)

If this condition fails the level planes that exist at each point do not fit together
to form level surfaces extending over all space. Actually this phenomenon is
well known from the theory of curve integrals. A covariant vector field is just
what we need in the integrand of such an integral. Suppose that a curve
is defined explicitly by the functions z^\alpha = z^\alpha(t), where t is some parameter
along the curve. Then
\int dz^\alpha \, u_\alpha(z) = \int_{t_1}^{t_2} dt \, \frac{dz^\alpha}{dt} \, u_\alpha(z(t)) .    (8.77)
It is known that this integral is independent of the path and defines a function
f (z(t)), if and only if condition (8.76) holds. But the curve integral is well
defined regardless of whether this is true or not.
It makes sense: in a vector space a covariant vector is a map from the set
of all arrows to the real numbers, and in general a covariant vector field is a
map from the set of all flowlines to the real numbers.
This leads to a useful piece of notation. If the coordinates on our space
are z^\alpha, we can write the covariant vector field as the one-form

u = dz^\alpha u_\alpha(z) .    (8.78)

That is, the coordinate differentials dz^\alpha are chosen as the basis in which we
expand the covariant vector. The gradient of a function f defines a one-form
of a special kind, namely

df = dx^\alpha \partial_\alpha f .    (8.79)
The notation is useful since it gives the correct behaviour of u_\alpha(x) under
coordinate transformations. Let x' = x'(x). Then

u = dx^\alpha u_\alpha = dx'^\alpha \frac{\partial x^\beta}{\partial x'^\alpha} u_\beta = dx'^\alpha u'_\alpha  \qquad \Rightarrow \qquad  u'_\alpha(x') = \frac{\partial x^\beta}{\partial x'^\alpha} u_\beta(x) .    (8.80)

An anti-symmetric covariant tensor F_{\alpha\beta} with two indices, a two-form, can be subjected to the analogous condition

\partial_\alpha F_{\beta\gamma} + \partial_\beta F_{\gamma\alpha} + \partial_\gamma F_{\alpha\beta} = 0 .    (8.81)

An example from electrodynamics is the field strength tensor

F_{\alpha\beta} = \begin{pmatrix} 0 & E_i \\ -E_i & \epsilon_{ijk} B_k \end{pmatrix} = \begin{pmatrix} 0 & E_1 & E_2 & E_3 \\ -E_1 & 0 & B_3 & -B_2 \\ -E_2 & -B_3 & 0 & B_1 \\ -E_3 & B_2 & -B_1 & 0 \end{pmatrix} .    (8.82)
Eqs. (8.81) are part of Maxwell's equations; they guarantee the existence of
the vector potential, which is a one-form. There is also a second tensor equation relating the two-form to the electric current, but this has no analogue in
symplectic geometry.
What does the symplectic one-form look like if we use our standard coordinates (q_i, p_i) on phase space, so that the symplectic two-form takes the form
(8.36)? There is no unique answer to this question since the previous discussion
implies that if

\theta' = \theta + df ,    (8.83)

then \theta and \theta' give rise to the same symplectic two-form. However, a possible
choice of symplectic one-form is

\theta = p_i dq_i .    (8.84)
This will be our standard choice. To make sure that you see how it works,
consider a two dimensional phase space with

z^\alpha = \begin{pmatrix} q \\ p \end{pmatrix} ,  \qquad  \theta_\alpha = ( p , 0 ) .    (8.85)

It is simple to verify that

\omega_{\alpha\beta} = \partial_\alpha \theta_\beta - \partial_\beta \theta_\alpha = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}  \qquad \Rightarrow \qquad  \{q, p\} = 1 .    (8.86)
More generally, using eq. (8.38) we recover our standard symplectic two-form
(8.36) in any dimension.
Q = Q(q, p) ,  \qquad  P = P(q, p) .    (8.87)
(8.88)
p_i dq_i = P_i dQ_i + dF ,    (8.89)

where F is the generating function of the transformation. If we regard F as a function F_1 of the variables q_i and Q_i, eq. (8.89) implies

p_i = \partial_{q_i} F_1(q, Q) ,    (8.90)

P_i = - \partial_{Q_i} F_1(q, Q) .    (8.91)
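As a concrete illustration (an assumption of this example, not an equation of the text), the function F_1(q, Q) = (\omega/2) q^2 \cot Q generates the transformation to the action-angle variables of the harmonic oscillator that will reappear in eq. (9.8):

    import sympy as sp

    q, P, w = sp.symbols('q P omega', positive=True)
    Q = sp.symbols('Q', real=True)

    F1 = w*q**2*sp.cot(Q)/2
    p_new = sp.diff(F1, q)           # eq. (8.90): p = dF1/dq = w q cot(Q)
    P_new = -sp.diff(F1, Q)          # eq. (8.91): P = -dF1/dQ
    print(sp.simplify(P_new))        # w*q**2/(2*sin(Q)**2)
    # solving P = w q**2 / (2 sin(Q)**2) for q gives q = sqrt(2P/w) sin(Q),
    # and then p = sqrt(2 P w) cos(Q), i.e. the transformation of eq. (9.8)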
We assume that these equations are invertible, that is to say that they can be
used to derive the explicit formulas

Q = Q(q, p) ,  \quad  P = P(q, p)  \qquad \textrm{and} \qquad  q = q(Q, P) ,  \quad  p = p(Q, P) .    (8.92)

But this assumption fails in an important case, the identity transformation

Q = q ,  \qquad  P = p .    (8.93)
Evidently the pair q and Q does not coordinatize phase space. To circumvent
this difficulty we define a new generating function by means of a Legendre
transformation from the first:
F_2 = F_1(q, Q) + QP .    (8.94)
The sign conventions here are a little odd. Still the relation between the functions F_2 and F_1 is recognisably a Legendre transformation, and it follows, by
virtue of eqs. (8.91), that F_2 is a function of q and P only. If we assume that
q and P together coordinatize phase space, which is certainly true for the
identity transformation, we can use F_2 to generate canonical transformations
by repeating the previous logic:
p = \partial_q F_2 ,    (8.95)

Q = \partial_P F_2 .    (8.96)
Again we assume that these equations can be inverted to yield eqs. (8.92) in
explicit form. The identity transformation is generated by
F_2(q, P) = qP .    (8.97)
(8.98)
Problem 8.1
Derive the Hamiltonian corresponding to the Lagrangian for a
charged particle in an external field, eq. (2.10).
Problem 8.2
The Lagrangian for a free relativistic particle is

L = - mc \sqrt{c^2 - \dot{x}^2} ,  \qquad  \dot{x}^2 = \dot{x}_i \dot{x}_i .    (8.99)

Show that this reduces to the ordinary free particle when c is large compared to the
velocity. Then derive the Hamiltonian for the relativistic particle.
Problem 8.3
Let the coordinates of a space transform according to x^\alpha \to x'^\alpha = A^\alpha{}_\beta x^\beta. Prove that this implies that the gradient of a function transforms like a covariant vector, \partial_\alpha f \to \partial'_\alpha f = (A^{-1})^\beta{}_\alpha \partial_\beta f.
Problem 8.4
Show that, given any metric g_{\alpha\beta} (with positive eigenvalues)
on a vector space, one can always change the coordinates x^\alpha to new coordinates
X^\alpha = A^\alpha{}_\beta x^\beta, in such a way that

x^\alpha g_{\alpha\beta} y^\beta = X^\alpha G_{\alpha\beta} Y^\beta ,    (8.100)

and such that the new metric tensor G_{\alpha\beta} becomes equal to a Kronecker delta. Show
that if we instead require

\omega_{\alpha\beta} y^\alpha x^\beta = \Omega_{\alpha\beta} Y^\alpha X^\beta ,    (8.101)

then the symplectic form in the new coordinates can always be made to assume the
standard form (8.36).
Problem 8.5
Use eq. (8.35) to prove the Jacobi identity (8.47) for an arbitrary symplectic form (i.e. with components that may be non-constant phase space
functions).
Problem 8.6
ural
q = q H
p = p H .
(8.102)
Problem 8.7
Consider Euler's equations for a spinning top. Write them in
terms of J_i. There are two conserved quantities of the form G = G(J_1, J_2, J_3) and
H = H(J_1, J_2, J_3). Prove that the time development of any function F = F(J_1, J_2, J_3)
is given by

\dot{F} = \epsilon_{ijk} \partial_i F \, \partial_j G \, \partial_k H ,    (8.103)

where \epsilon_{ijk} is totally anti-symmetric and the partial derivatives are with respect to J_i.
Indices run from 1 to 3. For your information, this is called Nambu mechanics.
Problem 8.8
Evaluate the mutual Poisson brackets enjoyed by the three
conserved charges (2.32) of the particle on the sphere, using polar coordinates for
the calculation. Do the same for the three conserved charges of the particle on the
hyperboloid, considered in problem 2.8.
Problem 8.9
Consider the two Lagrangians

L(q_i, \dot{q}_i) = \frac{1}{2} \dot{q}_i \dot{q}_i - V(q_i)  \qquad \textrm{and} \qquad  L'(q_i, \dot{q}_i) = L(q_i, \dot{q}_i) + \frac{d}{dt} \Lambda(q_i) ,    (8.104)

for some function \Lambda. Derive the Hamiltonian formulations of these two Lagrangians,
and show that there is a canonical transformation relating them. Show explicitly that
Hamilton's equations are equivalent in the two cases. (Have you seen this Lagrangian
before?)
Problem 8.10
Transform from Cartesian coordinates (x, y) to polar coordinates (r, \phi). Do the functions (x, r) coordinatize the plane? The upper half plane?
Problem 8.11
Show that the transformation

Q = \ln \left( \frac{\sin p}{q} \right) ,  \qquad  P = q \cot p    (8.105)

is canonical.
Problem 8.12
where k is a constant.
(8.106)
H = \frac{p^2}{2} + V(q) ,    (9.1)

so that the phase space flow is given by

\dot{q} = p ,  \qquad  \dot{p} = - \partial_q V  \qquad \Rightarrow \qquad  \frac{dp}{dq} = - \frac{\partial_q V}{p} .    (9.2)

The flow lines are therefore given by

p = \pm \sqrt{2 ( E - V(q) )} ,    (9.3)
where the energy E is an integration constant that labels the individual flow
lines. And this is almost the end of the story.
Note that for non-autonomous systems the evolution equations depend explicitly on t, as in eq. (1.38). This includes systems that are driven by some
time-dependent external force. For them it is no longer true that the phase
space flow lines are non-intersecting. In this case we expect that trouble will
arise already in two dimensional phase spaces.
If the dimension of phase space is three things can become much more
complex. Once the flow lines can move in a third dimension there is absolutely
no guarantee that they form an orderly pattern. It is instructive to compare
two three dimensional systems that we have encountered, namely the Lorenz
and Euler equations. In the Euler equations (6.57) the phase space coordinates
can be taken to be J1 , J2 , J3 . We can solve the equations because any solution
Ji (t) is a curve that lies on two well behaved surfaces
J_1^2 + J_2^2 + J_3^2 = \textrm{constant} ,  \qquad  \frac{J_1^2}{2I_1} + \frac{J_2^2}{2I_2} + \frac{J_3^2}{2I_3} = \textrm{constant} .    (9.4)
Hence any solution curve is the intersection of a sphere and an ellipsoid, and
such a curve is not difficult to describe, nor is the set of all such curves hard
to understand. The lesson is that the solutions of the Euler equations are easy
to describe precisely because there exist two constants of the motion, each
describing a well behaved surface in phase space. With the Lorenz equations
(1.45) things are very different: in this case there simply does not exist any
well behaved constants of the motion, and hence there is no reason to believe
that the set of all solutions can be described in any easy and comprehensive
manner. And indeed the solutions are chaotic, in particular they turn out to
be chaotic in the sense that the smallest change in the initial data will change
the long time behaviour of the solution in a dramatic fashion. The point here
is simply that one expects this to happen if the dimension of phase space is
three or more, unless there are special reasons to think otherwise.
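The two constants of the motion of the Euler equations are easy to watch in a numerical integration; a minimal sketch, with moments of inertia chosen arbitrarily:

    import numpy as np
    from scipy.integrate import solve_ivp

    I1, I2, I3 = 1.0, 2.0, 3.0

    def euler(t, J):
        J1, J2, J3 = J
        return [(1/I3 - 1/I2)*J2*J3,
                (1/I1 - 1/I3)*J3*J1,
                (1/I2 - 1/I1)*J1*J2]

    sol = solve_ivp(euler, (0, 100), [1.0, 0.2, 0.1], rtol=1e-10, atol=1e-12)
    J = sol.y
    print(np.ptp(J[0]**2 + J[1]**2 + J[2]**2))    # ~0 : the sphere is preserved
    print(np.ptp(J[0]**2/(2*I1) + J[1]**2/(2*I2) + J[2]**2/(2*I3)))  # ~0 : the ellipsoid too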
Still, the Lorenz equations are not Hamiltonian. They cannot be, because
their phase space has odd dimension. Now Hamiltonian phase space flows have
some rather special properties, such as that uncovered by Liouville's theorem:
they are like the flow of an incompressible fluid, and certain special kinds of
fixed points cannot occur. Hence there is still some hope that the long time
behaviour of Hamiltonian systems may be significantly simpler than that of
a typical dynamical system. This is what we will look into next. (Eventually
our hope of understanding the detailed behaviour of the general solution of a
general Hamiltonian system will be completely dashed, but there will be many
compensations.)
To study the kind of phase space flows that can occur we must be able to
picture them effectively. If the space to which the flow is confined is three
dimensional (either because the phase space has three dimensions or because
the flow is confined to a three dimensional energy surface in a four dimensional
phase space) the Poincaré section solves the problem for us. The idea is to
study a two dimensional cross section of the three dimensional space, through
which the orbits pass in some definite direction. Each time the orbit passes
out through the section one marks the corresponding point with a dot. If the
orbit is periodic, the number of dots one obtains is finite. If the orbit is highly
irregular, the pattern of dots will be highly irregular too.
Consider first the harmonic oscillator, whose general solution is

q = A \sin{(\omega t + \delta)} ,    (9.5)

p = A \omega \cos{(\omega t + \delta)} ,    (9.6)

so that the energy determines the amplitude,

E = \frac{\omega^2 A^2}{2}  \qquad \Leftrightarrow \qquad  A = \frac{\sqrt{2E}}{\omega} .    (9.7)

Now introduce new phase space coordinates (Q, P) through

q = \sqrt{\frac{2P}{\omega}} \sin{Q} ,  \qquad  p = \sqrt{2P\omega} \cos{Q} .    (9.8)
It follows that

E = H = \omega P ,    (9.9)

and the transformation is canonical, since

\frac{\partial q}{\partial Q} \frac{\partial p}{\partial P} - \frac{\partial q}{\partial P} \frac{\partial p}{\partial Q} = 1 = \{Q, P\} .    (9.10)

The new momentum P is a constant of the motion,

\dot{P} = \{P, H\} = 0 .    (9.11)
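Both statements can be verified symbolically; a minimal sketch:

    import sympy as sp

    P, w = sp.symbols('P omega', positive=True)
    Q = sp.symbols('Q', real=True)
    q = sp.sqrt(2*P/w)*sp.sin(Q)
    p = sp.sqrt(2*P*w)*sp.cos(Q)

    print(sp.simplify(p**2/2 + w**2*q**2/2))     # omega*P, as in eq. (9.9)
    jac = sp.diff(q, Q)*sp.diff(p, P) - sp.diff(q, P)*sp.diff(p, Q)
    print(sp.simplify(jac))                      # 1, as in eq. (9.10)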
This suggests a general strategy for a system with n degrees of freedom: look for n constants of the motion I_i whose mutual Poisson brackets vanish,

\{I_i, I_j\} = 0 .    (9.12)

The idea is that each of these constants of the motion will serve as a member
of a canonical pair, whose other member will be called \theta_i. In the second step
we must devise a canonical transformation

q_i = q_i(\theta, I) ,  \qquad  p_i = p_i(\theta, I) ,    (9.13)

such that the Hamiltonian becomes a function of the new momenta only,

H = H(I) .    (9.14)

Then the I_i are indeed conserved, because

\dot{I}_i = - \partial_{\theta_i} H(I) = 0 .    (9.15)

The new canonical variables I_i and \theta_i are called action-angle variables. If such
(9.16)
(9.17)
n_i^\alpha \partial_\alpha I_j = 0 .    (9.18)
Thus these vector fields are tangential to all the n hypersurfaces, and therefore
tangential to the n-dimensional submanifold. Moreover they must be everywhere non-vanishing, because
n_1^\alpha \partial_\alpha \theta_1 = \omega^{\alpha\beta} \partial_\beta I_1 \, \partial_\alpha \theta_1 = \{\theta_1, I_1\} = 1 \neq 0    (9.19)
and so on for all the n vector fields. We have found n everywhere non-vanishing
vector fields pointing along the surface of the n dimensional submanifold defined by the n equations Ii = 0. Let us assume for simplicity that these submanifolds are closed and bounded. Then our conclusion is rather remarkable,
because very few closed and bounded manifolds admit even a single everywhere non-vanishing vector field. (The circle does, but the sphere does not:
you cannot comb a sphere.) Closer inspection shows that eq. (9.12) means
that an additional technical condition on these vector fields is obeyed, namely
that the vector fields commute. Without bothering too much about what this
means, we can then rely on the following mathematical theorem:
The only closed and bounded n dimensional manifold admitting n everywhere
non-vanishing commuting vector fields is the n dimensional torus.
A one dimensional torus is a circle, and a two dimensional torus is an ordinary
torus, or more abstractly it is like a square with periodic boundary conditions.
A three dimensional torus is like a cube with periodic boundary conditions,
and so on. And the conclusion is that in an integrable Hamiltonian system all
trajectories are confined to tori with half the dimension of the entire phase
space.
The motion on a torus depends on the n frequencies that characterize the
motion on the given torus. The trajectories will be open or closed depending
on whether the frequencies are rationally related (as in the Kepler problem),
or not (as in the case of a Lissajous figure that never repeats, see section 1.3).
\{L_3, L^2\} = 0 .    (9.20)

Most Hamiltonian systems are, however, not integrable. The best one can then do is often to split the Hamiltonian into two parts,

H = H_0 + \epsilon H_1 ,    (9.21)

where the split is defined in such a way that the equations of motion coming
from H_0 alone are integrable. It is assumed that \epsilon is some small parameter,
and the full problem is to be solved as a power series expansion in \epsilon.
Already in two dimensions we can see some problems with this idea. Consider the case of a pendulum, with the Hamiltonian
H = \frac{1}{2} p^2 - \omega^2 \cos{q} .    (9.22)
The problem is not the non-linearity of the equations of motion as such. The
problem is that, as discussed in section 1.4, phase space splits into three regions separated by separatrices. The two separatrices, and the hyperbolic fixed
point, all have the same energy E, so the conserved energy will not do as a coordinate uniquely labelling the orbits. A related problem is that motion along
a separatrix is not periodic; formally it corresponds to zero frequency, because
it takes an infinite amount of time to traverse it.
When the energy is large the effect of gravity is small (the pendulum is
rotating almost freely). Thus we can choose

H_0 = \frac{1}{2} p^2 ,    (9.23)

and treat \omega^2 as a small parameter. For small oscillations on the other hand we
can set
114
Figure 9.2. The Kirkwood gaps. The asteroids clustered around 1 : 1 are not a
contradiction: they are the Greek and Trojan asteroids at the Lagrange points.
H_0 = \frac{1}{2} p^2 + \frac{\omega^2}{2} q^2 .    (9.24)

In either case trouble is caused by resonances, that is whenever the frequencies of the unperturbed motion obey

n_1 \omega_1 + n_2 \omega_2 = 0    (9.25)
for some integers n1 , n2 . Moreover the trouble is most severe if these integers
are small.
So are almost all Hamiltonian systems integrable? No. But if the Hamiltonian is of the form (9.21), with an \epsilon that is not too large, then large regions
of phase space will still be filled with tori. In between there will be regions
where the tori have been destroyed and the motion is chaotic. The chaotic
regions grow in size with \epsilon, and the tori that disappear first are those tori for
which the unperturbed motion obeys eq. (9.25) for small integers n_1, n_2. The
full story here is known as the KAM theorem, for Kolmogorov, Arnold, and
Moser.
A beautiful example of the KAM theorem in action is provided by the distribution of asteroids inside Jupiter's orbit. Presumably the asteroid belt was
originally created in such a way that the number of asteroids as a function
of their angular frequencies could be approximated by a fairly smooth function. Then, as time goes on, an asteroid whose trajectory was on a torus in
phase space will remain there, while an asteroid on a chaotic orbit sooner or
later changes its angular frequency significantly. Let \omega_1 be the frequency of
the asteroid's unperturbed motion, and \omega_2 that of Jupiter's. The theory then
suggests that there will be gaps in the asteroid distribution, so that asteroids with an \omega_1 obeying eq. (9.25) for small values of n_1 and n_2 are missing.
Observation bears this out, and the gaps are known as the Kirkwood gaps for
their discoverer. An exception is that the number of asteroids with \omega_1 = \omega_2
is particularly large, but these are not an exception to the KAM theorem,
rather they are the Trojan asteroids that we discussed in section 4.6.
gravitational field of a galaxy. The potential is not bounded from below, but
there is a potential well near the origin. Note that

V(x, y) = \frac{1}{6} - \frac{1}{3} \left( y + \frac{1}{2} \right) \left( y - 1 - \sqrt{3}\, x \right) \left( y - 1 + \sqrt{3}\, x \right) .    (9.27)
Setting this to zero defines three straight lines that bound a triangular well.
Provided the energy does not exceed E = 1/6 motion can be confined inside
this triangle, and the energy surface is bounded. The influence of the cubic
terms in the potential becomes more pronounced as the energy goes up.
To picture the dynamics it is convenient to study a Poincaré section based
on some two dimensional cross section of phase space, say the plane spanned by
the coordinates y and p_y. When studying a trajectory (presumably generated
by a computer) one makes a dot on the two dimensional plane whenever the
trajectory passes through it. If the trajectory is confined to a torus in phase
space one should then see these dots lining up along some one dimensional
curve in the resulting picture. If this does not happen one concludes that the
tori (which would be present if the cubic terms in the Hamiltonian could be
ignored) have been destroyed by the perturbation.
What Hénon and Heiles found illustrates the KAM theorem quite well. For
E = 1/24 and E = 1/12 the actual trajectories do give rise to closed curves on
the Poincaré section, and these curves lie exactly where they should lie according to an eighth order canonical perturbation theory calculation performed by
Gustavson. Above E = 1/9, when the importance of the cubic terms is larger,
one finds orbits that definitely do not lie on tori since they fill large parts of the
picture with chaotic dots. But there will also be orbits (even when E > 1/6)
that do lie on tori, so the picture becomes that of a mixture of order and chaos.
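A Poincaré section of this kind is easy to generate. The following is a minimal numerical sketch (the energy and initial conditions are arbitrary choices of the example), recording crossings of the plane x = 0 with positive p_x:

    import numpy as np
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    def rhs(t, z):
        x, y, px, py = z
        return [px, py, -x - 2*x*y, -y - x*x + y*y]

    def cross(t, z):           # the section: the plane x = 0
        return z[0]
    cross.direction = 1        # record crossings with px > 0 only

    E = 1.0/12.0
    for y0 in np.linspace(-0.25, 0.25, 6):
        V0 = y0**2/2 - y0**3/3                 # V(0, y0)
        px0 = np.sqrt(2*(E - V0))              # fix px0 by the energy, py0 = 0
        sol = solve_ivp(rhs, (0, 3000), [0.0, y0, px0, 0.0], events=cross,
                        max_step=0.1, rtol=1e-9, atol=1e-9)
        dots = sol.y_events[0]
        plt.plot(dots[:, 1], dots[:, 3], '.', markersize=1)
    plt.xlabel('y'); plt.ylabel('p_y'); plt.show()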
Problem 9.1
Consider the Hamiltonian (9.1), and let V (q) be a fourth order
polynomial. Draw the phase space flow for all qualitatively different choices of this
polynomial. Pay special attention to the transition cases, when a maximum and a
minimum merge to produce an inflection point on the graph of V (q).
Problem 9.2
(9.28)
as a series expansion in \epsilon, to second order in \epsilon. Then solve the equation exactly, and
compare the results. As initial condition, set x(0) = k.
Problem 9.3
For the purposes of canonical perturbation theory it is convenient to write the Hénon-Heiles Hamiltonian as

H = \frac{1}{2} \left( p_x^2 + p_y^2 + x^2 + y^2 \right) + \epsilon \left( x^2 y - \frac{y^3}{3} \right) .    (9.29)

Explain why small coupling constants \epsilon correspond to small values of the total energy
in eq. (9.26). What is the nature of the energy surface when \epsilon is small? When it is
large?
Problem 9.4
Once we know that integrable motion takes place on tori in
phase space we become interested in doubly periodic functions. Show that the elliptic
function \phi = \phi(t) which solves the equation for the pendulum has this property. Do
this indirectly by showing that the equations make sense also if we make t purely
imaginary. Then let the physics tell you that if t is taken to be a complex variable
\phi(t) must be periodic under purely real and purely imaginary shifts in t.
Appendix 1 Books