
Advanced Quantum Mechanics

Judith McGovern

December 20, 2012


This is the web page for Advanced Quantum Mechanics (PHYS30201) for the session
2012/13.
The notes are not intended to be fully self-contained, but summarise lecture material and
give pointers to textbooks.
These notes have been prepared with TEX4ht, and use MathML to render the equations.
Try the link to “This course and other resources” to see if your browser is compatible. (In
particular there is a problem with square roots of fractions that you should check is OK.) If
you are using Internet Explorer, you may need to download “MathPlayer” from here.
Please report errors to Judith McGovern.
This course and other resources.

The books I have relied on most in preparing this course are:

• Shankar, R. Principles of Quantum Mechanics 2nd ed. (Plenum 1994) [preview] [Errata
to 13th printing, (2006)]

• Gasiorowicz, S. Quantum Physics 3rd ed. (Wiley 2003) [Supplementary material] [errata]

• Mandl, F. Quantum Mechanics (Wiley 1992)

• Townsend, J.S. A Modern Approach to Quantum Mechanics (McGraw-Hill 1992) [pre-


view]

Throughout these notes, I have provided section references, mostly to the first three of these.
Note that Shankar and Townsend use Gaussian rather than SI units, as do older editions of
Gasiorowicz. Notes on translation are given in section A.12.
I have found two sets of extensive on-line quantum mechanics notes that are at the right
level for this course. One is by Prof Richard Fitzpatrick, of the University of Texas at Austin.
His UG course notes are here and his PG notes here. The UG notes start at the level of our
PHYS20101 course, but continue to the end of this course. The PG notes also start from the
beginning but more in the style of PHYS20602, and continue to the end of this course. They
seem very much equivalent (though not quite identical) for the coverage of this course.
The other is by Prof Jim Branson of the University of California, San Diego, notes avail-
able here; these again start at the beginning, but go beyond this course and cover parts of
PHYS30202 as well.
Both sets of notes should be useful for revision of background material and as an alternative
resource for this course. Both have more details in the derivations than I have given here.
These notes have been prepared with TEX4ht, and use MathML to render the equations.
If you are using Internet Explorer, you may need to download “MathPlayer” from here. Once
you have done so, the following should look more or less the same:
$$\frac{2\pi}{\hbar}\,\bigl|\langle \mathbf{k}_f|e\mathbf{E}\cdot\mathbf{r}|\mathbf{k}_i\rangle\bigr|^2\,\delta(E_i-E_f) \qquad\qquad \int_{-\infty}^{\infty} x^{n} e^{-\alpha x^2}\,dx = \sqrt{\frac{\pi}{\alpha}}$$

The first is MathML, the second is an image. Subtle things to look out for are bold-face

for vectors k etc; not so subtle is whether the square root in the second equation covers both
numerator and denominator. If you are using Firefox, you may need to download STIX fonts;
see here. Safari (even 5.1) is still a mess, unfortunately.

The advantage of MathML is accessibility (at least in theory), equations that can be mag-
nified if you change the magnification of the page (or that pop out with a click using “Math-
Player”), and much cleaner directories. The disadvantage is that I don’t think it looks as nice
on average, and I don’t know in advance how much trouble might be caused by incompatible
browsers. I may revert to using images for equations (as the web notes discussed above do) if
there are widespread problems, so let me know.
Operator methods in Quantum Mechanics

1.1 Postulates of Quantum Mechanics

Summary: All of quantum mechanics follows from a small set of assumptions, which cannot themselves be derived.
There is no unique formulation or even number of postulates, but all formulations I’ve seen
have the same basic content. This formulation follows Shankar most closely, though he puts III
and IV together. Nothing significant should be read into my separating them (as many other
authors do), it just seems easier to explore the consequences bit by bit.
I: The state of a particle is given by a vector |ψ(t)i in a Hilbert space. The state is
normalised: hψ(t)|ψ(t)i = 1.
This is as opposed to the classical case where the position and momentum can be specified
at any given time.
This is a pretty abstract statement, but more informally we can say that the wave function
ψ(x, t) contains all possible information about the particle. How we extract that information
is the subject of subsequent postulates.
The really major consequence we get from this postulate is superposition, which is behind
most quantum weirdness such as the two-slit experiment.
II: There is a Hermitian operator corresponding to each observable property of the particle.
Those corresponding to position x̂ and momentum p̂ satisfy [x̂i , p̂j ] = i~δij .
Other examples of observable properties are energy and angular momentum. The choice of
these operators may be guided by classical physics (eg p̂ · p̂/2m for kinetic energy and x̂ × p̂
for orbital angular momentum), but ultimately is verified by experiment (eg Pauli matrices for
spin-½ particles).
The commutation relation for x̂ and p̂ is a formal expression of Heisenberg’s uncertainty
principle.
III: Measurement of the observable associated with the operator Ω̂ will result in one of the
eigenvalues ωᵢ of Ω̂. Immediately after the measurement the particle will be in the corresponding
eigenstate |ωᵢ⟩.
This postulate ensures reproducibility of measurements. If the particle was not initially in
the state |ωi i the result of the measurement was not predictable in advance, but for the result of
a measurement to be meaningful the result of a subsequent measurement must be predictable.
(“Immediately” reflects the fact that subsequent time evolution of the system will change the
value of ω unless it is a constant of the motion.)
IV: The probability of obtaining the result ωᵢ in the above measurement (at time t₀) is
|⟨ωᵢ|ψ(t₀)⟩|².

If a particle (or an ensemble of particles) is repeatedly prepared in the same initial state
|ψ(t₀)⟩ and the measurement is performed, the result each time will in general be different
(assuming this state is not an eigenstate of Ω̂; if it is, the result will be the corresponding ωᵢ
each time). Only the distribution of results can be predicted. The postulate expressed this way
has the same content as saying that the average value of ω is given by ⟨ψ(t₀)|Ω̂|ψ(t₀)⟩. (Note
the distinction between repeated measurements on freshly-prepared particles, and repeated
measurements on the same particle, which will give the same ωᵢ each subsequent time.)
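Postulates III and IV can be illustrated with a minimal numerical sketch (my own, not from the notes), using a two-state system with the Pauli matrix σₓ standing in for a generic observable Ω̂:

```python
import numpy as np

# A sketch (not from the notes): a two-state system with the Pauli matrix
# sigma_x standing in for a generic observable Omega (eigenvalues +1, -1).
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
eigvals, eigvecs = np.linalg.eigh(sigma_x)

psi = np.array([0.6, 0.8])                  # an arbitrary normalised state

# P_i = |<omega_i|psi>|^2 for each outcome; the outcomes exhaust all cases.
probs = np.abs(eigvecs.T @ psi) ** 2
assert np.isclose(probs.sum(), 1.0)

# <psi|Omega|psi> equals the probability-weighted average of the outcomes.
expval = psi @ sigma_x @ psi
assert np.isclose(expval, probs @ eigvals)
```

The second assertion is exactly the statement that ⟨ψ|Ω̂|ψ⟩ has the same content as the probability distribution over outcomes.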
V: The time evolution of the state |ψ(t)⟩ is given by iℏ d|ψ(t)⟩/dt = Ĥ|ψ(t)⟩, where Ĥ is the
operator corresponding to the classical Hamiltonian.
In most cases the Hamiltonian is just the energy and is expressed as p̂ · p̂/2m + V (x̂). (They
differ in some cases though - see texts on classical mechanics such as Kibble and Berkshire.)
In the presence of non-conservative forces such as magnetism the Hamiltonian is still equal to
the energy, but its expression in terms of p̂ is more complicated.
VI: The Hilbert space for a system of two or more particles is a product space.
This is true whether the particles interact or not, ie if the states |φi i span the space for one
particle, the states |φi i ⊗ |φj i will span the space for two particles. If they do interact though,
the eigenstates of the Hamiltonian will not just be simple products of that form, but will be
linear superpositions of such states.
References

• Shankar 4

1.2 Position and Momentum Representations

Summary: The position-space representation allows us to make contact with quantum mechanics expressed in terms of wavefunctions.

• Working in one dimension for the moment, it is convenient to define the state |x′⟩ which
is an eigenfunction of the position operator: x̂|x′⟩ = x′|x′⟩. Here we have used x′ to
indicate a specific value of x – say 3 m right of the origin – but we also write |x⟩ when we
want to leave the exact value open. It’s like the difference between |ω₁⟩ and |ωᵢ⟩.

• The set {|x⟩} form a complete set as x̂ is Hermitian, but now when we expand another
state in terms of these, we have to integrate rather than sum, and rather than a discrete set
of coefficients {aᵢ} we have a continuous set or function: |α⟩ = ∫α(x′)|x′⟩dx′. (Integration
limits are −∞ → ∞ unless otherwise specified.)

• We choose the normalisation to be ⟨x|x′⟩ = δ(x − x′). Then α(x) = ⟨x|α⟩. This is called
the wave function in position space of the state |α⟩. With this normalisation the identity
operator is

∫_{−∞}^{∞} |x′⟩⟨x′| dx′.

These states have infinite norm and so don’t really belong to our vector space, but we
don’t need to worry about this! (See PHYS20602 notes for much more detail.)
• Since |α⟩ = ∫α(x′)|x′⟩dx′, from the fourth postulate we might expect that P(x) ≡ |α(x)|²
is the probability of finding the particle at position x. But we also need ∫P(x′)dx′ = 1
– the probability of finding the particle at some x is unity. The transition from sums to
integrals corresponds to the transition from discrete probabilities to probability distribu-
tions, and so the correct interpretation is that P(x)dx is the probability of finding the
particle in an infinitesimal interval between x and x + dx.

• Since ⟨x|x̂|α⟩ = x⟨x|α⟩ = xα(x), we can say that the position operator in the position-
space representation is simply x, and operating with it is equivalent to multiplying the
wave function by x.

• If we make the hypothesis that ⟨x|p̂|α⟩ = −iℏ dα(x)/dx, we can see that the required
commutation relation [x̂, p̂] = iℏ is satisfied.

• We can equally imagine states of definite momentum, |p⟩, with p̂|p⟩ = p|p⟩. Then α̃(p) =
⟨p|α⟩ is the wave function in momentum space; in this representation p̂ is given by p and x̂
by (iℏ)d/dp.

• Let’s write the position-space wave function of |p⟩, ⟨x|p⟩, as φₚ(x). Since ⟨x|p̂|p⟩ can be
written as either pφₚ(x) or −iℏ dφₚ(x)/dx, we see that φₚ(x) = (1/√(2πℏ)) e^{ipx/ℏ}. This is just
a plane wave, as expected. (The normalisation is determined by ⟨p|p′⟩ = δ(p − p′).)
• This implies that α(x) = (1/√(2πℏ)) ∫ e^{ip′x/ℏ} α̃(p′)dp′, that is, the position- and momentum-
space wave functions are Fourier transforms of each other. (The 1/√ℏ is present in the
prefactor because we are using p and not k = p/ℏ as the conjugate variable.)
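This Fourier-transform pair is easy to verify numerically. Below is a quick sketch (mine, not from the notes; with ℏ = 1) that builds α̃(p) from a Gaussian α(x) by direct evaluation of the integral and compares it with the known analytic transform (a Gaussian of this width is its own transform):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
alpha_x = np.pi ** -0.25 * np.exp(-x ** 2 / 2)   # normalised Gaussian alpha(x)

# alpha~(p) = (1/sqrt(2 pi hbar)) * integral of exp(-i p x / hbar) alpha(x) dx
p = np.linspace(-5.0, 5.0, 101)
kernel = np.exp(-1j * np.outer(p, x) / hbar)
alpha_p = kernel @ alpha_x * dx / np.sqrt(2 * np.pi * hbar)

# With hbar = 1 this Gaussian is its own Fourier transform.
alpha_p_exact = np.pi ** -0.25 * np.exp(-p ** 2 / 2)
assert np.allclose(alpha_p, alpha_p_exact, atol=1e-6)
```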

• Strictly speaking we should write the position-space representations of x̂ and p̂ as ⟨x|x̂|x′⟩ =
x′δ(x − x′) and ⟨x|p̂|x′⟩ = (iℏ) dδ(x − x′)/dx′. These only make sense in a larger expression
in which integration over x′ is about to be performed.

• The extension to three dimensions is trivial, with the representation of the vector operator
p̂ in position space being −iℏ∇. Since differentiation with respect to one coordinate
commutes with multiplication by another, [x̂ᵢ, p̂ⱼ] = iℏδᵢⱼ as required. (We have used x̂
as the position operator. We will however use r as the position vector. The reason we
don’t use r̂ is that it is so commonly used to mean a unit vector. Thus x̂|r⟩ = r|r⟩.)
The generalisation of φₚ(x) is φₚ(r) = (1/(2πℏ)^{3/2}) e^{ip·r/ℏ}, which is a plane wave travelling in
the direction of p.

• In the position representation, the Schrödinger equation reads

−(ℏ²/2m)∇²ψ(r, t) + V(r)ψ(r, t) = iℏ ∂ψ(r, t)/∂t

Note though that position and time are treated quite differently in quantum mechanics.
There is no operator corresponding to time, and t is just part of the label of the state:
ψ(r, t) = ⟨r|ψ(t)⟩.

• Together with the probability density, ρ(r) = |ψ(r)|², we also have a probability flux j(r) =
(−iℏ/2m)(ψ(r)*∇ψ(r) − ψ(r)∇ψ(r)*). The continuity equation ∇·j = −∂ρ/∂t, which
ensures local conservation of probability density, follows from the Schrödinger equation.
• A two-particle state has a wave function which is a function of the two positions, Φ(r₁, r₂),
and the basis kets are direct product states |r₁⟩ ⊗ |r₂⟩. For states of non-interacting
distinguishable particles where it is possible to say that the first particle is in single-
particle state |ψ⟩ and the second in |φ⟩, the state of the system is |ψ⟩ ⊗ |φ⟩ and the wave
function is Φ(r₁, r₂) = (⟨r₁| ⊗ ⟨r₂|)(|ψ⟩ ⊗ |φ⟩) = ⟨r₁|ψ⟩⟨r₂|φ⟩ = ψ(r₁)φ(r₂).
 
• From iℏ d|ψ(t)⟩/dt = Ĥ|ψ(t)⟩ we obtain |ψ(t + dt)⟩ = (1 − (i/ℏ)Ĥ dt)|ψ(t)⟩. Thus

|ψ(t)⟩ = lim_{N→∞} (1 − (i/ℏ)Ĥ(t − t₀)/N)^N |ψ(t₀)⟩ = e^{−iĤ(t−t₀)/ℏ}|ψ(t₀)⟩ ≡ U(t, t₀)|ψ(t₀)⟩

where the exponential of an operator is defined as e^{Ω̂} = Σₙ (1/n!)Ω̂ⁿ. If the Hamiltonian
depends explicitly on time, we have U(t, t₀) = T exp(−(i/ℏ)∫_{t₀}^{t} Ĥ(t′)dt′), where the time-
ordered exponential denoted by T exp means that in expanding the exponential, the op-
erators are ordered so that Ĥ(t₁) always sits to the right of Ĥ(t₂) (so that it acts first) if
t₁ < t₂. (This will be derived later, and is given here for completeness.)
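For a time-independent Ĥ the exponential form of U(t, t₀) can be checked directly with a small matrix. The following sketch (my own, with ℏ = 1) verifies that U is unitary and that |ψ(t)⟩ = U|ψ(0)⟩ satisfies the Schrödinger equation:

```python
import numpy as np
from scipy.linalg import expm

# A sketch (not from the notes; hbar = 1): build a random Hermitian H,
# form U(t, t0) = exp(-i H (t - t0)) and check its key properties.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                       # Hermitian Hamiltonian

t = 1.3
U = expm(-1j * H * t)
# U is unitary, so the norm of the state is preserved.
assert np.allclose(U.conj().T @ U, np.eye(4))

# |psi(t)> = U|psi(0)> satisfies i d|psi>/dt = H|psi> (finite-difference check).
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
psi_t = U @ psi0
dt = 1e-6
dpsi = (expm(-1j * H * (t + dt)) @ psi0 - psi_t) / dt
assert np.allclose(1j * dpsi, H @ psi_t, atol=1e-4)
```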

References

• Shankar ch 1.10

• Mandl ch 12.4

1.3 The Stern-Gerlach experiment

Summary: Simple extensions of the Stern-Gerlach experiment reduce the predictions about quantum measurement to their essence.

The usual reason for considering the Stern-Gerlach experiment is that it shows experi-
mentally that angular momentum is quantised, and that particles can have intrinsic angular
momentum which is an integer multiple of ½ℏ. An inhomogeneous magnetic field deflects par-
ticles by an amount proportional to the component of their magnetic moment parallel to the
field; when a beam of atoms passes through, they follow only a few discrete paths (2j + 1, where
j is their total angular momentum quantum number) rather than, as classically predicted, a
continuum of paths (corresponding to a magnetic moment which can be at any angle relative
to the field).
For our purposes though a Stern-Gerlach device is a quick and easily-visualised way of
making measurements of quantities (spin components) for which the corresponding operators
do not commute, and thereby testing the postulates concerned with measurement. We restrict
ourselves to spin-½; all we need to know is that if we write |+n⟩ to indicate a particle with its
spin up along the direction n, and |−n⟩ for spin down, then the two orthogonal states {|±n⟩}
span the space for any n, and |⟨±n|±n′⟩|² = ½ if n and n′ are perpendicular.
In the figure above we see an unpolarised beam entering a Stern-Gerlach device with its
field oriented in the z-direction. If we intercept only the upper or lower exiting beam, we have
a pure spin-up or spin-down state.
Here we see a sequence of Stern-Gerlach devices with their fields oriented in either the x- or
z-direction. The numbers give the fraction of the original unpolarised beam which reaches this
point. The sequence of two x-devices in a row illustrates reproducibility. The final z-device
is the really crucial one. Do you understand why half of the particles end up spin-down, even
though we initially picked the | + zi beam?
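The beam fractions in such a sequence can be reproduced with two-component spinors. A minimal sketch (my own conventions, not the notes': |+z⟩ = (1, 0), |−z⟩ = (0, 1), |+x⟩ = (1, 1)/√2):

```python
import numpy as np

# Sketch of the beam fractions through the z, x, x, z sequence of devices.
up_z = np.array([1.0, 0.0])
down_z = np.array([0.0, 1.0])
up_x = np.array([1.0, 1.0]) / np.sqrt(2)

frac = 0.5                                  # |+z> channel of the first device
frac *= abs(up_x @ up_z) ** 2               # then select |+x>: |<+x|+z>|^2 = 1/2
assert np.isclose(frac, 0.25)
frac *= abs(up_x @ up_x) ** 2               # repeated x-measurement: reproducible
assert np.isclose(frac, 0.25)
# Final z-device: the |+x> beam splits evenly between |+z> and |-z>,
# even though the beam was originally pure |+z>.
assert np.isclose(abs(down_z @ up_x) ** 2, 0.5)
assert np.isclose(frac * abs(down_z @ up_x) ** 2, 0.125)   # 1/8 of the beam
```

The last line is the answer to the question above: after the x-measurement the beam is in |+x⟩, which has equal overlap with |+z⟩ and |−z⟩.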

• Townsend ch 1.4

1.4 Ehrenfest’s Theorem and the Classical Limit

Summary: The form of classical mechanics which inspired Heisenberg’s formulation of quantum mechanics allows us to see when particles should behave classically.
Using iℏ d|ψ(t)⟩/dt = Ĥ|ψ(t)⟩, and hence −iℏ d⟨ψ(t)|/dt = ⟨ψ(t)|Ĥ, and writing ⟨Ω̂⟩ ≡ ⟨ψ(t)|Ω̂|ψ(t)⟩,
we have Ehrenfest’s Theorem

d⟨Ω̂⟩/dt = (1/iℏ)⟨[Ω̂, Ĥ]⟩ + ⟨∂Ω̂/∂t⟩

The second term disappears if Ω̂ is a time-independent operator (like momentum, spin...).
Note we are distinguishing between intrinsic time-dependence of an operator, and the time-
dependence of its expectation value in a given state.
This is very reminiscent of a result which follows from Hamilton’s equations in classical
mechanics, for a function of position, momentum (and possibly time explicitly) Ω(p, x, t)

dΩ(p, x, t)/dt = (∂Ω/∂x)(dx/dt) + (∂Ω/∂p)(dp/dt) + ∂Ω/∂t
             = (∂Ω/∂x)(∂H/∂p) − (∂Ω/∂p)(∂H/∂x) + ∂Ω/∂t
             ≡ {Ω, H} + ∂Ω/∂t

where the notation {Ω, H} is called the Poisson bracket of Ω and H, and is simply defined in
terms of the expression on the line above which it replaced. (For Ω = x and Ω = p we can in
fact recover Hamilton’s equations for ṗ and ẋ from this more general expression.)
In fact for Ĥ = p̂²/2m + V(x̂), we can further show that

d⟨x̂⟩/dt = ⟨p̂/m⟩  and  d⟨p̂⟩/dt = −⟨dV(x̂)/dx̂⟩

which looks very close to Newton’s laws. Note though that ⟨dV(x̂)/dx̂⟩ ≠ d⟨V(x̂)⟩/d⟨x̂⟩ in
general.
This correspondence is not just a coincidence, in the sense that Heisenberg was influenced
by it in coming up with his formulation of quantum mechanics. It confirms that it is the
expectation value of an operator, rather than the operator itself, which is closer to the classical
concept of the time evolution of some quantity as a particle moves along a trajectory.
Similarity of formalism is not the same as identity of concepts though. Ehrenfest’s Theorem
does not say that the expectation value of a quantity follows a classical trajectory in general.
What it does ensure is that if the uncertainty in the quantity is sufficiently small, in other words
if ∆x and ∆p are both small (in relative terms), then the quantum motion will approximate the
classical path. Of course because of the uncertainty principle, if ∆x is small then ∆p is large, and
it can only be relatively small if p itself is really large, ie if the particle’s mass is macroscopic.
More specifically, we can say that we will be in the classical regime if the de Broglie wavelength
is much less than the (experimental) uncertainty in x. (In the Stern-Gerlach experiment the
atoms are heavy enough that (for a given component of their magnetic moment) they follow
approximately classical trajectories through the inhomogeneous magnetic field.)
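For the harmonic oscillator the correspondence is in fact exact: ⟨x̂⟩ obeys the classical equation of motion. A sketch (my own, ℏ = m = ω = 1) checking this for a coherent state in a truncated number basis:

```python
import numpy as np

# Sketch (not from the notes): for the harmonic oscillator, Ehrenfest gives
# d<x>/dt = <p> and d<p>/dt = -<x>, so <x>(t) follows the classical
# trajectory exactly.  Checked for a coherent state.
N = 80
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # lowering operator a|n> = sqrt(n)|n-1>
x_op = (a + a.T) / np.sqrt(2)                  # x = (a + a^dag)/sqrt(2)

alpha0 = 2.0                                   # real coherent-state amplitude
c = np.zeros(N)
c[0] = 1.0
for k in range(1, N):
    c[k] = c[k - 1] * alpha0 / np.sqrt(k)      # c_n = alpha0^n / sqrt(n!)
psi0 = c * np.exp(-alpha0 ** 2 / 2)
assert np.isclose(psi0 @ psi0, 1.0)

energies = np.arange(N) + 0.5                  # H|n> = (n + 1/2)|n>
for t in [0.0, 0.7, 1.9, 3.1]:
    psi_t = np.exp(-1j * energies * t) * psi0
    x_mean = np.real(psi_t.conj() @ x_op @ psi_t)
    assert np.isclose(x_mean, np.sqrt(2) * alpha0 * np.cos(t), atol=1e-6)
```

For an anharmonic potential the same check would fail at late times: only the expectation values of at most quadratic Hamiltonians follow the classical trajectory exactly.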

• Shankar ch 2.7, ch 6

• Mandl ch 3.2
Approximate methods I: variational method and WKB

It is often (almost always!) the case that we cannot solve real problems analytically. Only a
very few potentials have analytic solutions, by which I mean one can write down the energy
levels and wave functions in closed form, as for the harmonic oscillator and Coulomb potential.
In fact those are really the only useful ones (along with square wells)... In the last century,
a number of approximate methods have been developed to obtain information about systems
which can’t be solved exactly.
These days, this might not seem very relevant. Computers can solve differential equations
very efficiently. But:
• It is always useful to have a check on numerical methods
• Even supercomputers can’t solve the equations for many interacting particles exactly in a
reasonable time (where “many” may be as low as four, depending on the complexity of
the interaction) — ask a nuclear physicist or quantum chemist.
• Quantum field theories are systems with infinitely many degrees of freedom. All ap-
proaches to QFT must be approximate.
• If the system we are interested in is close to a soluble one, we might obtain more insight
from approximate methods than from numerical ones. This is the realm of perturbation
theory. The most accurate prediction ever made, for the anomalous magnetic moment of
the electron, which is good to one part in 10¹², is a 4th order perturbative calculation.

2.1 Variational methods: ground state

Summary: Whatever potential we are considering, we can always obtain an upper bound on the ground-state energy.
Suppose we know the Hamiltonian of a bound system but don’t have any idea what the
energy of the ground state is, or the wave function. The variational principle states that if we
simply guess the wave function, the expectation value of the Hamiltonian in that wave function
will be greater than the true ground-state energy:

⟨Ψ|Ĥ|Ψ⟩ / ⟨Ψ|Ψ⟩ ≥ E₀
This initially surprising result is more obvious if we consider expanding the (normalised)
|Ψ⟩ in the true energy eigenstates |n⟩, which gives ⟨Ĥ⟩ = Σₙ Pₙ Eₙ. Since all the probabilities
Pₙ are non-negative, and all the Eₙ greater than or equal to E₀, this is obviously not less than
E₀.
It is also clear that the better the guess (in the sense of maximising the overlap with the
true ground state) the lower the energy bound, till successively better guesses converge on the
true result.
As a very simple example, consider the infinite square well with V = 0, 0 < x < a and
V = ∞ otherwise. As a trial function, we use Ψ(x) = x(a − x), 0 < x < a and Ψ(x) = 0
otherwise. Then

⟨Ψ|Ĥ|Ψ⟩ / ⟨Ψ|Ψ⟩ = 10ℏ²/(2ma²) = 1.013 π²ℏ²/(2ma²)
This is spectacularly good! Obviously it helped that our trial wave function looked a lot like
what we’d expect of the true solution - symmetric about the midpoint, obeying the boundary
conditions, no nodes....
In general, we will do better if we have an adjustable parameter, because then we can find
the value which minimises our upper bound. So we could try Ψ(x) = x(a − x) + bx²(a − x)²
(with our previous guess corresponding to b = 0). Letting Mathematica do the dirty work, we
get an energy bound which is a function of b, which takes its minimum value of 1.00001 E₀ at
b = 1.133/a². Not much room for further improvement here!

Above we have plotted, on the left, the true and approximate wave functions (except that
the true one is hidden under the second approximation, given in blue) and on the right, the
deviations of the approximate wave functions from the true one (except that for the second
approximation the deviation has been multiplied by 5 to render it visible!) This illustrates
a general principle though: the wave function does have deviations from the true one on the
part-per-mil scale, while the energy is good to 1 part in 10⁵. This is because the error in the
energy is proportional to the squared coefficients of the admixture of “wrong” states, whereas
the error in the wave function is linear in them.
Another example, for which the analytic solution is not known, is the quartic potential,
V(x) = βx⁴. Here a Gaussian trial wave function Ψ(x) = e^{−ax²/2} gives an upper bound for
the ground-state energy of (3/8)6^{1/3} = 0.68 in units of (ℏ⁴β/m²)^{1/3}. (The value obtained from
numerical solution of the equation is 0.668.)
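A sketch of this minimisation (my own, with ℏ = m = β = 1, i.e. energies in the units (ℏ⁴β/m²)^{1/3} of the text; the closed forms ⟨T⟩ = a/4 and ⟨x⁴⟩ = 3/(4a²) follow from standard Gaussian integrals):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch: Gaussian variational bound for V = x^4 (hbar = m = beta = 1).
# For the normalised Gaussian with exponent a, <T> = a/4 and <x^4> = 3/(4 a^2).
def E(a):
    return a / 4 + 3 / (4 * a ** 2)

res = minimize_scalar(E, bounds=(0.1, 10.0), method="bounded")
assert np.isclose(res.x, 6 ** (1 / 3), atol=1e-4)              # best a = 6^(1/3)
assert np.isclose(res.fun, (3 / 8) * 6 ** (1 / 3), atol=1e-6)  # the 0.68 bound
assert res.fun > 0.668                                         # above the true value
```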
• Gasiorowicz ch 14.4
• Mandl ch 8.1
• Shankar ch 16.1

2.2 Variational methods: excited states

Summary: Symmetry considerations may allow us to extend the variational


method to certain excited states.

Looking again at the expression ⟨Ĥ⟩ = Σₙ Pₙ Eₙ, and recalling that the Pₙ are the squares
of the overlap between the trial function and the actual eigenstates of the system, we see that
we can only find bounds on excited states if we can arrange for the overlap of the trial wave
function with all lower states to be zero. Usually this is not possible.
However an exception occurs where the states of the system can be separated into sets
with different symmetry properties or other quantum numbers. Examples include parity and
(in 3 dimensions) angular momentum. For example the lowest state with odd parity will
automatically have zero overlap with the (even-parity) ground state, and so an upper bound
can be found for it as well.
For the square well, the relevant symmetry is reflection about the midpoint of the well.
If we choose a trial function which is antisymmetric about the midpoint, it must have zero
overlap with the true ground state. So we can get a good bound on the first excited state,
since ⟨Ĥ⟩ = Σ_{n>0} Pₙ Eₙ ≥ E₁. Using Ψ₁(x) = x(a − x)(2x − a), 0 < x < a, we get E₁ ≤
42ℏ²/(2ma²) = 1.064 E₁.
If we wanted a bound on E2 , we’d need a wave function which was orthogonal to both the
ground state and the first excited state. The latter is easy by symmetry, but as we don’t know
the exact ground state (or so we are pretending!) we can’t ensure the first. We can instead
form a trial wave function which is orthogonal to the best trial ground state, but we will no
longer have a strict upper bound on the energy E2 , just a guess as to its value.
In this case we can choose Ψ(x) = x(a − x) + bx2 (a − x)2 with a new value of b which gives
orthogonality to the previous state, and then we get E2 ∼ 10.3E0 (as opposed to 9 for the
actual value).

• Mandl ch 8.3

• Shankar ch 16.1

2.3 Variational methods: the helium atom

Summary: The most famous example of the variational principle is the ground
state of the two-electron helium atom.
If we could switch off the interactions between the electrons, we would know what the
ground state of the helium atom would be: Ψ(r₁, r₂) = φ₁₀₀^{Z=2}(r₁) φ₁₀₀^{Z=2}(r₂), where φ_{nlm}^{Z} is a
single-particle wave function of the hydrogenic atom with nuclear charge Z. For the ground
state n = 1 and l = m = 0 (spherical symmetry). The energy of the two electrons would be
−2Z²E_Ry = −108.8 eV. But the experimental energy is only −78.6 eV (ie it takes 78.6 eV to
fully ionise neutral helium). The difference is obviously due to the fact that the electrons repel
one another.
The full Hamiltonian (ignoring the motion of the nucleus - a good approximation for the
accuracy to which we will be working) is

−(ℏ²/2m)(∇₁² + ∇₂²) − 2ℏcα(1/|r₁| + 1/|r₂|) + ℏcα/|r₁ − r₂|

where ∇₁² involves differentiation with respect to the components of r₁, and α = e²/(4πε₀ℏc) ≈
1/137. (See here for a note on units in EM.)
A really simple guess at a trial wave function for this problem would just be Ψ(r₁, r₂) as
written above. The expectation value of the repulsive interaction term is (5Z/4)E_Ry, giving
a total energy of −74.8 eV. (Gasiorowicz demonstrates the integral, as do Fitzpatrick and
Branson.)
It turns out we can do even better if we use the atomic number Z in the wave function Ψ as
a variational parameter (that in the Hamiltonian, of course, must be left at 2). The best value
turns out to be Z = 27/16, and that gives a better upper bound of −77.5 eV – just slightly
higher than the experimental value. (Watch the sign – we get a lower bound for the ionization
energy.) This effective nuclear charge of less than 2 presumably reflects the fact that to some
extent each electron screens the nuclear charge from the other.
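A sketch of the minimisation over the effective charge (my own, using the standard textbook closed forms in Rydbergs, E_Ry = 13.6 eV: with effective charge Z in the wave function and nuclear charge fixed at 2, the kinetic energy is 2Z², the nuclear attraction −8Z, and the repulsion (5/4)Z):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch: variational helium ground state with effective charge Z.
E_Ry = 13.6

def E(Z):
    return (2 * Z ** 2 - 8 * Z + 1.25 * Z) * E_Ry

assert np.isclose(E(2.0), -74.8, atol=0.1)       # the simple Z = 2 guess above
res = minimize_scalar(E, bounds=(1.0, 2.0), method="bounded")
assert np.isclose(res.x, 27 / 16, atol=1e-4)     # best effective charge
assert np.isclose(res.fun, -77.5, atol=0.1)      # vs the -78.6 eV experiment
```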

• Gasiorowicz ch 14.2, 14.4 (most complete)

• Mandl ch 7.2, 8.8.2 (clearest)

• Shankar ch 16.1 (not very clear; misprints in early printings)

2.4 WKB approximation

Summary: The WKB approximation works for potentials which are slowly-
varying on the scale of the wavelength of the particle and is particularly useful for
describing tunnelling.

The WKB approximation is named for G. Wentzel, H.A. Kramers, and L. Brillouin, who
independently developed the method in 1926. There are pre-quantum antecedents due to
Jeffreys and Rayleigh, though.
We can always write the one-dimensional Schrödinger equation as

d²ψ/dx² = −k(x)²ψ(x)

where k(x) ≡ √(2m(E − V(x)))/ℏ. We could think of the quantity k(x) as a spatially-varying
wavenumber (k = 2π/λ), though we anticipate that this can only make sense if it doesn’t
change too quickly with position - else we can’t identify a wavelength at all.
Let’s see under what conditions a solution of the form

ψ(x) = A exp(±i ∫^x k(x′)dx′)

might be a good approximate solution. Plugging this into the SE above, the LHS reads −(k² ∓
ik′)ψ. (Here and hereafter, primes denote differentiation wrt x — except when they indicate
an integration variable.) So provided |k′/k| ≪ |k|, or |λ′| ≪ 1, this is indeed a good solution as
the second term can be ignored. And |λ′| ≪ 1 does indeed mean that the wavelength is slowly
varying. (One sometimes reads that what is needed is that the potential is slowly varying.
But that is not a well defined statement, because dV/dx is not dimensionless. For any smooth
potential, at high-enough energy we will have |λ′| ≪ 1. What is required is that the lengthscale
of variation of λ, or k, or V (the scales are all approximately equal) is large compared with the
de Broglie wavelength of the particle.)
An obvious problem with this form is that the current isn’t constant: if we calculate it we
get |A|²ℏk(x)/m. A better approximation is

ψ(x) = (A/√k(x)) exp(±i ∫^x k(x′)dx′)

which gives a constant flux. (Classically, the probability of finding a particle in a small region
is inversely proportional to the speed with which it passes through that region.) Furthermore
one can show that if the error in the first approximation is O(|λ′|), the residual error with
the second approximation is O(|λ′|²). At first glance there is a problem with the second form
when k(x) = 0, ie when E = V(x). But near these points - the classical turning points - the
whole approximation scheme is invalid, because λ → ∞ and so the potential cannot be “slowly
varying” on the scale of λ.
For a region of constant potential, of course, there is no difference between the two approx-
imations and both reduce to a plane wave, since ∫^x k(x′)dx′ = kx.
For regions where E < V(x), k(x) will be imaginary and there is no wavelength as such.
But defining λ = 2π/k still, the WKB approximation will continue to be valid if |λ′| ≪ 1.
Tunnelling and bound-state problems inevitably include regions where E ≈ V (x) and the
WKB approximation isn’t valid. This would seem to be a major problem. However if such
regions are short the requirement that the wave function and its derivative be continuous can
help us to “bridge the gap”.

• Gasiorowicz Supplement 4 A

• Shankar ch 16.2

2.4.1 WKB approximation for bound states


In a bound state problem with potential V (x), for a given energy E, we can divide space into
classically allowed regions, for which E > V (x), and classically forbidden regions for which
E < V (x). For simplicity we will assume that there are only three regions in total, classically
forbidden for x < a and x > b, and classically allowed for a < x < b.
In the classically allowed region a < x < b the wave function will be oscillating and we can
write it either as a superposition of right- and left-moving complex exponentials or as

ψ(x) = (A/√k(x)) cos(∫^x k(x′)dx′ + φ)

For the particular case of a well with infinite sides the solution must vanish at the boundaries,
so (taking the lower limit of integration as a for definiteness; any other choice just shifts φ)
φ = (n′ + ½)π and ∫_a^b k(x′)dx′ + φ = (n″ + ½)π; in other words ∫_a^b k(x′)dx′ = (n + 1)π, with
integer n ≥ 0. Of course for constant k this gives k = nπ/(b − a), which is exact.
For a more general potential, outside the classically allowed region we will have decaying
exponentials. In the vicinity of the turning points these solutions will not be valid, but if we
approximate the potential as linear we can solve the Schrödinger equation exactly (in terms of
Airy functions). Matching these to our WKB solutions in the vicinity of x = a and x = b gives
the surprisingly simple result that inside the well

ψ(x) = (A/√k(x)) cos(∫_a^x k(x′)dx′ − π/4)  and  ψ(x) = (A′/√k(x)) cos(∫_b^x k(x′)dx′ + π/4)

which can only be satisfied if A′ = ±A and ∫_a^b k(x′)dx′ = (n + ½)π. This latter is the quantisation
condition for a finite well; it is different from the infinite well because the solution can leak
into the forbidden region. (For a semi-infinite well, the condition is that the integral equal
(n + ¾)π. This is the appropriate form for the l = 0 solutions of a spherically symmetric
well.) Unfortunately we can’t check this against the finite square well though, because there
the potential is definitely not slowly varying at the edges, nor can it be approximated as linear.
But we can try the harmonic oscillator, for which the integral gives Eπ/(ℏω) and hence the
quantisation condition gives E = (n + ½)ℏω! The approximation was only valid for large n
(small wavelength) but in fact we’ve obtained the exact answer for all levels.
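The harmonic-oscillator claim can be checked by solving the quantisation condition numerically. A sketch (my own, with ℏ = m = ω = 1, so V = x²/2 and the exact levels are n + ½):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Sketch: solve the WKB condition  int_a^b k(x) dx = (n + 1/2) pi  for the
# harmonic oscillator, k(x) = sqrt(2(E - x^2/2))  (hbar = m = omega = 1).
def action(E):
    b = np.sqrt(2 * E)                     # turning points at x = -b, +b
    val, _ = quad(
        lambda x: np.sqrt(np.maximum(2 * (E - x ** 2 / 2), 0.0)), -b, b)
    return val                             # analytically equal to pi * E

for n in range(4):
    E_n = brentq(lambda E: action(E) - (n + 0.5) * np.pi, 0.01, 20.0)
    assert np.isclose(E_n, n + 0.5, atol=1e-6)
```

The `np.maximum` guard just protects the square root from rounding at the turning points.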
Details of the matching process are given in section 2.4.1.1, since I’ve not found them in
full detail in any textbook. They are not examinable.

• Gasiorowicz Supplement 4 A

• Shankar ch 16.2

2.4.1.1 Matching with Airy Functions


This section is not examinable. More about Airy functions can be found in section A.7.
If we can treat the potential as linear over a wide-enough region around the turning points
that, at the edges of the region, the WKB approximation is valid, then we can match the WKB
and exact solutions.
Consider a linear potential V = βx as an approximation to the potential near the right-hand
turning point b. We will scale x = (ħ²/(2mβ))^(1/3) z and E = (ħ²β²/(2m))^(1/3) µ, so the turning point
is at z = µ. Then the differential equation is y″(z) − zy(z) + µy(z) = 0, and the solution which
decays as z → ∞ is y(z) = A Ai(z − µ). This has to be matched, for z not too close to µ, to the
WKB solution. In these units k(x) = √(µ − z) and ∫_b^x k(x′)dx′ = ∫_µ^z √(µ − z′) dz′ = −(2/3)(µ − z)^(3/2),
so

    ψ^WKB_(x<b)(z) = (B/(µ − z)^(1/4)) cos( −(2/3)(µ − z)^(3/2) + φ )   and   ψ^WKB_(x>b)(z) = (C/(z − µ)^(1/4)) exp( −(2/3)(z − µ)^(3/2) ).

(We chose the lower limit of integration to be µ in order that the constant of integration
vanished; any other choice would just shift φ.) Now the asymptotic forms of the Airy function
are known:

    Ai(z) → cos( (2/3)|z|^(3/2) − π/4 ) / (√π |z|^(1/4))  as z → −∞,   and   Ai(z) → e^(−(2/3) z^(3/2)) / (2√π z^(1/4))  as z → ∞,
so

    Ai(z − µ) → cos( (2/3)(µ − z)^(3/2) − π/4 ) / (√π (µ − z)^(1/4))  as z → −∞,   and   Ai(z − µ) → e^(−(2/3)(z − µ)^(3/2)) / (2√π (z − µ)^(1/4))  as z → ∞,

and these will match the WKB expressions exactly provided B = 2C and φ = π/4.
At the left-hand turning point a, the potential is V = −βx (with a different β in general)
and the solution is y(z) = A Ai(−z − µ). On the other hand the WKB integral is ∫_a^x k(x′)dx′ =
∫_(−µ)^z √(µ + z′) dz′ = (2/3)(µ + z)^(3/2). So in the classically allowed region we are comparing
    Ai(−z − µ) → cos( (2/3)(z + µ)^(3/2) − π/4 ) / (√π (z + µ)^(1/4))  as z → ∞,   with   ψ^WKB_(x>a)(z) = (D/(µ + z)^(1/4)) cos( (2/3)(µ + z)^(3/2) + φ )
which requires φ = −π/4. (Note that φ is different in each case because we have taken the
integral from a different place.)
It is worth stressing that though the exact (Airy function) and WKB solutions match “far
away” from the turning point, they do not do so close in. The (z − µ)−1/4 terms in the latter
mean that they blow up, but the former are perfectly smooth. They are shown (for µ = 0)
below, in red for the WKB and black for the exact functions. We can see they match very well
so long as |z − µ| > 1; in fact z → ∞ is overkill!
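The quality of the asymptotic forms quoted above can be checked with `scipy.special.airy` (a sketch; the sample points, chosen away from the zeros of Ai, are arbitrary):

```python
import numpy as np
from scipy.special import airy   # airy(z) returns (Ai, Ai', Bi, Bi')

def ai_asym_left(z):
    """Oscillatory asymptotic form of Ai(z), valid as z -> -infinity."""
    zeta = (2.0 / 3.0) * abs(z)**1.5
    return np.cos(zeta - np.pi / 4) / (np.sqrt(np.pi) * abs(z)**0.25)

def ai_asym_right(z):
    """Decaying asymptotic form of Ai(z), valid as z -> +infinity."""
    zeta = (2.0 / 3.0) * z**1.5
    return np.exp(-zeta) / (2 * np.sqrt(np.pi) * z**0.25)

# Already at |z| = 3 the asymptotic forms agree with Ai to a few percent:
for z in (3.0, 5.0, 6.0):
    assert abs(ai_asym_right(z) - airy(z)[0]) / abs(airy(z)[0]) < 0.03
    assert abs(ai_asym_left(-z) - airy(-z)[0]) / abs(airy(-z)[0]) < 0.03
```

This bears out the observation that taking z → ∞ literally is overkill: a few units away from the turning point is already enough.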

So now we can be more precise about the conditions under which the matching is possible:
we need the potential to be linear over the region ∆x ∼ (ħ²/(2mβ))^(1/3), where β = dV/dx.
Linearity means that ∆V/∆x ≈ dV/dx at the turning point, or (d²V/dx²) ∆x ≪ dV/dx
(assuming the curvature is the dominant non-linearity, as is likely if V is smooth). For the
harmonic oscillator, (d²V/dx²) ∆x/(dV/dx) = 2(ħω/E)^(2/3), which is only much less than 1 for
very large n, making the exact result even more surprising!

2.4.2 WKB approximation for tunnelling


For the WKB approximation to be applicable to tunnelling through a barrier, we need as
always |λ′| ≪ 1. In practice that means that the barrier function is reasonably smooth and
that E ≪ V(x). Now it would of course be possible to do a careful calculation, writing down
the WKB wave function in the three regions (left of the barrier, under the barrier and right of
the barrier), linearising in the vicinity of the turning points in order to match the wave function
and its derivatives at both sides. This however is a tiresomely lengthy task, and we will not
attempt it. Instead, recall the result for a high, wide square barrier; the transmission coefficient
in the limit e^(−2κL) ≪ 1 is given by

    T = 16 k₁k₂κ² / ((κ² + k₁²)(κ² + k₂²)) × e^(−2κL)
where k₁ and k₂ are the wavenumbers on either side of the barrier (width L, height V) and
κ = √(2m(V − E))/ħ. (See the notes for PHYS20101, where however k₁ = k₂.) All the prefactors are
not negligible, but they are weakly energy-dependent, whereas the e−2κL term is very strongly
energy dependent. If we plot log T against energy, the form will be essentially const − 2κ(E)L,
and so we can still make predictions without worrying about the constant.
For a barrier which is not constant, the WKB approximation will yield a similar expression
for the tunnelling probability:
    T = [prefactor] × exp( −2 ∫_a^b κ(x′) dx′ )

where κ(x) ≡ √(2m(V(x) − E))/ħ. The WKB approximation is like treating a non-square
barrier like a succession of square barriers of different heights. The need for V(x) to be slowly
varying is then due to the fact that we are slicing the barrier sufficiently thickly that e^(−2κ∆L) ≪ 1
for each slice.
The classic application of the WKB approach to tunnelling is alpha decay. The potential
here is a combination of an attractive short-range nuclear force and the repulsive Coulomb
interaction between the alpha particle and the daughter nucleus. Unstable states have energies
greater than zero, but they are long-lived because they are classically confined by the barrier.
(It takes some thought to see that a repulsive force can cause quasi-bound states to exist!) The
semi-classical model is of a pre-formed alpha particle bouncing back and forth many times (f )
per second, with a probability of escape each time given by the tunnelling probability, so the
decay rate is given by 1/τ = f T . Since we can’t calculate f with any reliability we would be
silly to worry about the prefactor in T , but the primary dependence of the decay rate on the
energy of the emitted particle will come from the easily-calculated exponential.

In the figure above the value of a is roughly the nuclear radius R, and b is given by V_C(b) = E,
with the Coulomb potential V_C(r) = zZħcα/r. (Z is the atomic number of the daughter nucleus
and z = 2 that of the alpha.) The integral in the exponent can be done (see Gasiorowicz
Supplement 4 B for details; the substitution r = b cos²θ is used), giving in the limit b ≫ a

    2 ∫_a^b κ(x′)dx′ = 2πzZα √(mc²/(2E)) = 3.96 Z/√(E(MeV))   ⇒   log₁₀(1/τ) = const − 1.72 Z/√(E(MeV))
Data for the lifetimes of long-lived isotopes (those with low-energy alphas) fit such a functional
form well, but with 1.61 rather than 1.72. In view of the fairly crude approximations made,
this is a pretty good result. Note it is independent of the nuclear radius because we used b ≫ a;
we could have kept the first correction, proportional to √(a/b), to improve the result.
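The numbers above can be reproduced in a few lines (a Python sketch; the alpha-particle rest energy of 3727 MeV is the only physical input, and the sample Z and E values are illustrative):

```python
import numpy as np

alpha = 1 / 137.036          # fine-structure constant
mc2_alpha = 3727.4           # alpha-particle rest energy in MeV
z = 2                        # charge of the alpha

def gamow_exponent(Z, E_MeV):
    """2 * integral of kappa dx in the limit b >> a (pure Coulomb barrier)."""
    return 2 * np.pi * z * Z * alpha * np.sqrt(mc2_alpha / (2 * E_MeV))

# Geiger-Nuttall slope: log10(1/tau) = const - slope * Z / sqrt(E(MeV))
slope = 2 * np.pi * z * alpha * np.sqrt(mc2_alpha / 2) / np.log(10)
print(f"slope = {slope:.2f}")       # compare with the fitted value 1.61
print(gamow_exponent(90, 4.3))      # e.g. a Z = 90 daughter, E = 4.3 MeV alpha
```

The strong exponential sensitivity to E is evident: a modest change in alpha energy changes the exponent, and hence the lifetime, by many orders of magnitude.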

• Gasiorowicz Supplement 4 B

• Shankar ch 16.2
Approximate methods II:
Time-independent perturbation theory

3.1 Formalism

Summary: Perturbation theory is the most widely used approximate method.


“Time-independent perturbation theory” deals with bound states, eg the spectrum
of the real hydrogen atom and its response to external fields.

Perturbation theory is applicable when the Hamiltonian Ĥ can be split into two parts, with
the first part being exactly solvable and the second part being small in comparison. The first
part is always written Ĥ⁽⁰⁾, and we will denote its eigenstates by |n⁽⁰⁾⟩ and energies by Eₙ⁽⁰⁾ (with
wave functions φₙ⁽⁰⁾). These we know. The eigenstates and energies of the full Hamiltonian are
denoted |n⟩ and Eₙ, and the aim is to find successively better approximations to these. The
zeroth-order approximation is simply |n⟩ = |n⁽⁰⁾⟩ and Eₙ = Eₙ⁽⁰⁾, which is just another way of
saying that the perturbation is small.
Nomenclature for the perturbing Hamiltonian Ĥ − Ĥ⁽⁰⁾ varies: δV, Ĥ⁽¹⁾ and λĤ⁽¹⁾ are all
common. It usually is a perturbing potential but we won't assume so here, so we won't use the
first. The second and third differ in that the third has explicitly identified a small, dimensionless
parameter (eg α in EM), so that the residual Ĥ⁽¹⁾ isn't itself small. With the last choice, our
expressions for the eigenstates and energies of the full Hamiltonian will be explicitly power
series in λ, so Eₙ = Eₙ⁽⁰⁾ + λEₙ⁽¹⁾ + λ²Eₙ⁽²⁾ + ... etc. With the second choice the small factor is
hidden in Ĥ⁽¹⁾, and is implicit in the expansion, which then reads Eₙ = Eₙ⁽⁰⁾ + Eₙ⁽¹⁾ + Eₙ⁽²⁾ + .... In
this case one has to remember that anything with a superscript (1) is first order in this implicit
small factor, or more generally the superscript (m) denotes something which is mth order. For
the derivation of the equations we will retain an explicit λ, but thereafter we will set it equal
to one to revert to the other formulation. We will take λ to be real so that Ĥ⁽¹⁾ is Hermitian.
We start with the master equation

    (Ĥ⁽⁰⁾ + λĤ⁽¹⁾)|n⟩ = Eₙ|n⟩.

Then we substitute in Eₙ = Eₙ⁽⁰⁾ + λEₙ⁽¹⁾ + λ²Eₙ⁽²⁾ + ... and |n⟩ = |n⁽⁰⁾⟩ + λ|n⁽¹⁾⟩ + λ²|n⁽²⁾⟩ + ...
and expand. Then since λ is a free parameter, we have to match terms on each side with the
same powers of λ, to get

    Ĥ⁽⁰⁾|n⁽⁰⁾⟩ = Eₙ⁽⁰⁾|n⁽⁰⁾⟩
    Ĥ⁽⁰⁾|n⁽¹⁾⟩ + Ĥ⁽¹⁾|n⁽⁰⁾⟩ = Eₙ⁽⁰⁾|n⁽¹⁾⟩ + Eₙ⁽¹⁾|n⁽⁰⁾⟩
    Ĥ⁽⁰⁾|n⁽²⁾⟩ + Ĥ⁽¹⁾|n⁽¹⁾⟩ = Eₙ⁽⁰⁾|n⁽²⁾⟩ + Eₙ⁽¹⁾|n⁽¹⁾⟩ + Eₙ⁽²⁾|n⁽⁰⁾⟩

We have to solve these sequentially. The first we assume we have already done. The second
will yield En(1) and |n(1) i. Once we know these, we can use the third equation to yield En(2) and
|n(2) i, and so on.
In each case, to solve for the energy we take the inner product with ⟨n⁽⁰⁾| (ie the same state)
whereas for the wave function, we use ⟨m⁽⁰⁾| (another state). We use, of course, ⟨m⁽⁰⁾|Ĥ⁽⁰⁾ =
Eₘ⁽⁰⁾⟨m⁽⁰⁾| and ⟨m⁽⁰⁾|n⁽⁰⁾⟩ = δₘₙ.

At first order we get

    Eₙ⁽¹⁾ = ⟨n⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩   and   ⟨m⁽⁰⁾|n⁽¹⁾⟩ = ⟨m⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩ / (Eₙ⁽⁰⁾ − Eₘ⁽⁰⁾)   ∀ m ≠ n

The second equation tells us the overlap of |n⁽¹⁾⟩ with all the other |m⁽⁰⁾⟩, but not with |n⁽⁰⁾⟩.
This is obviously not constrained, because we can add any amount of |n⁽⁰⁾⟩ and the equations
will still be satisfied. However we need the state to continue to be normalised, and when we
expand ⟨n|n⟩ = 1 in powers of λ we find that ⟨n⁽⁰⁾|n⁽¹⁾⟩ is required to be imaginary. This is
just like a phase rotation of the original state, so we can ignore it. Hence

    |n⁽¹⁾⟩ = Σ_{m≠n} ( ⟨m⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩ / (Eₙ⁽⁰⁾ − Eₘ⁽⁰⁾) ) |m⁽⁰⁾⟩
b (0) is degenerate, there is a potential problem with this expression because
If the spectrum of H
the denominator can be infinite. In that case we have to diagonalise H b (1) in the subspace of
degenerate states exactly. This is called “degenerate perturbation theory”.
Then at second order

    Eₙ⁽²⁾ = ⟨n⁽⁰⁾|Ĥ⁽¹⁾|n⁽¹⁾⟩ = Σ_{m≠n} |⟨m⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩|² / (Eₙ⁽⁰⁾ − Eₘ⁽⁰⁾)

The expression for the second-order shift in the wave function |n⁽²⁾⟩ can also be found but it
is tedious. The main reason we wanted |n⁽¹⁾⟩ was to find Eₙ⁽²⁾ anyway, and we're not planning
to find Eₙ⁽³⁾! Note that though the expression for Eₙ⁽¹⁾ is generally applicable, those for |n⁽¹⁾⟩
and Eₙ⁽²⁾ would need some modification if the Hamiltonian had continuum eigenstates as well
as bound states (eg the hydrogen atom). Provided the state |n⟩ is bound, that is just a matter of
integrating rather than summing. This restriction to bound states is why Mandl calls chapter 7
“bound-state perturbation theory”. The perturbation of continuum states (eg scattering states)
is usually dealt with separately.
Note that the equations above hold whether we have identified an explicit small parameter
λ or not. So from now on we will set λ to one, and Eₙ = Eₙ⁽⁰⁾ + Eₙ⁽¹⁾ + Eₙ⁽²⁾ + ....
Connection to variational approach:
For the ground state (which is always non-degenerate) E₀⁽⁰⁾ + E₀⁽¹⁾ is a variational upper bound
on the exact energy E₀, since it is obtained by using the unperturbed ground state as a trial
wavefunction for the full Hamiltonian. It follows that the sum of all higher corrections E₀⁽²⁾ + ...
must be negative. We can see indeed that E₀⁽²⁾ will always be negative, since for every term in
the sum the numerator is positive and the denominator negative.
• Gasiorowicz ch 11.1
• Mandl ch 7.1,3
• Shankar ch 17.1
• Townsend ch 11.1-3
3.1.1 Simple examples of perturbation theory
Probably the simplest example we can think of is an infinite square well with a low step half
way across, so that V(x) = 0 for 0 < x < a/2, V₀ for a/2 < x < a and infinite elsewhere. We
treat this as a perturbation on the flat-bottomed well, so Ĥ⁽¹⁾ = V₀ for a/2 < x < a and zero
elsewhere.
The ground-state unperturbed wavefunction is ψ₀⁽⁰⁾ = √(2/a) sin(πx/a), with unperturbed energy
E₀⁽⁰⁾ = π²ħ²/(2ma²). A “low” step will mean V₀ ≪ E₀⁽⁰⁾. Then we have

    E₀⁽¹⁾ = ⟨ψ₀⁽⁰⁾|Ĥ⁽¹⁾|ψ₀⁽⁰⁾⟩ = (2/a) ∫_{a/2}^a V₀ sin²(πx/a) dx = V₀/2

This problem can be solved semi-analytically; in both regions the solutions are sinusoids, but
with wavenumbers k = √(2mE)/ħ and k′ = √(2m(E − V₀))/ħ respectively; satisfying the boundary
conditions and matching the wavefunctions and derivatives at x = a/2 gives the condition
k cot(ka/2) = −k′ cot(k′a/2), which can be solved numerically for E. (You ought to try this, it
will be good practice for later sections of the course.) Below the exact solution (green, dotted)
and E₀⁽⁰⁾ + E₀⁽¹⁾ (blue) are plotted; we can see that they start to diverge when V₀ = 5 (everything
is in units of ħ²/(2ma²)).

We can also plot the exact wavefunctions for different step size, and see that for V0 = 10 (the
middle picture, well beyond the validity of first-order perturbation theory) it is significantly
different from a simple sinusoid.
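An alternative numerical check (a Python sketch, in the same units ħ²/2m = 1, a = 1; the basis size of 30 is an arbitrary truncation) is to diagonalise the full Hamiltonian in the basis of unperturbed well states and compare the ground-state energy with E₀⁽⁰⁾ + V₀/2:

```python
import numpy as np
from scipy.integrate import quad

a, V0, N = 1.0, 5.0, 30      # well width, step height, basis size (units hbar^2/2m = 1)

# Hamiltonian in the basis of unperturbed states sqrt(2/a) sin(n pi x/a):
# H_mn = (m pi/a)^2 delta_mn + (2/a) V0 * integral_{a/2}^{a} sin(m pi x/a) sin(n pi x/a) dx
H = np.zeros((N, N))
for m in range(1, N + 1):
    for n in range(1, N + 1):
        f = lambda x: np.sin(m * np.pi * x / a) * np.sin(n * np.pi * x / a)
        H[m - 1, n - 1] = (2 / a) * V0 * quad(f, a / 2, a)[0]
    H[m - 1, m - 1] += (m * np.pi / a)**2

E_exact = np.linalg.eigvalsh(H)[0]    # essentially exact ground-state energy
E_pert = np.pi**2 + V0 / 2            # E_0^(0) + E_0^(1)
print(E_exact, E_pert)
```

As expected from the variational argument above, the exact energy lies a little below the first-order estimate, the difference being the (negative) second- and higher-order corrections.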

Another example is the harmonic oscillator, with a perturbing potential Ĥ⁽¹⁾ = λx². The
states of the unperturbed oscillator are denoted |n⁽⁰⁾⟩, with energies Eₙ⁽⁰⁾ = (n + 1/2)ħω.
Recalling that in terms of creation and annihilation operators (see section A.4), x̂ =
√(ħ/(2mω)) (â + â†), with [â, â†] = 1, we have

    Eₙ⁽¹⁾ = ⟨n⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩ = (ħλ/(2mω)) ⟨n⁽⁰⁾|(â†)² + â² + 2â†â + 1|n⁽⁰⁾⟩ = (λ/(mω²)) ħω(n + 1/2)
The first-order change in the wavefunction is also easy to compute, as ⟨m⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩ = 0
unless m = n ± 2. Thus

    |n⁽¹⁾⟩ = Σ_{m≠n} ( ⟨m⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩ / (Eₙ⁽⁰⁾ − Eₘ⁽⁰⁾) ) |m⁽⁰⁾⟩
           = (ħλ/(2mω)) [ (√((n+1)(n+2))/(−2ħω)) |(n+2)⁽⁰⁾⟩ + (√(n(n−1))/(2ħω)) |(n−2)⁽⁰⁾⟩ ]

and

    Eₙ⁽²⁾ = ⟨n⁽⁰⁾|Ĥ⁽¹⁾|n⁽¹⁾⟩ = Σ_{m≠n} |⟨m⁽⁰⁾|Ĥ⁽¹⁾|n⁽⁰⁾⟩|² / (Eₙ⁽⁰⁾ − Eₘ⁽⁰⁾)
           = (ħλ/(2mω))² [ (n+1)(n+2)/(−2ħω) + n(n−1)/(2ħω) ] = −(1/2) (λ/(mω²))² ħω(n + 1/2)
We can see a pattern emerging, and of course this is actually a soluble problem, as all that the
perturbation has done is change the frequency. Defining ω′ = ω√(1 + 2λ/(mω²)), we see that
the exact solution is

    Eₙ = (n + 1/2)ħω′ = (n + 1/2)ħω ( 1 + λ/(mω²) − (1/2)(λ/(mω²))² + ... )

in agreement with the perturbative calculation.
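The agreement can also be seen numerically (a Python sketch, with ħ = m = ω = 1 and an arbitrary λ = 0.1 assumed), by building x̂ in a truncated number basis and diagonalising:

```python
import numpy as np

hbar = m = omega = 1.0
lam = 0.1                 # strength of the lambda*x^2 perturbation (assumed value)
N = 60                    # truncated harmonic-oscillator number basis

n = np.arange(N)
a_op = np.diag(np.sqrt(n[1:]), 1)                      # annihilation operator
x = np.sqrt(hbar / (2 * m * omega)) * (a_op + a_op.T)  # x = sqrt(hbar/2m omega)(a + a^dag)

H = np.diag((n + 0.5) * hbar * omega) + lam * (x @ x)
E_num = np.linalg.eigvalsh(H)

# Exact answer: the frequency just shifts, omega' = omega sqrt(1 + 2 lam/(m omega^2))
omega_p = omega * np.sqrt(1 + 2 * lam / (m * omega**2))
E_exact = (n + 0.5) * hbar * omega_p
print(np.max(np.abs(E_num[:5] - E_exact[:5])))         # low levels agree closely
```

Truncation only affects the highest levels of the matrix, so the low-lying eigenvalues reproduce (n + 1/2)ħω′ essentially exactly.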

3.2 Example of degenerate perturbation theory


Suppose we have a three-state basis and an Ĥ⁽⁰⁾ whose eigenstates, |1⁽⁰⁾⟩, |2⁽⁰⁾⟩ and |3⁽⁰⁾⟩, have
energies E₁⁽⁰⁾, E₂⁽⁰⁾ and E₃⁽⁰⁾ (all initially assumed to be different). A representation of this
system is

    |1⁽⁰⁾⟩ = (1, 0, 0)ᵀ,   |2⁽⁰⁾⟩ = (0, 1, 0)ᵀ,   |3⁽⁰⁾⟩ = (0, 0, 1)ᵀ,   Ĥ⁽⁰⁾ = diag( E₁⁽⁰⁾, E₂⁽⁰⁾, E₃⁽⁰⁾ ).

Now let's consider the perturbation

              ( 1 1 0 )
    Ĥ⁽¹⁾ = a ( 1 1 0 ).
              ( 0 0 1 )
Then we can show that, to first order in a,

    E₁⁽¹⁾ = E₂⁽¹⁾ = E₃⁽¹⁾ = a,   |1⁽¹⁾⟩ = (a/(E₁⁽⁰⁾ − E₂⁽⁰⁾)) |2⁽⁰⁾⟩,   |2⁽¹⁾⟩ = (a/(E₂⁽⁰⁾ − E₁⁽⁰⁾)) |1⁽⁰⁾⟩,   |3⁽¹⁾⟩ = 0

and hence also

    E₁⁽²⁾ = −E₂⁽²⁾ = a²/(E₁⁽⁰⁾ − E₂⁽⁰⁾),   E₃⁽²⁾ = 0
We note that because Ĥ⁽¹⁾ is already diagonal in the |3⁽⁰⁾⟩ space, the first-order shift in energy
is exact and there is no change to that eigenvector. In this case it is straightforward to obtain
the eigenvalues of Ĥ⁽⁰⁾ + Ĥ⁽¹⁾ exactly:

    E₁ = (1/2)( 2a + E₁⁽⁰⁾ + E₂⁽⁰⁾ − √(4a² + (E₂⁽⁰⁾ − E₁⁽⁰⁾)²) ),   E₂ = (1/2)( 2a + E₁⁽⁰⁾ + E₂⁽⁰⁾ + √(4a² + (E₂⁽⁰⁾ − E₁⁽⁰⁾)²) )

and E₃ = E₃⁽⁰⁾ + a, and so we can check the expansion to order a².
Now consider the case where E₂⁽⁰⁾ = E₁⁽⁰⁾. We note that |1⁽⁰⁾⟩ and |2⁽⁰⁾⟩ are just two of an
infinite set of eigenstates with the same energy E₁⁽⁰⁾, since any linear combination of them is
another eigenstate. Our results for the third state are unchanged, but none of those obtained
for the first two still hold. Instead we have to work in a new basis, |1′⁽⁰⁾⟩ and |2′⁽⁰⁾⟩, which
diagonalises Ĥ⁽¹⁾. By inspection we see that this is

    |1′⁽⁰⁾⟩ = (1/√2)(|1⁽⁰⁾⟩ + |2⁽⁰⁾⟩)   and   |2′⁽⁰⁾⟩ = (1/√2)(|1⁽⁰⁾⟩ − |2⁽⁰⁾⟩).

Then Ĥ⁽¹⁾|1′⁽⁰⁾⟩ = 2a|1′⁽⁰⁾⟩ and Ĥ⁽¹⁾|2′⁽⁰⁾⟩ = 0. Hence

    |1′⁽¹⁾⟩ = |2′⁽¹⁾⟩ = 0,   E₁′⁽¹⁾ = 2a,   E₂′⁽¹⁾ = 0,   E₁′⁽²⁾ = E₂′⁽²⁾ = 0

In this case, because Ĥ⁽¹⁾ doesn't mix states 1 & 2 with 3, diagonalising it in the subspace is
actually equivalent to solving the problem exactly. We can check our results against the exact
eigenvalues for E₂⁽⁰⁾ = E₁⁽⁰⁾ and see that they are correct, except that we made the “wrong”
choice for our labelling of |1′⁽⁰⁾⟩ and |2′⁽⁰⁾⟩.
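The whole example is easy to verify with a direct diagonalisation (a Python sketch; the numerical values of E₁⁽⁰⁾, E₂⁽⁰⁾, E₃⁽⁰⁾ and a are arbitrary choices):

```python
import numpy as np

E1, E2, E3, a = 1.0, 2.0, 3.0, 0.05   # illustrative values, E1 != E2
H0 = np.diag([E1, E2, E3])
H1 = a * np.array([[1.0, 1.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])

exact = np.linalg.eigvalsh(H0 + H1)

# Perturbation theory through second order (non-degenerate case):
pert = np.sort([E1 + a + a**2 / (E1 - E2),
                E2 + a - a**2 / (E1 - E2),
                E3 + a])
print(np.max(np.abs(exact - pert)))   # residual should be O(a^3)
```

Setting E2 = E1 instead makes the perturbative formulae above blow up, while the exact eigenvalues remain perfectly finite; that is precisely the situation degenerate perturbation theory handles.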

3.3 The fine structure of hydrogen


Although the Schrödinger equation with a Coulomb potential reproduces the Bohr model and
gives an excellent approximation to the energy levels of hydrogen, the true spectrum was known
to be more complicated right from the start. The small deviations are termed “fine structure”
and they are of order 10−4 compared with the ground-state energy (though the equivalent terms
for many-electron atoms can be sizable). Hence perturbation theory is an excellent framework
in which to consider them.
There are two effects to be considered. One arises from the use of the non-relativistic
expression p²/2m for the kinetic energy, which is only the first term in an expansion of
√((mc²)² + (pc)²) − mc². The first correction term is −p⁴/(8m³c²), and its matrix elements
are most easily calculated using the trick of writing it as −(1/(2mc²))(Ĥ⁽⁰⁾ − V_C(r))², where Ĥ⁽⁰⁾
is the usual Hamiltonian with a Coulomb potential. Now in principle we need to be careful
here, because Ĥ⁽⁰⁾ is highly degenerate (energies depend only on n and not on l or m). However
we have ⟨nl′m′|(Ĥ⁽⁰⁾ − V_C(r))²|nlm⟩ = ⟨nl′m′|(Eₙ⁽⁰⁾ − V_C(r))²|nlm⟩, and since in this form the
operator is spherically symmetric, it can't link states of different l or m. So the basis {|nlm⟩}
already diagonalises the perturbation in each subspace of states with the same n, and we have no extra work
to do here. (We are omitting the superscript (0) on the hydrogenic states, here and below.)
The final result for the kinetic energy effect is

    ⟨nlm|Ĥ⁽¹⁾_KE|nlm⟩ = −(α²|Eₙ⁽⁰⁾|/n) ( 2/(2l+1) − 3/(4n) )

In calculating this the expressions |Eₙ⁽⁰⁾| = α²mc²/(2n²) and a₀ = ħ/(mcα) are useful. Tricks for
doing the radial integrals are explained in Shankar qu. 17.3.4; they are tabulated in section
A.3. Details of the algebra for this and the following calculation are given here.
The second correction is the spin-orbit interaction:

    Ĥ⁽¹⁾_SO = (1/(2m²c²)) (1/r)(dV_C/dr) L̂·Ŝ

In this expression L̂ and Ŝ are the vector operators for orbital and spin angular momentum
respectively. The usual (somewhat hand-waving) derivation talks of the electron seeing a
magnetic field from the proton which appears to orbit it; the magnetic moment of the electron
then prefers to be aligned with this field. This gives an expression which is too large by a factor of
2; an exact derivation requires the Dirac equation.
This time we will run into trouble with the degeneracy of Ĥ⁽⁰⁾ unless we do some work first.
The usual trick of writing 2L̂·Ŝ = Ĵ² − L̂² − Ŝ², where Ĵ = L̂ + Ŝ, tells us that rather than working
with eigenstates of L̂², L̂_z, Ŝ² and Ŝ_z, which would be the basis we'd get with the minimal
direct product of the spatial state |nlm_l⟩ and a spinor |sm_s⟩, we want to use eigenstates of
L̂², Ŝ², Ĵ² and Ĵ_z, |nljm_j⟩, instead. (Since s = 1/2 for an electron we suppress it in the labelling
of the state.) An example of such a state is |n 1 ½ ½⟩ = √(2/3) |n11⟩ ⊗ |½ −½⟩ − √(1/3) |n10⟩ ⊗ |½ ½⟩.

Then

    ⟨nljm_j|Ĥ⁽¹⁾_SO|nljm_j⟩ = (α²|Eₙ⁽⁰⁾|/n) ( 2/(2l+1) − 2/(2j+1) )

(This expression is only correct for l ≠ 0. However there is another separate effect, the Darwin
term, which only affects s-waves and whose expectation value is just the same as above (with
l = 0 and j = 1/2), so we can use this for all l. The Darwin term can only be understood in the
context of the Dirac equation.)
So finally

    E⁽¹⁾_nj = (α²|Eₙ⁽⁰⁾|/n) ( 3/(4n) − 2/(2j+1) ).
The degeneracy of all states with the same n has been broken. States of l = j ± 1/2 are still
degenerate, a result that persists to all orders in the Dirac equation (where in any case orbital
angular momentum is no longer a good quantum number). So the eight n = 2 states are split
by 4.5 × 10⁻⁵ eV, with the 2p₃/₂ state lying higher than the degenerate 2p₁/₂ and 2s₁/₂ states.
Two other effects should be mentioned here. One is the hyperfine splitting. The proton
has a magnetic moment, and the energy of the atom depends on whether the electron spin is
aligned with it or not: more precisely, whether the total spin of the electron and proton is 0 or
1. The anti-aligned case has lower energy (since the charges are opposite), and the splitting for
the 1s state is 5.9 × 10⁻⁶ eV. (It is around a factor of 10 smaller for any of the n = 2 states.)
Transitions between the two hyperfine states of 1s hydrogen give rise to the 21 cm microwave
radiation which is a signal of cold hydrogen gas in the galaxy and beyond.
The final effect is called the Lamb shift. It cannot be accounted for in quantum mechanics,
but only in quantum field theory.

The diagrams above show corrections to the simple Coulomb force which would be rep-
resented by the exchange of a single photon between the proton and the electron. The most
notable effect on the spectrum of hydrogen is to lift the remaining degeneracy between the 2 p1/2
and 2 s1/2 states, so that the latter is higher by 4.4 × 10−6 eV.
Below the various corrections to the energy levels of hydrogen are shown schematically. The
gap between the n = 1 and n = 2 shells is suppressed, and the Lamb and hyperfine shifts are
exaggerated in comparison with the fine structure. The effect of the last two on the 2p₃/₂ level
is not shown.
• Gasiorowicz ch 12.1,2,4

• Mandl ch 7.4

• Shankar ch 17.3

• Townsend ch 11.6,7

3.4 The Zeeman effect: hydrogen in an external mag-


netic field
(Since we will not ignore spin, this whole section is about the so-called anomalous Zeeman
effect. The so-called normal Zeeman effect cannot occur for hydrogen, but is the special case
for certain multi-electron atoms for which the total spin is zero.)
With an external magnetic field along the z-axis, the perturbing Hamiltonian is Ĥ⁽¹⁾ =
−µ·B = (eB/2m)(L̂_z + 2Ŝ_z). The factor of 2 multiplying the spin is of course the famous g-
factor for spin, as predicted by the Dirac equation. Clearly this is diagonalised in the {|nlm_l m_s⟩}
basis (s = 1/2 suppressed in the labelling as usual). Then E⁽¹⁾_{nlm_l m_s} = (eBħ/2m)(m_l + 2m_s). If,
for example, l = 2 there are 7 possible values of m_l + 2m_s between −3 and 3, with −1, 0 and
1 being degenerate (5 × 2 = 10 states in all).
This is fine if the magnetic field is strong enough that we can ignore the fine structure
discussed in the last section. But typically it is not. For a weak field the fine structure effects
will be stronger, so we will consider them part of Ĥ⁽⁰⁾ for the Zeeman problem; our basis is then
{|nljm_j⟩} and states of the same j but different l are degenerate. This degeneracy however is
not a problem, because the operator (L̂_z + 2Ŝ_z) does not connect states of different l. So we
can use non-degenerate perturbation theory, with

    E⁽¹⁾_{nljm_j} = (eB/2m) ⟨nljm_j|L̂_z + 2Ŝ_z|nljm_j⟩.

If Ĵ_z is conserved but L̂_z and Ŝ_z are not, the expectation values of the latter two might be
expected to be proportional to the first, modified by the average degree of alignment: ⟨Ŝ_z⟩ =
ħm_j ⟨Ŝ·Ĵ⟩/⟨Ĵ²⟩, and similarly for ⟨L̂_z⟩. (This falls short of a proof but is in fact correct; see
Mandl 7.5.3 for details.) Using 2Ŝ·Ĵ = Ŝ² + Ĵ² − L̂² and the equivalent with Ŝ ↔ L̂ gives

    E⁽¹⁾_{nljm_j} = (eBħm_j/2m) ( 1 + (j(j+1) − l(l+1) + s(s+1))/(2j(j+1)) ) = g_jls (eBħm_j/2m).

Of course for hydrogen s(s+1) = 3/4, but the expression above, which defines the Landé g factor,
is actually more general and hence I've left it with an explicit s. For hydrogen, j = l ± 1/2 and
so g = 1 ± 1/(2l+1).
Thus states of a given j (already no longer degenerate due to fine-structure effects) are
further split into (2j+1) equally-spaced levels. Since spectroscopy involves observing transitions
between two states, both split but by different amounts, the number of spectral lines can be
quite large.
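The Landé formula is a one-liner to implement and check against the hydrogen special case (a Python sketch; the sample l values are arbitrary):

```python
def lande_g(j, l, s):
    """Lande g-factor: g = 1 + [j(j+1) - l(l+1) + s(s+1)] / (2 j (j+1))."""
    return 1 + (j * (j + 1) - l * (l + 1) + s * (s + 1)) / (2 * j * (j + 1))

# For hydrogen (s = 1/2, j = l +/- 1/2) this reduces to g = 1 +/- 1/(2l+1):
for l in (1, 2, 3):
    assert abs(lande_g(l + 0.5, l, 0.5) - (1 + 1 / (2 * l + 1))) < 1e-12
    assert abs(lande_g(l - 0.5, l, 0.5) - (1 - 1 / (2 * l + 1))) < 1e-12

print(lande_g(1.5, 1, 0.5), lande_g(0.5, 1, 0.5))   # 2p3/2 and 2p1/2
```

For the n = 2 states of hydrogen this gives different splittings for 2p₃/₂ and for the 2p₁/₂/2s₁/₂ pair, which is what multiplies the number of observed spectral lines.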

• Gasiorowicz ch 12.3
• Mandl ch 7.5
• (Shankar ch 14.5)
• Townsend ch 11.8

3.5 The Stark effect: hydrogen in an external electric


field
With an external electric field along the z-axis, the perturbing Hamiltonian is Ĥ⁽¹⁾ = −eEz (we
use E for the electric field strength to distinguish it from the energy). Now it is immediately
obvious that ⟨nlm|z|nlm⟩ = 0 for any state: the probability density is symmetric on reflection
in the xy-plane, but z is antisymmetric. So for the ground state, the first order energy shift
vanishes. (We will return to excited states, but think now about why we can't conclude the
same for them.) This is not surprising, because an atom of hydrogen in its ground state has no
electric dipole moment: there is no p·E term to match the µ·B one.
To calculate the second-order energy shift we need ⟨n′l′m′|z|100⟩. We can write z as r cos θ
or √(4π/3) r Y₁₀(θ, φ). The lack of dependence on φ means that m can't change, and in addition
l can only change by one unit, so ⟨n′l′m′|z|100⟩ = δ_{l′1} δ_{m′0} ⟨n′10|z|100⟩. However this isn't the
whole story: there are also states in the continuum, which we will denote |ki (though these are
not plane waves, since they see the Coulomb potential). So we have
    E⁽²⁾₁₀₀ = (eE)² Σ_{n>1} |⟨n10|z|100⟩|² / (E₁⁽⁰⁾ − Eₙ⁽⁰⁾) + (eE)² ∫d³k |⟨k|z|100⟩|² / (E₁⁽⁰⁾ − E_k⁽⁰⁾)

(We use E₁ for E₁₀₀.) This is a compact expression, but it would be very hard to evaluate
directly. We can get a crude estimate of the size of the effect by simply replacing all the
denominators by E₁⁽⁰⁾ − E₂⁽⁰⁾; this overestimates the magnitude of every term but the first, for
which it is exact, so it will give an upper bound on the magnitude of the shift. Then

    E₁⁽²⁾ > ((eE)²/(E₁⁽⁰⁾ − E₂⁽⁰⁾)) ( Σ_{n≥1} Σ_{lm} ⟨100|z|nlm⟩⟨nlm|z|100⟩ + ∫d³k ⟨100|z|k⟩⟨k|z|100⟩ )
          = ((eE)²/(E₁⁽⁰⁾ − E₂⁽⁰⁾)) ⟨100|z²|100⟩ = −8(eE)²a₀³/(3ħcα)
where we have included n = 1 and other values of l and m in the sum because the matrix
elements vanish anyway, and then used the completeness relation involving all the states, bound
and unbound, of the hydrogen atom.
There is a trick for evaluating the exact result, which gives 9/4 rather than 8/3 as the
constant (see Shankar). So our estimate of the magnitude is fairly good. (For comparison with
other ways of writing the shift, note that (eE)²/(ħcα) = 4πε₀E², or in Gaussian units, just E².)
Having argued above that the hydrogen atom has no electric dipole, how come we are getting
a finite effect at all? The answer of course is that the field polarises the atom, and the induced
dipole can then interact with the field.
Now for the first excited state. We can't conclude that the first-order shift vanishes here,
of course, because of degeneracy: there are four states and Ĥ⁽¹⁾ is not diagonal in the usual
basis |2lm⟩. In fact as we argued above it only connects |200⟩ and |210⟩, so the states |21±1⟩
decouple and their first order shifts do vanish. Using ⟨210|z|200⟩ = −3a₀, we have in this
subspace (with ⟨200| = (1, 0) and ⟨210| = (0, 1))

    Ĥ⁽¹⁾ = −3a₀eE ( 0 1 ; 1 0 ),

and the eigenstates are (1/√2)(|200⟩ ± |210⟩), with eigenvalues ∓3a₀eE. So the degenerate quartet
is split into a triplet of levels (with the unshifted one doubly degenerate).
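This little degenerate-perturbation-theory problem is a two-line diagonalisation (a Python sketch; a₀ = 1 and eE = 0.01 are placeholder values for illustration):

```python
import numpy as np

a0 = 1.0      # Bohr radius (units assumed)
eE = 0.01     # e times the field strength, assumed weak

# H^(1) restricted to the {|200>, |210>} subspace, using <210|z|200> = -3 a0
H1 = -3 * a0 * eE * np.array([[0.0, 1.0],
                              [1.0, 0.0]])
vals, vecs = np.linalg.eigh(H1)
print(vals)                                  # the shifts -3 a0 eE and +3 a0 eE

# the eigenvectors are (|200> +/- |210>)/sqrt(2), as stated above
assert np.allclose(np.abs(vecs), 1 / np.sqrt(2))
```

Note the linear dependence on the field E, in contrast to the quadratic shift of the non-degenerate ground state.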
In reality the degeneracy of the n = 2 states is lifted by the fine-structure splitting; are
these results then actually relevant? They will be approximately true if the field is large; at
an intermediate strength both fine-structure and Stark effects should be treated together as a
perturbation on the pure Coulomb states. For very weak fields degenerate perturbation theory
holds in the space of j = 1/2 states, which are split by ∓√3 a₀eE.

• Gasiorowicz ch 11.3

• Shankar ch 17.2,3

• Townsend ch 11.4
Approximate methods III:
Time-dependent perturbation theory

4.1 Formalism

Summary: Time-dependent perturbation theory applies to cases where the per-


turbing field changes with time, but also to processes such as absorption and emis-
sion of photons, and to scattering.

In time-dependent perturbation theory, we typically consider a situation where, at the
beginning and end, the only Hamiltonian acting is the time-independent Ĥ⁽⁰⁾, but for some
time in between another, possibly time-dependent, effect acts, so that for this period Ĥ =
Ĥ⁽⁰⁾ + Ĥ⁽¹⁾(t). If a system starts off in an eigenstate of Ĥ⁽⁰⁾ it will have a certain probability of
ending up in another, and that transition probability is generally what we are interested in.
There are many similarities in approach to the time-independent case, but it is worth noting
that our interest in the states |n(0) i is slightly different. In the time-independent case these were
only approximations to the true eigenstates of the system. In the time-dependent case they
remain the actual eigenstates of the system asymptotically (a phrase that means away from
the influence of the perturbation, either in space or, as here, in time). Furthermore while the
time-dependent perturbation is acting the familiar connection between the TISE and TDSE
breaks down, so the eigenstates of the Hamiltonian, while instantaneously definable, are not
of any real significance anyway. For that reason we will drop the label (0) on the states, since
we won’t be defining any other set of states, and similarly we will refer to their unperturbed
energies just as En .
Because the states |n⟩ are a complete set, at any instant we can decompose the state of the
system as

    |ψ(t)⟩ = Σₙ cₙ(t)|n⟩ ≡ Σₙ dₙ(t) e^(−iEₙt/ħ) |n⟩

In defining the dₙ(t) we have pulled out the time variation due to Ĥ⁽⁰⁾. So in the absence of
the perturbation, the dn would be constant and equal to their initial values. Conversely, it
is the time evolution of the dn which tells us about the effect of the perturbation. Typically
we start with all the dn except one equal to zero, and look for non-zero values of the others
subsequently.

    (Ĥ⁽⁰⁾ + Ĥ⁽¹⁾(t)) |ψ(t)⟩ = iħ (d/dt)|ψ(t)⟩

    ⇒   iħ ḋₙ = Σₘ dₘ(t) e^(iω_nm t) ⟨n|Ĥ⁽¹⁾(t)|m⟩
where we used m as the dummy index in the summation and took the inner product with
e^(iEₙt/ħ)⟨n|, and we have defined ω_nm ≡ (Eₙ − Eₘ)/ħ (and ḋₙ represents the time derivative of
dₙ).
So far this is exact, but usually impossible to solve, consisting of an infinite set of coupled
differential equations! Which is where perturbation theory comes in. If we start with d_i(t₀) = 1
and all others zero (i for initial), and if the effect of Ĥ⁽¹⁾ is small, then d_i will always be much
bigger than all the others and, to first order, we can replace the sum over all states by just the
contribution of the ith state. Furthermore to this order d_i barely changes, so we can take it
out of the time integral to get

    dₙ(t) = −(i/ħ) ∫_{t₀}^t e^(iω_ni t′) ⟨n|Ĥ⁽¹⁾(t′)|i⟩ dt′

Having obtained a first-order expression for the dn , we could substitute it back in to get a
better, second-order, approximation and so on.

• Gasiorowicz ch 15.1

• Mandl ch 9.1-3

• (Shankar ch 18.2)

• Townsend ch 14.6

4.1.1 Perturbation which is switched on slowly


Let Ĥ⁽¹⁾(t) = Ĥ⁽¹⁾ e^(t/τ) start acting at time t = −∞, so that it reaches full strength at t = 0.
Then

    dₙ(0) = −(i/ħ) ⟨n|Ĥ⁽¹⁾|i⟩ ∫_{−∞}^0 e^(t/τ) e^(iω_ni t) dt = −i ⟨n|Ĥ⁽¹⁾|i⟩ / (ħ(1/τ + iω_ni))

In the limit τ ≫ 1/ω_ni we simply recover the expression for the first-order shifts of the eigenkets
in time-independent perturbation theory. With an adiabatic (exceedingly slow) perturbation
the system evolves smoothly from the state |i⁽⁰⁾⟩ to the state |i⟩.
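The integral quoted above, and its adiabatic limit, are easy to verify numerically (a Python sketch; τ = 5 and ω_ni = 1 are arbitrary sample values):

```python
import numpy as np
from scipy.integrate import quad

tau, w = 5.0, 1.0    # switching time tau and a sample transition frequency omega_ni

# Evaluate integral_{-inf}^{0} e^{t/tau} e^{i w t} dt, truncated at t = -20 tau:
re = quad(lambda t: np.exp(t / tau) * np.cos(w * t), -20 * tau, 0, limit=400)[0]
im = quad(lambda t: np.exp(t / tau) * np.sin(w * t), -20 * tau, 0, limit=400)[0]
numeric = re + 1j * im

closed = 1 / (1 / tau + 1j * w)             # the closed form quoted above
assert abs(numeric - closed) < 1e-6

# For tau >> 1/w this tends to 1/(i w), giving the time-independent PT coefficient:
assert abs(closed - 1 / (1j * w)) < 1 / (tau * w**2) + 1e-12
```

The deviation from the adiabatic limit dies off like 1/(τω²), which is the quantitative content of "exceedingly slow".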

4.1.2 Sudden perturbation


Consider a constant perturbation Ĥ⁽¹⁾ which is switched on at t = −ε/2 and off at t = ε/2.

    dₙ(∞) = −(i/ħ) ⟨n|Ĥ⁽¹⁾|i⟩ ∫_{−ε/2}^{ε/2} e^(iω_ni t) dt = −⟨n|Ĥ⁽¹⁾|i⟩ (2i/(ħω_ni)) sin(ω_ni ε/2)  →  −(iε/ħ) ⟨n|Ĥ⁽¹⁾|i⟩ → 0   as ε → 0.
So sudden changes to a system leave the state unchanged.
Actually we didn’t need to switch off at t = /2, we can instead regard the expression above
as dn (/2), ie the coefficient immediately after turning on the perturbation. The conclusion is
the same: a change in the system which is sudden (compared with 1/ωni ) doesn’t change the
state of the system. We can use this to conclude, for instance, that the electronic configuration
is unchanged when a tritium nucleus beta-decays to helium-3. Of course this configuration
is not an eigenstate of the new Hamiltonian, but we can take it as an initial state for the
subsequent time evolution.
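The tritium example can be made quantitative. In the sudden approximation the electron stays in the hydrogen 1s state, which is then expanded in eigenstates of the He⁺ (Z = 2) Hamiltonian; the probability of remaining in the new 1s state is the squared overlap. A minimal sketch (a₀ = 1, Simpson's rule for the radial integral):

```python
import math

# Sudden approximation for tritium beta decay (Z = 1 -> Z = 2):
# probability of ending in the helium-ion 1s state is |<1s,Z=2|1s,Z=1>|^2.
def R1s(Z, r):
    # normalised hydrogenic 1s radial function, a_0 = 1
    return 2.0 * Z**1.5 * math.exp(-Z*r)

# overlap = int_0^inf R1s(2,r) R1s(1,r) r^2 dr, by Simpson's rule
n, rmax = 20000, 40.0
h = rmax / n
s = 0.0
for k in range(n + 1):
    r = k * h
    wgt = 1 if k in (0, n) else (4 if k % 2 else 2)
    s += wgt * R1s(2, r) * R1s(1, r) * r**2
overlap = s * h / 3
P_1s = overlap**2          # analytic value is (8*2**1.5/27)**2, about 0.70
```

So roughly 70% of the decays leave the electron in the He⁺ ground state; the remainder populate excited states (and a small fraction ionise).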

4.2 Oscillatory perturbation and Fermi’s golden rule

Summary: Fermi’s golden rule is the most important result of this chapter

Without specifying anything more about the system or perturbation, let us consider the
case Ĥ⁽¹⁾(t) = Ĥ⁽¹⁾ e^{−iωt}. (Note ω is not now anything to do with the harmonic oscillator, and
indeed if we wanted to apply this to that system we'd need to use labels to distinguish the
oscillator frequency from the applied frequency.)
At the outset we should note that with this problem we are heading towards the interaction
of atoms with a radiation field.
For definiteness, let the perturbation apply from −t/2 to t/2. Then

dn(t) = −(i/ħ) ⟨n|Ĥ⁽¹⁾|i⟩ ∫_{−t/2}^{t/2} e^{i(ωni−ω)t′} dt′ = −(2i/(ħ(ωni − ω))) ⟨n|Ĥ⁽¹⁾|i⟩ sin(½(ωni − ω)t)

⇒ Pi→n = (4/(ħ²(ωni − ω)²)) |⟨n|Ĥ⁽¹⁾|i⟩|² sin²(½(ωni − ω)t) = (t²/ħ²) |⟨n|Ĥ⁽¹⁾|i⟩|² sinc²(½(ωni − ω)t)
where sinc(x) ≡ sin(x)/x.
This expression is unproblematic provided ~ω doesn’t coincide with any excitation energy
of the system. In that case, the transition probability just oscillates with time and remains
very small.
The more interesting case is where we lift that restriction, but the expression then requires
rather careful handling! The standard exposition goes as follows: as t becomes very large,
t sinc²(½(ωni − ω)t) is a sharper and sharper – and taller and taller – function of ωni − ω, and
in fact tends to 2πδ(ωni − ω). (The normalisation comes from comparing the integrals of each
side with respect to ω: ∫_{−∞}^{∞} sinc²x dx = π. See section A.8 for more on δ-functions.) Then we
have

Pi→n = (2πt/ħ²) |⟨n|Ĥ⁽¹⁾|i⟩|² δ(ωni − ω).
The need to take the average over a long time period to obtain the frequency-matching delta
function is easily understood if we remember that any wave train only approaches monochro-
maticity in the limit that it is very long; any truncation induces a spread of frequencies.
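The normalisation quoted above is easy to confirm by direct numerical quadrature. A minimal sketch (truncated trapezoidal rule; the window and step are arbitrary choices):

```python
import math

# Check  int_{-inf}^{inf} sinc^2(x) dx = pi, the integral that fixes the
# factor in  t*sinc^2((w_ni - w)t/2) -> 2*pi*delta(w_ni - w).
def sinc2(x):
    return 1.0 if x == 0.0 else (math.sin(x) / x) ** 2

n, xmax = 200000, 1000.0          # truncation at |x| = 1000 loses O(1e-3)
h = 2 * xmax / n
total = sum(sinc2(-xmax + k * h) for k in range(n + 1)) * h
```

The truncated integral comes out within about 10⁻³ of π, the discrepancy being the 1/x² tails outside the window.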
So the probability increases linearly with time, which accords with our expectation if the
perturbation has a certain chance of inducing the transition in any given time interval. Finally
then, we write for the transition rate (probability per unit time):

Ri→n = (1/t) Pi→n = (2π/ħ) |⟨n|Ĥ⁽¹⁾|i⟩|² δ(Eni − ħω)
This result is commonly known as "Fermi's golden rule", and it is the main result of this
section. We have written ħω = E to highlight the δ-function as enforcing conservation of energy
in the transition: the energy difference between the final and initial state must be supplied by
the perturbation. Although we have assumed nothing about radiation in the derivation, we
have ended up with the prediction that only the correct frequency of perturbation can induce a
transition, a result reminiscent of the photoelectric effect, which demonstrates empirically that
energy is absorbed from fields in quanta of energy ħω.
Taking its importance on trust, though, how are we to interpret it? Taken literally, it says
that if E ≠ Eni nothing happens, which is dull, and if E = Eni the transition rate is infinite
and first-order perturbation theory would appear to have broken down! The resolution is that
in actual applications we do not have perfectly monochromatic radiation inducing transitions
to single, absolutely sharp energy levels. Such a perfect resonance would indeed give infinite
transition rates, but is unphysical. In practice there is always an integration over energy, with
a weighting function ρ(E) which is a distribution function smearing the strength over a range
of energies. Authors who don't like playing with δ-functions leave the t² sinc²(...) form in place
till this is evident, but the bottom line is the same (see e.g. Mandl).
Had we started from Ĥ⁽¹⁾ e^{iωt} we would have ended up with δ(Eni + E). In this case, energy
must be given up to the perturbing field: this is emission rather than absorption. With a real
field cos(ωt) = ½(e^{iωt} + e^{−iωt}), both processes can occur.
Finally we must point out that though we've derived Fermi's golden rule for oscillatory
perturbations, the expression holds equally in the ω → 0 limit, i.e. for a constant perturbation
acting from time −t/2 to t/2. It should be easy to see that exactly the same result is obtained,
except that the energy-conservation δ-function is simply δ(Eni). This form is more appropriate
if, instead of considering an external field which can supply or absorb energy, one thinks in
terms of photons and defines |ni and |ii to include not only the atomic state, but also any
incoming or out-going photon. Viewed in that way the energies of the initial and final state
must be the same. Both derivations are valid but the one given here is more appropriate for
our purposes given that we are not going to quantise the radiation field. (Wait till next year for
that!) However the constant perturbation form will be used in scattering in the next chapter.

• Gasiorowicz ch 15.2
• (Mandl ch 9.4)
• Shankar ch 18.2
• (Townsend ch 14.7)

4.3 Emission and absorption of radiation

Summary: Classical physics allows us to calculate rates of absorption and stimulated
emission of radiation. It cannot handle spontaneous emission, but a clever
argument about transition rates in thermal equilibrium allows us to predict that
too.
Consider an oscillating electric field E ε cos(ωt). (The unit vector ε indicates the direction
of the field.) This corresponds to a perturbation

Ĥ⁽¹⁾(t) = −eE cos(ωt) ε · r

which has the form we considered when deriving Fermi's golden rule.
We note that a field of this form is a long-wavelength approximation to the electric field
of an electromagnetic wave. Electric effects are much stronger than magnetic in this limit, so
we are justified in ignoring the magnetic field in writing Ĥ⁽¹⁾. This is called the electric dipole
approximation because Ĥ⁽¹⁾ ∝ e r·ε = d·ε where d is the electric dipole moment (we don't
use p for obvious reasons!). (Most textbooks more correctly start from the vector potential A
to derive the same perturbing Hamiltonian; see section 4.6. We follow Mandl here.)
To avoid an unphysically sharp resonance as discussed above, we actually need to consider a field
with a range of frequencies, each component being incoherent (so we add probabilities, not amplitudes). The
energy per unit volume in a frequency range ω → ω + dω is denoted ρ(ω)dω, with ½ε₀E(ω)² = ρ(ω).
(This expression allows for a factor of 2 from including the magnetic field energy, but also a
factor of ½ from the time average ⟨cos²ωt⟩.) Then we have

Ri→f = (2π/ħ²) ∫ (½eE(ω))² |⟨f|ε·r|i⟩|² (δ(ωfi − ω) + δ(ωfi + ω)) dω = (πe²/(ε₀ħ²)) ρ(|ωfi|) |⟨f|ε·r|i⟩|²
where the result applies equally to absorption (with Ef > Ei ) or emission (with Ef < Ei ). We
note that as promised we now have a well-behaved result with no infinities around!
This is as far as we can go without being more specific about the system, except for one
thing. The symmetry between emission and absorption is rather odd. Of course absorption
needs a field to supply energy—or, in quantum terms, a source of photons to absorb—but
why should the same be true for emission? What we have calculated here is
stimulated emission, and it certainly does occur; it is behind the workings of a laser. Though
our calculation is classical, stimulated emission can be thought of as due to the bosonic nature of
photons - they “like to be in the same state”, so emission is more likely the more photons of the
right energy are already present. But surely an excited atom in a vacuum at zero temperature
can still decay! There must be spontaneous emission too, and the classical calculation can’t
predict it.
There is however a very clever argument due to Einstein which connects the rates for
stimulated and spontaneous emission. If we regard the energy density ρ(ω)
as proportional to the number of photons in a mode of this frequency, n(ω), then to allow
for spontaneous emission we simply need to replace n(ω) with n(ω) + 1. (Since classical fields
correspond to huge numbers of photons, the difference between n and n + 1 is utterly negligible.)

• (Gasiorowicz ch 15.3)

• Mandl ch 9.5

• (Shankar ch 18.5)

• (Townsend ch 14.7)

4.3.1 Einstein’s A and B coefficients


Consider for simplicity a gas of “atoms” which can either be in their ground state or an excited
state (excitation energy E), and let the numbers in each be n0 and n1 . The atoms will interact
with the black-body radiation field, emitting and absorbing quanta of energy, to reach thermal
equilibrium. We will need Planck’s law for the energy density of the black-body radiation field
at a given frequency:

ρ(ω) = (ħω³/(π²c³)) · 1/(e^{ħω/kBT} − 1) = f(ω) n(ω, T)
where the temperature-independent prefactor f (ω) arises from the density of states, and n(ω, T )
is the Bose-Einstein expression for the average number of quanta of energy in a given mode.
See section A.10 for more details about the Bose-Einstein distribution.
The rates of absorption and stimulated emission are proportional to the energy density in
the field at ω = E/~, and the coefficients are denoted B01 and B10 , while the rate of spontaneous
emission is just A10 . (We have seen that B01 = B10 , but we won’t assume that here.) Then the
rate of change of n0 and n1 is

ṅ0 = −n0 B01 ρ(ω) + n1 A10 + n1 B10 ρ(ω) and ṅ1 = −ṅ0 .

At thermal equilibrium, ṅ0 = ṅ1 = 0 and n1/n0 = e^{−E/kBT}. Using the Planck law for ρ(E/ħ),
with some rearrangement we get

A10 (e^{E/kBT} − 1) + B10 f(ω) − e^{E/kBT} B01 f(ω) = 0

Now this has to be true for any temperature, so we can equate coefficients of e^{E/kBT} to give
A10 = B01 f(ω) and A10 = B10 f(ω). So we recover B01 = B10, which we already knew, but we
also get a prediction for A10. Thus we have
  
ṅ1 = B10 f(ω) [−n1 (n(ω, T) + 1) + n0 n(ω, T)]

We see that the total emission probability corresponds to replacing n(ω, T) with n(ω, T) + 1.
This result is confirmed by a full calculation with quantised radiation fields, where the factor
arises from the fact that the creation operator for quanta in a mode of the EM field has the
usual normalisation a†ω|nω⟩ = √(nω + 1) |nω + 1⟩.
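Einstein's detailed-balance argument can be rehearsed in a few lines of arithmetic. The sketch below (units with ħ = kB = 1; E, T and B are illustrative numbers, and the density-of-states prefactor f is set to 1) checks that, with B01 = B10 = B and A10 = B f, the steady state of the rate equation reproduces the Boltzmann ratio:

```python
import math

# Detailed balance for a two-level gas in a black-body field (hbar = k_B = 1).
# E, T, B are made-up numbers; f is an arbitrary density-of-states prefactor.
E, T, B = 1.0, 0.7, 2.3
f = 1.0
n_bose = 1.0 / math.expm1(E/T)     # photon occupation n(w, T)
rho = f * n_bose                   # energy density at the transition frequency
A = B * f                          # Einstein's prediction A10 = B10 f(w)

# Steady state of  n0' = -n0 B rho + n1 (A + B rho):
ratio = B*rho / (A + B*rho)        # n1/n0 in equilibrium
boltzmann = math.exp(-E/T)
```

Algebraically ratio = n/(n + 1) = e^{−E/kBT}, independent of B and f, which is exactly the point of the argument: only A10 = B10 f(ω) makes the rates consistent with thermal equilibrium at every temperature.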

• Gasiorowicz Supplement 1

4.4 Radiative decay of 2p state of hydrogen

Summary: This is the classic test-case of the theory we’ve developed so far. It
passes with flying colours!
We wish to calculate the total decay rate of the 2p state of hydrogen. We start with the
expression for spontaneous decay deduced from the preceding arguments:

Ri→f = (πe²/(ε₀ħ²)) ħω D(ωk̂) |⟨f|ε·r|i⟩|²

where ω ≡ ωfi is taken as read. The expression D(ωk̂) is the density of states factor for
photons with a particular direction of propagation k̂ (which must be perpendicular to ε):
D(ωk̂) = ω²/(2πc)³. It is just D(k) written in terms of ω so that ∫D(ωk̂)dω = ∫D(k)dk. See
section A.10 for more details.
Now we don't care about the direction in which the final photon is emitted, nor about its
polarisation, and so we need to integrate and sum the decay rate to a particular photon mode,
as written above, over these. (ε is the polarisation vector.) We can pick any of the 2p states
(with m = 0, ±1) since the decay rate can't depend on the direction the angular momentum is
pointing, so we will choose m = 0. Then ⟨100|ε·r|210⟩ = εz ⟨100|z|210⟩, as we can see by writing
ε·r in terms of the l = 1 spherical harmonics (see A.2). There are two polarisation states and
both are perpendicular to k; if we use spherical polar coordinates in k-space, (k, θk, φk), we can
take the two polarisation vectors to be ε⁽¹⁾ = θ̂k and ε⁽²⁾ = φ̂k. Only the first contributes, since
εz⁽¹⁾ = −sin θk but εz⁽²⁾ = 0.

We will need to integrate over all directions of the emitted photon. Hence the decay rate is

Ri→f = (πe²/(ε₀ħ²)) ħω (ω²/(8π³c³)) ∫ sin²θk |⟨100|z|210⟩|² sin θk dθk dφk
     = (e²ω³/(8π²c³ħε₀)) (8π/3) |⟨100|z|210⟩|²
     = (4αω³/(3c²)) (2¹⁵a₀²/3¹⁰)    where ħω = (3/4)ERy
     = (2/3)⁸ α⁵mc²/ħ = 6.265 × 10⁸ s⁻¹

This matches the experimental value for the 2²p₁/₂ state to better than one part in 10⁵, at
which point the effects of fine structure enter. The fact that this rate is very much smaller
than the frequency of the emitted photon (2.5 × 10¹⁵ Hz) justifies the use of the long-time
approximation which led to Fermi's golden rule.
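The closed form is easy to evaluate. A minimal sketch using CODATA-style constants (rounded here; any recent values give the same answer at this precision):

```python
# Evaluate  Gamma = (2/3)^8 alpha^5 m c^2 / hbar  for the 2p -> 1s rate.
alpha = 7.2973525693e-3                    # fine-structure constant
mc2   = 0.51099895e6 * 1.602176634e-19     # electron rest energy, J
hbar  = 1.054571817e-34                    # J s
Gamma = (2/3)**8 * alpha**5 * mc2 / hbar   # decay rate, s^-1 (~6.27e8)
lifetime = 1.0 / Gamma                     # ~1.6 ns
```

The corresponding lifetime of about 1.6 ns is indeed enormous compared with the 2.5 × 10¹⁵ Hz oscillation of the emitted light, which is the consistency check mentioned above.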

• Gasiorowicz ch 17.2,4

• Shankar ch 18.5

• Townsend ch 14.8

4.5 Finite width of excited state

Summary: A full treatment of the effect on states of coupling to radiation is


beyond the scope of the course. But we can motivate the fact that a finite lifetime
leads to broadening of the state.

In first-order perturbation theory, changes in the initial state are ignored. But in fact we
have often been told that a state which can decay with a lifetime τ has an uncertainty in its
energy of the order ∆E = ħ/τ. How does this arise?
At second order in perturbation theory the first-order expressions for the coefficients dn
(n ≠ i) give a non-vanishing expression for the rate of change of the initial state di. The
mathematical treatment required to do this properly is too subtle to be worth giving here, but
the bottom line is (taking t = 0 as the starting time for simplicity)

Pi→i = e^{−Γi t/ħ}    where    Γi = ħ Σn Ri→n

which is exactly as one would expect. Γ, which has units of energy, is called the line width; ħ/Γ
is the total lifetime of the state.
Furthermore if one returns to the derivation of the dn and allows for the fact that di is
decaying exponentially (rather than constant, as assumed at that order of perturbation theory)
we find the integral

−(i/ħ) ∫_{0}^{t} e^{i(ωni−ω)t′} e^{−Γt′/(2ħ)} dt′ → 1/(ħ(ωni − ω) + (i/2)Γ)

The modulus squared is

1/((Eni − ħω)² + ¼Γ²)

This is a Lorentzian; Γ is the full width at half maximum and its presence tames the energy
δ-function we found originally. (Hence sensible results can be found even if there is not a con-
tinuum of photon states.) The upshot is that we no longer need exact energy matching for
a transition to occur; the energy of an unstable state is not precise, but has an uncertainty:
∆E ≈ Γ = ħ/τ.
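The width statement can be checked in one line: the Lorentzian above drops to half its peak value exactly when Eni − ħω = ±Γ/2, so the full width at half maximum is Γ. A trivial sketch (Γ is an arbitrary illustrative number):

```python
# The Lorentzian line shape  L(x) = 1/(x^2 + Gamma^2/4),  x = E_ni - hbar*w.
# At x = +/- Gamma/2 it falls to half its peak, so the FWHM is Gamma.
Gamma = 0.4          # illustrative width

def L(x):
    return 1.0 / (x**2 + Gamma**2 / 4)

peak = L(0.0)
half = L(Gamma / 2)  # exactly peak/2
```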

• Gasiorowicz ch 15.3 (and Supplement 15-A)

• Mandl ch 9.5.3

4.6 Selection rules

Summary: In atomic physics, almost all observed transitions are of the electric
dipole form we have been studying. Only certain transitions are allowed.

The heart of all our expressions for interaction with the electromagnetic field was the matrix
element eE⟨f|ε·r|i⟩, obtained from a spatially invariant electric field which we argued was the
long-wavelength limit of an electromagnetic wave. Since the wavelength of light generated in
atomic transitions is of the order of 10⁻⁷–10⁻⁸ m, while atomic sizes are of order 10⁻¹⁰ m, this
should be a good approximation.
For completeness we note that the expression E = −∇φ(r) is no longer actually valid for
time-dependent fields; the correct expression is E = −∇φ(r) − dA/dt where A is the vector
potential; furthermore a very convenient gauge for radiation is φ = 0, A = A₀ε e^{i(k·r−ωt)} with
k·ε = 0 and A₀ = E/ω. Anticipating results from PHYS30202 we have the Hamiltonian

b = (p − eA) · (p − eA) − eφ(r) + e (B · S)


H b
2m m
and so in this gauge (ignoring the spin term) H b (1) = −eA · p/m and the matrix element is
proportional to hf |eik·r  · p|ii. Now if λ  a0 , then the phase k · r will scarcely vary as r is
integrated over the extent of the atom. The long-wavelength (electric dipole) limit is equivalent
to setting eik·r = 1 + ik · r + . . . and discarding all terms except the first, as well as ignoring
the spin term in the Hamiltonian. The final step then comes in noting that p̂ = m i~
[b
r, H
b0]
and hf |[b
r, H]|ii
b = ~ωf i hf |b
r|ii. Putting everything together we recover eEhf | · r|ii as assumed
previously, provided ω = ωf i . As argued above this is a good approximation, unless the leading
term vanishes. In other words if a transition cannot take place because the electric dipole
matrix element vanishes, we cannot conclude that it cannot take place at all. It will however
be much slower, and if a different transition is allowed by the electric dipole selection rules it
will be hard to see the “forbidden” transition. Transitions which take place via the neglected
terms are successively called magnetic dipole, electric quadruple, magnetic quadruple, electric
octupole.... These transitions are important in nuclear physics, but rarely in atomic physics.
So what conditions must a pair of states satisfy for electric dipole transitions to be allowed?
We need the matrix element ⟨n′l′s′; j′m′j|r|nls; jmj⟩ to be non-vanishing; for hydrogen of course
s = s′ = ½. It is useful to write r (= x e₁ + y e₂ + z e₃) as

r = √(4π/3) r (Y₁¹ e₋ + Y₁⁰ e₀ + Y₁⁻¹ e₊)    where    e± = ±√½ (e₁ ± ie₂),  e₀ = e₃.

The components of r in this spherical basis are referred to as rq, rather than the cartesian
ri.
Since the spherical components of r are just the l = 1 spherical harmonics, we see that acting
on a state with a component of r is like coupling in another l = 1 system. So the usual rules
of addition of angular momentum apply, and we see that the electric dipole operator can only
cause transitions between systems whose total angular momentum differs by at most one unit.
(This is an example of a general theorem that vector operators transfer one unit of angular
momentum—the Wigner–Eckart theorem: see section A.2.) Similarly, the z-component of
angular momentum can't change by more than one unit. Hence we have the selection rules
∆j = 0, ±1 and ∆mj = 0, ±1
However the electric dipole operator is independent of spin. If we look at matrix elements
of the form ⟨j′l′m′j|rq|jlmj⟩, we see that each reduces to terms of the form ⟨l′m′l|rq|lml⟩ which
will vanish unless l + 1 ≥ l′ ≥ |l − 1|. However there is an extra consideration. The spherical
harmonics have good parity: under reflection in the origin they are either odd (for odd l) or
even (for even l), and r is odd. Now the integral of an odd-parity function over all angles is clearly
zero, so if l is odd (even), r|lml⟩ is even (odd) and so |l′m′l⟩ must be even (odd) if the angular
integral is not to vanish. Therefore we see that l = l′, while allowed by the rules of addition of
angular momentum (if l ≠ 0), is not allowed by parity conservation. (The rule for the integral
of a product of three spherical harmonics is given in section A.2.)
So finally, for hydrogen, we have the following selection rules:
∆j = 0, ±1, ∆mj = 0, ±1 and ∆l = ±1
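The parity argument behind ∆l = ±1 can be checked by doing the angular integrals directly. The sketch below (m = 0 components only, harmonics written out by hand; the quadrature scheme is an arbitrary choice) evaluates ∫ Y_{l′0} cos θ Y_{l0} dΩ, the angular factor in ⟨l′0|z|l0⟩:

```python
import math

# Angular integral I(l', l) = int Y_{l'0} cos(theta) Y_{l0} dOmega for the
# m = 0 spherical harmonics with l = 0, 1; nonzero only for l' = l +/- 1.
def Y(l, th):
    c = math.cos(th)
    return {0: math.sqrt(1/(4*math.pi)),
            1: math.sqrt(3/(4*math.pi)) * c}[l]

def I(lp, l, n=20000):
    # trapezoidal rule in theta; the phi integral just gives 2*pi
    h = math.pi / n
    s = 0.0
    for k in range(n + 1):
        th = k * h
        wgt = 0.5 if k in (0, n) else 1.0
        s += wgt * Y(lp, th) * math.cos(th) * Y(l, th) * math.sin(th)
    return 2 * math.pi * s * h

allowed   = I(1, 0)    # Delta l = 1: equals 1/sqrt(3)
forbidden = I(1, 1)    # Delta l = 0: integrand is odd about theta = pi/2
```

The ∆l = 1 integral comes out at 1/√3, while the ∆l = 0 one vanishes to numerical precision, exactly the parity selection rule derived above.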
We saw from the example of the decay of the 2p state that the rate was proportional to the
energy difference cubed. In general states will decay fastest to the lowest state they are allowed
to, so states np will decay mostly to 1s, nd to 2p etc. But what about the 2s states?
In this course we are not generally concerned with multi-electron atoms, but at this point
we should say something about selection rules more generally. First note that as every electron
feels the electric field, the dipole operator is eE ε·R where R = Σi ri, the sum of the individual
position vectors. Second we note that while strictly the only good quantum number of an atom
is J, associated with the total angular momentum Ĵ = Σi (L̂i + Ŝi), for reasonably light atoms
L and S, corresponding to L̂ = Σi L̂i and Ŝ = Σi Ŝi, are also "reasonably good" quantum
numbers (in the sense that states so constructed are a good zeroth-order approximation to
the states of the full Hamiltonian). We are assuming this when we use spectroscopic notation
⁽²ˢ⁺¹⁾LJ for the atomic states. This is called "LS coupling". In that case we can again say
that since the dipole operator is spin-independent, spin won't change (∆S = 0), and coupling
a vector operator again means that L + 1 ≥ L′ ≥ |L − 1|. This time ∆L = 0 is not excluded
though, because L isn't connected to the parity of the atomic state. So for light atoms we have,
along with a parity change,

∆J = 0, ±1, ∆MJ = 0, ±1, ∆S = 0 and ∆L = 0, ±1 but no L = 0 → L = 0 transitions

Examples of such allowed transitions are shown in this diagram of the energy levels of
helium. All of these levels have one excited electron and so have the configuration 1snl.

Figure 4.1: Energy levels of helium (Hyperphysics, © Georgia State University)

Note

that in helium the states of the same n are no longer degenerate as the higher the value of l,
the more the nuclear charge is screened by the 1s electron. Also, triplet states (orthohelium) lie
lower than singlet (parahelium), because the latter, having antisymmetric spin wave functions,
must have symmetric spatial wavefunctions which bring the electrons closer together and hence
increase the Coulomb repulsion. Note that the diagram above doesn’t specify J values.
If L and S are not good quantum numbers, there is still one thing we can say: since R is a
vector operator, we must have J + 1 ≥ J 0 ≥ |J − 1|:

∆MJ = 0, ±1, ∆J = 0, ±1 but no J = 0 → J = 0 transitions

In helium violations of LS coupling will be very small (giving rise to very weak transitions)
and will be of similar size to those due to terms ignored in making the dipole approximation.
An example is the first excited state of orthohelium, in which one electron is in the 2p state
and S = 1, with total J = 1. According to the electric dipole selection rules in LS coupling it
cannot decay to the ground state (which has S = 0), but the fact that the spin-orbit interaction
mixes the S = 0 and S = 1 states allows a very weak electric-dipole decay which has only
recently been observed.
Violations of the electric dipole selection rules—"forbidden" transitions—must be due to
higher-order operators (quadrupole and higher). These allow successively greater changes in J,
but at all orders J = 0 → J = 0 transitions are excluded. So for example the ground state of
orthohelium (one electron in the 2s state and S = 1) is forbidden from decaying via an electric
dipole transition by the parity selection rule. Even magnetic dipole transitions are only allowed
through violations of LS coupling, and as a result its lifetime is 10⁴ s. The single-photon decay
of the 2s₁/₂ state of hydrogen is a similarly "forbidden" magnetic dipole transition, but in fact
the two-photon decay dominates, with a lifetime of about 0.1 s.
An example of an absolutely forbidden process is the single-photon decay of the first excited
state of parahelium (one electron in the 2s state and S = 0, hence J = 0). In fact it decays by
emitting two photons with a lifetime of 19.7 ms.

• Gasiorowicz ch 14.3, 17.3

• Mandl ch 6.2– most complete

• Shankar ch 17.2

• (Townsend ch 14.8)

4.7 Heisenberg vs Schrödinger pictures


When at the start we, for convenience, absorbed the time dependence due to Ĥ⁽⁰⁾ into the
definition of the coefficients (switching from cn(t) to dn(t) = e^{iEn⁽⁰⁾t/ħ} cn(t)), we did something
which has a rather fancy name: we switched to the interaction picture.
The Schrödinger picture is the one we are used to. Our basis kets |n⟩ (eigenkets of the Hamiltonian
Ĥ) are time-independent, but quantum states are built from them with time-dependent
coefficients: |ψS(t)⟩ = Σn cn(t)|n⟩. These satisfy the Schrödinger equation iħ(d/dt)|ψS(t)⟩ =
Ĥ|ψS(t)⟩, which implies |ψS(t)⟩ = U(t, t₀)|ψS(t₀)⟩ where U(t, t₀) = T exp{−(i/ħ)∫_{t₀}^{t} Ĥ(t′)dt′}.
In this picture the Hamiltonian itself is not usually time-dependent (in the absence of external
forcing), nor are the other operators such as x̂, p̂. The expectation values of operators can change
though, of course: as we've seen, d⟨Ω̂⟩/dt = (i/ħ)⟨[Ĥ, Ω̂]⟩.
One can also however work in an alternative picture, the Heisenberg picture, in which the
quantum states are time-independent: |ψH⟩ = U†(t, t₀)|ψS(t)⟩ = |ψS(t₀)⟩. The rationale for
this is that nothing really changes fundamentally in such a state; in particular the chance of
obtaining energy En is constant. Clearly this state does not obey the TDSE or any equation
giving evolution in time. However the results for the time-dependence of observables must be
the same as before, so in the Heisenberg picture operators all carry time-dependence: ΩH(t) =
U†(t, t₀)ΩS U(t, t₀). Thus it is the time-dependence of the operator itself that is given by the
commutator with Ĥ, i.e. iħ(d/dt)Ω̂H = [Ω̂H, Ĥ]. (Note Ĥ itself is clearly the same in both pictures,
and only time-dependent if there are external forcing terms.)
The interaction picture is somewhere in between. As we have done throughout this section,
we split the Hamiltonian into two parts, typically the "free" Hamiltonian Ĥ⁽⁰⁾, which will be
time-independent, and a (possibly time-dependent) "interaction" Hamiltonian Ĥ⁽¹⁾. We then
work in a picture which would be Heisenberg's if Ĥ⁽¹⁾ vanished, but use the Schrödinger picture
to include the effects of Ĥ⁽¹⁾. Thus

|ψI(t)⟩ = U⁽⁰⁾†(t, t₀)|ψS(t)⟩ = Σn dn(t)|n⟩    and    ΩI(t) = U⁽⁰⁾†(t, t₀) ΩS U⁽⁰⁾(t, t₀)

with

iħ(d/dt)|ψI(t)⟩ = ĤI⁽¹⁾|ψI(t)⟩    and    iħ(d/dt)Ω̂I = [Ω̂I, Ĥ⁽⁰⁾]

Note that now we do have to specify ĤI⁽¹⁾(t) = U⁽⁰⁾†(t, t₀) ĤS⁽¹⁾(t) U⁽⁰⁾(t, t₀), because in general
[Ĥ⁽⁰⁾, ĤS⁽¹⁾] ≠ 0.
Concentrating on the time evolution of the state, we define UI(t, t₀) such that |ψI(t)⟩ =
UI(t, t₀)|ψI(t₀)⟩. It is clear that this satisfies

iħ(d/dt)UI(t, t₀) = ĤI⁽¹⁾(t) UI(t, t₀)

which is formally solved by

UI(t, t₀) = 1 − (i/ħ) ∫_{t₀}^{t} ĤI⁽¹⁾(t′) UI(t′, t₀) dt′

(just differentiate both sides with respect to t; the 1 comes from the initial condition that UI(t₀, t₀) = 1.)
The utility of this comes if ĤI⁽¹⁾(t′) is small, since we can then develop a perturbative sequence
for UI, the zeroth approximation being just 1, the first correction coming from substituting 1
for UI in the integral, the second approximation from substituting the first approximation for
UI in the integral, and so on:
UI(t, t₀) = 1 − (i/ħ) ∫_{t₀}^{t} ĤI⁽¹⁾(t′) dt′ − (1/ħ²) ∫_{t₀}^{t} ∫_{t₀}^{t′} ĤI⁽¹⁾(t′) ĤI⁽¹⁾(t″) dt″ dt′
          + (i/ħ³) ∫_{t₀}^{t} ∫_{t₀}^{t′} ∫_{t₀}^{t″} ĤI⁽¹⁾(t′) ĤI⁽¹⁾(t″) ĤI⁽¹⁾(t‴) dt‴ dt″ dt′ + ...

Reverting to the Schrödinger picture, this reads

U(t, t₀) = U⁽⁰⁾(t, t₀) UI(t, t₀)
         = U⁽⁰⁾(t, t₀) − (i/ħ) ∫_{t₀}^{t} U⁽⁰⁾(t, t′) ĤS⁽¹⁾(t′) U⁽⁰⁾(t′, t₀) dt′
           − (1/ħ²) ∫_{t₀}^{t} ∫_{t₀}^{t′} U⁽⁰⁾(t, t′) ĤS⁽¹⁾(t′) U⁽⁰⁾(t′, t″) ĤS⁽¹⁾(t″) U⁽⁰⁾(t″, t₀) dt″ dt′
           + (i/ħ³) ∫_{t₀}^{t} ∫_{t₀}^{t′} ∫_{t₀}^{t″} U⁽⁰⁾(t, t′) ĤS⁽¹⁾(t′) U⁽⁰⁾(t′, t″) ĤS⁽¹⁾(t″) U⁽⁰⁾(t″, t‴) ĤS⁽¹⁾(t‴) U⁽⁰⁾(t‴, t₀) dt‴ dt″ dt′ + ...

where we have used the combination property of the unitary time-evolution operators, U⁽⁰⁾(t, t₀)U⁽⁰⁾†(t′, t₀) =
U⁽⁰⁾(t, t′) if t′ is between t and t₀. We see that we have a picture where the free Hamiltonian
Ĥ⁽⁰⁾ governs the time evolution for most of the time, interrupted by interactions (Ĥ⁽¹⁾) on
occasion, with multiple interactions being progressively less likely. This is illustrated below.
Note that eigenstates of ĤS⁽⁰⁾ remain so under the action of U⁽⁰⁾(t′, t″): U⁽⁰⁾(t′, t″)|n⁽⁰⁾⟩ =
e^{−i(t′−t″)En⁽⁰⁾/ħ}|n⁽⁰⁾⟩. So if the system starts off in a particular state |i⁽⁰⁾⟩ it will remain in it till
it reaches the first interaction. At that point it can undergo a transition into any other state
|n⁽⁰⁾⟩ for which ⟨n⁽⁰⁾|Ĥ⁽¹⁾|i⁽⁰⁾⟩ ≠ 0. If we are looking for the probability that the system ends up
in state |f⁽⁰⁾⟩, then in the first term in the expansion we need i = f, in the second we need them
to be linked by ⟨f⁽⁰⁾|Ĥ⁽¹⁾|i⁽⁰⁾⟩, but in the third and subsequent terms the transition from |i⁽⁰⁾⟩
to |f⁽⁰⁾⟩ can be via one or more intermediate states. Energy does not have to be conserved in
these intermediate steps, only in the overall transition: the system can "borrow" energy from
the field for a short time, as reflected in the expression ∆E∆t ∼ ħ. Exactly the same idea
is behind the picture of elementary particles interacting via virtual particles, as depicted in
Feynman diagrams.
This is a good opportunity to introduce the time-ordered exponential. If we introduce the
time-ordering operator T, such that T Â(t)B̂(t′) = Â(t)B̂(t′) if t′ < t and B̂(t′)Â(t) if t′ > t (and
assuming they commute if t′ = t), then we can write

∫_{t₀}^{t} ∫_{t₀}^{t′} Ĥ(t′)Ĥ(t″) dt″ dt′ = ½ T ∫_{t₀}^{t} ∫_{t₀}^{t} Ĥ(t′)Ĥ(t″) dt′ dt″;

extending the t″ integral from t′ to t generates terms in which the two Hamiltonians are in
the wrong order, but the T operator switches them, so the net effect is that we over-count by a
factor of two. Similarly with three factors we can extend the integrals to t but over-count by
3!, and so on. Hence we can write
UI(t, t₀) = 1 − (i/ħ) T ∫_{t₀}^{t} ĤI⁽¹⁾(t′) dt′ − (1/2!)(1/ħ²) T ∫_{t₀}^{t} ∫_{t₀}^{t} ĤI⁽¹⁾(t′) ĤI⁽¹⁾(t″) dt′ dt″
          + (1/3!)(i/ħ³) T ∫_{t₀}^{t} ∫_{t₀}^{t} ∫_{t₀}^{t} ĤI⁽¹⁾(t′) ĤI⁽¹⁾(t″) ĤI⁽¹⁾(t‴) dt′ dt″ dt‴ + ...
          = T exp{−(i/ħ) ∫_{t₀}^{t} ĤI⁽¹⁾(t′) dt′}.

Identical algebra is used in writing the full evolution operator as U(t, t₀) = T exp{−(i/ħ) ∫_{t₀}^{t} Ĥ(t′) dt′},
as was asserted in section 1.2.

• (Gasiorowicz ch 16.3)
• Shankar ch 18.3
• Townsend ch 14.5
Approximate methods IV: Scattering theory

5.1 Preliminaries

Summary: Scattering experiments provide us with almost all the information we


have about the nature of sub-atomic matter. Here we will only consider elastic
scattering.

All we know about the structure of sub-atomic matter comes from scattering experiments,
starting with Rutherford’s discovery of the nucleus right up to the Tevatron and LHC. In this
course we will introduce the main ideas, but largely confine ourselves to elastic scattering of
particles from a fixed, localised potential V (r); this is the limit of two-body scattering where
one particle (the projectile) is much lighter than the other (the stationary target). We will be
looking for the probability that the particle is scattered away from its original direction.
A careful approach would involve a wavepacket for the projectile, and the problem would
involve a fully time-dependent treatment as the wavepacket approached the scattering centre,
interacted with it and then moved away out of its influence. An easier approach is to use
an incident plane wave to represent a coherent beam of particles of fixed momentum; the
wavepacket approach must give the same results in the limit that the momentum spread in the
packet is small. The lateral spread of the wavefronts will be large compared to the range of V(r)
but small compared to the perpendicular distance from the beam to the detector, so that unless
we put the detector directly in the line of the beam, we will only detect scattered particles.

We define the scattering cross section as the rate of particles scattered from the field divided
by the incoming flux. Since the units of flux are particles per second per metre², the cross
section has units of area. This has an obvious geometric interpretation. (The non-SI units of
cross-section used in particle physics are barns, where 1 barn is 10⁻²⁸ m² – that's as in "barn
door”, something you can’t miss!) We are often interested in angular distributions too – they
are rarely isotropic as depicted above – so then we are interested in the scattering rate into
a given solid angle. Conventionally θ measures the angle from the beam direction, as shown
above, and φ the angle in a plane perpendicular to the beam direction.
An infinitesimal solid angle, spanned by infinitesimal ranges of θ and φ, is dΩ = sin θ dθ dφ. If
we have a detector at distance r with a (small) opening of area dS facing the scattering source,
it subtends a solid angle dΩ = dS/r². If the scattering rate into this detector divided by the
initial flux is denoted dσ, the differential cross section is dσ/dΩ. Obviously ∫(dσ/dΩ) dΩ = σ
if the integral is taken over all angles.
Outside the range of V(r) the scattered particles are free and have the same energy as
initially (elastic scattering). The wavefunction must satisfy Schrödinger's equation, which can
be written (∇² + k²)ψsc = 0 (with ħ²k²/2m = E). Now we know that e^{ik·r} is a plane wave
solution of this, but we are looking for outgoing waves emanating from the centre. These are
spherical harmonics times spherical Bessel functions, and have the asymptotic form (as r → ∞)

ψsc → Σlm Alm(k) Ylm(θ, φ) e^{ikr}/r = f(k, θ, φ) e^{ikr}/r

The function f(k, θ, φ) is called the scattering amplitude; note it has dimensions of length.
(The fact that as r → ∞ the radial wavefunction doesn't depend on l can be seen from the fact
that ∇² → d²/dr²; the angular part falls off as 1/r².)
The flux in this wave is

    (ℏ/2im) (ψ_sc* ∇ψ_sc − ψ_sc ∇ψ_sc*)  →  (ℏk/m) |f(k, θ, φ)|² r̂/r²   as r → ∞,

which is indeed outgoing (had we chosen e^{−ikr} it would have been ingoing). If we have a detector
with a (small) opening of area r²dΩ facing the scattering source, the rate of particles hitting it
is |f(k, θ, φ)|² (ℏk/m) dΩ. Since the flux in the beam wave e^{ik·r} is ℏk/m, we have

    dσ = |f(k, θ, φ)|² dΩ    and    dσ/dΩ = |f(k, θ, φ)|²
So the scattering amplitude squared is the differential cross section. Now all we have to do is
calculate it!

• Gasiorowicz ch 19.1,

• Mandl ch 11.1

• Shankar ch 19.1,2

• Townsend ch 13.1

5.2 The Born approximation

Summary: If the potential is weak, we can use first order perturbation theory to
calculate cross sections.
If the scattering potential V (r) is weak, we can ignore multiple interactions and use first-
order perturbation theory. In this context, this is called the Born approximation.
First order time-dependent perturbation theory means using Fermi’s golden rule. V (r) is
constant, not oscillatory, so the energy-conservation δ-function links incoming and scattering
states with the same energy, hence we are dealing with elastic scattering (as already assumed).
Our goal is to calculate dσ/dΩ, so we are interested in all the out-going momentum states
which fall within dΩ at a given scattering angle {θ, φ}:
    R_{k_i→dΩ} = (2π/ℏ) Σ_{k_f ∈ dΩ} |⟨k_f|V(r)|k_i⟩|² δ(E_i − E_f)

               = (2π/ℏ) ∫₀^∞ |⟨k_f|V(r)|k_i⟩|² δ( (ℏ²/2m)(k_f² − k_i²) ) D(k_f) dΩ dk_f

               = (mV/4π²ℏ³) k_f |⟨k_f|V(r)|k_i⟩|² dΩ

    ⇒ dσ = (m²V²/4π²ℏ⁴) |⟨k_f|V(r)|k_i⟩|² dΩ

    ⇒ |f(k, θ, φ)| = (m/2πℏ²) |∫ e^{i(k_i−k_f)·r} V(r) d³r|
(Since the density of states is for single particles, we adjusted the normalisation of the incoming
beam to also contain one particle in volume V ; hence V drops out. For the density of states,
see A.10. For transforming the argument of delta functions, see A.8.)
Writing k_i − k_f = q, the momentum transferred from the initial to the final state, we have
the very general result that f (k, θ, φ) is just proportional to Ṽ (q), the Fourier transform of
V (r).
The classic application is Rutherford scattering, but we will start with a Yukawa potential
V(r) = −λe^{−μr}/r; the Coulomb potential is the μ → 0 limit. Taking the z-axis along q for
the spatial integration (this is quite independent of the angles used in defining the scattering
direction) and noting that q = 2k sin(θ/2), we get

    |f(k, θ, φ)| = (m/2πℏ²) ∫ e^{iqr′ cos θ′} λ e^{−μr′} r′ sin θ′ dθ′ dφ′ dr′

                 = 2mλ / ( ℏ²(μ² + q²) )

    ⇒ dσ/dΩ = 4m²λ² / ( ℏ⁴(μ² + 4k² sin²(θ/2))² )

    ⇒ (dσ/dΩ)_Coulomb = ℏ²c²α² / ( 16E² sin⁴(θ/2) )
We note that the independence of φ is quite general for a spherical potential.
This result, though derived at first-order, is in fact correct to all orders and agrees with the
classical expression, which is called the Rutherford cross section. (Just as well for Rutherford!)
(The appearance of ℏ is only because we have written e² in terms of α.) The reason we took the
limit of a Yukawa is that our formalism doesn’t apply to a Coulomb potential, because there
is no asymptotic region - the potential is infinite-ranged. The cross section blows up at θ = 0,
the forward direction, but obviously we can’t put a detector there as it would be swamped by
the beam.
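The Fourier-transform step in the Yukawa calculation above is easy to check numerically. A minimal sketch (the values of λ, μ and q are arbitrary test values, not taken from the text):

```python
import numpy as np
from scipy.integrate import quad

lam, mu, q = 1.0, 0.5, 2.0   # arbitrary test values

# After the angular integration, the 3D Fourier transform of
# V(r) = -lam e^{-mu r}/r reduces to a 1D radial integral:
#   Vtilde(q) = (4 pi / q) * Int_0^inf sin(q r) (-lam) e^{-mu r} dr
num, _ = quad(lambda r: np.sin(q * r) * (-lam) * np.exp(-mu * r), 0, np.inf)
Vtilde_numeric = 4 * np.pi / q * num

# the closed form used in the text: -4 pi lam / (mu^2 + q^2)
Vtilde_exact = -4 * np.pi * lam / (mu**2 + q**2)
print(abs(Vtilde_numeric - Vtilde_exact) < 1e-6)   # True
```

The Born amplitude then follows by multiplying by m/2πℏ².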

• Gasiorowicz ch 19.3
• (Mandl ch 11.3)

• Shankar ch 19.3

• (Townsend ch 13.2)

(The references in brackets do not use the FGR to obtain the Born cross section.)

5.3 Phase Shifts


Summary: Scattering experiments in the real world yield complicated differential
cross sections. If the behaviour for (imagined) incoming particles of definite angular
momentum can be deduced from the angular dependence, it is easier to recognise
what the results are telling us about the potential.

An important concept in scattering is that of phase shifts. The basic idea is that since
angular momentum is conserved by a spherically symmetric potential V (r), if the incoming
wave were an eigenfunction of angular momentum, so would the outgoing wave. This would
be the case independently of the potential, and the only influence of the potential would be
on the relative phase of the outgoing wave compared to the incoming wave. Since this is a
set-up which cannot be experimentally realised it might seem a pointless observation, until you
recognise that a plane wave must be expressible as a sum of such angular momentum eigenstates
which are called partial waves. In practice it is often the case, for a short-ranged potential,
and an incoming wave which is not too high energy, that only the lowest partial waves are
significantly scattered. Classically, for a particle with momentum p, the closest approach is
d = L/p. If a is the range of the potential, then only particles with L . pa (or l . ka) will
be scattered. Thus the first few phase shifts can be an efficient way of describing low-energy
scattering data.
Let us recall that for a free particle in spherical coordinates the separable solutions are
ψ = R_l(r) Y_l^m(θ, φ), where the radial wavefunction satisfies

    (1/r) d²(rR_l(r))/dr² − (l(l+1)/r²) R_l(r) + k² R_l(r) = 0

whose solutions are spherical Bessel functions, the regular ones j_l(kr) and the irregular ones
n_l(kr); the latter blow up at the origin. For example j₀(z) = sin z/z, n₀(z) = −cos z/z and
j₁(z) = sin z/z² − cos z/z. Note that all tend to either ± sin z/z or ± cos z/z at large z. Specifically,

    j_l(z) → sin(z − πl/2)/z    and    n_l(z) → −cos(z − πl/2)/z    as z → ∞
It can be shown that the expansion of a plane wave in spherical coordinates is
    e^{ik·r} = e^{ikr cos θ} = Σ_{l=0}^∞ i^l (2l+1) j_l(kr) P_l(cos θ)

             → (1/2ik) Σ_{l=0}^∞ (2l+1) [ e^{ikr}/r − (−1)^l e^{−ikr}/r ] P_l(cos θ)   as r → ∞

where the Pl (cos θ) are Legendre polynomials (proportional to Yl0 ). Fairly obviously, for each
partial wave the plane wave consists of both outgoing and incoming waves with equal and
opposite flux. (Further notes on this expression and on spherical Bessel functions can be found
in A.6; see also Gasiorowicz Supplement 8-B.)
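The partial-wave expansion of the plane wave can be verified numerically at any point; a sketch (the values of k, r and θ are arbitrary test values):

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

k, r, theta = 1.0, 2.0, 0.7     # arbitrary test point
exact = np.exp(1j * k * r * np.cos(theta))

# sum_l i^l (2l+1) j_l(kr) P_l(cos theta); a few tens of terms suffice for kr ~ 2
series = sum((1j)**l * (2 * l + 1) * spherical_jn(l, k * r) * eval_legendre(l, np.cos(theta))
             for l in range(30))
print(abs(series - exact))      # essentially zero: the two sides agree
```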
Now consider the case in the presence of the potential. Close to the centre, the wave is
not free and can’t be described by spherical Bessel functions. But beyond the range of the
potential, it can. Furthermore n_l(kr) can now enter, since the origin isn’t included in the
region for which this description holds. The general form will be

    ψ_k(r, θ) = Σ_{l=0}^∞ (C_l j_l(kr) + D_l n_l(kr)) P_l(cos θ)
              = Σ_{l=0}^∞ A_l (cos δ_l j_l(kr) − sin δ_l n_l(kr)) P_l(cos θ)

              → Σ_{l=0}^∞ A_l [ sin(kr − πl/2 + δ_l)/kr ] P_l(cos θ)
              = (1/2ik) Σ_{l=0}^∞ (−i)^l A_l [ e^{i(kr+δ_l)}/r − (−1)^l e^{−i(kr+δ_l)}/r ] P_l(cos θ)   as r → ∞.

The coefficients C_l and D_l can be taken to be real (since the wave equation is real), with
A_l = √(C_l² + D_l²) and tan δ_l = −D_l/C_l. The magnitudes of the incoming and outgoing waves
are thus again equal.
Now we need to match the incoming waves with the plane waves, since that’s the bit we
control; hence (−i)^l A_l e^{−iδ_l} = (2l + 1). So finally

    ψ_k(r, θ) → (1/2ik) Σ_{l=0}^∞ (2l+1) [ e^{2iδ_l} e^{ikr}/r − (−1)^l e^{−ikr}/r ] P_l(cos θ)   as r → ∞

              = e^{ikr cos θ} + Σ_{l=0}^∞ (2l+1) [ (e^{2iδ_l} − 1)/2ik ] (e^{ikr}/r) P_l(cos θ)

    ⇒ f(k, θ) = Σ_{l=0}^∞ (2l+1) (e^{iδ_l} sin δ_l / k) P_l(cos θ)

Because of the orthogonality of the Pl (cos θ), the cross section can also be written as a sum
over partial cross sections
    σ = ∫ |f(k, θ)|² dΩ = Σ_{l=0}^∞ σ_l = Σ_{l=0}^∞ (4π/k²)(2l+1) sin² δ_l

As argued at the start, this depends only on the phase shifts δl .


The significance of δ_l can be appreciated if we compare the asymptotic form of the radial
wavefunction in the presence and absence of the potential: without it we have sin(kr − πl/2)/r,
but with the potential we have sin(kr − πl/2 + δ_l)/r. So δ_l is just the phase by which the
potential shifts the wave compared with the free case.

• Gasiorowicz ch 19.2

• Mandl ch 11.5

• Shankar ch 19.5

• Townsend ch 13.4,5
5.3.1 Hard sphere scattering
For hard-sphere scattering, the potential is infinite for r < a and zero for r > a. The free
form of the wavefunction therefore holds for all r > a, and the wavefunction must vanish at the
surface (r = a). Since the partial waves are all independent, this means each partial wave must
vanish at r = a, i.e. C_l j_l(ka) + D_l n_l(ka) = 0. Thus

    δ_l ≡ arctan(−D_l/C_l) = arctan( j_l(ka)/n_l(ka) )

For l = 0 this is very simple: δ0 = −ka. For higher l it has to be solved numerically.
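The numerical solution is straightforward with standard special-function libraries; a sketch (a and k are arbitrary test values; scipy calls n_l `spherical_yn`):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

a, k = 1.0, 0.5                  # arbitrary test values
ka = k * a

def delta(l, ka):
    # hard-sphere phase shift: tan(delta_l) = j_l(ka)/n_l(ka)
    return np.arctan(spherical_jn(l, ka) / spherical_yn(l, ka))

print(delta(0, ka))              # -0.5: for l = 0, delta_0 = -ka exactly

# total cross section from the first few partial waves,
# sigma_l = (4 pi/k^2)(2l+1) sin^2(delta_l)
sigma = sum(4 * np.pi / k**2 * (2 * l + 1) * np.sin(delta(l, ka))**2 for l in range(10))
print(sigma / (np.pi * a**2))    # close to 4 in this low-energy (ka << 1) regime
```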

In the figure above we see the corresponding wave functions, blue with the potential and
red (dashed) in its absence. The phase shifts δl are the displacements between the two.
The graph below shows the phase-shifts and cross sections for this case.

The fact that the phase-shifts are negative indicates a repulsive potential. The fact that
the phase-shifts and cross sections don’t tend to zero as k → ∞ is atypical, and comes from
the potential being infinite - we can’t use the Born approximation here either.
Since as z → 0, j_l(z)/n_l(z) ∼ z^{2l+1}, in the low-energy limit ka ≪ 1 all higher phase shifts are
negligible. Then σ ≈ σ₀ = 4πa² sinc²(ka), which tends to 4 times the classical limit of πa². In the
high-energy limit ka ≫ 1, all phase shifts up to l ∼ ka will be significant, and if there are enough
of them the average value of sin² δ_l will just be ½. Then we have σ = (2π/k²) Σ_{l=0}^{ka} (2l+1) → 2πa².
We might have expected πa² in this, the classical, limit, but wave optics actually predicts the
factor of 2, a phenomenon related to Poisson’s spot.

5.3.2 Scattering from a finite square barrier or well


Here we are dealing with V(r) = 0 for r > a, and V(r) = V₀ for r < a, where V₀ can be
greater than zero (a barrier, repulsive) or less (a well, attractive). It will be useful to define
the quantity b = 2mV₀/ℏ², which has dimensions of inverse length squared. Note that as we are
working in 3D the wells and barriers are all spherical.
In this case the wave function will again be C_l j_l(kr) + D_l n_l(kr) for r > a, and again
δ_l = arctan(−D_l/C_l). However, this time the wave function for r < a doesn’t vanish, but has
the form A_l j_l(k′r), where k′ = √(k² − b), and we find D_l/C_l by matching the wave function and
its derivative at r = a. Examples of the resulting wavefunctions are shown below, first for a
barrier with k < √b, then a barrier with k > √b, then a well. As before, blue indicates the
wavefunction in the presence of the potential and red (dashed) in its absence. In the first two
cases the repulsive barrier “pushes the wave outwards” and the phase-shift is negative; in the
third the well “draws the wave inwards” and the phase-shift is positive. (Note: what is marked
as “δ” on the plots is actually δ/ka — a given distance on the graph will represent a larger
phase shift if λ is small.)
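For l = 0 the matching can be done in closed form (a standard result, not derived in the text): writing u = rR₀, inside u ∝ sin(k′r) and outside u ∝ sin(kr + δ₀), so continuity of u′/u at r = a gives δ₀ = arctan[(k/k′) tan(k′a)] − ka. A sketch (all parameter values are arbitrary):

```python
import numpy as np

def delta0(k, a, b):
    # s-wave phase shift for a spherical step V = V0 (r < a), with b = 2 m V0 / hbar^2;
    # b < 0 is a well, b > 0 a barrier. For k^2 < b the inside wavenumber is
    # imaginary and tan becomes tanh; complex arithmetic handles both cases.
    kp = np.sqrt(complex(k**2 - b))
    t = (k / kp) * np.tan(kp * a)
    return np.arctan(t.real) - k * a

print(delta0(0.5, 1.0, -1.0))   # well: positive shift ("draws the wave inwards")
print(delta0(0.5, 1.0, +1.0))   # barrier: negative shift ("pushes the wave outwards")
```

As b → +∞ the barrier becomes a hard sphere and δ₀ → −ka, reproducing the previous section.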

The next two pictures show phase-shifts and cross sections for repulsive potentials (barriers),
with the lower one being stronger than the upper one. As expected the phase-shifts start at zero
and are negative; as in the hard-sphere case the magnitudes grow initially, but as k increases
the barrier gradually becomes less and less significant and the phase-shifts and cross sections
fall off to zero.

The next two pictures show attractive potentials, and as expected the phase shifts are
positive and they and the cross sections fall to zero as k → ∞. For the weaker well (top) the
phase shifts start at zero at k = 0 as expected, but for the stronger well the s-wave phase-shift
has jumped to π. Why is this? First, we should remember that since we determined δ via
an arctangent, it was ambiguous up to a multiple of π. (Another way of seeing that is to
recognise that the wave function itself is only defined up to an overall sign.) The correct value
can be determined by examining pictures of wavefunctions (if we know the exact solutions as
here), or by requiring δ to be a continuous function of k, falling to zero as k → ∞. Below
the wave functions are shown for low k and well depths either side of the critical value; in
the second −ψ is shown by the blue dotted line, and it is this which when compared with the
red dashed line gives a positive phase shift of more than π/2 (rather than a smaller negative
shift of δ − π). What determines the critical value? A finite square well in 3D has an s-wave
zero-energy bound-state (i.e. states that would be bound if the well were infinitesimally deeper)
if √(−b) = (2n + 1)π/2a, n = 0, 1, 2 . . . . The lowest of these is −b = 2.467/a², and that is the value
at which the phase shift at k = 0 jumps from 0 to π. This is an example of Levinson’s theorem,
which says that for well-behaved finite-range potentials the value of the phase-shift δl at the
origin is nπ where n is the number of bound states of the potential with angular momentum
l. This is an example of how we can obtain information about the potential from scattering.
This plot shows a much deeper well. From this we can deduce that for b = −50/a² there are
two bound states with l = 0, two with l = 1, one each with l = 2, 3 and 4, but none with l = 5
(or higher). We can also see that for k greater than around 5/a the partial wave description
becomes too cumbersome - dozens of partial waves have to be included to reproduce the cross
section - far more than could realistically be deduced from experiment.
The plot above also suggests that something interesting is happening around k = 3.5/a,
where there is a very sharp peak in the cross section. It comes from the l = 5 partial wave,
where the phase-shift rises very rapidly from around 0 to nearly π. Since the cross section is
proportional to sin2 δl we see that it will rise and fall rapidly over the region in k, peaking at
δl = π/2. What is happening here is suggested by the next plot, of the l = 5 wave function:
We can see that for l > 0, the combination of the square well and the centrifugal barrier allows

for quasi-bound states to form with energies greater than zero — very similar to the situation
with radioactive decay, in which α particles were trapped for a time by the Coulomb repulsion
(see 2.4.2). When we send in a beam of particles, they will tend to get trapped and the wave
function within the well will be large. This is called a resonance. We can’t see the wavefunction,
of course, but the clue is in the phase shift, which rises rapidly through π/2 as we scan the
energy across the peak. As a function of k, the phase shift and cross section will have the form

    δ_l = δ_b + arctan( (Γ/2)/(E_r − E) )        σ_l = (4π/k²)(2l+1) (Γ²/4)/( (E − E_r)² + Γ²/4 )

This is a Lorentzian (ignoring the relatively slowly varying factor of k −2 ) and in the context of
scattering it is termed a Breit-Wigner curve. The width Γ gives the lifetime of the quasi-bound
state.
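The Breit-Wigner shape is easy to visualise numerically. A sketch with arbitrary values of E_r and Γ, taking δ_b = 0 and using arctan2 to keep δ on a continuous branch through π/2:

```python
import numpy as np

Er, Gamma = 2.0, 0.1               # resonance position and width (arbitrary values)
E = np.linspace(1.5, 2.5, 2001)

# continuous branch of delta_l: rises from near 0 through pi/2 at E = Er towards pi
delta = np.arctan2(Gamma / 2, Er - E)
bw = (Gamma**2 / 4) / ((E - Er)**2 + Gamma**2 / 4)   # the Lorentzian factor

print(E[np.argmax(bw)])                    # the cross section peaks at E = Er
print(np.allclose(np.sin(delta)**2, bw))   # True: sin^2(delta) is exactly the Lorentzian
```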
The phase shift is very useful for distinguishing random bumps in the cross section from
true resonances. For instance when b is just too low for there to be a zero-energy s-wave bound
state there will be a significant bump in the cross section, but it is not a resonance: there is no
centrifugal barrier to trap the particles. (Many text-books fudge this issue, since l > 0 is hard
to deal with analytically. The correct term for a not-quite-bound state is a virtual state.) In
the plot for b = −50, we can see a number of bumps; the one at ka ≈ 2.5 might be a d-wave
virtual state, but only the really sharp peak at ka ≈ 3.5 is a resonance. Examples of resonances
in particle physics are the ∆, a sort of excited proton, which shows up very strongly in p-wave
πN scattering, and the Z-boson which was seen in e+ e− scattering at LEP.
Low-energy nucleon-nucleon scattering shows nice examples of both resonances and virtual
states: The phase shift is given in degrees. (Note the potential is more complicated than a

square well!). In the proton-proton channel there is a virtual state, but no resonance or bound
state. If the nuclear force were a little stronger perhaps di-protons could exist. In the neutron-
proton channel though the phase shift starts at π (180◦ ) indicating the presence of a bound
state — the deuteron. (Plots are from the Nijmegen phase shift analysis of nucleon-nucleon
scattering.) The fact that the phase shift starts to go negative at high-enough energy suggests
that there is a repulsive hard core to the interaction (as indeed we know from the size of nuclei
that there must be).
It can be shown (see examples) that the Born approximation is good for high-enough energy
scattering (k ≫ ba), and the shallower the well, the greater its domain of validity, till if ba² ≪ 1
it will be good everywhere. In the latter case the potential will not have any bound states or
resonances. Where the Born approximation breaks down, it is clear that multiple interactions
must be important (see 4.7).
Quantum Measurement

6.1 The Einstein-Podolsky-Rosen “paradox” and Bell’s inequalities

Summary: This section is about nothing less important than “the nature of
reality”!

In 1935 Einstein, along with Boris Podolsky and Nathan Rosen, published a paper entitled
“Can quantum-mechanical description of physical reality be considered complete?” By this
stage Einstein had accepted that the uncertainty principle did place fundamental restrictions
on what one could discover about a particle through measurements conducted on it. The
question however was whether the measuring process actually somehow brought the properties
into being, or whether they existed all along but without our being able to determine what
they were. If the latter was the case there would be “hidden variables” (hidden from the
experimenter) and the quantum description—the wave function—would not be a complete
description of reality. Till the EPR paper came out many people dismissed the question as
undecidable, but the EPR paper put it into much sharper focus. Then in 1964 John Bell
presented an analysis of a variant of the EPR paper which showed that the question actually
was decidable. Many experiments have been done subsequently, and they have come down
firmly in favour of a positive answer to the question posed in EPR’s title.
The original EPR paper used position and momentum as the two properties which couldn’t
be simultaneously known (but might still have hidden definite values), but subsequent discus-
sions have used components of spin instead, and we will do the same. But I will be quite lax
about continuing to refer to “the EPR experiment”.
There is nothing counter-intuitive or unclassical about the fact that we can produce a pair
of particles whose total spin is zero, so that if we find one to be spin-up along some axis, the
other must be spin down. All the variants of the experiment to which we will refer can be
considered like this: such a pair of electrons is created travelling back-to-back at one point, and
travel to distant measuring stations where each passes through a Stern-Gerlach apparatus (an
“SG”) of a certain orientation in the plane perpendicular to the electrons’ momentum.
As I say there is nothing odd about the fact that when the two SGs have the same orientation
the two sequences recorded at the two stations are perfectly anti-correlated (up to measurement
errors). But consider the case where they are orientated at 90◦ with respect to each other as
below: Suppose for a particular pair of electrons, we measure number 1 to be spin up in the
z-direction and number 2 to be spin down in the x-direction. Now let’s think about what would
have happened if we had instead measured the spin in the x-direction of particle 1. Surely, say
EPR, we know the answer. Since particle 2 is spin down in the x-direction, particle 1 would
have been spin up. So now we know that before it reached the detector, particle 1 was spin up

in the z-direction (because that’s what we got when we measured it) and also spin up in the
x-direction (because it is anti-correlated with particle 2 which was spin down). We have beaten
the uncertainty principle, if only retrospectively.
But of course we know we can’t construct a wave function with these properties. So is there
more to reality than the wave function? Bell’s contribution was to show that the assumption
that the electron really has definite values for different spin components—if you like, it has
an instruction set which tells it which way to go through any conceivable SG that it might
encounter—leads to testable predictions.
For Bell’s purposes, we imagine that the two measuring stations have agreed that they will
set their SG to one of 3 possible settings. Setting A is along the z-direction, setting C is along
the x direction, and setting B is at 45◦ to both. In the ideal set-up, the setting is chosen just
before the electron arrives, sufficiently late that no possible causal influence (travelling at not
more than the speed of light) can reach the other lab before the measurements are made. The
labs record their results for a stream of electrons, and then get together to classify each pair
as, for instance, (A ↑, B ↓) or (A ↑, C ↑) or (B ↑, B ↓) (the state of electron 1 being given
first). Then they look at the number of pairs with three particular classifications: (A ↑, B ↑),
(B ↑, C ↑) and (A ↑, C ↑). Bell’s inequality says that, if the way the electrons will go through
any given orientation is set in advance,

N (A ↑, B ↑) + N (B ↑, C ↑) ≥ N (A ↑, C ↑)

where N (A ↑, B ↑) is the number of (A ↑, B ↑) pairs etc.


Now let’s prove that.
Imagine any set of objects (or people!) with three distinct binary properties a, b and c—say
blue or brown eyes, right or left handed, and male or female (ignoring messy reality in which
there are some people not so easily classified). In each case, let us denote the two possible
values as A and Ā etc (Ā being “not A” in the sense it is used in logic). Then every object
is classified by its values for the three properties as, for instance, ABC or AB̄C or ĀB̄C̄ . . . .
The various possibilities are shown on a Venn diagram below (sorry that the bars are through
rather than over the letters...) In any given collection of objects, there will be no fewer than
zero objects in each subset, obviously. All the N s are greater than or equal to zero. Now we
want to prove that the number of objects which are AB̄ (irrespective of c) plus those that are
BC̄ (irrespective of a) is greater than or equal to the number which are AC̄ (irrespective of b):

    N(AB̄) + N(BC̄) ≥ N(AC̄)

This is obvious from the diagram below, in which the union of the blue and green sets fully
contains the red set.
A logical proof is as follows:

    N(AB̄) + N(BC̄) = N(AB̄C) + N(AB̄C̄) + N(ABC̄) + N(ĀBC̄)
                   = N(AB̄C) + N(AC̄) + N(ĀBC̄)  ≥  N(AC̄)
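The counting argument can also be checked by brute force: assign arbitrary non-negative populations to the eight classifications and verify that the inequality always holds. A sketch:

```python
import itertools, random

def N(pop, a=None, b=None, c=None):
    # number of objects with the specified property values (1 = A, 0 = not-A, etc.),
    # summing over any property left unspecified
    return sum(n for (x, y, z), n in pop.items()
               if (a is None or x == a) and (b is None or y == b) and (c is None or z == c))

random.seed(0)
for _ in range(1000):
    # random population for each of the 8 classifications ABC, AB(not-C), ...
    pop = {bits: random.randint(0, 100) for bits in itertools.product((0, 1), repeat=3)}
    # N(A, not-B) + N(B, not-C) >= N(A, not-C), whatever the populations
    assert N(pop, a=1, b=0) + N(pop, b=1, c=0) >= N(pop, a=1, c=0)
print("inequality holds in every case")
```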
To apply to the spins we started with, we identify A with A ↑ and Ā with A ↓. Now if an
electron is A ↑ B ↓ (whatever C might be) then its partner must be A ↓ B ↑, and so the result
of a measurement A on the first and B on the second will be (A ↑, B ↑). Hence the inequality
for the spin case is a special case of the general one. We have proved Bell’s inequality assuming,
remember, that the electrons really do have these three defined properties even if, for a single
electron, we can only measure one of them.
Now let’s consider what quantum mechanics would say. We first remind ourselves of the
relation between the spin-up and spin-down states for two directions:

    |θ, ↑⟩ = cos(θ/2) |0, ↑⟩ + sin(θ/2) |0, ↓⟩        |0, ↑⟩ = cos(θ/2) |θ, ↑⟩ − sin(θ/2) |θ, ↓⟩
    |θ, ↓⟩ = −sin(θ/2) |0, ↑⟩ + cos(θ/2) |0, ↓⟩       |0, ↓⟩ = sin(θ/2) |θ, ↑⟩ + cos(θ/2) |θ, ↓⟩

where θ is the angle between the orientation of the two axes. For A and B or for B and C
θ = 45◦ ; for A and C it is 90◦ .
Consider randomly oriented spin-zero pairs and settings A, B and C equally likely. If the
first SG is set to A and the second to B (which happens 1 time in 9), there is a probability
of 1/2 of getting A ↑ at the first station. But then we know that the state of the second
electron is |A ↓⟩ and the probability that we will measure spin in the B direction to be up is
sin² 22.5°. Thus the fraction of pairs which are (A ↑, B ↑) is ½ sin² 22.5° = 0.073, and similarly
for (B ↑, C ↑). But the fraction which are (A ↑, C ↑) is ½ sin² 45° = 0.25. So the prediction of
quantum mechanics for 9N₀ measurements is

    N(A ↑, B ↑) + N(B ↑, C ↑) = 0.146 N₀ < N(A ↑, C ↑) = 0.25 N₀


So Bell’s inequality does not hold. The experiment has been done many times, starting with
the pioneering work of Alain Aspect, and every time the predictions of quantum mechanics
are upheld and Bell’s inequality is violated. (Photons rather than electrons are used. Early
experiments fell short of the ideal in many ways, but as loopholes have been successively closed
the result has become more and more robust.)
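The quantum-mechanical fractions used in the comparison follow directly from the spin-½ rotation formulae; a minimal check:

```python
import numpy as np

# fraction of pairs found (X up, Y up) when the two SG axes differ by angle theta:
# (1/2) * sin^2(theta/2)
frac = lambda theta_deg: 0.5 * np.sin(np.radians(theta_deg) / 2)**2

lhs = frac(45) + frac(45)          # (A up, B up) + (B up, C up): axes 45 degrees apart
rhs = frac(90)                     # (A up, C up): axes 90 degrees apart
print(round(lhs, 3), round(rhs, 3))   # 0.146 0.25 -- Bell's inequality is violated
```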
It seems pretty inescapable that the electrons have not “decided in advance” how they will
pass through any given SG. Do we therefore have to conclude that the measurement made at
station 1 is responsible for collapsing the wave function at station 2, even if there is no time
for light to pass between the two? It is worth noting that no-one has shown any way to use
this set-up to send signals between the stations; on their own they both see a totally random
succession of results. It is only in the statistical correlation that the weirdness shows up...

In writing this section I found this document by David Harrison of the University of Toronto
very useful.

• (Gasiorowicz ch 20.3,4)

• Mandl ch 6.3

• Townsend ch 5.4,5

Further discussions can be found in N. David Mermin’s book Boojums all the way through (CUP
1990) and in John S. Bell’s Speakable and unspeakable in quantum mechanics (CUP 1987).
Mathematical background and revision

Most of the material presented here is expected to be revision. Some however (eg Airy functions)
is not, but is useful background material for one or more sections of the course. There are
references to contour integrals which can be ignored by students who have not taken a course
on the subject.

A.1 Vector Spaces


Basic results are presented here without explanation. See your PHYS20602 notes for more
details. The first five points are definitions (except for the Schwarz inequality in the 4th point);
the rest can be proved.
• We will consider a complex vector space, with vectors (kets) written |α⟩. (What we write
  inside is just a label; it might (anticipating) be the corresponding wave function, or an
  eigenvalue, or just a number if we have some natural ordering principle.)

• For complex scalars a and b, the ket |aα + bβ⟩ = a|α⟩ + b|β⟩ is also in the space. The null
  vector is written 0 (not |0⟩): 0|α⟩ = 0.

• The corresponding bras ⟨α| are taken from the dual space. ⟨aα + bβ| = a*⟨α| + b*⟨β|.
  (We use the same labels for bras and kets.)

• There is a finite inner product ⟨α|β⟩; ⟨α|α⟩ (≡ |α|²) is real and positive (unless ⟨α|α⟩ = 0,
  which implies |α⟩ = 0). ⟨β|α⟩ = ⟨α|β⟩*. For normalised states, ⟨α|α⟩ = 1 and |⟨α|β⟩| ≤ 1
  (with equality only for |β⟩ = |α⟩).

• Operators transform one ket into another: Ω̂|α⟩ = |Ω̂α⟩. (It may be necessary to specify
  a domain of validity.) The adjoint operator Ω̂† is defined by ⟨α|Ω̂† = ⟨Ω̂α|. If Ω̂† = Ω̂ the
  operator is self-adjoint or Hermitian.

• The eigenvalues ω_i of the Hermitian operator Ω̂ are real. The corresponding normalised
  eigenfunctions |ω_i⟩ are orthogonal, ⟨ω_i|ω_j⟩ = δ_ij, and form a complete set: |α⟩ = Σ_i a_i|ω_i⟩
  for any |α⟩, where a_i = ⟨ω_i|α⟩. (If some of the ω_i are degenerate the orthogonalisation
  within the corresponding set of eigenkets will need to be done by hand, but it can be
  done.) If |β⟩ = Σ_i b_i|ω_i⟩, then ⟨β|α⟩ = Σ_i b_i* a_i.

• If Ω̂ is Hermitian, the matrix element can be formed with Ω̂ “acting either way”: ⟨β|Ω̂|α⟩ =
  ⟨β|Ω̂α⟩ = ⟨Ω̂β|α⟩ (= Σ_i ω_i b_i* a_i).

• The object |β⟩⟨α| is an operator, since acting on a ket it gives another ket: (|β⟩⟨α|)|φ⟩ =
  (⟨α|φ⟩)|β⟩; the adjoint operator is |α⟩⟨β|. By completeness, Σ_i |ω_i⟩⟨ω_i| = Î (the identity
  operator).

• In the basis {|ω_i⟩}, an operator Θ̂ is characterised by its matrix elements θ_ij ≡ ⟨ω_i|Θ̂|ω_j⟩,
  and we have

      ⟨β|Θ̂|α⟩ = Σ_ij b_i* θ_ij a_j,

  i.e. the row vector (b_1*, b_2*, b_3*, . . .) times the matrix of the θ_ij times the column
  vector (a_1, a_2, a_3, . . .)ᵀ. The matrix elements of Θ̂† are ⟨ω_i|Θ̂†|ω_j⟩ = θ_ji*. (Thus
  the matrix of coefficients is transposed as well as complex conjugated.) If Γ̂ is another
  operator with matrix elements γ_ij in this basis, the matrix elements of Θ̂Γ̂ are Σ_k θ_ik γ_kj.
  Also, Θ̂ ≡ Σ_ij |ω_i⟩ θ_ij ⟨ω_j|.

See section 1.2 for the application to quantum mechanics, including states such as |x⟩ and |p⟩.
References
• Shankar 1.1-6

A.1.1 Direct Products


We can form a new vector space by taking the direct product of two vector spaces. In quantum
mechanics, this arises for instance if we have two particles, or if we have two sources of angular
momentum for a single particle. Taking two particles as an example, if the first (say a proton)
is in the state |φ⟩ and the second (say a neutron) is in the state |α⟩, the state of the whole
system is given by |φ⟩ ⊗ |α⟩, where we specify at the start that we will write the proton state
first, and the symbol “⊗” is really just a separator. All states of this form are in the new
vector space, but not all states of the new vector space are of this separable form. As a simple
example, |φ⟩ ⊗ |α⟩ + |ψ⟩ ⊗ |β⟩ is a possible state of the system in which neither the proton nor
the neutron is in a definite state, but the two are entangled, such that if we find the proton in
state |φ⟩ we know the neutron is in state |α⟩, but if the proton is in state |ψ⟩ the neutron must
be in state |β⟩.
Operators can be of three forms: they can act only on one particle, they can act on both
particles but independently on each, or they can be more complicated. If Â acts only on the
proton, and B̂ only on the neutron, then in the two-particle space we have to write the former
as Â ⊗ Î_n, where Î_n is the identity operator for neutron states, and the latter as Î_p ⊗ B̂. Again,
the “⊗” acts to separate “proton” from “neutron”. In each case one of the two particles’ states
is unchanged. On the other hand Â ⊗ B̂ changes the states of both particles, and Â ⊗ B̂ + Ĉ ⊗ D̂
does too, but in a more complicated way. (The latter expression is only valid if Ĉ and D̂ are
proton and neutron operators respectively.) Some examples:

  
    (⟨φ| ⊗ ⟨α|)(|ψ⟩ ⊗ |β⟩) = ⟨φ|ψ⟩⟨α|β⟩

    (Â ⊗ Î_n)(|φ⟩ ⊗ |α⟩) = (Â|φ⟩) ⊗ |α⟩

    (Â ⊗ Î_n + Î_p ⊗ B̂)(|φ⟩ ⊗ |α⟩) = (Â|φ⟩) ⊗ |α⟩ + |φ⟩ ⊗ (B̂|α⟩)

    (Â ⊗ B̂ + Ĉ ⊗ D̂)(|φ⟩ ⊗ |α⟩ + |ψ⟩ ⊗ |β⟩) = (Â|φ⟩) ⊗ (B̂|α⟩) + (Â|ψ⟩) ⊗ (B̂|β⟩)
                                              + (Ĉ|φ⟩) ⊗ (D̂|α⟩) + (Ĉ|ψ⟩) ⊗ (D̂|β⟩)
If |φ⟩ is the spatial state of a particle and |α⟩ its spin state, the common notation φ(x)|α⟩
actually stands for (⟨x| ⊗ Î_S)(|φ⟩ ⊗ |α⟩).
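In a finite-dimensional basis the direct product is just the Kronecker product, so these identities can be checked with numpy (the operator and states here are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))      # an operator acting on particle 1 only
phi = rng.standard_normal(2)         # state of particle 1
alpha = rng.standard_normal(2)       # state of particle 2

# (A (x) I_n) (|phi> (x) |alpha>) = (A|phi>) (x) |alpha>
lhs = np.kron(A, np.eye(2)) @ np.kron(phi, alpha)
rhs = np.kron(A @ phi, alpha)
print(np.allclose(lhs, rhs))         # True
```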
The direct product notation is clumsy, and shorthands are often used. If we indicate via a
label which space an operator acts on, e.g. by writing Â_p and B̂_n, or if it is otherwise obvious, we
often drop the explicit identity operators in the other space and hence just write the operator in
the third equation above as Â_p + B̂_n. Even more succinctly, we may use a single ket with two
labels to stand for the state of the combined system, for example |φ, α⟩. An example of this
would be the two-dimensional harmonic oscillator described in section A.4. Such a labelling
though implies that we want to use basis states that are direct products of the sub-space
basis states, and that might not be convenient. In the case of addition of angular momentum
(section A.2), we are more likely to want to use eigenstates of the total angular momentum of
the system as our basis states, and they are not separable.
References
• Shankar 10.1

A.2 Angular Momentum


We omit hats on operators, and don’t always distinguish between operators and their position-
space representation.
Orbital angular momentum
L = r × p  ⇒  L_z = −iħ(x ∂/∂y − y ∂/∂x)  etc.

L² = −ħ²[ (1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin²θ) ∂²/∂φ² ];   L_z = −iħ ∂/∂φ

∇² = ∇²_r − L²/(ħ²r²);   ∇²_r ψ ≡ (1/r²) ∂/∂r (r² ∂ψ/∂r) = (1/r) ∂²(rψ)/∂r²
Eigenfunctions of L² and L_z are the spherical harmonics Y_l^m(θ, φ), with eigenvalues ħ²l(l+1) and
ħm respectively; l and m are integers and must satisfy l ≥ 0 and m = −l, −l+1, . . . , l. In Dirac
notation, the eigenstates are |l m⟩ and Y_l^m = ⟨r̂|l m⟩.
Y_0^0(θ, φ) = √(1/4π)                        Y_1^{±1}(θ, φ) = ∓√(3/8π) sin θ e^{±iφ}

Y_1^0(θ, φ) = √(3/4π) cos θ                  Y_2^{±2}(θ, φ) = √(15/32π) sin²θ e^{±2iφ}

Y_2^{±1}(θ, φ) = ∓√(15/8π) sin θ cos θ e^{±iφ}   Y_2^0(θ, φ) = √(5/16π) (3cos²θ − 1)
The Mathematica function to obtain them is SphericalHarmonicY[l,m,θ,φ]. These are normalised
and orthogonal:

∫ (Y_{l'}^{m'})* Y_l^m dΩ = δ_{ll'} δ_{mm'}   where dΩ = sin θ dθ dφ
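As a quick numerical cross-check of the normalisation (this Python snippet is my own illustration, not part of the notes), we can integrate |Y_1^0|² over the sphere with SciPy:

```python
# Check that Y_1^0 = sqrt(3/4pi) cos(theta), from the table above, is normalised:
# the integral of |Y_1^0|^2 over solid angle should be 1.
import numpy as np
from scipy.integrate import dblquad

def integrand(theta, phi):
    # |Y_1^0|^2 times the sin(theta) factor from dOmega = sin(theta) dtheta dphi
    return (3.0 / (4.0 * np.pi)) * np.cos(theta) ** 2 * np.sin(theta)

# inner variable theta runs from 0 to pi, outer variable phi from 0 to 2*pi
norm, _ = dblquad(integrand, 0.0, 2.0 * np.pi, 0.0, np.pi)
print(norm)  # ~1.0
```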

Rules obeyed by any angular momentum (eg J can be replaced by L or S):


[J_x, J_y] = iħJ_z  etc.;   [J², J_i] = 0;   J_± ≡ J_x ± iJ_y;   [J_+, J_−] = 2ħJ_z;   [J_z, J_±] = ±ħJ_±

J² = J_x² + J_y² + J_z² = ½(J_+J_− + J_−J_+) + J_z² = J_+J_− + J_z² − ħJ_z

J²|j, m⟩ = ħ² j(j+1) |j, m⟩;   J_z|j, m⟩ = ħm |j, m⟩;   J_±|j, m⟩ = ħ√((j∓m)(j±m+1)) |j, m±1⟩
In the last line j and m must be integer or half-integer, and m = −j, −j + 1, . . . j.
For the special case of a spin-½ particle (such as a proton, neutron or electron), the eigen-
states of S² and S_z are |½, ½⟩ and |½, −½⟩, often simply written |½⟩ and |−½⟩ or even |↑⟩ and
|↓⟩; then S²|±½⟩ = ¾ħ²|±½⟩ and S_z|±½⟩ = ±½ħ|±½⟩. In this basis, with |½⟩ ≡ (1, 0)ᵀ and |−½⟩ ≡ (0, 1)ᵀ,
the components of the spin operator are given by Ŝ_i = ½ħσ_i, where the σ_i are the Pauli matrices

σ_x = ( 0  1 ; 1  0 )    σ_y = ( 0  −i ; i  0 )    σ_z = ( 1  0 ; 0  −1 )

If a system has two contributions to its angular momentum, with operators J_1 and J_2 and
eigenstates |j_1 m_1⟩ and |j_2 m_2⟩, the total angular momentum operator is J = J_1 + J_2. The
quantum number j of the combined system satisfies j_1 + j_2 ≥ j ≥ |j_1 − j_2|, and m = m_1 + m_2.
Since enumerating states by {m_1, m_2} gives (2j_1 + 1)(2j_2 + 1) possible states, the number must
be unchanged in the {j, m} basis, which is verified as follows: labelling such that j_2 ≥ j_1 we
have

Σ_{j = j_2−j_1}^{j_2+j_1} (2j+1) = (j_2+j_1)(j_2+j_1+1) − (j_2−j_1)(j_2−j_1−1) + (j_2+j_1) − (j_2−j_1) + 1 = (2j_1+1)(2j_2+1)

Depending on basis, we write the states either as |j_1 m_1⟩ ⊗ |j_2 m_2⟩ or |j_1, j_2; j m⟩, and they
must be linear combinations of each other as both span the space:

|j_1, j_2; j m⟩ = Σ_{m_1 m_2} ⟨j_1 m_1; j_2 m_2|j m⟩ (|j_1 m_1⟩ ⊗ |j_2 m_2⟩)   and

|j_1 m_1⟩ ⊗ |j_2 m_2⟩ = Σ_{j m} ⟨j_1 m_1; j_2 m_2|j m⟩ |j_1, j_2; j m⟩

where the numbers denoted by ⟨j_1 m_1; j_2 m_2|j m⟩ are called Clebsch-Gordan coefficients; they
vanish unless j_1 + j_2 ≥ j ≥ |j_1 − j_2| and m = m_1 + m_2. These are tabulated in various
places including the Particle Data Group site (see here for examples of how to use them); the
Mathematica function to obtain them is ClebschGordan[{j1, m1}, {j2, m2}, {j, m}]. There is
also an on-line calculator at Wolfram Alpha which is simple to use if you only have a few to
calculate. We use the "Condon-Shortley" phase convention, which is the most common; in this
convention the coefficients are real, which is why we have not written ⟨j_1 m_1; j_2 m_2|j m⟩* in the
second line above.
As an example we list the states arising from coupling angular momenta 1 and ½ (as in
p-wave states of the hydrogen atom):

|1, ½; ½ ½⟩ = √(2/3) |1 1⟩ ⊗ |½ −½⟩ − √(1/3) |1 0⟩ ⊗ |½ ½⟩
|1, ½; ½ −½⟩ = √(1/3) |1 0⟩ ⊗ |½ −½⟩ − √(2/3) |1 −1⟩ ⊗ |½ ½⟩
|1, ½; 3/2 3/2⟩ = |1 1⟩ ⊗ |½ ½⟩
|1, ½; 3/2 ½⟩ = √(1/3) |1 1⟩ ⊗ |½ −½⟩ + √(2/3) |1 0⟩ ⊗ |½ ½⟩
|1, ½; 3/2 −½⟩ = √(2/3) |1 0⟩ ⊗ |½ −½⟩ + √(1/3) |1 −1⟩ ⊗ |½ ½⟩
|1, ½; 3/2 −3/2⟩ = |1 −1⟩ ⊗ |½ −½⟩
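The list above can be cross-checked with SymPy's Clebsch-Gordan coefficients (a check of mine, not part of the notes; SymPy also uses the Condon-Shortley convention):

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)
# <1 1; 1/2 -1/2 | 1/2 1/2>: the first coefficient listed above, sqrt(2/3)
c1 = CG(1, 1, half, -half, half, half).doit()
# <1 0; 1/2 1/2 | 3/2 1/2>: should also be sqrt(2/3)
c2 = CG(1, 0, half, half, Rational(3, 2), half).doit()
print(c1, c2)  # both equal sqrt(2/3)
```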
Somewhat more generally, the coupling of l and ½ to give j = l ± ½ is

|l, ½; l±½, m⟩ = √((l ∓ m + ½)/(2l + 1)) |l m+½⟩ ⊗ |½ −½⟩ ± √((l ± m + ½)/(2l + 1)) |l m−½⟩ ⊗ |½ ½⟩.

(The following material is not revision, but is asserted without proof. See Shankar 15.3 for
a partial proof.)

Similarly we can write expressions for products of spherical harmonics:

Y_k^q Y_l^m = Σ_{l'm'} f(k, l, l') ⟨k q; l m|l' m'⟩ Y_{l'}^{m'}   ⇒   ∫ (Y_{l'}^{m'})* Y_k^q Y_l^m dΩ = f(k, l, l') ⟨k q; l m|l' m'⟩

where f(k, l, l') = √(1/4π) √((2l + 1)(2k + 1)/(2l' + 1)) ⟨k 0; l 0|l' 0⟩. The factor ⟨k 0; l 0|l' 0⟩ is
the one which vanishes unless parity is conserved, i.e. unless k + l and l' are both odd or both
even.
If an operator Ω is a scalar, [J_i, Ω] = 0. If V is a triplet of operators which form a vector,
then

[J_x, V_y] = iħV_z  etc.   ⇒   [J_z, V_m] = mħV_m,   [J_±, V_m] = ħ√((1 ∓ m)(2 ± m)) V_{m±1}

where V_{±1} = ∓√(½)(V_x ± iV_y), V_0 ≡ V_z.

The operator r is a vector operator which obeys these rules. Its components in the spherical
basis are √(4π/3) r Y_1^m.
The following rules are obeyed by the matrix elements of a vector operator, by the Wigner-
Eckart theorem:

⟨j' m'|V_q|j m⟩ = ⟨j'‖V‖j⟩ ⟨j' m'; 1 q|j m⟩

where ⟨j'‖V‖j⟩ is called the reduced matrix element and is independent of m, m' and q.
The Clebsch-Gordan coefficient will vanish unless j + 1 ≥ j' ≥ |j − 1|; in other words, operating
with a vector operator is like adding one unit of angular momentum to the system. This is
the origin of electric dipole selection rules for angular momentum. (In the rule for combining
spherical harmonics the factor f(k, l, l') is a reduced matrix element.)

A.3 Hydrogen wave functions


The solutions of the Schrödinger equation for the Coulomb potential V(r) = −ħcα/r have
energy E_n = −E_Ry/n², where E_Ry = ½α²mc² = 13.6 eV (with m the reduced mass of the
electron-proton system). (Recall α = e²/(4πε₀ħc) ≈ 1/137.) The spatial wavefunctions are
ψ_{nlm}(r) = R_{n,l}(r) Y_l^m(θ, φ).
The radial wavefunctions are as follows, where a_0 = ħc/(mc²α):

R_{1,0}(r) = (2/a_0^{3/2}) exp(−r/a_0),
R_{2,0}(r) = (2/(2a_0)^{3/2}) (1 − r/(2a_0)) exp(−r/(2a_0)),
R_{2,1}(r) = (1/(√3 (2a_0)^{3/2})) (r/a_0) exp(−r/(2a_0)),
R_{3,0}(r) = (2/(3a_0)^{3/2}) (1 − 2r/(3a_0) + 2r²/(27a_0²)) exp(−r/(3a_0)),
R_{3,1}(r) = (4√2/(9 (3a_0)^{3/2})) (r/a_0) (1 − r/(6a_0)) exp(−r/(3a_0)),
R_{3,2}(r) = (2√2/(27√5 (3a_0)^{3/2})) (r/a_0)² exp(−r/(3a_0)).
They are normalised, so ∫₀^∞ (R_{n,l}(r))² r² dr = 1. Radial wavefunctions of the same l but different
n are orthogonal (the spherical harmonics take care of orthogonality for different ls).
The following radial integrals can be proved:

⟨r²⟩ = (a_0² n²/2) (5n² + 1 − 3l(l + 1)),
⟨r⟩ = (a_0/2) (3n² − l(l + 1)),
⟨1/r⟩ = 1/(n² a_0),
⟨1/r²⟩ = 1/((l + ½) n³ a_0²),
⟨1/r³⟩ = 1/(l (l + ½)(l + 1) n³ a_0³).
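These integrals are easy to spot-check numerically; for example (my own sketch, not from the notes), for n = 2, l = 1 the second formula gives ⟨r⟩ = 5a_0:

```python
# Verify <r> = (a0/2)(3 n^2 - l(l+1)) = 5 a0 for n=2, l=1, working in units a0 = 1
# and using R_{2,1}(r) = r exp(-r/2) / (sqrt(3) * 2^{3/2}) from the list above.
import numpy as np
from scipy.integrate import quad

def R21(r):
    return r * np.exp(-r / 2.0) / (np.sqrt(3.0) * 2.0 ** 1.5)

norm, _ = quad(lambda r: R21(r) ** 2 * r ** 2, 0.0, np.inf)    # normalisation
r_mean, _ = quad(lambda r: R21(r) ** 2 * r ** 3, 0.0, np.inf)  # <r>
print(norm, r_mean)  # 1.0 and 5.0
```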

For hydrogen-like atoms (single-electron ions with nuclear charge |e| Z) the results are ob-
tained by substituting α → Zα (and so a0 → a0 /Z).

A.4 Harmonic oscillators, creation and annihilation operators
For a particle of mass m in a one-dimensional harmonic oscillator potential ½kx² ≡ ½mω²x²,
where ω is the classical frequency of oscillation, the Hamiltonian is

Ĥ = p̂²/2m + ½mω²x̂²
The energy levels are E_n = (n + ½)ħω, n ≥ 0, and, defining the length scale x_0 = √(ħ/mω), the
wave functions are

φ_0(x) = (πx_0²)^{−1/4} e^{−x²/2x_0²}    φ_n(x) = (1/√(2ⁿ n!)) H_n(x/x_0) φ_0(x)

where the Hermite polynomials are H_0(z) = 1; H_1(z) = 2z; H_2(z) = 4z² − 2; H_3(z) =
8z³ − 12z; H_4(z) = 16z⁴ − 48z² + 12. The Mathematica function for obtaining them is
HermiteH[n, z].

In bra-ket notation, we will represent the state with quantum number n as |n⟩, with φ_n(x) =
⟨x|n⟩.
If we define the annihilation and creation operators

â = (1/√2)(x̂/x_0 + i x_0 p̂/ħ)   and   â† = (1/√2)(x̂/x_0 − i x_0 p̂/ħ)

which satisfy [â, â†] = 1, we have

Ĥ = ħω(â†â + ½),   [â, Ĥ] = ħω â   and   [â†, Ĥ] = −ħω â†

from which it follows that

â|n⟩ = √n |n−1⟩   and   â†|n⟩ = √(n+1) |n+1⟩.
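In a truncated basis {|0⟩, …, |N−1⟩} these relations give simple matrices (an illustration of mine, not part of the notes): â has √n on its first superdiagonal, and â†â is then diagonal with entries n, so Ĥ = ħω(â†â + ½) has the expected spectrum. (The commutator [â, â†] = 1 only holds approximately in a truncated basis; it fails in the last row.)

```python
import numpy as np

N = 6
# (a)_{n-1, n} = sqrt(n), i.e. a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.T                 # creation operator: a†|n> = sqrt(n+1)|n+1>
number = adag @ a          # the number operator a†a
print(np.diag(number))     # [0. 1. 2. 3. 4. 5.]
```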

This suggests an interpretation of the states of the system in which the quanta of energy are
primary, with â† and â respectively creating and annihilating a quantum of energy. Further
notes on creation and annihilation operators can be found here.

For a particle in a two-dimensional potential ½mω_x²x² + ½mω_y²y², we define x_0 = √(ħ/mω_x)
and y_0 = √(ħ/mω_y), and the wavefunction of the particle will be determined by two quantum
numbers n_x and n_y:

φ_{0,0}(x, y) = (πx_0y_0)^{−1/2} e^{−x²/2x_0²} e^{−y²/2y_0²}
φ_{n_x,n_y}(x, y) = (1/√(2^{n_x} n_x!)) H_{n_x}(x/x_0) (1/√(2^{n_y} n_y!)) H_{n_y}(y/y_0) φ_{0,0}(x, y)

In bra-ket notation, we will represent the state with quantum numbers n_x and n_y as |n_x, n_y⟩.
Annihilation and creation operators â_x and â_x† can be constructed from x̂ and p̂_x as above, and we can
construct a second set of operators â_y and â_y† from ŷ and p̂_y (using y_0 as the scale factor) in the
same way. Then â_x† and â_y† act on |n_x, n_y⟩ to increase n_x and n_y respectively, and â_x and â_y to
decrease them, and both of the latter annihilate the ground state. So for instance

â_x|n_x, n_y⟩ = √(n_x) |n_x−1, n_y⟩   and   â_y†|n_x, n_y⟩ = √(n_y+1) |n_x, n_y+1⟩.

A.5 The Helium atom


New section for 2011/12, to be constructed (26/9/11)

A.6 Spherical Bessel functions


Spherical Bessel functions are solutions of the following equation:

z² d²f/dz² + 2z df/dz + (z² − l(l + 1)) f = 0
for integer l.
The regular solution is denoted j_l(z) and the irregular one n_l(z) (or sometimes y_l(z)). The
Mathematica functions for obtaining them are SphericalBesselJ[l, z] and SphericalBesselY[l,
z]. For l = 0 the equation is d²g/dz² + g = 0, where g = zf, and so the solutions are j_0 = sin z/z
and n_0 = −cos z/z. The general solutions are

j_l(z) = z^l (−(1/z) d/dz)^l (sin z/z)   and   n_l(z) = −z^l (−(1/z) d/dz)^l (cos z/z).
The asymptotic forms are

j_l(z) → sin(z − lπ/2)/z   and   n_l(z) → −cos(z − lπ/2)/z   as z → ∞;

j_l(z) → z^l/(2l + 1)!!   and   n_l(z) → −(2l − 1)!! z^{−l−1}   as z → 0.

(Note: "n!!" is like the factorial but includes only the odd (even) numbers for odd (even) n, e.g.
7!! = 7 × 5 × 3 × 1 and 6!! = 6 × 4 × 2, with 0!! = 0! ≡ 1.)
In spherical polar coordinates the Schrödinger equation for a particle in free space (V(r) = 0)
gives the following equation for the radial wavefunction:

d²R_l/dr² + (2/r) dR_l/dr + (k² − l(l + 1)/r²) R_l = 0

where k 2 = 2mE/~2 . So the solution is

Rl (r) = Ajl (kr) + Bnl (kr)

where B will equal zero if the solution has to hold at the origin, but not if the origin is excluded
(for instance outside a hard sphere).
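SciPy provides both kinds of spherical Bessel function, which makes it easy to check the closed forms above (my own illustration, not from the notes):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

z = 2.0
# j0 = sin z / z (regular) and n0 = -cos z / z (irregular; SciPy calls it y_l)
print(spherical_jn(0, z), np.sin(z) / z)     # equal
print(spherical_yn(0, z), -np.cos(z) / z)    # equal
```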
Poisson's integral representation of the regular spherical Bessel functions,

j_n(z) = (z^n/(2^{n+1} n!)) ∫_{−1}^{1} cos(zx) (1 − x²)^n dx,

together with Rodrigues' representation of the Legendre polynomials can be used to show that

j_n(z) = ½ (−i)^n ∫_{−1}^{1} e^{izx} P_n(x) dx

whence follows the expression for the expansion of a plane wave in spherical polars given in
section 5.3.

A.7 Airy functions


Airy functions are the solutions of the differential equation:

d2 f
− zf = 0
dz 2
There are two solutions, Ai(z) and Bi(z); the first tends to zero as z → ∞, while the second
blows up. Both are oscillatory for z < 0. The Mathematica functions for obtaining them are
AiryAi[z] and AiryBi[z].
The asymptotic forms of the Airy functions are:

Ai(z) → e^{−(2/3)z^{3/2}} / (2√π z^{1/4})   as z → ∞,   Ai(z) → cos((2/3)|z|^{3/2} − π/4) / (√π |z|^{1/4})   as z → −∞;

Bi(z) → e^{(2/3)z^{3/2}} / (√π z^{1/4})   as z → ∞,   Bi(z) → cos((2/3)|z|^{3/2} + π/4) / (√π |z|^{1/4})   as z → −∞.
The Schrödinger equation for a linear potential V(x) = βx in one dimension can be cast in
the following form:

−(ħ²/2m) d²ψ/dx² + βxψ − Eψ = 0

Defining z = x/x_0, with x_0 = (ħ²/(2mβ))^{1/3}, and E = (ħ²β²/(2m))^{1/3} µ, and with y(z) ≡ ψ(x),
this can be written

d²y/dz² − zy + µy = 0
(See section A.11 for more on scaling.) The solution is

y(z) = C Ai(z − µ) + D Bi(z − µ)   or   ψ(x) = C Ai((βx − E)/(βx_0)) + D Bi((βx − E)/(βx_0))

where D = 0 if the solution has to extend to x = ∞. The point z = µ, i.e. x = E/β, is the point
at which E = V and the solution changes from oscillatory to decaying/growing.
The equation for a potential with a negative slope is given by substituting z → −z in the
defining equation. Hence the general solution is ψ(x) = C Ai(−x/x0 − µ) + D Bi(−x/x0 − µ),
with D = 0 if the solution has to extend to x = −∞.
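SciPy's airy returns the tuple (Ai, Ai′, Bi, Bi′), which lets us verify the defining equation Ai″ = z Ai numerically (a sketch of mine, not part of the notes):

```python
import numpy as np
from scipy.special import airy

def Ai(x):
    return airy(x)[0]  # airy returns the tuple (Ai, Ai', Bi, Bi')

z, h = 1.5, 1e-4
# central finite-difference estimate of Ai''(z), to be compared with z * Ai(z)
second_deriv = (Ai(z + h) - 2.0 * Ai(z) + Ai(z - h)) / h ** 2
print(second_deriv, z * Ai(z))  # approximately equal
```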
The first few zeros of the Airy functions are given in Wolfram MathWorld.

A.8 Properties of δ-functions


The δ-function is only defined by its behaviour in integrals:

∫_{−a}^{b} δ(x) dx = 1;   ∫_{x_0−a}^{x_0+b} f(x) δ(x − x_0) dx = f(x_0)

where the limits a and b are positive and as large or small as we want; the integration simply
has to span the point on which the δ-function is centred.
The following equivalences may also be proved by changing variables in the corresponding
integral (an appropriate integration range is assumed for compactness of notation):

δ(ax − b) = (1/|a|) δ(x − b/a)   since   ∫ f(x) δ(ax − b) dx = (1/|a|) f(b/a)

δ(g(x)) = Σ_i δ(x − x_i)/|g′(x_i)|   where the x_i are the (simple) real roots of g(x).

Note that the dimensions of a δ-function are the inverse of those of its argument, as should be
obvious from the first equation.
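The rule for δ(g(x)) can be illustrated with SymPy (my own check, not from the notes): for g(x) = x² − 1 the simple roots are x = ±1 with |g′(±1)| = 2, so ∫ cos(x) δ(x² − 1) dx = (cos 1 + cos(−1))/2 = cos 1.

```python
from sympy import DiracDelta, cos, integrate, oo, symbols

x = symbols('x', real=True)
# SymPy expands DiracDelta(x**2 - 1) into (DiracDelta(x-1) + DiracDelta(x+1))/2
result = integrate(cos(x) * DiracDelta(x**2 - 1), (x, -oo, oo))
print(result)  # cos(1)
```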
We encounter two functions which tend to δ-functions:

(1/2π) ∫_{−x/2}^{x/2} e^{i(k−k′)x′} dx′ = (1/π) sin(½(k − k′)x)/(k − k′)  →  δ(k − k′)   as x → ∞

(2/π) sin²(½(k − k′)x)/((k − k′)² x)  →  δ(k − k′)   as x → ∞

In both cases, as x → ∞ the function tends to zero unless k = k′, at which point it tends to
x/(2π), so it looks like an infinite spike at k = k′.

That the normalisation (with respect to integration over k) is correct follows from the
following two integrals: ∫_{−∞}^{∞} sinc(t) dt = π and ∫_{−∞}^{∞} sinc²(t) dt = π. The second of these follows
from the first via integration by parts. The integral ∫_{−∞}^{∞} sinc(t) dt = Im ∫_{−∞}^{∞} (e^{it}/t) dt may be
done via a contour integral along the real axis, indented by a small semicircle above the pole at
the origin and closed by a large semicircle of radius R in the upper half plane. As no poles are
enclosed by the contour, the full contour integral is zero. By Jordan's lemma the integral round
the outer semicircle tends to zero as R → ∞, since e^{iz} decays exponentially in the upper half
plane. So the integral along the real axis is equal and opposite to the integral over the small
semicircle, which (being half a circuit of the pole, traversed clockwise) is −iπ times the residue
at t = 0, i.e. −iπ. So the real-axis integral is iπ, and its imaginary part, the integral of sinc(t),
is π.
A.9 Gaussian integrals
The following integrals will be useful:

∫_{−∞}^{∞} e^{−αx²} dx = √(π/α)   and   ∫_{−∞}^{∞} x^{2n} e^{−αx²} dx = (−1)^n (dⁿ/dαⁿ) √(π/α)

Often we are faced with a somewhat more complicated integral, which can be cast in Gaussian
form by "completing the square" in the exponent and then shifting the integration variable
x → x − β/(2α):

∫_{−∞}^{∞} e^{−αx² − βx} dx = e^{β²/(4α)} ∫_{−∞}^{∞} e^{−α(x + β/(2α))²} dx = √(π/α) e^{β²/(4α)}
This works even if β is imaginary. One way of seeing this is as follows. As R → ∞ the
original contour (with β imaginary) runs along the real axis, while the shifted contour runs
parallel to it; the two can be joined at x = ±R by short paths to form a closed contour. As
there are no poles in the region bounded by the complete contour, the two long paths must
give equal integrals provided the contributions from the joining paths vanish. Since e^{−z²} tends
to zero faster than 1/R as R → ∞ providing |x| > |y|, by Jordan's lemma those contributions
are indeed zero. Hence the two integrals must be the same.

A.10 Density of states, periodic boundary conditions and black-body radiation
In previous thermal and statistical physics courses we have tended to consider a particle in a
box (side lengths L_x, L_y, L_z), with the boundary condition that the wavefunction must vanish at
the walls. Then the energy eigenfunctions are of the form

ψ_{lmn}(x, y, z) = A sin(πlx/L_x) sin(πmy/L_y) sin(πnz/L_z) = A sin(k_x x) sin(k_y y) sin(k_z z)

for positive integers l, m, n; that is, the allowed values of the wavenumbers k_i are quantised,
though very closely spaced for a macroscopic box. The density of states gives the number of
states in the vicinity of a given momentum or energy. In momentum space, the number of
states with k (≡ |k|) in the range k → k + dk (where dk is small, but much bigger than the
spacing between states) is

D(k) dk = (V k²/2π²) dk

See here for details of the derivation. Note that as k_x, k_y and k_z all have to be positive, the
vector k, which isn't quite a momentum because we are dealing with standing waves, has to lie
in the positive octant.
In quantum mechanics we prefer not to deal with standing waves, but with momentum
eigenstates which are travelling waves. But we still want the advantage of a finite box so that
the states remain countable. The solution is to use periodic boundary conditions in which,
when a particle reaches a wall at, say, (x, y, Lz ) it leaves the box and reappears, with the same
momentum, at (x, y, 0). This may sound artificial but we get the same expression for D(k); the
advantage is that we can usefully talk about D(k) as well.
In this case the boundary condition is ψ(x, y, L_z) = ψ(x, y, 0) and the wavefunction is

ψ_{lmn}(x, y, z) = (1/√V) exp(i2πlx/L_x) exp(i2πmy/L_y) exp(i2πnz/L_z) = (1/√V) e^{ik·r}
noting now that kx = 2πl/Lx etc since a whole number of wavelengths have to fit into the box.
We have fixed the normalisation so that there is one particle in the box; this differs from the
δ-function normalisation used elsewhere. We can now talk about the number of states with k
in the range kx → kx + dkx , ky → ky + dky and kz → kz + dkz (where dki is small, but much
bigger than the spacing between states)

D(k) d³k = (L_x L_y L_z/(2π)³) dk_x dk_y dk_z = (V/8π³) d³k = (V/8π³) k² dk dΩ_k   ⇒   D(k) dk = (V k²/2π²) dk
Lx Ly Lz
8π 3 8π 3 2π 2

where dΩk = sin θk dθk dφk . We have obtained the same expression for D(k), as advertised.
This time though we integrated over all values of θk and φk , not just the positive octant.
The density of states can be defined with respect to energy, or to frequency, as well. In
each case the number of states remains the same: D(k)dk = D(E)dE = D(ω)dω, so the
density of states will change by the inverse of the factor which relates k and the new variable:
D(x) = D(k)(dk/dx).

For non-relativistic particles k = √(2mE)/ħ, so D(E) = (V m√(2mE))/(2π²ħ³). For photons
though, k = E/ħc, so D(E) = (V E²)/(2π²ħ³c³) and D(ω) = (V ω²)/(2π²c³). In the notes we
define D(ω k̂), which is akin to D(k); the angle integral hasn't yet been done, but the switch to
frequency has, bringing in a factor of 1/c³.
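The counting behind D(k) is easy to check by brute force (an illustration of mine, not in the notes): with periodic boundary conditions the allowed k form a cubic lattice of spacing 2π/L, and the number of states with |k| ≤ K should approach ∫₀^K D(k) dk = VK³/(6π²).

```python
import numpy as np

L, K = 1.0, 30.0
nmax = int(K * L / (2 * np.pi)) + 1
n = np.arange(-nmax, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing='ij')
# k = (2 pi / L)(l, m, n); count lattice points inside the sphere |k| <= K
k2 = (2 * np.pi / L) ** 2 * (nx**2 + ny**2 + nz**2)
count = np.count_nonzero(k2 <= K**2)
predicted = L**3 * K**3 / (6 * np.pi**2)
print(count, predicted)  # agree to a couple of per cent
```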
If a particle has more than one spin state we need to multiply by the degeneracy factor
which is 2 for spin- 21 electrons and for photons.
Bose-Einstein statistics gives the average number of photons n(ω, T) in a mode of frequency
ω at temperature T. Hence we obtain the Planck law for the energy density in space at frequency
ω,

ρ(ω) = 2 ħω (1/V) D(ω) n(ω, T) = (ħω³/π²c³) 1/(e^{ħω/k_B T} − 1)

Note the dimensions, which are energy/frequency/length³. It's a "double density": per unit
volume, but also per unit ω. To get the full energy density U(T) we integrate over ω.

A.11 Checking units and scaling


In a general expression, multiply top and bottom by powers of c until, as far as possible, ~ only
occurs as ~c, m as mc2 and ω as ~ω or ω/c. Then use the table

ħc: [Energy][Length];   mc²: [Energy];   ħω: [Energy];   ω/c: [Length⁻¹];   ħ: [Energy][Time]
If the electric charge enters without external fields, write in terms of α. Use the combinations
eE and µB B (both with dimensions of energy) when external fields enter. (See section A.12 for
more on units in EM.)
In calculations use eV or MeV as much as possible (e.g. instead of using m_e in kg, use m_e c² =
0.511 MeV). Remember ħc = 1973 eV Å = 197.3 MeV fm; also useful is ħ = 6.582 × 10⁻²² MeV s.
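For example (my own worked numbers, not part of the notes), the hydrogen scales of section A.3 come out directly in these units:

```python
# E_Ry = alpha^2 m c^2 / 2 and a0 = hbar c / (m c^2 alpha), keeping everything in eV
alpha = 1.0 / 137.036   # fine-structure constant
mec2 = 0.511e6          # electron rest energy, eV
hbarc = 1973.0          # hbar * c in eV * Angstrom

E_Ry = 0.5 * alpha**2 * mec2   # ~13.6 eV
a0 = hbarc / (mec2 * alpha)    # ~0.53 Angstrom
print(E_Ry, a0)
```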
Often we need to cast the Schrödinger equation in dimensionless units to recognise the
solution in terms of special functions. Suppose we have a potential of the form V (x) = βxn
for some integer n. Then the dimensions of β are [Energy][Length−n ]. The other scales in the
problem are ~2 /2m which has dimensions [Energy][Length2 ] and the particle energy. We proceed
by forming a length scale x0 = (~2 /(2mβ))1/(n+2) and an energy scale E = (~2 β 2/n /2m)n/(n+2) .
Writing x = x0 z and E = Eµ, the Schrödinger equation for y(z) ≡ ψ(x) reads

f 00 − z n f − µf = 0

or its 3-d equivalent. For the harmonic oscillator, n = 2 and β = 12 mω 2 , so x0 = (~/mω)1/2


and E = 21 ~ω as expected. For the hydrogen atom, n = −1 and β = ~cα, so r0 = 12 a0 and
E = 2mc2 α2 = 4ERy , which illustrates the fact that we might have to play with numerical
factors in the length scale to obtain the standard form.

A.12 Units in EM
There are several systems of units in electromagnetism. We are familiar with SI units, but
Gaussian units are still very common and are used, for instance, in Shankar.
In SI units the force between two currents is used to define the unit of current, and hence the
unit of charge. (Currents are much easier to calibrate and manipulate in the lab than charges.)
The constant µ0 is defined as 4π × 10−7 N A−2 , with the magnitude chosen so that the Ampère
is a “sensible” sort of size. Then Coulomb’s law reads
F = q_1 q_2 / (4πε_0 |r_1 − r_2|²)

and ε_0 has to be obtained from experiment. (Or, these days, as the speed of light now has a
defined value, ε_0 is obtained from 1/(µ_0 c²).)
However one could in principle equally decide to use Coulomb’s law to define charge. This
is what is done in Gaussian units, where by definition
F = q_1 q_2 / |r_1 − r_2|²

Then there is no separate unit of charge; charges are measured in N^{1/2} m (or the non-SI
equivalent): e = 4.803 × 10⁻¹⁰ g^{1/2} cm^{3/2} s⁻¹. (You should never need that!) In these units,
µ_0 = 4π/c². Electric and magnetic fields are also measured in different units.
The following translation table can be used:

Gauss:  e            E             B
SI:     e/√(4πε_0)    √(4πε_0) E    √(4π/µ_0) B

Note that eE is the same in both systems of units, but eB in SI units is replaced by eB/c in
Gaussian units. Thus the Bohr magneton µ_B is eħ/2m in SI units, but eħ/2mc in Gaussian
units, and µ_B B has dimensions of energy in both systems.
The fine-structure constant α is a dimensionless combination of fundamental constants, and as
such takes on the same value (≈ 1/137) in all systems. In SI it is defined as α = e²/(4πε_0ħc), in
Gaussian units as α = e²/(ħc). In all systems, therefore, Coulomb's law between two particles
of charge z_1e and z_2e can be written

F = z_1 z_2 ħcα / |r_1 − r_2|²

and this is the form I prefer.


Problem Sheets

• Examples 1

• Solutions 1

• Examples 2

• Solutions 2

• Examples 3

• Solutions 3

• Examples 4

• Solutions 4

• Examples 5

• Solutions 5

This formula sheet will be available in the exam. There are other formulae which you are
expected to know, of course!
The versions of problem and solutions sheets above have all known errors corrected. A list
of errata in the distributed sheets is here.
A list of suitable past exam questions for revision purposes is here.

