Feynman Path Integral Methods
Matthias Blau
Albert Einstein Center for Fundamental Physics
Institut für Theoretische Physik
Universität Bern
CH-3012 Bern, Switzerland
http://www.blau.itp.unibe.ch/Lecturenotes.html
Contents

0 Preface
1.5 Exercises
2.2 The Trotter Product Formula and the Dirac Short-Time Kernel
2.8 Comments
2.9 Exercises
3.5 Gaussian Path Integrals and Determinants: the VVPM and GY Formulae
3.7 Exercises
4 Generating Functionals and Perturbative Expansions
4.2 Green's Functions and the Generating Functional for Quadratic Theories
4.7 Exercises
0 Preface
These are notes for part of a course on advanced quantum mechanics given to 4th
year physics students. The only prerequisites, however, are a basic knowledge of the
Schrödinger and Heisenberg pictures of standard quantum mechanics (as well as the will-
ingness to occasionally and momentarily suspend disbelief). Thus the material could
easily be presented at an earlier stage. I covered the material in five 3-“hour” lec-
tures (1 “hour” = 45 minutes) and this time constraint (there are other topics that I
wanted to cover as well in the course) dictated the level of detail (or lack thereof) of
the presentation.
One of the aims of these lectures was to set the stage for a future course on quantum field
theory. To a certain extent this motivated the choice of topics covered in these notes
(e.g. generating functionals are discussed, while concrete applications of path integrals
to non-trivial quantum mechanics problems are not).
These notes do not include an introductory section on motivations, history, etc. - such
things are best done orally anyway. My own point of view is that the path integral
approach to quantum theories is simultaneously more intuitive, more fundamental, and
more flexible than the standard operator - state description, but I do not intend to get
into an argument about this. Objectively, the strongest points in favour of the path
integral appoach are that
• unlike the usual Hamiltonian approach the path integral approach provides a man-
ifestly Lorentz covariant quantisation of classical Lorentz invariant field theories;
The motivation for writing these notes was that I found the typical treatment of quantum
mechanics path integrals in a quantum field theory text to be too brief to be digestible
(there are some exceptions), while monographs on path integrals are usually far too
detailed to allow one to get anywhere in a reasonable amount of time.
I have not provided any references to either the original or secondary literature since most of the material covered in these notes is completely standard and can be found in many places (the exception perhaps being the Gelfand-Yaglom formula for fluctuation determinants, for which some references to the secondary literature are given in section 3.5).
http://www.blau.itp.unibe.ch/Lecturenotes.html
If you find any mistakes, or if you have any other comments on these notes, complaints,
(constructive) criticism, or also if you just happen to find them useful, please let me
know.
1 The Evolution Kernel
The dynamical information about quantum mechanics is contained in the matrix ele-
ments of the time-evolution operator U (tf , ti ). For a time-independent Hamiltonian Ĥ
one has
U(t_f, t_i) = e^{-(i/\hbar)(t_f - t_i)\hat H}   (1.1)
whereas the general expression for a time-dependent Hamiltonian involves the time-
ordered exponential
U(t_f, t_i) = T\exp\left(-(i/\hbar)\int_{t_i}^{t_f} dt'\, \hat H(t')\right) .   (1.2)
This can also be interpreted as the transition amplitude
etc. Note that this is not the Schrödinger time-evolution of the state |xi > - this would
have the opposite sign in the exponent (this is related to the fact that U (tf , ti ) satisfies
the Schrödinger equation with respect to tf , not ti ). Rather, this state is characterised
by the fact that it is the eigenstate of the Heisenberg picture operator x̂H (t) at t = ti ,
\hat x_H(t)|x, t\rangle = e^{(i/\hbar)t\hat H}\,\hat x\, e^{-(i/\hbar)t\hat H}\, e^{(i/\hbar)t\hat H}|x\rangle = x\,|x, t\rangle .   (1.12)
Another way of saying this, or introducing the states |x, t >, is the following: In general the operators x̂_H(t) and x̂_H(t') do not commute for t ≠ t'. Hence they cannot be simultaneously diagonalised. For any given t, however, one can choose a basis in which x̂_H(t) is diagonal. This is the basis {|x, t >}.
Matrix elements of the evolution operator between other states, say the transition am-
plitude between an initial state |ψi > and a final state |ψf >, are determined by the
kernel through
\langle\psi_f|U(t_f, t_i)|\psi_i\rangle = \int dx_f\int dx_i\,\langle\psi_f|x_f\rangle\langle x_f|U(t_f, t_i)|x_i\rangle\langle x_i|\psi_i\rangle
= \int dx_f\int dx_i\,\psi_f^*(x_f)\psi_i(x_i)\, K(x_f, x_i; t_f - t_i) .   (1.13)
2. The kernel satisfies the Schrödinger equation with respect to (tf , xf ), i.e.
is a solution of the Schrödinger equation, with
\Psi_\phi(x, t) = \int dx_i\,\langle x, t|x_i, t_i\rangle\langle x_i, t_i|\phi\rangle = \int dx_i\, K(x, x_i; t - t_i)\,\Psi_\phi(x_i, t_i) .   (1.17)
3. Since we can restrict our attention to evolution forwards in time, one frequently
also considers the causal propagator or retarded propagator
K_r(x_f, x_i; t_f - t_i) = \Theta(t_f - t_i)\, K(x_f, x_i; t_f - t_i)   (1.18)
where \Theta is the Heaviside step function, \Theta(x) = 1 for x > 0 and \Theta(x) = 0 for x < 0. It follows from the above two properties of the kernel, and \Theta'(x) = \delta(x), that the retarded propagator satisfies
[i\hbar\partial_{t_f} - \hat H(x_f, p_f = (\hbar/i)\partial_{x_f}, t_f)]\, K_r(x_f, x_i; t_f - t_i) = i\hbar\,\delta(t_f - t_i)\,\delta(x_f - x_i) .   (1.19)
Thus the retarded propagator is a Green’s function for the Schrödinger equation.
1.4 Example: The Free Particle
Using the Fresnel integral formulae from Appendix A, one thus finds
K_0(x_f, x_i; t_f - t_i) = \sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}}\; e^{\frac{i}{\hbar}\frac{m}{2}\frac{(x_f - x_i)^2}{t_f - t_i}}   (1.27)
Note that the exponent has the interpretation as the “classical action”, i.e. as the action
S0 of the free particle evaluated on the classical path xc (t) satisfying the free equations
of motion and the boundary conditions xc (tf,i ) = xf,i ,
S_0[x_c] = \frac{m}{2}\frac{(x_f - x_i)^2}{t_f - t_i} .   (1.28)
This at this stage rather mysterious fact has a very natural explanation from the path
integral point of view.
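As a quick sanity check of (1.27) (this anticipates the exercises below and can be skipped), one can verify symbolically that the kernel solves the free Schrödinger equation. A minimal sketch in Python, assuming only the sympy library:

# Symbolic check that the free-particle kernel (1.27) satisfies
# i*hbar dK/dt = -(hbar^2/2m) d^2K/dx^2.
import sympy as sp

x, xi, t, m, hbar = sp.symbols('x x_i t m hbar', positive=True)
K0 = sp.sqrt(m/(2*sp.pi*sp.I*hbar*t)) * sp.exp(sp.I*m*(x - xi)**2/(2*hbar*t))

lhs = sp.I*hbar*sp.diff(K0, t)
rhs = -hbar**2/(2*m)*sp.diff(K0, x, 2)
print(sp.simplify(lhs - rhs))   # prints 0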
1.5 Exercises
1. Verify the results (1.27) and (1.28) for the evolution kernel of the free particle.
2. Verify that (1.27) is normalised as in (1.14), and satisfies the convolution property
(1.20) and the free particle Schrödinger equation (1.15).
for Gaussian integrals (A_{ab} is a real symmetric positive matrix), and calculate the integral
\int_{-\infty}^{+\infty} d^dx\; e^{-A_{ab}x^a x^b/2 + J_a x^a}   (1.30)
2 Towards the Path Integral Representation of the Kernel
In general, it is a difficult (if not impossible) task to find a closed form expression for the kernel. However, the convolution property allows us to reduce the determination of the finite-time kernel to that of the short-time (or even infinitesimal time) kernel K(x, y; \epsilon) and, as we will see later on, this allows us to make some progress.
First of all we note that we can write (once again in the time-independent case, but the
argument works in general)
\langle x_f|e^{-(i/\hbar)(t_f - t_i)\hat H}|x_i\rangle = \langle x_f|\left(e^{-(i/\hbar)\frac{t_f - t_i}{N}\hat H}\right)^N|x_i\rangle .   (2.1)
We think of this as dividing the time-interval [ti , tf ] into N equal time-intervals [tk , tk+1 ]
of length \epsilon,
\epsilon = t_{k+1} - t_k = (t_f - t_i)/N .   (2.2)
Here k = 0, . . . , N − 1 and we identify tf = tN and ti = t0 . We can now insert N − 1
resolutions of unity at times tk , k = 1, . . . , N − 1 into the above expression for the kernel
to find
K(x_f, x_i; t_f - t_i) = \left[\prod_{k=1}^{N-1}\int_{-\infty}^{\infty} dx_k\right]\left[\prod_{k=0}^{N-1} K(x_{k+1}, x_k; \epsilon = t_{k+1} - t_k)\right] ,   (2.3)
where xf = xN and xi = x0 .
Now this expression holds for any N . But for finite N , the kernel is still difficult to
calculate. As we will see below, things simplify in the limit \epsilon \to 0, equivalently N \to \infty.
In this limit, the kernel
K(x_f, x_i; t_f - t_i) = \lim_{N\to\infty}\left[\prod_{k=1}^{N-1}\int_{-\infty}^{\infty} dx_k\right]\left[\prod_{k=0}^{N-1} K(x_{k+1}, x_k; \epsilon = (t_f - t_i)/N)\right]   (2.4)
2.2 The Trotter Product Formula and the Dirac Short-Time Kernel
Formally, it is not difficult to see that in the limit N \to \infty we only need to know this short-time kernel K(x_{k+1}, x_k; \epsilon) to order \epsilon. If everything in sight were commuting (instead of operators), the argument for this would be the following: one writes
e^x = \left(e^{x/N}\right)^N = (1 + x/N + O(1/N^2))^N   (2.5)
and compares with the formula
to conclude that in the limit N → ∞ the subleading O(N −2 ) terms in (2.5) can indeed
be dropped.1 Of course, to establish an analogous result for (unbounded) operators
requires some functional analysis.
Provided that we can justify dropping terms of O(\epsilon^2), things simplify quite a bit. Indeed, even though \hat T and \hat V are non-commuting operators, \epsilon\hat T and \epsilon\hat V commute up to order \epsilon^2, because their commutator is \epsilon^2[\hat T, \hat V]. Thus, using the Baker-Campbell-Hausdorff formula one has
e^{-(i/\hbar)\epsilon(\hat T + \hat V)} = e^{-(i/\hbar)\epsilon\hat T}\, e^{-(i/\hbar)\epsilon\hat V}\, e^{O(\epsilon^2)} .
To justify dropping these O(\epsilon^2) commutator terms, however, one needs some control over the operator [\hat T, \hat V], which should not become too singular.
Concretely, what one needs is the validity of the Trotter product formula
e^{\hat A + \hat B} = \left(e^{(\hat A + \hat B)/N}\right)^N \overset{?}{=} \lim_{N\to\infty}\left(e^{\hat A/N}\, e^{\hat B/N}\right)^N   (2.10)
for  = T̂ and B̂ = V̂ .
This identity is not difficult to prove for bounded operators. The case of interest,
unbounded operators, is trickier. The identity holds, for instance, on the common
domain of A and B, provided that both are self-adjoint operators that are bounded
from below. Once again, we will gloss over these functional analysis complications and
proceed with the assumption that it is legitimate to drop the commutator terms (while
keeping in mind that this assumption is not valid e.g. for the Coulomb potential!).
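A rough finite-dimensional illustration of (2.10) (no substitute for the functional-analytic statement, but perhaps reassuring): for two non-commuting symmetric matrices the Trotter product converges to the exponential of the sum. A small Python sketch with ad hoc random 4x4 matrices:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); A = (M + M.T)/2   # real symmetric, non-commuting
M = rng.standard_normal((4, 4)); B = (M + M.T)/2

exact = expm(A + B)
for N in (1, 10, 100, 1000):
    trotter = np.linalg.matrix_power(expm(A/N) @ expm(B/N), N)
    print(N, np.linalg.norm(trotter - exact))      # error decreases roughly like 1/N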
1 The identity (2.6) can be proved directly, e.g. via a binomial expansion or by showing that \frac{d}{dx}\lim_{N\to\infty}(1 + x/N)^N = \lim_{N\to\infty}(1 + x/N)^N.
With the above assumptions, to order \epsilon we can write the short-time kernel as
K(x_{k+1}, x_k; \epsilon) = \frac{1}{2\pi\hbar}\int_{-\infty}^{+\infty} dp_k\; e^{(i/\hbar)[p_k(x_{k+1} - x_k) - \epsilon H(p_k, x_k)]} .   (2.13)
This is a lovely result, first obtained by Dirac in 1933. In the exponential, where we once
had operators, we now encounter the Legendre transform of the classical Hamiltonian,
i.e. the Lagrangian! Indeed, if we identify
\frac{x_{k+1} - x_k}{\epsilon} = \frac{x_{k+1} - x_k}{t_{k+1} - t_k} \to \frac{dx_k}{dt} ,   (2.14)
as a discretised time-derivative, the exponent takes the classical form pk ẋk − H(pk , xk ).
We can implement the Legendre transformation explicitly by performing the Gaussian
(Fresnel) integral over pk (see Appendix A) to find
K(x_{k+1}, x_k; \epsilon) = \frac{1}{2\pi\hbar}\int_{-\infty}^{+\infty} dp_k\; e^{(i/\hbar)[p_k(x_{k+1} - x_k) - \epsilon(\frac{p_k^2}{2m} + V(x_k))]}
= \sqrt{\frac{m}{2\pi i\hbar\epsilon}}\; e^{(i/\hbar)\epsilon L(x_k, \dot x_k)}   (2.15)
where
L(x_k, \dot x_k) = \frac{m(x_{k+1} - x_k)^2}{2\epsilon^2} - V(x_k) \to \frac{m\dot x_k^2}{2} - V(x_k) .   (2.16)
Having obtained the above explicit formula for the short-time kernel in terms of the Lagrangian, we can now go back to (2.4) to obtain an expression for the finite-time kernel. We can use either the phase space expression (2.13) or the configuration space expression (2.15). This iteration of Dirac's result is due to Feynman (1942).
In this way we arrive at
K(x_f, x_i; t_f - t_i) = \lim_{N\to\infty}\left[\prod_{k=1}^{N-1}\int_{-\infty}^{\infty} dx_k\right]\left[\prod_{k=0}^{N-1}\int_{-\infty}^{\infty}\frac{dp_k}{2\pi\hbar}\right] e^{(i/\hbar)\sum_{k=0}^{N-1}\epsilon\,[p_k\frac{x_{k+1} - x_k}{\epsilon} - H(p_k, x_k)]}
= \lim_{N\to\infty}\left(\frac{m}{2\pi i\hbar\epsilon}\right)^{N/2}\left[\prod_{k=1}^{N-1}\int_{-\infty}^{\infty} dx_k\right] e^{(i/\hbar)\sum_{k=0}^{N-1}\epsilon\, L(x_k, \dot x_k)}   (2.17)
Note that in the phase space representation of the kernel there is always one more
momentum than position integral. This is a consequence of the fact that each short-
time propagator contains one momentum integral whereas the position integrals are
inserted between the short-time propagators and the two end-points xi and xf are not
integrated over.
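To get a feel for the continuum limit in (2.17), one can evaluate the discretised expression numerically. Doing so directly with the oscillatory Fresnel weights is delicate, but after the continuation to imaginary time introduced below (cf. (2.32)) the short-time kernels become damped Gaussians and the iterated integrals are perfectly well behaved. A rough Python sketch for the free particle (units m = \hbar = 1; the grid and the number of time slices are ad hoc choices):

import numpy as np

tau, N = 1.0, 64                    # total Euclidean time and number of slices
eps = tau / N
x = np.linspace(-8, 8, 401)
dx = x[1] - x[0]

# Euclidean short-time kernel k(x, y) = sqrt(1/(2*pi*eps)) * exp(-(x-y)^2/(2*eps))
k = np.sqrt(1/(2*np.pi*eps)) * np.exp(-(x[:, None] - x[None, :])**2/(2*eps))

# N kernels composed via N-1 intermediate integrals (Riemann sums on the grid)
K = np.linalg.matrix_power(k*dx, N) / dx
exact = np.sqrt(1/(2*np.pi*tau)) * np.exp(-(x[:, None] - x[None, :])**2/(2*tau))

c = len(x)//2
print(K[c, c], exact[c, c])         # agree away from the edges of the grid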
and
x_k = x(t_i + k\epsilon) .   (2.19)
Likewise, we think of the p_k as defining a curve p(t) in momentum space such that
With this interpretation the exponents in the integrand of the kernel can be written as
\lim_{N\to\infty}\sum_{k=0}^{N-1}\epsilon\left[p_k\frac{x_{k+1} - x_k}{\epsilon} - H(p_k, x_k)\right] = \int_{t_i}^{t_f} dt\,[p(t)\dot x(t) - H(p(t), x(t))]
\lim_{N\to\infty}\sum_{k=0}^{N-1}\epsilon\, L(x_k, \dot x_k) = \int_{t_i}^{t_f} dt\, L(x(t), \dot x(t)) = S[x(t); t_f, t_i]   (2.21)
In the same spirit, we formally now write the integrals as integrals over paths, intro-
ducing the notation
\lim_{N\to\infty}\left[\prod_{k=1}^{N-1}\int_{-\infty}^{\infty} dx_k\right] = \int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]
\lim_{N\to\infty}\left[\prod_{k=0}^{N-1}\int_{-\infty}^{\infty}\frac{dp_k}{2\pi\hbar}\right] = \int D[p(t)/2\pi\hbar] .   (2.22)
With this notation, we can now write the kernel as a path integral,
K(x_f, x_i; t_f - t_i) = \int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\int D[p(t)/2\pi\hbar]\; e^{(i/\hbar)\int_{t_i}^{t_f} dt\,[p(t)\dot x(t) - H(p(t), x(t))]}
= N\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; e^{(i/\hbar)S[x(t)]} .   (2.23)
This is an integral over all paths x(t) with the specified boundary conditions and, in
the first version, all paths p(t) with t ∈ [ti , tf ].
Some remarks:
1. The first line, a phase space path integral, is valid for general Hamiltonians
H(p, x) = T (p) + V (x) (possibly time-dependent). The measure appears to be
an infinite-dimensional analogue of the canonical Liouville measure. The latter
is of course invariant under canonical transformations. This should however not
lead one to believe that the path integral representation of the kernel also enjoys
this invariance. Indeed, this cannot possibly be true since it is well known that
under a canonical transformation any Hamiltonian can be mapped to zero (the
Hamilton-Jacobi transformation) and hence into any other Hamiltonian, while the
kernel depends non-trivially on the Hamiltonian.
2. To pass to the second line, a configuration space path integral, we used the explicit
quadratic form T = p2 /2m to perform the Gaussian integral over p. The derivation
of the path integral for a Hamiltonian with a velocity dependent potential, such
as the magnetic interaction (p − A)2 /2m, is more subtle (the discretised version
requires a “mid-point rule” for the Hamiltonian) and will not be discussed here.
3. As mentioned above, as limits of piecewise linear continuous paths these paths are
continuous but not differentiable. Indeed differentiability would require existence
of a finite limit of (x_{k+1} - x_k)/\epsilon as \epsilon \to 0. But x_{k+1} and x_k are independent variables, and hence there is no reason for the difference x_{k+1} - x_k to go to zero as \epsilon \to 0. Hence the paths entering the above sum/integral are typically nowhere
differentiable. Evidently, then, things like ẋ(t) require some (perhaps stochastic
or probabilistic) interpretation, but we will not open this Pandora’s box.
(we have replaced \epsilon by (t_f - t_i)/N) whose sole purpose in life is to make the
combined expression N times the integral well defined, finite and equal to the left
hand side (provided that all our functional analysis assumptions are satisfied).
We could have absorbed much of the prefactor into the definition of the measure by
writing (2.17) as
K(x_f, x_i; t_f - t_i) = \left(\frac{m}{2\pi i\hbar\epsilon}\right)^{1/2}\lim_{N\to\infty}\left[\prod_{k=1}^{N-1}\int_{-\infty}^{\infty}\left(\frac{m}{2\pi i\hbar\epsilon}\right)^{1/2} dx_k\right] e^{(i/\hbar)\sum_{k=0}^{N-1}\epsilon\, L(x_k, \dot x_k)}   (2.25)
and defining
\int_{x(t_i)=x_i}^{x(t_f)=x_f}\tilde D[x(t)] = \lim_{N\to\infty}\prod_{k=1}^{N-1}\int_{-\infty}^{\infty}\left(\frac{m}{2\pi i\hbar\epsilon}\right)^{1/2} dx_k .   (2.26)
Another useful normalisation of the measure is, as we will see in section 3, such that
N\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\, e^{(i/\hbar)S[x(t)]} = \sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}}\int_{x(t_i)=x_i}^{x(t_f)=x_f}\hat D[x(t)]\, e^{(i/\hbar)S[x(t)]} .   (2.28)
We will obtain a more informative expression for N , and hence the relation between D[x]
and D̂[x], involving the determinant of a differential operator, in the next section.
Z(\beta) = \mathrm{Tr}\, e^{-\beta\hat H}   (2.29)
one defines the quantum mechanical partition function, or the partition function for short, as the trace of the time evolution operator,
Z(t_f - t_i) = \mathrm{Tr}\, U(t_f, t_i) ,
and thus the quantum mechanical and thermal partition functions are formally related by continuation of the time interval (t_f - t_i) to the imaginary value
t_f - t_i = -i\hbar\beta .   (2.32)
Evaluating the trace in a basis of energy eigenstates |n >, one finds
Z(t_f - t_i) = \sum_n e^{-(i/\hbar)(t_f - t_i)E_n} .   (2.33)
On the other hand, evaluating the trace in a basis of position eigenstates |x >, one obtains
Z(t_f, t_i) = \int_{-\infty}^{+\infty} dx\,\langle x|U(t_f, t_i)|x\rangle = \int_{-\infty}^{+\infty} dx\, K(x, x; t_f - t_i) .   (2.34)
This means that in the discretised expression (2.17) for the kernel there are now an
equal number of momentum and position integrals. In the continuum version (2.23),
setting xf = xi = x means that one is integrating not over all paths but over all closed
paths (loops) at x, and the integration over x means that one is integrating over all
closed loops,
Z(t_f, t_i) = N\int_{-\infty}^{+\infty} dx\int_{x(t_i)=x}^{x(t_f)=x} D[x(t)]\, e^{(i/\hbar)S[x(t)]} = N\int_{x(t_f)=x(t_i)} D[x(t)]\, e^{(i/\hbar)S[x(t)]} .   (2.35)
Since we now have an equal number of x- and p-integrals, in terms of the measure D̃[x]
(2.26) the partition function reads
Z(t_f, t_i) = \int_{x(t_f)=x(t_i)}\tilde D[x(t)]\, e^{(i/\hbar)S[x(t)]} .   (2.36)
Comparison with (2.33) shows that, if we are able to calculate this path integral over
all closed loops we should (in the time-independent case only, of course) be able to read
off the energy spectrum.
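As a purely numerical illustration of this remark, one can discretise the imaginary-time version of the closed-loop path integral for the harmonic oscillator (using the Euclidean counterpart of the short-time kernel (2.15)) and compare the resulting trace with the expected thermal sum over the spectrum. A rough Python sketch with m = \hbar = \omega = 1 and ad hoc grid parameters:

import numpy as np

beta, N = 4.0, 256                 # Euclidean time: t_f - t_i = -i*hbar*beta -> beta
eps = beta / N
x = np.linspace(-6, 6, 301)
dx = x[1] - x[0]

V = 0.5*x**2                       # harmonic oscillator potential
# Euclidean short-time (Dirac) kernel, accurate to order eps
k = np.sqrt(1/(2*np.pi*eps)) * np.exp(-(x[:, None] - x[None, :])**2/(2*eps) - eps*V[None, :])

Z = np.trace(np.linalg.matrix_power(k*dx, N))   # integral over all closed loops
print(Z, 1/(2*np.sinh(beta/2)))                 # agree up to discretisation error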
The relation between statistical mechanics and quantum mechanics at imaginary time
is rather deep. In particular, for quantum field theory this implies a relation between
finite temperature quantum field theory in Minkowski space and quantum field theory in
Euclidean space with one compact (Euclidean “time”) direction. This has far-reaching
consequences (none of which will be explored here).
Since inside the path integral one is dealing with classical functions and functionals
rather than with operators, fairly simple “classical” manipulations of the path integral
can lead to non-trivial quantum mechanical identities.
17
As an example, consider the “trivial” statement that the path integral is invariant under
an overall shift
x(t) \to x(t) + y(t) , \qquad y(t_i) = y(t_f) = 0   (2.37)
of the integration variable,
\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\, e^{(i/\hbar)S[x(t)]} = \int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\, e^{(i/\hbar)S[x(t) + y(t)]}   (2.38)
Since the variation of the action with δx(t) vanishing at the endpoints gives the Euler-
Lagrange equations,
\delta S[x(t)] = \int_{t_i}^{t_f} dt\left(\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial\dot x}\right)\delta x(t) ,   (2.42)
and since (2.40) holds for any such δx(t), we deduce that the classical equations of motion are valid inside the path integral (!),
\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\left(\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial\dot x}\right) e^{(i/\hbar)S[x(t)]} = 0 .   (2.43)
This is the path integral version of Ehrenfest’s theorem.
Since this was so easy, let us generalise this a bit and consider variations of the path
which fix x(ti ) but change x(tf ),
In this case, the variation of the action is
\delta S[x(t)] = \int_{t_i}^{t_f} dt\left(\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial\dot x}\right)\delta x(t) + \frac{\partial L}{\partial\dot x}\delta x\Big|_{t_i}^{t_f}
= \int_{t_i}^{t_f} dt\left(\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial\dot x}\right)\delta x(t) + p(t_f)\delta x(t_f) ,   (2.46)
where p(tf ) ≡ pf is the canonical momentum at t = tf . This leads to the standard
Hamilton-Jacobi relation
p_f = \frac{\partial S[x_c]}{\partial x_f} .   (2.47)
We deduce that
\frac{\partial}{\partial x_f}\langle x_f, t_f|x_i, t_i\rangle = \frac{i}{\hbar}\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; p(t_f)\; e^{(i/\hbar)S[x(t)]} ,   (2.48)
which is nothing other than the familiar statement that in the position representation
one has
\hat p_f = \frac{\hbar}{i}\frac{\partial}{\partial x_f} .   (2.49)
1. The p2 -term in the Hamiltonian requires one to differentiate (2.48) once more. If
one assumes that ∂pf /∂xf = 0 (as one might naively believe based on Lagrangian
or Hamiltonian mechanics), the result follows. But in the Hamilton-Jacobi frame-
work (2.47) shows that this relation does not hold!
2 I am very grateful to T. Padmanabhan for alerting me to these issues and for discussions.
2. The above argument also ignores the correct normalisation factor of the path
integral. Since this normalisation factor can/will depend on tf , for the purposes
of the derivation of the time-dependent Schrödinger equation one cannot ignore
this prefactor.
Since we know that, by construction, the kernel calculated from the path integral satisfies
the Schrödinger equation, these two “mistakes” should cancel each other.3
While the above attempt to derive the Schrödinger equation from the path integral in a
slick way was not totally successful, there are of course (boring) standard derivations of
this fact, which can be found in most textbook accounts. They all essentially amount to
reverting the procedure that we have used to derive the path integral from the short-time
kernel.
This then essentially completes the (formal) proof of the equivalence of the path integral
description of quantum mechanics and the standard Schrödinger representation.
One piece of the dictionary that is still missing is how to translate matrix elements other
than the basic transition amplitude < xf , tf |xi , ti > into the path integral language.
This will be the subject of the next subsection.
As we saw in section 1 and above, a crucial property of the kernel is the convolution
property (1.20). How is this encoded in the path integral representation (2.23)
K(x_f, x_i; t_f - t_i) = N\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\, e^{(i/\hbar)S[x(t); t_f, t_i]}   (2.53)
of the kernel? We need to consider separately the integrand and the measure. As far as the integrand is concerned, the action S[x(t); t_f, t_i] obviously satisfies
In words: we can perform the integral over all paths from xi to xf , by considering
all paths from xi to x0 and x0 to xf for some fixed x0 = x(t0 ) and then integrating
over x0 . Taken together, the statements about the action and the measure imply4 the
convolution property (1.20).
So far we have only discussed the transition amplitude < xf , tf |xi , ti >, but it is also
possible to represent matrix elements of operators as path integrals. The most natu-
ral operators to consider in the present context are products of Heisenberg operators
x̂H (t1 )x̂H (t2 ) . . ..
We begin with a single operator x̂H (t0 ) with tf > t0 > ti and, to simplify the notation,
we will from now on drop the subscript H on Heisenberg operators. By elementary
manipulations we find
\langle x_f, t_f|\hat x(t_0)|x_i, t_i\rangle = \int_{-\infty}^{+\infty} dx(t_0)\,\langle x_f, t_f|\hat x(t_0)|x(t_0), t_0\rangle\langle x(t_0), t_0|x_i, t_i\rangle
= \int_{-\infty}^{+\infty} dx(t_0)\,\langle x_f, t_f|x(t_0), t_0\rangle\langle x(t_0), t_0|x_i, t_i\rangle\, x(t_0) .
Turning this into a statement about path integrals, using the convolution property, we
therefore conclude that
\langle x_f, t_f|\hat x(t_0)|x_i, t_i\rangle = \int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; x(t_0)\; e^{(i/\hbar)S[x(t); t_f, t_i]} .   (2.57)
Thus the insertion of an operator in the usual prescription corresponds to the insertion
of a classical function in the path integral formulation. While this is very charming, and
in line with the replacement of the Hamiltonian operator by the Lagrange function and
its action, it also immediately raises a puzzle. Namely, since x(t1 ) and x(t2 ) commute,
does the insertion of x(t1 )x(t2 ) into the path integral calculate the matrix elements of
x̂(t1 )x̂(t2 ) or x̂(t2 )x̂(t1 ) or . . . ? To answer this question, we reverse the above calcu-
lation, but this time with the insertion of x(t1 )x(t2 ). In order to use the convolution
property of the kernel or path integral, we need to distinguish the two cases t2 > t1 and
t2 < t1 . Then one finds
\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; x(t_1)x(t_2)\; e^{(i/\hbar)S[x(t); t_f, t_i]} = \int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; x(t_2)x(t_1)\; e^{(i/\hbar)S[x(t); t_f, t_i]}
= \begin{cases} \langle x_f, t_f|\hat x(t_2)\hat x(t_1)|x_i, t_i\rangle & t_2 > t_1 \\ \langle x_f, t_f|\hat x(t_1)\hat x(t_2)|x_i, t_i\rangle & t_2 < t_1 \end{cases}   (2.58)
4 At least as long as one pretends that N = 1 or that N has somehow been incorporated into the definition of a suitably regularised path integral. This is the recommended attitude at the present level of rigour (better: non-rigour), and one that we will adopt henceforth.
Using the time-ordering operator we can summarise these results as
\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; x(t_1)x(t_2)\; e^{(i/\hbar)S[x(t); t_f, t_i]} = \langle x_f, t_f|T(\hat x(t_1)\hat x(t_2))|x_i, t_i\rangle .   (2.59)
This immediately generalises to
\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; x(t_1)\cdots x(t_n)\; e^{(i/\hbar)S[x(t); t_f, t_i]} = \langle x_f, t_f|T(\hat x(t_1)\cdots\hat x(t_n))|x_i, t_i\rangle .   (2.60)
Thus the path integral always evaluates matrix elements of time-ordered products of
operators. This explains why the ordering inside the path integral is irrelevant and how
the ordering ambiguity is resolved on the operator side.
As a final variation of this theme, we consider the path integral representation of tran-
sition amplitudes between states other than the position eigenstates |x, t >. To obtain
these, we simply use the formula (1.13),
\langle\psi_f|U(t_f, t_i)|\psi_i\rangle = \int dx_f\int dx_i\,\psi_f^*(x_f)\psi_i(x_i)\, K(x_f, x_i; t_f - t_i) ,   (2.61)
2.8 Comments
We have thus passed from a formulation of quantum mechanics based on the Hamilto-
nian (and operators and Hilbert spaces) to a Lagrangian description in which there are
only commuting objects, no operators. In this framework, the quantum nature, which
in the usual Hamiltonian approach is reflected in the non-commutativity of operators,
arises because one is instructed to consider not only classical paths, i.e. extrema of the
action (solutions to the classical equations of motion) but all possible paths, weighted
by the exponential of (i/~) times the action.
It is important to note that at this point the path integral notation that we have
introduced is largely symbolic. It is a shorthand notation for the N → ∞ limit (2.17).
Provided that all our assumptions are satisfied, this is just another (albeit complicated
looking) way of writing the propagator.
However, the significance of introducing this symbolic notation should not be under-
estimated. Indeed, if one always had to calculate path integrals as the limit of an
infinite number of integrals, then path integrals might be conceptually interesting but
that approach would hardly be an efficient calculational tool. One might like to draw an
analogy here with Riemann integrals, defined as the limits of an infinite sum. In practice,
of course, one does not calculate integrals that way. Rather, one can use that definition
to establish certain basic properties of the resulting infinite sum, symbolically denoted by the continuum sum (integral) \int, and then one determines definite and indefinite integrals directly, without resorting to the discretised description. Of course, in this
process one may be glossing over several mathematical subtleties (which will ultimately
lead to the development of measure theory, the Lebesgue integral etc.), but this does
not mean that one cannot reliably calculate simple integrals without knowing about
these things.
The attitude regarding path integrals we will adopt in the following will be similar in
spirit. We will deduce some properties of path integrals from their “discretised” version
and then try to pass as quickly as possible to continuum integrals which will allow us to
perform path integrals via one “functional integration” instead of an infinite number of
ordinary integrations. Once again, this will be sweeping many important mathematical
subtleties under the rug (not the least of which is “does something like what you have
called D[x] exist at all?”), but that does not mean that we cannot trust the results that
we have obtained.5
2.9 Exercises
to order \epsilon.
2. Generalise the derivation of the path integral to systems with d > 1 degrees of
5 By the way, the answer to that question is "no", but that is irrelevant - it is simply not the right question to ask. A more pertinent question might be "can one make sense of \int D[x]\exp((i/\hbar)S[x]) as something like a measure (or linear functional)?".
freedom. Assume that the Hamiltonian has the standard form
H = \frac{\vec p^{\,2}}{2m} + V(\vec x) ,   (2.63)
where \vec x = (x_1, \ldots, x_d) etc.
3. Spell out the proof of (2.52) (the path integral satisfies the Schrödinger equation)
in detail.
3 Gaussian Path Integrals and Determinants
We thus now need to make sense of, and develop rules for evaluating, such path integrals.
For a general system described by an action S[x] an exact evaluation of the path integral
is certainly too much to hope for. Indeed, even in the finite-dimensional case integrals
of exponentials of elementary functions can typically be evaluated in closed form only in
the purely quadratic (Gaussian, Fresnel) case, whereas more general integrals are then
evaluated ‘perturbatively’ in terms of a generating function as in (A.26).
For path integrals, the situation is quite analogous. Typically, the path integrals that
can be calculated in closed form are purely quadratic (Gaussian, Fresnel) integrals, i.e.
actions of the general time-dependent harmonic oscillator type
S[x] = \frac{m}{2}\int_{t_i}^{t_f}\left(\dot x(t)^2 - \omega(t)^2 x(t)^2\right) ,   (3.2)
Here I have suppressed the integration measure dt, and I will mostly continue to do so in the following, i.e. \int is short for \int dt etc.
Then the strategy to deal with more general path integrals, corresponding e.g. to an
action of the form
S[x] = \int_{t_i}^{t_f}\left(\frac{m}{2}\dot x(t)^2 - V(x(t))\right) ,   (3.4)
is to reduce it to an expansion about a quadratic action. In practice this is achieved
in one of two ways. Either the potential is of the harmonic oscillator form plus a
perturbation, V (x) = V0 (x) + λW (x), and one defines the path integral via a power
series expansion in λ, a perturbative expansion. Or one defines the path integral by an
expansion around a classical solution x_c(t) of the equations of motion m\ddot x = -V'(x).
To quadratic order in the 'quantum fluctuations' around the classical solution one then finds the action (3.2) with \omega(t)^2 = \frac{1}{m}V''(x_c(t)). This turns out to lead to an expansion of the path integral in a power series in \hbar, a semi-classical expansion.
In either case, the Gaussian path integral can be evaluated in reasonably closed form
and the complete path integral is then defined in terms of the generating functional
associated with this quadratic action. In this section 3 we will deal exclusively with
Gaussian integrals. The evaluation of more general integrals in terms of generating
functionals will then be one of the subjects of section 4.
We are now ready to tackle our first path integral. For obvious reasons we will consider
the simplest dynamical system, namely the free particle, with Lagrangian
L_0(x(t), \dot x(t)) = \frac{m}{2}\dot x(t)^2 .   (3.5)
We thus need to calculate
K_0(x_f, x_i; t_f - t_i) = N\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\, e^{(i/\hbar)S_0[x(t)]} ,   (3.6)
Since in this case we already know the result, we can use this calculation to determine
the overall normalisation of the path integral from the continuum point of view (since
N is universal: it depends only on m and (tf − ti ) and not on the potential V (x)).
As described above, the general strategy is to expand the paths around a solution
to the classical equations of motion. Here the starting action is already quadratic,
but this expansion will have the added benefit of eliminating the boundary conditions
x(ti,f ) = xi,f from the path integral. We thus split any path satisfying these boundary
conditions into the sum of the classical path xc (t),
x_c(t) = x_i + \frac{x_f - x_i}{t_f - t_i}(t - t_i) ,   (3.8)
Plugging this into the action, one finds
S_0[x_c + y] = S_0[x_c] + \frac{m}{2}\int_{t_i}^{t_f}\dot y(t)^2   (3.11)
S_0[x_c] = \frac{m}{2}\frac{(x_f - x_i)^2}{t_f - t_i} .   (3.12)
There is no linear term in y(t) because we are expanding around a critical point of the
action S0 [x(t)], and there are no higher than quadratic terms because the free particle
action itself is quadratic.
In the path integral, instead of integrating over all paths x(t) with the specified boundary
conditions we now integrate over all paths y(t) with the boundary conditions (3.10). We
thus find that the path integral expression for the kernel becomes
K_0(x_f, x_i; t_f - t_i) = e^{(i/\hbar)S_0[x_c]}\, N\int_{y(t_i)=y(t_f)=0} D[y(t)]\, e^{\frac{i}{\hbar}\frac{m}{2}\int_{t_i}^{t_f}\dot y(t)^2}   (3.13)
To get a handle on the path integral over y(t), we integrate by parts in the action to obtain
\int_{y(t_i)=y(t_f)=0} D[y(t)]\, e^{\frac{i}{\hbar}\frac{m}{2}\int_{t_i}^{t_f}\dot y(t)^2} = \int_{y(t_i)=y(t_f)=0} D[y(t)]\, e^{\frac{i}{\hbar}\frac{m}{2}\int_{t_i}^{t_f} y(t)(-\partial_t^2)y(t)}   (3.14)
Comparing with the fundamental Fresnel integral formula (A.32),
\int_{-\infty}^{+\infty} d^dx\; e^{iA_{ab}x^a x^b/2} = \det\left(\frac{A}{2\pi i}\right)^{-1/2} ,   (3.15)
we deduce that what this path integral formally calculates is the determinant of the differential operator (-\partial_t^2),
\int_{y(t_i)=y(t_f)=0} D[y(t)]\, e^{\frac{i}{\hbar}\frac{m}{2}\int_{t_i}^{t_f} y(t)(-\partial_t^2)y(t)} = \mathrm{Det}\left[\frac{m}{2\pi i\hbar}(-\partial_t^2)\right]^{-1/2} .   (3.16)
Comparing with the known result (1.27), we conclude that
N = \sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}}\;\mathrm{Det}\left[\frac{m}{2\pi i\hbar}(-\partial_t^2)\right]^{+1/2} .   (3.17)
We also see that the normalised path integral measure D̂[y(t)] introduced in (2.28) is
such that it normalises the free particle Gaussian fluctuation integral to unity,
\int_{y(t_i)=y(t_f)=0}\hat D[y(t)]\, e^{\frac{i}{\hbar}\frac{m}{2}\int_{t_i}^{t_f} y(t)(-\partial_t^2)y(t)} = 1 .   (3.18)
with
\frac{m}{2}\int_{t_i}^{t_f} y(t)(-\partial_t^2)y(t) = \frac{m}{2}\sum_n\lambda_n c_n^2 ,   (3.21)
and thus the path integral reduces to an infinite product of finite-dimensional Gaussian integrals, with the result
\int_{y(t_i)=y(t_f)=0} D[y(t)]\, e^{\frac{i}{\hbar}\frac{m}{2}\int_{t_i}^{t_f} y(t)(-\partial_t^2)y(t)} = \prod_n\int_{-\infty}^{+\infty} dc_n\; e^{\frac{i}{\hbar}\frac{m}{2}\sum_n\lambda_n c_n^2}
= \left(\prod_n\frac{m}{2\pi i\hbar}\lambda_n\right)^{-1/2} .   (3.23)
It will be useful for later to know explicitly the eigenvalues λn . The properly normalised
eigenfunctions are
y_n(t) = \sqrt{\frac{2}{t_f - t_i}}\;\sin\left(n\pi\frac{t - t_i}{t_f - t_i}\right) .   (3.24)
Since y_{-n}(t) = -y_n(t), the linearly independent solutions are y_n(t) with n \in \mathbb{N} and the corresponding eigenvalues are
\lambda_n = \frac{n^2\pi^2}{(t_f - t_i)^2} .   (3.25)
We thus have
\mathrm{Det}[-\partial_t^2] = \prod_{n=1}^{\infty}\frac{n^2\pi^2}{(t_f - t_i)^2} .   (3.26)
This is clearly infinite, and thus \mathrm{Det}^{-1/2}[-\partial_t^2] is zero, but this is compensated by the infinite normalisation constant in precisely such a way that one obtains the finite result (3.17),
N\,\mathrm{Det}\left[\frac{m}{2\pi i\hbar}(-\partial_t^2)\right]^{-1/2} = \sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}} .   (3.27)
For more comments on determinants and regularised determinants see sections 3.5 and
3.6.
We have now accumulated all the techniques we need to tackle a more interesting ex-
ample, namely the time-independent harmonic oscillator, with action
S[x] = \frac{m}{2}\int_{t_i}^{t_f}\left(\dot x(t)^2 - \omega_0^2 x(t)^2\right) .   (3.28)
Following the same strategy as for the free particle, we decompose the path into a
classical path xc (t) and the fluctuation y(t), determine the classical action and the (still
quadratic) action for y(t), and then perform the Gaussian integral over y(t).
Thus the path integral we need to calculate is
K(x_f, x_i; t_f - t_i) = e^{(i/\hbar)S[x_c]}\, N\int_{y(t_i)=y(t_f)=0} D[y(t)]\, e^{\frac{i}{\hbar}\frac{m}{2}\int_{t_i}^{t_f} y(t)(-\partial_t^2 - \omega_0^2)y(t)}   (3.31)
This is once again a straightforward Gaussian (Fresnel) integral, and thus one finds,
using the result (3.17),
K(x_f, x_i; t_f - t_i) = \sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}}\;\sqrt{\frac{\mathrm{Det}[\frac{m}{2\pi i\hbar}(-\partial_t^2)]}{\mathrm{Det}[\frac{m}{2\pi i\hbar}(-\partial_t^2 - \omega_0^2)]}}\; e^{(i/\hbar)S[x_c]}
= \sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}}\;\sqrt{\frac{\mathrm{Det}[-\partial_t^2]}{\mathrm{Det}[-\partial_t^2 - \omega_0^2]}}\; e^{(i/\hbar)S[x_c]}   (3.32)
Since in writing the above we have taken into account the normalisation factor, the
result should be well defined and finite. This is indeed the case. To calculate this ratio
of determinants, we observe first of all that if the eigenvalues of the operator (−∂t2 ) are
λn , the eigenvalues µn of the operator (−∂t2 − ω02 ) are µn = λn − ω02 . Thus the ratio of
determinants is
\sqrt{\frac{\mathrm{Det}[-\partial_t^2]}{\mathrm{Det}[-\partial_t^2 - \omega_0^2]}} = \left(\prod_n\frac{\lambda_n}{\lambda_n - \omega_0^2}\right)^{1/2} = \left(\prod_n\left(1 - \frac{\omega_0^2}{\lambda_n}\right)\right)^{-1/2} .   (3.33)
Even though this may not be obvious from this expression, the result is actually an
elementary function. First of all, using the explicit expression for the λn , one has
\prod_{n=1}^{\infty}\left(1 - \frac{\omega_0^2}{\lambda_n}\right) = \prod_{n=1}^{\infty}\left(1 - \frac{\omega_0^2(t_f - t_i)^2}{n^2\pi^2}\right) .   (3.34)
The function
f(x) = \prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right)   (3.35)
is an even function of x with f(0) = 1 and simple zeros at x = \pm n\pi. This shows (see also Appendix B) that this is an infinite product representation of
f(x) = \frac{\sin x}{x} .   (3.36)
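A quick numerical look at the truncated product may be reassuring (convergence to sin x/x is slow, roughly like 1/N); a small Python sketch with an arbitrary value of x:

import numpy as np

x = 1.3
for N in (10, 100, 1000, 10000):
    prod = np.prod([1 - x**2/(n*np.pi)**2 for n in range(1, N + 1)])
    print(N, prod, np.sin(x)/x)    # truncated product vs. sin(x)/x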
Therefore, the final result for the ratio of determinants is
\sqrt{\frac{\mathrm{Det}[-\partial_t^2]}{\mathrm{Det}[-\partial_t^2 - \omega_0^2]}} = \sqrt{\frac{\omega_0(t_f - t_i)}{\sin\omega_0(t_f - t_i)}} ,   (3.37)
and our final compact result for the propagator of the harmonic oscillator is
K(x_f, x_i; t_f - t_i) = \sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}}\sqrt{\frac{\omega_0(t_f - t_i)}{\sin\omega_0(t_f - t_i)}}\; e^{(i/\hbar)S[x_c]}
= \sqrt{\frac{m\omega_0}{2\pi i\hbar\sin\omega_0(t_f - t_i)}}\; e^{(i/\hbar)S[x_c]}   (3.38)
with
S[x_c] = \frac{m\omega_0}{2\sin\omega_0(t_f - t_i)}\left[(x_i^2 + x_f^2)\cos\omega_0(t_f - t_i) - 2x_i x_f\right] .   (3.39)
It is easy to see that this result reduces to that for the free particle in the limit ω0 → 0.
Given the above result for the kernel, we also immediately obtain an expression for the
partition function by setting xi = xf = x and integrating over x. This is a Fresnel
integral, and the result is
Z(t_f - t_i) = \frac{1}{2i\sin\frac{\omega_0(t_f - t_i)}{2}} .   (3.40)
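As a consistency check (not needed in what follows), the continuation t_f - t_i = -i\hbar\beta of (2.32) turns (3.40) into the thermal partition function of the oscillator, from which the spectrum promised by (2.33) can be read off:

Z(\beta) = \frac{1}{2\sinh\frac{\beta\hbar\omega_0}{2}} = \frac{e^{-\beta\hbar\omega_0/2}}{1 - e^{-\beta\hbar\omega_0}} = \sum_{n=0}^{\infty} e^{-\beta\hbar\omega_0(n + 1/2)} \quad\Rightarrow\quad E_n = \hbar\omega_0\left(n + \tfrac{1}{2}\right) .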
3.5 Gaussian Path Integrals and Determinants: the VVPM and GY Formulae
H(t) = \frac{1}{2m}p^2 + \frac{m\omega(t)^2}{2}x^2   (3.43)
and the quadratic action
S[x] = \frac{m}{2}\int_{t_i}^{t_f}\left(\dot x(t)^2 - \omega(t)^2 x(t)^2\right) .   (3.44)
Expanding the action
S[x] = \frac{m}{2}\int_{t_i}^{t_f}\left(\dot x(t)^2 - \frac{2}{m}V(x(t))\right)   (3.45)
to second order around a classical solution x_c(t) of the equations of motion m\ddot x = -V'(x), one finds the action (3.44) with
\omega(t)^2 = \frac{1}{m}V''(x_c(t)) .   (3.46)
The resulting path integral is still Gaussian, and exactly the same strategy as above can
be used to show that the path integral result for the kernel of the evolution operator is
(cf. (3.32))
\langle x_f|\,T\,e^{-(i/\hbar)\int_{t_i}^{t_f} dt\,\hat H(t)}|x_i\rangle = \sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}}\;\sqrt{\frac{\mathrm{Det}[-\partial_t^2]}{\mathrm{Det}[-\partial_t^2 - \omega(t)^2]}}\; e^{(i/\hbar)S[x_c]} .   (3.47)
Here xc (t) now denotes the classical harmonic oscillator solution with the given bound-
ary condition, xc (ti ) = xi and xc (tf ) = xf , S[xc ] is the classical action, and the fluctu-
ation determinants are to be calculated for zero (Dirichlet) boundary conditions.
In order to evaluate the result for the propagator (3.47), one needs to determine the
classical action and the ratio of fluctuation determinants. The former is rather straight-
forward provided that one can find the classical solution. An integration by parts shows
that the classical action can be calculated in terms of the boundary values of xc (t) and
ẋc (t) at t = ti , tf ,
S[x_c] = \frac{m}{2}\int_{t_i}^{t_f}\left(\dot x_c(t)^2 - \omega(t)^2 x_c(t)^2\right) = \frac{m}{2}\left[x_f\dot x_c(t_f) - x_i\dot x_c(t_i)\right] .   (3.48)
The calculation of the ratio of determinants would be complicated if one tried to cal-
culate these determinants directly, as we did in the time-independent case. Fortunately
there are two elegant shortcuts to calculating this ratio of determinants which are finite-
dimensional in nature and do not require the calculation of a functional determinant. I
will briefly describe these below.
One can for instance use the useful and remarkable result that the ratio of determinants
can be calculated from the classical action via the Van Vleck - Pauli - Morette (VVPM)
formula
\sqrt{\frac{m}{2\pi i\hbar(t_f - t_i)}}\;\sqrt{\frac{\mathrm{Det}[-\partial_t^2]}{\mathrm{Det}[-\partial_t^2 - \omega(t)^2]}} = \frac{1}{\sqrt{2\pi i\hbar}}\;\sqrt{-\frac{\partial^2 S[x_c]}{\partial x_i\,\partial x_f}} .   (3.49)
More generally, for a d-dimensional quantum system, the 2nd derivative of the classical action would be replaced by the d-dimensional VVPM determinant
\frac{\partial^2 S[x_c]}{\partial x_i\,\partial x_f} \;\to\; \det\left[\frac{\partial^2 S[x_c]}{\partial x_i^\mu\,\partial x_f^\nu}\right] .
This is a non-trivial but standard and well-known result. Notice that, to evaluate the
ratio of quantum fluctuations in this manner, one only needs to know the classical
action.
In the case of the harmonic oscillator with constant frequency, agreement between the
VVPM formula and the result we obtained in (3.38) can be immediately verified from
the classical action (3.39) which gives
-\frac{\partial^2 S[x_c]}{\partial x_i\,\partial x_f} = \frac{m\omega_0}{\sin\omega_0(t_f - t_i)} .   (3.50)
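For readers who prefer to let the computer do the differentiating, a minimal sympy sketch of this check (writing T = t_f - t_i):

import sympy as sp

m, w, T, xi, xf = sp.symbols('m omega_0 T x_i x_f', positive=True)
S = m*w/(2*sp.sin(w*T)) * ((xi**2 + xf**2)*sp.cos(w*T) - 2*xi*xf)   # classical action (3.39)
print(sp.simplify(-sp.diff(S, xi, xf)))   # -> m*omega_0/sin(T*omega_0), as in (3.50)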
Alternatively, instead of the VVPM result one can use the equally remarkable, but
apparently much less well known, Gelfand-Yaglom (GY) formula that states that
\frac{\mathrm{Det}[-\partial_t^2 - \omega(t)^2]}{\mathrm{Det}[-\partial_t^2]} = \frac{F_\omega(t_f)}{t_f - t_i}   (3.51)
where F_\omega(t) is the solution of the classical harmonic oscillator equation
\ddot F_\omega(t) + \omega(t)^2 F_\omega(t) = 0   (3.52)
with the initial conditions
F_\omega(t_i) = 0 , \qquad \dot F_\omega(t_i) = 1 .   (3.53)
It is quite remarkable that the ratio of fluctuation determinants, which involves the
product over all eigenvalues of the operator −(∂t2 + ω(t)2 ) (with zero boundary condi-
tions), can be expressed in terms of the zero mode (solution with zero eigenvalue) of
the same operator with the GY boundary conditions (3.53).
Once again, as an example we consider the constant frequency harmonic oscillator. The
solution of the classical equations of motion satisfying the GY boundary conditions is
evidently
F_{\omega_0}(t) = \frac{1}{\omega_0}\sin\omega_0(t - t_i) .   (3.55)
Thus
F_{\omega_0}(t_f) = \frac{1}{\omega_0}\sin\omega_0(t_f - t_i)   (3.56)
and the GY formula predicts
\frac{\mathrm{Det}[-\partial_t^2 - \omega_0^2]}{\mathrm{Det}[-\partial_t^2]} = \frac{\sin\omega_0(t_f - t_i)}{\omega_0(t_f - t_i)} ,   (3.57)
in perfect agreement with the result (3.37).
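The GY formula works just as easily for genuinely time-dependent frequencies, where the eigenvalues are not known explicitly. A rough numerical sketch in Python (the choice \omega(t)^2 = 1 + 0.5\sin 2t on [0, 2] is arbitrary): it compares the ratio of determinants of the discretised operators (with Dirichlet boundary conditions) with F_\omega(t_f)/(t_f - t_i) obtained from the initial value problem.

import numpy as np
from scipy.integrate import solve_ivp

ti, tf, n = 0.0, 2.0, 2000
t = np.linspace(ti, tf, n + 1)
h = t[1] - t[0]
w2 = 1 + 0.5*np.sin(2*t)

# discretised operators on the interior points (Dirichlet boundary conditions);
# the common factor 1/h^2 cancels in the ratio of determinants
K = 2*np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
ratio_disc = np.linalg.det(K - h**2*np.diag(w2[1:-1])) / np.linalg.det(K)

# Gelfand-Yaglom: F'' = -omega(t)^2 F with F(ti) = 0, F'(ti) = 1
sol = solve_ivp(lambda s, y: [y[1], -(1 + 0.5*np.sin(2*s))*y[0]],
                (ti, tf), [0.0, 1.0], rtol=1e-10, atol=1e-12)
print(ratio_disc, sol.y[0, -1]/(tf - ti))   # the two numbers agree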
Some comments:
F_\omega(t_f) = \frac{f(t_f)}{\dot f(t_i)} .   (3.58)
If, on the other hand, f(t) is any solution of the harmonic oscillator equation with f(t_i) \neq 0, the GY solution can be constructed from it as
F_\omega(t) = f(t_i)f(t)\int_{t_i}^{t}\frac{dt'}{f(t')^2} .   (3.59)
\frac{\partial^2 S[x_c]}{\partial x_i\,\partial x_f} = -\frac{m}{F_\omega(t_f)} .   (3.63)
With a bit of effort this purely classical (in the sense of classical mechanics) formula
can be proved directly via Hamilton-Jacobi theory (see Appendix C), but this by
itself provides no insight into the reason for the validity of either the VVPM or
the GY formula.
A slick proof of the GY formula, in the form (3.62), has been given by S. Coleman in
his Erice lectures on The uses of instantons (Appendix A), reprinted in
This proof is also reproduced in
It works roughly as follows (to make this argument more precise one should insert words
like “Fredholm operators” etc. in appropriate places):
Thus Fω,0 (t) is what we called Fω (t) above, and Fω,λ (t) has the property that Fω,λ (tf ) =
0 iff λ is an eigenvalue of the operator (−∂t2 − ω(t)2 ) with zero Dirichlet boundary
conditions (because then Fω,λ (t) is the corresponding eigenfunction).
To establish this claim, one considers the left and right hand sides as functions of the
complex variable λ. The left hand side is a meromorphic function of λ with a simple
zero at each eigenvalue λn of (−∂t2 − ω(t)2 ) (an eigenvalue λn of (−∂t2 − ω(t)2 ) is a zero
eigenvalue of (−∂t2 − ω(t)2 ) − λn )), and a simple pole at each eigenvalue λ0,n of (−∂t2 )
(for the same reason). By the remark above (3.66), exactly the same is true of the right
hand side. In particular, the ratio of the left and the right hand sides has no poles and
is therefore an analytic function of λ.
Moreover, provided that ω(t) is a bounded function of t, for λ sufficiently large, |λ| → ∞,
one can ignore ω(t), and hence both the left and the right hand side go to 1 in that limit
(everywhere except on the real positive line where one can find large real eigenvalues).
Putting these two observations together, one concludes that the ratio of the two sides
is an analytic function of λ that goes to 1 in any direction except perhaps along the
positive real axis, and this implies that the ratio is equal to 1 identically, which concludes
the proof of the identity (3.66).
Another elegant continuum (i.e. non-discretised) proof of the GY formula (3.51) can be
found in
H. Kleinert, A. Chervyakov, Simple Explicit Formulas for Gaussian Path Integrals with
Time-Dependent Frequencies, Phys. Lett. A245 (1998) 345-357; quant-ph/9803016.
Yet another proof can be assembled from sections 3.3, 3.5 and 34.2 of
J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Oxford Science Publi-
cations, 1989).
There are certain infinite-dimensional operators for which the definition of a determinant
poses no real problem. For example, for most intents and purposes, trace class operators
K (i.e. operators for which the trace exists), and operators of the form I + K, where I is
the identity operator and K a trace class operator, behave like finite-dimensional linear
operators (matrices). For invertible (n × n)-matrices M one can write the determinant
(with K = I − M ) as
since this series is absolutely convergent. However, most operators appearing in physics
are not of this form, and hence one needs to be more creative.
In section 3.4 we had seen that, even though Det(−∂t2 − ω02 ) (the product of the eigen-
values) diverges, the ratio of this determinant and the (equally divergent) free particle
determinant gave us a finite result. More generally, the ratio of determinants that ap-
pears in (3.47) can be interpreted as defining a regularised functional determinant of
the operator (−∂t2 − ω(t)2 ), in the sense of
\mathrm{Det}_{\mathrm{reg}}[-\partial_t^2 - \omega(t)^2] := \frac{\mathrm{Det}[-\partial_t^2 - \omega(t)^2]}{\mathrm{Det}[-\partial_t^2]} .   (3.69)
It is this regularised determinant that the normalised path integral with measure D̂[x]
computes (cf. (2.28,3.18)),
\int_{y(t_i)=y(t_f)=0}\hat D[y(t)]\, e^{\frac{i}{\hbar}\frac{m}{2}\int_{t_i}^{t_f} y(t)(-\partial_t^2 - \omega(t)^2)y(t)} = \mathrm{Det}_{\mathrm{reg}}^{-1/2}[-\partial_t^2 - \omega(t)^2] .   (3.70)
However, usually in the physics literature one adopts a slightly different attitude. Instead of regularising explicitly by means of the free particle determinant (which is nevertheless natural from the path integral point of view) one attempts to define meaningful individual (instead of ratios of) regularised functional determinants in other ways, e.g. via the so-called \zeta-function or heat kernel regularisation.
To see the relation between the spectral \zeta-function \zeta_A(s) = \sum_n\lambda_n^{-s} and the determinant, one differentiates once,
\zeta_A'(s) = -\sum_n\lambda_n^{-s}\log\lambda_n ,   (3.73)
to conclude that
\zeta_A'(0) = -\sum_n\log\lambda_n = -\log\prod_n\lambda_n .   (3.74)
Formally, therefore, one has
\prod_n\lambda_n = e^{-\zeta_A'(0)} ,   (3.75)
but, at this point, the left hand side is as ill-defined as the right hand side because \zeta_A(s), as it stands, will be convergent only for Re(s) sufficiently large and positive (and not at s = 0).
Likewise, the ordinary Riemann ζ-function, as it stands, converges only for Re(s) > 1.
However, in that case it is well known that ζ(s) can be analytically continued to a
meromorphic function of s in the entire complex s-plane, with a pole only at s = 1. In
particular, one then has cute results like
\zeta(0) = -\tfrac{1}{2} \qquad (\textstyle\sum_{n=1}^{\infty} 1)
\zeta(-1) = -\tfrac{1}{12} \qquad (\textstyle\sum_{n=1}^{\infty} n)   (3.76)
\zeta(-2) = 0 \qquad (\textstyle\sum_{n=1}^{\infty} n^2)
(for a poor man's (handwaving) proof of these identities, see Appendix D) as well as (we will need this below)
\zeta'(0) = -\tfrac{1}{2}\log 2\pi .   (3.77)
Analogously, under favourable circumstances the spectral ζ-function ζA (s) can be ex-
tended to a meromorphic function of s which is holomorphic at s = 0, and then (3.75)
can be used to define the ζ-function regularised determinant of A via
\mathrm{Det}_\zeta A = e^{-\zeta_A'(0)} .   (3.80)
To illustrate this method, let us go back to the calculation of the free particle determi-
nant in section 3.3. There we had seen that the eigenvalues of the operator A = −∂t2
(with Dirichlet boundary conditions) are
\lambda_n = \frac{n^2\pi^2}{T^2} , \qquad n = 1, 2, \ldots   (3.81)
where T = t_f - t_i. To define the determinant, we construct the spectral \zeta-function
\zeta_A(s) = \sum_{n=1}^{\infty}\left(\frac{n\pi}{T}\right)^{-2s} = \left(\frac{T}{\pi}\right)^{2s}\zeta(2s) = e^{2s\log(T/\pi)}\,\zeta(2s)   (3.82)
and calculate
\zeta_A'(0) = 2\zeta(0)\log(T/\pi) + 2\zeta'(0) = -\log(T/\pi) - \log(2\pi) = -\log(2T) .   (3.83)
Thus
\mathrm{Det}_\zeta[-\partial_t^2] = e^{-\zeta_A'(0)} = 2T .   (3.84)
Repeating the calculation for the eigenvalues of (-\partial_t^2 - \omega_0^2),
\lambda_n = \frac{n^2\pi^2}{T^2} - \omega_0^2 , \qquad n = 1, 2, \ldots   (3.85)
one finds
\mathrm{Det}_\zeta[-\partial_t^2 - \omega_0^2] = 2T\,\frac{\sin T\omega_0}{T\omega_0} .   (3.86)
In particular, we have
\frac{\mathrm{Det}_\zeta[-\partial_t^2 - \omega_0^2]}{\mathrm{Det}_\zeta[-\partial_t^2]} = \frac{\sin T\omega_0}{T\omega_0}   (3.87)
in perfect agreement with the previously obtained result (3.37),
\sqrt{\frac{\mathrm{Det}[-\partial_t^2]}{\mathrm{Det}[-\partial_t^2 - \omega_0^2]}} = \sqrt{\frac{\omega_0(t_f - t_i)}{\sin\omega_0(t_f - t_i)}} .   (3.88)
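Since \zeta_A'(0) in (3.83) only involves the analytically continued Riemann \zeta-function, the regularised determinant can also be evaluated numerically. A minimal Python sketch using mpmath, with an arbitrary value of T:

from mpmath import mp, zeta, exp, diff

T = mp.mpf(3)                                    # arbitrary choice of T = tf - ti
zeta_A = lambda s: (T/mp.pi)**(2*s) * zeta(2*s)  # the spectral zeta-function (3.82)
print(exp(-diff(zeta_A, 0)), 2*T)                # Det_zeta[-d^2/dt^2] = 2T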
While this ζ-function regularised definition of the determinant captures most of the
essential properties of the standard determinant, some care is required in manipulating
these objects. For example, one important property of the standard determinant in
the finite-dimensional case is its multiplicativity det(M N ) = det(M ) det(N ). To what
extent an analogous identity
\mathrm{Det}_\zeta(AB) \overset{?}{=} \mathrm{Det}_\zeta A\;\mathrm{Det}_\zeta B   (3.89)
and, from a more mathematical point of view, in (warning: not for the faint of heart)
3.7 Exercises
3. Verify the result (3.40) for the partition function of the harmonic oscillator, and
the expansion (3.41).
4. Fill in the missing steps in the proof of the identity (3.63) given in Appendix C.
In particular, verify (C.10).
or the GY formula
\frac{\mathrm{Det}[-\partial_t^2]}{\mathrm{Det}[-\partial_t^2 - \omega(t)^2]} = \frac{F_0(t_f)}{F_\omega(t_f)}   (3.92)
and publish it and/or tell me about it.
4 Generating Functionals and Perturbative Expansions
In this section we will study some other important properties of the path integral, in
particular the perturbative and semi-classical expansion of non-Gaussian path integrals.
The treatment in this section is somewhat more cursory than in the previous sections -
the main intention is to give a flavour of things to come (in a course on quantum field
theory, say).
The main objects of interest in quantum field theory are vacuum expectation values
of time-ordered products of field operators. These matrix elements can be obtained
from a generating functional which, in turn, can be expressed as a path integral. This
motivates the following discussion of these concepts in the quantum mechanical context.
Before turning to the path integral, we introduce the generating functionals for the
correlation functions (n-point functions)
From these generating functionals, the individual correlation functions can evidently be
reconstructed by differentiation,
G_{fi}(t_1, \ldots, t_n) = \left(\frac{\hbar}{i}\right)^n\frac{\delta^n Z_{fi}[j]}{\delta j(t_1)\cdots\delta j(t_n)}\bigg|_{j(t)=0}
G(t_1, \ldots, t_n) = \left(\frac{\hbar}{i}\right)^n\frac{\delta^n Z[j]}{\delta j(t_1)\cdots\delta j(t_n)}\bigg|_{j(t)=0} .   (4.4)
For Zf i [j] we can easily deduce a path integral representation. Using (2.60) and the
definition (4.2), one finds
Z_{fi}[j] = N\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; e^{(i/\hbar)S[x(t); j(t); t_f, t_i]} ,
where
S[x(t); j(t); t_f, t_i] = S[x(t); t_f, t_i] + \int_{t_i}^{t_f} dt\, j(t)x(t)   (4.5)
is the action with a source term (or the action with a coupling of x(t) to the current
j(t)). Thus the generating functional Zf i [j] is the path integral for the action with a
source term.
Our aim is now to find a similar path integral representation for Z[j]. For that we need
to project from the states |x, t > to the ground state |0 >. To that end we first expand
the state |x, t > in a basis of eigenstates |n > of the Hamiltonian,
|x, t\rangle = e^{(i/\hbar)t\hat H}|x\rangle = \sum_n e^{(i/\hbar)t\hat H}|n\rangle\langle n|x\rangle = \sum_n e^{(i/\hbar)tE_n}\psi_n^*(x)|n\rangle .   (4.6)
Thus for the correlation functions (2.60) of time-ordered products of operators one finds
G_{fi}(t_1, \ldots, t_p) = \sum_{m,n} e^{-(i/\hbar)t_f E_n + (i/\hbar)t_i E_m}\;\psi_n(x_f)\psi_m^*(x_i)\,\langle n|T(\hat x(t_1)\cdots\hat x(t_p))|m\rangle .   (4.7)
To accomplish the projection onto the vacuum expectation value, we now take the limit
tf,i → ±∞. This can be understood in a number of related ways. They all amount
to the statement that, in the sense of distributions, exp(−itE) → 0 for t → ∞, the
dominant contribution in that limit coming from the smallest possible value of E, i.e.
from the ground state. Explicitly, one can for instance replace E_n \to (1 - i\epsilon)E_n for a small positive \epsilon and then take the limit \epsilon \to 0 at the end. Alternatively, one can
“analytically continue” to imaginary time, take the limit there, and then continue back
to real time. In whichever way one proceeds, one can conclude that
As the left hand side is independent of the boundary conditions xf,i imposed at t → ±∞,
so is the right hand side. Passing now to the generating functional Z[j] (4.3), we can
once again rewrite the infinite sum as an exponential in the path integral to deduce
(suppressing the dependence on the boundary conditions) that
Z[j] \sim \int D[x(t)]\; e^{(i/\hbar)S[x(t); j(t); t_{f,i} = \pm\infty]} ,   (4.9)
where
S[x(t); j(t); t_{f,i} = \pm\infty] = \int_{-\infty}^{+\infty} dt\,[L(x(t), \dot x(t)) + j(t)x(t)] .   (4.10)
The proportionality factor (normalisation constant) is fixed by the requirement that Z[j = 0] = 1.
4.2 Green's Functions and the Generating Functional for Quadratic Theories
As we will reduce the calculation of a general path integral and its generating functional to that of a quadratic theory, in this section we will determine explicitly the generating functional for the latter.
For finite-dimensional Fresnel integrals one has (see e.g. Exercise 1.5.3 and equation
(A.13))
\int d^dx\; e^{iA_{ab}x^a x^b/2 + ij_a x^a} = \det\left(\frac{A}{2\pi i}\right)^{-1/2} e^{-iG^{ab}j_a j_b/2} ,   (4.13)
where G^{ab} is the inverse matrix ("Green's function") to A_{ab}, G^{ab}A_{bc} = \delta^a_c. Thus the "generating function" is
z_0[j] := \frac{\int d^dx\; e^{iA_{ab}x^a x^b/2 + ij_a x^a}}{\int d^dx\; e^{iA_{ab}x^a x^b/2}} = e^{-iG^{ab}j_a j_b/2} ,   (4.14)
and the corresponding normalised "2-point function" is
\langle x^c x^d\rangle = \frac{1}{i}\frac{\partial}{\partial j_c}\,\frac{1}{i}\frac{\partial}{\partial j_d}\, z_0[j]\bigg|_{j=0} = \frac{1}{i}\frac{\partial}{\partial j_c}\left(-G^{ad}j_a\, z_0[j]\right)\bigg|_{j=0} = iG^{cd} .   (4.16)
Thus the “2-point function” is the “Green’s function” of the Fresnel integral.
Higher moments (n-point functions) can be calculated in a similar way. For n odd they
are manifestly zero. For the 4-point function one finds (Exercise)
\langle x^a x^b x^c x^d\rangle = \langle x^a x^b\rangle\langle x^c x^d\rangle + \langle x^a x^c\rangle\langle x^b x^d\rangle + \langle x^a x^d\rangle\langle x^b x^c\rangle   (4.17)
etc. The general result, expressing the 2n-point functions as a sum over all possible pairings P(x^1, \ldots, x^{2n}),
\langle x^1\cdots x^{2n}\rangle = \sum_{P(x^1, \ldots, x^{2n})}\langle x^{i_1}x^{i_2}\rangle\cdots\langle x^{i_{2n-1}}x^{i_{2n}}\rangle   (4.18)
In the quantum field theory context, this result is known as Wick’s Theorem and, even
though a simple result, is of enormous practical significance in perturbative calculations.
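For the real (Euclidean) Gaussian counterpart of these moments, Wick's theorem is easy to check by brute force. A small Monte Carlo sketch in Python, with an arbitrary 4x4 covariance matrix:

import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((4, 4))
G = L @ L.T                                   # some positive covariance matrix
x = rng.multivariate_normal(np.zeros(4), G, size=2_000_000)

a, b, c, d = 0, 1, 2, 3
lhs = np.mean(x[:, a]*x[:, b]*x[:, c]*x[:, d])
rhs = G[a, b]*G[c, d] + G[a, c]*G[b, d] + G[a, d]*G[b, c]
print(lhs, rhs)                               # agree up to Monte Carlo error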
We now consider the analogous question for harmonic oscillator path integrals. In this case, one finds, in precise analogy with the finite-dimensional case,
Z_0[j] = e^{-(i/m\hbar)\int_{-\infty}^{+\infty} dt\int_{-\infty}^{+\infty} dt'\, j(t)G_0(t, t')j(t')/2} ,   (4.19)
where G_0(t, t') is a Green's function (the inverse) of the operator -\partial_t^2 - \omega(t)^2,
For the finite-time path integral, the Green’s function that appears here would have
been determined by the Dirichlet (zero) boundary conditions at ti,f to be the Green’s
function with
G_0(t_f, t') = G_0(t, t_i) = 0 .   (4.21)
In the present case (infinite time interval), the relevant Green's function is implicitly determined by an i\epsilon prescription. In particular, for the time-independent harmonic oscillator with constant frequency \omega_0 one finds
G_0(t, t') = \frac{1}{2i\omega_0}\, e^{-i\omega_0|t - t'|} .   (4.22)
Once we know the generating functional, we can use it to calculate n-point functions.
In particular, for the two-point function one has
\langle 0|T(\hat x(t)\hat x(t'))|0\rangle = \frac{\hbar}{i}\frac{\delta}{\delta j(t)}\,\frac{\hbar}{i}\frac{\delta}{\delta j(t')}\, Z_0[j]\bigg|_{j=0} = \frac{i\hbar}{m}\, G_0(t, t') .   (4.23)
That the time-ordered product < 0|T (x̂(t)x̂(t0 ))|0 > is a Green’s function can of course
also be verified directly in the standard operator formulation of quantum mechanics
(Exercise). Likewise, higher n-point functions can be expressed in terms of products of
2-point functions (Wick’s theorem again).
4.3 Perturbative Expansion and Generating Functionals
As we have seen, the generating functional Z[j] encodes all the information about the
n-point functions G(x1 , . . . , xn ). However, this is only useful if supplemented by a
prescription for how to calculate Z[j]. Since in practice the only path integrals that
we can do explicitly are Gaussian path integrals and their close relatives, the question
arises how to reduce the evaluation of Z[j] to an evaluation of Gaussian integrals.
This is achieved via a perturbative expansion of the path integral around a quadratic
(Gaussian) action. The (assumed) small perturbation expansion parameter can be a
coupling constant λ, as in
\hat H = \hat H_0 + \lambda\hat W ,   (4.24)
with Ĥ0 a harmonic oscillator Hamiltonian. In this case one is studying a path integral
counterpart of standard quantum mechanical perturbation theory.
Alternatively, the small parameter could be Planck’s constant ~ itself. In this case one
is interested in evaluating the path integral
\int D[x]\; e^{(i/\hbar)S[x]}   (4.25)
for ~ → 0 as a power series in ~ by expanding the action around a classical solution xc (t).
This is a semi-classical expansion of the path integral, a counterpart of the standard
WKB approximation of quantum mechanics.
Both physically and technically (in one case one has a small parameter in front of a part
of the action, the perturbation, in the other a large parameter 1/~ in front of the entire
action) these two expansions appear to be quite distinct. Calculationally, however,
they are rather similar, since in both cases the path integral can be reduced to a series
expansion in derivatives of the generating functional of the quadratic theory. This is
immediate for the perturbative λ-expansion (which we will consider in this section)
but requires a minor bit of trickery for the ~-expansion (hence the excursion into the
stationary phase approximation for finite-dimensional integrals in section 4.5).
For definiteness, we will assume that the perturbation \hat W arises from a velocity-independent perturbation of the potential,
V(x) = V_0(x) + \lambda W(x) ,
with V_0(x) a harmonic oscillator potential. For more complicated perturbations one
would have to go back to the phase space path integral, introduce sources for both x̂(t)
and p̂(t), etc. Then the action takes the form
S[x] = S_0[x] + \lambda S_I[x] = S_0[x] - \lambda\int dt\, W(x(t)) ,   (4.27)
where S0 [x] is the free action, and SI [x] the perturbation or interaction term.
To calculate the path integral, one introduces a source-term for the free action S0 [x],
S_0[x, j] = S_0[x] + \int dt\, j(t)x(t) ,   (4.28)
and determines Zf i,0 [j] or Z0 [j]. Focussing on the latter, the vacuum generating func-
tional Z[j] for the perturbed action can then be written in terms of that for the free
action as
Z[j] = N\, e^{(i\lambda/\hbar)S_I\left[\frac{\hbar}{i}\frac{\delta}{\delta j(t)}\right]}\, Z_0[j] .   (4.29)
This result for Z[j] is manifestly a power series expansion in λ - and the fact that this
perturbative expansion is so straightforward to obtain in the path integral formalism is
one of the reasons that makes the path integral approach to quantum field theory so
powerful.
Moreover, using the explicit expression (4.19) for the generating functional Z0 [j] in terms
of the Green’s functions of the free theory, one sees that the generating functional Z[j],
and thus all the n-point functions of the perturbed theory, are expressed as a series
expansion in terms of the Green’s functions of the unperturbed theory. A graphical
representation of this expansion leads to the Feynman diagram expansion of quantum
mechanics (and quantum field theory).
The integrals of interest in this section are oscillatory integrals of the kind
\int_{-\infty}^{+\infty} dx\; e^{(i/\hbar)f(x)} .   (4.31)
The basic tenet of the stationary phase approximation of such integrals is that for small
~, ~ → 0, the integrand oscillates so rapidly that the integral over any small x-interval
will give zero unless one is close to a critical point x_c of f(x), f'(x_c) = 0, for which to
first order around xc there are no oscillations. This suggests that for ~ → 0 the integral
is dominated by the contribution from the neighbourhood of some critical point(s) xc
of f (x), and that therefore in this limit the dominant contribution to the integral can
be obtained by a Taylor expansion of f (x) around x = xc .
To set the stage for this discussion, we will first reconsider the Gaussian and Fresnel
integrals of Appendix A from this point of view. There we had obtained the formula
(A.13),
\int_{-\infty}^{+\infty} dx\; e^{iax^2/2 + ijx} = \sqrt{\frac{2\pi i}{a}}\; e^{-ij^2/2a}   (4.32)
by analytic continuation from the corresponding Gaussian integral and/or completing
the square. A (for the following) more instructive way of obtaining this result is to
consider the integral
\int_{-\infty}^{+\infty} dx\; e^{(i/\hbar) q(x)}   (4.33)
for some quadratic function of x (q(x) = ax^2/2 + jx + c, say). Such a function has a
unique critical point x_c (x_c = -j/a), and since q(x) is quadratic and q'(x_c) = 0 one can
write q(x) as
q(x) = q(x_c) + \tfrac{1}{2}(x - x_c)^2\, q''(x_c) .   (4.34)
Thus the integral is
\int_{-\infty}^{+\infty} dx\; e^{(i/\hbar) q(x)} = e^{(i/\hbar) q(x_c)} \int_{-\infty}^{+\infty} dy\; e^{(i/\hbar) q''(x_c)\, y^2/2} = \sqrt{\frac{2\pi i \hbar}{q''(x_c)}}\; e^{(i/\hbar) q(x_c)}   (4.35)
For a general function f(x) with critical point x_c, one writes analogously
f(x) = f(x_c) + \tfrac{1}{2}(x - x_c)^2 f''(x_c) + R(x - x_c) ,
with the remainder R(x - x_c) collecting the cubic and higher order terms of the Taylor expansion.
The stationary phase approximation (also known as the saddle point approximation) to
the integral amounts to ignoring the higher than quadratic terms encoded in R(x - x_c)
and leads to the approximate result
\frac{1}{\sqrt{2\pi i \hbar}} \int_{-\infty}^{+\infty} dx\; e^{(i/\hbar) f(x)} \approx \frac{1}{\sqrt{f''(x_c)}}\; e^{(i/\hbar) f(x_c)} .   (4.40)
To justify this approximation, one needs to show that the contributions due to the
remainder R(x − xc ) are indeed subleading in ~ as ~ → 0. We will establish this below.
Another cause for concern may be that, while we have (hand-wavingly) argued that the
dominant contribution to the oscillatory integral should arise from a small neighbour-
hood of the critical point(s), in order to arrive at (4.40) we have taken the integral not
over a small neighbourhood of xc but, quite on the contrary, over (−∞, +∞).
To justify this, consider the contribution to the integral from an interval [a, b] without
critical points. In that interval, one can change the integration variable from x to f (x)
(this would not be allowed if f (x) had a critical point in the interval). Then it is easy
to see (e.g. by an integration by parts) that the integral
\int_a^b dx\; e^{(i/\hbar) f(x)} = O(\hbar) .   (4.41)
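Spelled out (a short supplementary step, not part of the original argument): since f'(x) \neq 0 on [a, b], one may write the integrand as a total derivative and integrate by parts,
\int_a^b dx\; e^{(i/\hbar) f(x)} = \int_a^b dx\; \frac{\hbar}{i f'(x)}\, \frac{d}{dx}\, e^{(i/\hbar) f(x)} = \left[\frac{\hbar}{i f'(x)}\, e^{(i/\hbar) f(x)}\right]_a^b + \frac{\hbar}{i} \int_a^b dx\; \frac{f''(x)}{f'(x)^2}\; e^{(i/\hbar) f(x)} ,
and both the boundary term and the remaining integral carry an explicit factor of \hbar.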
Thus regions without critical points contribute O(~) terms to the integral. On the
other hand, (4.40) shows that, regardless of what the neglected terms do, there are
some contributions to the total integral which are of order O(\hbar^{1/2}). Thus these must
be due to the contributions from (integrals over arbitrarily small neighbourhoods of)
the critical points. As these are dominant relative to the O(~) contributions as ~ → 0,
the difference between integrating over such a neighbourhood of the critical point and
integrating over all x is negligible in this limit.
To analyse the contributions due to R(x−xc ) and to make the dependence of the various
terms on ~ more transparent, it is convenient to define the fluctuation variable y not as
(x − xc ), as was implicitly done in (4.35), but via
x = x_c + \sqrt{\hbar}\, y .   (4.42)
This has the effect of making the Gaussian part of the integral independent of \hbar,
(x - x_c)^2/2\hbar = y^2/2 .   (4.43)
Moreover, since R(x - x_c) is at least cubic, (1/\hbar)\, R(\sqrt{\hbar}\, y) is now a power series in strictly
positive powers of \sqrt{\hbar},
r(y) \equiv (1/\hbar)\, R(\sqrt{\hbar}\, y) = \hbar^{1/2} f^{(3)}(x_c)\, y^3/3! + \hbar\, f^{(4)}(x_c)\, y^4/4! + \ldots   (4.44)
even (odd) powers of y appearing with integral (half-integral) powers of ~. All in all,
we thus have
\frac{1}{\sqrt{2\pi i \hbar}} \int_{-\infty}^{+\infty} dx\; e^{(i/\hbar) f(x)} = \frac{1}{\sqrt{2\pi i}}\; e^{(i/\hbar) f(x_c)} \int_{-\infty}^{+\infty} dy\; e^{i\left[f''(x_c)\, y^2/2 + r(y)\right]} .   (4.45)
To obtain the higher order corrections to the stationary phase approximation, one can
expand exp ir(y). Remembering that only even powers of y in that expansion give a
non-zero contribution to the y-integral, one concludes that this expresses the integral
as a power series in (integral) powers of ~,
\frac{1}{\sqrt{2\pi i \hbar}} \int_{-\infty}^{+\infty} dx\; e^{(i/\hbar) f(x)} = \frac{1}{\sqrt{f''(x_c)}}\; e^{(i/\hbar) f(x_c)} \left(1 + \hbar(\ldots) + \hbar^2(\ldots) + \ldots\right)   (4.46)
The stationary phase approximation can be used to calculate the integral with reason-
able accuracy for very small ~, ~ → 0. However, the series expansion should not be
expected to converge in general, and the series is only an asymptotic series. If contribu-
tions from all critical points are included, it is under certain conditions possible to obtain
error estimates, but in practice applications of the stationary phase approximation are
usually restricted to (4.40).
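As a concrete numerical illustration (a throwaway check, not part of the notes; the function f(x) = x^2/2 + x^4/4, the value of \hbar and the integration cutoff are arbitrary choices), one can compare (4.40) with a direct evaluation of the oscillatory integral; by (4.41), the tails outside the cutoff only contribute at order \hbar:

    import numpy as np
    from scipy.integrate import quad

    hbar = 0.05
    f = lambda x: x**2 / 2 + x**4 / 4        # single critical point x_c = 0, f''(0) = 1

    # real and imaginary parts of int dx exp(i f(x)/hbar) over a finite interval
    re, _ = quad(lambda x: np.cos(f(x) / hbar), -2.5, 2.5, limit=1000)
    im, _ = quad(lambda x: np.sin(f(x) / hbar), -2.5, 2.5, limit=1000)
    numerical = re + 1j * im

    # stationary phase estimate (4.40): sqrt(2 pi i hbar / f''(x_c)) * exp(i f(x_c)/hbar)
    estimate = np.sqrt(2 * np.pi * hbar) * np.exp(1j * np.pi / 4)

    print(abs(numerical - estimate) / abs(estimate))   # relative deviation of order hbar

Decreasing \hbar (and enlarging the integration range accordingly) makes the agreement correspondingly better.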
Essentially all the work has already been done in sections 3.5 and 4.5, and we can be
brief about this here. Instead of the function f(x) we have an action S[x], and the
semi-classical approximation amounts to expanding S[x] around a critical point, i.e. a
classical solution xc (t) of the corresponding Euler-Lagrange equations. This is precisely
the procedure we had already advocated for how to deal with general non-Gaussian
path integrals. We now see that this will lead to an expansion of the path integral in
powers of ~, the stationary phase approximation to the path integral agreeing with the
harmonic oscillator path integral of section 3.5.
Expanding the action to second order around the classical solution x_c(t), the quadratic term is
\frac{1}{2} \int_{t_i}^{t_f} dt \int_{t_i}^{t_f} dt'\; \left.\frac{\delta^2 S}{\delta x(t)\,\delta x(t')}\right|_{x(t)=x_c(t)} \delta x(t)\,\delta x(t') = \frac{m}{2} \int_{t_i}^{t_f} dt\, \Big(\delta\dot{x}(t)^2 - \frac{1}{m}\, V''(x_c(t))\,\delta x(t)^2\Big)   (4.51)
As already noted in section 3.5, this is the action of a harmonic oscillator with time-
dependent frequency
\omega(t)^2 = \frac{1}{m}\, V''(x_c(t)) .   (4.52)
Thus the stationary phase or semi-classical approximation to the path integral is (cf.
(3.47))
\int_{x(t_i)=x_i}^{x(t_f)=x_f} D[x(t)]\; e^{(i/\hbar) S[x(t);\, t_f, t_i]} \approx \sqrt{\frac{m}{2\pi i \hbar\,(t_f - t_i)}}\; \sqrt{\frac{\mathrm{Det}[-\partial_t^2]}{\mathrm{Det}[-\partial_t^2 - \omega(t)^2]}}\;\; e^{(i/\hbar) S[x_c]} .   (4.53)
This can be evaluated using either the VVPM or the GY method.
The difference between (3.47) and (4.53) is that in the former case one was dealing
with a quadratic action and the result was exact (i.e. the semi-classical approximation
is exact for the harmonic oscillator), whereas here this is really just the semi-classical
approximation. Note also that xc in (4.53) refers to a classical solution of the full
equations of motion mẍ = −V'(x) whereas x_c in (3.47) is of course a solution of the
harmonic oscillator equation.
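To indicate how the determinant ratio in (4.53) can be evaluated in practice, here is a rough numerical sketch of the Gelfand-Yaglom prescription of section 3.5 (the function names and the use of scipy's solve_ivp are of course just illustrative choices made here): the ratio is obtained from the solution of an ordinary initial value problem, F'' + \omega(t)^2 F = 0 with F(t_i) = 0, \dot F(t_i) = 1, the free determinant corresponding to F_0(t_f) = t_f - t_i.

    import numpy as np
    from scipy.integrate import solve_ivp

    def gy_det_ratio(w2, ti, tf):
        """Det[-d^2/dt^2 - w^2(t)] / Det[-d^2/dt^2] from the Gelfand-Yaglom initial value problem."""
        rhs = lambda t, y: [y[1], -w2(t) * y[0]]          # F'' = -w^2(t) F
        sol = solve_ivp(rhs, (ti, tf), [0.0, 1.0], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1] / (tf - ti)                   # F(t_f) / (t_f - t_i)

    # check against the constant-frequency result sin(w T)/(w T)
    w, T = 1.3, 2.0
    print(gy_det_ratio(lambda t: w**2, 0.0, T), np.sin(w * T) / (w * T))

The same routine applies verbatim to the time-dependent frequency (4.52), where no closed-form answer is available.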
4.6 Scattering Theory and the Path Integral
Formal scattering theory can be succinctly described by the evolution operator UI (tf , ti )
in the interaction representation. Thus let the Hamiltonian be (in the simplest case of
two-body potential scattering)
H = H0 + V (4.54)
and let the interaction representation evolution operator, in the limit of infinitely remote initial and final times, act on free particle states |\vec{k}_a\rangle (plane wave eigenstates of the free Hamiltonian H_0),
\langle \vec{x} | \vec{k}_a \rangle = \frac{1}{(2\pi)^{3/2}}\; e^{i \vec{k}_a \cdot \vec{x}} ,   (4.57)
\vec{p}_a = \hbar \vec{k}_a , \; H_0 |\vec{k}_a\rangle = E_a |\vec{k}_a\rangle, to produce the stationary scattering states \psi_a^{\pm}(\vec{x}).
The S-matrix elements Sab are the transition amplitudes among the asymptotic in and
out scattering states |ψa± >,
S_{ab} = \langle \psi_a^- | \psi_b^+ \rangle .   (4.59)
Such matrix elements can readily be expressed in terms of the path integral. First of
all, we have
To pass from the momentum space matrix elements of the evolution operator for H to
the kernel (the position space matrix elements), we perform a double Fourier transform
(this is a special case of (2.61)),
\langle \vec{k}_a | e^{-(i/\hbar)(t_f - t_i) H} | \vec{k}_b \rangle = \frac{1}{(2\pi)^3} \int d\vec{x}_a \int d\vec{x}_b\;\; e^{i(\vec{k}_b \cdot \vec{x}_b - \vec{k}_a \cdot \vec{x}_a)}\; K(\vec{x}_a, \vec{x}_b, t_f - t_i) .   (4.63)
Representing, in the usual way, the kernel by the path integral, we conclude that the
S-matrix elements Sab are given by a Fourier transform of the path integral. Either per-
turbative or semi-classical expansion techniques, as described earlier on in this section,
can now be employed to obtain series expansions for these S-matrix elements.
4.7 Exercises
2. Using the generating functional Z0 [j] (4.19), express the 4-point function
Note that two different terms contribute to the integral at this order.
A Gaussian and Fresnel Integrals
The basic Gaussian integral is
I_0[\alpha] = \int_{-\infty}^{+\infty} dx\; e^{-\alpha x^2/2} = \sqrt{\frac{2\pi}{\alpha}} \qquad (\alpha > 0) .   (A.1)
This result can for instance be established by the standard trick of squaring the integral
and passing to polar coordinates. The first generalisation we will consider is the integral
I_1[\alpha, j] = \int_{-\infty}^{+\infty} dx\; e^{-\alpha x^2/2 + jx} \qquad (j \in \mathbb{R}) .   (A.2)
By completing the square and shifting the integration variable (using translation invari-
ance of the measure) one finds
I_1[\alpha, j] = \int_{-\infty}^{+\infty} dx\; e^{-\alpha(x - j/\alpha)^2/2 + j^2/2\alpha} = \sqrt{\frac{2\pi}{\alpha}}\; e^{j^2/2\alpha} .   (A.3)
Alternatively, one can observe that I_1[\alpha, j] satisfies the first order differential equation
\frac{\partial}{\partial j}\, I_1[\alpha, j] = (j/\alpha)\, I_1[\alpha, j] ,   (A.6)
called a Schwinger-Dyson Equation in the quantum field theory context, is evidently
solved by
I_1[\alpha, j] = c\; e^{j^2/2\alpha} ,   (A.7)
with the constant c = I_1[\alpha, 0] = \sqrt{2\pi/\alpha} fixed by the value of the integral at j = 0.
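For readers who like to double-check such manipulations, here is a two-line sympy verification of (A.3) and of the differential equation (A.6) (a quick sanity check, not part of the notes):

    import sympy as sp

    x, j = sp.symbols('x j', real=True)
    alpha = sp.symbols('alpha', positive=True)

    I1 = sp.integrate(sp.exp(-alpha * x**2 / 2 + j * x), (x, -sp.oo, sp.oo))

    print(sp.simplify(I1 - sp.sqrt(2 * sp.pi / alpha) * sp.exp(j**2 / (2 * alpha))))   # 0, i.e. (A.3)
    print(sp.simplify(sp.diff(I1, j) - (j / alpha) * I1))                              # 0, i.e. (A.6)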
The other generalisation that will interest us is the oscillatory Fresnel integral
J_0[a] = I_0[-ia] = \int_{-\infty}^{+\infty} dx\; e^{iax^2/2} .   (A.9)
It can be obtained by noting that the basic Gaussian integral is well defined for α ∈ C
with Re(α) > 0, and that it continues to be well defined for Re(α) → 0 provided that
Im(α) 6= 0. It follows that
J_0[a] = \sqrt{\frac{2\pi i}{a}} = \sqrt{\frac{2\pi}{a}}\; e^{i\pi/4} .   (A.10)
A useful way to remember this result is to relate it to the integral I_0 with a phase
depending on the sign of a, J_0[a] = \sqrt{2\pi/|a|}\; e^{i\pi\,\mathrm{sgn}(a)/4}. For the corresponding Fresnel integral with a source term, J_1[a, j] = \int_{-\infty}^{+\infty} dx\; e^{iax^2/2 + ijx}, one finds
J_1[a, j] = \sqrt{\frac{2\pi i}{a}}\; e^{-ij^2/2a} .   (A.13)
For an alternative proof of this identity see section 4.5.
The integrals of even powers x2m can be calculated by relating them to the integrals I0
or I1 . For example, one has
\int_{-\infty}^{+\infty} dx\; e^{-\alpha x^2/2}\, x^{2m} = (-2)^m \frac{\partial^m}{\partial \alpha^m} \int_{-\infty}^{+\infty} dx\; e^{-\alpha x^2/2} = (-2)^m \frac{\partial^m}{\partial \alpha^m}\, I_0[\alpha] .   (A.17)
This can be evaluated to give
\int_{-\infty}^{+\infty} dx\; e^{-\alpha x^2/2}\, x^{2m} = (-2)^m \sqrt{2\pi}\, \frac{\partial^m}{\partial \alpha^m}\, \alpha^{-1/2} = \sqrt{2\pi}\,(2m-1)!!\; \alpha^{-(2m+1)/2} .   (A.18)
However, this formula does not generalise in any useful way to path integrals. An
alternative (and very slick) way to obtain the result (A.18) is to introduce, for the
integrals
Z_\ell = \int_{-\infty}^{+\infty} dx\; e^{-\alpha x^2/2}\, x^\ell ,   (A.19)
the generating function
Z(j) = \sum_{\ell=0}^{\infty} \frac{j^\ell}{\ell!}\, Z_\ell .   (A.20)
More generally, one has
\int_{-\infty}^{+\infty} dx\; e^{-\alpha x^2/2}\, F(x) = F\!\left(\frac{\partial}{\partial j}\right) I_1[\alpha, j]\,\Big|_{j=0}   (A.25)
and
\int_{-\infty}^{+\infty} dx\; e^{-\alpha x^2/2 + F(x)} = e^{F(\partial/\partial j)}\; I_1[\alpha, j]\,\Big|_{j=0}   (A.26)
for any function F (x). This justifies the name generating function and explains why
the central object of interest for integrals with Gaussian weight is the integral Z(j) =
I1 [α, j].
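As a simple illustration of (A.25) (again a small sympy check, not part of the notes), the fourth moment of the Gaussian can be computed either directly or by differentiating the generating function I_1[\alpha, j] with respect to the source:

    import sympy as sp

    x, j = sp.symbols('x j', real=True)
    alpha = sp.symbols('alpha', positive=True)

    I1 = sp.sqrt(2 * sp.pi / alpha) * sp.exp(j**2 / (2 * alpha))          # (A.3)

    direct = sp.integrate(sp.exp(-alpha * x**2 / 2) * x**4, (x, -sp.oo, sp.oo))
    via_source = sp.diff(I1, j, 4).subs(j, 0)                             # F(x) = x^4 in (A.25)

    print(sp.simplify(direct - via_source))   # 0; both equal 3 sqrt(2 pi) alpha**(-5/2), cf. (A.18)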
This generalises to
\int_{-\infty}^{+\infty} dx \int_{-\infty}^{+\infty} dy\;\; e^{-(\alpha x^2 + \beta y^2 + 2\gamma xy)/2} = \frac{2\pi}{\sqrt{\alpha\beta - \gamma^2}} ,   (A.28)
the exponent of the integrand can be written as A_{ab} x^a x^b with x^a = (x, y), and the above
identity reads
\int d^2x\;\; e^{-A_{ab} x^a x^b/2} = \frac{(\sqrt{2\pi})^2}{\sqrt{\det A}} = \det\!\left(\frac{A}{2\pi}\right)^{-1/2} .   (A.30)
Thus the task of calculating Gaussian integrals and their various relatives and descen-
dants has been reduced to the purely algebraic task of calculating determinants of
matrices.
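A quick numerical sanity check of (A.28) and (A.30) (a throwaway script, not part of the notes; the values of \alpha, \beta, \gamma below are arbitrary, subject only to positive definiteness):

    import numpy as np
    from scipy.integrate import dblquad

    a, b, c = 2.0, 3.0, 0.7                       # alpha, beta, gamma with a*b - c**2 > 0
    A = np.array([[a, c], [c, b]])

    integral, _ = dblquad(lambda y, x: np.exp(-(a * x**2 + b * y**2 + 2 * c * x * y) / 2),
                          -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)

    print(integral, 2 * np.pi / np.sqrt(a * b - c**2), 2 * np.pi / np.sqrt(np.linalg.det(A)))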
The proof of the fundamental identities (A.31,A.32) is remarkably simple and will be
left as an exercise. The strategy is to first prove them for A diagonal (this is trivial) and
to then use the fact that any real symmetric A can be diagonalised by an orthogonal
transformation, combined with the rotation invariance of the measure dd x.
B Infinite Product Identities
The basic identity is
\frac{\sin x}{x} = \prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right) .   (B.1)
The validity of this formula is plausible as the right hand side has exactly the same zeros
and identical behaviour as x → 0 as the left hand side. It can be proved rigorously in a
variety of ways. The easiest (and closest in spirit to the rough argument in the previous
sentence) is to extend (sin x)/x to a holomorphic function (sin z)/z in the complex plane
and factorise using the Mittag-Leffler pole expansion - see e.g. section 7 of
The single identity above can be used to generate an infinite number of other identities,
in the spirit of Euler. The first non-trivial one of these results from comparing the
quadratic term in the expansion of the left hand side,
\frac{\sin x}{x} = \frac{x - x^3/6 + \ldots}{x} = 1 - \frac{x^2}{6} + \ldots   (B.2)
with the quadratic term arising from the right hand side,
\prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2\pi^2}\right) = 1 - \sum_{n=1}^{\infty}\frac{x^2}{n^2\pi^2} + \ldots   (B.3)
Comparing the two, one deduces the celebrated result
\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6} .   (B.4)
There are a host of other occasionally useful infinite product identities (none of which,
however, will be used in these notes). For example, closely related infinite product
representations of other trigonometric and hyperbolic functions are
\frac{\sinh x}{x} = \prod_{n=1}^{\infty}\left(1 + \frac{x^2}{n^2\pi^2}\right)   (B.5)
\cos x = \prod_{n=1}^{\infty}\left(1 - \frac{x^2}{(n - 1/2)^2\pi^2}\right)   (B.6)
\cosh x = \prod_{n=1}^{\infty}\left(1 + \frac{x^2}{(n - 1/2)^2\pi^2}\right)   (B.7)
There is also the more mysterious identity
\frac{\sin x}{x} = \prod_{n=1}^{\infty} \cos\frac{x}{2^n} .   (B.8)
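Truncating the products, these identities are easy to test numerically (a small spot check, not part of the notes; note that the truncated Euler product (B.1) converges rather slowly, with an error of order 1/N):

    import numpy as np

    x, N = 1.7, 2000
    target = np.sin(x) / x

    euler = np.prod([1 - x**2 / (n**2 * np.pi**2) for n in range(1, N + 1)])   # (B.1), truncated
    viete = np.prod([np.cos(x / 2**n) for n in range(1, 60)])                  # (B.8), truncated

    print(target, euler, viete)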
A famous infinite product identity of a rather different kind is the Euler product representation of the Riemann ζ-function ζ(s) = \sum_{n=1}^{\infty} n^{-s} (convergent for Re(s) > 1),
\zeta(s) = \prod_{n=1}^{\infty} \frac{1}{1 - p_n^{-s}} ,   (B.10)
where p_n = 2, 3, 5, 7, 11, \ldots is the sequence of prime numbers. This identity provides the
cornerstone of the relation between number theory and complex analysis.
C Equivalence of the VVPM and GY Formulae
The article of Kleinert and Chervyakov cited in section 3.5 contains a nice proof of the
classical identity (3.63) expressing the equivalence of the VVPM (3.49) and Gelfand-
Yaglom (3.51) results for the ratio of fluctuation determinants.
Consider the classical solution xc (t) with xc (tf,i ) = xf,i . This solution can equally well
be regarded as a function of the initial position xi and velocity ẋi ,
Writing this as a linear combination of two linearly independent solutions f1 (t) and
f2 (t) of the oscillator equation,
and imposing the conditions xc (ti ) = xi and ẋc (ti ) = ẋi , one finds
At this point, one can either use directly the Hamilton-Jacobi relation
\frac{\partial S[x_c]}{\partial x_i} = -p_i = -m\dot{x}_i ,   (C.5)
to deduce (3.63). Or one can evaluate explicitly the classical action in terms of Fω (t)
and f1 (t) ≡ Gω (t) (the “dual” GY solution, which plays a role analogous to the GY
solution when one considers periodic or anti-periodic boundary conditions instead of
zero boundary conditions). Thus one has
Since only boundary terms contribute to the classical action, it is simply given by (3.48)
S[x_c] = \frac{m}{2}\left[x_f\, \dot{x}_c(t_f) - x_i\, \dot{x}_c(t_i)\right] .   (C.8)
Using (C.7) to eliminate ẋi,f in favour of xi,f , and using the fact that the Wronskian of
Fω (t) and Gω (t) is t-independent,
It follows that
\frac{\partial^2 S[x_c]}{\partial x_i\, \partial x_f} = -\frac{m}{F_\omega(t_f)} ,   (C.11)
as we set out to show.
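For the special case of constant frequency one can verify (C.11) directly (a sympy sketch, not part of the notes; here t_i = 0, t_f = T and the Gelfand-Yaglom solution is F_\omega(t) = \sin(\omega t)/\omega):

    import sympy as sp

    m, w, T, xi, xf, t = sp.symbols('m w T x_i x_f t', positive=True)

    # classical oscillator trajectory with x_c(0) = x_i and x_c(T) = x_f
    xc = xi * sp.cos(w * t) + (xf - xi * sp.cos(w * T)) * sp.sin(w * t) / sp.sin(w * T)
    S = sp.integrate(sp.Rational(1, 2) * m * (sp.diff(xc, t)**2 - w**2 * xc**2), (t, 0, T))

    d2S = sp.diff(S, xi, xf)                  # mixed second derivative of the classical action
    Fw = sp.sin(w * T) / w                    # Gelfand-Yaglom solution at t = T

    vals = {m: 1, w: sp.Rational(13, 10), T: sp.Rational(9, 10)}
    print((d2S + m / Fw).subs(vals).evalf())  # approximately 0, i.e. d2S = -m/F_w(T)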
D ζ-Function Regularisation
The purpose of this appendix is to give a heuristic proof of the ζ-function identities
\zeta(0) = -\tfrac{1}{2} \qquad \big(\textstyle\sum_{n=1}^{\infty} 1\big)
\zeta(-1) = -\tfrac{1}{12} \qquad \big(\textstyle\sum_{n=1}^{\infty} n\big) \qquad\qquad (D.1)
\zeta(-2) = 0 \qquad \big(\textstyle\sum_{n=1}^{\infty} n^2\big)
where
\zeta(s) = \sum_{n=1}^{\infty} n^{-s} .   (D.2)
Let us, for reasons that will become apparent below, start with the sum
Z(\epsilon) = \sum_{n=1}^{\infty} e^{-n\epsilon} ,   (D.3)
which we consider as the regularisation of \sum_{n=1}^{\infty} 1 (to which it reduces as \epsilon \to 0). This
sum is elementary,
Z(\epsilon) = \frac{1}{e^{\epsilon} - 1} .   (D.4)
The derivatives of Z(\epsilon) are
Z'(\epsilon) = -\sum_{n=1}^{\infty} n\, e^{-n\epsilon} , \qquad Z''(\epsilon) = \sum_{n=1}^{\infty} n^2\, e^{-n\epsilon} ,   (D.5)
etc. Therefore, just as we formally have Z(0) = \sum_n 1, we have
\sum_{n=1}^{\infty} n = -Z'(0) , \qquad \sum_{n=1}^{\infty} n^2 = Z''(0) ,   (D.6)
Expanding (D.4) and its derivatives in a Laurent series around \epsilon = 0 gives
Z(\epsilon) = \frac{1}{\epsilon} - \frac{1}{2} + O(\epsilon) , \qquad -Z'(\epsilon) = \frac{1}{\epsilon^2} - \frac{1}{12} + O(\epsilon) , \qquad Z''(\epsilon) = \frac{2}{\epsilon^3} + 0 + O(\epsilon) .
Evidently, in each case the first term is singular as \epsilon \to 0. Now comes the trickery.
Assume that, for present purposes, the regularisation (analytic continuation) of the
Riemann ζ-function amounts to nothing more and nothing less than that this term is
absent. Then we find
\Big(\sum_{n=1}^{\infty} 1\Big)_{\mathrm{reg}} = Z(0)_{\mathrm{reg}} = -\frac{1}{2}
\Big(\sum_{n=1}^{\infty} n\Big)_{\mathrm{reg}} = -Z'(0)_{\mathrm{reg}} = -\frac{1}{12}
\Big(\sum_{n=1}^{\infty} n^2\Big)_{\mathrm{reg}} = Z''(0)_{\mathrm{reg}} = 0 ,   (D.9)
which agrees precisely with the values of ζ(s) for s = 0, −1, −2 respectively.
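These values are easily cross-checked against the analytic continuation of the ζ-function itself (a short sympy script, not part of the notes): the constant term in the Laurent expansion of Z, -Z' and Z'' around \epsilon = 0 indeed reproduces \zeta(0), \zeta(-1) and \zeta(-2).

    import sympy as sp

    eps = sp.symbols('epsilon', positive=True)
    Z = 1 / (sp.exp(eps) - 1)                                        # (D.4)

    for k, s in [(0, 0), (1, -1), (2, -2)]:
        expr = (-1)**k * sp.diff(Z, eps, k)                          # Z, -Z', Z''
        finite_part = sp.series(expr, eps, 0, 1).removeO().coeff(eps, 0)
        print(s, finite_part, sp.zeta(s))                            # -1/2, -1/12, 0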
This heuristic procedure can be made somewhat more respectable by using the integral representation \Gamma(s) = \int_0^{\infty} dt\; e^{-t}\, t^{s-1} of the \Gamma-function: rescaling t \to nt and summing over n, one finds
\Gamma(s)\,\zeta(s) = \sum_{n=1}^{\infty} \int_0^{\infty} dt\; e^{-nt}\, t^{s-1} = \int_0^{\infty} dt\; \frac{t^{s-1}}{e^t - 1} .   (D.14)
From this, the expansion of (e^t - 1)^{-1} (as above) and the analytic continuation of the
Gamma-function, one can then obtain the analytic continuation of the ζ-function.
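For what it is worth, (D.14) is also easily checked numerically for a specific value of s (here s = 2, where \Gamma(2)\zeta(2) = \pi^2/6; a quick mpmath check, not part of the notes):

    from mpmath import mp, quad, gamma, zeta, expm1, pi, inf

    mp.dps = 30
    lhs = gamma(2) * zeta(2)
    rhs = quad(lambda t: t / expm1(t), [0, inf])
    print(lhs, rhs, pi**2 / 6)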