Rudnick Notes
Joseph Rudnick
Department of Physics and Astronomy, UCLA
Again, in the simplest case the length of each step is fixed and the walker
takes steps at regular time intervals. We will consider variations on the above
standard set of conditions later on.
Number of walks
Now we turn to quantities of fundamental interest in discussions of the properties of random walkers and random walks. The most widely used, and generally most widely applicable, of those quantities is the number of random walks, subject to certain conditions. For example, consider the following simple question, stated a bit more concisely and technically: how many N-step walks are there that start at ~x and end at ~y? Figure 1 shows three random walks starting and ending at the same point. The question is, how many such paths are there? This quantity, which we'll call C(N; ~x, ~y), plays the same role in random walk statistics that the partition function plays in statistical mechanics.
Figure 1: Three random walks starting and ending at the same point.
I mean by this, consider the following definition of the partition function
$$Z_N = \sum_{N\text{-particle states}} e^{-\beta E_{\rm state}} \qquad (1)$$
where the sum above is, as clearly indicated, over all of the states of a given N-particle system. The exponential is the standard Boltzmann factor, $\beta$ being $1/k_B T$ and $E_{\rm state}$ the energy of the given state. By comparison, the quantity C(N; ~x, ~y) is given by the equation
$$C(N; \vec{x}, \vec{y}) = \sum_{\substack{N\text{-step walks starting at } \vec{x} \\ \text{and ending at } \vec{y}}} 1 \qquad (2)$$
In the case of the sum on the right hand side of (2), the “Boltzmann factor”
is one. We can alter this if the conditions on the walk change, and in fact we
will do so shortly.
Recursion relations
So, how do we evaluate the sum in question? One very useful method makes use of a relationship between the number of N-step walks and the number of walks that require N + 1 steps. The argument leading to this relationship is fairly transparent. Here is how it goes. If I have managed to find out how many N-step walks start at a given point ~x and terminate at all possible end points ~y', then counting all N + 1 step walks starting at ~x and terminating at a particular end point ~y just involves adding up all N-step walks that start at ~x and end up at a point from which the walker can get to ~y in one step. Suppose I know the value of C(N; ~x, ~y) for all endpoints ~y. Then, C(N + 1; ~x, ~y) is given by
$$C(N+1; \vec{x}, \vec{y}) = \sum_{\vec{w}_i} C(N; \vec{x}, \vec{w}_i) \qquad (3)$$
Here, the ~w_i's are the locations of the points from which the walker can make it to the location ~y in one step. Figure 2 illustrates the process summarized in Eq. (3).[1]
[1] In the figure, as noted in the caption, the process depicted is appropriate to a random walker on a lattice. In general, I will not be precise about whether this assumption applies, as most results are qualitatively independent of that particular restriction.
Figure 2: The sites ~w_1, ~w_2, ~w_3, ~w_4 from which a walker on a lattice can reach the point ~y in a single step.
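The recursion (3) is easy to iterate numerically. The minimal Python sketch below (an illustration with arbitrarily chosen step counts, for a walker on the simple cubic lattice) starts from C(0; ~x, ~y) = δ_{~x,~y} and checks that the total number of N-step walks comes out equal to 6^N; it also reports C(N; ~x, ~x), the number of walks that return to their origin.

```python
# Sketch: iterate the recursion C(N+1; x, y) = sum_i C(N; x, w_i)
# for a walker on a simple cubic lattice (6 nearest neighbors).
from collections import defaultdict

NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step_counts(counts):
    """One application of Eq. (3): each site's count is distributed to its six
    neighbors; since the neighbor relation is symmetric, this is the same as
    summing, for each y, the counts at the sites w_i from which y is reachable."""
    new_counts = defaultdict(int)
    for (x, y, z), c in counts.items():
        for dx, dy, dz in NEIGHBORS:
            new_counts[(x + dx, y + dy, z + dz)] += c
    return new_counts

counts = {(0, 0, 0): 1}          # C(0; x, y) = delta_{x,y}, with x at the origin
for N in range(1, 7):
    counts = step_counts(counts)
    total = sum(counts.values())
    # total equals 6^N; the last column is C(N; 0, 0), the returning walks
    print(N, total, 6**N, counts[(0, 0, 0)])
```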
Furthermore, we will assume that the same sort of expansion is possible with
respect to the locations of the starting and ending points. Let’s focus our
attention on the kind of walk on a lattice shown in Fig. 2. Given that we
can write
$$\vec{w}_i = \vec{y} + (\vec{w}_i - \vec{y}) \equiv \vec{y} + \vec{\Delta}_i \qquad (5)$$
we can then replace the right hand side of the recursion relation equation (3) by
$$\sum_{\vec{\Delta}_i} C(N; \vec{x}, \vec{y} + \vec{\Delta}_i) = \sum_{\vec{\Delta}_i} \left( C(N; \vec{x}, \vec{y}) + \vec{\Delta}_i \cdot \vec{\nabla}_y C(N; \vec{x}, \vec{y}) + \frac{1}{2} \sum_{l,m} \Delta_{i,l} \Delta_{i,m} \frac{\partial^2}{\partial y_l\, \partial y_m} C(N; \vec{x}, \vec{y}) + \cdots \right) \qquad (6)$$
The quantity a on the right hand side of (9) is the magnitude of each $\vec{\Delta}_i$, and $y_k$ is the k-th component of the position vector ~y.
Collecting all the results together, we are left with the following equation for the quantity C(N; ~x, ~y).
role of an imaginary time. As it turns out, the equation is not quite right. We have made too much of an approximation in utilizing the first two terms in the Taylor expansion in N in our treatment of the left hand side of the recursion relation. To see this, let's figure out what the equation tells us about the number of N-step walks that start at ~x and end up anywhere. We do this by simply summing over end-points ~y. Making use of the fact that the last term on the right hand side of (11) can be expressed in terms of perfect derivatives, and that the integral over all ~y of the function C(N; ~x, ~y) converges, because when ~y is sufficiently far from ~x no N-step walk will connect the two points, we have, on integrating both sides of (11) over ~y,
$$\frac{\partial}{\partial N} \int C(N; \vec{x}, \vec{y})\, d^3y = 5 \int C(N; \vec{x}, \vec{y})\, d^3y \qquad (12)$$
or, defining
$$C(N; \vec{x}) = \int C(N; \vec{x}, \vec{y})\, d^3y \qquad (13)$$
$$\frac{\partial C(N; \vec{x})}{\partial N} = 5\, C(N; \vec{x}) \qquad (14)$$
The solution to this equation is
$$C(N; \vec{x}) = C(0; \vec{x})\, e^{5N} \qquad (15)$$
Equation (15) tells us that the number of N-step walks increases exponentially in the number of steps. Does this make sense? Consider the process. At each step, the walker on the cubic lattice has a choice of six neighboring sites to visit. This means that the total number of walks increases by a factor of 6 at each step, or the total number of N-step walks goes as $6^N = e^{N \ln 6}$. This is qualitatively consistent with the result displayed in (15). Is it quantitatively consistent? According to that result the number of walks increases by the factor $e^5 \approx 148.4$ at each step. We are right about the exponential growth in the number of walks with steps taken but wildly off with respect to the actual rate of exponential increase. The moral is cautionary. Be careful when you treat a discrete quantity as if it were continuous.
There is another quantity related to the number of N -step walks for which
the kind of equation displayed in (11) is absolutely relevant and asymptoti-
cally correct. This quantity is equal to the fraction of N -step walks starting
at ~x and ending at ~y. To obtain this quantity, we divide C(N; ~x, ~y) by the total number of N-step walks, which we've seen is $6^N$. Let's call this quantity D(N; ~x, ~y). The recursion relation corresponding to (3) is
$$D(N+1; \vec{x}, \vec{y}) = \frac{1}{6} \sum_{\vec{w}_i} D(N; \vec{x}, \vec{w}_i) \qquad (16)$$
Figure 3: Five random walkers have started at a common point and have taken 100 steps away from it.
This leads to the following differential equation for the function d(N, ~q):
$$\frac{\partial d(N, \vec{q})}{\partial N} = -\frac{a^2 q^2}{6}\, d(N, \vec{q}) \qquad (20)$$
The solution to this equation is
$$d(N, \vec{q}) = d(0, \vec{q})\, e^{-a^2 q^2 N/6} \qquad (21)$$
$$= \frac{1}{(2\pi)^3} \left( \frac{6\pi}{a^2 N} \right)^{3/2} e^{-3|\vec{y} - \vec{x}|^2 / 2a^2 N} \qquad (22)$$
In getting to the last line of (22), I made use of the fact that the integral over
~q can be broken up into three independent integrations of the components of
that vector and then of the classic result for Gaussian integrations
$$\int_{-\infty}^{\infty} e^{Ax - Bx^2}\, dx = \sqrt{\frac{\pi}{B}}\; e^{A^2/4B} \qquad (23)$$
All three properties represent key qualitative features of diffusion and the random walk. As a test of the validity of our expression, we can integrate the expression on the last line of (22) over all values of ~y. Making use of (23), we verify that $\int D(N; \vec{x}, \vec{y})\, d^3y = 1$.
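The Gaussian form is also easy to check by direct simulation. The sketch below (a minimal illustration, with arbitrarily chosen numbers of steps and walkers) generates cubic-lattice walks with a = 1 and compares the sample variance of one component of the displacement with the value $a^2 N/3$ implied by the last line of (22), and the mean square displacement with $a^2 N$.

```python
# Monte Carlo check of the Gaussian endpoint distribution for a cubic-lattice walker.
import random

random.seed(1)
N_STEPS, N_WALKERS, a = 100, 20000, 1.0
STEPS = [(a, 0, 0), (-a, 0, 0), (0, a, 0), (0, -a, 0), (0, 0, a), (0, 0, -a)]

sum_x = sum_x2 = sum_r2 = 0.0
for _ in range(N_WALKERS):
    x = y = z = 0.0
    for _ in range(N_STEPS):
        dx, dy, dz = random.choice(STEPS)
        x += dx; y += dy; z += dz
    sum_x += x
    sum_x2 += x * x
    sum_r2 += x * x + y * y + z * z

var_x = sum_x2 / N_WALKERS - (sum_x / N_WALKERS) ** 2
print("sample Var(x1):", var_x, " predicted a^2 N / 3:", a * a * N_STEPS / 3)
print("sample <r^2>  :", sum_r2 / N_WALKERS, " predicted a^2 N:", a * a * N_STEPS)
```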
Given our solution for D(N; ~x, ~y), what can we say about the original quantity of interest, C(N; ~x, ~y)? In light of the relationship between the two, it is almost a triviality to obtain the second function from the first. In fact, we have
$$C(N; \vec{x}, \vec{y}) = 6^N D(N; \vec{x}, \vec{y}) = \frac{6^N}{(2\pi)^3} \left( \frac{6\pi}{a^2 N} \right)^{3/2} e^{-3|\vec{y} - \vec{x}|^2 / 2a^2 N} \qquad (24)$$
This tells us that the number of N step walks starting at ~x and ending at ~y has
the first two properties listed above, and additionally increases exponentially
in the number of steps. In fact a general feature of random walks will be this
exponential growth in the number of possibilities with the number of steps.
The power-law modification at the origin that is noted as the third item in
the list should also be kept in mind.
The current density of walkers
In the case of dye particles diffusing in water, we can express the invasion
of those impurities in terms of a current density, through the continuity
equation
$$\frac{\partial d}{\partial N} = -\vec{\nabla} \cdot \vec{j} \qquad (25)$$
In light of Eq. (17), we can easily intuit the current density
$$\vec{j} = -\frac{a^2}{6}\, \vec{\nabla}_y D(N; \vec{x}, \vec{y}) \qquad (26)$$
The relationship (26) between current density and the density of diffusers is
an example of Fick’s Law.
Suppose, now, that there is an absorber, say a wall that sucks up any walker that hits it. How does that affect the density of walkers? We know that the overall effect has to be to decrease it, as walkers that hit the wall are removed from the distribution. Without going into details, allow me to
simply say that the consequence on the density of walkers is to force it to
zero at the points of impact of the walkers and the absorber. To be more
precise, the density extrapolates to zero just beyond the point of impact, but
the difference between the more proper and precise effect and the effective
boundary condition that we will apply is not important for the phenomena
that will be discussed here.
Let’s start by considering the case of a walker, or a collection of walkers,
that begin some distance from an absorbing wall. We want a distribution
that starts out as a delta function, that obeys the equations above in the
region in which the walkers take their steps and that obeys the boundary
conditions that were asserted above at the wall. To be specific, let’s look
at the case of a collection of walkers that start out a distance l from a wall
that occupies the plane x1 = 0. The walkers start out in the half space
x_1 > 0. We will locate the starting point at (l, 0, 0). Then, the solution for
the distribution D(N ; ~x, ~y ) that is consistent with the boundary conditions
is
$$D(N; \vec{x}, \vec{y}) = \frac{1}{(2\pi)^3} \left( \frac{6\pi}{a^2 N} \right)^{3/2} \left[ e^{-3\left((y_1 - l)^2 + y_2^2 + y_3^2\right)/2a^2 N} - e^{-3\left((y_1 + l)^2 + y_2^2 + y_3^2\right)/2a^2 N} \right] \qquad (27)$$
This is reminiscent of the image charge solution for the electrostatic potential
in the presence of a conducting surface. In fact, one way to derive it is to
posit the existence of “image walkers” that meet and annihilate walkers that
impinge on the absorbing surface. See Fig. 4.
Figure 4: The absorbing wall at x_1 = 0 and the ending point of a walk, together with the image walker whose contribution cancels the density at the wall.
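One way to see the image solution (27) at work numerically is sketched below (my own illustration; the parameters are arbitrary, and the lattice rule "absorb when x_1 reaches 0" only approximates the continuum boundary condition, so agreement is good but not exact). The surviving fraction implied by integrating (27) over the half space $y_1 > 0$ is an error function, and a direct simulation of absorbed walkers comes out close to it.

```python
# Compare the surviving fraction implied by the image solution (27) with a direct
# simulation of cubic-lattice walkers absorbed when they reach the plane x1 = 0.
import math
import random

random.seed(2)
N_STEPS, N_WALKERS, a, l = 100, 20000, 1.0, 12

sigma = math.sqrt(a * a * N_STEPS / 3.0)                   # per-component width from Eq. (22)
survival_theory = math.erf(l / (sigma * math.sqrt(2.0)))   # integral of Eq. (27) over y1 > 0

survivors = 0
for _ in range(N_WALKERS):
    x1 = l
    alive = True
    for _ in range(N_STEPS):
        r = random.random()
        if r < 1.0 / 6.0:
            x1 += a
        elif r < 2.0 / 6.0:
            x1 -= a
        # the other four moves change x2 or x3 and do not matter for absorption
        if x1 <= 0:
            alive = False
            break
    if alive:
        survivors += 1

print("simulated survival:", survivors / N_WALKERS, " image-solution estimate:", survival_theory)
```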
$$\frac{a^2}{6} \nabla^2 D(\vec{x}) = -s(\vec{x}) \qquad (29)$$
Here, I have removed N from the equation, as there is no dependence on that
quantity.
Of course, we have all seen Eq. (29). It is Poisson's equation. We all know how to solve it in special cases. If the source term is a delta function,
corresponding to the introduction of walkers at a particular point in space,
say at ~x = 0, then
$$D(\vec{x}) = \frac{3}{2\pi a^2}\, \frac{1}{|\vec{x}|} \qquad (30)$$
The steady state distribution of walkers is governed by the same equation,
and it takes on the same form, as the electrostatic potential. Let’s make
use of this fact to consider a couple of classic problems associated with the
random walk.
Gambler’s ruin
The problem of the gambler’s ruin has been cited by Montroll and Shlesinger
as the first example of a situation that can be analyzed in terms of a random
walk [Montroll and Shlesinger, 1983]. The solution is due to the celebrated
mathematician de Moivre. The question being asked is the following one.
Given a game of chance between two players in which neither one has an
advantage, suppose the first player starts out with an amount of money
equal to A and the second player has an amount of money equal to B. Each
will play until he or she has either won all the money or has exhausted his
or her resources. What is the likelihood that the first player will walk away
the winner?
A way to think of the problem is in terms of a random walker in one
dimension that starts off in a bounded interval. The walker is a distance
A from one of the boundaries and B from the other one. As soon as the
walker hits one of the boundaries it is absorbed, corresponding to the “ruin”
of one of the gamblers. Now, to apply the steady state model to the analysis
of Gambler’s Ruin, we imagine a constant source of walkers at the starting
point, located at x = A as shown in Fig. 5. This replaces the single pair
of gamblers by an ensemble of them. We can think of having a very large
tournament in which games are being started at a constant rate and continued
until a player is out of money. This source leads to a steady state distribution
Figure 5: The source, at the location of the vertical dashed line, induces the
steady state distribution d(x), as shown.
Fortunately, for the purposes of completing our analysis we do not need to work out the value of K. We simply need the ratio of random walker currents, and this is easy given (26). According to that version of Fick's law, we have for the walker current density
$$j(x) \propto -\frac{d}{dx}\, d(x) \propto \begin{cases} -1/A & 0 < x < A \\ \phantom{-}1/B & A < x < A + B \end{cases} \qquad (33)$$
In other words, walkers are going off to the left and eventually being absorbed by the boundary at x = 0 at a relative rate 1/A, while walkers are wandering off to the right at a relative rate equal to 1/B. Exit to the left corresponds to the ruin of the gambler with resources amounting to A, while exit to the right corresponds to success for that gambler and ruin for the gambler that starts with an amount equal to B. Let's call the first gambler Gambler A and the second one Gambler B. Then the probability P_A that Gambler A wins divided by the probability that Gambler B wins is given by
$$\frac{P_A}{P_B} = \frac{1/B}{1/A} = \frac{A}{B} \qquad (34)$$
The gambler with more resources is more likely to win, and the relative
likelihood is equal to the ratio of initial resources.
And that is why you are statistically doomed to lose in Las Vegas. Even
if the odds are not against you, unless you can match the resources of the
house you will eventually end up on the short end.
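The result (34) is also easy to test by simulating the fair game directly. The minimal sketch below (illustrative stakes A = 3 and B = 7, chosen arbitrarily) estimates the probability that the first gambler ends up with everything; it should come out close to A/(A + B), which is what (34) implies.

```python
# Monte Carlo check of the gambler's ruin result: with fair odds, the player who
# starts with A units wins all the money with probability A/(A + B).
import random

random.seed(3)
A, B, N_GAMES = 3, 7, 100000
wins_for_A = 0
for _ in range(N_GAMES):
    fortune = A                      # player A's current holdings
    while 0 < fortune < A + B:
        fortune += 1 if random.random() < 0.5 else -1
    if fortune == A + B:             # player A has won everything
        wins_for_A += 1

print("simulated P_A:", wins_for_A / N_GAMES, " predicted A/(A+B):", A / (A + B))
```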
of interest) that is equal to zero at the surface of the sphere and that is a
constant infinitely far away from the sphere. Placing the origin at the center
of the sphere, we can also demand spherical symmetry about that origin.
The solution of interest is then readily intuited. It is
$$d(\vec{r}) = c_0 \left( 1 - \frac{r_0}{r} \right) \qquad (35)$$
where r is the distance from the center of the sphere, and the expression on
the right hand side of (35) holds when r > r0 .
We can use this formula to calculate the rate at which the walkers are
absorbed by the sphere. To do this, we calculate the current density of
walkers by taking the gradient of the right hand side of (35), and then by
calculating the current flux into the sphere. From (26), we have
$$\vec{j}(\vec{r}) = -\beta \hat{r}\, \frac{\partial}{\partial r} \left[ c_0 \left( 1 - \frac{r_0}{r} \right) \right] = -\hat{r}\, \frac{\beta c_0 r_0}{r^2} \qquad (36)$$
where I have replaced the constant depending on the size of the walker’s step
by the all-purpose symbol β. This tells us that the flux of walkers into the
sphere is equal to
$$\beta c_0\, \frac{r_0}{r_0^2} \times 4\pi r_0^2 = 4\pi \beta c_0 r_0 \qquad (37)$$
We learn that the number of walkers that are absorbed by the sphere scales
linearly with the radius of the sphere. If the sphere represents a cell, and the
walkers are nutrients in the broth in which the cell sits, the rate at which
the cell takes those nutrients in is proportional to its linear size. On the
other hand, if the sphere really is a cell, it has metabolic requirements that
scale as its volume—in other words, as r03 . Ultimately, those requirements
will overwhelm the cell’s ability to absorb nutrients through diffusion, as the
size of the cell increases. In the case of an immobile cell, these considerations
place a limit on the maximum size that it can be. In general, the fact that
metabolic needs will exceed the rate at which nutrients can be gathered as
they diffuse inwards will mandate a different strategy for the acquisition of
biological fuel for any organism that is larger than a certain size [Berg, 1993].
It is also possible to calculate the distribution of walkers when there is
a point source outside of an absorbing sphere. In this case, we make use of
a modification of the image charge. If the source is a distance R away from
the center of a sphere with radius r0 , then the image source is a distance
ρ = r02 /R from the center of the sphere, and the “strength” of the source
is equal to −r0 /R times the strength of the original, external source. It is
possible to calculate how many of the walkers escape from the sphere by
taking the integral of the current flux through a surface that surrounds both
the source and the absorbing sphere. We can short-circuit this calculation
by noting that a version of Gauss’s law holds here, which tells us that the
net flux through a surface surrounding a set of sources is proportional to the
total strength of those sources. In this case, the total strength is equal to
the strength of the original source plus that of the image source, which is the
strength of the original source multiplied by 1 − r0 /R. The fraction of the
total number of walkers that emanate from the source that also escape from
the sphere is (R − r0 )/R.
Figure 6: The source and the image source in the case of an absorbing sphere.
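The image construction is easy to verify numerically. The short sketch below (an illustration with arbitrarily chosen r_0 and R) uses the 1/distance form of the steady-state density, Eq. (30), for the source and its image; the combined density comes out zero (to rounding error) at randomly chosen points on the sphere's surface, and the escaping fraction read off from the total source strength is (R − r_0)/R, as stated above.

```python
# Numerical check of the image construction for an absorbing sphere: a unit source at
# distance R from the center plus an image of strength -r0/R at distance rho = r0**2/R
# gives a density that vanishes on |r| = r0.
import math
import random

random.seed(4)
r0, R = 1.0, 2.5
rho = r0 * r0 / R                       # image position, on the line joining center and source
strength_image = -r0 / R                # image strength relative to the original source

def density(p):
    """Steady-state walker density (up to a constant), built from 1/distance kernels
    as in Eq. (30), for the source at (R,0,0) and its image at (rho,0,0)."""
    d_src = math.dist(p, (R, 0.0, 0.0))
    d_img = math.dist(p, (rho, 0.0, 0.0))
    return 1.0 / d_src + strength_image / d_img

for _ in range(5):
    # random point on the sphere surface
    u, v = random.random(), random.random()
    theta, phi = math.acos(2 * u - 1), 2 * math.pi * v
    p = (r0 * math.sin(theta) * math.cos(phi),
         r0 * math.sin(theta) * math.sin(phi),
         r0 * math.cos(theta))
    print("density on sphere surface:", density(p))     # ~1e-16, i.e. zero to rounding

print("escaping fraction (Gauss's law):", 1 + strength_image)   # equals (R - r0)/R
```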
inner surface is, then, as given by Gauss's law,
$$Q = -\frac{1}{4\pi} \int \vec{\nabla}\phi(\vec{r}) \cdot d\vec{S} \qquad (38)$$
Making use of the relationships we’ve already established between electro-
static quantities and those in steady state diffusion, we can state that the
following holds
$$\frac{-\int \vec{j}(\vec{r}) \cdot d\vec{S}}{c_\infty} = 4\pi\beta\, C \qquad (39)$$
where c∞ is the concentration of walkers far away from the absorbing surface
and C is the capacitance of a capacitor consisting of the absorbing surface
surrounded, at a great distance, by a spherical shell. See Figure 7. For
Figure 7: The absorbing inner surface surrounded, at a great distance, by an outer sphere.
number of absorbing sites, or receptors, distributed across its surface. Imagine that there are n of those receptors, and that the radius of a given site is a. We'll also assume that $\sqrt{n}\, a \ll r_0$, where $r_0$ is the sphere's radius. This means that the total surface area accounted for by the receptors is a small fraction of the surface area of the sphere. Let's represent this collection of receptors as a network of small conducting surfaces, arranged in the shape of a spherical shell. This network is utilized as one of the elements in a capacitor. What is the capacitance of the resulting capacitor?
First, note that the distance between the receptors is of the order of $r_0/\sqrt{n}$. Then, notice that the potential a distance r away from the sphere, where $r \gg r_0/\sqrt{n}$, is going to be the same as the electrostatic potential generated by a uniform distribution of the receptors, smeared out over the sphere. This
is because at such a distance, the difference between a set of discrete charge
and a uniformly distributed charge is negligible, as far as the electrostatic
potential goes. If a charge of Q/n is placed on each receptor, then the electro-
static potential of this array of charges goes as Q/(r + r0 ). The electrostatic
potential near the sphere is, then, well approximated by Q/r0 . This means
that the capacitance of the spherical arrangement is substantially equal to
the capacitance of a uniform sphere. Making use of the electrostatic analogy,
we find that the collection of receptors will absorb nutrients at the same rate
as if the entire surface of the sphere were a receptor.
We can be a little more explicit about the potential immediately above
the surface of the network. The electrostatic potential right next to one
of the small components of the network due to the charge carried by that
component will go as Q/na, because each of the components carries a charge
equal to Q/n, and each has an effective size equal to a. On the other hand,
the potential due to all the other components will be essentially the same
as if the charges on them were uniformly distributed over the surface of the
sphere. This potential is equal to $Q(n-1)/n r_0$. If n is large enough that $na \gg r_0$, which is possible for sufficiently large n, since all we require is that $a \ll r_0/\sqrt{n}$, the potential is dominated by the second contribution, which, in the limit of large n, goes to $Q/r_0$. The potential at a point near the surface
of the sphere that is far away from one of the small components compared
to its size will be absolutely dominated by the second term. Thus, to a very
good approximation, the electrostatic potential in the immediate vicinity of
the network is the same as the electrostatic potential right next to a sphere
carrying a uniform charge equal to Q.
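To put rough numbers on this argument, the tiny sketch below uses purely illustrative values (n, a and r_0 are my own choices, not taken from the notes or from Berg and Purcell) and prints the two competing contributions; the "self" term Q/(na) is indeed dwarfed by the smeared term once $na \gg r_0$ while $\sqrt{n}\,a \ll r_0$.

```python
# Illustrative numbers (chosen only for this sketch) showing that the "self" potential
# Q/(n a) of a single receptor is small compared with the smeared contribution ~ Q/r0
# of all the others, once n*a >> r0 while sqrt(n)*a << r0.
n = 10_000          # number of receptors (hypothetical)
a = 0.01            # receptor radius, in micrometers (hypothetical)
r0 = 5.0            # cell radius, same units (hypothetical)
Q = 1.0             # total "charge" (arbitrary units)

print("sqrt(n)*a =", n**0.5 * a, " (should be << r0 =", r0, ")")
print("n*a       =", n * a, " (should be >> r0 =", r0, ")")
print("self term     Q/(n a)        =", Q / (n * a))
print("smeared term  Q(n-1)/(n r0)  =", Q * (n - 1) / (n * r0))
```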
For a more extended discussion of the issues addressed in the last two sections see the references [Berg, 1993] and [Berg and Purcell, 1977].
The quantity z plays the role of the fugacity, $e^{\beta\mu}$, in statistical mechanics.
The function G(z; ~x, ~y ) is called the generating function because when it is
expanded in a power series in z the coefficients of that series are the functions
C(N ; ~x, ~y ). Of course, this relationship is pretty obvious, given the definition
of G(z; ~x, ~y ).
Thus, the generating function encapsulates information about the statis-
tical properties of random walks of all lengths. However, what makes the
generating function so absolutely valuable is the fact that its determination
is in many cases so much more straightforward than is the case for functions
describing random walks with a fixed number of steps. In this respect, it
possesses the same advantages as the grand partition function in many cases, an example being quantum statistical mechanics, the natural approach to
which is in the grand canonical ensemble.
To see how recourse to the generating function simplifies our consideration
of random walk statistics, consider the recursion relation (3). Let's multiply both sides of this equation by $z^{N+1}$ and sum from N = 0 to ∞. Then, we find
$$\begin{aligned}
\sum_{N=0}^{\infty} z^{N+1} C(N+1; \vec{x}, \vec{y}) &= \sum_{M=0}^{\infty} z^M C(M; \vec{x}, \vec{y}) - C(0; \vec{x}, \vec{y}) \\
&= G(z; \vec{x}, \vec{y}) - \delta_{\vec{x}, \vec{y}} \\
&= z \sum_{N=0}^{\infty} z^N \sum_{\vec{w}_i} C(N; \vec{x}, \vec{w}_i) \\
&= z \sum_{\vec{w}_i} G(z; \vec{x}, \vec{w}_i) \qquad (42)
\end{aligned}$$
In deriving the expression on the second line of (42), we made use of the fact that the number of zero-step walks starting at ~x and ending at ~y is equal to one if the two locations are identical and is zero otherwise. The relationship that can be abstracted from (42) is
$$G(z; \vec{x}, \vec{y}) = z \sum_{\vec{w}_i} G(z; \vec{x}, \vec{w}_i) + \delta_{\vec{x}, \vec{y}} \qquad (43)$$
then, multiplying Eq. (43) by $e^{-i\vec{q}\cdot(\vec{y}-\vec{x})}$ and summing over ~x, we obtain
$$\begin{aligned}
g(z; \vec{q}) &= \sum_{\vec{x}} e^{-i\vec{q}\cdot(\vec{y}-\vec{x})} G(z; \vec{y} - \vec{x}) \\
&= z \sum_{\vec{x}} \sum_{\vec{w}_i} e^{-i\vec{q}\cdot(\vec{y}-\vec{x})} G(z; \vec{w}_i - \vec{x}) + \sum_{\vec{x}} e^{-i\vec{q}\cdot(\vec{y}-\vec{x})} \delta_{\vec{x},\vec{y}} \\
&= z \sum_{\vec{w}_i} e^{i\vec{q}\cdot(\vec{w}_i - \vec{y})} \sum_{\vec{x}} e^{-i\vec{q}\cdot(\vec{w}_i - \vec{x})} G(z; \vec{w}_i - \vec{x}) + 1 \\
&= z \sum_{\vec{w}_i} e^{i\vec{q}\cdot(\vec{w}_i - \vec{y})}\, g(z; \vec{q}) + 1 \\
&= z \chi(\vec{q})\, g(z; \vec{q}) + 1 \qquad (45)
\end{aligned}$$
The function χ(~q) in the last line of (45) is shorthand for the quantity $\sum_{\vec{w}_i} e^{i\vec{q}\cdot(\vec{w}_i - \vec{y})}$. Again, we abstract an equation for g(z; ~q) from the several lines of (45). The equation is
$$g(z; \vec{q}) = z\chi(\vec{q})\, g(z; \vec{q}) + 1 \qquad (46)$$
The solution is easy. It is
$$g(z; \vec{q}) = \frac{1}{1 - z\chi(\vec{q})} \qquad (47)$$
To extract the generating function in real space, we evaluate the Fourier
transform of the function g(z; ~q). We find
$$G(z; \vec{y} - \vec{x}) \propto \int g(z; \vec{q})\, e^{i\vec{q}\cdot(\vec{y}-\vec{x})}\, d^d q = \int \frac{e^{i\vec{q}\cdot(\vec{y}-\vec{x})}}{1 - z\chi(\vec{q})}\, d^d q \qquad (48)$$
Note that I have left the dimensionality of the integration in (48) unspecified. This is because our general result is valid in all dimensions. The proportionality sign and the lack of explicit indications of the range of integration over
the wave vector variable ~q arises from the lack of specification with regard
to the exact conditions under which the random walker moves. Once that
specification is supplied (Is the walker confined to the vertices of a lattice?
If so, what kind of lattice? If not, what kind of randomness is there in the
walk—in direction only or both in direction and in the length of the step?),
both the overall multiplying constant and the range of ~q-integration follows.
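For a concrete check of (47) and (48), one dimension is convenient, since the walk counts can be compared against binomial coefficients. The sketch below (my own illustration; the fugacity z and end point m are arbitrary choices) sums $z^N C(N; 0, m)$ directly and compares it with a numerical Fourier inversion of $g(z; q) = 1/(1 - 2z\cos q)$, which is (47) for a nearest-neighbor walk on the integers.

```python
# One-dimensional check of Eqs. (47)-(48): for a nearest-neighbor walk on the integers,
# chi(q) = 2 cos q, and G(z; m) = (1/2pi) * Integral[ exp(iqm)/(1 - 2 z cos q), {q,-pi,pi} ]
# should reproduce sum_N z^N C(N; 0, m), with C(N; 0, m) a binomial coefficient.
import math
from math import comb

z, m = 0.2, 2                      # fugacity (|z| < 1/2) and end point

# direct sum over walk lengths: C(N; 0, m) = comb(N, (N+m)/2) when N >= |m|, same parity
series = sum(z**N * comb(N, (N + m) // 2)
             for N in range(abs(m), 200) if (N + m) % 2 == 0)

# numerical Fourier inversion of g(z; q) = 1/(1 - z*chi(q))
M = 200_000
integral = 0.0
for j in range(M):
    q = -math.pi + (j + 0.5) * (2 * math.pi / M)
    integral += math.cos(q * m) / (1.0 - 2.0 * z * math.cos(q))
integral *= (2 * math.pi / M) / (2 * math.pi)

print("sum of z^N C(N;0,m):", series)
print("Fourier inversion  :", integral)
```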
Given the generating function, we are now in a position to extract, if we so
desire, the statistical properties of the N -step walk. Given that we’ve gone to
the trouble to derive the expression on the last line of (48) for the generating
function, let’s see what we get when we do the power series expansion in z
of it. Fortunately the first step is easy. We have
$$\frac{1}{1 - z\chi(\vec{q})} = \sum_{n=0}^{\infty} z^n \chi(\vec{q})^n \qquad (49)$$
Performing the expansion in the integrand in the last line of (48), we end up
with the following result for C(N ; ~x, ~y ).
$$C(N; \vec{x}, \vec{y}) \propto \int e^{i\vec{q}\cdot(\vec{y}-\vec{x})}\, \chi(\vec{q})^N\, d^d q \qquad (50)$$
The exact result will depend on the exact form of the function χ(~q). How-
ever, we can extract the important properties of the generating function by
expanding that function as a power series in ~q. To make contact with earlier results, we will look at a walker that is, again, restricted to the vertices of a cubic lattice. The distance between neighboring points on that lattice will be a, as previously specified. Then
$$\sum_{\vec{w}_i} e^{i\vec{q}\cdot(\vec{w}_i - \vec{y})} = 2\left[\cos(q_1 a) + \cos(q_2 a) + \cos(q_3 a)\right] \approx 6 - a^2\left(q_1^2 + q_2^2 + q_3^2\right) + \cdots \qquad (51)$$
We now recast the integral leading to the generating function and make use
of the expansion in (51).
$$\begin{aligned}
C(N; \vec{x}, \vec{y}) &\propto \int e^{i\vec{q}\cdot(\vec{y}-\vec{x})}\, e^{N \ln \chi(\vec{q})}\, d^3q \\
&= \int e^{i\vec{q}\cdot(\vec{y}-\vec{x})} \exp\left[ N \ln\left(6 - a^2(q_1^2 + q_2^2 + q_3^2)\right) \right] d^3q \\
&= \int e^{i\vec{q}\cdot(\vec{y}-\vec{x})} \exp\left[ N \ln 6 - \frac{N a^2}{6}\, |\vec{q}|^2 \right] d^3q \\
&= 6^N \left( \frac{6\pi}{N a^2} \right)^{3/2} e^{-3|\vec{y}-\vec{x}|^2/2N a^2} \qquad (52)
\end{aligned}$$
Compare this to (24). We have the number of walks to within an overall
multiplicative constant, which, after all, was left as an open item.
Now, let’s turn our attention to the properties of the generating function
itself. As it turns out, the most important properties are determined by
the low order terms in the expansion of χ(~q). In other words, it suffices to
consider the quantity
$$\frac{1}{1 - z\left(\chi(0) + q^2 \chi''(0)\right)} \equiv \frac{1}{1 - (z/z_c) + A z q^2} \to \frac{1}{1 - z/z_c + z_c A q^2} \qquad (53)$$
We can, for instance, make use of the last line of (53) to reconstruct the
generating function in real space:
$$G(z; \vec{x}, \vec{y}) \propto \int \frac{e^{i\vec{q}\cdot(\vec{y}-\vec{x})}}{1 - z/z_c + z_c A q^2}\, d^d q \qquad (54)$$
We can perform the integral in (54) in three dimensions. We have
$$\begin{aligned}
G(z; \vec{x}, \vec{y}) &\propto 2\pi \int q^2\, dq \int \frac{e^{iq|\vec{y}-\vec{x}|\cos\theta} \sin\theta\, d\theta}{1 - z/z_c + z_c A q^2} \\
&= \frac{4\pi}{|\vec{y}-\vec{x}|} \int_0^{\infty} \frac{\sin(q|\vec{y}-\vec{x}|)}{1 - z/z_c + z_c A q^2}\, q\, dq \\
&= \frac{2\pi^2\, e^{-\sqrt{1-z/z_c}\,|\vec{y}-\vec{x}|/\sqrt{z_c A}}}{A z_c\, |\vec{y}-\vec{x}|} \qquad (55)
\end{aligned}$$
The last line of (55) applies for |~y − ~x| not too small. The fact that the
expression diverges as ~y → ~x is an artifact of our approximations and of
the absence of a restriction on the integration over ~q. One thing to note is
that, aside from the divergent term—which is absent if we do the integration properly—the leading order contribution to the generating function as ~y → ~x goes as $\sqrt{z_c - z}$. This allows us to work out the dependence on N of the
number of walks that return to the point from which they started. First,
though, a digression on the art of extracting the coefficient of z N in the
power series of a function.
A simple pole
Let's start with one of the simplest examples of a function with an infinite power series expansion in z: $f(z) = 1/(z_c - z)$. If $|z| < |z_c|$, we have
$$\frac{1}{z_c - z} = \frac{1}{z_c}\left[1 + \frac{z}{z_c} + \left(\frac{z}{z_c}\right)^2 + \cdots\right] = \frac{1}{z_c} \sum_{n=0}^{\infty} \left(\frac{z}{z_c}\right)^n, \qquad (56)$$
so the coefficient of $z^N$ is $z_c^{-(N+1)}$.
Two or more simple poles
Now, suppose zc and zd are both real, positive numbers and zc < zd . Fur-
thermore, let
$$f(z) = \frac{a}{z_c - z} + \frac{b}{z_d - z}. \qquad (57)$$
Then, if z is sufficiently small ($z < z_c$),
$$f(z) = \frac{a}{z_c} \sum_{n=0}^{\infty} \left(\frac{z}{z_c}\right)^n + \frac{b}{z_d} \sum_{n=0}^{\infty} \left(\frac{z}{z_d}\right)^n, \qquad (58)$$
where δ = −ln(1 − ∆) and δ > 0. As N gets larger and larger the second term in brackets in (59) vanishes exponentially. Thus, for very large N the coefficient of $z^N$ in $a/(z_c - z) + b/(z_d - z)$ is essentially equal to the coefficient of $z^N$ in $a/(z_c - z)$. We will eventually generalize this result as follows:
If the functions $f_a(z)$ and $f_b(z)$ have poles or branch points at $z_a$ and $z_b$, respectively, and if $z_b > z_a > 0$ ($z_a$ and $z_b$ both real), then, when N is large, the coefficient of $z^N$ in $A f_a(z) + B f_b(z)$ is, for all practical purposes, equal to the coefficient of $z^N$ in $A f_a(z)$.
Higher Order Poles and Branch Points
What about the more general case $f(z) = (z_c - z)^{-\alpha}$, where the exponent α need not be an integer? One way of finding the coefficient of $z^N$ is to use the binomial expansion formula. Another way is to use the following identity:
$$\int_0^{\infty} t^A e^{-xt}\, dt = x^{-A-1}\, \Gamma(A+1), \qquad (60)$$
To find the coefficient of $z^N$ in $(z_c - z)^{-\alpha}$ we expand the right hand side of the above equation with respect to z. The coefficient of $z^N$ in that expansion is
$$\frac{1}{\Gamma(\alpha)}\, \frac{1}{N!} \int_0^{\infty} t^{\alpha - 1 + N} e^{-t z_c}\, dt = \frac{1}{\Gamma(\alpha)}\, \frac{1}{N!}\, z_c^{-(N+\alpha)}\, \Gamma(\alpha + N) = \frac{\Gamma(\alpha + N)}{\Gamma(\alpha)\Gamma(N+1)}\, z_c^{-(N+\alpha)}. \qquad (62)$$
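Before going on, a small numerical check of (62) may be reassuring. The sketch below (my own, with arbitrary α and z_c) builds the Taylor series of $(z_c - z)^{-\alpha}$ by exponentiating its logarithm, so the comparison does not simply restate the generalized binomial series, and matches the coefficients against the Gamma-function formula in (62).

```python
# Check of Eq. (62): the coefficient of z^N in (z_c - z)^(-alpha).
# The series is built by exponentiating -alpha*log(z_c - z) term by term.
import math

alpha, z_c, N_MAX = 0.5, 1.3, 12

# power series of B(z) = -alpha*log(1 - z/z_c) = alpha * sum_k (z/z_c)^k / k
b = [0.0] + [alpha / (k * z_c**k) for k in range(1, N_MAX + 1)]

# exponentiate: E = exp(B), using the recurrence e_n = (1/n) * sum_{k=1}^{n} k*b_k*e_{n-k}
e = [1.0] + [0.0] * N_MAX
for n in range(1, N_MAX + 1):
    e[n] = sum(k * b[k] * e[n - k] for k in range(1, n + 1)) / n

for N in (3, 7, 12):
    coeff_series = z_c**(-alpha) * e[N]                       # coefficient of z^N
    coeff_gamma = (math.gamma(alpha + N) /
                   (math.gamma(alpha) * math.gamma(N + 1))) * z_c**(-(N + alpha))
    print(N, coeff_series, coeff_gamma)
```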
Now, we use Stirling’s formula for the gamma function of a large argument
Logarithmic Singularities
One more complication: suppose the function is of the form −(zc −z)−α ln(zc −
z). We obtain the coefficient of $z^N$ by noting that this function is just $\frac{d}{d\alpha}(z_c - z)^{-\alpha}$. Taking the derivative with respect to α of the last term in the
equation above, we have for the desired coefficient
By the same token, we can find the coefficient of z N in −(zc −z)−α / ln(zc −z)
by extracting the coefficient of z N in the integral
$$\int_{-\alpha}^{\infty} (z_c - z)^y\, dy, \qquad (66)$$
Using (64) to find the coefficient of z N in the integrand, one is left with
$$\int_{-\alpha}^{\infty} \frac{N^{-z-1}\, z_c^{-N+z}}{\Gamma(-z)}\, dz = \left[ -\frac{1}{\ln N}\, \frac{N^{-z-1}\, z_c^{-N+z}}{\Gamma(-z)} \right]_{-\alpha}^{\infty} + \frac{1}{\ln N} \int_{-\alpha}^{\infty} N^{-z-1}\, \frac{d}{dz}\!\left[ \frac{z_c^{-N+z}}{\Gamma(-z)} \right] dz = \frac{N^{\alpha-1}\, z_c^{-N-\alpha}}{\ln(N)\,\Gamma(\alpha)} \left[ 1 + O\!\left(\frac{1}{\ln N}\right) \right]. \qquad (67)$$
$$G(z; \vec{x} = \vec{y}) \propto \int \frac{d^d q}{1 - z/z_c + A q^2} \propto \int \frac{q^{d-1}\, dq}{1 - z/z_c + A q^2} \qquad (68)$$
To simplify expressions, I have replaced the combination $z_c A$ by A in the denominator. There is a quick and dirty way to extract the singular dependence of the integral on the last line of (68) on $z_c - z$, and that is to scale the combination out of the integrand. We do this with a change of integration variables, replacing q by $(1 - z/z_c)^{1/2}\, q'$. This leaves us with the integral
Recurrence
One of the important properties of a random walk is related to the likelihood
that a walker will visit a particular site. An issue related to this property
has to do with the question of recurrence, which is to say, the problem of
calculating the likelihood that a walker returns to its point of origin. As it
turns out, the answer to this question depends importantly on the dimension
in which the random walk is executed. Here we will work out an expression
that allows us to answer the question of recurrence, and, when properly
extended, to work out the number of different sites visited by a random
walker. First, though, an altered version of the generating function
A new generating function
A useful—but apparently little known[2]—quantity enables one to obtain some key results with remarkably little effort. This quantity is an expanded version
of the generating function we’ve been utilizing, and it refers to a walk that
may or may not visit a special site. Suppose the quantity C(N, M; ~x, ~y, ~w) is the number of N-step walks that start at the location ~x, end at the location ~y and visit the site at location ~w exactly M times in the process of getting from ~x to ~y. The quantity of interest is
$$C'(N, t; \vec{x}, \vec{y}, \vec{w}) = \sum_{M=0}^{\infty} C(N, M; \vec{x}, \vec{y}, \vec{w})\, (1-t)^M \qquad (71)$$
Clearly, terms in the sum on the right-hand side of the above equation for which M > N + 1 will not count, as there is no walk that visits a site more times than it leaves footprints. That is, C(N, M; ~x, ~y, ~w) = 0 if M > N + 1.
We obtain the coefficients of the expansion of this generating function in powers of (1 − t) in the standard way. It is easy to show that
$$C(N, M; \vec{x}, \vec{y}, \vec{w}) = (-1)^M\, \frac{1}{M!}\, \frac{d^M}{dt^M}\, C'(N, t; \vec{x}, \vec{y}, \vec{w}) \Bigg|_{t=1} \qquad (72)$$
Note that in this case the value to which the expansion parameter is set is
not zero, but rather one.
Here, S is the special site of interest. The overall weighting factor for a given
walk is then
$$\prod_{i=1}^{N} \left( 1 - t\, \delta_{s_i, S} \right) \qquad (75)$$
Suppose t = 1. Then, the weighting factor will have the effect of excluding
any walk that visits the special site S; all other walks will have a weighting
factor of one. This means that if we multiply all walks by the weighting
factor above, set t = 1 and sum, we will obtain the number of walks that
never visit the site S. In the case of N-step walks that start at ~x and end up at ~y, this is just C(N, 0; ~x, ~y, ~w), where ~w is the position vector of the site S. Suppose we take the derivative of the weighted sum with respect to t,
S. Suppose we take the derivative of the weighted sum with respect to t,
and then set t = 1. In that case, we will end up with −1 times the number
of walks that visit the site only once. Next, take the nth derivative of the
weighted sum over walks with respect to t, multiply by 1/n!, and then set
t = 1. This yields (−1)n times the number of walks that visit the special
site exactly n times. This is because each derivative generates a factor equal
to −δsj ,S and all terms that “escape” the derivative become (1 − δsk ,S ) when
t = 1. The factor 1/n! compensates for the n! ways in which the n derivatives
with respect to t operate on the product in (75).
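This bookkeeping can be verified by brute force for short walks. The sketch below (my own check, in one dimension with an arbitrarily chosen special site S) enumerates all N-step walks, builds the weighted sum with the factor (75) symbolically, and confirms that $(-1)^M/M!$ times its M-th t-derivative at t = 1 reproduces the direct count of walks visiting S exactly M times, exactly as in (71) and (72).

```python
# Brute-force check of Eqs. (71), (72) and (75) for short one-dimensional walks:
# the weighted sum over walks, with weight prod_i (1 - t*delta_{s_i,S}), is a polynomial
# in t whose derivatives at t = 1 count walks by their number of visits to the site S.
from itertools import product
import sympy as sp

t = sp.symbols('t')
N, S = 6, 1                          # walk length and the special site (start is 0)

weighted_sum = sp.Integer(0)
direct_count = {}                    # direct_count[M] = number of walks visiting S exactly M times
for steps in product((+1, -1), repeat=N):
    pos, visits, weight = 0, 0, sp.Integer(1)
    for s in steps:
        pos += s
        if pos == S:
            visits += 1
            weight *= (1 - t)        # this step contributes a factor (1 - t), per Eq. (75)
    weighted_sum += weight
    direct_count[visits] = direct_count.get(visits, 0) + 1

weighted_sum = sp.expand(weighted_sum)
for M in sorted(direct_count):
    from_derivative = (-1)**M * sp.diff(weighted_sum, t, M).subs(t, 1) / sp.factorial(M)
    print(M, direct_count[M], from_derivative)   # the two counts agree
```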
Now, we can evaluate the weighted walk by expanding the weighting factor, (75), in powers of t.[3] At first order we generate the quantity
$$-t \sum_{i=1}^{N} \delta_{s_i, S} \qquad (76)$$
When walks are multiplied by this weighting factor and summed we end up
with −t times the sum of all N -step walks that visit the site S at one step,
with no restriction on what happens either before or after that step. At
second order in the expansion we have
$$t^2 \sum_{i=1}^{N} \sum_{j=1}^{i-1} \delta_{s_i, S}\, \delta_{s_j, S} \qquad (77)$$
When walks are multiplied by this second-order weighting factor and summed,
we end up with t2 times the sum of all N -step walks that visit the site S
[3] We are thus treating t as if it were a small quantity and expanding in it. This technique will be used later for a different expansion parameter in the case of self-avoiding walks, in which case we will generate a virial expansion.
twice with no restriction on what happens before, after or between those
two visitations. A graphical representation for the expansion in t of the new
generating function is shown in Fig. 8. If the starting and end-point of the
Figure 8: Graphical representation of the expansion in powers of t (terms of order 1, −t, +t², ...).
walk are fixed, and if the site in question is at the location ~w, then the sum in Fig. 8 is given by
$$C(N; \vec{x}, \vec{y}) - t \sum_{N_1 + N_2 = N} C(N_1; \vec{x}, \vec{w})\, C(N_2; \vec{w}, \vec{y}) + t^2 \sum_{\substack{N_1 + N_2 + N_3 = N \\ N_2 \geq 1}} C(N_1; \vec{x}, \vec{w})\, C(N_2; \vec{w}, \vec{w})\, C(N_3; \vec{w}, \vec{y}) + \cdots \qquad (78)$$
The inequality that applies to $N_2$ in the sum above simply requires the walker to take at least one step before revisiting the site at ~w. Otherwise, we would count zero-step “walks” in the sum.
Now, we take the step of multiplying our new function by z N and sum-
ming. This has the effect of removing the convolution over the Ni ’s, and we
obtain
$$\begin{aligned}
G(z, t; \vec{x}, \vec{y}, \vec{w}) &= \sum_{N=0}^{\infty} z^N C'(N, t; \vec{x}, \vec{y}, \vec{w}) \\
&= G(z; \vec{x}, \vec{y}) - t\, G(z; \vec{x}, \vec{w})\, G(z; \vec{w}, \vec{y}) + t^2\, G(z; \vec{x}, \vec{w})\, G_1(z; \vec{w}, \vec{w})\, G(z; \vec{w}, \vec{y}) + \cdots \\
&= G(z; \vec{x}, \vec{y}) - t\, \frac{G(z; \vec{x}, \vec{w})\, G(z; \vec{w}, \vec{y})}{1 - t\, G_1(z; \vec{w}, \vec{w})} \qquad (79)
\end{aligned}$$
$$(-1)^M\, \frac{1}{M!}\, \frac{d^M}{dt^M}\, G(z, t; \vec{x}, \vec{y}, \vec{w}) \Bigg|_{t=1} = (-1)^M\, \frac{1}{M!}\, \frac{d^M}{dt^M} \left[ G(z; \vec{x}, \vec{y}) - t\, \frac{G(z; \vec{x}, \vec{w})\, G(z; \vec{w}, \vec{y})}{1 - t\, G_1(z; \vec{w}, \vec{w})} \right] \Bigg|_{t=1} \qquad (80)$$
is, then, the generating function for all walks that start at ~x, end up at ~y and visit the site at ~w exactly M times.
The reason for the subscript 1 in the numerator of the second term on the left hand side of (81) is that we want to count only those walks that take at least one step from the starting point at ~x before revisiting that point. Otherwise, we count walks that “revisit” their point of origin after zero steps.
To find out how many N-step walks start out at ~x and end up at ~y, never having revisited ~x, we need to calculate the coefficient of $z^N$ in the function
$$\frac{G(z; \vec{x} - \vec{y})}{1 + G_1(z; 0)} = \frac{G(z; \vec{x} - \vec{y})}{G(z; 0)}. \qquad (82)$$
The right hand side of the equality in (82) follows from the fact that the
contribution to the generating function G(z; 0) of walks consisting of no steps
is exactly one, by convention.
Let’s be even less restrictive and ask how many of the walks that start
out at ~x and end up anywhere never revisit the starting point ~x. We simply
sum the expression above over all possible end-points ~y —excluding ~x—and
see what we have. Using the relation between G(z; ~x − ~y) and its spatial Fourier transform, g(z; ~q), we have
$$\sum_{\vec{y} \neq \vec{x}} \frac{G(z; \vec{x} - \vec{y})}{G(z; 0)} = \frac{g(z; 0)}{G(z; 0)} - \frac{G(z; 0)}{G(z; 0)} = \frac{g(z; 0)}{G(z; 0)} - 1, \qquad (83)$$
for the Fourier coefficient g(z, ~q). From Chapter 2, we know that
$$g(z, \vec{q} = 0) = \frac{1}{1 - z\chi(\vec{q} = 0)} \equiv \frac{1}{1 - z/z_c} \qquad (85)$$
The last line of (85) serves as a definition of the quantity zc . This tells us
that the number of N -step walks that start out at ~x and end up anywhere
is the coefficient of z N in (1 − z/zc )−1 , while the number of walks that start
at ~x and end up anywhere, not having ever revisited the point of origin, ~x,
is the coefficient of z N in
$$\frac{z_c}{z_c - z}\, \frac{1}{G(z; 0)} \qquad (86)$$
As it turns out, the z-dependence of the generating function G(z; 0) in two
dimensions or less is different in a very important respect from its behavior
in three and higher dimensions. This will lead to fundamentally different
results for the “recurrence” of walks in two and one dimensions from what
we will find in the case of three dimensional walks.
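The dimension dependence announced here is easy to see in a simulation. The sketch below (my own illustration, with arbitrary walk lengths and ensemble sizes) measures the fraction of walks that never return to their starting point: in three dimensions the fraction settles near a nonzero value (roughly 0.66, one minus Pólya's return probability for the simple cubic lattice), while in one dimension it keeps shrinking as the walks get longer.

```python
# Fraction of N-step walks that never return to their starting point, in d = 1 and d = 3.
# In three dimensions the fraction settles near a nonzero value (walks are transient);
# in one dimension it keeps decreasing with N (walks are recurrent).
import random

random.seed(5)

def never_return_fraction(dim, n_steps, n_walkers):
    count = 0
    for _ in range(n_walkers):
        pos = [0] * dim
        returned = False
        for _ in range(n_steps):
            axis = random.randrange(dim)
            pos[axis] += random.choice((-1, 1))
            if all(x == 0 for x in pos):
                returned = True
                break
        if not returned:
            count += 1
    return count / n_walkers

for n_steps in (100, 400, 1600):
    print(n_steps,
          "d=1:", never_return_fraction(1, n_steps, 2000),
          "d=3:", never_return_fraction(3, n_steps, 2000))
```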
as one might expect, a bit elusive. For one thing, there is no generic “shape”
for a random walk. However, statistical statements can be made, with regard
to the probability that a walk takes on a particular shape, at least as char-
acterized by the principal radii of gyration. In addition, there is one limit in
which the shape of the trail left by a walker is fixed and predictable. That
is the limit of a walker in an infinite dimensional space (d = ∞). We will
discuss the construction of an expansion about that limit, the 1/d-expansion.
This expansion yields the shape distribution of a random walker’s trail when
the walker wanders in a high spatial dimension environment. As we will see,
this expansion is—at least for some purposes—respectably accurate in three
dimensions.
interested in how many times a given point at location ~r1 is visited by a walker
that starts out at the location ~r0 , we find, after suitable averaging, that the
answer depends only on the distance between those points in space, |~r1 − ~r0 |.
This is true because for every walker that tends to go off in one direction
there will be another walker that ends up going in the opposite direction.
The statistical distribution of places visited is rotationally symmetric about
the point of origin. In other words, the totality of walkers in the ensemble
create a “cloud” that is spherically symmetric. Figure 9 shows just such a
cloud, which consists of the paths of 1,000 random walkers each of whom has
taken 100 steps from a common point of origin. The near spherical symmetry
of the cloud is evident from the figure.
This result of averaging obscures the fact that a given random walk can
be quite anisotropic spatially. Figure 10 is a stereographic pair of images of
a single 1,000-step random walk. The elongated nature of the walk shown
in this figure is not a statistical anomaly. Figure 11 shows several examples
of 1,000-step walks. Note that not one of those walks is reminiscent of the
cloud of walkers shown in Figure 9.
Figure 11: Several examples of 1,000-step random walks.
Figure 12: The anisotropic nature of a 1,000-step two-dimensional walker.
The red lines indicate the directions in which its linear extent is the greatest
and the smallest. The lines also run parallel to the eigenvectors of the matrix
defined in Eqs. (87)–(89). The point of intersection of those two lines is the
“center of gravity” of the walk.
Here, $r_{jk}$ is the k-th component of the position vector of the j-th step, and $\langle r_k \rangle$ is the average of the k-th component of the locations of the steps of the walker:
$$\langle r_k \rangle = \frac{1}{N} \sum_{j=1}^{N} r_{jk} \qquad (88)$$
For example, a walker in two dimensions has radius of gyration tensor $\overleftrightarrow{T}$ with the following form
$$\overleftrightarrow{T} = \begin{pmatrix} \frac{1}{N}\sum_{j=1}^{N} (x_j - \langle x\rangle)^2 & \frac{1}{N}\sum_{j=1}^{N} (x_j - \langle x\rangle)(y_j - \langle y\rangle) \\[4pt] \frac{1}{N}\sum_{j=1}^{N} (x_j - \langle x\rangle)(y_j - \langle y\rangle) & \frac{1}{N}\sum_{j=1}^{N} (y_j - \langle y\rangle)^2 \end{pmatrix} \qquad (89)$$
The eigenvectors and eigenvalues of this tensor quantify the linear dimensions
of the walker—its girth—in various directions. The eigenvectors point in the
direction in which this span is maximized, and the direction in which it
is minimized. The eigenvalues tell us how extended the walk is in those
extremal directions. In fact, the lines in Fig. 12 lie along the directions in which those two eigenvectors point. The lengths of those lines are directly proportional to the eigenvalues of the matrix $\overleftrightarrow{T}$ appropriate to the walk in that figure.
For a discussion of the relationship between the eigenvectors and eigenvalues of $\overleftrightarrow{T}$ and the maximal and minimal spans of a walk see the section beginning on page 47 in Supplement 1 at the end of this chapter.
Eigenvalues of the matrix $\overleftrightarrow{T}$: the asphericity of a random walk
The eigenvalues of the matrix $\overleftrightarrow{T}$ are the squares of the principal radii of gyration, $R_i$, of the object in question. They are essentially the mean square deviations of the steps of the walker from the “center of gravity” of the walk. In Fig. 12 the walk's center of gravity lies at the point of intersection of the two thick lines, each of which lies in the direction of one of the eigenvectors of the matrix $\overleftrightarrow{T}$ for that walk. This means that, diagonalized, the matrix $\overleftrightarrow{T}$ takes the form
$$\overleftrightarrow{T} = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} \equiv \begin{pmatrix} R_1^2 & 0 & 0 \\ 0 & R_2^2 & 0 \\ 0 & 0 & R_3^2 \end{pmatrix} \qquad (90)$$
The relative magnitudes of the eigenvalues of the radius of gyration tensor $\overleftrightarrow{T}$ then tell us to what extent the object in question has a shape that differs from
that of a sphere. Clearly if all Ri2 ’s in (90) are equal, then the linear span of
the object will be the same in all directions, and it is reasonable to attribute
a kind of spherical symmetry to it. However, if, for example, $R_1^2 \gg R_2^2, R_3^2$, which means that $R_1$ is significantly larger than $R_2$ and $R_3$, then the object can be thought of as greatly elongated, and not at all spherical.
The eigenvalues of an object’s radius of gyration tensor are invariant
with respect to the overall orientation of the object. That is, a rotation of
the object will not change those eigenvalues. On the other hand, the tensor
itself does change as the object is rotated. If the brackets $\langle \cdots \rangle_r$ stand for averaging with respect to overall orientation, then the average $\langle R_i^2 \rangle_r$ is just the same as $R_i^2$. On the other hand, performing the same average over $\overleftrightarrow{T}$ produces a matrix altered by the averaging process. In fact, it is pretty straightforward to argue that $\langle (x - \langle x\rangle)(y - \langle y\rangle) \rangle_r$ will average to zero, while $\langle (x - \langle x\rangle)^2 \rangle_r = \langle (y - \langle y\rangle)^2 \rangle_r = \langle (z - \langle z\rangle)^2 \rangle_r$. This means that
$$\langle \overleftrightarrow{T} \rangle = \begin{pmatrix} \bar{T} & 0 & 0 \\ 0 & \bar{T} & 0 \\ 0 & 0 & \bar{T} \end{pmatrix} \qquad (91)$$
The eigenvalues of this matrix are clearly all equal to T̄ . In averaging the
radius of gyration tensor, we are performing the kind of ensemble average
that destroys information regarding the non-spherical shape of the object in
question. This clearly means an informative characterization of the shape of
the random walk is not contained in the averaged radius of gyration tensor.
We can, nevertheless, extract useful shape information by averaging quantities that are directly derivable from the radius of gyration matrix. What
we need to do is use quantities that are invariant with respect to rotations
and reflections in space (the matrix is automatically invariant with respect
to translations). All of these quantities are directly related to the eigenval-
ues of the matrix. In the case of a three dimensional matrix there are three
independent invariants. One choice of those three is
$$\mathrm{Tr}\,\overleftrightarrow{T} = T_{11} + T_{22} + T_{33} = R_1^2 + R_2^2 + R_3^2 \qquad (92)$$
$$\mathrm{Tr}\,\overleftrightarrow{T}^2 = (R_1^2)^2 + (R_2^2)^2 + (R_3^2)^2 \qquad (93)$$
$$\mathrm{Tr}\,\overleftrightarrow{T}^3 = (R_1^2)^3 + (R_2^2)^3 + (R_3^2)^3 \qquad (94)$$
Another well-known invariant of the tensor, its determinant, is obtained as follows
$$\mathrm{Det}\,\overleftrightarrow{T} = R_1^2 R_2^2 R_3^2 = \frac{1}{6}\left[ \left(R_1^2 + R_2^2 + R_3^2\right)^3 + 2\left((R_1^2)^3 + (R_2^2)^3 + (R_3^2)^3\right) - 3\left(R_1^2 + R_2^2 + R_3^2\right)\left((R_1^2)^2 + (R_2^2)^2 + (R_3^2)^2\right) \right] = \frac{1}{6}\left[ \left(\mathrm{Tr}\,\overleftrightarrow{T}\right)^3 + 2\,\mathrm{Tr}\,\overleftrightarrow{T}^3 - 3\,\mathrm{Tr}\,\overleftrightarrow{T}\; \mathrm{Tr}\,\overleftrightarrow{T}^2 \right] \qquad (95)$$
Now, it is possible to average the three invariants defined in Eqs. (92)–(94). These averages retain important information regarding the deviation from spherical symmetry of the shape of the “average” random walk. Consider, for example, the following combination of eigenvalues
$$\left(R_1^2 - R_2^2\right)^2 + \left(R_1^2 - R_3^2\right)^2 + \left(R_2^2 - R_3^2\right)^2 = 3\left[(R_1^2)^2 + (R_2^2)^2 + (R_3^2)^2\right] - \left(R_1^2 + R_2^2 + R_3^2\right)^2 = 3\,\mathrm{Tr}\,\overleftrightarrow{T}^2 - \left(\mathrm{Tr}\,\overleftrightarrow{T}\right)^2 \qquad (96)$$
Both sides of this equation can be averaged over all orientations of an object,
and, given the fact that they are invariants with respect to translation, rota-
tion and reflection, they will remain unchanged. In the case of the random
walk, this means that if we average the last line of (96) over an ensemble of
walkers we are left with a quantity that tells us something about the differ-
ences between the various principal radii of gyration. That is, we find out how different the shape of a random walk is, on the average, from that of a sphere.
To construct a quantity that interpolates between zero when all principal
radii of gyration are equal and one when one of the Ri ’s is much greater than
the others we will divide by
$$2\left\langle \left( \sum_{i=1}^{3} R_i^2 \right)^2 \right\rangle = 2\left\langle \left(\mathrm{Tr}\,\overleftrightarrow{T}\right)^2 \right\rangle \qquad (97)$$
We can, in fact, generalize this quantity and define the mean asphericity,
Ad , of d-dimensional random walks as follows [Aronovitz and Nelson, 1986,
Theodorou and Suter, 1985, Rudnick and Gaspari, 1986]:
$$A_d = \frac{\sum_{i>j}^{d} \left\langle \left(R_i^2 - R_j^2\right)^2 \right\rangle}{(d-1)\left\langle \left( \sum_{i=1}^{d} R_i^2 \right)^2 \right\rangle} \qquad (98)$$
The last line of (99) follows from the equations for the trace of a tensor and of its square. It also follows from the fact that $\langle T_{11}^2 \rangle = \langle T_{22}^2 \rangle = \cdots = \langle T_{dd}^2 \rangle$, and similar equalities for $\langle T_{ii} T_{jj} \rangle$ and $\langle T_{ij}^2 \rangle$. The denominator of the last line of (98) can be reduced in the same way, leading to the following expression for the asphericity:
$$A_d = \frac{d(d-1)\left(\langle T_{11}^2\rangle - \langle T_{11}T_{22}\rangle\right) + d^2(d-1)\,\langle T_{12}^2\rangle}{d(d-1)\langle T_{11}^2\rangle + d(d-1)^2 \langle T_{11}T_{22}\rangle} = \frac{\left(\langle T_{11}^2\rangle - \langle T_{11}T_{22}\rangle\right) + d\,\langle T_{12}^2\rangle}{\langle T_{11}^2\rangle + (d-1)\langle T_{11}T_{22}\rangle} \qquad (100)$$
Figure 13: Histogram of the individual asphericities of 20,000 three-dimensional 100-step random walks.
walks. Given the examples depicted in Fig. 11, it seems much more likely
that random walks come in a wide variety of shapes and that a quantity such
as the mean asphericity provides a very broad-brush characterization of that
property of random walks. Figure 13 illustrates this point. It is a histogram
of the distribution of the individual asphericities of 20,000 three-dimensional
walks, each comprised of 100 steps. Note that the distribution spans the
range from 0 to 1, and that no narrow region dominates.
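A distribution of this sort is straightforward to generate. The sketch below (my own illustration; it uses fewer walks than the 20,000 behind Fig. 13, to keep the run short) computes the individual asphericity of each 100-step three-dimensional walk from the eigenvalues of its gyration tensor and bins the results over [0, 1].

```python
# Distribution of individual asphericities, A = sum_{i<j}(lambda_i - lambda_j)^2 /
# [(d-1)*(sum_k lambda_k)^2], for 100-step walks in three dimensions (cf. Fig. 13).
import numpy as np

rng = np.random.default_rng(7)
d, n_steps, n_walks = 3, 100, 2000
moves = np.vstack([np.eye(d, dtype=int), -np.eye(d, dtype=int)])

asphericities = []
for _ in range(n_walks):
    steps = moves[rng.integers(0, 2 * d, size=n_steps)]
    positions = np.cumsum(steps, axis=0)
    dev = positions - positions.mean(axis=0)
    T = dev.T @ dev / n_steps
    lam = np.linalg.eigvalsh(T)
    num = sum((lam[i] - lam[j]) ** 2 for i in range(d) for j in range(i))
    asphericities.append(num / ((d - 1) * lam.sum() ** 2))

asphericities = np.array(asphericities)
hist, edges = np.histogram(asphericities, bins=10, range=(0.0, 1.0))
print("mean individual asphericity:", asphericities.mean())
print("histogram counts over [0,1):", hist)
```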
It is also important to note that what is presented in (101) is not, strictly
speaking, the average of the individual asphericities of the walks, which is
given by
$$A'_d = \left\langle \frac{\sum_{i<j} \left(R_i^2 - R_j^2\right)^2}{(d-1)\left(\sum_{k=1}^{d} R_k^2\right)^2} \right\rangle \qquad (102)$$
This quantity can also be found exactly in the case of the ordinary d-dimensional walk. The analytical result for this quantity is [Diehl and Eisenriegler, 1989]
$$A'_d = \frac{d}{4}\left( 3 + \frac{4}{d} - \frac{d}{2}\, M_{d/2} \right) \qquad (103)$$
where
$$M_p = \int_0^{\infty} x^{p+1} \sinh^{-p} x\; dx \qquad (104)$$
Shape of a self-avoiding random walk
Work on the shape of a self-avoiding walk has been performed [Aronovitz and Nelson, 1986]. The calculation is based on an expansion in the difference between the dimensionality in which the walk takes place and an “upper critical dimensionality,” equal to four. The quantity $\epsilon = 4 - d$ is the expansion parameter. To first order in $\epsilon$,
$$A_d = \frac{2d + 4}{5d + 4} + 0.008\,\epsilon \qquad (105)$$
The main conclusion to be gleaned from this result is that self-avoidance plays a non-trivial, but far from decisive, role in the shape of a random walk.[5]
[5] Notice that the first term on the right hand side of (105) has not been expanded about d = 4. This is a (minor) violation of the spirit of the expansion in $\epsilon = 4 - d$, which does not materially affect the conclusion stated above.
So far, so good. This is all pretty elementary. Now, let’s focus on ro-
tational motion. We derive Newton’s second law, as it applies to rotational
motion, by taking the cross product of the position vectors ~rl with the cor-
responding equation in the set (107). Defining the total torque ~τ as the sum
of the $\vec{r}_l \times \vec{F}_l$'s, we end up with the equation
$$\vec{\tau} = \sum_{l} m_l\, \vec{r}_l \times \frac{d\vec{v}_l}{dt} = \frac{d}{dt} \sum_{l} m_l\, \vec{r}_l \times \vec{v}_l \equiv \frac{d\vec{L}}{dt} \qquad (109)$$
The last two lines of (109) constitute a definition of the angular momentum, ~L, of a system of point particles. Note that the precise definition of angular momentum depends on the origin with respect to which the position of each
momentum depends on the origin with respect to which the position of each
particle is defined. It is often convenient to place the origin in at the center
of mass of the set of particles. If the internal force between each pair of
particles is along the line joining them, then the internally generated torques
cancel, and the total torque, ~τ , is entirely due to external forces.
Now, suppose that the motion of the system is entirely rotational, about some point ~R. Then,
$$\vec{v}_l = \vec{\omega} \times \left( \vec{r}_l - \vec{R} \right) \qquad (110)$$
Here, ~ω is the angular velocity of the system of particles. See Figure 14. Now, we can choose ~R as the center of our system of coordinates, so that $\vec{r}_l - \vec{R}$ is replaced by $\vec{r}_l$. In this case, the angular momentum becomes
$$\sum_{l} m_l\, \vec{r}_l \times (\vec{\omega} \times \vec{r}_l) \qquad (111)$$
We can rewrite the above relationship with the use of the standard identity
for the triple product:
$$\vec{L} = \sum_{l} m_l \left( \vec{\omega}\, (\vec{r}_l \cdot \vec{r}_l) - \vec{r}_l\, (\vec{r}_l \cdot \vec{\omega}) \right) \qquad (112)$$
Suppose, now, we define the matrix $\overleftrightarrow{T}$ as follows:
$$T_{ij} = \sum_{l} m_l\, r_{l,i}\, r_{l,j} \qquad (113)$$
Figure 14: The angular velocity, ~ω, and its relation to the velocity, ~v, of a particle.
Then, the relationship between ~L and ~ω is
$$\vec{L} = \left(\mathrm{Tr}\,\overleftrightarrow{T}\right) \vec{\omega} - \overleftrightarrow{T} \cdot \vec{\omega} \qquad (114)$$
Defining
$$\overleftrightarrow{C} = \overleftrightarrow{T} - \overleftrightarrow{I}\; \mathrm{Tr}\,\overleftrightarrow{T} \qquad (115)$$
where $\overleftrightarrow{I}$ is the identity operator, we find that
$$\vec{L} = -\overleftrightarrow{C} \cdot \vec{\omega} \qquad (116)$$
Note that the trace of the operator $\overleftrightarrow{C}$ is equal to minus twice the trace of $\overleftrightarrow{T}$. This is because the trace of the identity operator is three. The matrix $\overleftrightarrow{C}$ is, up to a sign, the moment-of-inertia matrix. The fact that the angular velocity and the angular momentum are not parallel is just one of the complications of rotational motion.
Now, because the matrix $\overleftrightarrow{T}$ is real and symmetric, we know that it has real (in fact, positive) eigenvalues. Those eigenvalues have a name. They are known as the principal radii of gyration, $R_i^2$. If ~ω points in the same direction as the eigenvector of one of them, say $R_1^2$, then the angular momentum and the angular velocity point in the same direction, and the relationship between the two becomes
$$\vec{L} = \left( R_2^2 + R_3^2 \right) \vec{\omega} \qquad (117)$$
This is because $\mathrm{Tr}\,\overleftrightarrow{T} = R_1^2 + R_2^2 + R_3^2$, while $\overleftrightarrow{T} \cdot \vec{\omega} = R_1^2\, \vec{\omega}$.
The eigenvalues of the tensor $\overleftrightarrow{T}$ can also be used as a measure of the extent to which the system of particles possesses spherical symmetry, at least in terms of rotational motion. If all the eigenvalues are equal, then the system has approximately the same weighted extent in all directions. In fact, in this case, ~L is always parallel to ~ω.
Now imagine that all the masses, ml , are equal to one. Then, the matrix
measures the extent to which the particles are in a spherically symmetric
distribution. While equality of the principal radii of gyration is not equivalent
to spherical symmetry, it provides a very useful quantitative measure of that
property, and of departure from it.
We can construct our tensor, $\overleftrightarrow{Q}$, as follows
$$\overleftrightarrow{Q} = \overleftrightarrow{T} - \frac{1}{3}\, \overleftrightarrow{I}\; \mathrm{Tr}\,\overleftrightarrow{T} \qquad (118)$$
The trace of this matrix is equal to zero, and, if all the principal radii
of gyration are the same, then the diagonalized form of this matrix has all
entries equal to zero.
Position vectors transform under rotations about the center of mass as follows
$$r'_k = \sum_{l} R_{kl}\, r_l \qquad (119)$$
where Rkl are the elements of the rotation matrix. This matrix has the
property that its transpose is also its inverse. That is
$$\overleftrightarrow{R} \cdot \overleftrightarrow{R}^{T} = \overleftrightarrow{I} \qquad (120)$$
A matrix whose transpose is also its inverse is known as an orthogonal matrix.
Then, the matrix $\overleftrightarrow{T}$ with elements going as $r_l r_k$ transforms as follows
$$T'_{k_1 k_2} = \sum_{l_1, l_2} R_{k_1 l_1} T_{l_1 l_2} R_{k_2 l_2} = \sum_{l_1, l_2} R_{k_1 l_1} T_{l_1 l_2} R^{T}_{l_2 k_2} = \sum_{l_1, l_2} R_{k_1 l_1} T_{l_1 l_2} R^{-1}_{l_2 k_2} \qquad (121)$$
or, in shorthand,
$$\overleftrightarrow{T}' = \overleftrightarrow{R} \cdot \overleftrightarrow{T} \cdot \overleftrightarrow{R}^{-1} \qquad (122)$$
The same relationship clearly holds for the tensor $\overleftrightarrow{Q}$. The demonstration that traces of powers of this tensor are invariant under rotations follows from this equation for the way in which rotations give rise to changes in $\overleftrightarrow{Q}$.
Just a little bit more on the meaning of the operator $\overleftrightarrow{T}$.
Suppose we are interested in finding the direction in which an object has the greatest spatial extent. We start by assuming a vector ~n, which points
Figure 15: Looking for the direction in which an object has the greatest
spatial extent.
along the direction of interest. We will set the length of the vector at unity.
See figure 15. Then the extent of the object in the direction established by
~n is
$$\sum_{l} (\vec{r}_l \cdot \vec{n})^2 = \sum_{l} \sum_{i,j} n_i\, r_{l,i}\, r_{l,j}\, n_j = \vec{n} \cdot \overleftrightarrow{T} \cdot \vec{n} \qquad (123)$$
Remember that the position vectors are drawn with their tails at the center
of mass of the object. If we wish to extremize the above quantity with respect
to ~n, subject to the condition that its length is held constant, we take the
derivative with respect to each component of ~n of the expression below
$$\vec{n} \cdot \overleftrightarrow{T} \cdot \vec{n} - \lambda\, \vec{n} \cdot \vec{n} \qquad (124)$$
The quantity λ in (124) is a Lagrange multiplier. The extremum equation easily reduces to
$$\overleftrightarrow{T} \cdot \vec{n} = \lambda\, \vec{n} \qquad (125)$$
That is, in order to find the direction of greatest (or least) extent of the object, we solve the eigenvalue equation of the operator $\overleftrightarrow{T}$. The largest eigenvalue is the greatest extent, as defined by (123), with ~n chosen to extremize the quantity.[6] The smallest extent, similarly defined, is given by the smallest eigenvalue of $\overleftrightarrow{T}$.
This means that to calculate the mean asphericity we need to find ratios of averages, rather than the averages themselves. This simplifies our task a bit.
Although the averages that we have to perform in order to arrive at a
numerical result for the asphericity of the random walk involve squares of
entries in the radius of gyration tensor, or products of two entries, it is useful
to look at the average of a single element of that tensor, lying along the
diagonal. Eventually, we will perform this calculation in another way when
we develop an expansion in 1/d for the principal radii of gyration. However,
we will start out by showing how the calculation can be done with the use of
the generating functions that have proven so useful in the study of random walk statistics. As a first step, we recast the expression for the entries $T_{kl}$:[7]
$$T_{kl} = \frac{1}{N} \sum_{j=1}^{N} \left( r_{jk} - \langle r_k \rangle \right) \left( r_{jl} - \langle r_l \rangle \right)$$
[6] In other words, the extremizing choice for the vector ~n is the eigenvector of the operator $\overleftrightarrow{T}$ with the largest eigenvalue.
[7] Here, we make no distinction between N and N + 1.
$$= \frac{1}{2N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left( r_{ik} - r_{jk} \right) \left( r_{il} - r_{jl} \right) \qquad (127)$$
The first line in the above equation is a recapitulation of (87). The second line can be established by inspection. Consider, now, the average
$$\langle T_{11} \rangle = \frac{1}{2N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \langle (x_i - x_j)^2 \rangle \qquad (128)$$
The dashed curve joins the i-th and j-th footprints on the walk. The solid lines stand for the walk that begins at the leftmost end of the three-line segment and ends at the rightmost point. There will, in general, be N1
steps from the far left point to the leftmost vertex at which the dashed curve
touches the line, N2 steps in the central segment of the walk and N3 steps in
the far right segment of the walk. Subject to the overall constraint that the
Ni ’s add up to the total number of steps in the walk, we sum over all values
of N1 , N2 and N3 . The evaluation of the sum represented by this diagram
is most conveniently carried out in the grand canonical ensemble, with the
use of the generating function. We seek the coefficient of z N −1 in the direct
product
Z Z Z
1
× 2 d r0 d r2 dd r3 G(z; ~r1 − ~r0 )G(z; ~r2 − ~r1 )G(z; ~r3 − ~r2 )(x2 − x1 )2
d d
2N 2
(129)
The “missing” integration in (129), over ~r1, would yield a factor equal to the volume of the portion of space in which the random walk occurs. The factor of two multiplying the integral represents the two possible orderings of the indices i and j (i > j and i < j). As the next step, we rewrite the generating functions in terms of their spatial Fourier transforms,
$$G(z; \vec{r}) = \frac{1}{(2\pi)^d} \int d^d k\; g(z; \vec{k})\, e^{-i\vec{k}\cdot\vec{r}} \qquad (130)$$
Making use of this representation, we find that the expression in (129) reduces
to
$$\frac{1}{N^2}\; g(z; \vec{k}) \left( -\frac{\partial^2}{\partial k_x^2}\, g(z; \vec{k}) \right) g(z; \vec{k}) \Bigg|_{\vec{k}=0} \qquad (131)$$
The second derivative follows from the identity
$$(x_1 - x_2)^2\, e^{i\vec{k}\cdot(\vec{r}_1 - \vec{r}_2)} = -\frac{\partial^2}{\partial k_x^2}\, e^{i\vec{k}\cdot(\vec{r}_1 - \vec{r}_2)} \qquad (132)$$
where l represents the mean distance covered by the walker in each step.
This leaves us with the following result for the expression (131):
$$\frac{1}{N^2}\, \frac{l^2}{d}\, \frac{1}{\left(1 - z z_c^{-1}\right)^4} \qquad (134)$$
This result is the desired value of hT11 i, multiplied by the total number of
random walks with N − 1 steps. To obtain the average, we divide this by the
total number of N − 1-step walks, which is equal to the coefficient of z N −1 in
$$\int d^d r\; G(z; \vec{r}) = g(z; \vec{k} = 0) = \frac{1}{1 - z z_c^{-1}} \qquad (136)$$
The coefficient in question is $z_c^{-(N-1)}$. We thus find
$$\langle T_{11} \rangle = \frac{l^2}{6d}\, N \qquad (137)$$
The average of the trace of the tensor $\overleftrightarrow{T}$ is, by symmetry, equal to $d\langle T_{11}\rangle$. Given (137) we have
$$\langle \mathrm{Tr}\,\overleftrightarrow{T} \rangle = N l^2 / 6 \qquad (138)$$
The quantity above is also known as the mean square radius of gyration.
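This result is easy to confirm by simulation. The sketch below (my own check, with arbitrarily chosen walk length and ensemble size) averages Tr T over an ensemble of cubic-lattice walks with step length l and compares the result with $N l^2/6$; agreement is to leading order in N, with the usual Monte Carlo scatter.

```python
# Check of Eq. (138): the ensemble average of Tr T approaches N l^2 / 6 for long walks.
import numpy as np

rng = np.random.default_rng(8)
d, n_steps, n_walks, l = 3, 400, 3000, 1.0
moves = l * np.vstack([np.eye(d), -np.eye(d)])

trace_sum = 0.0
for _ in range(n_walks):
    steps = moves[rng.integers(0, 2 * d, size=n_steps)]
    positions = np.cumsum(steps, axis=0)
    dev = positions - positions.mean(axis=0)
    trace_sum += np.einsum('ij,ij->', dev, dev) / n_steps   # Tr T for this walk

print("simulated <Tr T>:", trace_sum / n_walks)
print("N l^2 / 6       :", n_steps * l * l / 6)
```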
Figure 17: The three different forms (a, b, and c) of the graph involved in the calculation of $\langle T_{11}^2 \rangle$.
Determination of $\langle T_{11}^2 \rangle$.
We proceed diagram-by-diagram.
Diagram a
Here, the calculation proceeds as it did in the evaluation of hT11 i. Taking
second derivatives and performing integrations by parts, we are left with the
following expression
$$\frac{1}{4N^4} \times 8\; g(z; \vec{k}) \left( \frac{\partial^2}{\partial k_x^2}\, g(z; \vec{k}) \right) g(z; \vec{k}) \left( \frac{\partial^2}{\partial k_x^2}\, g(z; \vec{k}) \right) g(z; \vec{k}) \Bigg|_{\vec{k}=0} \qquad (140)$$
The factor of 8 in the above expression counts the number of ways of constructing diagram a, exchanging end-points of the two dashed curves, and permuting the two curves among themselves. The quantity of interest is, of course, the coefficient of $z^{N-1}$ in (140). We will defer the power series expansion in the fugacity z. Making use of (133) for the Fourier transformed generating function, we end up with the result
$$\frac{2}{N^4} \left( \frac{l^2}{d} \right)^2 \frac{1}{\left(1 - z z_c^{-1}\right)^7} \qquad (141)$$
Diagram b
The evaluation of this diagram is a bit more involved. Utilizing the generating
function in real space, we have the average of interest proportional to the
following expression:
$$\int d^d r_0 \int d^d r_1 \int d^d r_2 \int d^d r_3 \int d^d r_4\; G(z; \vec{r}_1 - \vec{r}_0)\, G(z; \vec{r}_2 - \vec{r}_1)\, G(z; \vec{r}_3 - \vec{r}_2)\, G(z; \vec{r}_4 - \vec{r}_3)\, G(z; \vec{r}_5 - \vec{r}_4)\, (x_4 - x_1)^2 (x_3 - x_2)^2 \qquad (142)$$
The next step is to express the generating functions in terms of their Fourier transforms. We end up with a product containing the factor
$$(x_4 - x_1)^2 (x_3 - x_2)^2\, e^{i\vec{k}_0\cdot(\vec{r}_1 - \vec{r}_0)} e^{i\vec{k}_1\cdot(\vec{r}_2 - \vec{r}_1)} e^{i\vec{k}_2\cdot(\vec{r}_3 - \vec{r}_2)} e^{i\vec{k}_3\cdot(\vec{r}_4 - \vec{r}_3)} e^{i\vec{k}_4\cdot(\vec{r}_5 - \vec{r}_4)} = \frac{\partial^2}{\partial k_{2x}^2} \left( \frac{\partial}{\partial k_{1x}} + \frac{\partial}{\partial k_{2x}} + \frac{\partial}{\partial k_{3x}} \right)^2 e^{i\vec{k}_0\cdot(\vec{r}_1 - \vec{r}_0)} e^{i\vec{k}_1\cdot(\vec{r}_2 - \vec{r}_1)} e^{i\vec{k}_2\cdot(\vec{r}_3 - \vec{r}_2)} e^{i\vec{k}_3\cdot(\vec{r}_4 - \vec{r}_3)} e^{i\vec{k}_4\cdot(\vec{r}_5 - \vec{r}_4)} \qquad (143)$$
After a series of integrations by parts in the variables $\vec{k}_i$ the derivatives above act on the Fourier transforms of the generating functions. The integrations over the $\vec{r}_l$'s produce delta functions, and we are left with the following result
$$\frac{\partial^2}{\partial k_{2x}^2} \left( \frac{\partial}{\partial k_{1x}} + \frac{\partial}{\partial k_{2x}} + \frac{\partial}{\partial k_{3x}} \right)^2 g\!\left(z; \vec{k}_0\right) g\!\left(z; \vec{k}_1\right) g\!\left(z; \vec{k}_2\right) g\!\left(z; \vec{k}_3\right) g\!\left(z; \vec{k}_4\right) \Bigg|_{\vec{k}_0 = \vec{k}_1 = \vec{k}_2 = \vec{k}_3 = \vec{k}_4 = 0} \qquad (144)$$
Diagram c
In this case, the relevant identity is
$$(x_3 - x_1)^2 (x_4 - x_2)^2\, e^{i\vec{k}_0\cdot(\vec{r}_1 - \vec{r}_0)} e^{i\vec{k}_1\cdot(\vec{r}_2 - \vec{r}_1)} e^{i\vec{k}_2\cdot(\vec{r}_3 - \vec{r}_2)} e^{i\vec{k}_3\cdot(\vec{r}_4 - \vec{r}_3)} e^{i\vec{k}_4\cdot(\vec{r}_5 - \vec{r}_4)} = \left( \frac{\partial}{\partial k_{1x}} + \frac{\partial}{\partial k_{2x}} \right)^2 \left( \frac{\partial}{\partial k_{2x}} + \frac{\partial}{\partial k_{3x}} \right)^2 e^{i\vec{k}_0\cdot(\vec{r}_1 - \vec{r}_0)} e^{i\vec{k}_1\cdot(\vec{r}_2 - \vec{r}_1)} e^{i\vec{k}_2\cdot(\vec{r}_3 - \vec{r}_2)} e^{i\vec{k}_3\cdot(\vec{r}_4 - \vec{r}_3)} e^{i\vec{k}_4\cdot(\vec{r}_5 - \vec{r}_4)} \qquad (147)$$
The same set of steps as outlined immediately above leads to the following non-vanishing contributions to the diagram, combinatorial factors having been left out,
$$\left( \frac{\partial^2}{\partial k_{1x}^2} \frac{\partial^2}{\partial k_{2x}^2} + \frac{\partial^2}{\partial k_{3x}^2} \frac{\partial^2}{\partial k_{2x}^2} + \frac{\partial^2}{\partial k_{1x}^2} \frac{\partial^2}{\partial k_{3x}^2} + \frac{\partial^4}{\partial k_{2x}^4} \right) g\!\left(z; \vec{k}_0\right) g\!\left(z; \vec{k}_1\right) g\!\left(z; \vec{k}_2\right) g\!\left(z; \vec{k}_3\right) g\!\left(z; \vec{k}_4\right) \Bigg|_{\vec{k}_0 = \cdots = \vec{k}_4 = 0} \qquad (148)$$
Taking the derivatives indicated, evaluating the $\vec{k}_i = 0$ limits and inserting the required combinatorial factors, we end up with
$$\frac{18}{N^4} \left( \frac{l^2}{d} \right)^2 \frac{1}{\left(1 - z z_c^{-1}\right)^7} \qquad (149)$$
The ratios
The calculations of expressions contributing to $\langle T_{11} T_{22} \rangle$ and $\langle T_{12}^2 \rangle$ proceed along the lines laid out above. For each of these averages, there are three contributions, corresponding to the three diagrams in Fig. 17. Carrying out the required computations, we find
$$\frac{\langle T_{11}^2 \rangle}{\langle T_{11} T_{22} \rangle} = \frac{9}{5} \qquad (151)$$
$$\frac{\langle T_{11}^2 \rangle}{\langle T_{12}^2 \rangle} = \frac{9}{2} \qquad (152)$$
Inserting these results into the right hand side of (126), we obtain (101) for the mean asphericity of a d-dimensional random walk.
References
[Aronovitz and Nelson, 1986] Aronovitz, J. A. and Nelson, D. R. (1986). Universal features of polymer shapes. Journal de Physique, 47(9):1445–56.
[Montroll and Shlesinger, 1983] Montroll, E. W. and Shlesinger, M. F. (1983). In CCNY physics symposium in celebration of Melvin Lax's sixtieth birthday, page 364, New York. City College of New York Physics Dept.
[Rudnick and Gaspari, 1986] Rudnick, J. and Gaspari, G. (1986). The asphericity of random walks. Journal of Physics A (Mathematical and General), 19(4):L191–3.
[Weiss and Rubin, 1976] Weiss, G. H. and Rubin, R. J. (1976). The theory of ordered spans of unrestricted random walks. Journal of Statistical Physics, 14(4):333–50.