The Sum of Log-Normal Variates in Geometric Brownian Motion
arXiv:1802.02939v1 [cond-mat.stat-mech] 8 Feb 2018
Abstract
Geometric Brownian motion (GBM) is a key model for representing self-reproducing entities. Self-reproduction may be considered the definition of life [7], and the dynamics it induces are of interest to those concerned with living systems from biology to economics. Trajectories of GBM are distributed according to the well-known log-normal density, broadening with time. However, in many applications, what's of interest is not a single trajectory but the sum, or average, of several trajectories. The distribution of these objects is more complicated. Here we show two different ways of finding their typical trajectories. We make use of an intriguing connection to spin glasses: the expected free energy of the random energy model is an average of log-normal variates. We make the mapping to GBM explicit and find that the free energy result gives qualitatively correct behavior for GBM trajectories. We then also compute the typical sum of log-normal variates using Itô calculus. This alternative route is in close quantitative agreement with numerical work.
in order. . . . Realism usually requires models that include noise, to account for the multitude of effects not explicitly modelled. GBM is a simple, intuitive, and analytically tractable model of noisy multiplicative growth. A deep understanding of this basic model of self-reproduction is crucial, and only in the last few decades have we achieved this.
A quantity, x, of self-reproducing resources (such as biomass or capital) follows GBM if it evolves over time, t, according to the Itô stochastic differential equation,

dx = x(µ dt + σ dW_t).   (1)

dt denotes the infinitesimal time increment and dW_t the infinitesimal increment in a Wiener process, which is a normal variate with ⟨dW_t⟩ = 0 and ⟨dW_t dW_s⟩ = δ(t − s) dt. µ and σ are constant parameters called the drift and volatility. Put simply, relative changes in resources, dx/x, are assumed to be drawn independently from a stationary normal distribution at each time step. Eq. (1) represents the continuous-time limit of this process. Its
solution,
x(t) = x(0) exp[(µ − σ²/2) t + σ W(t)],   (2)
yields exponential growth at a noisy rate. Hereafter, we shall assume x(0) = 1
and neglect it, except where retaining it is illustrative. The distribution of
x(t) is a time-dependent log-normal,
ln(x(t)) ∼ N((µ − σ²/2) t, σ² t),   (3)
whose mean, median, and variance grow (or decay) exponentially in time:
⟨x(t)⟩ = exp(µt);   (4)
median(x(t)) = exp[(µ − σ²/2) t];   (5)
var(x(t)) = exp(2µt)[exp(σ²t) − 1].   (6)
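As a quick numerical illustration of Eqs. (4)–(6), the following sketch (ours, not from the paper; the parameter values are purely illustrative) simulates many independent GBM trajectories via Eq. (2) and compares the sample mean, median, and variance at a fixed time with the closed-form expressions.

```python
import numpy as np

# Minimal Monte Carlo check of Eqs. (4)-(6); parameters are illustrative.
rng = np.random.default_rng(0)
mu, sigma, t = 0.05, np.sqrt(0.2), 10.0
n_paths = 1_000_000

# Exact solution, Eq. (2), with x(0) = 1: x(t) = exp[(mu - sigma^2/2) t + sigma W(t)]
W_t = rng.normal(0.0, np.sqrt(t), size=n_paths)   # W(t) ~ N(0, t)
x_t = np.exp((mu - 0.5 * sigma**2) * t + sigma * W_t)

print("mean    :", x_t.mean(),     "vs exp(mu t) =", np.exp(mu * t))
print("median  :", np.median(x_t), "vs exp[(mu - sigma^2/2) t] =",
      np.exp((mu - 0.5 * sigma**2) * t))
print("variance:", x_t.var(),      "vs exp(2 mu t)[exp(sigma^2 t) - 1] =",
      np.exp(2 * mu * t) * (np.exp(sigma**2 * t) - 1))
```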
GBM is a standard model in finance for self-reproducing quantities such
as stock prices. Since relative changes are modelled as normal variates, the
central limit theorem means that GBM is an attractor process for a large
class of multiplicative dynamics. Any quantity whose relative changes are
random variables with finite mean and variance will behave like a GBM
after a sufficiently long time. We work with GBM because it is standard
and general. It exemplifies important qualitative and universal features of
multiplicative growth.
Specifically, we are interested in GBM as a model of economic resources,
owned by some entity. We will think of x as measured in currency units,
such as dollars.
1.1 Non-ergodicity of GBM
The non-ergodicity of this growth process manifests itself in an intriguing
way as a difference between the growth of the expectation value of x and
the growth of x over time. Imagine a world where people’s wealth follows
Eq. (1). In such a world each person’s wealth grows exponentially at rate
g_t = µ − σ²/2   (7)
with probability 1 if we observe the person’s wealth for a long time. The
expectation value of each person’s wealth grows exponentially at
g_⟨⟩ = µ.   (8)
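A minimal simulation (ours, with illustrative parameters) makes the difference between the two growth rates tangible: the time-average growth rate of a single long trajectory approaches µ − σ²/2, while the growth rate of the sample mean of many trajectories stays close to µ.

```python
import numpy as np

# Time-average growth rate of one long trajectory vs growth rate of the ensemble mean.
rng = np.random.default_rng(1)
mu, sigma = 0.05, np.sqrt(0.2)

# One long trajectory: g_t = ln x(T) / T -> mu - sigma^2/2 with probability 1 (Eq. (7)).
T = 10_000.0
g_time = ((mu - 0.5 * sigma**2) * T + sigma * rng.normal(0.0, np.sqrt(T))) / T

# Many trajectories at a fixed time: the sample mean grows at rate mu (Eq. (8)) while N is large.
t, N = 5.0, 1_000_000
x = np.exp((mu - 0.5 * sigma**2) * t + sigma * rng.normal(0.0, np.sqrt(t), N))
g_mean = np.log(x.mean()) / t

print("time-average growth rate:", g_time, " (mu - sigma^2/2 =", mu - 0.5 * sigma**2, ")")
print("growth rate of the mean :", g_mean, " (mu =", mu, ")")
```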
A) for large N, the PEA should resemble the expectation value, exp(µt);
for estimating when this occurs is to look at the relative variance of the PEA,

R ≡ var(⟨x(t)⟩_N) / ⟨⟨x(t)⟩_N⟩².   (10)
t < ln N / σ².   (12)
This hand-waving tells us roughly when the large-sample (or, as we see from
Eq. (12), short-time) self-averaging regime holds. A more careful estimate
of the cross-over time in Eq. (28) is a factor of 2 larger, but the scaling is
identical.
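For completeness, here is one way to make the hand-waving explicit (our reconstruction of the omitted step, using only Eqs. (4) and (6) and the independence of the N trajectories; it is not quoted from the paper):

```latex
% Relative variance of the PEA for N independent trajectories, using Eqs. (4) and (6):
R \;\equiv\; \frac{\mathrm{var}(\langle x(t)\rangle_N)}{\langle\langle x(t)\rangle_N\rangle^2}
  \;=\; \frac{\mathrm{var}(x(t))}{N\,\langle x(t)\rangle^2}
  \;=\; \frac{e^{\sigma^2 t}-1}{N}.
% Requiring R \ll 1 means e^{\sigma^2 t}-1 \ll N, which for large N gives t \lesssim \ln N/\sigma^2, Eq. (12).
```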
For t > ln N/σ², the growth rate of the PEA transitions from µ to its t → ∞ limit of µ − σ²/2 (Situation B). Another way of viewing this is to
think about what dominates the average. For early times in the process, all
trajectories are close together, but as time goes by the distribution broadens
exponentially. Because each trajectory contributes with the same weight to
the PEA, after some time the PEA will be dominated by the maximum in
the sample,
⟨x(t)⟩_N ≈ (1/N) max_{i=1…N} {x_i(t)},   (13)
as illustrated in Fig. 1. Self-averaging stops when even the “luckiest” trajec-
tory is no longer close to the expectation value exp(µt). This is guaranteed to
happen eventually because the probability for a trajectory to reach exp(µt)
decreases towards zero as t grows. Of course, this takes longer for larger
samples, which have more chances to contain a lucky trajectory.
[Figure 1: log-linear plot of x(t) versus t (t from 0 to 500, x(t) from 10⁻⁶ to 10⁶); plotted quantities: exp(g_⟨⟩ t), exp(g_t t), ⟨x(t)⟩_N, max_i{x_i(t)}/N, and individual trajectories. See caption below.]
Figure 1: PEA and maximum in a finite ensemble of size N = 256. Red line: expectation value ⟨x(t)⟩. Green line: exponential growth at the time-average growth rate; in the t → ∞ limit all trajectories grow at this rate. Yellow line: contribution of the maximum value of any trajectory at time t to the PEA. Blue line: PEA ⟨x(t)⟩_N. Vertical line: crossover – for t > t_c = 2 ln N/σ² the maximum begins to dominate the PEA (the yellow line approaches the blue line). Grey lines: randomly chosen trajectories – any typical trajectory soon grows at the time-average growth rate. Parameters: N = 256, µ = 0.05, σ = √0.2.
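The qualitative behaviour shown in Fig. 1 is easy to reproduce numerically. The sketch below (ours, using the same illustrative parameters as the figure) tracks a finite-ensemble PEA, the contribution of its largest trajectory, and the expectation value, and prints the crossover time t_c = 2 ln N/σ² of Eq. (28).

```python
import numpy as np

# Finite-ensemble PEA vs the contribution of its maximum (cf. Fig. 1).
rng = np.random.default_rng(2)
mu, sigma, N = 0.05, np.sqrt(0.2), 256
dt, steps = 0.1, 5000                        # simulate up to t = 500

W = np.zeros(N)                              # one Wiener process per trajectory
t_c = 2 * np.log(N) / sigma**2               # crossover time, Eq. (28)
for step in range(1, steps + 1):
    t = step * dt
    W += rng.normal(0.0, np.sqrt(dt), N)     # independent Wiener increments
    x = np.exp((mu - 0.5 * sigma**2) * t + sigma * W)   # exact solution, Eq. (2), x(0) = 1
    if step % 1000 == 0:
        print(f"t={t:6.1f}  PEA={x.mean():10.3e}  max/N={x.max()/N:10.3e}  "
              f"exp(mu t)={np.exp(mu*t):10.3e}  (t_c={t_c:.0f})")
```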
1.3 Our previous work on PEAs
In [10] we analysed PEAs of GBM analytically and numerically. Using Eq. (2) the PEA can be written as

⟨x⟩_N = (1/N) Σ_{i=1}^{N} exp[(µ − σ²/2) t + σ W^(i)(t)],   (14)
where W^(i)(t), i = 1…N, are N independent realisations of the Wiener process. Taking the deterministic part out of the sum we re-write Eq. (14) as

⟨x⟩_N = exp[(µ − σ²/2) t] (1/N) Σ_{i=1}^{N} exp(t^{1/2} σ ξ_i),   (15)

where the ξ_i are standard normal variates, since W^(i)(t) = t^{1/2} ξ_i in distribution.
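In this form the stochastic part is an average of exponentials of standard normal variates ξ_i, i.e. of log-normal variates, which can be sampled directly. A minimal sketch (ours; it merely checks the algebraic rearrangement, with illustrative parameters):

```python
import numpy as np

# Eq. (15): PEA as a deterministic factor times an average of log-normal variates.
rng = np.random.default_rng(3)
mu, sigma, t, N = 0.05, np.sqrt(0.2), 20.0, 256

xi = rng.normal(0.0, 1.0, N)                 # standard normal variates
pea_eq15 = np.exp((mu - 0.5 * sigma**2) * t) * np.mean(np.exp(np.sqrt(t) * sigma * xi))

# Direct form, Eq. (14), using W^(i)(t) = sqrt(t) * xi_i
pea_eq14 = np.mean(np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * xi))

print(pea_eq15, pea_eq14)                    # identical by construction
```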
We did not look in [10] at the expectation value of g_est(t, N) for finite time and finite samples, but it is an interesting object: it depends on N and t but is not stochastic. Note that this is not g_est of the expectation value, which would be the N → ∞ limit of Eq. (16). Instead it is the S → ∞ limit,

⟨g_est(t, N)⟩ = (1/t) ⟨ln(⟨x(t)⟩_N)⟩ = f(N, t),   (20)

where, as in Sec. 1.2, ⟨·⟩ without subscript refers to the average over all possible samples, i.e. lim_{S→∞} ⟨·⟩_S. The last two terms in Eq. (19) suggest an
exponential relationship between ensemble size and time. The final term is a
tricky stochastic object on which the properties of the expectation value in
Eq. (20) will hinge. This term will be the focus of our attention: the sum of
exponentials of normal random variates or, equivalently, log-normal variates.
Like the growth rate estimator in Eq. (16), this involves a sum of log-
normal variates and, indeed, we can rewrite Eq. (19) as
g_est = µ − σ²/2 − (ln N)/t − βF/t,   (24)
which is valid provided that
βJ √(K/2) = σ t^{1/2}.   (25)
Equation (25) does not give a unique mapping between the parameters of
our GBM, (σ, t), and the parameters of the REM, (β, K, J). Equating (up
to multiplication) the constant parameters, σ and J, in each model gives us
a specific mapping:
σ = J/√2   and   t^{1/2} = β √K.   (26)
The expectation value of g_est is interesting. The only random object in Eq. (24) is F. Knowing ⟨F⟩ thus amounts to knowing ⟨g_est⟩. In the statistical mechanics of the random energy model ⟨F⟩ is of key interest, and much is known about it. We can use this knowledge thanks to the mapping between the two problems.
Derrida identifies a critical temperature,
1/β_c ≡ J / (2 √(ln 2)),   (27)
above and below which the expected free energy scales differently with K
and β. This maps to a critical time scale in GBM,
t_c = 2 ln N / σ²,   (28)
with high temperature (1/β > 1/β_c) corresponding to short time (t < t_c) and low temperature (1/β < 1/β_c) corresponding to long time (t > t_c). Note that t_c in Eq. (28) scales identically with N and σ as the transition time, Eq. (12), in our sketch.
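To make the dictionary concrete, the following helper (ours, not from the paper) maps the GBM parameters (σ, t) and sample size N = 2^K to the REM parameters (β, K, J) via Eq. (26), and checks that t = t_c of Eq. (28) corresponds to β = β_c of Eq. (27).

```python
import numpy as np

def gbm_to_rem(sigma, t, N):
    """Map GBM parameters (sigma, t) and sample size N = 2^K to REM (beta, K, J), Eq. (26)."""
    K = np.log2(N)                 # number of spins: N = 2^K energy levels
    J = sigma * np.sqrt(2.0)       # from sigma = J / sqrt(2)
    beta = np.sqrt(t / K)          # from t^(1/2) = beta sqrt(K)
    return beta, K, J

sigma, N = np.sqrt(0.2), 256
t_c = 2 * np.log(N) / sigma**2                  # Eq. (28)
beta_at_tc, K, J = gbm_to_rem(sigma, t_c, N)
beta_c = 2 * np.sqrt(np.log(2)) / J             # Eq. (27): 1/beta_c = J / (2 sqrt(ln 2))
print(beta_at_tc, beta_c)                       # t = t_c corresponds to beta = beta_c
```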
In [5], ⟨F⟩ is computed in the high-temperature (short-time) regime as

⟨F⟩ = E − S/β   (29)
    = −(K/β) ln 2 − βKJ²/4,   (30)
and in the low-temperature (long-time) regime as

⟨F⟩ = −KJ √(ln 2).   (31)
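Putting Eqs. (24), (26), (27), (30), and (31) together gives a numerical prediction for ⟨g_est⟩ in either regime. The sketch below (ours) simply evaluates the quoted formulas; the function name and parameter values are illustrative.

```python
import numpy as np

def g_est_derrida(mu, sigma, t, N):
    """<g_est> from Eq. (24) with Derrida's <F>, Eqs. (30)-(31), via the mapping of Eq. (26)."""
    K = np.log2(N)                               # N = 2^K
    J = sigma * np.sqrt(2.0)                     # Eq. (26)
    beta = np.sqrt(t / K)                        # Eq. (26)
    beta_c = 2 * np.sqrt(np.log(2)) / J          # Eq. (27)
    if beta < beta_c:                            # high temperature <-> short time
        F = -(K / beta) * np.log(2) - beta * K * J**2 / 4      # Eq. (30)
    else:                                        # low temperature <-> long time
        F = -K * J * np.sqrt(np.log(2))                        # Eq. (31)
    return mu - sigma**2 / 2 - np.log(N) / t - beta * F / t    # Eq. (24), with F -> <F>

mu, sigma, N = 0.05, np.sqrt(0.2), 256
for t in (10, 50, 100, 500):
    print(t, g_est_derrida(mu, sigma, t, N))
```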
Short time
We look at the short-time behavior first (high 1/β, Eq. (30)). The relevant computation of the entropy S in [5] involves replacing the number of energy levels n(E) by its expectation value ⟨n(E)⟩. This is justified because the standard deviation of this number is √n and relatively small when ⟨n(E)⟩ > 1, which is the interesting regime in Derrida's case.
For spin glasses, the expectation value of F is interesting, supposedly,
because the system may be self-averaging and can be thought of as an en-
semble of many smaller sub-systems that are essentially independent. The
macroscopic behavior is then given by the expectation value.
Taking expectation values and substituting from Eq. (30) in Eq. (24) we find

⟨g_est⟩_short = µ − σ²/2 + (1/t) KJ²/(4T²).   (32)
From Eq. (25) we know that t = KJ²/(2σ²T²), which we substitute, to find

⟨g_est⟩_short = µ,

i.e. in the short-time regime the expected growth-rate estimator equals the growth rate of the expectation value.
we've stopped using since then because it caused confusion – ⟨g⟩ there denotes the growth rate of the expectation value, which is not the expectation value of the growth rate.
It is remarkable that the expectation value ⟨g_est(N, t)⟩ so closely reflects the median, q_0.5, of ⟨x⟩_N, in the sense that
In [9] it was discussed in detail that g_est(1, t) is an ergodic observable for Eq. (1), in the sense that ⟨g_est(1, t)⟩ = lim_{t→∞} g_est(1, t). The relationship in Eq. (35) is far more subtle. The typical behavior of GBM PEAs is complicated outside the limits N → ∞ or t → ∞, in the sense that growth rates are time-dependent there. This complicated behavior is well represented by an approximation that uses physical insights into spin glasses. Beautiful!
[Figure 2: comparison of growth-rate estimates versus time. Legend entries: Typical PEA; g_est 256, 10,000; Itô g_est 256; median g_est 256; Derrida short time; Derrida long time.]
3 Another route via Itô calculus
Another way to find the expectation value of PEA growth rates, Eq. (20), is to compute ⟨d ln⟨x⟩_N⟩ using Itô calculus. We compute this directly, without invoking the random energy model. To apply Itô calculus, we will need the first two partial derivatives of ln⟨x⟩_N with respect to x_i,
∂ ln⟨x⟩_N / ∂x_i = 1/(N ⟨x⟩_N)   (36)

and

∂² ln⟨x⟩_N / ∂x_i² = −1/(N² ⟨x⟩_N²).   (37)
Now, Taylor-expanding d ln⟨x⟩_N we find

d ln⟨x⟩_N = Σ_i (∂ ln⟨x⟩_N/∂x_i) dx_i + (1/2) Σ_i Σ_j (∂² ln⟨x⟩_N/∂x_i ∂x_j) dx_i dx_j + …   (38)
          ≈ (1/(N ⟨x⟩_N)) Σ_i dx_i − (1/(2N² ⟨x⟩_N²)) Σ_i Σ_j dx_i dx_j.   (39)
Parts of the cross-terms are negligible because they are of order dt² and the rest vanishes when taking the expectation value, as we see by writing out one cross term with i ≠ j,

dx_i dx_j = x_i x_j (µ dt + σ dW_i)(µ dt + σ dW_j) = x_i x_j [µ² dt² + µσ dt (dW_i + dW_j) + σ² dW_i dW_j],

whose expectation value is of order dt² because ⟨dW_i⟩ = 0 and ⟨dW_i dW_j⟩ = 0 for i ≠ j. We therefore drop these terms now, as we take the expectation value, using the Wiener identity ⟨dW_i²⟩ = dt for the final term
⟨d ln⟨x⟩_N⟩ = µ dt − ⟨(1/(2N² ⟨x⟩_N²)) Σ_i x_i² (µ² dt² + σ² dt)⟩   (42)
            = µ dt − (σ²/2) dt ⟨⟨x²⟩_N / (N ⟨x⟩_N²)⟩ + O(dt²).   (43)
Discarding terms of higher-than-first order in dt and re-writing the last term as a fraction of sums, we thus have

⟨d ln⟨x⟩_N⟩ / dt = µ − (σ²/2) (1/N) ⟨ ((1/N) Σ_i x_i²) / ((1/N) Σ_i x_i)² ⟩   (44)
                 = µ − (σ²/2) ⟨ Σ_i x_i² / (Σ_i x_i)² ⟩.   (45)
This expression has the correct behaviour: for short times, all x_i are essentially identical, and the second term is −(σ²/2)(1/N), which is negligible if N is large. So, for short times we see expectation-value behaviour. For long times, the largest x_i will dominate both the numerator and the denominator, and we have −σ²/2: the full Itô correction is felt for long times.
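The Monte-Carlo estimate of Eq. (45) mentioned in the Discussion below is indeed easy to obtain. One possible sketch (ours, with illustrative parameters) draws S independent ensembles of N trajectories at each time and averages the ratio Σ_i x_i²/(Σ_i x_i)².

```python
import numpy as np

# Monte Carlo estimate of Eq. (45): <d ln<x>_N>/dt = mu - (sigma^2/2) < sum x_i^2 / (sum x_i)^2 >
rng = np.random.default_rng(4)
mu, sigma, N, S = 0.05, np.sqrt(0.2), 256, 10_000    # S = number of sample ensembles

for t in (1.0, 10.0, 100.0, 500.0):
    W = rng.normal(0.0, np.sqrt(t), size=(S, N))     # W^(i)(t) for each ensemble
    x = np.exp((mu - 0.5 * sigma**2) * t + sigma * W)            # Eq. (2), x(0) = 1
    ratio = (x**2).sum(axis=1) / x.sum(axis=1)**2                # sum x_i^2 / (sum x_i)^2
    g = mu - 0.5 * sigma**2 * ratio.mean()
    print(f"t={t:6.1f}  estimated growth rate = {g:.4f}  "
          f"(mu = {mu}, mu - sigma^2/2 = {mu - 0.5*sigma**2:.3f})")
```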
4 Discussion
The Itô result is exact. A Monte-Carlo estimate of Eq. (45) (which is easy
to obtain) is shown in Fig. 2 (yellow line). This agrees well with numerical
observations. The approximations from the random energy model have the
right shape and asymptotic behavior, though they’re not on the same scale
as the median PEA. This is, of course, not surprising because these estimates
are not designed to coincide with the median PEA. Quantitatively they are
closer to a higher quantile of the distribution of PEAs. An intriguing question is this: is our computation using Itô calculus helpful for computing the expected free energy of the random energy model?
References
[1] A. Adamou and O. Peters. Dynamics of inequality. Significance,
13(3):32–35, 2016.
[3] J.-P. Bouchaud. On growth-optimal tax rates and the issue of wealth
inequalities. http://arXiv.org/abs/1508.00275, August 2015.
[5] B. Derrida. Random-energy model: Limit of a family of disordered
models. Phys. Rev. Lett., 45(2):79–82, July 1980.