Thinking Probabilistically
Ariel Amir is a Professor at Harvard University. His research centers on the theory of
complex systems.
Thinking Probabilistically
Stochastic Processes, Disordered Systems,
and Their Applications
ARIEL AMIR
Harvard University, Massachusetts
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906
www.cambridge.org
Information on this title: www.cambridge.org/9781108479523
DOI: 10.1017/9781108855259
© Ariel Amir 2021
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2021
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Amir, Ariel, 1981– author.
Title: Thinking probabilistically : stochastic processes, disordered systems,
and their applications / Ariel Amir.
Description: Cambridge, United Kingdom ; New York, NY : Cambridge University
Press, 2021. | Includes bibliographical references and index.
Identifiers: LCCN 2020019651 (print) | LCCN 2020019652 (ebook) |
ISBN 9781108479523 (hardback) | ISBN 9781108789981 (paperback) |
ISBN 9781108855259 (epub)
Subjects: LCSH: Probabilities–Textbooks. | Stochastic processes–Textbooks. |
Order-disorder models–Textbooks.
Classification: LCC QA273 .A548 2021 (print) | LCC QA273 (ebook) | DDC 519.2–dc23
LC record available at https://lccn.loc.gov/2020019651
LC ebook record available at https://lccn.loc.gov/2020019652
ISBN 978-1-108-47952-3 Hardback
ISBN 978-1-108-78998-1 Paperback
Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
Contents
1 Introduction 1
1.1 Probabilistic Surprises 5
1.2 Summary 12
1.3 Exercises 13
2 Random Walks 15
2.1 Random Walks in 1D 16
2.2 Derivation of the Diffusion Equation for Random Walks in Arbitrary
Spatial Dimension 18
2.3 Markov Processes and Markov Chains 24
2.4 Google PageRank: Random Walks on Networks as an Example
of a Useful Markov Chain 25
2.5 Relation between Markov Chains and the Diffusion Equation 30
2.6 Summary 32
2.7 Exercises 32
5 Noise 81
5.1 Telegraph Noise: Power Spectrum Associated with a Two-Level-System 83
5.2 From Telegraph Noise to 1/f Noise via the Superposition of Many Two-
Level-Systems 88
5.3 Power Spectrum of a Signal Generated by a Langevin Equation 89
5.4 Parseval’s Theorem: Relating Energy in the Time and Frequency Domain 90
5.5 Summary 92
5.6 Exercises 92
Appendix B A Brief Linear Algebra Reminder, and Some Gaussian Integrals 211
B.1 Basic Linear Algebra Facts 211
B.2 Gaussian Integrals 212
Appendix D Review of Newtonian Mechanics, Basic Statistical Mechanics, and Hessians 217
D.1 Basic Results in Classical Mechanics 217
D.2 The Boltzmann Distribution and the Partition Function 218
D.3 Hessians 218
References 225
Index 232
Acknowledgments
I am indebted to the students of Harvard course APMTH 203 for their patience and
perseverance as the course materials were developed, and I am hugely grateful to all
of my teaching fellows over the years for their hard work: Sarah Kostinski, Po-Yi Ho,
Felix Wong, Siheng Chen, Pétur Rafn Bryde, and Jiseon Min. Much of the contents of
the Appendices draws on their helpful notes.
I thank Eli Barkai, Stas Burov, Ori Hirschberg, Yipei Guo, Jie Lin, David Nelson,
Efi Shahmoon, Pierpaolo Vivo, and Ahmad Zareei for numerous useful discussions
and comments on the notes. Christopher Bergevin had excellent suggestions for Chap-
ter 2, and Julien Tailleur for Chapter 3. Ori Hirschberg and Eli Barkai had many
important comments and useful suggestions regarding Chapter 6. I thank Grace Zhang,
Satya Majumdar, and Fernando L. Metz for a careful reading of Chapter 8. I am
grateful to Martin Z. Bazant and Bertrand I. Halperin for important comments on
Chapter 9. I thank Terry Tao for allowing me to adapt the discussion of Black–Scholes
in Chapter 3 from his insightful blog. I also thank Dr. Yasmine Meroz and Dr. Ben
Golub for giving guest lectures in early versions of the course. Finally, I am grateful
to Simon’s coffee shop for providing reliably excellent coffee, which was instrumental
to the writing of this book.
I dedicate this book to Lindy, Maayan, Tal, and Ella, who keep my feet on the
ground and a smile on my face.
1 Introduction
I know too well that these arguments from probabilities are impostors, and unless
great caution is observed in the use of them they are apt to be deceptive – in
geometry, and in other things too
(from Plato’s Phaedo)
The purposes of this book are to familiarize you with a broad range of examples
where randomness plays a key role, develop an intuition for it, and get to the level
where you may read a recent research paper on the subject and be able to understand
the terminology, the context, and the tools used. This is in a sense the “organizing
principle” behind the various chapters: In all of them we are driven by applications
where probability plays a fundamental role, and leads to exciting and often intriguing
phenomena. There are many relations between the chapters, both in terms of the
mathematical tools and in some cases in terms of the physical processes involved,
but one chapter does not follow from the previous one by necessity or hinge on it –
rather, the idea is to present a rich repertoire of problems involving randomness,
giving the reader a good basis in a broad range of fields . . . and to have fun
along the way.
Randomness leads to new phenomena. In a classic paper, Anderson (1972) coined
the phrase “more is different”. It is also true that “stochastic is different” . . . The book
will give you some tools to understand phenomena associated with disordered systems
and stochastic processes. These will include percolation (relevant for polymers,
gels, social networks, epidemic spreading); random matrix theory (relevant for
understanding the statistics of nuclear and atomic levels, modeling certain properties
of ecological systems and more); random walks and Langevin equations (pertinent
to understanding numerous applications in physics, chemistry, cell biology as well
as finance). The emphasis will be on understanding the phenomena and quantifying
them. Note that while all of the applications considered here build on random-
ness in a fundamental way, the collection of topics covered is far from repre-
sentative of the vast realm of applications that hinge on probability theory. (For
instance, two important fields not touched on here are statistical inference and
chemical kinetics).
and creatures such as the Fourier transform of e^{iωt}. For a physicist, these objects
should be interpreted under the appropriate regularization – a δ-function should be
thought of as having a finite width, but much smaller than any other relevant scale in
the problem. (Physicists are relatively used to this sort of regularization – for instance,
in computing Green functions using contour integration the contour often has to be
shifted by an amount ±iε to make the results convergent). If the final result depends
on this finite width – then the treatment using δ-functions is inadequate and should be
revisited. But as long as the final results are plausible (e.g., in some cases we can com-
pare with numerics) we will not re-derive them in a rigorous fashion. The advantage of
this non-rigorous approach is that it seems to be the one more relevant to applications,
which are the focus of this book. Experiments and real-life phenomena do not conform
to the mathematical idealization we make anyhow, and von Neumann's quote comes to
mind again. In other words, the more important thing for explaining physical reality is
to have a good model rather than specify the conditions rigorously (a related quote is
attributed to Kolmogorov: “Important is not what is rigorous but what is true”). That
is not to take anything away from the beautiful work of mathematicians – it is just not
the point of this book.
For some students, this non-rigorous approach could prove challenging. When the
rigorously inclined student encounters a situation where they feel the formal manipula-
tions are unjustified, it may prove useful for them to construct counter-examples, e.g.,
functions which do not obey the theorem, and then to consider the physical meaning
of these “good” and “bad” functions – which class is relevant in which physical
situations, and what we learn from the scenarios where the derivation fails. Our goal
here is not to undermine the importance of rigorous mathematics, but to provide a non-
rigorous introduction to the plethora of natural sciences phenomena and applications
where stochasticity plays a central role.
How to read this book A few words on the different topics covered and their rela-
tions. Chapter 1 is introductory, and gives some elementary examples where basic
probability theory leads to perhaps counter-intuitive results. One of the examples,
Benford’s law, touches on some of the topics of Chapter 6 (dealing with heavy-
tailed distributions, falling off as a power-law). Chapter 2 presents random walks
and diffusion, and provides the foundational basis for many of the other chapters.
Chapter 3 directly builds on the simple random walks introduced in Chapter 2,
and discusses the important concepts of Langevin and Fokker–Planck equations.
The first part of the chapter “builds” the formalism (albeit in a non-technical
and non-rigorous fashion), while the second part of the chapter deals with three
applications of the ideas (cell size control – an application in biology, the Black–
Scholes equation – one in economics, and finally a short application in hydrology).
A reader may skip these applications without affecting the readability of the rest
of the materials. Similarly, Chapter 4 (dealing with the “escape over a barrier”
problem) can be viewed as a sophisticated application of the ideas of Chapter 3,
with far-reaching implications. It certainly puts the materials of the previous chap-
ters to good use, but again can be skipped without affecting the flow. Chapter 5
is of particular importance to those dealing with signals and noise, and builds on ideas
introduced in earlier chapters (e.g., the Markov chains of Chapter 2) to analyze the
power spectrum (i.e., noise characteristics) of several paradigmatic systems (including
white noise, telegraph noise, and 1/f noise). Chapter 6 derives a plethora of basic
results dealing with the central limit theorem, its limitations and generalizations,
and the related problem of “extreme value distributions”. It is more technical (and
lengthier) than previous chapters. Chapter 7, dealing with anomalous diffusion, can be
viewed as an advanced application of the materials of Chapter 6, “reaping the fruits” of
the labor of the previous chapter. In a sense, it extends the results of the random walks
of Chapter 2 to scenarios where some of the assumptions of Einstein's approach do
not hold – scenarios that have been shown to be relevant to many systems in physics and biology
(reminiscent of the quote, "everything not forbidden is compulsory" . . .). Chapter 8
deals with random matrices and some of their applications. It is the most technical
chapter in this book, and is mostly independent from the chapter on percolation theory
that follows. Moreover, a large fraction of Chapter 8 deals with a non-trivial derivation
of the “circular law” associated with non-Hermitian matrices, and a reader can skip
directly to Chapter 9 if they prefer. (Note that most of this lengthy derivation “unzips”
the short statements made in the original paper, perhaps giving students a glimpse into
the compact nature in which modern research papers are written!) The final chapter on
percolation theory touches on fundamental concepts such as emergent behavior, the
renormalization group, and critical phenomena. Throughout the chapters, numerical
simulations in MATLAB are provided when relevant.∗ Often, results are easy to obtain
numerically but challenging to derive analytically, highlighting the importance of the
former as a supplement to analytic approaches. Finally, note that occasionally “boxes”
are used where we emphasize an idea or concept by placing the passage between two
solid lines.
∗ The codes can be downloaded here: https://github.com/arielamir/ThinkingProbablistically
Note that each chapter deals with a topic on which many books and many hundreds
of papers have been written. This book merely opens a narrow window into this vast
literature. The references throughout the book are also by no means comprehensive,
and we apologize for not including numerous relevant references – this text is not
intended to be a comprehensive guide to the literature! When possible, we refer to
textbooks on the topic that provide a more in-depth discussion as well as a more
extensive list of references.
A comment on the problems in this book (and their philosophy) The problems at the
end of each chapter are a little different from those encountered in most textbooks. The
phrasing is often laconic or even vague. Students might complain that “the problem is
not hard – I just cannot figure out what it is!” This actually reflects the typical situation
in many real-life problems, be it in academia or industry, where figuring out how to
set up the problem is often far more challenging than solving the problem itself.
The Google PageRank algorithm described in Chapter 2 is a nice example where
simple, well-known linear algebra can be highly influential when used correctly in
the appropriate context. The situation might be frustrating at times, when trying to
prove something without being given in advance the precise conditions for the results
to hold – yet this mimics the situation encountered so often in research. Indeed,
many of the original problems arose from the author’s own research experience or
from (often recent) research papers, and as such reflect “natural” problems rather
than contrived exercises. In other cases, the problems supplement the materials of the
main chapter and essentially “teach” a classic theorem (e.g., Pólya’s theorem in
Chapter 2) through hands-on experience and calculations (and with the proper
guidance to make it manageable, albeit occasionally challenging). We made a
conscious choice to make the problems less defined and avoid almost categorically
problems of the form “Prove that X takes the form of Y under the assumptions Z.”
The philosophy behind this choice is to allow this book (and the problems) to serve
as a bridge between introducing the concepts and doing research on related topics.
The typical lack of such bridges is nicely articulated by Williams (2018), which was
written by a graduate student based on his own first-hand experience in making the
leap from undergraduate course work to graduate-level physics research. Trickier
problems will be denoted by a * (hard) or ** (very hard), based on the previous
experience of students tackling these problems.
The table below lists the offspring's genotype (and their probabilities in brackets) given the mother's and
father’s genotypes.
mother \ father |         aa         |         AA         |             aA
       aa       |       aa (1)       |       aA (1)       |     aA (1/2), aa (1/2)
       AA       |       aA (1)       |       AA (1)       |     AA (1/2), aA (1/2)
       aA       | aa (1/2), aA (1/2) | AA (1/2), aA (1/2) | aa (1/4), AA (1/4), aA (1/2)
We shall denote the relative abundance of genotypes AA, aA, and aa by p, 2q, and r,
respectively (thus, by definition p + 2q + r = 1). Surprisingly, at the beginning of the
twentieth century, it was not clear what controls the relative abundance of the three
types: What are the possible stable states? What are the dynamics starting from a
generic initial condition?
On surprises If you haven’t seen this problem before, you might have some prior
intuition or guesses as to what the results might be. For instance, it might be reasonable
to expect that if a disease corresponding to a recessive gene is initially very rare in the
population, then over time it should go extinct. This is, in fact, not the case, as we shall
shortly see. In that sense, you may call the results “surprising.” But perhaps a reader
with better intuition would have guessed the correct result a priori, and will not find the
result surprising at all – in that sense, the notion of a “surprising result” in science is, in
fact, a rather unscientific concept. In retrospect, mathematical results cannot really be
surprising . . . Nevertheless, the scientific process itself is often driven by intuition and
lacks the clarity of thought that is the luxury of hindsight, and for this reason scientists
do often invoke the concept of a “surprising result.” Moreover, this often reflects
our expectations from prior null models that we are familiar with. For example, in
Section 1.1.2 we will show a simple model suggesting an exponential distribution
of the time intervals between subsequent buses reaching a station. Armed with this
insight, we can say that the results described in Chapter 8, finding a distribution of time
intervals between buses that is not only non-exponential but in fact non-monotonic,
are surprising! But this again illustrates that our definition of surprising very much
hinges on our prior knowledge, and perhaps a more (or less) mathematically sophisti-
cated reader would not find the latter finding surprising. For a related paper, see also
Amir, Lemeshko, and Tokieda (2016b).
Remarkably, it was not until 1908 that the mathematician G. H. Hardy sent a letter
to the editor of Science magazine clearing up this issue (Hardy 1908). His letter
became a cornerstone of genetics (known today as the Hardy–Weinberg equilibrium, a
name also crediting the independent contributions of Wilhelm Weinberg). The model
and calculations are extremely simple. Assuming a well-mixed population in its nth
generation, let us compute the abundance of the three genotypes in the n + 1 genera-
tion, assuming for simplicity random mating between the three genotypes. Using the
table, it is straightforward to work out that the equations relating the fractions in one
generation to the next are
p_{n+1} = (p + q)^2,     (1.1)
q_{n+1} = (p + q)(q + r),     (1.2)
r_{n+1} = (q + r)^2.     (1.3)
Note that we dropped the n subscript on the RHS (the abbreviation we will use for
"right-hand side" throughout the text) to make the notation less cumbersome. As a sanity
check, you can verify that these sum up to (p + 2q + r)^2 = 1.
If we reach a stationary ("equilibrium") state, then p_{n+1} = p_n, etc. This implies
that
pr = q^2.     (1.4)
(and the second equation has the same structure – can you see why there is no need to
check the third?).
Finally, how long would it take us to reach this state starting from general initial
conditions p_1, q_1, and r_1? Note that q_{n+1} = (p + q)(r + q), hence
p_{n+1} r_{n+1} = (p + q)^2 (q + r)^2 = q_{n+1}^2,
i.e., the equilibrium condition (1.4) is already satisfied after a single generation of
random mating, regardless of the initial condition.
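This one-generation convergence is easy to confirm numerically by iterating the recursion above. Below is a minimal MATLAB sketch (an illustrative snippet, separate from the codes accompanying the book; the initial fractions are an arbitrary choice satisfying p + 2q + r = 1):

% Iterate the recursion p_{n+1} = (p+q)^2, q_{n+1} = (p+q)(q+r), r_{n+1} = (q+r)^2
p = 0.7; q = 0.05; r = 0.2;     % arbitrary initial fractions with p + 2q + r = 1
for n = 1:5
    pNew = (p + q)^2;
    qNew = (p + q)*(q + r);
    rNew = (q + r)^2;
    p = pNew; q = qNew; r = rNew;
    fprintf('generation %d: p = %.4f, 2q = %.4f, r = %.4f, pr - q^2 = %.2e\n', ...
            n, p, 2*q, r, p*r - q^2);
end

The printed value of pr − q^2 is zero (up to floating-point error) from the very first iteration onward.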
What should we expect? In case A, it is equally probable for the waiting time to
be 1,2 . . . ,6 minutes, hence the average waiting time is 3.5 minutes. Try to think
about case B. Given that the total number of buses per day and the average time
between buses is identical, you might expect that the average waiting time would be
identical too. This is not the case, as we shall shortly show: In case B, the average
time between buses is also 6 minutes, but, perhaps counterintuitively, this is also the
average waiting time!
To see this, let us assume that we got to the station at a random time. The probability
to wait a minute until the next one is 1/6. The probability for a 2 minute wait is (5/6)(1/6),
and more generally the probability to wait n minutes, p_n, is
p_n = (1 − p)^{n−1} p     (1.6)
(with p = 1/6).
Therefore, the average waiting time is
\langle T \rangle = \sum_{n=1}^{\infty} p_n\, n = \sum_{n=1}^{\infty} n (1 - p)^{n-1} p.     (1.7)
Without the n in front, this would be a geometric series. To deal with it, define
q ≡ 1 − p, and note that
\sum_{n=0}^{\infty} q^n = 1/(1 - q).     (1.8)
Differentiating both sides with respect to q gives \sum_{n=1}^{\infty} n q^{n-1} = 1/(1 - q)^2, and therefore
\langle T \rangle = p \sum_{n=1}^{\infty} n q^{n-1} = p/(1 - q)^2 = 1/p = 6 minutes.
Looking back, this makes perfect sense, since the fact that a bus just left does not
“help” us regarding the next one – the process has no memory. Interestingly, this
example is relevant for the physics of a (classical) model of electron transport, known
as the Drude model – where our calculations imply that an additional factor of “2”
should not be present in the final result.
What about the distribution of time gaps? It is given by Eq. (1.6), and is therefore
exponential. This process is a simple example of a random process, and in the con-
tinuum limit where the time interval is vanishingly small this is known as a Poisson
process (see Appendix A for the related Poisson distribution, describing the probabil-
ity distribution of the number of events occurring within a fixed time interval).
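The difference between the two bus schedules is also easy to see numerically. The following MATLAB sketch (an illustrative snippet, not part of the book's accompanying codes; the simulation length is an arbitrary choice) compares the average wait of a rider arriving at a uniformly random minute under the regular schedule of case A and the random, memoryless schedule of case B:

% Average waiting time of a rider arriving at a random minute.
p = 1/6; nMinutes = 1e6;                          % arbitrary simulation length
busB = rand(1, nMinutes) < p;                     % random (memoryless) schedule
busA = false(1, nMinutes); busA(6:6:end) = true;  % regular schedule, a bus every 6 minutes
nRiders = 1e5;
waitA = zeros(1, nRiders); waitB = zeros(1, nRiders);
for k = 1:nRiders
    t = randi(nMinutes - 1000);                   % random arrival time, away from the end
    waitA(k) = find(busA(t+1:end), 1);            % minutes until the next bus
    waitB(k) = find(busB(t+1:end), 1);
end
fprintf('mean wait, regular schedule: %.2f min; random schedule: %.2f min\n', ...
        mean(waitA), mean(waitB));

Although the mean gap between buses is 6 minutes in both cases, the regular schedule gives a mean wait of about 3.5 minutes while the random one gives about 6 minutes, in line with the calculation above.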
A note on terminology Throughout this book, we will follow the physicists’ terminol-
ogy of referring to a probability density function (pdf) as a “probability distribution,”
and referring to a “cumulative distribution” for the cumulative distribution function
(cdf). Moreover, for a real random variable X we will denote the probability dis-
tribution by p(x), rather than the notation fX often used in mathematics. Further
notational details are provided in Appendix F.
What about real buses? An online blog analyzed the transportation system in London
and showed that it is Poissonian (i.e., corresponds to the aforementioned random
bus scheduling) (http://jasmcole.com/2015/03/02/two-come-along-at-once/). This
implies that the system is not optimal (since we can get buses coming in “bunches,” as
well as very long waits). On the other hand, later in the book (Chapter 8) we will see
a case where buses were not Poissonian but also not uniform – the distribution was
very different from exponential (the Poisson case) but was not narrowly peaked (the
uniform case). Interestingly, it vanished at zero separation – buses “repelled” each
other. It was found to be well described by the results of random matrix theory, which
we shall cover in Chapter 8.
Method 1: Choosing the endpoints. Let us choose the two endpoints of the chord at
random. The chord is longer than the side of the triangle in 1/3 of the cases – as is
illustrated in Fig. 1.1
Method 2: Choosing the midpoint. What about if we choose a point randomly and
uniformly in the circle, and define it to be the middle of the chord? From the
Figure 1.2 Method 2: Choosing the chord midpoint randomly and uniformly in the circle.
Figure 1.3 Method 3: Defining the chord midpoint by choosing a point along the radius.
construction of Fig. 1.2, we see that when the point falls within the inner circle
the chord will be long enough. Its radius is R sin(30°) = R/2, hence its area is 1/4
times that of the outer circle – therefore, the probability will be 1/4.
Method 3: Choosing a point along the radius to define the midpoint. If we choose the
chord midpoint along the radius of the circle with uniform probability, the chord
will be long enough when the chosen point is sufficiently close to the center – it is
easy to see that the triangle bisects the radius, so in this case the probability will be
1/2 (see Fig. 1.3).
Importantly, there is no right or wrong answer – but the point is that one has to
describe the way through which the “random” choice is made to fully describe the
problem.
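The three constructions are straightforward to simulate. The MATLAB sketch below (an illustrative snippet, separate from the book's accompanying codes; a unit circle is used, so the inscribed triangle has side √3) estimates the probability of a "long" chord under each method:

% Bertrand's paradox: probability that a "random" chord of the unit circle
% is longer than sqrt(3), the side of the inscribed equilateral triangle.
n = 1e6; side = sqrt(3);
% Method 1: two endpoints chosen uniformly on the circle
th = 2*pi*rand(2, n);
len1 = 2*abs(sin((th(1,:) - th(2,:))/2));         % chord length from the angular difference
% Method 2: chord midpoint chosen uniformly in the disk
r = sqrt(rand(1, n));                             % uniform in the disk: r ~ sqrt(U)
len2 = 2*sqrt(1 - r.^2);
% Method 3: chord midpoint chosen uniformly along a radius
d = rand(1, n);
len3 = 2*sqrt(1 - d.^2);
fprintf('P(long chord) = %.3f, %.3f, %.3f\n', ...
        mean(len1 > side), mean(len2 > side), mean(len3 > side));

The three estimates come out close to 1/3, 1/4, and 1/2, respectively: the answer is fixed only once the random ensemble is specified.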
[Figure 1.4: two bar charts of the first-digit probability versus digit, each compared with Benford's law; left panel: "Results from USA 2016", right panel: "First digit for Massachusetts city populations".]
Figure 1.4 (left) Example of a dataset approximately following Benford’s law, obtained by
using readily available data for the vote total of the candidates in the 2016 elections in the
USA across the different states (data from Wikipedia). You can easily test other datasets
yourself. A similar analysis was used to suggest fraud in the 2009 Iranian elections (Battersby
2009). (right) Similar analysis of city population size in Massachusetts, based on the 2010
census data (www.togetherweteach.com/TWTIC/uscityinfo/21ma/mapopr/21mapr.htm).
Benford's law states that the probability for the first (leading) digit of a number drawn
from many naturally occurring datasets to equal d is
P(d) = \log_{10}\left(1 + \frac{1}{d}\right)     (1.11)
(this implies that 1 occurs in about 30% of cases and 9 in less than 5%!). Although here
it makes no difference, log refers to the natural logarithm throughout the text, unless
otherwise specified.
Clearly, this is not always true, e.g., MATLAB’s random number generator closely
follows a uniform distribution, and hence the distribution of the first digit will be
uniform. But it turns out to be closely followed for, e.g., tax returns, city populations,
election results, physics constants, etc. What do these have in common? The random
variable is broadly distributed, i.e., the distribution spans many decades, and it is far
from uniform. In fact, to get Eq. (1.11), we need to assume that the logarithm of the
distribution is uniformly distributed over a large number of decades, as we shall now
show. If x is such a random variable, with y = log(x), then for values of y within the
support of the uniform distribution we have
p(x)dx = g(y)dy = Cdy,     (1.12)
hence p(x) = C/x over this range. Integrating C/x over the digit-d interval [d · 10^k, (d + 1) · 10^k)
and normalizing by the integral over the whole decade [10^k, 10^{k+1}) gives precisely
log_{10}(1 + 1/d), i.e., Benford's law, Eq. (1.11).
The importance of specifying the ensemble Although the problems are very different
in nature, there is a deep analogy between Bertrand’s paradox and Benford’s law
in the following sense: In both cases the “surprising result” comes from a lack of
definition of the random ensemble involved. In Bertrand’s paradox case, it is due to
our loose phrasing of “random.” In the case of Benford’s law, it is manifested in our
misconception that given that we are looking at a random variable, the distribution of
the first digit should be uniform – this would indeed be true if the random variable
is drawn from a broad, uniform distribution, but such distributions typically do not
correspond to naturally occurring datasets.
It is easy to show that Benford’s law arises if we assume that the distribution of the first
digit is scale invariant (e.g., the tax returns can be made in dollars or euros). But why
do such broad distributions arise in nature so often? One argument that can be made to
rationalize Benford’s law relies on multiplicative processes, an idea that (potentially)
traces back to Shockley. He noticed that the productivity of physicists at Bell labs –
quantified in terms of their publication number – follows a log-normal distribution,
and wanted to rationalize this observation (Shockley 1957). We repeat the argument
here, albeit for the example of the population of a city, x.
This depends on a large number N of “independent” variables: the availability of
water, the weather, the quality of the soil, etc. If we assume that:
N
x= x1 · x2 . . . xN , (1.16)
i=1
with xi some random, independent variables (e.g., drawn from a Gaussian distribu-
tion), then the distribution of the logarithm of x can be approximated by a Gaussian
(by the central limit theorem, see Appendix A), hence p(x) will be a log-normal
distribution – which is very similar to the uniform distribution we assumed above,
and Benford’s law will approximately follow.
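A short numerical illustration of this argument: multiply many independent positive factors, extract the leading digit of the product, and compare with Eq. (1.11). The MATLAB sketch below is an illustrative snippet (the number of factors and their distribution are arbitrary choices, and it is separate from the book's accompanying codes):

% First-digit distribution of a multiplicative process vs. Benford's law.
N = 50; nSamples = 1e5;                           % arbitrary choices
x = prod(exp(randn(N, nSamples)), 1);             % product of N independent positive factors
firstDigit = floor(x ./ 10.^floor(log10(x)));     % leading digit of each sample
empirical = histcounts(firstDigit, 0.5:1:9.5) / nSamples;
benford = log10(1 + 1./(1:9));
disp([empirical; benford]);                       % first row: simulation; second row: Benford's law

Because log(x) is a sum of many independent terms, it is broadly and smoothly distributed over several decades, and the leading-digit frequencies land close to log10(1 + 1/d) even though the individual factors have nothing to do with Benford's law.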
For another example where a similar argument is invoked to explain the observation
of a log-normal distribution of file sizes, see Downey (2001). Another example of
a variant of this argument relates to the logarithmic, slow relaxations observed in
nature: see Amir, Oreg, and Imry (2012). We will revisit this example in Chapter 6
in more detail. Finally, the log-normal distribution and the multiplicative mechanism
outlined above also pop up in the “numerology” context of Amir, Lemeshko, and
Tokieda (2016b).
1.2 Summary
For further reading See Mlodinow (2009) for an elementary but amusing book on
surprises associated with probability and common pitfalls, with interesting historical
anecdotes.
1.3 Exercises
(a) Consider a given electron in the metal. Determine the probability of the electron
not colliding within the time interval [0,t].
(b) Assume that a given electron scatters at t = 0 and let T be the time of the
following scattering event. Find the probability of the event that t < T < t + dt.
Also calculate the expected time T between the two collisions.
(c) Let t = 0 be an arbitrary observation time. Let T_n be the time until the next
collision after t = 0 and T_l be the time since the last collision before t = 0.
Consider the random variable T = T_l + T_n. Determine the distributions of T_l and
T_n, and from them deduce ⟨T⟩. Does your answer agree with the result in part
(b)? If not, explain the discrepancy between the two answers.
1.6 Mutating Genome*
Consider an organism with a genome in which mutations happen as a Poisson process
with rate U . Assume that all mutations are neutral (i.e., they do not affect the rate of
reproduction). Assume the genome is large enough that the mutations always happen
at different loci (this is known as the infinite sites model) and are irreversible. We start
at t = 0 when there are no mutations.
(a) What is the probability that the genome does not obtain any new mutations within
the time interval [0,t)?
(b) What is the expected number of mutations for time T ?
(c) Consider a population following the Wright–Fisher model: At each generation,
each of N individuals reproduces, but we keep the population size fixed by ran-
domly sampling N of the 2N newborns. Find the probability of two individuals
having their “first” (latest chronologically) common ancestor t generations ago,
P (t). Hint: Go backwards in time with discrete time steps. What is the continuum
limit of this probability? (i.e., the result for a large population size).
(d) Let us add mutations to the Wright–Fisher model. Assume we sample two indi-
viduals that follow two different lineages for precisely t generations (i.e., their
first common ancestor occurred t generations ago). What is P (π|t), the probabil-
ity of π mutations arising during the t generations?
(e) What is P (π), the probability of two individuals being separated by π mutations
after they were born from the same parent? What is the expected value of π? (You
may work in the continuum limit as in (c), corresponding to a large population
size N ≫ 1).
These motions were such as to satisfy me . . . that they arose neither from currents in
the fluid, nor from its gradual evaporation, but belonged to the particle itself
(Robert Brown)
Consider a small particle suspended in a liquid. Due to the constant collisions with
the surrounding liquid molecules, the path followed by the particle will be erratic, as
was first noticed by Robert Brown in the nineteenth century in experiments where
he was tracking the motion of pollen grains (Brown 1828). As a result, this is often
known as Brownian motion – see also Pearle et al. (2010) for a modern take on
Brown’s experiments, and Fig. 2.1 for a later example of a quantitative study of
particle diffusion by Jean Baptiste Perrin, which we will mention again in Chapter 3.
This process was modeled by Albert Einstein, who derived the so-called diffusion
equation, and understood the nature of the particle’s dynamics. In this chapter, we
will first study a simplified model for diffusion, where, following Einstein’s original
derivation from 1905, time will be discrete (i.e., at every time step the particle will
move in some random direction). We will understand how far the particle typically
gets after making N moves, and what its probability distribution is. These ideas are
central in understanding numerous processes around us: from the dynamics of dif-
fusing particles in liquids as well as in living cells, to modeling the dynamics of the
stock market (which we will get to later in the book, in Chapter 3). In fact, the concept
of “random walks,” as this dynamics is often referred to, will also play an impor-
tant role in our discussion of “Google PageRank,” the algorithm at the heart of the
search engine.
Asking the right question In science, it is often as important (and hard) to ask the right
question as to come up with the right answer. The great statistician Karl Pearson
(1905) sent a letter to Nature magazine posing, quite literally, the problem of the
random walker: given that at every step a person chooses a random direction and walks
a constant number of steps, what is the distribution of their position after N steps? It
is remarkable that random walks were only introduced in the twentieth century, and a
strange coincidence that they were almost simultaneously suggested by Pearson and
Einstein.
Figure 2.1 An experimental observation by Perrin of the diffusion of small (spherical) particles
suspended in a liquid. From Perrin (2013).
2.1 Random Walks in 1D
Consider first a random walker on a one-dimensional lattice, taking at each time step a step of +1 or −1 with equal probability. If the walker is currently at position x, then averaging over the two possible moves, its expected squared position after the step is
\frac{1}{2}\left[(x + 1)^2 + (x - 1)^2\right] = x^2 + 1.     (2.1)
Averaging also over the position after N steps, we find
\langle x_{N+1}^2 \rangle = \langle x_N^2 \rangle + 1;     (2.2)
since the walker starts at the origin, \langle x_0^2 \rangle = 0, and therefore
\langle x_N^2 \rangle = N.     (2.3)
This implies that the typical distance from the origin scales like √N. What about the
position distribution? If N is even, then it is clear that the probability to be a distance
M from the origin after N steps is zero for odd M, and for even M it is
p_M = \frac{1}{2^N} \binom{N}{R} = \frac{1}{2^N} \frac{N!}{R!\,(N - R)!},     (2.4)
where R is the number of steps to the right, thus R − (N − R) = 2R − N = M. We can
now evaluate this probability for N ≫ M using Stirling's formula, which provides an
(excellent) approximation for N!, namely N! ≈ √(2πN) (N/e)^N. This leads to
p_M \approx \frac{1}{\sqrt{2\pi}} \frac{1}{2^N} \frac{N^{N + 1/2}}{R^{R + 1/2} (N - R)^{N - R + 1/2}}
    = \frac{1}{\sqrt{2\pi}}\, e^{-N \log(2) + (N + 1/2)\log(N) - (R + 1/2)\log(R) - (N - R + 1/2)\log(N - R)}.     (2.5)
We can proceed by using our assumption that N ≫ M, implying that R is approximately
equal to N/2. Simplifying leads to
(R + 1/2)\log(R) + (N - R + 1/2)\log(N - R) \approx (N + 1)\log(N/2) + M^2/2N.     (2.9)
Finally,
p_M \approx \frac{2}{\sqrt{2\pi N}}\, e^{-M^2/2N}.     (2.10)
Hence the distribution is approximately Gaussian. In fact, this had to be the case since
x is a sum of independent random variables, hence according to the central limit
theorem (see Appendix B for a reminder) it should indeed converge to a Gaussian
distribution! Note that the distribution indeed sums up to 1, since the support of this
Gaussian is only on even sites.
In summary, we learnt that
1. A diffusing particle would get to a distance ∼ √t from its starting position after a
time t.
2. The probability distribution describing the particle's position is approximately
Gaussian.
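Both statements are easy to verify numerically. Below is a minimal MATLAB sketch (an illustrative snippet, separate from the book's accompanying codes; the number of steps and walkers are arbitrary choices):

% 1D random walk: check <x_N^2> = N and the approximately Gaussian position distribution.
N = 1000; nWalkers = 1e5;                         % arbitrary choices
steps = 2*(rand(N, nWalkers) > 0.5) - 1;          % each step is +1 or -1 with equal probability
x = sum(steps, 1);                                % final positions after N steps
fprintf('<x^2> = %.1f (expected %d)\n', mean(x.^2), N);
histogram(x, 'Normalization', 'pdf'); hold on;
xs = linspace(-4*sqrt(N), 4*sqrt(N), 200);
plot(xs, exp(-xs.^2/(2*N))/sqrt(2*pi*N), 'LineWidth', 1.5);   % Gaussian with variance N

The measured ⟨x²⟩ is close to N, and the histogram of final positions falls on top of a Gaussian of variance N.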
Random walks in 1D are strange and interesting! The problem set at the end of the
chapter will expose you to some of the peculiarities associated with random walks in
one dimension. Problems 2.6 and 2.7 deal with recurrent vs. transient random walks.
It turns out that in one and two dimensions a random walk will always return
to the origin sometime in the future ("recurrent random walks") – but that this is not
the case in higher dimensions, where the random walk is called "transient." This is
known as Pólya’s theorem. Given that 1D random walks are recurrent, we may ask
what the “first return time” distribution is – the distribution of the time to return to the
origin for the first time. Solving Problems 2.1, 2.3 or 2.9 will show you that this is a
power-law distribution, which, remarkably, has diverging mean – so while you always
return to the origin, the mean time to return is infinite! This property is discussed in
detail in Krapivsky, Redner, and Ben-Naim (2010). In fact, mean first passage times
of random walkers (in any dimension) have a beautiful analogy with resistor networks
(see also Doyle and Snell 1984) and relations with harmonic functions. Additional
peculiarities arise if we consider the distribution of the last time a random walker
returns to the origin within a given time interval. This is studied in Problem 2.2 (see
also Kostinski and Amir 2016).
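To get a feel for the heavy tail of the first return time, one can simulate it directly; the sketch below (an illustrative MATLAB snippet, with an arbitrary cutoff on the walk length since some walks take very long to return) records the first time each walk revisits the origin:

% First return time of a 1D random walk to the origin (walks truncated at tMax steps).
nWalks = 1e4; tMax = 1e5;                         % arbitrary choices
tReturn = nan(1, nWalks);
for k = 1:nWalks
    x = cumsum(2*(rand(tMax, 1) > 0.5) - 1);      % one trajectory
    idx = find(x == 0, 1);                        % first time the walk is back at the origin
    if ~isempty(idx), tReturn(k) = idx; end
end
returned = ~isnan(tReturn);
fprintf('fraction returned within %d steps: %.3f\n', tMax, mean(returned));
fprintf('mean return time among those that returned: %.0f steps\n', mean(tReturn(returned)));

Most walks return after only a few steps, yet the sample mean keeps growing as the cutoff is increased – a symptom of the power-law first return time distribution with a diverging mean.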
2.2 Derivation of the Diffusion Equation for Random Walks in Arbitrary Spatial Dimension
We shall now approach the problem with more generality, following nearly precisely
the derivation by Einstein. We will work in 3D but the approach would be the same
in any dimension. The approach will have discrete time steps, but the step direction
will be a continuous random variable, described by a probability distribution g(Δ)
(here Δ is a vector describing the step in the 3D space – not to be confused with the
Laplacian operator!). We will not limit the random walker to a lattice, though it is
possible to implement such a scenario by taking g(Δ) to be a sum of δ-functions (can
you see how?).
We will seek to obtain the probability distribution p(r), i.e., p(r)dV will be the
probability to find the particle in a volume dV around the point r (at some point in
time corresponding to a given number of steps). If the original problem is cast on a
lattice, this distribution will be relevant to describe the coarse grained problem, when
we shall zoom-out far enough such that we will not care about the details at the level
of the lattice constant.
If at time t the probability distribution is described by p(r,t), let us consider what
it will be a time τ later, where τ denotes the duration of each step. As you can guess,
in a realistic scenario the time of a step is non-constant, and τ would be the mean step
time. Thus, we haven’t lost too much in making time discrete – but we did make an
assumption that a mean time exists. In Chapter 7 we will revisit this point, and show
that in certain situations when the mean time diverges, we can get subdiffusion (slower
spread of the probability distribution over time compared with diffusion).
To find p(r,t + τ), we need to integrate over all space, and consider the probability
to have the “right” jump size to bring us to r. This leads to
p(r, t + \tau) = \int p(r - \Delta, t)\, g(\Delta)\, d^3\Delta.     (2.11)
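If the step distribution is narrow on the scale over which p varies, the integrand can be Taylor expanded in the step; schematically,
p(r - \Delta, t) \approx p(r, t) - \sum_i \Delta_i \frac{\partial p}{\partial x_i} + \frac{1}{2} \sum_{i,j} \Delta_i \Delta_j \frac{\partial^2 p}{\partial x_i \partial x_j}.
Inserting this into Eq. (2.11), the zeroth-order term integrates to p(r,t) since g is normalized, and the first-order term vanishes whenever the mean step is zero, so only the second-order terms survive; this is what leads to Eq. (2.16) below.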
To which order should we expand? In deriving both Eqs. 2.6 and 2.14 we had to
decide to which order we should Taylor expand a function. Were we to only retain
the first-order term in the expansion, the results would have been nonsensical. In
principle, we should make sure that the next order in the expansion is negligible
compared with the terms we have kept, but we will often omit this step and rely on
our physical intuition instead in deciding the “correct” order of expansion.
Once again we may make further progress if we make assumptions regarding the
symmetries associated with g: If we assume isotropic diffusion then g(\Delta_x, \Delta_y, \Delta_z) =
g(-\Delta_x, \Delta_y, \Delta_z), etc., implying that the only terms that would survive in the integration
are the "diagonal" ones, and they would all be equal. Hence
p(r, t + \tau) - p(r, t) \approx \frac{1}{2} \sum_i \frac{\partial^2 p}{\partial x_i^2} \int \Delta_i^2\, g(\Delta)\, d^3\Delta.     (2.16)
To which order should we expand? Again! One might argue, correctly, that to better
approximate p(r,t + τ) − p(r,t) we should evaluate the partial derivative with respect
to time in the middle of the interval (t,t + τ). Can you see why in the above derivation
it suffices to evaluate it at time t?
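Dividing Eq. (2.16) by τ and taking the continuum limit then yields a diffusion equation, with the diffusion coefficient set by the second moment of the step distribution; schematically,
\frac{\partial p}{\partial t} = D \nabla^2 p, \qquad D \equiv \frac{1}{2\tau} \int \Delta_i^2\, g(\Delta)\, d^3\Delta = \frac{\langle \Delta^2 \rangle}{6\tau},
where ⟨Δ²⟩ denotes the mean squared step length. This is the diffusion equation that is Fourier transformed below in order to solve for p(r,t).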
Notice that on the way we made another "hidden" assumption: that the second moment
of the jump distribution exists. If it doesn't, the √t scaling that we are familiar with
will break down, and this time we will get superdiffusion (faster spread of the prob-
ability distribution over time compared with diffusion) – this scenario is known as a
Lévy-flight. An interesting case arises when the variance of the step size diverges as
well as the mean time between steps. Should we get sub or super diffusion in this case?
As one may anticipate, the two effects compete with each other, and both options can
occur, depending on the details. We will study this in detail later on in the book, in
Chapter 7.
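The contrast between ordinary diffusion and a Lévy flight is easy to see numerically. The MATLAB sketch below (an illustrative snippet, separate from the book's accompanying codes; the tail exponent is an arbitrary choice with a diverging second moment) compares the spread of walkers with finite-variance steps to walkers with heavy-tailed step sizes:

% Spread of 1D walkers: finite-variance steps vs. heavy-tailed (Levy-flight-like) steps.
N = 1000; nWalkers = 1000; alpha = 1.5;           % alpha < 2: infinite step variance
normalSteps = randn(N, nWalkers);                 % finite second moment
sgn = sign(rand(N, nWalkers) - 0.5);
heavySteps = sgn .* rand(N, nWalkers).^(-1/alpha);   % |step| is Pareto-distributed with exponent alpha
xNormal = cumsum(normalSteps, 1);
xHeavy = cumsum(heavySteps, 1);
t = (1:N)';
% median |x| as a robust measure of the spread (the variance of the Levy flight diverges)
loglog(t, median(abs(xNormal), 2), t, median(abs(xHeavy), 2));
legend('finite-variance steps', 'heavy-tailed steps'); xlabel('number of steps'); ylabel('median |x|');

On the log–log plot the finite-variance walk spreads with slope 1/2, while the heavy-tailed walk spreads faster (here roughly as t^{1/α}): the superdiffusive behavior described above.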
Returning to Eq. (2.18), we will now find the probability distribution as a function
of space and time for a particle found at the origin at time t = 0. To proceed, let us
Fourier transform the equation, denoting by p̂ the F.T. of the distribution (with respect
to space), to find that
\frac{\partial \hat{p}}{\partial t} = -D\, k^2\, \hat{p},
where k denotes the magnitude of the wave vector.