
Particle-based Simulations

(6KM59)

Dr. Ir. J.T. Padding

April 16, 2013

CONTENTS

Contents 1

Preface 5

1 Goals of particulate modeling and simulation 7


1.1 Chapter objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Simulation: a third branch of science . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Validation of a model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Practicum: phase behaviour of spherical molecules . . . . . . . . . . . . . 9

2 General principles of particle-based simulations 11


2.1 Chapter objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Program structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Forces on particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Neighbourlists and cell-linked-lists . . . . . . . . . . . . . . . . . . . . . . . 20
2.5 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6 Initialisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.7 Updating particle positions and velocities . . . . . . . . . . . . . . . . . . . 45
2.8 Practicum: Debye crystal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48


3 Dimensionless numbers and scales 49


3.1 Chapter objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.2 Physical phenomena . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3 Dimensionless numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.4 Microscopic, mesoscopic and macroscopic simulations . . . . . . . . . . 63

4 The microscopic world 67


4.1 Chapter objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2 A short introduction to molecular dynamics simulations . . . . . . . . . . 67
4.3 Molecular force fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4 Controlling temperature and pressure . . . . . . . . . . . . . . . . . . . . . 71
4.5 Measuring structural properties . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.6 Measuring dynamic properties . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.7 Limitations of Molecular Dynamics simulations . . . . . . . . . . . . . . . 94
4.8 Practicum: Molecular Dynamics simulation of liquid methane . . . . . . 95

5 The mesoscopic world 97


5.1 Chapter objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.2 Coarse-graining, soft matter systems and hydrodynamic interactions . . 97
5.3 Brownian motion of a single particle . . . . . . . . . . . . . . . . . . . . . . 99
5.4 Langevin and Brownian dynamics of multiple particles . . . . . . . . . . . 103
5.5 Mesoscale methods with hydrodynamic interactions . . . . . . . . . . . . 106
5.6 Dissipative particle dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.7 Multi-particle collision dynamics . . . . . . . . . . . . . . . . . . . . . . . . 113
5.8 Colloidal suspensions in external fields and flows . . . . . . . . . . . . . . 132
5.9 Limitations of mesoscopic simulation methods . . . . . . . . . . . . . . . 137
5.10 Practicum: Colloidal sedimentation . . . . . . . . . . . . . . . . . . . . . . 138

6 The macroscopic world 139


6.1 Chapter objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.2 Introduction to granular systems . . . . . . . . . . . . . . . . . . . . . . . . 139
6.3 Equations of motion for the particles in a granular system . . . . . . . . . 141
6.4 Dissipative collisions: contact models . . . . . . . . . . . . . . . . . . . . . 143
6.5 Coupling to a fluid flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.6 Stochastic methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.7 Limitations of macroscopic particulate models . . . . . . . . . . . . . . . . 153

A Hydrodynamic forces on slowly moving spheres 155


A.1 Navier-Stokes and Stokes equations . . . . . . . . . . . . . . . . . . . . . . 155
A.2 Friction on a single slowly moving sphere . . . . . . . . . . . . . . . . . . . 156
A.3 Hydrodynamic interactions between slowly moving spheres . . . . . . . . 158

B Mathematical relations 161


B.1 Gaussian integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

B.2 Geometric series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162


B.3 Taylor series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
B.4 Logarithms and exponentials . . . . . . . . . . . . . . . . . . . . . . . . . . 163

C Random number generators 165


C.1 Uniform random numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
C.2 Gaussian random numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
C.3 Constructing random numbers with other distributions . . . . . . . . . . 166

D Physical constants 167

Bibliography 169

Index 173
PREFACE

Particle-based computer simulations play an increasingly important role in science
and technology, for reasons that I discuss in the first chapter. Since the 1950s a large
number of methods have been developed to tackle all sorts of problems, ranging from
the motion of molecules to the motion of colloidal particles, sand particles, rocks,
boulders, planets, and even galaxies. The methods go by many names such as molec-
ular dynamics, Langevin dynamics, Brownian dynamics, dissipative particle dynamics,
stochastic rotation dynamics, discrete element model, discrete particle model, etcetera.
For the uninitiated, the sheer number of different methods must look overwhelming.
However, what I have learned over the years – starting from my graduation work
in 1997 on molecular dynamics of polymer melts to my current work on granular sys-
tems, fluidized beds, spray dryers, etcetera – is that in their core there is a great deal of
similarity between all these particle-based methods. Having learned the general prin-
ciples and tricks of the trade at one level, it is relatively easy to apply them at another
level. Thus, in my opinion, it is important to know about common elements of (al-
most) all particle-based simulation methods. This is what I aim to do in chapter 2. Of
course there are peculiarities that pop up only at certain scales, which is why I spend
the second half of these lectures on microscale, mesoscale and macroscale simulations.
These lectures would not have been possible without my intense interactions with
four people. First and foremost, I am indebted to my PhD tutor, Prof. Wim Briels
(University of Twente), who has introduced me to the world of molecular dynamics,
coarse-graining, and taught me to appreciate both the subtle points and the strength
of statistical mechanics. Second, I am indebted to Dr. Ard Louis (University of Cam-
bridge / Oxford University), with whom I have had many wonderful adventures into the
low-Reynolds number world of colloidal dynamics and stochastic rotation dynamics.
Finally, I am indebted to Prof. Hans Kuipers and Dr. Niels Deen (Eindhoven University
of Technology) for accepting me in the world of macroscopic particle simulations com-
bined with fluid mechanics, and teaching me the subtle points of the world at higher
Reynolds numbers.
Finally, I would like to thank Luuk Seelen for thoroughly proof-reading these lec-
ture notes. Still, because these notes have been produced almost entirely during just
two frantic months in March and April 2013, I am convinced that some errors will re-
main. Please report them to me, and I will update them for the next edition.
JTP
The Hague, 15th April 2013.

CHAPTER 1

GOALS OF PARTICULATE MODELING AND SIMULATION

1.1 Chapter objectives


Through the course of this chapter, you will accomplish the following:

• You will learn about different goals of particle-based simulations.

• You will learn about simulations as a third branch of science.

• You will encounter a first example of a particle-based simulation, namely a molec-
ular dynamics simulation of neutral spherical molecules.

1.2 Simulation: a third branch of science


Some texts on computer simulations start with an overview of current computing ca-
pabilities, boasting about teraflop processing speeds and terabyte memory capacities.
I will not give in to this temptation because it is not my intention to make these lec-
ture notes look horribly outdated in five to ten years. The core message is that with the
ever increasing computational power available to us, increasingly large and complex
systems can be modeled and simulated on a computer.
These lectures focus on a particular type of computer simulation, particle-based
simulations, where particles interact with each other and with external fields. You will
learn how to program and perform such particle-based simulations, with a focus on
fluid and fluid-solid systems such as molecular fluids, suspensions of colloidal parti-
cles, and fluidized granular particles. But before delving into this, the first question
that needs to be answered is why one would be interested in performing such simu-
lations anyway? It may be argued that science has progressed just fine for hundreds
of years by careful experimentation, summarising observations in laws of nature, the-
oretical derivations of the consequences of these laws, confirmation by experiments,
etcetera, without the use of any simulation.
A simple answer to the above question is that the trajectories of a large group of
particles quickly become too difficult to solve analytically. But this is also a boring
answer. The more interesting answers come when we consider the different goals of
simulations. The following list is not exhaustive, but gives a flavour of the different
goals of simulations, with some examples.

• Often the goal of a simulation is to study the collective behaviour of the count-
less interactions between the components of a system. It is an amazing fact that
even very simple interaction rules (forces) between very large numbers can lead
to very complex emergent behaviour. For example, simple collisions between
the molecules in a liquid and between the molecules and sand particles lead to
fascinating swirly patterns in the sedimentation of sand through water.

• A deeper goal of a simulation is to generate a fundamental understanding or an
explanation of the phenomena or processes occurring in real or model systems.
In the simulation, you have full control over the objects, interactions and forces.
This allows you to find the determining factors in a phenomenon or process. For
example, you may ask whether the mutual uncrossability of polymer chains has
an effect on the diffusion. By simulating polymer chains with and without such a
constraint, you can discover that there is hardly an influence for relatively short
polymers, but a very large effect on longer polymers.

• Sometimes the goal is to obtain information about a system which is difficult to
access or control experimentally. With simulations we can perturb and do com-
putational measurements in systems which are too small to observe accurately
(molecular processes on sub-nanometer scales), too large to influence (dynam-
ics of galaxies of stars), or difficult to achieve experimentally (very high temper-
atures or pressures).

• Another goal of simulation is to predict the behaviour of a system. From an en-
gineering point of view, this enables us to optimise a process, or make a choice
between different designs. For example, rotating impellers are often used in con-
tainer tanks to keep the contents well mixed and baffles are used to stop the swirl.
There is much freedom in the number, shape and size of these impellers and baf-
fles. Simulations enable us to explore a large number of designs, and to select the
best candidates for further experimental testing.

The above list shows why simulation has appeared as a third main branch of sci-
ence, situated between experimentation and theory. Simulations enable us to study
theoretical models which are simply too difficult to solve analytically, or systems which
are too difficult or expensive to access experimentally. Simulations allow us to play
with the system in a virtual world in which the simulator has full control over all ele-
ments, and thus generate explanations for experimental observations.

1.3 Validation of a model


Before you can play god of your own virtual world, a word of warning is in place:

“Garbage in → garbage out”

The above quote means that the outcome of a simulation is only as good as the as-
sumptions that go into the model. Unless the simulation is meant to explore the con-
sequences of a well-defined theoretical (toy-)model, it is important to check the valid-
ity of the simulation model against experimental results.1 In other words, the simu-
lation should reproduce with good accuracy some of the observed quantities in well-
controlled experiments of the same system. Only then can the model be used to gen-
erate understanding, perform numerical measurements, or predict the behaviour in
other circumstances. The definition of “good accuracy” depends on the precise goal of
the simulation and the quantities one is interested in, and therefore differs from case
to case.

1.4 Practicum: phase behaviour of spherical molecules


As an introduction to the field of particle-based simulations, you will study the phase
behaviour of molecules through an interactive Molecular Dynamics simulation java ap-
plet.
Molecules can organise themselves in different phases, such as a solid, a liquid or
a gas. The phase of a material depends on the chemical structure of the material, the
temperature and the pressure or density. At high enough temperatures the distinction
between a liquid and a gas disappears, and we speak of a fluid. In this practicum you
will study the phase behaviour of neutral spherical molecules such as argon, krypton
(which are actually atoms) or methane (which is approximately spherical). You will
investigate if or how intermolecular interactions, molecular mass, and temperature
affect the formation of a phase.
Answer the questions in the html page “explorephaseNew.html” and allow the java
applet to run on your computer. Discuss the questions and answers with your neigh-
bour.

1
By validity we do not mean a correct implementation of the model. A correct implementation
must be checked by performing simulations for analytically calculable special cases. This may be called
verification of the (implementation of the) model. However, the assumptions of a special case probably
include all assumptions of the model, and therefore do not validate the model.
CHAPTER 2

GENERAL PRINCIPLES OF PARTICLE-BASED SIMULATIONS

2.1 Chapter objectives


Through the course of this chapter, you will accomplish the following:

• You will learn about similarities in structure of (almost) all particle-based simula-
tions.

• You will learn to distinguish different types of forces such as conservative versus
dissipative forces, and external forces versus multi-body forces.

• You will learn to calculate pairwise forces in a simulation code.

• You will learn to use neighbourlists and cell-linked-lists to speed up simulations.

• You will learn to apply different types of boundary conditions, including walls
and periodic boundaries.

• You will learn to properly initialise a simulation.

• You will learn how to solve the equations of motion in discrete time steps.

2.2 Program structure


Whether the simulation applies to a system of molecules on the nanometer scale or
to a system of granular particles on the meter scale, the general structure of a particle-
based program is usually the same:1 there is a time loop in which forces on the particles
are evaluated at a certain time, positions and velocities are updated to the next time
step, and boundary conditions (walls or periodic boundaries) are applied. With the
new velocities and positions, new forces are evaluated, etcetera.
The forces depend on relative distances and sometimes relative velocities between
neighbouring pairs, triplets, etc. of particles. A large number of distances and veloci-
ties need to be evaluated if the number of particles is large, making the force evaluation
step usually the most time-consuming part of the program. To reduce the computa-
tional cost it is advantageous to efficiently identify nearby particles by using neigh-
bourlists or cell-linked-lists.
Of course the particle positions and velocities need to be initialised before the time
loop, and saved when the required amount of simulation time T_run has been reached.
A flowchart outlining the structure of a general particle-based simulation program
is given in Fig. 2.1. In the following sections we will focus on the essential features of
each block. The preferred method of initialisation depends on the range of interaction
between the particles and the system size and boundary conditions, and will therefore
be discussed after discussing forces and boundary conditions.
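
To make this generic structure concrete, here is a minimal, self-contained Python
sketch of such a time loop. This example is not part of the original notes: it uses
non-interacting particles under gravity and specularly reflecting walls purely as
placeholders for the force, update and boundary-condition blocks of Fig. 2.1.

import numpy as np

rng = np.random.default_rng(0)
N, n_steps, dt, Lz, g = 100, 1000, 1.0e-3, 1.0, 9.81

pos = rng.uniform(0.1, 0.9, size=(N, 3))     # initialise positions
vel = rng.normal(0.0, 0.1, size=(N, 3))      # initialise velocities

for step in range(n_steps):
    forces = np.zeros((N, 3))
    forces[:, 2] -= g                        # force evaluation (here: only gravity, unit mass)
    vel += forces * dt                       # update velocities ...
    pos += vel * dt                          # ... and positions (simple explicit scheme)
    below = pos[:, 2] < 0.0                  # boundary conditions: specular walls in z
    above = pos[:, 2] > Lz
    pos[below, 2] *= -1.0
    pos[above, 2] = 2.0 * Lz - pos[above, 2]
    vel[below | above, 2] *= -1.0

np.save("final_configuration.npy", np.hstack([pos, vel]))   # save configuration

Each block of Fig. 2.1 appears here as one or two lines; the remainder of this chapter
fills in realistic versions of each block.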

2.3 Forces on particles


There are many different types of forces that may act upon a particle. Generally, we
make a distinction between energy-conserving forces and dissipative forces, depend-
ing on whether energy is apparently conserved or not.
We can also make a distinction between external forces and multi-body forces.
The distinction depends on what we define as our system of particles and what we
designate as surroundings. An external force is a force that acts on a certain particle ir-
respective of the positions of the other particles in the system. Forces which do depend
on the positions of the other particles are multi-body forces.
In the next subsections we will give examples and details of these various types of
forces.

Conservative forces
Definition of conservative force

Conservative forces are most commonly encountered in the interactions between very
small particles, such as Van der Waals interactions between molecules. At larger scales
we can also find examples of conservative forces, such as Newton’s force of gravity and
1
In this course we focus on time-driven simulation programs for particles that interact through
forces and in which particles have finite collision times. Time progresses with regular time steps. There
exists another class of dynamical simulation programs for hard particles with instantaneous collisions.
In these so-called event-driven simulations time progresses with irregular intervals from one collision
event to the next.

[Figure 2.1: Flowchart of a general particle-based time-driven simulation program:
initialise positions and velocities → (time loop) if necessary update pair list and/or
cell-linked-list → calculate forces → update positions and velocities (t ← t + dt) →
apply boundary conditions → repeat while t < T_run, then save the configuration.
If we are dealing with a continued run, the first block 'initialisation' means loading
a previously saved configuration.]

Coulomb’s force between two charged particles. Often forces at large scales are ide-
alised to conservative forces, such as the elastic forces associated with the deformation
of a solid material (where plastic deformation or failure of the material is neglected).
Conservative forces can be derived from a potential energy Φ. This potential energy
depends on the set of positions of the particles, but crucially not on their velocities:
Φ = Φ(r1 , . . . , rN ). The force on an individual particle i is given by

$$\mathbf{F}_i = -\nabla_i \Phi(\mathbf{r}_1, \ldots, \mathbf{r}_N), \qquad (2.1)$$

where ∇_i = (∂/∂x_i, ∂/∂y_i, ∂/∂z_i) is the gradient operator with respect to the position of
particle i . To get a better feeling for what this means, imagine that we move a particle
i taking various small excursions from its current position, while keeping the positions
of all other particles fixed. Each small excursion leads to a different change in the to-
tal potential energy; Eq. (2.1) is telling us that the force on the particle points in the
direction of steepest descent of the total potential energy.
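
As a quick numerical illustration of Eq. (2.1) (not from the notes; the harmonic pair
potential and all names below are arbitrary test choices), one can verify that the
analytical force on a particle equals minus a finite-difference gradient of the total
potential energy:

import numpy as np

k, r0 = 1.0, 1.0   # spring constant and rest length of the test pair potential

def potential(r):
    """Total potential energy: sum of harmonic pair terms 0.5*k*(r_ij - r0)^2."""
    phi = 0.0
    N = len(r)
    for i in range(N - 1):
        for j in range(i + 1, N):
            rij = np.linalg.norm(r[i] - r[j])
            phi += 0.5 * k * (rij - r0) ** 2
    return phi

rng = np.random.default_rng(1)
r = rng.uniform(0.0, 2.0, size=(4, 3))

# Analytical force on particle 0: F_0 = -sum_j phi'(r_0j) * (r_0 - r_j)/r_0j
F0 = np.zeros(3)
for j in range(1, 4):
    rij_vec = r[0] - r[j]
    rij = np.linalg.norm(rij_vec)
    F0 -= k * (rij - r0) * rij_vec / rij

# Finite-difference estimate of -dPhi/dx_0 along each cartesian component
h = 1e-6
F0_num = np.zeros(3)
for a in range(3):
    rp, rm = r.copy(), r.copy()
    rp[0, a] += h
    rm[0, a] -= h
    F0_num[a] = -(potential(rp) - potential(rm)) / (2 * h)

print(np.allclose(F0, F0_num, atol=1e-5))   # True: the force is minus the gradient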
Forces that can be derived from a potential energy like this are called conservative
because they conserve total energy. This can easily be proven from Newton’s equations
Figure 2.2: Pictorial representation of the
interaction between two neutral spherical
atoms. The nuclei (+) are much heavier than
the electrons (-). In the Born-Oppenheimer
approximation, the nuclei move in effec-
tive (electronically averaged) potentials. Nu-
clear translation, rotations and vibrations
can therefore be treated by using classical
mechanics.

of motion:
$$m_i \frac{d\mathbf{v}_i}{dt} = -\nabla_i \Phi$$

$$\sum_{i=1}^{N} m_i \frac{d\mathbf{v}_i}{dt} \cdot \mathbf{v}_i = -\sum_{i=1}^{N} \nabla_i \Phi \cdot \mathbf{v}_i$$

$$\frac{d}{dt}\left( \sum_i \tfrac{1}{2} m_i v_i^2 \right) = -\frac{d}{dt}\Phi$$

$$\Rightarrow \frac{d}{dt} H = 0 \qquad (2.2)$$

$$H = \sum_{i=1}^{N} \tfrac{1}{2} m_i v_i^2 + \Phi. \qquad (2.3)$$

Eq. (2.2) shows that the total energy, expressed as the Hamiltonian H, is a conserved
quantity:

$$H(t) = H(0) = E. \qquad (2.4)$$

This does not mean that the potential energy and the kinetic energy are conserved
quantities! There is a constant exchange of energy between the kinetic energy
$K = \sum_i \tfrac{1}{2} m_i v_i^2$ and the potential energy Φ such that their sum is constant and equal to the
total energy E.

Example: interatomic potential

The potential energy can often be derived from a fundamental knowledge of the rel-
evant (usually chemical or physical) properties of the system. To give an explicit ex-
ample, consider a group of N molecules interacting with each other. We will treat the
simplest case, namely that of neutral spherical atoms. Noble gases such as argon and
krypton are excellent examples. Additionally, we may treat nearly spherical molecules,
such as methane, in a similar way.
First suppose we have just two atoms, fixed with their nuclei at positions r1 and r2 ,
as in Fig. 2.2. We can write the total ground state energy of the two atoms as
Figure 2.3: The total interatomic interaction between two neutral spherical atoms is
well described by the Lennard-Jones formula, Eq. (2.7). At large distances the van der
Waals attraction is dominant. At short distances the atoms repel each other because of
the Pauli exclusion principle. The diameter of the atom may be defined as the distance
σ where these two interactions exactly cancel out.

$$E_0(\mathbf{r}_1, \mathbf{r}_2) = E_0(\mathbf{r}_1) + E_0(\mathbf{r}_2) + \varphi(\mathbf{r}_1, \mathbf{r}_2). \qquad (2.5)$$


Here E_0(r_1) is the ground state energy of atom 1 in the absence of atom 2, and similarly
for E_0(r_2). So the term ϕ(r_1, r_2) is the correction to the sum of two unperturbed ground
state energies of the atoms. This term is also called the interatomic interaction or in-
teratomic potential. Because of the rotational symmetry of the atoms, the interatomic
potential only depends on the distance r_12 = |r_1 − r_2| between the two atoms, i.e.

$$\varphi(\mathbf{r}_1, \mathbf{r}_2) = \varphi(r_{12}). \qquad (2.6)$$
It is also clear that because of its definition ϕ(∞) = 0. At finite distances, the electrons
in one atom will feel the electrons in the other atom. A classical picture would be the
following: the charge distribution in an atom is not constant, but fluctuates in time
around its average. Consequently, the atom has a fluctuating dipole moment which
is zero on average. The instantaneous dipoles in the atoms, however, influence each
other in a way which makes each dipole orient a little in the field of the other. This
leads to the so-called van der Waals attraction between two neutral atoms. The van
der Waals attraction becomes stronger as the atoms get closer to one another. At a
certain point, however, the atoms will repel each other because of the Pauli exclusion
principle. The total interatomic interaction as a function of distance is well described
by the Lennard-Jones formula (see Fig. 2.3):

$$\varphi(r) = 4\epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right]. \qquad (2.7)$$

The parameter ε is the depth of the interaction well, and σ is the diameter of the atom.
The values of ε and σ are characteristic for each atomic species. For example for argon
ε/k_B = 117.7 K and σ = 0.3504 nm, for krypton ε/k_B = 164.0 K and σ = 0.3827 nm,
and for methane ε/k_B = 148.9 K and σ = 0.3783 nm. Here k_B = 1.38065 × 10⁻²³ J/K is
Boltzmann's constant. Note that at room temperature T ≈ 300 K, the magnitudes of ε
are of the same order of magnitude as the thermal energy k_B T. This means that these
atoms form a fluid (liquid or gas) at room temperature: the interatomic interactions are
so weak that they allow the structural arrangement of the atoms to change dynamically
under the influence of thermal fluctuations. This is hardly allowed in a solid.
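
As a quick check of this statement, using the argon values quoted above at T ≈ 300 K:

$$\frac{\epsilon}{k_B T} \approx \frac{117.7\ \mathrm{K}}{300\ \mathrm{K}} \approx 0.4,$$

so the depth of the attractive well is indeed comparable to the thermal energy.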

Multiple particles

When dealing with more than two spherical atoms or molecules, it is often assumed
that the total potential energy (due to particle interactions) may be approximated as a
sum of pair interactions :


$$\Phi(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \varphi(r_{ij}). \qquad (2.8)$$

Note that the double sum is constructed such that each pair interaction is counted only
once. In practice the pair-approximation is often a reasonable assumption.
We now ask ourselves: what is the force Fi on molecule i due to all these pair inter-
actions? According to Eq. (2.1) we must take minus the gradient of the potential energy
with respect to the position of molecule i :

$$\mathbf{F}_i = -\nabla_i \Phi
= -\nabla_i \sum_{j=1}^{N-1} \sum_{k=j+1}^{N} \varphi(r_{jk})
= -\sum_{j \neq i} \nabla_i \varphi(r_{ij}). \qquad (2.9)$$

Going from the second to the third line we used the fact that the position ri appears
only in terms ϕ(r j k ) where either j or k is equal to i . There are exactly N − 1 of these
terms: the distances of all particles, except i itself, to particle i . We continue to evaluate
the gradient ∇i of the intermolecular potential ϕ(r i j ) between a particular pair i and
j . We will first consider one component, say the x-component:

$$\frac{\partial}{\partial x_i} \varphi(r_{ij})
= \frac{\partial \varphi(r_{ij})}{\partial r_{ij}} \frac{\partial r_{ij}}{\partial x_i}
= \varphi'(r_{ij}) \frac{\partial}{\partial x_i} \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}
= \varphi'(r_{ij}) \frac{x_i - x_j}{\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}}
= \varphi'(r_{ij}) \frac{x_i - x_j}{r_{ij}}, \qquad (2.10)$$

where the prime indicates differentiation of the intermolecular potential with respect
to its argument (the pair distance). Similar expressions hold for the y- and z-components.
Combining everything, we can write the force F_i on molecule i as

$$\mathbf{F}_i = \sum_{j \neq i} \mathbf{F}_{ij} \equiv -\sum_{j \neq i} \varphi'(r_{ij}) \frac{\mathbf{r}_{ij}}{r_{ij}}, \qquad (2.11)$$

where ri j = ri − r j and we have defined Fi j as the force on particle i due to the pres-
ence of particle j. We can interpret this expression as follows: for particles interacting
through radial pair potentials the magnitude of F_ij is given by minus the derivative of
the pair potential, and the direction is given by the unit vector r_ij/r_ij pointing from
particle j to particle i .
When dealing with the force on particle j due to the presence of particle i the same
pair distance between particles i and j will be encountered, leading to a contribution
to the force on particle j that is exactly opposite the previous contribution to the force
on particle i , i.e. F j i = −Fi j . Therefore, in practice, each pair distance is evaluated
once, and the forces on both i and j are updated using the above expression.
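
For the Lennard-Jones potential of Eq. (2.7), working out −ϕ′(r_ij) explicitly gives

$$\mathbf{F}_{ij} = -\varphi'(r_{ij})\,\frac{\mathbf{r}_{ij}}{r_{ij}}
= \frac{24\epsilon}{r_{ij}^2}\left[ 2\left(\frac{\sigma}{r_{ij}}\right)^{12} - \left(\frac{\sigma}{r_{ij}}\right)^{6} \right]\mathbf{r}_{ij},$$

which is the quantity evaluated per pair in the pseudo-code example below.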

A code example: calculating forces between Lennard-Jones particles

A specific code example will make things clear. Suppose we have stored the positions
of a collection of Lennard-Jones particles in arrays x(i), y(i) and z(i), with i the parti-
cle index. Suppose furthermore we have stored the squared particle diameter σ² in a
variable sigma2 and the energy ε in a variable eps. We can then calculate the total po-
tential energy epot and the force components fx(i), fy(i) and fz(i) on the particles
with the following pseudo-code:

subroutine calculate_forces

epot = 0
fx(:) = 0
fy(:) = 0
fz(:) = 0
do i = 1, N-1
  do j = i+1, N
    dx = x(i) - x(j)
    dy = y(i) - y(j)
    dz = z(i) - z(j)
    r2 = dx*dx + dy*dy + dz*dz
    sr2 = sigma2 / r2
    sr6 = sr2*sr2*sr2
    epot = epot + 4*eps*(sr6*sr6 - sr6)
    fr = 24*eps*(2*sr6*sr6 - sr6) / r2
    fx(i) = fx(i) + fr*dx
    fy(i) = fy(i) + fr*dy
    fz(i) = fz(i) + fr*dz
    fx(j) = fx(j) - fr*dx
    fy(j) = fy(j) - fr*dy
    fz(j) = fz(j) - fr*dz
  enddo
enddo

end subroutine

Note that fr is the force divided by the particle distance, i.e. −ϕ′(r_ij)/r_ij. This way
we only need to multiply by the distances along the respective cartesian components,
x_i − x_j, y_i − y_j, z_i − z_j to obtain the forces. We have also avoided calculating the distance
r_ij directly (but rather used the squared distance r_ij²), because taking a square root is
computationally expensive. We could increase the computational speed even more
by postponing the multiplication by constant factors such as 4ε until after the double
loop.

External forces
External forces are forces on a particle that may depend on its position or velocity
but do not depend on the positions or velocities of other particles within the system.
Well-known examples include the gravitational force, and the force on magnetised or electrically
charged particles due to an external magnetic or electric field.2
In many cases the external field can be written as a potential energy, in which case
the results of the previous subsection apply again. For example, for a system of N
particles moving in a constant gravity field, the potential energy due to gravity forces is


$$\Phi^e(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \sum_{i=1}^{N} m_i g z_i, \qquad (2.12)$$

where g is the gravitational acceleration, m i the mass of particle i , and we have as-
sumed that the gravity force is directed in the negative z-direction:

$$\mathbf{F}_i^e = -\nabla_i \Phi^e = -m_i g \hat{\mathbf{e}}_z. \qquad (2.13)$$

The gravity force is an external force because the total potential energy in Eq. (2.12) is
a sum of terms which depend on individual particle positions, but crucially not on the
relative particle positions.
Sometimes external forces cannot be written as a potential energy. For example,
the Lorentz force on a charged particle i moving with a velocity vi through an external
electric field E and magnetic field B is given by

$$\mathbf{F}_i^e = q_i \left( \mathbf{E} + \mathbf{v}_i \times \mathbf{B} \right), \qquad (2.14)$$

where q i is the charge of the particle. The force due to the magnetic field cannot be
written as minus the gradient of a potential energy.3 The Lorentz force is an external
force because it does not depend on the positions or velocities of other particles within
the system.
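
As an illustration of how such external forces enter a simulation code, the following
Python sketch adds gravity (Eq. (2.13)) and the Lorentz force (Eq. (2.14)) to an array of
particle forces. All names and field values here are invented for the example and do
not come from the notes:

import numpy as np

N = 1000
g = 9.81
masses  = np.ones(N)                       # particle masses
charges = np.full(N, 1.6e-19)              # particle charges
E_field = np.array([0.0, 0.0, 1.0e3])      # external electric field
B_field = np.array([0.0, 0.0, 1.0])        # external magnetic field
vel     = np.zeros((N, 3))                 # particle velocities
forces  = np.zeros((N, 3))

forces[:, 2] -= masses * g                                         # gravity, Eq. (2.13)
forces += charges[:, None] * (E_field + np.cross(vel, B_field))    # Lorentz, Eq. (2.14)

Because these contributions depend only on each particle's own position, velocity and
charge, they can be added in a single loop (or vectorised pass) over the particles.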
2
Note that gravitational, magnetic and electric forces are also active between the particles within
a system, but may be negligible compared to these forces due to interactions with matter outside the
system, i.e. an external field.
3
There are ways to include external electromagnetic forces as gradients and curls, using the so-
called Lagrangian and magnetic vector potential, but this goes beyond the scope of this course.

Dissipative forces
Although the law of conservation of energy is generally valid, this is not always im-
mediately apparent. When we are dealing with large objects (larger than molecules),
we often ignore the detailed motion of particles inside and around these objects and
instead take a more lumped (coarse-grained) view of an object.

Example: a swinging pendulum

For example, when a pendulum swings through the air, on a microscopic scale count-
less collisions take place between the atoms of the pendulum and the molecules in the
air. Total energy is conserved in each and every collision. However, usually we do not
wish to track the atoms in the pendulum and the molecules in the air, but rather de-
scribe the effect the air has on the motion of the pendulum which we treat as a solid
body. On this level of description the air exerts a dissipative friction force on the pen-
dulum, that tends to slow down its motion, and energy is apparently lost.

Example: a sphere moving through a liquid

A related example is the friction force experienced by a sphere moving through a sta-
tionary liquid such as water, which at low velocities scales linearly with the velocity:
$$\mathbf{F}_i^{fric} = -\zeta \mathbf{v}_i. \qquad (2.15)$$

Here ζ (in units of kg/s) is the friction coefficient, which can be related to the water vis-
cosity μ and the sphere radius a through ζ = 6πμa, as we will see in Chapter 5. On a mi-
croscopic scale, the water molecules are continually colliding with the sphere and with
each other. At rest, these collisions tend to cancel each other out.4 However, when the
sphere moves through the liquid, there will on average be more collisions at the front
of the sphere than at the back. This explains why the friction force acts opposite the
direction of motion. In the absence of external forces, all motion would eventually
cease.5
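
As an order-of-magnitude example (numbers not from the notes): for a colloidal sphere
of radius a = 1 μm in water (μ ≈ 1.0 × 10⁻³ Pa s),

$$\zeta = 6\pi\mu a \approx 6\pi \times (1.0\times 10^{-3}\ \mathrm{Pa\,s}) \times (10^{-6}\ \mathrm{m}) \approx 1.9\times 10^{-8}\ \mathrm{kg/s}.$$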

Example: a block sliding down an inclined plane

A last example of a dissipative force due to collisions and interactions at smaller scales
is the dynamic friction experienced by a dry object (say a block) sliding down an in-
clined plane. It is an empirical fact that this friction force scales linearly with the nor-
mal force N on the surface:
$$\mathbf{F}_i^{fric} = -\mu_f N \frac{\mathbf{v}_i}{|\mathbf{v}_i|}, \qquad (2.16)$$
4
More precisely, the average force is zero. The second moment of the force on the sphere is not zero;
this leads to diffusion. We will discuss this also in Chapter 5.
5
More precisely, the average velocity would tend to zero for the same reason as in the previous foot-
note.

where μ f is the (dimensionless) coefficient of (Coulomb or kinetic) friction. On a


microscopic scale the surfaces of the block and the inclined plane are not perfectly
smooth, but rough. This rough surface may be viewed as a series of peaks and troughs.
At small normal force, only the peaks of the surfaces physically touch each other, and
it is relatively easy to transversally slide the two surfaces. At larger normal force, the
surface material is deformed and a larger area is in direct physical contact. It is then
much more difficult to slide the two surfaces past each other. To first order, the contact
area increases linearly with normal force, which explains the equation given above.
The term −vi /|vi | indicates that this friction force also acts opposite the direction of
motion. The coefficient of kinetic friction depends on the materials used; for exam-
ple, ice on steel has a low coefficient of friction, while rubber on pavement has a high
coefficient of friction.

Is energy really lost in all these examples?

Although it may appear that energy is lost in all these examples – without external
forces all motion would eventually cease – it is important to realise that energy is not
really lost, but rather converted into a more invisible form of energy, namely the kinetic
energy of the fluid flow around the moving pendulum or sphere, and an increased in-
tensity of the random motions of the molecules or atoms that constitute the gas, liquid
or solid material. On a macroscopic scale, we say that some kinetic energy of the ob-
ject has converted into kinetic energy of the fluid flow field and some kinetic energy
has dissipated into heat, which we measure as a slight increase in temperature
of the air, water, pendulum, sphere or block.

2.4 Neighbourlists and cell-linked-lists


The evaluation of the forces is usually the most time-consuming part of a simulation
code because this involves calculating the interactions between neighbouring pairs,
triplets, etc. of particles. Here we will focus on pair interactions only, but the same
techniques will apply to three-body forces.
A simple double loop, with i running from 1 to N − 1 and j running from i + 1 to N
as we have done in the previous section to calculate the forces in a system of Lennard-
Jones particles, requires the evaluation of ½N(N − 1) pair distances. For a simulation
containing 10,000 particles, this means that the computer needs to calculate about
50,000,000 distances each time step. This is very inefficient.
In many cases particles have a certain interaction range beyond which the interac-
tions are zero, or so small that they may be neglected. Let us call this range the cut-off
range r_cut. For particles interacting through the Lennard-Jones potential, see Fig. 2.3,
we routinely choose r_cut = 2.5σ. In a system containing a large number of particles, an
overwhelming majority of particles is located at a distance larger than r_cut from any
given particle. Moreover, in one time step the particle positions do not change much,

Figure 2.4: If the interactions between particles can be neglected beyond a cut-off
range r_cut, it is advantageous to use a neighbourlist. This neighbourlist contains, for
each particle (in this example indicated by the red particle), the indices of all neigh-
bouring particles within the cut-off range (here particles 1 to 4) and within an addi-
tional spherical shell of thickness r_shell (in this example particles 5 to 7). The addi-
tional spherical shell allows us to re-use the same neighbourlist for several time steps.

so a particle will be surrounded by the same set of closest neighbours for a consider-
able amount of time.

Neighbourlist
It now becomes apparent that it is computationally advantageous to create a list that
contains the close neighbours of each particle [4]. This so-called neighbourlist can
be re-used for several time steps (typically 10-50 time steps). The neighbourlist may
be generated by evaluating all ½N(N − 1) pair distances and storing, for each particle
i, the indices of all particles that lie within a certain range r_list = r_cut + r_shell of that
particle, as shown in Fig. (2.4). The additional shell of thickness r_shell is necessary to
already include neighbours in the neighbourlist that may enter the cut-off radius at a
later time as they may move towards the central particle. The larger the shell thickness
r_shell, the longer we can re-use the same neighbourlist. However, a thicker shell also
implies that the total list range r_list is larger, and that a larger number of pair distances
need to be evaluated each time step. It is therefore important to (empirically) find the
shell thickness that results in the largest efficiency. For particles interacting through
a Lennard-Jones potential, the optimal value of r_shell is usually in the range of 10% to
50% of the particle diameter σ.
No interactions between pairs of particles closer than r_cut should be missed. To
ensure this, we monitor the particle displacement vectors d_i(t) = r_i(t) − r_i(t_list)
during the time a neighbourlist is re-used. The displacement vectors are initialised to
zero when the neighbourlist is generated at time t_list, and updated during the general
position update (which is discussed in section 2.7). From the set of displacement
vectors we identify the largest displacement length: d_max = max{|d_1|, . . . , |d_N|}.
The neighbourlist must be updated if:

$$d_{max} > r_{shell}/2 \qquad \text{(neighbourlist update necessary)} \qquad (2.17)$$

The factor 1/2 arises because two particles may be moving toward each other (think
about this). Variations of this theme are possible. For example we could monitor the
largest and second largest displacement length and decide to update the neighbourlist
if their sum is larger than the shell thickness r_shell. Because in practice the second largest dis-
placement is not much smaller than the largest displacement, this does not really fur-
ther improve the computational efficiency.
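
In Python-style pseudocode, the update test of Eq. (2.17) could be implemented as the
following sketch (illustrative names only; pos_at_list_build is a copy of the positions
stored at the moment the neighbourlist was last generated):

import numpy as np

def neighbourlist_needs_update(pos, pos_at_list_build, r_shell):
    # Rebuild when the largest displacement since the last list build
    # exceeds half the shell thickness, Eq. (2.17).
    displacements = np.linalg.norm(pos - pos_at_list_build, axis=1)
    return displacements.max() > 0.5 * r_shell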

Cell-linked-list
If the number of particles is very large, say 10⁵ or more, generating the neighbourlist
by evaluating all pair distances is still computationally very demanding. In that case
the efficiency of a simulation program is greatly improved by first sorting the particles
into cells by means of a so-called cell-linked-list. Usually the cells have a length of r_list,
or slightly more. In that case, to create the neighbourlist for a particular particle, it is
only necessary to evaluate distances to particles within the same cell and particles in
directly neighbouring cells (including diagonals). This means that in two dimensions
only the particles in 8 neighbouring cells need to be checked, and in three dimensions
26 neighbouring cells (actually half these numbers, but more on that later).
The total number of cells in a system depends on the system size. If the system
dimensions are L_x × L_y (in 2d) or L_x × L_y × L_z (in 3d), then the number of cells in each
direction is given by

Mx = int(Lx / rlist)
My = int(Ly / rlist)
Mz = int(Lz / rlist)

where 'int' denotes truncation of its argument to an integer. The latter ensures that the cell size
is always at least equal to r_list.
A larger system, containing a larger number of cells, will gain relatively more from
the use of a cell-linked-list because a smaller fraction of the total number of particles
needs to be processed when determining the distance to a certain particle. In practice,
use of a cell-linked-list already becomes advantageous when the number of cells is at
least 4 in each direction.
So what does a cell-linked-list look like? A cell-linked-list consists of two arrays: a
HEAD array containing the index of the first particle in a cell, and another LIST array
containing, for each particle index, the index of the next particle within the same cell.
An example is given in Fig. 2.5. Here we focus on a particular cell number 19. The
first particle in this cell is HEAD(19) = 13. The next particle in cell number 19 can be
found by looking up the value of LIST(13). The answer is 9. This continues, LIST(9) = 7,
LIST(7) = 6, LIST(6) = 4, until we hit LIST(4) = 0, where the value zero indicates that par-
ticle 4 was the last particle in that particular cell. For an empty cell, for example cell 2
in Fig. 2.5, the value of HEAD is already 0.
To generate these two arrays is surprisingly simple. If the x-coordinate of particle i
is given by x(i), which lies between 0 and Lx, and similarly for y(i) and z(i), then in
pseudocode:

Figure 2.5: To generate a cell-linked-list, the particles are sorted into cells of size ≥ r_list.
When generating the neighbourlist of a certain particle, only pair distances with other
particles in the same cell and with particles in directly neighbouring cells need to be
evaluated. The actual cell-linked-list consists of two arrays: HEAD and LIST. The con-
tents of HEAD(icel) is the index of the first particle in cell number icel. The contents of
LIST(i) is the index of the next particle residing in the same cell as particle i.

subroutine make_cell_linked_list

HEAD(:) = 0
do i = 1, N
  icelx = 1 + int(x(i)*cellxinv)
  icely = 1 + int(y(i)*cellyinv)
  icel = icelx + (icely-1)*Mx
  LIST(i) = HEAD(icel)
  HEAD(icel) = i
enddo

end subroutine
Here cellxinv = Mx/Lx is the inverse of the cell size in the x-direction, and similarly for the
other directions. By using the above expression for the cell number, we have arranged
the cells as indicated in Fig. 2.5. Another option is to make HEAD a three-dimensional
array, where HEAD(icelx,icely,icelz) stores the first particle in cell (icelx,icely,icelz).
It was already hinted that only half the neighbouring cells need to be checked. This
is the case because the complete neighbourlist has to contain each unique particle
pair only once. In two dimensions, the 4 cells to the top, the top-right, the right, and
bottom-right need to be checked (think about why this choice is not unique). For ex-
ample, when generating the neighbourlist of a particle in the red cell 19 in Fig. 2.5,
only distances to particles in cells 19, 24, 25, 20 and 15 need to be checked. Because
the cells do not move during the simulation, the indices of these 4 (in 2d) or 13 (in 3d)
neighbouring cells can be calculated once and stored in an array, say neighcell, in the
initialisation stage of a simulation. This will allow for a fast identification of neighbour-
ing cells during the simulation. Calculating the neighbouring cells goes as follows in
2d:

subroutine make_neighbouring_cells

do icely = 1, My
  do icelx = 1, Mx
    icel = icelx + (icely-1)*Mx
    neighcell(icel,1) = cellindex(icelx  , icely+1)   ! top
    neighcell(icel,2) = cellindex(icelx+1, icely+1)   ! top-right
    neighcell(icel,3) = cellindex(icelx+1, icely  )   ! right
    neighcell(icel,4) = cellindex(icelx+1, icely-1)   ! bottom-right
  enddo
enddo

end subroutine
The generalisation to 13 neighbouring cells in 3d is left as an exercise to the reader. The
function cellindex gives a unique index to each cell. When our system is periodic
(as we will discuss later), there is a particularly easy way of generating the cell index.
On the other hand, if the system is bounded, we must ensure that no cells beyond
the system boundaries will appear as neighbouring cells. The following pseudo-code
accomplishes this:

function cellindex(icelx, icely)

if (icelx < 1 .or. icelx > Mx .or. icely < 1 .or. icely > My) then
  cellindex = 0
else
  cellindex = icelx + (icely-1)*Mx
endif

end function

Building a neighbourlist using a cell-linked-list


Suppose we have created a cell-linked-list. The next step is to actually generate the
neighbourlist. When evaluating the pair distances of a certain particle we only need to
check distances with particles that follow after that particle in the cell-linked-list of that
cell (think about why!), and all particles in the 4 or 13 neighbouring cells. The following
pseudo-code builds a neighbourlist neigh(i,k), where i is the particle index, k the k'th
neighbour of i, neigh(i,k) the index of this neighbour, and the number of neighbours
of i is stored in nneigh(i). As discussed before, the neighbourlist should be updated as
soon as the largest displacement of a particle in the system is more than half the shell
thickness.

subroutine make_neighbourlist

nneigh(:) = 0
do icel = 1, Mx*My
  i = HEAD(icel)
  do while (i /= 0)
    ! first: particles that follow i in the list of the same cell
    j = LIST(i)
    do while (j /= 0)
      r2 = (x(i)-x(j))**2 + (y(i)-y(j))**2
      if (r2 < rlist2) then
        nneigh(i) = nneigh(i) + 1
        neigh(i,nneigh(i)) = j
      endif
      j = LIST(j)
    enddo
    ! second: particles in the 4 (in 2d) neighbouring cells
    do k = 1, 4
      jcel = neighcell(icel,k)
      if (jcel /= 0) then
        j = HEAD(jcel)
        do while (j /= 0)
          r2 = (x(i)-x(j))**2 + (y(i)-y(j))**2
          if (r2 < rlist2) then
            nneigh(i) = nneigh(i) + 1
            neigh(i,nneigh(i)) = j
          endif
          j = LIST(j)
        enddo
      endif
    enddo
    i = LIST(i)
  enddo
enddo

end subroutine
Here rlist2 is the squared list range r_list². The above routine loops over each cell and
then, for each particle i in that cell, generates a neighbourlist of i . This is accomplished
first, by finding neighbouring particles in the rest of the cell-linked-list in which i is
residing, and second by finding neighbouring particles in the neighbouring cells.

Calculating forces using the neighbourlist


Having obtained the neighbourlist, we can evaluate the pair interactions in the sys-
tem for several time steps (this is why we went through all the trouble of generating
a neighbourlist after all!). For example for particles interacting through the Lennard-
Jones interaction potential, as in subsection 2.3, we have in 2d and pseudo-code:

subroutine calculate_forces

epot = 0
fx(:) = 0
fy(:) = 0
do i = 1, N
  do k = 1, nneigh(i)
    j = neigh(i,k)
    dx = x(i) - x(j)
    dy = y(i) - y(j)
    r2 = dx*dx + dy*dy
    if (r2 < rcut2) then
      sr2 = sigma2 / r2
      sr6 = sr2*sr2*sr2
      epot = epot + 4*eps*(sr6*sr6 - sr6)
      fr = 24*eps*(2*sr6*sr6 - sr6) / r2
      fx(i) = fx(i) + fr*dx
      fy(i) = fy(i) + fr*dy
      fx(j) = fx(j) - fr*dx
      fy(j) = fy(j) - fr*dy
    endif
  enddo
enddo

end subroutine

Note that in practice the efficiency of a simulation can be increased some more by
already evaluating the pair interactions during the building of the neighbourlist (when
the particle coordinates are available anyway). For the sake of simplicity we have not done
that here.

2.5 Boundary conditions


If the number and type of particles in a system have been specified, what happens
inside a system is determined by the conditions that apply at the boundaries of the
system. A simple example is the pressure of an amount of gas in a closed container,
which depends on the volume enclosed by its impermeable walls. A more complicated
example is the spatial evolution of the flowfield of a fluid forced to flow through a pipe,
which depends on the imposed flow velocity at the inlet of the pipe (or alternatively
the elevated pressure) and the amount of slip with the wall.
Conversely, if we are interested in the bulk properties of a system, the boundaries
should be as far removed from the region of interest as possible, because the presence
of walls is often felt over a length scale much larger than the typical particle-particle
interaction range. For example, in a Lennard-Jones liquid the radial distribution func-
tion (which we will encounter in detail in section 4.5) is typically still influenced by
walls as far as 10 particle distances away from the walls. Therefore, in a 3d simula-
tion containing N Lennard-Jones particles, only (N^{1/3} − 20)³ particles find themselves
in a bulk-like region. This means that there is no bulk-like region in a box containing
N = 10⁴ particles, and that at least 10⁶ particles are necessary to have at least 50% of
the particles find themselves in a bulk-like region. In such a case it is helpful to avoid
walls altogether and use so-called periodic boundary conditions.
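
A quick check of these estimates, inserting the quoted wall range of roughly 10 particle
distances on each side:

$$N = 10^4:\ (N^{1/3} - 20)^3 \approx (21.5 - 20)^3 \approx 3 \ \text{particles in the bulk},$$
$$N = 10^6:\ (N^{1/3} - 20)^3 = (100 - 20)^3 = 5.12 \times 10^5 \approx 0.51\,N.$$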
In the next subsections we will discuss the use of solid walls, flow boundary condi-
tions, and periodic boundary conditions in particle-based simulations.

Solid walls
An obvious property of a solid wall is that no particles can penetrate through the wall.
This could be accomplished through a potential, leading to an external force on each
particle, or by direct modification of the velocity of a particle.

Wall-potentials

In microscopic (Molecular Dynamics) simulations it is customary to generate a wall
through a wall-potential, although the same approach can be applied at any scale.
When the detailed interactions with the walls are important for the problem, for ex-
ample when dealing with wall-adsorption, a realistic but quite expensive option is to
include a large number of wall particles, and freeze their positions, or connect them to
each other or to lattice positions through spring forces. Note that this implies that the
wall particles should also be included in the neighbourlists of the free particles.
For many problems, such an approach is too detailed and it suffices to mimic a
wall through a potential that depends on the perpendicular distance between particle
and wall. In detail, an additional energy term


$$\Phi^w(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \sum_{i=1}^{N} \varphi^w(d_{iw}) \qquad (2.18)$$

is added for each wall. Here d_iw is the closest distance between particle i and the wall.
For a planar wall with unit normal n̂ (pointing inwards towards the system), d_iw =
(r_i − r_w) · n̂, where r_w is any point on the wall. For example, for a wall at x = 0, with the
wall-normal in the positive x-direction, we have simply d_iw = x_i. For a wall at x = L_x,
with the wall-normal in the negative x-direction, we have d_iw = L_x − x_i.
The wall potential ϕ^w(d_iw) should decay smoothly to zero at a finite range, and
be sufficiently divergent for small distances to prevent the particle from ever reach-
ing d_iw = 0. An often-employed example is the repulsive part of a Lennard-Jones-like
potential:

$$\varphi^w(d) = 4\epsilon \left[ \left(\frac{\sigma}{d}\right)^{2n} - \left(\frac{\sigma}{d}\right)^{n} + \frac{1}{4} \right] \qquad (\text{for } d < 2^{1/n}\sigma) \qquad (2.19)$$

Check for yourself that this potential is purely repulsive, and smoothly decays to zero at
d = 2^{1/n}σ. The parameter n controls the stiffness of the wall, where n = 6 corresponds
to the stiffness of the Lennard-Jones interaction.
Because the walls act as an external force on each particle (i.e. independent of the
positions of the other particles), the calculation of this type of wall force is relatively
cheap and is best handled by a separate routine which loops once over each particle.
For example, if we apply the following routine particles will be confined between two
Lennard-Jones-like walls at x = 0 and x = L:

subroutine wall_forces

epotwall = 0
do i = 1, N
  ! wall at x = 0
  d = x(i)
  if (d < rcutwall) then
    sr2 = sigma2/(d*d)
    sr6 = sr2*sr2*sr2
    epotwall = epotwall + 4*eps*(sr6*sr6 - sr6) + eps
    fx(i) = fx(i) + 24*eps*(2*sr6*sr6 - sr6)/d
  endif
  ! wall at x = L
  d = L - x(i)
  if (d < rcutwall) then
    sr2 = sigma2/(d*d)
    sr6 = sr2*sr2*sr2
    epotwall = epotwall + 4*eps*(sr6*sr6 - sr6) + eps
    fx(i) = fx(i) - 24*eps*(2*sr6*sr6 - sr6)/d
  endif
enddo

end subroutine

Here rcutwall = 2^{1/6}σ is the cut-off distance for the wall interaction, and epotwall
stores the total wall potential energy.
The above wall potential leads to wall-induced forces on the particles which are ori-
ented perpendicularly to the wall. In many cases this is sufficient, for example when
one is interested in equilibrium properties such as the pressure of the system. How-
ever, it is important to realise that a planar wall potential cannot exert any force par-
allel to the wall. This means that flow of material parallel to the wall is uninhibited,
corresponding to perfect slip boundary conditions at the continuum scale.
In some cases, especially when dealing with flow, it is important that a wall can
sustain a certain amount of shear stress, which means that the particles moving par-
allel to the wall are slowed down by the wall. At the continuum scale this corresponds
to no-slip (or partial-slip) boundary conditions. This can be achieved by including a
large number of explicit wall particles and freezing their positions, or connecting them
to each other or to lattice positions through spring forces. As mentioned before, this is
computationally more expensive. A good alternative is to directly alter the velocities of
the moving particles when they cross the boundary, as we will discuss next.

Solid walls through direct modification of particle velocities

Instead of explicitly simulating the entire collision process between a particle and a
wall, where the wall-potential slowly grows and decays as the particle approaches and
leaves the wall region, it is often sufficient to simply take into account the effect of a
wall collision on the particle velocity.
The best known example is specular reflection. We know this kind of wall collision
from experience, when a billiard ball bounces (without spin) against the edge of a
billiard table. At the time of impact the velocity of the particle v is decomposed in a
component v⊥ perpendicular to the wall and the remaining part v|| = v − v⊥ parallel to
the wall. The effect of the collision is to invert the normal component, i.e. the new ve-
locity is given by v′ = v_∥ − v_⊥. Because the parallel component v_∥ of the particle velocity
is conserved, this situation again corresponds to perfect slip conditions at the contin-
uum scale. The following pseudo-code gives an example where (non-rotating) spheres
of diameter σ collide against walls at x = 0 and x = L (i.e. they collide when the particle
x-coordinate is equal to σ/2 or L − σ/2, respectively). For simplicity we assume that a
particle has moved on a straight path with velocity (vx(i), vy(i), vz(i)) during the
last time step of length dt, so that we can simply mirror the x-position x(i) in x = σ/2
or x = L − σ/2:

subroutine specular_walls

do i = 1, N
  if (x(i) < 0.5*sigma) then
    x(i) = sigma - x(i)
    vx(i) = -vx(i)
  else if (x(i) > L - 0.5*sigma) then
    x(i) = 2*(L - 0.5*sigma) - x(i)
    vx(i) = -vx(i)
  endif
enddo

end subroutine
In some simulations bounce-back conditions are applied. In this case the full ve-
locity vector of the particle is inverted at the time of impact: v′ = −v. It is easy to see
that on average this leads to a zero velocity at the wall, i.e. no-slip conditions at the
continuum scale. In pseudo-code, using the same example of spheres of diameter σ
with bounce-back walls at x = 0 and x = L:

subroutine bounceback_walls

do i = 1, N
  if (x(i) < 0.5*sigma .or. x(i) > L - 0.5*sigma) then
    if (x(i) < 0.5*sigma) then
      xwall = 0.5*sigma
    else
      xwall = L - 0.5*sigma
    endif
    tremain = (x(i) - xwall) / vx(i)   ! time left after hitting the wall
    x(i) = x(i) - 2*vx(i)*tremain      ! retrace the path with inverted velocity
    y(i) = y(i) - 2*vy(i)*tremain
    z(i) = z(i) - 2*vz(i)*tremain
    vx(i) = -vx(i)
    vy(i) = -vy(i)
    vz(i) = -vz(i)
  endif
enddo

end subroutine
Here tremain is the remainder of the time step dt, after the time of collision, with
which the particle needs to retrace its path. Although the bounce-back condition is
rather strange when considering a single particle (imagine a particle exactly retracing
its steps after colliding with a wall), it is acceptable when considering a large collection
of particles interacting with the wall and with each other, because this will naturally
lead to a statistical distribution in the impact velocities and angles.
In microscopic and mesoscopic simulations the statistical distribution of parti-
cle velocities inside the fluid is known: it is the Maxwell-Boltzmann distribution [45]
shown in Figure 2.6:

$$P(v_x, v_y, v_z) = \left( \frac{m}{2\pi k_B T} \right)^{3/2} e^{-\frac{m (v_x^2 + v_y^2 + v_z^2)}{2 k_B T}}, \qquad (2.20)$$

where m is the mass of the particle and k B is Boltzmann’s constant.6 This distribution
shows that a higher temperature T is associated with larger random velocity fluctua-
tions. The idea behind diffusive or thermal wall conditions is the following: when we
are dealing with small particles, we should also consider the morphology of the wall at
microscopic scales. At small scales a wall is never perfectly smooth but rough and will
contain numerous defects. This leads to a random scattering of the small particle back
into the system. Also, the atoms that form the wall are not sitting still but performing
thermal motions of themselves, dictated by the wall temperature and the atomic mass
according to the Maxwell-Boltzmann equation above. When we consider an ensem-
ble of many collisions between microscopic particles and such a wall, the ensemble of
particles will equilibrate its kinetic energy with that of the wall. So the particles will
attain the temperature of the wall. This may be modeled by drawing the new particle
velocities from an appropriate thermal distribution. The particles are released into the
fluid with this new velocity.
So what is the appropriate thermal distribution? The velocity distribution of the
incoming particles (those that cross the wall boundary in a certain time interval) is not
exactly the same as the Maxwell-Boltzmann distribution. They have a bias in their ve-
locity component normal to the wall. This is easy to understand: only particles that
6
Here we assume there is no background flow. If there is a background flow, v should be interpreted
as the velocity relative to the background flow velocity, i.e. (v x , v y , v z ) are the velocity fluctuations.

Figure 2.6: The probability of encountering a particle with velocity v_x along the x-direction in a thermal system is given by the Maxwell-Boltzmann distribution. This is a Gaussian distribution with zero average and standard deviation equal to √(k_B T/m). The same applies to the other cartesian components v_y and v_z.

actually move towards the wall will collide with the wall. More precisely, particles with
a larger velocity component v ⊥ toward the wall can originate from a larger distance
(v ⊥ Δt ) from the wall and still collide with it within a certain small time interval Δt .
The probability for a normal component of the velocity distribution is therefore biased
by an additional factor proportional to the normal component of the velocity. The
components parallel to the wall are unaffected. In conclusion, the outgoing velocity
components should be drawn from the following distributions (see Appendix C for de-
tails on how to generate such random numbers):

P(v_⊥) = [m/(k_B T)] v_⊥ exp[ −m v_⊥²/(2 k_B T) ]    (2.21)
P(v_||1) = [m/(2π k_B T)]^{1/2} exp[ −m v_||1²/(2 k_B T) ]    (2.22)
P(v_||2) = [m/(2π k_B T)]^{1/2} exp[ −m v_||2²/(2 k_B T) ].    (2.23)
Another advantage of using these boundary conditions is that the walls simultaneously
act as a thermostat, draining or providing heat as needed to keep the temperature near
the walls constant.

Inflow and outflow boundaries


If we study the properties of a system under flow, for example the convection and dif-
fusion of a collection of colloidal particles in a liquid flowing through a pipe, we need
to impose inflow and outflow boundary conditions.7 There are many possible forms of
inflow boundary conditions, depending on whether a certain pressure, a certain mass
flux of particles and/or a precise velocity profile needs to be maintained at the en-
trance. Here we will describe the simple case of an imposed velocity profile.
7
An often employed alternative to explicit inflow and outflow boundaries, especially useful for
steady-state simulations with periodic boundaries (see next subsection) is to let the flow develop it-
self by exerting a constant external force Fext on each fluid particle. This is equivalent to the effect of
applying a pressure gradient −∇P equal to ρ # Fext , where ρ # is the number density of fluid particles.

Figure 2.7: In simulations of flowing particles, new particles can be inserted in the inflow region and, after streaming through the system, removed again in the outflow region (from left to right in the sketch: inflow region, system, outflow region).

Inflow boundaries

Inflow boundary conditions are usually constructed as a region in space which acts as
an entrance for new particles into the system of interest, see Figure 2.7. We will refer to
this as the ‘inflow region’ from here on. Particles can be added with a certain mass rate
Q m (in kg/s) into the inflow region, and each particle is initialised with a certain flow
velocity which depends on the desired inlet flow profile v_inlet(r).
The main problem usually is how to place the particles. For dilute and not too dense
fluids we can achieve this by placing the particles randomly in the inflow region and
continuously checking for (too large) overlap with the particles that have already been
placed. For dense fluids this is often not possible and we need to resort to adding pre-
packed layers of particles, for example with a crystalline or nearly-crystalline structure,
to achieve a high enough packing. To avoid sudden shocks in the forces on particles
that are inside the system of interest, the interaction range or strength (e.g. σ or ε
in the Lennard-Jones potential) may be gradually grown from a small initial value to
the final value while the particle travels from its initial position in the inflow region to
the entrance of the system of interest. We will discuss random insertion, crystalline
packing and slow growth of interaction range and strength in more detail in section 2.6
on initialisation.
After a new particle i has been placed at ri , choosing its velocity is relatively straight-
forward. Depending on the location in the inflow region, the particle should get a ve-
locity equal to the velocity of the desired flow profile at the location of the particle:
vi = vi nl et (ri ). For molecular and mesoscopic (i.e. thermal) systems appropriate ther-
mal fluctuations δvi should be added to this local average:
vi = vi nl et (ri ) + δvi , (2.24)
where all three components of δvi are drawn from a Maxwell-Boltzmann distibution:
 mδv 2
m − 2k iT,x
P (δv i ,x ) = e B (2.25)
2πk B T
 mδv 2
m − 2k T
i ,y
P (δv i ,y ) = e B (2.26)
2πk B T
 mδv 2
m − 2k iT,z
P (δv i ,z ) = e B (2.27)
2πk B T
Remember that temperature is a measure for the local velocity fluctuations, so without
adding these fluctuations the inflow of fluid will effectively be at a temperature of zero!

Outflow boundaries

Outflow boundary conditions are usually simpler than inflow boundary conditions.
Often the only task is to remove particles from the system as soon as they cross the
boundary. In some cases, especially in dense systems or in systems with long range
interactions, removing a particle may lead to a sudden shock on the forces on the par-
ticles that are still inside the system of interest. If this is the case, we may choose to
gradually decrease the interaction strength from the moment a particle moves into the
outflow region (figure 2.7). When the interaction strength is small enough the particle
can be removed entirely.

Complication: the number of particles is not constant!

There is a complication that must be taken into account when using inflow and out-
flow boundary conditions: not all particles are always inside the system plus inflow
and outflow regions. Therefore the number of particles inside the system plus in- and
outflow regions is not necessarily constant. If the largest possible number of particles
inside the system can be estimated, this is not a very large problem. We can still define
all our position and velocity arrays as before, but now add an extra boolean array (per-
haps called inside) which indicates for each particle whether it is inside the system.8
When generating the cell-linked-list and/or neighbourlist, we should only process
particles that are inside the system (inside(i) == true). Conversely, when new par-
ticles have been introduced to the system in the inlet region, we should add these par-
ticles to the appropriate cell-linked-lists and/or neighbourlists.

Example: flow of a gas between two parallel plates

A specific example will make things clear. Suppose we study the flow of a dilute gas
of Lennard-Jones particles of mass m, flowing in the positive x-direction between two
parallel plates at y = 0 and y = L y . Let us assume that the two plates are solid walls
with no-slip boundary conditions with the gas, and that we feed the slit with a ho-
mogeneous gas with a flat flow profile (⟨v_x⟩(y) = v_0 = const). As the gas flows through
the system, it will decelerate near the walls, and because of molecular collisions the
slowing down effect will grow inwards.9 If the desired initial number density is ρ #0 (in
two dimensions: the number of particles per unit area), the mass rate with which we
must feed new particles is Q_m = m ρ#_0 v_0 L_y. This means that on average we should each
time step Δt add n_insert = Q_m Δt/m = ρ#_0 L_y v_0 Δt particles in the inlet region.10 Because
the gas is dilute we may place the particles at random, and there is no need to slowly
8
Using a boolean array is generally more efficient than another commonly used practice where a particle is
placed at some coordinates outside the system and the (real number) coordinates are used to check
whether a particle is inside or outside the system.
9
In other words: the boundary layer thickness will grow in time due to the viscosity of the fluid.
10
Of course we can only insert an integer number of particles. We assume here that Q_m is so large
that n_insert ≫ 1 and simply taking the nearest integer of n_insert is sufficiently accurate. In the other
extreme, if n_insert is smaller than 1, we can draw a uniform random number between 0 and 1 and insert
one particle if the random number is less than n_insert. In between these two extremes we could choose
the actual number of particles to insert from a so-called Poisson distribution with an average of n_insert.

increase the interaction strength in the inflow region and decrease it in the outflow re-
gion. To prevent particles from escaping in the negative x-direction, we can insert a
simple slip-boundary wall at the left side of the inflow region. A pseudo-code for all
these boundaries is as follows.

! inflow boundary: insert n_insert new particles per time step in the inflow region
! rho0 is the desired number density, gauss() returns a Gaussian random number with unit variance
n_insert = nint(rho0*Ly*v0*dt)
n_new = 0
i = 0
do while (n_new < n_insert)
   i = i + 1
   if (i > N) stop 'not enough free particle slots for insertion'
   if (inside(i) == false) then
      call insert_particle_inflow(i)        ! random non-overlapping position in the inflow region
      call add_to_neighbourlists(i)
      vx(i) = v0 + sqrt(kB*T/m)*gauss()     ! inlet flow velocity plus thermal fluctuation
      vy(i) =      sqrt(kB*T/m)*gauss()
      inside(i) = true
      n_new = n_new + 1
   endif
enddo

! walls and outflow boundary
do i = 1, N
   if (inside(i) == true) then
      if (x(i) < 0.5*sigma) then                             ! specular (slip) wall at the left side
         x(i)  = sigma - x(i)
         vx(i) = -vx(i)
      endif
      if (y(i) < 0.5*sigma .or. y(i) > Ly - 0.5*sigma) then  ! bounce-back (no-slip) at the plates
         if (y(i) < 0.5*sigma) then
            dt_remain = (y(i) - 0.5*sigma) / vy(i)
         else
            dt_remain = (y(i) - (Ly - 0.5*sigma)) / vy(i)
         endif
         x(i)  = x(i) - 2.0*vx(i)*dt_remain
         y(i)  = y(i) - 2.0*vy(i)*dt_remain
         vx(i) = -vx(i)
         vy(i) = -vy(i)
      endif
      if (x(i) > Lx) then                                    ! outflow: remove the particle
         inside(i) = false
         call remove_from_neighbourlists(i)
      endif
   endif
enddo



In the first part relating to the inflow boundary, as long as the number of particles to
be inserted has not been reached, we keep looking for the index of a particle outside of
the system (inside(i) == false). When such a particle is found, new coordinates
inside the inflow region are chosen, the neighbourlists are updated, a velocity is drawn
from a Maxwell-Boltzmann distribution, and inside(i) is made true. If there are
insufficient particles outside of the system (i.e. the index i exceeds N) to accommodate the new particles in
the inflow region, the program is stopped. For most modern programming languages
this is not necessary, because they allow for a re-allocation of the arrays to a new (in-
creased) length N .
In the second part, we recognize the specular and bounce-back reflection rules.
When the x-coordinate of particle i becomes larger than the box size L x , it is moved
outside of the system by setting inside(i) = false and removing it from all neigh-
bourlists. The more difficult operations are handled by the routines insert_particle_inflow,
add_to_neighbourlists and remove_from_neighbourlists.
The routine insert_particle_inflow places a particle inside the inflow region, which
here is a rectangular region between x = 0 and x = L_inlet and y = 0 and y = L_y. The new
particle should not overlap with other particles. The check for overlap can be made ef-
ficient by again making use of the cell-linked-list, only searching for overlap with par-
ticles in nearby cells. To keep this example short and simple, here we will simply loop
over all particles.

subroutine insert_particle_inflow(i)
   overlap = true
   do while (overlap == true)
      x(i) = L_inlet*rand()                    ! random trial position in the inflow region
      y(i) = 0.5*sigma + (Ly - sigma)*rand()   ! keep one radius away from the solid walls
      overlap = false
      do j = 1, N
         if (inside(j) == true) then
            dx = x(i) - x(j)
            dy = y(i) - y(j)
            if (dx*dx + dy*dy < rcr*rcr) overlap = true   ! too close to an existing particle
         endif
      enddo
   enddo
end subroutine

Note that the vertical position of the new particle is between σ/2 and L y − σ/2, not
between 0 and L y , because the centre of a particle cannot approach the solid walls at
y = 0 and y = L y closer than one particle radius σ/2.
A word of caution is in place here. The above routine will work for sufficiently dilute
systems. In more dense systems the routine could get stuck looping endlessly trying to
find a vacant position for a new particle. In that case, as mentioned before, we should
not insert particles randomly but for instance stack them in well-arranged layers.
The routine add_to_neighbourlists adds the new particle to the neighbourlists. We
could add the new particle to the neighbourlist of each particle placed within one list
radius. However, because in the evaluation of the forces we treat each unique pair only
once, we may as well insert existing particles in the neighbourlist of the new particle.
Again, this could be done efficiently using a cell-linked-list, but for simplicity we will
loop over all particles inside the system:

subroutine add_to_neighbourlists(i)
   n_neigh(i) = 0
   do j = 1, N
      if (inside(j) == true) then
         dx = x(i) - x(j)
         dy = y(i) - y(j)
         if (dx*dx + dy*dy < rlist*rlist) then
            n_neigh(i) = n_neigh(i) + 1
            neigh(i, n_neigh(i)) = j       ! add existing particle j to the list of the new particle i
         endif
      endif
   enddo
end subroutine

We loop over all particles and check first if the particle is inside the system and then
whether it is within a distance rlist of the new particle i. You may worry that particle
i itself will end up in the neighbourlist of i. This is not the case because at this point
inside(i) is still 'false'.
The routine remove_from_neighbourlists is the opposite of the previous routine: it re-
moves a particle from all existing neighbourlists. When this particle is removed from a
particular neighbourlist, in most cases it will create a hole in the neighbourlist. We can
fill this hole with the last other particle of this neighbourlist. In pseudo-code:
subroutine remove_from_neighbourlists(i)
   do j = 1, N
      if (inside(j) == true) then
         ihole = 0
         do k = 1, n_neigh(j)
            if (neigh(j,k) == i) ihole = k           ! location of i in the neighbourlist of j
         enddo
         if (ihole > 0) then
            neigh(j,ihole) = neigh(j, n_neigh(j))    ! fill the hole with the last neighbour in the list
            n_neigh(j) = n_neigh(j) - 1
         endif
      endif
   enddo
end subroutine

We loop over all particles and check first if the particle is inside the system and then
whether one of the elements of the neighbourlist is equal to i. The location of i in the
neighbourlist is stored in ihole and the last neighbour in the neighbourlist is used to
fill the hole created by the removal of i. Because at this point inside(i) is already 'false',
the neighbourlist of i itself will not be checked.
We note that checking the neighbourlists of all particles is an expensive operation
and should in practice be avoided, for example by identifying nearby particles using a
cell-linked-list or, in case of an outflow boundary, by only checking the neighbourlists
of particles within a distance r_list of the outflow boundary.
This concludes our treatment of inflow and outflow boundaries.

Periodic boundary conditions


Sometimes we are interested in the bulk properties of a system. In such a case, the
boundaries should be as far removed from the region of interest as possible, because
walls are known to structure a fluid over a length scale much larger than the typical
particle-particle interaction range. Bulk behaviour may be approximated by the use of
periodic boundary conditions.11
When we use periodic boundary conditions, exact copies of the central simula-
tion box are generated in all directions, leading to an infinitely large grid of simulation
boxes, see Figure 2.8. This has two consequences for the simulation:
11
Bulk behaviour is only approximated by using periodic boundary conditions, because the number
of degrees of freedom is still finite. One should therefore always test the effect of box size on the results.
For example, near a phase transition and near a critical point, the characteristic size of density fluctu-
ations in the fluid may grow beyond the size of a typical periodic simulation box. Also, hydrodynamic
modes are limited to characteristic wavelengths commensurate with the simulation box size. This leads
to subtle finite box-size effects which we will not discuss here.

Figure 2.8: Explanation of periodic


boundary conditions and the minimum
image convention in a two-dimensional
system. The contents of the central box
(yellow) is copied in all directions. The red
particle has a cut-off distance, indicated
by the dashed circle, less than half the box
size. The red particle in the central box
therefore does not directly interact with
the blue particle in the central box. Rather,
it interacts with the closest image of the
blue particle to its left side.

1. After the position update, when a particle in the central simulation box has left
the box through the right boundary, it re-enters through the left boundary, and
similarly in all directions.

2. During the calculation of forces, a particle not only interacts with neighbouring
particles within the central box, but also with images of particles in neighbouring
copies of the box.

To explain these two consequences in more detail, we suppose in the following that
our box is two-dimensional and periodic in both directions, with the central box x-
coordinates between 0 and L x , and y-coordinates between 0 and L y .

Applying periodic boundary conditions to the particle positions

As soon as the x-coordinate of a particle is larger than L x , it has left the right boundary
and should re-appear from the left boundary at an x-coordinate diminished by L x .
Conversely, when the x-coordinate is less than 0, we should add L x to get the new x-
coordinate. With a similar approach for the y-direction, we arrive at:

! apply periodic boundary conditions in x and y
do i = 1, N
   if (x(i) >= Lx) x(i) = x(i) - Lx
   if (x(i) <  0 ) x(i) = x(i) + Lx
   if (y(i) >= Ly) y(i) = y(i) - Ly
   if (y(i) <  0 ) y(i) = y(i) + Ly
enddo

Depending on the computer hardware, the coding language, and the compiler, eval-
uating many if-statements can be relatively expensive. In that case a more elegant
approach is to subtract an integer number of box lengths, where the number is de-
termined by using the nearest integer (nint) evaluation.12 If the central box is between
−L_x/2 ≤ x < L_x/2, applying a periodic boundary in the x-direction is rather simple:
x(i) = x(i) - Lx*nint(x(i)/Lx) (find out for yourself why this works). For a central box
between 0 ≤ x < L_x, as we have assumed in the previous examples, we first subtract half
the box length:

! periodic boundaries for a central box between 0 and Lx (and 0 and Ly)
do i = 1, N
   x(i) = x(i) - Lx*nint(x(i)/Lx - 0.5)
   y(i) = y(i) - Ly*nint(y(i)/Ly - 0.5)
enddo

Particle interactions with periodic boundaries: the minimum image convention

The use of periodic boundary conditions also implies that particles interact with im-
ages of particles in neighbouring copies of the central box. A particle may even interact
with more than one copy of the same particle if the cut-off distance r cut is larger than
half the smallest box dimension (think about why?). This will both complicate the cal-
culations and lead to severe finite box-size effects.
To avoid these complications, in practice we choose the smallest box size at least
twice as large as the cut-off distance,

min(L_x, L_y) ≥ 2 r_cut,    (2.28)

or when we use a neighbourlist twice the list distance:

min(L_x, L_y) ≥ 2 r_list.    (2.29)

We can then use the so-called minimum image convention: in the calculation of the
pair interaction between a pair of particles i and j we need only take into account the
closest image pair.
To make this more explicit, let us consider an example of two particles i and j which
happen to have the same y-coordinate. Suppose particle i is located at xi = 0.1L x and a
particle j at x_j = 0.8 L_x, and that the cut-off distance is r_cut = 0.4 L_x. The direct distance
between particles i and j in the central box is |x_i − x_j| = 0.7 L_x. Because this is larger
than r_cut these particles do not interact directly inside the central box. However, the
image of particle j closest to particle i is located at x′_j = x_j − L_x = −0.2 L_x. The distance
between particle i and this image of particle j is smaller than r_cut: |x_i − x′_j| = 0.3 L_x.
The force between i and j should be calculated based on this distance.
12
For example, nint(0.4) = 0, nint(0.6) = 1, nint(−0.4) = 0, nint(−0.6) = −1.

In the above example the two particles have the same y-coordinate. More gener-
ally, for particles interacting through a pair potential ϕ, the force on particle i due to
interaction with particle j must be calculated as

F_ij = −ϕ′(|r_i − r′_j|) (r_i − r′_j) / |r_i − r′_j|,    (2.30)

where r′_j is the image of j that is closest to i. To find this closest image we can again
use the nint (nearest integer) function:
x_i − x′_j = x_ij − L_x nint(x_ij/L_x),    (2.31)
where x_ij = x_i − x_j is the displacement from particle j to particle i inside the central
box. Similar expressions should be applied to the other (periodic) directions. In other
words, anywhere in the simulation code where the vector from j to i is needed, the
following lines:
   xij = x(i) - x(j)
   yij = y(i) - y(j)
should be replaced by:
   xij = x(i) - x(j)
   xij = xij - Lx*nint(xij/Lx)
   yij = y(i) - y(j)
   yij = yij - Ly*nint(yij/Ly)
It is important to emphasize that the minimum image convention should not only
be applied when evaluating forces between particles, but also when building the neigh-
bourlist, because particles at opposite sides of the box may actually be close together.
If the neighbourlist is generated through a cell-linked-list, we also need to adapt the
function cell_index(ix, iy). Remember that this function yields the index of a cell located
at (ix, iy). For a bounded system, a value zero was returned if (ix, iy) happened to
fall outside the system boundaries. Now, when we search for neighbouring particles
of a particle located near a periodic boundary, we should also check particles in cells
located at the opposite side of the box. This may easily be achieved by using the modulo
operator.13 The adaptation then simply reads:
   ix = mod(ix - 1, Mx) + 1
   iy = mod(iy - 1, My) + 1
   cell_index = ix + (iy - 1)*Mx
where Mx and My denote the number of cells in the x- and y-directions. With this adaptation, we can
still use the same initialisation routine identifying neighbouring cells. It is left as an exercise to the reader to find out,
with the help of Figure 2.5, why this yields the correct neighbouring cell indices.
13
The modulo mod (a, b) is defined as the (non-negative) remainder of a after division by b. For
example mod (1, 3) = 1, mod (3, 3) = 0 and mod (−1, 3) = 2.

Figure 2.9: Primitive unit cells of cubic crystal arrangements: simple cubic, body-centered cubic, and face-centered cubic.

2.6 Initialisation
The simulation has to start from a certain configuration of particles. So how do we
choose the initial positions? This depends on the system at hand: we may place the
particles in a lattice configuration or insert them randomly, possibly combined with
slow growth of the particles. In all cases this should be followed by an equilibration
simulation. In the following we will briefly discuss the applicability and advantages
and disadvantages of these choices. Choosing the initial velocities of the particles is
usually less complicated, and well known for microscopic (molecular) systems, as we
will discuss at the end of this section.

Initialising on lattice positions


When dealing with an atomic or molecular crystal, we may place the particles in an ap-
propriate lattice configuration that corresponds with the minimum (free) energy con-
figuration of the particular material (under the desired thermodynamic conditions of
temperature and/or pressure). Fig. 2.9 shows some common three-dimensional ar-
rangements for spherical particles in a cubic periodic unit cell: the simple cubic, body-
centered cubic (bcc), and face-centered cubic (fcc) configurations.
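To make the lattice initialisation concrete, the following Python sketch (only an illustration, with a hypothetical function name and arguments) fills a cubic periodic box of side box_length with an fcc arrangement built from n_cells×n_cells×n_cells cubic unit cells of four particles each:

import numpy as np

def fcc_positions(n_cells, box_length):
    # Positions of 4*n_cells**3 particles on a face-centered cubic lattice
    # in a cubic periodic box of side box_length.
    a = box_length / n_cells                   # edge length of one cubic unit cell
    basis = np.array([[0.0, 0.0, 0.0],         # the four particles of the fcc unit cell
                      [0.5, 0.5, 0.0],
                      [0.5, 0.0, 0.5],
                      [0.0, 0.5, 0.5]])
    positions = []
    for ix in range(n_cells):
        for iy in range(n_cells):
            for iz in range(n_cells):
                origin = np.array([ix, iy, iz], dtype=float)
                positions.append((origin + basis) * a)
    return np.concatenate(positions)

r = fcc_positions(5, 10.0)    # example: 4*5**3 = 500 particles in a box of side 10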
Even when we are not dealing with a crystal but with a liquid or dense gas, it is often
advantageous to start with a crystal because this way potential problems with overlap-
ping particles are easily avoided. When the total energy in the system is high enough
(i.e. by choosing large enough initial velocities and/or by controlling the temperature
through a thermostat), the equilibrium state will not be a crystalline solid but a liquid
or gas. A certain amount of initial simulation time therefore needs to be allocated to
ensure that the crystal will melt or evaporate. This is usually no problem for spheri-
cal particles, but may require a large amount of time for particles of non-trivial shape
or particles with multiple attractive interaction sites, which may be associated with a
high activation barrier for melting. In that case it may be better to randomly insert and
slowly grow particles, as discussed in the following subsections.

Figure 2.10: Random insertion


of particles inside a simulation
box. When attempting to place
a new particle (red), we need to
ensure that no previously placed
particles are within a range r cr
(dashed line) of the new posi-
tion. This only works well for rel-
atively dilute systems.

Random insertion
We can be pragmatic and choose to simply insert the particles, one at a time, at ran-
dom positions inside the simulation box. We already encountered this approach when
we discussed a possible inflow boundary condition for a dilute gas of Lennard-Jones
particles. If the center of the new particle ends up within a prescribed critical range r cr
of the centers of already placed particles, we try a new random position, see Fig. 2.10.
An example pseudo-code is given below for the case of a two-dimensional periodic
system of size Lx by Ly. The critical range can be chosen a little larger than the typical
range of particle-particle interactions (the diameter for hard-sphere particles). As can
be expected, this approach works well for relatively dilute particle configurations, but
quickly fails when the particle density gets higher.

! random insertion of N particles in a periodic box of size Lx by Ly
do i = 1, N
   overlap = true
   do while (overlap == true)
      x(i) = Lx*rand()                        ! trial position, rand() uniform between 0 and 1
      y(i) = Ly*rand()
      overlap = false
      do j = 1, i-1
         xij = x(i) - x(j)
         xij = xij - Lx*nint(xij/Lx)          ! minimum image convention
         yij = y(i) - y(j)
         yij = yij - Ly*nint(yij/Ly)
         if (xij*xij + yij*yij < rcr*rcr) overlap = true   ! within the critical range: try again
      enddo
   enddo
enddo

Growing particles or shrinking boxes


For very dense systems it is not always possible to place the particles by random inser-
tion. Besides initialising on a lattice, another possibility is to initialise the system by
deliberately starting with smaller particles or by placing the particles in a too large box.
Subsequently, the size of the particles is increased in small steps or the box dimensions
are shrunk in small steps, always with intermittent equilibration runs.
For example, for the case of a Lennard-Jones system, we can choose the initial par-
ticle size σ at a value of 50% of its final value. This allows us to place the particles by
random insertion without too many rejections and without causing the subsequent
equilibration run to immediately diverge. Actually, during the equilibration run the
various energies (kinetic and potential) should be monitored until they reach a steady
state. Then σ can be increased to 60% of its final value, and another equilibration run
is executed. Etcetera, until the desired particle size is reached.
The other possibility is to choose the initial box dimensions too large, for example
at 150% of the final box dimensions. This way there is also enough room for all particles
to be placed by random insertion. After an equilibration run, the box dimensions are
rescaled by a factor x close to unity (e.g. 0.9). When rescaling the box dimensions,
of course the particle coordinates should be rescaled as well, ri → xri , and another
equilibration run is executed. Etcetera, until the desired box dimensions are reached.
Both methods to generate a dense system are valid. Both methods are in a sense
also the same: scaling of the particle size σ is equivalent to scaling of the box size L
because L/σ is the important parameter.14
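In code, the box-shrinking variant amounts to a simple outer loop around the normal simulation. The following Python sketch is only meant to show this structure (the factor 0.9 and the user-supplied equilibrate routine are assumptions of this example):

def shrink_box(r, box_length, target_length, factor=0.9, equilibrate=None):
    # Shrink a periodic box and the particle coordinates r in small steps
    # towards target_length, with an equilibration run after every rescaling.
    while box_length > target_length:
        scale = max(factor, target_length / box_length)   # never overshoot the target
        box_length *= scale
        r = r * scale                                     # rescale coordinates with the box
        if equilibrate is not None:
            equilibrate(r, box_length)                    # e.g. a short MD run
    return r, box_length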

Choosing initial velocities


Before the time loop of a simulation program can commence, the particles should
be initialised with a velocity. For thermal systems in equilibrium we should use the
Maxwell-Boltzmann distribution, which we have by now encountered several times,
but repeat here for completeness. For a particle of mass m i the velocities are dis-
tributed according to:

P(v_{i,x}) = [m_i/(2π k_B T)]^{1/2} exp[ −m_i v_{i,x}²/(2 k_B T) ]    (2.32)
P(v_{i,y}) = [m_i/(2π k_B T)]^{1/2} exp[ −m_i v_{i,y}²/(2 k_B T) ]    (2.33)
P(v_{i,z}) = [m_i/(2π k_B T)]^{1/2} exp[ −m_i v_{i,z}²/(2 k_B T) ]    (2.34)
14
Similarly, scaling of the interaction strength ε is equivalent to another equilibration method we
have not discussed, namely a gradual temperature quench in which the system is cooled down in small
steps from a too high temperature to the desired temperature. Here ε/k_B T is the important parameter.

It is often desirable to initialise the system with a total momentum of zero, meaning
that the centre of mass of the system does not move. This may be achieved by sub-
tracting the centre of mass velocity from the velocity of each particle. This requires
two loops over the particles. The following pseudo-code gives an example for a two-
dimensional system with particles of different mass m_i.

! initial velocities from a Maxwell-Boltzmann distribution (two dimensions)
Px = 0
Py = 0
Mtot = 0
do i = 1, N
   vx(i) = sqrt(kB*T/m(i))*gauss()   ! gauss() returns a Gaussian random number with unit variance
   vy(i) = sqrt(kB*T/m(i))*gauss()
   Px = Px + m(i)*vx(i)              ! total momentum
   Py = Py + m(i)*vy(i)
   Mtot = Mtot + m(i)                ! total mass
enddo
do i = 1, N
   vx(i) = vx(i) - Px/Mtot           ! remove the centre-of-mass velocity
   vy(i) = vy(i) - Py/Mtot
enddo

We note that in some simulation programs, after removing the centre-of-mass velocity,
the velocities of the particles are rescaled to ensure that the kinetic energy of the system
equals exactly (d/2) N k_B T, where d is the number of dimensions. However, if we take the
point of view that the total kinetic energy should consist of (1/2) k_B T per degree of freedom
(equipartition of energy for quadratic terms in the Hamiltonian [45]), such a rescaling
is not necessary because the number of degrees of freedom is d (N − 1) instead of d N :
when calculating the expected (ensemble average) kinetic energy we should take into
account that conservation of the centre of mass momentum removes d degrees of free-
dom.15

15
An extreme example would be a system consisting of two particles (N = 2) of mass m in one
dimension (d = 1). Choosing both v_1 and v_2 from a Maxwell-Boltzmann distribution will generally
lead to a non-zero centre of mass velocity (v_com = (v_1 + v_2)/2 ≠ 0). Subtracting v_com from both
velocities leads to an (ensemble averaged) kinetic energy of
⟨E_kin⟩ = (m/2) ⟨(v_1 − v_com)² + (v_2 − v_com)²⟩ = (m/2) ⟨(1/2)v_1² − v_1 v_2 + (1/2)v_2²⟩ = (m/2)(k_B T/m) = (1/2) k_B T.
Indeed the number of degrees of freedom in this case is equal to d(N − 1) = 1(2 − 1) = 1.

2.7 Updating particle positions and velocities: numerical


integration of the equations of motion
Now almost all elements of our basic particle-based simulation program are in place.
We have initialised the particle positions and velocities, and built routines to calculate
forces on the particles and deal with various types of boundaries. To actually determine
the dynamics of the system, i.e. the time-dependence of the positions and velocities of
the particles, we need to solve Newton’s equations of motion:16

dr_i/dt = v_i,    (2.35)
m_i dv_i/dt = F_i,    (2.36)

where m i is the mass of particle i and Fi is the sum of all forces on particle i , both due
to interactions with other particles and due to external forces. The equations of motion
are solved numerically. Rather than evaluating time derivatives based on infinitesimal
time steps, we will evaluate time derivatives using a small but finite time step Δt .

Maximum time step is limited by the fastest oscillation


The time step Δt should be at least small enough to properly resolve the smallest oscil-
lation times Tmi n present in the system, i.e. Δt should be at least, say, 20 times smaller
than Tmi n . The smallest oscillation time may be estimated by evaluating the largest
curvature of the potential energy (effectively the largest spring constant k max ) between
a pair of particles. Using the analogy of a harmonic oscillator, the minimum oscillation
time is never smaller than:

T_min = 2π √(m/k_max) = 2π √( m [ (d²ϕ/dr²)|_max ]⁻¹ )    (2.37)

where m is the (smallest) mass of a particle. This equation shows that heavier particles
or particles interacting with softer potentials allow for larger integration time steps.
What is meant by the largest “practically reachable” curvature depends on both the pair inter-
action and the typical relative velocities (i.e. the temperature for microscopic thermal
systems) of the particles. For thermal systems we may estimate the curvature at a typ-
ical maximal potential energy of the order of 5k B T .
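As an illustration, the following Python sketch estimates T_min numerically for an arbitrary pair potential by scanning the second derivative over the range of separations that is actually reached (the range limits, and the Lennard-Jones example in reduced units, are only an illustrative choice):

import numpy as np

def minimum_oscillation_time(phi, m, r_min, r_max, n=10000):
    # Estimate T_min = 2*pi*sqrt(m/k_max) of Eq. (2.37) for a pair potential phi(r)
    # by numerically locating the largest curvature on the interval [r_min, r_max].
    r = np.linspace(r_min, r_max, n)
    d2phi = np.gradient(np.gradient(phi(r), r), r)   # numerical second derivative
    return 2.0 * np.pi * np.sqrt(m / d2phi.max())

lj = lambda r: 4.0 * (r**-12 - r**-6)                # Lennard-Jones, epsilon = sigma = 1
print(minimum_oscillation_time(lj, 1.0, 0.9, 2.5))   # in reduced (Lennard-Jones) time units

Dividing the result by, say, 20 then gives an estimate of the largest acceptable time step.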
The curvature considerations lead to an upper limit of the time step. However, the
accuracy of the numerical integration of the equations of motion depends on the type
of time discretisation.
16
We will encounter a different kind of equation of motion when we treat Brownian Dynamics in
Chapter 5.

First order Euler method


The simplest approach is called the first order Euler method:

r_i(t + Δt) = r_i(t) + v_i(t) Δt,    (2.38)
v_i(t + Δt) = v_i(t) + [F_i(t)/m_i] Δt    (2.39)

A disadvantage of this method is that it does not conserve the total energy (even when
the forces are conservative), as will be discovered in the Practicum. Comparing Eq. (2.38)
with the exact Taylor expansion around time t ,

r_i(t + Δt) = r_i(t) + ṙ_i(t)Δt + (1/2) r̈_i(t)Δt² + (1/6) (d³r_i/dt³)(t) Δt³ + ...,    (2.40)
2 6

shows that the truncation error of the first order Euler algorithm is quadratic in Δt .
Generally, a method is called of order n if the truncation error scales as Δt n+1 .
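The lack of energy conservation is easy to see for yourself with a one-dimensional harmonic oscillator. The small Python test below (only an illustration, in reduced units with m = k = 1) shows the total energy slowly drifting away from its initial value, which is exactly the behaviour you will observe for the dumbbell in the Practicum:

# first order Euler for a harmonic oscillator: m*dv/dt = -k*x, with m = k = 1
m, k, dt, nsteps = 1.0, 1.0, 0.01, 10000
x, v = 1.0, 0.0
for step in range(nsteps):
    f = -k * x             # force at the current position
    x = x + v * dt         # Eq. (2.38)
    v = v + (f / m) * dt   # Eq. (2.39)
print(0.5 * m * v**2 + 0.5 * k * x**2)   # has drifted upward from the initial value 0.5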

Leap-frog method
If we increase the accuracy of the calculation by decreasing the time step, then at equal
total simulation time the required number of time steps will increase, and hence the
required calculation time. Therefore, it is advisable to choose an inherently more ac-
curate algorithm to integrate the equations of motion.
One particularly popular algorithm is the so-called leap-frog method [4]. It may be
derived as follows: performing a Taylor expansion back in time, we find

r_i(t − Δt) = r_i(t) − ṙ_i(t)Δt + (1/2) r̈_i(t)Δt² − (1/6) (d³r_i/dt³)(t) Δt³ + ....    (2.41)
2 6

Adding Eq. (2.41) to Eq. (2.40), and rearranging, we find

r_i(t + Δt) = 2r_i(t) − r_i(t − Δt) + r̈_i(t)Δt² + O(Δt⁴),    (2.42)

Note that this is a third order algorithm with the favourable property that it is time-
reversible: when at a certain time all Δt ’s are changed to −Δt ’s, the particles will move
back along their exact previous trajectories. This is not the case for the first order Euler
algorithm and the core reason why that algorithm does not conserve total energy. To
apply this algorithm in practice, we need to store the positions of the particles at a pre-
vious time step ri (t −Δt ); only forces (r̈i (t ) = Fi (t )/m i ) and no velocities are needed for
the update of the particle positions. In many cases we would also like to have available
the velocities, for example to calculate the kinetic energy. It is therefore convenient to
first define a new variable vi (t + Δt /2), such that

ri (t + Δt ) − ri (t ) ≡ vi (t + Δt /2)Δt . (2.43)

Using this definition, we may rewrite the integration algorithm as:


v_i(t + Δt/2) = v_i(t − Δt/2) + [F_i(t)/m_i] Δt,    (2.44)
r_i(t + Δt) = r_i(t) + v_i(t + Δt/2) Δt.    (2.45)

It is important to stress that, despite the introduction of the new variable vi , numer-
ically the integration of the position is still equal to that of Eq. (2.42), and therefore
of third order. Instead of storing the positions at a previous time step, we now need
to store the new variable, which is an approximation of the velocity at time t + Δt/2,
and which we therefore loosely refer to as “the” velocity. The above algorithm is called the
leap-frog scheme because of the way in which the velocity is updated to t + Δt /2 using
the force at time t , and subsequently the position is updated to t + Δt using the just-
obtained velocity at time t + Δt /2. Algorithmically, the position and velocity updates
are very similar to that of the first order Euler algorithm. The difference is subtle (find
it for yourself), but important!
The leap-frog algorithm leads to (approximate) velocities at half-time steps. The
velocity at a whole time step, say time t , can be estimated as soon as vi (t + Δt /2) is
known, by taking following average:
v_i(t) ≈ (1/2) [v_i(t − Δt/2) + v_i(t + Δt/2)].    (2.46)
2
The so-obtained velocities are accurate up to second order in Δt .

Velocity Verlet method


Sometimes it is more convenient to have the velocity at time t directly available, in-
stead of having to calculate it through Eq. (2.46). In that case we can use the velocity
Verlet method, which numerically is the same as the leap-frog algorithm [4]. Suppose
we know the position and velocity at a certain time t , the velocity Verlet method then
proceeds to the next time step as follows:
v_i(t + Δt/2) = v_i(t) + (1/2) [F_i(t)/m_i] Δt,    (2.47)
r_i(t + Δt) = r_i(t) + v_i(t + Δt/2) Δt,    (2.48)
(then evaluate the force at t + Δt)
v_i(t + Δt) = v_i(t + Δt/2) + (1/2) [F_i(t + Δt)/m_i] Δt.    (2.49)
Note that the force only needs to be evaluated once per time step, because F(t + Δt ) in
one time step is equal to F(t ) in the next.
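A single velocity Verlet step for N particles can be written very compactly. The Python sketch below (only an illustration; compute_forces is an assumed user-supplied function returning the force array for given positions) implements Eqs. (2.47)–(2.49):

import numpy as np

def velocity_verlet_step(r, v, f, m, dt, compute_forces):
    # Advance positions r, velocities v and forces f (arrays of shape (N, 3))
    # over one time step dt; m is the array of particle masses of shape (N,).
    v_half = v + 0.5 * dt * f / m[:, None]             # Eq. (2.47): first half kick
    r_new = r + v_half * dt                            # Eq. (2.48): drift
    f_new = compute_forces(r_new)                      # force at t + dt
    v_new = v_half + 0.5 * dt * f_new / m[:, None]     # Eq. (2.49): second half kick
    return r_new, v_new, f_new

The force array returned by one call is passed back in as f in the next call, so that the force is indeed evaluated only once per time step.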
Also note that the higher order accuracy of both the leap-frog and velocity Verlet meth-
ods is retained only as long as the force at time t only depends on the positions of the
particles at time t , and not on the velocities at time t . For dissipative systems (where
the force does depend on the velocity), the accuracy is generally lower, i.e. smaller time
steps need to be made to achieve a sufficient level of accuracy.

2.8 Practicum: Debye crystal


In this practicum you will code your first full simulation program of a harmonic os-
cillator dumbbell (two particles) without friction. You will check various numerical
schemes for energy conservation. Then you will extend the code to N coupled har-
monic oscillators with periodic boundaries. This so-called Debye model is a model
for a crystal. Finally, you will investigate the influence of non-harmonic interactions
between the particles.
CHAPTER 3

DIMENSIONLESS NUMBERS AND SCALES

3.1 Chapter objectives


Through the course of this chapter, you will accomplish the following:

• You will learn about common physical phenomena that may be relevant in your
simulations.

• You will learn about dimensionless numbers that characterise ratios of forces or
transport properties.

• You will learn to distinguish between the three main types of particle-based sim-
ulations, namely microscopic, mesoscopic and macroscopic simulations.

3.2 Physical phenomena


“Everything should be made as simple as possible. . . but not simpler.”
- Albert Einstein

When performing particle-based simulations, it is important to focus on what question


you try to answer. As a simulator it is your responsibility to assess the relevance of var-
ious physical phenomena and make the right kind of approximations. As an extreme
and obvious example, if you want to study circulating patterns in the path of granular
particles in a fluidised bed, it does not make sense to include the velocity fluctuations
of the atoms in the granular particles. Rather, it makes more sense to include the gran-
ular particles as single entities with a small number of degrees of freedom (position


and orientation) and certain collision properties. The above quote warns that, con-
versely, we should neither simplify the problem too much. In our example, we could
for simplicity leave out the particle interactions and instead study the balance between
gravity and hydrodynamic drag on a single particle. However, in such a case no circu-
lating patterns will appear because the interactions (contact forces and hydrodynamic
forces) are essential for disturbing the particles from a simple one-dimensional path.
In the next subsections we will describe some common forces and transport phe-
nomena. This list is by no means exhaustive, but will give you an impression of the
different factors that may be taken into account in a simulation. Where appropriate
we will give typical values for particles in water or air at room temperature and under
standard atmospheric conditions.

Gravity and buoyant forces


All particles have a mass and therefore attract each other through gravitational forces.
Unless we are dealing with the motion of stars or planets, it is usually sufficient to treat
the gravity as an external force acting on each particle and ignore mutual gravitational
attraction. On the surface of the Earth, a particle of mass m will feel a gravitational
force

Fg = −mg êz (3.1)

with êz pointing away from the Earth centre and g = 9.81 m/s2 .
Gravitation also causes a density and pressure gradient in fluids; under hydrostatic
(equilibrium) conditions the pressure in a fluid changes with depth as ∇P = −ρ f g êz ,
where ρ f is the (local) fluid density. For example, ρ f = 1.0 × 103 kg/m3 for water and
ρ f = 1.2 kg/m3 for air under atmospheric conditions. For solid particles of volume Vp
embedded in a fluid, this pressure gradient leads to a buoyant force

Fb = −Vp ∇P = Vp ρ f g êz . (3.2)

So the total effect of gravity on the particle is determined by the difference between the
particle density ρ_p and the fluid density ρ_f: F_g + F_b = −V_p (ρ_p − ρ_f) g ê_z.1
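As a quick numerical example (taking ρ_p ≈ 2.6 × 10³ kg/m³ as a typical density for a sand grain), a grain of radius 0.1 mm in water (V_p ≈ 4.2 × 10⁻¹² m³) experiences a net downward force |F_g + F_b| = V_p (ρ_p − ρ_f) g ≈ 4.2 × 10⁻¹² × 1.6 × 10³ × 9.81 ≈ 6.6 × 10⁻⁸ N, whereas in air the same grain feels essentially its full weight because ρ_f ≪ ρ_p.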

Viscous and drag forces


In general, for a shear deformation rate of γ̇ in a fluid, the viscous force exerted on an
area A parallel to the flow direction is given by μA γ̇, where μ is the dynamic viscos-
ity (units Pa.s = kg/(m.s)). For water μ = 10−3 Pa.s, while for air under atmospheric
conditions μ = 1.8 × 10−5 Pa.s.
The viscosity is an important parameter determining the drag on a particle moving
with velocity V through a stationary fluid. Because the fluid cannot penetrate the par-
ticle, the liquid bounces off the particle and needs to flow around the particle, leading
1
Sometimes this total force is referred to as the buoyant force, so always check the definition.

to both a pressure distribution and a shear flow near the surface of the particle. At low
particle velocities (how low will be made more exact later), the flow field in the fluid
surrounding the particle scales linearly with the particle velocity V , and consequently
the drag force will scale linearly with the particle velocity.2
We can make an order of magnitude estimate for the drag force on a single spherical
particle of radius R moving at low velocity. The typical shear deformation rate is of the
order of V /R and the (relevant) sphere area scales as R 2 . We therefore expect the drag
force Fd to scale as μRV . An exact calculation, performed by Stokes in 1851, shows that
the drag on a sphere with no-slip boundaries is given by:

Fd = −6πμRV (V low) (3.3)

This is the famous Stokes’ law. A full derivation is given in Appendix A.2.
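As an order-of-magnitude example, a colloidal sphere of radius R = 1 μm moving at V = 1 μm/s through water (μ = 10⁻³ Pa.s) experiences a drag force of magnitude 6πμRV ≈ 6π × 10⁻³ × 10⁻⁶ × 10⁻⁶ ≈ 1.9 × 10⁻¹⁴ N, which illustrates how small the forces on individual colloidal particles are.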

Inertial forces
Continuing with the previous example, at higher particle velocities two things happen.
First, the boundary layer (in which the fluid velocity changes from zero to V near the
particle surface) decreases in thickness, leading to a faster-than-linear scaling of the
typical shear deformation rate. Second, the inertia of the fluid which is suddenly ac-
celerated in front of the particle becomes increasingly important. Indeed, if the particle
velocity V becomes very high, the drag force on the particle will be dominated by the
inertia of the fluid. Again we can make an order of magnitude estimation. The fluid at
the frontal area A of the particle will be accelerated. Every second an amount of fluid
of mass ρ f V A is accelerated to a velocity V . This leads to an inertial force of the order
of ρ f V 2 A. In general, the relation between drag force on a particle and its velocity is
given by

F_d = −(1/2) C_d ρ_f A |V| V    (3.4)

with C d the so-called drag coefficient. For a smooth sphere A = πR 2 and C d ≈ 0.44 at
high velocities, meaning that the drag force increases with the square of the particle
velocity.

Surface tension
Molecules generally attract each other through cohesive forces (e.g. the Van der Waals
r −6 part of the Lennard-Jones potential). In the bulk of a liquid, the molecules are
pulled equally in all directions, leading to a net force of zero. However, when the liquid
is in contact with a gas or another (immiscible) liquid, the molecules at the interfa-
cial surface feel different forces toward the two sides of the surface; this imbalance in
2
In this so-called Stokes flow regime the hydrodynamic equations, governing the motion of the fluid,
are linear.

forces may be counteracted by curving the interface. Well-known examples include


the spherical interface of a water droplet floating in air (in the absence of gravity) or a
droplet of oil floating in water. In all cases, the interface has a “tension”, i.e. tends to
contract and minimize the surface area.
The surface tension γ is defined as the force along a line of unit length, where the
force is parallel to the interfacial surface but perpendicular to the line. In terms of
energy, an interface of area A between two phases is associated with an energy γA.
Clearly, surface tension is not a property of a single fluid, but that of a fluid and another
phase. For example, the surface tension between water and air is γ = 7.2 × 10−2 N/m.
One of the consequences of surface tension is that the pressure inside a droplet of a
fluid is larger than the pressure outside the droplet. A balance between surface tension
and pressure difference ΔP is achieved when

ΔP = 2γ/R,    (3.5)
where R is the radius of the droplet. This equation, known as the Young-Laplace equa-
tion, shows that the internal pressure increases when the droplets get smaller.
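For example, using the water–air surface tension quoted above, a droplet of radius R = 1 μm has an internal excess pressure ΔP = 2 × 7.2 × 10⁻² / 10⁻⁶ ≈ 1.4 × 10⁵ Pa, i.e. more than one atmosphere, whereas a droplet of radius 1 mm has an excess pressure of only about 1.4 × 10² Pa.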

Diffusive transport of mass, momentum or energy


Many particle-based simulations are ultimately concerned with transport of mass, mo-
mentum or energy from one location to the other. This transport can occur through
two main mechanisms, namely diffusive or convective. In the next subsection we treat
convection, which is the process by which mass, momentum or energy is transported
due to the mean motion of the fluid in which it is carried. In this subsection we treat
diffusion, which is the process by which material is transported by the random thermal
motion of the molecules within the fluid, even in the absence of any mean flow.

Mass diffusion

Small particles such as molecules and colloidal particles perform random motions in
fluids, driven by thermal fluctuations. As a consequence, concentration differences in
these particles tend to equalise. Defining ci as the concentration of particle species i
(in mol/m3), the molar flux Jc,i (in mol/(m2 s)) is proportional to the gradient in particle
concentration (Fick’s law):

Jc,i = −D i ∇ci , (3.6)

where the constant of proportionality D i is called the diffusion coefficient of species


i (in units m2 /s). Combining the above equation with the conservation equation of
mass, ∂ci /∂t + ∇ · Jc,i = 0, yields the diffusion equation:

∂c_i/∂t = D_i ∇²c_i.    (3.7)

Fick’s law applies to the transport of many particles simultaneously, induced by


concentration gradients. Similar arguments may be made for the time evolution of the
probability distribution P (r, t ; r0, t0 ) to find a given tracer particle near location r at time
t , if it was located at r0 at time t0 . This leads to an equation similar to Fick’s law, ∂P /∂t =
D s ∇2 P , but where D s is the so-called self-diffusion coefficient.3 It is easy to understand
that the initial condition for this differential equation is lim_{t→t_0} P(r, t; r_0, t_0) = δ(r − r_0),4
indicating that the probability becomes more peaked around r0 for shorter intervals t −
t0 . The solution of this equation is the so-called Green’s-function for the (free) diffusion
equation. In three dimensions:

P(r, t; r_0, t_0) = [4π D_s (t − t_0)]^{−3/2} exp[ −(r − r_0)²/(4 D_s (t − t_0)) ]    (3.8)

We can use Green’s function to study the ensemble dynamics of a diffusing particle.
The ensemble average position of the particle at time t is

⟨r(t)⟩ = ∫ P(r, t; r_0, t_0) r d³r = r_0,    (3.9)

meaning that on average the particle remains where it was initially located at t0 . So
what happens to the mean-square displacement of the particle? The mean-square dis-
placement is a measure of the degree of fluctuations in the particle’s position, given by
the second moment of Green’s function:

⟨(r(t) − r_0)²⟩ = ∫ P(r, t; r_0, t_0) (r − r_0)² d³r = 6 D_s t.    (3.10)

In other words, the mean-square-displacement is proportional to time and the self-


diffusion coefficient. The self-diffusion coefficient of water at room temperature is
2.3 × 10−9 m2 /s and that of oxygen in air is 1.8 × 10−5 m2 /s.
In summary, even in the absence of flow, small particles will move on a random
path. Although the average vector displacement is zero, the typical root-mean-square
distance from the original position grows as √t.
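The linear growth of the mean-square displacement, Eq. (3.10), is easy to verify numerically with a simple random-walk model (only an illustration, not one of the simulation methods treated in this course): giving a particle independent Gaussian displacements with variance 2 D_s Δt per cartesian component per time step reproduces ⟨(r(t) − r_0)²⟩ = 6 D_s t.

import numpy as np

rng = np.random.default_rng(1)
Ds, dt, nsteps, nwalkers = 1.0e-9, 1.0e-3, 1000, 2000   # example values (SI units)
steps = rng.normal(0.0, np.sqrt(2.0 * Ds * dt), size=(nwalkers, nsteps, 3))
r = np.cumsum(steps, axis=1)                   # trajectories measured from r0
msd = (r**2).sum(axis=2).mean(axis=0)          # ensemble-averaged mean-square displacement
t = dt * np.arange(1, nsteps + 1)
print(msd[-1] / (6.0 * Ds * t[-1]))            # close to 1, confirming Eq. (3.10)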

Thermal diffusion

We first treat thermal diffusion because the equations are very similar to those of mass
diffusion. As the Maxwell-Boltzmann equations (2.20) show, the temperature T is a
measure for the random velocity fluctuations of molecules and other small particles.
3
At not too high particle densities, the diffusion coefficient of Fick’s law (which applies to concentra-
tion) is equal to the self-diffusion coefficient. At high concentrations individual particles may become
trapped while collective rearrangements still lead to an equalisation of the concentration. In that case
the self-diffusion coefficient is lower than the (collective, Fick) diffusion coefficient.
4
Here δ(r) is the three-dimensional Dirac delta function, which is zero everywhere except at the
origin, and has the property ∫ d³r δ(r) = 1.

Because these particles interact with each other, temperature differences between dif-
ferent regions in the system will be equalised. This equalisation is perceived as a heat
flux q (in W/m2 ), proportional to the gradient in temperature (Fourier’s law):

q = −k ∇T, (3.11)

where the constant of proportionality k is called the thermal conductivity coefficient


(in units W/(m.K)). The thermal conductivity coefficient of water is 0.60 W/(m.K) and
that of air 0.025 W/(m.K).
By the divergence theorem, a small volume V of material will loose a net heat of
V ∇ · q per second, and the temperature of the material in that volume will decrease
(for positive ∇ · q). This link between temperature decrease and heat loss (per unit
volume) is provided by the enthalpy H: dH = ρc p dT , where ρ is the mass density (in
kg/m3 ) and c p the specific heat capacity of the material (in J/(kg.K)). Assuming k is
constant, we get:

∂H/∂t = ∂(ρ c_p T)/∂t = −∇ · q = k∇²T    (3.12)
If we can also assume that the product of specific heat capacity and density is approx-
imately constant, we find the so-called heat equation:

∂T/∂t = α∇²T,    (3.13)

where α = k/(ρc p ) is the thermal diffusivity (in units m2 /s). The thermal diffusivity of
water is 1.4 × 10−7 m2 /s and that of air 1.8 × 10−5 m2 /s.

Momentum diffusion

We previously discussed viscous drag forces, and introduced the dynamic viscosity μ
to characterise shear forces on an area (parallel to the flow direction) caused by shear
flow. These shear forces, like mass diffusion and thermal conductivity, are determined
by molecular processes.
Let us now analyse the situation in some more detail. Consider a fluid between
two very large parallel plates which are some distance apart along the y-direction. The
plates are so large that we may neglect end-effects, meaning that all physics is the same
in the x- and z-directions. If the upper plate is moved with a velocity V along the x-
direction, collisions with the fluid molecules immediately adjacent to this plate will
accelerate the molecules in the x-direction too. Thermal motion and molecular col-
lisions in the fluid will cause this effect to grow downwards, all the way to the bottom
plate. As long as the fluid at a larger y-position moves with a higher x-velocity than the
fluid below, the fluid below will be accelerated in the x-direction (and the fluid above
decelerated). In other words, x-momentum will continually diffuse down through the
fluid from the upper plate, i.e. in the negative y-direction. For simple fluids such as

water and gases, the flux of x-momentum τ yx (in N/m2 = Pa) passing every second
through an imaginary plane with normal along y, i.e. y = const, is proportional to the
local gradient in the velocity field (Newton’s law):

τ_yx = −μ ∂v_x/∂y,    (3.14)

where the constant of proportionality μ is the dynamic viscosity (in units Pa.s = kg/(m.s)).
The minus sign occurs because for a positive ∂v x /∂y the flow of x-momentum is in the
negative y-direction.
Now a flux of x-momentum is nothing but a force per unit area acting in the x-
direction (to convince yourself, just consider the units). The net shear force on a vol-
ume of fluid enclosed by imaginary planes at y and y +dy of an area A (in the xz-plane)
is given by the momentum flux over the plane at y minus the momentum flux over the
plane at y + dy (minus, because the inwards direction is pointing down at y + dy). So
we have: F xshear = Aτ yx (y)− Aτ yx (y +dy) ≈ −A(∂τ yx /∂y)dy. Combining this with New-
ton’s law, we find a net shear force density of f xshear = μ(∂2 v x /∂y 2 ). This net shear force
can be used to accelerate the fluid:5

ρ ∂v_x/∂t = μ ∂²v_x/∂y²    (3.16)

which we may also write as ∂v_x/∂t = ν ∂²v_x/∂y², with ν = μ/ρ the kinematic viscosity.
The kinematic viscosity of water is 1.0 × 10−6 m2 /s, that of air is 1.5 × 10−5 m2 /s.
Note that the kinematic viscosity ν in Eq. (3.16) has the same units as the mass
diffusion coefficient D in Eq. (3.7) and thermal diffusivity α in Eq. (3.13). All these
phenomena are diffusive, meaning that their extent will grow with time as √t. Also
note that in a gas diffusive transport is mainly taking place because of the motion of
the particles, not because of interactions between molecules. As a consequence, in a
gas all three diffusion coefficients are of the same order of magnitude. Indeed, we have
found that the self-diffusion of oxygen (very similar to the self-diffusion of nitrogen)
is D s = 1.8 × 10−5 m2 /s, the thermal diffusivity of air is α = 1.8 × 10−5 m2 /s, and the
kinematic viscosity of air is ν = 1.5 × 10−5 m2 /s.
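A useful rule of thumb follows directly from this √t behaviour: the time needed for diffusive transport to penetrate a distance L is of the order of L²/ν (or L²/D and L²/α for mass and heat). Momentum thus diffuses across a 1 mm layer of water in roughly (10⁻³)²/10⁻⁶ = 1 s, but across a 1 cm layer it already takes of the order of 100 s.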

Convective transport of mass, momentum or energy


Convective transport is transport as a consequence of macroscopic motion of the fluid,
i.e. flow. Suppose the fluid is flowing locally with a velocity v. For a general conserved
5
Note that we have derived an incomplete and non-general version of the Navier-Stokes equation
here. For a general incompressible (∇ · v = 0) flow of Newtonian fluids, the Navier-Stokes equation is:

ρ (∂v/∂t + v · ∇v) = −∇P + μ∇²v + ρg,    (3.15)

where P is the pressure, and ρg the force density due to gravitational forces (or any other external forces).

quantity Y (which can be moles, mass, momentum or thermal energy), we can define
the local density X (molar concentration c, mass density ρ, momentum density ρv or
thermal energy density ρc p T ). The amount of X that flows through a unit area perpen-
dicular to v every second, i.e. the convective flux, is then equal to J^conv_X = X v. Explicitly,
the convective concentration flux, convective mass flux, convective momentum flux
and convective heat flux are given by
J^conv_{c,i} = c_i v    (3.17)
J^conv_m = ρ v    (3.18)
J^conv_{ρv} = ρ v v    (3.19)
q^conv = ρ c_p T v    (3.20)
Here we have used the symbol q for heat flux. Note that almost all these fluxes may be
represented by vectors with elements (J x , J y , J z ), i.e. they have a magnitude and a di-
rection. There is one exception: the convective momentum flux is a tensorial quantity
of second order, i.e. it may be represented by a matrix with elements J xx = ρv x2 , J x y =
ρv x v y , J xz = ρv x v z on the first row, and similarly for the second and third row. In gen-
eral, an element J αβ = ρv α v β represents the convective flux of α-momentum trans-
ported in the β-direction.6

Convective heat flux

If the fluid is moving past a solid surface, and the fluid has a temperature T f differ-
ent from the solid surface temperature T s , a thermal boundary layer may develop. In
this thermal boundary layer a temperature profile exists due to the energy exchange
resulting from the temperature difference. The heat flux is in that case given by
q conv = h(T s − T f ), (3.21)
where h is defined as the heat transfer coefficient (in W/(m2 .K)).

Convective mass flux

Similarly, if the fluid is moving past another phase (an immiscible fluid or a porous
solid), and the fluid has a certain species concentration different from the concentra-
tion in the other phase, a mass-transfer boundary layer may develop. In this mass-
transfer boundary layer a concentration profile exists due to the mass exchange result-
ing from the concentration difference. The concentration flux (mass transfer) is in that
case given by
\[ J^{\mathrm{conv}}_{c,i} = K\, \Delta c_i, \qquad (3.22) \]
where Δci is the concentration difference (mol/m3 ) and K is defined as the mass trans-
fer coefficient (in units of m/s).
6
Since the convective momentum flux is a symmetric tensor, we directly find that the convective
flux of α-momentum transported in the β-direction is equal to the convective flux of β-momentum in
the α-direction.

Propagation of sound
Sound waves are longitudinal waves of momentum, whose distance from a momen-
tum source grows linearly in time. In other words, sound waves are pressure waves
which move with a certain velocity, the speed of sound c s , through the system. The
speed of sound is directly related to the compressibility of a material:
\[ c_s^2 = \frac{\partial P}{\partial \rho} \qquad (3.23) \]
The speed of sound in water is c s = 1.48 × 103 m/s, while the speed of sound in air is
c s = 3.43 × 102 m/s.
Sound waves behave very differently from the diffusive transport of momentum we
encountered before,7 for two reasons. First, sound waves cannot transport net momentum;
the longitudinal waves correspond to mere oscillations in local momentum.
Second, the sound waves travel as \(c_s t\) from the source of momentum, while viscous
effects travel as \(\sqrt{\nu t}\). This means that in most practical situations sound waves are felt
long before the viscous effects are felt. Can you estimate at what distance from a momentum
source in water the sound waves and viscous effects are felt simultaneously?
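One way to set up this estimate (a rough sketch using the water values quoted above): the two distances coincide when \(c_s t^* = \sqrt{\nu t^*}\), i.e. at \(t^* = \nu/c_s^2\), corresponding to a crossover distance
\[ x^* = c_s t^* = \frac{\nu}{c_s} \approx \frac{1.0\times10^{-6}\ \mathrm{m^2/s}}{1.48\times10^{3}\ \mathrm{m/s}} \approx 7\times10^{-10}\ \mathrm{m}, \]
i.e. well below a nanometer, so on any practical scale the sound wave indeed arrives long before the viscous signal.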

3.3 Dimensionless numbers


We mentioned before that it is your responsibility as a simulator to assess the relevance
of various physical phenomena and make the right kind of approximations for your
model. Making this assessment is greatly facilitated by estimating various dimension-
less numbers. These dimensionless numbers are usually ratios of two types of forces or
two types of transport properties. If a dimensionless number is much smaller or much
larger than unity, one of the two physical phenomena may possibly be neglected.
Now, given the large number of physical phenomena, the number of dimensionless
numbers is very large. To create some structure in this zoo of dimensionless numbers,
we present in Table 3.1 the dimensionless numbers associated with ratios of common
physical forces, and in Table 3.2 the dimensionless numbers associated with ratios of
common transport properties.
In the following we give an (alphabetical) list of the most common dimensionless
numbers encountered in systems that can be treated in particle-based simulations.
The meaning of the symbols is the following:

• L is a characteristic size of a particle (e.g. the diameter of a sphere) [m].

• V is a characteristic velocity (of a particle or fluid, depending on context) [m/s].

• D is the self-diffusion coefficient of a particle or the molecular diffusion coeffi-


cient in the fluid, depending on context [m2 /s].
7
Diffusive transport of momentum, i.e. viscous diffusion, is associated with transversal waves of
momentum.

                     inertia     viscosity       surface tension   gravity
  inertia               -          1/Re               1/We            1/Fr
  viscosity             Re          -                 1/Ca            Ar, Gr
  surface tension       We          Ca                 -              Eo
  gravity               Fr         1/Ar, 1/Gr         1/Eo             -

Table 3.1: Dimensionless ratios of physical forces (force mentioned in top row divided
by force mentioned in left column).

                        mass         momentum    thermal      mass         heat
                        diffusion    diffusion   diffusion    convection   convection
  mass diffusion           -           Sc          Le          Pe, Sh         -
  momentum diffusion     1/Sc           -          1/Pr          -            -
  thermal diffusion      1/Le          Pr           -           Pe_f         Nu
  mass convection        1/Pe, 1/Sh     -          1/Pe_f        -            -
  heat convection          -            -          1/Nu          -            -

Table 3.2: Dimensionless ratios of transport properties (transport property mentioned
in top row divided by transport property mentioned in left column).

• ρ p is the mass density of the particle [kg/m3 ].

• ρ f is the mass density of the fluid [kg/m3 ].

• Δρ is the mass density difference between two fluids [kg/m3 ].

• γ is the surface tension between two fluids [kg/s2 ].

• μ is the dynamic viscosity of the fluid [kg/(m.s)].

• ν is the kinematic viscosity of the fluid [m2 /s].

• β is the thermal expansion coefficient of the fluid [1/K].

• k is the thermal conductivity coefficient of the fluid [W/(m.K)].

• α is the thermal diffusivity of the fluid [m2 /s].

• h is the heat transfer coefficient between fluid and solid (particle) [W/(m2 .K)].

• K is the mass transfer coefficient between the fluid and another phase [m/s].

• λ f r ee is the mean free path of a fluid molecule [m].

• g is gravitational acceleration (or acceleration due to a similar external force)


[m/s2 ].

Archimedes number Ar

The Archimedes number is the ratio of gravitational forces to viscous forces on a parti-
cle, taking into account the buoyancy:
\[ \mathrm{Ar} = \frac{g L^3 \rho_f (\rho_p - \rho_f)}{\mu^2}. \qquad (3.24) \]

When Ar ≫ 1 natural convection dominates, i.e. less dense particles rise and denser
particles sink in the fluid. When Ar ≪ 1 forced convection dominates, i.e. the particle
tends to follow the flow of the fluid.

Capillary number Ca

The capillary number represents the relative effect of viscous forces versus surface ten-
sion forces acting across an interface between a liquid and a gas or between two im-
miscible liquids:
\[ \mathrm{Ca} = \frac{\mu V}{\gamma}. \qquad (3.25) \]

For low Ca (typically Ca < 10−5 ), flow in porous media is dominated by capillary forces.

Eötvös number Eo

The Eötvös number represents the relative effect of gravity forces (taking into account
the buoyancy) to surface tension forces acting on a bubble or droplet moving in a fluid:
\[ \mathrm{Eo} = \frac{\Delta\rho\, g L^2}{\gamma}. \qquad (3.26) \]

When Eo < 1, the surface tension dominates and the bubble or droplet may be ap-
proximated by a sphere. When Eo ≫ 1, the bubble or droplet is relatively unaffected
by surface tension effects, generally leading to non-spherical shapes when they rise or
sink through the fluid.

Froude number Fr

The Froude number is used in a number of ways. In general it represents the ratio of a
characteristic velocity V to a gravitational wave velocity c,
\[ \mathrm{Fr} = \frac{V}{c}. \qquad (3.27) \]
The gravitational wave velocity depends on the application. For example, for gravity
induced surface waves in shallow fluids (such as tidal waves) of uniform depth d we
have \(\mathrm{Fr} = V/\sqrt{g d}\). The Froude number may also be viewed as a ratio of a particle's
inertia to gravitational forces.
In rotating equipment rotating with an angular velocity Ω, another (local) Froude
number is defined, which signifies the relative importance of centrifugal force to grav-
itational force. For a fluid at distance R from the axis of rotation, we have Fr = Ω2 R/g .
When Fr < 1 the flow is subcritical, meaning no shock waves are generated. When
Fr > 1 shock waves emerge, similar to the shock waves encountered in supersonic flow
(see Mach number).

Grashof number Gr

The Grashof number is the ratio of buoyancy to viscous force acting on a particle or
droplet due to a heat transfer, i.e. a difference between temperature T p of the particle
and the temperature T f of the (bulk) fluid,
\[ \mathrm{Gr} = \frac{g \beta (T_p - T_f) L^3}{\nu^2}. \qquad (3.28) \]
The Grashof number is the equivalent of the Archimedes number for the case where the
density difference is caused by heat transfer, through the thermal expansion coefficient β of
the fluid.

Knudsen number Kn

The Knudsen number is the ratio of the mean free path λ f r ee of a molecule in a fluid
and a characteristic length scale L (such as the size of an embedded particle),
\[ \mathrm{Kn} = \frac{\lambda_{\mathrm{free}}}{L}. \qquad (3.29) \]
The mean free path is the average distance a molecule travels between collisions with
other molecules. For most liquids this distance is relatively small, less than the typical
size of the molecule, but in gases the mean free path can be considerably larger. The
Knudsen number is a measure for the applicability of continuum scale approaches
for solving fluid flow. If Kn ≪ 1 (in practice < 0.01), the mean free path is sufficiently
much smaller than the size of the object for a continuum approach to hold. However, if
Kn ≥ 1 the discrete molecular nature of the fluid becomes important, and a continuum
approach is no longer appropriate.

Lewis number Le

The Lewis number is the ratio of molecular thermal diffusivity to molecular mass dif-
fusivity,
\[ \mathrm{Le} = \frac{\alpha}{D}. \qquad (3.30) \]

The Lewis number is relevant in fluid flows where simultaneous heat and mass transfer
is taking place. In convective flows, the Lewis number controls the relative thickness
of the thermal and mass-transfer boundary layers.

Mach number Ma

The Mach number is the ratio between the velocity of an object moving through a fluid,
and the speed of sound in that fluid,
\[ \mathrm{Ma} = \frac{V}{c_s}. \qquad (3.31) \]
When Ma > 1 shock waves are generated in the fluid, but density waves are also ob-
served for Ma < 1. Generally the effects of finite compressibility scale as Ma2 , meaning
that in many cases a Mach number of 0.1 or lower is sufficient to approach “incom-
pressible” behaviour of a fluid.

Nusselt number Nu

The Nusselt number is the ratio of convective to conductive heat transfer normal to a
boundary. The conductive component is measured under the same conditions as the
heat convection, but for a fictitious stagnant fluid:
\[ \mathrm{Nu} = \frac{h L}{k}. \qquad (3.32) \]
A Nusselt number close to unity, where convective and conductive heat flows are of
similar magnitude, is characteristic of laminar flow. A larger Nusselt number corresponds
to more active convection, with turbulent flow typically in the 100-1000 range.

Péclet number Pe

The Péclet number is the ratio of mass convection by flow to mass diffusion of a particle
of size L,
\[ \mathrm{Pe} = \frac{L V}{D}. \qquad (3.33) \]
For Pe ≪ 1 the dynamics of the particle is dominated by thermal fluctuations, i.e. Brownian
motion. This does not necessarily mean that the fluid flow can be neglected.
At larger time scales, the convection (which is linear in time) may still dominate the
diffusion (which scales as \(\sqrt{t}\)). Conversely, for Pe ≫ 1 the dynamics of the particle is
dominated by the fluid flow, and thermal fluctuations can be ignored. Note that for
mass transfer of a molecular fluid, we have Pe = Re · Sc.
In pure fluid flow problems, the Péclet number is sometimes defined differently, as
the ratio of convective mass transfer rate to thermal diffusion rate,
\[ \mathrm{Pe}_f = \frac{L V}{\alpha}, \qquad (3.34) \]
in which case Pe f = Re · Pr.

Prandtl number Pr

The Prandtl number is the ratio of momentum diffusion to thermal diffusion in a fluid,
\[ \mathrm{Pr} = \frac{\nu}{\alpha} = \frac{c_P \mu}{k}. \qquad (3.35) \]

Note that the Prandtl number is a property of the fluid and the fluid state, but does not
depend on a length scale. Generally, in convective flows the Prandtl number controls
the relative thickness of the momentum and thermal boundary layers. The Prandtl
number of water is around 7, whereas the Prandtl number of air and many other gases
is 0.7 to 0.8. The Prandtl number of liquid mercury is 0.015, indicating that in a liquid
metal thermal diffusivity is dominant over momentum diffusion.

Reynolds number Re

The Reynolds number is the ratio of inertial to viscous forces. For a fluid moving with
velocity V around a particle of size L, the particle Reynolds number is
\[ \mathrm{Re} = \frac{L V}{\nu} = \frac{\rho_f L V}{\mu}. \qquad (3.36) \]

For a fluid flowing between two plates, through a pipe, or a similar confined geometry
of dimension L, the Reynolds number is also given by the above equation. For Re ≪ 1
inertial effects can be neglected, meaning that the non-linear term in the Navier-Stokes
equation may be neglected. In this so-called Stokes flow regime, the hydrodynamic
equations are linear, greatly facilitating theoretical solutions of the fluid flow. For Re > 1
inertial effects become increasingly important. For not too high Reynolds numbers
(typically up to the order of 100 for flow around a particle and the order of 1000 for flow
through a confined geometry) the fluid flow is still laminar, characterised by smooth,
constant fluid motion. For even higher Reynolds numbers, turbulent flow occurs, char-
acterised by chaotic eddies, vortices and other flow instabilities.

Schmidt number Sc

The Schmidt number is the ratio of momentum diffusion to molecular mass diffusion,
\[ \mathrm{Sc} = \frac{\nu}{D} = \frac{\mu}{\rho_f D}. \qquad (3.37) \]

For a fluid flowing past a surface (possibly of a particle), the Schmidt number controls
the relative thickness of the hydrodynamic momentum and mass-transfer boundary
layers. The Schmidt number of water is approximately 1000, whereas the Schmidt
number of air and many other gases is 1.

Sherwood number Sh

The Sherwood number is the equivalent of the Nusselt number for mass transfer prob-
lems. It is defined as the ratio of convective mass flux to diffusive mass flux (similar to
the Peclet number),
\[ \mathrm{Sh} = \frac{K L}{D}, \qquad (3.38) \]
where K is the mass transfer coefficient (in m/s).

Weber number We

The Weber number measures the relative importance of inertia to surface tension be-
tween two fluids,
\[ \mathrm{We} = \frac{\rho_f V^2 L}{\gamma}, \qquad (3.39) \]

where L is a characteristic size. For example, for two colliding droplets in air, L is
the (smallest) droplet diameter. When We < 1 the surface tension dominates and the
droplets either coalesce or bounce off each other. When We ≫ 1 the inertia of the
droplets dominates, resulting in a violent collision with large deformation of the droplets
and possible formation of new satellite droplets.
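As a quick illustration of how such numbers guide the choice of simulation method, the following Python sketch evaluates a few of the ratios defined above for an assumed test case (a 1 μm particle moving at 1 μm/s through water); all parameter values are illustrative, not taken from the text.

    # Rough order-of-magnitude check for a small colloidal particle in water.
    L     = 1.0e-6      # particle diameter [m]
    V     = 1.0e-6      # characteristic velocity [m/s]
    rho_f = 1.0e3       # water mass density [kg/m^3]
    rho_p = 2.0e3       # particle mass density [kg/m^3]
    mu    = 1.0e-3      # dynamic viscosity of water [kg/(m s)]
    D     = 4.4e-13     # Stokes-Einstein diffusion coefficient of the particle [m^2/s]
    g     = 9.81        # gravitational acceleration [m/s^2]

    Re = rho_f * L * V / mu                          # Eq. (3.36)
    Pe = L * V / D                                   # Eq. (3.33)
    Ar = g * L**3 * rho_f * (rho_p - rho_f) / mu**2  # Eq. (3.24)
    print(f"Re = {Re:.1e}, Pe = {Pe:.1e}, Ar = {Ar:.1e}")
    # Re and Ar come out far below 1 (inertia and buoyancy-driven convection are
    # negligible), while Pe is of order one (Brownian motion and convection compete):
    # a mesoscopic description with thermal fluctuations is the natural choice.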

3.4 Microscopic, mesoscopic and macroscopic simulations


We have seen that most particle-based simulation programs share the same features.
Still, particle-based simulations can be performed on many different scales with dif-
ferent levels of detail. There are three main types of simulations: microscopic, meso-
scopic and macroscopic simulations. Below we explain how to distinguish between
these different types of simulations, summarised in Table 3.3. In the following three
chapters we will deal with each of these types in detail.

Microscopic simulations
Microscopic simulations are simulations in which all details of the molecular interac-
tions are included. This is the field of molecular dynamics. An important characteristic
is that all interactions are conservative, i.e. in terms of potential energies that depend
on atom-atom distances, angles, dihedral angles, etcetera. On this molecular scale
there is no dissipation of energy, and the motion of the molecules is dominated by
thermal fluctuations. Molecular dynamics simulations typically deal with the motion
of the order of 10^5 to 10^6 atoms, each of which needs to be updated with time steps of
the order of a few femtoseconds (1 fs = 10−15 s), resulting in system sizes of up to tens
of nanometers (1 nm = 10−9 m) and total simulation times of up to 100 nanoseconds.

                 dissipative   thermal    particle             time                 gravity   surface
                 forces        flucts.    size                 step                           tension
  microscopic        -            +       ≈ 10^-10 m           ≈ 10^-15 s              -         +
  mesoscopic         +            +       10^-9 - 10^-5 m      10^-13 - 10^-6 s        ±         +
  macroscopic        +            -       ≥ 10^-4 m            ≥ 10^-6 s               +         ±

Table 3.3: Distinction between three main types of particle-based simulation methods.

Microscopic systems are too small to feel the effects of gravity. Often diffusive pro-
cesses (self-diffusion, viscosity, thermal diffusion) and surface properties (surface ten-
sion) are of interest. In other words, the Archimedes, Eötvös, Grashof, Peclet, Reynolds
and Weber numbers are all very small (≪ 1).

Mesoscopic simulations
Mesoscopic simulations are simulations in which the molecular interactions have been
lumped into effective interactions between larger assemblies of molecules: they have
been coarse-grained. Various simulation methods have been developed in the last
decades, such as dissipative particle dynamics, Langevin dynamics, Brownian dynam-
ics, and multiparticle collision dynamics. The effective interactions do not only consist
of a conservative part, but also of a dissipative and stochastic part corresponding to
thermal fluctuations. So an important characteristic of mesoscopic simulations is that
the interactions are dissipative and that thermal fluctuations remain important. Sys-
tems that fall in this category are also called soft matter systems, because small forces
are sufficient to change the state of the system. Examples include polymeric and col-
loidal solutions, oil-and-water emulsions, blood, etcetera. In these lectures we will
mainly focus on colloidal solutions, which are suspensions of particles with a diameter
between 10 nanometers and 10 micrometers.
Similar to molecular dynamics simulations, mesoscopic simulations typically deal
with 10^5 to 10^6 particles, although some methods can deal efficiently with up to 100
times more particles. Because the particles are coarse-grained, larger time steps can
be made and larger length scales can be reached than in the case of microscopic sim-
ulations. The precise gain depends on the system because, generally, softer interac-
tions and higher frictions allow for larger time steps. System sizes range from tens of
nanometers to approximately 100 micrometers, and total simulation times range from
100 nanoseconds to several seconds, depending on the system.
Large mesoscopic systems can already feel the effects of gravity. Both diffusive
and convective processes can be important, but convection is usually not dominat-
ing strongly. Surface tension is usually still dominating. In other words, the Eötvös
and Weber numbers remain very small (≪ 1), but the Archimedes, Grashof, Peclet and
Reynolds numbers range from very small to about 10.

Macroscopic simulations
Macroscopic particle-based simulations are simulations of large particles that can be
seen by the naked eye, i.e. particles of 100 micrometers and larger. An important char-
acteristic is that thermal fluctuations are no longer important, i.e. the self-diffusion
of the particles is negligible. The interaction between the particles, usually referred to
as a contact model, is dissipative. Contrary to the previous two types of simulations,
without further perturbations the particles will come to complete rest.8 Particle-based
simulation methods on this scale include Discrete Particle and Discrete Element meth-
ods.
Similar to molecular dynamics simulations, macroscopic simulations typically deal
with 10^5 to 10^6 particles. The time step depends on the stiffness of the particle inter-
action. Often the stiffness can be artificially lowered without affecting the results, al-
lowing for time steps as large as 10−5 or 10−4 s. The resulting system sizes range from
several millimeters to a meter, and total simulation times range from seconds to min-
utes.
For these large particles, the effects of gravity are often dominant. In typical engi-
neering applications, convective processes are of interest and dominate over diffusive
processes. Surface tension may or may not be important, depending on the system. In
other words, the Eötvös and Weber numbers can range from small (<1) to large (>1),
and the Archimedes, Grashof, Peclet and Reynolds numbers are often large (> 1).

8
Perturbations can be provided by flow of the fluid around the particles, as in a fluidized bed.
CHAPTER 4
THE MICROSCOPIC WORLD

4.1 Chapter objectives


Through the course of this chapter, you will accomplish the following:

• You will learn about force fields in molecular dynamics simulations.

• You will learn how to control the temperature and/or pressure in a particle-based
simulation.

• You will learn how to measure structural properties such as the radial distribu-
tion function in a particle-based simulation.

• You will learn how to measure dynamic properties such as the self-diffusion co-
efficient and the viscosity in a particle-based simulation.

• You will learn about limitations of molecular dynamics simulations

4.2 A short introduction to molecular dynamics simulations
Molecular dynamics simulations enable us to calculate structural and thermodynamic
properties, as well as dynamical properties, of realistic molecular systems, provided
they exclude chemical reactions and other phenomena of a quantum mechanical na-


Figure 4.1: The traces of 108


hard-sphere particles with pe-
riodic boundary conditions for
about 3000 collisions in one
of the first molecular dynamics
simulations by Alder and Wain-
wright [2].

ture [4].1 The first molecular dynamics simulations were performed by Alder and Wain-
wright in the 1950’s [2]. To minimise finite system size effects, they applied periodic
boundary conditions, as we discussed in section 2.5. Figure 4.1 shows their result
for the trajectories of 108 particles interacting through hard-sphere interactions. Of
course real molecules do not interact as hard-spheres. Still, the hard-sphere fluid is a
useful concept because it is the simplest representation of repulsive interactions be-
tween spherical molecules, with the added advantage that analytical solutions can be
obtained for various properties of hard sphere fluids.
For the properties of real molecular fluids, we need more accurate interaction po-
tentials, collectively referred to as force fields. The force fields may be obtained from
quantum-chemical calculations, but more often the force field parameters are tuned
to achieve maximum agreement with experimental thermodynamic data such as the
state points (temperature and pressure) where phase transitions take place. A whole
industry has emerged that is focusing on developing increasingly accurate force fields.
Many open source codes exist that allow us to simulate 10^5 to 10^6 atoms efficiently,
such as GROMACS, LAMMPS, NAMD, ESPRESSO, AMBER, DL_POLY, CHARMM, etcetera.
In the following section we will give an example of the popular CHARMM force field.
Then we will describe how to simulate different thermodynamic ensembles and mea-
sure structural and dynamical properties of the system.

4.3 Molecular force fields


In most molecular force fields, the total potential energy Φ is divided up into terms de-
pending on the coordinates of individual particles, pair, triplets, etc. Bonded interac-
tions within molecules typically consist of a harmonic potential between two bonded
atoms, a harmonic bending potential for three consecutively bonded atoms, and a di-
hedral (torsional) potential for four consecutively bonded atoms. Non-bonded inter-
actions between atoms are usually described by a Lennard-Jones potential, Eq. (2.7),
1
There is another important technique, called Monte Carlo (MC) simulation, which enables us to
efficiently sample the phase space in a specified ensemble. This also yields structural and thermody-
namic properties, but no dynamical properties. We will not treat the Monte Carlo technique here.

Figure 4.2: In molecular


dynamics simulations the
interaction potential is usu-
ally approximated as a sum
of intramolecular bonded
pair interactions, valence
angle interactions involv-
ing three consecutively
bonded atoms, dihedral
(torsional) interactions
between four consecu-
tively bonded atoms, and
non-bonded interactions
(both intramolecular and
intermolecular).

supplemented with Coulombic terms if the atoms are charged. The non-bonded in-
teractions can be further subdivided into intermolecular interactions, i.e. interactions
between atoms on different molecules, and intramolecular non-bonded interactions,
usually between atoms further than two bonds away on the same molecule. Figure 4.2
shows an example of all these forces in the interaction between two molecules. In the
following subsections we will describe each of the terms in the CHARMM force field:2
\[ \Phi = \sum_{\mathrm{bonds}} k_b (b - b_0)^2 + \sum_{\mathrm{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\mathrm{dihedrals}} k_\phi \left[ 1 + \cos(n\phi - \delta) \right] \]
\[ \quad + \sum_{\mathrm{non\text{-}bonded}} 4\epsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12} - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right] + \sum_{\mathrm{charged\ pairs}} \frac{1}{4\pi\epsilon_0} \frac{q_i q_j}{r_{ij}} \qquad (4.1) \]

Bond stretch interactions


The interaction between two covalently bonded atoms may be approximated as har-
monic:

ϕb (b) = k b (b − b 0 )2 , (4.2)

where b − b0 is the distance from equilibrium that the current pair of bonded atoms
has moved, and k b is the spring stiffness associated with that particular bond.3
2
We have excluded from this description the improper dihedrals which enforce out-of-plane bend-
ing, and Urey-Bradley interactions which are cross-terms accounting for angle bending using 1-3 non-
bonded interactions.
3
Note that, confusingly, in the CHARMM force field all harmonic interactions are denoted k(x − x0 )2
instead of the more usual (1/2) k(x − x0)^2. So k in this force field is actually half the spring stiffness.

Valence angle interactions


The stiffness of angles within a molecule is achieved by including harmonic angle in-
teractions:

ϕθ (θ) = k θ (θ − θ0 )2 , (4.3)

where θ − θ0 is the angle from equilibrium between 3 bonded atoms, and k θ is the
angular stiffness associated with that angle.

Dihedral (torsional) interactions


Torsional stiffness of a molecule is achieved by including dihedral angle interactions:
 
ϕφ (φ) = k φ 1 + cos(nφ − δ) , (4.4)

where φ is the dihedral angle between 4 bonded atoms (the angle between the plane
of atoms 1,2,3 and the plane of atoms 2,3,4), n is the multiplicity of the dihedral, and δ
is the phase shift (to control the location of the equilibrium dihedrals).

Non-covalently bonded interactions


Non-covalently bonded atoms usually attract each other through Van der Waals inter-
actions (caused by induced dipole-interactions) that scale as r −6 . When non-bonded
atoms come too close together they will repel each other because of the Pauli exclusion
principle. A convenient way to summarize these two effects is through the Lennard-
Jones potential, as introduced in section 2.3, which we repeat here for completeness.
For two particles i and j a distance r_ij apart:
\[ \varphi^{\mathrm{LJ}}(r_{ij}) = 4\epsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12} - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right]. \qquad (4.5) \]

The parameter i j is the depth of the interaction well, and σi j is the effective diameter
for the pair interaction, both of which depend on the atom type, and even chemical
environment, of both i and j. Often the following simplified “mixing rule” is used,
\[ \sigma_{ij} = \frac{1}{2}\left( \sigma_i + \sigma_j \right), \qquad (4.6) \]
\[ \epsilon_{ij} = \sqrt{\epsilon_i \epsilon_j}, \qquad (4.7) \]
where σ_i is the effective diameter and ε_i the effective dispersion energy of atom i, and
similarly for atom j.
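As a small sketch (not taken from any particular force-field code), the Lennard-Jones pair energy and force with the mixing rules of Eqs. (4.6)-(4.7) could be evaluated in Python as follows; the function name and the example parameter values are illustrative.

    import math

    def lj_pair(r, sigma_i, sigma_j, eps_i, eps_j):
        """Lennard-Jones energy and scalar force -dphi/dr for one pair,
        using the mixing rules of Eqs. (4.6)-(4.7)."""
        sigma = 0.5 * (sigma_i + sigma_j)                 # Eq. (4.6)
        eps = math.sqrt(eps_i * eps_j)                    # Eq. (4.7)
        sr6 = (sigma / r)**6
        energy = 4.0 * eps * (sr6**2 - sr6)               # Eq. (4.5)
        force = 24.0 * eps * (2.0 * sr6**2 - sr6) / r     # -dphi/dr, positive = repulsive
        return energy, force

    # Two identical atoms at the potential minimum r = 2^(1/6) sigma:
    e, f = lj_pair(2**(1 / 6) * 0.34, 0.34, 0.34, 1.0, 1.0)
    print(e, f)   # energy is approximately -eps, force is approximately zero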

Charge interactions
When dealing with ions, we clearly need to take into account the Coulombic charge
interactions. For two ions with charge q_i and q_j we have
\[ \varphi^{C}(r_{ij}) = \frac{1}{4\pi\epsilon_0} \frac{q_i q_j}{r_{ij}}, \qquad (4.8) \]
where ε_0 = 8.854 × 10^-12 F/m is the permittivity of vacuum.
Besides ions, charge interactions are often also important in molecular interac-
tions. Although the distribution of negative electron charges largely cancels out the
positive charge of the atomic nuclei, the cancellation is often not perfect in molecules,
and a small net charge remains.

4.4 Controlling temperature and pressure: thermostats and barostats
The force field leads to a force on each of the atoms. This force may be used to update
the velocities and positions of the atoms to the next time step. Proceeding in this fash-
ion, molecular dynamics simulations are in principle energy conserving: the sum of
kinetic and potential energy is conserved. Experimentally, it is very difficult to control
the energy of a system. In most realistic cases we control the temperature. Similarly, we
may wish to control the pressure of a molecular system instead of the system volume.
In this section we will describe how this may be achieved in a molecular dynamics
simulation. For this, we will need a short recap of statistical thermodynamics.

Short recap of statistical thermodynamics


Consider a system composed of N particles in a volume V . According to quantum me-
chanics this system may find itself in one of a countable number of states each with its
own energy E n , where n is the index of the particular state. Now suppose that we fix the
energy of the system to the value U . Then there are Ω (U ) states for which the energy is
equal to U , where Ω is the degeneracy of energy U . Although each of these states has
the same energy, there may be other observables A of interest having different values
A n in different states. According to statistical physics a macroscopic measurement of
the observable A will yield the ensemble average4
\[ \langle A \rangle = \sum_n P_n A_n \qquad (4.9) \]

where P_n is the probability to observe state n:
\[ P_n = \begin{cases} \dfrac{1}{\Omega(U)} & E_n = U \\[4pt] 0 & E_n \neq U \end{cases} \qquad (4.10) \]
4
We will consistently denote the ensemble average of an observable A as 〈A〉 and the time-average
of A as 〈A〉T . The ergodicity hypothesis states that for equilibrium systems 〈A〉 = 〈A〉T [45].

So, it is assumed that each of the states with energy U has equal probability 1/Ω(U ).
In many cases, instead of fixing the energy we fix the temperature T . The way to
do this is to put the system in contact with a huge second system with which it may
exchange energy, and which is called a thermostat. By isolating the totality of system
plus thermostat we may apply the rules just stated and calculate the average of any
observable, in particular of any observable defined in terms of the particles composing
the system and independent of the particles composing the thermostat. By doing so,
we again find Eq. (4.9), but now with
\[ P_n = \frac{1}{Q} \exp\left\{ -\beta E_n \right\} \qquad (4.11) \]
\[ Q = \sum_n \exp\left\{ -\beta E_n \right\} \qquad (4.12) \]
\[ \beta = \frac{1}{k_B T} \qquad (4.13) \]
where k_B is Boltzmann's constant and Q is referred to as the partition function. Notice
that in this case the energy is not fixed, but that it is assumed that a macroscopic
measurement yields the average energy \(U = \sum_n P_n E_n\).
One way to envision a macroscopic measurement is to imagine being given a huge
number of similar systems in different states distributed according to the probabili-
ties given in Eq. (4.10) or (4.11). Such collections of systems are traditionally called
ensembles. Eq. (4.10) represents the so-called micro-canonical or (N ,V,U ) ensemble
and Eq. (4.11) the canonical or (N ,V, T ) ensemble. From the above it is clear that in the
(N, V, T) ensemble the energy is a fluctuating quantity. The width of the distribution of
energies is conveniently measured by \(\sigma_E = \sqrt{\langle E^2 \rangle - \langle E \rangle^2}\). From statistical physics we
know that
\[ \frac{\sigma_E}{\langle E \rangle} \sim \frac{1}{\sqrt{N}} \qquad (4.14) \]
Evidently, for large enough systems all relative fluctuations are insignificant and all
ensembles yield the same results. This is called the thermodynamic equivalence for
large systems.
It is a known fact that for large systems the degeneracy Ω (in a microcanonical system)
scales as follows:
\[ \Omega(N, V, U) = \left[ \omega\!\left( \frac{V}{N}, \frac{U}{N} \right) \right]^{N} O(N). \qquad (4.15) \]
Here O(N) represents a factor which grows with N at most as fast as N itself. There-
fore it is convenient to define

S = k B ln Ω (4.16)

which, for large enough systems, is proportional to the system size. According to sta-
tistical physics it has all the properties of the entropy. Since S is given in terms of its

Figure 4.3: The classical probability P(Γ) of encountering a system near a certain point Γ in
phase space decreases exponentially with the associated Hamiltonian (energy) H(Γ) (green).
High values of H are sampled with higher probability at higher temperature (red).

characteristic variables, N ,V and U , all thermodynamic information can be extracted


from it. Applying this equation to a system in a thermostat, we find

\[ A = -k_B T \ln Q \qquad (4.17) \]

where A is the free energy.


Although for convenience the above discussion has been given in terms of discrete
states like they occur in quantum mechanical treatments of confined systems, it is
equally possible to use the language of classical mechanics. Taking the classical limit
(i.e. excluding very low temperatures) turns the degeneracy Ω into a density-of-states
\[ \Omega = \frac{1}{h^{3N} N!} \int d\Gamma\, \delta\left( H(\Gamma) - U \right) \qquad (4.18) \]
 
where h is Planck's constant, Γ = (r^{3N}, p^{3N}) is a point in phase space and H is the
Hamiltonian (potential plus kinetic energy) of the system. The above equation indicates
that, in the classical limit, each point in phase space on a hypersurface of energy
U is equally likely to occur. The canonical partition function Q becomes
\[ Q = \frac{1}{h^{3N} N!} \int d\Gamma\, \exp\left\{ -\beta H(\Gamma) \right\}, \qquad (4.19) \]
where the classical probability of observing the system near a certain point (Γ) in phase
space is given by the Boltzmann distribution (Figure 4.3):
\[ P(\Gamma)\, d\Gamma \sim \exp\left\{ -\beta H(\Gamma) \right\} d\Gamma. \qquad (4.20) \]

Thermostats
In all relevant examples that we will encounter, the Hamiltonian splits into a poten-
tial energy that only depends on the particle positions, and a kinetic energy that only
depends on the particle velocities, i.e.


N 1
H(Γ) = Φ(r1 , . . . , rN ) + m i v i2 (4.21)
i =1 2

If we combine this with Eq. (4.20), we find that the classical probability factorises into
a probability for the positions and a probability for the velocities:

P (r1 , . . . , rN , v1 , . . . , vN ) = P r (r1 , . . . , rN )P v (v1 , . . . , vN ). (4.22)

where the velocity probability density P^v is given by
\[ P^{v}(\mathbf{v}_1, \ldots, \mathbf{v}_N) \sim \exp\left\{ -\beta \sum_{i=1}^{N} \frac{1}{2} m_i v_i^2 \right\} = \exp\left\{ -\sum_{i=1}^{N} \frac{m_i v_i^2}{2 k_B T} \right\}. \qquad (4.23) \]

This is exactly the Maxwell-Boltzmann distribution, which we have used several times
before! The mean square velocity5 is therefore a direct function of the system temper-
ature:
\[ \left\langle v_i^2 \right\rangle = \frac{k_B T}{m_i}. \qquad (4.24) \]

We can invert this relationship to determine the instantaneous temperature of a


system from the particle velocities:
\[ T^{\mathrm{meas}} = \frac{1}{N_{\mathrm{free}} k_B} \sum_i m_i v_i^2, \qquad (4.25) \]

where N f r ee is the number of degrees of freedom. If we use periodic boundary condi-


tions and have removed the initial centre-of-mass velocity of the system, the number
of degrees of freedom in d dimensions is N_free = d(N − 1); otherwise N_free = dN. The
instantaneous temperature T meas of the system is not necessarily equal to the thermo-
dynamic temperature. Only the long time average, or ensemble average, 〈T meas 〉 of a
simulation is equal to the thermodynamic temperature T of the system.
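A minimal sketch of Eq. (4.25) in Python (assuming velocities and masses are stored in NumPy arrays; the function name is illustrative):

    import numpy as np

    def instantaneous_temperature(vel, mass, kB=1.380649e-23, d=3, com_removed=True):
        """Instantaneous temperature from Eq. (4.25).

        vel  : (N, d) array of particle velocities [m/s]
        mass : (N,) array of particle masses [kg]
        """
        N = vel.shape[0]
        # d(N-1) degrees of freedom if the centre-of-mass motion has been removed
        n_free = d * (N - 1) if com_removed else d * N
        sum_mv2 = np.sum(mass * np.sum(vel**2, axis=1))   # sum_i m_i v_i^2
        return sum_mv2 / (n_free * kB)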

Temperature constraining through direct velocity scaling

An obvious way to control the thermodynamic temperature of the system is velocity


scaling. If the temperature of the system at time t is T meas (t ) and the velocities are
multiplied by a factor λ,

vi → λvi , (4.26)

then the associated temperature change can be calculated as
\[ \Delta T = \frac{1}{N_{\mathrm{free}} k_B} \sum_i m_i (\lambda v_i)^2 - \frac{1}{N_{\mathrm{free}} k_B} \sum_i m_i v_i^2 = \left( \lambda^2 - 1 \right) T^{\mathrm{meas}}(t). \qquad (4.27) \]
5
Note that statistical thermodynamics applies to equilibrium states, and that therefore we exclude
average flow fields from our description.

The simplest way to control the temperature is thus to multiply the velocities at each
time step by a factor
\[ \lambda = \sqrt{\frac{T_0}{T^{\mathrm{meas}}}} \qquad \text{(direct scaling)}, \qquad (4.28) \]

where T 0 is the desired temperature (check this for yourself!). This direct velocity scal-
ing leads to a strict constraint on the kinetic energy.

Velocity scaling: the Berendsen thermostat

One problem with direct velocity scaling is that it does not allow fluctuations in the
instantaneous temperature T meas which should be present in the canonical ensemble.
A weaker formulation of the velocity scaling approach is the Berendsen thermostat.
To maintain the temperature, the system is coupled to an external heat bath of fixed
temperature T0 . The velocities are scaled at each step, such that the rate of change of
temperature is proportional to the difference in temperature:

\[ \frac{dT(t)}{dt} = \frac{1}{\tau} \left( T_0 - T(t) \right), \qquad (4.29) \]
where τ is the coupling parameter which determines how tightly the heat bath and the
system are coupled together. This method gives an exponential decay of the system
towards the desired temperature. The required change in temperature between suc-
cessive time steps Δt is:
\[ \Delta T = \frac{\Delta t}{\tau} \left( T_0 - T(t) \right). \qquad (4.30) \]
Thus, the scaling factor for the velocities is
\[ \lambda^2 = 1 + \frac{\Delta t}{\tau} \left( \frac{T_0}{T(t)} - 1 \right) \qquad \text{(Berendsen)}. \qquad (4.31) \]

The following pseudo-code shows how the velocities could be rescaled using the Berendsen
thermostat (the variable names are illustrative). Note that for efficiency, it would be better
to make this rescaling a direct part of the discretised update of the velocity.

    # Berendsen velocity rescaling, applied once per time step.
    # T0 = target temperature, tau = coupling time, dt = time step,
    # vx, vy, vz = velocity components, m = particle masses.
    sumv2 = 0.0
    for i in range(N):
        sumv2 += m[i] * (vx[i]**2 + vy[i]**2 + vz[i]**2)
    Tmeas = sumv2 / (Nfree * kB)                        # Eq. (4.25)
    lam = sqrt(1.0 + (dt / tau) * (T0 / Tmeas - 1.0))   # Eq. (4.31)
    for i in range(N):
        vx[i] *= lam
        vy[i] *= lam
        vz[i] *= lam
In practice, τ is used as an empirical parameter to adjust the strength of the coupling.


Its value has to be chosen with care. In the limit τ → ∞ the Berendsen thermostat is
inactive and the run is sampling a microcanonical (constant energy) ensemble. The
temperature fluctuations will grow until they reach the appropriate value of a micro-
canonical ensemble. However, they will never reach the appropriate value for a canon-
ical ensemble. On the other hand, too small values of τ will cause unrealistically low
instantaneous temperature fluctuations. If τ is chosen the same as the timestep Δt , the
Berendsen thermostat is nothing else than the direct velocity scaling thermostat. Val-
ues of τ ≈ 0.1 ps are typically used in molecular dynamics simulations of condensed-
phase systems. We should realise, however, that the ensemble generated when using
the Berendsen thermostat is not a canonical ensemble.

Adding an external variable: the Nosé-Hoover thermostat

If it is important to sample a correct canonical ensemble, the best way to control the
temperature in a molecular dynamics simulation is to let the system exchange energy
with an external reservoir represented by a new coordinate with associated mass Q
and velocity. The magnitude of the mass Q determines the coupling between the sys-
tem and the reservoir, and so influences the (instantaneous) temperature fluctuations.
This leads to the so-called Nosé equations of motion, which are rather complex to write
down here. However, Nosé and Hoover showed that the equations may also be ex-
pressed in a more intuitive and simple way as:

\[ \frac{d^2 \mathbf{r}_i}{dt^2} = \frac{\mathbf{F}_i}{m_i} - \gamma(t) \frac{d\mathbf{r}_i}{dt}, \qquad (4.32) \]
\[ \frac{d\gamma}{dt} = \frac{1}{Q} \left( \sum_{i=1}^{N} m_i v_i^2 - N_{\mathrm{free}} k_B T_0 \right). \qquad (4.33) \]

Note that γ acts like time-dependent friction caused by coupling with the reservoir.
This friction is also a dynamical variable, which is updated with a rate controlled by
the fictitious mass Q. Care must be taken in choosing Q. On the one hand, too large
values of Q (loose coupling) may cause a poor temperature control (if Q → ∞ the run is
sampling the microcanonical ensemble). Although any finite (positive) Q is sufficient
to guarantee in principle the generation of a canonical ensemble, if Q is too large, the
canonical distribution will only be obtained after very long simulation times. On the
other hand, too small values (tight coupling) may cause high-frequency temperature
oscillations. The variable γ may oscillate at a very high frequency, it will tend to be

off-resonance with the characteristic frequencies of the real system, and effectively de-
couple from the physical degrees of freedom (slow exchange of kinetic energy). Usually
it requires a few trial-and-error guesses to find a correct fictitious mass Q.
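A minimal sketch of one integration step of Eqs. (4.32)-(4.33), using a simple explicit (Euler-type) discretisation with illustrative names; production codes use more careful operator splittings:

    import numpy as np

    def nose_hoover_step(pos, vel, forces, mass, gamma, Q, T0, kB, dt):
        """One crude explicit step of the Nose-Hoover equations (4.32)-(4.33).

        pos, vel, forces : (N, 3) arrays;  mass : (N,) array;  gamma : float.
        Returns updated pos, vel, gamma.
        """
        n_free = 3 * (pos.shape[0] - 1)          # d(N-1), centre of mass removed
        # Eq. (4.32): acceleration = F/m - gamma * v
        acc = forces / mass[:, None] - gamma * vel
        vel = vel + acc * dt
        pos = pos + vel * dt
        # Eq. (4.33): update the friction variable gamma
        kin2 = np.sum(mass * np.sum(vel**2, axis=1))      # sum_i m_i v_i^2
        gamma = gamma + (dt / Q) * (kin2 - n_free * kB * T0)
        return pos, vel, gamma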

Barostats
Up to this point we have assumed that our system is evolving in a constant volume.
Many processes in our daily lives take place under atmospheric pressure, rather than
constant volume. It is possible to control the pressure in molecular dynamics simula-
tions by dynamically rescaling the box dimensions. To understand how, we first derive
a microscopic expression for the pressure.

Virial expression for the pressure

Consider a system of N atoms that is developing in a finite space. Let us introduce a


function, called the Clausius virial function:
\[ W^{\mathrm{tot}}(r^{3N}) = \sum_{i=1}^{N} \mathbf{r}_i \cdot \mathbf{F}_i^{\mathrm{tot}}, \qquad (4.34) \]

where Fitot is the total force acting on atom i , both due to internal forces with other
atoms and due to external forces with confining walls. Averaging over the molecular
dynamics trajectory, and using Newton's law, we find
\[ \left\langle W^{\mathrm{tot}} \right\rangle = \lim_{T\to\infty} \frac{1}{T} \int_0^T \sum_{i=1}^{N} \mathbf{r}_i(t) \cdot m_i \frac{d^2 \mathbf{r}_i}{dt^2}(t)\, dt \]
\[ \quad = \lim_{T\to\infty} \sum_{i=1}^{N} m_i \frac{\mathbf{r}_i(T)\cdot\frac{d\mathbf{r}_i}{dt}(T) - \mathbf{r}_i(0)\cdot\frac{d\mathbf{r}_i}{dt}(0)}{T} - \lim_{T\to\infty} \frac{1}{T} \int_0^T \sum_{i=1}^{N} m_i \frac{d\mathbf{r}_i}{dt}(t) \cdot \frac{d\mathbf{r}_i}{dt}(t)\, dt \qquad (4.35) \]

where we have integrated by parts. If the system is localized in a finite region and
particles are not accelerated to infinity, then the first term of the second line is zero.
We then recognize that the expectation value of the Clausius virial function is related
to the temperature of the system:
\[ \left\langle W^{\mathrm{tot}} \right\rangle = -\lim_{T\to\infty} \frac{1}{T} \int_0^T \sum_{i=1}^{N} m_i \left| \frac{d\mathbf{r}_i}{dt}(t) \right|^2 dt = -2 \left\langle E_{\mathrm{kin}} \right\rangle = -N_{\mathrm{free}} k_B T \qquad (4.36) \]

Writing the total force on atom i as \(\mathbf{F}_i^{\mathrm{tot}} = \mathbf{F}_i^{\mathrm{int}} + \mathbf{F}_i^{\mathrm{ext}}\), we can write the total virial function
as a sum of internal and external virials:
\[ \left\langle W^{\mathrm{tot}} \right\rangle = \left\langle \sum_i \mathbf{r}_i \cdot \mathbf{F}_i^{\mathrm{int}} \right\rangle + \left\langle W^{\mathrm{ext}} \right\rangle = -N_{\mathrm{free}} k_B T. \qquad (4.37) \]

Now, how do we connect this to the macroscopic pressure? The macroscopic pres-
sure P is the normal force per unit area exerted by walls surrounding our system. For
simplicity, we consider a rectangular container with sides L x , L y and L z , and the coor-
dinate origin on one of its corners. Then the force exerted on the atoms by the right
wall at x = L x is given by F xext = −P L y L z . Similarly, the force exerted on the atoms by
the back wall at y = L y is given by F yext = −P L x L z , and the force exerted on the atoms
by the top wall at z = L z is given by F zext = −P L x L y . The forces exerted by the left,
front and bottom walls are irrelevant for the virial function because atoms influenced
by these walls have x, y and z-positions very close to zero, respectively. Therefore, the
external virial is given by
\[ \left\langle W^{\mathrm{ext}} \right\rangle = L_x\left( -P L_y L_z \right) + L_y\left( -P L_x L_z \right) + L_z\left( -P L_x L_y \right) = -3 P V. \qquad (4.38) \]

Combining this with the total virial equation, we arrive at the virial expression for
the pressure P:
\[ P = \frac{N k_B T}{V} + \frac{1}{3V} \left\langle \sum_{i=1}^{N} \mathbf{r}_i \cdot \mathbf{F}_i^{\mathrm{int}} \right\rangle, \qquad (4.39) \]

where we have approximated the number of degrees of freedom by 3N (generally N ≫ 1).
The first term is the kinetic contribution to the pressure (the pressure of an ideal
gas), and the second term is often referred to as the virial part of the pressure.
The virial expression is very important because it allows us to calculate the pressure
of a system entirely in terms of particle coordinates and internal forces, without
reference to an actual wall. For pairwise interactions:6
\[ \sum_i \mathbf{r}_i \cdot \mathbf{F}_i^{\mathrm{int}} = \sum_i \sum_{j \neq i} \mathbf{r}_i \cdot \mathbf{F}_{ij} = \frac{1}{2} \sum_i \sum_{j \neq i} \left( \mathbf{r}_i \cdot \mathbf{F}_{ij} + \mathbf{r}_j \cdot \mathbf{F}_{ji} \right) = \frac{1}{2} \sum_i \sum_{j \neq i} \mathbf{r}_{ij} \cdot \mathbf{F}_{ij} = \sum_i \sum_{j > i} \mathbf{r}_{ij} \cdot \mathbf{F}_{ij}. \qquad (4.40) \]

In other words, for atoms interacting through a pairwise interaction potential ϕ(r), the
pressure is given by:
\[ P = \frac{N k_B T}{V} - \frac{1}{3V} \left\langle \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left. \frac{d\varphi(r)}{dr} \right|_{r_{ij}} r_{ij} \right\rangle \qquad (4.41) \]

Note that the virial expression also allows us to calculate the pressure in a periodic
system, in which no walls are present at all. In that case V is the volume of one of
the periodic cells, and the pair sum should run over each minimum image pair exactly
once.
6
More generally, for any interaction for which the sum of forces on groups of particles is zero, the
virial pressure is obtained by summing (ri − rr e f ) · Fi over this group, with a suitably chosen reference
position rr e f .
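A minimal sketch of Eq. (4.41) for a periodic Lennard-Jones system (illustrative only: an O(N²) double loop without a cutoff, with positions assumed to lie in a cubic box of side L_box):

    import numpy as np

    def virial_pressure(pos, L_box, T, eps, sigma, kB=1.0):
        """Instantaneous pressure from Eq. (4.41) for a Lennard-Jones fluid.
        pos : (N, 3) array of positions in a cubic periodic box of side L_box."""
        N = pos.shape[0]
        V = L_box**3
        virial = 0.0                                    # accumulates r_ij * dphi/dr
        for i in range(N - 1):
            rij = pos[i + 1:] - pos[i]                  # vectors to all j > i
            rij -= L_box * np.round(rij / L_box)        # minimum image convention
            r = np.linalg.norm(rij, axis=1)
            sr6 = (sigma / r)**6
            # r * dphi/dr for the Lennard-Jones potential
            virial += np.sum(-24.0 * eps * (2.0 * sr6**2 - sr6))
        return N * kB * T / V - virial / (3.0 * V)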

Box size scaling: the Berendsen barostat

Having obtained a microscopic expression for the pressure, we may control it much in
the same way as we control the temperature, in this case by rescaling the box volume
and optionally the atom coordinates, keeping the fractional coordinates (x_i/L_x, y_i/L_y, z_i/L_z)
fixed (preferred for periodic boundary conditions). Similarly to the Berendsen thermostat, we may also weakly
couple the system to an external pressure reservoir as follows:

\[ \frac{dP(t)}{dt} = \frac{1}{\tau_P} \left( P_0 - P(t) \right), \qquad (4.42) \]
where P 0 is the pressure of the external pressure reservoir and τP is the response time of
the barostat. This is achieved by scaling the volume of the box by μ3 and the positions
by μ every time step, where
\[ \mu = \left[ 1 - \frac{\Delta t}{\tau_P}\, \kappa \left( P(t) - P_0 \right) \right]^{1/3} \qquad \text{(Berendsen)}. \qquad (4.43) \]
Here κ = −(1/V)(∂V/∂P) is the compressibility of the system. The following pseudo-code
shows how the positions could be rescaled using the Berendsen barostat (variable names
and helper routines are illustrative), assuming that a thermostat is also used to reach a
temperature T0. Note that for efficiency it would be better to use the neighbourlist and to
make this rescaling a part of the discretised update of the position.

    # Berendsen barostat, applied once per time step.
    # P0 = target pressure, tauP = coupling time, kappa = compressibility,
    # dt = time step, Lx, Ly, Lz = box lengths, x, y, z = particle positions.
    # (minimum_image, dphi_dr and norm are assumed helper routines.)
    vir = 0.0
    for i in range(N - 1):                      # virial part of Eq. (4.41)
        for j in range(i + 1, N):
            rij = minimum_image(r[i] - r[j], Lx, Ly, Lz)
            vir -= dphi_dr(norm(rij)) * norm(rij)
    V = Lx * Ly * Lz
    P = N * kB * T0 / V + vir / (3.0 * V)
    mu = (1.0 - (dt / tauP) * kappa * (P - P0))**(1.0 / 3.0)   # Eq. (4.43)
    Lx *= mu
    Ly *= mu
    Lz *= mu
    for i in range(N):
        x[i] *= mu
        y[i] *= mu
        z[i] *= mu
Often the compressibility κ is not known beforehand, and the relaxation time τP is sim-
ply an empirical parameter to tune the strength of the pressure coupling, controlling the

allowed pressure and volume fluctuations. In such a case, the combination κ/τP (in
units 1/(Pa.s)) can be treated as one single parameter of the Berendsen thermostat. In
the limit κ/τp → 0 the Berendsen barostat is inactive and the run is sampling a micro-
canonical (constant energy) ensemble. The pressure fluctuations will grow until they
reach the appropriate value of a microcanonical ensemble. On the other hand, if τp is
chosen the same as the time step Δt (and a realistic κ is used), the Berendsen barostat
leads to a constant pressure P (t ) = P 0 , without any pressure fluctuations. This is also
not desirable. Optimal values of κ/τ lie in between these two extremes.
If correct fluctuations of the pressure are important (e.g. when one is interested in
thermodynamic properties of the system), a barostat can be implemented, similarly
to the Nosé-Hoover thermostat, by extending the system with additional coordinates.
This so-called Parrinello-Rahman barostat is beyond the scope of these lectures.

4.5 Measuring structural properties


The positions and orientations of atoms are not completely random because the total
potential energy Φ contains terms which depend on the relative positions and orienta-
tions of two or more molecules. From Eqs. (4.20) and (4.21) we learn that the positional
probability density in the canonical ensemble is given by
\[ P^{r}(r^{3N}) = \frac{1}{Z} \exp\left\{ -\frac{\Phi(r^{3N})}{k_B T} \right\}, \qquad (4.44) \]

where Z is a normalisation constant, also known as the configuration integral. In other


words, the interactions between the molecules in a liquid or gas cause correlations
in their positions. During the execution of a molecular dynamics simulation, we can
measure these structural correlations in the system.
Here we will focus on the two best-known structural correlations, namely the radial
distribution function and the static scattering function.

The radial distribution function g (r )


The aim of almost all modern theories of liquids is to calculate the radial distribu-
tion function by means of statistical thermodynamical reasoning. Alternatively, the
radial distribution function can be measured directly in particle-based computer sim-
ulations. We will discuss its use in calculating the energy, compressibility and pressure
of a fluid, with a particular application to a hard sphere fluid.

Definition of the radial distribution function

Imagine we have placed ourselves on a certain molecule in a liquid or gas (Fig. 4.4).
Now let us count the number of molecules in a spherical shell of thickness dr at a
distance r , i.e. we count the number of molecules within a distance between r and

Figure 4.4: A typical radial distribution func-


tion in a liquid of spherical molecules with
diameter σ. The radial distribution function
g (r ) measures the local number density of
particles at a distance r from a given particle
(black circle), relative to the average number
density ρ # = N /V .

r + dr . If r is very large the measured number of molecules will be equal to the volume
of the spherical shell times the number density ρ # = N /V , so equal to 4πr 2 dr N /V .
At distances smaller than the diameter of the molecules we will find no molecules at
all. We now define the radial distribution function g (r ) by equating the number of
molecules in the spherical shell of thickness dr at a distance r to
\[ 4\pi r^2 \frac{N}{V} g(r)\, dr. \qquad (4.45) \]
According to our remarks above, g (∞) = 1 and g (0) = 0. A typical g (r ) is given in
Fig. (4.4). We see that g (r ) = 0 when r is smaller than the molecular diameter σ. The
first peak is caused by the attractive part of the potential; at distances where the poten-
tial has its minimum there are more particles than average. Consequently at distances
less than σ further away there are fewer particles than average.
Measuring the radial distribution function in a molecular dynamics code is accomplished
by initialising a histogram before the run, calling a sampling routine at regular
intervals during the simulation, and finalising the calculation with an output routine. We
divide the range from r = 0 to half the box length into Nbins bins of width dr. In the
pseudo-code below the routine and variable names are illustrative.

    # Initialisation: Nbins empty counters and the bin width dr.
    def gr_init(Nbins, Lbox):
        dr = 0.5 * Lbox / Nbins
        hist = [0] * Nbins
        return hist, dr

    # Sampling: called at regular intervals during the simulation.
    def gr_sample(pos, Lbox, hist, dr):
        N = len(pos)
        for i in range(N - 1):
            for j in range(i + 1, N):
                rij = minimum_image(pos[i] - pos[j], Lbox)   # assumed helper
                r = norm(rij)                                # assumed helper
                if r < 0.5 * Lbox:
                    ibin = int(r / dr)
                    hist[ibin] += 2
The bin index is calculated for each pair distance r_ij. The bin is updated by 2 because
i is a neighbour of j and j is also a neighbour of i, while we process each unique pair i
and j only once.

    # Output: normalise the histogram to obtain g(r).
    # ncalls = number of times the sampling routine was called.
    def gr_output(hist, dr, ncalls, N, V):
        import math
        g = []
        for ibin, count in enumerate(hist):
            rlo, rhi = ibin * dr, (ibin + 1) * dr
            vshell = 4.0 / 3.0 * math.pi * (rhi**3 - rlo**3)   # shell volume
            nideal = vshell * (N - 1) / V       # ideal-gas count in the shell
            g.append(count / (N * ncalls * nideal))
        return g

When creating the output we should normalise our results properly. The shell volume
vshell is equal to the volume of the spherical shell between the inner and outer radius of
each bin. The number of particles of an ideal gas one would expect in that volume is
(N − 1)/V times the spherical shell volume (we must subtract the one particle in the
origin). Additionally we need to divide by the number of particles N, because we essentially
added the radial distribution functions of N particles, and by the number of calls to the
sampling routine.

Statistical formulas for g (r )

Integrating the probability density for a configuration of N spherical particles, Eq. (4.44),
over the coordinates of all particles except the first two, we find
\[ P_{12}(\mathbf{r}_1, \mathbf{r}_2) = \frac{1}{Z} \int d^3 r_3 \cdots \int d^3 r_N\, \exp\left\{ -\frac{\Phi(r^{3N})}{k_B T} \right\}, \qquad (4.46) \]
where P 12 (r1 , r2 ) is the probability density to have particle 1 at r1 and particle 2 at r2 .
For convenience of notation we write
\[ P_{12}(\mathbf{r}, \mathbf{r}') = \frac{1}{Z} \int d^3 r_3 \cdots \int d^3 r_N\, \exp\left\{ -\frac{\Phi(r^{3N})}{k_B T} \right\} \bigg|_{\mathbf{r}_1 = \mathbf{r},\, \mathbf{r}_2 = \mathbf{r}'}. \qquad (4.47) \]

Because all particles are equal, this is equal to the probability density P_{1j}(r, r') of having
particle 1 at r and particle j at r'. The probability density of having particle 1 at r and
any other particle at r' equals
\[ \sum_{j \neq 1} P_{1j}(\mathbf{r}, \mathbf{r}') = (N-1)\, P_{12}(\mathbf{r}, \mathbf{r}') \qquad (4.48) \]
\[ \frac{1}{V} \rho^{\#} g(|\mathbf{r} - \mathbf{r}'|) = (N-1)\, P_{12}(\mathbf{r}, \mathbf{r}') \qquad (4.49) \]
This is equal to the probability density of having particle 1 at r, which is simply 1/V,
times the conditional density at r', which is \(\rho^{\#} g(|\mathbf{r} - \mathbf{r}'|)\). Multiplying by N we get
\[ \left( \rho^{\#} \right)^2 g(|\mathbf{r} - \mathbf{r}'|) = N(N-1)\, P_{12}(\mathbf{r}, \mathbf{r}'). \qquad (4.50) \]

Once we know g (r ), we can derive all non-entropic thermodynamic properties.

Relation between the radial distribution function and energy

The simplest is the energy:
\[ U = U^{\mathrm{int}} + \frac{3}{2} N k_B T + \frac{1}{2} N \frac{N}{V} \int_0^{\infty} dr\, 4\pi r^2 g(r) \varphi(r). \qquad (4.51) \]

The first term originates from the internal energies of the molecules, the second from
the translations, and the third from the interactions. The average total potential energy
equals (1/2)N times the average interaction of one particular molecule with all others; the
factor 1/2 serves to avoid double counting. The contribution of all particles in a spherical
shell of thickness dr at a distance r to the average interaction of one particular particle
with all others is 4πr 2 dr (N /V )g (r )ϕ(r ). Integration finally yields Eq. (4.51).

Relation between the radial distribution function and compressibility

The isothermal compressibility κ_T is defined as:
\[ \kappa_T \equiv -\frac{1}{V} \left( \frac{\partial V}{\partial P} \right)_{T,N} \qquad (4.52) \]

From thermodynamics it is known that κT can be linked to spontaneous fluctuations


in the number of particles in an open volume V, see Fig. 4.5:
\[ \langle N \rangle \rho^{\#} k_B T \kappa_T = \left\langle \left( N - \langle N \rangle \right)^2 \right\rangle = \left\langle N^2 \right\rangle - \langle N \rangle^2, \qquad (4.53) \]

where the pointy brackets indicate a long time average or an average over many in-
dependent configurations commensurate with the thermodynamic conditions (in this
case constant temperature T and volume V). From Eq. (4.50) we obtain (where \(r_{12} = |\mathbf{r}_1 - \mathbf{r}_2|\)):
\[ \int_V d^3 r_1 \int_V d^3 r_2\, \left( \rho^{\#} \right)^2 g(r_{12}) = \left\langle N(N-1) \right\rangle = \left\langle N^2 \right\rangle - \langle N \rangle. \qquad (4.54) \]

Figure 4.5: The compress-


ibility of a fluid is a measure
for the magnitude of spon-
taneous fluctuations in the
number of particles (black
circles) in an open volume V
(indicated by a dashed line).

We can use this to link the compressibility to the radial distribution function:
\[ \langle N \rangle \rho^{\#} k_B T \kappa_T = \rho^{\#}\!\int_V d^3 r_1\, \rho^{\#}\!\int_V d^3 r_2\, g(r_{12}) + \langle N \rangle - \rho^{\#}\!\int_V d^3 r_1\, \rho^{\#}\!\int_V d^3 r_2 \]
\[ \quad = \rho^{\#}\!\int_V d^3 r_1\, \rho^{\#}\!\int_V d^3 r_2\, \left[ g(r_{12}) - 1 \right] + \langle N \rangle \]
\[ \quad = \rho^{\#}\!\int_V d^3 r_1\, \rho^{\#}\!\int_{\mathbb{R}^3} d^3 r\, \left[ g(r) - 1 \right] + \langle N \rangle \qquad (4.55) \]

Dividing by 〈N〉 we find
\[ \rho^{\#} k_B T \kappa_T = 1 + \rho^{\#} \int_{\mathbb{R}^3} d^3 r\, \left[ g(r) - 1 \right]. \qquad (4.56) \]

This so-called compressibility equation shows that the compressibility of a fluid is in-
timately connected to the radial distribution function of its constituent molecules.
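In a simulation, Eq. (4.53) also gives a direct fluctuation route to κ_T: count the particles in an open subvolume for a series of stored configurations and measure the variance. A minimal sketch, assuming positions are stored in the range [0, L_box) of a cubic box (names are illustrative):

    import numpy as np

    def compressibility_from_fluctuations(snapshots, L_box, T, kB=1.380649e-23):
        """Estimate kappa_T from particle-number fluctuations, Eq. (4.53).

        snapshots : list of (N, 3) position arrays from a long simulation.
        The open subvolume is taken as the octant [0, L_box/2)^3.
        """
        counts = []
        for pos in snapshots:
            inside = np.all((pos >= 0.0) & (pos < 0.5 * L_box), axis=1)
            counts.append(np.sum(inside))
        counts = np.array(counts, dtype=float)
        n_mean = counts.mean()
        var_n = counts.var()                       # <N^2> - <N>^2
        rho = snapshots[0].shape[0] / L_box**3     # overall number density
        return var_n / (n_mean * rho * kB * T)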

Relation between the radial distribution function and pressure

We will now consider the pressure of a fluid. We can rewrite the virial expression for
the pressure, Eq. (4.39), as:
\[ P = \rho^{\#} k_B T - \frac{1}{6} \left( \rho^{\#} \right)^2 \int_0^{\infty} dr\, 4\pi r^2 g(r)\, r \frac{d\varphi}{dr}(r). \qquad (4.57) \]

Try to derive this equation yourself.


The above equation is valid for a fluid of any density. If the density of the fluid is not
too high, correlations between three or more particles may be ignored, in which case
Eq. (4.44) tells us that the radial distribution function is given by
\[ g(r) \approx \exp\left\{ -\beta \varphi(r) \right\}, \qquad (4.58) \]

where ϕ(r ) is the pair interaction potential. Inserting this into Eq. (4.57) we find after
some manipulation:
\[ P \approx \rho^{\#} k_B T - \frac{1}{6} \left( \rho^{\#} \right)^2 \int_0^{\infty} dr\, 4\pi r^3 \exp\left\{ -\beta\varphi(r) \right\} \frac{d\varphi}{dr}(r) \]
\[ \quad = \rho^{\#} k_B T + \frac{1}{6} \left( \rho^{\#} \right)^2 \int_0^{\infty} dr\, 4\pi r^3\, k_B T \frac{\partial}{\partial r}\left[ \exp\left\{ -\beta\varphi(r) \right\} - 1 \right] \]
\[ \quad = \rho^{\#} k_B T + \frac{2\pi}{3} \left( \rho^{\#} \right)^2 k_B T \left( \Big[ r^3 \left( \exp\left\{ -\beta\varphi(r) \right\} - 1 \right) \Big]_0^{\infty} - \int_0^{\infty} dr\, 3 r^2 \left[ \exp\left\{ -\beta\varphi(r) \right\} - 1 \right] \right) \]
\[ \quad = \rho^{\#} k_B T \left( 1 - 2\pi \rho^{\#} \int_0^{\infty} dr\, r^2 \left[ \exp\left\{ -\beta\varphi(r) \right\} - 1 \right] \right). \qquad (4.59) \]

In the second line we have used the chain rule of differentiation, in the third line we
integrated by parts, and in the fourth line we used the fact that ϕ(r → ∞) = 0 to get rid
of the first term.
In the classical chemical and physical literature, the pressure of a fluid is often ex-
pressed as an expansion in the number density, leading to the so-called virial expan-
sion:
\[ P V = N k_B T \left[ 1 + B_2(T) \frac{N}{V} + B_3(T) \left( \frac{N}{V} \right)^2 + \ldots \right]. \qquad (4.60) \]

where B 2 (T ) is a temperature-dependent coefficient, called the second virial coeffi-


cient. Although for high densities the third, fourth, etc, virial coefficients become im-
portant, for low densities the second virial coefficient suffices to accurately predict the
system’s pressure. This is exactly the range where Eq. (4.59) is valid. So we find the
following equation for the second virial coefficient:
\[ B_2(T) = -2\pi \int_0^{\infty} dr\, r^2 \left[ e^{-\beta\varphi(r)} - 1 \right] \qquad (4.61) \]

The above equation is important because it allows us to calculate the pressure of a fluid
knowing only the pair interaction ϕ(r ) between its constituent molecules. In the next
subsection we will apply this to a hard sphere fluid.
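For a continuous pair potential, Eq. (4.61) is easily evaluated numerically. A minimal sketch for the Lennard-Jones potential in reduced units (ε = σ = k_B = 1; a simple rectangle-rule quadrature, illustrative only):

    import numpy as np

    def b2_lennard_jones(T, eps=1.0, sigma=1.0, r_max=20.0, n=200000):
        """Second virial coefficient, Eq. (4.61), in reduced units."""
        beta = 1.0 / T
        r = np.linspace(1e-4, r_max, n)
        sr6 = (sigma / r)**6
        phi = 4.0 * eps * (sr6**2 - sr6)            # Lennard-Jones pair potential
        integrand = r**2 * (np.exp(-beta * phi) - 1.0)
        dr = r[1] - r[0]
        return -2.0 * np.pi * np.sum(integrand) * dr

    # Below the Boyle temperature (around T* = 3.4 for Lennard-Jones) B2 is negative:
    print(b2_lennard_jones(T=1.5))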

The hard sphere fluid

In many theories of liquids the hard sphere fluid is used as a reference system, to which
interparticle attractions are added as a perturbation. It is therefore useful to study
the radial distribution function, second virial coefficient and pressure of a hard sphere
fluid.
The pair interaction in a hard sphere fluid is given by
\[ \varphi(r) = \begin{cases} \infty & \text{for } r \leq \sigma \\ 0 & \text{for } r > \sigma \end{cases} \qquad (4.62) \]

Figure 4.6: The Carnahan-Starling equation for the pressure of a hard sphere fluid, P/(ρ^# k_B T),
as a function of solid volume fraction φ. Note that the prediction for very high densities φ > 0.5
is incorrect since the pressure in a real hard sphere fluid diverges at φ_max ≈ 0.64 for random
close packing.

At very low densities the radial distribution function and second virial coefficient are
therefore given by
\[ g(r) \approx \begin{cases} 0 & \text{for } r \leq \sigma \\ 1 & \text{for } r > \sigma \end{cases} \qquad (4.63) \]
\[ B_2 = -2\pi \int_0^{\infty} dr\, r^2 \left[ e^{-\beta\varphi(r)} - 1 \right] = 2\pi \int_0^{\sigma} dr\, r^2 = \frac{2}{3}\pi\sigma^3. \qquad (4.64) \]
According to Eq. (4.60), and using \(\phi = \frac{1}{6}\pi\rho^{\#}\sigma^3\) for the volume fraction of spheres, the
pressure of a hard sphere fluid can be expressed as:
\[ P = \rho^{\#} k_B T \left( 1 + 4\phi \right). \qquad (4.65) \]
The above expressions are valid for not-too-high densities. At higher densities the
probability to find another hard sphere in (near-)contact with a given hard sphere is
enhanced (the contact value of g(r) rises above 1), and the pressure is higher than predicted
by the second virial coefficient alone. Computer calculations have shown that the pressure
at more general densities is given by:
$$\frac{P}{\rho^{\#} k_B T} = 1 + 4\phi + 10\phi^2 + 18.365\phi^3 + 28.24\phi^4 + 39.5\phi^5 + 56.6\phi^6 + \ldots \qquad (4.66)$$

This is approximately
$$\frac{P}{\rho^{\#} k_B T} = 1 + 4\phi + 10\phi^2 + 18\phi^3 + 28\phi^4 + 40\phi^5 + 54\phi^6 + \ldots \qquad (4.67)$$

Extrapolating and summing we find


$$\frac{P}{\rho^{\#} k_B T} = 1 + \sum_{n=1}^{\infty}\left(n^2 + 3n\right)\phi^n = \frac{1 + \phi + \phi^2 - \phi^3}{(1-\phi)^3}. \qquad (4.68)$$
This is called Carnahan and Starling’s equation for the pressure of a hard sphere fluid
(Figure 4.6). Monte Carlo simulations of hard sphere fluids have shown that Eq. (4.68)
is nearly exact at all possible volume fractions, except near random close packing den-
sities.
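Because Eq. (4.68) is a simple closed-form expression, it is straightforward to compare it with the truncated virial series. A minimal Python sketch (function names are ours, for illustration):

    def pressure_carnahan_starling(phi):
        # Compressibility factor P/(rho kB T) of a hard sphere fluid, Eq. (4.68)
        return (1.0 + phi + phi**2 - phi**3) / (1.0 - phi)**3

    def pressure_second_virial(phi):
        # Low-density estimate P/(rho kB T) = 1 + 4*phi, Eq. (4.65)
        return 1.0 + 4.0 * phi

    for phi in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]:
        print(phi, pressure_carnahan_starling(phi), pressure_second_virial(phi))

Already around φ ≈ 0.2 the second-virial estimate falls noticeably below the Carnahan-Starling result, in line with Figure 4.6.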

Figure 4.7: An incoming wave with wave vector k_in is scattered and analysed in the direction k_out, with |k_in| = |k_out| for elastic scattering. The scattered intensity depends on density fluctuations inside the fluid.

The static scattering function


In the previous section we have linked the compressibility of a fluid to spontaneous
fluctuations in the number of particles in a large volume. More generally, density fluc-
tuations in a fluid can be described by means of their Fourier components:

$$\rho^{\#}(\mathbf{r}) = \rho^{\#} + \frac{1}{(2\pi)^3}\int \mathrm{d}^3k \, \hat{\rho}(\mathbf{k}) \exp\{-i\mathbf{k}\cdot\mathbf{r}\}, \qquad (4.69)$$
$$\hat{\rho}(\mathbf{k}) = \int \mathrm{d}^3r \left(\rho^{\#}(\mathbf{r}) - \rho^{\#}\right) \exp\{i\mathbf{k}\cdot\mathbf{r}\}, \qquad (4.70)$$

where ρ # = N /V is the average number density of the system and ρ # (r) the local num-
ber density near r. The microscopic variable corresponding to a density Fourier com-
ponent is7

$$\hat{\rho}(\mathbf{k}) = \int \mathrm{d}^3r \left[\sum_{j=1}^{N}\delta\left(\mathbf{r} - \mathbf{r}_j\right) - \rho^{\#}\right] \exp\{i\mathbf{k}\cdot\mathbf{r}\}, \qquad (4.71)$$

where δ(r) = δ(x)δ(y)δ(z) is the three-dimensional Dirac delta-function. This may be


rewritten as

$$\hat{\rho}(\mathbf{k}) = \sum_{j=1}^{N}\exp\left\{i\mathbf{k}\cdot\mathbf{r}_j\right\} - \rho^{\#}\int \mathrm{d}^3r \exp\{i\mathbf{k}\cdot\mathbf{r}\} = \sum_{j=1}^{N}\exp\left\{i\mathbf{k}\cdot\mathbf{r}_j\right\} - (2\pi)^3\rho^{\#}\,\delta(\mathbf{k}). \qquad (4.72)$$

Density fluctuations in a fluid can be measured experimentally by means of scat-


tering of light, neutrons, or X-rays (depending on the scale of interest), see Fig. 4.7.
The scattered intensity also depends on details such as wave polarization and scatter-
ing strength or form factor, but generally scattering experiments measure correlation
functions of Fourier components of the density. The correlation function of ρ̂(k) with
its complex conjugate ρ̂ ∗ (k) = ρ̂(−k), i.e. the mean square of the density fluctuation
7
In order to avoid overly dressed symbols, we use the same symbol for the macroscopic quantity
and the microscopic variable. In general a microscopic variable A_micr is an expression given explicitly
in terms of positions and/or velocities of the particles, which after ensemble averaging yields the corre-
sponding macroscopic quantity A, i.e. ⟨A_micr⟩ = A. For example the microscopic density at r is given
by ρ_micr(r) = Σ_j δ(r − r_j), and the macroscopic density by ρ(r) = ⟨ρ_micr(r)⟩.

Figure 4.8: Self-diffusion: each


particle, initially residing within
a very small dot, will diffuse
away via a different path.

with wave vector k, is a real function of the wavevector, called the structure factor S(k):

$$S(k) \equiv \frac{1}{N}\left\langle \hat{\rho}(\mathbf{k})\,\hat{\rho}^*(\mathbf{k})\right\rangle. \qquad (4.73)$$
The division by N leads to a quantity which for large enough systems is independent
of system size (that is to say, the mean square density fluctuations grow linearly with
system size). The structure factor gives a lot of information about the structure of a
fluid. It is essentially a Fourier transform of the radial distribution function, as can be
shown as follows:
$$\begin{aligned}
S(k) &= \frac{1}{N}\left\langle \sum_{j=1}^{N}\sum_{k=1}^{N}\exp\left\{i\mathbf{k}\cdot\left(\mathbf{r}_j - \mathbf{r}_k\right)\right\}\right\rangle - \frac{\left(\rho^{\#}\right)^2}{N}\int \mathrm{d}^3r \int \mathrm{d}^3r' \exp\left\{i\mathbf{k}\cdot\left(\mathbf{r} - \mathbf{r}'\right)\right\} \\
&= 1 + \frac{1}{N}\left\langle \sum_{j=1}^{N}\sum_{k\neq j}\exp\left\{i\mathbf{k}\cdot\left(\mathbf{r}_j - \mathbf{r}_k\right)\right\}\right\rangle - \rho^{\#}\int \mathrm{d}^3r \exp\{i\mathbf{k}\cdot\mathbf{r}\} \\
&= 1 + \rho^{\#}\int \mathrm{d}^3r \left(g(r) - 1\right)\exp\{i\mathbf{k}\cdot\mathbf{r}\}. \qquad (4.74)
\end{aligned}$$

Comparison with Eq. (4.56) shows, perhaps surprisingly, that the compressibility of a
fluid can be obtained not only by compressing the fluid and measuring the pressure,
but also by performing a scattering experiment:

$$\rho^{\#} k_B T \kappa_T = \lim_{k\to 0} S(k). \qquad (4.75)$$
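In a simulation, the structure factor can be estimated directly from the particle coordinates by evaluating the density Fourier components of Eq. (4.72) for wave vectors compatible with the periodic box. A minimal sketch, assuming a cubic box of side L and a positions array pos of shape (N, 3):

    import numpy as np

    def structure_factor(pos, L, nmax=10):
        # S(k) for k = (2*pi/L)*(nx, ny, nz), estimated from a single configuration as
        # S(k) = |sum_j exp(i k . r_j)|^2 / N for k != 0, cf. Eqs. (4.72)-(4.73)
        N = len(pos)
        kmag, sk = [], []
        for nx in range(nmax + 1):
            for ny in range(nmax + 1):
                for nz in range(nmax + 1):
                    if nx == ny == nz == 0:
                        continue                      # skip k = 0
                    k = (2.0 * np.pi / L) * np.array([nx, ny, nz])
                    rho_k = np.sum(np.exp(1j * pos @ k))
                    kmag.append(np.linalg.norm(k))
                    sk.append(np.abs(rho_k)**2 / N)
        return np.array(kmag), np.array(sk)

In practice this single-configuration estimate is averaged over many configurations, and wave vectors of (nearly) equal magnitude are binned together before comparing with Eq. (4.74), or with Eq. (4.75) in the limit k → 0.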

4.6 Measuring dynamic properties

Mean-square displacement and self-diffusion


Suppose we label some particles inside a very small region (a dot) in an otherwise ho-
mogeneous fluid, at time t = t0 at position r0 , as in Fig. 4.8. When the dot, although on
a macroscopic scale concentrated at r0 , is dilute enough on a molecular scale, we may
consider the concentration decay as due to the self-diffusion of the separate labeled
particles. As we discussed in subsection 3.2, the conditional probability P (r, t ) that a
particle is at r at time t , given it was at r0 at time t0 , may be obtained from Fick’s law:
$$\frac{\partial P(\mathbf{r},t;\mathbf{r}_0,t_0)}{\partial t} = D_s \nabla^2 P(\mathbf{r},t;\mathbf{r}_0,t_0), \qquad (4.76)$$
together with the boundary condition lim_{t→t0} P(r, t; r0, t0) = δ(r − r0). D_s is the self-
diffusion coefficient, which has units of length squared over time (m2 /s). The mean

square displacement of the labeled particles can be related to the self-diffusion coeffi-
cient as follows:

$$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\left\langle |\mathbf{r}(t) - \mathbf{r}(t_0)|^2\right\rangle &= \int \mathrm{d}^3r \, |\mathbf{r} - \mathbf{r}_0|^2 \frac{\partial P(\mathbf{r},t;\mathbf{r}_0,t_0)}{\partial t} \\
&= D_s \int \mathrm{d}^3r \, |\mathbf{r} - \mathbf{r}_0|^2 \nabla^2 P(\mathbf{r},t;\mathbf{r}_0,t_0) \\
&= D_s \int \mathrm{d}^3r \, P(\mathbf{r},t;\mathbf{r}_0,t_0)\,\nabla^2 r^2 = 6 D_s, \qquad (4.77)
\end{aligned}$$

where we have used partial integration and the fact that P (r, t ; r0 , t0 ) and its derivative
are zero far from r0 . For real fluid particles Fick’s law only holds for large values of t .8
Integration of Eq. (4.77) yields the Einstein equation for the self-diffusion coefficient,
$$D_s = \lim_{t\to\infty}\frac{1}{6(t-t_0)}\left\langle |\mathbf{r}(t) - \mathbf{r}(t_0)|^2\right\rangle. \qquad (4.78)$$

The Einstein equation tells us that we can measure the self-diffusion coefficient of a
particle by studying its mean square displacement. The pointy brackets indicate that
we can average over many different time origins t0 ; what is important is the correlation
time τ = t − t0 between the current time t and the time origin t0 . To achieve a higher
statistical accuracy, we should therefore use more than one time origin. How to do this
is the topic of the next section.

Measuring time correlation functions


To obtain information about dynamical (transport) properties of a system, we often
need to measure time correlation functions of certain variables in our particle-based
simulations. These correlation functions can be in the form of the time-averaged
product of a variable with the same variable some correlation time τ later (an auto-
correlation function),
$$\langle A(\tau)A(0)\rangle_T \equiv \frac{1}{T-\tau}\int_0^{T-\tau} A(t_0+\tau)A(t_0)\,\mathrm{d}t_0, \qquad (4.79)$$
or in the form of the time-averaged moment n of a change in a variable that occurred
during a correlation time τ,
$$\left\langle [A(\tau)-A(0)]^n\right\rangle_T \equiv \frac{1}{T-\tau}\int_0^{T-\tau} [A(t_0+\tau)-A(t_0)]^n\,\mathrm{d}t_0. \qquad (4.80)$$

Although such time correlation functions may be determined during post-processing,


after the simulation run has finished, in practice this requires storage of a large amount
8
At short times the fluid particles are not yet moving completely randomly. For example, they may
still be trapped inside a temporary cage formed by their neighbours. Fick’s law applies to time scales on
which the particles are diffusing freely.

Figure 4.9: An on-the-fly calculation of time correlation functions is made possible by storing historical values of a variable in a ring buffer store of length ncor. The last value is stored at position curframe, after which the autocorrelation with all possible previous values (icor = 0, 1, 2, etc.) is updated. curframe loops from 0 to ncor-1, i.e. when the ring buffer is full, the oldest value of A will be overwritten by the newest.

of data from multiple time frames if the variable depends on the positions or velocities
of many particles. For example, to measure the mean square displacement of particles,
we would need to store in a file the positions of all particles at many different time
frames. With millions of particles, this quickly becomes prohibitive.
We conclude that it is desirable to calculate time correlation functions on the fly,
i.e. during the execution of the simulation. For the particular method we will describe
here, we need to choose two timescales:

1. the smallest time resolution dτ of the time correlation function, determined by


the number of integration timesteps (call this interval itcor) between taking a sample of our
variable A;

2. the maximum correlation time τmax that we are interested in, determined by the
number of historic samples (ncor) that we will keep in memory at any time.

The time resolution for the time correlation function is therefore dτ = itcor∗Δt
and the maximum correlation time is τmax = ncor∗itcor∗Δt. Preferably, dτ is
smaller than the fastest characteristic time scale of A and τmax much larger than the
longest characteristic time scale. However, often these characteristic time scales are a
priori unknown. Often, this is actually the reason for measuring the time correlation
function in the first place. If the characteristic time scales can also not be estimated
from physical intuition or knowledge of similar systems, the two time scales of the cor-
relator should be obtained by trial-and-error.
Advantage of calculating time correlations on the fly is that, as soon as the simu-
lation time is longer than τmax , we can forget about the oldest time frame, and use its
storage location to store the newest time frame. We can therefore use a ring buffer to
store data of the last ncor time frames, keeping track of the location curframe of the
most recent entry (Figure 4.9).
A time correlation measurement must be initialised before the main time loop, for
example as follows (a sketch in Python-style pseudocode; the names follow Figure 4.9,
and itcor is the sampling interval in timesteps):

    import numpy as np

    ncor   = 1000                       # number of historic samples kept in memory
    itcor  = 10                         # timesteps between samples, so dtau = itcor*dt
    store  = np.zeros(ncor)             # ring buffer of historic values of A
    cor    = np.zeros(ncor)             # accumulated correlation per correlation time
    ncount = np.zeros(ncor, dtype=int)  # number of samples per correlation time
    ncorcalls = 0                       # number of samples taken so far
    curframe  = -1                      # current position in the ring buffer
During execution of the main time loop, if the modulo of the time step with itcor
is zero (t is an integer number of dτ), we update the ring buffer and the correlations,
for example:

    ncorcalls += 1
    curframe = (curframe + 1) % ncor          # advance the ring buffer position
    store[curframe] = A                       # store the current value of A
    ncormax = min(ncorcalls, ncor)            # number of valid historic frames
    for icor in range(ncormax):
        iprev = (curframe - icor) % ncor      # frame a correlation time icor*dtau ago
        cor[icor]    += store[curframe] * store[iprev]
        ncount[icor] += 1
Note that we store the current value of A in the array store at position curframe. The
above routine updates the correlation between the current value of A and previous
values of A for all correlation times up to the runtime or the maximum correlation time,
whichever is the smallest. Also note that this routine applies to a time autocorrelation
of A. If we are interested in a moment n of the change in A, we should replace just one
line, namely the update of cor[icor], by:

    cor[icor] += (store[curframe] - store[iprev]) ** n


Finally, at the end of the simulation, we calculate the correlation function by normal-
ising with the number of samples for each correlation time, and print the result:

    for icor in range(ncor):
        if ncount[icor] > 0:
            cor[icor] /= ncount[icor]              # normalise
            print(icor * itcor * dt, cor[icor])    # correlation time, <A(tau)A(0)>
Up to this point we have considered a general variable A of the system. As an ex-
plicit example, suppose we wish to calculate the mean square displacement of all par-
ticles in a simulation. If we use periodic boundary conditions this poses an additional
problem, because particles that leave the central box through one face will reappear
at the opposite face. For a correct measurement of the mean square displacement, we
must ensure that we store the unfolded particle positions in the ring buffer. This may

be accomplished by comparing the position of a particle with the position of the same
particle in the previous frame. If the particle appears to have displaced by more than
half a box length in a certain cartesian direction, we should first subtract or add an
integer number of box lengths to store the unfolded position. In detail (again a sketch
in Python-style pseudocode, now with per-particle ring buffers storex, storey and
storez of shape (ncor, npart), and box the length of the cubic periodic box):

    ncorcalls += 1
    prevframe = curframe
    curframe = (curframe + 1) % ncor

    # store unfolded positions in the ring buffer
    for i in range(npart):
        if ncorcalls == 1:
            # first sample: simply store the current (folded) positions
            storex[curframe, i] = x[i]
            storey[curframe, i] = y[i]
            storez[curframe, i] = z[i]
        else:
            # unfold: remove apparent jumps of more than half a box length
            # caused by particles crossing the periodic boundaries
            dx = x[i] - storex[prevframe, i]
            dy = y[i] - storey[prevframe, i]
            dz = z[i] - storez[prevframe, i]
            dx -= box * round(dx / box)
            dy -= box * round(dy / box)
            dz -= box * round(dz / box)
            storex[curframe, i] = storex[prevframe, i] + dx
            storey[curframe, i] = storey[prevframe, i] + dy
            storez[curframe, i] = storez[prevframe, i] + dz

    # update the mean square displacement for all valid correlation times
    ncormax = min(ncorcalls, ncor)
    for icor in range(ncormax):
        iprev = (curframe - icor) % ncor
        for i in range(npart):
            msd[icor] += (storex[curframe, i] - storex[iprev, i]) ** 2 \
                       + (storey[curframe, i] - storey[iprev, i]) ** 2 \
                       + (storez[curframe, i] - storez[iprev, i]) ** 2
        ncount[icor] += 1
Finalising the mean square displacement measurement can be done in the same way as before, by normalising msd[icor] with the number of samples ncount[icor] (and the number of particles) for each correlation time, and printing the result.

Figure 4.10: Typical velocity autocorrela-


tion function in a liquid. In this figure
the relatively fast initial decay is clearly vis-
ible, whereas the slow decay at larger times
is not. Nevertheless, the slow decay con-
tributes considerably to the self-diffusion co-
efficient, Eq. (4.82).

Velocity autocorrelation function and self-diffusion


Obtaining the self-diffusion coefficient from the mean square displacement is slightly
complicated by the use of periodic boundary conditions. Fortunately there is also an-
other way to obtain the self-diffusion coefficient which is not complicated by the use
of periodic boundaries. To see why, let us transform the mean square displacement to
an expression involving the autocorrelation of the velocity v = ṙ of a labeled particle:
$$\begin{aligned}
\left\langle |\mathbf{r}(t) - \mathbf{r}(0)|^2\right\rangle &= \int_0^t \mathrm{d}t' \int_0^t \mathrm{d}t'' \left\langle \mathbf{v}(t')\cdot\mathbf{v}(t'')\right\rangle \\
&= 2\int_0^t \mathrm{d}t' \int_0^{t'} \mathrm{d}t'' \left\langle \mathbf{v}(t')\cdot\mathbf{v}(t'')\right\rangle \\
&= 2\int_0^t \mathrm{d}t' \int_0^{t'} \mathrm{d}t'' \left\langle \mathbf{v}(t'-t'')\cdot\mathbf{v}(0)\right\rangle \\
&= 2\int_0^t \mathrm{d}t' \int_0^{t'} \mathrm{d}\tau \left\langle \mathbf{v}(\tau)\cdot\mathbf{v}(0)\right\rangle \\
&= 2\int_0^t \mathrm{d}\tau\,(t-\tau)\left\langle \mathbf{v}(\tau)\cdot\mathbf{v}(0)\right\rangle. \qquad (4.81)
\end{aligned}$$

The last step follows after partial integration. Comparing with the Einstein equation (4.78),
taking the limit for t → ∞, we finally find

$$D_s = \frac{1}{3}\int_0^{\infty} \mathrm{d}\tau \left\langle \mathbf{v}(\tau)\cdot\mathbf{v}(0)\right\rangle. \qquad (4.82)$$

This is the Green-Kubo relation for the self-diffusion coefficient.


In Fig. 4.10 a typical velocity autocorrelation is shown. After a short time the au-
tocorrelation goes through zero; here the particle collides with some other particle in
front of it and it reverses its velocity. For large values of τ the velocity autocorrelation
scales as τ−3/2 , which is a hydrodynamic effect. This very slow decay is often difficult
to detect.
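In a simulation, the velocity autocorrelation function is conveniently accumulated with the same on-the-fly correlator described in the previous section (with A replaced by the velocity components of each particle), after which Eq. (4.82) is evaluated by numerical integration. A minimal sketch, assuming vacf holds the measured ⟨v(τ)·v(0)⟩ on a grid of correlation times with spacing dtau:

    import numpy as np

    def diffusion_from_vacf(vacf, dtau):
        # Green-Kubo estimate of D_s, Eq. (4.82), by the trapezoidal rule
        return np.trapz(vacf, dx=dtau) / 3.0

Because of the slow τ^(−3/2) tail, the running integral should be monitored as a function of the upper integration limit to check that it has converged.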

Stress autocorrelation function and viscosity


In the previous subsection we have seen that a transport property (diffusion) may be
measured through the integral of a time correlation function (the velocity autocorre-
lation) measured in equilibrium. Similarly, it may be shown (we will not do this here
explicitly) that the dynamic viscosity of a molecular system may also be obtained from
the integral of the time correlation function of a quantity measured in equilibrium. In
this case the quantity of interest is an off-diagonal, say x y-component, of the micro-
scopic stress tensor. In formula:

$$\mu = \frac{V}{k_B T}\int_0^{\infty} \mathrm{d}\tau \left\langle S_{xy}^{micr}(\tau)\,S_{xy}^{micr}(0)\right\rangle, \qquad (4.83)$$

where V is the volume of the system, k_B T the thermal energy, and S^micr_xy the xy-component of the microscopic stress tensor, given by:

$$S_{xy}^{micr} = -\frac{1}{V}\sum_{i=1}^{N}\left(m_i v_{i,x} v_{i,y} + x_i F_{i,y}\right). \qquad (4.84)$$
In the above expression m i is the mass and vi is the velocity of particle i , and Fi is the
force on particle i due to interactions with other particles.9 If the interactions between
the particles are pairwise additive, we can also write this as:

$$S_{xy}^{micr} = -\frac{1}{V}\left[\sum_{i=1}^{N} m_i v_{i,x} v_{i,y} + \sum_{i=1}^{N}\sum_{j>i}\left(x_i - x_j\right)F_{ij,y}\right], \qquad (4.85)$$
where Fi j ,y is the y-component of the force on i due to its interaction with j .
Because the stress tensor is a global property of the system instead of a particle
property, in practice much longer simulation times are required to obtain accurate
measurements of the viscosity than of the diffusion coefficient. For the velocity auto-
correlation we can average over N independent measurements, whereas for the stress
autocorrelation we can only average over 3 independent measurements, namely the
autocorrelations of x y, xz, and y z components of the stress tensor.
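For pairwise additive interactions, the instantaneous value of Eq. (4.85) can be accumulated in the same loop that computes the pair forces. A minimal sketch (array names are ours, for illustration):

    def stress_xy(m, vel, xdist, fpair_y, volume):
        # Instantaneous S_xy^micr, Eq. (4.85).
        # m, vel   : masses and velocities (vel[i] = [vx, vy, vz]) of all particles
        # xdist    : x_i - x_j for every interacting pair (i, j) with j > i
        # fpair_y  : y-component of the pair force on i due to j, in the same pair order
        kinetic = sum(mi * v[0] * v[1] for mi, v in zip(m, vel))
        virial  = sum(dx * fy for dx, fy in zip(xdist, fpair_y))
        return -(kinetic + virial) / volume

The resulting time series is fed into the on-the-fly correlator of section 4.6, and the viscosity follows from integrating the stress autocorrelation function according to Eq. (4.83).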

4.7 Limitations of Molecular Dynamics simulations


Molecular dynamics simulations can be very accurate if accurate force fields are used
for the interactions between the atoms. Still, there is a major limitation to molecular
dynamics simulations: the integration time steps are typically of the order of a few fem-
toseconds, allowing for simulation times of up to 100 nanoseconds for system sizes of
up to tens of nanometers. Even when massively parallel computers are used, employ-
ing tens of thousands of processors to a single problem, the largest system size that can
be studied is of the order of 100 nanometers cubed.
9
Note that this expression is reminiscent of the virial expression for the pressure, Eq. (4.39). In fact,
the isotropic pressure P is equal to −(S^micr_xx + S^micr_yy + S^micr_zz)/3, i.e. the average diagonal
element of the stress tensor equals minus the isotropic pressure.

4.8 Practicum: Molecular Dynamics simulation of liquid methane
In this practicum you will study the behaviour of a Lennard-Jones fluid, representing
liquid methane, in a periodic system. You will thermostat the system and measure the
particle velocity distribution, the radial distribution function, and the diffusion coeffi-
cient at different temperatures.
CHAPTER 5
THE MESOSCOPIC WORLD

5.1 Chapter objectives


Through the course of this chapter, you will accomplish the following:

• You will learn about coarse-graining and the softness of soft matter

• You will learn about the connection between dissipative forces and fluctuating
forces in mesoscale systems

• You will learn about Langevin Dynamics and Brownian Dynamics simulations

• You will learn about hydrodynamic interactions between mesoscopic particles

• You will learn about simulation methods that enable hydrodynamic interactions,
particularly dissipative particle dynamics and multi-particle collision dynamics

• You will learn about limitations of mesoscopic particle-based methods

5.2 Coarse-graining, soft matter systems and hydrodynamic interactions
Many systems we encounter in our daily lives consist of a continuous fluid phase with
embedded particles which are much larger than single molecules. Examples include
paint, blood, toothpaste, coffee, milk, and mayonnaise. The embedded particles in
these systems of course ultimately also consist of atoms and molecules, but they are
coherent enough for their molecules to remain grouped over a time scale longer than


our “human” time scale with which we observe or use the system. The particles can
be either solid, as in colloidal suspensions of solid polymeric particles, or deformable,
as in solutions with polymeric coils (paint), oil-in-water emulsions stabilised by sur-
factants (mayonnaise), or red blood cells in blood plasma. If the particles are less than
roughly 10 micrometer in size, they are still very much influenced by thermal fluctua-
tions. Such mesoscopic systems are the topic of this chapter.
It is impossible to model mesoscopic systems with atomistic detail, simply because
a single particle already consists of at least millions of atoms, let alone the millons of
fluid molecules surrounding each particle. So molecular dynamics simulations are out
of the picture. Rather, we will treat the particles as single entities: we reduce the mil-
lions of atomic positions and velocities to a much lower number of coarse-grained co-
ordinates and velocities and treat the fluid in an effective way. For example, we may de-
scribe each polymer in a polymer suspension as a string of a low number of beads, each
representing the centre-of-mass of a large number of atoms, connected by springs, and
treat the fluid as “blobs” of millions of fluid molecules each. This reduces the number
of coordinates and velocities per polymer to only a few dozen. In the extreme case of
a spherical solid particle it may be sufficient to use only 3 coordinates to describe the
position of the particle and 6 coordinates to describe its velocity and angular velocity.
The equations of motion for the coarse-grained coordinates are not the same as the
equations of motion for atoms in molecular dynamics simulations. We cannot sim-
ply throw away and ignore the atomistic degrees of freedom. For every configuration
of coarse-grained coordinates (say a certain distance between two beads of a model
polymer), there are many possible configurations of the atomistic coordinates in the
particle and the fluid. First, this leads to effective interactions between coarse-grained
coordinates which are generally softer than atomic interactions, meaning that the ef-
fective potential energy of a pair of coarse-grained particles will change much slower
with their mutual distance than in the case of a pair of atoms. The resulting effective
interaction energies are usually not very much higher than the thermal energy k B T ,
meaning that these systems are easily perturbed by external forces: they are soft mat-
ter systems. Second, coarse-graining leads to friction forces and random forces on the
coarse-grained coordinates, caused by the fluctuations of the atomistic degrees of free-
dom inside the particles and in the fluid. This is observed as Brownian motion of the
mesoscopic particles.
The friction forces and random forces are essential ingredients of mesoscopic sim-
ulations. It is intuitively clear that the friction on a mesoscopic particle will depend on
its velocity relative to the surrounding fluid. If more than one mesoscopic particle is
present, the motion of one mesoscopic particle will induce a flow field in the fluid that
will alter the friction felt by another mesoscopic particle. The importance of this ef-
fect, referred to as hydrodynamic interaction, depends on its magnitude relative to the
magnitude of other forces such as the direct particle-to-particle interaction forces. If
hydrodynamic interaction can be ignored, the equations of motion are greatly simpli-
fied. This is the topic of the first part of this chapter. In the second part we will describe
methods to (re-)introduce hydrodynamic interactions.

Figure 5.1: A colloidal particle moving with velocity V will experience a friction force −ζV opposite to its velocity and random forces F^R due to the continuous bombardment of solvent molecules.

5.3 Brownian motion of a single particle

Friction and random forces: the Langevin equation


Consider a spherical colloidal particle of radius a (typically between 10−8 and 10−6 me-
ter) and mass M moving through a quiescent solvent along a path R(t ). The colloidal
particle will continuously collide with the solvent molecules. Because on average the
colloid will collide more often on the front side than on the back side, it will experience
a systematic force proportional with its velocity V, and directed opposite to its veloc-
ity. The colloid will also experience a random or stochastic force FR (t ). These forces are
summarized in Fig. 5.1. The equations of motion, referred to as the Langevin equations,
then read

$$\frac{\mathrm{d}\mathbf{R}}{\mathrm{d}t} = \mathbf{V}, \qquad (5.1)$$
$$M\frac{\mathrm{d}\mathbf{V}}{\mathrm{d}t} = -\nabla\Phi - \zeta\mathbf{V} + \mathbf{F}^R, \qquad (5.2)$$

where the additional term −∇Φ represents a conservative force on the colloidal parti-
cle (∇ is the gradient with respect to the position of the particle). Because the colloidal
particle is small, the Reynolds number associated with its motion is much lower than
unity. Conversely, because the colloidal particle is much larger than the mean free path
of the solvent molecules, the Knudsen number is much smaller than unity. Thus, we
are allowed to obtain the friction coefficient ζ by solving the Stokes equations for fluid
flow around a sphere. Assuming the colloidal particle has no-slip boundaries with the
solvent,1 it can be shown that the friction coefficient is given by the following Stokes
drag formula:

ζ = 6πμa, (5.3)

where μ is the dynamic viscosity of the solvent. A full derivation is given in Appendix
A.2.
1
For very small colloidal particles in the nanometer range, the proper boundary conditions are actu-
ally quite a subtle point, which has not yet been resolved. For no-stick (full slip) boundaries, the friction
coefficient is ζ = 4πμa. In between these two extremes, partial-slip boundary conditions may apply at
such very small scales.

The fluctuation-dissipation theorem


Assuming for the moment that no conservative forces act on the colloidal particle,
Φ = 0, we can divide Eq. (5.2) by M, define the friction “frequency” ξ as ξ = ζ/M, and
formally solve the equation, e.g. by variation of constants:

$$\mathbf{V}(t) = \mathbf{V}_0\,\mathrm{e}^{-\xi t} + \frac{1}{M}\int_0^t \mathrm{d}\tau\,\mathrm{e}^{-\xi(t-\tau)}\,\mathbf{F}^R(\tau). \qquad (5.4)$$
where V0 is the initial velocity. Note that the above solution contains a time integral
over the random force FR (t ). We will now determine ensemble averages over all possi-
ble realizations of FR (t ), with the initial velocity as a condition. To this end we have to
make some assumptions about the random force. In view of its chaotic character, the
following assumptions seem to be appropriate for its ensemble average properties:

$$\left\langle \mathbf{F}^R(t)\right\rangle = 0 \qquad (5.5)$$
$$\left\langle \mathbf{F}^R(t)\cdot\mathbf{F}^R(t')\right\rangle_{\mathbf{V}_0} = C_{\mathbf{V}_0}\,\delta(t-t') \qquad (5.6)$$

where C V0 may depend on the initial velocity. The above equations say that the random
force is zero on average (the average force is already included in the friction) and has
no memory (a white noise process). Using Eqs. (5.4) - (5.6), we find

$$\begin{aligned}
\left\langle \mathbf{V}(t)\right\rangle_{\mathbf{V}_0} &= \mathbf{V}_0\,\mathrm{e}^{-\xi t} + \frac{1}{M}\int_0^t \mathrm{d}\tau\,\mathrm{e}^{-\xi(t-\tau)}\left\langle \mathbf{F}^R(\tau)\right\rangle_{\mathbf{V}_0} = \mathbf{V}_0\,\mathrm{e}^{-\xi t} \qquad (5.7) \\
\left\langle \mathbf{V}(t)\cdot\mathbf{V}(t)\right\rangle_{\mathbf{V}_0} &= V_0^2\,\mathrm{e}^{-2\xi t} + \frac{2}{M}\int_0^t \mathrm{d}\tau\,\mathrm{e}^{-\xi(2t-\tau)}\left\langle \mathbf{V}_0\cdot\mathbf{F}^R(\tau)\right\rangle_{\mathbf{V}_0} \\
&\quad + \frac{1}{M^2}\int_0^t \mathrm{d}\tau\int_0^t \mathrm{d}\tau'\,\mathrm{e}^{-\xi(2t-\tau-\tau')}\left\langle \mathbf{F}^R(\tau)\cdot\mathbf{F}^R(\tau')\right\rangle_{\mathbf{V}_0} \\
&= V_0^2\,\mathrm{e}^{-2\xi t} + \frac{C_{\mathbf{V}_0}}{2M^2\xi}\left(1 - \mathrm{e}^{-2\xi t}\right). \qquad (5.8)
\end{aligned}$$
The colloid is in thermal equilibrium with the solvent. According to the equipartition
theorem [45], for large t , Eq. (5.8) should be equal to 3k B T /M, from which it follows
that

$$\left\langle \mathbf{F}^R(t)\cdot\mathbf{F}^R(t')\right\rangle = 6 k_B T \xi M\,\delta(t-t') = 6 k_B T \zeta\,\delta(t-t'). \qquad (5.9)$$

This is one manifestation of the fluctuation-dissipation theorem, which states that the
dissipative force experienced by a particle dragged through a fluid is actually intimately
connected to the magnitude and time correlation of the fluctuating random force.

The Einstein and Stokes-Einstein equations


Integrating Eq. (5.4) we get
$$\mathbf{R}(t) = \mathbf{R}_0 + \frac{\mathbf{V}_0}{\xi}\left(1 - \mathrm{e}^{-\xi t}\right) + \frac{1}{M}\int_0^t \mathrm{d}\tau\int_0^{\tau} \mathrm{d}\tau'\,\mathrm{e}^{-\xi(\tau-\tau')}\,\mathbf{F}^R(\tau'), \qquad (5.10)$$

Figure 5.2: The mean square displacement of a colloidal particle according to the Langevin equation (5.11) as a function of time. Here we have used the ensemble average ⟨V₀²⟩ = 3k_B T/M for the initial velocity. The horizontal time axis is scaled by the friction frequency ξ = ζ/M, and the vertical m.s.d. axis is scaled by D_s/ξ, where D_s is the self-diffusion coefficient according to the Einstein equation (5.14).

from which we calculate the mean square displacement

$$\left\langle (\mathbf{R}(t)-\mathbf{R}_0)^2\right\rangle_{\mathbf{V}_0} = \frac{V_0^2}{\xi^2}\left(1 - \mathrm{e}^{-\xi t}\right)^2 + \frac{3 k_B T}{M\xi^2}\left(2\xi t - 3 + 4\mathrm{e}^{-\xi t} - \mathrm{e}^{-2\xi t}\right). \qquad (5.11)$$
We can study two limits of this equation. For very small t the mean square displace-
ment is

$$\lim_{t\downarrow 0}\left\langle (\mathbf{R}(t)-\mathbf{R}_0)^2\right\rangle = V_0^2\,t^2, \qquad (5.12)$$

which is the ballistic regime when the particle has not yet significantly changed its
velocity from V0 . Conversely, for very large t the mean square displacement becomes

$$\lim_{t\to\infty}\left\langle (\mathbf{R}(t)-\mathbf{R}_0)^2\right\rangle = \frac{6 k_B T}{M\xi}\,t, \qquad (5.13)$$
from which we get the Einstein equation for the self-diffusion coefficient2
$$D_s = \frac{k_B T}{\zeta}. \qquad (5.14)$$
Note that we could have obtained the same result directly from Eq. (5.4) by determin-
ing the time integral of the velocity autocorrelation, and applying equipartition for the
mean square velocity. Try this yourself!
When we combine the Einstein equation (5.14) with the Stokes drag equation (5.3),
we find the famous Stokes-Einstein equation, valid for a single colloidal sphere:
$$D_s = \frac{k_B T}{6\pi\mu a}. \qquad (5.15)$$
The Stokes-Einstein equation tells us, perhaps surprisingly, that the self-diffusion co-
efficient D s of a colloidal particle is independent of its mass M, but only depends on
its size a.
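As a quick numerical illustration of Eq. (5.15): a colloid of radius a = 100 nm in water (μ ≈ 1.0 × 10⁻³ Pa s) at T = 298 K has D_s = k_B T/(6πμa) ≈ 4.1 × 10⁻²¹ J / 1.9 × 10⁻⁹ kg s⁻¹ ≈ 2.2 × 10⁻¹² m²/s, i.e. roughly 2 μm²/s, and halving the radius doubles the diffusion coefficient irrespective of the particle mass.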
2
The Einstein equation does not only apply to a single spherical colloidal particle, but also to col-
lections of colloidal particles or colloidal particles of a non-spherical shape. The diffusion tensor D is
then related to the friction tensor Ξ as D = kB T Ξ−1 . This is beyond the scope of these lectures.

Figure 5.3: The velocity autocorrelation function ⟨V(t)·V(0)⟩ of a colloidal particle according to the Langevin equation (5.11) as a function of time. The time axis is scaled by the friction frequency ξ = ζ/M, and the vertical velocity correlation axis is scaled by the expectation ⟨V₀²⟩ = 3k_B T/M. Note that the colloid loses memory of its initial velocity after a few 1/ξ.

Overdamped systems: from Langevin equation to Brownian dynamics


From Eq. (5.7) we see that the colloid loses its memory of its initial velocity after a
time τ ≈ 1/ξ (Figure 5.3). Using equipartition, its initial velocity may be put equal to
√(3k_B T/M). The distance λ it travels, divided by its radius, then is

$$\frac{\lambda}{a} = \frac{\sqrt{3 k_B T/M}}{a\xi} = \sqrt{\frac{\rho k_B T}{9\pi\mu^2 a}}, \qquad (5.16)$$
where ρ is the mass density of the colloid.
When the particles are very small, have a very high density, and/or the fluid itself
has a very low viscosity (e.g. the fluid is a gas), λ/a will be significant (0.1 or larger). In
that case it is important to keep track of both the particle position and its velocity.
However, in many practical situations the value of λ/a is very small. For example,
λ/a ≈ 10−3 for a 10 nanometre sized colloid and λ/a ≈ 10−4 for a micrometre sized
colloid in water at room temperature. We see that the particles have hardly moved by
the time possible velocity gradients have relaxed to equilibrium: they are overdamped.
When we are interested in timescales on which particle configurations change, we may
restrict our attention to the space coordinates, and average over the velocities.
It can be shown [15] (though we will not do it here) that the explicit equations of
motion for the particles which only include particle coordinates are
$$\frac{\mathrm{d}\mathbf{R}}{\mathrm{d}t} = -\frac{1}{\zeta}\nabla\Phi + \nabla D_s + \mathbf{f}^R \qquad (5.17)$$
$$\left\langle \mathbf{f}^R(t)\right\rangle = 0 \qquad (5.18)$$
$$\left\langle \mathbf{f}^R(t)\,\mathbf{f}^R(t')\right\rangle = 2 D_s\,\bar{\mathbf{I}}\,\delta(t-t'). \qquad (5.19)$$
where −∇Φ is again a conservative force on the colloidal particle, and fR are random
variables with the unit of velocity. Ī denotes the unit tensor Iαβ = δαβ , to signify that
different cartesian components of the random variables fR are uncorrelated. The above
equations are known as the Brownian dynamics equations of motion. The term ∇D s
accounts for the effect of possible gradients in the self-diffusion coefficient. We em-
phasise again that the self-diffusion coefficient D s and friction coefficient ζ are not
independent, but linked by the Einstein equation (5.14).

5.4 Langevin and Brownian dynamics of multiple particles without hydrodynamic interactions

Neglect of hydrodynamic interactions


In most applications we are dealing with N > 1 particles embedded in a fluid. Com-
pared to a single particle in a fluid, this leads to two types of additional forces:

1. direct interaction forces, usually caused by nearby other particles, and

2. hydrodynamic forces on a particle transmitted by fluid flow fields caused by the


motion of other particles.

The latter, fluid-mediated, forces are referred to as hydrodynamic interactions. Be-


cause hydrodynamic fields decay very slowly with distance, as slow as a/r (see Ap-
pendix A.3), hydrodynamic interactions may be felt between particles which are many
particle diameters apart.
Simulations of Brownian particles are greatly simplified if we can neglect hydrody-
namic interactions. It is therefore important to understand when hydrodynamic inter-
actions may be neglected.
First, hydrodynamic interactions may be neglected when the direct interaction forces
between particles dominate the hydrodynamic forces. This is usually the case at high
solid volume fractions φ of particles, where

$$\phi = \frac{N\,\frac{4}{3}\pi a^3}{V} = \frac{4}{3}\pi\rho^{\#} a^3 \qquad (5.20)$$
is based on the hard core radius a of the particle. How high φ needs to be, depends
on the range of the direct interaction. For example, when the colloidal particles are
charged, or if their surfaces are dotted with (non-collapsed) polymer coils, particles
start to feel each other long before their hard(er) cores touch. In such a case we may
neglect hydrodynamic interactions as soon as the virtual spheres associated with the
range of direct interaction significantly overlap. This may for example happen already
for φ > 0.1 when the range of direct interaction is 2-3 times the particle diameter. On
the other hand, when the colloidal particles act (almost) like hard spheres, their ranges
of direct interaction never significantly overlap. Indeed, for hard sphere solutions hy-
drodynamic forces may remain dominant for all volume fractions up to the point of
maximum packing (φ ≈ 0.64). At the higher volume fractions these hydrodynamic in-
teractions are mainly in the form of so-called lubrication forces. Lubrication forces are
a type of hydrodynamic interaction associated with the squeezing of a fluid in and out
of a region between two particles when two particle surfaces come close together or
move apart.
Second, hydrodynamic interactions may be neglected at very low solid volume
fractions. To see how low, we will estimate the effect of hydrodynamic flow fields on

Figure 5.4: Log-log plot of the radius a of a particle divided by the typical distance r_neigh between neighbours in a homogeneous dispersion of particles as a function of solid volume fraction φ. This is a measure for the influence of hydrodynamic interactions at low φ.

neighbouring particles. If the particles remain dispersed through the fluid, the typical
distance r_neigh between neighbouring particles is of the order of r_neigh = (ρ#)^(−1/3). This
leads to the following estimate for the ratio a/r_neigh:

$$\frac{a}{r_{neigh}} \approx \left(\frac{3\phi}{4\pi}\right)^{1/3} \qquad (5.21)$$

Figure 5.4 shows that if φ = 10−3 (or 10−4 ), the flow field induced by a particle has de-
cayed to 6% (or 3%) of its original value by the time it reaches the first neighbouring
particle. Hydrodynamic interactions are always present, but depending on the quan-
tity of interest and the required accuracy, hydrodynamic interactions are typically ne-
glected for φ < 10−4 . In this case the dynamics of a Brownian particle is similar to the
dynamics of a single, isolated Brownian particle.
In summary, hydrodynamic interactions may be neglected in two extreme situa-
tions, either when the particles are so close that they significantly overlap in their di-
rect interaction range, or when the particles are so far apart that they effectively don’t
feel each other anymore.

Numerical implementation of Langevin dynamics


The Langevin equations of motion for multiple particles (without hydrodynamic inter-
actions) are
$$\frac{\mathrm{d}\mathbf{R}_i}{\mathrm{d}t} = \mathbf{V}_i, \qquad (5.22)$$
$$M_i\frac{\mathrm{d}\mathbf{V}_i}{\mathrm{d}t} = \mathbf{F}_i - \zeta_i\mathbf{V}_i + \mathbf{F}_i^R, \qquad (5.23)$$
$$\left\langle \mathbf{F}_i^R(t)\right\rangle = 0 \qquad (5.24)$$
$$\left\langle \mathbf{F}_i^R(t)\,\mathbf{F}_j^R(t')\right\rangle = 2 k_B T \zeta_i\,\delta_{ij}\,\bar{\mathbf{I}}\,\delta(t-t'). \qquad (5.25)$$

In the second line, Fi is the force on particle i due to direct interactions with other
particles and external fields. The last line indicates that the random force on a particle

is uncorrelated with the random force on another particle (δi j ), that the random force
in one cartesian direction is uncorrelated with the random force in another cartesian
direction (Ī), and that the random force at a time t is uncorrelated with the random
force at another time t  (δ(t − t  )) (white noise without memory).
The random forces need some attention because in a simulation we use finite time
steps Δt to integrate the equations of motion. An often used approach is to approxi-
mate the delta-correlated random force by a random force which is constant during an
entire time step, but uncorrelated with the random force in another time step.3 In that
case a leap-frog or velocity Verlet algorithm (for details see section 2.7) can be used to
update the positions and velocities of the particles, but with the force Fi on i replaced
by the total force Fitot which includes friction and random force,

Fi (t ) → Fitot (t ) = Fi (t ) − ζi Vi (t ) + FRi (t ), (5.26)

where each cartesian component of FRi (t ), i.e. FiR,x , FiR,y and FiR,z , is sampled from a
Gaussian distribution4 with zero average and standard deviation

$$\sigma_{F_{i,\alpha}^R} = \left\langle \left(F_{i,\alpha}^R\right)^2\right\rangle^{1/2} = \sqrt{\frac{2 k_B T \zeta_i}{\Delta t}}. \qquad (5.27)$$

Note that the standard deviation of the random force increases with decreasing time
step Δt as Δt −1/2 .
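A minimal sketch of this total force in Python (using numpy; the function simply replaces F_i by F_i^tot inside the leap-frog or velocity Verlet integrator of section 2.7, and all names are illustrative):

    import numpy as np

    rng = np.random.default_rng()

    def langevin_total_force(F, V, zeta, kBT, dt):
        # Total force of Eq. (5.26): conservative force F, friction -zeta*V, and a
        # random force sampled per cartesian component with standard deviation
        # sqrt(2*kBT*zeta/dt), Eq. (5.27). F and V are (N, 3) arrays.
        sigma = np.sqrt(2.0 * kBT * zeta / dt)
        return F - zeta * V + rng.normal(0.0, sigma, size=V.shape)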

Numerical implementation of Brownian dynamics


As already noted in section 5.3, using the Langevin equation to solve the motion of
Brownian particles with a relatively high amount of friction ζ makes no sense because
the velocities relax to equilbrium before the particle has moved only a tiny fraction of
its diameter. Moreover, with increasing friction, the discretised equations of motion
discussed in the previous subsection can only be used with increasingly small time
steps. The fundamental limitation is that the relaxation of the velocity in one time step
must be relatively small, i.e. ξΔt = ζΔt /M  1.
For particles experiencing a high friction (and for which hydrodynamic interactions
can be neglected), it is more efficient to solve the velocity-averaged equations of mo-
3
Another approach is to integrate the equations of motion over a finite time step Δt under the as-
sumption of a constant or linearly time interpolated force Fi . This leads, among other things, to a corre-
lated random update of both the velocity and the position of a particle [].
4
It is not absolutely necessary to sample from a Gaussian distribution. Any distribution with zero
average and the same standard deviation is sufficient. This is a consequence of the central limit theorem,
which states that the sum of a large number of random numbers will tend to a Gaussian distribution,
irrespective of the distribution from which the original random numbers have been sampled. However,
the speed with which a Gaussian distribution is approached depends, very roughly speaking, on the
similarity of the original distribution to a Gaussian shape. We therefore prefer to sample from a Gaussian
or near-Gaussian distribution.

tion, i.e. the Brownian equation of motion:

$$\frac{\mathrm{d}\mathbf{R}_i}{\mathrm{d}t} = \frac{\mathbf{F}_i}{\zeta_i} + \nabla D_{s,i} + \mathbf{f}_i^R \qquad (5.28)$$
$$\left\langle \mathbf{f}_i^R(t)\right\rangle = 0 \qquad (5.29)$$
$$\left\langle \mathbf{f}_i^R(t)\,\mathbf{f}_j^R(t')\right\rangle = 2 D_{s,i}\,\delta_{ij}\,\bar{\mathbf{I}}\,\delta(t-t'). \qquad (5.30)$$

In the first line, Fi is the force on particle i due to direct interactions with other particles
and external fields. The last line indicates that the random variable fRi of particle i is
uncorrelated with that of another particle (δi j ), that it can be chosen independently
for each cartesian direction (Ī), and that it has no memory (δ(t − t  )).
In a simulation we use a finite timestep to integrate the first-order Brownian equa-
tion of motion. Since velocities are absent, the integration is slightly different (and in
fact simpler) than the leap-frog or velocity Verlet methods used to solve the second-
order Langevin equations of motion. Explicitly, the integration algorithm is:

$$\mathbf{R}_i(t+\Delta t) = \mathbf{R}_i(t) + \frac{\mathbf{F}_i(t)}{\zeta_i(t)}\Delta t + \nabla_i D_{s,i}(t)\,\Delta t + \Delta\mathbf{R}_i^R(t) \qquad (5.31)$$
$$\left\langle \Delta R_{i,\alpha}^R\right\rangle = 0 \qquad (5.32)$$
$$\sigma_{\Delta R_{i,\alpha}^R} = \left\langle \left(\Delta R_{i,\alpha}^R\right)^2\right\rangle^{1/2} = \sqrt{2 D_{s,i}(t)\,\Delta t}. \qquad (5.33)$$

We note that the time dependence of the self-diffusion coefficient is indirect, via its
possible dependence on the particle position. Each cartesian component of the ran-
dom displacement ΔRRi , i.e. ΔR iR,x , ΔR iR,y and ΔR iR,z , is sampled from a Gaussian distri-
bution with zero average and standard deviation given by Eq. (5.33).
Convince yourself that for a constant diffusion coefficient (∇D_s,i = 0) and with-
out direct or external forces on the particles (F_i = 0) the above algorithm leads to a
mean square displacement given by ⟨(R_i(t) − R_i(0))²⟩ = 6D_s,i t. Of course, when parti-

cles interact with each other, the mean square displacement may be slowed down due
to temporary caging effects, i.e. in highly crowded systems the measured long time
diffusion coefficient D_i may be significantly lower than D_s,i, which is the diffusion
coefficient of a single free particle.
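A minimal Brownian dynamics update in Python, for the common case of a position-independent diffusion coefficient (∇D_s,i = 0); names are illustrative:

    import numpy as np

    rng = np.random.default_rng()

    def brownian_step(R, F, Ds, kBT, dt):
        # One Brownian dynamics step, Eq. (5.31) with grad(D_s) = 0.
        # R : (N, 3) positions, F : (N, 3) conservative forces,
        # Ds : self-diffusion coefficient, with zeta = kBT/Ds by Eq. (5.14).
        drift  = (Ds / kBT) * F * dt                      # (F_i / zeta_i) * dt
        random = rng.normal(0.0, np.sqrt(2.0 * Ds * dt), size=R.shape)
        return R + drift + random

With F = 0 this reproduces the mean square displacement 6 D_s t quoted above.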

5.5 Mesoscale methods with hydrodynamic interactions

Direct calculation of hydrodynamic interaction forces from Stokes equations: Stokesian dynamics
In many (if not most) realistic situations the hydrodynamic interactions between col-
loidal particles cannot be neglected. For an unbounded system consisting of slowly
moving spheres in a fluid, it is possible to calculate the hydrodynamic force Fhi on

sphere i due to the motion of all spheres. Details of this calculation are given in Ap-
pendix A.3. The result is:

$$\mathbf{F}_i^h = -\sum_{j=1}^{N}\bar{\boldsymbol{\zeta}}_{ij}\cdot\mathbf{V}_j, \qquad (5.34)$$

where to lowest order the friction tensor ζ̄_ij is given by

$$\bar{\boldsymbol{\zeta}}_{ii} = 6\pi\mu a\,\bar{\mathbf{I}}, \qquad (5.35)$$
$$\bar{\boldsymbol{\zeta}}_{ij} = -6\pi\mu a\,\frac{3a}{4R_{ij}}\left(\bar{\mathbf{I}} + \hat{\mathbf{R}}_{ij}\hat{\mathbf{R}}_{ij}\right) \qquad (i \neq j). \qquad (5.36)$$

Here R_ij = |R_i − R_j| is the distance between particle j and i, and R̂_ij = (R_i − R_j)/R_ij
the unit vector pointing from j to i.
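To lowest order, the friction problem can be set up by assembling the pair tensors of Eqs. (5.35)-(5.36) into a single 3N × 3N matrix. A minimal numpy sketch (for illustration only; actual Stokesian dynamics codes add higher-order and near-field lubrication terms):

    import numpy as np

    def friction_matrix(R, a, mu):
        # Assemble the 3N x 3N friction matrix of Eqs. (5.35)-(5.36).
        # R : (N, 3) array of sphere centres, a : sphere radius, mu : fluid viscosity
        N = len(R)
        zeta = np.zeros((3 * N, 3 * N))
        for i in range(N):
            zeta[3*i:3*i+3, 3*i:3*i+3] = 6.0 * np.pi * mu * a * np.eye(3)   # Eq. (5.35)
            for j in range(N):
                if j == i:
                    continue
                Rij = R[i] - R[j]
                dist = np.linalg.norm(Rij)
                Rhat = Rij / dist
                zeta[3*i:3*i+3, 3*j:3*j+3] = (-6.0 * np.pi * mu * a * 3.0 * a / (4.0 * dist)
                                              * (np.eye(3) + np.outer(Rhat, Rhat)))  # Eq. (5.36)
        return zeta

The hydrodynamic forces of Eq. (5.34) then follow from a matrix-vector product with the stacked particle velocities, and solving for the velocities at prescribed forces requires inverting this matrix, which is what makes the method scale as N³.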
It is clear from the above equations that including hydrodynamic interactions in a
simulation of colloidal particles is a computationally expensive operation: all particles
feel all other particles. Solving the equations of motion of all N particles on a level
equivalent to the Brownian dynamics algorithm (i.e. allowing for high frictions and
large time steps) requires the inversion of a 3N × 3N matrix, the computational costs
of which scale as N 3 . This is the basis of Stokesian dynamics simulations [14].
The advantage of Stokesian dynamics simulations is that higher order terms of the
hydrodynamic interactions can be incorporated without much additional costs, allow-
ing for accurate results.
Unfortunately, given the high computational costs, it may come as no surprise that
Stokesian dynamics simulations are typically limited to a few hundred colloidal parti-
cles. Various tricks have been applied to speed up the calculations, collectively known
as accelerated Stokesian dynamics [57], including splitting the hydrodynamic interac-
tions into far-field and near-field terms, and handling the far-field terms more efficiently
in “wave vector space” (this is akin to the so-called Ewald summation technique for
charge interactions). These tricks are rather technical and not easy to implement.
The largest drawback of Stokesian dynamics is that the expressions used for the
hydrodynamic interactions apply to spheres in an unbounded medium. Non-spherical
shapes may be generated by combining several spheres, but it is generally very difficult
to study the behaviour of colloidal suspensions confined by walls or other boundaries.

Using particle-based mesoscale methods for hydrodynamic fluids


The high computational costs of including hydrodynamic interactions through direct
calculations based on the (Navier-)Stokes equations have inspired researchers to em-
bed colloidal particles in relatively cheap particle-based mesoscale methods for hydro-
dynamic fluids.
The idea is illustrated in Fig. 5.5. By excluding fluid from regions occupied by col-
loidal particles, and applying appropriate boundary conditions, movement of a col-
loidal particle will set in motion a flow in the fluid which will affect other colloidal
particles, just as in a real colloidal dispersion.

Figure 5.5: Schematic picture depicting


how a colloidal dispersion can be coarse-
grained by replacing the fluid with a
particle-based hydrodynamics solver such
as lattice Boltzmann (LB), dissipative par-
ticle dynamics (DPD), the Lowe-Anderson
thermostat, or stochastic rotation dynam-
ics (SRD), which is a type of multi-particle
collision dynamics (MPCD) [51]. Each
method introduces an effective coarse-
graining length scale that is chosen to
be smaller than those of the mesoscopic
colloids but much larger than the natu-
ral length scales of a microscopic fluid.
By obeying local momentum conservation,
these methods reproduce Navier-Stokes
hydrodynamics on larger length scales. For
LB, thermal fluctuations must be added in
separately, but these emerge naturally for
the other three methods.

A large number of particle-based mesoscale methods for hydrodynamic fluids ex-


ist. The best known are:

• DSMC: direct simulation Monte Carlo [10],

• LA: Lowe-Andersen method [40, 41],

• LGA and LBM: lattice gas automata and lattice Boltzmann methods [58],

• DPD: dissipative particle dynamics [23, 25, 31],

• MPCD: multi-particle collision dynamics [24, 29, 42, 51].

All these methods follow the idea of coarse-graining the countless molecules in a fluid
to a much lower number of particles. The coarse-grained fluid retains its viscous prop-
erties by performing efficient collisions between the fluid particles. The collisions are
constructed such that they obey local conservation of mass, momentum and (in most
cases) energy. These local conservation laws are sufficient to generate a correct Navier-
Stokes hydrodynamics in the continuum limit.
In direct simulation Monte Carlo (DSMC) simulations, collisions between neigh-
bouring particles are executed probabilistically, based on the relative velocities and
particle sizes using kinetic theory of gases [10]. DSMC is particularly useful for rarefied
gas flows, where the Knudsen number is of the order of 1 or larger.
In the Lowe-Andersen (LA) method, with a certain frequency random neighbouring
pairs of particles within a certain range are picked to collide with each other [40, 41].

The collision is executed by resampling the relative velocity (the component along the
line connecting the pair of particles) from a Maxwell-Boltzmann distribution.
In lattice gas automata (LGA), particles live on a lattice and can attain a low number
of discrete values for the velocity [58]. Collisions are executed with probabilities cho-
sen such that the relaxation of hydrodynamic velocity fluctuations is isotropic on suf-
ficiently large length scales. Lattice Boltzmann methods (LBM) are based on the same
idea of particles colliding on a lattice, but are propagating the probability density distri-
bution according to the (linearised and pre-averaged) Boltzmann equation [58]. Ther-
mal fluctuations, which are crucial for mesoscopic systems, are absent in the original
Lattice Boltzmann method, but can be added through a fluctuating stress tensor [1,20].
In these lectures we will focus on the two last common hydrodynamic mesoscale
methods, namely dissipative particle dynamics and multi-particle collision dynamics.

5.6 Dissipative particle dynamics

Two innovations
Dissipative particle dynamics (DPD) [23,25,31] is a popular particle-based method that
includes hydrodynamics and thermal fluctuations. It can be viewed as an extension of
standard (Newtonian) molecular dynamics techniques, but with two important inno-
vations:

1. soft potentials that allow large time steps and rapid equilibration,

2. a Galilean-invariant5 thermostat that locally conserves momentum and there-


fore generates correct Navier-Stokes hydrodynamics.

The statistical mechanical origins of innovation (1) of DPD, the use of soft potentials
out of equilibrium, are still under debate, but are often explained as resulting from
viewing the DPD particles as “clumps” of the underlying fluid.
Innovation (2), on the other hand, can be put on firmer statistical mechanical foot-
ing, and can be usefully employed to study the dynamics of complex systems with
other types of interparticle interactions. The main advantage of the DPD thermo-
stat is that, by preserving momentum conservation, the hydrodynamic interactions
that intrinsically arise from microcanonical MD are preserved for calculations in the
canonical ensemble. Other thermostats (such as the ones treated in section 4.4) typi-
cally screen the hydrodynamic interactions beyond a certain length scale because they
introduce an effective friction with a hypothetical stagnant background.6 For weak
damping this may not be a problem, but for strong damping it could be.
5
Galilean invariant means that the behaviour of the system is exactly the same in another inertial
frame of reference. In other words, when the velocities of all particles are shifted by a constant amount
V, the updates of particle velocities are exactly the same as in the original system.
6
The molecular dynamics thermostats introduced in section 4.4 rescale absolute velocities, mean-
ing that macroscopic flow velocities are dampened to zero. These thermostats are not Galilean invariant.

Figure 5.6: In dissipative particle dynamics, the conservative force F^C_ij and dissipative force F^D_ij decay linearly from a maximum value for pair distance r_ij = 0 to zero for pair distance r_ij = r_c (green line). The strength ω^R(r_ij) of the random force decays as the square root of this (red line).

Simulating a pure DPD fluid


In DPD, the force on each particle i results from a sum of pair-wise forces:

$$\mathbf{F}_i = \sum_{j\neq i}\left(\mathbf{F}_{ij}^C + \mathbf{F}_{ij}^D + \mathbf{F}_{ij}^R\right), \qquad (5.37)$$

with

$$\mathbf{F}_{ij}^C = \begin{cases} a_{ij}\left(1 - \dfrac{r_{ij}}{r_c}\right)\hat{\mathbf{r}}_{ij} & \text{for } r_{ij} \le r_c \\ 0 & \text{for } r_{ij} > r_c \end{cases} \qquad (5.38)$$
$$\mathbf{F}_{ij}^D = -\gamma\,\omega^D(r_{ij})\left[\left(\mathbf{v}_i - \mathbf{v}_j\right)\cdot\hat{\mathbf{r}}_{ij}\right]\hat{\mathbf{r}}_{ij} \qquad (5.39)$$
$$\mathbf{F}_{ij}^R = \sqrt{\frac{2\gamma k_B T\,\omega^D(r_{ij})}{\Delta t}}\,\xi_{ij}\,\hat{\mathbf{r}}_{ij} \qquad (5.40)$$

Here r_ij = |r_i − r_j| is the distance between particle i and j and r̂_ij = (r_i − r_j)/r_ij is the
unit vector pointing from particle j to particle i .
FCij is a soft conservative pair force mimicking internal pressure and Van der Waals
interactions. It decays linearly from an amplitude ai j for fully overlapping particles
(r i j = 0) to zero at a cut-off radius r c , see Figure 5.6. The parameter ai j controls the
strength of this repulsive force and solvation properties. A typical value that repro-
duces the properties and compressibility of water is ai j = 75k B T /(ρ # r c4 ) for relatively
high number densities ρ # > 3r c−3 [25]. Contrary to, e.g., the Lennard-Jones force the
conservative force does not diverge for small distances.7 This allows for relatively large
integration time steps.
FDi j is a dissipative pair force mimicking the effects of viscosity of the fluid. Its
strength is set by the friction coefficient γ and it depends on the pair distance through
a weight function ωD (r i j ). The weight function is usually also a linearly decaying func-
7
Note that it is not forbidden to use a different interaction together with the DPD dissipative and fric-
tion forces. The linear force Eq. (5.38) is the most common in DPD literature.

tion, with the same cut-off distance r_c as the conservative force (Figure 5.6):

$$\omega^D(r_{ij}) = \begin{cases} 1 - \dfrac{r_{ij}}{r_c} & \text{for } r_{ij} \le r_c \\ 0 & \text{for } r_{ij} > r_c \end{cases} \qquad (5.41)$$

FRij is a random pair force mimicking the kinetic energy input from the microscopic
degrees of freedom that have been coarse-grained out. In Eq. (5.40) we assumed, as in
Langevin dynamics, that the random force is constant during an integration time step
Δt ; ξi j is a symmetric random number (ξi j = ξ j i ) with zero mean and unit variance.
It should come as no surprise that the magnitude and distance dependence ωR (r i j )
of the random pair force is linked to the magnitude and distance dependence of the
dissipative force. This is another consequence of the fluctuation-dissipation theorem
we encountered already in section 5.3.
The total force Fi on a DPD particle i is used to update its velocity and position, in
exactly the same way as in standard molecular dynamics, for instance by the velocity
Verlet algorithm introduced in section 2.7.
The dissipative and random forces are pair forces obeying Newton’s third law: F^D_ij =
−F^D_ji and F^R_ij = −F^R_ji. This is exactly what leads to a local conservation of momentum,
and hydrodynamic behaviour at larger length scales.
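A minimal sketch of the three DPD pair forces of Eqs. (5.38)-(5.40) for a single pair of particles (Python; names are illustrative):

    import numpy as np

    rng = np.random.default_rng()

    def dpd_pair_force(ri, rj, vi, vj, aij, gamma, kBT, rc, dt):
        # Conservative + dissipative + random DPD force on particle i due to j,
        # Eqs. (5.38)-(5.40). Returns the zero vector for r_ij > r_c.
        rij_vec = ri - rj
        rij = np.linalg.norm(rij_vec)
        if rij >= rc:
            return np.zeros(3)
        rhat = rij_vec / rij
        omega_D = 1.0 - rij / rc                                    # Eq. (5.41)
        FC = aij * omega_D * rhat                                   # Eq. (5.38)
        FD = -gamma * omega_D * np.dot(vi - vj, rhat) * rhat        # Eq. (5.39)
        xi = rng.normal()            # in a pair loop, use the same xi for (j, i)
        FR = np.sqrt(2.0 * gamma * kBT * omega_D / dt) * xi * rhat  # Eq. (5.40)
        return FC + FD + FR

In a full simulation each pair is visited once, the force is applied with opposite sign to particle j, and the resulting total forces are used in the velocity Verlet update, so that the symmetry ξ_ij = ξ_ji and Newton's third law are satisfied automatically.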

Embedding colloidal particles in a DPD fluid


A DPD fluid may be used to transmit hydrodynamic interactions between colloidal
particles. This may be accomplished by modelling a colloidal particle either as an extra
large DPD particle with altered interaction forces with the surrounding DPD fluid, or
by freezing multiple DPD spheres together into a “raspberry” model.
For the first option we could choose the conservative force on a colloid c of radius
a due to its interaction with a DPD particle i as a shifted equivalent of the DPD fluid-fluid
interaction (Figure 5.7):

$$\mathbf{F}_{ci}^C = a_c\,\frac{\delta_{ci}(r_{ci})}{r_c}\,\hat{\mathbf{r}}_{ci}, \qquad (5.42)$$
where ac is the strength of the interaction, r ci is the distance between the centres of
particle i and colloid c, r̂ci the unit vector pointing from particle i to colloid c, and the
(positive) overlap δ is found by subtracting r ci from the colloid radius a plus cut-off
range r c :

$$\delta_{ci}(r_{ci}) = \begin{cases} a + r_c - r_{ci} & \text{for } r_{ci} \le a + r_c \\ 0 & \text{for } r_{ci} > a + r_c \end{cases} \qquad (5.43)$$

Direct colloid-colloid interactions can be included similarly, by an interaction shifted


over 2a, or by including harder interactions approximating hard spheres. In any case,
for the fluid to behave approximately as a continuum liquid, the radius a of the col-
loidal particle should be much larger than the interaction range r c of the fluid particles.

Figure 5.7: A colloidal particle may be embedded in a DPD fluid by choosing a conservative force F^C_ci/a_c between colloid and fluid DPD particle (red) which is shifted by the colloidal radius a compared to the fluid-fluid conservative force F^C_ij/a_ij (green). In this example the radius and shift are a = 5r_c.

In practice, a ≥ 5r c may already be large enough. Note that by choosing the range r c
and strength ac of the colloid-DPD interaction, the solvability of the colloidal particle
in the DPD fluid can be tuned. Also note that the forces always point in a radial direc-
tion. As a consequence, no torque can be applied to the colloidal sphere. This method
therefore corresponds to slip-boundary colloidal particles.
For the second option a large number of DPD particles (typically a few hundred)
is frozen into a more-or-less spherical assembly and remain so for the rest of the sim-
ulation. The assembly is allowed to translate and rotate, but the internal degrees of
freedom remain fixed. Each colloidal DPD particle interacts with free DPD particles in
the surrounding fluid in the same way as two free DPD particles interact, except that
the force on the colloidal DPD particle is not used to directly update its position and
velocity. Rather, all forces and torques relative to the centre of mass of the assembly are
summed, and a rigid body update is applied using the total mass and moment of in-
ertia tensor of the assembly. Because the free DPD particles can also apply transversal
forces on the colloidal assembly, this method corresponds to stick-boundary colloidal
particles.

Advantages and disadvantages of DPD


The advantages of DPD are many. The method is a simple extension of molecular dy-
namics. The method has been long around, so a lot of experience has been gained.
The interactions are soft, allowing for relatively large (compared to molecular dynam-
ics simulations) time steps. Coupling to colloidal particles is also easy, when treating
them as extra large DPD particles.
However, there are also disadvantages. First, just as the case of molecular interac-
tions, the repulsive interactions between the particles cause an ordering in the fluid
structure, especially near walls and colloidal particles, that may be felt as oscillating
conservative interactions between the colloidal particles when their surfaces are closer
than a couple of r c apart. For a molecular fluid these oscillations extend only for a few
Å, much smaller than the typical colloidal radius a, whereas for a DPD fluid the oscil-
lations extend over a distance comparable to the colloidal radius.

Second, to be in the hydrodynamic limit, colloidal particles need to be much larger


than the DPD fluid particles. In the volume occupied by each colloidal particle, we
typically have a few hundred DPD fluid particles. The number of DPD fluid particles
quickly becomes extremely large, especially at lower colloidal volume fractions. This
points to the final disadvantage: despite the gains achieved over molecular dynamics
(lower number of particles, larger time step), the interactions between the DPD parti-
cles are still based on pair interactions. Even with a smart use of cell linked lists and/or
neighbourlists, this means that the computational effort increases faster-than-linear
with the number of DPD particles N (typically as N ln N , as in molecular dynamics).

5.7 Multi-particle collision dynamics

A brief history of MPCD

In 1999, Malevanets and Kapral [42] derived a method now called stochastic rotation
dynamics (SRD). In many ways it resembles the much older direct simulation Monte
Carlo (DSMC) method of Bird [10]. The particles are ideal, and move in a continuous
space, subject to Newton’s laws of motion. At discrete timesteps, a coarse-grained col-
lision step allows particles to exchange momentum. In SRD, space is partitioned into
a rectangular grid, and at discrete time-steps the particles inside each cell exchange
momentum by rotating their velocity vectors relative to the center of mass velocity of
the cell. One can imagine other collision rules as well. As long as they locally con-
serve momentum and energy, they generate the correct Navier Stokes hydrodynamics.
We may therefore view SRD as a specific implementation of the more general class of
multi-particle collision dynamics (MPCD) methods.
Soon after its introduction, it was pointed out by Ihle and Kroll that for MPCD it is
necessary to include a grid-shift procedure to enforce Galilean invariance [33].
An important advantage of SRD is that its simplified dynamics have allowed the
analytic calculation of several transport coefficients [34, 36, 53], greatly facilitating its
use.
SRD can be applied to directly simulate flow [3,38], but its stochastic nature means
that a noise average must be performed to calculate flow lines, and this may make
it less efficient than pre-averaged methods like LB. Where SRD becomes more attrac-
tive is for the simulation of complex particles embedded in a solvent. Coupling can
be achieved by allowing direct collisions, obeying Newton’s laws of motion, between
the SRD solvent and the suspended particles [51]. Such an approach is important for
systems, such as colloidal suspensions, where the solvent and colloid time and length-
scales need to be clearly separated. In these lectures we will focus on this latter case.

Simulating a pure SRD fluid


In SRD, the solvent is represented by a large number N (typically 10⁶ or more) of par-
ticles of mass m. Here and in the following, we will call these “fluid” particles, with the
caveat that, however tempting, they should not be viewed as some kind of composite
particles or clusters made up of the underlying (molecular) fluid. Instead, SRD should
be interpreted as a Navier-Stokes solver that includes thermal noise. The particles are
merely a convenient computational device to facilitate the coarse-graining of the fluid
properties.
In the first (propagation) step of the algorithm, the positions and velocities of the
fluid particles are propagated for a time δtc (the time between collision steps) by ac-
curately integrating Newton’s equations of motion,

m dvi /dt = fi ,                    (5.44)
dri /dt = vi .                    (5.45)

ri and vi are the position and velocity of fluid particle i , respectively while fi is the total
(external) force on particle i , which may come from an external field such as gravity,
or fixed boundary conditions such as hard walls, or moving boundary conditions such
as suspended colloids. The direct forces between pairs of fluid particles are, however,
neglected in the propagation step. Herein lies the main advantage – the origin of the
efficiency – of SRD. Instead of directly treating the interactions between the fluid parti-
cles through pair forces (as in molecular dynamics and dissipative particle dynamics),
a coarse-grained collision step is performed at each time-step δtc : First, space is par-
titioned into cubic cells of volume a0³ (Fig. 5.8), resulting in on average γ = nf a0³ SRD
particles per cell (more on how to choose a0 or γ later). Next, for each cell, the particle
velocities relative to the center of mass velocity vcm of the cell are then rotated:

vcm = Σi∈cell mi vi / Σi∈cell mi                    (5.46)
vi′ = vcm + Ω (vi − vcm ) .                    (5.47)

Ω is a rotation matrix which rotates velocities by a fixed angle α around a randomly


oriented axis. The aim of the collision step is to transfer momentum between the fluid
particles while conserving the total momentum and energy of each cell. Check for
yourself that this is indeed the case. Both the collision and the streaming step conserve
phase-space volume, and it has been shown that the single particle velocity distribu-
tion evolves to a Maxwell-Boltzmann distribution [42].
The rotation procedure can thus be viewed as a coarse-graining of particle colli-
sions over space and time. Because mass, momentum, and energy are conserved lo-
cally, the correct hydrodynamic (Navier Stokes) equations are captured in the contin-
uum limit, including the effect of thermal noise [42].

Figure 5.8: In multi-particle collision dynamics simulations, the fluid is represented by point particles. In the collision step, space is partitioned into cubic cells (lines), and all particles in a cell (e.g. all red particles) exchange momentum simultaneously.

At low temperatures or small collision times δtc , the transport coefficients of SRD,
as originally formulated [42], show anomalies caused by the fact that fluid particles in
a given cell can remain in that cell and participate in several collision steps [33]. Under
these circumstances the assumptions of molecular chaos and Galilean invariance are
incorrect. However, this anomaly can be cured by applying a random shift of the cell
coordinates before the collision step [33, 34].8
From the above description, it is clear that it is important to be able to quickly iden-
tify all particles residing in a certain cell. We already know how to do this efficiently:
through a cell-linked-list (see section 2.4)! When creating the linked list we should take
into account the random grid shift.
The following pseudocode generates a cell-linked-list with a random grid shift stored globally as gridshiftx, gridshifty and gridshiftz (a sketch with illustrative variable names). Here we have assumed periodic boundary conditions.9

  subroutine make_celllist
    ! draw a new random grid shift, uniform between 0 and 1 (in units of the cell size)
    gridshiftx = ranuniform()
    gridshifty = ranuniform()
    gridshiftz = ranuniform()
    ! empty all cells (hoc = head of chain)
    do icell = 1, ncells
      hoc(icell) = 0
    enddo
    ! sort all N fluid particles into the shifted cells
    do i = 1, N
      cellx = int( (rx(i) + gridshiftx)*invcellsizex )
      celly = int( (ry(i) + gridshifty)*invcellsizey )
      cellz = int( (rz(i) + gridshiftz)*invcellsizez )
      icell = 1 + mod(cellx,Lx) + mod(celly,Ly)*Lx + mod(cellz,Lz)*Lx*Ly
      ll(i) = hoc(icell)
      hoc(icell) = i
    enddo
  end subroutine

8 It should also be noted that the collision step in SRD does not locally conserve angular momentum. As a consequence, the stress tensor S̄ is not, in general, a symmetric function of the derivatives of the flow field (although it is still rotationally symmetric) [53]. The asymmetric part can be interpreted as a viscous stress associated with the vorticity ∇ × v of the velocity field v(r). The stress tensor will therefore depend on the amount of vorticity in the flow field. However, the total force on a fluid element is determined not by the stress tensor itself, but by its divergence ∇ · S̄, which is what enters the Navier-Stokes equations. Taking the divergence causes the explicit vorticity dependence to drop out (the divergence of a curl is zero).

9 If walls are present in a certain direction, we should allow for one more cell in each cartesian direction because of the random grid shift. For example, cellx can be in the range between 0 and Lx + 1. Therefore with walls in all directions, the cell index should be calculated without the modulo wrapping and with the enlarged number of cells per direction, e.g. icell = 1 + cellx + celly*(Lx+1) + cellz*(Lx+1)*(Ly+1).
Here invcellsizex is the inverse of the cell size in the x-direction, and similarly for the other directions. In practice, in most SRD codes the lengths are expressed in units of one cell size a0 , i.e. a0 ≡ 1, and masses in units of an SRD mass m, i.e. m ≡ 1. Because of the first choice, the multiplication by invcellsizex etc. can be avoided, making the above routine even more efficient. Because of the second choice the rotation procedure can be made more efficient, as we will see next.
The cell-linked-list must be updated each time right before the rotation procedure
is executed. The rotation procedure Eq. (5.47) mimics the collisions between the SRD
particles and involves choosing a randomly oriented axis independently for each cell
(in 2D simulations the only choice is up or down with equal probability), and then ro-
tating all relative velocities over an angle α around this axis. The following pseudocode (again a sketch with illustrative variable names) performs a rotation of α = π/2 around a random axis for particles that have just been sorted into a cell-linked-list by the previous routine. Here we assume that the mass of an SRD particle is unity.

  subroutine rotate_velocities
    do icell = 1, ncells
      ! first pass through the cell: count the particles and accumulate
      ! the centre of mass velocity (all masses are unity)
      npart = 0
      vcmx = 0; vcmy = 0; vcmz = 0
      i = hoc(icell)
      do while (i /= 0)
        npart = npart + 1
        vcmx = vcmx + vx(i)
        vcmy = vcmy + vy(i)
        vcmz = vcmz + vz(i)
        i = ll(i)
      enddo
      if (npart >= 2) then
        vcmx = vcmx/npart
        vcmy = vcmy/npart
        vcmz = vcmz/npart
        ! choose a randomly oriented unit axis n and construct the rotation
        ! matrix rotmat for a rotation by pi/2 around n:
        ! rotmat(a,b) = n(a)*n(b) - sum_c eps(a,b,c)*n(c)
        call random_unit_vector(n)
        call make_rotation_matrix(n, rotmat)
        ! second pass: rotate all velocities relative to the centre of mass
        i = hoc(icell)
        do while (i /= 0)
          dv(1) = vx(i) - vcmx
          dv(2) = vy(i) - vcmy
          dv(3) = vz(i) - vcmz
          call matmult(rotmat, dv, dvrot)
          vx(i) = vcmx + dvrot(1)
          vy(i) = vcmy + dvrot(2)
          vz(i) = vcmz + dvrot(3)
          i = ll(i)
        enddo
      endif
    enddo
  end subroutine
Here matmult executes a matrix multiplication of the rotation matrix rotmat with the vector of relative velocities dv and stores the result in a vector of rotated velocities dvrot. For a rotation over an angle α other than π/2, the rotation matrix should be adjusted. Note that it only makes sense to apply the rotation procedure if the cell contains at least 2 SRD particles, hence the check for npart >= 2. This is also used to prevent division by zero in the calculation of the centre of mass velocity if a cell happens to have no particles.

Transport properties of an SRD fluid


The simplicity of SRD collisions has facilitated the analytical calculation of many trans-
port coefficients [34–36, 53]. These analytical expressions are particularly useful be-
cause they enable us to efficiently tune the viscosity and other properties of the fluid,
without the need for trial and error simulations. In this subsection we will summarize
a number of these transport coefficients, where possible giving a simple derivation of
the dominant physics.

Units and the dimensionless mean-free path

As already mentioned, in most SRD simulation codes, lengths are in units of collision
cell-size a0 and masses in units of the mass m of an SRD particle. In simulations of
mechanical systems only three independent scales can be set, so we have one scale
left. Some authors choose the collision time interval δtc as the third unit, but more
often the thermal energy k B T is chosen as the third unit. In the examples of these
lectures we will also choose k B T ≡ 1.

Table 5.1: Units and simulation parameters for an SRD fluid with embedded colloidal particles. The parameters listed in the table all need to be independently fixed to determine a simulation.

Basic units
  a0 = length
  kB T = energy
  m = mass

Derived units
  t0 = a0 √(m/kB T ) = time
  D0 = a0²/t0 = a0 √(kB T /m) = diffusion constant
  ν0 = a0²/t0 = a0 √(kB T /m) = kinematic viscosity
  μ0 = m/(a0 t0 ) = √(m kB T )/a0² = viscosity

Independent fluid simulation parameters
  γ = average number of particles per cell
  δtc = SRD collision time interval
  α = SRD rotation angle
  L = box length

Independent colloid simulation parameters
  ΔtMD = MD time step
  σcc = colloid-colloid collision diameter, Eq. (5.57)
  εcc = colloid-colloid energy scale, Eq. (5.57)
  σcf = colloid-fluid collision diameter, Eq. (5.58)
  εcf = colloid-fluid energy scale, Eq. (5.58)
  Nc = number of colloids
  Mc = colloid mass

All other units can therefore be expressed in units of a0 , m and kB T . Time, for example, is expressed in units of t0 = a0 √(m/kB T ), the number density as nf = γ/a0³, and other derived units can be found in table 5.1. It is instructive to express the transport coefficients and other parameters of the SRD fluid in terms of the dimensionless mean-free path

λ = (δtc /a0 ) √(kB T /m) = δtc /t0 ,                    (5.48)

which provides a measure of the average fraction of a cell size that a fluid particle trav-
els between collisions.
This particular choice of units helps highlight the basic physics of the coarse-graining
method. The (nontrivial) question of how to map them onto the units of a real physical system will be discussed later in this section.

Fluid self-diffusion coefficient

A simple back of the envelope estimate of the self-diffusion coefficient Df of a fluid particle can be obtained from a random-walk picture. In a unit of time t0 , a particle will experience 1/λ collisions, in between which it moves an average distance λa0 . By viewing this motion as a random walk, the diffusion coefficient for a pure SRD fluid particle is therefore given by Df /D0 ≈ λ, expressed in units of D0 = a0²/t0 = a0 √(kB T /m).
A more systematic derivation of the diffusion coefficient of a fluid particle, but still
within a random collision approximation, results in the following expression [34, 55]:

Df /D0 = λ [ 3γ / (2(1 − cos(α))(γ − 1)) − 1/2 ] .                    (5.49)

The dependence on γ is weak. If, for example, we take α = π/2, the value we routinely use in these lectures, then limγ→∞ Df /D0 = λ, the same as estimated above.
While Eq. (5.49) is accurate for larger mean-free paths, where the random collision
approximation is expected to be valid, it begins to show deviations from simulations
for λ ≲ 0.6 [55], when longer-time kinetic correlations begin to develop. These correlations induce interactions of a hydrodynamic nature that enhance the diffusion coefficient for a fluid particle. For example, for λ = 0.1 and α = 3π/4, the fluid self-diffusion constant Df is about 25% larger than the value found from Eq. (5.49).

Kinematic viscosity

The spread of a velocity fluctuation δv(r) in a fluid can be described by the three-
dimensional equivalent of Eq. (3.16):

∂δv(r)/∂t = ν ∇² δv(r)                    (5.50)

where ν is the kinematic viscosity, which determines the rate at which momentum or vorticity “diffuses away”. For our choice of units, the unit of kinematic viscosity is ν0 = a0²/t0 = a0 √(kB T /m), which is the same as that for particle self-diffusion, i.e. D0 = ν0 .

Momentum diffusion occurs through two mechanisms:

1. By particles streaming between collision steps, leading to a “kinetic” contribution to the kinematic viscosity νkin . Since for this gas-like contribution the momentum is transported by particle motion, we expect νkin to scale like the particle self-diffusion coefficient Df , i.e. νkin /ν0 ∼ λ.

2. By momentum being re-distributed among the particles of each cell during the
collision step, resulting in a “collisional” contribution to the kinematic viscosity
νcol . This mimics the way momentum is transferred due to inter-particle colli-
sions, and would be the dominant contribution in a dense fluid such as water
at standard temperature and pressure. Again a simple random-walk argument
explains the expected scaling in SRD: Each collision step distributes momentum
among particles in a cell, making a step-size that scales like a0 . Since there are
1/λ collision steps per unit time t0 , this suggests that the collisional contribution
to the kinematic viscosity should scale as νcol /ν0 ∼ 1/λ.

Accurate analytical expressions for the kinematic viscosity ν = νkin + νcol of SRD have been derived [35, 36], and these can be rewritten in the following dimensionless form:

νkin /ν0 = (λ/3) f^μ_kin (γ, α)                    (5.51)
νcol /ν0 = (1/(18λ)) f^μ_col (γ, α),                    (5.52)

where the dependence on the collision angle α and fluid number density γ is subsumed in the following two factors:

f^μ_kin (γ, α) = 15γ / [ (γ − 1 + e^(−γ))(4 − 2 cos(α) − 2 cos(2α)) ] − 3/2,                    (5.53)
f^μ_col (γ, α) = (1 − cos(α))(1 − 1/γ + e^(−γ)/γ).                    (5.54)

These factors only depend weakly on γ for the typical parameters used in simulations. For example, at α = π/2 and γ = 5, the angle and number density we routinely use, they take the values f^μ_kin (5, π/2) = 1.620 and f^μ_col (5, π/2) = 0.801. For this choice of collision angle α, they monotonically converge to f^μ_kin = f^μ_col = 1 in the limit of large γ.
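As a quick numerical check, the following short Python sketch (our own function names; not part of any particular SRD code) evaluates Eqs. (5.48)–(5.54), together with the fluid self-diffusion coefficient of Eq. (5.49), for a given set of simulation parameters.

import numpy as np

def srd_transport_coefficients(dt_c, gamma, alpha, a0=1.0, m=1.0, kBT=1.0):
    """Evaluate the SRD transport expressions, Eqs. (5.48)-(5.54).
    Returns the dimensionless mean free path, the kinetic and collisional
    contributions to the kinematic viscosity (in units of nu_0) and the
    fluid self-diffusion coefficient of Eq. (5.49) (in units of D_0)."""
    t0 = a0 * np.sqrt(m / kBT)                 # unit of time
    lam = dt_c / t0                            # dimensionless mean free path, Eq. (5.48)
    # order-unity factors of Eqs. (5.53) and (5.54)
    f_kin = 15.0 * gamma / ((gamma - 1.0 + np.exp(-gamma)) *
                            (4.0 - 2.0 * np.cos(alpha) - 2.0 * np.cos(2.0 * alpha))) - 1.5
    f_col = (1.0 - np.cos(alpha)) * (1.0 - 1.0 / gamma + np.exp(-gamma) / gamma)
    nu_kin = lam / 3.0 * f_kin                 # Eq. (5.51), in units of nu_0
    nu_col = f_col / (18.0 * lam)              # Eq. (5.52), in units of nu_0
    D_f = lam * (3.0 * gamma / (2.0 * (1.0 - np.cos(alpha)) * (gamma - 1.0)) - 0.5)  # Eq. (5.49)
    return lam, nu_kin, nu_col, D_f

# routinely used parameters: delta t_c = 0.1 t0, gamma = 5, alpha = pi/2
print(srd_transport_coefficients(0.1, 5.0, np.pi / 2))
# gives lambda = 0.1, nu_kin ~ 0.054, nu_col ~ 0.45 and D_f ~ 0.14 (all in SRD units)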
Figure 5.9 shows the dependence of the kinematic viscosity on the dimensionless
mean free path λ.

Figure 5.9: Kinematic viscosity of an SRD fluid for α = π/2 and γ = 5 as a function of dimensionless mean free path λ. The kinetic contribution νkin is indicated in green, the collisional contribution νcol in red, and the total kinematic viscosity ν in black.

Dynamic viscosity

The dynamic viscosity μ is related to the kinematic viscosity by μ = ρf ν, where ρf = mγ/a0³ is the fluid mass density. From Eqs. 5.51–5.54 it follows that the two contributions to the dynamic viscosity can be written in dimensionless form as:

μkin /μ0 = (λγ/3) f^μ_kin (γ, α)                    (5.55)
μcol /μ0 = (γ/(18λ)) f^μ_col (γ, α),                    (5.56)

where μ0 = m/(a0 t0 ) = √(m kB T )/a0² is the unit of dynamic viscosity.
In contrast to the expressions for the diffusion of a fluid particle or a tagged par-
ticle, Eqs. (5.51) - (5.56) compare quantitatively to simulations over a wide range of
parameters [34, 36, 55]. This is because the derivation of the SRD particle diffusion
assumes molecular chaos and neglects density fluctuations. These are included in
the more rigorous derivation of the viscosity. For the parameters we routinely use, i.e. λ = 0.1, α = π/2, γ = 5, the collisional contribution to the viscosity dominates: νkin = 0.054ν0 and νcol = 0.45ν0 . This is typical for λ ≪ 1, where ν and μ are, to a good first approximation, given by the collisional contribution only.

Difference between an SRD fluid and an ideal gas

It is instructive to compare the expressions derived in this section to what one would
expect for simple gases, where, as famously first derived and demonstrated experimen-
tally by Maxwell in the 1860’s [44], the dynamic viscosity is independent of density. This
result differs from the kinetic (gas-like) contribution to the viscosity in Eq. (5.55), be-
cause in a real dilute gas the mean-free path scales as λ ∝ 1/γ, canceling the dominant
density dependence in μki n ∝ γλ. The same argument explains why the self-diffusion
and kinematic viscosity of the SRD fluid are, to first order, independent of γ, while in
a gas they would scale as 1/γ. In SRD, the mean-free path λ and the density γ can
be varied independently. Moreover, the collisional contribution to the viscosity adds

Figure 5.10: Colloid-colloid interaction ϕcc (Eq. 5.57, green) and colloid-fluid interaction ϕcf (Eq. 5.58, red) as a function of distance r , in units of kB T . Here we show the case σcf = 2a0 , σcc = 4.4a0 , and εcf = εcc = 2.5kB T , which we regularly employ.

a new dimension, allowing a much wider range of physical fluids to be modeled than
just simple gases.
For example, the Schmidt number, which according to section 3.3 is the ratio of
momentum diffusivity to mass-diffusivity, is close to unity in an ideal gas, whereas it
is close to 1000 in liquid water. The Schmidt number of an SRD fluid can be made
arbitrarily large by choosing an arbitrarily small λ.

Embedding colloidal particles in an SRD fluid


Colloidal particles can be embedded in an SRD fluid by a hybrid molecular dynam-
ics (MD) scheme that couples a set of colloids to a bath of SRD particles [43, 51]. In
these lectures we restrict ourselves to hard sphere-like colloids with steep interpar-
ticle repulsions, although attractions between colloids can easily be added on [48].
The colloid-colloid and colloid-fluid interactions, ϕcc (r ) and ϕc f (r ) respectively, are
integrated via a normal MD procedure, while the fluid-fluid interactions are coarse-
grained with SRD. Because the number of fluid particles vastly outnumbers the num-
ber of HS colloids, treating their interactions approximately via SRD greatly speeds up
the simulation.

Colloid-colloid and colloid-solvent interactions

We can approximate pure hard-sphere colloidal interactions by steep repulsive interactions of the Weeks-Chandler-Andersen (WCA) form with high exponents 48 and 24 [27]:

ϕcc (r ) = 4εcc [ (σcc /r )⁴⁸ − (σcc /r )²⁴ + 1/4 ]        (r ≤ 2^(1/24) σcc )
ϕcc (r ) = 0        (r > 2^(1/24) σcc ).                    (5.57)

Similarly, the colloid-fluid interaction takes the WCA form:

ϕcf (r ) = 4εcf [ (σcf /r )¹² − (σcf /r )⁶ + 1/4 ]        (r ≤ 2^(1/6) σcf )
ϕcf (r ) = 0        (r > 2^(1/6) σcf ).                    (5.58)

Figure 5.10 shows these interactions for a specific set of parameters.
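For concreteness, these two interactions can be evaluated with a few lines of Python; the sketch below (our own function names) implements Eqs. (5.57) and (5.58).

import numpy as np

def phi_cc(r, sigma_cc, eps_cc):
    """Colloid-colloid WCA potential with exponents 48 and 24, Eq. (5.57)."""
    if r > 2.0 ** (1.0 / 24.0) * sigma_cc:
        return 0.0
    sr24 = (sigma_cc / r) ** 24
    return 4.0 * eps_cc * (sr24 * sr24 - sr24 + 0.25)

def phi_cf(r, sigma_cf, eps_cf):
    """Colloid-fluid WCA potential with exponents 12 and 6, Eq. (5.58)."""
    if r > 2.0 ** (1.0 / 6.0) * sigma_cf:
        return 0.0
    sr6 = (sigma_cf / r) ** 6
    return 4.0 * eps_cf * (sr6 * sr6 - sr6 + 0.25)

# the parameters of Fig. 5.10: sigma_cf = 2 a0, sigma_cc = 4.4 a0, eps = 2.5 kBT;
# at r = sigma the potentials equal eps, and they vanish beyond their cut-offs
print(phi_cf(2.0, 2.0, 2.5), phi_cc(4.4, 4.4, 2.5))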


The mass m of a fluid particle is typically much smaller than the mass Mc of a colloid, so that the average thermal velocity of the fluid particles is larger than that of the colloid particles by a factor √(Mc /m). For this reason the time-step ΔtMD is usually restricted by the fluid-colloid interaction (5.58), allowing fairly large exponents n for the colloid-colloid interaction ϕcc (r ) = 4εcc [ (σcc /r )^(2n) − (σcc /r )^n + 1/4 ]. Above we have chosen n = 24, which makes the colloid-colloid potential fairly steep, more like hard-spheres, while still soft enough to allow the time-step to be set by the colloid-solvent interaction.
The positions and velocities of the colloidal spheres are propagated through the
velocity Verlet algorithm (see section 2.7) [4] with a time step ΔtMD :

Ri (t + ΔtMD ) = Ri (t ) + Vi (t ) ΔtMD + [ Fi (t )/(2Mc ) ] ΔtMD² ,                    (5.59)
Vi (t + ΔtMD ) = Vi (t ) + [ (Fi (t ) + Fi (t + ΔtMD ))/(2Mc ) ] ΔtMD .                    (5.60)

Ri and Vi are the position and velocity of colloid i , respectively. Fi is the total force
on that colloid, exerted by the fluid particles, an external field, such as gravity, exter-
nal potentials such as repulsive walls, as well as other colloids within the range of the
interaction potential (5.57).
The positions ri and velocities vi of SRD particles are updated by similarly solv-
ing Newton’s eqns. (5.44,5.45) every time-step Δt MD , and with the SRD procedure of
Eq. (5.47) every time-step δtc .
Choosing both Δt MD and δt c as large as possible enhances the efficiency of a sim-
ulation. To first order, each timestep is determined by different physics, Δt MD by the
steepness of the potentials, and δtc by the desired fluid properties, and so there is some
freedom in choosing their relative values. We routinely use δtc /ΔtMD = 4 for our sim-
ulations of sedimentation [50], but other authors have used ratios of 50 [43] or even
larger. In the next subsection we will revisit this question, linking the time-steps to
various dimensionless hydrodynamic numbers and Brownian time-scales.

Stick and slip boundary conditions

Because the surface of a colloid is never perfectly smooth, collisions with fluid particles
transfer angular as well as linear momentum. As demonstrated in Fig. 5.11, the exact
molecular details of the colloid-fluid interactions may be very complex, and mediated
via co- and counter-ions, grafted polymer brushes, etc. However, on the time and
length-scales over which our hybrid MD-SRD method coarse-grains the solvent, these
interactions can be approximated by stick boundary conditions: the tangential velocity
of the fluid, relative to the surface of the colloid, is zero at the surface of the colloid [11,
28]. For most situations, this boundary condition should be sufficient, although in
some cases, such as a non-wetting surface, large slip-lengths may occur [6].

Figure 5.11: Schematic picture depicting how a fluid molecule interacts with a colloid, imparting both linear and angular momentum. Near the colloidal surface, here represented by the shaded region, there may be a steric stabilization layer, or a double-layer made up of co- and counter-ions. In SRD, the detailed manner in which a fluid particle interacts with this boundary layer is represented by a coarse-grained stick or slip boundary condition.

In SRD simulations, stick boundaries are best implemented in combination with


sharp boundaries between the fluid and the wall or a colloidal particle. This way, dur-
ing the streaming step, we can clearly identify SRD particles which find themselves in
overlap with walls or colloidal particles. Stick boundary conditions may then be im-
plemented by applying bounce-back rules to those particles, where both parallel and
perpendicular components of the relative velocity of a fluid particle are reversed upon a
collision with a surface [38]. Bounce-back rules were already discussed in section 2.5.
Alternatively, during the streaming step, stick boundaries can be modeled by a
stochastic rule. As also discussed in section 2.5, after the collision, the relative tangen-
tial velocity v t and relative normal velocity v n of an SRD fluid particle may be taken
from the distributions (in our units where m = 1 and kB T = 1):

P (vn ) ∝ vn exp(−vn²/2)                    (5.61)
P (vt ) ∝ exp(−vt²/2) ,                    (5.62)

so that the wall or colloidal particle also acts as a thermostat [29, 51]. We may argue
that the stochastic rule of Eq. (5.61) is more like a real physical colloid – where fluid-
surface interactions are mediated by steric stabilizing layers or local co- and counter-
ion concentrations – than bounce-back rules are.
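In our units (m = kB T = 1) the distribution (5.61) for the normal component is a Rayleigh distribution, which can be sampled by inverting its cumulative distribution, while the tangential components (5.62) are simply Gaussian. A minimal Python sketch (ours) of such a stochastic boundary collision is:

import numpy as np

rng = np.random.default_rng()

def stochastic_boundary_velocity():
    """Draw the relative velocity of a fluid particle after a collision with a
    stick boundary, in units where m = kBT = 1 (Eqs. 5.61 and 5.62)."""
    # normal component, directed away from the surface:
    # P(v_n) ~ v_n exp(-v_n^2/2) is a Rayleigh distribution; invert its
    # cumulative distribution F(v_n) = 1 - exp(-v_n^2/2)
    v_n = np.sqrt(-2.0 * np.log(1.0 - rng.random()))
    # tangential components: P(v_t) ~ exp(-v_t^2/2), i.e. unit-variance Gaussians
    v_t1, v_t2 = rng.normal(), rng.normal()
    return v_n, v_t1, v_t2

print(stochastic_boundary_velocity())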
Up to this point we have discussed modifications during the streaming step (the
molecular dynamics part) to achieve stick boundaries. Modifications of the collision
(SRD rotation) step are also necessary. During the collision step collision cells may
partially overlap with walls (because of the random grid shift) or colloidal particles
(because of the curvature of the colloids). This results in a lower-than-average num-
ber of particles in such cells, leading to an altered fluid viscosity near the boundaries.
To avoid spurious effects, it is sufficient to add virtual particles inside walls and ob-
jects such that they are filled with the same average number density (γ) as in the bulk
fluid. For colloidal particles, this may be achieved by adding γ(4/3)πσcf³ virtual particles at random locations in each colloidal particle. Each virtual particle is assigned a velocity equal to the local velocity of the colloidal particle plus a Maxwell-Boltzmann distributed random velocity (Figure 5.12).10

Figure 5.12: Same as Fig. 5.11, but now showing the SRD collision cells (square grid), real SRD fluid particles (filled dots) and virtual particles (open dots) which are added to the colloidal particle right before the SRD rotation collision step is executed. The virtual particles, with an average velocity given by the local colloid velocity plus a Maxwell-Boltzmann distributed random component, improve the stick-boundary conditions between the fluid and the colloidal particle.
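A possible implementation for a single colloid is sketched below in Python (our own function and variable names; m = kB T = 1). Random points are drawn uniformly inside the sphere of radius σcf around the colloid centre by rejection sampling, and each virtual particle gets the local rigid-body velocity of the colloid plus a Gaussian thermal contribution.

import numpy as np

rng = np.random.default_rng()

def virtual_particles(R, V, omega, sigma_cf, gamma):
    """Generate virtual SRD particles inside one colloid (units with m = kBT = 1).
    R, V, omega : colloid centre position, velocity and angular velocity
    sigma_cf    : colloid-fluid interaction radius (in units of a0)
    gamma       : average number of SRD particles per collision cell."""
    n_virt = int(round(gamma * 4.0 / 3.0 * np.pi * sigma_cf ** 3))
    pos, vel = [], []
    while len(pos) < n_virt:
        # uniform random point inside the sphere of radius sigma_cf (rejection sampling)
        x = rng.uniform(-sigma_cf, sigma_cf, size=3)
        if np.dot(x, x) > sigma_cf ** 2:
            continue
        pos.append(R + x)
        # local colloid (rigid body) velocity plus a Maxwell-Boltzmann random part
        vel.append(V + np.cross(omega, x) + rng.normal(size=3))
    return np.array(pos), np.array(vel)

# example: a resting colloid with sigma_cf = 2 a0 in a fluid with gamma = 5
pos, vel = virtual_particles(np.zeros(3), np.zeros(3), np.zeros(3), 2.0, 5.0)
print(len(pos))   # about 168 virtual particles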
As is apparent from the above discussion, including colloidal particles with stick
boundary conditions is not trivial. In these lectures we will therefore focus on the much
simpler radial interactions such as those described in Eq. (5.58). These do not trans-
fer angular momentum to a spherical colloid, and so induce effective slip boundary
conditions.11 For many of the hydrodynamic effects the difference with stick boundary
conditions is quantitative, not qualitative, and also well understood.

Spurious depletion forces induced by the fluid

We would like to issue a warning that the additional fluid degrees of freedom may in-
advertently introduce depletion forces between the colloids (Figure 5.13). Because the
number density of SRD particles is much higher than that of the colloids, even a small
overlap between two colloids can lead to enormous (effective) attractions.
For low colloid densities the equilibrium depletion interaction between any two
colloids caused by the presence of the ideal fluid particles is given by [5]:

Φdepl (d ) = n f k B T [Vexcl (d ) − Vexcl (∞)] , (5.63)

where nf = γ/a0³ is the number density of fluid particles and Vexcl (d ) is the (free) vol-
ume excluded to the fluid by the presence of two colloids separated by a distance d .
10 Filling static walls with virtual particles is often done more simply by adding (γ − n) Maxwell-
Boltzmann distributed velocities during the calculation of the centre-of-mass velocity in a wall cell,
where n is the actual number of particles in that particular wall cell.
11 For slip boundaries it is not necessary to add virtual particles because there is no shear gradient
near the boundaries. The viscosity very close to the boundaries is therefore less important than in the
case of stick boundaries.

Figure 5.13: Colloidal particles (spheres) exclude the fluid particles (points) from their volume. If two colloidal particles partly overlap, the (osmotic) pressure forces from the fluid particles are unbalanced, leading to an effective force pushing the two colloidal particles together. Note that depletion forces are equilibrium (thermodynamic) forces.

Figure 5.14: Effective depletion potentials induced between two colloids by the SRD fluid particles, in units of nf kB T σcf³, for hard-sphere fluid-colloid (solid line) and WCA fluid-colloid (dashed line, ε = 2.5 kB T ) interactions taken from Eq. (5.58). The interparticle distance d is measured in units of σcf . Whether these attractive potentials have important effects or not depends on the choice of diameter σcc in the bare colloid-colloid potential (5.57) [51].

The latter is given by

Vexcl (d ) = ∫ d³r { 1 − exp[ −βϕcf (|r − r1 |) − βϕcf (|r − r2 |) ] } ,                    (5.64)

where |r1 − r2 | = d and β = 1/(k B T ). An example is given in Fig. 5.14 where we have
plotted the resulting depletion potential for the colloid-solvent interaction (5.58), with
εcc = εcf = 2.5kB T as routinely used in our simulations, as well as the depletion in-
teraction resulting from a truly hard-sphere colloid-solvent interaction. The latter can
easily be calculated analytically, with the result

Φ^HS_depl (d ) = −nf kB T (4/3)πσcf³ [ 1 − 3d/(4σcf ) + d³/(16σcf³) ]        (for d < 2σcf ).                    (5.65)

For the pure hard-sphere interactions, one could take σcc ≥ 2σc f and the depletion
forces would have no effect. But for interactions such as those used in Eqs. (5.57) and (5.58),
the softer repulsions mean that inter-colloid distances less than σcc are regularly sam-
pled. A more stringent criterion for σcc must therefore be used to avoid spurious depletion effects.
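The hard-sphere result (5.65) is easily evaluated numerically; the Python sketch below (ours) can be used to judge how large the spurious attraction would be at a given overlap, and at which distance it vanishes.

import numpy as np

def phi_depl_hs(d, sigma_cf, n_f, kBT=1.0):
    """Hard-sphere depletion potential of Eq. (5.65); zero for d >= 2 sigma_cf."""
    d = np.asarray(d, dtype=float)
    phi = -n_f * kBT * (4.0 / 3.0) * np.pi * sigma_cf ** 3 * \
          (1.0 - 3.0 * d / (4.0 * sigma_cf) + d ** 3 / (16.0 * sigma_cf ** 3))
    return np.where(d < 2.0 * sigma_cf, phi, 0.0)

# example: sigma_cf = 2 a0 and gamma = 5, i.e. n_f = gamma/a0^3 = 5
print(phi_depl_hs(np.array([0.0, 3.0, 4.0, 4.5]), sigma_cf=2.0, n_f=5.0))
# the contact value at d = 0 is about -168 kBT: even a small overlap of the
# excluded volumes produces an enormous spurious attraction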
Since the depletion potentials can be calculated analytically, one might try counter-
acting them by introducing a compensating repulsive potential of the form Φcomp = −Φdepl between the colloids.

Figure 5.15: When colloidal particles move apart with relative velocity V , fluid must flow into the freed up space. At very small gap widths between the colloidal surfaces this leads to strong lubrication forces F which oppose the direction of V . Note that lubrication forces are non-equilibrium forces that scale with the relative velocity V .

However, there are three problems with this approach:
Firstly, at higher colloid packing fractions, three and higher order interactions may also
need to be added, and these are very difficult to calculate. Secondly, the depletion in-
teractions are not instantaneous, and in fact only converge to their equilibrium average
algebraically in time [60]. While this is not a problem for equilibrium properties, it will
introduce errors for non-equilibrium properties. Finally, when external fields drive the
colloid, small but persistent anisotropies in the solvent density around a colloid may
occur [21]. Although these density variations are (and should be) small, the resulting
variations in depletion interactions can be large.
To avoid these problems, we routinely choose the colloid-fluid interaction range
σc f slightly below half the colloid diameter σcc /2. More precisely, we ensure that the
colloid-colloid interaction equals 2.5k B T at a distance d where the depletion interac-
tions have become zero, i.e., at a distance of twice the colloid-solvent interaction cut-
off radius (leading to σcc ≈ 2.2σc f ). Smaller distances will consequently be rare, and
adequately dealt with by the compensation potential. This solution may be a more
realistic representation anyhow, since in practice for charged and even for sterically sta-
bilized colloids, the effective colloid-colloid diameter σcc is expected to be larger than
twice the effective colloid-fluid diameter σc f . This is particularly so for charged col-
loids at large Debye screening lengths.

Lubrication forces

When two surfaces approach one another, they must displace the fluid between them,
while if they move apart, fluid must flow into the space freed up between the surfaces
(Figure 5.15). At very short inter-surface distances, this results in so-called lubrication
forces, which are repulsive for colloids approaching each other, and attractive for col-
loids moving apart [19, 28, 56]. These forces are expected to be particularly important
for driven dense colloidal suspensions [59].
An additional advantage of our choice of the diameters σcc and σcf above is that more fluid par-
ticles will fit in the space between two colloids, and consequently the lubrication forces
will be more accurately represented. It should be kept in mind that for colloids, the ex-
act nature of the short-range lubrication forces will depend on physical details of the
surface, such as its roughness, or presence of a grafted polymeric stabilizing layer [46].
For perfectly smooth colloids, analytic limiting expressions for the lubrication forces

can be derived [28], showing a divergence at short distances. We have confirmed that
SRD resolves these lubrication forces down to surprisingly low interparticle distances,
of the order of half a collision cell size a0 [52]. But, at some point this will break down,
depending of course on the choice of simulation parameters (such as σc f /a0 , λ, and
γ), as well as the details of the particular type of colloidal particles that one wishes to
model [46]. An explicit analytic correction could be applied to properly resolve these
forces for very small distances, as has been implemented for Lattice Boltzmann [49].
However, in most applications with radial interactions between the colloids and SRD
fluid, σc f is chosen small enough for SRD to sufficiently resolve lubrication forces. The
lack of a complete divergence at very short distances may be a better model of what
happens for real colloids anyway. For dense suspensions under strong shear, explicit
lubrication force corrections as well as other short-ranged colloid-colloid interactions
arising from surface details such as polymer coats will almost certainly need to be put
in by hand, see e.g. ref. [46] for further discussion of this subtle problem.

When the dust has settled: choosing the parameters


Suppose that finally, after some hard work, you have programmed a code that simu-
lates an SRD fluid with embedded colloidal particles. The question you will probably
ask is: how to choose the parameters of the model? An overview of all free parameters
is given in Table 5.1. We will now deal with each free parameter and discuss the reasons
for common choices. We end with a discussion on how to relate the simulation results
to those of real colloidal suspensions.

Average number of particles per cell γ

The average number of particles per cell γ should not be too low, because two or more
particles per cell are needed to actually have collisions between particles. Here we
should take into account that an SRD fluid acts in many ways as an ideal gas. In partic-

ular, it has the same number density fluctuations as an ideal gas (Δγ = √γ). A too low γ
will make the collisions very inefficient and cause a (too) slow relaxation of the particle
velocities to thermal equilibrium.
The higher the number of particles in a cell, the lower the relative number fluc-
tuations (and also the smoother the flow fields can be visualised, if this is the goal of
the simulation). However, a very large number of particles per cell means a very high
computational effort.
A good compromise between efficient collisions and low computational effort is
reached for values in the range 3 < γ < 10.

Collision time interval δtc

The collision time interval δtc should not be too high, because this will lead to a mean
free path larger than the cell size a0 ; it does not make sense to let the particles stream
more than a collision cell size in between collisions. Since with our choice of unit

parameters, the mean free path is essentially the same as the collision time interval,
Eq. (5.48), we routinely choose δtc ≤ 1 (in our time unit t0). Conversely, Eq. (5.52) shows
that high viscosities may be achieved by choosing low values of δtc , but this may come
at the computational cost of having to perform many SRD rotation collisions per t0 .
For liquid-like behaviour (as opposed to gas-like behaviour) it is important to achieve
a high Schmidt number. Although for water the Schmidt number is approximately
1000, for most applications Sc ≳ 10 is already sufficiently liquid-like. This is achieved
when δtc < 0.1.
Depending on the application, values in the range 0.01 < δtc ≤ 1 are routinely cho-
sen.

Rotation angle α

The rotation angle α influences the viscosity too, see Eqs. (5.51) and (5.52), but in prac-
tice not as strongly as the collision time interval. At first sight extreme values of α near
zero would correspond to a very high kinetic viscosity, but this corresponds to a situ-
ation where the collisions between particles are again very inefficient, leading to too
slow relaxation of the particle velocities to thermal equilibrium (just as in the case of a
very low number of particles per cell). Using the other extreme of α near π also creates
problems because in that case relative velocities are nearly exactly inverted. Admiss-
able values are in the range π/10 < α < 9π/10 (or in degrees: 20◦ < α < 160◦).
The rotation operation is much simpler for α = π/2 than for other angles, with an
associated increase in computational efficiency. Therefore, in practice, α = π/2 is a
popular choice.

Colloid-fluid and colloid-colloid interaction parameters σcf , σcc , εcf and εcc

The values of εcf and εcc should be such that we may indeed identify σcf and σcc as the colloid-solvent radius and colloid diameter. This is rather arbitrary for non-hard-sphere interactions, but as a rule of thumb we may define the above radius and diameter as the distance at which the probability of encountering an SRD fluid particle or another colloidal particle, respectively, has decayed to 10% of the probability in the bulk (far away). Using Eqs. (5.57) and (5.58) in the Boltzmann distribution, this leads to εcf = εcc = −ln(0.1) kB T ≈ 2.3kB T . To be on the safe side, we routinely use 2.5kB T .
Next we consider the colloid-solvent radius σc f . In SRD the hydrodynamic fields
are accurately resolved to a scale of the order of the collision cell size (typically down to
0.5a0 for small mean free paths). An important question is therefore: how large should
we choose the colloidal particle relative to the collision cell size. The answer, as always,
depends on the amount of accuracy we desire. Simulations have shown [51] that the
flow field around, and drag force on, a moving colloidal particle is already resolved
with typical errors of 10% or less for σc f = 2a0 . The error decreases to the order of 2%
for σc f = 4a0 . For the same number of colloidal particles and the same solid volume
fraction, the number of SRD particles needed in the simulation scales as σ3c f (in 3D),
so in practice values in the range 2a0 < σc f < 4a0 are used.

Finally, to avoid depletion forces, as discussed previously, the colloid-colloid diameter σcc should be chosen larger than 2σcf . How much larger depends on the range of the nearly-hard-sphere colloid-fluid interactions. Figure 5.14 shows that for our choice of radial interactions, σcc = 2.2σcf is a good choice.

Relating the simulation results to real colloidal suspensions: time scales

For realistic dynamical behaviour of the colloidal suspension, it is important that cer-
tain characteristic time scales are in the correct order [19]. For example, for a 1 micron
colloidal particle in water at room temperature, we can identify the following charac-
teristic time scales:

• The fluid velocity time scale at which molecular velocities of the fluid decorrelate: τf = 10⁻¹⁴ s.

• The Fokker-Planck time scale at which forces on a colloidal particle decorrelate: τFP = 10⁻¹³ s.

• The sonic time scale for sound to propagate over a colloid radius: τcs = 10⁻¹⁰ s.

• The kinematic time scale for momentum to diffuse over a colloid radius: τν = 10⁻⁶ s.

• The diffusive time scale for the colloid to self-diffuse over a colloid radius: τD = 10⁰ s.

For correct colloidal hydrodynamics, these time scales should be separated as:

τf ≈ τFP < τcs < τν ≪ τD .                    (5.66)

The above numbers show that for real colloidal particles, the range of relevant time
scales is enormous, spanning at least 14 orders of magnitude. Clearly it would be im-
possible to bridge all the time-scales of a physical colloidal system – from the molecu-
lar τ f to the mesoscopic τD – in a single simulation. Thankfully, it is not necessary to
exactly reproduce each of the different time-scales in order to achieve a correct coarse-
graining of colloidal dynamics. As long as they are clearly separated, and in the right
order, the correct physics should still emerge.
Regarding the fluid velocity time scale, in SRD the effect of the collisions is calculated in an average way every time-step δtc . The time-scale τf on which the velocity correlations decay can be quite easily calculated from a random-collision approximation. Following [55]: τf ≈ −λt0 / ln[1 − (2/3)(1 − cos α)(1 − 1/γ)]. For our usual parameters, λ = 0.1, α = π/2, γ = 5, we find τf ≈ 0.76δtc = 0.076t0 .
Regarding the Fokker-Planck time scale, we expect the Fokker Planck time τF P to
scale as δtc , since this is roughly equivalent to the time τ f over which the fluid veloci-
ties will have randomized. Simulations show that this is indeed the case [51].


Regarding the sonic time scale, for our choice of units cs = √(5/3) a0 /t0 , so that the sonic time scale reduces to

τcs ≈ 0.775 (σcf /a0 ) t0 ,                    (5.67)

which is independent of λ or γ.
In the limit of small λ, the ratio of the kinematic time τν to δtc can be simplified to:

τν /δtc ≈ 18 (σcf /a0 )²                    (5.68)

so that the condition τf , τFP ≪ τν is very easy to fulfill. Furthermore, under the same approximations, the ratio τν /τcs ≈ 23 (σcf /a0 ) λ, so that for λ too small the kinematic time becomes faster than the sonic time. For our usual parameters λ = 0.1, α = π/2, γ = 5, σcf = 2a0 , we find τν /τcs ≈ 5.
The colloid diffusion coefficient is directly related to the friction by the Einstein relation (5.14). If we assume that λ ≪ 1 and, for simplicity, that the friction ζ is given by the Stokes law,12 then the diffusion time scales as:

τD = σcf²/Dcol ≈ 6πμσcf³/(kB T ) ≈ (γ/λ)(σcf /a0 )³ t0 .                    (5.69)

It is instructive to examine the ratio of the diffusion time τD to the kinematic time τν :

τD /τν = ν/Dcol ≈ 0.06 (γ/λ²)(σcf /a0 ).                    (5.70)

In general we advocate keeping λ small to increase the Sc number Sc = ν/Df , and since another obvious constraint is Dcol ≪ Df , there is not too much difficulty achieving the desired separation of time-scales τν ≪ τD .
As a concrete example of how these time-scales are separated in a simulation, consider the parameters used routinely in our work: for α = π/2, γ = 5, λ = 0.1 and σcf = 2a0 , we find τf = 0.076t0 , τFP = 0.09t0 , τcs = 1.4t0 , τν = 8t0 and τD = 200t0 . More generally, what the analysis of this section shows is that obtaining the correct hierarchy of time-scales,

τf ≈ τFP < τcs < τν ≪ τD ,

is virtually guaranteed once the parameters are chosen as suggested in the previous sections.

12 Because in particle-based mesoscale methods a relatively low number of fluid particles is in contact with each colloidal particle, there appears also another source of friction, which may be termed the Enskog friction ζE . This is the friction experienced by a colloidal particle moving through a non-hydrodynamic thermal bath of the same number density as the SRD fluid. The Stokes and Enskog frictions can be added (approximately) in parallel, i.e. 1/ζ = 1/ζS + 1/ζE [51]. For real colloidal particles, the number of fluid molecules in contact with a colloidal particle is much higher, and consequently ζE ≫ ζS . The Enskog friction can then be neglected: ζ ≈ ζS .

Figure 5.16: Simplified sketch of a colloidal particle of radius a experiencing an external force Fext such as gravity. The colloidal particle will not only perform random Brownian motion, but as a consequence of the external field also develop an average velocity V in the direction of the external field.
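The estimates of Eqs. (5.67)–(5.69) can be collected in a few lines of Python (ours) to check the ordering of the sonic, kinematic and diffusive time scales for a proposed parameter set; because these expressions are only approximate, the numbers differ somewhat from the measured values quoted above.

import numpy as np

def srd_time_scales(lam, gamma, sigma_cf, a0=1.0, t0=1.0):
    """Rough estimates of the sonic, kinematic and diffusive time scales of a
    colloid of radius ~ sigma_cf in an SRD fluid, Eqs. (5.67)-(5.69), valid
    for small mean free path lam."""
    dt_c = lam * t0                                       # collision interval, Eq. (5.48)
    tau_cs = 0.775 * (sigma_cf / a0) * t0                 # sonic time, Eq. (5.67)
    tau_nu = 18.0 * (sigma_cf / a0) ** 2 * dt_c           # kinematic time, Eq. (5.68)
    tau_D = (gamma / lam) * (sigma_cf / a0) ** 3 * t0     # diffusive time, Eq. (5.69)
    return tau_cs, tau_nu, tau_D

# routinely used parameters: lambda = 0.1, gamma = 5, sigma_cf = 2 a0
tau_cs, tau_nu, tau_D = srd_time_scales(0.1, 5.0, 2.0)
print(tau_cs, tau_nu, tau_D)          # roughly 1.6 t0, 7.2 t0 and 400 t0
assert tau_cs < tau_nu < tau_D        # the required ordering of Eq. (5.66)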
To achieve a mapping between the simulations and a real colloidal system, we need
to determine the real scales of the three units, a0 , m and k B T , of our simulations. Usu-
ally a0 is set by equating the simulated colloidal diameter to the real colloidal diameter,
and k B T can be set to its real value. Finally, the SRD particle mass m is determined
by setting the simulated self-diffusion coefficient of a single colloidal particle equal to
that in the real system, making use of the unit of diffusion D0 = a0 √(kB T /m) defined in
Table 5.1.13
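As an illustration of this mapping, the following Python sketch (ours; the input numbers in the example are hypothetical) determines the real values of a0 , kB T , m and t0 from a measured colloid diameter, temperature and colloid diffusion coefficient.

import numpy as np

def map_srd_units(sigma_cc_sim, D_col_sim, diameter_real, T_real, D_col_real):
    """Map the SRD units a0, kBT and m onto a real colloidal system.
    sigma_cc_sim : simulated colloid diameter in units of a0
    D_col_sim    : simulated colloid self-diffusion coefficient in units of D_0
    diameter_real [m], T_real [K], D_col_real [m^2/s] : properties of the real system."""
    kB = 1.380649e-23                      # Boltzmann constant [J/K]
    a0 = diameter_real / sigma_cc_sim      # real size of one collision cell [m]
    kBT = kB * T_real                      # real value of the energy unit [J]
    D0 = D_col_real / D_col_sim            # real value of the diffusion unit [m^2/s]
    m = kBT * (a0 / D0) ** 2               # SRD particle mass [kg], from D0 = a0*sqrt(kBT/m)
    t0 = a0 * np.sqrt(m / kBT)             # real value of the time unit [s]
    return a0, kBT, m, t0

# hypothetical example: sigma_cc = 4.4 a0 and a measured D_col = 0.005 D_0 in the
# simulation, mapped onto a 1 micron colloid in water at room temperature
print(map_srd_units(4.4, 0.005, 1.0e-6, 298.0, 4.0e-13))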

5.8 Colloidal suspensions in external fields and flows: importance of dimensionless numbers
In many (engineering) applications, colloidal particles are either subjected to external
force fields – such as gravity, centrifugation, electric or magnetic fields – or being forced
to flow as part of a flowing colloidal suspension. In all these cases, the colloidal parti-
cles are not only transported through random diffusive motion, but also by convective
motion. Figure 5.16 shows a simplified example of a colloidal particle convecting in
the direction of the external field (e.g. sedimentation in the direction of gravity).
No matter by which mesoscale method the fluid is modelled, it is important to
be aware of the dimensionless numbers characterising these diffusive and convective processes inside the colloidal suspension. We already introduced these dimensionless
numbers in chapter 3. Here we will focus on the peculiarities of mesoscale methods
with respect to the most important dimensionless numbers.
13 When using the Stokes-Einstein equation, this is equivalent to equating the dynamic viscosity of
the fluid. Note that it is not the fluid mass density ρ that is compared; because colloidal systems are in
the low-Re regime, the mass density of the fluid is (mostly) irrelevant. Connected to this, gravity forces
acting on the fluid should be avoided as much as possible because of potential problems with the high
compressibility of an SRD fluid. For sedimentation problems such as treated in the next section, it is
better to apply the net buoyant forces on the colloidal particles.

The Mach number in colloidal suspensions


The Mach number measures the ratio between V , the speed of solvent or colloid flow,
and c s , the speed of sound in the fluid,
Ma = V /cs .                    (5.71)
The Ma number measures compressibility effects [26] since sound speed is related to
the compressibility of a liquid. Because c s in many liquids is of order 103 m/s, the Ma
numbers for physical colloidal systems are extremely small under normally achiev-
able flow conditions. Just as for the Schmidt number, however, particle based coarse-
graining schemes drastically lower the Ma number. The fluid particle mass m is typi-
cally much greater than the mass of a molecule of the underlying fluid, and moreover,
due to the lower density, collisions also occur less frequently. These effects mean that
the speed of sound is much lower in a coarse-grained system than it is in the underlying
physical fluid. Or, in other words, particle based coarse-graining systems are typically
much more compressible than the solvents they model.
Ma number effects typically scale with Ma2 [26], and so the Ma number does not
need to be nearly as small as for a realistic fluid to still be in the correct regime of hydro-
dynamic parameter space. This is good, because to lower the Ma number, one would
need to integrate over longer fluid particle trajectories to allow, for example, a colloidal
particle to flow over a given distance, making the simulation more computationally
expensive. So there is a compromise between small Ma numbers and computational
efficiency.
It is best practice to limit the Ma numbers to values such that Ma ≤ 0.1, but it might
be possible, in some situations, to double or triple that limit without causing undue
error. For example, incompressible hydrodynamics is used for aerodynamic flows up
to such Ma numbers since the errors are expected to scale as 1/(1 − Ma2 ) [26].
Specifically for SRD, when working in units of m and k B T , the only way to keep
the Ma number below this upper limit is to restrict the maximum flow velocity to V ≤
0.1c s ≈ 0.13a0 /t0 . The flow velocity itself is, of course, determined by the external force
fields or imposed flow velocities.

The Reynolds number in colloidal suspensions


The Reynolds number determines the relative importance of inertial over viscous forces.
For a colloidal particle of radius a moving with average velocity V , it can be expressed
as:
Re = V a/ν.                    (5.72)
For a spherical particle in a flow, the following heuristic argument helps clarify the
physics behind the Reynolds number: If the Stokes time
tS = a/V ,                    (5.73)

it takes a particle to move over its own radius is about the same as the kinematic time

τν = a²/ν = Re tS                    (5.74)
it takes momentum to diffuse over that distance, i.e. Re=τν /tS ≈ 1, then the particle will
feel vorticity effects from its own motion a distance a away, leading to non-linear iner-
tial effects. Since hydrodynamic interactions can decay as slowly as a/r , their effects
can be non-negligible. If, on the other hand, Re  1, then the particle will only feel
very weak hydrodynamic effects from its own motion.
Exactly when inertial finite Re effects become significant depends on the physical
system under investigation. For a single spherical particle, inertial effects which induce
a non-linear dependence of the friction ζ on the velocity V start to become noticeable
for a particle Reynolds number of Re ≈ 1 [26], while deviations in the symmetry of
the streamlines around a rotating sphere have been observed in calculations for Re
≈ 0.1 [47].
For typical colloidal suspensions, where the particle diameter is on the order of a
few μm down to a few nm, the particle Reynolds number is rarely more than 10⁻³.
In this Stokes regime viscous forces dominate and inertial effects can be completely ig-
nored. The Navier-Stokes equations can be replaced by the linear Stokes equations [26]
so that analytic solutions are easier to obtain [28]. However, some of the resulting be-
havior is non-intuitive for those used to hydrodynamic effects on a macroscopic scale.
For example, as famously explained by Purcell in a talk entitled “Life at low Reynolds
numbers” [54], many simple processes in biology occur on small length scales, well into
the Stokes regime. The conditions that bacteria, typically a few μm long, experience in
water are more akin to those humans would experience in extremely thick molasses.
Similarly, colloids, polymers, vesicles and other small suspended objects are all sub-
ject to the same physics, their motion dominated by viscous forces. For example, if for a
colloid sedimenting at 1 μm/s, gravity were instantaneously turned off, then the Stokes
equations suggest that it would come to a complete halt in a distance significantly less
than one Å, reflecting the irrelevance of inertial forces in this low Re number regime. It
should be kept in mind that when the Stokes regime is reached because of small length
scales (as opposed to very large viscosities such as those found for volcanic lava flows),
then thermal fluctuations are also important. These will drive diffusive behavior [9]. In
many ways SRD is ideally suited for this regime, assuming that one can indeed reach
low Re numbers, because the thermal fluctuations are naturally included [43].
Specifically for SRD, from Eqs. (5.51) – (5.52) and Eq. (5.72), it follows that the Reynolds
number for a colloid of hydrodynamic radius a ≈ σcf can be written as:

Re = √(5/3) Ma (σcf /a0 )(ν0 /ν)                    (5.75)
where the Ma number is defined in Eq. (5.71). Equating the hydrodynamic radius a
to σc f from the fluid-colloid WCA interaction of Eq. (5.58) is not quite correct, but is a
good enough approximation.

In order to keep the Reynolds number low, one must either use small particles, or
very viscous fluids, or low velocities V . The latter condition is commensurate with
a low Ma number, which is also desirable. For small enough mean free path λ (and
ignoring f^μ_col (γ, α)) the Re number then scales as

Re ≈ 23 Ma (σcf /a0 ) λ.                    (5.76)
Again, we see that a smaller mean-free path λ, which enhances the collisional viscosity,
also helps bring the Re number down. This parameter choice is also consistent with a
larger Sc number. Thus larger Sc and smaller Ma numbers both help keep the Re num-
ber low for SRD. In principle a large viscosity can also be obtained for large λ, which
enhances the kinetic viscosity, but this choice also lowers the Sc number and raises the
Knudsen number, which, as we will see in the next sub-section, is not desirable.
Just as was found for the Ma number, it is relatively speaking more computation-
ally expensive to simulate for low Re numbers because the flow velocity must be kept
low, which means longer simulation times are necessary to reach time-scales where
the suspended particles or fluid flows have moved a significant distance. We therefore
usually compromise and keep Re ≲ 0.2 [37, 51]. For many situations related to the flow
of colloids, this should be sufficiently stringent.

The Knudsen number in colloidal suspensions


The Knudsen number Kn is the ratio between the mean free path λfree of the molecules and a characteristic length scale, which in this case is the colloidal radius a,

Kn = λfree /a.                    (5.77)
For rarified gas flows the Knudsen number is large (> 10), and continuum Navier-
Stokes equations completely break down.
However, the continuous phase in a colloidal suspension is a liquid. The mean free
path of most liquids is quite small. For water at standard temperature and pressure
λfree ≈ 3 Å. Just as found for the other dimensionless numbers, coarse-graining typi-
cally leads to larger Kn numbers because of the increase of the mean-free path. Making
the Kn number smaller also typically increases the computational cost because a larger
number of collisions need to be calculated. In mesoscale simulations, it is important
to keep Kn ≤ 0.05. This rough criterion is based on the observation that for small Kn
numbers the friction coefficient on a sphere is expected to be decreased by a factor
1 − α Kn, where α is a material dependent constant of order 1 [28], so that we expect
Kn number effects to be of the same order as other coarse-graining errors. There are
two ways to achieve small Kn numbers: one is by increasing the modeled radius a of the colloidal particle relative to the intrinsic coarse-graining scale (e.g. σcf /a0 in SRD), the other is by decreasing the mean free path λfree . The second condition is
commensurate with a large Sc number or a small Re number.

The Peclet number in colloidal suspensions


The Peclet number Pe measures the relative strength of convective transport to diffu-
sive transport. For example, for a colloid of radius a, traveling at an average velocity V ,
the Pe number is defined as:
Pe = V a/Dcol ,        (general)                    (5.78)
where D col is the colloid diffusion coefficient. For colloidal particles in an external field
we can use the Einstein relation, D col = k B T /ζ, and the relation between external force
and average velocity in steady-state, V = F ext /ζ, to rewrite the Peclet number as:
$$\mathrm{Pe} = \frac{F_{ext}\, a}{k_B T}, \qquad \text{(external field)} \qquad (5.79)$$
which shows that the Peclet number may also be interpreted as the ratio of energy
F ext a gained by a colloidal particle by moving one radius a in the direction of the ex-
ternal field and the thermal energy k B T .
Alternatively, just as for the Re number, the Pe number can be interpreted as a ratio
of a diffusive to a convective time-scale, but now the former time-scale is not for the
diffusion of momentum but rather it is given by the colloid diffusion time

$$\tau_D = \frac{a^2}{D_{col}} \qquad (5.80)$$
which measures how long it takes for a colloid to self-diffuse over a distance a. The Pe
number can then be written as:
$$\mathrm{Pe} = \frac{\tau_D}{t_S}. \qquad \text{(general)} \qquad (5.81)$$
If Pe ≫ 1 then the colloid moves convectively over a distance much larger than its
radius a in the time τD that it diffuses over that same distance. Brownian fluctua-
tions are expected to be less important in this regime. For Pe ≪ 1, on the other hand,
the opposite is the case, and the main transport mechanism is diffusive (note that on
long enough time-scales (t > τD /Pe2 ) convection will always eventually “outrun” diffu-
sion [9]). It is sometimes thought that for low Pe numbers hydrodynamic effects can be
safely ignored, but this is not always true. For example, we found that the reduction of
average sedimentation velocity with particle volume fraction, famously first explained
by Batchelor [7], is independent of Pe number down to Pe = 0.1 at least [50].
The highest Pe number achievable in particle-based mesoscale simulations is lim-
ited by the constraints on the Ma and Re numbers. For example the Ma number sets
an upper limit on the maximum Pe number by limiting V .
Specifically for SRD, from Eqs. (5.72) and (5.78), it follows that the Peclet number
can be re-written in terms of the Reynolds number as
$$\mathrm{Pe} = \frac{\nu}{D_{col}}\,\mathrm{Re} \approx 6\pi\gamma \left(\frac{\nu}{\nu_0}\right)^{2} \frac{\sigma_{cf}}{a_0}\,\mathrm{Re} \qquad (5.82)$$

where we have approximated D col ≈ k B T /(6πησc f ). This shows that for a given con-
straint on the Re number, increasing γ or σc f increases the range of accessible Pe num-
bers. Similarly, when the kinematic viscosity is dominated by the collisional contribu-
tion, decreasing the dimensionless mean-free path λ will also increase the maximum
Pe number allowed since Pemax ∼ λ−2 Re ∼ λ−1 Ma.

5.9 Limitations of mesoscopic simulation methods


Many different particle-based methods exist to deal with the low-Reynolds number
dynamics of mesoscale particles such as colloidal particles. Which one to choose de-
pends on the system of interest, because each method has its own limitations. Let us
summarize the range of applicability and limitations of each of the methods we have
discussed in more detail.

• Langevin dynamics (LD) simulations neglect hydrodynamic interactions, and
are therefore limited to situations of either very low solid volume fractions (φ <
10−4 ) or when the particles are so close that they significantly overlap in their
direct interaction range. Being a second-order algorithm (including velocities),
larger particle frictions necessitate lower time steps, making Langevin dynam-
ics only suitable for situations with relatively low friction forces such as large
molecules or very small colloids of less than a nanometre.

• Brownian dynamics (BD) simulations also neglect hydrodynamic interactions,
and are therefore also limited to very low solid volume fractions or significantly
overlapping particles. Being a first-order algorithm (it is assumed that velocities
are equilibrated before the particle has moved a fraction of its size), larger parti-
cle frictions allow for larger time steps, making Brownian dynamics simulations
very suitable for situations with very high friction.

• Stokesian dynamics (SD) simulations are the equivalent of Brownian dynamics,
but including hydrodynamic interactions between the particles and is therefore
much more generally applicable. However, because of its complexity and high
computational costs, Stokesian dynamics is limited to a few hundred particles.
Moreover, being based on analytical expressions for the hydrodynamic interac-
tions in an unbounded medium, it is very difficult to study the dynamics of col-
loidal suspensions confined by walls or other boundaries.

• Dissipative particle dynamics (DPD) is a particle-based method for mesoscale
fluids, which is relatively easy to implement in a code. Colloidal particles can
be embedded as extra large DPD particles or by freezing assemblies of DPD par-
ticles. However, there are two limitations. First, the repulsive interactions be-
tween the particles cause an ordering in the fluid structure, especially near walls
and colloidal particles, that may be felt as oscillating conservative interactions
between the colloidal particles when their surfaces are closer than a couple of
r_c apart. Therefore DPD cannot be used in situations where lubrication forces
are important (dense solutions of nearly hard spheres). Second, to be in the hy-
drodynamic limit, colloidal particles need to be much larger than the DPD fluid
particles. Because the interactions between the DPD particles are still based on
pair interactions, the high computational effort limits the number of colloidal
particles again to a few hundred.

• Multiparticle collision dynamics (MPCD), and its specific implementation of
stochastic rotation dynamics (SRD), is also a particle-based method, but the
collisions between the fluid particles are handled much more efficiently (com-
pared to DPD) in collision cells. Colloidal particles can be embedded through re-
pulsive potential energy functions or by reflecting boundary conditions. Because
the thermodynamic properties of an MPCD fluid are equal to that of an ideal gas,
it is much more compressible than the real fluid it represents. One should there-
fore always be aware of the Mach number to avoid density fluctuations in the
system. Also, because the particles have inertia, it is sometimes a challenge to
keep the Reynolds number down when colloidal particles are driven by an ex-
ternal field or flow (these inertial limitations also apply to DPD). The Mach and
Reynolds number limitations also pose a limit on the largest Peclet number that
can be reached. In practice, MPCD is ideally suited for driven colloidal systems
with Peclet numbers ranging as 0 < Pe < 30 in systems containing thousands of
colloidal particles and tens of millions of fluid particles.

It is important to emphasize that thermal fluctuations are a key ingredient of all the
above methods. This is precisely the definition of the mesoscale regime. When dealing
with macroscopic particles, or if one is interested in average flowfields or average drag
relations, the thermal fluctuations are sometimes more of a nuisance than an aid, be-
cause a lot of averaging is required before the effects of thermal fluctuations have been
averaged out. In that case it may be better to use direct numerical simulations such as
Lattice Boltzmann (in its original form, i.e. without thermal fluctuations) or finite vol-
ume or finite element discretisations of the Navier-Stokes equations, combined with
immersed particles, as will be briefly discussed in the next chapter.

5.10 Practicum: Colloidal sedimentation


In this practicum you will study the behaviour of nearly hard sphere colloidal particles
sedimenting in a slit using stochastic rotation dynamics for the fluid. You will measure
the sedimentation velocity as a function of solid volume fraction.
CHAPTER 6
THE MACROSCOPIC WORLD

6.1 Chapter objectives


Through the course of this chapter, you will accomplish the following:

• You will learn about the main particle-based simulation models for granular sys-
tems

• You will learn about a common contact model used in simulations of granular
systems

• You will learn to distinguish between resolved and unresolved coupling to fluid
flow

• You will learn about limitations of macroscopic particle-based methods

6.2 Introduction to granular systems


When particles are 100 micrometer in size or larger, Brownian motion due to thermal
fluctuations is no longer important: they are athermal. Such particles are often re-
ferred to as granular particles. These are the particles that we are most familiar with:
flowing sand, rock avalanches, emptying hoppers filled with grains, pneumatic con-
veying of particles and powders, and mixing and segregation of particles when they are
transported and shaken. The dynamics of these systems are dominated by gravity and
friction effects. Friction means that, without further perturbations such as shaking or
a fluidisation by a gas flow, the particles will quickly come to complete rest.


Figure 6.1: Example of macroscopic particle-based modelling. On the left is a fluidized bed on a life-size scale (grayscale indicates solid volume fraction). A section of the fluidized bed is modelled using a discrete particle method (top right), where the gas flow around the particles (black dots) is not resolved, but effective drag relations are used, treating the particles as point sources of momentum on the fluid. When using direct numerical simulations, the gas flow around the particles is fully resolved (bottom right), but this necessitates a grid size which is much smaller than the particle diameter. Adapted from [30].

Particle-based simulations on the granular scale include discrete element methods
(DEM), direct numerical simulations (DNS), and discrete particle methods (DPM).1
The discrete element method is used when the effects of the surrounding medium
(gas or liquid) are negligible, either because the gravity and inertial forces are orders
of magnitude larger than the hydrodynamic drag forces (e.g. large particles moving
relatively slowly through air) or because the particles are at such high solid volume
fraction that the interactions between the particles dominate the hydrodynamic forces
(e.g. particles packed together and moving slowly through a hopper).
The term direct numerical simulation is used when the particles do feel the sur-
rounding fluid medium and the fluid flow around the particles is fully resolved. DNS
is the most precise method of dealing with the fluid flow, by solving the Navier-Stokes
equations using a grid size much smaller than the particle diameter (Figure 6.1).
In some cases – particularly for particles in a gas – it is possible to find accurate
effective drag relations and treat the particles as point sources of momentum on the
fluid. The fluid flow around the particles is unresolved, allowing us to use pre-averaged
Navier-Stokes equations and a grid size much larger than the particle diameter, see
Figure 6.1. We refer to such unresolved methods as discrete particle methods.
Let us start by defining the equations of motion for the particles.
1
There exist also methods in which the solid particle phase is treated as a continuum. When the
particles are embedded in a fluid, such methods are referred to as two-fluid models (TFM) because the
particles and fluid are treated as two interpenetrating continua. An important ingredient of such models
is an effective description of the particle stress.

6.3 Equations of motion for the particles in a granular system
The equations of motion for the particles in a DEM/DNS/DPM model are very similar
to those of molecular dynamics simulations. Newton’s equations of motion of individ-
ual particles are integrated using forces on the particles. What is different is the origin
of these forces: they do not only contain conservative elements (which depend on the
particle positions), but also dissipative particle-particle elements (which depend on
the particle positions and velocities) and possibly fluid-induced forces (which depend
on the pressure and velocity fields of the surrounding fluid). Moreover, particle rota-
tions are usually taken explicitly into account requiring specification of the torques on
particles.

Rigid body rotations


When rotations are taken into account, one usually assumes that the granular particles
translate and rotate as rigid bodies. The equations of motion for rigid body motion are:
$$m_i \frac{d\mathbf{v}_i}{dt} = \mathbf{F}_i \qquad (6.1)$$
$$\frac{d}{dt}\left(\mathbf{I}_i \cdot \boldsymbol{\omega}_i\right) = \mathbf{T}_i \qquad (6.2)$$
The first equation may look quite familiar by now. It describes the change in velocity vi
of the centre of mass of particle i due to the total force Fi . The second equation is more
complex. It describes the effect of the total torque Ti acting on particle i on the change
in its angular velocity ωi . The complexity resides in the moment of inertia tensor Ii ,
which depends on the shape and current orientation of the particle.2 In other words,
we cannot simply integrate cartesian elements of the torque to update the angular ve-
locity of the particle. Rather, we should solve the so-called Euler equations of motion
for rigid body rotation.
Fortunately, for spherical particles the inertia tensor is diagonal and independent
of its orientation. For a sphere of mass m i and radius R i we have:
$$\mathbf{I}_i = \begin{bmatrix} \frac{2}{5} m_i R_i^2 & 0 & 0 \\ 0 & \frac{2}{5} m_i R_i^2 & 0 \\ 0 & 0 & \frac{2}{5} m_i R_i^2 \end{bmatrix}, \qquad \text{(spheres)} \qquad (6.3)$$

meaning that we can rewrite Eq. (6.2) as

$$I_i \frac{d\boldsymbol{\omega}_i}{dt} = \mathbf{T}_i, \qquad \text{(spheres)} \qquad (6.4)$$
2
The term Ii · ωi is also known as the angular momentum Li of the particle. The equation of motion
for rotations, dLi /dt = Ti , is completely analogous to Newton’s equation for translations in terms of
momenta pi = m i vi , namely dpi /dt = Fi .

with $I_i = \frac{2}{5} m_i R_i^2$ the moment of inertia of particle (sphere) i. Numerically solving the


angular velocity of rigid spherical particles is therefore as simple as solving the trans-
lational velocity. An equivalent of the velocity Verlet algorithm3 for rigid spheres could
be programmed as follows:
$$\mathbf{v}_i(t + \Delta t/2) = \mathbf{v}_i(t) + \frac{1}{2}\Delta t\,\frac{\mathbf{F}_i(t)}{m_i}, \qquad (6.5)$$
$$\boldsymbol{\omega}_i(t + \Delta t/2) = \boldsymbol{\omega}_i(t) + \frac{1}{2}\Delta t\,\frac{\mathbf{T}_i(t)}{I_i}, \qquad (6.6)$$
$$\mathbf{r}_i(t + \Delta t) = \mathbf{r}_i(t) + \mathbf{v}_i(t + \Delta t/2)\Delta t, \qquad (6.7)$$
(then evaluate force and torque at $t + \Delta t$)
$$\mathbf{v}_i(t + \Delta t) = \mathbf{v}_i(t + \Delta t/2) + \frac{1}{2}\Delta t\,\frac{\mathbf{F}_i(t + \Delta t)}{m_i}, \qquad (6.8)$$
$$\boldsymbol{\omega}_i(t + \Delta t) = \boldsymbol{\omega}_i(t + \Delta t/2) + \frac{1}{2}\Delta t\,\frac{\mathbf{T}_i(t + \Delta t)}{I_i}. \qquad (6.9)$$
Note that the force and torque need to be evaluated only once per time step Δt because
the new force (torque) in the current step will be the old force (torque) in the next step.
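A minimal sketch in C of this integration scheme for N rigid spheres is given below. The data layout and the routine compute_forces_torques are illustrative assumptions, not part of any specific package.

/* Sketch of the velocity Verlet scheme of Eqs. (6.5)-(6.9) for N rigid
   spheres. The struct layout and the force routine are illustrative. */
typedef struct {
    double r[3], v[3], w[3];  /* position, velocity, angular velocity */
    double F[3], T[3];        /* current force and torque             */
    double m, I;              /* mass and moment of inertia 2/5 m R^2 */
} Sphere;

void verlet_step(Sphere *p, int N, double dt,
                 void (*compute_forces_torques)(Sphere *, int))
{
    for (int i = 0; i < N; i++)
        for (int k = 0; k < 3; k++) {
            p[i].v[k] += 0.5 * dt * p[i].F[k] / p[i].m;  /* Eq. (6.5) */
            p[i].w[k] += 0.5 * dt * p[i].T[k] / p[i].I;  /* Eq. (6.6) */
            p[i].r[k] += dt * p[i].v[k];                 /* Eq. (6.7) */
        }

    compute_forces_torques(p, N);   /* new F_i(t+dt) and T_i(t+dt)    */

    for (int i = 0; i < N; i++)
        for (int k = 0; k < 3; k++) {
            p[i].v[k] += 0.5 * dt * p[i].F[k] / p[i].m;  /* Eq. (6.8) */
            p[i].w[k] += 0.5 * dt * p[i].T[k] / p[i].I;  /* Eq. (6.9) */
        }
}

A single call to verlet_step advances all spheres by one time step Δt, with only one force and torque evaluation per step.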

Forces and torques in DEM, DNS and DPM simulations


So what are the forces and torques on a granular particle? Generally, we may write the
force Fi and torque Ti on particle i with volume Vi as:

$$\mathbf{F}_i = \mathbf{F}_{fluid,i} + m_i \mathbf{g} + \mathbf{F}_{contact,i} + \mathbf{F}_{pp,i}. \qquad (6.10)$$
$$\mathbf{T}_i = \mathbf{T}_{fluid,i} + \mathbf{T}_{contact,i} + \mathbf{T}_{pp,i}. \qquad (6.11)$$

These forces and torques are, respectively [17]:

• The fluid-induced (hydrodynamic) forces and torques on the particle.

• A gravity force.

• Contact forces and torques due to direct collisions with neighbouring particles.

• Longer range particle-particle forces and torques such as due to Van der Waals
and electrostatic interactions.

The difference between DEM, DNS and DPM models lies in the treatment of the fluid-
induced forces. In DEM the fluid-induced forces are neglected. In DNS the fluid is fully
resolved on a grid finer than the particles, whereas in DPM the fluid is unresolved and
particles are treated as point sources of momentum, using effective drag force correla-
tions.
For all models we need the contact forces. We will deal with this in section 6.4.
Then we will deal with fluid-induced forces in section 6.5. We will not explicitly deal
3
Higher order schemes such as Beeman and Runge-Kutta methods may also be used.

Figure 6.2: Graphical representation of the linear spring-dashpot soft sphere model for granular particles. After Hoomans [32].

with longer range particle-particle forces because they are very similar to the longer
range forces occurring in molecular dynamics and mesoscale simulations.

6.4 Dissipative collisions: contact models


In the field of granular flow, a model specifying the forces and torques on the particles
due to direct collisions is usually referred to as a contact model. Usually the contact
force and torque on particle i are approximated as pair sums, i.e.

$$\mathbf{F}_{contact,i} = \sum_{j \neq i} \mathbf{F}_{ij}, \qquad (6.12)$$
$$\mathbf{T}_{contact,i} = \sum_{j \neq i} \mathbf{T}_{ij}, \qquad (6.13)$$

where Fi j is defined as the contact force on i due to its contact with particle j , and
similarly for the torque. Of course, because contact forces are short ranged, we can use
a cell linked list and/or a neighbourlist to efficiently deal with these pair sums.
Because of Newton’s third law (action equals minus reaction, or Fi j = −F j i ) the total
momentum of two particles in contact is conserved.4 The energy, however, is usually
not conserved. More precisely, energy is transferred to more microscopic degrees of
freedom during a collision between two granular particles. An important ingredient
of a contact model is therefore the inclusion of dissipative forces.

Linear spring-dashpot soft sphere model


One of the simplest and most used contact models is the linear spring-dashpot soft
sphere model [16], depicted schematically in Figure 6.2.5 The total contact force be-
4
And similarly the total angular momentum, i.e. angular momentum of the particles around their
respective centres of mass plus angular momentum associated with the trajectories of the centres of
mass, is conserved.
5
The other much used contact model is the hard sphere model. Similar to the soft sphere model, the
hard sphere model is characterised by a normal and tangential coefficient of restitution and a coefficient
of friction. The positions of such hard spheres are not propagated using constant time steps, but rather
by calculating the next collision event. We do not treat such event-driven simulations in these lectures.
Hard sphere models are unfit for simulations at high densities because of divergent collision frequencies.

tween two spherical particles i and j of radius R i and R j , respectively, is divided into a
normal force Fi j ,n and a tangential force Fi j ,t , i.e. Fi j = Fi j ,n + Fi j ,t .

Normal contact force

The normal contact force on particle i due to its contact with particle j is calculated
according to

Fi j ,n = k n δn n̂i j − η n vi j ,n , (6.14)

The first part represents a conservative force, where k n is the spring stiffness (in units
N/m) in the normal direction, and δn is the overlap (in units m) between the particles
in the normal direction:
 
$$\delta_n = (R_i + R_j) - \left|\mathbf{r}_i - \mathbf{r}_j\right|. \qquad (6.15)$$

The normal direction is defined as the unit vector pointing from the centre of j to the
centre of i :
$$\hat{\mathbf{n}}_{ij} = \frac{\mathbf{r}_i - \mathbf{r}_j}{\left|\mathbf{r}_i - \mathbf{r}_j\right|}. \qquad (6.16)$$

The dissipative force in the normal direction is controlled by η n , the damping coeffi-
cient (in units kg/s) in the normal direction, and $\mathbf{v}_{ij,n}$, the normal component (in the
normal direction) of the relative velocity at the point of contact.
The relative velocity may need some explanation. Figure 6.3 shows two granular
particles with centre of mass velocities vi and v j and angular velocities (around their
respective centres of mass) ωi and ω j . In practice the normal spring stiffness is chosen
so large that the maximum overlap is of the order of a percent of the particle diameter.
Therefore, the relative velocity at the point of contact can be approximated by the dif-
ference between the local surface velocity of particle i, $\mathbf{v}^{loc}_i = \mathbf{v}_i - R_i \hat{\mathbf{n}}_{ij} \times \boldsymbol{\omega}_i$, and the
local surface velocity of particle j, $\mathbf{v}^{loc}_j = \mathbf{v}_j + R_j \hat{\mathbf{n}}_{ij} \times \boldsymbol{\omega}_j$. In other words,
   
$$\mathbf{v}_{ij} = \mathbf{v}_i - \mathbf{v}_j - \left(R_i \boldsymbol{\omega}_i + R_j \boldsymbol{\omega}_j\right) \times \hat{\mathbf{n}}_{ij} \qquad (6.17)$$

The normal component in the normal direction of the relative velocity is thus defined
as
 
$$\mathbf{v}_{ij,n} = \left(\mathbf{v}_{ij} \cdot \hat{\mathbf{n}}_{ij}\right) \hat{\mathbf{n}}_{ij}. \qquad (6.18)$$

In the following section we will need the tangential velocity, which is the remaining
part:

vi j ,t = vi j − vi j ,n . (6.19)

Check for yourself that vi j ,t is indeed always tangential to the surfaces of the particles
at the contact point.

Figure 6.3: Coordinate system for two granular particles i and j of radius $R_i$ and $R_j$, with the definition of the unit normal vector $\hat{\mathbf{n}}_{ij}$ and the unit tangential vector $\hat{\mathbf{t}}_{ij}$.

Tangential contact force: sticking and sliding

For the tangential contact force on particle i due to its contact with particle j, two cases
should be distinguished, namely “sticking” and “sliding” contacts [32, 39]. If the tan-
gential velocity is sufficiently high, the impact can be described as sliding during the
entire collision. However, when after the initial sliding phase the relative tangential
velocity between the two colliding particles becomes zero, the impact of the particles
belongs to the sticking case.
In the sticking case, the tangential contact force has a form similar to the normal
force:

Fi j ,t = −k t δt t̂i j − η t vi j ,t . (6.20)

Here k t is the spring stiffness (in units N/m) in the tangential direction t̂i j , which is
defined as the unit vector along the direction of the tangential velocity,
$$\hat{\mathbf{t}}_{ij} = \frac{\mathbf{v}_{ij,t}}{\left|\mathbf{v}_{ij,t}\right|}. \qquad (6.21)$$

The overlap in the tangential direction δt (in units m) is not a trivial quantity, but is
defined as the length of the time integral of the tangential velocity since the start of the
contact at t0 ,
$$\delta_t = \left| \int_{t_0}^{t} \mathbf{v}_{ij,t}(t')\,dt' \right|. \qquad (6.22)$$

The damping coefficient in the tangential direction η t (in kg/s) plays a similar role to
that in the normal direction.

The above tangential force applies to the sticking case. But what about sliding? We
already encountered a sliding case in section 2.3 when we considered dissipation of en-
ergy for a block sliding down an inclined plane. We considered the empirical fact that
when sliding occurs, the friction force (acting tangential to the surface) scales linearly
with the normal force $\left|\mathbf{F}_{ij,n}\right|$. The coefficient of proportionality is called the coefficient
of (Coulomb or kinetic) friction μ_f. So the coefficient of friction poses an upper limit
to the tangential force that two particles can exert on each other. In formula:
$$\mathbf{F}_{ij,t} = \begin{cases} \text{Eq. (6.20)} & \text{if } \left|\mathbf{F}_{ij,t}\right| \leq \mu_f \left|\mathbf{F}_{ij,n}\right| \text{ (sticking)} \\ -\mu_f \left|\mathbf{F}_{ij,n}\right| \hat{\mathbf{t}}_{ij} & \text{if } \left|\mathbf{F}_{ij,t}\right| > \mu_f \left|\mathbf{F}_{ij,n}\right| \text{ (sliding)} \end{cases} \qquad (6.23)$$

The tangential force leads to a torque on each of the particles i and j . The torque on
particle i due to its contact with particle j is given by

Ti j = −R i n̂i j × Fi j ,t . (6.24)

The minus sign arises because n̂i j points from the contact point to the centre of mass
of i . Using the same reasoning, the torque on j due to its contact with i is given by
T j i = R j n̂i j × (−Fi j ,t ) = −R j n̂i j × Fi j ,t .
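As an illustration of how Eqs. (6.14)-(6.24) fit together, here is a minimal sketch in C of the contact force and torque on particle i from one contacting pair. It assumes the tangential overlap δ_t of Eq. (6.22) is accumulated per contact elsewhere and is passed in; all function and variable names (contact_force, nhat, ...) are illustrative, not part of any particular code.

#include <math.h>

/* cross product helper: c = a x b */
static void cross(const double a[3], const double b[3], double c[3])
{
    c[0] = a[1]*b[2] - a[2]*b[1];
    c[1] = a[2]*b[0] - a[0]*b[2];
    c[2] = a[0]*b[1] - a[1]*b[0];
}

/* Linear spring-dashpot contact, Eqs. (6.14)-(6.24); returns 1 if in contact. */
int contact_force(const double ri[3], const double rj[3],
                  const double vi[3], const double vj[3],
                  const double wi[3], const double wj[3],
                  double Ri, double Rj,
                  double kn, double kt, double eta_n, double eta_t,
                  double mu_f, double delta_t,
                  double Fi[3], double Ti[3])
{
    double rij[3], nhat[3], rw[3], wxn[3], vij[3], vn[3], vt[3], Fn[3], Ft[3];
    int k;

    for (k = 0; k < 3; k++) rij[k] = ri[k] - rj[k];
    double dist = sqrt(rij[0]*rij[0] + rij[1]*rij[1] + rij[2]*rij[2]);
    double delta_n = (Ri + Rj) - dist;                 /* Eq. (6.15) */
    if (delta_n <= 0.0) return 0;                      /* no overlap */
    for (k = 0; k < 3; k++) nhat[k] = rij[k] / dist;   /* Eq. (6.16) */

    /* relative velocity at the contact point, Eq. (6.17) */
    for (k = 0; k < 3; k++) rw[k] = Ri*wi[k] + Rj*wj[k];
    cross(rw, nhat, wxn);
    for (k = 0; k < 3; k++) vij[k] = vi[k] - vj[k] - wxn[k];

    /* normal and tangential components, Eqs. (6.18)-(6.19) */
    double vdotn = vij[0]*nhat[0] + vij[1]*nhat[1] + vij[2]*nhat[2];
    for (k = 0; k < 3; k++) { vn[k] = vdotn*nhat[k]; vt[k] = vij[k] - vn[k]; }
    double vtmag = sqrt(vt[0]*vt[0] + vt[1]*vt[1] + vt[2]*vt[2]);

    /* normal force, Eq. (6.14) */
    for (k = 0; k < 3; k++) Fn[k] = kn*delta_n*nhat[k] - eta_n*vn[k];
    double fnmag = sqrt(Fn[0]*Fn[0] + Fn[1]*Fn[1] + Fn[2]*Fn[2]);

    /* tangential force, sticking case, Eq. (6.20) */
    for (k = 0; k < 3; k++) {
        double that = (vtmag > 0.0) ? vt[k]/vtmag : 0.0;
        Ft[k] = -kt*delta_t*that - eta_t*vt[k];
    }
    double ftmag = sqrt(Ft[0]*Ft[0] + Ft[1]*Ft[1] + Ft[2]*Ft[2]);

    /* Coulomb limit: switch to sliding, Eq. (6.23) */
    if (ftmag > mu_f*fnmag && vtmag > 0.0)
        for (k = 0; k < 3; k++) Ft[k] = -mu_f*fnmag*vt[k]/vtmag;

    for (k = 0; k < 3; k++) Fi[k] = Fn[k] + Ft[k];

    /* torque on i, Eq. (6.24): T_i = -R_i nhat x F_t */
    cross(nhat, Ft, Ti);
    for (k = 0; k < 3; k++) Ti[k] *= -Ri;
    return 1;
}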

Contact parameters

To determine the normal and tangential forces and torques between the particles we
need to know five parameters (assuming all particles are equal): the normal and tan-
gential spring stiffness k n and k t , the normal and tangential damping coefficients η n
and η t , and the friction coefficient μ f .
Experimentally, it is very difficult to measure the damping coefficients. Rather, in
experiments we can measure the coefficients of restitution e n and e t in the normal and
tangential directions. The normal coefficient of restitution is defined as the ratio of
the normal velocity at the end of an impact and the normal velocity at the start of an
impact, and similarly for the tangential coefficient of restitution:
$$e_n \equiv -\frac{v_{ij,n,end}}{v_{ij,n,start}}, \qquad (6.25)$$
$$e_t \equiv \frac{v_{ij,t,end}}{v_{ij,t,start}}. \qquad (6.26)$$

The minus sign in the definition of e n arises because the particle reverses its direc-
tion during impact. For a perfectly elastic collision we have e n = e t = 1. Real macro-
scopic particles never collide perfectly elastically. For example, for millimetre sized
glass spheres we have e n ≈ 0.93.
The higher the amount of dissipation (damping) during an impact, the lower (fur-
ther away from 1) the coefficient of restitution. We therefore expect that there exists a
relation between the damping coefficients and the coefficients of restitution. Indeed,

Figure 6.4: Relation between the damping coefficient η and coefficient of restitution e. For the normal direction the vertical axis is scaled by $\sqrt{m_{ij} k_n}$, for the tangential direction by $\sqrt{\tilde{m}_{ij} k_t}$ (see main text).

since the equations of motion for the collision process of a pair of particles are essen-
tially those of a damped harmonic oscillator, it is possible to solve the equations an-
alytically (not shown here). The analytical equations predict the following relations
between η n and e n and between η t and e t :

$$\eta_n = -2\ln e_n \sqrt{\frac{m_{ij}\, k_n}{\pi^2 + \ln^2 e_n}} \qquad (6.27)$$
$$\eta_t = -2\ln e_t \sqrt{\frac{\tilde{m}_{ij}\, k_t}{\pi^2 + \ln^2 e_t}} \qquad (6.28)$$

where $m_{ij} = m_i m_j/(m_i + m_j)$ is the reduced mass of particles i and j, and $\tilde{m}_{ij} = \frac{2}{7} m_{ij}$
(see also Figure 6.4). If all particles have the same mass m, then $m_{ij} = m/2$, while for
a particle-wall collision we may treat the wall as a particle of infinite mass, making the
reduced mass equal to the mass of the free particle. The analytical solution also gives
us the contact time for normal and tangential directions:

$$t_{contact,n} = \sqrt{\frac{m_{ij}\left(\pi^2 + \ln^2 e_n\right)}{k_n}} \qquad (6.29)$$
$$t_{contact,t} = \sqrt{\frac{\tilde{m}_{ij}\left(\pi^2 + \ln^2 e_t\right)}{k_t}}. \qquad (6.30)$$

This last result is very important. Just think about it: how could the contact time in
the normal direction be different from the contact time in the tangential direction? For
a consistent contact model the contact times in the normal and tangential directions
should be exactly equal.6 This extra requirement reduces the number of free parame-
6
This is also important for a proper energy balance. For example, if the normal collision is finished
before the tangential collision is finished, the particles no longer overlap, but there is still some energy
stored in the tangential spring. This energy will then be lost, resulting in an effectively lower coefficient
of restitution than intended.

ters by one. Usually the tangential spring stiffness is expressed in the other parameters:

$$\frac{k_t}{k_n} = \frac{2}{7}\,\frac{\pi^2 + \ln^2 e_t}{\pi^2 + \ln^2 e_n}. \qquad (6.31)$$

Although the normal spring stiffness can be determined from the Young's modulus
and radius of the particle, it usually yields a very high value, which implies the use of a
very small time step, which is undesirable from a computational point of view [17]. In
practice k n is set to lower values, while ensuring that the normal overlap is kept small,
i.e. typically below 1% of the particle diameter.
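A small sketch in C of how the contact parameters could be derived from the measurable quantities, following Eqs. (6.27), (6.28) and (6.31) and the 2/7-scaled tangential mass introduced above. The function and variable names are illustrative.

#include <math.h>

/* Obtain damping coefficients and tangential stiffness from the restitution
   coefficients e_n, e_t, the chosen normal stiffness k_n and the reduced
   mass m_ij. Names are illustrative. */
void contact_parameters(double e_n, double e_t, double k_n, double m_ij,
                        double *eta_n, double *eta_t, double *k_t)
{
    const double pi = 3.14159265358979323846;
    double ln_en = log(e_n);
    double ln_et = log(e_t);
    double m_t   = (2.0/7.0) * m_ij;   /* effective tangential mass */

    *k_t   = k_n * (2.0/7.0) * (pi*pi + ln_et*ln_et)
                             / (pi*pi + ln_en*ln_en);                 /* Eq. (6.31) */
    *eta_n = -2.0 * ln_en * sqrt(m_ij * k_n / (pi*pi + ln_en*ln_en)); /* Eq. (6.27) */
    *eta_t = -2.0 * ln_et * sqrt(m_t * (*k_t) / (pi*pi + ln_et*ln_et)); /* Eq. (6.28) */
}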

6.5 Coupling to a fluid flow


The influence of a surrounding fluid on the dynamics of granular particles may be ig-
nored if we are studying dense particle packings or if we are dealing with large parti-
cles moving relatively slowly through a dilute gas such as air. In these special cases a
contact model, such as the spring-dashpot model discussed in the previous section,
together with gravity and possibly long-range particle-particle forces, suffices to make
predictions about the system. This is the field of discrete element method (DEM) sim-
ulations.
In most other cases, however, the liquid or gas will influence the granular particle
trajectories. We then enter the fields of direct numerical simulations (DNS) and dis-
crete particle method (DPM) simulations.

A brief digression into direct numerical simulations (DNS)


The case of solid particles flowing through a liquid is arguably more complex than that
of solid particles flowing through gas, because both long range hydrodynamic interac-
tions and short range lubrication forces are more dominant in liquid flows. Actually the
same problems are encountered in simulations of macroscopic solid-liquid flows as in
simulations of mesoscale colloidal suspensions: because of the long-range nature of
hydrodynamic interactions, the dynamics of a solid particle depends on the positions
and velocities of a very large number of other particles. Although one of the complica-
tions of mesoscale simulations is avoided – Brownian motion is absent in macroscopic
particles – we get in return another complication, namely inertia of the particles and
fluid. High Reynolds number hydrodynamic flow is practically impossible to predict
analytically.
The solution is to perform detailed simulations of particles embedded in a fluid
governed by the full Navier-Stokes equations. To ensure accurate predictions of all
hydrodynamic forces, the Navier-Stokes partial differential equations must be solved
numerically on a grid which is much smaller than the particle diameter. These types

of simulations are collectively referred to as direct numerical simulations (DNS).7 In


DNS the coupling with the embedded solid particles is accomplished by enforcing no-
slip boundary conditions between the liquid and the solid particles at the surface of
the particles. The fluid-induced forces and torques on the particles follow from the
pressure and stress tensor of the fluid at the boundary of the volume of particle i , which
we denote symbolically as $\partial V_i$:
 
$$\mathbf{F}_{fluid,i} = -\int_{\partial V_i} P(a)\,\hat{\mathbf{n}}(a)\,da + \int_{\partial V_i} \bar{\bar{S}}(a) \cdot \hat{\mathbf{n}}(a)\,da, \qquad (6.32)$$
$$\mathbf{T}_{fluid,i} = -\int_{\partial V_i} P(a)\,\left(\mathbf{r}(a) - \mathbf{r}_i\right) \times \hat{\mathbf{n}}(a)\,da + \int_{\partial V_i} \left(\mathbf{r}(a) - \mathbf{r}_i\right) \times \left(\bar{\bar{S}}(a) \cdot \hat{\mathbf{n}}(a)\right) da. \qquad (6.33)$$

Here $\int_{\partial V_i} \ldots\, da$ indicates a surface integral over the boundary of particle i. $\hat{\mathbf{n}}(a)$ is the
normal unit vector pointing away from the particle at a particular location r(a) of its
surface. In the above expressions S̄ is the fluid-phase stress tensor, given by
 
$$\bar{\bar{S}} = -\left(\lambda - \frac{2}{3}\mu\right)\left(\nabla \cdot \mathbf{u}\right)\bar{\bar{I}} - \mu\left(\nabla \mathbf{u} + \left(\nabla \mathbf{u}\right)^T\right), \qquad (6.34)$$

where the bulk viscosity λ is usually set to zero for gases. An explicit example of this
type of boundary integral calculation for the case of (zero Reynolds number) Stokes
flow of a sphere can be found in Appendix A.2. Note that for a sphere the pressure can-
not exert a torque on a particle because the normal unit vector n̂(a) is always parallel
to (r(a) − ri ).
In these lectures we will not further delve into the details of DNS simulations, as
they are the topic of other (Computation Fluid Dynamics) courses.

Discrete particle model (DPM) simulations


When solid particles are flowing through a gas, the situation is relatively simpler (though
still quite complex!) because usually the lubrication forces are not as dominant as in a
liquid – the viscosity of a gas is much lower than in a liquid. Although multiple parti-
cles in a gas flow affect each other, direct numerical simulations of solid particles in gas
flows have shown that the multi-particle effect on the drag force experienced by a solid
particle may be approximated as a dependence of the drag on the local solid volume
fraction φ only. In other words, the drag forces are influenced by the presence of other
solid particles, but experience has shown that their influence may be approximated
as only indirectly, through a dependence of the drag forces on the local solid volume
fraction. Making use of this fact, we may reverse the situation and solve the Navier-
Stokes equations on a grid which is much larger than the size of the particles. This is
exactly what is done in discrete particle model (DPM) simulations. DPM simulations
7
Sometimes lattice Boltzmann simulations (without thermal fluctuations) are employed as an effi-
cient Navier-Stokes solver. Such simulations are then also referred to as direct numerical simulations.

Figure 6.5: Injection of a single bubble in the centre of a fluidized bed (bed width: 0.30 m), containing spherical glass beads of 2.5 mm diameter at incipient fluidisation conditions. Comparison of experimental data (top) with discrete particle model (DPM) simulations (bottom) for 0.1, 0.2 and 0.4 s after bubble injection [12].

have been applied with success to, e.g., the flow of particles in fluidized bed reactors
(Figure 6.5).

Gas-induced forces on a particle

In DPM simulations the fluid-induced force on particle i of volume $V_i$ consists of two
terms:
$$\mathbf{F}_{fluid,i} = -V_i \nabla P + \frac{V_i \beta}{\phi}\left(\mathbf{u} - \mathbf{v}_i\right). \qquad (6.35)$$

The first term is the buoyant force on particle i , driven by pressure gradients in the
surrounding fluid. The second term is the drag force due to a difference between the
local gas velocity u and the particle velocity vi ; β is the so-called inter-phase momen-
tum transfer coefficient (in units of kg/(s.m3 )) and φ the local solid volume fraction.
Regarding the torque Ti on particle i , the surrounding medium could exert a torque
(think of a fluid with a high amount of vorticity), but in practice this is often neglected
in DPM models.8
To calculate the fluid-induced force we therefore need local estimates of the solid
volume fraction φ and the inter-phase momentum transfer coefficient β.
Usually the volume fraction is stored at discrete grid locations of the computational
cells for the fluid. When the volume of the smallest computational cell for the fluid is
8
For spheres in a gas (such as in fluidized beds) neglecting the gas-induced torque is acceptable, but
for elongated particles or for particles suspended in a liquid this is most probably not allowed.
6.5. COUPLING TO A FLUID FLOW 151

much larger than the volume of a particle, the mapping of properties from the (Eule-
rian) computational grid to the (Lagrangian) particle positions (and vice versa) can be
done in a straightforward manner through volume-weighing techniques [32] and [18].
The idea is to calculate the local solid volume fraction φcel l of a cell as

$$\phi_{cell} = \frac{1}{V_{cell}} \sum_{i \in cell} f^{\,i}_{cell} V_i, \qquad (6.36)$$

where $V_{cell}$ is the volume of the cell and $f^{\,i}_{cell}$ is the fractional volume of particle i re-
siding in the cell under consideration. After obtaining the solid volume fractions of all
cells, the local solid volume fraction φ at the location of a specific particle is obtained
by trilinear interpolation.
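A minimal sketch in C of Eq. (6.36) on a uniform fluid grid. As a simplification, each particle's full volume is assigned to the cell containing its centre, instead of the fractional volumes $f^{\,i}_{cell}$ of the volume-weighing technique; the grid layout and all names (cell_solid_fraction, ...) are illustrative assumptions.

#include <math.h>

/* Solid volume fraction per fluid cell on a uniform nx*ny*nz grid of
   spacing h; particle centres are assumed to lie inside the domain. */
void cell_solid_fraction(const double (*r)[3], const double *radius, int Npart,
                         int nx, int ny, int nz, double h, double *phi)
{
    const double pi = 3.14159265358979323846;
    int ncell = nx * ny * nz;
    for (int c = 0; c < ncell; c++) phi[c] = 0.0;

    for (int i = 0; i < Npart; i++) {
        int cx = (int)(r[i][0] / h);   /* cell indices of the particle centre */
        int cy = (int)(r[i][1] / h);
        int cz = (int)(r[i][2] / h);
        double Vi = (4.0/3.0) * pi * radius[i]*radius[i]*radius[i];
        phi[(cz*ny + cy)*nx + cx] += Vi;
    }
    for (int c = 0; c < ncell; c++) phi[c] /= h*h*h;   /* divide by V_cell */
}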
The inter-phase momentum transfer coefficient is frequently modelled by combin-
ing the Ergun equation [22] for dense regimes (φ > 0.2),

$$F_{Ergun} = \frac{\beta d^2}{\mu} = 150\,\frac{\phi^2}{1-\phi} + 1.75\,\phi\,\mathrm{Re}, \qquad (6.37)$$

and the correlation proposed by Wen and Yu [61] for the more dilute regimes (φ < 0.2),

$$F_{\mathrm{Wen\&Yu}} = \frac{\beta d^2}{\mu} = \frac{3}{4}\, C_D\, \mathrm{Re}\, \phi \left(1 - \phi\right)^{-2.65}, \qquad (6.38)$$
$$C_D = \begin{cases} 24\left(1 + 0.15\,\mathrm{Re}^{0.687}\right)/\mathrm{Re} & \text{if } \mathrm{Re} < 10^3, \\ 0.44 & \text{if } \mathrm{Re} > 10^3, \end{cases} \qquad (6.39)$$

where d is the particle diameter and μ is the gas viscosity. Re is the particle Reynolds
number, defined as

$$\mathrm{Re} = \frac{(1-\phi)\,\rho_g \left|\mathbf{u} - \mathbf{v}_i\right| d}{\mu}, \qquad (6.40)$$

where ρ g is the gas density. The particle Reynolds number is usually much larger than
unity, which gives rise to an unrealistic jump in the drag curve at φ = 0.2 [17]. This
problem can be circumvented by using the least value of Eqs. (6.37) and (6.38) for the
calculation of β.
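A sketch in C of this recipe: compute the particle Reynolds number of Eq. (6.40), evaluate Eqs. (6.37)-(6.39), and take the least value to obtain β. The function and variable names are illustrative.

#include <math.h>

/* Inter-phase momentum transfer coefficient beta from the smaller of the
   Ergun (6.37) and Wen & Yu (6.38)-(6.39) values. d = particle diameter,
   mu = gas viscosity, rho_g = gas density, slip = |u - v_i|. */
double beta_drag(double phi, double d, double mu, double rho_g, double slip)
{
    double Re = (1.0 - phi) * rho_g * slip * d / mu;        /* Eq. (6.40) */
    if (Re < 1.0e-12) Re = 1.0e-12;                         /* avoid division by zero */
    double Cd = (Re < 1.0e3) ? 24.0 * (1.0 + 0.15 * pow(Re, 0.687)) / Re
                             : 0.44;                        /* Eq. (6.39) */
    double F_ergun = 150.0 * phi * phi / (1.0 - phi) + 1.75 * phi * Re;  /* (6.37) */
    double F_wenyu = 0.75 * Cd * Re * phi * pow(1.0 - phi, -2.65);       /* (6.38) */
    double F = (F_ergun < F_wenyu) ? F_ergun : F_wenyu;     /* least value */
    return F * mu / (d * d);   /* from F = beta d^2 / mu */
}

Taking the least of the two correlations removes the unphysical jump at φ = 0.2 mentioned above.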
Many other drag relations exist, of which we mention only one, by Beetstra et al. [8],

$$F_{Beetstra} = \frac{\beta d^2}{\mu} = A\,\frac{\phi^2}{1-\phi} + B\,\phi\,\mathrm{Re}, \qquad (6.41)$$
$$A = 180 + \frac{18\left(1-\phi\right)^4}{\phi}\left(1 + 1.5\sqrt{\phi}\right), \qquad (6.42)$$
$$B = \frac{0.31\left[\left(1-\phi\right)^{-1} + 3\phi\left(1-\phi\right) + 8.4\,\mathrm{Re}^{-0.343}\right]}{1 + 10^{3\phi}\,\mathrm{Re}^{-0.5-2\phi}}. \qquad (6.43)$$
This drag relation is frequently used in our DPM simulations of gas-fluidised beds.

Particle-induced forces on gas

The gas does not only exert a force on the particles, but by Newton’s third law the par-
ticles also exert a force on the gas. At very low solid volume fractions (φ < 10−4 ) this
so-called two-way coupling may be neglected (leading to a one-way coupled system),
but in general it may not. The gas-phase hydrodynamics are then calculated from the
volume-averaged Navier-Stokes equations [17]:
$$\frac{\partial}{\partial t}\left[(1-\phi)\rho_g\right] + \nabla \cdot \left[(1-\phi)\rho_g \mathbf{u}\right] = 0 \qquad (6.44)$$
$$\frac{\partial}{\partial t}\left[(1-\phi)\rho_g \mathbf{u}\right] + \nabla \cdot \left[(1-\phi)\rho_g \mathbf{u}\mathbf{u}\right] = -(1-\phi)\nabla P - \nabla \cdot \left[(1-\phi)\bar{\bar{S}}\right] - \mathbf{S}_p + (1-\phi)\rho_g \mathbf{g}. \qquad (6.45)$$

The gas-phase stress tensor S̄ is given by Eq. (6.34). The two-way coupling between the
gas phase and the particles is enforced via the sink term Sp in the momentum equation
of the gas-phase, which is computed from [17]:
 
$$\mathbf{S}_p = \frac{1}{V_{cell}} \int_{V_{cell}} \sum_{i=0}^{N} \frac{V_i \beta}{\phi}\left(\mathbf{u} - \mathbf{v}_i\right) D\left(\mathbf{r} - \mathbf{r}_i\right) dV. \qquad (6.46)$$

Here the sum is over all N particles inside the system. The distribution function D dis-
tributes the reaction force acting on the gas phase to the (Eulerian) gas grid. When the
volume of the smallest computational cell for the fluid is much larger than the volume
of a particle, the mapping of properties from the (Lagrangian) particle positions to the
(Eulerian) computational grid and vice versa can be done in a straightforward manner
through trilinear interpolation.

6.6 Stochastic methods to model the dynamics of dilute granular flow
Note that all the above methods (DEM, DNS and DPM) are deterministic methods:
collisions between the particles are detected based on precise rules for overlap. In cer-
tain limiting cases it is possible to use stochastic methods to deal with the collisions.
Particularly, when we are dealing with relatively dilute streams of particles, sufficiently
perturbed by gas flow to experience random velocity fluctuations, we may take advan-
tage of the fact that collisions are rare on a particle-to-particle basis (large mean-free-
paths), yet common enough to apply probabilistic collision rules.
For example, when modelling liquid droplets in the jet emerging from a spray dryer,
we are dealing with hundreds of millions or even billions of particles (droplets). It is
computationally too expensive to deal with such particles in a deterministic fashion.
Rather, we can use the direct simulation Monte Carlo (DSMC) technique [10], briefly
touched upon in chapter 5, to estimate the probability that a given particle i of size d i
moving with velocity vi will collide with another nearby particle j of size d j moving

with velocity v j .9 We then let random pairs of particles (within a prescribed range) col-
lide with each other with the correct probability. Further speed-up may be achieved by
not actually simulating each and every particle but by tracking representative droplets,
each of which represents a large number (say 102 to 104 ) real particles.
Of course such a stochastic approach is only allowed as long as many random col-
lisions occur between the particles (in relatively dilute streams) and we are not inter-
ested in the exact deterministic path of individual particles.

6.7 Limitations of macroscopic particulate models


A number of methods exist to deal with the generally high Reynolds number dynamics
of granular particles. Let us summarize the range of applicability and limitations of
each method.

• Discrete element model (DEM) simulations neglect all hydrodynamic effects of
the fluid surrounding the granular particles. The advantage is that this makes
DEM fast compared to methods that do take into account the flow of liquid or
gas; systems containing a million particles are not uncommon. Disadvantage is
that the method is limited to those cases where we can neglect the fluid hydro-
dynamics. Examples include large particles moving slowly through a dilute gas
and slow flow of dense packings of particles.

• In direct numerical simulations (DNS) the effects of the fluid are included in
great detail. Because the grid cell size – used to solve the discretised Navier-
Stokes equations – is much smaller than the particle size, DNS simulations are
usually limited to very small system sizes (approximately 100 particles).

• In discrete particle model (DPM) simulations we use effective drag relations –
these may be obtained from experiments or DNS simulations – to model the cou-
pling with the fluid phase. Because the grid cell size is much larger than the par-
ticle size, DPM simulations can deal with much larger systems than DNS simula-
tions (typically a few times 105 particles). A limitation is that in systems where the
hydrodynamic forces are important, the accuracy of DPM simulations depends
critically on the applicability and accuracy of the effective drag relations.

• In stochastic methods such as direct simulation Monte Carlo (DSMC) the colli-
sions between particles are not detected deterministically, but rather stochasti-
cally, based on collision probabilities. Also many particles may be represented by
just a single representative particle. Advantage is a huge speed-up, allowing for
simulations of 106 to 109 particles. However, these methods are usually limited
to relatively dilute particle flows where many random collisions occur between
the particles.
9
Using kinetic theory, originally developed for dilute gases, it is possible to show that this probability will grow with the collision cross-section area $(d_1 + d_2)^2$ and with the relative velocity $\left|\mathbf{v}_i - \mathbf{v}_j\right|$.
APPENDIX A
HYDRODYNAMIC FORCES ON SLOWLY MOVING SPHERES

In this appendix we will calculate hydrodynamic (friction) forces on slowly moving


spheres (Reynolds number ≪ 1). We will first introduce the Navier-Stokes and Stokes
equations, then deal with a single sphere, and finally deal with hydrodynamic interac-
tions in a suspension of many spheres.

A.1 Navier-Stokes and Stokes equations


To formulate the basic equations for the fluid we utilize the conservation of mass and
momentum. The conservation of mass is expressed by the continuity equation


$$\frac{D\rho}{Dt} = -\rho\,\nabla \cdot \mathbf{v}, \qquad (A.1)$$
and the conservation of momentum by the Navier-Stokes equation

$$\rho\,\frac{D\mathbf{v}}{Dt} = \nabla \cdot \bar{\bar{S}}. \qquad (A.2)$$
Here ρ(r, t ) is the fluid density, v(r, t ) the fluid velocity, D/Dt ≡ v · ∇ + ∂/∂t the total
derivative, and S̄ is the stress tensor.
We now have to specify the nature of the stress tensor S̄. For a viscous fluid, friction
occurs when the distance between two neighbouring fluid elements changes, i.e. they
move relative to each other. Most simple fluids can be described by a stress tensor
which consists of a part which is independent of the velocity, and a part which depends
linearly on the derivatives ∂v α /∂r β , i.e., where the friction force is proportional to the


instantaneous relative velocity of the two fluid elements.1 The most general form of
the stress tensor for such a fluid is
   
$$S_{\alpha\beta} = \mu\left(\frac{\partial v_\alpha}{\partial r_\beta} + \frac{\partial v_\beta}{\partial r_\alpha}\right) - \left[P + \left(\frac{2}{3}\mu - \kappa\right)\nabla \cdot \mathbf{v}\right]\delta_{\alpha\beta}, \qquad (A.3)$$

where μ is the dynamic shear viscosity, κ the bulk viscosity, which is the resistance of
the fluid against compression, and P the pressure.
Many flow fields of interest can be described assuming that the fluid is incompress-
ible, i.e. that the density along the flow is constant. In that case ∇·v = 0, as follows from
Eq. (A.1). Assuming moreover that the velocities are small, and that the second order
non-linear term v· ∇v may be neglected, we obtain Stokes equation for incompressible
flow
$$\rho\,\frac{\partial \mathbf{v}}{\partial t} = \mu \nabla^2 \mathbf{v} - \nabla P \qquad (A.4)$$
$$\nabla \cdot \mathbf{v} = 0. \qquad (A.5)$$

This is the starting point for our calculation of the hydrodynamic forces on spheres.

A.2 Friction on a single slowly moving sphere


First consider a single sphere of radius a moving with velocity vS in a quiescent liquid.
Assume that the velocity field is stationary. Referring all coordinates and velocities to
a frame which moves with velocity vS relative to the fluid transforms the problem into
one of a resting sphere in a fluid which, at large distances from the sphere, moves with
constant velocity v0 ≡ −vS . The problem is best considered in spherical coordinates
(see Fig. A.1),2 v(r) = v r êr +v θ êθ +v φ êφ , so that θ = 0 in the flow direction. By symmetry
1
The calculations in this Appendix assume that the solvent is an isotropic, unstructured fluid, with
a characteristic stress relaxation time which is much smaller than the time scale of any flow experiment.
The stress response of such a so-called Newtonian fluid appears to be instantaneous. Newtonian flu-
ids usually consist of small and roughly spherical molecules, e.g., water and light oils. Non-Newtonian
fluids, on the other hand, usually consist of large or elongated molecules. Often they are structured,
either spontaneously or under the influence of flow. Their characteristic stress relaxation time is experi-
mentally accessible. As a consequence, the stress between two non-Newtonian fluid elements generally
depends on the history of relative velocities, and contains an elastic part. Examples are polymers and
self-assembling surfactants.
2
In spherical coordinates the gradient, Laplacian and divergence are given by

$$\nabla f = \hat{\mathbf{e}}_r \frac{\partial f}{\partial r} + \hat{\mathbf{e}}_\theta \frac{1}{r}\frac{\partial f}{\partial \theta} + \hat{\mathbf{e}}_\phi \frac{1}{r\sin\theta}\frac{\partial f}{\partial \phi}$$
$$\nabla^2 f = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial f}{\partial r}\right) + \frac{1}{r^2 \sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\,\frac{\partial f}{\partial \theta}\right) + \frac{1}{r^2 \sin^2\theta}\frac{\partial^2 f}{\partial \phi^2}$$
$$\nabla \cdot \mathbf{v} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 v_r\right) + \frac{1}{r\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\, v_\theta\right) + \frac{1}{r\sin\theta}\frac{\partial v_\phi}{\partial \phi}.$$

Figure A.1: Definition of spherical coordinates (r, θ, φ) and the unit vectors $\hat{\mathbf{e}}_r$, $\hat{\mathbf{e}}_\theta$, and $\hat{\mathbf{e}}_\phi$.
the azimuthal component of the fluid velocity is equal to zero, v φ = 0. The fluid flow at
infinity gives the boundary conditions

$$\left.\begin{array}{l} v_r = v_0 \cos\theta \\ v_\theta = -v_0 \sin\theta \end{array}\right\} \quad \text{for } r \to \infty. \qquad (A.6)$$

Moreover, we will assume that the fluid is at rest on the surface of the sphere (stick
boundary conditions):

v r = v θ = 0 for r = a. (A.7)

It can easily be verified that the solution of Eqs. (A.4) - (A.5) is


 
$$v_r = v_0 \cos\theta \left(1 - \frac{3a}{2r} + \frac{a^3}{2r^3}\right) \qquad (A.8)$$
$$v_\theta = -v_0 \sin\theta \left(1 - \frac{3a}{4r} - \frac{a^3}{4r^3}\right) \qquad (A.9)$$
$$p - p_0 = -\frac{3}{2}\,\frac{\mu v_0 a}{r^2}\cos\theta. \qquad (A.10)$$
We shall now use this flow field to calculate the friction force exerted by the fluid on the
sphere. The stress on the surface of the sphere results in the following force per unit
area:

$$\mathbf{f} = \bar{\bar{S}} \cdot \hat{\mathbf{e}}_r = \hat{\mathbf{e}}_r S_{rr} + \hat{\mathbf{e}}_\theta S_{\theta r} = -\hat{\mathbf{e}}_r\, p\big|_{(r=a)} + \hat{\mathbf{e}}_\theta\, \mu \left.\frac{\partial v_\theta}{\partial r}\right|_{(r=a)}$$
$$= \left(-p_0 + \frac{3\mu v_0}{2a}\cos\theta\right)\hat{\mathbf{e}}_r - \frac{3\mu v_0}{2a}\sin\theta\,\hat{\mathbf{e}}_\theta. \qquad (A.11)$$
Integrating over the whole surface of the sphere, only the component in the flow direc-
tion survives:
   
$$F = \int d\Omega\, a^2 \left[\left(-p_0 + \frac{3\mu v_0}{2a}\cos\theta\right)\cos\theta + \frac{3\mu v_0}{2a}\sin^2\theta\right] = 6\pi\mu a v_0. \qquad (A.12)$$
Transforming back to the frame in which the sphere is moving with velocity vS =
−v0 through a quiescent liquid, we find for the fluid flow field
   
$$\mathbf{v}(\mathbf{r}) = \mathbf{v}_S\,\frac{3a}{4r}\left(1 + \frac{a^2}{3r^2}\right) + \hat{\mathbf{e}}_r\left(\hat{\mathbf{e}}_r \cdot \mathbf{v}_S\right)\frac{3a}{4r}\left(1 - \frac{a^2}{r^2}\right), \qquad (A.13)$$

and the friction on the sphere

F = −ζvS = −6πμavS . (A.14)

F is known as the Stokes friction.

A.3 Hydrodynamic interactions between slowly moving spheres
In the previous subsection we calculated the flow field in the solvent around a single
slowly moving sphere. When more than one sphere is present in the system, this flow
field will be felt by the other spheres. As a result these spheres experience a force which
is said to result from hydrodynamic interactions with the original sphere.
We will assume that at each time the fluid flow field can be treated as a steady state
flow field [15]. This is true for very slow flows, where changes in positions and velocities
of the spheres take place over much larger time scales than the time it takes for the fluid
flow field to react to such changes. The hydrodynamic problem then is to find a flow
field satisfying the stationary Stokes equations,

μ∇2 v = ∇P (A.15)
∇ · v = 0, (A.16)

together with the boundary conditions

v (Ri + a) = vi ∀i , (A.17)

where Ri is the position vector and vi is the velocity vector of the i ’th sphere, and a is
any vector of length a. If the spheres are very far apart we may approximately consider
any one of them to be alone in the fluid. The flow field is then just the sum of all flow
fields emanating from the different spheres

$$\mathbf{v}(\mathbf{r}) = \sum_i \mathbf{v}^{(0)}_i\left(\mathbf{r} - \mathbf{R}_i\right), \qquad (A.18)$$

where, according to Eq. (A.13),



$$\mathbf{v}^{(0)}_i\left(\mathbf{r} - \mathbf{R}_i\right) = \mathbf{v}_i\,\frac{3a}{4\left|\mathbf{r} - \mathbf{R}_i\right|}\left(1 + \frac{a^2}{3\left(\mathbf{r} - \mathbf{R}_i\right)^2}\right) + \left(\mathbf{r} - \mathbf{R}_i\right)\left(\left(\mathbf{r} - \mathbf{R}_i\right) \cdot \mathbf{v}_i\right)\frac{3a}{4\left|\mathbf{r} - \mathbf{R}_i\right|^3}\left(1 - \frac{a^2}{\left(\mathbf{r} - \mathbf{R}_i\right)^2}\right). \qquad (A.19)$$

We shall now calculate the correction to this flow field, which is of lowest order in the
sphere separation.

We shall first discuss the situation for only two spheres in the fluid. In the neigh-
bourhood of sphere one the velocity field may be written as

$$\mathbf{v}(\mathbf{r}) = \mathbf{v}^{(0)}_1\left(\mathbf{r} - \mathbf{R}_1\right) + \frac{3a}{4\left|\mathbf{r} - \mathbf{R}_2\right|}\left[\mathbf{v}_2 + \frac{\mathbf{r} - \mathbf{R}_2}{\left|\mathbf{r} - \mathbf{R}_2\right|}\left(\frac{\mathbf{r} - \mathbf{R}_2}{\left|\mathbf{r} - \mathbf{R}_2\right|} \cdot \mathbf{v}_2\right)\right], \qquad (A.20)$$

where we have approximated $\mathbf{v}^{(0)}_2\left(\mathbf{r} - \mathbf{R}_2\right)$ to terms of order $a/\left|\mathbf{r} - \mathbf{R}_2\right|$. On the surface of
sphere one we approximate this further by
$$\mathbf{v}\left(\mathbf{R}_1 + \mathbf{a}\right) = \mathbf{v}^{(0)}_1(\mathbf{a}) + \frac{3a}{4R_{21}}\left[\mathbf{v}_2 + \hat{\mathbf{R}}_{21}\left(\hat{\mathbf{R}}_{21} \cdot \mathbf{v}_2\right)\right], \qquad (A.21)$$

where $\hat{\mathbf{R}}_{21} = \left(\mathbf{R}_2 - \mathbf{R}_1\right)/\left|\mathbf{R}_2 - \mathbf{R}_1\right|$. Because $\mathbf{v}^{(0)}_1(\mathbf{a}) = \mathbf{v}_1$, we notice that this result is not
consistent with the boundary condition v(R1 +a) = v1 . In order to satisfy this boundary
condition we subtract from our results so far, a solution of Eqs. (A.15) and (A.16) which
goes to zero at infinity, and which on the surface of sphere one corrects for the second
term in Eq. (A.21). The flow field in the neighbourhood of sphere one then reads


$$\mathbf{v}(\mathbf{r}) = \mathbf{v}^{corr}_1\,\frac{3a}{4\left|\mathbf{r} - \mathbf{R}_1\right|}\left(1 + \frac{a^2}{3\left(\mathbf{r} - \mathbf{R}_1\right)^2}\right) + \left(\mathbf{r} - \mathbf{R}_1\right)\left(\left(\mathbf{r} - \mathbf{R}_1\right) \cdot \mathbf{v}^{corr}_1\right)\frac{3a}{4\left|\mathbf{r} - \mathbf{R}_1\right|^3}\left(1 - \frac{a^2}{\left(\mathbf{r} - \mathbf{R}_1\right)^2}\right) + \frac{3a}{4R_{21}}\left[\mathbf{v}_2 + \hat{\mathbf{R}}_{21}\left(\hat{\mathbf{R}}_{21} \cdot \mathbf{v}_2\right)\right] \qquad (A.22)$$
$$\mathbf{v}^{corr}_1 = \mathbf{v}_1 - \frac{3a}{4R_{21}}\left[\mathbf{v}_2 + \hat{\mathbf{R}}_{21}\left(\hat{\mathbf{R}}_{21} \cdot \mathbf{v}_2\right)\right]. \qquad (A.23)$$

The flow field in the neighbourhood of sphere two is treated similarly.


We notice that the correction that we have applied to the flow field in order to satisfy
the boundary conditions at the surface of sphere one is of order a/R 21 . Its strength in
the neighbourhood of sphere two is then of order (a/R 21 )2 , and need therefore not be
taken into account when the flow field is adapted to the boundary conditions at sphere
two.
The flow field around sphere one is now given by Eqs. (A.22) and (A.23). The last
term in Eq. (A.22) does not contribute to the stress tensor (the gradient of a constant
field is zero). The force exerted by the fluid on sphere one then equals −6πμavcorr 1 . A
similar result holds for sphere two. In full we have

$$\mathbf{F}_1 = -6\pi\mu a\,\mathbf{v}_1 + 6\pi\mu a\,\frac{3a}{4R_{21}}\left(\bar{\bar{I}} + \hat{\mathbf{R}}_{21}\hat{\mathbf{R}}_{21}\right) \cdot \mathbf{v}_2 \qquad (A.24)$$
$$\mathbf{F}_2 = -6\pi\mu a\,\mathbf{v}_2 + 6\pi\mu a\,\frac{3a}{4R_{21}}\left(\bar{\bar{I}} + \hat{\mathbf{R}}_{21}\hat{\mathbf{R}}_{21}\right) \cdot \mathbf{v}_1, \qquad (A.25)$$

where Ī is the three-dimensional unit tensor. Inverting these equations, retaining only
terms up to order a/R 21 , we get
$$\mathbf{v}_1 = -\frac{1}{6\pi\mu a}\,\mathbf{F}_1 - \frac{1}{8\pi\mu R_{21}}\left(\bar{\bar{I}} + \hat{\mathbf{R}}_{21}\hat{\mathbf{R}}_{21}\right) \cdot \mathbf{F}_2 \qquad (A.26)$$
$$\mathbf{v}_2 = -\frac{1}{6\pi\mu a}\,\mathbf{F}_2 - \frac{1}{8\pi\mu R_{21}}\left(\bar{\bar{I}} + \hat{\mathbf{R}}_{21}\hat{\mathbf{R}}_{21}\right) \cdot \mathbf{F}_1 \qquad (A.27)$$
When more than two spheres are present in the fluid, corrections resulting from
n-body interactions (n ≥ 3) are of order (a/R i j )2 or higher and need not be taken into
account. The above treatment therefore generalizes to


$$\mathbf{F}_i = -\sum_{j=0}^{N} \bar{\bar{\zeta}}_{ij} \cdot \mathbf{v}_j \qquad (A.28)$$
$$\mathbf{v}_i = -\sum_{j=0}^{N} \bar{\bar{\mu}}_{ij} \cdot \mathbf{F}_j, \qquad (A.29)$$

where
$$\bar{\bar{\zeta}}_{ii} = 6\pi\mu a\,\bar{\bar{I}}, \qquad \bar{\bar{\zeta}}_{ij} = -6\pi\mu a\,\frac{3a}{4R_{ij}}\left(\bar{\bar{I}} + \hat{\mathbf{R}}_{ij}\hat{\mathbf{R}}_{ij}\right) \qquad (A.30)$$
$$\bar{\bar{\mu}}_{ii} = \frac{1}{6\pi\mu a}\,\bar{\bar{I}}, \qquad \bar{\bar{\mu}}_{ij} = \frac{1}{8\pi\mu R_{ij}}\left(\bar{\bar{I}} + \hat{\mathbf{R}}_{ij}\hat{\mathbf{R}}_{ij}\right). \qquad (A.31)$$

μ̄i j is generally called the mobility tensor. The specific form Eq. (A.31) is known as the
Oseen tensor.
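A minimal sketch in C of Eqs. (A.29)-(A.31): given the hydrodynamic forces on N spheres, compute their velocities from the self-mobility and the Oseen cross-mobility. The array names and calling convention are illustrative assumptions.

#include <math.h>

/* Velocities of N spheres of radius a in a fluid of viscosity mu from the
   forces F on them, using the diagonal mobility 1/(6 pi mu a) and the
   off-diagonal Oseen tensor (I + Rhat Rhat)/(8 pi mu R_ij). */
void oseen_velocities(const double (*R)[3], const double (*F)[3],
                      double (*v)[3], int N, double a, double mu)
{
    const double pi = 3.14159265358979323846;
    for (int i = 0; i < N; i++) {
        for (int k = 0; k < 3; k++)                      /* self term */
            v[i][k] = -F[i][k] / (6.0 * pi * mu * a);
        for (int j = 0; j < N; j++) {
            if (j == i) continue;
            double d[3] = { R[j][0]-R[i][0], R[j][1]-R[i][1], R[j][2]-R[i][2] };
            double Rij  = sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
            double fdr  = (F[j][0]*d[0] + F[j][1]*d[1] + F[j][2]*d[2]) / (Rij*Rij);
            for (int k = 0; k < 3; k++)                  /* Oseen cross term */
                v[i][k] -= (F[j][k] + fdr * d[k]) / (8.0 * pi * mu * Rij);
        }
    }
}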
APPENDIX B
MATHEMATICAL RELATIONS

B.1 Gaussian integrals


We will often need to evaluate integrals of the form
$$I_n(\alpha) = \int_0^{\infty} dx\, x^n e^{-\alpha x^2}. \qquad (B.1)$$

Solutions to these integrals can easily be generated by “differentiating under the inte-
gral sign”:
$$I_{n+2}(\alpha) = \int_0^{\infty} dx\, x^{n+2} e^{-\alpha x^2} = -\frac{\partial}{\partial \alpha} I_n(\alpha). \qquad (B.2)$$
Knowledge of the solution for n = 0 allows us to generate solutions for all even n,
whereas the solution for n = 1 allows us to generate solutions for all odd n.
Although the solution for n = 1 can easily be found, I 1 (α) = 1/(2α), the solution for
n = 0 requires some more thought. It can be calculated by considering the following
integral in the two-dimensional plane:
$$\int_0^{\infty} dx \int_0^{\infty} dy\, e^{-\alpha\left(x^2 + y^2\right)}. \qquad (B.3)$$

Because we can factorize the exponential this is clearly equal to [I 0 (α)]2 . Changing
from (x, y) to polar coordinates (r, φ), taking into account the Jacobian, we find
$$I_0(\alpha)^2 = \int_0^{\infty} dr \int_0^{\pi/2} d\phi\, r\, e^{-\alpha r^2} = \frac{\pi}{4\alpha}. \qquad (B.4)$$
Hence $I_0(\alpha) = \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}$.


The integrals I n appear so often that we write down the values of some of them
explicitly. We have

$$I_0 = \frac{1}{2}\left(\frac{\pi}{\alpha}\right)^{1/2}, \qquad I_2 = \frac{1}{4}\left(\frac{\pi}{\alpha^3}\right)^{1/2}, \qquad I_4 = \frac{3}{8}\left(\frac{\pi}{\alpha^5}\right)^{1/2}, \ldots \qquad (B.5)$$
and
$$I_1 = \frac{1}{2\alpha}, \qquad I_3 = \frac{1}{2\alpha^2}, \qquad I_5 = \frac{1}{\alpha^3}, \ldots \qquad (B.6)$$

B.2 Geometric series


The geometric series S n (x) is defined as


$$S_n(x) = \sum_{k=0}^{n} x^k = 1 + x + x^2 + \ldots + x^n. \qquad (B.7)$$

Multiplying both sides by x we find

xS n (x) = x + x 2 + x 3 + . . . + x n+1 . (B.8)

Subtracting these two equations yields

(1 − x)S n (x) = 1 − x n+1 . (B.9)

So we find

$$\sum_{k=0}^{n} x^k = \frac{1 - x^{n+1}}{1 - x}. \qquad (B.10)$$

For −1 < x < 1 the series converges as n → ∞, in which case


$$\sum_{k=0}^{\infty} x^k = \frac{1}{1 - x} \qquad (|x| < 1). \qquad (B.11)$$

B.3 Taylor series


A Taylor series is a series expansion of a function around a certain point. In one dimen-
sion, the Taylor series of a function f around the point x = a is given by

$$f(x) = f(a) + f'(a)(x-a) + \frac{1}{2}f''(a)(x-a)^2 + \frac{1}{6}f'''(a)(x-a)^3 + \ldots + \frac{1}{n!}f^{(n)}(a)(x-a)^n + \ldots, \qquad (B.12)$$

where f (n) (a) denotes the n-th derivative of f at x = a.



Here follow a few examples:


$$\exp(x) = 1 + x + \frac{1}{2}x^2 + \frac{1}{6}x^3 + \ldots + \frac{1}{n!}x^n + \ldots \qquad (B.13)$$
$$\ln(1+x) = x - \frac{1}{2}x^2 + \frac{1}{3}x^3 + \ldots + \frac{(-1)^{n-1}}{n}x^n + \ldots \qquad (B.14)$$
$$\cos(x) = 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4 + \ldots + \frac{\cos(n\pi/2)}{n!}x^n + \ldots \qquad (B.15)$$
$$\sin(x) = x - \frac{1}{6}x^3 + \frac{1}{120}x^5 + \ldots + \frac{\sin(n\pi/2)}{n!}x^n + \ldots \qquad (B.16)$$

B.4 Logarithms and exponentials


When manipulating logarithms and exponentials remember the following rules:

$$\exp(a+b) = \exp(a)\exp(b) \qquad (B.17)$$
$$\exp(a \ln b) = b^a \qquad (B.18)$$
$$\exp(ab) = \left(\exp(a)\right)^b = \left(\exp(b)\right)^a \qquad (B.19)$$
$$\ln(ab) = \ln(a) + \ln(b) \qquad (B.20)$$
$$\ln(b^a) = a \ln(b) \qquad (B.21)$$

Derivatives are given by


$$\frac{d}{dx}\, b\exp(ax) = ab\exp(ax) \qquad (B.22)$$
$$\frac{d}{dx}\, a^x = \ln(a)\, a^x \qquad (B.23)$$
$$\frac{d}{dx}\, b\ln(cx^a) = b\,\frac{a}{x} \qquad (B.24)$$
and integrals by

$$\int \exp(ax)\,dx = \frac{1}{a}\exp(ax) \qquad (B.25)$$
$$\int \ln(ax)\,dx = x\ln(ax) - x \qquad (B.26)$$
APPENDIX C
RANDOM NUMBER GENERATORS

C.1 Uniform random numbers


Most modern programming languages include a function that generates numbers ξ
that are uniformly distributed in the range ξ ∈ (0, 1). The generated numbers are not
truly random, but pseudo-random: successive numbers are generated by arithmetic ma-
nipulation inside the computer. The trick is to produce a repeatable sequence that
passes a wide range of statistical tests for independence [4]. We will assume that you
have available such a uniform random number generator. Usually they are called some-
thing like rand or random. Be careful: functions that generate a random integer between 0 and a
maximum integer often have similar names. For example, in C the function rand()
returns a random integer between 0 and RAND_MAX. This may be used to make a quick
and dirty uniform random number generator: ξ = rand()/(RAND_MAX + 1.0).
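Written out as a small C helper (the name uniform_random is a hypothetical choice), this reads:

#include <stdlib.h>

/* Quick and dirty uniform random number in [0,1), built on the standard C
   library generator rand(); good enough for testing, but a better generator
   should be used for production runs [4]. */
double uniform_random(void)
{
    return rand() / ((double)RAND_MAX + 1.0);
}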

C.2 Gaussian random numbers


For several applications we need numbers from a Gaussian (or normal) distribution
with zero average:

$$P(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right), \qquad (C.1)$$
where σ2 is the variance and σ the standard deviation of the distribution.
Here we describe two methods to generate a Gaussian random number with unit
variance. Note that a Gaussian number with variance σ2 is generated by simply multi-
plying the resulting random number x by σ. The first method is called the Box-Muller
transform [13]:


• generate two uniform random numbers: ξ1 and ξ2 ;

• calculate x1 = (−2 ln ξ1 )1/2 cos(2πξ2 ) and x2 = (−2 lnξ1 )1/2 sin(2πξ2 ).

The numbers x1 and x2 are then two independent Gaussian random numbers. The
second method involves the generation of 12 uniform random numbers:

• generate 12 uniform random numbers ξ1, . . . , ξ12;
• calculate $x = \sum_{i=1}^{12} \xi_i - 6$.

This generates an approximately Gaussian random number (by virtue of the central
limit theorem). Clearly random numbers outside the range (−6, 6) will never occur, but
for some applications (such as initialising velocities in a simulation) this may actually
be an advantage. The computational speed relative to the Box-Muller technique will
depend on the timing of the logarithm, square root, cosine and sine functions, as well
as the uniform random number generator itself.
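A sketch in C of the Box-Muller transform; the second number of each generated pair is cached so that consecutive calls return independent Gaussian numbers. The function name is illustrative, and the uniform deviates come from the quick-and-dirty rand()-based generator of section C.1.

#include <math.h>
#include <stdlib.h>

/* Gaussian random number with zero mean and unit variance (Box-Muller). */
double gaussian_random(void)
{
    const double pi = 3.14159265358979323846;
    static int have_spare = 0;
    static double spare;

    if (have_spare) { have_spare = 0; return spare; }

    double xi1 = (rand() + 1.0) / ((double)RAND_MAX + 1.0);  /* in (0,1], avoids log(0) */
    double xi2 = rand() / ((double)RAND_MAX + 1.0);
    double r   = sqrt(-2.0 * log(xi1));

    spare = r * sin(2.0 * pi * xi2);
    have_spare = 1;
    return r * cos(2.0 * pi * xi2);
}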

C.3 Constructing random numbers with other distributions
Given an analytical expression P (x) for some normalised distribution between xmi n
and xmax (possibly ±∞), it is in some cases possible to construct an analytical mapping
between a uniform random number ξ and a number x from P (x). First determine the
cumulative probability up to x:
$$C(x) \equiv \int_{x_{min}}^{x} P(x')\,dx'. \qquad (C.2)$$

Because P (x) is normalised, C (x) ranges from 0 at xmi n to 1 at xmax . Indeed, C (x)
presents the mapping between x and the uniform distribution. If C (x) can be inverted,
we have the mapping from a uniform distribution to the distribution P (x):
x = C −1 (ξ). (C.3)
As an explicit example, consider the biased velocity distribution in Eq. (2.21),
$$P(v) = \frac{m}{k_B T}\, v\, e^{-\frac{m v^2}{2 k_B T}}. \qquad (C.4)$$
The cumulative probability is:
$$C(v) = \int_0^{v} P(v')\,dv' = 1 - e^{-\frac{m v^2}{2 k_B T}} \qquad (C.5)$$
Equating C (v ) = ξ and inverting, we find our mapping:
$$v = \left(-\frac{2 k_B T}{m}\ln(1 - \xi)\right)^{1/2}. \qquad (C.6)$$
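A minimal sketch in C of this inverse-transform mapping for the biased velocity distribution, Eq. (C.6); the mass m, Boltzmann's constant kB and temperature T are passed in, the uniform deviate again comes from the quick-and-dirty rand()-based generator, and all names are illustrative.

#include <math.h>
#include <stdlib.h>

/* Draw a speed v from the biased distribution of Eq. (C.4) via Eq. (C.6). */
double sample_biased_speed(double m, double kB, double T)
{
    double xi = rand() / ((double)RAND_MAX + 1.0);   /* uniform deviate in [0,1) */
    return sqrt(-2.0 * kB * T / m * log(1.0 - xi));
}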
APPENDIX D
PHYSICAL CONSTANTS

Planck’s constant, h = 6.62607 × 10−34 J s


atomic mass unit, u = 1.66054 × 10−27 kg
Boltzmann’s constant, k B = 1.38065 × 10−23 J/K
Avogadro’s number, N Av = 6.02214 × 1023 mol−1
Universal gas constant, R = 8.3145 J/K mol−1
Permittivity of vacuum, ε0 = 8.854 × 10−12 F/m

BIBLIOGRAPHY

[1] R. Adhikari, K. Stratford, M.E. Cates, and A.J. Wagner, Fluctuating lattice Boltz-
mann, Europhys. Lett. 71, 473 (2005).

[2] B.J. Alder and T.E. Wainwright, Studies in Molecular Dynamics. I. General Method,
Journal of Chemical Physics 31, 459 (1959).

[3] E. Allahyarov and G. Gompper, Phys. Rev. E 66, 036702 (2002).

[4] M.P. Allen and D.J. Tildesley, Computer Simulation of Liquids, Clarendon (Oxford),
1987.

[5] S. Asakura and F. Oosawa, J. Polym. Sci., Polym. Symp. 33, 183 (1958); A. Vrij, Pure
Appl. Chem. 48, 471 (1976).

[6] J.-L. Barrat, L. Bocquet, Phys. Rev. Lett. 82, 4671 (1999).

[7] G. K. Batchelor, J. Fluid Mech. 56, 375 (1972).

[8] R. Beetstra, M.A. van der Hoef and J.A.M. Kuipers, Drag force from lattice Boltz-
mann simulations of intermediate Reynolds number flow past mono- and bidis-
perse arrays of spheres, A.I.Ch.E. Journal 53, 489 (2007).

[9] H.C. Berg, Random Walks in Biology, Princeton University Press (Princeton, 1993).

[10] G.A. Bird, Molecular Gas Dynamics and the Direct Simulation of Gas Flows,
Clarendon (Oxford), 1994.

[11] L. Bocquet and J.-L. Barrat, Phys. Rev. E 49, 3079 (1994).

[12] G.A. Bokkers, Multi-level modeling of the hydrodynamics in gas phase polymerisa-
tion reactors, Ph.D. thesis, University of Twente, The Netherlands, 2005.

[13] G.E.P. Box and M.E. Muller, A note on the generation of random normal deviates,
Ann. Math. Stat. 29, 610 (1958).

[14] J.F. Brady and G. Bossis, Stokesian Dynamics, Ann. Rev. Fluid Mech. 20, 111 (1988).

[15] W.J. Briels, Theory of Polymer Dynamics, Lecture notes, Uppsala (1994).


[16] P.A. Cundall and O.D.L. Strack, A discrete numerical model for granular assemblies,
Geotechnique 29, 47 (1979).

[17] N.G. Deen, Review of discrete particle modeling of fluidized beds, Chem. Eng. Sci.
62, 28 (2007).

[18] E. Delnoij, J.A.M. Kuipers and W.P.M. van Swaaij, A three-dimensional CFD model
for gas-liquid bubble columns, Chem. Eng. Sci. 54, 2217 (1999).

[19] J. K. G. Dhont, An Introduction to the Dynamics of Colloids, Elsevier (Amsterdam,
1996).

[20] B. Dünweg, U.D. Schiller, and A.J.C. Ladd, Statistical mechanics of the fluctuating
lattice Boltzmann equation, Phys. Rev. E 76, 036704 (2007).

[21] J. Dzubiella, H. Löwen, and C.N. Likos, Phys. Rev. Lett. 91, 248301 (2003).

[22] S. Ergun, Fluid flow through packed columns, Chem. Eng. Progress 48, 89 (1952).

[23] P. Español and P.B. Warren, Statistical Mechanics of Dissipative Particle Dynamics,
Europhys. Lett. 30, 191 (1995).

[24] G. Gompper, T. Ihle, D.M. Kroll and R.G. Winkler, Multi-Particle Collision Dy-
namics: A Particle-Based Mesoscale Simulation Approach to the Hydrodynamics
of Complex Fluids, Adv. Polymer Sci. 221, 1 (2009).

[25] R.D. Groot and P.B. Warren, Dissipative Particle Dynamics: Bridging the gap be-
tween atomistic and mesoscopic simulation, J. Chem. Phys. 107, 4423 (1997).

[26] E. Guyon, J.-P. Hulin, L. Petit and C. D. Mitescu, Physical Hydrodynamics, (Oxford
University Press, Oxford, 2001).

[27] J.P. Hansen and I.R. McDonald, Theory of Simple Liquids, 2nd Ed., Academic Press,
London (1986).

[28] J. Happel and H. Brenner, Low Reynolds Number Hydrodynamics: with special
applications to particulate media, Springer (New York, 1983).

[29] M. Hecht, J. Harting, T. Ihle and H.J. Herrmann, Simulation of claylike colloids,
Phys. Rev. E 72, 011408 (2005).

[30] M.A. van der Hoef, M. Ye, M. van Sint Annaland, A.T. Andrews IV, S. Sundaresan
and J.A.M. Kuipers, Multiscale modeling of gas-fluidized beds, Adv. Chem. Eng. 31,
65 (2006).

[31] P.J. Hoogerbrugge and J.M. Koelman, Simulating microscopic hydrodynamic phe-
nomena with dissipative particle dynamics, Europhys. Lett. 19, 155 (1992).

[32] B.P.B. Hoomans, Granular dynamics of gas-solid two-phase flows, PhD thesis, Uni-
versity of Twente, The Netherlands, 1999.

[33] T. Ihle and D. M. Kroll, Phys. Rev. E 63, 020201(R) (2001).

[34] T. Ihle and D. M. Kroll, Phys. Rev. E 67, 066705 (2003); ibid 066706 (2003).

[35] T. Ihle, E. Tuzel, and D.M. Kroll, Phys. Rev. E, 72, 046707 (2005).

[36] N. Kikuchi, C. M. Pooley, J. F. Ryder, and J. M. Yeomans, J. Chem. Phys. 119, 6388
(2003).

[37] A. J. C. Ladd, Phys. Rev. Lett. 76, 1392 (1996); ibid, 88, 048301 (2002).

[38] A. Lamura, G. Gompper, T. Ihle, and D.M. Kroll, Europhys. Lett. 56, 319 (2001).

[39] J.A. Laverman, On the hydrodynamics in gas polymerization reactors, PhD thesis,
Eindhoven University of Technology, The Netherlands, 2010.

[40] C.P. Lowe, Europhys. Lett. 47, 145 (1999).

[41] C.P. Lowe, A.F. Bakker and M.W. Dreischor, Europhys. Lett. 67, 397 (2004).

[42] A. Malevanets and R. Kapral, Mesoscopic model for solvent dynamics, J. Chem.
Phys. 110, 8605 (1999).

[43] A. Malevanets and R. Kapral, J. Chem. Phys., 112, 7260 (2000).

[44] J. C. Maxwell, in The Scientific Papers of James Clerk Maxwell, Vol. 2., W. D. Niven
ed. (Dover, New York, 1965).

[45] D.A. McQuarrie, Statistical Mechanics, Harper & Row (New York, USA), 1976.

[46] J.R. Melrose and R.C. Ball, J. Rheol. 48, 937 (2004).

[47] D.R. Mikulencak and J.F. Morris, J. Fluid. Mech. 520, 215 (2004).

[48] A. Moncho-Jordá, A.A. Louis and J.T. Padding, The effects of inter-particle attrac-
tions on colloidal sedimentation, Phys. Rev. Lett. 104, 068301 (2010).

[49] N.-Q. Nguyen and A. J. C. Ladd, Phys. Rev. E 66, 046708 (2002).

[50] J.T. Padding and A.A. Louis, Hydrodynamic and Brownian Fluctuations in Sedi-
menting Suspensions, Phys. Rev. Lett. 93, 220601 (2004).

[51] J.T. Padding and A.A. Louis, Hydrodynamic interactions and Brownian forces in
colloidal suspensions: Coarse-graining over time and length scales, Phys. Rev. E
74, 031402 (2006).

[52] J.T. Padding and W.J. Briels, Translational and rotational friction on a colloidal rod
near a wall, J. Chem. Phys. 132, 054511 (2010).

[53] C.M. Pooley and J.M. Yeomans, J. Phys. Chem. B 109, 6505 (2005).

[54] E. Purcell, Life at Low Reynolds Number, Am. J. Phys. 45, 3 (1977). See also
http://brodylab.eng.uci.edu/~jpbrody/reynolds/lowpurcell.html

[55] M. Ripoll, K. Mussawisade, R.G. Winkler, and G. Gompper, Phys. Rev. E 72, 016701
(2005).

[56] W. B. Russel, D. A. Saville and W. R. Schowalter, Colloidal Dispersions, Cambridge
University Press, Cambridge (1989).

[57] A. Sierou and J.F. Brady, Accelerated Stokesian Dynamics simulations, J. Fluid
Mech. 448, 115 (2001).

[58] S. Succi, The Lattice Boltzmann Equation for Fluid Dynamics and Beyond, Oxford
University Press, 2001.

[59] J. Vermant and M.J. Solomon, J. Phys.: Condens. Matt. 17, R187 (2005).

[60] G. A. Vliegenthart and P. van der Schoot, Europhys. Lett. 62, 600 (2003).

[61] C.Y. Wen and Y.H. Yu, Mechanics of fluidization, Chem. Eng. Progress Symposium
Series 62, 100 (1966).
INDEX

Archimedes number, 59, 60
athermal systems, 139

ballistic regime, 101
barostat, 79
Berendsen barostat, 79
Berendsen thermostat, 75
Boltzmann distribution, 73, 80
bond stretch interaction, 69
bounce-back, 29
boundary conditions, 26, 123, 149
boundary layer, 51
Brownian dynamics, 102
Brownian dynamics algorithm, 106
bulk viscosity, 149
buoyant force, 50, 150

canonical ensemble, 72
Capillary number, 59
Carnahan and Starling's equation, 86
cell linked list, 22, 24, 115
charge interactions, 71
Clausius virial function, 77
coarse-graining, 98
coefficient of friction, 20, 146
coefficient of restitution, 146, 147
colloidal suspension, 98
compressibility, 79, 133
compressibility equation, 84
configuration integral, 80
conservative force, 12
contact model, 143
contact parameters, 146
convective heat flux, 56
convective mass flux, 56
convective transport, 55, 136
Coulomb friction, 146

damping coefficient, 144–147
Debye crystal, 48
degeneracy, 71
degrees of freedom, 74
density fluctuations, 87
density of states, 71
depletion force, 125, 130
diffusion coefficient, 52
diffusion time scale, 131
diffusive transport, 52, 136
diffusive wall, 30
dihedral interaction, 70
dimensionless numbers, 57, 132
direct numerical simulation, 148
direct simulation Monte Carlo, 108, 152
discrete particle model, 149
dissipative force, 19
dissipative particle dynamics, 109
drag coefficient, 51
drag force, 149, 150
drag relation, 151
dynamic viscosity, 50, 54, 94, 121

Eötvös number, 59
Einstein equation, 89, 101
elastic collision, 146
energy conservation, 14
ensemble, 72
enthalpy, 54
entropy, 72


equipartition, 44, 100, 102
Ergun equation, 151
Euler equations, 141
Euler method, 46
event driven simulation, 12
external force, 18

Fick's law, 52, 88
fluctuation-dissipation theorem, 100
fluid, 9
Fokker-Planck time scale, 130
force field, 68
Fourier's law, 54
free energy, 73
friction, 99, 158
friction coefficient, 19, 146
Froude number, 59

Galilean invariance, 109, 113
Gaussian random number, 165
granular particles, 139
Grashof number, 60
gravity, 18, 50, 142
Green's function, 53
Green-Kubo relation
    self-diffusion coefficient, 93
    viscosity, 94
grid shift, 113
growing particle initialisation, 43

Hamiltonian, 14, 73
hard sphere, 68, 85, 143
heat capacity, 54
heat equation, 54
heat transfer coefficient, 56
hydrodynamic interaction, 98, 103, 106, 148, 158

ideal gas, 121
inflow boundary, 32
initialisation, 41
inter-phase momentum transfer coefficient, 150, 151
interatomic potential, 15

kinematic time, 131
kinematic viscosity, 55, 120
kinetic energy, 14
kinetic friction, 146
Knudsen number, 60, 99, 135

Langevin dynamics algorithm, 105
Langevin equation, 99, 104
lattice Boltzmann method, 109, 149
lattice gas automata, 109
lattice initialisation, 41
leap-frog method, 46
Lennard-Jones potential, 15, 26, 70
Lewis number, 60
Lowe-Andersen method, 108
lubrication force, 127, 148

Mach number, 61, 133
mass diffusion, 52
mass transfer coefficient, 56
Maxwell-Boltzmann distribution, 30, 43, 74
mean free path, 119
mean square displacement, 53, 89, 91, 101
micro-canonical ensemble, 72
microscopic stress tensor, 94
minimum image convention, 39
mobility tensor, 160
modulo operator, 40
molecular dynamics simulation, 67
molecular force field, 68
moment of inertia, 141, 142
momentum diffusion, 54
Monte Carlo simulation, 68
multi-particle collision dynamics, 113

Navier-Stokes equation, 55, 138, 148, 152, 155
neighbourlist, 21, 23, 24, 36
Newton's law, 55
non-bonded interaction, 70
Nosé-Hoover thermostat, 76
number density, 81

Nusselt number, 61, 63

one-way coupling, 152
Oseen tensor, 160
outflow boundary, 33
overlap, 144, 145

pair interaction, 16, 25, 143
partition function, 72
Peclet number, 61, 63, 136
periodic boundary conditions, 37
phase, 9
phase space, 73
potential energy, 13
Prandtl number, 62
pressure, 78
    from radial distribution function, 85
    virial expansion, 85

radial distribution function
    and compressibility, 83
    and energy, 83
    definition, 80
random force, 99, 105
random grid shift, 115
random insertion, 42
random number generator, 165
reduced mass, 147
Reynolds number, 62, 99, 133, 151
rigid body motion, 141
ring buffer, 90

scattering, 87
Schmidt number, 62, 122, 129
second virial coefficient, 85, 86
sedimentation, 132, 138
self-diffusion coefficient, 88, 93, 101, 119
Sherwood number, 63
sliding, 145, 146
soft matter, 98
soft sphere model, 143
solid volume fraction, 103, 149, 150
sound, 57
specular reflection, 29
speed of sound, 57
spring stiffness, 144–146
spring-dashpot model, 143
sticking, 145
stochastic force, 99
stochastic method for granular particles, 152
stochastic rotation dynamics, 113
Stokes equation, 156
Stokes friction, 158
Stokes law, 51, 99, 158
Stokes-Einstein equation, 101
Stokesian dynamics, 107
stress tensor, 149, 155
structure factor, 88
surface tension, 52, 63

temperature, 74
thermal conductivity coefficient, 54
thermal diffusion, 53
thermal diffusivity, 54
thermal energy, 15
thermal fluctuations, 98, 138
thermal wall, 30
thermostat, 31, 72, 75
time correlation function, 89
two-way coupling, 152

valence angle interactions, 70
validation, 9
Van der Waals attraction, 15, 70
velocity scaling, 74
velocity Verlet method, 47, 142
virial expression for pressure, 78
virtual particles, 124

wall collision, 29
wall potential, 27
walls, 27
Weber number, 63
Weeks-Chandler-Andersen potential, 122
Wen and Yu correlation, 151

Young-Laplace equation, 52
Young's modulus, 148
