Research Note 007 Thermostat

The document discusses various thermostat methods used in molecular dynamics simulations to control temperature, including the Andersen thermostat, Nosé-Hoover thermostat, and Langevin thermostat. The Andersen thermostat uses stochastic collisions to impose the desired temperature, while the Nosé-Hoover thermostat introduces additional degrees of freedom through an extended Lagrangian to generate fluctuations and sample the canonical ensemble. The Langevin thermostat combines deterministic equations of motion with stochastic forces to approximate a system in contact with a heat bath.

Uploaded by dapias09

Brief introduction to the thermostats

Yanxiang Zhao

Contents

1 Motivation for Thermostats
2 Andersen Thermostat: Stochastic Collision Method
3 Nosé-Hoover Thermostat: Extended System Method
  3.1 Extended equations of motion
  3.2 Preserving the canonical ensemble
  3.3 Nosé-Hoover thermostat with additional constraints (April 4th)
    3.3.1 Equivalence to the Gaussian thermostat
    3.3.2 More constraints
  3.4 More on the Nosé-Hoover thermostat (April 4th)
4 Gaussian thermostat: velocity-rescaling (April 4th)
5 Berendsen Thermostat (April 4th)
  5.1 Motivations for the Berendsen thermostat
  5.2 Berendsen thermostat: proportional time-rescaling
  5.3 Interpolation between the canonical and microcanonical ensembles
6 Langevin Thermostat (April 4th)
  6.1 Motivation for the Langevin thermostat
  6.2 Key idea of the Langevin thermostat
  6.3 Advantages and disadvantages of the Langevin thermostat
7 Dissipative particle dynamics thermostat (April 13)
  7.1 Description of the technique
  7.2 Implementation of the method
8 Anisotropic Willmore cont. (April 20)
9 Gauss's principle of least constraint and thermostats (April 26)
  9.1 Main question for the thermostats
  9.2 Least action principle with constraints
  9.3 Gauss's principle of least constraint
10 Fokker-Planck equation and energy laws (June 22)
  10.1 Fundamental question
  10.2 Possible approach
1 Motivation for Thermostats

When performing molecular dynamics (MD) in the canonical ensemble (NVT), a thermostat is introduced to modulate the temperature of the system in some fashion. A variety of thermostat methods are available to add and remove energy from the boundaries of an MD system in a realistic way, approximating the canonical ensemble. Popular techniques to control temperature include the Andersen thermostat, the Berendsen thermostat, the Nosé-Hoover thermostat, and the Langevin (stochastic) thermostat.

What is the goal of a thermostat? It turns out that the goal is not to keep the instantaneous temperature constant, as that would mean fixing the total kinetic energy, which is not the aim of an NVT or NPT simulation. Rather, it is to ensure that the average temperature of the system is the desired one.

2 Andersen Thermostat: Stochastic Collision Method

For the canonical ensemble (NVT), the number of particles (N), the volume (V), and the temperature (T) are constant, while the energy of the simulated system fluctuates. To simulate an NVT system, Andersen couples the system to a heat bath that imposes the desired temperature. The coupling to the heat bath is represented by stochastic collisions that act occasionally on randomly selected particles.

Precisely, in Andersen's method the equations of motion of the N particles in the volume V are the Hamiltonian equations with H = Σ_i p_i²/(2m_i) + Φ(q),

    dq_i/dt = ∂H/∂p_i,    dp_i/dt = −∂H/∂q_i,    (1)

supplemented by a stochastic collision term in the equation for dp_i/dt. Each stochastic collision is an instantaneous event that affects the momentum of one particle. Between stochastic collisions, the state of the system evolves in accordance with (1).
To perform the simulation we introduce two parameters: T and ν. T is the desired temperature of the system, and ν is the frequency of the stochastic collisions, which determines the strength of the coupling to the heat bath. If successive collisions are uncorrelated, then the distribution of time intervals between two successive stochastic collisions, P(t; ν), is of Poisson form,

    P(t; ν) = ν e^{−νt},

where P(t; ν) dt is the probability that the next collision will take place in the interval [t, t + dt].
A constant-temperature simulation now runs as follows. We pick an initial set of positions and momenta q^N(0) and p^N(0), and integrate the Hamiltonian equations of motion (1) until the time of the first stochastic collision. Suppose the particle suffering the collision is i. The value of the momentum of particle i after the collision is chosen at random from a Boltzmann distribution at temperature T. The change in momentum takes place instantaneously. All other particles are unaffected by this collision. Then the Hamiltonian equations for the entire collection of particles are integrated until the time of the next stochastic collision, and the process is repeated.
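The procedure above can be sketched in a few lines of Python. This is a minimal 1D sketch, not code from the note: the harmonic force, unit masses, and all parameter values are hypothetical, and collisions are applied per time step with probability ν Δt rather than by sampling the Poisson interval directly.

```python
import numpy as np

def andersen_step(q, p, m, force, dt, nu, kT, rng):
    """One velocity-Verlet step followed by Andersen-style collisions.

    Each particle suffers a collision with probability nu*dt; its momentum
    is then redrawn from the Boltzmann (Maxwell) distribution at temperature T.
    """
    # deterministic Hamiltonian evolution between collisions (velocity Verlet)
    p = p + 0.5 * dt * force(q)
    q = q + dt * p / m
    p = p + 0.5 * dt * force(q)
    # stochastic collisions: approximately a Poisson process with rate nu
    hit = rng.random(q.shape) < nu * dt
    p = np.where(hit, rng.normal(0.0, np.sqrt(m * kT), q.shape), p)
    return q, p

# toy run: independent harmonic oscillators, force = -q (illustrative only)
rng = np.random.default_rng(0)
N, m, kT, dt, nu = 1000, 1.0, 1.5, 0.01, 0.5
q, p = np.zeros(N), np.zeros(N)
for _ in range(10000):
    q, p = andersen_step(q, p, m, lambda x: -x, dt, nu, kT, rng)
print(np.mean(p**2 / m))  # should fluctuate around kT = 1.5
```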
The result of Andersen's constant-temperature MD procedure is a trajectory (q^N(t), p^N(t)) for N particles in a volume V with periodic boundary conditions. This trajectory can be used to calculate time averages of any quantity F(q^N(t), p^N(t); V) according to

    F̄ = lim_{τ→∞} (1/τ) ∫₀^τ F(q^N(t), p^N(t); V) dt.

The combination of Newtonian dynamics and stochastic collisions turns the MD simulation into a Markov process. Andersen shows that a canonical distribution in phase space is invariant under repeated application of the algorithm. Combined with the fact that the Markov chain is also irreducible and aperiodic, this implies that Andersen's algorithm does generate a canonical distribution. In other words, the time average of any F calculated from an Andersen trajectory is equal to the ensemble average of F for the canonical ensemble at temperature T, i.e.,

    F̄ = 1/(N! Q(N, V, T)) ∫ F(q^N, p^N; V) e^{−βH(q^N, p^N; V)} dq^N dp^N,

where

    Q(N, V, T) = (1/N!) ∫ e^{−βH(q^N, p^N; V)} dq^N dp^N

is the partition function of the canonical ensemble.


A disadvantage of the Andersen thermostat is that the algorithm randomly decorrelates the velocities, so the resulting dynamics is not physical. If we plan to measure dynamical properties, the Andersen thermostat may therefore not be a good method.

3 Nosé-Hoover Thermostat: Extended System Method

3.1 Extended equations of motion

Since the energy of a system of N particles fluctuates at constant temperature, we need some mechanism for introducing energy fluctuations in order to simulate such a system. Instead of applying stochastic collisions to the simulated system, Nosé invented an extended Lagrangian, that is, a Lagrangian containing additional, artificial coordinates and velocities. This extended-Lagrangian method was actually first introduced by Andersen for constant-pressure MD simulations. Currently, extended-Lagrangian methods are used not only for simulations in ensembles other than constant NVE, but also as a stable and efficient approach to simulations in which an expensive optimization has to be carried out at each time step. Since it is now more common to use the Nosé scheme in the formulation of Hoover, the MD community usually calls the extended-Lagrangian approach the Nosé-Hoover thermostat.
Assume the simulated system consists of N particles, with coordinates q′_i, masses m_i, potential energy Φ(q′), and momenta p′_i. An additional degree of freedom s is introduced, acting as an external system on the simulated system. We also introduce virtual variables (coordinates q_i, momenta p_i, and time t) which are related to the real variables (q′_i, p′_i, t′) as follows:

    q′_i = q_i,    p′_i = p_i / s,    t′ = ∫^t dt/s.

The real velocity is then expressed as

    dq′_i/dt′ = s dq′_i/dt = s dq_i/dt.

Here we can think of the above transformations as a time scaling dt′ = dt/s. Andersen used a similar idea in his constant-pressure MD simulations.
The Lagrangian of the extended system of the N particles and the variable s, in terms of the virtual variables, is

    L_Nosé = Σ_i (m_i/2) s² q̇_i² − Φ(q) + (Q/2) ṡ² − gkT ln s,    (2)

where Q is an effective mass associated with s, and the parameter g is essentially the number of degrees of freedom of the system; its exact value will be chosen so that the canonical distribution is recovered at equilibrium. The logarithmic dependence of the potential on the variable s is essential for producing the canonical ensemble. The momenta conjugate to q_i and s are

    p_i = ∂L_Nosé/∂q̇_i = m_i s² q̇_i,    p_s = ∂L_Nosé/∂ṡ = Q ṡ.
This gives the Hamiltonian of the extended system of the N particles and the variable s in terms of the virtual variables:

    H_Nosé = Σ_i p_i²/(2 m_i s²) + Φ(q) + p_s²/(2Q) + gkT ln s.    (3)

According to the Hamiltonian formalism, we define the equations of motion by using the extended Hamiltonian:

    dq_i/dt = ∂H_Nosé/∂p_i = p_i/(m_i s²),    (4)
    dp_i/dt = −∂H_Nosé/∂q_i = −∂Φ/∂q_i,    (5)
    ds/dt = ∂H_Nosé/∂p_s = p_s/Q,    (6)
    dp_s/dt = −∂H_Nosé/∂s = ( Σ_i p_i²/(m_i s²) − gkT ) / s.    (7)

The extended Hamiltonian H_Nosé is clearly conserved when the extended system evolves under these equations of motion. Therefore this method produces the microcanonical ensemble for the extended system.
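A quick numerical check of this conservation claim: the sketch below integrates (4)-(7) with a standard fourth-order Runge-Kutta scheme for a single particle in a hypothetical harmonic potential Φ(q) = q²/2 (all parameter values are illustrative, not from the note) and monitors the drift of H_Nosé.

```python
import numpy as np

def nose_rhs(state, m, Q, g, kT):
    """Right-hand side of the virtual-variable equations (4)-(7)
    for one particle in the harmonic potential Phi(q) = q^2/2."""
    q, p, s, ps = state
    dq = p / (m * s**2)                      # (4)
    dp = -q                                  # (5): -dPhi/dq
    ds = ps / Q                              # (6)
    dps = (p**2 / (m * s**2) - g * kT) / s   # (7)
    return np.array([dq, dp, ds, dps])

def h_nose(state, m, Q, g, kT):
    # extended Hamiltonian (3)
    q, p, s, ps = state
    return p**2 / (2 * m * s**2) + q**2 / 2 + ps**2 / (2 * Q) + g * kT * np.log(s)

# RK4 on the extended system; H_Nose should stay (numerically) constant
m, Q, g, kT, dt = 1.0, 1.0, 1.0, 1.0, 0.001
state = np.array([1.0, 0.5, 1.0, 0.1])
h0 = h_nose(state, m, Q, g, kT)
for _ in range(10000):
    k1 = nose_rhs(state, m, Q, g, kT)
    k2 = nose_rhs(state + 0.5 * dt * k1, m, Q, g, kT)
    k3 = nose_rhs(state + 0.5 * dt * k2, m, Q, g, kT)
    k4 = nose_rhs(state + dt * k3, m, Q, g, kT)
    state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(abs(h_nose(state, m, Q, g, kT) - h0))  # drift stays tiny
```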

3.2 Preserving the canonical ensemble

The most important feature of the extended Hamiltonian is that we can project the partition function of the extended system onto the original system, and the projection recovers the canonical ensemble. Indeed, the partition function of the extended system is

    Z = ∫ δ[ H₀(p/s, q) + p_s²/(2Q) + gkT ln s − E ] dp_s ds dp dq,    (8)

where δ is the Dirac delta,

    H₀(p, q) = Σ_i p_i²/(2m_i) + Φ(q)

is the classical Hamiltonian, and E is the energy prescribed in advance. By a change of variables we can transform the virtual momenta p_i and coordinates q_i to the real variables p′_i = p_i/s, q′_i = q_i. Then

    Z = ∫ s^{3N} δ[ H₀(p′, q′) + p_s²/(2Q) + gkT ln s − E ] ds dp_s dp′ dq′
      = ∫ (s^{3N+1}/(gkT)) δ( s − exp[ −(H₀(p′, q′) + p_s²/(2Q) − E)/(gkT) ] ) ds dp_s dp′ dq′
      = (1/(gkT)) ∫ exp[ −((3N+1)/(gkT)) (H₀(p′, q′) + p_s²/(2Q) − E) ] dp_s dp′ dq′
      = (1/(gkT)) e^{(3N+1)E/(gkT)} ∫ exp[ −((3N+1)/(gkT)) p_s²/(2Q) ] dp_s ∫ exp[ −((3N+1)/(gkT)) H₀(p′, q′) ] dp′ dq′.

If we choose g = 3N + 1, the partition function of the extended system is equivalent to that of the original system in the canonical ensemble up to a constant factor:

    Z = C ∫ e^{−βH₀(p′, q′)} dp′ dq′,

where β = 1/kT. The equilibrium distribution function is then

    ρ(p′, q′) ∝ e^{−βH₀(p′, q′)}.

Finally, with the ergodic hypothesis, we have

    F̄ = ⟨A(p/s, q)⟩_extended = ⟨A(p′, q′)⟩_canonical.

3.3 Nosé-Hoover thermostat with additional constraints (April 4th)

3.3.1 Equivalence to the Gaussian thermostat

Let us consider the extended Hamiltonian (3),

    H_Nosé = Σ_i p_i²/(2 m_i s²) + Φ(q) + p_s²/(2Q) + gkT ln s,

with the conditions

    −∂H_Nosé/∂s = (1/s) ( Σ_i p_i²/(m_i s²) − gkT ) = 0    (9)

and

    ∂H_Nosé/∂p_s = p_s/Q = 0.    (10)

The constraint (10) is trivial, since we can simply drop the p_s²/(2Q) term from the extended Hamiltonian. The equations of motion for the virtual variables q_i and p_i are still

    dq_i/dt = ∂H_Nosé/∂p_i = p_i/(m_i s²),    (11)
    dp_i/dt = −∂H_Nosé/∂q_i = −∂Φ/∂q_i.    (12)

Now we map the virtual equations of motion back to the real equations of motion:

    dq′_i/dt′ = s dq_i/dt = p_i/(m_i s) = p′_i/m_i,    (13)
    dp′_i/dt′ = s d(p_i/s)/dt = −∂Φ/∂q′_i − (1/s)(ds/dt′) p′_i.    (14)

Note that the constraint (9) reads Σ_i p′_i²/m_i = gkT; differentiating both sides with respect to t′ gives

    Σ_i (p′_i/m_i) dp′_i/dt′ = 0,

which, combined with (14), yields

    (1/s) ds/dt′ = −(1/(gkT)) Σ_i (p′_i/m_i) ∂Φ/∂q′_i.

If we set α = (1/s) ds/dt′, then the real equations of motion,

    dq′_i/dt′ = p′_i/m_i,    dp′_i/dt′ = −∂Φ/∂q′_i − α p′_i,

are identical to the Gaussian thermostat, which will be introduced in Section 4.
3.3.2 More constraints

3.4 More on the Nosé-Hoover thermostat (April 4th)

The extended-system method has more independent variables than the equivalent statistical-mechanical ensemble. As a result, the method gives correct results for static quantities, but the time evolution of s depends on the adjustable parameter Q, which is, in some sense, the coupling frequency to the extended system. A coupling frequency with too little overlap with the natural frequencies of the original system can lead to very long energy-transfer times.

4 Gaussian thermostat: velocity-rescaling (April 4th)

The velocity-rescaling method is actually the first method ([14]) proposed to keep the temperature at a fixed value during a simulation, without allowing fluctuations of T. In this method, the velocities are scaled according to

    p_i → √(T₀/T) p_i,

where T₀ is the desired temperature and T is the actual temperature calculated from the velocities of the particles.

However, a drawback of velocity rescaling is that it leads to discontinuities in the momentum part of the phase-space trajectory, due to the rescaling procedure at each time step.
An extension of the velocity-rescaling method imposes a constraint on the equations of motion (1) to keep the temperature fixed ([4], [6], [11]). Gauss's principle of least constraint states that a force added to restrict a particle's motion to a constraint hypersurface should be normal to that surface in a realistic dynamics. Following this principle, a constraint-force term −α p_i is added to the force term of (1):

    dq_i/dt = p_i/m_i,    (15)
    dp_i/dt = −∂Φ/∂q_i − α p_i.    (16)
Consequently, the equations are no longer in canonical form. The parameter α is determined from the requirement that the temperature, or equivalently the total kinetic energy, be constant,

    Σ_i p_i²/(2m_i) = gkT/2,

or

    Σ_i (p_i/m_i) · dp_i/dt = 0.

Thus we obtain

    α = −( Σ_i (p_i/m_i) · ∂Φ/∂q_i ) / ( Σ_i p_i²/m_i ).

This method recovers the canonical distribution in coordinate space if we set g = 3N − 1, where N is the number of particles. Since Gauss's principle of least constraint is used, this extended velocity-rescaling method is also called the Gaussian thermostat.

One important remark is that the Gaussian thermostat can actually be derived from the Nosé-Hoover extended-system method by imposing a particular constraint, as shown in Section 3.3.1. In other words, the Gaussian thermostat is just a special Nosé-Hoover thermostat.
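A minimal sketch of the constrained dynamics (15)-(16): the multiplier α is recomputed from the instantaneous forces and momenta at every step, so the total kinetic energy stays constant up to time-discretization error. The harmonic force and all parameter values are hypothetical, not from the note.

```python
import numpy as np

def gaussian_alpha(p, m, f):
    """Multiplier keeping the kinetic energy constant:
    alpha = -sum((p_i/m_i) . dPhi/dq_i) / sum(p_i^2/m_i), with f = -dPhi/dq."""
    return np.sum(p * f / m) / np.sum(p**2 / m)

def gaussian_step(q, p, m, force, dt):
    """Explicit Euler step of dq/dt = p/m, dp/dt = f - alpha*p (eqs. 15-16)."""
    f = force(q)
    a = gaussian_alpha(p, m, f)
    q = q + dt * p / m
    p = p + dt * (f - a * p)
    return q, p

# toy run: 100 particles, hypothetical harmonic force = -q
rng = np.random.default_rng(1)
q, p, m = rng.normal(size=100), rng.normal(size=100), 1.0
k0 = np.sum(p**2 / (2 * m))
for _ in range(1000):
    q, p = gaussian_step(q, p, m, lambda x: -x, dt=1e-3)
print(np.sum(p**2 / (2 * m)) - k0)  # kinetic-energy drift, O(dt) small
```

Note that dK/dt = Σ p·(f − αp)/m vanishes identically in continuous time by the choice of α; the residual drift here is purely the Euler discretization error.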

5 Berendsen Thermostat (April 4th)

5.1 Motivations for the Berendsen thermostat

Notice that one main problem of the velocity-rescaling method is that it does NOT allow the temperature fluctuations which are present in the canonical ensemble. To overcome this problem, Berendsen introduced ([2]) a weak coupling to an external bath, which is now called the Berendsen thermostat.

The Berendsen thermostat, also called the proportional thermostat, corrects the deviation of the actual temperature T from the prescribed one T₀ by multiplying the velocities by a certain factor λ, in order to move the system dynamics towards the one corresponding to T₀. One advantage of the Berendsen thermostat is that it allows temperature fluctuations, thereby not fixing the temperature to a constant value.

The motivation for the Berendsen thermostat is the minimization of the local disturbances of a stochastic thermostat while keeping the global effects unchanged.

5.2 Berendsen thermostat: proportional time-rescaling

In Berendsen's method, the velocities are scaled at each time step such that the rate of change of the temperature is proportional to the deviation from the target temperature:

    dT/dt = (1/τ)(T₀ − T),    (17)

where τ is the coupling parameter, the analog of ν in the Andersen thermostat, determining how tightly the bath and the system are coupled together. It turns out that Berendsen's method creates an exponential decay of the system towards the desired temperature:

    T(t) = T₀ − C e^{−t/τ}.    (18)

Note that from (17) we have

    ΔT = (Δt/τ)(T₀ − T).    (19)

This leads to a modification of the momenta, p_i → λ p_i, where λ is the scaling factor

    λ² = 1 + (Δt/τ_T)(T₀/T − 1),    (20)

and τ_T is the coupling time constant which determines the time scale on which the desired temperature is reached.
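The scaling rule (17)-(20) can be sketched directly: in the absence of forces, repeated rescaling by λ relaxes the kinetic temperature exponentially toward T₀, as in (18). All parameter values below are hypothetical, and k_B is set to 1.

```python
import numpy as np

def berendsen_lambda(T, T0, dt, tau_T):
    """Scaling factor (20): lambda^2 = 1 + (dt/tau_T)(T0/T - 1)."""
    return np.sqrt(1.0 + (dt / tau_T) * (T0 / T - 1.0))

def kinetic_temperature(p, m, k_B=1.0):
    # instantaneous T from <p^2/m> per degree of freedom (1D particles)
    return np.mean(p**2 / m) / k_B

rng = np.random.default_rng(2)
p = rng.normal(0.0, 2.0, size=10000)   # "hot" start, T around 4
m, T0, dt, tau = 1.0, 1.0, 0.01, 0.1
temps = []
for _ in range(200):
    lam = berendsen_lambda(kinetic_temperature(p, m), T0, dt, tau)
    p = lam * p                         # rescale momenta each step
    temps.append(kinetic_temperature(p, m))
print(temps[-1])  # relaxed exponentially toward T0 = 1
```

Each step gives T_new = λ²T = T + (Δt/τ_T)(T₀ − T), which is exactly the discretization (19) of the relaxation law (17).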

5.3 Interpolation between the canonical and microcanonical ensembles

A drawback of the Berendsen thermostat is that it cannot be mapped onto a specific thermodynamic ensemble. Actually, Morishita ([9]) shows that the phase-space distribution is of the form

    ρ(q, p) = f(p) exp[ −β( Φ(q) − (α β/(3N)) ⟨δΦ²⟩ ) ],    (21)

where α ≃ 1 − ⟨δE²⟩/⟨δΦ²⟩, with ⟨δΦ²⟩ and ⟨δE²⟩ the mean-square fluctuations of the potential and total energy; f(p) is in general an unknown function of the momenta, so the full density cannot be determined.

For α = 0, which corresponds to Δt = τ_T in (20), the fluctuations in the kinetic energy vanish and the phase-space distribution reduces to the canonical one:

    ρ(q, p) = δ(T − T₀) exp(−βΦ(q)).    (22)

On the other hand, if τ_T → ∞, the system corresponds to an isolated system and the energy is conserved, which means

    δE = δK + δΦ = 0,    α = 1.

In this case, the phase-space distribution reduces to the microcanonical distribution. Therefore (20) can be viewed as an interpolation between the canonical and microcanonical ensembles.

6 Langevin Thermostat (April 4th)

6.1 Motivation for the Langevin thermostat

When we consider the motion of large particles through a continuum of smaller particles, the Langevin equation

    m ẍ = −∇Φ(x) − γ m ẋ + σ ξ,

or

    dq_i/dt = p_i/m_i,    dp_i/dt = −∂Φ/∂q_i − γ p_i + σ ξ_i,    (23)

is taken into account. The smaller particles create a damping force on the momenta, −γ p_i, as the large particles push the smaller ones out of the way. The smaller (thermal) particles also move with kinetic energy and give random kicks to the large particles. The friction constant γ and the noise strength σ are connected by the fluctuation-dissipation relation

    σ² = 2 γ m_i kT

in order to recover the canonical ensemble distribution.

The Langevin equation can be used for molecular dynamics by assuming that the atoms being simulated are embedded in a sea of much smaller fictional particles. In many instances of solute-solvent systems, the behavior of the solute is of interest while the behavior of the solvent is not (e.g. proteins, DNA, nanoparticles in solution). In these cases, the solvent influences the dynamics of the solute (typically nanoparticles) via random collisions and by imposing a frictional drag force on the motion of the nanoparticle in the solvent. The Langevin equation of motion is the way we incorporate these two effects.

6.2 Key idea of the Langevin thermostat

At each time step Δt the Langevin thermostat changes the equation of motion so that the change in momenta is

    Δp_i = ( −∂Φ/∂q_i − γ p_i + ξ_p ) Δt,

where −γ p_i damps the momenta and ξ_p is a Gaussian-distributed random force with probability density

    ρ(ξ_p) = (1/√(2πσ²)) exp( −|ξ_p|²/(2σ²) )

and variance σ² = 2 γ m_i kT / Δt. The random fluctuating force represents the thermal kicks from the small particles. The damping factor and the random force combine to give the correct canonical ensemble.
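A minimal Euler-Maruyama sketch of (23) with the fluctuation-dissipation choice σ² = 2γ m_i kT. The harmonic force and all parameter values are hypothetical, and a more careful splitting integrator (e.g. BAOAB-type) would be preferred in practice.

```python
import numpy as np

def langevin_step(q, p, m, force, gamma, kT, dt, rng):
    """Euler-Maruyama step of (23):
    dp = (F - gamma*p) dt + sigma * sqrt(dt) * xi, with sigma^2 = 2*gamma*m*kT."""
    sigma = np.sqrt(2.0 * gamma * m * kT)
    q = q + dt * p / m
    p = p + dt * (force(q) - gamma * p) \
          + sigma * np.sqrt(dt) * rng.normal(size=p.shape)
    return q, p

# toy run: harmonic force = -q; after equilibration <p^2/m> samples kT
rng = np.random.default_rng(3)
N, m, gamma, kT, dt = 5000, 1.0, 1.0, 2.0, 0.005
q, p = np.zeros(N), np.zeros(N)
for _ in range(4000):
    q, p = langevin_step(q, p, m, lambda x: -x, gamma, kT, dt, rng)
print(np.mean(p**2 / m))  # should be close to kT = 2
```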

6.3 Advantages and disadvantages of the Langevin thermostat

A typical advantage of the Langevin thermostat is that we need fewer computations per time step, since we eliminate many atoms and include them implicitly through the stochastic terms. Besides, we can choose a relatively large time step Δt, 2-3 times larger than in plain MD, because the damping term stabilizes the equations of motion. Furthermore, since the Langevin thermostat replaces the fastest-frequency motions of the real system by stochastic terms, Δt now only has to resolve the slower degrees of freedom, and can thus be several hundred times larger than in the original MD.

Drawbacks of the Langevin thermostat are:

- Excluded-volume effects of the solvent are not included (which I still do NOT understand).
- It is not trivial to implement the drag force for non-spherical particles, since the friction coefficient γ_i is related to the particle radius via Stokes' law, γ_i = 6πηr_i/m_i.
- For a solute-solvent system, the solvent molecules must be small compared to the smallest molecules explicitly considered.
- The dissipative force describes only the friction with the small particles; we can additionally account for the drag on neighboring large particles in the dissipative force, which leads to the dissipative particle dynamics (DPD) thermostat.

7 Dissipative particle dynamics thermostat (April 13)

7.1 Description of the technique

The dissipative particle dynamics (DPD) thermostat is typically used in mesoscale simulations, for instance the diblock copolymer model, where the mesoscopic particles, which are soft spheres, are assumed to be connected by springs.

For the DPD thermostat, we add pairwise random and dissipative forces to the force term of the Hamiltonian equations of motion:

    dq_i/dt = p_i/m_i,    dp_i/dt = f_i(t) = Σ_{j≠i} (F^C_ij + F^D_ij + F^R_ij),    (24)

where F^C represents the conservative forces, F^D the dissipative forces, and F^R the random forces.

More precisely, we define F^C, F^D, F^R as follows. The conservative force is a sum of a harmonic-spring term and a soft repulsion:

    F^C_ij = F^Cs_ij + F^Cr_ij,

with

    F^Cs_ij = −K(r_ij − r₀) r̂_ij,
    F^Cr_ij = a_ij (1 − r_ij/r_c) r̂_ij  for r_ij < r_c,  and 0 for r_ij ≥ r_c,

where r_c is a cut-off radius and

    r̂_ij = r_ij / |r_ij|.

The dissipative (friction) forces are defined as

    F^D_ij = −γ ω^D(r_ij) (v_ij · r̂_ij) r̂_ij,

where γ is the friction constant and ω^D is a cut-off function of the scalar distance between i and j which simply limits the interaction range of the dissipative forces:

    ω^D(r) = 1 − r/r_c  for r < r_c,  and 0 for r ≥ r_c,

and v_ij = v_i − v_j is the relative velocity of i with respect to j. The random forces are defined as

    F^R_ij = σ ω^R(r_ij) θ_ij r̂_ij,

where σ is the strength of the random force which, similarly to the discussion in Section 6.1, is connected with the friction constant γ by the fluctuation-dissipation relation

    σ² = 2 γ kT,

ω^R is a cut-off function related to ω^D by

    [ω^R(r)]² = ω^D(r),

and θ_ij is a Gaussian random number with zero mean and unit variance, with θ_ij = θ_ji.
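The three pairwise forces can be sketched as follows; the spring term F^Cs is omitted and the parameter values in the usage example are illustrative only. Because θ_ij = θ_ji, the force on j from i is exactly the negative of the force on i from j, so momentum is conserved pair by pair.

```python
import numpy as np

def dpd_pair_forces(ri, rj, vi, vj, a, gamma, sigma, rc, theta):
    """Pairwise DPD forces on particle i from j (harmonic spring omitted).

    F^C = a (1 - r/rc) rhat                 soft repulsion, r < rc
    F^D = -gamma w_D(r) (v_ij . rhat) rhat  with w_D(r) = 1 - r/rc
    F^R = sigma w_R(r) theta rhat           with w_R = sqrt(w_D), theta ~ N(0,1)
    """
    rij = np.asarray(ri) - np.asarray(rj)
    r = np.linalg.norm(rij)
    if r >= rc:
        z = np.zeros_like(rij)
        return z, z, z
    rhat = rij / r
    w = 1.0 - r / rc
    vij = np.asarray(vi) - np.asarray(vj)
    fc = a * w * rhat
    fd = -gamma * w * float(vij @ rhat) * rhat
    fr = sigma * np.sqrt(w) * theta * rhat
    return fc, fd, fr

# Newton's third law: swapping i and j (same theta) negates every term
fc, fd, fr = dpd_pair_forces([0., 0.], [0.5, 0.], [1., 0.], [0., 0.],
                             a=25.0, gamma=4.5, sigma=3.0, rc=1.0, theta=0.7)
fc2, fd2, fr2 = dpd_pair_forces([0.5, 0.], [0., 0.], [0., 0.], [1., 0.],
                                a=25.0, gamma=4.5, sigma=3.0, rc=1.0, theta=0.7)
print(fc + fc2, fd + fd2, fr + fr2)  # each pair sums to zero
```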

7.2 Implementation of the method

A DPD simulation can be implemented in any working MD program. The only subtlety is in the integration of the equations of motion. As the forces between the particles depend on their relative velocities, the standard velocity-Verlet scheme cannot be used directly.

The update of the velocities and positions uses the new forces:

    v_i(t + Δt) = v_i(t) + (Δt/m)(F^C_i + F^D_i) + (√Δt/m) F^R_i,
    r_i(t + Δt) = r_i(t) + Δt v_i(t + Δt).

Note that since all forces are pair interactions, Newton's third law is obeyed! This is not the case for the Langevin thermostat. As a result, the DPD thermostat looks more like MD.
One advantage of DPD over atomistic MD is that DPD involves a coarse-grained model. This makes the technique useful when studying the mesoscopic structure of complex liquids. However, if we are only interested in static properties, we could use standard MC or MD on a model with the same conservative forces F^C but without the dissipation F^D. The real advantage of DPD shows up when we try to model the dynamics of complex liquids.

We can also compare the DPD thermostat with the Langevin thermostat. In the Langevin thermostat, the dissipative force describes only friction with the small particles, but in the DPD thermostat the friction generates a drag on neighboring DPD particles, mediated by the small particles. Moreover, since all DPD interactions are pair interactions, momentum is conserved, which is not true for the Langevin thermostat.

8 Anisotropic Willmore cont. (April 20)

I am still considering the regularized ODE system for the anisotropic Willmore problem, a fourth-order equation in Q, H, θ, and x in which the regularization parameter ε enters through terms of the form |k₁ − k₂|² + ε, subject to the boundary conditions

    Q(0) = 0,  θ(0) = 0,  x(0) = 0,  y(0) = 0,  V(0) = 0,
    Q(π) = 0,  θ(π) = π,  V(π) = Volume.

The difficulty here is to compute all the derivatives and represent all of them in terms of Q, H, θ, and x.

9 Gauss's principle of least constraint and thermostats (April 26)

9.1 Main question for the thermostats

Let us consider the continuous thermostats, for instance the Gaussian thermostat or the Berendsen thermostat. The Gaussian thermostat describes the following dynamics for the system of N particles:

    dq_i/dt = p_i/m_i,    (25)
    dp_i/dt = −∂Φ/∂q_i − α p_i,    (26)

where

    α = −( Σ_i (p_i/m_i) · ∂Φ/∂q_i ) / ( Σ_i p_i²/m_i ).

The Berendsen thermostat gives the dynamics

    m_i dv_i/dt = F_i + m_i γ (T₀/T − 1) v_i,    (27)

where F_i is the conservative force; or, in the (q, p) language used for the Gaussian thermostat, the Berendsen dynamics reads

    dq_i/dt = p_i/m_i,    (28)
    dp_i/dt = −∂Φ/∂q_i + γ (T₀/T − 1) p_i.    (29)

Observing the Gaussian and Berendsen dynamics, they are just the Hamiltonian dynamics with the extra terms

    −α p_i    or    γ (T₀/T − 1) p_i.

So a natural question is whether we can think of the Gaussian and Berendsen dynamics as minimization problems with the constraints

    Σ_i p_i²/(2m_i) = gkT/2    or    dT/dt = 2γ (T₀ − T),

since the classical Hamiltonian system can be derived from the least action principle.

9.2 Least action principle with constraints

We can investigate the least action principle with constraints, and see if we can recover, for instance, the Gaussian dynamics

    dq_i/dt = p_i/m_i,    dp_i/dt = −∂Φ/∂q_i − α p_i.

Let us consider the minimization problem

    min ∫₀^T [ Σ_i (m_i/2) |dq_i/dt|² − U(q) ] dt

subject to

    Σ_i (m_i/2) |dq_i/dt|² = gkT/2.

If we apply the Lagrange multiplier method as usual, the equilibrium system will be

    (1 − λ) dp_i/dt = −∂U/∂q_i.

This equilibrium system is not the Gaussian dynamics, since the extra force in the Gaussian dynamics is proportional to the velocity rather than the acceleration. Hence the argument of the least action principle with constraints cannot be used to recover the Gaussian or Berendsen dynamics.

9.3 Gauss's principle of least constraint

Gauss formulated a mechanics over 170 years ago which is more general than Newton's. Gauss's formulation applies to systems which are subject to constraints, either holonomic (coordinate-dependent only) or nonholonomic (coordinate- and velocity-dependent). Gauss's principle states that the trajectories actually followed deviate as little as possible, in a least-squares sense, from the unconstrained Newtonian trajectories.

Mathematically, Gauss's principle of least constraint states that the true motion of a mechanical system of N masses minimizes the quantity

    Σ_i (1/m_i) | dp_i/dt − F_i |²

over all trajectories satisfying the imposed constraints. For our case, it becomes the minimization problem

    min Σ_i (1/m_i) | dp_i/dt + ∂Φ/∂q_i |².    (30)

This is the statement of Gauss's principle of least constraint that I find in [2], [5], [8]; however, the minimization problem (30), in my understanding, does not make sense as stated, since the minimized quantity is a function of time t rather than a constant.
If we modify the minimization problem as

    min ∫₀^T Σ_i (1/m_i) | m_i d²q_i/dt² + ∂Φ/∂q_i |² dt    (31)

subject to

    Σ_i (m_i/2) |dq_i/dt|² = gkT/2,

we still cannot recover the Gaussian dynamics, since the term containing the Lagrange multiplier is proportional to the acceleration.

Berendsen mentions in [2] that the proportional scaling gives a least-squares local disturbance satisfying a global constraint; that is, discretely at each time step, the Berendsen proportional scaling is a minimizer of

    min Σ_i m_i ( v_i(t + Δt) − v_i(t) )²

subject to

    Σ_i (m_i/2) v_i(t + Δt)² − Σ_i (m_i/2) v_i(t)² = (3N Δt/(2τ)) k (T₀ − T(t)).

But a global minimization description recovering the Gaussian and Berendsen dynamics is still not available. Furthermore, Parrinello's dynamics in [3] is also derived from the minimization of the local disturbance of the trajectory, without any global consideration.

The main point of this section is how to understand Gauss's principle of least constraint from the variational point of view and recover the Gaussian and Berendsen dynamics. I will keep working on it in the next couple of weeks.

10 Fokker-Planck equation and energy laws (June 22)

10.1 Fundamental question

The fundamental question we are considering is how the Langevin equation (1D case)

    ε ẍ + ẋ + U′(x) = √2 ξ    (32)

converges, as ε → 0, to the first-order stochastic differential equation

    ẋ + U′(x) = √2 ξ.    (33)

One way to prove the convergence is the so-called Smoluchowski-Kramers approximation, which is well known. Another possible method is to consider the relation between the associated Fokker-Planck equations, namely, how the solution of the Kramers equation

    ∂W/∂t = −∂/∂x (vW) + (1/ε) ∂/∂v [ (v + U′)W ] + (1/ε²) ∂²W/∂v²    (34)

converges to the solution of the Smoluchowski equation

    ∂P/∂t = ∂/∂x (P U′) + ∂²P/∂x².    (35)

10.2 Possible approach

As we discussed before, we can find the energy law for the Smoluchowski equation (35), but the difficulty is the energy law for the Kramers equation (34). Now I am thinking: why don't we avoid the energy law and try to prove

the convergence directly. In this case, what we need to do is to pick a suitable weight function

    F := F(x, v, ε)

such that

    Q(x, t, ε) → P(x, t)  as  ε → 0,

where

    Q(x, t, ε) := ∫ F(x, v, ε) W(x, v, t, ε) dv.    (36)

First of all, one notices that the Kramers equation (34) has coefficients 1/ε and 1/ε², which blow up when ε goes to zero. One possible choice for the weight function might therefore be

    F(x, v, ε) := exp[ −c(ε) f(x, v) ],

such that when one takes derivatives with respect to v, factors of c(ε) appear which can cancel the 1/ε and 1/ε².

Secondly, notice that the limiting dynamics (33) tells us that

    v + U′ = √2 ξ,

which implies that v will most likely be close to −U′.¹ To a certain degree the function Q(x, t, ε) should be an average with more weight near v = −U′ and less weight far away from −U′. If that is the case, we can pick

    F(x, v, ε) = exp[ −c(ε) (v + U′)² ].

In particular, if we choose the special potential U = x²/2, then

    F(x, v, ε) = exp[ −c(ε) (v + x)² ].    (37)

One more thing we should keep in mind is that the stationary solution of the Kramers equation (34) is given by the Boltzmann distribution

    W_st(x, v) = (1/Z) exp( −ε v²/2 − U(x) ),

¹ Intuitively, we can think that v ≈ −U′; then in the Kramers equation (34) we can roughly make the substitutions

    vW → −U′ P,    v + U′ → 0,    (1/ε²) ∂²W/∂v² → ∂²P/∂x²

and recover the Smoluchowski equation.

and the stationary solution of the Smoluchowski equation (35) is given by

    P_st(x) = (1/Z) exp( −U(x) ).

Now let us take the derivative of both sides of (36) with respect to t:

    ∂Q/∂t = ∫ F(x, v, ε) ∂W/∂t dv
          = ∫ F [ −∂/∂x (vW) + (1/ε) ∂/∂v ((v + x)W) + (1/ε²) ∂²W/∂v² ] dv.

For the special form (37) of F, integrating by parts in v (and accounting for the x-dependence of F), we have

    ∂Q/∂t = −∂/∂x ∫ v W F dv − 2c ∫ v (v + x) W F dv
            + (2c/ε) ∫ (v + x)² W F dv
            + (1/ε²) ∫ [ 4c²(v + x)² − 2c ] W F dv.

A possible way to handle the above equality is to represent the right-hand side in terms of Q, namely, to find a differential equation satisfied by Q, and then consider the limiting behavior of that equation and check the convergence to the Smoluchowski equation.

References

[1] H.C. Andersen, Molecular dynamics simulations at constant pressure and/or temperature, J. Chem. Phys., 72(4), 1980.
[2] H.J.C. Berendsen, J.P.M. Postma, W.F. van Gunsteren, A. DiNola, J.R. Haak, Molecular dynamics with coupling to an external bath, J. Chem. Phys., 81: 3684-3690, 1984.
[3] G. Bussi, M. Parrinello, Stochastic thermostats: comparison of local and global schemes, Computer Physics Communications, 179: 26-29, 2008.
[4] D.J. Evans, Computer experiment for nonlinear thermodynamics of Couette flow, J. Chem. Phys., 78: 3298-3302, 1983.
[5] D.J. Evans, Nonequilibrium molecular dynamics via Gauss's principle of least constraint, Physical Review A, 28(2), August 1983.
[6] D.J. Evans, W.G. Hoover, B.H. Failor, B. Moran, A.J.C. Ladd, Nonequilibrium molecular dynamics via Gauss's principle of least constraint, Phys. Rev. A, 28: 1016-1021, 1983.
[7] Daan Frenkel, Berend Smit, Understanding Molecular Simulation: From Algorithms to Applications, Academic Press, 2002.
[8] http://en.wikipedia.org/wiki/Gauss%27_principle_of_least_constraint
[9] T. Morishita, Fluctuation formulas in molecular dynamics simulations with the weak coupling heat bath, J. Chem. Phys., 113: 2976-2982, 2000.
[10] W.G. Hoover, Canonical dynamics: Equilibrium phase-space distributions, Physical Review A, 31(3): 1695-1697, 1985.
[11] W.G. Hoover, A.J.C. Ladd, B. Moran, High-strain-rate plastic flow studied via nonequilibrium molecular dynamics, Phys. Rev. Lett., 48: 1818-1820, 1982.
[12] S. Nosé, A molecular dynamics method for simulations in the canonical ensemble, Molecular Physics, 52(2): 255-268, 1984.
[13] S. Nosé, A unified formulation of the constant temperature molecular dynamics methods, J. Chem. Phys., 81(1), 1984.
[14] L.V. Woodcock, Isothermal molecular dynamics calculations for liquid salts, Chem. Phys. Lett., 10: 257-261, 1971.
[15] http://wiki.gromacs.org/index.php/Thermostats
