
arXiv:0905.1629v3 [cond-mat.stat-mech] 4 May 2011

Introduction to Monte Carlo Methods

Helmut G. Katzgraber

Department of Physics and Astronomy, Texas A&M University
College Station, Texas 77843-4242 USA
Theoretische Physik, ETH Zurich
CH-8093 Zurich, Switzerland

Abstract. Monte Carlo methods play an important role in scientific computation,


especially when problems have a vast phase space. In this lecture an introduction
to the Monte Carlo method is given. Concepts such as Markov chains, detailed
balance, critical slowing down, and ergodicity, as well as the Metropolis algorithm
are explained. The Monte Carlo method is illustrated by numerically studying the
critical behavior of the two-dimensional Ising ferromagnet using finite-size scaling
methods. In addition, advanced Monte Carlo methods are described (e.g., the Wolff
cluster algorithm and parallel tempering Monte Carlo) and illustrated with nontrivial
models from the physics of glassy systems. Finally, we outline an approach to study
rare events using Monte Carlo sampling with a guiding function.

Contents

1 Introduction
2 Monte Carlo integration
   2.1 Traditional integration schemes
   2.2 Simple and Markov-chain sampling
   2.3 Importance sampling
3 Interlude: Statistical mechanics
   3.1 Simple toy model: The Ising model
   3.2 Statistical physics in a nutshell
4 Monte Carlo simulations in statistical physics
   4.1 Metropolis algorithm
   4.2 Equilibration
   4.3 Autocorrelation times and error analysis
   4.4 Critical slowing down and the Wolff cluster algorithm
   4.5 When does simple Monte Carlo fail?
5 Complex toy model: The Ising spin glass
   5.1 Selected hallmark properties of spin glasses
   5.2 Theoretical description
6 Parallel tempering Monte Carlo
   6.1 Outline of the algorithm
   6.2 Selecting the temperatures
   6.3 Example: Application to spin glasses
7 Rare events: Probing tails of energy distributions
   7.1 Case study: Ground-state energy distributions
   7.2 Simple sampling
   7.3 Importance sampling with a guiding function
   7.4 Example: The Sherrington-Kirkpatrick Ising spin glass
8 Other Monte Carlo methods

1 Introduction
The Monte Carlo method in computational physics is possibly one of the most im-
portant numerical approaches to study problems spanning all thinkable scientific dis-
ciplines. The idea is seemingly simple: Randomly sample a volume in d-dimensional
space to obtain an estimate of an integral at the price of a statistical error. For
problems where the phase space dimension is very large—this is especially the case
when the dimension of phase space depends on the number of degrees of freedom—the
Monte Carlo method outperforms any other integration scheme. The difficulty lies in
smartly choosing the random samples to minimize the numerical effort.
The term Monte Carlo method was coined in the 1940s by physicists S. Ulam,
E. Fermi, J. von Neumann, and N. Metropolis (amongst others) working on the nu-
clear weapons project at Los Alamos National Laboratory. Because random numbers
(similar to processes occurring in a casino, such as the Monte Carlo Casino in Monaco)
are needed, it is believed that this is the source of the name. Monte Carlo methods
were central to the simulations done at the Manhattan Project, yet mostly hampered
by the slow computers of that era. This also spurred the development of fast random
number generators, discussed in another lecture of this series.
In this lecture, focus is placed on the standard Metropolis algorithm to study prob-
lems in statistical physics, as well as a variation known as exchange or parallel tem-
pering Monte Carlo that is very efficient when studying problems in statistical physics
with complex energy landscapes (e.g., spin glasses, proteins, neural networks) [1]. In


general, continuous phase transitions are discussed. First-order phase transitions are,
however, beyond the scope of these notes.

2 Monte Carlo integration


The motivation for Monte Carlo integration lies in the fact that most standard in-
tegration schemes fail for high-dimensional integrals. At the same time, the space
dimension of the phase space of typical physical systems is very large. For exam-
ple, the phase space dimension for N classical particles in three space dimensions
is d = 6N (three coordinates and three momentum components are needed to fully
characterize a particle). This is even worse for the case of N classical Ising spins (dis-
cussed below) which can take the values ±1. In this case the phase space comprises
2^N states, a number that grows exponentially fast with the number of spins! Therefore,
integration schemes such as Monte Carlo methods, where the error is independent of
the space dimension, are needed.

2.1 Traditional integration schemes


Before introducing Monte Carlo integration, let us review standard integration
schemes to highlight the advantages of random sampling methods. In general, the
goal is to compute the following one-dimensional integral

I = \int_a^b f(x) \, dx .    (1)

Traditionally, one partitions the interval [a, b] into M slices of width δ = (b − a)/M
and then performs a kth order interpolation of the function f (x) for each interval to
approximate the integral as a discrete sum (see Fig. 1). For example, to first order,
one performs the midpoint rule where the area of the lth slice is approximated by a
rectangle of width δ and height f[(x_l + x_{l+1})/2]. It follows that

I ≈ \sum_{l=0}^{M-1} δ · f[(x_l + x_{l+1})/2] .    (2)

For M → ∞ the discrete sum converges to the integral of f (x). Convergence can be
improved by replacing the rectangle with a linear interpolation between xl and xl+1
(trapezoidal rule) or a weighted quadratic interpolation (Simpson’s rule) [74]. One
can show that the error made due to the approximation of the function is proportional
to M^{-1} for the midpoint rule if the function is evaluated at one of the interval’s
edges (when evaluated at the center, as shown above, it scales as ∼ M^{-2}), ∼ M^{-2} for the trapezoidal rule, and
∼ M^{-4} for Simpson’s rule. The convergence of the midpoint rule can thus be slow
and the method should be avoided.
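
To make the scaling concrete, the following short Python sketch (added here for
illustration, not part of the original notes; the test function f(x) = x^3 on [0, 1],
with exact integral 1/4, is an arbitrary choice) implements the midpoint rule of
Eq. (2) and shows how the error shrinks roughly as M^{-2}:

import math

def midpoint_rule(f, a, b, M):
    """Approximate the integral of f over [a, b] with M slices, Eq. (2)."""
    delta = (b - a) / M
    return sum(delta * f(a + (l + 0.5) * delta) for l in range(M))

if __name__ == "__main__":
    f = lambda x: x**3
    for M in (10, 100, 1000):
        approx = midpoint_rule(f, 0.0, 1.0, M)
        print(M, approx, abs(approx - 0.25))   # error decreases roughly as M**-2
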
A problem arises when a multi-dimensional integral needs to be computed. In this
case one can show that, for example, the error of Simpson’s rule scales as ∼ M^{-4/d}


Figure 1: Illustration of the midpoint rule. The integration interval [a, b] is divided
into M slices; the area of each slice is approximated by the width of the slice, δ =
(b − a)/M, times the function evaluated at the midpoint of each slice.

because each space component has to be partitioned independently. Clearly, for space
dimensions larger than 4 convergence becomes very slow. Similar arguments apply for
any other traditional integration scheme where the error scales as ∼ M^{-κ}: if applied
to a d-dimensional integral the error scales as ∼ M^{-κ/d}.

2.2 Simple and Markov-chain sampling


One way to overcome the limitations imposed by high-dimensional volumes is simple
sampling Monte Carlo. A simple analogy is to determine the area of a pond by
throwing rocks. After enclosing the pond with a known area (e.g., a rectangle) and
having enough beer or wine [2], pebbles are randomly thrown into the enclosed area.
The ratio of stones in the pond and the total number of thrown stones is a simple
sampling statistical estimate for the area of the pond, see Fig. 2.

Figure 2: Illustration of simple-sampling Monte Carlo integration. An unknown area
(pond) is enclosed by a rectangle of known area A = ab. By randomly sampling the
area with pebbles, a statistical estimate of the pond’s area can be computed.

A slightly more “scientific” example is to compute π by applying Monte Carlo


integration to the unit circle. The area of the unit circle is given by A_○ = πr^2 with
r = 1; the top right quadrant can be enclosed by a square of size r and area A = r^2
(see Fig. 3). An estimate of π can be accomplished with the following pseudo-code
algorithm [3] that performs a simple sampling of the top-right quadrant:

1 algorithm simple_pi
2 initialize n_hits 0
3 initialize m_trials 10000
4 initialize counter 0
5

6 while(counter < m_trials) do


7 x = rand(0,1)
8 y = rand(0,1)
9 if(x**2 + y**2 < 1)
10 n_hits++
11 fi
12 counter++
13 done
14

15 return pi = 4*n_hits/m_trials

Figure 3: Monte Carlo estimate of π by randomly sampling the unit circle: two
random numbers x and y in the range [0, 1] are computed. If x^2 + y^2 ≤ 1, the
resulting point is in the unit circle. After M trials an estimate of π/4 can be
computed with a statistical error ∼ M^{-1/2}.

For each of the m_trials trials we generate two uniform random numbers [74] in the
interval [0, 1] [with rand(0,1)] and test in line 9 of the algorithm if these lie in the
unit circle or not. The counter n_hits is then updated if the resulting point is in
the circle. In line 15 a statistical estimate of π is then returned.
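
The pseudo-code above translates almost line by line into Python. The following
sketch is an illustration added here (not the notes’ code), with random() playing
the role of rand(0,1):

import random

def simple_pi(m_trials=10000):
    n_hits = 0
    for _ in range(m_trials):
        x, y = random.random(), random.random()
        if x**2 + y**2 < 1.0:          # point falls inside the quarter circle
            n_hits += 1
    return 4.0 * n_hits / m_trials     # statistical error ~ m_trials**(-1/2)

print(simple_pi())
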
Before applying these ideas to the integration of a function, we introduce the
concept of a Markov chain [64]. In the simple-sampling approach to estimate the area
of a pond as presented above, the random pebbles used are independent in the sense
that a newly-selected pebble to be thrown into the rectangular area in Fig. 2 does not
depend in any way on the position of the previous pebbles. If, however, the pond is
very large, it is impossible to throw pebbles randomly from one position. Thus the
approach is modified: After enough beer you start at a random location (make sure to
drain the pond first) and throw a pebble into a random direction. You then walk to
that pebble, pull a new pebble out of a pebble bucket you have with you and repeat
the operation. This is illustrated in Fig. 4. If the pebble lands outside the rectangular
area, the thrower should go get the outlier and place it on the current position of the
thrower, i.e., if the move lies outside the sampled area, it is rejected and the last move
counted twice. Why? This will be explained later and is called detailed balance (see
p. 14). Basically, it ensures that the Markov chain is reversible. After many beers
and throws, pebbles are scattered around the rectangular area, with small piles of
multiple pebbles closer to the boundaries (due to rejected moves).
Again, these ideas can be used to estimate π by Markov-chain sampling the unit
circle. Later, the Metropolis algorithm, which is based on these simple ideas, is
introduced in detail using models from statistical physics. The following algorithm
describes Markov-chain Monte Carlo for estimating π:


Figure 4: Illustration of Markov-chain Monte Carlo. The new state is always derived
from the previous state. At each step a pebble is thrown in a random direction; the
following throw has its origin at the landing position of the previous one. If a pebble
lands outside the rectangular area (cross) the move is rejected and the last position
recorded twice (double circle).

1 algorithm markov_pi
2 initialize n_hits 0
3 initialize m_trials 10000
4 initialize x 0
5 initialize y 0
6 initialize counter 0
7

8 while(counter < m_trials) do


9 dx = rand(-p,p)
10 dy = rand(-p,p)
11 if(|x + dx| < 1 and |y + dy| < 1)
12 x = x + dx
13 y = y + dy
14 fi
15 if(x**2 + y**2 < 1)
16 n_hits++
17 fi
18 counter++
19 done
20

21 return pi = 4*n_hits/m_trials

The algorithm starts from a given position in the space to be sampled [here (0, 0)]
and generates the position of the new dot from the position of the previous one. If
the new position is outside the square, it is rejected (line 11). A careful selection of
the step size p used to generate random numbers in the range [−p, p] is of importance:
When p is too small, convergence is slow, whereas if p is too large many moves are
rejected because the simulation will often leave the unit square. Therefore, a value of
p has to be selected such that consecutive moves are accepted approximately 50% of
the time.
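
A Python rendering of markov_pi is sketched below (an illustration added here, not
the original code; the default step size p is an arbitrary choice and should be tuned
so that roughly half of the proposed moves are accepted):

import random

def markov_pi(m_trials=100000, p=0.3):
    x = y = 0.0
    n_hits = 0
    for _ in range(m_trials):
        dx = random.uniform(-p, p)
        dy = random.uniform(-p, p)
        # reject moves that leave the square [-1, 1] x [-1, 1];
        # the old position is then counted again
        if abs(x + dx) < 1.0 and abs(y + dy) < 1.0:
            x, y = x + dx, y + dy
        if x**2 + y**2 < 1.0:
            n_hits += 1
    return 4.0 * n_hits / m_trials

print(markov_pi())
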
The simple-sampling approach has the advantage over the Markov-chain approach
in that the different samples are independent and thus not correlated. In the Markov-
chain approach the new state depends on the previous state. This can be a problem
since there might be a “memory” associated with this behavior. If this memory is
large, then the autocorrelation times (i.e., the time it takes the system to forget where


it was) are large and many moves have to be discarded. Then why even think about
the Markov-chain approach? Because in the study of physical systems it is generally
easier to slightly (and randomly) change an existing state than to generate a new state
from scratch for each step of the calculation. For example, when studying a system
of N spins it is easier to flip one spin according to a given probability distribution
than to generate a new configuration from scratch with a pre-determined probability
distribution.
Let us apply now these ideas to perform a simple-sampling estimate of the integral
of an actual function. As an example, we select a simple function, namely

f(x) = x^n    →    I = \int_0^1 f(x) \, dx    (3)

with n > −1. Using simple-sampling Monte Carlo, the integral can be estimated via
1 algorithm simple_integrate
2 initialize integral 0
3 initialize m_trials 10000
4 initialize counter 0
5

6 while(counter < m_trials) do


7 x = rand(0,1)
8 integral += x**n
9 counter++
10 done
11

12 return integral/m_trials

In line 8 we evaluate the function at the random location and add the result to the
estimate of the integral, i.e.,

I ≈ \frac{1}{M} \sum_i f(x_i) ,    (4)

where we have set m_trials = M. To calculate the error of the estimate, we need to
compute the variance of the function. For this we also need to perform a simple sam-
pling of the square of the function, i.e., add a line to the code with integral_square
+= x**(2*n). It then follows [56] for the statistical error of the integral δI

δI = \sqrt{\frac{\mathrm{Var}\,f}{M - 1}} ,    \mathrm{Var}\,f = ⟨f^2⟩ − ⟨f⟩^2 ,    (5)
with

⟨f^k⟩ = \int_0^1 [f(x)]^k \, dx ≈ \frac{1}{M} \sum_i [f(x_i)]^k .    (6)
Here x_i are uniformly distributed random numbers. The important detail is that
the error in Eq. (5) does not depend on the space dimension but merely scales as M^{-1/2}. This means


that, for example, for space dimensions d > 8 Monte Carlo sampling outperforms
Simpson’s rule.
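
As an illustration (not part of the original notes), the following Python sketch
estimates I = \int_0^1 x^n dx by simple sampling and reports the statistical error
of Eq. (5); the exact result 1/(n + 1) serves as a check:

import math, random

def simple_integrate(n=3.0, m_trials=10000):
    s = s2 = 0.0
    for _ in range(m_trials):
        fx = random.random()**n       # evaluate f at a uniform random location
        s += fx
        s2 += fx * fx
    mean = s / m_trials
    var = s2 / m_trials - mean**2     # Var f = <f^2> - <f>^2
    return mean, math.sqrt(var / (m_trials - 1))

estimate, error = simple_integrate()
print(estimate, "+/-", error, "exact:", 1.0 / (3.0 + 1.0))
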
The presented simple-sampling approach has one crucial problem: When in the
example shown the exponent n is close to −1 or much larger than 1 the variance of
the function in the interval is large. At the same time, the interval [0, 1] is sampled
uniformly. Therefore, similar to the estimate of π, areas which carry little weight
for the integral are sampled with equal probability as areas which carry most of the
function’s support (see Fig. 5). Therefore the integral and error converge slowly. To
alleviate the situation and shift resources where they are needed most, importance
sampling is used.

Figure 5: Illustration of the simple-sampling approach when integrating f(x) = x^n
with n ≫ 1. The function has most support for x → 1. Because random numbers
are generated with a uniform probability, the whole range [0, 1] is sampled with
equal probability, although for x → 0 the contribution to the integral is small.
Thus, the integral converges slowly.

2.3 Importance sampling


When the variance of the function to be integrated is large, the error [directly depen-
dent on the variance, see Eq. (5)] is also large. A cure to the problem is provided
by generating random numbers that more efficiently sample the area, i.e., distributed
according to a function p(x) which, if possible, has to fulfill the following criteria:
First, p(x) should be as close as possible to f (x) and second, generating p-distributed
random numbers should be easily accomplished. The integral of f (x) can be expressed
in the following way [using the notation introduced in Eq. (6)]

⟨f⟩ = ⟨f/p⟩_p = \int_0^1 \frac{f(x)}{p(x)} \, p(x) \, dx ≈ \frac{1}{M} \sum_i \frac{f(y_i)}{p(y_i)} .    (7)

In Eq. (7) ⟨· · ·⟩_p corresponds to a sampling with respect to p-distributed random
numbers; the y_i are also p-distributed. The advantage of this approach is that the error
is now given in terms of the variance Var(f/p) and, if f(x) and p(x) are close,
the variance of f/p is considerably smaller than the variance of f.
For the case of f(x) = x^n we could, for example, select random numbers dis-
tributed according to p(x) ∼ x^ℓ with ℓ ≥ n (when n > −1). This means that in
Fig. 5 the area around x ≲ 1 is sampled with a higher probability than the area


around x ∼ 0. Power-law distributed random numbers y can be readily produced
from uniform random numbers x by inverting the cumulative distribution of p(x),
i.e.,

y(x) = x^{1/(ℓ+1)} ,    ℓ > −1 .    (8)
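
A minimal importance-sampling sketch under these assumptions (illustrative only,
not from the notes; the values n = 10 and ℓ = 12 are arbitrary choices) draws
p-distributed numbers via Eq. (8), with p(y) = (ℓ + 1) y^ℓ, and averages the weights
f(y)/p(y); choosing ℓ close to n strongly reduces the variance compared to simple
sampling:

import math, random

def importance_integrate(n=10.0, l=12.0, m_trials=10000):
    s = s2 = 0.0
    for _ in range(m_trials):
        y = random.random()**(1.0 / (l + 1.0))   # p-distributed random number, Eq. (8)
        w = y**n / ((l + 1.0) * y**l)            # f(y)/p(y)
        s += w
        s2 += w * w
    mean = s / m_trials
    var = s2 / m_trials - mean**2
    return mean, math.sqrt(var / (m_trials - 1))

print(importance_integrate(), "exact:", 1.0 / 11.0)
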
In the next sections the elaborated concepts are applied to problems in (statistical)
physics. First, some toy models and physical approaches to study the critical behavior
of statistical models using finite-size simulations are introduced.

3 Interlude: Statistical mechanics


In this section the core concepts of statistical mechanics are presented as well as a
simple model to study phase transitions. Because discussing these topics at length is
beyond the scope of these lecture notes, the reader is referred to the vast literature
in statistical physics, in particular Refs. [18, 31, 36, 43, 77, 79, 91].

3.1 Simple toy model: The Ising model


Developed in 1925 [45] by Ernst Ising and Wilhelm Lenz, the Ising model has become
over the decades the drosophila of statistical mechanics. The simplicity yet rich
behavior of the model makes it the perfect platform to study many magnetic systems
as well as for testing of algorithms. For simplicity, it is assumed that the magnetic
moments are highly anisotropic, i.e., they can only point in one space direction. The
classical spins Si = ±1 are placed on a hypercubic lattice with nearest-neighbor
interactions. Therefore, the Hamiltonian is given by

H = \sum_{⟨i,j⟩} J_{ij} S_i S_j − H \sum_i S_i .    (9)

The first term in Eq. (9) is responsible for the pairwise interaction between two
neighboring spins S_i and S_j. When J_{ij} = −J < 0, the energy is minimized by
aligning all spins, i.e., ferromagnetic order, whereas when J_{ij} = J > 0 the energy is
minimized by ensuring that the product of neighboring spin pairs is negative. In this
case, staggered antiferromagnetic order is obtained for T → 0. The “⟨i, j⟩” represents
a sum over nearest-neighbor pairs of spins on the lattice (see Fig. 6). The second term
in Eq. (9) represents a coupling to an external field of strength H. Amazingly, this
simple model captures all interesting phenomena found in the physics of statistical
mechanics and phase transitions. It is exactly solvable in one space dimension, and in
two dimensions for H = 0, and thus an excellent test bed for algorithms. Furthermore,
in space dimensions larger than one it undergoes a finite-temperature transition into
an ordered state.
A natural way to quantify the temperature-dependent transition in the ferromag-
netic case is to measure the magnetization

m = \frac{1}{N} \sum_i S_i    (10)


of the system. When all spins are aligned, i.e., at low temperatures (below the
transition), the magnetization is close to unity. For temperatures much larger than the
transition temperature Tc , spins fluctuate wildly and so, on average, the magnetization
is zero. Therefore, the magnetization plays the role of an order parameter that is large
in the ordered phase and zero otherwise. Before the model is described further, some
basic concepts from statistical physics are introduced.

Figure 6: Illustration of the two-dimensional Ising model with nearest-neighbor
interactions. Filled [open] circles represent S_i = +1 [S_i = −1]. The spins only
interact with their nearest neighbors (lines connecting the dots).

3.2 Statistical physics in a nutshell


It would be beyond the scope of this lecture to discuss in detail statistical mechanics
of magnetic systems. The reader is referred to the vast literature on the topic [18, 31,
36, 43, 77, 79, 91]. In this context only the relevant aspects of statistical physics are
discussed.

Observables In statistical physics, expectation values of quantities such as the


energy, magnetization, specific heat, etc.—generally called observables—are computed
by performing a trace over the partition function Z. Within the canonical ensemble
[43] where the temperature T is fixed, the expectation value or thermal average of an
observable O is given by

⟨O⟩ = \frac{1}{Z} \sum_s O(s) e^{−H(s)/kT} .    (11)

The sum is over all states s in the system, and k represents the Boltzmann constant.
Z = \sum_s \exp[−H(s)/kT] is the partition function which normalizes the equilibrium
Boltzmann distribution

P_{eq}(s) = \frac{1}{Z} e^{−H(s)/kT} .    (12)
The ⟨· · ·⟩ in Eq. (11) represent a thermal average. One can show that the internal
energy of the system is given by

E = ⟨H(s)⟩ ,    (13)

whereas the free energy F is given by

F = −kT ln Z . (14)


Note that all thermodynamic quantities can be computed directly from the partition
function and expressed as derivatives of the free energy (see Ref. [91] for details).
Because the partition function is closely related to the Boltzmann distribution, it
follows that if we can sample observables (e.g., measure the magnetization) with
states generated according to the corresponding Boltzmann distribution, a simple
Markov-chain “integration” scheme can be used to produce an estimate.

Phase transitions Continuous phase transitions [43] have no latent heat at the
transition and are thus easier to describe. At a continuous phase transition the free
energy has a singularity that usually manifests itself via a power-law behavior of the
derived observables at criticality. The correlation length ξ [43]—which gives us a
measure of correlations and order in a system—diverges at the transition

ξ ∼ |T − T_c|^{−ν} ,    (15)

with ν a critical exponent quantifying this divergence and T_c the transition tempera-
ture. Close enough to the transition (i.e., |T − T_c|/T_c ≪ 1) the behavior of observables
can be well described by power laws. For example, the specific heat c_V has a singu-
larity at T_c with c_V ∼ |T − T_c|^{−α}, although the exponent α (unlike ν) can be both
negative and positive. The magnetization does not diverge, but has a singular kink
at T_c, i.e., m ∼ |T − T_c|^{β} with β > 0.
Using arguments from the renormalization group [31] it can be shown that the crit-
ical exponents are related via scaling relations. Often (as in the Ising case), only two
exponents are independent and fully characterize the critical behavior of the model.
It can be further shown that models in statistical physics generally obey universal be-
havior (there are some exceptions. . . ), i.e., if the lattice geometry is kept the same, the
critical exponents only depend on the order parameter symmetry. Therefore, when
simulating a statistical model, it is enough to determine the location of the transition
temperature Tc , as well as two independent critical exponents to fully characterize the
universality class of the system.

Finite-size scaling and the Binder ratio (or “Binder cumulant”) How can
we determine the bulk critical exponents of a system by simulating finite lattices?
When the systems are not infinitely large, the critical behavior is smeared out. Again,
using arguments from the renormalization group, one can show that the nonanalytic
part of a given observable can be described by a finite-size scaling form [75]. For
example, the finite-size magnetization from a simulation of an Ising system with L^d
spins is asymptotically (close to the transition, and for large L) given by

⟨m_L⟩ ∼ L^{β/ν} \tilde{M}[L^{1/ν}(T − T_c)] ,    (16)

and for the magnetic susceptibility by

χ_L ∼ L^{γ/ν} \tilde{C}[L^{1/ν}(T − T_c)] ,    (17)


where close to the transition χ ∼ |T − T_c|^{−γ} (for the infinite system, L → ∞) and

χ = \frac{L^d}{kT} \left[ ⟨m^2⟩ − ⟨m⟩^2 \right] .    (18)

Both M̃ and C̃ are unknown scaling functions. Equations (16) and (17) show that
when T = T_c, data for ⟨m_L⟩/L^{β/ν} and χ_L/L^{γ/ν} simulated for different system sizes
L should cross in the large-L limit at one point, namely T = T_c, provided we use
the right expressions for β/ν and γ/ν, respectively. In reality, there are nonanalytic
corrections to scaling and so the crossing points between two successive system size
pairs (e.g., L and 2L) converge to a common crossing point for L → ∞ that agrees
with the bulk transition temperature Tc . Performing the finite-size scaling analysis
with the magnetization or the susceptibility is not very practical, because neither β
nor γ are known a priori. There are other approaches to determine these, but a far
simpler method is to determine combined quantities that are dimensionless. One such
quantity is known as the Binder ratio (or “Binder cumulant”) [12] given by

g = \frac{1}{2} \left( 3 − \frac{⟨m^4⟩}{⟨m^2⟩^2} \right) ∼ \tilde{G}[L^{1/ν}(T − T_c)] .    (19)

The different factors ensure that g → 1 for T → 0 and g → 0 for T → ∞. The


asymptotic (for large L) scaling behavior of the Binder ratio follows directly from the
fact that the pre-factors of the moments of the magnetization (m^k ∼ L^{kβ/ν}) cancel
out in Eq. (19).
The Binder ratio is a dimensionless quantity and so data for different system sizes
L approximately cross at a putative transition—provided corrections to scaling are
small. Furthermore, by carefully selecting the correct value of the critical exponent ν,
the data fall onto a universal curve. Therefore, the method allows for an estimation
of Tc , as well as the critical exponent ν. This is illustrated in Fig. 7 for the two-
dimensional Ising model. The left panel shows the Binder ratio as a function of
temperature for several small system sizes. The vertical dashed line marks the exactly-
known value of the critical temperature, namely T_c = 2/ln(1 + √2) ≈ 2.269 . . . [91].
The right panel shows a finite-size scaling analysis of the data for the exact value of
the critical exponent ν. Close to the transition the data fall onto a universal curve,
showing that ν = 1 is the correct value of the critical exponent. The two-dimensional
Ising universality class is fully characterized with a second critical exponent, e.g.,
β = 1/8.
Note that other dimensionless quantities, such as the two-point finite-size correla-
tion length [7, 70] can also be used with similar results.
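
In a simulation, the Binder ratio is obtained directly from the measured moments of
the magnetization. The following small sketch (added for illustration, not from the
notes) computes g of Eq. (19) and the scaling variable used in Fig. 7; the default
T_c and ν are the two-dimensional Ising values quoted above:

def binder_ratio(m_samples):
    """Binder ratio g of Eq. (19) from a list of magnetization measurements."""
    M = float(len(m_samples))
    m2 = sum(m * m for m in m_samples) / M
    m4 = sum(m**4 for m in m_samples) / M
    return 0.5 * (3.0 - m4 / (m2 * m2))

def scaling_variable(L, T, Tc=2.269, nu=1.0):
    """Scaling variable L**(1/nu) * (T - Tc); defaults are the 2D Ising values."""
    return L**(1.0 / nu) * (T - Tc)
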

4 Monte Carlo simulations in statistical physics


In analogy to the importance-sampling Monte Carlo integration of functions discussed
in Sec. 2, we can use the gained insights to sample the average of an observable in


Figure 7: Left panel: Binder ratio as a function of temperature for the two-
dimensional Ising model with nearest-neighbor interactions. The data approxi-
mately cross at one point (the dashed line corresponds to the exactly-known Tc for
the two-dimensional Ising model) signaling a transition. Right panel: Finite-size
scaling of the data in the left panel using the known T_c = 2.269 . . . and ν = 1. Plot-
ted are data for the Binder ratio as a function of the scaling variable L^{1/ν}[T − T_c].
Data for different system sizes fall onto a universal curve suggesting that the pa-
rameters used are the correct ones.

statistical physics. In general, as shown in Eq. (11),

⟨O⟩ = \frac{\sum_s O(s) e^{−H(s)/kT}}{\sum_s e^{−H(s)/kT}} .    (20)

Equation (20) can be trivially extended with a distribution for the states, i.e.,

⟨O⟩ = \frac{\sum_s [O(s) e^{−H(s)/kT}/P(s)] P(s)}{\sum_s [e^{−H(s)/kT}/P(s)] P(s)} .    (21)
The approach is completely analogous to the importance sampling Monte Carlo inte-
gration. If P(s) is the Boltzmann distribution [Eq. (12)] then the factors cancel out
and we obtain

⟨O⟩ = \frac{1}{M} \sum_i O(s_i) ,    (22)
where the states s_i are now selected according to the Boltzmann distribution. The
problem now is to find an algorithm that allows for a sampling of the Boltzmann
distribution. The method is known as the Metropolis algorithm.

4.1 Metropolis algorithm


The Metropolis algorithm [63] was developed in 1953 at Los Alamos National Lab
within the nuclear weapons program mainly by the Rosenbluth and Teller families [4].
The article in the Journal of Chemical Physics starts in the following way:


“The purpose of this paper is to describe a general method, suitable for fast
electronic computing machines, of calculating the properties of any substance
which may be considered as composed of interacting individual molecules.”

And they were right. The idea is the following: In order to evaluate Eq. (20) we
generate a Markov chain of successive states s_1 → s_2 → . . .. The new state is
generated from the old state with a carefully-designed transition probability P(s → s')
such that it occurs with a probability given by the equilibrium Boltzmann distribution,
i.e., P_{eq}(s) = Z^{−1} exp[−H(s)/kT]. In the Markov process, the state s occurs with
probability P_k(s) at the kth time step, described by the master equation

P_{k+1}(s) = P_k(s) + \sum_{s'} \left[ T(s' → s) P_k(s') − T(s → s') P_k(s) \right] .    (23)

The sum is over all states s' and the first term in the sum describes all processes
reaching state s, while the second term describes all processes leaving state s. The
goal is that for k → ∞ the probabilities P_k(s) reach a stationary distribution described
by the Boltzmann distribution. The transition probabilities T can be designed in such
a way that for P_k(s) = P_{eq}(s), all terms in the sum vanish, i.e., for all s and s' the
detailed balance condition

T(s' → s) P_{eq}(s') = T(s → s') P_{eq}(s)    (24)

must hold. The condition in Eq. (24) means that the process has to be reversible.
Furthermore, when the system has assumed the equilibrium probabilities, the ratio
of the transition probabilities only depends on the change in energy ΔH(s, s') =
H(s') − H(s), i.e.,

\frac{T(s → s')}{T(s' → s)} = \exp[−(H(s') − H(s))/kT] = \exp[−ΔH(s, s')/kT] .    (25)

There are different choices for the transition probabilities T that satisfy Eq. (25). One
can show that T has to satisfy the general equation T(x)/T(1/x) = x for all x, with x =
exp(−ΔH/kT). There are two convenient choices for T that satisfy this condition:

1. Metropolis (also known as Metropolis-Hastings) algorithm In this case
T(x) = min(1, x) and so

T(s → s') = \begin{cases} Γ , & \text{if } ΔH ≤ 0 ; \\ Γ e^{−ΔH(s,s')/kT} , & \text{if } ΔH ≥ 0 . \end{cases}    (26)

In Eq. (26), Γ^{−1} represents a Monte Carlo time.

2. Heat-bath algorithm In this case T(x) = x/(1 + x), corresponding to an
acceptance probability ∼ [1 + exp(ΔH(s, s')/kT)]^{−1}. For the rest of this lecture, we


focus on the Metropolis algorithm. The heat bath algorithm is more efficient when
temperatures far below the transition temperature are sampled.
The move between states s and s' can, in principle, be arbitrary. If, however, the
energies of states s and s' are too far apart, the move will likely not be accepted. For
the case of the Ising model, in general, a single spin S_i is selected and flipped with
the following probability:

T(S_i → −S_i) = \begin{cases} Γ , & \text{for } S_i = −\mathrm{sign}(h_i) ; \\ Γ e^{−2 S_i h_i/kT} , & \text{for } S_i = \mathrm{sign}(h_i) , \end{cases}    (27)

where h_i = −\sum_{j ≠ i} J_{ij} S_j + H is the effective local field felt by the spin S_i.


Practical implementation of the Metropolis algorithm A simple pseudo-code


Monte Carlo program to compute an observable O for the Ising model is the following:

1 algorithm ising_metropolis(T,steps)
2 initialize starting configuration S
3 initialize O = 0
4

5 for(counter = 1 ... steps) do


6 generate trial state S’
7 compute p(S -> S’,T)
8 x = rand(0,1)
9 if(p > x) then
10 accept S’
11 fi
12

13 O += O(S’)
14 done
15

16 return O/steps

After initialization, in line 6 a proposed state is generated by, e.g., flipping a spin.
The energy of the new state is computed and from it the transition probability
between states, p = T(S → S'). A uniform random number x ∈ [0, 1] is generated. If
the probability is larger than the random number, the move is accepted. If the energy
is lowered, i.e., ΔH ≤ 0, the spin is always flipped. Otherwise the spin is flipped with
a probability p. Once the new state is accepted, we measure a given observable and
record its value to perform the thermal average at a given temperature. For steps
→ ∞ the average of the observable converges to the exact value, again with an error
inversely proportional to the square root of the number of steps. This is the core
bare-bones routine for the Metropolis algorithm. In practice, several aspects have to
be considered to ensure that the data produced are correct. The most important,
autocorrelation and equilibration times, are described below.
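
For concreteness, here is a minimal Python sketch of such a single-spin-flip Metropolis
simulation (an illustration added here, not the notes’ code), written for the two-
dimensional ferromagnet in the common convention E = −\sum_{⟨i,j⟩} S_i S_j [i.e., J_{ij} = −1
in Eq. (9)], zero external field, k = 1, and periodic boundaries:

import math, random

def metropolis_sweep(spins, L, T):
    """One Monte Carlo sweep = L*L attempted single-spin flips."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # sum over the four nearest neighbors (periodic boundary conditions)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        delta_H = 2.0 * spins[i][j] * nn   # energy change for flipping spin (i, j)
        if delta_H <= 0.0 or random.random() < math.exp(-delta_H / T):
            spins[i][j] = -spins[i][j]

def magnetization(spins):
    N = len(spins) * len(spins[0])
    return sum(sum(row) for row in spins) / float(N)

if __name__ == "__main__":
    L, T = 16, 2.0                         # T in units of J/k
    spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for sweep in range(2000):              # the first sweeps serve as equilibration
        metropolis_sweep(spins, L, T)
    print("m =", magnetization(spins))
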

4.2 Equilibration
In order to obtain a correct estimate of an observable O, it is imperative to ensure
that one is actually sampling an equilibrium state. Because, in general, the initial con-
figuration of the simulation can be chosen arbitrarily—popular choices being random
or fully polarized configurations—the system will have to evolve for several Monte Carlo
steps before an equilibrium state at a given temperature is obtained. The time τeq
until the system is in thermal equilibrium is called equilibration time and depends
directly on the system size (e.g., the number of spins N = Ld ) and increases with
decreasing temperature. In general, it is measured in units of Monte Carlo sweeps
(MCS), i.e., 1 MCS = N spin updates.
In practice, all measured observables should be monitored as a function of MCS to
ensure that the system is in thermal equilibrium. Some observables, such as the


Figure 8: Sketch of the equilibration behavior of the magnetization m as a function
of Monte Carlo time. After a certain time τ_eq the data become approximately flat
and fluctuate around a mean value. Once τ_eq has been reached, the system is in
thermal equilibrium and observables can be measured.

energy, equilibrate faster than others (e.g., magnetization) and thus the equilibration
times of all observables measured need to be considered.

4.3 Autocorrelation times and error analysis


Because in a Markov chain the new states are generated by modifying the previous
ones, subsequent states can be highly correlated. To ensure that the measurement of
an observable O is not biased by correlated configurations, it is important to measure
the autocorrelation time τauto that describes the time it takes for two measurements
to be decorrelated. This means that in a Monte Carlo simulation, after the system
has been thermally equilibrated, measurements can only be taken every τauto MCS.
To compute the autocorrelation time for a given observable O, the time-dependent
autocorrelation function needs to be measured:

C_O(t) = \frac{⟨O(t_0)O(t_0 + t)⟩ − ⟨O(t_0)⟩⟨O(t_0 + t)⟩}{⟨O^2(t_0)⟩ − ⟨O(t_0)⟩^2} .    (28)
In general, C_O(t) ∼ exp(−t/τ_auto) and so τ_auto is given by the value where C_O drops
to 1/e. An alternative is the integrated autocorrelation time τ_auto^{int}. It is basically the
same as the standard autocorrelation time for any practical purpose. However, it is
easier to compute:

τ_auto^{int} = \frac{\sum_{t=1}^{∞} \left[ ⟨O(t_0)O(t_0 + t)⟩ − ⟨O⟩^2 \right]}{⟨O^2⟩ − ⟨O⟩^2} .    (29)
Autocorrelation effects influence the determination of the error of statistical estimates.
It can be shown [36] that the error ΔO is given by

ΔO = \sqrt{\frac{⟨O^2⟩ − ⟨O⟩^2}{M − 1} \, (1 + 2τ_auto)} .    (30)
Here M is the number of measurements. The autocorrelation time directly influences
the calculation of the error bars and must be computed and included in all calcula-
tions. So far, we have not discussed how the autocorrelation times depend on the
system size and the temperature. Like the equilibration times, the autocorrelation
times increase with increasing system size.
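
A minimal sketch (illustrative, not the notes’ code) of how τ_auto^{int} of Eq. (29) and
the corrected error bar of Eq. (30) could be computed from a series of measurements
obs taken after equilibration:

import math

def integrated_autocorrelation_time(obs, t_max=None):
    """Integrated autocorrelation time of Eq. (29) for a measurement series."""
    M = len(obs)
    mean = sum(obs) / M
    var = sum((o - mean) ** 2 for o in obs) / M
    t_max = t_max if t_max is not None else M // 10   # drop the noisy large-t tail
    tau = 0.0
    for t in range(1, t_max):
        cov = sum((obs[k] - mean) * (obs[k + t] - mean)
                  for k in range(M - t)) / (M - t)
        tau += cov / var                               # normalized autocorrelation
    return tau

def error_bar(obs):
    """Statistical error of the mean including autocorrelations, Eq. (30)."""
    M = len(obs)
    mean = sum(obs) / M
    var = sum((o - mean) ** 2 for o in obs) / M
    tau = integrated_autocorrelation_time(obs)
    return math.sqrt(var * (1.0 + 2.0 * tau) / (M - 1))
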


4.4 Critical slowing down and the Wolff cluster algorithm


Close to a phase transition, the autocorrelation time is given by

τ_auto ∼ ξ^z    (31)

with z > 1 and typically around 2. Because the correlation length ξ diverges at a
continuous phase transition, so does the autocorrelation time. This effect, known as
critical slowing down, slows simulations to intractable times close to continuous phase
transitions when the dynamical critical exponent z is large.
The problem can be alleviated by using Monte Carlo methods which, while only
performing small changes to the energy of the system (to ensure that moves are
accepted frequently), heavily randomize the spin configurations and not only change
the value of one spin. This ensures that phase space is sampled evenly. Typical
examples are cluster algorithms [82, 90] where a carefully-built cluster of spins is
flipped at each step of the simulation [36, 58, 59, 68].

Wolff cluster algorithm (Ising spins) In the Wolff cluster algorithm [90] we
choose a random spin and build a cluster around it (the algorithm is constructed
in such a way that larger clusters are preferred). Once the cluster is constructed,
it is flipped in a rejection-free move. This “randomizes” the system efficiently, thus
overcoming critical slowing down. Outline of the algorithm (a minimal code sketch
follows the list):

• Select a random spin.

• If a neighboring spin is parallel to the initial spin, add it to the cluster with a
  probability 1 − exp(−2J/kT).

• Repeat the previous step for all neighbors of the newly-added spins and iterate
  until no new spins can be added.

• Flip all spins in the cluster.
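
A minimal sketch of one such cluster update (an illustration added here, assuming
the ferromagnetic two-dimensional Ising model with J = 1, k = 1, and periodic
boundaries; not the notes’ own code):

import math, random

def wolff_update(spins, L, T):
    """Grow and flip one Wolff cluster; returns the cluster size."""
    p_add = 1.0 - math.exp(-2.0 / T)               # 1 - exp(-2J/kT) with J = k = 1
    seed = (random.randrange(L), random.randrange(L))
    seed_spin = spins[seed[0]][seed[1]]
    cluster = {seed}
    stack = [seed]
    while stack:
        i, j = stack.pop()
        neighbors = [((i + 1) % L, j), ((i - 1) % L, j),
                     (i, (j + 1) % L), (i, (j - 1) % L)]
        for site in neighbors:
            ni, nj = site
            if (site not in cluster and spins[ni][nj] == seed_spin
                    and random.random() < p_add):
                cluster.add(site)
                stack.append(site)
    for i, j in cluster:                            # rejection-free flip of the cluster
        spins[i][j] = -spins[i][j]
    return len(cluster)
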

The algorithm obeys detailed balance. Furthermore, one can show that the linear size
of the cluster is proportional to the correlation length. Therefore the algorithm adapts
to the behavior of the system at criticality resulting in z ≈ 0, i.e., the critical slowing
down encountered around the transition is removed and the algorithm performs orders
of magnitude faster than simple Monte Carlo. For low temperatures, the cluster
algorithm merely “flip-flops” almost all spins of the system and provides not much
improvement, unless a domain wall is stuck in the system. For temperatures much
higher than the critical temperature the size of the clusters is of the order of one spin
and there the Metropolis algorithm outperforms the cluster algorithm (keep in mind
that building the cluster takes many operations). Thus the method works best at
criticality.
In general, to be able to cover a temperature range that extends beyond the critical
region, combinations of cluster updates and local updates (standard Monte Carlo) are
recommended. One can also define improved estimators to measure observables with


a reduced statistical error. Finally, the Wolff cluster algorithm can also be generalized
to Potts spins, XY and Heisenberg spins, as well as hard disks. The reader is referred
to the literature [36, 56, 58, 59, 68] for details.
Note that the Swendsen-Wang cluster algorithm [82] is similar to the Wolff cluster
algorithm. However, instead of building one cluster, multiple clusters are built. This
is less efficient when the space dimension is larger than two because in that case only
a few large clusters will exist.

4.5 When does simple Monte Carlo fail?


Metropolis et al. did not bear in mind that there are systems where even a simple
spin flip can produce a huge change in the energy ∆H of the system. This has the
effect that the probability for new configurations to be accepted is very small and the
simulation stalls, in particular when the studied system has a rough energy landscape,
i.e., different states in phase space are separated by large energy “mountains” and
deep energy “valleys,” as depicted in Fig. 9. Examples of such complex systems are
spin glasses, proteins and neural networks.

Figure 9: Sketch of a rough energy landscape. A Monte Carlo move from the initial
(solid circle) to the final state (open circle) is unlikely if the size of the barrier ΔE is
large, especially at low temperatures. A simple Monte Carlo simulation will stall and
the system will be stuck in the metastable state.

These systems are characterized by a complex energy landscape with deep valleys
and mountains that grow exponentially with the system size. Therefore, for low tem-
peratures, equilibration times of simple Monte Carlo methods diverge. Although the
method technically still works, the time it takes to equilibrate even the smallest sys-
tems becomes impractical. Improved sampling techniques for rough energy landscapes
need to be implemented.

5 Complex toy model: The Ising spin glass


What happens if we take the ferromagnetic Ising model and flip the sign of randomly-
selected interactions Jij between two spins? The resulting behavior is illustrated in
Fig. 10. For low temperatures, if the product of the interactions Jij around any
plaquette is negative, frustration effects emerge. The spin in the lower left corner
of the highlighted plaquette tries to minimize the energy by either aligning with
the right neighbor, or being antiparallel with the top neighbor. Both conditions are
mutually exclusive and so the energy cannot be uniquely minimized. This behavior


is a hallmark of spin glasses [13, 21, 23, 28, 65, 83, 92]. Note that, in general, the bonds
are either chosen from a bimodal (P_b) or Gaussian (P_g) disorder distribution:

P_b(J_{ij}) = p δ(J_{ij} − 1) + (1 − p) δ(J_{ij} + 1) ,    P_g(J_{ij}) = \frac{1}{\sqrt{2π}} \exp[−J_{ij}^2/2] ,    (32)

where, in general, p = 1/2. The Hamiltonian in Eq. (9) with disorder in the bonds
is known as the Edwards-Anderson Ising spin glass. There is a finite-temperature
transition for space dimensions d ≥ 3 between a spin-glass and the (thermally) disor-
dered state, cf. Sec. 5.2. For example, for Gaussian disorder in three space dimensions
Tc ≈ 0.95 [49].

Figure 10: Two-dimensional Ising spin glass. The circles represent Ising spins.
A thin line between two spins i and j corresponds to J_{ij} < 0, whereas a thick
line corresponds to J_{ij} > 0. In comparison to a ferromagnet, the behavior of the
model system changes drastically, as illustrated in the highlighted plaquette. For
T → 0, the spin in the lower left corner is unable to fulfill the interactions with the
neighbors and is frustrated (see text).

The result of competing interactions is a complex energy landscape. The com-


plexity of the model increases considerably. For example, finding the ground state
energy of a spin glass is generally an NP-hard problem. Equilibration times in finite-
temperature Monte Carlo simulations grow exponentially and thus the study of system
sizes beyond a few spins becomes intractable. So . . . why study these systems? Not
only are there many materials [13] that can be described well with spin-glass Hamilto-
nians, many other problems spanning several fields of science can be either described
directly by spin-glass Hamiltonians or mapped onto these. Therefore these models
are of general interest to a broad spectrum of disciplines.
Note that, because in general only finite system sizes can be simulated, an average
over different realizations of the disorder needs to be performed in addition to the
thermal averages. This means that after a Monte Carlo simulation has been completed
for a given distribution of the disorder, it must be repeated at least 1000 times for the
results to be representative. Although this extra effort further complicates simulations
of spin-glass systems, it makes them embarrassingly parallel, i.e., simulations can
easily be distributed over many workstations.


5.1 Selected hallmark properties of spin glasses


Because of the complex energy landscape, spin glasses show dynamical properties not
seen in any other materials/systems. First, spin-glass observables such as suscepti-
bilities and magnetizations age with time. Due to the complex energy landscape,
there are rearrangements of the spins in macroscopic time scales. Therefore, when
preparing a spin-glass system at a given temperature, a slow decay of observables can
be observed because the system, at least experimentally, is never in thermal equilib-
rium. Furthermore, when performing an aging experiment on a spin glass, changing
the temperature from T1 < Tc to T2 < T1 at time t1 for a finite period of time t2
and then back to T1 shows interesting memory effects [47]. After the time t1 + t2 , the
system remembers the state it had at time t1 and follows the previous aging path.
This memory and rejuvenation effect is unique to spin glasses.
While the susceptibility shows a cusp at the transition [17], the specific heat has
a smooth behavior around the transition temperature [65]. Furthermore, no signs
of spatial ordering can be found when performing a neutron scattering experiment
probing below the transition temperature. However, Mössbauer spectroscopy shows
that the magnetic moments are frozen in space, thus indicating that the system is in
a glassy and not liquid phase. Therefore, in its simplest interpretation, a spin glass
is a model for a highly-disordered magnet.

5.2 Theoretical description


In 1975, Edwards and Anderson suggested a phenomenological model in order to
describe these fascinating materials: the Edwards-Anderson (EA) spin-glass Hamil-
tonian [25] discussed above. In 1979, Parisi postulated a solution (only recently
proven to be correct [84]) to the mean-field Sherrington-Kirkpatrick (SK) model [78],
a variation of the Edwards-Anderson model with infinite-range interactions (all spins
interact with each other). The replica symmetry breaking picture (RSB) of Parisi
for the mean-field model spawned an increased interest in the field and has been ap-
plied to a variety of problems. In addition, in 1986 a phenomenological description,
called the “Droplet Picture” (DP) was introduced simultaneously by Fisher & Huse
and Bray & Moore [26, 27] in order to describe short-range spin glasses, as well as
the chaotic pairs picture by Newman & Stein [66, 67]. However, rigorous analytical
results are difficult to obtain for realistic short-range spin-glass models. Because of
this, research has shifted to intense numerical studies.
Nevertheless, spin glasses are far from being understood. The memory effect in
spin glasses [47] has yet to be understood theoretically, and only recently was it
observed numerically [46]. The existence of a spin-glass phase in a finite magnetic
field [20] has been a source of debate [93], as well as the ultrametric structure of
the phase space (hierarchical structure of states) which remains to be understood
for realistic models [39, 48]. Finally, there have been several numerical attempts at
finite [42, 50, 57, 62, 71] and zero temperature [34, 38] to better understand the nature
of the spin-glass state for short-range spin glasses. To date, the data for Ising spins
are consistent with an intermediate picture [57, 71] that combines elements from the


standard theoretical predictions of replica symmetry breaking and the droplet theory.
How can order be quantified in a system that intrinsically does not have visible
spatial order? For this we need to first determine what differentiates a spin glass
at temperatures above the critical point Tc and below. Above the transition, like
for the regular Ising model, spins fluctuate and any given snapshot yields a random
configuration. Therefore, comparing a snapshot at time t and time t + δt yields
completely different results. Below the transition, (replica) symmetry is broken and
configurations freeze into place. Therefore, comparing a snapshot of the system at
time t and time t + δt shows significant similarities. A natural choice thus is to define
an overlap function q which compares two copies of the system with the same disorder.
In simulations, it is less practical to compare two snapshots of the system at
different times. Therefore, for practical reasons two copies (called “replicas”) α and β
with the same disorder but different initial conditions and Markov chains are simulated
in parallel. The order parameter is then given by

q = \frac{1}{N} \sum_i S_i^α S_i^β ,    (33)

and is illustrated graphically in Fig. 11. For temperatures below Tc , q tends to unity
whereas for T > Tc on average q → 0, similar to the magnetization for the Ising
ferromagnet. Analogous to the ferromagnetic case, we can define a Binder ratio g by
replacing the magnetization m with the spin overlap q to probe for the existence of a
spin-glass state.
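
As an illustration (not from the notes), the overlap of Eq. (33) and the corresponding
Binder ratio can be computed from two replica configurations stored as flat lists of
±1 spins:

def spin_overlap(alpha, beta):
    """Spin overlap q of Eq. (33); alpha and beta are lists of +/-1 spins."""
    return sum(a * b for a, b in zip(alpha, beta)) / float(len(alpha))

def spinglass_binder(q_samples):
    """Binder ratio of the overlap, Eq. (19) with m replaced by q."""
    M = float(len(q_samples))
    q2 = sum(q * q for q in q_samples) / M
    q4 = sum(q**4 for q in q_samples) / M
    return 0.5 * (3.0 - q4 / (q2 * q2))
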

Figure 11: Graphical representation of the order parameter function q. Two
replicas of the system α and β with the same disorder are compared spin-by-spin.
The left set corresponds to a temperature T ≪ T_c where many spins agree and so
q → 1 (in the depicted example q = 0.918). The right set corresponds to T > T_c;
the spins fluctuate due to thermal fluctuations and so q < 1 (here q = 0.408).

6 Parallel tempering Monte Carlo


As illustrated with the case of spin glasses in Sec. 5, the free energy landscape of
many-body systems with competing phases or interactions is generally characterized
by many local minima that are separated by free-energy barriers. The simulation


of these systems with standard Monte Carlo [55, 59, 64] or molecular dynamics [29]
methods is slowed down by long relaxation times due to the suppression of tunnel-
ing through these barriers. Already simple chemical reactions with latent heat, i.e.,
first-order phase transitions, present huge numerical challenges that are not present
for systems which undergo second-order phase transitions where improved updating
techniques, such as cluster algorithms [82,90], can be used. For complex systems with
competing interactions, one instead attempts to improve the local updating technique
by introducing artificial statistical ensembles such that tunneling times through bar-
riers are reduced and autocorrelation effects minimized.
One such method is parallel tempering Monte Carlo [5,30,44,61,81] that has proven
to be a versatile “workhorse” in many fields [24]. Similar to replica Monte Carlo
[81], simulated tempering [61], or extended ensemble methods [60], the algorithm
aims to overcome free-energy barriers in the free energy landscape by simulating
several copies of the target system at different temperatures. The system can thus
escape metastable states when wandering to higher temperatures and relax to lower
temperatures again in time scales several orders of magnitude smaller than for a simple
Monte Carlo simulation at one fixed temperature. The method has also been combined
with several other algorithms such as genetic algorithms and related optimization
methods, molecular dynamics, cluster algorithms and quantum Monte Carlo.

6.1 Outline of the algorithm


M noninteracting copies of the system are simulated in parallel at different temper-
atures {T_1, T_2, . . . , T_M}. After a fixed number of Monte Carlo sweeps (generally one
lattice sweep) two copies at neighboring temperatures T_i and T_{i+1} are exchanged with
a Monte Carlo-like move and accepted with a probability

T[(E_i, T_i) → (E_{i+1}, T_{i+1})] = min\{1, \exp[(E_{i+1} − E_i)(1/T_{i+1} − 1/T_i)]\} .    (34)

A given configuration will thus perform a random walk in temperature space over-
coming free energy barriers by wandering to high temperatures where equilibration
is rapid and configurations change more rapidly, and returning to low temperatures
where relaxation times can be long. Unlike for simple Monte Carlo, the system can
efficiently explore the complex energy landscape. Note that the update probability in
Eq. (34) obeys detailed balance.
At first sight it might seem wasteful to simulate a system at multiple tempera-
tures. In most cases, the number of temperatures does not exceed 100 values, yet the
speedup attained can be 5 – 6 orders of magnitude. Furthermore, one often needs the
temperature dependence of a given observable and so the method delivers data for
different temperatures in one run. A simple implementation of the parallel tempering
move called after a certain number of lattice sweeps using pseudo code is shown below.


1 algorithm parallel_tempering(*energy,*temp,*spins)
2

3 for(i = 1 ... (num_temps - 1)) do


4 delta = (1/temp[i] - 1/temp[i+1])*(energy[i] - energy[i+1])
5 if(rand(0,1) < exp(delta)) then
6 swap(spins[i],spins[i+1])
7 swap(energy[i],energy[i+1])
8 fi
9 done

The swap( ) function swaps neighboring energies and spin configurations (*spins)
if the move is accepted. As simple as the algorithm is, some fine tuning has to be
performed for it to operate optimally.
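
The following Python sketch mirrors the pseudo-code above (illustrative only, not
the notes’ code; sweep and energy are placeholders for model-specific routines, e.g.,
the Metropolis sweep sketched in Sec. 4.1):

import math, random

def parallel_tempering_step(replicas, energies, temps, sweep, energy):
    """One parallel tempering step: local sweeps at all temperatures, followed
    by swap attempts between neighboring temperatures, Eq. (34)."""
    for i, T in enumerate(temps):
        sweep(replicas[i], T)                      # any local update, e.g. Metropolis
        energies[i] = energy(replicas[i])
    for i in range(len(temps) - 1):
        delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (energies[i] - energies[i + 1])
        if random.random() < math.exp(min(0.0, delta)):
            replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
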

6.2 Selecting the temperatures


There are many recipes on how to ideally select the position of the temperatures for
parallel tempering Monte Carlo to perform optimally. Clearly, when the temperatures
are too far apart, the energy distributions at the individual temperatures will not
overlap enough and many moves will be rejected. The result is thus M independent
simple Monte Carlo simulations run in parallel with no speed increase of any sort. If
the temperatures are too close, CPU time is wasted.
A measure for the efficiency of a system copy to traverse the temperature space is
the probability (as a function of temperature) that a swap is accepted. A good rule of
thumb is to ensure that the acceptance probabilities are approximately independent
of temperature, between approximately 20 – 80%, and do not show large fluctuations
as these would signify the breaking-up of the random walk into segments of the tem-
perature space. Following the aforementioned recipe, parallel tempering Monte Carlo
already outperforms any simple sampling Monte Carlo method in a rough energy
landscape. Still, the performance can be further increased, as outlined below.

Traditional approaches As mentioned before, a reasonable performance of the


algorithm can be obtained when the acceptance probabilities are approximately inde-
pendent of temperature. If the specific heat of a system is not strongly divergent at
the phase transition—as is the case with spin glasses—a good starting point is given
by a geometric progression of temperatures. Given a temperature range [T_1, T_M], the
intermediate M − 2 temperatures can be computed via

T_k = T_1 \prod_{i=1}^{k-1} R_i ,    R_i = \left( \frac{T_M}{T_1} \right)^{1/(M-1)} .    (35)
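
A minimal sketch of Eq. (35) (added here for illustration, not from the notes):

def geometric_temperatures(T1, TM, M):
    """Return M temperatures between T1 and TM in geometric progression."""
    R = (TM / T1) ** (1.0 / (M - 1))
    return [T1 * R**k for k in range(M)]

print(geometric_temperatures(0.5, 2.0, 8))
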

Because relaxation is slower at lower temperatures, the geometric progression concentrates the temperatures close to T1. If, however, the specific heat of the system
has a strong divergence, this approach is not optimal.
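For illustration, a geometric temperature set according to Eq. (35) can be generated with a few lines of Python; the function and variable names below are arbitrary.

def geometric_temperatures(t_min, t_max, num_temps):
    # T_k = T_1 * R^(k-1) with R = (T_M/T_1)^(1/(M-1)), cf. Eq. (35).
    ratio = (t_max / t_min) ** (1.0 / (num_temps - 1))
    return [t_min * ratio**k for k in range(num_temps)]

# Example: 16 temperatures between T = 0.5 and T = 2.0.
temps = geometric_temperatures(0.5, 2.0, 16)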
One can show that the acceptance probabilities are inversely correlated to the functional behavior of the specific heat per spin cV via ∆Ti,i+1 ∼ Ti /√(cV N) [72].


Therefore, if cV diverges, the acceptance probabilities for a geometric temperature


set show a pronounced dip at the transition temperature. More complex methods
such as the approach by Kofke [52, 53], its improvement by Rathore et al. [76], as
well as the method suggested by Predescu et al. [72, 73] aim to obtain acceptance
probabilities for the parallel tempering moves that are independent of temperature
by compensating for the effects of the specific heat.
Finally, the number of temperatures needed can be estimated via the behavior of the specific heat. One can show that M ∼ √(N^{1−dν/α}) [44]. Here d is the space
dimension, N the number of spins, ν the critical exponent of the correlation length
and α the critical exponent of the specific heat.
In practice, it is straightforward to tune a temperature set produced initially via
a geometric progression by adding interstitial temperatures where the acceptance
rates are low. A quick simulation for only a few Monte Carlo sweeps yields enough
information about the acceptance probabilities to tune the temperature set by hand
without having to resort to a full equilibrium simulation.
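One possible way to automate this tuning is sketched below: after a short trial run, an intermediate temperature is inserted wherever the measured swap acceptance rate between neighbors is too low. The threshold of 20% and the use of the geometric mean for the new temperature are assumptions made for illustration only.

def refine_temperature_set(temps, acceptance, threshold=0.2):
    # temps      : sorted list of temperatures T_1 < ... < T_M
    # acceptance : acceptance[i] is the measured swap rate between
    #              temps[i] and temps[i+1] from a short trial run
    refined = [temps[0]]
    for i in range(len(temps) - 1):
        if acceptance[i] < threshold:
            # Insert an interstitial temperature (here: the geometric mean).
            refined.append((temps[i] * temps[i+1]) ** 0.5)
        refined.append(temps[i+1])
    return refined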

Improved approaches Recently, a new iterative feedback method has been in-
troduced to optimize the position of the temperatures in parallel tempering simula-
tions [51]. The idea is to treat the set of temperatures as an ensemble and thus use
ensemble optimization methods [86] to improve the round-trip times of a given system
copy in temperature space. Unlike the conventional approaches, resources are allo-
cated to the bottlenecks of the simulation, i.e., phase transitions and ground states
where relaxation is slow. As a consequence, acceptance probabilities are temperature-
dependent because more temperatures are allocated to the bottlenecks. The approach
requires one to gather enough round-trip data for the temperature sets to converge
and thus is not always practical. For details on the implementation, see Refs. [51]
and [85], as well as Ref. [33] for an improved version.
A similar approach to optimize the efficiency of parallel tempering has recently
been introduced by Bittner et al. [15]. Unlike the previously-mentioned feedback
method, this approach leaves the positions of the temperatures untouched, choosing them such that the average acceptance probability is approximately 50%. To deal with free-energy barriers in the simu-
lation, the autocorrelation times of the simulation without parallel tempering have to
be measured ahead of time. The number of MCS between parallel tempering updates
is then dependent on the autocorrelation times, i.e., close to a phase transition, more
MCS between parallel tempering moves are performed. Again, the method is thus
optimized because resources are reallocated to where they are needed most. Unfortu-
nately, this approach also requires a simulation to be done ahead of time to estimate
the autocorrelation times, but a rough estimate is sufficient.

6.3 Example: Application to spin glasses


To illustrate the advantages of parallel tempering over simple Monte Carlo, we show
data for a three-dimensional Ising spin glass with Normal-distributed disorder. In
that case, one can use an exact relationship between the energy and a fourth-order


spin correlator known as the link overlap qℓ [50]. The link overlap is given by

qℓ = (1/(dN)) Σ_{⟨i,j⟩} Si^α Sj^α Si^β Sj^β .    (36)

The sum in Eq. (36) is over neighboring spin pairs and the normalization is over all
bonds. If a domain of spins in a spin glass is flipped, the link overlap measures the
average length of the boundary of the domain.

Figure 12: Equilibration test for spin


glasses with Gaussian disorder. Data for
the link overlap (circles) have to equate to
data for the link overlap computed from the
energy (squares). This is the case after ap-
proximately 300 MCS when parallel tem-
pering is used. A direct calculation of the
link overlap using simple Monte Carlo (tri-
angles) is not equilibrated after 105 MCS.
Data for L = 4, d = 3, 5000 samples, and
T = 0.50.

The internal energy per spin u is given by


u = −(1/N) [ Σ_{⟨i,j⟩} Jij ⟨Si Sj⟩ ]av ,    (37)

where ⟨· · ·⟩ represents the Monte Carlo average for a given set of bonds, and [· · ·]av
denotes an average over the (Gaussian) disorder. One can perform an integration by
parts over Jij to relate u to the average link overlap defined in Eq. (36), i.e.,
[⟨qℓ⟩]av = 1 + Tu/d .    (38)
The simulation starts with a random spin configuration. This means that the two
sides of Eq. (38) approach equilibrium from opposite directions. Data for qℓ will be too small because we started from a random configuration, whereas the initial energy will not be as negative as in thermal equilibrium. Once both sides of Eq. (38) agree, the system is in thermal equilibrium. This is illustrated in Fig. 12 for the Edwards-Anderson Ising spin glass with 4^3 spins and T = 0.5, which is approximately 50% of Tc. The data are averaged over 5000 realizations of the disorder. While the data for qℓ generated with parallel tempering Monte Carlo agree after approximately 300 MCS, the data produced with simple Monte Carlo have not reached equilibrium even after 10^5 MCS, thus illustrating the power of parallel tempering for systems with a rough
energy landscape.
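In a simulation the test simply amounts to comparing running estimates of both sides of Eq. (38) as a function of Monte Carlo time; a minimal sketch is shown below, where the input arrays are assumed to hold the disorder-averaged measurements at the chosen temperature.

import numpy as np

def equilibration_reached(q_link, u, temperature, dimension, tol=1e-3):
    # q_link[t] : disorder-averaged link overlap after t measurement blocks
    # u[t]      : disorder-averaged energy per spin after t measurement blocks
    # Compare [<q_l>]_av with 1 + T*u/d, Eq. (38), at the latest block.
    q_from_energy = 1.0 + temperature * np.asarray(u) / dimension
    return abs(np.asarray(q_link)[-1] - q_from_energy[-1]) < tol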

7 Rare events: Probing tails of energy distributions


When computing distribution functions (histograms) P(x) of a quantity x, typically simple-sampling techniques are used [6]. The quantity x can be an order parameter, a free energy, an internal energy, a matching probability, etc. In these simple-sampling techniques Nsamp instances are computed and subsequently binned in order to obtain the desired distribution. If Nsamp samples are computed, then the maximal "resolution" of a bin is ∼ 1/Nsamp, and thus, for example, ∼ 10^7 samples have to be computed to resolve seven orders of magnitude in a histogram. If, however, the tails need to be probed to 18 orders of magnitude precision, then the computations quickly become intractable because Nsamp ∼ 10^18 samples have to be computed. One alterna-
tive is to use multicanonical methods [8, 11] that, for example, have been used before
to overcome the limitations of simple-sampling techniques in order to probe tails of
overlap distribution functions in spin glasses [9, 10].
Here we outline a method related to multicanonical approaches based on ideas pre-
sented in Ref. [35] that also overcomes the limitations of simple-sampling techniques
and works for systems with disorder, e.g., spin glasses. The idea is to perform an
importance-sampling simulation of P (x) in the disorder with a guiding function esti-
mated from simple-sampling simulations. Similar approaches have been used before
in the studies of distributions of sequence alignment scores [35], free-energy barriers
in the Sherrington-Kirkpatrick model [14], as well as fluctuations in classical mag-
nets [40] (albeit the latter without disorder).

7.1 Case study: Ground-state energy distributions


A disordered system is defined by a Hamiltonian HJ (C), where the disorder configura-
tion J is chosen from a probability distribution P(J ) and C denotes the phase-space
configuration of the system. The ground-state energy E of a given disorder configu-
ration J is defined by
E(J) = min_C HJ(C) .    (39)

Together with the disorder distribution P(J ), this defines the ground-state energy
distribution

P(E) = ∫ dJ P(J) δ[E − E(J)] .    (40)

7.2 Simple sampling


Nsamp independent disorder configurations Ji are chosen from P(J ) and the ground-
state energy is calculated for each disorder configuration. The calculation of the
ground-state energy in itself is a difficult optimization problem that we sweep under
the rug (see Refs. [36, 37] for efficient methods). From the ground-state energies of
these disorder configurations, the ground-state energy distribution can be estimated


via
P(E) = (1/Nsamp) Σ_{i=1}^{Nsamp} δ[E − E(Ji)] ,    (41)

so that the averages of functions with respect to the disorder are replaced by averages
with respect to the Nsamp random samples. The functional form of the ground-state
energy distribution and its parameters can be estimated by a maximum likelihood fit
of an empirical distribution Fθ (E) with parameters {θ} to the data [19]. Note that
due to the limited range of energies sampled by the simple-sampling algorithm it is
often difficult or even impossible to quantify how well the tails of the distribution are
described by a maximum-likelihood fit.

7.3 Importance sampling with a guiding function


Assume it is easy to find a function Fθ (E) that accurately describes the ground-state
energy distribution calculated from a quick simple-sampling simulation as described
in the previous section. In that case, an importance-sampling Monte Carlo algorithm
in the disorder [35, 58, 68] can be used to probe the tails efficiently by using Fθ (E)
as a guiding function. We start from a random disorder configuration J = J0 with
ground-state energy E(J0). From the i-th configuration Ji, we generate the (i + 1)-th
configuration Ji+1 via a Metropolis-type update:
1. Select a disorder configuration J′ by replacing a subset of J chosen at random (e.g., a single bond chosen at random) with values chosen according to P(J) and calculate its ground-state energy E(J′).
2. Set Ji+1 = J′ with probability

Paccept = min{ Fθ[E(Ji)] / Fθ[E(J′)] , 1 }    (42)

and Ji+1 = Ji otherwise.
With this algorithm a disorder configuration J is visited with a probability proportional to 1/Fθ[E(J)], such that the probability to visit a disorder configuration with ground-state energy E is proportional to P(E)/Fθ(E). If Fθ(E) ∼ P(E), then each energy is visited with the same prob-
ability resulting in a flat-histogram sampling of the ground-state energy distribution.
To prevent trapping of the algorithm in an extremal region of phase space the range
of energies that the algorithm is allowed to visit can be restricted (see Ref. [54] for
details).
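A schematic Python version of this disorder Monte Carlo is given below. The routines ground_state_energy and perturb_disorder are placeholders for a suitable optimization method and for the random replacement of a subset of the couplings; they are not specified here.

def sample_disorder(J0, guiding_f, n_steps, rng,
                    ground_state_energy, perturb_disorder):
    # Importance sampling in the disorder: a configuration J is visited
    # with a probability proportional to 1/F_theta[E(J)], so that the
    # ground-state energies are sampled proportional to P(E)/F_theta(E).
    J, E = J0, ground_state_energy(J0)
    energies = []
    for _ in range(n_steps):
        J_new = perturb_disorder(J, rng)       # e.g., redraw a single bond
        E_new = ground_state_energy(J_new)     # the hard optimization step
        if rng.random() < min(guiding_f(E) / guiding_f(E_new), 1.0):
            J, E = J_new, E_new
        energies.append(E)                     # rejected steps also count
    return energies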
Note that successive configurations visited by the algorithm are not independent.
To ensure that the data are not correlated, only one sample every τ measurements is considered in the average, where τ is the exponential autocorrelation time of the
energy. It can be computed from the energy autocorrelation function
ζauto(i) = ( ⟨Ej Ej+i⟩ − ⟨Ej⟩ ⟨Ej+i⟩ ) / ( ⟨Ej^2⟩ − ⟨Ej⟩^2 ) ,    (43)


where τ is given by the time at which ζauto(i) has decayed to 1/e [58]. Here Ei is the ground-state energy after the i-th Monte Carlo step and ⟨. . .⟩ represents an average over Monte Carlo time. To be sure that the
visited ground-state configurations are not correlated, we empirically only use every
4τ -th measurement. Once the autocorrelation effects have been quantified, the data
can be analyzed with the same methods as the simple-sampling results [see Eq. (41)].
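An estimate of ζauto(i) and of the point where it drops below 1/e can be obtained from the recorded time series of ground-state energies with a few lines of Python (a simple sketch, not an optimized estimator):

import numpy as np

def autocorrelation_time(energies):
    # Return the smallest lag i at which the normalized autocorrelation
    # function of Eq. (43) has decayed below 1/e.
    e = np.asarray(energies, dtype=float)
    mean, var = e.mean(), e.var()
    for lag in range(1, len(e) // 2):
        zeta = np.mean((e[:-lag] - mean) * (e[lag:] - mean)) / var
        if zeta < np.exp(-1.0):
            return lag
    return None  # time series too short to estimate tau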

Figure 13: Autocorrelation function ζauto(i) as defined in Eq. (43) for the Sherrington-Kirkpatrick spin glass and system sizes N = 16 (circles) and N = 128 (triangles). The value 1/e is marked by the horizontal dotted line. Time steps i are measured in Monte Carlo steps. (Figure adapted from Ref. [54]).

7.4 Example: The Sherrington-Kirkpatrick Ising spin glass


The Sherrington-Kirkpatrick [78] model is given by the Hamiltonian
HJ({Si}) = Σ_{i<j} Jij Si Sj ,    (44)

where the Si = ±1 (i = 1, . . . , N ) are Ising spins, and the interactions J = {Jij }


are identically and independently distributed random variables chosen from a Normal
distribution with zero mean and standard deviation (N − 1)^{−1/2}. The sum is over all
spins in the system, i.e., the model represents the mean-field version of the Edwards-
Anderson Ising spin glass introduced before.
For the SK model several optimization algorithms, such as extremal optimization
[16], hysteretic optimization [69], as well as other algorithms such as genetic and
Bayesian algorithms [36,37], and even parallel tempering [44] can be used to estimate
ground-state energies for small to moderate system sizes.


We first compute 10^5 ground-state energies, bin the data into 50 bins, and
perform a maximum-likelihood fit to a function that describes the shape of the ground-
state energy distribution best. In this case, this is a modified Gumbel distribution [32]:
  
Fµ,ν,m(E) ∝ exp[ m (E − µ)/ν − m exp((E − µ)/ν) ] .    (45)
The modified Gumbel distribution is parametrized by the “location” parameter µ,
the “width” parameter ν, and the “slope” parameter m. The parameters µ, ν and
m estimated from a maximum-likelihood fit represent the input parameters for the
guiding function used in the importance-sampling simulation in the disorder. To
perform a step in the Monte Carlo algorithm, we choose a site at random, replace
all bonds connected to this site (the expected change in the ground-state energy is
then of the order ∼ 1/N ), calculate the ground-state energy of the new configuration,
and accept the new configuration with the probability given in Eq. (42). A study of
the energy autocorrelation shows that for system sizes between 16 and 128 spins the
autocorrelation times are of the order of 400 to 700 Monte Carlo steps, see Fig. 13.
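A maximum-likelihood fit of Eq. (45) to the simple-sampling energies can, for example, be done by minimizing the negative log-likelihood numerically. The sketch below uses SciPy and assumes the normalization m^m/(ν Γ(m)), which follows from integrating Eq. (45); it is one possible way to obtain the parameters of the guiding function, not the specific implementation used in Ref. [54].

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_likelihood(params, energies):
    # Modified Gumbel distribution of Eq. (45), normalized here as
    # f(E) = m^m/(nu*Gamma(m)) * exp[m*x - m*exp(x)] with x = (E - mu)/nu.
    mu, nu, m = params
    if nu <= 0.0 or m <= 0.0:
        return np.inf
    x = (np.asarray(energies) - mu) / nu
    log_norm = m * np.log(m) - np.log(nu) - gammaln(m)
    return -np.sum(log_norm + m * x - m * np.exp(x))

def fit_guiding_function(gs_energies):
    # Rough starting point from the sample mean and standard deviation.
    start = [np.mean(gs_energies), np.std(gs_energies), 1.0]
    result = minimize(neg_log_likelihood, start, args=(gs_energies,),
                      method="Nelder-Mead")
    return result.x   # (mu, nu, m) defining the guiding function F_theta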
Figure 14: Ground-state energy distributions PN(E) of the Sherrington-Kirkpatrick model for different system sizes N = 16, 24, 32, 48, 64, 96, and 128, obtained from an importance-sampling simulation with a guiding function. Although only ∼ 10^3 samples per system size N were simulated, the resolution of the histograms is up to 18 orders of magnitude. (Figure adapted from Ref. [54]).

Figure 14 shows the energy distributions for the Sherrington-Kirkpatrick Ising


spin glass for different system sizes N . The data span up to 18 orders of magnitude
and were produced with approximately 1000 samples per system size, therefore illus-
trating the immense power of importance sampling simulations when probing tails of
distributions. This would be impossible to obtain with simple-sampling techniques.


In comparison to similar methods [35, 40] the presented approach has several ad-
vantages due to its simplicity: Instead of iterating towards a good guiding function,
which may be quite expensive computationally, we use a maximum likelihood fit as a
guiding function. Therefore, the proposed algorithm is straightforward to implement
and considerably more efficient than traditional approaches, provided a good guiding
function, i.e., a good maximum-likelihood fit to the simple-sampling results, can be
found. Note also that the method can be generalized to any distribution function,
such as an order-parameter distribution.

8 Other Monte Carlo methods


In addition to the Monte Carlo methods outlined, there is a vast selection of other
approaches based on random Monte Carlo sampling to study physical systems. In this
section some selected finite-temperature Monte Carlo methods are briefly outlined.
The reader is referred to the literature for details. Note that most algorithms can be
combined for better performance. For example, one could combine parallel tempering
with a cluster algorithm to speed up simulations both around and far below the
transition.

Cluster algorithms In addition to the Wolff cluster algorithm [90] outlined in


Sec. 4.4, the Swendsen-Wang [82] algorithm also greatly helps to overcome critical
slowing down of simulations close to phase transitions. There are also specially-crafted
cluster algorithms for spin glasses [41].

Simulated annealing Simulated annealing is probably the simplest heuristic


ground-state search approach. A Monte Carlo simulation is performed until the sys-
tem is in thermal equilibrium. Subsequently, the temperature is quenched according
to a pre-defined protocol until T close to zero is reached. After each quench, the
system is equilibrated with simple Monte Carlo. The system should converge to the
ground state, although there is no guarantee that the system will not be stuck in a
metastable state.
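A bare-bones version in Python might look as follows; the single-spin-flip routine metropolis_sweep and the geometric cooling factor are placeholders chosen for illustration.

def simulated_annealing(spins, t_start, t_end, metropolis_sweep, rng,
                        cooling=0.95, sweeps_per_temp=100):
    # Anneal from t_start down to t_end, equilibrating with simple
    # Metropolis sweeps after each quench of the temperature.
    temperature = t_start
    while temperature > t_end:
        for _ in range(sweeps_per_temp):
            metropolis_sweep(spins, temperature, rng)
        temperature *= cooling      # quench according to the chosen protocol
    return spins                    # ideally close to a ground state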

Flat-histogram methods Flat-histogram algorithms include the multicanonical


method [8, 11], broad histograms [22] and transition matrix Monte Carlo [89] when
combined with entropic sampling, as well as the adaptive algorithm of Wang and
Landau [87, 88] and its improvement by Trebst et al. [86]. The advantage of these
algorithms is that they allow for an estimate of the free energy; this is usually not
available from standard Monte Carlo methods.

Quantum Monte Carlo In addition to the aforementioned Monte Carlo methods


that treat classical problems, quantum extensions such as variational Monte Carlo,
path integral Monte Carlo, etc. have been developed for quantum systems [80].


Acknowledgments
I would like to thank Juan Carlos Andresen and Ruben Andrist for critically reading
the manuscript. Furthermore, I thank M. Hasenbusch for spotting an error in Sec. 2.1.

References
[1] The bulk of this chapter is based on material collected from the different books
cited.
[2] The alcohol is to improve the randomness of the sampling. If the experimentalist
is not of legal drinking age, it is recommended to close the eyes and rotate 42
times on the spot at high speed before a pebble is thrown.
[3] The pseudo code used does not follow any rules and is by no means consistent.
But it should bring the general ideas across.
[4] Although the algorithm is known as the Metropolis algorithm, N. Metropolis’
contribution to the project was minimal. He merely was the team leader at the
lab. The bulk of the work was carried out by two couples, the Rosenbluths and
the Tellers.
[5] The method is also known under the name of “Exchange Monte Carlo” (EMC)
and “Multiple Markov Chain Monte Carlo” (MCMC).
[6] This section is based on work published in Ref. [54].
[7] H. G. Ballesteros, A. Cruz, L. A. Fernandez, V. Martin-Mayor, J. Pech, J. J. Ruiz-
Lorenzo, A. Tarancon, P. Tellez, C. L. Ullod, and C. Ungil. Critical behavior of
the three-dimensional Ising spin glass. Phys. Rev. B, 62:14237, 2000.
[8] B. Berg and T. Neuhaus. Multicanonical ensemble: a new approach to simulate
first-order phase transitions. Phys. Rev. Lett., 68:9, 1992.
[9] B. A. Berg, A. Billoire, and W. Janke. Functional form of the Parisi overlap
distribution for the three-dimensional Edwards-Anderson Ising spin glass. Phys.
Rev. E, 65:045102, 2002.
[10] B. A. Berg, A. Billoire, and W. Janke. Overlap distribution of the three-
dimensional Ising model. Phys. Rev. E, 66:046122, 2002.
[11] B. A. Berg and T. Neuhaus. Multicanonical algorithms for first order phase
transitions. Phys. Lett. B, 267:249, 1991.
[12] K. Binder. Critical properties from Monte Carlo coarse graining and renormal-
ization. Phys. Rev. Lett., 47:693, 1981.
[13] K. Binder and A. P. Young. Spin glasses: Experimental facts, theoretical concepts
and open questions. Rev. Mod. Phys., 58:801, 1986.


[14] E. Bittner and W. Janke. Free-Energy Barriers in the Sherrington-Kirkpatrick


Model. Europhys. Lett., 74:195, 2006.
[15] E. Bittner, A. Nußbaumer, and W. Janke. Make life simple: Unleash the full
power of the parallel tempering algorithm. Phys. Rev. Lett., 101:130603, 2008.
[16] S. Boettcher and A. G. Percus. Optimization with Extremal Dynamics. Phys.
Rev. Lett., 86:5211, 2001.
[17] Y. Cannella and J. A. Mydosh. Magnetic ordering in gold-iron alloys (suscepti-
bility and thermopower studies). Phys. Rev. B, 6:4220, 1972.
[18] J. Cardy. Scaling and Renormalization in Statistical Physics. Cambridge Uni-
versity Press, Cambridge, 1996.
[19] G. Cowan. Statistical Data Analysis. Oxford Science Publications, New York,
1998.
[20] J. R. L. de Almeida and D. J. Thouless. Stability of the Sherrington-Kirkpatrick
solution of a spin glass model. J. Phys. A, 11:983, 1978.
[21] C. de Dominicis and I. Giardina. Random Fields and Spin Glasses. Cambridge
University Press, Cambridge, 2006.
[22] P. M. C. de Oliveira, T. J. P. Penna, and H. J. Herrmann. Broad Histogram
Method. Braz. J. Phys., 26:677, 1996.
[23] H. T. Diep. Frustrated Spin Systems. World Scientific, Singapore, 2005.
[24] D. J. Earl and M. W. Deem. Parallel Tempering: Theory, Applications, and New
Perspectives. Phys. Chem. Chem. Phys., 7:3910, 2005.
[25] S. F. Edwards and P. W. Anderson. Theory of spin glasses. J. Phys. F: Met.
Phys., 5:965, 1975.
[26] D. S. Fisher and D. A. Huse. Ordered phase of short-range Ising spin-glasses.
Phys. Rev. Lett., 56:1601, 1986.
[27] D. S. Fisher and D. A. Huse. Absence of many states in realistic spin glasses. J.
Phys. A, 20:L1005, 1987.
[28] K. H. Fisher and J. A. Hertz. Spin Glasses. Cambridge University Press, Cam-
bridge, 1991.
[29] D. Frenkel and B. Smit. Understanding Molecular Simulation. Academic Press,
New York, 1996.
[30] C. Geyer. Monte Carlo Maximum Likelihood for Dependent Data. In E. M.
Keramidas, editor, 23rd Symposium on the Interface, page 156, Fairfax Station,
1991. Interface Foundation.


[31] N. Goldenfeld. Lectures On Phase Transitions And The Renormalization Group.


Westview Press, Jackson, 1992.

[32] E. J. Gumbel. Multivariate Extremal Distributions. Bull. Inst. Internat. de


Statistique, 37:471, 1960.

[33] F. Hamze, N. Dickson, and K. Karimi. Robust parameter selection for parallel
tempering. (arXiv:cond-mat/1004.2840), 2010.

[34] A. K. Hartmann. Scaling of stiffness energy for three-dimensional ±J Ising spin


glasses. Phys. Rev. E, 59:84, 1999.

[35] A. K. Hartmann. Sampling rare events: Statistics of local sequence alignments.


Phys. Rev. E, 65:056102, 2002.

[36] A. K. Hartmann and H. Rieger. Optimization Algorithms in Physics. Wiley-


VCH, Berlin, 2001.

[37] A. K. Hartmann and H. Rieger. New Optimization Algorithms in Physics. Wiley-


VCH, Berlin, 2004.

[38] A. K. Hartmann and A. P. Young. Lower critical dimension of Ising spin glasses.
Phys. Rev. B, 64:180404(R), 2001.

[39] G. Hed, A. P. Young, and E. Domany. Lack of Ultrametricity in the Low-


Temperature phase of 3D Ising Spin Glasses. Phys. Rev. Lett., 92:157201, 2004.

[40] R. Hilfer, B. Biswal, H. G. Mattutis, and W. Janke. Multicanonical Monte


Carlo study and analysis of tails for the order-parameter distribution of the two-
dimensional Ising model. Phys. Rev. E, 68:046123, 2003.

[41] J.J. Houdayer. A cluster Monte Carlo algorithm for 2-dimensional spin glasses.
Eur. Phys. J. B., 22:479, 2001.

[42] J.J. Houdayer, F. Krzakala, and O. C. Martin. Large-scale low-energy excitations


in 3-d spin glasses. Eur. Phys. J. B., 18:467, 2000.

[43] K. Huang. Statistical Mechanics. Wiley, New York, 1987.

[44] K. Hukushima and K. Nemoto. Exchange Monte Carlo method and application
to spin glass simulations. J. Phys. Soc. Jpn., 65:1604, 1996.

[45] E. Ising. Beitrag zur Theorie des Ferromagnetismus. Z. Phys., 31:253, 1925.

[46] S. Jimenez, V. Martin-Mayor, and S. Perez-Gaviro. Rejuvenation and memory


in model spin glasses in 3 and 4 dimensions. Phys. Rev. B, 72:054417, 2005.

[47] K. Jonason, E. Vincent, J. Hammann, J. P. Bouchaud, and P. Nordblad. Memory


and Chaos Effects in Spin Glasses. Phys. Rev. Lett., 81:3243, 1998.


[48] H. G. Katzgraber and A. K. Hartmann. Ultrametricity and Clustering of States


in Spin Glasses: A One-Dimensional View. Phys. Rev. Lett., 102:037207, 2009.

[49] H. G. Katzgraber, M. Körner, and A. P. Young. Universality in three-dimensional


Ising spin glasses: A Monte Carlo study. Phys. Rev. B, 73:224432, 2006.

[50] H. G. Katzgraber, M. Palassini, and A. P. Young. Monte Carlo simulations of


spin glasses at low temperatures. Phys. Rev. B, 63:184422, 2001.

[51] H. G. Katzgraber, S. Trebst, D. A. Huse, and M. Troyer. Feedback-optimized


parallel tempering Monte Carlo. J. Stat. Mech., P03018, 2006.

[52] D. A. Kofke. Comment on ”The incomplete beta function law for parallel temper-
ing sampling of classical canonical systems” [J. Chem. Phys. 120, 4119 (2004)].
J. Chem. Phys., 121:1167, 2004.

[53] D. A. Kofke. On the acceptance probability of replica-exchange Monte Carlo


trials. J. Chem. Phys., 117:6911, 2004.

[54] M. Körner, H. G. Katzgraber, and A. K. Hartmann. Probing tails of energy


distributions using importance-sampling in the disorder with a guiding function.
J. Stat. Mech., P04005, 2006.

[55] W. Krauth. Introduction To Monte Carlo Algorithms. In J. Kertesz and I. Kon-


dor, editors, Advances in Computer Simulation. Springer Verlag, Heidelberg,
1998.

[56] W. Krauth. Algorithms and Computations. Oxford University Press, New York,
2006.

[57] F. Krzakala and O. C. Martin. Spin and link overlaps in 3-dimensional spin
glasses. Phys. Rev. Lett., 85:3013, 2000.

[58] D. P. Landau and K. Binder. A Guide to Monte Carlo Simulations in Statistical


Physics. Cambridge University Press, 2000.

[59] R. H. Landau and M. J. Páez. Computational Physics. Wiley, New York, 1997.

[60] A. P. Lyubartsev, A. A. Martsinovski, S. V. Shevkunov, and P. N. Vorontsov-


Velyaminov. New approach to Monte Carlo calculation of the free energy:
Method of expanded ensembles. J. Chem. Phys., 96:1776, 1992.

[61] E. Marinari and G. Parisi. Simulated tempering: A new Monte Carlo scheme.
Europhys. Lett., 19:451, 1992.

[62] E. Marinari and G. Parisi. On the effects of changing the boundary conditions
on the ground state of Ising spin glasses. Phys. Rev. B, 62:11677, 2000.


[63] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller.


Equation of State Calculations by Fast Computing Machines. J. Chem. Phys.,
21:1087, 1953.

[64] N. Metropolis and S. Ulam. The Monte Carlo Method. J. Am. Stat. Assoc.,
44:335, 1949.

[65] M. Mézard, G. Parisi, and M. A. Virasoro. Spin Glass Theory and Beyond.
World Scientific, Singapore, 1987.

[66] C. Newman and D. L. Stein. Non-mean-field behavior of realistic spin glasses.


Phys. Rev. Lett., 76:515, 1996.

[67] C. M. Newman and D. L. Stein. Short-range spin glasses: Results and specula-
tions. In Lecture Notes in Mathematics 1900, page 159. Springer-Verlag, Berlin,
2007. (cond-mat/0503345).

[68] M. E. J. Newman and G. T. Barkema. Monte Carlo Methods in Statistical


Physics. Oxford University Press Inc., New York, USA, 1999.

[69] K. F. Pal. Hysteretic optimization for the Sherrington-Kirkpatrick spin glass.


Physica A, 367:261, 2006.

[70] M. Palassini and S. Caracciolo. Universal Finite-Size Scaling Functions in the


3D Ising Spin Glass. Phys. Rev. Lett., 82:5128, 1999.

[71] M. Palassini and A. P. Young. Nature of the spin glass state. Phys. Rev. Lett.,
85:3017, 2000.

[72] C. Predescu, M. Predescu, and C.V. Ciobanu. The incomplete beta function law
for parallel tempering sampling of classical canonical systems. J. Chem. Phys.,
120:4119, 2004.

[73] C. Predescu, M. Predescu, and C.V. Ciobanu. On the Efficiency of Exchange in


Parallel Tempering Monte Carlo Simulations. J. Phys. Chem. B, 109:4189, 2005.

[74] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical


Recipes in C. Cambridge University Press, Cambridge, 1995.

[75] V. Privman, editor. Finite Size Scaling and Numerical Simulation of Statistical
Systems. World Scientific, Singapore, 1990.

[76] N. Rathore, M. Chopra, and J. J. de Pablo. Optimal allocation of replicas in


parallel tempering simulations. J. Chem. Phys., 122:024111, 2005.

[77] L. Reichl. A Modern Course in Statistical Physics. Wiley, New York, 1998.

[78] D. Sherrington and S. Kirkpatrick. Solvable model of a spin glass. Phys. Rev.
Lett., 35:1792, 1975.


[79] H. E. Stanley. An Introduction to Phase Transitions and Critical Phenomena.


Oxford University Press, Oxford, 1971.
[80] M. Suzuki. Quantum Monte Carlo Methods in Condensed Matter Physics. World
Scientific, Singapore, 1993.
[81] R. H. Swendsen and J. Wang. Replica Monte Carlo simulation of spin-glasses.
Phys. Rev. Lett., 57:2607, 1986.

[82] R. H. Swendsen and J. Wang. Nonuniversal critical dynamics in Monte Carlo


simulations. Phys. Rev. Lett., 58:86, 1987.
[83] M. Talagrand. Spin glasses: a Challenge for Mathematicians. Springer, Berlin,
2003.

[84] M. Talagrand. The Parisi formula. Ann. of Math., 163:221, 2006.


[85] S. Trebst, D. A. Huse, E. Gull, H. G. Katzgraber, U. H. E. Hansmann, and
M. Troyer. Computer Simulation Studies in Condensed Matter Physics XIX.
In D. P. Landau, S. P. Lewis, and H.-B. Schüttler, editors, Ensemble optimiza-
tion techniques for the simulation of slowly equilibrating systems, volume 115.
Springer, Berlin, 2007.
[86] S. Trebst, D. A. Huse, and M. Troyer. Optimizing the ensemble for equilibration
in broad-histogram Monte Carlo simulations. Phys. Rev. E, 70:046701, 2004.
[87] F. Wang and D. P. Landau. Determining the density of states for classical
statistical models: A random walk algorithm to produce a flat histogram. Phys.
Rev. E, 64:056101, 2001.
[88] F. Wang and D. P. Landau. An efficient, multiple-range random walk algorithm
to calculate the density of states. Phys. Rev. Lett., 86:2050, 2001.
[89] J.-S. Wang and R. H. Swendsen. Transition Matrix Monte Carlo Method. J.
Stat. Phys., 106:245, 2002.
[90] U. Wolff. Collective Monte Carlo updating for spin systems. Phys. Rev. Lett.,
62:361, 1989.
[91] J. M. Yeomans. Statistical Mechanics of Phase Transitions. Oxford University
Press, Oxford, 1992.
[92] A. P. Young, editor. Spin Glasses and Random Fields. World Scientific, Singa-
pore, 1998.
[93] A. P. Young and H. G. Katzgraber. Absence of an Almeida-Thouless line in
Three-Dimensional Spin Glasses. Phys. Rev. Lett., 93:207203, 2004.
