
NPTEL – Physics – Advanced Statistical Mechanics

Chapter 8

Monte Carlo method

Lecture-I
1 Introduction
Our primary goal is to calculate an observable physical quantity $X$ of a thermodynamic system in thermal equilibrium with a heat bath at temperature $T$. A macroscopic thermodynamic system consists of a large number of atoms or molecules, of the order of the Avogadro number $N_A \approx 6.022\times10^{23}$ per mole. Moreover, in most cases the particles interact with one another in a complex way. The average of a physical quantity of such a system is therefore determined not only by the large number of particles but also by the complex interactions among them. As per statistical mechanics, the average of an observable quantity, $\langle X\rangle$, can be calculated by evaluating the canonical partition function $Z$ of the system. However, the difficulty in evaluating the exact partition function $Z$ is twofold. First, there is a large number of particles in the system, with many degrees of freedom; the calculation of the partition function $Z$ usually leads to the evaluation of an infinite series or an integral over a high-dimensional ($6N$-dimensional) space. Second, the complex interactions among the particles give rise to unexpected features in the macroscopic behaviour of the system.

The Monte Carlo (MC) simulation method can be employed to evaluate such thermal averages. In an MC simulation, a reasonable number of states are generated randomly (instead of the infinitely large number of all possible states) with their correct Boltzmann weight, and the average of a physical property is taken over those states only. A judicious selection of the most important states, those which contribute most to the partition sum, provides extremely reliable results in many situations. The simplicity of the underlying principle of the technique enables its application to a wide range of problems: random walks, transport phenomena, optimization, traffic flow, binary mixtures, percolation, disordered systems, magnetic materials, dielectric materials, etc. We will be addressing problems related to phase transitions and critical phenomena.

2 Monte Carlo Technique for Physical Systems


We will present here Monte Carlo (MC) simulation as a numerical technique to calculate macroscopic observable quantities of a thermodynamic system at equilibrium. A thermodynamic system is composed of a large number of interacting particles. The microstates of these particles are represented either by their canonically conjugate positions and momenta $(q, p)$, which obey Hamilton's canonical equations, or by the wave function obtained as a stationary-state solution of the Schrödinger equation, according to whether the particle dynamics is classical or quantum mechanical. We will consider the classical problem here. For a classical system described by a Hamiltonian function $H$, the microstates are represented by points $(q_i, p_i)$, $i = 1, \ldots, N$, in a $6N$-dimensional phase space. Systems of interacting particles with discrete energy states can also be treated classically if the particles are localized, i.e., distinguishable.

Let us consider a macroscopic system in thermodynamic equilibrium with a heat bath at temperature $T$. (Hence we will develop the MC technique here in the framework of the canonical ensemble.) As per statistical mechanics (discussed in Chapter 1), if the system is described by a Hamiltonian function $H$, the expectation value of a macroscopic quantity $A$ is given by

$$\langle A\rangle = \frac{1}{Z}\int A(p,q)\,\exp[-H(p,q)/k_BT]\,d^{3N}q\,d^{3N}p,$$

where the canonical partition function $Z$ is given by

$$Z = \frac{1}{h^{3N}}\int \exp[-H(p,q)/k_BT]\,d^{3N}q\,d^{3N}p,$$

whereas if the system is described by discrete energy states, with energy $E_s$ corresponding to the state $s$, the expectation value of a macroscopic quantity $A$ is given by

$$\langle A\rangle = \frac{1}{Z}\sum_s A_s\exp(-E_s/k_BT),$$

where $Z = \sum_s \exp(-E_s/k_BT)$ is the canonical partition function. Note that the partition function $Z$ involves either a summation over all possible states (an infinite series) or an integration over a $6N$-dimensional space. The partition function gives the exact description of a system. However, in most cases it is not possible to evaluate the partition function analytically, or even exactly by numerical means. The difficulty lies not only in evaluating the integral or summation over $6N$ degrees of freedom but also in handling the complex interactions appearing in the exponential. In general, it is not possible to evaluate the summation over such a large number of states, or the integral in such a high-dimensional space. On the other hand, at low temperature the system spends almost all its time sitting in the ground state or one of the lowest excited states, so only an extremely restricted part of the phase space should contribute to the average. Accordingly, the sum should run over a few low-energy states, because at low temperature there is not enough thermal energy to lift the system into the higher excited states. Thus, picking points randomly over the whole phase space is no good here.

However, it is always possible to take the average over a finite number of states, a subset of all possible states, if one knows the correct Boltzmann weight of the states and the probability with which they are picked. This is realized in the MC simulation technique. In MC simulations, usually only $10^6$ to $10^8$ states are used in making the average. Say $N$ states $(s_1, s_2, \ldots, s_N)$, a small subset of all possible states, are chosen at random with a certain probability distribution $p_s$. The best estimate of the physical quantity $A$ will then be

$$A_N = \frac{\sum_{s=1}^{N} A_s\, p_s^{-1}\exp(-E_s/k_BT)}{\sum_{s=1}^{N} p_s^{-1}\exp(-E_s/k_BT)}.$$

Note that the Boltzmann probability of each state is normalized by the probability of choosing that state. As the number of samples $N$ increases, the estimate becomes more and more accurate. Eventually, as $N\to\infty$, $A_N \to \langle A\rangle$.

Now the problem is how to specify $p_s$ so that the chosen $N$ states lead to the right $\langle A\rangle$. Depending on the nature of the physical problem, the choice of $p_s$ in MC averaging may differ. Physical problems can be categorized into two groups: non-thermal, non-interacting systems and thermal, interacting systems. For non-thermal, non-interacting systems `simple sampling MC' is found suitable, and for thermal, interacting systems `importance sampling MC'.

The specification of $p_s$ in these sampling techniques is as follows. The simple choice $p_s = p$ for all states leads to the estimator

$$A_N = \sum_{s=1}^{N} A_s\exp(-E_s/k_BT)\Big/\sum_{s=1}^{N}\exp(-E_s/k_BT).$$

As $N\to\infty$ the estimator tends to $A_N\to\langle A\rangle$, the ensemble-averaged value of the property $A$. This is known as the simple sampling MC method.
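For one of the non-thermal problems listed in the Introduction (random walks), simple sampling is easy to state in code: every configuration is generated with equal probability and the observable is averaged directly over the samples. The following minimal C sketch, in which the sample counts and the use of rand() are our own illustrative choices rather than part of the lecture, estimates the mean-square end-to-end distance of a 2d lattice random walk.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int nsamples = 100000;   /* number of sampled walks */
    const int nsteps   = 100;      /* steps per walk */
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    double r2sum = 0.0;
    srand(1);
    for (int s = 0; s < nsamples; s++) {
        int x = 0, y = 0;
        for (int t = 0; t < nsteps; t++) {
            int k = rand() % 4;    /* each step chosen uniformly: simple sampling */
            x += dx[k];
            y += dy[k];
        }
        r2sum += (double)(x*x + y*y);
    }
    /* For an unbiased walk <R^2> is expected to approach nsteps. */
    printf("<R^2> = %g (expected ~ %d)\n", r2sum/nsamples, nsteps);
    return 0;
}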

As already mentioned, at low temperature the system remains confined to an extremely restricted part of the phase space, and importance sampling is required. The MC method should automatically lead us to the important region of phase space. One can sample points preferentially from the region which is populated by the appropriate states. This is realized in the following manner: instead of picking $N$ states randomly with equal probability, pick them with the probability $p_s = \exp(-E_s/k_BT)/Z$, the correct Boltzmann weight. The expectation value of a quantity will then be given by
$$A_N = \frac{\sum_{s=1}^{N} A_s\, p_s^{-1}\exp(-E_s/k_BT)}{\sum_{s=1}^{N} p_s^{-1}\exp(-E_s/k_BT)}
= \frac{\sum_{s=1}^{N} A_s\exp(-E_s/k_BT)\,Z/\exp(-E_s/k_BT)}{\sum_{s=1}^{N}\exp(-E_s/k_BT)\,Z/\exp(-E_s/k_BT)},$$

which simply reduces to

$$A_N = \frac{1}{N}\sum_{s=1}^{N} A_s,$$

where each state $s$ is picked with the correct Boltzmann probability $p_s = \exp(-E_s/k_BT)/Z$ (not with equal probability as in simple sampling). This averaging is called `importance sampling'. However, it is not an easy task to pick states such that they appear with their correct Boltzmann weight. It is possible to realize importance sampling with the help of a Markov chain.

Lecture-II
3 Markov chain
A Markov chain is a sequence of states, each of which depends only on the preceding one:

$$n \longrightarrow m \longrightarrow o \longrightarrow \cdots$$

The transition probability $W_{n\to m}$ from state $n$ to state $m$ should not vary over time and should depend only on the properties of the current states $(n, m)$. Moreover, $W_{n\to m}\ge 0$ and $\sum_m W_{n\to m} = 1$. The Markov chain has the property that the average over successive configurations converges as

$$\langle A\rangle_n = \langle A\rangle + O(1/\sqrt{n})$$

as $n\to\infty$, if the states are weighted by the Boltzmann factor. However, for this to happen, two conditions on the Markov chain have to be satisfied. They are: (i) ergodicity and (ii) detailed balance.

3.1 Ergodicity
It should be possible in Markov process to reach any state of the system starting from any
other state in long run. This is necessary to achieve a state with its correct Boltzmann
weight. Since each state m appears with some non-zero probability pm in the
Boltzmann distribution, and if that state is inaccessible from another state n , then the
probability to find the state m starting from state n is zero. This is in contradiction with
Boltzmann distribution which demands the state should appear with a probability pm .
This means that there must be at least one path of non-zero transition probabilities between
any two states. One should take care in implementing MC algorithms that it should not
violate ergodicity.

3.2 Detailed balance


In the steady state, the probability flux out of a state $n$ is equal to the probability flux into the state $n$, and the steady-state condition is given by the ``global balance'' equation

$$\sum_m W_{n\to m}\,p_n(t) = \sum_m W_{m\to n}\,p_m(t).$$

Since $\sum_m W_{n\to m} = 1$, one has

$$p_n(t) = \sum_m W_{m\to n}\,p_m(t).$$

This can also be written in matrix form as

$$\mathbf{p}(t) = \mathbf{p}(t)\,\mathbf{W},$$
where $t$ represents the steps of the Markov process and $\mathbf{W}$ is the one-step transition matrix. It is expected that for any $\mathbf{W}$ the probability distribution $p_n$ should represent the equilibrium dynamics of the Markov process. However, the above condition is not sufficient to guarantee that the probability distribution will tend to $p_n$ after a long evolution of the Markov chain starting from an arbitrary state. It is also possible to reach a dynamical equilibrium in which the probability distribution circles around a number of different values; such a rotation is called a limit cycle. In such situations there is no guarantee that the actual states generated will have anything like the desired probability distribution. This situation can be ruled out by applying an additional condition, the condition of ``detailed balance'', to the transition probabilities:

$$W_{n\to m}\,p_n(t) = W_{m\to n}\,p_m(t),$$

with the requirement that all entries of $\mathbf{W}$ be positive. The condition of detailed balance then tells us that, on average, the rate at which the system goes from state $n$ to state $m$ is equal to the rate at which it goes from $m$ to $n$. In a limit cycle, in which the occupation probabilities of some of the states change in a cyclic fashion, there must be states for which this condition is violated on some step of the Markov chain in order to satisfy a pre-determined occupation probability of a state. The condition of detailed balance therefore forbids dynamics with limit cycles.

Let the equilibrium distribution be the Boltzmann distribution; the probability of the $n$th state is then $p_n(t) = \exp(-E_n/k_BT)/Z$, where $Z = \sum_n \exp(-E_n/k_BT)$ is the canonical partition function. The detailed balance condition then tells us that the transition probabilities should satisfy

$$\frac{W_{n\to m}}{W_{m\to n}} = \frac{p_m}{p_n} = e^{-(E_m - E_n)/k_BT},$$

along with $\sum_m W_{n\to m} = 1$. If these conditions, as well as the condition of ergodicity, are satisfied, then the equilibrium distribution of states in the Markov process will be the Boltzmann distribution.
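The convergence to the Boltzmann distribution is easy to see numerically in a toy example. The following minimal C sketch, entirely our own illustration, iterates $\mathbf{p}(t+1) = \mathbf{p}(t)\,\mathbf{W}$ for a two-state system with energies $E_0 = 0$ and $E_1 = 1$ (taking $k_BT = 1$), using heat-bath-like rates chosen only so that they satisfy the ratio condition above.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double b = exp(-1.0);                  /* Boltzmann factor exp(-(E1-E0)/kT) */
    double W[2][2] = {
        {1.0 - b/(1.0+b), b/(1.0+b)},      /* rates out of state 0 */
        {1.0/(1.0+b),     b/(1.0+b)}       /* rates out of state 1; W01/W10 = b */
    };
    double p[2] = {1.0, 0.0};              /* start surely in state 0 */
    for (int t = 0; t < 50; t++) {         /* iterate p(t+1) = p(t) W */
        double q[2] = {0.0, 0.0};
        for (int n = 0; n < 2; n++)
            for (int m = 0; m < 2; m++)
                q[m] += p[n] * W[n][m];
        p[0] = q[0]; p[1] = q[1];
    }
    /* Exact Boltzmann weights: p0 = 1/(1+b), p1 = b/(1+b) */
    printf("p = (%.6f, %.6f), Boltzmann = (%.6f, %.6f)\n",
           p[0], p[1], 1.0/(1.0+b), b/(1.0+b));
    return 0;
}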

The probability $p_n$ is usually not known exactly, because in most cases the canonical partition function $Z = \sum_n \exp(-E_n/k_BT)$ is extremely difficult to calculate exactly. To avoid this difficulty, one uses the Markov process to generate each state directly from the preceding one, following the Metropolis algorithm.


4 Metropolis algorithm
If the $m$th state is generated from the $n$th state, only the ratio of the individual probabilities matters: $p_m/p_n = \exp\{-(E_m - E_n)/k_BT\}$. As a result, only the energy difference between the two states enters, i.e., $\Delta E = E_m - E_n$. Any transition rate which satisfies detailed balance is acceptable. The following choice was considered by Metropolis:

$$W_{n\to m} = \begin{cases} e^{-\Delta E/k_BT} & \text{if } \Delta E > 0,\\[2pt] 1 & \text{if } \Delta E \le 0. \end{cases}$$
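That this choice satisfies the detailed balance condition of Section 3.2 can be checked in one line (shown here for $\Delta E = E_m - E_n > 0$; the opposite case follows by exchanging $n$ and $m$):

$$\frac{W_{n\to m}}{W_{m\to n}} = \frac{e^{-(E_m-E_n)/k_BT}}{1} = e^{-(E_m-E_n)/k_BT} = \frac{p_m}{p_n}.$$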

Lecture-III
5 MC simulation of Ising Model
In this section, importance sampling Monte Carlo techniques will be used for the study of
phase transitions at finite temperature. We shall discuss details, algorithms, and potential
sources of difficulty using the Ising model as a paradigm. However, virtually all of the
discussion of the application to the Ising model is relevant to other models as well. The
Ising model is one of the simplest lattice models which one can imagine, and its behavior
has been well studied. The simple Ising model consists of spins which are confined to the
sites of a lattice and which may take only the values $+1$ or $-1$. If there are $N$ spins on the lattice, then the system can be in $2^N$ states. The energy of a state is given by the Ising Hamiltonian

$$H = -J\sum_{\langle i,j\rangle}\sigma_i\sigma_j - h\sum_i \sigma_i,$$

where $J$ is the interaction energy between nearest-neighbour spins $\langle i,j\rangle$, $h$ is the external magnetic field in units of energy, and $\sigma_i = \pm 1$. The Ising model has been solved exactly in one dimension, and as a result it is known that there is no phase transition there. In two dimensions the model has been solved exactly in the zero-field case, which showed that there is a second-order phase transition. The critical temperature is obtained from the condition

$$2\tanh^2\!\left(\frac{2J}{k_BT_c}\right) = 1,$$

as $k_BT_c/J = 2.269185\ldots$. The phase transition is characterized by divergences in the specific heat, susceptibility, and correlation length. The critical exponents obtained are $\alpha = 0$ (logarithmic), $\beta = 1/8$, $\gamma = 7/4$, $\nu = 1$. The Ising model in higher dimensions can be treated by the mean-field approach. The mean-field exponents are $\alpha = 0$ (discontinuity), $\beta = 1/2$, $\gamma = 1$, $\nu = 1/2$. In order to satisfy the hyperscaling relation $2 - \alpha = d\nu$, one must have $d = 4$, the upper critical dimension.
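For completeness, the numerical value of $T_c$ quoted above follows from the tanh condition in a few lines (a worked step added here, not in the original notes):

$$\tanh\!\left(\frac{2J}{k_BT_c}\right) = \frac{1}{\sqrt{2}}
\;\Rightarrow\;
e^{4J/k_BT_c} = \frac{1 + 1/\sqrt{2}}{1 - 1/\sqrt{2}} = (1+\sqrt{2})^2
\;\Rightarrow\;
\frac{k_BT_c}{J} = \frac{2}{\ln(1+\sqrt{2})} \approx 2.269185.$$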

5.1 Metropolis importance sampling scheme

1. Choose an initial state by assigning $+1$ or $-1$ (up or down spin) arbitrarily to the lattice sites.

2. Choose a site $j$.

3. Calculate the energy change $\Delta E = E_f - E_i$ which results if the spin at site $j$ is overturned.

4. Generate a uniformly distributed random number $r$ such that $0 < r < 1$.

5. If $r < \exp(-\Delta E/k_BT)$, flip the spin.

6. Go to the next site and go to (3). A compact C sketch of one such sweep over the lattice is given below.
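The sketch below is our own condensation of steps (2)-(6) into a single function, assuming a one-dimensional array s[] of $\pm1$ spins, periodic boundaries, $J = 1$, and a user-supplied uniform $(0,1)$ generator drand; the full program used for the figures appears in the Problems section.

#include <math.h>

/* One Metropolis sweep: LxL single-spin-flip attempts at temperature
 * T (in units of J/k_B) on an L x L lattice with periodic boundaries. */
void metropolis_sweep(int *s, int L, double T, double (*drand)(void))
{
    int N = L * L;
    for (int n = 0; n < N; n++) {
        int i = (int)(drand() * N);              /* step 2: pick a random site */
        int x = i % L, y = i / L;
        int sum = s[y * L + (x + 1) % L]         /* right neighbour */
                + s[y * L + (x + L - 1) % L]     /* left neighbour  */
                + s[((y + 1) % L) * L + x]       /* lower neighbour */
                + s[((y + L - 1) % L) * L + x];  /* upper neighbour */
        double dE = 2.0 * s[i] * sum;            /* step 3: energy change, J = 1 */
        if (dE <= 0.0 || drand() < exp(-dE / T)) /* steps 4-5: Metropolis test */
            s[i] = -s[i];                        /* flip the spin */
    }
}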

Fig. 8.1: Spin configurations on a square lattice of size $100\times100$ at temperature $k_BT/J = 2.0$, corresponding to $T < T_c$ ($k_BT_c/J \approx 2.27$), at $t = 0$, $64$, and $8192$. $t$ is the MC time step per spin.

Fig. 8.2: Spin configurations on a square lattice of size $100\times100$ at $k_BT/J = 3.0$, corresponding to $T > T_c$ ($k_BT_c/J \approx 2.27$), at $t = 0$, $6$, and $128$. $t$ is the MC time step per spin.

Spin configurations of a 2d zero-field spin-$1/2$ Ising model on the square lattice, obtained using the single-spin-flip Metropolis algorithm, are shown at a temperature $T$ below $T_c$ in Fig. 8.1 and above $T_c$ in Fig. 8.2. The spin configurations shown in the figures were obtained on a $100\times100$ square lattice. Note that, in the absence of the field, the model has up-down symmetry, so overturning all the spins produces a degenerate state. At high temperature all the clusters of like spins are small; near the transition there is a broad distribution of cluster sizes; and at low temperature there is a single large cluster of ordered spins together with a number of small clusters of oppositely directed spins.

5.2 Equilibrium
Any measurement of a macroscopic property has to be made on states at thermodynamic equilibrium of the system at a given temperature $T$. Equilibrium means that the average probability of finding the system in a state $n$ is proportional to the Boltzmann weight $e^{-E_n/k_BT}$ of that state. How do we know that our system has reached that situation? One can calculate a macroscopic quantity, say the magnetization $M$, as a function of the MC time step $t$, starting from an arbitrary configuration. As $t\to\infty$, the macroscopic quantity should reach a constant value, or fluctuate slightly around a constant value. For a square lattice of size $100\times100$, the magnetization (the excess number of up spins) is measured as a function of the MC time step $t$ and plotted in Fig. 8.3. One can see that it starts from zero, as expected, and reaches a steady value after about $6000$ time steps.

Fig. 8.3: Plot of spontaneous magnetization against time $t$ at temperature $T = 2.0$, starting from a random configuration ($T = \infty$).

In many cases it is possible for the system to get stuck in some metastable region of its state space for a while, giving roughly constant values of the macroscopic quantities and so appearing to have reached equilibrium. In terms of statistical mechanics, there can be a local energy minimum in which the system remains temporarily, and one should not mistake it for the global energy minimum, the region of phase space which corresponds to the system in equilibrium. One may verify this by calculating a macroscopic quantity as a function of MC time steps starting from two widely different initial configurations and with different random number seeds. Some of the equilibrium spin configurations at different temperatures are shown below:

Fig. 8.4: Equilibrium spin configurations at temperatures $T = 2.00$ ($< T_c$), $T = 2.27$ ($\approx T_c$), and $T = 2.70$ ($> T_c$) after 8000 Monte Carlo time steps per spin on a square lattice of size $100\times100$.

Lecture-IV
5.3 Measurements
Simulations are performed on a 2d square lattice of size $100\times100$. Measurements are made after $t = 10^4$ time steps and averaged over $10^4$ configurations.

The magnetization per spin $m$ in a state $n$ is given by

$$m_n = \frac{1}{L^2}\sum_{i}^{L^2}\sigma_i^n.$$

Since only one spin $k$ flips at a time in the Metropolis algorithm, the change in magnetization is given by

$$\Delta M = M_m - M_n = \sum_i \sigma_i^m - \sum_i \sigma_i^n = \sigma_k^m - \sigma_k^n = 2\sigma_k^m.$$

One can then calculate the magnetization once at the beginning of the simulation and use the update

$$M_m = M_n + \Delta M = M_n + 2\sigma_k^m$$

for every spin flip. Once the magnetization is obtained, one can calculate the susceptibility per spin as

$$\chi = \frac{N}{k_BT}\left(\langle m^2\rangle - \langle m\rangle^2\right).$$

$m$ and $\chi$ are obtained as functions of temperature $T$ and are plotted in Fig. 8.5.

Fig. 8.5: Plot of magnetization $M$ and susceptibility $\chi$ against temperature $T$ (in units of $J/k_B$).

Fig. 8.6: Plot of internal energy $E$ and specific heat $C_V$ against temperature $T$ (in units of $J/k_B$).

The energy $E_n$ of a state $n$ can be obtained as

$$E_n = -J\sum_{\langle i,j\rangle}\sigma_i^n\sigma_j^n,$$

setting $J = 1$. However, in the Metropolis algorithm we calculate the energy difference $\Delta E = E_m - E_n$ in going from state $n$ to state $m$. Thus one can calculate the energy of a state $m$ as

$$E_m = E_n + \Delta E.$$

Knowing the value of the energy $E$, one may calculate the specific heat per spin as

$$c_V = \frac{\beta^2}{N}\left(\langle E^2\rangle - \langle E\rangle^2\right),$$

assuming $k_B = 1$. $E$ and $c_V$ are plotted as functions of temperature $T$ in Fig. 8.6.

In taking the average of a physical quantity, we have presumed that the states over which the average is made are independent. To make sure that the states are independent, one needs to measure the ``correlation time'' $\tau$ of the simulation. The time autocorrelation $C_m(t)$ of the magnetization is defined as

$$C_m(t) = \big\langle [m(t') - \langle m\rangle][m(t'+t) - \langle m\rangle]\big\rangle_{t'} = \langle m(t')\,m(t'+t)\rangle_{t'} - \langle m\rangle^2.$$

The autocorrelation $C_m(t)$ is expected to fall off exponentially at long times,

$$C_m(t) \sim e^{-t/\tau}.$$

Thus, at $t = \tau$, $C_m(t)$ drops by a factor of $1/e$ from its maximum value at $t = 0$. For independent samples, one should draw them at intervals greater than $\tau$. In most definitions of statistical independence, the interval turns out to be $2\tau$.
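In practice $C_m(t)$ is estimated from a recorded time series of the magnetization. The helper below is our own minimal sketch of that estimate (the function name autocorr and the normalization are our choices), averaging over all available time origins $t'$:

/* Estimate the magnetization autocorrelation C_m(t), as defined above,
 * from a time series m[0..n-1]; results are stored in C[0..tmax]. */
void autocorr(const double *m, int n, int tmax, double *C)
{
    double mean = 0.0;
    for (int i = 0; i < n; i++) mean += m[i];
    mean /= n;
    for (int t = 0; t <= tmax; t++) {
        double sum = 0.0;
        for (int i = 0; i < n - t; i++)          /* average over time origins t' */
            sum += (m[i] - mean) * (m[i + t] - mean);
        C[t] = sum / (n - t);
    }
    /* The correlation time tau can be read off where C[t]/C[0] falls to 1/e. */
}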

Applications of the Monte Carlo technique in different fields of condensed matter physics can be found in K. Binder (Ed.), The Monte Carlo Method in Condensed Matter Physics (Springer-Verlag, Heidelberg, 1992) [3].

The difficulties one usually encounters are, primarily, limited computer time and memory and, secondly, statistical and other errors. To counter these difficulties, one may begin with a relatively simple program using a relatively small system size and modest running time. The simulation can be performed for special parameter values for which exact results are available. The parameter range, system size, and computer time can then be optimized to obtain reasonable results with smaller errors.

6 Error analysis
There are two types of error in a Monte Carlo simulation: statistical error and systematic error. Statistical error arises as a result of random changes in the simulated system from measurement to measurement and can be reduced by generating a large number of independent samples. Systematic error is due to the procedure adopted to make a measurement and affects the whole simulation.

6.1 Statistical error:

Suppose a quantity $x$ is distributed according to a Gaussian distribution with mean value $\langle x\rangle$ and width $\sigma$. Consider $N$ independent observations $\{x_i\}$ of this quantity $x$. An unbiased estimator of the mean of this distribution is

$$\langle x\rangle = \frac{1}{N}\sum_{i=1}^{N}x_i,$$

and the standard error of this estimate is

$$\mathrm{error} = \frac{\sigma}{\sqrt{N}}.$$

In order to estimate the standard deviation $\sigma$ itself from the observations, consider the deviation $\Delta x_i = x_i - \langle x\rangle$. Trivially, $\langle \Delta x\rangle = 0$. Thus we are interested in the mean square deviation

$$\langle(\Delta x)^2\rangle = \frac{1}{N}\sum_{i=1}^{N}(x_i - \langle x\rangle)^2 = \langle x^2\rangle - \langle x\rangle^2.$$

The square of the standard deviation is the variance, given by

$$\sigma^2 = \mathrm{var}(x_1,\ldots,x_N) = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \langle x\rangle)^2.$$

Thus, the error in the estimate of the mean $\langle x\rangle$ is given by

$$\mathrm{error} = \sqrt{\frac{1}{N(N-1)}\sum_{i=1}^{N}(x_i - \langle x\rangle)^2}.$$
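These two formulas translate directly into code. The following minimal C helper, our own sketch (the function name mean_and_error is hypothetical), returns the sample mean and the standard error of $N$ independent observations:

#include <math.h>

/* Sample mean and standard error of N independent observations x[0..N-1],
 * using error = sqrt( sum_i (x_i - <x>)^2 / (N(N-1)) ). */
void mean_and_error(const double *x, int N, double *mean, double *err)
{
    double sum = 0.0, sq = 0.0;
    for (int i = 0; i < N; i++) sum += x[i];
    *mean = sum / N;
    for (int i = 0; i < N; i++)
        sq += (x[i] - *mean) * (x[i] - *mean);
    *err = sqrt(sq / ((double)N * (N - 1)));
}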

6.2 Systematic error:

Since systematic errors do not appear in the fluctuations of the individual measurements, they are more difficult to estimate than statistical errors. The main source of systematic error in the Ising model simulation is the choice of a finite number of MC time steps to equilibrate the system. There is no good general method for estimating systematic errors. Each source of such error has to be considered separately and a strategy has to be identified.

Problems
Problem 1: Check that the Metropolis transition probability satisfies the detailed balance condition.

Problem 2: Simulate the zero-field spin-$1/2$ Ising model on a 2d square lattice employing the Metropolis algorithm. Calculate the per-spin magnetization and susceptibility of the system as functions of $T$. Determine $T_c$.

Solution: C code of the Monte Carlo algorithm for the 2d Ising model

#include <stdio.h>
#include <math.h>
#include <stdlib.h>

#define L 128        // ^
#define LL (L*L)     // | Lattice parameters
#define L1 (L-1)     // |
#define LL1 (L*L1)   // v

#define T 1.25       // T = k_B T / J

#define p1 5000      // Averaging starts here, i.e., steady state assumed
#define p2 10000     // Program stops at this step

static long iseed=-999999999;

float ran2(long *idum); // Random number generator: see Ref. [8]

int main()
{
    static long i,j,k,ii,step,isite,a[LL],nn,nns,sp,dE,Enrg=0,spn=0,
        time=0,sqr_Enrg=0,sqr_spn=0,iv[4]={1,L,-1,-L};

    static double z,r,avg_mag,avg_energy,avg_sqr_mag,avg_sqr_energy,
        cie[2],mgt=0,energy=0,sqr_mgt=0,sqr_energy=0,dpLL,sp_heat,mag_sus;

    cie[0]=exp(-4/T);  // Boltzmann factors for dE = 4 and dE = 8
    cie[1]=exp(-8/T);
    dpLL=(double)(p2-p1)*LL;

    /* Arbitrarily fill up the lattice sites */

    for(i=0;i<LL;i++)
    {
        z=ran2(&iseed);
        if (z>0.4) a[i]=1;
        else a[i]=-1;
        spn=spn+a[i];
    }

    /* Calculation of the initial energy */

    for(i=0;i<LL;i++)
    {
        for(k=0;k<4;k++)
        {
            nn=i+iv[k];
            if(k==0 && i%L==L1)nn=i-L1;  // |
            if(k==1 && i>=LL1)nn=i-LL1;  // | Periodic boundaries
            if(k==2 && i%L==0)nn=i+L1;   // |
            if(k==3 && i<L)nn=i+LL1;     // |

            Enrg+=a[i]*a[nn];            // Each bond counted twice
        }
    }
    Enrg=-Enrg/2;                        // E = -J*(sum over bonds), J = 1

    /* Algorithm starts here */

    for(ii=0;ii<p2;ii++)
    {
        for(step=0;step<LL;step++)
        {
            isite=ran2(&iseed)*LL; // Choosing an arbitrary site
            sp=0;
            for(i=0;i<4;i++)
            {
                nns=isite+iv[i];
                if(i==0 && isite%L==L1)nns=isite-L1; // |
                if(i==1 && isite>=LL1)nns=isite-LL1; // | Finding NNs
                if(i==2 && isite%L==0)nns=isite+L1;  // |
                if(i==3 && isite<L)nns=isite+LL1;    // |
                sp+=a[nns];
            }

            /* Metropolis algorithm */

            dE=2*a[isite]*sp;       // Energy change on flipping, J = 1
            if(dE<=0)
            {
                a[isite]=-a[isite]; // Flipping criterion
                spn=spn+2*a[isite]; // Change in magnetization & energy
                Enrg+=dE;
            }
            else
            {
                j=(dE/4)-1;         // dE = 4 or 8 selects cie[0] or cie[1]
                r=cie[j];
                z=ran2(&iseed);     // Calling random number
                if (z<=r)
                {
                    a[isite]=-a[isite];
                    spn=spn+2*a[isite]; // Change in magnetization & energy
                    Enrg+=dE;
                }
            }
        } // Here we have one new configuration

        time++;

        // After p1 steps: accumulate measurements once per sweep

        if(time>=p1)
        {
            mgt=mgt+(double)spn;
            sqr_spn=spn*spn;
            sqr_mgt=sqr_mgt+(double)sqr_spn;
            energy=energy+(double)Enrg;
            sqr_Enrg=Enrg*Enrg;
            sqr_energy=sqr_energy+(double)sqr_Enrg;
        }
    }

    // Calculating the steady-state average quantities
    // per spin between steps p1 and p2

    avg_mag=mgt/dpLL;
    avg_sqr_mag=sqr_mgt/(dpLL*LL);
    avg_energy=energy/dpLL;
    avg_sqr_energy=sqr_energy/(dpLL*LL);

    sp_heat=LL*(avg_sqr_energy-avg_energy*avg_energy)/(T*T); // c_V per spin
    mag_sus=LL*(avg_sqr_mag-avg_mag*avg_mag)/T;              // chi per spin

    printf("T=%g m=%g E=%g chi=%g c_V=%g\n",
           (double)T,avg_mag,avg_energy,mag_sus,sp_heat);

    // End of the program //

    return(0);
}
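To build and run this listing one still needs the ran2() routine from Numerical Recipes [8] (any uniform $(0,1)$ generator with the same signature will do). Assuming hypothetical file names, something like gcc ising.c ran2.c -lm -o ising compiles it; repeating the run for a range of $T$ values and plotting mag_sus against $T$ then locates $T_c$ near the peak of the susceptibility, which completes Problem 2.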

References
[1] D. P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics (Cambridge University Press, Cambridge, 2005).

[2] M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics (Clarendon Press, Oxford, 2001).

[3] K. Binder (Ed.), The Monte Carlo Method in Condensed Matter Physics (Springer-Verlag, Heidelberg, 1992).

[4] K. Binder and D. W. Heermann, Monte Carlo Simulation in Statistical Physics (Springer, Berlin, 1997).

[5] D. Frenkel and B. Smit, Understanding Molecular Simulation (Academic Press, San Diego, 2002).

[6] K. P. N. Murthy, Monte Carlo Methods in Statistical Physics (Universities Press, Hyderabad, 2004).

[7] S. B. Santra and P. Ray, Classical Monte Carlo simulation, in Computational Statistical Physics, edited by S. B. Santra and P. Ray (Hindustan Book Agency, New Delhi, 2011).

[8] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes (Cambridge University Press, Cambridge, 1998).
