
Statistical Physics Vishnu Jejjala

Lecture 1

1. Statistical physics makes the assumption that there are underlying quantum states of a
system. We generally consider systems with a large number of degrees of freedom N . The
goal is to determine the probability distribution of states in the limit that the time t → ∞.

2. We will initially work in equilibrium settings wherein the macroscopic observables have
stopped evolving in time. Familiar examples of macroscopic observables include temperature
and pressure.
Think about a gas of N molecules in a room. Under standard conditions, there is about one mole (6.022 × 10²³) of molecules in 22.4 L of air, so N is an extremely large number. It is
impossible to keep track of the motion of each individual molecule in the room. There are
too many microscopic degrees of freedom. Fortunately, it is not necessary to know everything
in such detail. If we wait a suitably long time (longer than the relaxation time τ ), even very
special initial conditions evolve to generic distributions with universal properties.
Consider a measurement apparatus of volume ℓ³ in the room of volume L³. Figure 1 interrogates how many particles n_ℓ(r, t) abide within the measurement apparatus as a function of time. (The vector r gives the position of the measurement apparatus within the room.) If the mean intermolecular spacing is λ, let us examine the regimes where λ ∼ ℓ, λ ≪ ℓ ≪ L, and ℓ ∼ L. We see large fluctuations in n_ℓ(r, t) in the first case. Noise contaminates the data. Because we cannot get a consistent answer to our query, we do not learn much about the system as a result of this investigation. Let us increase ℓ. As indicated by the second plot, even if all the molecules were initially within the measurement apparatus, in the regime where λ ≪ ℓ ≪ L there is a well defined average value of n_ℓ(r, t) at late times. There are small fluctuations on top of this. The fluctuations become larger with increasing temperature, but the mean occupation remains the same. It is N ℓ³/L³, and this result is robust in time. The third plot simply depicts how many particles there are because the measurement apparatus and the system are the same size. The number of particles is a macroscopic observable defined with respect to the length scale ℓ; it is a coarse grained observable.

Figure 1: The number of particles in a box of length ℓ as a function of time.
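This behavior is easy to reproduce numerically. The sketch below (a toy simulation with illustrative parameters, not taken from the lecture) scatters N points uniformly in a box of side L, counts how many land in a measurement cube of side ℓ, and repeats over independent snapshots: the mean count tracks N ℓ³/L³, while the relative fluctuations shrink as ℓ grows.

import numpy as np

rng = np.random.default_rng(0)
N, L, snapshots = 100_000, 1.0, 100

for ell in (0.01, 0.1, 0.5):         # roughly lambda ~ ell, lambda << ell << L, ell ~ L
    counts = np.array([
        np.sum(np.all(rng.uniform(0.0, L, size=(N, 3)) < ell, axis=1))
        for _ in range(snapshots)    # independent snapshots of the gas
    ], dtype=float)
    print(f"ell = {ell}: mean = {counts.mean():.2f}, "
          f"N ell^3/L^3 = {N * ell**3 / L**3:.2f}, "
          f"std/mean = {counts.std() / counts.mean():.3f}")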

3. The principle of equal a priori probabilities asserts that a closed system in thermal
equilibrium is equally likely to be in any of the quantum states accessible to it. A priori
is a Latin phrase meaning “from what comes before.” There is no justification, a priori, to favor one quantum state of the system over another. All of the possibilities are equally likely, and we lack a selection principle to differentiate a particular allowable state
over one of its alternatives. In this setting, we are working in the microcanonical ensemble.
The probability for the system to satisfy certain macroscopic conditions is proportional to the
fraction of states that exhibit that particular property. For simple systems, we can tabulate
all of the possible quantum states. Exceedingly special states are exactly as probable as any
given typical state. It is just that the states with some unusual property are overwhelmingly
rare in the complete list of states. Almost all states are typical. As such, we can study
their properties statistically. If you like, this may be viewed as the Copernican principle in
statistical physics.

4. Suppose we have a system of N particles in a volume V with energy E. The entropy enumerates the total number of microscopic states of a physical system. If there are Ω total states, the entropy is

S = k_B \log \Omega ,    (1)
where Boltzmann’s constant k_B = 1.381 × 10⁻²³ J/K = 8.617 × 10⁻⁵ eV/K. Natural units set k_B = 1 and treat entropy as a dimensionless quantity. The probability for being in the state |i⟩ is Ω⁻¹.
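As a quick illustration (a toy example, not from the lecture), consider a system of N independent two-state units. There are Ω = 2^N microstates, so

S = k_B \log 2^N = N k_B \log 2 ,

and the entropy is extensive: it grows linearly with the number of degrees of freedom, each unit contributing k_B \log 2.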

5. In the continuum limit, the entropy is a function of the thermodynamic variables corresponding to energy, volume, and particle number: S = S(E, V, N). Similarly, the energy is a function E(S, V, N). From this, we write

dE = \left(\frac{\partial E}{\partial S}\right)_{V,N} dS + \left(\frac{\partial E}{\partial V}\right)_{S,N} dV + \left(\frac{\partial E}{\partial N}\right)_{S,V} dN .    (2)

We identify
T = \left(\frac{\partial E}{\partial S}\right)_{V,N} , \qquad p = -\left(\frac{\partial E}{\partial V}\right)_{S,N} , \qquad \mu = \left(\frac{\partial E}{\partial N}\right)_{S,V} ,    (3)
where T is temperature, p is pressure, and µ is chemical potential. We then have

dE = T dS − p dV + µ dN . (4)

This is the first law of thermodynamics, which is a statement about the conservation of
energy.

6. The principle of maximum entropy posits that physical processes maximize the number
of states Ω. This is a statement of the second law of thermodynamics. As an example,
consider two systems with (S1 , E1 , V1 , N1 ) and (S2 , E2 , V2 , N2 ). If the first system has Ω1
microstates and the second system has Ω2 microstates, the total system has Ωtot = Ω1 · Ω2
microstates so that

S_{\text{tot}} = k_B \log \Omega_{\text{tot}} = k_B \log \Omega_1 + k_B \log \Omega_2 = S_1 + S_2 .    (5)

The logarithm appears in (1) so that the entropy of subsystems is additive.


Suppose the two systems are brought into thermal contact so that energy flows from one to
the other. Crucially, the volume and the number of particles associated to each system do
not change (i.e., dV1 = dV2 = dN1 = dN2 = 0). The energy of a closed system is constant so
that 0 = dE = dE1 + dE2 . At equilibrium, the entropy is maximum and
 
0 = dS = \left(\frac{\partial S_1}{\partial E_1}\right)_{V_1,N_1} dE_1 + \left(\frac{\partial S_2}{\partial E_2}\right)_{V_2,N_2} dE_2 = \left(\frac{1}{T_1} - \frac{1}{T_2}\right) dE_1 .    (6)

This implies that T1 = T2 . The number of microstates is maximum when the temperatures of
the two systems are the same. This is the condition for thermal equilibrium. At equilibrium,
the values of the macroscopic observables are such that Ωtot is as large as possible. The most
probable state is also the equilibrium state.
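We can watch this selection of the most probable state in a toy model. The sketch below treats each subsystem as a set of oscillators sharing integer energy quanta, with Ω(q, N) = C(q + N − 1, q); this model and its parameters are illustrative assumptions, not part of the lecture. Scanning over every split of the total energy locates the maximum of Ωtot = Ω1 · Ω2:

from math import comb, log

N1, N2, Q = 300, 600, 900            # oscillator counts and total quanta (toy values)

def log_omega(q, N):
    # log of the number of ways N oscillators can hold q quanta
    return log(comb(q + N - 1, q))

# Total entropy (in units of k_B) for every way of splitting the energy.
S_tot = [log_omega(q1, N1) + log_omega(Q - q1, N2) for q1 in range(Q + 1)]
q1_star = max(range(Q + 1), key=lambda q1: S_tot[q1])

print("most probable split:", q1_star, Q - q1_star)
print("quanta per oscillator:", q1_star / N1, (Q - q1_star) / N2)

The maximum sits at the split where ∂S1/∂E1 = ∂S2/∂E2: both subsystems end up with the same energy per oscillator, which is the equal-temperature condition derived above.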

7. Let us count the number of states of energy E for an ideal gas of N identical and indistinguishable, spin-0 particles of mass m in a box with sides L. To begin, recall from quantum mechanics that the energy of a single particle in a box is

\varepsilon(\mathbf{n}) = \frac{\hbar^2 \pi^2}{2mL^2} (n_x^2 + n_y^2 + n_z^2) ,    (7)
where n_x, n_y, n_z are positive integers. As we have N particles, the total energy is
E = \sum_{i=1}^{N} \varepsilon(\mathbf{n}_i) ,    (8)

where ni is a vector of integers that describes the excitations of the i-th particle. As this is a
partition problem, there are many choices of vectors {ni } that yield the same total energy for
the gas. The system is degenerate, and entropy quantifies this degeneracy. Entropy therefore
represents our ignorance about the true microstate of the system consistent with a given
property, in this case the number of ways of realizing a particular total energy. How does
the system choose the vectors {ni }? In general, this is a consequence of initial conditions.
Noting that the Kronecker δ-function is defined so that
\delta(x) = \begin{cases} 1 , & x = 0 \\ 0 , & x \neq 0 \end{cases} ,    (9)

if we want to enumerate the number of states with a fixed energy E, the result is
\Omega(E) = \frac{1}{N!} \sum_{\{\mathbf{n}_i\} > 0} \delta\!\left(E - \sum_{i=1}^{N} \frac{\hbar^2 \pi^2}{2mL^2} |\mathbf{n}_i|^2\right) .    (10)

The δ-function is 1 only for those choices of {ni } that satisfy the energy condition (8).
Therefore, (10) calculates the total number of configurations with the energy E. The division
by N ! in front accounts for the fact that the particles are indistinguishable. This is the number
of elements in the symmetric group SN , whose elements are all the permutations of N objects.
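For a very small system, the sum in (10) can be carried out by brute force. The sketch below (with illustrative parameters, not from the lecture) works in units where ℏ²π²/2mL² = 1, so the i-th particle contributes |n_i|² to the energy, and counts configurations at fixed total E. Note that dividing by N! is only approximately correct when some particles occupy the same level:

from itertools import product
from math import factorial

N, E, n_max = 2, 12, 3               # particle number, total energy, search cutoff

# Single-particle levels (n_x, n_y, n_z) with positive integer components.
levels = list(product(range(1, n_max + 1), repeat=3))
energy = {n: sum(c * c for c in n) for n in levels}

# Count ordered N-tuples of levels whose energies sum to E, as in (10) ...
ordered = sum(
    1 for ns in product(levels, repeat=N)
    if sum(energy[n] for n in ns) == E
)
# ... then divide by N! to account for indistinguishability.
print("Omega(E) ~", ordered / factorial(N))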
Removing the restriction that the components of the vectors {ni } are positive, we recast the
sum in (10) as
\Omega(E) = \frac{1}{N!} \frac{1}{2^{3N}} \sum_{\{\mathbf{n}_i\}} \delta\!\left(E - \sum_{i=1}^{N} \frac{\hbar^2 \pi^2}{2mL^2} |\mathbf{n}_i|^2\right) .    (11)

Now, define
\mathbf{k} = \frac{\pi}{L} (n_x, n_y, n_z) .    (12)


Taking the limit L → ∞, we pass to the continuum and replace sums over lattice points with integrals over k-space. Since the spacing between levels is proportional to L⁻², the energy is now a continuous function rather than a quantity that assumes discrete values. Thus,
\Omega(E) = \frac{1}{N!} \frac{V^N}{(2\pi)^{3N}} \int d^3k_1 \cdots d^3k_N \, \delta\!\left(E - \frac{\hbar^2}{2m} \sum_{i=1}^{N} |\mathbf{k}_i|^2\right) ,    (13)

where we have put V = L³ and traded the Kronecker δ-function for the Dirac δ-function. We also write ∫ d³k to compress notation:

\int d^3k \, f(\mathbf{k}) = \int_{-\infty}^{\infty} dk_x \int_{-\infty}^{\infty} dk_y \int_{-\infty}^{\infty} dk_z \, f(\mathbf{k}) .    (14)

The Dirac δ-function is a degenerate Gaussian obtained as the Fourier transform of the identity:

\delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dp \, e^{ipx} \cdot 1 .    (15)
It enjoys the property that

\int_{-\infty}^{\infty} dx \, f(x) \, \delta(x) = f(0) .    (16)

If f (x) = 1, we see that the Dirac δ-function integrates to unity. (Mathematicians may prefer
to call δ(x) a distribution. As usual, physicists are imprecise with language.) We can also
think of δ(x) as the derivative of the step function:
\delta(x) = \frac{d}{dx} \Theta(x) , \qquad \Theta(x) = \begin{cases} 1 , & x \geq 0 \\ 0 , & x < 0 \end{cases} .    (17)

Thus, δ(x) = 0 except at x = 0, where it is infinite. The step function allows us, incidentally,
to write
\Omega(E) = \frac{d}{dE} \, \frac{1}{N!} \frac{V^N}{(2\pi)^{3N}} \int d^3k_1 \cdots d^3k_N \, \Theta\!\left(E - \frac{\hbar^2}{2m} \sum_{i=1}^{N} |\mathbf{k}_i|^2\right) .    (18)
Noting that the integral is the volume of a ball of radius r = \sqrt{2mE/\hbar^2} in 3N dimensions, we expect that the derivative gives us the surface area of a (3N − 1)-dimensional sphere. This anticipates the form of the answer we will now derive.
Defining
\mathbf{q}_i = \sqrt{\frac{\hbar^2}{2m}} \, \mathbf{k}_i , \qquad q^2 = \sum_{i=1}^{N} |\mathbf{q}_i|^2 ,    (19)

we recast (13) as
\Omega(E) = \frac{V^N}{N! (2\pi)^{3N}} \left(\frac{2m}{\hbar^2}\right)^{3N/2} \int d^3q_1 \cdots d^3q_N \, \delta(E - q^2) .    (20)

The factors in front come from the change of variables k → q. Now, since we are especially
interested in what happens to the δ-function when its argument is zero, we make use of the
fact that
\delta(f(q)) = \sum_{\text{roots of } f} \frac{\delta(q - q_j)}{|f'(q_j)|} , \qquad f(q_j) = 0 .    (21)


The derivation of this expression is as follows. Suppose the roots of f (q) are at qj . Then,
given a test function g(q),
I = \int_{-\infty}^{\infty} dq \, g(q) \, \delta(f(q)) = \sum_j I_j , \qquad I_j = \int_{q_j - \epsilon}^{q_j + \epsilon} dq \, g(q) \, \delta((q - q_j) f'(q_j)) .    (22)

Here, we have Taylor expanded f (q) about the zero at qj :

f(q) = f(q_j) + f'(q_j)(q - q_j) + \ldots .    (23)

The leading term vanishes by construction and f (q) ≈ f ′ (qj )(q − qj ) when we take ϵ suitably
small. This approximation is then substituted into the argument of the δ-function which
appears in the integrand of Ij . Setting t = (q − qj )f ′ (qj ), we have
I_j = \frac{1}{f'(q_j)} \int_{-\epsilon f'(q_j)}^{\epsilon f'(q_j)} dt \, g\!\left(\frac{t}{f'(q_j)} + q_j\right) \delta(t) = \frac{g(q_j)}{|f'(q_j)|} .    (24)

The absolute value arises because, in case f ′ (qj ) is negative, we must reverse the bounds of
integration, which pulls down an additional minus sign. Now,
I = \sum_j I_j = \sum_j \frac{g(q_j)}{|f'(q_j)|} = \sum_j \frac{1}{|f'(q_j)|} \int_{-\infty}^{\infty} dq \, g(q) \, \delta(q - q_j) ,    (25)

which, by comparison to (22), establishes the claim in (21).


The composition law (21) enables us to rewrite
\delta(E - q^2) = \frac{\delta(\sqrt{E} - q)}{2\sqrt{E}} + \frac{\delta(\sqrt{E} + q)}{2\sqrt{E}} .    (26)
Substituting this into the integral from (20), we have
\int d^3q_1 \cdots d^3q_N \, \delta(E - q^2) = \int d\omega_{3N-1} \int_0^{\infty} dq \, q^{3N-1} \, \frac{\delta(\sqrt{E} - q)}{2\sqrt{E}} = \frac{1}{2} S_{3N-1} E^{\frac{3N}{2} - 1} .    (27)
In the first equality, we have treated q = (q1 , . . . , qN ) as a vector and switched to spherical
polar coordinates in 3N dimensions. The fact that q > 0 lets us drop the second term on the
RHS of (26). The integral over 3N − 1 angular variables describes the surface area of a unit
(3N − 1)-dimensional sphere as we have argued above. In the second equality, we evaluate the δ-function where its argument vanishes, viz., at q = √E.
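The decomposition (26) is easy to test numerically by smearing the Dirac δ-function into a narrow Gaussian; this regularization and the test function below are illustrative choices, not part of the lecture.

import numpy as np

def delta(x, sigma=1e-3):
    # narrow Gaussian standing in for the Dirac delta
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

E = 2.0
def g(q):
    # an arbitrary smooth test function
    return np.exp(-q) * q**2

q, dq = np.linspace(-5.0, 5.0, 1_000_001, retstep=True)
lhs = np.sum(g(q) * delta(E - q**2)) * dq        # integral of g(q) delta(E - q^2)
rhs = (g(np.sqrt(E)) + g(-np.sqrt(E))) / (2 * np.sqrt(E))
print(lhs, rhs)                                  # the two values agree closely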
To proceed, we must use the formula for the surface area of a higher dimensional sphere. Let
us derive this. Put
I = \int d^dx \, e^{-|\mathbf{x}|^2} = \int_{-\infty}^{\infty} dx_1 \, e^{-x_1^2} \int_{-\infty}^{\infty} dx_2 \, e^{-x_2^2} \cdots \int_{-\infty}^{\infty} dx_d \, e^{-x_d^2} = \pi^{d/2} .    (28)

The integrals are Gaussian, so we know the answer. Taking x = |x|, we can also express I
in d-dimensional spherical polar coordinates as
I = \int d\omega_{d-1} \int_0^{\infty} dx \, x^{d-1} e^{-x^2} = S_{d-1} \int_0^{\infty} \frac{du}{2\sqrt{u}} \, u^{\frac{d-1}{2}} e^{-u} , \qquad u = x^2 .    (29)
We compare the integration over u to the generalized factorial, or Euler’s Γ-function
\Gamma(z) = \int_0^{\infty} dt \, t^{z-1} e^{-t} ,    (30)


in order to conclude that


I = \frac{1}{2} S_{d-1} \Gamma\!\left(\frac{d}{2}\right) .    (31)
Thus, equating the two expressions for I in (28) and (31), we calculate
S_{d-1} = \frac{2\pi^{d/2}}{\Gamma(d/2)} .    (32)

This matches what we know: for d = 2, S1 = 2π and for d = 3, S2 = 4π, which are the
circumference of a unit circle and the surface area of a unit sphere, respectively.
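The formula (32) can also be checked by Monte Carlo; the sketch below is an illustrative verification, not part of the lecture. It samples points uniformly in the cube [−1, 1]^d, estimates the volume of the unit ball from the fraction that lands inside, and compares against V_d = S_{d−1}/d (which follows from integrating S_{d−1} r^{d−1} over 0 ≤ r ≤ 1).

import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(1)
samples = 500_000

for d in (2, 3, 4, 5):
    x = rng.uniform(-1.0, 1.0, size=(samples, d))
    inside = np.sum(np.sum(x * x, axis=1) <= 1.0)
    v_mc = 2.0**d * inside / samples                  # ball volume from the hit fraction
    v_exact = 2 * np.pi**(d / 2) / gamma(d / 2) / d   # V_d = S_{d-1}/d, with (32)
    print(f"d = {d}: Monte Carlo {v_mc:.4f} vs exact {v_exact:.4f}")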
Putting all the pieces together, we finally determine that
\Omega(E) = \frac{V^N}{N!} \left(\frac{m}{2\pi\hbar^2}\right)^{3N/2} \frac{E^{\frac{3N}{2} - 1}}{\Gamma\!\left(\frac{3N}{2}\right)} .    (33)

From this expression, we may compute the entropy using (1). Calculating in the microcanonical ensemble is straightforward, but it is messy! We will develop techniques to make these computations easier as we proceed.
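As a first taste of such a computation, the sketch below evaluates log Ω directly from (33), using the log-Γ function so that none of the enormous factors overflow. The choice of units (ℏ = m = k_B = 1) and the parameter values are illustrative assumptions, not numbers from the lecture.

import numpy as np
from scipy.special import gammaln

def log_omega(E, V, N, m=1.0, hbar=1.0):
    # log of (33); gammaln(x) = log Gamma(x) keeps every term finite
    return (N * np.log(V)
            - gammaln(N + 1)                             # log N!
            + 1.5 * N * np.log(m / (2 * np.pi * hbar**2))
            + (1.5 * N - 1) * np.log(E)
            - gammaln(1.5 * N))

N = 10_000
S = log_omega(E=1.0 * N, V=1.0 * N, N=N)   # entropy in units of k_B, via (1)
print("entropy per particle:", S / N)      # an O(1) number, as extensivity requires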
