
Thermodynamics and Statistical Physics

B. Zheng¹
Zhejiang Institute of Modern Physics, Zhejiang University,
Hangzhou 310027, P.R. China

¹ Author, e-mail: [email protected]; tel: 0086-571-87952753; fax: 0086-571-87952659


Chapter 1  Thermodynamics

Chapter 2  Classical statistical mechanics

Chapter 3  Canonical ensemble and grand canonical ensemble

Chapter 4  Quantum statistical mechanics

4.1 Quantum mechanics


4.1.1 Fundamental principles
A Hilbert space:
a linear inner-product space spanned by a complete orthonormal set of stationary functions, in general denoted as {ϕn}, e.g., {e^{ipx}}. The dimension of a Hilbert space is usually infinite.

A state of the microscopic system at time t corresponds to a point in the Hilbert space, expressed as a linear superposition of {ϕn},

    ψ = ∑_n Cn(t) ϕn,

i.e., a state including its dynamic evolution is given by the coordinates {Cn(t)}. In general, ψ(x, t) is a complex-valued function.

The inner product is usually defined as

    (ψ, ϕ) ≡ ∫ dx ψ∗(x, t) ϕ(x, t)

A physical observable is associated with a Hermitian operator in the Hilbert space. The average value of an observable in a state at time t is calculated by

    ⟨Ô(t)⟩ = ∫ dx ψ∗(x, t) Ô(x) ψ(x, t)

Suppose that the operator Ô(x) is diagonal, i.e., a classical function of x. Then ψ∗(x, t)ψ(x, t) dx is just the probability of finding the particle at position x at time t. The physical meaning of ψ(x, t) itself is not very clear, but we know that ψ∗(x, t)ψ(x, t) alone is not sufficient to describe quantum mechanics: ψ(x, t) also contains the phase, which explains the interference of particles, etc.
Note that at a given time t the particle can be found at any x with a certain probability; it is not that the particle moves from here to there. This is just the uncertainty principle. When one measures the particle, however, it is found at a definite x, since the external equipment in a microscopic measurement disturbs the system. Therefore, the physical meaning of the probability distribution in quantum mechanics is very different from that in statistical mechanics.
In quantum mechanics, one needs ψ(x, t) to describe the microscopic state even for a single particle. It is also not that the particle is localized sometimes here and sometimes there. The particle itself is “a probability wave”, extended in space.

Fundamental problems:
• How to construct the operator Ô(x)?
• How to find the state ψ(x, t)?
We adopt the expression O(x, p) of the observable in
classical mechanics, and change x and p to operators:

    x → x̂ = x

    p → p̂ = −i ∂/∂x

This is the so-called quantization. Now, x̂ and p̂ are non-commuting,

    [x̂, p̂] = [x, −i ∂/∂x] = x(−i ∂/∂x) − (−i ∂/∂x)x = i
Therefore, the ordering of x̂ and p̂ in the quantization is a little subtle. In computations with operators, one should keep in mind that an operator acts on the function to its right.
The Schrödinger equation

    i ∂ψ(x, t)/∂t = Hψ(x, t)

together with initial conditions and boundary conditions. The Hamiltonian operator is usually assumed to be linear, and of course also Hermitian. The principle of linear superposition results from the Schrödinger equation.

Question: how would the linearity be violated?

Assume H is independent of t. Let

    ψ(x, t) = ∑_n Cn(t) ϕn(x);

we have

    i ∑_n (dCn(t)/dt) ϕn(x) = ∑_n Cn(t) Hϕn(x)

    ⇔  Hϕn(x) = En ϕn(x),    i dCn(t)/dt = En Cn(t)

In general, it can be proven under certain conditions that {ϕn} form a complete set of the Hilbert space, and the general solution of the Schrödinger equation is written as

    ψ(x, t) = ∑_n cn e^{−iEn t} ϕn(x),

where the cn are constants. En typically takes discrete values; this is the result of the quantization.
We denote the average of a linear operator by (ψ, Ôψ),

    ⟨Ô⟩ ≡ ∫ ψ∗(x) Ô ψ(x) dx ≡ (ψ, Ôψ)
        = (∑_n Cn(t)ϕn, Ô ∑_m Cm(t)ϕm)
        = ∑_{n,m} Cn∗(t) Cm(t) (ϕn, Ôϕm)

{Cn(t)} in quantum mechanics is similar to {p(t), q(t)} in classical mechanics.
Statistical mechanics does not intend to solve for {Cn(t)} from the Schrödinger equation, but rather assigns a probability distribution to {Cn(t)} in the equilibrium state.

4.1.2 Theory of representations


Representations in quantum mechanics

A state is described by ψ(x, t). An inner product must be defined,

    (ψ, ϕ) ≡ ∫ dx ψ∗(x, t) ϕ(x, t)

An observable,

    ⟨Ô(x)⟩ ≡ (ψ, Ôψ) ≡ ∫ dx ψ∗(x, t) Ô(x) ψ(x, t)

If {ϕn(x)} is a complete orthonormal set of wave functions, i.e.,

    (ϕn, ϕm) = δnm

then, in general,

    ψ(x, t) = ∑_n Cn(t) ϕn(x)
    ϕ(x, t) = ∑_n bn(t) ϕn(x)

The inner product

    (ψ, ϕ) = (∑_n Cn(t)ϕn, ∑_m bm(t)ϕm)
           = ∑_n Cn∗(t) bn(t)

The observable

    (ψ, Ôψ) = (∑_n Cn(t)ϕn, ∑_m Cm(t)Ôϕm)
            = ∑_{n,m} Cn∗(t) Cm(t) Onm

    Onm = (ϕn, Ôϕm)

{Onm} defines a matrix,

    {Onm} = ( O11  O12  ··· )
            ( O21  O22  ··· )  ≡  O
            ( ···  ···  ··· )

{Cn(t)} also defines a matrix (a column),

    {Cn(t)} = ( C1(t) )
              ( C2(t) )  ≡  C
              (  ···  )

Similarly,

    B ≡ ( b1(t) )
        ( b2(t) )
        (  ···  )

    C⁺ = ( C1∗(t)  C2∗(t)  ··· )

C, B and O are the representations of the states ψ, ϕ and the operator Ô with respect to the complete set {ϕn}, and

    (ψ, ϕ) = C⁺B
    (ψ, Ôψ) = C⁺OC

e.g.,

(1) ϕn(x) = e^{ipx}  (p ↔ n)

    p̂ = −i ∂/∂x
    p̂ e^{ipx} = p e^{ipx}

(2) ϕn(x) = δ(x − x′)  (x′ ↔ n)

    x̂ δ(x − x′) = x δ(x − x′)
                = (x − x′) δ(x − x′) + x′ δ(x − x′)
                = x′ δ(x − x′)

    ψ(x, t) = ∫ dx′ ψ(x′, t) δ(x − x′)

    ψ → ( ψ(x1′, t)dx1′ )
        ( ψ(x2′, t)dx2′ )
        (      ···      )

Question: what is the representation of an operator such as p̂ in this case?

The linear transformation between representations

If {ψn} is another complete set,

    ψ = ∑_n Cn′(t) ψn → C′
    (ψn, Ôψm) → O′

we may find the linear transformation between the two representations.

Suppose

    ψn = ∑_m ϕm tmn

    ψ = ∑_n Cn′ ψn = ∑_{n,m} Cn′ ϕm tmn = ∑_m (∑_n tmn Cn′) ϕm

hence

    Cm(t) = ∑_n tmn Cn′(t)

i.e.,

    C = T C′,    C⁺ = C′⁺T⁺,    T ≡ {tmn}

Here we should remember the definition of T!

Exercise: prove

    O = T O′ T⁺,    T⁺T = T T⁺ = 1

Then, both

    (ψ, ϕ) = C⁺B = C′⁺T⁺ T B′ = C′⁺B′

and

    (ψ, Ôψ) = C⁺OC = C′⁺T⁺ T O′ T⁺ T C′ = C′⁺O′C′

are independent of the representations!!
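A quick numerical illustration of this invariance (a minimal sketch with numpy, not part of the original notes): generate a random unitary change of basis T and check that C⁺B and C⁺OC are unchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 5

    # a state C, a second state B, and a Hermitian operator O in the {phi_n} basis
    C = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    B = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    O = (A + A.conj().T) / 2                     # Hermitian matrix

    # a unitary change of basis T (psi_n = sum_m phi_m t_mn), e.g. from a QR decomposition
    T, _ = np.linalg.qr(A)

    # components in the new representation: C = T C'  =>  C' = T^+ C, and O' = T^+ O T
    Cp = T.conj().T @ C
    Bp = T.conj().T @ B
    Op = T.conj().T @ O @ T

    print(np.allclose(C.conj() @ B, Cp.conj() @ Bp))            # inner product unchanged
    print(np.allclose(C.conj() @ O @ C, Cp.conj() @ Op @ Cp))   # expectation value unchanged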
Dirac notations

Introduce |ψ⟩ to describe ψ, e.g.,

    |ψ⟩ = ∑_n Cn(t) |ψn⟩

and similarly ⟨ψ| to describe ψ∗,

    ⟨ψ| = ∑_n ⟨ψn| Cn∗(t),    (|ψ⟩)⁺ = ⟨ψ|

All |ψ⟩ form a Hilbert space, and so do all ⟨ψ|.


Both the inner product

    (ψ, ϕ) ≡ ⟨ψ|ϕ⟩

and the observable

    (ψ, Ôψ) ≡ ⟨ψ|Ô|ψ⟩

are independent of representations, but computations of their values must be carried out in a specified representation, e.g.,

    ⟨ψ|ϕ⟩ = ∑_{n,m} ⟨ϕn| Cn∗(t) bm(t) |ϕm⟩ = ∑_n Cn∗(t) bn(t)

    ⟨ψ|Ô|ψ⟩ = ∑_{n,m} ⟨ϕn| Cn∗(t) Ô Cm(t) |ϕm⟩
            = ∑_{n,m} Cn∗(t) Cm(t) ⟨ϕn|Ô|ϕm⟩
            = ∑_{n,m} Cn∗(t) Cm(t) Onm

    ⟨ϕn|ψ⟩ = ⟨ϕn| ∑_m Cm(t) |ϕm⟩ = Cn(t)

In the above computations, the Dirac notations are just notations, and do not provide much convenience.
In some cases, the Dirac notations are useful. Formally, the unit operator can be expressed as

    1̂ = ∑_n |ϕn⟩⟨ϕn|    (≠ ∑_n ⟨ϕn|ϕn⟩),

where one should keep in mind that |ϕn⟩⟨ϕn| is an operator.
Proof:

    ⟨ϕn|1̂|ϕm⟩ = ⟨ϕn| ∑_{n′} |ϕn′⟩⟨ϕn′| |ϕm⟩
              = ∑_{n′} ⟨ϕn|ϕn′⟩⟨ϕn′|ϕm⟩
              = ∑_{n′} δnn′ δn′m
              = δnm
This representation of the unit operator is rather convenient, e.g.,

    ⟨ψ|ϕ⟩ = ⟨ψ|1̂|ϕ⟩ = ⟨ψ| ∑_n |ϕn⟩⟨ϕn| |ϕ⟩
          = ∑_n ⟨ψ|ϕn⟩⟨ϕn|ϕ⟩
          = ∑_n Cn∗(t) bn(t)

    ⟨ψ|Ô|ψ⟩ = ⟨ψ| ∑_n |ϕn⟩⟨ϕn| Ô ∑_m |ϕm⟩⟨ϕm| |ψ⟩
            = ∑_{n,m} ⟨ψ|ϕn⟩⟨ϕn|Ô|ϕm⟩⟨ϕm|ψ⟩
            = ∑_{n,m} Cn∗(t) Cm(t) Onm

Question: how to define the unit operator without the Dirac notation?

Question: what are the difficulties in studying quantum mechanics and statistical mechanics, respectively?

4.2 Postulates of quantum statistical mechanics

4.2.1 Postulates
• An isolated system
• In thermodynamic equilibrium
• Any macroscopic time interval is long enough in the microscopic sense.
then, the statistical average of an observable is

    ⟨O(x)⟩ = (ψ, O(x)ψ) = ∑_{n,m} Cn∗(t)Cm(t) (ϕn, Ôϕm)

Postulates: let {ϕn} be the eigen-functions of energy,

    • Cn∗(t)Cm(t) = 0,   n ≠ m
    • Cn∗(t)Cn(t) = { constant,  E < En < E + ∆
                    { 0,         otherwise

This is the so-called microcanonical ensemble. Here ∆ is a small macroscopic parameter, but it is sufficiently large in the microscopic sense. Thus there are very many microscopic states with E < En < E + ∆.
We note that the “number” of microscopic states described by {Cn(t)} is much larger than the number of energy eigen-functions. Actually, the statistical probability distribution of {Cn(t)}, i.e., ρ({Cn(t)}), is unknown. But we assume that after the statistical average, the probability at any energy eigen-function becomes a constant. This is slightly different from classical mechanics. Cn∗(t)Cm(t) = 0 for n ≠ m implies that different microscopic states are not correlated: for fixed Cn∗(t), the average over different Cm(t) cancels out.

Effectively, therefore, we may write

    |bn|² = { const.,  E < En < E + ∆
            { 0,       otherwise

then

    ⟨O⟩ = ∑_n |bn|² (ϕn, Ôϕn) / ∑_n |bn|².

Here ⟨O⟩ denotes both the statistical average and the average in quantum mechanics.

In classical statistical mechanics, the equal a priori probability assumption is based on the so-called “ergodicity”. In quantum mechanics, it seems more controversial to assume ergodicity. For example, it is in principle not possible for the system to make a transition from one energy eigen-state to another. In fact, the general solution of the Schrödinger equation given in Sec. 4.1.1 obviously does not guarantee ergodicity.
The equal a priori probability postulates are assumed based on possible interactions with the external environment. In other words, {Cn(t)} include the nontrivial time evolution and the coordinates of the environment (page 171 in the textbook). In the fundamental sense, therefore, the statistical average Cn∗(t)Cm(t) should effectively describe the average over microscopic times and the interactions with the environment.
We should recognize that the postulates of quantum statistical mechanics, even if regarded as phenomenological statements, are more fundamental than the laws of thermodynamics. First, the postulates not only imply the laws of thermodynamics, but also lead to definite formulas for all thermodynamic functions of a given substance. Second, the postulates are more directly related to the fundamental dynamics than the laws of thermodynamics.
4.2.2 Density matrix
The purpose of introducing the density matrix is to
• describe statistical mechanics in the framework of quantum mechanics without the phases of the state;
• generalize the theory to any representation.
With the density matrix, one may reformulate the theory of the previous subsection and generalize it to different ensembles.
Let us define the density matrix ρnm by

    ρnm ≡ (ϕn, ρϕm) ≡ δnm |bn|²

where {ϕn} are the eigen-functions of energy. In Dirac notation,

    ρ = ∑_n |ϕn⟩ |bn|² ⟨ϕn|.

Here the sum is over all possible energy eigen-states; however, |bn|² is non-zero only for E < En < E + ∆. Then

    ⟨O⟩ = ∑_n ⟨ϕn|Oρ|ϕn⟩ / ∑_n ⟨ϕn|ρ|ϕn⟩ = Tr(Oρ) / Tr ρ

where Tr is independent of the representation, and we have thus generalized the calculation of ⟨O⟩ to any representation.
Exercise: Prove it.
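A small numerical illustration of this trace formula (a sketch with numpy, not part of the notes): build a microcanonical ρ in the energy representation, rotate to an arbitrary orthonormal basis, and check that Tr(Oρ)/Tr ρ is unchanged.

    import numpy as np

    rng = np.random.default_rng(1)
    dim = 8

    # energy eigenvalues and a window (E, E + Delta) defining the microcanonical shell
    energies = np.linspace(0.0, 10.0, dim)
    E, Delta = 3.0, 4.0
    bn2 = ((energies > E) & (energies < E + Delta)).astype(float)

    # density matrix in the energy representation: rho_nm = delta_nm |b_n|^2
    rho = np.diag(bn2)

    # an arbitrary Hermitian observable in the energy representation
    A = rng.normal(size=(dim, dim))
    O = (A + A.T) / 2
    avg_energy_basis = np.trace(O @ rho) / np.trace(rho)

    # rotate to another orthonormal basis; the trace formula gives the same average
    T, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    rho_p = T.T @ rho @ T
    O_p = T.T @ O @ T
    avg_other_basis = np.trace(O_p @ rho_p) / np.trace(rho_p)

    print(np.isclose(avg_energy_basis, avg_other_basis))   # True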

This formulation can be generalized, and different macroscopic conditions correspond to different |bn|², i.e., different ensembles.
4.3 Ensembles
4.3.1 Microcanonical ensemble
Setting the constant in |bn|² to be 1,

    ρ = ∑_{E<En<E+∆} |ϕn⟩⟨ϕn|

where H|ϕn⟩ = En|ϕn⟩, and the sum is over the energy eigen-states, not the energy levels! We now define

    Γ(E) = Tr ρ

Obviously, Γ(E) is the number of eigen-states with energy between E and E + ∆, since

    Tr ρ = ∑_n ⟨ϕn|ρ|ϕn⟩
         = ∑_n ∑_{E<Em<E+∆} ⟨ϕn|ϕm⟩⟨ϕm|ϕn⟩
         = ∑_{E<En<E+∆} 1

To derive thermodynamics, we simply identify the entropy

    S(E, V) = k log Γ(E)

For a general observable, including ones non-local in space such as correlation functions,

    ⟨O⟩ = Tr(Oρ) / Tr ρ

and this may go beyond thermodynamics.
4.3.2 Canonical ensemble
In the derivation of the canonical ensemble in classical statistical mechanics, we replace

    (1/N!) ∫ dp dq → ∑_n

then

    ρnm = δnm e^{−βEn},    β = 1/kT

In Dirac notation,

    ρ = ∑_n |ϕn⟩ e^{−βEn} ⟨ϕn| = e^{−βH} ∑_n |ϕn⟩⟨ϕn| = e^{−βH}

Here we note that the operator exp(−βH) is defined by

    e^{−βH} = ∑_n (1/n!) (−βH)^n
The partition function

    QN(V, T) = Tr ρ = Tr e^{−βH} = ∑_n ⟨ϕn|e^{−βH}|ϕn⟩ = ∑_n e^{−βEn}

Here the sum over n runs over all energy eigen-states, not the energy eigenvalues. For a macroscopic system, this is very different.
For an observable,

    ⟨O⟩ = Tr(Oρ) / Tr ρ = (1/QN) Tr(O e^{−βH})

In the energy representation,

    ⟨O⟩ = (1/QN) ∑_n Onn e^{−βEn}

which is very similar to the formalism in classical statistical mechanics, but counting only the energy eigen-states.
For a two-level system without degeneracy, e.g., E0 = 0 and E1 = ϵ, the partition function is

    Z = 1 + exp(−βϵ)

the internal energy

    U = ⟨E⟩ = ϵ exp(−βϵ) / [1 + exp(−βϵ)]

the free energy

    A = −β⁻¹ ln Z = −β⁻¹ ln[1 + exp(−βϵ)]

the entropy

    S = −∂A/∂T = k ln Z + ϵ exp(−βϵ) / {T [1 + exp(−βϵ)]}

thus we may verify

    A = U − T S = −kT ln Z
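These formulas are easy to verify numerically. A minimal Python sketch (with k and ϵ set to 1 as arbitrary choices, not from the notes):

    import numpy as np

    k = 1.0      # Boltzmann constant (units chosen so that k = 1)
    eps = 1.0    # level spacing E1 - E0 (arbitrary choice)
    T = 0.7
    beta = 1.0 / (k * T)

    Z = 1.0 + np.exp(-beta * eps)                            # partition function
    U = eps * np.exp(-beta * eps) / Z                        # internal energy
    A = -k * T * np.log(Z)                                   # free energy
    S = k * np.log(Z) + eps * np.exp(-beta * eps) / (T * Z)  # entropy

    # A = U - T S should hold identically
    print(np.isclose(A, U - T * S))          # True

    # the entropy also follows from a numerical derivative of A
    dT = 1e-6
    A2 = -k * (T + dT) * np.log(1.0 + np.exp(-eps / (k * (T + dT))))
    print(np.isclose(S, -(A2 - A) / dT, rtol=1e-4))   # True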

4.3.3 Grand canonical ensemble

The grand partition function

    Z(µ, V, T) = ∑_{N=0}^{∞} e^{βµN} QN(V, T)

    ⟨O⟩ = (1/Z(µ, V, T)) ∑_{N=0}^{∞} e^{βµN} ⟨O⟩N

where ⟨O⟩N is the canonical ensemble average, and z = exp(βµ) is called the fugacity.
More generally,

    Z(µ, V, T) = Tr e^{−β(H−µN)}

    ⟨O⟩ = (1/Z(µ, V, T)) Tr[O e^{−β(H−µN)}]

i.e., the density matrix is

    ρ = e^{−β(H−µN)}

For a system consisting of N quasi-independent two-level subsystems, the internal energy is

    U = ⟨E⟩ = N ϵ exp(−βϵ) / [1 + exp(−βϵ)]

From the partition function QN = Z^N, with Z the partition function of a single two-level subsystem, and the free energy A = −kT ln QN, we derive the chemical potential

    µ = (∂A/∂N)_T = −kT ln[1 + exp(−βϵ)]

Therefore, µ is negative!

4.3.4 Simple derivation of ensembles

Consider a one-particle system or a “small” system in contact with a many-particle or large thermal reservoir, with total energy

    U = En + UR

where En is the energy of the particle in state n, and UR is that of the reservoir.
Basic question: what is the probability that the small system is in a particular state with energy En?
Answer: the probability of finding the small system in a state with energy En is proportional to the corresponding number of states of the reservoir!!

    ρn ∝ ΩR(U − En)

Here ΩR(U − En) is the number of states of the reservoir when the particle has the energy En.
Let us expand

    k log ΩR(U − En) = k log ΩR(U) − En [k ∂log ΩR(UR)/∂UR]|_{UR=U} + ···
                     = SR(U) − En [∂SR(UR)/∂UR]|_{UR=U}
                     = SR(U) − En/T

Thus,

    ρn ∝ e^{−En/kT}

This can be considered as either the Boltzmann distribution or the canonical distribution. Similarly, one may obtain the grand canonical distribution,

    ρn(N) ∝ e^{(µN−En)/kT}

4.3.5 The third law of thermodynamics

For an isolated system, the entropy is defined by

    S(E, V) = k log Γ(E)

At the absolute zero of temperature, the system is in its ground state. For discrete energy eigenvalues, Γ(E) is nothing but the degeneracy of the ground state, G. Usually, G ≤ N; therefore, S ≤ k log N. Actually, such a statement is valid at sufficiently low temperatures,

    kT ≪ ∆E = E1 − E0

where ∆E is the energy difference between the first excited state and the ground state. In other words, the third law holds, since the entropy per particle is of order (log N)/N near the absolute zero of temperature.

For a continuous energy spectrum, we must study the density of states, ω(E), around the ground state. Almost all substances known to us become crystalline solids near the absolute zero of temperature, and the thermodynamic functions near absolute zero may be obtained through Debye's theory. It can be proved that CV ∼ T³; therefore S = ∫₀^T dT′ CV/T′. The third law holds.

It is not possible to give a universal proof of the third law of thermodynamics.

4.4 Ideal gases: microcanonical ensemble

4.4.1 Quantum states
The system: N identical particles in quantum mechanics,

    H = ∑_{i=1}^{N} pi²/2m.
The particles are indistinguishable. Therefore, in nature there are two types of identical particles:
• Bosons
Wave functions are symmetric under an interchange of any pair of particle coordinates.
• Fermions
Wave functions are anti-symmetric under an interchange of any pair of particle coordinates.
For example, N = 2: let the single-particle wave functions be

    ψ1(x1), ψ2(x2)

In general, the wave function of the two particles cannot be written as

    ψ(x1, x2) = ψ1(x1)ψ2(x2)

Assume ψ1(x) and ψ2(x) are orthonormal. Then for

Bosons:

    ψ(x1, x2) = (1/√2!) (ψ1(x1)ψ2(x2) + ψ1(x2)ψ2(x1))

Fermions:

    ψ(x1, x2) = (1/√2!) (ψ1(x1)ψ2(x2) − ψ1(x2)ψ2(x1))

If ψ1(x) = ψ2(x), then for fermions ψ(x1, x2) = 0

    =⇒ the Pauli principle

Question: why are identical microscopic particles not
distinguishable?
Hint: identical macroscopic particles are localized.

Thermodynamics: we must compute Γ(E), the number of eigen-states with energy between E and E + ∆.
Assume the particles are spinless; then a single-particle energy is

    ϵp = p²/2m

where p ≡ |p⃗| and p⃗ is the momentum of the particle,

    p⃗ = (2πℏ/L) m⃗   ⇔   px = 2πℏmx/L,  py = 2πℏmy/L,  pz = 2πℏmz/L

    mx, my, mz = 0, ±1, ±2, ···

L is the size of the cubic box.
For example, the plane wave function in one dimension is

    e^{i px x/ℏ}

The periodic boundary condition

    e^{i px x/ℏ} = e^{i px (x+L)/ℏ}

    ∴ e^{i px L/ℏ} = e^{i mx 2π},   mx = 0, ±1, ±2, ···
    ∴ px = 2πℏ mx/L

In the limit L → ∞,

    ∑_{p⃗} = ∑_{m⃗} → (V/h³) ∫ d³p,    h = 2πℏ
Since the particles are indistinguishable, a microscopic state of an ideal gas can be uniquely specified by a set of occupation numbers {np⃗}, with np⃗ being the number of particles having momentum p⃗. This is different from a classical ideal gas, or the ideal Boltzmann gas discussed below.
Obviously, the total energy and number of particles are

    E = ∑_{p⃗} ϵp⃗ np⃗
    N = ∑_{p⃗} np⃗

    np⃗ = { 0, 1, 2, ···   bosons
          { 0, 1           fermions

4.4.2 Distributions of particles

Let us divide the energy spectrum

    ϵp = p²/2m

into groups of levels containing respectively g1, g2, ··· states, as shown in Fig. 4.1. Each gi is assumed to be very large.
Each group is called a cell; the i-th cell has an average energy ϵi and occupation number ni. Then the number of states is

    Γ(E) = ∑_{{ni}} W{ni}

[Figure 4.1: the energy spectrum divided into cells containing g1, g2, ··· states each.]
W{ni} ≡ the number of states corresponding to {ni}.
The summation over {ni} satisfies the conditions

    E = ∑_i ϵi ni
    N = ∑_i ni

In other words, we may consider ϵi to label an energy level with degeneracy gi, i.e., there are gi different states at the energy level ϵi.

Question: why do we assume each gi to be very large?

How to compute W{ni}?

The key is to find ωi: the number of ways in which ni particles can be assigned to the i-th cell with gi states, i.e., the number of ways to distribute ni particles among the gi states inside the i-th cell.
It is important that in quantum mechanics interchanging particles does not lead to new states of the system.
• Keep this in mind in computing ωi.
• For a fixed set of {ni}, we simply have

    W{ni} = ∏_i ωi

Bosons

We put the states and particles on a line,

    | · · · · · | · · | · · · · · · | | · | · · ·

where | denotes a state and · denotes a particle. The particles belong to the state on their left, so the first position on the left can only be a state.
If the particles were distinguishable and the ordering of the states were relevant, there would be gi ways to put a state in the first position on the left, and (ni + gi − 1)! ways to assign the ni particles and the remaining gi − 1 states to the other (ni + gi − 1) positions.
However, the particles are not distinguishable: there are ni! permutations of the ni particles which do not lead to new states. Taking into account that the ordering of the states is also irrelevant, a factor of gi! should be divided out,

    ωi = gi (ni + gi − 1)! / (ni! gi!) = (ni + gi − 1)! / (ni! (gi − 1)!)

Therefore

    W{ni} = ∏_i ωi = ∏_i (ni + gi − 1)! / (ni! (gi − 1)!)

In fact, ωi is nothing but the combinatorial number for selecting ni particles and gi − 1 states from ni + gi − 1 positions.
Question: why?
There are ni + gi ordered positions, but the first position has to be a state. In the remaining ni + gi − 1 positions, select gi − 1 to be states, and the others to be particles. Since the ordering of both the states and the particles is irrelevant, ωi is the combinatorial number.
For example, by simple enumeration,

    gi = 1:  ωi = 1
    gi = 2:  ωi = ni + 1
    gi = 3, ni = 1:  ωi = 3
    gi = 3, ni = 2:  ωi = 3 (|··) + 3 (|·|·) = 6
    gi = 3, ni = 3:  ωi = 3 (|···) + 6 (|··|·) + 1 (|·|·|·) = 10
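These enumerations are easy to check by brute force. A minimal Python sketch (not part of the original notes) counts the microscopic states directly and compares with the combinatorial formulas, including the Fermi counting gi!/(ni!(gi − ni)!) used in the next subsection:

    from itertools import combinations, combinations_with_replacement
    from math import comb

    def omega_bose(n, g):
        """Number of ways to put n indistinguishable bosons into g states."""
        return sum(1 for _ in combinations_with_replacement(range(g), n))

    def omega_fermi(n, g):
        """Number of ways to put n fermions into g states (at most one per state)."""
        return sum(1 for _ in combinations(range(g), n))

    for g, n in [(1, 4), (2, 5), (3, 1), (3, 2), (3, 3)]:
        assert omega_bose(n, g) == comb(n + g - 1, n)   # (n+g-1)! / (n!(g-1)!)
        assert omega_fermi(n, g) == comb(g, n)          # g! / (n!(g-n)!), 0 if n > g
        print(g, n, omega_bose(n, g), omega_fermi(n, g))
    # the boson counts reproduce the enumerated values: (3,1) -> 3, (3,2) -> 6, (3,3) -> 10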

Fermions

The number of particles assigned to a state can only be 0 or 1. If ni > gi, the configuration is not allowed by the Pauli principle.
If ni ≤ gi, we put the gi states on a line, with the ni states on the left occupied by particles,

    |·|·|·|·| | | | |

There are gi! ways to order gi states on a line. If the particles were distinguishable and ni = gi, gi! would be the number of states. In general, the particles are not distinguishable and ni ≤ gi, so the ordering of the occupied states and of the empty states is irrelevant,

    ωi = gi! / (ni! (gi − ni)!),    W{ni} = ∏_i ωi

In fact, ωi is nothing but the combinatorial number for selecting the ni states occupied by particles from the gi states.

4.4.3 The most probable distribution

The entropy

    S = k log Γ(E) = k log ∑_{{ni}} W{ni}

In equilibrium and in the thermodynamic limit, the most probable distribution {n̄i} dominates,

    S = k log W{n̄i}

Note: we require n̄i to be a large number, while np⃗ is not.

How to maximize W{ni}?

Fermions

    log W{ni} = ∑_i [log gi! − log ni! − log(gi − ni)!]

    ∵ log ni! ≃ ni (log ni − 1)

    ∴ log W{ni} = ∑_i [gi (log gi − 1) − ni (log ni − 1) − (gi − ni)(log(gi − ni) − 1)]
                = ∑_i [gi log gi − ni log ni − (gi − ni) log(gi − ni)]

The method of Lagrange multipliers:

    δ[log W{ni}] − δ(α ∑_i ni + β ∑_i ϵi ni) = 0

    δ[log W{ni}] = ∑_i [−(log ni + 1) + (log(gi − ni) + 1)] δni
                 = ∑_i log[(gi − ni)/ni] δni

    ∴ log[(gi − n̄i)/n̄i] = α + βϵi

    =⇒ n̄i = gi / (e^{α+βϵi} + 1)
hence

    n̄p⃗ = 1 / (e^{α+βϵp⃗} + 1),    α = −βµ

Exercise: Show that for bosons

    n̄p⃗ = 1 / (e^{α+βϵp⃗} − 1)

The parameters β and α are determined by

    ∑_{p⃗} ϵp⃗ n̄p⃗ = E
    ∑_{p⃗} n̄p⃗ = N

The first equation leads to the identification β = 1/kT, and the second identifies α = −βµ. With n̄p⃗ at hand, we are able to calculate all observables.

Question: how to identify β and α?

Question: how do the particle distributions for bosons and fermions reduce to the classical Boltzmann distribution?
Hint: at high temperatures, the average number of particles in a state is very small.
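This limit is easy to see numerically. A minimal Python sketch (with arbitrarily chosen α and β, not from the notes) compares the three occupation numbers:

    import numpy as np

    beta = 1.0           # inverse temperature (arbitrary units)
    alpha = 4.0          # alpha = -beta*mu; large alpha means the dilute, high-temperature regime
    eps = np.linspace(0.0, 5.0, 6)   # a few single-particle energies

    x = alpha + beta * eps
    n_fd = 1.0 / (np.exp(x) + 1.0)   # fermions
    n_be = 1.0 / (np.exp(x) - 1.0)   # bosons
    n_mb = np.exp(-x)                # Boltzmann limit

    for e, f, b, m in zip(eps, n_fd, n_be, n_mb):
        print(f"eps={e:4.1f}  FD={f:.5f}  BE={b:.5f}  MB={m:.5f}")
    # all three agree to a few percent once exp(alpha + beta*eps) >> 1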

Question: A quiz with 20 multiple-choice questions is given to students of statistical mechanics, and each question comes with 5 choices. If three students sitting next to each other are found to be wrong on the same six questions, with the same wrong choice for each question, what is the probability of such an occurrence in the absence of cheating?
Here it is assumed that each of the three students made wrong choices on exactly six questions.
Hint: for one student, the number of possible patterns of wrong choices for six questions is

    N = [20!/(14! 6!)] · 4⁶ ≈ 10⁸
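Evaluating the hint numerically (a sketch; on one common reading of the question, the probability is roughly (1/N)², the chance that the second and third students independently reproduce the first student's pattern):

    from math import comb

    # ways one student can be wrong on exactly 6 of 20 questions,
    # with 4 possible wrong choices per missed question
    N = comb(20, 6) * 4**6
    print(N)                  # 158760960, i.e. about 1e8

    # probability that two further students independently show the same pattern
    print((1.0 / N) ** 2)     # about 4e-17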

4.5 The ideal Boltzmann gas

It is essentially a “classical” system.

4.5.1 Quantum states

Assume the particles are distinguishable and do not obey the Pauli principle; then a set of {np⃗} specifies N!/(∏_{p⃗} np⃗!) states. This is different from bosons and fermions, for which a set of {np⃗} uniquely corresponds to a single microscopic state.

4.5.2 The distribution of particles

First place the particles into K cells, with ni the number of particles in the i-th cell. Thus there are N!/∏_{i=1}^{K} ni! ways to realize a set of {ni}.
In the i-th cell there are gi microscopic states. Therefore, there are gi^{ni} ways to put ni particles onto gi states, since each particle can be put into any of the gi states. In total, the number of states for {ni} is

    N! ∏_i gi^{ni}/ni!

Taking into account the correct Boltzmann counting (a factor 1/N!),

    W{ni} = ∏_i gi^{ni}/ni!

Question: why N!/∏_{i=1}^{K} ni!?

1. if all ni = 1;
2. if ni = 1 for i ≤ K − 1, nK ≠ 1;
3. if ni = 1 for i ≤ K − 2, nK−1 ≠ 1, nK ≠ 1;
4. in general,

    N!/∏_{i=1}^{K} ni!

Question: compare bosons and Boltzmann particles.

4.5.3 Exercise
Prove that n̄p⃗ = e^{−α−βϵp⃗}.

4.5.4 Thermodynamics
For quasi-independent particles, we consider that particles in a cell contribute the same to physical observables.

    N = ∑_i gi e^{−α−βϵi} = e^{−α} ∑_{p⃗} e^{−βϵp⃗}
      = e^{−α} (V/h³) ∫ d³p e^{−βp²/2m}
      = e^{−α} (V/h³) ∫ dΩ ∫₀^{+∞} dp p² e^{−βp²/2m}

Here ∫ dΩ = 4π.
Exercise: Calculate the integrals

    ∫₀^∞ dp p² e^{−cp²}
    ∫₀^∞ dp p⁴ e^{−cp²}

This kind of integral is important in quantum statistical mechanics.
Hint: suppose ∫₀^∞ dp e^{−cp²} can be picked up from a handbook.
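For reference, a numerical cross-check (a sketch, not part of the exercise solution): differentiating ∫₀^∞ e^{−cp²} dp = (1/2)√(π/c) with respect to c gives ∫₀^∞ p² e^{−cp²} dp = (1/4)√(π/c³) and ∫₀^∞ p⁴ e^{−cp²} dp = (3/8)√(π/c⁵), which can be verified with scipy:

    import numpy as np
    from scipy.integrate import quad

    c = 2.3   # arbitrary positive constant

    i2_num, _ = quad(lambda p: p**2 * np.exp(-c * p**2), 0, np.inf)
    i4_num, _ = quad(lambda p: p**4 * np.exp(-c * p**2), 0, np.inf)

    i2_exact = 0.25 * np.sqrt(np.pi / c**3)    # from d/dc of the Gaussian integral
    i4_exact = 0.375 * np.sqrt(np.pi / c**5)   # one more derivative with respect to c

    print(np.isclose(i2_num, i2_exact), np.isclose(i4_num, i4_exact))  # True True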
Therefore

    N = e^{−α} V/λ³,    λ = (2πℏ²/mkT)^{1/2}

Similarly,

    E = e^{−α} ∑_i gi ϵi e^{−βϵi} = e^{−α} ∑_{p⃗} ϵp⃗ e^{−βϵp⃗}
      = e^{−α} (V/h³) ∫₀^∞ dp 4πp² (p²/2m) e^{−βp²/2m}
      = 3N/(2β)

The entropy

    S/k = log W{n̄i} = log ∏_i gi^{n̄i}/n̄i!
        = ∑_i [log gi^{n̄i} − log n̄i!]
        ≃ ∑_i n̄i log(gi/n̄i)
        = ∑_i gi e^{−α−βϵi} (α + βϵi)
        = ∑_{p⃗} e^{−α−βϵp⃗} (α + βϵp⃗)
        = βE + αN
        = 3N/2 − N log[(N/V)(2πℏ²/mkT)^{3/2}]

The result is the same as in classical statistical mechanics. From E = 3N/(2β), one identifies β = 1/kT. In addition, one may derive α = −βµ, since the distribution of the Boltzmann particles is the same as that of the classical particles.
Question: how to identify α = −βµ?

4.6 Ideal gases: grand canonical ensemble
4.6.1 Grand partition function



    Z(µ, V, T) = ∑_{N=0}^{∞} e^{βµN} QN(V, T)
               = ∑_{N=0}^{∞} e^{βµN} ∑_{{np⃗}, ∑np⃗=N} e^{−β ∑_{p⃗} ϵp⃗ np⃗}
               = ∑_{N=0}^{∞} ∑_{{np⃗}, ∑np⃗=N} ∏_{p⃗} (e^{βµ−βϵp⃗})^{np⃗}

• In quantum mechanics, a set of {np⃗} corresponds to just one state.
• Identity:

    ∑_{N=0}^{∞} ∑_{{np⃗}, ∑np⃗=N} = ∑_{n0} ∑_{n1} ···

Every term on the right-hand side uniquely corresponds to one on the left-hand side, e.g., p⃗ takes 3 states:

    N = 0:  (0, 0, 0)
    N = 1:  (1, 0, 0), (0, 1, 0), (0, 0, 1)

    Z(µ, V, T) = ∑_{n0} ∑_{n1} ··· (e^{β(µ−ϵ0)})^{n0} (e^{β(µ−ϵ1)})^{n1} ···
               = [∑_{n0} (e^{β(µ−ϵ0)})^{n0}] [∑_{n1} (e^{β(µ−ϵ1)})^{n1}] ···
               = ∏_{p⃗} [∑_n (e^{βµ−βϵp⃗})^n]

    n = 0, 1, 2, ···  for bosons;   n = 0, 1  for fermions

    ∴ Z(µ, V, T) = { ∏_{p⃗} (1 − e^{βµ−βϵp⃗})^{−1}   bosons
                   { ∏_{p⃗} (1 + e^{βµ−βϵp⃗})        fermions

then

    PV/kT = log Z(µ, V, T)

    N̄ = (1/Z(µ, V, T)) ∑_{N=0}^{∞} ∑_{{np⃗}, ∑np⃗=N} N e^{βµN} e^{−β ∑ ϵp⃗ np⃗}
       = (1/β) ∂ log Z(µ, V, T)/∂µ

    n̄p⃗ = (1/Z(µ, V, T)) ∑_{N=0}^{∞} ∑_{{np⃗}, ∑np⃗=N} np⃗ e^{βµN} e^{−β ∑ ϵp⃗ np⃗}
        = −(1/β) ∂ log Z(µ, V, T)/∂ϵp⃗
        = 1 / (e^{−βµ+βϵp⃗} ∓ 1)

(∓: upper sign for bosons, lower sign for fermions.)
All results are equivalent to those in the microcanonical ensemble.
Note: since n̄p⃗ ≥ 0, it must be that e^{βµ} ≥ 0 for both bosons and fermions, i.e., µ is real. For example, for a high energy ϵp⃗, the ∓1 is negligible.
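As a small consistency check of the factorized form (a sketch, not from the notes): for fermions, each np⃗ is 0 or 1, so with only a handful of single-particle levels the sum over occupation numbers can be done by brute force and compared with ∏_{p⃗}(1 + e^{βµ−βϵp⃗}) and with the Fermi-Dirac n̄p⃗.

    import numpy as np
    from itertools import product

    beta, mu = 1.3, -0.4                      # arbitrary inverse temperature and chemical potential
    eps = np.array([0.0, 0.5, 1.2, 2.0])      # a few single-particle levels (fermions)

    # brute force: sum over all occupation-number sets {n_p}, n_p = 0 or 1
    Z_brute = 0.0
    n_avg = np.zeros_like(eps)
    for occ in product([0, 1], repeat=len(eps)):
        occ = np.array(occ)
        w = np.exp(beta * mu * occ.sum() - beta * np.dot(eps, occ))
        Z_brute += w
        n_avg += occ * w
    n_avg /= Z_brute

    # factorized grand partition function and Fermi-Dirac occupations
    Z_factor = np.prod(1.0 + np.exp(beta * (mu - eps)))
    n_fd = 1.0 / (np.exp(beta * (eps - mu)) + 1.0)

    print(np.isclose(Z_brute, Z_factor))      # True
    print(np.allclose(n_avg, n_fd))           # True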

4.6.2 Equations of state

If the summand is finite for all p⃗, it is possible to replace

    ∑_{p⃗} → (V/h³) ∫ d³p

in the limit V → ∞. Note that V should be measured on the microscopic scale.
Fermi gas

    PV/kT = ∑_{p⃗} log(1 + e^{βµ−βϵp⃗})
          = (V/h³) ∫ dΩ ∫ dp p² log(1 + e^{βµ−βp²/2m})

Therefore

    P/kT = (4π/h³) ∫₀^∞ dp p² log(1 + e^{βµ−βp²/2m})

    N̄/V = (4π/h³) ∫₀^∞ dp p² (e^{−βµ+βp²/2m} + 1)^{−1}

Eliminating µ, we obtain the equation of state.
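Eliminating µ in closed form is not possible, but it is straightforward numerically. A sketch (with arbitrary units ℏ = m = k = 1 and a hypothetical density, not from the notes): solve the second equation for µ at given N̄/V and T by root finding, then insert µ into the first equation for the pressure.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq
    from scipy.special import expit   # expit(x) = 1/(1 + e^{-x}), a stable Fermi factor

    hbar = m = kB = 1.0               # arbitrary units, chosen only for illustration
    h = 2 * np.pi * hbar
    T, density = 1.0, 0.5             # hypothetical temperature and target N/V
    beta = 1.0 / (kB * T)

    def n_of_mu(mu):
        # N/V = (4 pi / h^3) Int p^2 / (exp(beta(eps - mu)) + 1) dp
        integrand = lambda p: p**2 * expit(beta * (mu - p**2 / (2 * m)))
        return 4 * np.pi / h**3 * quad(integrand, 0.0, np.inf)[0]

    def pressure_of_mu(mu):
        # P/kT = (4 pi / h^3) Int p^2 log(1 + exp(beta(mu - eps))) dp
        integrand = lambda p: p**2 * np.log1p(np.exp(beta * (mu - p**2 / (2 * m))))
        return kB * T * 4 * np.pi / h**3 * quad(integrand, 0.0, np.inf)[0]

    mu = brentq(lambda x: n_of_mu(x) - density, -20.0, 20.0)   # eliminate mu numerically
    print("mu =", mu, "  P =", pressure_of_mu(mu))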

Bose gas

    PV/kT = −∑_{p⃗} log(1 − e^{βµ−βϵp⃗})

At p⃗ = 0, log(1 − e^{βµ}) diverges when µ → 0. Therefore, we separate out the mode p⃗ = 0:

    P/kT = −(4π/h³) ∫₀^∞ dp p² log(1 − e^{βµ−βp²/2m}) − (1/V) log(1 − e^{βµ})

Similarly,

    N̄/V = (4π/h³) ∫₀^∞ dp p² / (e^{−βµ+βp²/2m} − 1) + (1/V) e^{βµ}/(1 − e^{βµ})

Actually,

    ⟨n0⟩ = e^{βµ}/(1 − e^{βµ})

If ⟨n0⟩/N is a finite number, it gives rise to Bose-Einstein condensation.

4.6.3 Internal energy

It can be calculated from

    U(µ, V, T) = (1/Z(µ, V, T)) ∑_{N=0}^{∞} ∑_{{np⃗}, ∑np⃗=N} e^{βµN} e^{−β ∑ ϵp⃗ np⃗} ∑_{p⃗} ϵp⃗ np⃗
               = −∂[log Z(µ, V, T)]/∂β

In the derivative, the fugacity z = exp(βµ) is held fixed. To express u = U/V in terms of N̄, V, and T, however, we must eliminate µ. That is complicated.

4.7 Classical limit of the partition function
Let the Hamiltonian of a system of N identical spinless particles be the sum of two operators, the kinetic energy K and the potential energy Ω,

    H = K + Ω

We intend to prove that when the temperature is sufficiently high, we can make the approximation

    QN(V, T) = Tr e^{−H/kT} ≈ (1/(N! h^{3N})) ∫ d^{3N}p d^{3N}r e^{−H(p,r)/kT}

where H(p, r) on the right side is the classical Hamiltonian.

4.7.1 Free particles

There is only the kinetic energy,

    K = −(ℏ²/2m) ∑_{i=1}^{N} ∇i²

Whenever convenient, we use the abbreviation (1, ..., N) for (r1, ..., rN), and p for p1, ..., pN. The energy-eigenvalue equation is

    K Φp(1, ..., N) = Kp Φp(1, ..., N)

where

    Kp ≡ (1/2m) ∑_{i=1}^{N} pi²

    Φp(1, ..., N) = (1/√N!) ∑_P δP [up1(P1) ··· upN(PN)]
                  = (1/√N!) ∑_P δP [uPp1(1) ··· uPpN(N)]

    up(r) = (1/√V) e^{ipr/ℏ}

The notation P denotes a permutation of the particles from (1, ..., N) to (P1, ..., PN). For bosons, δP = +1; for fermions, δP = +1 if P is even, while δP = −1 if P is odd. P is defined as even or odd according to whether it is equivalent to an even or an odd number of successive interchanges of particle pairs.
Here we note that the permutation on ri is the same as on pi. A permutation of the momenta does not produce a new state in quantum mechanics. Therefore, a sum over microscopic states is 1/N! times a sum over all the momenta independently,

    Tr e^{−H/kT} = ∑_states ⟨Φp|e^{−K/kT}|Φp⟩
                 = (V^N/(N! h^{3N})) ∫ d^{3N}p d^{3N}r |Φp(1, ..., N)|² e^{−Kp/kT}

Note that in the sum ∑_states, each state should be counted only once. That is why the 1/N! is needed if each momentum is integrated over independently.
For example, for N = 2,

    ∑_states ⟨Φp1,p2| ··· |Φp1,p2⟩ = (1/2) ∑_{p1=−∞}^{∞} ∑_{p2=−∞}^{∞} ⟨Φp1,p2| ··· |Φp1,p2⟩

since |Φp1,p2⟩ and |Φp2,p1⟩ correspond to the same state.

Using the representation of Φp(1, ..., N) above, we can rewrite

    |Φp(1, ..., N)|² = (1/N!) ∑_{P,P′} δP δP′ [up1∗(P1) uP′p1(1) ··· upN∗(PN) uP′pN(N)]

Since the sums over both P and P′ run over all possible permutations, one of them, e.g., that over P′, just yields a factor of N! after the integration,

    |Φp(1, ..., N)|² → ∑_P δP [up1∗(P1) up1(1) ··· upN∗(PN) upN(N)]
                     = (1/V^N) ∑_P δP exp{(i/ℏ)[p1·(r1 − Pr1) + ··· + pN·(rN − PrN)]}

For example, for N = 2 (dropping the overall factor 1/V²),

    [up1∗(1)up2∗(2) ± up1∗(2)up2∗(1)] · [up1(1)up2(2) ± up2(1)up1(2)]
    = (1 ± e^{i[p1·(r1−r2)+p2·(r2−r1)]/ℏ}) + (1 ± e^{i[p1·(r2−r1)+p2·(r1−r2)]/ℏ})

Since the kinetic energy is an even function of pi, one may change the sign of pi in the second term above; thus

    [up1∗(1)up2∗(2) ± up1∗(2)up2∗(1)] · [up1(1)up2(2) ± up2(1)up1(2)]
    → 2(1 ± e^{i[p1·(r1−r2)+p2·(r2−r1)]/ℏ})

Taking into account the factor exp(−Kp/kT), therefore, each momentum integral can be expressed in terms of the function

    f(r) ≡ ∫ d³p e^{−β(p²/2m)+ip·r/ℏ} / ∫ d³p e^{−βp²/2m} = e^{−πr²/λ²}

where λ = (2πℏ²/mkT)^{1/2} is the thermal wavelength. Therefore

    Tr e^{−βK} = (1/(N! h^{3N})) ∫ d^{3N}p d^{3N}r e^{−β(p1²+···+pN²)/2m} × ∑_P δP [f(r1 − Pr1) ··· f(rN − PrN)]

This is an exact identity.


For high temperatures, the above ∑ integrand may be
approximated as follows. The sum P contains N ! terms.
The term corresponding to the unit permutation is [f (0)]N =
1. A permutation interchanging a pair of coordinates
yields [f (ri − rj )]2 . Thus by enumerating the permuta-
tions in increasing order of the number of the coordi-
nates interchanged, we arrive at the expansion

δp [f (r1 − P r1 ) · · · f (rN − P rN )]
p
∑ ∑
=1± fij2 + fij fik fkj ± · · ·
i<j i,j,k

where fij ≡ f (ri − rj ), and the plus sign and minus sign
apply to bosons and fermions respectively. From the
form of f (r), fij vanishes rapidly for (ri − rj ) ≫ λ. For
the third term, please note that there are only three
indices. For sufficiently high temperatures, we have

−βK 1 3N −β(p21 +···+p2N )/2m
T re ≈ d3N
pd re
N !h3N
This proves the classical limit for an ideal gas.

To take into account the first quantum correction, we may write

    1 ± ∑_{i<j} fij² ≈ ∏_{i<j} (1 ± fij²) = exp(−β ∑_{i<j} ṽ±ij)

where

    ṽ±ij ≡ −kT log(1 ± fij²)

Therefore

    Tr e^{−βK} ≈ (1/(N! h^{3N})) ∫ d^{3N}p d^{3N}r exp{−β [∑_i pi²/2m + ∑_{i<j} ṽ±ij]}

The first quantum correction looks like an inter-particle potential, which is attractive for bosons and repulsive for fermions. But this potential is not a true one; it originates solely from the symmetry properties of the wave function.
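A small numerical illustration of this "statistical potential" (a sketch with kT = λ = 1 as arbitrary choices, not from the notes): evaluating ṽ±(r) = −kT log(1 ± e^{−2πr²/λ²}) shows that it is negative (attractive) for bosons, positive (repulsive) for fermions, and dies off on the scale of λ.

    import numpy as np

    kT = 1.0       # temperature in energy units (arbitrary choice)
    lam = 1.0      # thermal wavelength (arbitrary choice)

    r = np.linspace(0.25, 2.0, 8) * lam
    f2 = np.exp(-2.0 * np.pi * r**2 / lam**2)   # f(r)^2, with f(r) = exp(-pi r^2 / lambda^2)

    v_bose = -kT * np.log(1.0 + f2)    # attractive: negative everywhere
    v_fermi = -kT * np.log(1.0 - f2)   # repulsive: positive, and diverges as r -> 0 (excluded here)

    for ri, vb, vf in zip(r, v_bose, v_fermi):
        print(f"r/lambda={ri:4.2f}   v_bose={vb:8.4f}   v_fermi={vf:8.4f}")
    # both corrections become negligible once r exceeds about one thermal wavelength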

Exercises

• Assuming ψn = ∑_m ϕm tmn, prove T⁺T = T T⁺ = 1 and O = T O′ T⁺.

• Prove that Tr(O) is independent of the representation.

• Show that for bosons

    n̄p⃗ = 1 / (e^{α+βϵp⃗} − 1)

• Prove that for the ideal Boltzmann gas

    n̄p⃗ = e^{−α−βϵp⃗}

• Calculate the integrals

    ∫₀^∞ dp p² e^{−cp²}
    ∫₀^∞ dp p⁴ e^{−cp²}

  Hint: suppose ∫₀^∞ dp e^{−cp²} can be picked up from a handbook.

• Problems 8.2 and 8.5 in the textbook.
