
Courses in Physics

Thermodynamics & Statistical Physics

applied to Gases and Solids

Ph.W. Courteille
Universidade de São Paulo
Instituto de Física de São Carlos
12/06/2024

Preface
The script was developed for the course Physical Chemistry and Thermodynamics
of Solids (SFI5769) offered by the Institute of Physics of São Carlos (IFSC) of the
University of São Paulo (USP).
The course is intended for masters and PhD students in physics. The script
is a preliminary version continually being subject to corrections and modifications.
Error notifications and suggestions for improvement are always welcome. The script
incorporates exercises the solutions of which can be obtained from the author.

Information and announcements regarding the course will be published on the


website:
http://www.ifsc.usp.br/strontium/ -> Teaching -> Semester
The student’s assessment will be based on written tests and a seminar on a special
topic chosen by the student. In the seminar the student will present the chosen topic
in 15 minutes. He will also deliver a 4-page scientific paper in digital form. Possible
topics are:
- The Bose-Einstein condensation,
- Ultracold Fermi-gases,
- Dicke phase transitions,
- The Ising model,
- Heat engines,
- Non-equilibrium thermodynamics,
- Brownian motion,
- The Debye model,
- The electron gas model.

The following literature is recommended for preparation and further reading:


Ph.W. Courteille, script on Thermodynamics applied to gases and solids (2024)

R.T. DeHoff, Thermodynamics in Materials Science, Boca Raton: CRC/Taylor Francis (1985)
H.B. Callen, Thermodynamics, New York: Wiley (2006)
C. Kittel, Introduction to Solid State Physics, Hoboken: Wiley (2005)
A.R. West, Basic Solid State Chemistry, Chichester: Wiley (2006)
D. McQuarrie, Statistical Thermodynamics, New York: Harper & Row (1973)
Content

I Thermodynamics 1
1 Foundations and mathematical formalism 3
1.1 Phenomenological thermodynamics at equilibrium . . . . . . . . . . . 3
1.1.1 Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.2 Kinetic theory and microscopic interpretation of temperature . 7
1.1.3 Heat and work . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.2 Canonical formulation of thermodynamics . . . . . . . . . . . . . . . . 20
1.2.1 Tackling thermodynamic systems . . . . . . . . . . . . . . . . . 21
1.2.2 The laws of thermodynamics . . . . . . . . . . . . . . . . . . . 24
1.2.3 Thermodynamic potentials . . . . . . . . . . . . . . . . . . . . 28
1.2.4 Strategy for deriving thermodynamic relations . . . . . . . . . 33
1.2.5 Ideal gases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.2.6 Cyclic processes . . . . . . . . . . . . . . . . . . . . . . . . . . 41
1.2.7 Real gases, liquids and solids . . . . . . . . . . . . . . . . . . . 44
1.2.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1.3 Thermodynamic equilibrium . . . . . . . . . . . . . . . . . . . . . . . . 58
1.3.1 Conditions for equilibrium . . . . . . . . . . . . . . . . . . . . . 59
1.3.2 Entropy maximization in two-phase systems, chemical potential 60
1.3.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
1.4 Thermodynamic ensembles . . . . . . . . . . . . . . . . . . . . . . . . 62
1.4.1 Coupling of thermodynamic ensembles to reservoirs . . . . . . 63
1.4.2 Thermodynamic potentials associated to specific ensembles . . 64
1.4.3 Microcanonical ensemble . . . . . . . . . . . . . . . . . . . . . . 65
1.4.4 Canonical ensemble . . . . . . . . . . . . . . . . . . . . . . . . 69
1.4.5 Grand canonical ensemble . . . . . . . . . . . . . . . . . . . . . 71
1.4.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
1.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

2 Thermodynamics applied to fluids and solids 75


2.1 Unary heterogeneous systems . . . . . . . . . . . . . . . . . . . . . . . 75
2.1.1 Unary phase diagrams in P T -space . . . . . . . . . . . . . . . . 75
2.1.2 The Clausius-Clapeyron equation, latent heat . . . . . . . . . . 79
2.1.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.2 Multi-component, homogeneous, non-reacting systems . . . . . . . . . 84
2.2.1 The Gibbs-Duhem equation . . . . . . . . . . . . . . . . . . . . 84
2.2.2 Partial molal properties . . . . . . . . . . . . . . . . . . . . . . 85
2.2.3 Chemical potential in solutions . . . . . . . . . . . . . . . . . . 89
2.2.4 Models of real solutions . . . . . . . . . . . . . . . . . . . . . . 94
2.2.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

2.3 Multi-component, heterogeneous, non-reacting systems . . . . . . . . . 96
2.3.1 Equilibrium conditions . . . . . . . . . . . . . . . . . . . . . . . 96
2.3.2 Structure of phase diagrams . . . . . . . . . . . . . . . . . . . . 98
2.3.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
2.4 Continuous, non-uniform systems exposed to external forces . . . . . . 99
2.4.1 Thermodynamic densities . . . . . . . . . . . . . . . . . . . . . 99
2.4.2 Equilibrium conditions . . . . . . . . . . . . . . . . . . . . . . . 100
2.4.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
2.5 Reacting systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
2.5.1 Univariant chemical reactions in the gas phase . . . . . . . . . 106
2.5.2 Multi-variant chemical reactions in the gas phase . . . . . . . . 108
2.5.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
2.6 Classification of thermodynamic phase transitions . . . . . . . . . . . . 111
2.6.1 Solid-liquid-vapor . . . . . . . . . . . . . . . . . . . . . . . . . . 111
2.6.2 Bose-Einstein condensation . . . . . . . . . . . . . . . . . . . . 112
2.7 Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
2.7.1 Electrons in solids . . . . . . . . . . . . . . . . . . . . . . . . . 112
2.7.2 Plasmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
2.7.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
2.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

3 Appendices to ’Thermodynamics’ 117


3.1 Quantities and formulas in thermodynamics . . . . . . . . . . . . . . . 117
3.1.1 Statistical formulas . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.1.2 Polylogarithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.2 Data tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.2.1 Material data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.3 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

II Statistical Physics and Atomic Quantum Fields 121


4 Statistical thermodynamics 123
4.1 Microstates, macrostates, and entropy . . . . . . . . . . . . . . . . . . 124
4.1.1 Probabilities of microstates and the partition function . . . . . 124
4.1.2 Equilibrium in statistical thermodynamics . . . . . . . . . . . . 125
4.1.3 Thermodynamic potentials in canonical ensembles . . . . . . . 127
4.1.4 Two-level systems . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.1.5 Einstein-Debye model of solids . . . . . . . . . . . . . . . . . . 129
4.1.6 Maxwell-Boltzmann distribution of ideal gases . . . . . . . . . 131
4.1.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.2 Quantum statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.2.1 Wavefunction symmetrization and detailed balance . . . . . . . 135
4.2.2 Microcanonical ensembles of indistinguishable particles . . . . . 137
4.2.3 Density-of-states in a trapping potential . . . . . . . . . . . . . 141
4.2.4 Grand canonical ensembles of ideal quantum gases . . . . . . . 144
4.2.5 Thermodynamic limit and Riemann’s zeta function . . . . . . . 151
4.2.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.3 Condensation of an ideal Bose gas . . . . . . . . . . . . . . . . . . . . 154
4.3.1 Condensation of a gas confined in a box potential . . . . . . . . 154
4.3.2 Condensation of a harmonically confined gas . . . . . . . . . . 158
4.3.3 Density and momentum distribution for a Bose gas . . . . . . . 161
4.3.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.4 Quantum degeneracy of an ideal Fermi gas . . . . . . . . . . . . . . . 166
4.4.1 Chemical potential and Fermi radius for a harmonic trap . . . 166
4.4.2 Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
4.4.3 Entropy and heat capacity . . . . . . . . . . . . . . . . . . . . . 168
4.4.4 Density and momentum distribution for a Fermi gas . . . . . . 169
4.4.5 Density and momentum distribution for anharmonic potentials 174
4.4.6 Signatures for quantum degeneracy of a Fermi gas . . . . . . . 176
4.4.7 Fermi gas in reduced dimensions . . . . . . . . . . . . . . . . . 178
4.4.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
4.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
4.5.1 on quantum statistics . . . . . . . . . . . . . . . . . . . . . . . 180
4.5.2 on ideal quantum gases . . . . . . . . . . . . . . . . . . . . . . 180

5 Quantum theory of non-relativistic scalar particle fields 181


5.1 Quantizing scalar fields . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.1.1 Wavefunction (anti-)symmetrization for atom pairs . . . . . . . 182
5.1.2 Identical particles and exchange operator . . . . . . . . . . . . 184
5.1.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.2 Occupation number representation . . . . . . . . . . . . . . . . . . . . 189
5.2.1 Number states in the N -body Hilbert space . . . . . . . . . . . 189
5.2.2 Field operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.2.3 Correlations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.2.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.3 Quantum statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Part I

Thermodynamics

Chapter 1

Foundations and
mathematical formalism
Thermodynamics is a central branch of modern science, and its general laws govern
the physical and chemical processes which occur in our world. An important early
application of thermodynamics dealt with steam (or heat) engines, in which heat is
converted to mechanical energy. Phenomenological thermodynamics was developed in
the nineteenth and in the beginning of the twentieth century by Watt, Carnot, Clau-
sius, Joule, von Helmholtz, Lord Kelvin, Nernst, Boltzmann, and Gibbs, culminating
in the discovery of the Laws of Thermodynamics. These laws set general limits for
the conversion of one form of energy, for example heat or chemical energy, to another
one, for example mechanical work.

Figure 1.1: Reaction of a system to a sudden change in its environment.

The generic question addressed by thermodynamics is how a given system re-
sponds to environmental changes. Indeed, the sudden modification of an environment
will force the system to seek a new state of equilibrium, as illustrated in Fig. 1.1. On
the other hand, thermodynamics is limited to describing equilibrium. It does not tell
us step by step how the new equilibrium is reached, only what the final state will be.
Nevertheless, this is sufficient to establish complete phase diagrams, which are maps
of equilibrium states. Fig. 1.2 shows as an example the phase diagram of water.

1.1 Phenomenological thermodynamics at equilibrium
To begin with, we consider each system as a structureless glop endowed with proper-
ties to be identified and defined, such as temperature, pressure, composition, heat
capacity, expansion coefficient, compressibility, entropy, and various measures of the


system’s energy. The minimum set of properties on which information is necessary


to compute the state of the system depends on its complexity. For example, a unary
system, i.e. an ensemble of identical particles belonging to the same species, is com-
pletely characterized by its heat capacity, expansion coefficient, and compressibility,
while additional information is required for systems exhibiting several phases or made
up of several chemical components, or even subject to chemical reactions [13].

Figure 1.2: Phase diagram of water.

Fundamental concepts of thermodynamics, such as temperature, pressure, or heat


have been unraveled before the discovery that any type of matter consists of smallest
components, atoms and molecules. The fact that heat flows from hot to cold bodies
brought into contact is an everyday experience. Joseph Michel and Jacques Etienne
Montgolfier demonstrated that hot air is lighter than cold air. And James Watt
invented the steam engine before Amedeo Avogadro and John Dalton reintroduced
the notion of the atom, which had initially been postulated by Democritus around 500 B.C.
With the knowledge of the composition of any matter we could in principle give up the
phenomenologically derived laws that represent the framework of thermodynamics.
To characterize a system with N particles, we just need to specify the position and
velocity of each particle as well as the forces acting between them or exerted by
external fields. However, in the case of a macroscopic system, the number of particles
is extremely large and this task becomes very difficult.
An alternative approach to the problem is to work with averaged values, which
represent the behavior of a system as a whole. Let’s embark on this approach by
defining some macroscopic quantities that determine the state of the system. Let us
consider a gas made up of N molecules in a container of volume V . Microscopically,
the movement of each particle is rectilinear and uniform, until it collides with another
molecule or with the walls of the container. This type of movement is called Brow-
nian motion. The average distance that the particle travels between two successive
collisions is called the mean free path.
The collisions of the particles with the walls of the container result in momentum
transfers and, consequently, in an average force exerted onto the walls. Integrated over
a surface, this force generates a pressure that the gas exerts on the walls, and despite
its microscopic origin, the pressure represents a macroscopic quantity that describes
an average property of the global system. Apart from pressure, other macroscopic
quantities that are important for the description of a system are its volume, internal
energy and temperature, the two latter ones being associated with the translational,
vibrational and rotational movement of the particles. All these macroscopic quantities

can be measured experimentally, and the objective of thermodynamics is to establish


relationships between them, in order to predict the behavior of some quantities when
other quantities are changed.
When the macroscopic properties of a system do not change over time, we say it
is in thermodynamic equilibrium. In this case, the system of interest must be kept
in contact with a second system, called a reservoir or heat bath, which determines
the parameters of the equilibrium. The set of macroscopic quantities associated with
a system in equilibrium is called a macroscopic state. It should be noted that the
microscopic state of the system determines the macroscopic state, but the inverse
does not hold, because from average values it is impossible to find r and p for all
particles in the system.
Macroscopic quantities are somehow interconnected. To see this, we may consider
a piston containing a gas, as schematized in Fig. 1.3(a), and heat it. As a consequence,
the system temperature will increase. If we keep the position of the piston fixed, the
pressure will increase, as well. If, on the other hand, we leave the piston free to
move, the volume will increase. Thus, both the increase in pressure and volume
are consequences of an increasing temperature, from which we conclude that these
quantities are, in some way, related.

Figure 1.3: (a) Cylinder with a piston containing a gas. (b) Interaction between two systems
through a wall.

If we have two systems in thermal contact, it is important to know in which way


they interact. An interaction is often made through walls, as shown in Fig. 1.3(b).
If the wall is at a fixed position and the temperature of one of the systems is varied,
either (i) the temperature of the other system does not change, which is the case of
a perfectly insulating wall (also called adiabatic wall) or (ii) the temperature of the
other system follows the changes of the first, which is the case of a diathermal
wall. In case (ii), the temperatures of the two systems evolve, until they reach a
common value. When the temperatures of the two systems are equal, the systems are
considered to be at thermal equilibrium.

1.1.1 Temperature
Temperature is, in general, measured by observing some quantity which is sensitive
to its variation. Defining a temperature scale on this quantity in a particular sys-
tem, we construct a thermometer. An example is the mercury thermometer, where a
certain volume of liquid mercury is placed in a capillary glass tube and the thermal
expansion of mercury is observed as a function of temperature. The length L of the
mercury column (usually calibrated in degrees Celsius) varies approximately linearly

with temperature T , that is, T = aL + b, where a and b are two constants that depend
on chosen reference temperatures. Conventionally, the melting temperature of ice
(triple point) at 0 ◦C and the boiling temperature of water at 100 ◦C are used, so that,

a = 100/(Lv − Lg ) and b = −100 Lg /(Lv − Lg ) . (1.1)
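As a minimal numerical sketch (the column lengths Lg and Lv below are made-up values, not measured data), the calibration constants of Eq. (1.1) and the conversion of a column length into a temperature can be evaluated as follows:

# Linear calibration of a mercury thermometer, T = a*L + b, Eq. (1.1),
# using the ice point (0 C at column length Lg) and the boiling point
# of water (100 C at column length Lv) as reference points.

def calibrate(Lg, Lv):
    """Return the constants (a, b) of T = a*L + b."""
    a = 100.0 / (Lv - Lg)        # slope in C per unit length
    b = -100.0 * Lg / (Lv - Lg)  # offset such that T(Lg) = 0 C
    return a, b

def temperature(L, a, b):
    """Convert a column length L into a temperature in Celsius."""
    return a * L + b

Lg, Lv = 2.0, 12.0               # hypothetical column lengths in cm
a, b = calibrate(Lg, Lv)
print(temperature(7.0, a, b))    # mid-column reading -> 50.0 C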

The gas thermometer illustrated in Fig. 1.4 is another possible realization, in which
the volume of a gas is used as thermometric quantity.

Figure 1.4: (a) Gas thermometer.

The aforementioned Celsius temperature scale is widely used in everyday life.


In scientific applications, the Kelvin (or absolute) scale is mostly employed. We
will see later that it is based on microscopic properties of matter. The zero of
this scale corresponds to the temperature at which all energy (except zero point
fluctuations) is removed from the system. The scale is related to the Celsius scale
through the expression TK = (1 K/1 ◦C) TC + 273.15 K. Another scale, only used in
the USA, is the Fahrenheit scale, which is related to the Celsius scale through the
expression TF = (9/5)(◦F/◦C) TC + 32 ◦F.

1.1.1.1 Ideal and real gases


The equation of state of a system is a mathematical relationship between the various
macroscopic quantities that define the state of the system. In general, knowing the
state equation allows to compute all thermodynamic properties of the system.
For gases with very low pressures, interactions between molecules in the system
can be neglected. In this case, the gas is called ideal and the relationship between
the macroscopic quantities that define its thermodynamic state is given by,

P V = N kB T . (1.2)

where T is the absolute temperature (in K), N is the number of molecules contained
in the volume V , and kB is the Boltzmann constant. This is the most famous equation
of state, and it is based on experimental observations of Boyle, Mariotte, and Gay-
Lussac. Eq. (1.3) can also be written in terms of the number of moles, which is a
quantity defined by n = N/NA , where NA is Avogadro’s number. In this case,

P V = nRg T , (1.3)

where Rg = NA kB = 8.314 J/mol K = 0.082 atm · l/mol K is called universal gas


constant.
The gas thermometer and the barometric formula are examples of the numerous
applications of the ideal gas law. In Excs. 1.1.4.1 and 1.1.4.2 we will study gas
thermometers and in Excs. 1.1.4.3 to 1.1.4.6 applications of the barometric formula.
Real gas models will be studied in Sec. 1.2.7.
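As a minimal sketch of how Eq. (1.2) is used in practice (the pressure, volume, and temperature below are assumed ambient values chosen only for illustration), the number of molecules in a given volume can be computed as follows:

# Ideal gas law, Eq. (1.2): P V = N kB T.
kB = 1.380649e-23    # Boltzmann constant in J/K
NA = 6.02214076e23   # Avogadro's number in 1/mol

def molecules(P, V, T):
    """Number of molecules N in a volume V (m^3) at pressure P (Pa) and temperature T (K)."""
    return P * V / (kB * T)

P, V, T = 1.013e5, 1e-3, 300.0   # 1 atm, 1 liter, 300 K
N = molecules(P, V, T)
print(N, N / NA)                 # about 2.4e22 molecules, i.e. roughly 0.04 mol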

1.1.2 Kinetic theory and microscopic interpretation of temperature
We will now develop a microscopic theory of temperature following a line of thought
proposed by Maxwell. Let us consider a cubic box with volume V and surface A, and
let us suppose that
(i) the gas is made up of a large number N of particles that collide elastically with
each other and with the walls of the container;
(ii) there are no attractive forces between the particles (ideal gas approximation);
(iii) the movement is completely random, with no direction or position privileged.
We also disregard external forces. Since the movement is completely random, the
average velocities are the same in the x, y, and z directions, v̄x = v̄y = v̄z , so that we
may restrict our considerations to the x-direction.
Looking at a small portion of the gas in the vicinity of the wall (see Fig. 1.5), we
can imagine that a large number of particles will collide with this wall. We divide
the particles into i classes of velocities vxi , each one filled with Ni particles. Because
collisions with the wall are elastic, the momentum transferred when a single particle
encounters the wall of the box is,

∆pxi = 2mvxi . (1.4)

Since half of the particles move to the left, the number of collisions in the time interval
∆t is given by,
# = (1/2) (Ni /V ) A vxi ∆t . (1.5)
The total change of momentum is,
∆I = ∆pxi # = Ni m vxi² A ∆t/V , (1.6)
and the pressure is,
Pi = ∆I/(A∆t) = Ni m vxi² /V . (1.7)
Hence, as the movement is isotropic,
P = Σi Pi = (m/V ) Σi Ni vxi² ≡ (mN/V ) v̄x² = (mN/3V ) v̄² = (2N/3V ) Ēkin . (1.8)

See Excs. 1.1.4.7 to 1.1.4.8.



Figure 1.5: Box with N molecules colliding with the wall of a container.

Using the ideal gas state equation (1.3), Eq. (1.8) yields,

(3/2) kB T = (1/2) m v̄² , (1.9)
from which we conclude that temperature is associated with translational energy of
ideal gas molecules. This expression is also known as the energy equipartition
theorem. Generally speaking, we assign to each degree of freedom of a system the
term 12 kB T . In the example above, we have 3 degrees of freedom, which correspond
to translations in the x, y and z directions. If energy can be stored in vibrations or
rotations of a molecule, we also have to assign a term 12 kB T to each of these degrees
of freedom.
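As a minimal numerical sketch of the equipartition relation (1.9) (the nitrogen molecular mass below is the standard value, used here as an assumed input), the root-mean-square velocity of a gas molecule can be estimated as follows:

# Mean square velocity from Eq. (1.9): (3/2) kB T = (1/2) m <v^2>,
# so that v_rms = sqrt(3 kB T / m).
from math import sqrt

kB = 1.380649e-23         # J/K
m_N2 = 28 * 1.66054e-27   # mass of an N2 molecule in kg

def v_rms(T, m):
    return sqrt(3 * kB * T / m)

print(v_rms(300.0, m_N2))  # roughly 520 m/s for nitrogen at 300 K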
Example 1 (Johnson noise): As an example demonstrating the usefulness of
the energy equipartition theorem, let us consider a resistor R subject to a certain
temperature T . If we associate (1/2) kB T with the average energy P̄J ∆t dissipated by
the resistor within a time ∆t we have,

P̄J ∆t = (ŪJ² /R) ∆t = (1/2) kB T =⇒ ŪJ = √(kB T R/(2∆t)) , (1.10)
that is, a small voltage ŪJ appears at the resistor terminals, which is known as
Johnson noise. For a 1 Ω resistor at ambient temperature we find ŪJ ≈ 4.0 nV
averaged over 1 second. This voltage is small, but must be taken into account
in high precision measurements.

1.1.2.1 Thermal expansion


When we heat a solid, it generally changes size. This is due to the fact that the poten-
tial energy between its constituents, atoms or molecules idealized as being connected
by springs, has non-harmonic terms, as shown in Fig. 1.6(a). As we increase tem-
perature, we give more energy to the system and the atoms of the solid vibrate with
greater amplitudes, producing on average a greater separation between the constituents
of the system.
The variation in the length of a solid along a direction i = x, y, z follows the law,
∆Li = αLi ∆T , (1.11)
where α is called the linear expansion coefficient and characteristic for each material,
as shown in Tab. 1.1, although it generally also depends on temperature. Conse-
quently,
Li = Li0 (1 + α∆T ) . (1.12)

The surface expansion of a body is then, with A = L1 L2 = L10 L20 + 2αL10 L20 ∆T +
L10 L20 α2 ∆T 2 and assuming α to be very small,

∆A ≃ A0 2α∆T . (1.13)

Similarly, volumetric expansion is described by,

∆V ≃ V0 3α∆T . (1.14)
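As a minimal sketch applying Eqs. (1.11)-(1.14) (using the aluminum expansion coefficient from Tab. 1.1; the rod length and temperature change are assumed values):

# Linear, surface, and volume expansion, Eqs. (1.11)-(1.14), for aluminum.
alpha = 23.5e-6   # 1/K, aluminum (Tab. 1.1)

def expansion(L0, dT, alpha):
    dL = alpha * L0 * dT           # Eq. (1.11), rod of length L0
    dA = 2 * alpha * L0**2 * dT    # Eq. (1.13), square plate of side L0
    dV = 3 * alpha * L0**3 * dT    # Eq. (1.14), cube of side L0
    return dL, dA, dV

print(expansion(1.0, 50.0, alpha))  # a 1 m aluminum rod heated by 50 K expands by ~1.2 mm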

Figure 1.6: (a) Interaction energy between two atoms. (b) temperature-dependence of the
density of water.

Table 1.1: Expansion coefficient, compressibility, and heat capacity coefficients


(CP (T ) = a + bT + c/T 2 ) for selected materials at 298 K.

material       α × 10⁶ (K⁻¹)   κ × 10⁷ (bar⁻¹)   a (J/K)   b × 10³ (J/K²)   c × 10⁻⁵ (J K)
aluminum       23.5            12                20.7      12.3             -
silver         -               -                 21.3      8.5              1.5
graphite       -               340               -         -                -
diamond        -               -                 9.12      13.2             -
steel          11              -                 -         -                -
invar          0.7             -                 -         -                -
silica glass   22.2            42                15.6      11.4             -
tungsten       -               2.9               24.0      3.2              -
pyrex          32              -                 -         -                -

Liquids and gases also undergo volume variations with temperature. In this case,
it is quite common to work with the fluid density instead of volume:
ρ = m/V =⇒ ∆ρ = −(m/V ²) ∆V = −(m/V ²) γV ∆T , (1.15)
where γ is the volumetric expansion coefficient. Therefore, ∆ρ = −γρ0 ∆T . In general,
γ is positive and the density of the fluid decreases with temperature. An exception to

this rule is the case of water, as shown in Fig. 1.6(b), which below 4 ◦ C has γ < 0
and thus, between 4 ◦ C and 0 ◦ C the density increases with the temperature. This
explains why during winter lakes freeze starting from the surface. Do the Excs. 1.1.4.9
to 1.1.4.12.

1.1.3 Heat and work


When we place two bodies with different temperatures in thermal contact, there is a
transfer of energy from one body to the other so that the two temperatures evolve
towards a common value. The energy transferred is called heat. We cannot say that a
body at a given temperature contains a certain amount of heat. Heat is the change in
energy between one state and another. In this sense, heat is very similar to mechanical
work, and we will denote this fact by assigning to both the symbol δ, that is δQ for
heat changes and δW for work executed.
Analogously to mechanical inertia, a body has a certain heat inertia called heat ca-
pacity, which is the capacity of a body to retain thermal energy. The formal definition
of heat capacity is,
C ≡ δQ/dT . (1.16)
The unit of heat is the joule (J), but it is also very common to use the calorie,
which is the amount of heat required to raise the temperature of 1 g of water from
14.5 ◦C to 15.5 ◦C. The mechanical equivalent of a calorie is 1 cal = 4.184 J. The unit
of heat capacity is therefore J/◦C. Do the Excs. 1.1.4.13 to 1.1.4.14.
Instead of heat capacity, it is common to use the specific heat defined as,
c ≡ C/m , (1.17)
where m is the mass of the system. Hence,
δQ = mc dT , (1.18)
meaning that when a certain amount of heat is given to the system, it increases
its temperature. This expression, however, is not always valid. For example, upon
phase transitions from solid to liquid or from liquid to gas, the temperature does not
change when heat is supplied to the system. For a certain mass m of material, the
heat supplied for the phase transition to occur is,
δQ = mL , (1.19)
where L is called the latent heat of fusion or evaporation.
When several bodies are placed in thermal contact, heat flows between them in
such a way that,
Σi=1..N δQi = 0 . (1.20)
This is due to energy conservation, and this property is important for the determina-
tion of the specific heat of any of the bodies. When heat stops flowing, the bodies are
all in thermal equilibrium and, in this case, the 0th law of thermodynamics applies:
If a body A is in thermal equilibrium simultaneously with bodies B and C, then B is
in equilibrium with C.
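As a minimal sketch of the energy balance (1.20) for two bodies brought into thermal contact (the heat capacities and temperatures below are hypothetical values):

# Eq. (1.20): the heats exchanged sum to zero. For two bodies with heat
# capacities C1, C2 and initial temperatures T1, T2 the common final
# temperature follows from C1*(Tf - T1) + C2*(Tf - T2) = 0.

def final_temperature(C1, T1, C2, T2):
    return (C1 * T1 + C2 * T2) / (C1 + C2)

# hypothetical example: 100 g of water (c = 4.184 J/gK) at 15 C mixed
# with 100 g of water at 40 C
C1 = C2 = 100 * 4.184
print(final_temperature(C1, 15.0, C2, 40.0))  # 27.5 C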

1.1.3.1 Heat transport


Heat can be transported in three different ways: through conduction, radiation or
convection. In the case of conduction, although a material medium conducting the
heat is needed, no mass is transported. In a solid, molecules in the hot part of
the solid vibrate with large amplitudes and transmit this vibration via collision to
neighboring molecules. In metals, conduction electrons also participate in the heat
transport mechanism.
In the case of convection, heat is transported via displacement of masses. When
part of a fluid is heated, density and/or pressure variations cause matter to move and
carry heat from one volume to another. One already mentioned example is a freezing
lake, where cold and therefore less dense water moves to the surface. When the fluid
is forced to move due to the action of some external agent, for example a fan, we
speak of forced convection.
The third way of transporting heat is through radiation. In this case, the presence
of a material medium is not required to transmit energy. This transport is caused by
(mostly infrared) electromagnetic radiation, which is emitted from any hot body.
In the following we will concentrate on heat transport through conduction. Let us
consider a bar of cross section A and length L, whose ends are in thermal contact with
two bodies of temperatures T1 and T2 < T1 , as shown in Fig. 1.7. For a given position
x along the bar, the amount of heat per unit of time (thermal current) crossing the
surface A at that position depends on the following factors:

(i) Type of bar material. – There are materials that conduct heat better than
others, e.g. copper conducts heat better than steel. A measure is provided by
the thermal conductivity K.

(ii) The cross section A. – The larger it is, the greater the thermal current, as more
atoms are participating in the conduction process.

(iii) The temperature gradient. – The thermal current depends on the difference in
temperature between adjacent layers of atoms (left and right of the plane at
position x).

Based on the above considerations, we can write the following expression for the
thermal current H:
H = δQ/dt = −KA dT /dx . (1.21)
If the bar is thermally insulated, as is the case in Fig. 1.7, the current is conserved,
that is, all the heat entering one end of the bar will come out at the other, as there are
no losses. In this situation, H is independent of x and, consequently, dT /dx is constant.
Hence,
dT /dx = (T2 − T1 )/L , (1.22)
and consequently,
H = KA (T1 − T2 )/L . (1.23)
In this case, the temperature distribution is a straight line, as shown in Fig. 1.7(b).
On the other hand, if the side surface of the bar is not insulated, there will be heat

losses by convection and the thermal current decreases as x increases. In this case,
dT /dx also decreases and, as a consequence, we have a temperature distribution like the
one illustrated by the dotted line in Fig. 1.7(b).

Figure 1.7: (a) Heat conduction through a laterally insulated bar. (b) Temperature distri-
bution along a bar that is laterally insulated (solid line) and not insulated (dotted line).

We will now analyze two examples where the equation of conductivity (1.23) ap-
plies.

Example 2 (Thermal conduction along rods): In the first example, we will


consider two bars having the same cross-section, but made of different materials
and thus having different conductivities, as shown in Fig. 1.8(a). The bars are
thermally insulated on their sides. We want to determine the temperature at the
junction between the two bars. As the bars are insulated, the thermal current
is constant and, therefore:

H = K1 A (T1 − T )/L1 = K2 A (T − T2 )/L2 , (1.24)

yielding,
T = (K2 L1 T2 + K1 L2 T1 )/(K2 L1 + K1 L2 ) . (1.25)
Substituting this in the expression for H we get,

H = K1 K2 A (T1 − T2 )/(K2 L1 + K1 L2 ) . (1.26)

In the particular case in which K1 = K2 we recover the result (1.23) derived for
a single isolated bar. The temperature distribution along the bars depends on
the ratio between K1 and K2 . If K1 > K2 , we have the temperature distribution
shown in Fig. 1.8(b).
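As a minimal numerical sketch of Eqs. (1.25)-(1.26) (the conductivities, lengths, cross section, and end temperatures below are assumed values, roughly representing a copper and a steel bar):

# Two laterally insulated bars in series, Eqs. (1.25)-(1.26):
# junction temperature T and thermal current H.

def series_conduction(K1, L1, K2, L2, A, T1, T2):
    T = (K2 * L1 * T2 + K1 * L2 * T1) / (K2 * L1 + K1 * L2)  # Eq. (1.25)
    H = K1 * K2 * A * (T1 - T2) / (K2 * L1 + K1 * L2)        # Eq. (1.26)
    return T, H

# hypothetical values: copper (K ~ 400 W/mK) on the hot side, steel (K ~ 50 W/mK) on the cold side
print(series_conduction(400.0, 0.5, 50.0, 0.5, 1e-4, 100.0, 0.0))
# the junction temperature (~89 C) lies close to the hot end, since copper conducts much better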

Example 3 (Thermal conduction in a hollow cylinder ): As a second ex-


ample, let us consider a hollow cylinder with outer radius b and inner radius a.
The inner part of the cylinder is kept at a temperature T1 , while the external
temperature is maintained at T2 (T2 < T1 ). The length of the cylinder is L
and the conductivity is K. The orientation of the thermal current is obviously
radial. The area is given by A = 2πrL and, therefore:

H = −KA dT /dr = −2πrLK dT /dr . (1.27)

Figure 1.8: (a) Bars of different materials placed in series. (b) Temperature distribution
along two bars of different materials placed in series.

As H is constant, since there are no losses, we can integrate this equality between
a ≤ r ≤ b assuming T1 ≥ T ≥ T2 . We find the result,
H = 2πLK (T1 − T2 )/ ln(b/a) . (1.28)

Let us now briefly cover the other two types of heat transmission mentioned in the
beginning. In the case of convection, we are typically interested in the following type
of problem: given a body at a temperature T surrounded by atmospheric air colder
by an amount ∆T , how much heat does it lose per unit of time? The thermal current
from the body to the air is described by a similar formula as thermal conduction,

H = hA∆T , (1.29)

where A is the area through which heat is being lost and h is a number that depends
on ∆T (in general h ∝ ∆T 1/4 ), the geometry of the body and its orientation in space
(since convection is due to the fact that hot air rises). Therefore, a horizontally lying
plate has a different h as compared to a vertically standing plate.
Heat transport by radiation is proportional to T 4 , where T is the absolute tem-
perature (in Kelvin):
R = eσT 4 . (1.30)
Here, R is the thermal current emitted per unit area, e is the emissivity of the body
(0 ≤ e ≤ 1), and σ is the Stefan-Boltzmann constant. Do the Excs. 1.1.4.15 to 1.1.4.18.
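As a minimal sketch of Eq. (1.30) (the emissivity, area, and temperature below are assumed values, loosely representing a warm body surface; absorption of radiation from the environment is not subtracted):

# Radiative heat loss, Eq. (1.30): R = e * sigma * T^4 per unit area.
sigma = 5.670374419e-8   # Stefan-Boltzmann constant in W/(m^2 K^4)

def radiated_power(e, A, T):
    """Total power radiated by a body of emissivity e, area A (m^2), temperature T (K)."""
    return e * sigma * A * T**4

# hypothetical example: 1 m^2 of a surface with e ~ 0.97 at 306 K
print(radiated_power(0.97, 1.0, 306.0))   # roughly 480 W emitted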

1.1.3.2 Work
Heat can be injected into a system in various ways, e.g. via thermal contact with a
reservoir, via electrical dissipation or discharge, or by carrying out mechanical work.
The amount of energy (translational, rotational and vibrational) contained in a system
is called internal energy. The law of energy conservation demands that the heat
δQ supplied to a system is used to change its internal energy E and/or to perform
mechanical work δW . This principle is known as the 1st law of thermodynamics and
can be expressed mathematically as:

dE = δQ + δW . (1.31)

Let us consider an ideal gas contained in a cylinder with a piston, as shown in


Fig. 1.3(a). By moving the piston, it is possible to compress or expand the gas, and in
this process there will be pressure and/or temperature variation, since these variables
are linked by the ideal gas equation (1.3).
Now, we imagine that the pressure P of the gas is greater than the atmospheric
pressure. In this situation, the gas will try to push the piston out of the cylinder. If
the piston slowly moves by a distance dx, the work done by the gas will be,
dW = F dx = P Adx = P dV , (1.32)
where A is the cross area of the piston and dV = Adx is the variation of volume
during expansion. Thus, if the gas expands from a volume V1 to a volume V2 , the
total work done is,
W = ∫_{V1}^{V2} P dV . (1.33)
If we follow the evolution of pressure with volume on a P V diagram, as in Fig. 1.9(a),
the work done by the gas will be the area under the curve. This area obviously
depends on how the gas is taken from point 1 to point 2: the area under the path (i)
is different from that under the path (ii). This means that work can be done along
a closed path, as shown in Fig. 1.9(b): The system initially undergoes an isochoric
transformation (constant volume), followed by an isobaric one (constant pressure), then
again an isochoric, and finally an isobaric transformation. The area under the curve
(i) corresponds to the work P1 (V2 − V1 ), and the area under the curve (ii) to the work
P2 (V1 − V2 ). Although the processes are different, they produce the same variation
in the internal energy of the gas ∆E = E2 − E1 , as this only depends on the initial
and final states of the system. For a closed loop the total change in internal energy is
zero, ∆E = 0, and therefore, by the 1st law of thermodynamics, the transferred heat
must compensate the executed work,
δW = −δQ . (1.34)
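As a minimal numerical sketch of the path dependence of Eq. (1.33) (the pressures and volumes below are hypothetical, and the two rectangular paths correspond to the kind of processes sketched in Fig. 1.9):

# Path dependence of the work W = integral P dV, Eq. (1.33), for a gas taken
# from (P1, V1) to (P2, V2) along two different rectangular paths.

def work_isobaric(P, V_from, V_to):
    return P * (V_to - V_from)   # isochoric legs do no work (dV = 0)

P1, P2 = 2.0e5, 1.0e5    # Pa (hypothetical)
V1, V2 = 1.0e-3, 2.0e-3  # m^3

# path (i): expand at P1 first, then lower the pressure at constant volume
W_i = work_isobaric(P1, V1, V2)
# path (ii): lower the pressure at constant volume first, then expand at P2
W_ii = work_isobaric(P2, V1, V2)

print(W_i, W_ii)   # 200 J vs 100 J: same end points, different work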

Figure 1.9: (a) P V diagram of a gas showing work carried out in two different processes
leading from an initial state (1) to a final state (2). (b) Work performed in a complete gas
cycle.

The internal energy of an ideal gas, as we discussed previously, essentially comes


from the kinetic energy of its constituents,

E = (3/2) N kB T , (1.35)

that is, for processes maintaining the temperature constant, ∆E = 0. This relation-
ship was experimentally verified by Joule, who found that when a gas expands freely
(without heat exchange or work being executed) its temperature stays constant.

After the warm up provided by the previous sections, let us now start setting up
the framework of phenomenological thermodynamics in the following sections.

1.1.4 Exercises
1.1.4.1 Ex: Gas thermometer
A gas thermometer filled with an ideal gas and working at a constant volume is cali-
brated on the one hand in dry ice (carbon dioxide in its solid state at a temperature
of −80.0 ◦ C) and on the other hand in boiling alcohol (78.0 ◦ C). At these respective
temperatures, the pressure in the gas thermometer is 0.900 bar and 1.635 bar, respectively. At ab-
solute zero, the gas in the thermometer is still gaseous, but the pressure has dropped
to 0.000 bar.
a. At what temperature in ◦ C is absolute zero?
b. What is the pressure at the freezing point of water and what is it at the boiling
point?

1.1.4.2 Ex: Gas thermometer


A gas thermometer ’a’ is connected to a second gas thermometer ’b’, which is kept in a
water bath at a constant temperature. The connecting capillary has a cross-sectional
area A and is filled with mercury (ρ = 13.5 g/cm3 ). At the same temperature T0 in
the two thermometers, the mercury level in both capillaries is the same. Now the
gas in thermometer ’a’ is heated by ∆T . This increases the pressure Pa and thus the
volume Va → Va + ∆V . The mercury column is displaced accordingly.
a. What is the relationship between the volume increase ∆V and the temperature
increase ∆T in this setup?
b. To simplify, assume that the volumes Va = Vb = V0 and thus the particle numbers
Na = Nb = N0 are the same. How much has the temperature of the gas in ther-
mometer ’a’ increased if the following conditions exist in the coupled thermometer:
N0 = 1022 , h = 5 mm, T0 = 300 K, V0 = 1000 cm3 , A = 1 cm2 ?

Figure 1.10: Gas thermometer.



1.1.4.3 Ex: Barometric formula


The air pressure P at a height h is equal to the weight mg of the air column, which
at this height rests on an (imaginary) horizontal base divided by the base area A of
the column (m = m(h): mass of the air column, neglect the curvature of the Earth
and the height dependence of the temperature). Therefore, we have for the change
dP of the pressure upon a small change of height dh, with the local density ρ = ρ(h):

dP = g dm/A = gρ dV /A = −gρ dh . (1.36)
Here dV = −A dh is the change in volume of the air column (located above the base).
The air should be treated approximately as a substance with a uniform molar mass
M̃ .
a. Show with the help of the ideal gas equation that Eq. (1.36) can be cast into the
form dP = −kP dh under the given conditions with the constant k. Which is the
expression for k?
b. What is the integral relationship P = P (h)? At what height h is the air pressure
at T = 273 K and M̃ = 29 g/mol only half the size of P (0)?
c. What is the air pressure on the Mont Blanc (4794 m) and the Mount Everest
(8848 m) at T = 273 K as compared to P (0) at sea level? How big is the pressure
difference ∆P compared to normal zero (sea level) on the Tübingen market place (h = 341 m)?

1.1.4.4 Ex: Barometric formula


The barometric height formula is usually derived assuming constant temperature.
Now suppose that the temperature depends on the height h above the surface of the
Earth according to the relationship T = T0 /(1 + αh).
a. Show that the pressure P then must satisfy the following differential equation,
dP/dh = −(mg/kB T0 ) (1 + αh) P .
b. Find the solution to this differential equation. What is the sign of the constant α?
Is the pressure at a fixed height larger or smaller than the value resulting from the
height formula at a fixed temperature?

1.1.4.5 Ex: Depth gauge


You want to build a depth gauge for diving operations and take advantage of the
compressibility of air. To do this, you take a glass cylinder with a movable piston
(enclosed volume V = Ax, cross-sectional area A) and a millimeter scale on the cylinder. To what
water depth h can the device deliver the targeted measuring accuracy of ±1 m, if the
piston position x can be read with an accuracy of ±1 mm and x(P0 ) = 0.2 m at the
water surface?

1.1.4.6 Ex: Scuba diving


A diver is at a water depth of h0 and breathes air from a compressed air bottle.
When exhaling, he creates (spherical) air bubbles with the volume V0 . Assume that

the surface water temperature is T1 and decreases linearly down to the depth h0 by an
amount α per meter.
a. Assume a constant water density ρ and calculate the pressure P depending on the
water depth at an atmospheric pressure P1 .

Figure 1.11: Jacques Mayol

b. Calculate the volume of the bubbles as a function of water depth. How big is the
volume just below the water surface? Why is it important for the diver to exhale
continuously as he ascends?
Numerical values: Water depth h0 = 40 m, V0 = 1 cm3 , T1 = 20 ◦ C, α = 0.2 ◦ C/m,
ρ = 1 kg/l, and P1 = 1013 hPa.

1.1.4.7 Ex: Particle collisions with a container

How many particle collisions Z does a wall surface A = 1 dm2 experience in ∆t = 1 s


at T = 298 K and P = 1 bar through the particles of an ideal gas, if ⟨|vx |⟩, the mean
value of the particle velocity in the x direction, has the value 330 m/s?
Hint: Imagine a cuboid box in an xyz coordinate system and assume the wall surface
of interest as one of the cuboid surfaces perpendicular to the x-axis. The width of
the box is ∆x, so its volume is V = A · ∆x. There is a simple relationship for the
mean number ⟨νx ⟩ of impacts that a single particle exerts on the wall within a
time ∆t depending on ∆x, ∆t and ⟨|vx |⟩. To get Z you have to consider that the gas
contains N particles.

1.1.4.8 Ex: Kinetic pressure

A closed box with end face A and side length L is divided into two equal halves by a
movable plate (see figure). Both halves contain one mole of helium under a pressure
of P0 . The movable plate is now shifted to the right by the distance x. The shift
takes place at constant temperature T = 20 ◦ C.
a. Give the volume of the right or left sub-box as a function of x. Give the pressure
in the right or left sub-box as a function of x.
b. Calculate the work W that needs to be done to move the plate from x = 0 to
x = L/4. Specify W in Joules.
Figure 1.12: Ideal gas in a box.
1.1.4.9 Ex: Bi-metal

Two bars of different materials, with lengths, Young’s moduli and thermal expansion
coefficients given respectively by L1 , L2 , Y1 , Y2 , α1 , and α2 , are pinched between two
walls, as shown in the figure. Calculate the distance traveled by the junction point
of the bars when the system is heated by an amount ∆T . What is the tension on the
bars?

Figure 1.13: Two bars with parameters (L1 , Y1 , α1 ) and (L2 , Y2 , α2 ) clamped between two walls.

1.1.4.10 Ex: Bi-metal


In the sketched construction two thin metal strips with different linear expansion
coefficients (aluminum and copper) αAl = 24 · 10−6 K-1 and αCu = 17 · 10−6 K-1
are connected to each other by L = 10 cm bars so that they have a fixed distance
d = 1 mm. When the temperature increases, the two strips expand so that they form
circular segments with different radii, as shown in the figure. An angle of the circle
segment of ϕ = 1◦ is measured. How big is the temperature increase?
Figure 1.14: Bimetal strip (Cu/Al) at temperatures T and T + ∆T .



1.1.4.11 Ex: Linear expansion


The length of a 10 cm long spacer made of quartz glass with linear expansion coefficient
α1 = −1 cm/m/◦ C is to be kept constant by using a spacer made of Invar steel with
a linear expansion coefficient α2 = 10 cm/m/◦ C. How long must the spacer be?

1.1.4.12 Ex: Thermal expansion


Consider a solid body with moment of inertia I. Show that due to a small tem-
perature variation ∆T , this moment varies by ∆I = 2αI∆T , where α is the linear
expansion coefficient. With this result, calculate how much the period of a physical
pendulum varies when subject to a temperature variation ∆T .

1.1.4.13 Ex: Heat capacity and energy of air


a. How many air molecules are in one kilogram of air, knowing that air has the relative
molecular mass mair ≈ 29u.
b. Supposing air is essentially composed of oxygen and nitrogen, what are the relative
abundances of both elements?
c. Estimate the molar and specific heat capacities of air.
d. Calculate the kinetic energy as well as the average molecular velocity in 1 kg of air
at T = 300 K.

1.1.4.14 Ex: Calorimetry


a. A calorimeter initially contains a volume of V1 = 100 ml of water in thermal equi-
librium with the calorimeter at temperature T1 = 15 ◦ C. Now we add a volume
V2 = 100 ml of water at temperature T2 = 40 ◦ C. After reaching thermal equilibrium
again, the temperature becomes Tf = 25 ◦ C. What is the thermal capacity of the
calorimeter?
b. Starting from the final condition of the previous item, we add to the calorimeter
a metallic body with mass m3 = 80 kg and temperature T3 = 90 ◦ C. After reach-
ing thermal equilibrium again, the temperature becomes Tf f = 35 ◦ C. What is the
specific heat of the body?

1.1.4.15 Ex: Thermal conduction


Show that the thermal current in a substance of conductivity K located between the
surfaces of two concentric spheres is given by:
dQ/dt = H = 4πKr1 r2 (T1 − T2 )/(r2 − r1 ) ,
where r1 and r2 are respectively the radii of the inner and outer surfaces and T1 > T2 .

1.1.4.16 Ex: Thermal conduction


A bar with thermal expansion coefficient α and Young’s modulus Y (F/A = Y ∆L/L) is
stuck between two walls, as shown in the figure. Calculate the stress in the bar when
the temperature is increased by ∆T .
Figure 1.15: Bar clamped between two walls.
1.1.4.17 Ex: Thermal conduction

Find the temperature gradient and the thermal current in a bar of conductivity K,
length L, and irregular cross section, as shown in the figure.

Figure 1.16: Bar with an irregular cross section (areas A and 2A) held between temperatures T1 > T2 .
Figure 1.16:
12- Duas barras de materiais diferentes, com comprimentos, módulos de
S. C. Zilio e V. S. Bagnato Mecânica, calor e ondas
Young e coeficientes de dilatação térmica dados respectivamente por L1,
L2, Y1, Y2, α1 e α2, estão presas entre duas paredes como mostra a Fig.
1.1.4.18 Ex: Tramway
13.14. Calcule a distância percorrida pelo ponto de junção das barras
A tram with quando
mass m B = 12500
o sistema kg brakes
é aquecido de ∆T. from
Qual éaa tensão
speednas
v= 57.6 km/h to standstill.
barras?
What is the temperature of the eight cast iron brake blocks when the mass of each
block is 9.0 kg and 60% of the kinetic energy flows into the heating of the blocks?
α1 , Y1 α2 , Y2

1.2 Canonical formulation of thermodynamics


L1 L2
Thermodynamics is different from other physical theories, such as classical mechanics,
electrodynamics or quantum mechanics, in the sense that it describes phenomena
Fig. 13.17
emerging from the presence of large numbers of identical (microscopic) subsystems,
which are absent in the individual subsystems, in particular the tendency of large
(macroscopic) systems to evolve towards certain equilibrium states. This claim makes
thermodynamics pervasive and applicable to all kind of systems that can appear to
be completely different. Its task is to provide a way of organizing information on
systems’ behavior, generating phase diagrams and data bases on their physical and
chemical properties.
Following textbook didactics we approach the area of thermodynamics in two main
steps. The first step, outlined in the present chapter 1, is known as phenomenological
thermodynamics. It ignores the microscopic composition as far as possible. We will,
however, see that certain features of the microscopic subsystems can have crucial
impact on the macroscopic behavior. This step attempts to structure the macroscopic
observations by identifying characteristic physical quantities and formulating laws
governing their dynamics.

The second step, exposed in the subsequent chapter 4, is called statistical thermo-
dynamics. It aims at explaining the laws found in phenomenological thermodynamics
by deriving them from features of the microscopic subsystems. In particular, some
subsystems are small (or cold) enough to behave according to the rules of quantum
mechanics, and this can have an important impact on the macroscopic behavior. This
is studied in the field of quantum statistics exposed in Sec. 4.2.
The methods introduced in the first two chapters will be illustrated with simple
examples (mainly ideal gases and solids). In Chp. 2 we apply them to complex real
systems, e.g. multi-component, heterogeneous, or chemically interacting systems.

1.2.1 Tackling thermodynamic systems


The subject of thermodynamics is many-body systems, the properties of which are
characterized by physical quantities, called thermodynamic variables, and their rela-
tionships. We will now attempt to structure the fundamental concepts of the field
following the textbook of DeHoff [13].

1.2.1.1 Classification of thermodynamic systems


Before starting to tackle a new, unknown system, trying to characterize all of its
properties, it is always a good idea to classify it in order to identify those properties
which are essential for the information we want to extract from the system. This
prevents a waste of efforts into gathering irrelevant data and provides a guideline for
the system’s characterization. In general a system can be

1. unary or multicomponent (e.g. a gas of pure argon or atmospheric air),

2. homogeneous or heterogeneous (e.g. a single phase or two coexisting phases like


water and vapor),

3. closed or open (e.g. isolated from environment or exchanging energy or parti-


cles),

4. non-reacting or reacting (e.g. the molecules of a two-component system may


react to form a third component),

5. otherwise simple or complex.

At first we will exemplify the introduced concepts, mostly restricting ourselves to unary,


homogeneous, non-reacting, otherwise simple systems. In later sections, we will turn
our attention to complex systems.

1.2.1.2 State functions and process variables


Thermodynamic variables fall into two distinct classes, process variables and state
functions.
The state of stationary thermodynamic systems may be expressed by state func-
tions (or state variables), whose values only depend upon the current state of the

system. The most common ones are temperature, pressure, entropy, volume, particle
number, chemical potential, and internal energy,
T, S, P, V, N, µ . (1.37)
The pressure and volume may be replaced by other mechanical variables. Their
equilibrium dynamics is confined to trajectories obeying so-called state equations,
that is, functional relationships such as,
f (T, S, P, ...) = 0 , (1.38)
or their illustration in phase diagrams, such as the one shown in Fig. 1.2. Such
state equations are obviously extremely useful, as they allow to abstract from the
physical process which led to a particular state of a system, and which might be very
complicated. In fact, their discovery represents one of the major achievements of
thermodynamics.
In contrast to state functions, the values of process variables depend upon the
path followed by the process. Consequently, they only have a meaning for changing
systems, i.e. for systems traversing a sequence of different states. The process variables
fall into two categories, mechanical work and exchanged heat,
δQ, δW . (1.39)
The concept of work is developed in classical mechanics as the path integral of a
force acting on a body along a given path, W = ∫S F(s) · ds, and it can be brought
into the context of thermodynamics considering, e.g. a piston working against the
pressure of a confined gas, as illustrated in Fig. 1.3. Work depends on changes ds
and thus cannot be associated with stationary systems. Any of the forces known in
physics, inertial forces in accelerated or rotating systems, electromagnetic forces, or
molecular forces, can perform work.
Work is associated with a displacement of macroscopic matter, e.g. the movement
of a piston. Systems may, however, exchange energy without net displacement of
masses via the exchange of heat. By itself the notion of a ’quantity of heat’ in a
system is meaningless. Only the amount of heat exchanged with another system in a
given process can be quantified.

1.2.1.3 Extensive and intensive properties


State functions can further be classified into extensive and intensive variables of the
system. To understand the difference, we imagine the system under consideration
subdivided into smaller, identical and not interconnected subsystems. Now, an in-
tensive variable describes a property that does not depend on the system size or
the amount of material in the system, for example, temperature
or the hardness of an object. No matter how small a diamond is cut, it maintains
its intrinsic hardness. Intensive variables are those which can be represented as a
field, such as the local temperature variation T (r) across the system or the pressure
variation in the barometric formula. Nevertheless, intensive properties can also be
derived from extensive ones via the concept of densities, e.g. the local particle density
n(r) ≡ dN/dV . In general, the ratio of two extensive properties is scale-invariant and
hence an intensive property.

Examples of intensive parameters are: the chemical potential, concentration, den-


sity (or specific gravity), ductility, elasticity, electrical resistivity, hardness, magnetic
field, magnetization, malleability, melting point and boiling point, molar absorptivity,
pressure, specific energy.
By contrast, an extensive variable adds up when independent, non-interacting
subsystems are combined. The property is proportional to the amount of material in
the system. For example, both the mass and the volume of a diamond are directly
proportional to the amount that is left after cutting it from the raw mineral. Mass and
volume are extensive properties. An extensive variable characterizes the system as a
whole, e.g. the volume of a recipient, the number of enclosed particles, the internal
energy or the entropy.
In uniform systems the value of intensive variables does not change, so that the
system is characterized by a unique value. This is useful for the description of systems
in equilibrium. However, it does not mean that intensive quantities turn into extensive
ones. Intensive properties cannot exclusively depend on extensive ones. For example,
in the ideal gas equation extensive and intensive quantities are interrelated in such
a way that the two intensive quantities P = P (r) and T = T (r) are proportional to
each other. Extensive properties can be expressed as integrals of intensive ones over
the extent of the system, e.g. N = ∫_V n(r) d³r.
Examples of extensive parameters are: energy, entropy, Gibbs energy, length,
mass, particle number, momentum, number of moles, volume, magnetic moment,
electrical charge, weight.

1.2.1.4 Classification of thermodynamic relationships


The thermodynamic variables characterizing a system, the state function as well as
the process variables, are interrelated by mathematical expressions, and the apparatus
of thermodynamics allows to generate connections between new sets of variables, thus
leading to an unmanageable number of expressions. It is thus helpful to introduce a
classification of thermodynamic relationships into
1. thermodynamic laws forming the physical basis for all other relations;
2. definitions of new quantities expressed in terms of previously formulated vari-
ables with the motivation of simplifying the description of specific classes of
systems;
3. coefficient relations between differential forms emerging from the description of
changes in state functions;
4. Maxwell relations relating second derivatives to one another and reflecting the
fact that the order of differential operators can be switched; and finally
5. equilibrium conditions, which are sets of equations describing the relationships
between state functions to be satisfied in a system at equilibrium and used for
establishing maps and phase diagrams.
The concept of equilibrium is central to thermodynamics. It describes a situation
in which a system coupled to an environment does not change its state autonomously.
The situation is expressed by a set of equations called equilibrium conditions, relating
internal properties of the system.

Example 4 (Thermodynamic relationships): Examples for definitions are,

H ≡ E + P V     or     CP ≡ (∂H/∂T )N,P .

An example for a coefficient relation emanating from the real gas equation
T = T (P, V ) = (1/(N kB))(P + P ∗)(V − V ∗) is,

dT = AdP + BdV = (∂T /∂P )V dP + (∂T /∂V )P dV = [(V − V ∗)/(N kB)] dP + [(P + P ∗)/(N kB)] dV .

An example for a Maxwell relation using the above example is,

(∂A/∂V )P = (∂/∂V (∂T /∂P )V )P = (∂/∂P (∂T /∂V )P )V = (∂B/∂P )V .

Finally, an example for an equilibrium condition considering a mixture of two
gases A and B is the requirement that their temperatures be equal, TA = TB .
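To make the coefficient and Maxwell relations of this example concrete, the following minimal Python sketch (assuming the sympy package; the symbol names are arbitrary) verifies them symbolically for T (P, V ) = (P + P ∗)(V − V ∗)/(N kB):

    # Symbolic check of the coefficient and Maxwell relations of Example 4
    # for T(P,V) = (P + P*)(V - V*)/(N kB). Minimal sketch assuming sympy.
    import sympy as sp

    P, V, Ps, Vs, N, kB = sp.symbols('P V P_s V_s N k_B', positive=True)

    T = (P + Ps) * (V - Vs) / (N * kB)     # state equation T = T(P, V)

    A = sp.diff(T, P)                      # coefficient A = (dT/dP)_V
    B = sp.diff(T, V)                      # coefficient B = (dT/dV)_P
    print(A, B)                            # (V - V_s)/(N*k_B) and (P + P_s)/(N*k_B)

    # Maxwell relation: (dA/dV)_P must equal (dB/dP)_V
    assert sp.simplify(sp.diff(A, V) - sp.diff(B, P)) == 0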

1.2.2 The laws of thermodynamics


The laws of thermodynamics are highly condensed expressions forming the basis of
empirical thermodynamics. Although not deduced from fundamental principles, they
are universal, general, and pervasive. Before discussing them in detail, let us enunciate
them altogether:
0. The zeroth law affirms that two systems each one in thermal equilibrium with
a third are in equilibrium themselves,

T1 = T2 ∧ T2 = T3 =⇒ T1 = T3 . (1.40)

1. The first law states that the total energy is always conserved,

dE = δQ + δW . (1.41)

2. The second law states that the entropy of any closed system never decreases,

dS ≥ 0 .     (1.42)

3. The third law states that for T → 0, the entropy difference between systems
connected by a reversible process vanishes,

lim_{T→0} S = 0 .     (1.43)

This last law has its origins in quantum mechanics.

1.2.2.1 The 0th law of thermodynamics


The zeroth law is a necessary assumption for the existence of a temperature scale
for all substances in nature and provides an absolute measure of their tendencies to
exchange heat, as already discussed in Sec. 1.1.3.

1.2.2.2 The 1st law of thermodynamics


According to the first law of thermodynamics, there is a property of the universe,
called energy, which cannot change no matter what process occurs. The energy can,
however, change its appearance (e.g. between kinetic, potential, or internal energy), or
be exchanged between subsystems or between a system and its environment across the
system's boundaries. Hence, defining a thermodynamic state function called internal
energy E of the system, the first law states that this quantity can only increase by
working on the system or by transferring heat to it. Put in this way, the statement
also fixes the sign convention, see Fig. 1.17.
In its mathematical formulation the differential d represents a change in a state
function, while the prefix δ just denotes an infinitesimal quantity of work or heat and
cannot be considered a differential: There is no mathematical state function W or Q
of which dW or dQ could be a differential.
Despite its fundamental importance, in its form (1.41) the first law is not ready
for use in practical applications, because it does not tell us how to evaluate δW or
δQ.

Figure 1.17: Any change of a system’s internal energy is due to either work done on it or
heat transferred through its borders.

1.2.2.3 The 2nd law of thermodynamics


Many processes in nature are irreversible. Although no fundamental law of physics
prevents heat from flowing from cold to hot places or mixed gases from spontaneously
separating into their components, this is never observed. Time seems to flow in one direction.
The second law of thermodynamics distills this aspect of experience and states it
succinctly and quantitatively, albeit abstractly, introducing a state function called
entropy. When summed up for a system and its surroundings, the entropy always
increases 1 .
While the second law postulates that any closed system always produces (or maintains)
entropy dSprod , this does not preclude entropy reduction for this system, provided
the entropy can be removed from the system via transfer dStrans through its
boundaries faster than it is produced. The net entropy balance of the system may
then become negative,

dSsyst = dSprod + dStrans ≤ 0 .     (1.44)
Indeed, many practical applications are based on entropy reduction in subsystems,
e.g. lasers. Since entropy transfer between a system and its environment does not
change the overall entropy balance, the total entropy production of the universe,
consisting of the system plus its environment, must remain positive 2 .
1 Note that the sign of entropy increase is fixed by convention.

The concept of irreversibility is intrinsically connected to the notion of sponta-


neous breaking of time reversal symmetry, which plays a fundamental role e.g. in
quantum optics of open systems. Let us consider an atomic two-level system interact-
ing with a light mode. While coherent processes, such as absorption and stimulated
emission of a photon, are reversible, spontaneous emission into the reservoir
of electromagnetic vacuum modes cannot be undone. The size of the phase space
potentially occupied by the spontaneously emitted photon is simply too large to yield
any reasonable probability for spontaneous reabsorption.
Processes linking systems with a relatively small number of degrees of freedom
to much larger systems are called dissipative, and the rate of entropy production is
a quantitative measure for this dissipation. It not only depends on the strength of
the dissipation, but also on how far away the system is from equilibrium: the closer
to equilibrium, the smaller the dissipation. A quantitative treatment of the correlation
between dissipation and distance from equilibrium is, however, very complicated.
For processes sufficiently slow never to move away very far from equilibrium the
entropy production is correspondingly small. It completely vanishes, when the system
is infinitesimally close to equilibrium, for example, exerting work in incremental steps
or adding heat in incremental portions always allowing the system to equilibrate
before applying the next change. Systems undergoing such processes can change
their entropy only via exchange with other systems, and as no entropy is produced
they are reversible. Obviously, the concept of reversibility represents an idealization,
since any real process is afflicted with dissipation.
In contrast, systems undergoing quick drastic changes instantaneously deviate
from equilibrium and transiently occupy states, in which entropy is produced, before
they return to equilibrium. Such processes are irreversible.

Figure 1.18: Illustration of (blue) an irreversible process forced through an inhomogeneous


parameter landscape and (red) a reversible sequence of infinitesimal irreversible processes
following a given path. As long as the infinitesimal processes do not recede much from
equilibrium (dashed red line), entropy production can be neglected.

For reversible processes the process variables are readily calculated, since each
intermediate state (red dashed line in Fig. 1.18) is described by just a few state
functions, e.g. temperature or pressure. In contrast, this is very complicated for
irreversible processes carrying the system to a state whose state functions depend on
the trajectory on which the state was reached.
2 We may formulate a continuity-type equation for entropy density and entropy flow.

On the other hand, changes of state functions are easy to calculate also for ir-
reversible processes, since these only depend on the initial and final states. Thus,
changes in state functions for a given complex irreversible process may be calculated
by substituting the process by a reversible one. E.g. in Fig. 1.18 instead of following
solid red lines, we substitute them by dashed red lines connecting the same initial and
final states. This procedure illustrates the fundamental role of state functions and
reversible processes in thermodynamics.

Let us now analyze entropy transfer for reversible processes. Let δQrev be the
heat absorbed in an infinitesimal step and T the temperature of the system. Now,
although δQrev is a process variable, δQrev /T is the differential of a state function,
that is, the change of a state function in an infinitesimal process. To prove this, it is
sufficient to show that for a general cyclic process, such as the Carnot cycle, the path integral

∮ δQrev /T = 0     (1.45)

vanishes. The state function whose differential is δQrev /T is defined to be the entropy,

δQrev = T dS .     (1.46)

The expression allows us to evaluate the heat absorbed during a reversible process by
integrating a combination of two state functions, Qrev = ∫ T dS. Note that Qrev still
depends on the path and thus remains a process variable.
Since entropy is a state function, the entropy change for a process cannot depend
on whether the process is reversible or not. That is,

∆Srev = ∫ δQrev /T = ∆Sirr .     (1.47)

However, an irreversible process will also produce entropy,

∆Srev = ∆Sirr,trans + ∆Sirr,prod ,     (1.48)

with ∆Sirr,prod > 0. Associating the irreversible entropy change not due to entropy
production with irreversible heat transfer, we find,

∆Sirr,trans = ∫ δQirr /T < ∫ δQrev /T = ∆Srev ,     (1.49)

which means that the maximum heat transfer is observed for reversible processes.
Or in other words, irreversible heat transfer is subject to losses resulting in entropy
production.

An analogous treatment can also be done for work. We express reversible work
by a combination of state functions,

δWrev = −P dV . (1.50)

Combining the first and second law of thermodynamics Eqs. (1.41), (1.46), and (1.50),
we find for reversible processes,

dE = T dS − P dV . (1.51)

1.2.2.4 The 3rd law of thermodynamics


Experiments have shown that temperature, as defined in Sec. 1.1.1, does not go below
a certain value, consequently identified as absolute zero temperature. Experiments
have also shown that all substances in any thermodynamic state have the same entropy
at T = 0. This finding motivates the choice of setting the entropy at absolute zero
temperature to zero,
S(T → 0) → 0 . (1.52)

Example 5 (Entropy change in chemical reactions): Experimentally, the


process of heating from T = 0 a mixture of two atomic species to a temperature
where they react to form molecules and then cooling the molecules down back
to T = 0 is found not to produce entropy, although the initial substances are
different from the final ones. This fact can be exploited for the determination of
the entropy balance in chemical reactions. For example, given that the absolute
entropy at 298 K for the substances Al, O2 , and Al2 O3 are, respectively,

SAl = 28.3 J/mol K , SO2 = 205.03 J/mol K , SAl2 O3 = 51.1 J/mol K ,

together with the stoichiometrically balanced reaction, 2Al + (3/2)O2 = Al2O3,
yields the entropy change,

∆S = SAl2O3 − (2SAl + (3/2)SO2) = −313.19 J/mol K .

1.2.3 Thermodynamic potentials


With the statements outlined in Secs. 1.2.1 and 1.2.2 we have laid the foundations
of a conceptual world of thermodynamics. Now, we need to show how to use it in
practice.
So far we defined the process variables

Q, W (1.53)

and the state functions


P, V, T, S, E (1.54)
appearing in the laws of thermodynamics. In very simple systems, being in equilibrium
with themselves and with the environment, the state is completely fixed by two state
variables. This means that, if two of the five enumerated state functions are known, all
others can be expressed as functions of them, for example, V = V (T, P ) or E = E(S, V )
for an ideal gas consisting of exactly N particles.
Furthermore, we have already encountered material variables, such as thermal
expansion, compressibility, heat capacity at constant pressure or at constant volume,

α, κ, CP , CV . (1.55)

These material properties may also depend on temperature or pressure. For a given
substance the state equations can be computed from those properties.
The general procedure to tackle a problem is the following:

1. Identify the properties of the system about which information is available. These
are the independent variables, for example, T and P .

2. Identify the properties of the system about which information is requested.


This property is a dependent variable, which means that it is a function of the
independent variables, for example V = V (T, P ).

3. Such functions will necessarily contain material properties, which will have to
be looked up from data bases.

The crucial step in this procedure is evidently the second one, which consists in finding
the appropriate state function.

Another class of thermodynamic relationships, falling under the category of defi-


nitions, is the introduction of state functions with the dimension of energy known as
thermodynamic potentials. Apart from internal energy, the potentials used in canoni-
cal ensembles are the enthalpy, the Helmholtz free energy, and the Gibbs free energy,

E, H, F, G . (1.56)

Which one of the defined potentials is used as a state function in a particular problem
is, in principle, arbitrary. However, some processes are easier to describe in terms
of particular potentials. Before we define them below, let us present a useful mathe-
matical framework facilitating the conversion between state functions called Legendre
transform.

1.2.3.1 Legendre transform in thermodynamics


Assume that for a system characterized by three state variables (X, Y, Z) we know
the state function W = W (X, Y, Z), so that for given system parameters (A, B, C),
we know how the system will evolve upon a set of variations (dX, dY, dZ),

dW = (∂W/∂X)Y,Z dX + (∂W/∂Y )X,Z dY + (∂W/∂Z)X,Y dZ ≡ AdX + BdY + CdZ .     (1.57)

The partial derivatives A, B, C are evaluated holding the respective other variables
fixed. Since the order of the derivatives can be inverted, we know,

(∂A/∂Y )X,Z = (∂/∂Y (∂W/∂X)Y,Z )X,Z = (∂/∂X (∂W/∂Y )X,Z )Y,Z = (∂B/∂X)Y,Z     (1.58)
(∂A/∂Z)X,Y = (∂/∂Z (∂W/∂X)Y,Z )X,Y = (∂/∂X (∂W/∂Z)X,Y )Y,Z = (∂C/∂X)Y,Z ,

and analogously for all other second derivatives. Applied to thermodynamic systems
these expressions are called Maxwell relations.

Now, we want to predict how the system will evolve when a different set of varia-
tions is applied, for instance (dA, dY, dZ). To solve the problem we first define a new
state function V = V (A, Y, Z) via,

V ≡ W − AX . (1.59)

This substitution is called Legendre transform. The new differential is,

dV = dW − AdX − XdA = −XdA + BdY + CdZ     (1.60)
    = (∂V/∂A)Y,Z dA + (∂V/∂Y )A,Z dY + (∂V/∂Z)A,Y dZ ,

so that −X = (∂V/∂A)Y,Z , B = (∂V/∂Y )A,Z , and C = (∂V/∂Z)A,Y ,

and each of these three expressions has a physical meaning. The thermodynamic
potentials introduced in the next section will exemplify the procedure. Do the
Exc. 1.2.8.1.
Useful mathematical identities when working with partial derivatives are,

(∂X/∂Z)Y (∂Z/∂X)Y = 1 ,     (∂Z/∂X)Y (∂X/∂Y )Z (∂Y /∂Z)X = −1 .     (1.61)
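The identities (1.61) are easily checked for a concrete state equation. A minimal Python sketch (assuming sympy; the ideal gas law P V = N kB T serves as test case) reads:

    # Check the reciprocal and triple-product rules (1.61) for the ideal gas P V = N kB T.
    # Minimal sketch assuming sympy is available.
    import sympy as sp

    P, V, T, N, kB = sp.symbols('P V T N k_B', positive=True)

    P_of = N * kB * T / V            # P(T, V)
    V_of = N * kB * T / P            # V(T, P)
    T_of = P * V / (N * kB)          # T(P, V)

    # reciprocal rule: (dV/dT)_P (dT/dV)_P = 1
    print(sp.simplify(sp.diff(V_of, T) * sp.diff(T_of, V)))          # -> 1

    # triple product: (dP/dT)_V (dT/dV)_P (dV/dP)_T = -1
    dPdT = sp.diff(P_of, T)
    dTdV = sp.diff(T_of, V).subs(P, P_of)    # eliminate P in favor of (T, V)
    dVdP = sp.diff(V_of, P).subs(P, P_of)
    print(sp.simplify(dPdT * dTdV * dVdP))                           # -> -1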

1.2.3.2 Enthalpy
The energy function called enthalpy is defined as,

H ≡ E + PV . (1.62)

With the expression (1.51) the differential enthalpy becomes,

dH = dE + P dV + V dP = T dS + V dP . (1.63)

It has the same level of generality as the combined first and second laws.
Historically, the enthalpy was introduced to simplify the description of heat engines
taken through cycles at atmospheric pressure, dP = 0. For isobaric processes in
simple systems, dHP = T dSP = δQrev,P , the enthalpy provides a direct measure of
the reversible heat exchange of the engine with the environment.

1.2.3.3 Helmholtz free energy


The energy function called Helmholtz free energy is defined as,

F ≡ E − TS . (1.64)

With the expression (1.51) the differential Helmholtz free energy becomes,

dF = dE − T dS − SdT = −SdT − P dV . (1.65)

It has the same level of generality as the combined first and second laws.
This function was devised to simplify the description of processes occurring at
a fixed (if necessary stabilized) temperature, dT = 0. For isothermal processes in
simple systems, dFT = −P dVT = δWrev,T , the Helmholtz free energy reports the
total reversible work done on the system.

1.2.3.4 Gibbs free energy


The energy function called Gibbs free energy is defined as,

G ≡ E + PV − TS . (1.66)

With the expression (1.51) the differential Gibbs free energy becomes,

dG = dE + P dV + V dP − T dS − SdT = −SdT + V dP . (1.67)

It has the same level of generality as the combined first and second laws.
This function was introduced to simplify the description of processes occurring
at both constant temperature, dT = 0, and constant pressure, dP = 0. For isobaric and
isothermal processes in simple systems, dGT,P = 0. But in systems undergoing phase
transformations or chemical reactions, the Gibbs free energy yields the total work

other than mechanical work, dGT,P = δWT,P .

1.2.3.5 Summary of thermodynamic potentials for canonical ensembles


The following list summarizes the thermodynamic potentials for canonical ensembles,
which are the total energy E, the enthalpy H, the Helmholtz free energy F , and
the Gibbs free energy G,

δQrev = T dS
δWrev = −P dV
E = Qrev + Wrev =⇒ dE = T dS − P dV
(1.68)
H = E + PV =⇒ dH = T dS + V dP
F = E − TS =⇒ dF = −SdT − P dV
G = E + PV − TS =⇒ dG = −SdT + V dP

1.2.3.6 Material properties


Database variables are defined as conditional derivatives of the state functions or
thermodynamic potentials. Mechanical properties such as the compressibility κ, the
thermal expansion coefficient α, and the stress coefficient β are prominent examples.
In the case of a single substance system, they are defined by,

κ = −(1/V )(∂V /∂P )T ,     α = (1/V )(∂V /∂T )P ,     β = (1/P )(∂P /∂T )V .     (1.69)

Table 1.1 lists the coefficients for several materials.


Example 6 (Links between material properties): Maxwell's relations applied
to the thermal expansion coefficient and the compressibility defined in (1.69)
immediately tell us,

(∂α/∂P )T = −(∂κ/∂T )P ,     (1.70)

for any system.
Furthermore, using the rules (1.61) we calculate,

1 = −(∂V /∂P )T (∂P /∂T )V (∂T /∂V )P = −(∂V /∂P )T (∂P /∂T )V / (∂V /∂T )P = (κV )(βP )/(αV ) .

Hence,

α = κβP .     (1.71)

That is, the stress coefficient is not an independent quantity, but depends on the
thermal expansion coefficient and the compressibility.
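The relation α = κβP can be confirmed for the ideal gas, whose coefficients are derived explicitly in Sec. 1.2.5. A minimal Python sketch (assuming sympy) reads:

    # Check alpha = kappa * beta * P for the ideal gas P V = N kB T (cf. Eq. (1.89)).
    # Minimal sketch assuming sympy is available.
    import sympy as sp

    P, V, T, N, kB = sp.symbols('P V T N k_B', positive=True)

    V_of = N * kB * T / P                             # V(T, P)
    P_of = N * kB * T / V                             # P(T, V)

    alpha = sp.simplify(sp.diff(V_of, T) / V_of)      # (1/V)(dV/dT)_P  -> 1/T
    kappa = sp.simplify(-sp.diff(V_of, P) / V_of)     # -(1/V)(dV/dP)_T -> 1/P
    beta  = sp.simplify(sp.diff(P_of, T) / P_of)      # (1/P)(dP/dT)_V  -> 1/T

    assert sp.simplify(alpha - kappa * beta * P) == 0     # alpha = kappa * beta * P
    print(alpha, kappa, beta)                             # 1/T, 1/P, 1/T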

Thermal properties are grasped by the concept of heat capacity. This quantity
is measured via the temperature rise of a substance due to reversible absorption
of a defined quantity of heat. Since heat is a process variable, the heat capacity
measurement will depend on the circumstances, i.e. whether the pressure is kept
constant during the measurement or the volume. The heat capacity will be for the
respective cases,

CP ≡ (δQrev /dT )P     or     CV ≡ (δQrev /dT )V .     (1.72)

Using the relationships listed in (1.68), we find immediately,

CP = T (∂S/∂T )P = (∂H/∂T )P     or     CV = T (∂S/∂T )V = (∂E/∂T )V .     (1.73)
For a system held at constant pressure, absorption of heat will lead to both an increase
in temperature and an expansion of volume, which corresponds to work. For a
system held at constant volume, absorption of heat will only increase temperature,
hence, CP > CV .
Empirically, the heat capacities are found to depend on temperature. A frequently
used interpolation expression giving good results at temperatures above room tem-
perature is,
CP (T ) = a + bT + c/T² + dT² ,     (1.74)
where the coefficients are listed in Tab. 1.1 for several materials.

1.2.3.7 Coefficient and Maxwell relations


The differential forms listed in (1.68) immediately allow to express state functions via
partial derivatives of thermodynamic potentials in the following coefficient relations,

T = (∂E/∂S)V = (∂H/∂S)P ,     −S = (∂F/∂T )V = (∂G/∂T )P     (1.75)
−P = (∂E/∂V )S = (∂F/∂V )T ,     V = (∂H/∂P )S = (∂G/∂P )T .

Furthermore, taking the second derivatives of the expressions for the temperature and
the pressure, we find,

−(∂P/∂S)V = (∂T/∂V )S ,     (1.76)
and similarly for the other potentials.

1.2.4 Strategy for deriving thermodynamic relations


The conceptual framework of terms and definitions erected in the previous sections
allows us to derive a totally general equation expressing any state variable as a func-
tion of two (for simple systems) other state variables. For complex systems, more
free state variables may be necessary. Below we will provide a recipe for a rigorous
general procedure.
As a preparation for the employment of the procedure we will express all state
variables as functions of temperature T and pressure P , which are the most commonly
used free variables. Once equations for volume V = V (T, P ) and entropy S = S(T, P )
are found, expressions for the four energy variables as functions of (T, P ) readily
follow. At the end, we will show how to convert functions of (T, P ) into functions of
any other pair of variables.

1.2.4.1 State variables as functions of T and P


The differential form of the function V = V (T, P ) is,

dV = AdT + BdP = (∂V/∂T )P dT + (∂V/∂P )T dP = αV dT − κV dP ,     (1.77)

where we used the definitions of the material coefficients (1.69). Analogously, the
differential form of the function S = S(T, P ) is,

dS = A′dT + B′dP = (∂S/∂T )P dT + (∂S/∂P )T dP = (CP /T ) dT − αV dP .     (1.78)

For the first coefficient we used the definitions of the heat capacities (1.73). The
second coefficient follows from the Maxwell relation applied to the Gibbs free energy
differential, dG = −SdT + V dP , yielding,

−(∂S/∂P )T = (∂V/∂T )P .     (1.79)

Inserting the expressions (1.77) and (1.78) into the differential forms (1.68) and
separating the coefficients of the differentials dT and dP , we immediately get,

V = V (T, P )  =⇒  dV = αV dT − κV dP
S = S(T, P )  =⇒  dS = (CP /T ) dT − αV dP
E = E(T, P )  =⇒  dE = (CP − αP V )dT + (κP − αT )V dP     (1.80)
H = H(T, P )  =⇒  dH = CP dT + (1 − αT )V dP
F = F (T, P )  =⇒  dF = −(S + αP V )dT + κP V dP
G = G(T, P )  =⇒  dG = −SdT + V dP
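As a consistency check of (1.80), the following minimal Python sketch (assuming sympy) inserts the ideal gas values α = 1/T , κ = 1/P , and V = N kB T /P derived in Sec. 1.2.5 and confirms that the energy and enthalpy differentials collapse to dE = CV dT and dH = CP dT :

    # Insert the ideal-gas coefficients into the general forms (1.80) and simplify.
    # Minimal sketch assuming sympy; CP is treated as a constant with CP - CV = N kB.
    import sympy as sp

    T, P, N, kB, CP = sp.symbols('T P N k_B C_P', positive=True)
    dT, dP = sp.symbols('dT dP')

    V = N * kB * T / P          # ideal gas volume
    alpha = 1 / T               # thermal expansion coefficient, Eq. (1.89)
    kappa = 1 / P               # compressibility, Eq. (1.89)

    dE = (CP - alpha * P * V) * dT + (kappa * P - alpha * T) * V * dP
    dH = CP * dT + (1 - alpha * T) * V * dP

    print(sp.simplify(dE))      # -> (C_P - N*k_B)*dT, i.e. C_V dT
    print(sp.simplify(dH))      # -> C_P*dT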

1.2.4.2 Recipe for change of variables


With the results (1.80) we may now devise the following recipe for changing the
independent variables,

1. Identify the new free and dependent state variables: W = W (X, Y ).


2. Write the differential form: dW = AdX + BdY .
3. Use the Eqs. (1.80) to express dX and dY in terms of the variables dT and dP :

dX = (∂X/∂T )P dT + (∂X/∂P )T dP ,     dY = (∂Y /∂T )P dT + (∂Y /∂P )T dP ,     (1.81)

where the derivatives are the coefficients of dT and dP in the expressions for
dX and dY in (1.80).

4. Insert the expressions for dX and dY into the differential form dW and collect
terms:

dW = [A (∂X/∂T )P + B (∂Y /∂T )P ] dT + [A (∂X/∂P )T + B (∂Y /∂P )T ] dP .     (1.82)

5. Obtain W = W (T, P ) from (1.80),

dW = (∂W/∂T )P dT + (∂W/∂P )T dP .     (1.83)

6. By comparison of the coefficients of the equations (1.82) and (1.83),

(∂W/∂T )P = A (∂X/∂T )P + B (∂Y /∂T )P ,     (∂W/∂P )T = A (∂X/∂P )T + B (∂Y /∂P )T .     (1.84)

7. Solve the set of equations (1.84) for A and B,

A = [(∂W/∂P )T (∂Y /∂T )P − (∂W/∂T )P (∂Y /∂P )T ] / [(∂X/∂P )T (∂Y /∂T )P − (∂X/∂T )P (∂Y /∂P )T ] ,
B = [(∂W/∂T )P (∂X/∂P )T − (∂W/∂P )T (∂X/∂T )P ] / [(∂X/∂P )T (∂Y /∂T )P − (∂X/∂T )P (∂Y /∂P )T ] .     (1.85)
Example 7 (Relating entropy to temperature and volume): As an example
of the procedure developed in the previous sections, we will now express entropy
as a function of temperature and volume.

1. The wanted expression is: S = S(T, V ).


2. Its differential form is: dS = AdT + BdV .
3. Substituting dV from (1.80)(i): dS = AdT + B(αV dT − κV dP ).
4. Collecting terms: dS = (A + BαV )dT − BκV dP .
5. Obtain S = S(T, P ) from (1.80)(ii): dS = (CP /T )dT − αV dP .
6. Compare coefficients: A + BαV = CP /T and −BκV = −αV .
7. Solve for A and B: A = (CP /T ) − (α²V /κ) and B = α/κ .

The expression is thus,

dS = (CP /T − α²V /κ) dT + (α/κ) dV .     (1.86)
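For the ideal gas the coefficients (1.89) reduce Eq. (1.86) to dS = CV dT /T + N kB dV /V . A minimal Python sketch (assuming sympy) performs this reduction:

    # Reduce Eq. (1.86) to its ideal-gas form using alpha = 1/T, kappa = 1/P, P V = N kB T.
    # Minimal sketch assuming sympy; CP and CV are related by CP - CV = N kB, Eq. (1.90).
    import sympy as sp

    T, V, N, kB, CV = sp.symbols('T V N k_B C_V', positive=True)
    dT, dV = sp.symbols('dT dV')

    CP = CV + N * kB            # Eq. (1.90)
    P = N * kB * T / V          # ideal gas law
    alpha, kappa = 1 / T, 1 / P

    dS = (CP / T - alpha**2 * V / kappa) * dT + (alpha / kappa) * dV
    print(sp.simplify(dS))      # -> C_V*dT/T + N*k_B*dV/V (possibly over a common denominator)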

Example 8 (Relating entropy to pressure and volume): As a second example
of the procedure developed in the previous sections, we will now express entropy
as a function of pressure and volume.

1. The wanted expression is: S = S(P, V ).
2. Its differential form is: dS = AdP + BdV .
3. Substituting dV from (1.80)(i): dS = AdP + B(αV dT − κV dP ).
4. Collecting terms: dS = BαV dT + (A − BκV )dP .
5. Obtain S = S(T, P ) from (1.80)(ii): dS = (CP /T )dT − αV dP .
6. Compare coefficients: BαV = CP /T and A − BκV = −αV .
7. Solve for A and B: A = κCP /(αT ) − αV and B = CP /(αT V ) .

The expression is thus,

dS = (κCP /(αT ) − αV ) dP + (CP /(αT V )) dV .     (1.87)

1.2.5 Ideal gases


One of the most common systems to apply thermodynamic concepts are ideal gases,
for which we know that they obey the state equation,

P V = N kB T , (1.88)

where we consider for now a fixed number of particles N = const. The thermal expan-
sion coefficient and the compressibility are readily calculated from their definitions
(1.69),

αid = (1/V )(∂V /∂T )P = (1/V )(∂(N kB T /P )/∂T )P = 1/T     (1.89)
κid = −(1/V )(∂V /∂P )T = −(1/V )(∂(N kB T /P )/∂P )T = 1/P
βid = (1/P )(∂P /∂T )V = (1/P )(∂(N kB T /V )/∂T )V = 1/T .

The relationship between the heat capacities at constant volume and pressure is
directly obtained from (1.86),

CV = T (∂S/∂T )V = CP − αid²T V /κid = CP − N kB .     (1.90)

Hence, for an ideal gas, the heat capacities depend neither on T nor on P , but only on
the number of atoms N and their configuration in each gas molecule,

CV = (3/2) N kB .     (1.91)

We will see later that for a molecular gas with f degrees of freedom (accessible at
the given temperature), this result must be generalized to,

CV = f2 N kB . (1.92)

The differential form for the internal energy E = E(T, P ) is,

dE = (CP − αid P V )dT + (κid P − αid T )V dP = (CP − N kB )dT = CV dT . (1.93)

Thus, E only depends on T , and since E is a state function, this holds for any process,
whether reversible or irreversible. For the other thermodynamic potentials we get,
exploiting the relationships (1.68),

E = CV T , F = (CV − S)T
(1.94)
H = CP T , G = (CP − S)T

Do the Excs. 1.2.8.2 to 1.2.8.12.
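As a simple numerical illustration (a sketch; the choice of one mole at 300 K is arbitrary), the heat capacities (1.91)-(1.92) and the internal energy (1.94) evaluate to:

    # Heat capacities and internal energy of an ideal gas, Eqs. (1.90)-(1.94).
    # Illustrative sketch; particle number and temperature are arbitrary choices.
    kB = 1.380649e-23        # Boltzmann constant [J/K]
    NA = 6.02214076e23       # Avogadro constant [1/mol]

    N = NA                   # one mole of gas
    T = 300.0                # temperature [K]

    for name, f in [('monoatomic', 3), ('diatomic (translation + rotation)', 5)]:
        CV = f / 2 * N * kB              # Eq. (1.92)
        CP = CV + N * kB                 # Eq. (1.90)
        gamma = CP / CV                  # adiabaticity coefficient, Eq. (1.101)
        E = CV * T                       # internal energy, Eq. (1.94)
        print(f'{name}: CV = {CV:.2f} J/K, CP = {CP:.2f} J/K, gamma = {gamma:.3f}, E = {E:.0f} J')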

1.2.5.1 Adiabatic reversible processes


In thermodynamics, processes in which no heat is exchanged between the system and
its environment are called adiabatic,

δQadiab = 0 . (1.95)

If additionally a process is reversible,

δQadiab,rev = T dS = 0 , (1.96)

it is an isentropic process, that is, entropy is neither produced nor transferred.


Let us compute the change in temperature of a reversibly and adiabatically compressed
ideal gas. As usual, we start identifying the relevant state variables, T =
T (S, V ). From (1.68)(iii), using the definition of the heat capacity at constant volume
CV , we find,

dT = (T dS − P dV )/CV ,     (1.97)

and since dS = 0,

dTS = −(P /CV ) dVS = −(N kB T /(CV V )) dVS .     (1.98)

Integrating this equation,

∫_{T1}^{T2} dT /T = ln(T2 /T1 ) = −(N kB /CV ) ln(V2 /V1 ) = −(N kB /CV ) ∫_{V1}^{V2} dV /V ,     (1.99)

we finally find,

T2 /T1 = (V1 /V2 )^{N kB /CV} = (P2 /P1 )^{N kB /CP}     or     P2 /P1 = (V1 /V2 )^{CP /CV} .     (1.100)

The ratio

γ ≡ CP /CV ,     (1.101)

is called the adiabaticity coefficient. For the ideal gas studied in the previous section
we have γ = 5/3, and for a molecular gas with f degrees of freedom γ = 1 + 2/f .
With the adiabaticity coefficient the state functions for adiabatic reversible pro-
cesses can be written,

P V^γ = const     and     T V^{γ−1} = const ,     (1.102)

where the second relation follows from the ideal gas equations (1.88). Such a process
is shown in the P V -diagram of Fig. 1.21.
Since the specific heat can be obtained from the heat capacity simply by dividing
by the mass, c ≡ C/m, the ratio of the specific heat at constant pressure to the
specific heat at constant volume is also equal to the constant γ, that is, γ = cP /cV .
Do the Excs. 1.2.8.13 to 1.2.8.22.
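A short numerical illustration of (1.100) (a sketch; the 10:1 compression of air at 293 K mirrors Exc. 1.2.8.7):

    # Temperature rise of an ideal diatomic gas (air, gamma = 1.4) under reversible
    # adiabatic compression, Eq. (1.100): T2 = T1 * (V1/V2)**(gamma - 1).
    gamma = 1.4          # adiabaticity coefficient of air
    T1 = 293.0           # initial temperature [K]
    ratio = 10.0         # compression ratio V1/V2

    T2 = T1 * ratio**(gamma - 1)
    P2_over_P1 = ratio**gamma           # pressure ratio along the adiabat, P V^gamma = const

    print(f'T2 = {T2:.0f} K, P2/P1 = {P2_over_P1:.1f}')   # about 736 K and 25.1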

1.2.5.2 Isothermal, isobaric, and isochoric processes


We consider an ideal gas confined in a rigid, thermally insulated box, so that all
processes occurring within the box are adiabatic, since no heat is exchanged with the
surroundings, δQ = 0. Now, we divide the box into two volumes separated by a rigid
wall but connected by a valve, which may be opened or closed, as shown in Fig. 1.19.
Initially, the entire gas is in volume V1 , and when the valve is opened it expands into
the volume V2 . Since the walls of the box do not move, no work is done on the
surroundings, δW = 0.

Figure 1.19: Free expansion of a gas.

Since no entropy can flow out of the system, any change in entropy must come
from local production arising from irreversible processes. The expansion of the gas is
a complicated process occurring far from equilibrium. Nevertheless, since entropy is
a state function, the gain in entropy is the same as for a fictive reversible process leading
to the same final state. Since the initial state of the gas is known (V1 , Ti ) as well as
the final volume, Vf = V1 + V2 , the final temperature Tf is determined. Now, since
the internal energy does not change during expansion, dE = δQ + δW = 0, and since
for an ideal gas the internal energy is proportional to temperature, the temperature
doesn’t change,
P V = N kB T = const . (1.103)

Such a process is called isothermal and is shown in the P V -diagram of Fig. 1.21.
On an isotherm of an ideal gas we have P dV + V dP = N kB dT = 0 and therefore
the work done by the gas is δW = P dV = −V dP . Moreover,

P = N kB T /V   ⇒   dP = −(N kB T /V²) dV   ⇒   δW = N kB T dV /V ,     (1.104)

and so, the work done by the gas when it expands from V1 to V2 is,

∆W1→2 = N kB T ∫_{V1}^{V2} dV /V = N kB T ln(V2 /V1 ) .     (1.105)
Similar considerations can be made for isobaric and isochoric processes. The results
are summarized in Tab. 1.2.

1.2.5.3 Entropy of ideal gases


As an example, let us consider entropy changes in P V space. Via Legendre transform
we derive from the relationships (1.80) obtained for S = S(T, P ),

S = S(P, V )     with     dS = (κCP /(αT ) − αV ) dP + (CP /(αT V )) dV     (1.106)
                                −→ CV dP /P + CP dV /V ,

where the last step holds for ideal gases. We can integrate this expression either
holding pressure or volume constant,

∆SP = CP ∫_{Vi}^{Vf} dV /V = CP ln(Vf /Vi )     or     ∆SV = CV ∫_{Pi}^{Pf} dP /P = CV ln(Pf /Pi ) .     (1.107)

Since entropy is a state function, it is possible to calculate the entropy at any
point S2 = S(P2 , V2 ) from the entropy change along an arbitrary path starting from
any other point S1 = S(P1 , V1 ), e.g.,

S(P2 , V2 ) = S(P2 , V1 ) + CP ln(V2 /V1 )     (1.108)
            = S(P1 , V1 ) + CP ln(V2 /V1 ) + CV ln(P2 /P1 ) = S(P1 , V2 ) + CV ln(P2 /P1 ) .

We conclude,

∆S1→2 = CP ln(V2 /V1 ) + CV ln(P2 /P1 ) = CV ln(P2 V2^γ /(P1 V1^γ)) = CV ln(T2 V2^{γ−1}/(T1 V1^{γ−1})) .     (1.109)

Assuming a mono-atomic gas, for which γ = 5/3, we obtain,

∆S1→2 = N kB ln(V2 T2^{3/2}/(V1 T1^{3/2})) ,     (1.110)

which is known as the Sackur-Tetrode formula plotted in Fig. 1.20(b).
Figure 1.20: (code) (a) Qualitative behavior of the ideal gas law in T P V space. (b) Qualitative behavior of the Sackur-Tetrode formula in T SV space with planes corresponding to δQ = 0, δW = 0, and δQ + δW = 0.
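As a numerical illustration of (1.109) (a sketch; one mole and a volume doubling are arbitrary choices), the entropy gained in the free expansion of Fig. 1.19, where the temperature stays constant, is ∆S = N kB ln(Vf /Vi ):

    # Entropy produced in the free expansion of an ideal gas, Eq. (1.109) with T2 = T1.
    import math

    kB = 1.380649e-23        # Boltzmann constant [J/K]
    NA = 6.02214076e23       # Avogadro constant [1/mol]

    N = NA                   # one mole
    V_ratio = 2.0            # Vf / Vi: the gas doubles its volume

    dS = N * kB * math.log(V_ratio)      # = N kB ln 2, since the temperature does not change
    print(f'Delta S = {dS:.2f} J/K')     # about 5.76 J/K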

1.2.5.4 Entropy, heat and work balance during reversible processes


The heat absorbed and work done during reversible processes can be computed via
integration of T dS or −P dV along the path of the process. The integration is sim-
plified if one of the state functions is kept constant during the process. We will
now study particularly simple cases, where one of the three variables T, S, P, V is
held constant. 12 combinations are possible: (i) isothermal process entropy change,
(ii) isothermal pressure change, (iii) isothermal volume change, (iv) isentropic process
temperature change, (v) isentropic pressure change, (vi) isentropic volume change,
(vii) isobaric process temperature change, (viii) isobaric entropy change, (ix) iso-
baric volume change, (x) isochoric process temperature change, (xi) isochoric entropy
change, and (xii) isochoric pressure change. If one of the six variables T, S, P, V, µ, N
is held constant, there are 30 possible combinations.
Table 1.2 summarizes the entropy, heat, and work balances for all 12 processes,
which will be derived in Exc. 1.2.8.23. Marked in red are those processes not requiring
Legendre transforms for calculating entropy, heat, and work changes.
process  |  ∆S1→2 = ∫ dS  |  ∆Q1→2 = ∫ T dS  |  ∆W1→2 = −∫ P dV

isotherm, dS:  ∫ dS_T → S2 − S1  |  ∫ T dS_T → T (S2 − S1 )  |  −∫ (κ/α)P dS_T → −∆Q1→2
isotherm, dP:  −∫ αV dP_T → −N kB ln(P2 /P1 )  |  −∫ αT V dP_T → −N kB T ln(P2 /P1 )  |  ∫ κP V dP_T → −∆Q1→2
isotherm, dV:  ∫ (α/κ) dV_T → N kB ln(V2 /V1 )  |  ∫ (αT /κ) dV_T → N kB T ln(V2 /V1 )  |  −∫ P dV_T → −∆Q1→2
isentropic, dT:  0  |  0  |  ∫ (κCP /(αT ) − αV )P dT_S → CV (T2 − T1 )
isentropic, dP:  0  |  0  |  ∫ (κ − α²T V /CP )P V dP_S → (P2 V2 − P1 V1 )/(γ − 1)
isentropic, dV:  0  |  0  |  −∫ P dV_S → (P2 V2 − P1 V1 )/(γ − 1)
isobar, dT:  ∫ (CP /T ) dT_P → CP ln(T2 /T1 )  |  ∫ CP dT_P → CP (T2 − T1 )  |  −∫ αP V dT_P → −N kB (T2 − T1 )
isobar, dS:  ∫ dS_P → S2 − S1  |  ∫ T dS_P → CP T1 (e^{∆S_P /CP} − 1)  |  −∫ (αT V P /CP ) dS_P → −(N kB /CP ) ∫ T dS_P
isobar, dV:  ∫ (CP /(αT V )) dV_P → CP ln(V2 /V1 )  |  ∫ (CP /(αV )) dV_P → (CP P /(N kB ))(V2 − V1 )  |  −∫ P dV_P → −P (V2 − V1 )
isochor, dT:  ∫ (CP /T − α²V /κ) dT_V → CV ln(T2 /T1 )  |  ∫ (CP − α²T V /κ) dT_V → CV (T2 − T1 )  |  0
isochor, dS:  ∫ dS_V → S2 − S1  |  ∫ T dS_V → CV T1 (e^{∆S_V /CV} − 1)  |  0
isochor, dP:  ∫ (κCP /(αT ) − αV ) dP_V → CV ln(P2 /P1 )  |  ∫ (κCP /α − αT V ) dP_V → (CV V /(N kB ))(P2 − P1 )  |  0

Table 1.2: Summary of entropy, heat, and work balances upon various state changes (the arrows give the results for the ideal gas).

1.2.6 Cyclic processes


Thermal machines are based on cyclic processes. Examples are the Clément-Desormes
cycle studied in the next section, the Carnot cycle in Exc. 1.2.8.24, the Otto cycle
studied in Exc. 1.2.8.25, or the Diesel cycle studied in Exc. 1.2.8.26, and others studied
in Excs. 1.2.8.27 to 1.2.8.33.
From the first law of thermodynamics dE = δQ + δW we expect that for cyclic
processes going through a sequence of processes j the heat and work balances are
compensated,

0 = ∮ T dS − ∮ P dV = Σ_j ∆Qj→j+1 + Σ_j ∆Wj→j+1 .     (1.111)

We define the efficiency of a cyclic process as the ratio between net work performed
and heat absorbed (not delivered),

η ≡ −Σ_j ∆Wj→j+1 / Σ_j ∆Q^{>0}_{j→j+1} .     (1.112)

1.2.6.1 The Clément-Desormes method for determining γ


The specific heat of solids and liquids is usually measured with samples under atmo-
spheric conditions and without control of the volume of the material, i.e. we generally
measure cP . In contrast, gases are easier to study when they are contained in a rigid
recipient, such as a glass bulb with little thermal expansion within the temperature
range of the experiment. Then, the specific heat is measured at constant volume cV .
The value cP of a gas is larger than cV because in the experiment, at constant pres-
sure, the heat delivered to the gas also causes its expansion, which means that part of
that energy is being converted into work and not into an increase of the thermal energy
of the gas molecules. The ratio between the specific heats at constant pressure and
volume, γ = cP /cV , is a value that often appears in the description of thermodynamic
processes in gases. This ratio can be measured by isobaric and isochoric processes,
respectively measuring cP and cV . The first experiment to measure the factor γ in
gases was performed in 1819 by Desormes and Clément. The method consists in
applying to a gas (assumed to be ideal) a sequence of three processes illustrated in
Fig. 1.21: an isothermal expansion from state (1) to (2), followed by an isochoric
cooling from (2) to (3), and finally an adiabatic compression from (3) back to (1).
During the adiabatic process from (3) to (1) the relation between pressure and volume
is described by P V γ = const. Monitoring both during the process thus allows us to
measure the adiabaticity coefficient γ.
The heat and work balance of the Clément-Desormes cycle is summarized in the
following table. The heats and works exchanged with the reservoir can be looked up
in Tab. 1.2,

process          ∆Qj→j+1 =                                        ∆Wj→j+1 =
1→2 isotherm     T1 (S2 − S1 ) = N kB T1 ln(V2 /V1 ) > 0           −∆Q1→2              (1.113)
2→3 isochor      CV (T3 − T2 ) < 0                                 0
3→1 isentropic   0                                                 CV (T1 − T3 )

Figure 1.21: P V -diagram (left) and T S-diagram (right) for the Clément-Desormes cyclic
process.

with T1 = T2 > T3 . The efficiency is defined as,

η = −Σ ∆Wj→j+1 / Σ ∆Q^{>0}_{j→j+1} = −[−T1 (S2 − S1 ) + CV (T1 − T3 ) + 0] / [T1 (S2 − S1 )]     (1.114)
  = 1 − CV (T1 − T3 ) / (N kB T1 ln(V2 /V1 )) .
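A small numerical sketch of the cycle (illustrative values; the expansion ratio and the choice of a monoatomic gas are arbitrary) closes the cycle by placing state 3 on the adiabat through state 1 and then evaluates the efficiency (1.114):

    # Efficiency of the Clement-Desormes cycle, Eq. (1.114), for an ideal monoatomic gas.
    import math

    kB = 1.380649e-23
    NA = 6.02214076e23

    N = NA                      # one mole
    gamma = 5.0 / 3.0           # monoatomic ideal gas
    CV = N * kB / (gamma - 1)   # heat capacity at constant volume

    T1 = 300.0                  # temperature of the isotherm 1 -> 2 [K]
    V_ratio = 2.0               # V2 / V1 of the isothermal expansion

    # State 3 lies on the adiabat through state 1 and has the same volume as state 2:
    T3 = T1 * (1.0 / V_ratio)**(gamma - 1)            # T V^(gamma-1) = const

    eta = 1.0 - CV * (T1 - T3) / (N * kB * T1 * math.log(V_ratio))
    print(f'T3 = {T3:.1f} K, eta = {eta:.3f}')        # about 189 K and 0.20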
Example 9 (Rüchardt’s method for determining γ): Rüchardt’s method
shown in Fig. 1.22 allows the measurement of the ratio γ = CP /CV of a gas. Let
us consider a gas confined in a large container with volume V . Connected
to this container is a tube with cross section A, inside which a metal ball of
mass m (which fits perfectly in the tube) can slide up and down thus acting
like a piston. Due to the compression and decompression of the gas, this mass
oscillates around its equilibrium position (y = 0). The presence of the metallic
sphere increases the internal pressure to P = Pa +mg/A in equilibrium position,
where Pa is the external (atmospheric) pressure. We will call τ the period of
oscillation. See also Exc. 1.2.8.13. Given a displacement y on the sphere, the

Figure 1.22: Rüchardt’s method for determining γ.

change in the volume of the gas is ∆V = yA, so that there is a pressure variation
∆P accompanying ∆V . This increase in pressure produces a restoring force
F = ∆P A. Assuming that the process is nearly static and adiabatic, we have,
P V^γ = const   ⇒   γP V^{γ−1} ∆V + V^γ ∆P = 0 .     (1.115)
Using ∆P = F/A and ∆V = yA, we obtain,

γP V^{γ−1} yA + V^γ F/A = 0   ⇒   F = −(γP A²/V ) y ,     (1.116)

which leads us to the following equation of motion for the sphere,

m d²y/dt² + (γP A²/V ) y = 0 ,     (1.117)

which is the differential equation of a simple harmonic motion of frequency

ω0² = γP A²/(mV ) .     (1.118)

In this way, knowing P , V , m and A, we can measure τ and obtain γ as,

τ = 2π √(mV /(γP A²)) .     (1.119)
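Using the parameters quoted in Exc. 1.2.8.13 as illustrative inputs, Eq. (1.119) gives the oscillation period directly:

    # Ruechardt oscillation period, Eq. (1.119), with the parameters of Exc. 1.2.8.13.
    import math

    gamma = 1.4                 # adiabaticity coefficient quoted in the exercise
    V = 10e-3                   # gas volume [m^3] (10 l)
    d = 16e-3                   # tube diameter [m]
    m = 20e-3                   # ball mass [kg]
    g = 9.81                    # gravitational acceleration [m/s^2]
    Pa = 101325.0               # atmospheric pressure [Pa]

    A = math.pi * (d / 2)**2            # tube cross section
    P = Pa + m * g / A                  # equilibrium pressure below the ball

    tau = 2 * math.pi * math.sqrt(m * V / (gamma * P * A**2))
    print(f'P = {P:.0f} Pa, tau = {tau:.2f} s')     # roughly one second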

1.2.6.2 Carnot cycle


A Carnot cycle is an ideal thermodynamic cycle providing, by Carnot’s theorem,
an upper limit on the efficiency of any classical thermodynamic engine during the
conversion of heat into work, or conversely, the efficiency of a refrigeration system in
creating a temperature difference through the application of work to the system.
In a Carnot cycle, an engine transfers energy in the form of heat between two
thermal reservoirs at temperatures Thot and Tcold , and a part of this transferred
energy is converted to the work done by the system. The cycle is reversible and hence
isentropic. In other words, entropy is conserved; it is only transferred between the
thermal reservoirs. When work is applied to the system, heat moves from the cold
to hot reservoir, which is exploited in heat pumps and refrigerators, depending on
whether the heat increase of the hot reservoir is exploited or the heat decrease of
the cold reservoir. When heat moves from the hot to the cold reservoir, the system
applies work to the environment, which can be exploited in heat engines.
The work W done by the system or engine to the environment per Carnot cycle
depends on the temperatures of the thermal reservoirs and the entropy transferred
from the hot reservoir to the system ∆S per cycle such as,

∆W = (Thot − Tcold ) ∆S = (Thot − Tcold ) ∆Qhot /Thot ,     (1.120)
where Qhot is heat transferred from the hot reservoir to the system per cycle.
The heat and work balance of the Carnot cycle is summarized in the following
table (see Fig. 1.23):

process          ∆Qj→j+1 =                                        ∆Wj→j+1 =
1→2 isotherm     T1 (S2 − S1 ) = N kB T1 ln(V2 /V1 ) < 0           −∆Q1→2 > 0
2→3 isentropic   0                                                 CV (T3 − T2 ) > 0      (1.121)
3→4 isotherm     T3 (S4 − S3 ) = N kB T3 ln(V4 /V3 ) > 0           −∆Q3→4 < 0
4→1 isentropic   0                                                 CV (T1 − T4 ) < 0

with T3 = T4 > T1 = T2 and S1 = S4 > S2 = S3 . Because E is a state function, a
cyclic process must necessarily satisfy ∆E = Σ ∆Wj→j+1 + Σ ∆Qj→j+1 = 0. The

Figure 1.23: P V -diagram (left) and T S-diagram (right) for the Carnot cycle.

efficiency is defined as,

η ≡ −Σ_j ∆Wj→j+1 / Σ_j ∆Q^{>0}_{j→j+1}     (1.122)
  = −[−T1 (S2 − S1 ) + CV (T3 − T2 ) − T3 (S4 − S3 ) + CV (T1 − T4 )] / [T3 (S4 − S3 )] = 1 − T1 /T3 .
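As a quick numerical illustration (the reservoir temperatures and the absorbed heat are arbitrary choices), the Carnot efficiency and the work per cycle (1.120) follow directly:

    # Carnot efficiency and work per cycle, Eqs. (1.120) and (1.122).
    T_hot = 500.0        # temperature of the hot reservoir [K]
    T_cold = 300.0       # temperature of the cold reservoir [K]
    Q_hot = 1000.0       # heat absorbed from the hot reservoir per cycle [J]

    eta = 1.0 - T_cold / T_hot                       # Eq. (1.122)
    W = (T_hot - T_cold) * Q_hot / T_hot             # Eq. (1.120)

    print(f'eta = {eta:.2f}, W = {W:.0f} J per cycle')   # eta = 0.40, W = 400 J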

Example 10 (Efficiency of the Carnot cycle): According to Eq. (1.111) the


total work and heat balances of a cyclic process correspond to the enclosed areas
in the P V , respectively, T S-diagrams, and both areas are equal. As illustrated
in Fig. 1.24, for given temperatures of the hot and cold reservoir, the largest area
∆W = ∆Q = ∮ T dS is occupied by a rectangle corresponding to the Carnot
cycle.

Figure 1.24: T S-diagram for an arbitrary cyclic process (left) and for the Carnot cycle
(right).

1.2.7 Real gases, liquids and solids


As long as real gases, liquids, and solids qualify as unary, homogeneous, closed, non-
reacting, and otherwise simple systems, the laws and procedures outlined in Secs. 1.2.1
to Sec. 1.2.3 apply to them in the same way as for ideal gases. Additionally, in solids
and liquids the material constants α, κ, CP are generally to a good approximation
constant.

The ideal gas law (1.3) is only valid for non-interacting particles. In reality, inter-
particle interactions increase the effective pressure and the finite size of the molecules
reduces the effective volume. Indeed, even at T = 0 the volume V of a real gas cannot
be zero, because the molecules have their own volume V ∗ . And as the molecules
attract each other, the pressure drops to zero even before T reaches 0. In the van der Waals model
the ideal gas equation is generalized to,

(P + P ∗)(V − V ∗) = N kB T     with     P ∗ = (a/V²)(N/NA)²     and     V ∗ = b N/NA ,     (1.123)

where a and b are empirical constants specifically depending on the gas. Real gases
are studied in Excs. 1.2.8.34 to 1.2.8.36. Using molar functions denoted by a tilde (˜),
N → NA , NA kB → Rg , V NA /N → Ṽ , etc.,

(P + a/Ṽ²)(Ṽ − b) = Rg T .     (1.124)

Figure 1.25: (code) Phase diagrams of a real gas.
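A minimal numerical sketch of Eq. (1.124) (the constants a and b used below are typical textbook values for CO2 and serve only as an illustration) compares the van der Waals pressure with the ideal gas value at the same molar volume:

    # Pressure of a van der Waals gas, Eq. (1.124), compared to the ideal gas law.
    # Illustrative sketch; a and b are typical textbook values for CO2.
    Rg = 8.314            # universal gas constant [J/(mol K)]
    a = 0.364             # [Pa m^6 / mol^2]
    b = 4.27e-5           # [m^3 / mol]

    T = 300.0             # temperature [K]
    Vm = 1.0e-3           # molar volume [m^3/mol]

    P_vdw = Rg * T / (Vm - b) - a / Vm**2     # Eq. (1.124) solved for P
    P_ideal = Rg * T / Vm

    print(f'P_vdw = {P_vdw/1e5:.2f} bar, P_ideal = {P_ideal/1e5:.2f} bar')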

1.2.7.1 Joule-Thomson effect

The Joule-Thomson effect describes the temperature change of a real gas or liquid
(as differentiated from an ideal gas), when it is forced through a valve or porous plug
while keeping it insulated so that no heat is exchanged with the environment (see
Exc. 1.2.8.37).
The relationships (1.80)(ii) and (iii) express energy and entropy as a function of
temperature and pressure, Ẽ = Ẽ(T, P ) and S̃ = S̃(T, P ). Using the procedure
outlined in Sec. 1.2.4 to express energy and entropy as a function of temperature and

volume, Ẽ = Ẽ(T, Ṽ ) and S̃ = S̃(T, Ṽ ), we find,

dẼ = (C̃P − α²T Ṽ /κ) dT + (αT /κ − P ) dṼ     (1.125)
dS̃ = (C̃P /T − α²Ṽ /κ) dT + (α/κ) dṼ .

Hence,

(∂Ẽ/∂Ṽ )T = T (∂S̃/∂Ṽ )T − P .     (1.126)

From the Maxwell relation derived from dF̃ = −S̃dT − P dṼ , we find,

(∂P /∂T )V = −(∂/∂T (∂F̃ /∂Ṽ )T )V = −(∂/∂Ṽ (∂F̃ /∂T )V )T = (∂S̃/∂Ṽ )T .     (1.127)

Hence,

(∂Ẽ/∂Ṽ )T = T (∂P /∂T )V − P .     (1.128)
Now, we consider a dense real van der Waals gas according to Eq. (1.124) but
neglecting the volume parameter, b = 0. The pressure P then behaves as a function
of temperature T and molar volume Ṽ according to the following state equation,

P = Rg T /Ṽ − a/Ṽ² ,     (1.129)

where a is a positive constant and Rg is the universal gas constant. From the equation
of state we obtain,

(∂P /∂T )V = Rg /Ṽ ,     (1.130)

which, replaced in the expression (1.128), gives,

(∂Ẽ/∂Ṽ )T = T Rg /Ṽ − P = a/Ṽ² .     (1.131)

Integrating this equation yields,

Ẽ = −a/Ṽ + K(T ) ,     (1.132)

where K(T ) depends only on T . We now have the molar energy Ẽ expressed as a
function of the molar volume Ṽ .
We now assume that the heat capacity C̃V is constant. Differentiating the expression
(1.132) by temperature gives,

C̃V = (∂Ẽ/∂T )V = K′(T ) .     (1.133)

Inserting the integral of (1.133), K(T ) = C̃V T + K0 , into (1.132), we end up with,

Ẽ = −a/Ṽ + C̃V T + K0 .     (1.134)

In a process of free expansion, Ẽ remains invariant and Ṽ grows. Thus, resolving
(1.134) for the temperature and differentiating with respect to the volume,

(∂T /∂Ṽ )E = (∂/∂Ṽ [(Ẽ + a/Ṽ − K0 )/C̃V ])E = −a/(C̃V Ṽ²) .     (1.135)

For an ideal gas, a = 0, we expect no temperature change. For a gas with positive
(negative) a the variation (1.135) will be negative (positive).
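A short numerical sketch (illustrative values; the constant a and the volume doubling are arbitrary choices) integrates (1.135) to estimate the cooling upon free expansion, ∆T = (a/C̃V )(1/Ṽf − 1/Ṽi ):

    # Temperature change of a van der Waals gas (b = 0) upon free expansion,
    # obtained by integrating Eq. (1.135): dT = -a/(CV*V^2) dV.
    Rg = 8.314                  # universal gas constant [J/(mol K)]
    a = 0.364                   # van der Waals constant, typical textbook value for CO2 [Pa m^6/mol^2]
    CV = 5.0 / 2.0 * Rg         # molar heat capacity of a rigid linear molecule [J/(mol K)]

    Vi = 1.0e-3                 # initial molar volume [m^3/mol]
    Vf = 2.0e-3                 # final molar volume [m^3/mol]

    dT = (a / CV) * (1.0 / Vf - 1.0 / Vi)     # integral of -a/(CV V^2) dV from Vi to Vf
    print(f'Delta T = {dT:.2f} K')            # about -9 K: the expanding gas cools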
Example 11 (Microscopic interpretation of Joule-Thomson cooling): In
the discussion of the process illustrated in Fig. 1.19 we stated that an ideal
gas does not change its temperature when it expands flowing through a nozzle
from one volume into another. This is in contrast to the behavior of a real gas.
Indeed, in a compressed gas the molecules are closer to each other and thus
feel attractive (or repulsive) van der Waals forces. When the gas expands, the
molecules must overcome these forces at the cost (gain) of kinetic energy.

1.2.8 Exercises
1.2.8.1 Ex: Material parameters
The compressibility κ, the thermal expansion coefficient α, and the stress coefficient β
are important material parameters. In the case of a unary system (single substance),
they are defined by,

κ = −(1/V )(∂V /∂P )ν,T ,     α = (1/V )(∂V /∂T )ν,P ,     β = (1/P )(∂P /∂T )ν,V .

Here ν is the number of moles.
a. Show that these relationships can be rewritten, using the molar volume Ṽ , to,

κ = −(1/Ṽ )(∂Ṽ /∂P )T ,     α = (1/Ṽ )(∂Ṽ /∂T )P ,     β = (1/P )(∂P /∂T )Ṽ .

b. Use the total differential of V = V (T, P, ν) to show that, in general,

β = α/(κP ) .
c. Calculate κ, α and β for an ideal gas as functions of P and T . Show that the
relationship from (b) is also fulfilled.

1.2.8.2 Ex: 1. law of thermodynamics


In a thermally insulated container there is one mole of air at the temperature Ti =
400 K. Now, it is reversibly compressed, doing the work W = 100 cal. Calculate the
ratio Vf /Vi between the final and initial volume. Assume that the air behaves like an
ideal gas and that the container itself does not absorb heat from the air.

1.2.8.3 Ex: Specific heat

The specific heat for an isobaric transformation is defined as CP = Rg + CV . A mass


of mN2 = 10 g of nitrogen is heated at constant pressure P = 2 atm and an initial
temperature of Ti = 20 ◦ C until its volume increases by 20%. Calculate the initial
volume V , the final temperature Tf , and the heat supplied Q.

1.2.8.4 Ex: Volumetric thermal expansion of an ideal gas

a. Calculate the volumetric thermal expansion coefficient of an ideal gas at constant


pressure.
b. Calculate the volumetric thermal expansion coefficient of an ideal gas during an
adiabatic expansion.

1.2.8.5 Ex: Specific heat at constant volume/pressure

a. Explain why the specific heat at constant volume is less than the specific heat at
constant pressure.
b. Show that for a diatomic gas γ = CP /CV = 7/5 .

1.2.8.6 Ex: Heat capacities

a. Two gas containers are brought into thermal contact. They contain gases with
the temperatures T1 and T2 , as well as the heat capacities C1 and C2 . The thermal
capacity of the containers is negligible. What is the temperature of the gases after an
equilibrium has been reached?
b. Now consider the temperature equilibrium of three containers, each with 100 g of
the gas H2 at the temperature TH2 = 10 ◦ C, 50 g of the gas He at the temperature
THe = 15 ◦ C, and 200 g of the gas N2 at temperature TN2 = 20 ◦ C. What is the final
temperature?

1.2.8.7 Ex: Gas compression

Calculate the temperature change resulting from adiabatic compression of an ideal


gas of volume V (T1 ) to V (T2 ) = V (T1 )/10.
Compare this with the temperature change through an analog isobaric compression
of an equally ideal gas. Note: γ = cP /cV = 1.4 (for air), T1 = 293 K.

1.2.8.8 Ex: Gas compression

An oxygen bottle with the volume V2 = 40 l contains a filling ex works that would
have the volume V1 = 6 m3 at atmospheric pressure P1 = 101 kPa. The bottle, which
has been emptied to atmospheric pressure, is refilled at a constant temperature of
T1 = 18 ◦ C. What mechanical work W must be added to the gas to compress it
isothermally from P1 to the filling pressure?

1.2.8.9 Ex: Gas expansion

1 kmol of nitrogen under normal conditions (P0 = 1.01 × 105 Pa, T = 0 ◦ C) adiabat-
ically expands from V1 to V2 = 5V1 . Calculate the change in the internal energy of
the gas and the amount of work the gas does as it expands.

1.2.8.10 Ex: Adiabatic expansion

a. During the adiabatic expansion of a gas the pressure P and the volume V of the
gas satisfy the relationship P V γ = α, where α is a constant, and γ is the factor
of the gas that gives the ratio between the specific heats at constant pressure and
volume, i.e. γ = cP /cV . A gas was placed in a cylinder with a movable (frictionless)
plunger completely insulated from the external environment. The assembly makes it
possible to measure the volume and pressure of the gas during its expansion and the
experimental values obtained are given in the table below.
a. From the values in the table below, and using the least squares method, deter-
mine the gas factor and the constant α. (Hint: To obtain a linear relationship, take
x = lg V and y = lg P .).
b. Determine, through the method of least squares, the uncertainties in the values
obtained for γ and α.
c. Using a log×log paper, prepare a graph P × V and determine the values of γ and
α. Compare with the results obtained by the least squares method.
Notes: When displaying the values of γ and α, be sure to indicate the units in which
they are expressed.
Display the values of Sx , Sy , Sx2 and Sxy used in the least squares method calcula-
tions.
V (l) P (atm)
40 1.20
41 1.16
43 1.10
44 1.05
46 0.98
47 0.96
49 0.90
50 0.87

1.2.8.11 Ex: 1. law of thermodynamics

In a thermally insulated container B there are n mol of an ideal gas and a body K
with the heat capacity C. Specify the relationship between pressure P and volume
V , whereby the change in V is carried out so slowly that the following always applies
to body and gas: TK = TG .
Note: Body and gas exchange heat. Assume that the container itself does not take
heat from the gas or the body.

Figure 1.26: Thermally insulated container B holding the ideal gas and the body K; the volume can be changed by dV.

1.2.8.12 Ex: 1. law of thermodynamics


A container with 1 mol helium and a container of the same size with 1 mol nitrogen
are both heated with the same heating power PQ = 10 W.
a. Calculate how long it takes to warm up the containers from T1 = 20 ◦ C to T2 =
100 ◦ C, if the thermal capacity of the container is Crec = 10 J/K.
b. How long does it take to warm up to 1000 ◦ C assuming that the vibrational degrees
freedom of N2 molecules can be excited above 500 ◦ C? Neglect heat loss.

1.2.8.13 Ex: Rüchardt’s calorimetric method


A mono-atomic ideal gas with the adiabatic coefficient γ = 1.4 is in a thermally
insulated bottle with a long neck. The total volume of the bottle and the neck is
V0 = 10 l. At the beginning there is atmospheric pressure. A thermally insulating
ball with mass m = 20 g is now inserted into the neck (precision tube with a diameter
of d = 16 mm), which hermetically seals the bottle to the outside. The ball can move
smoothly.
a. Determine the equilibrium position of the ball. What is the pressure and volume
in the part of the bottle sealed by the ball?
b. The ball is now pushed down slightly from the equilibrium position and then
released. With what period τ does the ball vibrate.
Help: Relate the instantaneous pressure p in the bottle to small volume changes ∆V
and linearize the expression using a Taylor expansion.

1.2.8.14 Ex: Calorimeter for mixtures


The specific heat capacity of platinum cPt is to be measured with a mixing calorimeter.
For this purpose, a platinum body is heated to 100 ◦ C and then thrown into water
of 20 ◦ C. To simplify the evaluation, the mass of the water is chosen to be that of
the platinum body. The heat absorption of the calorimeter body should be neglected.
The specific heat capacity of water is cH2 O = 4.19 J/(g K), the relative atomic mass
of platinum is mPt = 195 u, the linear expansion coefficient α = 9.0 · 10−6 K-1 .
a. The mixing temperature is 22.41 ◦ C. What value follows for cPt ?
b. What is the value for cPt when applying the Dulong-Petit rule?
c. The platinum body and the water have the same mass. What is the ratio of the

number of platinum atoms to the number of water molecules?


d. How many degrees does the platinum body have to be heated to increase its volume
by 1 %?

1.2.8.15 Ex: Calorimeter for mixtures


The equation of state of an ideal gas P V = N kB T applies and the energy is given as
E = CV T .
a. Show that for the entropy change of an ideal gas from state A with temperature
TA and volume VA to state B with temperature TB and volume VB holds: ∆S =
CV ln (TB /TA ) + N kB ln (VB /VA ).
b. Two insulated containers with the same volume V = 10 cm3 , the same pressure
P = 1 bar and the same temperature T = 100 ◦ C are filled with nitrogen and oxygen,
respectively. Determine the change in entropy when connecting the containers so that
the gases can mix.

1.2.8.16 Ex: 2. law of thermodynamics


A thermally insulated container with a total volume of 10 l is separated into two equal
parts by a plate. In each part there are 10 mol of an ideal atomic gas. In one part
the gas has the temperature T1 = 300 K, in the other the temperature T2 = 400 K.
Calculate the change in the total entropy ∆S of the system in the event that:
a. the plate does not insulate heat;
b. a small hole opens in the disc through which the gases can mix slowly and which
at the end closes again;
c. the plate is suddenly removed and then put back in after some time without any
work being done.
In all three cases we wait for equilibrium to establish.

1.2.8.17 Ex: 2. law of thermodynamics


A heat engine works between two heat sources with temperatures T2 > T1 . If we
take away the heat δQ from the second heat source, what is the maximum and min-
imum work δWmax and δWmin that the heating machine can do? What could the
corresponding processes look like?

1.2.8.18 Ex: Specific heat


Calculate the specific heat per mol of an ideal gas for a reversible process according
to the law P V γ = const with γ ∈ R. Can such a specific heat be negative? Justify the
result.

1.2.8.19 Ex: Expansion of a gas


One mole of a simple ideal gas, defined by E = cRg T , P V = Rg T , is contained in
a container of initial volume Vi and pressure Pi . The gas expands from that initial
state to the state corresponding to a final volume Vf = 2Vi , through several different
processes. Determine the work δW done by the gas and the heat δQ received by the
gas for each of these processes. Final answers should be given only in terms of (Vi , Pi )
and the constant c.


a. Free expansion: also determine the temperature variation ∆T .
b. Isentropic expansion: also obtain the final pressure Pf , using the fact that in this
process for an ideal gas P V γ = constant, where γ = (c + 1)/c.
c. Isobaric quasi-static expansion.
d. Isothermal quasi-static expansion.

1.2.8.20 Ex: Adiabatic expansion


A diatomic gas is initially maintained at a pressure of Pi = 4000 hPa in a piston of
volume Vi = 1 l at temperature Ti = 100 ◦ C. Now the gas pushes the piston outward
until the pressure has dropped to half its initial value. Considering the expansion as an adiabatic
process,
a. calculate the number of moles η inside the piston;
b. calculate the thermal capacities at constant volume CV of the gas, as well as at
constant pressure CP , and determine the adiabatic coefficient γ;
c. calculate the final volume of the piston;
d. calculate the final temperature of the gas;
e. calculate the work done by the gas on the piston.
f. Now, the gas cools down gradually while the piston is held fixed. Calculate the
pressure Pr when the temperature has reached Tr = 20 ◦ C.

1.2.8.21 Ex: Entropy changes


a. What entropy increase results when 200 g of (liquid) water at 0 ◦ C and 200 g water
at 90 ◦ C are mixed at constant pressure in a heat-insulated recipient? The molar heat
cp of the water should be 75.5 J/(mol K) regardless of the temperature.
b. 1 dm3 helium at P0 = 1 bar and T0 = 0 ◦ C are heated to the temperature T = 500 K.
How big is the change in entropy upon isochoric and isobaric heating?

1.2.8.22 Ex: Heat capacity


When drilling a brass block of mass m = 500 g (c = 0.1 cal/(g K)), a power of 300 W
is provided for 2 minutes. What is the temperature rise of the block, if 75% of the
heat generated warms it up? What happens to the remaining 25%?

1.2.8.23 Ex: Heat and work upon thermodynamic processes


Calculate the heat generated and the work executed for at least two (non-trivial)
equilibrium thermodynamic processes involving the active change of one of the state
variables T, S, P, V , while maintaining one other variable fixed. Consider the ideal gas
case as a limit of the general formulas. Check with the expressions listed in Tab. 1.2.

1.2.8.24 Ex: Heat power engine and heat pump based on Carnot cycle
Consider a heat power engine and a heat pump based on the Carnot cycle. Calculate
the efficiency and the generated (respectively consumed) power as a function of the
temperature difference between the hot and the cold bath.
Figure 1.27: Principle of a heat pump.

1.2.8.25 Ex: The Otto cycle


Calculate the yield η = ∆W/∆Q of the Otto cycle, where ∆Q is the heat received by
the system and ∆W the work executed, (see Fig. 1.28).

Figure 1.28: Otto cyclic process. The black branches are adiabatic processes.

1.2.8.26 Ex: The Diesel cycle


Calculate the yield η = ∆W/∆Q of the Diesel cycle, where ∆Q is the heat received
by the system and ∆W the work executed, (see Fig. 1.29).

Figure 1.29: P V -diagram (left) and T S-diagram (right) for the Diesel cycle. The black
branches are adiabatic processes.

1.2.8.27 Ex: Brayton cycle modeling a gas turbine


Calculate the efficiency of the Brayton cycle modeling a gas turbine, as shown in
Fig. 1.30.

Figure 1.30: P V -diagram (left) and T S-diagram (right) for the Brayton cycle.

1.2.8.28 Ex: Cyclic process


An ideal gas with N atoms and the heat capacities CV = (3/2) N kB and CP = (5/2) N kB
goes through the cycle shown in Fig. 1.31. For the starting point (1) its pressure P1 ,
volume V1 , and thus also the temperature T1 are known.
a. Calculate P2 , T2 , P3 and T3 , if V2 = V3 = 3V1 .
b. Calculate the work done and the heat input for all three steps of the cyclic process,
as well as the efficiency.

Figure 1.31: Cyclic process.

1.2.8.29 Ex: Heat engine cycle with two isochoric phases


In a heat power machine (illustrated in Fig. 1.32), the working gas (helium) is sealed
off in a cylinder by a movable piston. The gas is alternately heated and cooled from
the outside. The piston moves back and forth periodically and drives a shaft. The
initial state is: P = 0.2 · 106 Pa, V = 150 cm3 , T = 300 K.
a. Calculate the mass of the enclosed helium and the adiabaticity coefficient.
b. During a complete work cycle, the gas undergoes the following changes in its state:
1 → 2 isochoric heating to twice the temperature,
2 → 3 isothermal expansion to twice the volume,


3 → 4 isochoric cooling to the initial temperature,
4 → 1 isothermal compression to initial volume.
Calculate the work done and the heat input for all four steps of the cyclic process.

Figure 1.32: Cyclic process and scheme of a heat power engine

1.2.8.30 Ex: Heat engine cycle with two isochoric phases

An engine, whose four work steps consist of two isothermal and two isochoric pro-
cesses, runs at a speed of N = 500 min-1 . There is ν = 0.5 mol of an ideal, mono-
atomic gas in the volume of the engine. The parameters for the individually labeled
working steps are T1 = 50 ◦ C, P1 = 2 bar, and P2 = 5 bar.
a. Determine the volumes V1 = V2 and V3 = V4 , the pressure P4 , and the temperature
T2 .
b. What efficiency η does the cycle have?
c. How high is the net power P of the engine?

Figure 1.33: Cyclic process.

1.2.8.31 Ex: Cyclic process

A thermodynamic system is brought from an initial state (1) to another state (2), then
to (3), and finally back to (1), as illustrated in the diagram in Fig. 1.34. Calculate
the work and heat balance for the entire cycle.

Figure 1.34: Cyclic process

1.2.8.32 Ex: Cyclic process


An ideal gas with N atoms and the heat capacities CV = (3/2) N kB and CP = (5/2) N kB
goes through the cycle shown in Fig. 1.35: First an isotherm from (1) to (2), then an
isobar from (2) to (3) and finally an isochor from (3) to (1). For the starting point
(1), the temperature T1 and the volume V1 are known, for (2) V2 is given.
a. Calculate the work done and the heat input for all three steps of the cyclic process,
as well as the efficiency as a function of V1 and V2 and alternatively of T1 and T3 .
b. Calculate the total changes of internal energy and entropy. Show explicitly that
∆Q/T remains the same, regardless of whether we go from (1) to (2) via a reversible
isothermal process or first with a reversible isobaric process and then with a reversible
isochoric process.

Figure 1.35: Cyclic process.

1.2.8.33 Ex: Cyclic process


Calculate the yield η = ∆W/∆Q of the process depicted in Fig. 1.36.

1.2.8.34 Ex: Real gas


Here we study a real gas whose state equation is given by the van der Waals formula
(1.124).
Figure 1.36: Cyclic process.

a. Why is the critical point for the liquid-gas phase transition defined by the
conditions

dP/dṼ = d^2 P/dṼ ^2 = 0 ?
b. Show that the temperature of the critical point Tc and the volume of the system at
the critical point Ṽc are linked to the material constants a and b by the relationships:
Tc = 8a/(27 Rg b)    and    Ṽc = 3b .

c. Calculate the pressure at the critical point Pc as a function of a and b.


d. Express the van der Waals formula in terms of rescaled parameters, Ṽr ≡ Ṽ /Ṽc ,
Tr ≡ T /Tc , Pr ≡ P/Pc .
e. For CO2 the values a = 3.6 · 10−6 bar m6 mol−2 and b = 4.3 · 10−5 m3 mol−1 are
suitable parameters of the van der Waals equation. Consider a mole of CO2 and
calculate the values for the critical point Ṽc , Tc and Pc .

1.2.8.35 Ex: Dieterici model for a real gas


Assume that one mole of a real gas satisfies the Dieterici equation of state (an alter-
native to the van der Waals equation),

P e^(α/(Rg T Ṽ )) (Ṽ − β) = Rg T ,

where α and β are parameters.


a. In what units do α and β have to be specified? What sign do you expect for each
parameter?
b. Express the parameters of the critical point Tc and Vc by α and β.

1.2.8.36 Ex: Isothermal expansion


Calculate the work done during isothermal expansion from V1 to V2 of a real gas.

1.2.8.37 Ex: Joule-Thomson process


Here we study the Joule-Thomson effect. A gas is forced under constant pressure P1
from a container B1 through a porous partition into a container B2 with constant
pressure P2 < P1 . The constancy of the pressures in the containers is ensured by
increasing or decreasing their volumes. Finally, it is assumed that the gas is adiabat-
ically isolated from the environment and therefore only exchanges with it energy in
the form of work.
a. Show that the enthalpy H remains constant in both containers during this process.
b. Show that,

(∂T /∂P )H = (1/CP ) [ T (∂V /∂T )P − V ] ,
and calculate (∂T /∂P )H explicitly for an ideal gas.
c. For a real gas, the so-called inversion curve P (T ), defined by (∂T /∂P )H = 0, is obtained
in the P T -plane. Physically interpret the areas above and below this curve. Calculate
the inversion curve for the van der Waals gas using the thermal equation of state for
real gases.
d. Discuss the behavior of entropy in this process.

1.3 Thermodynamic equilibrium


Until now, we always assumed a system to be in thermal equilibrium, but we did not
say how to determine what this state is. In the following, we will lay out general
criteria for equilibrium and devise strategies for the calculation of equilibrium maps
and phase diagrams.
Upfront let us note that in an isolated system, the entropy is maximum at equi-
librium. We will translate this principle into a mathematical language consisting in
a set of equations, called equilibrium conditions, which determine the relationships
that the internal properties of the system must satisfy to find itself in equilibrium.
To discuss equilibrium conditions for a system, we need to allow it to be out of
equilibrium. This obviously is not possible with the simple systems studied so far.
The mere fact that they can be fully described by a single state function is due to the
assumption that they are in equilibrium already, e.g. with the environment. We will
thus illustrate the equilibrium conditions with a unary, two-phase system.
The notion of equilibrium is introduced in classical mechanics as a balance of
forces. In thermodynamics the factors that influence the evolution of a system are
more general: whatever is doing work, transferring heat, or providing matter can drive
a system out of equilibrium. To be at equilibrium a system must (i) be at rest and
(ii) be balanced. The first condition means that, if not perturbed by external factors,
the system does not change its state over time. The second condition means that,
after a transient perturbation driving the system out of equilibrium, the system
returns to its original state.
A system can be at steady-state while being driven by external influences, as
illustrated in Fig. 1.37. We will, however, consider a system to be at equilibrium
only, when it is stationary while being isolated from the environment. In that case,
no entropy is exchanged with the environment, dStrans = 0, so that all changes in
entropy must result from internal production, dSprod > 0. And since in equilibrium
the entropy production comes to a halt, the state of equilibrium is necessarily the one
with the highest entropy. This principle will be used as a criterion to help us in the
quest for the equilibrium state of an isolated system 3 .
3 Note that it doesn’t matter how the equilibrium state was reached, e.g. whether it had interacted
Figure 1.37: (a) Stable and unstable equilibrium positions in a mechanical system. (b) A
thermally insulated copper rod conducting heat from a hot source to a cold sink develops a
stationary temperature profile across its length (solid line in c). However, when disconnected
from its surroundings (source and sink), its temperature profile changes (dashed line in c).

1.3.1 Conditions for equilibrium


Mathematically the condition for equilibrium is spelled out by requesting the state
to sit at an extremum of the state function. For instance, the state function W =
W (X, Y ), whose differential is,
   
dW = (∂W/∂X)Y dX + (∂W/∂Y )X dY ,                    (1.136)

will have extrema at those points (X0 , Y0 ) satisfying the conditions,

(∂W/∂X)Y =Y0 = 0 = (∂W/∂Y )X=X0 .                    (1.137)
The nature of the extrema, whether maximum, minimum, or saddle point, depends
on the second derivatives of W at those points. The procedure can easily be extended
to state functions of more variables. The procedure, however, assumes the variables to be
independent, which generally is not the case. For instance, if the variables depend on each
other, Y = Y (X), we have a constraint.
A general procedure to handle this is the following:
1. Consider the state function, W = W (X1 , X2 , ...), with the constraints Xk =
Xk (X1 , X2 , ...).
2. Write the differential forms for all equations, for the state equation:
 
dW = A1 dX1 + A2 dX2 + ... with Aj ≡ (∂W/∂Xj ){Xi|i̸=j} ,                    (1.138)

and for the constraints:


 
dXk = B1k dX1 + B2k dX2 + ... with Bjk ≡ (∂Xk /∂Xj ){Xi|i̸=j} .                    (1.139)

3. Substitute dXk in Eq. (1.138) and collect the coefficients Cl of the remaining
terms dXl|l̸=k ,
dW = C1 dXl1 + C2 dXl2 + ... . (1.140)

Figure 1.38: (code) Illustration of state function and constraints. The surface in (a) visualizes
a state function depending on two variables; the black curve illustrates a possible constraint
Y = Y (X) between the variables. Fig. (b) shows the same surface as in (a), now constrained
by an additional surface Z2 = Z2 (X, Y ).

4. Set the coefficients to zero, Cl = 0. The solution of the system of equations


yields the extremum under the constraint.

Example 12 (Equilibrium condition with four variables): To illustrate


the procedure, let us consider the state function W = W (U, V, X, Y ) with two
constraints, U = U (X, Y ) and V = V (X, Y ). The differential forms are,

dW = A dU + B dV + C dX + D dY ,    dU = E dX + F dY ,    dV = G dX + H dY ,    (1.141)

where the coefficients A to H are the partial derivatives taken with all other
variables being held constant. Substituting dU and dV and collecting terms, we
get,

dW = A(EdX + F dY ) + B(GdX + HdY ) + CdX + DdY (1.142)


= (AE + BG + C)dX + (AF + BH + D)dY .

Now, setting terms in the brackets to zero,

AE + BG + C = 0 = AF + BH + D , (1.143)

we obtain the desired conditions for the extremum.
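For readers who prefer to check such manipulations by computer algebra, the following minimal Python sketch reproduces steps 1 to 4 for a toy example (the state function W and the constraint chosen here are arbitrary illustrations and are not part of the formalism above):

# Constrained extremum via substitution of the constraint differential (steps 1-4 above).
# Toy example: W(X, Y) = X^2 + 2 Y^2 with the constraint Y = 1 - X (both arbitrary choices).
import sympy as sp

X, Y = sp.symbols('X Y')
W = X**2 + 2*Y**2            # state function W(X, Y)
Y_of_X = 1 - X               # constraint Y = Y(X), hence dY = B dX with B = dY/dX

# step 2: differential coefficients
A1 = sp.diff(W, X)           # (dW/dX)_Y
A2 = sp.diff(W, Y)           # (dW/dY)_X
B  = sp.diff(Y_of_X, X)      # dY/dX from the constraint

# step 3: substitute dY = B dX and collect the coefficient of dX
C = (A1 + A2*B).subs(Y, Y_of_X)

# step 4: set the coefficient to zero -> extremum under the constraint
X0 = sp.solve(sp.Eq(C, 0), X)[0]
print(X0, Y_of_X.subs(X, X0))    # expected output: 2/3 1/3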

1.3.2 Entropy maximization in two-phase systems, chemical potential
Let us consider a unary, two-phase, non-reacting, and otherwise simple system. Both
phases are characterized by their own set of extensive and intensive parameters. Now,
the values of the extensive properties of both phases sum up,

S1 + S2 = Stot , (1.144)

and analogously for the other state functions Vj , Ej , Hj , Fj , Gj with j = 1, 2, but


also for the number of particles Nj in each phase. If we now allow particles to move
between the phases, as the state functions depend on the particle number, we need
to consider this dependency,

Ej = Ej (Sj , Vj , Nj ) , (1.145)

and so on for the other state functions. This leads to a generalization of the differential
form (1.68),
dEj = Tj dSj − Pj dVj + µj dNj , (1.146)

where we introduced the coefficient,


 
µj ≡ (∂Ej /∂Nj )Sj ,Vj ,                    (1.147)

called the chemical potential of phase j.


Solving Eq. (1.146) for dSj , we may now calculate the total entropy change,

dStot = dS1 + dS2 = (1/T1 ) dE1 + (1/T2 ) dE2 + (P1 /T1 ) dV1 + (P2 /T2 ) dV2 − (µ1 /T1 ) dN1 − (µ2 /T2 ) dN2 .    (1.148)

The fact that the system is closed implies, dE1 + dE2 = 0, dV1 + dV2 = 0, and
dN1 + dN2 = 0. Using these constraints in (1.148) we get,
     
dStot = (1/T1 − 1/T2 ) dE1 + (P1 /T1 − P2 /T2 ) dV1 − (µ1 /T1 − µ2 /T2 ) dN1 .    (1.149)

Hence, the system comes to equilibrium once the following conditions are fulfilled,

T1 = T2 thermal equilibrium
P1 = P2 mechanical equilibrium . (1.150)
µ1 = µ2 chemical equilibrium

Alternative formulations of the equilibrium criteria may be derived in terms of other
thermodynamic state functions. A quick look at the expressions (1.68) informs us
that for systems constrained to

• S = const and V = const, E is minimum at equilibrium;

• S = const and P = const, H is minimum at equilibrium;

• T = const and V = const, F is minimum at equilibrium;

• T = const and P = const, G is minimum at equilibrium.

Application of any of these criteria leads to the same set of conditions for equilibrium.
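The maximum-entropy criterion can also be checked numerically. The short Python sketch below (an illustration with arbitrarily chosen particle numbers, not a derivation) distributes a fixed total energy between two monatomic ideal-gas subsystems and verifies that the total entropy peaks where E1 /N1 = E2 /N2 , i.e. where T1 = T2 :

# Maximum of the total entropy for two monatomic ideal-gas subsystems exchanging energy.
# Up to additive constants, S_i = (3/2) N_i kB ln(E_i / N_i) at fixed volumes.
import numpy as np

kB = 1.380649e-23            # J/K
N1, N2 = 1e22, 2e22          # particle numbers (arbitrary illustration values)
Etot = 1.0                   # J, fixed total energy of the isolated compound system

E1 = np.linspace(1e-3, Etot - 1e-3, 100000)
E2 = Etot - E1
Stot = 1.5*kB*(N1*np.log(E1/N1) + N2*np.log(E2/N2))

E1_opt = E1[np.argmax(Stot)]
# At the maximum E1/N1 = E2/N2, i.e. T1 = T2 since E_i = (3/2) N_i kB T_i:
print(E1_opt/Etot, N1/(N1 + N2))    # both ratios are ~1/3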

1.3.3 Exercises
1.3.3.1 Ex: Gibbs free energy

Prove the relationship 4 ,


       
(∂G/∂T )V = (∂G/∂T )P + (∂G/∂P )T (∂P/∂T )V .

1.4 Thermodynamic ensembles


When we introduced the main thermodynamic potentials for canonical ensembles
(1.68) we implicitly supposed that the particle number N is fixed. If we want to
consider N as an equilibrium parameter, it must be treated as an independent state
variable, as shown in Sec. 1.3.2, and consequently the thermodynamic potentials must
be generalized.
The following eqs. (1.151) introduce several new energy state variables. The newly
defined potential Ω(T, V, µ) is called the Landau grand canonical potential. The
potentials Ψ(S, V, µ) and Φ(S, P, µ) are not used in practice. The potential O(T, P, µ)
only depends on intensive variables and therefore cannot be a state energy, i.e. it must
be zero, O = 0. This is known as the Gibbs-Duhem equation, which will be treated in
Sec. 2.2. The potentials will be discussed briefly in Sec. 1.4.1 and in detail in Chps. 4
and 2.

δQrev = T dS
δWrev = −P dV
E = ∫ (T dS − P dV + µdN )   =⇒  dE = T dS − P dV + µdN
Ψ = E − µN                   =⇒  dΨ = T dS − P dV − N dµ
H = E + PV                   =⇒  dH = T dS + V dP + µdN
Φ = H − µN                   =⇒  dΦ = T dS + V dP − N dµ        (1.151)
F = E − TS                   =⇒  dF = −SdT − P dV + µdN
Ω = F − µN                   =⇒  dΩ = −SdT − P dV − N dµ
G = H − TS                   =⇒  dG = −SdT + V dP + µdN
O = G − µN = 0               =⇒  dO = −SdT + V dP − N dµ

Example 13 (Thermodynamics of grand canonical ensembles): The goal


is here to represent the differentials of the extensive variables as functions of the
intensive variables,

S = S(T, P, µ) , V = V (T, P, µ) , N = N (T, P, µ) . (1.152)

4 Note that [13] claims on page 65 that this relationship holds for the enthalpy H, which is wrong.

For grand canonical ensembles we need to define additional material constants,


 
CP ≡ T (∂S/∂T )P,µ        −α = (1/V )(∂S/∂P )T,µ        σ ≡ µ (∂S/∂µ)T,P
α ≡ (1/V )(∂V /∂T )P,µ    κ ≡ −(1/V )(∂V /∂P )T,µ       η ≡ (1/V )(∂V /∂µ)T,P        (1.153)
σ = µ (∂N/∂T )P,µ         −η = (1/V )(∂N/∂P )T,µ        κ ≡ (1/N )(∂N/∂µ)T,P

where the relationships between corresponding off-diagonal elements are easily


derived as Maxwell equations via the second derivative of the thermodynamic
potential O = G − µN . Rearranging the matrix, we finally obtain,

dS = (CP /T ) dT − αV dP + (σ/µ) dµ
dV = αV dT − κV dP + ηV dµ                    (1.154)
dN = (σ/µ) dT − ηV dP + κN dµ

The question is now, what are the additional material constants σ, η, and κ?
Do the Exc. 1.4.6.2.

1.4.1 Coupling of thermodynamic ensembles to reservoirs


Application of the thermodynamic apparatus to physical systems requires certain
idealizations. In the preceding sections, for instance, we studied the case of systems
in equilibrium with themselves, but insulated from the environment. Often, we also
tacitly assumed that the number of particles is fixed and termed this condition the
canonical ensemble. There are, however, numerous real world situations deviating
from the above conditions, for example, when a system is held at a given temperature
via thermal contact with a ’reservoir’, or when chemical equilibrium is maintained via
particle exchange with a ’reservoir’.

Figure 1.39: Insulated or heat conducting, closed or open, rigid or working systems.

A reservoir is here regarded as a system with infinite resources of heat, work,


and particles. Thermal, mechanical, or chemical contact with a reservoir will thus
force any finite system to equilibrate to the conditions imposed by the reservoir. In
practice, the details of the coupling between system and reservoir may however vary,
as illustrated in Fig. 1.39. With respect to coupling to a reservoir a system may be
insulated (dS = 0) or heat conducting (T → Teq ), rigid (dV = 0) or compressible
(P → Peq ), closed (dN = 0) or open to particle exchange (µ → µeq ). From the


intensive parameters (T, P, µ) only those whose extensive counterparts are not held
constant may act as equilibrium parameters. For example, holding S fixed
the variable T cannot be an equilibrium parameter, and the system will be unable
to exchange heat with the reservoir; holding V fixed the variable P cannot be an
equilibrium parameter, and the system will be unable to work; holding N fixed the
variable µ cannot be an equilibrium parameter, and the system will be unable to
assimilate particles. This is summarized in Tab. 1.3.

Table 1.3: Various types of couplings to a reservoir.

type of coupling        condition        equilibrium parameter


insulated dS = 0 -
heat conducting dS ̸= 0 T
rigid dV = 0 -
compressible dV ̸= 0 P
closed dN = 0 -
open dN ̸= 0 µ

1.4.2 Thermodynamic potentials associated to specific ensembles
The choice of which parameters are held fixed and which are treated as equilibrium
parameters depends on the properties of the system under consideration. Idealized
theoretical models have been developed in the past by Helmholtz, Gibbs, and Lan-
dau, among others, for several paradigmatic physical situations. These models are
called thermodynamic ensembles. They are named after the variables considered as
invariant within this model. For example, assuming that holding the entropy, the
volume, and the particle number is a good description of a particular physical sys-
tem, we will model it by a N V S-ensemble. This frequently used model is also called
microcanonical ensemble. From Eqs. (1.152) we see that if dS = dV = dN = 0, then
the internal energy as well cannot change, dE = 0. That is, E is the characteristic
energy associated to the microcanonical ensemble 5 . Do the Exc. 1.4.6.1.
Analogous statements hold for other ensembles, as summarized in Tab. 1.4. The
most common ones are the microcanonical, the canonical, and the grand (or macro-)
canonical ensembles. We will briefly introduce them in the following and postpone
an in-depth discussion to Chp. 4. It is important, however, to keep in mind that all
those ensembles are only approximate models for particular physical situations. The
predictions obtained from these ensembles will only be as good as the degree to which
the assumptions they are based on represent the physical reality.

In the following we will qualitatively discuss the most common ones, which are
the microcanonical, the canonical, and the grand canonical ensembles.
5 For this reason, the microcanonical ensemble is also called N V E-ensemble.

Table 1.4: Various thermodynamic ensembles.

ensemble name const.param. extensive parameter assoc.pot. equil.param.


microcanonical NV S dS = dV = dN = 0 dE = 0 -
µV S dS = dV = 0 dΨ = 0 µ
isenthalpic-isobar NPS dS = dN = 0 dH = 0 P
µP S dS = 0 dΦ = 0 P, µ
canonical NV T dV = dN = 0 dF = 0 T
grand canonical µV T dV = 0 dΩ = 0 T, µ
isotherm-isobar NPT dN = 0 dG = 0 T, P
µP T dO = 0 T, P, µ

1.4.3 Microcanonical ensemble


The microcanonical ensemble is used to represent the possible microstates of a me-
chanical system whose total energy E is exactly specified. The system is assumed
to be isolated in the sense that it cannot exchange energy or particles with its envi-
ronment, so that the energy of the system does not change with time. The primary
macroscopic variables of the microcanonical ensemble are the total number of parti-
cles N in the system, the system’s volume V , as well as the total energy E in the
system.
In the microcanonical ensemble an equal probability P(E) is assigned to every
microstate whose energy falls within a range centered at E. All other microstates are
given a probability of zero. Since the probabilities must add up to 1, the probability
is the inverse of the number of microstates W within the range of energy,

Pmc = W −1 . (1.155)

The range of energy ∆E is then reduced in width until it is infinitesimally narrow,


still centered at E. The microcanonical ensemble is obtained in the limit of this
process. For a given mechanical system (fixed N , V ) and a given range of energy,
the uniform distribution of probability Pmc over microstates maximizes the ensemble
average −⟨ln P⟩.
In quantum mechanics the microcanonical density operator and partition function
are given by,

ρ̂mc = (1/Ξmc ) Σk |ψk ⟩ f ((E − εk )/∆E ) ⟨ψk |
                                                                    (1.156)
Ξmc = Σk f ((E − εk )/∆E )    with    f (x) = θ(1/2 − |x|) .

1.4.3.1 Applicability
Because of its connection with the elementary assumptions of equilibrium statistical
mechanics (particularly the postulate of a priori equal probabilities), the microcanoni-
cal ensemble is an important conceptual building block in the theory and is sometimes
considered to be the fundamental distribution of equilibrium statistical mechanics. It
is also useful in some numerical applications, such as molecular dynamics. On the


other hand, most nontrivial systems are mathematically cumbersome to describe in
the microcanonical ensemble, and there are also ambiguities regarding the definitions
of entropy and temperature. For these reasons, other ensembles are often preferred
for theoretical calculations.
The applicability of the microcanonical ensemble to real-world systems depends
on the importance of energy fluctuations, which may result from interactions between
the system and its environment as well as uncontrolled factors in preparing the sys-
tem. Generally, fluctuations are negligible if a system is macroscopically large, or if it
is manufactured with precisely known energy and thereafter maintained in near iso-
lation from its environment. In such cases the microcanonical ensemble is applicable.
Otherwise, different ensembles are more appropriate, such as the canonical ensemble
(fluctuating energy) or the grand canonical ensemble (fluctuating energy and particle
number).

1.4.3.2 Thermodynamic quantities


The fundamental thermodynamic potential of the microcanonical ensemble is entropy.
Various definitions of entropy are possible, each given in terms of the phase volume
function v(E), which counts the total number of states with energy less than E.

• The Boltzmann entropy is,


 
SB = kB ln W = kB ln(∆E dv/dE) ,                    (1.157)

• the ’volume entropy’ is,


Svol = kB ln v , (1.158)

• and the ’surface entropy’ is,

Ssur = kB ln(dv/dE) = SB − kB ln ∆E .                    (1.159)

In the microcanonical ensemble, the temperature is a derived quantity rather than


an external control parameter. It is defined as the derivative of the chosen entropy
with respect to energy. For example, one can define the ’temperatures’ Tvol and Tsur
as follows:
1/Tvol = ∂Svol /∂E ,    1/Tsur = ∂Ssur /∂E = ∂SB /∂E .                    (1.160)
Like entropy, there are multiple ways to understand temperature in the microcanonical
ensemble. Particularly for finite systems, the correspondence between these ensemble-
based definitions and their thermodynamic counterparts is not perfect.
The microcanonical pressure and chemical potential are given by:

P/T = ∂S/∂V ,    µ/T = −∂S/∂N .                    (1.161)
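These definitions are easily explored numerically for a toy model. The sketch below (an added illustration; the system of N two-level units is not discussed in the text) counts microstates W (E), forms the Boltzmann entropy, and extracts a temperature from dE/dS; it also exhibits the negative-temperature artefact discussed in the next subsection:

# Microcanonical entropy and temperature for N two-level units (toy model).
# A microstate with n excited units has energy E = n*eps and W(E) = binomial(N, n).
import numpy as np
from scipy.special import gammaln

kB, eps, N = 1.0, 1.0, 1000           # units with kB = eps = 1
n = np.arange(1, N)                   # number of excited units
lnW = gammaln(N + 1) - gammaln(n + 1) - gammaln(N - n + 1)

S = kB * lnW                          # Boltzmann entropy S_B = kB ln W
E = n * eps
T = np.diff(E) / np.diff(S)           # 1/T = dS/dE  ->  T = dE/dS (finite differences)

# Below half filling the density of states grows with E and T is positive; above half
# filling it decreases and T turns negative (cf. the discussion of artefacts below).
print(T[N//4], T[3*N//4])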

1.4.3.3 Phase transitions and thermodynamic analogies


Under their strict definition, phase transitions correspond to non-analytic behavior in
the thermodynamic potential or its derivatives. Using this definition, phase transitions
in the microcanonical ensemble can occur in systems of any size. This contrasts with
the canonical and grand canonical ensembles, for which phase transitions can occur
only in the thermodynamic limit, i.e. in systems with infinitely many degrees of
freedom. Roughly speaking, the reservoirs defining the canonical or grand canonical
ensembles introduce fluctuations that ’smooth out’ any non-analytic behavior in the
free energy of finite systems. This smoothing effect is usually negligible in macroscopic
systems, which are sufficiently large that the free energy can approximate non-analytic
behavior exceedingly well. However, the technical difference in ensembles may be
important in the theoretical analysis of small systems.

The volume entropy Svol and associated temperature Tvol form a close analogy to
thermodynamic entropy and temperature. It is possible to show exactly that,
dE = Tvol dSvol − ⟨P ⟩dV , (1.162)
where ⟨P ⟩ is the ensemble average pressure, as expected for the first law of thermo-
dynamics. A similar equation can be found for the surface entropy and its associated
temperature Tsur , however the ’pressure’ in this equation is a complicated quantity
unrelated to the average pressure.
The microcanonical Tvol and Tsur are not entirely satisfactory in their analogy to
temperature. Outside of the thermodynamic limit, a number of artefacts occur.
• Nontrivial result of combining two systems: Two systems, each described by
an independent microcanonical ensemble, can be brought into thermal contact
and be allowed to equilibrate into a combined system also described by a mi-
crocanonical ensemble. Unfortunately, the energy flow between the two systems
cannot be predicted based on the initial T ’s. Even when the initial T ’s are
equal, there may be energy transferred. Moreover, the T of the combination is
different from the initial values. This contradicts the intuition that temperature
should be an intensive quantity, and that two equal-temperature systems should
be unaffected by being brought into thermal contact.
• Strange behavior for few-particle systems: Many results, such as the micro-
canonical equipartition theorem acquire a one- or two-degree of freedom offset
when written in terms of Tsur . For small systems this offset is significant, and
so if we make Ssur the analogue of entropy, several exceptions need to be made
for systems with only one or two degrees of freedom.
• Spurious negative temperatures: A negative Tsur occurs whenever the density
of states decreases with energy. In some systems the density of states is not
monotonic in energy, and so Tsur can change sign multiple times as the energy
is increased. The preferred solution to these problems is to avoid using the
microcanonical ensemble. In many realistic cases a system is thermostatted to
a heat bath so that the energy is not precisely known. Then, a more accurate
description is the canonical ensemble or grand canonical ensemble, both of which
have complete correspondence to thermodynamics.

Example 14 (Ideal gas): The fundamental quantity in the microcanonical


ensemble is W (E, V, N ), which is equal to the phase space volume compatible
with given (E, V, N ). From W , all thermodynamic quantities can be calculated.
For an ideal gas, the energy is independent of the particle positions, which
therefore contribute a factor of V ^N to W . The momenta, by contrast, are
constrained to a 3N -dimensional (hyper-)spherical shell of radius √(2mE); their
contribution is equal to the surface volume of this shell. The resulting expression
for W is,

W = (V ^N /N !) (2π ^(3N/2) /Γ(3N/2)) (2mE)^((3N −1)/2) ,
where Γ(.) is the gamma function, and the factor N ! has been included to account
for the indistinguishability of particles (see Gibbs paradox). In the large N limit,
the Boltzmann entropy S = kB ln W is,

S = kB N ln[ (V /N ) (4πmE/(3N ))^(3/2) ] + (5/2) kB N + O(ln N ) .

This is also known as the Sackur-Tetrode equation.


The temperature is given by,

1/T ≡ ∂S/∂E = (3/2) N kB /E ,

which agrees with the analogous result from the kinetic theory of gases. Calculating
the pressure gives the ideal gas law,

P/T ≡ ∂S/∂V = N kB /V .

Finally, the chemical potential µ is,

µ ≡ −T ∂S/∂N = −kB T ln[ (V /N ) (4πmE/(3N ))^(3/2) ] .
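The Sackur-Tetrode result is easy to evaluate. The sketch below (an added illustration) includes the quantum normalization h^3N of the phase-space volume, which is not written out in the example above but is needed to obtain absolute numbers; it reproduces the tabulated molar entropy of helium quoted in Exc. 2.1.3.1:

# Sackur-Tetrode entropy per particle, S/N = kB { ln[ (V/N) (4*pi*m*E/(3*N*h^2))^(3/2) ] + 5/2 },
# evaluated for helium at T = 298.15 K and P = 1 bar (E/N = 3/2 kB T, V/N = kB T / P).
import numpy as np

kB = 1.380649e-23                      # J/K
h  = 6.62607015e-34                    # J s
NA = 6.02214076e23                     # 1/mol
m  = 4.0026 * 1.66053907e-27           # mass of a He atom in kg

T, P = 298.15, 1e5
v = kB * T / P                         # volume per particle
e = 1.5 * kB * T                       # energy per particle

S_per_particle = kB * (np.log(v * (4*np.pi*m*e/(3*h**2))**1.5) + 2.5)
print(S_per_particle * NA)             # ~126 J/(mol K), the tabulated value for helium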

Example 15 (Ideal gas in a uniform gravitational field ): The microcanon-


ical phase volume can also be calculated explicitly for an ideal gas in a uniform
gravitational field.
The results are stated below for a 3-dimensional ideal gas of N particles, each
with mass m, confined in a thermally isolated container that is infinitely long in
the z-direction and has constant cross-sectional area A. The gravitational field
is assumed to act in the minus z direction with strength g. The phase volume
W (E, N ) is,
W (E, N ) = ((2π)^(3N/2) A^N m^(N/2) /(g^N Γ(5N/2))) E ^(5N/2 − 1) ,
where E is the total energy, kinetic plus gravitational.
The gas density ρ(z) as a function of height z can be obtained by integrating
over the phase volume coordinates. The result is,

ρ(z) = (5N/2 − 1) (mg/E) (1 − mgz/E)^(5N/2 − 2) .

Similarly, the distribution of the velocity magnitude |v| (averaged over all heights)
is,

f (|v|) = [Γ(5N/2)/(Γ(3/2) Γ(5N/2 − 3/2))] × (m^(3/2) |v|^2 /(2^(1/2) E ^(3/2))) × (1 − m|v|^2 /(2E))^(5(N −1)/2) .

The analogues of these equations in the canonical ensemble are the barometric
formula and the Maxwell-Boltzmann distribution, respectively. In the limit N →
∞, the microcanonical and canonical expressions coincide; however, they differ
for finite N . In particular, in the microcanonical ensemble, the positions and
velocities are not statistically independent. As a result, the kinetic temperature,
defined as the average kinetic energy in a given volume Adz, is nonuniform
throughout the container,

3E  mgz 
Tkinetic = 1− .
5N − 2 E

By contrast, in the canonical ensemble the temperature is uniform, for any N .

1.4.4 Canonical ensemble


The canonical ensemble is used to represent the possible microstates of a mechanical
system in thermal equilibrium with a heat bath at a fixed temperature. The system
can exchange energy with the heat bath, so that the states of the system will differ
in total energy. The principal thermodynamic variable of the canonical ensemble,
determining the probability distribution of states, is the absolute temperature T .
The ensemble typically also depends on mechanical variables, such as the number of
particles N in the system and the system’s volume V , each of which influence the
nature of the system’s internal states.
The canonical ensemble assigns a probability Pcn (E) to each distinct microstate
given by the following exponential,

Pcn (E) = e^(β(F −E)) = (1/Ξcn ) e^(−E/(kB T ))    with    Ξcn = e^(−F/(kB T )) ,                    (1.163)

where E is the total energy of the microstate and Ξcn the canonical partition function.
In quantum mechanics the density operator and partition function are,

ρ̂cn = e^(β(F −Ĥ)) = e^(−β Ĥ) /Ξcn = Σk |ψk ⟩ e^(β(F −Ek )) ⟨ψk |
                                                                    (1.164)
Ξcn = Tr e^(−β Ĥ) = e^(−βF ) = Σk e^(−βEk )

The Helmholtz free energy F is constant for the ensemble. However, the prob-
abilities and F will vary if different N, V, T are selected. The free energy F serves
two roles: first, it provides a normalization factor for the probability distribution (the
probabilities, over the complete set of microstates, must add up to one); second, many
important ensemble averages can be directly calculated from the function F (N, V, T ).

1.4.4.1 Applicability
The canonical ensemble is the ensemble that describes the possible states of a system
that is in thermal equilibrium with a heat bath. It applies to systems of any size;
while it is necessary to assume that the heat bath is very large (i.e. take a macroscopic
limit), the system itself may be small or large.
The condition that the system is mechanically isolated is necessary in order to
ensure it does not exchange energy with any external object besides the heat bath. In
general, it is desirable to apply the canonical ensemble to systems that are in direct
contact with the heat bath, since it is that contact that ensures the equilibrium.
In practical situations, the use of the canonical ensemble is usually justified either
(1) by assuming that the contact is mechanically weak, or (2) by incorporating a
suitable part of the heat bath connection into the system under analysis, so that the
connection’s mechanical influence on the system is modeled within the system.
When the total energy is fixed but the internal state of the system is otherwise
unknown, the appropriate description is not the canonical ensemble but the micro-
canonical ensemble. For systems where the particle number is variable (due to contact
with a particle reservoir), the correct description is the grand canonical ensemble.

Example 16 (Boltzmann distribution for a separable system): A system


of non-interacting particles can be separated into independent parts. If such a
system is described by a canonical ensemble, then each part can be seen as a
system unto itself and described by a canonical ensemble having the same tem-
perature as the whole. In this way, the canonical ensemble provides exactly the
Maxwell-Boltzmann statistics for systems of any number of particles. In com-
parison, the justification of the Boltzmann distribution from the microcanonical
ensemble only applies for systems with a large number of particles, that is, in
the thermodynamic limit. The Boltzmann distribution itself is one of the most
important tools in applying statistical mechanics to real systems, as it dramat-
ically simplifies the study of systems that can be separated into independent
parts (e.g. particles in a gas, electromagnetic modes in a cavity, etc.).

1.4.4.2 Thermodynamic quantities


The partial derivatives of the function F (N, V, T ) yield important canonical ensemble
average quantities: the average pressure is,

P = −∂F/∂V ,                    (1.165)
the entropy is,
S = −kB ⟨ln Pcn ⟩ = −∂F/∂T ,                    (1.166)
the partial derivative ∂F/∂N is approximately related to chemical potential, although
the concept of chemical equilibrium does not exactly apply to canonical ensembles of
small systems, and the average energy is,

E = ⟨Ĥ⟩ = F + T ⟨S⟩ . (1.167)



From the above expressions, it can be seen that, for a given N , the function F (V, T )
has the exact differential,
dF = −S dT − P dV . (1.168)

Substituting the above relationship for ⟨E⟩ into the exact differential of F , an equation
similar to the first law of thermodynamics is found,

dE = T dS − P dV . (1.169)

The energy in the system is uncertain and fluctuates in the canonical ensemble. The
variance of the energy is,

⟨Ĥ ^2 ⟩ − ⟨Ĥ⟩^2 = kB T ^2 ∂E/∂T .                    (1.170)
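A minimal numerical check of these canonical relations (an added sketch; the two-level system with energies 0 and ε is chosen only for simplicity):

# Canonical ensemble of a two-level system (energies 0 and eps): compute Xi_cn and F,
# then verify the fluctuation relation <H^2> - <H>^2 = kB T^2 dE/dT numerically.
import numpy as np

kB, eps = 1.0, 1.0                          # units with kB = eps = 1
T = np.linspace(0.1, 5.0, 2000)
beta = 1.0 / (kB * T)

Z = 1.0 + np.exp(-beta * eps)               # Xi_cn = sum_k exp(-beta E_k)
F = -kB * T * np.log(Z)                     # Helmholtz free energy, Xi_cn = exp(-beta F)
E = eps * np.exp(-beta * eps) / Z           # <H>
var = eps**2 * np.exp(-beta * eps) / Z - E**2

print(np.max(np.abs(var - kB * T**2 * np.gradient(E, T))))   # ~0 up to grid error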
Example 17 (Ising model of strongly interacting systems): In a system of
strongly interacting particles, it is usually not possible to find a way to separate
the system into independent subsystems as done in the Boltzmann distribution.
In these systems it is necessary to resort to using the full expression of the
canonical ensemble in order to describe the thermodynamics of the system when
it is thermostatted to a heat bath. The Ising model, which is a widely discussed
toy model for the phenomena of ferromagnetism, is one of the simplest models
showing a phase transition.

1.4.5 Grand canonical ensemble


The grand canonical ensemble is used to represent the possible microstates of a system
of particles that are in thermal and chemical equilibrium with a reservoir. The system
is said to be open in the sense that the system can exchange energy and particles with
a reservoir, so that various possible states of the system can differ in both their total
energy and total number of particles. The system’s volume, shape, and other external
coordinates are kept the same in all possible states of the system.
The thermodynamic variables of the grand canonical ensemble are chemical po-
tential µ and absolute temperature T . The ensemble is also dependent on mechanical
variables such as volume V which influence the nature of the system’s internal states.
As each of these is assumed to be constant in the grand canonical ensemble, it is
sometimes called the µV T ensemble.
The grand canonical ensemble assigns a probability Pgc (E) to each distinct mi-
crostate given by the following exponential 6 ,

Pgc = e^(β(Ω+µN −E)) = (1/Ξgc ) e^(β(µN −E))    with    Ξgc = e^(−βΩ) ,                    (1.171)

where N is the number of particles in the microstate and E is the total energy of the
microstate.
6 In the case where more than one kind of particle is allowed to vary in number, the probability

expression generalizes to Pgc = eβ(Ω+µ1 N1 +µ2 N2 +...−E) , where µj is the chemical potential for the
j-th kind of particles, Nj the number of that kind of particle in the microstate.

In quantum mechanics the density operator and partition function are,

ρ̂gc = e^(β(Ω+µN̂ −Ĥ)) = e^(−β(Ĥ−µN̂ )) /Ξgc = Σk |ψk ⟩ e^(β(Ω+µnk −Ek )) ⟨ψk |
                                                                    (1.172)
Ξgc = Tr e^(−β(Ĥ−µN̂ )) = e^(−βΩ) = Σk e^(β(µnk −Ek ))

The grand potential Ω is constant for the ensemble. However, the probabilities
and Ω will vary if different µ, V, T are selected. The grand potential Ω serves two roles:
to provide a normalization factor for the probability distribution (the probabilities,
over the complete set of microstates, must add up to one); second, many important
ensemble averages can be directly calculated from the function Ω(µ, V, T ).
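As a small illustration of how averages follow from Ξgc (an added sketch; a single fermionic orbital of energy ε is assumed, anticipating the Fermi-Dirac statistics mentioned at the end of this section):

# Grand canonical ensemble of a single fermionic orbital with energy eps:
# summing over the occupations n = 0, 1 gives Xi_gc = 1 + exp(beta*(mu - eps)).
import numpy as np

kB = 1.0                                    # units with kB = 1

def mean_occupation(eps, mu, T):
    beta = 1.0 / (kB * T)
    Xi = 1.0 + np.exp(beta * (mu - eps))    # grand partition function
    # <n> = sum_n n P_gc(n) with P_gc(n) = exp(beta*(mu - eps)*n) / Xi
    return np.exp(beta * (mu - eps)) / Xi

eps, mu, T = 1.0, 0.6, 0.1
print(mean_occupation(eps, mu, T))                  # mean occupation of the orbital
print(1.0 / (np.exp((eps - mu)/(kB*T)) + 1.0))      # identical Fermi-Dirac expression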

1.4.5.1 Applicability
The grand canonical ensemble is the ensemble that describes the possible states of
an isolated system that is in thermal and chemical equilibrium with a reservoir. The
grand canonical ensemble applies to systems of any size, small or large; it is only
necessary to assume that the reservoir with which it is in contact is much larger
(i.e. to take the macroscopic limit).
The condition that the system is isolated is necessary in order to ensure it has well-
defined thermodynamic quantities and evolution. In practice, however, it is desirable
to apply the grand canonical ensemble to describe systems that are in direct contact
with the reservoir, since it is that contact that ensures the equilibrium. The use of
the grand canonical ensemble in these cases is usually justified either (1) by assuming
that the contact is weak, or (2) by incorporating a part of the reservoir connection
into the system under analysis, so that the connection’s influence on the region of
interest is correctly modeled. Alternatively, theoretical approaches can be used to
model the influence of the connection, yielding an open statistical ensemble.
Another case in which the grand canonical ensemble appears is when considering
a system that is large and thermodynamic (a system that is ’in equilibrium with
itself’). Even if the exact conditions of the system do not actually allow for variations
in energy or particle number, the grand canonical ensemble can be used to simplify
calculations of some thermodynamic properties. The reason for this is that various
thermodynamic ensembles (microcanonical, canonical) become equivalent in some
aspects to the grand canonical ensemble, once the system is very large. Of course,
for small systems, the different ensembles are no longer equivalent even in the mean.
As a result, the grand canonical ensemble can be highly inaccurate when applied to
small systems of fixed particle number, such as atomic nuclei 7 .
Grand ensembles are apt for use when describing systems such as electrons in a
conductor or photons in a cavity, where the shape is fixed but the energy and number
of particles can easily fluctuate due to contact with a reservoir (e.g. an electrical
ground or a dark surface, in these cases). The grand canonical ensemble provides a
natural setting for an exact derivation of the Fermi-Dirac statistics or Bose-Einstein
statistics for a system of non-interacting quantum particles.
7 Note that even in the thermodynamic limit, in the presence of long range interactions, the

ensembles may not be equivalent.



1.4.6 Exercises
1.4.6.1 Ex: Thermodynamic potential
What is the most suitable thermodynamic potential to describe a compressible system
(compressible in the sense that the system always adjusts its pressure to that of
a large environment) with fixed particle number held at constant temperature via
thermal contact with a large reservoir. How is the corresponding ensemble called?

1.4.6.2 Ex: Thermodynamic potential


a. Derive the differential forms for the relationships S = S(T, V, µ), P = P (T, V, µ),
and N = N (T, V, µ) from (1.154) via Legendre transform.
b. Derive the differential forms for the relationships S = S(T, P, N ), V = V (T, P, N ),
and µ = µ(T, P, N ).
c. For V = V (T, P, N ) calculate dV for the case of an ideal gas. Determine the
material constants σ, η, and κ by comparison with (b).

1.5 Further reading


H.M. Nussenzveig, Edgar Blucher (2014), Curso de Fisica Basica: Fluidos, Vibrações
e Ondas, Calor - vol 2 [ISBN]

R. DeHoff, Thermodynamics in Material Science [ISBN]


H.B. Callen, Thermodynamics [ISBN]
C. Kittel, Introduction to Solid State Physics [ISBN]
A.R. West, Basic Solid State Chemistry [ISBN]

D. Mc Quarry, Statistical Thermodynamics [ISBN]


Chapter 2

Thermodynamics applied to fluids and solids
In the present chapter we will apply the notions of thermodynamics acquired in Chp. 1
to various physical systems including multi-component, heterogeneous, chemically
reacting gases and solids.

2.1 Unary heterogeneous systems


Let us first focus on unary systems, i.e. samples composed of a single species of
molecules, e.g. H2 O, which nevertheless may be encountered in different states of
aggregation: solid, liquid, gaseous, also called phases. We call homogeneous a system
consisting of a single such phase and heterogeneous, when two or three phases coexist.
Note that the list of phases is not exhaustive, as solids can exist in various allotropic
phases depending on how the atoms arrange into a crystalline structure. For example,
solid carbon may exist as graphite or diamond.

2.1.1 Unary phase diagrams in P T -space


Allotropic phases are typically represented in phase diagrams, such as the one shown
in Fig. 1.2. They are characterized by lines dividing P T -space into domains:
• Every area corresponds to a stability domain of each single phase.
• Every line corresponds to a stability domain for two phases coexisting in equi-
librium.
• Every point corresponds to a stability domain for three phases coexisting in
equilibrium.
• There are no regions where more than three phases coexist at equilibrium.
Heterogeneous systems may evolve between different phases when thermodynamic
variables are changed in a process called phase transition. If such a system is taken
through a reversible process represented by a path in the P T -diagram, when that path
intersects a phase boundary, the change in pressure and temperature will be arrested
while the phase transformation occurs. When the allotropic change is complete, the
path may resume.


Let us define intensive variables as extensive ones per particle (or per mole),
 
X̃ ≡ (∂X/∂N )T,P ,                    (2.1)

from now on we designate intensive properties which have been converted from ex-
tensive ones by a tilde ( ˜ ). With the Gibbs free energy G = E + P V − T S defined in
(1.66) we find the differential form (1.151),

dG = −SdT + V dP + µdN , (2.2)

from which we get the Gibbs free energy per particle,

µ = (∂G/∂N )T,P = G̃ ,                    (2.3)

with its differential form,


dG̃ = (∂ G̃/∂G)T,P dG + (∂ G̃/∂N )T,P dN = (1/N ) dG − (G/N ^2 ) dN                    (2.4)
    = −(S/N ) dT + (V /N ) dP + µ (1/N ) dN − (G̃/N ) dN = −S̃ dT + Ṽ dP ,
or
dµ = −S̃dT + Ṽ dP . (2.5)
For heterogeneous systems,

µα = µα (T α , P α ) and µβ = µβ (T β , P β ) . (2.6)

describe surfaces in T P µ-space. Where those surfaces intersect, as illustrated in


Fig. 1.38, the phases can coexist. That is, if,

Tα = Tβ ∧ Pα = Pβ =⇒ µα (T α , P α ) = µβ (T β , P β ) , (2.7)

or simply µα (T, P ) = µβ (T, P ), then the two phases α and β coexist.


Analogously, the coexistence of three phases, α, β, γ, is described by the intersec-
tion of three surfaces in P T µ-space.

2.1.1.1 Calculation of the chemical potential surface for a single phase


Because the chemical potential is a state function, if it is known at a reference point
µ(T0 , P0 ), its value at any other point µ(T, P ) is determined, and we may parametrize
an arbitrary path to go from one point to the other. For example, we may first hold
T constant and integrate dµ given by Eq. (2.5) until reaching an intermediate point
µ(T0 , P ). In a second step, we hold P constant and continue integrating dµ until
reaching the end point µ(T, P ). Repeating this procedure for all T and P , we generate
a chemical potential surface.

Example 18 (Chemical potential surface for constant heat capacity ): For


example, let us consider an ideal gas in a single phase α, whose heat capacity
does not depend on T nor P . Then, the two-step path integral can be solved
analytically. The functional dependence of the volume on (T, P ) is given by the
ideal gas equation,
Ṽ α (T, P ) = Rg T /P .                    (2.8)
The functional dependence of the entropy on (T, P ) can be integrated from
Eq. (1.73)(i),
S̃ α (T ) = S̃ α (T0 ) + ∫_{T0}^{T} [CPα (T ′ )/T ′ ] dT ′ = S̃ α (T0 ) + CPα ln(T /T0 ) .                    (2.9)

Now, we first hold the temperature fixed, dT = 0, and exploiting Eq. (2.5)
integrate the chemical potential over pressure,
µα (T0 , P ) = µα (T0 , P0 ) + ∫_{P0}^{P} Ṽ α dP ′ = µα (T0 , P0 ) + Rg T0 ln(P /P0 ) .                    (2.10)
Then we hold the pressure fixed, dP = 0, and exploiting the same Eq. (2.5)
integrate the chemical potential over temperature,
µα (T, P ) = µα (T0 , P ) − ∫_{T0}^{T} S̃ α (T ′ ) dT ′                    (2.11)
          = µα (T0 , P ) − S̃ α (T0 )(T − T0 ) − CPα [ T ln(T /T0 ) − (T − T0 ) ] .

All we need to do is choose a reference temperature T0 and pressure P0 , where we
can look up in databases the molar entropy S̃ α (T0 ) and the molar heat capacity
CP . Note that in general the heat capacity may be temperature-dependent. See
Excs. 2.1.3.1 to 2.1.3.2.
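A short numerical version of this two-step integration (added sketch; it uses the helium data quoted in Exc. 2.1.3.1, takes CP = 5Rg /2 for a monatomic ideal gas, and sets the reference chemical potential arbitrarily to zero):

# Chemical potential surface mu(T, P) of an ideal gas from the two-step path integration
# of d(mu) = -S~ dT + V~ dP, cf. Eqs. (2.5), (2.10) and (2.11).
import numpy as np

Rg = 8.314                   # J/(mol K)
CP = 2.5 * Rg                # molar heat capacity of a monatomic ideal gas
T0, P0 = 298.15, 1e5         # reference state
S0 = 126.04                  # J/(mol K), molar entropy of helium at (T0, P0)
mu0 = 0.0                    # reference chemical potential (arbitrary zero)

T = np.linspace(5.0, 1000.0, 200)[:, None]       # K
P = np.logspace(0, 6, 200)[None, :]              # Pa, i.e. 1e-5 bar ... 10 bar

mu = (mu0 + Rg*T0*np.log(P/P0)                              # isothermal leg, Eq. (2.10)
      - S0*(T - T0) - CP*(T*np.log(T/T0) - (T - T0)))       # isobaric leg, Eq. (2.11)
print(mu.shape)              # a 200 x 200 grid, ready for a surface plot as in Fig. 2.1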

2.1.1.2 Chemical potential change upon crossing a phase transition


The integration demonstrated in Example 18 may be repeated for all phases,
taking care that the reference state specified by T0 is the same for all phases. The
intersection of the curves then marks the phase transitions.

If the temperature-dependent heat capacities of both phases, CPα and CPβ , are
known, as well as the entropy change, ∆S α→β (Tm ) ≡ S β (Tm ) − S α (Tm ), upon phase
transition at a specific temperature Tm , then the entropy difference between two states
in different phases and at different temperatures is simply,
S̃ β (T ) = S̃ α (T0 ) + ∫_{T0}^{Tm} [CPα (T ′ )/T ′ ] dT ′ + ∆S̃ α→β (Tm ) + ∫_{Tm}^{T} [CPβ (T ′ )/T ′ ] dT ′ .                    (2.12)
Such a path is illustrated in Fig. 2.1.
In order to determine the change in chemical potentials between states localized
at different phases and at different temperatures T0 and T , we need to integrate
µα (T, P0 ) from T0 to the phase boundary at Tm and continue integrating µβ (T, P0 )
from Tm to the end point T , remembering that within a single phase γ = α, β,
S̃ γ (T ) = S̃ γ (T0 ) + ∫_{T0}^{T} [CPγ (T ′ )/T ′ ] dT ′ ,                    (2.13)

Figure 2.1: (code) Intersecting chemical potential surfaces. The points (Tj , P0 ) can be
related to each other (see text).

and

µγ (T ) = µγ (T0 ) + ∫_{µγ (T0 )}^{µγ (T )} dµγ = µγ (T0 ) − ∫_{T0}^{T} [ S̃ γ (T0 ) + ∫_{T0}^{T ′′} (CPγ (T ′ )/T ′ ) dT ′ ] dT ′′ .    (2.14)
We calculate,

µβ (T ) = µα (T0 ) + ∫_{µα (T0 )}^{µα (Tm )} dµα + ∫_{µβ (Tm )}^{µβ (T )} dµβ                    (2.15)
   = µα (T0 ) − ∫_{T0}^{Tm} [ S̃ α (T0 ) + ∫_{T0}^{T ′′} (CPα (T ′ )/T ′ ) dT ′ ] dT ′′ − ∫_{Tm}^{T} [ S̃ β (Tm ) + ∫_{Tm}^{T ′′} (CPβ (T ′ )/T ′ ) dT ′ ] dT ′′
   = µα (T0 ) − S̃ α (T0 )(Tm − T0 ) − S̃ α (Tm )(T − Tm ) − ∆S̃ α→β (Tm )(T − Tm )
     − ∫_{T0}^{Tm} ∫_{T0}^{T ′′} (CPα (T ′ )/T ′ ) dT ′ dT ′′ − ∫_{Tm}^{T} ∫_{Tm}^{T ′′} (CPβ (T ′ )/T ′ ) dT ′ dT ′′
   = µα (T0 ) − S̃ α (T0 )(T − T0 ) − [ ∆S̃ α→β (Tm ) + ∫_{T0}^{Tm} (CPα (T ′′ )/T ′′ ) dT ′′ ] (T − Tm )
     − ∫_{T0}^{Tm} ∫_{T0}^{T ′′} (CPα (T ′ )/T ′ ) dT ′ dT ′′ − ∫_{Tm}^{T} ∫_{Tm}^{T ′′} (CPβ (T ′ )/T ′ ) dT ′ dT ′′ .

Hence, knowing the entropy S̃ α (T0 ) at a specific temperature in one of the phases, the
entropy change between the phases at a specific temperature, ∆S̃ α→β (Tm ), and the
temperature-dependent heat capacities of both phases, CPα (T ) and CPβ (T ), we can
relate the chemical potentials of both phases at any temperature.
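A compact numerical version of Eq. (2.12) reads as follows (added sketch; the heat-capacity fits, the transition temperature, and the transition entropy are placeholder values and do not refer to a specific substance):

# Molar entropy across a phase transition, Eq. (2.12): integrate CP/T in phase alpha
# up to Tm, add the transition entropy at Tm, then continue with CP/T of phase beta.
import numpy as np
from scipy.integrate import quad

CP_alpha = lambda T: 25.0 + 0.010*T      # J/(mol K), placeholder fit for phase alpha
CP_beta  = lambda T: 30.0 + 0.005*T      # J/(mol K), placeholder fit for phase beta
T0, Tm   = 300.0, 500.0                  # K, reference and transition temperatures
S0       = 50.0                          # J/(mol K), S~alpha(T0)
dS_tr    = 10.0                          # J/(mol K), Delta S~ alpha->beta at Tm

def S_beta(T):
    s1, _ = quad(lambda t: CP_alpha(t)/t, T0, Tm)
    s2, _ = quad(lambda t: CP_beta(t)/t, Tm, T)
    return S0 + s1 + dS_tr + s2

print(S_beta(700.0))                     # J/(mol K)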

2.1.2 The Clausius-Clapeyron equation, latent heat


Let us have another look at the three equations (2.7) ruling two-phase coexistence,
T α = T β    =⇒   dT α = dT β ≡ dT
P α = P β    =⇒   dP α = dP β ≡ dP                    (2.16)
µα = µβ     =⇒   dµα = dµβ ≡ dµ .
From (2.5) we conclude,
−S̃ α dT + Ṽ α dP = dµ = −S̃ β dT + Ṽ β dP , (2.17)
This expression can be rewritten as,

dP /dT = ∆S̃ α→β /∆Ṽ α→β ,                    (2.18)
which is one form of the Clausius-Clapeyron equation. It states that state changes
along a phase coexistence curve (red line in Fig. 2.1) are ruled by the ratio between
the molar entropy change and the molar volume change. Knowing this ratio, we can
integrate the P (T ) dependence.
In practice, ∆S̃ α→β is not measured in experiments, but rather the heat produced
or absorbed during the transformation (e.g. condensation or evaporation). Since the trans-
formation occurs isobarically under reversible conditions,
δQα→β = ∆H α→β . (2.19)
Recalling that G = H − T S and Gα = Gβ , we have,
∆H̃ α→β = T ∆S̃ α→β , (2.20)
so that
dP /dT = ∆H̃ α→β /(T ∆Ṽ α→β ) ,                    (2.21)
which is the most frequently used form of the Clausius-Clapeyron equation.

2.1.2.1 Integration of the Clausius-Clapeyron equation


Integrating Eq. (2.21) requires knowledge of,
∆H α→β = ∆H α→β (T, P ) . (2.22)
From (1.80) we know,
dH̃ = CP dT + (1 − αth T )Ṽ dP . (2.23)
Hence,
d∆H̃ α→β = dH̃ β − dH̃ α                                                     (2.24)
         = CPβ dT − CPα dT + (1 − αth^β T ) Ṽ β dP − (1 − αth^α T ) Ṽ α dP
         = ∆CPα→β dT + [ (1 − αth^β T ) Ṽ β − (1 − αth^α T ) Ṽ α ] dP .

In practice, it turns out that the prefactor of the pressure differential is negligible for
pressure changes below about 10^5 bar, so that the enthalpy change is very well approximated
by,
d∆H̃ α→β (T, P ) = ∆CPα→β dT . (2.25)
Using an empirical fit equation to describe the heat capacities in both phases γ = α, β,

CPγ = aγ + bγ T + cγ T −2 + dγ T 2 , (2.26)

we find,
∆H̃ α→β (T, P ) = ∫_{T0}^{T} (∆a + ∆b T ′ + ∆c T ′^−2 + ∆d T ′^2 ) dT ′                    (2.27)
               = ∆a T + (∆b/2) T ^2 − ∆c T ^−1 + (∆d/3) T ^3 + ∆H̃ (0) .

To evaluate the volume change,

∆Ṽ α→β = ∆Ṽ α→β (T, P ) , (2.28)

let us consider a fluid-gas phase transition. Then, to a good approximation, we may


neglect the molar volume of the fluid phase, Ṽ β ≫ Ṽ α ,

∆Ṽ α→β = Ṽ β = Rg T /P .                    (2.29)
In this case,

dP /P = [∆H̃ α→β /(∆Ṽ α→β P )] dT /T = (1/Rg ) [ ∆a/T + ∆b/2 − ∆c/T ^3 + (∆d/3) T + ∆H̃ (0) /T ^2 ] dT    (2.30)

with the solution,

ln(P /P0 ) = (1/Rg ) [ ∆a ln(T /T0 ) + (∆b/2)(T − T0 ) + (∆c/2)(1/T ^2 − 1/T0^2 ) + (∆d/6)(T ^2 − T0^2 ) − ∆H̃ (0) (1/T − 1/T0 ) ] .    (2.31)
As long as the empirical approximation formula (2.26) holds and the gas remains ideal,
this formula describes gas-fluid coexistence curves well.

Often the enthalpy change can even be considered as almost temperature-independent,


∆H̃ α→β = ∆H̃ (0) . Then, the Clausius-Clapeyron equation (2.21) can be rewritten
as,
dP /P = [∆H̃ α→β /(∆Ṽ α→β P )] dT /T = [∆H̃ α→β /(Rg T ^2 )] dT ,                    (2.32)
using the approximation (2.29). The solution is,

ln(P /P0 ) = −(∆H̃ α→β /Rg ) (1/T − 1/T0 ) ,                    (2.33)

and represents a good estimate for phase transitions between gases and fluids or
solids.
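For a quick feeling of the quality of Eq. (2.33), the sketch below (added illustration) inserts the approximate vaporization enthalpy of water, ∆H̃ vap ≈ 40.7 kJ/mol, taken as temperature-independent:

# Vapor pressure from the integrated Clausius-Clapeyron equation (2.33),
# assuming a temperature-independent enthalpy of vaporization.
import numpy as np

Rg = 8.314                   # J/(mol K)
dH_vap = 40.7e3              # J/mol, approximate value for water
T0, P0 = 373.15, 1.013e5     # normal boiling point of water as reference

def P_vap(T):
    return P0 * np.exp(-(dH_vap/Rg) * (1.0/T - 1.0/T0))

# At 20 C this gives ~2.8 kPa, while the tabulated value is 2.34 kPa; the deviation
# reflects the neglected temperature dependence of dH_vap.
print(P_vap(293.15))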

Figure 2.2: (a) P T -phase diagram. (b) Stable and metastable equilibrium lines near a triple
point.

2.1.2.2 Triple points


Three chemical potential surfaces intersect in a point called triple point. The point
(Pt , Tt ) is also the intersection of three equilibrium lines, as illustrated in Fig. 2.2(b),
and thus simultaneously satisfies three Clausius-Clapeyron equations.
It is an important characteristic of a triple point, that property changes across
phase boundaries sum up, when several boundaries are crossed. It is ultimately a
corollary of the fact that properties are state functions. For example,

∆Ṽ α→γ = Ṽ γ − Ṽ α = Ṽ β − Ṽ α + Ṽ γ − Ṽ β = ∆Ṽ α→β + ∆Ṽ β→γ , (2.34)

and the same holds for S̃ and H̃, e.g. the enthalpy change upon fusion and vaporization
sum up to the enthalpy change upon sublimation,

∆H̃ sub = ∆H̃ f us + ∆H̃ vap . (2.35)

The sublimation curve [red line in Fig. 2.2(a)] and the vaporization curve [blue line in
Fig. 2.2(a)] both share a boundary with the gaseous phase, so that we may use the
solution (2.33) of the Clausius-Clapeyron relation for both,
P_vap(T) = P_0 e^{(∆H̃^vap/R_g)(1/T_vap − 1/T)}        (2.36)
P_sub(T) = P_t e^{(∆H̃^sub/R_g)(1/T_t − 1/T)} .

If the boiling temperature Tvap is known, setting P0 = 1 bar obviously fully determines
the vapor curve Pvap = Pvap (T ). In contrast, the sublimation curve Psub = Psub (T ) is
not fully determined as long as the triple point is unknown. However, approximating,

Tt = Tf us (2.37)

and relating the enthalpy change via (2.35), we are a good step further. We just need
to find the triple pressure Pt using the information, that both curves share the triple
point,
Pvap (Tt ) = Pt = Psub (Tt ) = Psub (Tf us ) . (2.38)

With this we find,
P_t = P_0 e^{(∆H̃^vap/R_g)(1/T_vap − 1/T_t)} ,        (2.39)
and finally,
P_sub(T) = P_t e^{(∆H̃^sub/R_g)(1/T_t − 1/T)}        (2.40)
         = P_0 e^{(∆H̃^vap/R_g)(1/T_vap − 1/T) + (∆H̃^fus/R_g)(1/T_fus − 1/T)} .

Example 19 (Vapor pressure of strontium): Eq. (2.40) can be used to


obtain the vapor pressure of substances, for example, the partial pressure of
metals contained in a cell under vacuum at a fixed volume. All one needs to
know is the set of data ∆H̃^fus, ∆H̃^vap, T_fus, and T_vap, which is specific for
the metal. As an example, the curve in Fig. 2.3(a) shows the vapor pressure of
strontium as a function of temperature. The triple point and the vaporization
point are marked with green circles in Fig. 2.3.


Figure 2.3: (code) (a) Vapor pressure of strontium obtained with ∆H̃^vap = 144 kJ/mol,
∆H̃^fus = 8.3 kJ/mol, T_fus = 1050 K, and T_vap = 1650 K. The solid lines correspond to a
path along the phase transition [see arrow in Fig. 2.2(a)]. The sublimation curve (marked
in red) and the vaporization curve (blue) correspond to those emphasized in Fig. 2.2(a) by
the same colors. The dash-dotted lines show extensions of the phase boundaries beyond the
sublimation curve, respectively the vaporization curve, helping to emphasize the discontinuity
at the triple point. (b) Density of the strontium vapor corresponding to the partial pressure
according to the ideal gas law: n = P/k_B T.
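A minimal sketch (not part of the original text) of how the sublimation branch in Fig. 2.3(a) can be generated from Eq. (2.40); the parameters are those quoted in the caption, with P_0 = 1 bar at the boiling point and T_t ≈ T_fus as assumed in the text.

import numpy as np

Rg = 8.314                       # J/(mol K)
kB = 1.381e-23                   # J/K
dH_vap, dH_fus = 144e3, 8.3e3    # J/mol, parameters quoted in the caption of Fig. 2.3
T_vap, T_fus = 1650.0, 1050.0    # K; the triple temperature is approximated by T_fus
P0 = 1.0                         # bar, vapor pressure at the boiling point

def P_sub(T):
    """Sublimation branch, Eq. (2.40), with dH_sub = dH_fus + dH_vap."""
    return P0 * np.exp(dH_vap / Rg * (1.0 / T_vap - 1.0 / T)
                       + dH_fus / Rg * (1.0 / T_fus - 1.0 / T))

T = 900.0                        # K, a temperature below the melting point
P_pa = P_sub(T) * 1e5            # bar -> Pa
n = P_pa / (kB * T) * 1e-6       # vapor density in cm^-3, as in Fig. 2.3(b)
print(f"P_sub({T:.0f} K) = {P_sub(T):.2e} bar,  n = {n:.2e} cm^-3")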

2.1.2.3 The vacuum


Vacuum pumps and pressure measurement.

2.1.3 Exercises
2.1.3.1 Ex: Chemical potential surface
Compute and plot the chemical potential surface µ(T, P) for a monatomic ideal gas in
the range 5 K < T < 1000 K and 10−5 bar < P < 10 bar. Suppose the gas is helium
with S298 = 126.04 J/(mol K) and CP = 5193.2 J/(kg K).

2.1.3.2 Ex: Clausius-Clapeyron relationship

Show that the Clausius-Clapeyron equation,

dP mL
= ,
dT T (Vgas − Vf l )

where L is the latent heat and m the mass of the component undergoing a phase
transition from fluid to gas, can also be derived via a cycle analogous to Carnot’s
cycle. The working fluid is an evaporating liquid, and the efficiency of this fictitious
machine is dT /T , because the temperature difference between the two isotherms is
dT . From the heat Q used upon evaporation and the work done ∮ P dV , which are
related on one hand to the latent heat L and on the other to the volume difference of
Vf l and Vgas , results a rise in vapor pressure dP/dT [see Eq. (1.120)].

2.1.3.3 Ex: Measurement of latent heat upon water condensation

A calorimeter with thermal capacity C = 209 J/K initially contains m1 = 250 g of


water in thermal equilibrium at a temperature of T1 = 20 ◦ C. Now, an amount of
m2 = 40 g of water vapor is added. After reaching thermal equilibrium again, the
temperature is Tf = 92 ◦ C. Calculate the latent heat of water condensation.

2.1.3.4 Ex: Latent heat in a sauna

A Finnish sauna of 10 m3 volume is heated to 95 ◦ C. To increase the thermal con-


ductivity of the air, 100 ml of water at a temperature of 20 ◦ C is added to the oven
container, where the water is evaporated. How does the temperature of the sauna
evolve if, for simplicity, the impact of the oven on the temperature is disregarded.

2.1.3.5 Ex: Latent heat

How much heat is needed to transform 1 g of ice at −10 ◦ C (cice = 0.55 cal/g/K,
Lf us = 80 cal/g) in a vapor at 100 ◦ C (caq = 1 cal/g/K, Lvap = 540 cal/g)?

2.1.3.6 Ex: Latent heat

A metal bar with specific heat capacity cmt = 0.2 cal/g/K at 100 ◦ C is placed on a
large block of ice at 0 ◦ C. What is the mass of the bar if, when the system reaches
thermal equilibrium, maq = 500 g of ice have melted?

2.1.3.7 Ex: Latent heat

An ice block with the mass mice = 500 g and the temperature −20 ◦ C is put in an
airtight container together with mvap = 200 g of water vapor at 100 ◦ C. What will be
the final temperature of the system?

2.1.3.8 Ex: A lake in winter


How long does it take at an air temperature of −6 ◦ C to form a d = 4 cm thick layer of
ice on the surface of a lake (thermal conductivity of ice: κ = 1.7 × 10−2 J /(s cm K);
density of ice: ρ = 0.92 g/cm3 ; amount of heat that must be dissipated to form 1 g of
ice: 335 J)
Note: First consider a layer of ice of thickness z, and then think about how much
heat has to be dissipated from the lake in order to form an additional layer of thickness
dz.

2.1.3.9 Ex: Equations of state for fluids


a. Find in literature a representation of the P V T -diagram for water or carbon dioxide
and discuss it.
b. Find in literature a parametrization of the equation of state for fluid water or
carbon dioxide.

2.2 Multi-component, homogeneous, non-reacting


systems
In this section we will analyze systems made of more than one independent chemical
component, in particular, mixtures or solutions. The chemical content of such a
system is described by specifying the number of particles Nj (or moles nj ) of each
component j, which is an extensive property. In order to handle the multi-component
system, we introduce independent chemical potentials µj for every component, which
are intensive variables.
The composition of a system Nj can vary due (i) to exchange with a reservoir or
(ii) to conversion via chemical reactions. In this section we concentrate on case (i).

2.2.1 The Gibbs-Duhem equation


In Sec. 1.4.5 below Eq. (1.152) we already argued that the energy defined as,
O = G − Σ_j µ_j N_j   ⟹   dO = −S dT + V dP − Σ_j N_j dµ_j        (2.41)

being by definition an extensive variable, cannot be a state potential because it would


only depend on intensive variables. Hence, O = 0 = dO, or,
G = Σ_j µ_j N_j   ⟹   −S dT + V dP = Σ_j N_j dµ_j .        (2.42)

This important result is termed the Gibbs-Duhem equation. This equation shows that
intensive properties are not independent but related. When pressure and temperature
are variable, for a system with J components only J −1 components have independent
values of chemical potential 1 .
1 The Gibbs-Duhem equation cannot be used for small thermodynamic systems due to the influence

of surface effects and other microscopic phenomena.



Example 20 (Derivation of the Gibbs-Duhem equation): Another way


to derive the Gibbs-Duhem equation is by noticing that, as an extensive state
function, the Gibbs potential should satisfy λG(X) = G(λX) when all extensive
variables X are scaled by a factor λ. In particular, it should be linear in the particle numbers, so that,
G = Σ_j (∂G/∂N_j) N_j = Σ_j µ_j N_j     or     dG = Σ_j µ_j dN_j + Σ_j N_j dµ_j .

Comparing this to the definition of the Gibbs potential,


dG = −S dT + V dP + Σ_j µ_j dN_j ,
we must conclude,
−S dT + V dP = Σ_j N_j dµ_j .

2.2.2 Partial molal properties


We already mentioned that the number of particles N_j (or moles n_j) of each component j
are extensive properties. Corresponding intensive properties that can be defined are the
fractions of particles N_j/N_tot with N_tot = Σ_j N_j (or the molar fractions n_j/n_tot).
Now, for any extensive property X = X(T, P, N_1, N_2, ...) of the system, which can be
any of the state functions X = E, S, V, H, F, G, we may define a corresponding partial
molal property of only the component j,
X̄_j ≡ (∂X/∂N_j)_{P,T,N_{k≠j}} ,        (2.43)

by holding pressure, temperature, and the number of moles of all other components
fixed. Note that, in contrast to the previous decoration (.̃), which referred to quantities
per total number of particles, the new decoration (.̄) refers to quantities per number
of particles of that species. Then, the total differential form is,
   
dX = (∂X/∂T)_{P,{N_j}} dT + (∂X/∂P)_{T,{N_j}} dP + Σ_j X̄_j dN_j .        (2.44)

Example 21 (Partial molal volume): Consider, for example, the volume V = V(T, P, N_1, N_2, ...). Then,
dV = (∂V/∂T)_{P,{N_j}} dT + (∂V/∂P)_{T,{N_j}} dP + Σ_j (∂V/∂N_j)_{P,T,N_{k≠j}} dN_j        (2.45)
   = αV dT − κV dP + Σ_j V̄_j dN_j ,

with the definitions (1.69) of the thermal expansion coefficient and the com-
pressibility,
   
α ≡ (1/V)(∂V/∂T)_{P,{N_j}} ,     κ ≡ −(1/V)(∂V/∂P)_{T,{N_j}} .        (2.46)

The quantity V̄j is called partial molal volume, and analogous procedures can
be followed for Ēj , S̄j , V̄j , H̄j , F̄j , Ḡj .

For the particular extensive property X → G and X̄j → µj we derived the Gibbs-
Duhem equation (2.42), but the result holds for any extensive property satisfying
(2.43). E.g. holding temperature and pressure constant, (2.44) becomes,
dX_{T,P} = Σ_j X̄_j dN_j .        (2.47)

That is, changes of the partial molal properties of the components add up to a total
change of the system. The value of the extensive state function XT,P is obtained
by integrating (2.47). Fortunately, as an intensive property, X̄j can only depend on
other intensive properties, that is, it cannot depend on Nj . Furthermore, changes of
state functions are path-independent. Hence, the total state XT,P reached by adding
all the components is,
X_{T,P} = Σ_j ∫_0^{N_j} X̄_j dN_j = Σ_j X̄_j ∫_0^{N_j} dN_j = Σ_j X̄_j N_j .        (2.48)

When we differentiate (2.48),
dX_{T,P} = Σ_j X̄_j dN_j + Σ_j N_j dX̄_j ,        (2.49)

which only coincides with Eq. (2.47) if the second term vanishes. Hence,
Σ_j N_j dX̄_j = 0 .        (2.50)
Substituting X̄_j → µ_j, we recover the Gibbs-Duhem equation (2.42) for the case
T, P = const.
The important message of the Gibbs-Duhem equation is that the partial molal
properties are not all independent. Its integration provides a recipe for calculating
values of partial molal properties of one component from those of the other
components, as we will see later.

2.2.2.1 The mixing process


Temperature, pressure, volume and, according to the third law of thermodynamics,
entropy all have well-defined absolute values. In contrast, the energy functions E, H, F, G
are only defined with respect to some reference state, i.e. only their changes are really
of interest.
We will now study the mixing process resulting from putting together several
components at constant pressure and temperature and waiting for them to form a
homogeneous solution. In order to guarantee that only the mixing process by itself
is studied, we start from a ’reference state’ in which all components are spatially
separated and held at the same temperature and pressure, as illustrated in Fig. 2.4,
X
X0 = X̄j0 Nj . (2.51)
j

Figure 2.4: Initial ’reference state’ and final mixture.

Mixing changes the value of state functions,

X 0 ↷ X sol = X 0 + ∆X mix . (2.52)

Now,
∆X^mix = Σ_j (X̄_j^sol − X̄_j^0) N_j = Σ_j ∆X̄_j^mix N_j .        (2.53)

∆X̄jmix measures the change per particle of type j that the state function suffers from
being put into the surrounding composed by all other particle types, and ∆X mix is
the weighted sum of all these changes. Differentiating Eq. (2.53),
d∆X^mix = Σ_j ∆X̄_j^mix dN_j + Σ_j N_j d∆X̄_j^mix .        (2.54)

For the second term we find,


Σ_j N_j d∆X̄_j^mix = Σ_j N_j dX̄_j^sol − Σ_j N_j dX̄_j^0 ,        (2.55)

Here, the second term is zero because X̄j0 are properties of the reference state, which
is fixed by definition. The summation over the first term is zero by the Gibbs-Duhem
equation (2.50). Hence,
Σ_j N_j d∆X̄_j^mix = 0 ,        (2.56)

which is the Gibbs-Duhem equation applied to the mixing process. With this we
deduce from (2.54),
d∆X^mix = Σ_j ∆X̄_j^mix dN_j .        (2.57)

2.2.2.2 Partial molal properties from total properties


Obviously, all relationships derived in this section can be normalized to the total
number of particles (or moles) Ntot . We just need to replace all extensive state
functions E, S, V, H, F, G by molar quantities, X → X̃, and particle numbers (or
partial moles) by fractions N_j → η_j ≡ N_j/N_tot, with Σ_j η_j = 1.

Let us consider a binary system and rewrite Eq. (2.53) for normalized quantities,

∆X̃ mix = η1 ∆X̄1mix + η2 ∆X̄2mix . (2.58)

Also from (2.57),

d∆X̃ mix = ∆X̄1mix dη1 + ∆X̄2mix dη2 with dη1 = −dη2 , (2.59)

which gives,
d∆X̃ mix
= ∆X̄1mix − ∆X̄2mix . (2.60)
dη2
Isolating the terms X̄1mix and X̄2mix from the system of equations (2.58) and (2.60),

∆X̄_j^mix = ∆X̃^mix + (1 − η_j) d∆X̃^mix/dη_j     with j = 1, 2 .        (2.61)

Example 22 (Partial molal enthalpies upon mixing a binary solution): In


this example we calculate the partial molal enthalpies upon mixing a binary solu-
tion for a model enthalpy given by ∆H mix = aη1 η2 and satisfying limη1 →0 ∆H mix =
0. Using η1 + η2 = 1 the enthalpy can be rewritten,

∆H^mix = a(η_1 − η_1^2) = a(η_2 − η_2^2) .
Evaluating,
d∆H^mix/dη_j = a(1 − 2η_j) ,
we get,
∆H̄_1^mix = ∆H^mix + (1 − η_1) d∆H^mix/dη_1 = a η_2^2 ,
and finally,
∆H̄_j^mix = a η_{i≠j}^2 .
A consistency check yields,

∆H mix = ∆H̄1mix η1 + ∆H̄2mix η2 .
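The algebra of Example 22 is easy to check symbolically. The short sketch below (an illustration, not from the original text) verifies that Eq. (2.61) yields ∆H̄_1^mix = aη_2^2 and that the weighted sum reproduces ∆H^mix = aη_1η_2.

import sympy as sp

a, eta2 = sp.symbols('a eta2', positive=True)
eta1 = 1 - eta2                                # binary solution: eta1 + eta2 = 1

dH_mix = a * eta1 * eta2                       # model mixing enthalpy of Example 22

# Eq. (2.61): dX_j = dX + (1 - eta_j) d(dX)/d(eta_j), using d/d(eta1) = -d/d(eta2)
dH1 = sp.expand(dH_mix + (1 - eta1) * (-sp.diff(dH_mix, eta2)))
dH2 = sp.expand(dH_mix + (1 - eta2) * sp.diff(dH_mix, eta2))

print(dH1)                                     # a*eta2**2
print(sp.factor(dH2))                          # a*(eta2 - 1)**2, i.e. a*eta1**2
print(sp.simplify(eta1 * dH1 + eta2 * dH2 - dH_mix))   # consistency check -> 0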

2.2.2.3 Partial molal properties of one component from those of the oth-
ers
The partial molal version of the Gibbs-Duhem relation for the mixing process (2.55)
is,
Σ_j η_j d∆X̄_j^mix = 0 ,        (2.62)

or restricting to two species,


d∆X̄_1^mix = −(η_2/η_1) d∆X̄_2^mix .        (2.63)

Now, limη2 →0 X̄2sol = X̄20 , since there is nothing to mix. Hence, limη2 →0 ∆X̄2mix = 0.
Integrating the last expression,
∆X̄_1^mix = −∫_0^{∆X̄_2^mix} (η_2/η_1) d∆X̄_2^mix = −∫_0^{η_2} (η_2/η_1) (d∆X̄_2^mix/dη_2) dη_2 .        (2.64)
0 η1 0 η1 dη2
Do the Exc. 2.2.5.2.

Example 23 (Partial molal enthalpies upon mixing a binary solution): As-


sume that we have a partial molal enthalpy of component 2 depending on the
square of the abundance of the other component 1,

∆H̄_2^mix = a η_1^2 .

Then, the above recipe yields,


∆H̄_1^mix = −∫_0^{η_2} (η_2/η_1) (d∆H̄_2^mix/dη_2) dη_2 = −∫_0^{η_2} (η_2/η_1) (d[a(1 − η_2)^2]/dη_2) dη_2 = a η_2^2 .

2.2.2.4 Relationships among partial molal properties


As seen in Sec. 2.2.2, application of the operator,
 

(∂/∂N_j)_{T,P,N_{k≠j}}        (2.65)

to any total property X yields the corresponding molal property. Applying this
operator to the definitions, laws, coefficient relations, and Maxwell equations, we
obtain the corresponding molal expressions. For example, the counterpart of the
relation H = E + P V is,
     
H̄_j ≡ (∂H/∂N_j)_{T,P,N_{k≠j}} = (∂E/∂N_j)_{T,P,N_{k≠j}} + P (∂V/∂N_j)_{T,P,N_{k≠j}} ≡ Ē_j + P V̄_j ,        (2.66)

and analogously for all other equations. For a solution with various components, we
can simply substitute any extensive variable X for each component j,

X −→ X̄j . (2.67)

2.2.3 Chemical potential in solutions


Initially introduced for unary systems in Sec. 1.3.2, the concept of chemical potential
can be extended to multicomponent systems. We will show below that, if the chemical
potential is known as a function of temperature, pressure and composition, then all
of the partial molal properties of the system may be computed. The thermodynamic
state of a system with J components depends on J + 2 variables,
E = E(T, P, N_1, ..., N_J)   ⟹   dE = T dS − P dV + Σ_{j=1}^J µ_j dN_j        (2.68)

with
µ_j ≡ (∂E/∂N_j)_{S,V,N_{k≠j}} .        (2.69)
Comparing this expression with the first law, we identify,
δA_rev = Σ_{j=1}^J µ_j dN_j        (2.70)

as an additional non-mechanical work. We simply need to substitute,


µ dN → Σ_{j=1}^J µ_j dN_j        (2.71)

in (1.152) to obtain the corresponding expression for multicomponent systems.


Although the chemical potential can be expressed as partial derivatives of various
state energies,
       
µ_j = (∂E/∂N_j)_{S,V,N_{k≠j}} = (∂H/∂N_j)_{S,P,N_{k≠j}} = (∂F/∂N_j)_{T,V,N_{k≠j}} = (∂G/∂N_j)_{T,P,N_{k≠j}} = Ḡ_j ,        (2.72)
only the last one is a partial molal property, distinguished by the fact that the intensive
properties temperature and pressure are held constant, e.g. µj ̸= Ēj . If energies other
than the Gibbs free energy are to be expressed as partial molal properties, they need
to be related to the Gibbs energy using the expressions (1.151),
Ḡ_j = µ_j
S̄_j = −(∂Ḡ_j/∂T)_{P,N_k} = −(∂µ_j/∂T)_{P,N_k}
V̄_j = (∂Ḡ_j/∂P)_{T,N_k} = (∂µ_j/∂P)_{T,N_k}
H̄_j = Ḡ_j + T S̄_j = µ_j − T (∂µ_j/∂T)_{P,N_k}        (2.73)
F̄_j = Ḡ_j − P V̄_j = µ_j − P (∂µ_j/∂P)_{T,N_k}
Ē_j = Ḡ_j + T S̄_j − P V̄_j = µ_j − T (∂µ_j/∂T)_{P,N_k} − P (∂µ_j/∂P)_{T,N_k}

The linearity of the expressions allows us to extend them to changes of states,


∆µj = µj − µ0j . For example, for a binary system, the Gibbs-Duhem formula (2.63)
applied to Gibbs free energy, X ≡ G, reads,
η1 d∆Ḡ1 + η2 d∆Ḡ2 = η1 d∆µ1 + η2 d∆µ2 = 0 , (2.74)
and the integrated version,
∆µ_1 = −∫_0^{η_2} (η_2/η_1) (d∆µ_2/dη_2) dη_2 .        (2.75)
Thus, if ∆µ2 is known as a function of composition at any temperature and pressure,
we can also calculate ∆µ1 and all other partial molar state functions according to
(2.73), as well as the total properties according to (2.58).

2.2.3.1 Activity in ideal gas solutions

In experiments, instead of the chemical potential, the activity of a component aj or


the activity coefficient γj are frequently measured,

aj ≡ e∆µj /kB T ≡ ηj γj . (2.76)

When a mixture is produced by combining ideal gases initially stored in different


volumes at the same temperature and pressure, as illustrated in Fig. 2.4, every species
by itself generates a smaller partial pressure in the total volume, corresponding to its
abundance. Summed up, however, they reproduce the initial pressure. This is called
Dalton’s law,

P_j ≡ η_j P     such that     P = Σ_j P_j .        (2.77)

Now, we focus on the change experienced by an individual component j during


the mixing process, which corresponds to an isothermal expansion of the component,
(T0 , P0 ) −→ (T0 , Pj = ηj P0 ). The corresponding change in chemical potential is,

dµ_j = dḠ_j = −S̄_j dT + V̄_j dP = V̄_j dP     (since dT = 0) ,        (2.78)

since temperature does not change. The partial molal volume can be evaluated from,

 
V̄_j = (∂V/∂N_j)_{T,P,N_{k≠j}}        (2.79)
    → (ideal gas)  = (∂[Σ_k N_k k_B T/P]/∂N_j)_{T,P,N_{k≠j}} = k_B T/P ,

assuming ideal gases. With this we can now proceed to integrate the chemical poten-
tial,

∆Ḡ_j = ∆µ_j = µ_j − µ_j^0 = ∫_{P_0}^{P_j} V̄_j dP_T        (2.80)
     = k_B T ln(P_j/P_0) = k_B T ln η_j = k_B T ln a_j .

We see that, for ideal gas solutions the abundance and the activity are identical, which
is to say the activity coefficient is γj = 1. Based on the expressions (2.73), we can

also calculate the dependence of other partial molal properties on the abundances,

partial molal property                                        total property
∆Ḡ_j = k_B T ln η_j                                           ∆G^mix = k_B T Σ_j η_j ln η_j
∆S̄_j = −(∂∆Ḡ_j/∂T)_{P,N_k} = −k_B ln η_j                      ∆S^mix = −k_B Σ_j η_j ln η_j
∆V̄_j = (∂∆Ḡ_j/∂P)_{T,N_k} = 0                                 ∆V^mix = 0
∆H̄_j = ∆Ḡ_j + T ∆S̄_j = 0                                      ∆H^mix = 0
∆F̄_j = ∆Ḡ_j − P ∆V̄_j = k_B T ln η_j                            ∆F^mix = k_B T Σ_j η_j ln η_j
∆Ē_j = ∆Ḡ_j + T ∆S̄_j − P ∆V̄_j = 0                              ∆E^mix = 0
                                                                              (2.81)
The dependencies are plotted in Fig. 2.5(a,b) for a two-component system. We notice
that (i) the curves are symmetric with respect to ηj ↔ 1 − ηj ; (ii) their slopes are
vertical for ηj −→ 0, 1; (iii) ∆Sjmix is temperature-independent.


Figure 2.5: (code) Dependence of state functions on composition and temperature in the
range between 300 K and 1700 K. (a) The mixing entropy, which is identical for an ideal gas
solution and a regular solution. (b) Gibbs free and Helmholtz free energy for an ideal gas
solution and (c) for a regular solution.
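The entries of table (2.81) are straightforward to evaluate numerically. The following sketch (an illustration, not from the original text) computes the molar mixing entropy and Gibbs free energy of an ideal binary solution, i.e. the curves of Fig. 2.5(a,b), in molar units (R_g instead of k_B).

import numpy as np

Rg = 8.314                        # J/(mol K)
T = 1000.0                        # K
eta2 = np.linspace(1e-6, 1 - 1e-6, 201)
eta1 = 1 - eta2

S_mix = -Rg * (eta1 * np.log(eta1) + eta2 * np.log(eta2))   # ideal mixing entropy
G_mix = -T * S_mix                                          # since dH_mix = 0 for an ideal solution

i = np.argmin(np.abs(eta2 - 0.5))
print(f"eta2 = 0.5:  dS_mix = {S_mix[i]:.3f} J/(mol K) (= Rg ln 2),  dG_mix = {G_mix[i]:.0f} J/mol")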

The ideal solution model is extremely useful as a benchmark for the deviations observed
in real solutions.

2.2.3.2 Fugacity in real solutions

For real solutions for which the ideal gas equation does not hold, the partial molal vol-
ume cannot be determined from Eq. (2.79), but must be experimentally determined.
It is often interesting to define a ’deviation volume’,

V_j ≡ V̄_j − k_B T/P .        (2.82)

The chemical potential change can then be written,


∆µ_j = ∫_{P_0}^{P_j} V̄_j dP = ∫_{P_0}^{P_j} (k_B T/P + V_j) dP        (2.83)
     = k_B T ln(P_j/P_0) + ∫_{P_0}^{P_j} V_j dP ≡ k_B T ln(f_j/P_0) .

The last equality of (2.83) defines the fugacity of a component of a mixture,


f_j = P_j exp( (1/k_B T) ∫_{P_0}^{P_j} V_j dP ) .        (2.84)

As the deviation from ideal gas behavior, measured by Vj diminishes, the fugacity
of component j approaches its partial pressure. Measurement of the fugacity of one
component over a range of temperature, pressure, and composition is sufficient to
describe the behavior of real gas mixtures completely in that range.
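As a numerical illustration of Eq. (2.84) (not part of the original text), assume the deviation volume V_j is approximately pressure-independent over the integration range; the integral then reduces to V_j (P_j − P_0). The value of V_j below is purely hypothetical.

import numpy as np

kB = 1.381e-23        # J/K
T = 300.0             # K
P0 = 1e5              # Pa, reference pressure
Pj = 5e6              # Pa, partial pressure of component j
Vj_dev = -5e-29       # m^3 per particle, assumed constant deviation volume (hypothetical)

# Eq. (2.84) with a pressure-independent deviation volume
fj = Pj * np.exp(Vj_dev * (Pj - P0) / (kB * T))
print(f"fugacity f_j = {fj/1e5:.1f} bar  (partial pressure P_j = {Pj/1e5:.1f} bar)")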

2.2.3.3 Activity in real solutions


Remembering that the chemical potential can also be expressed in terms of activity
via Eq. (2.80), we see that activity and fugacity are intrinsically related,

fj = P0 aj . (2.85)

From (2.80),

∆Ḡ_j = k_B T ln a_j = k_B T ln(η_j γ_j) = ∆Ḡ_j^id + ∆Ḡ_j^xs        (2.86)
with ∆Ḡ_j^id ≡ k_B T ln η_j and ∆Ḡ_j^xs ≡ k_B T ln γ_j .

The activity coefficient γj = γj (T, P, ηj ) quantifies the deviation from ideal solution
behavior; if γj > 1 then the component j ’acts’ as if its abundance were more than
expected from a supposed ideal gas behavior.
Using the expressions for the thermodynamic potentials of the first column of
Tab. (2.81), we can also express the other partial molal properties (PMP) in terms of
activity or activity coefficient,

PMP          as a function of activity                                   as a function of the activity coefficient
∆Ḡ_j/k_B T  = ln a_j                                                     = ln(γ_j η_j)
∆S̄_j/k_B   = −ln a_j − T (∂ln a_j/∂T)_{P,N_k}                            = −ln(γ_j η_j) − T (∂ln γ_j/∂T)_{P,N_k}
∆V̄_j/k_B T = (∂ln a_j/∂P)_{T,N_k}                                        = (∂ln γ_j/∂P)_{T,N_k}
∆H̄_j/k_B T = −T (∂ln a_j/∂T)_{P,N_k}                                     = −T (∂ln γ_j/∂T)_{P,N_k}
∆F̄_j/k_B T = ln a_j − P (∂ln a_j/∂P)_{T,N_k}                              = ln(γ_j η_j) − P (∂ln γ_j/∂P)_{T,N_k}
∆Ē_j/k_B T = −T (∂ln a_j/∂T)_{P,N_k} − P (∂ln a_j/∂P)_{T,N_k}             = −T (∂ln γ_j/∂T)_{P,N_k} − P (∂ln γ_j/∂P)_{T,N_k}
                                                                                         (2.87)

2.2.4 Models of real solutions


The ideal solution assumption disregards any features emanating from properties of
the microscopic constituents. Hence, at same temperature and composition all ideal
solutions are equal. More sophisticated models are required to remove this degeneracy.

2.2.4.1 Regular and non-regular solutions


One real solution model is the regular solutions model. It is based on two assumptions:
1. The mixing entropy is that of an ideal solution:
∆S̃^{mix,rl} = ∆S̃^{mix,id} = −R_g Σ_j η_j ln η_j .        (2.88)

2. The solution enthalpy is a function of composition, and not zero as in ideal


solutions,
∆H̃ mix,rl = ∆H̃ mix,xs (ηj ) . (2.89)

Because the excess mixing entropy vanishes, ∆S̃^{mix,xs} ≡ ∆S̃^{mix,rl} − ∆S̃^{mix,id} = 0,
the excess Gibbs free energy G̃ = H̃ − T S̃ becomes equal to the excess mixing enthalpy,
∆G̃^{mix,xs} = (∆H̃^{mix,rl} − T ∆S̃^{mix,rl}) − (∆H̃^{mix,id} − T ∆S̃^{mix,id})        (2.90)
            = ∆H̃^{mix,xs} .
Since S̃ = −(∂G̃/∂T)_{P,N_j}, we conclude that ∆G̃^{mix,xs} = ∆H̃^{mix,xs} are temperature-
independent. For the components, we have from Eq. (2.80),
∆Ḡ_j^{mix,xs} = k_B T ln γ_j   ⟹   γ_j = e^{∆H̄_j^{mix,xs}/k_B T} .        (2.91)

Since all properties of a solution can be calculated from a known activity coefficient,
the regular solution model focuses on the heat of mixing as a function of composition.

Example 24 (Real solution model with a single adjustable parameter ): Let


us suppose the heat of mixing for a binary solution can be described by the sim-
ple formula,
∆H̄jmix = ∆Ḡmix,xs
j = a0 η1 η2 .
Then,
X
∆Ḡmix = ηj (∆Ḡmix,id + ∆Ḡmix,xs )
j

= kB T (η1 ln η1 + η2 ln η2 ) + a0 η1 η2 .

The behavior is shown in Fig. 2.5(c).
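A short numerical sketch of Example 24 (illustrative only; the value of a_0 is an arbitrary assumption): the excess term a_0 η_1 η_2 simply adds to the ideal mixing free energy, producing curves like those of Fig. 2.5(c).

import numpy as np

kB = 1.381e-23
T = 1000.0                      # K
a0 = 2.0 * kB * T               # J, assumed (hypothetical) interaction parameter
eta2 = np.linspace(1e-6, 1 - 1e-6, 201)
eta1 = 1 - eta2

# Example 24: ideal mixing term plus the excess (heat of mixing) term a0*eta1*eta2
G_mix = kB * T * (eta1 * np.log(eta1) + eta2 * np.log(eta2)) + a0 * eta1 * eta2

i = np.argmin(np.abs(eta2 - 0.5))
print(f"eta2 = 0.5:  dG_mix = {G_mix[i]/(kB*T):+.3f} kB*T per particle")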

A solution is called ’non-regular’ when the coefficients of the solution model ad-
ditionally depend on temperature, for example,

∆H̄jmix = [a1 (T )η1 + a2 (T )η2 ]2 . (2.92)



2.2.4.2 Atomistic model for real solutions


In Sec. 1.2.7 we have seen, that the behavior of real gases can be somewhat different
from that of an ideal gas and how to incorporate this in a heuristic model. The
justification of the van der Waals equation (1.124) was the existence of interparticle
forces capable of storing energy depending on the mean interparticle distance [see
Fig. 1.6(a)].
The interparticle forces obviously depend on the colliding partners, the interaction
energy V11 ruling a collision between two particles of type j = 1 will be different from
V22 and V12 . Hence, in multi-component systems the corrections to the real solution
model must depend on the fractions η1 and η2 .
In a random mixture, the probability for collisions between identical and between
different particles scales as,

pjj = ηj2 and p12 = 2η1 η2 . (2.93)

The presence of collisions between different particles motivates the ansatz for the mixing
enthalpy made in Example 22,
∆H^mix = a η_1 η_2 .        (2.94)

2.2.5 Exercises
2.2.5.1 Ex: Partial pressures
A closed cylindrical reservoir with the base area S = 10 cm2 is kept at a constant
temperature T = 27 ◦ C. It is divided in two volumes by an airtight mobile disk with
the mass m = 10 kg. The upper volume VO2 contains ηO2 = 1 mol of oxygen, the
lower volume VN2 contains the same amount of nitrogen. Due to its weight the disc
finds an equilibrium position when the lower volume is VN2 = 10 l.
a. What are the masses mO2 and mN2 of the gases?
b. What are the pressures PO2 and PN2 ?
c. What is the upper volume VO2 ?
d. What are the densities nO2 and nN2 ?
e. Now the disc has a hole, so that the gases can mix and the disc falls to the bottom
of the reservoir. What is the final pressure of the mixture?

2.2.5.2 Ex: Gibbs-Duhem integration


For an ideal solution, it is known that for component 2,

∆Ḡ2 = Rg T ln η2 .

Use the Gibbs-Duhem integration (2.64) to derive the corresponding relation for com-
ponent 1.

2.2.5.3 Ex: Oxygen concentration in a metal


Titanium metal is capable of dissolving up to 30 atomic percent oxygen, a feature that
is interesting for the realization of ultra high vacuum sublimation pumps. Consider a
solid solution in the system Ti – O containing an atom fraction, ηO = 0.12. The molar

volume of this alloy is ṼTi+ O = 10.68 cm3 /mol. Calculate:


a. The weight percent of O in the solution.
b. The molar concentration (mol/cm3 ) of O in the solution.
c. The mass concentration (g/cm3 ) of O in the solution.
d. Use these calculations to deduce general expressions for weight percent, molar and
mass concentrations of a component in a binary solution in terms of the atom fraction,
η2 , the molar volume Ṽ1+2 , and the molecular weights of the elements involved.

2.2.5.4 Ex: Gibbs-Duhem rule


Given that the volume change upon mixing of a solution obeys the relation Ṽ mix =
aη1 η22 ,
a. Derive expressions for the partial molal volumes of each of the components as
functions of composition.
b. Demonstrate that your result is correct by using it to compute Ṽ mix , demonstrating
that the equation above is recovered.

2.3 Multi-component, heterogeneous, non-reacting


systems
2.3.1 Equilibrium conditions
Expressions for multi-component systems require the addition of terms describing particle
exchange, Σ_j µ_j dN_j. For example, the combined first and second law for a homogeneous
multi-component system reads,
dE = T dS − P dV + Σ_{j=1}^J µ_j dN_j .        (2.95)

Now, we assume that each component exists in Γ different phases. Each phase viewed
as a system exchanges heat, work, and matter with the other phases and with the
reservoir,
dE^γ = T^γ dS^γ − P^γ dV^γ + Σ_{j=1}^J µ_j^γ dN_j^γ ,        (2.96)

for all phases γ = α, β, .... For the extensive properties V, S, E, H, F, G the value for
the property of the system is the sum of the parts,
X^syst = Σ_{γ=1}^Γ X^γ .        (2.97)

When the system is taken through an arbitrary change of state, then the change of
X syst is simply the sum of the changes that each phase experiences,
dX^syst = d( Σ_{γ=1}^Γ X^γ ) = Σ_{γ=1}^Γ dX^γ .        (2.98)

Figure 2.6: Heat, work, and particle exchange in a microstructure with three components and
two phases.

As an example, let us regard a change in internal energy,


 
dE^syst = Σ_{γ=1}^Γ [ T^γ dS^γ − P^γ dV^γ + Σ_{j=1}^J µ_j^γ dN_j^γ ] .        (2.99)

To find the equilibrium conditions, we need to express the entropy. We solve the
Eq. (2.96) for dS γ and sum up the entropies for all phases,
 
dS^syst = Σ_{γ=1}^Γ dS^γ = Σ_{γ=1}^Γ [ (1/T^γ) dE^γ + (P^γ/T^γ) dV^γ − (1/T^γ) Σ_{j=1}^J µ_j^γ dN_j^γ ] .        (2.100)

The condition for equilibrium is S syst = maximal. Eq. (2.100) contains Γ×(2+J)
variables, but their number is reduced if the system is isolated from the reservoir,
dE syst = dV syst = dN syst = 0. Let us suppose there are only two phases γ = α, β.
Then the isolation assumption reads,

dE α = −dE β , dV α = −dV β , dN α = −dN β , (2.101)

and the expression (2.100) becomes,

dS^{syst,iso} = (1/T^α − 1/T^β) dE^α + (P^α/T^α − P^β/T^β) dV^α − Σ_{j=1}^J (µ_j^α/T^α − µ_j^β/T^β) dN_j^α = 0 .        (2.102)
From this follows,

T^α = T^β                    thermal equilibrium
P^α = P^β                    mechanical equilibrium        (2.103)
µ_j^α = µ_j^β , ∀ j ≤ J      chemical equilibrium

This rule can be easily generalized to more than two phases by pairwise comparison
of the phases. We immediately see that,

T^α = T^β = ... = T^Γ                    thermal equilibrium
P^α = P^β = ... = P^Γ                    mechanical equilibrium        (2.104)
µ_j^α = µ_j^β = ... = µ_j^Γ , ∀ j ≤ J    chemical equilibrium

These equations form the basis for the construction and calculation of multi-component
multi-phase phase diagrams.

2.3.1.1 The Gibbs phase rule


An interesting question is, how many independent variables are needed to completely
describe a composite system. This number f is called the number of macroscopic
thermodynamic degrees of freedom.
The system (2.104) consists of (Γ − 1) × (2 + J) equations and contains Γ × (2 +
J) variables. However, the Gibbs-Duhem equation imposes additional constraints
relating the chemical potentials between the components for all Γ allotropic phases.
Hence,
f = Γ × (2 + J) − (Γ − 1) × (2 + J) − Γ , (2.105)
that is,
f =J +2−Γ . (2.106)
The rule assumes the components do not react with each other. The number of
degrees of freedom is the number of independent intensive variables, i.e. the largest
number of thermodynamic parameters such as temperature or pressure that can be
varied simultaneously and arbitrarily without determining one another.

2.3.2 Structure of phase diagrams


Phase diagrams are graphical representations of the domains of stability of the vari-
ous classes of structures (one, two, or more phases) that a system may have mostly
in (T, P, ηj )-space. A unary system (J = 1) with Γ coexisting phases has, according
to the Gibbs phase rule, f = 3 − Γ degrees of freedom. From the fact that f de-
creases with the number of phases we deduce, that single-phase regions require the
largest amount of variables for their specification. Consequently, the graphical space
in which the phase diagram is constructed must have J +1 independent coordinates to
allow for a representation of the full range of behavior of the single-phase regions. A
unary system thus can be plotted in a (P, T )-diagram, while a binary system already
requires a three-dimensional space, e.g. (T, P, η2 ). As printed pages only have two di-
mensions, one has to resort to sections across multi-phase diagrams or to projections.
Sections provide quantitative information, while projections illustrate better general
relationships. Do the Exc. 2.3.3.1.
Example 25 (Phase diagrams): According to the Gibbs phase rule, a unary
homogeneous system has f = 2 degrees of freedom. A unary heterogeneous
system with two (or more) phases (f ≤ 2) is characterized by phase boundaries
(or triple points), which can be conveniently represented in a plane P T -diagram.

2.3.3 Exercises
2.3.3.1 Ex: Volume change in a multi-phase multi-component system
Use Eq. (2.98) to write out a general expression for the change of volume of a three-
phase two-component system including all 12 terms.

2.3.3.2 Ex: Phase diagram of water


Sketch the phase diagram for pure water in (P, V ) space. Be careful to incorporate
the observation that solid water shrinks upon conversion to the liquid state. Discuss
complications in the structure of the diagram that derive from this fact.

2.4 Continuous, non-uniform systems exposed to


external forces
When we discussed thermodynamic equilibrium in Secs. 1.3 and 2.1, we tacitly as-
sumed that the intensive properties within a phase are uniform, i.e. a single tempera-
ture, pressure, or composition can be assigned to a particular phase α, and the same
holds for the chemical potential and all partial molal properties. On the other hand,
those properties can vary from phase to phase.
In the presence of external force fields the situation is different. Equilibrated sys-
tems may then exhibit temperature, pressure, or composition profiles. One example
is the atmosphere on Earth, whose vertical temperature and pressure profile is de-
scribed by the barometric formula studied in Excs. 1.1.4.3 and 1.1.4.3; an atomic gas
confined in a harmonic trapping potential develops a radial density profile studied in
Excs. 4.1.7.14 to 4.1.7.16; mixtures submitted to centrifuges may experience a sepa-
ration of their components; and an electrostatic field will separate components with
different charges.
To handle thermodynamic properties in such locally homogeneous but globally
non-uniform systems, we need to introduce the concept of densities of extensive ther-
modynamic quantities, e.g. the energy density, which are scalar fields, i.e. local inten-
sive properties associated with each point of the system in space.

2.4.1 Thermodynamic densities


For each extensive property of a system, X ∈ {S, V, E, F, H, G, Nj }, we may define a
corresponding local density,
x(r) ≡ δX/δV ,        (2.107)
where δV is a volume element near the point located at the coordinates r. Here,
we use the symbol δ for spatial differentials to distinguish them from thermodynamic
state changes denoted by the symbol d. Then, x(r)δV is the total value of the exten-
sive property X in the volume element and,
X = ∫_V x(r) δV        (2.108)

in the whole system. Changes of this property along a thermodynamic process are
expressed as usual,
dX_V = ∫_V dx(r) δV ,        (2.109)

where we assume that these changes do not modify the volume.


Let us now calculate the differential of the energy density using the prescription
(2.107) for a thermodynamic process. Since the process does not change the volume
element, we can write,
 
de(r) = δ(dE_V)/δV = (δ/δV)[ T dS_V − P dV_V + Σ_j µ_j dN_{jV} ]        (2.110)
      = T δ(dS_V)/δV + Σ_j µ_j δ(dN_{jV})/δV = T ds(r) + Σ_j µ_j dn_j(r) ,

where s is the entropy density and nj the molar density of the component j. That is,
we can derive local versions for all thermodynamic relationships.

2.4.2 Equilibrium conditions


2.4.2.1 Without external force field
Resolving Eq. (2.110) for the entropy and integrating, we get,
dS = ∫_V [ (1/T) de − (1/T) Σ_j µ_j dn_j ] δV .        (2.111)

If during the thermodynamic process the system is isolated against energy and particle
exchange,
0 = dE = ∫_V de(r) δV     and     0 = dN_j = ∫_V dn_j(r) δV .        (2.112)

Equilibrium is reached when the entropy is maximum, and a necessary condition for
that is,
dS − k_B Σ_j α_j dN_j − k_B β dE = 0        (2.113)

for any value of the Lagrange multipliers αj and β. Substituting (2.111) and (2.112),
 
0 = ∫_V [ (1/T) de(r) − (1/T) Σ_j µ_j dn_j(r) ] δV − k_B β ∫_V de(r) δV − Σ_j k_B α_j ∫_V dn_j(r) δV ,        (2.114)
which implies,
0 = (1/T − k_B β) de(r) − Σ_j ( µ_j/T + k_B α_j ) dn_j(r) .        (2.115)

We conclude,
β = 1/(k_B T) ,     α_j = −µ_j/(k_B T) .        (2.116)
Since αj and β are constants, this means that temperature and chemical potential
cannot depend on position,
∇T = 0 = ∇µj . (2.117)

To derive the condition for mechanical equilibrium, we apply the operator (2.43)
on (2.42), that is,
 
(∂/∂N_j)_{P,T,N_{k≠j}}     applied to     N_j dµ_j = −S dT + V dP − Σ_{k≠j} N_k dµ_k ,        (2.118)
and obtain the partial molal equation,
dµ_j = −S̄_j dT + V̄_j dP − Σ_{k≠j} N_k (∂dµ_k/∂N_j)_{P,T,N_{k≠j}} .        (2.119)
k̸=j

We now interpret the differential forms as gradients, d → dr → ∇,


 
∇µ_j = −S̄_j ∇T + V̄_j ∇P − Σ_{k≠j} N_k (∂∇µ_k/∂N_j)_{P,T,N_{k≠j}} .        (2.120)

As we already found the equilibrium conditions ∇T = 0 = ∇µj , we get,


 
V̄_j ∇P = Σ_{k≠j} N_k (∂∇µ_k/∂N_j)_{P,T,N_{k≠j}} .        (2.121)

Multiplying both sides by η_j, summing over all components, and exploiting the fact
that the summation over all partial molal volumes produces the total molar volume,
Σ_j η_j V̄_j = V,
V ∇P = Σ_j η_j Σ_{k≠j} N_k (∂∇µ_k/∂N_j)_{P,T,N_{k≠j}} .        (2.122)

The right-hand side can be shown to vanish using the Gibbs-Duhem equation (2.50).
Hence,
∇P = 0 . (2.123)
The results (2.117) and (2.123) were expected and only tell us that there is nothing
wrong with the density formalism.

2.4.2.2 With external force field


Let us now study the impact of time invariant external forces. The time invariance
is obviously necessary to allow the system to reach a steady state. If the force is
conservative, it can be derived from a potential,

F(r) = −∇Φ(r) . (2.124)



The work executed by this force, δWext = dWext = F · dr, then does not depend on
the path taken between start and end point,
W_ext = ∫_{r_1}^{r_2} F · dr = −∫_{r_1}^{r_2} ∇Φ(r) · dr = −∫_{Φ(r_1)}^{Φ(r_2)} δΦ(r) = Φ(r_1) − Φ(r_2) .        (2.125)

Forces act on masses Mj , or charges Qj carried by particles. The mass contained in


a volume element δV is,
δm = Σ_j M_j n_j δV .        (2.126)

The total potential energy is the sum of the potential energies to which the mass
elements δm are subject,
E_pot = ∫_V (δΦ(r)/δV) δV = ∫_V (δΦ(r)/δm) δm(r) = ∫_V (δΦ(r)/δm) Σ_j M_j n_j δV .        (2.127)

Including the internal energy the total energy differential is then,


dE_tot = dE + dE_pot = ∫_V de(r) δV + ∫_V (δΦ(r)/δm) Σ_j M_j dn_j δV .        (2.128)

Repeating the entropy maximization procedure, we now get,


0 = dS − k_B β dE_tot − Σ_j k_B α_j dN_j        (2.129)
  = ∫_V [ (1/T) de(r) − (1/T) Σ_j µ_j dn_j(r) ] δV − k_B β ∫_V [ de(r) + (δΦ(r)/δm) Σ_j M_j dn_j(r) ] δV − Σ_j k_B α_j ∫_V dn_j(r) δV ,

which implies,
0 = (1/T − k_B β) de(r) − Σ_j [ µ_j/T + k_B β M_j (δΦ(r)/δm) + k_B α_j ] dn_j(r) .        (2.130)

We conclude,

T = 1/(k_B β) ,     µ_j/T + k_B β M_j (δΦ(r)/δm) + k_B α_j = 0 .        (2.131)

Since αj and β are constants,


 
∇T = 0 = ∇[ µ_j + M_j (δΦ(r)/δm) ] .        (2.132)

To derive the condition for mechanical equilibrium, we use again the expres-
sion (2.120), but now inserting the equilibrium conditions (2.132), we obtain an equa-
tion generalizing (2.121),
V̄_j ∇P = −∇[ M_j (δΦ(r)/δm) ] − Σ_{k≠j} N_k (∂∇µ_k/∂N_j)_{P,T,N_{k≠j}} .        (2.133)

Again, multiplying both sides with η_j and summing over all components,
V ∇P = −∇[ (δΦ(r)/δm) Σ_j η_j M_j ] ,        (2.134)
δm

or,
∇P = −(M/V) (δ∇Φ(r)/δm) .        (2.135)
In particular, for an ideal gas,
∇P → (ideal gas) −(M P/(N k_B T)) (δ∇Φ(r)/δm) = −(M̃ P/(R_g T)) (δ∇Φ(r)/δm) ,        (2.136)
with the solution,
P = P_0 exp( −(M̃/(R_g T)) (δΦ(r)/δm) ) .        (2.137)
Solve the Excs. 2.4.3.1 to 2.4.3.3.
Example 26 (Earth gravitation): Here, we consider a system in the Earth’s
gravitational field, whose force is described by,

F(r) = −∇(mgz) = −mgêz . (2.138)

The force can be derived from a potential,


Φ(r) = m g z     or     δΦ(r)/δm = g z .        (2.139)
The conditions for equilibrium are,
∇T = 0     and     ∇µ_j = −∇[ M_j (δΦ(r)/δm) ] = −M_j g ê_z ,        (2.140)
and,
∇P = −(M/V) (δ∇Φ(r)/δm) = −(M/V) g ê_z .        (2.141)
Hence, for an ideal gas,

dP = −(M/V) g dz  → (ideal gas)  −(M̃ g/(R_g T)) P dz .        (2.142)
This is a first-order differential equation in P with the solution,

P = P0 e−M̃ gz/Rg T , (2.143)

where M̃ is the molar mass.
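A quick numerical check of Eq. (2.143), not part of the original text, using nitrogen at room temperature as an illustrative case:

import numpy as np

Rg = 8.314        # J/(mol K)
M = 0.028         # kg/mol, molar mass of N2
g = 9.81          # m/s^2
T = 300.0         # K
P0 = 1.0          # bar at z = 0

for z in (0.0, 1e3, 5e3, 1e4):                # altitude in meters
    P = P0 * np.exp(-M * g * z / (Rg * T))    # Eq. (2.143)
    print(f"z = {z/1e3:5.1f} km   P = {P:.3f} bar")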



Example 27 (Centrifugal field ): Here, we consider system rotating at con-


stant angular velocity ω about the z-axis and thus subject to radial acceleration,
a = ω2 ρ . (2.144)
The centrifugal force can be derived from a potential,
Φ(r) = −(m/2) ω^2 (x^2 + y^2) .        (2.145)
The conditions for equilibrium are ∇T = 0 and,
∇µ_j = −∇[ M_j (δΦ(r)/δm) ] = M_j ω^2 ρ ê_ρ ,        (2.146)
and,
∇P = −(M/V) (δ∇Φ(r)/δm) = (M/V) ω^2 ρ ê_ρ .        (2.147)
Hence, for an ideal gas,
dP = (M/V) ω^2 ρ dρ  → (ideal gas)  (M̃ P/(R_g T)) ω^2 ρ dρ ,        (2.148)
or
dP/P = (M̃ ω^2/(R_g T)) ρ dρ ,        (2.149)
with the solution,
P = P_0 e^{M̃ ω^2 ρ^2/(2 R_g T)} .        (2.150)

2.4.2.3 With electrostatic field


In electrostatics the Lorentz force is,
F(r) = qE(r) with E(r) = −∇Φel (r) , (2.151)
where Φel is the electrostatic potential. In thermodynamics,
F(r) = −∇Φ(r) = −q ∇Φ_el(r)   ⟹   Φ_el(r) = δΦ(r)/δq .        (2.152)
The mass and the charge elements are, respectively,
δmj = Mj nj δV and δQj = enj δV . (2.153)
Conditions for equilibrium are,
∇T = 0     and     ∇µ_j = −∇[ (M_j/V) (δΦ(r)/δm) ] = −(M_j/V) (δ/δm) ∇ ∫ Φ_el(r) δq .        (2.154)
The quantity
µ_j + (M_j/V) (δ/δm) ∫ Φ_el(r) δq        (2.155)
is also called electrochemical potential, and in thermodynamic equilibrium it should be
homogeneous, i.e. its gradient should vanish. For mechanical equilibrium we request,
∇P = −(M/V) (δ∇Φ(r)/δm) = −(M/V) (δ/δm) ∇ ∫ Φ_el(r) δq .        (2.156)

Example 28 (Electrostatic field ): Given an electrostatic field that decays


exponentially with the distance from a surface toward its interior, we want to
describe the variation of composition of the components with position when
it comes to equilibrium. Let ηj (∞) be the composition well away from the
surface corresponding to the chemical potential µj (∞). Since at equilibrium
the electrochemical potential (2.155) is constant,
µ_j + (M_j/V) (δ/δm) ∫ Φ_el(r) δq = const = µ_j(∞) .        (2.157)

Negatively charged components exhibit a chemical potential distribution that


mimics the electric field function with µj and ηj high near the surface and
decaying down to ηj (∞). The distribution for positively charged components is
opposite.

2.4.3 Exercises
2.4.3.1 Ex: Pressure in a harmonically trapped ideal gas
Calculate the local pressure in a harmonically trapped ideal gas in thermodynamic
equilibrium.

2.4.3.2 Ex: Atmosphere of a planet


Assume the atmosphere of planet X to be composed of a binary mixture of hydrogen
and nitrogen. Derive an expression for the variation of composition with altitude
treating the elements as ideal gases. Assuming that the planet has a gravitational
acceleration of g = 10 m/s2 , a temperature of T = 800 K, and a H2 abundance on
the planetary surface of ηH2 = 0.35, calculate the atmosphere composition at 100 km
above the surface.

2.4.3.3 Ex: Centrifuges


Consider a dilute solution of 85 Rb and 87 Rb whose natural abundances are 72.17 :
27.83. Placed in a centrifuge, how fast should it rotate in order to reach ratio of
10 : 90 near the outside radius ρ = 1 m.

2.5 Reacting systems


Matter consists of molecules and molecules are composed of atoms chosen out of a
small number of species. Being the elementary building blocks, the atoms cannot mu-
tate into atoms of another species. However, they may associate to a large variety of
molecules in a multitude of geometric and energetic configurations in a process called
chemical reaction. The molecules are represented by chemical formulas describing
succinctly the numbers of atoms, e.g. CO2 for carbon dioxide, and a chemical reac-
tion is represented by an equation comparing the number of atoms before and after
an association or dissociation, e.g.,

2 H2 + O2 −−→ 2 H2 O . (2.158)

As already formulated by Lavoisier, the number of atoms in a closed system is always


conserved.
A system that consists of K elements and J chemical components, some of which
are molecules, has
R=J −K (2.159)
independent reactions. For example, for a system containing H2 , O2 , and H2 O we
have J = 3 and K = 2, so that R = 1. Such systems are called univariant reacting
systems. Now assume that we have also H2 O2 in the system. Then we expect two
independent chemical reactions,

2 H2 + O2 −−→ 2 H2 O (2.160)
H2 + O2 −−→ H2 O2 .

Other reactions that may be formulated, e.g.,

2 H2 O + O2 −−→ 2 H2 O2        (2.161)
are linear combinations of (2.160). Systems with R > 1 are called bivariant or, more
generally, multivariant.
In the next section we will first focus on the gas phase, where the components
unambiguously exist as molecules. In solids, which are characterized by the existence
of multiple bonds between molecules, this is more complicated. Also, we will first
treat univariant systems before progressively generalizing to multivariant systems.

2.5.1 Univariant chemical reactions in the gas phase


Let us study again the univariant system composed of H2 , O2 , and H2 O. The objective
is to derive conditions for thermodynamic equilibrium. From the combined first and
second law of thermodynamics we derive the entropy for the multi-component system,
dS = (1/T) dE + (P/T) dV − (1/T) Σ_j µ_j dN_j        (2.162)
   = (1/T) dE + (P/T) dV − (1/T) [ µ_{H2} dN_{H2} + µ_{O2} dN_{O2} + µ_{H2O} dN_{H2O} ] .
As usual, for a closed isolated system, we ask for the entropy to be at maximum, under
the constraint that the internal energy and the volume cannot change, dE = 0 = dV .
On the other hand, the third isolation constraint dNj = 0 does not hold any more,
since the components may transform into one another. The total number of atoms of
one species, however, does not change. Hence, from the chemical reaction (2.158), we
obtain the constraint,
dN_{H2} = 2 dN_{O2} = −dN_{H2O} .        (2.163)
Inserting all the constraints into Eq. (2.162) we get,
dS^iso = −(1/T) ( −µ_{H2} − ½ µ_{O2} + µ_{H2O} ) dN_{H2O} ≡ −(A/T) dN_{H2O} .        (2.164)
The linear combination of chemical potentials in brackets is called affinity for the
reaction (2.158). If temperature, pressure, and composition of the gas mixture are

known, then the chemical potential of the components and hence the affinity can be
computed.
The affinity can be positive or negative, depending on the chemical potentials of
the reactants and products. The second law of thermodynamics, however, requests
that for any spontaneous process dS^iso ≥ 0, so that in Eq. (2.164) the affinity and dN_{H2O} must
have opposite signs. In other words, the affinity decides in which direction a chemical
reaction will take place. A state of equilibrium is reached when dS^iso = 0, which
implies A = 0. The chemical reaction comes to a halt when the sums of the chemical
potentials of the reactants and of the products are equal, and this condition may require an
excess of reactants or products, as illustrated in Fig. 2.7.

Figure 2.7: (a) Direction of a chemical reaction as a function of affinity. (b) The equilibrium
of a chemical reaction may be on the left or right side.

As stated earlier, what is usually reported in experimental studies of solutions are


the activities of the components. In order to express the affinity in terms of activities
of the components, we recall Eq. (2.80), which states,

µj = µ0j + Rg T ln aj = G̃0j + Rg T ln aj , (2.165)

where G̃0j is the molar Gibbs free energy of component j, when it is in its reference
state. Let us now consider an arbitrary chemical reaction,

p P + q Q −−→ x X + y Y , (2.166)

for which,

A = µ_products − µ_reactants = x µ_X + y µ_Y − p µ_P − q µ_Q        (2.167)
  = x G̃_X^0 + y G̃_Y^0 − p G̃_P^0 − q G̃_Q^0 + R_g T [ x ln a_X + y ln a_Y − p ln a_P − q ln a_Q ]
  ≡ ∆G^0 + R_g T ln [ a_X^x a_Y^y / (a_P^p a_Q^q) ] ,

where the abbreviation ∆G0 comprises the four Gibbs free energy terms and describes
the change in Gibbs free energy upon complete conversion of p moles of P and q moles
of Q in their standard states into x moles of X and y moles of Y in their standard
states. The quotient in the logarithm is called the ratio of activities for the reaction,
Q ≡ a_X^x a_Y^y / (a_P^p a_Q^q) .        (2.168)
As already mentioned, a reaction will go on until the system reaches equilibrium,

A = 0 = ∆G0 + Rg T ln Qeq . (2.169)



Example 29 (Synthesis of water ): Let us consider a gas mixture composed of


ηH2 = 0.01 mol, ηO2 = 0.03 mol, and ηH2 O = 0.96 mol at atmospheric pressure.
At 700 ◦ C the reaction (2.158) liberates the Gibbs free energy ∆G0 = −440 kJ.
The equilibrium constant can be calculated from (2.169),
0
Qeq = e−∆G /Rg T
≈ 4.2 × 1023 .

A gas mixture may be considered an ideal solution, so that the activities of their
components are given by their molar fractions,

Q = a_{H2O}^2/(a_{H2}^2 a_{O2}) = η_{H2O}^2/(η_{H2}^2 η_{O2}) ≈ 3.1 × 10^5 .

Thus, Q/Qeq ≈ 7.2 × 10−19 , which means that the equilibrium is far on the
H2 O side. In the reaction process, the 0.01 mol of hydrogen will be completely
used up. However, there is an excess of oxygen in the system, as only 0.005 mol
are used. Hence, the final composition will be ηH2 ≈ 0, ηO2 = 0.0025 mol,
and ηH2 O = 0.97 mol. Note also that the total number of moles is reduced to
ηH2 + ηO2 + ηH2 O = 0.995 mol.
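The numbers of Example 29 are easily reproduced with a few lines (an illustrative sketch, not part of the original text):

import numpy as np

Rg = 8.314
T = 700.0 + 273.15            # K
dG0 = -440e3                  # J, standard Gibbs free energy of reaction (2.158) at 700 C
eta = {'H2': 0.01, 'O2': 0.03, 'H2O': 0.96}   # initial molar fractions

Q_eq = np.exp(-dG0 / (Rg * T))                     # Eq. (2.169)
Q = eta['H2O']**2 / (eta['H2']**2 * eta['O2'])     # activity ratio of an ideal gas mixture

print(f"Q_eq = {Q_eq:.2e},  Q = {Q:.2e},  Q/Q_eq = {Q/Q_eq:.1e}")
# Q << Q_eq: the reaction proceeds toward H2O until the hydrogen is used up.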

2.5.2 Multi-variant chemical reactions in the gas phase


The general strategy for determining the equilibrium composition in a multi-variant
reacting system makes use of the independent conditions for chemical equilibrium and
conservation laws for the number of atoms of every involved element. For a single-
phase system with J components and K different atomic elements, Akj denotes the
number of atoms of a given species k bound in a molecule representing the j-th
component of the mixture. For example, if k = H and j = H2 O then Akj = 2.
The strategy can be summarized in the following sequence of calculations:

1. Identify the types j of molecules involved and their numbers Njin (or moles or
molar fractions) before a chemical reaction is initiated.

2. Determine the chemical structure of each type of molecules j in terms of numbers


of atoms Akj of each element k.

3. Calculate the total number of atoms of each atomic element in the system from,
A_k = Σ_{j=1}^J A_{kj} N_j^in     ∀ k = 1, ..., K .        (2.170)

4. The numbers of atoms per species Akj and the chemical composition of the
mixture Nj change when the atoms reorganize into different molecules through
chemical reactions (labeled r = 1, ..., R). Thus, in parallel, we need to identify
all R possible chemical reactions transforming a group of molecules {jr } into a
group {jr′ }, and vice versa, with jr , jr′ = 1, ..., J labeling the types of molecules.

5. The numbers of molecules are controlled by equilibrium conditions associated to


every possible chemical reaction. As stated in Eq. (2.159), there are R = J − K

independent reactions in this system, and each reaction has its equilibrium
condition,
Q eq
0 eq j ′ aj ′
∆Gr = −Rg T ln Qr = −NA kB T ln Q r eqr (2.171)
jr ajr
Q eq Q eq
ideal jr′ ηjr′ jr′ Njr′
−→ −NA kB T ln Q eq = −NA kB T ln Q eq ∀ r = 1, ..., R ,
jr ηjr jr Njr

for an ideal solution. Thus, each of the equilibrium constants Qeq r ruling a
chemical reaction r is related to the composition of the system through a non-
linear equation.
6. The number of atoms Ak of a given atomic element is conserved and does not
change with time during the reaction process. Hence, the K equations (2.170)
also hold after chemical equilibrium is reached. Together with R equations
(2.171) we obtain a sufficient set of equations,
Σ_{j=1}^J A_{kj} N_j^eq = A_k     ∀ k = 1, ..., K        (2.172)
Σ_{j_r'} ln N_{j_r'}^eq − Σ_{j_r} ln N_{j_r}^eq = −∆G_r^0/(R_g T)     ∀ r = 1, ..., R

While the first equations are linear in the J unknown values Njeq , the second
equations are non-linear.
Note, that the equilibrium composition of the mixture depends on the initial
composition that the system had while it was still isolated, but only via the numbers
of atoms of all involved atomic elements, no matter how these were distributed among
the molecular components before these were mixed. The task is to express the final
composition Njeq as a function of the Ak and the Qeq r .

Example 30 (Chemical equilibrium of a gas mixture): A gas mixture has


the following initial composition,
molecular component            H2     O2     H2O    CO     CO2    CH4
initial molar fraction η_j^in  0.05   0.05   0.15   0.25   0.40   0.10        (2.173)
and the goal is to find the equilibrium composition at 600 ◦ C.
We start setting up the equations (2.170),
⎡A_C⎤   ⎡0 0 0 1 1 1⎤ ⎡N_H2 ⎤
⎢A_O⎥ = ⎢0 2 1 1 2 0⎥ ⎢N_O2 ⎥
⎣A_H⎦   ⎣2 0 2 0 0 4⎦ ⎢N_H2O⎥ ,        (2.174)
                      ⎢N_CO ⎥
                      ⎢N_CO2⎥
                      ⎣N_CH4⎦
which hold at any time before and after the reactions. Therefore, before the
reaction, we have,
A_C = (1 η_CO^in + 1 η_CO2^in + 1 η_CH4^in) N_tot^in = 0.75 N_tot^in
A_O = (2 η_O2^in + 1 η_H2O^in + 1 η_CO^in + 2 η_CO2^in) N_tot^in = 1.30 N_tot^in        (2.175)
A_H = (2 η_H2^in + 2 η_H2O^in + 4 η_CH4^in) N_tot^in = 0.80 N_tot^in

with N_j^in = η_j^in N_tot^in and the total number of molecules N_tot^in = Σ_j N_j^in.
After the reaction, we have,
A_C = (1 η_CO^eq + 1 η_CO2^eq + 1 η_CH4^eq) N_tot^eq = 0.75 N_tot^in
A_O = (2 η_O2^eq + 1 η_H2O^eq + 1 η_CO^eq + 2 η_CO2^eq) N_tot^eq = 1.30 N_tot^in        (2.176)
A_H = (2 η_H2^eq + 2 η_H2O^eq + 4 η_CH4^eq) N_tot^eq = 0.80 N_tot^in

with N_j^eq = η_j^eq N_tot^eq and a different total number of molecules N_tot^eq = Σ_j N_j^eq.
The equations now read,
η_H2^eq + η_O2^eq + η_H2O^eq + η_CO^eq + η_CO2^eq + η_CH4^eq = 1        (2.177)
η_CO^eq + η_CO2^eq + η_CH4^eq = 0.75 N_tot^in/N_tot^eq
2 η_O2^eq + η_H2O^eq + η_CO^eq + 2 η_CO2^eq = 1.30 N_tot^in/N_tot^eq
2 η_H2^eq + 2 η_H2O^eq + 4 η_CH4^eq = 0.80 N_tot^in/N_tot^eq .
Now, with J = 6 and K = 3 we have R = 3 independent reactions,

2 H2 + O2 ←−→ 2 H2 O (i)
2 CO + O2 ←−→ 2 CO2 (ii) . (2.178)
CH4 + 2 O2 ←−→ 2 H2 O + CO2 (iii)

The standard free energy for these reactions at the specified temperature are,

∆G0i = −404.2 kJ , ∆G0ii = −414.5 kJ , ∆G0iii = −797.9 kJ . (2.179)

The corresponding equilibrium constants, assuming an ideal gas mixture so that


the activities are equal to the molar fractions, are,
(η_H2O^eq)^2/[(η_H2^eq)^2 η_O2^eq] = Q_i^eq ,     (η_CO2^eq)^2/[(η_CO^eq)^2 η_O2^eq] = Q_ii^eq ,     (η_H2O^eq)^2 η_CO2^eq/[η_CH4^eq (η_O2^eq)^2] = Q_iii^eq .        (2.180)

With (2.177) and (2.180) there are thus seven equations for the six unknown
equilibrium molar fractions and the ratio of total numbers of molecules before
and after the reactions, N_tot^in/N_tot^eq, which can be solved numerically. A MATLAB
code, which is available here, yields,

molecular component           H2      O2             H2O     CO      CO2     CH4
final molar fraction η_j^eq   0.136   1.6 × 10^{-24}  0.144   0.231   0.445   0.052        (2.181)
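The text refers to a MATLAB code; the sketch below is an assumed Python equivalent (not the original code) that solves the seven coupled equations (2.177) and (2.180) with scipy.optimize.fsolve. The unknowns are the logarithms of the six equilibrium fractions (which keeps them positive) and the ratio x = N_tot^in/N_tot^eq; the initial guess is taken close to the quoted result (2.181) and may need tuning.

import numpy as np
from scipy.optimize import fsolve

Rg, T = 8.314, 600.0 + 273.15                    # J/(mol K), K
dG0 = np.array([-404.2e3, -414.5e3, -797.9e3])   # J, reactions (i)-(iii), Eq. (2.179)
lnQ = -dG0 / (Rg * T)                            # logarithms of the equilibrium constants

# unknowns: y = [ln eta_H2, ln eta_O2, ln eta_H2O, ln eta_CO, ln eta_CO2, ln eta_CH4, x]
def equations(y):
    h2, o2, h2o, co, co2, ch4 = np.exp(y[:6])
    x = y[6]
    return [h2 + o2 + h2o + co + co2 + ch4 - 1.0,     # Eq. (2.177), normalization
            co + co2 + ch4 - 0.75 * x,                # carbon balance
            2*o2 + h2o + co + 2*co2 - 1.30 * x,       # oxygen balance
            2*h2 + 2*h2o + 4*ch4 - 0.80 * x,          # hydrogen balance
            2*y[2] - 2*y[0] - y[1] - lnQ[0],          # Eq. (2.180), reaction (i)
            2*y[4] - 2*y[3] - y[1] - lnQ[1],          # reaction (ii)
            2*y[2] + y[4] - y[5] - 2*y[1] - lnQ[2]]   # reaction (iii)

y0 = np.append(np.log([0.14, 1e-24, 0.14, 0.23, 0.44, 0.05]), 1.0)
sol = fsolve(equations, y0)
print("equilibrium fractions:", np.round(np.exp(sol[:6]), 3), " N_in/N_eq =", round(sol[6], 3))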

2.5.3 Exercises
2.5.3.1 Ex: Final composition of an ideal gas mixture
An ideal gas at 1000 K has the following composition,

molecular component CO CO2 O2


initial molar fraction ηjin 0.50 0.38 0.12

a. Compute the affinity for the reaction,
CO + ½ O2 ←−→ CO2 .

b. Calculate the molecular composition after the reaction.

2.5.3.2 Ex: Hydrogen concentration in a metal

The concentration of hydrogen cH , which is dissolved in a metal in the form of H


atoms, depends on the pressure P of the H2 gas around the metal.
a. How are the chemical potentials µH and µH2 related?
b. Express the chemical potential in terms of its partial pressure in the gas phase.
c. Describing the hydrogen dissolved in the metal as an ideal solution, express the
chemical potential in terms of its concentration.
d. Finally, determine the relationship between concentration and pressure.

2.6 Classification of thermodynamic phase transi-


tions
The old Ehrenfest classification [32] calls a phase transition of nth order if the deriva-
tive ∂ n µ/∂T n is discontinuous. Thus BEC of a trapped ideal gas is a first-order phase
transition, because the chemical potential suddenly changes its slope at Tc .
The modern Landau classification distinguishes two types of phase transitions in
homogeneous systems: First-order phase transitions exhibit a discontinuity in the
order parameter, while for continuous phase transitions the order parameter does not
make jumps.
First order phase transitions are characterized by 1. equilibrium between phases
(liquid-gas, liquid-solid), 2. discontinuous entropy, therefore latent heat, 3. at least one
derivative of a thermodynamic potential is discontinuous. The two phases coexist
at the transition point. E.g. at T = 0 ◦ C in a closed system water and ice coexist.
Continuous phase transitions are characterized by 1. no equilibrium between phases, 2. no latent heat, but often a discontinuous heat capacity, 3. all first-order derivatives are continuous, but a second-order derivative is discontinuous. There is no phase coexistence at the critical point. E.g. at T = Tc there is no condensate; it appears only below Tc.

2.6.1 Solid-liquid-vapor
In the case of the liquid-vapor transition the two phases are only quantitatively dis-
tinct, but have the same symmetry. Therefore, a discontinuity of the thermodynamic
potentials is required to reveal the phase transition.
In the case of the solid-liquid transition the two phases are qualitatively distinct
due to different symmetries. We do not need a discontinuity to distinguish the phases.
Landau’s theory holds for this class of transitions. It establishes a relationship between
symmetry considerations and physical characteristics by introducing the notion of the
order parameter and free energy.

2.6.2 Bose-Einstein condensation


Is the observed Bose-Einstein condensation in trapped gases really a phase transition?
A homogeneous gas has strong fluctuations near Tc that can heavily be influenced by
interactions, which could result in phase domains. In contrast, a trapped gas is quite
robust near Tc due to the modification of the density of states for small energies by
the trapping potential, which makes the interactions less important (see stabilization
of attractive gases). However, Tc is not precisely defined and far from Tc interactions
become very important (Thomas-Fermi limit).
The dynamics of a phase transition is ruled by a competition between the internal energy, which tends to be minimized, and the entropy, which tends to be maximized.

2.7 Materials
2.7.1 Electrons in solids
2.7.1.1 Types of solids

In contrast to a gas, which in most cases consists of isolated particles, the interparti-
cle interaction plays a dominant role in crystals. Solids, or more specifically crystals,
are classified according to the predominant type of binding. 1. Molecular binding
is responsible for the solidification of diatomic gases like O2. Here, fluctuating dipole moments inducing dipole moments in neighboring molecules lead to attractive van der Waals forces on the order of E_bind ≃ 10^−2 eV scaling like r^−7. 2. Ionic binding
gives rise to periodic structures alternating positive and negative charges, as in NaCl.
3. Covalent binding is directional. This directionality determines the crystalline struc-
ture, such as in graphite and diamonds. In those three binding types there are no
free electrons and hence no conductance. However, covalently bound crystals can
sometimes be semiconductors or transparent. 4. Metallic binding is a limit of covalent binding, in which the valence electrons, which are shared by all atoms, overcome the repulsion between the ions. The ionic lattice is immersed in a gas of free electrons.
The ions have filled shells and are spherically symmetric. The electrons can easily
absorb light, which makes the crystal opaque. The type of binding is studied via
X-ray diffraction, via the dielectric properties, etc..

2.7.1.2 Band model

The number of orbitals in the isolated atoms forming the crystal gives the number
of states available to the free electron gas. The exchange interaction of the fermionic
electrons lifts the degeneracy (a generalization of the H2 molecule) and gives rise to
a band structure. The electronic localization determines the width of the band: very
delocalized electrons move in large bands. The interatomic distance also influences
the band width. The closer the atoms the stronger the interaction, the larger the
bands.
Bands originating from different orbitals may eventually overlap. Note that the ml degen-
eracy is lifted because spherical symmetry is broken by the crystal [E(3s) ̸= E(3p)].

2.7.1.3 Electrical conductance


Electrons can only move, even under the influence of an external force, if a sufficient number of unpopulated states is available. If no states are available, the crystal becomes insulating. Overlapping filled and empty bands provide many states and allow for good conductance.
If the Fermi energy EF lies between a completely filled valence band and an empty conduction band, the crystal is insulating. However, at T > 0, if the forbidden band is narrow, as in the case of semiconductors (for Si ∆E ≃ 1 eV), the gap may be bridged by thermal excitations.
The electrons collide with crystal impurities, defects and phonons. While the
velocity of the electrons is about v̄ ≃ 107 cm/s, the short mean free path λ limits the
drift velocity to vd ≃ 10^−2 cm/s. The electric force eE (acceleration eE/m) accelerates the electrons between successive collisions occurring at a rate v̄/λ, such that
j/(ne) = v_d = (eE/m) × (λ/v̄) ,     (2.182)
where j is the current density and ne the charge density. The fact that v̄ and λ do not depend on the electric field is known as Ohm's law. The mobility µ ≡ v_d/E allows us to write the electrical conductivity as
ρ^−1 = n_−e_−µ_− + n_+e_+µ_+ .     (2.183)

The value and sign of the Hall coefficient 1/ne can be measured via the Hall effect. It is positive if the conduction occurs primarily through holes and negative if it occurs through electrons.
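As a small numerical illustration of Eq. (2.183) for purely electronic conduction (the carrier density and mobility below are illustrative values, roughly those of copper, not taken from the text):

e = 1.602e-19              # elementary charge (C)
n_e = 8.5e28               # carrier density (1/m^3), illustrative
mu_e = 4.3e-3              # electron mobility (m^2/Vs), illustrative
sigma = n_e * e * mu_e     # conductivity = 1/rho, Eq. (2.183) with one carrier type
print(sigma, 1/sigma)      # roughly 6e7 S/m and 2e-8 Ohm m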

2.7.1.4 Semiconductors
There is an intrinsic temperature-dependent conductivity (for Si ρ(600 K)/ρ(300 K) ≈
10^9). Extrinsic conductivity can be induced by photoexcitation or doping. E.g. As/Ga in a Ge crystal has one weakly bound electron more/less than required to fit into the lattice. This generates discrete energy levels slightly below the conduction band/slightly above the valence band, min(Ec) − En, Ep − max(Ev) ≃ 0.01 eV.
The Fermi energy EF is the energy below which half of the electrons are found. In an insulator it lies between max(Ev) and min(Ec). In the presence of doping, EF is shifted toward En or Ep by the additional electrons or holes. If n- and p-doped materials are combined, electrons drift from the n to the p region, so as to minimize the energy and establish a uniform EF across the whole crystal.
Thermally excited electrons may drift and recombine with holes. The junction is maintained by a steady flux in a dynamic equilibrium. An external voltage can raise or lower the barrier, because the potential drops mostly near the junction, where the resistance is highest. In this case the thermal current is no longer balanced, and the diode either blocks or conducts. The electrons move so as to re-equalize EF.
A transistor is a series of junctions in npn or pnp configuration. The base-emitter current can be used to switch a collector-emitter current by injecting electrons. A tunnel diode acts like a normal diode except that when the bands come closer together within the junction (at low voltages in forward polarization), electrons may pass by tunneling from the conduction into the valence band. This flow gradually stops when EF is leveled (for zero voltage). Tunneling currents react much faster than thermal drift currents.

2.7.2 Plasmas
2.7.2.1 Debye length
Consider a mixture of charges + and −, that is, a plasma. Energy seeks to be
minimized by local compensation of charge imbalance. However, thermal motion
spoils perfect homogeneity. That is, if on the one hand, looking at large scales, the
environment seems neutral and homogeneous, at small scales there may be charge
imbalances producing potentials with exponentially decreasing range,
1/λ_D² = 1/λ_D−² + 1/λ_D+² = (ne²/ε_0) [1/(k_B T_+) + 1/(k_B T_−)] .     (2.184)

The Debye length naturally enters the thermodynamic description of large sys-
tems of mobile charges. We consider a system of two different species of charges q± with concentrations n±(r) at locations r. According to the so-called primitive model, these charges
are distributed in a continuous medium characterized only by its relative static per-
mittivity, εr . This distribution of charges through the medium generates an electric
potential Φ(r) that satisfies the Poisson equation:

ε∇2 Φ(r) = −q+ n+ (r) − q− n− (r) − ρE (r) , (2.185)

where ε ≡ ε_r ε_0, ε_0 is the vacuum permittivity, and ρE is a charge density external to the medium (in a logical, not a spatial sense).
The mobile charges do not only generate Φ(r), but also are moved according to the
associated Coulomb force, −q± ∇Φ(r). Assuming the system to be in thermodynamic
equilibrium with a heat reservoir at an absolute temperature T , the concentrations
of discrete charges, n± (r), can be considered as thermodynamic averages (ensem-
ble average) and the associated electrical potential as a thermodynamic mean field.
With these assumptions, the concentration of species j is described by the Boltzmann
distribution,
n± (r) = n0± e−q± Φ(r)/kB T , (2.186)
where n0j is the mean field concentration of the charge species j.
Identifying the instantaneous concentrations and the potentials in the Poisson
equation with their mean-field counterparts in the Boltzmann distribution, we obtain
the Poisson-Boltzmann equation:

ε∇2 Φ(r) = −q+ n0+ e−q+ Φ(r)/kB T − q− n0− e−q− Φ(r)/kB T − ρE (r) . (2.187)

Solutions of this nonlinear equation are known for simple systems. Solutions for more
general systems can be obtained in the high-temperature (or low-coupling) limit,
qj Φ(r) ≪ kB T , by Taylor expansion of the exponential,

q± Φ(r)
e−q± Φ(r)/kB T ≈ 1 − . (2.188)
kB T
2.7. MATERIALS 115

This approximation gives the linearized Poisson-Boltzmann equation,
ε∇²Φ(r) = [(n⁰_+q_+² + n⁰_−q_−²)/(k_B T)] Φ(r) − (n⁰_+q_+ + n⁰_−q_−) − ρe(r) ,     (2.189)
also known as the Debye-Hückel equation. The second term on the right-hand side disappears for electrically neutral systems. The term in brackets, divided by ε, has the unit 1/m². By dimensional analysis, it leads to the definition of a characteristic length scale,
λ_D = [ ε k_B T / (n⁰_+q_+² + n⁰_−q_−²) ]^{1/2}     (2.190)

usually called Debye-Hückel length. Being the only characteristic length scale of the
Debye-Hückel equation, λD defines the scale of variations in the potential and the
concentrations of the charged species. All charged species contribute to the Debye-
Hückel length in the same manner regardless of the sign of the charge. For an electrically
neutral system, the Poisson equation is,

∇²Φ(r) = λ_D^−2 Φ(r) − ρe(r)/ε .     (2.191)
To illustrate the Debye shielding, the potential produced by an external point-like
charge ρe = Qδ(r) is,
Φ(r) = [Q/(4πεr)] e^{−r/λ_D} .     (2.192)
The bare Coulomb potential is exponentially shielded by the medium over a distance
corresponding to the Debye length.
The Debye-Hückel length can be expressed in terms of the Bjerrum length λ_B as,
λ_D = ( 4πλ_B Σ_{j=1}^N n⁰_j z_j² )^{−1/2} ,     (2.193)
where z_j = q_j/e.

2.7.2.2 Typical values


In plasmas in space, where the electron density is small, the Debye length can reach macroscopic values, as shown in the table below.

2.7.2.3 Debye length in a plasma

In a plasma, the background medium may be treated as the vacuum (ε_r = 1), and the Debye length is,
λ_D = √[ (ε_0 k_B/q_e²) / (n_e/T_e + Σ_j z_j² n_j/T_j) ] ,     (2.194)
where T± are the temperatures of the electrons and ions, n_e is the density of the electrons and n_j that of the atomic species j, with positive ionic charge z_j q_e. The ion term is often neglected, giving,
λ_D = √[ ε_0 k_B T_e / (n_e q_e²) ] ,     (2.195)
although this is valid only when the mobility of the ions is negligible on the time scale of the process. Typical values are:

system                  density n_e (m^−3)   electron temp. T (K)   magn. field B (T)   Debye length λ_D (m)
solar core              10^32                10^7                   -                   10^−11
Tokamak                 10^20                10^8                   10                  10^−4
gas discharge           10^16                10^4                   -                   10^−4
ionosphere              10^12                10^3                   10^−5               10^−3
magnetosphere           10^7                 10^7                   10^−8               10^2
solar wind              10^6                 10^5                   10^−9               10
interstellar medium     10^5                 10^4                   10^−10              10
intergalactic medium    1                    10^6                   -                   10^5
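A minimal Python sketch evaluating Eq. (2.195) for a few of the tabulated plasmas (order-of-magnitude input values taken from the table above; constants in SI units):

import numpy as np
eps0, kB, qe = 8.854e-12, 1.381e-23, 1.602e-19
def debye_length(n_e, T_e):
    # electron Debye length, Eq. (2.195), neglecting the ion term
    return np.sqrt(eps0 * kB * T_e / (n_e * qe**2))
for name, n_e, T_e in [('ionosphere', 1e12, 1e3), ('Tokamak', 1e20, 1e8), ('solar wind', 1e6, 1e5)]:
    print(name, debye_length(n_e, T_e), 'm')

The results (about 10^−3 m, 10^−4 m, and 10 m, respectively) reproduce the orders of magnitude listed in the table.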

2.7.3 Exercises

2.8 Further reading


H.M. Nussenzveig, Curso de Fisica Basica, vol. 2: Fluidos, Vibrações e Ondas, Calor, Edgar Blucher (2014) [ISBN]
Chapter 3

Appendices to
’Thermodynamics’
3.1 Quantities and formulas in thermodynamics
3.1.1 Statistical formulas
Stirling’s formula is,

ln n! = n ln n − n + O(ln n)     or     n! ≃ n^n e^{−n} .     (3.1)
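A quick numerical check of how fast the approximation becomes accurate (an illustrative Python snippet, not part of the original script):

import math
for n in (10, 100, 1000):
    exact = math.lgamma(n + 1)            # ln(n!)
    stirling = n * math.log(n) - n        # leading terms of Eq. (3.1)
    print(n, exact, stirling, (exact - stirling) / exact)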

3.1.2 Polylogarithm
The polylogarithm (or Joncquière’s function) is a function defined as,
Li_η(Z) ≡ Σ_{t=1}^∞ Z^t/t^η = (1/Γ(η)) ∫_0^∞ x^{η−1} dx/(Z^{−1}e^x − 1) .     (3.2)

It serves to express the Bose and Fermi functions used in quantum statistics,

gη(±) (Z) = ±Liη (±Z) . (3.3)

The upper sign holds for bosons, the lower for fermions.

3.1.2.1 Riemann zeta-function


The definition of the Riemann zeta-function is,

gξ (1) = ζ(ξ) . (3.4)

3.1.2.2 Bose/Fermi function


According to (3.3) the Bose-Fermi functions are given by,
g_ξ^±(Z) = (1/Γ(ξ)) ∫_0^∞ x^{ξ−1} dx/(Z^{−1}e^x ∓ 1) = ±Σ_{ℓ=1}^∞ (±Z)^ℓ/ℓ^ξ ,     (3.5)

117
118 CHAPTER 3. APPENDICES TO ’THERMODYNAMICS’

where the second equation represents an expansion. The derivative satisfies a useful relationship,
∂g_ξ^±(Z)/∂Z = ±Σ_{ℓ=1}^∞ ∂/∂Z (±Z)^ℓ/ℓ^ξ = ±Σ_{ℓ=1}^∞ ±(±Z)^{ℓ−1}/ℓ^{ξ−1} = (1/Z) [±Σ_{ℓ=1}^∞ (±Z)^ℓ/ℓ^{ξ−1}] = g_{ξ−1}^±(Z)/Z .     (3.6)
The relationship can also be derived via partial integration exploiting,
d/dx [ −Z/(Z^{−1}e^x ∓ 1) ] = e^x/(e^x/Z ∓ 1)² .     (3.7)

We calculate,

∂g_ξ^±(Z)/∂Z = (1/Γ(ξ)) ∫_0^∞ [−x^{ξ−1} ∂/∂Z(Z^{−1}e^x ∓ 1)] / (Z^{−1}e^x ∓ 1)² dx = (1/(Z²Γ(ξ))) ∫_0^∞ x^{ξ−1} e^x/(Z^{−1}e^x ∓ 1)² dx     (3.8)
= (1/(Z²Γ(ξ))) [ −Z x^{ξ−1}/(Z^{−1}e^x ∓ 1) ]_0^∞ + (1/(Z²Γ(ξ))) ∫_0^∞ (ξ−1) x^{ξ−2} Z/(Z^{−1}e^x ∓ 1) dx
= 0 + (1/(ZΓ(ξ−1))) ∫_0^∞ x^{ξ−2}/(Z^{−1}e^x ∓ 1) dx = g_{ξ−1}^±(Z)/Z .

Another useful relationship is the Sommerfeld expansion, which holds for Fermi
functions,
∫_0^∞ η(x) dx/(e^{x−y} + 1) = ∫_0^y η(x) dx + ∫_0^∞ η(y+x) dx/(e^x + 1) − ∫_0^y η(y−x) dx/(e^x + 1)     (3.9)
≈ ∫_0^y η(x) dx + (π²/6) η′(y) + ...
The expansion holds for e^y ≫ 1 and yields,
f_ξ(e^y) ≈ (y^ξ/Γ(ξ+1)) [ 1 + π²ξ(ξ−1)/(6y²) + 7π⁴ξ(ξ−1)(ξ−2)(ξ−3)/(360y⁴) + ... ] .     (3.10)

For small z both functions converge towards,



c_ξ(z) = (1/Γ(ξ)) ∫_0^∞ x^{ξ−1} dx/(z^{−1}e^x) = c_{ξ−1}(z) = z .     (3.11)
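The integral and series representations in (3.5) can be compared numerically; the following Python sketch (an illustration, with the sign convention s = +1 for bosons and s = −1 for fermions) does so for ξ = 3/2:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma
def g_integral(xi, Z, s=+1):
    # integral representation of Eq. (3.5); s = +1 bosons, s = -1 fermions
    val, _ = quad(lambda x: x**(xi - 1) / (np.exp(x) / Z - s), 0, np.inf)
    return val / gamma(xi)
def g_series(xi, Z, s=+1, terms=200):
    # series representation of Eq. (3.5)
    l = np.arange(1, terms + 1)
    return s * np.sum((s * Z)**l / l**xi)
print(g_integral(1.5, 0.5, +1), g_series(1.5, 0.5, +1))
print(g_integral(1.5, 0.5, -1), g_series(1.5, 0.5, -1))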

3.2 Data tables


3.2.1 Material data
The following table shows the specific latent heats and change of phase temperatures
(at standard pressure) of some common fluids and gases:

Substance        SLH of fusion   melting point   SLH of vaporization   boiling point
                 (kJ/kg)         (°C)            (kJ/kg)               (°C)
ethyl alcohol    108             −114            855                   78.3
ammonia          332.17          −77.74          1369                  −33.34
carbon dioxide   184             −78             574                   −78.46
helium           -               -               21                    −268.93
hydrogen         58              −259            455                   −253
lead             23.0            327.5           871                   1750
strontium        0.72            777             12.6                  1377
methane          59              −182.6          511                   −161.6
oxygen           13.9            −219            213                   −183
silicon          1790            1414            12800                 3265
water            334             0               2264.705              100
The specific latent heat of condensation of water in the temperature range from
−25 ◦ C to 40 ◦ C is approximated by the following empirical cubic function. For subli-
mation and deposition from and into ice, the specific latent heat is almost constant in
the temperature range from −40 ◦ C to 0 ◦ C and can be approximated by the following
empirical quadratic function:
L_water(T) ≃ [ 2500.8 − 2.36 (T/°C) + 0.0016 (T/°C)² − 0.00006 (T/°C)³ ] J/g     (3.12)
L_ice(T) ≃ [ 2834.1 − 0.29 (T/°C) − 0.004 (T/°C)² ] J/g .

Figure 3.1: (code) Specific latent heat of fusion and vaporization of water.
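The empirical fits (3.12) are easily evaluated numerically, e.g. to reproduce Fig. 3.1 (illustrative Python snippet):

def L_water(T):
    # latent heat of vaporization/condensation of water in J/g, Eq. (3.12), T in deg C
    return 2500.8 - 2.36*T + 0.0016*T**2 - 0.00006*T**3
def L_ice(T):
    # latent heat of sublimation/deposition of ice in J/g, Eq. (3.12), T in deg C
    return 2834.1 - 0.29*T - 0.004*T**2
print(L_water(20.0), L_ice(-10.0))   # roughly 2454 J/g and 2837 J/g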

3.3 Further reading


R. DeHoff, Thermodynamics in Material Science [ISBN]
Part II

Statistical Physics and


Atomic Quantum Fields

Chapter 4

Statistical thermodynamics
All thermodynamic quantities studied in Chp. 1 (extensive or intensive) are quasi-
continuous, i.e. macroscopic. The laws of thermodynamics found to rule the behavior
of large systems were discovered empirically via experimental observations. The na-
ture of the laws is thus phenomenological, i.e. not derived from first principles. Until
now we totally neglected the fact that matter (gases, fluid, or solids) is composed of
microscopic elementary particles (atoms or molecules). Nevertheless, it already be-
came clear that the behavior of a system is somehow related to the properties of the
particles that compose it. E.g. the degrees of freedom of a molecule that can be ex-
cited have an influence on the heat capacity of a gas composed of these molecules; the
Joule-Thomson effect is due to intermolecular forces; and what we experience as heat,
is actually an outward manifestation of molecular and atomic motion, as we already
pointed out in Sec. 1.1.2. Tracing back macroscopic properties and phenomena to mi-
croscopic models bears a formidable potential of deepening our level of understanding
thermodynamic systems. It may even provide insight into the physical meaning of
mysterious or elusive phenomenological concepts such as entropy production.
An atomistic description acknowledges the fact that matter is quantized into small
portions called molecules 1 . Each molecule is understood as a (not necessarily rigid)
body characterized by its center-of-mass coordinates, but also its rotations or internal
vibrations. With typically 1023 atoms in just one liter of air the task of describing
the microstate by all its coordinates is hopeless. The mathematical discipline that
provides the tools capable of handling such big numbers is statistics, and the primary
tool supplied for the purpose is the concept of the distribution function. The idea is
to lump atoms having similar properties together to classes, e.g. energy levels. The
distribution function then simply reports the number of particles in each class, which
dramatically reduces the amount of information. The task of statistical thermodynam-
ics is now the description of a thermodynamic state in terms of a distribution function
called macrostate. The formulation of statistical thermodynamics by Boltzmann and
Gibbs provided a solid microscopic foundation of phenomenological thermodynamics.
We will begin this chapter with a calculation of the Boltzmann distribution of
microstates over the macrostates in Sec. 4.1 and introduce the concept of partition
function, from which all macroscopic state functions may be computed. As appli-
cations of this algorithm we will revisit the ideal gas and the Einstein model of a
1 The ’quantization’ of matter is not to be understood in the quantum mechanical sense. Nev-

ertheless, the particles themselves are generally microscopic and, under certain circumstances, may
behave following rules dictated by quantum mechanics. This can lead to macroscopically observable
phenomena studied in the area of quantum statistics, as we will learn in Chp. 4.2.

123

crystalline solid.

4.1 Microstates, macrostates, and entropy


4.1.1 Probabilities of microstates and the partition function
We consider a unary thermodynamic system composed of a very large number N of
identical (albeit distinguishable) particles, each one sufficiently specified by a set of
numbers (coordinates and internal quantum numbers). The list combining the sets
of all particles completely describes the microstate of the system. It changes if a
single number of just one particle is changed. The microstate also changes when we
just exchange two particles, although the physics of the system cannot change if the
particles are identical. Clearly, the macrostate of a system is invariant upon particle
exchange.
On the other hand, the number of macrostates we attribute to a system depends on
the information we want to gather. For example, we could split the volume occupied
by a gas into two parts, V1 and V2 , and call macrostate the situation when a specific
number N1 of particles is in volume V1 , no matter which particles. Or we could
classify the particles by their velocities and prepare a histogram. Any distribution of
the particles over the possible velocity classes leading to the exact same histogram
would then belong to the same macrostate.
In general, the microstates outnumber the macrostates by many orders of magni-
tude such that, when a system evolves through a thermodynamic process, it moves
through a large number of microstates. And since, a priori, all microstates have the
same probability, the likeliness of a macrostate is just the number of microstates it
encompasses. Let 1, 2, .., j, .., r denote the possible single-particle states that the sys-
tem has to offer, nj the number of particles being in the single-particle state j, and
{n1 , n2 , .., nj , .., nr } the actual macrostate. The number of microstates contributing
to the same macrostate is easily found by combinatorial analysis,
W_{n_j} = N!/(n_1! n_2! ... n_r!) = N! Π_{j=1}^r 1/n_j! ,     (4.1)

with the total number of particles N = n1 + n2 + ... + nr . The total number of


possible microstates is obviously rN . Hence, the probability to encounter the system
in a particular macrostate is,
P_{n_j} = W_{n_j}/r^N = N! Π_{j=1}^r 1/(n_j! r^{n_j}) .     (4.2)
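The counting expressed by Eqs. (4.1) and (4.2) is easily made concrete for the toy example of Fig. 4.1, N = 12 particles distributed over r = 2 boxes (illustrative Python snippet):

from math import factorial
def W(occupations):
    # number of microstates realizing the macrostate {n_j}, Eq. (4.1)
    w = factorial(sum(occupations))
    for n in occupations:
        w //= factorial(n)
    return w
N, r = 12, 2
for n1 in range(N + 1):
    macro = (n1, N - n1)
    print(macro, W(macro), W(macro) / r**N)    # W and the probability P, Eq. (4.2)

The probability is sharply peaked around the evenly distributed macrostate (6, 6), anticipating the equilibrium argument below.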

Of all possible macrostates, there will be one containing the largest number of
microstates, and the probability to encounter the system in this macrostate is highest.
Examination of P{nj } for a variety of macrostates {nj } reveals that the probability
distribution is sharply peaked, and that macrostates deviating only slightly are already
very unlikely. The most probable state is now interpreted as the state of equilibrium,
and this hypothesis forms the basis for connecting phenomenological thermodynamics
to an atomistic statistical description.

Figure 4.1: Illustration of micro- and macrostates with identical particles: (a-c) Distribution of 12 particles over 2 boxes. (d-f) Distribution of 13 particles over 4 energy levels. All schemes show different microstates, but only the schemes (a) and (b), respectively (d) and (e), correspond to the same macrostates.

The equilibrium condition for highest probability in the statistical description


is similar to the requirement of maximum entropy in phenomenological thermodynamics,
which suggests that both concepts are connected. But while the entropy is additive
(the entropies of subsystems sum up to a global entropy), the number of macrostates
is multiplicative. This led Boltzmann to his famous hypothesis,

S = kB ln W . (4.3)

Do the Excs. 4.1.7.1 to 4.1.7.7.

4.1.2 Equilibrium in statistical thermodynamics


Evaluating Eq. (4.3) involves computation of large factorials, which is a challeng-
ing numerical task. Fortunately, large factorials can be very well approximated by
Stirling’s formula,
ln n! ≃ n ln n − n . (4.4)
With this formula we can simplify Eq. (4.3),
S = k_B ln( N!/Π_{j=1}^r n_j! ) ≃ k_B (N ln N − N) − k_B Σ_{j=1}^r (n_j ln n_j − n_j)     (4.5)
= k_B N ln N − k_B Σ_{j=1}^r n_j ln n_j = −k_B Σ_{j=1}^r n_j ln(n_j/N) .

This expression allows to compute the entropy of any macrostate of the system.
To find the equilibrium macrostate {n1 , n2 , .., nj , .., nr }eq in the atomistic descrip-
tion, we have to maximize the entropy (4.5). That is, we have to evaluate the to-
tal differential of entropy in the direction of changes {dn1 , dn2 , .., dnj , .., dnr } of the

macrostate under the constraint N = Σ_{j=1}^r n_j,
dS = Σ_{j=1}^r (∂S/∂n_j) dn_j + (∂S/∂N) dN = −k_B Σ_{j=1}^r [1 + ln(n_j/N)] dn_j + k_B Σ_{j=1}^r (n_j/N) dN ,     (4.6)
yielding,
dS = −k_B Σ_{j=1}^r ln(n_j/N) dn_j .     (4.7)

Application of the equilibrium criterion requires isolation from the environment, which sets constraints to the entropy evaluation in terms of particle and energy exchange,
N = Σ_{j=1}^r n_j     and     E = Σ_{j=1}^r ε_j n_j ,     (4.8)
or equivalently 2,
dN = Σ_{j=1}^r dn_j = 0     and     dE = Σ_{j=1}^r ε_j dn_j = 0 .     (4.9)

The maximum of the entropy function (4.5) under the constraints (4.9) can be
found using the technique of Lagrange multipliers, which consists in solving the equa-
tion
0 = dS − αk_B dN − βk_B dE = k_B Σ_{j=1}^r [ −ln(n_j/N) − α − βε_j ] dn_j     (4.10)

for arbitrary factors α and β. This implies,


n_j/N = e^{−α} e^{−βε_j} ,     (4.11)
for j = 1, 2, ..., r. The Lagrange multiplier α can readily be eliminated using the
normalization constraint (4.8)(i),
1 = Σ_{j=1}^r n_j/N = e^{−α} Σ_{j=1}^r e^{−βε_j} ,     (4.12)

leaving us with,
n_j/N = e^{−βε_j} / Σ_{j=1}^r e^{−βε_j} ,     (4.13)

where we used the so-called canonical partition function,


Ξ_cn ≡ Σ_{j=1}^r e^{−βε_j} = e^α .     (4.14)

2 Note that dε_j = 0 if the energy levels do not vary along a thermodynamic process, only their population with particles.

To determine the Lagrange multiplier β, we compare the expressions obtained for


the entropy variations in statistical and phenomenological thermodynamics. Solving
(4.10) for dS and substituting α taken from (4.14) we get,
dS = −k_B Σ_{j=1}^r ln(e^{−βε_j}/Ξ_cn) dn_j = k_B Σ_{j=1}^r (βε_j + ln Ξ_cn) dn_j = k_B β dE + k_B ln Ξ_cn dN .     (4.15)
And from (1.148) we get,

dS = (1/T) dE + (P/T) dV − (µ/T) dN ,     (4.16)
T T T
where µ is the chemical potential per atom and dV = 0, since we assumed in this
derivation, that every atom has access to the whole volume of the system. A compar-
ison of the expressions (4.15) and (4.16) then yields,

β = 1/(k_B T)     and     α = −βµ = ln Ξ_cn .     (4.17)

Substitution into (4.13) and (4.14) finally yields,

n_j/N = (1/Ξ_cn) e^{−ε_j/k_B T}     with     Ξ_cn = Σ_{j=1}^r e^{−ε_j/k_B T} .     (4.18)

This expression is known as Boltzmann distribution.

4.1.3 Thermodynamic potentials in canonical ensembles


We wish now to express all state functions of the system in terms of the partition
function (4.14). To this end we begin calculating the Helmholtz free energy using the
expressions for the total energy (4.8)(ii) and the entropy (4.5),
F = E − TS = Σ_{j=1}^r n_j ε_j + k_B T Σ_{j=1}^r n_j ln(e^{−βε_j}/Ξ_cn) = −k_B T ln Ξ_cn .     (4.19)

Hence, Ξ_cn = e^{−βF} and,
n_j/N = e^{β(F−ε_j)} ,     (4.20)
confirming the role of the free energy for the normalization of the canonical probability distribution 3.
The entropy function can now be expressed by the coefficient relation,
   
S = −(∂F/∂T)_V = k_B ln Ξ_cn + k_B T (∂ ln Ξ_cn/∂T)_V ,     (4.21)
3 We obtained the Boltzmann distribution from a microcanonical derivation, but since the Boltz-

mann distribution holds for any ensemble of classical particles, we can use it to derive the distribution
function for canonical ensembles.

the internal energy becomes,


 
E = F + TS = k_B T² (∂ ln Ξ_cn/∂T)_V ,     (4.22)

and the heat capacity,

C_V = (∂E/∂T)_V = 2k_B T (∂ ln Ξ_cn/∂T)_V + k_B T² (∂² ln Ξ_cn/∂T²)_V .     (4.23)

To compute the remaining thermodynamic potentials, V , H, G, and CP , we need


to generalize the partition function to include pressure dependence. This will be done
later in Sec. 4.5.2.
In summary, the state of thermodynamic equilibrium is characterized by the fact that the particles are distributed over the available energy levels according to the exponential function (4.11). Once the energy levels of a system are known, the partition function and all the thermodynamic potentials can be calculated. We will now study this algorithm with several examples.

4.1.4 Two-level systems


Let us consider a system consisting of only two allowed energy levels εj = 0, ε, that is,
we set the energy of the ground state to zero. This system is relevant for atomic systems in equilibrium with radiation fields driving electronic transitions between excitation levels. The Boltzmann partition function and the populations (4.18) then become,
Ξ_cn = Σ_{j=1}^r e^{−βε_j} = 1 + e^{−βε} ,     (4.24)
n_1/N = 1/Ξ_cn = 1/(1 + e^{−βε}) ,     n_2/N = e^{−βε}/Ξ_cn = e^{−βε}/(1 + e^{−βε}) .     (4.25)

In particular, the ratio between populations of consecutive levels is, n2 /n1 = e−βε .
At low temperature, kB T ≪ ε, the excited state population is negligibly small, while
at high temperature, kB T ≫ ε, both energy levels have almost the same population.
Do the Exc. 4.1.7.8.
With the partition function it is easy to evaluate the potentials,
F = −N k_B T ln Ξ_cn = −N k_B T ln(1 + e^{−βε})     (4.26)
S = N k_B ln Ξ_cn + N k_B T (∂ ln Ξ_cn/∂T)_V = N k_B ln(1 + e^{−βε}) + (Nε/T) e^{−βε}/(1 + e^{−βε})
E = N k_B T² (∂ ln Ξ_cn/∂T)_V = Nε e^{−βε}/(1 + e^{−βε})
C_V = 2N k_B T (∂ ln Ξ_cn/∂T)_V + N k_B T² (∂² ln Ξ_cn/∂T²)_V = [Nε²/(k_B T²)] e^{−βε}/(1 + e^{−βε})² .
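The formulas (4.24)-(4.26) are easily evaluated numerically; the following Python sketch (in units where k_B = 1 and ε = 1) returns the populations, energy and heat capacity at a given temperature:

import numpy as np
def two_level(T, eps=1.0, N=1.0):
    # populations, energy and heat capacity of the two-level system, Eqs. (4.24)-(4.26)
    Z = 1.0 + np.exp(-eps / T)                        # partition function (4.24)
    p2 = np.exp(-eps / T) / Z                         # excited-state population (4.25)
    E = N * eps * p2                                  # internal energy (4.26)
    CV = N * eps**2 / T**2 * np.exp(-eps / T) / Z**2  # heat capacity (4.26), k_B = 1
    return 1.0 - p2, p2, E, CV
for T in (0.1, 0.5, 1.0, 5.0):
    print(T, two_level(T))

The heat capacity exhibits a maximum at intermediate temperatures (Schottky anomaly) and vanishes in both the low- and high-temperature limits.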

4.1.5 Einstein-Debye model of solids


According to the equipartition theorem, every atom has three degrees of freedom
due to its translational motion. Describing a solid simply as a conjunction of N
atoms bound by a common potential, we expect the total energy and the specific heat
following the Dulong-Petit law,
 
E = 3N k_B T     resp.     C_V = (∂E/∂T)_V = 3N k_B ,     (4.27)

for all solids regardless of temperature.


It was observed, however, that the specific heat of solids decreases like CV ∝ T 3
as T approaches zero. Einstein proposed an alternative model treating the N atoms
as three-dimensional harmonic oscillators vibrating in a lattice. Indeed, many solids
are crystalline, which means that they arrange in a periodic structure, in the simplest
case a cubic lattice, where each atom has six neighbors arranged along Cartesian
coordinates, as illustrated in Fig. 4.2. The interatomic bonds are described by springs
storing energies like a quantized 3D harmonic oscillator,

ε_j = (j + 3/2) ℏω .     (4.28)

The normal-mode frequency ω is related to the spring constant of the atomic bond
and the atomic mass. The spectrum (4.28) completely defines the model.

Figure 4.2: Einstein’s model of a solid.

The partition function is,

Ξ_cn = Σ_{j=1}^r e^{−βε_j} = e^{−3βℏω/2} Σ_{j=1}^r e^{−βℏωj} ≃ e^{−3βℏω/2} Σ_{j=0}^∞ e^{−βℏωj} = e^{−3βℏω/2}/(1 − e^{−βℏω}) .     (4.29)

The discrete energies nℏω are identified with quasi-particles called phonons. The
quantum nature of atoms does not matter, they just provide the medium supporting
the phonons.

With the partition function it is easy to evaluate the potentials,
F = −N k_B T ln Ξ_cn = 3Nℏω/2 + 3N k_B T ln(1 − e^{−βℏω})     (4.30)
S = N k_B ln Ξ_cn + N k_B T (∂ ln Ξ_cn/∂T)_V = −3N k_B ln(1 − e^{−βℏω}) + (3Nℏω/T) 1/(e^{βℏω} − 1)
E = N k_B T² (∂ ln Ξ_cn/∂T)_V = (3Nℏω/2) (e^{βℏω} + 1)/(e^{βℏω} − 1)
C_V = 2N k_B T (∂ ln Ξ_cn/∂T)_V + N k_B T² (∂² ln Ξ_cn/∂T²)_V = 3N k_B [ℏω/(k_B T)]² e^{βℏω}/(e^{βℏω} − 1)² .

4.1.5.1 Debye model


In his model Einstein applied Planck’s law on the distribution of energy in electromag-
netic radiation, which treats radiation as a gas of photons, to the energy distribution
of atomic vibrations in a solid, treating them as a gas of phonons in a box (the box be-
ing the solid). Most of the steps of the calculation are identical, as both are examples
of a massless bosonic gas with linear dispersion relation.
Following the Bose-Einstein statistics, we must replace in (4.27),
k_B T −→ ℏω/(e^{ℏω/k_B T} − 1) ,     (4.31)
yielding,
E = 3Nℏω/(e^{βℏω} − 1)     resp.     C_V = 3N k_B [ℏω/(k_B T)]² e^{ℏω/k_B T}/(e^{ℏω/k_B T} − 1)² ,     (4.32)
in accordance with (4.30).
Still, the disappearance of the specific heat at low temperatures,
C_V ≃ [3N(ℏω)²/(k_B T²)] e^{−ℏω/(k_B T)} ,     (4.33)
which is related to the finite localization energy of harmonic oscillators, does not
describe experimental observations very well, and the model had to be refined by
Debye, later on.
While Einstein assumed monochromatic lattice vibrations, Debye’s approach was
to allow a spectrum of vibrational frequencies. With the density-of-states,

ρ(ν)dν = (4πV/v³) ν² dν ,     (4.34)
where v is the velocity of sound propagation, the formula is totally equivalent to the density-of-states for photons in a cavity. Assuming that there is an upper bound ν_m for the vibrational frequencies, we normalize as 3N = ∫_0^{ν_m} ρ(ν)dν. The energy now is 4,
E = ∫_0^{ν_m} [ℏω/(e^{ℏω/k_B T} − 1)] (4πV/v³) ν² dν = 9N k_B (T⁴/θ³) ∫_0^{θ/T} x³ dx/(e^x − 1) .     (4.35)
4 The fact that the electron gas also has a heat capacity is neglected.

The Debye temperature θ = hνm /kB is characteristic for the metal. The derivative is
then,
C_V = 9N k_B [ 4(T/θ)³ ∫_0^{θ/T} x³ dx/(e^x − 1) − (θ/T) 1/(e^{θ/T} − 1) ] .     (4.36)
At low temperatures this formula reproduces the Debye law,
C_V ≃ 9N k_B [ 4(T/θ)³ ∫_0^∞ x³ dx/(e^x − 1) − (θ/T) e^{−θ/T} ] = (12π⁴/5) N k_B (T/θ)³ .     (4.37)
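For comparison of the Einstein result (4.32) with the Debye result (4.36), both heat capacities can be evaluated numerically (illustrative Python sketch; temperatures in units of ℏω/k_B and θ, respectively):

import numpy as np
from scipy.integrate import quad
def cv_einstein(t):
    # C_V/(3 N k_B) of the Einstein model, Eq. (4.32), with t = k_B T/(hbar omega)
    x = 1.0 / t
    return x**2 * np.exp(x) / (np.exp(x) - 1.0)**2
def cv_debye(t):
    # C_V/(3 N k_B) of the Debye model, Eq. (4.36), with t = T/theta
    xm = 1.0 / t
    integral, _ = quad(lambda x: x**3 / np.expm1(x), 0.0, xm)
    return 3.0 * (4.0 * t**3 * integral - xm / np.expm1(xm))
for t in (0.1, 0.3, 1.0, 3.0):
    print(t, cv_einstein(t), cv_debye(t))

Both curves approach the Dulong-Petit value at high temperature, but at low temperature the Debye result falls off as t³ while the Einstein result drops exponentially.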

4.1.6 Maxwell-Boltzmann distribution of ideal gases


Here we consider a gas composed of identical monoatomic particles enclosed in a box of volume V = ∫_V d³r. The energy of every atom is just its kinetic energy associated with its flight through space,
ε = (m/2) v² = (m/2) v_x² + (m/2) v_y² + (m/2) v_z² .     (4.38)
Since the phase space of atomic motion is continuous, the partition function is now calculated as an integral,
Ξ_cn = ∫_{R³} ∫_{R³} e^{−βε} d³r d³v = V ∫_0^∞ 4πv² e^{−βmv²/2} dv = V (2πk_B T/m)^{3/2} .     (4.39)
We will see later how to generalize the procedure in the presence of an inhomogeneous
trapping potential U (r). Insertion of the kinetic energy (4.38) generates the well-
known Maxwell-Boltzmann distribution,

n(ε)/N = (1/Ξ_cn) e^{−βmv²/2} ,     (4.40)

which will be studied in Excs. 4.1.7.9 to 4.1.7.16.


The potentials are easily calculated,
F = −N k_B T ln Ξ_cn = −N k_B T [ ln V + (3/2) ln(2πk_B T/m) ]     (4.41)
S = N k_B ln Ξ_cn + N k_B T (∂ ln Ξ_cn/∂T)_V = N k_B [ ln V + 3/2 + (3/2) ln(2πk_B T/m) ]
E = N k_B T² (∂ ln Ξ_cn/∂T)_V = (3/2) N k_B T
C_V = 2N k_B T (∂ ln Ξ_cn/∂T)_V + N k_B T² (∂² ln Ξ_cn/∂T²)_V = (3/2) N k_B .
Furthermore,
P = −(∂F/∂V)_T = N k_B T/V .     (4.42)
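A short numerical consistency check of (4.39)-(4.41) in dimensionless units (m = k_B = 1): the speed distribution derived from (4.40) is normalized and yields the mean kinetic energy (3/2)k_B T (illustrative Python snippet):

import numpy as np
from scipy.integrate import quad
T = 2.0                                              # arbitrary test temperature
f = lambda v: 4*np.pi*v**2 * (2*np.pi*T)**-1.5 * np.exp(-v**2/(2*T))
norm, _ = quad(f, 0, np.inf)
Ekin, _ = quad(lambda v: 0.5*v**2 * f(v), 0, np.inf)
print(norm, Ekin, 1.5*T)                             # expect 1.0, 3.0, 3.0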

4.1.6.1 Inclusion of vibrational and rotational degrees of freedom


See [13], p.155.

4.1.7 Exercises
4.1.7.1 Ex: Probabilities
In a game, 5 ideal dice are rolled.
a. What is the probability that exactly two of these dice show the number one?
b. What is the probability that at least one die shows the number one?

4.1.7.2 Ex: Probabilities


a. With what probability do exactly five out of 1000 random numbers between 1 and 100 have the value 50?
b. With what probability do two out of 100 people have their birthday on January 1st?

4.1.7.3 Ex: Probabilities


What is the probability that you inhale at least one molecule that Julius Caesar
exhaled during his last breath (Tu quoque, Brute, fili mi!)? Assume a breathing
volume of 1 liter and an atmosphere height of approximately h = 10 km. Assume the
density of the atmosphere is approximately homogeneous.

4.1.7.4 Ex: Idiots roulette


A Bavarian, a Swabian and an East Frisian play Russian roulette together, each
according to their own rules. The Bavarian inserts two cartridges into the drum of a
six-shot revolver, sets the drum in a rapid rotation, aims at his own head and pulls
the trigger once. The Swabian puts a cartridge in the revolver and pulls the trigger
twice, the East Frisian puts a cartridge in the revolver, pulls the trigger once, turns
the drum a second time and pulls the trigger again. What is the chance of survival
of the three crazy people?

4.1.7.5 Ex: Students roulette


A student writes a multiple choice test in physics. It consists of 18 tasks. For each
task, only one of the four proposed solutions is correct. Since he does not understand
much about the topic, he trusts his luck and checks the possible solutions by chance.
What is the probability that the student meets the minimum requirement of 8 correct
answers?

4.1.7.6 Ex: Slot machine


A slot machine consists of three concentric rings. Each ring is evenly divided into
10 sections and the sections in each ring are labeled with letters from ’a’ to ’j’. By
pressing the start button, the three rings start to rotate independently. If the lock
button is pressed, the rings brake independently of one another and three letters
appear side by side in the viewing window. With three ’a’ you win, with two ’a’ there
4.1. MICROSTATES, MACROSTATES, AND ENTROPY 133

is a free spin.
a. Calculate the probability for one free spin per game.
b. What is the probability of getting exactly 3 free spins in 10 games?
c. What is the probability of winning at least once in 10 games?

4.1.7.7 Ex: Binomial distribution


Two drunks stagger on the x-axis. Starting from the origin, they take a step to the
right or to the left with the same probability. The steps take place synchronously,
and the steps of both people are the same and constant. Determine the probability
that they will meet again after N steps.

4.1.7.8 Ex: Simple model for a solid


Consider a system of N atomic particles at a temperature T . The individual atoms
can only be in one of two states. Either in state |0⟩ at the energy ε0 = 0 or in state
|1⟩ at energy ε1 = ε. Apart from this energy εi the atoms have no kinetic or other
energies.
a. Choosing the Boltzmann distribution, determine the population ni , that is, the
probability that a certain atom is in state |i⟩. How should the normalization be
chosen?
b. Determine the statistical mean ε̄ for the energy of one atom. Which value results
for kB T = ε? What is the expression for the total energy E of N atoms?
c. Calculate the population n1 (Tj ) to find a certain atom at the energy ε for four
different temperatures: kB Tj = 0.1 × jε for j = 1, 2, 3, 4. Also calculate the energy
per atom E(Tj )/N of the entire system at these temperatures.
d. Find an expression for the heat capacity C of this N -atom system. Note: For this
system, the total energy is identical to the thermal energy.
e. Calculate the heat capacities Cj especially for the temperatures Tj from subtask
(c). What does the result have to do with ’freezing degrees of freedom’ ?

4.1.7.9 Ex: Velocity distribution


The Maxwellian velocity distribution or Boltzmann distribution of a one-dimensional
ideal gas of identical particles of mass m at temperature T is,
r
m 2
f (v)dv = e−mv /2kB T dv .
2πkB T

This gives the average kinetic energy for each molecule of ⟨Ekin ⟩ = 21 kB T . According
to the equipartition theorem, Maxwell’s velocity distribution of a three-dimensional
gas is given by f (vx )dvx f (vy )dvy f (vz )dvz .
a. Write down the velocity distribution explicitly and determine the average kinetic
energy of a molecule in the three-dimensional gas at temperature T .
b. Determine the average absolute velocity ⟨v⟩ = ⟨|v|⟩ and compare ⟨v⟩² with ⟨v²⟩ for
the three-dimensional case.
c. What is the number of particles F (v)dv with an absolute velocity v = |v| in the
range v and v + dv.
134 CHAPTER 4. STATISTICAL THERMODYNAMICS

d. Consider a gas made of rubidium atoms (m = 87u) and sketch F (v) for tempera-
tures between 100 K and 300 K.
e. Consider the rubidium gas at room temperature (T = 300 K). What is the propor-
tion of molecules whose average velocity ⟨v⟩ is greater than 1000 m/s?

4.1.7.10 Ex: Maxwell-Boltzmann distribution


Calculate the number of particles in an ideal homogeneous gas having velocities slower
than 2vrms .

4.1.7.11 Ex: Maxwell-Boltzmann distribution


Using the Maxwell-Boltzmann distribution f(v) and the following formulas, calculate the velocities v̄ ≡ ∫_0^∞ v f(v) v² dv and v_rms ≡ √⟨v²⟩:
∫_0^∞ x^n e^{−x²} dx = (1/2) Γ((n+1)/2) = { (2k−1)!! √π / 2^{k+1}   for n = 2k ;   k!/2   for n = 2k+1 } .

4.1.7.12 Ex: Mean velocity in a gas


The average velocity of the molecule in an ideal gas is 500 m/s. If the gas maintains
the same temperature and the molecular masses are doubled, what will be the new
average velocity?

4.1.7.13 Ex: Evaporation


a. A three-dimensional homogeneous gas consisting of N = 108 rubidium atoms (mass
m = 87u) has the temperature T = 100 µK. How many atoms are faster on average
than v1 = 10 cm/s?
b. Now suppose that all atoms with a velocity v > v1 were suddenly removed. After
some time, a new thermal equilibrium is established due to collisions. What is the
temperature of the gas now?

4.1.7.14 Ex: Trapped gases


The density distribution of a rubidium gas in a three-dimensional harmonic potential
can be expressed by,
n(r)d3 r = n0 e−U (r)/kB T d3 r ,
where U(r) = (m/2)ω²r². Numerical values: m = 87u and ω = 2π · 50 Hz.
a. Determine the expansion of the gas (1/√e full width of the distribution) at a given temperature T = 100 µK.
b. Determine the maximum density n_0 of the gas when N = ∫ n(r)d³r = 10⁸ is the
total number of atoms.


c. The effective volume is defined by Veff = N/n0 . How many atoms are in the effective
volume?

4.1.7.15 Ex: Trapped gases


Calculate the internal energy and heat capacity of an ideal gas stored in a harmonic
trap and compare the result with a free gas.

4.1.7.16 Ex: Trapped gases


An ultracold gas made of 108 rubidium atoms (mass number 87) is trapped in a
three-dimensional potential of the form U(r) = (m/2)ω²r² with the oscillation frequencies ω/2π = 100 Hz.
a. Assume the spatial distribution function for the atoms to be n(r) = n_0 e^{−U(r)/k_B T}. What is its width at 1/√e of the maximum height? How does the width of the distribution function change when the number of atoms is doubled?
b. The trap potential is suddenly switched off. The atoms are robbed of their potential energy, while their kinetic energy leads to the ballistic expansion of the cloud. 20 ms after switching off the trapping potential, a 1/√e width of r̄_a = 0.2 mm is experimentally measured for the distribution of the expanded atomic cloud. What was
perimentally measured for the distribution of the expanded atomic cloud. What was
the temperature of the atomic cloud in the trap?
Help: Assume that the final size of the atomic cloud is much larger than the size of
the trap. Neglect collisions between the atoms.

4.2 Quantum statistics


Considering a closed isolated system in a fixed volume (N V E-ensemble where E, N, V =
const) we have derived in Sec. 4.1.1 the partition function for microcanonical ensem-
bles, from which we obtained in Sec. 4.1.2 the Boltzmann distribution function.
The combinatorial derivation of the number of microstates contributing to the
same macrostate (4.1) was based on the observation, that all particles constitut-
ing the system were identical, but distinguishable. The expression (4.1) is just the
multinomial coefficient, i.e. the number of ways of arranging N items into r boxes,
the j-th box holding nj items, ignoring the permutation of items in each box. The
problem, however, is that quantum mechanics postulates that identical particles are
indistinguishable, and this has an impact on the numbers of states available upon
permutation. Consequently, the partition function (4.1) needs to be corrected.
The problem ultimately results from the fact that phase space is quantized. If this weren't the case, the cells' size could be chosen so small that they admit at most one particle. Then quantum statistics would not apply, and the system would be classical.

4.2.1 Wavefunction symmetrization and detailed balance


We learn in quantum mechanics, that (anti-)symmetrization of the total wavefunction
of a multiparticle system leads to Bose-enhancement (Pauli blocking). Consider a
product state of two particles 1 and 2, Ψo = ψα (1)ψβ (2), and symmetrize it to
Ψs,a = √1 [ψα (1)ψβ (2) ± ψα (2)ψβ (1)] . (4.43)
2

Now assume that the single particle wavefunctions do completely overlap,


α=β =⇒ |Ψo,s,a |2 = (s + 1) |ψα (1)|2 |ψβ (2)|2 , (4.44)
136 CHAPTER 4. STATISTICAL THERMODYNAMICS

where s = 0 for Boltzmann particles (called boltzons here for simplicity), s = 1


for bosons, and s = −1 for fermions. Generalized to arbitrary numbers of par-
ticles we state: If n bosons (fermions) are in state Ψ, the probability for another
bosons (fermions) to joint this state is 1 + sn times the probability without (anti-
)symmetrization.

An intuitive derivation of the quantum statistics distribution function is based on


the postulate of detailed balance. Let us consider the most fundamental process in
physics, which is the collision between two particles initially in states 1 and 4 ending
up in two other states 2 and 3 [see Fig. 4.3(a)]. All four states j are initially occupied
with populations nj . The detailed balance postulate claims that equality of the rates
R14→23 for two particles to change their states and the rate for the inverse process
R23→14 is a sufficient condition for thermal equilibrium. Using the bosonic enhance-
ment (fermionic suppression) factor derived above, the postulate can be formulated,

R14→23 = |M14,23 |2 n1 n4 (1 + sn2 )(1 + sn3 ) = (4.45)


2
R23→14 = |M14,23 | n2 n3 (1 + sn1 )(1 + sn4 ) ,

where M14,23 is the matrix element of the collision process. Hence,


n_1/(1 + sn_1) · n_4/(1 + sn_4) = n_2/(1 + sn_2) · n_3/(1 + sn_3) .     (4.46)
Energy conservation requires,

ε1 + ε4 = ε2 + ε3 . (4.47)

In a canonical ensemble in thermal equilibrium the population distribution among


the levels must be a unique function of their energies,

nj = f (εj ) . (4.48)

To satisfy Eqs. (4.46) and (4.47) f must have the functional form,

f(ε_j) = 1/(C e^{βε_j} − s) ,     (4.49)

where C is an arbitrary constant introduced to satisfy some normalization constraints.


This can be verified easily by plugging the formula (4.49) into the Eq. (4.46).
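For completeness, the verification runs as follows (a short step not spelled out in the text): inserting n_j = 1/(C e^{βε_j} − s) gives n_j/(1 + s n_j) = [1/(C e^{βε_j} − s)] / [C e^{βε_j}/(C e^{βε_j} − s)] = e^{−βε_j}/C, so that the left- and right-hand sides of (4.46) become C^{−2} e^{−β(ε_1+ε_4)} and C^{−2} e^{−β(ε_2+ε_3)}, which are equal by energy conservation (4.47).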

Figure 4.3: (a) Detailed balance entails thermal equilibrium. (b) Subdivision of energy levels
j in subboxes gj . Red circles are fermions and green particles bosons.

4.2.2 Microcanonical ensembles of indistinguishable particles


4.2.2.1 Boltzons
In order to hold for indistinguishable particles, the partition function (4.1) must be
generalized allowing for the possibility that there is more than one way to put nj
particles into the box j. If the j-th box has a ’degeneracy’, that is, it has gj ’sub-
boxes’ with the same energy εj , such that any way of filling the j-th box where the
number in the sub-boxes is changed is a distinct way of filling the box, then in order to
get the right number of macrostates, the number of ways of filling the j-th box must
be increased by the number of ways of distributing the nj objects in the gj sub-boxes.
The number of ways of placing n_j distinguishable objects in g_j sub-boxes is g_j^{n_j}, since
any particle can go into any of the gj boxes. Thus the number of ways W{nj } that
a total of N particles can be classified into energy levels according to their energies,
while each level j having gj distinct states such that the j-th level accommodates nj
particles is,
W_{n_j} = N! Π_{j=1}^r g_j^{n_j}/n_j! .     (4.50)

In analogy to the procedure outlined in Sec. 4.1.2 we derive the Boltzmann distribution by first taking the logarithm of (4.50) and then simplifying it using Stirling's formula (3.1),
ln W = ln N! + Σ_j [n_j ln g_j − ln n_j!] ≃ ln N! + Σ_j [n_j ln(g_j/n_j) + n_j] ,     (4.51)

then calculating the differential,


d ln W = Σ_j (∂ ln W/∂n_j) dn_j = Σ_j ln(g_j/n_j) dn_j ,     (4.52)

introducing Lagrange multipliers α and β and minimizing the functional,


f({n_j}) ≡ ln W + α(N − Σ_j n_j) + β(E − Σ_j ε_j n_j) ,     (4.53)

relating the condition,
0 = df({n_j}) = d ln W − α Σ_j dn_j − β Σ_j ε_j dn_j = Σ_j [ ln(g_j/n_j) − α − βε_j ] dn_j     (4.54)

via the Boltzmann hypothesis (4.16) to entropy,


dS = (1/T) dE + (P/T) dV − (µ/T) dN = k_B β dE + k_B α dN = k_B d ln W ,     (4.55)
we identify the Lagrange multipliers,
β = 1/(k_B T)     and     α = −µ/(k_B T) ,     (4.56)
and finally obtain the Boltzmann distribution by setting the parenthesis in (4.54) to
zero,
n_j = g_j/e^{β(ε_j−µ)} .     (4.57)

4.2.2.2 Bosons
Boltzmann’s fundamental equation (4.3) relates the thermodynamic entropy S to
the logarithm of the number of microstates W{nj } . It was pointed out by Gibbs
however, that the above expression (4.51) does not yield an extensive entropy, and is
therefore faulty 5 This problem is known as the Gibbs paradox. The problem is that
the particles considered by the above equation are not indistinguishable. In other
words, for two particles (i and j) in two energy sublevels the population represented
by [i, j] is considered distinct from the population [j, i], while for indistinguishable
particles, they are not. Indeed, bosons have symmetric wavefunctions, fermions have antisymmetric ones. Boltzons have all wavefunctions as eigenfunctions. In the limit
of high temperatures all particles behave like boltzons. Discretize the one-particle
energies in small cells labeled j of constant energy εj . Let nj be their number and gj
their degeneracy [see Fig. 4.3(b)].

Figure 4.4: Distribution of n_j bosons (green) and fermions (red) over g_j boxes, the number of possibilities being, respectively, (n_j+g_j−1 choose n_j) and (g_j choose n_j).

For bosons, each level gj can hold arbitrarily many of the nj particles. If we carry
out the argument for indistinguishable particles, we are led to the expression for the
partition function for bosons 6 ,

W_{n_j} = Π_{j=1}^r ( n_j + g_j − 1 choose n_j ) .     (4.58)

5 This can be seen as follows: Consider two identical systems, r′ = r and g′_{j′} = g_j, with atom numbers N = Σ_j n_j and N′ = Σ_j n′_j. The partition function for boltzons is not multiplicative,
(N + N′)! Π_{j=1}^r (g_j^{n_j}/n_j!)(g_j^{n′_j}/n′_j!) ≠ N! Π_{j=1}^r g_j^{n_j}/n_j! × N′! Π_{j=1}^r g_j^{n′_j}/n′_j! ,
while for fermions it is. To see this we set n′_{j′} ≡ n_{r+j} for j′ = r + j and j = 1, ..., r. Then,
Π_{j=1}^{2r} (g_j choose n_j) = Π_{j=1}^r (g_j choose n_j) × Π_{j=1}^r (g_j choose n′_j) .
The same argument holds for bosons. A critical discussion of the above statements can be read in [33].
6 Note that this partition function converges toward the one for boltzons for g_j ≫ n_j ≫ 1, which can be seen by simplifying it using the Stirling formula.

Analogously to (4.51) we calculate the logarithm using Stirling’s formula,


ln W = Σ_j [ ln(n_j + g_j − 1)! − ln n_j! − ln(g_j − 1)! ]     (4.59)
≃ Σ_j [ n_j ln((g_j − 1 + n_j)/n_j) + (g_j − 1) ln((g_j − 1 + n_j)/(g_j − 1)) ] ,

the differential,
d ln W = Σ_j (∂ ln W/∂n_j) dn_j = Σ_j ln((n_j + g_j − 1)/n_j) dn_j ,     (4.60)

and obtain the condition,


0 = df({n_j}) = d ln W − α Σ_j dn_j − β Σ_j ε_j dn_j     (4.61)
= Σ_j [ ln((n_j + g_j − 1)/n_j) − α − βε_j ] dn_j

with the same Lagrange multipliers. This yields the Bose-Einstein distribution,
n_j = (g_j − 1)/(e^{β(ε_j−µ)} − 1) .     (4.62)
The Boltzmann distribution follows from this Bose-Einstein distribution for temper-
atures well above absolute zero, implying that gj ≫ 1. The Boltzmann distribution
also requires low density, implying that gj ≫ nj . Under these conditions, we may use
Stirling’s approximation (3.1) for the factorial: N ! ≈ N N e−N .

4.2.2.3 Fermions
For fermions, each level gj can hold at most one of the nj particles, which implies
that necessarily gj > nj [see Fig. 4.3(b)]. Let us consider a single energy level j. The
first of the nj particles has the choice between gj boxes. Since no box can be filled
with more than one particle, the second particle has only gj − 1 boxes at its disposal,
and so on until all particles have been assigned. This corresponds to gj !/nj ! possible
choices. However, we still need to respect the indistinguishability requirement. The
overcounting can be removed by dividing by (gj −nj )!. The procedure is now repeated
with all energy levels j, which leads to the partition function for fermions,
W_{n_j} = Π_{j=1}^r ( g_j choose n_j ) .     (4.63)

Again we calculate the logarithm using Stirling’s formula,


ln W = Σ_j [ ln g_j! − ln n_j! − ln(g_j − n_j)! ]     (4.64)
≃ Σ_j [ n_j ln((g_j − n_j)/n_j) − g_j ln((g_j − n_j)/g_j) ] ,

the differential,
d ln W = Σ_j (∂ ln W/∂n_j) dn_j = Σ_j ln((g_j − n_j)/n_j) dn_j .     (4.65)

and obtain the condition,


0 = df({n_j}) = d ln W − α Σ_j dn_j − β Σ_j ε_j dn_j     (4.66)
= Σ_{j=1}^r [ ln((g_j − n_j)/n_j) − α − βε_j ] dn_j ,

with the same Lagrange multipliers. This yields the Fermi-Dirac distribution for
gj ≫ 1,
n_j = g_j/(e^{β(ε_j−µ)} + 1) .     (4.67)
Do the Exc. 4.2.6.1.

4.2.2.4 Thermodynamic potentials for bosons and fermions


Using the abbreviation s = +1 for bosons, s = −1 for fermions, and s = 0 for boltzons
the distribution function can be expressed as,
n_j = g_j/(e^{β(ε_j−µ)} − s) .     (4.68)

The chemical potential µ is fixed by the boundary conditions,


N = Σ_{j=1}^r n_j     and     E = Σ_{j=1}^r ε_j n_j ,     (4.69)

With this, knowing the energy spectrum εi and the distribution of states gj of
the system, we are able to calculate all thermodynamic potentials. E.g. the entropy
reads,
S = k_B ln W_{n_j} = k_B Σ_j [ n_j ln(s + g_j/n_j) + s g_j ln(1 + s n_j/g_j) ]     (4.70)
= k_B Σ_j [ g_j β(ε_j − µ)/(e^{β(ε_j−µ)} − s) − s g_j ln(1 − s e^{−β(ε_j−µ)}) ] .

The Bose-Einstein and the Fermi-Dirac distribution both have many applications
in quantum mechanics, e.g. for the explanation of the blackbody radiation, the heat
capacity of metals, the laser, the Bose-Einstein condensation, and much more. In fact,
these distributions must be used whenever quantum statistical effects are important.
Prominent examples of systems where a quantum statistical treatment is crucial are
electrons in metals and ultracold quantum gases. We will discuss the latter in Secs. 4.3
and 4.4.

Figure 4.5: (code) Quantum statistical weight (4.49) for fermions (red dash-dotted line), bosons (green dashed line), and boltzons (black solid line). (a) Weight n_j as a function of level energy ε_j for two different temperatures (solid and dash-dotted lines). (b) Weight n_j as a function of temperature for various level energies ε_j (solid, dash-dotted, and dashed lines).
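The weights plotted in Fig. 4.5 follow directly from (4.49); a minimal Python sketch (energies in units of µ, k_B = 1, and C = e^{−βµ}) is:

import numpy as np
def occupation(eps, T, mu=1.0, s=0):
    # n(eps) = 1/(e^{(eps-mu)/T} - s); s = +1, -1, 0 selects Bose, Fermi, Boltzmann
    return 1.0 / (np.exp((eps - mu) / T) - s)
eps = np.linspace(1.05, 2.0, 5)        # stay above eps = mu, where the Bose weight diverges
for s, label in [(+1, 'bosons'), (-1, 'fermions'), (0, 'boltzons')]:
    print(label, occupation(eps, T=0.3, s=s))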

4.2.3 Density-of-states in a trapping potential


An important boundary condition for the discussion of the quantum statistics of
gases is that the atoms are often confined in trapping potentials. Suspended in space
far from massive walls, they escape the perturbative influence of the environment.
This however implies, that the system becomes inhomogeneous, which means that
the number of states available to the atoms varies in space. In order to prepare
subsequent evaluations of thermodynamic potentials, let us first characterize this
spatial dependence by introducing the concept of the density-of-states.
In three dimensions the Hamiltonian of a trapped atom is,
Ĥ = −(ℏ²/2m) ∇² + U(r) .     (4.71)
As the wavefunction is localized, the spectrum of possible energies organizes into
discrete levels, and the atoms are allocated in populations of these levels. Such mul-
tidimensional systems are often degenerate, which means that the same total energy
can be realized with different sets of quantum numbers 7 . The way an atomic cloud
accommodates itself inside a trapping potential is governed by the density of available
states. We now introduce the density-of-states η(ϵ) for an arbitrary potential via,

η(ε) dε ≡ (1/(2π)³) ∫∫ d³r d³k = [(2m)^{3/2}/((2π)² ℏ³)] ∫ d³r dε √(ε − U(r)) ,     (4.72)
with the substitution k = √((2m/ℏ²)[ε − U(r)]).
As an example, let us consider a box potential of volume V . In this case, the
7 This can be checked easily with separable potentials, such as the rectangular 3D box potential or

the 3D harmonic oscillator, where the same energy E = Ex + Ey + Ez can be reached with different
combinations of Ex , Ey , and Ez .

expression (4.72) simply yields,
η(ε) = [(2m)^{3/2}/((2π)² ℏ³)] ∫_V d³r √ε = [(2m)^{3/2}/((2π)² ℏ³)] V √ε     (box potential) .     (4.73)

In the following we derive the density-of-states for the case of an harmonic oscillator
potential. More general potentials are discussed in the Excs. 4.2.6.2 and 4.2.6.3.

Figure 4.6: Artists’s view of phase space cells in a trapping potential in two dimensions.

Example 31 (Density-of-states for a cylindrical harmonic oscillator


potential ): Let us consider a cylindrical harmonic oscillator,
U(r) = (m/2) ω_r² r² + (m/2) ω_z² z²     where     r² = x² + y² ,     (4.74)
which can also be given in the form,
U(r) = (m/2) ω_r² ρ²     where     ρ² = x² + y² + λ²z²     with     λ = ω_z/ω_r .     (4.75)
We also define the mean oscillation frequency,

ω̄ = (ωr2 ωz )1/3 = λ1/3 ωr . (4.76)

The single-particle levels of this Hamiltonian are,

εnx ny nz = ℏωx nx + ℏωy ny + ℏωz nz , (4.77)

where the coefficients nj with j = x, y, z are integer numbers. For the cylindrical
harmonic trap defined in (4.75), we find with a little help from Dr. Bronstein [6],
η(ε) = [(2m)^{3/2}/((2π)² ℏ³)] ∫ d³r √(ε − (m/2)ω_r² ρ²)     (4.78)
= (1/(2π)²) (8ε²/(ℏω̄)³) ∫_{−1}^{+1} dx̃ ∫_{−√(1−x̃²)}^{+√(1−x̃²)} dỹ ∫_{−√(1−x̃²−ỹ²)}^{+√(1−x̃²−ỹ²)} dz̃ √(1 − x̃² − ỹ² − z̃²) .

The resolution of the integral gives,

η(ε) = ε²/(2(ℏω̄)³)     (harmonic potential) .     (4.79)
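Equation (4.79) can be cross-checked by directly counting oscillator states (a sketch in units ℏ = ω̄ = 1, neglecting the zero-point offset): the number of states below ε should approach the integrated density-of-states ∫_0^ε ε′²/2 dε′ = ε³/6.

import numpy as np
def states_below(eps, nmax=60):
    # count states with n_x + n_y + n_z < eps for the isotropic harmonic oscillator
    n = np.arange(nmax)
    nx, ny, nz = np.meshgrid(n, n, n, indexing='ij')
    return np.count_nonzero(nx + ny + nz < eps)
for eps in (10, 20, 40):
    print(eps, states_below(eps), eps**3 / 6)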

(a) (b)
100 100
(MHz)
50 50

0 2 0 2
U

2 0 1 0
0 -2 -2 0 -1 -2 x (mm)
y (mm) x (mm)
y (mm)
(c) 40 (d) 2
(MHz)

(108 )
30
20 1

η(ε)h̄ω
10
U

0
-1 0 1 0 50 100
x (mm) ε/kB (μK)

Figure 4.7: (code) (a) The figure shows two dimensions of a Ioffe-Pritchard type magnetic
trapping potential (characterized by being approximately linear at large distances from the
center and harmonic near the center). (b) Harmonic approximation (most experimentally
feasible potentials are approximately harmonic near the center). (c) One-dimensional cut
through the potential of (a,b). (d) Density-of-states for a harmonic (dotted line) and a
Ioffe-Pritchard type potential (solid line).

4.2.3.1 Application to the microcanonical partition function


Let us now come back to the distribution functions for ideal quantum gases introduced
in Sec. 4.2.2. In the thermodynamic limit, N → ∞, the distribution of states is
assumed so dense, that it can be expressed by a continuous density,
ε_j −→ ε = ε_{r,p}
g_j −→ η(ε)
1/(e^{β(ε_j−µ)} − s) −→ 1/(e^{β(ε−µ)} − s) ≡ w_{T,µ}(ε)
n_j = g_j/(e^{β(ε_j−µ)} − s) −→ η(ε) w_{T,µ}(ε)     (4.80)
Σ_j g_j −→ ∫ η(ε) dε = (1/(2π)³) ∫∫ d³r d³k
N = Σ_j n_j −→ ∫ η(ε) w_{T,µ}(ε) dε
E = Σ_j ε_j n_j −→ ∫ ε η(ε) w_{T,µ}(ε) dε
where s = 0 stands for the 'Boltzmann', s = +1 for the 'Bose-Einstein', and s = −1 for the 'Fermi-Dirac' distributions derived in (4.57), (4.62), and (4.67). We also introduced the symbol w_{T,µ} to denote the statistical distribution function,
w_{T,µ}(r, p) d³r d³p = η(ε) w_{T,µ}(ε) dε .     (4.81)
In the following sections we will calculate all system variables based on the expressions (4.80) in the thermodynamic limit.
144 CHAPTER 4. STATISTICAL THERMODYNAMICS

4.2.4 Grand canonical ensembles of ideal quantum gases


Let us now derive the statistics for physical conditions satisfied by a grand canonical
ensemble, which is a good model for many systems in which the particle number is
not conserved. A deeper discussion of the relation to the canonical ensemble and the
role of the chemical potential will be provided in the last part of this section.
Supposing that the particles of a system do not interact, it is possible to compute
a series of single-particle stationary states, each of which represents a separable part
that can be included into the total quantum state of the system. Let us call these
single-particle stationary states ’orbitals’ in order to avoid confusion with the total
many-body state. Every orbital has a distinct set of quantum numbers and may be
occupied by several particles or be empty. In this sense, each orbital forms a separate
grand canonical ensemble by itself, one so simple that its statistics can be immediately
derived. Focusing on just one orbital labeled m, the total energy for a microstate of
N particles in this orbital will be E = N εm , where εm is the characteristic energy
level of that orbital. The grand potential for the orbital is given by 8 ,
X
Ω = −kB T ln eβ(µN −E) , (4.82)
microstates

which is required for the microstates’ probabilities to add up to 1, as already stated


in (1.172).

In quantum mechanics the orbitals are understood as the eigenstates |ψm ⟩ of a


single-particle Hamiltonian,

p̂2m
ĥm = + Vtrap (r̂m ) , (4.83)
2m

with m = 1, . . . , N , whose spectrum is εm = ⟨ψm |ĥm |ψm ⟩. That is, every single
particle is completely characterized by the quantum number m 9 . A microstate |Ψk ⟩
is now identified as an eigenstate of the total many-particle Hamiltonian,
N
X N
Y
Ĥ = ĥm with |Ψk ⟩ = |ψm ⟩k . (4.84)
m=1 m=1

The request that the particles do not interact makes the system separable. The
density operator and the grand canonical partition function are [10],

e−β(Ĥ−µN̂ )
ρ̂ = and Ξgc = e−βΩ = Tr e−β(Ĥ−µN̂ ) , (4.85)
Ξgc

obviously satisfying Tr ρ̂ = 1. For the grand canonical ensemble the basis states of
the total Hamiltonian Ĥ are all microstates composed of many particles, and the
operators N̂ and ρ̂ can be expressed in the same basis.
8 In case of multi-species ensembles, the potentials add up like µ1 N1 + µ2 N2 + ....
9 In practice, a set of several quantum numbers may be required.
4.2. QUANTUM STATISTICS 145

We now migrate from the single-particle product state basis {|Ψk ⟩} to a Fock state
basis assigning a given number of particles nj to every possible energy level εj , where
j = 1, . . . , ∞, as illustrated in Fig. 4.8,
|Ψk ⟩ −→ |n1 , n2 , . . . , nj , . . .⟩ . (4.86)
I.e. we replace the distribution of microstates by a distribution of populations {nj }
among the energy levels. Since the energy and particle numbers are separately con-
served, the corresponding operators commute,
[Ĥ, N̂ ] = 0 , (4.87)
and therefore it is possible to find a complete basis of simultaneous eigenstates,
Ĥ| . . . nj . . .⟩ = E| . . . nj . . .⟩ with N̂ | . . . nj . . .⟩ = N | . . . nj . . .⟩ (4.88)
with,

X ∞
X
E= εj nj and N= nj . (4.89)
j=0 j=0

This means that the number of particles is a conserved quantity and that Ĥ and N̂
can be simultaneously diagonalized.
We can now evaluate the partition function (4.85),
X
Ξgc = ⟨Ψk |e−β(Ĥ−µN̂ ) |Ψk ⟩ (4.90)
k∈{microstates}
X X
= ⟨. . . nj . . . |e−β(Ĥ−µN̂ ) | . . . nj . . .⟩ = e−β(E−µN ) .
{nj } {nj }

The density operator in this new basis is,


X | . . . nj . . .⟩e−β(E−µN ) ⟨. . . nj . . . |
ρ̂ = P −β(E−µN )
. (4.91)
{nj } {nj } e

Figure 4.8: (a) Ensemble of N particles with different positions and velocities. (b) Distri-
bution of the particles over the spectrum of allowed energies.

Using the conditions (4.89), the partition function becomes,


X
Ξgc = ⟨. . . nj . . . |e−β(Ĥ−µN̂ ) | . . . nj . . .⟩ (4.92)
n1 ,n2 ,...
X X ∞
Y
= ⟨n1 |e−β(n1 ĥ1 −n1 µ) |n1 ⟩ ⟨n2 |e−β(n2 ĥ2 −n2 µ) |n2 ⟩ × . . . ≡ Ξj ,
n1 n2 j=1
146 CHAPTER 4. STATISTICAL THERMODYNAMICS

where in the last step we defined a partial partition sum,


X
Ξj ≡ e−β(εj nj −µnj ) , (4.93)
nj

accounting for all possible populations of a particular energy level εj . Analogously,


the density operator becomes,

e−β(Ĥ−µN̂ )
P 
1 −β {nj } (εj −µ)n̂j
ρ̂ = = e (4.94)
Ξgc Ξgc

1 −β(ε1 −µ)n̂1 −β(ε2 −µ)n̂2 Y
= e e × ... = ρ̂j .
Ξgc j=1
P
Note, that breaking down the exponential of a sum of operators, e− n̂j , into a prod-
Q −n̂
uct of exponentials of that operators, e j , is only possible because the operators
commute, [n̂k , n̂j ] = 0. In the last step we defined,

e−β(εj −µ)n̂j e−β(εj −µ)


ρ̂j ≡ = |nj ⟩ ⟨nj | . (4.95)
Ξj Ξj

The problem with this expression is, that the global wavefunction |Ψ⟩ has not
yet been (anti-)symmetrized according the particles’ bosonic or fermionic nature.
For bosons, nj may be any non-negative integer and each value of nj counts as
one microstate due to the indistinguishability of particles. For fermions, the Pauli
exclusion principle allows only two microstates for the orbital (occupation of 0 or 1),
giving a two-term series 10 ,
 P∞ −β(nj εj −nj µ) 1
 nj =0 e =
1−e−β(εj −µ)
for bosons
Ξj = (4.96)
 P1 −β(nj εj −nj µ) −β(εj −µ)
nj =0 e = 1 + e for fermions

Hence,

Y
Ξgc = (1 − se−β(εj −µ) )−s , (4.97)
j=1

where s = 1 for bosons and s = −1 the lower for fermions.


The grand canonical potential per microstate becomes,

Ωj = −kB T ln Ξj = skB T ln(1 − se−β(εj −µ) ) . (4.98)


10 Here, we introduce the statistics of indistinguishable particles ad hoc. The same result is obtained

automatically introducing field operators satisfying bosonic or fermionic commutation rules. Indeed,
we can rewrite the Hamiltonian and the number operator of any non-interacting system like [49],

εj â†j âj
X X †
Ĥ = and N̂ = âj âj ,
{nj } {nj }

where â†j
and âj are the particle creation and annihilation operators introduced in the occupation
number representation.
4.2. QUANTUM STATISTICS 147

Considering again the entire system, the total Landau grand potential is found by
adding up the Ωj for all orbitals,


X
Ω= Ωj . (4.99)
j=1

11
In any case the value

∂Ωj 1
nj = − = β(ε −µ) ≡ wT,µ (εj ) (4.100)
∂µ e j −s

gives the thermodynamic average number of particles on the orbital: the Fermi-Dirac
distribution for fermions, and the Bose-Einstein distribution for bosons.
The problem is completely analogous to Planck’s treatment of blackbody radia-
tion, where the Bose-Einstein distribution function followed as a corollary from the
Boltzmann statistics in thermal equilibrium and Planck’s quantization hypothesis,
E = N εj .

4.2.4.1 Grand potential and ensemble averages

Evaluating partial derivatives of the function Ω(µ, V, T ), looking up the relations


(1.152), we find for the averages of numbers of particles, the Gibbs entropy, the
average pressure, and the average energy,

1 = Tr ρ̂
 
∂Ω
N = ⟨N̂ ⟩ = Tr ρ̂N̂ =−
∂µ
 T,V
∂Ω
S = Tr ρ̂ ln ρ̂ = − . (4.101)
∂T
 µ,V
∂Ω
P =−
∂V T,µ
E = ⟨Ĥ⟩ = Tr ρ̂Ĥ = T S + µN + Ω

We will derive Eq. (4.101)(iii) in Exc. 4.2.6.4.

Example 32 (Calculation of ensemble averages): Thermodynamic fluc-


tuations can be calculated via the variances in energy and particle numbers.
Starting from,
−βΩ = ln Ξgc = ln Tr e−β(Ĥ−µN̂ ) (4.102)

11 Note the absence of the degeneracy factor g in comparison to the formula (4.68), which is
j
simply due to the fact that here we only consider a potential with non-degenerate eigenstates. The
degeneracy factor gj can, however, simply added ad hoc.
148 CHAPTER 4. STATISTICAL THERMODYNAMICS

it is easy to show, that,

∂Ξgc
= βTr N̂ e−β(Ĥ−µN̂ ) = βΞgc ⟨N̂ ⟩ (4.103)
∂µ
∂Ω Tr N̂ e−β(Ĥ−µN̂ )
−β = = β⟨N̂ ⟩
∂µ Ξgc
∂2Ω

Ξgc ∂µ Tr N̂ e−β(Ĥ−µN̂ ) − Tr N̂ e−β(Ĥ−µN̂ ) ∂µ

Ξgc
−β 2
= 2
= β(⟨N̂ 2 ⟩ − ⟨N̂ ⟩2 ) .
∂µ Ξgc

4.2.4.2 Meaning of chemical potential

The key behind second quantization is to remove the restriction that the number of
particles is fixed. Instead, the theory is built around the idea of Fock space, where the
number of particles is not fixed. This is highly advantageous when dealing with many-
body systems. This same idea, when extended to finite temperatures, is what we call
the grand canonical ensemble. What we want is to consider some finite temperature
density matrix ρ̂ ∼ e−β Ĥ , where the number of particles is not fixed, but can fluctuate
[39].
However, we cannot let it fluctuate arbitrarily since that would make no physical
sense. Instead, the basic idea of the grand canonical ensemble is to impose that the
number of particles in the system is only fixed on average. That is, we impose that,

⟨N̂ ⟩ = N . (4.104)

In some systems, the number of particles does indeed fluctuate. This happens, for
instance, in chemical solutions: if we look at a certain region of a liquid, the number
of molecules there is constantly fluctuating due to molecules moving in and out from
other regions. Of course, in many other systems, the number of particles is fixed.
However, even in these cases, pretending it can fluctuate may still give good answers
for large N (thermodynamic limit). The reason is that, as we have seen above, the
variance of N̂ scales as,

∆N̂ ∝ N , (4.105)

which is small. Hence, when N is large, the grand canonical ensemble will give
accurate answers, even if the number of particles is not actually allowed to fluctuate.
This is the idea behind ensemble equivalence: we are allowed to use an ensemble
where the number of particles fluctuates, even though it actually doesn’t, because in
the thermodynamic limit the fluctuations are small.

Because of [Ĥ, N̂ ] = 0 the eigenvalues of N̂ are good quantum numbers alongside


the eigenvalues of Ĥ. We can now arrange the common eigenvectors of E and N in
such a way as to sort the eigenvalue sets (N, E) by total atom numbers, such that
Ĥ is divided in sectors with well-defined N . In other words, Ĥ is block diagonal,
and there are no terms connecting sectors with different N . The eigenvalues E are
thus labeled by two indices E(N, m), where m labels the quantum states within each
4.2. QUANTUM STATISTICS 149

sector,
 
E(N1 , 1)

 E(N1 , 2) 

 .. 
.
 
 
 
Ĥ = 
 E(N2 , 1)  .

(4.106)

 E(N2 , 2) 

 .. 
.
 
 
 
..
.

Suppose now that the system is in thermal equilibrium with exactly N particles,
which corresponds to a canonical ensemble. As resumed in Tab. 1.4, the conditions
for equilibrium are then obtained minimizing the Helmholtz free energy, dF = 0,
and the corresponding canonical density operator and partition function are (see also
Sec. 1.4.4),

e−β Ĥ X
ρ̂cn = , Ξcn (N ) = e−βE(N,m) , F = −kB T ln Ξcn (N ) . (4.107)
Ξcn (N ) m∈sector

This is a constrained sum, since we are only summing over that sector that has
exactly N particles. This constraint makes it notoriously difficult to compute the
sum in practice solving a Schrödinger equation with Ĥ.
Instead, in the grand canonical ensemble we allow the number of particles to
fluctuate but only fix them on average (4.104). To accomplish this we had to intro-
duce a new parameter µ, called the chemical potential, so that the grand canonical
equilibrium state is transformed to (see also Sec. 1.4.5),

e−β(Ĥ−µN̂ )
ρgc = , Ξgc = Tr e−β(Ĥ−µN̂ ) , Ω = −kB T ln Ξgc . (4.108)
Ξgc
Apparently, the chemical potential enters by shifting the Hamiltonian,
Ĥ → Ĥ − µN̂ . (4.109)
As resumed in Tab. 1.4, in grand canonical ensembles the conditions for equilib-
rium are obtained minimizing the Landau energy, dΩ = d(F − µN ) = 0. To obtain
the energy spectrum in the case of fluctuating particle numbers, we need to solve
a many-body Schrödinger equation (such as the Gross-Pitaevski equation) with the
Hamiltonian substituted by Ω̂ = Ĥ − µN̂ [17].
The logic behind µ is twofold. When the number of particles is allowed to fluctuate,
the value of µ is fixed externally (like the temperature). As a consequence the number
of particles ⟨N̂ ⟩ = N (µ, T ) is interpreted as a function of µ and T . Conversely, if the
number of particles N is fixed, then µ = µ(N, T ) is to be interpreted as a function of
N and T , which is to be determined as the solution of the implicit equation,

Tr N̂ e−β(Ĥ−µN̂ )
⟨N̂ ⟩ = =N . (4.110)
Tr e−β(Ĥ−µN̂ )

Relevant cases in which the number of particles is not conserved are:


150 CHAPTER 4. STATISTICAL THERMODYNAMICS

• Chemical reactions can convert one type of molecule to another; if reactions


occur then the Ni must be defined such that they do not change during the
chemical reaction.

• In high energy particle physics, ordinary particles can be spawned out of pure
energy, if a corresponding antiparticle is created. Then, neither the number of
particles nor antiparticles are conserved, only their difference.

• In a system composed of multiple compartments that share energy but do not


share particles it is possible to set the chemical potentials separately for each
compartment, for example, when a capacitor composed of two isolated conduc-
tors is charged by applying a difference in electron chemical potential.

• In some slow quasi-equilibrium situations it is possible to have distinct popula-


tions of the same kind of particle in the same location, which are each equili-
brated internally but not with each other.

• The grand canonical ensemble is particularly useful for developing the thermo-
dynamics of large ideal trapped quantum gases. While the phenomenon of BEC
can be derived in any ensemble (in Sec. 4.2.2 we derived the bosonic partition
function from the detailed balanced assumption using combinatorial arguments),
when the dynamics of a condensate is the subject under study, it is often useful
to consider it as a separate system being in thermal and chemical equilibrium
with a reservoir. The role of a reservoir is played by the thermal cloud, which
always coexists with the condensate and which exchanges particles and energy
with it.

In order for a particle number to have an associated chemical potential, it must


be conserved during the internal dynamics of the system, and only able to change
when the system exchanges particles with an external reservoir. If the particles can
be created out of energy during the dynamics of the system, then an associated µN
term must not appear in the probability expression for the grand canonical ensemble,
i.e. we require µ = 0 for that kind of particle. Such is the case for photons in a black
cavity, which can be annihilated or created due to absorption and emission on the
cavity walls 12 , see Exc. 4.2.6.5.

4.2.4.3 Fluctuations
Fluctuations if the system can be readily calculated as well [39],

∂⟨Ĥ⟩ ∂⟨Ĥ⟩
(∆Ĥ)2 = ⟨Ĥ 2 ⟩ − ⟨Ĥ⟩2 = kB T 2 + kB T µ (4.111)
∂T ∂µ
∂⟨N̂ ⟩
(∆N̂ )2 = ⟨N̂ 2 ⟩ − ⟨N̂ ⟩2 = kB T .
∂µ
12 Note that photons in a highly reflective cavity can be conserved and caused to have a non-zero

chemical potential µ.
4.2. QUANTUM STATISTICS 151

If different species are present, it is interesting to calculate correlations in fluctuations.


The covariances of particle numbers and energy are then,

∂⟨N̂2 ⟩ ∂⟨N̂1 ⟩
⟨N1 N2 ⟩ − ⟨N1 ⟩⟨N2 ⟩ = kB T = kB T (4.112)
∂µ1 ∂µ2
∂⟨Ĥ⟩
⟨N̂1 Ĥ⟩ − ⟨N̂1 ⟩⟨Ĥ⟩ = kB T .
∂µ1

From the above expressions, it can be seen that the function Ω has the exact
differential,
dΩ = −S dT − ⟨N̂ ⟩dµ − P dV . (4.113)
Substituting the relationship (4.101)(v) for E into the exact differential of Ω, an equa-
tion similar to the first law of thermodynamics is found, except that some quantities
only appear as averages,

d⟨Ĥ⟩ = T dS + µ d⟨N̂ ⟩ − P dV . (4.114)

4.2.5 Thermodynamic limit and Riemann’s zeta function


The partition functions (4.58) resp. (4.63) for microcanonical and (4.97) for grand
canonical ensembles are evaluated over discrete distributions of microstates. Also, in
Sec. 4.2.3 we argued that, in view of the huge number of microstates, it is desirable
to introduce continuous distribution functions,
X Z Z
. . . −→ h−3 d3 rd3 p . . . −→ h−3 dεη(ε) . . . , (4.115)
r,p

which, for confined ensembles, can even be simplified using the concept of density-of-
states η(ε). As long as we are deep in the thermodynamic limit, N → ∞, we expect to
obtain reliable results. Let us now do this exercise for an ideal quantum gas confined
in a box potential of volume V , whose density-of-states is given by (4.73).
We begin with the request that the chemical potential satisfies the normalization
condition, √ √
V 2m ∞
Z Z
εdε
N = wT,µ (ε)η(ε)dε = . (4.116)
(2π)2 ℏ3 0 eβ(ε−µ) ∓ 1
Introducing the thermal de Broglie wavelength,
s
2πℏ2
λth ≡ , (4.117)
mkB T

and defining the fugacity,


Z ≡ eβµ , (4.118)
and we may also write, √
Z ∞
V xdx
N= . (4.119)
λ3th 0 Z −1 ex ∓ 1
152 CHAPTER 4. STATISTICAL THERMODYNAMICS

At this point, to simplify the notation, we introduce the Bose function and its
integral representation,
∞ Z ∞
X Zt 1 xξ−1 dx
gξ+ (Z) = = ≡ gξ (Z) , (4.120)
t=1
tξ Γ(ξ) 0 Z −1 ex − 1

where Γ(η) denotes the Gamma function. Analogically, we can define the Fermi
function via 13 ,
∞ Z ∞

X (−Z)t 1 xξ−1 dx
gξ (Z) = − ξ = ≡ fξ (Z) . (4.121)
t=1
t Γ(ξ) 0 Z −1 ex + 1

For classical particles,



xξ−1 dx
Z
1
gξ0 (Z) = =Z . (4.122)
Γ(ξ) 0 Z −1 ex + 0
That is, interestingly the classical function corresponding to the Bose or Fermi func-
tion is an identity for all orders of ξ. A particular value is the Riemann zeta-function
defined as,
ζ(ξ) = gξ+ (1) . (4.123)

(a) (b) 6

g3 5
2
g3/2
f3 4
gξ (Z)

ζ(ξ)

f3/2
1 3

0 1
0 1 2 1 2 3 4
Z ξ

Figure 4.9: (code) (a) Bose and Fermi functions for box potentials (g3/2 and f3/2 ) and for
harmonic potentials (g3 and f3 ). Also shown is the Boltzmann limit (4.122). (b) Riemann
function.

Note that for Z −1 ex ≫ 0 all denominators in the expressions (4.121) ro (4.123)


converge to the classical limit, which is to say, that for highly excited atoms, ε − µ ≫
kB T , all quantum statistical effects disappear.

With all these definitions we can now rewrite the expression (4.119),

V (s)
N= g (Z) , (4.124)
λ3th 3/2
13 When the context is clear, we will use the shorter notations g and f for Bose and Fermi
ξ ξ
functions, respectively.
4.2. QUANTUM STATISTICS 153

where s = + for bosons, s = − for fermions, and s = 0 for boltzons. Apparently, we


can identify the Bose/Fermi function as the thermal phase space density of an ideal
gas,
N (s)
ρth ≡ λdb = g3/2 (Z) . (4.125)
V
In a similar way we could now derive analytic expressions for all other thermodynamic
potentials. We will, however, see in the next Sec. 4.3, that for ideal bosonic gases the
result (4.124) must be corrected. The reason is rooted in a momentous quality of the
Bose function, which is that it diverges for Z > 1, which limits the chemical potential
to negative values.

4.2.6 Exercises
4.2.6.1 Ex: Quantum statistics
n particles are distributed over g > n different cells with the same probability. Cal-
culate the probabilities
a. that there is exactly one particle in each one of the first n cells;
b. that there is no cell with more than one particle.
Use the three different assumptions that:
i. the particles are boltzon, i.e. they are identifiable and arbitrarily many particles
can be assigned to each cell;
ii. the particles are bosons, i.e. they are NOT identifiable and arbitrarily many par-
ticles can be assigned to each cell;
iii. the particles are fermions, i.e. they are NOT identifiable and only a single particle
may be assigned to each cell.

4.2.6.2 Ex: Density-of-states for non-harmonic potentials


l
ℏ2 k 2 x p y
Calculate the density-of-states for non-harmonic potentials, Ĥ = 2m + 2x̄ + 2ȳ +
z q
2z̄ using Ref. [1]. Apply the result to a quadrupolar potential.

4.2.6.3 Ex: Electron gas model


A simple model for the behavior of electrons in a metal is the Fermi gas model. In this
model the electrons move in a square well potential, a mean-field approach accounts
globally for the periodic lattice of ions and the influence of all other electrons. The
density-of-states and the electron density are the same as for blackbody radiation,

V (2m3 )1/2 √
ρ(ε)dε = εdε ,
π 2 ℏ3
1
n(ε)ρ(ε)dε = (ε−ε )/k T ρ(ε)dε .
e F B +1
Calculate the maximum energy at T = 0.

4.2.6.4 Ex: Entropy in the grand canonical ensemble


Derive the relationship S = Tr ρ̂ ln ρ̂.
154 CHAPTER 4. STATISTICAL THERMODYNAMICS

4.2.6.5 Ex: Black-body radiation


Derive the thermodynamics of the phenomenon of black-body radiation.
a. Which is the appropriate thermodynamic ensemble, and why?
b. For a single mode of a cavity, calculate the partition function, the density operator,
the total energy, and the Helmholtz free energy.
c. Generalize the results for an arbitrary black-body.
d. Introducing the density-of-states, calculate the energy density in the cavity as a
function of temperature.

4.3 Condensation of an ideal Bose gas


The clearest manifestation of quantum statistical effects is probably the phenomenon
of Bose-Einstein condensation (BEC) predicted by Bose and Einstein in 1926 [4].
With the achievement of BEC in a dilute gas of atomic rubidium in 1995, Cornell et
al. [8] confirmed the theory. Quantum degeneracy in Fermi gases was also observed
a bit later [14, 44]. In this and the subsequent section, we will present a quantum
statistical theory of ideal quantum gases for the cases of bosons, respectively, fermions.
Clearly, the theory is unable to grasp many phenomena observed in BECs and linked
to interatomic interactions, such as superfluidity. These will be discussed elsewhere 14 .

4.3.1 Condensation of a gas confined in a box potential


At very low temperatures approaching T = 0, according to the Bose-Einstein distri-
bution (4.100), we expect the atoms to pile up in the lowest energy state εj = 0 of
the trap,
εj →0 1 1
nj −→ wT,µ (0) = −βµ = =N , (4.126)
e −1 1/Z − 1
where we used the definition of the fugacity (4.118). In the thermodynamic limit,

1 N →∞
Z= −→ 1 , (4.127)
1 + 1/N

we find that the fugacity approaches unity. Thus, Z = 1 is the condition for a
macroscopic ground state population.
Let us now calculate the ground state population at finite temperatures. For
a free gas with energy spectrum, ε = p2 /2m, we derived the density-of-states η(ε)
in (4.73) 15 . Using the occupation number wT,µ (ε) for the Bose-Einstein distribu-
tion (4.100) in the thermodynamic limit, we express the total number of atoms as we
already did in Eq. (4.124),
Z ∞
V
N= wT,µ (ε)η(ε)dε = g3/2 (Z) . (4.128)
0 λ3th
14 See script on Quantum mechanics (2023).
15 We must, however, keep in mind that the state density approach is an approximation not valid
for experiments with a limited number of atoms.
4.3. CONDENSATION OF AN IDEAL BOSE GAS 155

The problem with the expression (4.128) now is, that the thermal de Broglie wave-
length diverges for T → 0, while the phase space density g3/2 (Z) is bounded be-
tween g3/2 (0) = 0 and g3/2 (1) ≈ 2.612, which we realize after a quick inspection of
Fig. 4.9(a). Hence, according to this formula, even taking the largest possible value
T →0
of the fugacity, Z −→ 1, the number of atoms in the lowest energy state tends to 0,
 3/2
V mkB T T →0
N = 3 g3/2 (Z) < V g3/2 (1) −→ 0 . (4.129)
λth 2πℏ2

This is obviously in contrast to the expectation of a large ground state population for
T → 0.
The reason is, that in the process of converting the sum to an integral (4.115),
the density-of-states disappears as we approach the ground state, thus removing the
ground state from the spectrum of energies that can be occupied. Einstein’s idea to
resolve the problem, was to explicitly maintain a discrete term accounting for the
ground state population Nc and to add it to the expression (4.128),

V
N = Nc + g3/2 (Z) . (4.130)
λ3th

4.3.1.1 Critical temperature and condensed fraction


We can use Eq. (4.130) to calculate the critical temperature Tc for Bose-Einstein
condensation. Above the phase transition, T > Tc , the population is distributed over
all states, each individual state being weakly populated; in particular, practically no
atoms are condensed, Nc = 0. The critical temperature Tc is the lowest temperature
where there are still no condensed atoms.
Below the critical temperature, T < Tc , the chemical potential is fixed by µ = 0,
and the fugacity reaches its maximum value, Z = 1. Above and at the critical
temperature all atoms occupy excited states,

V V
N= g3/2 (Z) = 3 g3/2 (1) for T ≥ Tc , (4.131)
λ3th λc

with g3/2 (1) = 2.612. The first part of Eq. (4.130) holds for T ≥ Tc and provides a
mean of determining Z from temperature and total atom number. The second part
of Eq. (4.130) holds at T = Tc . Resolving it by Tc we obtain,
2/3
2πℏ2

N
kB Tc = . (4.132)
m V g3/2 (1)

Below the critical temperature we need to add an additional term Nc . Resolving the
full expression (4.130) by the fraction Nc /N of atoms condensed in the ground state
and substituting N from (4.131), we obtain,

λ3
(
Nc V λ3c g3/2 (Z) 1 − λ3c for T ≤ Tc
=1− g3/2 (Z) = 1 − 3 = th (4.133)
N N λ3th λth g3/2 (1) 0 for T ≥ Tc
156 CHAPTER 4. STATISTICAL THERMODYNAMICS

16
The superscript (3/2) denotes the box potential shape of the trapping potential .
In summary we have,
!3/2 2/3
(3/2)
2πℏ2

Nc min(T, Tc ) N
=1− (3/2)
with kB Tc(3/2) = .
N Tc m V g3/2 (1)
(4.134)
The abrupt occurrence of a finite occupation in a single quantum state at temperature
(3/2)
below Tc indicates a spontaneous change in the system and a thermodynamic
phase transition. Solve Exc. 4.3.4.1.
1
Nc /N

0.5

0
0 0.5 1 1.5
T /Tc

Figure 4.10: (code) Condensed fraction for an ideal Bose gas as a function of reduced
temperature for a (blue) in a box potential and (green) in a harmonic trap. Red circles
denote experimentally measured data points [28]. The red dashed line is a fit to the data.
The cyan dash-dotted line is a theoretical curve taking into account finite size effects and
interatomic interactions.

4.3.1.2 Thermodynamic potentials in a grand canonical ensemble


In order to calculate the density-of-states, state equation, mean values in the grand
canonical ensemble, we start from the definitions of the partition sum Ξgc in Eq. (4.97)
using the upper signs for bosons, the grand canonical potential Ω, the fugacity Z, the
density operator ρ̂, and the trace,
Y∞
Ξgc ≡ (1 ∓ Ze−βεj )∓1 and Ω ≡ −kB T ln Ξgc and Z ≡ eβµ
j=1

e−β(ĤN −µN̂ ) X
and ρ̂ ≡ and Tr . . . ≡ ⟨ψj | . . . |ψj ⟩ . (4.135)
Ξgc j

The parameters µ, V, T are held fixed. As we have seen, for large systems in the
thermodynamic limit, the sum can be replaced by an integral, which, in turn, may
be expressed by the Riemann zeta-function (see Secs. 4.2.5 and 3.1.2). The thermo-
dynamic potentials and their expressions are summarized in the following table 17 .

16 See Exc. 4.3.4.3 for an explanation of the notation.


17 The red terms in {} brackets only hold for bosons, because the integrals diverge otherwise.
4.3. CONDENSATION OF AN IDEAL BOSE GAS 157

Table 4.1: Thermodynamic potentials for an ideal Bose gas (upper signs) or Fermi
gas (lower signs) trapped in a box potential.

P∞
Tr ρ̂ ln Ξgc j=1 limN →∞
−βεj
ln(1∓Ze )
− β1 ln Ξgc
P
Ω j ±β

1
µ β
ln Z

1 Tr ρ̂
 
∂Ωj
nj − ∂µ
wT,µ − β1 ∂
∂εj
ln Ξgc βεj
1
T,V e /Z∓1
  n o
±
− ∂Ω ∂ V 1
P
N ∂µ
Tr N̂ ρ̂ Z ∂Z ln Ξgc j nj λ3
g3/2 + 1/Z−1
T,V th
  βε
∂Ω
P nj e j 5V ±
S/kB − kB ∂T Tr ρ̂ ln ρ̂ ln Ξgc ± j ln Z 2λ3
g5/2 −{ln Z}
µ,V th

∂Ω 1 V ±

P − ∂V T,µ βV
ln Ξgc λ3
g5/2 −{N ln(1 − Z)}
th

∂ 3kB T V ±
≃ 3P V
P
E T S + µN + Ω Tr Ĥ ρ̂ − ∂β ln Ξgc j nj εj 2λ3
g5/2
th  2± 
± 9N g3/2
∂E 15V

CV ∂T N,V 4λ3
g5/2 − ±
th 4g1/2

With the particle number N we calibrate the chemical potential µ at a given


temperature T via,
V −1
λ3th N/V ,

N= 3 g3/2 (Z) =⇒ Z = g3/2 (4.136)
λth
and knowing Z we can determine all thermodynamic potentials of the table 4.1.
The internal energy with fixed volume is proportional to the pressure. Note that
limN →∞ S = 0 and limN →∞ CV = 0. Do the Exc. 4.3.4.2.
The Bose-Einstein phase transition occurs at some critical temperature Tc . At
high temperature T > Tc the ground state population vanishes. At low temperature
T < Tc , we have to substitute in the above equations Z by 1. Since g3/2 is limited
for Z = 0, .., 1 the population balance must be equilibrated by an additional term
describing the ground state population:
(
+
N 3 g3/2 (1) + λ3th NVc for T ≤ Tc
λth = +
V g3/2 (Z) for T ≥ Tc
( (4.137)
+
P 3 g5/2 (1) for T ≤ Tc
λ =
kB T th +
g5/2 (Z) for T ≥ Tc

In the thermal Bose-gas phase, T ≥ Tc , we get from (4.137) the state equation,
+
PV g5/2 (Z) T →∞
= + −→ 1 . (4.138)
N kB T g3/2 (Z)
158 CHAPTER 4. STATISTICAL THERMODYNAMICS

In the classical limit, obtained by noticing gξ0 (Z) = Z, follows the well-known classical
ideal gas equation. In the Bose-condensate phase, T ≤ Tc , using the definition of the
critical temperature, we recover from (4.137) the equation of state (4.134).

(a) (c) (e)

2
4 (f)
optical density

(b) (d)
1
1 2

0 0 0
0 120 240 360 0 120 240 360 0 120 240 360
position (μm) position (μm) position (μm)

Figure 4.11: Ultracold 87 Rb gas at various temperatures (a,b) T > Tc , (c,d) T ≃ Tc , and
(e,f) T < Tc measured in experiment [28]. The figures (a,c,e) are two-dimensional false color
images of the momentum distribution. The figures (b,d,f) are cuts through the images.

4.3.2 Condensation of a harmonically confined gas


The critical temperature Tc can be significantly altered, when the atoms are confined
to a spatially inhomogeneous potential. The critical temperature depends on the
general shape and the tightness of the potential. Let us consider N particles of
an ideal Bose gas distributed over several quantum states of an arbitrary potential.
The occupation number wT,µ (ε) of particles at an energy level ε is still given by
(4.114), the ground state energy is defined as zero. In the thermodynamic limit,
the relation between the chemical potential and the total number of particles is still
given by Eq. (4.136), with an adequate density-of-states η(ε). The state density for
an arbitrary confinement potential U (r) can be found by generalizing the calculation
to the free gas. The phase space volume between the energy surfaces ε and ε + dε
is proportional to the number of states in this energy range. However, the external
potential limits the space available for the gas. For a harmonic potential (4.75) with
the mean secular frequency ω̄ the density-of-states η(ε) has already been calculated
in Eq. (4.79). With this, we can analogically to (4.136) and (4.134), calculate,
Z ∞
N = Nc + wT,µ (ε)η(ε)dε (4.139)
0
∞ 3
ε2 dε
Z 
1 kB T
= Nc + = Nc + g3 (Z) .
2(ℏω̄)3 0 eβ(ε−µ) −1 ℏω̄
4.3. CONDENSATION OF AN IDEAL BOSE GAS 159

In the same way as for a potential well we find for a harmonic potential,
 3  3
kB T T
Nth = g3 (1) = N (3)
, (4.140)
ℏω̄ Tc
with g3 (1) = 1.202. Since Nc + Nth = N , the number of particles in the ground state
is,
!3 1/3
(3) 
Nc min(T, Tc ) N
=1− (3)
with kB Tc(3) = ℏω̄ . (4.141)
N Tc g3 (1)

The superscript (3) indicates the harmonic shape of the trap.

(a) (b) 2

(μK)
4 0
(105 )

-2
μ/kB
2
Nc

-4
0
0 0.5 1 1.5 0 0.5 1 1.5
T /Tc T /Tc
(c) 10 (d)
10
(μK)

C/NkB

5
5
E/kB

0 0
0 0.5 1 1.5 0 0.5 1 1.5
T /Tc T /Tc

Figure 4.12: (code) Calculation of thermodynamic potentials as a function of temperature


for a Bose gas of 500000 88 Sr atoms trapped in a harmonic potential with secular frequency
ωho /2π = 416 Hz. (a) Chemical potential, (b) energy, (c) heat capacity per particle, and
(d) total heat capacity. The critical temperature is Tc = 1.7 µK.

Fig. 4.10 traces the condensed fraction Nc /N measured as a function of the reduced
(3)
temperature T /Tc . Experiments [28, 19] confirm Bose’s ideal gas theory in the
thermodynamic limit. A particularity of inhomogeneous trapping potentials is, that
the condensed and the normal phase separate in momentum space, simply because the
condensed atoms occupy only the ground state, whose spatial extend is small, while
thermal atoms are distributed over all energy levels. Fig. 4.11 shows a measurement
of velocity distributions of a cloud of atoms close to the critical temperature.
We note that smaller trapping volumes (or tighter potentials) increase the critical
temperature Tc , thus allowing for quantum degeneracy at higher temperatures, which
can be advantageous in experimentation. Also, at a given temperature, a strongly
confining potential reduces the total minimum number of atoms required to reach
condensation.
160 CHAPTER 4. STATISTICAL THERMODYNAMICS

4.3.2.1 Energy and heat capacity


When the number of atoms is limited, N < ∞, we expect a slightly reduced critical
temperature [25]. In addition, the interatomic interaction reduces the critical tem-
perature [1]. As the effects are small, they are difficult to observe in experiments.
However, measurements of other thermodynamic quantities such as energy and heat
capacity [14, 19] showed significant deviations from the ideal gas behavior due to
interaction effects.

2.0

1.5
E / NkBTo

0.0

1.0
-0.2

0.5 -0.4

0.4 0.8 1.2 1.6

0.0
0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8
T / To(N)

Figure 4.13: Measurement of the release energy [19].

The heat capacity quantifies the system’s ability to secure its energy. In conven-
tional systems, the heat capacity is typically either specified at constant volume or
at constant pressure. With this specification heat capacities are extensive state vari-
ables. When crossing a phase transition, the temperature-dependent heat capacity
measures the degree of change in the system above and below the critical temperature
and provides valuable information about the general type of phase transition.
Using (4.81), the total energy per particle is given by,

εwT,µ (r, p)d3 rd3 p εη(ε)(eβ(ε−µ) − 1)−1 dε


R R
E g4 (Z)
= R = R = 3kB T . (4.142)
N wT,µ (r, p)d3 rd3 p η(ε)(eβ(ε−µ) − 1)−1 dε g3 (Z)

For a confined gas, volume and temperature are interdependent, and the concept of
pressure is somewhat vague. In this case, we can not refer to the heat capacity at
constant volume or pressure. However, one can define the heat capacity for a fixed
number of particles,  
∂E(T )
C(T ) = . (4.143)
∂T N

Fig. 4.12 shows the temperature dependence of some thermodynamic potentials for a
harmonically trapped ultracold Bose gas. The discontinuity of the heat capacity at
the critical temperature is known as λ-point.

Calculating the second moments of the distributions obtained for the same density
by time-of-flight of absorption images, we obtain the kinetic energy,
Z 2
p
Ekin = n(p)d3 p . (4.144)
2m
4.3. CONDENSATION OF AN IDEAL BOSE GAS 161

For confined ideal gases, the virial theorem ensures Ekin + Epot = 2Ekin . For real
gases, the repulsive energy of the mean field adds to this energy, E = Ekin + Epot +
Eself . The sudden extinction of the trapping potential before time-of-flight takes away
the potential energy Epot non-adiabatically. The kinetic energy and the self-energy of
the condensate are fully converted into kinetic energy during ballistic expansion. It
is this energy, p2 /2m = Ekin + Eself , which is sometimes called release energy, which
is measured after ballistic expansion 18 . Fig. 4.13(right) shows a measurement of the
release energy. Solve the Exc. 4.3.4.3.

4.3.2.2 Micro- and grand canonical Bose-condensates

The question which ensemble is the correct assumption depends on the experimental
situation. The question is particularly interesting in the context of Bose-Einstein
condensation: Here it is related to the question which state better describes a BEC:
A Fock state characterized by a fixed atom number or a Glauber state, where the
atom number is fluctuating.
The condensates experimentally produced in alkali gases consisted of relatively
small atom numbers between 1000 to 107 , so that the validity of the thermodynamic
approximation and the use of the density-of-states approach has been questioned [25].
Also, the decision whether to use the grand canonical, the canonical or the micro-
canonical ensemble for calculating the thermodynamic quantities noticeably influences
the results. Herzog and Olshanii [29] have shown that for small atom numbers on
the order of 100 the canonical and grand canonical statistics lead to predictions on
the condensed fraction that differ by up to 10% [see Fig. 4.10(right)]. On the other
hand, they give the same results if the particle numbers are large. Which canonical
statistics is more appropriate is not a trivial question and depends on the experimen-
tal setup and in particular on the time scale of the measurements. If we look at the
sample for short times, the number of condensed atoms will be fixed, and we can
assume a canonical ensemble. For longer times, however, the atom number may be
an equilibrium parameter depending on the contact of the sample with a reservoir,
and the grand canonical statistics is better suited.

4.3.3 Density and momentum distribution for a Bose gas


Bose-Einstein condensates consist of atoms sharing a single quantum state. In in-
homogeneous potentials, the condensate and the thermal fraction form spatially sep-
arated clouds, concentrated around the center of the potential and therefore very
dense. For this reason, interatomic interaction effects generally dominate the density
and momentum distribution of the condensed fraction. However, the non-condensed
(or normal, or thermal) fraction is also subject to modifications due to the bosonic na-
ture of the atoms. Since the density of the normal fraction is generally much smaller,
these modifications are weak. In this section, we will only discuss these effects briefly,
18 Itis interesting to measure the heat capacity of a partially condensed cloud near the critical
point and analyze the discontinuity, because it contains important information about interatomic
interactions and finite-size effects ([9], Sec. 3.4). In addition, the classification of Bose-Einstein con-
densation as a phase transition depends very much on the behavior of the thermodynamic potential
near the critical point [38, 32].
162 CHAPTER 4. STATISTICAL THERMODYNAMICS

but we note that the calculations are analogous to the calculations for fermionic gases
presented in Sec. 4.4.4.
For an ideal Bose gas the density and momentum distributions are expressed by
Bose functions g3/2 (Z) [9]. For example, as will be derived in Exc. 4.3.4.4, the density
and momentum distributions are,

1
n(x) = g3/2 (e−β[U (x)−µ] )
λ3th
(bosonic distribution functions) (4.145)
a6ho 2
n(k) = g3/2 (eβ(µ−p /2m) )
λ3th

In the classical limit, we can calibrate the chemical potential by Eq. (4.136) for a
box potential or by (4.139) for a harmonic potential,

( N 3
βµ βµ βµ V λth (for a box potential)
g3/2 (e ) → c3/2 (e )=e =  3
c3 (eβµ ) =N ℏω̄
kB T (for a harmonic potential)
(4.146)
Hence, we obtain for the classical density distribution,

1 eβµ
n(x) = 3 c3/2 (e−β[U (x)−µ] ) = 3 e−βU (x) (4.147)
λ λth
th
N
 V
x∈V
(for a box potential)
= q
mω̄ 2
3 2 2
−βmω̄ x /2
 N
2πkB T e (for a harmonic potential)

Similarly, the momentum density distribution is given by,

a6ho β(µ−p2 /2m) a6ho eβµ −βp2 /2m


n(k) = c3/2 (e ) = e (4.148)
λ3 λ3th
th
 N 6 −βp2 /2m
V aho x∈V e (for a box potential)
= 3
q
1
3
−βp 2
/2m
 Nℏ
2πmkB T e (for a harmonic potential)

where
p we used the spatial extend of the ground state of the harmonic oscillator aho =
ℏ/mω. We see that we recover the Maxwell-Boltzmann velocity distribution, as
seen in Fig. 4.14,

3
m3
r
m 2
n(v) = n(k) =N e−βmv /2 . (4.149)
ℏ3 2πkB T
4.3. CONDENSATION OF AN IDEAL BOSE GAS 163

(a) 1 (b) 3

n(p) (h̄λ3th /a6ho )


th )
n(x) (λ−3
2
0.5
1

0 0
-50 0 50 -50 0 50
x (aho ) p (h̄/aho )

Figure 4.14: (code) (a) Density and (b) momentum distribution of a Bose gas (red) and a
Boltzmann gas (green) at T = 1.1Tc (solid line) and at T = 2Tc (dotted line).

4.3.3.1 Ballistic expansion


To describe the density distribution of an ultracold Bose-gas after a time-of-flight we
replace in the second Eq. (4.145): k = mr/ℏtT oF . We obtain the density distribution,
 3  3 6
m m aho 2 2
nT oF (r, tT oF ) = n(k = mr/ℏtT oF ) = g3/2 (e(µ−mr /2tT oF )/kB T )
ℏtT oF ℏtT oF λ3th
 3 r 3
T →∞ m 1 2 2 N 2 2
−→ Nℏ 3
e−mr /2tT oF kB T = e−r /2rrms ,
ℏtT oF 2πmkB T (2π)3/2 rrms
3

(4.150)

where we defined, r
kB T
rrms ≡ tT oF . (4.151)
m
This distribution does not directly depend on the potential U (r), that is, the expansion
is isotropic. In Exc. 4.3.4.4(b) we determine the time-of-flight density distribution of
an ultracold Bose gas. For very long flight times (usually several 10 ms) the density
resembles a Gaussian distribution [9]. Note however, that in interacting non-ideal
gases the chemical potential does depend on the potential.
In a time-of-flight experiment, any deviation observed between the results (4.150)
and (4.151) points towards an impact of quantum statistics. However, absorption
images only record column densities, i.e. projections of the time-of-flight distribution
on a plane, which tends to smear out the non-Gaussian features.
Example 33 (Heat capacity measurement): For an ideal Bose gas trapped
in a harmonic potential the temperature dependence of the heat capacity at
the threshold to condensation can easily be obtained as follows. The condensed
fraction determines the chemical potential through,
 3
kB T
N = N0 + g3 (Z) , (4.152)
ℏω

where Z(T ) = eµ/kB T for a grand canonical ensemble and gn denotes the
Riemann zeta function. The condensed fraction vanishes above the critical
164 CHAPTER 4. STATISTICAL THERMODYNAMICS

temperature, the chemical potential vanishes below the critical temperature.


(kB T /ℏω)3 = 2π(aho /λth )3 denotes the normalized volume of a phase space
cell. Knowing Z(T ) from equation (4.152), we can calculate the total energy,
the heat capacity and all the other thermodynamic potentials:
 3
kB T g3 (Z)
CN = 12kB g4 (Z) − 9kB N . (4.153)
ℏω g2 (Z)

For an interacting Bose-gas we expect that the Eqs. (4.152) and (4.153) are
not scrupulously obeyed. Indeed, the abrupt discontinuous change in the heat
capacity at the phase transition to BEC, expected for ideal gases, is smeared
out by atomic collisions [1].

Figure 4.15: Population variation during a slow adiabatic compression followed by a sudden
non-adiabatic decompression.

4.3.3.2 Adiabatic compression


Adiabaticity of a process means reversibility, while the atom number is unchanged
N = const and, hence, constant entropy S − const. This implies an unchanged
population distribution nj = const and βεj /T = const also we get βµ, βE = const.
Furthermore, the phase space density keeps unchanged ρ = const. The process of
adiabatically compressing a harmonic trap therefore changes the temperature like
T ′ = T ω ′ /ω. This is valid above and below the transition point. The measure is
repeated twice: With and without adiabatic-sudden variation. The heat capacity
then follows from equation (4.153).

4.3.4 Exercises
4.3.4.1 Ex: Monoatomic gas as a canonical ensemble
Consider a classical monoatomic gas made up of N non-interacting atoms of mass m
confined in a container of volume V , at temperature T . The Hamiltonian correspond-
ing to an atom is given by Ĥ = (p̂2x + p̂2y + p̂2z )/2m.
a. Show that the atomic canonical partition function is ξ = V /λ3th , where λth is the
thermal de Broglie wavelength defined in Eq. (4.113).
b. Using ξ of the previous item, obtain the system’s partition function Ξcn and the
Helmholtz free energy F . Also obtain the free energy per atom f = F/N in the
thermodynamic limit N −→ ∞, V −→ ∞, such that v = N/V fixed.
c. Obtain internal energy E and the gas pressure p.
d. Calculate the chemical potential and entropy per atom in the thermodynamic limit,
thus deriving the so-called Sackur-Tetrode formula.
4.3. CONDENSATION OF AN IDEAL BOSE GAS 165

4.3.4.2 Ex: Thermodynamic quantities for a Bose gas gas trapped in a


box
Derive all expressions for the entropy and the pressure of Tab. 4.1.

4.3.4.3 Ex: Generalization for arbitrary potentials in reduced dimen-


sions
The calculation of the thermodynamic potentials can be generalized to arbitrary trap-
ping potentials and dimensions [4, 18, 12, 2, 50, 20, 3, 25, 29, 37, 36, 42, 38, 41, 19].
To do so, we consider a generic power law potential confining an ideal Bose gas in α
dimensions,
Xα xi ti
U (r) = ,
i=1 ai

and define a parameter describing the confinement power of the potential,


α Xα 1
ξ= + .
2 i=1 ti

For example, for a three-dimensional potential, α = 3. Now, for a 3D harmonic


potential, ξ = 3, and for 3D box potential, ξ = 3/2.
a. Calculate the density-of-states η using the equation (4.71) employing Bose functions
(4.120).
b. Prove the following expressions:

(bosonic potentials)
 ξ
Nc min(T, Tc )
= 1−
N T
c ξ
E gξ+1 (Z) min(T, Tc )
= ξ
N kB T gξ (Z) Tc
S gξ+1 (Z) 2µ
= 4 −
N kB gξ (Z) kB T .
 ξ
C gξ+1 (Z) min(T, Tc ) gξ (Z) max(T − Tc , 0)
= ξ(ξ + 1) − ξ2
N kB gξ (Z) Tc gη−1 (Z) T − Tc
CT >Tc gξ+1 (Z) 2 gξ (Z) CT <Tc gξ+1 (1)
= ξ(ξ + 1) −ξ , = ξ(η + 1)
N kB gξ (Z) gξ−1 (Z) N kB gξ (1)
∆CTc CTc− − CTc+ gξ (1)
= = ξ2
N kB N kB gξ−1 (1)

4.3.4.4 Ex: Time-of-flight distribution of a Bose-gas


a. Derive the formulae (4.145) describing the density and momentum distribution of
an ultracold Bose-gas.
b. Calculate the time-of-flight distribution of a Bose-gas as a function of temperature
(i) analytically for a harmonic potential and (ii) numerically for an arbitrary potential.
166 CHAPTER 4. STATISTICAL THERMODYNAMICS

4.4 Quantum degeneracy of an ideal Fermi gas


Atoms are fermions or bosons, depending on whether their spin is integer or semi-
integer. For example, 87 Rb atoms with their total integer spin of F are bosons, while
40
K atoms having a half-integer spin are fermions. At high phase space densities,
atoms have to figure out how they will organize their coexistence. Bosons encourage
each other to occupy the same phase space cell, in contrast to the reluctant fermions,
which prefer to follow Pauli’s exclusion principle. The different behavior is described
by different quantum statistics that determine how the phase space (i.e., the available
energy levels) has to be filled by the atoms. The Bose-Einstein distribution is valid for
bosons, the distribution of Fermi-Dirac for fermions and both asymptotically approach
the Boltzmann distribution at high temperatures. We have seen that bosons undergo
a phase transition and condense in the ground state when the temperature is reduced
below a critical threshold. On the other hand, the fermions organize their phase
space, so that their energy levels are arranged like a ladder. The impact of fermionic
quantum statistics on a cold cloud of atoms were observed experimentally by DeMarco
and Jin [14, 44]. They cooled a two-components Fermi gas of 7×105 potassium atoms
down to 300 nK, which corresponded to 60% of the atoms populating energy levels
below the Fermi energy. The measured density distribution was found to deviate from
the one expected for an ideal Boltzmann gas 19 .

4.4.1 Chemical potential and Fermi radius for a harmonic trap


The phase space density for a degenerate Fermi gas in the thermodynamic limit has
been derived in (4.121). We consider a cylindrically symmetric harmonic potential, as
defined in (4.74), for which the density-of-states ηε has been calculated in (4.79). In
the same way as for a Bose gas, the chemical potential of the Fermi gas must satisfy
the normalization condition,
∞ 3
ε2 dε
Z Z 
1 kB T
N= wT,µ (ε)η(ε)dε = = f3 (Z) . (4.154)
2(ℏω̄)3 0 eβ(ε−µ) + 1 ℏω̄

For low temperatures, βµ ≫ 1, we can use the Sommerfeld expansion of the Fermi
function, which in first order gives fξ (ex ) ≃ xξ /Γ(ξ + 1), where x is a placeholder for
βµ, Γ is the Γ-function, and ξ = 3 for a harmonic potential. From this we immediately
obtain the chemical potential at zero temperature defined as the Fermi energy,

EF ≡ µ(T = 0) = ℏω̄(6N )1/3 , (4.155)

and from that the momentum of free particles and the Fermi radius,
r s s
2mEF 2EF 2EF
KF ≡ and rF ≡ , zF = . (4.156)
ℏ2 mωr2 mωz2

19 We note that meanwhile ultracold two-components Fermi gas have been demonstrated to form

bosonic Cooper-pairs, similarly to the phenomena known as superconductivity in some metals and
as superfluidity of the fermionic 3 He.
4.4. QUANTUM DEGENERACY OF AN IDEAL FERMI GAS 167

Using the second order of the Sommerfeld expansion,

xξ π 2 ξ(ξ − 1)
 
x
fξ (e ) ≃ 1+ + ... , (4.157)
Γ(ξ + 1) 6x2

we obtain for the chemical potential at finite temperature the equation, 0 = µ3 +


(πkB T )2 µ − EF3 . The approximate solution of this equation, neglecting higher-order
terms such as 4π 6 kB6 6
T ≪ 27EF6 , is
" 2 #
π2

kB T
µ = EF 1− . (4.158)
3 EF

For highly excited atoms, ε−µ ≫ kB T , the Fermi function approaches the identity,
Z→0
fξ (Z) −→ Z (see Fig. 4.9), so that,
 3 3

kB T kB T
N= eβµ = (1 + βµ + ...) , (4.159)
ℏω̄ ℏω̄
" 3 #  3
ℏω̄ 1 EF
µ = kB T ln Z ≃ kB T ln N = kB T ln ,
kB T 6 kB T

where in the last step we substituted the definition of the Fermi energy. This means
that highly excited fermions behave like a Boltzmann gas, which satisfies an ideal gas
equation similar to that of classical particles in a box potential,
 3
kB T
N= . (Boltzmann) . (4.160)
ℏω̄

Fig. 4.16(a) shows calculations of the chemical potential for an ideal Fermi gas
along with the chemical potentials of a Boltzmann gas and a Bose gas.

4.4.2 Energy
Using (4.81), the total energy per particle, E/N ≡ N −1 εwT µ d3 xd3 k, is given by,
R

−1
εη(ε) eβ(ε−µ) + 1
R
εwT,µ (x, k)d3 xd3 k
R
E dε f4 (Z)
= R 3 3
= R −1 = 3kB T , (4.161)
N wT,µ (x, k)d xd k η(ε) e β(ε−µ) +1 dε f3 (Z)

in analogy to the expression (4.142) holding for a Bose gas. Again using the Sommer-
feld approximation, we see that for low temperatures, T → 0, the energy is limited
by [see Fig. 4.16(b)],

3µ4 2π 2
 
3 βµ T →0 3
E= f 4 (e ) = 1+ + ... −→ EF . (Fermi) (4.162)
β(βℏω̄)3 4EF3 (βµ)2 4

Hence, the total energy per fermion does not vanish for T → 0. The reason is that
the atoms are forced to adopt states in the outermost regions of the harmonic trap.
168 CHAPTER 4. STATISTICAL THERMODYNAMICS

In the limit of high temperatures, T → ∞, a classical gas has the energy per
particle,

3   3

E= 3
f4 f3−1 (βE6F ) ≃ 3N kB T . (Boltzmann) (4.163)
β(βℏω̄)

Z→0
which is seen by taking the high temperature limit fη (Z) −→ Z and extrapolating
T →∞
to all Z. This implies, E1 /EF −→ 3kB T /EF .

In comparison, for bosons we have,


 3  3
g4 (Z) min(T, Tc ) T
E = 3N kB T ≃ 2.7N kB T . (Bose) (4.164)
g3 (Z) Tc Tc

Hence, the total energy per boson decreases very rapidly for T → 0. The reason
is that the atoms are bosonically encouraged to pile up in the inner region of the
harmonic trap.

4.4.3 Entropy and heat capacity


The entropy per particle can be calculated from,
Z
S = −kB η(ε) [wT,µ ln wT,µ + (1 − wT,µ ) ln(1 − wT,µ )] dε (4.165)

and becomes,
f4 (Z) µ 4E1 µ
− =
S1 = 4kB − . (4.166)
f3 (Z) T 3T T

The heat capacity per particle C1 = ∂E ′



∂T N is easily calculated using Zfη (Z) =
1

fη−1 (Z),
   
f4 (Z) 3µ f4 (Z)f2 (Z) E1 3µ f4 (Z)f2 (Z)
C1 = 3kB − 1− = − 1 − . (4.167)
f3 (Z) T f3 (Z)2 T T f3 (Z)2

For fermions well below the Fermi temperature, T → 0, using the Sommerfeld
approximation, we calculate,

T →0 3π 2 kB T
C1 −→ . (Fermi) (4.168)
2 TF

For high temperature T

C1 ≈ 3kB . (Boltzmann) (4.169)


4.4. QUANTUM DEGENERACY OF AN IDEAL FERMI GAS 169

(a) 1 (b) 3

(μK)
(μK)
0 2

EF

E/kB
μ/kB -1 1
3
4 EF
-2 0
0 0.5 1 0 0.5 1
T /TF T /TF
×106
(c) (d)
10 2
Tc
C/NkB

C/kB
5 1

0 0
0 0.5 1 0 0.5 1
T /TF T /TF

Figure 4.16: (code) Calculation of thermodynamic potentials for Bose (red), Fermi (green),
and Boltzmann gases as a function of temperature for a given harmonic trapping potential.
The gases are assumed to have same mass, same atom number N = 200000, and same trap
frequencies ωho /2π = 200 Hz. (a) Chemical potential, (b) energy, (c) heat capacity per
particle, and (d) total heat capacity. The dotted magenta line in (a) shows the chemical
potential calculated from the Sommerfeld approximation.

4.4.4 Density and momentum distribution for a Fermi gas


4.4.4.1 Spatial distribution
The density distribution is,
2k 2 dk
Z Z
3 1
n(x) = wT,µ (x, k)d k = 2 2 (4.170)
(2π)2 eβ[ℏ k /2m+U (x)−µ] + 1
 3/2 Z √  3/2
1 2m εdε 1 2m
= = Γ(3/2)f3/2 (e−β[U (x)−µ] ) ,
(2π)2 ℏ2 eβ[ε+U (x)−µ] + 1 (2π)2 βℏ2
such that,
n(x) = λ−3
th f3/2 (e
−β[U (x)−µ]
) (Fermi) . (4.171)
At low temperatures, T → 0, we can apply the Sommerfeld expansion [7], which to
first order gives µ → EF ,
 3/2
1 Γ(3/2) 2m
n(x) ≈ [µ − U (x)] (4.172)
(2π)2 Γ(5/2) ℏ2
3/2  3/2
ρ2
 
1 2 2m m 2 2 3/2 8λ N
= E F − ω ρ = 1 − .
(2π)2 3 ℏ2 2 r π 2 RF3 RF2
170 CHAPTER 4. STATISTICAL THERMODYNAMICS

At high temperatures, T → ∞, we should recover the Boltzmann gas situation,


n(x) = λ−3
th f3/2 (e
−β[U (x)−µ]
) (4.173)
2 3/2
 
3 mβ ω̄ 2 2
+ωy2 y 2 +ωz2 z 2 )/2
≈ λ−3
th N (βℏω̄) e
−βU (x)
= N e−βm(ωx x .

n(x)d3 x = N . Introducing the peak density n0 , we obtain,
R
It’s easy to check,
2 2
n(x) = n0 e−mω ρ /2kB T
(Boltzmann) . (4.174)
q
The rms-radius of the distribution is σj = kB T /mωj2 , which seems contrary to the
m 2
above results, 2 ωj x2j = kB T . In comparison,
h i
n(x) = λ−3 g
th 3/2 e β(µ−U (x))
(Bose gas above Tc ) . (4.175)
p p
where λth = 2πℏ2 /mkB T and aho = ℏ/mω̄.

4.4.4.2 Momentum distribution


The momentum distribution is,
Z Z
1 rdrdz
ñ(k) = wT,µ (x, k)d3 x = 2 β[ε(k)+mω 2 2 (4.176)
(2π) e r ρ /2−µ] + 1

4πρ2 dρ
Z
1
= 3 β[ε+mω 2 2
(2π) e r ρ /2−µ] + 1

 3/2 Z √  3/2
1 2 tdt 1 2
= = Γ(3/2)f3/2 (eβ(µ−ε) ) ,
(2π)2 βmωr2 eβ[ε+t−µ] + 1 (2π)2 βmωr2
such that,
ñ(k) = λ−3 6
th aho f3/2 (e
β(µ−ε)
) (Fermi) . (4.177)
At low temperatures, T → 0,
 3/2
1 2 Γ(3/2)
ñ(k) ≈ 2 2
(β [µ − ε])3/2 (4.178)
(2π) βmωr Γ(5/2)
3/2  3/2 3/2
ℏ2 k 2 k2
 
1 2 2 8 N
≈ EF − = 2 3 1− 2 .
(2π)2 mωr2 3 2m π KF KF
This can easily be integrated by dimensions,
Z ∞Z ∞ 3/2
k2
Z Z 
8 N
ñT →0 (kz ) = ñcl (k)dkx dky = 2 3 1− 2 dkx dky
−∞ −∞ π KF |k|≤KF KF
(4.179)
√ 2 2 !3/2
KF −kz 5/2
2π Z
kz2 kρ2 kz2
Z  
8 N 16 N
= 1− − 2 kρ dkρ dϕ = 1− 2 .
π 2 KF3 0 0 KF2 KF 5π KF KF
4.4. QUANTUM DEGENERACY OF AN IDEAL FERMI GAS 171

R∞
It is easy to check −∞ ñT →0 dkz = N , with Maple.
At high temperatures, T → ∞, we should recover the Boltzmann gas situation,
3/2
ℏ2 ω̄ 2

ñ(k) ≈ N e−βε (Boltzmann) . (4.180)
2πmωr2

Since ε is the kinetic energy, the rms-radius k 2 of this distribution is βℏ2 ⟨k 2 ⟩ = m.
In comparison,
h i
β (µ−p2 /2m)
ñ(k) = λ−3 6
th aho g3/2 e (Bose gas above Tc ) . (4.181)

Example 34 (Integrated momentum distribution of a Fermi gas): To


integrate the momentum distribution of finite temperature Fermi gas by dimen-
sions,
3/2 Z ∞ Z ∞ Z ∞
4πr̃2 dr̃

1 2
ñ(kz ) = 2
dky dkx (4.182)
(2π)3 βmω̃ho −∞ −∞ 0 eβε−βµ+r̃2 + 1
3/2 Z ∞Z ∞
4πr̃2 dr̃

1 2
= 2
2π 2 2 2 2 2 kρ dkρ
(2π)3 βmω̃ho 0 0 eβℏ kz /2m+βℏ kρ /2m−βµ+r̃ + 1
3/2
2m ∞ ∞
 Z Z
1 2 k̃ρ dk̃ρ
= 2
r̃2 dr̃
π βmω̃ho βℏ2 0 0 e βℏ2 kz
2 /2m−βµ+r̃ 2 +k̃2
ρ + 1
3/2 ∞
2m 1 ∞ 2
 Z
1 2 1
= 2
r̃ ln 2 2 2 2 dr̃
π βmω̃ho βℏ2 2 0 1 + e−βℏ kz /2m+βµ−r̃ −kρ 0
 1/2 Z ∞
2 2 2

βµ−βℏ2 kz 2
/2m−r̃ 2

= r̃ ln 1 + e dr̃ .
π (βℏω̃ho )2 βmω̃ho2
0

4.4.4.3 Time-of-flight distribution


To describe time-of-flight images we substitute k = mr/ℏt. We obtain the density
distribution from a convolution,
δ 3 (x − x0 − pt/m)
Z
1
nT oF (x, t) = 3
d3 x0 d3 k β(ε(x ,p)−µ) (4.183)
(2π) e 0 +1
d3 k
Z
1
= 3 β(ε(x+pt/m,p)−µ)
(2π) e +1
Z
1 dkx dky dkz
= where j = x, y, z . .
(2π)3 eβΣj [ℏ kj /2m+ 2 mωj (xj +ℏkj t/m) ] /Z + 1
2 2 1 2 2

We rewrite the exponent,


2
ℏ2 kj2 /2m + 12 mωj2 (xj + ℏkj t/m) = ℏ2 kj2 /2m(1 + ωj2 t2 ) + ωj2 txj ℏkj + 12 mωj2 x2j
s √ 2
ℏ2 kj2 ωj
2
tx j 2m mωj2 x2j
= (1 + ωj2 t2 ) + q  +
2m 2 1 + ω 2 t2 2(1 + ωj2 t2 )
j
m
= ξj + ω̌j2 x2j . (4.184)
2
172 CHAPTER 4. STATISTICAL THERMODYNAMICS
q
2
where we defined ω̌i ≡ ωi (1+ωi2 t2 )−1/2 . With the substitution dξj = dkj 2ℏ 2 2

m ξj 1 + ωj t
we obtain
 3/2 Z 3/2 −1/2
1 mkB T 1 β (ξx ξy ξz ) dξx dξy dξz
nT oF (x, t) = Q 2
(2π)3 2ℏ2 i (1 + ω i t2)
eβΣj [ξj + 2 ω̌j2 x2j ]
m
/Z + 1
3 Z 3/2 −3/2 2
1 1 ω̃ β ξ 4πξ dξ
= 3 3/2 3 3 , (4.185)
2 π λth ω̄ βΣj [ξ+ m ω̌j j ] /Z + 1
2 x2
e 2

1/3
where ω̄ ≡ (ωx ωy ωz )1/3 and ω̌ ≡ (ω̌x ω̌y ω̌z ) .

1 ω̌ 3 
βµ− 12 βmΣj ω̌j2 x2j

nT oF (x, t) = f 3/2 e . (4.186)
λ3th ω̄ 3

For long times-of-flight t ≫ ω −1 ,


1 1    m 3
β (µ−mx2 /2t2 )
nT oF (x, t) = f3/2 e = ñ(mx/t) . (4.187)
λ3th ω̄ 2 t2 ℏt

×1010
5

4
(cm−3 )

3
nT oF

0
0 100 200 300
r (μm)

Figure 4.17: (code) Time-of-flight velocity distributions after TT oF = 2 ms of (red) a Li


Fermi gas at T = 0 with vanishing initial spatial distribution [7] and (black) a thermal gas
at T = TF .

At low temperatures,
3/2
 m 3 N 8  (mx/ℏt)2
nT oF (x, t) = 1− (4.188)
ℏt KF3 π 2 KF2
 !2 3/2
 m 3 R 3 R mx/ℏt
F  F
= 1− 
ℏt 6π 2 λ (48N λ)
1/3
4.4. QUANTUM DEGENERACY OF AN IDEAL FERMI GAS 173

At high temperatures,
n_{ToF}(\mathbf{x},t) = \frac{1}{\lambda_{th}^3}\frac{1}{\bar\omega^3t^3}\,f_{3/2}\big(e^{\beta(\mu-mx^2/2t^2)}\big)   (4.189)
\approx \frac{1}{\lambda_{th}^3}\frac{1}{\bar\omega^3t^3}\,e^{\beta(\mu-mx^2/2t^2)}
\approx \Big(\frac{mk_BT}{2\pi\hbar^2}\Big)^{3/2}\frac{1}{\bar\omega^3t^3}\Big(\frac{\hbar\bar\omega}{k_BT}\Big)^3 N\,e^{-\beta mx^2/2t^2} \approx N\Big(\frac{m}{2\pi k_BT\,t^2}\Big)^{3/2} e^{-\beta mx^2/2t^2} .

The rms-width is,
r_{ToF}^2 = \int r^2\,n_{ToF}(\mathbf{x},t)\,d^3x   (4.190)
= \frac{1}{\lambda_{th}^3}\frac{\check\omega^3}{\bar\omega^3}\int r^2\,f_{3/2}\big(e^{\beta\mu-\frac{1}{2}\beta m\Sigma_j\check\omega_j^2x_j^2}\big)\,d^3x
= \frac{2}{m\check\omega_r^2N}\int \frac{\varepsilon\,g(\varepsilon)\,d\varepsilon}{e^{\beta(\varepsilon-\mu)}+1} = \frac{k_BT}{m\check\omega_r^2}\frac{g_4(Z)}{g_3(Z)} .
This shows that the width of the time-of-flight distribution can simply be obtained from the spatial distribution by substituting \omega \to \omega/\sqrt{1+\omega^2t^2}. Of course this does not hold for condensed Bose gases.
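A minimal sketch of this substitution for a thermal cloud (mass, temperature and trap frequency are assumed example values for a 6Li cloud, not taken from a specific experiment):

import numpy as np

kB, m = 1.381e-23, 6*1.661e-27            # SI units; 6Li mass assumed
T, omega = 1e-6, 2*np.pi*300              # 1 uK, 300 Hz trap frequency (assumed)

def sigma_tof(t):
    omega_check = omega / np.sqrt(1 + (omega*t)**2)   # omega -> omega/sqrt(1+omega^2 t^2)
    return np.sqrt(kB*T / (m*omega_check**2))

for t in (0, 1e-3, 5e-3):                 # 0, 1 ms and 5 ms time of flight
    print(t, sigma_tof(t))                # tends to the ballistic limit sqrt(kB*T/m)*t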

Example 35 (Equipartition theorem): We find for harmonic traps,
E_{pot,1} = \frac{\int U(\mathbf{x})\,w_{T,\mu}(\mathbf{x},\mathbf{k})\,d^3x\,d^3k}{\int w_{T,\mu}(\mathbf{x},\mathbf{k})\,d^3x\,d^3k} = \frac{1}{(2\pi)^3N}\int \frac{m\omega^2r^2}{2}\,\frac{d^3x\,d^3k}{e^{\beta[\hbar^2k^2/2m+m\omega^2r^2/2-\mu]}+1}
= \frac{16}{\pi N\beta(\beta\hbar\omega)^3}\int \frac{u^4v^2\,du\,dv}{e^{u^2+v^2}/Z+1} ,   (4.191)
which by the symmetry u \leftrightarrow v of the integrand equals the corresponding expression for the kinetic energy,
\frac{1}{(2\pi)^3N}\int \frac{\hbar^2k^2}{2m}\,\frac{d^3x\,d^3k}{e^{\beta[\hbar^2k^2/2m+m\omega^2r^2/2-\mu]}+1} = \frac{\int \frac{\hbar^2k^2}{2m}\,w_{T,\mu}(\mathbf{x},\mathbf{k})\,d^3x\,d^3k}{\int w_{T,\mu}(\mathbf{x},\mathbf{k})\,d^3x\,d^3k} = E_{kin,1} .
This confirms the equipartition theorem for harmonically confined particles, which postulates,
E = E_{kin} + E_{pot} = 2E_{kin} .   (4.192)
In time-of-flight, however, E_{pot} suddenly vanishes.

4.4.4.4 Calibrating the number of atoms

Experimentally, to calibrate N, we can use either the measured value of ⟨k²⟩ at T = 0, which gives µ = E_F = 4E/3 and consequently,
N = \frac{32}{3}\Big(\frac{\hbar^2\langle k^2\rangle}{6m\hbar\bar\omega}\Big)^3 .   (4.193)
Or we determine the temperature T_g where the Boltzmann gas turns into a Fermi gas, 3µ/4 = 3k_BT_g, so that
N = \frac{32}{3}\Big(\frac{k_BT_g}{\hbar\bar\omega}\Big)^3 .   (4.194)
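As a quick numerical illustration of Eq. (4.194) (the crossover temperature and mean trap frequency below are assumed example values):

import numpy as np

kB, hbar = 1.381e-23, 1.055e-34
Tg = 1e-6                                 # assumed crossover temperature, 1 uK
omega_bar = 2*np.pi*140                   # assumed geometric mean trap frequency
print((32/3) * (kB*Tg / (hbar*omega_bar))**3)   # calibrated atom number N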

4.4.5 Density and momentum distribution for anharmonic potentials

4.4.5.1 Width of momentum distribution for anharmonic potentials

If the potential is non-harmonic, the widths of Fermi distributions must in general be calculated numerically. I.e. first η(ε) is determined by integrating for every value of ε the root \sqrt{\varepsilon-U(\mathbf{x})} over the entire volume where U(\mathbf{x}) < \varepsilon, i.e. in the case of cylindrical symmetry,
\eta(\varepsilon) = \frac{(2m)^{3/2}}{2\pi\hbar^3}\int \sqrt{\varepsilon-U(r,z)}\;r\,dr\,dz .   (4.195)
Second, the chemical potential must also be calculated numerically from N = \int \eta(\varepsilon)\,[e^{\beta(\varepsilon-\mu)}+1]^{-1}d\varepsilon, e.g. by searching the zero of the function,
o(Z) = \beta N - \int \frac{\eta(x/\beta)\,dx}{e^x/Z+1} .   (4.196)
Finally, the rms-momentum width of a degenerate Fermi gas is calculated from,
\frac{\langle k^2\rangle}{k_F^2} = \frac{E_1}{E_F} = \frac{1}{NE_F}\int \frac{\varepsilon\,\eta(\varepsilon)\,d\varepsilon}{e^{\beta(\varepsilon-\mu)}+1} .   (4.197)
It is important to note that the temperature cannot be obtained from \hbar^2\langle k^2\rangle/2m = 3Nk_BT any more. Rather, for a given ⟨k²⟩ the parameter β in the integral (4.197) must be fitted to satisfy the equation.
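The recipe can be sketched numerically for the simplest case of a harmonic trap, for which the density of states is η(ε) = ε²/2(ℏω̄)³ (a standard result, here in units ℏω̄ = 1; N and T/T_F are assumed example values): the chemical potential is found by a root search and then inserted into Eq. (4.197),

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hw, N = 1.0, 1e4
EF = hw * (6*N)**(1/3)
kT = 0.3 * EF                                         # assume T = 0.3 T_F
eta = lambda e: e**2 / (2*hw**3)                      # harmonic-trap density of states
occ = lambda e, mu: 1.0 / (np.exp((e - mu)/kT) + 1)

number = lambda mu: quad(lambda e: eta(e)*occ(e, mu), 0, 50*EF)[0] - N
mu = brentq(number, -10*EF, 2*EF)                     # chemical potential from N
E1 = quad(lambda e: e*eta(e)*occ(e, mu), 0, 50*EF)[0] / N
print(mu/EF, E1/EF)                                   # <k^2>/k_F^2 = E1/EF by Eq. (4.197)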
Alternatively, we may assume a polynomial potential for which the density-of-states can be described by η(ε) ∝ ε^n. Then,
\frac{\langle k^2\rangle}{k_F^2} = \frac{1}{E_F}\frac{\int \varepsilon\,\eta(\varepsilon)\,[e^{\beta(\varepsilon-\mu)}+1]^{-1}d\varepsilon}{\int \eta(\varepsilon)\,[e^{\beta(\varepsilon-\mu)}+1]^{-1}d\varepsilon} = \frac{T}{T_F}\frac{(n+1)f_{n+2}(Z)}{f_{n+1}(Z)} .   (4.198)
For a harmonic potential we recover the energy formula,
\frac{\langle k^2\rangle}{k_F^2} = \frac{3T}{T_F}\frac{f_4(Z)}{f_3(Z)} ,   (4.199)
and for hot clouds the classical limit holds,
\frac{\langle k^2\rangle}{k_F^2} = \frac{n+1}{\beta E_F} .   (4.200)
(For a single dimension, must the value be divided by three, i.e. \hbar^2\langle k_j^2\rangle = 2mk_BT f_4(Z)/f_3(Z), setting \varepsilon = \hbar^2k^2/m?)
For a harmonic potential η(ε) ∝ ε² and for a linear potential η(ε) ∝ ε^{7/2}. Intermediate values are possible for non-isotropic traps which are linear in some directions and harmonic in others, e.g. for a radially quadrupolar and axially harmonic trap we expect η(ε) ∝ ε³ and thus E = 4Nk_BT. In general, we may have more complicated

situations, where the trap becomes non-harmonic beyond a certain distance from the origin. In those cases, the density-of-states may be approximated by a series,
\eta(\varepsilon) \propto \varepsilon^2 + \kappa\varepsilon^3 ,   (4.201)
where κ is a small parameter, so that,
\frac{\langle k^2\rangle}{k_F^2} = \frac{1}{E_F}\frac{\int (\varepsilon^3+\kappa\varepsilon^4)(e^{\beta(\varepsilon-\mu)}+1)^{-1}d\varepsilon}{\int (\varepsilon^2+\kappa\varepsilon^3)(e^{\beta(\varepsilon-\mu)}+1)^{-1}d\varepsilon} = \frac{T}{T_F}\frac{3f_4(Z)+12\kappa f_5(Z)}{f_3(Z)+3\kappa f_4(Z)} ,   (4.202)
which in the classical limit gives rise to energies E = 3..4Nk_BT depending on the value of κ.
Such effects must be considered when the time-of-flight method is used for temperature measurements. For example, if we underestimate η(ε) by assuming a harmonic potential at all ε, although the potential is quadrupolar at large ε ≫ k_BT, we get a wrong estimate for the temperature, T_{wrng} = E/3Nk_B instead of T_{corr} = E/4Nk_B.

4.4.5.2 Width of the density distribution for anharmonic potentials

The result also permits to calculate the rms spatial width,
\sum_{j=1}^3 \frac{m}{2}\omega_j^2\langle x_j^2\rangle = \frac{3}{2}k_BT\,\frac{f_4(Z)}{f_3(Z)} .   (4.203)
Let us for simplicity assume ω_i = ω_j. Then in the classical limit,
\frac{\langle x_j^2\rangle}{R_F^2} = \frac{\langle x^2\rangle}{3R_F^2} = \frac{E_1}{3E_F} = \frac{(1..1.3)\,T}{T_F} .   (4.204)
If the potential is non-harmonic, the widths of Fermi distributions must in general be calculated numerically. We may use the same results for the density-of-states and the chemical potential as for the momentum width calculations. Then,
\frac{\langle x_j^2\rangle}{R_F^2} = \frac{E_1}{3E_F} = \frac{1}{3E_F}\frac{\int \varepsilon\,\eta(\varepsilon)\,[e^{\beta(\varepsilon-\mu)}+1]^{-1}d\varepsilon}{\int \eta(\varepsilon)\,[e^{\beta(\varepsilon-\mu)}+1]^{-1}d\varepsilon} .   (4.205)

4.4.5.3 Momentum distribution for a classical gas

For high temperatures, T → ∞, we should recover the ideal Boltzmann gas situation, f_{3/2}(z) → z,
\tilde n_{T\to\infty}(\mathbf{k}) = \frac{1}{(2\pi)^3}\int \frac{4\pi\rho^2\,d\rho}{e^{\beta[\varepsilon+m\omega_{ho}^2\rho^2/2-\mu]}} = \frac{1}{2\pi^2}\,e^{-\beta(\varepsilon-\mu)}\int e^{-\beta m\omega_{ho}^2\rho^2/2}\rho^2\,d\rho   (4.206)
= \Big(\frac{1}{2\pi\beta m\omega_{ho}^2}\Big)^{3/2}e^{-\beta(\varepsilon-\mu)} = \lambda_{th}^{-3}\,a_{ho}^6\,e^{\beta(\mu-\varepsilon)} .
Since the chemical potential satisfies the normalization \int \tilde n_{T\to\infty}(\mathbf{k})\,d^3k = N,
\tilde n_{T\to\infty}(\mathbf{k}) = \Big(\frac{1}{2\pi\beta m\omega_{ho}^2}\Big)^{3/2} N\Big(\frac{\hbar\omega_{ho}}{k_BT}\Big)^3 e^{-\beta\varepsilon} = N\Big(\frac{\hbar^2}{2\pi mk_BT}\Big)^{3/2} e^{-\hbar^2k^2/2mk_BT} .   (4.207)

This is easy to integrate by dimensions, so that,
\tilde n_{T\to\infty}(k_z) = \int_{-\infty}^\infty\!\!\int_{-\infty}^\infty \tilde n_{T\to\infty}(\mathbf{k})\,dk_x\,dk_y = N\sqrt{\frac{\hbar^2}{2\pi mk_BT}}\, e^{-\hbar^2k_z^2/2mk_BT} .   (4.208)
The rms-width of this distribution is,
\Delta k_z = \frac{\sqrt{mk_BT}}{\hbar} .   (4.209)

4.4.6 Signatures for quantum degeneracy of a Fermi gas

Whether an atom is a fermion or a boson uniquely depends on its total spin. Half-integer spin particles are fermions, integer spin particles are bosons. E.g. ⁸⁷Rb atoms have in the ground state J = 1/2, I = 3/2, hence integer F, and are therefore bosons. Ca⁺ ions have J = 1/2 and no hyperfine structure so that F is half-integer, and are therefore fermions. ⁶Li has half-integer F and is therefore a fermion.
For a composite particle the quantum statistical nature may depend on the interaction strength of the partners. For weak interaction, e.g. binding via a Feshbach resonance, the total spins of the partners will couple to a total spin, which determines the nature of the composite particle. A fermion pairing with a fermion or a boson pairing with a boson will be bosons. A fermion pairing with a boson will be a fermion. Composite trimers may be either bosonic or fermionic depending on the coupling scheme. Can the quantum nature change with the tightness of the binding? What is the total spin of a deeply bound molecule? [47, 5, 22], [23, 30, 46]

4.4.6.1 Optical density of a Fermi gas

With the local density of a Fermi gas,
n_{loc} = \frac{k_F^3}{3\pi^2}   (4.210)
the optical density at T = 0 is,
\int\sigma n\,dy = \frac{8\sigma N}{\pi^2R_F^3}\int_{-R_F}^{R_F}\Big(1-\frac{x^2+y^2}{R_F^2}-\frac{z^2}{Z_F^2}\Big)^{3/2}dy   (4.211)
= \frac{8\sigma N}{\pi^2R_F^3}\Big(1-\frac{x^2}{R_F^2}-\frac{z^2}{Z_F^2}\Big)^{3/2}\int_{-R_F}^{R_F}\Big(1-\frac{y^2}{R_F^2-x^2-R_F^2z^2/Z_F^2}\Big)^{3/2}dy .
Writing a = R_F/\sqrt{R_F^2-x^2-R_F^2z^2/Z_F^2},
\int\sigma n\,dy = \frac{8\sigma N}{\pi^2R_F^2a^4}\int_{-a}^{a}(1-\tilde y^2)^{3/2}d\tilde y   (4.212)
= \frac{2\sigma N}{\pi^2R_F^2a^4}\Big(9a\sqrt{1-a^2}-2a^3\sqrt{1-a^2}+3\arcsin a\Big) .
In the center, a = 1,
\int\sigma n\,dy = \frac{3N\sigma}{\pi R_F^2} = \frac{9m\omega_r^2N}{k_L^2E_F}   (4.213)
(for the resonant cross section σ = 6π/k_L²), such that for E_F ≃ k_B × 1 µK we expect n_{loc} ≃ 4 × 10^{12} cm^{-3}.
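The estimate can be checked in a few lines (assuming a 6Li gas with E_F = kB x 1 µK):

import numpy as np

kB, hbar, m = 1.381e-23, 1.055e-34, 6*1.661e-27    # SI units; 6Li mass assumed
kF = np.sqrt(2*m*kB*1e-6) / hbar
print(kF**3 / (3*np.pi**2) * 1e-6, 'cm^-3')        # ~4e12 cm^-3, as quoted above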



Figure 4.18: (code) (a) Radial momentum distribution n(k) and (b) distribution of momentum classes in the direction of k_z for a Fermi gas at T/T_F = 0.2 (red solid), a classical gas (black), and a Fermi gas at T = 0 (red dash-dotted). Parameters: total atom number N = 20000, temperature T = 0.5 µK, Fermi temperature T_F = 2.446 µK.

4.4.6.2 'Pauli blocking' of sympathetic cooling

For a harmonic trap U = µB = mω_r²r²/2 the rms-radius of a thermal cloud,
r_{rms} = \sqrt{\frac{k_BT}{m\omega_r^2}} = \sqrt{\frac{k_BT}{\mu\,\partial_r^2B}} ,   (4.214)
is independent of the atomic mass. This means that a Li and a Rb cloud in the same harmonic trap at the same temperature have the same radius. This ensures good overlap. E.g. at T = 10 µK, assuming the Rb secular frequencies ω_r ≃ 2π × 300 Hz and ω_z ≃ 2π × 30 Hz, we expect r_{rms} = 16 µm and z_{rms} = 160 µm. However, below the temperature 0.5T_F, which is T_F ≃ 1 µK for N_F = 10⁴, the quantum pressure stops the reduction of the fermion cloud while cooling. This eventually reduces the overlap with the boson cloud, disconnects the two clouds and stops the evaporative cooling. On the other hand, the interaction energy of the boson cloud also increases its size when the Rb cloud approaches the critical temperature T_c ≃ 0.6 µK for N_B = 10⁶.
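The numbers quoted in this paragraph can be reproduced with a short script (a sketch assuming that Li experiences the same trapping potential as Rb, i.e. frequencies scaled by sqrt(m_Rb/m_Li)):

import numpy as np

kB, hbar, u = 1.381e-23, 1.055e-34, 1.661e-27
mRb, mLi, T = 87*u, 6*u, 10e-6
wr, wz = 2*np.pi*300, 2*np.pi*30                     # Rb secular frequencies
print(np.sqrt(kB*T/(mRb*wr**2)), np.sqrt(kB*T/(mRb*wz**2)))   # ~16 um and ~160 um

wbar_Rb = (wr*wr*wz)**(1/3)
wbar_Li = wbar_Rb * np.sqrt(mRb/mLi)                 # same potential, lighter mass
print(hbar*wbar_Li*(6*1e4)**(1/3)/kB)                # T_F ~ 1 uK for N_F = 1e4
print(0.94*hbar*wbar_Rb*1e6**(1/3)/kB)               # T_c ~ 0.6 uK for N_B = 1e6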
The Pauli blocking of sympathetic cooling is a signature for the advent of quantum statistics [15, 24, 45]. It is due to a reduced mobility (or better, a reduced available phase space in collisions) of the atoms and is not to be confused with the prohibition of s-wave collisions due to the Pauli exclusion principle. Furthermore, elastic collisions are suppressed [14], because atoms cannot be scattered into occupied trap levels [31, 48, 26, 27].

4.4.6.3 Superfluid suppression of sympathetic cooling

The fermions inside the bosonic cloud can be regarded as impurities. If they travel too slowly, v < c, and if the condensed fraction is too large, the motion will be frictionless and thermalization stops. If they travel fast, quasiparticles are excited, which can be removed by evaporation. With the typical velocity of sound in the BEC, c = \hbar\sqrt{16\pi na}/2m_B \approx 2 mm/s, or \frac{m}{2}c^2 \approx k_B \times 20 nK, we see that this is no real danger.

4.4.6.4 Component separation

If the interspecies interaction is stronger than the inter-bosonic interaction, the components may separate [43]. Otherwise a small fermionic cloud stays inside the BEC.

4.4.6.5 Excess energy modifies the 2nd moment

Independently of any model, one can simply look for deviations from a Gaussian (the interaction energy plays no role for the fermions). One can also calculate the 2nd moment E = \int E_{kin}(k)\,n(k)\,dk, where n(k) is measured in time-of-flight and E_{kin} = \hbar^2k^2/2m.

4.4.6.6 Modification of light scattering


The unavailability of final momentum states inhibits scattering in a similar way as the
Lamb-Dicke effect. Forward scattering is suppressed, because all small momentum
states are occupied. Furthermore, spontaneous emission is suppressed like in photonic
band gaps. However, here it is rather an atomic momentum band gap. Could it be
that because scattering is suppressed, in-situ images of fermions are hampered?
A condition for this effect to play a role is krec ≪ kF . For Li the temperature
must be kB TF = ℏ2 kF2 /2m = ℏω̄(6N )1/3 ≫ ℏ2 kL 2
/2m ≈ kB × 3 µK. I.e. we need quite
large Fermi gases.

4.4.6.7 Hole heating


Loss processes that remove particles from an atom trap leave holes behind in the
single particle distribution if the trapped gas is a degenerate fermion system. The
appearance of holes increases the temperature, because of an increase in the energy
share per particle if cold particles are removed. Heating is significant if the initial
temperature is well below the Fermi temperature. Heating increases the temperature
to T > TF /4 after half of the systems lifetime, regardless of the initial temperature.
The hole heating has important consequences for the prospect of observing Cooper
pairing in atom traps.

4.4.7 Fermi gas in reduced dimensions

In n dimensions with the energy ε = ap^s + br^t [40] we have to generalize the results of the last chapter,
N = g\,\frac{\Gamma(\frac{n}{s}+1)\,\Gamma(\frac{n}{t}+1)}{(2\hbar)^n\,a^{n/s}\,b^{n/t}\,\Gamma(\frac{n}{2}+1)}\,(k_BT)^{n/s+n/t}\,f_{n/s+n/t}(z) .   (4.215)
This gives for a harmonic trap, where ε = \frac{1}{2m}p^2 + \frac{m}{2}\omega^2r^2, and with the spin degeneracy factor g = 1,
N = \Big(\frac{k_BT}{\hbar\omega}\Big)^n f_n(z) .   (4.216)
The Fermi energy again follows from Sommerfeld's expansion,
E_F = (n!\,N)^{1/n}\,\hbar\omega .   (4.217)
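As a quick illustration of Eq. (4.217) (atom number and trap frequency are assumed example values):

import numpy as np
from math import factorial

hbar, kB = 1.055e-34, 1.381e-23
N, omega = 1e4, 2*np.pi*100
for n in (1, 2, 3):
    EF = (factorial(n)*N)**(1/n) * hbar*omega
    print(n, EF/kB*1e6, 'uK')            # Fermi energy in n = 1, 2, 3 dimensions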



We now assume a 1D potential V = \frac{m}{2}\omega_z^2z^2 embedded in a 3D trap. A true 1D situation arises when the atoms occupy all low-lying axial levels with the lowest radial vibrational quantum number, i.e. E_F ≪ ℏω_r, which gives,
N \ll \frac{\omega_r}{\omega_z} .   (4.218)
Such quantum degenerate 1D fermion gases realize the so-called Luttinger liquid. One of the hallmarks of Luttinger liquids is spin-charge separation.

Example 36 (Estimations for 1D): Let us consider a Fermi gas in a very elongated microtrap: ω_r = \sqrt{7/87}\,2π × 1.4 kHz and ω_z = \sqrt{7/87}\,2π × 15 Hz for Rb. With N_{Li} = 10⁵ the Fermi temperature is as high as T_F ≃ 5 µK. However we need N ≪ 100 to see 1D features.
Assume ε = \frac{1}{2m}p^2 + \frac{m^2b^4}{4}r^4, then
N = \frac{1}{(\hbar b)^n}\frac{\Gamma(\frac{n}{4}+1)}{\Gamma(\frac{n}{2}+1)}\,(k_BT)^{3n/4}\,f_{3n/4}(z)
E_F \approx (\hbar b)^{4/3}\Bigg(N\,\frac{\Gamma(\frac{n}{2}+1)\,\Gamma(\frac{3n}{4}+1)}{\Gamma(\frac{n}{4}+1)}\Bigg)^{4/3n} .
In 1D,
N = \frac{1.02}{\hbar b}(k_BT)^{3/4}f_{3/4}(z)
E_F \approx 0.87\,(N\hbar b)^{4/3} .

4.4.7.1 Fermi degeneracy

A completely analogous treatment to the Bose gas yields for the case of fermions,
E = \frac{3}{2}k_BTN\big(1 + 2^{-5/2}n\lambda_{th}^3 + ...\big) .   (4.219)
Bosonic ⁴He behaves very differently from fermionic ³He, which stays gaseous to lower temperatures and becomes a degenerate Fermi gas before becoming fluid. Fermi gases have a higher pressure than classically predicted.
Electrons in a solid are characterized by a high density and a low mass. Hence, nλ_{th}³ ≈ 10³. The interelectronic repulsion is largely canceled by the attraction of the ion cores, so that the electrons may be considered an ideal gas. For the density-of-states we get the same formula as for bosons in a box, multiplied with the factor 2 to account for the spin degree of freedom. Thus, from
N = \int_0^{E_F}\rho\,f_{FD}\,d\varepsilon ,   (4.220)
we derive the Fermi energy E_F = \frac{h^2}{8m}(3N/\pi V)^{2/3}. The free electron gas is deep in the Fermi regime; classical statistics may only be used at temperatures above T > 10⁵ K. Hence the energy is temperature-independent and the heat capacity vanishes, i.e. the electron gas does not contribute to the heat capacity of a metal. It is only at very low temperatures of a few K, when the heat capacity of the atomic lattice drops due to the underlying bosonic statistics, that the electrons contribute.
Now let the metallic box potential have a finite depth. An electron can then leave the metal if it surmounts the exit work (work function) W = −V_{min} − E_F ≃ 10 eV, which is the difference between the potential depth and the Fermi energy. At high temperatures, the tail of the Fermi-Dirac distribution can leak into the unbound regime, which gives rise to thermionic emission. This feature explains the existence of contact potentials: metals with different W and E_F brought into contact exchange charges until their Fermi levels are at the same height.
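A numerical sketch for a typical metal (the conduction electron density of copper, n ≈ 8.5 × 10²² cm⁻³, is used as an assumed example):

import numpy as np

h, me, kB, eV = 6.626e-34, 9.109e-31, 1.381e-23, 1.602e-19
n = 8.5e28                                       # electron density in m^-3
EF = h**2/(8*me) * (3*n/np.pi)**(2/3)            # Fermi energy formula derived above
print(EF/eV, 'eV', EF/kB, 'K')                   # ~7 eV, i.e. T_F ~ 1e5 K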

4.4.8 Exercises
4.4.8.1 Ex: Li Fermi gas
Programs on Li Fermi gases.

4.5 Further reading


4.5.1 on quantum statistics
R. DeHoff, Thermodynamics in Material Science [ISBN]

H.B. Callen, Thermodynamics [ISBN]


C. Kittel, Introduction to Solid State Physics [ISBN]
A.R. West, Basic Solid State Chemistry [ISBN]
D. Mc Quarry, Statistical Thermodynamics [ISBN]

J. Walraven, Quantum gases [http]


G.T. Landi, Grand canonical ensemble [http]

4.5.2 on ideal quantum gases


V.S. Bagnato et al., Bose-Einstein Condensation in an External Potential [DOI]
D.A. Butts et al., Trapped Fermi gases [DOI]
R.J. Dodd et al., Two-gas description of dilute Bose-Einstein condensates at finite
temperature [DOI]
Chapter 5

Quantum theory of
non-relativistic scalar particle
fields
In the preceding sections we mostly restricted ourselves to single particles, except in the context of the electron shell of atoms, where we presented the Pauli principle in Sec. ??. In the context of optical lattices [see Eq. (??)] we introduced particle creation and annihilation operators, which we only used to simplify the notation without considering the possible presence of several atoms per lattice site. On the other hand, in the context of the harmonic oscillator (Sec. ??), we allowed for large populations of a quantum state by quasi-particles identified as phonons or photons (see Sec. ??). The commutation rule that the field operators of the quasi-particles had to obey was the expression of their bosonic nature.
The question now arises how to treat the presence of several massive particles. In
quantum mechanics (differently from classical mechanics) there is no way of differen-
tiating identical particles, and this feature has far-reaching consequences in the way
of counting the number of possible microstates that a system may adopt. The field
of physics preoccupied with this counting is quantum statistics.
The quantum mechanical state of a system is described by a global wavefunc-
tion, which must somehow be composed from the wavefunctions of all the particles
of the system in such a way that the indistinguishability of particles is respected.
In mathematical terms this means that the global wavefunction must satisfy some
symmetry requirements upon particle exchange. We will show in Sec. 5.1.2 that the
system’s Hamiltonian must commute with the operator describing particle exchange,
and consequently the eigenstates of the Hamiltonian must also be eigenstates of the
particle exchange operator. We will find that only two eigenstates are possible: a
symmetric and an antisymmetric one. Which one of the symmetries the system must
exhibit depends on the nature of the particles: Particles behaving symmetrically under
particle exchange are called bosons, particles behaving antisymmetrically are called
fermions. Obviously, the symmetry requirement drastically limits the ways how the
single-particle wavefunctions may be joined to a global wavefunction. We will see that
bosons have increased probability to occupy the same quantum state, while fermions
cannot occupy the same quantum state at all.
The quantum statistical nature of particles has far-reaching consequences for the description of the many-body problem, both when the particles do not interact with each other and even more when interactions between the atoms couple the single-atom states.
The formalism that rigorously enforces the exchange symmetry of the many-body
wavefunction is known as the second quantization method, which will be introduced
in Sec. 5.2, although a less confusing name would be occupation number (or Fock)
representation. In Sec. 5.2.2 we introduce construction operators for the creation or
annihilation of particles in properly symmetrized many-body states and show that the
proper quantum statistics is enforced by an algebra defined through the commutation
relations between the construction operators.
All this being said it is often not so obvious what is actually meant by particle.
In most cases, the particles under study are composed from other particles bound
together through strong forces. E.g. protons and neutrons bound together by nuclear
forces form a nucleus, electrons bound to a nucleus by Coulomb forces form an atom,
and atoms bound together by van der Waals forces form a molecule. The bosonic
or fermionic nature of composite particles somehow depends on the nature of the
components and on the way they interact. We consider a composite system as a
particle as long as it has a well-defined internal structure. If all particles of a system have the same internal structure they are called identical. When their structure breaks down, for instance induced by strong inter-particle forces, new quantum numbers
appear (related to the symmetries of the new structure), but the statistical nature
under exchange of complete particles (e.g. all components of an atom) is conserved
(see Sec. 5.1.2) [49].

5.1 Quantizing scalar fields


In the following, we will specifically consider atomic gases, although most findings can be translated to other systems, such as electrons in a metal.

5.1.1 Wavefunction (anti-)symmetrization for atom pairs


The physical situation under study is here an external potential U(r) with the shape
of a cubic box of length L and volume V = L3 . Introducing periodic boundary
conditions, ψ(x + L, y + L, z + L) = ψ(x, y, z), the Schrödinger equation for the
spatial motion of a single atom in the box can be written as,

ℏ2 2
− ∇ ψk (r) = εk ψk (r) , (5.1)
2m

where the eigenfunctions and corresponding eigenvalues are given by
\psi_{\mathbf{k}}(\mathbf{r}) = \tfrac{1}{\sqrt V}\,e^{\imath\mathbf{k}\cdot\mathbf{r}} \quad\text{and}\quad \varepsilon_k = \frac{\hbar^2k^2}{2m} .   (5.2)

The wavefunctions ψk (r) represent plane wave solutions, normalized to the volume of
the box, with k the wave vector of the atom, k = |k| = 2π/λdB the wave number,
and λdB the de Broglie wavelength. The periodic boundary conditions give rise to a
discrete set of wavenumbers, kα = (2π/L)nα with nα = 0, ±1, ±2, ... and α = x, y, z.

The Hamiltonian for the motion of two atoms with interatomic interaction V(r_{12}) and confined by the cubic box potential U(r) defined above is given by,
\mathcal H = \sum_{i=1,2}\Big(-\frac{\hbar^2}{2m_i}\nabla_i^2 + \mathcal U(\mathbf{r}_i)\Big) + \mathcal V(r_{12}) .   (5.3)

When the interatomic interaction can be neglected, the Schrödinger equation takes the form¹,
\Big(-\frac{\hbar^2}{2m_1}\nabla_1^2 - \frac{\hbar^2}{2m_2}\nabla_2^2\Big)\Psi_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2) = E_{\mathbf{k}_1,\mathbf{k}_2}\Psi_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2) .   (5.4)
In this limit, we have complete separation of variables so that the pair solution can be written in the form of a product wavefunction,
\Psi_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2) = \tfrac{1}{V}\,e^{\imath\mathbf{k}_1\cdot\mathbf{r}_1}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_2} ,   (5.5)
with \mathbf{k}_i the wavevector of atom i, quantized as k_{i\alpha} = \frac{2\pi}{L}n_{i\alpha} with n_{i\alpha} = 0, \pm1, \pm2, ....
This wavefunction is normalized to unity (one pair). The energy and momentum
eigenvalues are,

E_{\mathbf{k}_1,\mathbf{k}_2} = \frac{\hbar^2k_1^2}{2m_1} + \frac{\hbar^2k_2^2}{2m_2} \quad\text{and}\quad \mathbf{P} = \hbar\mathbf{k}_1 + \hbar\mathbf{k}_2 .   (5.6)
Hence, the total momentum is not affected by particle exchange, but the energy,

m1 ̸= m2 =⇒ Ek1 ,k2 ̸= Ek2 ,k1 for k1 ̸= k2 . (5.7)

Importantly, only for pairs of unlike atoms the product wavefunctions (5.5) represent
uniquely defined quantum mechanical eigenstates for the eigenvalues Ek1 ,k2 . By unlike
we mean that the atoms may be distinguished from each other because they are of
different species. For identical atoms, i.e. atoms of the same species such that m1 =
m2 , the situation is fundamentally different. In this case the product wavefunctions
(5.5) are degenerate with pair wavefunctions in which the atoms are exchanged,

m1 = m2 =⇒ Ek1 ,k2 = Ek2 ,k1 also for k1 ̸= k2 . (5.8)

This is called exchange degeneracy (two states, one energy). This phenomenon shows
that we cannot determine which atom is in which state by measuring the energy of
the pair. Actually, it is fundamentally impossible to distinguish the atoms by any
measurement because (by definition of being identical) the expectation values of all
observables of the pair are invariant under exchange of the atoms. For identical atoms
in the same state (k1 = k2 ) the exchange degeneracy is manifestly absent because there
is nothing to be distinguished.
The exchange degeneracy implies that any linear combination of the type,
\Psi_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2) = \frac{1}{V}\frac{1}{\sqrt{|c_1|^2+|c_2|^2}}\big(c_1\,e^{\imath\mathbf{k}_1\cdot\mathbf{r}_1}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_2} + c_2\,e^{\imath\mathbf{k}_1\cdot\mathbf{r}_2}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_1}\big)   (5.9)
¹We use the letter ψ to denote single-particle orbitals and the letter Ψ for many-particle orbitals.

represents a properly normalized energy eigenstate of the pair. For example, we can choose c_1 = 1 and c_2 = ±1, yielding,
\Psi^{\pm}_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2) = \frac{1}{V}\frac{1}{\sqrt 2}\big(e^{\imath\mathbf{k}_1\cdot\mathbf{r}_1}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_2} \pm e^{\imath\mathbf{k}_1\cdot\mathbf{r}_2}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_1}\big) .   (5.10)
These are called the symmetric and antisymmetric eigenstates of the pair. In the absence of exchange degeneracy (i.e. for k_1 = k_2) Eq. (5.5) represents the proper solution. This state is symmetric and unit normalized to start with, so there is no need for explicit symmetrization. Do the Exc. 5.1.3.1.

5.1.2 Identical particles and exchange operator


Linear combinations of the type (5.9) other than the symmetric or the antisymmetric
one were never observed for pairs of identical particles. Pair states for particles of
integral spin, i.e. bosons, are always symmetric and pair states for particles of half-
integral spin, i.e. fermions, are always antisymmetric. Apparently, nature does not
allow us to neglect the internal degrees of freedom when studying ensembles of identical
particles, which is the essence of the spin-statistics theorem. Therefore, we repeat the
discussion of the preceding section now including the spin, whose role is to represent
the internal structure of the particle,

Ψ(r1 , σ1 ; r2 , σ2 ) , (5.11)

where r1 and r2 are the position coordinates and σ1 and σ2 the spin coordinates
of the particles 1 and 2, respectively. The squared modulus of the wavefunction,
|Ψ(r1 , σ1 ; r2 , σ2 )|2 , corresponds to the probability of observing particle 1 at position
r1 in spin state σ1 with particle 2 at position r2 in spin state σ2 . As before, two
particles are called identical, if there is no physical way to establish whether or not
the particles have been exchanged, a condition which is satisfied for particles with
identical internal structure.
To describe the particle exchange, we introduce the exchange operator P. For two
identical particles in an arbitrary pair state Ψ(r1 , σ1 ; r2 , σ2 ) the operator P is defined
by
PΨ(r1 , σ1 ; r2 , σ2 ) ≡ Ψ(r2 , σ2 ; r1 , σ1 ) , (5.12)

The effect of this operator is to exchange the particle labels. As tracking is impossible
in quantum mechanics, we have no physical means to determine whether or not two
identical particles have been exchanged, i.e. |Ψ(r1 , σ1 ; r2 , σ2 )|2 = |Ψ(r2 , σ2 ; r1 , σ1 )|2 .
Because P is a norm-conserving operator, ⟨Ψ|P † P|Ψ⟩ = 1, we have

P †P = 1 . (5.13)

Furthermore, exchanging the particles twice must leave the pair state unchanged. Therefore, we have,
P^2 = 1 ,   (5.14)
and writing P^\dagger = P^\dagger PP = P, we see that P is Hermitian, i.e. the eigenvalues are real and have to be ±1 for the norm to be conserved.

Any pair state Ψ(r1 , σ1 ; r2 , σ2 ) can be written as the sum of a symmetric (+) and
an antisymmetric (−) part, as we will demonstrate in Exc. 5.1.3.2. Therefore, the
eigenstates of P span the full Hilbert space of the pair and P is not only Hermitian
but also an observable. We do not deepen the discussion about the relation between
spin and statistics and only mention that the bosons turn out to have integral total
spin and the fermions half-integral total spin, so that the eigenvalues of the exchange
operator are given by,

P\Psi(\mathbf{r}_1,\sigma_1;\mathbf{r}_2,\sigma_2) = e^{-2\pi\imath s}\,\Psi(\mathbf{r}_1,\sigma_1;\mathbf{r}_2,\sigma_2) ,   (5.15)

where s is the (semi-)integral spin of the particle. Apparently, for identical particles
the pair wavefunction has to be an eigenfunction of the exchange operator, i.e. the
exchange symmetry of the wavefunction is conserved in time and the pair Hamiltonian
Ĥ is invariant under exchange of the two particles,

[P, Ĥ] = 0 , (5.16)

so that P and Ĥ share a complete set of eigenstates.

5.1.2.1 Fermions and the Pauli principle


Let us study the case of two fermions in the pair state Ψ(r1 , σ1 ; r2 , σ2 ). As we are
dealing with fermions we know that P must have the eigenvalue −1. Exploiting the
definition (5.12) we get,

Ψ(r2 , σ2 ; r1 , σ1 ) = PΨ(r1 , σ1 ; r2 , σ2 ) = −Ψ(r1 , σ1 ; r2 , σ2 ) . (5.17)

For fermions in the same spin state (σ1 = σ2 ≡ σ) and at the same position
(r1 = r2 ≡ r) this condition can only be satisfied, if Ψ(r, σ; r, σ) = −Ψ(r, σ; r, σ).
Hence, two fermions in the same spin state have zero probability to be found at the
same position. Therefore, the fermions show correlated motion. Importantly, these
kinematic correlations occur irrespective of the presence or absence of forces between
the particles.
We first look at the symmetry of the spin states. A Clebsch-Gordan decomposition (see Secs. ?? and ??),
|S,M_S\rangle = \sum_{m_{s1},m_{s2}} |s_1,m_{s1};s_2,m_{s2}\rangle\langle s_1,m_{s1};s_2,m_{s2}|S,M_S\rangle   (5.18)
yields for the \tfrac{1}{2}\times\tfrac{1}{2} two-fermion spin system,
|1,+1\rangle = |\uparrow\uparrow\rangle
|1,0\rangle = \sqrt{\tfrac{1}{2}}\,(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle)     (S = 1)
|1,-1\rangle = |\downarrow\downarrow\rangle   (5.19)
|0,0\rangle = \sqrt{\tfrac{1}{2}}\,(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle)     (S = 0)
with total wavefunction (see Sec. ??),
\Psi_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\sigma_1;\mathbf{r}_2,\sigma_2) = \Psi_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2)\,|S,M_S\rangle .   (5.20)



Hence, assuming the fermions to be in one of the symmetric spin states, S = 1, the orbital wavefunction must be antisymmetric,
\Psi^A_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2)\,|1,M_S\rangle = \tfrac{1}{V}\sqrt{\tfrac{1}{2}}\,\big(e^{\imath\mathbf{k}_1\cdot\mathbf{r}_1}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_2} - e^{\imath\mathbf{k}_1\cdot\mathbf{r}_2}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_1}\big)\,|1,M_S\rangle .   (5.21)

If the fermions are in different orbital states (k_1 ≠ k_2) this gives rise to the above mentioned kinematic correlations. For two fermions in the same orbital state (k_1 = k_2 = k) the total wavefunction Eq. (5.20) vanishes and also its norm |\Psi^-_{\mathbf{k},\mathbf{k}}(\mathbf{r}_1,\mathbf{r}_2)|^2 is zero. Apparently two (identical) fermions cannot occupy the same state; such a coincidence is entirely destroyed by interference. This is the essence of Pauli's exclusion principle which holds for all fermions. Starting from an antisymmetric spin state the orbital part should be symmetric,
\Psi^S_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2)\,|0,0\rangle = \tfrac{1}{V}\tfrac{1}{2}\big(e^{\imath\mathbf{k}_1\cdot\mathbf{r}_1}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_2} + e^{\imath\mathbf{k}_1\cdot\mathbf{r}_2}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_1}\big)\,(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle) .   (5.22)

In this case, no restriction is found for the relative positions of the fermions.
We found that the quantum mechanical indistinguishability of identical particles
affects the distribution of particles over the single-particle states. Remarkably, these
kinematic correlations happen in the complete absence of forces between the particles:
it is a purely quantum statistical effect.
Example 37 (Kinematic correlations in the ortho and para isomers of hydrogen and deuterium): A consequence of such correlations can be observed in the rotational properties of homonuclear diatomic molecules: depending on the total spin of the molecules either the even or the odd rotational levels are observed. This is illustrated in Tab. 5.1 for the ortho and para isomers of hydrogen (bosonic atoms) and deuterium (fermionic atoms).

Table 5.1: In the ortho and para isomers of the hydrogen and deuterium molecule the distribution over the rotational levels is affected by quantum statistics (only even or odd levels) [49].

  species     J            I      Ψ_nucl   Ψ_rot   Ψ_vib   ¹Σ⁺_g   Ψ_tot
  ortho-H₂    1, 3, 5, ..   1      S        A       S       A       S
  ortho-D₂    0, 2, 4, ..   1      A        S       S       A       A
  para-H₂     0, 2, 4, ..   0      A        S       S       A       S
  para-D₂     1, 3, 5, ..   0, 2   S        A       S       A       A

5.1.2.2 Bosons and normalization

For bosons the energy eigenfunctions must be symmetric under exchange of the atoms. For spinless bosons (like ⁴He atoms) or bosons in symmetric spin states this suggests to use Eq. (5.9) in the form,
\Psi^S_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2) = \tfrac{1}{V}\sqrt{\tfrac{1}{2}}\,\big(e^{\imath\mathbf{k}_1\cdot\mathbf{r}_1}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_2} + e^{\imath\mathbf{k}_1\cdot\mathbf{r}_2}e^{\imath\mathbf{k}_2\cdot\mathbf{r}_1}\big) .   (5.23)

For k_1 ≠ k_2 this form is appropriate because it is symmetric and also has the proper normalization of unity, ⟨k_1,k_2|k_1,k_2⟩ = 1 (see Exc. 5.1.3.2). For k_1 = k_2 = k the situation is different. Eq. (5.23) then yields norm 2 rather than the physically required value unity. In this case the properly symmetrized and normalized wavefunction is the product wavefunction,
\Psi_{\mathbf{k},\mathbf{k}}(\mathbf{r}_1,\mathbf{r}_2) = \tfrac{1}{V}\,e^{\imath\mathbf{k}\cdot\mathbf{r}_1}e^{\imath\mathbf{k}\cdot\mathbf{r}_2} ,   (5.24)
with ⟨k,k|k,k⟩ = 1. Explicit symmetrization is superfluous because the product wavefunction is symmetrized to begin with. The general form (5.23) may still be used provided the normalization is corrected for the degeneracy of occupation (in this case we have to divide by an extra factor √2). For bosons in antisymmetric spin states we require the motional wavefunction to be also antisymmetric. Like in the fermion case this gives rise to kinematic correlations.

5.1.2.3 Spin orbitals and Slater determinants

Since internal and external degrees of freedom are related through the spin-statistics theorem it is important to treat them on equal footing by writing the wavefunction in the form of a multi-valued function called spin orbital. For an atom at position |r⟩ and in spin state |σ⟩ the spin orbitals are (2s+1)-valued functions of the form,
\psi_{\mathbf{k},s,m_s}(\mathbf{r},\sigma) = \phi_{\mathbf{k}}(\mathbf{r})\chi_{s,m_s}(\sigma) = \tfrac{1}{\sqrt V}e^{\imath\mathbf{k}\cdot\mathbf{r}}\chi_{s,m_s}(\sigma) .   (5.25)
With this, Eq. (5.21) can be written in the form of a determinant. Taking, for instance, the spin state |S,M_S\rangle = |\uparrow,\uparrow\rangle, we write, introducing the abbreviation (i) ≡ (\mathbf{r}_i,\sigma_i),
\Psi^-_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2)\,|\uparrow,\uparrow\rangle = \sqrt{\tfrac{1}{2}}\begin{vmatrix}\psi_{\mathbf{k}_1\uparrow}(1) & \psi_{\mathbf{k}_2\uparrow}(1)\\ \psi_{\mathbf{k}_1\uparrow}(2) & \psi_{\mathbf{k}_2\uparrow}(2)\end{vmatrix} .   (5.26)

Example 38 (Representation-free notation): In Dirac’s representation-free


notation,

ϕk (r1 ) = ⟨r1 |k⟩ = ⟨r|k⟩1 and χs,ms (σ1 ) = ⟨σ1 |s, ms ⟩ = ⟨σ|s, ms ⟩1

denote the spatial and the spin wavefunction of particle 1 in state |k, s, ms ⟩.
Hence,

ψk (r1 , σ1 ) = ϕk (r1 )χs,ms (σ1 ) = ⟨r|k⟩1 ⟨σ|s, ms ⟩1 = ⟨r, σ|k, s, ms ⟩1

Also, for two particles in states |ki , kj ⟩,

Ψki ,kj (r1 , r2 ) = ⟨r1 |⟨r2 |ki , kj ⟩ .

Hence,

Θki ,si ,msi ;kj ,sj ,msj (r1 , σ1 ; r2 , σ2 ) = ⟨r1 |⟨r2 |⟨σ1 |⟨σ2 |ki , si , msi ; kj , sj , msj ⟩ .

Similarly, the symmetric spin state |1,0⟩ in combination with the antisymmetric orbital state \Psi^-_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2) can be written as the sum of two determinants,
\Psi^-_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2)\,|1,0\rangle = \tfrac{1}{2}\begin{vmatrix}\psi_{\mathbf{k}_1\uparrow}(1) & \psi_{\mathbf{k}_2\downarrow}(1)\\ \psi_{\mathbf{k}_1\uparrow}(2) & \psi_{\mathbf{k}_2\downarrow}(2)\end{vmatrix} + \tfrac{1}{2}\begin{vmatrix}\psi_{\mathbf{k}_1\downarrow}(1) & \psi_{\mathbf{k}_2\uparrow}(1)\\ \psi_{\mathbf{k}_1\downarrow}(2) & \psi_{\mathbf{k}_2\uparrow}(2)\end{vmatrix} ,   (5.27)
where \psi_{\mathbf{k}_j\updownarrow}(i) = \psi_{\mathbf{k}_j}(\mathbf{r}_i)\chi_{\updownarrow}(\sigma_i) with i = 1, 2. The two-body state (5.22) consisting of an antisymmetric spin state and a symmetric orbital state (k_1 = k_2 = k) takes the form
\Psi^S_{\mathbf{k},\mathbf{k}}(\mathbf{r}_1,\mathbf{r}_2)\,|0,0\rangle = \sqrt{\tfrac{1}{2}}\begin{vmatrix}\psi_{\mathbf{k}\uparrow}(1) & \psi_{\mathbf{k}\downarrow}(1)\\ \psi_{\mathbf{k}\uparrow}(2) & \psi_{\mathbf{k}\downarrow}(2)\end{vmatrix} .   (5.28)
Indeed, the property of determinants to vanish when two columns or two rows are equal assures that the wavefunction vanishes when two fermions are in the same state α or share the same (position and spin) coordinates (i) = (\mathbf{r}_i,\sigma_i), while exchanging two rows or two columns yields the minus sign required for antisymmetric wavefunctions. One can easily show that any two-body fermion state can be expressed as a linear combination of determinantal spin-orbital states.

Slater generalized this approach to antisymmetrize N-fermion systems,
\Psi_\alpha(\mathbf{r}_1,\sigma_1;...;\mathbf{r}_N,\sigma_N) = \sqrt{\tfrac{1}{N!}}\begin{vmatrix}\psi_{\alpha_1}(1) & \cdots & \psi_{\alpha_N}(1)\\ \vdots & \ddots & \vdots\\ \psi_{\alpha_1}(N) & \cdots & \psi_{\alpha_N}(N)\end{vmatrix} .   (5.29)
In this form the determinant is called a Slater determinant. It is the simplest generalization of the product wavefunction with the proper symmetry under interchange of any two fermions and consistent with the Pauli principle. In Dirac notation the antisymmetrized form of N fermions in states α_1, ..., α_N is given by
|\alpha_1,...,\alpha_N\rangle \equiv \tfrac{1}{N!}\sum_P (-1)^p\, P\,|\alpha_1,...,\alpha_N) ,   (5.30)
where |\alpha_1,...,\alpha_N) \equiv |\alpha_1\rangle_1\otimes|\alpha_2\rangle_2\otimes...\otimes|\alpha_N\rangle_N is the N-body product state of the single-particle states |\alpha_\kappa\rangle_i, where κ = 1, .., N is the state index and i = 1, .., N is the particle index. The sum runs over all permutations P of the particles, p being the parity (number of transpositions, i.e. binary interchanges) required to realize the permutation starting from an initial ordering fixed by convention. As the sum runs over all permutations, it makes no difference whether we permute all particles or permute all states of the particles. We choose the permutation operator P to act on the state index (κ) and not on the particle index (i). With this choice, the interchange of the states of particles 1 and 2 is written as,
P_{12}\,|\alpha_1,\alpha_2,...,\alpha_N) = |\alpha_2,\alpha_1,...,\alpha_N) = |\alpha_2\rangle_1\otimes|\alpha_1\rangle_2\otimes...\otimes|\alpha_N\rangle_N .   (5.31)
To assure a uniquely defined sign of the Slater determinants we shall adopt the standard ordering convention of atomic configurations (see below). The state labeling α_1, ..., α_N represents both the orbital and the spin quantum numbers, e.g. \psi_{\alpha_\kappa}(\mathbf{r}_i,\sigma_i) = \psi_{\mathbf{k}_1\uparrow}(\mathbf{r}_i,\sigma_i), and is called spin orbital.
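A minimal numerical sketch (not part of the script) of a Slater determinant built from plane-wave spin orbitals, here in 1D and with all particles in the same spin state for brevity: exchanging two particles flips the sign of the wavefunction, and two coinciding coordinates give zero, as stated above.

import numpy as np
from math import factorial

L = 1.0
ks = 2*np.pi/L * np.array([0.0, 1.0, 2.0])       # three occupied plane-wave orbitals

def slater(rs):
    M = np.exp(1j*np.outer(ks, rs)) / np.sqrt(L)   # M_ij = psi_{k_i}(r_j)
    return np.linalg.det(M) / np.sqrt(factorial(len(ks)))

r = np.array([0.10, 0.35, 0.70])
print(slater(r))                                 # some complex amplitude
print(slater(r[[1, 0, 2]]))                      # particles 1 and 2 exchanged: sign flips
print(slater(np.array([0.10, 0.10, 0.70])))      # equal coordinates: ~0 (Pauli principle)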

5.1.3 Exercises
5.1.3.1 Ex: Norm of symmetrized wavefunctions
Show that \Psi^S_{\mathbf{k}_1,\mathbf{k}_2}(\mathbf{r}_1,\mathbf{r}_2) of Eq. (5.23) has the norm N = 1 for k_1 ≠ k_2, but N = 2 for k_1 = k_2.

5.1.3.2 Ex: Norm of symmetrized wavefunctions


Confirm that any pair state ψ(r1 , σ1 ; r2 , σ2 ) can be written as the sum of a symmetric
and an antisymmetric part.

5.1.3.3 Ex: Symmetrization operators


Show that for N > 2,
S +A=
̸ I .

5.2 Occupation number representation


The notation of the previous section calls for simplification. This is realized by in-
troducing construction operators which satisfy an algebra that enforces the quantum
statistics. The first construction operators were introduced by Paul Dirac in 1927
[16]. Starting from Maxwell’s equations, Dirac quantized the electromagnetic field
by treating the eigenmodes of the field as independent harmonic oscillators. The
excitation level of the oscillator represents the mode occupation of the field. The
raising (lowering) operators of the oscillator serve to construct the field by creation
(annihilation) of photons, the quanta of the radiation field, which occupy the modes.
The commutation relations between the operators define the algebra that enforces the
Bose statistics of the field. This marks the start of quantum field theory. In the same
year Pascual Jordan and Oskar Klein showed that the method could be extended to
describe quantum many-body systems of bosons satisfying the Schrödinger equation
[34]. Adapting the algebra, Jordan and Wigner further extended the method to de-
scribe quantum many-body systems of interacting fermionic particles [35]. The above
sequence of seminal papers is not complete without the name of Vladimir Fock, who
emphasized in 1932 the use of field operators (construction operators for configuration
space) [21]. This approach leads to an operator identity resembling the Schrödinger
equation, which explains the unfortunate name second quantization for the construc-
tion operator formalism. In the following sections we give a concise introduction to the construction operator formalism for quantum many-body systems [11, 49].

5.2.1 Number states in the N-body Hilbert space

The notation of the properly symmetrized states can be further compacted by listing only the occupations of the states,
|\tilde n_\gamma\rangle = |n_1,n_2,...,n_l\rangle \equiv |\underbrace{k_1,k_1,...}_{n_1},\underbrace{k_2,k_2,...}_{n_2},...,\underbrace{...,k_l}_{n_l}\rangle ,   (5.32)

where γ = {1, 1, ..., 2, 2, ..., nl } with l ≲ N . In this way the states take the shape of
number states, which are the basis states of the occupation number representation
(see next section). For the case of N bosons in the same state |ks ⟩ the number state
is given by |ns ⟩ ≡ |ks , ..., ks ⟩; for a single particle in state |ks ⟩ we have |1s ⟩ ≡ |ks ⟩.
Note that the Bose symmetrization procedure puts no restriction on the value or
order of the occupations n1 , ..., nl as long as they add up to the total number of
particles, n1 + n2 + ... + nl = N . For fermions the notation is the same but because
the wavefunction changes sign under permutation the order in which the occupations
are listed becomes subject to convention (for instance in order of growing energy of
the states). Up to this point and in view of Eqs. (??) and (??) the number states
(5.32) have normalization,

⟨n′1 , n′2 , ...|n1 , n2 , ...⟩ = δn′1 ,n1 δn′2 ,n2 ... (5.33)

and closure,
{\sum_{n_1,n_2,...}}' |n_1,n_2,...\rangle\langle n_1,n_2,...| = \mathbb{I} ,   (5.34)

where the prime indicates that the sum over all occupations equals the total number
of particles, n1 + n2 + ... + nl = N . This is called closure within HN .

5.2.2 Field operators

5.2.2.1 Position representation

Let us write the total number operator (7.98) in the position representation. Using the closure relation \mathbb{I} = \int d\mathbf{r}\,|\mathbf{r}\rangle\langle\mathbf{r}| we obtain,
\hat N = \int d^3r \sum_{s,s'} \hat a_{s'}^\dagger\langle s'|\mathbf{r}\rangle\langle\mathbf{r}|s\rangle\hat a_s = \int d^3r \sum_{s,s'} \psi_{s'}^*(\mathbf{r})\hat a_{s'}^\dagger\,\psi_s(\mathbf{r})\hat a_s ,   (5.35)
where the \langle\mathbf{r}|s\rangle = \psi_s(\mathbf{r}) are the wavefunctions of an arbitrary single-particle basis \{|s\rangle\}. With this transformation we introduced two operator densities,
\hat\psi(\mathbf{r}) \equiv \sum_s \psi_s(\mathbf{r})\,\hat a_s \quad\text{and}\quad \hat\psi^\dagger(\mathbf{r}) \equiv \sum_s \psi_s^*(\mathbf{r})\,\hat a_s^\dagger ,   (5.36)
which are called field operators in view of their dependence on position. In terms of these field operators the total number operator takes the form,
\hat N = \int d^3r\,\hat\psi^\dagger(\mathbf{r})\hat\psi(\mathbf{r}) = \int d^3r\,\hat n(\mathbf{r}) ,   (5.37)
where we defined the density operator \hat n(\mathbf{r}) as the diagonal part of the density matrix operator. The field operators are construction operators that create or annihilate particles at a given position. Let us demonstrate this for \hat\psi^\dagger(\mathbf{r}): this field operator is a creation operator because it is defined in terms of creation operators,
\hat\psi^\dagger(\mathbf{r})|0\rangle = \sum_s \psi_s^*(\mathbf{r})\,\hat a_s^\dagger|0\rangle = \sum_s |s\rangle\langle s|\mathbf{r}\rangle = |\mathbf{r}\rangle .   (5.38)

Using the closure relation, we found that \hat\psi^\dagger(\mathbf{r}) creates from the vacuum a particle in state |\mathbf{r}\rangle; i.e., a particle at position \mathbf{r}. Similarly we can show that \hat\psi(\mathbf{r}) is the corresponding annihilation operator,
\hat\psi(\mathbf{r})|0\rangle = 0 .   (5.39)

The field operators are important quantities because (at least in principle) the po-
sitions of the particles can be measured to arbitrary accuracy in any many-body
system, also when the concept of stationary single-particle states has lost meaning
due to coupling by the interactions. Do the Exc. 5.2.4.2.

Example 39 (Atom annihilation in a number state): Consider the number state |N_0\rangle representing N_0 bosons in the single-particle ground state |\psi_0(\mathbf{r})\rangle. In this case, the following relation holds,
\langle N_0-1|\hat\psi(\mathbf{r})|N_0\rangle = \sqrt{N_0}\,\psi_0(\mathbf{r}) .
This expression follows immediately from Eq. (5.36). As only a single single-particle state is occupied, only a single term contributes. If, in contrast, |\Psi_N\rangle = |\tilde n_\gamma\rangle is a pure number state of the single-particle representation \{|s\rangle\} in which many single-particle states are occupied, the expression is replaced by the linear combination,
\langle\Psi_{N-1}|\hat\psi(\mathbf{r})|\Psi_N\rangle = \sum_s \sqrt{n_s}\,\psi_s(\mathbf{r}) .

5.2.2.2 Commutation relations for field operators

It is straightforward to show (see Exc. 5.2.4.3) that the field operators \hat\psi(\mathbf{r}) and \hat\psi^\dagger(\mathbf{r}) satisfy commutation relations very similar to those of the construction operators \hat a_s and \hat a_s^\dagger (see Sec. ??),
[\hat\psi(\mathbf{r}),\hat\psi^\dagger(\mathbf{r}')]_\pm = \delta(\mathbf{r}-\mathbf{r}') \quad\text{and}\quad [\hat\psi(\mathbf{r}),\hat\psi(\mathbf{r}')]_\pm = 0 = [\hat\psi^\dagger(\mathbf{r}),\hat\psi^\dagger(\mathbf{r}')]_\pm .   (5.40)
As previously, the commutators (−) refer to the case of bosons and the anti-commutators (+) to the case of fermions. Further we have, for both bosons and fermions,
[\hat n(\mathbf{r}),\hat\psi^\dagger(\mathbf{r}')] = \hat\psi^\dagger(\mathbf{r})\,\delta(\mathbf{r}-\mathbf{r}') \quad\text{and}\quad [\hat n(\mathbf{r}),\hat\psi(\mathbf{r}')] = -\hat\psi(\mathbf{r})\,\delta(\mathbf{r}-\mathbf{r}') .   (5.41)
Using the expression (5.37) for the total number operator the latter commutation relation leads to,
[\hat\psi(\mathbf{r}),\hat N] = \hat\psi(\mathbf{r}) .   (5.42)
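These relations are inherited from the algebra of the construction operators \hat a_s, \hat a_s^\dagger. A small matrix sketch (an illustration, not from the text) checks that algebra in a truncated bosonic Fock space and in a two-level fermionic space:

import numpy as np

nmax = 20
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)        # bosonic annihilation operator
comm = a @ a.conj().T - a.conj().T @ a
print(np.allclose(comm[:-1, :-1], np.eye(nmax-1)))   # [a, a^dag] = 1 (away from the cutoff)

c = np.array([[0., 1.], [0., 0.]])                   # fermionic annihilation operator
anti = c @ c.conj().T + c.conj().T @ c
print(np.allclose(anti, np.eye(2)), np.allclose(c @ c, 0))   # {c, c^dag} = 1 and c^2 = 0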

5.2.2.3 Number densities

For a many-body system in the state |\Psi_N\rangle the number-density is given by \langle\Psi_N|\hat n(\mathbf{r})|\Psi_N\rangle. Using the definitions (5.36) the density operator can be expressed in the occupation number representation of the single-particle representation \{|s\rangle\},
\hat n(\mathbf{r}) = \sum_s \varphi_s^*(\mathbf{r})\hat a_s^\dagger \sum_t \varphi_t(\mathbf{r})\hat a_t = \sum_s |\varphi_s(\mathbf{r})|^2\,\hat a_s^\dagger\hat a_s + \sum_{s,t\neq s} \varphi_s^*(\mathbf{r})\varphi_t(\mathbf{r})\,\hat a_s^\dagger\hat a_t .   (5.43)
We separated the operators into a sum of two contributions: (i) a part diagonal in the number representation of \{|s\rangle\}: for pure number states, |\Psi_N\rangle = |\tilde n_\gamma\rangle, we calculate,
\langle\Psi_N|\hat n(\mathbf{r})|\Psi_N\rangle = \sum_s |\varphi_s(\mathbf{r})|^2\langle\tilde n_\gamma|\hat n_s|\tilde n_\gamma\rangle = \sum_s |\varphi_s(\mathbf{r})|^2 n_s = \bar n(\mathbf{r}) ,   (5.44)
which corresponds to an eigenvalue equal to the quantum mechanical average (probability density). (ii) The cross terms in this representation,
\sum_{s,t\neq s} \varphi_s^*(\mathbf{r})\varphi_t(\mathbf{r})\,\hat a_s^\dagger\hat a_t = \hat n(\mathbf{r}) - \sum_s |\varphi_s(\mathbf{r})|^2\,\hat n_s .   (5.45)
For pure number states the cross term contribution vanishes. For linear combinations of number states the cross terms correspond to density fluctuations about the average.

5.2.2.4 Hamiltonian expressed by field operators

It is instructive to write the matrix elements \langle s'|\mathcal H_0|s\rangle and \langle s',t'|\mathcal V_{int}|s,t\rangle in the position representation,
\hat H_0 = \sum_{s,s'}\int ... .   (5.46)

5.2.3 Correlations
5.2.4 Exercises
5.2.4.1 Ex: Quantum statistics
Show that the Fermi-Dirac statistics and the Bose-Einstein statistics directly follow
from the fermionic, respectively, bosonic commutation rules.

5.2.4.2 Ex: Annihilation operator

Show that \hat\psi(\mathbf{r})|0\rangle = 0.

5.2.4.3 Ex: Annihilation operator


Verify the commutation rules (5.40).

5.2.4.4 Ex: Construction operators and occupation number representa-


tion
Many body quantum systems are described by symmetrized or antisymmetrized wave-
functions. To keep the calculations manageable this is done in the occupation number
representation. Below follows a series of questions with regard to the properties of
construction operators. We presume that the standard ordering is represented by the
alphabetic ordering of the state indices.
1. For an arbitrary normalized one-body eigenstate |s⟩, i.e. ⟨s′ |s⟩ = δs,s′ , we have by
definition |s\rangle \equiv |1_s\rangle \equiv |\bar 1_s\rangle = \hat a_s^\dagger|0\rangle. Comment on the differences of notation.
2. Give the definition of the creation operator â†s acting on the number state |ns , nt , ..., nl ⟩

for bosons.
3. Give the definition of the annihilation operator âs acting on the number state
|ns , nt , ..., nl ⟩ for bosons.
4. Evaluate â†s |0q , 0s , ...⟩ for fermions.
5. Evaluate â†s |1q , 0s , ...⟩ for fermions.
6. Evaluate â†s |1q , 1s , ...⟩ for fermions.
7. Show that also the vacuum state |0⟩ is normalized.
8. Calculate the norm ⟨2s |2s ⟩ for bosons and fermions without using the fermion rule
â†s |1q , 1s , ...⟩ = 0.
9. Derive [â†q , â†s ] = 0 for bosons.
10. Derive [â†q , â†s ]+ = 0 for fermions.
11. Derive [âq , â†s ]+ = δq,s for fermions.
12. Show that [n̂q , â†s ]± = +â†s δq,s for both bosons and fermions.

5.2.4.5 Ex: Important commutation relations for boson field operators

Consider a many-body system of particles confined by an external potential U(r). At sufficiently low densities this system may be described by the Hamiltonian for a pairwise interacting system,
\mathcal H = \sum_i \mathcal H^{(i)} + \tfrac{1}{2}\sum_{i,j}\mathcal H^{(i,j)} ,
where
\mathcal H^{(i)} = \frac{\mathbf{p}_i^2}{2m} + \mathcal U(\mathbf{r}_i)
is the free particle contribution of particle i and \mathcal H^{(i,j)} = \mathcal V(\mathbf{r}_i,\mathbf{r}_j) represents the potential energy of interaction between the particles i and j. At sufficiently low temperatures this potential may be approximated by the expression
\mathcal V(\mathbf{r}_i,\mathbf{r}_j) = g\,\delta(\mathbf{r}_i-\mathbf{r}_j) ,
where g = (4\pi\hbar^2/m)a and a is called the s-wave scattering length. In the occupation number representation for a system of identical particles the Hamiltonian takes the form
\hat H = \hat H^{(1)} + \hat H^{(2)} ,
where
\hat H^{(1)} = \sum_{s,s'}\hat a_{s'}^\dagger\langle s'|\mathcal H^{(1)}|s\rangle\hat a_s ,
and
\hat H^{(2)} = \tfrac{1}{2}\sum_{t,t'}\sum_{s,s'}\hat a_{s'}^\dagger\hat a_{t'}^\dagger\langle s',t'|\mathcal H^{(1,2)}|s,t\rangle\hat a_t\hat a_s .
Here |s\rangle, |s'\rangle \in \{|s\rangle\} and |t\rangle, |t'\rangle \in \{|t\rangle\} represent single particle eigenstates of the Hamiltonians \mathcal H^{(1)} and \mathcal H^{(2)}, respectively. Rewriting the Hamiltonian in terms of field operators we obtain
\hat H^{(1)} = \int d\mathbf{r}\,\hat\psi^\dagger(\mathbf{r})\,\mathcal H_0\,\hat\psi(\mathbf{r})
with \mathcal H_0 = \mathbf{p}^2/2m + \mathcal U(\mathbf{r}) and
\hat H^{(2)} = \tfrac{1}{2}\int d\mathbf{r}\,d\mathbf{r}'\,\hat\psi^\dagger(\mathbf{r})\hat\psi^\dagger(\mathbf{r}')\,\mathcal V(\mathbf{r},\mathbf{r}')\,\hat\psi(\mathbf{r}')\hat\psi(\mathbf{r}) .
Here \mathbf{r} is the position coordinate of a single particle and \mathbf{r}, \mathbf{r}' are the position coordinates of a pair of particles. The total number operator is defined as the integral over the density operator
\hat N = \int d\mathbf{r}\,\hat n(\mathbf{r}) .
Three commutation relations for boson field operators are crucial for the understanding of the ground state of an interacting bosonic superfluid:
[\hat\psi(\mathbf{r}),\hat H^{(1)}] = \mathcal H_0(\mathbf{p},\mathbf{r})\,\hat\psi(\mathbf{r})
[\hat\psi(\mathbf{r}),\hat H^{(2)}] = g\,\hat n(\mathbf{r})\,\hat\psi(\mathbf{r})
[\hat\psi(\mathbf{r}),\hat N] = \hat\psi(\mathbf{r}) .
Derive the above commutation relations.

5.2.4.6 Ex: Amplitude and phase of the order parameter

To analyze the deviations from the stationary state we write the order parameter in a form separating the fluctuations of the amplitude from those of the phase,
\Psi(\mathbf{r},t) = |\Psi(\mathbf{r},t)|\,e^{-\imath\mu t/\hbar+\imath\Phi(\mathbf{r},t)} .
Using the Bogolubov ansatz the amplitude takes the form
|\Psi(\mathbf{r},t)| = \sqrt{n_0(\mathbf{r},t)} .
The overall phase \phi(\mathbf{r},t) is a real quantity defined as
\phi(\mathbf{r},t) \equiv \mu t - \Phi(\mathbf{r},t) .
The phase \Phi(\mathbf{r},t) is called the fluctuating phase and represents the deviation from the dynamical phase evolution \mu t/\hbar of the stationary state \Psi_0(\mathbf{r},t). The current density is defined as
\mathbf{j}(\mathbf{r},t) = \frac{\imath\hbar}{2m}\big(\Psi\nabla\Psi^* - \Psi^*\nabla\Psi\big) .
a. Derive the following expression for the current density,
\mathbf{j}(\mathbf{r},t) = \frac{\hbar}{m}\,n_0(\mathbf{r},t)\,\nabla\Phi(\mathbf{r},t) .
b. Show that the time dependence of the phase of the order parameter can be expressed in the form,
-\hbar\frac{\partial\phi}{\partial t} = \frac{\hbar^2}{2m}(\nabla\Phi)^2 + \mathcal U(\mathbf{r}) + g|\Psi|^2 - \frac{\hbar^2}{2m}\frac{1}{|\Psi|}\nabla^2|\Psi| .
Hint: Use the difference of the time-dependent GP equation and its complex conjugate.

5.3 Quantum statistics


To describe the time evolution of an isolated quantum gas, in principle, all we need
to know is the many-body wavefunction plus the hamiltonian operator. Of course, in
practice, these quantities will be known only to limited accuracy. Therefore, just as
in the case of classical gases, we have to rely on statistical methods to describe the
properties of a quantum gas. This means that we are interested in the occupation
probability of quantum many-body states. In view of the convenience of the occu-
pation number representation we ask in particular for the occupation probability Pγ
of the number states |ñγ ⟩. The canonical ensemble is not suited for this purpose
because it presumes a fixed number of atoms N , whereas the ensemble of number
states {|ñγ ⟩} is defined in grand Hilbert space in which the number of atoms is not
fixed. This motivates us to introduce an important variant of the canonical ensemble
which is known as the grand canonical ensemble.

5.3.1 Exercises
Bibliography
[1] V. S. Bagnato, G. P. Lafyatis, A. G. Martin, E. L. Raab, R. N. Ahmad-Bitar,
and D. E. Pritchard, Continuous stopping and trapping of neutral atoms, Phys.
Rev. Lett. 58 (1987), 2194.

[2] V. S. Bagnato, D. E. Pritchard, and D. Kleppner, Bose-Einstein condensation


in an external potential, Phys. Rev. A 35 (1987), 4354, DOI.

[3] Vanderlei S. Bagnato, Simple calculation of the spatial evolution of atoms in a


trap during occurance of Bose-Einstein condensation, Phys. Rev. A 54 (1996),
1726, DOI.

[4] Satyendra N. Bose, Plancks Gesetz und Lichtquantenhypothese, Z. Phys. 26


(1924), 178.

[5] T. Bourdel, J. Cubizolles, L. Khaykovich, K. M. F. Magalhaes, S. J. J. M. F.


Kokkelmans, G. V. Shlyapnikov, and C. Salomon, Measurement of the interaction
energy near a Feshbach resonance in a 6 li Fermi gas, Phys. Rev. Lett. 91 (2003),
020402, .

[6] I. N. Bronstein and K. A. Semandjajew, Taschenbuch der mathematik, Thun und


Franfurt, Main, 1978, ISBN.

[7] D. A. Butts and D. S. Rokhsar, Trapped Fermi gases, Phys. Rev. A 55 (1997),
4346, DOI.

[8] E. A. Cornell, Very cold indeed: The nanokelvin physics of Bose-Einstein con-
densation, J. Res. Natl. Inst. Stand. Tech. 101 (1996), 419.

[9] Ph. W. Courteille, V. S. Bagnato, and V. I. Yukalov, Bose-Einstein condensation


of trapped atomic gases, Laser Phys. 11 (2001), 659–800.

[10] F. J. P. Cuevas, Aspects of hybrid confinement for a Bose-Einstein condensate:


Global pressure and compressibility, Ph.D. thesis, Universidade de São Paulo,
2013, .

[11] J. de Boer and G. E. Uhlenbeck, Construction operator formalism for many par-
ticle systems, In Studies in Statistical Mechanics, J. de Boer and G.E. Uhlenbeck
(Eds.), (North Holland, 1965)] III (1996), 212.

[12] S. R. de Groot, G. J. Hooyman, and C. A. Ten Deldam, Proc. R. Soc. Lond. A


203 (1950), 266.

[13] R. DeHoff, Thermodynamics in material science, Taylor Francis, 2006, ISBN.

[14] B. DeMarco and D. S. Jin, Onset of fermi-degeneracy in a trapped atomic gas,


Science 285 (1999), 1703, DOI.


[15] B. DeMarco, S. B. Papp, and D. S. Jin, Pauli blocking of collisions in a quantum


degenerate atomic Fermi gas, Phys. Rev. Lett. 86 (2001), 5409, DOI.

[16] P. A. M. Dirac, The quantum theory of the emission and absorption of radiation,
Proc. Roy. Soc. A 114 (1927), 243.

[17] R. J. Dodd, M. Edwards, and C. W. Clark, Two-gas description of dilute Bose-


Einstein condensates at finite temperature, J. Phys. B: At. Mol. Opt. Phys. 32
(1999), 4107, DOI.

[18] A. Einstein, Quantentheorie des einatomigen idealen gases, S. B. Kgl. Preuss.


Akad. Wiss. 35 (1924).

[19] J. R. Ensher, D. S. Jin, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Bose-


Einstein condensation in a dilute gas: Measurement of energy and ground-state
occupation, Phys. Rev. Lett. 77 (1996), 4984, .

[20] V. S. Bagnato and D. Kleppner, Bose-Einstein condensation in low-dimensional traps, Phys. Rev. A 44 (1991), 7439, DOI.

[21] V. Fock, Konfigurationsraum und zweite quantelung, Zeitschrift für Physik 75


(1932), 622.

[22] S. Giovanazzi, A. Görlitz, and T. Pfau, Tuning the dipolar interaction in quantum
gases, Phys. Rev. Lett. 89 (2002), 130401, .

[23] B. Goss-Levi, Ultra-cold fermionic atoms team up as molecules: Can they form
cooper pairs as well, Phys. Today (2004), .

[24] M. Greiner, C. A. Regal, C. Ticknor, J. L. Bohn, and D. S. Jin, Detection of


spatial correlations in an ultracold gas of fermions, Phys. Rev. Lett. 92 (2004),
150405, .

[25] S. Grossmann and M. Holthaus, On Bose-Einstein condensation in harmonic


traps, Phys. Lett. A 208 (1995), 188, .

[26] S. Gupta, Z. Hadzibabic, M. W. Zwierlein, B. J. Verhaar, and W. Ketterle,


Radio-frequency spectroscopy of ultracold fermions, Sciencexpress (2003), 1, .

[27] Z. Hadzibabic, S. Gupta, C. A. Stan, C. H. Schunck, M. W. Zwierlein, K. Dieck-


mann, and W. Ketterle, Fiftyfold improvement in the number of quantum degen-
erate fermionic atoms, Phys. Rev. Lett. 91 (2003), 160401, .

[28] D. J. Han, R. H. Wynar, Ph. W. Courteille, and D. J. Heinzen, Bose-Einstein


condensation of large numbers of atoms in a magnetic time-averaged orbiting
potential trap, Phys. Rev. A 57 (1998), R4114.

[29] C. Herzog and M. Olshanii, Trapped bose gas: The canonical versus grand canon-
ical statistics, Phys. Rev. A 55 (1997), 3254, .

[30] Tin-Lun Ho and E. J. Mueller, High temperature expansion applied to fermions


near Feshbach resonance, Phys. Rev. Lett. 92 (2004), 160404, .

[31] M. Houbiers, H. T. C. Stoof, W. I. McAlexander, and R. G. Hulet, Elastic and


inelastic collisions of li-6 atoms in magnetic and optical traps, Phys. Rev. A 57
(1998), R1497, .

[32] K. Huang, Statistical mechanics, John Wiley and Sons, 1987, ISBN.

[33] E. T. Jaynes, The gibbs paradox, (1992), 1–22, .

[34] P. Jordan and O. Klein, Zum mehrkörperproblem der quantentheorie, Zeitschrift


für Physik 45 (1927), 751.

[35] P. Jordan and E. Wigner, über das paulische äquivalenzverbot, Zeitschrift für
Physik 47 (1928), 631.

[36] W. Ketterle and N. J. Van Druten, Evaporative cooling of trapped atoms, Adv.
At. Mol. Opt. Phys. 37 (1996), 181, .

[37] K. Kirsten and D. J. Toms, Bose-Einstein condensation of atomic gases in a


general harmonic-oscillator confining potential trap, Phys. Rev. A 54 (1996),
4188.

[38] L. D. Landau, Butterworth-Heinemann, 1937, ISBN.

[39] G. T. Landi, 2019, ⊙Lecture notes.

[40] Mingzhe Li, Zijun Yan, Jincan Chen, Lixuan Chen, and Chuanhong Chen, Ther-
modynamic properties of an ideal Fermi gas in an external potential with U = brt
in any dimensional space, Phys. Rev. A 58 (1998), 1445, .

[41] F. London, On the Bose-Einstein condensation, Nature 54 (1938), 947.

[42] R. Napolitano, J. De Luca, V. S. Bagnato, and G. C. Marques, Effect of finite


numbers in the Bose-Einstein condensation of a trapped gas, Phys. Rev. A 55
(1997), R3954, .

[43] N. Nygaard and K. Mølmer, Component separation in harmonically trapped


boson-fermion mixtures, Phys. Rev. A 59 (1999), 2974, .

[44] K. M. OHara, S. R. Granade, M. E. Gehm, T. A. Savard, S. Bali, C. Freed,


and J. E. Thomas, Ultrastable co2 laser trapping of lithium fermions, Phys. Rev.
Lett. 82 (1999), 4204.

[45] K. M. OHara, S. L. Hemmer, M. E. Gehm, S. R. Granade, and J. E. Thomas,


Observation of a strongly interacting degenerate Fermi gas of atoms, Science 298
(2002), 2179, .

[46] D. S. Petrov, Three-boson problem near a narrow Feshbach resonance, Phys. Rev.
Lett. 93 (2004), 143201, .

[47] C. A. Regal, C. Ticknor, J. L. Bohn, and D. S. Jin, Tuning p-wave interactions


in an ultracold Fermi gas of atoms, Phys. Rev. Lett. 90 (2003), 053201, .

[48] A. G. Truscott, K. E. Strecker, W., I. McAlexander, Guthrie, B. Partridge, and


R. G. Hulet, Observation of Fermi pressure in a gas of trapped atoms, Science
291 (2001), 2570, .

[49] J. Walraven, Quantum gases, unpublished, Amsterdam, 2019, ⊙.


[50] Ziyun Yan, Bose-Einstein condensation of a trapped gas in n dimensions, Phys.
Rev. A 59 (1999), 4657, .
Index
activity of a component, 91 Dalton’s law, 91
adiabatic process, 36 Debye
adiabaticity coefficient, 36 Peter Joseph William, 130
affinity Debye law, 130
reaction, 106 Debye length, 114
Avogadro Debye model, 130
Amedeo, 4 Debye temperature, 130
Debye-Hückel equation, 115
black-body radiation, 153 Debye-Hückel length, 115
Boltzmann degenerate Fermi gas, 166
Ludwig, 123 density-of-states, 141
Boltzmann distribution, 127, 135 Desormes
Boltzmann gas, 167 Charles-Bernard, 41
Bose detailed balance, 135
Satyendranath, 138 Diesel
Bose function, 151 Rudolf Christian Carl, 39
Bose-Einstein distribution, 146 Diesel cycle, 39, 53
boson, 176, 184 Dieterici
Boyle Conrad, 57
Robert, 6 dissipation, 26
Brown Duhem
Robert, 4 Pierre Maurice Marie, 84
Brownian, 4 Dulong
canonical ensemble, 31, 69 Pierre Louis, 128
Carnot Dulong-Petit law, 128
Nicolas Léonard Sadi, 43
Carnot cycle, 43 Ehrenfest classification, 111
Celsius ensemble equivalence, 148
Anders, 6 enthalpy, 30
chemical potential, 60 entropy, 25, 27
chemical reaction, 105 equipartition, 8
Clément equipartition theorem, 173
Nicolas, 41 exchange operator, 184
Clapeyron exit work, 179
Benoı̂t Paul Émile, 79 extensive variable, 22
Clausius
Rudolf Julius Emanuel, 79 Fahrenheit
Clausius-Clapeyron equation, 79, 83 Daniel Gabriel, 6
composite particle, 176 Fermi energy, 166
compressibility, 31 Fermi function, 151
contact potential, 180 Fermi gas model, 153
Fermi-Dirac distribution, 146
Dalton fermion, 176, 184
John, 4, 91 fugacity, 93, 151


gas constant microcanonical ensemble, 64, 65, 135


universal, 7 molal property
Gay-Lussac partial, 85
Louis Joseph, 6 Montgolfier
Gibbs Joseph Michel and Jacques Etienne,
Josiah Willard, 123 4
Gibbs free energy, 30
Gibbs paradox, 137 optical density, 176
Gibbs-Duhem equation, 62, 84 Otto
grand canonical ensemble, 71 Nikolaus August, 39
Otto cycle, 39, 53
heat capacity, 10, 31, 159
heat engine, 43 parity, 188
heat exchange, 22 partial pressure, 91
heat pump, 43 particle index, 188
Helmholtz free energy, 30 partition function, 126
hole heating, 178 Pauli ’s exclusion principle, 186
Pauli blocking, 177
intensive variable, 22 Pauli exlusion principle, 177
internal energy, 24 Petit
isentropic, 36 Alexis Thérèse, 128
isobaric process, 38 phase diagram, 3
isochoric process, 38 phase transition, 75
isothermal, 37 Ehrenfest classification, 111
Landau classification, 111
Johnson noise, 8 phonon, 129
Joule phonons, 130
James Prescott, 14 plasma, 114
Joule-Thomson effect, 45, 57 Poisson equation, 114
process variable, 22
Kelvin
William Thomson, 6 quantization
of phase space, 135
lambda point, 160 quantum statistical, 135
Landau classification, 111 quantum statistics, 181
Landau grand canonical potential, 62
latent heat, 10 real gas, 56
Lavoisier refrigeration, 43
Antoine-Laurent, 106 release energy, 160
Legendre transform, 29 reservoir, 63
liquid-vapor transition, 111 Riemann zeta-function, 117, 151
Luttinger liquid, 179 Ruechardt
Eduardt, 42
Mariotte
Edme, 6 Sackur-Tetrode formula, 39, 164
Maxwell relation, 29 Slater determinant, 188
Maxwell-Boltzmann distribution, 131 solid-liquid transition, 111
mechanical work, 22 solution, 86

ideal, 92
real, 92
solutions
regular, 94
Sommerfeld expansion, 118
specific heat, 10
spin orbital, 187, 188
spin-charge separation, 179
spin-statistics theorem, 184
state equation, 21
state functions, 21
state index, 188
Stirling
James, 117
Stirling’s formula, 117, 125
stress coefficient, 31
sublimation pump, 95

temperature
absolute zero, 27
thermal de Broglie wavelength, 151
thermal expansion coefficient, 31
thermodynamic ensembles, 63
thermodynamic limit, 143
thermodynamic potential, 31
thermodynamic potentials, 29
thermoionic emission, 180
thermometer, 5
Thomson
William, 45
triple point, 81

univariant reacting systems, 106

Watt
James, 4
