Zhu - Thermal Physics
Thermodynamics/Statistical Physics
Tues/Thurs 11:30AM - 12:45PM BPB 249
Instructor Prof. Qiang Zhu
Email [email protected]
Office BPB 232
Office hrs Tue/Thurs 10:00AM - 11:30AM
Course Outline:
Grade Distribution:
Assignments 20%
Midterm Exams 1/2 40%
Final Exam 40%
Extra Credits 10%
Course Description:
This course combines elements of classical thermodynamics and statistical physics and covers material from chapters 1 through 7 of the textbook. We spend approximately two weeks on each chapter; the weekly coverage may change depending on the progress of the class. There will be two exams during the semester, which may include both take-home and in-class work. The final exam will cover all material taught in the semester. You may work with others on the homework, but take-home exams must be done strictly by yourself. Barring documentable emergencies or observance of a certifiable religious holiday, all exams must be taken at the time and place specified.
Learning Outcomes:
Please see the Student Syllabus Policies Handout for select, useful information for students. This document
can be found at:
https://www.unlv.edu/sites/default/files/page_files/27/SyllabiContent-MinimumCriteria-2018-2019.pdf
Lecture 1: From Thermodynamics to Statistics
Thermodynamics aims at describing macroscopic phenomena such as heat transfer, energy conversion, and chemical reactions. We study them by measuring key quantities (such as pressure, temperature, volume, viscosity, bulk modulus, etc.) and their correlations. The foundation of thermodynamics rests on the thermodynamic laws:
1. energy is conserved: the change in a system's internal energy equals the heat added plus the work done on it
2. the total entropy of an isolated system never decreases
3. the entropy of a system approaches a constant value as the temperature approaches absolute zero
N = n × N_A  (1.2)
U_thermal = N · f · (1/2) kT  (1.3)
∆U = Q + W  (1.4)
∆W = −P ∆V  (1.5)
S = k ln Ω  (1.6)
However, thermodynamics does not tell us the microscopic details of matter. We observe that heat always flows spontaneously from a hot object to a cold one; the underlying explanation of this thermodynamic phenomenon is the change of entropy, a microscopic quantity. We might discuss these ideas briefly in a thermodynamics class; here, we are going to explore them more comprehensively with the tools of statistical mechanics.
[Figure: N gas molecules with speeds v in a container]
In its early days, thermal physics was primarily shaped by experimental studies of the macroscopic behavior of physical systems, through the work of Carnot, Joule, Clausius, and Kelvin. However, people were keen to know how the observed phenomena were related to the motion of molecules and atoms. The first attempt was the so-called kinetic theory of gases, which aimed at explaining the macroscopic behavior of gaseous systems in terms of the motion of their molecules. Although a bit speculative at first, it soon emerged as a real mathematical theory.
The theory for ideal gases makes the following assumptions:
1. The gas consists of very small particles known as molecules, separated by large distances.
2. The molecules are in constant, random motion.
3. Except during collisions, the molecules exert no forces on one another, and all collisions are elastic.
4. The rapidly moving particles constantly collide among themselves and with the walls of the container.
With these assumptions, we then proceed to figure out the relation between pressure and atomic motion. In such a model, the pressure equals the force exerted by the atoms hitting and rebounding from a unit area of the wall. Consider a gas of N molecules, each of mass m, enclosed in a cube of volume V = L³. When a gas molecule collides with the wall perpendicular to the x axis and bounces off in the opposite direction with the same speed (an elastic collision), the momentum change per collision is 2mv_x and the time between successive collisions with that wall is 2L/v_x, so the time-averaged force from one molecule is
F = ∆p/∆t = m v_x² / L  (1.10)
Since we have many particles, the total force on the wall is
F = m(v_x1² + v_x2² + ···)/L = N m v̄_x² / L,  (1.11)
where v̄_x² denotes the average of v_x² over all molecules. Since the motion of the particles is random and no direction is preferred, the mean square velocity components in each direction should be identical:
v̄_x² = v̄_y² = v̄_z²  (1.12)
PV = (2/3) K  (1.16)
This is a first non-trivial result of the kinetic theory because it relates pressure (a macroscopic property), to
the (translational) kinetic energy of the molecules (a microscopic property).
From the ideal gas law PV = nRT = NkT, we can further derive the relation between K and T.
PV = (2/3) K = NkT,  (1.17)
so K = (3/2) NkT, i.e., the average translational kinetic energy per molecule is (3/2) kT.
As one can see, this simple scheme could successfully explain some key features in an elegant way. However, it is limited by its oversimplified assumptions. Real contact with thermodynamics could not be made until 1872, when Boltzmann developed his H-theorem and established a direct connection between entropy and molecular dynamics. Meanwhile, the kinetic theory began giving way to its more sophisticated successor, the ensemble theory. The power of the techniques that finally emerged reduced thermodynamics to the status of a consequence of the statistics and the mechanics of the molecules constituting a given physical system. It was then natural to give the formalism the name Statistical Mechanics.
Lecture 2: The Statistical Basis of Thermodynamics
We consider a system composed of N identical particles confined to a space of V. The total energy E would
be equal to the sum of the energies of the individual particles.
E = Σ_i n_i ε_i  (2.1)
The specification of the actual values of the parameters N, V, E then defines a macrostate of the system. At the molecular level, however, a large number of possibilities still exist, because there will be a large number of different ways to realize the same total N, V, E (think of arranging coins in different sequences of heads and tails). Each of the different ways specifies a microstate, or complexion, of the given system.
The actual number of all possible microstates (Ω) is a function of N, V, E. In principle, it is from the magnitude of Ω and from its dependence on the parameters N, V, E that the complete thermodynamics can be derived.
Ω(N, q) = C(q + N − 1, q) = (q + N − 1)! / (q! (N − 1)!)  (2.2)
Exercises
Calculate the multiplicity of an Einstein solid with 5 oscillators and [1,2,3,4,5] units of Energy.
2-1
2-2 Lecture 2: The Statistical Basis of Thermodynamics
q   Ω(5, q)
1
2
3
4
5
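A minimal Python sketch (not part of the original notes) to fill in this table using Eq. 2.2; math.comb computes the binomial coefficient:

# Multiplicity of an Einstein solid, Eq. 2.2: Omega(N, q) = C(q + N - 1, q).
from math import comb

def multiplicity(N, q):
    return comb(q + N - 1, q)

for q in range(1, 6):
    print(q, multiplicity(5, q))   # q = 1..5 for N = 5 oscillators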
[Figure: two systems A1 (N1, V1, E1) and A2 (N2, V2, E2)]
Let's first figure out how Ω is related to the thermodynamic quantities. We consider two physical systems, A1 and A2, which are separately in equilibrium. Let the macrostate of A1 be represented by the parameters N1, V1, and E1, so that it has Ω1(N1, V1, E1) possible microstates, and likewise let A2 have Ω2(N2, V2, E2) possible microstates. Can we derive some thermodynamic properties from Ω1(N1, V1, E1) and Ω2(N2, V2, E2)?
Let's bring the two systems into thermal contact. For simplicity, we allow only heat exchange between the two, while N and V remain fixed. This means there can be some interchange between E1 and E2; however, it is restricted by the conservation law.
E = E1 + E2 = const (2.3)
From the microscopic point of view, the total number of microstates of the combined system can be expressed as
Ω(E1) = Ω1(E1) Ω2(E2) = Ω1(E1) Ω2(E − E1)  (2.4)
When the system approaches equilibrium, what should be the value of Ē1? According to the 2nd law, the entropy should reach a maximum. Mathematically, we need to find the Ē1 which satisfies
(∂Ω1(E1)/∂E1)|_{E1=Ē1} · Ω2(Ē2) + Ω1(Ē1) · (∂Ω2(E2)/∂E2)|_{E2=Ē2} · (∂E2/∂E1) = 0,  with ∂E2/∂E1 = −1.  (2.5)
Thus, our condition for equilibrium reduces to the equality of the parameters β1 and β2, where
β ≡ (∂ ln Ω(E)/∂E)|_{E=Ē}  (2.7)
or, more explicitly,
β ≡ (∂ ln Ω(N, V, E)/∂E)_{N,V; E=Ē}  (2.8)
Therefore, we find that when two systems are brought into thermal contact, the exchange of heat continues until the equilibrium values Ē1 and Ē2 are reached. This happens only when the respective values of β1 and β2 become equal. It is then natural to expect that the parameter β is somehow related to T. To determine this relationship, we recall the thermodynamic formula
(∂S/∂E)_{N,V} = 1/T  (2.9)
This correspondence was first established by Boltzmann. It was Planck who first wrote the explicit formula
S = k ln Ω  (2.11)
This formula gives the absolute value of the entropy of a physical system in terms of the total number of microstates accessible to it in conformity with the given macrostate, and thus provides a bridge between the microscopic and the macroscopic.
Our conditions for equilibrium now take the form of equality for both pairs of parameters, (β1, β2) and (η1, η2), where
η ≡ (∂ ln Ω(N, V, E)/∂V)_{N,E; V=V̄}  (2.14)
Similarly, there might be an exchange of particles, which requires another parameter ζ,
ζ ≡ (∂ ln Ω(N, V, E)/∂N)_{V,E; N=N̄}  (2.15)
To determine the physical meaning of the parameters η and ζ, we make use of the thermodynamic identity.
so
β = 1/kT,   η = P/kT,   ζ = −µ/kT  (2.17)
T1 = T2,   P1 = P2,   µ1 = µ2  (2.18)
This is identical to the equilibrium conditions that follow from purely thermodynamic considerations. The evaluation of P, µ, and T requires that the entropy S be expressed as a function of N, V, and E, which is possible, in principle, once Ω(N, V, E) is known.
For instance,
(∂S/∂E)_{N,V} = 1/T,   (∂S/∂V)_{N,E} = P/T,   (∂S/∂N)_{V,E} = −µ/T  (2.19)
F = E − TS
G = F + PV = E − TS + PV = µN  (2.20)
H = E + PV = G + TS
C_V = T (∂S/∂T)_{N,V} = (∂E/∂T)_{N,V}
C_P = T (∂S/∂T)_{N,P} = (∂H/∂T)_{N,P}  (2.21)
Lecture 3: Energy in Thermal Physics
N = n × NA (3.2)
U_thermal = N · f · (1/2) kT  (3.4)
By conservation of energy, the change in total thermal energy is the sum of heat entering the system and
work done on the system,
∆U = Q + W (3.5)
Q: heat transfer by conduction, convection, radiation.
W: mechanical/electric/chemical work.
while we already know how to calculate U according to eq 3.4, let’s try to figure out how to calculate W. We
always start from its original definition,
∆W = F∆X (3.6)
Suppose this is a quasistatic compression, i.e., every step is very slow and reversible; then we have
∆W = PA∆X (3.7)
∆W = − P∆V (3.8)
A simple way to check that the derived equation is correct: W and −P∆V both have units of J.
How do we calculate −P∆V?
1. P is constant: W = −P∆V directly.
2. P is not constant: W = −∫ P dV, the area under the P–V curve (with a minus sign).
[Figure: P–V diagrams for the constant-pressure and varying-pressure cases]
Exercises
(Problem 1.32):
By applying a pressure of 200 atm, you can compress water to 99% of its usual volume. Sketch the process
on a PV diagram, and estimate the work required to achieve it. Does the result surprise you?
A B C Total
W
Q
U
Let’s think about how compression is done on the ideal gas. There are two extremes as follows.
1. very slow that the temperature doesn’t change at all, i.e., isothermal compression
2. very fast that the no heat escapes from the gas, i.e., adiabatic compression
dW = −P dV.  (3.13)
Since dU = dW (no heat flows in or out),
−P dV = (f/2) Nk dT.  (3.14)
By plugging in PV = NkT, we get
−dV/V = (f/2) dT/T.  (3.15)
Let us integrate:
−∫_{V_i}^{V_f} dV/V = (f/2) ∫_{T_i}^{T_f} dT/T,  (3.16)
and then we get
ln(V_f/V_i) = (f/2) ln(T_i/T_f).  (3.17)
P V^γ = const,  (3.19)
where γ = (f + 2)/f is called the adiabatic exponent.
Homework: how can we prove it? It will be used intensively in the following classes!
[Figure: P–V diagram comparing an isotherm (PV = const) with an adiabat]
Lecture 4: Energy in Thermal Physics
V T^{f/2} = const.  (4.1)
P V^γ = const,  (4.2)
where γ = (f + 2)/f is called the adiabatic exponent.
Exercises
(Problem 1.38):
Two identical bubbles rise from the bottom of a lake to its surface.
Bubble A rises quickly (adiabatic condition).
Bubble B rises slowly (isothermal condition).
Which bubble is larger in the end?
(Problem 1.40):
In Lec02, we have determined that
dP/dz = −(mg/kT) P  (4.3)
and obtained the pressure P as a function of height as follows,
P(z) = P(0) exp(−mgz/kT ) (4.4)
The heat capacity of an object is the amount of heat needed to raise its temperature by one degree,
C = Q/∆T = (∆U − W)/∆T  (4.6)
The specific heat capacity is a more fundamental quantity, defined per unit mass:
c = C/m.  (4.7)
1. constant volume, CV
2. constant pressure, CP
Discussions:
Exercises
(Problem 1.44): Look up the table of thermodynamic data at room temperature. Browse through the CP
values in this table, and understand them according to the equipartition theorem.
Type Examples Ideal Value Anomaly
monoatomic gas
diatomic gas
polyatomic(linear) gas
polyatomic gas
Elemental solid
Binary solid
Figure 4.1: Heat capacity at constant volume of one mole of hydrogen (H2 ) gas. Note that the temperature
scale is logarithmic.
Figure 4.2: Measured heat capacities at constant pressure (data points) for one mole each of three different
elemental solids.
4.3 Enthalpy
Constant pressure processes occur quite often in chemical reactions and phase transformations. Keeping
track of the work done during these processes gets to be a pain. Any idea to make it more convenient?
Instead of talking about the energy, we can agree to add the work due to PV in the given environment. This
results in a new quantity called the enthalpy (H),
H = U + PV (4.12)
Conveniently, we can express
C_P = (∂H/∂T)_P.  (4.13)
Think about
5.1 Introduction
We have explored the law of energy conservation and applied it to thermodynamic systems. In the meantime, we studied the relations between heat, work, and thermal energy, and the connections between macroscopic observables P, V, T and microscopic properties v, f.
However, some very fundamental questions remain unanswered,
1. what is temperature?
3. why do many processes happen in one direction, but never the reverse?
Let’s get started with a silly example of flipping three coins. How many possible outcomes are there?
Coin1 Coin2 Coin3
H H H
H H T
H T H
T H H
H T T
T T H
T H T
T T T
In total we have 8 outcomes, each of which is called a microstate.
Since all coins are indistinguishable, we are more interested in how many heads appear in each outcome. By simply counting from the above table, we know:
By simply counting the number from the above table, we know
3 heads, HHH
2 heads, HHT, HTH, THH
1 head, HTT, TTH, THT
0 head, TTT
Each of these is called a macrostate.
microstate: each of the eight different outcomes
macrostate: the number of heads, regardless of order
Although each microstate is equally probable, different macrostates have different probabilities of being observed. Clearly, 2 heads is more likely to be found than 3 heads. Here we introduce another quantity,
multiplicity (Ω): the number of microstates in a given macrostate.
In the context of the coin game, let's define Ω(n) as the number of cases in which we get n heads. If the total number of coins is N, we can derive the formula
Ω(N, n) = C(N, n) = N! / (n! (N − n)!)  (5.1)
What will happen if we increase N? Let's try N = 4 and 20 in Problems 2.1 and 2.2.
Such two-state systems are quite common in physics, for example the two-state paramagnet.
N: number of oscillators.
q: number of energy units.
Ω(N, q) = C(q + N − 1, q) = (q + N − 1)! / (q! (N − 1)!)  (5.2)
Exercises
Calculate the multiplicity of an Einstein solid with 5 oscillators and [1,2,3,4,5] units of Energy.
q   Ω(5, q)
1
2
3
4
5
Computer programming (see the sketch after this list):
1. Write a small piece of program to calculate Ω( N ) in the context of flipping coins and plot them when
N=10, 15, 30, 100.
2. Write a small piece of program to calculate Ω( N ) in the context of Einstein solid and plot them when
N=10, 15, 30, 100 and q from 0 to 10.
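A minimal sketch for these two tasks (not part of the original notes), assuming matplotlib is available for the plots; the multiplicity formulas are Eqs. 5.1 and 5.2:

# Multiplicities for coin flips (Eq. 5.1) and an Einstein solid (Eq. 5.2).
from math import comb
import matplotlib.pyplot as plt

def omega_coins(N, n):          # n heads out of N coins
    return comb(N, n)

def omega_einstein(N, q):       # N oscillators, q energy units
    return comb(q + N - 1, q)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
for N in [10, 15, 30, 100]:
    ax1.plot(range(N + 1), [omega_coins(N, n) for n in range(N + 1)], label=f"N={N}")
    ax2.plot(range(11), [omega_einstein(N, q) for q in range(11)], label=f"N={N}")
ax1.set(xlabel="n (heads)", ylabel="Omega"); ax1.legend()
ax2.set(xlabel="q", ylabel="Omega"); ax2.legend()
plt.show()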
In the previous section, we learned how to count Ω for an Einstein solid. Remember that we are trying to understand how heat is transferred, which requires at least two solids. Let's call the two solids A and B.
Figure 6.1: Two interacting Einstein solids isolated from the rest of the universe.
Assuming that A and B are weakly coupled (just as we assumed for the ideal gas model), the individual energies of the solids, q_A and q_B, will change only slowly. Under this assumption, the total number of energy units q_total is simply the sum of q_A and q_B.
To make life easier, let's fix q_total. What is the multiplicity for an arbitrary q_A? If we just count A,
Ω(A) = C(q_A + N_A − 1, q_A),  (6.1)
Ω(B) = C(q_B + N_B − 1, q_B),  with q_B = q_total − q_A.  (6.2)
Exercises
Write a table of q A , Ω( A), q B , Ω( B), Ω(total), when q A + q B = 5, NA =NB =6.
q(A)   Ω(A)   q(B)   Ω(B)   Ω(total)
0
1
2
3
4
5
To apply these formulas to large systems, we need a trick for evaluating factorials of large numbers. The trick is called Stirling's approximation,
N! ≈ N^N e^{−N} √(2πN)  (6.4)
This can be roughly understood as follows: N! contains N factors, each of which is on average of order N/e, so to leading order
N! ≈ N^N e^{−N}  (6.5)
A more elegant way to handle N! is to use the so-called Gamma function. Suppose you start with the integral
∫_0^∞ e^{−ax} dx = 1/a  (6.6)
Starting with this equation, you are able to prove Eq. 6.4. Taking the logarithm gives
ln N! ≈ N ln N − N + (1/2) ln(2πN)  (6.8)
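A minimal sketch (not in the notes) comparing ln N! with the two levels of Stirling's approximation above:

# Compare ln N! with Stirling's approximation (Eqs. 6.5 and 6.8).
import math

for N in [10, 100, 1000, 10000]:
    exact = math.lgamma(N + 1)                           # ln N!
    crude = N * math.log(N) - N                          # N ln N - N
    full  = crude + 0.5 * math.log(2 * math.pi * N)      # + (1/2) ln(2 pi N)
    print(N, exact, crude, full)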
2. Write a code to calculate the probability of Ω(q A ), when NA =[300, 3000], NB =[200, 2000], for q=[100,
1000], plot them and try to explain the differences. (hint: 2 plots)
3. Write a code to show the comparison of Stirling approximation in eq.6.10 and 6.9
Figure 6.3: Probability distribution of Ω in two interacting Einstein solids for different q values. Top panel: N_a = 300, N_b = 200, q_a up to 100; bottom panel: N_a = 3000, N_b = 2000, q_a up to 1000.
Figure 6.5: Comparison between N!, the Gamma function Γ(n + 1), and Stirling's approximation √(2πn)(n/e)^n.
Lecture 7: The Second Law and Entropy
Remember that we have done some computer programming for two-state systems. A general trend is that as N becomes large, Ω(N) tends to become sharply localized. If we treat the histogram of Ω(N) as a continuous function (valid when N is very large), it looks like a very smooth curve and Ω(N) follows some kind of distribution.
Now let's try to figure out what it is, taking an Einstein solid as an example:
Ω(N, q) = C(q + N − 1, q) = (q + N − 1)! / (q! (N − 1)!)  (7.1)
In reality, there are always many more energy units (q) than oscillators (N), so we assume q ≫ N.
To make it easier, let's just drop the −1 in Eq. 7.1:
ln Ω = ln[(q + N)! / (q! N!)]
     = ln(q + N)! − ln q! − ln N!  (7.2)
     ≈ (q + N) ln(q + N) − (q + N) − q ln q + q − N ln N + N
     = (q + N) ln(q + N) − q ln q − N ln N
Therefore, we have
ln Ω ≈ N ln(q/N) + N + N²/q ≈ N ln(q/N) + N  (7.4)
Naturally, we now know the general form for the two-Einstein-solid model,
Ω(N_A, q_A, N_B, q_B) = (e q_A / N_A)^{N_A} (e q_B / N_B)^{N_B}  (7.6)
For N_A = N_B = N, this becomes
Ω(N, q_A, q_B) = (e/N)^{2N} (q_A q_B)^N  (7.7)
Based on what we plotted in the homework, we know that Ω reaches its maximum value at q_A = q_B = q/2, where
Ω_max = (e/N)^{2N} (q/2)^{2N}  (7.8)
Writing q_A = q/2 + x and q_B = q/2 − x, where x measures the deviation from the most likely macrostate, we have
Ω(N, q, x) = (e/N)^{2N} [(q/2)² − x²]^N.  (7.10)
ln{[(q/2)² − x²]^N} = N ln[(q/2)² − x²]
                    = N ln[(q/2)² (1 − (2x/q)²)]
                    = N [ln(q/2)² + ln(1 − (2x/q)²)]  (7.11)
                    ≈ N [ln(q/2)² − (2x/q)²]
hence we have
Ω = Ω_max · e^{−N(2x/q)²}  (7.12)
This has the form of a Gaussian,
f(x | µ, δ²) = (1/√(2πδ²)) e^{−(x−µ)²/(2δ²)}.  (7.13)
[Figure: a Gaussian curve f(x), centered at µ, with width δ]
1. symmetric
2. Gaussian width
N (2x/q)² = 1,   or   x = q / (2√N)  (7.14)
Let's plug in some numbers, say N = 10^20. This result tells us that when two Einstein solids are in thermodynamic equilibrium, random fluctuations will not be measurable: the most likely macrostates are very localized.
Exercises
1. Problem 2.20.
2. Problem 2.23
Suppose we have a single gas atom (Ar), with kinetic energy U, in a container of volume V. What is its corresponding Ω? Obviously, the number of possible microstates is proportional to V: in principle, the atom can be at any place in V. Each microstate must also specify the atom's velocity (more precisely, its momentum). Therefore
Ω ≈ V · V_p  (7.15)
It appears that both V and Vp somehow relate to very large numbers, but would their product go to infinity?
Fortunately, we have the famous Heisenberg uncertainty principle:
∆x∆p x = h. (7.16)
For a one-dimensional system, we define L as the length in real space and L_p as the length in momentum space, so
Ω_1D = (L/∆x)(L_p/∆p_x) = L L_p / h.  (7.17)
Therefore, its 3D version is,
Ω_1 = V V_p / h³.  (7.18)
Accordingly, the multiplicity function for an ideal gas of two molecules should be
Ω_2 = (1/2) (V²/h⁶) × (area of momentum hypersphere)  (7.19)
if the two molecules are indistinguishable. The general form for N should be
Ω_N = (1/N!) (V^N/h^{3N}) × (area of momentum hypersphere).  (7.20)
For N=1, how to calculate the area? Since U depends on the momentum by
U = (1/2) m (v_x² + v_y² + v_z²) = (1/2m) (p_x² + p_y² + p_z²)  (7.21)
area = 2                            (d = 1)
     = 2πr                          (d = 2)
     = 4πr²                         (d = 3)
     = ...                          (7.23)
     = [2π^{d/2} / Γ(d/2)] r^{d−1}  (d in general)
For a d-dimensional hypersphere with radius r, we can build up the area iteratively. When d = 1, A(r) = 2; when d = 2, A(r) = 2πr; and
A_3(r) = ∫_0^π A_2(r sin θ) r dθ = 2πr² ∫_0^π sin θ dθ = 4πr².  (7.27)
Consequently, we can keep going:
A_d(r) = ∫_0^π A_{d−1}(r sin θ) r dθ
       = ∫_0^π [2π^{(d−1)/2} / Γ((d−1)/2)] (r sin θ)^{d−2} r dθ  (7.28)
       = [2π^{(d−1)/2} / Γ((d−1)/2)] r^{d−1} ∫_0^π (sin θ)^{d−2} dθ
∫_0^π (sin θ)^n dθ = √π Γ((n+1)/2) / Γ((n+2)/2)  (7.29)
so
A_d(r) = [2π^{d/2} / Γ(d/2)] r^{d−1}  (7.30)
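A quick numerical sanity check (not in the notes) of Eq. 7.30 against the familiar low-dimensional results 2, 2πr, 4πr²:

# Surface "area" of a d-dimensional hypersphere of radius r, Eq. 7.30.
import math

def area(d, r=1.0):
    return 2 * math.pi**(d / 2) / math.gamma(d / 2) * r**(d - 1)

print(area(1), 2)               # d = 1: two points
print(area(2), 2 * math.pi)     # d = 2: circumference of a circle
print(area(3), 4 * math.pi)     # d = 3: surface of a sphere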
Lecture 8: The Second Law and Entropy
8.1 Entropy
S = klnΩ (8.1)
2. Mixing A and B?
Exercise
Based on Figure 2.5, compute the entropy of the total, most likely, least likely macrostate. Compare them
with the number of typical values (0.77J/K).
S = Nk [ ln( (V/N) (4πmU/(3Nh²))^{3/2} ) + 5/2 ]  (8.4)
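A minimal sketch (not in the notes) evaluating Eq. 8.4 for one mole of He and of Ar at room conditions, using U = (3/2)NkT and the ideal-gas volume at 300 K and 1 atm as assumed inputs:

# Sackur-Tetrode entropy (Eq. 8.4) for 1 mol of a monatomic ideal gas.
import math

k  = 1.380649e-23     # J/K
h  = 6.62607e-34      # J s
NA = 6.02214e23
u  = 1.66054e-27      # kg, atomic mass unit

def sackur_tetrode(mass_kg, T=300.0, P=1.013e5, N=NA):
    V = N * k * T / P                 # ideal-gas volume
    U = 1.5 * N * k * T               # monatomic thermal energy
    arg = (V / N) * (4 * math.pi * mass_kg * U / (3 * N * h**2))**1.5
    return N * k * (math.log(arg) + 2.5)

print("He:", sackur_tetrode(4.0 * u), "J/K")    # roughly 126 J/K
print("Ar:", sackur_tetrode(39.9 * u), "J/K")   # roughly 155 J/K, larger (heavier atom)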
If we mix two different gases, A and B, each with the same U, V, and N, initially occupying the two halves of a divided chamber, then removing the partition lets each gas expand into twice the volume:
∆S_A = Nk ln(V_f/V_i) = Nk ln 2  (8.5)
What if A and B are indistinguishable (double counting)? Does mixing entropy only apply when A and B are different?
[Figure: a piston containing 1 mol He(g)]
8.6 Homework
Prove that 1 mol Ar gas has larger entropy than 1 mol He.
Problems 2.17, 2.18, 2.22, 2.29, 2.30, 2.31, 2.32, 2.34, 2.37
Lecture 9: Entropy and Heat
Show that during a quasi-static isothermal expansion, the change of entropy is related to the heat input Q by
∆S = Q/T,   or equivalently   T = Q/∆S.  (9.1)
It looks like T can be expressed as energy divided by entropy.
Figure 9.1: Entropy (S/k) as a function of q_A for two interacting Einstein solids (N_A = 300, N_B = 200, q_total = 100), showing S_A, S_B, and S_total.
In the context of two interacting Einstein solids, we once calculated Ω as a function of q_A. Let's redo it in terms of entropy. At equilibrium, S_total reaches its maximum, therefore
∂S_total/∂q_A = 0  →  ∂S_total/∂U_A = 0  (9.2)
Exercise
Calculate the slope of the S–q graph at various points (assuming ε = 0.1 eV; recall that 0.024 eV corresponds to about 300 K).
1. q = 0:   (∆S_A/∆U_A)^{−1} = ____,   (∆S_B/∆U_B)^{−1} = ____
2. q = 10:  (∆S_A/∆U_A)^{−1} = ____,   (∆S_B/∆U_B)^{−1} = ____
3. q = 60:  (∆S_A/∆U_A)^{−1} = ____,   (∆S_B/∆U_B)^{−1} = ____
T = (∂S/∂U)^{−1} = (Nk/U)^{−1} = U/Nk  (9.6)
U = NkT  (9.7)
This is exactly the thermal equipartition theorem applied to an Einstein Solid.
T = (∂S/∂U)^{−1} = ((3/2) Nk/U)^{−1} = 2U/(3Nk)  (9.9)
U = (3/2) NkT  (9.10)
dS = dU/T = Q/T  (constant volume, W = 0)  (9.11)
dS = C_V dT / T  (constant volume, W = 0)  (9.12)
Lecture 9: Entropy and Heat 9-3
∆S = S_f − S_i = ∫_{T_i}^{T_f} (C_V/T) dT  (9.13)
∆S = ∫_A^B (C_V/T) dT  (9.14)
This value looks much smaller than the reference value in the appendix (197.67 J/K), because a constant
volume assumption is not realistic.
Figure 9.2: A schematic heat flow of 1500 J between two interacting Einstein solids, solid A at 500 K and solid B at 300 K.
4. Fundamentally, the net increase in entropy is the driving force behind the flow of heat.
5. This is a manifestation of the 2nd law.
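As a quick worked check of Figure 9.2 (a sketch assuming the two temperatures stay roughly constant during the transfer of the 1500 J):
∆S_A = −1500 J / 500 K = −3 J/K,   ∆S_B = +1500 J / 300 K = +5 J/K,   ∆S_total = +2 J/K > 0,
so the flow of heat from the hotter to the colder solid indeed increases the total entropy, as stated in point 4.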
Lecture 10: Entropy and Pressure
In the last lecture, we learned the relation between S and T; is there an analogous relation between S and P?
[Figure: two systems A (U_A, V_A, S_A) and B (U_B, V_B, S_B) at pressures P_A and P_B, exchanging volume through a movable partition]
Again, we start from the condition when the system reaches its equilibrium,
∂S_total/∂U_A = 0  →  ∂S_total/∂V_A = 0  (10.1)
as S is a function of U and V.
We already applied the 1st condition in the previous lecture. How about the 2nd condition?
∂S_A/∂V_A + ∂S_B/∂V_A = 0  →  ∂S_A/∂V_A = ∂S_B/∂V_B  (10.2)
P = T (∂S/∂V) = NkT/V  (10.5)
PV = NkT  (10.6)
Again, we proved the ideal gas law.
From the above sections, it seems that ∆S can be divided into two parts. Let's say
∆S = (∆S/∆U) ∆U + (∆S/∆V) ∆V  (10.7)
Supposing each step is very small, we can write
dS = (∂S/∂U) dU + (∂S/∂V) dV  (10.8)
dS = dU/T + P dV/T  (10.9)
1. ∆U = 0:  T dS = P dV
2. ∆V = 0:  dU = T dS
3. ∆S = 0:  dU = −P dV  (10.13)
isentropic = quasistatic + adiabatic
∆S = S_f − S_i = ∫_{T_i}^{T_f} (C_P/T) dT  (10.14)
S(300 K) = S(0 K) + ∫_0^{300} (C_P/T) dT = 5.8 + 3.5 × 8.31 × ln(300) = 173.89 J/K.
This value looks much smaller than the reference value in the appendix (197.67 J/K), because a constant
volume assumption is not realistic. A more realistic solution is
∆S = C_V ln(P_B/P_A) + C_P ln(V_B/V_A),  (10.15)
when you consider Q = ∆U − W.
∆S = Q/T  (quasistatic)  (10.16)
∆S > Q/T  (in practice)  (10.17)
2. free expansion
10.3 Homework
Problem 3.5, 3.8, 3.11, 3.14, 3.16, 3.27, 3.30, 3.31, 3.32, 3.33
Lecture 11: Entropy and Chemical Potential
When we talk about a system of ideal gas, it is described by the fundamental parameters T, P, and N.
[Figure: two systems A (U_A, N_A, S_A) and B (U_B, N_B, S_B) exchanging particles]
Such a process due to an exchange of particles is called diffusion. Since diffusion is also a spontaneous
process, it must lead to the increase of entropy (recall we have learned the mixing entropy). This indicates
that S is also a function of N (as we learned in Chapter 2). Therefore, we can also derive the equilibrium
condition for a diffusion (analogous to P).
∂S_A/∂N_A = ∂S_B/∂N_B  (11.1)
1. uneven T (∂S/∂U = 1/T) → heat flow (S ↑); we call it thermal equilibrium.
2. uneven P (∂S/∂V = P/T) → pressure-driven volume exchange (S ↑); we call it mechanical equilibrium.
3. uneven N (∂S/∂N = −µ/T) → particle flow (S ↑); we call it diffusive equilibrium.
dS = (∂S/∂U) dU + (∂S/∂V) dV + (∂S/∂N) dN  (11.3)
dS = (1/T) dU + (P/T) dV − (µ/T) dN  (11.4)
1. ∆U = 0, ∆V = 0:  µ = −T (∆S/∆N)
2. ∆S = 0, ∆V = 0:  µ = ∆U/∆N
µ is the system's energy change when you add one particle while S and V are fixed. Normally, µ is negative.
µ = (∂U/∂N)_{S,V}  (11.7)
S = Nk [ ln( (V/N) (4πmU/(3Nh²))^{3/2} ) + 5/2 ]  (11.8)
µ = −T (∂S/∂N)_{U,V}
  = −kT [ ln( V (4πmU/(3h²))^{3/2} ) − ln N^{5/2} + 5/2 ] + (5/2) kT
  = −kT ln[ (V/N) (4πmU/(3Nh²))^{3/2} ]  (11.9)
  = −kT ln[ (V/N) (2πmkT/h²)^{3/2} ]
Note that U = (3/2)NkT was used in the last step.
He (0.32 eV, how to calculate?) Ar (0.42 eV, how to calculate?)
11.5 Exercise
Problem 3.37. Consider a monoatomic ideal gas that lives at a height z above sea level, so each molecule
has potential energy mgz in addition to its kinetic energy.
(a) Show that the chemical potential is the same as if the gas were at sea level plus mgz
µ(z) = −kT ln[ (V/N) (2πmkT/h²)^{3/2} ] + mgz  (11.10)
(b) Suppose you have two chunks of He gas, one at sea level, and one at height z, each having the same tem-
perature and volume. Assuming that they are in diffusive equilibrium, show that the number of molecules
in the higher chunk is
N (z) = N (0)e−mgz/kT (11.11)
11.6 Homework
The system consists of N spin particles (elementary dipoles), immersed in a constant magnetic field B pointing in the z direction. Each particle behaves like a compass needle (a dipole). According to quantum mechanics, the component of a particle's dipole moment along the field takes quantized values (see Fig. 12.1).
Figure 12.1: A two-state paramagnet with an external magnetic field B and N microscopic magnetic dipoles pointing up or down. The energy of a single dipole in an ideal two-state paramagnet is −µB for the up state and +µB for the down state.
The total energy of the system is
U = µB (N↓ − N↑),  (12.1)
where N↑ and N↓ are the numbers of up and down dipoles, and N = N↑ + N↓ is the total number. Depending on the distribution of N↑ and N↓, the system may exhibit a magnetization M, which is the total magnetic moment of the whole system. Each up dipole contributes +µ and each down dipole −µ, so
M = µ(N↑ − N↓) = −U/B  (12.2)
Our goal is to understand how U and M relate to the temperature. Therefore, we need to know the multi-
plicity (since we know T is related to the entropy).
Ω(N↑) = C(N, N↑) = N! / (N↑! N↓!).  (12.3)
We can draw the Ω–U diagram. Clearly, the maximum of Ω is achieved when N↑ = N↓, i.e., when U is zero.
1. When all dipoles are pointing up, U reaches the minimum, and the slope becomes very large.
2. When Ω reaches the maximum, the slope goes to zero, and then switches the sign.
Since we know that T is the reciprocal of the slope of S − U graph, it means that the temperature is actually
infinite when U = 0. That is to say, the system will gladly give up energy to any other system whose
temperature is finite. At higher energies, the slope becomes negative. Does it mean temperature becomes
negative??
Negative temperature can occur only for a system whose total energy is limited, so that the multiplicity
decreases as the maximum allowed energy is approached.
Figure 12.2: Heat capacity and magnetization of a two-state paramagnet (computed from the analytic for-
mulas derived later in the text). Copyright 2000, Addison-Wesley.
S/k = ln N! − ln N↑! − ln(N − N↑)!
    ≈ N ln N − N − N↑ ln N↑ + N↑ − (N − N↑) ln(N − N↑) + (N − N↑)  (12.4)
    = N ln N − N↑ ln N↑ − (N − N↑) ln(N − N↑)
Now we get
1/T = (k/2µB) ln[(N − U/µB) / (N + U/µB)]  (12.6)
Therefore,
U = NµB (1 − e^{2µB/kT}) / (1 + e^{2µB/kT}) = −NµB tanh(µB/kT)  (12.7)
To calculate the heat capacity of the paramagnet, we just need to differentiate the equation with respect to
T;
C_B = (∂U/∂T)_{N,B} = Nk (µB/kT)² / cosh²(µB/kT)  (12.9)
This function approaches zero at both high T and low T.
At room temperature µB ≈ 5.8 × 10⁻⁵ eV, while kT is about 1/40 eV, so we can assume that µB/kT ≪ 1. In this limit tanh(x) ≈ x, so the magnetization becomes
M ≈ Nµ²B / (kT)  (12.10)
The fact that M ∝ 1/T was discovered by Pierre Curie and is known as Curie’s law.
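A minimal sketch (not in the notes) evaluating Eqs. 12.7, 12.9, and 12.10 numerically; the dipole moment (taken as the Bohr magneton) and B = 1 T are assumed inputs, not values from the notes:

# Two-state paramagnet: energy, heat capacity, and magnetization vs. temperature.
import math

k  = 1.380649e-23   # J/K
mu = 9.274e-24      # J/T  (Bohr magneton, assumed dipole moment)
B  = 1.0            # T    (assumed field)
N  = 6.022e23       # one mole of dipoles

for T in [0.5, 1.0, 2.0, 10.0, 300.0]:
    x = mu * B / (k * T)
    U = -N * mu * B * math.tanh(x)          # Eq. 12.7
    C = N * k * x**2 / math.cosh(x)**2      # Eq. 12.9
    M = N * mu * math.tanh(x)               # ~ N mu^2 B / kT when x << 1 (Curie's law)
    print(T, U, C, M)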
12.4 Homework
A heat engine is any device that absorbs heat and converts part of that energy into work. Unfortunately,
only part of the energy absorbed as heat can be converted to work. The reason is that the heat, as it flows
in, brings along entropy, which must somehow be disposed as waste.
[Figure: energy-flow diagrams of a heat engine and a refrigerator, each exchanging Q_h with a hot reservoir at T_h and Q_c with a cold reservoir at T_c, with work W]
The benefit of a heat engine is the work produced, W. Let’s define the efficiency e,
e = W/Q_h  (13.1)
e = (Q_h − Q_c)/Q_h = 1 − Q_c/Q_h  (13.3)
S_2 ≥ S_1  →  Q_c/T_c ≥ Q_h/T_h  →  Q_c/Q_h ≥ T_c/T_h  (13.4)
Therefore, we conclude
e ≤ 1 − T_c/T_h  (13.5)
13.2 Refrigerator
e = Q_c/(Q_h − Q_c) = 1/(Q_h/Q_c − 1)  (13.8)
Suppose 1 mol He undergoes the following cycles, in which P2 = 2P1 , V4 =2V1 . Calculate the heat transfer
(Q) for each step, and the efficiency of the engine.
[Figure: the cycle on a P–V diagram through (P1, V1), (P2, V2), (P3, V3), (P4, V4), with isothermal and adiabatic legs]
Exercises
1. Why must you put an air conditioner in the window of a building, rather than in the middle of a
room?
2. Can you cool off your kitchen by leaving the refrigerator door open?
13.5 Homework
In the previous lecture, we learned the theoretical limit of heat engines. It is useful to know the upper limit when we design an engine. However, one might also ask how engines work in reality. Can we follow the Carnot cycle in practice?
Now, let's discuss a few of the engines used in the real world.
[Figure: P–V diagrams of the Otto cycle (between V2 and V1) and the Diesel cycle (V2, V3, V1)]
Recalling what we learned about P–V diagrams and compression work, we can derive the efficiency as a function of V:
PV^γ = const  →  e = 1 − (V2/V1)^{γ−1} = 1 − T1/T2 = 1 − T3/T4  (Otto cycle)  (14.1)
PV = const  →  e = 1 − V2/V1  (Diesel cycle)  (14.2)
[Figure: P–V diagram of the cycle between V2 and V1, with stages 1 (expansion), 2 (heat removal), 3 (compression), and 4 (heat addition)]
A very different type of engine is the steam engine, in which liquid water/steam is used as the working substance. It works as follows:
3. steam hits the turbine, where it expands adiabatically, cools, and ends up at the original pressure;
e = 1 − Q_c/Q_h = 1 − (H_4 − H_1)/(H_3 − H_2) = 1 − (H_4 − H_1)/(H_3 − H_1)  (14.3)
COP = Q_c/(Q_h − Q_c) = (H_1 − H_4)/(H_2 − H_3 − H_1 + H_4)  (14.4)
[Figure: schematic of the refrigeration cycle, including the evaporator (1) and the condenser (4)]
14.3 Homework
15.1 Outline
In the previous chapter, we applied the laws of thermodynamics to study cyclic processes. Now let's turn to chemical reactions and other phase transformations of matter. One complication is that the system usually interacts with its surroundings thermally, mechanically, and chemically, so the energy of the system alone is not conserved in these processes. Instead, T, P, and µ become the crucial parameters.
The first task is to develop the conceptual tools needed to understand constant T, P processes. Recall we
have defined the concept of enthalpy (H),
H ≡ U + PV (15.1)
Let's understand it in this way: H is the total energy you would need to create the system from nothing and place it in the given environment. In addition to its internal energy (U), you also need to supply the PV work required to make room for it.
Similarly, we have two more useful quantities that are related to energy and analogous to H,
The four functions U, H, F, G are collectively called thermodynamic potentials. Their relations are shown
as follows,
        −TS
   U --------> F
   |           |
 +PV         +PV
   |           |
   H --------> G
        −TS
1. measure the heat absorbed when the reaction takes place, ∆H;
2. obtain the entropies of the reactants and products (e.g., from tables) to get ∆S;
3. ∆G = ∆H − T∆S.
Consider the production of ammonia from nitrogen and hydrogen at 298 K and 1 bar,
N2 (g) + 3 H2 (g) −−→ 2 NH3 (g)
From the values of ∆H and S, compute ∆G for this reaction and check that it is consistent with the value
given in the table.
Consider the chemical reaction of the electrolysis of water to hydrogen and oxygen gas,
H2O(l) −−→ H2(g) + (1/2) O2(g)
S = −(∂G/∂T)_{P,N}  (15.8)
V = (∂G/∂P)_{T,N}  (15.9)
µ = (∂G/∂N)_{T,P}  (15.10)
Functions encountered in physics are generally well enough behaved that their mixed partial derivatives
do not depend on which derivatives are taken first. For instance,
∂/∂V (∂U/∂S) = ∂/∂S (∂U/∂V)  (15.11)
From the thermodynamic identity for U, we can evaluate these partial derivatives:
(∂T/∂V)_S = −(∂P/∂S)_V  (15.12)
Similarly, we apply the same idea to H, F, and G:
∂/∂P (∂H/∂S) = ∂/∂S (∂H/∂P)  →  (∂T/∂P)_S = (∂V/∂S)_P  (15.13)
∂/∂V (∂F/∂T) = ∂/∂T (∂F/∂V)  →  (∂S/∂V)_T = (∂P/∂T)_V  (15.14)
∂/∂P (∂G/∂T) = ∂/∂T (∂G/∂P)  →  (∂S/∂P)_T = −(∂V/∂T)_P  (15.15)
These are the Maxwell relations. They are useful because they let us express changes of entropy, which are not directly measurable, in terms of measurable quantities like T, V, and P. Here let me just give some of their applications. The thermal expansion coefficient is defined as follows:
β = (∆V/V)/∆T = (1/V)(∂V/∂T)_P  (15.16)
According to the 3rd law, the entropy approaches 0 (or some constant) as T → 0, regardless of P; thus (∂S/∂P)_T → 0, and by the Maxwell relation (15.15), β → 0 as T goes to 0.
Now let's dig into the difference between C_V and C_P.
dS = (∂S/∂T)_V dT + (∂S/∂V)_T dV  (15.18)
dV = (∂V/∂T)_P dT + (∂V/∂P)_T dP  (15.19)
At constant pressure (dP = 0),
(dS)_P = (∂S/∂T)_V dT + (∂S/∂V)_T (∂V/∂T)_P dT  (15.20)
That is,
(∂S/∂T)_P = (∂S/∂T)_V + (∂S/∂V)_T (∂V/∂T)_P  (15.21)
C_P = C_V + T (∂S/∂V)_T (∂V/∂T)_P  (15.22)
C_P = C_V + T (∂P/∂T)_V (∂V/∂T)_P  (15.23)
Therefore, using (∂P/∂T)_V = −(∂V/∂T)_P / (∂V/∂P)_T,
C_P = C_V − T (∂V/∂T)_P² / (∂V/∂P)_T  (15.25)
In terms of the thermal expansion coefficient β and the isothermal compressibility κ_T = −(1/V)(∂V/∂P)_T,
C_P − C_V = −T (βV)² / (−κ_T V) = T V β² / κ_T  (15.27)
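A quick consistency check (not in the notes): for an ideal gas β = 1/T and κ_T = 1/P, so Eq. 15.27 reduces to the familiar result
C_P − C_V = T V β² / κ_T = T V (1/T)² / (1/P) = PV/T = Nk.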
Lecture 16: Free Energy
For an isolated system, the entropy tends to increase. The system’s entropy determines the direction of
spontaneous change. But what if a system is not isolated? Now energy can pass between the system and
the environment. The 2nd law still applies. However, the total entropy would be the determining factor.
Let's consider a small change in the total entropy.
Reconsider the formation of water by burning hydrogen: H2 + (1/2) O2 −−→ H2O
1. phase boundary,
2. triple point, where solid, liquid, gas coexists
3. slope of phase boundary, ice anomaly
4. critical points, where gas and liquid are no longer distinguishable (fluid)
5. superconducting......
6. Curie temperature
Diamond/Graphite
(∂G/∂P)_{T,N} = V  (16.7)
(∂G/∂T)_{P,N} = −S  (16.8)
At the phase boundary, the material is equally stable as a liquid or a gas, so its Gibbs free energy must be
the same,
Gl = Gg (16.9)
Now imagine increasing the temperature by dT and the pressure by dP, in such a way that the two phases
remain equally stable. Under this change,
dGl = dGg (16.10)
Therefore, by the thermodynamic identity for G,
−S_l dT + V_l dP = −S_g dT + V_g dP,  (16.11)
where the µ dN term is intentionally neglected (N is fixed). Now it is easy to solve for the slope of the phase boundary line,
dP/dT = (S_g − S_l)/(V_g − V_l)  (16.12)
Therefore, the slope is determined by the entropies and volumes of the two phases. Shallow/stiff. In
practice, it is more convenient to write as
dP/dT = L/(T ∆V)  (16.13)
where, L is the latent heat for converting the material from liquid to gas. And this is known as the Clausius-
Clapeyron relation
The P-T slope of the solid-liquid phase boundary is usually positive. But water is an exception. Why?
Diamond/Graphite, (300 K, 15 kbar) .v.s (1800 K, 60 kbar)
Below 0.3 K the slope of the 3 He solid-liquid phase boundary is negative, which phase is more dense?
Which phase has more entropy?
Solid phase is denser, and has more entropy.
At absolute 0 K, the slope goes to 0, as ∆S goes to 0.
From Liquid to Solid, S has to increase.
During adiabatic compression, S has to remain constant.
Therefore, T has to drop. A good way to reach ultra-low temperature.
16.4 Homework
[Figure: Gibbs free energy G of graphite and diamond]
We should start with a specific mathematical model. The easiest approach is perhaps to study liquid–gas transformations based on the van der Waals model.
In 1873, van der Waals proposed a modified model based on the ideal gas law:
(P + aN²/V²)(V − Nb) = NkT  (17.1)
1. Subtracting Nb from V, which accounts for the minimum volume when pressure goes to infinity.
2. Adding aN 2 /V 2 to P, which accounts for the short range attractive forces.
Why N and N 2 /V 2 ?
Considering all atoms are spheres, the minimum of V must be proportional to N.
The potential energy must be proportional to its density N × N/V.
P = NkT/(V − Nb) − aN²/V²  (17.2)
The constants a and b must be fitted for each system: b corresponds to the molecular volume, while a is much more variable because of the complexity of the intermolecular interactions. Now let us investigate the consequences of the vdW model. In the high-T (large-V) regime, it behaves exactly like an ideal gas. In the low-T regime, the behavior is much more complicated: as V decreases, the isotherm rises, falls, and then rises again.
At a given P, T, the true equilibrium state of a system is determined by its Gibbs free energy. To calculate G
for a vdW fluid, let’s start with the thermodynamic identity.
For a fixed amount of materials, at a given temperature, this equation reduces to dG = VdP. Dividing both
sides by dV gives,
(∂G/∂V)_{N,T} = V (∂P/∂V)_{N,T}  (17.4)
The right-hand side can be computed from the vdW model,
(∂G/∂V)_{N,T} = −NkTV/(V − Nb)² + 2aN²/V²  (17.5)
This equation allows us to plot the Gibbs free energy for any fixed T.
To understand it, let’s try to make the plot of G as a function of P and V. From the G − P plot, we can clearly
see that there is a triangle loop in the graph (2-3-4-5-6), corresponding to the unstable states. As the pressure
gradually increases, the system will go straight from 2 to 6, with an abrupt decrease in volume. A phase
transformation therefore occurs. At point 2, we call it a gas, but point 6 a liquid. At intermediate volumes,
the thermodynamically stable state is actually a combination of part gas and part liquid, as indicated by the
straight vertical line on the P − V diagram.
The pressure at the phase transformation is easy enough to determine from the graph of G, but there is also a way to obtain it directly from the P–V graph: since points 2 and 6 have the same Gibbs free energy, the net value of ∫ V dP around the loop 2-3-4-5-6 must vanish. This is called the Maxwell construction.
Repeating the Maxwell construction for a variety of temperatures gives the vapor pressure at each temperature; the region swept out by the flat portions defines where liquid and gas coexist. However, what happens at high temperature? At some point the Maxwell construction can no longer be made, so the phase boundary must disappear at some temperature, which we call the critical temperature, Tc; the corresponding P and V are called Pc and Vc. These values define the critical point. If the temperature goes beyond the critical point, liquid and gas are no longer distinguishable.
(∂P/∂V)_T = 0,   (∂²P/∂V²)_T = 0  (17.7)
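A minimal sketch (not part of the notes) that solves Eq. 17.7 symbolically for the vdW critical point, assuming sympy is available:

# Critical point of the van der Waals gas (Eqs. 17.2 and 17.7).
import sympy as sp

V, T, N, a, b, k = sp.symbols('V T N a b k', positive=True)
P = N*k*T/(V - N*b) - a*N**2/V**2            # Eq. 17.2

# Require the first and second derivatives of P w.r.t. V to vanish.
sol = sp.solve([sp.diff(P, V), sp.diff(P, V, 2)], [V, T], dict=True)[0]
Vc, Tc = sol[V], sol[T]
Pc = P.subs({V: Vc, T: Tc})

print(sp.simplify(Vc))   # expected: 3*N*b
print(sp.simplify(Tc))   # expected: 8*a/(27*b*k)
print(sp.simplify(Pc))   # expected: a/(27*b**2)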
Figure 17.2: Gibbs free energy as a function of pressure for a vdW model at T = 0.9 Tc. The corresponding P–V isotherm is shown above it, with points 1–7 labeling states along the isotherm.
Figure 17.3: Gibbs free energy as a function of volume for a vdW model at T = 0.9 Tc. The corresponding isotherm is shown above it.
Lecture 18: Phase Transformation of Mixtures
Phase transformations become more complicated when a system contains two or more types of particles. For example, consider a mixture of nitrogen and oxygen (such as air): what will happen if you lower its temperature at 1 atm? You might expect that all the oxygen would liquefy at 90.2 K, leaving a gas of pure nitrogen that would then liquefy at 77.4 K. However, no liquid at all forms until the temperature drops to 81.6 K. Similar behavior occurs for liquid–solid transitions as well. How can we understand this?
Figure 18.1: The free energy (G) and entropy (S) of an ideal mixture as functions of composition, where TS is the only term contributing to ∆G.
The key is to look at the Gibbs free energy, as we did for the vdW model. Let's consider a system of two molecular species, A and B, and suppose that they are initially separated, sitting side by side at the same temperature and pressure. For an unmixed system, the total free energy is just the sum of the two subsystems,
G = (1 − x) G_A + x G_B,  (18.1)
For the case of the ideal gas (Fig. 18.1), we can assume U and V don't change on mixing. Therefore the free energy of the mixture becomes
G = (1 − x) G_A + x G_B + NkT [x ln x + (1 − x) ln(1 − x)].  (18.2)
Note this only applies to an ideal mixture. For real cases, such as the mixing of liquids, mixing usually also changes U.
Let n be the average number of nearest neighbors and µ0 be the average potential energy of an A–A or B–B interaction, so that
U = (1/2) N n µ0.  (18.3)
Therefore,
∆U = N n x(1 − x)(µ_AB − µ0)  (18.5)
Figure 18.2: The phase behavior of a non-ideal mixture, where ∆U also contributes to ∆G; when ∆U is positive, there is a competition between U and TS. (Panels show ∆U and G for the mixed and separated states as functions of composition.)
The qualitative results are shown in Fig. 18.2. If µ_AB is lower than µ0 (A–B attraction stronger), ∆U is negative and favors mixing. But that is not always the case: if µ_AB is higher than µ0, ∆U is positive and there is a competition between U and TS. At low T the middle of the G(x) curve is concave-down; at sufficiently high T the TS term dominates and the curve is concave-up everywhere.
But a concave-down free energy function indicates an unstable mixture. Therefore, T determines whether mixing lowers or raises G. For each given x we can determine the critical temperature, and such a T–x diagram is very useful for identifying in which regions the system is miscible or not.
3. For T_A < T < T_B, liquid and gas coexist, with proportions depending on the composition x (see the G–x and T–x diagrams).
Most two-component solids do not maintain the same crystal structure over the entire range of composition. If we consider the solid–liquid transition with the possibility of two solid phases, this picture will
become more complicated. Again the idea is to look at the free energy at various temperatures. Suppose TB
is the melting point of B and TA is the melting point of A.
1. At high temperatures the free energy of the liquid will be below that of either solid phase.
2. As the temperature drops, all three free energy functions will increase (since ∂G/∂T = −S), but the free energy of the liquid will increase fastest because it has a larger TS term. Below T_B the liquid's free energy curve will intersect that of the B phase, so there is a range of compositions for which the stable configuration is an unmixed combination of liquid and B.
3. As T decreases further, this range reaches the A side of the diagram and you will find the range of an
unmixed combination of liquid and A.
4. If we drop the T further, we will find that A+liquid and B+liquid will meet at a particular point
(Eutectic point). An Eutectic point defines the lowest melting point of the system.
A solution is similar to a mixture. A solution is called dilute if the number of solute molecules is much smaller than the number of solvent molecules. In many ways the solute in a dilute solution behaves like an ideal gas, so we can predict many of its properties quantitatively.
To make a prediction, we first need to know something about the chemical potentials. The chemical potential µ_A is related to the Gibbs free energy by µ_A = ∂G/∂N_A. Suppose we start with a pure solvent of N_A molecules; then the Gibbs free energy is just N_A times the chemical potential,
G = N_A µ_0(T, P),  (19.1)
where µ_0 is the chemical potential of the pure solvent, a function of temperature and pressure.
Now imagine that we add a single B molecule to the system, holding T and P fixed. The change in G can be expressed as follows:
dG = dU + P dV − T dS  (19.2)
While dU and P dV won't depend on N_A, the T dS term is sensitive to such a change: the B molecule could be located in any of roughly N_A places, so the increase in S is
dS = k ln N_A  (19.3)
G = NA µ0 ( T, P) + NB f ( T, P) − NB kT ln NA + NB kT ln NB − NB kT (19.6)
This expression is valid when N_B ≪ N_A. The solvent and solute chemical potentials can be derived as follows:
µ_A = (∂G/∂N_A)_{T,P,N_B} = µ_0(T, P) − N_B kT/N_A  (19.7)
µ_B = (∂G/∂N_B)_{T,P,N_A} = f(T, P) + kT ln(N_B/N_A)  (19.8)
As we would expect, adding more solute reduces the chemical potential of A and increases the chemical potential of B. Moreover, the results depend only on the ratio N_B/N_A. We can therefore define a more convenient concentration variable, the molality m_B, and write
µ_B = µ°(T, P) + kT ln m_B,  (19.9)
where µ° absorbs the constant conversion factors.
Consider a solution that is separated from some pure solvent by a membrane that allows only solvent
molecules to pass through. According to eq(19.11), the chemical potential of the solvent in the solution is
less than that of the pure solvent. Particles tend to flow toward lower chemical potential, so the solvent
molecules will spontaneously flow from the pure solvent into the solution. This flow of molecules is called
osmosis. That osmosis should happen is counter-intuitive.
If you want to prevent osmosis from happening, the following condition must be met,
µ_0(T, P_1) = µ_0(T, P_2) − N_B kT/N_A  (19.10)
where P1 is the pressure on the side with pure solvent and P2 is the pressure on the side of solution. As-
suming that these two pressures are not too different, we can approximate
µ_0(T, P_2) ≈ µ_0(T, P_1) + (∂µ_0/∂P)(P_2 − P_1)  (19.11)
(∂µ_0/∂P)(P_2 − P_1) = N_B kT/N_A  (19.12)
To evaluate the derivative ∂µ_0/∂P, we use µ_0 = G/N, so ∂µ_0/∂P = V/N. Therefore, the previous equation becomes
P_2 − P_1 = N_B kT/V = n_B RT/V  (19.13)
This pressure difference is called the osmotic pressure, and the formula is called van’t Hoff’s formula. It
says that the osmotic pressure is exactly the same as the pressure of an ideal gas of the same concentration
as the solute. This is useful for biophysics studies.
Similar to the case of a mixture of two phases, the concentration of solute shifts the boiling and freezing points as well. Consider a dilute solution at its boiling point, when it is in equilibrium with its gas phase (Fig. 19.1). Assuming the solute does not evaporate, equilibrium requires
µ_0(T, P) − N_B kT/N_A = µ_gas(T, P),
where µ_0 is the chemical potential of the pure solvent. Now, as in the osmotic-pressure calculation, let P_0 be the vapor pressure of the pure solvent at the same temperature, so that
µ_0(T, P_0) = µ_gas(T, P_0)  (19.16)
µ_0(T, P_0) + (∂µ_0/∂P)(P − P_0) − N_B kT/N_A = µ_gas(T, P_0) + (∂µ_gas/∂P)(P − P_0)  (19.17)
(V/N)_liquid (P − P_0) − N_B kT/N_A = (V/N)_gas (P − P_0)  (19.18)
P − P_0 = −(N_B/N_A) P_0,   i.e.,   P/P_0 = 1 − N_B/N_A  (19.19)
Alternatively, we could hold the pressure fixed and solve for the shift in temperature needed to maintain equilibrium in the presence of the solute. Let T_0 be the boiling point of the pure solvent at P, so that
µ_0(T_0, P) = µ_gas(T_0, P)  (19.20)
µ_0(T_0, P) + (∂µ_0/∂T)(T − T_0) − N_B kT/N_A = µ_gas(T_0, P) + (∂µ_gas/∂T)(T − T_0)  (19.21)
Again the first term on each side cancels, and each ∂µ/∂T is just −S/N, so
−(S/N)_liquid (T − T_0) − N_B kT/N_A = −(S/N)_gas (T − T_0)  (19.22)
T − T_0 = N_B k T_0² / L = n_B R T_0² / L  (19.23)
With the results, let’s compute the boiling temperature of seawater. A convenient quantity to consider is
****
Figure 19.1: The presence of a solute lowers the vapor pressure of the solvent and raises its boiling point.
Lecture 20: Free Energy and Chemical Equilibrium
µ = (∂G/∂N)_{T,P}  (20.1)
µ = (∂F/∂N)_{T,V}  (20.2)
G = Nµ  (20.3)
or
F = Nµ  (20.4)
1. extensive quantities: V, N, S, U, H, F, G, M
2. intensive quantities: T, P, µ, ρ
When the total amount doubles, the values of all extensive quantities double as well. If you try to build up the system by summing contributions of µ = (∂F/∂N)_{T,V}, the argument fails, because adding particles at fixed T and V changes the intensive quantities along the way. But at fixed T and P it works for G, thus we have
G = N_1 µ_1 + N_2 µ_2 + ··· = Σ_i N_i µ_i  (20.5)
(∂µ/∂P)_T = (1/N)(∂G/∂P)_T = V/N = kT/P  (ideal gas)  (20.6)
An interesting fact about chemical reactions is that they hardly ever go to completion. Consider the dissociation of water into H+ and OH− ions: H2O ←−→ H+ + OH−
This reaction tends to strongly go to the left. But it won’t be the whole story. Otherwise, there would be no
ions in a glass of water!
How do we understand this? At room temperature, the water molecules are constantly colliding with each
other at rather high speed. Every once in a while, one of these collisions is violent enough to break a
molecule apart into two ions. The ions then tend to separate. Eventually an equilibrium is reached between
the breaking apart and recombining.
This phenomenon could also be explained by the Gibbs free energy as a function of the balance between
products and reactants. If we draw G as a function of the extent of the reaction, just like we did with the
phase transitions, the reaction could be considered as the mixing between reactants and products. Without
mixing, G − x should behave like a linear curve, where G reaches minimum and maximum at x=0 or 1.
With mixing, the energy might reach a minimum at some state between 0 and 1.
Figure 20.1: The free energy as a function of the extent of the chemical reaction (from reactants to products), with and without mixing.
We can characterize the equilibrium point by the condition that the slope of G is zero. Therefore,
0 = dG = Σ_i µ_i dN_i  (20.8)
1. 2 H ←−→ H2
2. N2 + 3 H2 ←−→ 2 NH3
3. 2 CO + O2 ←−→ 2 CO2
Let’s investigate a reaction in more detail. The reaction that N2 and H2 combine to form ammonia is called
nitrogen fixation because it puts the nitrogen into a form that can be used by plants to synthesize amino
acids and other important molecules. Under the equilibrium condition, the chemical potentials must satisfy
the following,
µ0(N2) + kT ln[P(N2)/P0] + 3µ0(H2) + 3kT ln[P(H2)/P0] = 2µ0(NH3) + 2kT ln[P(NH3)/P0].  (20.9)
kT ln[P(N2)/P0] + 3kT ln[P(H2)/P0] − 2kT ln[P(NH3)/P0] = 2µ0(NH3) − µ0(N2) − 3µ0(H2)  (20.10)
RT ln[ P(N2) P³(H2) / (P²(NH3) (P0)²) ] = ∆G0  (20.11)
or
P(N2) P³(H2) / (P²(NH3) (P0)²) = e^{∆G0/RT}  (20.12)
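A minimal sketch (not in the notes) of how Eq. 20.12 is used: given a value of ∆G0 for the reaction (the number below is a placeholder input, not a tabulated value), the equilibrium combination of partial pressures follows directly:

# Law of mass action, Eq. 20.12: P(N2) P(H2)^3 / (P(NH3)^2 P0^2) = exp(dG0 / RT).
import math

R   = 8.314        # J / (mol K)
T   = 298.0        # K
dG0 = -33.0e3      # J/mol -- placeholder for 2*mu0(NH3) - mu0(N2) - 3*mu0(H2)

ratio = math.exp(dG0 / (R * T))
print(ratio)       # equilibrium pressure combination, in units of P0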
20.5 Homework
21.1 Outline
Most of the class has dealt with the second law of thermodynamics, in which we rely on the experimental
measurements (of S, H) to make quantitative predictions. Ideally, we would like to calculate all thermo-
dynamic quantities from first principles, starting from microscopic models of various systems of interest,
just like what we did with an ideal gas and Einstein solid. For these more complicated models the direct
combinatoric approach used in Chapters 2 and 3 would be too difficult. We therefore need to develop new
tools.
We must start with an assumption: all accessible microstates of the combined system (system plus reservoir) are equally probable. Let's say the system has two microstates, s_1 and s_2. Their energies are E(s_1) and E(s_2), their probabilities are P(s_1) and P(s_2), and the corresponding multiplicities of the reservoir are Ω_R(s_1) and Ω_R(s_2). The ratio of probabilities for any two states is
P(s_2)/P(s_1) = Ω_R(s_2)/Ω_R(s_1)  (21.1)
dS_R = (1/T)(dU_R + P dV_R − µ dN_R)  (21.3)
For simplicity, let's again ignore the P dV and µ dN terms. At atmospheric pressure P dV is very small; the µ dN contribution varies from case to case, so let's throw it away for now.
dS_R = (1/T)(U_R(s_2) − U_R(s_1))  (21.4)
Therefore, we have,
P(s_2)/P(s_1) = e^{−E(s_2)/kT} / e^{−E(s_1)/kT}  (21.5)
Now we conclude that the probability of each state is proportional to the corresponding Boltzmann factor
with a scaling factor. Let’s call it 1/Z for now,
P(s) = (1/Z) e^{−E(s)/kT}  (21.7)
Figure 21.1: The probability of a state according to the Boltzmann distribution, plotted against E(s): (1) it decays exponentially; (2) higher T slows down the decay.
Now, you are probably wondering how to calculate Z. The trick is to use the normalization condition on P(s):
Σ_s P(s) = Σ_s (1/Z) e^{−E(s)/kT} = (1/Z) Σ_s e^{−E(s)/kT} = 1  (21.8)
Hence,
Z = Σ_s e^{−E(s)/kT}  (21.9)
The sum is not easy to carry out exactly in most cases, but we can take advantage of the exponential behavior to do it numerically: once E(s) becomes larger than a few kT, the Boltzmann factor decays very fast, so we only need to sum states within the first few kT.
The quantity Z is called the partition function. Z does not depend on any particular state s, but it does
depend on temperature. At very low T, Z is approximated to 1, since all the excited states have very small
Boltzmann factors. But at high temperature, Z will be much larger.
Problem 6.3 Consider a hypothetical atom that has just two states, a ground state of E=0, and an excited
state of E=2 eV. Draw a graph of the partition function for this system as a function of temperature, and
evaluate the partition function numerically at T=[300, 3000, 30000].
[Figure: the partition function Z of the two-state system as a function of kT (in units of the level spacing), rising from 1 toward 2]
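A minimal sketch (not in the notes) for Problem 6.3, evaluating Z = 1 + e^{−E/kT} with E = 2 eV at the requested temperatures:

# Partition function of a two-state atom: ground state E = 0, excited state E = 2 eV.
import math

k_eV = 8.61733e-5    # Boltzmann constant in eV/K
E    = 2.0           # eV

for T in [300, 3000, 30000]:
    Z = 1.0 + math.exp(-E / (k_eV * T))
    print(T, Z)      # Z is ~1 at low T and approaches 2 at very high T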
In the previous section, we saw how to calculate the probability that a system in equilibrium with a reservoir at temperature T is in any particular one of its microstates: each probability is given by the Boltzmann distribution, P(s) = (1/Z) e^{−βE(s)}. The average energy is then
Ē = (1/Z) Σ_s E(s) e^{−βE(s)}  (22.1)
Similarly, the average value of any other variable of interest can be computed in the same way.
X̄ = (1/Z) Σ_s X(s) e^{−βE(s)}  (22.2)
By using this equation, we will get the average value of any property. In statistical mechanics, we shall also
understand the fluctuations.
A very nice feature of the partition function is that
Ē = −(1/Z) ∂Z/∂β = −∂(ln Z)/∂β,  (22.3)
where β ≡ 1/kT. These formulas can be extremely useful when you have an explicit formula for Z. Similarly, the mean square energy is
⟨E²⟩ = (1/Z) ∂²Z/∂β²  (22.4)
For a diatomic molecule, the rotational energies are quantized. The allowed rotational energies are
E(j) = j(j + 1) ε,  (22.5)
where j can be 0, 1, 2, etc., and ε is a constant that is inversely proportional to the molecule's moment of inertia. The number of degenerate states at level j is 2j + 1. Given the energy levels, we can write the partition function as a sum over j:
Z_rot = Σ_{j=0}^{∞} (2j + 1) e^{−E(j)/kT} = Σ_{j=0}^{∞} (2j + 1) e^{−j(j+1)ε/kT}  (22.6)
[Figure: the terms (2j + 1) e^{−j(j+1)ε/kT} plotted as a function of j for T = 30 ε/k and T = 3 ε/k; the area under each curve approximates Z_rot]
Unfortunately, there is no way to evaluate the sum exactly in closed form, but it is not hard to do it numerically. Even better, we can approximate the sum as an integral that yields a very simple result. If we draw the curve, we can clearly see that at high T the partition function is approximately the area under this curve:
Z_rot ≈ ∫_0^∞ (2j + 1) e^{−j(j+1)ε/kT} dj = kT/ε  (when kT ≫ ε)  (22.7)
As expected, the partition function increases when T increases. For CO at room T, Zrot is slightly greater
than 100.
With such an approximation, we can then calculate the energy as well:
Ē_rot = −(1/Z) ∂Z/∂β = −(βε) ∂(1/βε)/∂β = 1/β = kT  (22.8)
This is just the prediction of the equipartition theorem, since a diatomic molecule has two rotational degrees
of freedom. At low temperature, the 3rd law tells us that the heat capacity must go to zero. Above is the
case of diatomic molecules made of different atoms. If it is N2 ,
Z_{rot} \approx \frac{kT}{2\epsilon}    (22.9)
But this won’t effect the energy and heat capacities. (problem 6.30)
The equipartition theorem has been extensively used in this class. However, we haven’t proved it yet. It
turns out that the proof is quite easy, with the help of Boltzmann factors.
Suppose the energy of a system depends quadratically on some coordinate (or momentum) q, namely,

E(q) = cq^2    (22.10)

Treating the allowed values of q as discrete, with spacing \Delta q, the partition function is a sum of Boltzmann factors, which we approximate by an integral: Z \approx \frac{1}{\Delta q}\int_{-\infty}^{\infty} e^{-\beta c q^2}\, dq.
Let x = \sqrt{\beta c}\, q, so

Z = \frac{1}{\Delta q\sqrt{\beta c}} \int_{-\infty}^{\infty} e^{-x^2}\, dx    (22.13)

The function e^{-x^2} is called a Gaussian function, and

\int_{-\infty}^{\infty} e^{-x^2}\, dx = \sqrt{\pi}    (22.14)
Therefore,

Z = \frac{1}{\Delta q}\sqrt{\frac{\pi}{\beta c}} = C\beta^{-1/2}    (22.15)

where C = \sqrt{\pi/c}/\Delta q,
and the energy can be readily computed,
\bar{E} = -\frac{1}{Z}\frac{\partial Z}{\partial \beta} = \frac{1}{2\beta} = \frac{1}{2}kT    (22.16)
The limitation of the theorem is that we replaced the sum by an integral, which is valid only when the spacing between energy levels is much less than kT.
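A quick numerical check of this result (a sketch of my own, in units where k = 1): discretize q finely, sum the Boltzmann factors, and compare the average energy with kT/2.

import numpy as np

c, T = 1.0, 2.0
beta = 1.0 / T

q = np.arange(-50.0, 50.0, 0.01)      # fine grid of "states" with spacing dq = 0.01
w = np.exp(-beta * c * q**2)          # Boltzmann factors
E_bar = np.sum(c * q**2 * w) / np.sum(w)

print(E_bar, T / 2)                   # ~1.0 versus 1.0, as equipartition predicts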
Physics 467/667: Thermal Physics Spring 2019
Lecture 23: Maxwell Distribution, Partition Functions and Free Energy
In the very first lecture, we briefly mentioned a microscopic model to link the speed of particles to the
temperature,
PV = Nm\overline{v_x^2} = NkT    (23.1)

But this involves only an average. In reality, the speeds of the particles follow some distribution; let's call it D(v). What does D(v) depend on?
The first factor should be just the Boltzmann factor.
D(v) \propto e^{-E/kT} = e^{-mv^2/2kT}    (23.2)
This only accounts for an ideal gas, where the translational motion is independent of the other degrees of freedom.
The second factor comes from velocity space: for a given speed v, the velocity vector can point in any direction, and these directions sweep out a sphere of area 4πv² in velocity space. Therefore,
D(v) = C \cdot 4\pi v^2\, e^{-mv^2/2kT}    (23.3)
To determine the constant C, we require that the distribution be normalized, \int_0^{\infty} D(v)\, dv = 1. Changing variables to x = v\sqrt{m/2kT},

1 = 4\pi C \left(\frac{2kT}{m}\right)^{3/2}\int_0^{\infty} x^2 e^{-x^2}\, dx    (23.5)

The integral equals \sqrt{\pi}/4, so C = (m/2\pi kT)^{3/2} and

D(v) = \left(\frac{m}{2\pi kT}\right)^{3/2} 4\pi v^2\, e^{-mv^2/2kT}
Figure 23.1: The Maxwell speed distribution and different types of characteristic speeds.
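For a concrete sense of scale, the following Python sketch (mine, not from the notes) evaluates the standard characteristic speeds of the Maxwell distribution, taking N2 at 300 K as an assumed example; the formulas for v_max, v_avg, and v_rms follow from D(v) but are not derived explicitly above.

import numpy as np

k = 1.381e-23          # Boltzmann constant, J/K
m = 28 * 1.66e-27      # mass of an N2 molecule, kg (assumed example gas)
T = 300.0              # temperature, K

v_max = np.sqrt(2 * k * T / m)            # most probable speed (peak of D(v))
v_avg = np.sqrt(8 * k * T / (np.pi * m))  # mean speed
v_rms = np.sqrt(3 * k * T / m)            # root-mean-square speed

print(v_max, v_avg, v_rms)   # roughly 420, 480, 520 m/s, within the range shown in Fig. 23.1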
For a system in equilibrium with a reservoir at temperature T, the quantity most analogous to Ω is Z. Does
the natural logarithm of Z have some meaning?
Recall the definition F = U − TS; the partial derivative with respect to T is

\left(\frac{\partial F}{\partial T}\right)_{V,N} = -S = \frac{F - U}{T}    (23.11)
This is a differential equation for the function F(T), for any given V and N. If we define \bar{F} = -kT\ln Z, then

\frac{\partial \bar{F}}{\partial T} = \frac{\partial}{\partial T}(-kT\ln Z) = -k\ln Z - kT\frac{\partial \ln Z}{\partial T}    (23.12)

Using \partial \ln Z/\partial T = U/kT^2 (which follows from \bar{E} = -\partial \ln Z/\partial \beta with \beta = 1/kT),

\frac{\partial \bar{F}}{\partial T} = -k\ln Z - \frac{U}{T} = \frac{\bar{F} - U}{T}    (23.14)

So \bar{F} obeys the same differential equation as F; checking that the two also agree at T = 0 establishes that F = -kT\ln Z.
This relation is very useful for computing the entropy, pressure, and chemical potential:

S = -\left(\frac{\partial F}{\partial T}\right)_{V,N}, \qquad P = -\left(\frac{\partial F}{\partial V}\right)_{T,N}, \qquad \mu = \left(\frac{\partial F}{\partial N}\right)_{T,V}    (23.16)
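As a small numerical illustration of S = -(∂F/∂T)_{V,N} with F = -kT ln Z (my own sketch, using a single quantum harmonic oscillator with level spacing ε as the example system, in units where k = ε = 1):

import numpy as np

def F(T):
    # F = -kT ln Z for one harmonic oscillator, Z = 1/(1 - exp(-eps/kT)), with k = eps = 1
    Z = 1.0 / (1.0 - np.exp(-1.0 / T))
    return -T * np.log(Z)

T, h = 2.0, 1e-5
S_numeric = -(F(T + h) - F(T - h)) / (2 * h)    # centered finite difference

# Analytic check: S/k = (eps/kT)/(e^{eps/kT} - 1) - ln(1 - e^{-eps/kT})
x = 1.0 / T
S_exact = x / np.expm1(x) - np.log(1.0 - np.exp(-x))

print(S_numeric, S_exact)    # the two values agree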
Physics 467/667: Thermal Physics Spring 2019
Lecture 24: Ideal Gas Model in Boltzmann Statistics
Consider a system of just two particles, 1 and 2. If these particles do not interact with each other, the total energy of a composite state s is simply E_1 + E_2, so

Z_{total} = \sum_s e^{-\beta[E_1(s) + E_2(s)]}

where the sum runs over all states s of the composite system. If the two particles are distinguishable, every state of particle 1 can be combined independently with every state of particle 2, so the sum factorizes and the total partition function splits into separate factors Z_1 and Z_2,
Ztotal = Z1 Z2 (24.3)
If the two particles are indistinguishable, we have to include a factor of 1/2 to correct for double counting,
Z_{total} = \frac{1}{2} Z_1 Z_2    (24.4)
This formula is not precisely correct, because the double sum contains some terms in which both particles are in the same state, and those terms should not be halved; the error only becomes important when the system is very dense.
Therefore, the generalization to a system of N non-interacting particles is: if the particles are distinguishable,

Z_{total} = Z_1 Z_2 Z_3 \cdots Z_N    (24.5)

while if the N particles are indistinguishable (all with the same single-particle partition function Z_1),

Z_{total} = \frac{1}{N!} Z_1^N    (24.6)
Since the energy of each molecule is the sum of its translational and internal contributions, the single-particle partition function factorizes,

Z_1 = Z_{tr} Z_{int}    (24.9)

where

Z_{tr} = \sum e^{-E_{tr}/kT}, \qquad Z_{int} = \sum e^{-E_{int}/kT}    (24.10)
To calculate Ztr , we can start with the case of a molecule confined to a one-dimensional box.
\lambda_n = \frac{2L}{n}, \quad n = 1, 2, \ldots    (24.11)

p_n = \frac{h}{\lambda_n} = \frac{hn}{2L}, \quad n = 1, 2, \ldots    (24.12)

E_n = \frac{p_n^2}{2m} = \frac{h^2 n^2}{8mL^2}    (24.13)
Therefore,

Z_{1d} = \sum_n e^{-E_n/kT} = \sum_n e^{-h^2 n^2/8mL^2 kT}    (24.14)
Approximating the sum by an integral,

Z_{1d} = \int_0^{\infty} e^{-h^2 n^2/8mL^2 kT}\, dn = \frac{\sqrt{\pi}}{2}\sqrt{\frac{8mL^2 kT}{h^2}} = \sqrt{\frac{2\pi mkT}{h^2}}\, L = \frac{L}{L_Q} \qquad (L_Q:\ \text{quantum length})    (24.15)
In three dimensions,

E_{tr} = \frac{p_x^2}{2m} + \frac{p_y^2}{2m} + \frac{p_z^2}{2m}    (24.16)

Z_{tr} = \frac{L_x}{L_Q}\frac{L_y}{L_Q}\frac{L_z}{L_Q} = \frac{V}{V_Q} \qquad (V_Q:\ \text{quantum volume}, V_Q = L_Q^3)    (24.17)
Z_1 = \frac{V}{V_Q}\, Z_{int}    (24.18)

so that for N indistinguishable molecules,

Z = \frac{1}{N!}\left(\frac{V Z_{int}}{V_Q}\right)^N    (24.19)
and
\ln Z = N[\ln V + \ln Z_{int} - \ln N - \ln V_Q + 1]    (24.20)
Using U = -\partial \ln Z/\partial \beta,

U = -N\frac{\partial \ln Z_{int}}{\partial \beta} + N\frac{1}{V_Q}\frac{\partial V_Q}{\partial \beta} = N\bar{E}_{int} + \frac{3N}{2\beta} = U_{int} + \frac{3}{2}NkT    (24.22)

C_V = \frac{\partial U}{\partial T} = \frac{\partial U_{int}}{\partial T} + \frac{3}{2}Nk    (24.23)
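For a rough sense of the numbers, here is a Python sketch (my own example, using helium in a one-liter box at 300 K; these choices are assumptions, not values from the notes) that evaluates the quantum length, the quantum volume, and the translational partition function:

import numpy as np

h = 6.626e-34           # Planck constant, J s
k = 1.381e-23           # Boltzmann constant, J/K
m = 4.0 * 1.66e-27      # mass of a helium atom, kg
T = 300.0               # temperature, K
V = 1e-3                # box volume, m^3 (one liter)

L_Q = h / np.sqrt(2 * np.pi * m * k * T)   # quantum length
V_Q = L_Q**3                               # quantum volume
Z_tr = V / V_Q                             # translational partition function

print(L_Q, V_Q, Z_tr)   # L_Q ~ 5e-11 m, so Z_tr is enormous (~1e28)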
Physics 467/667: Thermal Physics Spring 2019
Lecture 25: Gibbs Factor, Bosons, Fermions
Remember that when deriving the Boltzmann factor in the previous chapter, we wrote the ratio of the probabilities of two different states in terms of the entropy change of the reservoir, which is given by the thermodynamic identity
dS_R = \frac{1}{T}(dU_R + P\,dV_R - \mu\,dN_R)    (25.2)
When constructing the Boltzmann factor, we ignored the PdV and µdN terms and obtained

P(s) = \frac{1}{Z}\, e^{-E(s)/kT}    (25.4)

Here, however, we want to keep the µdN term to account for the exchange of particles with the reservoir; the same steps then give the Gibbs factor e^{-[E(s)-\mu N(s)]/kT}, and the sum of Gibbs factors over all states is called the grand partition function.
If more than one type of particle can be present, the µdN term becomes \sum_i \mu_i dN_i. The grand partition function is very useful for dealing with situations in which particles are exchanged with the reservoir. More importantly, the concept of Gibbs factors is very useful in quantum statistics, the study of dense systems in which two or more identical particles have a reasonable chance of occupying the same single-particle state.
Consider a single-particle state of energy ε; when it is occupied by n particles, the Gibbs factor is e^{-n(\epsilon-\mu)/kT}. If the particles are fermions, n can only be 0 or 1, so the grand partition function of this state becomes

\mathcal{Z} = 1 + e^{-(\epsilon - \mu)/kT}    (25.7)
The average occupancy of the state is

\bar{n} = \sum_n n\,P(n) = 0\cdot P(0) + 1\cdot P(1) + \cdots    (25.10)

For fermions this gives the Fermi-Dirac distribution, \bar{n}_{FD} = 1/(e^{(\epsilon-\mu)/kT} + 1). For bosons, n can be 0, 1, 2, \ldots; writing x = (\epsilon - \mu)/kT, each Gibbs factor is e^{-nx} and
\bar{n} = \sum_n n\,\frac{e^{-nx}}{\mathcal{Z}} = -\frac{1}{\mathcal{Z}}\sum_n \frac{\partial e^{-nx}}{\partial x} = -\frac{1}{\mathcal{Z}}\frac{\partial \mathcal{Z}}{\partial x}    (25.11)
Therefore,

\bar{n} = -(1 - e^{-x})\,\frac{\partial (1 - e^{-x})^{-1}}{\partial x} = \frac{e^{-x}}{1 - e^{-x}} = \frac{1}{e^{(\epsilon-\mu)/kT} - 1}    (25.12)
For comparison, if the N particles instead obey Boltzmann statistics, the average occupancy of the state is

\bar{n} = N P(s) = \frac{N}{Z_1}\, e^{-\epsilon/kT}    (25.14)
According to the result of Problem 6.44, the chemical potential for such a system is

\mu = -kT\ln(Z_1/N)    (25.15)
so that

\bar{n} = N P(s) = \frac{N}{Z_1}\, e^{-\epsilon/kT} = e^{\mu/kT} e^{-\epsilon/kT} = e^{-(\epsilon - \mu)/kT}    (25.16)
Plotting n̄ versus ε for bosons, fermions, and Boltzmann particles shows that the three distributions become equal when (ε − µ)/kT ≫ 1. For any application, we can use these distributions as long as we know the chemical potential.
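The comparison is easy to reproduce numerically; a minimal Python sketch of the three occupancy functions (my own, taken directly from the distributions above):

import numpy as np

x = np.linspace(0.5, 5.0, 10)    # x = (eps - mu)/kT; avoid x <= 0, where the BE form diverges

n_FD = 1.0 / (np.exp(x) + 1.0)   # Fermi-Dirac
n_BE = 1.0 / (np.exp(x) - 1.0)   # Bose-Einstein
n_MB = np.exp(-x)                # Boltzmann

for xi, f, b, mb in zip(x, n_FD, n_BE, n_MB):
    print(f"x = {xi:4.2f}   FD = {f:.4f}   BE = {b:.4f}   Boltzmann = {mb:.4f}")
# For x >> 1 the three columns converge, as stated above.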
Physics 467/667: Thermal Physics Spring 2019
Lecture 26: The Einstein and Debye Models of Solids
The Einstein model treats each atom in a solid crystal as an independent three-dimensional harmonic oscillator. The multiplicity of an Einstein solid containing N oscillators and q energy units is approximately
\Omega(N, q) \approx \left(\frac{q+N}{q}\right)^q \left(\frac{q+N}{N}\right)^N    (26.1)
S = k\ln\Omega = k\ln\left(\frac{q+N}{q}\right)^q + k\ln\left(\frac{q+N}{N}\right)^N = kq\ln\left(\frac{q+N}{q}\right) + kN\ln\left(\frac{q+N}{N}\right)    (26.2)
\frac{1}{T} = \frac{\partial S}{\partial U} = \frac{1}{\epsilon}\frac{\partial S}{\partial q}
= \frac{k}{\epsilon}\frac{\partial}{\partial q}\left[q\ln(q+N) - q\ln q + N\ln(q+N) - N\ln N\right]
= \frac{k}{\epsilon}\left[\ln(q+N) + \frac{q}{q+N} - \ln q - 1 + \frac{N}{q+N}\right]    (26.3)
= \frac{k}{\epsilon}\left[\ln\frac{q+N}{q} + \frac{q+N}{q+N} - 1\right]
= \frac{k}{\epsilon}\ln\frac{q+N}{q}
Solving this for U gives U = \frac{N\epsilon}{e^{\epsilon/kT} - 1}, and differentiating with respect to T gives the heat capacity, C_V = Nk\,\frac{(\epsilon/kT)^2 e^{\epsilon/kT}}{(e^{\epsilon/kT} - 1)^2}. When kT \gg \epsilon, we can expand e^{\epsilon/kT} \approx 1 + \epsilon/kT, so

C_V \approx \frac{N\epsilon^2}{kT^2}\,\frac{1 + \epsilon/kT}{(\epsilon/kT)^2} \approx Nk    (26.6)
Remember that we have 3N oscillators for N atoms, so in total C_V = 3Nk, exactly as the equipartition theorem predicts.
Below kT ≈ ε, the heat capacity falls off, approaching zero as the temperature goes to zero. This prediction generally agrees with experiment, but not in detail: the equation predicts that C_V goes to zero exponentially as T → 0, whereas experiments show that the true low-temperature behavior is cubic, C_V ∝ T^3. The problem with the Einstein model is that the atoms in a crystal do not actually vibrate independently of each other, so a better assumption is needed.
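The exponential falloff is easy to see numerically; the following short Python sketch (mine) evaluates the Einstein heat capacity per oscillator quoted above as a function of kT/ε:

import numpy as np

def c_einstein(t):
    # Einstein-model heat capacity per oscillator, C_V/Nk, with t = kT/eps
    x = 1.0 / t
    return x**2 * np.exp(x) / np.expm1(x)**2

for t in [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]:
    print(f"kT/eps = {t:4.1f}   C_V/Nk = {c_einstein(t):.4f}")
# The value approaches 1 (equipartition) at high T and drops off
# exponentially fast at low T -- much faster than the observed T^3 behavior.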
In reality, there should be low-frequency modes of oscillation in which large groups of atoms are moving
together, and also high-frequency modes in which atoms are moving opposite to their neighbors. The units
of energy come in different sizes, proportional to the frequency of the modes of the vibration. Even at very
low temperatures, a few low-frequency modes are still active. This is the reason why the heat capacity goes
to 0 less dramatically than the prediction by the Einstein model.
Based on this reasoning, Debye proposed that each mode of oscillation has a set of equally spaced energy
levels with the unit of energy equal to
\epsilon = hf = \frac{hc_s}{\lambda} = \frac{hc_s n}{2L}    (26.7)
where L is the length of the crystal, n is the magnitude of the vector in n-space specifying the shape of the wave, and c_s is the speed of the wave.
When the mode is in equilibrium at temperature T, the number of units of energy it contains is given by the Planck distribution:

\bar{n} = \frac{1}{e^{\epsilon/kT} - 1}    (26.8)
We can think of these units of energy as particles obeying the Bose-Einstein statistics with µ=0. These
particles are called phonons.
To calculate the total thermal energy, we add up the energies of all allowed modes:
U = 3\sum_{n_x}\sum_{n_y}\sum_{n_z} \epsilon\,\bar{n}(\epsilon)    (26.9)
The factor of 3 counts the three polarization states for each mode \vec{n}.
In a crystal, the atomic spacing puts a strict lower limit on the wavelength, so n cannot exceed the number of atoms in a row. If the 3D crystal is a cube, the maximum n along any direction is N^{1/3}, where N is the total number of atoms.
The summand depends on n_x, n_y, and n_z in a complicated way when we sum over this cube in n-space. Debye's idea was to pretend that the relevant region of n-space is a sphere (more precisely, its positive octant), chosen so that the total number of modes it contains is N. You can easily show that the radius of this sphere has to be

n_{max} = \left(\frac{6N}{\pi}\right)^{1/3}    (26.10)
Physical explanations:
Therefore, we apply Debye's approximation and convert the sums to integrals in spherical coordinates,

U = 3\int_0^{n_{max}} dn \int_0^{\pi/2} d\theta \int_0^{\pi/2} d\phi\; n^2\sin\theta\, \frac{\epsilon}{e^{\epsilon/kT} - 1}    (26.11)
The angular integrals give π/2, leaving us with

U = \frac{3\pi}{2}\int_0^{n_{max}} \frac{hc_s}{2L}\,\frac{n^3}{e^{hc_s n/2LkT} - 1}\, dn    (26.12)
Then, changing variables to x = hc_s n/2LkT, the upper limit of the integral becomes

x_{max} = \frac{hc_s n_{max}}{2LkT} = \frac{hc_s}{2kT}\left(\frac{6N}{\pi V}\right)^{1/3} = \frac{T_D}{T}    (26.14)
TD is the Debye Temperature, a characteristic temperature.
After substitution, we get

U = \frac{9NkT^4}{T_D^3}\int_0^{T_D/T} \frac{x^3}{e^x - 1}\, dx    (26.15)
When T \gg T_D, the upper limit of the integral is much less than 1, so x is always very small and we can approximate e^x \approx 1 + x. Then the 1 cancels, the integrand is simply x^2, and we get the final result

U = \frac{9NkT^4}{T_D^3}\cdot\frac{1}{3}\left(\frac{T_D}{T}\right)^3 = 3NkT    (26.16)
When T \ll T_D, the upper limit goes to infinity, and the integral gives a constant, \pi^4/15, so the total energy is

U = \frac{3\pi^4}{5}\,\frac{NkT^4}{T_D^3}    (26.17)
Differentiating with respect to T gives the low-temperature heat capacity, C_V = \frac{12\pi^4}{5}Nk\left(\frac{T}{T_D}\right)^3. For a metal, the conduction electrons contribute an additional term linear in T, so at low temperatures

C = \gamma T + \frac{12\pi^4 Nk}{5T_D^3}\, T^3,    (26.18)
where γ is a coefficient that depends on the metal. The Debye temperature ranges from 88 K for lead to 1860 K for diamond. Since the heat capacity reaches about 95% of its maximum value at T = T_D, the Debye temperature gives you a rough idea of the temperature above which you can get away with just using the equipartition theorem.
Even so, the Debye model gives only a rough picture. For a more rigorous analysis, one needs the actual distribution of phonon modes, a topic that belongs in a solid state physics textbook.
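The Debye integral in eq. (26.15) is easy to evaluate numerically; the sketch below (my own) differentiates U numerically to get the heat capacity as a function of T/T_D:

import numpy as np
from scipy.integrate import quad

def U_over_3NkTD(t):
    # U/(3Nk T_D) = 3 t^4 * integral_0^{1/t} x^3/(e^x - 1) dx, with t = T/T_D
    integral, _ = quad(lambda x: x**3 / np.expm1(x), 0.0, 1.0 / t)
    return 3.0 * t**4 * integral

h = 1e-5
for t in [0.05, 0.1, 0.3, 1.0, 2.0]:
    cv = (U_over_3NkTD(t + h) - U_over_3NkTD(t - h)) / (2 * h)   # C_V/3Nk
    print(f"T/T_D = {t:4.2f}   C_V/3Nk = {cv:.4f}")
# C_V/3Nk approaches 1 at high T, is about 0.95 at T = T_D,
# and falls off as T^3 at low T.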
Figure 26.1: The Debye prediction for the heat capacity of a solid, with the prediction of the Einstein model
plotted for comparison. The constant in the Einstein model has been chosen to obtain the best agreement
with the Debye model at high temperatures. Note that the Einstein curve is much flatter than the Debye
curve at low temperatures. Copyright 2000, Addison-Wesley.
Physics 467/667: Thermal Physics Spring 2019
Lecture 27: Free Electron Gases
As stated in the introduction to fermions and bosons, quantum statistics starts to play a role in dense systems and at low temperatures. For an electron at room temperature, the quantum volume is

V_Q = \left(\frac{h}{\sqrt{2\pi mkT}}\right)^3 = (4.3\ \text{nm})^3    (27.1)
In a typical metal there are one or two conduction electrons per atom, so the volume per electron is roughly the volume of an atom, about (0.2 nm)^3. This is far smaller than the quantum volume, so the electrons are deep in the quantum regime; in other words, room temperature is much too low for Boltzmann statistics to apply.
In what follows, we will exclusively discuss the case of the electron, a typical Fermion at zero temperature
and above.
27.2 Electron at 0 K
At T = 0, the Fermi-Dirac distribution becomes a step function: all single-particle states with energy less than µ are occupied, while all states with energy greater than µ are unoccupied. In this context we call µ the Fermi energy, ε_F.
In order to calculate ε_F, as well as other properties such as the total energy and the pressure of the electron gas, let's make the approximation that the electrons are free particles, subject to no external forces. For metals this is fairly accurate, except that it neglects the attractive forces of the nearby ions in the crystal lattice.
The definite energy wavefunctions of a free electron inside a box are just sine waves. For a one-dimensional
box the allowed wavelengths and momenta are
\lambda_n = \frac{2L}{n}, \qquad p_n = \frac{h}{\lambda_n} = \frac{hn}{2L}    (27.2)
where n is any positive integer. In a three-dimensional box these equations apply separately to each direc-
tion, so
\epsilon = \frac{p^2}{2m} = \frac{h^2}{8mL^2}(n_x^2 + n_y^2 + n_z^2)    (27.3)
The allowed states form a lattice of points in the positive octant of n-space. At T = 0 the occupied states fill an eighth-sphere, and the energy at its surface (radius n_{max}) is the Fermi energy,

\epsilon_F = \frac{h^2 n_{max}^2}{8mL^2}    (27.4)
The volume of the eighth-sphere equals the number of lattice points (single-particle spatial states) it encloses. Since each point can hold two electrons, one for each spin orientation, the total number of electrons is twice this volume,

N = 2\cdot\frac{1}{8}\cdot\frac{4}{3}\pi n_{max}^3 = \frac{\pi n_{max}^3}{3}    (27.5)
To calculate the total energy of all electrons, we need to sum the energies of the electrons in all occupied states,

U = 2\sum_{n_x}\sum_{n_y}\sum_{n_z} \epsilon(\vec{n}) = 2\iiint \epsilon(\vec{n})\, dn_x\, dn_y\, dn_z    (27.6)
The factor of 2 is for the two spin orientations for each \vec{n}. Transforming the triple integral to spherical coordinates, the total energy is

U = 2\int_0^{n_{max}} dn \int_0^{\pi/2} d\theta \int_0^{\pi/2} d\phi\; n^2\sin\theta\, \epsilon(n)    (27.7)

Carrying out the integrals gives U = \frac{3}{5}N\epsilon_F.
The Fermi energy for conduction electrons in a typical metal is a few eVs. We can therefore define the Fermi
temperature as
T_F = \epsilon_F / k    (27.9)
The Fermi temperature is purely hypothetical for electrons in a metal, since metals liquefy or evaporate long before it is reached. The pressure of an electron gas is

P = -\frac{\partial U}{\partial V} = \frac{2U}{3V} = \frac{2N\epsilon_F}{5V}

This is called the degeneracy pressure; it keeps matter from collapsing under the huge electrostatic forces that try to pull electrons and protons together.
A more measurable quantity is the bulk modulus,
B = -V\left(\frac{\partial P}{\partial V}\right)_T = \frac{10U}{9V}    (27.11)
This quantity agrees with experiment within a factor of 3 or so, for most metals.
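Plugging in numbers with the formulas above gives a feel for the scales involved; in the Python sketch below (my own), the conduction-electron density of about 8.5 × 10^28 m^-3, roughly that of copper, is an outside assumption rather than a value from the notes:

import numpy as np

h = 6.626e-34        # Planck constant, J s
k = 1.381e-23        # Boltzmann constant, J/K
me = 9.109e-31       # electron mass, kg
eV = 1.602e-19       # joules per electron-volt
n = 8.5e28           # assumed conduction-electron density, m^-3 (roughly copper)

eps_F = (h**2 / (8 * me)) * (3 * n / np.pi)**(2 / 3)   # from N = pi n_max^3 / 3
T_F = eps_F / k                                        # Fermi temperature
u = 0.6 * n * eps_F                                    # energy density U/V = (3/5) n eps_F
P = 2 * u / 3                                          # degeneracy pressure
B = 10 * u / 9                                         # bulk modulus

print(eps_F / eV, T_F, P, B)   # about 7 eV, ~8e4 K, and pressures of tens of GPa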
To better visualize the behavior of a Fermi gas at small nonzero T, we introduce the density of states g(ε), which describes the distribution of electrons with respect to energy. Rewriting the total energy as an integral over ε,
U = \pi\int_0^{n_{max}} n^2\,\epsilon(n)\, dn = \int_0^{\epsilon_F} \epsilon\left[\frac{\pi}{2}\left(\frac{8mL^2}{h^2}\right)^{3/2}\sqrt{\epsilon}\right] d\epsilon    (27.14)

g(\epsilon) = \frac{\pi}{2}\left(\frac{8mL^2}{h^2}\right)^{3/2}\sqrt{\epsilon} = \frac{3N}{2\epsilon_F^{3/2}}\sqrt{\epsilon}    (27.15)
So g(ε) is proportional to √ε. In a more realistic model we would also include the attraction between the electrons and the ions; the wavefunctions and energies would then be much more complicated, and g(ε) could look very different.
For an electron gas at 0 K, we can get the total number of electrons by integrating the density of states up to the Fermi energy,

N = \int_0^{\epsilon_F} g(\epsilon)\, d\epsilon \qquad (T = 0)    (27.16)
What about at finite temperature? We then need to multiply g(ε) by the probability that a state is occupied, given by the Fermi-Dirac distribution:

N = \int_0^{\infty} g(\epsilon)\,\frac{1}{e^{(\epsilon-\mu)/kT} + 1}\, d\epsilon    (27.17)
Instead of falling immediately to zero at ε = ε_F, the number of electrons per unit energy now drops off gradually, over a width of a few times kT. It is important to note that the chemical potential µ must shift slightly with temperature in order to keep the total number of electrons fixed.
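The statement that µ shifts with temperature can be checked by enforcing eq. (27.17) numerically; the sketch below (mine) works in units where ε_F = 1 and T is measured in units of T_F:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit     # expit(y) = 1/(1 + e^{-y}), an overflow-safe logistic function

def N_of_mu(mu, t):
    # integrate g(eps) * f_FD(eps) with g(eps) = (3/2) sqrt(eps), normalized so N(T=0) = 1
    integrand = lambda e: 1.5 * np.sqrt(e) * expit(-(e - mu) / t)
    val, _ = quad(integrand, 0.0, 10.0)     # upper cutoff well above eps_F = 1
    return val

for t in [0.05, 0.1, 0.2]:
    mu = brentq(lambda m: N_of_mu(m, t) - 1.0, 0.1, 1.5)
    print(f"T/T_F = {t:4.2f}   mu/eps_F = {mu:.4f}")
# mu stays close to eps_F but decreases slightly as T increases.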
Physics 467/667: Thermal Physics Spring 2019
Lecture 28: Bose-Einstein Condensation
As stated in the introduction to fermions and bosons, quantum statistics starts to play a role in dense systems and at low temperatures. For a gas of bosonic atoms confined to a box of side L, the lowest allowed single-particle energy is

\epsilon_0 = \frac{h^2}{8mL^2}(1^2 + 1^2 + 1^2) = \frac{3h^2}{8mL^2}    (28.1)

and the Bose-Einstein distribution gives the number of atoms occupying this ground state,

N_0 = \frac{1}{e^{(\epsilon_0 - \mu)/kT} - 1}    (28.2)
When T is very small, N_0 will be quite large, so the denominator of eq. (28.2) must be small; expanding the exponential as e^{(\epsilon_0-\mu)/kT} \approx 1 + (\epsilon_0-\mu)/kT gives

N_0 = \frac{1}{1 + (\epsilon_0 - \mu)/kT - 1} = \frac{kT}{\epsilon_0 - \mu} \qquad (\text{when } N_0 \gg 1)    (28.3)
The chemical potential µ must be equal to ε_0 at T = 0, and just a bit less than ε_0 at small T. Now the question is: below which temperature does N_0 remain very large? To answer this, write the total number of atoms as

N = \sum_s \frac{1}{e^{(\epsilon_s - \mu)/kT} - 1}    (28.4)

and convert the sum to an integral,

N = \int_0^{\infty} \frac{g(\epsilon)}{e^{(\epsilon - \mu)/kT} - 1}\, d\epsilon    (28.5)

where g(ε) is the density of states, which has a form similar to that of the electron gas model (but without the factor of 2 for spin),
g(\epsilon) = \frac{2}{\sqrt{\pi}}\left(\frac{2\pi m}{h^2}\right)^{3/2} V\sqrt{\epsilon}    (28.6)
Figure 28.1: The distribution of bosons as a function of energy is the product of two functions, the density
of states and the Bose-Einstein distribution. Copyright 2000, Addison-Wesley.
The trouble is that we cannot evaluate eq. (28.5) analytically. To work it out, we must guess a value for µ; a good starting point is µ = 0. Changing variables to x = ε/kT,
N = \frac{2}{\sqrt{\pi}}\left(\frac{2\pi m}{h^2}\right)^{3/2} V \int_0^{\infty} \frac{\sqrt{\epsilon}\, d\epsilon}{e^{\epsilon/kT} - 1} = \frac{2}{\sqrt{\pi}}\left(\frac{2\pi mkT}{h^2}\right)^{3/2} V \int_0^{\infty} \frac{\sqrt{x}\, dx}{e^x - 1}    (28.7)
The dimensionless integral is just a number, approximately 2.315, so

N = 2.612\left(\frac{2\pi mkT}{h^2}\right)^{3/2} V    (28.8)
Something is wrong with this result: it says that the number of particles depends on the temperature, even though N is fixed. In fact, for a given N there can be only one temperature at which eq. (28.8) holds exactly; call it T_c,
N = 2.612\left(\frac{2\pi mkT_c}{h^2}\right)^{3/2} V    (28.9)
When T < T_c, the conversion of the summation into an integral is no longer valid, because the integral misses the contribution of the lowest-energy terms near ε = 0. The integral therefore counts only the atoms in excited states, and is better written as
N_{excited} = 2.612\left(\frac{2\pi mkT}{h^2}\right)^{3/2} V \qquad (T < T_c)    (28.10)
The difference between N and N_excited is the number of atoms in the ground state,
N_0 = N - N_{excited} = \left[1 - \left(\frac{T}{T_c}\right)^{3/2}\right] N    (28.11)
The abrupt accumulation of atoms in the ground state below Tc is called Bose-Einstein condensation.
Figure 28.2: Number of atoms in the ground state (N0) and in excited states, for an ideal Bose gas in a three-dimensional box. Below Tc the number of atoms in excited states is proportional to T^{3/2}. Copyright 2000, Addison-Wesley.
In 1995, BEC of a gas of weakly interacting atoms was first achieved using Rb-87. Later, BEC was achieved with dilute gases of atomic Li, Na, H, etc. The superfluid phase of 4He is also considered to be an example of BEC.
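Using eq. (28.9), one can estimate T_c; in the Python sketch below (my own), the assumed number density of 2.5 × 10^18 m^-3 is merely a representative magnitude for dilute trapped alkali gases, not a value from the notes:

import numpy as np

h = 6.626e-34          # Planck constant, J s
k = 1.381e-23          # Boltzmann constant, J/K
m = 87 * 1.66e-27      # mass of a Rb-87 atom, kg
n = 2.5e18             # assumed number density, atoms per m^3

# Invert eq. (28.9): T_c = (h^2 / 2 pi m k) * (n / 2.612)^(2/3)
T_c = (h**2 / (2 * np.pi * m * k)) * (n / 2.612)**(2 / 3)
print(T_c)             # a few tens of nanokelvin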