
Faculteit der Natuurwetenschappen,

Wiskunde en Informatica

Academic year 2019-2020

THERMODYNAMICS
lecture notes

LECTURER: Prof. Dr. A. V. Kimel


(Department of Ultrafast Spectroscopy of Correlated Materials)

Correctors: Mike Smeenk, Douwe Hoekstra


1. MAIN DEFINITIONS AND LAWS.
WHY ARE HEAT-ENGINES SO INEFFICIENT?
1.1 Introduction

The first step towards understanding thermodynamics is to forget the intuitive understanding of
temperature which all of us have. Temperature is a counterintuitive concept. If you have two bodies
with masses m1 and m2, their combined mass will be m1 + m2. This is not the case when we talk about
temperature. Unlike mass, time or length, there is no standard for the unit of temperature: there is no
etalon (physical prototype) for 1 kelvin. Temperature in thermodynamics is an abstract quantity. It acquires a physical
meaning only when we apply thermodynamics to a specific area of physics or chemistry.

What is wrong with our intuitive understanding of temperature? The problem is that historically
temperature was introduced in a wrong way. The very first philosophers, the founders of
thermodynamics, started to think about why some objects are hot and others are cold. They noticed that
when you bring a cold and a hot body into contact, the colder body heats up and the hotter one cools down
until the bodies become equally hot. The philosophers also noticed that the bodies can remain in this final state
for a very long time. The first hypothesis was that the hotness of a body is defined by the
concentration in the body of a special type of matter, a hypothetical fluid substance – caloric. The
caloric can flow as a liquid between two bodies brought in contact until the concentrations of caloric
in the bodies are equalized. Obviously, such a concept of temperature is wrong. This is why in
modern thermodynamics the definition of temperature does not imply any specific physical meaning.
The temperature remains an abstract quantity – something that describes the hotness of bodies and
equilibrates when differently hot bodies are brought in contact.

Intermezzo. History of thermometers.

The very first thermometers were built in the 16th century with the aim of measuring the concentration of
caloric. At that time this was reasonable, because it was known that substances expand upon heating. It
was believed that expansion is due to inflow of caloric. To build a thermometer one can take gas or
liquid at constant pressure. Supplying heat to the substance will lead to its expansion. The change of
the volume is a measure of the concentration of caloric and thus is a measure of temperature. One of
the most crucial steps in the development of thermometers was done by Celsius. He was one of the
first who introduced a temperature scale. In fact, he measured the volumes of the thermosensitive
substance at the temperatures of melting of ice and at the steam point of water, respectively.
Afterwards, he divided the range between these two measured points into 100 equal intervals, assigning
zero Celsius to the melting point of ice and 100 Celsius to the steam point, and assumed a linear
dependence between the volume and the measured temperature, i.e. V = const · T. In fact, the
very first thermometer pre-defined the law which was “discovered” long afterwards.

A better way to define a temperature scale was suggested by Kelvin. Taking ideal gas of a fixed
volume as a thermosensitive substance and measuring the pressure of the gas, one can also quantify
temperature, which in the case of ideal gas is defined by the average kinetic energy of the gas atoms.
Assigning a linear dependence between the pressure and the temperature 𝑝 = 𝑐𝑜𝑛𝑠𝑡 ∗ 𝑇 with 100
units between the temperature of melting of ice (T0) and the temperature of steam boiling of water
(T), we obtain T − T0 = 100 K. Experiment shows that the ratio of the gas pressures at these two points
is p/p0 ≈ 1.366. It means that T/T0 ≈ 1.366. Altogether we obtain 1.366 T0 − T0 = 100 and T0 ≈ 273 K.
The pressure of the ideal gas cannot be lower than 0. At T=0 K we achieve the lowest possible
temperature, when p=0.
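As a quick numerical sketch of this calibration (a minimal example, assuming the experimental pressure ratio of about 1.366 quoted above):

```python
# Constant-volume gas thermometer: p = const * T, with 100 units between
# the ice point T0 and the steam point T = T0 + 100.
ratio = 1.366                  # measured p/p0 between the steam and ice points
T0 = 100.0 / (ratio - 1.0)     # from ratio*T0 - T0 = 100
print(f"T0 = {T0:.0f} K")      # about 273 K: the ice point on the Kelvin scale
```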

1.2 Energy, Work, Heat

One of the first evidences showing that the concept of caloric is wrong were experiments in which
the hotness of body increased upon exerting a force on it and performing a work. For instance, cold
hands can be warmed up by rubbing them. The concept of caloric would imply that caloric is
generated out of nothing in this case. To solve this problem and avoid total negation of older theories
in a transition to modern ones, instead of caloric we now use the concept of heat. It is said that heat
together with work are forms of energy. Heat can be turned into work and work can be turned into
heat. The fundamental law of conservation of energy, also known as the first law of thermodynamics,
must be written as

Δ𝑈 = 𝑊 + 𝑄(1a),

where U is the change of internal energy of body after the body has accepted heat Q and
experienced work W from surroundings.

It is conventionally accepted that if a body performs work, the work is negative (W<0) and if the
work is performed on the body, the work is positive (W>0). Similarly, if the body rejects heat, the
heat is negative (Q<0) and if the heat is accepted, the heat is positive (Q>0).

Very often you can come across the first law of thermodynamics written in the differential form

dU = đW + đQ    (1b).

Since the heat transferred and the mechanical work done depend on the process, đW and đQ are not
exact differentials. The notation đ is used to emphasize this point.

Intermezzo. Exact differentials (see also Wolfram MathWorld).


A differential of the form

𝑑𝑓 = 𝑃(𝑥, 𝑦) 𝑑𝑥 + 𝑄 (𝑥, 𝑦) 𝑑𝑦

is exact (also called a total differential) if ∫ 𝑑𝑓 is path-independent. This will be true if


df = (∂f/∂x) dx + (∂f/∂y) dy
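As an illustration (a minimal sympy sketch; the ideal-gas forms dQ = C_V dT + p dV and dS = dQ/T used here are assumed examples, anticipating sections 2.3 and 2.5), exactness of a form P dx + Q dy can be tested by comparing the mixed partial derivatives of the two coefficient functions:

```python
import sympy as sp

T, V, R, CV = sp.symbols('T V R C_V', positive=True)
p = R * T / V   # ideal-gas pressure for one mole

# A form P(T,V) dT + Q(T,V) dV is exact if and only if dP/dV = dQ/dT.
def is_exact(P, Q):
    return sp.simplify(sp.diff(P, V) - sp.diff(Q, T)) == 0

print(is_exact(CV, p))          # False: the heat dQ = CV dT + p dV is not exact
print(is_exact(CV / T, R / V))  # True: the entropy dS = dQ/T is an exact differential
```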

1.3 Basic definitions

To avoid confusions, we would like to give definitions of the main concepts in thermodynamics.

System. Systems which are the objects in thermodynamics are macroscopic entities. Such a system
may consist of a great number of material particles (atoms, molecules, electrons, etc.) or of field
quantities such as electromagnetic field.

Isolated system. An independent system with absolutely no interaction with its surroundings.

Closed system. A system which has no material exchange with its surroundings is a closed system. A
closed system in classical mechanics would be considered an isolated system in thermodynamics.

Thermal equilibrium1. Regardless of the complexity of its initial state, if an isolated system is left
standing, the system eventually comes to a final state which does not change. This final state is
called the thermal equilibrium state. In the thermal equilibrium, the quantities which describe the
state of the system do not change.

Two isolated systems. When two isolated systems A and B are brought into contact, the combined system
A+B eventually comes to thermal equilibrium. It is said that A and B are in thermal equilibrium with each
other.

Temperature. In thermodynamics the temperature is an abstract quantity which prescribes thermal
equilibrium between two bodies in thermal contact. If Θ1 and Θ2 are the temperatures of two bodies
brought into thermal contact, the condition of thermal equilibrium is Θ1 = Θ2. If Θ1 > Θ2, then Θ2
increases when they are brought into thermal contact.

Temperature change. Unlike the temperature, the temperature change in thermodynamics is a less abstract
quantity; at least there is a mathematical definition of it. Using the concept of heat capacity we can
relate the heat dQ accepted by a body and a change of a parameter dα. The general definition of the heat
capacity states

C^(α)_β,γ = dQ_β,γ / dα    (2),

where C^(α)_β,γ is the general heat capacity upon changing of the parameter α, and β, γ are physical
quantities which stay fixed in the considered process. For instance, C^(T)_V is the heat capacity at fixed
volume, which relates the heat dQ transferred to a system and the temperature change dT. See paragraph 3.6 in
"Equilibrium Thermodynamics" by C. J. Adkins for more examples of heat capacities.

¹ Sometimes authors of different books like to distinguish between thermal equilibrium and thermodynamic equilibrium. They define thermodynamic
equilibrium in the same way as we define thermal equilibrium here. When they talk about thermal equilibrium, they mean equilibrium of two
systems in thermal contact. I do not see strong reasons to introduce two types of equilibrium in this course. For instance, here we show how to apply
thermodynamics to large (not quantum mechanical) objects. Any of these objects can be split into many small, but still classical parts so that these
parts are in thermal contact. I do not see why one should distinguish thermal equilibrium of the parts from thermodynamic equilibrium of the whole
object.

Reversible process. It is a process the direction of which can be "reversed" by means of infinitesimal
changes in some property of the surroundings. During a reversible process, the system is in
thermodynamic equilibrium with its surroundings throughout the entire process. This is an idealized
process in which it is possible to return both the system and surroundings to their original states.
Frictionless motion is an example of a reversible process.

Function of state is a property whose value does not depend on the path taken to reach that specific
value. Function of state is meant to describe the equilibrium state of a system. It is easier to
understand the meaning of this definition by considering real examples. For instance, among internal
energy, heat and work only internal energy is a function of state (see paragraph 1.2).

1.4 Model for idealized engine


In the first half of the 19th century many things about expansion of hot gases were known and people
tried to build first heat engines, but those were very ineffective. The development of thermodynamics
was greatly motivated by the goal to understand these fundamental limitations. Let’s follow the logic
of that time and try to build a theoretical model of an engine.

The mission of every heat-engine is to transfer heat into work. The engine will operate in cycles such
that in every cycle the heat Q1 will be supplied from a “hot reservoir” to a working body (Q1 > 0).
The working body will use the heat to produce the work W (W < 0). To keep generality of the model,
we assume that the heat does not turn into work completely, but a part of the heat Q2 is rejected
(|Q2| > 0) by the working body to another body, which we will call the "cold reservoir". Obviously, after
one cycle the total internal energy of the working body should not be affected (ΔU = 0). Otherwise,
the body would get exhausted and the engine would stop working. Hence for such an engine, using the
first law of thermodynamics, we obtain |Q1| = |W| + |Q2|. All these processes and the model itself
are summarized in the scheme in Fig. 1.1.

Figure 1.1. Model for idealized engine.

We define the efficiency of the heat-engine in the form:

η = |W| / |Q1|    (2a),

which using the first law of thermodynamics becomes

η = 1 − |Q2| / |Q1|    (2b).

As this is an idealized heat-engine, we assume that this is a reversible engine. It means that such an
engine can be run forwards and backwards. If it runs forwards, the heat flows from the hot towards
the cold reservoir and the working body performs a work. If the heat-engine runs backwards, the heat
is pumped from the cold to the hot reservoir and this requires performing some work on the working
body.

1.5 Second law of thermodynamics: early formulations

Can we imagine in nature a spontaneous heat flow from a colder to a hotter reservoir? Such a
phenomenon is obviously impossible, as it would mean that the bodies spontaneously go further away
from the thermal equilibrium. Clausius formulated this empirical law by the following statement: It
is impossible for heat to transfer spontaneously from a colder to a hotter body without causing
other changes.

If we assume that this is true, we can also show that no process is possible whose sole result is the
complete conversion of heat to work. This statement was formulated by Thomson (Lord Kelvin)
and it can be proven with the help of the simple model of idealized heat-engine. Both these
statements made by Clausius and Kelvin are known as early formulations of the second law of
thermodynamics. In fact, this law shows that a part of the heat supplied to the working substance
must be rejected (| 𝑄2 | ≠ 0). From Eq. (2b) it is seen that the efficiency will never be equal to 1.

1.6 Proof of the Kelvin statement assuming that the Clausius statement is true
Assume that the Kelvin statement is not true and it is possible to have a process whose sole result is
the complete conversion of heat into work. In Fig. 1.2 this process is realized by the working body
on the left. In this case we can take the work produced by the body and use it to feed an idealized
heat-engine that runs backwards (in Fig. 1.2 it is the one on the right). It is seen that the combination
of these two working bodies implies that the heat spontaneously flows from the cold reservoir to the
hot reservoir and this is impossible. Therefore, no process is possible whose sole result is the
complete conversion of heat into work.
Figure 1.2. Proof of the Kelvin statement assuming that the Clausius statement is true.

1.7 Carnot’s theorem


Using the simple model of idealized reversible heat-engine one can make another important step. In
particular, we can prove that no heat-engine operating between two given reservoirs can be more
efficient than a reversible heat-engine operating between the same two reservoirs. This
statement is also known as Carnot’s theorem. To prove it, we will again use the model for idealized
engine.

Figure 1.3. Proof of Carnot’s theorem

In Fig. 1.3 the left working body represents a reversible engine with the efficiency η_C, and the right
one represents a hypothetical engine with the efficiency η_H > η_C. It means that
|W_C| / |Q_C1| < |W_H| / |Q_H1|. If we use the
work produced by the hypothetical engine and feed it to the reversible engine, the reversible engine
will run backwards, pumping heat from the cold reservoir to the hot one. Since |W_C| = |W_H|, we
obtain that |Q_C1| > |Q_H1|. It means that the combination of these two engines solely pumps heat
from the cold to the hot reservoir, and this violates the Clausius statement of the second law of
thermodynamics. A situation where η_H = η_C is still allowed. This also implies that all reversible heat-
engines are equally efficient.
It is remarkable that Carnot formulated this theorem long before Clausius and Kelvin. In his
essentially philosophical work "Reflections on the Motive Power of Fire", which lacked
mathematics, he treated the heat engine as if it were a mill brought into motion by caloric flowing from
the hot to the cold reservoir.

1.8 Carnot’s cycle for the case of an ideal gas


In the particular example of ideal gas temperature acquires a clear physical meaning. It is a measure
of the average kinetic energy of particles forming the gas. Let’s try to construct a reversible cycle for
engine in which the gas plays the role of the working body.

In order to realize a reversible cycle which converts heat into work, one must find a sequence of
reversible processes which perform the conversion such that after the whole cycle the internal energy
of the working body does not change (ΔU = 0). These processes must include transfer of heat from the
hot reservoir to the working body, actual production of work, and rejection of heat from the working
body to the cold reservoir. According to the Clausius statement of the second law of
thermodynamics, a transfer of heat from hot to cold body cannot be reversed. Therefore in order to
build a reversible cycle one should avoid any contacts of bodies with different temperatures at any
stage of the cycle. Is it possible at all? It was Sadi Carnot who solved this puzzle and realized that
such a cycle must consist of two isothermal and two adiabatic processes.

Figure 1.4 shows how the pressure and the volume of ideal gas, which plays the role of the working
body in Carnot’s engine, change during the cycle. As you know, the cycle consists of 4 steps:
1) The working body is in contact with the hot reservoir. The hot reservoir is in thermal
equilibrium with the working body. The temperature of the working body in this
process is T1. The working body, i.e. the ideal gas, expands isothermally and
reversibly. In this process the working body absorbs heat Q1 and performs work
W1.
2) The working body is isolated from the reservoirs and expands adiabatically and
reversibly. The temperature of the ideal gas changes from T1 to T2.
3) The working body is in contact with the cold reservoir. The cold reservoir in this
process is in thermal equilibrium with the working body and the temperature of the
body is T2. The working body, i.e. the ideal gas, is compressed isothermally and
reversibly. In this process the working substance rejects heat Q2 as a result of the work
W2 performed on it.
4) The working body is isolated from the reservoirs and is compressed adiabatically
and reversibly. The temperature of the ideal gas changes from T2 to T1.
Figure 1.4. Carnot’s cycle with ideal gas as the working body.

For the case of an ideal gas an isothermal process means that the internal energy does not change.
Indeed if we take a gas of atoms, the total internal energy will be given by the sum of potential and
kinetic energies of the atoms constituting the gas. The potential energy is given by interatomic
interactions. In the case of an ideal gas the atoms do not interact. It means that the internal energy is fully
defined by the kinetic energy of the atoms, and thus if the temperature of the gas does not change, the
internal energy does not change either. It is thus clear that |Q2| = |W2| and |Q1| = |W1|, meaning that
the net work performed by the heat-engine in this cycle is equal to |W| = |Q1| − |Q2|.

For an isothermal process of an ideal gas at temperature T1, one finds that |Q1| = |W1|. It is known that
a gas expanding from volume V1 to volume V2 at pressure p performs work dW = −p dV. For an ideal
gas it is known that pV/T = const. Therefore, Q1 = const ∫_{V1}^{V2} T1 dV/V = const T1 ln(V2/V1). Similarly, we find the
expression for the second isothermal process of compression from V3 to V4: Q2 = const ∫_{V3}^{V4} T2 dV/V = const T2 ln(V4/V3).
For adiabatic processes Poisson's equations state pV^γ = const or TV^(γ−1) = const. It means that
V2/V1 = V3/V4. Therefore, it is easy to see that in this particular case |Q1|/T1 = |Q2|/T2. The
efficiency of such an engine is η = 1 − |Q2|/|Q1| = 1 − T2/T1. As all reversible heat-engines are equally
efficient, we have the expression for the efficiency of all reversible heat-engines

η = 1 − |Q2|/|Q1| = 1 − T2/T1    (3).

Moreover, we note that during the first isothermal processes (step 1), the working body and the hot
reservoir are in thermal equilibrium. It means that they have equal temperatures. Similarly, the cold
reservoir and the working substance have equal temperatures during the second isothermal process.
Hence we obtain a definition for the ratio of thermodynamic temperatures of two bodies. The ratio
of the thermodynamic temperatures of two bodies is equal to the ratio of the amounts of heat
exchanged with infinitely large reservoirs at these temperatures by a reversible heat-engine
operating between them.

Finally, we can answer the main question of this lecture: why are heat engines so inefficient? As a
reversible engine is the most efficient engine we can build, the maximum efficiency which we can
achieve is given by Eq. 3 and defined by the ratio of thermodynamic temperatures of two reservoirs.
If during one cycle of a reversible heat engine the temperature of the working body changes between
300 K and 400 K, its efficiency will not exceed 25%. For real engines it will be obviously much
lower.
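A minimal numerical sketch of this estimate (Eq. 3 with the two reservoir temperatures quoted above):

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum efficiency of a reversible heat-engine, Eq. (3)."""
    return 1.0 - T_cold / T_hot

print(carnot_efficiency(400.0, 300.0))   # 0.25, i.e. at most 25% for these reservoirs
```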
2. CLAUSIUS’ THEOREM. ENTROPY.
SECOND LAW OF THERMODYNAMICS IN TERMS OF ENTROPY
2.1 “Paradox” showing why intuitive understanding of thermodynamics can be wrong

As you can see, a naïve “assumption” that heat is a flow of caloric was good enough to draw rather
correct conclusions regarding heat engines. Based on this assumption Sadi Carnot was able to build a
model for an idealized engine and formulate the theorem about the efficiency of reversible engines, getting
very close to a formulation of the second law of thermodynamics. The validity of this naïve, but
intuitive understanding of heat as a flow of caloric and heat engine as a mill, which uses caloric
instead of water, can be easily questioned by the example shown in Fig. 2.1.

Consider a reversible heat engine operating between two reservoirs at temperatures T1 and T3,
respectively. A part of the heat flowing through the working body is converted into work and the rest
is rejected to the cold reservoir (see Fig. 2.1). In the second engine we insert one additional reservoir
at temperature T2 between the working body and the hot reservoir (T1 > T2 > T3). Intuitively we think
that these two engines must have the same efficiency. Indeed, in the both examples the working body
accepts the same amount of heat Q1. According to Eq. 3, however, the efficiencies of the Carnot
cycles are not equal at all: η1 = 1 − T3/T1 and η2 = 1 − T3/T2. As T2 < T1, we obtain that η2 < η1. It means
that although the working bodies in these two cases obtain equal amounts of heat Q1, in the second
case more heat is rejected by the working substance without performing any work, Q3 > Q3'. This
“paradox”, however, is purely due to our intuitive understanding of thermodynamics. In order to go
beyond this intuitive picture one has to introduce new counter-intuitive quantities such as entropy.
There is no device that measures entropy. We have no good intuitive feel for this physical quantity. Why do
we need it? How was it introduced in the first place?

Figure 2.1. Two heat-engines which accept equal amounts of heat, but perform different amounts of work.

Let’s calculate the difference between the work performed by heat-engine 1 and that
performed by heat-engine 2: W13 − W23 = Q1(1 − T3/T1) − Q1(1 − T3/T2) = T3(Q1/T2 − Q1/T1). If we introduce a
new quantity S = Q/T, where Q is the heat transferred to a body at temperature T, we obtain a very
simple expression 𝑊13 − 𝑊23 = 𝑇3 Δ𝑆. It means that upon passing through the intermediate reservoir
a part of the energy degrades. The degraded energy cannot be transformed into work and must be
rejected to the cold reservoir. The amount of degraded energy is equal to 𝑇3 Δ𝑆. This degradation of
energy appeared when the third reservoir was introduced in the chain of heat transfer. In particular,
by adding the third reservoir in the chain we also introduced one irreversible process. The heat flow
between two reservoirs at temperatures T1 and T2 is obviously irreversible. This irreversibility results
in an increase of S and eventually leads to the degradation of energy. People started to use this new
quantity S calling it “entropy”.1
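A small numerical sketch of the comparison made above (the reservoir temperatures and Q1 below are arbitrary illustrative numbers, not values from the text):

```python
T1, T2, T3 = 600.0, 450.0, 300.0   # hot, intermediate and cold reservoirs, K (illustrative)
Q1 = 1000.0                        # heat accepted by the working body per cycle, J

W13 = Q1 * (1 - T3 / T1)           # work of engine 1, fed directly from the hot reservoir
W23 = Q1 * (1 - T3 / T2)           # work of engine 2, fed via the intermediate reservoir
dS = Q1 / T2 - Q1 / T1             # entropy increase of the irreversible T1 -> T2 heat flow

print(W13 - W23, T3 * dS)          # both about 166.7 J: the degraded energy equals T3 * dS
```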

2.2 Enigma of Q/T ratio

Consider two heat engines. The first one is a reversible Carnot’s engine with the efficiency 𝜂𝑟𝑒𝑣 . The
second is a hypothetical one with the efficiency 𝜂. The engines are operating between two reservoirs
as shown schematically in Fig. 2.2. According to Carnot’s theorem (see Lecture 1) η_rev ≥ η. It
means that 1 − |Q2|/|Q1| ≤ 1 − |Q_r2|/|Q_r1|, or |Q2|/|Q1| ≥ |Q_r2|/|Q_r1| = T2/T1. Hence we obtain that for any engine operating
between these two reservoirs |Q2|/T2 ≥ |Q1|/T1. As the heat entering the working body is positive and the
rejected heat is negative (this is the convention explained in Lecture 1), we may write that ∑_i Q_i/T_i ≤ 0 and extend this
conclusion to a more general case:

When a system performs a cycle while in contact with the environment and absorbs or rejects
heat Q_i (i = 1, 2, …, n) from the heat reservoir at temperature T_i, then after completion of the cycle the
following holds:

∑_i Q_i / T_i ≤ 0    (2.1a),

where equality is valid for a reversible cycle.

Rewriting the very same expression in the differential form gives that for any closed cycle
∮ dQ/T ≤ 0    (2.1b),

where equality necessarily holds for a reversible cycle. This result is called Clausius’s theorem.

¹ The word originates from the Greek "entropē", which means "turning toward" or "transformation". According to the Oxford Advanced American
Dictionary, entropy is the measurement of the energy that is present in a system but is not available to do work. It was designed to denote that any
energy eventually and inevitably turns into useless heat. The idea was inspired by an earlier formulation by Sadi Carnot of what is now known as the
second law of thermodynamics.
Figure 2.2. Two heat-engines (reversible and arbitrary) operating between two reservoirs.

2.3 Entropy
The ratio dQ/T seems to be useful in the physics of heat transfer. We now define a new variable, which we
will call entropy S. It is said that for an infinitesimal reversible change the change of entropy dS for a
system is defined as

dS = dQ_rev / T    (2.2).

It is easy to show that the entropy is a function of state (see paragraph 5.2 in “Equilibrium
Thermodynamics” by C. J. Adkins).

Additivity is probably the most important property of entropy. Using the definition of entropy it is
easy to see that if a system consists of two parts with the entropies SA and SB, the total entropy of the
system is SA+SB.

2.4 Second law of thermodynamics in terms of entropy

Since entropy is a function of state, a change of entropy upon a transfer of a thermodynamic system
from state A to state B does not depend on the path of the transfer. Let’s consider force F acting on a
thermodynamic system and a response of this system dx to the force. The work done on the system is
dW=Fdx. Work is not a function of state, but performing this work the system is transferred from
state A to state B. Imagine that there are two paths of this transfer. One path is reversible and the
other is arbitrary. These two paths together form a cycle for which according to Clausius’ theorem
∮ dQ/T ≤ 0.
𝑇

In the case of the reversible process of transfer from B to A the entropy changes as
𝐴 𝑑𝑄
∆𝑆𝑟𝑒𝑣 = ∫𝐵 𝑟𝑒𝑣 .
𝑇

According to Clausius’ theorem ∫_A^B (dQ/T)_irrev + ∫_B^A (dQ_rev/T) ≤ 0, or ∫_A^B (dQ/T)_irrev ≤ ∫_A^B (dQ_rev/T).
Taking into account the definition of entropy, one obtains that for an arbitrary process
∫_A^B (dQ/T)_irrev ≤ S_B − S_A, where the equality holds for those cases when the process from A to B is
reversible. Hence we obtain that for any infinitesimal change in a thermodynamic system the entropy
of the system does not change, if the process is reversible, and increases in irreversible processes:

dQ/T ≤ dS    (2.3).

For an isolated system dQ = 0, and hence dS ≥ 0: the entropy of an isolated system cannot
decrease.

In fact, this is a mathematical formulation of the second law of thermodynamics. In Fig. 2.3 one can
see a copy of the original article of R. Clausius in which he formulated this law and even applied it
together with the law of conservation of energy to the whole universe.

Figure 2.3. Copy of the original article of R. Clausius.

2.5 Entropy of ideal gas

In this paragraph as an example of calculation of entropy we will derive the entropy of 1 mole of
ideal gas. We assume that the molar specific heat of the gas at constant volume is constant, CV=CV0.
We can also assume that the gas is in a reservoir whose volume V and temperature T can be changed.
According to the first law of thermodynamics one can write for internal energy that dU=dW+dQ.
This law can be rewritten in terms of two variables in the considered experiment (dT and dV) dU=-
pdV+TdS. It means that

dS = (dU + p dV) / T.
Upon a temperature increase at constant volume one can expect that 𝑑𝑈 = 𝐶𝑉 𝑑𝑇. Moreover, for 1
mole we have pV=RT, where R is the universal gas constant. Therefore, we obtain that
dS = C_V dT/T + R dV/V.

We integrate this expression


∫ dS = ∫ C_V dT/T + ∫ R dV/V,

and finding the indefinite integrals obtain the expression for the entropy

𝑆(𝑇, 𝑉) = 𝐶𝑉 ln 𝑇 + 𝑅 ln 𝑉 + 𝑐𝑜𝑛𝑠𝑡,
where const is the constant of integration. It is seen that the entropy increases with an increase of the
temperature and the volume of the ideal gas.
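A short numerical sketch of this result for one mole of a monatomic ideal gas (the value C_V = 3R/2 is an assumption of this example; only entropy differences are computed, so the integration constant drops out):

```python
import math

R = 8.314        # J/(mol K), universal gas constant
CV = 1.5 * R     # molar heat capacity of a monatomic ideal gas (assumed for the example)

def delta_S(T1, V1, T2, V2):
    """Entropy change of 1 mole of ideal gas from S(T, V) = CV*ln(T) + R*ln(V) + const."""
    return CV * math.log(T2 / T1) + R * math.log(V2 / V1)

# Isothermal doubling of the volume: dS = R*ln(2), about 5.76 J/K > 0
print(delta_S(300.0, 1.0, 300.0, 2.0))
```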

2.6 How can we quantify entropy in real experiments?

Entropy is a weird concept. There is no device to measure entropy. Nevertheless, we can try to
develop our intuition to “feel” the entropy better. Look at the two situations with 10 hot non-
interacting atoms sketched in Fig. 2.4. The internal energies of the gas in these two cases are equal,
but the probability of the situation on the left is lower than that on the right. This simple illustration
demonstrates that for any state of matter there is a connection between the probability of realizing this
specific state and disorder. We have shown that in equilibrium the entropy must be maximized.
Intuitively we also understand that the equilibrium state must also be the state with the highest
probability.

Consider two systems, 1 and 2, in states A and B, respectively. Let’s assume that the probability to
find system 1 in state A is g1 and the probability to find system 2 in state B is g2. Probability theory
says that the probability to find systems 1 and 2 in the states A and B simultaneously is g = g1·g2. According to the
definition of entropy, the total entropy of these two systems taken together is S = S_A + S_B.
Since the entropy is additive while the probability is multiplicative, the entropy and the probability must be
interconnected logarithmically:

S = k · ln(g),

where k is a constant.

Consequently, looking at the sketch in Fig. 2.4 it is seen that the entropy of the situation on the left is
lower than that on the right. It is also seen that by placing a piston on the position of the dashed line
one can harness the internal energy of the gas and let it perform work in the case of the situation on
the left, but not in the case of the situation on the right. Hence this illustration shows that the entropy
quantifies the energy which is present in the system, but is not available to do work.

Figure 2.4. Illustration of statistical meaning of entropy.
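A minimal counting sketch of Fig. 2.4 (assuming the simplest model in which each of the N non-interacting atoms is equally likely to sit in the left or the right half of the vessel, so that g is the number of arrangements with n atoms on the left):

```python
from math import comb, log

k = 1.380649e-23    # Boltzmann constant, J/K
N = 10              # number of non-interacting atoms, as in Fig. 2.4

for n in (0, 5):                 # all atoms on one side vs. evenly spread
    g = comb(N, n)               # number of microstates with n atoms in the left half
    print(n, g, k * log(g))      # S = k*ln(g) is zero for n = 0 and maximal near n = N/2
```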

2.7 Modelling noise in experiments using entropy and second law of thermodynamics

Let’s consider an experiment that aims to measure quantity x. The measurements are affected by
noise. In order to model noise, we assume that fluctuations of the physical quantity are described by
a Gaussian distribution. It implies that if the true value of the quantity is x̄, the probability that an
outcome of the experiment falls in the range x̄ − Δx/2 ≤ x ≤ x̄ + Δx/2 can be calculated as

P(x̄ − Δx/2 ≤ x ≤ x̄ + Δx/2) = ∫_{x̄−Δx/2}^{x̄+Δx/2} f(x) dx = ∫_{x̄−Δx/2}^{x̄+Δx/2} (1/(√(2π)σ)) exp(−(x − x̄)²/(2σ²)) dx,

where f(x) is the Gaussian probability density function and σ is the standard deviation of the distribution.

How do we know that the probability density f(x) is Gaussian? Usually people argue that f(x) is
Gaussian in accordance with the Central Limit Theorem. In fact, second law of thermodynamics
allows one to demonstrate in a relatively simple way that for a system in thermodynamic equilibrium
fluctuations of classical (i.e. not quantum mechanical) physical quantities are described by Gaussian
distributions.

The entropy of the system must be a function of the quantity x. As it was shown above,
S(x) = k·ln(g(x)), and it also means that g(x) = exp(S(x)/k). Note that g(x = x̄) is the probability to find
the system in the state with x = x̄. Therefore we expect that f(x) ~ g(x) and thus aim to find g(x).
Expanding the function S(x) in a Taylor series in the vicinity of x = x̄, one obtains

S(x) = S(x̄) + (∂S(x̄)/∂x)(x − x̄) + (1/2)(∂²S(x̄)/∂x²)(x − x̄)² + ⋯

Here we disregard terms of higher order, assuming that the fluctuations have such a small amplitude that
the other terms can be neglected. The second law of thermodynamics states that the entropy of a system in
thermodynamic equilibrium is at a maximum. It means that ∂S(x̄)/∂x = 0 and ∂²S(x̄)/∂x² < 0. If we for
simplicity express the second derivative as ∂²S(x̄)/∂x² = −B, one obtains that

g(x) = exp( S(x̄)/k − (B/2k)(x − x̄)² ).

Hence we can see that the probability density function should have the following form:
f(x) ~ exp(−(B/2k)(x − x̄)²). This is a Gaussian distribution.

Therefore we have shown that fluctuations of a system around a thermodynamic equilibrium are
described by a Gaussian function. The noise can be seen as a result of fluctuations and can be
modelled accordingly. This statistical consideration of noise has been adapted from paragraph 110 of
L. D. Landau and E. M. Lifshitz, "Statistical Physics", Part 1, Third Edition (Course of
Theoretical Physics, Volume 5) (Butterworth-Heinemann, Oxford, 2006).
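A numerical sketch of this conclusion (B, x̄ and the sample size are arbitrary illustrative parameters): fluctuations drawn from f(x) ∝ exp(−B(x − x̄)²/2k) are Gaussian with variance σ² = k/B.

```python
import numpy as np

k = 1.380649e-23        # Boltzmann constant, J/K
B = 1.0e-21             # -(d2S/dx2) at the entropy maximum (illustrative value)
x_bar = 5.0             # "true" value of the measured quantity (illustrative)

sigma = np.sqrt(k / B)  # width implied by f(x) ~ exp(-B*(x - x_bar)**2 / (2*k))
rng = np.random.default_rng(0)
samples = x_bar + sigma * rng.standard_normal(100_000)   # simulated noisy measurements

print(sigma, samples.mean(), samples.std())   # the sample spread reproduces sqrt(k/B)
```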

2.8 Entropy and irreversible processes in operation of heat-engines

To conclude the lecture, we explain the “paradox” in operation of heat-engine described in section
2.1. For this we describe the work of heat-engine using the concept of entropy. For a certain
sequence of processes the first law of thermodynamics states ∆𝑈 = 𝑊 + 𝑄 or 𝑄 = ∆𝑈 − 𝑊. For the
same sequence of processes the second law of thermodynamics in terms of entropy states 𝑇∆𝑆 ≥ 𝑄.
Using these laws we find that 𝑇∆𝑆 ≥ ∆𝑈 − 𝑊 and 𝑊 ≥ ∆𝑈 − 𝑇∆𝑆. Therefore, we can formulate
two statements:
- 𝑊 < 0. In this case the working body performs the work on the environment (the case of
heat-engine). The work that can be extracted from such a system in given surroundings is at
maximum if the processes are reversible.
- 𝑊 > 0. It means that in the considered sequence of processes the environment does work on
the working body (the case of a fridge). The work that must be done by the environment to
achieve the required changes is at minimum if the processes are reversible.
3. THERMODYNAMICS AS A METHOD IN PHYSICS.
THERMODYNAMIC POTENTIALS. MAXWELL RELATIONS
3.1 Introduction
The goal of this chapter is to show that thermodynamics can be very powerful as a method in
magnetism, electricity and mechanics. The diagram shown in Fig. 3.1 depicts different areas
of physics and mutual interconnections between them. Using first law of thermodynamics, the
concept of thermodynamic potential, the reciprocity and reciprocal theorems we can derive
expressions which describe the thermoelastic effect, piezoelectricity, pyroelectricity and the
magnetocaloric effect.

Figure 3.1. Different areas of physics and typical quantities employed to describe effects in
magnetism (H – magnetic field, m - magnetization), electricity (E - electric field, P - polarization),
thermodynamics (T - temperature, S - entropy), mechanics (f - force, L - displacement).

The first law of thermodynamics states that 𝑑𝑈 = 𝑑𝑊 + 𝑑𝑄. Until now we mainly discussed
(ideal) gases. In particular, when we had to define work performed on a system, we always
meant ideal gas under pressure. In this case, the work W can be calculated integrating dW=-
pdV. During the lectures I emphasized that thermodynamics is not a science about the physics
of (ideal) gases. Ideal gas is just a convenient model system. The power of thermodynamics is
in the fact that it can be applied to practically any area of physics. In each of these cases one
has to start with first law of thermodynamics, but using a proper expression for work
performed on physical system of interest. For instance, in the case of a piece of rubber, work
can be performed by tensional force f which enlarges the length of the rubber piece by dL:
dW=fdL. Mechanical force can also perform work against surface tension by increasing the
surface area by dA: dW=dA. Work can be performed by magnetic field H that induces
magnetic moment dm: dW=0Hdm. An electric field E performs work by inducing
polarization dp. It means that dW=Edp. Derivation of all these expressions for work
performed by external stimuli (𝐹, 𝐸, 𝜇0 𝐻) is beyond the scope of this course. However, for
thermodynamics these derivations are not that important. In order to be able to apply the
method of thermodynamics, it is sufficient to define an external stimulus (force 𝑋𝑖 ) on a
system and a response of the system (𝑑𝑥𝑖 ) such that the product of the stimulus and the
response is equal to the work performed on the system (dW). In general, work performed on a
physical system can be written as 𝑑𝑊 = 𝑋𝑖 𝑑𝑥𝑖 .

3.2 An example of real research with the help of thermodynamics. The case of ferromagnetism

Intermezzo. Ferromagnetism

Normally we assume that the magnetic moment m in media is induced by an external


magnetic field. Ferromagnetism is the basic mechanism by which certain materials (such as
iron) form permanent magnets. We know that every permanent magnet has two sides: a North
pole and a South pole. A stronger magnet can bring a weaker magnet in motion and change
the orientation of the poles of the latter. This is how an electric motor works. It means that the
magnetic field of the stronger magnet performs work on the weaker one. In order to describe
the magnet, define the orientation of its poles and the strength of the produced magnetic field,
we use the concept of magnetic moment 𝐦. The magnetic moment is defined such that the
potential energy of a magnet with moment 𝐦 in external magnetic field H equals to 𝑊𝑝𝑜𝑡 =
− 𝜇0 𝐇𝐦. It means that if the magnetic field 𝐇 induces magnetic moment dm in the direction
of the field, the work performed on the magnet equals 𝑑𝑊 = 𝜇0 𝐻𝑑𝑚. What is the origin of
the spontaneous magnetic moment m in ferromagnets? Electrons have spin and, associated
with it, elementary magnetic moment. In ferromagnets (Fe, Co, Ni) quantum mechanical
exchange interaction between the spins is so strong that it aligns all the spins in the medium
so that the net spin and the net magnetization are not zero even with no external magnetic
field. When the temperature is too high, thermal fluctuations are stronger than the exchange
interaction and the spin order is destroyed. In physics and materials science, the Curie
temperature (TC), or the Curie point, is the temperature at which permanent magnets lose their
permanent magnetic properties. The Curie point for Fe is 1043 K, for Co 1388 K and for Ni
627 K. A typical temperature dependence of the magnetization is shown in Fig. 3.2. Applying
a very strong magnetic field one can reverse the magnetization of a permanent magnet. Field
dependence of the magnetization on an external magnetic field is often characterized by so
called hysteresis loop shown in Fig. 3.3. It is seen that with no magnetic field a ferromagnet
can be in one of two states characterized by opposite orientations of the magnetic moment. It
is experimental fact that in zero magnetic field these states have equal energies and equal
entropies. Switching ferromagnets between these two states with the help of an external
magnetic field is the main principle of magnetic recording, where the states of a magnet with
moment "up" or "down" can be used to represent bit "0" and bit "1", respectively.
We may know nearly nothing about the quantum mechanical origin of ferromagnetism and have a very
naïve understanding of the phenomenon, but still, using the laws and the concepts of
thermodynamics, we will be able to predict the behavior of ferromagnets. Experiments show that
the magnetic moment of ferromagnets m is a function of temperature. Above the so-called
Curie temperature, ferromagnets are in a paramagnetic state. It means that without an external
magnetic field H, the magnetic moment of the medium is zero (𝐦 = 0). Application of the
magnetic field to a paramagnet induces a magnetization in the medium (𝐦 = 𝜒𝑯). Below the
Curie temperature the magnetic moment is not zero, even with no applied magnetic field. Can
we theoretically derive the law for m(T) using the second law of thermodynamics?

Figure 3.2. Temperature dependence of the magnetization of GaMnAs fabricated using different
annealing times. GaAs is a non-magnetic material. Doping the material with Mn ions
induces magnetic order. The material becomes ferromagnetic with a Curie temperature of 120 K.
Annealing the material at 160 K causes redistribution of the ions, improves the coupling between their
magnetic moments and results in an increase of the Curie temperature. It is interesting to note that
qualitatively all the temperature dependences, especially near their TC, look the same (data taken from
Lin Chen et al., Nano Lett. 11 (7), pp 2584–2589 (2011)).
Figure 3.3. Magnetic hysteresis loop and its description. The loop and the explanation are taken
from http://encyclopedia2.thefreedictionary.com/Hysterisis

Our first step is to define the set of independent variables relevant to this problem. The
variables in our case are m, H, S and T. Secondly, we construct a function which depends on
m, H and T. The only requirement to this function is that it should be at minimum, when at the
given H and T the ferromagnet is in a thermodynamic equilibrium. Intuitively, it is more
convenient to work with such a function than with entropy, because the function has a clear
analogy in classical mechanics, the potential energy. A physical system searches for equilibrium by
minimizing its potential energy.
Looking for such a function F, one can notice that a relatively simple expression
satisfies the abovementioned requirements:

𝐹 = 𝑈 − 𝑇𝑆 − 𝜇0 𝐦𝐇 (3.1),

where U is the internal energy and μ0 is the vacuum permeability. Assuming that m and H are
mutually parallel, one can find that

𝑑𝐹 = 𝑑𝑈 − 𝑇𝑑𝑆 − 𝑆𝑑𝑇 − 𝜇0 𝑚𝑑𝐻 − 𝜇0 𝐻𝑑𝑚.

For the case of fixed T and H the equation becomes shorter:

𝑑𝐹 = 𝑑𝑈 − 𝑇𝑑𝑆 − 𝜇0 𝐻𝑑𝑚 (3.2).

According to the first law of thermodynamics (𝑑𝑈 = 𝑑𝑄 + 𝑑𝑊), the second law of
thermodynamics in terms of entropy (𝑑𝑄 ≤ 𝑇𝑑𝑆) and the expression of the work performed
on a medium by an external magnetic field H (𝑑𝑊 = 𝜇0 𝐻𝑑𝑚), one can write 𝑑𝑈 ≤ 𝑇𝑑𝑆 +
𝜇0 𝐻𝑑𝑚. It means that

𝑑𝑈 − 𝑇𝑑𝑆 − 𝜇0 𝐻𝑑𝑚 ≤ 0 (3.3).

From Eq. 3.2 and 3.3, we see that 𝑑𝐹 ≤ 0. It means that the guessed function F of an isolated
system can either decrease or stay constant. If the system has reached equilibrium and
does not evolve any further, it means that F has found its minimum. Any process that results
in an increase of F is forbidden by the laws of thermodynamics and thus the system simply
cannot leave this state. At thermal equilibrium F is at minimum.

As the next step we represent F(m,H) in terms of field dependent and field independent parts

𝐹(𝑚, 𝐻) = 𝐹(𝑚, 0) − 𝜇0 𝐻𝑚 (3.4)

and try to figure out what we can say about 𝐹(𝑚, 0).

Relying on our experience and intuition, it is reasonable to state that the internal energy of a
magnet does not change upon a reversal of m. On a hard drive, for instance, magnets
representing “0” bits have the same internal energy as magnets representing “1” bits. If one
builds an electromagnet which is able to acquire magnetic moment m directed either “up” or
“down”, the electromagnet will consume the same amount of energy in these two cases
independently of the polarity of m. It means that the internal energy (and the entropy) of a
magnet with the magnetization pointing up is equal to the internal energy (and the entropy) of
the same magnet with the magnetization pointing down. Hence the internal energy U is an even
function of the magnetization m. From Eq.3.1 we conclude that the same is true for the
function 𝐹(𝑚, 0). Representing 𝐹(𝑚, 0) in terms of Taylor series gives:

𝐹(𝑚, 0) = 𝐴𝑚2 + 𝐵𝑚4 + ⋯ (3.5),

where A and B are the corresponding coefficients. From experiments we know that below the
Curie temperature the stable state of a ferromagnet corresponds to a state with 𝑚 > 0. At the
Curie temperature and above 𝑚 = 0. At thermal equilibrium 𝐹(𝑚, 0) is at minimum.
Therefore, it is clear that A is negative below the Curie temperature and positive above the
Curie temperature. The simplest possible function A(T) that satisfies these requirements is
𝐴(𝑇) = 𝑎(𝑇 − 𝑇𝐶 ), where TC is the Curie temperature, a is a proportionality coefficient
(a>0). Limiting the Taylor series to two terms, one obtains:

𝐹(𝑚, 0) = 𝑎(𝑇 − 𝑇𝐶 )𝑚2 + 𝐵𝑚4 (3.6).


At thermal equilibrium F(m, 0) is at a minimum and dF(m, 0)/dm = 0. From Eq. 3.6 we find that
2𝑎(𝑇 − 𝑇𝐶 )𝑚 + 4𝐵𝑚3 = 0 or, assuming that 𝑚 ≠ 0 for the case 𝑇 < 𝑇𝐶 , one obtains

m(T) = √( a(T_C − T) / (2B) )    (3.7).
Hence we obtain a simple equation which describes the temperature dependence of the
magnetization in a ferromagnet below the Curie temperature. Although this equation has been
derived relying on very rough approximations, qualitatively Eq.3.7 reproduces temperature
dependence of the magnetization in real ferromagnets very well.

Above the Curie temperature, one observes 𝑚 = 𝜒𝐻. We can continue in the same fashion
and derive the law for temperature dependence of χ. If the field H is not zero, we start from
Eq.3.4 and limit the Taylor series given by Eq.3.5 to one term only. As a result, we obtain

F(m, H) = a(T − T_C)m² − μ0 H m


The condition of having F at a minimum gives dF(m, H)/dm = 0, and it means that

2𝑎(𝑇 − 𝑇𝐶 )𝑚 − 𝜇0 𝐻 = 0

Since the magnetization is induced now by the magnetic field (𝑚 = 𝜒𝐻), we have

2𝑎(𝑇 − 𝑇𝐶 )𝜒𝐻 = 𝜇0 𝐻

and
χ = μ0 / (2a(T − T_C))

This expression predicts that the paramagnetic susceptibility 𝜒 must diverge upon
approaching the Curie temperature from above. Experimentally it is seen as a sharp peak in
the dependence of the susceptibility on temperature.
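A numerical sketch of these two Landau-theory results (the coefficients a, B and the Curie temperature TC below are arbitrary illustrative values, not material data):

```python
import numpy as np

a, B, TC = 1.0, 1.0, 600.0       # Landau coefficients and Curie temperature (illustrative)
mu0 = 4e-7 * np.pi               # vacuum permeability

def m_below_TC(T):
    """Spontaneous magnetization m(T) = sqrt(a*(TC - T)/(2B)), Eq. (3.7), for T < TC."""
    return np.sqrt(a * (TC - T) / (2.0 * B))

def chi_above_TC(T):
    """Paramagnetic susceptibility chi = mu0 / (2a*(T - TC)), for T > TC."""
    return mu0 / (2.0 * a * (T - TC))

print(m_below_TC(np.array([300.0, 550.0, 599.0])))    # m vanishes as T -> TC from below
print(chi_above_TC(np.array([601.0, 650.0, 700.0])))  # chi diverges as T -> TC from above
```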

It is thus convenient to work with functions which, like the potential energy, are at a minimum
if the system is at equilibrium. Such functions are introduced for simplicity and are called
thermodynamic potentials or thermodynamic functions.

3.3. Examples of thermodynamic potentials

In the case of a gas one can define four thermodynamic potentials. These potentials are:

- internal energy: U
- enthalpy 𝐻 = 𝑈 + 𝑝𝑉
- Helmholtz function or free energy: 𝐹 = 𝑈 − 𝑇𝑆 (3.8)
- Gibbs function or Gibbs free energy 𝐺 = 𝑈 − 𝑇𝑆 + 𝑝𝑉

Why do we need so many of them? As it is mentioned above, the potentials in the first place
were introduced for convenience. Depending on the set of independent variables it is
convenient to use one or another potential. For instance, although a gas can be described by
many parameters (pressure, temperature, volume and entropy), only two of these variables are
independent. According to the first law of thermodynamics for the case of a gas

𝑑𝑈 = 𝑇𝑑𝑆 − 𝑝𝑑𝑉 (3.9).


It means that if in the problem of interest the entropy (S) and the volume (V) are two
independent variables, it is convenient to choose the internal energy as a thermodynamic
potential.

Similarly, one can show that

𝑑𝐻 = 𝑇𝑑𝑆 + 𝑉𝑑𝑝 (3.10).

𝑑𝐹 = −𝑆𝑑𝑇 − 𝑝𝑑𝑉 (3.11).

𝑑𝐺 = −𝑆𝑑𝑇 + 𝑉𝑑𝑝 (3.12).

It means that the enthalpy is a convenient thermodynamic potential for the problems in which
S and p are the independent variables. The Helmholtz energy is convenient when
the variables are T and V. The Gibbs function is used when the variables are p and T.
Following a procedure similar to the one described at the beginning of section 3.2, one can
show for a physical system that these thermodynamic potentials are at minimum if the system
is at thermal equilibrium.

3.4 Reciprocity and reciprocal theorem. Order of differentiation.


(See proofs elsewhere. For instance, in paragraphs 1.9.1, 1.9.2 and 1.9.3 of C. J. Adkins,
“Equilibrium Thermodynamics” (Cambridge University Press, Cambridge, 2003))

In this paragraph we provide some useful mathematical expressions. Suppose that three
variables x, y, z are related by an equation F(x, y, z) = 0 such that it is possible to express
one of the variables in terms of the other two independent ones, x = f(y, z). According to the
reciprocal theorem:

(∂x/∂z)_y (∂z/∂x)_y = 1    (3.13).
The reciprocity theorem states

(∂x/∂y)_z = −(∂x/∂z)_y (∂z/∂y)_x    (3.14).

If f(y, z) is a continuous function and its first as well as second derivatives are also
continuous (see Schwarz’s theorem), one finds that

∂²x/∂z∂y = ∂²x/∂y∂z    (3.15).

Figure 3.4. Graphical representation of x(y, z).


It is rather instructive to discuss the physical meaning of Eq. 3.15. In fact, it shows that if we
perform an experiment and measure x while changing y and z, it does not matter whether one first
changes y and afterwards z, or first z and afterwards y. This can be seen from Fig.
3.4. We consider a situation where the initial state of the quantity x is point “1”. Changing y and z
we achieve that x reaches the final state given by point “2”. There are two ways to proceed
from “1” to “2”. We use Taylor’s series up to second order, assuming that changing y or z
does not affect x a lot. With the help of the series we find x in point “2” (x2) for the route
through point “A”, i.e. when we first change y and afterwards z:

x_A = x_1 + (∂x_1/∂y)_z ∂y + (1/2)(∂²x_1/∂y²)_z (∂y)² + ⋯

x_2 = x_A + (∂x_A/∂z)_y ∂z + (1/2)(∂²x_A/∂z²)_y (∂z)² + ⋯

These expressions give

x_2 = x_1 + (∂x_1/∂y)_z ∂y + (∂x_1/∂z)_y ∂z + (1/2)(∂²x_1/∂y²)_z (∂y)² + (1/2)(∂²x_1/∂z²)_y (∂z)² + (∂/∂z)(∂x_1/∂y)_z ∂y∂z + ⋯

Doing the same, but for the case when the changes proceed via point “B”, i.e. when we first
change z and afterwards y, we find:

x_2 = x_1 + (∂x_1/∂y)_z ∂y + (∂x_1/∂z)_y ∂z + (1/2)(∂²x_1/∂y²)_z (∂y)² + (1/2)(∂²x_1/∂z²)_y (∂z)² + (∂/∂y)(∂x_1/∂z)_y ∂z∂y + ⋯

It is seen that in these two cases (i.e. proceeding via “A” or “B”) x will arrive at the very same
point “2” if

(∂/∂y)(∂x_1/∂z) = (∂/∂z)(∂x_1/∂y), or ∂²x/∂z∂y = ∂²x/∂y∂z.
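Both identities (3.13) and (3.14) are easy to verify symbolically. A minimal sympy sketch, taking the ideal-gas equation of state pV = RT for one mole as the relation F(p, V, T) = 0:

```python
import sympy as sp

p, V, T, R = sp.symbols('p V T R', positive=True)

p_of = R * T / V      # p(V, T) from the ideal-gas law
V_of = R * T / p      # V(p, T) from the ideal-gas law

# Reciprocal theorem, Eq. (3.13): (dp/dV)_T * (dV/dp)_T = 1
recip = sp.diff(p_of, V) * sp.diff(V_of, p).subs(p, p_of)
print(sp.simplify(recip))             # 1

# Reciprocity theorem, Eq. (3.14): (dp/dT)_V = -(dp/dV)_T * (dV/dT)_p
lhs = sp.diff(p_of, T)
rhs = -sp.diff(p_of, V) * sp.diff(V_of, T).subs(p, p_of)
print(sp.simplify(lhs - rhs))         # 0
```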

3.5 Legendre differential transformation


Assume we have function 𝑓(𝑥) and 𝑑𝑓 = 𝑓 ′ (𝑥)𝑑𝑥. It means that 𝑑(𝑓 ′ 𝑥) = 𝑓 ′ 𝑑𝑥 + 𝑥𝑑𝑓 ′ or
𝑑(𝑥𝑓 ′ − 𝑓) = 𝑥𝑑𝑓 ′ . Taking 𝐹 = 𝑥𝑓 ′ − 𝑓 and 𝑦 = 𝑓 ′ (𝑥) one does the Legendre
transformation

𝑓(𝑥) → 𝐹(𝑦)

Note that 𝑑𝐹(𝑦) = 𝐹 ′ (𝑦)𝑑𝑦, where 𝑥 = 𝐹 ′ (𝑦). It means that the old variable (x) is the
derivative of the new function (F), while the derivative of the old function (f) is now the
variable (y).
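A tiny symbolic sketch of this transformation for the arbitrarily chosen example f(x) = x²:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2                                            # example function f(x)

x_of_y = sp.solve(sp.Eq(y, sp.diff(f, x)), x)[0]    # invert y = f'(x), here x = y/2
F = sp.simplify((x * sp.diff(f, x) - f).subs(x, x_of_y))   # F(y) = x*f' - f = y**2/4

print(F, sp.diff(F, y))   # y**2/4 and y/2: the derivative of F returns the old variable x
```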

3.6. Maxwell relations


3.6.1. General principles

Using thermodynamics we can predict the behavior of a physical system just from general
principles, i.e. using the first law of thermodynamics and the definition of entropy. Let us
consider, for instance, the simplest case of a gas under pressure. First law of thermodynamics
for the case of the gas under pressure 𝑝 states:
𝑑𝑈 = 𝑇𝑑𝑆 − 𝑝𝑑𝑉
If we take partial differentials of 𝑈, we obtain
(∂U/∂S)_V = T and (∂U/∂V)_S = −p

Differentiating again with respect to the opposite variables gives:


∂²U/∂V∂S = (∂T/∂V)_S and ∂²U/∂S∂V = −(∂p/∂S)_V
If 𝑈 is a continuous function,

∂²U/∂V∂S = ∂²U/∂S∂V.

The equation means that it does not matter if one first changes volume and afterwards
entropy, or first entropy is changed and afterwards volume. The resulting change of the
internal energy is always the same. Anyway, after the differentiation one gets
(∂T/∂V)_S = −(∂p/∂S)_V    (3.16).

The equation is an example of Maxwell relations.

Following the same procedure, but using other thermodynamic potentials, we can obtain more
Maxwell relations.

From 𝑑𝐻 = 𝑇𝑑𝑆 + 𝑉𝑑𝑝 one finds


(∂T/∂p)_S = (∂V/∂S)_p    (3.17).

From 𝑑𝐹 = −𝑆𝑑𝑇 − 𝑝𝑑𝑉 one finds


(∂S/∂V)_T = (∂p/∂T)_V    (3.18).

From 𝑑𝐺 = −𝑆𝑑𝑇 + 𝑉𝑑𝑝 one finds


(∂S/∂p)_T = −(∂V/∂T)_p    (3.19).

Another way to derive Maxwell relations from an expression for a thermodynamic potential is
based on the fact that the thermodynamic potential is a function of state and therefore the
corresponding function is exact differential. If 𝑥 is exact differential being a function of 𝑦 and
𝑧, one writes

𝑑𝑥 = 𝑌𝑑𝑦 + 𝑍𝑑𝑧,
where Y = (∂x/∂y)_z and Z = (∂x/∂z)_y.

Using

∂²x/∂y∂z = ∂²x/∂z∂y,

from the last two expressions we obtain


(∂Y/∂z)_y = (∂Z/∂y)_z.
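As a concrete cross-check of one of these relations (a minimal sympy sketch using the ideal-gas expressions for one mole: p = RT/V and the entropy S(T, V) derived in section 2.5):

```python
import sympy as sp

T, V, R, CV, S0 = sp.symbols('T V R C_V S_0', positive=True)

p = R * T / V                              # ideal-gas pressure, one mole
S = CV * sp.log(T) + R * sp.log(V) + S0    # ideal-gas entropy from section 2.5

# Maxwell relation (3.18): (dS/dV)_T = (dp/dT)_V
print(sp.simplify(sp.diff(S, V) - sp.diff(p, T)))   # 0, so the relation holds
```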
Finally, it is convenient to remember three simple rules allowing one to check the correctness of a
derived Maxwell relation:

- Cross multiplication of the variables always gives forms with the dimensions of energy (TS, pV).

- Opposite pairs of variables are held constant.

- The sign is positive if p (pressure) appears on the same side of the relation as T (temperature).

3.6.2 Thermoelastic effect

Here we consider the example of surface tension. Surface tension is the elastic tendency of a
fluid surface which makes it acquire the least surface area possible. If one wants to increase
the surface area of a liquid by dA, it is necessary to perform the work dW = σ dA, where σ is the
parameter that characterizes the surface tension. How does the surface tension depend on
temperature? It means that we have to find an expression for (∂σ/∂T)_A.

- Firstly, we define the set of relevant variables. The variables must be dT and dA.

- Secondly, we write the first law of thermodynamics for our case, assuming that we
consider only reversible processes: dU = TdS + σdA.

- Thirdly, we construct a thermodynamic potential with the right set of variables (dT
and dA). In the expression for the internal energy change dU we have to substitute one variable
(from dS to dT). If the thermodynamic potential is chosen as F = U − TS, we have
the required set of variables in the potential: dF = dU − TdS − SdT = σdA − SdT.
This substitution of variables can be seen as a change of the coordinate system. In
mathematics this change of variables is called the Legendre differential
transformation.

- Finally, we derive the Maxwell relation using double differentiation. Assuming that
∂²F/∂A∂T = ∂²F/∂T∂A, one gets

(∂σ/∂T)_A = −(∂S/∂A)_T    (3.20a).

The same can also be rewritten in terms of the heat capacity at constant area, C_A = T(∂S/∂T)_A, using the reciprocity theorem:

(∂T/∂A)_S = (T/C_A)(∂σ/∂T)_A    (3.20b).
3.6.3 Pyroelectricity

Pyroelectricity is the ability of certain materials to generate a temporary voltage when they
are heated or cooled. How does the electric polarization of such a medium depend on
temperature? To answer this question we must derive an equation for (∂p/∂T)_E, where p is the
electric polarization and E is the electric field.
- Firstly, we define the set of relevant variables. The variables must be dT and dE.

- Secondly, we write the first law of thermodynamics for our case, assuming that we
consider only reversible processes: dU = E dp + T dS.

- Thirdly, we construct a thermodynamic potential with the right set of variables (dT
and dE). In the expression for the internal energy change dU we have to substitute two variables
(from dS to dT and from dp to dE). If the thermodynamic potential is chosen as
F = U − TS − pE, we have the required set of variables in the potential:
dF = dU − p dE − E dp − T dS − S dT = −p dE − S dT.

- Finally, we derive the Maxwell relation using double differentiation. Assuming that
∂²F/∂E∂T = ∂²F/∂T∂E, one gets

(∂p/∂T)_E = (∂S/∂E)_T    (3.21).
3.6.4 Magnetocaloric effect
Magnetocaloric effect is an interdependence of the thermal and magnetic properties. Can we
control the temperature T of a medium by applying a magnetic field H? What can we say about ∂T/∂H? Here
we would like to note that, in accordance with the reciprocity theorem (see Eq. 3.14),

(∂T/∂H)_S = −(∂T/∂S)_H (∂S/∂H)_T    (3.22),

where

(∂S/∂T)_H = C_H^(T)/T, or (∂T/∂S)_H = T/C_H^(T)    (3.23).

Here we employed the definition of the heat capacity C_H^(T) = (∂Q/∂T)_H = T(∂S/∂T)_H.

We have no idea about (∂S/∂H)_T in real magnets. In order to express it differently, we employ
Maxwell relations!

- Firstly, we define the set of relevant variables. The variables are dT and dH.

- Secondly, we write the first law of thermodynamics for our case, assuming that we
consider only reversible processes: dU = μ0 H dm + T dS.

- Thirdly, we construct a thermodynamic potential with the right set of variables (dT
and dH). In the expression for the internal energy change dU we have to substitute two variables
(from dS to dT and from dm to dH). If the thermodynamic potential is chosen as
F = U − TS − μ0 H m, we have the required set of variables in the potential:
dF = dU − μ0 m dH − μ0 H dm − T dS − S dT = −μ0 m dH − S dT.

- Finally, we derive the Maxwell relation using double differentiation. Assuming that $\partial^2 F/\partial H\,\partial T = \partial^2 F/\partial T\,\partial H$, one gets

  $(\partial S/\partial H)_T = \mu_0(\partial m/\partial T)_H$ (3.24).

Bringing together equations (3.22), (3.23) and (3.24), we obtain

$(\partial T/\partial H)_S = -\dfrac{T}{C_H}\,\mu_0\,(\partial m/\partial T)_H$ (3.25).

This expression shows that it is possible to control the temperature of a medium with the help of a magnetic field.
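As a rough numerical illustration of Eq. 3.25 (added to the notes, not part of the original text): treating $(\partial T/\partial H)_S$ as approximately constant over the field sweep gives $\Delta T \approx -(T/C_H)\,(\partial m/\partial T)_H\,\Delta B$, where $\mu_0\Delta H = \Delta B$. The material parameters below are illustrative, Gd-like assumptions, not measured data.

    # Order-of-magnitude magnetocaloric temperature change from Eq. 3.25.
    # All material parameters are illustrative assumptions (loosely Gd-like), not data.

    T = 294.0          # K, starting temperature (assumed, near the Curie point)
    C_H = 250.0        # J/(kg K), specific heat at constant field (assumed)
    dm_dT = -1.5       # A m^2/(kg K), slope of specific magnetization vs T (assumed)
    delta_B = 2.0      # T, change of mu0*H

    delta_T = -(T / C_H) * dm_dT * delta_B   # mu0*dH = dB, so mu0 is absorbed into delta_B
    print(f"Estimated adiabatic temperature change: {delta_T:.1f} K")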
4. THERMODYNAMICS OF LIGHT
All bodies emit electromagnetic radiation and the character of the radiation depends on how hot
the body is. Intuitively we know that a hot body is a source of light and the intensity of the
emitted light increases upon a temperature increase. In the 19th century an attempt to estimate the
temperature of the sun from analysis of the thermal radiation led to derivation of an equation
which is known today as the Stefan-Boltzmann law. Here we review the main steps of this
derivation which to a large extent relied on the concepts of thermodynamics.

Imagine that radiation is trapped in a vessel with perfectly reflecting walls. It sounds weird, but
the situation is similar to the one when the vessel is filled with an ideal gas: the gas particles
(photons) are trapped in the vessel and do not interact with each other (photons do not interact).
We first find the internal energy of such a gas.

First law of thermodynamics states

𝑑𝑈 = 𝑑𝑄 + 𝑑W

For the gas we know that 𝑑𝑊 = −𝑝𝑑𝑉, where p is the gas pressure and V is the volume of the
vessel. We also assume that we consider only reversible processes i.e. 𝑑𝑄 = 𝑇𝑑𝑆 and thus

𝑑𝑈 = 𝑇𝑑𝑆 − 𝑝𝑑𝑉.

The total internal energy obviously depends on the size of the vessel. Therefore we introduce a size-independent variable $u$ which stands for the internal energy per unit volume, $U = uV$.

After differentiation of the equation for internal energy with respect to V, one obtains

$(\partial U/\partial V)_T = T(\partial S/\partial V)_T - p$

and

$u + V(\partial u/\partial V)_T = T(\partial S/\partial V)_T - p$

Since we talk about an ideal gas and thus the particles of the gas do not interact with each other, we can state that $(\partial u/\partial V)_T = 0$.¹ To find a possible substitute for $(\partial S/\partial V)_T$, we find a Maxwell relation for it.
Using the procedure described in the previous lecture one gets:

¹ If particles do not interact, the internal energy is given by the sum of the kinetic energies of the particles. If $m$ is the mass of a single particle and $\nu$ is the average speed of the particles, for the internal energy one finds $U = \frac{m\nu^2}{2}\,nV$, where $n$ is the concentration of the particles in the gas. The internal energy per unit volume does not depend on the volume: $u = U/V = n\,\frac{m\nu^2}{2}$ and $(\partial u/\partial V)_T = 0$.
$(\partial S/\partial V)_T = (\partial p/\partial T)_V$

Therefore we find that


$u = T(\partial p/\partial T)_V - p$ (4.1)

Using the kinetic theory (see “molecular model of an ideal gas” from Physics for Scientists and
Engineers by R. A. Serway and J. W. Jewett), one can show that

$p = \frac{1}{3}u$.
In this way Eq.4.1 turns into

$u = \frac{T}{3}(\partial u/\partial T)_V - \frac{u}{3}$

or

$4u = T(\partial u/\partial T)_V$

Separating the variables ($du/u = 4\,dT/T$) and integrating, one obtains

𝑢 = Β𝑇 4 +const (4.2).

The gas is ideal, so at zero temperature ($T = 0$ K) the internal energy must also be zero and the integration constant vanishes. Therefore we can state that for such an ideal gas, i.e. for electromagnetic radiation in a vessel,

𝑢 = Β𝑇 4 (4.3).

Now we take a few more expressions from the kinetic theory of gases and find a relation between the internal energy stored in the vessel and the ability of the vessel to emit light. According to the kinetic theory, the number of gas particles (atoms or molecules) striking a unit area of the vessel wall per second is given by

$N = \frac{1}{4}nc$,

where $n$ is the concentration and $c$ is the mean speed of the particles. If the photon energy is equal to $\eta$, the average power incident on a unit area of the vessel wall is

$P = \eta N = \frac{1}{4}\eta n c = \frac{1}{4}uc$.

If a small hole of area $dA$ is cut in the wall, the energy escaping the container per second is

$dP = \frac{1}{4}uc\,dA$ (4.4).

The rest of the derivation is a question of definitions. We now introduce:

- Spectral density $u_\lambda$. This quantity is defined such that $u_\lambda\,d\lambda$ is the energy contained in radiation in the wavelength range between $\lambda$ and $\lambda + d\lambda$;
- Spectral absorptivity of a surface $\alpha_\lambda$, which is the absorbed fraction of the incident radiation at the wavelength $\lambda$;
- Spectral emissive power of a surface $e_\lambda$, a quantity defined such that $e_\lambda\,d\lambda$ is the power emitted in the form of radiation per unit area of the surface in the wavelength range between $\lambda$ and $\lambda + d\lambda$.

Figure 4.1. Two reservoirs of energy interconnected with a channel allowing energy transfer in both directions. We assume that energy carriers (photons) cannot change their wavelengths. To account for this, in the model we insert a filter (blue rectangle) in the channel which allows photon transfer between the reservoirs only in the range of wavelengths limited to $d\lambda$. Thermodynamic equilibrium between the reservoirs thus implies $u_\lambda^{(A)} = u_\lambda^{(B)}$.

We assume that the emitting body is in equilibrium with the environment. In this case the
environment and the body can be represented by two reservoirs: A and B. To account for mutual
interactions between the body and the environment, we interconnect the reservoirs with a
channel allowing energy flow in both directions. The equilibrium of the reservoirs implies that
the internal energies per unit volume of the reservoirs are mutually equal. Assuming that the energy of a photon cannot be changed, we obtain that $u_\lambda^{(A)} = u_\lambda^{(B)}$. According to the definition given above, the energy emitted by the body per second in the range of wavelengths $d\lambda$ is $e_\lambda\,d\lambda$. According to Eq. 4.4, the energy coming to the emitting body from the environment per second in the range of wavelengths $d\lambda$ is $\frac{1}{4}c\,u_\lambda^{(B)}\,d\lambda$. Thermodynamic equilibrium implies the equality

$e_\lambda\,d\lambda = \alpha_\lambda\,\frac{1}{4}c\,u_\lambda^{(B)}\,d\lambda$.

Taking into account $u_\lambda^{(A)} = u_\lambda^{(B)}$ one obtains

$e_\lambda\,d\lambda = \alpha_\lambda\,\frac{1}{4}c\,u_\lambda^{(A)}\,d\lambda$.

For an absolutely black body we have $\alpha_\lambda = 1$ and thus

$e_\lambda = \frac{1}{4}c\,u_\lambda^{(A)}$.
Using Eq. 4.3 and integrating over all wavelengths, we arrive at an expression for the total power radiated per unit area by an absolutely black body:

$e = \dfrac{c}{4}u = \dfrac{c\,\mathrm{B}}{4}T^4 = \sigma T^4$ (4.5),

where 𝜎 is Stefan’s constant. Josef Stefan found this law analyzing experimental data and
estimated the temperature of the Sun's surface. This was the first sensible value for the
temperature of the Sun. Five years later Ludwig Boltzmann explained the law theoretically.
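As an illustration of Eq. 4.5 (added to the notes, in the spirit of Stefan's original analysis): the temperature of the Sun's surface can be estimated from the measured solar irradiance. The constants below are rounded textbook values.

    import math

    # Estimate of the Sun's surface temperature from e = sigma * T^4 (Eq. 4.5).
    sigma = 5.670e-8      # W m^-2 K^-4, Stefan's constant
    S = 1361.0            # W m^-2, solar irradiance at the Earth's orbit
    d = 1.496e11          # m, Sun-Earth distance
    R_sun = 6.96e8        # m, radius of the Sun

    # Total power emitted by the Sun = irradiance spread over a sphere of radius d.
    P_sun = S * 4 * math.pi * d**2
    # Power per unit area of the Sun's surface, then invert e = sigma * T^4.
    e = P_sun / (4 * math.pi * R_sun**2)
    T_sun = (e / sigma) ** 0.25
    print(f"Estimated surface temperature of the Sun: {T_sun:.0f} K")   # ~5800 K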

Figure 4.2. Front pages of the original papers of Josef Stefan and Ludwig Boltzmann reporting
about the law known today as the Stefan-Boltzmann law.

5. THERMODYNAMICS OF PHASE TRANSITIONS


5.1 Introduction
Equilibrium of ice and water is probably the best-known example of the coexistence of two phases of the very same compound. By changing the temperature of a mixture of ice and water, or the atmospheric pressure, we can change the ratio between the masses of water and ice. Such a change of the ratio, i.e. mass transfer between two states of matter, is called a phase transition. A substance which is physically and chemically homogeneous is considered a single phase. Different states of aggregation, such as gas, liquid, or different crystalline states, are different phases. The transition of matter from one phase to another is called a phase transition. Consider two coexisting phases (ice and water). Coexistence means that the phases are in thermodynamic equilibrium. How does the phase boundary change if we change pressure or temperature?
We consider the mixture in thermodynamic equilibrium. Pressure p and temperature T are two
quantities discussed in the problem. If the phases are in thermodynamic equilibrium,
temperatures of the phases are equal( 𝑇1 = 𝑇2 ) and the phases act on the phase boundary with
equal pressures (𝑝1 = 𝑝2). Therefore, it is natural to choose Gibbs function as thermodynamic
potential in our model. The Gibbs function of the mixture is

𝐺 = 𝑚1 𝑔1 + 𝑚2 𝑔2 (5.1),

where 𝑚1 , 𝑚2 are the masses of phase 1 and phase 2. 𝑔1 , 𝑔2 are specific Gibbs functions of the
phases. The specific Gibbs function of the mixture is thus 𝑔 = 𝐺/(𝑚1 + 𝑚2 ). The fact of
thermodynamic equilibrium as well as fixed p and T imply that 𝑑𝐺 = 0, 𝑑𝑔1 = 0 and 𝑑𝑔2 = 0.
Therefore,

𝑑𝐺 = 𝑔1 𝑑𝑚1 + 𝑔2 𝑑𝑚2 (5.2).

The total mass is conserved and 𝑑𝑚1 + 𝑑𝑚2 = 0. It means that from Eq. 5.2 we obtain that

𝑔1 = 𝑔2 (5.3).

It means that in thermodynamic equilibrium the phases have equal specific Gibbs functions. If the line of equal specific Gibbs functions is crossed, one of the phases expands and the other one shrinks. The expanding phase is the one with the smaller specific Gibbs function.

5.2 Ehrenfest’s classification of phase transitions


There are plenty of different phase transitions: solid-liquid, liquid-gas, ferromagnet-paramagnet, superconductor-metal. Ehrenfest suggested an elegant way to classify all these phase transitions. Consider two coexisting phases. In thermodynamic equilibrium of two phases one finds $g_1 = g_2$. It means that if we consider 3D graphs $g_1(p, T)$ and $g_2(p, T)$, the specific Gibbs functions of the two phases would correspond to two surfaces. The intersection of the surfaces corresponds to the phase boundary. Although at the phase boundary $g_1 = g_2$, $g_1(p, T)$ and $g_2(p, T)$ are different surfaces. Ehrenfest suggested assigning an order to each phase transition. The order of a transition is defined as the order of the lowest differential of the Gibbs function or other thermodynamic potential which shows a discontinuity at the transition.
Figure 5.1. Specific Gibbs functions of two phases. Intersection of the planes corresponds to a
first-order phase transition.

Figure 5.1 represents a first order phase transition. In thermodynamic equilibrium of two phases, the specific Gibbs functions are equal ($g_1 = g_2$), but their first derivatives are not ($\partial g_1/\partial T \neq \partial g_2/\partial T$ and $\partial g_1/\partial p \neq \partial g_2/\partial p$). This is why it is a first order phase transition. From the definition of the Gibbs function one can easily see that in the case of first order phase transitions crossing the boundary of equal specific Gibbs functions is accompanied by a step-like change in entropy ($\Delta S$) and volume ($\Delta V$).

The line of equal specific Gibbs functions, i.e. intersection between the planes corresponding to
𝑔1 (𝑝, 𝑇) and 𝑔2 (𝑝, 𝑇), can be projected onto pT-plane (see Fig. 5.2).

Figure 5.2. Phase boundary projected onto pT-plane.


Consider two points a and b at the phase boundary. Passing from a to b will affect the specific
Gibbs functions and cause the following changes:
$dg_1 = (\partial g_1/\partial p)_T\,dp + (\partial g_1/\partial T)_p\,dT = v_1\,dp - s_1\,dT$

and

$dg_2 = (\partial g_2/\partial p)_T\,dp + (\partial g_2/\partial T)_p\,dT = v_2\,dp - s_2\,dT$,

where $v_1, v_2$ are the volumes of the phases per unit mass (specific volumes) and $s_1, s_2$ are the entropies of the phases per unit mass (specific entropies). Everywhere along the phase boundary $g_1 = g_2$, meaning that $dg_1 = dg_2$. As a result one obtains $v_1\,dp - s_1\,dT = v_2\,dp - s_2\,dT$. For a given mass of the substance it also means that $V_1\,dp - S_1\,dT = V_2\,dp - S_2\,dT$ and

$\dfrac{dp}{dT} = \dfrac{\Delta S}{\Delta V} = \dfrac{L}{T\,\Delta V}$ (5.4),

where ∆𝑆 is the change in entropy and ∆𝑉 is the change in volume on passing across the phase
boundary. L is the latent heat i.e. heat absorbed or rejected upon the phase transition. Equation
5.4 is the Clausius-Clapeyron equation.
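As a quick numerical check of Eq. 5.4 (an illustration added to the notes): the shift of the boiling temperature of water with pressure, using rounded textbook values for the latent heat and specific volumes at 100 °C and 1 atm.

    # Clausius-Clapeyron estimate of dT/dp for the water-vapour transition at 1 atm:
    # dp/dT = L / (T * dv)  ->  dT/dp = T * dv / L, with rounded textbook values.

    T = 373.15           # K, normal boiling point of water
    L = 2.26e6           # J/kg, latent heat of vaporisation (rounded)
    v_vapour = 1.67      # m^3/kg, specific volume of steam at 100 C, 1 atm (rounded)
    v_liquid = 1.04e-3   # m^3/kg, specific volume of liquid water at 100 C

    dT_dp = T * (v_vapour - v_liquid) / L   # K/Pa
    print(f"dT/dp ~ {dT_dp:.2e} K/Pa, i.e. ~{dT_dp*1e3:.2f} K per kPa of extra pressure")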

A first order phase transition is characterized by an abrupt change of volume, because $(\partial g_1/\partial p)_T \neq (\partial g_2/\partial p)_T$, and by an abrupt change of entropy. It can also be seen that first order phase transitions are characterized by a divergence of the heat capacity $C = \frac{dQ}{dT} = \frac{T\,dS}{dT} \to \infty$, by the coexistence of phases and by hysteresis.

What is hysteresis? An example of hysteresis was shown in Fig. 3.3. In the case of the first order phase transition from water to vapor one can observe temperature hysteresis in the dependence of the mass density on temperature. It means that upon heating up, the transition from water to vapor occurs at a higher temperature than the transition from vapor to water upon cooling down. Thermodynamics predicts that if there is a boundary between two phases (gas and water), a temperature increase will promote expansion of the gas phase and shrinking of the liquid phase. However, if one takes a container with a perfectly clean surface and starts to increase the temperature slowly, making sure that bubbles are not formed, the water can still be in the liquid phase even if the liquid is characterized by a larger specific Gibbs function than the gas phase. Only when the first bubble, and thus a phase boundary, is formed, the bubble (gas phase) starts to expand, and the larger the difference between the specific Gibbs functions of the two phases, the faster the expansion is. One may like to learn about overheated water
(https://www.youtube.com/watch?v=2FcwRYfUBLM) and supercooled water
(https://www.youtube.com/watch?v=pTdiTe3x0Bo), which are examples of temperature hysteresis during a first order phase transition.
5.3 First order phase transition in ferromagnets
Magnetic hysteresis is also a signature of first order phase transition. In this case two
coexisting phases are the states with two opposite orientations of the magnetization. The
thermodynamic potential relevant for this case is 𝐹 = 𝑈 − 𝑇𝑆 − 𝜇0 𝐦𝐇 (see Eq. 3.1, where m
is the magnetic moment). For the case of parallel m and H as well as for fixed T and H one
gets 𝑑𝐹 = 𝑑𝑈 − 𝑇𝑑𝑆 − 𝜇0 𝐻𝑑𝑚 (see Eq. 3.2). At the phase boundary separating two states
with opposite magnetizations, i.e. at the domain wall, $(\partial F_1/\partial H)_T \neq (\partial F_2/\partial H)_T$, meaning that $m_1 \neq m_2$, i.e. at the domain wall the magnetic moment $m$ experiences a discontinuity.

Figure 5.3. Coexistence of phases (magnetic domains with magnetizations "up" and "down") and the manifestation of this coexistence in a magnetic hysteresis loop. http://labfiz.uwb.edu.pl/lab/magmicroscope/?page_id=45&lang=en

5.4 Second order phase transitions


For second order phase transitions one may also derive an equation similar to Eq. 5.4. Assume that $\partial g_1/\partial T = \partial g_2/\partial T$ ($s_1 = s_2$) and $\partial g_1/\partial p = \partial g_2/\partial p$ ($v_1 = v_2$).

First we consider the equality of entropies. For the entropy we know that $ds = (\partial s/\partial T)_p\,dT + (\partial s/\partial p)_T\,dp$. If temperature or pressure changes and if the phases are still in mutual equilibrium, the changes of the entropies of the two phases are also equal ($ds_1 = ds_2$). Therefore

$(\partial s_1/\partial T)_p\,dT + (\partial s_1/\partial p)_T\,dp = (\partial s_2/\partial T)_p\,dT + (\partial s_2/\partial p)_T\,dp$.

Introducing the specific heat $c_p = C_p/\mathrm{mass} = \frac{1}{\mathrm{mass}}\frac{dQ_p}{dT}$, the isobaric cubic expansivity $\beta = \frac{1}{V}(\partial V/\partial T)_p$ and applying the Maxwell relation $(\partial s/\partial p)_T = -(\partial v/\partial T)_p$, we obtain

$\dfrac{c_{p1}}{T}\,dT - (\partial v_1/\partial T)_p\,dp = \dfrac{c_{p2}}{T}\,dT - (\partial v_2/\partial T)_p\,dp$

or

$\dfrac{dp}{dT} = \dfrac{c_{p2} - c_{p1}}{vT(\beta_2 - \beta_1)}$ (5.5a).
Starting from the other equality for a second order phase transition, $\partial g_1/\partial p = \partial g_2/\partial p$ ($v_1 = v_2$), and introducing the isothermal compressibility $\kappa = -\frac{1}{V}(\partial V/\partial p)_T$, we obtain

$\dfrac{dp}{dT} = \dfrac{\Delta\beta}{\Delta\kappa} = \dfrac{\Delta c_p}{vT\,\Delta\beta}$ (5.5b).

Equations (5.5a) and (5.5b) are called Ehrenfest’s equations. In the spirit of Ehrenfest’s
classification Table 5.1 summarizes behavior of physical quantities in the vicinity of the phase
transitions.

It is important to note that the diagram shown in Fig. 5.1 is unfortunately not applicable to second order phase transitions. If $(\partial g_1/\partial T)_p = (\partial g_2/\partial T)_p$ and $(\partial^2 g_1/\partial T^2)_p > (\partial^2 g_2/\partial T^2)_p$, we always have $g_2 < g_1$, meaning that the system must always stay in the phase with $g_2$.

Note that according to Ehrenfest there must be phase transitions of third, fourth and higher
order. However, soon after Ehrenfest proposed his classification, it was realized that it is
rather incomplete.
Table 5.1. Behavior of physical quantities in the vicinity of phase transitions of first and second
order according to the classification of Ehrenfest.

5.5 Modern classification of phase transitions


The classification of Ehrenfest is logical and can be convenient, but in practice it turns out to be incomplete. This can be seen using the example of the phase transition from the ferromagnetic to the paramagnetic state. Such a transition was considered in paragraph 3.2. Substituting Eq. 3.7 into the expression for the thermodynamic potential $F$ (Eq. 3.6) and differentiating, we find that $\partial F_1/\partial T = \partial F_2/\partial T$. It means that this is not a first order phase transition. At the same time, according to Table 5.1 the phase transition does have signatures typical of transitions of first order, because the magnetic susceptibility $\chi = (\partial M/\partial H)_T = (\partial^2 F/\partial H^2)_T$ diverges at the transition point (see paragraph 3.2, which shows that $\chi = \frac{\mu_0}{2a(T - T_C)}$). In short, although the classification of Ehrenfest works well for transitions of first order, its description of other (non-first order) phase transitions is incomplete.

For a more general classification, it was decided to consider only the behavior of the first
derivative of thermodynamic potential. Based on it we can distinguish discontinuous and
continuous phase transitions:

a) Discontinuous phase transitions are characterized by a jump in the first derivative of


thermodynamic potential at the phase transition. The transitions are thus equivalent to
first order phase transitions in the classification of Ehrenfest.
b) Continuous phase transitions are such that the first derivative of thermodynamic
potential upon crossing the phase boundary changes continuously. Phenomenological
theory of continuous phase transitions was developed by L. Landau (see Fig. 5.3).
Figure 5.3. Original article of L. Landau on continuous phase transitions.

The phase transition from the ferromagnetic to the paramagnetic state is continuous. It can be confusing, but it is now conventionally accepted to call discontinuous and continuous phase transitions transitions of first and second order, respectively.
6. CHEMICAL POTENTIAL. THERMODYNAMICS OF DIFFUSION.
THERMODYNAMICS OF CHEMICAL REACTIONS
After thermodynamics of phase transitions, where we discussed equilibrium between different
phases of the same compound, we would like to make the next step and develop
thermodynamic theory of equilibria between different substances. We start with equilibrium
of substances which can get mixed but do not combine chemically. It means that the
constituting substances retain their original properties and can be physically separated. In
short, we will talk about mixtures. Diffusion is one of the key phenomena in every mixture.
Can we understand diffusion thermodynamically?

6.1. Gibbs paradox

We consider a mixture of two ideal gases: two different gases in a box separated by a partition (see Fig. 1). The gases have the same temperature ($T$) and pressure ($p$). The numbers of moles of the gases are $n_1$ and $n_2$, respectively. $V_1$ and $V_2$ are the volumes occupied by the gases. As was shown in Chapter 2, the entropy of 1 mole of an ideal gas is equal to
equal to

𝑆(𝑇, 𝑉) = 𝐶𝑉 ln 𝑇 + 𝑅 ln 𝑉 + 𝑐𝑜𝑛𝑠𝑡,

where const is the constant of integration and 𝐶𝑉 is the molar specific heat of the gas at
constant volume. If the partition is removed, the gases diffuse. The total entropy change after
the diffusion is equal to
$\Delta S = \Delta S_1 + \Delta S_2 = n_1 R\ln\dfrac{V_1 + V_2}{V_1} + n_2 R\ln\dfrac{V_1 + V_2}{V_2}$ (6.1)

It is seen that as a result of the diffusion the entropy increases. The fact of such an entropy
increase could also be predicted, because both gases expand and diffuse irreversibly. It is not
a surprise that the diffusion of different gases leads to an increase of entropy. We will call
this entropy change entropy of mixing.
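A minimal numerical sketch of Eq. 6.1 (added here for illustration): one mole of each of two different ideal gases, initially occupying equal volumes, gives the familiar value $2R\ln 2$.

    import math

    # Entropy of mixing from Eq. 6.1 for two different ideal gases,
    # n1 = n2 = 1 mol, initially in equal volumes V1 = V2.

    R = 8.314          # J/(mol K), gas constant
    n1, n2 = 1.0, 1.0  # mol
    V1, V2 = 1.0, 1.0  # arbitrary equal volumes (only the ratios matter)

    dS = n1 * R * math.log((V1 + V2) / V1) + n2 * R * math.log((V1 + V2) / V2)
    print(f"Entropy of mixing: {dS:.1f} J/K")   # 2R ln 2 ~ 11.5 J/K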

Suppose now that we have filled the two volumes with the same gas. The equation implies a paradoxical increase of the entropy upon removing the partition. We know, however, that removing the partition between two volumes filled with the very same gas, at the very same pressure and temperature, will not launch any irreversible processes. This contradiction is now called Gibbs' paradox, after J. W. Gibbs who proposed this thought experiment in 1875.

Resolving the paradox does not seem to be a difficult problem today. Equation (6.1) was
derived for the case of different i.e. distinguishable gases. People may argue that if the gases
become more and more alike, upon the gradual disappearance of differences the entropy of
mixing should experience a paradoxical jump. To resolve the paradox it is enough to realize
that a gradual change from identical to different gases is not possible. Today, with all
available knowledge of atomic structure, it is obvious that there is no continuous transition
from hydrogen to helium, for instance. In the 19th century, i.e. before the discovery of
electron and the development of quantum mechanics, interpretation of such a thought
experiment was often a subject of intense debates.
Figure 1. The thought experiment demonstrating Gibbs’ paradox.

6.2 Chemical potential

In this paragraph we aim to understand how to account for diffusion in thermodynamic theory
of mixtures. We again consider a container separated by a partition. Both parts of the
container are filled with the very same gas. The gas in these two parts of the container has the
very same temperature. The left part having the volume 𝑉1 contains 𝑁1 of gas particles. The
right part with the volume 𝑉2 contains 𝑁2 gas particles. Two parts are interconnected with a
valve facilitating diffusion of the gas particles in both directions. If $G$ is the Gibbs function of the gas ($G = U - TS + pV$, see Chapter 3), it is clear that $G = G_1 + G_2$, where $G_1$ and $G_2$ are the Gibbs functions of the gas in the left and the right parts of the container, respectively. As a result of diffusion the total number of the particles does not change and $dN_1 = -dN_2$.
Naturally, the Gibbs function 𝐺1 is a function of 𝑁1 and 𝐺2 is a function of 𝑁2 . We notice
that

$\Delta G = (\partial G_1/\partial N_1)_T\,\Delta N_1 + (\partial G_2/\partial N_2)_T\,\Delta N_2$

Using the fact that ∆𝑁1 = −∆𝑁2, the equation can be written with less variables i.e.

$\Delta G = (\partial G_1/\partial N_1)_T\,\Delta N_1 - (\partial G_2/\partial N_2)_T\,\Delta N_1$

If the mixture reaches thermodynamic equilibrium, the Gibbs function should be at the
minimum i.e. the equilibrium requires that 𝑑𝐺 = 0 or ∆𝐺 = 0. Therefore at equilibrium one
should observe that
$(\partial G_1/\partial N_1)_T = (\partial G_2/\partial N_2)_T$ (6.2).

The derivative of the thermodynamic potential with respect to the number of particles is
called the chemical potential. More particularly, the chemical potential for 𝑁 gas particles at
the temperature 𝑇 with the Gibbs function 𝐺 is defined as
$\mu(T, N) = (\partial G/\partial N)_T$ (6.3).
From Eqs. 6.2 and 6.3 it is seen that the equality of chemical potentials expresses the
condition for diffusive equilibrium

𝜇1 = 𝜇2 (6.4).

The chemical potential of an ensemble of particles shows how the Gibbs function changes
upon adding a single particle to the ensemble. In order to explicitly emphasize this
dependence, the earlier introduced definitions of thermodynamic potentials must be slightly
changed. For instance, if we have a mixture of several gases, the changes of the total internal energy and the total Gibbs function are written as $dU = -p\,dV + T\,dS + \sum_i \mu_i\,dN_i$ and $dG = V\,dp - S\,dT + \sum_i \mu_i\,dN_i$, where $\mu_i$ is the chemical potential of the i-th gas. The expressions for the other thermodynamic potentials are extended accordingly by adding the $\sum_i \mu_i\,dN_i$ term.

With the help of chemical potential we enrich thermodynamic theory. It is seen that a
difference in chemical potentials acts as a driving force for the transfer of particles just as a
difference in temperature acts as a driver for a transfer of energy. In short, with the help of
chemical potentials diffusion can be understood thermodynamically.

As an exercise, one can calculate chemical potential for the case of ideal gas. We consider a
mixture of several gases at the temperature 𝑇 with the total pressure 𝑝. Each 𝑖-th gas is an
ideal gas under pressure 𝑝𝑖 . If 𝑛𝑖 is the number of moles of the i-th gas, we can also define the
concentration of the i-th component as
$c_i = \dfrac{n_i}{\sum_i n_i}$.

It is clear that

$c_i = \dfrac{n_i}{\sum_i n_i} = \dfrac{p_i}{p}$ (6.5).

From the definition of the Gibbs function ($G = U - TS + pV$ and $dG = -S\,dT + V\,dp$), it is seen that

$(\partial G_i/\partial p_i)_T = V_i$,

where $V_i$ is the volume occupied by the i-th gas. For $n_i$ moles of the i-th ideal gas we have $V_i = n_i\dfrac{RT}{p_i}$ and

$(\partial G_i/\partial p_i)_T = n_i\dfrac{RT}{p_i}$.

Integrating the last expression, for $n_i$ moles of the i-th ideal gas we obtain

$G_i(T, p_i) = G_{0i}(T) + n_i RT\ln p_i$,

where $G_{0i}(T)$ is the constant of integration. This part of the Gibbs function does not depend on the pressure $p_i$. The same expression can be written differently:

$G_i(T, p_i) = G_{0i}(T) + n_i RT\ln p + n_i RT\ln c_i$.

In an ideal gas the particles do not interact. Therefore, the Gibbs function of an ideal gas is the sum of the Gibbs potentials of the particles, or simply the chemical potential times the number of particles. If the gas has $N_i$ particles, one can find this number from $N_i = n_i N_A$, where $N_A$ is the Avogadro constant. Therefore, the chemical potential of the i-th ideal gas is

$\mu_i(T, p_i) = \dfrac{G_{0i}(T) + RT\ln p + RT\ln c_i}{N_A}$.

The same expression can be written in a simpler form

$\mu_i(T, p_i) = \mu_{0i}(p, T) + kT\ln c_i$ (6.6),

where $k$ is the Boltzmann constant ($k = R/N_A$) and $\mu_{0i}(p, T)$ is the part of the chemical potential which depends only on the total pressure and the temperature.

6.3 Examples of employing the concept of chemical potential in physics problems

6.3.1 Semiconducting p-n junction

The best way to understand chemical potential is to discuss diffusive equilibrium in the
presence of a step of potential energy. Here we use a semiconductor, such as silicon (Si) or
germanium (Ge), as an example. Using ion implantation it is possible to change electrical
properties of these media. For instance, at low temperatures pure Si does not conduct electric
current because of the lack of mobile charge carriers. Implanting a single ion of phosphorus
(P) into Si it is possible to create a single mobile electron in the semiconductor. Implanting a
single atom of Boron (B) takes one immobile electron from a Si atom and in this way creates
a vacancy in the ensemble of Si electrons. This vacancy is called “hole”. The hole can be seen
as a mobile positively charged particle. Such an implantation is called doping. Therefore
using dopants (i.e. implanted ions) it is possible to fabricate semiconductors with high
concentration of excess mobile electrons (n-type semiconductor) and holes (p-type
semiconductor). Bringing n-type and p-type semiconductors in contact with each other results
in a phenomenon called p-n junction. In such a junction free electrons and holes will diffuse.
Excess electrons will diffuse from the n-type semiconductor into the p-type semiconductor,
while the holes will diffuse into the opposite direction. Upon the diffusion of the electrons and
the holes, the p-type semiconductor charges negatively, while the n-type semiconductor
acquires a positive charge. In this way at the interface of the n- and p-type semiconductors the
diffusion builds up an electric field corresponding to the electric voltage ∆𝑉. When the
electric voltage becomes too large, the diffusion stops. The phenomenon can be easily
understood in terms of chemical potentials. Thermodynamically, the diffusion of electrons from the n-type to the p-type semiconductor is launched by the initial difference of the chemical potentials of the electron gas in these two types of semiconductors, $\Delta\mu_{in}$. At the moment when the diffusion stops, the chemical potentials are equalized and the final difference between the chemical potentials is zero, $\Delta\mu_{fin} = 0$. This occurs because the initial difference between the chemical potentials gets compensated by an increase of the potential energy of a mobile electron in the p-type semiconductor, i.e. $\Delta\mu_{in} = |q|\Delta V$, where $q$ is the charge of the mobile particle (electron). In short, the chemical potential in thermodynamics plays a role analogous to the potential energy in mechanics.
Figure 2. (a) Physical arrangement of a p–n junction. (b) Internal electric field magnitude
versus x for the p–n junction. (c) Internal electric potential difference V versus x for the p–n
junction (the figure is taken from R. A. Serway, J. W. Jewett, Physics for Scientists and
Engineers with Modern Physics (eighth edition).

6.3.2 Finding the variation of the concentration of oxygen with altitude

Another easy example of diffusive equilibrium between systems in different external


potentials is the equilibrium between layers at different heights of the Earth’s atmosphere.
The chemical potential of the oxygen can be written as the chemical potential of an ideal gas (Eq. 6.6) with the gravitational potential energy $Mgh$ added:

$\mu_i(T, p_i) = \mu_{0i}(p, T) + kT\ln(c_i) + Mgh$,

where $M$ is the mass of an oxygen molecule, $g$ is the gravitational acceleration and $h$ is the height.
Thermodynamic equilibrium between layers of air at the height ℎ and at the surface of the
Earth ℎ = 0 implies the equality of the chemical potentials for the layers

𝑘𝑇𝑙𝑛(𝑐𝑖 (ℎ)) + 𝑀𝑔ℎ = 𝑘𝑇𝑙𝑛(𝑐𝑖 (0)),

where 𝑐𝑖 (ℎ) and 𝑐𝑖 (0) are the concentrations of the oxygen at the height ℎ and 0, respectively.
Assuming that the layers have the same temperature, we get
$\ln\dfrac{c_i(h)}{c_i(0)} = -\dfrac{Mgh}{kT}$,

or, after converting this logarithmic equation into an exponential one,

$c_i(h) = c_i(0)\exp\left(-\dfrac{Mgh}{kT}\right)$.

The expression is also known as the Boltzmann distribution.
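A short numerical illustration of the Boltzmann distribution derived above (added to the notes): the oxygen concentration at the altitude of Mount Everest relative to sea level, assuming for simplicity an isothermal atmosphere; the chosen temperature is an assumption.

    import math

    # Oxygen concentration ratio c(h)/c(0) from the Boltzmann distribution,
    # assuming an isothermal atmosphere (a simplification).

    k = 1.381e-23            # J/K, Boltzmann constant
    m_O2 = 32 * 1.66e-27     # kg, mass of one O2 molecule
    g = 9.81                 # m/s^2
    h = 8848.0               # m, altitude of Mount Everest
    T = 260.0                # K, assumed mean temperature of the air column

    ratio = math.exp(-m_O2 * g * h / (k * T))
    print(f"c(h)/c(0) ~ {ratio:.2f}")   # roughly 0.28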

6.4 Phase transitions in mixtures. Gibbs phase rule

In this paragraph we will develop thermodynamic theory of equilibrium in mixtures. In order


to account for chemically different constituting substances we introduce the term of
“component”.

Components are chemically different and chemically independent parts of a mixture. Water is just one component. A solution of salt in water is a two-component system. If a part of the salt in the solution crystallizes, the salt can be found either in the liquid or in the solid phase.

We consider thermodynamic equilibrium in a mixture which consists of $C$ components, where each of the components can be in $R$ different phases. In the theory of phase transitions discussed in the previous chapter one can also use chemical potentials instead of specific Gibbs functions. If $\mu_i^{(k)}$ is the chemical potential of the i-th component in the k-th phase, the condition of equilibrium between phases "1" and "2" of the i-th component will be given by the equality of the chemical potentials of these two phases.

If we write the condition of equilibrium for each of the components, we obtain $C(R-1)$ equations:

$\mu_1^{(1)} = \mu_1^{(2)} = \dots = \mu_1^{(R)}$,
$\mu_2^{(1)} = \mu_2^{(2)} = \dots = \mu_2^{(R)}$,
.............................................,
$\mu_C^{(1)} = \mu_C^{(2)} = \dots = \mu_C^{(R)}$ (6.7).

Each of the chemical potentials in this system depends on the temperature of the mixture $T$ and on the pressure $p$. These are two independent parameters. Moreover, the chemical potential depends on the concentration of the i-th component in the k-th phase. The equilibrium described by Eqs. 6.7 implies that there is no transfer of particles between different phases of the same component. It means that the masses of the phases are fixed and, if the concentration of the i-th component in the k-th phase is defined as

$x_i^{(k)} = \dfrac{N_i^{(k)}}{\sum_i N_i^{(k)}}$,

where $N_i^{(k)}$ is the number of particles of the i-th component in the k-th phase, it is seen that not all the concentrations are mutually independent. It follows from the fact that

$\sum_i x_i^{(k)} = 1$.

Therefore in each of the $R$ phases we will find $C - 1$ mutually independent concentrations. Such a mixture has in total $(C - 1)R + 2$ independent variables, where the "+2" stands for pressure and temperature.

To summarize, a mixture of 𝐶 components and 𝑅 phases has (𝐶 − 1)𝑅 + 2 independent


parameters and in thermodynamic equilibrium these parameters must obey 𝐶(𝑅 − 1)
equations.

The system of equations has solutions only if the number of equations does not exceed the
number of parameters (𝐶 − 1)𝑅 + 2 ≥ 𝐶(𝑅 − 1) or

𝑅 ≤𝐶+2 (6.8a)

This condition says that for any mixture which consists of 𝐶 components, the number of
phases that can maintain mutual equilibrium is equal to 𝐶 + 2 or less. This expression is
called the Gibbs phase rule. In particular, the equation shows that although helium can exist in four different phases, at most three of those phases can simultaneously be in equilibrium with each other. The very same phase rule is often given in a different form

𝐷𝑜𝐹 = 2 + 𝐶 − 𝑅 (6.8b)

where 𝐷𝑜𝐹 is the number of degrees of freedom i.e. parameters that can be freely changed
without destroying the equilibrium.

For instance, we can consider a solution of KCl and NaCl in water. The solution (liquid
phase) can be in an equilibrium with the gas phase (vapor) and three different solid phases can
also emerge (KCl crystals, NaCl crystals and ice). How many phases of this mixture can be in
equilibrium at the same time? In the problem we have 3 components (KCl, NaCl and water).
Applying the Gibbs’ phase rule we obtain 𝑅 ≤ 𝐶 + 2 i.e. the number of phases which can
simultaneously be found in equilibrium is no more than 5.

6.5 Thermodynamics of chemical reactions

Finally, we discuss thermodynamic theory of chemical reactions. In particular, we focus on a


chemical reaction that can go both ways i.e. directly (from-left-to-right) or inversely (from-
right-to-left). An example of such a reaction is

3H2 + N2 ↔ 2NH3
When the speeds of the direct and the inverse reactions are the same, the chemical equilibrium
is reached and the masses of the participants of the reaction do not change anymore. The
reaction effectively stops. Can we find the condition for the equilibrium?

We express the equilibrium between the compounds participating in the reaction in the following form (this is not a chemical reaction itself, but a way to state that all these compounds are in equilibrium):

2NH3 − 3H2 − N2 = 0

Or being more general

$\sum_i \nu_i A_i = 0$

where 𝐴𝑖 is a symbol for the i-th chemical compound participating in the reaction and 𝜈𝑖 is the
coefficient showing how many molecules of the i-th compound participate in the reaction. If
𝜈𝑖 is positive, it means that this substance is produced as a result of the direct reaction. If 𝜈𝑖 is
negative, it means that this substance is consumed as a result of the direct reaction. In the
example given above 𝜈𝑁𝐻3 = 2, 𝜈𝐻2 = −3 , 𝜈𝑁2 = −1. If the reaction proceeds at the given
pressure 𝑝 and temperature 𝑇, it is convenient to work further with the Gibbs function.

$dG = V\,dp - S\,dT + \sum_i \mu_i\,dN_i$

Thermodynamic equilibrium between the direct and the inverse reaction implies that 𝑑𝐺 = 0
and at constant pressure and temperature it means that ∑𝑖 𝜇𝑖 𝑑𝑁𝑖 =0. After one act of the
reaction we obtain 𝑑𝑁𝑖 = 𝜈𝑖 . Therefore the condition for equilibrium for a chemical reaction
states that

∑𝑖 𝜈𝑖 𝜇𝑖 = 0 (6.9).

We assume that the participants of the reaction are ideal gases. As it was shown above (see
paragraph 6.2), for the Gibbs function of 1 mole of ideal gas in a mixture we have

𝐺𝑖 (𝑇, 𝑝𝑖 ) = 𝐺0𝑖 (𝑇) + 𝑅𝑇𝑙𝑛 𝑝𝑖 ,

Therefore for the chemical potential we obtain

𝜇𝑖 = 𝜇0𝑖 (𝑇) + 𝑘𝑇 ln 𝑝𝑖 ,

According to the Dalton law 𝑝 = ∑𝑖 𝑝𝑖 and using Eq.6.5 we obtain

𝜇𝑖 = 𝜇0𝑖 (𝑇) + 𝑘𝑇 ln(𝑝𝑐𝑖 )

The equation for the equilibrium of a chemical reaction can thus be written as

$\sum_i \nu_i\left(\mu_{0i}(T) + kT\ln(p\,c_i)\right) = 0$

or

$kT\sum_i \nu_i\ln(c_i) + kT\sum_i \nu_i\ln(p) + \sum_i \nu_i\,\mu_{0i}(T) = 0$

or

$\sum_i \ln\left(c_i^{\nu_i}\right) = -\sum_i \nu_i\ln p - \dfrac{1}{kT}\sum_i \nu_i\,\mu_{0i}(T)$.

Converting the logarithmic equation to an exponential one,

$\prod_i c_i^{\nu_i} = p^{-\sum_i \nu_i}\exp\left(-\dfrac{1}{kT}\sum_i \nu_i\,\mu_{0i}(T)\right)$ (6.10)

This expression is known as the law of mass action. The law was first proposed in 1864 by C. M. Guldberg and P. Waage. J. H. van 't Hoff independently came to similar conclusions in 1877. The law is often written as

∏ 𝑐𝑖 𝜈𝑖 = 𝐾(𝑇, 𝑃) (6.11),

where $K(T, P)$ is the equilibrium constant². It is interesting to discuss three possible cases (a short numerical illustration follows the list):

- ∑ 𝜈𝑖 > 0. In this case, the direct reaction results in an increase of the number of
molecules. The law of mass action predicts that an increase of the pressure
results in a decrease of the equilibrium constant and the amount of the reaction
products should decrease.
- ∑ 𝜈𝑖 < 0. In this case, the direct reaction results in a decrease of the number of
molecules. The law of mass action predicts that an increase of the pressure
results in an increase of the equilibrium constant and the amount of the reaction
products should increase.
- ∑ 𝜈𝑖 = 0. In this case, the number of molecules in the reaction does not change.
The law of mass action predicts that pressure should not affect the outcome of
the reaction.
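A small numerical illustration of the pressure dependence in Eq. 6.10 for the ammonia example above (added to the notes): with $\nu_{NH_3} = 2$, $\nu_{H_2} = -3$, $\nu_{N_2} = -1$ we have $\sum_i \nu_i = -2$, so the product $\prod_i c_i^{\nu_i}$ grows as $p^2$ and an increase of pressure pushes the equilibrium towards NH3.

    # Pressure factor p^(-sum(nu_i)) in the law of mass action (Eq. 6.10)
    # for 3 H2 + N2 <-> 2 NH3.

    nu = {"NH3": 2, "H2": -3, "N2": -1}
    sum_nu = sum(nu.values())          # -2 for this reaction

    p1, p2 = 1.0, 2.0                  # arbitrary pressure units; only the ratio matters
    factor = (p2 / p1) ** (-sum_nu)    # change of prod(c_i^nu_i) when p doubles
    print(f"sum(nu) = {sum_nu}; doubling p multiplies prod(c_i^nu_i) by {factor:.0f}")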

² In high school chemistry courses you described chemical equilibrium with the help of the so-called evenwichtsconstante (Dutch for "equilibrium constant"). You were also told that this constant is a function of temperature and pressure. The derivation given above explains the origin of these dependencies.
7. THIRD LAW OF THERMODYNAMICS. CONCLUDING REMARKS.
Until now we have used the definition of entropy in terms of differential 𝑑𝑆. Such a definition
leaves an uncertainty about the absolute value of the entropy. In principle, it does not constitute
a drawback, because in applications of thermodynamics to physical problems we practically
always deal with differences in entropy. The same is also true for the internal energy 𝑈.
However, when we discuss thermodynamics of chemical reactions, absolute values of the Gibbs
function 𝐺 = 𝑈 − 𝑇𝑆 + 𝑝𝑉 becomes important. Hence, it is clear that in order to fully benefit
from the law of mass action, for instance, one would need to know how entropies of different
chemical substances depend on temperature. Aiming to develop quantitative theory in
chemistry, W. Nernst formulated the principle what is now known as the third law of
thermodynamics. According to the law, the entropy of a system at absolute zero is a well-
defined constant.

7.1. Third law of thermodynamics

First of all, we note that the value of entropy change ∆𝑆 for any process is finite at all
temperatures. Secondly, we look at the behavior of enthalpy (𝐻 = 𝑈 + 𝑝𝑉) and Gibbs function
(𝐺 = 𝑈 – 𝑇𝑆 + 𝑝𝑉) at low temperatures. It is seen that

𝐻 − 𝐺 = 𝑆𝑇

and, if 𝑆 is finite,

$\lim_{T\to 0}(H - G) = 0$

According to the definitions of the thermodynamic potentials

𝑑𝐻 = 𝑇𝑑𝑆 + 𝑉𝑑𝑝 and 𝑑𝐺 = −𝑆𝑑𝑇 + 𝑉𝑑𝑝.

It also means that

∆𝐻 = 𝑇∆𝑆 + 𝑉∆𝑝 and ∆𝐺 = −𝑆∆𝑇 + 𝑉∆𝑝.

For an isothermal process we also have that $\lim_{T\to 0}(\Delta H - \Delta G) = 0$. Note that upon a temperature increase the entropy cannot decrease. Hence we must conclude that at absolute zero the enthalpy and the Gibbs function are not only equal but also asymptotically tangent to each other and

$\lim_{T\to 0}\left(\dfrac{\partial\Delta H}{\partial T}\right)_p - \lim_{T\to 0}\left(\dfrac{\partial\Delta G}{\partial T}\right)_p = 0$ (7.1).

Again according to the definitions of the thermodynamic potentials

$\Delta S = \dfrac{\Delta H - \Delta G}{T}$ (7.2)
L'Hôpital's rule states that for functions $f$ and $g$ which are differentiable on an interval $I$, except possibly at a point $c$ contained in $I$: if $\lim_{x\to c} f(x) = 0$, $\lim_{x\to c} g(x) = 0$, $g'(x) \neq 0$ for all $x$ in $I$ with $x \neq c$, and $\lim_{x\to c}\left(\frac{f'(x)}{g'(x)}\right)$ exists, then $\lim_{x\to c}\left(\frac{f'(x)}{g'(x)}\right) = \lim_{x\to c}\left(\frac{f(x)}{g(x)}\right)$.

Applying L'Hôpital's rule to Eq. 7.2 and afterwards using Eq. 7.1 one obtains

$\lim_{T\to 0}(\Delta S) = \lim_{T\to 0}\dfrac{(\partial\Delta H/\partial T)_p - (\partial\Delta G/\partial T)_p}{1} = 0$

In 1906 W. Nernst stated that for any isothermal process

lim 𝑇→0 (∆𝑆) = 0 (7.3).

It means that upon approaching the absolute zero entropies of substances approach a well defined
constant.

7.2. Unattainability of absolute zero

Nernst did not like the concept of entropy. Therefore in 1912 he reformulated Eq.7.3 as the law
of unattainability of absolute zero. In the following we show that the statement of constant
entropy at very low temperatures is equivalent to the statement of unattainability of absolute
zero.

In order to see it, one has to realize first that:

1. The most efficient process of cooling down is adiabatic cooling down. Let’s consider the
case when the entropy of a system depends on temperature and some parameter X. To be
more specific, we can talk about ideal gas as a system and volume V as the parameter.
The most efficient way to perform work on a system is to do it in a reversible way (i.e.
entropy does not increase). The most efficient cooling down is a reversible cooling down.
In the case of ideal gas, the most efficient (i.e. reversible) way to cool the gas down is to
do it via adiabatic expansion.
2. Upon a temperature increase the entropy can only increase. It follows from the fact that the heat capacity $C_X = (\partial Q/\partial T)_X$ is positive:

$(\partial S/\partial T)_X = \dfrac{C_X}{T} > 0$

Let's assume that the third law of thermodynamics is wrong and at $T = 0$ K the entropy is not a well-defined constant, but a function of $X$. In this case, it is easy to show that absolute zero is attainable. Qualitative plots of $S(T)$ for $X = X_1$ and $X = X_2$ in Fig. 7.1 show how the cooling may work.
Figure.7.1. Hypothetical entropy diagrams demonstrating unattainability of absolute zero. The
figure is taken from C. J. Adkins “Equilibrium thermodynamics” Page 247).

Alternatively, we can start the proof assuming that it is possible to achieve T=0 K. If this is
performed in an adiabatic way, taking into account the properties of entropy (it increases
together with temperature and does not change in an adiabatic process), it would automatically
mean that the entropy at T=0 K must depend on some parameter X i.e. entropy is not a well-
defined constant.

One can also try to obtain a better insight into Fig. 7.1(b), which corresponds to the case when the entropy is a well-defined constant. Consider a process in which we are attempting to decrease the temperature from $T_1$ to $T_2$ by changing some parameter $X$ from $X_1$ to $X_2$ (a horizontal line of the cooling path in Fig. 7.1b). Using the second law of thermodynamics, we obtain the entropies of the initial and final states:
$S(T_1, X_1) = S(0, X_1) + \int_0^{T_1}\left(\dfrac{\partial S}{\partial T}\right)_{X=X_1} dT$

$S(T_2, X_2) = S(0, X_2) + \int_0^{T_2}\left(\dfrac{\partial S}{\partial T}\right)_{X=X_2} dT$

Adiabatic cooling implies that $S(T_1, X_1) = S(T_2, X_2)$ and according to the third law of thermodynamics we have $S(0, X_1) = S(0, X_2)$. It means that

$\int_0^{T_1}\left(\dfrac{\partial S}{\partial T}\right)_{X=X_1} dT = \int_0^{T_2}\left(\dfrac{\partial S}{\partial T}\right)_{X=X_2} dT$

If we assume that $T_2 = 0$, it follows that $T_1 = 0$ as well: there is no adiabatic path to $T = 0$ which does not start from $T = 0$.

Nernst also proposed his own "proof" of the third law of thermodynamics based on a Carnot cycle. If we assume that absolute zero can be reached, a Carnot cycle operating between a hot reservoir at temperature $T_1$ and a cold reservoir at $T_2 = 0$ K would have efficiency 1. This would contradict the second law of thermodynamics (heat coming from the hot reservoir would be fully transformed into work), which is impossible. W. Nernst proposed this logic as a proof of the third law of thermodynamics from the second law of thermodynamics. This approach was criticized by A. Einstein. In fact, this is not a proof of the unattainability of absolute zero, but a proof that a Carnot cycle with a cold reservoir at $T_2 = 0$ K is impossible.

7.3 Consequences of third law of thermodynamics

7.3.1. Heat capacity


Applying L'Hôpital's rule to $S = \dfrac{H - G}{T}$, one obtains

$\lim_{T\to 0}(S) = \lim_{T\to 0}\dfrac{(\partial H/\partial T)_p - (\partial G/\partial T)_p}{1}$

According to the definitions of the thermodynamic potentials

𝑑𝐻 = 𝑇𝑑𝑆 + 𝑉𝑑𝑝 and 𝑑𝐺 = −𝑆𝑑𝑇 + 𝑉𝑑𝑝.

In this way, if no work is performed on the system and thus $dU = dQ$, we obtain that $(\partial H/\partial T)_p = (\partial U/\partial T)_p = C_p$, where $C_p$ is the heat capacity at constant pressure. It is also clear that $(\partial G/\partial T)_p = -S$. Therefore, we see that

$\lim_{T\to 0}(S) = \lim_{T\to 0}(C_p + S)$.

The equation together with third law of thermodynamics implies that limT→0 (Cp ) = 0.

Alternatively, one can simply use the definition of the heat capacity $C_x = (\partial Q/\partial T)_x$ or $C_x = T(\partial S/\partial T)_x$. From the latter it follows that

$C_x = T\left(\dfrac{\partial S}{\partial T}\right)_x = \left(\dfrac{\partial S}{\partial\ln T}\right)_x$

Upon approaching absolute zero $S$ is a constant and $\ln T \to -\infty$. It is seen that

$\lim_{T\to 0}(C_x) = 0$.
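A minimal numerical sketch (added for illustration, assuming a Debye-like low-temperature heat capacity $C = aT^3$, which is the standard behaviour of insulating crystals; the coefficient $a$ is an arbitrary illustrative number): it checks that the entropy $S(T) = \int_0^T (C/T')\,dT'$ stays finite and tends to zero together with $C$, as the third law requires.

    # Entropy from a Debye-like heat capacity C(T) = a*T^3 at low temperatures:
    # S(T) = integral_0^T C(T')/T' dT' = a*T^3/3, finite and -> 0 as T -> 0.
    # 'a' is an arbitrary illustrative coefficient.

    a = 1.0e-3   # J/(K^4), assumed

    def entropy(T, steps=100000):
        """Numerically integrate C(T')/T' from 0 to T with a simple Riemann sum."""
        dT = T / steps
        total = 0.0
        for i in range(1, steps + 1):
            Tp = i * dT
            total += a * Tp**2 * dT    # C/T' = a*T'^2, no divergence at T' -> 0
        return total

    for T in (10.0, 1.0, 0.1):
        print(f"T = {T:5.1f} K: S ~ {entropy(T):.3e} J/K (analytic a*T^3/3 = {a*T**3/3:.3e})")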

7.3.2 Response functions


Many response functions, such as the isobaric cubic expansivity $\beta = \frac{1}{V}(\partial V/\partial T)_p$ and the compressibility $\kappa = -\frac{1}{V}(\partial V/\partial p)_T$, can be expressed in terms of derivatives of the entropy. For instance, one can derive the Maxwell relation

$(\partial S/\partial p)_T = -(\partial V/\partial T)_p$

According to the third law of thermodynamics $\lim_{T\to 0}(\Delta S) = 0$. It means that $\lim_{T\to 0}(\partial S/\partial p)_T = 0$, $\lim_{T\to 0}(\partial V/\partial T)_p = 0$ and the isobaric cubic expansivity tends to zero: $\lim_{T\to 0}(\beta) = 0$.

According to the reciprocity theorem $(\partial V/\partial p)_T = -(\partial V/\partial T)_p(\partial T/\partial p)_V$. As was shown above using the third law of thermodynamics, $\lim_{T\to 0}(\partial V/\partial T)_p = 0$. One has to be careful here: the factor $(\partial T/\partial p)_V$ generally diverges in the same limit, since $(\partial p/\partial T)_V = (\partial S/\partial V)_T \to 0$, so this argument does not force the compressibility itself to vanish. What the third law does imply is that the ratio of expansivity to compressibility tends to zero, $\lim_{T\to 0}(\beta/\kappa) = \lim_{T\to 0}(\partial p/\partial T)_V = 0$, while $\kappa$ remains finite.

Another example is the phenomenon of surface tension. In Chapter 3 we showed that

$(\partial\gamma/\partial T)_A = -(\partial S/\partial A)_T$,

where $A$ is the surface area and $\gamma$ is the surface tension. According to the third law of thermodynamics

$\lim_{T\to 0}(\partial S/\partial A)_T = 0$.

It also means that

$\lim_{T\to 0}(\partial\gamma/\partial T)_A = 0$.

The results of these mathematical exercises with Maxwell relations were confirmed
experimentally soon after formulation of third law of thermodynamics (see Fig. 7.2)
Figure 7.2. Scan of the original paper by H. Kamerlingh Onnes and co-authors reporting about
surface tension of liquid He at low temperatures.

7.3.3 Phase transitions

The third law of thermodynamics also predicts anomalies in the behavior of phase transitions.

According to the Clausius-Clapeyron equation (see Eq.5.4) for first order phase transition one
expects
$\dfrac{dp}{dT} = \dfrac{\Delta S}{\Delta V}$.

The third law of thermodynamics predicts that for such a transition $\lim_{T\to 0}\dfrac{dp}{dT} = 0$.

The laws derived for temperature dependencies of physical quantities may become invalid at
very low temperatures. As an example one can mention the law for temperature dependence of
the magnetic moment in a ferromagnet (see Chapter 3, Eq. 3.7). The law derived from the first and second laws of thermodynamics fails upon approaching absolute zero. While Eq. 3.7 works very well at high temperatures, confirming that $m \sim \sqrt{T_C - T}$, at low temperatures one should take into account the following Maxwell relation:

$(\partial S/\partial H)_T = \mu_0(\partial m/\partial T)_H$.

According to the third law of thermodynamics $\lim_{T\to 0}(\partial S/\partial H)_T = 0$. It means that upon approaching absolute zero the law for the temperature dependence of the magnetic moment in a ferromagnet, which we derived from the first and second laws of thermodynamics, must break down, because the third law requires

$\lim_{T\to 0}\left(\dfrac{\partial m}{\partial T}\right)_H = 0$.

7.4 Concluding remarks

This lecture concludes the course. The goal of this course was to show that thermodynamics is a powerful and universal method in physics. If you feel that you still do not understand thermodynamics, I would like to encourage you not to give up and to remember the quote attributed to A. Sommerfeld (the professor of many Nobel laureates):

"Thermodynamics is a funny subject. The first time you go through it, you don't understand it at all. The second time you go through it, you think you understand it, except for one or two points. The third time you go through it, you know you don't understand it, but by that time you are so used to the subject, it doesn't bother you anymore..." ;-)
