
1.2 The Zeroth Law of Thermodynamics


We start here with a consideration of general thermodynamic laws that govern all possible processes
in the universe. These laws are the fundamental building blocks on which all of thermodynamics
rests; they deserve our full attention. Do not be discouraged by a certain degree of abstraction;
things will get better later.
The first of these principles invokes transitive properties (see below) that characterize the interactive
processes carried out in succession. Its main power derives from the fact that we thereby can set up
the so-called equations of state for any material and generate a thermodynamic temperature scale.
We begin by positing the so-called zeroth law of thermodynamics in a familiar and sensible
formulation which asserts that
Two bodies in equilibrium with a third are in equilibrium with each other.
To study the implications of this seemingly obvious statement, consider the special case where the
mechanical properties of a system can be characterized solely in terms of a prevailing pressure P and
volume V. While V is well defined, the concept of pressure, briefly introduced in the last section, is
related to the description of forces in Section 1.4; here, it is the force per unit area at which the
system is maintained at volume V.
We follow the procedure advocated by Buchdahl. Consider two systems 1 and 2 that are initially
isolated; let P1 and P2 be the pressures whereby the systems remain at fixed volumes V1 and V2.
Proper adjustments must be made to permit physically possible pairs of pressure–volume variables
(P1,V1) and (P2,V2) to be independently established. Let these two units so constructed now be joined
to form a compound system and equilibrated. It is an experience of mankind that only three of the
four variables can then be independently altered. We take account of this restriction by setting up a
mathematical relation β3(P1,V1,P2,V2) = 0, where β3 is just a fancy label for the particular mathematical
function needed to specify the interrelation between the indicated variables; its detailed form is not
of interest at this point.
We now repeat the process by joining system 1 to a new system 3 characterized by the pressure–
volume variables P3,V3. By the same line of argument, after equilibrating the compound system, we
encounter another interrelation of the form β2(P1,V1,P3,V3) = 0. Lastly, on joining systems 2 and 3, we
must set up a third mathematical restriction of the form β1(P2,V2,P3,V3) = 0. If equilibrium prevails
after establishing each combination, we require for consistency with the zeroth law that system 3
remain unaltered in its union with either system 1 or 2. This has an interesting consequence: for, we
are allowed to solve the equation β1(P2,V2,P3,V3) = 0 for P3 in terms of P2,V2,V3, as well as the
equation β2(P1,V1,P3,V3) = 0 for P3 in terms of P1,V1,V3. These solutions take the
form P3 = Φ1(P2,V2,V3) = Φ2(P1,V1,V3). From this relation, we now construct the following difference
function, which we call λ, to wit:
(1.2.1)  Φ1(P2,V2,V3) − Φ2(P1,V1,V3) ≡ λ(P1,V1,P2,V2,V3) = 0.
The prescribed construction of the function λ unfortunately generates a glaring inconsistency: the
functional dependence of λ on V3 is absent from the function β3 = 0; also, it makes no sense to have
to refer to system 3 when combining systems 1 and 2. To resolve this difficulty, we introduce a new
requirement: we demand that V3 is present in the functions Φ1 and Φ2 in such a manner
that V3 is eliminated when the difference between Φ1 and Φ2 is constructed. This is achieved in most
general terms by requiring that the functions Φ assume the forms Φ1 = f2(P2,V2)h(V3) + q(V3) and
Φ2 = f1(P1,V1)h(V3) + q(V3), where h and q are arbitrary functions of V3 (any functions you please)
that serve to separate out the V3 dependence. Substitution of the last two equations in Eq.
(1.2.1) then leads to the relation
(1.2.2a)  f1(P1,V1) = f2(P2,V2).
Similarly, consistent with the zeroth law, we find that

(1.2.2b)  f1(P1,V1) = f3(P3,V3).
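As an illustrative sketch (an addition, not part of the original derivation), for an ideal gas the common function in Eq. (1.2.2) can be taken as f(P,V) = PV/(nR), which plays the role of an empirical temperature: two samples in mutual equilibrium share the same value of it.

# Illustrative sketch (not from the text): for an ideal gas the function f(P, V)
# appearing in Eq. (1.2.2) can be taken as f(P, V) = P*V / (n*R), which is the
# same for any two samples in mutual thermal equilibrium and serves as an
# empirical temperature.

R = 8.314  # J mol^-1 K^-1 (gas constant)

def empirical_temperature(P, V, n):
    """Return f(P, V) = P V / (n R) for an ideal-gas sample (P in Pa, V in m^3)."""
    return P * V / (n * R)

# Two hypothetical gas samples brought to thermal equilibrium near 300 K:
t1 = empirical_temperature(P=101325.0, V=0.0246, n=1.0)   # sample 1
t2 = empirical_temperature(P=202650.0, V=0.0123, n=1.0)   # sample 2
print(t1, t2)  # both ≈ 299.8, i.e. f1(P1,V1) = f2(P2,V2) as the zeroth law requires

Any monotonic function of PV/(nR) would serve equally well as an empirical temperature; the zeroth law only guarantees that some such common function exists.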

2.1 Thermodynamic Origin of Temperature


The scientific meaning for temperature is at the heart of thermodynamics. It arises from the
Zeroth Law of thermodynamics that states that if two systems are in thermal equilibrium and one of
those systems is in thermal equilibrium with a third system, then all three systems are in thermal
equilibrium with each other. Thus, temperature is the property of a system that conveys information
about the thermal equilibrium of the system. The Zeroth Law only establishes equality of
temperatures and permits the use of any single valued function as an empirical temperature scale. In
order to establish a metric scale for temperature, one that allows meaningful ratios of temperature,
the Second Law of thermodynamics is used to define an absolute temperature, T, by expressing the
law as
(2.1)  dS ≥ δQ/T
where dS is the change in entropy and δQ is the heat transferred. There are other thermodynamically
equivalent ways of defining the temperature scale as described in, for example, reference [1].
Equation (2.1) gives a metric temperature scale but requires, in addition, a definition of magnitude
and sign in order to define the unit. To establish the thermodynamic temperature scale the Système
International d’unités (SI), defines the kelvin, symbol K, by fixing the temperature of the triple point
of water, T(H2O, s + l + g) = 273.16 K. Over time a variety of other temperature scales have been
developed but they are no longer useful for reporting scientific data. The exception is the Celsius
temperature scale. The Celsius temperature, t, is related to the absolute temperature by
(2.2)  t/°C = T/K − 273.15
and the unit is the degree Celsius, symbol °C. On this scale the ice point is 0 °C and the triple point of
water is 0.01 °C.
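A minimal sketch of the conversion in Eq. (2.2); the function and example values below are illustrative additions, not taken from the text:

def kelvin_to_celsius(T_kelvin):
    """Eq. (2.2): t/°C = T/K - 273.15."""
    return T_kelvin - 273.15

print(round(kelvin_to_celsius(273.16), 2))   # 0.01 °C, the triple point of water
print(round(kelvin_to_celsius(273.15), 2))   # 0.00 °C, the ice point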
In principle, any suitable thermodynamic equation may be used as the basis for a thermometer.
However, with the exception of the radiation thermometers used at high temperatures,
thermodynamic thermometers cannot achieve the highest precision desired, and are complex and
time consuming to use. To overcome these difficulties, an International Temperature Scale, ITS, is
defined by the Comité International des Poids et Mesures (CIPM) under the Convention du Mètre, the
founding treaty for the SI, and is regularly revised with the current version agreed to in 1990 and
known as ITS-90 [2,3]. The ITS are empirical temperature scales giving a close approximation to the
known thermodynamic scale, but are more precise and easier to use. All temperature measurements
should be traceable to the current ITS. Some earlier ITS were known as International Practical
Temperature Scales (IPTS).
Because of the differences between the various temperature scales and because they have the same
name for their units, it is often necessary to distinguish between scale temperature and
thermodynamic temperature. The symbols T90 and t90 are used for the kelvin and Celsius
temperatures on the current scale, ITS-90, and previous scales are similarly denoted, for example, on
the International Practical Temperature Scale of 1968 (IPTS-68), T68 and t68 are used for the kelvin and
Celsius temperatures.
There are three provisos concerning the scientific use of ITS. Firstly, while the scale is more precise, it
does not guarantee thermodynamic accuracy; it is very dependent on the accuracy of
the thermodynamic data used to establish the scale as discussed in Section 2.8. For example, recent
data indicates that near 300 K the ITS-90 differs from the thermodynamic scale by about 5 mK.
Secondly, ITS varies with time because it is updated approximately every 20 years. This means that
older thermodynamic data may not be in agreement with recent data. For example, under the IPTS-
68 the normal boiling point of water, T68(H2O, l + g, p = 0.101 325 MPa), was 373.15 K but under ITS-
90 T90(H2O, l + g, p = 0.101 325 MPa) = 373.124 K, a difference of 26 mK. Thirdly, the ITS-90 is not

strictly single valued; it exhibits non-uniqueness because of both the way it is defined and the
properties of real thermometers. For example, two laboratories’ temperature measurements may
differ by as much as 2 mK around 400 K, yet both comply with the ITS-90, assuming other
uncertainties are negligible. Therefore, at this level of accuracy, measured thermodynamic properties
may not appear to be smooth functions.
This chapter introduces high precision thermometry for those requiring a close match to the
thermodynamic temperature. To achieve the highest accuracies close adherence to the published
guidelines [2] is necessary. Lower accuracy thermometry is covered in other publications and
guidelines [4–7]. Since it is not possible to cover all thermometry applications for all possible
environments, in this chapter the emphasis is on making measurements traceable to the ITS-90. In
particular, the limits on accuracy and precision are examined in detail. Unless otherwise stated, all
uncertainties are reported as the standard uncertainty or one standard deviation.
At the extremes of temperature, the use of ITS-90 may not always be appropriate because new
techniques for realising the temperature scale are constantly developed. Extensions of thermometry
to very high and very low temperatures are outlined.

What is Heat Capacity?


When heat is absorbed by a body, the temperature of the body increases and when heat is lost, the
temperature decreases. The temperature of an object is a measure of the average kinetic energy of
the particles that make up that object. So when heat is absorbed by an object this heat gets
translated into the kinetic energy of the particles and as a result the temperature increases. Thus, the
change in temperature is proportional to the heat transfer.
The formula q = n C ∆T gives the heat q required to bring about a temperature change ∆T in n moles
of a substance. The constant C here is called the molar heat capacity. Thus,
the molar heat capacity of any substance is defined as the amount of heat energy required to change
the temperature of 1 mole of that substance by 1 unit. It depends on the nature and
composition of the substance.
In this article, we will discuss two types of molar heat capacity – CP and CV and derive a relationship
between Cp and Cv.
What are Heat Capacity C, CP, and CV?
 The molar heat capacity C, at constant pressure, is represented by CP.
 At constant volume, the molar heat capacity C is represented by CV.
In the following section, we will find how CP and CV are related for an ideal gas.
The relationship between CP and CV for an Ideal Gas
From the equation q = n C ∆T, we can say:
At constant pressure P, we have
qP = n CP∆T
This value is equal to the change in enthalpy, that is,
qP = n CP∆T = ∆H
Similarly, at constant volume V, we have
qV = n CV∆T
This value is equal to the change in internal energy, that is,
qV = n CV∆T = ∆U
We know that for one mole (n=1) of an ideal gas,
∆H = ∆U + ∆(pV ) = ∆U + ∆(RT) = ∆U + R ∆T
Therefore, ∆H = ∆U + R ∆T
Substituting the values of ∆H and ∆U from above in the former equation,

CP∆T = CV∆T + R ∆T
CP = CV + R
CP – CV = R
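The relation CP = CV + R can be checked with a short sketch; the CV values used below are the standard ideal-gas results and are assumptions for illustration, not values given in the text:

R = 8.314  # J mol^-1 K^-1

# CP = CV + R for an ideal gas; CV = (3/2)R for a monatomic gas and
# approximately (5/2)R for a diatomic gas near room temperature.
for label, Cv in [("monatomic", 1.5 * R), ("diatomic", 2.5 * R)]:
    Cp = Cv + R
    print(f"{label}: Cv = {Cv:.2f}, Cp = {Cp:.2f}, Cp - Cv = {Cp - Cv:.2f} J mol^-1 K^-1")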
Difference Between Isothermal and Adiabatic Process
An isothermal process is a process that occurs under constant temperature but other parameters of
the system can be changed accordingly. On the other hand, in an adiabatic process, no heat is
transferred between the system and its surroundings. The main difference between isothermal and adiabatic
process is that the isothermal process occurs under constant temperature, while the adiabatic
process occurs under varying temperature. The work done in an isothermal process comes from the
heat exchanged with the surroundings. Meanwhile, the work done in an adiabatic process is
due to the change in its internal energy.

Examples of Isothermal Process


An isothermal process occurs in systems that have some means of regulating the temperature. This
process occurs in systems ranging from highly structured machines to living cells. A few examples of
an isothermal process are given below.
 Changes in state or phase changes of different liquids through the process of melting and
evaporation are examples of the isothermal process.
 One of the examples of the industrial application of the isothermal process is the Carnot
engine. In this engine, some parts of the cycles are carried out isothermally.
 A refrigerator works isothermally. A set of changes take place in the mechanism of a
refrigerator but the temperature inside remains constant. Here, the heat energy is removed
and transmitted to the surrounding environment.
 Another example of an isothermal process is the heat pump. The heat is either removed
from the house and dumped outside or the heat is brought inside the house from outside to
warm the house. In either case, the goal is to keep the house at the desired temperature
setting.
What is Boyle’s Law?
An isothermal process is of special interest for ideal gases. An ideal gas is a hypothetical gas whose
molecules do not interact except through elastic collisions. Joule’s second law states that
the internal energy of a fixed amount of an ideal gas only depends on the temperature. Thus, the
internal energy of an ideal gas in an isothermal process is constant.
In an isothermal condition, for an ideal gas, the product of Pressure and Volume (PV) is constant. This
is known as Boyle’s law. Physicist and chemist Robert Boyle published this law in 1662. Boyle’s law is
often termed as Boyle–Mariotte law, or Mariotte’s law because French physicist Edme Mariotte
independently discovered the same law in 1679.
Boyle’s Law Equation
The absolute pressure exerted by a given mass of an ideal gas is inversely proportional to the volume it
occupies if the temperature and amount of gas remain unchanged within a closed system.
There are a couple of ways in which the above-stated law can be expressed. The most basic way is
given as follows:
PV = k
where P is the pressure, V is the volume and k is a constant.
The law can also be used to find the volume and pressure of a system when the temperature is held
constant in the system as follows:
PiVi = Pf Vf
where,

 Pi is the initial pressure


 Pf is the final pressure
 Vi is the initial volume
 Vf is the final volume
The way people breathe and exhale air out of their lungs can be explained by Boyle’s Law. When the
diaphragm contracts and relaxes, lung volume increases and decreases respectively, changing the
air pressure inside them. The pressure difference between the interior of the lungs and the external
air produces either inhalation or exhalation.
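A short numerical sketch of Boyle's law, PiVi = PfVf; the pressures and volumes are assumed example values, not data from the text:

def final_pressure(P_i, V_i, V_f):
    """Return Pf from Pi*Vi = Pf*Vf at constant temperature and amount of gas."""
    return P_i * V_i / V_f

P_i = 100.0e3   # initial pressure, Pa
V_i = 2.0e-3    # initial volume, m^3
V_f = 1.0e-3    # final volume, m^3
print(final_pressure(P_i, V_i, V_f))  # 200 kPa: halving the volume doubles the pressure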
Reversible and Irreversible Processes
We see so many changes happening around us every day, such as boiling water, rusting of iron,
melting ice, burning of paper, etc. In all these processes, we observe that the system in consideration
goes from an initial state to a final state where some amount of heat is absorbed from the
surroundings and some amount of work W is done by the system on the surrounding. Now, for how
many such systems can the system and the surrounding be brought back to their initial state? With
common examples such as rusting and fermentation, we can say that it is not possible in most cases.
In this section, we shall learn about reversible and irreversible processes.
What Are Reversible Processes?
A thermodynamic process (state i → state f) is said to be reversible if the process can be turned back
such that both the system and the surroundings return to their original states, with no other
change anywhere else in the universe. As we know, in reality, no truly reversible
processes exist. Thus, reversible processes are best regarded as idealizations or models of real
processes against which the limits of a system or device can be defined. They help us determine the
maximum efficiency a system can provide in ideal working conditions and, thus, the target design
that can be set.
Examples of Reversible Processes
Here, we have listed a few examples of Reversible Processes:
 extension of springs
 slow adiabatic compression or expansion of gases
 electrolysis (with no resistance in the electrolyte)
 the frictionless motion of solids
 slow isothermal compression or expansion of gases
What Are Irreversible Processes?
An irreversible process can be defined as a process in which the system and the surroundings do not
return to their original condition once the process is initiated. Take an example of an automobile
engine that has travelled a distance with the aid of fuel equal to an amount ‘x’. During the process,
the fuel burns to provide energy to the engine, converting itself into smoke and heat energy. We
cannot retrieve the energy lost by the fuel and cannot get back the original form. There are many
factors due to which the irreversibility of a process occurs, namely:
1. Friction, which converts the energy of the fuel to heat energy
2. The unrestrained expansion of the fluid, which prevents the fuel from regaining its original form
3. Heat transfer across a finite temperature difference, the reverse of which is not possible as the
forward process, in this case, is spontaneous
4. Mixing of two different substances that cannot be separated, as the intermixing process is
again spontaneous in nature and the reverse is not feasible.
Thus, some processes are reversible while others are irreversible in nature, depending upon their
ability to return to their original state from their final state.

Examples of Irreversible Processes


A few examples of Irreversible Processes are:
 Relative motion with friction
 Throttling
 Heat transfer
 Diffusion
 Electricity flow through a resistance
The Second Law of Thermodynamics Equation
Mathematically, the second law of thermodynamics is represented as
ΔSuniv ≥ 0
Where ΔSuniv is the change in the entropy of the universe; the equality holds only for a reversible process.
Entropy is a measure of the randomness of the system, or of the dispersal of energy within
an isolated system. It can be considered a quantitative index that describes the quality of energy.
Meanwhile, there are a few factors that cause an increase in the entropy of a closed system. Firstly,
in a closed system the mass remains constant, but heat can be exchanged with the
surroundings; heat flowing into the system increases
the entropy of the system.
Secondly, internal changes may occur in the movements of the molecules of the system. This leads to
disturbances which further cause irreversibilities inside the system resulting in the increment of its
entropy.

Different Statements of the Law


There are two statements on the second law of thermodynamics, and they are
1. Kelvin-Planck Statement
2. Clausius Statement
Kelvin-Planck Statement
It is impossible for a heat engine to produce net work in a complete cycle if it exchanges heat only
with bodies at a single fixed temperature.
Violation:
If Q2 = 0 (i.e., Wnet = Q1, or efficiency = 1.00), the heat engine produces work in a complete cycle by
exchanging heat with only one reservoir, thus violating the Kelvin-Planck statement.

Clausius’s Statement
It is impossible to construct a device operating in a cycle that can transfer heat from a colder body to
a warmer one without consuming any work. Also, energy will not flow spontaneously from a low-
temperature object to a higher-temperature object. It is important to note that we are referring to
the net transfer of energy. Energy transfer can take place from a cold object to a hot object by the
transfer of energetic particles or electromagnetic radiation. However, the net transfer will occur from
the hot object to the cold object in any spontaneous process. And some form of work is needed to
transfer the net energy to the hot object. In other words, unless the compressor is driven by an
external source, the refrigerator won’t be able to operate. The heat pump and refrigerator work on
Clausius’s statement.

Both Clausius’s and Kelvin’s statements are equivalent, i.e., a device violating Clausius’s statement
will also violate Kelvin’s statement, and vice versa.

Carnot's Theorem and Cycle


 Having defined the efficiency (and coefficient of performance) of a heat engine, it can be
shown that the second law places a limit on the maximum attainable value of this
efficiency. This analysis can be split into two distinct parts - Carnot's Theorem and the
Carnot Cycle.
 CARNOT'S THEOREM
"The efficiency of all reversible engines operating between the same two temperatures is the
same, and no irreversible engine operating between these temperatures can have a greater
efficiency than this"
 In order to understand this theorem we must first define what we mean by a reversible
and irreversible process. Without a more detailed description of thermodynamics this is
not an easy task, however, the following statements give a flavor of what reversibility
involves,
o A reversible process is one which can be made to "retrace" its path exactly.
o A process is reversible when the successive states of the process are infinitesimally
close to equilibrium states, i.e., the process is quasi-equilibrium.
o With a reversible process it is possible to restore the system to its original state
without needing an external agent or changing its surroundings.
o Reversible processes are an abstraction that aids the analysis of real processes.
o A reversible process is a standard of comparison for an actual system.
o Truly reversible thermal processes would require an infinite amount of time for
completion.
All real physical processes are irreversible. Just as the ideal gas approximates the behaviour of all
gases but no real gas is truly ideal, we can devise processes which are close to being reversible
but never quite get there.
 CARNOT CYCLE
The Carnot cycle is an ideal reversible cyclic process involving the expansion and compression of an
ideal gas, which enables us to evaluate the efficiency of an engine utilizing this cycle.

Each of the four distinct processes is reversible. Using the fact that no heat enters or leaves in
adiabatic processes, we can show that the work done in one cycle is W = Q1 - Q3, where Q1 is the heat
entering at temperature TH in the isothermal process A -> B and Q3 is the heat leaving at
temperature TC in the isothermal process C -> D.
 By using the ideal gas equation (pV = nRT), the fact that W = Q for isothermal processes,
and the fact that for adiabatic ideal gas processes TV^(γ-1) = constant, it can be shown
that
Q3/Q1 = TC/TH.
Therefore, the efficiency of a Carnot cycle is given by
η = W/Q1 = 1 - Q3/Q1 = 1 - TC/TH.
Remember, this is the ideal heat engine (reversible) efficiency. It sets the maximum theoretically
attainable efficiency of any real engine operating between the same two temperatures.

Be careful. The temperatures in the ideal gas law must be in Kelvin, therefore the temperatures in
the efficiency equation are also in Kelvin.
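A minimal sketch of the Carnot efficiency just derived; the reservoir temperatures are assumed example values, not data from the text:

def carnot_efficiency(T_hot, T_cold):
    """Ideal (reversible) efficiency of an engine between T_hot and T_cold, both in kelvin."""
    return 1.0 - T_cold / T_hot

print(carnot_efficiency(T_hot=500.0, T_cold=300.0))  # 0.4, the upper bound for any real
                                                     # engine operating between 500 K and 300 K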
Entropy Changes in Reversible Processes
A change is said to occur reversibly when it can be carried out in a series of infinitesimal steps, each
one of which can be undone by making a similarly minute change to the conditions that bring the
change about. For example, the reversible expansion of a gas can be achieved by reducing the
external pressure in a series of infinitesimal steps; reversing any step will restore the system and the
surroundings to their previous state. Similarly, heat can be transferred reversibly between two bodies by changing
the temperature difference between them in infinitesimal steps each of which can be undone by
reversing the temperature difference.
The most widely cited example of an irreversible change is the free expansion of a gas into a vacuum.
Although the system can always be restored to its original state by recompressing the gas, this would
require that the surroundings perform work on the gas. Since the gas does no work on the surroundings in a free expansion (the
external pressure is zero, so PΔV = 0), there will be a permanent change in the
surroundings. Another example of irreversible change is the conversion of
mechanical work into frictional heat; there is no way, by reversing the motion of a
weight along a surface, that the heat released due to friction can be restored to the system.
These diagrams show the same expansion and compression ±ΔV carried out in different numbers of
steps ranging from a single step at the top to an "infinite" number of steps at the bottom. As the
number of steps increases, the processes become less irreversible; that is, the difference between
the work done in expansion and that required to re-compress the gas diminishes. In the limit of an
"infinite" number of steps (bottom), these work terms are identical, and both the system and
surroundings (the "world") are unchanged by the expansion-compression cycle. In all other cases the system (the
gas) is restored to its initial state, but the
surroundings are forever changed.
Work and Reversibility
Changes in entropy (ΔS), together with changes in enthalpy (ΔH), enable us to predict in which
direction a chemical or physical change will occur spontaneously. Before discussing how to do so, however, we must understand the
difference between a reversible process and an irreversible one. In a reversible process, every
intermediate state between the extremes is an equilibrium state, regardless of the direction of the change. In contrast, an irreversible process is one in which
the intermediate states are not equilibrium states, so change occurs spontaneously in only one direction. As a result, a reversible process can
change direction at any time, whereas an irreversible process cannot. When a gas expands reversibly
against an external pressure such as a piston, for example, the expansion can be reversed at any time
by reversing the motion of the piston; once the gas is compressed, it can be allowed to expand again,
and the process can continue indefinitely. In contrast, the expansion of a gas into a vacuum
(Pext = 0) is irreversible because the external pressure is measurably less than the internal
pressure of the gas. No equilibrium states exist, and the gas expands irreversibly. When gas escapes from a microscopic hole in a balloon
into a vacuum, for example, the process is irreversible; the direction of airflow cannot change.
ΔSsys for an Isothermal Expansion (or Compression)
As a substance becomes more dispersed in space, the thermal energy it carries is also spread over a
larger volume, leading to an increase in its entropy. Because entropy, like energy, is an extensive property, a dilute
solution of a given substance may well possess a smaller entropy than the same
volume of a more concentrated solution, but the entropy per mole of solute (the molar entropy) will of
course always increase as the solution becomes more dilute.
For gaseous substances, the volume and pressure are respectively direct and inverse measures of
concentration. For an ideal gas
that expands at a constant temperature (meaning that it absorbs heat from the
surroundings to compensate for the work it does during the expansion), the increase in entropy is given by
(13.4.6)  ΔS = R ln(V2/V1)
Note: If the gas is allowed to cool during the expansion, the relation becomes more complicated and
will best be discussed in a more advanced course.
Because the pressure of an ideal gas is inversely proportional to its volume, i.e.,
(13.4.7)  P = nRT/V
we can easily alter Equation 13.4.6 to express the entropy change associated with a change in
the pressure of an ideal gas:
(13.4.8)  ΔS = R ln(P1/P2)
Also, since the concentration c = n/V for an ideal gas is proportional to pressure,
(13.4.9)  P = cRT
we can express the entropy change directly in terms of concentrations, giving the similar relation
(13.4.10)  ΔS = R ln(c1/c2)
Although these equations strictly apply only to perfect gases and cannot be used at all for liquids and
solids, it turns out that in a dilute solution, the solute can often be treated as a gas dispersed in the
volume of the solution, so the last equation can actually give a fairly accurate value for the entropy of
dilution of a solution. We will see later that this has important consequences in determining the
equilibrium concentrations in a homogeneous reaction mixture.
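A short sketch applying Eq. (13.4.6); the factor-of-two volume change is an assumed example, not a value from the text:

import math

R = 8.314  # J mol^-1 K^-1

def delta_S_volume(V1, V2):
    """Molar entropy change for an isothermal ideal-gas expansion from V1 to V2."""
    return R * math.log(V2 / V1)

print(delta_S_volume(1.0, 2.0))   # ≈ +5.76 J K^-1 mol^-1 for a doubling of volume
print(delta_S_volume(2.0, 1.0))   # ≈ -5.76 J K^-1 mol^-1 for the reverse compression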
Temperature Entropy Diagram

Entropy change of a system is given by dS = δQrev/T. During a reversible process, the energy
transfer as heat to the system from the surroundings is therefore given by
(24.1)  Qrev = ∫ T dS

Figure 24.1

Refer to figure 24.1. Here T and S are chosen as independent variables. The integral ∫ T dS is the area under
the curve. The first law of thermodynamics gives δQ = dU + δW. Also, for a reversible process, we
can write
(24.2)  δW = P dV and δQ = T dS
Therefore,
(24.3)  T dS = dU + P dV
For a cyclic process, the above equation reduces to
(24.4)  ∮ T dS = ∮ P dV
For a cyclic process, ∮ T dS represents the net heat interaction, which is equal to the net work done by
the system. Hence the area enclosed by a cycle on a T − S diagram represents the net work done by a
system. For a reversible adiabatic process, we know that
(24.5)  δQ = 0
or,
(24.6)  dS = 0
or,
(24.7)  S = constant

Hence a reversible adiabatic process is also called an isentropic process. On a T − S diagram, the
Carnot cycle can be represented as shown in Fig 24.1. The area under the curve 1-2 represents the
energy Q1 absorbed as heat by the system during the isothermal process. The area under the
curve 3-4 is the energy Q2 rejected as heat by the system. The shaded area represents the net
work done by the system.
We have already seen that the efficiency of a Carnot cycle operating between two thermal reservoirs
at temperatures T1 and T2 is given by
(24.8)  η = 1 − T2/T1
This was derived assuming the working fluid to be an ideal gas. The advantage of the T − S diagram can
be realized by a presentation of the Carnot cycle on the T − S diagram. Let the system change its
entropy from S1 to S2 during the isothermal expansion process 1-2. Then,
(24.9)  Q1 = T1(S2 − S1)
and,
(24.10)  Q2 = T2(S2 − S1)
and,
W = Q1 − Q2 = (T1 − T2)(S2 − S1)
or,
(24.11)  η = W/Q1 = 1 − T2/T1
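A brief numerical sketch of the T − S picture above; the temperatures and entropies are assumed example values, not data from the text:

T1, T2 = 600.0, 300.0          # reservoir temperatures, K (assumed)
S1, S2 = 1.00, 1.50            # entropies at states 1 and 2, kJ/K (assumed)

Q1 = T1 * (S2 - S1)            # heat absorbed during the isothermal expansion 1-2
Q2 = T2 * (S2 - S1)            # heat rejected during the isothermal compression 3-4
W = Q1 - Q2                    # net work = area enclosed on the T-S diagram

print(W)                       # 150.0 kJ
print(W / Q1, 1 - T2 / T1)     # both 0.5, i.e. eta = 1 - T2/T1 as in Eq. (24.11)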

What is the Third Law of Thermodynamics?


The third law of thermodynamics states that the entropy of a perfect crystal at a temperature of zero
Kelvin (absolute zero) is equal to zero.
Entropy, denoted by ‘S’, is a measure of the disorder/randomness in a closed system. It is directly
related to the number of microstates (a fixed microscopic state that can be occupied by a system)
accessible by the system, i.e. the greater the number of microstates the closed system can occupy,
the greater its entropy. The microstate in which the energy of the system is at its minimum is called
the ground state of the system.
At a temperature of zero Kelvin, the following phenomena can be observed in a closed system:
 The system does not contain any heat.
 All the atoms and molecules in the system are at their lowest energy points.
Therefore, a system at absolute zero has only one accessible microstate – its ground state. As per the
third law of thermodynamics, the entropy of such a system is exactly zero.

This law was developed by the German chemist Walther Nernst between the years 1906 and 1912.
Alternate Statements of the 3rd Law of Thermodynamics
The Nernst statement of the third law of thermodynamics implies that it is not possible for a process
to bring the entropy of a given system to zero in a finite number of operations.
The American physical chemists Merle Randall and Gilbert Lewis stated this law differently: when the
entropy of each and every element (in their perfectly crystalline states) is taken as 0 at absolute zero
temperature, the entropy of every substance must have a positive, finite value. However, the entropy
at absolute zero can be equal to zero, as is the case when a perfect crystal is considered.
The Nernst-Simon statement of the 3rd law of thermodynamics can be written as: for a condensed
system undergoing an isothermal process that is reversible in nature, the associated entropy change
approaches zero as the associated temperature approaches zero.
Another implication of the third law of thermodynamics is: the exchange of energy between two
thermodynamic systems (whose composite constitutes an isolated system) is bounded.

Absolute Zero is Unattainable


The third law postulates that the entropy of a substance is always finite and that it approaches a
constant as the temperature approaches zero. The value of this constant is independent of the values
of any other state functions that characterize the substance. For any given substance, we are free to
assign an arbitrarily selected value to the zero-temperature limiting value. However, we cannot
assign arbitrary zero-temperature entropies to all substances. The set of assignments we make must
be consistent with the experimentally observed zero-temperature limiting values of the entropy
changes of reactions among different substances. For perfectly crystalline substances, these reaction
entropies are all zero. We can satisfy this condition by assigning an arbitrary value to the zero-
temperature molar entropy of each element and stipulating that the zero-temperature entropy of
any compound is the sum of the zero-temperature entropies of its constituent elements. This
calculation is greatly simplified if we let the zero-temperature entropy of every element be zero. This
is the essential content of the third law.
The Lewis and Randall statement incorporates this selection of the zero-entropy reference state for
entropies, specifying it as “a crystalline state” of each element at zero degrees. As a result, the
entropy of any substance at zero degrees is greater than or equal to zero. That is, the Lewis and
Randall statement includes a convention that fixes the zero-temperature limiting value of the
entropy of any substance. In this respect, the Lewis and Randall statement makes an essentially
arbitrary choice that is not an intrinsic property of nature. We see, however, that it is an
overwhelmingly convenient choice.
We have discussed alternative statements of the first and second laws. A number of alternative
statements of the third law are also possible. We consider the following:
It is impossible to achieve a temperature of absolute zero.
This statement is more general than the Lewis and Randall statement. If we consider the application
of this statement to the temperatures attainable in processes involving a single substance, we can
show that it implies, and is implied by, the Lewis and Randall statement.
The properties of the heat capacity, CP, play a central role in these arguments. We have seen
that CP is a function of temperature. While it is not useful to do so, we can apply the defining
relationship for CP to a substance undergoing a phase transition and find CP = ∞. If we think
about a substance whose heat capacity is less than zero, we encounter a contradiction of our basic
ideas about heat and temperature: If q > 0 and q/ΔT < 0, we must have ΔT < 0; that is,
heating the substance causes its temperature to decrease. In short, the theory we have developed
embeds premises that require CP > 0 for any system on which we can make measurements.
Let us characterize a pure-substance system by its pressure and temperature and consider reversible
constant-pressure processes in which only pressure–volume work is possible.
Then (∂S/∂T)P = CP/T and dS = CP dT/T. We now want to show: the Lewis and
Randall stipulation that the entropy is always finite requires that the heat capacity go to zero when
the temperature goes to zero. (Since we are going to show that the third law prohibits
measurements at absolute zero, this conclusion is consistent with our conclusion in the previous
paragraph.) That the heat capacity goes to zero when the temperature goes to zero is evident
from S = ∫ CP dT/T. If CP does not go to zero when the temperature goes to
zero, dS becomes arbitrarily large as the temperature goes to zero, which contradicts the Lewis
and Randall statement.
To develop this result more explicitly, we let the heat capacities at temperatures T and zero
be CP(T) and CP(0), respectively. Since CP(T) > 0 for any T > 0, we have
S(T) − S(T∗) > 0 for any T > T∗ > 0. Since the entropy is always finite, ∞ > S(T) − S(T∗) > 0, so that
∞ > lim_{T∗→0} [S(T) − S(T∗)] > 0
and
∞ > lim_{T∗→0} ∫_{T∗}^{T} (CP/T) dT > 0
For temperatures in the neighborhood of zero, we can expand the heat capacity, to arbitrary
accuracy, as a Taylor series polynomial in T:
CP(T) = CP(0) + (∂CP(0)/∂T)P T + (1/2)(∂²CP(0)/∂T²)P T² + …
The inequalities become
∞ > lim_{T∗→0} { CP(0) ln(T/T∗) + (∂CP(0)/∂T)P (T − T∗) + (1/4)(∂²CP(0)/∂T²)P (T − T∗)² + … } > 0
The condition on the left requires CP(0) = 0.
We could view the third law as a statement about the heat capacities of pure substances. We infer
not only that CP > 0 for all T > 0, but also that
lim_{T→0} (CP/T) = 0
More generally, we can infer corresponding assertions for closed reversible systems that are not pure
substances: (∂H/∂T)P > 0 for all T > 0,
and lim_{T→0} T⁻¹(∂H/∂T)P = 0. (The zero-temperature entropies of such
systems are not zero, however.) In the discussion below, we describe the system as a pure substance.
We can make essentially the same arguments for any system; we need only
replace CP by (∂H/∂T)P. The Lewis and Randall statement asserts that the entropy goes to
a constant at absolute zero, irrespective of the values of any other thermodynamic functions. It
follows that the entropy at zero degrees is independent of the value of the pressure. For any two
pressures, P1 and P2, we have S(P2,0) − S(P1,0) = 0.
Letting P = P1 and P2 = P + ΔP, we have
[S(P + ΔP, 0) − S(P, 0)]/ΔP = 0
for any ΔP. Hence, we have
(∂S/∂P)_{T=0} = 0
that is, T2 − T1 < T2∗ − T1. Equivalently, the reversible process reaches a lower
temperature: T2 < T2∗. From
dS = (CP/T) dT − (∂V/∂T)P dP
we can calculate the entropy changes for these processes. For the reversible process, we calculate
ΔSrev = S(P2,T2) − S(P1,T1)
To do so, we first calculate
(ΔS)T = S(P2,T1) − S(P1,T1)
for the isothermal reversible transformation from state P1, T1 to the state specified
by P2 and T1. For this step, dT is zero, and so
(ΔS)T = −∫_{P1}^{P2} (∂V/∂T)P dP
We then calculate
(ΔS)P = S(P2,T2) − S(P2,T1)
for the isobaric reversible transformation from state P2, T1 to state P2, T2. For this
transformation, dP is zero, and
(ΔS)P = ∫_{T1}^{T2} (CP/T) dT
Then,
ΔSrev = S(P2,T2) − S(P1,T1) = ∫_{T1}^{T2} (CP/T) dT − ∫_{P1}^{P2} (∂V/∂T)P dP = 0
Because ΔSrev = 0, the reversible process is unique; that is, given P1, T1, and P2, the
final temperature of the system is determined. We find T2 from

∫_{T1}^{T2} (CP/T) dT = ∫_{P1}^{P2} (∂V/∂T)P dP

What is Enthalpy?
Enthalpy is a measure of the energy of a thermodynamic system. The enthalpy equals
the total heat content of a system, equivalent to the system’s internal energy plus the product
of volume and pressure.
Technically, enthalpy describes the internal energy that is required to generate a system and the
amount of energy that is required to make room for it by establishing its pressure and volume and
displacing its environment.

When a process occurs at constant pressure, the heat exchanged (either absorbed or released) equals
the change in enthalpy. Enthalpy is the sum of the internal energy, denoted by U, and the product of
pressure and volume, denoted by PV, expressed in the following manner.
H=U+PV
Enthalpy is also described as a state function, completely determined by the state functions U, P and V. It is
normally shown by the change in enthalpy (ΔH) of a process between the beginning and final states.
ΔH = ΔU + Δ(PV)
If the pressure doesn’t change throughout the process and the work is limited to
pressure–volume work, the change in enthalpy is given by
ΔH=ΔU+PΔV
The flow of heat (q) at constant pressure in a process equals the change in enthalpy based on the
following equation,
ΔH = q
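A minimal sketch of ΔH = ΔU + PΔV and q = ΔH at constant pressure; the numbers below are assumed for illustration, not values from the text:

delta_U = 500.0     # change in internal energy, J (assumed)
P = 100.0e3         # constant pressure, Pa
delta_V = 1.0e-3    # change in volume, m^3 (an expansion)

delta_H = delta_U + P * delta_V
q_p = delta_H       # heat exchanged at constant pressure equals the enthalpy change
print(delta_H, q_p)  # 600.0 J each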
Gibbs Free Energy
Gibbs free energy, also known as the Gibbs function, Gibbs energy, or free enthalpy, is a quantity
that is used to measure the maximum amount of work done in a thermodynamic system when the
temperature and pressure are kept constant. Gibbs free energy is denoted by the symbol ‘G’. Its
value is usually expressed in Joules or Kilojoules. Gibbs free energy can be defined as the maximum
amount of work that can be extracted from a closed system.

This property was determined by American scientist Josiah Willard Gibbs in the year 1876 when he
was developing methods to predict the behaviour of systems when combined together or
whether a process could occur spontaneously. Gibbs free energy was also
previously known as “available energy.” It can be visualised as the amount of useful energy present in
a thermodynamic system that can be utilised to perform some work.
Gibbs Free Energy Equation
Gibbs free energy is equal to the enthalpy of the system minus the product of the temperature
and entropy. The equation is given as:
G = H – TS
Where,
G = Gibbs free energy
H = enthalpy
T = temperature
S = entropy

What is Helmholtz free energy?


Helmholtz free energy is a concept in thermodynamics where the work of a closed system with
constant temperature and volume is measured using thermodynamic potential. It may be described
as the following equation:
 F = U -TS
 Where,
 F = Helmholtz free energy in Joules
 U = Internal energy of the system in Joules
 T = Absolute temperature of the surroundings in Kelvin
 S = Entropy of the system in joules per Kelvin
Derivation:
The Helmholtz relation is derived using the laws of thermodynamics. According to the 1st law of
thermodynamics,
 𝜹Q = 𝜹W + dU
If the 1st law of thermodynamics is applied to closed systems,
 For the closed system
 𝜹Q = TdS
 𝜹W = PdV
 dU = TdS – PdV = d(TS) – SdT – PdV
 Note: d(TS) = SdT + TdS
 dU – d(TS) = – (SdT + PdV)
 dF = – (SdT + PdV), i.e., dF = – SdT – PdV
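A short sketch of F = U − TS; the state values are hypothetical and only illustrate that, at constant temperature, the decrease in F equals the reversible work obtainable:

def helmholtz(U, T, S):
    """Helmholtz free energy F = U - T*S (all quantities in SI units)."""
    return U - T * S

# Hypothetical state values (illustrative assumptions only):
T = 300.0                               # K
F1 = helmholtz(U=1500.0, T=T, S=2.0)    # state 1
F2 = helmholtz(U=1500.0, T=T, S=2.5)    # state 2 at the same U and T
print(F1, F2, F1 - F2)  # 900.0, 750.0 and 150.0 J; at constant T the decrease in F
                        # (150 J here) is the maximum work obtainable from the change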
Application of Helmholtz Free Energy
In equation of state:
The Helmholtz function is used to describe pure fluids with great precision as the sum of an ideal gas
and residual components, such as industrial refrigerants.
In auto-encoder:

An artificial neural network called an auto-encoder is used to encode data efficiently. The total cost
of the original code and the rebuilt code is calculated here using Helmholtz energy.

Returning to the Gibbs energy, G = H – TS can also be written more completely as:
G = U + PV – TS
Where,
 U = internal energy (SI unit: joule)
 P = pressure (SI unit: pascal)
 V = volume (SI unit: m3)
 T = temperature (SI unit: kelvin)
 S = entropy (SI unit: joule/kelvin)
Variations of the Equation
Gibbs free energy is a state function; hence it doesn’t depend on the path. So, change in Gibbs free
energy is equal to the change in enthalpy minus the product of temperature and entropy change of
the system.
ΔG = ΔH – Δ(TS)
If the reaction is carried out under constant temperature (ΔT = 0),
ΔG = ΔH – TΔS
This equation is called the Gibbs-Helmholtz equation.
ΔG > 0; the reaction is non-spontaneous and endergonic
ΔG < 0; the reaction is spontaneous and exergonic
ΔG = 0; the reaction is at equilibrium
Note:
1. According to the second law of thermodynamics, the entropy of the universe always
increases in a spontaneous process.
2. ΔG determines the direction and extent of chemical change.
3. ∆G is meaningful only for reactions in which the temperature and pressure remain constant.
The system is usually open to the atmosphere (constant pressure), and we begin and end the
process at room temperature (after any heat that we have added or which is liberated by the
reaction has dissipated).
4. ∆G serves as the single master variable that determines whether a given chemical change is
thermodynamically possible. Thus, if the free energy of the reactants is greater than that of
the products, the entropy of the world will increase when the reaction takes place as written,
and so the reaction will tend to take place spontaneously. ΔS universe = ΔS system + ΔS
surroundings
5. If ΔG is negative, the process will occur spontaneously and is referred to as exergonic.
6. Therefore, spontaneity is dependent on the temperature of the system.
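A brief sketch of ΔG = ΔH − TΔS for the melting of ice, using approximate literature values (ΔH ≈ 6010 J/mol, ΔS ≈ 22.0 J/(mol K)); these numbers are assumptions for the example, not values given in the text:

dH = 6010.0    # J/mol, approximate enthalpy of fusion of ice (assumed)
dS = 22.0      # J/(mol K), approximate entropy of fusion (assumed)

for T in (263.15, 273.15, 283.15):   # -10 °C, 0 °C, +10 °C
    dG = dH - T * dS
    sign = "spontaneous" if dG < 0 else ("equilibrium" if abs(dG) < 50 else "non-spontaneous")
    print(f"T = {T:.2f} K: dG = {dG:+.0f} J/mol ({sign})")

As expected, ΔG changes sign near 273 K: melting is non-spontaneous below the melting point and spontaneous above it.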

The Maxwell Relations



Modeling how the Gibbs and Helmholtz functions behave with varying temperature,
pressure, and volume is fundamentally useful. But in order to do that, a little bit more development
is necessary. To see the power and utility of these functions, it is useful to combine the First and
Second Laws into a single mathematical statement. In order to do that, one notes that since
dS = dq/T
for a reversible change, it follows that
dq = T dS
And since
dw = −p dV
for a reversible expansion in which only p-V work is done, it also follows that
(since dU = dq + dw):
dU = T dS − p dV
This is an extraordinarily powerful result. This differential for dU can be used to simplify the
differentials for H, A, and G. But even more useful are the constraints it places on the variables T,
S, p, and V due to the mathematics of exact differentials!
Maxwell Relations
The above result suggests that the natural variables of internal energy are S and V (or the function
can be considered as U(S,V)). So the total differential (dU) can be expressed:
dU = (∂U/∂S)V dS + (∂U/∂V)S dV
Also, by inspection (comparing the two expressions for dU) it is apparent that:
(22.3.1)  (∂U/∂S)V = T
and
(22.3.2)  (∂U/∂V)S = −p
But the value doesn’t stop there! Since dU is an exact differential, the Euler relation must hold,
that is
[∂/∂V (∂U/∂S)V]S = [∂/∂S (∂U/∂V)S]V
By substituting Equations 22.3.1 and 22.3.2, we see that
[∂/∂V (T)]S = [∂/∂S (−p)]V
or
(∂T/∂V)S = −(∂p/∂S)V
This is an example of a Maxwell Relation. These are very powerful relationships that allow one to
substitute partial derivatives when one is more convenient (perhaps it can be expressed entirely in
terms of α and/or κT, for example).
A similar result can be derived based on the definition of H:
H ≡ U + pV
Differentiating (and using the chain rule on d(pV)) yields
dH = dU + p dV + V dp
Making the substitution using the combined first and second laws (dU = TdS – pdV) for a
reversible change involving only expansion (p-V) work,
dH = TdS – pdV + pdV + Vdp
This expression can be simplified by canceling the pdV terms:
(22.3.3)  dH = TdS + Vdp
And much as in the case of internal energy, this suggests that the natural variables
of H are S and p. Or
(22.3.4)  dH = (∂H/∂S)p dS + (∂H/∂p)S dp
Comparing Equations 22.3.3 and 22.3.4 shows that
(22.3.5)  (∂H/∂S)p = T
and

(22.3.6)  (∂H/∂p)S = V
It is worth noting at this point that both (Equation 22.3.1)
(∂U/∂S)V
and (Equation 22.3.5)
(∂H/∂S)p
are equal to T. So they are equal to each other:
(∂U/∂S)V = (∂H/∂S)p
Moreover, the Euler Relation must also hold:
[∂/∂p (∂H/∂S)p]S = [∂/∂S (∂H/∂p)S]p
so
(∂T/∂p)S = (∂V/∂S)p
This is the Maxwell relation on H. Maxwell relations can also be developed based on A and G.
The results of those derivations are summarized in Table 6.2.1.
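A hedged numerical check of the first Maxwell relation derived above, (∂T/∂V)S = −(∂p/∂S)V, for one mole of a monatomic ideal gas; the reference state and step sizes are assumptions made for this sketch:

import math

R = 8.314          # J mol^-1 K^-1
Cv = 1.5 * R       # monatomic ideal gas
T0, V0, S0 = 300.0, 0.025, 0.0   # assumed reference state (K, m^3, J/K)

def T_of(S, V):
    """Temperature as a function of its natural variables S and V,
    from inverting S = Cv ln(T/T0) + R ln(V/V0) + S0."""
    return T0 * (V0 / V) ** (R / Cv) * math.exp((S - S0) / Cv)

def p_of(S, V):
    """Pressure from the ideal gas law, p = R T(S,V) / V."""
    return R * T_of(S, V) / V

S, V, h = 0.0, 0.025, 1e-6
dT_dV_at_S = (T_of(S, V + h) - T_of(S, V - h)) / (2 * h)   # (dT/dV) at constant S
dp_dS_at_V = (p_of(S + h, V) - p_of(S - h, V)) / (2 * h)   # (dp/dS) at constant V

print(dT_dV_at_S, -dp_dS_at_V)   # both ≈ -8.0e3, so the two sides of the relation agree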
Derivation of Maxwell’s law of distribution of velocities and
its experimental verification
Introduction
The kinetic molecular theory is used to determine the motion of a molecule of an ideal gas under a
certain set of conditions. However, when looking at a mole of ideal gas, it is impossible to measure
the velocity of each molecule at every instant of time. Therefore, the Maxwell-Boltzmann
distribution is used to determine how many molecules are moving between
velocities v and v + dv. Assuming that the one-dimensional distributions are independent of one
another, that the velocity in the y and z directions does not affect the x velocity, for example, the
Maxwell-Boltzmann distribution is given by
(3.1.2.1)  dN/N = (m/(2π kB T))^(1/2) e^(−mv²/(2kBT)) dv
where
 dN/N is the fraction of molecules moving at velocity v to v + dv,
 m is the mass of the molecule,
 kB is the Boltzmann constant, and
 T is the absolute temperature.
Additionally, the function can be written in terms of the scalar quantity speed c instead of the vector
quantity velocity. This form of the function defines the distribution of the gas molecules moving at
different speeds, between c1 and c2, thus
(3.1.2.2)  f(c) = 4πc² (m/(2π kB T))^(3/2) e^(−mc²/(2kBT))
Finally, the Maxwell-Boltzmann distribution can be used to determine the distribution of the kinetic
energy for a set of molecules. The distribution of the kinetic energy is identical to the distribution
of the speeds for a certain gas at any temperature.
Plotting the Maxwell-Boltzmann Distribution Function
Figure 1 shows the Maxwell-Boltzmann distribution of speeds for a certain gas at a certain
temperature, such as nitrogen at 298 K. The speed at the top of the curve is called the most probable
speed because the largest number of molecules have that speed.

Figure 1: The Maxwell-Boltzmann distribution is shifted to higher speeds and is broadened at higher temperatures (from OpenStax).
Figure 2 shows how the Maxwell-Boltzmann distribution is affected by temperature. At lower
temperatures, the molecules have less energy. Therefore, the speeds of the molecules are lower and
the distribution has a smaller range. As the temperature of the molecules increases, the distribution
flattens out. Because the molecules have greater energy at higher temperature, the molecules are
moving faster.

Figure 2: The Maxwell-Boltzmann distribution is shifted to higher speeds and is broadened at higher temperatures (from OpenStax).
Figure 3 shows the dependence of the Maxwell-Boltzmann distribution on molecule mass. On
average, heavier molecules move more slowly than lighter molecules. Therefore, heavier molecules

will have a smaller speed distribution, while lighter molecules will have a speed distribution that is
more spread out.

Figure 3: The speed probability density functions of the speeds of a few noble gases at a temperature of 298.15 K
(25 °C). The y-axis is in s/m so that the area under any section of the curve (which represents the
probability of the speed being in that range) is dimensionless. Figure is used with permission from
Wikipedia.
Related Speed Expressions
Three speed expressions can be derived from the Maxwell-Boltzmann distribution: the most
probable speed, the average speed, and the root-mean-square speed. The most probable speed is
the maximum value on the distribution plot. This is established by finding the velocity when the
following derivative is zero
df(c)/dc |_(c = Cmp) = 0
which is
(3.1.2.3)  Cmp = √(2RT/M)
The average speed is the sum of the speeds of all the molecules divided by the number of molecules:
(3.1.2.4)  Cavg = ∫_0^∞ c f(c) dc = √(8RT/(πM))
The root-mean-square speed is the square root of the average of the squared speeds:
(3.1.2.5)  Crms = √(3RT/M)
where
 R is the gas constant,
 T is the absolute temperature and
 M is the molar mass of the gas.
It always follows that for gases that follow the Maxwell-Boltzmann distribution (if thermalized)
(3.1.2.6)  Cmp < Cavg < Crms
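A short sketch evaluating Eqs. (3.1.2.3)-(3.1.2.5) for nitrogen at 298 K; the molar mass is an assumed illustrative value:

import math

R = 8.314      # J mol^-1 K^-1
T = 298.0      # K
M = 0.028      # kg/mol, approximate molar mass of N2 (assumed)

c_mp  = math.sqrt(2 * R * T / M)               # most probable speed
c_avg = math.sqrt(8 * R * T / (math.pi * M))   # average speed
c_rms = math.sqrt(3 * R * T / M)               # root-mean-square speed

print(c_mp, c_avg, c_rms)   # ≈ 421, 475, 515 m/s, so Cmp < Cavg < Crms as in Eq. (3.1.2.6)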
Concept of mean free path
In a gaseous system, the molecules never move in a straight path without interruptions. This is
because they collide with each other and change speed and direction. Between every two collisions,
a molecule travels a path length. The mean free path is the average of all path lengths between
collisions. In this article, we will derive an expression for the mean free path.


What is Mean Free Path?


A gas molecule’s mean free path λ is its average path length between collisions.
Mathematically the mean free path can be represented as follows:
λ = 1 / (√2 π d² (N/V))
Let’s look at the motion of a gas molecule inside an ideal gas; a typical molecule inside an ideal gas
will abruptly change its direction and speed as it collides elastically with other molecules of the same
gas. Between collisions, the molecule moves in a straight line at some constant speed;
this applies to all the molecules in the gas.
It is difficult to measure or describe this random motion of gas molecules; thus, we attempt to
measure its mean free path λ.
As its name says, λ is the average distance travelled by any molecule between collisions. We expect λ
to vary inversely with N/V, the number of molecules per unit volume or the density of
molecules, because if there are more molecules, the chances of them colliding with each
other are greater, reducing the mean free path. We also expect λ to vary inversely with the size of the molecules,
because if the molecules were point masses they would never collide with
each other; thus, the larger the molecule, the smaller the mean free path. The dependence should be on π times
the diameter squared, and not on the diameter itself, because it is the circular cross-section of the molecule
that matters for a collision.
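A brief sketch of the mean free path formula, with N/V = P/(kB T) for an ideal gas; the molecular diameter and conditions are assumed illustrative values, not data from the text:

import math

kB = 1.381e-23      # J/K
P = 101325.0        # Pa
T = 298.0           # K
d = 3.7e-10         # m, assumed effective molecular diameter (nitrogen-like)

n_density = P / (kB * T)                              # molecules per m^3
mfp = 1.0 / (math.sqrt(2) * math.pi * d**2 * n_density)
print(n_density, mfp)   # ≈ 2.5e25 m^-3 and a mean free path of roughly 6.7e-8 m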

Energy Density Formula


Energy density is a measure of the amount of energy that can be stored in a given mass of a
substance or a system. So, the higher the energy density of a system or material, the greater
the amount of energy stored in its mass. Energy can be stored in many varieties of substances and
systems. A material can release energy in four types of reactions: nuclear, chemical,
electrochemical and electrical. When calculating the amount of energy in a system, most often only
useful or extractable energy is measured. In scientific equations, we often compute energy density.
In this topic, we will discuss the energy density formula with examples. Let us learn it!

What is energy density?

Energy Density can be defined as the total amount of energy in a system per unit volume. For
example, the number of calories available per gram weight of food. Foodstuffs with low energy
density provide less energy per gram of food, which means we can eat more of them
for the same number of calories.
Therefore we may say that energy density is the amount of energy accumulated in a system per unit
volume. It is denoted by the letter U. Magnetic and electric fields can also store
energy.
Energy Density Formula
In the case of electric field or capacitor, the energy density formula is expressed as below:
Electrical energy density = (permittivity × electric field squared) / 2. In the form of an equation,
UE = (1/2) ε0 E²
The energy density formula in the case of a magnetic field or inductor is as below:
Magnetic energy density = magnetic field squared / (2 × magnetic permeability)
In the form of an equation,
UB = B² / (2 μ0)
The general energy density is:
U = UE + UB

In an electromagnetic wave, the magnetic and electric fields contribute
equally to the energy density. Thus, the total energy density is the sum of the energy densities
of the electric and magnetic fields.
Solved Examples
Q.1: In a certain region of space, the magnetic field has a value of 3×10⁻² T and the electric field has a value of 9×10⁷ V m⁻¹. Determine the combined energy density of the electric and magnetic fields.
Solution: First we calculate the energy density of each field separately, then add the two densities to obtain the total energy density.
The given parameters are:
B = 3×10⁻² T
E = 9×10⁷ V m⁻¹
ε₀ = 8.85×10⁻¹² C² N⁻¹ m⁻²
μ₀ = 4π×10⁻⁷ N A⁻²
Thus the electrical energy density is
U_E = (1/2) ε₀ E² = (1/2) × 8.85×10⁻¹² × (9×10⁷)² = 35842.5 J m⁻³
The magnetic energy density is
U_B = B² / (2μ₀) = (3×10⁻²)² / (2 × 4π×10⁻⁷) = 358.1 J m⁻³
Thus the total energy density is
U = U_E + U_B = 35842.5 + 358.1 = 36200.6 J m⁻³
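The arithmetic above is easy to check numerically; here is a minimal Python sketch that reproduces it (the constant values are the rounded ones used in the example).

```python
import math

eps0 = 8.85e-12            # vacuum permittivity, C^2 N^-1 m^-2
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, N A^-2
E = 9e7                    # electric field, V/m
B = 3e-2                   # magnetic field, T

u_E = 0.5 * eps0 * E**2    # electric field energy density
u_B = B**2 / (2 * mu0)     # magnetic field energy density

print(u_E, u_B, u_E + u_B)  # ≈ 35842.5, 358.1, 36200.6 J m^-3
```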
Derivation of Planck’s Radiation law


Planck considered the black-body radiation in the hohlraum (cavity) to consist of linear oscillators of molecular dimensions, and assumed that the energy of a linear oscillator can take only the discrete values

0, hν, 2hν, 3hν, …, nhν, …

If N₀, N₁, N₂, … are the numbers of oscillators per unit volume of the hohlraum possessing energies 0, hν, 2hν, … respectively, then the total number of oscillators N per unit volume will be

N = N₀ + N₁ + N₂ + …

But the number of oscillators N_r having energy E_r = rhν is given by the Maxwell–Boltzmann distribution:

N_r = N₀ e^(−rhν/kT)

Putting these values in the equation above, we get

N = N₀ + N₀ e^(−hν/kT) + N₀ e^(−2hν/kT) + …

= N₀ [1 + e^(−hν/kT) + e^(−2hν/kT) + …] = N₀ / (1 − e^(−hν/kT))

The total energy of the N oscillators per unit volume will be

E = 0·N₀ + hν·N₀ e^(−hν/kT) + 2hν·N₀ e^(−2hν/kT) + …

= N₀ hν e^(−hν/kT) [1 + 2e^(−hν/kT) + 3e^(−2hν/kT) + …] = N₀ hν e^(−hν/kT) / (1 − e^(−hν/kT))²

Hence the average energy per oscillator is

ε̄ = E/N = hν e^(−hν/kT) / (1 − e^(−hν/kT)) = hν / (e^(hν/kT) − 1)

(on dividing numerator and denominator by e^(−hν/kT)).

Thus we see that the average energy of an oscillator is not kT (as given by classical theory) but hν/(e^(hν/kT) − 1), according to Planck's quantum theory.
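A short numerical check (a sketch, with rounded values of h and k assumed) shows how this average energy approaches the classical value kT at low frequencies and falls exponentially below it at high frequencies.

```python
import math

h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K

def planck_mean_energy(nu, T):
    """Average oscillator energy  h*nu / (exp(h*nu/kT) - 1)."""
    return h * nu / math.expm1(h * nu / (k * T))

T = 300.0
for nu in (1e11, 1e13, 1e15):   # low, intermediate, high frequency
    print(f"nu = {nu:.0e} Hz:  <E> = {planck_mean_energy(nu, T):.3e} J,  kT = {k*T:.3e} J")
```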
Further, it can be shown that the number of oscillators per unit volume having frequencies in the range ν to ν + dν is

(8πν²/c³) dν

Hence the average energy per unit volume (i.e., the energy density) inside the enclosure in this frequency range is obtained by multiplying this number of oscillators by the average energy per oscillator, i.e.

u_ν dν = (8πν²/c³) · hν/(e^(hν/kT) − 1) · dν = (8πhν³/c³) · dν/(e^(hν/kT) − 1)

Putting ν = c/λ and |dν| = (c/λ²) dλ, the average energy per unit volume for wavelengths between λ and λ + dλ becomes

u_λ dλ = (8πhc/λ⁵) · dλ/(e^(hc/λkT) − 1)

The energy radiated per unit area per unit time by the black body at wavelength λ, obtained by multiplying u_λ by c/4, is then

E_λ = (2πhc²/λ⁵) · 1/(e^(hc/λkT) − 1)

which is Planck's radiation law (or Planck's distribution law).


The above equation is also quite often written in the form

E_λ = c₁ λ⁻⁵ / (e^(c₂/λT) − 1)

where c₁ = 2πhc² and c₂ = hc/k are universal constants (the first and second radiation constants).
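As an illustrative sketch (using rounded constants, and the energy-density form u_λ = 8πhc λ⁻⁵/(e^(hc/λkT) − 1) rather than the exitance), the following Python snippet evaluates the Planck distribution at a few wavelengths for T = 5000 K; the largest of the three values occurs near the Wien peak at about 580 nm.

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI units (rounded)

def planck_u_lambda(lam, T):
    """Spectral energy density 8*pi*h*c / lam^5 / (exp(hc/(lam*k*T)) - 1), in J m^-4."""
    return 8 * math.pi * h * c / lam**5 / math.expm1(h * c / (lam * k * T))

T = 5000.0
for lam in (200e-9, 580e-9, 2000e-9):
    print(f"{lam*1e9:6.0f} nm -> {planck_u_lambda(lam, T):.3e} J/m^4")
```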


Deduction of Wien's Displacement Law and Stefan's Law
Wien's and Stefan's Laws are found, respectively, by differentiation and integration of Planck's
equation. Neither of these is particularly easy, and they are not found in every textbook. Therefore, I
derive them here.
Wien's Law
Planck's equation for the exitance per unit wavelength interval (equation 2.6.1) is

\frac{M}{C} = \frac{1}{\lambda^{5}\left(e^{K/\lambda T}-1\right)},    (2.11.1)

in which I have omitted some subscripts. Differentiation gives

\frac{1}{C}\frac{dM}{d\lambda} = -\frac{1}{\lambda^{10}\left(e^{K/\lambda T}-1\right)^{2}}\left[5\lambda^{4}\left(e^{K/\lambda T}-1\right)+\lambda^{5}\left(-\frac{K}{\lambda^{2}T}\right)e^{K/\lambda T}\right].    (2.11.2)
M is greatest when this derivative is zero; that is, when

x = 5\left(1-e^{-x}\right),    (2.11.3)

where

x = \frac{K}{\lambda T}.    (2.11.4)

Hence, with equation 2.6.9, the wavelength at which M is a maximum is given by

\lambda = \frac{hc}{kxT}.    (2.11.5)

The maximum value of M is found by substituting this value of λ back into Planck's equation, to arrive at equation 2.7.16. The corresponding versions of Wien's Law appropriate to the other versions of Planck's equation are found similarly.
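Equation 2.11.3 has no elementary closed-form solution, but it is easy to solve numerically. The minimal Python sketch below (fixed-point iteration, with rounded values of h, c and k assumed) gives x ≈ 4.965 and hence the familiar Wien displacement constant λ_max T = hc/(kx) ≈ 2.9×10⁻³ m K.

```python
import math

# Solve x = 5*(1 - exp(-x)) by fixed-point iteration, starting near 5
x = 5.0
for _ in range(50):
    x = 5.0 * (1.0 - math.exp(-x))
print(x)                    # ≈ 4.9651

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
print(h * c / (k * x))      # lambda_max * T ≈ 2.9e-3 m K
```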
Stefan's Law
Integration of Planck's equation to arrive at Stefan's law is a bit more tricky.
It should be clear that \int_{0}^{\infty} M_{\lambda}\,d\lambda = \int_{0}^{\infty} M_{\nu}\,d\nu, and therefore I choose to integrate the easier of the functions, namely M_{\nu}. To integrate M_{\lambda}, the first thing we would do anyway would be to make the substitution \nu = c/\lambda.
Planck's equation for the blackbody exitance per unit frequency interval is M_{\nu} = C_{3}\nu^{3}/\left(e^{K_{2}\nu/T}-1\right), so the total exitance is

M = C_{3}\int_{0}^{\infty} \frac{\nu^{3}\,d\nu}{e^{K_{2}\nu/T}-1}.    (2.11.6)

Let x = K_{2}\nu/T; then

M = \frac{2\pi k^{4}T^{4}}{c^{2}h^{3}}\int_{0}^{\infty} \frac{x^{3}\,dx}{e^{x}-1},    (2.11.7)

and, except for the numerical value of the integral, we already have Stefan's law. The integral can be evaluated numerically, but not without difficulty, and there is also an analytical solution for it.
Consider the indefinite integral and integrate it by parts:

\int \frac{x^{3}\,dx}{e^{x}-1} = x^{3}\ln\left(1-e^{-x}\right) - 3\int x^{2}\ln\left(1-e^{-x}\right)dx + \text{const.}    (2.10.1)

Now put the limits in:

\int_{0}^{\infty} \frac{x^{3}\,dx}{e^{x}-1} = -3\int_{0}^{\infty} x^{2}\ln\left(1-e^{-x}\right)dx.    (2.10.2)

Write down the Maclaurin expansion of the integrand:

\int_{0}^{\infty} \frac{x^{3}\,dx}{e^{x}-1} = 3\int_{0}^{\infty} x^{2}\left(e^{-x}+\tfrac{1}{2}e^{-2x}+\tfrac{1}{3}e^{-3x}+\dots\right)dx    (2.10.3)

and integrate term by term to obtain

\int_{0}^{\infty} \frac{x^{3}\,dx}{e^{x}-1} = 6\left(1+\frac{1}{2^{4}}+\frac{1}{3^{4}}+\dots\right).    (2.11.8)
We must now evaluate 1 + \frac{1}{2^{4}} + \frac{1}{3^{4}} + \dots

The series \sum_{n=1}^{\infty} \frac{1}{n^{m}} is the Riemann \zeta-function. For m = 1, it diverges. For m = 3, 5, 7, etc., it has to be evaluated numerically. For m = 2, 4, 6, etc., the sums can be written explicitly in terms of \pi. For example:

\zeta(2) = \frac{\pi^{2}}{6},    (2.10.4)

\zeta(4) = \frac{\pi^{4}}{90},    (2.10.5)

\zeta(6) = \frac{\pi^{6}}{945}.    (2.10.6)
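These values are easy to confirm numerically; the sketch below checks ζ(4) = π⁴/90 by direct summation and also checks that the integral in equation 2.11.8 equals 6ζ(4) = π⁴/15 by a crude Riemann sum (the step size and truncation limits are arbitrary choices).

```python
import math

# Check zeta(4) = pi^4 / 90 by direct summation
zeta4 = sum(1.0 / n**4 for n in range(1, 100001))
print(zeta4, math.pi**4 / 90)

# Crude Riemann-sum check that the integral of x^3/(e^x - 1) over (0, inf) is pi^4/15
dx = 1e-3
integral = 0.0
for i in range(1, 40000):         # integrate out to x = 40, beyond which the integrand is negligible
    x = i * dx
    integral += x**3 / math.expm1(x) * dx
print(integral, math.pi**4 / 15)  # both ≈ 6.4939
```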
One of the stages necessary in evaluating the \zeta-function is to derive the infinite product

\frac{\sin\alpha\pi}{\alpha\pi} = \left[1-\alpha^{2}\right]\left[1-\left(\tfrac{1}{2}\alpha\right)^{2}\right]\left[1-\left(\tfrac{1}{3}\alpha\right)^{2}\right]\dots    (2.11.9)

If we can do that, we are more than halfway there.
Let's start by considering the Fourier expansion of \cos\theta x:

\cos\theta x = \sum_{n=0}^{\infty} a_{n}\cos nx    (2.11.10)

In Equation 2.11.10, n is an integer, θ not necessarily so; we shall suppose that θ is some number between 0 and 1. There is no need to consider any sine terms, because \cos\theta x is an even function of x. We work out what the Fourier coefficients are in the usual way, to get

a_{n} = \frac{(-1)^{n}\,2\theta\sin\theta\pi}{\pi\left(\theta^{2}-n^{2}\right)},\qquad n = 1, 2, 3, \dots    (2.11.11)

As usual, and for the usual reason, a_{0} is an exception:
a_{0} = \frac{\sin\theta\pi}{\theta\pi}.    (2.11.12)
We have therefore arrived at the Fourier expansion of \cos\theta x:

\cos\theta x = \frac{2\theta\sin\theta\pi}{\pi}\left(\frac{1}{2\theta^{2}}-\frac{\cos x}{\theta^{2}-1^{2}}+\frac{\cos 2x}{\theta^{2}-2^{2}}-\frac{\cos 3x}{\theta^{2}-3^{2}}+\dots\right).    (2.11.13)

Put x = \pi and rearrange slightly:

\pi\cot\theta\pi - \frac{1}{\theta} = 2\theta\left(\frac{1}{\theta^{2}-1^{2}}+\frac{1}{\theta^{2}-2^{2}}+\dots\right).    (2.11.14)

Since we are assuming that θ is some number between 0 and 1, we shall re-write this so that the denominators are all positive:

\pi\cot\theta\pi - \frac{1}{\theta} = -\frac{2\theta}{1^{2}-\theta^{2}}-\frac{2\theta}{2^{2}-\theta^{2}}-\dots    (2.11.15)
Now multiply both sides by dθ and integrate from θ = 0 to θ = α. The integration must be done with care. The indefinite integral of the left hand side is \ln\sin\theta\pi - \ln\theta + \text{constant}, i.e. \ln\left(\frac{\sin\theta\pi}{\theta}\right) + \text{constant}. The definite integral between 0 and α is \ln\left(\frac{\sin\alpha\pi}{\alpha}\right) - \lim_{\theta\to 0}\ln\left(\frac{\sin\theta\pi}{\theta}\right). The limit of the second term is \ln\pi, so the definite integral is \ln\left(\frac{\sin\alpha\pi}{\alpha\pi}\right). Integrating the right hand side is a bit easier, so we arrive at

\ln\left(\frac{\sin\alpha\pi}{\alpha\pi}\right) = \ln\left(\frac{1^{2}-\alpha^{2}}{1^{2}}\right)+\ln\left(\frac{2^{2}-\alpha^{2}}{2^{2}}\right)+\dots    (2.11.16)

On taking the antilogarithm, we arrive at the required infinite product:

\frac{\sin\alpha\pi}{\alpha\pi} = \left[1-\alpha^{2}\right]\left[1-\left(\tfrac{1}{2}\alpha\right)^{2}\right]\left[1-\left(\tfrac{1}{3}\alpha\right)^{2}\right]\dots    (2.11.17)
Now expand this as a power series in α²:

\frac{\sin\alpha\pi}{\alpha\pi} = 1 + (\;)\alpha^{2} + (\;)\alpha^{4} + (\;)\alpha^{6} + \dots    (2.11.18)

The first coefficient is easy, but subsequent ones rapidly get more difficult; you do have to get at least as far as α⁴.

Now compare this expansion with the ordinary Maclaurin expansion:

\frac{\sin\alpha\pi}{\alpha\pi} = 1 - \frac{\pi^{2}}{3!}\alpha^{2} + \frac{\pi^{4}}{5!}\alpha^{4} - \dots    (2.11.19)

and we arrive at the correct expressions for the Riemann \zeta-functions. We then get for Stefan's law:

M = \frac{2\pi^{5}k^{4}}{15h^{3}c^{2}}\,T^{4} = \sigma T^{4},    (2.11.20)

where \sigma = 5.6705\times10^{-8}\ \text{W m}^{-2}\,\text{K}^{-4}.
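As a final sanity check (a sketch using rounded values of h, k and c), the Stefan constant can be evaluated directly from equation 2.11.20:

```python
import math

h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K
c = 2.998e8     # speed of light, m/s

sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)
print(sigma)    # ≈ 5.67e-8 W m^-2 K^-4
```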
Questions
Finally, now that you have struggled through Riemann’s zeta-function, let’s just make sure that you
have understood the really simple stuff, so here are a couple of easy questions – and you won’t have
to bother with zeta-functions.
1. By what factor should the temperature of a black body be increased so that
a) The integrated radiance (over all frequencies) is doubled?
b) The frequency at which its radiance is greatest is doubled?
c) The spectral radiance per unit wavelength interval at its wavelength of maximum spectral radiance
is doubled?
2. A block of shiny silver (absorptance = 0.23) has a bubble inside it of radius 2.2 cm, and it is held at a temperature of 1200 K.
A block of dull black carbon (absorptance = 0.86) has a bubble inside it of radius 4.3 cm, and it is held at a temperature of 2300 K.
Calculate the ratio

\frac{\text{integrated radiation energy density inside the carbon bubble}}{\text{integrated radiation energy density inside the silver bubble}}.    (2.10.7)
Deriving the Rayleigh-Jeans Radiation Law
The Rayleigh-Jeans Radiation Law was a useful, but not completely successful, attempt at establishing the functional form of the spectrum of thermal radiation. The energy density u_ν per unit frequency interval at a frequency ν is, according to the Rayleigh-Jeans law,

u_ν = 8πν²kT / c³

where k is Boltzmann's constant, T is the absolute temperature of the radiating body, and c is the speed of light in a vacuum.
This formula fits the empirical measurements for low frequencies, but fails increasingly for higher frequencies.
The failure of the formula to match the new data was called the ultraviolet catastrophe. The significance of this
inadequate so-called law is that it provides an asymptotic condition which other proposed formulas, such as
Planck's, need to satisfy. It gives a value to an otherwise arbitrary constant in Planck's thermal radiation
formula.
The Derivation
Consider a cube of edge length L in which radiation is being reflected and re-reflected off its walls. Standing waves occur for radiation of a wavelength λ only if an integral number of half-wave cycles fit into an interval in the cube. For radiation parallel to an edge of the cube this requires

L / (λ/2) = m

where m is an integer or, equivalently,

λ = 2L/m
Between two end points there can be two standing waves, one for each polarization. In the following the
matter of polarization will be ignored until the end of the analysis and there the number of waves will be
doubled to take into account the matter of polarization.
Since the frequency ν is equal to c/λ, where c is the speed of light,

ν = cm / (2L)

It is convenient to work with the quantity q, known as the wavenumber, which is defined as

q = 2π/λ

and hence

q = 2πν/c

In terms of the relationship for the cube,

q = 2πm/(2L) = π(m/L)

and hence

q² = π²(m/L)²

Another convenient term is the radian frequency ω = 2πν. From this it follows that q = ω/c.
If m_X, m_Y, m_Z denote the integers for the three different directions in the cube, then the condition for a standing wave in the cube is that

q² = π²[(m_X/L)² + (m_Y/L)² + (m_Z/L)²]

which reduces to

m_X² + m_Y² + m_Z² = 4L²ν²/c²
Now the problem is to find the number of nonnegative combinations of (m_X, m_Y, m_Z) that fit between a sphere of radius R and one of radius R + dR. First the number of combinations ignoring the nonnegativity requirement can be determined.
The volume of a spherical shell of inner radius R and outer radius R + dR is given by

dV = 4πR² dR

If

R = √(m_X² + m_Y² + m_Z²)

then

R = √(4L²ν²/c²) = 2Lν/c

and hence

dR = (2L/c) dν

This means that

dV = 4π(2Lν/c)²(2L/c) dν = 32π(L³ν²/c³) dν
Now the nonnegativity requirement for the combinations (m_X, m_Y, m_Z) must be taken into account. For the two-dimensional case the nonnegative combinations are approximately those in one quadrant of a circle; the approximation arises from the combinations on the boundaries of the nonnegative quadrant. For the three-dimensional case the nonnegative combinations constitute approximately one octant of the total. Thus the number dN of nonnegative combinations of (m_X, m_Y, m_Z) in this volume is equal to (1/8) dV and hence

dN = 4πν²(L³/c³) dν
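The octant-counting argument can be checked directly. The sketch below counts nonnegative integer triples inside a sphere of radius R and compares the count with one octant of the sphere's volume; the few-per-cent discrepancy is the boundary term mentioned above, which becomes negligible as R grows. The radius value is an arbitrary illustrative choice.

```python
import math

def count_modes(R):
    """Count nonnegative integer triples (mx, my, mz) with mx^2 + my^2 + mz^2 <= R^2."""
    count = 0
    for mx in range(R + 1):
        for my in range(R + 1):
            for mz in range(R + 1):
                if mx * mx + my * my + mz * mz <= R * R:
                    count += 1
    return count

R = 60
print(count_modes(R))                       # direct lattice count, roughly 1.17e5
print((1 / 8) * (4 / 3) * math.pi * R**3)   # one octant of the sphere volume, ≈ 113097
```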
The average kinetic energy per degree of freedom is ½kT, where k is Boltzmann's constant. For harmonic oscillators there is an equality between the average kinetic and potential energies, so the average energy per degree of freedom is kT. This means that the average radiation energy E per unit frequency is given by

dE/dν = kT (dN/dν) = 4πkT(L³/c³)ν²

and the average energy density per unit frequency, u_ν, is given by

u_ν = (1/L³)(dE/dν) = 4πkTν²/c³
The previous analysis considered only one direction of polarization for the radiation. If the two directions of polarization are taken into account, a factor of 2 must be included in the above formula; i.e.,

u_ν = 8πkTν²/c³

This is the Rayleigh-Jeans law of radiation; it holds empirically in the limit as the frequency goes to zero.
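A small numerical comparison (a sketch with rounded constants; the chosen temperature and frequencies are arbitrary) shows the Rayleigh-Jeans expression agreeing with the Planck energy density 8πhν³/c³ · 1/(e^(hν/kT) − 1) at low frequency and overshooting it badly at high frequency, which is the ultraviolet catastrophe mentioned above.

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T = 300.0

def u_rayleigh_jeans(nu, T):
    """Rayleigh-Jeans spectral energy density 8*pi*nu^2*k*T/c^3."""
    return 8 * math.pi * nu**2 * k * T / c**3

def u_planck(nu, T):
    """Planck spectral energy density 8*pi*h*nu^3/c^3 / (exp(h*nu/kT) - 1)."""
    return 8 * math.pi * h * nu**3 / c**3 / math.expm1(h * nu / (k * T))

for nu in (1e10, 1e12, 1e14):
    print(f"nu = {nu:.0e} Hz:  RJ = {u_rayleigh_jeans(nu, T):.3e},  Planck = {u_planck(nu, T):.3e}")
```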
Deriving Wien's Displacement Law from Planck's Law
Wien's displacement law states that the blackbody radiation curve for different temperatures
peaks at a wavelength inversely proportional to the temperature. The shift of that peak is a direct
consequence of the Planck radiation law which describes the spectral brightness of black body
radiation as a function of wavelength at any given temperature. However, it had been discovered
by Wilhelm Wien several years before Max Planck developed that more general equation, and
describes the entire shift of the spectrum of black body radiation toward shorter wavelengths as
temperature increases.
Derive Wien's displacement law from Planck's law. Proceed as follows, starting from Planck's law expressed as a function of frequency:

\rho(\nu, T) = \frac{2h\nu^{3}}{c^{3}\left(e^{h\nu/k_{B}T}-1\right)}    (1)

We need to evaluate the derivative of Equation 1 with respect to ν and set it equal to zero to find the peak frequency:

\frac{d}{d\nu}\left\{\rho(\nu, T)\right\} = \frac{d}{d\nu}\left\{\frac{2h\nu^{3}}{c^{3}\left(e^{h\nu/k_{B}T}-1\right)}\right\} = 0    (2)
This can be solved via the quotient rule or the product rule for differentiation. Selecting the latter for convenience requires rewriting Equation 2 as a product:

\frac{d}{d\nu}\left\{\rho(\nu, T)\right\} = \frac{2h}{c^{3}}\frac{d}{d\nu}\left\{\nu^{3}\left(e^{h\nu/k_{B}T}-1\right)^{-1}\right\} = 0    (3)

Applying the product rule (together with the power rule and the chain rule),

\frac{2h}{c^{3}}\left[3\nu^{2}\left(e^{h\nu/k_{B}T}-1\right)^{-1}-\nu^{3}\left(e^{h\nu/k_{B}T}-1\right)^{-2}\left(\frac{h}{k_{B}T}\right)e^{h\nu/k_{B}T}\right] = 0    (4)

so this expression is zero when

3\nu^{2}\left(e^{h\nu/k_{B}T}-1\right)^{-1} = \nu^{3}\left(e^{h\nu/k_{B}T}-1\right)^{-2}\left(\frac{h}{k_{B}T}\right)e^{h\nu/k_{B}T}    (5)

or, when simplified,

3\left(e^{h\nu/k_{B}T}-1\right)-\left(\frac{h\nu}{k_{B}T}\right)e^{h\nu/k_{B}T} = 0    (6)
Making the substitution u = h\nu/k_{B}T, Equation 6 becomes

3\left(e^{u}-1\right)-ue^{u} = 0    (7)

Finding the solution of this equation requires Lambert's W-function and gives, numerically,

u = 3 + W\left(-3e^{-3}\right) \approx 2.8214    (8)

Substituting back for u,

\frac{h\nu}{k_{B}T} \approx 2.8214    (9)

or

\nu \approx \frac{2.8214\,k_{B}}{h}\,T \approx \frac{(2.8214)(1.38\times10^{-23}\ \text{J/K})}{6.63\times10^{-34}\ \text{J s}}\,T \approx \left(5.88\times10^{10}\ \text{Hz/K}\right)T    (10)–(12)
The consequence is that the shape of the blackbody radiation function would shift proportionally
in frequency with temperature. When Max Planck later formulated the correct blackbody radiation
function it did not include Wien's constant explicitly. Rather, Planck's constant h was created and
introduced into his new formula. From Planck's constant h and the Boltzmann constant k, Wien's
constant (Equation 9) can be obtained.
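For completeness, here is a minimal Python sketch that solves Equation 7 by Newton's method instead of the Lambert W-function (the rounded constants are assumptions) and recovers both u ≈ 2.8214 and the ≈ 5.88×10¹⁰ Hz/K coefficient of Equation 12.

```python
import math

# Solve 3*(e^u - 1) - u*e^u = 0 for the positive root using Newton's method
u = 3.0
for _ in range(50):
    f = 3.0 * math.expm1(u) - u * math.exp(u)
    df = (2.0 - u) * math.exp(u)    # derivative of f with respect to u
    u -= f / df
print(u)            # ≈ 2.8214

kB, h = 1.381e-23, 6.626e-34
print(u * kB / h)   # ≈ 5.88e10 Hz per kelvin
```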
