
Lectures on Thermodynamics.

G. Ceder

Summer 2001

Chapter 1

Energy Accounting, Variables and Properties of Systems

1.1 INTRODUCTION AND DEFINITIONS

The History of Thermodynamics

The subject of thermodynamics has a diverse history. Originally developed in the 18th and early 19th century to understand how much work steam engines could produce, by the middle part of the 20th century it had become the basic framework in which to understand the macroscopic behavior of materials. In some sense thermodynamics may be the eternal subject: it has kept its prominent place in science even as new insights and trends emerge. Now that atomistic and electronic descriptions of materials are becoming more powerful, a new role for thermodynamics emerges as the science that defines the boundary conditions for such microscopic investigations. Even in information theory, probably one of the fastest growing fields of the 21st century, thermodynamics plays a prominent role in quantifying information content. Albert Einstein certainly agreed with the prominent role played by thermodynamics in science:

A theory is the more impressive the greater the simplicity of its premises, the
more different kinds of things it relates, and the more extended its area of
applicability. Therefore the deep impression that classical thermodynamics
made upon me. It is the only physical theory of universal content which I am
convinced will never be overthrown, within the framework of applicability of
its basic concepts.
Albert Einstein

Imagine yourself back in the beginning of the 18th century. Newton's understanding of mechanical motion has thoroughly penetrated science and engineering. Robert Boyle has formulated his law relating the pressure and volume of gases, but the insight of J.C. Maxwell into electromagnetism is still more than a century away. In 1712 Newcomen builds the first steam-fired engine to lift buckets of water out of mines, one of the biggest engineering problems of that time. The idea of "mechanized power" takes off and propels England into the Industrial Revolution. In the early 19th century, the steam train rolls across the European continent. While it was understood at that time that steam engines essentially convert heat to mechanical work, no quantitative theory was known to predict conversion efficiencies, and heat was not quite yet seen as a form of energy. It is out of the practical engineering need to predict the work produced by steam engines that the science of thermodynamics was born. As is often the case, the engineering moved significantly faster than the science. The formulation of a rigorous framework for thermodynamics only occurred in the mid to late 19th century by people such as Hermann Helmholtz, Lord Kelvin, William Rankine, Rudolf Clausius and Willard Gibbs. By that time England already had 10,000 miles of railroad, and the Union Pacific and Central Pacific had met to link their transcontinental tracks in Utah, so one could now cross the US by train. With so much delay in the development of the core ideas of thermodynamics, let us just hope they are really good.

Why is the subject called thermodynamics? It is often stated that this is a misnomer because thermodynamics deals with systems at equilibrium (and hence not with the dynamics of how they reach that equilibrium). This interpretation is somewhat misguided and masks the true origin of thermodynamics. In the early 19th century the mathematical description of a mechanical system was quite well known. It was known that forces, equations of motion, etc. could be derived from a function that describes the energy in terms of coordinates*. Thermodynamics came about as the effort to add temperature to the variables that describe a system. In that sense the name thermodynamics (temperature + mechanical dynamics) is actually quite well chosen.

* Similar to what we now think of as Hamiltonians or Lagrangians.

Thermodynamics is what we refer to as a macroscopic science. It does not use or produce any information about the microscopic nature of matter. Materials are simply treated as systems defined by a few properties and equations of state. Because it is blind
to what is inside the system, thermodynamics cannot independently generate numerical data about a specific system. Rather, thermodynamics develops relations between properties of materials, and links together data which may at first glance seem completely unrelated. This is both the strength and the weakness of the subject. To get an actual number for the property of a material, thermodynamics always requires input, either from an experimental measurement or, as has been the case more recently, from quantum mechanical calculations on materials. Just remember NINO: Nothing in, nothing out. Because the equations of thermodynamics are not system specific, they are almost universally applicable. One of the more satisfying aspects of the subject is that the general equations of thermodynamics apply to any system, whether it is a power plant, a piece of material, or yourself.

1.2 SYSTEM AND SURROUNDINGS


A thermodynamic system is a well-defined part of space or matter. It can be defined by a boundary or as a certain piece of matter. The word system is therefore used in a much more general sense than you may be used to. The system boundaries do not need to be physical boundaries, nor do they need to be localized in space. As an example, let us define as a system all the water molecules contained in a rain drop. When the drop splatters on the ground, it breaks up into many parts. Some may run off into a creek, while others may evaporate or even combine chemically with other materials (e.g. in a plant). In principle, after the rain drop has hit the ground, our system can still be defined as all the water molecules that were originally in the drop. Of course, in practice it will be very difficult to keep track of such a system. The particular choice of system is often crucial for solving a thermodynamics problem. For the same physical situation a clever system definition can lead to a short, elegant solution, while other definitions may only lead you down a path of cumbersome math.

Clearly, the definition of a system is intimately related to the choice of a boundary. System boundaries can be open or closed to matter, rigid or deformable, insulating or heat-transparent. Even more esoteric boundaries may be specified, e.g. a wall that lets A and B through in opposite directions and in the proportion of one A molecule for every two B molecules. As long as it can be fully specified, it is an acceptable definition for a system boundary. Just remember that a system can be defined either in terms of a boundary (physical or non-physical) or by "tagging" a certain amount of matter (as in the case of the rain drop). Everything in the universe that is not part of the system will be referred to as the environment or the surroundings. Hence, the universe is everything (system + surroundings).

Usually it is possible to describe the same physical situation with a number of different system definitions. Take, for example, the heating and evaporation of water from a beaker that is being heated with a gas burner (figure 1.1). One may take as the system the beaker of water. In that case, it is an open system since matter (the evaporating water) leaves the system. Alternatively, one may take as the system all the water that is initially in the beaker (even after some of it has evaporated). This will be a closed system from which no matter leaves, by definition. Another alternative is to include the gas burner in the system. In this case there is no heat flow into the system (unlike in the first two cases). Obviously, different system definitions should lead to the same physical results, and no choice is a priori better than another. We will later see that the explicit terms in the basic equations of thermodynamics will have different values for different system definitions. Often a smart choice of system definition may therefore make it possible to zero some terms in the equations and lead to easier solutions. So remember, the obvious system choice is not always the most convenient one for solving a problem!

Figure 1.1: Water evaporates from a beaker that is heated with a burner; three different system definitions:
a) system = liquid water: system is open for matter and energy
b) system = all water, liquid or vapor: system is closed for matter
c) system also includes the burner: no heat flow into the system.

1.3 CONSERVATION OF ENERGY: THE FIRST LAW OF THERMODYNAMICS

Conservation principles are quite general in physics. The First Law of Thermodynamics is simply the conservation of energy. In words, it states that energy is never destroyed nor created, merely exchanged between different parts of the physical
world¹. In thermodynamics, energy exchanges occur across the boundaries of a system. The First Law implies that the sum of all energy flows across the boundaries of a system is equal to the net change of energy of the system:

∑boundary (energy flow) = ∆Esystem    (1.1)

In this definition, the energy flow is taken to be positive (negative) when it goes into (out of) the system. We use the symbol ∆ to denote a finite change of a quantity (as compared to an infinitesimal one). The First Law is like balancing a checkbook: to figure out the balance of your account you start with the old balance, add the deposits and subtract the expenses.

[Sidebar: THE UNITS OF ENERGY]

Although this conservation principle may seem obvious to you, it was only established around the early to mid 19th century by scientists such as Sadi Carnot², Emile Clapeyron and James Joule, names you will hear many more times in this course. One of the difficulties in formulating the First Law was that in order to verify the conservation of energy one needs to be aware of all manifestations of energy flows. Otherwise "magical" energy increases or decreases show up in a system (similar to an anonymous donor in your checking account*). Although the equivalence between mechanical forms of energy was understood in the early 19th century, the quantification of heat as an energy flow was hotly (pun!) debated at the time.

¹ At this point you may be distressed about relativity and the fact that energy can be converted to mass and vice versa. Not to worry: we will only deal with non-relativistic situations in this class. If one day you are going to work on nuclear bombs or perform research on the solar system, you will be happy to know that the First Law has been extended by some to include the energy-mass conversion. In this formulation of the First Law it is simply E + mc² which is conserved.

² Sadi Carnot also has the distinction of being one of the few generals in Napoleon's army who was never defeated. He must have been absent at Waterloo.

* You wish!

Equation 1.1 already shows the nature of thermodynamics. To figure out what happens to a system, we observe it from the outside (observing what happens across its boundary). Since one can never "measure" the energy of a system, the only way one can keep track of it is by observing how it is modified.

The remainder of this chapter is used to bring equation (1.1) into a form where it is of practical use to us. To get there, we need to find an explicit way to calculate the energy flows (left hand side of equation 1.1) and say something about the different ways in which a system can store energy (right hand side of the equation). Being engineers, that should not be all. Ultimately we are not interested in the energy of a system, but in its more relevant properties. We will therefore try to relate changes in the energy to changes in other variables that describe the system. For example, in an electrically heated house, the system's (= house) energy goes up when we draw electrical power from the grid, but what we ultimately want to figure out is the temperature increase of the house. So we need to relate the energy of the house to its temperature.

So the scheme is as follows:

observe energy flows -> obtain energy change of system -> relate to property change

1.4 THE ENERGY OF A SYSTEM


When the net energy flow over the boundary of a system is not zero, the system's energy will change. The components of the system's energy related to motion are the kinetic energy (Ekin) and the potential energy (Epot). All other sources and sinks of energy of a system are referred to as internal energy (Eint or U):

Esystem = Ekin + Epot + Eint    (1.2)

For a system at rest, the variation of the system energy will equal the variation of the internal energy:

∆Esystem = ∆Eint ≡ ∆Usys    (1.3)


In this course we will only deal with systems at rest, so the change of the system
energy will always be the change in internal energy.

The concept of internal energy is really a thermodynamic invention. If you were to look in detail into the system, you would obviously find that that energy goes into kinetic or potential energy of the parts that make up the system. For example, when the energy of a system is raised by heating it, that energy is actually stored in the potential and kinetic energy of the electrons and atoms. But since the thermodynamic laws have to be oblivious to the content of the system, we have to act as if we do not know how the energy is taken up. Hence we simply call it the internal energy of the system and make no judgment on how the energy is stored internally. Clearly, this is in line with our earlier description of thermodynamics as a purely macroscopic science.

1.5 ENERGY FLOWS


The mechanisms by which energy can be exchanged with a system are called
energy flows. Typically, these are divided into three categories: heat, work and matter
flow.
Heat (symbol Q) is the exchange of energy between a hot and a cold body. Work (symbol W) is the result of mechanical, electrical or magnetic action upon a system. A simple example of work is the mechanical work that is exerted when we push onto a system. The work in this case is the force applied times the displacement*. The action that results in work is not always as visible as in the previous case of mechanical work. For example, when current is drawn from an electrical transformer, the primary coil performs magnetic work on the secondary coil, even though these coils are not in contact with each other.
A system that is open can also exchange energy because of a net flow of matter (symbol M). Since the molecules that flow across the boundary of a system transport energy with them, this flow needs to be accounted for in the energy balance of the system. An example of matter flow occurs when you drive up to a gas station and fill up the car with gas. The gasoline increases the internal energy of the car. Later this will be converted to kinetic energy of the car.
With these definitions the First Law of Thermodynamics can be written as:

Q + W + M = ∆Esys    (1.4a)

The mass flow is often written explicitly as the sum of the contributions from the various chemical components i:

M = ∑i mi µi

where µi is the energy associated with component i.


Often we will deal with a closed and stationary system, so that equation (1.4a) reduces to:

Q + W = ∆Usys    (1.5)

This is a good time to say something about equations. Thermodynamics has many of them. Trying to remember them without remembering the conditions under which they are valid is useless. Rather than remembering the equations, try to focus on the concepts they represent. Equation (1.5) still represents the same concept as equation (1.1), namely that the energy change of a system can be calculated by summing the net energy flow across its boundaries. All we have done in (1.5) is to limit those flows and energy changes to a specific, but common, case.

*
More exactly, work is the scalar product of the force vector with the displacement vector
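
As a quick numerical illustration of the sign convention in equation (1.5), here is a minimal sketch in Python with made-up numbers (both values are hypothetical):

    # First Law for a closed, stationary system: Q + W = delta_U.
    # Sign convention: energy flows INTO the system are positive.
    Q = 500.0     # J, heat flowing into the system (hypothetical value)
    W = -200.0    # J, work done BY the system on its environment, so energy leaves
    delta_U = Q + W
    print(delta_U)   # 300.0 J: the internal energy of the system rises by 300 J

The arithmetic is trivial, but getting the signs of Q and W right is where most first-law bookkeeping mistakes are made.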


1.6 DIFFERENTIAL FORMULATION OF THE FIRST LAW


In many cases we will look at infinitesimally small parts of a process. Infinitesimal flows are distinguished from infinitesimal changes in the properties of the system by using another differential operator (δ for flows, d for properties). With this convention, the First Law can be written as:

δW + δQ + ∑i µi dmi = dUsys    (1.6)

where µi is the specific energy of component i (i.e. the energy per kg or per mol, etc.).

1.7 TYPES OF WORK


There are a variety of ways to perform work on a system. All work terms take the
form of some generalized force times a response of the system.

Figure 1.2: Pulling on a bar so that it extends causes energy to flow into the bar.

Mechanical Work

Figure 1.2 shows a system being elongated by the application of a force. The work performed by this force is F·dl. This example produces a one-dimensional stress state. In general, the mechanical energy transferred to a system by stressing it is

δW = V ∑i,j σij dεij

where σ is the stress, ε is the strain, and V is the volume of the system.


One particular case of stress is an isotropic pressure. If the system's properties are isotropic, the above equation can be simplified considerably. The force on an infinitesimal area element of the material is -p dA, where A is the area of the system boundary. The work done by extending that area element outward by an amount dl along the normal of dA is -p·dA·dl. It can easily be recognized that dA·dl is the increase in volume associated with the extension of that element of surface. Integrating this over the complete surface gives the common formula for the work performed by volume changes of a material in a pressure field:

δW = –p dV    (1.8)

This is the most common mechanical work term for thermodynamic systems and is always present when a system undergoes changes under constant pressure. It can be interpreted as follows: when a system expands (dV > 0) the boundary pushes against the force of the outside pressure. Hence work is performed on the environment and the system loses some energy to the environment. Similarly, a shrinking system (dV < 0) receives energy from the environment as the pressure performs work on the system.
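
As a minimal numerical sketch of equation (1.8), suppose (hypothetically) that a system expands by one liter against a constant external pressure of 1 atm:

    # Work done ON the system when it expands against a constant pressure:
    # W = -p * delta_V  (delta_V > 0: the system pushes the boundary out and loses energy)
    p = 101325.0        # Pa (1 atm)
    delta_V = 1.0e-3    # m^3, the system expands by one liter (hypothetical)
    W = -p * delta_V
    print(W)            # about -101 J: the system performs ~101 J of work on the environment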

Electrical Work

Systems can also exchange energy as electrical work. If the system is at potential φ with respect to the environment and an infinitesimal amount of charge dq is transferred from the environment to the system, work in the amount of φ dq is performed on the system by the environment. Hence in this case:

δW = φ dq    (1.9)
Magnetic Work

Similarly, systems that can be magnetized can receive or perform work in a magnetic field. Think of the force needed to pull a magnet off your refrigerator! The following field variables are defined in magnetism:

H: applied field
M: magnetization (of the material)
B: induction, B = µ0(H + M)

The work performed in a magnetic field is H·dB, which can be rewritten as:

δW = H dB = µ0 H dH + µ0 H dM    (1.10)

The first term (µ0 H dH) is the work performed on the applied field itself and would be present even without a material there, whereas the second term is the work performed on the material in the field. If we treat only the material as the system, only the term µ0 H dM should be counted. In magnetic work, the magnetization is typically given per unit volume. Hence, to account for all the work one needs to multiply by the volume:

δW = µ0 V H dM    (1.11)

One can encounter more work terms than these in thermodynamics. These will always have the form X·dY, where X is some "force" and Y is a "response" of the material. For example, in electrical work φ is the driving force and dq is the resulting displacement of charge. Similarly, in magnetic work H is the applied field and M is the resulting magnetization.
These variables X and Y that appear together in a work term are called conjugate variables. Hence (p, V) is a conjugate pair. So are (H, M) and (φ, q). Later we will see other conjugate pairs.

1.8 SOME EXAMPLES OF WORK

Spring (a short numerical sketch is given after this list)
Generalized strain
Capacitor
Magnetism: the hysteresis loop
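
As a minimal sketch of the first example (a spring), the generalized force is the spring force F = kx and the response is the extension dx, so the work stored when stretching from 0 to x is W = ∫ kx' dx' = ½kx². The numbers below are hypothetical:

    # Work done on a linear (Hookean) spring stretched from 0 to x.
    k = 200.0    # N/m, spring constant (hypothetical)
    x = 0.05     # m, final extension (hypothetical)
    W = 0.5 * k * x**2
    print(W)     # 0.25 J stored in the spring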

1.9 STATE VARIABLES


Previously we wrote down the First Law of Thermodynamics. It basically states
that the net balance of energy flows over the boundary of a system changes the energy of
the system. A change in energy of a system will modify the system in some way. We
will say that it changes the state of the system. For example, an increase in kinetic
energy of the system will change the velocity of the system. An increase in internal
energy may lead to a change in temperature or volume of the system (we will see later
that many other properties of the system can change as well). All these variables that
characterize the state of a system are called, appropriately, state variables.

How many variables, and which ones, are needed is a subtle problem in thermodynamics. Typically, the state variables are the temperature (T) and the variables that appear in the work differentials through which the system can exchange energy. These are the properties of the system that can vary (hence the name "variable").
Note that the state variables do not completely specify a system. Many other features of a system may determine the state functions, even though these features do not change during a typical thermodynamic process. We will call these attributes of the system parameters. For example, compare a piece of pure, perfect, crystalline copper with a cold-worked piece of Cu. These are obviously not exactly the same systems, as the latter copper will have a much higher dislocation density. This difference may be relevant when integrating the F·dl term if the two materials differ in elastic properties. Hence, the dislocation density can be treated as a parameter. The chemical nature is essentially also a parameter. For a closed system, the number of moles of each chemical component (assuming no chemical reactivity between the components) is a parameter; for an open system it is a variable.
In the strictest sense, one could argue that every characteristic of a system should be captured either in its state variables or in its parameters. In practice we often get away with much less, and we will neglect those more subtle attributes of materials that modify the thermodynamic properties very little.

The number of variables is not the same as the number of independent degrees of freedom for the system, as some variables are related to each other. It can be shown that for each conjugate pair of variables appearing in a work term, there is an equation of state that relates them. This equation of state is a property of the system. In other cases there may also be equilibrium relations that constrain the variables.

Example: Simple ideal gas

A single component ideal gas can be completely characterized with four variables: volume, pressure, temperature and number of moles. There is one conjugate work pair (p, V), hence there is one equation of state (pV = nRT). Three independent variables therefore suffice to fully specify the system. If the ideal gas is a closed system, the number of moles is a parameter and the system can be specified with two independent variables.
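
As a small sketch of how the equation of state removes one degree of freedom: for a closed ideal gas (n a parameter), choosing any two of p, V, T fixes the third through pV = nRT. The numbers below are hypothetical:

    # For a closed ideal gas, n is a parameter; choosing T and p determines V.
    R = 8.314           # J/(mol K), gas constant
    n = 1.0             # mol, parameter for this closed system (hypothetical)
    T = 300.0           # K, first chosen independent variable (hypothetical)
    p = 1.0e5           # Pa, second chosen independent variable (hypothetical)
    V = n * R * T / p   # the equation of state fixes the remaining variable
    print(V)            # about 0.0249 m^3, i.e. roughly 25 liters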

Extensive and Intensive Variables

Extensive variables scale with the size and amount of the system. These include variables such as volume, mass, internal energy, etc. Intensive variables are size independent and include variables such as temperature, pressure and electrical potential. Densities are extensive variables that have been normalized by another extensive variable (usually the size or mass of the system). These include concentration (some number of moles divided by another number of moles or divided by the volume), specific volume, magnetization density (M), etc.

1.10 THE SIMPLE SYSTEM

A simple (or "limp") system is the most basic system of thermodynamics. It can be described with only p, V and T. Hence it has only one work term (-p dV) and one equation of state. Its state is fully characterized whenever two variables are specified. Note that a simple system has to be isotropic in its properties, otherwise the mechanical work term would be more complicated than -p dV. Many closed systems of liquids or gases can be approximated as simple. The approximation is also often used for solids when no mechanical stresses (besides pressure fields) are present. Obviously, no magnetic or electrical work flows can be present in a simple system.

1.11 STATE FUNCTIONS


As their name implies, state functions are functions of the state of the material and
only of the state of the material. Most properties you know are state functions, meaning
that they always have the same value when the material is in the same state, regardless of
how it got there or how long it has been there. You can think of state functions as
functions of the state variables.

Example:
The internal energy of an ideal gas is a function of only three variables:

Uideal gas = U(T, p, n)    (1.12)

or

Uideal gas = U(T, V, n)

or any other three of the variables p, V, T, n.

If the system were closed, n would be a parameter and one could write Un(T, p), though usually the subscript n is not written.

1.12 DIFFERENTIALS

We will never really have an explicit form for a state function such as the energy
in terms of the state variables. Actually, if you came up with an exact form of a state
function for a real material, I would place my bet on you for a Nobel Prize in a few years.
Fortunately, we can get by with less information. Most often we will not be interested in
the absolute value of a state function, but in the difference between two states. This
difference can be computed with our favorite tool: differential calculus. Let's do this very
formally at first and then go through some examples. If the internal energy of a system depends on a bunch of state variables xi, with i going from 1 to N, then any infinitesimal change in U caused by an infinitesimal variation of the state variables can be computed from the total differential of U:


dU = U(x1 + dx1, x2 + dx2, …, xN + dxN) − U(x1, x2, …, xN)
   = (∂U/∂x1) dx1 + (∂U/∂x2) dx2 + … + (∂U/∂xN) dxN    (1.13)

where each partial derivative is taken with all the other state variables held constant.

If we need the internal energy change between two states, 1 and 2, that are more than an infinitesimal amount apart, we will of course have to integrate this equation between states 1 and 2. Again, the reason we have to do things this way is that we will never really know the function U. You will see later, however, that the partial derivatives of U with respect to the state variables are things we can often measure. So we don't know the function, but at least (sometimes) we will know the derivatives.
Also note that if the two states for which we need the internal energy difference have the same value for some state variable xi, its differential dxi is zero in equation (1.13) and we don't need to know the partial derivative of U with respect to that variable*. Yes, more stuff we don't need to know!
Anything stated here for the internal energy U is applicable to all other state functions as well.

* Of course, this is not completely correct. The partial derivatives are themselves state functions and, in principle, they also depend on all the state variables. Tough luck!
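
To see equation (1.13) at work numerically, here is a toy sketch. The function U(T, V) below is completely made up (it is not the internal energy of any real material); the only point is that summing the partial-derivative terms reproduces the small change in U:

    # Toy illustration of a total differential; the function U is hypothetical.
    def U(T, V):
        return 100.0 * T + 5.0 * T * V   # made-up "internal energy"

    def dU_dT(T, V):
        return 100.0 + 5.0 * V           # partial derivative at constant V

    def dU_dV(T, V):
        return 5.0 * T                   # partial derivative at constant T

    T, V = 300.0, 2.0                    # a reference state (hypothetical numbers)
    dT, dV = 0.01, 0.001                 # small changes of the state variables

    exact = U(T + dT, V + dV) - U(T, V)
    total_diff = dU_dT(T, V) * dT + dU_dV(T, V) * dV
    print(exact, total_diff)             # 2.60005 vs 2.6: nearly identical for small changes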

1.13 INTEGRATING DIFFERENTIAL EQUATIONS


If we need to compute a change in a quantity between two states, we can integrate the differential equation that describes its change. For example, let's assume we need to know the internal energy difference between two states that differ only in the value of x1, while all other variables have the same value in both states. We can obtain this internal energy difference by integrating equation (1.13):

∫(state 1 → state 2) dU = ∫(state 1 → state 2) (∂U/∂x1) dx1 = U(state 2) − U(state 1)    (1.14)

where the partial derivative is taken at constant x2, x3, …, xN.

Not every infinitesimal quantity is the differential of a function. For example, the work in a simple system is the integral of δW = –p dV. Since the state of a simple system can be specified with two variables, we can plot the changes of such a system in a two-dimensional p-V diagram (figure 1.4). In this diagram, the total work done in going between two states is minus the integral under the curve describing the trajectory between the two states. Clearly, the work performed along trajectory I is larger than along trajectory II (a+b). The reason for this difference is that δW is not the differential of a state function (property). W is an energy flow and there is no function W(x1, x2, …, xN). The integral of δW is path dependent, whereas the integral of the differential of a state function (= property) is not path dependent.
Note that since the First Law for a simple, closed system gives dU = δW + δQ, the integral of δQ is necessarily also path dependent (since the integral of dU is NOT path dependent).

Figure 1.4: Work performed depends on the path between states.
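
A minimal numerical sketch of this path dependence for a simple closed system (all states hypothetical): going from state 1 to state 2 along two different two-step paths in the p-V plane gives two different amounts of work, even though the end states are identical:

    # Path dependence of W = -integral(p dV) between the same two states (sketch).
    p1, V1 = 2.0e5, 1.0e-3   # state 1: Pa, m^3 (hypothetical)
    p2, V2 = 1.0e5, 2.0e-3   # state 2: Pa, m^3 (hypothetical)

    # Path A: expand at constant p1, then lower the pressure at constant V2.
    W_A = -p1 * (V2 - V1)    # only the constant-pressure leg contributes (dV = 0 on the other leg)
    # Path B: lower the pressure at constant V1 first, then expand at constant p2.
    W_B = -p2 * (V2 - V1)

    print(W_A, W_B)          # -200.0 J vs -100.0 J: same end states, different work

A state function such as U would give the same difference along both paths; that is exactly what distinguishes it from a flow like W or Q.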

1.14 OTHER PROPERTIES

Some other properties of systems often appear in thermodynamic equations.

Heat capacity

The heat capacity is defined as the ratio of the heat input to a system to the temperature change it induces:

C = δQ/dT    (1.15)

The above equation does not yet fully specify a unique value of the heat capacity, since δQ depends on the path taken (remember, it is not an exact differential). Since a simple system is described by two independent variables, specifying dT alone is not sufficient to describe the path taken. Hence, more correctly, equation (1.15) should look like:

Cpath taken = (δQ/dT)path taken    (1.16)

For simple systems the path can be specified by indicating what happens to one of the other variables (e.g. p or V). The most commonly used heat capacities are the constant pressure heat capacity and the constant volume heat capacity:

Cp = (δQ/dT)p    (1.17)

CV = (δQ/dT)V    (1.18)


Note that the right hand sides are not partial derivatives (since Q is not a function of state), but simply the ratio of two infinitesimal quantities. Most often we deal with the heat capacity normalized per unit amount of the system. Per mole, equations (1.17-1.18) become:

cp = (1/n) (δQ/dT)p    (1.19)

cV = (1/n) (δQ/dT)V    (1.20)

Note that, in principle, one could specify plenty of other heat capacities along more complex paths. These heat capacities would be some combination of the properties defined under constant V and p. In the case of systems that need to be described with more than two variables, there will be more than two heat capacities (e.g. a system with magnetic or electrical fields).

Values of heat capacities

Cp for air ≈ 30 - 33 J/mol-K
Cp for water ≈ 75 J/mol-K ≈ 1 kcal/kg-K
Cp for iron ≈ 24 J/mol-K

Heat capacities are temperature dependent.

For gases a rule of thumb is that for monoatomic gases (Ar, Kr, …):

Cv ≈ 3/2 R and Cp ≈ 5/2 R

For diatomic gases (O2, N2, …):

Cv ≈ 5/2 R and Cp ≈ 7/2 R
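
As a minimal sketch of how these numbers are used, here is the heat needed to warm 1 kg of liquid water by 10 K at constant pressure, using the cp value quoted above (and taking 18 g/mol for the molar mass of water):

    # Q = n * cp * dT at constant pressure, no phase change.
    cp_water = 75.0        # J/(mol K), value from the table above
    n = 1000.0 / 18.0      # mol of water in 1 kg (molar mass 18 g/mol)
    dT = 10.0              # K (hypothetical temperature rise)
    Q = n * cp_water * dT
    print(Q)               # about 41700 J, i.e. roughly 42 kJ (or about 10 kcal)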

Exercise
Write down the possible heat capacities for a system in a magnetic field.

Thermal expansion:

Linear thermal expansion is defined as the relative change of length per unit temperature
at constant pressure:

αl = (1/l) (dl/dT)p    (1.21)


Because we often deal with shapeless systems, it is more convenient in thermodynamics to use the volumetric thermal expansion:

αV = (1/V) (dV/dT)p    (1.22)

It can easily be shown that αV = 3αl for an isotropic material.

[Table: values of thermal expansion]

Compressibility

Compressibility is defined as the relative change of volume with pressure at constant temperature:

βV = −(1/V) (dV/dp)T    (1.23)
The minus sign is used to make the compressibility positive (since volume decreases with
pressure). Note that compressibility is the “volume version” of elastic constants for
solids.

Sometimes an adiabatic compressibility is also defined.


1.15 SOME PROPERTIES OF IDEAL GASES

An ideal gas is a particular substance that has a very simple equation of state. Many real gases behave as ideal gases when they are well above their condensation temperatures. From the ideal gas equation several other relations can be derived. When you use these relations, always keep in mind that they apply only to ideal gases. Hence the equations derived from them will not be generally applicable to all materials.

Properties of ideal gases:

I: pV = nRT
II: cp − cv = R
III: (∂U/∂V)T = 0

Property III implies that the internal energy of a closed ideal gas depends only on its temperature. It is left to the reader to prove that

dU = n cv dT    (1.24)

so that the internal energy depends only on temperature. Please keep in mind that this is true only for an ideal gas. For other systems, the internal energy will also be a function of other state variables (and often strongly so).
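
A small numerical sketch of equation (1.24), using the rule-of-thumb cv ≈ 3/2 R for a monoatomic gas from section 1.14 (the amount of gas and the temperature rise are hypothetical):

    # For an ideal gas dU = n * cv * dT, no matter what happens to p or V along the way.
    R = 8.314          # J/(mol K)
    cv = 1.5 * R       # monoatomic ideal gas, rule of thumb from section 1.14
    n = 2.0            # mol (hypothetical)
    dT = 50.0          # K (hypothetical)
    dU = n * cv * dT
    print(dU)          # about 1247 J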

1.16 EXAMPLES

1) Calculate the work during isothermal compression of an ideal gas (a short numerical sketch is given below)

2) Adiabatic compression + applications
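
A minimal numerical sketch of example 1: for an isothermal process of a closed ideal gas, p = nRT/V, so the work done on the gas is W = −∫p dV = −nRT ln(V2/V1), which is positive for a compression (V2 < V1). The numbers below are hypothetical:

    # Work done ON an ideal gas during isothermal compression.
    import math

    R = 8.314
    n, T = 1.0, 300.0          # mol and K (hypothetical)
    V1, V2 = 2.0e-3, 1.0e-3    # m^3: compressed to half its volume (hypothetical)
    W = -n * R * T * math.log(V2 / V1)
    print(W)                   # about +1729 J of work done on the gas

For the adiabatic case of example 2, Q = 0 and the work instead shows up entirely as a change in internal energy (and hence temperature) of the gas.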


1.17 THE ENTHALPY

Many practical processes occur under constant pressure. Under this condition, a system can always perform or receive energy through a change in volume (the pdV term). This becomes relevant when we consider, for example, the heating of a material. The internal energy of a material determines its temperature. Say I could figure out how much internal energy I need to add to raise the temperature of a system by one degree. One might think that if I add that amount of energy by supplying heat, a one-degree temperature rise would occur. This is not correct. If some amount of heat flow raises the temperature of a material, its volume will also change (it usually expands). This expansion causes work to be performed on the environment through the pdV term. Clearly, some of the energy we put in as heat directly leaves the system again as work on the environment! Hence not all the heat we put in is used to raise the internal energy. For this process we can write:

dU = δQ − p dV    (1.25)

The change in internal energy is the net balance of the heat flow in and the work performed on the environment by the expanding system. The amount of heat I need to supply to obtain my internal energy increase is actually:

δQ = dU + p dV = d(U + pV)    (1.26)


The latter equality is true because we only consider constant pressure changes. Because it occurs so often in thermodynamics, the combination U + pV is given a new name, the enthalpy (H). The enthalpy is defined as:

H ≡ U + pV    (1.27)

For a simple system, the enthalpy change of the system in a process under constant pressure is the amount of heat added to the system:

(dH)p = (δQ)p    (1.28)

Again, note the restrictions on this equation. It is only true for a simple system (with only pdV work) and under constant pressure.

The enthalpy finds much use in studying chemical reactions:

For example, if one wants to find the total heat released (or absorbed) in the isothermal,
isobaric reaction

CO + 1/2 O2 → CO2


one only needs to calculate the change in enthalpy in going from the reactants to the product(s). If the reaction occurs at 298 K, the heat exchange is:

Q = H298(CO2) − H298(CO) − (1/2) H298(O2)    (1.29)

Enthalpies of most compounds are tabulated at 298 K (or other temperatures). Since, as with the energy, the enthalpy has no absolute zero, a reference state has to be agreed on. For the elements, the state of zero enthalpy is taken as the stable form at 298 K and 1 atm pressure. Hence for oxygen this is molecular oxygen gas; for carbon it is graphite. Once the enthalpy of the elements is set, the enthalpy of a compound is determined by the heat released in its isothermal, isobaric formation reaction. This enthalpy of formation is often tabulated at 298 K and can be taken as the value of the enthalpy for that compound at 298 K.
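
A minimal sketch of this bookkeeping. The enthalpies of formation used below are approximate standard 298 K values taken as assumptions (they are not given in the text and should be checked against a table):

    # Heat exchanged in CO + 1/2 O2 -> CO2 at 298 K and constant pressure (equation 1.29).
    H_CO2 = -393.5   # kJ/mol, approximate enthalpy of formation at 298 K (assumed value)
    H_CO  = -110.5   # kJ/mol, approximate enthalpy of formation at 298 K (assumed value)
    H_O2  = 0.0      # kJ/mol, element in its stable form at 298 K and 1 atm
    Q = H_CO2 - H_CO - 0.5 * H_O2
    print(Q)         # about -283 kJ per mole of CO: heat is released by the reaction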

1.18 TEMPERATURE DEPENDENCE OF THE ENTHALPY OF MATERIALS

While the enthalpy of materials is tabulated at 298 K, the enthalpy at other temperatures can be found from the relation between the heat flow and the enthalpy:

(dH)p = (δQ)p = n cp dT    (1.30)

Hence,

cp = (1/n) (∂H/∂T)p    (1.31)

In a material the specific enthalpy increases with temperature in proportion to the heat capacity. At a phase transition, the enthalpy changes by a discontinuous amount which is equal to the latent heat of transformation. For example, the heat required to melt ice at 0°C is equal to the enthalpy difference between the liquid at 0°C and the ice at 0°C. Figure 1.5 shows a typical evolution of the enthalpy with temperature.

Figure 1.5: Enthalpy as a function of temperature. Below the melting point Tm the slope of H(T) is Cp,s, above it the slope is Cp,l, and at Tm the enthalpy jumps by the latent heat ∆Hm. At a transition the enthalpy is discontinuous.
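
As a sketch of how a curve like figure 1.5 is used numerically, the enthalpy change of one mole of H2O taken from −10 °C to +10 °C includes the two cp terms and the latent heat of melting. The cp of ice and the latent heat below are approximate values introduced here as assumptions (only the cp of liquid water comes from section 1.14):

    # H(T2) - H(T1) across a melting transition:
    # n*cp_solid*(Tm - T1) + n*dH_melt + n*cp_liquid*(T2 - Tm)
    n = 1.0            # mol of H2O
    cp_solid = 38.0    # J/(mol K), approximate cp of ice near 0 C (assumed value)
    cp_liquid = 75.0   # J/(mol K), cp of liquid water, from section 1.14
    dH_melt = 6010.0   # J/mol, approximate latent heat of melting of ice (assumed value)

    T1, Tm, T2 = 263.15, 273.15, 283.15   # K: -10 C, the melting point, +10 C
    dH = n * cp_solid * (Tm - T1) + n * dH_melt + n * cp_liquid * (T2 - Tm)
    print(dH)          # about 7140 J: most of it is the latent heat at Tm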
