Ocw 243 Lecture01 2024feb06 v2 Transcript
[SQUEAKING]
[RUSTLING]
[CLICKING]
GIAN PAOLO BERETTA: OK, welcome. So this is 2.43. I hope you are in the right place. My name is Gian Paolo Beretta, and I am very happy to teach this course. As you know, it's the first time this spring, although a very similar course has been taught at the Politecnico di Milano for maybe 12 years, to first-year PhD students in the energy program.
So I'm using viewgraphs that I've developed over the years. In fact, this is a very particular year for
thermodynamics, as you know. It's the 200th birthday. And I like to say that right now, I reached one-third of the
age of thermodynamics. When I started studying it, 1978, it was one-seventh. So my entire academic career has
been around this.
Of course, Carnot is only one of many who have contributed to thermodynamics over these two centuries: Carnot, Joule, Rankine, Helmholtz, Clausius, Kelvin, Massieu, van der Waals, Gibbs, Boltzmann, van't Hoff, Planck, Duhem, Nernst, Sommerfeld, Carathéodory, Einstein, Keenan -- and I'll say a few words more about him -- Fermi, Guggenheim, Onsager, Ziegler, de Groot, Prigogine, Mazur, Hatsopoulos, and Prausnitz. And I apologize to those whom I haven't written down there.
Actually, you may notice that the only person on the list currently alive is John Prausnitz. But he deserves, I believe, a place among them.
Now, the reason I'm particularly happy about teaching this course here goes back to Keenan and Hatsopoulos. I'd like to refer to them -- I mean Keenan and Hatsopoulos, but also the late Professor Gyftopoulos -- [CLEARS THROAT] sorry -- as the Keenan School of Thermodynamics. Keenan, by the way, was using the office downstairs in 3-339, as we also did -- I mean Hatsopoulos for a few years, and me for just five years, when I was young. We used the same room. So that was a privilege for me.
As you see, these are the books that have characterized this school of teaching thermodynamics. You will see it's different from the traditional exposition -- I like to say more general, and more free of vicious logical loops.

Professor Gyftopoulos taught this course -- or rather the course that has now become 2.42; it used to be called 2.451 -- for about 20 or maybe 25 years. And I helped him during those years.
This book is in the background of what we will do here -- only in the background; it's not that we are going to cover the book. But it helps me a lot that, when in class I say "you can prove" or "it can be proved," you can find the proofs in all their details in that book. That is a very good point from my point of view, because I don't need to spend time repeating the proofs, except for a very few fundamental and easy ones.
But you can find them there. And if you are very fond of the subject and want to go deeper, you may study it there, and you may come and discuss it in my office anytime you want. Actually, my office hours are anytime: either by appointment, or just drop by. Most of the time -- in fact, all the time -- I will be in the office working. So you can come or call anytime. If you want, we can also set up more official office hours.
Now, the objective of this course is not applications, because I presume that the applications have already been seen in previous courses that you've been exposed to, or maybe in your doctoral or master's thesis work. What I want to put the emphasis on is a broad perspective on the assumptions that are needed to make -- or that are behind -- certain models: standard models, slightly non-standard models, models that you use in physics and in engineering.
Because thermodynamics, as you know, covers applications in an enormous number of fields. So nobody can really say, "I know all applications of thermodynamics." They are essentially infinite. The good thing, since we are in an engineering department, is that, as you know, Carnot was an engineer. Thermodynamics started from engineering, and then it was picked up by the physicists. And so it goes back and forth.

So we share the subject. We engineers share the subject with the physicists, and the physicists share it with the engineers -- and not only mechanical engineers; I mean all kinds of engineering, including chemical engineering, which has a very strong history in thermodynamics.
Now, the emphasis of the course is on non-equilibrium. When I did my PhD studies here, thermodynamics was considered a dead subject, because everybody was thinking of thermodynamics as just equilibrium.

In fact, it was only some years later, maybe in the late '90s, that there was a revival of thermodynamics -- because of non-equilibrium, because of irreversible processes, and also because of quantum thermodynamics. Now quantum thermodynamics is an enormous field. In those times, it was considered useless.
And Hatsopoulos and Gyftopoulos, in my opinion, are the true pioneers of that subject, even though, since their point of view was not standard, they're not recognized everywhere as pioneers. But if you read a recent paper in Science or Scientific American about quantum thermodynamics -- I think it's called "Quantum Steampunk," written, by the way, by a winner of the Prigogine prize in thermodynamics, which I happen to be involved with -- well, she wrote the book Quantum Steampunk and also this Science article, where she does recognize the roles of Hatsopoulos and Gyftopoulos.
So the course is in three parts. The first part is a review of the basic concepts and definitions, but with this emphasis on non-equilibrium and especially on the definition of entropy for non-equilibrium states. In particular, we will see an interesting aspect of what a heat interaction is, and a generalization of the concept of heat interaction that is useful for describing the non-equilibrium states of continua, like fluids.
The second part is focused on chemical potentials, or electrochemical potentials -- so, multi-component systems and the role that chemical potentials have in describing them. I'm not going to read all these titles. And the last part, the last third of the course, is more explicitly on the standard theories of non-equilibrium: in particular, the role of the Onsager reciprocal relations and, again, the assumptions that are needed to describe these non-equilibrium states, how they evolve in time, and how you relate the entropy production to the fluxes and forces.
So, this is about the grading policy. Everything is going to be oral. The reason why -- this is very unusual, and it may scare some of you. If it does scare you, and you still have to take the qualifiers, consider it as an opportunity to practice.

If you are instead past that, and you are aiming at a career in academics, there is a chance that among your duties you will have to teach the subject. This is typical: if you go to a mechanical engineering department, usually nobody wants to teach thermodynamics, so they say "you are the new one, you do it" -- until somebody newer arrives.

Well, this is the reason why I provide the viewgraphs, and I provide the videos. I want to make it easy for you, if you want to teach it this way, because it's already done. And I'm very happy to share all this.
So, there will be one homework and four midterms -- essentially, five items that you have to do. They are all take-home. And they are all of this kind: I'll give you one or two of the viewgraphs that I use in class, and you have to repeat, in your own way, in front of a camera, in no more than five minutes, the words that go in between the equations, or the things that are written on the viewgraphs.

So that's, essentially, a short lecture that you give on that subject. You can use the videos; you can use whatever you want to do them. So I would consider those easy. You shouldn't be concerned about them.
And yes, there is also a final exam, which is also oral -- oral in presence. It lasts 30 minutes. And for that, it's the same idea: I tell you, take the viewgraph on, say, chemical equilibrium, and talk about it. So you can prepare in advance.

You can also bring your own hand notes or whatever you want, as long as you don't come without anything and then waste all the 30 minutes trying to write the couple of formulas you remember. I consider that a waste of time, and it doesn't allow me to check that you have tried to go over all of the things that we have covered. Actually, the final is going to cover only the second and third parts of the course.
OK. So let's get started. Thermodynamics -- so what is thermodynamics? Does any of you have an idea of what it is?

AUDIENCE: [INAUDIBLE]

GIAN PAOLO BERETTA: OK, that's true. Anything more general? You use thermodynamics in your research. What do you use it for? Maybe you do molecular dynamics simulations. That's a way to compute properties of stuff.
In fact, heat and work are certainly peculiar kinds of interactions that occur and that are described in thermodynamics. But in general -- let me use words that I haven't defined yet -- thermodynamics is a subject that allows you to compute properties of physical systems and see how they change with time, and how they are maybe transferred from one system to another, for example by means of heat and work interactions.

So this is why I want to spend some time defining these words. Because these words are loaded with meaning and therefore represent restrictions within the theory of thermodynamics. Actually, thermodynamics is really modeling. You want to model physical reality, exactly like physics.
Mechanics, too, does some modeling of certain aspects, so it's similar. The key word is "modeling." You want to describe a glass of water, a bottle of water -- the water in a bottle -- and you make a model. You typically simplify the situation so that it contains enough of what you need to describe that water, but nothing more.
So this will be a system: the water inside this container. And this system is interacting with the air that surrounds it. So you need to consider both the water and the surroundings.
But most of the time, you don't need to consider what happens in, say, the corridor of your lab, or where the Moon is right now. So you can simplify by removing unnecessary aspects of physical reality and concentrating on what you really want to represent.
So as you know, I can describe water as if it's composed of molecules, and molecules can be considered, for certain purposes, indivisible -- like atoms in the Democritus sense: you cannot split them. But if you're interested in chemistry, well, then the molecule can be split, and then the building blocks are the atoms.
Or if you're interested in nuclear phenomena, then the building blocks are not even the atoms; they are the protons, the neutrons, the electrons. You can go to an even lower level of description: you can go to quarks, and the quarks can perhaps be split further, into strings and things like that. So who knows if there is a bottom level of description.

But you don't need to go that deep for every model. You just use whatever is necessary -- and enough -- for the phenomenon you want to describe. So that's the -- let's call it the art of modeling: to get rid of the unnecessary.
So usually we agree and understand that physical systems are made up of constituents. Those could be the molecules, the atoms, the nuclei, the protons, the neutrons, and so on, depending on the level of description you choose. So you set up this bunch of molecules, which are the ingredients of your model.
You have to explain or describe how they interact with each other. For example, in a molecular dynamics simulation, you want to specify what kind of potential: Lennard-Jones, hard spheres, soft spheres, and things like that. These are the internal forces between the constituents of your system.
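As an illustrative aside, a pair potential like the Lennard-Jones one just mentioned can be sketched in a few lines of Python (the parameter values here are arbitrary, in reduced units):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential: a steep repulsive core plus a weak attractive tail."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The potential crosses zero at r = sigma and has its minimum, -epsilon,
# at r = 2**(1/6) * sigma -- the equilibrium spacing of the pair.
```

Summing a potential like this over all pairs of molecules is what provides the internal-force model in a molecular dynamics simulation.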
And then you also have to describe the confinement, because most of the time your system is confined. For example, the air in this room is confined by the walls, or the water here is confined by the bottle. So you also want to describe what I would call the external forces, as something that puts into the model the effect of the walls.
But these effects of the external forces must be particularly simple. And there it's written that those forces
should not depend on the coordinates of constituents that are not the constituents of your system. So what does
that mean? For example, the walls, yes, the walls are made of molecules.
So when I have a molecule of oxygen that goes and hits the wall and bounces back, what affects that bouncing is certainly the molecules of the wall. But you don't need to include the molecules of the wall in your model, unless you're also interested in the effects of what happens inside the wall. You could just describe the bouncing as mirror-like, elastic bouncing. Or, for inelastic bouncing, you can define rules for how the particle bounces back.

So these rules may depend on some control parameters -- for example, where the wall is. You could have a classical piston: you can move a wall, and thus you control the position of the wall. You could also control with other parameters what the wall does to your particles, as long as that effect does not depend on the coordinates of constituents outside your system.
So if the external forces are of this kind, then we can call it a system. And here is a counterexample. For example,
suppose I have a hydrogen atom, with a proton and an electron. And we all know from electrostatics that the force between the proton and the electron depends on the relative distance between the two.

So if I have an electron which is bound in an atom, and I want to consider it as my system, I cannot. That cannot be a system, because the external force depends on the coordinates of the proton. If you want to consider that situation, you have to include the proton in your system.

You can only do otherwise if the electron is in a situation where the effect of the proton is independent of the relative position. Those situations are very few -- essentially, if the electron is very far away from the proton, so that it is a free particle. So only then can the electron alone be a system.
And as I said, most of the time you need your system, which is confined by, say, the walls according to those rules, and you also need another system on the outside which is capable of interacting with it. Usually we call that the environment of the system -- which, in principle, is the rest of the universe, but in practice is that little part of the rest of the universe that is relevant to your application.
So once we have defined the word "system," we say, well, we want to study the properties. So how do we study properties? Well, we go to the lab, and we do measurements. In general, the things that you can measure are physical observables. And they correspond to measurement procedures -- recipes that you apply in a lab, and that you should be able to apply in any lab.

So the recipes should not be lab-dependent. Otherwise, what you measure is not a feature of your system, but a feature of your system in your lab -- the lab becomes part of your system. OK.
But that's not enough. Not all physical observables can be called properties. In order for them to be called properties, you need the outcome of the measurement to depend, again within your model, on a single instant of time. For example, if I mark the positions of a particle at two instants of time, t1 and t2, and measure with a stick the distance between those two points, that's certainly a physical observable. You measure it. There is a measurement procedure: take the stick and measure.

But it depends on two times. It's not a property. Even if you divide it by the time interval between those two instants, you get a ratio which you would call the average velocity over the interval. But that, again, is not a property.
Instead, we all know that if you take the limit of t2 going to t1, measuring that same ratio over a shorter and shorter interval, the ratio tends to a property that we call the velocity. So that is a property, and it is assigned to the single time instant t1.
And there are many properties that you can describe, defined by the various measurement procedures that you can devise in order to study your system. For example, if this is my system, I could count how many molecules are in there. Counting is a good -- maybe difficult, but well-defined -- measurement procedure.
And if there is not only water, but maybe also carbon dioxide, I should count how many water molecules and how many carbon dioxide, and maybe some oxygen, some nitrogen. All right. Then, if my system is a variable-volume container -- not this bottle -- I can also measure the volume.

Although that's not really a property, because it's the value of one of those that we would call the parameters that characterize the external forces. And yes, for a lot of situations in which you want to study properties of substances, volume is the main and only parameter. But as you go further, especially to more detailed descriptions, volume alone is not enough.
So the shape of your container matters. We will see more about this. For example, if you want to describe the geometry of this room, you need many parameters. If it were just a box, it would be L1, L2, L3. But if you consider all the details -- there is a corner there, you've got tables -- how many parameters do you need? You need a lot of parameters. And they all count for some properties, but maybe not all. So, so far, these are still just the parameters.
And then you can start measuring properties. Properties depend on the model you are considering. For example,
suppose I consider the simplest classical mechanical system: a single particle in a box. And let's make this box one-dimensional, so it's even simpler: the particle can just move back and forth as x goes from 0 to L.

So this is your particle, a point mass. You have to specify its mass M, so M is a parameter of your system. The number of particles at time t, well, it's equal to 1 -- it's just one particle at any time. So it's a constant. Also, M is just M at any time.
What else can I measure? Well, I can measure where the particle is. So that's x. That may change. So that's a
variable property. I can also measure x dot, the rate of change, like we did there for velocity.
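The ratio-versus-limit distinction from a moment ago can be checked numerically. Here is a small Python sketch, with a hypothetical trajectory x(t) chosen only for illustration:

```python
def average_velocity(x, t1, t2):
    """Traveled distance divided by elapsed time: it depends on TWO instants, so it is not a property."""
    return (x(t2) - x(t1)) / (t2 - t1)

# Hypothetical trajectory: free fall, x(t) = (1/2) g t^2 with g = 9.81
x = lambda t: 0.5 * 9.81 * t ** 2

# As t2 approaches t1 = 2.0, the ratio tends to a single-instant quantity:
# the velocity at t1, namely g * t1 = 19.62
for dt in (1.0, 0.1, 0.001):
    print(average_velocity(x, 2.0, 2.0 + dt))
```

Shrinking the interval turns a two-instant observable into the property assigned to the single instant t1.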
So it turns out, as you know, that these two -- or their equivalents, position and momentum -- are the independent properties. If you measure those, then with those measurement results you can compute all the other properties. Knowing x, maybe I want to compute x squared, or x cubed, or x to the n -- and n can be anything. So I have an infinite list of properties.
But I can shorten the list by just using the independent ones -- a set of independent ones. So let's say that these in this list are the independent ones, those that are strictly needed in order to compute all the other properties. Each one of them is a number; so if you know all these numbers at one instant of time t, there is nothing else you can say or measure about your system at that instant.

So that list fixes all the properties. That is what we call the state. The state of a system is a list of numbers from which you can compute all its properties at one instant of time.
The same system -- let's consider it here -- in another theory... for example -- not just for example, another important theory is quantum mechanics. And you know that in quantum mechanics the story is a bit more complicated. Because, for example, if you want to measure position, you cannot really talk about -- you're not completely allowed to think of -- the particle having a position until you measure it.

If you ask the particle, "Where are you?" -- so you make a measurement act for position -- the particle is compelled to give you an answer. It will answer, "Yes, I'm here." But you cannot infer from that single act of measurement that the next time you have the same system in the same state and perform a measurement act, the particle will give the same answer.
All you can predict -- all you can represent -- is the probability of getting an answer. The probability of finding the particle between x and x plus dx is the area under this curve times dx, where the curve is the wave function modulus squared.
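For the one-dimensional box, the standard textbook wave functions make this concrete. The Python sketch below (box width L = 1, ground state n = 1 -- illustrative choices) just checks that the modulus squared of the wave function integrates to a probability:

```python
import math

def prob_density(x, L=1.0, n=1):
    """|psi_n(x)|^2 for the standard particle-in-a-box state psi_n = sqrt(2/L) sin(n pi x / L)."""
    return (2.0 / L) * math.sin(n * math.pi * x / L) ** 2

def probability(a, b, L=1.0, n=1, steps=10000):
    """Probability of finding the particle between a and b: the area under |psi|^2 (midpoint rule)."""
    dx = (b - a) / steps
    return sum(prob_density(a + (i + 0.5) * dx, L, n) * dx for i in range(steps))
```

The total probability over the whole box is 1, and by symmetry the probability of finding the particle in the left half is 1/2.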
So for this particular simple quantum mechanical system, you need a function in order to describe the probabilities. And a function is an infinite set of numbers, because it has a value here, another value there, and so on. So this is a system which has an infinite number of independent properties, which are, though, thanks to mathematics, expressible in terms of a single function. In more complicated situations, the function is not the wave function but is called the density operator; but still, it's a mathematical object that contains all the information about the state.
So having rho of t allows you to compute the values of all properties for a quantum system. So that's the word "state." In the easy expositions of thermodynamics, one usually says: the state of the system is its condition at the given instant of time. This is what we mean by condition: a list of numbers, the values of all the independent properties.
The next step is that, if I know the state of the system -- and actually, here I'm going to use the letter A to represent the system. We use it for several purposes: the box typically represents the idea of a system, and A is the name of the system.

A sub 1 is really A at time t1. So A sub 1 is defined by the list of the values of all the properties at the given instant of time t1.
So if I know the state now, I want to see how the state evolves in time. And the state -- namely, the properties -- can change in time for many reasons, due to many phenomena. Some of them are related to the effect of internal forces: particles in the box collide, and the effect of the collisions makes their positions change.
They may be due, also, to the external forces, collision with the wall or change of the position of a wall. So
somehow, you could say that if you are not playing around with the parameters of the system, changing the
walls, or moving the walls, and what happens is just due to the internal forces, you call that internal dynamics.
You call it a spontaneous process within the system. It happens all within the system.
If, instead, the change of your state -- the change of the properties -- is due either to what you do to the walls, to the moving of the parameters, or to what happens on the outside of your system, then we speak of an interaction. So if I have some hot air that goes around this bottle, and therefore there is a heat exchange through the wall that is going to heat up the bottle -- or maybe cool it down; you put the bottle in the refrigerator, it cools down -- that is an interaction.
And interactions are of several kinds. We will spend time describing and giving models for these kinds of interactions: first of all, work, and then heat. You will notice that we will define heat. We do not start from heat.
And this is very important if you are a fan of logical consistency. Because heat is typically defined by heuristic
arguments and by a lot of hand-waving, even by the best physicists. For example, there is a series of lectures -- I'll mention another in a moment -- that you may know, by Feynman, which you can find on YouTube. He was a wonderful professor in the '60s, and he has these very funny lectures. He was a great teacher.
And you should see how much hand-waving he does in order to describe what heat is -- because it has to do with molecular agitation and all that. Yes, you can do it that way. But you'll see that here we will do it in an entirely different way, which is equivalent, and you will see where it derives from: heat is going to be a consequence of the laws of thermodynamics.

Which is the way it should be. Because, after all, heat, as you correctly mentioned, is a concept peculiar to thermodynamics. So how could you pretend, or attempt, to describe it by just using mechanical concepts? More on that as we go along.
If a system undergoes spontaneous processes and is unable to affect the state of the environment, we call it an
isolated system. So we have the state, and we want to describe how the state evolves in time. Now, this picture is a trajectory. But you have to understand this trajectory as a trajectory in state space.

So for every independent property, you should think of having an axis. Suppose you have just three independent properties, like x, y, z; then the state is a point here. But if I have more independent properties, I need more axes, one for each independent property -- so I need a multi-dimensional space, not easy to draw.

The description of the trajectory in our models is the solution of a differential equation that we call the equation of motion of the system. It says: the rate of change of the state is some function of the state itself and of the parameters of the external forces, the things that you can control from the outside.
OK, so that's the equation of motion. If your model has the equation of motion -- like in molecular dynamics simulations; I believe you do have the equations of motion there -- those equations are exactly what you solve in order to see the system evolve. Then, by analyzing that equation of motion -- or maybe the results, or maybe the results will suggest ideas about analyzing the equation -- you find some general features of that time evolution.
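In the simplest classical case, "solving the equation of motion" means exactly this kind of loop. Here is a minimal Python sketch for a unit-mass harmonic oscillator; the force law and the explicit-Euler time stepping are illustrative choices, not anything specific to this course:

```python
def equation_of_motion(state):
    """Rate of change of the state (x, v): dx/dt = v, dv/dt = -x (unit mass and stiffness)."""
    x, v = state
    return (v, -x)

def evolve(state, dt, steps):
    """March the state forward in time by repeatedly applying the equation of motion."""
    for _ in range(steps):
        dx_dt, dv_dt = equation_of_motion(state)
        state = (state[0] + dt * dx_dt, state[1] + dt * dv_dt)
    return state
```

Starting from (x, v) = (1, 0) and stepping up to t of about pi brings the oscillator close to (-1, 0), as the analytic solution x(t) = cos t predicts.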
These features, which you can prove, are theorems of the equation of motion. Some of these theorems are features of all models -- good models, successful models -- of physical reality. In other words, what I'm saying is that if you're describing a piece of physical reality with your model, that model will have an equation of motion.

And that equation of motion will always have a couple of theorems, no matter what the system is -- small or large, complicated or simple. Those two theorems are general features of all modeling of physical reality. These results become so important, because they are so general, that we call them laws -- like the law of conservation of momentum.
And here, in particular, we will be concerned about those that have to do with energy and entropy. But the
message here is that these results, if you have the equation of motion, are theorems of the equation of motion.
However, there is an alternative. Suppose you don't have the equation of motion, or you don't want to solve the entire equation of motion for your system because perhaps it's too complicated. I mean, can you imagine solving the equations of motion for 10 to the 23 particles?

If you do your molecular dynamics simulations, the number of particles may be, I don't know, a million, but not 10 to the 23. So not only is it impractical to actually do it, but it's actually useless. Because you can extract a lot of information by, first of all, studying smaller systems, as long as they are not too small -- we will see. But also, you may not need all the details of the time evolution.
However, even if you give up the full dynamical description -- the full equation of motion -- you shouldn't forget about these two theorems, which any equation of motion must satisfy. At that level of
description, you call them fundamental principles. You call them laws. You call them postulates. They become the
starting point, something that you know must be satisfied by your model.
And so, instead of describing the entire time evolution, we use -- when we can, or when it is enough -- the idea of process. The idea of process involves not only an initial and a final state of your system, on which you want to check that those laws are satisfied. Yes, you do need an initial and a final state. But you also need to describe the effects that the time evolution of your system induced in the rest -- in the environment -- of the system.
So how do I describe the effects on the environment of the system? Well, let's call B the environment. The effects on B are described by the change of its state. What more can I do than say how all the properties of B have changed? So that's the simplification: I need to describe only the initial and final states of my system and of the environment.
And now we get ready to introduce the First Law. In order to introduce the First Law, we need to define a
particular kind of process that allows us to connect this more general theory that we are constructing to what we
know from mechanics. So we consider the simplest mechanical object, something like that -- here it's represented by a weight that can be raised or lowered in a gravity field.
That's why we're going to call this particular kind of process a weight process. A weight process is one in which the external effect -- you remember, the external effect is the change of state of the environment B -- in this case, for the weight process, B is just the weight. The only effect that occurs outside of your system is the change in elevation of the weight.
So if you have a process like this, your system changes from one state to another, and the only external effect is the change in elevation of a weight. We call it a weight process. If not even the weight changes -- if z2 is equal to z1 -- then there are no external effects. This could be a spontaneous process.

But the idea of a spontaneous process is really more demanding. It requires that at all instants of time, from t1 to t2, the weight outside doesn't move. Instead, you could have a zero-change-in-elevation weight process, in which only the initial and final elevations of the weight are the same. During the process, the weight may have gone up or down; as long as at the end it goes back to the initial height, the change is zero.
So why is a weight process important? Because it enters the statement of the First Law. I know that usually,
especially in engineering applications, but not only, you say, all right, I'm going to write the First Law. And by
First Law, what you mean is really what we will call the energy balance equation.
But the First Law, in fact, the way we are introducing it here, contains much more than just the idea of energy
conservation and energy exchangeability that is behind the energy balance equation, as we will see in a second.
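To preview the notation (with E for the energy whose existence the First Law will guarantee, m and z for the mass and elevation of the weight, and g for the gravitational acceleration -- all to be defined properly later), the energy balance for a weight process between states A1 and A2 can be sketched as:

```latex
E^{A}_{2} - E^{A}_{1} = -\, m g \,(z_{2} - z_{1})
```

In words: whatever energy the system gains is exactly what the weight loses by dropping from z1 to z2.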
But it is also what guarantees the existence of the property energy for every system. And it does so because we have been careful with our definition of system.
If you go back to that picture of the proton and the electron, and I said, all right, I cannot take the electron and
call it a system. It doesn't qualify as a system. And in fact, for this thing which is not a system, I cannot apply the
First Law. And as you will see, the consequence of the First Law will be the existence of property energy.
This thing doesn't have energy. You cannot define the energy of the electron when it is bound to a proton. Why not? Well, yes, if you have an electron without the proton, far away, it has its own rest mass, and maybe you can define its energy using the famous formula. But when it is bound to a proton, there is an electrostatic interaction, and there is some energy related to this interaction. It's called the interaction energy, and it belongs neither to the electron nor to the proton. So the electron alone doesn't have an energy of its own. And so I'm happy that we have excluded it -- yes, I'll be there in a moment -- I'm happy that we have excluded it by our definition of system, so that we will be able to say that every system has energy as a property. Every system satisfies the First Law the way we state it. But it has to be a system.
The limit-- for the electron by itself, no. But if I consider the atom, and the atom is far from everything else, then
the two-- I mean, this force becomes an internal force. And that energy, the interaction energy, becomes part of
So for example, in terms of Hamiltonians, if you are familiar with that, you would write the Hamiltonian of the
electron, which is the thing that represents the energy the electron would have by itself, then the Hamiltonian of
the proton and then the interaction energy between electron and proton. So that's well-defined for this entire
system, for the combined-- no, for this, actually, for the only system you can define for this situation.
But you cannot say, OK, but this is the energy of the electron. No. This would be the energy of the electron if the proton were not there.
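This split can be sketched numerically. The following is an illustrative classical calculation, not something from the lecture; the constants are standard CODATA values, and the function names are my own.

```python
# Illustrative sketch (not from the lecture): the energy of the composite
# electron+proton system splits as E = K_electron + K_proton + V_int.
# The Coulomb interaction term V_int depends on the relative distance,
# so it belongs to the composite system, not to either particle alone.

K_COULOMB = 8.9875517923e9   # Coulomb constant, N*m^2/C^2
Q_E = 1.602176634e-19        # elementary charge, C

def interaction_energy(r_m):
    """Electrostatic interaction energy (J) at separation r_m (metres)."""
    return -K_COULOMB * Q_E**2 / r_m   # attractive, hence negative

def total_energy(ke_electron_j, ke_proton_j, r_m):
    """Energy of the composite system: kinetic terms plus interaction."""
    return ke_electron_j + ke_proton_j + interaction_energy(r_m)

# At the Bohr radius the interaction energy is about -27.2 eV, a number
# that cannot be booked under either particle separately:
r_bohr = 5.29177e-11
print(interaction_energy(r_bohr) / Q_E)  # ≈ -27.2 (eV)
```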
AUDIENCE: So what's special about [INAUDIBLE] that-- does that apply to [INAUDIBLE] or specifically, is that [INAUDIBLE]?
GIAN PAOLO No, no, no. This is just an example, and this is the reason why I need it. Suppose this is the Moon, and this is the
BERETTA: Earth. Same thing, there is a gravitational force. You cannot consider just the Moon as a system if, for what you want to describe, its interaction with the Earth is important.
For the Moon it's simpler in the sense that if you consider short instants of time in which the Earth is relatively
fixed, so there is no change in the relative distance, then maybe for that instant of time, the Moon will move a
little bit. The relative distance will not change. So within that time frame, maybe you can neglect the
gravitational effect from the Earth. But it's exactly the same idea.
Or, say, suppose this is the air molecules, O2 molecules that are in this room, and this is the cement molecules in
the wall. If I can model the cement molecules in such a way that they imply a fixed effect on the molecules
outside, so that this becomes an external force that does not depend on the position or the movement of the molecules in the wall, then I can treat the air molecules as my system.
And, essentially, what I do is to substitute this with an effective potential of the outside onto my internal
molecules. And that effective potential is a way to describe external forces. So this is what you need to do in
order to come up with something that you can call a system and to which you can assign an energy. That's what
any model in physics is about. I'll be happy to discuss it more later on, any day, if you want.
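A minimal sketch of this effective-potential idea, with a made-up soft-wall potential (the functional form and numbers are assumptions of mine, not from the lecture):

```python
# Hypothetical effective potential: the wall molecules are replaced by a
# fixed "soft wall" acting on a gas molecule confined to 0 < x < L. The
# force no longer depends on the state of the wall molecules, only on the
# position of the internal molecule, so the gas alone qualifies as a system.

def wall_potential(x, L=1.0, k=1.0e4):
    """Effective external potential (J) at position x (m) in a box of size L."""
    if x < 0.0:
        return 0.5 * k * x**2          # quadratic penalty inside the left wall
    if x > L:
        return 0.5 * k * (x - L)**2    # quadratic penalty inside the right wall
    return 0.0                         # no wall force in the interior

print(wall_potential(0.5))   # 0.0 (interior: no force from the wall)
print(wall_potential(1.1))   # ≈ 50 (penetrating the wall costs energy)
```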
So what's the First Law? The First Law says-- this is important. It's actually made up of two statements. The first
one is that you take any system and take any two states of your system. They can always be interconnected via a weight process.
Now, there is something wrong in this slide. I'll have to correct it, the arrow. Because the law doesn't say that you
can always do it in this direction. It says you can interconnect them. So if you cannot go from A1 to A2, then you can go from A2 to A1.
The second assertion is about how much potential energy-- because from mechanics, we know that the weight
that goes up and down in a gravity field has a change in potential energy. So the potential energy change of your
weight here depends only on the pair of states, not on the particular way or the particular weight process with which you interconnect them.
Or in other words, all the weight processes that interconnect those two states have the same external effect or
equivalent of that external effect. Because sure, I'm using the weight in a gravity field as a prototype of
mechanical effect, but you could also use a charge in an electric field, or you could use a point mass that
changes from one velocity to a higher velocity. So the kinetic energy changes. Anything that is purely mechanical would do.
And so those two assertions in the First Law allow you to define a measurement procedure for property energy.
Because as we said before, a property is defined by a measurement procedure. And what's the measurement
procedure for energy? Well, you come to my lab with your system in a given state, and you ask me to measure
its energy. And I take your system, and I ask you to pay. And then before you leave, I'll tell you that, look, really, although on the door it is written that I measure energy, I actually measure energy differences.
So you have to pick another state of your system that I will consider as a reference. And then I will measure the
difference in energy between those two states. And how do I do it? Well, once you're gone, because I don't want
to show you my secrets, I go on my shelves and find weight processes. And I find one that fits, that allows me to
change the state of your system from the reference one that you have chosen to the actual state you're
interested in. And I measure how much gravitational potential energy change the weight has undergone when I
did that.
So that's a measurement procedure. And I give you the number. And of course, the units are already those of
potential energy, so they're the units of energy, the joule in SI. And what's nice about these two assertions is that they are valid for any system, any well-defined system. And so you, in principle, can find this weight process for any system and any pair of states.
Using that definition, you can prove. Now, if you're interested in the proof, go to the book. Because I want to go
quickly over these things. Because presumably, these are easy concepts. But the proofs are not that obvious. But
you can prove from those statements this fact-- that energy differences are additive.
So if I consider a system which is composite of two subsystems, namely, the definition of system applies for each
of them, so I consider a composite system C, which, for example, could be the entire mini universe, namely, my
system plus the environment. Or it could be just two systems. So system C doesn't have to be isolated.
In either case, you can prove by applying that same measurement procedure that defines energy that we have
just seen, if you apply it to system A in state A1, you get a number. I call it E(A1), the energy of system A in state A1. Likewise for system B in state B1, and for the composite system C in the state that I call C11, which is actually A1B1. It means that it's the state in which subsystem A is in state A1, and subsystem B is in state B1. The theorem says that the difference in
energy for system C, with respect to the reference, is equal to the sum of the differences of energy for A and B
with respect to their respective references, provided that I have chosen as reference-- a compatible reference for
system C.
So I cannot choose arbitrarily the value for the reference energy for A and for B and also for C. Otherwise, I do
not get additivity of the values. I only get additivity of the differences. All right.
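The distinction between additivity of differences and additivity of values can be checked with a toy calculation (all numbers are made up):

```python
# Toy illustration of the additivity theorem: energy *differences* add for
# any choice of references, while absolute values add only if the composite
# reference is chosen compatibly, E_C_ref = E_A_ref + E_B_ref.

E_A_ref, E_B_ref = 5.0, -3.0    # arbitrary reference energies for A and B (J)
dE_A, dE_B = 10.0, 7.0          # measured differences from the references (J)

E_A = E_A_ref + dE_A            # value assigned to state A1
E_B = E_B_ref + dE_B            # value assigned to state B1

E_C_ref = E_A_ref + E_B_ref     # compatible reference for C = A + B
E_C = E_C_ref + (dE_A + dE_B)   # theorem: the differences are additive

assert E_C - E_C_ref == dE_A + dE_B   # always true
assert E_C == E_A + E_B               # true only because E_C_ref is compatible
print(E_C)  # 19.0
```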
Another almost trivial consequence of the definition of energy is that when you have a weight process, so the only external effect is the change in height of a weight, and the weight doesn't change in height-- so there is no external effect-- then E2 is equal to E1. So that's the principle of energy conservation.
The other idea, which is important, is the fact that energy can be exchanged. So in other words, you may have
energy of your system, so the energy of your system may change. But if it changes, suppose it goes down for one
system, it must go up for some other system. So this is a consequence of the two aspects that we have just seen.
Conservation, if system C is isolated, we've seen here no change outside. So the energy is conserved. But the
energy is also additive. So if you combine these two, it says that whatever change of energy occurs for A, system
A, is equal and opposite in sign to the change in B. So if it goes down for A, it goes up for B.
Most of the time, our interest is in system A. And so if our focus is on system A, the effect on system B, we sometimes call it energy transferred. Or actually, most of the time, we call it energy exchanged between A and B. Because the idea is that if it disappears from A, and it appears in B, it has been exchanged.
Now, you could argue, well, is this really a transfer? And the argument is fascinating. I refer you to this
wonderful lecture by Feynman. I added this. So you don't have it in your slides, but I'll post it in today's post,
along with the final slides for today. And there is a link to the YouTube lecture, in which in a very funny way, he
describes these principles of-- great conservation principles, explains why they're great.
But the reason I reference it here is that he also explains, using relativity concepts but in a simple way, that if a property is conserved, it must also be actually transferable. How does he say it? Yeah, transferable across the interface between one system and the other. It doesn't simply disappear here and reappear there, but it moves across the boundary. So that's what we call local conservation.
So for our purposes, in order to obtain the equation that we usually call the energy balance equation, we rewrite
this right-hand side with another symbol, which is this symbol here. It's energy. The superscript means during the time, or during the process, 1-2, so during the time interval t1 to t2. So that with this way of defining it,
you can rewrite this equation as the energy balance, like here. E2 minus E1 is equal to EA received from-- here I
already started simplifying the notation, so I dropped the B. But I should have written B because B is the
environment of A.
And the arrow on the symbol is the equivalent of what usually, you're used to as the sign convention for energy
exchanges. Now, some people like to say, well, I consider heat positive if going out and work positive if going in,
or vice versa. This notation allows you to define whichever convention you like, using the arrow.
And the fact-- this symbol here, which we will call energy exchange, can have both positive and negative values.
If it is positive, it means that the flow of energy is in the direction of the arrow. So in that case, it's positive if in.
But you can also define it as positive if out. In that way, the energy balance is written this way.
So you choose the sign convention, which allows you to avoid having to have a figure like here, to give meaning
to properties that are not just numbers. Because the interactions, the energy exchanged via an interaction
between two systems has not only a value, but also a direction. So it's like a vector. It's a little simpler than a
vector because it's just in or out. But it has two items, the amount and the direction.
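The point about the arrow and the sign convention can be sketched like this (the function names are illustrative, not the lecture's notation):

```python
# The energy exchange carries an amount and a direction. Choosing the
# convention "positive if in" or "positive if out" only flips a sign in
# the balance equation; the physics is the same.

def e2_from_energy_in(e1, energy_in):
    """Balance with 'positive if in':  E2 - E1 = +E_exchanged."""
    return e1 + energy_in

def e2_from_energy_out(e1, energy_out):
    """Balance with 'positive if out': E2 - E1 = -E_exchanged."""
    return e1 - energy_out

# 50 J flowing into the system, described with either convention:
assert e2_from_energy_in(100.0, 50.0) == 150.0
assert e2_from_energy_out(100.0, -50.0) == 150.0
```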
And then sometimes, we also write the energy balance equation in rate form. So we divide by the time interval
that occurs between t2 and t1. And if that time interval is infinitesimal, meaning that we can follow the process as a function of time, this is a more refined description that works not only for finite time intervals, but also for infinitesimal ones. Then you can write it in the form of a differential equation.
So the rate of change of the energy is equal to the rate at which energy is transferred into your system. So this is the energy balance equation. I don't know if you'll survive, but I'll try to finish maybe a couple of minutes early. But I'd prefer not to take a break, because a break allows you to go away. That's the problem. I don't want to lose you.
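The rate form of the energy balance, dE/dt equal to the rate at which energy is received, can be sketched with a simple forward-Euler integration (an illustration of mine, not a lecture example):

```python
# Forward-Euler integration of the rate-form energy balance dE/dt = Edot(t),
# where Edot(t) is the rate at which energy is received by the system.

def integrate_energy(e0, edot, t1, t2, n=1000):
    """Return E(t2) given E(t1) = e0 and the inflow rate edot(t) in watts."""
    dt = (t2 - t1) / n
    e, t = e0, t1
    for _ in range(n):
        e += edot(t) * dt   # E2 - E1 accumulated step by step
        t += dt
    return e

# A constant 5 W inflow for 10 s adds 50 J to the system:
result = integrate_energy(100.0, lambda t: 5.0, 0.0, 10.0)
print(round(result, 6))  # 150.0
```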
So I want to give the ingredients and maybe, also, the statement of the Second Law so that you can see that in a
couple of hours of lecture, you can get almost to the Second Law. The Second Law, at least in the statement that
we are going to adopt, has to do with states and certain types of states. In particular, the stable equilibrium states will play a central role in the Second Law. So we need to define what we mean by that.
So we can classify states according to two criteria. So that makes for four possibilities. The rows here are
characterized by whether the state changes in time or does not change with time, and the columns, by what is
the cause for those changes. Why does it change? Does it change because of something that occurs outside, so
it's an interaction with an external system, with the environment of the system? Or does it occur even if those external interactions are absent, because of the internal dynamics?
And these are the various possibilities. So we call a state unsteady if it changes with time, and that is because of external interactions. The typical example, just to fix ideas-- consider a bathtub, in which you have water coming in
from the faucet, and you have a drain open. And suppose that this is a simple system, in which the only property,
relevant property, is the shape and position of this interface between water and air.
So an unsteady state would be one in which the level changes because there is no balance between the inflow
and the outflow. If, instead, the state doesn't change-- in other words, you have a flat surface that stays put at a given height all the time-- but this is due to the fact that there is balance between the inflow and the outflow, then that is a steady state. If, instead, the faucet and the drain are closed and the surface moves, because maybe I shake it, and there are waves, that is a non-equilibrium state. In the non-equilibrium state, the moving aspect is related to the internal dynamics, not to the interactions with what's outside, not with the environment.
Now let's focus on equilibrium. There is equilibrium and equilibrium. The concept of equilibrium is known from
mechanics. So I'm sure you've seen it somehow, somewhere. The stability of equilibrium also is important. It is known from mechanics as well.
Now, if we go back to that idea of describing the change of state in state space, so that's A1, or A of t1, and this
is A of t2, with the understanding that now, the blackboard, this point represents a point in a multi-dimensional
space, in which each axis is an independent property. So this is the solution of a differential equation, which is the equation of motion of the system.
So if the state is changing, because as time goes on, it moves from here to there, this is not an equilibrium state.
An equilibrium state would be one in which the state does not change. So what's the trajectory? The trajectory is
a point. You start there, and you remain there. So that's equilibrium.
The question of stability is whether trajectories that start nearby stay nearby-- where do they go? You play the game of a mathematician who likes epsilons and deltas. The mathematician, in this case, is named Lyapunov. Sorry if the spelling is wrong-- I'm not sure about it.
So Lyapunov says, if the state, this state, equilibrium state, is stable, give me any small number epsilon. Then I
can find a smaller number delta, which is a function of epsilon, such that if I consider all the states that are within
distance delta from the equilibrium state, so within this small hypersphere, ball of radius delta, any trajectory
that has a point in there will not evolve farther away than the bigger sphere. So it will stay close, so to speak.
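In symbols, the definition just stated reads as follows (a standard rendering, with x_e the equilibrium state and x(t) a trajectory in state space):

```latex
% Lyapunov stability of an equilibrium state x_e:
\[
\forall\, \varepsilon > 0 \;\; \exists\, \delta(\varepsilon) > 0 :
\quad \lVert x(t_0) - x_e \rVert < \delta
\;\;\Longrightarrow\;\;
\lVert x(t) - x_e \rVert < \varepsilon \quad \text{for all } t \ge t_0 .
\]
```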
So this would be stable. But actually, this definition of stability is local, locally stable. So that would be a feature
of this state and also of that state. However, I'm not going to give you the epsilon-delta version of metastability-- because I don't remember it, and it doesn't matter. But it can be done, and I can give you the reference for that.
There is a difference between this locally stable equilibrium state and these other locally stable equilibrium states. This one, which we call metastable, is such that-- look what I can do. I can, in a weight process, take this mass, or whatever it is, uphill, up to the cusp, and then take it down on the other side to this same level.
So in order to do that, I had to spend some energy from the weight outside to pull the mass up. But then that
energy is given back to the weight, at least if I get exactly to this same height. At that stage, I have been able to
do what? I have been able to change the state into an entirely different state, this one, which if I let it
go, it will start bouncing back and forth.
So definitely, this situation in which the thing bounces back and forth is different from this one. So this stable
equilibrium state-- I'm sorry, this metastable equilibrium state, I have been able to change it to a different state
without leaving any external effects. Whereas, this is not possible for this lowest energy equilibrium state.
Because if you want to change it, wherever I want to go, I need to spend some energy, and it will not return it
back to me.
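The energy bookkeeping of that argument can be written out with made-up numbers (nothing below is from the lecture):

```python
# Bookkeeping for the metastable case: carry the mass from the metastable
# minimum up to the cusp (energy spent by the external weight), then lower
# it on the other side back to the same height (energy returned). The net
# external effect is zero, yet the state has changed -- so the state was
# only metastable, not stable.

U_META = -0.71   # potential energy at the metastable minimum (J, made up)
U_CUSP = 0.15    # potential energy at the cusp (J, made up)

work_spent    = U_CUSP - U_META   # lifting the mass to the cusp
work_returned = U_META - U_CUSP   # lowering it to the same height

net_external_effect = work_spent + work_returned
print(net_external_effect)  # 0.0
```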
So with this in mind, the idea and the actual definition of a stable equilibrium state-- which is very important for our statement of the Second Law-- is this: a stable equilibrium state is an equilibrium state that cannot be changed without leaving net external effects. Without net external effects, I cannot change a stable equilibrium state.
And why is that important? Sorry. All right, do I have the time? No. OK. Why is that important? We'll return to the previous slide. Because the Second Law is about this question. It addresses and gives an answer to this question: how many stable equilibrium states does a system have?
We all know the answer that mechanics gives. Mechanics says there is one, only one: the minimum-energy state, provided, of course, you have fixed the number of particles and the values of the parameters of your external forces.
However, the statement that we will adopt, which is due to Hatsopoulos and Keenan, is that no, in
thermodynamics, that's not true. Once you have fixed the number of particles and the values of the parameters, you have one stable equilibrium state for every value of the energy. Which, of course, appears to be a contradiction with mechanics.
And as you know, people have gone on for a century, talking about the discrepancy between mechanics and
thermodynamics, and how do we resolve it, and so on. We'll see that there is no discrepancy, and there is no serious problem, the moment we realize that the states considered in mechanics are just a subset of the states
considered in thermodynamics.
And actually, we will see that this is the subset of states for which the entropy is equal to 0. But we haven't
defined entropy yet. The definition of entropy will come next time. And so for the moment, I think I will finish here. I promised to finish five minutes early, and we'll take it from here.