ENG2086 Lecture 5 - M Scullion Notes

28 August 2023 11:27

Engineering Mathematics 2 - University of Glasgow


Multivariable Calculus
LECTURE 5: Total Differentials and Errors, Intro to Exact Differentials
Notes (Dr Mark Scullion)

TOTAL DIFFERENTIAL

Some functions have more than one independent variable; for example, u = f(x,y) has two. Such a function has partial
derivatives ∂u/∂x and ∂u/∂y. In addition to derivatives we can also look at something called a differential. A
differential can be used to estimate how much the function u = f(x,y) will change for small changes in all of its
independent variables at the same time. This is useful for calculating errors or small changes in a function, as we
shall see. [Note that a differential (e.g. df) is an incremental change in a function, not a rate of change (a rate of
change is a derivative, df/dt). They are not the same thing: there is no change per something, just a change.] Suppose
we have a function u = f(x,y) and we want to find out how much the function changes in value, ∆u, when we have small
changes ∆x and ∆y in both variables. (Note that a function could have any number of independent variables, not just two;
we need to include small changes in all of these variables.)

∆u, the change (increment) in u, is just the new value of u when x and y have changed by ∆x and ∆y
respectively, minus the original value of u = f(x,y):

∆u = f(x + ∆x, y + ∆y) − f(x, y)

We can replace all the f's with u's, since u = f(x,y). The total differential of u = f(x,y) is then:

du = (∂u/∂x) dx + (∂u/∂y) dy

Note that we have a mixture of differentials and partial derivatives. If we had more variables we would
have more terms. For example, for V = f(x,y,z):

dV = (∂V/∂x) dx + (∂V/∂y) dy + (∂V/∂z) dz

The total differential (e.g. dV above) needs to contain a term for every independent variable (e.g. x, y and z). Instead
of writing each partial derivative in terms of f we can write it in terms of V (both methods are equivalent, as V = f(x,y,z)).
It sort of looks like the total dV needs a little bit of ∂V from each of the variables: the dx and ∂x sort of cancel out, as
do the dy and ∂y, and the dz and ∂z, each contributing a little bit of ∂V to dV. This is not strictly true, as a derivative
∂V/∂x is a single quantity, not two quantities dividing each other (and ∂x is partial whilst dx is not), but thinking about
it this way helps to visualise and remember how to write the total differential. The total differential helps us work out
how the function V will change for small changes in all its variables x, y and z (i.e. what dV is, when we have the changes
dx, dy and dz). This is useful for determining errors, or how a function will change with changes in its variables. Another
way to view it: although dV in the above example is a change rather than a rate of change, how much it changes by will
depend on the rate of change of V with respect to each of its variables. If we re-write it, we see that the
change dV depends on the rate of change of V with respect to x (∂V/∂x) times how much x actually changes by (dx),
plus the rate of change of V with respect to y (∂V/∂y) times how much y actually changes by (dy), plus the rate of
change of V with respect to z (∂V/∂z) times how much z actually changes by (dz).

Let's consider some examples:

Consider the cylinder example given in the slides:

To calculate the error in the volume of the cylinder we will do it two ways. The first is to calculate it using
our equation for volume, making the changes ∆r and ∆h and calculating what this does to ∆V. We will
then compare this with an approximation we can obtain via total differentials (more later).

This is the exact range of ∆V calculated using the equation for V. An alternative approach is to
use the differential. As long as the variables that are changing only change by a small amount,
then ∆r ≈ dr, ∆h ≈ dh and ∆V ≈ dV. The smaller the changes in the variables, the more accurate
this approximation is. Instead of using the exact method above, we can use the total
differential to get an approximation, which may be faster or easier to obtain depending on
the complexity of the equation.
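The slides' actual numbers aren't reproduced in these notes, so here is a minimal sketch with made-up values (r = 5, h = 10, ∆r = 0.1, ∆h = 0.2 are assumptions) comparing the exact ∆V with the differential approximation, assuming the usual cylinder volume V = πr²h:

```python
import math

# Hypothetical values (the real numbers are on the slides): a cylinder
# with V = pi * r^2 * h, nominal r = 5, h = 10, and small errors in each.
r, h = 5.0, 10.0
dr, dh = 0.1, 0.2

V = math.pi * r**2 * h

# Exact change: recompute V with the perturbed values and subtract.
dV_exact = math.pi * (r + dr)**2 * (h + dh) - V

# Differential approximation: dV = (dV/dr)*dr + (dV/dh)*dh
#   dV/dr = 2*pi*r*h   and   dV/dh = pi*r^2
dV_approx = 2 * math.pi * r * h * dr + math.pi * r**2 * dh

print(f"exact  dV = {dV_exact:.4f}")
print(f"approx dV = {dV_approx:.4f}")
```

With these (assumed) 2% errors the two answers agree to within about 2%, and shrinking dr and dh brings them closer still.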

Instead of specifying the error in absolute terms, we could also specify it as a relative error (i.e. the
fraction or percentage change versus the original value). This is useful as often several terms cancel
when we divide by the original equation (here V), and we are left with an expression that depends on
some combination of the relative errors of the independent variables. A good tip here: for each
partial derivative in the total differential, after you find its equation, try to re-write it in terms
of the original function (here the original function is V, so we try to re-write the partial
derivatives in terms of V; this means comparing the equation for V with the partial derivative to see
the factor between them). When you come to do the relative error, things will cancel easily.

If we had known from the start that we were going to calculate relative errors, an
easier way to do this question would be to re-write each partial derivative in
terms of the original function. This makes it easier to cancel terms, and you will
get an answer in terms of each variable's relative error. See the following
question repeated below:

Re-writing the partial derivatives in terms of the original function V made
cancellation of the V terms easy in the relative errors. I suggest you try to do this to
make the maths easier. It is really only useful for relative errors.
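To sketch the cancellation (again assuming the cylinder V = πr²h with made-up numbers): the partials can be re-written in terms of V itself, ∂V/∂r = 2πrh = 2V/r and ∂V/∂h = πr² = V/h, so dividing the total differential through by V leaves only the variables' relative errors:

```python
# For V = pi*r^2*h, rewriting the partials as 2V/r and V/h and dividing
# the total differential dV = (2V/r)*dr + (V/h)*dh through by V gives
#   dV/V = 2*(dr/r) + dh/h
# i.e. the relative error in V from the relative errors in r and h.
r, h = 5.0, 10.0          # hypothetical nominal values
dr, dh = 0.1, 0.2         # hypothetical errors (2% in each)

rel_error = 2 * (dr / r) + dh / h
print(f"relative error in V ≈ {rel_error:.1%}")   # 2*2% + 2% = 6%
```

Notice the factor of 2 on dr/r: it comes straight from the power of r in the formula, which is why the re-writing trick makes the cancellation obvious.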

Let's look at another example. Another useful error is the relative error bound. This tells us what the
maximum magnitude of the relative error could be, so the actual relative error must have a magnitude
that is less than or equal to it. For example, you may have a system that depends on many variables. If
you know the maximum range these variables can change by, the error bound will tell you the
maximum possible change in the function's value. The actual error must have a magnitude that is less
than or equal to this maximum.

a and b can both be positive or negative, as they are errors (usually errors are ± some number). When we take their
magnitudes the sign disappears: |a| and |b| are both positive numbers. Similarly |a+b| and |a−b| are
also positive numbers due to the magnitude signs, but the size of each will depend on the signs of
a and b. For example, if a and b were both positive then |a−b| would be smaller than |a|+|b|, due to b taking
away from a inside |a−b|. If however a was positive and b was negative, then inside |a−b| the −b would become
positive (two negatives cancel), making |a−b| as big as |a|+|b|. Let's look at an
example. If a=2 and b=1 then |a−b| = |2−1| = 1, whilst |a| = 2 and |b| = 1; in this case
|a−b| = 1 is less than |a| + |b| = 3. If however a=2 and b=−1 then |a−b| = |2−(−1)| = 3, whilst |a| = 2 and |b| = 1; in this
case |a−b| = 3 is equal to |a| + |b| = 3. As we don't know whether a and b are positive or negative, the best
we can say in general terms is that |a−b| ≤ |a| + |b|. We are not saying they are equal; we are saying less than or
equal to. We need to do this as errors are given over a range: we know they can be plus or minus some number,
but we don't know what the actual error in one particular measurement may be, just that it falls somewhere in
the range. In the example question above this note, the expression for the relative error in r had a negative sign
in front of it. In the relative error bound we want to find the maximum possible error, and each variable's error can be
positive or negative. In the above example the biggest possible error would occur when the value of the relative
error in r was a negative number (as the minus sign in front of the expression for dr/r would cancel with the
negative error in the value of dr/r). If we had kept dr/r positive then this would actually reduce the error in F in
the above example, as the minus sign in front of our expression for dr/r would take away from the other relative
errors, making the overall error smaller. The relative error bound assumes the worst case, so we apply magnitude
signs and get rid of the minus signs in front of our expressions using the general rule mentioned in this note:
|a−b| ≤ |a| + |b|.
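The worked numbers in the note can be checked directly, together with a quick sweep confirming that |a−b| ≤ |a| + |b| holds whatever the signs of a and b:

```python
# The note's two cases: strict inequality when a and b have the same sign,
# equality when they have opposite signs.
a, b = 2, 1
assert abs(a - b) == 1 and abs(a) + abs(b) == 3   # 1 < 3

a, b = 2, -1
assert abs(a - b) == 3 and abs(a) + abs(b) == 3   # 3 = 3

# A sweep over a grid of signed values confirms the general rule.
for a in range(-5, 6):
    for b in range(-5, 6):
        assert abs(a - b) <= abs(a) + abs(b)
print("triangle inequality |a-b| <= |a| + |b| holds")
```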

Only N₀ and λ have errors. We will assume t has no error, as we are told it is well known (this is just for
this example; in the real world errors can never be zero, but if the time is measured much more
accurately than the other variables then we can assume its error is negligible). Although λ is a physical
constant, this constant was determined by experimental measurement, thus it can also have an error.
For error purposes we will therefore treat λ as a variable (as it has an error) and t as a constant
(as it has very little or no error). The total differential for error purposes need only contain terms
for variables with errors in them.
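The equation itself is on the slides rather than in these notes; assuming the standard decay law N = N₀·e^(−λt) (and made-up numbers), dividing the total differential by N gives dN/N = dN₀/N₀ − t·dλ, so applying the |a−b| ≤ |a| + |b| rule the bound keeps a term only for each variable that carries an error:

```python
# Hypothetical sketch, assuming N = N0 * exp(-lam * t) from the slides.
# Relative error:  dN/N = dN0/N0 - t*dlam
# Relative error bound (worst case, t treated as error-free):
#   |dN/N| <= |dN0/N0| + t*|dlam|
N0, lam, t = 1000.0, 0.05, 10.0   # made-up nominal values
rel_err_N0 = 0.02                  # assumed 2% error in N0
rel_err_lam = 0.01                 # assumed 1% error in lambda

dlam = rel_err_lam * lam
bound = rel_err_N0 + t * abs(dlam)
print(f"relative error bound on N: {bound:.1%}")
```

Note that dλ enters multiplied by t, not as a bare relative error: a small error in λ matters more the longer the decay runs.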

I mentioned before that the differential method for calculating errors or changes is only an
approximation. Why? Recall from the Lecture 3 notes that we calculated partial derivatives using the
method of increments.

At the end of the method of increments we let ∆x → 0, which means some of the terms
(underlined in green) effectively disappear. However, if our error in x is not very small, we can
no longer ignore these terms. We know that errors are never zero, thus the
differential method for calculating errors will always drop these extra terms in real
examples. Each additional term is smaller than the previous one due to an additional factor of ∆x
(multiplying a small number by itself makes it even smaller). In this example we used x⁴;
higher-order polynomials will have even more extra terms, but each of these will be even
smaller. As almost any function can be represented as a series of polynomials via a Taylor
expansion (more in later lectures), this means any function we differentiate will also have
these extra terms if we can no longer assume ∆x → 0. When we derived the equation for a
total differential we also assumed ∆x → 0 and ∆y → 0 to get the dx, dy and partial
derivatives in du = (∂u/∂x) dx + (∂u/∂y) dy. If these increments no longer tend to zero then the
equation for the total differential is only approximately true. The differential method
therefore always gives us an approximation, the approximation being more accurate the
smaller the relative errors in the function's variables. If the relative errors are very small then it
gives almost the same answer as working with exact increments; if they get larger we start to notice
some error in our approximation, as we don't take into account these extra terms.
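The dropped terms can be made visible with the notes' own f(x) = x⁴ example (x = 2 is an assumed value): the exact increment is (x+∆x)⁴ − x⁴ = 4x³∆x + 6x²∆x² + 4x∆x³ + ∆x⁴, while the differential keeps only the first term.

```python
# Why the differential method is only approximate: the exact increment of
# f(x) = x^4 contains higher powers of dx that the differential drops.
x = 2.0
for dx in (0.1, 0.01, 0.001):
    exact = (x + dx)**4 - x**4      # full increment, all terms
    approx = 4 * x**3 * dx          # differential estimate (first term only)
    print(f"dx={dx}: exact={exact:.8f}, approx={approx:.8f}, "
          f"missed={exact - approx:.8f}")
```

Each tenfold reduction in dx shrinks the missed part by roughly a factor of a hundred, since the leading dropped term goes as ∆x².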

Exact Differentials
Previously we have seen that the total differential of a function needs to include terms for every
independent variable of the function. So if U = f(x,y,z) the total differential would be:

dU = (∂U/∂x) dx + (∂U/∂y) dy + (∂U/∂z) dz

We now consider something called an exact differential. This is very similar to a total differential, and
often they are pretty much the same thing, but there is a subtle difference that causes confusion. In the above
example we started with a function U = f(x,y,z) and wrote down the total differential dU. Suppose instead we go the
other way: we are given an equation that looks like a total differential and asked whether it really is one. If
there is a function U that would generate this differential when put into the total differential form, then we
say the differential is exact: it has a solution U, and it really is a total differential. This is not always possible. Some
differentials cannot be generated from the total differential of any function: there is no function capable of
producing them -> they are not exact -> they have no solution U -> they are not a total differential.

This equation (2xy dx + x² dy, from the slides) looks like it has the same form as a total differential: it has something times dx plus something times dy.

If we make the assumption that 2xy = ∂U/∂x and x² = ∂U/∂y, then these two equations would be the same. Can
we make such an assumption, though? This would only be true if there existed a function U that, when we
differentiated it partially with respect to x, gave 2xy AND, when we differentiated it partially with respect to y,
gave x². As long as there is a function U that can do that, then we can say that the differential form we were
presented with does conform to a total differential, thus it has a solution: it is AN EXACT DIFFERENTIAL. If there is no
function U that would satisfy ∂U/∂x = 2xy AND ∂U/∂y = x², then the differential form has no solution, so it is not
an exact differential. In this particular example there is a function that satisfies these conditions: the function
U = x²y, when partially differentiated with respect to x, becomes 2xy, and when partially differentiated
with respect to y, becomes x², which agrees with the differential form we were given. As it has a function U (a
parent function), this example is an exact differential.
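The worked example can be checked numerically: the partial derivatives of the parent function U = x²y, estimated by small central differences, should reproduce the coefficients of dx and dy.

```python
# Check that U = x**2 * y is a parent function for 2xy dx + x**2 dy:
# its partial derivatives must match the coefficients of dx and dy.
def U(x, y):
    return x**2 * y

def partial(f, x, y, which, h=1e-6):
    """Central-difference estimate of df/dx or df/dy at (x, y)."""
    if which == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x, y = 1.5, 2.5   # an arbitrary test point
assert abs(partial(U, x, y, 'x') - 2 * x * y) < 1e-6   # coefficient of dx
assert abs(partial(U, x, y, 'y') - x**2) < 1e-6        # coefficient of dy
print("U = x^2*y generates 2xy dx + x^2 dy -> the differential is exact")
```

The check passing at one point is of course no proof; here we already know U = x²y works algebraically, and the numerics just make the partial-derivative conditions concrete.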

More generally we may be presented with some equation like the one below, with some function p of x and y
multiplying dx and some function q of x and y multiplying dy:

p(x,y) dx + q(x,y) dy    (1)

We therefore have a term multiplying dx plus a term multiplying dy, so it has the same form as a
total differential:

dU = (∂U/∂x) dx + (∂U/∂y) dy    (2)

If we can find a function U that, when put into the total differential form in equation (2), comes out looking exactly
the same as equation (1), then we can say that (1) is an exact differential, as it has a solution U. We are saying
that p dx + q dy really is a total differential (we weren't sure) because we can find a function that generates it from
the total differential form.

It is not always possible to find a function that, when put into (2), results in (1). Only those differentials that
have a parent function satisfying them are exact. All exact differentials are therefore total differentials, but
not all differentials are exact, because some differentials are not total differentials: no function
can generate them from the total differential form, so they have no solution for U. Only if we can find a function U
can we really say that a differential of the form in equation (1) is equal to dU (if the differential is not exact
then there is no U, by definition).

In the slides we see a few examples of functions that look like total differentials and are asked if they are exact.

How do we know that there is no function that satisfies 2)? How did we find the correct function for
1)? We can't just rely on luck or trial and error, as there are an infinite number of possible functions in
mathematics. What can we do? Both of these questions will be answered in the next lecture!
