Partial Differential Equations
Niels Walet
University of Manchester
This open text is disseminated via the Open Education Resource (OER) LibreTexts Project (https://LibreTexts.org) and like the
hundreds of other open texts available within this powerful platform, it is licensed to be freely used, adapted, and distributed.
This book is openly licensed which allows you to make changes, save, and print this book as long as the applicable license is
indicated at the bottom of each page.
Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts' web-based origins allow powerful integration of advanced features and new technologies to support learning.
The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online
platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable
textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the
next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an
Open Access Resource environment. The project currently consists of 13 independently operating and interconnected libraries
that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books.
These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated.
The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot
Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning
Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant
No. 1246120, 1525057, and 1413739. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not
necessarily reflect the views of the National Science Foundation nor the US Department of Education.
Have questions or comments? For information about adoptions or adaptations contact [email protected]. More information
on our activities can be found via Facebook (https://facebook.com/Libretexts), Twitter (https://twitter.com/libretexts), or our
blog (http://Blog.Libretexts.org).
4: FOURIER SERIES
4.1: TAYLOR SERIES
4.2: INTRODUCTION TO FOURIER SERIES
4.3: PERIODIC FUNCTIONS
4.4: ORTHOGONALITY AND NORMALIZATION
4.5: WHEN IS IT A FOURIER SERIES?
4.6: FOURIER SERIES FOR EVEN AND ODD FUNCTIONS
4.7: CONVERGENCE OF FOURIER SERIES
BACK MATTER
INDEX
GLOSSARY
CHAPTER OVERVIEW
1: INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS
In this course we shall consider so-called linear Partial Differential Equations (P.D.E.'s). This chapter is intended to give a short definition of such equations, and a few of their properties. However, before introducing a new set of definitions, let me remind you of the so-called ordinary differential equations (O.D.E.'s) you have encountered in many physical problems.
1.2: PDE’S
1.1: Ordinary Differential Equations
ODE's are equations involving an unknown function and its derivatives, where the function depends on a single variable, e.g., the equation for a particle moving at constant velocity,
\[\frac{d}{dt}x(t) = v. \tag{1.1.1}\]
The solution is
\[x(t) = x_0 + vt. \tag{1.1.2}\]
The unknown constant $x_0$ is called an integration constant, and can be determined if we know where the particle is located at time $t = 0$. If we go to a second order equation (i.e., one containing the second derivative of the unknown function), we find more integration constants: the harmonic oscillator equation
\[\frac{d^2}{dt^2}x(t) = -\omega^2 x(t) \tag{1.1.3}\]
has as solution
x = A cos ωt + B sin ωt, (1.1.4)
Review
To remind you of what that means: $\frac{\partial}{\partial x}u(x, t)$ denotes the differentiation of $u(x, t)$ w.r.t. $x$ keeping $t$ fixed,
\[\frac{\partial}{\partial x}\left(x^2 t + x t^2\right) = 2xt + t^2. \tag{1.2.2}\]
Equation 1.2.1 is called linear since $u$ and its derivatives appear linearly, i.e., once per term. No functions of $u$ are allowed.
Terms like $u^2$, $\sin(u)$, $u\frac{\partial}{\partial x}u$, etc., break this rule, and lead to non-linear equations. These are interesting and important in their own right, but outside the scope of this course.
Equation 1.2.1 is also homogeneous (which just means that every term involves either u or one of its derivatives, there is
no term that does not contain u). The equation
\[\frac{\partial^2}{\partial x^2}u(x, t) = \frac{1}{k}\frac{\partial}{\partial t}u(x, t) + \sin(x) \tag{1.2.3}\]
is called inhomogeneous, due to the sin(x) term on the right, that is independent of u.
Linearity
Why is all that so important? A linear homogeneous equation allows superposition of solutions. If $u_1$ and $u_2$ are both solutions to the heat equation,
\[\frac{\partial^2}{\partial x^2}u_1(x, t) - \frac{1}{k}\frac{\partial}{\partial t}u_1(x, t) = \frac{\partial^2}{\partial x^2}u_2(x, t) - \frac{1}{k}\frac{\partial}{\partial t}u_2(x, t) = 0, \tag{1.2.4}\]
then any linear combination $a u_1 + b u_2$, with $a$ and $b$ constants, is a solution as well.
For a linear inhomogeneous equation this gets somewhat modified. Let v be any solution to the heat equation with a
sin(x) inhomogeneity,
\[\frac{\partial^2}{\partial x^2}v(x, t) - \frac{1}{k}\frac{\partial}{\partial t}v(x, t) = \sin(x). \tag{1.2.6}\]
In that case $v + a u_1$, with $u_1$ a solution to the homogeneous equation, see Equation 1.2.4, is also a solution,
\[\frac{\partial^2}{\partial x^2}\left[v(x, t) + a u_1(x, t)\right] - \frac{1}{k}\frac{\partial}{\partial t}\left[v(x, t) + a u_1(x, t)\right] = \frac{\partial^2}{\partial x^2}v(x, t) - \frac{1}{k}\frac{\partial}{\partial t}v(x, t) + a\left(\frac{\partial^2}{\partial x^2}u_1(x, t) - \frac{1}{k}\frac{\partial}{\partial t}u_1(x, t)\right) = \sin(x).\]
Finally we would like to define the order of a PDE as the order of the highest derivative, even if it is a mixed derivative (w.r.t. more than one variable).
Exercise 1.2.1

a. $\frac{\partial^2 u}{\partial x\,\partial y} = 0$

b. $y\,\frac{\partial^2 u}{\partial x^2} + u\,\frac{\partial u}{\partial x} + x^2\,\frac{\partial^2 u}{\partial y^2} = 0$

c. $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$
Answer
TBA
Exercise 1.2.2
What is the order of the following equations?

a. $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$

b. $\frac{\partial^2 u}{\partial x^2} - 2\,\frac{\partial^4 u}{\partial x^3\,\partial y} + \frac{\partial^2 u}{\partial y^2} = 0$
Answer
TBA
2.1: Examples of PDE
Partial differential equations occur in many different areas of physics, chemistry and engineering. Let me give a few examples,
with their physical context. Here, as is common practice, I shall write $\nabla^2$ to denote the sum
\[\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \dots \tag{2.1.1}\]
The wave equation,
\[\nabla^2 u = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2}. \tag{2.1.2}\]
This can be used to describe the motion of a string or drumhead ($u$ is vertical displacement), as well as a variety of other waves (sound, light, ...). The quantity $c$ is the speed of wave propagation.
The heat or diffusion equation,
\[\nabla^2 u = \frac{1}{k}\frac{\partial u}{\partial t} \tag{2.1.3}\]
This can be used to describe the change in temperature ($u$) in a system conducting heat, or the diffusion of one substance in another ($u$ is concentration). The quantity $k$, sometimes replaced by $a^2$, is the diffusion constant, or the heat capacity. Notice the irreversible nature: if $t \to -t$ the wave equation turns into itself, but the diffusion equation does not.
Laplace’s equation:
\[\nabla^2 u = 0 \tag{2.1.4}\]
Helmholtz’s equation:
\[\nabla^2 u + \lambda u = 0 \tag{2.1.5}\]
This occurs for waves in wave guides, when searching for eigenmodes (resonances).
Poisson’s equation:
\[\nabla^2 u = f(x, y, \dots) \tag{2.1.6}\]
The equation for the gravitational field inside a gravitational body, or the electric field inside a charged sphere.
Time-independent Schrödinger equation:
\[\nabla^2 u + \frac{2m}{\hbar^2}\left[E - V(x, y, \dots)\right]u = 0 \tag{2.1.7}\]
Klein-Gordon equation:
\[\nabla^2 u - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} + \lambda^2 u = 0 \tag{2.1.8}\]
These are all second order differential equations. (Remember that the order is defined as the highest derivative appearing in
the equation).
The general form of a second-order linear PDE in two variables is
\[a\frac{\partial^2 u}{\partial x^2} + 2c\frac{\partial^2 u}{\partial x\,\partial y} + b\frac{\partial^2 u}{\partial y^2} + d\frac{\partial u}{\partial x} + e\frac{\partial u}{\partial y} + fu + g = 0, \tag{2.2.1}\]
where the coefficients (i.e., $a, \dots, g$) can either be constants or given functions of $x, y$. If $g$ is 0 the system is called homogeneous, otherwise it is called inhomogeneous. Now the differential equation is said to be
\[\begin{cases} \text{elliptic} \\ \text{hyperbolic} \\ \text{parabolic} \end{cases} \quad \text{if } \Delta(x, y) = ab - c^2 \text{ is } \begin{cases} \text{positive} \\ \text{negative} \\ \text{zero} \end{cases} \tag{2.2.2}\]
We neglect $d$ and $e$ since they only describe a shift of the origin. Such a quadratic equation can describe any of the geometrical figures discussed above. Let me show an example, $a = 3$, $b = 3$, $c = 1$ and $f = -3$. Since $ab - c^2 = 8$, this should describe an ellipse,
\[3\xi^2 + 3\eta^2 + 2\xi\eta = 4\left(\frac{\xi + \eta}{\sqrt{2}}\right)^2 + 2\left(\frac{\xi - \eta}{\sqrt{2}}\right)^2 = 3, \tag{2.2.4}\]
which is indeed the equation of an ellipse, with rotated axes, as can be seen in Figure 2.2.1.
We now recognize that $\Delta$ is nothing more than the determinant of this matrix, and it is positive if both eigenvalues have the same sign, negative if they differ in sign, and zero if one of them is zero. (Note: the simplest ellipse corresponds to $x^2 + y^2 = 1$.) As an example in three variables, consider the coefficient matrix
\[\begin{pmatrix} x^2 & xy & xz \\ xy & y^2 & yz \\ xz & yz & z^2 \end{pmatrix}.\]
Since this has always (for all $x, y, z$) two zero eigenvalues, this is a parabolic differential equation.
Exercise 2.E.1

What is the order of the following equations?

1. $\frac{\partial^3 u}{\partial x^3} + \frac{\partial^2 u}{\partial y^2} = 0$  (2.E.1)
2. $\frac{\partial^2 u}{\partial x^2} - 2\frac{\partial^4 u}{\partial x^3\,\partial y} + \frac{\partial^2 u}{\partial y^2} = 0$  (2.E.2)
Answer
TBA
Exercise 2.E.2

Classify the following differential equations (as elliptic, etc.)

1. $\frac{\partial^2 u}{\partial x^2} - 2\frac{\partial^2 u}{\partial x\,\partial y} + \frac{\partial^2 u}{\partial y^2} = 0$  (2.E.3)
2. $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial u}{\partial x} = 0$  (2.E.4)
3. $\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} + 2\frac{\partial u}{\partial x} = 0$  (2.E.5)
4. $\frac{\partial^2 u}{\partial x^2} + \frac{\partial u}{\partial x} + 2\frac{\partial u}{\partial y} = 0$  (2.E.6)
5. $y\,\frac{\partial^2 u}{\partial x^2} + x\,\frac{\partial^2 u}{\partial y^2} = 0$  (2.E.7)
Answer
TBA
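The discriminant test (2.2.2) is mechanical enough to script. A minimal sketch, assuming the second-order part is written as $a\,u_{xx} + 2c\,u_{xy} + b\,u_{yy}$; the helper name `classify` is my own:

```python
# Classify a second-order PDE whose principal part is
#   a u_xx + 2c u_xy + b u_yy
# by the sign of Delta = a*b - c^2, as in Equation 2.2.2.

def classify(a, b, c):
    delta = a * b - c * c
    if delta > 0:
        return "elliptic"
    if delta < 0:
        return "hyperbolic"
    return "parabolic"

# 2.E.3: u_xx - 2 u_xy + u_yy = 0  has  a = 1, b = 1, 2c = -2
print(classify(1, 1, -1))   # parabolic
# 2.E.5: u_xx - u_yy + 2 u_x = 0   has  a = 1, b = -1, c = 0
print(classify(1, -1, 0))   # hyperbolic
```

Lower-order terms ($d$, $e$, $f$, $g$) play no role in the classification, so they are simply ignored here.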
3.1: Introduction to Boundary and Initial Conditions
As you all know, solutions to ordinary differential equations are usually not unique (integration constants appear in many
places). This is of course equally a problem for PDE’s. PDE’s are usually specified through a set of boundary or initial
conditions. A boundary condition expresses the behavior of a function on the boundary (border) of its area of definition. An
initial condition is like a boundary condition, but then for the time-direction. Not all boundary conditions allow for solutions,
but usually the physics suggests what makes sense. Let me remind you of the situation for ordinary differential equations, one
you should all be familiar with, a particle under the influence of a constant force,
\[\frac{\partial^2 x}{\partial t^2} = a, \tag{3.1.1}\]
which leads to
\[\frac{\partial x}{\partial t} = at + v_0, \tag{3.1.2}\]
and
\[x = \tfrac{1}{2}at^2 + v_0 t + x_0. \tag{3.1.3}\]
This contains two integration constants. Standard practice would be to specify $\frac{\partial x}{\partial t}(t = 0) = v_0$ and $x(t = 0) = x_0$. These are linear initial conditions (linear since they only involve $x$ and its derivatives linearly), which have at most a first derivative in them. This one order difference between boundary condition and equation persists to PDE's. It is fairly obvious that since the equation already involves that derivative, we cannot specify the same derivative in a different equation.
The important difference between the arbitrariness of integration constants in PDE's and ODE's is that whereas solutions of ODE's really contain arbitrary constants, solutions of PDE's contain arbitrary functions.
Let me give an example. Take
\[u = y f(x), \tag{3.1.4}\]
then
\[\frac{\partial u}{\partial y} = f(x). \tag{3.1.5}\]
This can be used to eliminate $f$ from the first of the equations, giving
\[u = y \frac{\partial u}{\partial y}. \tag{3.1.6}\]
The problem is that without additional conditions the arbitrariness in the solutions makes it almost impossible to write down a useful general solution. We need additional conditions that reduce this freedom: in most physical problems these are boundary conditions, which describe how the system behaves on its boundaries (for all times), and initial conditions, which specify the state of the system at an initial time $t = 0$. In the ODE problem discussed before we have two initial conditions (velocity and position at time $t = 0$).
As before the maximal order of the derivative in the boundary condition is one order lower than the order of the PDE. For a
second order differential equation we have three possible types of boundary condition
Figure 3.2.1 : A sketch of the normal derivatives used in the von Neumann boundary conditions.
Typically we cannot specify the gradient at the boundary since that is too restrictive to allow for solutions. We can – and in
physical problems often need to – specify the component normal to the boundary, see Figure 3.2.1 for an example. When this
normal derivative is specified we speak of von Neumann boundary conditions.
In the case of an insulated (infinitely thin) rod of length a , we can not have a heat-flux beyond the ends so that the gradient of
the temperature must vanish (heat can only flow where a difference in temperature exists). This leads to the BC
\[\frac{\partial u}{\partial x}(0, t) = \frac{\partial u}{\partial x}(a, t) = 0. \tag{3.2.4}\]
and
\[u(a, t) - \frac{T}{k}\frac{\partial u}{\partial x}(a, t) = 0, \tag{3.4.7}\]
4.1: Taylor Series
One series you have encountered before is Taylor’s series,
\[f(x) = \sum_{n=0}^{\infty} f^{(n)}(a)\,\frac{(x - a)^n}{n!}, \tag{4.1.1}\]
where $f^{(n)}(x)$ is the $n$th derivative of $f$. An example is the Taylor series of the cosine around $x = 0$ (i.e., $a = 0$),
\[\begin{aligned}
\cos(0) &= 1, \\
\cos'(x) = -\sin(x), &\quad \cos'(0) = 0, \\
\cos^{(2)}(x) = -\cos(x), &\quad \cos^{(2)}(0) = -1, \\
\cos^{(3)}(x) = \sin(x), &\quad \cos^{(3)}(0) = 0, \\
\cos^{(4)}(x) = \cos(x), &\quad \cos^{(4)}(0) = 1.
\end{aligned}\]
Notice that after four steps we are back where we started. We have thus found (writing $n = 2m$ in (4.1.1))
\[\cos x = \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m)!}\, x^{2m}, \tag{4.1.2}\]
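The partial sums of (4.1.2) are easy to check numerically; a minimal Python sketch (the helper name `cos_taylor` is mine):

```python
import math

# Partial sums of the Taylor series (4.1.2):
#   cos x = sum_m (-1)^m x^(2m) / (2m)!
def cos_taylor(x, terms=20):
    return sum((-1) ** m * x ** (2 * m) / math.factorial(2 * m)
               for m in range(terms))

print(cos_taylor(1.3), math.cos(1.3))   # the two agree to machine precision
```

Twenty terms are far more than needed at $x = 1.3$: the factorial in the denominator makes the series converge extremely fast for moderate $x$.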
Exercise 4.1.1
Show that
\[\sin x = \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m + 1)!}\, x^{2m+1}.\]
Answer
TBA
We shall study how and when a function can be described by a Fourier series. One of the very important differences with Taylor series is that Fourier series can be used to approximate discontinuous functions as well as continuous ones.
The smallest positive value of p for which f is periodic is called the (primitive) period of f.
Exercise 4.3.1
What is the primitive period of sin(4x)?
Answer
$\frac{\pi}{2}$.
This is called a trigonometric series. If the series approximates a function $f$ (as will be discussed) it is called a Fourier series, and $a_n$ and $b_n$ are the Fourier coefficients of $f$.
In order for all of this to make sense we first study the functions
\[\left\{1, \cos\left(\frac{n\pi x}{L}\right), \sin\left(\frac{n\pi x}{L}\right)\right\}, \quad n = 1, 2, \dots \tag{4.4.2}\]
\[\int_{-L}^{L} 1 \cdot \cos\left(\frac{n\pi x}{L}\right) dx = 0 \tag{4.4.4}\]
\[\int_{-L}^{L} 1 \cdot \sin\left(\frac{n\pi x}{L}\right) dx = 0 \tag{4.4.5}\]
\[\int_{-L}^{L} \cos\left(\frac{m\pi x}{L}\right)\cos\left(\frac{n\pi x}{L}\right) dx = \frac{1}{2}\int_{-L}^{L} \cos\left(\frac{(m+n)\pi x}{L}\right) + \cos\left(\frac{(m-n)\pi x}{L}\right) dx \tag{4.4.6}\]
\[= \begin{cases} 0 & \text{if } n \neq m \\ L & \text{if } n = m \end{cases} \tag{4.4.7}\]
\[\int_{-L}^{L} \sin\left(\frac{m\pi x}{L}\right)\sin\left(\frac{n\pi x}{L}\right) dx = \frac{1}{2}\int_{-L}^{L} \cos\left(\frac{(m-n)\pi x}{L}\right) - \cos\left(\frac{(m+n)\pi x}{L}\right) dx \tag{4.4.8}\]
\[= \begin{cases} 0 & \text{if } n \neq m \\ L & \text{if } n = m \end{cases} \tag{4.4.9}\]
\[\int_{-L}^{L} \cos\left(\frac{m\pi x}{L}\right)\sin\left(\frac{n\pi x}{L}\right) dx = \frac{1}{2}\int_{-L}^{L} \sin\left(\frac{(n+m)\pi x}{L}\right) + \sin\left(\frac{(n-m)\pi x}{L}\right) dx \tag{4.4.10}\]
\[= 0. \tag{4.4.11}\]
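These relations are easy to verify numerically. A minimal sketch with a midpoint rule; the half-width $L = 2$ and the helper `integrate` are arbitrary choices of mine:

```python
import math

# Numerically check the orthogonality relations on [-L, L].
L = 2.0
N = 20000

def integrate(f):
    # simple midpoint rule on [-L, L]
    h = 2 * L / N
    return sum(f(-L + (i + 0.5) * h) for i in range(N)) * h

def c(n):
    return lambda x: math.cos(n * math.pi * x / L)

def s(n):
    return lambda x: math.sin(n * math.pi * x / L)

print(integrate(lambda x: c(2)(x) * c(3)(x)))   # ~0, different n and m
print(integrate(lambda x: c(2)(x) * c(2)(x)))   # ~L = 2, equal n and m
print(integrate(lambda x: s(2)(x) * c(3)(x)))   # ~0, always
```

The midpoint rule is essentially exact here because the integrands are smooth and periodic over the full interval.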
If we consider these integrals as some kind of inner product between functions (like the standard vector inner product) we see that we could call these functions orthogonal. This is indeed standard practice, where for functions the general definition of inner product takes the form
\[(f, g) = \int_a^b w(x)\, f(x)\, g(x)\, dx. \tag{4.4.12}\]
If this is zero we say that the functions $f$ and $g$ are orthogonal on the interval $[a, b]$ with weight function $w$. If this function is 1, as is the case for the trigonometric functions, we just say that the functions are orthogonal on $[a, b]$.
The norm of a function is now defined as the square root of the inner-product of a function with itself (again, as in the case of
vectors),
\[\|f\| = \sqrt{\int_a^b w(x)\, f(x)^2\, dx}. \tag{4.4.13}\]
Exercise 4.4.1

What is the normalised form of $\left\{1, \cos\left(\frac{n\pi x}{L}\right), \sin\left(\frac{n\pi x}{L}\right)\right\}$?

Answer

$\left\{\frac{1}{\sqrt{2L}}, \frac{1}{\sqrt{L}}\cos\left(\frac{n\pi x}{L}\right), \frac{1}{\sqrt{L}}\sin\left(\frac{n\pi x}{L}\right)\right\}$
A set of mutually orthogonal functions that are all normalised is called an orthonormal set.
We can now use the orthogonality of the trigonometric functions to find that
\[\frac{1}{L}\int_{-L}^{L} f(x) \cdot 1\, dx = a_0,\]
\[\frac{1}{L}\int_{-L}^{L} f(x)\cos\left(\frac{n\pi x}{L}\right) dx = a_n,\]
\[\frac{1}{L}\int_{-L}^{L} f(x)\sin\left(\frac{n\pi x}{L}\right) dx = b_n.\]
This defines the Fourier coefficients for a given f (x). If these coefficients all exist we have defined a Fourier series, about
whose convergence we shall talk in a later lecture.
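These coefficient formulas can be evaluated numerically for any concrete $f$. A sketch for $f(x) = x^2$, whose cosine coefficients have the known closed form $4L^2(-1)^n/(n^2\pi^2)$; the helper `coeff` and the sample values are mine:

```python
import math

# Compute Fourier coefficients of f(x) = x^2 on [-L, L] directly
# from the integral formulas above (midpoint rule).
L = 1.0
N = 20000

def coeff(f, basis):
    h = 2 * L / N
    total = 0.0
    for i in range(N):
        x = -L + (i + 0.5) * h
        total += f(x) * basis(x)
    return total * h / L

f = lambda x: x * x
a2 = coeff(f, lambda x: math.cos(2 * math.pi * x / L))
b2 = coeff(f, lambda x: math.sin(2 * math.pi * x / L))
print(a2, 4 * L ** 2 / (2 * math.pi) ** 2)  # matches 4 L^2 (-1)^n / (n pi)^2 for n = 2
print(b2)                                   # ~0: x^2 is even
```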
An important property of Fourier series is given in Parseval’s lemma:
\[\int_{-L}^{L} \left(f(x)\right)^2 dx = \frac{L a_0^2}{2} + L\sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right). \tag{4.5.2}\]
This looks like a triviality, until one realises what we have done: we have once again interchanged an infinite summation and an integration. There are many cases where such an interchange fails, and it actually makes a strong statement about the orthogonal set when it holds. This property is usually referred to as completeness. We shall only discuss complete sets in these lectures.
Now let us study an example. We consider a square wave (this example will return a few times),
\[f(x) = \begin{cases} -3 & \text{if } -5 + 10n < x < 10n \\ 3 & \text{if } 10n < x < 5 + 10n \end{cases}, \tag{4.5.3}\]
\[\begin{aligned}
a_n &= \frac{1}{5}\int_{-5}^{0} -3\cos\left(\frac{n\pi x}{5}\right) dx + \frac{1}{5}\int_{0}^{5} 3\cos\left(\frac{n\pi x}{5}\right) dx = 0, \\
b_n &= \frac{1}{5}\int_{-5}^{0} -3\sin\left(\frac{n\pi x}{5}\right) dx + \frac{1}{5}\int_{0}^{5} 3\sin\left(\frac{n\pi x}{5}\right) dx \\
&= \frac{3}{n\pi}\cos\left(\frac{n\pi x}{5}\right)\Big|_{-5}^{0} - \frac{3}{n\pi}\cos\left(\frac{n\pi x}{5}\right)\Big|_{0}^{5} \\
&= \frac{6}{n\pi}\left[1 - \cos(n\pi)\right] = \begin{cases} \frac{12}{n\pi} & \text{if } n \text{ odd} \\ 0 & \text{if } n \text{ even} \end{cases}
\end{aligned}\]
And thus ($n = 2m + 1$)
\[f(x) = \frac{12}{\pi}\sum_{m=0}^{\infty} \frac{1}{2m + 1}\sin\left(\frac{(2m + 1)\pi x}{5}\right). \tag{4.5.4}\]
Exercise 4.5.1
What happens if we apply Parseval’s theorem to this series?
Answer
We find
\[\int_{-5}^{5} 9\, dx = 5\,\frac{144}{\pi^2}\sum_{m=0}^{\infty}\left(\frac{1}{2m + 1}\right)^2 \tag{4.5.5}\]
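The sum in (4.5.5) can be checked numerically; a minimal sketch:

```python
import math

# Check Parseval for the square wave (4.5.5): the left-hand side is
# int_{-5}^{5} 9 dx = 90, so the sum of odd reciprocal squares must be pi^2/8.
s = sum(1.0 / (2 * m + 1) ** 2 for m in range(200_000))
rhs = 5 * (144 / math.pi ** 2) * s
print(rhs)                    # ~90
print(s, math.pi ** 2 / 8)    # the sum approaches pi^2 / 8
```

So Parseval's lemma turns the square-wave series into the classical result $\sum_m 1/(2m+1)^2 = \pi^2/8$.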
These have somewhat different properties than the even and odd numbers:
1. The sum of two even functions is even, and of two odd ones odd.
2. The product of two even or two odd functions is even.
3. The product of an even and an odd function is odd.
Exercise 4.6.1
Which of the following functions is even or odd?

a) $\sin(2x)$, b) $\sin(x)\cos(x)$, c) $\tan(x)$, d) $x^2$, e) $x^3$, f) $|x|$
Answer
even: d, f; odd: a, b, c, e.
an odd function. These series are interesting by themselves, but play an especially important rôle for functions defined on half
the Fourier interval, i.e., on [0, L] instead of [−L, L]. There are three possible ways to define a Fourier series in this way, see
Fig. 4.6.1
1. Continue $f$ as an even function, so that $f'(0) = 0$.
2. Continue $f$ as an odd function, so that $f(0) = 0$.
3. Neither of the two above. We know nothing about $f$ at $x = 0$.
Figure 4.6.1: A sketch of the possible ways to continue f beyond its definition region for 0 < x < L . From left to right as
even function, odd function or assuming no symmetry at all.
\[\begin{aligned}
a_n &= 2\int_0^1 (1 - x)\cos n\pi x\, dx \\
&= \left\{\frac{2}{n\pi}\sin n\pi x - \frac{2}{n^2\pi^2}\left[\cos n\pi x + n\pi x\sin n\pi x\right]\right\}\Big|_0^1 \\
&= \begin{cases} 0 & \text{if } n \text{ even} \\ \frac{4}{n^2\pi^2} & \text{if } n \text{ odd} \end{cases}
\end{aligned}\]
So, changing variables by defining $n = 2m + 1$, so that in a sum over all $m$, $n$ runs over all odd numbers,
\[f(x) = \frac{1}{2} + \frac{4}{\pi^2}\sum_{m=0}^{\infty} \frac{1}{(2m + 1)^2}\cos\left((2m + 1)\pi x\right). \tag{4.6.4}\]
Figure 4.7.1: The square and triangular waves on their fundamental domain.
1. A square wave,
$f(x) = 1$ for $-\pi < x < 0$; $f(x) = -1$ for $0 < x < \pi$.
2. a triangular wave,
$g(x) = \pi/2 + x$ for $-\pi < x < 0$; $g(x) = \pi/2 - x$ for $0 < x < \pi$.
\[g(x) = \frac{4}{\pi}\sum_{m=0}^{\infty} \frac{1}{(2m + 1)^2}\cos\left((2m + 1)x\right).\]
Let us compare the partial sums, where we let the sum in the Fourier series run from m = 0 to m = M instead of
m = 0 … ∞ . We note a marked difference between the two cases. The convergence of the Fourier series of g is uneventful,
and after a few steps it is hard to see a difference between the partial sums, as well as between the partial sums and g . For f ,
the square wave, we see a surprising result: Even though the approximation gets better and better in the (flat) middle, there is a
finite (and constant!) overshoot near the jump. The area of this overshoot becomes smaller and smaller as we increase M . This
is called the Gibbs phenomenon (after its discoverer). It can be shown that for any function with a discontinuity such an effect
is present, and that the size of the overshoot only depends on the size of the discontinuity! A final, slightly more interesting
version of this picture, is shown in Fig. 4.7.3.
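The overshoot can be measured numerically. A sketch for a unit square wave; note the sign convention is flipped relative to the $f$ above, so the series here equals $+1$ on $(0, \pi)$:

```python
import math

# Partial sums of the square-wave series:
#   S_M(x) = (4/pi) sum_{m=0}^{M} sin((2m+1)x) / (2m+1),
# which converges to +1 on (0, pi).
def partial_sum(x, M):
    return (4 / math.pi) * sum(math.sin((2 * m + 1) * x) / (2 * m + 1)
                               for m in range(M + 1))

# Track the maximum of the partial sum just to the right of the jump at x = 0.
peaks = {}
for M in (10, 50, 250):
    peaks[M] = max(partial_sum(i * 1e-4, M) for i in range(1, 5000))
    print(M, round(peaks[M], 3))
# The overshoot does not die out as M grows: it stays near 1.18,
# roughly 9% of the jump, exactly as the Gibbs phenomenon predicts.
```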
5.1: COOKBOOK
Let me start with a recipe that describes the approach to separation of variables, as exemplified in
the following sections, and in later chapters.
5.1: Cookbook
Let me start with a recipe that describes the approach to separation of variables, as exemplified in the following sections, and
in later chapters. Try to trace the steps for all the examples you encounter in this course.
1. Take care that the boundaries are naturally described in your variables (i.e., at the boundary one of the coordinates is
constant)!
2. Write the unknown function as a product of functions in each variable.
3. Divide by the function, so as to have a ratio of functions in one variable equal to a ratio of functions in the other variable.
4. Since these two are equal they must both be equal to a constant.
5. Separate the boundary and initial conditions. Those that are zero can be re-expressed as conditions on one of the unknown
functions.
6. Solve the equation for that function where most boundary information is known.
7. This usually determines a discrete set of separation parameters.
8. Solve the remaining equation for each parameter.
9. Use the superposition principle (true for homogeneous and linear equations) to add all these solutions with an unknown constant multiplying each of the solutions.
10. Determine the constants from the remaining boundary and initial conditions.
We shall attack this problem by separation of variables, a technique always worth trying when attempting to solve a PDE: we write $u(x, t) = X(x)T(t)$, substitute, and divide by $X(x)T(t)$, which gives
\[\frac{1}{k}\frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)}.\]
Thus the left-hand side, a function of $t$, equals a function of $x$ on the right-hand side. This is not possible unless both sides are independent of $x$ and $t$, i.e. constant. Let us call this constant $-\lambda$.
We obtain two differential equations,
\[T'(t) = -\lambda k T(t), \qquad X''(x) = -\lambda X(x).\]
Exercise 5.2.1
What happens if $X(x)T(t)$ is zero at some point $(x = x_0, t = t_0)$?
Answer
Nothing. We can still perform the same trick.
Note
This is not so trivial as I suggest. We either have $X(x_0) = 0$ or $T(t_0) = 0$. Let me just consider the first case, and
λ > 0

Write $\alpha^2 = \lambda$, so that the equation for $X$ becomes
\[X''(x) = -\alpha^2 X(x), \tag{5.2.6}\]
which has a nontrivial (i.e., one that is not zero) solution when $\alpha L = n\pi$, with $n$ a positive integer. This leads to $\lambda_n = \frac{n^2\pi^2}{L^2}$.
λ = 0
We find that X = A + Bx . The boundary conditions give A = B = 0 , so there is only the trivial (zero) solution.
λ < 0

Write $\lambda = -\alpha^2$; then
\[X''(x) = \alpha^2 X(x), \tag{5.2.9}\]
with general solution $X = A\cosh(\alpha x) + B\sinh(\alpha x)$. The boundary condition at $x = 0$ gives $A = 0$, and the one at $x = L$ gives $B = 0$. Again there is only a trivial solution.
We have thus only found a solution for a discrete set of "eigenvalues" $\lambda_n > 0$. Solving the equation for $T$ we find an exponential solution, $T = \exp(-\lambda k t)$. Combining all this information together, we have
\[u_n(x, t) = \exp\left(-k\frac{n^2\pi^2}{L^2}t\right)\sin\left(\frac{n\pi}{L}x\right). \tag{5.2.11}\]
The equation we started from was linear and homogeneous, so we can superimpose the solutions for different values of $n$,
\[u(x, t) = \sum_{n=1}^{\infty} c_n \exp\left(-k\frac{n^2\pi^2}{L^2}t\right)\sin\left(\frac{n\pi}{L}x\right). \tag{5.2.12}\]
This is a Fourier sine series with time-dependent Fourier coefficients. The initial condition specifies the coefficients $c_n$; for the initial condition $u(x, 0) = x$ they evaluate to
\[c_n = -\frac{2L}{n\pi}(-1)^n = \frac{2L}{n\pi}(-1)^{n+1}.\]
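The series (5.2.12) is straightforward to evaluate. A sketch with the coefficients just found, i.e. the initial condition $u(x,0) = x$; the sample values of $k$ and $L$ are my own choice:

```python
import math

# Evaluate the series solution (5.2.12) with c_n = (2L/(n pi)) (-1)^(n+1),
# the sine coefficients of u(x, 0) = x on [0, L].
L, k = 1.0, 0.1

def u(x, t, terms=2000):
    total = 0.0
    for n in range(1, terms + 1):
        c_n = (2 * L / (n * math.pi)) * (-1) ** (n + 1)
        total += (c_n * math.exp(-k * (n * math.pi / L) ** 2 * t)
                  * math.sin(n * math.pi * x / L))
    return total

print(u(0.3, 0.0))    # ~0.3: at t = 0 the series reproduces u(x,0) = x
print(u(0.3, 10.0))   # ~0: the temperature decays away
```

Each mode decays at its own rate $k n^2 \pi^2 / L^2$, so after a short time only the $n = 1$ term survives visibly.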
\[\frac{\partial}{\partial x}V(0, t) = \frac{\partial}{\partial x}V(40, t) = 0,\]
\[V(x, 0) = f(x),\]
\[\frac{\partial}{\partial t}V(x, 0) = 0.\]
Separate variables,
\[V(x, t) = X(x)T(t). \tag{5.3.1}\]
We find
\[\frac{X''}{X} = LC\,\frac{T''}{T} = -\lambda. \tag{5.3.2}\]
\[T'' = -\frac{\lambda}{LC}\,T.\]
We can also separate most of the initial and boundary conditions; we find
\[X'(0) = X'(40) = 0, \qquad T'(0) = 0. \tag{5.3.3}\]
λ > 0

$\alpha_n = \frac{n\pi}{40}$, $X_n = \cos(\alpha_n x)$. We find that
\[T_n(t) = D_n\cos\left(\frac{n\pi t}{40\sqrt{LC}}\right) + E_n\sin\left(\frac{n\pi t}{40\sqrt{LC}}\right). \tag{5.3.4}\]
$T'(0) = 0$ implies $E_n = 0$, and taking both together we find (for $n \geq 1$)
\[V_n(x, t) = \cos\left(\frac{n\pi t}{40\sqrt{LC}}\right)\cos\left(\frac{n\pi x}{40}\right). \tag{5.3.5}\]
λ = 0
X(x) = A + Bx . B = 0 due to the boundary conditions. We find that T (t) = Dt + E , and D is 0 due to initial condition.
We conclude that
V0 (x, t) = 1. (5.3.6)
λ < 0
No solution.
Taking everything together we find that
\[V(x, t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\left(\frac{n\pi t}{40\sqrt{LC}}\right)\cos\left(\frac{n\pi x}{40}\right), \quad\text{with}\quad a_n = \frac{1}{20}\int_0^{40} f(x)\cos\left(\frac{n\pi x}{40}\right) dx.\]
If we are looking for a steady state solution, i.e., we take u(x, y, t) = u(x, y) the time derivative does not contribute, and we
get Laplace’s equation
\[\frac{\partial^2}{\partial x^2}u + \frac{\partial^2}{\partial y^2}u = 0, \tag{5.4.2}\]
an example of an elliptic equation. Let us once again look at a square plate of size a × b , and impose the boundary conditions
u(x, 0) = 0,
u(a, y) = 0,
u(x, b) = x,
u(0, y) = 0. (5.4.3)
(This choice is made so as to be able to evaluate Fourier series easily. It is not very realistic!) We once again separate
variables,
u(x, y) = X(x)Y (y), (5.4.4)
and define
\[\frac{X''}{X} = -\frac{Y''}{Y} = -\lambda, \tag{5.4.5}\]
or explicitly
\[X'' = -\lambda X, \qquad Y'' = \lambda Y. \tag{5.4.6}\]
With boundary conditions $X(0) = X(a) = 0$, $Y(0) = 0$. The third boundary condition remains to be implemented.
Once again distinguish three cases:
λ > 0

$\alpha_n = \frac{n\pi}{a}$, $\lambda_n = \alpha_n^2$, $X_n = \sin(\alpha_n x)$. We find
\[Y_n(y) = C_n\exp(\alpha_n y) + D_n\exp(-\alpha_n y). \tag{5.4.7}\]
λ ≤ 0
No solutions
So we have, using the Fourier sine coefficients of $x$ on $[0, a]$,
\[\frac{2}{a}\int_0^a x\sin\left(\frac{n\pi x}{a}\right) dx = \frac{2a}{n\pi}(-1)^{n+1}. \tag{5.4.10}\]
Exercise 5.4.1
The dependence on x enters through a trigonometric function, and that on y through a hyperbolic function. Yet the
differential equation is symmetric under interchange of x and y . What happens?
Answer
The symmetry is broken by the boundary conditions.
Let me give an example of these procedures. Consider a vibrating string attached to two air bearings, gliding along rods 4 m apart. You are asked to find the displacement for all times, if the initial displacement, i.e. at $t = 0\,$s, is one meter and the initial velocity is $x/t_0\,$m/s.
The differential equation and its boundary conditions are easily written down,
\[\frac{\partial^2}{\partial x^2}u = \frac{1}{c^2}\frac{\partial^2}{\partial t^2}u,\]
\[\frac{\partial}{\partial x}u(0, t) = \frac{\partial}{\partial x}u(4, t) = 0, \quad t > 0,\]
\[u(x, 0) = 1,\]
\[\frac{\partial}{\partial t}u(x, 0) = x/t_0.\]
Exercise 5.5.1
What happens if I add two solutions v and w of the differential equation that satisfy the same BC’s as above but different
IC’s,
\[v(x, 0) = 0, \quad \frac{\partial}{\partial t}v(x, 0) = x/t_0,\]
\[w(x, 0) = 1, \quad \frac{\partial}{\partial t}w(x, 0) = 0?\]

Answer

$u = v + w$; we can add the BC's.
If we separate variables, $u(x, t) = X(x)T(t)$, we find that we obtain easy boundary conditions for $X(x)$,
\[X'(0) = X'(4) = 0, \tag{5.5.1}\]
but we have no such luck for $T(t)$. As before we solve the eigenvalue equation for $X$, and find solutions for $\lambda_n = \frac{n^2\pi^2}{16}$, $n = 0, 1, \dots$, and $X_n = \cos\left(\frac{n\pi}{4}x\right)$. Since we have no boundary conditions for $T(t)$, we have to take the full solution,
\[\begin{aligned}
T_0(t) &= A_0 + B_0 t, \\
T_n(t) &= A_n\cos\frac{n\pi}{4}ct + B_n\sin\frac{n\pi}{4}ct,
\end{aligned}\]
and thus
\[u(x, t) = \frac{1}{2}(A_0 + B_0 t) + \sum_{n=1}^{\infty}\left(A_n\cos\frac{n\pi}{4}ct + B_n\sin\frac{n\pi}{4}ct\right)\cos\frac{n\pi}{4}x. \tag{5.5.2}\]
Setting $t = 0$ and using $u(x, 0) = 1$ implies $A_0 = 2$, $A_n = 0$ for $n > 0$.
\[\frac{\partial}{\partial t}u(x, 0) = x/t_0 = \frac{1}{2}B_0 + \sum_{n=1}^{\infty} B_n\frac{n\pi c}{4}\cos\frac{n\pi}{4}x. \tag{5.5.4}\]
This is the Fourier cosine series of $x$, which we have encountered before, and leads (taking $t_0 = 1\,$s) to the coefficients $B_0 = 4$ and $B_n = -\frac{64}{n^3\pi^3 c}$ if $n$ is odd, and zero otherwise.
So finally
\[u(x, t) = (1 + 2t) - \frac{64}{\pi^3 c}\sum_{n\ \mathrm{odd}} \frac{1}{n^3}\sin\frac{n\pi c t}{4}\cos\frac{n\pi x}{4}. \tag{5.5.5}\]
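A numerical spot-check of (5.5.5), taking $c = 1\,$m/s and $t_0 = 1\,$s (my own sample values):

```python
import math

# Evaluate (5.5.5): at t = 0 every sine term vanishes, so the
# initial displacement u(x, 0) = 1 is reproduced exactly.
c = 1.0

def u(x, t, terms=400):
    s = sum(math.sin(n * math.pi * c * t / 4) * math.cos(n * math.pi * x / 4) / n ** 3
            for n in range(1, 2 * terms, 2))
    return (1 + 2 * t) - (64 / (math.pi ** 3 * c)) * s

print(u(1.7, 0.0))   # -> 1.0
# A small finite difference recovers the initial velocity x / t_0:
print((u(1.7, 1e-3) - u(1.7, 0.0)) / 1e-3)   # ~1.7
```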
The left and right ends are both attached to a thermostat; the temperature at the left end is fixed at 500 K and the right end at 100 K. There is also a heater attached to the rod that adds a constant heat of $\sin\left(\frac{\pi x}{2}\right)$ to the rod. The boundary and initial conditions are
\[u(0, t) = 500, \qquad u(2, t) = 100, \qquad u(x, 0) = \frac{1}{k}\sin\left(\frac{\pi x}{2}\right) + 500.\]
We write $u(x, t) = v(x, t) + h(x)$, where $h$ will be determined so as to make $v$ satisfy a homogeneous equation. Substituting this form, we find
\[\frac{\partial}{\partial t}v = k\frac{\partial^2}{\partial x^2}v + k h'' + \sin\left(\frac{\pi x}{2}\right). \tag{5.6.3}\]
We thus require
\[h''(x) = -\frac{1}{k}\sin\left(\frac{\pi x}{2}\right). \tag{5.6.4}\]
At the same time we let h carry the boundary conditions, h(0) = 500 , h(2) = 100 , and thus
\[h(x) = -200x + 500 + \frac{4}{k\pi^2}\sin\left(\frac{\pi x}{2}\right). \tag{5.6.6}\]
\[v(0, t) = v(2, t) = 0.\]
This is a problem of a type that we have seen before. By separation of variables we find
\[v(x, t) = \sum_{n=1}^{\infty} b_n\exp\left(-\frac{n^2\pi^2}{4}kt\right)\sin\frac{n\pi}{2}x. \tag{5.6.7}\]
And thus
\[u(x, t) = -200x + 500 + \frac{4}{k\pi^2}\sin\left(\frac{\pi x}{2}\right) + \frac{800}{\pi}\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\sin\left(\frac{n\pi x}{2}\right)e^{-k(n\pi/2)^2 t}. \tag{5.6.10}\]
As can be seen in Fig. 5.6.1 this approach is quite rapid – we have chosen $k = 1/500$ in that figure, and summed over the first 60 solutions.
Figure 5.6.1 : Time dependence of the solution to the inhomogeneous Equation 5.6.10.
6.3: EXAMPLES
Now let me look at two examples.
6.1: Background to D’Alembert’s Solution
I have argued before that it is usually not useful to study the general solution of a partial differential equation. As any such
sweeping statement it needs to be qualified, since there are some exceptions. One of these is the one-dimensional wave
equation
\[\frac{\partial^2}{\partial x^2}u(x, t) - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}u(x, t) = 0, \tag{6.1.1}\]
In the new variables $w = x + ct$, $z = x - ct$ we have
\[\frac{\partial^2}{\partial x^2}u = \frac{\partial^2}{\partial w^2}\bar u + 2\frac{\partial^2}{\partial w\,\partial z}\bar u + \frac{\partial^2}{\partial z^2}\bar u,\]
\[\frac{\partial}{\partial t}u = \frac{\partial}{\partial w}\bar u\,\frac{\partial w}{\partial t} + \frac{\partial}{\partial z}\bar u\,\frac{\partial z}{\partial t} = c\left(\frac{\partial}{\partial w}\bar u - \frac{\partial}{\partial z}\bar u\right),\]
\[\frac{\partial^2}{\partial t^2}u = c^2\left(\frac{\partial^2}{\partial w^2}\bar u - 2\frac{\partial^2}{\partial w\,\partial z}\bar u + \frac{\partial^2}{\partial z^2}\bar u\right). \tag{6.2.2}\]
An equation of the type $\frac{\partial^2}{\partial w\,\partial z}\bar u = 0$ can easily be solved by subsequent integration with respect to $z$ and $w$. First solve for the $z$ dependence,
\[\frac{\partial}{\partial w}\bar u = \Phi(w), \tag{6.2.4}\]
where Φ is any function of w only. Now solve this equation for the w dependence,
Let me assume $f(\pm\infty) = 0$. I shall assume this also holds for $F$ and $G$ (we don't have to, but this removes some arbitrary constants that don't play a rôle in $u$). We find
\[F(x) + G(x) = f(x), \qquad c\left(F'(x) - G'(x)\right) = g(x). \tag{6.2.7}\]
Note that $\Gamma$ is the integral over $g$. So $\Gamma$ will always be a continuous function, even if $g$ is not! And in the end we have
\[G(x) = \frac{1}{2}\left[f(x) - \Gamma(x) - C\right]. \tag{6.2.9}\]
\[f(x) = \begin{cases} x + 1 & \text{if } -1 < x < 0 \\ 1 - x & \text{if } 0 < x < 1 \\ 0 & \text{elsewhere} \end{cases}\]
Figure 6.2.1: The graphical form of (6.2.11), for (from left to right) $t = 0\,$s, $t = 0.5\,$s and $t = 1\,$s. The dashed lines are $\frac{1}{2}f(x + t)$ (leftward moving wave) and $\frac{1}{2}f(x - t)$ (rightward moving wave). The solid line is the sum of these two, and thus the solution $u$.
length of the string. The way to do that depends on the kind of boundary conditions: Here we shall only consider a string fixed
at its ends.
\[u(0, t) = u(a, t) = 0, \qquad u(x, 0) = f(x), \qquad \frac{\partial}{\partial t}u(x, 0) = g(x). \tag{6.2.13}\]
Initially we can follow the approach for the infinite string as sketched above, and we find that
\[F(x) = \frac{1}{2}\left[f(x) + \Gamma(x) + C\right], \qquad G(x) = \frac{1}{2}\left[f(x) - \Gamma(x) - C\right]. \tag{6.2.14}\]
Now we understand that $f$ and $\Gamma$ are completely arbitrary functions – we can pick any form for the initial conditions we want. Thus the relation found above can only hold when both terms are zero,
\[f(x) = -f(-x), \tag{6.2.16}\]
\[\Gamma(x) = \Gamma(-x). \tag{6.2.17}\]
Figure 6.2.2: A schematic representation of the reflection conditions in Equations 6.2.16 and 6.2.17.
The dashed line represents f and the dotted line Γ.
The reflection conditions for f and Γ are similar to those for sines and cosines, and as we can see from Figure 6.2.2 both f
Example 6.3.1
Find graphically a solution to
\[\frac{\partial^2}{\partial t^2}u = \frac{\partial^2}{\partial x^2}u \quad (c = 1\,\mathrm{m/s}),\]
\[u(x, 0) = \begin{cases} 2x & \text{if } 0 \leq x \leq 2 \\ 24/5 - 2x/5 & \text{if } 2 \leq x \leq 12 \end{cases},\]
\[\frac{\partial}{\partial t}u(x, 0) = 0,\]
\[u(0, t) = u(12, t) = 0.\]
Solution
We need to continue $f$ as an odd function, and we can take $\Gamma = 0$. We then have to add the left-moving wave $\frac{1}{2}f(x + t)$ and the right-moving wave $\frac{1}{2}f(x - t)$, as we have done in Figs. ???
2
Example 6.3.2
Find graphically a solution to
\[\frac{\partial^2}{\partial t^2}u = \frac{\partial^2}{\partial x^2}u \quad (c = 1\,\mathrm{m/s}),\]
\[u(x, 0) = 0,\]
\[\frac{\partial}{\partial t}u(x, 0) = \begin{cases} 1 & \text{if } 4 \leq x \leq 6 \\ 0 & \text{elsewhere} \end{cases},\]
\[u(0, t) = u(12, t) = 0.\]
Solution
In this case $f = 0$. We find
\[\Gamma(x) = \int_0^x g(x')\,dx' = \begin{cases} 0 & \text{if } 0 < x < 4 \\ -4 + x & \text{if } 4 < x < 6 \\ 2 & \text{if } 6 < x < 12 \end{cases}.\]
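The construction can be spot-checked numerically. A sketch of d'Alembert's formula for this example (with $c = 1$), valid for times before the waves reach the boundaries at $x = 0$ and $x = 12$:

```python
# d'Alembert's solution:
#   u(x,t) = [f(x+ct) + f(x-ct)]/2 + [Gamma(x+ct) - Gamma(x-ct)]/(2c)

def Gamma(x):
    # the integral of g computed above: g = 1 on [4, 6], 0 elsewhere
    if x < 4:
        return 0.0
    if x < 6:
        return x - 4.0
    return 2.0

def u(x, t, c=1.0):
    f = lambda y: 0.0   # zero initial displacement in this example
    return ((f(x + c * t) + f(x - c * t)) / 2
            + (Gamma(x + c * t) - Gamma(x - c * t)) / (2 * c))

print(u(5.0, 0.0))   # 0.0: the string starts flat
print(u(5.0, 0.5))   # 0.5: displacement builds up inside [4, 6]
```

For larger times one would continue $f$ oddly and $\Gamma$ evenly, per (6.2.16) and (6.2.17), to implement the reflections at the fixed ends.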
7.1: Polar Coordinates
Polar coordinates in two dimensions are defined by
\[x = \rho\cos\phi, \quad y = \rho\sin\phi, \qquad \rho = \sqrt{x^2 + y^2}, \quad \phi = \arctan(y/x).\]
\[\frac{\partial}{\partial x} = \frac{\partial \rho}{\partial x}\frac{\partial}{\partial \rho} + \frac{\partial \phi}{\partial x}\frac{\partial}{\partial \phi} = \frac{x}{\rho}\frac{\partial}{\partial \rho} - \frac{y}{\rho^2}\frac{\partial}{\partial \phi} = \cos\phi\,\frac{\partial}{\partial \rho} - \frac{\sin\phi}{\rho}\frac{\partial}{\partial \phi},\]
\[\frac{\partial}{\partial y} = \frac{\partial \rho}{\partial y}\frac{\partial}{\partial \rho} + \frac{\partial \phi}{\partial y}\frac{\partial}{\partial \phi} = \frac{y}{\rho}\frac{\partial}{\partial \rho} + \frac{x}{\rho^2}\frac{\partial}{\partial \phi} = \sin\phi\,\frac{\partial}{\partial \rho} + \frac{\cos\phi}{\rho}\frac{\partial}{\partial \phi}.\]
We can write
\[\nabla = \hat e_\rho\,\frac{\partial}{\partial \rho} + \hat e_\phi\,\frac{1}{\rho}\frac{\partial}{\partial \phi}\]
with
\[\hat e_\rho = (\cos\phi, \sin\phi), \qquad \hat e_\phi = (-\sin\phi, \cos\phi).\]
\[\begin{aligned}
\nabla^2 ={}& \cos^2\phi\,\frac{\partial^2}{\partial \rho^2} + 2\frac{\sin\phi\cos\phi}{\rho^2}\frac{\partial}{\partial \phi} - 2\frac{\sin\phi\cos\phi}{\rho}\frac{\partial^2}{\partial \rho\,\partial \phi} + \frac{\sin^2\phi}{\rho}\frac{\partial}{\partial \rho} + \frac{\sin^2\phi}{\rho^2}\frac{\partial^2}{\partial \phi^2} \\
&+ \sin^2\phi\,\frac{\partial^2}{\partial \rho^2} - 2\frac{\sin\phi\cos\phi}{\rho^2}\frac{\partial}{\partial \phi} + 2\frac{\sin\phi\cos\phi}{\rho}\frac{\partial^2}{\partial \rho\,\partial \phi} + \frac{\cos^2\phi}{\rho}\frac{\partial}{\partial \rho} + \frac{\cos^2\phi}{\rho^2}\frac{\partial^2}{\partial \phi^2} \\
={}& \frac{\partial^2}{\partial \rho^2} + \frac{1}{\rho}\frac{\partial}{\partial \rho} + \frac{1}{\rho^2}\frac{\partial^2}{\partial \phi^2}
= \frac{1}{\rho}\frac{\partial}{\partial \rho}\left(\rho\,\frac{\partial}{\partial \rho}\right) + \frac{1}{\rho^2}\frac{\partial^2}{\partial \phi^2}.
\end{aligned}\]
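The polar form can be spot-checked against the Cartesian Laplacian for a concrete test function. A sketch using finite differences, with $u = x^3 y$, whose Laplacian is $6xy$:

```python
import math

# Evaluate the polar Laplacian  u_rr + u_r / rho + u_pp / rho^2
# numerically for u = x^3 y and compare with the Cartesian result 6 x y.
def u_polar(rho, phi):
    x, y = rho * math.cos(phi), rho * math.sin(phi)
    return x ** 3 * y

def laplacian_polar(rho, phi, h=1e-4):
    d_rho = (u_polar(rho + h, phi) - u_polar(rho - h, phi)) / (2 * h)
    d2_rho = (u_polar(rho + h, phi) - 2 * u_polar(rho, phi) + u_polar(rho - h, phi)) / h ** 2
    d2_phi = (u_polar(rho, phi + h) - 2 * u_polar(rho, phi) + u_polar(rho, phi - h)) / h ** 2
    return d2_rho + d_rho / rho + d2_phi / rho ** 2

rho, phi = 1.3, 0.7
x, y = rho * math.cos(phi), rho * math.sin(phi)
print(laplacian_polar(rho, phi), 6 * x * y)   # the two agree
```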
Spherical coordinates are defined by
\[r = \sqrt{x^2 + y^2 + z^2}, \quad \phi = \arctan(y/x), \quad \theta = \arctan\left(\frac{\sqrt{x^2 + y^2}}{z}\right),\]
or alternatively
\[x = r\cos\phi\sin\theta, \quad y = r\sin\phi\sin\theta, \quad z = r\cos\theta.\]
\[\frac{\partial}{\partial x} = \frac{x}{r}\frac{\partial}{\partial r} - \frac{y}{x^2 + y^2}\frac{\partial}{\partial \phi} + \frac{xz}{r^2\sqrt{x^2 + y^2}}\frac{\partial}{\partial \theta},\]
\[\frac{\partial}{\partial y} = \frac{\partial r}{\partial y}\frac{\partial}{\partial r} + \frac{\partial \phi}{\partial y}\frac{\partial}{\partial \phi} + \frac{\partial \theta}{\partial y}\frac{\partial}{\partial \theta} = \frac{y}{r}\frac{\partial}{\partial r} + \frac{x}{x^2 + y^2}\frac{\partial}{\partial \phi} + \frac{yz}{r^2\sqrt{x^2 + y^2}}\frac{\partial}{\partial \theta},\]
\[\frac{\partial}{\partial z} = \frac{\partial r}{\partial z}\frac{\partial}{\partial r} + \frac{\partial \phi}{\partial z}\frac{\partial}{\partial \phi} + \frac{\partial \theta}{\partial z}\frac{\partial}{\partial \theta} = \frac{z}{r}\frac{\partial}{\partial r} - \frac{\sqrt{x^2 + y^2}}{r^2}\frac{\partial}{\partial \theta} = \cos\theta\,\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial \theta}.\]
\[\hat e_r = (\cos\phi\sin\theta, \sin\phi\sin\theta, \cos\theta), \quad \hat e_\phi = (-\sin\phi, \cos\phi, 0), \quad \hat e_\theta = (\cos\phi\cos\theta, \sin\phi\cos\theta, -\sin\theta).\]
\[\Delta = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\,\frac{\partial}{\partial \theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial \phi^2}. \tag{7.2.1}\]
Finally, for integration over these variables we need to know the volume of the small cuboid contained between r and r + δr ,
θ and θ + δθ and ϕ and ϕ + δϕ .
\[\int_V f(x, y, z)\,dx\,dy\,dz = \int_V f(r, \theta, \phi)\,r^2\sin\theta\,dr\,d\theta\,d\phi. \tag{7.2.2}\]
8.1: EXAMPLE
8.2: THREE CASES FOR Λ
8.3: PUTTING IT ALL TOGETHER
8.1: Example
Consider a circular plate of radius $c$ m, insulated from above and below. The temperature on the circumference is $100\,^\circ$C on half the circle, and $0\,^\circ$C on the other half.
Figure 8.1.1: The boundary conditions for the temperature on a circular plate.
The differential equation to solve is
\[\rho^2\frac{\partial^2 u}{\partial\rho^2} + \rho\frac{\partial u}{\partial\rho} + \frac{\partial^2 u}{\partial\phi^2} = 0, \tag{8.1.1}\]
8.1.1: Periodic BC
There is no real boundary in the ϕ direction, but we introduce one, since we choose to let ϕ run from 0 to 2π only. So what
kind of boundary conditions do we apply? We would like to see “seamless behaviour”, which specifies the periodicity of the
solution in ϕ ,
\[u(\rho, \phi + 2\pi) = u(\rho, \phi),\qquad \frac{\partial u}{\partial\phi}(\rho, \phi + 2\pi) = \frac{\partial u}{\partial\phi}(\rho, \phi).\]
Restricting ϕ to run from 0 to 2π, these become
\[u(\rho, 2\pi) = u(\rho, 0),\qquad \frac{\partial u}{\partial\phi}(\rho, 2\pi) = \frac{\partial u}{\partial\phi}(\rho, 0).\]
λ > 0
We have to solve
\[\Phi'' = -\alpha^2\Phi, \tag{8.2.1}\]
with solution \(\Phi = A\cos\alpha\phi + B\sin\alpha\phi\). Imposing the periodic boundary conditions leads to
\[\sin^2(2\alpha\pi) = -\left(1 - \cos(2\alpha\pi)\right)^2, \tag{8.2.4}\]
and thus we only have a non-zero solution for α = n, an integer. We have found
\[\lambda_n = n^2,\qquad \Phi_n(\phi) = A_n\cos n\phi + B_n\sin n\phi. \tag{8.2.6}\]
λ = 0
We have
\[\Phi'' = 0. \tag{8.2.7}\]
λ < 0
The solution (hyperbolic sines and cosines) cannot satisfy the boundary conditions.
Now let me look at the solution of the R equation for each of the two cases (they can be treated as one),
\[\rho^2 R''(\rho) + \rho R'(\rho) - n^2 R(\rho) = 0. \tag{8.2.10}\]
Let us attempt a power-series solution (this method will be discussed in great detail in a future lecture)
\[R(\rho) = \rho^\alpha. \tag{8.2.11}\]
Substitution gives \(\alpha(\alpha - 1) + \alpha - n^2 = 0\), i.e. α = ±n, so for n > 0 we find the two solutions \(\rho^n\) and \(\rho^{-n}\). The term with the negative power of ρ diverges as ρ goes to zero. This is not acceptable for a physical quantity (like the temperature). We keep the regular solution,
\[R_n(\rho) = \rho^n. \tag{8.2.14}\]
For n = 0 we find only one solution, but it is not very hard to show (e.g., by substitution) that the general solution is
R0 (ρ) = C0 + D0 ln(ρ). (8.2.15)
The one remaining boundary condition can now be used to determine the coefficients A_n and B_n,
\[u(c, \phi) = \frac{A_0}{2} + \sum_{n=1}^\infty c^n\left(A_n\cos n\phi + B_n\sin n\phi\right).\]
We find
\[A_0 = \frac{1}{\pi}\int_0^\pi 100\,d\phi = 100,\]
\[c^n A_n = \frac{1}{\pi}\int_0^\pi 100\cos n\phi\,d\phi = \frac{100}{n\pi}\left.\sin(n\phi)\right|_0^\pi = 0,\]
\[c^n B_n = \frac{1}{\pi}\int_0^\pi 100\sin n\phi\,d\phi = -\frac{100}{n\pi}\left.\cos(n\phi)\right|_0^\pi = \begin{cases}200/(n\pi) & \text{if } n \text{ is odd}\\ 0 & \text{if } n \text{ is even}\end{cases}.\]
In summary,
\[u(\rho, \phi) = 50 + \frac{200}{\pi}\sum_{n\ \mathrm{odd}}\left(\frac{\rho}{c}\right)^n\frac{\sin n\phi}{n}. \tag{8.3.2}\]
We clearly see the dependence of u on the pure number ρ/c, rather than ρ. A three-dimensional plot of the temperature is given in Fig. 8.3.1.
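A partial sum of (8.3.2) is easy to evaluate numerically. The sketch below (plain Python; the number of terms and sample points are arbitrary choices) also checks the series against the closed form u(ρ, π/2) = 50 + (200/π) arctan(ρ/c), which follows from the arctangent series, since sin(nπ/2) = (−1)^((n−1)/2) for odd n:

```python
import math

def u(rho, phi, c=1.0, nmax=2001):
    """Partial sum of the disk-temperature series (8.3.2)."""
    s = sum((rho / c)**n * math.sin(n * phi) / n
            for n in range(1, nmax, 2))  # odd n only
    return 50 + 200 / math.pi * s

print(u(0.0, 1.0))           # centre of the disk: the average boundary value, 50
print(u(0.5, math.pi / 2))   # compare with 50 + (200/pi)*atan(0.5)
print(50 + 200 / math.pi * math.atan(0.5))
```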
9.1: Frobenius’ Method
Let us look at a very simple (ordinary) differential equation,
\[y''(t) = t\,y(t), \tag{9.1.1}\]
with initial conditions y(0) = a, y'(0) = b. Let us assume that there is a solution that is analytical near t = 0. This means that
\[y(t) = c_0 + c_1 t + \ldots = \sum_{k=0}^\infty c_k t^k. \tag{9.1.2}\]
Its derivatives, and the right-hand side, are
\[y'(t) = c_1 + 2c_2 t + \ldots = \sum_{k=1}^\infty k c_k t^{k-1},\]
\[y''(t) = 2c_2 + 3\cdot 2\,c_3 t + \ldots = \sum_{k=2}^\infty k(k-1)c_k t^{k-2},\]
\[t\,y(t) = c_0 t + c_1 t^2 + \ldots = \sum_{k=0}^\infty c_k t^{k+1}. \tag{9.1.3}\]
Combining these,
\[y''(t) - t\,y(t) = 2c_2 + (3\cdot 2\,c_3 - c_0)t + \ldots = 2c_2 + \sum_{k=3}^\infty \left\{k(k-1)c_k - c_{k-3}\right\} t^{k-2}. \tag{9.1.4}\]
Here we have collected terms of equal power of t. The reason is simple. We are requiring a power series to equal 0. The only way that can work is if each power of t in the power series has zero coefficient. (Compare a finite polynomial....) We thus find
\[c_2 = 0,\qquad k(k-1)c_k = c_{k-3}. \tag{9.1.5}\]
The last relation is called a recurrence or recursion relation, which we can use to bootstrap from the given values c_0 = a and c_1 = b,
\[c_3 = \frac{1}{6}c_0,\qquad c_4 = \frac{1}{12}c_1,\qquad c_5 = \frac{1}{20}c_2 = 0. \tag{9.1.6}\]
These in turn can be used to determine c_6, c_7, c_8, etc. It is not too hard to find an explicit expression for the c's,
\[\begin{aligned}c_{3m} &= \frac{3m-2}{(3m)(3m-1)(3m-2)}\,c_{3(m-1)}\\
&= \frac{3m-2}{(3m)(3m-1)(3m-2)}\,\frac{3m-5}{(3m-3)(3m-4)(3m-5)}\,c_{3(m-2)}\\
&= \frac{(3m-2)(3m-5)\ldots 1}{(3m)!}\,c_0,\end{aligned}\]
\[\begin{aligned}c_{3m+1} &= \frac{3m-1}{(3m+1)(3m)(3m-1)}\,c_{3(m-1)+1}\\
&= \frac{3m-1}{(3m+1)(3m)(3m-1)}\,\frac{3m-4}{(3m-2)(3m-3)(3m-4)}\,c_{3(m-2)+1}\\
&= \frac{(3m-1)(3m-4)\ldots 2}{(3m+1)!}\,c_1,\end{aligned}\]
\[c_{3m+2} = 0. \tag{9.1.7}\]
The general solution is thus
\[y(t) = a\left[1 + \sum_{m=1}^\infty \frac{(3m-2)(3m-5)\ldots 1}{(3m)!}t^{3m}\right] + b\left[t + \sum_{m=1}^\infty \frac{(3m-1)(3m-4)\ldots 2}{(3m+1)!}t^{3m+1}\right]. \tag{9.1.8}\]
The technique sketched here can be proven to work for any differential equation
′′ ′
y (t) + p(t)y (t) + q(t)y(t) = f (t) (9.1.9)
provided that p(t), q(t) and f (t) are analytic at t = 0 . Thus if p, q and f have a power series expansion, so has y .
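The bootstrap described above is easy to automate. The sketch below (plain Python; the step size, truncation order and comparison point are arbitrary choices) builds the coefficients of y'' = t y from the recurrence k(k−1)c_k = c_{k−3} and compares the resulting series with a direct Runge–Kutta integration of the same initial value problem:

```python
def series_coeffs(a, b, kmax=30):
    """Coefficients c_0..c_kmax for y'' = t y via k(k-1) c_k = c_{k-3}."""
    c = [0.0] * (kmax + 1)
    c[0], c[1], c[2] = a, b, 0.0
    for k in range(3, kmax + 1):
        c[k] = c[k - 3] / (k * (k - 1))
    return c

def y_series(t, c):
    return sum(ck * t**k for k, ck in enumerate(c))

def y_rk4(a, b, t_end, n=2000):
    """Integrate y'' = t y as a first-order system with classical RK4."""
    h, t, y, v = t_end / n, 0.0, a, b
    f = lambda t, y, v: (v, t * y)
    for _ in range(n):
        k1 = f(t, y, v)
        k2 = f(t + h / 2, y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(t + h / 2, y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(t + h, y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return y

c = series_coeffs(a=1.0, b=0.5)
print(y_series(0.8, c), y_rk4(1.0, 0.5, 0.8))  # the two should agree closely
```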
Any point t_0 where p(t) and q(t) are singular is called a singular point. Of most interest are a special class of singular points called regular singular points, where the differential equation can be given as
\[(t - t_0)^2 y''(t) + (t - t_0)\alpha(t)y'(t) + \beta(t)y(t) = 0, \tag{9.2.1}\]
with α and β analytic at t = t_0. Let us assume that this point is t_0 = 0. Frobenius' method consists of the following technique: in the equation
\[x^2 y''(x) + x\alpha(x)y'(x) + \beta(x)y(x) = 0, \tag{9.2.2}\]
we substitute a generalised power series,
\[y(x) = x^\gamma \sum_{n=0}^\infty c_n x^n. \tag{9.2.3}\]
Equation 9.2.5 is called the indicial equation. It is a quadratic equation in γ that usually has two (complex) roots. Let me call these γ_1, γ_2. If γ_1 − γ_2 is not an integer, one can prove that the two series solutions for y with these two values of γ are independent solutions.
Let us look at an example,
\[t^2 y''(t) + \tfrac{3}{2}t\,y'(t) + t\,y = 0. \tag{9.2.6}\]
Here α(t) = 3/2, β(t) = t, so t = 0 is indeed a regular singular point. The indicial equation is
\[\gamma(\gamma - 1) + \tfrac{3}{2}\gamma = \gamma^2 + \gamma/2 = 0, \tag{9.2.7}\]
which has the roots γ = 0 and γ = −1/2, giving the two solutions
\[y_1(t) = \sum_{k=0}^\infty c_k t^k,\qquad y_2(t) = t^{-1/2}\sum_{k=0}^\infty d_k t^k. \tag{9.2.8}\]
Independent Solutions
Independent solutions are really very similar to independent vectors: two or more functions are independent if none of them can be written as a combination of the others. Thus x and 1 are independent, and 1 + x and 2 + 2x are dependent.
If γ_1 = γ_2, the second solution contains a logarithm,
\[y_2(t) = y_1(t)\ln t + t^{\gamma_1 + 1}\sum_{n=0}^\infty d_n t^n. \tag{9.3.2}\]
Notice that this last solution is always singular at t = 0, whatever the value of γ_1! Here
\[y_1(t) = t^{\gamma_1}\sum_{n=0}^\infty c_n t^n. \tag{9.3.3}\]
If γ_1 − γ_2 is a positive integer, the second solution takes the form
\[y_2(t) = a\,y_1(t)\ln t + t^{\gamma_2}\sum_{n=0}^\infty d_n t^n.\]
The constant a is determined by substitution, and in a few relevant cases is even 0, so that the solutions can be of the generalised series form.
Example 9.3.1
Find two independent solutions of
\[t^2 y'' + t y' + t y = 0 \tag{9.3.5}\]
near t = 0.
Solution
The indicial equation is γ² = 0, so we get one solution of the series form
\[y_1(t) = \sum_n c_n t^n. \tag{9.3.6}\]
We find
\[t^2 y_1'' = \sum_n n(n-1)c_n t^n,\]
\[t\,y_1' = \sum_n n c_n t^n,\]
\[t\,y_1 = \sum_n c_n t^{n+1} = \sum_n c_{n-1} t^n,\]
or, written out term by term,
\[t\,y_1' = 0 + c_1 t + 2c_2 t^2 + 3c_3 t^3 + \ldots,\]
\[t\,y_1 = 0 + c_0 t + c_1 t^2 + c_2 t^3 + \ldots, \tag{9.3.7}\]
\[t^2 y_1'' + t\,y_1' + t\,y_1 = 0 + (c_1 + c_0)t + (4c_2 + c_1)t^2 + (9c_3 + c_2)t^3 + \ldots = \sum_{n=1}^\infty \left(n^2 c_n + c_{n-1}\right)t^n, \tag{9.3.8}\]
and thus
\[n^2 c_n = -c_{n-1},\qquad\text{so that}\qquad c_n = \frac{(-1)^n}{(n!)^2}c_0.\]
Taking c_0 = 1 we obtain
\[y_1(t) = \sum_{n=0}^\infty (-1)^n\frac{1}{(n!)^2}t^n. \tag{9.3.11}\]
Since the indicial equation has a double root, the second solution contains a logarithm,
\[y_2(t) = \ln(t)\,y_1(t) + t\sum_{n=0}^\infty d_n t^n. \tag{9.3.12}\]
Here I replace the power series with a symbol, \(y_3(t) = t\sum_{n=0}^\infty d_n t^n\), for convenience. We find
\[y_2' = \ln(t)\,y_1' + \frac{y_1(t)}{t} + y_3',\]
\[y_2'' = \ln(t)\,y_1'' + \frac{2y_1'(t)}{t} - \frac{y_1(t)}{t^2} + y_3''.\]
Substituting, and using the differential equation satisfied by y_1,
\[t^2 y_2'' + t\,y_2' + t\,y_2 = 2t\,y_1' + t^2 y_3'' + t\,y_3' + t\,y_3 = 0.\]
Collecting powers of t gives the recurrence
\[2(n+1)c_{n+1} + (n+1)^2 d_n + d_{n-1} = 0, \tag{9.3.13}\]
which determines the d_n.
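Both the closed form for the c_n and the fact that y₁ really solves the equation can be checked numerically. A small plain-Python sketch (the evaluation point t = 0.3 and the truncation order are arbitrary choices):

```python
import math

def c(n):
    # c_n = (-1)^n / (n!)^2, obtained from the recursion n^2 c_n = -c_{n-1}
    return (-1)**n / math.factorial(n)**2

# the recursion and the closed form agree
for n in range(1, 12):
    assert abs(n**2 * c(n) + c(n - 1)) < 1e-14

def y1(t, N=40):
    return sum(c(n) * t**n for n in range(N))

def y1p(t, N=40):   # y1'
    return sum(n * c(n) * t**(n - 1) for n in range(1, N))

def y1pp(t, N=40):  # y1''
    return sum(n * (n - 1) * c(n) * t**(n - 2) for n in range(2, N))

t = 0.3
residual = t**2 * y1pp(t) + t * y1p(t) + t * y1(t)
print(residual)   # should vanish up to rounding
```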
Example 9.3.2
Find two independent solutions of
\[t^2 y'' + t^2 y' - t y = 0 \tag{9.3.14}\]
near t = 0.
Solution
Here α(t) = t and β(t) = −t, so the indicial equation is γ(γ − 1) = 0, with roots γ_1 = 1 and γ_2 = 0, which differ by an integer. The first solution is simply y_1(t) = t (substitute it to check!). The second solution may therefore contain a logarithm,
\[y_2(t) = a\,t\ln t + \sum_{k=0}^\infty d_k t^k. \tag{9.3.15}\]
We find
\[y_2' = a + a\ln t + \sum_{k=0}^\infty k d_k t^{k-1},\]
\[y_2'' = a/t + \sum_{k=0}^\infty k(k-1)d_k t^{k-2}.\]
We thus find
\[t^2 y_2'' + t^2 y_2' - t y_2 = a(t + t^2) + \sum_{k=1}^\infty \left[d_k k(k-1) + d_{k-1}(k-2)\right] t^k.\]
We find
\[d_0 = a,\qquad 2d_2 + a = 0,\qquad d_k = -\frac{k-2}{k(k-1)}d_{k-1}\quad (k > 2). \tag{9.3.16}\]
On fixing d_0 = 1 (and choosing d_1 = 0, since d_1 merely multiplies the first solution y_1 = t) we find
\[y_2(t) = 1 + t\ln t + \sum_{k=2}^\infty (-1)^{k+1}\frac{1}{(k-1)!\,k!}t^k. \tag{9.3.17}\]
10.1: Temperature on a Disk
Let us now turn to a different two-dimensional problem. A circular disk is prepared in such a way that its initial temperature is
radially symmetric,
u(ρ, ϕ, t = 0) = f (ρ). (10.1.1)
Then it is placed between two perfect insulators and its circumference is connected to a freezer that keeps it at 0 °C, as sketched in Fig. 10.1.2.
u(c, t) = 0,
u(ρ, 0) = f (ρ).
The radial equation (which has a regular singular point at ρ = 0) is closely related to one of the most important equations of mathematical physics, Bessel's equation. This equation can be reached from the substitution \(\rho = x/\sqrt{\lambda}\), so that with R(ρ) = X(x) we get the equation
\[x^2\frac{d^2}{dx^2}X(x) + x\frac{d}{dx}X(x) + x^2 X(x) = 0,\qquad X(\sqrt{\lambda}\,c) = 0. \tag{10.1.3}\]
Clearly x = 0 is a regular singular point, so we can solve by Frobenius' method. The indicial equation is obtained from the lowest power after the substitution y = x^γ, and is
\[\gamma^2 - \nu^2 = 0, \tag{10.2.2}\]
so that γ = ±ν; in our problem ν = n, an integer. Now let us solve the problem and explicitly substitute the power series,
\[y = x^\nu\sum_n a_n x^n. \tag{10.2.3}\]
which leads to
\[\left[(m + \nu)^2 - \nu^2\right]a_m = -a_{m-2}, \tag{10.2.5}\]
or
\[a_m = -\frac{1}{m(m + 2\nu)}a_{m-2}. \tag{10.2.6}\]
Looking at the even coefficients, we find
\[a_{2k} = -\frac{1}{4}\frac{1}{k(k+n)}a_{2(k-1)} = \left(\frac{1}{4}\right)^2\frac{1}{k(k-1)(k+n)(k+n-1)}a_{2(k-2)} = \left(-\frac{1}{4}\right)^k\frac{n!}{k!(k+n)!}a_0.\]
If we choose¹
\[a_0 = \frac{1}{n!\,2^n}\]
we find the Bessel function of order n,
\[J_n(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!(k+n)!}\left(\frac{x}{2}\right)^{2k+n}. \tag{10.2.8}\]
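The series (10.2.8) converges rapidly and is easy to evaluate. The sketch below (plain Python; the truncation order is an arbitrary choice) implements it and locates the first zero of J₀ by bisection — the value 2.4048… that appears later in the disk problem:

```python
import math

def J(n, x, K=60):
    """Bessel function of integer order n >= 0 via the series (10.2.8)."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2)**(2 * k + n) for k in range(K))

def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b] assuming a sign change."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

first_zero = bisect(lambda x: J(0, x), 2.0, 3.0)
print(first_zero)               # ~2.404825557695773
print(J(3, 1.5), -J(3, -1.5))   # J_n has the parity of n
```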
There is a second independent solution (which should have a logarithm in it) which goes to infinity at x = 0.
1. This can be done since Bessel's equation is linear, i.e., if g(x) is a solution, C g(x) is also a solution.↩
The Γ function is defined by the integral
\[\Gamma(\nu) = \int_0^\infty e^{-t}t^{\nu-1}\,dt = -\int_0^\infty \frac{de^{-t}}{dt}t^{\nu-1}\,dt = \left.-e^{-t}t^{\nu-1}\right|_0^\infty + (\nu - 1)\int_0^\infty e^{-t}t^{\nu-2}\,dt = (\nu - 1)\Gamma(\nu - 1).\]
Thus for integer argument the Γ function is nothing but a factorial, but it is also defined for other arguments. This is the sense in which Γ generalises the factorial to non-integer arguments. One should realize that once one knows the Γ function between the values of its argument of, say, 1 and 2, one can evaluate any value of the Γ function through recursion. Given that Γ(1.65) = 0.9001168163 we find
Exercise 10.3.1
Evaluate Γ(3), Γ(11), Γ(2.65).
Answer
Γ(3) = 2! = 2, Γ(11) = 10! = 3628800, Γ(2.65) = 1.65 × 0.9001168163 = 1.485192746.
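Python's standard library exposes Γ directly as math.gamma, so the recursion and the exercise values can be checked in a couple of lines:

```python
import math

# Gamma(nu) = (nu - 1) Gamma(nu - 1); for integers, Gamma(n) = (n - 1)!
assert abs(math.gamma(3) - 2) < 1e-12
assert abs(math.gamma(11) - 3628800) < 1e-3
assert abs(math.gamma(2.65) - 1.65 * math.gamma(1.65)) < 1e-12
print(math.gamma(1.65))   # 0.9001168163...
```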
We also would like to determine the Γ function for ν < 1. One can invert the recursion relation to read
\[\Gamma(\nu - 1) = \frac{\Gamma(\nu)}{\nu - 1}. \tag{10.3.5}\]
This works for any value of the argument that is not an integer. If the argument is an integer we get into problems. Look at Γ(0). For small positive ϵ,
\[\Gamma(\pm\epsilon) = \frac{\Gamma(1 \pm \epsilon)}{\pm\epsilon} = \pm\frac{1}{\epsilon} \to \pm\infty. \tag{10.3.7}\]
Thus Γ(n) is not defined for n ≤ 0. This can be easily seen in the graph of the Γ function, Fig. 10.3.1.
The value of Γ(1/2) can be found by a very smart trick: we first evaluate Γ(1/2)² using polar coordinates²,
\[\Gamma\!\left(\tfrac{1}{2}\right)^2 = 4\int_0^\infty e^{-x^2}dx\int_0^\infty e^{-y^2}dy = 4\int_0^\infty\!\!\int_0^{\pi/2} e^{-\rho^2}\rho\,d\rho\,d\phi = \pi.\]
Thus
\[\Gamma(1/2) = \sqrt{\pi},\qquad \Gamma(3/2) = \tfrac{1}{2}\sqrt{\pi},\quad\text{etc.} \tag{10.3.9}\]
With the Γ function we can define the Bessel function of negative order,
\[J_{-\nu}(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!\,\Gamma(-\nu + k + 1)}\left(\frac{x}{2}\right)^{-\nu + 2k},\]
for any non-integer value of ν. This also holds for half-integer values (no logs).
\[\frac{d}{dx}\left[x^{-\nu}J_\nu(x)\right] = -x^{-\nu}J_{\nu+1}(x),\]
\[\frac{d}{dx}\left[x^{\nu}J_\nu(x)\right] = x^{\nu}J_{\nu-1}(x),\]
\[\frac{d}{dx}J_\nu(x) = \frac{1}{2}\left[J_{\nu-1}(x) - J_{\nu+1}(x)\right],\]
\[\int x^{-\nu}J_{\nu+1}(x)\,dx = -x^{-\nu}J_\nu(x) + C,\]
\[\int x^{\nu}J_{\nu-1}(x)\,dx = x^{\nu}J_\nu(x) + C.\]
Let me prove a few of these. First notice from the definition that J_n(x) is even or odd if n is even or odd,
\[J_n(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!(n+k)!}\left(\frac{x}{2}\right)^{n+2k}. \tag{10.5.1}\]
Substituting x = 0 in the definition of the Bessel function gives 0 if ν > 0, since in that case we have the sum of positive powers of 0, which are all equal to zero.
Let's look at J_{−n}:
\[\begin{aligned}J_{-n}(x) &= \sum_{k=0}^\infty \frac{(-1)^k}{k!\,\Gamma(-n+k+1)}\left(\frac{x}{2}\right)^{-n+2k}\\
&= \sum_{k=n}^\infty \frac{(-1)^k}{k!\,\Gamma(-n+k+1)}\left(\frac{x}{2}\right)^{-n+2k}\\
&= \sum_{l=0}^\infty \frac{(-1)^{l+n}}{(l+n)!\,l!}\left(\frac{x}{2}\right)^{n+2l}\\
&= (-1)^n J_n(x).\end{aligned}\]
Here we have used the fact that since Γ(−l) = ±∞, 1/Γ(−l) = 0 [this can also be proven by defining a recurrence relation for 1/Γ(l)]. Furthermore we changed summation variables to l = −n + k.
The next one:
\[\begin{aligned}\frac{d}{dx}\left[x^{-\nu}J_\nu(x)\right] &= \frac{d}{dx}\left[2^{-\nu}\sum_{k=0}^\infty \frac{(-1)^k}{k!\,\Gamma(\nu+k+1)}\left(\frac{x}{2}\right)^{2k}\right]\\
&= 2^{-\nu}\sum_{k=1}^\infty \frac{(-1)^k}{(k-1)!\,\Gamma(\nu+k+1)}\left(\frac{x}{2}\right)^{2k-1}\\
&= -2^{-\nu}\sum_{l=0}^\infty \frac{(-1)^l}{l!\,\Gamma(\nu+l+2)}\left(\frac{x}{2}\right)^{2l+1}\\
&= -x^{-\nu}\sum_{l=0}^\infty \frac{(-1)^l}{l!\,\Gamma(\nu+1+l+1)}\left(\frac{x}{2}\right)^{2l+\nu+1}\\
&= -x^{-\nu}J_{\nu+1}(x),\end{aligned}\]
where we changed summation variables to l = k − 1.
Similarly
\[\frac{d}{dx}\left[x^\nu J_\nu(x)\right] = x^\nu J_{\nu-1}(x).\]
The next relation can be obtained by evaluating the derivatives in the two equations above, and solving for J_ν'(x):
\[x^{-\nu}J_\nu'(x) - \nu x^{-\nu-1}J_\nu(x) = -x^{-\nu}J_{\nu+1}(x),\]
\[x^\nu J_\nu'(x) + \nu x^{\nu-1}J_\nu(x) = x^\nu J_{\nu-1}(x).\]
Multiplying by the appropriate powers of x and adding, we find
\[2J_\nu'(x) = J_{\nu-1}(x) - J_{\nu+1}(x).\]
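These relations are easy to probe numerically with a central finite difference for the derivative. A plain-Python sketch (the order ν = 1 and the point x = 1.7 are arbitrary choices; math.gamma in the series also covers non-integer orders):

```python
import math

def J(nu, x, K=60):
    """Bessel function of order nu >= 0 via the power series, using Gamma."""
    return sum((-1)**k / (math.factorial(k) * math.gamma(nu + k + 1))
               * (x / 2)**(2 * k + nu) for k in range(K))

def ddx(f, x, h=1e-6):
    """Central finite-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

nu, x = 1.0, 1.7
# d/dx [x^-nu J_nu(x)] = -x^-nu J_{nu+1}(x)
print(ddx(lambda t: t**(-nu) * J(nu, t), x), -x**(-nu) * J(nu + 1, x))
# 2 J_nu'(x) = J_{nu-1}(x) - J_{nu+1}(x)
print(2 * ddx(lambda t: J(nu, t), x), J(nu - 1, x) - J(nu + 1, x))
```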
The Sturm–Liouville problem takes the form
\[\left[r(x)y'(x)\right]' + \left[p(x) + \lambda s(x)\right]y(x) = 0, \tag{10.6.1}\]
where λ is a number, and r(x) and s(x) are greater than 0 on [a, b]. We apply the boundary conditions
\[a_1 y(a) + a_2 y'(a) = 0,\qquad b_1 y(b) + b_2 y'(b) = 0.\]
Theorem 10.6.1
If there is a solution to (10.6.1) then λ is real.
Proof
Assume that λ is a complex number (λ = α + iβ) with solution Φ. By complex conjugation we find that
\[\left[r(x)\Phi'(x)\right]' + \left[p(x) + \lambda s(x)\right]\Phi(x) = 0,\]
\[\left[r(x)(\Phi^*)'(x)\right]' + \left[p(x) + \lambda^* s(x)\right]\Phi^*(x) = 0.\]
Multiply the first equation by Φ*, the second by Φ, subtract and integrate:
\[(\lambda - \lambda^*)\int_a^b s(x)\Phi^*(x)\Phi(x)\,dx = r(b)\left[\Phi'(b)\Phi^*(b) - (\Phi^*)'(b)\Phi(b)\right] - r(a)\left[\Phi'(a)\Phi^*(a) - (\Phi^*)'(a)\Phi(a)\right] = 0,\]
where the last step can be done using the boundary conditions. Since both Φ*(x)Φ(x) and s(x) are greater than zero we conclude that \(\int_a^b s(x)\Phi^*(x)\Phi(x)\,dx > 0\), which can now be divided out of the equation to lead to λ = λ*.
Theorem 10.6.2
Let Φ_n and Φ_m be two solutions for different values of λ, λ_n ≠ λ_m; then
\[\int_a^b s(x)\Phi_n(x)\Phi_m(x)\,dx = 0.\]
Proof
The proof is to a large extent identical to the one above: multiply the equation for Φ_n(x) by Φ_m(x) and vice-versa. Subtract and find
\[(\lambda_n - \lambda_m)\int_a^b s(x)\Phi_n(x)\Phi_m(x)\,dx = 0.\]
Theorem 10.6.3
Under the conditions set out above,
a. There exists a real infinite set of eigenvalues λ_0, …, λ_n, … with lim_{n→∞} λ_n = ∞.
b. If Φ_n is the eigenfunction corresponding to λ_n, it has exactly n zeroes in [a, b].
Proof
No proof shall be given.
Bessel's equation can be rewritten in self-adjoint form (divide by x) as
\[\left[x y'\right]' + \left(x - \frac{\nu^2}{x}\right)y = 0. \tag{10.6.8}\]
We cannot identify ν with λ, and we do not have a positive weight function, so this is not quite a Sturm–Liouville problem. It can nevertheless be proven from properties of the equation that the Bessel functions have an infinite number of zeroes on the interval [0, ∞). A small list of these:
\[J_0:\quad 2.40\quad 5.52\quad 8.65\quad 11.79\quad\ldots\]
\[J_{1/2}:\quad \pi\quad 2\pi\quad 3\pi\quad 4\pi\quad\ldots \tag{10.6.9}\]
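The J_{1/2} entries of this list reflect the closed form J_{1/2}(x) = √(2/(πx)) sin x, whose zeroes are exactly nπ. A quick plain-Python check (series with math.gamma; sample points arbitrary):

```python
import math

def J(nu, x, K=60):
    """Bessel function of (possibly half-integer) order via the power series."""
    return sum((-1)**k / (math.factorial(k) * math.gamma(nu + k + 1))
               * (x / 2)**(2 * k + nu) for k in range(K))

for x in (0.5, 1.0, 2.0):
    closed = math.sqrt(2 / (math.pi * x)) * math.sin(x)
    print(J(0.5, x), closed)      # the two columns should agree

print(J(0.5, math.pi))            # a zero of J_{1/2}, up to rounding
```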
\[u(c, t) = 0,\qquad u(\rho, 0) = f(\rho).\]
So how does the equation for R relate to Bessel's equation? Let us make the change of variables \(x = \sqrt{\lambda}\rho\). We find
\[\frac{d}{d\rho} = \sqrt{\lambda}\frac{d}{dx}, \tag{10.7.2}\]
and we can remove a common factor \(\sqrt{\lambda}\) to obtain (X(x) = R(ρ))
\[\left[x X'\right]' + x X = 0, \tag{10.7.3}\]
which is Bessel's equation of order zero. The boundary condition \(X(\sqrt{\lambda}c) = 0\) selects \(\sqrt{\lambda_n}c = x_n\), where x_n are the zeroes of J_0, so the solution is a sum over n = 1, 2, … with \(\lambda_n = (x_n/c)^2\).
In order to understand how to determine the coefficients An from the initial condition u(ρ, 0) = f (ρ) we need to study
Fourier-Bessel series in a little more detail.
The Fourier–Bessel series for f takes the form \(f(\rho) = \sum_{j=1}^\infty A_j J_\nu(\alpha_j\rho)\). The corresponding self-adjoint version of Bessel's equation is easily found to be (with R_j(ρ) = J_ν(α_j ρ))
\[\left(\rho R_j'\right)' + \left(\alpha_j^2\rho - \frac{\nu^2}{\rho}\right)R_j = 0. \tag{10.8.2}\]
Here the α_j are chosen such that the boundary condition
\[b_1 R_j(c) + b_2 R_j'(c) = 0\]
is satisfied. To find the normalization we multiply (10.8.2) by \(2\rho R_j'\) and integrate,
\[\int_0^c \left[\left(\rho R_j'\right)' + \left(\alpha_j^2\rho - \frac{\nu^2}{\rho}\right)R_j\right]2\rho R_j'\,d\rho = 0. \tag{10.8.4}\]
This can be brought to the form (integrate the first term by parts, bring the other two terms to the right-hand side)
\[\int_0^c \frac{d}{d\rho}\left(\rho R_j'\right)^2 d\rho = 2\nu^2\int_0^c R_j R_j'\,d\rho - 2\alpha_j^2\int_0^c \rho^2 R_j R_j'\,d\rho,\]
\[\left.\left(\rho R_j'\right)^2\right|_0^c = \left.\nu^2 R_j^2\right|_0^c - 2\alpha_j^2\int_0^c \rho^2 R_j R_j'\,d\rho.\]
In order to make life not too complicated we shall only look at boundary conditions where f(c) = R(c) = 0. The other cases (mixed, or purely f'(c) = 0) go very similarly. Using the fact that R_j(ρ) = J_ν(α_j ρ), we find
\[R_j' = \alpha_j J_\nu'(\alpha_j\rho). \tag{10.8.6}\]
We conclude that
\[\begin{aligned}2\alpha_j^2\int_0^c \rho R_j^2\,d\rho &= \left.\left(\rho\alpha_j J_\nu'(\alpha_j\rho)\right)^2\right|_0^c\\
&= c^2\alpha_j^2\left(J_\nu'(\alpha_j c)\right)^2\\
&= c^2\alpha_j^2\left(\frac{\nu}{\alpha_j c}J_\nu(\alpha_j c) - J_{\nu+1}(\alpha_j c)\right)^2\\
&= c^2\alpha_j^2\left(J_{\nu+1}(\alpha_j c)\right)^2,\end{aligned}\]
where we have used J_ν(α_j c) = 0 in the last step.
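This normalization can be verified numerically for, say, ν = 0 and c = 1, with α₁ the first zero of J₀ (plain-Python sketch; Simpson's rule, and the zero's value taken as given):

```python
import math

def J(n, x, K=60):
    """Bessel function of integer order n via the power series."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2)**(2 * k + n) for k in range(K))

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

alpha = 2.404825557695773      # first zero of J_0, with c = 1
lhs = simpson(lambda r: r * J(0, alpha * r)**2, 0.0, 1.0)
rhs = 0.5 * J(1, alpha)**2     # c^2/2 * J_{nu+1}(alpha c)^2 for nu = 0
print(lhs, rhs)                # the two should agree closely
```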
Example 10.8.1
Expand the function
\[f(x) = \begin{cases}x^3 & 0 < x < 10\\ 0 & x > 10\end{cases} \tag{10.8.8}\]
in a Fourier–Bessel series using J_3.
Solution
From our definitions we find that
\[f(x) = \sum_{j=1}^\infty A_j J_3(\alpha_j x),\]
with
\[\begin{aligned}A_j &= \frac{2}{100\,J_4(10\alpha_j)^2}\int_0^{10} x^3 J_3(\alpha_j x)\,x\,dx\\
&= \frac{2}{100\,J_4(10\alpha_j)^2}\,\frac{1}{\alpha_j^5}\int_0^{10\alpha_j} s^4 J_3(s)\,ds\\
&= \frac{2}{100\,J_4(10\alpha_j)^2}\,\frac{1}{\alpha_j^5}\,(10\alpha_j)^4 J_4(10\alpha_j)\\
&= \frac{200}{\alpha_j J_4(10\alpha_j)}.\end{aligned}\]
Using α_j = …, we find that the first five values of A_j are 1050.95, −821.503, 703.991, −627.577, 572.301. The first five partial sums are plotted in Fig. 10.8.1.
Figure 10.8.1: A graph of the first five partial sums for x³ expressed in J_3.
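The coefficients above can be reproduced numerically: the α_j follow from the zeroes of J₃(10α), found here by bisection, and then A_j = 200/(α_j J₄(10α_j)). A plain-Python sketch (brackets for the first two zeroes chosen by inspection of the sign of J₃):

```python
import math

def J(n, x, K=80):
    """Bessel function of integer order n via the power series."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2)**(2 * k + n) for k in range(K))

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# first two zeroes x_j of J_3, so that alpha_j = x_j / 10
x1 = bisect(lambda x: J(3, x), 6.0, 7.0)     # ~6.3802
x2 = bisect(lambda x: J(3, x), 9.0, 10.5)    # ~9.7610
A = [200 / ((xj / 10) * J(4, xj)) for xj in (x1, x2)]
print(A)   # approximately [1050.95, -821.503], as quoted above
```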
11.1: Modelling the Eye
Let me model the temperature in a simple model of the eye, where the eye is a sphere, and the eyelids are circular. In that case we can put the z-axis straight through the middle of the eye, and we can assume that the temperature depends only on r, θ and not on ϕ. We assume that the part of the eye in contact with air is at a temperature of 20 °C, and the part in contact with the body is at 36 °C.
Figure 11.1.1 : The temperature on the outside of a simple model of the eye
If we look for the steady state temperature it is described by Laplace's equation,
\[\nabla^2 u(r, \theta) = 0. \tag{11.1.1}\]
In spherical coordinates (with no ϕ dependence) this becomes
\[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial u}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial u}{\partial\theta}\right) = 0. \tag{11.1.2}\]
The equation for R will be shown to be easy to solve (later). The one for T is of much more interest, since for 3D problems the angular dependence is more complicated than in 2D, where the angular functions were just sines and cosines. The equation for T is
\[\left[\sin\theta\,T'\right]' + \lambda T\sin\theta = 0. \tag{11.1.5}\]
This equation is called Legendre's equation, or actually it carries that name after changing variables to x = cos θ. Since θ runs from 0 to π, we find sin θ > 0, and we have
\[\sin\theta = \sqrt{1 - x^2}. \tag{11.1.6}\]
After this substitution we find the equation (with y(x) = T(θ) = T(arccos x), and we now differentiate w.r.t. x, \(d/d\theta = -\sqrt{1 - x^2}\,d/dx\))
\[\frac{d}{dx}\left[(1 - x^2)\frac{dy}{dx}\right] + \lambda y = 0. \tag{11.1.7}\]
This equation is easily seen to be self-adjoint. It is not very hard to show that x = 0 is a regular (not singular) point – but the equation is singular at x = ±1. Near x = 0 we can solve it by straightforward substitution of a Taylor series,
\[y(x) = \sum_{j=0}^\infty a_j x^j. \tag{11.1.8}\]
We find
\[\sum_j j(j-1)a_j x^{j-2} - \sum_j j(j-1)a_j x^j - 2\sum_j j a_j x^j + \lambda\sum_j a_j x^j = 0, \tag{11.1.9}\]
which can be rewritten as
\[\sum_{j=0}^\infty (j+1)(j+2)a_{j+2} x^j - \sum_{j=0}^\infty \left[j(j+1) - \lambda\right]a_j x^j = 0. \tag{11.1.10}\]
Collecting the coefficient of each power of x,
\[a_{k+2} = \frac{k(k+1) - \lambda}{(k+1)(k+2)}a_k. \tag{11.1.11}\]
If λ = n(n + 1) this series terminates – and those are actually the only acceptable solutions: for any other value of λ the series diverges at x = +1 or x = −1, which is not acceptable for a physical quantity – it can't just diverge at the north or south pole (x = cos θ = ±1 are the north and south poles of a sphere).
We thus have, for n even,
\[y_n = a_0 + a_2 x^2 + \ldots + a_n x^n, \tag{11.1.12}\]
and similarly an odd polynomial for n odd. Normalised so that P_n(1) = 1, the first few Legendre polynomials are
\[P_2 = \tfrac{1}{2}(3x^2 - 1),\qquad P_3 = \tfrac{1}{2}(5x^3 - 3x), \tag{11.1.15}\]
\[P_4 = \tfrac{1}{8}(35x^4 - 30x^2 + 3),\qquad P_5 = \tfrac{1}{8}(63x^5 - 70x^3 + 15x).\]
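The polynomials can also be generated from the Bonnet recurrence (n+1)P_{n+1}(x) = (2n+1)x P_n(x) − n P_{n−1}(x), a property derived from the generating function later in this chapter. A plain-Python sketch comparing the recurrence with the explicit formulas above:

```python
def legendre(n, x):
    """P_n(x) via the Bonnet recurrence (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    p_prev, p = 1.0, x        # P_0 and P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

x = 0.3
print(legendre(2, x), (3 * x**2 - 1) / 2)
print(legendre(5, x), (63 * x**5 - 70 * x**3 + 15 * x) / 8)
print([legendre(n, 1.0) for n in range(6)])   # all equal to 1
```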
Exercise 11.2.1
Show that \(F(x, t) = \frac{1}{1 - xt}\) is a generating function of the polynomials xⁿ.
Answer
Look at
\[\frac{1}{1 - xt} = \sum_{n=0}^\infty x^n t^n\qquad (|xt| < 1).\]
Exercise 11.2.2
Show that \(F(x, t) = \exp\left(\tfrac{x}{2}\left(t - \tfrac{1}{t}\right)\right)\) is the generating function for the Bessel functions,
\[F(x, t) = \exp\left(\tfrac{x}{2}\left(t - \tfrac{1}{t}\right)\right) = \sum_{n=-\infty}^\infty J_n(x)t^n.\]
Answer
TBA
Exercise 11.2.4
(The case of most interest here)
\[F(x, t) = \frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{n=0}^\infty P_n(x)t^n.\]
Answer
TBA
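Pending the full derivation, the identity can at least be checked numerically (plain Python; the points x = 0.4, t = 0.3 are arbitrary choices, and P_n is built from the Bonnet recurrence):

```python
import math

def legendre(n, x):
    """P_n(x) via the Bonnet recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

x, t = 0.4, 0.3
lhs = 1 / math.sqrt(1 - 2 * x * t + t**2)
rhs = sum(legendre(n, x) * t**n for n in range(60))
print(lhs, rhs)   # the two should agree to high accuracy
```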
2. \(P_n(1) = 1\).
3. \(P_n(-1) = (-1)^n\).
Let us prove some of these relations, first Rodrigues' formula (Equation 11.2.1). We start from the simple formula
\[(x^2 - 1)\frac{d}{dx}(x^2 - 1)^n - 2nx(x^2 - 1)^n = 0, \tag{11.2.2}\]
and differentiate it n + 1 times with Leibniz' rule:
\[\begin{aligned}0 &= n(n+1)\frac{d^n}{dx^n}(x^2 - 1)^n + 2(n+1)x\frac{d^{n+1}}{dx^{n+1}}(x^2 - 1)^n + (x^2 - 1)\frac{d^{n+2}}{dx^{n+2}}(x^2 - 1)^n\\
&\qquad - 2n(n+1)\frac{d^n}{dx^n}(x^2 - 1)^n - 2nx\frac{d^{n+1}}{dx^{n+1}}(x^2 - 1)^n\\
&= -n(n+1)\frac{d^n}{dx^n}(x^2 - 1)^n + 2x\frac{d^{n+1}}{dx^{n+1}}(x^2 - 1)^n + (x^2 - 1)\frac{d^{n+2}}{dx^{n+2}}(x^2 - 1)^n\\
&= -\left[\frac{d}{dx}(1 - x^2)\frac{d}{dx}\left\{\frac{d^n}{dx^n}(x^2 - 1)^n\right\} + n(n+1)\left\{\frac{d^n}{dx^n}(x^2 - 1)^n\right\}\right].\end{aligned}\]
We conclude that
\[\frac{d^n}{dx^n}(x^2 - 1)^n\]
satisfies Legendre's equation. The normalization follows from the evaluation of the highest coefficient,
\[\frac{d^n}{dx^n}x^{2n} = \frac{(2n)!}{n!}x^n, \tag{11.2.3}\]
and thus we need to divide by \(2^n n!\) to get the properly normalized P_n.
Let's use the generating function to prove some of the other properties.
2.:
\[F(1, t) = \frac{1}{1 - t} = \sum_n t^n, \tag{11.2.4}\]
and comparing with \(F(1, t) = \sum_n P_n(1)t^n\) shows that P_n(1) = 1.
5.: Differentiating F w.r.t. t gives
\[\frac{x - t}{(1 - 2xt + t^2)^{3/2}} = \sum_{n=0}^\infty n t^{n-1}P_n(x),\]
or
\[\frac{x - t}{1 - 2xt + t^2}\sum_{n=0}^\infty t^n P_n(x) = \sum_{n=0}^\infty n t^{n-1}P_n(x).\]
Multiplying both sides by \(1 - 2xt + t^2\) and collecting the coefficient of tⁿ leads to
\[\sum_{n=0}^\infty t^n(2n+1)x P_n(x) = \sum_{n=0}^\infty (n+1)t^n P_{n+1}(x) + \sum_{n=0}^\infty n t^n P_{n-1}(x),\]
i.e., the recurrence relation \((2n+1)x P_n(x) = (n+1)P_{n+1}(x) + n P_{n-1}(x)\).
Proofs for the other properties can be found using similar methods.
The normalization integral \(\int_{-1}^1 P_n(x)^2\,dx\) can be determined using the relation 5. twice to obtain a recurrence relation,
\[\begin{aligned}\int_{-1}^1 P_n(x)^2\,dx &= \int_{-1}^1 P_n(x)\,\frac{(2n-1)x P_{n-1}(x) - (n-1)P_{n-2}(x)}{n}\,dx\\
&= \frac{(2n-1)}{n}\int_{-1}^1 x P_n(x)P_{n-1}(x)\,dx\\
&= \frac{(2n-1)}{n}\int_{-1}^1 \frac{(n+1)P_{n+1}(x) + n P_{n-1}(x)}{2n+1}P_{n-1}(x)\,dx\\
&= \frac{(2n-1)}{2n+1}\int_{-1}^1 P_{n-1}^2(x)\,dx,\end{aligned}\]
and the use of a very simple integral to fix this number for n = 0,
\[\int_{-1}^1 P_0^2(x)\,dx = 2. \tag{11.3.2}\]
It follows that \(\int_{-1}^1 P_n(x)^2\,dx = \frac{2}{2n+1}\).
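Numerically, with P_n from the Bonnet recurrence and Simpson's rule (plain-Python sketch; grid size an arbitrary choice):

```python
def legendre(n, x):
    """P_n(x) via the Bonnet recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

for n in range(6):
    norm = simpson(lambda x, n=n: legendre(n, x)**2, -1.0, 1.0)
    print(n, norm, 2 / (2 * n + 1))   # numerical vs exact 2/(2n+1)
```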
A function defined on [−1, 1] can thus be expanded in Legendre polynomials,
\[f(x) = \sum_{n=0}^\infty A_n P_n(x),\qquad A_n = \frac{2n+1}{2}\int_{-1}^1 f(x)P_n(x)\,dx.\]
As an example, expand
\[f(x) = \begin{cases}0, & -1 < x < 0\\ 1, & 0 < x < 1\end{cases}. \tag{11.3.3}\]
Answer
We find
\[A_0 = \frac{1}{2}\int_0^1 P_0(x)\,dx = \frac{1}{2},\]
\[A_1 = \frac{3}{2}\int_0^1 P_1(x)\,dx = \frac{3}{4},\]
\[A_2 = \frac{5}{2}\int_0^1 P_2(x)\,dx = 0,\]
\[A_3 = \frac{7}{2}\int_0^1 P_3(x)\,dx = -\frac{7}{16}.\]
All other coefficients for even n are zero; for odd n they can be evaluated explicitly.
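These values follow from A_n = (2n+1)/2 ∫₀¹ P_n(x) dx; a quick numerical confirmation (plain Python, Bonnet recurrence plus Simpson's rule):

```python
def legendre(n, x):
    """P_n(x) via the Bonnet recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# A_n = (2n+1)/2 * integral_0^1 P_n(x) dx for the step function above
A = [(2 * n + 1) / 2 * simpson(lambda x, n=n: legendre(n, x), 0.0, 1.0)
     for n in range(4)]
print(A)   # approximately [0.5, 0.75, 0.0, -0.4375]
```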
The singular part is not acceptable, so once again we find that the solution takes the form
\[u(r, \theta) = \sum_{n=0}^\infty A_n r^n P_n(\cos\theta). \tag{11.4.2}\]
We now need to impose the boundary condition that the temperature is 20 °C in an opening angle of 45°, and 36 °C elsewhere.
These integrals can easily be evaluated, and a sketch of the temperature can be found in Figure 11.4.1.
Figure 11.4.1: A cross-section of the temperature in the eye. We have summed over the first 40 Legendre polynomials.
Notice that we need to integrate over x = cos θ to obtain the coefficients A_n. The integration over θ in spherical coordinates is \(\int_0^\pi \sin\theta\,d\theta = \int_{-1}^1 1\,dx\), and so automatically implies that cos θ is the right variable to use, as also follows from the orthogonality of P_n(x).
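As a sketch of that computation (plain Python; boundary values 20 °C and 36 °C with a 45° opening as in the text, and radius c = 1 assumed for simplicity):

```python
import math

def legendre(n, x):
    """P_n(x) via the Bonnet recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

x_lid = math.cos(math.pi / 4)                 # edge of the opening in x = cos(theta)
f = lambda x: 20.0 if x > x_lid else 36.0     # boundary temperature as a function of x

A = [(2 * n + 1) / 2 * simpson(lambda x, n=n: f(x) * legendre(n, x), -1.0, 1.0)
     for n in range(41)]

def u(r, theta):
    """Partial sum of the solution (11.4.2), with c = 1."""
    x = math.cos(theta)
    return sum(A[n] * r**n * legendre(n, x) for n in range(41))

print(A[0])          # mean boundary temperature, ~33.66 for these values
print(u(0.0, 0.0))   # the temperature at the centre equals A_0
```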