bookdown-Lecture notes
Vahid Shahrezaei
2021-03-24
Contents
Welcome
5 Linear ODEs
5.1 General solution of the non-homogeneous linear ODE
5.2 First order linear ODEs with constant coefficients
5.3 Second order linear ODEs with constant coefficients
5.4 𝑘th order Linear ODEs with constant coefficients
5.5 Euler-Cauchy equation
5.6 Using Fourier Transforms to solve linear ODEs
8 Introduction to Bifurcations
8.1 Bifurcations in linear systems
8.2 Qualitative behavior of non-linear 1D systems
These are lecture notes for the second part of the Calculus and Applications first year module at the Department of Mathematics, Imperial College London. The notes are split into three parts on Fourier Transforms, Ordinary Differential Equations and an Introduction to Multivariate Calculus. Please refer to the course Blackboard page for additional materials and recommended textbooks for further reading.
These lecture notes are adapted from existing courses in our department. Part I of the course is based on the old M2AA2 course (Andrew Walton), and Parts II and III are based on the old M1M2 course (Frank Berkshire, Mauricio Barahona, Andrew Parry). Some examples and ideas from the old mechanics course M1A1 are included as well.
These notes are produced with accessibility in mind, and I hope they meet your requirements. You can experiment with the controls in the toolbar at the top of the html version of the notes: you can search for a word, and adjust the typeface, font size, font and background colour. You can also download a copy of these notes in different formats for offline use, if you wish. I hope you enjoy this course; let me know by email if you have any comments or questions.
© Vahid Shahrezaei (2021) These notes are provided for the personal study of
students taking this module. The distribution of copies in part or whole is not
permitted.
Part I: Fourier Transforms
Chapter 1
Fourier Transforms
Last term, we saw that Fourier series allow us to represent a given function, defined over a finite range of the independent variable, in terms of sine and cosine waves of different amplitudes and frequencies. Fourier Transforms are the natural extension of Fourier series for functions defined over ℝ. A key reason for studying Fourier transforms (and series) is that we can use these ideas to help us solve differential equations, as seen in this course for ordinary differential equations and more extensively next year in relation to partial differential equations. There are also many other applications of Fourier transforms in science and engineering, particularly in the context of signal processing.
Recall that for $|x| < L$ a function can be expanded as
$$ f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left\{a_n\cos\left(\frac{n\pi x}{L}\right) + b_n\sin\left(\frac{n\pi x}{L}\right)\right\}, $$
where the corresponding Fourier coefficients are given by
$$ a_n = \frac{1}{L}\int_{-L}^{L}f(x)\cos\left(\frac{n\pi x}{L}\right)dx, \quad n = 0, 1, 2, \cdots, $$
$$ b_n = \frac{1}{L}\int_{-L}^{L}f(x)\sin\left(\frac{n\pi x}{L}\right)dx, \quad n = 1, 2, \cdots. $$
Expressed in the exponential form the Fourier series can be represented as
$$ f(x) = \sum_{n=-\infty}^{\infty}c_ne^{in\pi x/L}, \quad |x| < L, $$
where
$$ c_n = \frac{1}{2L}\int_{-L}^{L}f(x)e^{-in\pi x/L}dx, \quad n = 0, \pm1, \pm2, \cdots. $$
Defining the angular frequency $\omega_n = n\pi/L$, the spacing of adjacent frequencies is
$$ \delta\omega = \omega_{n+1} - \omega_n = \frac{\pi}{L}. $$
This result can be extended to a function $f(x)$ defined on ℝ by taking the limit $L \to \infty$ of the Fourier series. Using the angular frequency notation from above, and replacing the sum with an integral as in a Riemann sum, noting that $\delta\omega \to 0$ as $L \to \infty$, we obtain
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left\{\int_{-\infty}^{\infty}f(s)e^{-i\omega s}ds\right\}e^{i\omega x}d\omega. $$
We therefore have shown that for a function 𝑓(𝑥) defined over −∞ < 𝑥 < ∞
we have the following
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(\omega)e^{i\omega x}d\omega, $$
where
$$ \hat{f}(\omega) = \int_{-\infty}^{\infty}f(x)e^{-i\omega x}dx. $$
The function $\hat{f}(\omega)$ (also denoted $\mathcal{F}\{f(x)\}$) is known as the Fourier transform of $f(x)$, and is analogous to the Fourier coefficients in a Fourier series. The relation above expressing $f(x)$ in terms of $\hat{f}(\omega)$ is also known as the inverse Fourier transform.
Note that some books use slightly different definitions of Fourier transform with
different normalisation. In order to evaluate the integrals above, a necessary
condition is that 𝑓(𝑥) and its transform decay at ±∞. Using the Dirac delta
function this restriction can be overcome as seen later.
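The transform pair defined above can be checked numerically. The sketch below is our own illustration (the helper name `fourier_transform` is not from the notes): it approximates the transform integral by a midpoint Riemann sum on a truncated interval, and compares the result with the known transform $\mathcal{F}\{e^{-|x|}\} = 2/(1+\omega^2)$, a pair that reappears later in these notes.

```python
import math

def fourier_transform(f, omega, L=40.0, n=80000):
    # Midpoint Riemann sum for f_hat(omega) = integral of f(x) e^{-i omega x} dx,
    # truncated to [-L, L]; adequate when f decays rapidly at +-infinity.
    dx = 2.0 * L / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = -L + (k + 0.5) * dx
        total += f(x) * complex(math.cos(omega * x), -math.sin(omega * x)) * dx
    return total

# f(x) = e^{-|x|} has the known transform f_hat(omega) = 2 / (1 + omega^2).
f = lambda x: math.exp(-abs(x))
for omega in (0.0, 1.0, 2.0):
    approx = fourier_transform(f, omega).real
    exact = 2.0 / (1.0 + omega**2)
    print(f"omega={omega}: numeric={approx:.4f}, exact={exact:.4f}")
```

Truncating at $L = 40$ is safe here because $e^{-|x|}$ is negligible well before the cut-off; functions that decay slowly (or not at all) need the delta-function machinery discussed later.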
Proof of Fourier’s integral formula
In the previous section in a non-rigorous way we arrived at the result
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left\{\int_{-\infty}^{\infty}f(s)e^{-i\omega s}ds\right\}e^{i\omega x}d\omega. $$
To prove this more formally, we need to assume that 𝑓(𝑥) is such that
$$ \int_{-\infty}^{\infty}|f(x)|\,dx $$
converges. We will also assume that 𝑓(𝑥) and 𝑓 ′ (𝑥) are continuous for all 𝑥 (this
can be relaxed as discussed at the end). We start by writing the RHS above in
the form
$$ \lim_{L\to\infty}\frac{1}{2\pi}\int_{-L}^{L}\left\{\int_{-\infty}^{\infty}f(s)e^{-i\omega(s-x)}ds\right\}d\omega = $$
$$ \lim_{L\to\infty}\frac{1}{2\pi}\int_{-L}^{L}\left\{\int_{-\infty}^{\infty}f(s)\cos\left[\omega(s-x)\right]ds - i\int_{-\infty}^{\infty}f(s)\sin\left[\omega(s-x)\right]ds\right\}d\omega. $$
The first integral in curly brackets is even about 𝜔 = 0, while the second is
odd. Also, because of the absolute convergence of the inner integral, we can
interchange the order of integration. Therefore, the expression simplifies to
$$ \lim_{L\to\infty}\frac{1}{\pi}\int_{0}^{L}\left\{\int_{-\infty}^{\infty}f(s)\cos\left[\omega(s-x)\right]ds\right\}d\omega $$
$$ = \lim_{L\to\infty}\frac{1}{\pi}\int_{-\infty}^{\infty}f(s)\left\{\int_{0}^{L}\cos\left[\omega(s-x)\right]d\omega\right\}ds $$
$$ = \lim_{L\to\infty}\frac{1}{\pi}\int_{-\infty}^{\infty}f(s)\,\frac{\sin\left[L(s-x)\right]}{s-x}\,ds $$
$$ = \lim_{L\to\infty}\frac{1}{\pi}\int_{-\infty}^{\infty}f(x+u)\,\frac{\sin(Lu)}{u}\,du, $$
using the substitution 𝑢 = 𝑠 − 𝑥. We now split the integral into two parts in
the following form
$$ \lim_{L\to\infty}\frac{1}{\pi}\left\{\int_{-\infty}^{\infty}\frac{f(x+u) - f(x)}{u}\sin(Lu)\,du + f(x)\int_{-\infty}^{\infty}\frac{\sin(Lu)}{u}\,du\right\}. $$
The first integral tends to zero as $L \to \infty$, while $\int_{-\infty}^{\infty}\frac{\sin(Lu)}{u}du = \pi$, so the expression tends to $f(x)$, proving the formula at points where $f$ is continuous. If instead $f(x)$ has a jump discontinuity at a point $x_0$ (with finite left and right hand derivatives there), the LHS of the formula is replaced by $[f(x_0+) + f(x_0-)]/2$ (analogous to the Fourier series convergence we investigated earlier).
Example 1.1. Find the Fourier transform of the rectangular wave
$$ f(x) = \begin{cases} 1, & |x| < d, \\ 0, & |x| > d. \end{cases} $$
$$ \hat{f}(\omega) = \int_{-d}^{d}1\cdot e^{-i\omega x}dx = \left[\frac{e^{-i\omega x}}{-i\omega}\right]_{-d}^{d} = -\frac{1}{i\omega}\left(e^{-i\omega d} - e^{i\omega d}\right) = \frac{2}{\omega}\sin\omega d. $$
See Figure 1.1 for a graph of $\hat{f}(\omega)$ for different values of $d$. Note that as $d$ gets larger, $\hat{f}$ becomes more concentrated in the vicinity of $\omega = 0$. This is a general property of Fourier transforms and their inverses, and relates to the uncertainty principle: a function that is more localised around zero has a wider inverse Fourier transform. See unseen question 1 for the derivation and further discussion of the uncertainty principle.
Figure 1.1: Graph of the Fourier transform for 𝑑 = 1 (green) and 𝑑 = 4 (red)
Thus, for an even function $f(x)$ we have $\hat{f}(\omega) = 2\hat{f}_c(\omega)$.
Using the inversion formula for the regular transform and exploiting the evenness of $\hat{f}_c(\omega)$, we can obtain the inversion formula for the Fourier cosine transform:
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(\omega)e^{i\omega x}d\omega = \frac{2}{\pi}\int_{0}^{\infty}\hat{f}_c(\omega)\cos\omega x\,d\omega. $$
For an odd function $f(x)$, we have $\hat{f}(\omega) = -2i\hat{f}_s(\omega)$.
Example 1.2. Find the Fourier cosine transform of the rectangular wave
$$ f(x) = \begin{cases} 1, & |x| < d, \\ 0, & |x| > d. \end{cases} $$
We can use the definition of Fourier cosine transform directly noting that the
function is even. But also, as we have already obtained the Fourier transform
of this function in the last example, we simply have:
$$ \hat{f}_c(\omega) = \frac{1}{2}\hat{f}(\omega) = \frac{1}{\omega}\sin\omega d. $$
Chapter 2
Properties of Fourier
Transforms
-(iv) Scaling property (for $a > 0$):
$$ \mathcal{F}\{f(ax)\} = \frac{1}{a}\hat{f}\left(\frac{\omega}{a}\right). $$
Proof Starting on the LHS, and making the substitution 𝑠 = 𝑎𝑥:
$$ \mathcal{F}\{f(ax)\} = \int_{-\infty}^{\infty}f(ax)e^{-i\omega x}dx = \frac{1}{a}\int_{-\infty}^{\infty}f(s)e^{-i(\omega/a)s}ds = \frac{1}{a}\hat{f}\left(\frac{\omega}{a}\right). $$
-(v) A similar result, but this time involving a shift in transform space:
$$ \mathcal{F}\{e^{i\omega_0x}f(x)\} = \int_{-\infty}^{\infty}f(x)e^{-i(\omega-\omega_0)x}dx = \hat{f}(\omega - \omega_0). $$
-(vi) Symmetry formula The following result is very useful. Suppose the Fourier transform of $f(x)$ is $\hat{f}(\omega)$; change the variable $\omega$ to $x$; then
$$ \mathcal{F}\{\hat{f}(x)\} = 2\pi f(-\omega). $$
Proof Starting with the inversion formula and changing variables from $\omega$ to $s$, we have
$$ f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(\omega)e^{i\omega x}d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(s)e^{isx}ds. $$
If we now let $x = -\omega$ and then rename $s = x$, we get:
$$ f(-\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(s)e^{-i\omega s}ds = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(x)e^{-i\omega x}dx = \frac{1}{2\pi}\mathcal{F}\{\hat{f}(x)\}, $$
as required.
The following results are particularly useful when applying Fourier transforms
to differential equations (as seen later this term and next year in the context of
partial differential equations).
-(vii)
$$ \mathcal{F}\left\{\frac{d^nf}{dx^n}\right\} = (i\omega)^n\hat{f}(\omega). $$
$$ \mathcal{F}\{d^nf/dx^n\} = \int_{-\infty}^{\infty}\frac{d^nf}{dx^n}e^{-i\omega x}dx $$
$$ = \left[\frac{d^{n-1}f}{dx^{n-1}}e^{-i\omega x}\right]_{-\infty}^{\infty} + i\omega\int_{-\infty}^{\infty}\frac{d^{n-1}f}{dx^{n-1}}e^{-i\omega x}dx $$
$$ = i\omega\,\mathcal{F}\{d^{n-1}f/dx^{n-1}\} = \cdots = (i\omega)^n\hat{f}(\omega), $$
where the boundary term vanishes because $f$ and its derivatives decay at $\pm\infty$.
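Property (vii) can be sanity-checked numerically for a smooth, rapidly decaying function. Below is a minimal sketch (the helper `ft` is our own, not from the notes) comparing $\mathcal{F}\{df/dx\}$ with $(i\omega)\hat{f}(\omega)$ for a Gaussian.

```python
import math

def ft(f, omega, L=12.0, n=24000):
    # Midpoint Riemann sum for the transform integral, truncated to [-L, L].
    dx = 2.0 * L / n
    s = 0.0 + 0.0j
    for k in range(n):
        x = -L + (k + 0.5) * dx
        s += f(x) * complex(math.cos(omega * x), -math.sin(omega * x)) * dx
    return s

f = lambda x: math.exp(-x * x / 2.0)            # a Gaussian
fprime = lambda x: -x * math.exp(-x * x / 2.0)  # its derivative

omega = 1.5
lhs = ft(fprime, omega)          # F{df/dx}
rhs = 1j * omega * ft(f, omega)  # (i omega) f_hat(omega)
print(abs(lhs - rhs))            # difference is at the level of quadrature error
```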
-(viii)
ℱ{𝑥𝑓(𝑥)} = 𝑖𝑓 ′̂ (𝜔).
-(ix)
Proof We prove (a) and (c) and leave the others as exercises. For (a) we have,
integrating by parts:
$$ \mathcal{F}_c\{f'(x)\} = \int_{0}^{\infty}f'(x)\cos\omega x\,dx $$
$$ = \left[f(x)\cos\omega x\right]_{0}^{\infty} + \omega\int_{0}^{\infty}f(x)\sin\omega x\,dx = -f(0) + \omega\hat{f}_s(\omega). $$
Theorem 2.1 (Convolution theorem). Suppose $f(x)$ and $g(x)$ are two functions defined over ℝ with Fourier transforms given as $\hat{f}(\omega)$ and $\hat{g}(\omega)$, and define their convolution by $(f*g)(x) = \int_{-\infty}^{\infty}f(x-u)g(u)\,du$. Then we have:
$$ \mathcal{F}\{f * g\} = \hat{f}(\omega)\hat{g}(\omega). $$
Proof We start on the LHS, change the order of integration and then use the
substitution 𝑠 = 𝑥 − 𝑢 at fixed 𝑢:
$$ \int_{x=-\infty}^{\infty}\left\{\int_{u=-\infty}^{\infty}f(x-u)g(u)\,du\right\}e^{-i\omega x}dx $$
$$ = \int_{u=-\infty}^{\infty}g(u)\left\{\int_{x=-\infty}^{\infty}f(x-u)e^{-i\omega x}dx\right\}du $$
$$ = \int_{u=-\infty}^{\infty}g(u)\left\{\int_{s=-\infty}^{\infty}f(s)e^{-i\omega(s+u)}ds\right\}du $$
$$ = \left(\int_{-\infty}^{\infty}g(u)e^{-i\omega u}du\right)\left(\int_{-\infty}^{\infty}f(s)e^{-i\omega s}ds\right) = \hat{g}(\omega)\hat{f}(\omega), $$
as required.
The convolution theorem suggests that convolution is commutative. This can
also be shown easily from the definition by using a change of variable in the
integration.
A similar convolution theorem holds for the inverse functions.
$$ \mathcal{F}\{f(x)g(x)\} = \frac{1}{2\pi}\hat{f}(\omega) * \hat{g}(\omega). $$
The proof of this result using the Dirac delta function is discussed as a quiz in the lectures, and a proof using the symmetry formula is seen in the problem sheet.
By setting
$$ \hat{f}(\omega) = \frac{1}{4 + \omega^2}, \qquad \hat{g}(\omega) = \frac{1}{9 + \omega^2}, $$
and recalling (from the quiz in the lectures) that $\mathcal{F}\{e^{-a|x|}\} = \frac{2a}{a^2 + \omega^2}$ for $a > 0$, we have $f(x) = \frac{1}{4}e^{-2|x|}$ and $g(x) = \frac{1}{6}e^{-3|x|}$.
Therefore
$$ \mathcal{F}^{-1}\left\{\frac{1}{(4+\omega^2)(9+\omega^2)}\right\} = f(x) * g(x) = \frac{1}{24}\int_{-\infty}^{\infty}e^{-2|x-u|}e^{-3|u|}\,du = \cdots = \frac{1}{20}e^{-2|x|} - \frac{1}{30}e^{-3|x|}. $$
Note that there are other ways to compute the inverse, for example, we could
decompose the original function into partial fractions and invert term-by-term.
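As a numerical cross-check of this worked example (our own script, not part of the notes), we can evaluate the convolution integral directly and compare it with the closed form $\frac{1}{20}e^{-2|x|} - \frac{1}{30}e^{-3|x|}$:

```python
import math

def conv_value(x, L=30.0, n=60000):
    # Midpoint rule for (1/24) * integral of e^{-2|x-u|} e^{-3|u|} du on [-L, L].
    du = 2.0 * L / n
    s = 0.0
    for k in range(n):
        u = -L + (k + 0.5) * du
        s += math.exp(-2.0 * abs(x - u)) * math.exp(-3.0 * abs(u)) * du
    return s / 24.0

def closed_form(x):
    return math.exp(-2.0 * abs(x)) / 20.0 - math.exp(-3.0 * abs(x)) / 30.0

for x in (0.0, 0.5, 1.0):
    print(f"x={x}: numeric={conv_value(x):.6f}, closed form={closed_form(x):.6f}")
```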
Theorem 2.2 (Energy theorem). Suppose $f(x)$ is a real valued function defined over ℝ with Fourier transform $\hat{f}(\omega)$; then we have:
$$ \frac{1}{2\pi}\int_{-\infty}^{\infty}\left|\hat{f}(\omega)\right|^2d\omega = \int_{-\infty}^{\infty}\left[f(x)\right]^2dx. $$
The proof uses the fact that
$$ \mathcal{F}\{[f(-x)]^*\} = [\hat{f}(\omega)]^*. $$
The proof of the mean-value theorem for integrals follows from the regular mean-value theorem for $G$, say, by defining $g = G'$. Geometrically this means that the area under the curve is equal to that of a rectangle with length equal to the interval of integration.
Definition of the Dirac delta-function (impulse function)
Consider the following step-function:
$$ f_k(x) = \begin{cases} k/2, & |x| < 1/k, \\ 0, & \text{otherwise}. \end{cases} $$
As $k$ increases, $f_k(x)$ gets taller and thinner (see Figure 2.1). We define the Dirac delta function to be
$$ \delta(x) = \lim_{k\to\infty}f_k(x), $$
although, of course, this limit doesn't exist in the usual mathematical sense.
Effectively 𝛿(𝑥) is infinite at 𝑥 = 0 and zero at all other values of 𝑥. The key
property however, is that its integral (area under the curve) is one.
Sifting property of the delta function The delta function is most useful in
how it interacts with other functions. Consider
$$ \int_{-\infty}^{\infty}g(x)\delta(x)\,dx, $$
where 𝑔(𝑥) is a continuous function defined over (−∞, ∞). Using our definition
of the delta-function we can rewrite this as
$$ \lim_{k\to\infty}\int_{-\infty}^{\infty}g(x)f_k(x)\,dx = \lim_{k\to\infty}\int_{-1/k}^{1/k}g(x)\frac{k}{2}\,dx = \lim_{k\to\infty}\frac{k}{2}\,g(\bar{x})\left(\frac{1}{k} - \left(-\frac{1}{k}\right)\right), $$
for some $\bar{x}$ in $[-1/k, 1/k]$, using the mean-value theorem for integrals. Clearly, as $k \to \infty$, we must have $\bar{x} \to 0$. The expression above simplifies to
$$ \frac{k}{2}\cdot\frac{2}{k}\,g(0) = g(0). $$
Taking $g(x) = e^{-i\omega x}$ in the sifting property gives $\mathcal{F}\{\delta(x)\} = 1$; that is, the inverse Fourier transform of 1 is $\delta(x)$. From this last result, and using the inversion formula, we see that an alternative representation of the delta function is
$$ \delta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{\pm i\omega x}d\omega, $$
with the ± arising from the observation that 𝛿(𝑥) is an even function of 𝑥 about
𝑥 = 0. If we are prepared to work in terms of delta-functions, we can now take
the Fourier transforms of functions that do not decay as 𝑥 → ±∞.
Chapter 3
Introduction to ordinary
differential equations
Example 3.1. Consider the following ODE for the function $f(x)$:
$$ \frac{d^2f}{dx^2} = 5\left[1 + \left(\frac{df}{dx}\right)^3\right]^{1/2}. $$
Solving an ODE is the task of finding 𝑓(𝑥) such that the ODE is satisfied over
the domain of 𝑥 (e.g. ℝ).
ODEs appear naturally in many areas of science and the humanities. In the following we provide some examples.
$$ m\frac{d^2x}{dt^2} = F\left(t, x, \frac{dx}{dt}\right). $$
This is a second order ODE for the position of the object 𝑥(𝑡).
-Third law: when one body exerts a force on a second body, the second body exerts a force equal in magnitude and opposite in direction on the first body.
Mechanics used to be taught until recently in our first year Mathematics course, as it provides many links to different areas of mathematics. If you have any doubts, look at this video for a very cool counting problem for colliding particles, its very unexpected solution, and its link to mathematics.
In geometry, given the radius of curvature $R(x, y)$, we can find an equation for the curve $y(x)$ using the following ODE given by the definition of the radius of curvature:
$$ R(x, y) = \frac{\left(1 + \left(\frac{dy}{dx}\right)^2\right)^{3/2}}{\frac{d^2y}{dx^2}}. $$
is a general family of solutions that fulfil the ODE. The parameters $\{c_i\}_{i=1}^k$ are the constants of integration and are usually fixed by initial or boundary conditions. In this course, we concern ourselves with methods that allow us to obtain such solutions. Rigorous mathematical results on the existence and uniqueness of such solutions are discussed in the second year.
$$ \frac{dx}{dt} = v. $$
Chapter 4
First and Second Order ODEs
Not all ODEs are analytically solvable. In this section we discuss some types of first and second order ODEs that are analytically solvable and see some examples.
For a separable first order ODE, $\frac{dx}{dt} = F_1(x)F_2(t)$, we can separate the variables and integrate:
$$ \int\frac{dx}{F_1(x)} = \int F_2(t)\,dt + c_1. $$
A linear first order ODE has the form
$$ \frac{dy}{dx} + p(x)y = q(x). $$
Solution This is solved by finding an integrating factor (IF). We look for 𝐼(𝑥)
such that:
$$ I(x)\left[\frac{dy}{dx} + p(x)y\right] = \frac{d[I(x)y]}{dx}. $$
Then, we have
$$ \frac{d[I(x)y]}{dx} = I(x)q(x), $$
$$ \int d[I(x)y] = \int q(x)I(x)\,dx + c_1, $$
$$ y(x) = \frac{1}{I(x)}\left[\int q(x)I(x)\,dx + c_1\right]. $$
To find $I(x)$, we expand the derivative of the product:
$$ \frac{d(Iy)}{dx} = I\frac{dy}{dx} + Ipy, $$
$$ I\frac{dy}{dx} + y\frac{dI}{dx} = I\frac{dy}{dx} + Ipy, $$
so that $\frac{dI}{dx} = Ip$, which is separable:
$$ \int\frac{dI}{I} = \int p(x)\,dx + c'. $$
So we have:
𝐼(𝑥) = 𝐴𝑒∫ 𝑝(𝑥) 𝑑𝑥 ,
where 𝐴 is a new arbitrary constant (of integration).
So, we have the following for the general solution (with $A$ absorbed into $c_1$):
$$ y_{GS}(x) = e^{-\int p(x)dx}\left[\int q(x)e^{\int p(x)dx}dx + c_1\right]. $$
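As an illustration of the integrating factor recipe (a test case of our own choosing, not from the notes), take $p(x) = 1$ and $q(x) = x$, for which $I(x) = e^x$ and the general solution is $y = x - 1 + c_1e^{-x}$; a direct numerical integration agrees with this closed form.

```python
import math

# Test case: dy/dx + y = x, i.e. p(x) = 1, q(x) = x, so I(x) = e^x
# and y(x) = x - 1 + c1 e^{-x}.
def rhs(x, y):
    return x - y

def rk4(x0, y0, x1, steps=1000):
    # Classical 4th order Runge-Kutta integration of dy/dx = rhs(x, y).
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, y + h * k1 / 2)
        k3 = rhs(x + h / 2, y + h * k2 / 2)
        k4 = rhs(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

c1 = 2.0                     # fixed by the initial condition y(0) = 1
exact = lambda x: x - 1 + c1 * math.exp(-x)
y_numeric = rk4(0.0, exact(0.0), 2.0)
print(y_numeric, exact(2.0))   # the two values agree closely
```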
A homogeneous-type first order ODE has the form
$$ \frac{dy}{dx} = F\left(\frac{y}{x}\right). $$
Solution Let $u = y/x$; then we obtain:
$$ \frac{dy}{dx} = u + x\frac{du}{dx}. $$
The ODE in terms of $u(x)$, which is separable, is
$$ u + x\frac{du}{dx} = F(u). $$
Having found the general solution $u_{GS}(x)$ of this ODE, we obtain the general solution of the original ODE as $y_{GS}(x) = x\,u_{GS}(x)$.
For the Bernoulli equation $\frac{dy}{dx} + p(x)y = q(x)y^n$, let $u = y^{1-n}$; then
$$ \frac{du}{dx} = (1-n)y^{-n}\frac{dy}{dx}. $$
Writing the original ODE in terms of 𝑢 we have:
$$ \frac{du}{dx} + (1-n)p(x)u = (1-n)q(x), $$
which is a linear ODE for $u(x)$, so we obtain $u_{GS}(x)$ and then we have
$$ y_{GS} = u_{GS}^{\frac{1}{1-n}}. $$
$$ G\left(x, y, \frac{dy}{dx}, \frac{d^2y}{dx^2}\right) = 0, $$
$$ \frac{d^2y}{dx^2} = F\left(x, y, \frac{dy}{dx}\right). $$
Second order ODEs are common in mechanics, as Newton's second law is such an ODE with time $t$ as the independent variable. They are difficult to solve for general $F$, but there are some special cases that can be solved, as described in the following. The linear case is discussed in the next chapter in detail.
4.2.1 𝐹 only depends on 𝑥
In this case, with $u = \frac{dy}{dx}$, we can integrate directly:
$$ u = \int F(x)\,dx + c_1, \qquad y_{GS} = \int\left[\int F(x)\,dx\right]dx + c_1x + c_2. $$
4.2.2 𝐹 only depends on 𝑥 and 𝑑𝑦/𝑑𝑥
$$ \frac{d^2y}{dx^2} = F\left(x, \frac{dy}{dx}\right) $$
Solution Let $u = \frac{dy}{dx}$; then we have $\frac{du}{dx} = F(x, u)$, which is a first order ODE. If we can obtain the general solution $u_{GS}(x; c_1)$, then we have:
$$ y_{GS} = \int u_{GS}(x; c_1)\,dx + c_2. $$
4.2.3 𝐹 only depends on 𝑦
$$ \frac{d^2y}{dx^2} = F(y) $$
Solution Let $u = \frac{dy}{dx}$; then
$$ \frac{du}{dx} = \frac{du}{dy}\frac{dy}{dx} = u\frac{du}{dy} = \frac{d}{dy}\left(\frac{1}{2}u^2\right) = F(y), $$
which is a first order separable ODE for 𝑢(𝑦). We have:
$$ \frac{1}{2}u^2 = \int F(y)\,dy + c_1 = G(y) + c_1. $$
So we have:
$$ \frac{dy}{dx} = u = \pm\sqrt{2G(y) + 2c_1}, $$
which is a first order separable ODE for 𝑦(𝑥) and can be integrated to obtain
𝑦𝐺𝑆 (𝑥; 𝑐1 , 𝑐2 ) as seen in the following example.
Consider simple harmonic motion, $m\frac{d^2x}{dt^2} = -kx$. Let the velocity be $u = \frac{dx}{dt}$; then we have:
$$ a = \frac{du}{dt} = \frac{d}{dx}\left(\frac{1}{2}u^2\right) = -\frac{kx}{m}. $$
Integrating both sides we obtain:
$$ \frac{u^2}{2} = -\frac{k}{2m}x^2 + c_1. $$
This equation gives us a constant of motion ($E = c_1m$), known as the total energy: the sum of the kinetic energy ($\frac{1}{2}mu^2$) and the potential energy ($\frac{1}{2}kx^2$).
$$ u = \frac{dx}{dt} = \pm\sqrt{\frac{2E - kx^2}{m}} \implies \int\frac{dx}{\pm\sqrt{\frac{2E - kx^2}{m}}} = \int dt, $$
$$ \frac{1}{\sqrt{2E/m}}\int\frac{dx}{\sqrt{1 - \frac{k}{2E}x^2}} = \sqrt{\frac{m}{k}}\sin^{-1}\left(\sqrt{\frac{k}{2E}}\,x\right) = t + c_2. $$
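The conservation of energy derived above can be observed numerically. The sketch below (arbitrary parameter values of our own) integrates $m\,d^2x/dt^2 = -kx$ as a first order system with a classical RK4 scheme and monitors $\frac{1}{2}mu^2 + \frac{1}{2}kx^2$.

```python
import math

# Simple harmonic oscillator m x'' = -k x, written as the first order
# system (x, u) and integrated with RK4; parameters are a test case.
m, k = 2.0, 8.0

def step(x, u, h):
    def f(x, u):
        return u, -k * x / m
    k1x, k1u = f(x, u)
    k2x, k2u = f(x + h * k1x / 2, u + h * k1u / 2)
    k3x, k3u = f(x + h * k2x / 2, u + h * k2u / 2)
    k4x, k4u = f(x + h * k3x, u + h * k3u)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            u + h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6)

def energy(x, u):
    return 0.5 * m * u * u + 0.5 * k * x * x

x, u = 1.0, 0.0
E0 = energy(x, u)
for _ in range(10000):        # integrate up to t = 10
    x, u = step(x, u, 0.001)
print(E0, energy(x, u))       # total energy stays (nearly) constant
```

With these parameters $\omega = \sqrt{k/m} = 2$, so the trajectory should also match $x(t) = \cos(2t)$, which it does to high accuracy.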
4.2.4 𝐹 only depends on 𝑦 and 𝑑𝑦/𝑑𝑥
$$ \frac{d^2y}{dx^2} = F\left(y, \frac{dy}{dx}\right) $$
Let $u = \frac{dy}{dx} \implies \frac{du}{dx} = F(y, u)$. So we have
$$ \frac{du}{dx} = \frac{du}{dy}\frac{dy}{dx} = u\frac{du}{dy} = \frac{d}{dy}\left(\frac{1}{2}u^2\right). $$
Therefore we have the following first order ODE for 𝑢(𝑦) to solve
$$ \frac{d}{dy}\left(\frac{1}{2}u^2\right) = F(y, u). $$
Given 𝑢𝐺𝑆 (𝑦; 𝑐1 ) being a general solution for the above ODE, we have the fol-
lowing first order ODE for 𝑦(𝑥):
$$ \frac{dy}{dx} = u_{GS}(y; c_1). $$
Chapter 5
Linear ODEs
$$ \alpha_k(x)\frac{d^ky}{dx^k} + \alpha_{k-1}(x)\frac{d^{k-1}y}{dx^{k-1}} + \cdots + \alpha_1(x)\frac{dy}{dx} + \alpha_0(x)y = f(x), $$
where 𝛼𝑘 (𝑥),.., 𝛼0 (𝑥) and 𝑓(𝑥) are functions of only the independent variable 𝑥.
The ODE is called homogeneous if 𝑓(𝑥) = 0 and inhomogeneous otherwise.
Some examples of linear ODEs:
• First order ODE
$$ \frac{dy}{dx} + p(x)y = q(x). $$
• Bessel’s equation
$$ x^2\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (x^2 - n^2)y = 0. $$
• Legendre’s equation
$$ (1 - x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + n(n+1)y = 0. $$
Linear Operators
We define the differential operator as $\mathcal{D}[f] \equiv \frac{d}{dx}[f]$.
Linear ODEs are associated with a linear operator defined using the differential operators: $\mathcal{L}[y] \equiv \sum_{i=0}^{k}\alpha_i(x)\mathcal{D}^i[y]$. A linear ODE can thus be simply written as $\mathcal{L}[y] = f(x)$ and a homogeneous ODE as $\mathcal{L}[y] = 0$. Linearity of $\mathcal{L}$ has an important consequence. If we have two solutions $y_1$ and $y_2$ of a homogeneous linear ODE $\mathcal{L}[y] = 0$, then any linear combination of these solutions is also a solution of this ODE, since:
$$ \mathcal{L}[c_1y_1 + c_2y_2] = c_1\mathcal{L}[y_1] + c_2\mathcal{L}[y_2] = 0. $$
Linear independence
A set of functions $\{f_i(x)\}_{i=1}^k$ is said to be linearly independent if the $f_i$'s satisfy the following condition:
$$ \sum_{i=1}^{k}c_if_i(x) = 0 \ \text{ for all } x $$
if and only if $c_1 = c_2 = \cdots = c_k = 0$.
Linear ODEs are easier to solve because of the following important property of
their solutions. This is a basic consequence of linearity of differential operators.
Proposition 5.1. The solutions of the homogeneous linear ODE ℒ[𝑦] = 0 form
a vector space (see MATH40003: Linear Algebra and Groups) of dimension 𝑘,
where 𝑘 is the order of the ODE. Therefore, the general solution of a linear
homogeneous ODE can be written as
$$ y^H_{GS}(x; c_1, \ldots, c_k) = c_1y_1 + c_2y_2 + \ldots + c_ky_k, $$
where the $y_i$ are linearly independent, i.e. the Wronskian satisfies
$$ W\left[\{y_i(x)\}_{i=1}^{k}\right] \neq 0. $$
Indeed, suppose the solutions were linearly dependent:
$$ \exists i \in \{1, \cdots, k\},\ c_i \neq 0, \quad \sum_{i=1}^{k}c_iy_i(x) = 0. $$
Differentiating this relation repeatedly gives
$$ \sum_{i=1}^{k}c_i\frac{dy_i}{dx} = 0, \qquad \vdots \qquad \sum_{i=1}^{k}c_i\frac{d^{k-1}y_i}{dx^{k-1}} = 0, $$
so the columns of the Wronskian matrix would be linearly dependent and $W$ would vanish.
Example 5.1. Show that sin(𝑥) and cos(𝑥) are linearly independent.
$$ \mathbb{W}_{2\times2} = \begin{bmatrix} \sin(x) & \cos(x) \\ \cos(x) & -\sin(x) \end{bmatrix}, $$
then
$$ W(x) = \det\mathbb{W} = -\sin^2(x) - \cos^2(x) = -1 \neq 0, $$
therefore sin(𝑥) and cos(𝑥) are linearly independent.
Note There exist examples in which the Wronskian vanishes without the functions being linearly dependent. An example is given in a quiz in the lectures.
where $\{y_i\}_{i=1}^k$ form a basis of the solution vector space (a set of linearly independent solutions of the homogeneous linear ODE).
2. We obtain any/one solution of the full non-homogeneous ODE, which is
also known as particular integral (𝑦𝑃 𝐼 ):
ℒ[𝑦𝑃 𝐼 ] = 𝑓(𝑥).
Then, for the solution to the full problem, combining the results above and using linearity, we have:
$$ \mathcal{L}[y_{GS}(x; c_1, \cdots, c_k)] = \mathcal{L}[y_{CF} + y_{PI}] = \mathcal{L}[y_{CF}] + \mathcal{L}[y_{PI}] = 0 + f(x) = f(x). $$
So the general solution of the non-homogeneous linear ODE is the sum of the complementary function and a particular integral. As seen in a quiz in the lectures, different choices of particular integral result in the same family of general solutions.
One useful consequence of linearity is that if the RHS of the ODE is a sum of two functions,
$$ \mathcal{L}[y] = f_1(x) + f_2(x), $$
we can break the second step of finding the particular integral into additional steps:
1. Find $y_{CF} = y^H_{GS}(x; c_1, \cdots, c_k)$ such that $\mathcal{L}[y_{CF}] = 0$.
2. Find any solution $y^1_{PI}$ of $\mathcal{L}[y^1_{PI}] = f_1(x)$.
3. Find any solution $y^2_{PI}$ of $\mathcal{L}[y^2_{PI}] = f_2(x)$.
Then, we have $y_{GS} = y_{CF} + y^1_{PI} + y^2_{PI}$.
Linear ODEs with constant coefficients
The general linear ODE is not always analytically solvable. Next year, you will see approximate and numerical methods for solving such ODEs. In the rest of this course, we will focus on the case of linear ODEs with constant coefficients (the $\alpha_i$'s not depending on the independent variable $x$):
$$ \mathcal{L}[y] = \sum_{i=0}^{k}\alpha_i\mathcal{D}^i[y] = f(x). $$
5.2 First order linear ODEs with constant coefficients
For the first order linear ODE with constant coefficients, $\alpha_1\frac{dy}{dx} + \alpha_0y = f(x)$, we can obtain the general solution using the integrating factor $I(x) = e^{\frac{\alpha_0}{\alpha_1}x}$. We obtain
$$ y_{GS} = c_1e^{-\frac{\alpha_0}{\alpha_1}x} + e^{-\frac{\alpha_0}{\alpha_1}x}\int e^{\frac{\alpha_0}{\alpha_1}x}\frac{f(x)}{\alpha_1}\,dx. $$
Alternative method
1. Solve the corresponding homogeneous ODE:
$$ \mathcal{L}[y_{CF}] = \alpha_1\frac{dy}{dx} + \alpha_0y = 0. $$
This is a separable ODE and by integration we obtain:
$$ y_{CF} = y^H_{GS}(x; c_1) = c_1e^{-\frac{\alpha_0}{\alpha_1}x}. $$
2. Find a particular integral. For example, for $f(x) = x$, try the polynomial ansatz $y_{PI} = Ax^2 + Bx + C$ and match powers of $x$:
$$ x^2: \ \alpha_0A = 0 \ \Rightarrow\ A = 0; $$
$$ x^1: \ 2\alpha_1A + \alpha_0B = 1 \ \Rightarrow\ B = \frac{1}{\alpha_0}; $$
$$ x^0: \ \alpha_1B + \alpha_0C = 0 \ \Rightarrow\ C = -\frac{\alpha_1}{\alpha_0^2}, $$
which gives us the same general solution obtained using the first method:
$$ y_{GS} = y_{CF} + y_{PI} = c_1e^{-\frac{\alpha_0}{\alpha_1}x} + \left[\frac{x}{\alpha_0} - \frac{\alpha_1}{\alpha_0^2}\right]. $$
Similarly, for $f(x) = e^{bx}$ with $b \neq -\frac{\alpha_0}{\alpha_1}$, the ansatz $y_{PI} = Ae^{bx}$ gives
$$ y_{GS} = y_{CF} + y_{PI} = c_1e^{-\frac{\alpha_0}{\alpha_1}x} + \frac{1}{\alpha_1b + \alpha_0}e^{bx}. $$
In the resonant case $b = -\frac{\alpha_0}{\alpha_1}$ this ansatz fails, and we instead use the variation of parameters ansatz $y_{PI} = A(x)e^{bx}$.
5.3 Second order linear ODEs with constant coefficients
$$ \mathcal{L}[y] = \alpha_2\frac{d^2y}{dx^2} + \alpha_1\frac{dy}{dx} + \alpha_0y = 0. $$
Using the ansatz $y_H = e^{\lambda x}$ gives the characteristic equation $\alpha_2\lambda^2 + \alpha_1\lambda + \alpha_0 = 0$, with roots $\lambda_1$ and $\lambda_2$.
So, we have the following two candidate solutions: $y^H_1 = e^{\lambda_1x}$ and $y^H_2 = e^{\lambda_2x}$. For these solutions to form a basis for the solution space of the homogeneous linear ODE, they should be linearly independent. We evaluate the Wronskian:
$$ W(x) = \det\begin{bmatrix} e^{\lambda_1x} & e^{\lambda_2x} \\ \lambda_1e^{\lambda_1x} & \lambda_2e^{\lambda_2x} \end{bmatrix} = e^{(\lambda_1+\lambda_2)x}(\lambda_2 - \lambda_1). $$
So, if the roots of the characteristic equation are distinct ($\lambda_1 \neq \lambda_2$), then $W(x) \neq 0$ and the solutions form a linearly independent set. So we have:
$$ y_{CF} = y^H_{GS}(x; c_1, c_2) = c_1e^{\lambda_1x} + c_2e^{\lambda_2x}. $$
For the case of $\lambda_1 = \lambda_2 = -\frac{\alpha_1}{2\alpha_2}$, we have $y_1 = e^{\lambda_1x}$; what about the second solution $y_2$? We can try the ansatz $y_2 = A(x)y_1(x) = A(x)e^{\lambda_1x}$. This is similar to the method of variation of parameters. In the context of 2nd order linear ODEs, when we have one of the solutions and are looking for the second, this method is called the method of reduction of order. Plugging this ansatz into the ODE we obtain:
$$ \alpha_0\left[Ay_1\right] + \alpha_1\left[\frac{dA}{dx}y_1 + A\frac{dy_1}{dx}\right] + \alpha_2\left[\frac{d^2A}{dx^2}y_1 + 2\frac{dA}{dx}\frac{dy_1}{dx} + A\frac{d^2y_1}{dx^2}\right] = 0. $$
This results in the following simple ODE and solution for $A(x)$:
$$ \frac{d^2A}{dx^2} = 0 \ \Rightarrow\ A(x) = B_1x + B_2 \ \Rightarrow\ y_2 = (B_1x + B_2)e^{\lambda_1x}. $$
We note that the $y_2$ we have obtained here contains $y_1$, so we can choose $y_2 = xe^{\lambda_1x}$. Testing the linear independence of these solutions, we evaluate the Wronskian:
$$ W(x) = \det\begin{bmatrix} e^{\lambda_1x} & xe^{\lambda_1x} \\ \lambda_1e^{\lambda_1x} & e^{\lambda_1x} + \lambda_1xe^{\lambda_1x} \end{bmatrix} = e^{2\lambda_1x} \neq 0. $$
So $y_1$ and $y_2$ are linearly independent and can span the solution space. So we have the following general solution for the case where the characteristic equation has the repeated root $\lambda_1$:
$$ y_{CF} = y^H_{GS}(x; c_1, c_2) = c_1e^{\lambda_1x} + c_2xe^{\lambda_1x}. $$
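The repeated-root solution can be verified numerically on a concrete instance. Assuming the test ODE $y'' + 2y' + y = 0$ (our own choice, with double root $\lambda = -1$), the residual of $y = (c_1 + c_2x)e^{-x}$ computed by finite differences is numerically zero:

```python
import math

# Check numerically that y = (c1 + c2 x) e^{-x} solves y'' + 2y' + y = 0,
# whose characteristic equation (lambda + 1)^2 = 0 has the repeated root -1.
c1, c2 = 0.7, -1.3   # arbitrary constants of integration

def y(x):
    return (c1 + c2 * x) * math.exp(-x)

def residual(x, h=1e-4):
    # Central finite differences for y'' and y'.
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return ypp + 2 * yp + y(x)

print(max(abs(residual(0.5 * x)) for x in range(-4, 5)))  # ~ 0 up to rounding
```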
In general, the roots of the characteristic equation are
$$ \lambda_{1,2} = -\frac{\alpha_1}{2\alpha_2} \pm \sqrt{\frac{\alpha_1^2 - 4\alpha_0\alpha_2}{4\alpha_2^2}}. $$
1. $\alpha_1^2 - 4\alpha_0\alpha_2 > 0 \implies \lambda_{1,2} \in \mathbb{R}$, distinct:
• If $0 < \lambda_2 < \lambda_1$, as $x \to \infty$, $y_{CF} \to e^{\lambda_1x} \to \infty$.
• If $\lambda_2 < \lambda_1 < 0$, as $x \to \infty$, $y_{CF} \to e^{\lambda_1x} \to 0$.
2. $\alpha_1^2 - 4\alpha_0\alpha_2 < 0 \implies \lambda_{1,2} \in \mathbb{C}$:
$$ \left|\frac{\alpha_1^2 - 4\alpha_0\alpha_2}{4\alpha_2^2}\right| = \omega^2 \implies \lambda_{1,2} = -\frac{\alpha_1}{2\alpha_2} \pm i\omega. $$
So, we have for the general solution:
$$ y_{CF} = e^{-\frac{\alpha_1}{2\alpha_2}x}\left[c_1e^{i\omega x} + c_2e^{-i\omega x}\right] = e^{-\frac{\alpha_1}{2\alpha_2}x}\left[(c_1 + c_2)\cos\omega x + i(c_1 - c_2)\sin\omega x\right]. $$
If the ODE has real coefficients, the solution satisfies $y_{CF} \in \mathbb{R}$. Therefore, choosing $c_1$ and $c_2$ to be complex conjugates, we obtain $c_1' = c_1 + c_2 \in \mathbb{R}$ and $c_2' = i(c_1 - c_2) \in \mathbb{R}$. So we can write the general solution of the homogeneous ODE with complex roots in the following forms:
$$ y_{CF} = e^{-\frac{\alpha_1}{2\alpha_2}x}\left[c_1'\cos\omega x + c_2'\sin\omega x\right] = e^{-\frac{\alpha_1}{2\alpha_2}x}A\cos(\omega x - \phi), $$
where in the latter we have used the following change of constants of integration: $c_1' = A\cos\phi$ and $c_2' = A\sin\phi$. Figure 5.1 shows possible behaviors of $y_{CF}$ depending on the value of the parameter $d = \frac{\alpha_1}{2\alpha_2}$.
Figure 5.1: 𝑦𝐶𝐹 for 𝑑 > 0 (red), 𝑑 < 0 (green) and 𝑑 = 0 (blue); all three
solutions have the same phase but for clarity of visualisation different amplitudes
(𝐴) are used in each case.
Example. Solve
$$ \mathcal{L}[y] = \frac{d^2y}{dx^2} + 4\frac{dy}{dx} + 4y = e^{-2x}. $$
• First step: The characteristic equation is $\lambda^2 + 4\lambda + 4 = 0$, which has the repeated root $\lambda_1 = \lambda_2 = -2$, so
$$ y_{CF} = c_1e^{-2x} + c_2xe^{-2x}. $$
• Second step: Finding a particular integral, $\mathcal{L}[y_{PI}] = f(x)$. The 1st try, the ansatz $y_{PI} = Ae^{-2x}$, does not work. The 2nd try, the ansatz $y_{PI} = Axe^{-2x}$, also does not work. Let's try the ansatz $y_{PI} = A(x)e^{-2x}$ using the method of variation of parameters. By plugging into the ODE we obtain:
$$ \frac{d^2A}{dx^2} = 1 \ \Rightarrow\ A = \frac{x^2}{2} + B_1x + B_2. $$
So, we obtain the general solution:
$$ y_{GS} = y_{CF} + y_{PI} = c_1e^{-2x} + c_2xe^{-2x} + \frac{x^2}{2}e^{-2x}. $$
Instead of using the method of variation of parameters, we could have guessed the ansatz $y_{PI} = Bx^2e^{-2x}$ directly and obtained the value of $B$ using the method of undetermined coefficients.
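The particular integral can be verified numerically. Assuming the underlying repeated-root ODE $y'' + 4y' + 4y = e^{-2x}$ (the case in which both $Ae^{-2x}$ and $Axe^{-2x}$ fail), a finite-difference residual for $y_{PI} = \frac{x^2}{2}e^{-2x}$ is numerically zero:

```python
import math

# Numerical check that y_PI = (x^2 / 2) e^{-2x} satisfies
# y'' + 4 y' + 4 y = e^{-2x} (the repeated-root case lambda = -2).
def y_pi(x):
    return 0.5 * x * x * math.exp(-2.0 * x)

def residual(x, h=1e-4):
    ypp = (y_pi(x + h) - 2 * y_pi(x) + y_pi(x - h)) / (h * h)
    yp = (y_pi(x + h) - y_pi(x - h)) / (2 * h)
    return ypp + 4 * yp + 4 * y_pi(x) - math.exp(-2.0 * x)

print(max(abs(residual(0.1 * i)) for i in range(1, 20)))  # ~ 0 up to rounding
```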
- First step: Solving the homogeneous problem $\mathcal{L}[y_H] = 0$. In this case the roots are complex, $\lambda_{1,2} = 1 \pm i$, so we have $y_{CF} = e^x(c_1\cos x + c_2\sin x)$. To find a particular integral we write the RHS as
$$ f(x) = e^x\sin x = \frac{e^{(1+i)x}}{2i} - \frac{e^{(1-i)x}}{2i}. $$
So, we can use the following two ansatzes to find particular integrals using the method of undetermined coefficients: $y^1_{PI} = Axe^{(1+i)x}$ and $y^2_{PI} = Bxe^{(1-i)x}$. By plugging into the ODE with the first and second parts of $f(x)$, one finds the values of $A$ and $B$ respectively (left as an exercise for you). Then the general solution is:
$$ y_{GS} = y_{CF} + y^1_{PI} + y^2_{PI}. $$
5.4 𝑘th order Linear ODEs with constant coefficients
As before, the general solution is
$$ y_{GS}(x; c_1, \cdots, c_k) = y_{CF} + y_{PI} = y^H_{GS}(x; c_1, \cdots, c_k) + y_{PI}(x). $$
First step: Solving the Homogeneous problem ℒ[𝑦𝐻 ] = 0
We can try the ansatz 𝑦𝐻 = 𝑒𝜆𝑥 :
$$ \mathcal{L}[e^{\lambda x}] = e^{\lambda x}\sum_{i=0}^{k}\alpha_i\lambda^i = 0 \ \Rightarrow\ \sum_{i=0}^{k}\alpha_i\lambda^i = 0. $$
This is the characteristic equation of the $k$th order linear ODE. It has $k$ roots that can always be obtained numerically (in the absence of an analytical solution).
• Case 1: 𝑘 roots of the characteristic polynomial are distinct:
The solutions 𝐵 = {𝑒𝜆𝑖 𝑥 }𝑘𝑖=1 can be shown to be linearly independent using the
Wronskian:
$$ W(x) = \det\mathbb{W}(x) = e^{\sum_{i=1}^{k}\lambda_ix}\begin{vmatrix} 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \cdots & \lambda_k \\ \vdots & \vdots & & \vdots \\ \lambda_1^{k-1} & \lambda_2^{k-1} & \cdots & \lambda_k^{k-1} \end{vmatrix} = e^{\sum_{i=1}^{k}\lambda_ix}\prod_{1\le i<j\le k}(\lambda_i - \lambda_j) \neq 0 \quad \text{(Vandermonde determinant)}. $$
• Case 2: Not all of the 𝑘 roots are distinct. Below, we consider the
particular case of having 𝑑 repeated roots and 𝑘 − 𝑑 distinct roots.
5.5 Euler-Cauchy equation
The Euler-Cauchy equation is the linear ODE
$$ \mathcal{L}[y] = \beta_kx^k\frac{d^ky}{dx^k} + \beta_{k-1}x^{k-1}\frac{d^{k-1}y}{dx^{k-1}} + \cdots + \beta_1x\frac{dy}{dx} + \beta_0y = f(x). $$
Using the change of variable 𝑥 = 𝑒𝑧 , the Euler-Cauchy equation can be trans-
formed into a linear ODE with constant coefficients.
Example 5.7. Solve the following ODE: $x^2\frac{d^2y}{dx^2} + 3x\frac{dy}{dx} + y = x^3$.
With $x = e^z$ we have $\frac{dy}{dx} = \frac{dy}{dz}\frac{dz}{dx} = \frac{1}{x}\frac{dy}{dz}$, and
$$ \frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d}{dz}\left(\frac{dy}{dx}\right)\frac{dz}{dx} = \frac{1}{x^2}\left[\frac{d^2y}{dz^2} - \frac{dy}{dz}\right]. $$
So, in terms of the new independent variable we have the following linear ODE
with constant coefficients.
$$ \frac{d^2y}{dz^2} + 2\frac{dy}{dz} + y = e^{3z}. $$
5.6 Using Fourier Transforms to solve linear ODEs
Example 5.8. Find a solution for the following ODE, known as the Airy equation or the Stokes equation, which arises in different areas of physics. Assume $\lim_{x\to\pm\infty}y(x) = 0$:
$$ \frac{d^2y}{dx^2} - xy = 0. $$
This is a linear 2nd order ODE with non-constant coefficients, and so far we have not seen a method for solving it. Note that this ODE is also not one of the types discussed in Section 4.2. Our strategy is to take the Fourier transform of this ODE and see if we can solve for the Fourier transform. Using the properties in Section 2.1, we obtain:
$$ -\omega^2\hat{y}(\omega) - i\frac{d\hat{y}(\omega)}{d\omega} = 0. $$
This is a separable first order ODE for $\hat{y}(\omega)$, with solution $\hat{y}(\omega) = Ce^{i\omega^3/3}$; inverting this Fourier transform gives (for a suitable choice of $C$) the Airy function $\mathrm{Ai}(x)$.
Chapter 6
Introduction to Systems of ODEs
In general, a system of ODEs for $n$ functions $y_1(x), \ldots, y_n(x)$ takes the implicit form
$$ G_1\left(x, y_1, y_2, \cdots, y_n, \frac{dy_1}{dx}, \cdots, \frac{dy_n}{dx}, \cdots, \frac{d^{k_1}y_1}{dx^{k_1}}, \cdots, \frac{d^{k_n}y_n}{dx^{k_n}}\right) = 0. $$
These models can be used to predict the dynamics of predator and prey systems, such as rabbits ($x(t)$) and foxes ($y(t)$). A classic model is the Lotka-Volterra model (1925/26), which can exhibit periodic solutions in which the predator and prey populations alternately rise and fall.
$$ \frac{dx}{dt} = ax - bxy, $$
$$ \frac{dy}{dt} = dxy - cy. $$
The chemical rate equations for a set of chemical reactions. For example, con-
sider the reversible binary reaction 𝐴 + 𝐵 ⇌ 𝐶 with forward rate of 𝑘1 and
backward rate of 𝑘2 . We have the following rate equations for the concentra-
tions [𝐴], [𝐵] and [𝐶].
$$ \frac{d[A]}{dt} = \frac{d[B]}{dt} = k_2[C] - k_1[A][B], $$
$$ \frac{d[C]}{dt} = -\frac{d[A]}{dt} = -k_2[C] + k_1[A][B]. $$
These kind of equations are used in mathematical modelling of biochemical
reaction networks in systems and synthetic biology.
Using Newton's second law for each mass ($F = ma$) and Hooke's law ($F = -k\Delta x$) for the springs, given the relaxed lengths of each spring ($l_1$ to $l_4$), and assuming the distance between the walls is the sum of the relaxed lengths of the springs, we have the following system of ODEs:
$$ \frac{d^2x_1}{dt^2} = -k_1(x_1 - l_1) + k_2(x_2 - x_1 - l_2), $$
$$ \frac{d^2x_2}{dt^2} = -k_2(x_2 - x_1 - l_2) + k_3(x_3 - x_2 - l_3), $$
$$ \frac{d^2x_3}{dt^2} = -k_3(x_3 - x_2 - l_3) + k_4(l_1 + l_2 + l_3 - x_3). $$
$$ \frac{dS}{dt} = -\beta SI, $$
$$ \frac{dI}{dt} = \beta SI - \gamma I, $$
$$ \frac{dR}{dt} = \gamma I. $$
This model and its variations are the basis of much of the mathematical modelling that has been performed to predict the effect of different interventions in the world-wide Covid-19 pandemic. If you would like to learn a bit more about the SIR ODE system above, check out this video. The SIR model can also be modelled using a so-called agent-based stochastic approach, where the individuals and their random interactions are followed explicitly. You can check out this interesting video from the 3blue1brown series to see some cool explorations of this approach to SIR models.
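The SIR system above is easy to explore numerically. The sketch below (our own parameter choices) integrates it with RK4 and checks two structural facts: $S + I + R$ is conserved, and an epidemic moves mass from $S$ to $R$.

```python
# Forward simulation of the SIR model with a simple RK4 integrator;
# beta, gamma and the initial state are an arbitrary illustrative choice.
beta, gamma = 0.5, 0.1

def deriv(s, i, r):
    return -beta * s * i, beta * s * i - gamma * i, gamma * i

def step(s, i, r, h):
    k1 = deriv(s, i, r)
    k2 = deriv(s + h * k1[0] / 2, i + h * k1[1] / 2, r + h * k1[2] / 2)
    k3 = deriv(s + h * k2[0] / 2, i + h * k2[1] / 2, r + h * k2[2] / 2)
    k4 = deriv(s + h * k3[0], i + h * k3[1], r + h * k3[2])
    return tuple(v + h * (p + 2 * q + 2 * w + z) / 6
                 for v, p, q, w, z in zip((s, i, r), k1, k2, k3, k4))

s, i, r = 0.99, 0.01, 0.0
for _ in range(2000):          # integrate up to t = 100
    s, i, r = step(s, i, r, 0.05)
print(round(s + i + r, 6))     # total population is conserved: 1.0
print(s < 0.99 and r > 0.0)    # an epidemic has passed through: True
```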
Any such system can be rewritten as a system of first order ODEs:
$$ \frac{dy_1}{dx} = F_1(x, y_1, y_2, \cdots, y_n), $$
$$ \vdots $$
$$ \frac{dy_n}{dx} = F_n(x, y_1, y_2, \cdots, y_n). $$
Example 6.5 (Turning a higher order ODE into a system of 1st order ODEs). The following second order ODE is the equation of motion of a damped harmonic oscillator: a mass $m$ is attached to an ideal spring which follows Hooke's law ($F_S = -kx$, where $x$ is the position of the mass measured from the spring's relaxed position), experiences a damping friction force proportional and opposite in direction to its velocity, $F_D = -\eta\frac{dx}{dt}$, and is acted on by a driving force $F(t)$.
$$ m\frac{d^2x}{dt^2} + \eta\frac{dx}{dt} + kx = F(t). $$
By defining the new variable $u = dx/dt$ as the velocity of the mass $m$, we can turn this second order ODE into a system of two first order ODEs:
$$ \frac{dx}{dt} = u, $$
$$ \frac{du}{dt} = \frac{1}{m}\left[F(t) - \eta u - kx\right]. $$
We can use a general vector notation to write systems of 1st order ODEs as
$$ \frac{d\vec{y}_{n\times1}}{dt} = \vec{F}_{n\times1}(t, \vec{y}_{n\times1}), $$
where 𝛼𝑖𝑗 ∈ ℝ are the constant coefficients forming the matrix 𝐴𝑛×𝑛 . We can
write the system of linear ODEs in matrix form:
$$ \begin{bmatrix} \frac{dy_1}{dt} \\ \frac{dy_2}{dt} \\ \vdots \\ \frac{dy_n}{dt} \end{bmatrix} = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \dots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \dots & \alpha_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \dots & \alpha_{nn} \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} g_1(t) \\ g_2(t) \\ \vdots \\ g_n(t) \end{bmatrix}. $$
Now if we define the linear operator $\mathcal{L}[\vec{y}] = \left[\frac{d}{dt} - A\right]\vec{y}$, then we can write the system of linear first order ODEs as
$$ \mathcal{L}[\vec{y}] = \vec{g}(t). $$
𝑑
Since both matrix 𝐴 and derivative 𝑑𝑡 are linear operators, the operator asso-
ciated to systems of linear ODEs ℒ is also a linear operator. By linearity we
have:
ℒ[𝜆1 𝑦1⃗ + 𝜆2 𝑦2⃗ ] = 𝜆1 ℒ[𝑦1⃗ ] + 𝜆2 ℒ[𝑦2⃗ ].
Therefore, the solutions of the homogeneous system of $n$ linear ODEs $\mathcal{L}[\vec{y}_H] = 0$ form a vector space of dimension $n$, and a set of linearly independent solutions $B = \{\vec{y}_i\}_{i=1}^n$ forms a basis for this space. Therefore, similar to linear ODEs, the general solution can be written as
$$ \vec{y}^{\,H}_{GS} = \sum_{i=1}^{n}c_i\vec{y}_i, $$
and for the full non-homogeneous system
$$ \vec{y}_{GS}(t; c_1, c_2, \cdots, c_n) = \vec{y}_{CF} + \vec{y}_{PI}. $$
First step: solving the homogeneous problem,
$$ \mathcal{L}[\vec{y}_H] = 0 \implies \frac{d\vec{y}_H}{dt} = A\vec{y}_H. $$
First, we consider the case where the matrix $A$ has $n$ distinct eigenvalues and is therefore diagonalizable.
That means that there exists a matrix $V$, whose columns are the eigenvectors of $A$, such that:
$$ V^{-1}AV = \Lambda = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \lambda_n \end{bmatrix}. $$
Defining $\vec{z} = V^{-1}\vec{y}_H$, we obtain
$$ \frac{d\vec{z}}{dt} = \Lambda\vec{z}. $$
The $i$th row of this equation gives us $\frac{dz_i}{dt} = \lambda_iz_i$, which can be solved, so we obtain:
$$ \vec{z} = \begin{bmatrix} c_1e^{\lambda_1t} \\ c_2e^{\lambda_2t} \\ \vdots \\ c_ne^{\lambda_nt} \end{bmatrix} \implies \vec{y}_H = V\vec{z} = \begin{bmatrix} \vdots & \vdots & & \vdots \\ \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \\ \vdots & \vdots & & \vdots \end{bmatrix}\begin{bmatrix} c_1e^{\lambda_1t} \\ c_2e^{\lambda_2t} \\ \vdots \\ c_ne^{\lambda_nt} \end{bmatrix}. $$
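The diagonalisation recipe can be sketched in code for $n = 2$ (all helper names are our own; the matrix is an arbitrary test case with distinct real eigenvalues) and compared against direct numerical integration of the system:

```python
import math

# Solve dy/dt = A y for a 2x2 matrix with distinct real eigenvalues by
# diagonalisation, and compare with direct RK4 integration.
A = [[-4.0, -3.0], [2.0, 3.0]]

# Eigenvalues from lambda^2 - tr*lambda + det = 0.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

def eigvec(lam):
    # An eigenvector of the 2x2 matrix A for eigenvalue lam.
    if abs(A[0][1]) > 1e-12:
        return [A[0][1], lam - A[0][0]]
    return [lam - A[1][1], A[1][0]]

v1, v2 = eigvec(lam1), eigvec(lam2)

def solve(y0, t):
    # Write y0 = c1 v1 + c2 v2, then y(t) = c1 e^{lam1 t} v1 + c2 e^{lam2 t} v2.
    d = v1[0] * v2[1] - v2[0] * v1[1]
    c1 = (y0[0] * v2[1] - v2[0] * y0[1]) / d
    c2 = (v1[0] * y0[1] - y0[0] * v1[1]) / d
    return [c1 * math.exp(lam1 * t) * v1[k] + c2 * math.exp(lam2 * t) * v2[k]
            for k in range(2)]

def rk4(y0, t, steps=4000):
    h = t / steps
    y = list(y0)
    f = lambda y: [A[0][0] * y[0] + A[0][1] * y[1],
                   A[1][0] * y[0] + A[1][1] * y[1]]
    for _ in range(steps):
        k1 = f(y)
        k2 = f([y[j] + h * k1[j] / 2 for j in range(2)])
        k3 = f([y[j] + h * k2[j] / 2 for j in range(2)])
        k4 = f([y[j] + h * k3[j] for j in range(2)])
        y = [y[j] + h * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j]) / 6
             for j in range(2)]
    return y

y0, t = [1.0, 1.0], 1.0
print(solve(y0, t), rk4(y0, t))   # the two answers agree closely
```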
Second step: find any particular integral satisfying
$$ \mathcal{L}[\vec{y}_{PI}] = \vec{g}(t). $$
We will use an ansatz and the methods of undetermined coefficients and variation of parameters, as done for the linear ODEs.
Example 6.6. Solve the system
$$ \frac{dx}{dt} = -4x - 3y - 5, $$
$$ \frac{dy}{dt} = 2x + 3y - 2. $$
The 1st step is to solve the homogeneous problem. The characteristic equation of the corresponding matrix $A = \begin{bmatrix} -4 & -3 \\ 2 & 3 \end{bmatrix}$ is
$$ \lambda^2 + \lambda - 6 = 0 \implies \lambda_1 = 2, \ \lambda_2 = -3. $$
We have the corresponding eigenvectors:
$$ \vec{v}_1 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, \qquad \vec{v}_2 = \begin{bmatrix} 3 \\ -1 \end{bmatrix}. $$
$$ \vec{y}_{CF} = \vec{y}^{\,H}_{GS}(t; c_1, c_2) = c_1e^{2t}\begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_2e^{-3t}\begin{bmatrix} 3 \\ -1 \end{bmatrix}. $$
The 2nd step is to find any particular integral $\vec{y}_{PI}(t)$ that satisfies:
$$ \mathcal{L}[\vec{y}_{PI}] = \vec{g}(t) = \begin{bmatrix} -5 \\ -2 \end{bmatrix}. $$
We use the ansatz:
$$ \vec{y}_{PI} = \begin{bmatrix} a \\ b \end{bmatrix}. $$
By plugging the ansatz into the ODE, we obtain the undetermined coefficients $a$ and $b$:
$$ \vec{y}_{PI} = \begin{bmatrix} a \\ b \end{bmatrix} = A^{-1}\begin{bmatrix} 5 \\ 2 \end{bmatrix} = -\frac{1}{6}\begin{bmatrix} 3 & 3 \\ -2 & -4 \end{bmatrix}\begin{bmatrix} 5 \\ 2 \end{bmatrix} = \begin{bmatrix} -7/2 \\ 3 \end{bmatrix}. $$
So we have:
$$\vec{y}_{GS}(t; c_1, c_2) = \vec{y}_{CF} + \vec{y}_{PI} = c_1 e^{2t}\begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_2 e^{-3t}\begin{bmatrix} 3 \\ -1 \end{bmatrix} + \begin{bmatrix} -7/2 \\ 3 \end{bmatrix}.$$
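As a quick numerical sanity check of this worked example, the following Python sketch (not part of the notes) verifies the particular integral, the eigenvalues, and that the general solution satisfies the ODE:

```python
import numpy as np

# The worked example dx/dt = -4x - 3y - 5, dy/dt = 2x + 3y - 2.
A = np.array([[-4.0, -3.0], [2.0, 3.0]])
g = np.array([-5.0, -2.0])

# Particular integral: the constant vector solving A y + g = 0.
y_PI = np.linalg.solve(A, -g)           # expect [-7/2, 3]

# Eigenvalues/eigenvectors give the complementary function.
eigvals, eigvecs = np.linalg.eig(A)     # expect eigenvalues 2 and -3

def y_GS(t, c1=1.0, c2=1.0):
    """General solution y_CF + y_PI for chosen constants c1, c2."""
    v1 = np.array([1.0, -2.0])          # eigenvector for lambda = 2
    v2 = np.array([3.0, -1.0])          # eigenvector for lambda = -3
    return c1 * np.exp(2 * t) * v1 + c2 * np.exp(-3 * t) * v2 + y_PI

# Check that y_GS satisfies the ODE, via a central finite difference.
t, h = 0.3, 1e-6
lhs = (y_GS(t + h) - y_GS(t - h)) / (2 * h)
rhs = A @ y_GS(t) + g
```

The finite-difference check confirms $\frac{d\vec{y}}{dt} = A\vec{y} + \vec{g}$ holds for the claimed solution.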
Example 6.7. Solve the problem of the damped harmonic spring with zero forcing:
$$m\frac{d^2x}{dt^2} + \eta\frac{dx}{dt} + kx = 0.$$
This is a second order linear ODE and we can solve it using the methods discussed in the last chapter. Using the ansatz $e^{\lambda t}$, we obtain the following characteristic equation:
$$m\lambda^2 + \eta\lambda + k = 0 \implies \lambda_{1,2} = \frac{-\eta \pm \sqrt{\eta^2 - 4km}}{2m}.$$
If the roots are distinct we obtain $x(t) = c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t}$.
Alternatively, we can transform the second order ODE to a system of first order ODEs as seen before:
$$\frac{dx}{dt} = u, \qquad \frac{du}{dt} = -\frac{\eta}{m}u - \frac{k}{m}x.$$
We obtain the same $\lambda_{1,2}$ as above for the eigenvalues of the corresponding matrix $A$ for this system of ODEs. For the eigenvectors ($\vec{v}_1$ and $\vec{v}_2$) of $A$, we have:
$$\vec{v}_1: \quad A\vec{v}_1 = \lambda_1\vec{v}_1 \implies \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{\eta}{m} \end{bmatrix}\begin{bmatrix} v_{1x} \\ v_{1u} \end{bmatrix} = \lambda_1\begin{bmatrix} v_{1x} \\ v_{1u} \end{bmatrix},$$
$$v_{1u} = \lambda_1 v_{1x} \implies \vec{v}_1 = \begin{bmatrix} 1 \\ \lambda_1 \end{bmatrix}.$$
Similarly, we obtain for the second eigenvector:
$$\vec{v}_2: \quad A\vec{v}_2 = \lambda_2\vec{v}_2 \implies \vec{v}_2 = \begin{bmatrix} 1 \\ \lambda_2 \end{bmatrix}.$$
The general solution is then
$$\vec{y}_{GS} = \begin{bmatrix} x_{GS} \\ u_{GS} \end{bmatrix} = c_1 e^{\lambda_1 t}\begin{bmatrix} 1 \\ \lambda_1 \end{bmatrix} + c_2 e^{\lambda_2 t}\begin{bmatrix} 1 \\ \lambda_2 \end{bmatrix},$$
which gives us the same general solution for $x_{GS}$ as obtained using the previous method.
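The equivalence of the two methods can be checked numerically; in this Python sketch the parameter values $m$, $\eta$, $k$ are illustrative assumptions:

```python
import cmath
import numpy as np

# Check that the eigenvalues of the companion matrix A = [[0, 1], [-k/m, -eta/m]]
# agree with the roots of the characteristic polynomial m*lam**2 + eta*lam + k = 0.
m, eta, k = 1.0, 1.0, 2.0               # sample values; underdamped (eta**2 < 4*k*m)

disc = cmath.sqrt(eta**2 - 4 * k * m)
roots = sorted([(-eta + disc) / (2 * m), (-eta - disc) / (2 * m)],
               key=lambda z: z.imag)

A = np.array([[0.0, 1.0], [-k / m, -eta / m]])
eigs = sorted(np.linalg.eigvals(A), key=lambda z: z.imag)
```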
When $A$ has repeated eigenvalues

Case 1: $A$ is still diagonalizable (it has $n$ linearly independent eigenvectors). Then we can still use the method described. For example, for $n = 2$ we have:
$$A = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \implies \vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \vec{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
So, we have
$$\vec{y}_{CF} = c_1 e^{\lambda t}\begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 e^{\lambda t}\begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
Example 6.8.
$$\frac{d\vec{y}}{dt} = A\vec{y}; \qquad A = \begin{bmatrix} 1 & -1 \\ 1 & 3 \end{bmatrix}.$$
The characteristic equation gives a repeated eigenvalue:
$$(1-\lambda)(3-\lambda) + 1 = 0 \implies \lambda_1 = \lambda_2 = 2.$$
For the eigenvectors we have
$$A\vec{v}_1 = \lambda_1\vec{v}_1 \implies \begin{bmatrix} 1 & -1 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} v_{1x} \\ v_{1y} \end{bmatrix} = 2\begin{bmatrix} v_{1x} \\ v_{1y} \end{bmatrix},$$
which gives only one independent eigenvector, $\vec{v}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$, so $A$ is not diagonalizable. Instead, we look for a second vector $\vec{w}_2 = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$ and form the matrix
$$W = \begin{bmatrix} 1 & \alpha \\ -1 & \beta \end{bmatrix}.$$
So that:
$$W^{-1} A W = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix} = J,$$
where $J$ is the Jordan normal form for a 2 by 2 matrix. We have:
$$AW = WJ \implies \begin{bmatrix} 1 & -1 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} 1 & \alpha \\ -1 & \beta \end{bmatrix} = \begin{bmatrix} 1 & \alpha \\ -1 & \beta \end{bmatrix}\begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}.$$
The second column gives
$$\begin{aligned} \alpha - \beta &= 1 + 2\alpha \\ \alpha + 3\beta &= -1 + 2\beta \end{aligned} \implies \alpha + \beta = -1 \implies \text{(choosing } \alpha = 1\text{)} \quad \vec{w}_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix},$$
giving us
$$W = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix},$$
and we can check that $W^{-1} A W = J$. The Jordan normal form allows us to solve the non-diagonalizable systems of ODEs:
$$\frac{d\vec{y}}{dt} = A\vec{y} \implies W^{-1}\frac{d\vec{y}}{dt} = \left[W^{-1} A W\right] W^{-1}\vec{y}.$$
Letting $\vec{z} = W^{-1}\vec{y}$, we obtain $\frac{d\vec{z}}{dt} = J\vec{z}$, so we have:
$$\frac{dz_1}{dt} = 2z_1 + z_2, \qquad \frac{dz_2}{dt} = 2z_2.$$
We can solve the second ODE to obtain $z_2 = c_2 e^{2t}$, and we can then use this solution to solve for $z_1$ from the first equation above:
$$z_1 = c_1 e^{2t} + c_2 t e^{2t}.$$
So in vector form:
$$\vec{z}_{GS} = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} c_1 e^{2t} + c_2 t e^{2t} \\ c_2 e^{2t} \end{bmatrix}.$$
Now we can obtain the general solution for $\vec{y}_{GS}$:
$$\vec{y}_{GS} = W\vec{z}_{GS} = \begin{bmatrix} \vec{v}_1 & \vec{w}_2 \end{bmatrix}\begin{bmatrix} c_1 e^{2t} + c_2 t e^{2t} \\ c_2 e^{2t} \end{bmatrix} = (c_1 e^{2t} + c_2 t e^{2t})\vec{v}_1 + c_2 e^{2t}\vec{w}_2.$$
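A small Python sketch (not part of the notes) confirming the Jordan decomposition and the resulting solution for Example 6.8:

```python
import numpy as np

# Check that W = [v1 w2] brings A of Example 6.8 to its Jordan normal form,
# and that the resulting solution satisfies dy/dt = A y.
A = np.array([[1.0, -1.0], [1.0, 3.0]])
W = np.array([[1.0, 1.0], [-1.0, -2.0]])
J = np.linalg.inv(W) @ A @ W            # expect [[2, 1], [0, 2]]

def y(t, c1=1.0, c2=1.0):
    v1 = np.array([1.0, -1.0])          # the only eigenvector
    w2 = np.array([1.0, -2.0])          # the generalized vector
    return (c1 + c2 * t) * np.exp(2 * t) * v1 + c2 * np.exp(2 * t) * w2

# Central finite difference of the solution vs the right-hand side A y.
t, h = 0.2, 1e-6
dydt = (y(t + h) - y(t - h)) / (2 * h)
```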
In higher dimensions, the Jordan normal form has a similar block structure. For example, a $3\times 3$ matrix with a repeated eigenvalue $\lambda_1$ and a single eigenvalue $\lambda_2$ can have the Jordan normal form:
$$J = \begin{bmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{bmatrix}.$$
Chapter 7

Qualitative analysis of ODEs
So far we have focused on obtaining analytical solutions to ODEs, but this is not always possible. Even when it is possible, it is not always very insightful. In this section, we will focus on qualitative analysis of ODEs and, as before, we mostly focus on linear ODEs. We'll discuss asymptotic behavior, fixed points (and their stability) and phase plane analysis.

Consider, for example, the exponential growth model for a population $P(t)$:
$$\frac{dP(t)}{dt} = KP(t).$$
A fixed point $\vec{y}^*$ of a system of ODEs is a point where the dynamics vanish:
$$\left[\frac{d\vec{y}}{dt}\right]_{\vec{y}=\vec{y}^*} = 0.$$
Example 7.2 (Logistic growth). A modified model for population growth that does not lead to exponential growth for $K > 0$ is the logistic growth model, with a carrying capacity $C$:
$$\frac{dP}{dt} = KP\left(1 - \frac{P}{C}\right).$$
Its fixed points are
$$\frac{dP}{dt} = 0 \implies P^*_1 = 0, \quad P^*_2 = C,$$
with the former being an unstable fixed point and the latter being a stable one, as defined below.
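A minimal Python sketch of this stability argument for the logistic model; the parameter values $K$ and $C$ are illustrative assumptions:

```python
# Locate the fixed points of the logistic model dP/dt = K*P*(1 - P/C)
# and classify them by the sign of f'(P*) at each fixed point.
K, C = 0.5, 100.0                       # sample parameter values

def f(P):
    return K * P * (1 - P / C)

def fprime(P, h=1e-6):
    # central-difference estimate of df/dP
    return (f(P + h) - f(P - h)) / (2 * h)

fixed_points = [0.0, C]                 # solutions of f(P) = 0
stability = {P: ("stable" if fprime(P) < 0 else "unstable")
             for P in fixed_points}
```

Since $f'(0) = K > 0$ and $f'(C) = -K < 0$, the classification matches the text.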
Consider a linear system
$$\frac{d\vec{y}}{dt} = A\vec{y}.$$
A system of linear ODEs can have one or infinitely many fixed points:
$$\frac{d\vec{y}}{dt} = 0 \implies A\vec{y}^* = 0 \implies \begin{cases} \vec{y}^* = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, & \text{if } \operatorname{Det}(A) \neq 0, \\ \vec{y}^* \text{ could be a line or plane}, & \text{if } \operatorname{Det}(A) = 0. \end{cases}$$
A fixed point $\vec{y}^*$ is asymptotically stable if
$$\lim_{t\to\infty}\|\vec{y}(t) - \vec{y}^*\| = 0.$$
Intuitively, in this case the solution approaches the fixed point over long times.

A solution $\vec{y}(t; c_1, \cdots, c_n) \in \mathbb{R}^n$ traces a trajectory in the phase plane as $t$ varies.
Figure 7.1: Illustration of phase plane for a 2 dimensional system. Two solutions
starting at different initial conditions are shown (blue). Also, some representa-
tive velocity vectors from the vector field are drawn (red).
One can interpret the phase plane in terms of dynamics. The solution $\vec{y}(t)$ corresponds to a trajectory of a point moving on the phase plane with velocity $\frac{d\vec{y}}{dt}(t)$. For a system of first order ODEs of the form
$$\frac{d\vec{y}}{dt} = \vec{F}(\vec{y}),$$
the right-hand side defines a velocity vector field on the phase plane. We now consider the linear system
$$\frac{d\vec{y}}{dt} = A\vec{y}.$$
Eigenvectors define very special directions in the phase plane ($A\vec{v}_1 = \lambda_1\vec{v}_1$) in a linear system. The line defined by $\vec{v}_1$ in the phase plane is invariant, meaning that if we start on $\vec{v}_1$, we will remain on it. Let $\vec{y}(0) = \alpha\vec{v}_1$ with $\alpha \in \mathbb{R}$. We have:
$$A\vec{y}(0) = \alpha A\vec{v}_1 = \alpha\lambda_1\vec{v}_1 = \frac{d\vec{y}(0)}{dt},$$
so, if $\lambda_1 > 0$, $\vec{y}(t)$ grows along $\vec{v}_1$ and if $\lambda_1 < 0$, $\vec{y}(t)$ decays along $\vec{v}_1$ to $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$.
To check this explicitly, we go back to the general solution of systems of ODEs (2 dimensional case):
$$\vec{y}(t) = c_1 e^{\lambda_1 t}\vec{v}_1 + c_2 e^{\lambda_2 t}\vec{v}_2.$$
Let $\vec{y}(0) = \alpha\vec{v}_1$; this gives $c_1 = \alpha$ and $c_2 = 0$. So we have:
$$\vec{y}(t) = \alpha e^{\lambda_1 t}\vec{v}_1,$$
and we see explicitly from this solution that, if $\lambda_1 > 0$, $\vec{y}(t)$ grows along $\vec{v}_1$ and if $\lambda_1 < 0$, $\vec{y}(t)$ decays along $\vec{v}_1$ to $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$. Indeed, $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ is the fixed point of the system, as we have
$$\frac{d\vec{y}}{dt}\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}\right) = A\begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
Example 7.4 (First example of qualitative and phase plane analysis of linear systems of ODEs).
$$\frac{d\vec{y}}{dt} = A\vec{y}; \qquad A = \begin{bmatrix} -4 & -3 \\ 2 & 3 \end{bmatrix}.$$
We have the solution for this system of linear ODEs using the eigenvalues and eigenvectors of the matrix $A$:
$$\vec{y}_{GS}(t) = c_1 e^{2t}\begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_2 e^{-3t}\begin{bmatrix} 3 \\ -1 \end{bmatrix}.$$
The aim of phase plane analysis is to obtain the phase portrait of the system, which is a summary of all distinct solutions with qualitatively different trajectories in the phase plane. To do this for our linear system we draw the lines corresponding to the directions of the eigenvectors and trajectories that start on these lines. We consider the asymptotic behavior and we also compute the vector field at some points to draw some representative trajectories. For example, at $\vec{y} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ we have:
$$\frac{d\vec{y}}{dt}\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}\right) = A\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -4 \\ 2 \end{bmatrix}.$$
Figure 7.2 shows the phase portrait for this system. We can explicitly obtain the equations for the trajectories using either the general solution (as seen in the quiz in the lectures) or by obtaining an ODE for $y(x)$ by dividing the equation for $\frac{dy}{dt}$ by $\frac{dx}{dt}$:
$$\left.\begin{aligned} \frac{dx}{dt} &= -4x - 3y \\ \frac{dy}{dt} &= 2x + 3y \end{aligned}\right\} \implies \frac{dy}{dx} = \frac{2x + 3y}{-4x - 3y}.$$
This is a homogeneous first order ODE and, by using the change of variable $u = y/x$, its solution can be obtained up to a constant of integration $c$, whose different values give us the different trajectories in the phase plane, as illustrated in the phase portrait in Figure 7.2. This figure and all the figures in the next section are plotted using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-matrix-entry/).
Figure 7.2: The phase portrait for the linear ODE in Example 7.4.
We now consider the general 2 dimensional linear system
$$\frac{d\vec{y}}{dt} = A\vec{y}; \qquad A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$$
The general solution of this system can be written in terms of the eigenvalues and eigenvectors of the matrix $A$, as seen in the last chapter. The eigenvalues can be obtained by solving the following characteristic equation:
$$\lambda^2 - \tau\lambda + \Delta = 0,$$
where $\tau = a + d$ is the trace of $A$ and $\Delta = ad - bc$ is its determinant.
Figure 7.3: The saddle point phase portrait. This figure is plotted
using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-
matrix-entry/)
2.1.1: Repelling or unstable node. $0 < \Delta < \frac{\tau^2}{4}$; $\tau > 0$.

In this case we have $\lambda_1, \lambda_2 \in \mathbb{R}^+$ with $\lambda_1 > \lambda_2 > 0$. Solutions starting on $\vec{v}_2$ blow up along the direction of $\vec{v}_2$; otherwise, they blow up along the direction of $\vec{v}_1$.
Figure 7.4: The repelling or unstable node phase portrait. This figure is plotted
using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-
matrix-entry/)
2.1.2: Attracting or stable node. $0 < \Delta < \frac{\tau^2}{4}$; $\tau < 0$.

In this case we have $\lambda_1, \lambda_2 \in \mathbb{R}^-$ with $\lambda_2 < \lambda_1 < 0$. Solutions starting on $\vec{v}_2$ decay to $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ along the direction of $\vec{v}_2$; otherwise, they decay along the direction of $\vec{v}_1$ to $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$. So, for $c_1 \neq 0$, we have:
$$t \to \infty \implies \vec{y}(t) \to c_1 e^{-|\lambda_1|t}\vec{v}_1 \to \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
2.2.1: Centre. $\Delta > 0$; $\tau = 0$. A classic example is the undamped harmonic oscillator:
$$\frac{d^2x}{dt^2} + \omega^2 x = 0.$$
Figure 7.5: The attracting or stable node phase portrait. This figure is plotted
using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-
matrix-entry/)
We can write this second order linear ODE as a system of two first order linear ODEs by defining $y = \frac{dx}{dt}$. The general solution for this system can be written as:
$$\begin{bmatrix} x_{GS} \\ y_{GS} \end{bmatrix} = \begin{bmatrix} A_0\sin(\omega t + \phi) \\ A_0\omega\cos(\omega t + \phi) \end{bmatrix}.$$
From this result we see that the trajectories of the solutions are ellipses:
$$x^2 + \frac{y^2}{\omega^2} = A_0^2 = x_0^2 + \frac{y_0^2}{\omega^2},$$
where $A_0$ is a constant of integration and it depends on the initial conditions $x_0$ and $y_0$.
The phase portrait illustrating the elliptic profile can be seen in Figure 7.6. To figure out the direction of motion we evaluate the vector field at some points:
$$\vec{y} = \begin{bmatrix} 0 \\ y \end{bmatrix} \implies \frac{d\vec{y}}{dt} = A\vec{y} = \begin{bmatrix} y \\ 0 \end{bmatrix}, \qquad \vec{y} = \begin{bmatrix} x \\ 0 \end{bmatrix} \implies \frac{d\vec{y}}{dt} = A\vec{y} = \begin{bmatrix} 0 \\ -\omega^2 x \end{bmatrix}.$$
Note that in this case the point $\vec{y} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ is a stable fixed point in the sense of Lyapunov stability, as the trajectories around the origin do not blow up but also do not decay to it.
Figure 7.6: The centre or elliptic phase portrait. This figure is plotted
using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-
matrix-entry/)
2.2.2: Repelling or unstable spiral. $\Delta > \frac{\tau^2}{4}$; $\tau > 0$.

The eigenvalues are complex with positive real part. So, for the general solution we have:
$$\vec{y} = e^{\frac{\tau}{2}t}\left[c_1 e^{i\omega t}\vec{v}_1 + c_2 e^{-i\omega t}\vec{v}_2\right],$$
which asymptotically blows up in an oscillatory fashion. The phase portrait illustrating the repelling or unstable spiral can be seen in Figure 7.7.
2.2.3: Attracting or stable spiral. $\Delta > \frac{\tau^2}{4}$; $\tau < 0$.

The eigenvalues are complex with negative real part. So, for the general solution we have:
$$\vec{y} = e^{\frac{\tau}{2}t}\left[c_1 e^{i\omega t}\vec{v}_1 + c_2 e^{-i\omega t}\vec{v}_2\right],$$
which asymptotically, as $t \to \infty$, gives $\vec{y} \to \begin{bmatrix} 0 \\ 0 \end{bmatrix}$. The phase portrait illustrating the attracting or stable spiral can be seen in Figure 7.8.
3.1: Line of repelling or unstable fixed points. Δ = 0; 𝜏 > 0.
Figure 7.7: The repelling or unstable spiral phase portrait. This figure is plotted
using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-
matrix-entry/)
Figure 7.8: The attracting or stable spiral phase portrait. This figure is plotted
using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-
matrix-entry/)
Figure 7.9: Line of repelling or unstable fixed points phase portrait. This figure
is plotted using this [online applet](http://mathlets.org/mathlets/linear-phase-
portraits-matrix-entry/)
Figure 7.10: Line of attracting or stable fixed points phase portrait. This figure is plotted using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-matrix-entry/)
When $\Delta = \frac{\tau^2}{4}$ and $A$ is diagonalizable (a star node), $A = \lambda I$, so every direction is an eigenvector and
$$\vec{y} = c_1 e^{\lambda t}\begin{bmatrix} 1 \\ 0 \end{bmatrix} + c_2 e^{\lambda t}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = e^{\lambda t}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.$$
Figure 7.11: Repelling star node phase portrait. This figure is plotted
using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-
matrix-entry/)
Example 7.6.
$$\frac{d\vec{y}}{dt} = A\vec{y}; \qquad A = \begin{bmatrix} 1 & -1 \\ 1 & 3 \end{bmatrix}.$$
In the last chapter, using the Jordan normal form, we showed the general solution of this system of ODEs to be:
$$\vec{y}_{GS} = (c_1 e^{2t} + c_2 t e^{2t})\vec{v}_1 + c_2 e^{2t}\vec{w}_2,$$
where $\vec{v}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$ is the only eigenvector, and $\vec{w}_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}$. We see that as $t \to \infty$, $\vec{y}(t)$ blows up in the direction of $\vec{v}_1$. We can estimate the vector field at specific points to help draw the phase portrait. The phase portrait for this case can be seen in Figure 7.12, which is of the type unstable improper or degenerate node (4.2.1). The stable improper or degenerate node (4.2.2) has a similar phase portrait, with an asymptotically stable fixed point at the origin $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$.
Figure 7.12: Unstable improper or degenerate node phase portrait. This figure is plotted using this [online applet](http://mathlets.org/mathlets/linear-phase-portraits-matrix-entry/)

The catalogue of phase portraits for the 2 dimensional linear systems of ODEs can be seen in Figure 7.13 on the $(\tau, \Delta)$ plane. We note that the solutions can be unstable, asymptotically stable or Lyapunov stable in the different regions of the $(\tau, \Delta)$ plane. The system can have one fixed point or infinitely many fixed points. Also, solutions could be oscillatory or non-oscillatory in different regions of the parameter space.
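The catalogue can be summarised as a small decision procedure on $(\tau, \Delta)$; the following Python sketch is one possible encoding (the boundary handling via a tolerance is a choice, not from the notes):

```python
# Classify the phase portrait of a 2D linear system dy/dt = A y
# from tau = tr(A) and Delta = det(A).
def classify(tau, delta, tol=1e-12):
    if delta < -tol:
        return "saddle"
    if abs(delta) <= tol:
        return "line of fixed points"
    # delta > 0 from here on
    if abs(tau) <= tol:
        return "centre"
    disc = tau * tau - 4 * delta
    if disc > tol:
        return "unstable node" if tau > 0 else "stable node"
    if abs(disc) <= tol:
        return "unstable star/degenerate node" if tau > 0 else "stable star/degenerate node"
    return "unstable spiral" if tau > 0 else "stable spiral"

examples = {
    (-1.0, -6.0): "saddle",        # Example 7.4: tau = -1, Delta = -6
    (0.0, 4.0): "centre",          # harmonic oscillator with omega = 2
}
```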
Figure 7.13: The catalogue of phase portraits for the 2 dimensional linear sys-
tems of ODEs
As an example of a nonlinear system, consider the following model of a genetic toggle switch (two genes repressing each other):
$$\frac{du}{dt} = \frac{\alpha_1}{1 + v^\beta} - u, \qquad \frac{dv}{dt} = \frac{\alpha_2}{1 + u^\gamma} - v.$$
Characterising the fixed points of this model allowed the authors to successfully design and construct one of the first synthetic genetic networks. This system has up to 3 fixed points (two stable and one unstable).
As another example, consider the Lotka-Volterra predator-prey model for a prey population $x$ and a predator population $y$:
$$\frac{dx}{dt} = ax - bxy, \qquad \frac{dy}{dt} = dxy - cy.$$
By setting the derivatives to zero, we obtain the following two fixed points for the system:
$$\begin{bmatrix} x^* \\ y^* \end{bmatrix}_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} x^* \\ y^* \end{bmatrix}_2 = \begin{bmatrix} c/d \\ a/b \end{bmatrix}.$$
To determine the stability of these fixed points, we consider small perturbations:
$$x = x^* + \Delta x, \qquad y = y^* + \Delta y,$$
where $\Delta x, \Delta y \ll 1$. Linearising around the first fixed point by omitting terms of the order of $\Delta x\Delta y$, we obtain:
$$\frac{d\Delta x}{dt} = a\Delta x, \qquad \frac{d\Delta y}{dt} = -c\Delta y.$$
This is a 2D linear system of ODEs that exhibits a saddle-point phase portrait, suggesting that the first fixed point is unstable. Similarly, linearising around the second fixed point, we obtain:
$$\frac{d\Delta x}{dt} = -\frac{bc}{d}\Delta y, \qquad \frac{d\Delta y}{dt} = \frac{da}{b}\Delta x.$$
This is a 2D linear system of ODEs that exhibits a centre phase portrait, suggesting that the second fixed point has Lyapunov stability. Putting these together we get the phase portrait in Figure 7.14 for the Lotka-Volterra model, which suggests the system has periodic trajectories around the second fixed point in the phase plane.
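The two linearisations can be checked by evaluating the Jacobian of the Lotka-Volterra vector field at the fixed points; the parameter values below are illustrative assumptions:

```python
import numpy as np

# Analytic Jacobian of (ax - bxy, dxy - cy):
# J(x, y) = [[a - b*y, -b*x], [d*y, d*x - c]].
a, b, c, d = 1.0, 0.5, 0.75, 0.25       # sample parameter values

def jacobian(x, y):
    return np.array([[a - b * y, -b * x],
                     [d * y, d * x - c]])

J1 = jacobian(0.0, 0.0)                 # expect diag(a, -c): a saddle
J2 = jacobian(c / d, a / b)             # expect purely imaginary eigenvalues

eigs1 = np.linalg.eigvals(J1)
eigs2 = np.linalg.eigvals(J2)
```

The eigenvalues at the first fixed point have opposite signs (saddle); at the second they are purely imaginary (centre), consistent with the linearised equations above.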
The qualitative analysis extends to higher dimensional systems
$$\frac{d\vec{y}}{dt} = \vec{F}(\vec{y}).$$
Fixed points $\vec{y}^*$ are obtained by:
$$\frac{d\vec{y}}{dt}(\vec{y}^*) = \vec{0}.$$
The approach directly generalises for linear systems, as the solutions are given by the eigenvectors and eigenvalues of the matrix $A$. As in the 2 dimensional case, solutions starting in the directions set by the eigenvectors stay on these directions and grow or decay depending on the sign of the corresponding eigenvalue.

Stability of linear $n$ dimensional systems. For the 2 dimensional case we had stability where $\tau \leq 0$ and $\Delta \geq 0$. In terms of the eigenvalues, this means the real parts of the eigenvalues are non-positive. Similarly, a general $n$ dimensional linear system of ODEs is asymptotically stable if the real parts of all the eigenvalues are negative.
Lorenz system (1963, Edward Lorenz)

More complex dynamics in the phase plane are possible. Lorenz proposed a system of 3 nonlinear equations as a model of atmospheric convection. This rather simple model, for certain values of the parameters (e.g. $\sigma = 10$, $\beta = 8/3$ and $\rho = 28$), exhibits complex non-periodic dynamics that is an example of chaos. This dynamical behavior is characterised by the divergence of trajectories in the phase plane starting from near identical initial conditions, as illustrated in this video.
$$\frac{dx}{dt} = \sigma(y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z.$$
Chapter 8
Introduction to Bifurcations
8.1 Bifurcations in linear systems

As an example, consider the damped harmonic oscillator written as a linear system, with the damping $k$ as a tunable parameter:
$$\frac{d}{dt}\begin{bmatrix} x \\ u \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega^2 & -2k \end{bmatrix}\begin{bmatrix} x \\ u \end{bmatrix}.$$
Given the catalogue of phase portraits in the $(\tau, \Delta)$ plane we saw in the last chapter, we can see how the qualitative behavior of $\vec{y}(t; k) = \begin{bmatrix} x \\ u \end{bmatrix}$ changes when $k$ is varied. For this system we have $\tau = -2k$ and $\Delta = \omega^2$, so only $\tau$ depends on the tunable parameter $k$ and $\Delta$ is always positive. As Figure 8.1 shows, there are 7 different phase portraits that can be observed as $k$ is varied. Defining a bifurcation as a change in stability of the system, there is only one bifurcation point, at $k = 0$, for this system.
In linear systems the bifurcations are related to changes in the stability of the
system, as that is the main type of change that can happen in the dynamics of
the system.
In non-linear systems there is a whole zoo of bifurcations; we will not cover these systematically here, but we will consider one dimensional nonlinear systems in the next section.
8.2 Qualitative behavior of non-linear 1D systems

We consider one dimensional nonlinear systems of the form
$$\frac{dy}{dt} = f(y); \qquad y \in \mathbb{R}.$$
Example 8.2 (First (trivial) example of the phase plane and bifurcation analysis in one dimension).
$$\frac{dy}{dt} = ky = f(y; k).$$
The fixed point for this system is $y^* = 0$, as $f(y^* = 0) = 0$. The general solution for this ODE is
$$y(t; k) = y(0)e^{kt}.$$
The general solution suggests that the fixed point is stable for $k < 0$ and unstable for $k > 0$. We can also use a plot of $\frac{dy}{dt} = f(y; k)$ vs $y$ for different values of $k$ to draw the vector field, as illustrated in Figure 8.2, to obtain the stability of the fixed points. A stable fixed point has a flow towards it, while an unstable fixed point has an outward flow.
Figure 8.2: The plot of $f(y)$ vs $y$ for different values of $k$ illustrates the vector field and the stability of the fixed point for Example 8.2.
Figure 8.3: The bifurcation diagram (plot of fixed points and their stability vs
the tuning parameter 𝑘) for Example 8.2.
Figure 8.4: The plot of $f(y)$ vs $y$ for different values of $r$ illustrates the vector field and the stability of the fixed points for the saddle-node bifurcation.
Saddle-node bifurcation. The normal form of this bifurcation is $\frac{dy}{dt} = r + y^2 = f(y; r)$, with fixed points $y^* = \pm\sqrt{-r}$ for $r < 0$. For $r < 0$ we have 2 fixed points (one stable, one unstable), at $r = 0$ we have one half-stable fixed point, and for $r > 0$ we have no fixed points. There is a saddle-node bifurcation at $r = 0$. These changes in the number and stability of the fixed points can be summarised using the bifurcation diagram, which is a plot of the fixed points vs the parameter $r$ (see Figure 8.5).
Figure 8.5: The bifurcation diagram (plot of fixed points and their stability vs
the tuning parameter 𝑟) for saddle-node bifurcation.
Figure 8.6: The plot of $f(y)$ vs $y$ for different values of $r$ illustrates the vector field and the stability of the fixed points for the transcritical bifurcation.
Transcritical bifurcation. The normal form of this bifurcation is $\frac{dy}{dt} = ry - y^2$, with fixed points $y^* = 0$ and $y^* = r$. For $r < 0$ we have 2 fixed points (one stable, one unstable), at $r = 0$ we have one half-stable fixed point, and for $r > 0$ we go back to two fixed points. At the bifurcation point $r = 0$ an exchange of stabilities takes place between the two fixed points.
Figure 8.7: The bifurcation diagram (plot of fixed points and their stability vs
the tuning parameter 𝑟) for transcritical bifurcation.
Supercritical pitchfork bifurcation
$$\frac{dy}{dt} = ry - y^3 = f(y; r).$$
Note that the equation is invariant under the change of variable $y \to -y$, which signifies the symmetry. The system has up to three fixed points, $y^* = 0$ and $y^* = \pm\sqrt{r}$. Figure 8.8, using the vector field, illustrates the number and stability of these fixed points for different values of $r$.

For $r < 0$ we have 1 stable fixed point, at $r = 0$ we still have one stable fixed point, and for $r > 0$ we have three fixed points (two stable and a middle one that is unstable). At the bifurcation point $r = 0$ an exchange of stabilities takes place between the fixed points. The changes in the number and stability of the fixed points are summarised in the bifurcation diagram (see Figure 8.9).
Subcritical pitchfork bifurcation
Figure 8.8: The plot of $f(y)$ vs $y$ for different values of $r$ illustrates the vector field and the stability of the fixed points for the supercritical pitchfork bifurcation.
Figure 8.9: The bifurcation diagram (plot of fixed points and their stability vs
the tuning parameter 𝑟) for supercritical pitchfork bifurcation.
$$\frac{dy}{dt} = ry + y^3 = f(y; r).$$
The system again has up to three fixed points, $y^* = 0$ and $y^* = \pm\sqrt{-r}$. Figure 8.10, using the vector field, illustrates the number and stability of these fixed points for different values of $r$.
Figure 8.10: The plot of $f(y)$ vs $y$ for different values of $r$ illustrates the vector field and the stability of the fixed points for the subcritical pitchfork bifurcation.
For $r < 0$ we have three fixed points (two unstable and a middle one that is stable), and for $r \geq 0$ we have one unstable fixed point. At the bifurcation point $r = 0$ an exchange of stabilities takes place between the fixed points. The changes in the number and stability of the fixed points are summarised in the bifurcation diagram (see Figure 8.11).
Example 8.3. Consider the system
$$\frac{dy}{dt} = \frac{k}{y} = f(y; k).$$
Here, $f(y; k)$ is not defined at $y_{sing} = 0$. Figure 8.12, using the vector field, shows the stability of the singularity for different values of $k$.
For this example, we have an explicit solution that provides further insight in
the behavior of the system for different values of 𝑘 and initial condition.
Figure 8.11: The bifurcation diagram (plot of fixed points and their stability vs
the tuning parameter 𝑟) for subcritical pitchfork bifurcation.
Figure 8.12: The plot of $f(y)$ vs $y$ for different values of $k$ illustrates the vector field and the stability of the singularity in Example 8.3.
$$\frac{dy}{dt} = \frac{k}{y} \implies \int y\,dy = \int k\,dt \implies y = \pm\sqrt{2kt + y_0^2},$$
where $y_0 = y(t = 0)$ is the initial condition. For $y_0 > 0$ we have the positive solution and for $y_0 < 0$ we have the negative solution. Also, for $k > 0$ the solutions blow up to infinity and for $k < 0$ the solutions approach 0. This is summarised in the bifurcation diagram (see Figure 8.13).
Figure 8.13: The bifurcation diagram (plot of singular points and their stability
vs the tuning parameter 𝑘) for Example 8.3.
More generally, we can determine the stability of a fixed point $y^*$ by linearising $f$ around it. If $\frac{df}{dy}(y^*) > 0$ then the fixed point $y^*$ is unstable, as the perturbations around the fixed point grow. However, if $\frac{df}{dy}(y^*) < 0$ then the fixed point $y^*$ is stable, as the perturbations around the fixed point decay to zero.
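This criterion is easy to apply numerically; the following Python sketch classifies the fixed points of the supercritical pitchfork $f(y) = ry - y^3$ for $r = 1$ (an illustrative choice):

```python
# Linear stability criterion: sign of f'(y*) at each fixed point.
r = 1.0

def f(y):
    return r * y - y**3

def fprime(y, h=1e-6):
    # central-difference estimate of df/dy
    return (f(y + h) - f(y - h)) / (2 * h)

fixed_points = [0.0, r**0.5, -(r**0.5)]          # y* = 0 and +/- sqrt(r)
labels = ["unstable" if fprime(y) > 0 else "stable" for y in fixed_points]
```

Since $f'(y) = r - 3y^2$, the origin is unstable ($f'(0) = r > 0$) and the two outer fixed points are stable ($f'(\pm\sqrt{r}) = -2r < 0$), as in Figure 8.9.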
Part III: Introduction to Multivariate Calculus
Chapter 9
Partial Differentiation
So far we have mostly considered functions of a single variable, $f: \mathbb{R} \to \mathbb{R}^n$. In the third part of the module, we now consider functions of several variables, or multivariable (multivariate) functions:
$$f: \mathbb{R}^n \to \mathbb{R}, \qquad f(x_1, \cdots, x_n) \in \mathbb{R}.$$
In particular, for a function of two variables we write
$$f(x, y) = f(\vec{x}); \qquad \vec{x} = \begin{bmatrix} x \\ y \end{bmatrix} \in \mathbb{R}^2.$$
9.1 Representation

As seen in Figure 9.1, there are the following two representations for functions of two variables.

1. 3D representation, where $f(x, y)$ is the height.
2. Level curves $\vec{x}_C = (x, y)_C$, where $f(\vec{x}_C) = C$. For each $C$ there will be a set of points that fulfill this condition. This kind of representation is also known as a contour plot.
Figure 9.1: The 3D and contour plot of function 𝑓(𝑥, 𝑦) = 𝑥2 + 2𝑦2 drawn using
[Wolfram Alpha](wolframalpha.com)
9.2 Limit and continuity
A function $f(\vec{x})$ has a limit $C$ at a point $\vec{x}^*$ if
$$\lim_{\vec{x}\to\vec{x}^*} f(\vec{x}) = C,$$
independently of the path along which $\vec{x}$ approaches $\vec{x}^*$. As an example of a function that is not continuous at the origin, consider:
$$g(x, y) = \begin{cases} xy, & \text{if } x, y \neq 0, \\ 1, & \text{if } x = y = 0. \end{cases}$$
The partial derivatives of $f(x, y)$ are themselves functions of $x$ and $y$:
$$g_1(x, y) = \left(\frac{\partial f}{\partial x}\right)_y; \qquad g_2(x, y) = \left(\frac{\partial f}{\partial y}\right)_x.$$
For continuous functions, the mixed second partial derivatives are symmetric:
$$\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right) = \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right) \implies \frac{\partial^2 f}{\partial y\partial x} = \frac{\partial^2 f}{\partial x\partial y}.$$
Example 9.1 (Obtain all the first and second partial derivatives of the following function).
$$u(x, y) = x^2\sin y + y^3.$$
The first partial derivatives are:
$$\left(\frac{\partial u}{\partial x}\right)_y = 2x\sin y, \qquad \left(\frac{\partial u}{\partial y}\right)_x = x^2\cos y + 3y^2.$$
The second partial derivatives are:
$$\frac{\partial^2 u}{\partial x^2} = 2\sin y, \qquad \frac{\partial^2 u}{\partial x\partial y} = 2x\cos y,$$
$$\frac{\partial^2 u}{\partial y^2} = -x^2\sin y + 6y, \qquad \frac{\partial^2 u}{\partial y\partial x} = 2x\cos y.$$
We observe that the symmetry of the mixed derivatives in this example holds.
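The hand computation can be cross-checked with finite differences (a sketch; the evaluation point is arbitrary):

```python
import math

# Check the mixed second derivative of u(x, y) = x**2*sin(y) + y**3
# against the hand-computed value 2*x*cos(y).
def u(x, y):
    return x**2 * math.sin(y) + y**3

def mixed(x, y, h=1e-4):
    # central-difference estimate of d2u/(dx dy)
    return (u(x + h, y + h) - u(x + h, y - h)
            - u(x - h, y + h) + u(x - h, y - h)) / (4 * h * h)

x0, y0 = 1.3, 0.7
num = mixed(x0, y0)
exact = 2 * x0 * math.cos(y0)           # the value computed in the text
```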
9.4 Total differentiation of a function of several variables
Consider a function
$$u = u(x, y) = u(\vec{x}); \qquad \vec{x} \in \mathbb{R}^2.$$
Now, suppose $x = x(t)$ and $y = y(t)$ with $t \in \mathbb{R}$. What is $\frac{du}{dt}$? Combining total differentiation and the chain rule for ordinary functions, one can obtain:
$$du = \left(\frac{\partial u}{\partial x}\right)_y dx + \left(\frac{\partial u}{\partial y}\right)_x dy = \left(\frac{\partial u}{\partial x}\right)_y\left(\frac{dx}{dt}\right)dt + \left(\frac{\partial u}{\partial y}\right)_x\left(\frac{dy}{dt}\right)dt.$$
So, we have:
$$\frac{du}{dt} = \left(\frac{\partial u}{\partial x}\right)_y\left(\frac{dx}{dt}\right) + \left(\frac{\partial u}{\partial y}\right)_x\left(\frac{dy}{dt}\right).$$
This generalises to a function of $n$ variables $u(x_1, \cdots, x_n)$ with $x_i = x_i(t)$ and $t \in \mathbb{R}$:
$$\frac{du}{dt} = \sum_{i=1}^{n}\left(\frac{\partial u}{\partial x_i}\right)\frac{dx_i}{dt}.$$
Example 9.2 (Consider a cylinder whose radius and height are expanding with time).
$$r(t) = 2t; \qquad h(t) = 1 + t^2.$$
Evaluate the rate of change in volume, $\frac{dV}{dt}$.
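A sketch of the answer for Example 9.2, comparing the chain-rule expression for $\frac{dV}{dt}$ with a direct numerical derivative (the evaluation time `t0` is an arbitrary choice):

```python
import math

# V = pi*r**2*h with r(t) = 2*t and h(t) = 1 + t**2.
def V(t):
    r, h = 2 * t, 1 + t**2
    return math.pi * r**2 * h

def dVdt_chain(t):
    r, h = 2 * t, 1 + t**2
    drdt, dhdt = 2, 2 * t
    # dV/dt = (dV/dr) dr/dt + (dV/dh) dh/dt
    return 2 * math.pi * r * h * drdt + math.pi * r**2 * dhdt

t0, eps = 1.5, 1e-6
numeric = (V(t0 + eps) - V(t0 - eps)) / (2 * eps)
```

Both routes give $\frac{dV}{dt} = 8\pi t + 16\pi t^3$.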
Next, consider $u = u(x, y)$ where $y = y(t, x)$. To obtain $\left(\frac{\partial u}{\partial x}\right)_t$, we combine total differentiation for $u(x, y)$ and $y(t, x)$. We get:
$$du = \left(\frac{\partial u}{\partial x}\right)_y dx + \left(\frac{\partial u}{\partial y}\right)_x dy, \qquad dy = \left(\frac{\partial y}{\partial x}\right)_t dx + \left(\frac{\partial y}{\partial t}\right)_x dt.$$
Now, by plugging $dy$ into the expression for $du$ and rearranging we get:
$$du = \left[\left(\frac{\partial u}{\partial x}\right)_y + \left(\frac{\partial u}{\partial y}\right)_x\left(\frac{\partial y}{\partial x}\right)_t\right]dx + \left(\frac{\partial u}{\partial y}\right)_x\left(\frac{\partial y}{\partial t}\right)_x dt.$$
Now, thinking of the above expression as the total derivative of $u(x, t)$, we obtain:
$$\left(\frac{\partial u}{\partial x}\right)_t = \left(\frac{\partial u}{\partial x}\right)_y + \left(\frac{\partial u}{\partial y}\right)_x\left(\frac{\partial y}{\partial x}\right)_t, \qquad \left(\frac{\partial u}{\partial t}\right)_x = \left(\frac{\partial u}{\partial y}\right)_x\left(\frac{\partial y}{\partial t}\right)_x.$$
Let
$$h = h(x, y); \quad \text{with } x = x(u, v) \text{ and } y = y(u, v).$$
Considering the total derivative of $h(x, y)$ and substituting the total derivatives of $x(u, v)$ and $y(u, v)$, and rearranging, we obtain:
$$dh = \left[\left(\frac{\partial h}{\partial x}\right)_y\left(\frac{\partial x}{\partial u}\right)_v + \left(\frac{\partial h}{\partial y}\right)_x\left(\frac{\partial y}{\partial u}\right)_v\right]du + \left[\left(\frac{\partial h}{\partial x}\right)_y\left(\frac{\partial x}{\partial v}\right)_u + \left(\frac{\partial h}{\partial y}\right)_x\left(\frac{\partial y}{\partial v}\right)_u\right]dv.$$
Therefore:
$$\left(\frac{\partial h}{\partial u}\right)_v = \left(\frac{\partial h}{\partial x}\right)_y\left(\frac{\partial x}{\partial u}\right)_v + \left(\frac{\partial h}{\partial y}\right)_x\left(\frac{\partial y}{\partial u}\right)_v, \qquad \left(\frac{\partial h}{\partial v}\right)_u = \left(\frac{\partial h}{\partial x}\right)_y\left(\frac{\partial x}{\partial v}\right)_u + \left(\frac{\partial h}{\partial y}\right)_x\left(\frac{\partial y}{\partial v}\right)_u.$$
Note that, strictly speaking, the transformed function of $h(x, y)$ should be denoted $h'(u, v)$, as the transformed function is a 'different' function of its variables. But, very commonly, the prime on $h'(u, v)$ is not used. For example, $h(x, y) = x^2 + y^2$ in polar coordinates is $h'(r, \theta) = r^2$, while the common notation $h(r, \theta)$ could imply $r^2 + \theta^2$ if one thinks of plugging $r$ and $\theta$ into the original function $h(x, y)$. One should be aware of this notational ambiguity.
A function of one variable
$$y = f(x); \qquad x \in \mathbb{R}$$
can also be defined implicitly through a relation
$$F(x, y) = 0, \qquad \text{e.g. } F(x, y) = y - f(x) = 0.$$
Similarly, a function of two variables $z = z(x, y)$ can be defined implicitly by $F(x, y, z) = 0$.
Taking the total differential of $F(x, y, z) = 0$:
$$dF = \left(\frac{\partial F}{\partial x}\right)_{y,z}dx + \left(\frac{\partial F}{\partial y}\right)_{x,z}dy + \left(\frac{\partial F}{\partial z}\right)_{x,y}dz = 0.$$
Taking the total differential of the explicit form $z = z(x, y)$, we obtain:
$$dz = \left(\frac{\partial z}{\partial x}\right)_y dx + \left(\frac{\partial z}{\partial y}\right)_x dy.$$
Now, solving for $dz$ in the $dF$ equation above, we have the following relationship between derivatives of the implicit and explicit forms:
$$\left(\frac{\partial z}{\partial x}\right)_y = -\frac{\left(\frac{\partial F}{\partial x}\right)_{y,z}}{\left(\frac{\partial F}{\partial z}\right)_{x,y}}, \qquad \left(\frac{\partial z}{\partial y}\right)_x = -\frac{\left(\frac{\partial F}{\partial y}\right)_{x,z}}{\left(\frac{\partial F}{\partial z}\right)_{x,y}}.$$
Example 9.3 (Obtain the partial derivatives of $z$ using the explicit and implicit forms). Let $z(x, y) = x^2 + y^2 - 5$. From the explicit form:
$$dz = 2x\,dx + 2y\,dy.$$
Using the implicit form $F(x, y, z) = x^2 + y^2 - 5 - z = 0$:
$$dF = 2x\,dx + 2y\,dy - dz = 0,$$
which results in the same total differential $dz$ that we obtained above from the explicit form.
Recall the Taylor expansion of a function of one variable:
$$f(x_0 + \Delta x) = f(x_0) + \left(\frac{df}{dx}\right)_{x_0}\Delta x + \frac{1}{2}\left(\frac{d^2f}{dx^2}\right)_{x_0}(\Delta x)^2 + \frac{1}{3!}\left(\frac{d^3f}{dx^3}\right)_{x_0}(\Delta x)^3 + \cdots.$$
For a function of two variables, expanding first in $\Delta y$ and then in $\Delta x$:
$$f(x_0 + \Delta x, y_0 + \Delta y) = f(x_0, y_0) + \left(\frac{\partial f}{\partial y}\right)_{\vec{x}_0}\Delta y + \frac{1}{2}\left(\frac{\partial^2 f}{\partial y^2}\right)_{\vec{x}_0}(\Delta y)^2 + \frac{1}{3!}\left(\frac{\partial^3 f}{\partial y^3}\right)_{\vec{x}_0}(\Delta y)^3 + \cdots$$
$$+ \Delta x\left[\left(\frac{\partial f}{\partial x}\right)_{\vec{x}_0} + \left(\frac{\partial^2 f}{\partial y\partial x}\right)_{\vec{x}_0}\Delta y + \frac{1}{2}\left(\frac{\partial^3 f}{\partial y^2\partial x}\right)_{\vec{x}_0}(\Delta y)^2 + \cdots\right]$$
$$+ \frac{1}{2}(\Delta x)^2\left[\left(\frac{\partial^2 f}{\partial x^2}\right)_{\vec{x}_0} + \left(\frac{\partial^3 f}{\partial y\partial x^2}\right)_{\vec{x}_0}\Delta y + \cdots\right] + \frac{1}{3!}(\Delta x)^3\left[\left(\frac{\partial^3 f}{\partial x^3}\right)_{\vec{x}_0} + \cdots\right].$$
Collecting terms order by order:
$$f(x_0 + \Delta x, y_0 + \Delta y) = f(\vec{x}_0) + \left[\left(\frac{\partial f}{\partial x}\right)_{\vec{x}_0}\Delta x + \left(\frac{\partial f}{\partial y}\right)_{\vec{x}_0}\Delta y\right] + \frac{1}{2!}\left[\left(\frac{\partial^2 f}{\partial x^2}\right)_{\vec{x}_0}(\Delta x)^2 + 2\left(\frac{\partial^2 f}{\partial x\partial y}\right)_{\vec{x}_0}\Delta x\Delta y + \left(\frac{\partial^2 f}{\partial y^2}\right)_{\vec{x}_0}(\Delta y)^2\right] + \cdots.$$
The Gradient of $f$ evaluated at the point $\vec{x}_0$ is defined as:
$$\vec{\nabla}f\Big|_{\vec{x}_0} = \begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix}_{\vec{x}_0}.$$
The Hessian matrix associated with the function $f$ evaluated at the point $\vec{x}_0$ is defined as:
$$H_{ij}(\vec{x}_0) = \left(\frac{\partial^2 f}{\partial x_i\partial x_j}\right)_{\vec{x}_0}.$$
We can write the Taylor expansion up to the second order in terms of the Gradient and Hessian:
$$f(\vec{x}_0 + \Delta\vec{x}) = f(\vec{x}_0) + \vec{\nabla}f(\vec{x}_0)^T\Delta\vec{x} + \frac{1}{2}\Delta\vec{x}^T H(\vec{x}_0)\Delta\vec{x} + \cdots.$$
For example, for the function $A(x, y) = xy$:
$$A(\vec{x}_0 + \Delta\vec{x}) = A(\vec{x}_0) + \begin{bmatrix} y_0 & x_0 \end{bmatrix}\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} + \frac{1}{2}\begin{bmatrix} \Delta x & \Delta y \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = x_0 y_0 + (y_0\Delta x + x_0\Delta y) + \Delta x\Delta y.$$
Example 9.5 (Using Taylor expansion for error analysis). What is the maximum error in $h$, given errors in $x$ and $\theta$ of $\Delta x$ and $\Delta\theta$ respectively, for
$$h(x, \theta) = x\tan\theta?$$
We have $x = x_0 \pm \Delta x$ and $\theta = \theta_0 \pm \Delta\theta$, and we are looking for $\Delta h$. Using the Taylor expansion of $h$ up to first order we have:
$$h(x_0 \pm \Delta x, \theta_0 \pm \Delta\theta) = h(x_0, \theta_0) \pm \left(\frac{\partial h}{\partial x}\right)_{\vec{x}_0}\Delta x \pm \left(\frac{\partial h}{\partial \theta}\right)_{\vec{x}_0}\Delta\theta + \cdots.$$
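A numerical illustration of this first-order error estimate; the values `x0`, `theta0`, `dx`, `dtheta` below are illustrative assumptions:

```python
import math

# First-order error estimate for h(x, theta) = x*tan(theta).
x0, theta0 = 10.0, math.pi / 6          # sample measured values
dx, dtheta = 0.1, 0.01                  # sample measurement errors

def h(x, theta):
    return x * math.tan(theta)

# |Delta h| ~ |dh/dx|*dx + |dh/dtheta|*dtheta  (first-order Taylor estimate)
dh_est = abs(math.tan(theta0)) * dx + abs(x0 / math.cos(theta0)**2) * dtheta

# Worst case over the four sign combinations, for comparison.
worst = max(abs(h(x0 + sx * dx, theta0 + st * dtheta) - h(x0, theta0))
            for sx in (-1, 1) for st in (-1, 1))
```

The exact worst-case error agrees with the first-order estimate up to the neglected second-order terms.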
Chapter 10

Applications of Multivariate Calculus
Consider the transformation from polar to cartesian coordinates:
$$x = r\cos\theta, \qquad y = r\sin\theta.$$
Taking total differentials:
$$dx = \left(\frac{\partial x}{\partial r}\right)_\theta dr + \left(\frac{\partial x}{\partial \theta}\right)_r d\theta, \qquad dy = \left(\frac{\partial y}{\partial r}\right)_\theta dr + \left(\frac{\partial y}{\partial \theta}\right)_r d\theta.$$
In vector-matrix form, $d\vec{x} = J\,d\vec{r}$, where $J$ is the Jacobian matrix of the transformation:
$$J = \begin{bmatrix} \left(\frac{\partial x}{\partial r}\right)_\theta & \left(\frac{\partial x}{\partial \theta}\right)_r \\ \left(\frac{\partial y}{\partial r}\right)_\theta & \left(\frac{\partial y}{\partial \theta}\right)_r \end{bmatrix} = \begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{bmatrix}.$$
Similarly, for the inverse transformation:
$$dr = \left(\frac{\partial r}{\partial x}\right)_y dx + \left(\frac{\partial r}{\partial y}\right)_x dy, \qquad d\theta = \left(\frac{\partial \theta}{\partial x}\right)_y dx + \left(\frac{\partial \theta}{\partial y}\right)_x dy.$$
In vector-matrix form we can write this using the matrix $K$ defined below as $d\vec{r} = K\,d\vec{x}$:
$$K = \begin{bmatrix} \left(\frac{\partial r}{\partial x}\right)_y & \left(\frac{\partial r}{\partial y}\right)_x \\ \left(\frac{\partial \theta}{\partial x}\right)_y & \left(\frac{\partial \theta}{\partial y}\right)_x \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\frac{\sin\theta}{r} & \frac{\cos\theta}{r} \end{bmatrix}.$$
It is evident that $K$, the Jacobian of the transformation from cartesian to polar coordinates, is equal to the inverse of $J$, the Jacobian of the transformation from polar to cartesian coordinates:
$$K = J^{-1}.$$
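The relation $K = J^{-1}$, and the fact that $\det J = r$, can be checked at a sample point:

```python
import math

# Polar <-> cartesian Jacobians evaluated at a sample point (r, theta).
r, th = 2.0, 0.8

J = [[math.cos(th), -r * math.sin(th)],
     [math.sin(th), r * math.cos(th)]]
K = [[math.cos(th), math.sin(th)],
     [-math.sin(th) / r, math.cos(th) / r]]

# K @ J by hand: should be the 2x2 identity.
KJ = [[sum(K[i][m] * J[m][j] for m in range(2)) for j in range(2)]
      for i in range(2)]
detJ = J[0][0] * J[1][1] - J[0][1] * J[1][0]    # should equal r
```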
As a check, the line element transforms correctly:
$$ds^2 = (dx)^2 + (dy)^2 = \begin{bmatrix} dx & dy \end{bmatrix}\begin{bmatrix} dx \\ dy \end{bmatrix} = \begin{bmatrix} dr & d\theta \end{bmatrix}J^T J\begin{bmatrix} dr \\ d\theta \end{bmatrix} = (dr)^2 + r^2(d\theta)^2.$$
In cartesian coordinates, the area element is the area of the rectangle spanned by $d\vec{x}$ and $d\vec{y}$:
$$dA = \|d\vec{x} \times d\vec{y}\| = \left\|\det\begin{bmatrix} \hat{i} & \hat{j} & \hat{k} \\ dx & 0 & 0 \\ 0 & dy & 0 \end{bmatrix}\right\| = dx\,dy.$$
What is the area element for a general transformation? Consider a general transformation in two dimensions:
$$x = x(u, v) \quad \text{and} \quad y = y(u, v).$$
We have:
$$\begin{bmatrix} dx \\ dy \end{bmatrix} = J\begin{bmatrix} du \\ dv \end{bmatrix},$$
where $J$ is the Jacobian of the transformation. We therefore have, for $d\vec{u}$ and $d\vec{v}$ in cartesian coordinates:
$$d\vec{u} = J\begin{bmatrix} du \\ 0 \end{bmatrix}, \qquad d\vec{v} = J\begin{bmatrix} 0 \\ dv \end{bmatrix}.$$
Using the cross-product rule for the area of the parallelogram, we have:
$$dA' = \|d\vec{u} \times d\vec{v}\| = |\det J|\,du\,dv.$$
A partial differential equation (PDE) for a function $f(x_1, \cdots, x_n)$ is a relation of the form
$$F\left(x_1, \cdots, x_n, f, \frac{\partial f}{\partial x_1}, \cdots, \frac{\partial f}{\partial x_n}, \frac{\partial^2 f}{\partial x_i\partial x_j}, \cdots\right) = 0.$$
Two famous examples:

1. Laplace's Equation for $u(x, y)$:
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.$$
2. Wave Equation for $u(x, t)$ (describing the dynamics of a wave with speed $c$ in one spatial dimension and time):
$$\frac{\partial^2 u}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = 0.$$
Our discussion of PDEs here will be very brief, but this topic is a major part
of your multivariable calculus course and one of the applied elective courses in
the second year. If you would like to have a sneak preview and for some cool
connections to the Fourier series you saw last term, you can check out this video.
Transforming a PDE under a change of coordinates

We again consider the example of the transformation between cartesian and polar coordinates:
$$u(x, y) \longleftrightarrow u(r, \theta),$$
with $J$ being the Jacobian of the transformation. Using total differentiation we have:
$$\begin{aligned} du &= \left(\frac{\partial u}{\partial x}\right)_y dx + \left(\frac{\partial u}{\partial y}\right)_x dy \\ &= \left(\frac{\partial u}{\partial x}\right)_y\left[\left(\frac{\partial x}{\partial r}\right)_\theta dr + \left(\frac{\partial x}{\partial \theta}\right)_r d\theta\right] + \left(\frac{\partial u}{\partial y}\right)_x\left[\left(\frac{\partial y}{\partial r}\right)_\theta dr + \left(\frac{\partial y}{\partial \theta}\right)_r d\theta\right] \\ &= \left[\left(\frac{\partial u}{\partial x}\right)_y\left(\frac{\partial x}{\partial r}\right)_\theta + \left(\frac{\partial u}{\partial y}\right)_x\left(\frac{\partial y}{\partial r}\right)_\theta\right]dr + \left[\left(\frac{\partial u}{\partial x}\right)_y\left(\frac{\partial x}{\partial \theta}\right)_r + \left(\frac{\partial u}{\partial y}\right)_x\left(\frac{\partial y}{\partial \theta}\right)_r\right]d\theta. \end{aligned}$$
Now, by equating the terms in the above with the total derivative of $u(r, \theta)$, and also using the definition of $J$, we obtain the following:
$$\begin{bmatrix} \left(\frac{\partial}{\partial r}\right)_\theta \\ \left(\frac{\partial}{\partial \theta}\right)_r \end{bmatrix} u(r, \theta) = J^T\begin{bmatrix} \left(\frac{\partial}{\partial x}\right)_y \\ \left(\frac{\partial}{\partial y}\right)_x \end{bmatrix} u(x, y).$$
As an example, we transform Laplace's equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$
into polar coordinates. Inverting the relation above (or using the matrix $K$), we have:
$$\frac{\partial}{\partial x}[u] = \frac{\partial u}{\partial r}\frac{\partial r}{\partial x} + \frac{\partial u}{\partial \theta}\frac{\partial \theta}{\partial x} = \left[\cos\theta\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial \theta}\right]u.$$
Applying this operator twice:
$$\frac{\partial^2}{\partial x^2}[u] = \left[\cos\theta\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial \theta}\right]\left[\cos\theta\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial \theta}\right]u = \cos^2\theta\frac{\partial^2 u}{\partial r^2} + \frac{2\cos\theta\sin\theta}{r^2}\frac{\partial u}{\partial \theta} - \frac{2\cos\theta\sin\theta}{r}\frac{\partial^2 u}{\partial r\partial\theta} + \frac{\sin^2\theta}{r}\frac{\partial u}{\partial r} + \frac{\sin^2\theta}{r^2}\frac{\partial^2 u}{\partial\theta^2}.$$
Similarly, we can obtain $\frac{\partial^2 u}{\partial y^2}$, and adding the two we finally have Laplace's equation in polar coordinates:
$$\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2} = 0.$$
For example, if one is looking for a function $u(r)$ (with no dependence on $\theta$) fulfilling the Laplace equation, we could solve the following ODE:
$$\frac{d^2u}{dr^2} + \frac{1}{r}\frac{du}{dr} = 0.$$
Consider a first order ODE of the form
$$\frac{dy}{dx} = \frac{-F(x, y)}{G(x, y)}.$$
Suppose there is a function $u(x, y)$ whose level set $u(x, y) = 0$ defines the solution implicitly. Then:
$$du = \left(\frac{\partial u}{\partial x}\right)_y dx + \left(\frac{\partial u}{\partial y}\right)_x dy = 0 \implies \frac{dy}{dx} = \frac{-\left(\frac{\partial u}{\partial x}\right)_y}{\left(\frac{\partial u}{\partial y}\right)_x}.$$
For the solution $u(x, y) = 0$ to exist, we need the RHS of the above equation to be equal to the RHS of the ODE we are trying to solve. This will be the case if
$$F = \left(\frac{\partial u}{\partial x}\right)_y \quad \text{and} \quad G = \left(\frac{\partial u}{\partial y}\right)_x.$$
But, if that is the case, due to the symmetry of the second mixed partial derivatives, we should have:
$$\frac{\partial^2 u}{\partial y\partial x} = \frac{\partial^2 u}{\partial x\partial y} \implies \frac{\partial F}{\partial y} = \frac{\partial G}{\partial x}.$$
Conversely, when this integrability condition holds, we can find $u(x, y)$ such that
$$F = \left(\frac{\partial u}{\partial x}\right)_y \quad \text{and} \quad G = \left(\frac{\partial u}{\partial y}\right)_x,$$
and then $u(x, y) = 0$ is a solution of the first order ODE. We call this kind of ODE exact.
We can check the condition of integrability and see that the ODE is exact:
$$\frac{\partial F}{\partial y} = \frac{\partial G}{\partial x} = 2x - \cos x\sin y.$$
Since the ODE is exact, we can look for a solution in implicit form $u(x, y) = 0$ such that:
$$F(x, y) = \frac{\partial u}{\partial x} \quad \text{and} \quad G(x, y) = \frac{\partial u}{\partial y}.$$
When the ODE is not exact, sometimes we can find a function (an integrating factor) that will make the equation exact.
Example 10.3 (Is this ODE exact? If not, find an integrating factor to make it exact).
$$\frac{dy}{dx} = \frac{xy - 1}{x(y - x)}.$$
Letting $F = xy - 1$ and $G = x^2 - xy$, we see that the ODE is not exact as:
$$\frac{\partial F}{\partial y} \neq \frac{\partial G}{\partial x}.$$
So, we will try to find a $\lambda(x)$ (or $\lambda(y)$) that will make the ODE exact. We need to find $\lambda(x)$ such that:
$$\frac{\partial (\lambda F)}{\partial y} = \frac{\partial (\lambda G)}{\partial x} \quad \Rightarrow \quad \lambda x = \lambda'(x^2 - xy) + \lambda(2x - y).$$
Rearranging gives $\lambda' x (x - y) = -\lambda (x - y)$, so $\lambda'/\lambda = -1/x$ and we can take $\lambda(x) = 1/x$.
Now we can solve the exact ODE that is obtained (left as a quiz):
$$\left(y - \frac{1}{x}\right) dx + (x - y)\, dy = 0.$$
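The integrating factor can be verified symbolically without giving away the quiz. A sketch assuming sympy is available, with $F$ and $G$ taken from Example 10.3:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
F, G = x*y - 1, x**2 - x*y        # from dy/dx = -F/G in Example 10.3

# The original pair fails the exactness test: dF/dy = x, dG/dx = 2x - y
assert sp.simplify(sp.diff(F, y) - sp.diff(G, x)) != 0

# After multiplying by lambda(x) = 1/x, both derivatives equal 1
assert sp.simplify(sp.diff(F/x, y) - sp.diff(G/x, x)) == 0
```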
Using a similar approach for functions of two variables $f(x, y)$, the stationary points are the points $\vec{x}^*$ where the tangent plane is parallel to the $(x, y)$ plane:
$$\frac{\partial f}{\partial x}(\vec{x}^*) = \frac{\partial f}{\partial y}(\vec{x}^*) = 0.$$
Then, the type of stationary point can be determined using the Taylor expansion
around the stationary point 𝑥∗⃗ and by the Hessian matrix.
$$f(\vec{x}^* + \Delta\vec{x}) = f(\vec{x}^*) + \left[\frac{\partial f}{\partial x}(\vec{x}^*) \quad \frac{\partial f}{\partial y}(\vec{x}^*)\right] \Delta\vec{x} + \frac{1}{2}\Delta\vec{x}^T \begin{bmatrix} \frac{\partial^2 f}{\partial x^2}(\vec{x}^*) & \frac{\partial^2 f}{\partial x \partial y}(\vec{x}^*) \\ \frac{\partial^2 f}{\partial y \partial x}(\vec{x}^*) & \frac{\partial^2 f}{\partial y^2}(\vec{x}^*) \end{bmatrix} \Delta\vec{x}.$$
10.4. SKETCHING FUNCTIONS OF TWO VARIABLES 111
Given the fact that the first partial derivatives of 𝑓(𝑥)⃗ are zero at the stationary
points, we have:
$$f(\vec{x}^* + \Delta\vec{x}) - f(\vec{x}^*) = \frac{1}{2}\Delta\vec{x}^T H(\vec{x}^*) \Delta\vec{x}.$$
Under the assumption of continuity of the second derivatives, we have the symmetry of the second mixed partial derivatives:
$$\frac{\partial^2 f}{\partial y \partial x} = \frac{\partial^2 f}{\partial x \partial y},$$
therefore the Hessian is symmetric ($H = H^T$), which implies that $H$ is diagonalizable with real eigenvalues $\lambda_1$ and $\lambda_2$. So, there exists an orthogonal similarity transformation $V$ (with $V^{-1} = V^T$) that diagonalises the Hessian:
$$V^{-1} H V = \Lambda = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}.$$
$$f(\vec{x}^* + \Delta\vec{x}) - f(\vec{x}^*) = \frac{1}{2}\Delta\vec{x}^T \left[V(\vec{x}^*) \Lambda(\vec{x}^*) V^T(\vec{x}^*)\right] \Delta\vec{x}.$$
Defining $\Delta\vec{z} = V^T(\vec{x}^*) \Delta\vec{x}$, we obtain:
$$\Delta f = f(\vec{x}^* + \Delta\vec{x}) - f(\vec{x}^*) = \frac{1}{2}\Delta\vec{z}^T \Lambda(\vec{x}^*) \Delta\vec{z} = \frac{1}{2}\left[(\Delta z_1)^2 \lambda_1 + (\Delta z_2)^2 \lambda_2\right].$$
Given this expression, we can use the sign of the eigenvalues to classify the
stationary points.
• If 𝜆1 , 𝜆2 ∈ ℝ+ , we have Δ𝑓 > 0 as we move away from the stationary
point, suggesting 𝑥∗⃗ is a minimum.
• If 𝜆1 , 𝜆2 ∈ ℝ− , we have Δ𝑓 < 0 as we move away from the stationary
point, suggesting 𝑥∗⃗ is a maximum.
• If $\lambda_1 \in \mathbb{R}^+$ and $\lambda_2 \in \mathbb{R}^-$, we classify $\vec{x}^*$ as a saddle point, since $\Delta f$ can be positive or negative depending on the direction of $\Delta\vec{x}$.
We note that we could use the trace ($\tau$) and determinant ($\Delta$) of the matrix $H$ to determine the sign of the eigenvalues without explicitly calculating them, as done in section 7.3 in the analysis of the 2D linear ODEs. In particular, $\Delta > 0, \tau > 0$ ($\Delta > 0, \tau < 0$) suggests the eigenvalues are positive (negative) and we have a minimum (maximum). $\Delta < 0$ indicates a saddle point.
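This determinant-trace test is straightforward to code. A small numerical sketch assuming numpy is available (the function name `classify` and the tolerance `tol` are illustrative choices):

```python
import numpy as np

def classify(H, tol=1e-12):
    """Classify a 2D stationary point from its Hessian via det and trace."""
    det, tr = float(np.linalg.det(H)), float(np.trace(H))
    if det < -tol:
        return "saddle"
    if det > tol:
        return "minimum" if tr > 0 else "maximum"
    return "degenerate"   # det = 0: the second-derivative test is inconclusive

assert classify(np.array([[2.0, 0.0], [0.0, 3.0]])) == "minimum"
assert classify(np.array([[-2.0, 0.0], [0.0, -3.0]])) == "maximum"
assert classify(np.array([[2.0, 0.0], [0.0, -3.0]])) == "saddle"
```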
As an example, we sketch the function:
$$u(x, y) = (x - y)(x^2 + y^2 - 1).$$
We note the function is continuous and there are no singularities. The asymp-
totic behavior is that 𝑢(𝑥, 𝑦) → ±∞ as 𝑥, 𝑦 → ±∞.
Next, we find the level curves at zero:
$$u(x, y) = 0 \quad \Rightarrow \quad x = y \quad \text{or} \quad x^2 + y^2 - 1 = 0.$$
In the next step, we obtain the stationary points:
$$\frac{\partial u}{\partial x}(\vec{x}^*) = \frac{\partial u}{\partial y}(\vec{x}^*) = 0,$$
$$\frac{\partial u}{\partial x} = (x^2 + y^2 - 1) + 2x(x - y) = 0,$$
$$\frac{\partial u}{\partial y} = -(x^2 + y^2 - 1) + 2y(x - y) = 0.$$
Adding these two equations we obtain 𝑥∗ = 𝑦∗ or 𝑥∗ = −𝑦∗ .
1. $x^* = y^* \Rightarrow 2{x^*}^2 - 1 = 0 \Rightarrow P_1 = \left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right), P_2 = \left(-\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}\right)$.
2. $x^* = -y^* \Rightarrow 6{x^*}^2 - 1 = 0 \Rightarrow P_3 = \left(\frac{1}{\sqrt{6}}, -\frac{1}{\sqrt{6}}\right), P_4 = \left(-\frac{1}{\sqrt{6}}, \frac{1}{\sqrt{6}}\right)$.
We classify the stationary points using the Hessian:
$$H(\vec{x}) = \begin{bmatrix} 6x - 2y & 2y - 2x \\ 2y - 2x & 2x - 6y \end{bmatrix}.$$
Now we use the determinant (Δ) and trace (𝜏 ) of matrix 𝐻 at each stationary
point to classify each stationary point:
$$P_1 = \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right) \Rightarrow H(P_1) = \begin{bmatrix} \tfrac{4}{\sqrt{2}} & 0 \\ 0 & -\tfrac{4}{\sqrt{2}} \end{bmatrix} \Rightarrow \Delta < 0 \Rightarrow P_1 \text{ is a saddle point.}$$
$$P_2 = \left(-\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}\right) \Rightarrow H(P_2) = \begin{bmatrix} -\tfrac{4}{\sqrt{2}} & 0 \\ 0 & \tfrac{4}{\sqrt{2}} \end{bmatrix} \Rightarrow \Delta < 0 \Rightarrow P_2 \text{ is a saddle point.}$$
$$P_3 = \left(\tfrac{1}{\sqrt{6}}, -\tfrac{1}{\sqrt{6}}\right) \Rightarrow H(P_3) = \begin{bmatrix} \tfrac{8}{\sqrt{6}} & -\tfrac{4}{\sqrt{6}} \\ -\tfrac{4}{\sqrt{6}} & \tfrac{8}{\sqrt{6}} \end{bmatrix} \Rightarrow \Delta > 0, \tau > 0 \Rightarrow P_3 \text{ is a minimum.}$$
$$P_4 = \left(-\tfrac{1}{\sqrt{6}}, \tfrac{1}{\sqrt{6}}\right) \Rightarrow H(P_4) = \begin{bmatrix} -\tfrac{8}{\sqrt{6}} & \tfrac{4}{\sqrt{6}} \\ \tfrac{4}{\sqrt{6}} & -\tfrac{8}{\sqrt{6}} \end{bmatrix} \Rightarrow \Delta > 0, \tau < 0 \Rightarrow P_4 \text{ is a maximum.}$$
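These classifications can be confirmed symbolically. A sketch assuming sympy is available, which recomputes the stationary points and Hessians of $u(x, y) = (x - y)(x^2 + y^2 - 1)$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = (x - y)*(x**2 + y**2 - 1)

# The four stationary points found above
pts = sp.solve([sp.diff(u, x), sp.diff(u, y)], [x, y])
assert len(pts) == 4

H = sp.hessian(u, (x, y))
s2, s6 = 1/sp.sqrt(2), 1/sp.sqrt(6)

# P1 = (1/sqrt(2), 1/sqrt(2)): det(H) < 0, a saddle point
H1 = H.subs({x: s2, y: s2})
assert sp.simplify(H1.det()) < 0

# P3 = (1/sqrt(6), -1/sqrt(6)): det(H) > 0 and trace(H) > 0, a minimum
H3 = H.subs({x: s6, y: -s6})
assert sp.simplify(H3.det()) > 0 and sp.simplify(H3.trace()) > 0
```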
Given the location and type of the stationary points, we complete our sketch of the function in Figure 10.1.
Figure 10.1: The contour plot and sketch of function 𝑢(𝑥, 𝑦) = (𝑥−𝑦)(𝑥2 +𝑦2 −1)