Math 285 Notes
"...cians on other planets - all far ahead of yours - have solved it? Why, there is a chap on Saturn - he looks something like a mushroom on stilts - who solves partial differential equations mentally; and even he's given up."

"In order to solve this differential equation you look at it till a solution occurs to you."

George Pólya
Jared C. Bronski and Aldo J. Manfroi
Differential Equations
University of Illinois Mathematics
Copyright © 2023 Jared C. Bronski and Aldo J. Manfroi

tufte-latex.googlecode.com

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "as is" basis, without warranties or conditions of any kind, either express or implied. See the License for the specific language governing permissions and limitations under the License.
2.3 Non-homogeneous linear equations: Operator notation and the structure of solutions
2.3.1 Operator notation
2.4 The structure of solutions to a non-homogeneous linear differential equation
2.5 The method of undetermined coefficients
2.6 The Annihilator Method
2.6.1 Problems: Undetermined Coefficients
2.7 Variation of Parameters
2.8 The Laplace Transform
2.8.1 Laplace Transform Problems

II Boundary Value Problems, Fourier Series, and the Solution of Partial Differential Equations
... two equilibria, one stable and one unstable; for higher fishing rates h > kP0²/4 there are no equilibria.

1.15 The Euler method got some positive press in the movie "Hidden Figures", when Katherine Goble Johnson used it to calculate the orbits of the Mercury astronauts. The trajectories of the first two manned Mercury missions (Freedom 7, piloted by Alan Shepard, and Liberty Bell 7, piloted by Gus Grissom) were calculated entirely by hand by Johnson and other computers. Glenn's flight (Friendship 7) was the first to have the orbit calculated by an electronic computer. Glenn refused to fly the mission unless Johnson checked the results of the electronic computation personally. NASA; restored by Adam Cuerden, Public domain, via Wikimedia Commons

1.16 A graph of the exact solution to dy/dt = y² + t² with y(0) = 0 for t ∈ (0, 1) together with the Euler and improved Euler approximations to the solution with N = 6 subdivisions (∆t = 1/6). The step size has been deliberately chosen to be large to exaggerate the difference. It is apparent that the improved Euler method does a better job of approximating the solution than the standard Euler method, and that the fourth order Runge-Kutta can't be distinguished from the exact solution at this scale.

7.1 The temperature (in °C) at the center of the rod as a function of time.
Introduction
Ordinary Differential
Equations
1
First Order Differential Equations
F(t, y, dy/dt, d²y/dt², ..., d^k y/dt^k) = 0
Example 1.1.1. The following are all differential equations. The first five are ordinary differential equations; the remaining two are partial differential equations.
dy/dt = −ky    (1.1)

d²y/dt² = −y    (1.2)

d⁵y/dt⁵ = −y³    (1.3)

d⁸y/dt⁸ + y dy/dt = y    (1.4)

y² + (dy/dt)² = d¹¹y/dt¹¹    (1.5)

∂u/∂t = ∂²u/∂x²    (1.6)

∂u/∂t + u ∂u/∂x + ∂³u/∂x³ = 0    (1.7)
The order of a differential equation is the order of the highest derivative which appears in the equation. The orders of the equations above are 1, 2, 5, 8, 11, 2 and 3 respectively. It is important to be able to determine the order of a differential equation since this determines the way in which the differential equation is posed. In particular it determines how many pieces of initial data we must provide in order to have a unique solution.

[Margin note: Note that the term order is used in many different ways in mathematics. In the context of differential equations order refers exclusively to the order of the derivative. Don't confuse the order with the highest power that appears, or any other usage of order.]
In addition to the differential equation itself we typically need to specify a certain number of initial conditions in order to find a unique solution. The number of initial conditions that we need to specify is generally equal to the order of the differential equation. For instance for a second order differential equation we would typically specify two pieces of initial data. Usually this would be the value of the function and the value of the first derivative at the initial point t0. For instance an example might be

d²y/dt² = y³ − y,   y(0) = 1,   dy/dt(0) = −1.

[Margin note: In many applications we specify the values of the function and the first k − 1 derivatives at the point t0, so this will be assumed throughout the first part of the text. In a few important applications, however, it is necessary to specify function values at more than one point. For instance one might specify the function value(s) at t = 0 as well as t = 1. This is called a boundary value problem and will be discussed in a subsequent section of the notes.]

A differential equation where all of the data is specified at a single point is called an initial value problem. A differential equation where the data is specified at two or more points is called a boundary value problem. We will mostly discuss initial value problems in this course, although we will consider boundary value problems later in the semester.
Exercise 1.1.1
For the initial value problem

d²y/dt² = y³ − y,   y(0) = 1,   dy/dt(0) = −1,

compute the value of d²y/dt²(0) as well as d³y/dt³(0).
Notice that this can be continued indefinitely: given the value of
y(0) and dy/dt(0) we can compute the value of any other derivative at
t = 0. This explains why, for a second order equation, two pieces
of initial data are the right amount of data – once we have specified
the first two derivatives we can in principle calculate all of the higher
order derivatives.
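This bootstrapping can be carried out mechanically. The following sketch works through the equation of Exercise 1.1.1, with the chain-rule differentiations of the right-hand side done by hand and recorded in the comments; it is one way to check your answer to the exercise.

```python
# Compute higher derivatives at t = 0 for y'' = y^3 - y with
# y(0) = 1, y'(0) = -1, by repeatedly differentiating the equation
# and substituting the values already known.
y0, y1 = 1, -1                        # the given initial data y(0), y'(0)
y2 = y0**3 - y0                       # y''   = y^3 - y
y3 = (3*y0**2 - 1)*y1                 # y'''  = (3y^2 - 1) y'
y4 = 6*y0*y1**2 + (3*y0**2 - 1)*y2    # y'''' = 6y(y')^2 + (3y^2 - 1) y''
print(y2, y3, y4)                     # 0 -2 6
```

Each new derivative uses only the initial data and previously computed derivatives, which is exactly why two pieces of initial data suffice for a second order equation.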
a_n(t) d^n y/dt^n + a_{n−1}(t) d^{n−1}y/dt^{n−1} + ... + a0(t)y = f(t).

In the special case where the right hand side f(t) is zero,

a_n(t) d^n y/dt^n + a_{n−1}(t) d^{n−1}y/dt^{n−1} + ... + a0(t)y = 0,
the equation is said to be homogeneous. If the right hand side is non-
zero the equation is said to be non-homogeneous. We will see in later
chapters that, if we are able to solve the homogeneous equation, then
we will be able to solve the non-homogeneous equation.
A great deal of time in this class will be spent learning to solve
linear differential equations. Solving nonlinear differential equations
is in general quite difficult, although certain special kinds can be
solved exactly.
money in the account after i years then (assuming that no other money is added to or removed from the account in this period) P_i satisfies the equation

P_{i+1} = (1 + r)P_i.

If instead the interest is compounded twice a year we have

P_{i+1/2} = (1 + r/2)P_i
P_{i+1} = (1 + r/2)P_{i+1/2} = (1 + r/2)²P_i.

Here i is still measured in years, and r is still the yearly rate. The above equation has the solution

P_i = (1 + r + r²/4)^i P0.

The extra term r²/4 represents the fact that you are getting interest on the interest. If the interest is compounded n times per year we find that

P_i = (1 + r/n)^(ni) P0.

The continuum limit consists of letting n tend to infinity, and assuming that P_i goes over to a continuous function P(t). Using the finite difference approximation to the derivative, (P_{i+∆} − P_i)/∆ ≈ dP/dt, we find the differential equation

dP/dt = rP.

It is not hard to see that this differential equation has the solution

P(t) = P0 e^(rt).

[Figure 1.3: An initial investment of $100 at 2% (r = .02) interest compounded continuously.]

The graphs in the side margin on the previous page depict the growth of an initial investment of $100 earning 2% interest per year over a twenty-four year period in the cases where the interest is compounded yearly, quarterly (4× per year), and continuously. You can see that the graphs look very similar.
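The continuum limit can be checked directly: as the number of compounding periods n grows, (1 + r/n)^(nt) approaches e^(rt). A minimal sketch using the same setup as the margin graphs ($100 at 2% over twenty-four years):

```python
import math

P0, r, t = 100.0, 0.02, 24.0             # $100 at 2% for 24 years
for n in (1, 4, 365):                    # yearly, quarterly, daily compounding
    print(n, round(P0 * (1 + r/n) ** (n*t), 2))
print("continuous", round(P0 * math.exp(r*t), 2))
```

The yearly, quarterly, and daily values climb toward the continuously compounded value, and the gap between quarterly and continuous compounding is already well under a dollar, which is why the margin graphs look nearly identical.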
A similar model governs radioactive decay except that in the case of
radioactive decay the constant r is negative, since one loses a fixed fraction
of the population at each step.
Example 1.1.4 (Newton's Law of Motion). Let x(t) denote the position of a particle. Newton's second law of motion states that F = ma = m d²x/dt². If we assume that the force is conservative, meaning that the work done on a particle is independent of the path, then it follows that the force is given by minus the derivative of the potential energy V(x(t)). In this case we have

m d²x/dt² = F = −∇ₓV(x(t)).

This is, of course, a differential equation for the position x(t). It is second order and is generally nonlinear.
Exercise 1.1.3
Check that the function y(t) = 1 − t² satisfies the differential equation ty′ − y = −(1 + t²).

Exercise 1.1.4
Suppose that y(t) satisfies y″ + y = 0 together with the initial conditions y(0) = 1, y′(0) = 0. What is d^k y/dt^k (0) as a function of k?
Example 1.2.1. Suppose that the height y(t) of a falling body evolves according to

d²y/dt² = −g.

Find the height as a function of time.

Integrating up once we find that

dy/dt = −gt + v,

where v is a constant of integration (representing the velocity of the body at time t = 0: v = y′(0)). Integrating up a second time gives the equation

y = −gt²/2 + vt + h,

where h = y(0) is a second constant of integration representing the initial height of the body.

[Margin note: We see again a lesson from the previous section: we have a second order differential equation the solution of which involves two arbitrary constants of integration, v and h. The general solution of an equation of nth order will typically involve n arbitrary constants. Here the constants enter linearly, since the equation is linear. For a nonlinear equation the dependence on the constants could be much more complicated.]
dT/dt = −k(T − T0)

arising in Newton's law of cooling. If we divide the equation through by T − T0 we find

(dT/dt)/(T − T0) = −k,

and integrating both sides with respect to t gives

ln|T − T0| = −kt + A,

where A is a constant of integration. Exponentiating gives

|T − T0| = e^(−kt+A) = e^A e^(−kt)
T − T0 = ±e^A e^(−kt)
T = T0 + Ce^(−kt).

In the last step we have a prefactor ±e^A, where the ± comes from eliminating the absolute value. This is, in the end, just a constant so we call C = ±e^A.

[Margin note: The form of this solution should not be surprising. As t → ∞ the exponential term decays away and we have T(t) → T0. This implies that, at long times, the temperature of the body tends to the equilibrium temperature.]
dy/dt = ty,   y(0) = 1

We can rewrite this as

dy/y = t dt.

Once we have done this we are in a situation where we have only y on the left-hand side of the equation and only t on the right. Since the equation has been "separated" in this way we can integrate it. Integrating gives

∫ dy/y = ∫ t dt    (1.8)

ln|y(t)| = t²/2 + C    (1.9)

Imposing the initial condition y(0) = 1 gives C = ln 1 = 0, therefore

ln|y(t)| = t²/2
y(t) = e^(t²/2).
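One can always sanity-check such a result by substituting back into the equation. A small sketch doing this numerically for y(t) = e^(t²/2), comparing a finite-difference derivative against ty:

```python
import math

def y(t):
    # candidate solution of y' = t*y with y(0) = 1
    return math.exp(t * t / 2)

h = 1e-6
for t in (0.0, 0.5, 1.0):
    lhs = (y(t + h) - y(t - h)) / (2 * h)   # numerical derivative y'(t)
    rhs = t * y(t)
    assert abs(lhs - rhs) < 1e-4            # y' = t*y holds at each test point
print("y' = t*y checked")
```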
The most general first order equation which can be solved this way is called a separable equation, which we define as follows:

∫ dy/f(y) = ∫ g(t) dt + C

dy/dt = (t² + 5)/(y³ + 2y),   y(0) = 1
One should be careful in applying this idea. In particular one must
be careful to not introduce extraneous roots, or to remove relevant
roots, by assuming that certain quantities are non-zero. Here is an
example
Exercise 1.2.1
Solve the differential equation y′ = ty ln |y|
Exercise 1.2.2
Suppose that a projectile is fired upwards at 100 meters per second,
and that the acceleration due to gravity is 10m s−2 . At what time
does the projectile hit the ground?
Exercise 1.2.3
If the projectile has a horizontal velocity of 20 meters per second how
far has it traveled horizontally when it strikes the ground?
Exercise 1.2.4
Solve the following initial value problems.
a) y′ = cos(t) + 1, y(0) = 2
b) t²y′ = 1, y(1) = 0
c) (1/t)y′ = e^t, y(0) = 0
d) ty′ = 2, y(1) = 3
e) tyy′ = 1, y(1) = 2
f) y′ = y cos(t) + y, y(0) = 2
g) y′ = 2ty + y − 2t − 1, y(0) = 0
h) y′ = y²/(t + 1), y(0) = 2
dy/dt = f(y, t)

where we read the left-hand side as a slope and the right-hand side as a function of (t, y). At each point (t, y) we draw a small line segment of slope f(y, t). This is known as a vector field or slope field.

By "following" the slope lines we can generate a solution curve to the differential equation. This method is excellent for giving a qualitative picture of the solutions.
Example 1.3.1. Consider the differential equation

dy/dt = −t/y

[Figure 1.4: A slope field for dy/dt = −t/y (blue) together with a solution curve (red).]

The slope field associated to this equation, along with one solution curve, is shown in the margin. One can see that the curve is tangent to the line segments. Since this equation is separable it can be solved explicitly:

dy/dt = −t/y
y dy = −t dt
y²/2 = c − t²/2
y² + t² = 2c
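"Following the slope lines" can be made concrete by taking many small steps along the local slope (this is the Euler method previewed in the list of figures). A sketch for this example, starting from the hypothetical initial point (t, y) = (0, 1), where the exact curve satisfies y² + t² = 1:

```python
# Follow the slope field of dy/dt = -t/y from (t, y) = (0, 1) in small steps.
t, y, dt = 0.0, 1.0, 1e-4
while t < 0.7:
    y += dt * (-t / y)    # move a short distance along the local slope segment
    t += dt
print(round(y*y + t*t, 3))   # stays close to 1, the circle we started on
```

The numerically generated curve stays close to the circle y² + t² = 1, which is the tangency between the solution curve and the slope segments seen in Figure 1.4.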
dy/dt = t/y

While this differential equation looks quite similar to the previous one, the slope field as well as the solutions appear quite different. The solution curve looks like a hyperbola, and this can be verified by integrating the equation.
the equation

dP/dt = k(1 − P/P0)P.

Here k and P0 are positive constants. The quantity k is a growth rate and the quantity P0 is known as the carrying capacity. Note that for small populations, P < P0, the growth rate is positive, but for populations above the maximum sustainable one (P > P0) the growth rate becomes negative.

[Figure 1.6: The slope field for the logistic equation dy/dt = 2(1 − y)y.]

The slope field is shown in Figure (1.6) for parameter values k = 2 and P0 = 1, along with a typical solution curve (red). The solution grows in an exponential fashion for a while but the population saturates at the carrying capacity P0 = 1.
This model has also been applied to study the question of "peak oil." In this context the quantity P(t) represents the total amount of oil pumped from the ground from the beginning of the oil industry to time t. This is assumed to follow a logistic curve, with the carrying capacity P0 representing the total amount of accessible oil:

dP/dt = k0(P0 − P(t))P(t)

The logistic nature of the curve is meant to reflect the fact that as more oil is pumped the remainder becomes harder to recover. This makes a certain amount of sense - there is only a finite amount of oil on Earth, so it makes sense that P(t) should asymptote to a constant value. There is considerable debate as to whether this hypothesis is correct, and if so how to estimate the parameters k0, P0.

One way to estimate the constant P0, which represents the total amount of oil, is to look at P′(t), the rate at which oil is being pumped from the ground, as this is something for which we have data. It is easy to see from the graph of P(t) that P′(t) has a single maximum (hence the phrase peak oil) and decays away from there (in the graph above the maximum of P′(t) occurs at t = 0). It is not hard to calculate that P′(t), the rate of oil production, has a maximum when P = P0/2, which is to say when half of all the oil is gone. The easiest way to see this is to differentiate the differential equation:

P′(t) = k0 P(P0 − P)
P″(t) = k0(P0 − 2P)P′ = k0²P(P0 − P)(P0 − 2P)
The pumping rate P′(t) should have a maximum when P″(t) = 0, which occurs when P = 0, P = P0/2 or P = P0. From the equation we have that P′(t) = 0 when P = 0 or when P = P0, so the maximum of the pumping rate occurs when half of the oil has been pumped from the ground. Thus one way to try to estimate the total amount of oil is to look at the pumping records, try to determine when the peak production occurred (or will occur) and conjecture that, at that point, half of all the oil has been pumped.
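The claim that the pumping rate peaks at P = P0/2 can also be seen by simply scanning the right-hand side k0 P(P0 − P) across [0, P0]. A small sketch, using the illustrative values k0 = 2, P0 = 1 (my choice, matching the figure's parameters):

```python
k0, P0 = 2.0, 1.0                         # illustrative parameter values
# pumping rate P'(t) as a function of cumulative production P, on [0, P0]
rate = {P: k0 * P * (P0 - P) for P in [i / 100 for i in range(101)]}
peak = max(rate, key=rate.get)
print(peak)   # 0.5 -> production peaks when half the oil is gone
```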
While the jury is still out on this (for more details see David Goodstein's book "Out of Gas: The End of the Age of Oil") it seems that the peak oil theory has done a very good job of predicting US oil production but not such a good job predicting natural gas production.

[Figure 1.7: A graph of US oil production as a function of time from 1900 to 2020. The red curve depicts a fit to a logistic curve. By Plazak - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=42670844]

The graph in the margin shows the crude oil production in the US and a fit of the curve P′(t) to it. It is clear that the peak is around 1970. A crude way to estimate the total area under the curve is to approximate it as a triangle of base ≈ 70 yrs and a height of about 3.5 Gb/year, giving an area of about 125 Gb (= 125 billion barrels) of total production (by eyeball this looks like an overestimate). This suggests that the total amount of crude produced in the US over all time will asymptote to something like 250 Gb. The US currently consumes 15 million barrels of oil a day, about 2/3 of which are imported. All data and the US crude production graph are from Wikipedia.
does not have a unique solution. There are at least two solutions satisfying the same differential equation and the same initial condition:

y1(t) = (2t/3)^(3/2)
y2(t) = 0.
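The differential equation itself precedes this excerpt; the two displayed solutions are consistent with the classic example y′ = y^(1/3), y(0) = 0, which is assumed in the sketch below. Both functions can be checked against that equation:

```python
# Assumed equation (inferred from the two solutions): y' = y^(1/3), y(0) = 0.
# y2(t) = 0 solves it trivially; check the nontrivial solution numerically.
def y1(t):
    return (2 * t / 3) ** 1.5

h = 1e-6
for t in (0.5, 1.0):
    deriv = (y1(t + h) - y1(t - h)) / (2 * h)   # numerical y1'(t)
    assert abs(deriv - y1(t) ** (1/3)) < 1e-4   # y1 satisfies y' = y^(1/3)
assert y1(0) == 0.0                             # same initial value as y2 = 0
print("both y1 and y2 = 0 solve the same IVP")
```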
dy/dt = f(y, t),   y(t0) = y0

• If f(y, t) is continuous in a neighborhood of the point (y0, t0) then a solution exists in some rectangle |t − t0| < δ, |y − y0| < ϵ.

• If, in addition, ∂f/∂y(y, t) is continuous in a neighborhood of the point (y0, t0) then the solution is unique.
dy/dt = (y − 5) ln(|y − 5|) + t,   y(0) = 5

the function (y − 5) ln(|y − 5|) + t is continuous in a neighborhood of y = 5, t = 0. However computing the first partial with respect to y gives

∂f/∂y(y, t) = 1 + ln|y − 5|
dy^(n)/dt(t) = f(y^(n−1)(t), t),   y^(n)(t0) = y0,

which can be integrated up to give

y^(n)(t) = y0 + ∫_{t0}^{t} f(y^(n−1)(s), s) ds
dy/dt = f(y, t),   y(t0) = y0.

Suppose that f(y, t) and ∂f/∂y are continuous in a neighborhood of the point (y0, t0). Then the initial value problem has a unique solution in some interval [t0 − δ, t0 + δ] for δ > 0, and the Picard iterates y_n(t) converge to this solution as n → ∞.
y′ = y,   y(0) = 1

The first Picard iterate is y^(0)(t) = 1. The second is the solution to

dy^(1)/dt = y^(0)(t) = 1,   y^(1)(0) = 1.

The solution to this is y^(1)(t) = 1 + t. Continuing in this fashion, y^(2)(t) solves

dy^(2)/dt = y^(1)(t) = 1 + t,   y^(2)(0) = 1,

so y^(2)(t) = 1 + t + t²/2. Continuing in this fashion we find that

y^(3)(t) = 1 + t + t²/2 + t³/6
y^(4)(t) = 1 + t + t²/2 + t³/6 + t⁴/24
y^(5)(t) = 1 + t + t²/2 + t³/6 + t⁴/24 + t⁵/120

It is not difficult to see by induction that the nth Picard iterate is just the nth Taylor polynomial for the function y = e^t, which is the solution to the original differential equation.
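The iteration is mechanical enough to hand to a computer: each step integrates a polynomial and adds the constant 1, which on coefficient lists is a one-line operation. A sketch in exact rational arithmetic that reproduces the iterates above:

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard iterate for y' = y, y(0) = 1:
    y_{n+1}(t) = 1 + integral from 0 to t of y_n(s) ds,
    acting on polynomial coefficient lists [c0, c1, ...]."""
    integral = [c / (k + 1) for k, c in enumerate(coeffs)]  # term-by-term
    return [Fraction(1)] + integral                          # prepend the 1

y = [Fraction(1)]          # y^(0)(t) = 1
for _ in range(5):
    y = picard_step(y)
print(y)   # coefficients 1, 1, 1/2, 1/6, 1/24, 1/120 -> Taylor series of e^t
```

After n steps the list holds exactly the first n + 1 Taylor coefficients of e^t, making the induction in the text visible.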
Example 1.4.3. Find the first few Picard iterates of the equation
y ′ = t2 + y2 y (0) = 1
They are
y (0) ( t ) = 1
y(1) (t) = 1 + t + t3 /3
first order differential equations 33
2t3 t4 2t5 t7
y (2) ( t ) = 1 + t + t 2 +
They are
+ + + y (0) ( t ) = 1
3 6 15 63
y(1) (t) = 1 + t + t3 /3
first order differential equations 33
2t3 t4 2t5 t7
y (2) ( t ) = 1 + t + t 2 +
They are
+ + + y (0) ( t ) = 1
3 6 15 63
y(1) (t) = 1 + t + t3 /3
first order differential equations 33
2t3 t4 2t5 t7
y (2) ( t ) = 1 + t + t 2 +
They are
+ + + y (0) ( t ) = 1
3 6 15 63
y(1) (t) = 1 + t + t3 /3
first order differential equations 33
2t3 t4 2t5 t7
y (2) ( t ) = 1 + t + t 2 +
They are
+ + + y (0) ( t ) = 1
3 6 15 63
y(1) (t) = 1 + t + t3 /3
first order differential equations 33
2t3 t4 2t5 t7
47t7 41t8
y (2) ( t ) = 1 + t + t 2 +
They are
+ + +
3 6 15 63
y (0) ( t ) = 1
y(1) (t) = 1 + t + t3 /3
y (2) ( t ) = 1 + t + t 2 +
2t3
+ +
t4 2t5 t7 The solution to this is y(1) (t) = 1 + t. Continuing in this fashion y(2) (t)
solves
first order differential equations 35
47t7 41t8
4 5 8 29 47t7 41t8
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
4
3
5
6 15
+
8
63
29 47t7 41t8
so y(2) (t) = 1 + t +
dy(2)
y (3) ( t ) = 1 + t +
dt
= y (1) ( t ) = 1 + t
t2
y (2) ( 0 ) = 1
It is not difficult to see by induction that the nth Picard iterate is just the
y (4) ( t ) = 1 + t +
t2
2. Continuing in this fashion we find that
y (5) ( t ) = 1 + t +
t2
t2
2
2
+
+ +
t3
t3
6
6 24
t4
4 5 8 29
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 + +
+
299t9 4t10
3
184t11
6 15
t12
90 315
+
630 original differential equation.
11340
+
525
+
51975
+
4t13 t15
2268 12285 59535
+ +
Example 1.4.3. Find the first few Picard iterates of the equation
nth Taylor polynomial for the function y = et , which is the solution to the
2
+ +
t3
6 24 120
t4
+
t5
47t7 41t8
3 6 15 90 315 630
The pattern for the coefficients in this example is not so clear.
They are
y (0) ( t ) = 1
y 0 = t2 + y2 y (0) = 1
y(1) (t) = 1 + t + t3 /3
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
2t3
+
The solution to this is y(1) (t) = 1 + t. Continuing in this fashion y(2) (t)
11340
299t9
+
4t10
525
4
3
3
+
+ +
184t11
5
6
t4
51975
6
2t5
15
15
+
8
+
t12
63
t7
29
90
+
4t13
47t7
315
+
+
t15
41t8
630
solves
so y(2) (t) = 1 + t +
It is not difficult to see by induction that the nth Picard iterate is just the
Example 1.4.3. Find the first few Picard iterates of the equation
dy(2)
They are
for r < 1. In other words shrinks the distances between points in the
y(1) (t) = 1 + t + t3 /3
y (3) ( t ) = 1 + t +
set by a fixed contraction factor r that is strictly less than one. Then
there is a unique element h that is fixed by the map: T (h) = h, and
We will not present the full proof here, but the basic principle
y (2) ( t ) = 1 + t + t 2 +
y (4) ( t ) = 1 + t +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
y (5) ( t ) = 1 + t +
dt
t2
2.
= y (1) ( t ) = 1 + t
t2
t2
t2
2
2
+
+ +
+ +
t3
t3
t3
6
6
y (2) ( 0 ) = 1
24
t4
t4
+
t5
first order differential equations 35
4t10
525
2t3
4
3
3
+
+ +
184t11
5
6
t4
51975
6
2
2t5
15
y (0) = 1
15
6
+
8
+
t7
29
90
+
4t13
47t7
315
+
+
t15
41t8
630
The solution to this is y(1) (t) = 1 + t. Continuing in this fashion y(2) (t)
solves
so y(2) (t) = 1 + t +
It is not difficult to see by induction that the nth Picard iterate is just the
Example 1.4.3. Find the first few Picard iterates of the equation
We will not present the full proof here, but the basic principle
y (0) ( t ) = 1
y(1) (t) = 1 + t + t3 /3
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
+
dy(2)
11340
y (3) ( t ) = 1 + t +
y (4) ( t ) = 1 + t +
y (5) ( t ) = 1 + t +
299t9
dt
t2
2.
= y (1) ( t ) = 1 + t
+
y 0 = t2 + y2
4t10
525
2t3
4
3
3
+
+ +
184t11
5
6
t4
51975
6
t2
t2
t2
2
2
+
+ +
+ +
2t5
15
y (0) = 1
15
t3
t3
t3
6
+
8
+
63
24
24 120
t7
t4
t4
29
90
+
+
t5
4t13
47t7
315
+
+
t15
41t8
630
first order differential equations
The solution to this is y(1) (t) = 1 + t. Continuing in this fashion y(2) (t)
solves
so y(2) (t) = 1 + t +
It is not difficult to see by induction that the nth Picard iterate is just the
Example 1.4.3. Find the first few Picard iterates of the equation
They are
We will not present the full proof here, but the basic principle
y (0) ( t ) = 1
y(1) (t) = 1 + t + t3 /3
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
+
dy(2)
11340
y (3) ( t ) = 1 + t +
y (4) ( t ) = 1 + t +
y (5) ( t ) = 1 + t +
299t9
dt
t2
2.
= y (1) ( t ) = 1 + t
+
y 0 = t2 + y2
4t10
525
2t3
4
3
3
+
+ +
184t11
5
6
t4
51975
6
t2
t2
t2
2
2
+
+ +
+ +
2t5
15
y (0) = 1
15
t3
t3
t3
6
+
8
+
63
24
24 120
t7
t4
t4
29
90
+
+
t5
4t13
47t7
315
+
+
t15
41t8
630
first order differential equations
so y(2) (t) = 1 + t +
It is not difficult to see by induction that the nth Picard iterate is just the
Example 1.4.3. Find the first few Picard iterates of the equation
They are
We will not present the full proof here, but the basic principle
y (0) ( t ) = 1
y(1) (t) = 1 + t + t3 /3
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
+
dy(2)
11340
y (3) ( t ) = 1 + t +
y (4) ( t ) = 1 + t +
y (5) ( t ) = 1 + t +
299t9
dt
t2
2.
= y (1) ( t ) = 1 + t
+
y 0 = t2 + y2
4t10
525
2t3
4
3
3
+
+ +
184t11
5
6
t4
51975
6
t2
t2
t2
2
2
+
+ +
+ +
2t5
15
y (0) = 1
15
t3
t3
t3
6
+
8
+
63
24
24 120
t7
t4
t4
29
90
+
+
t5
4t13
47t7
315
+
+
t15
41t8
630
first order differential equations
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
+
11340
299t9
dy
dy
dt
dt
+
ty
4t10
= 2
= 2
525
2t3
4
3
3
ty
t + y2
t + y2
+
+ +
ty
ty
184t11
5
6
t4
51975
6
2t5
15
15
y (0) = 0
y (1) = 0
+
8
+
29
90
+
4t13
47t7
315
+
+
t15
41t8
630
first order differential equations
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
+
11340
299t9
dy
dy
dt
dt
+
ty
4t10
= 2
= 2
525
2t3
4
3
3
ty
t + y2
t + y2
+
+ +
ty
ty
184t11
5
6
t4
51975
6
2t5
15
15
y (0) = 0
y (1) = 0
+
8
+
29
90
+
4t13
47t7
315
+
+
t15
41t8
630
first order differential equations
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
+
11340
299t9
dy
dy
dt
dt
+
ty
4t10
= 2
= 2
525
2t3
4
3
3
ty
t + y2
t + y2
+
+ +
ty
ty
184t11
5
6
t4
51975
6
2t5
15
15
y (0) = 0
y (1) = 0
+
8
+
29
90
+
4t13
47t7
315
+
+
t15
41t8
630
first order differential equations
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
+
11340
299t9
dy
dy
dt
dt
+
ty
4t10
= 2
= 2
525
2t3
4
3
3
ty
t + y2
t + y2
+
+ +
ty
ty
184t11
5
6
t4
51975
6
2t5
15
15
y (0) = 0
y (1) = 0
+
8
+
29
90
+
4t13
47t7
315
+
+
t15
41t8
630
first order differential equations
y (2) ( t ) = 1 + t + t 2 +
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
+
11340
299t9
dy
dy
dt
dt
+
ty
4t10
= 2
= 2
525
2t3
4
3
3
ty
t + y2
t + y2
+
+ +
ty
ty
184t11
5
6
t4
51975
6
2t5
15
15
y (0) = 0
y (1) = 0
+
8
+
29
90
+
4t13
47t7
315
+
+
t15
41t8
630
first order differential equations
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
299t9 4t10 184t11 t12 4t13 t15
ciple. The contraction mapping principle is the simplest example of
what mathematicians call a fixed point theorem. Leaving aside tech-
for r < 1. In other words shrinks the distances between points in the
set by a fixed contraction factor r that is strictly less than one. Then
there is a unique element h that is fixed by the map: T (h) = h, and
4 5 8 29
+ + + + + +
Suppose we have a set Ω, and for each element f ∈ Ω we have a no-
tion of the size of each element ∥ f ∥. This is what is called a normed
Figure 1.8: An illustration of the
contraction mapping principle.
+
11340 525 51975 2268 12285 59535
space. Now suppose that we have a map T from Ω to itself such that
for any two elements f , g ∈ Ω we have that ∥ T ( f ) − T ( g)∥ ≤ ρ∥ f − g∥
for ρ < 1. In other words T shrinks the distances between points in
the set by a fixed contraction factor ρ that is strictly less than one.
Then there is a unique element h that is fixed by the map: T (h) = h,
47t7 41t8
3 6 15 90 315 630
the margin figure. The proof of the Picard existence theorem amounts
to showing that the Picard iterates form (for small enough neigh-
borhood of (t0 , y0 ) a contraction mapping, and thus converge to a
unique fixed point.
We will not present the full proof here, but the basic principle
proven we will now apply it to some examples.
Having some idea of how the existence and uniqueness theorem is
dy
origin (Why?), so the theorem does not apply. On the other hand if we had
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
nical details the contraction mapping principle says the following.
Suppose we have a set Ω, and for each element f ∈ Ω we have a no-
+ + + + + +
Figure 1.8: An illustration of the
tion of the size of each element ∥ f ∥. This is what is called a normed
4 5 8 29
contraction mapping principle.
+
for any two elements f , g ∈ Ω we have that ∥ T ( f ) − T ( g)∥ ≤ ρ∥ f − g∥
for ρ < 1. In other words T shrinks the distances between points in
the set by a fixed contraction factor ρ that is strictly less than one.
Then there is a unique element h that is fixed by the map: T (h) = h,
and this fixed point can be found by taking any starting point and
3 6 15 90 315 630
to showing that the Picard iterates form (for small enough neigh-
borhood of (t0 , y0 ) a contraction mapping, and thus converge to a
unique fixed point.
Having some idea of how the existence and uniqueness theorem is
We will not present the full proof here, but the basic principle
proven we will now apply it to some examples.
dy ty
= 2 y (0) = 0
dt t + y2
4 5 8 29
origin (Why?), so the theorem does not apply. On the other hand if we had
the initial condition
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
nical details the contraction mapping principle says the following.
Suppose we have a set Ω, and for each element f ∈ Ω we have a no-
+ + + + + +
Figure 1.8: An illustration of the
tion of the size of each element ∥ f ∥. This is what is called a normed
contraction mapping principle.
space. Now suppose that we have a map T from Ω to itself such that
+
for any two elements f , g ∈ Ω we have that ∥ T ( f ) − T ( g)∥ ≤ ρ∥ f − g∥
for ρ < 1. In other words T shrinks the distances between points in
the set by a fixed contraction factor ρ that is strictly less than one.
Then there is a unique element h that is fixed by the map: T (h) = h,
and this fixed point can be found by taking any starting point and
3 6 15 90 315 630
to showing that the Picard iterates form (for small enough neigh-
borhood of (t0 , y0 ) a contraction mapping, and thus converge to a
unique fixed point.
Having some idea of how the existence and uniqueness theorem is
We will not present the full proof here, but the basic principle
proven we will now apply it to some examples.
dy ty
= 2 y (0) = 0
dt t + y2
y (3) ( t ) = 1 + t + t 2 + t 3 + t 6 + t 5 + t 6 +
dt t + y2
+ + + + + +
Figure 1.8: An illustration of the contraction mapping principle.

notion of the size of each element ∥ f ∥. This is what is called a normed
space. Now suppose that we have a map T from Ω to itself such that
for any two elements f, g ∈ Ω we have ∥T(f) − T(g)∥ ≤ ρ∥f − g∥
for some ρ < 1. In other words T shrinks the distances between points in
the set by a fixed contraction factor ρ that is strictly less than one.
Then there is a unique element h that is fixed by the map, T(h) = h,
and this fixed point can be found by taking any starting point and
iterating the map: lim_{N→∞} T^N(f) = h. This theorem is illustrated by
the margin figure. The proof of the Picard existence theorem amounts
to showing that the Picard iterates form, for a small enough neighborhood
of (t0, y0), a contraction mapping, and thus converge to a unique fixed
point. We will not present the full proof here, but this is the basic
principle.

Having some idea of how the existence and uniqueness theorem is
proven we will now apply it to some examples.

Example 1.4.4. Consider the equation

    dy/dt = ty/(t² + y²)    y(0) = 0

The function f(t, y) = ty/(t² + y²) is NOT continuous in a neighborhood
of the origin (Why?), so the theorem does not apply. On the other hand if
we had the initial condition

    dy/dt = ty/(t² + y²)    y(1) = 0

then the function f(t, y) = ty/(t² + y²) and the partial derivative ∂f/∂y
are continuous in a neighborhood of the point y = 0, t = 1, so the equation
would have a unique solution.
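The contraction-mapping picture can be tried on a computer. The sketch below is our own illustration, not part of the text: it applies the Picard map (Ty)(t) = y0 + ∫_{t0}^t f(s, y(s)) ds on a grid, approximating the integral with the trapezoid rule; for y′ = y, y(0) = 1 the iterates converge to e^t. The function name and the numerical parameters are arbitrary choices.

```python
# Illustrative sketch: Picard iteration for y' = f(t, y), y(t0) = y0.
# Each pass applies the map (T y)(t) = y0 + integral_{t0}^t f(s, y(s)) ds,
# with the integral computed by the cumulative trapezoid rule on a grid.
def picard_iterate(f, t0, y0, t_end, n_grid=200, n_iter=30):
    h = (t_end - t0) / n_grid
    ts = [t0 + i * h for i in range(n_grid + 1)]
    ys = [y0] * (n_grid + 1)                  # starting guess: constant y0
    for _ in range(n_iter):
        g = [f(t, y) for t, y in zip(ts, ys)]
        new, acc = [y0], 0.0
        for i in range(1, n_grid + 1):        # cumulative trapezoid rule
            acc += 0.5 * h * (g[i - 1] + g[i])
            new.append(y0 + acc)
        ys = new                              # ys converges to the fixed point
    return ts, ys

# y' = y, y(0) = 1 on (0, 1): the fixed point is y(t) = e^t
ts, ys = picard_iterate(lambda t, y: y, 0.0, 1.0, 1.0)
print(ys[-1])   # close to e = 2.71828...
```

Each iteration of the loop is one application of the map T; the convergence of `ys` is exactly the convergence of the Picard iterates to the unique fixed point.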
Exercise 1.4.1
Determine if the Theorem of Existence and Uniqueness guarantees
a solution and a unique solution to the following initial value prob-
lems.

a) y′ + y²/t² = ty,  y(0) = 1         b) y′ + y²/t² = ty,  y(3) = 1
c) y′ = (y + t)/(y − t),  y(1) = 2    d) y′ = (y + t)/(y − t),  y(2) = 2
e) y′ = y^(1/3)/t²,  y(1) = 0         f) y′ + ty/cos(y) = 0,  y(0) = 0
    dy/dt + P(t)y = Q(t)    y(t0) = y0.
36 differential equations
Theorem 1.5.1. Consider the first order linear inhomogeneous initial value
problem
dy
+ P(t)y = Q(t) y ( t0 ) = y0 .
dt
If P(t) and Q(t) are continuous on an interval (a, b) containing t0 then the
initial value problem has a unique solution on ( a, b).
First consider the homogeneous equation

    dy/dt + P(t)y = 0

In this case the equation is separable with solution

    y(t) = y0 e^(−∫P(t)dt).

We're going to introduce a new variable w(t) = y(t)e^(∫P(t)dt). Note
that

    dw/dt = (dy/dt + P(t)y) e^(∫P(t)dt)
Taking the equation

    dy/dt + P(t)y = Q(t)

and multiplying through by e^(∫P(t)dt) gives

    (dy/dt + P(t)y) e^(∫P(t)dt) = Q(t) e^(∫P(t)dt)

where the lefthand side is exactly dw/dt, so

    dw/dt = Q(t) e^(∫P(t)dt)
The latter can be integrated up, since it is an exact derivative. This
gives

    w = ∫ Q(t) e^(∫P(t)dt) dt + c

and since w(t) = y(t)e^(∫P(t)dt) we can solve for y and get

    y = e^(−∫P(t)dt) ∫ Q(t) e^(∫P(t)dt) dt + c e^(−∫P(t)dt)        (1.13)

where the first term is a particular solution and the second term,
c e^(−∫P(t)dt), is a solution of the homogeneous equation.

Remark 1.5.1. Here the mathematics terminology differs somewhat from the
engineering terminology. In engineering it is customary to split the solution
into two parts, the zero input and zero state solutions. The zero state
solution is a particular solution that satisfies zero boundary conditions at
the initial point t0. This can be achieved by taking t0 as the lower limit
in the integral. The zero input solution is the solution of the homogeneous
equation that satisfies the given initial conditions.
Thus to solve the general equation

    dy/dt + P(t)y = Q(t)

we should

• Calculate the integrating factor μ(t) = e^(∫P(t)dt).
• Multiply both sides of the equation by μ(t), so that the lefthand side
  becomes the exact derivative d/dt (μ(t)y).
• Integrate up.
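The recipe can be sanity-checked numerically. The sketch below is our own check, not part of the text: it evaluates the solution formula y(t) = μ(t)⁻¹ (y0 + ∫_{t0}^t Q(s) μ(s) ds), with μ(t) = e^(∫_{t0}^t P ds), using the trapezoid rule, and compares it with the closed-form solution of y′ + 2y = 1 + t, y(0) = 0 (which is y = t/2 + 1/4 − e^(−2t)/4).

```python
# Sketch: evaluate the integrating-factor solution formula numerically.
import math

def solve_linear(P, Q, t0, y0, t_end, n=2000):
    h = (t_end - t0) / n
    t, mu, integral = t0, 1.0, 0.0
    for _ in range(n):
        t_next = t + h
        # advance mu(t) = exp(int P) and the integral of Q*mu by one
        # trapezoid step each
        mu_next = mu * math.exp(0.5 * h * (P(t) + P(t_next)))
        integral += 0.5 * h * (Q(t) * mu + Q(t_next) * mu_next)
        t, mu = t_next, mu_next
    return (y0 + integral) / mu   # y = mu^(-1) (y0 + int_{t0}^t Q mu ds)

approx = solve_linear(lambda t: 2.0, lambda t: 1.0 + t, 0.0, 0.0, 1.0)
exact = 0.5 + 0.25 - math.exp(-2.0) / 4.0   # y = t/2 + 1/4 - e^(-2t)/4 at t = 1
print(approx, exact)
```

The two printed numbers agree to several decimal places, reflecting the O(h²) accuracy of the trapezoid rule.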
Note that if y1 and y2 are any two solutions of the inhomogeneous
equation then their difference satisfies

    d(y1 − y2)/dt + P(t)(y1 − y2) = 0

so the difference between any two solutions of the inhomogeneous
problem solves the homogeneous problem.
As an example, for the equation

    y′ + ty = t³

the integrating factor is

    μ(t) = e^(∫t dt) = e^(t²/2).
Consider next the equation

    dy/dt + tan(t) y = cos(t)

The integrating factor is, in this case,

    μ(t) = e^(∫ sin(t)/cos(t) dt) = e^(−ln(cos(t))) = 1/cos(t)

Multiplying the equation through by sec(t) = 1/cos(t) gives

    sec(t) dy/dt + sec(t) tan(t) y = 1

Recognizing the lefthand side as sec(t) dy/dt + sec(t) tan(t) y = d/dt (sec(t) y)
we have

    d/dt (sec(t) y) = 1
    sec(t) y = t + C
    y = t cos(t) + C cos(t)
A tank holds 100 liters of water. Salt water with a concentration of 0.001 kg
per liter flows into the tank at 10 L min−1 and well mixed water flows out at
10 L min−1 . What is the differential equation governing the amount of salt
in the tank at time t? Solve it.
Let Y (t) represent the total amount of salt (in kilograms) in the wa-
ter. The total amount of water in the tank is a constant 100 liters. The
amount of salt flowing into the tank is .01 kg/min. The amount flow-
ing out depends on the concentration of the salt. If the amount of salt is
Y (t) then the concentration is Y (t)/100, and the amount flowing out is
10Y (t)/100 = Y (t)/10. So the differential equation is
    dY/dt = .01 − Y/10
The integrating factor is e^(∫dt/10) = e^(t/10), giving

    d/dt (Y e^(t/10)) = .01 e^(t/10)            (1.14)
    Y e^(t/10) = .1 e^(t/10) + c                (1.15)
    Y = .1 + c e^(−t/10)                        (1.16)
and the total amount of salt is asymptotic to 0.1 kg. Can you see why this
must be so? Can you show it without explicitly solving the equation?
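As a quick illustrative check (ours, not from the text), one can Euler-step the equation from two different starting amounts and watch both approach the same limit; the step size and time horizon below are arbitrary choices.

```python
# Simulate dY/dt = 0.01 - Y/10 with simple Euler steps from two initial
# amounts of salt; both runs should approach 0.1 kg.
def simulate(Y0, t_end=200.0, dt=0.01):
    Y, t = Y0, 0.0
    while t < t_end:
        Y += dt * (0.01 - Y / 10.0)   # inflow rate minus outflow rate
        t += dt
    return Y

print(simulate(0.0), simulate(1.0))   # both are close to 0.1
```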
A more interesting problem occurs when the water flows into the
tank at a different rate than it flows out. For instance:
Exercise 1.5.1
Suppose now that the water flows out of the tank at 5 L min−1 . What
is the equation governing the amount of salt in the tank? Solve it! At
what time t is the salt content in the tank at a minimum?
Exercise 1.5.2
Use the integrating factor method to solve the following first order
linear equations. If an initial condition is given then solve for the
constant, otherwise find the general solution.
a) dy/dt + 2y = 1 + t,  y(0) = 0
b) dy/dt + (2/t) y = sin(t)/t²
c) t dy/dt + y = sin(t),  y(π) = 0
d) dy/dt + (2t/(t² + 1)) y = cos(t)/(t² + 1),  y(0) = 1
e) dy/dt + (sin(t)/cos(t)) y = sin(t)/cos³(t),  y(0) = 0
    N(x, y) dy/dx + M(x, y) = 0                 (1.18)

where M, N are related in a particular way. In order to be exact there
must be some function ψ(x, y) (called the potential) such that

    ∂ψ/∂y = N(x, y)                             (1.19)
    ∂ψ/∂x = M(x, y).                            (1.20)
Exact equations are connected with vector calculus and the prob-
lem of determining if a vector field is a gradient, so you may have
previously encountered this idea in vector calculus or physics.
An obvious first question is how to recognize an exact equation.
For general functions N(x, y) and M(x, y) there is not always such
a potential function ψ(x, y). A fact to recall from vector calculus is
the equality of mixed partials: ∂²ψ/∂x∂y = ∂²ψ/∂y∂x. Differentiating the
above equations with respect to x and y respectively gives

    ∂²ψ/∂x∂y = ∂N/∂x                            (1.21)
    ∂²ψ/∂y∂x = ∂M/∂y                            (1.22)

To be slightly more careful, the equality of mixed partials is guaranteed
by Clairaut's theorem.

Theorem 1.6.1 (Clairaut). Suppose that ϕ(x, y) is defined on D, an open
subset of R². Further suppose that the second order mixed partials ϕxy and
ϕyx exist and are continuous on D. Then the mixed partials ϕxy and ϕyx
must be equal on D.
Suppose we seek a potential ψ(x, y) with

    ∂ψ/∂y = y³ + 3x²y                           (1.26)
    ∂ψ/∂x = 3y²x                                (1.27)
Well, first we have to check that this is possible. The necessary condition
is that the mixed partials ought to be equal. Differentiating gives
    ∂N/∂x = ∂/∂x (y³ + 3x²y) = 6xy
    ∂M/∂y = ∂/∂y (3xy²) = 6xy
Thus we know that there exists such a ψ. Now we have to find it. We can
start by integrating up
    ψ = ∫ (y³ + 3x²y) dy = y⁴/4 + 3x²y²/2 + f(x).
Here we are integrating with respect to y, so the undetermined constant is
a function of x. This makes sense: if we take the partial derivative of f ( x )
with respect to y we will get zero. We still don’t know f ( x ), but we can use
the other equation
ψx = 3xy2 + f ′ ( x )
comparing this to the above gives f ′ ( x ) = 0 so f ( x ) = c. Thus the solution
is given by
    ψ(x, y) = y⁴/4 + 3x²y²/2 = c
Note that this solution is IMPLICIT. You can actually solve this for y as
a function of x (the above is quadratic in y²) but you can also leave it in
this form.
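The exactness condition ∂N/∂x = ∂M/∂y can also be checked numerically. The helper below is our own illustration, not from the text: it approximates both partials with centered finite differences at a few sample points.

```python
# Illustrative exactness test for N dy/dx + M = 0: compare dN/dx and dM/dy
# (centered finite differences) at a handful of sample points.
def is_exact(M, N, points, h=1e-5, tol=1e-6):
    for x, y in points:
        dN_dx = (N(x + h, y) - N(x - h, y)) / (2 * h)
        dM_dy = (M(x, y + h) - M(x, y - h)) / (2 * h)
        if abs(dN_dx - dM_dy) > tol:
            return False
    return True

pts = [(0.5, 1.0), (1.0, 2.0), (-1.5, 0.5)]
# The example above: M = 3xy^2, N = y^3 + 3x^2 y, which is exact
print(is_exact(lambda x, y: 3 * x * y**2, lambda x, y: y**3 + 3 * x**2 * y, pts))   # True
```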
Example 1.6.2. Consider the equation

    dy/dx = −2xy/(x² + y²).

This can be rewritten as

    (x² + y²) dy + (2xy) dx = 0

which is exact. Integrating

    ψx = 2xy

with respect to x gives

    ψ = x²y + g(y)

and then

    ψy = x² + g′(y) = x² + y².

From this we can conclude that g′(y) = y² or g(y) = y³/3. Thus the solution
is

    y³/3 + x²y = c
Exercise 1.6.1
Classify each of the following equations as exact or not exact. If they
are exact find the solution in the form ψ(x, y) = c. You are given an
initial condition so you should be able to determine c.

a) (5x + 2y + 2) dy/dx + (2x + 5y + 1) = 0;  y(1) = 3
b) (3x + 7y + 2) dy/dx + (2x + 5y + 1) = 0;  y(2) = 8
c) (−3x² + 10xy − 3y² − e^(x−y)) dy/dx + (3x² − 6xy + 5y² + e^(x−y)) = 0;  y(0) = 1
d) (3xy − 3y²) dy/dx + (3x² + 3y²) = 0;  y(2) = −1
e) (6xy − 3y²) dy/dx + (3x² + 3y²) = 0;  y(−1) = −2
f) (2x + y) dy/dx + (2y − x) = 0;  y(−1) = 0
g) (3y² + xe^(xy)) dy/dx + (2x + ye^(xy)) = 0;  y(0) = 1
    ∫ dy/g(y) = ∫ f(t) dt.

As an example consider the logistic equation

    dy/dt = y(1 − y)

For small populations y the equation is approximately

    dy/dt = y − y² ≈ y.
To solve

    dy/dt = y(1 − y)

we separate variables:

    dy/(y(1 − y)) = dt

    (1/y + 1/(1 − y)) dy = dt

where the last step follows from the method of partial fractions, from calcu-
lus. Assuming that the population y is between 0 and 1 we can integrate
this up to find

    ln y − ln(1 − y) = t + c
    y/(1 − y) = e^(t+c)
    y = e^(t+c) − e^(t+c) y
    y = e^(t+c)/(1 + e^(t+c))
The side margin shows the slope field and some solutions to the logistic
equation for different values of c but they all show the same characteristic
shape: populations with positive initial conditions all tend to the asymptote
y = 1, the maximum sustainable population.
Figure 1.9: The slope field and some solution curves for the logistic
equation dy/dt = (1 − y)y.
Exercise 1.6.2
Find a general solution for each equation.

a) y′ = cos(t)/y²            b) yy′ = t + ty²
c) t²y′ = 1/cos(y)           d) t(y² + 1)y′ = y
Another class of equations that can be reduced to a separable equation is

    dy/dt = F(y/t)

These can be solved by the substitution v = y/t, or y = tv. This leads to

    dy/dt = t dv/dt + v = F(v)

which is a separable equation.
An example of this type is

    dy/dt = ty/(t² + y²)

The substitution y = tv gives

    t dv/dt + v = t²v/(t² + t²v²) = v/(1 + v²)

which is separable.
Exercise 1.6.3
Find a general solution for each equation.

a) yy′ = 2t + y²/t           b) y′ = 2(t/y + y/t)
c) ty′ = y + t/sin(t/y)      d) y′ = (2t − y)/(t + y)
    dy/dt = −(t² + 2ty + y²)/(1 + (t + y)²)
Well, it is clear that the righthand side is only a function of the sum t + y.
This suggests making the substitution v = y + t. The equation above
becomes

    dv/dt − 1 = −v²/(1 + v²)                    (1.28)
    dv/dt = 1/(1 + v²)                          (1.29)
    (1 + v²) dv = dt                            (1.30)

Integrating this up gives the equation

    v + v³/3 = t + c                            (1.31)
    y + t + (y + t)³/3 = t + c                  (1.32)
Exercise 1.6.4
Find a general solution for each equation.

a) y′ = e^(y+2t) − 2          b) y′ = (3y − t)² + 1/3
Consider the second order equation

    y″y² = y′

Since t does not appear explicitly we can regard y′ as a function of y, so
that y″ = y′ dy′/dy. The equation becomes y′ y² dy′/dy = y′, or
dy′/dy = 1/y², which integrates to y′ = c − 1/y. Thus

    dy/dt = c − 1/y = (cy − 1)/y                (1.33)
    y/(cy − 1) dy = dt                          (1.34)

which can be integrated up to get

    y/c + log(1 − cy)/c² = t − t0.
Here we have denoted the second constant of integration by t0 . Note that
as this is a second order equation we have two arbitrary constants of inte-
gration (c and t0 ) but they do not enter linearly. Also note that we have an
implicit representation for y as a function of t but we cannot solve explicitly
for y as a function of t.
Exercise 1.6.5
Find a general solution for each equation.
a) y′′ + 3y′ = te−3t b) y′′ + 2ty′ = 0
c) ty′′ = y′ d) y′′ = 2yy′
e) ey y′′ = y′ f) yy′′ = (y′ )2
Example 1.6.7. Find the general solution to the nonlinear first order equa-
tion

    dy/dt + y = cos(t) y⁵.

Making the change of variables w(t) = y(t)^(−4), dw/dt = −4y^(−5) dy/dt,
gives the following differential equation for w(t):

    −(1/4) dw/dt + w = cos(t).

This is first order linear and can be solved: the general solution is

    w(t) = Ae^(4t) + (16/17) cos(t) − (4/17) sin(t) = y(t)^(−4)

which gives

    y(t) = (Ae^(4t) + (16/17) cos(t) − (4/17) sin(t))^(−1/4)
Exercise 1.6.6
Find a general solution for each equation.

a) y′ + y = t/y               b) ty′ = 3y + ty²
c) ty′ + 2y = t³√y            d) y′ = ty³ − 2y/t
Exercise 1.6.7
Solve the following initial value problems.

a) ty′ + 2y = t³y²,  y(1) = 1
b) y′ = (y² sin(x) − 2xy − e^y)/(x² + xe^y + 2y cos(x)),  y(0) = 1
c) ty″ + 2y′ = t,  y(1) = 13/12,  y′(1) = 5/4
d) (t + 1)y′ = y²,  y(0) = 1
e) y″ = y(y′)²,  y(0) = 0,  y′(0) = 1
f) y′ = (2t − y)/(y + t),  y(0) = −1
g) y′ = 2t√y − 6y/t,  y(1) = 4
h) t²y′ = y + 3
i) y′ = (2x − 3x²y² − y³)/(2x³y + 3xy²),  y(1) = 1
We next consider autonomous equations, those of the form

    dy/dt = f(y).                               (1.35)
Again these equations are always solvable in principle, as they are
separable and can be reduced to integration, but such a representa-
tion may not be very useful. Even assuming that the integration can
be done explicitly the method naturally gives t as a function of y.
It may not be possible to explicitly invert the relation to find y as a
function of t.
The ideas in this section amount to an elaboration on the idea of
a slope field. Equation (1.35) relates the slope of the curve y(t) to the
value of f (y). A special role is played by the points where the slope
is zero. These are called equilibria.
Definition 1.7.1. An equilibrium is any constant solution to dy/dt = f(y).

If y(t) = y0 is a constant solution to Equation (1.35) then dy/dt = dy0/dt =
0 = f(y0), hence an equilibrium is a solution to f(y0) = 0.
As an example consider the equation

    dy/dt = y² − 1.
interval would be populated with downward arrows. If, on the other hand,
y > 1 or y < −1 then y² − 1 > 0, and these two regions would be filled
with upwards arrows. This is illustrated in the three margin figures. Figure
1.10 depicts the slope field. It is clear that the slopes do not depend on t, only
on y, so we don't really need a two dimensional figure – all we really need
to know is how the slopes depend on y.
In the next figure (Fig. 1.11) we have plotted the slope dy/dt = y² − 1 as a
function of y. Note that this plot is turned 90 degrees from the slope field: in
the slope field t is taken to be the independent variable, here y is taken to be
the independent variable.
Figure 1.12: A plot of the phase line for dy/dt = y² − 1. The two
equilibria are y = −1 and y = 1.

Stability is a very important concept from the point of view of
applications, since it tells us something about the robustness of the
solution. Stable equilibria are in some sense “self-correcting”. If we
start with an initial condition that is close to the equilibrium then the
dynamics will naturally act in such a way as to move the solution
closer to the equilibrium. If the equilibrium is unstable, on the other
hand, the dynamics tends to move the solution away from the equi-
librium. This means that unstable equilibria are less robust – we have
to get things just right in order to observe them.
It would be nice to have a method to decide whether a given equi-
librium is stable or unstable. The idea is to linearize about the
equilibrium: writing y(t) = y* + ϵv(t), with ϵ small, and substituting
into the equation gives

    ϵ dv/dt = f(y* + ϵv).
So far this is exact – all that we have done is to rewrite the equation.
At this point, however, we make an approximation. Specifically we
will Taylor expand the function f to first order in ϵ and use the fact
that f (y∗ ) = 0:
    ϵ dv/dt = f(y* + ϵv) ≈ f(y*) + ϵ f′(y*)v + O(ϵ²) = ϵ f′(y*)v.

So the quantity v = (y − y*)/ϵ satisfies the approximate equation

    dv/dt = f′(y*)v.
We know how to solve this equation: v exhibits exponential growth if
f ′ (y∗ ) > 0 and exponential decay if f ′ (y∗ ) < 0.
As an example consider the logistic equation

    dP/dt = P(P0 − P).
The equilibria are P = 0 and P = P0 . In this case f ′ ( P) = P0 − 2P, and
we have f ′ (0) = P0 > 0 and f ′ ( P0 ) = − P0 < 0, so the equilibrium
with zero population is unstable and the equilibrium with P = P0 is stable.
The zero population is unstable, so a small population will tend to grow
exponentially. As the population approaches P0 the growth rate slows and
the population approaches an asymptote. If the population begins above P0
then the population will decrease towards the equilibrium population P0 .
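The linearization test is easy to automate. The helper below is our own sketch, not from the text: it approximates f′(y*) with a centered difference and applies the criterion f′(y*) < 0 ⇒ stable, f′(y*) > 0 ⇒ unstable to the logistic example (P0 = 2 is an arbitrary choice).

```python
# Classify an equilibrium y* of dy/dt = f(y) by the sign of f'(y*),
# approximated with a centered finite difference.
def classify(f, y_star, h=1e-6):
    slope = (f(y_star + h) - f(y_star - h)) / (2 * h)
    return "stable" if slope < 0 else "unstable"

P0 = 2.0
f = lambda P: P * (P0 - P)                 # logistic righthand side
print(classify(f, 0.0), classify(f, P0))   # unstable stable
```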
Example 1.7.3 (Bistability). The differential equation

    dy/dt = y(1 − 2y)(y − 1)

has three equilibria: y = 0, y = 1/2, and y = 1. Checking the sign of the
derivative of the righthand side at each shows that the equilib-
rium y = 1/2 is unstable, while y = 0 and y = 1 are stable. The phase line for
this equation is depicted in the margin.
Flip-flops are ubiquitous in consumer electronics, as they form the basis
of logic gates and RAM memory, and one can buy any number of integrated
circuits with various numbers and types of flip-flops on a single chip.
Bistability means that they will remain in one of the two stable states
(either 0 or 1) until some external force comes along to change the state.

The first flip-flop was built in 1918 by W. Eccles and F.W. Jordan using a
pair of vacuum tubes. Vacuum tube flip-flops were used in the Colossus
code-breaking computer in Bletchley Park during World War II.

Figure 1.13: The phase-line for a bistable system (flip-flop),
dy/dt = y(1 − 2y)(y − 1). The equilibria y = 0 and y = 1 are stable, the
equilibrium y = 1/2 unstable.

Exercise 1.7.1
Exercise 1.7.1 unstable.
Recall that the logistic model has a stable equilibrium – populations tend
exponentially to the stable equilibrium P0 – and an unstable equilibrium
P = 0.
One might try to model the effect of fishing on a fish population by a
model of the following form

    dP/dt = kP(P0 − P) − h.                     (1.36)

Figure 1.14: The bifurcation diagram for the logistic model with constant
harvesting, dP/dt = kP(P0 − P) − h. For low fishing rates, h < kP0²/4,
there are two equilibria, one stable and one unstable. For higher fishing
rates h > kP0²/4 there are no equilibria.

Here the first term is the standard logistic term and the second term is a
constant, which is meant to represent fish being caught at a constant rate.
Let's assume that h, the rate at which fish are caught, is something that we
can set by a policy decision. We'd like to understand how the fish population
is influenced by h, and how large a fishing rate we can allow before it ceases
to be sustainable.
The righthand side is a quadratic, so there are two roots and thus two
equilibria. The quadratic formula gives the equilibria as
    P = (P0 ± √(P0² − 4h/k)) / 2
so if √(h/k) is less than P0/2 there are two positive real roots. The lower
one has positive derivative and the upper one has negative derivative so the
upper one is stable and the lower one is unstable. So if √(h/k) < P0/2 there
is always a stable fixed point with a population greater than P0/2.
Now consider what happens if √(h/k) is greater than P0/2, or h > kP0²/4.
In this case the roots P = (P0 ± √(P0² − 4h/k))/2 are complex, and so there
are no equilibria. In this case the righthand side of Equation (1.36) is always
negative. This implies that the population rapidly decays to zero and the
population crashes.
Consider this from a policy point of view. Imagine that, over time, the
fishing rate is slowly increased until h is just slightly less than kP0²/4. As
the fishing rate is increased the stable equilibrium population decreases, but
it is always larger than P0/2, and there is a second, unstable equilibrium
just below P0/2. There is no obvious indication that the population is
threatened, but if h is further increased then the two equilibria vanish and
the population crashes.
This sudden disappearance of a stable and an unstable equilibrium is
known as a “saddle-node bifurcation”. It is illustrated in the margin
Figure (1.14), which plots the locations of the two equilibria (the stable and
the unstable) as a function of the fishing rate h. The stable branch is depicted
by a solid line, the unstable branch by a dashed line. For small values of h
there is one stable equilibrium and one unstable one, but at h = kP0²/4 the
two equilibria collide and vanish, and for h > kP0²/4 there are no equilibria.
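The bifurcation can be seen in a few lines of code. This sketch is our own illustration: it counts the real equilibria of kP(P0 − P) − h as h passes the critical value; the values k = 1, P0 = 2 (critical rate kP0²/4 = 1) are arbitrary choices.

```python
# Count the real equilibria of dP/dt = k P (P0 - P) - h as h varies.
import math

def equilibria(k, P0, h):
    disc = P0**2 - 4.0 * h / k        # discriminant of k P (P0 - P) - h = 0
    if disc < 0:
        return []                      # no equilibria: the population crashes
    r = math.sqrt(disc)
    return sorted({(P0 - r) / 2.0, (P0 + r) / 2.0})

k, P0 = 1.0, 2.0
print(len(equilibria(k, P0, 0.5)))     # 2: one stable, one unstable
print(len(equilibria(k, P0, 1.0)))     # 1: the two equilibria collide
print(len(equilibria(k, P0, 1.5)))     # 0: past the bifurcation
```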
Method 1.8.1. The first order Euler scheme for the differential equation

    dy/dt = f(y, t)    y(a) = y0                (1.37)

on the interval (a, b) is the following iterative procedure. For some choice
of N we divide the interval (a, b) into N subintervals with ∆t = (b − a)/N
and ti = a + i∆t. We then define yi by the following iteration

    yi+1 = yi + f(yi, ti)∆t.
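Method 1.8.1 can be implemented in a few lines; the code below is our own sketch (function name and test problem are arbitrary choices).

```python
# A direct implementation of the first order Euler scheme: starting from
# y(a) = y0, take N steps y_{i+1} = y_i + f(y_i, t_i) * dt.
def euler(f, a, b, y0, N):
    dt = (b - a) / N
    t, y = a, y0
    for i in range(N):
        y = y + f(y, t) * dt          # the Euler update
        t = a + (i + 1) * dt
    return y

# dy/dt = y, y(0) = 1: Euler tends to the exact value e as N grows
approx = euler(lambda y, t: y, 0.0, 1.0, 1.0, 1000)
print(approx)   # about 2.717, an O(dt) underestimate of e
```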
If the Euler method is the equivalent of the left-hand rule for Riemann
sums then the equivalent of the midpoint rule is the following rule:

    yi+1 = yi + f(yi + f(yi, ti)∆t/2, ti + ∆t/2)∆t.

This rule is also called the midpoint rule, and you can see why. In-
stead of evaluating f at (ti, yi) it is evaluated at the midpoint between
this and the next point. This is second order accurate: we have that
yi − y(ti) = O(∆t²). You can also see that this rule requires a bit more
computational work: we have to evaluate the function f(y, t) twice
for each step that we take. A different second order method is called
Heun's method or the improved Euler method. It is defined by

    yi+1 = yi + f(yi, ti)∆t/2 + f(yi + f(yi, ti)∆t, ti + ∆t)∆t/2

Figure 1.16: A graph of the exact solution to dy/dt = y² + t² with
y(0) = 0 for t ∈ (0, 1) together with the Euler and improved Euler
approximations to the solution with N = 6 subdivisions (∆t = 1/6). The
step size has been deliberately chosen to be large to exaggerate the
difference. It is apparent that the improved Euler method does a better
job of approximating the solution than the standard Euler method, and
that the fourth order Runge-Kutta can't be distinguished from the exact
solution.
The fourth order Runge-Kutta scheme (RK4) is defined by the iteration

    k1 = f(yi, ti)∆t
    k2 = f(yi + k1/2, ti + ∆t/2)∆t
    k3 = f(yi + k2/2, ti + ∆t/2)∆t
    k4 = f(yi + k3, ti + ∆t)∆t
    yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
and is the natural analog to Simpson’s rule for numerical integration.
RK4 is a fourth order method, yi − y(ti ) = O(∆t4 ). There are many
other methods that one can define – each has its own advantages and
disadvantages.
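The RK4 steps above translate directly into code. The sketch below is our own; for dy/dt = y, y(0) = 1 the exact answer at t = 1 is e, and the global error should be O(∆t⁴).

```python
# A direct implementation of the RK4 iteration.
import math

def rk4(f, a, b, y0, N):
    dt = (b - a) / N
    t, y = a, y0
    for _ in range(N):
        k1 = f(y, t) * dt
        k2 = f(y + k1 / 2, t + dt / 2) * dt
        k3 = f(y + k2 / 2, t + dt / 2) * dt
        k4 = f(y + k3, t + dt) * dt
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6   # the RK4 update
        t += dt
    return y

err = abs(rk4(lambda y, t: y, 0.0, 1.0, 1.0, 100) - math.e)
print(err)   # tiny: the fourth order error is far below the Euler error
```

Comparing this error against the Euler error for the same N makes the difference in order of accuracy vivid.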
2
Higher Order Linear Equations
In this chapter we study linear differential equations, those of the form

    d^n y/dt^n + a_{n−1}(t) d^{n−1}y/dt^{n−1} + a_{n−2}(t) d^{n−2}y/dt^{n−2} + ... + a0(t) y = f(t).
In other words a linear differential equation is a linear relationship
between y and its first n derivatives. It can depend on the indepen-
dent variable t in an arbitrary way. To begin with we first mention
the fundamental existence and uniqueness theorem. Since the differ-
ential equation depends linearly on y and its derivatives the existence
theorem becomes simpler, and can be stated solely in terms of the
coefficients ak (t) and f (t).
Consider the initial value problem

    d^n y/dt^n + Σ_{k=0}^{n−1} a_k(t) d^k y/dt^k = f(t),
    y(t0) = y0,  y′(t0) = y0′,  ...,  y^(n−1)(t0) = y0^(n−1).

If the functions {a_k(t)}_{k=0}^{n−1} and f(t) are all continuous in an open
interval t ∈ (a, b) containing t0 then the given initial value problem has a
unique solution defined for all t ∈ (a, b).
Note that this result is much stronger than the general (nonlinear)
existence/uniqueness result. The nonlinear result only guarantees the
existence of a solution in some small neighborhood about the initial
condition. We don’t really know a priori how big that interval might
be. For linear equations we know that as long as the coefficients are
well-behaved (continuous) then a unique solution exists.
It is worth defining a couple of pieces of terminology that will be
important.
A linear differential equation in which the forcing term f(t) is zero,

    d^n y/dt^n + Σ_{k=0}^{n−1} a_k(t) d^k y/dt^k = 0,

is called “homogeneous”. A linear differential equation in which the forcing
term f(t) is non-zero is called “non-homogeneous” or “inhomogeneous”.
Example 2.0.1.

Inhomogeneous:
    dy/dt − 5y = t
    d²y/dt² + 2 dy/dt + y = cos t + e^t
    d⁴y/dt⁴ + cos t d³y/dt³ + y = 1/(1 + t³)

Homogeneous:
    dy/dt − 3y = 0
    d²y/dt² − dy/dt + 5y = 0
    d⁴y/dt⁴ + sin t d³y/dt³ + e^t y = 0
Exercise 2.0.1
Without actually solving the problems, determine in which interval I
we are guaranteed a unique solution for each initial value problem.

a) y‴ − (t²/(t − 2)) y′ + (cos(t)/(t + 3)) y = sin(t)/t²,  y(−1) = 2, y′(−1) = 3, y″(−1) = 4
b) y‴ − (t²/(t − 2)) y′ + (cos(t)/(t + 3)) y = sin(t)/t²,  y(1) = 2, y′(1) = 3, y″(1) = 4
c) y‴ − (t²/(t − 2)) y′ + (cos(t)/(t + 3)) y = sin(t)/t²,  y(3) = 2, y′(3) = 3, y″(3) = 4
d) (t − 2)y″ + ((t + 1)/sin(t)) y = e^t,  y(1) = 0, y′(1) = 1
Consider the linear homogeneous equation

    d^n y/dt^n + Σ_{k=0}^{n−1} a_k(t) d^k y/dt^k = 0.

If y1(t), y2(t), ..., ym(t) are all solutions to the above equation then any
arbitrary linear combination, y(t) = c1 y1(t) + c2 y2(t) + ... + cm ym(t), is
also a solution.
This follows from the linearity of the derivative, together
with the fact that the derivatives of y enter into the differential equa-
tion linearly.
As an example consider the equation

    d²y/dt² + y = 0.
It is not hard to guess that the functions y1 (t) = cos(t), y2 (t) = sin(t)
both satisfy the equation. By the superposition theorem y = c1 cos(t) +
c2 sin(t) is also a solution. If one is given the equation with boundary condi-
tions
y′′ + y = 0 y(0) = y0 y′ (0) = y0′
we can try the solution y = c1 cos(t) + c2 sin(t) and see if we can solve for
c1, c2. Substituting t = 0 gives c1 = y0 and, after differentiating, c2 = y0′,
so we can always solve for the constants. We would like to understand when
this can be done in the case of higher order equations. We'll need some facts
from linear algebra regarding the solvability of a linear system of equations.
The first is the idea of linear independence. A set of vectors
v1, v2, ..., vk is linearly independent if Σ ci vi = 0 implies that ci = 0
for all i. In other words a set of vectors is linearly independent if the
only linear combination of the vectors that adds up to the zero vector
is when the coefficient of each vector individually is zero. We extend
this same definition to functions.
For example, the functions sin t and cos t are linearly independent:
evaluating c1 sin t + c2 cos t = 0 at t = π/2 and at t = 0 forces

    c1 = 0
    c2 = 0.

Similarly the functions y1(t) = sin t and y2(t) = 2 sin t are linearly
dependent, since 2y1 − y2 = 0.
The second fact we need from linear algebra is the following one
about the solutions of a set of n linear equations in n unknowns,
which is a standard fact from any first course in linear algebra.
Consider the homogeneous equation

    d^n y/dt^n + Σ_{k=0}^{n−1} a_k(t) d^k y/dt^k = 0

with initial conditions y(t0) = y0, y′(t0) = y0′, ..., y^(n−1)(t0) = y0^(n−1).
If y1(t), ..., yn(t) are solutions and we look for a solution of the form
y = c1 y1 + ... + cn yn then the initial conditions become the linear system

    | y1(t0)        y2(t0)        y3(t0)        ...  yn(t0)        | | c1 |   | y0        |
    | y1′(t0)       y2′(t0)       y3′(t0)       ...  yn′(t0)       | | c2 |   | y0′       |
    | ...                                                          | | .. | = | ...       |
    | y1^(n−1)(t0)  y2^(n−1)(t0)  y3^(n−1)(t0)  ...  yn^(n−1)(t0)  | | cn |   | y0^(n−1)  |

This system is solvable for arbitrary initial data exactly when the
determinant of the coefficient matrix is non-zero. This determinant is
called the Wronskian:

    W(y1, y2, ..., yn)(t) = det | y1(t)        y2(t)        y3(t)        ...  yn(t)        |
                                | y1′(t)       y2′(t)       y3′(t)       ...  yn′(t)       |
                                | ...                                                      |
                                | y1^(n−1)(t)  y2^(n−1)(t)  y3^(n−1)(t)  ...  yn^(n−1)(t)  |
Proof. If y1 (t), y2 (t), . . . , yn (t) are linearly dependent then there ex-
ists constants c1 , c2 , . . . cn not all zero such that c1 y1 (t) + c2 y2 (t) +
. . . cn yn (t) = 0. Differentiating with respect to t shows that c1 y1′ (t) +
c2 y2′ (t) + . . . cn y′n (t) = 0, and similarly all higher derivatives. Thus we
have a linear combination of the columns of the matrix which sums
to zero, and thus the determinant is zero.
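For two solutions the Wronskian reduces to W = y1 y2′ − y1′ y2, which is easy to check numerically. The sketch below is our own illustration, with the derivatives replaced by centered finite differences; for y1 = cos t, y2 = sin t we expect W = cos²t + sin²t = 1 at every t.

```python
# Two-function Wronskian W = y1*y2' - y1'*y2, with the derivatives
# approximated by centered finite differences.
import math

def wronskian2(y1, y2, t, h=1e-5):
    d1 = (y1(t + h) - y1(t - h)) / (2 * h)
    d2 = (y2(t + h) - y2(t - h)) / (2 * h)
    return y1(t) * d2 - d1 * y2(t)

print(wronskian2(math.cos, math.sin, 0.7))   # approximately 1.0
```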
    d^n y/dt^n + Σ_{i=0}^{n−1} a_i(t) d^i y/dt^i = 0

with initial conditions

    y(t0) = y0
    y′(t0) = y0′
    ...
    y^(n−1)(t0) = y0^(n−1)
Theorem 2.1.4 (Abel). Suppose that y1(t), y2(t), ..., yn(t) are solutions to
the homogeneous linear differential equation

    d^n y/dt^n + a_{n−1}(t) d^{n−1}y/dt^{n−1} + a_{n−2}(t) d^{n−2}y/dt^{n−2} + ... + a0(t) y = 0.

Then the Wronskian W(t) solves the FIRST ORDER homogeneous linear
differential equation

    W′ + a_{n−1}(t) W = 0
and so W(t) = c e^(−∫a_{n−1}(t)dt).

Corollary 2.1.1. Suppose that y1(t), y2(t), ..., yn(t) are solutions to the
homogeneous linear differential equation

    d^n y/dt^n + a_{n−1}(t) d^{n−1}y/dt^{n−1} + a_{n−2}(t) d^{n−2}y/dt^{n−2} + ... + a0(t) y = 0

with ai(t) continuous on the whole real line. Then the Wronskian is either
identically zero or it is never zero.
For example the equation

    y″ + y = 0

has solutions y1(t) = cos(t) and y2(t) = sin(t). It follows from Abel's
theorem that the Wronskian solves

    W′ = 0

so the Wronskian is constant; indeed W = cos²(t) + sin²(t) = 1.
Similarly two solutions of

    y″ − (4/t) y′ + (6/t²) y = 0

are y1 = t² and y2 = t³. Notice that the coefficients are continuous
everywhere except t = 0. It follows that the Wronskian solves
    W′ − (4/t) W = 0
which has the solution W = ct⁴. Computing the Wronskian of y1 and y2
gives

    W = y1 y2′ − y1′ y2 = t²(3t²) − (2t)(t³) = t⁴.
The Wronskian is zero at t = 0 and non-zero everywhere else. Thus we can
satisfy an arbitrary initial condition everywhere EXCEPT at t = 0.
Exercise 2.1.1
Determine if the following sets of solutions are linearly independent
or dependent by calculating the Wronskian.
We now turn to constant coefficient homogeneous equations

    d^n y/dt^n + p_{n−1} d^{n−1}y/dt^{n−1} + ... + p1 dy/dt + p0 y = 0.

Looking for solutions of the form y = e^(rt) gives the characteristic equation

    r^n + p_{n−1} r^{n−1} + p_{n−2} r^{n−2} + ... + p1 r + p0 = 0.
For example, consider

    y″ − 3y′ + 2y = 0
    y(0) = 1
    y′(0) = 4

The characteristic equation is r² − 3r + 2 = (r − 1)(r − 2) = 0, with roots
r1 = 1 and r2 = 2, so the general solution is y = Ae^t + Be^(2t). The initial
conditions give

    y(0) = A + B = 1
    y′(0) = A + 2B = 4

so B = 3, A = −2 and the solution is y(t) = −2e^t + 3e^(2t).
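The small linear system for A and B can of course be solved mechanically. A throwaway sketch using Cramer's rule (our own helper; for anything larger one would use a linear-algebra library):

```python
# Solve the 2x2 system from the example by Cramer's rule (illustrative only).
def solve2(a11, a12, b1, a21, a22, b2):
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

A, B = solve2(1.0, 1.0, 1.0,    # y(0)  = A + B  = 1
              1.0, 2.0, 4.0)    # y'(0) = A + 2B = 4
print(A, B)   # -2.0 3.0
```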
More generally, if the characteristic equation

    r^n + p_{n−1} r^{n−1} + p_{n−2} r^{n−2} + ... + p1 r + p0 = 0

has distinct roots r1, r2, ..., rn then the equation

    d^n y/dt^n + p_{n−1} d^{n−1}y/dt^{n−1} + ... + p1 dy/dt + p0 y = 0

has solutions y1 = e^(r1 t), y2 = e^(r2 t), ..., yn = e^(rn t), and these are
linearly independent.
We won’t prove this, but it follows from an identity for the deter-
minant of what is known as a Vandermonde matrix. The Wronskian
is given as the product of an exponential (which never vanishes)
times a product over the differences of roots. If all of the roots are
distinct the difference between any two roots is non-zero, and thus
the product is non-zero, and so the Wronskian is never zero.
For example, consider the characteristic equation

    r³ + 6r² + 3r − 10 = 0.

It is easy to see that the roots are r1 = 1, r2 = −2 and r3 = −5. This gives
the solutions y1 = e^t, y2 = e^(−2t) and y3 = e^(−5t).
Since the roots are distinct the solutions are guaranteed to be linearly inde-
pendent. In fact we know that the Wronskian satisfies

    W′ + 6W = 0.
Exercise 2.2.1
Below are a list of polynomials together with a root of that polyno-
mial. Find the multiplicity of the root.
a) P(r ) = (1 − r2 ) for r = 1
b) P(r ) = (1 − r )2 for r = 1
c) P(r ) = r3 − 3r2 + 4 for r = 2
d) P(r ) = r3 − 3r2 + 4 for r = 1
e) P(r ) = r5 + r3 for r = 0
There are some subtleties connected with complex roots and multi-
ple roots. The next two subsections illustrate this.
e(µ+iσ)t = eµt (cos σt + i sin σt) = eµt cos σt + i eµt sin σt.
Recall that

    r^n + p_{n−1} r^{n−1} + p_{n−2} r^{n−2} + ... + p1 r + p0 = 0

is the characteristic equation of

    d^n y/dt^n + p_{n−1} d^{n−1}y/dt^{n−1} + ... + p1 dy/dt + p0 y = 0.

For example the characteristic equation of the undamped spring equation
my″ + ky = 0 is

    mr² + k = 0,

which has the pure imaginary roots r = ±i√(k/m).
Complications also arise from repeated roots. Consider the equation

    y″ − 2y′ + y = 0.

Looking for a solution in the form y = e^(rt) gives the characteristic equation

    r² − 2r + 1 = (r − 1)² = 0

which has a double root r = 1. This gives only the single solution y1 = e^t;
a second linearly independent solution turns out to be y2 = te^t.
To summarize: recall that

    d^n y/dt^n + p_{n−1} d^{n−1}y/dt^{n−1} + ... + p1 dy/dt + p0 y = 0

is a constant coefficient nth order differential equation, and that

    r^n + p_{n−1} r^{n−1} + p_{n−2} r^{n−2} + ... + p1 r + p0 = 0

is the characteristic polynomial. Recall that the total number of roots of the
polynomial counted according to multiplicity is n. For each simple root
(multiplicity one) ri we get a solution

    yi(t) = e^(ri t)
while for each root ri of multiplicity k we get the k solutions

    y1(t) = e^(ri t)
    y2(t) = t e^(ri t)
    y3(t) = t² e^(ri t)
    ...
    yk(t) = t^(k−1) e^(ri t)
For a complex conjugate pair of roots r = μ ± iσ we get the solutions

    y1(t) = e^((μ+iσ)t)
    y2(t) = e^((μ−iσ)t)

which may be replaced by the real solutions e^(μt) cos(σt) and e^(μt) sin(σt).
For example, for an equation with the triple root r = 1 the general solution
is y(t) = (A + Bt + Ct²)e^t. The initial conditions y(0) = 0, y′(0) = 0,
y″(0) = 1 give

    y(0) = A = 0
    y′(0) = A + B = 0
    y″(0) = A + 2B + 2C = 1

which can be solved to find A = 0, B = 0, C = 1/2 and the solution to the
initial value problem y(t) = (1/2) t² e^t.
As another example consider the characteristic equation

    r⁴ + 2r³ − 2r − 1 = 0.

This factors as (r + 1)³(r − 1) = 0, so r = −1 is a root of multiplicity three
and r = 1 is a simple root. This gives the four solutions

    y1(t) = e^(−t)
    y2(t) = te^(−t)
    y3(t) = t²e^(−t)
    y4(t) = e^t
Writing the general solution as y(t) = Ae−t + Bte−t + Ct2 e−t + Det we
Exercise 2.2.2
Find general solutions to the following differential equations
a) y′′ + y′ − 6y = 0
b) y′′′ − 6y′′ + 9y′ = 0
c) y′′′ + 4y′ = 0
d) y′′′ + 3y′′ + 3y′ + y = 0
e) y′′′′ + 2y′′ + y = 0
f) y′′′ − 3y′′ + 4y = 0
g) y′′′ − 5y′′ + y′ − 5y = 0
h) y′′′′ − 8y′′′ + 16y′′ = 0
i) y′′′ + 4y′′ + 6y′ = 0
If we define the operator

    L = d²/dt² + 5 d/dt + 4
then the operator L is something that acts on functions and returns
another function. For instance L acting on a function y gives
    Ly = y″ + 5y′ + 4y.

Similarly if

    L = (1 + t²) d²/dt² + 9t d/dt − e^t
then

    Ly = (1 + t²)y″ + 9ty′ − e^t y.
Note that in each of these cases the operator L has the property
that L(ay1(t) + by2(t)) = aLy1(t) + bLy2(t). This is called a linear
operator. This follows from the fact that the derivative (and hence
the nth derivative) is a linear operator, and so a linear combination
of these is also a linear operator. Notice that a linear homogeneous
differential equation can always be written in the form
    Ly = 0

and a linear inhomogeneous differential equation in the form

    Ly = f(t).

Suppose that the non-homogeneous equation Ly = f(t) has a solution
y_part(t) (called the particular solution). This solution need not involve
any arbitrary constant. Then the general solution to Ly = f(t) is
y(t) = y_part(t) + y_homog(t), where y_homog(t) is the general solution of
the homogeneous equation Ly = 0.

As an example consider

    y″ + y = t.
It isn’t hard to see that if y(t) = t; y′ (t) = 1; y′′ (t) = 0 and so y′′ + y =
0 + t = t. So the general solution is the particular solution y p (t) = t plus a
solution to the homogeneous problem. The homogeneous problem is
y′′ + y = 0
which has the solution yhomog (t) = A sin t + B cos t. Therefore the general
solution to
y′′ + y = t
is given by y = t + A sin t + B cos t.
Example 2.4.2. Find the zero input response and zero state response to

    y′ + y = 1    y(0) = 2.

It is not hard to guess that a particular solution is y_part(t) = 1, which
gives the zero state response (the solution with zero initial condition) as

    y_zs = 1 − e^(−t).

The zero input response is the solution of the homogeneous equation with
the given initial condition, y_zi = 2e^(−t), and the full solution is
y = y_zs + y_zi = 1 + e^(−t).
The method of undetermined coefficients applies when

    Ly = f(t)

with

    L = d^n/dt^n + p1 d^{n−1}/dt^{n−1} + ... + pn

with p1, p2, ..., pn constant, and f(t) is a forcing term that can be
written as a

• Polynomial in t
• sin(ωt) or cos(ωt)
• Exponential e^(at)

or sums and products of these.
For example, each of the following qualifies:

    f(t) = cos(t)
    f(t) = t⁵
    f(t) = e^(−6t)
    f(t) = e^t cos(t) + t¹¹ sin(t) − 256 sin³(3t)
    f(t) = t³e^(−t) cos(t) + t² sin(5t) − 11
    f(t) = e^t − 22t cos(5t)
To find a particular solution you should

• Make a guess (ansatz) for y(t) in the same form as f(t) (e.g. for a
  polynomial Pn(t) = Σ ai t^i guess a general polynomial of the same
  degree), with undetermined coefficients A1, A2, ..., An.
• Substitute the guess into the differential equation and solve the
  resulting linear equations for the coefficients.
For example, to find a particular solution of

    y″ + 3y′ − 4y = sin t

we guess y = A1 sin t + A2 cos t, which gives

    y′ = A1 cos t − A2 sin t
    y″ = −A1 sin t − A2 cos t
    y″ + 3y′ − 4y = −(5A1 + 3A2) sin t + (3A1 − 5A2) cos t.

We need

    −(5A1 + 3A2) = 1
    3A1 − 5A2 = 0
since that will give us the result that we want. We can solve for A1
and A2 to find A1 = −5/34, A2 = −3/34.
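The answer can be checked numerically. The sketch below is our own: the particular solution y_p = −(5/34) sin t − (3/34) cos t should make y″ + 3y′ − 4y − sin t vanish, with the derivatives approximated by centered finite differences.

```python
# Residual check of the undetermined-coefficients answer.
import math

yp = lambda t: -(5 / 34) * math.sin(t) - (3 / 34) * math.cos(t)

def residual(t, h=1e-4):
    d1 = (yp(t + h) - yp(t - h)) / (2 * h)            # y'
    d2 = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h**2   # y''
    return d2 + 3 * d1 - 4 * yp(t) - math.sin(t)

print(max(abs(residual(k / 10)) for k in range(1, 30)))   # near zero
```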
Exercise 2.5.1
Suppose that in the example above one tried a solution of the form
y = A1 sin t + A2 cos t + A3 e2t . Show that you must have A3 = 0.
Exercise 2.5.2
(1) State the form for the solution by undetermined coefficients and
(2) find the solution. Does the “naive” version work for the last prob-
lem?
To find a particular solution to
Ly = f (t)
guess y(t) of the same form as f (t), unless one or more of the terms of your
guess is itself a solution to the homogeneous equation
Ly = 0;
in this case multiply these terms by the smallest power of t such that none of
the terms in your guess satisfies the homogeneous equation.
Example 2.5.1. Consider the differential equation
d^2y/dt^2 − y = t^2 e^t + sin t
Normally when we see t^2 e^t we would guess y = At^2 e^t + Bte^t + Ce^t +
D sin t + E cos t. In this case, however, the homogeneous problem
d^2y/dt^2 − y = 0 has two linearly independent solutions y_1 = e^t and y_2 = e^{−t}.
That means that we should try a solution of the form y = At^3 e^t + Bt^2 e^t +
Cte^t + D sin t + E cos t. Note that we don't multiply the sin t or cos t terms
by t, as they are not solutions to the homogeneous equation.
Example 2.5.2. Consider the differential equation
d^4y/dt^4 − 2 d^3y/dt^3 + 2 d^2y/dt^2 − 2 dy/dt + y = t + 5 sin t + 3e^t + 2te^t
The characteristic equation for the homogeneous problem is
r4 − 2r3 + 2r2 − 2r + 1 = 0
This has four roots: r = ±i are simple roots and r = 1 is a double root. This
gives four solutions to the homogeneous problem
y_1(t) = sin t
y_2(t) = cos t
y_3(t) = e^t
y_4(t) = te^t
Normally for the right-hand side f (t) = t + 5 sin t + 3e^t + 2te^t we would
guess something of the following form
y(t) = (A + Bt) + (C sin t + D cos t) + Ee^t + Fte^t
where the groupings show the term(s) in f (t) that are responsible for the
terms in y(t). However we have some exceptions to make here: some of the
terms in our guess are solutions to the homogeneous problem. The functions sin t, cos t, e^t and te^t all solve the homogeneous equation. We should
multiply these terms by the smallest power of t so that no terms in the
guess solve the homogeneous equation. The function A + Bt doesn't
solve the homogeneous problem, so we don't need to change these terms. The
functions sin t, cos t do, but t sin t and t cos t do not, so we multiply these
terms by t. The functions e^t and te^t both solve the homogeneous equation.
If we multiply by t we get Ete^t + Ft^2 e^t. One of these terms still solves the
homogeneous problem. If we multiply by t^2 we get Et^2 e^t + Ft^3 e^t, neither of
which solves the homogeneous problem. Thus we should guess
y(t) = A + Bt + Ct sin t + Dt cos t + Et^2 e^t + Ft^3 e^t.
Substituting this guess into the equation and solving for the coefficients gives the particular solution
y(t) = 2 + t + (5/4) t sin t + (1/4) t^2 e^t + (1/6) t^3 e^t.
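The bookkeeping above is error prone, so it is worth verifying the result directly; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')

# Claimed particular solution from the worked example
y = 2 + t + sp.Rational(5, 4)*t*sp.sin(t) \
    + sp.Rational(1, 4)*t**2*sp.exp(t) + sp.Rational(1, 6)*t**3*sp.exp(t)

lhs = sp.diff(y, t, 4) - 2*sp.diff(y, t, 3) + 2*sp.diff(y, t, 2) \
    - 2*sp.diff(y, t) + y
rhs = t + 5*sp.sin(t) + 3*sp.exp(t) + 2*t*sp.exp(t)

check = sp.simplify(lhs - rhs)  # 0 if the particular solution is correct
```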
Finding the correct form of the solution in the method of undeter-
mined coefficients becomes a bit cumbersome when the characteristic
equation has roots of high multiplicity. There is a variation of this
method, usually called the annihilator method. This method is a little
more work but it is always clear what the correct form of the solution
should be. This is the subject of the next section.
The idea is the following: to solve
Ly = f (t)
find a second operator L̃ that annihilates the right-hand side, L̃ f = 0. Then every solution of the original equation also satisfies the higher order homogeneous equation
L̃Ly = L̃ f = 0.
As a first example, solve
d^2y/dt^2 + y = e^t
using annihilators.
To do this we first find the solution to the homogeneous problem. The
characteristic equation for d^2y/dt^2 + y = 0 is r^2 + 1 = 0. Solving for r gives
r = ±i, or y_1(t) = sin t, y_2(t) = cos t.
We next need to find an annihilator, something that "kills" the right-hand
side. The operator d/dt − 1 does the trick: if we act on the function e^t with this
operator we get zero. Acting on the above equation with d/dt − 1 gives
y′′′ − y′′ + y′ − y = 0.
The characteristic equation is r^3 − r^2 + r − 1 = (r − 1)(r^2 + 1) = 0, so the solutions are sin t, cos t and e^t. The first two solve the original homogeneous equation but the
third does not. Thus we should try a particular solution of the form y = Ae^t.
Substituting into the original equation gives
y′′ + y = 2Ae^t = e^t.
Thus 2A = 1 and A = 1/2. Note that if we mistakenly included the other
terms B sin t + C cos t we would find that B and C are arbitrary.
Example 2.6.2. Find the correct form of the solution for the method of
undetermined coefficients for the equation
d^6y/dt^6 + 3 d^4y/dt^4 + 3 d^2y/dt^2 + y = 2 sin t + 5 cos t
The homogeneous equation is
d^6y/dt^6 + 3 d^4y/dt^4 + 3 d^2y/dt^2 + y = 0
so that the characteristic equation is
r^6 + 3r^4 + 3r^2 + 1 = 0
which can be written as (r^2 + 1)^3 = 0, so r = ±i are roots of multiplicity 3.
This means that the six linearly independent solutions are
y_1 = cos t
y_2 = t cos t
y_3 = t^2 cos t
y_4 = sin t
y_5 = t sin t
y_6 = t^2 sin t
The annihilator for A sin t + B cos t is L̃ = d^2/dt^2 + 1. Acting on both sides
of the equation with the annihilator gives
(d^2/dt^2 + 1)(d^6y/dt^6 + 3 d^4y/dt^4 + 3 d^2y/dt^2 + y) = 0.
The characteristic equation is
(r^2 + 1)^4 = 0.
This has eight linearly independent solutions:
y_1 = cos t
y_2 = t cos t
y_3 = t^2 cos t
y_4 = t^3 cos t
y_5 = sin t
y_6 = t sin t
y_7 = t^2 sin t
y_8 = t^3 sin t
There are two solutions of (d^2/dt^2 + 1)(d^6y/dt^6 + 3 d^4y/dt^4 + 3 d^2y/dt^2 + y) = 0 that do not
solve d^6y/dt^6 + 3 d^4y/dt^4 + 3 d^2y/dt^2 + y = 0. These are t^3 sin t and t^3 cos t. Therefore
the particular solution can be assumed to take the form At^3 sin t + Bt^3 cos t.
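The multiplicity bookkeeping can be checked symbolically: (d^2/dt^2 + 1)^3 does not annihilate t^3 sin t, but (d^2/dt^2 + 1)^4 does. A sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')

def op(expr):
    """Apply the operator d^2/dt^2 + 1 once."""
    return sp.diff(expr, t, 2) + expr

def op_power(expr, k):
    """Apply (d^2/dt^2 + 1)^k by repeated application."""
    for _ in range(k):
        expr = op(expr)
    return sp.expand(expr)

f = t**3*sp.sin(t)
three = op_power(f, 3)  # nonzero (works out to -48 cos t): not yet annihilated
four = op_power(f, 4)   # zero: (d^2/dt^2 + 1)^4 annihilates t^3 sin t
```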
f (t)                                        Annihilator
1                                            d/dt
P_k(t)                                       d^{k+1}/dt^{k+1}
e^{at}                                       d/dt − a
A sin ωt + B cos ωt                          d^2/dt^2 + ω^2
Ae^{at} sin ωt + Be^{at} cos ωt              (d/dt − a)^2 + ω^2
P_k(t) sin ωt + Q_k(t) cos ωt                (d^2/dt^2 + ω^2)^{k+1}
P_k(t)e^{at} sin ωt + Q_k(t)e^{at} cos ωt    ((d/dt − a)^2 + ω^2)^{k+1}
Exercise 2.6.1
Find particular solutions to the following differential equations
a) dy/dt + y = e^t
b) dy/dt + 3y = sin(t) + e^{2t}
c) dy/dt + y = t sin(t) + e^{−t}
d) d^2y/dt^2 + y = te^{−t}
e) d^2y/dt^2 + 2 dy/dt + 5y = e^{−t} + cos(t)
Exercise 2.6.2
Find annihilators for the following forcing functions
a) f (t) = t2 + 5t
b) f (t) = sin(3t)
c) f (t) = 2 cos(2t) + 3t sin(2t)
d) f (t) = t2 et + sin(t)
Exercise 2.6.3
Find the correct form of the guess for the method of undetermined
coefficients. You do not need to solve for the coefficients (although
you may; answers will be given in the answer section).
a) y′′ + y = e^t + 1
b) y′ + y = sin(t)
c) y′′ + y = sin(t)
d) d^7y/dt^7 + 32 d^5y/dt^5 − 18 dy/dt + 11y = 11t^3 + t^2 + 7t + 2
e) d^7y/dt^7 + 32 d^5y/dt^5 − 18 d^2y/dt^2 = 180t^3
f) d^5y/dt^5 + 3 d^4y/dt^4 + 3 d^3y/dt^3 + d^2y/dt^2 = 12t^2 + 6t + 8 − 3e^{−t}
Exercise 2.6.4
Find a particular solution to the following equations using the
method of undetermined coefficients.
Consider the second order linear equation
d^2y/dt^2 + a_1(t) dy/dt + a_0(t) y = f (t).
Suppose that y_1(t) and y_2(t) are two linearly independent solutions to the
homogeneous problem. Then a particular solution to the non-homogeneous
problem is given by
y_p(t) = y_2(t) ∫_a^t y_1(s) f (s)/W(s) ds − y_1(t) ∫_a^t y_2(s) f (s)/W(s) ds
To see where this formula comes from, look for a solution of the form
y = A(t) y_1(t) + B(t) y_2(t).
Since we now have two unknowns, A(t) and B(t), but only one equation, we are free to impose a second condition. We impose
A′(t)y_1(t) + B′(t)y_2(t) = 0;
substituting y into the differential equation then gives
A′(t)y_1′(t) + B′(t)y_2′(t) = f (t).
Solving this linear system for A′ and B′ gives
A′(t) = − f (t) y_2(t) / W(y_1, y_2)      (2.1)
B′(t) = f (t) y_1(t) / W(y_1, y_2).       (2.2)
The same idea works for higher order equations. Consider
d^n y/dt^n + a_{n−1}(t) d^{n−1}y/dt^{n−1} + . . . + a_0(t) y = f (t).
Suppose that y_1(t), y_2(t), . . . y_n(t) are n linearly independent solutions to
the homogeneous problem. Then a particular solution to the non-homogeneous
problem is given by
y_p(t) = ∑ A_i(t) y_i(t)
Consider the equation
d^2y/dt^2 + (1/t) dy/dt − (4/t^2) y = t.
The homogeneous equation is equidimensional and can be solved by looking
for a solution of the form y(t) = t^α. Substituting into the homogeneous
equation we get α(α − 1) + α − 4 = α^2 − 4 = 0, so α = ±2 and we can take
y_1 = t^2, y_2 = t^{−2}. The Wronskian is W = y_1 y_2′ − y_1′ y_2 = −4/t, and the
variation of parameters formula gives
y_p = t^3/5.
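The variation of parameters formula can be implemented directly; a sympy sketch for this example, using y_1 = t^2 and y_2 = t^{−2} (the lower limit 1 is an arbitrary choice, so the result may differ from t^3/5 by homogeneous terms):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Homogeneous solutions of y'' + y'/t - 4y/t^2 = 0, and the forcing f(t) = t
y1, y2, f = t**2, t**-2, t
W = y1*sp.diff(y2, t) - sp.diff(y1, t)*y2   # Wronskian, here -4/t

# y_p = y2 * int(y1 f / W) - y1 * int(y2 f / W)
yp = y2*sp.integrate((y1*f/W).subs(t, s), (s, 1, t)) \
   - y1*sp.integrate((y2*f/W).subs(t, s), (s, 1, t))

# Check that yp solves the full non-homogeneous equation
residual = sp.simplify(sp.diff(yp, t, 2) + sp.diff(yp, t)/t - 4*yp/t**2 - t)
```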
Exercise 2.7.1
Use variation of parameters to find a particular solution for each
equation. If the particular solution can also be found using undeter-
mined coefficients, use both methods to check your answer.
a) y′′ + y′ − 6y = e^t
b) y′′ + (1/t)y′ − (9/t^2)y = 7t^2 (see example above)
F(s) = L(e^{at}) = ∫_0^∞ e^{−st} e^{at} dt = ∫_0^∞ e^{−(s−a)t} dt = e^{−(s−a)t}/(−(s − a)) |_0^∞ = 1/(s − a)   (for s > a).
L( f ′ ) = sL( f ) − f (0)
Taking the Laplace transform of a constant coefficient equation
My = f (t)
together with its initial conditions always produces an algebraic equation of the form
P(s)Y(s) + Q(s) = F(s)
where P(s) is the characteristic polynomial of the operator M, Q(s) is a polynomial built from the initial conditions, and F(s) = L( f ). Solving for Y(s) and inverting gives
y(t) = L^{−1}(F(s)/P(s)) − L^{−1}(Q(s)/P(s)).
These two terms are precisely the “zero input” and “zero state” solu-
tions. The quantity
y_h(t) = −L^{−1}(Q(s)/P(s))
satisfies the homogeneous equation My = 0 along with the given
initial conditions. The term
y_p(t) = L^{−1}(F(s)/P(s))
solves My = f along with zero initial conditions. As a first example
we will solve a constant coefficient linear homogeneous equation.
Consider y′′ + y = 0 with y(0) = 1, y′(0) = 0. Transforming term by term,
L(y′′) + L(y) = 0
s^2 L(y) − sy(0) − y′(0) + L(y) = 0
s^2 L(y) − s + L(y) = 0
so that
Y(s) = L(y) = s/(s^2 + 1).
From our very small table of Laplace transforms we know that L(cos(bt)) =
s/(s^2 + b^2). Taking b = 1 we can conclude that y(t) = cos(t).
Next we solve y′′ + 2y′ + y = te^{−t} with y(0) = 0, y′(0) = 0
using the Laplace transform. Taking the Laplace transform of both sides
gives
(s^2 + 2s + 1)Y(s) = L(te^{−t}) = 1/(s + 1)^2
so that
Y(s) = 1/(s + 1)^4.
Since
L(t^3 e^{−t}) = 3!/(s + 1)^4 = 6/(s + 1)^4
we conclude that y(t) = t^3 e^{−t}/6.
As a final example, solve y′′ + 2y′ + 2y = e^t with y(0) = 0, y′(0) = 1. Taking the Laplace transform gives
s^2 Y(s) − sy(0) − y′(0) + 2(sY(s) − y(0)) + 2Y(s) = 1/(s − 1)
(s^2 + 2s + 2)Y(s) − 1 = 1/(s − 1)
Y(s) = 1/(s^2 + 2s + 2) + 1/((s − 1)(s^2 + 2s + 2)).
At this point we must re-express 1/((s − 1)(s^2 + 2s + 2)) using partial fractions:
1/((s − 1)(s^2 + 2s + 2)) = A/(s − 1) + (Bs + C)/(s^2 + 2s + 2).
One can write down equations corresponding to the s^2, s and 1 terms but
it is easier to plug in a couple of s values. If we substitute s = 1 in the above we
find that
5A = 1, so A = 1/5.
Similarly if we substitute s = 0 we find that
1 = 2A − C, so C = 2A − 1 = 2/5 − 1 = −3/5.
Finally if we take the s^2 terms we find that
0 = A + B, so B = −1/5.
This gives
Y(s) = L(y) = (1/5)·1/(s − 1) − (1/5)·s/(s^2 + 2s + 2) − (3/5)·1/(s^2 + 2s + 2) + 1/(s^2 + 2s + 2)
            = (1/5)·1/(s − 1) − (1/5)·s/(s^2 + 2s + 2) + (2/5)·1/(s^2 + 2s + 2).
Note that s^2 + 2s + 2 = (s + 1)^2 + 1 by completing the square. From the
table we can see that
L(e^{−at} cos(bt)) = (s + a)/((s + a)^2 + b^2)        L(e^{−at} sin(bt)) = b/((s + a)^2 + b^2).
We will need to slightly rewrite one of the terms:
−(1/5)·s/(s^2 + 2s + 2) = −(1/5)·(s + 1 − 1)/(s^2 + 2s + 2) = −(1/5)·(s + 1)/((s + 1)^2 + 1) + (1/5)·1/((s + 1)^2 + 1).
This gives
Y(s) = L(y) = (1/5)·1/(s − 1) − (1/5)·(s + 1)/((s + 1)^2 + 1) + (3/5)·1/((s + 1)^2 + 1)
            = (1/5)L(e^t) − (1/5)L(e^{−t} cos(t)) + (3/5)L(e^{−t} sin(t)).
From this we can see that
y(t) = (1/5)e^t − (1/5)e^{−t} cos(t) + (3/5)e^{−t} sin(t).
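After this much algebra a direct check is reassuring: the answer should satisfy the original initial value problem y′′ + 2y′ + 2y = e^t, y(0) = 0, y′(0) = 1. A sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Rational(1, 5)*sp.exp(t) - sp.Rational(1, 5)*sp.exp(-t)*sp.cos(t) \
    + sp.Rational(3, 5)*sp.exp(-t)*sp.sin(t)

# Check the differential equation and both initial conditions
residual = sp.simplify(sp.diff(y, t, 2) + 2*sp.diff(y, t) + 2*y - sp.exp(t))
ic0 = y.subs(t, 0)                # should be 0
ic1 = sp.diff(y, t).subs(t, 0)    # should be 1
```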
Exercise 2.8.1
Given y(t) solving the specified differential equation find the Laplace
transform Y (s). You do not need to compute the inverse transform.
a) y′ + y = sin(t) + 6,  y(0) = 2
b) y′′ + y = e^{2t} + t,  y(0) = 3, y′(0) = 2
c) y′′′′ + y = te^{3t},  y(0) = 0, y′(0) = 0, y′′(0) = 0, y′′′(0) = 0
Exercise 2.8.2
Find the inverse Laplace transform of the following functions:
a) Y(s) = 1/(s − 1)
b) Y(s) = s/(s^2 + 1)
c) Y(s) = 1/(s + 3)^5
d) Y(s) = (5s + 7)/(s^2 + 4)
e) Y(s) = (4s + 3)/((s − 2)^2 + 1)
f) Y(s) = (2s + 5)/(s^2 − 1)
g) Y(s) = (4s^2 − 9s − 4)/(s(s − 1)(s + 2))
Exercise 2.8.3
Solve each initial value problem using the Laplace Transform
mx ′′ = −kx + sin(ωt)
m x_part′′ + k x_part = (k − mω^2)(A_1 cos(ωt) + A_2 sin(ωt)) = sin(ωt).
Thus we need to choose A_1 = 0 and A_2 = 1/(k − mω^2). The general
solution is the sum of the particular solution and the solution to the
homogeneous problem, so
x(t) = A cos(√(k/m) t) + B sin(√(k/m) t) + sin(ωt)/(k − mω^2).
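The particular solution is quick to verify symbolically; a sympy sketch (m, k, ω are treated as positive symbols, and the formula assumes k ≠ mω^2):

```python
import sympy as sp

t, m, k, w = sp.symbols('t m k omega', positive=True)

# Particular solution of m x'' = -k x + sin(w t), valid when k != m w^2
xp = sp.sin(w*t)/(k - m*w**2)

# Substitute back into the equation of motion and simplify the residual
residual = sp.simplify(m*sp.diff(xp, t, 2) + k*xp - sp.sin(w*t))
```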
Exercise 3.1.1
Verify that a particular solution to
m x′′ = −kx + F_0 sin(√(k/m) t)
is given by
x_part(t) = −F_0 t cos(√(k/m) t) / (2√(km)).
Exercise 3.1.2
Suppose that the suspension of a car with bad shock absorbers can
be modelled as a mass-spring system with no damping. Assume
that putting a 100kg mass into the trunk of the car causes the car to
sink by 2cm, and that the total mass of the car is 1000kg. Find the
resonant frequency of the suspension, in s−1 .
You will first need to find the spring constant of the suspension.
For simplicity take the gravitational acceleration to be g = 10ms−2 .
m d^2x/dt^2 = −kx − γ dx/dt + f (t)
where, as always, f (t) represents some external forcing term.
First we look at the homogeneous equation
m d^2x/dt^2 + γ dx/dt + kx = 0.
Again it is worthwhile doing some dimensional analysis. The coefficient γ, when multiplied by a velocity, should give a force, so γ should have units kg s^{−1}. Notice that from the three quantities m, γ, k we can form the quantity γ/√(mk), which is dimensionless. This is a measure of how important damping is in the system. If γ/√(mk) is small then damping is less important compared with inertial (mass-spring) effects. If γ/√(mk) is large it means that inertial effects are small compared with damping. This will become important in our later analysis.
Substituting x = e^{rt} gives the characteristic equation
mr^2 + γr + k = 0
with roots
r = (−γ ± √(γ^2 − 4km)) / (2m).
There are two cases here, with very different qualitative behaviors.
In the case γ^2 < 4km, or equivalently γ/√(km) < 2, the characteristic
polynomial has a complex conjugate pair of roots. In this situation
the two linearly independent solutions are given by
x_1(t) = e^{−γt/(2m)} cos(√(k/m − γ^2/(4m^2)) t)     (3.1)
x_2(t) = e^{−γt/(2m)} sin(√(k/m − γ^2/(4m^2)) t)     (3.2)
Figure 3.1: The solutions to the equation d^2x/dt^2 + γ dx/dt + 4x = 0 with x(0) = 1, x′(0) = 0 for γ = 1, 4, 5 (shown in red, blue, and black respectively) representing the underdamped, critically damped, and overdamped cases. Note that the solution decays fastest in the critically damped case.
car is 1000 kg. Assume that for best performance one would like the
suspension of the car to be critically damped. What should the value
of the damping coefficient γ, in kg s^{−1}, be?
mechanical and electrical oscillations 95
Exercise 3.1.4
Verify that, in the critically damped case, the two solutions to the
homogeneous problem are given by
x_1(t) = e^{−γt/(2m)}
x_2(t) = te^{−γt/(2m)}
m d^2x/dt^2 + γ dx/dt + kx = F_0 sin(ωt).
If we look for a solution of the form
x = A1 cos(ωt) + A2 sin(ωt)
(k − mω^2) A_1 + γω A_2 = 0        (3.3)
(k − mω^2) A_2 − γω A_1 = F_0.     (3.4)
The first equation comes from requiring that the coefficients of the
cos(ωt) terms sum to zero, the second from demanding that the
coefficients of the sin(ωt) terms sum to F0 . The solution to these two
linear equations is given by
((k − mω^2)^2 + γ^2ω^2) A_1 = −F_0 γω          (3.5)
((k − mω^2)^2 + γ^2ω^2) A_2 = F_0 (k − mω^2)   (3.6)
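The 2×2 linear solve is easy to reproduce symbolically; a sympy sketch (note the sign of A_1 that falls out of the algebra):

```python
import sympy as sp

m, k, gamma, w, F0, A1, A2 = sp.symbols('m k gamma omega F_0 A_1 A_2')

# Coefficient equations from matching cos(wt) and sin(wt) terms
eq1 = sp.Eq((k - m*w**2)*A1 + gamma*w*A2, 0)
eq2 = sp.Eq((k - m*w**2)*A2 - gamma*w*A1, F0)
sol = sp.solve([eq1, eq2], [A1, A2])

D = (k - m*w**2)**2 + gamma**2*w**2
# sol[A1] = -F0*gamma*w / D,  sol[A2] = F0*(k - m*w**2) / D
```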
Exercise 3.1.5
Suppose that a car with bad shocks can be modeled as a mass-spring
system with a mass of m = 750kg, a spring constant of k = 3.75 × 105
N/m, and a damping coefficient of γ = 2 × 104 kg s−1 , and that the
car is subject to a periodic forcing of f (t) = 1000N sin(150t) due to an
unbalanced tire. Find the particular solution. What is the amplitude
of the resulting oscillations?
Exercise 3.1.6
Find the homogeneous solutions
a) y′′ + 4y = 0
b) y′′ + y = 0
c) y′′ + 6y′ + 10y = 0
d) y′′ + 5y′ + 6y = 0
Exercise 3.1.7
Solve the following initial value problems
Exercise 3.1.8
Solve the initial value problems
Exercise 3.1.9
Solve the following initial value problems
Exercise 3.1.10
Find the general solution to the homogeneous damped harmonic
oscillator equation
d2 y dy
m 2 +γ + ky = 0
dt dt
for the following parameter values. In each case classify the equation
as overdamped, underdamped or critically damped.
Exercise 3.1.11
Solve the following initial value problem
d2 y dy
m + γ + ky = 0
dt2 dt
for the following sets of initial values and parameters.
Exercise 3.1.12
Suppose that a car suspension can be modeled as a damped mass-
spring system with m = 1500 kg and spring constant k = 40 000 N/m.
Exercise 3.1.13
Suppose that a car suspension can be modeled as a damped mass-
spring system with m = 2000 kg. Also suppose that if you load 600 kg
in the car the height of the suspension sinks by 1 cm. How large
should the damping coefficient be so that the suspension is critically
damped for the unloaded car? Recall Hooke’s law, F = −k∆x, and be
careful to keep consistent units.
Exercise 3.1.14
Consider the damped, driven harmonic oscillator
4 d^2y/dt^2 + dy/dt + 4y = cos(ωt)
Exercise 3.1.15
d^2y/dt^2 + y = 0,   y(0) = cos(s),   y′(0) = −sin(s)
Exercise 3.1.16
Exercise 3.1.17
[A]
[B]
[C]
The figures above depict the amplitude, as a function of frequency ω,
for the particular solution to
a) m = 1, γ = 1, k = 4
b) m = 1, γ = 2, k = 4
c) m = 1, γ = 4, k = 4
3.2.1 RC Circuits
Summing the voltage drops around the loop and differentiating gives
dV_R/dt + dV_C/dt = R dI/dt + (1/C) I = dV_0/dt = 0,
since the battery supplies a constant voltage. Thus the current satisfies
R dI/dt + (1/C) I = 0. To find the initial condition we note that at time
t = 0 the voltage drop across the capacitor is V_C = (1/C)∫_0^0 I(t) dt = 0,
and thus V_R(0) = RI(0) = V_0, and the differential equation becomes
R dI/dt + (1/C) I = 0,   I(0) = V_0/R.
Figure 3.5: A simple circuit consisting of a battery, switch, resistor and capacitor.
This can be solved to give
I(t) = (V_0/R) e^{−t/RC},
from which the other quantities can be derived. For instance V_R =
IR = V_0 e^{−t/RC}. The voltage drop across the capacitor, V_C, can be found
either by using V_C + V_R = V_0 or by taking V_C = (1/C)∫_0^t I(s) ds and
doing the integral. Both methods give V_C = V_0 (1 − e^{−t/RC}). Note that
a resistance R ohms times a capacitance of C Farads gives time in
seconds. This is usually called the RC time constant – one common
application of RC circuits is to timing.
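A quick numeric illustration of the time constant, a sketch with hypothetical component values (a 1 kΩ resistor and a 1 µF capacitor, giving RC = 1 ms):

```python
import math

# Hypothetical component values
R = 1.0e3      # resistance, ohms
C = 1.0e-6     # capacitance, farads
V0 = 5.0       # battery voltage, volts
tau = R*C      # RC time constant, seconds (here 1 ms)

def V_C(t):
    """Voltage across the charging capacitor: V0 * (1 - exp(-t/RC))."""
    return V0*(1.0 - math.exp(-t/tau))

# After one time constant the capacitor has reached 1 - 1/e (about 63%) of V0
ratio = V_C(tau)/V0
```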
Exercise 3.2.1
A circuit consists of a 5V battery, a 5 kΩ resistor, a 1000µF capacitor
and a switch in series. At time t = 0 the switch is closed. At what
time does VC , the voltage drop across the capacitor, equal 4V?
A somewhat more interesting problem is when the voltage is not
constant (DC) but instead varies in time; for instance, the voltage
varies sinusoidally as V(t) = V_0 sin(ωt), as in Figure 3.6. Essentially
the same derivation as above gives the equation
R dI/dt + (1/C) I = dV(t)/dt = V_0 ω cos(ωt).
Figure 3.6: A simple circuit consisting of a resistor, a capacitor, and a sinusoidally varying voltage.
This can be solved by the method of undetermined coefficients, looking for a particular solution of the form A_1 cos(ωt) + A_2 sin(ωt).
Alternatively one can use the formula derived in the section on mass
spring systems with the replacements m → 0, γ → R, k → 1/C and
F_0 → V_0 ω. Either way we find the solution
I_part(t) = (V_0 R C^2 ω^2 / ((ωRC)^2 + 1)) sin(ωt) + (V_0 ωC / ((ωRC)^2 + 1)) cos(ωt).
The homogeneous solution I_homog(t) = Ae^{−t/RC} is exponentially decaying, so we will assume that enough time has passed that this term is negligibly small. In many problems with damping such as this the homogeneous solution is exponentially decaying. This is often called the "transient response" and can, in many situations, be neglected.
From here any other quantity of interest can be found. For instance V_C, the voltage drop across the capacitor, is given by
V_C = V(t) − V_R = V_0 sin(ωt) − RI(t)
    = V_0 sin(ωt)/(1 + (ωRC)^2) − V_0 RωC cos(ωt)/(1 + (ωRC)^2)
    = (V_0/√(1 + (ωRC)^2)) sin(ωt + ϕ)
where ϕ = arctan(−ωRC).
Again note that ω has units s^{−1} and RC has units s, so ωRC is dimensionless. Dimensionless quantities are an important way to think about a system, since they do not depend on the system of units used. If ωRC is small it means that the period of the sinusoid is much longer than the time-constant of the RC circuit, and it should behave like a constant voltage. In this case, where ωRC ≈ 0, it is not hard to see that V_C ≈ V_0 sin(ωt). If ωRC is large, on the other hand, the voltage drop across the capacitor will be small, V_C ≈ −V_0 cos(ωt)/(ωRC). This is the simplest example of a low-pass filter. Low frequencies are basically unchanged, while high frequencies are damped out.

3.2.2 RLC Circuits, Complex numbers and Phasors
There is a third type of passive component that is interesting
from a mathematical point of view, the inductor. The voltage drop across
an inductor is proportional to the rate of change of the current, V_L = L dI/dt,
where the constant L is called the inductance. Summing the voltage drops around a series loop gives
V_L + V_R + V_C = L dI(t)/dt + RI(t) + (1/C) ∫ I dt = V(t).
dt C
Taking the derivative gives a second order differential equation for
the current I(t):
L d^2I(t)/dt^2 + R dI/dt + (1/C) I = dV/dt.     (3.7)
It is again worth thinking a bit about units. The quantity RC has units of time: seconds if R is measured in Ohms and C in Farads. The quantity L/R is also a unit of time, seconds if L is measured in Henrys and R in Ohms. The quantity R^2C/L is dimensionless. The equation is overdamped, with exponentially decaying solutions, if R^2C/L > 4 and is underdamped, with solutions in the form of an exponentially damped sine or cosine, if R^2C/L < 4.
Note that all of the results of the section on mechanical oscillations
translate directly to this equation, with inductance being analogous to mass, resistance to damping coefficient, the reciprocal of
capacitance to the spring constant, current to displacement and the
derivative of voltage to force. Therefore we will not repeat those calculations here. Rather we will take the opportunity to introduce a
new, and much easier, way to understand these equations through
the use of complex numbers and what are called "phasors". While
it requires a little bit more sophistication, the complex point of view
makes the tedious algebraic calculations that we did in the previous
section unnecessary: all that one has to be able to do is elementary
operations on complex numbers.
In many situations one is interested in the response of a circuit to
a sinusoidal voltage. The voltage from a wall outlet, for instance, is
sinusoidal. If the voltage is sinusoidal, V (t) = V0 cos(ωt) then from
(3.7) the basic differential equation governing the current I (t) is
L d^2I(t)/dt^2 + R dI/dt + (1/C) I = dV/dt = −ωV_0 sin(ωt).
We could use the method of undetermined coefficients here, as we
did in the previous section, and look for a particular solution in the
form I (t) = A cos(ωt) + B sin(ωt) but it is a lot easier to use the
Euler formula and complex arithmetic. Recall the Euler formula says
e^{iθ} = cos(θ) + i sin(θ).
This suggests the following mathematical trick: since the equations are linear one can solve for
a complex current, Ĩ, and then take the real part, and that gives the
particular solution. In other words the equation
L d^2I(t)/dt^2 + R dI/dt + (1/C) I = dV/dt = V_0 Re(iωe^{iωt})
is the same as
Re(L d^2Ĩ(t)/dt^2 + R dĨ(t)/dt + (1/C) Ĩ(t)) = Re(iωV_0 e^{iωt})
and because it is linear we can instead solve
L d^2Ĩ(t)/dt^2 + R dĨ(t)/dt + (1/C) Ĩ(t) = dṼ/dt = iV_0 ω e^{iωt}
and then take the real part of the complex current Ĩ (t). This will be a
lot simpler for the following reason: instead of looking for a solution
as a linear combination of cos(ωt) and sin(ωt) we can just look for a
solution in the form Ĩ (t) = Aeiωt . The constant A will be complex but
we won’t have to solve any systems of equations, etc – we will just
have to do complex arithmetic. When we are done we just have to
take the real part of the complex solution Ĩ (t) and we get the solution
to the original problem.
To begin we consider the case of an RC circuit with imposed voltage V(t) = V_0 cos(ωt):
R dI/dt + (1/C) I = dV/dt = −ωV_0 sin(ωt).
Complexifying the voltage, Ṽ(t) = V_0 e^{iωt}, we get dṼ/dt = iωV_0 e^{iωt} and
thus
R dĨ/dt + (1/C) Ĩ = dṼ/dt = iωV_0 e^{iωt}.
Using undetermined coefficients we look for a solution in the form
Ĩ(t) = I_0 e^{iωt}. Since this guess already contains both the sin(ωt) and
cos(ωt) terms we do not need any additional terms. This is what
makes the method easier. Substituting this into the equation gives
(iωR + 1/C) I_0 e^{iωt} = iωV_0 e^{iωt}
which is the same as
I_0 = iωV_0 / (iωR + 1/C)
    = V_0 / (R − i/(ωC))
    = V_0 (R + i/(ωC)) / (R^2 + 1/(ω^2C^2)).
It is worth noting here that RCω is dimensionless, so 1/(ωC) has units of Ohms.
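The same computation can be done in ordinary Python complex arithmetic; a sketch with hypothetical values (1 kΩ, 1 µF, 5 V amplitude at 60 Hz):

```python
import cmath
import math

# Hypothetical circuit values
R, C, V0 = 1.0e3, 1.0e-6, 5.0
w = 2*math.pi*60.0

# Complex impedance of the series RC combination: Z = R - i/(wC)
Z = R - 1j/(w*C)
I0 = V0/Z                   # complex current amplitude

amplitude = abs(I0)         # |I0| = V0/|Z|
phase = cmath.phase(I0)     # positive: the current leads the voltage
```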
For a circuit consisting of a resistor and an inductor in series a similar calculation gives
V_0 = (R + iωL) I_0,   or   I_0 = V_0/(R + iωL) = (R − iωL) V_0 / (R^2 + ω^2L^2).
Since the current phasor I_0 is proportional to the complex number (R − iωL),
which lies in the fourth quadrant, the voltage leads the current, or equivalently the current lags the voltage. This is illustrated in the marginal
figure.
Of course any real circuit will have inductance, resistance and
capacitance. In this case we have
L d^2Ĩ/dt^2 + R dĨ/dt + (1/C) Ĩ = iV_0 ω e^{iωt}.
Looking for a solution of the form Ĩ(t) = I_0 e^{iωt} we find that
(−ω^2 L + iωR + 1/C) I_0 = iωV_0   or   V_0 = I_0 (R + i(ωL − 1/(ωC))).
Figure 3.8: The particular solution for an RL-circuit with a sinusoidal voltage. The current Ĩ(t) will lag behind the voltage by an angle between 0 and π/2. As time increases the picture rotates counterclockwise but the angle between the voltage and the current does not change.
The impedance (R + i(ωL − 1/(ωC))) can either be in the fourth or the
first quadrant, depending on the frequency ω and the sizes of the
inductance L and the capacitance C. If ωL > 1/(ωC), or ω^2 LC > 1, then
(R + i(ωL − 1/(ωC))) lies in the first quadrant and the voltage leads the
current. If ω^2 LC < 1 then the voltage lags the current. If ω^2 LC = 1
then the impedance is real and the voltage and the current are in
phase. We have encountered this condition before: this is (one)
definition of effective resonance.
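At resonance, ω = 1/√(LC), the reactive part of the impedance cancels; a numeric sketch with hypothetical values:

```python
import math

# Hypothetical values: 10 mH inductor, 100 nF capacitor, 50 ohm resistor
L, C, R = 10e-3, 100e-9, 50.0

w_res = 1.0/math.sqrt(L*C)     # resonant frequency, where w^2 L C = 1

def impedance(w):
    """Series RLC impedance Z = R + i(wL - 1/(wC))."""
    return R + 1j*(w*L - 1.0/(w*C))

Z = impedance(w_res)
# The reactive (imaginary) part vanishes at resonance, leaving just R
```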
The relative phase between the voltage and the current in an AC
system is important, and is related to the "power factor". For reasons
of efficiency it is undesirable to have too large of an angle between
the voltage and the current in a system: the amount of power that
can be delivered to a device is Vrms Irms cos(θ ), where θ is the angle
between the current and the voltage. If this angle is close to π2 then
very little power can be delivered to the device since cos(θ ) is small.
Many industrial loads, such as electric motors, have a very high
inductance due to the windings. This high inductance will usually be
offset by a bank of capacitors to keep the angle between the voltage
and the current small.
There is a mnemonic, "ELI the ICE man", to remember the effects of inductance and capacitance. In an inductive circuit (L is inductance) the voltage E leads the current I. In a capacitive circuit (C is capacitance) the current I leads the voltage E.
Example 3.2.1. Solve the linear constant coefficient differential equation
d^3y/dt^3 + 4 dy/dt + 5y = cos(2t)
using complex exponentials.
We can replace this with the complex differential equation
d^3z/dt^3 + 4 dz/dt + 5z = e^{2it}
and then take the real part. Looking for a solution in the form z(t) = Ae^{2it}
we find that
((2i)^3 + 4(2i) + 5) A = 5A = 1,
so A = 1/5 and the particular solution is y_p = Re(e^{2it}/5) = cos(2t)/5. For the homogeneous problem
d^3z/dt^3 + 4 dz/dt + 5z = 0
the characteristic polynomial is
r^3 + 4r + 5 = 0
which has the three roots r = −1 and r = (1 ± i√19)/2. Thus the general solution to
d^3y/dt^3 + 4 dy/dt + 5y = cos(2t)
is
y = (1/5) cos(2t) + A_1 e^{−t} + A_2 e^{t/2} cos(√19 t/2) + A_3 e^{t/2} sin(√19 t/2)
or alternatively in terms of complex exponentials as
y = (1/5) cos(2t) + A_1 e^{−t} + A_2 e^{(1+i√19)t/2} + A_3 e^{(1−i√19)t/2}.
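The particular solution found by the complex exponential shortcut is easy to check; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')

# From z = A e^{2it}: ((2i)^3 + 4(2i) + 5) A = 5A = 1, so y_p = cos(2t)/5
yp = sp.cos(2*t)/5
residual = sp.simplify(sp.diff(yp, t, 3) + 4*sp.diff(yp, t) + 5*yp
                       - sp.cos(2*t))
```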
Example 3.2.2. Solve the differential equation
d^3y/dt^3 − 3 d^2y/dt^2 + 2 dy/dt = sin(t)
As before we can solve this by solving the complex equation
d^3z/dt^3 − 3 d^2z/dt^2 + 2 dz/dt = e^{it}
and taking the imaginary part. Looking for a solution in the form z(t) = Ae^{it} we find that
(i^3 − 3i^2 + 2i) A = (3 + i) A = 1,
so A = 1/(3 + i) = (3 − i)/10 and
y(t) = Im((3 − i)/10 · e^{it})
     = Im((3 − i)/10 · (cos(t) + i sin(t)))
     = (3/10) sin(t) − (1/10) cos(t).
The homogeneous equation has characteristic polynomial
r^3 − 3r^2 + 2r = r(r − 1)(r − 2) = 0
so the general solution is
y(t) = (3/10) sin(t) − (1/10) cos(t) + A_1 + A_2 e^t + A_3 e^{2t}.
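Again the particular part is quick to verify; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
yp = sp.Rational(3, 10)*sp.sin(t) - sp.Rational(1, 10)*sp.cos(t)
residual = sp.simplify(sp.diff(yp, t, 3) - 3*sp.diff(yp, t, 2)
                       + 2*sp.diff(yp, t) - sp.sin(t))
```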
4
Systems of Ordinary Differential Equations
4.1 Introduction
m_O d^2x_1/dt^2 = −k(x_1 − x_2)
m_C d^2x_2/dt^2 = −k(x_2 − x_1) − k(x_2 − x_3)
m_O d^2x_3/dt^2 = −k(x_3 − x_2).
In epidemiology a basic set of models for the spread of a conta-
gious disease are the SIR models. There are many different variations
of the SIR models, depending on various modeling assumptions, but
dS/dt = −βIS
dI/dt = βIS − kI
dR/dt = kI
In this model the quantities S(t), I(t), R(t) represent the populations
of susceptible, infected and recovered people. The
constants β and k represent the transmission rate
and recovery rate for the disease in question, and together the
solution to this system of equations provides a model for understanding how a disease propagates through a population.
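The SIR system is easy to integrate numerically; a sketch using scipy with hypothetical parameter values (β = 0.5, k = 0.1, populations normalized to 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters and initial populations (fractions of the total)
beta, k = 0.5, 0.1
S0, I0, R0 = 0.99, 0.01, 0.0

def sir(t, y):
    S, I, R = y
    return [-beta*I*S, beta*I*S - k*I, k*I]

sol = solve_ivp(sir, (0.0, 100.0), [S0, I0, R0], rtol=1e-8, atol=1e-10)
S, I, R = sol.y

# Adding the three equations gives d(S+I+R)/dt = 0, so the total is conserved
total = S + I + R
```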
In other situations it may be desirable to write a higher order
equation as a system of first order equations. Take, for instance, the
second order equation
d^2y/dt^2 + p(t) dy/dt + q(t) y = f (t).
If we introduce the new variable z(t) = dy/dt and use the fact that
dz/dt = d^2y/dt^2, then the second order equation above can be written as the
system of equations
dy/dt = z
dz/dt + p(t) z + q(t) y = f (t).
This system can in turn be written in matrix-vector form as
dv/dt = A(t) v + g(t).
Here the vector quantities v(t), g(t) and the matrix A(t) are given by
v(t) = [ y(t) ]      g(t) = [ 0    ]      A(t) = [ 0      1     ]
       [ z(t) ]             [ f(t) ]             [ −q(t)  −p(t) ]
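As a concrete sketch, y′′ + y = 0 with y(0) = 1, y′(0) = 0 (so p = 0, q = 1, f = 0) written as a first order system and integrated numerically reproduces the exact solution cos t:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' + y = 0 rewritten as the first order system y' = z, z' = -y
def rhs(t, v):
    y, z = v
    return [z, -y]

t_eval = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval,
                rtol=1e-9, atol=1e-12)

# Compare the numerical solution with the exact solution y(t) = cos t
err = np.max(np.abs(sol.y[0] - np.cos(t_eval)))
```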
More generally the nth order equation
d^n y/dt^n + p_{n−1}(t) d^{n−1}y/dt^{n−1} + . . . + p_1(t) dy/dt + p_0(t) y = f (t)
can be written as a system of first order equations by introducing the variables
z_1(t) = dy/dt
z_2(t) = dz_1/dt (= d^2y/dt^2)
. . .
z_{n−1}(t) = dz_{n−2}/dt (= d^{n−1}y/dt^{n−1})
together with the equation
dz_{n−1}/dt = −p_{n−1}(t) z_{n−1} − p_{n−2} z_{n−2} − . . . − p_1 z_1 − p_0 y + f (t).
This is a set of n first order equations in y, z1 , z2 , . . . zn−1 . There are,
of course, many ways to rewrite an nth order equation as n first order
equations but this is in some sense the most standard. In many situ-
ations, particularly if one must solve the equations numerically, it is
preferable to write a single higher order equation as a system of first
order equations.
dv/dt = A(t) v + f (t),   v(t_0) = v_0.     (4.2)
Suppose that A(t) and f (t) depend continuously on t in some interval I containing t_0. Then (4.2) has a unique solution in the interval I.
Definition 4.2.1. Suppose that v1 (t), v2 (t), v3 (t) . . . vn (t) are n solutions
to equation 4.3. The Wronskian W (t) of these solutions is defined to be the
determinant of the matrix with columns v1 (t), v2 (t), v3 (t) . . . vn (t). We say
that the vectors are linearly independent if W (t) ̸= 0.
Consider the second order equation
d^2y/dt^2 + p(t) dy/dt + q(t) y = 0
with y1 (t) and y2 (t) two linearly independent solutions. Previously we
defined the Wronskian to be W (t) = y1 (t)y2′ (t) − y1′ (t)y2 (t). We can write
this second order equation as a pair of first order equations by introducing a
new variable z(t) = dy/dt. As we showed earlier, y(t), z(t) satisfy
dy/dt = z
dz/dt = −p(t) z − q(t) y.
systems of ordinary differential equations 113
Since y1 (t), y2 (t) are two solutions to the original second order equation we
have that two solutions to the system of equations are given by
v_1(t) = [ y_1(t)  ]        v_2(t) = [ y_2(t)  ]
         [ y_1′(t) ]                 [ y_2′(t) ]
In the case where the system comes from a higher order linear
equation of the form
d^n y/dt^n + p_{n−1}(t) d^{n−1}y/dt^{n−1} + p_{n−2}(t) d^{n−2}y/dt^{n−2} + · · · + p_1(t) dy/dt + p_0(t) y = 0
the matrix A(t) takes the companion form
        [ 0        1        0        . . .  0           ]
        [ 0        0        1        . . .  0           ]
A(t) =  [ .        .        .               .           ]
        [ 0        0        . . .    0      1           ]
        [ −p_0(t)  −p_1(t)  −p_2(t)  . . .  −p_{n−1}(t) ]
so that Tr(A(t)) = −p_{n−1}(t) and Abel's theorem implies that dW/dt =
−p_{n−1}(t) W.
As in the case of Abel's theorem for an nth order equation, the main
utility of Abel's theorem is the following corollary, which tells us
that (assuming that the coefficients are continuous) the Wronskian is
either never zero or always zero.
Corollary 4.2.1. Suppose that v1 (t), v2 (t), . . . vn (t) are solutions to the
system
dv
= A(t)v.
dt
in some interval I in which A(t) is continuous. Then the Wronskian W (t)
is either never zero in the interval I or it is always zero in the interval I.
Proof. The proof follows simply from Abel’s theorem. Suppose that
there exists a point t0 ∈ I such that W (t0 ) = 0. From Abel’s theorem
the Wronskian satisfies the differential equation
dW/dt = Tr(A(t)) W,   W(t_0) = 0.
One solution is obviously W(t) ≡ 0. Since Tr(A(t)) is continuous, by our
existence and uniqueness theorem this is the only solution, and so the
Wronskian vanishes on all of I.
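For a constant coefficient system this can be spot-checked numerically: the columns of e^{tA} are solutions with W(0) = det I = 1, and Abel's theorem predicts W(t) = e^{t Tr A}, which never vanishes. A sketch with a hypothetical matrix:

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical constant coefficient matrix
A = np.array([[1.0, 2.0],
              [3.0, -4.0]])

t = 0.7
M = expm(t*A)              # fundamental matrix with M(0) = I
W = np.linalg.det(M)       # Wronskian of its columns

# Abel's theorem: W(t) = W(0) * exp(Tr(A) * t), with W(0) = 1
abel = np.exp(np.trace(A)*t)
```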
Suppose that v_1(t), v_2(t), . . . v_n(t) are n linearly independent solutions to the homogeneous equation
dv/dt = A(t) v.
Define the matrix M(t) to be the matrix with columns {v_i(t)}, i = 1, . . . n. Then the
general solution to (4.4) is given by
v(t) = M(t) ∫_{t_0}^t M^{−1}(s) g(s) ds + M(t) M(t_0)^{−1} v_0.
Proof. The proof is quite similar to the integrating factor method for
a first order scalar equation. We have the equation
dv/dt = A(t) v + g(t),   v(t_0) = v_0.
Look for a solution of the form v(t) = M(t) w(t), where M(t) satisfies the matrix equation
dM/dt = A(t) M.
Differentiating gives
dv/dt = (dM/dt) w(t) + M(t) dw/dt
      = A(t) M w(t) + M(t) dw/dt
      = A(t) v + M(t) dw/dt.
Setting this equal to the right-hand side of the equation,
A(t) v + M(t) dw/dt = A(t) v + g(t)
M(t) dw/dt = g(t)
dw/dt = M^{−1}(t) g(t).
Integrating from t_0 to t and using w(t_0) = M(t_0)^{−1} v_0 gives the formula above.
dv/dt = A v,     (4.5)
where the matrix A does not depend on t.
We can find a number of solutions to a constant coefficient equa-
tion, perhaps even all of them, using basic linear algebra.
Lemma 4.4.1. Suppose that the matrix A has a set of eigenvectors {u_k}
with corresponding eigenvalues λ_k:
A u_k = λ_k u_k.
Then the functions
v_k(t) = e^{λ_k t} u_k
are solutions to (4.5). If the eigenvalues are distinct then the solutions v_k(t)
are linearly independent.
The matrix A has eigenvalues \lambda_1 = -1, with eigenvector (1, -1)^T, and \lambda_2 = 3, with eigenvector (1, 1)^T. This gives

v_1(t) = e^{-t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad v_2(t) = e^{3t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}.

Since the eigenvalues are distinct these solutions are linearly independent.
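The eigenpairs quoted in the example can be checked with a few lines of numpy. Note that the matrix A = [[1, 2], [2, 1]] below is an assumption, reconstructed from the stated eigenpairs (it is the unique 2×2 matrix with these eigenvalues and eigenvectors):

```python
import numpy as np

# Assumed matrix, reconstructed from the stated eigenpairs.
A = np.array([[1.0, 2.0], [2.0, 1.0]])
print(sorted(np.linalg.eigvals(A)))   # eigenvalues -1 and 3

u1 = np.array([1.0, -1.0])
u2 = np.array([1.0, 1.0])
print(A @ u1)   # equals -1 * u1
print(A @ u2)   # equals  3 * u2
```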
using the power series definition. We begin by computing the first few powers:

A^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} = -I

A^3 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = -A

A^4 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I

A^5 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = A

and so on. Thus

e^{tA} = I + tA + \frac{t^2}{2}A^2 + \frac{t^3}{6}A^3 + \frac{t^4}{24}A^4 + \cdots
= \left(1 - \frac{t^2}{2} + \frac{t^4}{24} - \cdots\right)I + \left(t - \frac{t^3}{6} + \frac{t^5}{120} - \cdots\right)A
= \cos(t)I + \sin(t)A
= \begin{pmatrix} \cos(t) & -\sin(t) \\ \sin(t) & \cos(t) \end{pmatrix}.
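The series manipulation above can be confirmed directly: a sketch that sums the matrix power series by brute force and compares it with the closed form cos(t) I + sin(t) A.

```python
import numpy as np

# Sum the power series for e^{tA} directly and compare with the
# closed form cos(t) I + sin(t) A derived above.
A = np.array([[0.0, -1.0], [1.0, 0.0]])

def expm_series(B, t, terms=40):
    total = np.eye(2)
    term = np.eye(2)
    for k in range(1, terms):
        term = term @ (t * B) / k   # accumulates t^k B^k / k!
        total = total + term
    return total

t = 0.7
closed_form = np.cos(t) * np.eye(2) + np.sin(t) * A
print(np.max(np.abs(expm_series(A, t) - closed_form)))  # essentially zero
```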
e^{tA} = U\,\operatorname{diag}\!\left(e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}\right)U^{-1}
B_0 = I
B_1 = A - \lambda_1 I
B_2 = (A - \lambda_2 I)(A - \lambda_1 I) = (A - \lambda_2 I)B_1
B_3 = (A - \lambda_3 I)(A - \lambda_2 I)(A - \lambda_1 I) = (A - \lambda_3 I)B_2
\vdots
B_{n-1} = (A - \lambda_{n-1} I)(A - \lambda_{n-2} I)\cdots(A - \lambda_1 I) = (A - \lambda_{n-1} I)B_{n-2}.
\frac{dr_1}{dt} = \lambda_1 r_1, \qquad r_1(0) = 1
\frac{dr_2}{dt} = \lambda_2 r_2 + r_1(t), \qquad r_2(0) = 0
\frac{dr_3}{dt} = \lambda_3 r_3 + r_2(t), \qquad r_3(0) = 0
\vdots
\frac{dr_n}{dt} = \lambda_n r_n + r_{n-1}(t), \qquad r_n(0) = 0

Then the matrix exponential is given by

e^{tA} = r_1(t)B_0 + r_2(t)B_1 + \cdots + r_n(t)B_{n-1} = \sum_{k=0}^{n-1} r_{k+1}(t)B_k.
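The recipe above translates directly into code. A minimal sketch of Putzer's method, assuming nothing beyond numpy: the scalar initial value problems for r_1, …, r_n are integrated numerically (RK4) rather than solved in closed form, and any fixed ordering of the eigenvalues works.

```python
import numpy as np

def putzer_expm(A, t, steps=2000):
    lam = np.linalg.eigvals(A)
    n = len(lam)
    # B_0 = I and B_k = (A - lam_k I) B_{k-1}
    B = [np.eye(n, dtype=complex)]
    for k in range(n - 1):
        B.append((A - lam[k] * np.eye(n)) @ B[k])

    def f(r):
        # r_1' = lam_1 r_1, and r_k' = lam_k r_k + r_{k-1} for k >= 2
        out = lam * r
        out[1:] = out[1:] + r[:-1]
        return out

    r = np.zeros(n, dtype=complex)
    r[0] = 1.0
    h = t / steps
    for _ in range(steps):            # classical RK4 steps
        k1 = f(r)
        k2 = f(r + h / 2 * k1)
        k3 = f(r + h / 2 * k2)
        k4 = f(r + h * k3)
        r = r + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return sum(r[k] * B[k] for k in range(n)).real

A = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
print(putzer_expm(A, 1.0))
```

For the matrix of Example 4.4.3 this reproduces the closed form computed in the text.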
Example 4.4.3. Compute the matrix exponential e^{tA} where A is the matrix

A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}

Diagonalizing, with eigenvalues 0, 2, 2, gives

e^{tA} = \begin{pmatrix} 1 & 1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{2t} \end{pmatrix}\begin{pmatrix} \frac{1}{2} & -\frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}
= \begin{pmatrix} \frac{e^{2t}+1}{2} & \frac{e^{2t}-1}{2} & 0 \\ \frac{e^{2t}-1}{2} & \frac{e^{2t}+1}{2} & 0 \\ 0 & 0 & e^{2t} \end{pmatrix}.
Using Putzer's method we have that the matrices B_0, B_1, B_2 are given by

B_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

B_1 = A - 0I = A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}

B_2 = (A - 2I)B_1 = \begin{pmatrix} -1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}

The scalar equations are

\frac{dr_1}{dt} = 0\cdot r_1, \qquad r_1(0) = 1 \qquad\Longrightarrow\qquad r_1(t) = e^{0t} = 1

\frac{dr_2}{dt} = 2r_2(t) + 1, \qquad r_2(0) = 0 \qquad\Longrightarrow\qquad r_2(t) = \frac{e^{2t}}{2} - \frac{1}{2}

Ordinarily we would have to compute the solution to

\frac{dr_3}{dt} = 2r_3 + r_2(t) = 2r_3 + \frac{e^{2t}}{2} - \frac{1}{2}, \qquad r_3(0) = 0

(you can check that the solution is r_3(t) = \frac{t}{2}e^{2t} - \frac{1}{4}e^{2t} + \frac{1}{4}), but since it multiplies B_2, which is the zero matrix, we do not actually need to calculate it. Thus we have

e^{tA} = 1\cdot B_0 + \frac{e^{2t}-1}{2}B_1 = \begin{pmatrix} \frac{e^{2t}+1}{2} & \frac{e^{2t}-1}{2} & 0 \\ \frac{e^{2t}-1}{2} & \frac{e^{2t}+1}{2} & 0 \\ 0 & 0 & e^{2t} \end{pmatrix}
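The closed form for r_3 quoted above is easy to verify numerically. A sketch, assuming only the ODE r_3' = 2r_3 + (e^{2t} − 1)/2 with r_3(0) = 0 as stated in the text, integrated with midpoint (RK2) steps:

```python
import numpy as np

# Claimed closed form: r3(t) = (t/2) e^{2t} - (1/4) e^{2t} + 1/4
def r3_closed(t):
    return (t / 2) * np.exp(2 * t) - 0.25 * np.exp(2 * t) + 0.25

def rhs(t, r):
    return 2 * r + (np.exp(2 * t) - 1) / 2

h, t, r = 1e-4, 0.0, 0.0
for _ in range(10000):               # integrate out to t = 1
    k1 = rhs(t, r)
    r = r + h * rhs(t + h / 2, r + h / 2 * k1)
    t = t + h
print(r, r3_closed(1.0))             # the two values agree
```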
\frac{dr_1}{dt} = r_1, \qquad r_1(0) = 1 \qquad\Longrightarrow\qquad r_1(t) = e^t

\frac{dr_2}{dt} = r_2(t) + e^t, \qquad r_2(0) = 0 \qquad\Longrightarrow\qquad r_2(t) = te^t

\frac{dr_3}{dt} = r_3(t) + te^t, \qquad r_3(0) = 0 \qquad\Longrightarrow\qquad r_3(t) = \frac{t^2}{2}e^t

The matrix exponential is given by

e^{tA} = e^t B_0 + te^t B_1 + \frac{t^2}{2}e^t B_2
= \begin{pmatrix} (1 + 2t + \frac{t^2}{2})e^t & (2t + \frac{t^2}{2})e^t & (t + \frac{t^2}{2})e^t \\ -(t + \frac{t^2}{2})e^t & (1 - t - \frac{t^2}{2})e^t & -\frac{t^2}{2}e^t \\ -te^t & -te^t & (1 - t)e^t \end{pmatrix}
Exercise 4.5.1
Find the characteristic polynomial and the eigenvalues for the following matrices

a) \begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix} \quad b) \begin{pmatrix} 5 & 4 \\ 0 & 2 \end{pmatrix} \quad c) \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \quad d) \begin{pmatrix} 2 & 4 \\ -9 & 2 \end{pmatrix}

e) \begin{pmatrix} 3 & 2 & 1 \\ 0 & 2 & 3 \\ 0 & 3 & 2 \end{pmatrix} \quad f) \begin{pmatrix} -2 & 1 & 1 \\ 0 & -1 & 1 \\ 1 & 1 & -2 \end{pmatrix} \quad g) \begin{pmatrix} 3 & 1 & 2 \\ 0 & 4 & 1 \\ 0 & 0 & -2 \end{pmatrix}
Exercise 4.5.2
Find the eigenvectors for the matrices in problem 4.5.1
Exercise 4.5.3
Solve the following initial value problems
a) \frac{dv}{dt} = \begin{pmatrix} 4 & 2 \\ 2 & 4 \end{pmatrix}v; \quad v(0) = \begin{pmatrix} 3 \\ 5 \end{pmatrix}

b) \frac{dv}{dt} = \begin{pmatrix} 1 & 2 \\ 0 & 2 \end{pmatrix}v; \quad v(0) = \begin{pmatrix} 2 \\ 2 \end{pmatrix}

c) \frac{dv}{dt} = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}v; \quad v(0) = \begin{pmatrix} 2 \\ 3 \end{pmatrix}
Exercise 4.5.4
Find the matrix exponential etA for the matrices given in problem
4.5.1
Exercise 4.5.5
Find the eigenvalues and eigenvectors for the two matrices

\begin{pmatrix} 2 & 1 & 1 \\ 0 & 2 & 4 \\ 0 & 0 & 2 \end{pmatrix} \qquad \begin{pmatrix} 2 & 1 & 1 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}
Exercise 4.5.6
Find the matrix exponentials for the two matrices given in problem
4.5.5
Part II
\frac{d^3y}{dt^3} + y^2\frac{dy}{dt} + \cos(y) = 5, \qquad y(0) = 1; \quad \frac{dy}{dt}(0) = 2; \quad \frac{d^2y}{dt^2}(0) = -2.
For initial value problems we have an existence and uniqueness the-
orem that (under some mild technical conditions) guarantees that
there exists a unique solution in some small neighborhood of the
initial data.
A differential equation problem where values are specified at more
than one point is called a boundary value problem. For instance the
problem
y'' + y = 0, \qquad y(0) = 1, \quad y\!\left(\frac{\pi}{2}\right) = 0
is a boundary value problem. In some cases, such as the above, there
is a unique solution (in the above case the solution is y = cos(t)).
In other cases there may be no solution, or the solution may not be
unique. While many applications of differential equations are posed
as initial value problems there are also applications which give rise to
two point boundary value problems.
One example of a boundary value problem arises in computing the
deflection of a beam:
Example 5.1.1. The equation for the deflection y( x ) of a beam with con-
stant cross-sectional area and elastic modulus is given by
EI\frac{d^4y}{dx^4} = g(x)
where E is a constant called the "elastic modulus", and I is the second moment of the cross-sectional area of the beam. A larger value of E represents a stiffer material.
A cos(0) + B sin(0) = A = 0
A cos(π ) + B sin(π ) = − A = 0 .
In this case we get two equations that are not linearly independent,
but they are consistent. We know from linear algebra that there are
an infinite number of solutions – A = 0 and B is undetermined.
In the second case we can solve the inhomogeneous equation using the method of undetermined coefficients or of variation of parameters. We find that the general solution is given by

y(x) = A\cos(\pi x) + B\sin(\pi x) - \frac{\pi x\cos(\pi x)}{2\pi}.

In this case when we try to solve the boundary value problem we find

y(0) = A\cos(0) + B\sin(0) - \frac{0\cdot\cos(0)}{2\pi} = A = 0

y(1) = A\cos(\pi) + B\sin(\pi) - \frac{\pi\cos(\pi)}{2\pi} = -A + \frac{1}{2} = 0.

In this case the boundary conditions lead to an inconsistent set of linear equations, A = 0 and A = \frac{1}{2}, for which there can be no solution.
The following theorem tells us that these are the only possibilities:
a two point boundary value problem can have zero, one, or infinitely
many solutions. For simplicity we will state and prove the theorem
for second order two point boundary value problems; the proof for
higher order boundary value problems is basically the same, with
minor notational changes.
Theorem 5.2.1. Consider the second order linear two point boundary value problem

\frac{d^2y}{dx^2} + p(x)\frac{dy}{dx} + q(x)y = Ly = f(x) \qquad (5.3)

\alpha_1 y(a) + \alpha_2 y'(a) = A
\beta_1 y(b) + \beta_2 y'(b) = B,

together with the associated homogeneous problem

\frac{d^2y}{dx^2} + p(x)\frac{dy}{dx} + q(x)y = Ly = 0 \qquad (5.4)

\alpha_1 y(a) + \alpha_2 y'(a) = 0
\beta_1 y(b) + \beta_2 y'(b) = 0.

2. If Problem (5.4) has a unique solution then Problem (5.3) has a unique solution.

3. If Problem (5.4) does not have a unique solution then Problem (5.3) either has no solutions or an infinite number of solutions.
Proof. This theorem may remind you of the following fact from linear
algebra: a system of linear equations
Mx = b
\alpha_1 w(a) + \alpha_2 w'(a) = A, \qquad \beta_1 w(b) + \beta_2 w'(b) = B,

y(x) = Cy_1(x) + Dy_2(x).
We know from linear algebra that typically the only solution to this set of homogeneous linear equations is C = 0, D = 0. There is a non-zero solution if and only if the matrix

M = \begin{pmatrix} \alpha_1 y_1(a) + \alpha_2 y_1'(a) & \alpha_1 y_2(a) + \alpha_2 y_2'(a) \\ \beta_1 y_1(b) + \beta_2 y_1'(b) & \beta_1 y_2(b) + \beta_2 y_2'(b) \end{pmatrix} \qquad (5.5)

is singular,
or equivalently

M\begin{pmatrix} C \\ D \end{pmatrix} = \begin{pmatrix} A - (\alpha_1 y_{part}(a) + \alpha_2 y_{part}'(a)) \\ B - (\beta_1 y_{part}(b) + \beta_2 y_{part}'(b)) \end{pmatrix} \qquad (5.6)
Note that the M arising here is the same as the M arising in the solu-
tion to the homogeneous problem. Therefore the system of equations
(5.6), and hence the boundary value problem, has a unique solution
if and only if the matrix M is non-singular, which is true if and only
if the homogeneous boundary value problem (5.4) has a unique solu-
tion y( x ) = 0.
Exercise 5.2.1
Determine if the following boundary value problems have no solu-
tions, one solution, or infinite solutions. If there are solutions, find
them.
Exercise 5.2.2
Determine the values of A for which the following boundary value
problems have solutions.
L(λ)y = 0,
α1 y ( a ) + α2 y ′ ( a ) = 0 β 1 y(b) + β 2 y′ (b) = 0.
This equation always has the solution y( x ) = 0, which we call the trivial
solution. We know from Theorem 5.2.1 that such an equation will either
have a unique solution or an infinite number of solutions. The eigenvalues
are the values of λ for which the boundary value problem has an infinite
number of solutions. The non-trivial solutions are called eigenfunctions.
More generally an nth order eigenvalue problem would consist of an nth
order linear differential equation involving an unknown parameter, together
with n homogeneous boundary conditions. Most commonly these would
be specified at two points, although one could specify boundary conditions
at more than two points. In the theory of elastic beams, for instance, one
often has to solve a fourth order eigenvalue problem, with two boundary
conditions specified at each end of the beam.
Example 5.3.1. For what values of λ does the boundary value problem

y'' + \lambda y = 0, \qquad y(0) = 0, \quad y(L) = 0

have non-zero solutions? Depending on the sign of λ the general solution takes one of the forms y = A\cos(kx) + B\sin(kx), y = A + Bx, or y = Ae^{kx} + Be^{-kx}.

Case 1: λ > 0. Writing λ = k^2, the general solution is

y = A\cos(kx) + B\sin(kx)
Using the first boundary condition we have that y(0) = A = 0. Using this
result in the second boundary condition we have y( L) = B sin(kL) = 0.
Therefore either B = 0 or \sin(kL) = 0. We are only interested in non-zero solutions so we choose \sin(kL) = 0, which is true for kL = n\pi, n = 1, 2, 3, \ldots. So the eigenvalues are \lambda_n = k^2 = \frac{n^2\pi^2}{L^2} and the corresponding eigenfunctions are y_n(x) = \sin(\frac{n\pi x}{L}). We use the subscripts in \lambda_n and y_n(x) to indicate that eigenfunctions correspond to specific eigenvalues.
Case 2: λ = 0
The equation is now y'' = 0, which has solutions y = A + Bx.

Case 3: λ < 0
Writing λ = -k^2, the general solution is y = Ae^{kx} + Be^{-kx}.
Example 5.3.3. Find all of the eigenvalues and the corresponding eigenfunctions for the boundary value problem

y'' + y' + \lambda y = 0, \qquad y(0) = 0, \quad y(1) = 0.

The characteristic equation is

r^2 + r + \lambda = 0

which has roots r = -\frac{1}{2} \pm \sqrt{\frac{1}{4} - \lambda}. It is convenient to separate this into three cases:

• λ < 1/4, when the characteristic equation has real and distinct roots.
• λ = 1/4, when the characteristic equation has a repeated real root.
• λ > 1/4, when the characteristic equation has complex conjugate roots.

Case 1: λ > 1/4
Let's write λ = \frac{1}{4} + k^2 where k is a non-zero real number. The general solution is given by y(x) = Ae^{-x/2}\cos(kx) + Be^{-x/2}\sin(kx).
In the case of real distinct roots r_1, r_2 the boundary conditions give

y(0) = A + B = 0
y(1) = Ae^{r_1} + Be^{r_2} = 0
Example 5.3.4 (Guitar String). The equation for a one dimensional vibrat-
ing string, such as a guitar string, is
\frac{\partial^2 y}{\partial t^2} = c^2\frac{\partial^2 y}{\partial x^2}, \qquad y(0, t) = 0, \quad y(L, t) = 0.
Here c is a constant representing the wave speed of the string. If we look for
a “pure tone” or harmonic we look for a solution of the form
y = f ( x ) cos(ωt)
Substituting this into the equation gives the following equation for f ( x )
c2 f xx + ω 2 f = 0 f (0) = f ( L) = 0.
This is an eigenvalue problem for ω, the frequency. The fact that this equa-
tion has non-zero solutions only for specific values of ω tells us that the
string will only vibrate at certain specific frequencies, the eigenvalues. The
general solution to the above is given by
f(x) = A\cos\!\left(\frac{\omega}{c}x\right) + B\sin\!\left(\frac{\omega}{c}x\right)
Exercise 5.3.1
Calculate all of the eigenvalues and the corresponding eigenfunctions
for the following boundary value problems.
6.1 Background
Av = \lambda v

has n linearly independent eigenvectors \{v_i\}_{i=1}^n. These vectors form a basis for \mathbb{R}^n and can be chosen to be orthonormal:

v_i \cdot v_j = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases}
Theorem 6.1.2. Suppose that \{v_i\}_{i=1}^n is an orthonormal basis for \mathbb{R}^n. For any vector w \in \mathbb{R}^n there are unique coefficients \alpha_i such that w = \sum_{i=1}^n \alpha_i v_i.

To find (for instance) \alpha_1 we can take the dot product with v_1 to get

v_1 \cdot w = \sum_{i=1}^n \alpha_i\, v_i \cdot v_1 = \alpha_1 v_1 \cdot v_1 + \alpha_2 v_1 \cdot v_2 + \cdots + \alpha_n v_1 \cdot v_n = \alpha_1.
\Lambda = U^T A U
The topic of most of the rest of the course will be Fourier series.
The classical Fourier series, along with the Fourier sine and cosine
series, are the analog of the eigenvector expansion for certain second order boundary value problems. The functions \{\sin(\frac{n\pi x}{L})\}_{n=1}^{\infty} or \{\cos(\frac{n\pi x}{L})\}_{n=0}^{\infty} are analogous to the eigenvectors, and most "reasonable" functions can be decomposed in terms of these basis functions.
\int_0^{2L} f(x)\cos\!\left(\frac{5\pi x}{L}\right)dx = A_0\int_0^{2L}\cos\!\left(\frac{5\pi x}{L}\right)dx + \sum_{n=1}^{\infty} A_n\int_0^{2L}\cos\!\left(\frac{5\pi x}{L}\right)\cos\!\left(\frac{n\pi x}{L}\right)dx + \sum_{n=1}^{\infty} B_n\int_0^{2L}\cos\!\left(\frac{5\pi x}{L}\right)\sin\!\left(\frac{n\pi x}{L}\right)dx. \qquad (6.2)
Next we should notice that, by orthogonality, all of the terms on the righthand side of (6.2) vanish except for the n = 5 term in the first sum.
Theorem 6.2.1. Suppose that f(x) defined for x \in (0, 2L) can be represented as a series of the form

f(x) = A_0 + \sum_{n=1}^{\infty} A_n\cos\!\left(\frac{n\pi x}{L}\right) + \sum_{n=1}^{\infty} B_n\sin\!\left(\frac{n\pi x}{L}\right). \qquad (6.3)

Then the coefficients are given by

A_0 = \frac{1}{2L}\int_0^{2L} f(x)dx \qquad (6.4)

A_n = \frac{1}{L}\int_0^{2L} f(x)\cos\!\left(\frac{n\pi x}{L}\right)dx, \quad n \geq 1 \qquad (6.5)

B_n = \frac{1}{L}\int_0^{2L} f(x)\sin\!\left(\frac{n\pi x}{L}\right)dx \qquad (6.6)
The series in Equation (6.3) of Theorem 6.2.1 is called a Fourier
series. The somewhat surprising fact is that under some very mild
assumptions all periodic functions can be represented as a Fourier
series.
converges to f(x) at points of continuity of f(x), and to \frac{1}{2}\left(f(x^-) + f(x^+)\right) at the jump discontinuities.
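The coefficient formulas of Theorem 6.2.1 can be exercised numerically. A sketch, using the illustrative (assumed) choice f(x) = x on (0, 2L) with L = 1: compute A_0, A_n, B_n by trapezoidal quadrature and evaluate a partial sum at a point of continuity.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, 2 * L, 20001)
f = x.copy()               # illustrative periodic data: f(x) = x on (0, 2)

def integral(values):
    # trapezoidal rule on the grid x
    return np.sum((values[1:] + values[:-1]) / 2 * np.diff(x))

x0 = 0.5                   # point of continuity, f(x0) = 0.5
partial_sum = integral(f) / (2 * L)            # A_0
for n in range(1, 201):
    An = integral(f * np.cos(n * np.pi * x / L)) / L
    Bn = integral(f * np.sin(n * np.pi * x / L)) / L
    partial_sum += An * np.cos(n * np.pi * x0 / L)
    partial_sum += Bn * np.sin(n * np.pi * x0 / L)
print(partial_sum)         # close to f(0.5) = 0.5
```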
f(x) = \begin{cases} 1 & x \in (0, \frac{1}{2}) \\ 0 & x \in (\frac{1}{2}, 1) \end{cases}
and repeated periodically (see the margin figure). Find the Fourier series for
f ( x ).
The period in this case is 2L = 1. The Fourier coefficients are easy to
compute in this case, and are given as follows:
A_0 = \int_0^1 f(x)dx = \int_0^{1/2} dx = \frac{1}{2}

A_n = 2\int_0^{1/2}\cos(2n\pi x)dx = \left.\frac{\sin(2\pi n x)}{\pi n}\right|_0^{1/2} = 0

B_n = 2\int_0^{1/2}\sin(2n\pi x)dx = \left.-\frac{\cos(2\pi n x)}{\pi n}\right|_0^{1/2} = \frac{1 - \cos(\pi n)}{\pi n}

(Figure 6.1: The square wave function.)

Given the fact that \cos(\pi n) = 1 if n is even and \cos(\pi n) = -1 if n is odd, B_n can also be written as

B_n = \begin{cases} \frac{2}{\pi n} & n \text{ odd} \\ 0 & n \text{ even} \end{cases}
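The resulting series can be summed numerically at a point of continuity. A sketch, evaluating 1/2 + Σ_{n odd} (2/(πn)) sin(2πnx) at x = 1/4, where f(1/4) = 1:

```python
import numpy as np

x0 = 0.25
s = 0.5                               # the A_0 term
for n in range(1, 100001, 2):         # odd n only
    s += 2.0 / (np.pi * n) * np.sin(2 * np.pi * n * x0)
print(s)                              # converges to f(1/4) = 1
```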
The margin figure depicts the square wave function f(x) together with the first twenty-six terms of the Fourier series. (Figure 6.2: The square wave function together with the first twenty-six terms of the Fourier series.)

Example 6.2.2. Find the Fourier series for the function f(x) defined as follows:

f(x) = x(1 - x), \qquad x \in (0, 1)
f(x + 1) = f(x)

The graph of this function on the interval (-1, 2) is presented in the side margin: the function continues periodically to the rest of the real line.
This is a Fourier series with 2L = 1. The Fourier coefficients can be calculated by integration by parts as

A_0 = \frac{1}{2L}\int_0^{2L} f(x)dx = \int_0^1 x(1-x)dx = \frac{1}{6}

(Figure 6.3: The function f(x) = x(1-x) for x \in (0, 1), extended periodically.)

A_n = \frac{1}{L}\int_0^{2L} f(x)\cos\!\left(\frac{n\pi x}{L}\right)dx = 2\int_0^1 x(1-x)\cos(2n\pi x)dx
= 2\left[\frac{x(1-x)}{2n\pi}\sin(2n\pi x) + \frac{(1-2x)\cos(2n\pi x)}{4n^2\pi^2} + \frac{\sin(2n\pi x)}{4n^3\pi^3}\right]_0^1 = -\frac{1}{n^2\pi^2}

B_n = 2\int_0^1 x(1-x)\sin(2n\pi x)dx = 0.
The easiest way to see that all coefficients Bn must be zero is to note that
sin(2nπx ) is an odd function and the function f ( x ) is an even function, so
they must be orthogonal. This gives the series
f(x) = \frac{1}{6} - \sum_{n=1}^{\infty}\frac{\cos(2\pi n x)}{n^2\pi^2}
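This series can be checked numerically at a point of continuity. A sketch, evaluating it at x = 1/2, where f(1/2) = (1/2)(1 − 1/2) = 1/4:

```python
import numpy as np

x0 = 0.5
s = 1.0 / 6.0
for n in range(1, 5000):
    s -= np.cos(2 * np.pi * n * x0) / (n ** 2 * np.pi ** 2)
print(s)   # approximately 0.25
```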
The margin figure shows a comparison between f(x) (in black), the first five terms of the Fourier series (in red), and the first twenty-five terms of the Fourier series (in blue). It is clear that even five terms of the Fourier series give a good approximation to the original function (though with visible error), and that it is difficult to distinguish between the original function and the first twenty-five terms of the Fourier series; one can see a very small deviation between the two near the endpoints x = 0 and x = 1. Note that since f(x) is a piecewise C^2 continuous function the Fourier series converges to the function at all points.
main. Again this is because it requires two reflections to get back the
original function. Also note that the resulting function will typically
have jump discontinuities unless the original function tends to zero at
x = 0 and x = L.
The Fourier cosine and Fourier sine series are connected with the
even and the odd extensions of a function defined on (0, L) to the
whole line. We’ll begin with the even extension. We saw in the pre-
vious section that the even extension of a function defined on (0, L)
results in a function with a period of 2L. Thus we can expand this
function in a Fourier series of period 2L. Because the extended func-
tion is even we expect that this series will only involve the cosine
terms, since cosine is an even function. The formulas for the Fourier
coefficients become
A_0 = \frac{1}{2L}\int_0^{2L} f(x)dx

A_n = \frac{1}{L}\int_0^{2L} f(x)\cos\!\left(\frac{n\pi x}{L}\right)dx

B_n = \frac{1}{L}\int_0^{2L} f(x)\sin\!\left(\frac{n\pi x}{L}\right)dx

f(x) = A_0 + \sum_{n=1}^{\infty} A_n\cos\!\left(\frac{n\pi x}{L}\right) + B_n\sin\!\left(\frac{n\pi x}{L}\right)

(Figure 6.8: A function defined on x \in (0, L) (top), its Fourier cosine series (middle), and its Fourier sine series (bottom).)
Now the original function is defined only on (0, L) so it would be
preferable to express everything in terms of the function values in
the interval (0, L). For the A_n terms, since f(x) and \cos(\frac{n\pi x}{L}) are both even functions, the integrals over (0, L) and over (L, 2L) are equal. For the B_n terms, on the other hand, f(x) is even and \sin(\frac{n\pi x}{L}) is odd, so the integrals over (0, L) and over (L, 2L) are equal in magnitude and opposite in sign and cancel. This gives
A_0 = \frac{1}{L}\int_0^L f(x)dx \qquad (6.7)

A_n = \frac{2}{L}\int_0^L f(x)\cos\!\left(\frac{n\pi x}{L}\right)dx \qquad (6.8)

f(x) = A_0 + \sum_{n=1}^{\infty} A_n\cos\!\left(\frac{n\pi x}{L}\right) \qquad (6.9)
This is the Fourier cosine series for a function f ( x ) defined on (0, L).
The Fourier sine series is similar, but we make the odd extension
of f(x) across all boundaries. This again results in a function with period 2L, which we can again expand in a full Fourier series:

A_0 = \frac{1}{2L}\int_0^{2L} f(x)dx

A_n = \frac{1}{L}\int_0^{2L} f(x)\cos\!\left(\frac{n\pi x}{L}\right)dx

B_n = \frac{1}{L}\int_0^{2L} f(x)\sin\!\left(\frac{n\pi x}{L}\right)dx

f(x) = A_0 + \sum_{n=1}^{\infty} A_n\cos\!\left(\frac{n\pi x}{L}\right) + B_n\sin\!\left(\frac{n\pi x}{L}\right).
This time when we reduce the integrals to integrals over the interval
(0, L) we find that when computing An the contributions from the
integration over (0, L) and the integration over ( L, 2L) are equal
in magnitude but opposite in sign, since f ( x ) is odd and cos even.
This means that all of the An terms are zero. For the Bn terms the
contributions from the integration over (0, L) and the integration over
( L, 2L) are the same, so the integral is just twice the integral over
(0, L). This results in
B_n = \frac{2}{L}\int_0^L f(x)\sin\!\left(\frac{n\pi x}{L}\right)dx \qquad (6.10)

f(x) = \sum_{n=1}^{\infty} B_n\sin\!\left(\frac{n\pi x}{L}\right). \qquad (6.11)
Cosine Series
f(x) = A_0 + \sum_{n=1}^{\infty} A_n\cos\!\left(\frac{n\pi x}{L}\right)

A_0 = \frac{1}{L}\int_0^L f(x)dx

A_n = \frac{2}{L}\int_0^L f(x)\cos\!\left(\frac{n\pi x}{L}\right)dx
Example 6.4.1. We consider the Fourier sine and cosine series for the function f(x) = x on the interval x \in (0, 2). The Fourier cosine series is given by

A_0 = \frac{1}{2}\int_0^2 x\,dx = 1

A_n = \frac{2}{2}\int_0^2 x\cos\!\left(\frac{n\pi x}{2}\right)dx = \frac{4}{n^2\pi^2}(\cos(n\pi) - 1)

f(x) = A_0 + \sum_{n=1}^{\infty} A_n\cos\!\left(\frac{n\pi x}{2}\right) = 1 + \sum_{n=1}^{\infty}\frac{4}{n^2\pi^2}(\cos(n\pi) - 1)\cos\!\left(\frac{n\pi x}{2}\right)

Note that the integral for A_n is easily done by parts. Similarly the Fourier sine series is given by

B_n = \frac{2}{2}\int_0^2 x\sin\!\left(\frac{n\pi x}{2}\right)dx = -\frac{4}{n\pi}\cos(n\pi)

f(x) = \sum_{n=1}^{\infty}\frac{4(-1)^{n+1}}{n\pi}\sin\!\left(\frac{n\pi x}{2}\right).
The Fourier cosine and Fourier sine series for f(x) = x defined for x \in (0, 2) are depicted in margin figure 6.10. The graphs depict the first fifty terms of the cosine series (top) and sine series (bottom). The fifty term cosine series is essentially indistinguishable from the even extension of the function. The fifty term sine series is a good approximation, but one can see a noticeable oscillation. This is the typical artifact associated with a jump discontinuity, and is known as the Gibbs phenomenon. The same effect can sometimes be seen in image compression: the JPEG algorithm uses a discrete cosine transform to do compression, and at high compression rates one can sometimes observe this ringing phenomenon in regions where the image has a sharp transition from light to dark. (Figure 6.10: The first fifty terms of the Fourier cosine (top) and sine (bottom) series for the function f(x) = x defined for x \in (0, 2).)
Exercise 6.5.1
Calculate the Fourier series expansions for the given functions de-
fined over one period. Pay attention to the interval in which they are
defined and adjust the limits of integration accordingly.
a) f(x) = 42 for x \in (0, \pi)

b) f(x) = \begin{cases} x & x \in (0, 2) \\ 4 - x & x \in (2, 4) \end{cases}

c) f(x) = 5 - 2x for x \in (-3, 3)

d) f(x) = 10\cos(2x) for x \in (-\pi, \pi)

e) f(x) = \begin{cases} x & x \in (0, 1) \\ 1 & x \in (1, 2) \end{cases}

f) f(x) = x^2 for x \in (-5, 5)
Exercise 6.5.2
Calculate the Fourier sine series and the Fourier cosine series expan-
sions for the given functions.
a) f(x) = 42 for x \in (0, \pi)

b) f(x) = \begin{cases} x & x \in (0, 2) \\ 4 - x & x \in (2, 4) \end{cases}

c) f(x) = 5 - 2x for x \in (0, 3)

d) f(x) = 10\cos(2x) for x \in (0, \pi)

e) f(x) = \begin{cases} x & x \in (0, 1) \\ 1 & x \in (1, 2) \end{cases}

f) f(x) = x^2 for x \in (0, 5)
7 Partial Differential Equations and Separation of Variables
\frac{\partial u}{\partial t} = \sigma\frac{\partial^2 u}{\partial x^2}.
The one dimensional heat equation governs propagation of heat in a
one-dimensional medium such as a thin rod.
Let’s think a little about what this equation means. Imagine that
at some time t the temperature profile is given by u( x, t). If u( x, t)
has a local minimum (a cool spot) then u xx > 0 and hence ut > 0.
Conversely if u( x, t) has a local maximum, a hot spot, then u xx < 0
and hence ut < 0. So at any given time the hot spots will be getting
cooler and the cold spots will be getting warmer. The heat equation
is the mathematical expression of the idea that a body will tend to
move towards thermal equilibrium.
In solving ordinary differential equations we need to specify ini-
tial conditions. For the heat equation we need to specify an initial
temperature distribution, u( x, 0) = u0 ( x ), to specify what the initial
distribution of temperature in the rod looks like. Finally when we
are solving the heat equation we usually are doing so on some finite
domain. We generally need to say something about what happens at
the boundary of the domain. There are different types of boundary
conditions. The two most important types are called Dirichlet and
Neumann conditions. A Dirichlet condition consists of specifying
u( x, t) at the boundary. This would correspond to specifying the tem-
perature at the boundary. We could imagine, for instance, putting
one end of the rod in a furnace at a fixed temperature and studying
how the rest of the rod heats up. The end of the rod in the furnace
would satisfy a Dirichlet condition.
A Neumann condition means that u x is specified at the boundary.
Physically specifying u x amounts to specifying the rate at which heat
is entering or leaving the body through the boundary. A homoge-
neous Neumann boundary condition, u x = 0, means that no heat
is entering or leaving the body through this boundary (the end is
insulated). For instance the heat equation

\frac{\partial u}{\partial t} = \sigma\frac{\partial^2 u}{\partial x^2}, \qquad u(x, 0) = 0, \quad u_x(0, t) = 0, \quad u(L, t) = T_0

models a rod initially at temperature zero whose left end is insulated and whose right end is held at temperature T_0.
\frac{\partial u}{\partial t} = \sigma\frac{\partial^2 u}{\partial x^2} \qquad (7.1)
u(x, 0) = u_0(x) \qquad (7.2)
u(0, t) = T_{Left} \qquad (7.3)
u(L, t) = T_{Right}. \qquad (7.4)

The equilibrium temperature satisfies

\frac{\partial^2 u}{\partial x^2} = 0 \qquad (7.5)
u(0, t) = T_{Left} \qquad (7.6)
u(L, t) = T_{Right}. \qquad (7.7)
Note that this is just Equations (7.1)–(7.4) with ut = 0 (we are looking
for an equilibrium, so it should not change in time) and the initial
condition removed. This is easy to solve, as it is basically just an
ODE. The solution is
u_{equilibrium}(x) = T_{Left} + \frac{T_{Right} - T_{Left}}{L}x.

If we now define the new function

v(x, t) = u(x, t) - u_{equilibrium}(x) = u(x, t) - T_{Left} - \frac{T_{Right} - T_{Left}}{L}x
then it is straightforward to see that the function v( x, t) satisfies
\frac{\partial v}{\partial t} = \sigma\frac{\partial^2 v}{\partial x^2} \qquad (7.8)
v(x, 0) = u_0(x) - T_{Left} - \frac{T_{Right} - T_{Left}}{L}x \qquad (7.9)
v(0, t) = 0 \qquad (7.10)
v(L, t) = 0. \qquad (7.11)

We begin by ignoring the initial condition and looking for separated solutions of the homogeneous problem

\frac{\partial v}{\partial t} = \sigma\frac{\partial^2 v}{\partial x^2} \qquad (7.12)
v(0, t) = 0 \qquad (7.13)
v(L, t) = 0. \qquad (7.14)
v( x, t) = T (t) X ( x ).
Substituting this into the equation gives

\frac{dT}{dt}(t)X(x) = \sigma T(t)\frac{d^2X}{dx^2}(x)

and dividing by T(t)X(x) gives

\frac{\frac{dT}{dt}(t)}{T(t)} = \sigma\frac{\frac{d^2X}{dx^2}(x)}{X(x)}.

The left side depends only on t and the right side only on x, so both sides must be constant. We set

\frac{\frac{d^2X}{dx^2}(x)}{X(x)} = -\lambda, \qquad\text{so that}\qquad \frac{d^2X}{dx^2}(x) = -\lambda X(x)
Here the minus sign is not necessary, we have included it simply
for convenience. Now we would like to incorporate the boundary
condition that v = T (t) X ( x ) vanish at x = 0 and x = L. This implies
that X ( x ) must vanish at x = 0 and x = L. This gives us the two
point boundary value problem
\frac{d^2X}{dx^2}(x) = -\lambda X(x), \qquad X(0) = 0, \quad X(L) = 0.
\frac{dT}{dt} = -\sigma\lambda T = -\sigma\frac{n^2\pi^2}{L^2}T

which has the solution T(t) = e^{-\sigma\frac{n^2\pi^2}{L^2}t}. Thus the separated solution is

v(x, t) = Be^{-\sigma\frac{n^2\pi^2}{L^2}t}\sin\!\left(\frac{n\pi x}{L}\right).
Note that this is a solution for every integer n. We can use super-
position to get a more general solution: since the sum of solutions is
a solution we have that
v(x, t) = \sum_{n=1}^{\infty} B_n e^{-\sigma\frac{n^2\pi^2}{L^2}t}\sin\!\left(\frac{n\pi x}{L}\right) \qquad (7.15)

is a solution.

To summarize, the function v(x, t) = \sum_{n=1}^{\infty} B_n e^{-\sigma\frac{n^2\pi^2}{L^2}t}\sin(\frac{n\pi x}{L}) satisfies the heat equation with homogeneous boundary conditions

v_t = \sigma v_{xx}, \qquad v(0, t) = 0, \quad v(L, t) = 0.
The only thing that remains is the initial condition

v(x, 0) = \sum_{n=1}^{\infty} B_n\sin\!\left(\frac{n\pi x}{L}\right) = v_0(x). \qquad (7.16)

Note that we know how to solve this problem, and to compute the coefficients B_n so that equation (7.16) holds. This is a Fourier sine series, and thus we know that

B_n = \frac{2}{L}\int_0^L v_0(x)\sin\!\left(\frac{n\pi x}{L}\right)dx.
We will summarize these results in the form of a theorem.
vt = σv xx
v( x, 0) = v0 ( x )
v(0, t) = 0
v( L, t) = 0
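The recipe above (compute B_n from the initial data, then sum the series) can be sketched in a few lines. The initial data v0(x) = sin(πx) below is an assumed illustrative choice, picked because the exact solution is then the single mode e^{−σπ²t} sin(πx):

```python
import numpy as np

sigma, L = 1.0, 1.0
xs = np.linspace(0.0, L, 10001)

def v(x, t, terms=30):
    total = 0.0
    for n in range(1, terms + 1):
        # B_n = (2/L) \int_0^L v0(x) sin(n pi x / L) dx by trapezoidal rule
        vals = np.sin(np.pi * xs / L) * np.sin(n * np.pi * xs / L)
        Bn = (2 / L) * np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(xs))
        total += Bn * np.exp(-sigma * n ** 2 * np.pi ** 2 * t / L ** 2) \
                    * np.sin(n * np.pi * x / L)
    return total

# single-mode data, so the exact solution is e^{-sigma pi^2 t} sin(pi x)
print(v(0.3, 0.1), np.exp(-np.pi ** 2 * 0.1) * np.sin(0.3 * np.pi))
```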
a function of time. Assuming that heat is lost only through the ends of the rod, how long until the maximum temperature in the rod is 100°C?
The heat equation is

v_t = \sigma v_{xx}
v(x, 0) = 300°C, \qquad x \in (0, 2)
v(0, t) = 0°C
v(2, t) = 0°C
B_n = \frac{2}{2\,\mathrm{m}}\int_0^{2\,\mathrm{m}} 300°C\,\sin\!\left(\frac{n\pi x}{2\,\mathrm{m}}\right)dx = \left.-\frac{600°C}{n\pi}\cos\!\left(\frac{n\pi x}{2\,\mathrm{m}}\right)\right|_0^{2\,\mathrm{m}} = \frac{600°C}{n\pi}(1 - \cos(n\pi))
and so the solution is given by

v(x, t) = \sum_{n=1}^{\infty}\frac{600°C}{n\pi}(1 - \cos(n\pi))\,e^{-2\times 10^{-5}\,\mathrm{m}^2\mathrm{s}^{-1}\frac{n^2\pi^2}{4\,\mathrm{m}^2}t}\sin\!\left(\frac{n\pi x}{2\,\mathrm{m}}\right).
It is clear that the maximum temperature in the rod will be at the center. Plotting the function

v(1\,\mathrm{m}, t) = \sum_{n=1}^{\infty}\frac{1200°C}{(2n-1)\pi}\,e^{-2\times 10^{-5}\,\mathrm{m}^2\mathrm{s}^{-1}\frac{(2n-1)^2\pi^2}{4\,\mathrm{m}^2}t}\sin\!\left(\left(n - \frac{1}{2}\right)\pi\right)

(Figure 7.1: The temperature (in °C) at the center of the rod as a function of time.) we find the depicted graph of the temperature at the center of the rod. One can see that the temperature at the center of the rod first dips below 100°C at time t \approx 20{,}000\,\mathrm{s}.
X''(x) = -\lambda X(x), \qquad X'(0) = 0, \quad X'(L) = 0

For \lambda > 0 the general solution is X(x) = A\cos(\sqrt{\lambda}x) + B\sin(\sqrt{\lambda}x). The first boundary condition gives X'(0) = B\sqrt{\lambda} = 0, and since \lambda > 0 we must have that B = 0. Taking the derivative and substituting x = L gives

X'(L) = -A\sqrt{\lambda}\sin(\sqrt{\lambda}L) + B\sqrt{\lambda}\cos(\sqrt{\lambda}L) = -A\sqrt{\lambda}\sin(\sqrt{\lambda}L) = 0.
Here we have used the fact that B = 0. Once again, since \lambda > 0, we must have either A = 0 or \sin(\sqrt{\lambda}L) = 0. We are interested in finding a non-zero solution: if A = 0 then our solution is identically zero. So we require that \sin(\sqrt{\lambda}L) = 0. This implies that \sqrt{\lambda}L = n\pi, or \lambda = \frac{n^2\pi^2}{L^2}. This gives X(x) = A\cos(\frac{n\pi x}{L}). Substituting this back into Equation (7.18) we find that

\frac{T'(t)}{T(t)} = \sigma\frac{X''(x)}{X(x)} = -\sigma\frac{n^2\pi^2}{L^2}

T'(t) = -\sigma\frac{n^2\pi^2}{L^2}T(t)

So the separated solutions for \lambda > 0 are T(t)X(x) = A_n e^{-\sigma\frac{n^2\pi^2}{L^2}t}\cos\!\left(\frac{n\pi x}{L}\right).
For \lambda = 0 we have X''(x) = 0 and so X(x) = A + Bx. Imposing the boundary conditions gives

X'(0) = B = 0
X'(L) = B = 0.

So there are no conditions on A. Thus X(x) = A is a solution and the corresponding T(t) solves

\frac{T'(t)}{T(t)} = \sigma\frac{X''(x)}{X(x)} = 0

and so T(t) is also constant. Putting them together we have T(t)X(x) = A_0.

Finally, for \lambda < 0 the general solution is

X(x) = Ae^{\sqrt{-\lambda}x} + Be^{-\sqrt{-\lambda}x}.

It is easy to check that there are no eigenvalues in this case: the only solution satisfying X'(0) = 0 and X'(L) = 0 is X(x) = 0.
Now that we have found all of the separated solutions we can
combine them: the equation is linear, so any linear combination of
solutions is a solution. This gives us a more general solution
v(x, t) = A_0 + \sum_{n=1}^{\infty} A_n\cos\!\left(\frac{n\pi x}{L}\right)e^{-\sigma\frac{n^2\pi^2}{L^2}t}.

This function satisfies the partial differential equation v_t = \sigma v_{xx} along with the boundary conditions v_x(0, t) = 0 and v_x(L, t) = 0. The only thing remaining is the initial condition:

v(x, 0) = A_0 + \sum_{n=1}^{\infty} A_n\cos\!\left(\frac{n\pi x}{L}\right) = v_0(x).

This is a Fourier cosine series problem! We already know that

A_0 = \frac{1}{L}\int_0^L v_0(x)dx

A_n = \frac{2}{L}\int_0^L v_0(x)\cos\!\left(\frac{n\pi x}{L}\right)dx.
We state this as a theorem
Theorem 7.1.2. The solution to the heat equation with Neumann boundary
conditions
vt = σv xx
v( x, 0) = v0 ( x )
v x (0, t) = 0
v x ( L, t) = 0
Exercise 7.1.1
Solve the following heat equation problems.

a) v_t = 5v_{xx}, \quad v(x, 0) = x, \quad v(0, t) = 0, \quad v(3, t) = 0

b) v_t = 2v_{xx}, \quad v(x, 0) = 5 - x, \quad v_x(0, t) = 0, \quad v_x(5, t) = 0

c) v_t = 3v_{xx}, \quad v(x, 0) = 10, \quad v_x(0, t) = 0, \quad v(\pi, t) = 0
7.2.1 Background
v_{tt} = c^2 v_{xx} \qquad (7.19)
v(0, t) = 0 \qquad (7.20)
v(L, t) = 0 \qquad (7.21)
v(x, 0) = f(x) \qquad (7.22)
v_t(x, 0) = g(x). \qquad (7.23)
u_{xx} + \lambda u = 0, \qquad u(0) = 0, \quad u(L) = 0

has the solutions u_n(x) = \sin(\frac{n\pi x}{L}) with \lambda_n = \left(\frac{n\pi}{L}\right)^2. This suggests that we expand the solution in terms of the eigenfunctions. Looking for a solution in the form

v(x, t) = \sum_{n=1}^{\infty}\beta_n(t)\sin\!\left(\frac{n\pi x}{L}\right)
we find that the coefficients satisfy

\frac{d^2\beta_n}{dt^2} = -\frac{n^2\pi^2c^2}{L^2}\beta_n.

This is a constant coefficient linear differential equation; the solution is given by

\beta_n(t) = B_n\cos\!\left(\frac{n\pi ct}{L}\right) + C_n\sin\!\left(\frac{n\pi ct}{L}\right)
This gives

v(x, t) = \sum_{n=1}^{\infty} B_n\sin\!\left(\frac{n\pi x}{L}\right)\cos\!\left(\frac{n\pi ct}{L}\right) + C_n\sin\!\left(\frac{n\pi x}{L}\right)\sin\!\left(\frac{n\pi ct}{L}\right) \qquad (7.24)
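A quick numerical check that each term of (7.24) really solves the wave equation, using central differences; the parameter values below are arbitrary assumptions chosen for the test.

```python
import numpy as np

# One standing wave term of (7.24); n, c, L are arbitrary test choices.
n, c, L = 3, 2.0, 1.5
v = lambda x, t: np.sin(n * np.pi * x / L) * np.cos(n * np.pi * c * t / L)

x0, t0, h = 0.4, 0.9, 1e-4
vtt = (v(x0, t0 + h) - 2 * v(x0, t0) + v(x0, t0 - h)) / h ** 2
vxx = (v(x0 + h, t0) - 2 * v(x0, t0) + v(x0 - h, t0)) / h ** 2
print(vtt - c ** 2 * vxx)       # approximately zero
print(v(0.0, 0.3), v(L, 0.3))   # boundary conditions: both zero
```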
7.2.2 Interpretation
One thing that we notice about the solution to the wave equation
given in equation (7.24) is that the time dependence is sinusoidal.
This is in contrast to the solution to the heat equation , where the
time dependence is decaying exponential. This reflects the different
physics behind the heat and wave equations: heat problems tend
to decay to some kind of equilibrium. This decay is reflected in the
(decaying) exponential dependence on time. The wave equation, on
the other hand, supports sustained oscillations. This is reflected in
the sinusoidal dependence on time.
In general a solution to the wave equation is given by a superposition of terms of the form \sin(\frac{n\pi x}{L})\cos(\frac{n\pi ct}{L}). These building blocks are called standing waves. The wavenumber of these waves is k = \frac{n}{2L} and the angular frequency is \omega = \frac{nc}{2L}. The tone produced by a vibrating string will contain a number of different frequencies. The lowest frequency is n = 1: this is called the fundamental. The next is n = 2: this has a frequency twice the fundamental frequency, exactly one octave above the fundamental. The next mode, n = 3, has a frequency three times the fundamental. In music a pair of tones in a perfect fifth have frequencies in the ratio 3:2, so n = 3 and n = 2 form a perfect fifth. Similarly n = 4 and n = 3 form a fourth, etc. Much of our music theory developed the way it did because the first instruments were stringed instruments.
Differentiating the series

v(x, t) = \sum_{n=1}^{\infty} B_n\sin\!\left(\frac{n\pi x}{L}\right)\cos\!\left(\frac{n\pi ct}{L}\right) + C_n\sin\!\left(\frac{n\pi x}{L}\right)\sin\!\left(\frac{n\pi ct}{L}\right)

term by term in t gives

\frac{\partial v}{\partial t}(x, t) = \sum_{n=1}^{\infty} -\frac{n\pi c}{L}B_n\sin\!\left(\frac{n\pi x}{L}\right)\sin\!\left(\frac{n\pi ct}{L}\right) + \frac{n\pi c}{L}C_n\sin\!\left(\frac{n\pi x}{L}\right)\cos\!\left(\frac{n\pi ct}{L}\right)

\frac{\partial v}{\partial t}(x, 0) = \sum_{n=1}^{\infty}\frac{n\pi c}{L}C_n\sin\!\left(\frac{n\pi x}{L}\right) = g(x)

This is again a Fourier sine series problem, but this time the coefficient is \frac{n\pi c}{L}C_n. Thus we have

\frac{n\pi c}{L}C_n = \frac{2}{L}\int_0^L g(x)\sin\!\left(\frac{n\pi x}{L}\right)dx

C_n = \frac{2}{n\pi c}\int_0^L g(x)\sin\!\left(\frac{n\pi x}{L}\right)dx
v_{tt} = v_{xx}
v(0, t) = 0
v(1, t) = 0
v(x, 0) = 0
v_t(x, 0) = 1

Here L = 1 and c = 1. Since v(x, 0) = 0 we have

B_n = 0

C_n = \frac{2}{n\pi c}\int_0^L\sin\!\left(\frac{n\pi x}{L}\right)dx = \frac{2L}{n^2\pi^2c}(1 - \cos(n\pi))

v(x, t) = \sum_{n=1}^{\infty}\frac{2L}{n^2\pi^2}(1 - \cos(n\pi))\sin\!\left(\frac{n\pi x}{L}\right)\sin\!\left(\frac{n\pi t}{L}\right)
Exercise 7.2.1
Solve the following wave equation problems.

a) v_{tt} = 9v_{xx}, \quad v(0, t) = 0, \quad v(2, t) = 0, \quad v(x, 0) = x, \quad v_t(x, 0) = 0

b) v_{tt} = 16v_{xx}, \quad v(0, t) = 0, \quad v(3, t) = 0, \quad v(x, 0) = 6, \quad v_t(x, 0) = 5\sin(\pi x)
Part III
if and only if one of the factors is zero. The first factor is zero if x - 1 = 0, or x = 1. The second is zero if x + 1 = 0, or x = -1. The third factor is zero if x^2 + 1 = 0, or x = \pm\sqrt{-1} = \pm i. Thus the four roots are x = 1, x = -1, x = i, x = -i.
\frac{a + ib}{c + id} = \frac{a + ib}{c + id}\cdot\frac{c - id}{c - id} = \frac{(a + ib)(c - id)}{c^2 + d^2}.
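The conjugate trick above can be checked against Python's built-in complex arithmetic; the particular numbers are an arbitrary illustrative choice.

```python
# (a + ib)(c - id) = (ac + bd) + i(bc - ad), divided by c^2 + d^2
a, b, c, d = 3.0, 4.0, 1.0, -2.0
builtin = complex(a, b) / complex(c, d)
manual = complex(a * c + b * d, b * c - a * d) / (c ** 2 + d ** 2)
print(builtin, manual)
```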
Under this definition complex numbers add in the way that vec-
tors normally do: they add component-wise. We will give an inter-
pretation to multiplication shortly, but first we introduce the polar
representation. First we need to recall the Euler formula, which tells
us how to exponentiate complex numbers.
The quantities r and \theta, referred to as the modulus and the argument, are defined by

r = \sqrt{a^2 + b^2} = |z|, \qquad \tan(\theta) = \frac{b}{a}

a = r\cos\theta, \qquad b = r\sin\theta
is defined by

\tan\theta = \frac{-5}{-5} = 1

Here we have to be careful: there are two angles \theta such that \tan(\theta) = 1, namely \theta = \frac{\pi}{4} in the first quadrant and \theta = \frac{5\pi}{4} in the third quadrant. The original point -5 - 5i = (-5, -5) lies in the third quadrant, so

-5 - 5i = 5\sqrt{2}\,e^{\frac{5\pi}{4}i}
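The standard library can compute this polar form. One caveat worth demonstrating: `cmath.phase` returns an angle in (−π, π], so it reports −3π/4 here, which is the same angle as 5π/4 modulo 2π.

```python
import cmath
import math

z = -5 - 5j
r, theta = cmath.polar(z)
print(r)        # 5*sqrt(2) ~ 7.0711
print(theta)    # -3*pi/4 ~ -2.3562, i.e. 5*pi/4 - 2*pi
print(abs(r * cmath.exp(1j * 5 * math.pi / 4) - z))   # essentially 0
```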
The argument \arg(z) = \theta and the modulus |z| have some nice properties under multiplication:

|z_1 z_2| = |z_1||z_2|, \qquad \arg(z_1 z_2) = \arg(z_1) + \arg(z_2),

where in the second equation we have the understanding that the argument is only defined up to multiples of 2\pi. In other words when we multiply complex numbers the absolute values multiply and the angles add.
We also have

e^{−iθ} = cos(θ) − i sin(θ)

and

cos(θ) = (1/2) (e^{iθ} + e^{−iθ})        (8.1)
sin(θ) = (1/(2i)) (e^{iθ} − e^{−iθ})        (8.2)
We can actually prove these two formulas using differential equations. The left side of (8.1), cos(θ), solves the initial value problem y′′ + y = 0, y(0) = 1, y′(0) = 0. The right side of (8.1), (1/2)(e^{iθ} + e^{−iθ}), also solves this initial value problem, so by uniqueness of solutions the two sides must be equal.
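Identities (8.1) and (8.2) are also easy to spot-check numerically; a minimal sketch at an arbitrarily chosen test angle:

```python
import cmath

theta = 0.7  # any test angle works
cos_lhs = cmath.cos(theta)
cos_rhs = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_lhs = cmath.sin(theta)
sin_rhs = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)

print(abs(cos_lhs - cos_rhs), abs(sin_lhs - sin_rhs))  # both ~ 0
```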
8.2.1 Matrices
A matrix M is just a rectangular array of numbers, M jk . Some exam-
ples include
M1 = [  1 ]
     [  2 ]
     [ −7 ]

M2 = [ 3  2 ]
     [ 1  8 ]
     [ 0 −5 ]

M3 = [  2  1  3 −2 ]
     [ −4  6  0  8 ]
     [  1  0  0 −5 ]
There are a number of algebraic operations that we can perform
on matrices. Firstly we can multiply a matrix by a constant (scalar) a.
This simply amounts to multiplying each entry of the matrix by the
same scalar. For example
3 · M2 = [ 9   6 ]
         [ 3  24 ]
         [ 0 −15 ]

−4 · M3 = [ −8  −4 −12   8 ]
          [ 16 −24   0 −32 ]
          [ −4   0   0  20 ]

Next we can add two matrices provided they have the same dimensions! Addition is done termwise – we add the corresponding matrix entries. For example

[ 3  2 ]   [ 1  7 ]   [ 4  9 ]
[ 1  8 ] + [ 0 −1 ] = [ 1  7 ]
[ 0 −5 ]   [ 2 −3 ]   [ 2 −8 ]
If you have been exposed to the "dot product" or "scalar product" in one of
your other classes the ( j, l ) entry of MN is given by the dot product of the
jth row of M with the l th column of N.
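This row-times-column description is easy to check directly with, say, NumPy (the small example matrices here are our own choice):

```python
import numpy as np

M = np.array([[3, 2], [1, 8], [0, -5]])   # 3 x 2
N = np.array([[1, 0, 2], [4, -1, 3]])     # 2 x 3
P = M @ N                                 # 3 x 3 product

# the (j, l) entry of MN is the dot product of row j of M with column l of N
j, l = 2, 1
entry = np.dot(M[j, :], N[:, l])
print(P[j, l], entry)  # the two agree
```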
Exercise 8.2.1
Find pairs of matrices M and N such that the following hold
a) M + N is well-defined but MN is not.
b) MN is well-defined but M + N is not.
c) M + N , MN and NM are all well-defined but MN ̸= NM.
One very important special matrix is the identity matrix, usually
denoted I or In×n . The n × n matrix I has entries of 1 down the main
diagonal and 0 everywhere else.
Example 8.2.2.
I_{2×2} = [ 1 0 ]        I_{3×3} = [ 1 0 0 ]
          [ 0 1 ]                  [ 0 1 0 ]
                                   [ 0 0 1 ]
The identity matrix has the property that if I is the n × n identity matrix, N is any n × m matrix, and M is any m × n matrix, then

IN = N        MI = M
8.2.3 Determinants
We next introduce the idea of a determinant. For any n × n matrix M we define a scalar quantity, linear in the rows of the matrix, called the determinant. The determinant is a measure of the independence of the rows of the matrix. The determinant can be defined axiomatically, as follows. Suppose that r1, r2, . . . , rn are the n rows of the matrix, and let det(r1, r2, . . . , rn) denote the determinant. Then the determinant is the unique function satisfying the following three properties:

1. The determinant is linear in each row separately.
2. The determinant changes sign if two rows are exchanged.
3. The determinant of the identity matrix is 1.
Note that the property that the sign switches if we swap two rows
implies that the determinant is equal to zero if two rows are equal,
since that implies that det(r1 , r1 , r3 , . . . rn ) = − det(r1 , r1 , r3 , . . . rn ) and
thus det(r1 , r1 , r3 , . . . rn ) = 0. Combining this with linearity we see
that if any row is equal to a linear combination of the other rows then the determinant is zero.
Here are some useful properties of the determinant:
• The determinant is zero if and only if the rows are linearly dependent.
• The determinant is zero if and only if the columns are linearly dependent.
and

det [ a b c ]
    [ d e f ] = a·e·i + b·f·g + c·d·h − g·e·c − h·f·a − i·d·b
    [ g h i ]
Note: if you have noticed a pattern here be aware that it doesn’t general-
ize in the most obvious way to 4 × 4 matrices. The “down diagonals minus
up diagonals” formula does not work for 4 × 4 matrices.
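One can confirm this warning with a small experiment: extend the "down diagonals minus up diagonals" rule to 4 × 4 (with wrap-around) and compare it against a cofactor-expansion determinant. The test matrix below is our own choice; the rule agrees for 3 × 3 but not for 4 × 4.

```python
from math import prod

def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def diagonal_rule(M):
    # naive "down diagonals minus up diagonals" with wrap-around
    n = len(M)
    down = sum(prod(M[i][(j + i) % n] for i in range(n)) for j in range(n))
    up = sum(prod(M[i][(j - i) % n] for i in range(n)) for j in range(n))
    return down - up

M4 = [[1, 2, 0, 0],
      [3, 4, 0, 0],
      [0, 0, 1, 2],
      [0, 0, 3, 4]]

print(det(M4), diagonal_rule(M4))  # 4 -20: the diagonal rule fails for 4 x 4
```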
then the (2, 3) minor would be the determinant of the matrix obtained by
removing the 2nd row and the 3rd column.
M = [ 8 1 −7 ]
    [ 3 2  4 ]        (2, 3) minor = det [ 8 1 ] = 8 · 0 − 6 · 1 = −6
    [ 6 0  4 ]                           [ 6 0 ]
The above formula gives the expansion by column minors; exchanging the roles of i and j gives the formula for row minors.
det(M) = (−1)^{2+1} · 3 · det(M_{21}) + (−1)^{2+2} · 2 · det(M_{22}) + (−1)^{2+3} · 4 · det(M_{23})
       = (−1) · 3 · 4 + 2 · (8 · 4 + 6 · 7) + (−1) · 4 · (−6)
       = −12 + 148 + 24 = 160

where M_{21} = [ 1 −7; 0 4 ], M_{22} = [ 8 −7; 6 4 ], and M_{23} = [ 8 1; 6 0 ] are the 2 × 2 minors.
Here are some tips for computing determinants. Firstly note that
while the minors expansion works for any row or column it is eas-
iest to choose a row or a column that contains many zero entries,
since one does not need to compute the corresponding minor deter-
minants. If, for instance, we had done the expansion in the second
column instead of the second row we would not need to compute the
(3, 2) minor determinant, as it would be multiplied by zero. Secondly
notice that the factor (−1)i+ j alternates between +1 and −1. To get
the correct sign draw a checkerboard pattern with + in the upper
left-hand corner and alternating + and − signs. The entries then give the sign of (−1)^{i+j}.
[ + − + ]
[ − + − ]
[ + − + ]
so the signs going across the second row are (− + −). This factor is in addition to any signs on the matrix coefficients themselves, of course.
Example 8.2.5. Applying these tricks to the previous example we can see
that it would be easier to expand in either the second column or the third
row. Expanding in the third row gives
det(M) = +6 · det[ 1 −7; 2 4 ] + 4 · det[ 8 1; 3 2 ] = 6 · (4 − (−14)) + 4 · (16 − 3) = 108 + 52 = 160
As an example, consider the linear system Mx = b given by

8x1 + x2 − 7x3 = 11
3x1 + 2x2 + 4x3 = 0
6x1 + 4x3 = 5
x1 = det[ 11 1 −7; 0 2 4; 5 0 4 ] / det[ 8 1 −7; 3 2 4; 6 0 4 ] = 178/160 = 89/80

x2 = det[ 8 11 −7; 3 0 4; 6 5 4 ] / det[ 8 1 −7; 3 2 4; 6 0 4 ] = −133/160

x3 = det[ 8 1 11; 3 2 0; 6 0 5 ] / det[ 8 1 −7; 3 2 4; 6 0 4 ] = −67/160
Exercise 8.2.2
Check that the determinants in the numerators of the expressions for
x1,2,3 are equal to 178, −133 and −67 respectively.
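As an independent check (not a substitute for doing the exercise by hand), the three quotients can be compared against a numerical solve; a sketch with NumPy:

```python
import numpy as np

M = np.array([[8.0, 1.0, -7.0],
              [3.0, 2.0, 4.0],
              [6.0, 0.0, 4.0]])
b = np.array([11.0, 0.0, 5.0])

x = np.linalg.solve(M, b)
expected = [89 / 80, -133 / 160, -67 / 160]
print(x)  # matches (89/80, -133/160, -67/160)
```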
A closely related result (Theorem 8.2.2, used below) is that the homogeneous system Mx = 0 has a non-zero solution if and only if det(M) = 0.

A vector v ̸= 0 satisfying

Mv = λv

for some scalar λ is called an eigenvector of M, and λ is the corresponding eigenvalue. Eigenvectors are nice because the action of the matrix on an eigenvector is particularly simple – the vectors just get "stretched" by a scalar factor, the eigenvalue.
Next we outline how to find the eigenvalues and eigenvectors. The characteristic polynomial of M is P(λ) = det(M − λI), where I is the identity matrix. The eigenvalues of M are the roots of the characteristic polynomial.
To see this, note that Mv = λv can be rewritten as

(M − λI)v = 0.
We know from Theorem 8.2.2 that the only solution to this equation
is v = 0 unless det(M − λI) = 0. Since we require a non-zero solution
λ must be a root of the characteristic polynomial.
For example, if

M = [ 5 2 −1 ]
    [ 0 3 3 ]
    [ 0 3 3 ]

then, expanding along the first column,

P(λ) = det[ 5−λ 2 −1; 0 3−λ 3; 0 3 3−λ ] = (5 − λ) det[ 3−λ 3; 3 3−λ ] = (5 − λ)(λ² − 6λ)
The roots are λ = 5, λ = 0, and λ = 6. For the eigenvalue λ = 6 the equation (M − λI)v = 0 becomes
− v1 + 2v2 − v3 = 0
− 3v2 + 3v3 = 0
3v2 − 3v3 = 0
Note that the second and third equations are linearly dependent so we only
need one of these. Thus we must solve
− v1 + 2v2 − v3 = 0
− 3v2 + 3v3 = 0
The second equation implies that v2 = v3 . Substituting this into the first
equation shows that −v1 + v2 = 0 or v1 = v2 . Thus all three components of
v must be equal. We could choose, for instance, v3 = 1. This would give
v = [ 1 ]
    [ 1 ]
    [ 1 ]
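A quick numerical confirmation that (1, 1, 1) is indeed an eigenvector of this M with eigenvalue 6:

```python
import numpy as np

M = np.array([[5, 2, -1],
              [0, 3, 3],
              [0, 3, 3]])
v = np.array([1, 1, 1])

print(M @ v)  # [6 6 6], i.e. 6 * v
```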
For the eigenvalue λ = 5 the same procedure reduces the system (M − 5I)v = 0 to

2v2 = 0
v3 = 0
0 = 0,

so v2 = v3 = 0 and v1 is free.
Solution 1.1.1
From the equation and initial conditions

d²y/dt² = y³ − y        y(0) = 1        dy/dt(0) = −1

we compute

d²y/dt²(0) = y³(0) − y(0) = 1³ − 1 = 0.

Differentiating the equation once more,

d³y/dt³ = 3y² dy/dt − dy/dt

d³y/dt³(0) = 3y²(0) dy/dt(0) − dy/dt(0) = 3 · 1² · (−1) − (−1) = −2
Solution 1.1.3

y = 1 − t²
dy/dt = −2t

t dy/dt − y = −2t² − (1 − t²) = −(1 + t²)
Solution 1.1.4
y (0) = 1
y ′ (0) = 0
y′′ (0) = −1
y′′′ (0) = 0
y′′′′ (0) = 1
It is easy to guess the pattern here – d^k y/dt^k (0) = 0 if k is odd and d^k y/dt^k (0) = (−1)^{k/2} if k is even.
Solution 1.2.1
Solve the differential equation y′ = ty ln |y|.
This equation is separable:

dy/(y ln |y|) = t dt

Integrating both sides we get

ln(ln |y|) = t²/2 + A

and solving for y:

y(t) = e^{C e^{t²/2}}

where we called C = e^A.
Solution 1.2.2
We use the equation we found for the vertical position of a falling body:

y = −gt²/2 + vt + h

where v is the initial velocity and h is the initial height. Setting v = 100, h = 0, and g = 10, we get

y = −5t² + 100t
Solution 1.2.3
The horizontal velocity is given to us as vx = 20 and it is unaffected by any force (we are neglecting drag). Therefore the distance traveled is just 20 · 20 = 400 meters.
Solution 1.2.4
Solution 1.4.1

a) y′ + y²/t² = ty, y(0) = 1. No guaranteed solution.
b) y′ + y²/t² = ty, y(3) = 1. Guaranteed unique solution.
c) y′ = (y + t)/(y − t), y(1) = 2. Guaranteed unique solution.
d) y′ = (y + t)/(y − t), y(2) = 2. No guaranteed solution.
e) y′ = y^{1/3}/t², y(1) = 0. Guaranteed to exist but not guaranteed to be unique.
f) y′ + ty/cos(y) = 0, y(0) = 0. Guaranteed unique solution.
Solution 1.5.1
It is strongly recommended that one keep track of units here. In any
problem of this type the basic principle is that the rate of change
of any quantity is the rate at which the quantity enters the system
minus the rate at which it leaves.
Let W (t) denote the total amount of liquid in the tank. Liquid
enters at a rate of 10 L min−1 and leaves at a rate of 5 L min−1 . There
are initially 100 L of liquid, so W (t) satisfies the equation
dW/dt = 5 L min−1        W(0) = 100 L.

Integrating up shows that W(t) = 100 L + 5 L min−1 · t.
Next we need to write down the differential equation for the amount of salt in the tank. The liquid enters the tank at 10 L min−1, and the liquid contains 0.0001 kg L−1 of salt. Thus the rate at which salt enters the tank is

Rate in = 10 L min−1 × 0.0001 kg L−1 = 0.001 kg min−1.

Let S(t) denote the total amount of salt in the tank. The rate at which salt leaves is slightly trickier. Liquid leaves at 5 L min−1. The salt content of this liquid (in kg L−1) will be the total amount of salt (in kg) divided by the total amount of liquid (in L), so the rate at which salt leaves is

Rate out = 5 L min−1 · S(t)/W(t) = 5 L min−1 · S(t)/(100 L + 5 L min−1 · t) = S(t)/(20 min + t)
Initially the total amount of salt in the tank is 10 kg, so we have
that the differential equation for the amount of salt is
dS/dt = 0.001 kg min−1 − S(t)/(20 min + t)        S(0) = 10 kg
Since this equation is first order linear it can be solved with an integrating factor. Writing it as

dS/dt + S(t)/(20 min + t) = 0.001 kg min−1        S(0) = 10 kg

the integrating factor is

µ(t) = e^{∫ dt/(20 min + t)} = e^{log(20 min + t)} = 20 min + t

so that

d/dt ((20 min + t) S(t)) = 0.001 kg min−1 · (20 min + t) = 0.02 kg + 0.001 kg min−1 · t.

Integrating,

(20 min + t) S(t) = C + 0.02 kg · t + 0.001 kg min−1 · t²/2
In order to solve for the constant C we substitute t = 0 to find 20 min · 10 kg = C, so C = 200 kg min.
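Dropping the units, the resulting solution S(t) = (200 + 0.02 t + 0.0005 t²)/(20 + t) can be checked symbolically; a sketch with SymPy:

```python
import sympy as sp

t = sp.symbols("t", nonnegative=True)
S = (200 + sp.Rational(1, 50) * t + sp.Rational(1, 2000) * t**2) / (20 + t)

# the ODE: dS/dt + S/(20 + t) = 0.001, with S(0) = 10
residual = sp.simplify(sp.diff(S, t) + S / (20 + t) - sp.Rational(1, 1000))
print(residual, S.subs(t, 0))  # 0 and 10
```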
Solution 1.5.2

a) dy/dt + 2y = 1 + t, y(0) = 0. y(t) = (1/4)(2t − e^{−2t} + 1)
b) dy/dt + (2/t) y = sin(t)/t². y(t) = A/t² − cos(t)/t²
c) t dy/dt + y = sin(t), y(π) = 0. y(t) = −(1 + cos(t))/t
d) dy/dt + (2t/(t² + 1)) y = cos(t)/(t² + 1), y(0) = 1. y(t) = (1 + sin(t))/(1 + t²)
e) dy/dt + (sin(t)/cos(t)) y = sin(t)/cos³(t), y(0) = 0. y(t) = (1/3)(sec²(t) − cos(t))
Solution 1.6.1
a) Exact: x2 + 5xy + x + y2 + 2y = 32
b) Not exact.
c) Exact: x3 − 3x2 y + 5xy2 − y3 + e x−y = −1 + e−1
d) Not exact.
e) Exact: x3 + 3y2 x − y3 = −5
f) Exact: 2xy − x²/2 + y²/2 = 1/2
g) Exact: x2 + y3 + e xy = 2
Solution 1.6.2
Find a general solution for each equation.

a) y′ = cos(t)/y². y³ = 3 sin(t) + C
b) yy′ = t + ty². y² = Ce^{t²} − 1
c) t² y′ = 1/cos(y). sin(y) = C − 1/t
d) t(y² + 1)y′ = y. y²/2 + ln(y/t) = C
Solution 1.6.3
Find a general solution for each equation.

a) yy′ = 2t + y²/t. y² = t²(ln(t⁴) + C)
b) y′ = y/t + t/(2y + t). y² + ty = t²(ln(t) + C)
c) ty′ = y + t/sin(y/t). cos(y/t) = C − ln(t)
d) y′ = (2t − y)/(t + y). y = −t ± √(3t² + C)
Solution 1.6.4
Find a general solution for each equation.

a) y′ = e^{y+2t} − 2. y = −ln(C − t) − 2t
b) y′ = (3y − t)² + 1/3. y = 1/(C − 9t) + t/3
Solution 1.6.5
Find a general solution for each equation.

a) y′′ + 3y′ = te^{−3t}. y = (−t²/6 − t/9)e^{−3t} + Ae^{−3t} + B
b) y′′ + 2ty′ = 0. y = A ∫_0^t e^{−s²} ds + B
c) ty′′ = y′. y = At² + B
d) y′′ = 2yy′. y = A tan(At + B)
e) e^y y′′ = y′. y = ln((1 + e^{At+b})/A) or y = A
f) yy′′ = (y′)². y = Be^{At}
Solution 1.6.6
Find a general solution for each equation.

a) y′ + y = t/y. y² = t − 1/2 + Ce^{−2t}
b) ty′ = 3y + ty². y = 4t³/(C − t⁴)
c) ty′ + 2y = t³√y. y = (t³/8 + C/t)²
d) y′ = ty³ − 2y/t. y² = 1/(t² + Ct⁴)
Solution 1.6.7
a) Bernoulli equation. y = 1/(2t² − t³)
Solution 1.7.1

a) dy/dt = 6y − 2y². The equilibria are y = 0 (unstable) and y = 3 (stable).
b) dy/dt = y² − 4y. The equilibria are y = 0 (stable) and y = 4 (unstable).
c) dy/dt = y² − y³. The equilibria are y = 0 (semi-stable) and y = 1 (stable).
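Stability can be read off from the sign of f′ at each equilibrium (f′ < 0 stable, f′ > 0 unstable, f′ = 0 inconclusive, as in the semi-stable case). A small sketch for part a):

```python
def f(y):
    return 6 * y - 2 * y**2

def fprime(y, h=1e-6):
    # centered finite-difference approximation of f'
    return (f(y + h) - f(y - h)) / (2 * h)

for eq in (0.0, 3.0):
    slope = fprime(eq)
    label = "stable" if slope < 0 else "unstable"
    print(eq, round(slope, 3), label)  # 0 is unstable, 3 is stable
```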
Solution 2.0.1
Without actually solving the problems, determine in which interval I
we are guaranteed a unique solution for each initial value problem.
a) y′′′ − (t²/(t − 2)) y′ + (cos(t)/(t + 3)) y = sin(t)/t², y(−1) = 2, y′(−1) = 3, y′′(−1) = 4; I = (−3, 0)
b) y′′′ − (t²/(t − 2)) y′ + (cos(t)/(t + 3)) y = sin(t)/t², y(1) = 2, y′(1) = 3, y′′(1) = 4; I = (0, 2)
c) y′′′ − (t²/(t − 2)) y′ + (cos(t)/(t + 3)) y = sin(t)/t², y(3) = 2, y′(3) = 3, y′′(3) = 4; I = (2, ∞)
d) (t − 2)y′′ + ((t + 1)/sin(t)) y = e^t, y(1) = 0, y′(1) = 1; I = (0, 2)
Solution 2.1.1
Solution 2.2.1
Solution 2.2.2
Find general solutions to the following differential equations
Solution 2.5.2
Solution 2.6.1
Find particular solutions to the following differential equations
a) dy/dt + y = e^t; yp = e^t/2
b) dy/dt + 3y = sin(t) + e^{2t}; yp = e^{2t}/5 + 3 sin(t)/10 − cos(t)/10
c) dy/dt + y = t sin(t) + e^{−t}; yp = te^{−t} + (t/2)(sin(t) − cos(t)) + cos(t)/2
d) d²y/dt² + y = te^{−t}; yp = (t + 1)e^{−t}/2
e) d²y/dt² + 2 dy/dt + 5y = e^{−t} + cos(t); yp = e^{−t}/4 + cos(t)/5 + sin(t)/10
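Any of these can be verified by substituting back into the equation; part d) as a SymPy sketch:

```python
import sympy as sp

t = sp.symbols("t")
yp = (t + 1) * sp.exp(-t) / 2

# check y'' + y = t*exp(-t)
residual = sp.simplify(sp.diff(yp, t, 2) + yp - t * sp.exp(-t))
print(residual)  # 0
```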
Solution 2.6.2
a) d³/dt³
b) d²/dt² + 9
c) (d²/dt² + 4)²
d) (d/dt − 1)³ (d²/dt² + 1)²
Solution 2.6.3
Solution 2.6.4
Solution 2.7.1
Use variation of parameters to find a particular solution for each
equation. If the particular solution can also be found using undeter-
mined coefficients, use both methods to check your answer.
a) y′′ + y′ − 6y = e^t; yp = −e^t/4; we can also use undetermined coefficients.
b) y′′ + (1/t) y′ − (9/t²) y = 7t²; yp = t⁴
Solution 2.8.1
a) Y(s) = 2/(s + 1) + 6/(s(s + 1)) + 1/((s + 1)(s² + 1)). The inverse transform is y(t) = (1/2) sin(t) − (1/2) cos(t) − (7/2) e^{−t} + 6.
b) Y(s) = (3s + 2)/(s² + 1) + 1/((s − 2)(1 + s²)) + 1/(s²(1 + s²)). The inverse transform is y(t) = t + (1/5) e^{2t} + (14/5) cos(t) + (3/5) sin(t).
c) Y(s) = 1/((s⁴ + 1)(s − 3)²). The inverse transform is a mess.
We have not chosen to simplify Y (s) in any way. Your answers may
look slightly different if you chose to put on a common denominator,
apply partial fractions, or otherwise manipulate the result.
Solution 2.8.2
Problems a) – e) can be done by looking them up in a table, possibly
after having done some elementary simplifications. Problems f) and
g) require the use of partial fractions in the given form to express
Y (s) in a form that can be looked up in the table.
a) y(t) = e^t
b) y(t) = cos(t)
c) y(t) = (1/24) t⁴ e^{−3t}
d) y(t) = 5 cos(2t) + (7/2) sin(2t)
e) y(t) = 4e^{2t} cos(t) + 11e^{2t} sin(t)
f) Y(s) = A/(s − 1) + B/(s + 1), giving y(t) = (7/2) e^t − (3/2) e^{−t}
g) Y(s) = A/s + B/(s − 1) + C/(s + 2), giving y(t) = 2 − 3e^t + 5e^{−2t}
Solution 2.8.3
Solve each initial value problem using the Laplace Transform method
Solution 3.1.6
Find the homogeneous solutions
Solution 3.1.7
Solve the following initial value problems
Solution 3.1.8
Solve the initial value problems
a) y′′ + 4y = cos(t), y(0) = 0, y′(0) = 0. y = (1/3)(cos(t) − cos(2t))
b) y′′ + y = cos(2t), y(0) = 1, y′(0) = 0. y = (1/3)(4 cos(t) − cos(2t))
c) y′′ + 5y = cos(t), y(0) = 0, y′(0) = 1. y = (√5/5) sin(√5 t) + (1/4) cos(t) − (1/4) cos(√5 t)
d) y′′ + 6y = cos(3t), y(0) = 0, y′(0) = 1. y = (1/3) cos(√6 t) − (1/3) cos(3t) + (1/√6) sin(√6 t)
Solution 3.1.9
Solve the following initial value problems
a) y′′ + 2y′ + y = cos(t), y(0) = 0, y′(0) = 0. y = (1/2) sin(t) − (1/2) t e^{−t}
b) y′′ + 3y′ + 2y = sin(t), y(0) = 1, y′(0) = 0. y = (5/2) e^{−t} − (6/5) e^{−2t} + (1/10) sin(t) − (3/10) cos(t)
c) y′′ + 6y′ + 10y = sin(t) + cos(t), y(0) = 0, y′(0) = 0. y = (1/39)(cos(t) + 5 sin(t) − e^{−3t} cos(t) − 8e^{−3t} sin(t))
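Part a) can be checked by substitution, including the initial conditions; a SymPy sketch:

```python
import sympy as sp

t = sp.symbols("t")
y = sp.sin(t) / 2 - t * sp.exp(-t) / 2

residual = sp.simplify(sp.diff(y, t, 2) + 2 * sp.diff(y, t) + y - sp.cos(t))
print(residual, y.subs(t, 0), sp.diff(y, t).subs(t, 0))  # 0, 0, 0
```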
Solution 3.1.10
Find the general solution to the homogeneous damped harmonic
oscillator equation
d2 y dy
m 2 +γ + ky = 0
dt dt
for the following parameter values. In each case classify the equation
as overdamped, underdamped or critically damped.
Solution 3.1.11
Solve the following initial value problem
d2 y dy
m 2
+ γ + ky = 0
dt dt
for the following sets of initial values and parameters.
a) m = 20.0 kg, γ = 40.0 Ns/m, k = 100.0 N/m y (0) =
0 m, y′ (0) = 5 m/s
y = 2.5 m exp(−1.0 s−1 t) sin(2.0 s−1 t)
b) m = 25.0 kg, γ = 50.0 Ns/m, k = 25.0 N/m y(0) = 2 m, y′(0) = 0 m/s
y = 2.0 m exp(−1.0 s−1 t) + 2.0 m/s t exp(−1.0 s−1 t)
c) m = 10.0 kg, γ = 50.0 Ns/m, k = 60.0 N/m y (0) =
1 m, y′ (0) = 1 m/s
y = 4.0 m exp(−2.0 s−1 t) − 3.0 m exp(−3.0 s−1 t)
d) m = 10.0 kg, γ = 10.0 Ns/m, k = 30.0 N/m y (0) =
2 m, y′ (0) = −1 m/s
y = 2.0 m exp(−0.5 s−1 t) cos(1.66 s−1 t)
Solution 3.1.12
The quantity γ dy/dt should have units of force, since this represents the drag force. This means γ has units of force/velocity, or Ns/m. This is equivalent to kg/s. The critical damping satisfies γ = √(4km), or γ ≈ 15 492 Ns/m.
Solution 3.1.13
First we have to compute the spring constant k of the suspension, then use this to compute the critical damping. The extra force on the suspension due to the load is 600 kg × 9.8 m/s² = 5880 N. The spring constant is k = −F/∆x = −5880 N/(−6 cm) = 98 000 N/m. Finally, since k = 98 000 N/m and m = 2000 kg, the critical damping is γ = √(4km) = 28 000 Ns/m.
Solution 3.1.14
The easiest way to solve this is to solve the complexified equation

4y′′ + y′ + 4y = e^{iωt}

which has the particular solution

y = e^{iωt}/(4 − 4ω² + iω)

The amplitude is the absolute value of the complex number A = 1/(4 − 4ω² + iω), so we can maximize the amplitude by minimizing the absolute value of the denominator. So we want to minimize |4 − 4ω² + iω|² = ω² + (4 − 4ω²)² = f(ω). This is a pretty straightforward calculus problem: we have that f′(ω) = 64ω³ − 62ω. The three critical points are ω = 0 and ω = ±√62/8 ≈ 0.984. The first is a local minimum of the amplitude and the other two are local maxima. At the local maxima f(ω) = 63/64, so the amplitude of the particular solution is |A| = 1/√f(ω) = 8/√63 ≈ 1.008.
Solution 3.1.15
If y(t) = cos(t + s) then we have

y(t) = cos(t + s)
dy/dt = −sin(t + s)
d²y/dt² = −cos(t + s)
Adding the first and the third equation shows that d²y/dt² + y = 0. As far as initial conditions go, we have that y(0) = cos(0 + s) = cos(s) and y′(0) = −sin(0 + s) = −sin(s). So cos(t + s) satisfies

d²y/dt² + y = 0        y(0) = cos(s)        y′(0) = −sin(s)
However we can also solve this equation by taking the general so-
lution y = A cos(t) + B sin(t) and solving for A and B. This gives
y = cos(t) cos(s) − sin(t) sin(s). Since solutions are unique we must
have cos(t) cos(s) − sin(t) sin(s) = cos(t + s).
Solution 3.1.16
Firstly note that if y = Ae^{iωt} then m y′′ + γ y′ + k y = A(k − mω² + iγω) e^{iωt}, so that A = 1/(k − mω² + iγω). The squared magnitude of the denominator is |z|² = a² + b² = (k − mω²)² + γ²ω² = f(ω). Differentiating f locates the resonant frequency, just as in Solution 3.1.14.
Solution 3.1.17
The smaller the damping coefficient γ the higher the resonance peak, so graph [C] corresponds to the smallest damping γ = 1, [B] corresponds to damping γ = 2, and [A] corresponds to the strongest damping γ = 4. Note that two of these cases have γ² < 2km so they will have three critical points, while one has γ² > 2km so it has one critical point.
Solution 4.5.1
Solution 4.5.2
These are one possible set of solutions: recall that any (non-zero) multiple of an eigenvector is an eigenvector.

a) v1 = (1, 1); v2 = (1, −1)
b) v1 = (1, 0); v2 = (4, −3)
c) v1 = (1, i); v2 = (1, −i)
d) v1 = (2, 3i); v2 = (2, −3i)
e) v1 = (3, 2, 2); v2 = (1, 0, 0); v3 = (1, −4, 4)
f) v1 = (1, 1, −2); v2 = (1, −1, 1); v3 = (1, 1, 1)
g) v1 = (1, 1, 0); v2 = (1, 0, 0); v3 = (11, 5, −30)
Solution 4.5.3
a) v = (4e^{6t} − e^{2t}, 4e^{6t} + e^{2t})
b) v = (4e^{2t} − 2e^t, 2e^{2t})
c) v = (6te^t + 2e^t, 3e^t)
Solution 4.5.4
a)
[ e^t/2 + e^{5t}/2    e^{5t}/2 − e^t/2 ]
[ e^{5t}/2 − e^t/2    e^t/2 + e^{5t}/2 ]

b)
[ e^{5t}    (4/3) e^{2t} (e^{3t} − 1) ]
[ 0         e^{2t}                    ]

c)
[ cos(t)     sin(t) ]
[ −sin(t)    cos(t) ]

d)
[ e^{2t} cos(6t)           (2/3) e^{2t} sin(6t) ]
[ −(3/2) e^{2t} sin(6t)    e^{2t} cos(6t)       ]

e)
[ e^{3t}    −e^{−t}/8 − 5e^{3t}/8 + 3e^{5t}/4    e^{−t}/8 − 7e^{3t}/8 + 3e^{5t}/4 ]
[ 0         e^{−t}/2 + e^{5t}/2                  e^{5t}/2 − e^{−t}/2              ]
[ 0         e^{5t}/2 − e^{−t}/2                  e^{−t}/2 + e^{5t}/2              ]

f)
[ e^{−3t}/3 + e^{−2t}/2 + 1/6      1/2 − e^{−2t}/2    1/3 − e^{−3t}/3  ]
[ e^{−3t}/3 − e^{−2t}/2 + 1/6      e^{−2t}/2 + 1/2    1/3 − e^{−3t}/3  ]
[ −2e^{−3t}/3 + e^{−2t}/2 + 1/6    1/2 − e^{−2t}/2    2e^{−3t}/3 + 1/3 ]

g)
[ e^{3t}    e^{3t}(e^t − 1)    (1/30) e^{−2t} (6e^{5t} + 5e^{6t} − 11) ]
[ 0         e^{4t}             (1/6) e^{−2t} (e^{6t} − 1)              ]
[ 0         0                  e^{−2t}                                 ]
Solution 4.5.5
Both matrices have the same characteristic equation, (2 − λ)3 = 0,
so λ = 2 is an eigenvalue of multiplicity three in each case. The first
matrix has only one linearly independent eigenvector,
v1 = (1, 0, 0)

Any non-zero multiple of this vector also works, of course. The second matrix has two linearly independent eigenvectors,

v1 = (1, 0, 0)    and    v2 = (0, 1, −1)

(once more up to a non-zero multiplicative constant).
Solution 4.5.6
For the first matrix we find that

B0 = I        r1(t) = e^{2t}

B1 = A − 2I = [ 0 1 1 ]
              [ 0 0 4 ]        r2(t) = te^{2t}
              [ 0 0 0 ]

B2 = (A − 2I)(A − 2I) = [ 0 0 4 ]
                        [ 0 0 0 ]        r3(t) = (1/2) t² e^{2t}
                        [ 0 0 0 ]

which gives the matrix exponential

[ e^{2t}    te^{2t}    (2t² + t)e^{2t} ]
[ 0         e^{2t}     4te^{2t}        ]
[ 0         0          e^{2t}          ]

For the second matrix B2 = 0 and the matrix exponential is

[ e^{2t}    te^{2t}    te^{2t} ]
[ 0         e^{2t}     0       ]
[ 0         0          e^{2t}  ]
The difference between these two examples – the fact that the second
example has B2 = 0 and subsequently has no t2 e2t terms – is con-
nected to some ideas from linear algebra, the Jordan normal form
and the minimal polynomial of a matrix. These topics are usually
covered in a second undergraduate linear algebra text.
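These closed forms are easy to check numerically. Assuming the first matrix is A = [[2, 1, 1], [0, 2, 4], [0, 0, 2]] (so that A − 2I matches B1 above), a truncated power series for e^{At} should reproduce the closed form; a sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [0.0, 2.0, 4.0],
              [0.0, 0.0, 2.0]])
t = 0.3

# matrix exponential via the truncated power series sum_k (A t)^k / k!
E = np.eye(3)
term = np.eye(3)
for k in range(1, 30):
    term = term @ (A * t) / k
    E += term

e2t = np.exp(2 * t)
closed = np.array([[e2t, t * e2t, (2 * t**2 + t) * e2t],
                   [0.0, e2t, 4 * t * e2t],
                   [0.0, 0.0, e2t]])
print(np.max(np.abs(E - closed)))  # ~ 0
```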
Solution 5.2.1
Determine if the following boundary value problems have no solutions, one solution, or infinitely many solutions. If there are solutions, find them.
Solution 5.2.2
Solution 5.3.1
c) y′′ + λy = 0, y(−π) = 0, y(π) = 0; λn = n², yn(x) = sin(nx); and also λm = (m + 1/2)², ym(x) = cos((m + 1/2)x)
Solution 6.5.1

a) 42
b) 1 − (8/π²) ∑_{k=1, odd}^∞ (1/k²) cos(kπx/2) = 1 − (8/π²) ∑_{k=1}^∞ (1/(2k − 1)²) cos((2k − 1)πx/2)
c) 5 + (12/π) ∑_{k=1}^∞ ((−1)^k/k) sin(kπx/3)
d) 10 cos(2x)
e) 3/4 − (2/π²) ∑_{k=1, odd}^∞ (1/k²) cos(kπx) + (1/π) ∑_{k=1}^∞ (1/k) sin(kπx) = 3/4 − (2/π²) ∑_{k=1}^∞ (1/(2k − 1)²) cos((2k − 1)πx) + (1/π) ∑_{k=1}^∞ (1/k) sin(kπx)
f) 25/3 + (100/π²) ∑_{k=1}^∞ ((−1)^k/k²) cos(kπx/5)
Solution 6.5.2

a) Sine: (168/π) ∑_{k=1, odd}^∞ (1/k) sin(kx)
   Cosine: 42
b) Sine: −(16/π²) ∑_{k=1}^∞ ((−1)^k/(2k − 1)²) sin((2k − 1)πx/4)
   Cosine: 1 − (32/π²) ∑_{k=1}^∞ (1/(4k − 2)²) cos((4k − 2)πx/4) = 1 − (8/π²) ∑_{k=1}^∞ (1/(2k − 1)²) cos((2k − 1)πx/2)
c) Sine: (2/π) ∑_{k=1}^∞ (((−1)^k + 5)/k) sin(kπx/3)
   Cosine: 2 + (24/π²) ∑_{k=1}^∞ (1/(2k − 1)²) cos((2k − 1)πx/3)
d) Sine: (40/π) ∑_{k=1}^∞ ((2k − 1)/((2k − 1)² − 4)) sin((2k − 1)x)
   Cosine: 10 cos(2x)
e) Sine: ∑_{k=1}^∞ [ (4/(k²π²)) sin(kπ/2) − (2(−1)^k)/(kπ) ] sin(kπx/2)
   Cosine: 3/4 + ∑_{k=1}^∞ (4/(k²π²)) (cos(kπ/2) − 1) cos(kπx/2)
f) Sine: ∑_{k=1}^∞ [ −(50(−1)^k)/(kπ) + (100/(k³π³)) ((−1)^k − 1) ] sin(kπx/5)
   Cosine: 25/3 + (100/π²) ∑_{k=1}^∞ ((−1)^k/k²) cos(kπx/5)
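As a numerical sanity check, the sine series in part a) should converge to 42 at interior points of the interval; a sketch evaluating a partial sum at x = π/2:

```python
import math

def partial_sum(x, kmax=100001):
    # (168/pi) * sum over odd k of sin(k x)/k
    total = 0.0
    for k in range(1, kmax + 1, 2):
        total += math.sin(k * x) / k
    return 168.0 / math.pi * total

value = partial_sum(math.pi / 2)
print(value)  # close to 42
```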
Solution 7.1.1

a) v(x, t) = −(6/π) ∑_{n=1}^∞ ((−1)^n/n) e^{−5n²π²t/9} sin(nπx/3)
b) v(x, t) = 5/2 + (10/π²) ∑_{n=1}^∞ ((1 − (−1)^n)/n²) e^{−2n²π²t/25} cos(nπx/5)
c) v(x, t) = −(40/π) ∑_{n=1}^∞ ((−1)^n/(2n − 1)) e^{−3(2n−1)²t/4} cos((2n − 1)x/2)
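Each term of the series in a) can be checked against the heat equation vt = 5 vxx (the diffusivity implied by the exponents); a SymPy sketch for a general term:

```python
import sympy as sp

x, t = sp.symbols("x t")
n = sp.symbols("n", positive=True, integer=True)

term = sp.exp(-5 * n**2 * sp.pi**2 * t / 9) * sp.sin(n * sp.pi * x / 3)
residual = sp.simplify(sp.diff(term, t) - 5 * sp.diff(term, x, 2))
print(residual)  # 0
```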
Solution 7.2.1
a) v(x, t) = −(4/π) ∑_{n=1}^∞ ((−1)^n/n) cos(3nπt/2) sin(nπx/2)
b)