Good Guide
2023
MATH 2306: Ordinary Differential Equations
Lake Ritter, Kennesaw State University
For y = cos(2x), we have

dy/dx = −2 sin(2x).

Even dy/dx is differentiable, with d^2y/dx^2 = −4 cos(2x). Note that

d^2y/dx^2 + 4y = 0.
Section 1: Concepts and Terminology
The equation

d^2y/dx^2 + 4y = 0

is an example of a differential equation.
Definition

A differential equation is an equation containing the derivative(s) of one or more dependent variables with respect to one or more independent variables. For example, consider the derivatives

dy/dx, du/dt, dx/dr

Identify the dependent and independent variables in these terms.
Classifications
Type: An ordinary differential equation (ODE) has exactly one independent variable¹. For example

dy/dx − y^2 = 3x, or dy/dt + 2 dx/dt = t, or y'' + 4y = 0

A partial differential equation (PDE) has two or more independent variables. For example

∂y/∂t = ∂^2y/∂x^2, or ∂^2u/∂r^2 + (1/r) ∂u/∂r + (1/r^2) ∂^2u/∂θ^2 = 0
For those unfamiliar with the notation, ∂ is the symbol used when taking a derivative with respect to one variable while keeping the remaining variable(s) constant. ∂u/∂t is read as the "partial derivative of u with respect to t."
¹These are the subject of this course.
Classifications
dy/dx − y^2 = 3x

y''' + (y')^4 = x^3

∂y/∂t = ∂^2y/∂x^2

Leibniz notation: dy/dx, d^2y/dx^2, . . . , d^ny/dx^n, or prime notation y', y'', . . . , y^{(n)}

Dot notation is used for time derivatives: velocity is ds/dt = ṡ, and acceleration is d^2s/dt^2 = s̈
F(x, y, y', . . . , y^{(n)}) = 0

where F is some real valued function of n + 2 variables. The normal form of the equation is

d^ny/dx^n = f(x, y, y', . . . , y^{(n−1)}).
Example
F(x, y, y', y'') = y'' + 4y.

For first and second order equations, the normal forms are

dy/dx = f(x, y) or d^2y/dx^2 = f(x, y, y').

A first order equation may also appear in the differential form

M(x, y) dx + N(x, y) dy = 0

Note that this can be rearranged into a couple of different normal forms

dy/dx = −M(x, y)/N(x, y) or dx/dy = −N(x, y)/M(x, y)
Classifications
a_n(x) d^ny/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + · · · + a_1(x) dy/dx + a_0(x) y = g(x).
Note that each of the coefficients a0 , . . . , an and the right hand side g
may depend on the independent variable but not on the dependent
variable or any of its derivatives.
y'' + 4y = 0 and t^2 d^2x/dt^2 + 2t dx/dt − x = e^t

d^3y/dx^3 + (dy/dx)^4 = x^3 and u'' + u' = cos u

²The interval is called the domain of the solution or the interval of definition.
Note that for any choice of constants c1 and c2, y = c1 x + c2/x is a solution of the differential equation

x^2 y'' + xy' − y = 0

We have

y' = c1 − c2/x^2, and y'' = 2c2/x^3

So

x^2 y'' + xy' − y = x^2 (2c2/x^3) + x (c1 − c2/x^2) − (c1 x + c2/x)
                  = 2c2/x + c1 x − c2/x − c1 x − c2/x
                  = (1/x)(2c2 − c2 − c2) + (c1 − c1)x
                  = 0

as required.
Some Terms
- A parameter is an unspecified constant, such as c1 and c2 in the last example.
Systems of ODEs
dx/dt = −αx + βxy

dy/dt = γy − δxy
This is known as the Lotka-Volterra predator-prey model. x(t) is the
population (density) of predators, and y(t) is the population of prey.
The numbers α, β, γ and δ are nonnegative constants.
This model is built on the assumptions that

- in the absence of predation, prey increase exponentially,
- in the absence of prey, predators decrease exponentially,
- predator-prey interactions increase the predator population and decrease the prey population.
Systems of ODEs
di2/dt = −2 i2 − 2 i3 + 60

di3/dt = −2 i2 − 5 i3 + 60
³We'll consider this later.
Section 2: Initial Value Problems
d^ny/dx^n = f(x, y, y', . . . , y^{(n−1)})    (1)

subject to the initial conditions

y(x0) = y0, y'(x0) = y1, . . . , y^{(n−1)}(x0) = y_{n−1}

on some interval I containing x0.
Examples for n = 1 or n = 2
First order case:
dy/dx = f(x, y), y(x0) = y0
Example
Given that y = c1 x + c2/x is a 2-parameter family of solutions of x^2 y'' + xy' − y = 0, solve the IVP

x^2 y'' + xy' − y = 0, y(1) = 1, y'(1) = 3

Satisfying the initial conditions will require certain values for c1 and c2.

y(1) = c1(1) + c2/1 = 1 =⇒ c1 + c2 = 1

y'(1) = c1 − c2/1^2 = 3 =⇒ c1 − c2 = 3

Solving this algebraic system, one finds that c1 = 2 and c2 = −1. So the solution to the IVP is

y = 2x − 1/x.
Graphical Interpretation
A Numerical Solution
Consider a first order initial value problem
dy/dx = f(x, y), y(x0) = y0.
In the coming sections, we’ll see methods for solving some of these
problems analytically (e.g. by hand). The method will depend on the
type of equation. But not all ODEs are readily solved by hand. We can
ask whether we can at least obtain an approximation to the solution,
for example as a table of values or in the form of a curve. In general,
the answer is that we can get such an approximation. Various
algorithms have been developed to do this. We’re going to look at a
method known as Euler’s Method.
The strategy behind Euler’s method is to construct the solution starting
with the known initial point (x0 , y0 ) and using the tangent line to find the
next point on the solution curve.
Example: dy/dx = xy, y(0) = 1
Figure: We know that the point (x0 , y0 ) = (0, 1) is on the curve. And the slope
of the curve at (0, 1) is m0 = f (0, 1) = 0 · 1 = 0.
Note: The gray curve is the true solution to this IVP. It’s shown for reference.
Figure: So we draw a little tangent line (we know the point and slope). Then we increase x, say x1 = x0 + h, and approximate the solution value y(x1) with the value on the tangent line, y1. So y1 ≈ y(x1).
Figure: When h is very small, the true solution and the tangent line point will
be close. Here, we’ve zoomed in to see that there is some error between the
exact y value and the approximation from the tangent line.
Figure: Now we start with the point (x1 , y1 ) and repeat the process. We get
the slope m1 = f (x1 , y1 ) and draw a tangent line through (x1 , y1 ) with slope
m1 .
Figure: If we zoom in, we can see that there is some error. But as long as h is
small, the point on the tangent line approximates the point on the actual
solution curve.
Figure: We can repeat this process at the new point to obtain the next point. We build an approximate solution by advancing the independent variable and connecting the points (x0, y0), (x1, y1), . . . , (xn, yn).
dy/dx = f(x, y), y(x0) = y0.
Notation:
- The approximate value of the solution will be denoted by yn,
- and the exact values (that we don't expect to actually know) will be denoted y(xn).
f(x0, y0) = dy/dx at (x0, y0) ≈ (y1 − y0)/(x1 − x0)

=⇒ y1 − y0 = h f(x0, y0)
=⇒ y1 = y0 + h f(x0, y0)
with (x0 , y0 ) given in the original IVP and h the choice of step
size.
Euler's Method Example: dy/dx = xy, y(0) = 1
Let’s take h = 0.25 and find the first three iterates y1 , y2 , and y3 .
We have x0 = 0 and y0 = 1. So
y1 = y0 + h f(x0, y0) = 1 + 0.25(0 · 1) = 1
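Iterating the update y_{n+1} = y_n + h f(x_n, y_n) produces the remaining iterates. A minimal Python sketch (the function name `euler` is ours, not from the notes):

```python
def euler(f, x0, y0, h, steps):
    """Euler's method: repeatedly step along the tangent line."""
    x, y = x0, y0
    history = [y0]
    for _ in range(steps):
        y = y + h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
        x = x + h             # advance the independent variable
        history.append(y)
    return history

# dy/dx = xy, y(0) = 1, with h = 0.25
ys = euler(lambda x, y: x * y, 0.0, 1.0, 0.25, 3)
print(ys)  # y1 = 1.0, y2 = 1.0625, y3 = 1.1953125
```

Note that the first step has slope zero, so y1 = y0 = 1 exactly.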
Euler's Method Example: dx/dt = (x^2 − t^2)/(xt), x(1) = 2

We'll need to use two steps. With h = 0.2, we'll move from t = 1 to t = 1.2 and then to t = 1.4. First, let's determine the formula using Euler's method. We have f(t, x) = (x^2 − t^2)/(xt), so the general formula will be

x_n = x_{n−1} + h f(t_{n−1}, x_{n−1}) = x_{n−1} + 0.2 (x_{n−1}^2 − t_{n−1}^2)/(x_{n−1} t_{n−1})

We also have

t0 = 1 and x0 = 2.
Taking the first step,

x1 = x0 + 0.2 (x0^2 − t0^2)/(x0 t0) = 2 + 0.2 (4 − 1)/(2 · 1) = 2.3

so x(1.2) ≈ x1 = 2.3.
Now we move on to the next point to approximate x(1.4). From the last step, we have t1 = 1.2 and x1 = 2.3, so

x2 = x1 + 0.2 (x1^2 − t1^2)/(x1 t1) = 2.3 + 0.2 (2.3^2 − 1.2^2)/(2.3 · 1.2) = 2.579

It is possible to solve this IVP exactly to obtain the solution x = √(4t^2 − 2t^2 ln(t)). The true value is x(1.4) = 2.5536 to four decimal places.
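The two steps can be checked in code against the quoted exact solution (a sketch using the same h = 0.2):

```python
import math

def f(t, x):
    return (x**2 - t**2) / (x * t)

t, x, h = 1.0, 2.0, 0.2
for _ in range(2):        # two Euler steps: t = 1 -> 1.2 -> 1.4
    x = x + h * f(t, x)
    t = t + h

# exact solution x(t) = sqrt(4t^2 - 2t^2 ln t)
exact = math.sqrt(4 * t**2 - 2 * t**2 * math.log(t))
print(round(x, 3), round(exact, 4))  # 2.579 vs 2.5536
```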
First, let's define what we mean by the term error. There are a couple of types of error that we can talk about⁵:

absolute error = |y(xn) − yn| and relative error = |y(xn) − yn|/|y(xn)|

⁵Some authors will define absolute error without use of absolute value bars, so that absolute error need not be nonnegative.
We notice from this example that cutting the step size in half seems to cut the error and relative error in half; this suggests the error is roughly proportional to the step size.

There are two sources of error for Euler's method (not counting numerical errors due to machine rounding):
- the error in approximating the curve with a tangent line, and
- using the approximate value y_{n−1} to get the slope at the next step.
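The halving behavior can be demonstrated with the earlier example dy/dx = xy, y(0) = 1, whose exact solution is y = e^{x^2/2}. A sketch (the specific step sizes are our choice):

```python
import math

def euler_at(h, steps):
    """Euler's method for dy/dx = x*y starting from (0, 1)."""
    x, y = 0.0, 1.0
    for _ in range(steps):
        y += h * x * y
        x += h
    return y

exact = math.exp(0.5)                      # y(1) = e^{1/2}
err_h  = abs(euler_at(0.1, 10) - exact)    # error at x = 1 with h = 0.1
err_h2 = abs(euler_at(0.05, 20) - exact)   # error at x = 1 with h = 0.05
print(err_h / err_h2)  # roughly 2: halving h roughly halves the error
```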
For numerical schemes of this sort, we often refer to the order of the scheme. If the error satisfies

error ≤ C h^p

for some constant C, the scheme is said to be of order p. Euler's method is of order one. Higher order schemes include the improved Euler method⁶ and the classic fourth order Runge–Kutta method⁷.

⁶a.k.a. RK2
⁷a.k.a. RK4
Improved Euler's Method: dy/dx = f(x, y), y(x0) = y0

Euler's method approximated y1 using the slope m0 = f(x0, y0) for the tangent line. An improvement on the method can be made by using this as an intermediate point to give a second approximation to the slope. That is, let

m0 = f(x0, y0)

as before, and now let

m̂0 = f(x0 + h, y0 + h m0).

Then we take y1 to be the point on the line that has the average of these two slopes

y1 = y0 + (1/2)(m0 + m̂0) h.
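A sketch of this averaged-slope step applied to the earlier test problem dy/dx = xy, y(0) = 1 (function names are ours; the exact value e^{1/2} is used only to measure the error):

```python
import math

def f(x, y):
    return x * y   # test problem dy/dx = xy, y(0) = 1

def improved_euler_step(x, y, h):
    m0 = f(x, y)                      # slope at the left endpoint
    m1 = f(x + h, y + h * m0)         # slope at the Euler-predicted point
    return y + 0.5 * (m0 + m1) * h    # average the two slopes

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):                   # march from x = 0 to x = 1
    y = improved_euler_step(x, y, h)
    x += h

print(abs(y - math.exp(0.5)))  # far smaller than plain Euler's ~0.08 error
```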
Hopefully it's obvious that we can't solve

(dy/dx)^2 + 1 = −y^2.

(Not if we are only interested in real valued solutions.)
Uniqueness
dy/dx = x√y, y(0) = 0

Exercise: Verify that y = x^4/16 is a solution of the IVP.
Can you find a second solution of the IVP by inspection (i.e. clever
guessing)?
Section 3: First Order Equations: Separation of Variables
dy/dx = g(x).

If G is any antiderivative of g, the solutions are

y = G(x) + c
Separable Equations
A first order equation is separable if the right side factors as

f(x, y) = g(x) h(y),

i.e. if it has the form

dy/dx = g(x) h(y).
dy/dx = x^3 y

is separable, as the right side is the product of g(x) = x^3 and h(y) = y. In contrast,

dy/dx = 2x + y

is not separable. You can try, but it is not possible to write 2x + y as the product of a function of x alone and a function of y alone.
Writing p(y) = 1/h(y), note that

(1/h(y)) dy/dx = g(x) =⇒ p(y) (dy/dx) dx = g(x) dx

Since (dy/dx) dx = dy, we integrate both sides

∫ p(y) dy = ∫ g(x) dx =⇒ P(y) = G(x) + c

In the special case of the IVP

dy/dx = g(x), y(x0) = y0,

we can express the solution in terms of an integral

y(x) = y0 + ∫_{x0}^{x} g(t) dt.
a_1(x) dy/dx + a_0(x) y = g(x).

If g(x) = 0 the equation is called homogeneous. Otherwise it is called nonhomogeneous. Dividing through by a_1(x) puts the equation in the standard form

dy/dx + P(x) y = f(x).
We’ll be interested in equations (and intervals I) for which P and f are
continuous on I.
Section 4: First Order Equations: Linear & Special
dy/dx + P(x) y = f(x).
It turns out the solution will always have a basic form of y = yc + yp
where
- yc is called the complementary solution and would solve the problem

  dy/dx + P(x) y = 0

  (called the associated homogeneous equation), and

- yp is called the particular solution, and is heavily influenced by the function f(x).
The cool thing is that our solution method will get both parts in one
process—we won’t get this benefit with higher order equations!
Motivating Example
x^2 dy/dx + 2xy = e^x

Here the left side is precisely the derivative of a product: x^2 dy/dx + 2xy = d/dx [x^2 y], so the equation reads d/dx [x^2 y] = e^x and can be integrated directly. For the general equation in standard form

dy/dx + P(x) y = f(x)

Based on the previous example, we seek a function µ(x) such that when we multiply the above equation by this new function, the left side collapses as a product rule. We wish to have

µ dy/dx + µP y = µf =⇒ d/dx [µy] = µf.

Matching the left sides

µ dy/dx + µP y = d/dx [µy] = µ dy/dx + y dµ/dx

which requires

dµ/dx = P µ.
dµ/dx = P µ.

This separable equation is solved by µ(x) = exp( ∫ P(x) dx ), the integrating factor. Multiplying the standard form equation by µ, the ODE becomes

d/dx [µ(x) y] = µ(x) f(x).

- Integrate both sides, and solve for y.

y(x) = (1/µ(x)) ∫ µ(x) f(x) dx = e^{−∫ P(x) dx} ( ∫ e^{∫ P(x) dx} f(x) dx + C )
x dy/dx − y = 2x^2

In standard form the equation is

dy/dx − (1/x) y = 2x, so that P(x) = −1/x.

Then⁸

µ(x) = exp( ∫ −(1/x) dx ) = exp(−ln|x|) = x^{−1}

The equation becomes

d/dx [x^{−1} y] = x^{−1}(2x) = 2

⁸We will take the constant of integration to be zero when finding µ.
d/dx [x^{−1} y] = x^{−1}(2x) = 2

Next we integrate both sides (µ makes this possible, hence the name integrating factor) and solve for our solution y.

∫ d/dx [x^{−1} y] dx = ∫ 2 dx =⇒ x^{−1} y = 2x + C

and finally

y = 2x^2 + Cx.

Note that this solution has the form y = yp + yc where yc = Cx and yp = 2x^2. The complementary part comes from the constant of integration and is independent of the right side of the ODE, 2x. The particular part comes from integrating the right hand side x^{−1}(2x).
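A quick sanity check that y = 2x^2 + Cx satisfies the standard form equation dy/dx − (1/x)y = 2x for every C (a sketch, using the derivative computed by hand):

```python
def y(x, C):
    return 2 * x**2 + C * x

def dydx(x, C):
    return 4 * x + C   # derivative of 2x^2 + Cx

# residual of the standard-form equation dy/dx - (1/x) y = 2x
for C in (-3.0, 0.0, 5.0):
    for x in (0.5, 1.0, 2.0):
        residual = dydx(x, C) - y(x, C) / x - 2 * x
        assert abs(residual) < 1e-12

print("y = 2x^2 + Cx solves dy/dx - y/x = 2x for every C")
```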
For some linear equations, the term yc decays as x (or t) grows. For example

dy/dx + y = 3x e^{−x} has solution y = (3/2) x^2 e^{−x} + C e^{−x}.

Here, yp = (3/2) x^2 e^{−x} and yc = C e^{−x}.
Bernoulli Equations
Suppose P(x) and f (x) are continuous on some interval (a, b) and n is
a real number different from 0 or 1 (not necessarily an integer). An
equation of the form
dy/dx + P(x) y = f(x) y^n

is called a Bernoulli equation.

Observation: This equation has the flavor of a linear ODE, but since n ≠ 0, 1 it is necessarily nonlinear. So our previous approach involving an integrating factor does not apply directly. Fortunately, we can use a change of variables to obtain a related linear equation.
Some Special First Order Equations
Let u = y^{1−n}. Then

du/dx = (1 − n) y^{−n} dy/dx =⇒ dy/dx = (y^n/(1 − n)) du/dx.

Substituting into the Bernoulli equation and dividing through by y^n/(1 − n)

(y^n/(1 − n)) du/dx + P(x) y = f(x) y^n =⇒ du/dx + (1 − n) P(x) y^{1−n} = (1 − n) f(x)

Given our choice of u, this is the first order linear equation

du/dx + P1(x) u = f1(x), where P1 = (1 − n) P, f1 = (1 − n) f.
Example
Solve the initial value problem y' − y = −e^{2x} y^3, subject to y(0) = 1.
Here n = 3, so let u = y^{1−3} = y^{−2}. Then

du/dx = −2 y^{−3} dy/dx so dy/dx = −(1/2) y^3 du/dx.

Upon substitution

−(1/2) y^3 du/dx − y = −e^{2x} y^3

Multiplying through by −2 y^{−3} gives

du/dx + 2 y^{−2} = 2 e^{2x}

As expected, the second term on the left is (1 − n) P(x) u, here 2u.
Example Continued
Now we solve the first order linear equation for u using an integrating
factor. Omitting the details, we obtain
u(x) = (1/2) e^{2x} + C e^{−2x}

Of course, we need to remember that our goal is to solve the original equation for y. But the relationship between y and u is known. From u = y^{−2}, we know that y = u^{−1/2}. Hence

y = 1/√((1/2) e^{2x} + C e^{−2x})

Applying the initial condition y(0) = 1, i.e. u(0) = 1, gives 1/2 + C = 1, so C = 1/2.
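As a check (a sketch; the value C = 1/2 comes from imposing y(0) = 1 on the general solution), the formulas can be verified directly with hand-computed derivatives:

```python
import math

C = 0.5   # chosen so that y(0) = 1

def u(x):
    return 0.5 * math.exp(2 * x) + C * math.exp(-2 * x)

def y(x):
    return u(x) ** -0.5            # y = u^{-1/2}

def dy(x):
    du = math.exp(2 * x) - 2 * C * math.exp(-2 * x)
    return -0.5 * u(x) ** -1.5 * du   # chain rule on y = u^{-1/2}

assert abs(y(0.0) - 1.0) < 1e-12      # initial condition
for x in (0.0, 0.3, 1.0):
    # residual of y' - y = -e^{2x} y^3
    residual = dy(x) - y(x) + math.exp(2 * x) * y(x) ** 3
    assert abs(residual) < 1e-9

print("the Bernoulli IVP solution checks out")
```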
Exact Equations

M(x, y) dx + N(x, y) dy = 0

The left side is called a differential form. We will assume here that M and N are continuous on some (shared) region in the plane. The equation is called exact on a region R if there exists a function F(x, y) such that

∂F/∂x = M(x, y) and ∂F/∂y = N(x, y)

for all (x, y) in R. In that case the equation reads

∂F/∂x dx + ∂F/∂y dy = 0

This implies that the function F is constant on R, and solutions to the equation are given implicitly by

F(x, y) = C
Recognizing Exactness
For F with continuous second partial derivatives, the mixed partials are equal:

∂^2F/∂y∂x = ∂^2F/∂x∂y.

If it is true that

∂F/∂x = M and ∂F/∂y = N

this provides a condition for exactness, namely

∂M/∂y = ∂N/∂x
Exact Equations

M(x, y) dx + N(x, y) dy = 0

Example

Show that the equation (2xy − sec^2 x) dx + (x^2 + 2y) dy = 0 is exact and obtain a family of solutions.

With M = 2xy − sec^2 x and N = x^2 + 2y, we have

∂M/∂y = 2x = ∂N/∂x,

so the equation is exact. We find F by integrating M with respect to x⁹.

⁹Holding y constant while integrating with respect to x means that the constant of integration may well depend on y.
Example Continued

Integrating M with respect to x gives F(x, y) = x^2 y − tan x + g(y), and we require ∂F/∂y = N(x, y) = x^2 + 2y:

∂F/∂y = x^2 + g'(y) = x^2 + 2y

so g'(y) = 2y and we may take g(y) = y^2. A family of solutions is given implicitly by

x^2 y − tan x + y^2 = C.

Example

The equation (2y − 6x) dx + (3x − 4x^2 y^{−1}) dy = 0 is not exact, since

∂M/∂y = 2 ≠ 3 − 8xy^{−1} = ∂N/∂x.

However, multiplying through by µ(x, y) = xy^2 gives
(2xy^3 − 6x^2 y^2) dx + (3x^2 y^2 − 4x^3 y) dy = 0

Now we see that

∂(µM)/∂y = 6xy^2 − 12x^2 y = ∂(µN)/∂x
Suppose that

M dx + N dy = 0

is NOT exact, but that

µM dx + µN dy = 0

is exact for some function µ. If µ depends only on x, exactness of the new equation requires

∂(µ(x)M)/∂y = ∂(µ(x)N)/∂x.

Expanding with the product rule,

µ ∂M/∂y = N dµ/dx + µ ∂N/∂x    (5)

Rearranging (5), we get both a condition for the existence of such a µ as well as an equation for it. The function µ must satisfy the separable equation

dµ/dx = µ (∂M/∂y − ∂N/∂x)/N    (6)

Note that this equation is solvable, insofar as µ depends only on x, only if

(∂M/∂y − ∂N/∂x)/N

depends only on x!
M dx + N dy = 0    (7)

Theorem: If (∂M/∂y − ∂N/∂x)/N is continuous and depends only on x, then

µ = exp( ∫ (∂M/∂y − ∂N/∂x)/N dx )

is a special integrating factor for (7). If (∂N/∂x − ∂M/∂y)/M is continuous and depends only on y, then

µ = exp( ∫ (∂N/∂x − ∂M/∂y)/M dy )

is a special integrating factor for (7).
Example
Solve the equation 2xy dx + (y^2 − 3x^2) dy = 0.

Note that ∂M/∂y = 2x and ∂N/∂x = −6x. The equation is not exact. Looking to see if there may be a special integrating factor, note that

(∂M/∂y − ∂N/∂x)/N = 8x/(y^2 − 3x^2)

(∂N/∂x − ∂M/∂y)/M = −8x/(2xy) = −4/y

The first does not depend on x alone. But the second does depend on y alone. So there is a special integrating factor

µ = exp( ∫ −(4/y) dy ) = y^{−4}
Example Continued
The new equation obtained by multiplying through by µ is
2xy^{−3} dx + (y^{−2} − 3x^2 y^{−4}) dy = 0.

Note that

∂/∂y (2xy^{−3}) = −6xy^{−4} = ∂/∂x (y^{−2} − 3x^2 y^{−4})

so this new equation is exact. Solving for F

F(x, y) = ∫ 2xy^{−3} dx = x^2 y^{−3} + g(y)

Matching ∂F/∂y = −3x^2 y^{−4} + g'(y) with N = y^{−2} − 3x^2 y^{−4} gives g'(y) = y^{−2}, so g(y) = −y^{−1}. A family of solutions is

x^2/y^3 − 1/y = C.
Section 5: First Order Equations: Models and Applications
Population Dynamics
A population of dwarf rabbits grows at a rate proportional to the current
population. In 2011, there were 58 rabbits. In 2012, the population was
up to 89 rabbits. Estimate the number of rabbits expected in the
population in 2021.
We can translate this into a mathematical statement then hope to solve
the problem and answer the question. Letting the population of rabbits
(say population density) at time t be given by P(t), to say that the rate
of change is proportional to the population is to say
dP/dt = kP(t) for some constant k.
This is a differential equation! To answer the question, we will require
the value of k as well as some initial information—i.e. we will need an
IVP.
Example Continued...¹¹
We can choose units for time t. Based on the statement, taking t in
years is well advised. Letting t = 0 in year 2011, the second and third
sentences translate as
dP/dt = kP, P(0) = 58

This IVP can be solved by separation of variables to obtain

P(t) = 58 e^{kt}.
¹¹Taking P as the population density, i.e. number of rabbits per unit habitat, allows us to consider non-integer P values. Thus fractional and even irrational P values are reasonable and not necessarily gruesome.
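From P(0) = 58 and P(1) = 89 we can pin down k and estimate the 2021 population, t being measured in years from 2011 (a sketch of the arithmetic):

```python
import math

P0 = 58                       # P(0): rabbits in 2011
k = math.log(89 / 58)         # from P(1) = 58 e^k = 89
P10 = P0 * math.exp(k * 10)   # prediction for 2021 (t = 10)
print(round(k, 4), round(P10))  # k ≈ 0.4282, roughly 4200 rabbits
```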
Example Continued...
dP/dt = kP, i.e. dP/dt − kP = 0.
Note that this equation is both separable and first order linear. If k > 0,
P experiences exponential growth. If k < 0, then P experiences
exponential decay.
Measurable Quantities:
Current is the rate of change of charge with respect to time: i = dq/dt.
Kirchhoff’s Law
If the initial charge (RC) or initial current (LR) is known, we can solve
the corresponding IVP.
(Note: We will consider LRC series circuits later as these give rise to
second order ODEs.)
Example
A 200 volt battery is applied to an RC series circuit with resistance 1000 Ω and capacitance 5 × 10^{−6} F. Find the charge q(t) on the capacitor if i(0) = 0.4 A. Determine the charge as t → ∞.
1000 dq/dt + (1/(5 × 10^{−6})) q = 200, q'(0) = 0.4

(Note that this is a slightly irregular IVP since the condition is given on i = q'.) In standard form the equation is q' + 200q = 1/5, with integrating factor µ = exp(200t). The general solution is

q(t) = 1/1000 + K e^{−200t}.
Example Continued...

Applying the condition i(0) = q'(0) = 0.4: since q'(t) = −200K e^{−200t}, we need −200K = 0.4, so K = −1/500 and

q(t) = 1/1000 − e^{−200t}/500.

The long time charge on the capacitor is therefore

lim_{t→∞} q(t) = 1/1000.
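A direct check of this solution against the circuit equation (a sketch using hand-computed derivatives, not a library solver):

```python
import math

R, C_cap, E = 1000.0, 5e-6, 200.0

def q(t):
    return 1 / 1000 - math.exp(-200 * t) / 500

def dq(t):
    return 200 * math.exp(-200 * t) / 500   # q'(t) = 0.4 e^{-200 t}

assert abs(dq(0.0) - 0.4) < 1e-12           # i(0) = 0.4 A
for t in (0.0, 0.005, 0.02):
    # residual of R q' + q/C = E
    assert abs(R * dq(t) + q(t) / C_cap - E) < 1e-9
assert abs(q(0.1) - 1 / 1000) < 1e-8        # long-time charge ≈ 1/1000 C

print("q(t) satisfies the RC circuit IVP")
```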
Figure: Spatially uniform composite fluids (e.g. salt & water, gas & ethanol) being mixed. Concentrations of substances change in time. The "well mixed" condition ensures that concentrations do not change with space.
Building an Equation

dA/dt = ri · ci − ro · (A/V).

This equation is first order linear. Note that the volume of fluid in the tank remains constant when the input and output flow rates ri and ro are equal. Since the tank originally contains pure water (no salt), we have A(0) = 0.
Mixing Example
Our IVP is
dA/dt = 5 gal/min · 2 lb/gal − 5 gal/min · (A/500) lb/gal, A(0) = 0

which simplifies to

dA/dt + (1/100) A = 10, A(0) = 0.

The IVP has solution

A(t) = 1000 (1 − e^{−t/100}).
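A quick verification that this A(t) satisfies the IVP, plus a look at the long-time behavior (a sketch; the derivative is computed by hand):

```python
import math

def A(t):
    return 1000 * (1 - math.exp(-t / 100))

def dA(t):
    return 10 * math.exp(-t / 100)   # derivative of A(t)

assert A(0.0) == 0.0                 # tank starts with pure water
for t in (0.0, 50.0, 200.0):
    # residual of dA/dt + A/100 = 10
    assert abs(dA(t) + A(t) / 100 - 10) < 1e-9

print(round(A(500)))  # 993 lb: approaching the 1000 lb limiting amount
```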
¹²The carrying capacity is the maximum number of individuals that the environment can support due to limitation of space and resources.
P(t) = MCe^{Mkt}/(1 + Ce^{Mkt}).

¹³The partial fraction decomposition

1/(P(M − P)) = (1/M)(1/P + 1/(M − P))

is useful.
P(t) = P0 M e^{Mkt}/(M − P0 + P0 e^{Mkt}) = P0 M/((M − P0) e^{−Mkt} + P0).
a_n(x) d^ny/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + · · · + a_1(x) dy/dx + a_0(x) y = g(x)
Example
y'' + 3y' − 2y = 0, y(0) = 0, y'(0) = 0
A boundary value problem consists of a problem

a_2(x) d^2y/dx^2 + a_1(x) dy/dx + a_0(x) y = g(x), a < x < b

to solve subject to a pair of conditions

y(a) = y0, y(b) = y1.
BVP Example

(2) y'' + 4y = 0, 0 < x < π/2, y(0) = 0, y(π/2) = 0.

(3) y'' + 4y = 0, 0 < x < π/2, y(0) = 0, y(π/2) = 1.
Section 6: Linear Equations: Theory and Terminology
BVP Examples

The general solution of y'' + 4y = 0 is

y = c1 cos(2x) + c2 sin(2x).
Homogeneous Equations
We’ll consider the equation
a_n(x) d^ny/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + · · · + a_1(x) dy/dx + a_0(x) y = 0
and assume that each ai is continuous and an is never zero on the
interval of interest.
Corollaries
Big Questions:

- Does an equation have any nontrivial solution(s), and
- since y1 and cy1 aren't truly different solutions, what criteria will be used to call solutions distinct?
Linear Dependence
The functions f1 (x) = sin2 x, f2 (x) = cos2 x, and f3 (x) = 1 are linearly
dependent on I = (−∞, ∞).
The functions f1 (x) = sin x and f2 (x) = cos x are linearly independent
on I = (−∞, ∞).
Suppose c1 f1(x) + c2 f2(x) = 0 for all real x. Then the equation must hold when x = 0, and it must hold when x = π/2. Taking x = 0,

c1 sin(0) + c2 cos(0) = 0 =⇒ c2 = 0.

Taking x = π/2 (with c2 = 0),

c1 sin(π/2) + 0 · cos(π/2) = 0 =⇒ c1 = 0.

We see that the only way for our linear combination to be zero is for both coefficients to be zero. Hence the functions are linearly independent.
Definition of Wronskian
W(f1, f2, . . . , fn)(x) = det | f1          f2          · · ·  fn          |
                              | f1'         f2'         · · ·  fn'         |
                              | ...         ...               ...          |
                              | f1^{(n−1)}  f2^{(n−1)}  · · ·  fn^{(n−1)}  |

For example, with f1 = sin x and f2 = cos x,

W(f1, f2)(x) = | sin x    cos x  |
               | cos x   −sin x  |  = −sin^2 x − cos^2 x = −1
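The 2 × 2 case is easy to check numerically (the helper `wronskian2` is ours):

```python
import math

def wronskian2(f, fp, g, gp, x):
    """2x2 Wronskian | f g ; f' g' | evaluated at x."""
    return f(x) * gp(x) - g(x) * fp(x)

for x in (-1.0, 0.0, 2.5):
    W = wronskian2(math.sin, math.cos,
                   math.cos, lambda t: -math.sin(t), x)
    assert abs(W - (-1.0)) < 1e-12   # -sin^2 x - cos^2 x = -1 at every x

print("W(sin, cos) = -1")
```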
W(f1, f2, f3)(x) = | x^2   4x   x − x^2  |
                   | 2x    4    1 − 2x   |
                   | 2     0    −2       |

Expanding along the third row,

= 2 | 4x   x − x^2 |  + (−2) | x^2   4x |
    | 4    1 − 2x  |         | 2x    4  |

= 2(−4x^2) − 2(−4x^2) = 0
¹⁵For solutions of one linear homogeneous ODE, the Wronskian is either always zero or is never zero.
y1 = e^x, y2 = e^{−2x}, I = (−∞, ∞)

a_n(x) d^ny/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + · · · + a_1(x) dy/dx + a_0(x) y = 0

with the assumptions a_n(x) ≠ 0 and the a_i(x) continuous on I.
Example
Verify that y1 = e^x and y2 = e^{−x} form a fundamental solution set of the ODE

y'' − y = 0 on (−∞, ∞),

and determine the general solution.

Note that

(i) y1'' − y1 = e^x − e^x = 0 and y2'' − y2 = e^{−x} − e^{−x} = 0.

Also note that (ii) we have two solutions for this second order equation. And finally (iii)

W(y1, y2)(x) = | e^x    e^{−x}  |
               | e^x   −e^{−x}  |  = −2 ≠ 0.

Hence the functions are linearly independent. We have a fundamental solution set, and the general solution is

y = c1 e^x + c2 e^{−x}.
Nonhomogeneous Equations

Now we will consider the equation

a_n(x) d^ny/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + · · · + a_1(x) dy/dx + a_0(x) y = g(x)

where g is not the zero function. We'll continue to assume that a_n doesn't vanish and that the a_i and g are continuous.
Superposition: suppose ypi is a particular solution of the equation

a_n(x) d^ny/dx^n + a_{n−1}(x) d^{n−1}y/dx^{n−1} + · · · + a_1(x) dy/dx + a_0(x) y = gi(x)

for i = 1, . . . , k. Assume the domain of definition for all k equations is a common interval I. Then

yp = yp1 + yp2 + · · · + ypk

is a particular solution of the nonhomogeneous equation

a_n(x) d^ny/dx^n + · · · + a_0(x) y = g1(x) + g2(x) + · · · + gk(x).
y = c1 x^2 + c2 x^3 + 6 − 7x.
Section 7: Reduction of Order
a_2(x) d^2y/dx^2 + a_1(x) dy/dx + a_0(x) y = 0.

In standard form this is

d^2y/dx^2 + P(x) dy/dx + Q(x) y = 0

where P = a1/a2 and Q = a0/a2.
d^2y/dx^2 + P(x) dy/dx + Q(x) y = 0

Recall that every fundamental solution set will consist of two linearly independent solutions y1 and y2, and the general solution will have the form

y = c1 y1(x) + c2 y2(x).

Suppose we happen to know one solution y1(x). Reduction of order is a method for finding a second linearly independent solution y2(x) that starts with the assumption that

y2(x) = u(x) y1(x)

for some function u(x). The method involves finding the function u.
Reduction of Order
Consider the equation in standard form with one known solution.
Determine a second linearly independent solution.
d^2y/dx^2 + P(x) dy/dx + Q(x) y = 0, where y1(x) is known.
Reduction of Order
Since y1 is a solution of the homogeneous equation, the last expression in parentheses is zero. So we obtain an equation for u

u'' y1 + (2y1' + P y1) u' = 0.
Reduction of Order

Solving the resulting first order equation for u', integrating, and setting y2 = u y1 yields

y2 = y1(x) ∫ e^{−∫ P(x) dx}/(y1(x))^2 dx
Example
Find the general solution of the ODE given one known solution
x^2 y'' − 3xy' + 4y = 0, x > 0, y1 = x^2

In standard form, the equation is

y'' − (3/x) y' + (4/x^2) y = 0, so that P(x) = −3/x.

Hence

−∫ P(x) dx = −∫ −(3/x) dx = 3 ln(x) = ln x^3.

A second solution is therefore

y2 = x^2 ∫ exp(ln x^3)/(x^2)^2 dx = x^2 ∫ (x^3/x^4) dx = x^2 ∫ dx/x = x^2 ln x.

Note that we can take the constant of integration here to be zero (why?). The general solution of the ODE is

y = c1 x^2 + c2 x^2 ln x.
Section 8: Homogeneous Equations with Constant Coefficients
a y'' + b y' + c y = 0.

Question: What sort of function y could be expected to satisfy

y'' = constant · y' + constant · y?
Substituting y = e^{mx} gives

0 = ay'' + by' + cy
  = am^2 e^{mx} + bm e^{mx} + c e^{mx}
  = (am^2 + bm + c) e^{mx}

Noting that the exponential is never zero, the truth of the above equation requires m to satisfy

am^2 + bm + c = 0.

There are three cases, according to the discriminant:

I. b^2 − 4ac > 0 and there are two distinct real roots m1 ≠ m2
II. b^2 − 4ac = 0 and there is one repeated real root m = −b/(2a)
III. b^2 − 4ac < 0 and there are two roots that are complex conjugates m1,2 = α ± iβ
Case II: ay'' + by' + cy = 0, where b^2 − 4ac = 0

y = c1 e^{mx} + c2 xe^{mx} where m = −b/(2a)

Exercise: Use reduction of order to show that if y1 = e^{−bx/(2a)}, then y2 = xe^{−bx/(2a)}.

Case III: From the complex solutions Y1 = e^{(α+iβ)x} and Y2 = e^{(α−iβ)x}, Euler's formula lets us build the real solutions

y1 = (1/2)(Y1 + Y2) = e^{αx} cos(βx), and

y2 = (1/(2i))(Y1 − Y2) = e^{αx} sin(βx).
Examples

Solve the ODE y'' + 4y' − 5y = 0. The characteristic equation is

m^2 + 4m − 5 = 0, with roots −5, 1.

This is the two distinct real roots case. Hence y1 = e^{−5x}, y2 = e^x, and the general solution is therefore

y = c1 e^{−5x} + c2 e^x.
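The case analysis can be automated with a small solver for the characteristic equation (a sketch; the classification strings are ours):

```python
import cmath

def char_roots(a, b, c):
    """Roots of a m^2 + b m + c = 0, with the discriminant case."""
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        case = "two distinct real roots"
    elif disc == 0:
        case = "one repeated real root"
    else:
        case = "complex conjugate roots"
    return r1, r2, case

print(char_roots(1, 4, -5))  # roots 1 and -5: y = c1 e^{-5x} + c2 e^x
```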
Examples

The characteristic equation m^2 + 4m + 4 = (m + 2)^2 = 0 has the repeated root m = −2. This is the one repeated real root case. Hence y1 = e^{−2x}, y2 = xe^{−2x}, and the general solution is therefore

y = c1 e^{−2x} + c2 xe^{−2x}.
Example
Solve the ODE
y''' − 4y' = 0

The characteristic equation is

m^3 − 4m = 0, with roots −2, 0, 2.

The general solution is

y = c1 e^{−2x} + c2 + c3 e^{2x}.
Example

Solve the ODE y''' − 3y'' + 3y' − y = 0. The characteristic equation is m^3 − 3m^2 + 3m − 1 = (m − 1)^3 = 0, with m = 1 a root of multiplicity three. Hence

y1 = e^x, y2 = xe^x, y3 = x^2 e^x,

and the general solution is

y = c1 e^x + c2 xe^x + c3 x^2 e^x.
Section 9: Method of Undetermined Coefficients
Motivating Example
Find a particular solution of the ODE

y'' − 4y' + 4y = 8x + 1

Since the right side is a first degree polynomial, we guess¹⁶ that there is a solution of the form

yp = Ax + B

for some pair of constants A, B. We can substitute this into the DE. We'll use that

yp' = A, and yp'' = 0.
¹⁶Note that this is an educated guess. If it doesn't work out, we can sigh and try something else. If it does work, we owe no apologies for starting with a guess.
y'' − 4y' + 4y = 8x + 1

Plugging yp into the ODE we get

8x + 1 = yp'' − 4yp' + 4yp
       = 0 − 4(A) + 4(Ax + B)
       = 4Ax + (−4A + 4B)

We have first degree polynomials on both sides of the equation. They are equal if and only if they have the same corresponding coefficients. Matching the coefficients of x and the constants on the left and right, we get the pair of equations

4A = 8
−4A + 4B = 1

This has solution A = 2, B = 9/4. We've found a particular solution

yp = 2x + 9/4.
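The coefficient matching is just a tiny linear solve, which can be mirrored in code (a sketch for this example only):

```python
# Match coefficients for y'' - 4y' + 4y = 8x + 1 with guess yp = Ax + B.
# Substituting gives 4A x + (-4A + 4B) = 8x + 1.
A = 8 / 4                 # coefficient of x: 4A = 8
B = (1 + 4 * A) / 4       # constant term: -4A + 4B = 1
print(A, B)  # 2.0 2.25, i.e. yp = 2x + 9/4

# sanity check: residual of yp = Ax + B in the ODE at a few points
for x in (0.0, 1.0, -2.0):
    yp, dyp, d2yp = A * x + B, A, 0.0
    assert abs((d2yp - 4 * dyp + 4 * yp) - (8 * x + 1)) < 1e-12
```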
¹⁷e.g. sines and cosines give rise to one another when derivatives are taken. Hence they should be considered together in linear combinations.

¹⁸Most notably, we don't assume a priori values for coefficients in our linear combinations and don't force them to have fixed relationships to one another prior to fitting them to the ODE.
yp = A
yp = Ax + B
yp = Ax + B
yp = Ax^3 + Bx^2 + Cx + D
yp = (Ax + B)e^{3x}
yp = A cos(7x) + B sin(7x)
yp = (Ax^3 + Bx^2 + Cx + D)e^{8x}
y'' − y' = 20 sin(2x) + 4e^{−5x}

Given the theorem in section 6 regarding superposition for nonhomogeneous equations, we can consider the two subproblems

y'' − y' = 20 sin(2x) and y'' − y' = 4e^{−5x}

Calling the particular solutions yp1 and yp2, respectively, the correct forms to guess would be

yp1 = A cos(2x) + B sin(2x) and yp2 = Ce^{−5x}

Carrying out the substitutions, the particular solution is

yp = yp1 + yp2 = 2 cos(2x) − 4 sin(2x) + (2/15) e^{−5x}.

The general solution to the ODE is

y = c1 e^x + c2 + 2 cos(2x) − 4 sin(2x) + (2/15) e^{−5x}.
A Glitch!
y'' − y' = 3e^x

yp = Ae^x.
A Glitch!

y'' − y' = 3e^x

The reason for our failure here comes to light by consideration of the associated homogeneous equation

y'' − y' = 0

with fundamental solution set y1 = e^x, y2 = 1. Our initial guess of Ae^x is a solution to the associated homogeneous equation for every constant A, so for any nonzero A we've only duplicated part of the complementary solution. Fortunately, there is a fix for this problem. Taking a hint from a previous observation involving reduction of order, we may modify our initial guess by including a factor of x. If we guess

yp = Axe^x

we find that this actually works. It can be shown (details left to the reader) that A = 3. So yp = 3xe^x is a particular solution.
Case I: no term of yp = yp1 + · · · + ypk duplicates a term in the complementary solution yc. Then this form will work as written.

Case II: yp has a term ypi that duplicates a term in the complementary solution yc. Multiply that term by x^n, where n is the smallest positive integer that eliminates the duplication.
yp = Ax^2 e^x.
This does not duplicate the complementary solution and will work as
the correct form (i.e. it is possible to find a value of A such that this
function solves the nonhomogeneous ODE).
y'' − 4y' + 4y = sin(4x) + xe^{2x}

We consider the subproblems y'' − 4y' + 4y = sin(4x), with guess yp1 = A cos(4x) + B sin(4x), and y'' − 4y' + 4y = xe^{2x}, with guess yp2 = (Cx + D)e^{2x}. The fundamental solution set of the associated homogeneous equation is

y1 = e^{2x}, y2 = xe^{2x}.

Comparing this set to the first part of the particular solution, yp1, we see that there is no correlation between them. Hence yp1 will suffice as written.
y'' − 4y' + 4y = sin(4x) + xe^{2x}

Our guess at yp2, however, will fail since it contains at least one term (actually both are problematic) that solves the homogeneous equation. We may attempt a factor of x,

yp2 = (Cx^2 + Dx)e^{2x},

to fix the problem. However, this still contains a term (Dxe^{2x}) that duplicates the fundamental solution set. Hence we introduce another factor of x, putting

yp2 = (Cx^3 + Dx^2)e^{2x}.
Here (for the equation y'' − y = 4e^{−x}, whose complementary solution is yc = c1 e^x + c2 e^{−x}) the guess Ae^{−x} duplicates yc, so we use

yp = Axe^{−x}.

Substitution gives A = −2, and the general solution is

y = c1 e^x + c2 e^{−x} − 2xe^{−x}.
So
y(0) = c1 + c2 = −1 and y'(0) = c1 − c2 − 2 = 1.

Solving this system of equations for c1 and c2, we find c1 = 1 and c2 = −2. The solution to the IVP is

y = e^x − 2e^{−x} − 2xe^{−x}.
Section 10: Variation of Parameters
y'' + y = tan x or x²y'' + xy' − 4y = e^x ?

For the second order linear equation

d²y/dx² + P(x) dy/dx + Q(x)y = g(x),

suppose {y1(x), y2(x)} is a fundamental solution set for the
associated homogeneous equation. We seek a particular solution of
the form

yp(x) = u1(x)y1(x) + u2(x)y2(x)

where u1 and u2 are functions19 we will determine (in terms of y1, y2
and g).

19. Note the similarity to yc = c1 y1 + c2 y2. The coefficients u1 and u2 are varying,
hence the name variation of parameters.
Note that we have two unknowns u1, u2 but only one equation (the
ODE). Hence we will introduce a second equation. We'll do this with
some degree of freedom, but in a way that makes life a little bit easier.
Before substituting our form of yp into the ODE, we impose the condition

u1'y1 + u2'y2 = 0.
u1'y1 + u2'y2 = 0
u1'y1' + u2'y2' = g

Solving this system for u1' and u2' gives

u1' = W1/W = −y2 g/W and u2' = W2/W = y1 g/W

where

W1 = | 0   y2  |        W2 = | y1   0 |
     | g   y2' |,            | y1'  g |

and W is the Wronskian of y1 and y2. We simply integrate to obtain u1
and u2.
Example:

u1 = ∫ (−y2 g/W) dx = −∫ (sin x tan x / 1) dx = sin x − ln |sec x + tan x|

u2 = ∫ (y1 g/W) dx = ∫ (cos x tan x / 1) dx = −cos x

20. How we number the functions in the fundamental solution set is completely
arbitrary. However, the designations are important for finding our u's and constructing
our yp. So we pick an ordering at the beginning and stick with it.
Example Continued...

yp = u1 y1 + u2 y2
= (sin x − ln |sec x + tan x|) cos x − cos x sin x
= −cos x ln |sec x + tan x|.
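The computation above can be verified symbolically; a minimal sympy sketch, assuming the equation was y'' + y = tan x as stated earlier:

```python
import sympy as sp

x = sp.symbols('x')

# Particular solution produced by variation of parameters for y'' + y = tan(x).
yp = -sp.cos(x) * sp.log(sp.sec(x) + sp.tan(x))

# Substitute into the left-hand side and compare with tan(x).
residual = sp.diff(yp, x, 2) + yp - sp.tan(x)
print(sp.simplify(residual))  # 0
```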
m d²x/dt² = −kx ⟹ x'' + ω²x = 0, where ω = √(k/m).

Convention We'll Use: Up will be positive (x > 0), and down will be
negative (x < 0). This orientation is arbitrary and follows the
convention in Trench.
Section 11: Linear Mechanical Equations
W = kδx ⟹ k = W/δx.

The units for k in this system of measure are lb/ft.

W = mg ⟹ m = W/g.

At equilibrium, mg = kδx, so

ω² = k/m = g/δx.

Provided that values for δx and g are used in appropriate units, ω is in
units of per second.
x(t) = x0 cos(ωt) + (x1/ω) sin(ωt)

21. Various authors call f the natural frequency and others use this term for ω.
Example

mx'' + kx = 0 ⟹ x'' + ω²x = 0

where ω = √(k/m). We seek the value of ω, but we are given neither
the mass nor the spring constant directly. Since the displacement is
described as displacement in equilibrium, we can calculate

ω = √(g/δx).
Example Continued...

x'' + 64x = 0.
Example
A 4 pound weight stretches a spring 6 inches. The mass is released
from a position 4 feet above equilibrium with an initial downward
velocity of 24 ft/sec. Find the equation of motion, the period, amplitude,
phase shift, and frequency of the motion. (Take g = 32 ft/sec2 .)
We can calculate the spring constant and the mass from the given
information. Converting inches to feet, we have

4 lb = (1/2 ft)·k ⟹ k = 8 lb/ft, and

m = (4 lb)/(32 ft/sec²) = 1/8 slug.

The value of ω is therefore

ω = √(k/m) = 8 per sec.
Example Continued...

Along with the initial conditions, we have the IVP

x'' + 64x = 0, x(0) = 4, x'(0) = −24.

The solution is x(t) = 4 cos(8t) − 3 sin(8t). The period and frequency are

T = 2π/8 = π/4 sec and f = 1/T = 4/π per sec.

The amplitude is

A = √(4² + (−3)²) = 5 ft.
Example Continued...

sin φ = 4/5 and cos φ = −3/5.

We note that sin φ > 0 and cos φ < 0, indicating that φ is a quadrant II
angle (in standard position). Taking the smallest possible positive
value, we have

φ ≈ 2.21 (roughly 127°).
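The quadrant bookkeeping can be delegated to atan2, which chooses the angle based on the signs of both inputs; a small sketch using the coefficients 4 and −3 from above:

```python
import math

# x(t) = 4cos(8t) - 3sin(8t) = A sin(8t + phi), so
# A sin(phi) = 4 (the cosine coefficient) and A cos(phi) = -3.
A = math.hypot(4, -3)     # amplitude
phi = math.atan2(4, -3)   # atan2(A sin phi, A cos phi) lands in quadrant II

print(A)                         # 5.0
print(round(phi, 2))             # 2.21
print(round(math.degrees(phi)))  # 127
```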
m d²x/dt² = −β dx/dt − kx ⟹ d²x/dt² + 2λ dx/dt + ω²x = 0

where

2λ = β/m and ω = √(k/m).

Three qualitatively different solutions can occur depending on the
nature of the roots of the characteristic equation

r² + 2λr + ω² = 0, with roots r1,2 = −λ ± √(λ² − ω²).
Damping Ratio

Engineers may refer to the damping ratio when determining which of
the three types of damping a system exhibits. Simply put, the damping
ratio is the ratio of the system damping to the critical damping for the
given mass and spring constant. Calling this damping ratio ζ,

ζ = (damping coefficient)/(critical damping) = β/(2√(mk)) = λ/ω
Comparison of Damping
Example
A 2 kg mass is attached to a spring whose spring constant is 12 N/m.
The surrounding medium offers a damping force numerically equal to
10 times the instantaneous velocity. Write the differential equation
describing this system. Determine if the motion is underdamped,
overdamped or critically damped.
Our DE is

2x'' + 10x' + 12x = 0 ⟹ x'' + 5x' + 6x = 0.

Hence

λ = 5/2 and ω² = 6.

Note that

λ² − ω² = 25/4 − 6 = 1/4 > 0.

This system is overdamped.
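The sign test on λ² − ω² is easy to mechanize; a small sketch (the function name is my own, not from the notes):

```python
def classify_damping(m, beta, k):
    """Classify mx'' + beta*x' + kx = 0 by the sign of lambda^2 - omega^2."""
    lam = beta / (2 * m)       # from 2*lambda = beta/m
    omega_sq = k / m
    disc = lam**2 - omega_sq
    if disc > 0:
        return "overdamped"
    if disc == 0:
        return "critically damped"
    return "underdamped"

# The example above: m = 2 kg, beta = 10, k = 12 N/m.
print(classify_damping(2, 10, 12))  # overdamped
```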
Example
A 3 kg mass is attached to a spring whose spring constant is 12 N/m.
The surrounding medium offers a damping force numerically equal to
12 times the instantaneous velocity. Write the differential equation
describing this system. Determine if the motion is underdamped,
overdamped or critically damped. If the mass is released from the
equilibrium position with an upward velocity of 1 m/sec, solve the
resulting initial value problem.
From the description, the IVP is

3x'' + 12x' + 12x = 0, x(0) = 0, x'(0) = 1,

or x'' + 4x' + 4x = 0. Here λ = 2 and ω² = 4, so λ² − ω² = 0 and the
motion is critically damped. Solving the IVP gives

x(t) = te^{−2t}.
Driven Motion

m d²x/dt² = −β dx/dt − kx + f(t), β ≥ 0.

Divide out m and let F(t) = f(t)/m to obtain the nonhomogeneous
equation

d²x/dt² + 2λ dx/dt + ω²x = F(t)
x'' + ω²x = F0 sin(γt)

Note that

xc = c1 cos(ωt) + c2 sin(ωt).

If γ ≠ ω, we can take

xp = A cos(γt) + B sin(γt).

If γ = ω, this guess duplicates the complementary solution, so we
multiply by t and take

xp = At cos(ωt) + Bt sin(ωt).

Note that terms of this sort will produce an amplitude of motion that
grows linearly in t.
x(t) = F0/(ω² − γ²) · ( sin(γt) − (γ/ω) sin(ωt) )

If γ ≈ ω, the amplitude of motion could be rather large!
Pure Resonance

Case (2): x'' + ω²x = F0 sin(ωt), x(0) = 0, x'(0) = 0

x(t) = F0/(2ω²) sin(ωt) − (F0/(2ω)) t cos(ωt)
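The claimed resonance solution can be checked symbolically with sympy (assuming it is available):

```python
import sympy as sp

t, w, F0 = sp.symbols('t omega F0', positive=True)

# Claimed solution of x'' + w^2 x = F0 sin(w t), x(0) = 0, x'(0) = 0.
x = F0/(2*w**2) * sp.sin(w*t) - F0/(2*w) * t * sp.cos(w*t)

# Verify the ODE and both initial conditions.
residual = sp.diff(x, t, 2) + w**2 * x - F0 * sp.sin(w*t)
print(sp.simplify(residual))                   # 0
print(x.subs(t, 0), sp.diff(x, t).subs(t, 0))  # 0 0
```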
L d²q/dt² + R dq/dt + (1/C) q = 0

Lq'' + Rq' + (1/C)q = E(t), q(0) = q0, q'(0) = i0.
From our basic theory of linear equations we know that the solution will
take the form
q(t) = qc (t) + qp (t).
The function qc is influenced by the initial state (q0 and i0) and will
decay exponentially as t → ∞. Hence qc is called the transient state
charge of the system.
The function qp is independent of the initial state but depends on the
characteristics of the circuit (L, R, and C) and the applied voltage E.
qp is called the steady state charge of the system.
Section 12: LRC Series Circuits
Example
An LRC series circuit has inductance 0.5 h, resistance 10 ohms, and
capacitance 4 · 10−3 f. Find the steady state current of the system if
the applied voltage is E(t) = 5 cos(10t).
The ODE for the charge is

0.5q'' + 10q' + (1/(4·10⁻³))q = 5 cos(10t) ⟹ q'' + 20q' + 500q = 10 cos(10t).
The characteristic equation r² + 20r + 500 = 0 has roots
r = −10 ± 20i. To determine qp we can assume

qp = A cos(10t) + B sin(10t)
Example Continued...

qp = (1/50) cos(10t) + (1/100) sin(10t).

The steady state current is

ip = dqp/dt = −(1/5) sin(10t) + (1/10) cos(10t).
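Both qp and the steady state current can be verified by direct substitution; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')

qp = sp.cos(10*t)/50 + sp.sin(10*t)/100

# qp should satisfy q'' + 20q' + 500q = 10cos(10t).
residual = sp.diff(qp, t, 2) + 20*sp.diff(qp, t) + 500*qp - 10*sp.cos(10*t)
print(sp.simplify(residual))  # 0

# Steady state current i_p = dq_p/dt = -(1/5)sin(10t) + (1/10)cos(10t).
ip = sp.diff(qp, t)
print(sp.simplify(ip - (sp.cos(10*t)/10 - sp.sin(10*t)/5)))  # 0
```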
Section 13: The Laplace Transform
Integral Transform

An integral transform is a mapping that assigns to a function f(t)
another function F(s) via an integral of the form

∫_a^b K(s,t) f(t) dt.

L{1} = −(1/s)(0 − 1) = 1/s.

L{1} = 1/s, s > 0.

23. The integral is improper. We are in reality evaluating an integral of the form
∫_0^b e^{−st} f(t) dt and then taking the limit b → ∞. We suppress some of the notation
here with the understanding that this process is implied.
L{f(t)} = ∫_0^∞ e^{−st} f(t) dt = ∫_0^{10} 2t e^{−st} dt + ∫_{10}^∞ 0 · e^{−st} dt

For s ≠ 0, integration by parts gives

L{f(t)} = 2/s² − 2e^{−10s}/s² − 20e^{−10s}/s.
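The integration by parts can be double-checked with sympy (assuming it is available):

```python
import sympy as sp

t = sp.symbols('t')
s = sp.symbols('s', positive=True)

# f(t) = 2t on [0, 10) and 0 afterward, so the transform is a finite integral.
F = sp.integrate(2*t*sp.exp(-s*t), (t, 0, 10))

claimed = 2/s**2 - 2*sp.exp(-10*s)/s**2 - 20*sp.exp(-10*s)/s
print(sp.simplify(F - claimed))  # 0
```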
- L{1} = 1/s, s > 0
- L{t^n} = n!/s^{n+1}, s > 0, for n = 1, 2, …
- L{e^{at}} = 1/(s − a), s > a
- L{cos kt} = s/(s² + k²), s > 0
- L{sin kt} = k/(s² + k²), s > 0
L{f(t)} = L{1/2 − (1/2) cos(10t)} = (1/2)L{1} − (1/2)L{cos(10t)}

= 1/(2s) − s/(2(s² + 100))
- L⁻¹{n!/s^{n+1}} = t^n, for n = 1, 2, …
- L⁻¹{1/(s − a)} = e^{at}
- L⁻¹{s/(s² + k²)} = cos kt
- L⁻¹{k/(s² + k²)} = sin kt
The inverse Laplace transform is also linear, so that

L⁻¹{αF(s) + βG(s)} = αL⁻¹{F(s)} + βL⁻¹{G(s)}.

Example: Evaluate

(a) L⁻¹{1/s⁷}

L⁻¹{1/s⁷} = L⁻¹{(1/6!)·(6!/s⁷)} = (1/6!) L⁻¹{6!/s⁷} = t⁶/6!
Section 14: Inverse Laplace Transforms
Example: Evaluate

(b) L⁻¹{(s + 1)/(s² + 9)}

L⁻¹{(s + 1)/(s² + 9)} = L⁻¹{s/(s² + 9)} + L⁻¹{1/(s² + 9)}
= L⁻¹{s/(s² + 9)} + (1/3) L⁻¹{3/(s² + 9)}
= cos(3t) + (1/3) sin(3t)
Example: Evaluate

(c) L⁻¹{(s − 8)/(s² − 2s)}

By partial fractions, (s − 8)/(s(s − 2)) = 4/s − 3/(s − 2), so

L⁻¹{(s − 8)/(s² − 2s)} = 4 − 3e^{2t}
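A candidate inverse transform can be checked by pushing it back through the forward transform; a sympy sketch for part (c), assuming sympy is available:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

expr = (s - 8)/(s**2 - 2*s)

# Partial fractions: should decompose as 4/s - 3/(s - 2).
print(sp.apart(expr, s))

# Forward-transform the candidate 4 - 3e^{2t} and compare with expr.
F = sp.laplace_transform(4 - 3*sp.exp(2*t), t, s, noconds=True)
print(sp.simplify(F - expr))  # 0
```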
Section 15: Shift Theorems
F(s − 1) = 2/(s − 1)³.
Theorem (translation in s)

For example,

L{t^n} = n!/s^{n+1} ⟹ L{e^{at} t^n} = n!/(s − a)^{n+1}.

L{cos(kt)} = s/(s² + k²) ⟹ L{e^{at} cos(kt)} = (s − a)/((s − a)² + k²).
(a) L⁻¹{s/(s² + 2s + 2)}

(b) L⁻¹{(1 + 3s − s²)/(s(s − 1)²)}

= 1 − 2e^t + 3te^t
Figure: We can use the unit step function to provide convenient expressions
for piecewise defined functions.
Verify that

f(t) = { g(t), 0 ≤ t < a; h(t), t ≥ a } = g(t) − g(t)U(t − a) + h(t)U(t − a)

Exercise left to the reader. (Hint: Consider the two intervals 0 ≤ t < a
and t ≥ a.)
Translation in t

Given a function f(t) for t ≥ 0, and a number a > 0,

f(t − a)U(t − a) = { 0, 0 ≤ t < a; f(t − a), t ≥ a }.

Figure: The function f(t − a)U(t − a) has the graph of f shifted a units to the
right, with value zero for t to the left of a.
Theorem (translation in t)

If F(s) = L{f(t)} and a > 0, then

L{f(t − a)U(t − a)} = e^{−as} F(s).

In particular,

L{U(t − a)} = e^{−as}/s.

As another example,

L{t^n} = n!/s^{n+1} ⟹ L{(t − a)^n U(t − a)} = n! e^{−as}/s^{n+1}.
Example

= 1/s + e^{−s}/s².
= e^{−πs/2} L{−sin t} = −e^{−πs/2}/(s² + 1).
= U(t − 2) − e^{−(t−2)} U(t − 2).
Section 16: Laplace Transforms of Derivatives and IVPs
By definition, L{f'(t)} = ∫_0^∞ e^{−st} f'(t) dt.

Let us assume that f is of exponential order c for some real c and take
s > c. Integrate by parts to obtain

L{f'(t)} = ∫_0^∞ e^{−st} f'(t) dt = f(t)e^{−st} |_0^∞ + s ∫_0^∞ e^{−st} f(t) dt
= 0 − f(0) + sL{f(t)}
= sL{f(t)} − f(0)    (9)
Transforms of Derivatives

If L{y(t)} = Y(s), then

L{dy/dt} = sY(s) − y(0),
L{d²y/dt²} = s²Y(s) − sy(0) − y'(0),
⋮
L{d^n y/dt^n} = s^n Y(s) − s^{n−1} y(0) − s^{n−2} y'(0) − ⋯ − y^{(n−1)}(0).
Differential Equation

For constants a, b, and c, take the Laplace transform of both sides of
the equation:

L{ay'' + by' + cy} = L{g(t)} ⟹ aL{y''} + bL{y'} + cL{y} = G(s)
Solving IVPs

General Form: We get

Y(s) = Q(s)/P(s) + G(s)/P(s)

where Q is a polynomial with coefficients determined by the initial
conditions, G is the Laplace transform of g(t), and P is the
characteristic polynomial of the original equation.

L⁻¹{Q(s)/P(s)} is called the zero input response,

and

L⁻¹{G(s)/P(s)} is called the zero state response.
(a) dy/dt + 3y = 2t, y(0) = 2

Apply the Laplace transform and use the initial condition. Let
Y(s) = L{y}.

L{y' + 3y} = L{2t}

sY(s) − y(0) + 3Y(s) = 2/s²

(s + 3)Y(s) − 2 = 2/s²

Y(s) = 2/(s²(s + 3)) + 2/(s + 3) = (2s² + 2)/(s²(s + 3))
Example Continued...

y(t) = L⁻¹{Y(s)} = −2/9 + (2/3)t + (20/9)e^{−3t}.
L{y'' + 4y' + 4y} = L{te^{−2t}}

s²Y(s) − sy(0) − y'(0) + 4sY(s) − 4y(0) + 4Y(s) = 1/(s + 2)²

(s² + 4s + 4)Y(s) − s − 4 = 1/(s + 2)²

Y(s) = 1/(s + 2)⁴ + (s + 4)/(s + 2)².
Example Continued...

Y(s) = 1/(s + 2)⁴ + 1/(s + 2) + 2/(s + 2)²

y(t) = L⁻¹{Y(s)} = (1/3!) t³ e^{−2t} + e^{−2t} + 2te^{−2t}.
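The result can be verified by substituting back into the IVP; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')

y = t**3*sp.exp(-2*t)/6 + sp.exp(-2*t) + 2*t*sp.exp(-2*t)

# ODE: y'' + 4y' + 4y = t e^{-2t}
residual = sp.diff(y, t, 2) + 4*sp.diff(y, t) + 4*y - t*sp.exp(-2*t)
print(sp.simplify(residual))  # 0

# Initial conditions y(0) = 1, y'(0) = 0.
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))  # 1 0
```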
LR Circuit Example

sI(s) − i(0) + 10I(s) = E0 e^{−s}/s − E0 e^{−3s}/s

I(s) = E0 (e^{−s} − e^{−3s}) / (s(s + 10)).
Example Continued...

i(t) = L⁻¹{I(s)} = (E0/10)(1 − e^{−10(t−1)}) U(t − 1) − (E0/10)(1 − e^{−10(t−3)}) U(t − 3).
Solving a System

Example

dx/dt = −2x − 2y + 60, x(0) = 0
dy/dt = −2x − 5y + 60, y(0) = 0
Applying the transform to both sides of both equations,

sX(s) − x(0) = −2X(s) − 2Y(s) + 60/s
sY(s) − y(0) = −2X(s) − 5Y(s) + 60/s

We can substitute in the initial conditions and rearrange the equations
to get an algebraic system:

(s + 2)X(s) + 2Y(s) = 60/s
2X(s) + (s + 5)Y(s) = 60/s
Example Continued...

(s + 2)X(s) + 2Y(s) = 60/s
2X(s) + (s + 5)Y(s) = 60/s

We can solve this system in any number of ways. For those familiar
with it, Cramer's Rule is probably the easiest approach. Elimination will
work just as well. We find

X(s) = 60(s + 3) / (s(s + 1)(s + 6))
Y(s) = 60 / ((s + 1)(s + 6))
Example Continued...

X(s) = 30/s − 24/(s + 1) − 6/(s + 6)
Y(s) = 12/(s + 1) − 12/(s + 6)

Finally, we take the inverse transform to obtain the solution to the
system:

x(t) = 30 − 24e^{−t} − 6e^{−6t}, y(t) = 12e^{−t} − 12e^{−6t}.
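The resulting inverse transforms, x(t) = 30 − 24e^{−t} − 6e^{−6t} and y(t) = 12e^{−t} − 12e^{−6t}, can be checked against the original system; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')

x = 30 - 24*sp.exp(-t) - 6*sp.exp(-6*t)
y = 12*sp.exp(-t) - 12*sp.exp(-6*t)

# Check both differential equations and both initial conditions.
r1 = sp.diff(x, t) - (-2*x - 2*y + 60)
r2 = sp.diff(y, t) - (-2*x - 5*y + 60)
print(sp.simplify(r1), sp.simplify(r2))  # 0 0
print(x.subs(t, 0), y.subs(t, 0))        # 0 0
```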
Example

Use the Laplace transform to solve the system of equations

x''(t) = y, x(0) = 1, x'(0) = 0
y'(t) = x, y(0) = 1

Example Continued...

X(s) = (s² + 1)/(s³ − 1) = (s² + 1)/((s − 1)(s² + s + 1))
Y(s) = (s² + s)/(s³ − 1) = s(s + 1)/((s − 1)(s² + s + 1))
Example Continued...

X(s) = (2/3)/(s − 1) + (1/3)(s − 1)/(s² + s + 1)
Y(s) = (2/3)/(s − 1) + (1/3)(s + 2)/(s² + s + 1)

Example Continued...

X(s) = (2/3)/(s − 1) + (1/3)(s + 1/2)/((s + 1/2)² + 3/4) − (1/2)/((s + 1/2)² + 3/4)
Y(s) = (2/3)/(s − 1) + (1/3)(s + 1/2)/((s + 1/2)² + 3/4) + (1/2)/((s + 1/2)² + 3/4)
Figure: d²x/dt² + 128x = f(t)

Question: How can we solve a problem like this with a right side with
infinitely many pieces?
Section 17: Fourier Series: Trigonometric Series
In calculus, you saw power series f(x) = Σ_{n=0}^∞ a_n (x − c)^n, where the
simple functions were powers (x − c)^n.

Here, you will see how some functions can be written as series of
trigonometric functions:

f(x) = Σ_{n=0}^∞ (a_n cos nx + b_n sin nx)

We'll move the n = 0 term to the front of the rest of the sum.
⟨f, g⟩ = 0.

(i) ⟨f, g⟩ = ⟨g, f⟩
(ii) ⟨f, g + h⟩ = ⟨f, g⟩ + ⟨f, h⟩

Orthogonal Set

A set of functions {φ0(x), φ1(x), φ2(x), …} is said to be orthogonal on
an interval [a, b] if

⟨φm, φn⟩ = ∫_a^b φm(x) φn(x) dx = 0 whenever m ≠ n.

Note that any function φ(x) that is not identically zero will satisfy

⟨φ, φ⟩ = ∫_a^b φ²(x) dx > 0.
{1, cos x, cos 2x, cos 3x, …, sin x, sin 2x, sin 3x, …} on [−π, π].

∫_{−π}^{π} cos nx sin mx dx = 0 for all m, n ≥ 1, and

∫_{−π}^{π} cos nx cos mx dx = ∫_{−π}^{π} sin nx sin mx dx = { 0, m ≠ n; π, m = n }.
{1, cos x, cos 2x, cos 3x, . . . , sin x, sin 2x, sin 3x, . . .}
There are many interesting and useful orthogonal sets of functions (on
appropriate intervals). What follows is readily extended to other such
(infinite) sets.
Fourier Series

Suppose f(x) is defined for −π < x < π. We would like to know how to
write f as a series in terms of sines and cosines.

24. We'll write a0/2 as opposed to a0 purely for convenience.
Fourier Series

f(x) = a0/2 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx).

Multiply both sides by sin 4x:

f(x) sin 4x = (a0/2) sin 4x + Σ_{n=1}^∞ (a_n cos nx sin 4x + b_n sin nx sin 4x).

Integrate both sides over [−π, π]:

∫_{−π}^{π} f(x) sin 4x dx = (a0/2) ∫_{−π}^{π} sin 4x dx
+ Σ_{n=1}^∞ [ a_n ∫_{−π}^{π} cos nx sin 4x dx + b_n ∫_{−π}^{π} sin nx sin 4x dx ].

Now we use the known orthogonality property. Recall that
∫_{−π}^{π} sin 4x dx = 0, and that for every n = 1, 2, …

∫_{−π}^{π} cos nx sin 4x dx = 0
Note that there was nothing special about seeking the 4th sine
coefficient b4 . We could have just as easily sought bm for any positive
integer m. We would simply start by introducing the factor sin(mx).
Moreover, using the same orthogonality property, we could pick on the
a’s by starting with the factor cos(mx)—including the constant term
since cos(0 · x) = 1. The only minor difference we want to be aware of
is that
∫_{−π}^{π} cos²(mx) dx = { 2π, m = 0; π, m ≥ 1 }
Careful consideration of this sheds light on why it is conventional to
take the constant term to be a0/2 as opposed to just a0.
where

a0 = (1/π) ∫_{−π}^{π} f(x) dx,

a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx, and

b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx
Example

Find the Fourier series of the piecewise defined function

f(x) = { 0, −π < x < 0; x, 0 ≤ x < π }
Example Continued...

∫_{−p}^{p} cos(nπx/p) sin(mπx/p) dx = 0 for all m, n ≥ 1,

∫_{−p}^{p} cos(nπx/p) cos(mπx/p) dx = { 0, m ≠ n; p, m = n },

∫_{−p}^{p} sin(nπx/p) sin(mπx/p) dx = { 0, m ≠ n; p, m = n }.
where

a0 = (1/p) ∫_{−p}^{p} f(x) dx,

a_n = (1/p) ∫_{−p}^{p} f(x) cos(nπx/p) dx, and

b_n = (1/p) ∫_{−p}^{p} f(x) sin(nπx/p) dx
Example

We apply the given formulas to find the coefficients. Noting that f is
defined on the interval (−1, 1), we have p = 1.

a0 = (1/1) ∫_{−1}^{1} f(x) dx = ∫_{−1}^{0} dx + ∫_{0}^{1} (−2) dx = −1

a_n = (1/1) ∫_{−1}^{1} f(x) cos(nπx) dx = ∫_{−1}^{0} cos(nπx) dx + ∫_{0}^{1} (−2) cos(nπx) dx = 0

b_n = (1/1) ∫_{−1}^{1} f(x) sin(nπx) dx = ∫_{−1}^{0} sin(nπx) dx + ∫_{0}^{1} (−2) sin(nπx) dx = 3((−1)^n − 1)/(nπ)
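The closed form for b_n can be spot-checked numerically with a plain midpoint rule (no libraries beyond the standard math module):

```python
import math

def f(x):
    # The piecewise function from the example: 1 on (-1, 0), -2 on (0, 1).
    return 1.0 if x < 0 else -2.0

def b_n(n, slices=20000):
    # Midpoint rule for b_n = integral from -1 to 1 of f(x) sin(n*pi*x) dx (p = 1).
    h = 2.0 / slices
    total = 0.0
    for i in range(slices):
        x = -1.0 + (i + 0.5) * h
        total += f(x) * math.sin(n * math.pi * x) * h
    return total

for n in range(1, 5):
    exact = 3 * ((-1)**n - 1) / (n * math.pi)
    assert abs(b_n(n) - exact) < 1e-4
print("b_n matches 3((-1)^n - 1)/(n pi)")
```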
Example Continued...

Putting the coefficients into the expansion, we get

f(x) = −1/2 + Σ_{n=1}^∞ [ 3((−1)^n − 1)/(nπ) ] sin(nπx).
at that point.
Periodic Extension:
The series is also defined for x outside of the original domain (−p, p).
The extension to all real numbers is 2p-periodic.
Figure: Plot of the infinite sum, the limit for the Fourier series of f . Note that
the basic plot repeats every 2 units, and converges in the mean at each jump.
Example Continued...
Figure: Plot of f (x) = x for −1 < x < 1 with two terms of the Fourier series.
Figure: Plot of f (x) = x for −1 < x < 1 with 10 terms of the Fourier series
Figure: Plot of f (x) = x for −1 < x < 1 with the Fourier series plotted on
(−3, 3). Note that the series repeats the profile every 2 units. At the jumps,
the series converges to (−1 + 1)/2 = 0.
Figure: Here is a plot of the series (what it converges to). We see the
periodicity and convergence in the mean. Note: A plot like this is determined
by our knowledge of the generating function and Fourier series, not by
analyzing the series itself.
Section 18: Sine and Cosine Series
So, suppose f is even on (−p, p). Then f(x) cos(nπx/p) is even for all n
and f(x) sin(nπx/p) is odd for all n.

If instead f is odd on (−p, p), then f(x) sin(nπx/p) is even for all n and
f(x) cos(nπx/p) is odd for all n.
where

a0 = (2/p) ∫_0^p f(x) dx

and

a_n = (2/p) ∫_0^p f(x) cos(nπx/p) dx.
If f is odd on (−p, p), then the Fourier series of f has only sine terms.
Moreover,

f(x) = Σ_{n=1}^∞ b_n sin(nπx/p)

where

b_n = (2/p) ∫_0^p f(x) sin(nπx/p) dx.
Example Continued...

Half range sine series: f(x) = Σ_{n=1}^∞ b_n sin(nπx/p)

where b_n = (2/p) ∫_0^p f(x) sin(nπx/p) dx.
Here, the value p = 2. Using the formula for the coefficients of the sine
series,

b_n = (2/2) ∫_0^2 f(x) sin(nπx/2) dx
= ∫_0^2 (2 − x) sin(nπx/2) dx
= 4/(nπ)

The series is f(x) = Σ_{n=1}^∞ (4/(nπ)) sin(nπx/2).
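The coefficient 4/(nπ) can likewise be spot-checked numerically; a midpoint rule sketch:

```python
import math

def b_n(n, slices=20000):
    # Midpoint rule for b_n = (2/2) * integral from 0 to 2 of (2 - x) sin(n*pi*x/2) dx.
    h = 2.0 / slices
    total = 0.0
    for i in range(slices):
        x = (i + 0.5) * h
        total += (2.0 - x) * math.sin(n * math.pi * x / 2) * h
    return total

for n in range(1, 6):
    assert abs(b_n(n) - 4 / (n * math.pi)) < 1e-4
print("b_n = 4/(n pi) confirmed numerically")
```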
Example Continued...
Figure: f (x) = 2 − x, 0 < x < 2 with 10 terms of the sine series, and the
series plotted over (−6, 6)
Figure: f (x) = 2 − x, 0 < x < 2 with 5 terms of the cosine series, and the
series plotted over (−6, 6)
= Σ_{n=1}^∞ [ 2(−1)^{n+1}/(nπ) ] sin(nπt).
f(t)                          F(s) = L{f(t)}
1                             1/s, s > 0
t^n, n = 1, 2, …              n!/s^{n+1}, s > 0
t^r, r > −1                   Γ(r + 1)/s^{r+1}, s > 0
e^{at}                        1/(s − a), s > a
sin(kt), k ≠ 0                k/(s² + k²), s > 0
cos(kt)                       s/(s² + k²), s > 0
e^{at} f(t)                   F(s − a)
U(t − a), a > 0               e^{−as}/s, s > 0
U(t − a) f(t − a), a > 0      e^{−as} F(s)
U(t − a) g(t), a > 0          e^{−as} L{g(t + a)}
f'(t)                         sF(s) − f(0)
f^{(n)}(t)                    s^n F(s) − s^{n−1} f(0) − s^{n−2} f'(0) − ⋯ − f^{(n−1)}(0)
t f(t)                        −dF/ds
t^n f(t), n = 1, 2, …         (−1)^n d^nF/ds^n

L⁻¹{αF(s) + βG(s)} = αL⁻¹{F(s)} + βL⁻¹{G(s)}
Appendix: Cramer's Rule

While Cramer's rule can be used with any size system, we'll restrict
ourselves to the 2 × 2 case. We obtain the solution in terms of ratios of
determinants. First, let's see how the method plays out in general, and
then we illustrate with an example.

Note: Cramer's rule will produce the same solution as any other
approach. Its advantage is in its computational simplicity (which gets
lost the larger the system is).
ax + by = e
cx + dy = f

We're going to form 3 matrices. The first is the coefficient matrix for the
system. I'll call that A. So

A = [ a  b ]
    [ c  d ].

25. We can allow any of a through f to be unknown parameters, but they don't depend
on x or y, so they will still be considered constant.
ax + by = e
cx + dy = f

Next, we form two more matrices that I'll call Ax and Ay. These
matrices are obtained by replacing one column of A with the values
from the right side of the system. For Ax we replace the first column
(the one with x's coefficients), and for Ay we replace the second
column (the one with y's coefficients). We have

Ax = [ e  b ]           Ay = [ a  e ]
     [ f  d ],  and          [ c  f ].
ax + by = e
cx + dy = f

x = det(Ax)/det(A) and y = det(Ay)/det(A).

26. This is a well known result that can be found in any elementary discussion of
linear algebra.
2x − 3y = −4
3x + 7y = 2

Let's form the three matrices and verify that the determinant of the
coefficient matrix isn't zero.

A = [ 2  −3 ]       Ax = [ −4  −3 ]            Ay = [ 2  −4 ]
    [ 3   7 ],           [  2   7 ],  and           [ 3   2 ].

We find det(A) = 23, det(Ax) = −22, and det(Ay) = 16, so

x = −22/23 and y = 16/23.

It's worth taking a moment to substitute those values back into the
system to verify that this does indeed solve it. It's not hard to imagine,
looking at the solution, that solving the system by substitution or
elimination would probably be more tedious.
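The 2 × 2 recipe translates directly into code; a small sketch (the function name is my own), using exact fractions so the answer matches the hand computation:

```python
from fractions import Fraction

def cramer_2x2(a, b, c, d, e, f):
    """Solve ax + by = e, cx + dy = f by Cramer's rule."""
    det_a = a*d - b*c
    if det_a == 0:
        raise ValueError("coefficient matrix is singular; Cramer's rule does not apply")
    det_ax = e*d - b*f   # x-column of A replaced by (e, f)
    det_ay = a*f - e*c   # y-column of A replaced by (e, f)
    return det_ax / det_a, det_ay / det_a

# The example system: 2x - 3y = -4, 3x + 7y = 2, with exact arithmetic.
x, y = cramer_2x2(*map(Fraction, (2, -3, 3, 7, -4, 2)))
print(x, y)  # -22/23 16/23
```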
For larger systems (perhaps bigger than 3 × 3) one must weigh the
computational intensity of computing determinants against that of other
options such as elimination or substitution. The approach also breaks
down if the coefficient matrix has zero determinant. The system may
have solutions (or not), but another approach is needed to characterize
them.