
MATH 2306: Ordinary Differential Equations

Lake Ritter, Kennesaw State University

2023

This manuscript is a text-like version of the lecture slides I have created for use in my ODE classes at KSU. It is not intended to serve as a complete text or reference book, but is provided as a supplement to my students. I strongly advise against using this text as a substitute for regular class attendance. In particular, most of the computational details have been omitted. However, my own students are encouraged to read (and read ahead) as part of preparing for class.

All others are welcome to use this document for any noncommercial use. To report errata or to request TeX and figure files, please email me at [email protected].

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Table of Contents
Section 1: Concepts and Terminology
Section 2: Initial Value Problems
Section 3: First Order Equations: Separation of Variables
Section 4: First Order Equations: Linear & Special
Some Special First Order Equations
Section 5: First Order Equations: Models and Applications
Section 6: Linear Equations: Theory and Terminology
Section 7: Reduction of Order
Section 8: Homogeneous Equations with Constant Coefficients
Section 9: Method of Undetermined Coefficients
Section 10: Variation of Parameters
Section 11: Linear Mechanical Equations
Section 12: LRC Series Circuits
Section 13: The Laplace Transform
Section 14: Inverse Laplace Transforms
Section 15: Shift Theorems
Section 16: Laplace Transforms of Derivatives and IVPs
Section 17: Fourier Series: Trigonometric Series
Section 18: Sine and Cosine Series
Table of Laplace Transforms
Appendix: Cramer’s Rule
Section 1: Concepts and Terminology


Suppose y = φ(x) is a differentiable function. We know that dy/dx = φ′(x) is another (related) function.

For example, if y = cos(2x), then y is differentiable on (−∞, ∞). In fact,

dy/dx = −2 sin(2x).

Even dy/dx is differentiable, with d²y/dx² = −4 cos(2x). Note that

d²y/dx² + 4y = 0.
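We can check this numerically. The sketch below (in Python; the helper name second_derivative is ours, not from the text) approximates the second derivative with a centered difference and confirms that the residual is essentially zero:

```python
import math

def second_derivative(f, x, h=1e-5):
    # centered-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

y = lambda x: math.cos(2 * x)

# d^2y/dx^2 + 4y should be numerically zero at any x
for x in [0.0, 0.7, 2.1]:
    assert abs(second_derivative(y, x) + 4 * y(x)) < 1e-4
```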

The equation

d²y/dx² + 4y = 0

is an example of a differential equation.

Questions: If we only started with the equation, how could we determine that cos(2x) satisfies it? Also, is cos(2x) the only possible function that y could be?

We will be able to answer these questions as we proceed.



Definition

A Differential Equation is an equation containing the derivative(s) of one or more dependent variables, with respect to one or more independent variables.

Independent variable: appears as a variable that derivatives are taken with respect to.
Dependent variable: appears as a variable that derivatives are taken of.

dy/dx    du/dt    dx/dr

Identify the dependent and independent variables in these terms.

Classifications
Type: An ordinary differential equation (ODE) has exactly one independent variable¹. For example

dy/dx − y² = 3x, or dy/dt + 2 dx/dt = t, or y″ + 4y = 0

A partial differential equation (PDE) has two or more independent variables. For example

∂y/∂t = ∂²y/∂x², or ∂²u/∂r² + (1/r) ∂u/∂r + (1/r²) ∂²u/∂θ² = 0

For those unfamiliar with the notation, ∂ is the symbol used when taking a derivative with respect to one variable while keeping the remaining variable(s) constant. ∂u/∂t is read as the "partial derivative of u with respect to t."

¹ These are the subject of this course.

Classifications

Order: The order of a differential equation is the same as the highest order derivative appearing anywhere in the equation.

dy/dx − y² = 3x

y‴ + (y′)⁴ = x³

∂y/∂t = ∂²y/∂x²

These are first, third, and second order equations, respectively.



Notations and Symbols

We’ll use standard derivative notations:

Leibniz: dy/dx, d²y/dx², ..., d^n y/dx^n, or
prime and superscripts: y′, y″, ..., y^(n).

Newton’s dot notation may be used if the independent variable is time. For example, if s is a position function, then velocity is ds/dt = ṡ, and acceleration is d²s/dt² = s̈.

Notations and Symbols

An nth order ODE, with independent variable x and dependent variable y, can always be expressed as an equation

F(x, y, y′, ..., y^(n)) = 0

where F is some real valued function of n + 2 variables.

Normal Form: If it is possible to isolate the highest derivative term, then we can write a normal form of the equation

d^n y/dx^n = f(x, y, y′, ..., y^(n−1)).

Example

The equation y″ + 4y = 0 has the form F(x, y, y′, y″) = 0 where

F(x, y, y′, y″) = y″ + 4y.

This equation is second order. In normal form it is y″ = f(x, y, y′) where

f(x, y, y′) = −4y.

Notations and Symbols


If n = 1 or n = 2, an equation in normal form would look like

dy/dx = f(x, y) or d²y/dx² = f(x, y, y′).

Differential Form: A first order equation may appear in the form

M(x, y) dx + N(x, y) dy = 0

Note that this can be rearranged into a couple of different normal forms:

dy/dx = −M(x, y)/N(x, y) or dx/dy = −N(x, y)/M(x, y)

Classifications

Linearity: An nth order differential equation is said to be linear if it can be written in the form

an(x) d^n y/dx^n + an−1(x) d^(n−1)y/dx^(n−1) + · · · + a1(x) dy/dx + a0(x) y = g(x).

Note that each of the coefficients a0, ..., an and the right hand side g may depend on the independent variable but not on the dependent variable or any of its derivatives.

Examples (Linear -vs- Nonlinear)

y″ + 4y = 0        t² d²x/dt² + 2t dx/dt − x = e^t

Convince yourself that these two equations are linear.

d³y/dx³ + (dy/dx)⁴ = x³        u″ + u′ = cos u

The terms (dy/dx)⁴ and cos u make these nonlinear.



Solution of F(x, y, y′, ..., y^(n)) = 0 (*)

Definition: A function φ defined on an interval I and possessing at least n continuous derivatives on I is a solution of (*) on I if, upon substitution (i.e., setting y = φ(x)), the equation reduces to an identity.

Definition: An implicit solution of (*) is a relation G(x, y) = 0, provided there exists at least one function y = φ that satisfies both the differential equation (*) and this relation.

The interval I is called the domain of the solution or the interval of definition.

Function -vs- Solution

The interval of definition has to be an interval.

Consider y′ = −y². Clearly y = 1/x solves the DE. The interval of definition can be (−∞, 0), or (0, ∞), or any interval that doesn’t contain the origin. But it can’t be (−∞, 0) ∪ (0, ∞), because this isn’t an interval!

Often, we’ll take I to be the largest, or one of the largest, possible intervals. It may depend on other information.

Figure: Left: Plot of f(x) = 1/x as a function. Right: A plot of f(x) = 1/x as a possible solution of an ODE.

Note that for any choice of constants c1 and c2, y = c1 x + c2/x is a solution of the differential equation

x² y″ + x y′ − y = 0

We have

y′ = c1 − c2/x², and y″ = 2c2/x³.

So

x² y″ + x y′ − y = x²(2c2/x³) + x(c1 − c2/x²) − (c1 x + c2/x)
= 2c2/x + c1 x − c2/x − c1 x − c2/x
= (2c2 − c2 − c2)(1/x) + (c1 − c1) x
= 0,

as required.
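The same verification can be carried out numerically. A minimal sketch (Python; residual is an illustrative helper name, not from the text) checks the left-hand side with centered-difference derivatives for random choices of c1, c2, and x:

```python
import random

def y(x, c1, c2):
    # candidate solution y = c1*x + c2/x
    return c1 * x + c2 / x

def residual(x, c1, c2, h=1e-5):
    # x^2 y'' + x y' - y, with derivatives from centered differences
    yp  = (y(x + h, c1, c2) - y(x - h, c1, c2)) / (2 * h)
    ypp = (y(x + h, c1, c2) - 2 * y(x, c1, c2) + y(x - h, c1, c2)) / h**2
    return x**2 * ypp + x * yp - y(x, c1, c2)

random.seed(1)
for _ in range(5):
    c1 = random.uniform(-3, 3)
    c2 = random.uniform(-3, 3)
    x  = random.uniform(0.5, 2.0)
    assert abs(residual(x, c1, c2)) < 1e-4
```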

Some Terms

- A parameter is an unspecified constant such as c1 and c2 in the last example.
- A family of solutions is a collection of solution functions that only differ by a parameter.
- An n-parameter family of solutions is one containing n parameters (e.g., c1 x + c2/x is a 2-parameter family).
- A particular solution is one with no arbitrary constants in it.
- The trivial solution is the simple constant function y = 0.
- An integral curve is the graph of one solution (perhaps from a family).

Systems of ODEs

Sometimes we want to consider two or more dependent variables that are functions of the same independent variable. The ODEs for the dependent variables can depend on one another. Some examples of relevant situations are

- predator and prey,
- competing species,
- two or more masses attached to a system of springs, and
- two or more composite fluids in attached tank systems.

Such systems can be linear or nonlinear.


Example of a Nonlinear System

dx/dt = −αx + βxy
dy/dt = γy − δxy

This is known as the Lotka-Volterra predator-prey model. x(t) is the population (density) of predators, and y(t) is the population of prey. The numbers α, β, γ, and δ are nonnegative constants.

This model is built on the assumptions that

- in the absence of predation, prey increase exponentially,
- in the absence of prey, predators decrease exponentially, and
- predator-prey interactions increase the predator population and decrease the prey population.
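To get a feel for the dynamics, here is a rough numerical sketch of the system using tangent-line (Euler) stepping, which is developed in Section 2. The parameter values and initial densities below are made up for illustration, not taken from the text:

```python
# Euler stepping of the Lotka-Volterra system; the parameter values and
# initial densities below are illustrative, not from the text.
alpha, beta, gamma, delta = 1.0, 1.0, 1.0, 1.0
x, y = 2.0, 1.0          # initial predator (x) and prey (y) densities
h, steps = 0.001, 2000   # step size and number of steps (t from 0 to 2)

for _ in range(steps):
    dx = -alpha * x + beta * x * y
    dy = gamma * y - delta * x * y
    x, y = x + h * dx, y + h * dy

# both populations remain positive and bounded over this window
assert 0 < x < 10 and 0 < y < 10
```

With these parameters the populations cycle around the equilibrium (γ/δ, α/β) = (1, 1) rather than settling down, which is the characteristic predator-prey behavior.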

Example of a Linear System


di2/dt = −2 i2 − 2 i3 + 60
di3/dt = −2 i2 − 5 i3 + 60

Figure: Electrical network of resistors and inductors showing currents i2 and i3 modeled by this system of equations.

Systems of ODEs

Solving a system means finding all dependent variables as functions of the independent variable.

Example: Show that the pair of functions i2(t) = 30 − 24e^{−t} − 6e^{−6t} and i3(t) = 12e^{−t} − 12e^{−6t} is a solution to the system

di2/dt = −2 i2 − 2 i3 + 60
di3/dt = −2 i2 − 5 i3 + 60

Exercise left to the reader.
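For readers who want to confirm their work on this exercise, the sketch below (Python; check is an illustrative helper name) compares centered-difference derivatives of the proposed pair against the right-hand sides of the system:

```python
import math

def i2(t): return 30 - 24 * math.exp(-t) - 6 * math.exp(-6 * t)
def i3(t): return 12 * math.exp(-t) - 12 * math.exp(-6 * t)

def check(t, h=1e-6):
    # centered-difference derivatives vs. the right-hand sides
    di2 = (i2(t + h) - i2(t - h)) / (2 * h)
    di3 = (i3(t + h) - i3(t - h)) / (2 * h)
    assert abs(di2 - (-2 * i2(t) - 2 * i3(t) + 60)) < 1e-4
    assert abs(di3 - (-2 * i2(t) - 5 * i3(t) + 60)) < 1e-4

for t in [0.0, 0.5, 1.0, 2.0]:
    check(t)
```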



Systems of ODEs

There are various approaches to solving a system of differential equations. These can include

- elimination (try to eliminate a dependent variable),
- matrix techniques,
- Laplace transforms³, and
- numerical approximation techniques.

³ We’ll consider this later.
Section 2: Initial Value Problems


An initial value problem consists of an ODE with additional conditions.

For Example: Solve the equation⁴

d^n y/dx^n = f(x, y, y′, ..., y^(n−1))    (1)

subject to the initial conditions

y(x0) = y0, y′(x0) = y1, ..., y^(n−1)(x0) = yn−1.    (2)

The problem (1)-(2) is called an initial value problem (IVP). Note that y and its derivatives are evaluated at the same initial x value of x0.

⁴ on some interval I containing x0.

Examples for n = 1 or n = 2
First order case:

dy/dx = f(x, y), y(x0) = y0

Second order case:

d²y/dx² = f(x, y, y′), y(x0) = y0, y′(x0) = y1

Example
Given that y = c1 x + c2/x is a 2-parameter family of solutions of x² y″ + x y′ − y = 0, solve the IVP

x² y″ + x y′ − y = 0, y(1) = 1, y′(1) = 3

Satisfying the initial conditions will require certain values for c1 and c2.

y(1) = c1(1) + c2/1 = 1 =⇒ c1 + c2 = 1
y′(1) = c1 − c2/1² = 3 =⇒ c1 − c2 = 3

Solving this algebraic system, one finds that c1 = 2 and c2 = −1. So the solution to the IVP is

y = 2x − 1/x.

Graphical Interpretation

Figure: Each curve solves y′ + 2xy = 0, y(0) = y0. Each colored curve corresponds to a different value of y0.

A Numerical Solution
Consider a first order initial value problem

dy/dx = f(x, y), y(x0) = y0.

In the coming sections, we’ll see methods for solving some of these problems analytically (i.e., by hand). The method will depend on the type of equation. But not all ODEs are readily solved by hand. We can ask whether we can at least obtain an approximation to the solution, for example as a table of values or in the form of a curve. In general, the answer is that we can get such an approximation. Various algorithms have been developed to do this. We’re going to look at a method known as Euler’s Method.
The strategy behind Euler’s method is to construct the solution starting
with the known initial point (x0 , y0 ) and using the tangent line to find the
next point on the solution curve.

Example: dy/dx = xy, y(0) = 1

Figure: We know that the point (x0 , y0 ) = (0, 1) is on the curve. And the slope
of the curve at (0, 1) is m0 = f (0, 1) = 0 · 1 = 0.
Note: The gray curve is the true solution to this IVP. It’s shown for reference.

Example: dy/dx = xy, y(0) = 1

Figure: So we draw a little tangent line (we know the point and slope). Then
we increase x, say x1 = x0 + h, and approximate the solution value y(x1 ) with
the value on the tangent line y1 . So y1 ≈ y (x1 ).

Example: dy/dx = xy, y(0) = 1

Figure: We take the approximation to the true function y at the point x1 = x0 + h to be the point on the tangent line.

Example: dy/dx = xy, y(0) = 1

Figure: When h is very small, the true solution and the tangent line point will
be close. Here, we’ve zoomed in to see that there is some error between the
exact y value and the approximation from the tangent line.

Example: dy/dx = xy, y(0) = 1

Figure: Now we start with the point (x1 , y1 ) and repeat the process. We get
the slope m1 = f (x1 , y1 ) and draw a tangent line through (x1 , y1 ) with slope
m1 .

Example: dy/dx = xy, y(0) = 1

Figure: We go out h more units to x2 = x1 + h. Pick the point on the tangent line (x2, y2), and use this to approximate y(x2). So y2 ≈ y(x2).

Example: dy/dx = xy, y(0) = 1

Figure: If we zoom in, we can see that there is some error. But as long as h is
small, the point on the tangent line approximates the point on the actual
solution curve.

Example: dy/dx = xy, y(0) = 1

Figure: We can repeat this process at the new point to obtain the next point.
We build an approximate solution by advancing the independent variable and
connect the points (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ).

Euler’s Method: An Algorithm & Error


We start with the IVP
dy
= f (x, y), y (x0 ) = y0 .
dx
We build a sequence of points that approximates the true solution y
(x0 , y0 ), (x1 , y1 ), (x2 , y2 ), . . . , (xN , yN ).

We’ll take the x values to be equally spaced with a common difference


of h. That is
x1 = x0 + h
x2 = x1 + h = x0 + 2h
x3 = x2 + h = x0 + 3h
..
.
xn = x0 + nh

Euler’s Method: An Algorithm

dy/dx = f(x, y), y(x0) = y0.

Notation:

- The approximate value of the solution at xn will be denoted by yn,
- and the exact values (that we don’t expect to actually know) will be denoted y(xn).

To build a formula for the approximation y1, let’s approximate the derivative at (x0, y0):

f(x0, y0) = dy/dx evaluated at (x0, y0) ≈ (y1 − y0)/(x1 − x0).

(Notice that’s the standard formula for slope.)


Euler’s Method: An Algorithm


dy/dx = f(x, y), y(x0) = y0.

We know x0 and y0, and we also know that x1 = x0 + h, so that x1 − x0 = h. Thus, we can solve for y1:

(y1 − y0)/(x1 − x0) = (y1 − y0)/h = f(x0, y0)

=⇒ y1 − y0 = h f(x0, y0)
=⇒ y1 = y0 + h f(x0, y0)

The formula to approximate y1 is therefore

y1 = y0 + h f(x0, y0)

Euler’s Method: An Algorithm


dy/dx = f(x, y), y(x0) = y0.

We can continue this process. So we use

(y2 − y1)/h = f(x1, y1) =⇒ y2 = y1 + h f(x1, y1),

and so forth. We have

Euler’s Method Formula: The nth approximation yn to the exact solution y(xn) is given by

yn = yn−1 + h f(xn−1, yn−1),

with (x0, y0) given in the original IVP and h the choice of step size.

Euler’s Method Example: dy/dx = xy, y(0) = 1

Let’s take h = 0.25 and find the first three iterates y1, y2, and y3. We have x0 = 0 and y0 = 1. So

y1 = y0 + h f(x0, y0) = 1 + 0.25(0 · 1) = 1

Now we repeat to find y2. We have x1 = 0.25 and y1 = 1.

y2 = y1 + h f(x1, y1) = 1 + 0.25(0.25 · 1) = 1.0625

Now we repeat to find y3. We have x2 = 0.5 and y2 = 1.0625.

y3 = y2 + h f(x2, y2) = 1.0625 + 0.25(0.5 · 1.0625) = 1.19531
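The iteration above can be sketched in a few lines of code (Python; the function name euler is ours, not from the text):

```python
def euler(f, x0, y0, h, steps):
    # Euler's method: y_n = y_{n-1} + h*f(x_{n-1}, y_{n-1})
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        y = y + h * f(x, y)
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys

# Reproduce the iterates above for dy/dx = x*y, y(0) = 1, h = 0.25
xs, ys = euler(lambda x, y: x * y, 0.0, 1.0, 0.25, 3)
assert abs(ys[1] - 1.0) < 1e-12
assert abs(ys[2] - 1.0625) < 1e-12
assert abs(ys[3] - 1.1953125) < 1e-12
```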



Euler’s Method Example: dx/dt = (x² − t²)/(xt), x(1) = 2

Take h = 0.2 and use Euler’s method to approximate x(1.4).

We’ll need to use two steps. With h = 0.2, we’ll move from t = 1 to t = 1.2 and then to t = 1.4. First, let’s determine the formula using Euler’s method. We have f(t, x) = (x² − t²)/(xt), so the general formula will be

xn = xn−1 + h f(tn−1, xn−1) = xn−1 + 0.2 (xn−1² − tn−1²)/(xn−1 tn−1)

We also have t0 = 1 and x0 = 2.

Euler’s Method Example: dx/dt = (x² − t²)/(xt), x(1) = 2

First, we approximate x(1.2):

x1 = x0 + 0.2 (x0² − t0²)/(x0 t0) = 2 + 0.2 (2² − 1²)/(2 · 1) = 2.3

So we have the point (1.2, 2.3).

Euler’s Method Example: dx/dt = (x² − t²)/(xt), x(1) = 2

Now we move on to the next point to approximate x(1.4). From the last step, we have t1 = 1.2 and x1 = 2.3, so

x2 = x1 + 0.2 (x1² − t1²)/(x1 t1) = 2.3 + 0.2 (2.3² − 1.2²)/(2.3 · 1.2) = 2.579

So we have the point (1.4, 2.579). The approximation is

x(1.4) ≈ 2.579.

It is possible to solve this IVP exactly to obtain the solution x = √(4t² − 2t² ln(t)). To four significant digits, the true value is x(1.4) = 2.554.
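The same two steps, and the comparison with the exact solution, can be sketched as follows (Python):

```python
import math

f = lambda t, x: (x**2 - t**2) / (x * t)

t, x, h = 1.0, 2.0, 0.2
for _ in range(2):       # two Euler steps: t = 1 -> 1.2 -> 1.4
    x = x + h * f(t, x)
    t = t + h

exact = math.sqrt(4 * 1.4**2 - 2 * 1.4**2 * math.log(1.4))
assert abs(x - 2.579) < 1e-3       # Euler approximation of x(1.4)
assert abs(exact - 2.554) < 1e-3   # true value, about 2.5536
```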

Euler’s Method: Error

As the previous graph suggests, the approximate solution obtained using Euler’s method has error. Moreover, the error can be expected to become more pronounced the farther away from the initial condition we get.

First, let’s define what we mean by the term error. There are a couple of types of error that we can talk about. These are⁵

Absolute Error = |True Value − Approximate Value|

and

Relative Error = Absolute Error / |True Value|

⁵ Some authors will define absolute error without use of absolute value bars, so that absolute error need not be nonnegative.

Euler’s Method: Error


We expect to get better results taking smaller steps. We can ask, how
does the error depend on the step size? Let’s look at some error for
one of the examples.
dy
= xy, y(0) = 1
dx
I programed Euler’s method into Matlab and used different h values to
approximate y(1). The number of iterations depends on the step size.
For example, if h = 0.2, it takes five steps to get from x0 = 0 to x5 = 1.
In general, the number of steps is n = h1 .
y(1)−yn
h y (1) − yn y(1)
0.2 0.1895 0.1149
0.1 0.1016 0.0616
0.05 0.0528 0.0320
0.025 0.0269 0.0163
0.0125 0.0136 0.0082
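The first row of the table, and the halving behavior discussed next, can be reproduced with a short sketch (Python rather than Matlab; euler_y1 is an illustrative name):

```python
import math

def euler_y1(h):
    # Euler's method for dy/dx = x*y, y(0) = 1, stepping to x = 1
    n = round(1 / h)
    x, y = 0.0, 1.0
    for _ in range(n):
        y = y + h * x * y
        x = x + h
    return y

true_value = math.exp(0.5)        # exact solution y = e^{x^2/2} at x = 1
e1 = true_value - euler_y1(0.2)
e2 = true_value - euler_y1(0.1)
assert abs(e1 - 0.1895) < 1e-3    # matches the first row of the table
assert 1.7 < e1 / e2 < 2.3        # halving h roughly halves the error
```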

Euler’s Method: Error

We notice from this example that cutting the step size in half seems to cut the error and relative error in half. This suggests the following:

The absolute error in Euler’s method is proportional to the step size.

There are two sources of error for Euler’s method (not counting numerical errors due to machine rounding):

- the error in approximating the curve with a tangent line, and
- using the approximate value yn−1 to get the slope at the next step.

Euler’s Method: Error

For numerical schemes of this sort, we often refer to the order of the scheme. If the error satisfies

Absolute Error ≈ C h^p

where C is some constant, then the order of the scheme is p.

Euler’s method is an order 1 scheme.

Euler’s Method

Euler’s method is simple and intuitive. However, it is rarely used in practice because of its error. There are more widely used schemes. Other methods tend to use multiple tangent lines for each iteration and are sometimes referred to as multi-stage methods. The two most common are

- Improved Euler⁶, which is order 2, and
- Runge-Kutta⁷, which is order 4.

⁶ a.k.a. RK2
⁷ a.k.a. RK4

Improved Euler’s Method: dy/dx = f(x, y), y(x0) = y0

Euler’s method approximated y1 using the slope m0 = f(x0, y0) for the tangent line. An initial improvement on the method can be made by using this as an intermediate point to give a second approximation to the slope. That is, let

m0 = f(x0, y0)

as before, and now let

m̂0 = f(x1, y0 + m0 h).

Then we take y1 to be the point on the line that has the average of these two slopes:

y1 = y0 + (1/2)(m0 + m̂0) h.

Other methods will use the weighted averages of 3, 4, or more tangent lines.
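The improved Euler update can be sketched as follows (Python; improved_euler is an illustrative name for what is often called Heun's method):

```python
import math

def improved_euler(f, x0, y0, h, steps):
    # Improved Euler (RK2): average the slope at the current point with
    # the slope at the Euler-predicted next point
    x, y = x0, y0
    for _ in range(steps):
        m = f(x, y)
        m_hat = f(x + h, y + h * m)
        y = y + 0.5 * h * (m + m_hat)
        x = x + h
    return y

# dy/dx = x*y, y(0) = 1 has exact value y(1) = e^{1/2}
y_rk2 = improved_euler(lambda x, y: x * y, 0.0, 1.0, 0.2, 5)
assert abs(y_rk2 - math.exp(0.5)) < 0.01   # far closer than plain Euler's 0.19
```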

Existence and Uniqueness

Two important questions we can always pose (and sometimes answer) are

(1) Does an IVP have a solution? (existence), and
(2) If it does, is there just one? (uniqueness)

Hopefully it’s obvious that we can’t solve

(dy/dx)² + 1 = −y².

(Not if we are only interested in real valued solutions.)

Uniqueness

Consider the IVP

dy/dx = x√y, y(0) = 0

Exercise: Verify that y = x⁴/16 is a solution of the IVP.

Can you find a second solution of the IVP by inspection (i.e., clever guessing)?
Section 3: First Order Equations: Separation of Variables


The simplest type of equation we could encounter would be of the form

dy/dx = g(x).

If G(x) is any antiderivative of g(x), the solutions to this ODE would be

y = G(x) + c,

obtained by simply integrating.


Separable Equations

Definition: The first order equation y′ = f(x, y) is said to be separable if the right side has the form

f(x, y) = g(x)h(y).

That is, a separable equation is one that has the form

dy/dx = g(x)h(y).

Separable -vs- Nonseparable

dy/dx = x³ y

is separable, as the right side is the product of g(x) = x³ and h(y) = y.

dy/dx = 2x + y

is not separable. You can try, but it is not possible to write 2x + y as the product of a function of x alone and a function of y alone.

Solving Separable Equations


Let’s assume that it’s safe to divide by h(y), and let’s set p(y) = 1/h(y). We solve (usually finding an implicit solution) by separating the variables:

dy/dx = g(x)h(y)

Note that

(1/h(y)) dy/dx = g(x) =⇒ p(y) (dy/dx) dx = g(x) dx.

Since (dy/dx) dx = dy, we integrate both sides:

∫ p(y) dy = ∫ g(x) dx =⇒ P(y) = G(x) + c,

where P and G are any antiderivatives of p and g, respectively. The expression

P(y) = G(x) + c

defines a one parameter family of solutions implicitly.

Caveat regarding division by h(y)

Recall that the IVP

dy/dx = x√y, y(0) = 0

has two solutions:

y = x⁴/16 and y = 0.

Exercise: Solve this by separation of variables. Note that only one of these solutions is recoverable. Why is the second one lost?

Solutions Defined by Integrals


Recall (Fundamental Theorem of Calculus) that for g continuous on an interval containing x0 and x,

d/dx ∫_{x0}^{x} g(t) dt = g(x) and ∫_{x0}^{x} (dy/dt) dt = y(x) − y(x0).

Assuming g is continuous at x0, we can use this to solve

dy/dx = g(x), y(x0) = y0,

expressing the solution in terms of an integral:

y(x) = y0 + ∫_{x0}^{x} g(t) dt.

Verify that this is a solution.
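One way to carry out this verification is numerically, for a concrete choice of g. The sketch below (Python; the choices g(x) = cos x, x0 = 0, y0 = 2 are ours for illustration) approximates the integral with a midpoint rule and confirms that the formula reproduces the known solution 2 + sin x:

```python
import math

# Concrete sketch: g(x) = cos(x), x0 = 0, y0 = 2, so the formula gives
# y(x) = 2 + integral_0^x cos(t) dt, which should equal 2 + sin(x).
g = lambda t: math.cos(t)

def y(x, n=10000):
    # midpoint-rule approximation of y0 + integral of g from x0 to x
    h = x / n
    return 2 + h * sum(g((k + 0.5) * h) for k in range(n))

assert abs(y(0.0) - 2.0) < 1e-12                   # y(x0) = y0
assert abs(y(1.0) - (2 + math.sin(1.0))) < 1e-6    # matches 2 + sin(x)
```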


Section 4: First Order Equations: Linear & Special


A first order linear equation has the form

a1(x) dy/dx + a0(x) y = g(x).

If g(x) = 0, the equation is called homogeneous. Otherwise, it is called nonhomogeneous.

Provided a1(x) ≠ 0 on the interval I of definition of a solution, we can write the standard form of the equation

dy/dx + P(x) y = f(x).

We’ll be interested in equations (and intervals I) for which P and f are continuous on I.

Solutions (the General Solution)

dy/dx + P(x) y = f(x).

It turns out the solution will always have the basic form y = yc + yp, where

- yc is called the complementary solution; it solves the associated homogeneous equation dy/dx + P(x) y = 0, and
- yp is called the particular solution; it is heavily influenced by the function f(x).

The cool thing is that our solution method will get both parts in one process. We won’t get this benefit with higher order equations!

Motivating Example

x² dy/dx + 2xy = e^x

Note that the left side is a product rule:

d/dx [x² y] = x² dy/dx + 2xy.

Hence the equation reduces to

d/dx [x² y] = e^x =⇒ ∫ d/dx [x² y] dx = ∫ e^x dx.

Integrate and divide by x² to obtain

y = e^x/x² + c/x²

Derivation of Solution via Integrating Factor


We seek to solve the equation in standard form

dy/dx + P(x) y = f(x)

Based on the previous example, we seek a function µ(x) such that when we multiply the above equation by this new function, the left side collapses as a product rule. We wish to have

µ dy/dx + µP y = µf =⇒ d/dx [µ y] = µf.

Matching the left sides,

µ dy/dx + µP y = d/dx [µ y] = µ dy/dx + y dµ/dx,

which requires

dµ/dx = Pµ.


dµ/dx = Pµ.

We’ve obtained a separable equation for the sought-after function µ. Using separation of variables, we find that

µ(x) = exp(∫ P(x) dx).

This function is called an integrating factor.

General Solution of First Order Linear ODE

- Put the equation in standard form y′ + P(x) y = f(x), and correctly identify the function P(x).
- Obtain the integrating factor µ(x) = exp(∫ P(x) dx).
- Multiply both sides of the equation (in standard form) by the integrating factor µ. The left hand side will always collapse into the derivative of a product:

d/dx [µ(x) y] = µ(x) f(x).

- Integrate both sides, and solve for y:

y(x) = (1/µ(x)) ∫ µ(x) f(x) dx = e^{−∫ P(x) dx} ( ∫ e^{∫ P(x) dx} f(x) dx + C )

Example: Find the general solution.

x dy/dx − y = 2x²

In standard form the equation is

dy/dx − (1/x) y = 2x, so that P(x) = −1/x.

Then⁸

µ(x) = exp(∫ −(1/x) dx) = exp(−ln |x|) = x⁻¹

The equation becomes

d/dx [x⁻¹ y] = x⁻¹(2x) = 2

⁸ We will take the constant of integration to be zero when finding µ.

d/dx [x⁻¹ y] = x⁻¹(2x) = 2

Next we integrate both sides (µ makes this possible, hence the name integrating factor) and solve for our solution y:

∫ d/dx [x⁻¹ y] dx = ∫ 2 dx =⇒ x⁻¹ y = 2x + C,

and finally

y = 2x² + Cx.

Note that this solution has the form y = yp + yc, where yc = Cx and yp = 2x². The complementary part comes from the constant of integration and is independent of the right side of the ODE, 2x. The particular part comes from the right hand side integration of x⁻¹(2x).
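A quick numerical check of this general solution against the example equation x y′ − y = 2x² (a Python sketch; names are illustrative):

```python
def y(x, C):
    # general solution found above
    return 2 * x**2 + C * x

# check x*y' - y = 2x^2 for a few C values; y' by centered difference
h = 1e-5
for C in [-3.0, 0.0, 4.5]:
    for x in [0.5, 1.0, 2.0]:
        yp = (y(x + h, C) - y(x - h, C)) / (2 * h)
        assert abs(x * yp - y(x, C) - 2 * x**2) < 1e-4
```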

Steady and Transient States

For some linear equations, the term yc decays as x (or t) grows. For example,

dy/dx + y = 3x e^{−x} has solution y = (3/2)x² e^{−x} + C e^{−x}.

Here, yp = (3/2)x² e^{−x} and yc = C e^{−x}.

Such a decaying complementary solution is called a transient state.

The corresponding particular solution is called a steady state.

Some Special First Order Equations

Bernoulli Equations

Suppose P(x) and f(x) are continuous on some interval (a, b), and n is a real number different from 0 or 1 (not necessarily an integer). An equation of the form

dy/dx + P(x) y = f(x) y^n

is called a Bernoulli equation.

Observation: This equation has the flavor of a linear ODE, but since n ≠ 0, 1, it is necessarily nonlinear. So our previous approach involving an integrating factor does not apply directly. Fortunately, we can use a change of variables to obtain a related linear equation.

Solving the Bernoulli Equation


dy/dx + P(x) y = f(x) y^n    (3)

Let u = y^{1−n}. Then

du/dx = (1 − n) y^{−n} dy/dx =⇒ dy/dx = (y^n/(1 − n)) du/dx.

Substituting into (3) and dividing through by y^n/(1 − n),

(y^n/(1 − n)) du/dx + P(x) y = f(x) y^n =⇒ du/dx + (1 − n) P(x) y^{1−n} = (1 − n) f(x).

Given our choice of u, this is the first order linear equation

du/dx + P1(x) u = f1(x), where P1 = (1 − n)P and f1 = (1 − n)f.

Example
Solve the initial value problem y′ − y = −e^{2x} y³, subject to y(0) = 1.

Here n = 3, so we set u = y^{1−3} = y^{−2}. Observe then that

du/dx = −2 y^{−3} dy/dx, so dy/dx = −(y³/2) du/dx.

Upon substitution,

−(y³/2) du/dx − y = −e^{2x} y³.

Multiplying through by −2y^{−3} gives

du/dx + 2 y^{−2} = 2 e^{2x}.

As expected, the second term on the left is (1 − n)P(x) u, here 2u.

Example Continued
Now we solve the first order linear equation for u using an integrating factor. Omitting the details, we obtain

u(x) = (1/2) e^{2x} + C e^{−2x}.

Of course, we need to remember that our goal is to solve the original equation for y. But the relationship between y and u is known. From u = y^{−2}, we know that y = u^{−1/2}. Hence

y = 1/√((1/2) e^{2x} + C e^{−2x}).

Applying y(0) = 1, we find that C = 1/2, giving the solution to the IVP

y = √2 / √(e^{2x} + e^{−2x}).

Exact Equations

We considered first order equations of the form

M(x, y) dx + N(x, y) dy = 0.    (4)

The left side is called a differential form. We will assume here that M and N are continuous on some (shared) region in the plane.

Definition: The equation (4) is called an exact equation on some rectangle R if there exists a function F(x, y) such that

∂F/∂x = M(x, y) and ∂F/∂y = N(x, y)

for every (x, y) in R.

Exact Equation Solution

If M(x, y) dx + N(x, y) dy = 0 happens to be exact, then it is equivalent to

(∂F/∂x) dx + (∂F/∂y) dy = 0.

This implies that the function F is constant along solution curves, and solutions to the DE are given by the relation

F(x, y) = C.

Recognizing Exactness

There is a theorem from calculus that ensures that if a function F has first partials on a domain, and if those partials are continuous, then the second mixed partials are equal. That is,

∂²F/∂y∂x = ∂²F/∂x∂y.

If it is true that

∂F/∂x = M and ∂F/∂y = N,

this provides a condition for exactness, namely

∂M/∂y = ∂N/∂x.
Exact Equations

Theorem: Let $M$ and $N$ be continuous on some rectangle $R$ in the plane. Then the equation

$$M(x, y)\,dx + N(x, y)\,dy = 0$$

is exact if and only if

$$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}.$$
Example
Show that the equation is exact and obtain a family of solutions.

$$(2xy - \sec^2 x)\,dx + (x^2 + 2y)\,dy = 0$$

First note that for $M = 2xy - \sec^2 x$ and $N = x^2 + 2y$ we have

$$\frac{\partial M}{\partial y} = 2x = \frac{\partial N}{\partial x}.$$

Hence the equation is exact. We obtain the solutions (implicitly) by finding a function $F$ such that $\partial F/\partial x = M$ and $\partial F/\partial y = N$. Using the first relation we get⁹

$$F(x, y) = \int M(x, y)\,dx = \int (2xy - \sec^2 x)\,dx = x^2y - \tan x + g(y).$$

⁹Holding $y$ constant while integrating with respect to $x$ means that the constant of integration may well depend on $y$.
Example Continued

We must find $g$ to complete our solution. We know that

$$F(x, y) = x^2y - \tan x + g(y) \quad\text{and}\quad \frac{\partial F}{\partial y} = N(x, y) = x^2 + 2y.$$

Differentiating the expression on the left with respect to $y$ and equating,

$$\frac{\partial F}{\partial y} = x^2 + g'(y) = x^2 + 2y,$$

from which it follows that $g'(y) = 2y$. An antiderivative is given by $g(y) = y^2$. Since our solutions are $F = C$, we arrive at the family of solutions

$$x^2y - \tan x + y^2 = C.$$
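A finite-difference check of this example (my addition, not from the slides): the candidate potential $F(x, y) = x^2y - \tan x + y^2$ should satisfy $\partial F/\partial x = M$ and $\partial F/\partial y = N$ at any sample point.

```python
import math

def F(x, y):
    # Candidate potential for (2xy - sec^2 x) dx + (x^2 + 2y) dy = 0
    return x**2 * y - math.tan(x) + y**2

def M(x, y):
    return 2 * x * y - 1 / math.cos(x) ** 2

def N(x, y):
    return x**2 + 2 * y

def partials(f, x, y, h=1e-6):
    # Central-difference approximations of dF/dx and dF/dy
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

fx, fy = partials(F, 0.5, 1.2)
print(abs(fx - M(0.5, 1.2)), abs(fy - N(0.5, 1.2)))  # both tiny
```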
Special Integrating Factors

Suppose that the equation $M\,dx + N\,dy = 0$ is not exact. Clearly our approach to exact equations would be fruitless, as there is no such function $F$ to find. It may still be possible to solve the equation if we can find a way to morph it into an exact equation. As an example, consider the DE

$$(2y - 6x)\,dx + (3x - 4x^2y^{-1})\,dy = 0.$$

Note that this equation is NOT exact. In particular,

$$\frac{\partial M}{\partial y} = 2 \neq 3 - 8xy^{-1} = \frac{\partial N}{\partial x}.$$
Special Integrating Factors

But note what happens when we multiply our equation by the function $\mu(x, y) = xy^2$:

$$xy^2(2y - 6x)\,dx + xy^2(3x - 4x^2y^{-1})\,dy = 0 \implies (2xy^3 - 6x^2y^2)\,dx + (3x^2y^2 - 4x^3y)\,dy = 0.$$

Now we see that

$$\frac{\partial(\mu M)}{\partial y} = 6xy^2 - 12x^2y = \frac{\partial(\mu N)}{\partial x}.$$

The new equation¹⁰ IS exact!

Of course this raises the question: How would we know to use $\mu = xy^2$?

¹⁰The solution sets for these equations are almost the same. However, it is possible to introduce or lose solutions employing this approach. We won't worry about this here.
Special Integrating Factors

The function $\mu$ is called a special integrating factor. Finding one (assuming one even exists) may require ingenuity and likely a bit of luck. However, there are certain cases we can look for and perhaps use them to solve the occasional equation. A useful method is to look for $\mu$ of a certain form (usually $\mu = x^ny^m$ for some powers $n$ and $m$). We will restrict ourselves to two possible cases:

There is an integrating factor $\mu = \mu(x)$ depending only on $x$, or there is an integrating factor $\mu = \mu(y)$ depending only on $y$.
Special Integrating Factor $\mu = \mu(x)$

Suppose that

$$M\,dx + N\,dy = 0$$

is NOT exact, but that

$$\mu M\,dx + \mu N\,dy = 0$$

IS exact, where $\mu = \mu(x)$ does not depend on $y$. Then

$$\frac{\partial(\mu(x)M)}{\partial y} = \frac{\partial(\mu(x)N)}{\partial x}.$$

Let's use the product rule on the right side.
Special Integrating Factor $\mu = \mu(x)$

Since $\mu$ is constant in $y$,

$$\mu\frac{\partial M}{\partial y} = \frac{d\mu}{dx}N + \mu\frac{\partial N}{\partial x}. \quad (5)$$

Rearranging (5), we get both a condition for the existence of such a $\mu$ as well as an equation for it. The function $\mu$ must satisfy the separable equation

$$\frac{d\mu}{dx} = \mu\left(\frac{\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}}{N}\right). \quad (6)$$

Note that this equation is solvable, insofar as $\mu$ depends only on $x$, only if

$$\frac{\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}}{N}$$

depends only on $x$!
Special Integrating Factor

When solvable, equation (6) has solution

$$\mu = \exp\left(\int \frac{\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}}{N}\,dx\right).$$

A similar manipulation assuming a function $\mu = \mu(y)$ depending only on $y$ leads to the existence condition requiring that

$$\frac{\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}}{M}$$

depend only on $y$.
Special Integrating Factor

$$M\,dx + N\,dy = 0 \quad (7)$$

Theorem: If $(\partial M/\partial y - \partial N/\partial x)/N$ is continuous and depends only on $x$, then

$$\mu = \exp\left(\int \frac{\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}}{N}\,dx\right)$$

is a special integrating factor for (7). If $(\partial N/\partial x - \partial M/\partial y)/M$ is continuous and depends only on $y$, then

$$\mu = \exp\left(\int \frac{\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}}{M}\,dy\right)$$

is a special integrating factor for (7).
Example
Solve the equation $2xy\,dx + (y^2 - 3x^2)\,dy = 0$.

Note that $\partial M/\partial y = 2x$ and $\partial N/\partial x = -6x$. The equation is not exact. Looking to see if there may be a special integrating factor, note that

$$\frac{\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}}{N} = \frac{8x}{y^2 - 3x^2}, \qquad \frac{\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}}{M} = \frac{-8x}{2xy} = \frac{-4}{y}.$$

The first does not depend on $x$ alone. But the second does depend on $y$ alone. So there is a special integrating factor

$$\mu = \exp\left(\int -\frac{4}{y}\,dy\right) = y^{-4}.$$
Example Continued
The new equation obtained by multiplying through by $\mu$ is

$$2xy^{-3}\,dx + (y^{-2} - 3x^2y^{-4})\,dy = 0.$$

Note that

$$\frac{\partial}{\partial y}\left(2xy^{-3}\right) = -6xy^{-4} = \frac{\partial}{\partial x}\left(y^{-2} - 3x^2y^{-4}\right),$$

so this new equation is exact. Solving for $F$,

$$F(x, y) = \int 2xy^{-3}\,dx = x^2y^{-3} + g(y),$$

and $g'(y) = y^{-2}$ so that $g(y) = -y^{-1}$. The solutions are given by

$$\frac{x^2}{y^3} - \frac{1}{y} = C.$$
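A numerical check of the integrating factor (my addition, not from the slides): after multiplying $2xy\,dx + (y^2 - 3x^2)\,dy = 0$ by $\mu = y^{-4}$, the mixed-partials test should pass at any point with $y \neq 0$.

```python
def muM(x, y):
    # mu*M with mu = y^(-4) applied to M = 2xy
    return 2 * x * y ** -3

def muN(x, y):
    # mu*N with mu = y^(-4) applied to N = y^2 - 3x^2
    return y ** -2 - 3 * x ** 2 * y ** -4

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

# Exactness check at a sample point away from y = 0
print(abs(d_dy(muM, 1.3, 0.7) - d_dx(muN, 1.3, 0.7)))  # tiny
```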
Section 5: First Order Equations: Models and Applications

Section 5: First Order Equations: Models and Applications

Figure: Mathematical Models give Rise to Differential Equations

Population Dynamics
A population of dwarf rabbits grows at a rate proportional to the current
population. In 2011, there were 58 rabbits. In 2012, the population was
up to 89 rabbits. Estimate the number of rabbits expected in the
population in 2021.
We can translate this into a mathematical statement then hope to solve
the problem and answer the question. Letting the population of rabbits
(say population density) at time t be given by P(t), to say that the rate
of change is proportional to the population is to say

$$\frac{dP}{dt} = kP(t) \quad\text{for some constant } k.$$
This is a differential equation! To answer the question, we will require
the value of k as well as some initial information—i.e. we will need an
IVP.
Example Continued...¹¹
We can choose units for time $t$. Based on the statement, taking $t$ in years is well advised. Letting $t = 0$ in year 2011, the second and third sentences translate as

$$P(0) = 58, \quad\text{and}\quad P(1) = 89.$$

Without knowing $k$, we can solve the IVP

$$\frac{dP}{dt} = kP, \quad P(0) = 58$$

by separation of variables to obtain

$$P(t) = 58e^{kt}.$$

¹¹Taking $P$ as the population density, i.e. number of rabbits per unit habitat, allows us to consider non-integer $P$ values. Thus fractional and even irrational $P$ values are reasonable and not necessarily gruesome.
Example Continued...

To evaluate the population function (for a real number), we still need to know $k$. We have the additional information $P(1) = 89$. Note that this gives

$$P(1) = 89 = 58e^{1\cdot k} \implies k = \ln\left(\frac{89}{58}\right).$$

Hence the function

$$P(t) = 58e^{t\ln(89/58)}.$$

Finally, the population in 2021 is approximately

$$P(10) = 58e^{10\ln(89/58)} \approx 4200.$$
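The arithmetic is easy to reproduce (a sketch I added, not part of the slides): fit $k$ from the two data points and evaluate the model in 2021.

```python
import math

# Rabbit population model P(t) = 58 e^(k t), with k fixed by P(1) = 89
k = math.log(89 / 58)

def P(t):
    return 58 * math.exp(k * t)

print(P(0), P(1))    # reproduces the data: 58 rabbits in 2011, 89 in 2012
print(round(P(10)))  # estimate for 2021, roughly 4200
```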


Exponential Growth or Decay

If a quantity $P$ changes continuously at a rate proportional to its current level, then it will be governed by a differential equation of the form

$$\frac{dP}{dt} = kP, \quad\text{i.e.}\quad \frac{dP}{dt} - kP = 0.$$

Note that this equation is both separable and first order linear. If $k > 0$, $P$ experiences exponential growth. If $k < 0$, then $P$ experiences exponential decay.
Series Circuits: RC-circuit

Figure: Series Circuit with Applied Electromotive force $E$, Resistance $R$, and Capacitance $C$. The charge of the capacitor is $q$ and the current $i = \frac{dq}{dt}$.

Series Circuits: LR-circuit

Figure: Series Circuit with Applied Electromotive force $E$, Inductance $L$, and Resistance $R$. The current is $i$.
Measurable Quantities:

Resistance $R$ in ohms (Ω), Applied voltage $E$ in volts (V),
Inductance $L$ in henries (h), Charge $q$ in coulombs (C),
Capacitance $C$ in farads (f), Current $i$ in amperes (A)

Current is the rate of change of charge with respect to time: $i = \frac{dq}{dt}$.

Component | Potential Drop
Inductor | $L\frac{di}{dt}$
Resistor | $Ri$, i.e. $R\frac{dq}{dt}$
Capacitor | $\frac{1}{C}q$
Kirchhoff's Law

The sum of the voltages around a closed circuit is zero.

In other words, the sum of potential drops across the passive components is equal to the applied electromotive force. For an RC series circuit, this tells us that

drop across resistor + drop across capacitor = applied force

$$R\frac{dq}{dt} + \frac{1}{C}q = E(t)$$
Series Circuit Equations

For an LR series circuit, we have

drop across inductor + drop across resistor = applied force

$$L\frac{di}{dt} + Ri = E(t)$$

If the initial charge (RC) or initial current (LR) is known, we can solve the corresponding IVP.

(Note: We will consider LRC series circuits later as these give rise to second order ODEs.)
Example
A 200 volt battery is applied to an RC series circuit with resistance 1000 Ω and capacitance $5\times 10^{-6}$ f. Find the charge $q(t)$ on the capacitor if $i(0) = 0.4$ A. Determine the charge as $t \to \infty$.

Using the equation for an RC circuit $Rq' + (1/C)q = E$, we have

$$1000\frac{dq}{dt} + \frac{1}{5\cdot 10^{-6}}q = 200, \quad q'(0) = 0.4.$$

(Note that this is a slightly irregular IVP since the condition is given on $i = q'$.) In standard form the equation is $q' + 200q = 1/5$ with integrating factor $\mu = \exp(200t)$. The general solution is

$$q(t) = \frac{1}{1000} + Ke^{-200t}.$$
Example Continued...

Applying the initial condition, we obtain the charge

$$q(t) = \frac{1}{1000} - \frac{e^{-200t}}{500}.$$

The long time charge on the capacitor is therefore

$$\lim_{t\to\infty} q(t) = \frac{1}{1000}.$$
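The circuit solution can be double checked numerically (a sketch I added, not from the slides): the current $i = q'$ should start at 0.4 A, and the left side $Rq' + q/C$ should equal the 200 V source.

```python
import math

R, C = 1000.0, 5e-6

def q(t):
    # Proposed charge: solution of 1000 q' + q/C = 200 with q'(0) = 0.4
    return 1 / 1000 - math.exp(-200 * t) / 500

def i(t, h=1e-8):
    # Current as the numerical derivative of the charge
    return (q(t + h) - q(t - h)) / (2 * h)

print(i(0.0))                       # initial current, close to 0.4 A
print(R * i(0.01) + q(0.01) / C)    # left side of the ODE, close to 200
```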
A Classic Mixing Problem

A tank originally contains 500 gallons of pure water. Brine containing 2 pounds of salt per gallon is pumped in at a rate of 5 gal/min. The well mixed solution is pumped out at the same rate. Find the amount of salt $A(t)$ in pounds at the time $t$. Find the concentration of the mixture in the tank at $t = 5$ minutes.

In order to answer such a question, we need to convert the problem statement into a mathematical one.
A Classic Mixing Problem

Figure: Spatially uniform composite fluids (e.g. salt & water, gas & ethanol) being mixed. Concentrations of substances change in time. The "well mixed" condition ensures that concentrations do not change with space.
Building an Equation

The rate of change of the amount of salt is

$$\frac{dA}{dt} = \left(\text{input rate of salt}\right) - \left(\text{output rate of salt}\right).$$

The input rate of salt is

fluid rate in $\cdot$ concentration of inflow $= r_i \cdot c_i$.

The output rate of salt is

fluid rate out $\cdot$ concentration of outflow $= r_o \cdot c_o$.
Building an Equation

The concentration of the outflowing fluid is

$$c_o = \frac{\text{total salt}}{\text{total volume}} = \frac{A(t)}{V(t)} = \frac{A(t)}{V(0) + (r_i - r_o)t}.$$

Hence

$$\frac{dA}{dt} = r_i \cdot c_i - r_o\frac{A}{V}.$$

This equation is first order linear. Note that the volume

$$V(t) = \text{initial volume} + \text{rate in} \times \text{time} - \text{rate out} \times \text{time}.$$

If $r_i = r_o$, then $V(t) = V(0)$, a constant.
Solve the Mixing Problem

A tank originally contains 500 gallons of pure water. Brine containing 2 pounds of salt per gallon is pumped in at a rate of 5 gal/min. The well mixed solution is pumped out at the same rate. Find the amount of salt $A(t)$ in pounds at the time $t$. Find the concentration of the mixture in the tank at $t = 5$ minutes.

We can take $A$ in pounds, $V$ in gallons, and $t$ in minutes. Here, $V(0) = 500$ gal, $r_i = 5$ gal/min, $c_i = 2$ lb/gal, and $r_o = 5$ gal/min. Since the incoming and outgoing rates are the same, the volume $V(t) = 500$ gallons for all $t$. This gives an outgoing concentration of

$$c_o = \frac{A(t)}{V(t)} = \frac{A(t)}{500 + 5t - 5t} = \frac{A(t)}{500}.$$

Since the tank originally contains pure water (no salt), we have $A(0) = 0$.
Mixing Example
Our IVP is

$$\frac{dA}{dt} = 5\,\text{gal/min}\cdot 2\,\text{lb/gal} - 5\,\text{gal/min}\cdot\frac{A}{500}\,\text{lb/gal}, \quad A(0) = 0,$$

that is,

$$\frac{dA}{dt} + \frac{1}{100}A = 10, \quad A(0) = 0.$$

The IVP has solution

$$A(t) = 1000\left(1 - e^{-t/100}\right).$$

The concentration $c$ of salt in the tank after five minutes is therefore

$$c = \frac{A(5)}{V(5)}\,\frac{\text{lb}}{\text{gal}} = \frac{1000(1 - e^{-5/100})}{500}\,\text{lb/gal} \approx 0.098\,\text{lb/gal}.$$
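A numerical check of the mixing solution (my addition, not part of the slides), including the concentration at $t = 5$, which is $2(1 - e^{-0.05}) \approx 0.098$ lb/gal:

```python
import math

def A(t):
    # Salt in the tank (lb): solution of A' + A/100 = 10, A(0) = 0
    return 1000 * (1 - math.exp(-t / 100))

def residual(t, h=1e-6):
    # ODE residual using a central difference for A'(t)
    dA = (A(t + h) - A(t - h)) / (2 * h)
    return dA + A(t) / 100 - 10

print(A(0))                 # 0.0: pure water initially
print(A(5) / 500)           # concentration at t = 5, about 0.0975 lb/gal
print(abs(residual(5.0)))   # near zero
```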
A Nonlinear Modeling Problem

A population $P(t)$ of tilapia changes at a rate jointly proportional to the current population and the difference between the constant carrying capacity¹² $M$ of the environment and the current population. Determine the differential equation satisfied by $P$.

To say that $P$ has a rate of change jointly proportional to $P$ and the difference between $P$ and $M$ is

$$\frac{dP}{dt} \propto P(M - P), \quad\text{i.e.}\quad \frac{dP}{dt} = kP(M - P)$$

for some constant of proportionality $k$.

¹²The carrying capacity is the maximum number of individuals that the environment can support due to limitation of space and resources.
Logistic Differential Equation

The equation

$$\frac{dP}{dt} = kP(M - P), \quad k, M > 0$$

is called a logistic growth equation.

Solve this equation¹³ and show that for any $P(0) \neq 0$, $P \to M$ as $t \to \infty$.

The equation is separable. The general solution to the DE is

$$P(t) = \frac{MCe^{Mkt}}{1 + Ce^{Mkt}}.$$

¹³The partial fraction decomposition

$$\frac{1}{P(M - P)} = \frac{1}{M}\left(\frac{1}{P} + \frac{1}{M - P}\right)$$

is useful.
Logistic Growth: $P'(t) = kP(M - P)$, $P(0) = P_0$

Applying the condition $P(0) = P_0$ to find the constant $C$, we obtain the solution to the IVP

$$P(t) = \frac{P_0Me^{Mkt}}{M - P_0 + P_0e^{Mkt}} = \frac{P_0M}{(M - P_0)e^{-Mkt} + P_0}.$$

If $P_0 = 0$, then $P(t) = 0$ for all $t$. Otherwise, we can take the limit as $t \to \infty$ to obtain

$$\lim_{t\to\infty} P(t) = \frac{P_0M}{0 + P_0} = M,$$

as expected.
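The logistic solution and its limiting behavior can be spot checked numerically. This sketch (my addition; the parameter values $k$, $M$, $P_0$ are arbitrary) verifies the ODE residual and the approach to the carrying capacity:

```python
import math

k, M, P0 = 0.02, 150.0, 10.0  # sample parameter values (arbitrary)

def P(t):
    # Logistic IVP solution P(t) = P0*M / ((M - P0) e^(-Mkt) + P0)
    return P0 * M / ((M - P0) * math.exp(-M * k * t) + P0)

def residual(t, h=1e-6):
    # Central-difference check of P' = k P (M - P)
    dP = (P(t + h) - P(t - h)) / (2 * h)
    return dP - k * P(t) * (M - P(t))

print(P(0))                                           # equals P0
print(max(abs(residual(t)) for t in (0.5, 1, 2, 4)))  # near zero
print(P(50))                                          # approaching M = 150
```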
Section 6: Linear Equations: Theory and Terminology

Section 6: Linear Equations: Theory and Terminology

Recall that an nth order linear IVP consists of an equation

$$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1(x)\frac{dy}{dx} + a_0(x)y = g(x)$$

to solve subject to conditions

$$y(x_0) = y_0, \quad y'(x_0) = y_1, \quad \ldots, \quad y^{(n-1)}(x_0) = y_{n-1}.$$

The problem is called homogeneous if $g(x) \equiv 0$. Otherwise it is called nonhomogeneous.
Theorem: Existence & Uniqueness

Theorem: If $a_0, \ldots, a_n$ and $g$ are continuous on an interval $I$, $a_n(x) \neq 0$ for each $x$ in $I$, and $x_0$ is any point in $I$, then for any choice of constants $y_0, \ldots, y_{n-1}$, the IVP has a unique solution $y(x)$ on $I$.

Put differently, we're guaranteed to have a solution exist, and it is the only one there is!
Example

Use only a little clever intuition to solve the IVP

$$y'' + 3y' - 2y = 0, \quad y(0) = 0, \quad y'(0) = 0$$

Exercise left to the reader. (Hint: Think simple. While we usually consider the initial conditions at the end, it may help to think about them first.)
A Second Order Linear Boundary Value Problem

consists of a problem

$$a_2(x)\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)y = g(x), \quad a < x < b$$

to solve subject to a pair of conditions¹⁴

$$y(a) = y_0, \quad y(b) = y_1.$$

However similar this is in appearance, the existence and uniqueness result does not hold for this BVP!

¹⁴Other conditions on $y$ and/or $y'$ can be imposed. The key characteristic is that conditions are imposed at both end points $x = a$ and $x = b$.
BVP Example

Consider the three similar BVPs:

(1) $y'' + 4y = 0$, $0 < x < \frac{\pi}{4}$, $y(0) = 0$, $y\!\left(\frac{\pi}{4}\right) = 0$.

(2) $y'' + 4y = 0$, $0 < x < \frac{\pi}{2}$, $y(0) = 0$, $y\!\left(\frac{\pi}{2}\right) = 0$.

(3) $y'' + 4y = 0$, $0 < x < \frac{\pi}{2}$, $y(0) = 0$, $y\!\left(\frac{\pi}{2}\right) = 1$.
BVP Examples

All solutions of the ODE $y'' + 4y = 0$ are of the form

$$y = c_1\cos(2x) + c_2\sin(2x).$$

It can readily be shown (do it!) that

• Problem (1) has exactly one solution, $y = 0$.
• Problem (2) has infinitely many solutions $y = c_2\sin(2x)$, where $c_2$ is any real number.
• And problem (3) has no solutions.
Homogeneous Equations
We'll consider the equation

$$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1(x)\frac{dy}{dx} + a_0(x)y = 0$$

and assume that each $a_i$ is continuous and $a_n$ is never zero on the interval of interest.

Theorem: If $y_1, y_2, \ldots, y_k$ are all solutions of this homogeneous equation on an interval $I$, then the linear combination

$$y(x) = c_1y_1(x) + c_2y_2(x) + \cdots + c_ky_k(x)$$

is also a solution on $I$ for any choice of constants $c_1, \ldots, c_k$.

This is called the principle of superposition.
Corollaries

(i) If $y_1$ solves the homogeneous equation, then any constant multiple $y = cy_1$ is also a solution.

(ii) The solution $y = 0$ (called the trivial solution) is always a solution to a homogeneous equation.

Big Questions:
• Does an equation have any nontrivial solution(s), and
• since $y_1$ and $cy_1$ aren't truly different solutions, what criteria will be used to call solutions distinct?
Linear Dependence

Definition: A set of functions $f_1(x), f_2(x), \ldots, f_n(x)$ is said to be linearly dependent on an interval $I$ if there exists a set of constants $c_1, c_2, \ldots, c_n$, with at least one of them being nonzero, such that

$$c_1f_1(x) + c_2f_2(x) + \cdots + c_nf_n(x) = 0 \quad\text{for all } x \text{ in } I.$$

A set of functions that is not linearly dependent on $I$ is said to be linearly independent on $I$.

Note: It is always possible to form the above linear combination to obtain 0 by simply taking all of the coefficients $c_i = 0$. The question here is whether it is possible to have at least one of the $c$'s be nonzero. If so, the functions are linearly dependent.
Example: A Linearly Dependent Set

The functions $f_1(x) = \sin^2 x$, $f_2(x) = \cos^2 x$, and $f_3(x) = 1$ are linearly dependent on $I = (-\infty, \infty)$.

Making use of our knowledge of Pythagorean identities, we can take $c_1 = c_2 = 1$ and $c_3 = -1$ (this isn't the only choice, but it will do the trick). Note that

$$c_1f_1(x) + c_2f_2(x) + c_3f_3(x) = \sin^2 x + \cos^2 x - 1 = 0 \quad\text{for all real } x.$$
Example: A Linearly Independent Set

The functions $f_1(x) = \sin x$ and $f_2(x) = \cos x$ are linearly independent on $I = (-\infty, \infty)$.

Suppose $c_1f_1(x) + c_2f_2(x) = 0$ for all real $x$. Then the equation must hold when $x = 0$, and it must hold when $x = \pi/2$. Consequently

$$c_1\sin(0) + c_2\cos(0) = 0 \implies c_2 = 0, \quad\text{and}$$
$$c_1\sin(\pi/2) + 0\cdot\cos(\pi/2) = 0 \implies c_1 = 0.$$

We see that the only way for our linear combination to be zero is for both coefficients to be zero. Hence the functions are linearly independent.
Determine if the set is Linearly Dependent or Independent

$$f_1(x) = x^2, \quad f_2(x) = 4x, \quad f_3(x) = x - x^2$$

Looking at the functions, we should suspect that they are linearly dependent. Why? Because $f_3$ is a linear combination of $f_1$ and $f_2$. In fact,

$$f_3(x) = \frac{1}{4}f_2(x) - f_1(x), \quad\text{i.e.}\quad f_1(x) - \frac{1}{4}f_2(x) + f_3(x) = 0$$

for all real $x$. The latter is our linear combination with $c_1 = c_3 = 1$ and $c_2 = -\frac{1}{4}$ (not all zero).

With only two or three functions, we may be able to intuit linear dependence/independence. What follows will provide an alternative method.
Definition of Wronskian

Let $f_1, f_2, \ldots, f_n$ possess at least $n - 1$ continuous derivatives on an interval $I$. The Wronskian of this set of functions is the determinant

$$W(f_1, f_2, \ldots, f_n)(x) = \begin{vmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{vmatrix}.$$

(Note that, in general, this Wronskian is a function of the independent variable $x$.)
Determine the Wronskian of the Functions

$$f_1(x) = \sin x, \quad f_2(x) = \cos x$$

$$W(f_1, f_2)(x) = \begin{vmatrix} \sin x & \cos x \\ \cos x & -\sin x \end{vmatrix} = \sin x(-\sin x) - \cos x(\cos x) = -\sin^2 x - \cos^2 x = -1$$
Determine the Wronskian of the Functions

$$f_1(x) = x^2, \quad f_2(x) = 4x, \quad f_3(x) = x - x^2$$

Expanding along the third row,

$$W(f_1, f_2, f_3)(x) = \begin{vmatrix} x^2 & 4x & x - x^2 \\ 2x & 4 & 1 - 2x \\ 2 & 0 & -2 \end{vmatrix} = 2\begin{vmatrix} 4x & x - x^2 \\ 4 & 1 - 2x \end{vmatrix} + (-2)\begin{vmatrix} x^2 & 4x \\ 2x & 4 \end{vmatrix} = 2(-4x^2) - 2(-4x^2) = 0$$
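A small numerical illustration of both examples (my addition, not from the slides): the $2\times 2$ Wronskian of $\sin x$ and $\cos x$ evaluates to $-1$, while the dependent triple satisfies $f_1 - \frac{1}{4}f_2 + f_3 = 0$ identically.

```python
import math

def wronskian2(f, g, x, h=1e-5):
    # 2x2 Wronskian W(f, g)(x) = f g' - g f', derivatives by central differences
    df = (f(x + h) - f(x - h)) / (2 * h)
    dg = (g(x + h) - g(x - h)) / (2 * h)
    return f(x) * dg - g(x) * df

print(wronskian2(math.sin, math.cos, 0.7))  # about -1, independent of x

f1 = lambda x: x**2
f2 = lambda x: 4 * x
f3 = lambda x: x - x**2
# The dependent combination with c1 = c3 = 1, c2 = -1/4 vanishes identically
print(f1(2.0) - f2(2.0) / 4 + f3(2.0))      # 0.0
```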
Theorem (a test for linear independence)

Let $f_1, f_2, \ldots, f_n$ be $n - 1$ times continuously differentiable on an interval $I$. If there exists $x_0$ in $I$ such that $W(f_1, f_2, \ldots, f_n)(x_0) \neq 0$, then the functions are linearly independent on $I$.

If $y_1, y_2, \ldots, y_n$ are $n$ solutions of the linear homogeneous $n$th order equation on an interval $I$, then the solutions are linearly independent on $I$ if and only if $W(y_1, y_2, \ldots, y_n)(x) \neq 0$ for¹⁵ each $x$ in $I$.

¹⁵For solutions of one linear homogeneous ODE, the Wronskian is either always zero or is never zero.
Determine if the functions are linearly dependent or independent:

$$y_1 = e^x, \quad y_2 = e^{-2x}, \quad I = (-\infty, \infty)$$

Exercise left to the reader. (Hint: Use the Wronskian.)
Fundamental Solution Set

We're still considering the equation

$$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1(x)\frac{dy}{dx} + a_0(x)y = 0$$

with the assumptions $a_n(x) \neq 0$ and the $a_i(x)$ are continuous on $I$.

Definition: A set of functions $y_1, y_2, \ldots, y_n$ is a fundamental solution set of the $n$th order homogeneous equation provided they
(i) are solutions of the equation,
(ii) there are $n$ of them, and
(iii) they are linearly independent.

Theorem: Under the assumed conditions, the equation has a fundamental solution set.
General Solution of nth order Linear Homogeneous Equation

Let $y_1, y_2, \ldots, y_n$ be a fundamental solution set of the $n$th order linear homogeneous equation. Then the general solution of the equation is

$$y(x) = c_1y_1(x) + c_2y_2(x) + \cdots + c_ny_n(x),$$

where $c_1, c_2, \ldots, c_n$ are arbitrary constants.
Example
Verify that $y_1 = e^x$ and $y_2 = e^{-x}$ form a fundamental solution set of the ODE

$$y'' - y = 0 \quad\text{on } (-\infty, \infty),$$

and determine the general solution.

Note that
(i) $y_1'' - y_1 = e^x - e^x = 0$ and $y_2'' - y_2 = e^{-x} - e^{-x} = 0$.
Also note that (ii) we have two solutions for this second order equation. And finally, (iii)

$$W(y_1, y_2)(x) = \begin{vmatrix} e^x & e^{-x} \\ e^x & -e^{-x} \end{vmatrix} = -2 \neq 0.$$

Hence the functions are linearly independent. We have a fundamental solution set, and the general solution is

$$y = c_1e^x + c_2e^{-x}.$$
Nonhomogeneous Equations
Now we will consider the equation

$$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1(x)\frac{dy}{dx} + a_0(x)y = g(x),$$

where $g$ is not the zero function. We'll continue to assume that $a_n$ doesn't vanish and that the $a_i$ and $g$ are continuous.

The associated homogeneous equation is

$$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1(x)\frac{dy}{dx} + a_0(x)y = 0.$$
Theorem: General Solution of Nonhomogeneous Equation

Let $y_p$ be any solution of the nonhomogeneous equation, and let $y_1, y_2, \ldots, y_n$ be any fundamental solution set of the associated homogeneous equation. Then the general solution of the nonhomogeneous equation is

$$y = c_1y_1(x) + c_2y_2(x) + \cdots + c_ny_n(x) + y_p(x),$$

where $c_1, c_2, \ldots, c_n$ are arbitrary constants.

Note the form of the solution $y_c + y_p$!
(complementary plus particular)
Another Superposition Principle (for nonhomogeneous eqns.)

Let $y_{p_1}, y_{p_2}, \ldots, y_{p_k}$ be $k$ particular solutions to the nonhomogeneous linear equations

$$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1(x)\frac{dy}{dx} + a_0(x)y = g_i(x)$$

for $i = 1, \ldots, k$. Assume the domain of definition for all $k$ equations is a common interval $I$. Then

$$y_p = y_{p_1} + y_{p_2} + \cdots + y_{p_k}$$

is a particular solution of the nonhomogeneous equation

$$a_n(x)\frac{d^ny}{dx^n} + \cdots + a_0(x)y = g_1(x) + g_2(x) + \cdots + g_k(x).$$
Example: $x^2y'' - 4xy' + 6y = 36 - 14x$

(a) Verify that $y_{p_1} = 6$ solves $x^2y'' - 4xy' + 6y = 36$.

Exercise left to the reader.

(b) Verify that $y_{p_2} = -7x$ solves $x^2y'' - 4xy' + 6y = -14x$.

Exercise left to the reader.
Example: $x^2y'' - 4xy' + 6y = 36 - 14x$

It can be readily verified that $y_1 = x^2$ and $y_2 = x^3$ form a fundamental solution set of

$$x^2y'' - 4xy' + 6y = 0.$$

Use this along with results (a) and (b) to write the general solution of $x^2y'' - 4xy' + 6y = 36 - 14x$.

By the definition of the general solution along with the principle of superposition, we have

$$y = c_1x^2 + c_2x^3 + 6 - 7x.$$
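Parts (a) and (b), and the fundamental solution claim, can all be checked with one finite-difference sketch (my addition, not from the slides). Here `L` applies the operator $L[y] = x^2y'' - 4xy' + 6y$ numerically:

```python
def L(y, x, h=1e-4):
    # Apply L[y] = x^2 y'' - 4x y' + 6y using central differences
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**2 * d2 - 4 * x * d1 + 6 * y(x)

x = 1.5
print(L(lambda t: 6.0, x))      # part (a): gives 36
print(L(lambda t: -7 * t, x))   # part (b): gives -14x = -21
print(L(lambda t: t**2, x))     # homogeneous solution y1, near 0
print(L(lambda t: t**3, x))     # homogeneous solution y2, near 0
```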
Section 7: Reduction of Order

Section 7: Reduction of Order

We'll focus on second order, linear, homogeneous equations. Recall that such an equation has the form

$$a_2(x)\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)y = 0.$$

Let us assume that $a_2(x) \neq 0$ on the interval of interest. We will write our equation in standard form

$$\frac{d^2y}{dx^2} + P(x)\frac{dy}{dx} + Q(x)y = 0,$$

where $P = a_1/a_2$ and $Q = a_0/a_2$.
$\frac{d^2y}{dx^2} + P(x)\frac{dy}{dx} + Q(x)y = 0$

Recall that every fundamental solution set will consist of two linearly independent solutions $y_1$ and $y_2$, and the general solution will have the form

$$y = c_1y_1(x) + c_2y_2(x).$$

Suppose we happen to know one solution $y_1(x)$. Reduction of order is a method for finding a second linearly independent solution $y_2(x)$ that starts with the assumption that

$$y_2(x) = u(x)y_1(x)$$

for some function $u(x)$. The method involves finding the function $u$.
Reduction of Order
Consider the equation in standard form with one known solution. Determine a second linearly independent solution.

$$\frac{d^2y}{dx^2} + P(x)\frac{dy}{dx} + Q(x)y = 0, \quad y_1(x) \text{ is known.}$$

We begin by assuming that $y_2 = u(x)y_1(x)$ for some yet to be determined function $u(x)$. Note then that

$$y_2' = u'y_1 + uy_1', \quad\text{and}\quad y_2'' = u''y_1 + 2u'y_1' + uy_1''.$$

Since $y_2$ must solve the homogeneous equation, we can substitute the above into the equation to obtain a condition on $u$:

$$y_2'' + Py_2' + Qy_2 = u''y_1 + (2y_1' + Py_1)u' + (y_1'' + Py_1' + Qy_1)u = 0.$$
Reduction of Order
Since $y_1$ is a solution of the homogeneous equation, the last expression in parentheses is zero. So we obtain an equation for $u$:

$$u''y_1 + (2y_1' + Py_1)u' = 0.$$

While this appears as a second order equation, the absence of $u$ makes this equation first order in $u'$ (hence the name reduction of order). If we let $w = u'$, we can express the equation in standard form (assuming $y_1$ doesn't vanish)

$$w' + \left(\frac{2y_1'}{y_1} + P\right)w = 0.$$

This first order linear equation has a solution

$$w = \frac{\exp\left(-\int P(x)\,dx\right)}{y_1^2}.$$
Reduction of Order

With $w$ determined, we integrate once to obtain $u$ and conclude that a second linearly independent solution is

$$y_2(x) = y_1(x)\int \frac{\exp\left(-\int P(x)\,dx\right)}{(y_1(x))^2}\,dx.$$
Reduction of Order Formula

For the second order, homogeneous equation in standard form with one known solution $y_1$, a second linearly independent solution $y_2$ is given by

$$y_2 = y_1(x)\int \frac{e^{-\int P(x)\,dx}}{(y_1(x))^2}\,dx$$
Example
Find the general solution of the ODE given one known solution:

$$x^2y'' - 3xy' + 4y = 0, \quad x > 0, \quad y_1 = x^2$$

In standard form, the equation is

$$y'' - \frac{3}{x}y' + \frac{4}{x^2}y = 0, \quad\text{so that}\quad P(x) = -\frac{3}{x}.$$

Hence

$$-\int P(x)\,dx = -\int\left(-\frac{3}{x}\right)dx = 3\ln(x) = \ln x^3.$$

A second solution is therefore

$$y_2 = x^2\int \frac{\exp(\ln x^3)}{(x^2)^2}\,dx = x^2\int \frac{x^3}{x^4}\,dx = x^2\int \frac{dx}{x} = x^2\ln x.$$

Note that we can take the constant of integration here to be zero (why?). The general solution of the ODE is

$$y = c_1x^2 + c_2x^2\ln x.$$
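As an added check (my addition, not in the original slides), the reduction-of-order output $y_2 = x^2\ln x$ can be tested against $x^2y'' - 3xy' + 4y = 0$ with finite differences:

```python
import math

def y2(x):
    # Candidate second solution produced by reduction of order
    return x**2 * math.log(x)

def residual(x, h=1e-4):
    # x^2 y'' - 3x y' + 4y evaluated with central differences
    d1 = (y2(x + h) - y2(x - h)) / (2 * h)
    d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / h**2
    return x**2 * d2 - 3 * x * d1 + 4 * y2(x)

print(max(abs(residual(x)) for x in (0.5, 1.0, 2.0, 3.0)))  # near zero
```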
Section 8: Homogeneous Equations with Constant Coefficients

Section 8: Homogeneous Equations with Constant Coefficients

We consider a second order, linear, homogeneous equation with constant coefficients

$$a\frac{d^2y}{dx^2} + b\frac{dy}{dx} + cy = 0.$$

Question: What sort of function $y$ could be expected to satisfy

$$y'' = \text{constant}\cdot y' + \text{constant}\cdot y?$$
We look for solutions of the form $y = e^{mx}$ with $m$ constant.

If $y = e^{mx}$, then $y' = me^{mx}$ and $y'' = m^2e^{mx}$. Upon substitution into the DE we get

$$0 = ay'' + by' + cy = am^2e^{mx} + bme^{mx} + ce^{mx} = (am^2 + bm + c)e^{mx}.$$

Noting that the exponential is never zero, the truth of the above equation requires $m$ to satisfy

$$am^2 + bm + c = 0.$$

This quadratic equation is called the characteristic or auxiliary equation. The polynomial on the left side is the characteristic or auxiliary polynomial.
Auxiliary a.k.a. Characteristic Equation

$$am^2 + bm + c = 0$$

There are three cases:

I. $b^2 - 4ac > 0$, and there are two distinct real roots $m_1 \neq m_2$.

II. $b^2 - 4ac = 0$, and there is one repeated real root $m_1 = m_2 = m$.

III. $b^2 - 4ac < 0$, and there are two roots that are complex conjugates $m_{1,2} = \alpha \pm i\beta$.
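The three-way case split can be sketched in code (my addition; the function name `characteristic_roots` is invented for illustration):

```python
import cmath

def characteristic_roots(a, b, c):
    # Roots of a m^2 + b m + c = 0, plus the case label used in these notes
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        case = "I: two distinct real roots"
    elif disc == 0:
        case = "II: one repeated real root"
    else:
        case = "III: complex conjugate roots"
    return r1, r2, case

print(characteristic_roots(1, 4, -5))  # roots 1 and -5, case I
print(characteristic_roots(1, 4, 4))   # repeated root -2, case II
print(characteristic_roots(1, 4, 6))   # -2 ± i*sqrt(2), case III
```

These three sample polynomials are exactly the characteristic equations of the worked examples below.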
Case I: Two distinct real roots

$$ay'' + by' + cy = 0, \quad\text{where}\quad b^2 - 4ac > 0$$

$$y = c_1e^{m_1x} + c_2e^{m_2x} \quad\text{where}\quad m_{1,2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

Show that $y_1 = e^{m_1x}$ and $y_2 = e^{m_2x}$ are linearly independent.

Exercise left to the reader.
Case II: One repeated real root

$$ay'' + by' + cy = 0, \quad\text{where}\quad b^2 - 4ac = 0$$

$$y = c_1e^{mx} + c_2xe^{mx} \quad\text{where}\quad m = \frac{-b}{2a}$$

Use reduction of order to show that if $y_1 = e^{-bx/(2a)}$, then $y_2 = xe^{-bx/(2a)}$.

Exercise left to the reader.
Case III: Complex conjugate roots

$$ay'' + by' + cy = 0, \quad\text{where}\quad b^2 - 4ac < 0$$

$$y = e^{\alpha x}\left(c_1\cos(\beta x) + c_2\sin(\beta x)\right), \quad\text{where the roots}$$

$$m = \alpha \pm i\beta, \quad \alpha = \frac{-b}{2a} \quad\text{and}\quad \beta = \frac{\sqrt{4ac - b^2}}{2a}.$$

The solutions can initially be written as

$$Y_1 = e^{(\alpha + i\beta)x} = e^{\alpha x}e^{i\beta x}, \quad\text{and}\quad Y_2 = e^{(\alpha - i\beta)x} = e^{\alpha x}e^{-i\beta x}.$$

Note: We are not interested in solutions expressed as complex valued functions. So we wish to rewrite the above in terms of real valued functions of our real variable.
Deriving the solutions in Case III

Recall Euler's Formula:

$$e^{i\theta} = \cos\theta + i\sin\theta$$

We can express $Y_1$ and $Y_2$ as

$$Y_1 = e^{\alpha x}e^{i\beta x} = e^{\alpha x}(\cos(\beta x) + i\sin(\beta x)),$$
$$Y_2 = e^{\alpha x}e^{-i\beta x} = e^{\alpha x}(\cos(\beta x) - i\sin(\beta x)).$$

Now apply the principle of superposition and set

$$y_1 = \frac{1}{2}(Y_1 + Y_2) = e^{\alpha x}\cos(\beta x), \quad\text{and}\quad y_2 = \frac{1}{2i}(Y_1 - Y_2) = e^{\alpha x}\sin(\beta x).$$
Examples

Solve the ODE

$$\frac{d^2x}{dt^2} + 4\frac{dx}{dt} + 6x = 0$$

The characteristic equation is

$$m^2 + 4m + 6 = 0 \quad\text{with roots}\quad -2 \pm \sqrt{2}\,i.$$

This is the complex conjugate case with $\alpha = -2$ and $\beta = \sqrt{2}$. The general solution is therefore

$$x = c_1e^{-2t}\cos(\sqrt{2}\,t) + c_2e^{-2t}\sin(\sqrt{2}\,t).$$

Examples

Solve the ODE


d^2x/dt^2 + 4 dx/dt − 5x = 0

The characteristic equation is

m^2 + 4m − 5 = 0 with roots −5, 1.

This is the two distinct real roots case. Hence x1 = e^(−5t), x2 = e^t, and
the general solution is therefore

x(t) = c1 e^(−5t) + c2 e^t.

Examples

Solve the ODE


d^2x/dt^2 + 4 dx/dt + 4x = 0

The characteristic equation is

m^2 + 4m + 4 = 0 with root −2 (repeated).

This is the one repeated real root case. Hence x1 = e^(−2t), x2 = t e^(−2t),
and the general solution is therefore

x(t) = c1 e^(−2t) + c2 t e^(−2t).

Higher Order Linear Constant Coefficient ODEs


- The same approach applies. For an nth order equation, we obtain
  an nth degree polynomial.
- Complex roots must appear in conjugate pairs (due to real
  coefficients) giving a pair of solutions e^(αx) cos(βx) and e^(αx) sin(βx).
- If a root m is repeated k times, we get k linearly independent
  solutions

  e^(mx), x e^(mx), x^2 e^(mx), ..., x^(k−1) e^(mx)

  or in the conjugate pair case 2k solutions

  e^(αx) cos(βx), e^(αx) sin(βx), x e^(αx) cos(βx), x e^(αx) sin(βx), ...,
  x^(k−1) e^(αx) cos(βx), x^(k−1) e^(αx) sin(βx)

- It may require a computer algebra system to find the roots of a
  high degree polynomial.

Example
Solve the ODE

y''' − 4y' = 0

The characteristic equation is

m^3 − 4m = 0 with roots −2, 0, 2.

A fundamental solution set is y1 = e^(−2x), y2 = e^(0x) = 1, and y3 = e^(2x).

The general solution is therefore

y = c1 e^(−2x) + c2 + c3 e^(2x).

Note that as expected, this third order equation has a fundamental
solution set consisting of three linearly independent functions.
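The independence claim can be checked numerically: the Wronskian of y1 = e^(−2x), y2 = 1, y3 = e^(2x) is a 3×3 determinant, and a nonzero value at any point confirms independence. A sketch evaluating it at x = 0 (`det3` is a hypothetical helper, not from the text):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Columns: y1 = e^(-2x), y2 = 1, y3 = e^(2x); rows: value, y', y'' at x = 0.
W0 = det3([[ 1, 1, 1],
           [-2, 0, 2],
           [ 4, 0, 4]])
```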

Example
Solve the ODE

y''' − 3y'' + 3y' − y = 0

The characteristic equation is

m^3 − 3m^2 + 3m − 1 = 0 with root 1 (repeated).

Every fundamental solution set must contain three linearly independent
functions. The method for repeated roots in the second order case
extends nicely to higher order equations. A fundamental solution set is

y1 = e^x, y2 = x e^x, y3 = x^2 e^x.

The general solution is

y = c1 e^x + c2 x e^x + c3 x^2 e^x.
Section 9: Method of Undetermined Coefficients


The context here is linear, constant coefficient, nonhomogeneous
equations

a_n y^(n) + a_(n−1) y^(n−1) + ··· + a_0 y = g(x)

where g comes from the restricted classes of functions:

- polynomials,
- exponentials,
- sines and/or cosines,
- and products and sums of the above kinds of functions.

Recall y = yc + yp, so we'll have to find both the complementary and
the particular solutions!

Motivating Example
Find a particular solution of the ODE

y'' − 4y' + 4y = 8x + 1

We may guess that since g(x) is a first degree polynomial, perhaps
our particular solution yp is also.[16] That is

yp = Ax + B

for some pair of constants A, B. We can substitute this into the DE.
We'll use that

yp' = A, and yp'' = 0.

[16] Note that this is an educated guess. If it doesn't work out, we can sigh and try
something else. If it does work, we owe no apologies for starting with a guess.

y'' − 4y' + 4y = 8x + 1

Plugging yp into the ODE we get

8x + 1 = yp'' − 4yp' + 4yp
       = 0 − 4(A) + 4(Ax + B)
       = 4Ax + (−4A + 4B)

We have first degree polynomials on both sides of the equation. They
are equal if and only if they have the same corresponding coefficients.
Matching the coefficients of x and the constants on the left and right
we get the pair of equations

4A = 8
−4A + 4B = 1

This has solution A = 2, B = 9/4. We've found a particular solution

yp = 2x + 9/4.
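The found coefficients can be verified by plugging yp = 2x + 9/4 back into the left side (using yp' = A and yp'' = 0):

```python
A, B = 2, 9/4            # coefficients found by matching

def lhs(x):
    # yp'' - 4*yp' + 4*yp with yp = A*x + B, yp' = A, yp'' = 0
    return 0 - 4*A + 4*(A*x + B)
```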

The Method of Undetermined Coefficients

As the previous example suggests, this method entails guessing that
the particular solution has the same basic form as the right hand side
function g(x). We must keep in mind that the idea of form should be
considered in the most general context. The following forms arise:

- nth degree polynomial,
- exponentials e^(mx) for m constant,
- a linear combination of sin(mx) AND cos(mx) for some constant m,
- a product of any two or three of the above.

When we assume a form for yp we leave unspecified coefficients
(hence the name of the method). The values of factors such as m
inside an exponential, sine, or cosine are fixed by reference to the
function g.

The Method of Undetermined Coefficients

The gist of the method is that we assume that yp matches g in form
and

- account for each type of term that appears in g or can arise
  through differentiation,[17]
- avoid any unwarranted conditions imposed upon coefficients.[18]

[17] e.g. sines and cosines give rise to one another when derivatives are taken. Hence
they should be considered together in linear combinations.
[18] Most notably, we don't assume a priori values for coefficients in our linear
combinations and don't force them to have fixed relationships to one another prior to
fitting them to the ODE.

Examples of Forms of yp based on g (Trial Guesses)


(a) g(x) = 1 (or really any constant). This is a degree zero polynomial.

    yp = A

(b) g(x) = x − 7. This is a degree 1 polynomial.

    yp = Ax + B

(c) g(x) = 5x. This is a degree 1 polynomial.

    yp = Ax + B

(d) g(x) = 3x^3 − 5. This is a degree 3 polynomial.

    yp = Ax^3 + Bx^2 + Cx + D

(e) g(x) = x e^(3x). A degree 1 polynomial times an exponential e^(3x).

    yp = (Ax + B) e^(3x)

(f) g(x) = cos(7x). A linear combination of cos(7x) and sin(7x).

    yp = A cos(7x) + B sin(7x)

(g) g(x) = sin(2x) − cos(4x). A linear combination of cos(2x) and
sin(2x) plus a linear combination of cos(4x) and sin(4x).

    yp = A cos(2x) + B sin(2x) + C cos(4x) + D sin(4x)

(h) g(x) = x^2 sin(3x). A product of a second degree polynomial and a
linear combination of sines and cosines of 3x.

    yp = (Ax^2 + Bx + C) cos(3x) + (Dx^2 + Ex + F) sin(3x)



Still More Trial Guesses

(i) g(x) = e^x cos(2x). A product of an exponential e^x and a linear
combination of sines and cosines of 2x.

    yp = A e^x cos(2x) + B e^x sin(2x)

(j) g(x) = x^3 e^(8x). A degree 3 polynomial times an exponential e^(8x).

    yp = (Ax^3 + Bx^2 + Cx + D) e^(8x)

(k) g(x) = x e^(−x) sin(πx). A product of a degree 1 polynomial, an
exponential e^(−x), and a linear combination of sines and cosines of πx.

    yp = (Ax + B) e^(−x) cos(πx) + (Cx + D) e^(−x) sin(πx)



The Superposition Principle

y'' − y' = 20 sin(2x) + 4e^(−5x)

Given the theorem in section 6 regarding superposition for
nonhomogeneous equations, we can consider two subproblems

y'' − y' = 20 sin(2x), and y'' − y' = 4e^(−5x).

Calling the particular solutions yp1 and yp2, respectively, the correct
forms to guess would be

yp1 = A cos(2x) + B sin(2x), and yp2 = C e^(−5x).

It can be shown (details left to the reader) that A = 2, B = −4 and
C = 2/15.

The Superposition Principle

y'' − y' = 20 sin(2x) + 4e^(−5x)

The particular solution is

yp = yp1 + yp2 = 2 cos(2x) − 4 sin(2x) + (2/15) e^(−5x).

The general solution to the ODE is

y = c1 e^x + c2 + 2 cos(2x) − 4 sin(2x) + (2/15) e^(−5x).
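A numerical spot check of the claimed particular solution, with the derivatives written out by hand:

```python
import math

def yp(x):
    return 2*math.cos(2*x) - 4*math.sin(2*x) + 2*math.exp(-5*x)/15

def dyp(x):   # first derivative of yp
    return -4*math.sin(2*x) - 8*math.cos(2*x) - 2*math.exp(-5*x)/3

def d2yp(x):  # second derivative of yp
    return -8*math.cos(2*x) + 16*math.sin(2*x) + 10*math.exp(-5*x)/3
```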

A Glitch!

y'' − y' = 3e^x

Here we note that g(x) = 3e^x is a constant times e^x. So we may
guess that our particular solution is

yp = A e^x.

When we attempt the substitution, we end up with an unsolvable
problem. Note that yp' = A e^x and yp'' = A e^x, giving upon substitution

3e^x = yp'' − yp'
     = A e^x − A e^x
     = 0.

This requires 3 = 0, which is always false (i.e. we can't find a value of A
to make a true statement out of this result).

A Glitch!

y'' − y' = 3e^x

The reason for our failure here comes to light by consideration of the
associated homogeneous equation

y'' − y' = 0

with fundamental solution set y1 = e^x, y2 = 1. Our initial guess of A e^x
is a solution to the associated homogeneous equation for every
constant A. And for any nonzero A, we've only duplicated part of the
complementary solution. Fortunately, there is a fix for this problem.
Taking a hint from a previous observation involving reduction of order,
we may modify our initial guess by including a factor of x. If we guess

yp = A x e^x

we find that this actually works. It can be shown (details left to the
reader) that A = 3. So yp = 3x e^x is a particular solution.
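The modified guess can be verified directly: with yp = 3x e^x we have yp' = 3e^x + 3x e^x and yp'' = 6e^x + 3x e^x, so yp'' − yp' = 3e^x. In code:

```python
import math

def yp(x):   return 3*x*math.exp(x)
def dyp(x):  return 3*math.exp(x) + 3*x*math.exp(x)
def d2yp(x): return 6*math.exp(x) + 3*x*math.exp(x)
```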

We'll consider cases

Using superposition as needed, begin with the assumption:

yp = yp1 + ··· + ypk

where ypi has the same general form as gi(x).

Case I: yp as first written has no part that duplicates the
complementary solution yc. Then this first form will suffice.

Case II: yp has a term ypi that duplicates a term in the complementary
solution yc. Multiply that term by x^n, where n is the smallest positive
integer that eliminates the duplication.

Case II Example: y'' − 2y' + y = −4e^x

The associated homogeneous equation y'' − 2y' + y = 0 has y1 = e^x
and y2 = x e^x as a fundamental solution set. At first pass, we may
assume that

yp = A e^x

based on the form of the right hand side. A quick look at y1 and y2
shows that this guess will not work. We might try modifying our guess
as

yp = A x e^x,

but again this just duplicates y2. The correct modification is therefore

yp = A x^2 e^x.

This does not duplicate the complementary solution and will work as
the correct form (i.e. it is possible to find a value of A such that this
function solves the nonhomogeneous ODE).

Find the form of the particular solution

y'' − 4y' + 4y = sin(4x) + x e^(2x)

We consider the subproblems

y'' − 4y' + 4y = sin(4x) and y'' − 4y' + 4y = x e^(2x)

At first pass, we may guess particular solutions

yp1 = A sin(4x) + B cos(4x), and yp2 = (Cx + D) e^(2x).

A fundamental solution set to the associated homogeneous equation is

y1 = e^(2x), y2 = x e^(2x).

Comparing this set to the first part of the particular solution yp1, we see
that there is no correlation between them. Hence yp1 will suffice as
written.

Find the form of the particular solution

y'' − 4y' + 4y = sin(4x) + x e^(2x)

Our guess at yp2, however, will fail since it contains at least one term
(actually both are problematic) that solves the homogeneous equation.
We may attempt a factor of x,

yp2 = x(Cx + D) e^(2x) = (Cx^2 + Dx) e^(2x),

to fix the problem. However, this still contains a term (D x e^(2x)) that
duplicates the fundamental solution set. Hence we introduce another
factor of x, putting

yp2 = x^2 (Cx + D) e^(2x) = (Cx^3 + Dx^2) e^(2x).

Now we have a workable form.



Find the form of the particular solution

y'' − 4y' + 4y = sin(4x) + x e^(2x)

The particular solution for the whole ODE is of the form

yp = A sin(4x) + B cos(4x) + (Cx^3 + Dx^2) e^(2x).

It is important to note that modification of yp2 does not impose any
necessary modification of yp1. This is due to our principle of
superposition.

Solve the IVP

y'' − y = 4e^(−x),  y(0) = −1,  y'(0) = 1

The associated homogeneous equation y'' − y = 0 has fundamental
solution set y1 = e^x, y2 = e^(−x). We may guess that our particular
solution is

yp = A e^(−x),

but seeing that this duplicates y2 we will need to modify our guess as

yp = A x e^(−x).

Substitution into the ODE gives A = −2. So our particular solution is
yp = −2x e^(−x) and the general solution of the ODE is

y = c1 e^x + c2 e^(−x) − 2x e^(−x).

Solve the IVP

y'' − y = 4e^(−x),  y(0) = −1,  y'(0) = 1

We apply our initial conditions to the general solution. Note that

y = c1 e^x + c2 e^(−x) − 2x e^(−x)  =⇒  y' = c1 e^x − c2 e^(−x) − 2e^(−x) + 2x e^(−x).

So

y(0) = c1 + c2 = −1 and y'(0) = c1 − c2 − 2 = 1.

Solving this system of equations for c1 and c2 we find c1 = 1 and
c2 = −2. The solution to the IVP is

y = e^x − 2e^(−x) − 2x e^(−x).
Section 10: Variation of Parameters


Suppose we wish to consider the nonhomogeneous equations

y'' + y = tan x  or  x^2 y'' + x y' − 4y = e^x.

Neither of these equations lend themselves to the method of
undetermined coefficients for identification of a particular solution.

- The first one fails because g(x) = tan x does not fall into any of
  the classes of functions required for the method.
- The second one fails because the left hand side is not a constant
  coefficient equation.

We need another method!

For the equation in standard form

d^2y/dx^2 + P(x) dy/dx + Q(x) y = g(x),

suppose {y1(x), y2(x)} is a fundamental solution set for the
associated homogeneous equation. We seek a particular solution of
the form

yp(x) = u1(x) y1(x) + u2(x) y2(x)

where u1 and u2 are functions[19] we will determine (in terms of y1, y2
and g).

This method is called variation of parameters.

[19] Note the similarity to yc = c1 y1 + c2 y2. The coefficients u1 and u2 are varying,
hence the name variation of parameters.

Variation of Parameters: Derivation of yp

y'' + P(x) y' + Q(x) y = g(x)

Set yp = u1(x) y1(x) + u2(x) y2(x).

Note that we have two unknowns u1, u2 but only one equation (the
ODE). Hence we will introduce a second equation. We'll do this with
some degree of freedom, but in a way that makes life a little bit easier.
We wish to substitute our form of yp into the ODE. Note that

yp' = u1 y1' + u2 y2' + u1' y1 + u2' y2.

Here is that second condition: Let us assume that

u1' y1 + u2' y2 = 0.

Variation of Parameters: Derivation of yp

y'' + P(x) y' + Q(x) y = g(x)

Given our assumption, we have

yp'' = u1' y1' + u2' y2' + u1 y1'' + u2 y2''.

Substituting into the ODE we obtain

g(x) = yp'' + P yp' + Q yp
     = u1' y1' + u2' y2' + u1 y1'' + u2 y2'' + P(u1 y1' + u2 y2') + Q(u1 y1 + u2 y2)
     = u1' y1' + u2' y2' + (y1'' + P y1' + Q y1) u1 + (y2'' + P y2' + Q y2) u2

Remember that yi'' + P(x) yi' + Q(x) yi = 0, for i = 1, 2.



Variation of Parameters: Derivation of yp

y'' + P(x) y' + Q(x) y = g(x)

Since the last terms cancel, we obtain a second equation for our u's:
u1' y1' + u2' y2' = g. We have a system

u1' y1 + u2' y2 = 0
u1' y1' + u2' y2' = g

We may express this using a convenient matrix formalism as

[ y1   y2  ] [ u1' ]   [ 0 ]
[ y1'  y2' ] [ u2' ] = [ g ]

The coefficient matrix on the left should be familiar!


Variation of Parameters: Derivation of yp

y'' + P(x) y' + Q(x) y = g(x)

Finally, we may solve this using Cramer's Rule to obtain

u1' = W1/W = −y2 g / W,   u2' = W2/W = y1 g / W

where

W1 = | 0  y2  |,   W2 = | y1   0 |
     | g  y2' |         | y1'  g |

and W is the Wronskian of y1 and y2. We simply integrate to obtain u1
and u2.

Example:

Solve the ODE y'' + y = tan x.

The associated homogeneous equation has fundamental solution
set[20] y1 = cos x and y2 = sin x. The Wronskian is W = 1. The equation is
already in standard form, so our g(x) = tan x. We have

u1 = ∫ (−y2 g / W) dx = −∫ sin x tan x dx = sin x − ln |sec x + tan x|

u2 = ∫ (y1 g / W) dx = ∫ cos x tan x dx = −cos x

[20] How we number the functions in the fundamental solution set is completely
arbitrary. However, the designations are important for finding our u's and constructing
our yp. So we pick an ordering at the beginning and stick with it.

Example Continued...

Solve the ODE y'' + y = tan x.

The particular solution is

yp = u1 y1 + u2 y2
   = (sin x − ln |sec x + tan x|) cos x − cos x sin x
   = −cos x ln |sec x + tan x|.

And the general solution is therefore

y = c1 cos x + c2 sin x − cos x ln |sec x + tan x|.
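We can sanity-check yp = −cos x · ln|sec x + tan x| with a finite-difference second derivative, testing that yp'' + yp = tan x at a few points:

```python
import math

def yp(x):
    return -math.cos(x) * math.log(abs(1/math.cos(x) + math.tan(x)))

def second_derivative(f, x, h=1e-5):
    # central finite-difference approximation of f''(x)
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2
```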


Section 11: Linear Mechanical Equations


Simple Harmonic Motion

We consider a flexible spring from which a mass is suspended. In the


absence of any damping forces (e.g. friction, a dash pot, etc.), and free
of any external driving forces, any initial displacement or velocity
imparted will result in free, undamped motion, a.k.a. simple
harmonic motion.




Building an Equation: Hooke’s Law

Figure: In the absence of any displacement, the system is at equilibrium.
Displacement x(t) is measured from equilibrium x = 0.

Building an Equation: Hooke’s Law


Newton's Second Law: F = ma (force = mass times acceleration)

a = d^2x/dt^2  =⇒  F = m d^2x/dt^2

Hooke's Law: F = kx (the force exerted by the spring is proportional to
displacement)

The force imparted by the spring opposes the direction of motion.

m d^2x/dt^2 = −kx  =⇒  x'' + ω^2 x = 0, where ω = √(k/m)

Convention We'll Use: Up will be positive (x > 0), and down will be
negative (x < 0). This orientation is arbitrary and follows the
convention in Trench.

Obtaining the Spring Constant (US Customary Units)

If an object with weight W pounds stretches a spring δx feet from its
natural length (with no mass attached), then by Hooke's law we
compute the spring constant via the equation

W = k δx  =⇒  k = W/δx.

The units for k in this system of measure are lb/ft.

Note also that weight = mass × acceleration due to gravity. Hence if
we know the weight of an object, we can obtain the mass via

W = mg  =⇒  m = W/g.

We typically take the approximation g = 32 ft/sec^2. The units for mass
are lb sec^2/ft, which are called slugs.
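For instance, a 4 lb weight that stretches a spring 6 in = 1/2 ft gives the following arithmetic (a minimal sketch):

```python
import math

g = 32.0              # ft/sec^2
W = 4.0               # weight, lb
dx = 0.5              # stretch, ft (6 inches)

k = W / dx            # spring constant, lb/ft
m = W / g             # mass, slugs
omega = math.sqrt(k / m)   # circular frequency, 1/sec
```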

Obtaining the Spring Constant (SI Units)

In SI units, the weight would be expressed in Newtons (N). The


appropriate units for displacement would be meters (m). In these units,
the spring constant would have units of N/m.
It is customary to describe an object by its mass in kilograms. When
we encounter such a description, we deduce the weight in Newtons

W = mg taking the approximation g = 9.8 m/sec2 .



Obtaining the Spring Constant: Displacement in Equilibrium

If an object stretches a spring δx units from its natural length (with no
object attached), we may say that it stretches the spring δx units in
equilibrium. Applying Hooke's law with the weight as force, we have

mg = k δx.

We observe that the value ω can be deduced from δx by

ω^2 = k/m = g/δx.

Provided that values for δx and g are used in appropriate units, ω is in
units of per second.

Simple Harmonic Motion

x'' + ω^2 x = 0,  x(0) = x0,  x'(0) = x1        (8)

Here, x0 and x1 are the initial position (relative to equilibrium) and
velocity, respectively. The solution is

x(t) = x0 cos(ωt) + (x1/ω) sin(ωt),

called the equation of motion.

Caution: The phrase equation of motion is used differently by different
authors. Some, including Trench, use this phrase to refer to the ODE, of
which (8) would be the example here. Others use it to refer to the
solution of the associated IVP.

x(t) = x0 cos(ωt) + (x1/ω) sin(ωt)

Characteristics of the system include

- the period T = 2π/ω,
- the frequency f = 1/T = ω/(2π),[21]
- the circular (or angular) frequency ω, and
- the amplitude or maximum displacement A = √(x0^2 + (x1/ω)^2).

[21] Various authors call f the natural frequency and others use this term for ω.

Amplitude and Phase Shift

We can formulate the solution in terms of a single sine (or cosine)
function. Letting

x(t) = x0 cos(ωt) + (x1/ω) sin(ωt) = A sin(ωt + φ)

requires

A = √(x0^2 + (x1/ω)^2),

and the phase shift φ must be defined by

sin φ = x0/A, with cos φ = x1/(ωA).

(Alternatively, we can let x(t) = x0 cos(ωt) + (x1/ω) sin(ωt) = A cos(ωt − φ̂),
in which case φ̂ is defined by

cos φ̂ = x0/A, with sin φ̂ = x1/(ωA).

The phase shift defined above is φ = π/2 − φ̂.)

Example

An object stretches a spring 6 inches in equilibrium. Assuming no
driving force and no damping, set up the differential equation
describing this system.

Letting the displacement at time t be x(t) feet, we have

m x'' + k x = 0  =⇒  x'' + ω^2 x = 0

where ω = √(k/m). We seek the value of ω, but we do not have the
mass of the object to calculate the weight. Since the displacement is
described as displacement in equilibrium, we can calculate

ω = √(g/δx).

Example Continued...

The stretching is given in inches, which we convert to feet. Using the
appropriate value for g we have

ω = √( (32 ft/sec^2) / (1/2 ft) ) = 8 sec^(−1).

The differential equation is therefore

x'' + 64x = 0.

Example

A 4 pound weight stretches a spring 6 inches. The mass is released
from a position 4 feet above equilibrium with an initial downward
velocity of 24 ft/sec. Find the equation of motion, the period, amplitude,
phase shift, and frequency of the motion. (Take g = 32 ft/sec^2.)

We can calculate the spring constant and the mass from the given
information. Converting inches to feet, we have

4 lb = k (1/2 ft)  =⇒  k = 8 lb/ft, and

m = (4 lb) / (32 ft/sec^2) = 1/8 slugs.

The value of ω is therefore

ω = √(k/m) = 8 sec^(−1).

Example Continued...

Along with the initial conditions, we have the IVP

x'' + 64x = 0,  x(0) = 4,  x'(0) = −24.

The equation of motion is therefore

x(t) = 4 cos(8t) − 3 sin(8t).

The period and frequency are

T = 2π/8 = π/4 sec  and  f = 1/T = 4/π sec^(−1).

The amplitude is

A = √(4^2 + (−3)^2) = 5 ft.

Example Continued...

The phase shift φ satisfies the equations

sin φ = 4/5 and cos φ = −3/5.

We note that sin φ > 0 and cos φ < 0, indicating that φ is a quadrant II
angle (in standard position). Taking the smallest possible positive
value, we have

φ ≈ 2.21 (roughly 127°).
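The amplitude and phase computation can be scripted; `atan2` handles the quadrant bookkeeping automatically (note its arguments here are (sin φ, cos φ)):

```python
import math

x0, x1, omega = 4.0, -24.0, 8.0       # initial data from the example

A = math.hypot(x0, x1 / omega)        # amplitude sqrt(x0^2 + (x1/omega)^2)
# sin(phi) = x0/A, cos(phi) = x1/(omega*A); atan2(sin, cos) returns phi
phi = math.atan2(x0 / A, x1 / (omega * A))
```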

Free Damped Motion

Figure: If a damping force is added, we'll assume that this force is
proportional to the instantaneous velocity.

Free Damped Motion


Now we wish to consider an added force corresponding to
damping—friction, a dashpot, air resistance.

Total Force = Force of spring + Force of damping

d 2x dx d 2x dx
m 2
= −β − kx =⇒ 2
+ 2λ + ω2x = 0
dt dt dt dt
where r
β k
2λ = and ω = .
m m
Three qualitatively different solutions can occur depending on the
nature of the roots of the characteristic equation
p
r 2 + 2λr + ω 2 = 0 with roots r1,2 = −λ ± λ2 − ω 2 .

Case 1: λ^2 > ω^2  Overdamped

x(t) = e^(−λt) ( c1 e^(t√(λ^2−ω^2)) + c2 e^(−t√(λ^2−ω^2)) )

Figure: Two distinct real roots. No oscillations. Approach to equilibrium may
be slow.

Case 2: λ^2 = ω^2  Critically Damped

x(t) = e^(−λt) (c1 + c2 t)

Figure: One real root. No oscillations. Fastest approach to equilibrium.


Case 3: λ^2 < ω^2  Underdamped

x(t) = e^(−λt) (c1 cos(ω1 t) + c2 sin(ω1 t)),  ω1 = √(ω^2 − λ^2)

Figure: Complex conjugate roots. Oscillations occur as the system
approaches (resting) equilibrium.

Damping Ratio

Engineers may refer to the damping ratio when determining which of
the three types of damping a system exhibits. Simply put, the damping
ratio is the ratio of the system damping to the critical damping for the
given mass and spring constant. Calling this damping ratio ζ,

ζ = damping coefficient / critical damping = β / (2√(mk)) = λ/ω

Relative to this ratio, the damping cases are given by

ζ < 1  underdamped
ζ = 1  critically damped
ζ > 1  overdamped

This criterion is identical to the discriminant criterion above. That is,
if ζ < 1, the characteristic equation has complex roots; if ζ = 1 it has
one real root; and if ζ > 1 it has two real roots.

Comparison of Damping

Figure: Comparison of motion for the three damping types.



Example

A 2 kg mass is attached to a spring whose spring constant is 12 N/m.
The surrounding medium offers a damping force numerically equal to
10 times the instantaneous velocity. Write the differential equation
describing this system. Determine if the motion is underdamped,
overdamped or critically damped.

Our DE is

2x'' + 10x' + 12x = 0  =⇒  x'' + 5x' + 6x = 0.

Hence

λ = 5/2 and ω^2 = 6.

Note that

λ^2 − ω^2 = 25/4 − 6 = 1/4 > 0.

This system is overdamped.
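The same conclusion follows from the damping ratio ζ = β/(2√(mk)), using the values from this example:

```python
import math

def damping_ratio(m, beta, k):
    """zeta = beta / (2*sqrt(m*k)); >1 over-, =1 critically, <1 underdamped."""
    return beta / (2 * math.sqrt(m * k))

zeta = damping_ratio(2, 10, 12)   # m = 2 kg, beta = 10, k = 12 N/m
```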

Example

A 3 kg mass is attached to a spring whose spring constant is 12 N/m.
The surrounding medium offers a damping force numerically equal to
12 times the instantaneous velocity. Write the differential equation
describing this system. Determine if the motion is underdamped,
overdamped or critically damped. If the mass is released from the
equilibrium position with an upward velocity of 1 m/sec, solve the
resulting initial value problem.

From the description, the IVP is

3x'' + 12x' + 12x = 0,  x(0) = 0,  x'(0) = 1.

Dividing by 3 gives x'' + 4x' + 4x = 0, so λ^2 = ω^2 = 4 and the motion is
critically damped. The equation of motion (solution of the IVP) is found to be

x(t) = t e^(−2t).

Driven Motion

We can consider the application of an external driving force (with or
without damping). Assume a time dependent force f(t) is applied to
the system. The ODE governing displacement becomes

m d^2x/dt^2 = −β dx/dt − kx + f(t),  β ≥ 0.

Divide out m and let F(t) = f(t)/m to obtain the nonhomogeneous
equation

d^2x/dt^2 + 2λ dx/dt + ω^2 x = F(t)

Forced Undamped Motion and Resonance

Consider the case F(t) = F0 cos(γt) or F(t) = F0 sin(γt), and λ = 0.
Two cases arise:

(1) γ ≠ ω, and (2) γ = ω.

Taking the sine case, the DE is

x'' + ω^2 x = F0 sin(γt)

with complementary solution

xc = c1 cos(ωt) + c2 sin(ωt).

x'' + ω^2 x = F0 sin(γt)

Note that

xc = c1 cos(ωt) + c2 sin(ωt).

Using the method of undetermined coefficients, the first guess for the
particular solution is

xp = A cos(γt) + B sin(γt).

If ω ≠ γ, then this form does not duplicate the solution to the
associated homogeneous equation. Hence it is the correct form for the
particular solution. An equation of motion may consist of a sum of
sines and cosines of ωt and γt.

x'' + ω^2 x = F0 sin(γt)

Note that

xc = c1 cos(ωt) + c2 sin(ωt).

Using the method of undetermined coefficients, the first guess for the
particular solution is

xp = A cos(γt) + B sin(γt).

If ω = γ, then this form DOES duplicate the solution to the associated
homogeneous equation. In this case, the correct form for xp is

xp = A t cos(ωt) + B t sin(ωt).

Note that terms of this sort will produce an amplitude of motion that
grows linearly in t.

Forced Undamped Motion and Resonance

For F(t) = F0 sin(γt), starting from rest at equilibrium:

Case (1): x'' + ω^2 x = F0 sin(γt),  x(0) = 0,  x'(0) = 0

x(t) = F0/(ω^2 − γ^2) ( sin(γt) − (γ/ω) sin(ωt) )

If γ ≈ ω, the amplitude of motion could be rather large!
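A finite-difference check that this formula really solves the IVP, using assumed sample values F0 = 2, ω = 3, γ = 2 (not from the text):

```python
import math

F0, om, ga = 2.0, 3.0, 2.0     # assumed sample values with gamma != omega

def x(t):
    return F0 / (om**2 - ga**2) * (math.sin(ga*t) - (ga/om) * math.sin(om*t))

def xpp(t, h=1e-5):
    # central finite-difference approximation of x''(t)
    return (x(t + h) - 2*x(t) + x(t - h)) / h**2
```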

Pure Resonance

Case (2): x'' + ω^2 x = F0 sin(ωt),  x(0) = 0,  x'(0) = 0

x(t) = F0/(2ω^2) sin(ωt) − (F0/(2ω)) t cos(ωt)

Note that the amplitude, α, of the second term is a function of t:

α(t) = F0 t / (2ω),

which grows without bound!

Section 12: LRC Series Circuits


Figure: Kirchhoff's Law: The charge q on the capacitor satisfies
Lq'' + Rq' + (1/C) q = E(t).

LRC Series Circuit (Free Electrical Vibrations)

L d^2q/dt^2 + R dq/dt + (1/C) q = 0

If the applied voltage E(t) = 0, then the electrical vibrations of the
circuit are said to be free. These are categorized as

overdamped if R^2 − 4L/C > 0,
critically damped if R^2 − 4L/C = 0,
underdamped if R^2 − 4L/C < 0.

Steady and Transient States

Given a nonzero applied voltage E(t), we obtain an IVP with a
nonhomogeneous ODE for the charge q:

Lq'' + Rq' + (1/C) q = E(t),  q(0) = q0,  q'(0) = i0.

From our basic theory of linear equations we know that the solution will
take the form

q(t) = qc(t) + qp(t).

The function qc is influenced by the initial state (q0 and i0) and will
decay exponentially as t → ∞. Hence qc is called the transient state
charge of the system.

The function qp is independent of the initial state but depends on the
characteristics of the circuit (L, R, and C) and the applied voltage E.
qp is called the steady state charge of the system.

Example

An LRC series circuit has inductance 0.5 h, resistance 10 ohms, and
capacitance 4·10^(−3) f. Find the steady state current of the system if
the applied voltage is E(t) = 5 cos(10t).

The ODE for the charge is

0.5q'' + 10q' + (1/(4·10^(−3))) q = 5 cos(10t)  =⇒  q'' + 20q' + 500q = 10 cos(10t).

The characteristic equation r^2 + 20r + 500 = 0 has roots
r = −10 ± 20i. To determine qp we can assume

qp = A cos(10t) + B sin(10t),

which does not duplicate solutions of the homogeneous equation
(such duplication would only occur if the roots above were r = ±10i).

Example Continued...

Working through the details, we find that A = 1/50 and B = 1/100.
The steady state charge is therefore

qp = (1/50) cos(10t) + (1/100) sin(10t).

The steady state current is

ip = dqp/dt = −(1/5) sin(10t) + (1/10) cos(10t).
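Those "details" amount to coefficient matching: substituting qp = A cos(10t) + B sin(10t) into q'' + 20q' + 500q = 10 cos(10t) gives 400A + 200B = 10 (cosine terms) and −200A + 400B = 0 (sine terms), which Cramer's rule solves:

```python
# System from matching cos(10t) and sin(10t) coefficients:
#   400*A + 200*B = 10
#  -200*A + 400*B = 0
det = 400*400 - 200*(-200)          # determinant of the coefficient matrix
A = (10*400 - 200*0) / det          # Cramer's rule, first column replaced
B = (400*0 - (-200)*10) / det       # Cramer's rule, second column replaced
```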
Section 13: The Laplace Transform



If f = f (s, t) is a function of two variables s and t, and we compute a
definite integral with respect to t,
Z b
f (s, t) dt
a

we are left with a function of s alone.

Example: The integral22


4
4
t2
Z
2 2 2
(2st + s − t) dt = st + s t − = 16s + 4s2 − 8
0 2
0

is a function of the variable s.


22
The variable s is treated like a constant when integrating with respect to t—and
visa versa.

Integral Transform

An integral transform is a mapping that assigns to a function f(t)
another function F(s) via an integral of the form

∫_a^b K(s, t) f(t) dt.

- The function K is called the kernel of the transformation.
- The limits a and b may be finite or infinite.
- The integral may be improper, so that convergence/divergence
  must be considered.
- This transform is linear in the sense that

  ∫_a^b K(s, t)(αf(t) + βg(t)) dt = α ∫_a^b K(s, t) f(t) dt + β ∫_a^b K(s, t) g(t) dt.

The Laplace Transform

Definition: Let f(t) be defined on [0, ∞). The Laplace transform of f is
denoted and defined by

L{f(t)} = ∫_0^∞ e^(−st) f(t) dt = F(s).

The domain of the transformation F(s) is the set of all s such that the
integral is convergent.

Note: The kernel for the Laplace transform is K(s, t) = e^(−st).

Find the Laplace transform of f(t) = 1

It is readily seen that if s = 0, the integral ∫_0^∞ e^(−st) dt is divergent.
Otherwise[23]

L{1} = ∫_0^∞ e^(−st) dt = [ −(1/s) e^(−st) ]_0^∞.

Convergence in the limit t → ∞ requires s > 0. In this case, we have

L{1} = −(1/s)(0 − 1) = 1/s.

So we have the transform along with its domain:

L{1} = 1/s,  s > 0.

[23] The integral is improper. We are in reality evaluating an integral of the form
∫_0^b e^(−st) f(t) dt and then taking the limit b → ∞. We suppress some of the notation
here with the understanding that this process is implied.
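The transform can also be approximated numerically, which is a handy way to check entries like L{1} = 1/s. A sketch using a truncated trapezoid rule (the function name and the accuracy knobs T, n are my own choices, not from the text):

```python
import math

def laplace_numeric(f, s, T=60.0, n=200_000):
    """Approximate L{f}(s) by a trapezoid rule on [0, T].

    For s > 0 the factor e^(-sT) makes the truncated tail negligible.
    """
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s*T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s*t) * f(t)
    return total * h
```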

A piecewise defined function


Find the Laplace transform of f defined by

2t, 0 ≤ t < 10
f (t) =
0, t ≥ 10

    L{f(t)} = ∫_0^∞ e^(-st) f(t) dt = ∫_0^10 2t e^(-st) dt + ∫_10^∞ 0 · e^(-st) dt

For s ≠ 0, integration by parts gives

    L{f(t)} = 2/s^2 - 2e^(-10s)/s^2 - 20e^(-10s)/s.

When s = 0, the value L{f(t)}|_{s=0} = 100 can be computed by
evaluating the integral or by taking the limit of the above as s → 0.
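Since f vanishes for t ≥ 10, the improper integral reduces to a proper one on [0, 10], so the closed form is easy to check numerically. The helper names below are our own illustration, not from the slides.

```python
import math

def F_closed(s):
    # Closed form from the slide: 2/s^2 - 2 e^(-10s)/s^2 - 20 e^(-10s)/s.
    return 2/s**2 - 2*math.exp(-10*s)/s**2 - 20*math.exp(-10*s)/s

def F_numeric(s, n=200000):
    # Midpoint rule on [0, 10]; the integrand is e^(-st) * 2t.
    h = 10.0 / n
    return sum(math.exp(-s*(k + 0.5)*h) * 2*(k + 0.5)*h for k in range(n)) * h

for s in (0.5, 1.0, 2.0):
    assert abs(F_closed(s) - F_numeric(s)) < 1e-6
```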
Section 13: The Laplace Transform

The Laplace Transform is a Linear Transformation


Some basic results include:
I L{αf(t) + βg(t)} = αF(s) + βG(s)
I L{1} = 1/s,  s > 0
I L{t^n} = n!/s^(n+1),  s > 0, for n = 1, 2, ...
I L{e^(at)} = 1/(s - a),  s > a
I L{cos kt} = s/(s^2 + k^2),  s > 0
I L{sin kt} = k/(s^2 + k^2),  s > 0
Section 13: The Laplace Transform

Examples: Evaluate the Laplace Transform of

(a) f(t) = (2 - t)^2

    L{f(t)} = L{4 - 4t + t^2} = 4L{1} - 4L{t} + L{t^2}

            = 4/s - 4/s^2 + 2/s^3

(b) f(t) = sin^2(5t)

    L{f(t)} = L{1/2 - (1/2)cos(10t)} = (1/2)L{1} - (1/2)L{cos(10t)}

            = 1/(2s) - s/(2(s^2 + 100))
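Both of these closed forms can be sanity-checked with the same kind of truncated quadrature used earlier (the helper is our own illustration):

```python
import math

def laplace_numeric(f, s, upper=60.0, n=300000):
    # Midpoint-rule approximation of int_0^inf e^(-st) f(t) dt, truncated at `upper`.
    h = upper / n
    return sum(math.exp(-s*(k + 0.5)*h) * f((k + 0.5)*h) for k in range(n)) * h

s = 1.5
# (a) L{(2-t)^2} = 4/s - 4/s^2 + 2/s^3
assert abs(laplace_numeric(lambda t: (2 - t)**2, s) - (4/s - 4/s**2 + 2/s**3)) < 1e-5
# (b) L{sin^2(5t)} = 1/(2s) - s/(2(s^2 + 100))
assert abs(laplace_numeric(lambda t: math.sin(5*t)**2, s)
           - (1/(2*s) - s/(2*(s**2 + 100)))) < 1e-5
```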
Section 13: The Laplace Transform

Sufficient Conditions for Existence of L {f (t)}

Definition: Let c > 0. A function f defined on [0, ∞) is said to be of


exponential order c provided there exist positive constants M and T
such that |f(t)| < Me^(ct) for all t > T.

Definition: A function f is said to be piecewise continuous on an


interval [a, b] if f has at most finitely many jump discontinuities on [a, b]
and is continuous between each such jump.
Section 13: The Laplace Transform

Sufficient Conditions for Existence of L {f (t)}

Theorem: If f is piecewise continuous on [0, ∞) and of exponential


order c for some c > 0, then f has a Laplace transform for s > c.
Section 14: Inverse Laplace Transforms

Section 14: Inverse Laplace Transforms

Now we wish to go backwards: Given F (s) can we find a function f (t)


such that L {f (t)} = F (s)?

If so, we’ll use the following notation

L −1 {F (s)} = f (t) provided L {f (t)} = F (s).

We’ll call f (t) an inverse Laplace transform of F (s).


Section 14: Inverse Laplace Transforms

A Table of Inverse Laplace Transforms


I L^(-1){1/s} = 1
I L^(-1){n!/s^(n+1)} = t^n, for n = 1, 2, ...
I L^(-1){1/(s - a)} = e^(at)
I L^(-1){s/(s^2 + k^2)} = cos kt
I L^(-1){k/(s^2 + k^2)} = sin kt

The inverse Laplace transform is also linear so that

    L^(-1){αF(s) + βG(s)} = αf(t) + βg(t)


Section 14: Inverse Laplace Transforms

Find the Inverse Laplace Transform


When using the table, we have to match the expression inside the
brackets {} EXACTLY! Algebra, including partial fraction
decomposition, is often needed.

 
(a) L^(-1){1/s^7}

    L^(-1){1/s^7} = L^(-1){(1/6!) · 6!/s^7}

                  = (1/6!) L^(-1){6!/s^7} = t^6/6!
Section 14: Inverse Laplace Transforms

Example: Evaluate

 
(b) L^(-1){(s + 1)/(s^2 + 9)}

    L^(-1){(s + 1)/(s^2 + 9)} = L^(-1){s/(s^2 + 9)} + L^(-1){1/(s^2 + 9)}

                              = L^(-1){s/(s^2 + 9)} + (1/3) L^(-1){3/(s^2 + 9)}

                              = cos(3t) + (1/3) sin(3t)
Section 14: Inverse Laplace Transforms

Example: Evaluate
 
(c) L^(-1){(s - 8)/(s^2 - 2s)}

First we perform a partial fraction decomposition on the argument to
find that

    (s - 8)/(s(s - 2)) = 4/s - 3/(s - 2).

Now

    L^(-1){(s - 8)/(s^2 - 2s)} = L^(-1){4/s - 3/(s - 2)}

                               = 4 L^(-1){1/s} - 3 L^(-1){1/(s - 2)}

                               = 4 - 3e^(2t)
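A partial fraction decomposition is easy to spot-check by comparing both sides at a few sample values of s. The helper below is our own illustration, not from the slides.

```python
def pf_check(s):
    # Compare (s-8)/(s(s-2)) against the claimed decomposition 4/s - 3/(s-2).
    return abs((s - 8)/(s*(s - 2)) - (4/s - 3/(s - 2))) < 1e-12

# Spot-check away from the poles s = 0 and s = 2.
assert all(pf_check(s) for s in (1.0, 3.5, -2.0, 10.0))
```

Agreement at more sample points than the number of unknown coefficients confirms the decomposition.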
Section 15: Shift Theorems

Section 15: Shift Theorems

Suppose we wish to evaluate L^(-1){2/(s - 1)^3}. Does it help to know that
L{t^2} = 2/s^3?

Note that by definition

    L{e^t t^2} = ∫_0^∞ e^(-st) e^t t^2 dt
               = ∫_0^∞ e^(-(s-1)t) t^2 dt

Observe that this is simply the Laplace transform of f(t) = t^2 evaluated
at s - 1. Letting F(s) = L{t^2}, we have

    F(s - 1) = 2/(s - 1)^3.
Section 15: Shift Theorems

Theorem (translation in s)

Suppose L{f(t)} = F(s). Then for any real number a

    L{e^(at) f(t)} = F(s - a).

For example,

    L{t^n} = n!/s^(n+1)  =⇒  L{e^(at) t^n} = n!/(s - a)^(n+1).

    L{cos(kt)} = s/(s^2 + k^2)  =⇒  L{e^(at) cos(kt)} = (s - a)/((s - a)^2 + k^2).
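The shift-in-s rule can be verified numerically for a particular pair (a, n) using the truncated quadrature idea from earlier; the helper is our own illustration.

```python
import math

def laplace_numeric(f, s, upper=60.0, n=300000):
    # Midpoint-rule approximation of int_0^inf e^(-st) f(t) dt, truncated at `upper`.
    h = upper / n
    return sum(math.exp(-s*(k + 0.5)*h) * f((k + 0.5)*h) for k in range(n)) * h

# Check L{e^(at) t^n} = n!/(s - a)^(n+1) with a = 1, n = 2, s = 3.
a, n, s = 1.0, 2, 3.0
lhs = laplace_numeric(lambda t: math.exp(a*t) * t**n, s)
rhs = math.factorial(n) / (s - a)**(n + 1)
assert abs(lhs - rhs) < 1e-5
```

Note that the integral converges only for s > a, matching the shifted domain.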
Section 15: Shift Theorems

Inverse Laplace Transforms (completing the square)

 
(a) L^(-1){s/(s^2 + 2s + 2)}

Note that s^2 + 2s + 2 = (s + 1)^2 + 1 and s = s + 1 - 1. Hence

    L^(-1){s/(s^2 + 2s + 2)} = L^(-1){(s + 1)/((s + 1)^2 + 1)} - L^(-1){1/((s + 1)^2 + 1)}

                             = e^(-t) cos t - e^(-t) sin t.


Section 15: Shift Theorems

Inverse Laplace Transforms (repeat linear factors)

(b) L^(-1){(1 + 3s - s^2)/(s(s - 1)^2)}

Doing a partial fraction decomposition, we find that

    (1 + 3s - s^2)/(s(s - 1)^2) = 1/s - 2/(s - 1) + 3/(s - 1)^2.

So

    L^(-1){(1 + 3s - s^2)/(s(s - 1)^2)} = L^(-1){1/s - 2/(s - 1) + 3/(s - 1)^2}

        = L^(-1){1/s} - 2 L^(-1){1/(s - 1)} + 3 L^(-1){1/(s - 1)^2}

        = 1 - 2e^t + 3te^t
Section 15: Shift Theorems

The Unit Step Function


Let a > 0. The unit step function U(t - a) is defined by

    U(t - a) = { 0,  0 ≤ t < a
               { 1,  t ≥ a

Figure: We can use the unit step function to provide convenient expressions
for piecewise defined functions.
Section 15: Shift Theorems

Piecewise Defined Functions

Verify that

    f(t) = { g(t),  0 ≤ t < a    = g(t) - g(t)U(t - a) + h(t)U(t - a)
           { h(t),  t ≥ a

Exercise left to the reader. (Hint: Consider the two intervals 0 ≤ t < a
and t ≥ a.)
Section 15: Shift Theorems

Translation in t
Given a function f(t) for t ≥ 0, and a number a > 0

    f(t - a)U(t - a) = { 0,         0 ≤ t < a
                       { f(t - a),  t ≥ a.

Figure: The function f (t − a)U (t − a) has the graph of f shifted a units to the
right with value of zero for t to the left of a.
Section 15: Shift Theorems

Theorem (translation in t)
If F(s) = L{f(t)} and a > 0, then

    L{f(t - a)U(t - a)} = e^(-as) F(s).

In particular,

    L{U(t - a)} = e^(-as)/s.

As another example,

    L{t^n} = n!/s^(n+1)  =⇒  L{(t - a)^n U(t - a)} = n! e^(-as)/s^(n+1).
Section 15: Shift Theorems

Example

Find the Laplace transform L{f(t)} where

    f(t) = { 1,  0 ≤ t < 1
           { t,  t ≥ 1

Noting that f(t) = 1 + (t - 1)U(t - 1), we have

    L{f(t)} = L{1} + L{(t - 1)U(t - 1)}

            = 1/s + e^(-s)/s^2.
Section 15: Shift Theorems

A Couple of Useful Results

Another formulation of this translation theorem is

(1)  L{g(t)U(t - a)} = e^(-as) L{g(t + a)}.

For example (making use of a sum of angles formula)

    L{cos t U(t - π/2)} = e^(-πs/2) L{cos(t + π/2)}

                        = e^(-πs/2) L{-sin t} = -e^(-πs/2) · 1/(s^2 + 1).
Section 15: Shift Theorems

A Couple of Useful Results

The inverse form of this translation theorem is

(2)  L^(-1){e^(-as) F(s)} = f(t - a)U(t - a).

For example, using a partial fraction decomposition as necessary

    L^(-1){e^(-2s)/(s(s + 1))} = L^(-1){e^(-2s)/s - e^(-2s)/(s + 1)}

                               = U(t - 2) - e^(-(t-2)) U(t - 2).
Section 16: Laplace Transforms of Derivatives and IVPs

Section 16: Laplace Transforms of Derivatives and


IVPs
Suppose f has a Laplace transform and that f is differentiable on
[0, ∞). Obtain an expression for the Laplace transform of f'(t).

By definition,  L{f'(t)} = ∫_0^∞ e^(-st) f'(t) dt

Let us assume that f is of exponential order c for some real c and take
s > c. Integrate by parts to obtain

    L{f'(t)} = ∫_0^∞ e^(-st) f'(t) dt = [f(t)e^(-st)]_0^∞ + s ∫_0^∞ e^(-st) f(t) dt
             = 0 - f(0) + s L{f(t)}
             = s L{f(t)} - f(0)                                  (9)
Section 16: Laplace Transforms of Derivatives and IVPs

Transforms of Derivatives

If L{f(t)} = F(s), we have L{f'(t)} = sF(s) - f(0). We can use this
relationship recursively to obtain Laplace transforms for higher
derivatives of f.

For example

    L{f''(t)} = s L{f'(t)} - f'(0)
              = s(sF(s) - f(0)) - f'(0)
              = s^2 F(s) - sf(0) - f'(0)


Section 16: Laplace Transforms of Derivatives and IVPs

Transforms of Derivatives

For y = y(t) defined on [0, ∞) having derivatives y', y'' and so forth, if
L{y(t)} = Y(s), then

    L{dy/dt} = sY(s) - y(0),

    L{d^2y/dt^2} = s^2 Y(s) - sy(0) - y'(0),
    ...
    L{d^n y/dt^n} = s^n Y(s) - s^(n-1) y(0) - s^(n-2) y'(0) - ··· - y^(n-1)(0).
Section 16: Laplace Transforms of Derivatives and IVPs

Differential Equation

For constants a, b, and c, take the Laplace transform of both sides of
the equation

    ay'' + by' + cy = g(t),   y(0) = y_0,  y'(0) = y_1

Letting L{y(t)} = Y(s) and L{g(t)} = G(s), we take the Laplace
transform of both sides of the ODE to obtain

    L{ay'' + by' + cy} = L{g(t)}  =⇒

    a L{y''} + b L{y'} + c L{y} = G(s)  =⇒

    a(s^2 Y(s) - sy(0) - y'(0)) + b(sY(s) - y(0)) + cY(s) = G(s).


Section 16: Laplace Transforms of Derivatives and IVPs

Differential Equation
For constants a, b, and c, take the Laplace transform of both sides of
the equation

    ay'' + by' + cy = g(t),   y(0) = y_0,  y'(0) = y_1

Applying the initial conditions and solving for Y(s)

    (as^2 + bs + c)Y(s) - asy_0 - ay_1 - by_0 = G(s)  =⇒

    Y(s) = (ay_0 s + ay_1 + by_0)/(as^2 + bs + c) + G(s)/(as^2 + bs + c).

The solution y(t) of the IVP can be found by applying the inverse
transform
    y(t) = L^(-1){Y(s)}.
Section 16: Laplace Transforms of Derivatives and IVPs

Solving IVPs

Figure: We use the Laplace transform to turn our DE into an algebraic


equation. Solve this transformed equation, and then transform back.
Section 16: Laplace Transforms of Derivatives and IVPs

General Form

We get
    Y(s) = Q(s)/P(s) + G(s)/P(s)
where Q is a polynomial with coefficients determined by the initial
conditions, G is the Laplace transform of g(t), and P is the
characteristic polynomial of the original equation.

    L^(-1){Q(s)/P(s)} is called the zero input response,
and
    L^(-1){G(s)/P(s)} is called the zero state response.
Section 16: Laplace Transforms of Derivatives and IVPs

Solve the IVP using the Laplace Transform

(a)  dy/dt + 3y = 2t,   y(0) = 2

Apply the Laplace transform and use the initial condition. Let
Y(s) = L{y}.

    L{y' + 3y} = L{2t}

    sY(s) - y(0) + 3Y(s) = 2/s^2

    (s + 3)Y(s) - 2 = 2/s^2

    Y(s) = 2/(s^2(s + 3)) + 2/(s + 3) = (2s^2 + 2)/(s^2(s + 3))
Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...

We use a partial fraction decomposition to facilitate taking the inverse
transform.

    Y(s) = (-2/9)/s + (2/3)/s^2 + (20/9)/(s + 3)

    y(t) = L^(-1){Y(s)} = -2/9 + (2/3)t + (20/9)e^(-3t).
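A candidate solution can always be checked directly against the IVP. The sketch below verifies the initial condition and the ODE y' + 3y = 2t at a few sample points (the derivative is computed by hand, so the check is exact up to roundoff).

```python
import math

def y(t):
    # Candidate solution from the slide.
    return -2/9 + (2/3)*t + (20/9)*math.exp(-3*t)

def yprime(t):
    # Hand-computed derivative of the candidate solution.
    return 2/3 - (60/9)*math.exp(-3*t)

assert abs(y(0) - 2) < 1e-9                       # initial condition
for t in (0.0, 0.5, 1.0, 4.0):
    assert abs(yprime(t) + 3*y(t) - 2*t) < 1e-9   # ODE residual
```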
Section 16: Laplace Transforms of Derivatives and IVPs

Solve the IVP using the Laplace Transform

    y'' + 4y' + 4y = te^(-2t),   y(0) = 1,  y'(0) = 0

Again, let Y(s) = L{y(t)}.

    L{y'' + 4y' + 4y} = L{te^(-2t)}

    s^2 Y(s) - sy(0) - y'(0) + 4sY(s) - 4y(0) + 4Y(s) = 1/(s + 2)^2

    (s^2 + 4s + 4)Y(s) - s - 4 = 1/(s + 2)^2

    Y(s) = 1/(s + 2)^4 + (s + 4)/(s + 2)^2.
Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...

Perform a partial fraction decomposition on Y, and take the inverse
transform to find the solution y.

    Y(s) = 1/(s + 2)^4 + 1/(s + 2) + 2/(s + 2)^2

    y(t) = L^(-1){Y(s)} = (1/3!) t^3 e^(-2t) + e^(-2t) + 2te^(-2t).
Section 16: Laplace Transforms of Derivatives and IVPs

Solve the IVP


An LR-series circuit has inductance L = 1 h, resistance R = 10 Ω, and
applied voltage E(t) whose graph is given below. If the initial current is
i(0) = 0, find the current i(t) in the circuit.
Section 16: Laplace Transforms of Derivatives and IVPs

LR Circuit Example

The IVP can be stated as

    di/dt + 10i = E_0 U(t - 1) - E_0 U(t - 3),   i(0) = 0.

Letting I(s) = L{i(t)}, we apply the Laplace transform to obtain

    L{i' + 10i} = L{E_0 U(t - 1) - E_0 U(t - 3)}

    sI(s) - i(0) + 10I(s) = E_0 e^(-s)/s - E_0 e^(-3s)/s

    I(s) = E_0 (e^(-s) - e^(-3s)) / (s(s + 10)).
Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...

We can perform a partial fraction decomposition on the rational factor
and recover the current i.

    I(s) = [ (E_0/10)/s - (E_0/10)/(s + 10) ] (e^(-s) - e^(-3s))

         = (E_0/10) e^(-s)/s - (E_0/10) e^(-s)/(s + 10)
           - (E_0/10) e^(-3s)/s + (E_0/10) e^(-3s)/(s + 10)

And finally

    i(t) = L^(-1){I(s)}
         = (E_0/10)(1 - e^(-10(t-1))) U(t - 1) - (E_0/10)(1 - e^(-10(t-3))) U(t - 3).
Section 16: Laplace Transforms of Derivatives and IVPs

Solving a System

We can solve a system of ODEs using Laplace transforms. Here, we’ll


consider systems that are
I linear,
I having initial conditions at t = 0, and
I constant coefficient.
Let’s see it in action (i.e. with a couple of examples).
Section 16: Laplace Transforms of Derivatives and IVPs

Example

Solve the system of equations

    dx/dt = -2x - 2y + 60,   x(0) = 0
    dy/dt = -2x - 5y + 60,   y(0) = 0

We'll use the Laplace transform. Sticking with the usual
uppercase-lowercase convention, let's set

    X(s) = L{x(t)},  and  Y(s) = L{y(t)}.

Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...
    dx/dt = -2x - 2y + 60,   x(0) = 0
    dy/dt = -2x - 5y + 60,   y(0) = 0

Applying the transform to both sides of both equations

    sX(s) - x(0) = -2X(s) - 2Y(s) + 60/s
    sY(s) - y(0) = -2X(s) - 5Y(s) + 60/s

We can substitute in the initial conditions, and rearrange the equations
to get an algebraic system

    (s + 2)X(s) + 2Y(s) = 60/s
    2X(s) + (s + 5)Y(s) = 60/s
Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...

    (s + 2)X(s) + 2Y(s) = 60/s
    2X(s) + (s + 5)Y(s) = 60/s

We can solve this system in any number of ways. For those familiar
with it, Cramer's Rule is probably the easiest approach. Elimination will
work just as well. We find

    X(s) = 60(s + 3) / (s(s + 1)(s + 6))
    Y(s) = 60 / ((s + 1)(s + 6))

As is usually the case, a partial fraction decomposition will give us a
form from which we can take the inverse transform using the table.
Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...

Upon the decomposition, we have

    X(s) = 30/s - 24/(s + 1) - 6/(s + 6)
    Y(s) = 12/(s + 1) - 12/(s + 6)

Finally, we take the inverse transform to obtain the solution to the
system.

    x(t) = L^(-1){X(s)} = 30 - 24e^(-t) - 6e^(-6t)
    y(t) = L^(-1){Y(s)} = 12e^(-t) - 12e^(-6t)
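As with a single equation, the pair (x, y) can be verified directly against the system; the hand-computed derivatives below make the check exact up to roundoff.

```python
import math

x  = lambda t: 30 - 24*math.exp(-t) - 6*math.exp(-6*t)
y  = lambda t: 12*math.exp(-t) - 12*math.exp(-6*t)
xp = lambda t: 24*math.exp(-t) + 36*math.exp(-6*t)    # x'(t)
yp = lambda t: -12*math.exp(-t) + 72*math.exp(-6*t)   # y'(t)

assert abs(x(0)) < 1e-9 and abs(y(0)) < 1e-9          # initial conditions
for t in (0.0, 0.3, 1.0, 2.5):
    assert abs(xp(t) - (-2*x(t) - 2*y(t) + 60)) < 1e-9
    assert abs(yp(t) - (-2*x(t) - 5*y(t) + 60)) < 1e-9
```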
Section 16: Laplace Transforms of Derivatives and IVPs

Example
Use the Laplace transform to solve the system of equations

    x''(t) = y,   x(0) = 1,  x'(0) = 0
    y'(t) = x,    y(0) = 1

This system is second order. Again, using the upper-lowercase
convention, we take the Laplace transform of both equations to obtain

    s^2 X(s) - sx(0) - x'(0) = Y(s)
    sY(s) - y(0) = X(s)

As before, we substitute in the given initial conditions and rearrange
the equations.

    s^2 X(s) - Y(s) = s
    -X(s) + sY(s) = 1
Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...

Using some method to solve for X and Y, we obtain

    X(s) = (s^2 + 1)/(s^3 - 1) = (s^2 + 1)/((s - 1)(s^2 + s + 1))
    Y(s) = (s^2 + s)/(s^3 - 1) = s(s + 1)/((s - 1)(s^2 + s + 1))

The right-most expressions come from factoring the difference of
cubes. The decomposition is a bit more tedious. It will be useful to
complete the square on the quadratic factor in the denominator.

    s^2 + s + 1 = (s + 1/2)^2 + 3/4.
Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...

With a little effort, we obtain the decomposition

    X(s) = (2/3)/(s - 1) + (1/3)(s - 1)/(s^2 + s + 1)
    Y(s) = (2/3)/(s - 1) + (1/3)(s + 2)/(s^2 + s + 1)

We use the completed square along with s - 1 = (s + 1/2) - 3/2 and
s + 2 = (s + 1/2) + 3/2. We see that the shift in s result is going to be
used. It's also useful to note that 3/4 = (√3/2)^2.
Section 16: Laplace Transforms of Derivatives and IVPs

Example Continued...

    X(s) = (2/3)/(s - 1) + (1/3)(s + 1/2)/((s + 1/2)^2 + 3/4) - (1/2)/((s + 1/2)^2 + 3/4)

    Y(s) = (2/3)/(s - 1) + (1/3)(s + 1/2)/((s + 1/2)^2 + 3/4) + (1/2)/((s + 1/2)^2 + 3/4)

We finally take the inverse Laplace transform using the table,
x(t) = L^(-1){X(s)} and y(t) = L^(-1){Y(s)}, and obtain the solution

    x(t) = (2/3)e^t + (1/3)e^(-t/2) cos(√3 t/2) - (1/√3) e^(-t/2) sin(√3 t/2)

    y(t) = (2/3)e^t + (1/3)e^(-t/2) cos(√3 t/2) + (1/√3) e^(-t/2) sin(√3 t/2)
Section 17: Fourier Series: Trigonometric Series

Section 17: Fourier Series: Trigonometric Series

Consider the following problem:

An undamped spring mass system has a mass of 2 kg attached to a
spring with spring constant 128 N/m. The mass is driven by an
external force f(t) = 2t for -1 < t < 1 that is 2-periodic so that
f(t + 2) = f(t) for all t > 0.

    Figure: 2 d^2x/dt^2 + 128x = f(t)

Question: How can we solve a problem like this with a right side with
infinitely many pieces?
Section 17: Fourier Series: Trigonometric Series

Motivation for Fourier Series

Various applications in science and engineering involve periodic


forcing or complex signals that can be considered sums of more
elementary parts (e.g. harmonics).

I Signal processing (decomposing/reconstructing sound waves or


voltage inputs)
I Control theory (qualitative assessment/control of dynamics)
I Approximation of forces or solutions of differential equations

A variety of interesting waveforms (periodic curves) arise in


applications and can be expressed by series representations.
Section 17: Fourier Series: Trigonometric Series

Common Models of Periodic Sources (e.g. Voltage)

Figure: We’d like to solve, or at least approximate solutions, to ODEs and


PDEs with periodic right hand sides.
Section 17: Fourier Series: Trigonometric Series

Series Representations for Functions


The goal is to represent a function by a series

    f(x) = Σ_{n=1}^∞ (some simple functions)

In calculus, you saw power series f(x) = Σ_{n=0}^∞ a_n (x - c)^n where the
simple functions were powers (x - c)^n.

Here, you will see how some functions can be written as series of
trigonometric functions

    f(x) = Σ_{n=0}^∞ (a_n cos nx + b_n sin nx)

We'll move the n = 0 term to the front before the rest of the sum.
Section 17: Fourier Series: Trigonometric Series

Some Preliminary Concepts


Suppose two functions f and g are integrable on the interval [a, b]. We
define the inner product of f and g on [a, b] as

    ⟨f, g⟩ = ∫_a^b f(x)g(x) dx.

We say that f and g are orthogonal on [a, b] if

    ⟨f, g⟩ = 0.

The product depends on the interval, so the orthogonality of two
functions depends on the interval.
Section 17: Fourier Series: Trigonometric Series

Properties of an Inner Product

Let f, g, and h be integrable functions on the appropriate interval and
let c be any real number. The following hold:

(i) ⟨f, g⟩ = ⟨g, f⟩
(ii) ⟨f, g + h⟩ = ⟨f, g⟩ + ⟨f, h⟩
(iii) ⟨cf, g⟩ = c⟨f, g⟩
(iv) ⟨f, f⟩ ≥ 0 and ⟨f, f⟩ = 0 if and only if f = 0


Section 17: Fourier Series: Trigonometric Series

Orthogonal Set
A set of functions {φ_0(x), φ_1(x), φ_2(x), ...} is said to be orthogonal on
an interval [a, b] if

    ⟨φ_m, φ_n⟩ = ∫_a^b φ_m(x) φ_n(x) dx = 0  whenever m ≠ n.

Note that any function φ(x) that is not identically zero will satisfy

    ⟨φ, φ⟩ = ∫_a^b φ^2(x) dx > 0.

Hence we define the square norm of φ (on [a, b]) to be

    ||φ|| = sqrt( ∫_a^b φ^2(x) dx ).
Section 17: Fourier Series: Trigonometric Series

An Orthogonal Set of Functions


Consider the set of functions

    {1, cos x, cos 2x, cos 3x, ..., sin x, sin 2x, sin 3x, ...} on [-π, π].

It can easily be verified that

    ∫_{-π}^{π} cos nx dx = 0  and  ∫_{-π}^{π} sin mx dx = 0  for all n, m ≥ 1,

    ∫_{-π}^{π} cos nx sin mx dx = 0  for all m, n ≥ 1, and

    ∫_{-π}^{π} cos nx cos mx dx = ∫_{-π}^{π} sin nx sin mx dx = { 0, m ≠ n
                                                                { π, n = m.
Section 17: Fourier Series: Trigonometric Series

An Orthogonal Set of Functions

These integral values indicate that the set of functions

    {1, cos x, cos 2x, cos 3x, ..., sin x, sin 2x, sin 3x, ...}

is an orthogonal set on the interval [-π, π].

This set can be generalized by using a simple change of variables
t = πx/p to obtain the orthogonal set on [-p, p]

    {1, cos(nπx/p), sin(mπx/p) | n, m ∈ N}

There are many interesting and useful orthogonal sets of functions (on
appropriate intervals). What follows is readily extended to other such
(infinite) sets.
Section 17: Fourier Series: Trigonometric Series

Fourier Series

Suppose f(x) is defined for -π < x < π. We would like to know how to
write f as a series in terms of sines and cosines.

Task: Find coefficients (numbers) a_0, a_1, a_2, ... and b_1, b_2, ... such
that^24

    f(x) = a_0/2 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx).

^24 We'll write a_0/2 as opposed to a_0 purely for convenience.
Section 17: Fourier Series: Trigonometric Series

Fourier Series


    f(x) = a_0/2 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx).

The question of convergence naturally arises when we wish to work
with infinite series. To highlight convergence considerations, some
authors prefer not to use the equal sign when expressing a Fourier
series and instead write

    f(x) ~ a_0/2 + ···

Herein, we'll use the equal sign with the understanding that equality
may not hold at each point. Convergence will be addressed later.
Section 17: Fourier Series: Trigonometric Series

Finding an Example Coefficient


For a known function f defined on (-π, π), assume the series holds.
We'll find the coefficient b_4. Multiply both sides by sin 4x:

    f(x) sin 4x = (a_0/2) sin 4x + Σ_{n=1}^∞ (a_n cos nx sin 4x + b_n sin nx sin 4x).

Now integrate both sides with respect to x from -π to π (assume it is
valid to integrate first and sum later).

    ∫_{-π}^{π} f(x) sin 4x dx = (a_0/2) ∫_{-π}^{π} sin 4x dx +

        Σ_{n=1}^∞ ( a_n ∫_{-π}^{π} cos nx sin 4x dx + b_n ∫_{-π}^{π} sin nx sin 4x dx ).
Section 17: Fourier Series: Trigonometric Series

    ∫_{-π}^{π} f(x) sin 4x dx = (a_0/2) ∫_{-π}^{π} sin 4x dx +

        Σ_{n=1}^∞ ( a_n ∫_{-π}^{π} cos nx sin 4x dx + b_n ∫_{-π}^{π} sin nx sin 4x dx ).

Now we use the known orthogonality property. Recall that
∫_{-π}^{π} sin 4x dx = 0, and that for every n = 1, 2, ...

    ∫_{-π}^{π} cos nx sin 4x dx = 0
Section 17: Fourier Series: Trigonometric Series

So the constant and all cosine terms are gone, leaving

    ∫_{-π}^{π} f(x) sin 4x dx = Σ_{n=1}^∞ ( b_n ∫_{-π}^{π} sin nx sin 4x dx ).

But we also know that

    ∫_{-π}^{π} sin nx sin 4x dx = 0 for n ≠ 4,  and  ∫_{-π}^{π} sin 4x sin 4x dx = π.

Hence the sum reduces to the single term

    ∫_{-π}^{π} f(x) sin 4x dx = π b_4

from which we determine

    b_4 = (1/π) ∫_{-π}^{π} f(x) sin 4x dx.
Section 17: Fourier Series: Trigonometric Series

Note that there was nothing special about seeking the 4th sine
coefficient b_4. We could have just as easily sought b_m for any positive
integer m. We would simply start by introducing the factor sin(mx).
Moreover, using the same orthogonality property, we could pick off the
a's by starting with the factor cos(mx)—including the constant term
since cos(0 · x) = 1. The only minor difference we want to be aware of
is that

    ∫_{-π}^{π} cos^2(mx) dx = { 2π, m = 0
                              { π,  m ≥ 1

Careful consideration of this sheds light on why it is conventional to
take the constant to be a_0/2 as opposed to just a_0.
Section 17: Fourier Series: Trigonometric Series

The Fourier Series of f (x) on (−π, π)

The Fourier series of the function f defined on (-π, π) is given by

    f(x) = a_0/2 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx),

where

    a_0 = (1/π) ∫_{-π}^{π} f(x) dx,

    a_n = (1/π) ∫_{-π}^{π} f(x) cos nx dx,  and

    b_n = (1/π) ∫_{-π}^{π} f(x) sin nx dx
Section 17: Fourier Series: Trigonometric Series

Example
Find the Fourier series of the piecewise defined function

    f(x) = { 0,  -π < x < 0
           { x,  0 ≤ x < π

We can find the coefficients by using the integral formulas given.

    a_0 = (1/π) ∫_{-π}^{π} f(x) dx = (1/π) ∫_{-π}^{0} 0 dx + (1/π) ∫_{0}^{π} x dx = π/2.

    a_n = (1/π) ∫_{-π}^{π} f(x) cos(nx) dx = (1/π) ∫_{0}^{π} x cos(nx) dx
        = (cos(nπ) - 1)/(πn^2).

    b_n = (1/π) ∫_{-π}^{π} f(x) sin(nx) dx = (1/π) ∫_{0}^{π} x sin(nx) dx = -cos(nπ)/n.
Section 17: Fourier Series: Trigonometric Series

Example Continued...

It is convenient to use the relation cos(nπ) = (-1)^n—this comes up
frequently in computing Fourier series. Using the determined
coefficients we have

    f(x) = a_0/2 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx)

         = π/4 + Σ_{n=1}^∞ [ ((-1)^n - 1)/(πn^2) cos nx + ((-1)^(n+1)/n) sin nx ]
Section 17: Fourier Series: Trigonometric Series

Fourier Series on an interval (−p, p)


   
The set of functions {1, cos(nπx/p), sin(mπx/p) | n, m ≥ 1} is orthogonal
on [-p, p]. Moreover, we have the properties

    ∫_{-p}^{p} cos(nπx/p) dx = 0  and  ∫_{-p}^{p} sin(mπx/p) dx = 0  for all n, m ≥ 1,

    ∫_{-p}^{p} cos(nπx/p) sin(mπx/p) dx = 0  for all m, n ≥ 1,

    ∫_{-p}^{p} cos(nπx/p) cos(mπx/p) dx = { 0, m ≠ n
                                          { p, n = m,

    ∫_{-p}^{p} sin(nπx/p) sin(mπx/p) dx = { 0, m ≠ n
                                          { p, n = m.
Section 17: Fourier Series: Trigonometric Series

Fourier Series on an interval (−p, p)


The orthogonality relations provide for an expansion of a function f
defined on (-p, p) as

    f(x) = a_0/2 + Σ_{n=1}^∞ [ a_n cos(nπx/p) + b_n sin(nπx/p) ],

where

    a_0 = (1/p) ∫_{-p}^{p} f(x) dx,

    a_n = (1/p) ∫_{-p}^{p} f(x) cos(nπx/p) dx,  and

    b_n = (1/p) ∫_{-p}^{p} f(x) sin(nπx/p) dx
Section 17: Fourier Series: Trigonometric Series

Find the Fourier series of f



    f(x) = { 1,  -1 < x < 0
           { -2, 0 ≤ x < 1
Section 17: Fourier Series: Trigonometric Series

Example
We apply the given formulas to find the coefficients. Noting that f is
defined on the interval (-1, 1), we have p = 1.

    a_0 = ∫_{-1}^{1} f(x) dx = ∫_{-1}^{0} dx + ∫_{0}^{1} (-2) dx = -1

    a_n = ∫_{-1}^{1} f(x) cos(nπx) dx
        = ∫_{-1}^{0} cos(nπx) dx + ∫_{0}^{1} (-2) cos(nπx) dx = 0

    b_n = ∫_{-1}^{1} f(x) sin(nπx) dx
        = ∫_{-1}^{0} sin(nπx) dx + ∫_{0}^{1} (-2) sin(nπx) dx = 3((-1)^n - 1)/(nπ)
Section 17: Fourier Series: Trigonometric Series

Example Continued...
Putting the coefficients into the expansion, we get

    f(x) = -1/2 + Σ_{n=1}^∞ [3((-1)^n - 1)/(nπ)] sin(nπx).

This example raises an interesting question: The function f is not
continuous on the interval (-1, 1). However, each term in the Fourier
series, and any partial sum thereof, is obviously continuous. This
raises questions about properties (e.g. continuity) of the series. More
to the point, we may ask: what is the connection between f and its
Fourier series at the point of discontinuity?

This is the convergence issue mentioned earlier.


Section 17: Fourier Series: Trigonometric Series

Convergence of the Series

Theorem: If f is continuous at x_0 in (-p, p), then the series converges
to f(x_0) at that point. If f has a jump discontinuity at the point x_0 in
(-p, p), then the series converges in the mean to the average value

    (f(x_0-) + f(x_0+))/2 := (1/2) ( lim_{x→x_0^-} f(x) + lim_{x→x_0^+} f(x) )

at that point.

Periodic Extension:
The series is also defined for x outside of the original domain (-p, p).
The extension to all real numbers is 2p-periodic.
Section 17: Fourier Series: Trigonometric Series

Convergence of the Series



    f(x) = { 1,  -1 < x < 0,      f(x) = -1/2 + Σ_{n=1}^∞ [3((-1)^n - 1)/(nπ)] sin(nπx).
           { -2, 0 ≤ x < 1

Figure: Plot of the infinite sum, the limit for the Fourier series of f . Note that
the basic plot repeats every 2 units, and converges in the mean at each jump.
Section 17: Fourier Series: Trigonometric Series

Find the Fourier Series for f(x) = x, -1 < x < 1

Again the value of p = 1. So the coefficients are

    a_0 = ∫_{-1}^{1} f(x) dx = ∫_{-1}^{1} x dx = 0

    a_n = ∫_{-1}^{1} f(x) cos(nπx) dx = ∫_{-1}^{1} x cos(nπx) dx = 0

    b_n = ∫_{-1}^{1} f(x) sin(nπx) dx = ∫_{-1}^{1} x sin(nπx) dx = 2(-1)^(n+1)/(nπ)
Section 17: Fourier Series: Trigonometric Series

Example Continued...

Having determined the coefficients, we have the Fourier series

    f(x) = Σ_{n=1}^∞ [2(-1)^(n+1)/(nπ)] sin(nπx)

Observation: f is an odd function. It is not surprising, then, that there
are no nonzero constant or cosine terms (which have even symmetry)
in the Fourier series for f.

The following plots show f, f plotted along with some partial sums of
the series, and f along with a partial sum of its series extended outside
of the original domain (-1, 1).
Section 17: Fourier Series: Trigonometric Series

Figure: Plot of f (x) = x for −1 < x < 1


Section 17: Fourier Series: Trigonometric Series

Figure: Plot of f (x) = x for −1 < x < 1 with two terms of the Fourier series.
Section 17: Fourier Series: Trigonometric Series

Figure: Plot of f (x) = x for −1 < x < 1 with 10 terms of the Fourier series
Section 17: Fourier Series: Trigonometric Series

Figure: Plot of f (x) = x for −1 < x < 1 with the Fourier series plotted on
(−3, 3). Note that the series repeats the profile every 2 units. At the jumps,
the series converges to (−1 + 1)/2 = 0.
Section 17: Fourier Series: Trigonometric Series

Figure: Here is a plot of the series (what it converges to). We see the
periodicity and convergence in the mean. Note: A plot like this is determined
by our knowledge of the generating function and Fourier series, not by
analyzing the series itself.
Section 18: Sine and Cosine Series

Section 18: Sine and Cosine Series

Functions with Symmetry

Recall some definitions:


Suppose f is defined on an interval containing x and −x.

If f (−x) = f (x) for all x, then f is said to be even.


If f (−x) = −f (x) for all x, then f is said to be odd.

For example, f (x) = x n is even if n is even and is odd if n is odd. The


trigonometric function g(x) = cos x is even, and h(x) = sin x is odd.
Section 18: Sine and Cosine Series

Integrals on symmetric intervals


If f is an even function on (-p, p), then

    ∫_{-p}^{p} f(x) dx = 2 ∫_{0}^{p} f(x) dx.

If f is an odd function on (-p, p), then

    ∫_{-p}^{p} f(x) dx = 0.
Section 18: Sine and Cosine Series

Products of Even and Odd functions

    Even × Even = Even,  and  Odd × Odd = Even.  While  Even × Odd = Odd.

So, suppose f is even on (-p, p). This tells us that f(x) cos(nx) is
even for all n and f(x) sin(nx) is odd for all n.
And, if f is odd on (-p, p), this tells us that f(x) sin(nx) is even for all
n and f(x) cos(nx) is odd for all n.
Section 18: Sine and Cosine Series

Fourier Series of an Even Function


If f is even on (-p, p), then the Fourier series of f has only constant
and cosine terms. Moreover

    f(x) = a_0/2 + Σ_{n=1}^∞ a_n cos(nπx/p)

where

    a_0 = (2/p) ∫_{0}^{p} f(x) dx

and

    a_n = (2/p) ∫_{0}^{p} f(x) cos(nπx/p) dx.
Section 18: Sine and Cosine Series

Fourier Series of an Odd Function

If f is odd on (-p, p), then the Fourier series of f has only sine terms.
Moreover

    f(x) = Σ_{n=1}^∞ b_n sin(nπx/p)

where

    b_n = (2/p) ∫_{0}^{p} f(x) sin(nπx/p) dx.
Section 18: Sine and Cosine Series

Find the Fourier series of f



    f(x) = { x + π, -π < x < 0
           { π - x, 0 ≤ x < π

An assessment of f (e.g. by plotting) tells us that f is even. So we
know that the Fourier series will not have any sine terms. We can
simplify the work of finding the coefficients by making use of the
symmetry. We have

    a_0 = (2/π) ∫_{0}^{π} f(x) dx = (2/π) ∫_{0}^{π} (π - x) dx = π

    a_n = (2/π) ∫_{0}^{π} f(x) cos(nx) dx = (2/π) ∫_{0}^{π} (π - x) cos(nx) dx
        = 2(1 - (-1)^n)/(n^2 π)
Section 18: Sine and Cosine Series

Example Continued...

The series is therefore

    f(x) = π/2 + Σ_{n=1}^∞ [2(1 - (-1)^n)/(n^2 π)] cos(nx)

By recognizing and using the symmetry, we avoided computing four
integrals—those from -π to 0 and then 0 to π—instead of two to obtain
the a's, as well as computing the integrals for the b's, which would just
end up being zero.
Section 18: Sine and Cosine Series

Half Range Sine and Half Range Cosine Series


Suppose f is only defined for 0 < x < p. We can extend f to the left, to
the interval (-p, 0), as either an even function or as an odd function.
Then we can express f with two distinct series:

Half range cosine series:  f(x) = a_0/2 + Σ_{n=1}^∞ a_n cos(nπx/p)

    where a_0 = (2/p) ∫_{0}^{p} f(x) dx and a_n = (2/p) ∫_{0}^{p} f(x) cos(nπx/p) dx.

Half range sine series:  f(x) = Σ_{n=1}^∞ b_n sin(nπx/p)

    where b_n = (2/p) ∫_{0}^{p} f(x) sin(nπx/p) dx.
Section 18: Sine and Cosine Series

Extending a Function to be Odd

Figure: f (x) = p − x, 0 < x < p together with its odd extension.


Section 18: Sine and Cosine Series

Extending a Function to be Even

Figure: f (x) = p − x, 0 < x < p together with its even extension.


Section 18: Sine and Cosine Series

Find the Half Range Sine Series of f

    f(x) = 2 - x,  0 < x < 2

Here, the value p = 2. Using the formula for the coefficients of the sine
series

    b_n = (2/2) ∫_{0}^{2} f(x) sin(nπx/2) dx

        = ∫_{0}^{2} (2 - x) sin(nπx/2) dx

        = 4/(nπ)

The series is f(x) = Σ_{n=1}^∞ [4/(nπ)] sin(nπx/2).
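The coefficient formula b_n = 4/(nπ) is easy to confirm by direct numeric integration; the helper below is our own illustration.

```python
import math

def bn_numeric(n, m=100000):
    # Midpoint approximation of int_0^2 (2 - x) sin(n pi x / 2) dx,
    # which is the half range sine coefficient for p = 2.
    h = 2.0 / m
    return sum((2 - (k + 0.5)*h) * math.sin(n*math.pi*(k + 0.5)*h/2)
               for k in range(m)) * h

for n in (1, 2, 5):
    assert abs(bn_numeric(n) - 4/(n*math.pi)) < 1e-6
```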
Section 18: Sine and Cosine Series

Find the Half Range Cosine Series of f

    f(x) = 2 - x,  0 < x < 2

Using the formulas for the cosine series

    a_0 = (2/2) ∫_{0}^{2} f(x) dx = ∫_{0}^{2} (2 - x) dx = 2

    a_n = (2/2) ∫_{0}^{2} f(x) cos(nπx/2) dx

        = ∫_{0}^{2} (2 - x) cos(nπx/2) dx

        = 4(1 - (-1)^n)/(n^2 π^2)
Section 18: Sine and Cosine Series

Example Continued...

We can write out the half range cosine series

    f(x) = 1 + Σ_{n=1}^∞ [4(1 - (-1)^n)/(n^2 π^2)] cos(nπx/2).

We have two different series representations for this function each of


which converge to f (x) on the interval (0, 2). The following plots show
graphs of f along with partial sums of each of the series. When we plot
over the interval (−2, 2) we see the two different symmetries. Plotting
over a larger interval such as (−6, 6) we can see the periodic
extensions of the two symmetries.
Section 18: Sine and Cosine Series

Plots of f with Half range series

Figure: f (x) = 2 − x, 0 < x < 2 with 10 terms of the sine series.


Section 18: Sine and Cosine Series

Plots of f with Half range series

Figure: f (x) = 2 − x, 0 < x < 2 with 10 terms of the sine series, and the
series plotted over (−6, 6)
Section 18: Sine and Cosine Series

Plots of f with Half range series

Figure: f (x) = 2 − x, 0 < x < 2 with 5 terms of the cosine series.


Section 18: Sine and Cosine Series

Plots of f with Half range series

Figure: f (x) = 2 − x, 0 < x < 2 with 5 terms of the cosine series, and the
series plotted over (−6, 6)
Section 18: Sine and Cosine Series

Solution of a Differential Equation


An undamped spring-mass system has a mass of 2 kg attached to a
spring with spring constant 128 N/m. The mass is driven by an
external force f(t) = 2t for −1 < t < 1 that is 2-periodic, so that
f(t + 2) = f(t) for all t > 0. Determine a particular solution xp for the
displacement for t > 0.

We are only interested in t > 0, but since f is an odd function that is
2-periodic, we can express it conveniently as a sine series

f(t) = Σ_{n=1}^∞ (4(−1)^{n+1}/(nπ)) sin(nπt).

Our differential equation is therefore

2x'' + 128x = f(t)  =⇒  x'' + 64x = Σ_{n=1}^∞ (2(−1)^{n+1}/(nπ)) sin(nπt).
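Before using this series, it is reassuring to confirm numerically that it really converges to 2t on (−1, 1) and repeats with period 2. A quick throwaway check of my own (the function name is mine, not from the notes):

```python
import math

def forcing_partial(t, terms=4000):
    """Partial sum of the sine series of the odd, 2-periodic forcing
    f(t) = 2t on (-1, 1)."""
    return sum(4.0 * (-1) ** (n + 1) / (n * math.pi) * math.sin(n * math.pi * t)
               for n in range(1, terms + 1))

# Inside (-1, 1) the series converges to 2t ...
assert abs(forcing_partial(0.25) - 0.5) < 1e-2
# ... and 2-periodicity gives the same value one period later.
assert abs(forcing_partial(2.25) - 0.5) < 1e-2
```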


Let us assume that xp can similarly be determined as a sine series
(note that this is the Method of Undetermined Coefficients!) via

xp = Σ_{n=1}^∞ Bn sin(nπt).

To determine the coefficients Bn, we substitute this into the left side of
our DE. Observe that (assuming we can differentiate term by term)

xp'' = Σ_{n=1}^∞ (−n^2 π^2) Bn sin(nπt).



Upon substitution we get

xp'' + 64xp = Σ_{n=1}^∞ (−n^2 π^2) Bn sin(nπt) + 64 Σ_{n=1}^∞ Bn sin(nπt)
            = Σ_{n=1}^∞ (2(−1)^{n+1}/(nπ)) sin(nπt).

Collecting the series on the left side produces

Σ_{n=1}^∞ (64 − n^2 π^2) Bn sin(nπt) = Σ_{n=1}^∞ (2(−1)^{n+1}/(nπ)) sin(nπt).

Finally, comparing coefficients of sin(nπt) for each value of n yields the
formulas for the B's

Bn = 2(−1)^{n+1}/(nπ(64 − n^2 π^2)).
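Since everything here is explicit, we can sanity-check that a truncated xp really does satisfy the equally truncated equation. In this sketch (my own quick check; the helper names are arbitrary), xp'' is approximated by a central difference:

```python
import math

N = 30  # number of series terms kept

def B(n):
    """Coefficients B_n = 2(-1)^{n+1} / (n pi (64 - n^2 pi^2))."""
    return 2.0 * (-1) ** (n + 1) / (n * math.pi * (64 - n ** 2 * math.pi ** 2))

def xp(t):
    """Truncated particular solution."""
    return sum(B(n) * math.sin(n * math.pi * t) for n in range(1, N + 1))

def rhs(t):
    """The same truncation of the forcing series 2(-1)^{n+1}/(n pi) sin(n pi t)."""
    return sum(2.0 * (-1) ** (n + 1) / (n * math.pi) * math.sin(n * math.pi * t)
               for n in range(1, N + 1))

# Central-difference check of xp'' + 64 xp = rhs at a sample time.
t, h = 0.3, 1e-4
xp_dd = (xp(t + h) - 2.0 * xp(t) + xp(t - h)) / h ** 2
assert abs(xp_dd + 64.0 * xp(t) - rhs(t)) < 1e-3
```

The residual is limited only by the finite-difference step, since each kept term satisfies (64 − n²π²)Bn = 2(−1)^{n+1}/(nπ) exactly.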



We should be careful to determine whether our formula is well defined
for every value of n. Since 64 − n^2 π^2 is never zero, our expression is
always valid. The particular solution can now be expressed as

xp = Σ_{n=1}^∞ (2(−1)^{n+1}/(nπ(64 − n^2 π^2))) sin(nπt).

If the specifics of the problem had resulted in a value of n, say nk, for
which Bnk could not be solved (i.e., if ω^2 − nk^2 π^2 = 0, where here
ω^2 = 64), this would indicate a pure resonance term. The above
approach would still yield the remaining B values. The resonance term
would have to be considered separately. We could assume, using the
principle of superposition, that

xp = Ank t cos(nk πt) + Bnk t sin(nk πt) + Σ_{n=1, n≠nk}^∞ Bn sin(nπt).
Table of Laplace Transforms

f(t) = L⁻¹{F(s)}                 F(s) = L{f(t)} = ∫_0^∞ e^{−st} f(t) dt

1                                1/s,  s > 0
t^n,  n = 1, 2, ...              n!/s^{n+1},  s > 0
t^r,  r > −1                     Γ(r+1)/s^{r+1},  s > 0
e^{at}                           1/(s − a),  s > a
sin(kt),  k ≠ 0                  k/(s^2 + k^2),  s > 0
cos(kt)                          s/(s^2 + k^2),  s > 0
e^{at} f(t)                      F(s − a)
U(t − a),  a > 0                 e^{−as}/s,  s > 0
U(t − a) f(t − a),  a > 0        e^{−as} F(s)
U(t − a) g(t),  a > 0            e^{−as} L{g(t + a)}
f'(t)                            s F(s) − f(0)
f^{(n)}(t)                       s^n F(s) − s^{n−1} f(0) − s^{n−2} f'(0) − ··· − f^{(n−1)}(0)
t f(t)                           −(d/ds) F(s)
t^n f(t),  n = 1, 2, ...         (−1)^n (d^n/ds^n) F(s)

L{α f(t) + β g(t)} = α L{f(t)} + β L{g(t)}
L⁻¹{α F(s) + β G(s)} = α L⁻¹{F(s)} + β L⁻¹{G(s)}
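The table entries can be spot-checked by truncating the defining integral at a large upper limit. The sketch below (a quick check of my own, not part of the table; the helper name is arbitrary) verifies the e^{at} entry and the t·f(t) rule with the trapezoid rule:

```python
import math

def laplace(f, s, T=30.0, steps=200000):
    """Trapezoid-rule approximation of L{f}(s) = integral_0^T e^{-st} f(t) dt.
    T is chosen large enough that the neglected tail is insignificant."""
    h = T / steps
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, steps):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

a, s = 1.0, 3.0
# Table entry: L{e^{at}} = 1/(s - a) for s > a.
assert abs(laplace(lambda t: math.exp(a * t), s) - 1.0 / (s - a)) < 1e-4
# Rule t f(t) -> -dF/ds applied to F(s) = 1/(s - a): L{t e^{at}} = 1/(s - a)^2.
assert abs(laplace(lambda t: t * math.exp(a * t), s) - 1.0 / (s - a) ** 2) < 1e-4
```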
Appendix: Cramer's Rule

Cramer's rule is an approach to solving a linear system of equations
under the conditions that
a. the number of equations matches the number of unknowns (i.e.,
the system is square), and
b. the system is uniquely solvable (i.e., there is exactly one solution).

While Cramer's rule can be used with any size system, we'll restrict
ourselves to the 2 × 2 case. We obtain the solution in terms of ratios of
determinants. First, let's see how the method plays out in general, and
then we illustrate with an example.

Note: Cramer's rule will produce the same solution as any other
approach. Its advantage is its computational simplicity (which gets
lost as the system grows larger).


We begin with a 2 × 2 (two equations in two variables) system

ax + by = e
cx + dy = f

The unknowns are x and y, and the parameters a, b, c, d, e, and f are
constants.[25]

We're going to form 3 matrices. The first is the coefficient matrix for the
system. I'll call that A. So

    A = [ a  b ].
        [ c  d ]

[25] We can allow any of a through f to be unknown parameters, but they don't depend
on x or y, so they will still be considered constant.

ax + by = e
cx + dy = f

Next, we form two more matrices that I'll call Ax and Ay. These
matrices are obtained by replacing one column of A with the values
from the right side of the system. For Ax we replace the first column
(the one with x's coefficients), and for Ay we replace the second
column (the one with y's coefficients). We have
   
    Ax = [ e  b ]    and    Ay = [ a  e ].
         [ f  d ]                [ c  f ]

ax + by = e
cx + dy = f

Now we have the three matrices


     
    A = [ a  b ],    Ax = [ e  b ],    and    Ay = [ a  e ].
        [ c  d ]          [ f  d ]                 [ c  f ]

The condition that the system is uniquely solvable[26] guarantees that
det(A) ≠ 0. Here's the punch line: the solution to the system is

x = det(Ax)/det(A)    and    y = det(Ay)/det(A).

[26] This is a well-known result that can be found in any elementary discussion of
linear algebra.

Let’s look at a simple example. Solve the system of equations

2x − 3y = −4
3x + 7y = 2

Let’s form the three matrices and verify that the determinant of the
coefficient matrix isn’t zero.
     
    A = [ 2  −3 ],    Ax = [ −4  −3 ],    and    Ay = [ 2  −4 ].
        [ 3   7 ]          [  2   7 ]                 [ 3   2 ]

det(A) = 2(7) − (−3)(3) = 23.



Okay, we can proceed by finding the determinants of the two matrices


Ax and Ay . We get

det(Ax ) = −4(7) − 2(−3) = −22 and det(Ay ) = 2(2) − 3(−4) = 16.

Together with det(A) = 23, the solution to the system is

x = −22/23    and    y = 16/23.

It's worth taking a moment to substitute those values back into the
system to verify that they do indeed satisfy both equations. Looking at
the solution, it's not hard to imagine that finding it by substitution or
elimination would be more tedious.
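That verification is easy to automate. The sketch below (my own illustration; the function name and the singular-matrix handling are choices of mine, not from the notes) implements the 2 × 2 rule and checks the example:

```python
def cramer_2x2(a, b, c, d, e, f):
    """Solve  ax + by = e,  cx + dy = f  by Cramer's rule.

    Raises ValueError when det(A) = 0, in which case the rule
    does not apply (the system is not uniquely solvable)."""
    det_A = a * d - b * c
    if det_A == 0:
        raise ValueError("coefficient matrix is singular; Cramer's rule does not apply")
    det_Ax = e * d - b * f      # first column of A replaced by (e, f)
    det_Ay = a * f - e * c      # second column of A replaced by (e, f)
    return det_Ax / det_A, det_Ay / det_A

# The example system 2x - 3y = -4, 3x + 7y = 2:
x, y = cramer_2x2(2, -3, 3, 7, -4, 2)
assert abs(x - (-22 / 23)) < 1e-12 and abs(y - 16 / 23) < 1e-12
# Substituting back confirms both equations are satisfied.
assert abs(2 * x - 3 * y + 4) < 1e-12 and abs(3 * x + 7 * y - 2) < 1e-12
```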

This process can be extended in the obvious way to larger systems of


equations provided they are square and uniquely solvable. You form
the coefficient matrix. Then for each variable, form another matrix by
replacing that variable’s coefficient column with the values on the right
side of the system. Each variable’s solution value will be the ratio of
the corresponding determinants.

For larger systems (perhaps bigger than 3 × 3), one must weigh the
computational cost of computing determinants against that of other
options such as elimination or substitution.
down if the coefficient matrix has zero determinant. The system may
have solutions (or not), but another approach is needed to characterize
them.
