MATH 350
LECTURE NOTES
5 Laplace Transforms
  5.0.5 Table of Laplace Transforms
  5.0.6 Properties of Laplace Transforms
  5.0.7 Inverse Laplace Transforms
  5.0.8 Solving IVPs Using Laplace Transforms
  5.0.9 IVPs with Non-zero Initial Conditions
  5.0.10 Transforms of Discontinuous Functions
  5.0.11 Properties
\[ \frac{\partial^2 u}{\partial x^2} + \frac{\partial v}{\partial x} = 0, \qquad \frac{\partial \phi}{\partial x} + b\,\frac{\partial \phi}{\partial y} = 0. \]
The order of a differential equation is the order of the highest derivative
appearing in the equation. Examples of first order equations are
\[ \frac{dx}{dt} + x = 0, \quad\text{and}\quad \frac{dy}{dx} + 3xy = 0, \]
and the following are second order equations:
\[ \frac{d^2 y}{dx^2} + cy = 0, \quad\text{and}\quad \frac{d^2 x}{dt^2} + \left(\frac{dx}{dt}\right)^{3} = 0, \]
where it is assumed that the equation holds for all x in an open interval
(a < x < b, where a or b could be infinite). If the highest-order term can be
isolated, then we can write (1.6) in the explicit form
\[ \frac{d^n y}{dx^n} = f\!\left(x, y, \frac{dy}{dx}, \cdots, \frac{d^{n-1}y}{dx^{n-1}}\right). \tag{1.7} \]
\[ \frac{dy}{dx} + y = x, \]
where c is a constant.
\[ \frac{d^2 y}{dx^2} - \frac{2}{x^2}y = (2 - 2x^{-3}) - \frac{2}{x^2}(x^2 - x^{-1}) = (2 - 2x^{-3}) - (2 - 2x^{-3}) = 0. \]
Since this is true for any x ≠ 0, the function φ(x) = x² − x⁻¹ is an explicit solution to (1.9) on (−∞, 0) and (0, ∞).
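This substitution is easy to reproduce with a computer algebra system. The following sketch (assuming Python with the sympy library is available) plugs φ(x) = x² − x⁻¹ into the left-hand side of (1.9):

    import sympy as sp

    x = sp.symbols('x', nonzero=True)
    phi = x**2 - 1/x

    # Left-hand side of (1.9): y'' - (2/x**2) y, evaluated at y = phi
    print(sp.simplify(phi.diff(x, 2) - (2/x**2)*phi))   # expect 0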
Example 3. Check whether the function
\[ y = \int_0^x \sqrt{1 + t^3}\,dt + C, \qquad -1 < x < \infty, \]
is a solution to the linear equation
\[ \frac{dy}{dx} = \sqrt{1 + x^3} \]
on the given interval, where C is a constant.
or
\[ \frac{dp}{dt} = -Ap(p - p_1), \qquad p(0) = p_0, \tag{1.11} \]
\[ L\frac{d^2 Q}{dt^2} + R\frac{dQ}{dt} + \frac{1}{C}Q = E(t). \tag{1.12} \]
\[ \frac{dy}{dt} = \frac{2p}{\sqrt{n}}\left[ y^{3/2}(1 - y)^{3/2} \right], \tag{1.13} \]
\[ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 x^2\frac{\partial^2 V}{\partial x^2} + rx\frac{\partial V}{\partial x} - rV = 0, \tag{1.14} \]
system of equations
\[ \frac{dx_1}{dt} = Ax_1 - Bx_1 x_2, \qquad \frac{dx_2}{dt} = -Cx_2 + Dx_1 x_2, \]
where A, B, C, D are positive constants.
Method of Solution
To solve
\[ \frac{dy}{dx} = h(y)g(x), \tag{2.2} \]
separate variables to get
\[ \frac{dy}{h(y)} = g(x)\,dx, \qquad h(y) \neq 0. \]
Solution.
\[ \frac{dy}{dx} = y(2 + \sin x), \]
\[ \implies \int\frac{dy}{y} = \int(2 + \sin x)\,dx, \]
\[ \implies \ln|y| = 2x - \cos x + C_1, \]
\[ \implies y = e^{2x - \cos x + C_1} = e^{C_1}e^{2x - \cos x}, \]
\[ \therefore y = Ke^{2x - \cos x}. \]
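A quick way to confirm the result is to substitute the claimed family y = K e^{2x − cos x} back into the equation, for instance with sympy (a sketch; K denotes the arbitrary constant):

    import sympy as sp
    from sympy.solvers.ode import checkodesol

    x, K = sp.symbols('x K')
    y = sp.Function('y')

    ode = sp.Eq(y(x).diff(x), y(x)*(2 + sp.sin(x)))
    sol = sp.Eq(y(x), K*sp.exp(2*x - sp.cos(x)))
    print(checkodesol(ode, sol))   # expect (True, 0)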
Example 7. Solve
(1 + x)dy − ydx = 0
Solution.
\[ (1 + x)\,dy - y\,dx = 0 \]
\[ \implies \frac{dy}{y} = \frac{1}{1+x}\,dx \implies \int\frac{dy}{y} = \int\frac{1}{1+x}\,dx \]
\[ \implies \ln|y| = \ln|1+x| + C_1 \implies \ln\left|\frac{y}{1+x}\right| = C_1 \]
\[ \implies \frac{y}{1+x} = \pm e^{C_1} = C \implies y = C(1+x), \]
where C is an arbitrary constant.
Example 8. Solve
\[ \frac{dy}{dx} = y^2 - 9 \]
Solution.
\[ \int\frac{dy}{y^2 - 9} = \int dx, \qquad y \neq \pm 3. \]
Now, we use partial fractions to write:
\[ \frac{1}{y^2 - 9} = \frac{1}{(y-3)(y+3)} \equiv \frac{A}{y-3} + \frac{B}{y+3}, \]
which gives A = 1/6 and B = −1/6, so that
\[ \frac{1}{6}\ln|y-3| - \frac{1}{6}\ln|y+3| = x + C_1 \]
\[ \ln\left|\frac{y-3}{y+3}\right| = 6x + C_2, \qquad C_2 = 6C_1 \]
\[ \frac{y-3}{y+3} = \pm e^{C_2}e^{6x} = Ce^{6x} \]
Thus,
\[ y = 3\,\frac{1 + Ce^{6x}}{1 - Ce^{6x}}. \]
where µ(x) is given by equation (2.7). The function µ(x) is called the integrating factor of the differential equation.
\[ \frac{dy}{dx} + 4y = e^{-x} \tag{2.9} \]
\[ \implies e^{4x}y = \int e^{3x}\,dx = \frac{1}{3}e^{3x} + C \]
\[ \implies y = \frac{1}{3}e^{-x} + Ce^{-4x} \]
Applying the initial condition, we have
\[ x = 0,\; y = \frac{4}{3} \implies \frac{4}{3} = \frac{1}{3} + C \implies C = 1 \]
\[ \therefore y = \frac{1}{3}e^{-x} + e^{-4x}. \]
Example 10. Solve the differential equation
\[ x\frac{dy}{dx} - 4y = x^6 e^x \]
Solution. Re-write in a standard form to get
\[ \frac{dy}{dx} - \frac{4}{x}y = x^5 e^x, \qquad x \neq 0 \]
\[ \mu = e^{-\int\frac{4}{x}\,dx} = e^{-4\ln|x|} = e^{\ln x^{-4}} = x^{-4} \]
\[ \implies x^{-4}\frac{dy}{dx} - 4x^{-5}y = xe^x \]
\[ \implies \frac{d}{dx}\left(x^{-4}y\right) = xe^x \]
\[ \implies x^{-4}y = \int xe^x\,dx = xe^x - e^x + C = e^x(x-1) + C \]
Thus
\[ x^{-4}y = e^x(x-1) + C \]
\[ \therefore y = x^4 e^x(x-1) + Cx^4. \]
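If sympy is available, the general solution found above can be verified directly (a sketch, with C the arbitrary constant):

    import sympy as sp
    from sympy.solvers.ode import checkodesol

    x, C = sp.symbols('x C')
    y = sp.Function('y')

    ode = sp.Eq(x*y(x).diff(x) - 4*y(x), x**6*sp.exp(x))
    sol = sp.Eq(y(x), x**4*sp.exp(x)*(x - 1) + C*x**4)
    print(checkodesol(ode, sol))   # expect (True, 0)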
EXERCISE 1. Solve
1) \(\dfrac{dy}{dx} - y = e^{3x}\)
2) \(\dfrac{dr}{d\theta} + r\tan\theta = \sec\theta\)
3) \(\dfrac{dy}{dx} = \dfrac{y}{x} + 2x + 1\)
\[ dF(x,y) = \frac{\partial F}{\partial x}dx + \frac{\partial F}{\partial y}dy = M(x,y)\,dx + N(x,y)\,dy \tag{2.11} \]
in a rectangle R, where
\[ M(x,y) = \frac{\partial F}{\partial x}, \qquad N(x,y) = \frac{\partial F}{\partial y}. \tag{2.12} \]
Remark. Equation (2.13) shows that the solution to the exact equation is
the level curves
F (x, y) = C
where C is an arbitrary constant. The following theorem gives a simple test
to determine if a given differential form, M (x, y)dx + N (x, y)dy, is exact.
Solution. Let
\[ M = (2xy + 3), \qquad N = (x^2 - 1) \]
\[ \implies \frac{\partial M}{\partial y} = 2x, \quad\text{and}\quad \frac{\partial N}{\partial x} = 2x \]
\[ \implies \frac{\partial M}{\partial y} = 2x = \frac{\partial N}{\partial x} \]
Therefore, the equation is exact. Now
\[ M = \frac{\partial F}{\partial x} = 2xy + 3 \]
\[ \implies F = \int(2xy + 3)\,dx = x^2 y + 3x + g(y). \]
Differentiating with respect to y and equating to N gives x² + g'(y) = x² − 1, so g'(y) = −1 and g(y) = −y. Hence the solution is
\[ x^2 y + 3x - y = C \]
(3) One good check of your solution procedure, in the present case, is to
ensure that the resulting expression for g(y) is a function of only y.
If not, then there is something wrong with your algebra or solution
approach.
\[ \frac{\partial F}{\partial y} = x^2 - 1, \]
such that
\[ F = \int(x^2 - 1)\,dy = x^2 y - y + h(x), \]
\[ \frac{\partial F}{\partial x} = M \implies 2xy + h'(x) = 2xy + 3 \]
\[ \implies h'(x) = 3, \implies h(x) = 3x. \]
Note how the equation for h(x) contains only the variable x, telling us
that we are on the right track. Hence, we get
\[ F(x, y) = x^2 y - y + 3x, \]
which is the same as the expression for F obtained using the first approach. So the solution becomes x²y − y + 3x = C, as expected.
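Both the exactness test and the potential function can be checked mechanically; a short sympy sketch (assuming sympy is installed) is:

    import sympy as sp

    x, y = sp.symbols('x y')

    M = 2*x*y + 3
    N = x**2 - 1
    print(sp.diff(M, y), sp.diff(N, x))    # both 2*x, so the equation is exact

    F = x**2*y + 3*x - y                   # candidate potential function
    print(sp.simplify(sp.diff(F, x) - M))  # expect 0
    print(sp.simplify(sp.diff(F, y) - N))  # expect 0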
So we have
\[ \frac{\partial M}{\partial y} = e^{xy} + xye^{xy} + \frac{1}{y^2} \]
and
\[ \frac{\partial N}{\partial x} = e^{xy} + xye^{xy} + \frac{1}{y^2}. \]
Thus, the equation is exact since
\[ \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}. \]
Let
\[ M = \frac{\partial F}{\partial x} = ye^{xy} - \frac{1}{y} \]
\[ \implies F(x,y) = \int\left(ye^{xy} - \frac{1}{y}\right)dx = e^{xy} - \frac{x}{y} + g(y). \]
Now
\[ N = \frac{\partial F}{\partial y} \implies xe^{xy} + \frac{x}{y^2} + g'(y) = xe^{xy} + \frac{x}{y^2}, \]
\[ \implies g'(y) = 0, \implies g(y) = C_1, \]
where C_1 is a constant. Thus, the solution F(x, y) = C_2 becomes
\[ e^{xy} - \frac{x}{y} + C_1 = C_2 \]
\[ \therefore e^{xy} - \frac{x}{y} = C \]
where C = C_2 − C_1 is a constant. Using the initial condition y(1) = 1 (i.e. x = 1, y = 1), we have
\[ e^{(1)(1)} - \frac{1}{1} = C \implies C = e - 1. \]
Hence, the solution becomes
\[ e^{xy} - \frac{x}{y} = e - 1. \]
Remark (Test). If
Solution approach
From (2.15) we have
\[ \frac{dy}{dx} = G\!\left(\frac{y}{x}\right), \]
where G(y/x) is the transformed version of f(x, y). Let
\[ v = \frac{y}{x} \implies \frac{dy}{dx} = G(v). \]
We can differentiate y = vx using the product rule to get
\[ \frac{dy}{dx} = v + x\frac{dv}{dx} = G(v), \]
which is now a separable equation. To see this, we re-write the equation as
\[ x\frac{dv}{dx} = G(v) - v \implies \frac{dv}{G(v) - v} = \frac{dx}{x}. \]
We then integrate both sides of the equation:
\[ \int\frac{dv}{G(v) - v} = \int\frac{dx}{x}, \]
\[ \frac{dy}{dx} = \frac{(y/x)^2}{(y/x) + (y/x)^2} \]
\[ \frac{dy}{dx} = v + x\frac{dv}{dx} = \frac{v^2}{v + v^2} \]
\[ \implies v + x\frac{dv}{dx} = \frac{v}{1+v} \]
\[ \implies x\frac{dv}{dx} = \frac{v}{1+v} - v = \frac{-v^2}{1+v} \]
\[ \implies \int\frac{1+v}{v^2}\,dv = -\int\frac{1}{x}\,dx \]
\[ \implies \int\left(\frac{1}{v^2} + \frac{1}{v}\right)dv = -\ln|x| + C \]
\[ \implies -\frac{1}{v} + \ln|v| = -\ln|x| + C \]
Substitute v = y/x into the equation above to get
\[ -\frac{x}{y} + \ln\left|\frac{y}{x}\right| = -\ln|x| + C \]
\[ \implies -\frac{x}{y} + \ln|y| - \ln|x| = -\ln|x| + C \]
The term − ln|x| cancels out, so we get
\[ -\frac{x}{y} + \ln|y| = C \implies -x + y\ln|y| = Cy \]
\[ \therefore y\ln|y| = Cy + x. \]
\[ (y^2 - xy)\,dx + x^2\,dy = 0. \]
\[ \frac{dy}{dx} = \frac{xy - y^2}{x^2} = \frac{y}{x} - \left(\frac{y}{x}\right)^2 \implies \frac{dy}{dx} = v - v^2. \]
Differentiating y = vx using the product rule and equating the result to the equation above, we get
\[ v + x\frac{dv}{dx} = v - v^2 \implies x\frac{dv}{dx} = -v^2 \]
\[ \implies -\int\frac{1}{v^2}\,dv = \int\frac{1}{x}\,dx \]
\[ \implies \frac{1}{v} = \ln|x| + C \implies v = \frac{1}{\ln|x| + C} \]
But y = vx, so we get
\[ y = \frac{x}{\ln|x| + C}. \]
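As a check of the last example, the family y = x/(ln|x| + C) can be substituted back into the equation, e.g. with sympy (a sketch, taking x > 0 so that ln|x| = ln x):

    import sympy as sp
    from sympy.solvers.ode import checkodesol

    x = sp.symbols('x', positive=True)
    C = sp.symbols('C')
    y = sp.Function('y')

    ode = sp.Eq(x**2*y(x).diff(x), x*y(x) - y(x)**2)   # (y^2 - xy)dx + x^2 dy = 0, rearranged
    sol = sp.Eq(y(x), x/(sp.log(x) + C))
    print(checkodesol(ode, sol))   # expect (True, 0)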
Solution Approach
Note that when n = 0 or 1, the equation is either linear or separable and can
easily be solved. For other values of n, first divide through by y n to get the
equation
\[ y^{-n}\frac{dy}{dx} + P(x)y^{1-n} = Q(x), \]
and let
\[ v = y^{1-n}. \]
\[ \implies \frac{dv}{dx} = (1-n)y^{-n}\frac{dy}{dx} \implies \frac{1}{1-n}\frac{dv}{dx} = y^{-n}\frac{dy}{dx}, \qquad n \neq 1 \]
Substituting into the modified differential equation, we get
\[ \frac{1}{1-n}\frac{dv}{dx} + P(x)v = Q(x) \implies \frac{dv}{dx} + (1-n)P(x)v = (1-n)Q(x) \]
Note that the above equation is now a linear equation in terms of v since (1 − n) is just a real number. To see this, you can let P₁(x) = (1 − n)P(x) and Q₁(x) = (1 − n)Q(x) and get the linear first order equation:
\[ \frac{dv}{dx} + P_1(x)v = Q_1(x). \]
Example 16. Solve the following equation
\[ \frac{dy}{dx} + \frac{y}{x} = x^2 y^2 \]
Solution.
\[ \frac{dy}{dx} + \frac{y}{x} = x^2 y^2 \]
The y² on the right-hand side tells us that we are dealing with a Bernoulli equation. Dividing through by y², we get
\[ y^{-2}\frac{dy}{dx} + \frac{1}{x}y^{-1} = x^2 \]
Let
\[ v = y^{-1} \implies \frac{dv}{dx} = -y^{-2}\frac{dy}{dx} \]
\[ \implies -\frac{dv}{dx} + \frac{1}{x}v = x^2 \implies \frac{dv}{dx} - \frac{1}{x}v = -x^2. \]
Note that the final equation is a linear equation with integrating factor
\[ \mu = e^{-\int\frac{1}{x}\,dx} = e^{-\ln|x|} = e^{\ln|1/x|} = \frac{1}{x}. \]
Multiplying through by the integrating factor and re-arranging, we get
\[ \frac{d}{dx}\left(\frac{1}{x}v\right) = -x \]
\[ \implies \frac{1}{x}v = -\int x\,dx = -\frac{1}{2}x^2 + C \]
\[ \implies v = -\frac{1}{2}x^3 + Cx \]
Since v = y⁻¹, we get
\[ y = \frac{1}{-\frac{1}{2}x^3 + Cx} = \frac{2}{2Cx - x^3} = \frac{2}{C_1 x - x^3}, \qquad C_1 = 2C. \]
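Because sign slips are easy to make in Bernoulli problems, here is a small sympy sketch that substitutes the solution back into the original equation (C denotes the arbitrary constant):

    import sympy as sp
    from sympy.solvers.ode import checkodesol

    x, C = sp.symbols('x C')
    y = sp.Function('y')

    ode = sp.Eq(y(x).diff(x) + y(x)/x, x**2*y(x)**2)
    sol = sp.Eq(y(x), 2/(2*C*x - x**3))
    print(checkodesol(ode, sol))   # expect (True, 0)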
(i) If b² − 4ac > 0, the roots r₁ and r₂ are real and distinct. Then the solutions are y₁ = e^{r₁t} and y₂ = e^{r₂t}. Combining the solutions gives the general solution
\[ y = C_1 e^{r_1 t} + C_2 e^{r_2 t}, \]
where C₁ and C₂ are unknown constants, often determined from given conditions on the equation.
(ii) If b² − 4ac = 0, then r₁ and r₂ are real repeated roots, and the solutions are given by y₁ = e^{r₁t} and y₂ = te^{r₁t}. The general solution is then given by
\[ y = C_1 e^{r_1 t} + C_2 t e^{r_1 t}. \]
(iii) If b² − 4ac < 0, then r₁ and r₂ are complex conjugate roots such that r₁ = α + iβ and r₂ = α − iβ. The solutions are y₁ = e^{αt}cos(βt) and y₂ = e^{αt}sin(βt), and the general solution is given by
\[ y = C_1 e^{\alpha t}\cos(\beta t) + C_2 e^{\alpha t}\sin(\beta t). \]
(b)
\[ y'' + 8y' + 16y = 0 \]
\[ \implies r^2 + 8r + 16 = 0 \implies (r + 4)^2 = 0 \implies r = -4, -4, \]
which are repeated roots.
\[ \therefore y = C_1 e^{-4t} + C_2 te^{-4t}. \]
(c)
\[ y'' - 6y' + 10y = 0 \]
\[ \implies r^2 - 6r + 10 = 0 \implies r = \frac{6 \pm\sqrt{36 - 40}}{2} = \frac{6 \pm 2i}{2} = 3 \pm i \]
\[ \implies \alpha = 3, \qquad \beta = 1. \]
Thus, the general solution is given by
\[ y = C_1 e^{3t}\cos t + C_2 e^{3t}\sin t. \]
\[ y'' + 2y' - 8y = 0, \qquad y(0) = 3, \quad y'(0) = -12. \]
Solution.
\[ y'' + 2y' - 8y = 0 \implies r^2 + 2r - 8 = 0 \implies (r + 4)(r - 2) = 0 \implies r = -4, \; 2 \]
\[ \implies y = C_1 e^{-4t} + C_2 e^{2t} \]
Applying the initial conditions:
\[ y(0) = 3 \implies C_1 + C_2 = 3, \qquad y'(0) = -12 \implies -4C_1 + 2C_2 = -12, \]
which give C₁ = 3 and C₂ = 0, so that y = 3e^{-4t}.
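The constants can be confirmed with sympy's dsolve (a sketch, assuming sympy is installed):

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    ode = sp.Eq(y(t).diff(t, 2) + 2*y(t).diff(t) - 8*y(t), 0)
    sol = sp.dsolve(ode, y(t), ics={y(0): 3, y(t).diff(t).subs(t, 0): -12})
    print(sol)   # expect y(t) = 3*exp(-4*t)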
Solution Approach
(a) To find a particular solution to the differential equation
with
or
\[ ay'' + by' + cy = kt^m e^{\alpha t}\sin(\beta t), \]
assume the following form for the particular solution:
with
Remark. (1) You may check to see if your answer is correct by sub-
stituting yp into the differential equation.
(2) In this case, we are asked to find only the particular solution.
But the general solution of the equation, if desired, is given by
y = yh + yp :
\[ y = C_1 e^t + C_2 e^{-t} + 11t - 1. \]
(b)
\[ y'' - y = t\sin(t) \]
From the general form (3.6), we see that α = 0 and β = 1 so that α ± iβ = ±i. Substituting y = e^{rt} into the equation results in
\[ r^2 - 1 = 0 \implies r = \pm 1. \]
(c)
\[ 2y'' + y = 9e^{2t} \]
\[ \implies 2r^2 + 1 = 0 \implies r = \pm\frac{i}{\sqrt{2}}. \]
Let
\[ y_p = Ae^{2t} \implies y_p' = 2Ae^{2t} \implies y_p'' = 4Ae^{2t}. \]
Substitute into the original equation to get
\[ 9A = 9 \implies A = 1. \]
Thus
\[ y_p = e^{2t}. \]
\[ y'' + 2y' - 3y = f(t) \]
where f(t) is
Solution.
\[ y'' + 2y' - 3y = 0 \implies r^2 + 2r - 3 = 0 \implies (r + 3)(r - 1) = 0 \implies r = 1, \; -3. \]
\[ y_p = Ate^{-3t}. \]
\[ y_p = t(At + B)e^t. \]
(f) f(t) = t²eᵗ
\[ y_p = t(At^2 + Bt + C)e^t. \]
\[ y'' - 2y' + y = f(t) \]
for the complementary solutions y₁(t) and y₂(t) to get the general solution
\[ y_h = c_1 y_1 + c_2 y_2, \tag{3.9} \]
The idea, due to Lagrange, who invented this method, is to determine the functions w₁(t) and w₂(t) such that (3.10) is a particular solution to (3.7). Since there are two functions to determine, we expect to need two equations (or conditions). In order for (3.10) to satisfy (3.7), we need to find the derivatives of y_p. So from (3.10), we have
\[ y_p' = w_1 y_1' + w_2 y_2' + (w_1' y_1 + w_2' y_2), \]
and we impose
\[ w_1' y_1 + w_2' y_2 = 0, \]
which is actually one of the equations we would need to eventually solve for w₁ and w₂. This assumption implies that
\[ y_p' = w_1 y_1' + w_2 y_2', \qquad y_p'' = w_1 y_1'' + w_1' y_1' + w_2 y_2'' + w_2' y_2'. \]
Substituting into (3.7),
\[ \implies a(w_1 y_1'' + w_1' y_1' + w_2 y_2'' + w_2' y_2') + b(w_1 y_1' + w_2 y_2') + c(w_1 y_1 + w_2 y_2) = f(t), \]
\[ \implies w_1\left[a y_1'' + b y_1' + c y_1\right] + w_2\left[a y_2'' + b y_2' + c y_2\right] + a(w_1' y_1' + w_2' y_2') = f(t). \]
Since y₁ and y₂ are solutions to the homogeneous equation (3.8), the expressions in square brackets vanish and so
\[ a(w_1' y_1' + w_2' y_2') = f(t). \]
(1) Determine the two linearly independent solutions y₁ and y₂ to the corresponding homogeneous equation ay''(t) + by'(t) + cy(t) = 0, and let
\[ y_p = w_1(t)y_1 + w_2(t)y_2. \]
(2) Solve the system
\[ w_1' y_1 + w_2' y_2 = 0, \qquad w_1' y_1' + w_2' y_2' = \frac{f(t)}{a}, \tag{3.21} \]
for w₁' and w₂', and integrate to obtain w₁(t) and w₂(t).
(3) Finally, substitute w₁(t) and w₂(t) into y_p = w₁(t)y₁(t) + w₂(t)y₂(t) to get a particular solution.
\[ y'' + y = \sec(t) \]
\[ y_h = c_1\cos t + c_2\sin t. \]
Multiplying the first equation by sin t and the second by cos t and summing them gives
\[ w_2'(\sin^2 t + \cos^2 t) = \cos t\cdot\sec t = 1 \implies w_2' = 1 \implies w_2(t) = t, \]
since sin²t + cos²t = 1 and sec t = 1/cos t. From the first equation in the system, we get
\[ w_1' = -\frac{\sin t}{\cos t}\,w_2' = -\tan t, \]
\[ w_1 = -\int\tan t\,dt = -\int\frac{\sin t}{\cos t}\,dt = \ln|\cos t|. \]
where the coefficients are now functions of the independent variable t. Unlike equations with constant coefficients, which are amenable to the method of undetermined coefficients or variation of parameters, there are no straightforward methods for constructing explicit solutions in this case. However, there are some interesting aspects of variable-coefficient equations that we would like to study.
There is a theorem for the existence and uniqueness of solutions for
variable-coefficient equations which we will state. We will also take a look
at special equations with variable coefficients (known as Cauchy-Euler or
Equidimensional equations) that can be solved using an approach similar
to what was done for equations with constant coefficients. Finally, we will
state the theorem on the method of “Reduction of Order” which states that,
given one solution, say y1 , to the homogeneous variable-coefficient equation
a(t)y 00 (t) + b(t)y 0 (t) + c(t)y(t) = 0, one can find a second linearly independent
solution, say y2 .
To proceed, we first write (3.22) in standard form by dividing through by
a(t) to get an equation of the form
Of course, this supposes that a(t) ≠ 0; otherwise the whole structure of the
equation changes from a second order to a first order equation. The following
theorem guarantees the existence and uniqueness of solutions to equations
with variable coefficients:
The theorem may be used to determine the largest interval for which a unique
solution to the differential equation exists, as demonstrated by the following
example.
Example 23. Consider the initial value problem
\[ (t-2)y''(t) + y'(t) + t\sqrt{t}\,y(t) = \ln t; \qquad y(1) = 2, \quad y'(1) = -3. \]
Use Theorem 2 to determine the largest interval for which a unique solution exists.
Solution. Writing the equation in standard form gives
\[ y''(t) + \frac{1}{t-2}y'(t) + \frac{t\sqrt{t}}{t-2}y(t) = \frac{\ln t}{t-2}, \]
such that
\[ p(t) = \frac{1}{t-2}, \qquad q(t) = \frac{t\sqrt{t}}{t-2}, \qquad f(t) = \frac{\ln t}{t-2}. \]
We see that the functions are all continuous on the intervals 0 < t < 2
and 2 < t < ∞. However, the interval 2 < t < ∞ does not contain the
initial point t0 = 1 so we neglect it. Thus, the Theorem guarantees a unique
solution in the interval 0 < t < 2.
Remark. Note from the example above that q(t) does not exist for t < 0, nor does f(t) exist for t ≤ 0. Also note that the Theorem guarantees the existence of a unique solution but does not provide the solution nor the interval of existence of the solution.
Lemma 1 (Linear Dependence of Solutions). If y₁(t) and y₂(t) are any two solutions to
\[ y''(t) + p(t)y'(t) + q(t)y(t) = 0, \tag{3.24} \]
on an interval I where p(t) and q(t) are continuous, and if the Wronskian
\[ W(y_1, y_2)(t) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix} = y_1 y_2' - y_1' y_2 \]
is zero at any point t of I, then y₁ and y₂ are linearly dependent on I.
Theorem 3. Suppose y1 (t) and y2 (t) are any two linearly independent
solutions to the IVP
with solution
\[ w_1 = -\int\frac{f(t)y_2}{W(y_1, y_2)}\,dt, \quad\text{and}\quad w_2 = \int\frac{f(t)y_1}{W(y_1, y_2)}\,dt. \tag{3.28} \]
Example 24. Find a general solution to the following IVPs using variation of
parameters, given that y1 and y2 are solutions to the associated homogeneous
equation.
(a) y₁ = eᵗ, y₂ = t + 1;  (b) y₁ = 5t − 1, y₂ = e^{-5t}.
Solution. (a)
\[ ty'' - (t+1)y' + y = t^2, \qquad y_1 = e^t, \quad y_2 = t + 1. \]
It is easy to check that y₁ and y₂ are indeed solutions to the homogeneous equation ty'' − (t + 1)y' + y = 0:
Also
\[ y'' - \frac{(t+1)}{t}y' + \frac{1}{t}y = t, \qquad t \neq 0. \]
\[ W(y_1, y_2)(t) = \begin{vmatrix} e^t & t+1 \\ e^t & 1 \end{vmatrix} = e^t - te^t - e^t = -te^t. \]
Let
\[ u = t + 1, \quad dv = e^{-t}\,dt \implies du = dt, \quad v = \int e^{-t}\,dt = -e^{-t}, \]
\[ \implies w_1(t) = -(t+1)e^{-t} + \int e^{-t}\,dt \]
\[ y = c_1 e^t + c_2(t + 1) - (t^2 + 2t + 2). \]
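A short sympy sketch can confirm that y₁ = eᵗ and y₂ = t + 1 solve the homogeneous equation and that y_p = −(t² + 2t + 2) is a particular solution of ty'' − (t + 1)y' + y = t²:

    import sympy as sp

    t = sp.symbols('t')
    L = lambda f: t*sp.diff(f, t, 2) - (t + 1)*sp.diff(f, t) + f

    print(sp.simplify(L(sp.exp(t))))                  # expect 0 (homogeneous solution y1)
    print(sp.simplify(L(t + 1)))                      # expect 0 (homogeneous solution y2)
    print(sp.simplify(L(-(t**2 + 2*t + 2)) - t**2))   # expect 0 (particular solution)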
So far, we have seen that given two linearly independent solutions y1 and
y2 to the homogeneous equation of a variable-coefficient equation, we can
use the method of variation of parameters to determine a particular solution
and hence the general solution. This is because we have no technique of
determining y1 and y2 in the general case of a variable-coefficient equation.
However, given one of the solutions, say y1 or y2 , the method of reduction of
order can be used to find the other solution. We describe this next.
in an interval I. Then
\[ y_2(t) = y_1(t)\int\frac{e^{-\int p(t)\,dt}}{[y_1(t)]^2}\,dt, \tag{3.30} \]
\[ y_2 = w(t)y_1(t) \implies y_2' = w'y_1 + wy_1' \implies y_2'' = w''y_1 + w'y_1' + w'y_1' + wy_1''. \]
Substituting into (3.29) gives
\[ v = w'. \]
\[ \implies v'y_1 + (2y_1' + py_1)v = 0, \]
\[ \implies \int\frac{dv}{v} = -\int\frac{2y_1' + py_1}{y_1}\,dt, \]
\[ \implies \ln|v| = -\int\left(\frac{2y_1'}{y_1} + p\right)dt, \]
\[ \implies \ln|v| = -2\ln|y_1| - \int p(t)\,dt, \]
\[ \implies \ln|v| = \ln[y_1(t)]^{-2} - \int p(t)\,dt, \]
\[ \implies v = [y_1(t)]^{-2}\cdot e^{-\int p(t)\,dt} = \frac{e^{-\int p(t)\,dt}}{[y_1(t)]^2}. \]
But v = w', so we have
\[ w(t) = \int\frac{e^{-\int p(t)\,dt}}{[y_1(t)]^2}\,dt. \]
Now y₂(t) = y₁(t)w(t),
\[ \therefore y_2(t) = y_1(t)\int\frac{e^{-\int p(t)\,dt}}{[y_1(t)]^2}\,dt. \]
\[ = t^{-1}\cdot\frac{t^5}{5} = \frac{1}{5}t^4. \]
Thus, the two linearly independent solutions to the equation are y₁ = t⁻¹ and y₂ = t⁴/5.
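Formula (3.30) is easy to exercise with sympy. The sketch below applies it to the illustrative constant-coefficient equation y'' − 2y' + y = 0 (not one of the examples above): with y₁ = eᵗ and p(t) = −2 it recovers the familiar second solution y₂ = teᵗ.

    import sympy as sp

    t = sp.symbols('t')

    y1 = sp.exp(t)
    p = -2                      # standard-form coefficient of y' in y'' - 2y' + y = 0

    w = sp.integrate(sp.exp(-sp.integrate(p, t))/y1**2, t)
    print(sp.simplify(y1*w))    # expect t*exp(t)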
We next take a look at special equations called Cauchy-Euler or Euler or Equidimensional equations, for which it is possible to determine both linearly independent solutions y₁ and y₂. In other words, they are special variable-coefficient equations that can be solved explicitly.
\[ y(t) = t^r, \qquad t \neq 0, \tag{3.34} \]
\[ ar(r-1) + br + c = 0, \tag{3.37} \]
such that
\[ r = \frac{-(b-a) \pm\sqrt{(b-a)^2 - 4ac}}{2a}, \tag{3.38} \]
with discriminant
\[ D = \sqrt{(b-a)^2 - 4ac}. \tag{3.39} \]
The solution to the characteristic equation (3.37) yields three different cases depending on the discriminant: real distinct roots, repeated roots, and complex roots, as in the case for constant coefficient equations.
(a) For real distinct roots, r1 and r2 , the solution of the homogeneous
equation is given by
\[ y_h = c_1 t^{r_1} + c_2 t^{r_2}. \]
(b) For a real repeated root r, the solution is
\[ y_h = c_1 t^{r} + c_2 t^{r}\ln t, \qquad t > 0, \]
and this can be verified by directly substituting the solution into the
homogeneous equation.
(c) For a complex root, r = α + iβ (or r = α − iβ), we can write
\[ t^{\alpha + i\beta} = t^{\alpha}t^{i\beta} = t^{\alpha}e^{i\beta\ln t} = t^{\alpha}\left[\cos(\beta\ln t) + i\sin(\beta\ln t)\right], \]
after employing Euler's formula: e^{iθ} = cos θ + i sin θ. Using the real and imaginary parts gives the two independent solutions
\[ y_1 = t^{\alpha}\cos(\beta\ln t), \qquad y_2 = t^{\alpha}\sin(\beta\ln t), \]
such that
\[ y_h = c_1 t^{\alpha}\cos(\beta\ln t) + c_2 t^{\alpha}\sin(\beta\ln t). \]
\[ at^2 y'' + bty' + cy = 0, \qquad t > 0, \]
where
(a) a = 1, b = 1, c = −1.
(b) a = 1, b = −1, c = 5.
(c) a = 1, b = 5, c = 4.
Solution. (a) t²y'' + ty' − y = 0. Using
\[ r = \frac{-(b-a) \pm\sqrt{(b-a)^2 - 4ac}}{2a} \implies r = \frac{0 \pm\sqrt{4}}{2} = \pm 1. \]
So we have two distinct roots, r₁ = 1 and r₂ = −1. Therefore
\[ y_h = c_1 t + c_2 t^{-1}. \]
\[ y_h = c_1 t^{-2} + c_2 t^{-2}\ln t. \]
\[ y = c_1 t^{-1/3} + c_2 t^{-1/3}\ln t. \]
Now,
\[ y(1) = 1 \implies c_1 = 1. \]
\[ y' = -\frac{1}{3}c_1 t^{-4/3} - \frac{1}{3}c_2 t^{-4/3}\ln t + c_2 t^{-1/3}\cdot\frac{1}{t} \]
\[ y'(1) = -\frac{2}{3} \implies -\frac{1}{3} + c_2 = -\frac{2}{3} \]
\[ \implies c_2 = -\frac{2}{3} + \frac{1}{3} = -\frac{1}{3}. \]
\[ \therefore y = t^{-1/3} - \frac{1}{3}t^{-1/3}\ln t. \]
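The values of c₁ and c₂ can be double-checked by solving the two initial-condition equations with sympy (a sketch):

    import sympy as sp

    t = sp.symbols('t', positive=True)
    c1, c2 = sp.symbols('c1 c2')

    y = c1*t**sp.Rational(-1, 3) + c2*t**sp.Rational(-1, 3)*sp.log(t)
    eqs = [sp.Eq(y.subs(t, 1), 1),
           sp.Eq(sp.diff(y, t).subs(t, 1), sp.Rational(-2, 3))]
    print(sp.solve(eqs, [c1, c2]))   # expect {c1: 1, c2: -1/3}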
4.1 Introduction
In this section, we review a few concepts from Calculus needed to understand
the approach of series solutions.
\[ \implies P_n(x) = \sum_{j=0}^{n}\frac{f^{(j)}(x_0)}{j!}(x - x_0)^j. \]
Motivation
Find the first few Taylor Polynomials approximating the solution around
x0 = 0:
\[ y'' = 3y' + x^{7/3}y \]
Solution.
\[ y'' = 3y' + x^{7/3}y \tag{4.1} \]
\[ y(0) = 10, \qquad y'(0) = 5 \]
\[ P_n(x) = y(0) + y'(0)x + \frac{y''(0)}{2!}x^2 + \frac{y'''(0)}{3!}x^3 + \cdots + \frac{y^{(n)}(0)}{n!}x^n. \tag{4.2} \]
Now,
\[ y''(0) = 3y'(0) + 0^{7/3}y(0) = 3(5) + 0 = 15. \]
From (4.1), we have
\[ y''' = 3y'' + \frac{7}{3}x^{4/3}y + x^{7/3}y' \]
\[ \implies y'''(0) = 3y''(0) + \frac{7}{3}\cdot 0^{4/3}\,y(0) + 0^{7/3}\cdot y'(0) = 3(15) = 45 \]
Similarly, we have
\[ y^{(4)} = 3y''' + \frac{28}{9}x^{1/3}y + \frac{7}{3}x^{4/3}y' + \frac{7}{3}x^{4/3}y' + x^{7/3}y'' = 3y''' + x^{7/3}y'' + \frac{14}{3}x^{4/3}y' + \frac{28}{9}x^{1/3}y \]
\[ \implies y^{(4)}(0) = 3y'''(0) + 0^{7/3}\,y''(0) + \frac{14}{3}\cdot 0^{4/3}\,y'(0) + \frac{28}{9}\cdot 0^{1/3}\,y(0) = 3y'''(0) + 0 + 0 + 0 = 3(45) = 135. \]
Also,
\[ y^{(5)} = 3y^{(4)} + \frac{7}{3}x^{4/3}y'' + x^{7/3}y''' + \frac{56}{9}x^{1/3}y' + \frac{14}{3}x^{4/3}y'' + \frac{28}{27}x^{-2/3}y + \frac{28}{9}x^{1/3}y' \]
Note that at x = 0 the fifth derivative y^{(5)} does not exist because of the term (28/27)x^{-2/3}y. So the Taylor polynomial
\[ P_4(x) = 10 + 5x + \frac{15}{2}x^2 + \frac{45}{6}x^3 + \frac{135}{24}x^4 \]
approximates the solution to the differential equation near x = 0.
Power Series
is ρ = 1/L, with ρ = ∞ if L = 0 and ρ = 0 if L = ∞.
Solution.
\[ a_n = \frac{(-2)^n}{n+1}, \]
\[ \implies \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\to\infty}\left|\frac{(-2)^{n+1}}{n+2}\cdot\frac{n+1}{(-2)^n}\right| = \lim_{n\to\infty}\left|\frac{(-2)^n(-2)}{(-2)^n}\right|\cdot\frac{n+1}{n+2} = 2\lim_{n\to\infty}\frac{1 + 1/n}{1 + 2/n} = 2 = L \]
\[ \therefore \rho = \frac{1}{2}. \]
Therefore the series converges for |x − 3| < 1/2.
Theorem 8. If Σ_{n=0}^∞ a_n(x − x₀)ⁿ = 0 for all x in some open interval, then each coefficient a_n equals zero.
Remark. If f(x) = Σ_{n=0}^∞ a_n(x − x₀)ⁿ, then within the radius of convergence we can differentiate and also integrate f(x) so that
\[ f'(x) = \sum_{n=1}^{\infty}na_n(x - x_0)^{n-1}, \qquad |x - x_0| < \rho, \]
\[ \int f(x)\,dx = \sum_{n=0}^{\infty}\frac{a_n}{n+1}(x - x_0)^{n+1} + C, \qquad |x - x_0| < \rho. \]
Analytic Function
where
\[ p(x) = \frac{a_1(x)}{a_2(x)}, \qquad q(x) = \frac{a_0(x)}{a_2(x)}. \]
Solution.
\[ y'' + \frac{x}{x(1-x)}y' + \frac{\sin x}{x}y = 0. \]
Let
\[ p(x) = \frac{x}{x(1-x)} = \frac{1}{1-x}, \qquad q(x) = \frac{\sin x}{x}. \]
Thus, p(x) is analytic except at x = 1. The point x = 0 is called a removable singularity of p(x) since we can cancel out x to get p = 1/(1 − x). Note that q(x) is a ratio of analytic functions, so we write
\[ q(x) = \frac{x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots \]
Therefore q(x) is analytic for all x. Thus, the only singular point of the equation is x = 1.
\[ y' + 2xy = 0. \tag{4.6} \]
Solution.
\[ \implies \sum_{n=1}^{\infty}na_n x^{n-1} + \sum_{n=0}^{\infty}2a_n x^{n+1} = 0. \]
We can let both series have the same power of x by setting k = n − 1 in the first series and k = n + 1 in the second to get
\[ \sum_{k=0}^{\infty}(k+1)a_{k+1}x^{k} + \sum_{k=1}^{\infty}2a_{k-1}x^{k} = 0. \]
Now, to be able to combine the two series, we let both of them start from the same value of k such that
\[ a_1 + \sum_{k=1}^{\infty}(k+1)a_{k+1}x^{k} + \sum_{k=1}^{\infty}2a_{k-1}x^{k} = 0, \]
\[ \implies a_1 + \sum_{k=1}^{\infty}\left[(k+1)a_{k+1} + 2a_{k-1}\right]x^{k} = 0. \]
Now, each coefficient must vanish. Thus
a1 = 0
and we get the recurrence relation from the second term on the left-hand
side:
\[ (k+1)a_{k+1} + 2a_{k-1} = 0, \qquad k \ge 1 \]
\[ \implies a_{k+1} = -\frac{2}{k+1}a_{k-1}, \qquad k \ge 1 \]
Substituting values of k into the recurrence relation gives the following:
\[ k=1: \quad a_2 = -\frac{2}{2}a_0 = -a_0 \]
\[ k=2: \quad a_3 = -\frac{2}{3}a_1 = 0 \]
\[ k=3: \quad a_4 = -\frac{2}{4}a_2 = \frac{1}{2}a_0 \]
\[ k=4: \quad a_5 = -\frac{2}{5}a_3 = 0 \]
\[ k=5: \quad a_6 = -\frac{2}{6}a_4 = -\frac{1}{3}\cdot\frac{1}{2}a_0 = -\frac{1}{3!}a_0 \]
\[ k=6: \quad a_7 = -\frac{2}{7}a_5 = 0 \]
\[ k=7: \quad a_8 = -\frac{2}{8}a_6 = -\frac{1}{4}\cdot\left(-\frac{1}{3!}\right)a_0 = \frac{1}{4!}a_0 \]
\[ k=8: \quad a_9 = -\frac{2}{9}a_7 = 0 \]
\[ \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\to\infty}\left|\frac{(-1)^{n+1}}{(n+1)!}\cdot\frac{n!}{(-1)^n}\right| = \lim_{n\to\infty}\left|\frac{(-1)(-1)^n}{(-1)^n}\cdot\frac{n!}{(n+1)\,n!}\right| = \lim_{n\to\infty}\frac{1}{n+1} = 0 = L \]
\[ \therefore \rho = \frac{1}{L} = \infty. \]
Thus (4.9) converges for all x.
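The closed form behind this series is easy to see with sympy: dsolve gives y = C₁e^{−x²}, and the Maclaurin expansion of e^{−x²} reproduces the coefficients a₂ = −a₀, a₄ = a₀/2, a₆ = −a₀/3!, a₈ = a₀/4! found above (a sketch):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    print(sp.dsolve(sp.Eq(y(x).diff(x) + 2*x*y(x), 0), y(x)))   # y(x) = C1*exp(-x**2)
    print(sp.series(sp.exp(-x**2), x, 0, 9))                    # 1 - x**2 + x**4/2 - x**6/6 + x**8/24 + ...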
Example 31. Find a power series solution of the differential equation about
x = 2:
\[ y'' - y = 0. \tag{4.10} \]
\[ (k+2)(k+1)a_{k+2} - a_k = 0 \implies a_{k+2} = \frac{1}{(k+2)(k+1)}a_k, \qquad k \ge 0. \]
From the relations above, we can establish the following two recurrence relations:
\[ a_{2n} = \frac{1}{(2n)!}a_0, \qquad n = 1, 2, \cdots \]
\[ a_{2n+1} = \frac{1}{(2n+1)!}a_1, \qquad n = 1, 2, \cdots \]
From (4.11) we have
\[ y(x) = a_0 + a_2(x-2)^2 + a_4(x-2)^4 + \cdots + a_1(x-2) + a_3(x-2)^3 + a_5(x-2)^5 + \cdots \tag{4.13} \]
\[ = a_0\sum_{n=0}^{\infty}\frac{1}{(2n)!}(x-2)^{2n} + a_1\sum_{n=0}^{\infty}\frac{1}{(2n+1)!}(x-2)^{2n+1} \]
\[ \therefore y(x) = \sum_{n=0}^{\infty}\left[\frac{1}{(2n)!}(x-2)^{2n}a_0 + \frac{1}{(2n+1)!}(x-2)^{2n+1}a_1\right] \]
Remark. We first note again that the differential equation could have been solved much more easily using our previous methods. Secondly, given initial conditions, say y(2) = 1 and y'(2) = −1, we can determine a₀ and a₁. From (4.13) we have
\[ y(2) = a_0 = 1, \]
and
\[ y'(x) = a_1 + 2a_2(x-2) + 3a_3(x-2)^2 + \cdots \implies y'(2) = a_1 = -1. \]
Thus,
\[ y(x) = \sum_{n=0}^{\infty}\left[\frac{1}{(2n)!}(x-2)^{2n} - \frac{1}{(2n+1)!}(x-2)^{2n+1}\right]. \]
Example 32. Find the first few terms in a power series solution about x = 0
for a general solution to
\[ (1 + x^2)y'' - y' + y = 0. \tag{4.14} \]
\[ \implies \sum_{n=2}^{\infty}n(n-1)a_n x^{n-2} + \sum_{n=2}^{\infty}n(n-1)a_n x^{n} - \sum_{n=1}^{\infty}na_n x^{n-1} + \sum_{n=0}^{\infty}a_n x^{n} = 0 \]
\[ \implies \sum_{k=0}^{\infty}(k+2)(k+1)a_{k+2}x^{k} + \sum_{k=2}^{\infty}k(k-1)a_k x^{k} - \sum_{k=0}^{\infty}(k+1)a_{k+1}x^{k} + \sum_{k=0}^{\infty}a_k x^{k} = 0 \]
Thus, we have
\[ 2a_2 - a_1 + a_0 = 0 \tag{4.18} \]
\[ 6a_3 - 2a_2 + a_1 = 0 \tag{4.19} \]
\[ (k+2)(k+1)a_{k+2} - (k+1)a_{k+1} + (k^2 - k + 1)a_k = 0, \qquad k \ge 2, \tag{4.20} \]
\[ \implies a_{k+2} = \frac{(k+1)a_{k+1} - (k^2 - k + 1)a_k}{(k+2)(k+1)}, \qquad k \ge 2. \]
So we have
\[ k=2: \quad a_4 = \frac{3a_3 - 3a_2}{4\cdot 3} = \frac{3\left(-\frac{a_0}{6}\right) - 3\left(\frac{a_1 - a_0}{2}\right)}{12} = \frac{-a_0 - 3a_1 + 3a_0}{24} = \frac{2a_0 - 3a_1}{24} \]
\[ k=3: \quad a_5 = \frac{4a_4 - 7a_3}{5\cdot 4} = \frac{4\left(\frac{2a_0 - 3a_1}{24}\right) - 7\left(-\frac{a_0}{6}\right)}{20} = \frac{2a_0 - 3a_1 + 7a_0}{120} = \frac{9a_0 - 3a_1}{120} = \frac{3a_0 - a_1}{40} \]
\[ k=4: \quad a_6 = \frac{5a_5 - 13a_4}{6\cdot 5} = \frac{5\left(\frac{3a_0 - a_1}{40}\right) - 13\left(\frac{2a_0 - 3a_1}{24}\right)}{30} = \frac{9a_0 - 3a_1 - 26a_0 + 39a_1}{720} = \frac{36a_1 - 17a_0}{720} \]
From (4.15) we have
\[ y(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5 + a_6 x^6 + \cdots \]
\[ \implies y(x) = a_0 + a_1 x + \frac{a_1 - a_0}{2}x^2 - \frac{a_0}{6}x^3 + \frac{2a_0 - 3a_1}{24}x^4 + \frac{3a_0 - a_1}{40}x^5 + \frac{36a_1 - 17a_0}{720}x^6 + \cdots \tag{4.21} \]
Remark. Given initial conditions, we can determine a0 and a1 , and (4.21)
gives approximations to the solution of the differential equation about x = 0.
But how useful are the partial sums; for example when x = 0.3 or x = 2.4?
Note that we don’t have a general form for an so we cannot use the well-
known tests to find the radius of convergence. The following theorem helps
us to determine the radius of convergence in cases like this.
Then (4.22) has two linearly independent analytic solutions of the form
\[ y(x) = \sum_{n=0}^{\infty}a_n(x - x_0)^n. \tag{4.23} \]
Remark. (a) Recall that the distance between any two complex numbers
z1 = a + ib and z2 = c + id is given by
\[ |z_1 - z_2| = \sqrt{(a-c)^2 + (b-d)^2}. \]
(b) We are now in a position to address the concerns raised after the pre-
vious example as demonstrated in the example below.
Example 33. Find the minimum value for the radius of convergence of a
power series solution about the point x0 :
(1 + x2 )y 00 − y 0 + y = 0, x0 = 0.
Singular points occur where 1 + x² = 0, i.e. x = ±i. The distance from x₀ = 0 to each of the points ±i is 1, so Theorem 9 guarantees a radius of convergence of at least 1.
Example 34. Find the minimum value for the radius of convergence of a
power series solution about the point x0 :
(a) 2y'' + xy' + y = 0;  x₀ = 0.
(d) (1 + x + x²)y'' − 3y' = 0;  x₀ = 1.
Solution. (a)
\[ y'' + \frac{x}{2}y' + \frac{1}{2}y = 0, \qquad x_0 = 0. \]
Now,
\[ p(x) = \frac{1}{2}x, \qquad q(x) = \frac{1}{2}, \]
are analytic with no singular points. So the distance between the ordi-
nary point x = 0 and the nearest singular point is infinite. Thus, the
radius of convergence is infinite.
(b)
\[ y'' - \frac{3x}{x^2 - 5x + 6}y' - \frac{1}{x^2 - 5x + 6}y = 0; \qquad x_0 = 0. \]
Singular points occur where
\[ x^2 - 5x + 6 = (x-2)(x-3) = 0 \implies x = 2 \text{ or } x = 3. \]
Now x = 0 is an ordinary point. The distance from 0 to 2 is 2, and from 0 to 3 is 3. Since 2 is the nearest singular point to 0, Theorem 9 implies that the radius of convergence is at least 2.
(c)
\[ y'' - \frac{3x}{x+1}y' + \frac{2}{x+1}y = 0; \qquad x_0 = 1. \]
The singular point is x = −1. The radius of convergence is given by
\[ \rho = \sqrt{(-1-1)^2} = 2. \]
(d) (1 + x + x²)y'' − 3y' = 0;  x₀ = 1.
\[ y'' - \frac{3}{1 + x + x^2}y' = 0; \qquad x_0 = 1. \]
Singular points occur for
\[ 1 + x + x^2 = x^2 + x + 1 = 0 \implies x = \frac{-1 \pm\sqrt{1-4}}{2} = -\frac{1}{2} \pm \frac{\sqrt{3}}{2}i \]
Now x₀ = 1 is an ordinary point and the distance, ρ, from x = 1 to each of the singular points is given by
\[ \rho = \sqrt{\left(1 + \frac{1}{2}\right)^2 + \left(\frac{\sqrt{3}}{2}\right)^2} = \sqrt{\frac{9}{4} + \frac{3}{4}} = \sqrt{\frac{12}{4}} = \sqrt{3}. \]
So the radius of convergence is ρ = √3.
Example 35. Find the first few terms in a power series expansion about x = 1 for a general solution to
\[ 2y'' + xy' + y = 0. \]
Solution.
\[ 2y'' + xy' + y = 0. \tag{4.24} \]
Let
\[ t = x - 1 \implies x = t + 1, \]
and write Y(t) for the solution expressed in terms of t. Then
\[ \frac{dY}{dt} = \frac{dy(t+1)}{dt} = \frac{dy}{dx}\cdot\frac{dx}{dt} = \frac{dy}{dx}, \]
since dx/dt = 1. Thus, we have
\[ \frac{dy}{dx} = \frac{dY}{dt} \quad\text{and}\quad \frac{d^2 y}{dx^2} = \frac{d^2 Y}{dt^2}. \]
So equation (4.24) becomes
\[ 2\frac{d^2 Y}{dt^2} + (t+1)\frac{dY}{dt} + Y = 0, \]
and we seek solutions of the form
\[ Y(t) = \sum_{n=0}^{\infty}a_n t^n. \]
Solution.
\[ y''(x) + \frac{(x+1)}{(x^2-1)^2}y'(x) - \frac{1}{(x^2-1)^2}y(x) = 0. \]
Thus,
\[ p(x) = \frac{(x+1)}{(x^2-1)^2} = \frac{(x+1)}{[(x-1)(x+1)]^2} = \frac{1}{(x-1)^2(x+1)} \]
\[ q(x) = \frac{-1}{(x^2-1)^2} = \frac{-1}{(x-1)^2(x+1)^2}. \]
Thus, the singular points are
x = 1 and x = −1.
For x = 1, we have
\[ (x-1)p(x) = \frac{1}{(x-1)(x+1)}, \]
which is not analytic at x = 1. So x = 1 is an irregular singular point. (Note that we don't need to proceed to analyze (x − 1)²q(x).)
For x = −1, we have
\[ (x+1)p(x) = \frac{1}{(x-1)^2}, \]
which is analytic at x = −1. Now
\[ (x+1)^2 q(x) = \frac{-1}{(x-1)^2}, \]
which is also analytic at x = −1, so x = −1 is a regular singular point.
about a regular singular point, say x₀. The idea is that, since y = xʳ is the form of a solution to Cauchy-Euler equations (ax²y'' + bxy' + cy = 0, x > 0), we expect, at a regular singular point x = 0, a solution of the form
\[ y(x) = W(r, x) = x^r\sum_{n=0}^{\infty}a_n x^n, \qquad x > 0 \]
\[ \implies W(r, x) = \sum_{n=0}^{\infty}a_n x^{n+r}, \tag{4.28} \]
with a₀ ≠ 0. Now
\[ W' = \sum_{n=0}^{\infty}(n+r)a_n x^{n+r-1}, \tag{4.29} \]
\[ \implies W'' = \sum_{n=0}^{\infty}(n+r)(n+r-1)a_n x^{n+r-2}. \tag{4.30} \]
Also, since xp(x) and x²q(x) are analytic, we can expand them as
\[ xp(x) = \sum_{n=0}^{\infty}p_n x^n \implies p(x) = \sum_{n=0}^{\infty}p_n x^{n-1}, \tag{4.31} \]
and
\[ x^2 q(x) = \sum_{n=0}^{\infty}q_n x^n \implies q(x) = \sum_{n=0}^{\infty}q_n x^{n-2}. \tag{4.32} \]
\[ \left[r(r-1) + p_0 r + q_0\right]a_0 x^{r-2} + \cdots = 0, \]
where x^{r−2} is the lowest power in this case. Equating each coefficient to zero results in the indicial equation
\[ r(r-1) + p_0 r + q_0 = 0, \qquad a_0 \neq 0, \]
where
\[ p_0 = \lim_{x\to x_0}(x - x_0)\,p(x), \qquad q_0 = \lim_{x\to x_0}(x - x_0)^2 q(x). \]
Example 37. Find the indicial equation and exponents at the singularity
x = −1 of
\[ (x^2 - 1)^2 y''(x) + (x+1)y'(x) - y(x) = 0. \]
Solution. We analyzed this equation in example 36, where we found that
x = −1 is a regular singular point, and
\[ p(x) = \frac{1}{(x-1)^2(x+1)} \quad\text{and}\quad q(x) = \frac{-1}{(x-1)^2(x+1)^2}. \]
\[ p_0 = \lim_{x\to -1}(x+1)p(x) = \lim_{x\to -1}\frac{1}{(x-1)^2} = \frac{1}{4} \]
\[ q_0 = \lim_{x\to -1}(x+1)^2 q(x) = \lim_{x\to -1}\frac{-1}{(x-1)^2} = -\frac{1}{4} \]
So the indicial equation r(r − 1) + p₀r + q₀ = 0 becomes
\[ r(r-1) + \frac{1}{4}r - \frac{1}{4} = 0, \]
\[ \implies 4r(r-1) + r - 1 = 0 \implies 4r^2 - 3r - 1 = 0 \]
\[ \implies 4r^2 - 4r + r - 1 = 0 \implies 4r(r-1) + (r-1) = 0 \implies (r-1)(4r+1) = 0. \]
So the exponents are:
\[ r = 1, \; -\frac{1}{4}. \]
The Frobenius Theorem stated next enables us to find one series solution of
a variable-coefficient equation.
Example 38. Find the series expansion about the regular singular point
x = 0 for a solution to
\[ q_0 = \lim_{x\to 0}x^2 q(x) = \lim_{x\to 0}\frac{1+x}{x+2} = \frac{1}{2} \]
The indicial equation r(r − 1) + p₀r + q₀ = 0 becomes
\[ r(r-1) - \frac{1}{2}r + \frac{1}{2} = 0, \]
\[ 2r^2 - 2r - r + 1 = 0, \qquad 2r^2 - 3r + 1 = 0 \implies (r-1)(2r-1) = 0. \]
So the exponents are
\[ r = 1 \quad\text{and}\quad r = \frac{1}{2}. \]
So we seek a solution of the form
\[ W(r, x) = x^r\sum_{n=0}^{\infty}a_n x^n = \sum_{n=0}^{\infty}a_n x^{n+r}. \]
To obtain the constants a_n, we apply the Frobenius Theorem and use the larger root r = 1 to get one solution y₁(x) = W(1, x) such that
\[ W(1, x) = \sum_{n=0}^{\infty}a_n x^{n+1} \tag{4.35} \]
\[ \implies W' = \sum_{n=0}^{\infty}(n+1)a_n x^{n} \implies W'' = \sum_{n=0}^{\infty}(n+1)n\,a_n x^{n-1} \]
\[ y'' + p(x)y' + q(x)y = 0, \]
and let r₁ and r₂ be the roots of the associated indicial equation, where Re(r₁) ≥ Re(r₂).
(a) If r₁ − r₂ is not an integer, then there exist two linearly independent solutions of the form
\[ y_1(x) = \sum_{n=0}^{\infty}a_n(x - x_0)^{n+r_1}, \qquad a_0 \neq 0, \tag{4.36} \]
\[ y_2(x) = \sum_{n=0}^{\infty}b_n(x - x_0)^{n+r_2}, \qquad b_0 \neq 0. \tag{4.37} \]
(b) If r₁ − r₂ is an integer, then there exist two linearly independent solutions of the form
\[ y_1(x) = \sum_{n=0}^{\infty}a_n(x - x_0)^{n+r_1}, \qquad a_0 \neq 0, \tag{4.38} \]
\[ y_2(x) = y_1(x)\ln(x - x_0) + \sum_{n=0}^{\infty}b_n(x - x_0)^{n+r_2}, \qquad b_0 \neq 0. \tag{4.39} \]
Laplace Transforms
Definition 15. Let f (t) be a function on [0, ∞). The Laplace transform
of f is the function F or L{f } defined by the integral
\[ F(s) := \int_0^{\infty}e^{-st}f(t)\,dt. \tag{5.1} \]
The domain of F (s) is all the values of s for which the integral in (5.1)
exists.
\[ ay'' + by' + cy = f(t) \tag{5.2} \]
Solution. (a)
\[ F(s) = \int_0^{\infty}e^{-st}f(t)\,dt = \lim_{N\to\infty}\int_0^{N}e^{-st}\cdot 1\,dt = \lim_{N\to\infty}\left[-\frac{1}{s}e^{-st}\right]_0^{N} = \lim_{N\to\infty}\left(-\frac{1}{s}e^{-sN} + \frac{1}{s}\right) \]
\[ \therefore F(s) = \frac{1}{s}, \qquad \text{for } s > 0. \]
If s ≤ 0 the integral diverges, so
\[ F(s) = \frac{1}{s} \qquad \forall\, s > 0. \]
(b)
\[ F(s) = \int_0^{\infty}e^{-st}e^{at}\,dt = \lim_{N\to\infty}\int_0^{N}e^{-(s-a)t}\,dt = \lim_{N\to\infty}\left[-\frac{1}{s-a}e^{-(s-a)t}\right]_0^{N} = \lim_{N\to\infty}\left(-\frac{1}{s-a}e^{-(s-a)N} + \frac{1}{s-a}\right) \]
\[ \therefore F(s) = \frac{1}{s-a}, \qquad \text{for } s - a > 0, \text{ i.e. } s > a. \]
If s ≤ a the integral diverges, so the domain of F(s) is all s > a.
\[ \therefore L\{e^{at}\} = \frac{1}{s-a} \qquad \forall\, s > a. \]
(c)
\[ f(t) = \begin{cases} 0, & 0 < t < 2, \\ t, & t > 2. \end{cases} \]
\[ F(s) = \int_0^{\infty}e^{-st}f(t)\,dt = \int_0^{2}e^{-st}\cdot 0\,dt + \int_2^{\infty}te^{-st}\,dt = \lim_{N\to\infty}\int_2^{N}te^{-st}\,dt \]
We integrate by parts. Let
\[ u = t, \quad dv = e^{-st}\,dt \implies du = dt, \quad v = -\frac{1}{s}e^{-st} \]
\[ \implies F(s) = \lim_{N\to\infty}\left\{\left[-\frac{1}{s}te^{-st}\right]_2^{N} + \int_2^{N}\frac{1}{s}e^{-st}\,dt\right\} = \lim_{N\to\infty}\left\{-\frac{1}{s}Ne^{-sN} + \frac{2}{s}e^{-2s} + \left[-\frac{1}{s^2}e^{-st}\right]_2^{N}\right\} \]
\[ = \lim_{N\to\infty}\left(-\frac{1}{s}Ne^{-sN} + \frac{2}{s}e^{-2s} - \frac{1}{s^2}e^{-sN} + \frac{1}{s^2}e^{-2s}\right) \]
After taking the limit, we get
\[ F(s) = \frac{2}{s}e^{-2s} + \frac{1}{s^2}e^{-2s}, \qquad s > 0. \]
\[ \therefore F(s) = e^{-2s}\,\frac{2s+1}{s^2}, \qquad s > 0. \]
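Since only the tail of the integral survives, the transform in part (c) can be checked by computing that tail directly, for example with sympy (a sketch):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    F = sp.integrate(t*sp.exp(-s*t), (t, 2, sp.oo))
    print(sp.simplify(F - sp.exp(-2*s)*(2*s + 1)/s**2))   # expect 0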
  f(t)                          F(s) = L{f}(s)
  1                             1/s,                         s > 0
  e^{at}                        1/(s - a),                   s > a
  t^n,  n = 1, 2, ...           n!/s^{n+1},                  s > 0
  sin bt                        b/(s^2 + b^2),               s > 0
  cos bt                        s/(s^2 + b^2),               s > 0
  e^{at} t^n,  n = 1, 2, ...    n!/(s - a)^{n+1},            s > a
  e^{at} sin bt                 b/((s - a)^2 + b^2),         s > a
  e^{at} cos bt                 (s - a)/((s - a)^2 + b^2),   s > a
Example 40. Use the Table of Laplace transforms to compute the Laplace
transform of
(b) e^{-2t} sin 2t + e^{3t} t². Using L{e^{at}f(t)}(s) = L{f}(s − a), we have
\[ L\{\sin 2t\} = \frac{2}{s^2 + 4} \implies L\{e^{-2t}\sin 2t\} = \frac{2}{(s+2)^2 + 4}. \]
Also, employing L{tⁿf(t)}(s) = (−1)ⁿ dⁿ/dsⁿ (L{f}(s)) gives
\[ L\{e^{3t}\} = \frac{1}{s-3} \implies L\{t^2 e^{3t}\} = (-1)^2\frac{d^2}{ds^2}\left(\frac{1}{s-3}\right) = \frac{2}{(s-3)^3}. \]
(b) L{t² cos bt}. Using the results in (a), it can be shown that
\[ L\{t^2\cos bt\} = L\{t\cdot t\cos bt\} = \frac{2s^3 - 6sb^2}{(s^2 + b^2)^3}. \]
\[ f(t) = L^{-1}\{F\} \]
Motivation
Given the IVP:
\[ y''(t) - y(t) = 0, \qquad y(0) = 0, \quad y'(0) = 1. \tag{5.4} \]
Let L{y} = Y(s). Then (5.4) becomes, after taking the Laplace transform,
\[ L\{y''\} - L\{y\} = 0 \]
\[ \implies s^2 Y(s) - sy(0) - y'(0) - Y(s) = 0 \]
\[ \implies s^2 Y(s) - 1 - Y(s) = 0 \implies Y(s)[s^2 - 1] = 1 \implies Y(s) = \frac{1}{s^2 - 1}. \]
But we want the solution of the IVP to be y(t). To get this, we need to find
the inverse transform of Y (s). Let
\[ Y(s) = \frac{1}{s^2 - 1} = \frac{1}{(s-1)(s+1)} \equiv \frac{A}{s-1} + \frac{B}{s+1} \]
\[ \implies 1 \equiv A(s+1) + B(s-1) \]
\[ s = 1: \implies A = \frac{1}{2}; \qquad s = -1: \implies B = -\frac{1}{2} \]
\[ \therefore Y(s) = \frac{1}{2}\cdot\frac{1}{s-1} - \frac{1}{2}\cdot\frac{1}{s+1} \]
Thus,
\[ y(t) = L^{-1}\{Y\} = \frac{1}{2}L^{-1}\left\{\frac{1}{s-1}\right\} - \frac{1}{2}L^{-1}\left\{\frac{1}{s+1}\right\} \]
From the Table of Laplace transforms, we get
\[ y(t) = \frac{1}{2}e^t - \frac{1}{2}e^{-t} = \frac{e^t - e^{-t}}{2} \]
\[ \therefore y(t) = \sinh(t). \]
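For comparison, sympy solves the same IVP directly and returns an expression equivalent to sinh t (a sketch):

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    ode = sp.Eq(y(t).diff(t, 2) - y(t), 0)
    sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 1})
    print(sp.simplify(sol.rhs - sp.sinh(t)))   # expect 0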
Remark. Note that the motivational example above could have been solved
much more easily with our previous methods for homogeneous differential
equations with constant coefficients. However, the example demonstrates
some of the capabilities of the Laplace transform approach. And as men-
tioned earlier, it is a much more powerful approach, for instance, in cases
where the initial conditions are discontinuous as we’ll encounter later.
(c) F(s) = 4/(s² + 9)
\[ F(s) = \frac{4}{s^2 + 9} = \frac{4}{3}\cdot\frac{3}{s^2 + 9} \]
\[ \implies L^{-1}\{F(s)\} = \frac{4}{3}L^{-1}\left\{\frac{3}{s^2 + 3^2}\right\} = \frac{4}{3}\sin 3t. \]
(d) F(s) = (s − 1)/(s² − 2s + 5)
\[ F(s) = \frac{s-1}{s^2 - 2s + 5} = \frac{s-1}{(s-1)^2 + 4}, \]
by completing the square. Note that
\[ L\{\cos bt\} = \frac{s}{s^2 + b^2}, \quad\text{and}\quad L\{e^{at}\cos bt\} = \frac{s-a}{(s-a)^2 + b^2}, \]
so that
\[ L^{-1}\{F(s)\} = e^{t}\cos 2t. \]
Remark. To compute L−1 of rational functions, review and apply the tech-
niques of partial fractions as demonstrated by the following examples.
(a) F(s) = (7s − 1)/[(s + 1)(s + 2)(s − 3)]   (non-repeated linear factors)
(b) F(s) = (s² + 9s + 2)/[(s − 1)²(s + 3)]   (repeated linear factors)
(c) F(s) = (2s² + 10s)/[(s² − 2s + 5)(s + 1)]   (quadratic factors)
(a) F(s) = (7s − 1)/[(s + 1)(s + 2)(s − 3)].
\[ \frac{7s-1}{(s+1)(s+2)(s-3)} = \frac{A}{s+1} + \frac{B}{s+2} + \frac{C}{s-3} \]
\[ s = -1: \quad -8 = -4A \implies A = 2 \]
\[ s = -2: \quad -15 = 5B \implies B = -3 \]
\[ s = 3: \quad 20 = 20C \implies C = 1. \]
\[ \implies \frac{7s-1}{(s+1)(s+2)(s-3)} = \frac{2}{s+1} - \frac{3}{s+2} + \frac{1}{s-3} \]
\[ \therefore L^{-1}\{F\} = 2e^{-t} - 3e^{-2t} + e^{3t}. \]
(b) F(s) = (s² + 9s + 2)/[(s − 1)²(s + 3)].
\[ F(s) = \frac{s^2 + 9s + 2}{(s-1)^2(s+3)} = \frac{A}{s-1} + \frac{B}{(s-1)^2} + \frac{C}{s+3} \]
\[ \implies s^2 + 9s + 2 = A(s-1)(s+3) + B(s+3) + C(s-1)^2 \]
\[ s = 1: \quad 12 = 4B \implies B = 3 \]
\[ s = -3: \quad -16 = 16C \implies C = -1 \]
\[ s = 0: \quad 2 = -3A + 3B + C = -3A + 9 - 1 = -3A + 8 \implies -6 = -3A \implies A = 2. \]
\[ \implies F(s) = \frac{2}{s-1} + \frac{3}{(s-1)^2} - \frac{1}{s+3} \]
\[ \therefore L^{-1}\{F\} = 2e^t + 3te^t - e^{-3t}. \]
(c) F(s) = (2s² + 10s)/[(s² − 2s + 5)(s + 1)].
By completing the square, we have s² − 2s + 5 = (s − 1)² + 4. Thus
\[ F(s) = \frac{2s^2 + 10s}{[(s-1)^2 + 4](s+1)} = \frac{2s^2 + 10s}{[(s-1)^2 + 2^2](s+1)} \equiv \frac{A(s-1) + 2B}{(s-1)^2 + 4} + \frac{C}{s+1} \]
\[ \implies 2s^2 + 10s = [A(s-1) + 2B](s+1) + C[(s-1)^2 + 4] \]
\[ s = 1: \quad 12 = 4B + 4C \implies 3 = B + C \]
\[ s = -1: \quad -8 = 8C \implies C = -1 \implies 3 = B - 1 \implies B = 4 \]
\[ s = 0: \quad 0 = -A + 8 - 5 \implies A = 3. \]
\[ \implies F(s) = \frac{3(s-1) + 8}{(s-1)^2 + 4} - \frac{1}{s+1} = 3\frac{(s-1)}{(s-1)^2 + 2^2} + 4\frac{2}{(s-1)^2 + 2^2} - \frac{1}{s+1} \]
\[ \therefore L^{-1}\{F(s)\} = 3e^t\cos 2t + 4e^t\sin 2t - e^{-t}. \]
EXERCISE 2. Determine L−1 {F }.
1) F(s) = (2s + 8)/[(s − 1)² + 4]
2) F(s) = (7s² + 23s + 30)/[(s − 2)(s² + 2s + 5)]
(b) Get an equation for the Laplace transform of the solution, say y(t), and
solve for it, say Y (s).
(c) Find the inverse Laplace transform of Y (s) to get the solution, say y(t).
\[ Y(s) = \frac{2s}{s^2 - 2s + 5} = \frac{2(s-1) + 2}{(s-1)^2 + 4} = 2\frac{s-1}{(s-1)^2 + 4} + \frac{2}{(s-1)^2 + 4} \]
\[ \implies y(t) = 2L^{-1}\left\{\frac{s-1}{(s-1)^2 + 4}\right\} + L^{-1}\left\{\frac{2}{(s-1)^2 + 4}\right\} \]
\[ r^2 - 2r + 5 = 0 \implies r = \frac{2 \pm\sqrt{4 - 20}}{2} = \frac{2 \pm 4i}{2} = 1 \pm 2i \]
\[ \implies y(t) = C_1 e^t\cos 2t + C_2 e^t\sin 2t \]
We next apply the initial conditions. Now
\[ y(0) = 2 \implies 2 = C_1, \qquad y'(0) = 4 \implies 4 = C_1 + 2C_2 \implies C_2 = 1 \]
\[ \therefore y(t) = 2e^t\cos 2t + e^t\sin 2t, \]
which is the same as the solution we got using Laplace transforms.
The next set of examples reveal some of the advantages of the Laplace trans-
form approach compared to the other techniques.
Example 46. Solve for the Laplace transform, Y (s), of y(t).
(a) y'' + 4y = g(t),  y(0) = −1,  y'(0) = 0,
where
\[ g(t) = \begin{cases} t, & t < 2 \\ 5, & t > 2. \end{cases} \]
(b) y'' − 3y' + 2y = cos t,  y(0) = 0,  y'(0) = −1.
Solution. (a) Let the Laplace transform of g(t) be G(s). Taking the
Laplace transform of the equation results in
Let
\[ I_1 = \int_0^2 te^{-st}\,dt \]
\[ \implies Y(s)[s^2 + 4] = \frac{1}{s^2} - \frac{2}{s}e^{-2s} - \frac{1}{s^2}e^{-2s} + \frac{5}{s}e^{-2s} - s \]
\[ \implies Y(s) = \frac{1}{s^2(s^2+4)} - \frac{2}{s(s^2+4)}e^{-2s} - \frac{1}{s^2(s^2+4)}e^{-2s} + \frac{5}{s(s^2+4)}e^{-2s} - \frac{s}{s^2+4} \]
\[ \implies Y(s) = \frac{1}{s^2(s^2+4)} + \frac{3}{s(s^2+4)}e^{-2s} - \frac{1}{s^2(s^2+4)}e^{-2s} - \frac{s}{s^2+4} \tag{5.6} \]
\[ \therefore Y(s) = \frac{1 + (3s - 1)e^{-2s} - s^3}{s^2(s^2 + 4)}. \tag{5.7} \]
This question does not ask for the actual solution, y(t). But to deter-
mine the actual solution, you could find the inverse Laplace transform
of (5.7) or (5.6).
\[ z''(t+1) + 5z'(t+1) - 6z(t+1) = 21e^t. \]
Definition 17. The unit step function or Heaviside function u(t) is defined by
\[ u(t) = \begin{cases} 0, & t < 0 \\ 1, & t > 0. \end{cases} \]
See Figure 5.2.
(b) The height of the jump can also be changed by multiplying by a constant, say M:
\[ Mu(t-a) = \begin{cases} 0, & t < a \\ M, & t > a \end{cases} \]
Figure 5.3 illustrates some of these properties.
Figure 5.3:
Example 48. Write the given function, f (t), in terms of unit step functions.
(a)
\[ f(t) = \begin{cases} 3, & t < 2 \\ 1, & 2 < t < 5. \end{cases} \]
(b)
\[ f(t) = \begin{cases} 3, & t < 2 \\ 1, & 2 < t < 5 \\ t, & 5 < t < 8 \\ t^2/10, & t > 8. \end{cases} \]
Solution. (a) For
\[ f(t) = \begin{cases} 3, & t < 2 \\ 1, & 2 < t < 5, \end{cases} \]
the function is sketched in Figure 5.4. We see that it is equal to 3 until t reaches 2, then it jumps by −2 units to the value 1. Thus, it can be written as
\[ f(t) = 3 - 2u(t-2). \]
Figure 5.4:
(b) For
\[ f(t) = \begin{cases} 3, & t < 2 \\ 1, & 2 < t < 5 \\ t, & 5 < t < 8 \\ t^2/10, & t > 8, \end{cases} \]
the function is sketched in Figure 5.5, and can be written as
\[ f(t) = 3 - 2u(t-2) + (t-1)u(t-5) + \left(\frac{t^2}{10} - t\right)u(t-8). \]
Figure 5.5:
5.0.11 Properties
(1) L{u(t − a)}(s) = e^{-as}/s,  s > 0.
(2) L^{-1}{e^{-as}/s} = u(t − a)
Proof. (3)
\[ L\{f(t-a)u(t-a)\}(s) = \int_0^{\infty}e^{-st}f(t-a)u(t-a)\,dt \]
But
\[ u(t-a) = \begin{cases} 0, & t < a \\ 1, & t > a \end{cases} \]
\[ \implies L\{f(t-a)u(t-a)\}(s) = \int_a^{\infty}e^{-st}f(t-a)\,dt \]
Let v = t − a, so dv = dt.
\[ \implies L\{f(t-a)u(t-a)\}(s) = \int_0^{\infty}e^{-s(v+a)}f(v)\,dv = e^{-as}\int_0^{\infty}e^{-sv}f(v)\,dv \]
\[ \therefore L\{f(t-a)u(t-a)\}(s) = e^{-as}F(s). \]
(1) Let f = 1 in property (3). Since L{1} = 1/s, we get
\[ L\{u(t-a)\}(s) = \frac{e^{-as}}{s}. \]
(5) From property (3), let g(t) = f(t − a). Then f(t) = g(t + a). Thus
\[ L\{g(t)u(t-a)\}(s) = e^{-as}L\{g(t+a)\}(s). \]
(b) t²u(t − 2).
From property (5), we have L{g(t)u(t − a)}(s) = e^{-as}L{g(t + a)}(s). Here
\[ g(t) = t^2, \quad a = 2 \implies g(t+a) = (t+2)^2 = t^2 + 4t + 4 \]
\[ \implies L\{g(t+2)\} = \frac{2}{s^3} + \frac{4}{s^2} + \frac{4}{s} \]
\[ \implies L\{t^2 u(t-2)\} = e^{-2s}\left(\frac{2}{s^3} + \frac{4}{s^2} + \frac{4}{s}\right). \]
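The result for L{t²u(t − 2)} can be verified by evaluating the defining integral from 2 to ∞, e.g. with sympy (a sketch):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)

    F = sp.integrate(t**2*sp.exp(-s*t), (t, 2, sp.oo))
    target = sp.exp(-2*s)*(2/s**3 + 4/s**2 + 4/s)
    print(sp.simplify(F - target))   # expect 0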
\[ L\{q\} = \frac{e^{-s}}{s} - \frac{e^{-4s}}{s}. \]
Example 50. Determine the inverse Laplace transform of
(a) q = e^{-3s}/s²
(b) q = e^{-2s}/(s − 1)
(c) q = (e^{-2s} − 3e^{-4s})/(s + 2)
(d) q = se^{-3s}/(s² + 4s + 5)
Solution. (a)
\[ q = \frac{e^{-3s}}{s^2} = e^{-3s}\cdot\frac{1}{s^2} \]
Use the property with
\[ F(s) = \frac{1}{s^2} \implies f(t) = t, \qquad a = 3 \implies f(t-3) = (t-3) \]
\[ \implies L^{-1}\left\{e^{-3s}\frac{1}{s^2}\right\} = (t-3)u(t-3). \]
(b)
\[ q = \frac{e^{-2s}}{s-1} \]
Use the property with
\[ F(s) = \frac{1}{s-1} \implies f(t) = e^t, \qquad a = 2, \quad f(t-2) = e^{t-2} \]
\[ \implies L^{-1}\{q\} = e^{t-2}u(t-2). \]
(c)
\[ q = \frac{e^{-2s} - 3e^{-4s}}{s+2} \implies q = e^{-2s}\cdot\frac{1}{s+2} - 3e^{-4s}\cdot\frac{1}{s+2} \]
Use the property. For
\[ e^{-2s}\cdot\frac{1}{s+2}: \quad F(s) = \frac{1}{s+2}, \quad a = 2 \]
\[ \implies f(t) = e^{-2t} \implies f(t-2) = e^{-2(t-2)} = e^{-2t+4} \]
\[ \implies L^{-1}\left\{e^{-2s}\cdot\frac{1}{s+2}\right\} = e^{-2t+4}u(t-2). \]
For
\[ e^{-4s}\cdot\frac{1}{s+2}: \quad F(s) = \frac{1}{s+2}, \quad a = 4 \]
\[ \implies f(t-4) = e^{-2(t-4)} = e^{-2t+8} \implies L^{-1}\left\{e^{-4s}\cdot\frac{1}{s+2}\right\} = e^{-2t+8}u(t-4). \]
\[ \therefore L^{-1}\{q\} = e^{-2t+4}u(t-2) - 3e^{-2t+8}u(t-4). \]
(d)
\[ q = \frac{se^{-3s}}{s^2 + 4s + 5} \implies q = e^{-3s}\cdot\frac{s}{s^2 + 4s + 5} \]
But
\[ \frac{s}{s^2 + 4s + 5} = \frac{(s+2) - 2}{(s+2)^2 + 1} = \frac{(s+2)}{(s+2)^2 + 1} - 2\frac{1}{(s+2)^2 + 1} \]
\[ \implies q = e^{-3s}\cdot\frac{(s+2)}{(s+2)^2 + 1} - 2e^{-3s}\cdot\frac{1}{(s+2)^2 + 1} \]
Let
\[ F_1(s) = \frac{(s+2)}{(s+2)^2 + 1} \implies f_1(t) = e^{-2t}\cos t \implies f_1(t-3) = e^{-2(t-3)}\cos(t-3) \]
Let
\[ F_2(s) = \frac{1}{(s+2)^2 + 1} \implies f_2(t) = e^{-2t}\sin t \implies f_2(t-3) = e^{-2(t-3)}\sin(t-3) \]
\[ \implies L^{-1}\{q\} = e^{-2(t-3)}\cos(t-3)u(t-3) - 2e^{-2(t-3)}\sin(t-3)u(t-3) \]
\[ \implies s^2 Y(s) - sy(0) - y'(0) + Y(s) = \frac{e^{-3s}}{s} \]
\[ \implies s^2 Y - 1 + Y = \frac{e^{-3s}}{s} \implies Y(s)(s^2 + 1) = \frac{e^{-3s}}{s} + 1 \]
\[ \implies Y(s) = \frac{e^{-3s}}{s(s^2+1)} + \frac{1}{s^2+1} = e^{-3s}\cdot\frac{1}{s(s^2+1)} + \frac{1}{s^2+1} \]
Note that
\[ L^{-1}\left\{\frac{1}{s^2+1}\right\} = \sin t. \]
Now,
\[ \frac{1}{s(s^2+1)} \equiv \frac{A}{s} + \frac{Bs + C}{s^2+1} \implies 1 = A(s^2+1) + (Bs + C)s \]
\[ s = 0: \quad 1 = A \]
\[ s = 1: \quad 1 = 2A + B + C \implies B + C = -1 \]
\[ s = -1: \quad 1 = 2 - (-B + C) = 2 + B - C \implies B - C = -1 \]
\[ \implies 2B = -2 \implies B = -1, \qquad C = -1 + 1 = 0 \]
Thus,
\[ \frac{1}{s(s^2+1)} = \frac{1}{s} - \frac{s}{s^2+1} \]
\[ \implies L^{-1}\left\{\frac{1}{s(s^2+1)}\right\} = 1 - \cos t \]
\[ \implies y(t) = L^{-1}\left\{\frac{e^{-3s}}{s(s^2+1)}\right\} + L^{-1}\left\{\frac{1}{s^2+1}\right\} \]
\[ \therefore y(t) = [1 - \cos(t-3)]u(t-3) + \sin t. \]
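The solution can be checked piecewise (the IVP behind this transform is y'' + y = u(t − 3) with y(0) = 0, y'(0) = 1): for t < 3 the claimed solution reduces to sin t, and for t > 3 it is 1 − cos(t − 3) + sin t. A sympy sketch:

    import sympy as sp

    t = sp.symbols('t')

    y_before = sp.sin(t)                      # branch for t < 3, where u(t - 3) = 0
    y_after = 1 - sp.cos(t - 3) + sp.sin(t)   # branch for t > 3, where u(t - 3) = 1

    print(sp.simplify(y_before.diff(t, 2) + y_before))   # expect 0 (right-hand side for t < 3)
    print(sp.simplify(y_after.diff(t, 2) + y_after))     # expect 1 (right-hand side for t > 3)
    print(y_before.subs(t, 0), y_before.diff(t).subs(t, 0))   # initial conditions: 0 1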
(b)
\[ y'' + 5y' + 6y = g(t), \qquad y(0) = 0, \quad y'(0) = 2, \tag{5.11} \]
where
\[ g(t) = \begin{cases} 0, & 0 \le t < 1, \\ t, & 1 < t < 5, \\ 1, & t > 5. \end{cases} \]
Figure 5.6:
\[ f(t) = t, \quad a = 1 \implies L\{f(t+a)\} = L\{t+1\} = \frac{1}{s^2} + \frac{1}{s} \]
\[ \implies L\{tu(t-1)\} = e^{-s}\left(\frac{1}{s^2} + \frac{1}{s}\right) \]
For L{(1 − t)u(t − 5)}, we again use the property
\[ Y(s) = e^{-s}\frac{(s+1)}{s^2(s+2)(s+3)} - 4e^{-5s}\frac{1}{s(s+2)(s+3)} + \frac{2}{(s+2)(s+3)} - e^{-5s}\frac{1}{s^2(s+2)(s+3)} \]
\[ Y(s) = e^{-s}\frac{(s+1)}{s^2(s+2)(s+3)} + \frac{2}{(s+2)(s+3)} - e^{-5s}\left[\frac{4}{s(s+2)(s+3)} + \frac{1}{s^2(s+2)(s+3)}\right] \]
\[ Y(s) = e^{-s}\frac{(s+1)}{s^2(s+2)(s+3)} + \frac{2}{(s+2)(s+3)} - e^{-5s}\frac{4s+1}{s^2(s+2)(s+3)} \tag{5.12} \]
We next simplify each term in (5.12) using partial fractions and find
their inverse.
\[ Y_1(s) = \frac{(s+1)}{s^2(s+2)(s+3)} \equiv \frac{A}{s} + \frac{B}{s^2} + \frac{C}{s+2} + \frac{D}{s+3} \]
Setting s = 0, −2, −3 gives B = 1/6, C = −1/4, D = 2/9; then s = 1 gives
\[ 2 = 12A + 2 - 1 + \frac{2}{3} \implies 12A = 1 - \frac{2}{3} = \frac{1}{3} \implies A = \frac{1}{36}. \]
Thus,
\[ \frac{(s+1)}{s^2(s+2)(s+3)} = \frac{1}{36}\cdot\frac{1}{s} + \frac{1}{6}\cdot\frac{1}{s^2} - \frac{1}{4}\cdot\frac{1}{s+2} + \frac{2}{9}\cdot\frac{1}{s+3} \]
\[ \implies L^{-1}\{Y_1(s)\} = \frac{1}{36} + \frac{1}{6}t - \frac{1}{4}e^{-2t} + \frac{2}{9}e^{-3t} \]
Thus,
\[ L^{-1}\left\{e^{-s}\frac{(s+1)}{s^2(s+2)(s+3)}\right\} = \left[\frac{1}{36} + \frac{1}{6}(t-1) - \frac{1}{4}e^{-2(t-1)} + \frac{2}{9}e^{-3(t-1)}\right]u(t-1) \tag{5.13} \]
\[ \frac{2}{(s+2)(s+3)} \equiv \frac{A}{s+2} + \frac{B}{s+3} \implies 2 = A(s+3) + B(s+2) \]
\[ s = -2: \quad 2 = A, \qquad s = -3: \quad 2 = -B \implies B = -2 \]
\[ \implies \frac{2}{(s+2)(s+3)} = \frac{2}{s+2} - \frac{2}{s+3} \]
\[ \implies L^{-1}\left\{\frac{2}{(s+2)(s+3)}\right\} = 2e^{-2t} - 2e^{-3t} \tag{5.14} \]
\[ \frac{4s+1}{s^2(s+2)(s+3)} \equiv \frac{A}{s} + \frac{B}{s^2} + \frac{C}{s+2} + \frac{D}{s+3} \]
\[ s = -2: \quad -7 = 4C \implies C = -\frac{7}{4} \]
\[ s = -3: \quad -11 = -9D \implies D = \frac{11}{9} \]
Setting s = 0 gives B = 1/6, and then s = 1 gives
\[ 5 = 12A + (3)(4)\frac{1}{6} - \frac{7}{4}(4) + \frac{11}{9}(3) = 12A + 2 - 7 + \frac{11}{3} = 12A - \frac{4}{3} \]
\[ \implies 12A = 5 + \frac{4}{3} = \frac{19}{3} \implies A = \frac{19}{36} \]
So,
\[ \frac{4s+1}{s^2(s+2)(s+3)} = \frac{19}{36}\cdot\frac{1}{s} + \frac{1}{6}\cdot\frac{1}{s^2} - \frac{7}{4}\cdot\frac{1}{s+2} + \frac{11}{9}\cdot\frac{1}{s+3} \]
\[ \implies L^{-1}\left\{\frac{4s+1}{s^2(s+2)(s+3)}\right\} = \frac{19}{36} + \frac{1}{6}t - \frac{7}{4}e^{-2t} + \frac{11}{9}e^{-3t} \]
Thus,
\[ L^{-1}\left\{e^{-5s}\frac{4s+1}{s^2(s+2)(s+3)}\right\} = \left[\frac{19}{36} + \frac{1}{6}(t-5) - \frac{7}{4}e^{-2(t-5)} + \frac{11}{9}e^{-3(t-5)}\right]u(t-5) \tag{5.15} \]
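The partial fraction decompositions used above can be reproduced with sympy's apart (a sketch):

    import sympy as sp

    s = sp.symbols('s')

    print(sp.apart((s + 1)/(s**2*(s + 2)*(s + 3)), s))
    print(sp.apart((4*s + 1)/(s**2*(s + 2)*(s + 3)), s))
    # expect 1/(36*s) + 1/(6*s**2) - 1/(4*(s + 2)) + 2/(9*(s + 3))
    # and   19/(36*s) + 1/(6*s**2) - 7/(4*(s + 2)) + 11/(9*(s + 3))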
\[ y'' + y = u(t-2) - u(t-4), \qquad y(0) = 1, \quad y'(0) = 0. \]
\[ [s^2 Y - sy(0) - y'(0)] + Y = \frac{e^{-2s}}{s} - \frac{e^{-4s}}{s} \]
\[ \implies s^2 Y - s + Y = \frac{e^{-2s}}{s} - \frac{e^{-4s}}{s} \implies Y(s)(s^2+1) = e^{-2s}\frac{1}{s} - e^{-4s}\frac{1}{s} + s \]
\[ Y(s) = \left[e^{-2s} - e^{-4s}\right]\frac{1}{s(s^2+1)} + \frac{s}{s^2+1} \tag{5.17} \]
Let
\[ \frac{1}{s(s^2+1)} \equiv \frac{A}{s} + \frac{Bs + C}{s^2+1} \implies 1 = A(s^2+1) + (Bs + C)s \]
\[ s = 0: \quad 1 = A \]
\[ s = 1: \quad 1 = 2 + B + C \implies B + C = -1 \]
\[ s = -1: \quad 1 = 2 - (-B + C) \implies B - C = -1 \]
\[ \implies 2B = -2 \implies B = -1, \qquad C = -1 + 1 = 0 \]
\[ \implies \frac{1}{s(s^2+1)} = \frac{1}{s} - \frac{s}{s^2+1} \]
\[ L^{-1}\left\{\frac{1}{s(s^2+1)}\right\} = 1 - \cos t \]
\[ \implies L^{-1}\left\{e^{-2s}\frac{1}{s(s^2+1)}\right\} = [1 - \cos(t-2)]u(t-2), \]
and
\[ L^{-1}\left\{e^{-4s}\frac{1}{s(s^2+1)}\right\} = [1 - \cos(t-4)]u(t-4). \]
Also
\[ L^{-1}\left\{\frac{s}{s^2+1}\right\} = \cos t. \]
Hence, from (5.17), we get
\[ y(t) = [1 - \cos(t-2)]u(t-2) - [1 - \cos(t-4)]u(t-4) + \cos t. \]
Fourier Series
6.1 Introduction