7 Numerical Methods for ODEs
An ordinary differential equation (ODE) contains derivatives with respect to a single independent variable only; that is, it has no partial derivatives. The following are a few examples of ODEs:
$$\frac{dy}{dx} = x\,\sin(x^{2})\cos y$$
$$\frac{dy}{dx} = y\tan x + \ln x$$
$$\frac{d^{2}y}{dx^{2}} + 2xy\,\frac{dy}{dx} - 8y = 0$$
7.3 Order
The order of a differential equation is the order of the highest derivative present in the differential equation. For example:
$$\frac{du}{dt} = u\,e^{t} \qquad \text{(first order)}$$
$$2\frac{ds}{dt} = \sin s + 2e^{3t} \qquad \text{(first order)}$$
$$\ln(xy)\,\frac{dy}{dx} = 4x^{3} + y - \cos x \qquad \text{(first order)}$$
$$\frac{d^{2}u}{dv^{2}} + 3\frac{du}{dv} - u = e^{v} \qquad \text{(second order)}$$
$$\frac{d^{3}u}{dx^{2}\,dt} = 1 + \frac{du}{dy} \qquad \text{(third order)}$$
$$\frac{d^{3}y}{dx^{3}} = y + \frac{d^{2}y}{dx^{2}} \qquad \text{(third order)}$$
The differential equation listed below is a fourth order differential equation.
$$x^{2}\,\frac{d^{4}y}{dx^{4}} + \sin y = x$$
Note that the order does not depend on whether the equation contains ordinary or partial derivatives.
We will be looking almost exclusively at first and second order differential equations in this course, with the option of extending the ideas to higher orders.
7.4 Linear ODE
A linear differential equation is any differential equation that can be written in the following form:
$$a_{n}(x)\,y^{(n)}(x) + a_{n-1}(x)\,y^{(n-1)}(x) + \cdots + a_{1}(x)\,y'(x) + a_{0}(x)\,y(x) = g(x).$$
The important thing to note about linear differential equations is that there are no products of the function y(x) and its derivatives, and neither the function nor its derivatives occur to any power other than the first power.
The coefficients $a_{0}(x), \ldots, a_{n}(x)$ and $g(x)$ can be zero or non-zero functions, constant or non-constant functions, linear or non-linear functions. Only the function y(x) and its derivatives are used in determining whether a differential equation is linear.
If a differential equation cannot be written in the form above, then it is called a non-
linear differential equation.
The following are examples of non-linear differential equations:
$$\sin(y)\,\frac{d^{2}y}{dx^{2}} = (1 - y)\,\frac{dy}{dx} + y^{2}e^{-5y}$$
$$\frac{d^{3}y}{dx^{3}} + y\,\frac{d^{2}y}{dx^{2}} = 0$$
$$\frac{1}{x^{2}}\,\frac{d}{dx}\!\left(x^{2}\,\frac{dy}{dx}\right) + (y^{2} - c)^{3/2} = 0$$
7.5 Homogeneous ODE
A differential equation can be homogeneous in either of two respects.
A first order differential equation is said to be homogeneous if, after the substitution y = ux (so that u = y/x), it may be written as
$$\frac{dx}{x} = h(u)\,du,$$
which is easy to solve by integrating both sides.
Otherwise, a differential equation is homogeneous, if it is a homogeneous function of the
unknown function and its derivatives. In the case of linear differential equations, this means
that there are no constant terms.
The solutions of any linear ordinary differential equation of any order may be deduced by
integration from the solution of the homogeneous equation obtained by removing the constant
term.
Recall that a function F(x, y) is homogeneous of degree n if
$$F(zx, zy) = z^{n}F(x, y)$$
holds for all x, y, and z (for which both sides are defined).
A first order homogeneous linear differential equation can be written in the form
$$y' + p(x)\,y = 0$$
Alternatively, a first‐order differential equation
$$M(x, y)\,dx + N(x, y)\,dy = 0$$
is said to be homogeneous if M(x, y) and N(x, y) are both homogeneous functions of the same degree.
If a first order differential equation can be written in the form
$$\frac{dy}{dx} = f(x)\,g(y),$$
then the solution may be found by the technique of separation of variables:
$$\int \frac{dy}{g(y)} = \int f(x)\,dx$$
This is obtained by rearranging the differential equation (i.e. dividing the standard form by g(y)) and then integrating both sides with respect to the appropriate variable.
Example 1: Let y(x) be a function that satisfies
$$\frac{dy}{dx} = 5$$
If we know what the derivative of a function is, we can find the function itself. We need to find the antiderivative, i.e., we need to integrate both sides of the equation.
Solution: Rearranging the equation we get
𝑑𝑦 = 5𝑑𝑥
Integrating both sides of the equation with respect to x we get,
$$\int dy = \int 5\,dx$$
𝑦(𝑥) = 5𝑥 + 𝑐
for some arbitrary constant c.
Example 2: Let y(x) be a function that satisfies
$$\frac{dy}{dx} = \cos x$$
Solution: Rearranging and integrating both sides of the equation with respect to x we get,
𝑦(𝑥) = sin 𝑥 + 𝑐
for some arbitrary constant c.
Example 3: Let y(x) be a function that satisfies
$$\frac{dy}{dx} = m\sin x + nx^{3}$$
Solution: Integrating both sides from x = a to x = b gives
$$y(b) - y(a) = \left[-m\cos x + \frac{nx^{4}}{4}\right]_{a}^{b} = -m\cos b + \frac{nb^{4}}{4} + m\cos a - \frac{na^{4}}{4}$$
The solution for the integral from x = a up to a general value x would thus be
$$y(x) - y(a) = \left[-m\cos x + \frac{nx^{4}}{4}\right]_{a}^{x} = -m\cos x + \frac{nx^{4}}{4} + m\cos a - \frac{na^{4}}{4}$$
$$y(x) = -m\cos x + \frac{nx^{4}}{4} + m\cos a - \frac{na^{4}}{4} + y(a)$$
The general solution is:
$$y(x) = -m\cos x + \frac{nx^{4}}{4} + C$$
for some constant C.
So far, the example ODEs we have seen could be solved simply by integrating. The reason they were so simple is that the expression for dy/dx did not depend on the function y(x) but only on the variable x. On the other hand, once the equation relates dy/dx to y(x) itself, we have to do more work to solve for the function y(x).
Here is an ODE that includes 𝑦(𝑥):
Example 4:
$$\frac{dy}{dx} = a\,y(x) + b$$
where a and b are some constants. Since the right hand side depends on y(x) itself, we cannot simply integrate with respect to x and use the fundamental theorem of calculus. To solve this ODE for y(x), we need to do some manipulations and use the chain rule (i.e., a u-substitution).
Solution
The first thing to do is get all expressions involving y on one side of the equation. If we subtract, we won't be able to put things in the right form for the chain rule, as we will have terms without a dy/dx in them. Instead, we divide both sides of the equation by a y(x) + b:
$$\frac{\dfrac{dy}{dx}}{a\,y(x) + b} = 1$$
The right hand side is a simple function of 𝑥 (a constant function in this case). We can integrate
both sides of the equation with respect to 𝑥,
$$\int \frac{\dfrac{dy}{dx}}{a\,y(x) + b}\,dx = \int 1\,dx$$
Letting u = y(x), so that $du = \dfrac{dy}{dx}\,dx$.
Hence
$$\int \frac{\dfrac{dy}{dx}}{a\,y(x) + b}\,dx = \int \frac{du}{au + b} = \frac{1}{a}\ln|au + b| + C_{1}$$
The RHS is simply $x + C_{2}$. Thus,
$$\frac{1}{a}\ln|au + b| + C_{1} = x + C_{2}$$
$$\frac{1}{a}\ln|a\,y(x) + b| = x + C \qquad \text{(absorbing the two constants into a single constant } C\text{)}$$
$$\ln|a\,y(x) + b| = a(x + C)$$
$$|a\,y(x) + b| = e^{ax + aC}$$
$$a\,y(x) + b = \pm e^{ax + aC}$$
$$a\,y(x) = \pm e^{ax + aC} - b$$
$$y(x) = \pm\frac{1}{a}e^{ax + aC} - \frac{b}{a}$$
Defining a new arbitrary constant $A = \pm\frac{1}{a}e^{aC}$, we have
$$y(x) = A\,e^{ax} - \frac{b}{a}$$
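As a quick check of this result, the short sketch below (an illustration added here, not part of the original notes; it assumes the SymPy library and uses A as the arbitrary constant) verifies symbolically that y(x) = Ae^(ax) - b/a satisfies dy/dx = ay + b.

```python
import sympy as sp

x, a, b, A = sp.symbols("x a b A", nonzero=True)

# Candidate general solution of dy/dx = a*y + b found above (A is the arbitrary constant)
y = A * sp.exp(a * x) - b / a

# The residual dy/dx - (a*y + b) should simplify to zero
residual = sp.diff(y, x) - (a * y + b)
print(sp.simplify(residual))  # expected output: 0
```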
Example: Solve
$$(x^{2} - y^{2})\,dx + xy\,dy = 0$$
Solution
This equation is homogeneous, since M(x, y) = x² − y² and N(x, y) = xy are both homogeneous functions of degree 2. The method for solving homogeneous equations follows from this fact: substituting y = ux, so that dy = u dx + x du, gives
$$x^{2}\,dx + x^{3}u\,du = 0$$
$$dx + xu\,du = 0$$
$$u\,du = -\frac{dx}{x}$$
$$\int u\,du = \int -\frac{dx}{x}$$
$$\frac{1}{2}u^{2} = -\ln|x| + C'$$
$$\frac{1}{2}u^{2} = \ln\left|\frac{c}{x}\right|$$
$$\frac{1}{2}\left(\frac{y}{x}\right)^{2} = \ln\left|\frac{c}{x}\right|$$
$$y^{2} = 2x^{2}\ln\left|\frac{c}{x}\right|$$
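The implicit solution just obtained can also be checked symbolically. The sketch below (my illustration, assuming SymPy; it uses the explicit branch y = x√(2 ln(c/x)), valid where the logarithm is positive) confirms that it satisfies (x² − y²)dx + xy dy = 0.

```python
import sympy as sp

x, c = sp.symbols("x c", positive=True)

# Explicit branch of the implicit solution y^2 = 2 x^2 ln(c/x) (valid for 0 < x < c)
y = x * sp.sqrt(2 * sp.log(c / x))

# The ODE (x^2 - y^2) dx + x*y dy = 0 is equivalent to (x^2 - y^2) + x*y*dy/dx = 0
residual = (x**2 - y**2) + x * y * sp.diff(y, x)
print(sp.simplify(residual))  # expected output: 0
```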
7.6.3 Initial Value Problem
An initial value problem (or IVP) is a differential equation along with an appropriate number
of initial conditions.
Initial Condition(s) are a condition, or set of conditions, on the solution that will allow us to
determine the solution to our problem.
So, in other words, initial conditions are values of the solution and/or its derivative(s) at specific
points. As we will see eventually, solutions to “nice enough” differential equations are unique
and hence only one solution will meet the given conditions.
The number of initial conditions that are required for a given differential equation will depend
upon the order of the differential equation.
We are trying to solve problems that are presented in the following way:
$$\frac{dy}{dx} = f(x, y), \qquad y(x_{0}) = y_{0}$$
where f(x, y) is some function of the variables x and y that are involved in the problem.
The following is an IVP. It is a simple IVP as the equation does not include a function of y(x):
$$\frac{dy}{dx} = 10 - x, \qquad y(0) = -1$$
In general, we expect that every initial value problem has exactly one solution.
Interval of Validity
The interval of validity is the largest possible interval on which the solution is valid and which contains $x_{0}$.
General Solution
The general solution to a differential equation is the most general form that the solution can
take and does not take any initial conditions into account.
Actual Solution
The actual solution to a differential equation is the specific solution that not only satisfies the
differential equation, but also satisfies the given initial condition(s).
Using the initial data, plug it into the general solution and solve for c.
Example: Solve the following IVP.
$$\frac{dy}{dx} = 10 - x, \qquad y(0) = -1$$
Solution:
$$\frac{dy}{dx} = 10 - x \;\;\rightarrow\;\; dy = (10 - x)\,dx$$
$$\int dy = \int (10 - x)\,dx$$
$$y = 10x - \frac{x^{2}}{2} + c \qquad \leftarrow \text{General Solution}$$
$$y(0) = -1 \;\;\rightarrow\;\; 10(0) - \frac{0^{2}}{2} + c = -1$$
Hence,
$$c = -1$$
$$y = 10x - \frac{x^{2}}{2} - 1 \qquad \leftarrow \text{Actual Solution}$$
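The same IVP can be handed to a general-purpose numerical solver as a cross-check. The sketch below (an illustration assuming SciPy is available) integrates dy/dx = 10 − x from y(0) = −1 and compares the value at x = 2 with the actual solution y = 10x − x²/2 − 1.

```python
from scipy.integrate import solve_ivp

# Right hand side of the IVP dy/dx = 10 - x, y(0) = -1
def f(x, y):
    return 10.0 - x

sol = solve_ivp(f, t_span=(0.0, 2.0), y0=[-1.0], rtol=1e-8, atol=1e-10)

x_end = sol.t[-1]
exact = 10.0 * x_end - x_end**2 / 2.0 - 1.0   # actual solution evaluated at x = 2
print(sol.y[0, -1], exact)                     # both values should be close to 17.0
```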
Example: Solve the following IVP.
$$\frac{ds}{dt} = \cos t + \sin t, \qquad s(\pi) = 1$$
Solution
$$\frac{ds}{dt} = \cos t + \sin t \;\;\rightarrow\;\; s = \sin t - \cos t + c$$
$$s(\pi) = 1 \;\;\rightarrow\;\; \sin\pi - \cos\pi + c = 1$$
$$0 - (-1) + c = 1$$
$$c = 0$$
Hence,
$$s(t) = \sin t - \cos t$$
Example: The acceleration of a body moving along a line is
$$a = \frac{d^{2}s}{dt^{2}} = -4\sin 2t,$$
with initial velocity v(0) = 2 and initial position s(0) = -3. Using the relationship between acceleration and velocity and the initial position of the body, find the body's position at time t.
Solution
$$\int a\,dt = \int \frac{d^{2}s}{dt^{2}}\,dt = \int -4\sin 2t\,dt$$
$$v(t) = \frac{ds}{dt} = \int -4\sin 2t\,dt$$
$$v(t) = 2\cos 2t + C$$
$$v(0) = 2 = 2\cos 0 + C$$
$$C = 0$$
$$s(t) = \int \frac{ds}{dt}\,dt = \int 2\cos 2t\,dt$$
$$s(t) = \sin 2t + C$$
$$s(0) = -3 = \sin 2(0) + C$$
$$C = -3$$
$$s(t) = \sin 2t - 3$$
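A second order IVP like this one can also be solved numerically by first rewriting it as a first order system in the position s and velocity v. The sketch below (my addition, assuming SciPy) does this for s'' = −4 sin 2t with s(0) = −3, v(0) = 2 and compares the numerical result with s(t) = sin 2t − 3.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rewrite s'' = -4 sin(2t) as the first order system  s' = v,  v' = -4 sin(2t)
def rhs(t, state):
    s, v = state
    return [v, -4.0 * np.sin(2.0 * t)]

t_eval = np.linspace(0.0, 5.0, 11)
sol = solve_ivp(rhs, (0.0, 5.0), [-3.0, 2.0], t_eval=t_eval, rtol=1e-8)

exact = np.sin(2.0 * t_eval) - 3.0         # s(t) = sin(2t) - 3 found above
print(np.max(np.abs(sol.y[0] - exact)))    # maximum error, should be very small
```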
7.7 Euler’s Method for Solving ODE
(First Order Runge-Kutta Method)
Consider the first order ODE
$$\frac{dy}{dx} = f(x, y)$$
Using Taylor's expansion for y(x) we can write (where h is the step size)
$$y(x + h) = y(x) + h\,y'(x) + \frac{h^{2}}{2!}y''(x) + \frac{h^{3}}{3!}y'''(x) + \frac{h^{4}}{4!}y^{iv}(x) + \cdots + \frac{h^{n}}{n!}y^{(n)}(x) + \cdots \qquad (6.1)$$
This expansion gives the exact solution when all the terms are included, and it gives a reasonably good approximation to the ODE if we take plenty of terms and the value of h is reasonably small.
Taking the first two terms only we get
$$y(x + h) = y(x) + h\,y'(x)$$
The last term is just h times our dy/dx expression, so we can write Euler's Method as follows:
$$y(x + h) \approx y(x) + h\,f(x, y)$$
We start with some known value for y, which we could call $y_{0}$. It has this value when $x = x_{0}$. We make use of the initial value $(x_{0}, y_{0})$.
The result of using this formula is the value for y one h step to the right of the current value. Let's call it $y_{1}$. So we have:
$$y_{1} \approx y_{0} + h\,f(x_{0}, y_{0})$$
Next value: To get the next value $y_{2}$, we use the value we just found for $y_{1}$ as follows:
$$y_{2} \approx y_{1} + h\,f(x_{1}, y_{1})$$
where $x_{1} = x_{0} + h$.
The idea of Euler's method is to use repeated linear approximations to estimate a sequence of points that lie on a solution curve. Starting with an initial condition $(x_{0}, y_{0})$, we use a linear approximation to estimate a nearby point on the solution curve. We then use a linear approximation based at the new point to estimate yet another point, and so forth.
The right hand side of the formula above means: start at the known y value, then move one step of h units to the right in the direction of the slope at that point, which is dy/dx = f(x, y). We arrive at a good approximation to the curve's y-value at that new point.
We do this for each of the sub-points, ℎ apart, from some starting value 𝑥 = 𝑎 to some finishing
value, 𝑥 = 𝑏, as shown in the graph in Figure 7.1.
$$y_{n+1} \approx y_{n} + h\,f(x_{n}, y_{n})$$
Figure 7.1
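The update y_{n+1} ≈ y_n + h f(x_n, y_n) translates directly into code. The following Python sketch (an illustration; the function name euler and its interface are my own choices, not from the notes) implements the method for a general first order IVP and tries it on the earlier equation dy/dx = 10 − x, y(0) = −1, whose exact value at x = 2 is 17.

```python
def euler(f, x0, y0, h, n_steps):
    """Approximate the solution of dy/dx = f(x, y), y(x0) = y0,
    using n_steps Euler steps of size h.  Returns the lists of x and y values."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)    # y_{n+1} = y_n + h * f(x_n, y_n)
        x = x + h              # x_{n+1} = x_n + h
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = euler(lambda x, y: 10.0 - x, 0.0, -1.0, 0.5, 4)
print(ys[-1])   # 17.5 with h = 0.5; the exact value is y(2) = 17
```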
Example 9
Use Euler's method with step size h = 0.1 to estimate y(0.3), where
$$\frac{dy}{dx} = \sin y - 3x, \qquad y(0) = 1$$
Solution
We start at the point $(x_{0}, y_{0}) = (0, 1)$, use a step size of h = 0.1, and proceed for 3 steps.
$$\frac{dy}{dx} = \sin y - 3x, \qquad f(0, 1) = \sin 1 - 3(0) \approx 0.841 \text{ (to 3 d.p.)}$$
$$y(x + h) \approx y(x) + h\,f(x, y)$$
$$y_{1} \approx y_{0} + h\,f(x_{0}, y_{0})$$
$$y_{1} = y(0.1) \approx 1 + 0.1 \times 0.841 = 1.084 \text{ (to 3 d.p.)}$$
$$y_{2} \approx y_{1} + h\,f(x_{1}, y_{1})$$
$$\frac{dy}{dx} = f(0.1, 1.084) = \sin(1.084) - 3(0.1) \approx 0.584$$
$$y_{2} = y(0.2) \approx 1.084 + 0.1 \times 0.584 = 1.142$$
$$y_{3} \approx y_{2} + h\,f(x_{2}, y_{2})$$
$$\frac{dy}{dx} = f(0.2, 1.142) = \sin(1.142) - 3(0.2) \approx 0.309$$
$$y_{3} = y(0.3) \approx 1.142 + 0.1 \times 0.309 = 1.173$$
Example 10
$$\frac{dy}{dx} = (x - 1)^{2} - y^{2}, \qquad y(0) = 0.5$$
Use Euler’s method with step size ℎ = 0.5 to estimate 𝑦(2.5), keeping track of four decimal
places during the procedure.
Solution
n     x_n     y_n        dy/dx = f(x_n, y_n) = (x_n − 1)² − y_n²
0     0       0.5         0.75
1     0.5     0.875      −0.5156
2     1       0.6172     −0.3809
3     1.5     0.4267      0.0679
4     2       0.4607      0.7878
5     2.5     0.8546      −
where each row is obtained from the previous one using $y_{n+1} \approx y_{n} + h\,f(x_{n}, y_{n})$.
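The table above can be reproduced with the euler function sketched in Section 7.7 (again my illustration, reusing that hypothetical helper rather than anything from the notes).

```python
xs, ys = euler(lambda x, y: (x - 1.0)**2 - y**2, 0.0, 0.5, 0.5, 5)
for n, (x, y) in enumerate(zip(xs, ys)):
    print(n, x, round(y, 4))   # y column: 0.5, 0.875, 0.6172, 0.4267, 0.4607, 0.8546
```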
Error Analysis (Euler’s Method)
Recall that Euler's Method is obtained from (6.1) by taking only the first two terms, i.e.
$$y(x + h) = y(x) + h\,y'(x) + \boldsymbol{O}(h^{2})$$
The term $\boldsymbol{O}(h^{2})$ is known as the local truncation error; i.e. the algorithm is accurate to order $h^{2}$ locally, or $\boldsymbol{O}(h^{2})$.
As we have seen from our examples, when an interval of length 1 is covered with step size h = 0.5 the number of steps is 2. In general, the number of steps over the whole interval is proportional to 1/h.
and
$$\text{Global Truncation Error} = E_{t} \times \#\text{ of steps} = E_{t} \times \frac{1}{h}$$
$$\text{Global Truncation Error} = \boldsymbol{O}(h^{2}) \times \frac{1}{h} = O(h)$$
Thus overall accuracy is O(h). Since the error is O(h), where h is raised to the power 1, this is
called a first order algorithm.
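The first order behaviour can be observed numerically. The sketch below (my illustration, reusing the euler helper defined earlier) solves dy/dx = 10 − x, y(0) = −1 up to x = 2 with successively halved step sizes; the error at x = 2 shrinks in proportion to h, as expected for an O(h) method.

```python
exact = 17.0                                   # y(2) for the actual solution y = 10x - x^2/2 - 1
for n_steps in (4, 8, 16, 32):
    h = 2.0 / n_steps
    _, ys = euler(lambda x, y: 10.0 - x, 0.0, -1.0, h, n_steps)
    print(h, abs(ys[-1] - exact))              # the error is roughly halved each time h is halved
```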
7.8 Runge-Kutta Method
$$y(x + h) = y(x) + h\,y'(x) + \frac{h^{2}}{2!}y''(x) + \frac{h^{3}}{3!}y'''(x) + \frac{h^{4}}{4!}y^{iv}(x) + \cdots + \frac{h^{n}}{n!}y^{(n)}(x) + \cdots$$
Euler's Method can also be considered as the first order Runge-Kutta method.
There are several reasons that Euler’s method is not recommended for practical use, among
them,
(i) the method is not very accurate when compared to other, fancier, methods run at the
equivalent stepsize, and
(ii) neither is it very stable.
Keeping terms up to $h^{2}$ gives
$$y(x + h) = y(x) + h\,y'(x) + \frac{h^{2}}{2}y''(x) + E_{t}$$
This is the second order Runge-Kutta method. Keeping one more term of the Taylor series gives
$$y(x + h) = y(x) + h\,y'(x) + \frac{h^{2}}{2}y''(x) + \frac{h^{3}}{3!}y'''(x) + E_{t}$$
A more powerful method, in fact one of the most powerful of all (so accurate that most computer packages designed to find numerical solutions of differential equations use it by default), is the fourth order Runge-Kutta Method. (For simplicity of language we will refer to the method as simply the Runge-Kutta Method, but you should be aware that Runge-Kutta methods are actually a general class of algorithms, the fourth order method being the most popular.)
$$y_{n+1} = y_{n} + \frac{k_{1}}{6} + \frac{k_{2}}{3} + \frac{k_{3}}{3} + \frac{k_{4}}{6} + \boldsymbol{O}(h^{5})$$
where
$$\frac{dy}{dx} = f(x, y)$$
$$k_{1} = h\,f(x_{n}, y_{n})$$
$$k_{2} = h\,f\!\left(x_{n} + \frac{h}{2},\; y_{n} + \frac{k_{1}}{2}\right)$$
$$k_{3} = h\,f\!\left(x_{n} + \frac{h}{2},\; y_{n} + \frac{k_{2}}{2}\right)$$
$$k_{4} = h\,f(x_{n} + h,\; y_{n} + k_{3})$$
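The formulas above translate into the short Python sketch below (an illustration; the name rk4_step is mine). Note that the update y_{n+1} = y_n + k₁/6 + k₂/3 + k₃/3 + k₄/6 is the same as y_n + (k₁ + 2k₂ + 2k₃ + k₄)/6, the form used in Example 11 below.

```python
def rk4_step(f, x, y, h):
    """One step of the classical fourth order Runge-Kutta method for dy/dx = f(x, y)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2.0, y + k1 / 2.0)
    k3 = h * f(x + h / 2.0, y + k2 / 2.0)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```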
Example 11: Let y(t) be the solution to the following initial value problem:
$$\frac{dy}{dt} = y - t^{2} + 1, \qquad y(0) = 0.5$$
Use the Runge-Kutta Method with step size h = 0.5 to estimate y(0.5).
Solution
$$k_{1} = h\,f(x_{0}, y_{0}) = 0.5\,f(0, 0.5) = 0.5(0.5 - 0^{2} + 1) = 0.75$$
$$k_{2} = h\,f\!\left(x_{0} + \tfrac{h}{2},\; y_{0} + \tfrac{k_{1}}{2}\right) = 0.5\,f(0.25, 0.875) = 0.5(0.875 - 0.25^{2} + 1) = 0.90625$$
$$k_{3} = h\,f\!\left(x_{0} + \tfrac{h}{2},\; y_{0} + \tfrac{k_{2}}{2}\right) = 0.5\,f(0.25, 0.953125) = 0.5(0.953125 - 0.25^{2} + 1) = 0.9453125$$
$$k_{4} = h\,f(x_{0} + h,\; y_{0} + k_{3}) = 0.5\,f(0.5, 1.4453125) = 0.5(1.4453125 - 0.5^{2} + 1) = 1.09765625$$
$$y_{n+1} = y_{n} + \frac{1}{6}(k_{1} + 2k_{2} + 2k_{3} + k_{4})$$
$$y_{1} = y_{0} + \frac{1}{6}\big(0.75 + 2(0.90625) + 2(0.9453125) + 1.09765625\big)$$
$$= 0.5 + \frac{1}{6}\big(0.75 + 2(0.90625) + 2(0.9453125) + 1.09765625\big) = 1.42513021$$
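The hand computation above can be checked with the rk4_step sketch given earlier (again an illustration, not part of the notes).

```python
y1 = rk4_step(lambda t, y: y - t**2 + 1.0, 0.0, 0.5, 0.5)
print(y1)   # 1.4251302083..., matching the value found in Example 11
```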
7.9 Second Order Differential Equations
The general ordinary differential equation of second order has the form
$$y''(x) = f\big(x, y(x), y'(x)\big)$$
A second order linear homogeneous equation can be written as
$$y'' + p(x)\,y' + q(x)\,y = 0$$
Trivial Solution: For the homogeneous equation above, note that the function y(x) = 0 always satisfies the given equation, regardless of what p(x) and q(x) are. This constant zero solution is called the trivial solution of such an equation.
For the most part, we will only learn how to solve second order linear equations with constant coefficients (that is, when p(x) and q(x) are constants). Since a homogeneous equation is easier to solve than its nonhomogeneous counterpart, we start with second order linear homogeneous equations that contain constant coefficients only:
$$a\,y'' + b\,y' + c\,y = 0$$
where a, b and c are constants and a ≠ 0.
Example: Solve the simplest case, y'' = 0 (i.e. a = 1, b = c = 0).
Solution
Rewriting gives us
$$\frac{d^{2}y}{dx^{2}} = 0$$
Integrating this once gives
$$\frac{dy}{dx} = C$$
Integrating again gives
$$y = Cx + C_{1}$$
Thus, the general solution is
$$y = Cx + C_{1}$$
Example: Solve
$$y'' - y = 0$$
Solution
Rearranged as y'' = y, the equation asks for a function whose second derivative equals the function itself, which suggests trying exponential functions.
Obviously $y_{1} = e^{x}$ is a solution, and so is any constant multiple of it, $C_{1}e^{x}$. Not as obvious, but still easy to see, is that $y_{2} = e^{-x}$ is another solution (and so is any function of the form $C_{2}e^{-x}$).
It can be easily verified that any function of the form $y = C_{1}e^{x} + C_{2}e^{-x}$ will satisfy the equation. In fact, this is the general solution of the above differential equation.
Comment: Unlike first order equations we have seen previously, the general solution of a
second order equation has two arbitrary coefficients.
Example: Solve
$$y'' + y = 0$$
Solution
The equation's solution is any function satisfying the equality y'' = −y. This can be rewritten as
$$\frac{d^{2}y}{dx^{2}} = -y$$
If y = sin x or y = cos x, then $\frac{d^{2}y}{dx^{2}} = -y$. Thus, y = sin x and y = cos x are two independent solutions to the ODE, and the general solution is
$$y = C_{1}\sin x + C_{2}\cos x$$
Returning to the general constant coefficient equation $a\,y'' + b\,y' + c\,y = 0$ with $a \neq 0$, we look for solutions of the form $y = e^{rx}$. Substituting gives the characteristic polynomial
$$a\,r^{2} + b\,r + c = 0$$
Each and every root, sometimes called a characteristic root, r, of the characteristic polynomial gives rise to a solution
$$y = e^{rx}$$
From our knowledge of algebra, there are 3 possible cases for the roots of the characteristic polynomial:
1. When $b^{2} - 4ac > 0$, the characteristic polynomial has two distinct real roots $r_{1}$ and $r_{2}$, and the general solution is $y = C_{1}e^{r_{1}x} + C_{2}e^{r_{2}x}$.
Example: Solve y'' + 5y' + 4y = 0.
Solution
The characteristic polynomial is
$$r^{2} + 5r + 4 = (r + 4)(r + 1) = 0$$
so the roots are r = −4 and r = −1, and the general solution is
$$y = C_{1}e^{-4x} + C_{2}e^{-x}$$
2. When $b^{2} - 4ac = 0$, the characteristic polynomial has one repeated real root r, which by itself gives only the single solution
$$y = Ce^{rx}$$
Example: Solve y'' − 4y' + 4y = 0.
Solution
The characteristic polynomial is
$$r^{2} - 4r + 4 = (r - 2)(r - 2) = 0$$
so the repeated root is r = 2, giving
$$y = Ce^{2x}$$
This gives only one solution to the ODE, $y = e^{rx}$, and no second solution for the second degree of freedom. How then do we get the second solution? To get inspiration, consider again the example y'' = 0. Its characteristic polynomial is $r^{2} = 0$, with the repeated root $r_{1} = 0$, so
$$y_{1} = C_{1}e^{rx} = C_{1}e^{0x} = C_{1}$$
Yet we found earlier that the general solution is $y = C_{1} + Cx$, i.e.
$$y = C_{1}e^{rx} + Cx\,e^{rx} = C_{1}e^{0x} + Cx\,e^{0x} = C_{1} + Cx$$
We can now go back to the equation y'' − 4y' + 4y = 0 and try something similar: the first independent solution is $e^{2x}$; try $xe^{2x}$ as a second one. It can be verified that this also satisfies the equation, so the general solution is
$$y = C_{1}e^{2x} + Cx\,e^{2x}$$
3. When $b^{2} - 4ac < 0$, the characteristic polynomial has two complex conjugate roots $r = \lambda \pm \mu i$.
Rewriting gives
$$y_{1} = Ce^{(\lambda + \mu i)x} = Ce^{\lambda x}e^{i\mu x}$$
and
$$y_{2} = C_{1}e^{\lambda x}e^{-i\mu x}$$
From Euler's Formula
$$e^{i\theta} = \cos\theta + i\sin\theta, \qquad e^{-i\theta} = \cos\theta - i\sin\theta$$
so
$$y_{1} = Ce^{\lambda x}(\cos\mu x + i\sin\mu x)$$
$$y_{2} = C_{1}e^{\lambda x}(\cos\mu x - i\sin\mu x)$$
Note that since $C_{1}$ is an arbitrary constant, $iC_{1}e^{\lambda x}\sin\mu x$ can simply be written as $C_{1}e^{\lambda x}\sin\mu x$ (multiplying an arbitrary constant by i still leaves an arbitrary constant); similarly $\pm iCe^{\lambda x}\sin\mu x$ can be rewritten as $Ce^{\lambda x}\sin\mu x$.
Example: Solve 4y'' + 25y = 0.
Solution
The characteristic polynomial is:
$$4r^{2} + 25 = 0$$
In this case,
$$r = \pm\frac{\sqrt{0 - 4(4)(25)}}{2(4)} = \pm\frac{20i}{8} = \pm\frac{5}{2}i$$
so $\lambda = 0$ and $\mu = \tfrac{5}{2}$. Thus, the solution is
$$y_{1} = Ce^{\lambda x}(\cos\mu x + i\sin\mu x)$$
$$y_{1} = e^{0x}\left(\cos\tfrac{5}{2}x + i\sin\tfrac{5}{2}x\right) = \cos\tfrac{5}{2}x + i\sin\tfrac{5}{2}x$$
$$y_{2} = C_{1}e^{\lambda x}(\cos\mu x - i\sin\mu x)$$
$$y_{2} = e^{0x}\left(\cos\tfrac{5}{2}x - i\sin\tfrac{5}{2}x\right) = \cos\tfrac{5}{2}x - i\sin\tfrac{5}{2}x$$
Absorbing the factors of i into the arbitrary constants as above, the general solution can be written
$$y = C_{1}\cos\tfrac{5}{2}x + C_{2}\sin\tfrac{5}{2}x$$
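The three cases can be handled uniformly in code. The sketch below (my illustration, assuming NumPy; the function name general_solution is hypothetical) computes the characteristic roots of ay'' + by' + cy = 0 and prints the corresponding form of the general solution, reproducing the three examples above.

```python
import numpy as np

def general_solution(a, b, c):
    """Describe the general solution of a*y'' + b*y' + c*y = 0 (a != 0)."""
    disc = b * b - 4 * a * c
    r1, r2 = np.roots([a, b, c])                # roots of the characteristic polynomial
    if disc > 0:                                # case 1: two distinct real roots
        return f"y = C1*exp({r1.real:g}*x) + C2*exp({r2.real:g}*x)"
    if disc == 0:                               # case 2: one repeated real root
        return f"y = (C1 + C2*x)*exp({r1.real:g}*x)"
    lam, mu = r1.real, abs(r1.imag)             # case 3: complex roots lambda +/- mu*i
    return f"y = exp({lam:g}*x)*(C1*cos({mu:g}*x) + C2*sin({mu:g}*x))"

print(general_solution(1, 5, 4))    # distinct real roots -4 and -1
print(general_solution(1, -4, 4))   # repeated root 2
print(general_solution(4, 0, 25))   # complex roots +/- (5/2)i
```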
7.10 Boundary Value Problems
A boundary value problem is a differential equation together with a set of additional constraints,
called the boundary conditions. A solution to a boundary value problem is a solution to the
differential equation which also satisfies the boundary conditions.
Boundary value problems are similar to initial value problems. A boundary value problem has
conditions specified at the extremes ("boundaries") of the independent variable in the equation
whereas an initial value problem has all of the conditions specified at the same value of the
independent variable (and that value is at the lower boundary of the domain, thus the term
"initial" value).
For example, if the independent variable is x over the domain [0, 1], a boundary value problem would specify values for y(x) at both x = 0 and x = 1, whereas an initial value problem would specify values of y(x) and y'(x) at x = 0.
With initial value problems we had a differential equation and we specified the value of the
solution and an appropriate number of derivatives at the same point (collectively called initial
conditions).
For instance, for a second order differential equation the initial conditions are
$$y(x_{0}) = y_{0}, \qquad y'(x_{0}) = y_{0}'$$
With boundary value problems we will have a differential equation and we will specify the
function and/or derivatives at different points, which we call boundary values.
For second order differential equations, any of the following can be used for boundary
conditions.
$$y(x_{0}) = y_{0}, \qquad y(x_{1}) = y_{1}$$
$$y'(x_{0}) = y_{0}', \qquad y'(x_{1}) = y_{1}'$$
$$y''(x_{0}) = y_{0}'', \qquad y''(x_{1}) = y_{1}''$$
Recall that a linear differential equation is homogeneous if
$$g(x) = 0$$
for all x.
Here we will say that a boundary value problem is homogeneous if in addition to 𝑔(𝑥) =
0 we also have
$$y_{0} = 0, \qquad y_{1} = 0$$
If any of these are not zero we will call the BVP nonhomogeneous.
It is important to remember that when we say homogeneous (or nonhomogeneous) here, we are saying something not only about the differential equation itself but also about the boundary conditions.
The biggest change that we will see here comes when we go to solve the boundary value
problem. When solving linear initial value problems, a unique solution will be guaranteed under
very mild conditions. We only looked at this idea for first order IVP’s but the idea does extend
to higher order IVP’s. With boundary value problems we will often have no solution or
infinitely many solutions even for very nice differential equations that would yield a unique
solution if we had initial conditions instead of boundary conditions.
Example 18:
Solve
$$\frac{d^{2}y}{dx^{2}} - y = 0$$
given that
$$y(0) = 0, \qquad y(1) = 1$$
Solution
The characteristic polynomial is
$$r^{2} - 1 = 0$$
so r = ±1 and
$$y(x) = c_{1}e^{x} + c_{2}e^{-x}$$
is a general solution to the ODE. Applying the boundary conditions:
$$y(0) = 0 \;\Rightarrow\; c_{1}e^{0} + c_{2}e^{-0} = 0 \;\Rightarrow\; c_{1} + c_{2} = 0 \;\Rightarrow\; c_{1} = -c_{2} \qquad (1)$$
$$y(1) = 1 \;\Rightarrow\; c_{1}e^{1} + c_{2}e^{-1} = 1 \qquad (2)$$
Substituting (1) into (2):
$$-c_{2}e^{1} + c_{2}e^{-1} = 1$$
$$c_{2}(e^{-1} - e^{1}) = 1$$
$$c_{2} = \frac{1}{e^{-1} - e^{1}}, \qquad c_{1} = -\frac{1}{e^{-1} - e^{1}}$$
$$y(x) = -\frac{1}{e^{-1} - e^{1}}\,e^{x} + \frac{1}{e^{-1} - e^{1}}\,e^{-x} = \frac{e^{-x} - e^{x}}{e^{-1} - e^{1}}$$
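Boundary value problems like this one can also be solved numerically. The sketch below (my illustration, assuming SciPy's solve_bvp) treats y'' − y = 0 with y(0) = 0, y(1) = 1 as a first order system and compares the numerical value at x = 0.5 with the exact solution found above.

```python
import numpy as np
from scipy.integrate import solve_bvp

# y'' - y = 0 written as the system  y0' = y1,  y1' = y0
def rhs(x, y):
    return np.vstack((y[1], y[0]))

# Boundary conditions y(0) = 0 and y(1) = 1
def bc(ya, yb):
    return np.array([ya[0] - 0.0, yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 11)
y_guess = np.zeros((2, x.size))
sol = solve_bvp(rhs, bc, x, y_guess)

exact = (np.exp(-0.5) - np.exp(0.5)) / (np.exp(-1.0) - np.exp(1.0))  # y(0.5) from the formula above
print(sol.sol(0.5)[0], exact)   # the two values should agree closely
```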
Example 19
Solve
$$y'' + 4y = 0$$
given that
$$y(0) = 1, \qquad y'\!\left(\frac{\pi}{2}\right) = 2$$
Solution
The characteristic polynomial is
$$r^{2} + 4 = 0$$
so r = ±2i and the general solution is
$$y(x) = c_{1}\cos 2x + c_{2}\sin 2x, \qquad y'(x) = -2c_{1}\sin 2x + 2c_{2}\cos 2x$$
Applying the boundary conditions:
$$y(0) = 1 \;\Rightarrow\; c_{1}\cos 0 + c_{2}\sin 0 = 1 \;\Rightarrow\; c_{1} = 1$$
$$y'\!\left(\frac{\pi}{2}\right) = 2 \;\Rightarrow\; -2c_{1}\sin\pi + 2c_{2}\cos\pi = 2 \;\Rightarrow\; -2c_{2} = 2 \;\Rightarrow\; c_{2} = -1$$
Hence
$$y(x) = \cos 2x - \sin 2x$$
Example 20
Solve y'' + 4y = 0 given that y(0) = −2 and $y\!\left(\frac{\pi}{4}\right) = 10$.
Solution
As before, $r^{2} + 4 = 0$ gives r = ±2i, so $y(x) = C\cos 2x + C_{1}\sin 2x$. The condition y(0) = −2 gives C = −2, and
$$y\!\left(\frac{\pi}{4}\right) = 10 \;\Rightarrow\; C\cos 2\!\left(\frac{\pi}{4}\right) + C_{1}\sin 2\!\left(\frac{\pi}{4}\right) = 10 \;\Rightarrow\; C_{1} = 10$$
Thus,
$$y = -2\cos(2x) + 10\sin(2x).$$
Example 21
Solve
$$y'' + 4y = 0, \qquad y(0) = 0, \quad y(2\pi) = 0$$
Solution
From Example 20, the general solution is $y(x) = c_{1}\cos 2x + c_{2}\sin 2x$. The condition y(0) = 0 gives $c_{1} = 0$, and $y(2\pi) = c_{2}\sin 4\pi = 0$ is then satisfied for every value of $c_{2}$. Hence
$$y(x) = c_{2}\sin 2x$$
is a solution for any $c_{2}$; this BVP has infinitely many solutions.
Example 22
Solve
$$y'' + 3y = 0, \qquad y(0) = 0, \quad y(2\pi) = 0$$
Here the general solution is $y(x) = c_{1}\cos(\sqrt{3}\,x) + c_{2}\sin(\sqrt{3}\,x)$. The condition y(0) = 0 gives $c_{1} = 0$, and since $\sin(2\sqrt{3}\,\pi) \neq 0$, the condition $y(2\pi) = 0$ forces $c_{2} = 0$ as well. The only solution is therefore the trivial one,
$$y(x) = 0.$$
Recall that, for a given square matrix A, we look for values of λ for which we can find nonzero solutions, i.e. $\vec{x} \neq \vec{0}$, to
$$A\vec{x} = \lambda\vec{x}$$
In order for λ to be an eigenvalue we had to be able to find nonzero solutions to this equation.
We now ask the same question for the BVP
$$y'' + \lambda y = 0, \qquad y(0) = 0, \quad y(2\pi) = 0 \qquad (*)$$
In Example 21 𝜆 = 4 and we found the nontrivial (i.e. nonzero) solutions to the BVP.
In Example 22 𝜆 = 3 and the only solution was the trivial solution (i.e. 𝑦(𝑥) = 0).
So, this homogeneous BVP (recall this also means the boundary conditions are zero) seems to
exhibit similar behaviour to the behaviour in the matrix equation above. There are values
of 𝜆 that will give nontrivial solutions to this BVP and values of 𝜆 that will only admit the trivial
solution.
So, for those values of 𝜆 that give nontrivial solutions we call 𝜆 an eigenvalue for the BVP and
the nontrivial solutions will be called eigenfunctions for the BVP corresponding to the given
eigenvalue.
We now know that for the homogeneous BVP given in (*), λ = 4 is an eigenvalue (with eigenfunctions $y(x) = c_{2}\sin 2x$) and that λ = 3 is not an eigenvalue.
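For this BVP the eigenvalue condition can be checked directly: for λ > 0 the general solution of y'' + λy = 0 is c₁cos(√λ x) + c₂sin(√λ x), the condition y(0) = 0 forces c₁ = 0, and a nontrivial solution then exists precisely when sin(2π√λ) = 0. The short sketch below (my illustration) tests this condition for λ = 4 and λ = 3.

```python
import math

def is_eigenvalue(lam, tol=1e-12):
    """For y'' + lam*y = 0 with y(0) = y(2*pi) = 0 and lam > 0, a nontrivial
    solution c2*sin(sqrt(lam)*x) exists exactly when sin(2*pi*sqrt(lam)) = 0."""
    return abs(math.sin(2.0 * math.pi * math.sqrt(lam))) < tol

print(is_eigenvalue(4.0))   # True:  lambda = 4 is an eigenvalue (Example 21)
print(is_eigenvalue(3.0))   # False: lambda = 3 is not an eigenvalue (Example 22)
```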
Eventually we'll try to determine whether there are any other eigenvalues for (*); however, before we do that, let's comment briefly on why it is so important for the BVP to be homogeneous in this discussion. For the differential equation
$$y'' + 4y = 0, \qquad y(0) = a, \quad y(2\pi) = b,$$
simply changing the values of a and/or b determines whether we get nontrivial solutions or no solution at all. In the discussion of eigenvalues/eigenfunctions we need solutions to exist, and the only way to assure this behaviour is to require that the boundary conditions also be homogeneous. In other words, we need the BVP to be homogeneous.
There is one final topic that we need to discuss before we move into the topic of eigenvalues
and eigenfunctions and this is more of a notational issue that will help us with some of the
work that we’ll need to do.
Let’s suppose that we have a second order differential equation and its characteristic
polynomial has two real, distinct roots and that they are in the form