Lecture Notes on Numerical Differential Equations
E-mail: [email protected]
URL: www.math.niu.edu/~dattab
1 Initial Value Problem for Ordinary Differential Equations
We consider the problem of numerically solving a system of differential equations of the form
$$\frac{dy}{dt} = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha.$$
Since there are infinitely many values of $t$ between $a$ and $b$, we will be concerned here only with finding approximations of the solution $y(t)$ at several specified values of $t$ in $[a, b]$, rather than finding $y(t)$ at every value between $a$ and $b$.
Notation:
• $y_i$ = an approximation of $y(t_i)$ at $t = t_i$.
• Divide $[a, b]$ into $N$ equal subintervals of length $h$, with mesh points $a = t_0 < t_1 < t_2 < \cdots < t_N = b$.
• $h = \dfrac{b-a}{N}$ (step size).
We will briefly describe here the following well-known numerical methods for solving the IVP: Euler's method, higher-order Taylor series methods, and Runge-Kutta methods (including the Modified Euler Method, Heun's Method, and the classical Runge-Kutta method of order 4).
We will also discuss the error behavior and convergence of these methods.
However, before doing so, we state in the following section, without proof, a result on the existence and uniqueness of the solution of the IVP. The proof can be found in most standard textbooks on ordinary differential equations.
Definition.
A set $S$ is said to be convex if whenever $(t_1, y_1)$ and $(t_2, y_2)$ belong to $S$, the point $((1 - \lambda)t_1 + \lambda t_2,\; (1 - \lambda)y_1 + \lambda y_2)$ also belongs to $S$ for each $\lambda$ with $0 \le \lambda \le 1$.
Definition.
An IVP is said to be well-posed if a small perturbation in the data of the problem leads to only a small change in the solution.
Since numerical computation may very well introduce some perturbations to the problem, it is important that the problem to be solved is well-posed.
Fortunately, the Lipschitz condition is a sufficient condition for the IVP to be well-posed.
One of the simplest methods for solving the IVP is the classical Euler method.
The method is derived from the Taylor Series expansion of the function y(t).
The function y(t) has the following Taylor series expansion of order n at t = ti+1 :
$$y(t_{i+1}) = y(t_i) + (t_{i+1} - t_i)\,y'(t_i) + \frac{(t_{i+1} - t_i)^2}{2!}\,y''(t_i) + \cdots + \frac{(t_{i+1} - t_i)^n}{n!}\,y^{(n)}(t_i) + \frac{(t_{i+1} - t_i)^{n+1}}{(n+1)!}\,y^{(n+1)}(\xi_i),$$
where $\xi_i$ is in $(t_i, t_{i+1})$.
Since $t_{i+1} - t_i = h$, this becomes
$$y(t_{i+1}) = y(t_i) + h\,y'(t_i) + \frac{h^2}{2!}\,y''(t_i) + \cdots + \frac{h^n}{n!}\,y^{(n)}(t_i) + \frac{h^{n+1}}{(n+1)!}\,y^{(n+1)}(\xi_i).$$
In particular, taking $n = 1$,
$$y(t_{i+1}) = y(t_i) + h\,y'(t_i) + \frac{h^2}{2}\,y''(\xi_i).$$
The term $\frac{h^2}{2!}\,y''(\xi_i)$ is called the remainder term.
Euler's Method
Neglecting the remainder term and using $y'(t_i) = f(t_i, y(t_i)) \approx f(t_i, y_i)$, we obtain
$$y_{i+1} = y_i + h\,f(t_i, y_i).$$
This formula is known as the Euler method and can be used to approximate $y(t_{i+1})$.
Geometrical Interpretation
[Figure: starting from $y(t_0) = y(a) = \alpha$, Euler's method follows the tangent line with slope $f(t_i, y_i)$ on each subinterval, producing the approximations $y_1, y_2, \dots, y_N$ of $y(t_1), y(t_2), \dots, y(t_N) = y(b)$ at the mesh points $a = t_0, t_1, t_2, \dots, t_{N-1}, t_N = b$.]
Algorithm: Euler's Method for IVP
Step 1 (Initialization): Set $t_0 = a$, $y_0 = y(a) = \alpha$, $N = \dfrac{b-a}{h}$.
Step 2: For $i = 0, 1, 2, \dots, N-1$ do
Compute $y_{i+1} = y_i + h\,f(t_i, y_i)$
Compute $t_{i+1} = t_i + h$
End
Example: $y' = t^2 + 5$, $0 \le t \le 1$, $y(0) = 0$, $h = 0.25$.
Find $y_1$, $y_2$, $y_3$, and $y_4$, approximations of $y(0.25)$, $y(0.50)$, $y(0.75)$, and $y(1)$, respectively.
The points of subdivision are: $t_0 = 0$, $t_1 = 0.25$, $t_2 = 0.50$, $t_3 = 0.75$, and $t_4 = 1$.
$i = 0$: $t_1 = t_0 + h = 0.25$
$y_1 = y_0 + h\,f(t_0, y_0) = 0 + 0.25\,(0^2 + 5) = 1.25$
(Exact: $y(0.25) = 1.2552$.)
$i = 1$: $t_2 = t_1 + h = 0.50$
$y_2 = y_1 + h\,f(t_1, y_1) = 1.25 + 0.25\,(0.25^2 + 5) = 2.5156$
(Exact: $y(0.5) = 2.5417$.)
$i = 2$: $t_3 = t_2 + h = 0.75$
$y_3 = y_2 + h\,f(t_2, y_2) = 2.5156 + 0.25\,(0.50^2 + 5) = 3.8281$
(Exact: $y(0.75) = 3.8906$.)
etc.
Example: $y' = t^2 + 5$, $0 \le t \le 2$, $y(0) = 0$, $h = 0.5$.
$i = 0$: $y_1 = y_0 + h\,f(t_0, y_0) = 0 + 0.5\,(0^2 + 5) = 2.5$
(Exact: $y(0.50) = 2.5417$.)
$i = 1$: $y_2 = y_1 + h\,f(t_1, y_1) = 2.5 + 0.5\,(0.5^2 + 5) = 5.125$
(Exact: $y(1) = 5.3333$.)
etc.
Note that with this larger step size the errors are larger than in the previous example.
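For illustration, here is a minimal Python sketch of the Euler algorithm above, applied to the first example; the function name euler and its signature are illustrative, not from the notes.

# A minimal sketch of Euler's method for y' = f(t, y), y(a) = alpha,
# with step size h on [a, b]. Names and signature are illustrative only.
def euler(f, a, b, alpha, h):
    N = int(round((b - a) / h))   # number of steps
    t, y = a, alpha
    ys = [y]
    for _ in range(N):
        y = y + h * f(t, y)       # y_{i+1} = y_i + h f(t_i, y_i)
        t = t + h                 # t_{i+1} = t_i + h
        ys.append(y)
    return ys

# First example above: y' = t^2 + 5, y(0) = 0, h = 0.25 on [0, 1].
print(euler(lambda t, y: t**2 + 5, 0.0, 1.0, 0.0, 0.25))
# [0.0, 1.25, 2.515625, 3.828125, 5.21875]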
The approximations obtained by a numerical method for solving the IVP are usually subject to three types of errors:
• Local discretization (truncation) error
• Global discretization error
• Round-off error
We will not consider the round-off error in the discussion below.
The local discretization error is the error made at a single step due to the truncation
of the series used to solve the problem.
Recall that the Euler Method was obtained by truncating the Taylor series
$$y(t_{i+1}) = y(t_i) + h\,y'(t_i) + \frac{h^2}{2}\,y''(t_i) + \cdots$$
after two terms. Thus, in obtaining Euler's method, the first term neglected is $\frac{h^2}{2}\,y''(t_i)$.
So the local error in Euler's method is
$$E_{LE} = \frac{h^2}{2}\,y''(\xi_i),$$
where $\xi_i$ lies between $t_i$ and $t_{i+1}$. In this case, we say that the local error is of order $h^2$, written $O(h^2)$.
Global error is the difference between the true solution $y(t_i)$ and the approximate solution $y_i$ at $t = t_i$. Thus, Global error $= y(t_i) - y_i$. Denote this by $E_{GE}$.
The following theorem shows that the global error $E_{GE}$ is of order $h$.
Theorem. Let $y(t)$ be the unique solution of the IVP
$$y' = f(t, y), \quad a \le t \le b, \quad -\infty < y < \infty, \quad y(a) = \alpha.$$
Let $L$ and $M$ be two numbers such that
$$\left|\frac{\partial f(t, y)}{\partial y}\right| \le L \quad \text{and} \quad |y''(t)| \le M \text{ in } [a, b].$$
Then the global error $E_{GE}$ at $t = t_i$ satisfies
$$|E_{GE}| = |y(t_i) - y_i| \le \frac{hM}{2L}\left(e^{L(t_i - a)} - 1\right).$$
Thus, the global error bound for Euler's method depends upon $h$, whereas the local error depends upon $h^2$.
Proof of the above theorem can be found in the book by C. W. Gear, Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Inc., 1971.
Remark. Since the exact solution $y(t)$ of the IVP is not known, the above bound may not be of practical importance as far as knowing a priori how large the error can be. However, from this error bound we can say that the error in the Euler method can be made as small as we like by decreasing the step size. Furthermore, if the quantities $L$ and $M$ of the above theorem can be found, then we can determine what step size will be needed to achieve a certain accuracy, as the following example shows.
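For instance, requiring $|E_{GE}| \le \varepsilon$ at $t = b$ and rearranging the above bound, it suffices to take
$$h \le \frac{2L\varepsilon}{M\left(e^{L(b-a)} - 1\right)}.$$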
Example. Consider the IVP with $f(t, y) = \dfrac{t^2 + y^2}{2}$.
Compute $L$:
Since $f(t, y) = \dfrac{t^2 + y^2}{2}$, we have
$$\frac{\partial f}{\partial y} = y.$$
Thus $\left|\dfrac{\partial f}{\partial y}\right| = |y| \le 1$ on the region of interest, giving $L = 1$.
Find $M$:
To find $M$, we compute the second derivative of $y(t)$ as follows:
$$y' = \frac{dy}{dt} = f(t, y) \quad \text{(given)}.$$
By implicit differentiation,
$$y'' = \frac{\partial f}{\partial t} + f\,\frac{\partial f}{\partial y} = t + \frac{t^2 + y^2}{2}\cdot y = t + \frac{y}{2}\,(t^2 + y^2).$$
A bound $M$ on $|y''|$ over the region of interest then completes the data needed for the error bound.
So, if we truncate the Taylor series after $(k + 1)$ terms and use the truncated series to obtain the approximation $y_{i+1}$ of $y(t_{i+1})$, we have the following $k$-th order Taylor algorithm for the IVP.
Taylor's Algorithm of order k for IVP
Step 1 (Initialization): $t_0 = a$, $y_0 = \alpha$, $N = \dfrac{b-a}{h}$.
Step 2. For $i = 0, 1, \dots, N - 1$ do
2.1 Compute $T_k(t_i, y_i) = f(t_i, y_i) + \dfrac{h}{2}\,f'(t_i, y_i) + \cdots + \dfrac{h^{k-1}}{k!}\,f^{(k-1)}(t_i, y_i)$
2.2 Compute $y_{i+1} = y_i + h\,T_k(t_i, y_i)$
End
Note: With $k = 1$, the above formula for $y_{i+1}$ reduces to Euler's method.
Example: $y' = y - t^2 + 1$, $0 \le t \le 2$, $y(0) = 0.5$, $h = 0.2$.
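A minimal Python sketch of Taylor's algorithm of order $k = 2$ applied to this example; the helper fp below is the total derivative $f'(t, y) = \frac{\partial f}{\partial t} + f\,\frac{\partial f}{\partial y} = -2t + (y - t^2 + 1)$, worked out by hand for this particular $f$, and all function names are illustrative rather than taken from the notes.

# Taylor's algorithm of order 2 for y' = y - t^2 + 1, y(0) = 0.5, h = 0.2 on [0, 2].
def f(t, y):
    return y - t**2 + 1.0

def fp(t, y):
    # total derivative df/dt = f_t + f*f_y = -2t + (y - t^2 + 1)
    return -2.0 * t + (y - t**2 + 1.0)

def taylor2(a, b, alpha, h):
    N = int(round((b - a) / h))
    t, y = a, alpha
    ys = [y]
    for _ in range(N):
        T2 = f(t, y) + (h / 2.0) * fp(t, y)   # T_2(t_i, y_i)
        y = y + h * T2                        # y_{i+1} = y_i + h T_2(t_i, y_i)
        t = t + h
        ys.append(y)
    return ys

print(taylor2(0.0, 2.0, 0.5, 0.2))
# For comparison, the exact solution is y(t) = (t + 1)^2 - 0.5 e^t.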
4 Runge-Kutta Methods
• Euler's method is the simplest to implement; however, even for reasonable accuracy the step size $h$ needs to be very small.
• The difficulty with higher-order Taylor series methods is that derivatives of higher orders of $f(t, y)$ need to be computed, and these are very often difficult to obtain; indeed, $f(t, y)$ is not even known explicitly in many applications.
The Runge-Kutta methods aim at achieving the accuracy of higher order Taylor series meth-
ods without computing the higher order derivatives.
A Runge-Kutta method of order 2 has the form
$$y_{i+1} = y_i + \alpha_1 k_1 + \alpha_2 k_2, \qquad (4.1)$$
where
$$k_1 = h\,f(t_i, y_i), \qquad (4.2)$$
$$k_2 = h\,f(t_i + \alpha h,\; y_i + \beta k_1). \qquad (4.3)$$
The constants $\alpha_1$, $\alpha_2$, $\alpha$, and $\beta$ are to be chosen so that the formula is as accurate as possible.
To develop the method we need an important result from calculus: Taylor's series for a function of two variables.
Let $f(t, y)$ and its partial derivatives of orders up to $(n + 1)$ be continuous in the domain
$$D = \{(t, y) \mid a \le t \le b,\ c \le y \le d\}.$$
Then
$$f(t, y) = f(t_0, y_0) + (t - t_0)\,\frac{\partial f}{\partial t}(t_0, y_0) + (y - y_0)\,\frac{\partial f}{\partial y}(t_0, y_0) + \cdots + \frac{1}{n!}\sum_{i=0}^{n}\binom{n}{i}(t - t_0)^{n-i}(y - y_0)^{i}\,\frac{\partial^n f}{\partial t^{\,n-i}\,\partial y^{\,i}}(t_0, y_0) + R_n(t, y),$$
where $R_n(t, y)$ is the remainder after $n$ terms and involves the partial derivatives of order $n + 1$.
Applying this result with $n = 1$ (and neglecting the remainder) to expand $f(t_i + \alpha h,\, y_i + \beta k_1)$ about $(t_i, y_i)$, we obtain
$$f(t_i + \alpha h,\, y_i + \beta k_1) = f(t_i, y_i) + \alpha h\,\frac{\partial f}{\partial t}(t_i, y_i) + \beta k_1\,\frac{\partial f}{\partial y}(t_i, y_i). \qquad (4.4)$$
From (4.4) and (4.3), we obtain
$$\frac{k_2}{h} = f(t_i, y_i) + \alpha h\,\frac{\partial f}{\partial t}(t_i, y_i) + \beta k_1\,\frac{\partial f}{\partial y}(t_i, y_i). \qquad (4.5)$$
Again, substituting the value of $k_1$ from (4.2) and $k_2$ from (4.3) in (4.1), we get (after some simplification)
$$y_{i+1} = y_i + \alpha_1 h\,f(t_i, y_i) + \alpha_2 h\left[f(t_i, y_i) + \alpha h\,\frac{\partial f}{\partial t}(t_i, y_i) + \beta h\,f(t_i, y_i)\,\frac{\partial f}{\partial y}(t_i, y_i)\right]$$
$$= y_i + (\alpha_1 + \alpha_2)\,h\,f(t_i, y_i) + \alpha_2 h^2\left[\alpha\,\frac{\partial f}{\partial t}(t_i, y_i) + \beta\,f(t_i, y_i)\,\frac{\partial f}{\partial y}(t_i, y_i)\right]. \qquad (4.6)$$
Also, note that from the Taylor series of $y(t_{i+1})$ about $t_i$, using $y' = f$ and $y'' = \frac{\partial f}{\partial t} + f\,\frac{\partial f}{\partial y}$,
$$y(t_{i+1}) = y(t_i) + h\,f(t_i, y_i) + \frac{h^2}{2}\left[\frac{\partial f}{\partial t}(t_i, y_i) + f(t_i, y_i)\,\frac{\partial f}{\partial y}(t_i, y_i)\right] + \text{higher order terms}.$$
Neglecting the higher order terms, the corresponding approximation is
$$y_{i+1} = y_i + h\,f(t_i, y_i) + \frac{h^2}{2}\left[\frac{\partial f}{\partial t}(t_i, y_i) + f(t_i, y_i)\,\frac{\partial f}{\partial y}(t_i, y_i)\right]. \qquad (4.7)$$
If we want (4.6) and (4.7) to agree, then we must have
• $\alpha_1 + \alpha_2 = 1$ (comparing the coefficients of $h\,f(t_i, y_i)$),
• $\alpha_2 \alpha = \dfrac{1}{2}$ (comparing the coefficients of $h^2\,\dfrac{\partial f}{\partial t}(t_i, y_i)$),
• $\alpha_2 \beta = \dfrac{1}{2}$ (comparing the coefficients of $h^2\,f(t_i, y_i)\,\dfrac{\partial f}{\partial y}(t_i, y_i)$).
Since the number of unknowns here exceeds the number of equations, there are infinitely many possible solutions. The simplest solution is
$$\alpha_1 = \alpha_2 = \frac{1}{2}, \qquad \alpha = \beta = 1.$$
With these choices we can generate $y_{i+1}$ from $y_i$ as follows. The process is known as the Modified Euler Method.
$$y_{i+1} = y_i + \frac{1}{2}(k_1 + k_2),$$
where
$$k_1 = h\,f(t_i, y_i), \qquad k_2 = h\,f(t_i + h,\; y_i + k_1),$$
or
$$y_{i+1} = y_i + \frac{h}{2}\left[f(t_i, y_i) + f\big(t_i + h,\; y_i + h\,f(t_i, y_i)\big)\right].$$
Algorithm: The Modified Euler Method
Step 1 (Initialization)
Set $t_0 = a$, $y_0 = y(t_0) = y(a) = \alpha$, $N = \dfrac{b-a}{h}$.
Step 2 For $i = 0, 1, 2, \dots, N - 1$ do
Compute $k_1 = h\,f(t_i, y_i)$
Compute $k_2 = h\,f(t_i + h,\; y_i + k_1)$
Compute $y_{i+1} = y_i + \dfrac{1}{2}(k_1 + k_2)$
End
Example: $y' = f(t, y) = t + y$, $y(0) = 1$, $h = 0.01$.
$t_0 = 0$, $t_1 = 0.01$, $t_2 = 0.02$.
$i = 0$:
$k_1 = h\,f(t_0, y_0) = 0.01 \times (0 + 1) = 0.01$
$k_2 = h\,f(t_0 + h,\; y_0 + k_1) = 0.01 \times (0.01 + 1.01) = 0.0102$
$y_1 = y_0 + \frac{1}{2}(k_1 + k_2) = 1 + \frac{1}{2}(0.01 + 0.0102) = 1.0101$ (approximate value of $y(0.01)$).
$i = 1$:
$k_1 = h\,f(t_1, y_1) = 0.01 \times (0.01 + 1.0101) = 0.0102$
$k_2 = h\,f(t_1 + h,\; y_1 + k_1) = 0.01 \times f(0.02, 1.0101 + 0.0102) = 0.01 \times (0.02 + 1.0203) = 0.0104$
$y_2 = y_1 + \frac{1}{2}(k_1 + k_2) = 1.0101 + \frac{1}{2}(0.0102 + 0.0104) = 1.0204$ (approximate value of $y(0.02)$).
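A minimal Python sketch of the Modified Euler algorithm, applied to the example above (the function name modified_euler and its signature are illustrative):

# Modified Euler method: y_{i+1} = y_i + (k1 + k2)/2.
def modified_euler(f, t0, y0, h, steps):
    t, y = t0, y0
    ys = [y]
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h, y + k1)
        y = y + 0.5 * (k1 + k2)
        t = t + h
        ys.append(y)
    return ys

# Example above: y' = t + y, y(0) = 1, h = 0.01, two steps.
print(modified_euler(lambda t, y: t + y, 0.0, 1.0, 0.01, 2))
# Approximately [1.0, 1.0101, 1.0204], matching y_1 and y_2 above.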
Local Error in the Modified Euler Method
Since, in deriving the Modified Euler method, we neglected the terms involving $h^3$ and higher powers of $h$, the local error for this method is $O(h^3)$. Thus, with the Modified Euler method we will be able to use a larger step size $h$ than with the Euler method to obtain the same accuracy.
In deriving the Modified Euler Method, we have considered only one set of possible values for $\alpha_1$, $\alpha_2$, $\alpha$, and $\beta$. Another choice is $\alpha_1 = 0$, $\alpha_2 = 1$, $\alpha = \beta = \frac{1}{2}$, which gives Heun's Method:
$$y_{i+1} = y_i + k_2,$$
where $k_1 = h\,f(t_i, y_i)$ and
$$k_2 = h\,f\!\left(t_i + \frac{h}{2},\; y_i + \frac{k_1}{2}\right),$$
or
$$y_{i+1} = y_i + h\,f\!\left(t_i + \frac{h}{2},\; y_i + \frac{h}{2}\,f(t_i, y_i)\right), \qquad i = 0, 1, \dots, N - 1.$$
Example
$y' = e^t$, $y(0) = 1$, $h = 0.5$, $0 \le t \le 1$.
$t_0 = 0$, $t_1 = 0.5$, $t_2 = 1$.
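A minimal Python sketch of Heun's Method applied to this example; the names are illustrative, and the exact solution $y(t) = e^t$ is quoted only for comparison.

import math

# Heun's Method (as defined above): y_{i+1} = y_i + h f(t_i + h/2, y_i + (h/2) f(t_i, y_i)).
def heun(f, t0, y0, h, steps):
    t, y = t0, y0
    ys = [y]
    for _ in range(steps):
        y = y + h * f(t + h / 2.0, y + (h / 2.0) * f(t, y))
        t = t + h
        ys.append(y)
    return ys

# Example: y' = e^t, y(0) = 1, h = 0.5 on [0, 1].
print(heun(lambda t, y: math.exp(t), 0.0, 1.0, 0.5, 2))
# Approximately [1.0, 1.6420, 2.7005]; the exact values are e^{0.5} = 1.6487 and e^{1} = 2.7183.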
Heun's Method and the Modified Euler Method are classified as Runge-Kutta methods of order 2.
A method widely used in practice is the Runge-Kutta method of order 4. Its derivation is complicated, so we will just state the method without proof.
Algorithm: The Runge-Kutta Method of Order 4
Step 1: (Initialization)
Set $t_0 = a$, $y_0 = y(t_0) = y(a) = \alpha$, $N = \dfrac{b-a}{h}$.
Step 2: (Computations of the Runge-Kutta Coefficients)
For $i = 0, 1, 2, \dots, N - 1$ do
$k_1 = h\,f(t_i, y_i)$
$k_2 = h\,f\!\left(t_i + \dfrac{h}{2},\; y_i + \dfrac{1}{2}k_1\right)$
$k_3 = h\,f\!\left(t_i + \dfrac{h}{2},\; y_i + \dfrac{1}{2}k_2\right)$
$k_4 = h\,f(t_i + h,\; y_i + k_3)$
Compute $y_{i+1} = y_i + \dfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$
End
The Local Truncation Error: The local truncation error of the Runge-Kutta Method of order 4 is $O(h^5)$.
Example:
$y' = t + y$, $y(0) = 1$, $h = 0.01$.
Let us compute $y(0.01)$ using the Runge-Kutta Method of order 4.
$i = 0$:
$$y_1 = y_0 + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) = 1.010100334,$$
and so on.
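A minimal Python sketch of the fourth-order Runge-Kutta algorithm, applied to this example (the function name rk4 and its signature are illustrative):

# Classical fourth-order Runge-Kutta method.
def rk4(f, t0, y0, h, steps):
    t, y = t0, y0
    ys = [y]
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2.0, y + k1 / 2.0)
        k3 = h * f(t + h / 2.0, y + k2 / 2.0)
        k4 = h * f(t + h, y + k3)
        y = y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t = t + h
        ys.append(y)
    return ys

# Example above: y' = t + y, y(0) = 1, h = 0.01, one step.
print(rk4(lambda t, y: t + y, 0.0, 1.0, 0.01, 1))
# [1.0, 1.01010033...], in agreement with y_1 = 1.010100334 above.
# For comparison, the exact solution is y(t) = 2e^t - t - 1, so y(0.01) = 1.01010033...,
# illustrating the O(h^5) local truncation error.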