Lecture Notes on Numerical Differential Equations: IVP

File faclib/dattab/LECTURE-NOTES/diff-equation-S06.tex, 4/7/2009 at 10:44, version 7

Professor Biswa Nath Datta
Department of Mathematical Sciences
Northern Illinois University
DeKalb, IL 60115 USA

E-mail: [email protected]
URL: www.math.niu.edu/~dattab
1 Initial Value Problem for Ordinary Differential
Equations

We consider the problem of numerically solving a system of differential equations of the form

dy/dt = f (t, y), a ≤ t ≤ b, y(a) = α (given).
Such a problem is called the Initial Value Problem or in short IVP, because the initial

value of the solution y(a) = α is given.

Since there are infinitely many values between a and b, we will be concerned here only with
finding approximations of the solution y(t) at several specified values of t in [a, b], rather than
finding y(t) at every value between a and b.

Denote

• yi = An approximation of y(ti ) at t = ti .

• Divide [a, b] into N equal subintervals of length h:

t0 = a < t1 < t2 < · · · < tN = b.

[Figure: the interval subdivided at a = t0, t1, t2, . . . , tN = b.]

• h = (b − a)/N (step size)
Notation:

y(ti ) ≡ Exact value at t = ti .


yi ≡ Approximate value of y(ti ).
The Initial Value Problem
Given

(1) y' = f (t, y), a ≤ t ≤ b

(2) The initial value y(t0) = y(a) = α

(3) The step-size h.

Find yi (approximate value of y(ti)), i = 1, · · · , N, where N = (b − a)/h.

We will briefly describe here the following well-known numerical methods for solving the
IVP:

• The Euler Method

• The Taylor Method of higher order

• The Runge-Kutta Method

• The Adams-Moulton Method

• The Milne Method

etc.

We will also discuss the error behavior and convergence of these methods.
However, before doing so, we state without proof, in the following section, a result on the
existence and uniqueness of the solution for the IVP. The proof can be found in most
books on ordinary differential equations.

Existence and Uniqueness of the Solution for the IVP


Theorem: (Existence and Uniqueness Theorem for the IVP).

The initial value problem

y' = f (t, y), y(a) = α

has a unique solution y(t) for a ≤ t ≤ b, if f (t, y) is continuous on the domain
R = {(t, y) : a ≤ t ≤ b, −∞ < y < ∞} and satisfies the following inequality:

|f (t, y) − f (t, y*)| ≤ L|y − y*|,

whenever (t, y) and (t, y*) ∈ R.



Definition. The condition |f (t, y) − f (t, y*)| ≤ L|y − y*| is called the Lipschitz
Condition. The number L is called a Lipschitz Constant.

Definition.
A set S is said to be convex if whenever (t1 , y1 ) and (t2 , y2 ) belong to S, the point ((1 −
λ)t1 + λt2 , (1 − λ)y1 + λy2 ) also belongs to S for each λ when 0 ≤ λ ≤ 1.

Simplification of the Lipschitz Condition for the Convex Domain


If the domain R happens to be a convex set, then the condition of the above Theorem reduces
to

|∂f/∂y (t, y)| ≤ L for all (t, y) ∈ R.

Lipschitz Condition and Well-Posedness

Definition.
An IVP is said to be well-posed if a small perturbation in the data of the problem leads to
only a small change in the solution.
Since numerical computation may very well introduce some perturbations to the problem, it
is important that the problem to be solved is well-posed.
Fortunately, the Lipschitz condition is a sufficient condition for the IVP to be well-posed.

Theorem (Well-Posedness of the IVP).
If f (t, y) satisfies the Lipschitz Condition, then the IVP is well-posed.

2 The Euler Method

One of the simplest methods for solving the IVP is the classical Euler method.

The method is derived from the Taylor Series expansion of the function y(t).
The function y(t) has the following Taylor series expansion of order n at t = ti+1:

y(ti+1) = y(ti) + (ti+1 − ti) y'(ti) + ((ti+1 − ti)^2/2!) y''(ti) + · · · + ((ti+1 − ti)^n/n!) y^(n)(ti)
+ ((ti+1 − ti)^(n+1)/(n + 1)!) y^(n+1)(ξi), where ξi is in (ti, ti+1).

Substitute h = ti+1 − ti. Then

Taylor Series Expansion of y(t) of order n at t = ti+1:

y(ti+1) = y(ti) + h y'(ti) + (h^2/2!) y''(ti) + · · · + (h^n/n!) y^(n)(ti) + (h^(n+1)/(n + 1)!) y^(n+1)(ξi).

For n = 1, this formula reduces to

y(ti+1) = y(ti) + h y'(ti) + (h^2/2) y''(ξi).

The term (h^2/2!) y''(ξi) is called the remainder term.

Neglecting the remainder term, we have

Euler’s Method

yi+1 = yi + h y'(ti)
     = yi + h f (ti, yi), i = 0, 1, 2, · · · , N − 1

This formula is known as the Euler method and can now be used to approximate y(ti+1).

Geometrical Interpretation

[Figure: geometrical interpretation — the exact values y(t0) = y(a) = α, y(t1), y(t2), . . . ,
y(tN) = y(b) plotted over the points a, t1, t2, . . . , tN−1, b = tN.]
Algorithm: Euler’s Method for IVP

Input: (i). The function f (t, y)


(ii). The end points of the interval [a, b] : a and b
(iii). The initial value: α = y(t0 ) = y(a)

Output: Approximations yi+1 of y(ti+1), i = 0, 1, · · · , N − 1.

Step 1. Initialization: Set t0 = a, y0 = y(t0) = y(a) = α,
and N = (b − a)/h.

Step 2. For i = 0, 1, · · · , N − 1 do
Compute yi+1 = yi + hf (ti, yi)
End

Example: y' = t^2 + 5, 0 ≤ t ≤ 1,
y(0) = 0, h = 0.25.
Find y1, y2, y3, and y4, approximations of y(0.25), y(0.50), y(0.75), and y(1), respectively.
The points of subdivision are: t0 = 0, t1 = 0.25, t2 = 0.50, t3 = 0.75, and t4 = 1.
i = 0 : t1 = t0 + h = 0.25

y1 = y0 + hf (t0, y0) = 0 + 0.25(5) = 1.25

(y(0.25) = 1.2552).

i = 1 : t2 = t1 + h = 0.50

y2 = y1 + hf (t1, y1)

= 1.25 + 0.25(t1^2 + 5) = 1.25 + 0.25((0.25)^2 + 5)

= 2.5156

(y(0.5) = 2.5417).

i = 2 : t3 = t2 + h = 0.75

y3 = y2 + hf (t2, y2)

= 2.5156 + 0.25((0.5)^2 + 5) = 3.8281

(y(0.75) = 3.8906).

etc.
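The algorithm above is easy to program. The following is a minimal sketch in Python (the function name `euler` and the list-of-values return format are my own choices, not from the notes), applied to this example:

```python
def euler(f, a, b, alpha, h):
    """Euler's method: y_{i+1} = y_i + h*f(t_i, y_i) on [a, b], y(a) = alpha."""
    N = round((b - a) / h)
    t, y = a, alpha
    ys = [y]                    # ys[i] approximates y(t_i)
    for i in range(N):
        y = y + h * f(t, y)     # one Euler step
        t = a + (i + 1) * h
        ys.append(y)
    return ys

# Example from the notes: y' = t^2 + 5, y(0) = 0, h = 0.25 on [0, 1]
ys = euler(lambda t, y: t**2 + 5, 0.0, 1.0, 0.0, 0.25)
# ys[1] = 1.25, ys[2] = 2.515625, ys[3] = 3.828125, matching the hand computation.
```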

Example: y' = t^2 + 5, 0 ≤ t ≤ 2,
y(0) = 0, h = 0.5.

So, the points of subdivision are: t0 = 0, t1 = 0.5, t2 = 1, t3 = 1.5, t4 = 2.

We compute y1 , y2 , y3 , and y4 , which are, respectively, approximations to y(0.5), y(1), y(1.5),


and y(2).
i=0: y1 = y0 + hf (t0 , y0 ) = y(0) + hf (0, 0) = 0 + 0.5 × 5 = 2.5

(y(0.50) = 2.5417).

i=1: y2 = y1 + hf (t1 , y1 ) = 2.5 + 0.5((0.5)2 + 5) = 5.1250

(y(1) = 5.3333).

i = 2 : y3 = y2 + hf (t2, y2) = 5.1250 + 0.5(t2^2 + 5) = 5.1250 + 0.5((1)^2 + 5) = 8.1250

(y(1.5) = 8.6250).

etc.

The Errors in Euler’s Method

The approximations obtained by a numerical method to solve the IVP are usually subject
to three types of errors:

• Local Truncation Error

• Global Truncation Error

• Round-off Error

We will not consider the round-off error for our discussions below.

The local discretization error is the error made at a single step due to the truncation
of the series used to solve the problem.

Recall that the Euler Method was obtained by truncating the Taylor series

y(ti+1) = y(ti) + h y'(ti) + (h^2/2) y''(ti) + · · ·

after two terms. Thus, in obtaining Euler’s method, the first term neglected is (h^2/2) y''(t).

So the local error in Euler’s method is: E_LE = (h^2/2) y''(ξi),

where ξi lies between ti and ti+1. In this case, we say that the local error is of order h^2,
written as O(h^2). Note that the local error E_LE converges to zero as h → 0.

Global error is the difference between the true solution y(ti ) and the approximate solution
yi at t = ti . Thus, Global error = y(ti ) − yi . Denote this by EGE .

The following theorem shows that the global error, EGE , is of order h.

Theorem: (Global Error Bound for the Euler Method)

Let y(t) be the unique solution of the IVP: y' = f (t, y); y(a) = α,
a ≤ t ≤ b, −∞ < y < ∞.
Let L and M be two numbers such that

|∂f (t, y)/∂y| ≤ L, and |y''(t)| ≤ M in [a, b].

Then the global error E_GE at t = ti satisfies

|E_GE| = |y(ti) − yi| ≤ (hM/(2L)) (e^(L(ti − a)) − 1).

Thus, the global error bound for Euler’s method depends upon h, whereas the
local error depends upon h^2.

Proof of the above theorem can be found in the book by C.W. Gear, Numerical Initial
Value Problems in Ordinary Differential Equations, Prentice-Hall, Inc. (1971).

Remark. Since the exact solution y(t) of the IVP is not known, the above bound may not
be of practical importance as far as knowing how large the error can be a priori. However,
from this error bound, we can say that the Euler method can be made to converge faster by
decreasing the step-size. Furthermore, if the bounds L and M of the above theorem can
be found, then we can determine what step-size will be needed to achieve a certain accuracy,
as the following example shows.

Example: dy/dt = (t^2 + y^2)/2, y(0) = 0,
0 ≤ t ≤ 1, −1 ≤ y(t) ≤ 1.

Determine how small the step-size should be so that the error

does not exceed ε = 10^−4.

Compute L:

Since f (t, y) = (t^2 + y^2)/2, we have

∂f/∂y = y.

Thus, |∂f/∂y| ≤ 1 for −1 ≤ y ≤ 1, giving L = 1.
Find M :
To find M , we compute the second derivative of y(t) as follows:

y' = dy/dt = f (t, y) (Given)

By implicit differentiation, y'' = ∂f/∂t + f ∂f/∂y

= t + ((t^2 + y^2)/2) y = t + y(t^2 + y^2)/2.

So, |y''(t)| = |t + y(t^2 + y^2)/2| ≤ 1 + 2/2 = 2, for 0 ≤ t ≤ 1, −1 ≤ y ≤ 1.

Thus, M = 2,

and the Global Error Bound at t = ti is: |E_GE| = |y(ti) − yi| ≤ (hM/(2L)) (e^(L(ti − a)) − 1)
= h(e^(ti) − 1) ≤ h(e − 1).

So, for the error not to exceed 10^−4, we must have:

h(e − 1) < 10^−4, or h < 10^−4/(e − 1) ≈ 5.8198 × 10^−5.
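The O(h) behavior of the global error can also be observed numerically. The sketch below is my own illustration (not from the notes): it applies Euler's method to the earlier example y' = t^2 + 5, whose exact solution y(t) = t^3/3 + 5t is known, and checks that halving h roughly halves the error at t = 1:

```python
def euler_final(f, a, b, alpha, h):
    """Return the Euler approximation of y(b)."""
    N = round((b - a) / h)
    y = alpha
    for i in range(N):
        y += h * f(a + i * h, y)
    return y

f = lambda t, y: t**2 + 5           # y' = t^2 + 5, y(0) = 0
exact = 1.0 / 3.0 + 5.0             # exact solution y(t) = t^3/3 + 5t at t = 1
err_h  = abs(euler_final(f, 0.0, 1.0, 0.0, 0.01)  - exact)
err_h2 = abs(euler_final(f, 0.0, 1.0, 0.0, 0.005) - exact)
ratio = err_h2 / err_h              # close to 0.5, consistent with global error O(h)
```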

3 High-order Taylor Methods

Recall that the Taylor series expansion of y(t) of degree n is given by

y(ti+1) = y(ti) + h y'(ti) + (h^2/2) y''(ti) + · · · + (h^n/n!) y^(n)(ti) + (h^(n+1)/(n + 1)!) y^(n+1)(ξi)
Now,

(i) y'(t) = f (t, y(t)) (Given).

(ii) y''(t) = f'(t, y(t)).

(iii) In general, y^(i)(t) = f^(i−1)(t, y(t)), i = 1, 2, . . . , n.
Thus,

y(ti+1) = y(ti) + h f (ti, y(ti)) + (h^2/2) f'(ti, y(ti)) + · · · + (h^n/n!) f^(n−1)(ti, y(ti))
+ (h^(n+1)/(n + 1)!) f^(n)(ξi, y(ξi))

= y(ti) + h [ f (ti, y(ti)) + (h/2) f'(ti, y(ti)) + · · · + (h^(n−1)/n!) f^(n−1)(ti, y(ti)) ] + Remainder Term

Neglecting the remainder term, the above formula can be written in compact form as follows:
yi+1 = yi + h Tk(ti, yi), i = 0, 1, · · · , N − 1, where Tk(ti, yi) is defined by:

Tk(ti, yi) = f (ti, yi) + (h/2) f'(ti, yi) + · · · + (h^(k−1)/k!) f^(k−1)(ti, yi)

So, if we truncate the Taylor Series after (k + 1) terms and use the truncated series to obtain
the approximation yi+1 of y(ti+1), we have the following k-th order Taylor algorithm
for the IVP.
Taylor’s Algorithm of order k for IVP

Input: (i) The function f (t, y)


(ii) The end points: a and b
(iii) The initial value: α = y(t0) = y(a)

(iv) The order of the algorithm: k
(v) The step size: h

Step 1 Initialization: t0 = a, y0 = α, N = (b − a)/h

Step 2. For i = 0, 1, . . . , N − 1 do

2.1 Compute Tk(ti, yi) = f (ti, yi) + (h/2) f'(ti, yi) + . . . + (h^(k−1)/k!) f^(k−1)(ti, yi)

2.2 Compute yi+1 = yi + h Tk(ti, yi)

End

Note: With k = 1, the above formula for yi+1 , reduces to Euler’s method.

Example:

y' = y − t^2 + 1, 0 ≤ t ≤ 2, y(0) = 0.5, h = 0.2.

The points of division are:


t0 = 0, t1 = 0.2, t2 = 0.4, t3 = 0.6, t4 = 0.8, t5 = 1, and so on.
f (t, y(t)) = y − t^2 + 1 (Given).

f'(t, y(t)) = d/dt (y − t^2 + 1) = y' − 2t = y − t^2 + 1 − 2t

f''(t, y(t)) = d/dt (y − t^2 + 1 − 2t) = y' − 2t − 2 = y − t^2 − 2t − 1
so,

y1 = y0 + h f (t0, y(t0)) + (h^2/2) f'(t0, y(t0))

= 0.5 + 0.2 × 1.5 + ((0.2)^2/2)(0.5 + 1) = 0.8300 (approximate value of y(0.2)).
y2 = 1.215800 (approximate value of y(0.4)).
etc.
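These numbers are easy to check by coding the order-2 Taylor step directly, supplying f and its total derivative f' by hand. The sketch below is my own (the notes give no code):

```python
def taylor2(f, fp, a, alpha, h, steps):
    """Taylor method of order 2: y_{i+1} = y_i + h*T2(t_i, y_i),
    where T2 = f + (h/2)*f' and f' is the total derivative of f along the solution."""
    t, y = a, alpha
    ys = [y]
    for _ in range(steps):
        y = y + h * (f(t, y) + (h / 2.0) * fp(t, y))
        t += h
        ys.append(y)
    return ys

# Example from the notes: y' = y - t^2 + 1, y(0) = 0.5, h = 0.2
f  = lambda t, y: y - t**2 + 1
fp = lambda t, y: y - t**2 + 1 - 2*t    # f' = y' - 2t
ys = taylor2(f, fp, 0.0, 0.5, 0.2, 2)
# ys[1] = 0.83 and ys[2] = 1.2158, matching the values above.
```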

4 Runge-Kutta Methods

• Euler’s method is the simplest to implement; however, even for reasonable
accuracy the step-size h needs to be very small.

• The difficulty with higher-order Taylor series methods is that higher-order

derivatives of f (t, y) need to be computed, which is very often difficult;
indeed, f (t, y) is not even explicitly known in many applications.

The Runge-Kutta methods aim at achieving the accuracy of higher order Taylor series meth-
ods without computing the higher order derivatives.

We first develop the simplest one: The Runge-Kutta Methods of order 2.

The Runge-Kutta Methods of order 2


Suppose that we want an expression of the approximation yi+1 in the form:

yi+1 = yi + α1 k1 + α2 k2 , (4.1)

where k1 = hf (ti, yi), (4.2)

and

k2 = hf (ti + αh, yi + βk1). (4.3)

The constants α1, α2, α, and β are to be chosen so that the formula is as accurate as
the Taylor Series Method of as high an order as possible.

To develop the method we need an important result from Calculus: Taylor’s series for
functions of two variables.

Taylor’s Theorem for Function of Two Variables

Let f (t, y) and its partial derivatives of orders up to (n + 1) be continuous in the domain
D = {(t, y) | a ≤ t ≤ b, c ≤ y ≤ d}.
Then

f (t, y) = f (t0, y0) + [ (t − t0) ∂f/∂t (t0, y0) + (y − y0) ∂f/∂y (t0, y0) ] + · · ·

+ (1/n!) Σ_{i=0}^{n} C(n, i) (t − t0)^(n−i) (y − y0)^i ∂^n f/(∂t^(n−i) ∂y^i) (t0, y0) + Rn(t, y),

where Rn(t, y) is the remainder after n terms and involves the partial derivatives of order
n + 1.

Using the above theorem with n = 1, we have

f (ti + αh, yi + βk1) = f (ti, yi) + αh ∂f/∂t (ti, yi) + βk1 ∂f/∂y (ti, yi) (4.4)

From (4.4) and (4.3), we obtain

k2/h = f (ti, yi) + αh ∂f/∂t (ti, yi) + βk1 ∂f/∂y (ti, yi). (4.5)
Again, substituting the value of k1 from (4.2) and k2 from (4.3) in (4.1), we get (after some
rearrangement):

yi+1 = yi + α1 hf (ti, yi) + α2 h [ f (ti, yi) + αh ∂f/∂t (ti, yi) + βhf (ti, yi) ∂f/∂y (ti, yi) ]

= yi + (α1 + α2) hf (ti, yi) + α2 h^2 [ α ∂f/∂t (ti, yi) + βf (ti, yi) ∂f/∂y (ti, yi) ]   (4.6)

Also, note that y(ti+1) = y(ti) + hf (ti, yi) + (h^2/2) [ ∂f/∂t (ti, yi) + f (ti, yi) ∂f/∂y (ti, yi) ]
+ higher order terms.

So, neglecting the higher order terms, we can write

yi+1 = yi + hf (ti, yi) + (h^2/2) [ ∂f/∂t (ti, yi) + f (ti, yi) ∂f/∂y (ti, yi) ].   (4.7)
If we want (4.6) and (4.7) to agree for numerical approximations, then we must have

• α1 + α2 = 1 (comparing the coefficients of hf (ti, yi)).

• α2 α = 1/2 (comparing the coefficients of h^2 ∂f/∂t (ti, yi)).

• α2 β = 1/2 (comparing the coefficients of h^2 f (ti, yi) ∂f/∂y (ti, yi)).

Since the number of unknowns here exceeds the number of equations, there are infinitely
many possible solutions. The simplest solution is:
α1 = α2 = 1/2, α = β = 1.

With these choices we can generate yi+1 from yi as follows. The process is known as the
Modified Euler’s Method.

Generating yi+1 from yi in Modified Euler’s Method

yi+1 = yi + (1/2)(k1 + k2),

where k1 = hf (ti, yi)

k2 = hf (ti + h, yi + k1).

or

yi+1 = yi + (h/2) [ f (ti, yi) + f (ti + h, yi + hf (ti, yi)) ]
Algorithm: The Modified Euler Method

Inputs: The given function: f (t, y)


The end points of the interval: a and b
The step-size: h
The initial value y(t0 ) = y(a) = α

Outputs: Approximations yi+1 of y(ti+1) = y(t0 + (i + 1)h),
i = 0, 1, 2, · · · , N − 1

Step 1 (Initialization)
Set t0 = a, y0 = y(t0 ) = y(a) = α
N = (b − a)/h

Step 2 For i = 0, 1, 2, · · · , N − 1 do
Compute k1 = hf (ti, yi)
Compute k2 = hf (ti + h, yi + k1)
Compute yi+1 = yi + (1/2)(k1 + k2).
End
Example:
y' = e^t, y(0) = 1, h = 0.5, 0 ≤ t ≤ 1
t0 = 0, t1 = 0.5, t2 = 1

i = 0 : k1 = hf (t0, y0) = 0.5 e^(t0) = 0.5

k2 = hf (t0 + h, y0 + k1) = 0.5 e^(t0 + h) = 0.5 e^0.5 = 0.8244
y1 = y0 + (1/2)(k1 + k2) = 1 + 0.5(0.5 + 0.8244) = 1.6622
(y(0.5) = e^0.5 = 1.6487)

i = 1 : k1 = hf (t1, y1) = 0.5 e^(t1) = 0.5 e^0.5 = 0.8244
k2 = hf (t1 + h, y1 + k1) = 0.5 e^(0.5+0.5) = 0.5 e = 1.3591
y2 = y1 + (1/2)(k1 + k2) = 1.6622 + (1/2)(0.8244 + 1.3591) = 2.7539
(y(1) = 2.7183).
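The steps of the Modified Euler algorithm can be sketched in Python as follows (my own sketch; note that for this example f(t, y) = e^t happens not to depend on y):

```python
import math

def modified_euler(f, a, alpha, h, steps):
    """Modified Euler (RK2): y_{i+1} = y_i + (k1 + k2)/2."""
    t, y = a, alpha
    ys = [y]
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h, y + k1)
        y = y + 0.5 * (k1 + k2)
        t += h
        ys.append(y)
    return ys

# Example from the notes: y' = e^t, y(0) = 1, h = 0.5
ys = modified_euler(lambda t, y: math.exp(t), 0.0, 1.0, 0.5, 2)
# ys[1] ≈ 1.6622 (y(0.5) = 1.6487) and ys[2] ≈ 2.7539 (y(1) = 2.7183)
```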
Example: Given: y' = t + y, y(0) = 1, compute y1 (approximation to y(0.01)) and y2
(approximation to y(0.02)) by using the Modified Euler Method.
h = 0.01, y0 = y(0) = 1.

i = 0 : y1 = y0 + (1/2)(k1 + k2)
k1 = hf (t0, y0) = 0.01(0 + 1) = 0.01
k2 = hf (t0 + h, y0 + k1) = 0.01 × f (0.01, 1 + 0.01)
= 0.01 × (0.01 + 1.01) = 0.01 × 1.02 = 0.0102

Thus y1 = 1 + (1/2)(0.01 + 0.0102) = 1.0101 (Approximate value of y(0.01)).
2

i = 1 : y2 = y1 + (1/2)(k1 + k2)
k1 = hf (t1, y1)
= 0.01 × f (0.01, 1.0101) = 0.01 × (0.01 + 1.0101)
= 0.0102

k2 = hf (t1 + h, y1 + k1)
= 0.01 × f (0.02, 1.0101 + 0.0102) = 0.01 × (0.02 + 1.0203)
= 0.0104

y2 = 1.0101 + (1/2)(0.0102 + 0.0104) = 1.0204 (Approximate value of y(0.02)).
Local Error in the Modified Euler Method

Since in deriving the modified Euler method we neglected the terms involving h^3 and higher
powers of h, the local error for this method is O(h^3). Thus with the modified Euler
method, we will be able to use a larger step-size h than with the Euler method to obtain
the same accuracy.

The Midpoint and Heun’s Methods

In deriving the modified Euler’s Method, we have considered only one set of possible values
of α1, α2, α, and β. We will now consider two more sets of values.

• α1 = 0, α2 = 1, α = β = 1/2.

This gives us the Midpoint Method.

The Midpoint Method

yi+1 = yi + k2

where k1 = hf (ti, yi),

and

k2 = hf (ti + h/2, yi + k1/2)

or

yi+1 = yi + hf (ti + h/2, yi + (h/2) f (ti, yi)), i = 0, 1, . . . , N − 1.

Example
y' = e^t, y(0) = 1, h = 0.5, 0 ≤ t ≤ 1.
t0 = 0, t1 = 0.5, t2 = 1

Compute y1 , an approximation to y(0.5):


i = 0 : k1 = hf (t0, y0) = 0.5 e^(t0) = 0.5 e^0 = 0.5
k2 = hf (t0 + h/2, y0 + k1/2) = 0.5 e^0.25 = 0.6420
y1 = y0 + k2 = 1 + 0.6420 = 1.6420
(y(0.5) = 1.6487).

Compute y2 , an approximation of y(1):


i = 1 : k1 = hf (t1, y1) = 0.5 e^0.5 = 0.8244
k2 = hf (t1 + h/2, y1 + k1/2) = 0.5 e^0.75 = 1.0585
y2 = y1 + k2 = 1.6420 + 1.0585 = 2.7005
(y(1) = e = 2.7183)
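The midpoint computation can be reproduced with a small sketch of my own (same caveat that f(t, y) = e^t here):

```python
import math

def midpoint(f, a, alpha, h, steps):
    """Midpoint method (RK2): y_{i+1} = y_i + h*f(t_i + h/2, y_i + (h/2)*f(t_i, y_i))."""
    t, y = a, alpha
    ys = [y]
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2.0, y + k1 / 2.0)
        y = y + k2
        t += h
        ys.append(y)
    return ys

# Example from the notes: y' = e^t, y(0) = 1, h = 0.5
ys = midpoint(lambda t, y: math.exp(t), 0.0, 1.0, 0.5, 2)
# ys[1] ≈ 1.6420 (y(0.5) = 1.6487) and ys[2] ≈ 2.7005 (y(1) = 2.7183)
```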
• α1 = 1/4, α2 = 3/4, α = β = 2/3

Then we have Heun’s Method.

Heun’s Method
yi+1 = yi + (1/4) k1 + (3/4) k2

where k1 = hf (ti, yi)

k2 = hf (ti + (2/3) h, yi + (2/3) k1)

or

yi+1 = yi + (h/4) f (ti, yi) + (3h/4) f (ti + (2/3) h, yi + (2h/3) f (ti, yi)), i = 0, 1, · · · , N − 1

Heun’s Method and the Modified Euler’s Method are classified as the Runge-Kutta
methods of order 2.
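For comparison with the two order-2 methods above, Heun's step can be coded the same way. This sketch is my own; since the notes give no worked example for Heun's method, reusing the y' = e^t test problem here is my assumption:

```python
import math

def heun(f, a, alpha, h, steps):
    """Heun's method (RK2): y_{i+1} = y_i + (1/4)*k1 + (3/4)*k2."""
    t, y = a, alpha
    ys = [y]
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + 2.0 * h / 3.0, y + 2.0 * k1 / 3.0)
        y = y + 0.25 * k1 + 0.75 * k2
        t += h
        ys.append(y)
    return ys

# Test problem (my choice): y' = e^t, y(0) = 1, h = 0.5
ys = heun(lambda t, y: math.exp(t), 0.0, 1.0, 0.5, 1)
# ys[1] ≈ 1.6484, close to the exact y(0.5) = e^0.5 = 1.6487
```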

The Runge-Kutta Method of order 4.

A method widely used in practice is the Runge-Kutta method of order 4. Its derivation is
complicated. We will just state the method without proof.
Algorithm: The Runge-Kutta Method of Order 4

Inputs: f (t, y) - the given function


a, b - the end points of the interval
α - the initial value y(t0 )
h - the step size

Outputs: The approximations yi+1 of y(ti+1 ), i = 0, 1, · · · , N − 1

Step 1: (Initialization)
Set t0 = a, y0 = y(t0) = y(a) = α
N = (b − a)/h.
Step 2: (Computations of the Runge-Kutta Coefficients)
For i = 0, 1, 2, · · · , N − 1 do
k1 = hf (ti, yi)
k2 = hf (ti + h/2, yi + (1/2) k1)
k3 = hf (ti + h/2, yi + (1/2) k2)
k4 = hf (ti + h, yi + k3)

Step 3: (Computation of the Approximate Solution)

Compute: yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)

The Local Truncation Error: The local truncation error of the Runge-Kutta Method of
order 4 is O(h5 ).

Example:

y' = t + y, y(0) = 1

h = 0.01
Let’s compute y(0.01) using the Runge-Kutta Method of order 4.
i=0

y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)

where k1 = hf (t0 , y0 ) = 0.01f (0, 1) = 0.01 × 1 = 0.01.

k2 = hf (t0 + h/2, y0 + k1/2) = 0.01 f (0.005, 1 + 0.005) = 0.01 (0.005 + 1.005) = 0.0101.

k3 = hf (t0 + h/2, y0 + k2/2) = h (t0 + h/2 + y0 + k2/2) = 0.0101005.

k4 = hf (t0 + h, y0 + k3) = h (t0 + h + y0 + k3) = 0.01020100

So, y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4) = 1.010100334
and so on.
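The classical fourth-order step is easy to verify in code. The sketch below (my own) reproduces the hand computation for y' = t + y:

```python
def rk4_step(f, t, y, h):
    """One step of the classical Runge-Kutta method of order 4."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2.0, y + k1 / 2.0)
    k3 = h * f(t + h / 2.0, y + k2 / 2.0)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Example from the notes: y' = t + y, y(0) = 1, h = 0.01
y1 = rk4_step(lambda t, y: t + y, 0.0, 1.0, 0.01)
# y1 ≈ 1.010100334, matching the value computed above.
```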
