AEM CH 2 Numerical Methods
All content following this page was uploaded by Moh Kamalul Wafi on 18 March 2022.
Numerical Methods
ARTICLE HISTORY
Compiled March 18, 2022
ABSTRACT
This material covers several numerical methods for finding the roots of equations: the Bisection (half-interval) method as the foundational and simplest technique, the Newton-Raphson method, the Secant method as an improvement on Newton's, the Rule of False Position (linear interpolation) method, and finally the Iteration method, which may either converge or diverge. The following material comprises numerical techniques for solving linear systems: the Gauss elimination method, Gauss-Jordan as an evolution of Gauss', and Crout's (Cholesky's) method, along with the iterative Jacobi and Gauss-Seidel methods. The last numerical methods deal with ordinary differential equations (ODEs) using the Taylor series, Picard's method, Euler's method and its modification (the polygon method), ending with the fourth-order Runge-Kutta method.
KEYWORDS
Numerical Methods; Equation Roots; Linear Systems; Ordinary Differential Equations (ODE)
1. Introduction
Numerical methods emerge to cope with complicated mathematical problems that cannot be solved analytically. Because these methods only approximate the analytical solution, they introduce some error; however, this error is supposed to stay within a tolerance and can therefore be neglected. The approximation (ϑ̄) differs from the true value (ϑ) by the error (er), written as

ϑ = ϑ̄ + er   (1)

Numerical methods are closely related to iterative algorithms, which are halted once the desired error boundary (ε) is reached, such that

ε = |(ϑk+1 − ϑk) / ϑk+1| · 100%   (2)

Numerical techniques also produce long decimal results that must be cut off at some point because of the tiny trailing values; this fault is known as the truncation error.
Contact: [email protected]
2. Numerical Techniques for Solving Equations (Roots)
This chapter focuses on methods for finding the roots of an equation. For simple equations, such as those of second order, the trivial approach makes use of inspection and the analytical formula

x1,2 = (−b ± √(b² − 4ac)) / (2a)   (3)

whereas for higher orders the formulas become far more complicated, so iterative numerical techniques that approach the exact solution are more reliable. There are two terse ways of finding roots: plotting the equation to portray its intersection points with the x-axis, and substituting trial numbers into f(x) until it returns zero. However, both are ineffective and unsystematic. The basic solution therefore relies on the idea that between any two nearest integers, (xa) and (xb), whose function values multiply to a negative result, f(xa)·f(xb) < 0, a root must exist, as depicted in Fig. (1). From this idea, various techniques are discussed in this chapter along with their detailed algorithms.
Figure 1.: Fundamental idea of numerical method for solving roots of equation
x3 = (x1 + x2)/2   due to f(x2)f(x3) < 0, then
x4 = (x2 + x3)/2   due to f(x3)f(x4) < 0, then
x5 = ···   and so on
Figure 2.: Bisection/Graphical/Half-Interval Method
The process is halted when the error (ε) of a certain value falls inside the error requirement (ϕ), as written in the following Bisection algorithm:
(1) Let f(x) = 0 and find an interval between two nearest integers, denoted xa and xb, such that f(xa)f(xb) < 0
(2) Find a new point xc = (xa + xb)/2
(3) If f(xa)f(xc) < 0, then xb = xc and do step 4. Else if f(xa)f(xc) > 0, then xa = xc and do step 4. Otherwise, f(xa)f(xc) = 0 means xc is the root, so stop
(4) Compute the error ε and check it against the requirement ϕ
(5) If ε ∈ ϕ then stop, otherwise do step 2
Example 2.1. Find one root of the equation 2x³ − 2x² + x − 2 = 0 with ε = 1%! By inspection, f(1) = −1 and f(2) = 8, so f(x1)f(x2) < 0 with x1 = 1 and x2 = 2. This means that the root is placed between those values, and the further value comprises

x3 = (x1 + x2)/2 → 1.5,   f(x3) = 2·(1.5)³ − 2·(1.5)² + 1.5 − 2 = 1.75

and this positive result pushes the root to the left side, between x1 and x3, yielding x4 → 1.25 with f(x4) = 0.03125, and then x5 = 1.125. Continuing the bisection narrows the interval around the root, roughly 1.24, until ε satisfies ϕ = 0.01.
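The Bisection algorithm above can be sketched in Python (a minimal illustration written for this text, not code from the original; the function and variable names are my own):

```python
def bisection(f, xa, xb, tol=0.01):
    """Half-interval search: repeatedly bisect [xa, xb], keeping the half
    that still contains the sign change."""
    if f(xa) * f(xb) >= 0:
        raise ValueError("f(xa) and f(xb) must have opposite signs")
    while True:
        xc = (xa + xb) / 2
        fc = f(xc)
        if fc == 0 or (xb - xa) / 2 < tol:
            return xc
        if f(xa) * fc < 0:
            xb = xc          # root lies in [xa, xc]
        else:
            xa = xc          # root lies in [xc, xb]

# Example 2.1: 2x^3 - 2x^2 + x - 2 = 0 on the interval [1, 2]
root = bisection(lambda x: 2 * x**3 - 2 * x**2 + x - 2, 1, 2)
```

With the interval [1, 2] from Example 2.1, the iteration converges toward the root near 1.24.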
2.2. Newton-Raphson Method

f(α + δ) = f(α) + δ f′(α) + (δ²/2) f″(α) + ···   (4)
Figure 3.: Newton-Raphson Method
The approach can be explained in two different ways, both generating the same algorithm. The first is by Taylor's theorem, Eq. (4), supposing α is the approximate root while α + δ constitutes the better result, meaning f(α + δ) = 0. Since δ² is tiny, it is neglected, such that
0 = f(α) + δ f′(α)  −→  δ = −f(α)/f′(α)

α + δ = α − f(α)/f′(α) → α1,  which means that  αk+1 = αk − f(αk)/f′(αk),  ∀k ≥ 0   (5)
Fig. (3) is the basis of the second concept: finding the gradient from any initial point xi and reaching the actual root,

f′(xi) = ∆y/∆x = (f(xi) − 0)/(xi − xi+1)  −→  xi+1 = xi − f(xi)/f′(xi)   (6)
Keep in mind that Eqs. (5) and (6) are the same, and the algorithm comprises the following:
(1) Let f(x) = 0 and find an interval between two nearest integers, denoted xa and xb, such that f(xa)f(xb) < 0
(2) Choose either xa or xb and assign it as xi
(3) With the gradient theorem,

f′(xi) = (f(xi) − 0)/(xi − xi+1)  −→  xi+1 = xi − f(xi)/f′(xi)   (7)
x1 = 3 − f(3)/f′(3) = 2.46  −→  x2 = 2.46 − f(2.46)/f′(2.46) = 2.30

x3 = 2.30 − f(2.30)/f′(2.30) = 2.28  −→  f(2.28) = 0.001
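The Newton-Raphson update of Eq. (5) can be sketched as follows (a hedged illustration; since the function behind the worked numbers above is not stated in the text, this sketch uses f(x) = x³ + x² − 3x − 3 from Example 2.4, whose root is √3 ≈ 1.7321):

```python
def newton_raphson(f, df, x0, tol=0.01, max_iter=50):
    """x_{k+1} = x_k - f(x_k)/f'(x_k), halted once the relative
    change of Eq. (2) drops below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs((x_new - x) / x_new) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x**3 + x**2 - 3 * x - 3       # root at sqrt(3)
df = lambda x: 3 * x**2 + 2 * x - 3         # analytic derivative
root = newton_raphson(f, df, 2.0)
```

Starting from x0 = 2, the iteration reaches √3 within three steps.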
2.3. Secant Method
(1) Let f(x) = 0 and find an interval between two nearest integers, denoted xa and xb, such that f(xa)f(xb) < 0
(2) Choose either xa or xb and assign it as xi
(3) With the gradient theorem through two points crossing f(x), substitute this f′(x) into Eq. (7):

f′(xi) = (f(xi) − f(xi−1)) / (xi − xi−1)   (8)

xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))   (9)
Example 2.3. Find one root of the equation x³ + x² − 3x − 3 = 0 with ε = 1%! The negativity lies between x1 = 1 → f(x1) = −4 and x2 = 2 → f(x2) = 3. This means that the root is placed between those values, and the further values comprise

x3 = 2 − 3(2 − 1)/(3 − (−4)) = 1.57  −→  x4 = 1.57 − (−1.365)(1.57 − 2)/(−1.365 − 3) = 1.705

where f(x3) = −1.365, and since the further result x5 equals 1.732, one of the roots is somewhere around 1.73 depending on ϕ = 0.01, making the iteration stop.
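Eqs. (8)–(9) replace Newton's derivative with a two-point slope, which can be sketched as (an illustration written for this text, using the example's f(x) = x³ + x² − 3x − 3):

```python
def secant(f, x0, x1, tol=0.01, max_iter=50):
    """Secant update, Eq. (9): Newton's step with f'(x) replaced by
    the slope through the two most recent points, Eq. (8)."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs((x2 - x1) / x2) < tol:
            return x2
        x0, x1 = x1, x2      # slide the two-point window forward
    return x1

root = secant(lambda x: x**3 + x**2 - 3 * x - 3, 1.0, 2.0)
```

With the starting pair (1, 2) the iterates 1.571, 1.705, 1.735, … reproduce the worked values above.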
2.4. Rule of False Position/Linear Interpolation Method
Let xa and xb be the nearest integers giving the negativity f(xa) · f(xb) < 0, meaning that a root lies between them. The curve A − B refers to f(x), and the straight line g1(x) connecting A − B passes through x⋆, while the actual root of f(x) is marked by the red circle. After finding the value of x⋆, it then acts as the new xa so as to approach the actual root. From Fig. (5), it can be seen that the idea of this method can be built in two different ways depending on which side the actual root is closer to, xa or xb, either of which becomes the pivot. The following is the algorithm to find the value of x⋆ reaching the actual root:
(1) Let f(x) = 0 and find an interval between two nearest integers, denoted xa and xb, such that f(xa)f(xb) < 0
(2) Find a new point x⋆ from the concept of the line linking the two points with y⋆ = 0,

x⋆ = (xa·f(xb) − xb·f(xa)) / (f(xb) − f(xa))

(3) If f(xa)f(x⋆) < 0, then xb = x⋆ and do step 4. Else if f(xa)f(x⋆) > 0, then xa = x⋆ and do step 4. Otherwise, f(xa)f(x⋆) = 0 means x⋆ is the root, so stop
(4) Check the error ε against the requirement ϕ
(5) If ε ∈ ϕ then stop, otherwise do step 2
Example 2.4. Find one root of the equation x³ + x² − 3x − 3 = 0 with ε = 1%! From the preceding example, the negativity is between xa = 1 → f(xa) = −4 and xb = 2 → f(xb) = 3, and the starting point x1 is

x1 = (1·f(2) − 2·f(1)) / (f(2) − f(1)) = 1.57  −→  f(1.57) = −1.37

This means that the following iteration moves between 1.57 and 2. Since x2 = 1.7 gives a negative value, f(x2) = −0.25, the next point x3 also moves to the left, between 1.7 and 2. It yields 1.73 as counted below, and it halts at the same point as previously:

x2 = (1.57·f(2) − 2·f(1.57)) / (f(2) − f(1.57)) = 1.7  −→  f(1.7) = −0.25
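The false-position rule above can be sketched as (an illustration written for this text, reusing the example's function):

```python
def false_position(f, xa, xb, tol=0.01, max_iter=100):
    """Linear interpolation: x* = (xa*f(xb) - xb*f(xa)) / (f(xb) - f(xa)),
    then the bracketing endpoint on the same side as x* is replaced."""
    x_prev = xa
    for _ in range(max_iter):
        x_star = (xa * f(xb) - xb * f(xa)) / (f(xb) - f(xa))
        if abs((x_star - x_prev) / x_star) < tol:
            return x_star
        if f(xa) * f(x_star) < 0:
            xb = x_star
        else:
            xa = x_star
        x_prev = x_star
    return x_star

root = false_position(lambda x: x**3 + x**2 - 3 * x - 3, 1.0, 2.0)
```

Unlike bisection, the new point interpolates between the bracket values, so the iterates 1.57, 1.705, … match Example 2.4.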
2.5. Iteration Method

Figure 6.: Iteration Method, leading to either convergence (top) or divergence (bottom)

(1) Let f(x) = 0 and find an interval between two nearest integers, denoted xa and xb, such that f(xa)f(xb) < 0
(2) Rearrange f(x) = 0 into the form x = g(x) and find a new point xi+1 = g(xi), repeating until ε ∈ ϕ; as Fig. (6) shows, the iteration converges when |g′(x)| < 1 near the root and diverges otherwise
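A minimal fixed-point sketch (written for this text; the rearrangement x = (3x + 3 − x²)^(1/3) of the running example x³ + x² − 3x − 3 = 0 is my own choice, picked because |g′| < 1 near the root so the iteration converges):

```python
def fixed_point(g, x0, tol=0.01, max_iter=100):
    """x_{i+1} = g(x_i); converges when |g'(x)| < 1 near the root,
    otherwise the iterates diverge as in Fig. (6) (bottom)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs((x_new - x) / x_new) < tol:
            return x_new
        x = x_new
    return x

# x^3 + x^2 - 3x - 3 = 0 rearranged as x = (3x + 3 - x^2)^(1/3)
root = fixed_point(lambda x: (3 * x + 3 - x**2) ** (1 / 3), 2.0)
```

A different rearrangement of the same equation, e.g. x = (x³ + x² − 3)/3, may diverge instead, which is exactly the convergence/divergence contrast of Fig. (6).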
3. Numerical Solution of Linear Systems
Solving a linear system of (k) unknown variables requires at least (k) equations, which can be constructed as a matrix. Suppose k = 3 as written in Eqs. (12a)−(12c); these can be built as Ax = B, and finding x means solving x = A⁻¹B. However, as k increases, hand calculation becomes more complicated and time-consuming, so in this chapter several iteration-based numerical methods are proposed.
3.1. Gauss Elimination Method

From Eq. (13), any row is chosen, say the first row (r1) or Eq. (12a), and every member of r1 is divided by the coefficient of x as denoted in Eq. (14),

x + (α12/α11) y + (α13/α11) z = β1/α11   (14)

and Eq. (14) is multiplied by the coefficient of x in r2 − (α21) in Eq. (13) or Eq. (12b), therefore

α21 x + α21(α12/α11) y + α21(α13/α11) z = α21(β1/α11)   (15)
Keep in mind that the ϕ∗ in Eq. (16) means the new values rather than the preceding ones, as defined in the r2 (right) matrix of Eq. (13). Likewise, Eq. (14) is multiplied by the coefficient of x in r3 − (α31) and then subtracted from Eq. (12c). The idea is to make the coefficients of x in r2 and r3 zero, yielding
The next step is to make the coefficient of y in r3 zero. Divide every member of Eq. (16) by the coefficient of y as below,

y + (α23∗/α22∗) z = β2∗/α22∗   (18)

and multiply the result in Eq. (18) by the coefficient of y in r3 as follows,

α32 y + α32(α23∗/α22∗) z = α32(β2∗/α22∗)   (19)

Subtracting Eq. (19) from r3 yields

(α33 − α32·α23∗/α22∗) z = β3 − α32·β2∗/α22∗,  defining  α33∗ = α33 − α32·α23∗/α22∗  and  β3∗ = β3 − α32·β2∗/α22∗   (20)
Eq. (20) is the end of the algorithm, and the final matrix in Eq. (21) is the same as that of Eq. (13) (right). The final values are counted by back substitution, such that

α11 x + α12 y + α13 z = β1           z = β3∗ / α33∗
        α22∗ y + α23∗ z = β2∗   −→   y = (β2∗ − α23∗ z) / α22∗   (21)
                α33∗ z = β3∗         x = (β1 − α13 z − α12 y) / α11
Example 3.1. Solve the linear system

2x + 3y + z = 13;  x − y − 2z = −1;  3x + y + 4z = 15

The question should be arranged into a matrix. As the coefficient of x in the second equation is 1, that equation is placed first (r1) for the sake of simplification, such that

[1 −1 −2 | −1]             [1 −1 −2 | −1]             [1 −1 −2 | −1]
[2  3  1 | 13]  —r2−2r1→   [0  5  5 | 15]  —r3−3r1→   [0  5  5 | 15]
[3  1  4 | 15]             [3  1  4 | 15]             [0  4 10 | 18]

and the last step, r3 − (4/5)r2, turns the third row into [0 0 6 | 6]. This means that the final values of the unknown variables are

6z = 6 → z = 1;  y + 1 = 3 → y = 2;  x − 2 − 2 = −1 → x = 3
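The elimination and back-substitution steps can be sketched in Python (a plain illustration written for this text; no pivoting is added, so it assumes nonzero pivots as in the example):

```python
def gauss_eliminate(A, B):
    """Forward-eliminate [A|B] to upper-triangular form, then back-substitute,
    mirroring Eqs. (14)-(21)."""
    n = len(B)
    A = [row[:] for row in A]   # work on copies
    B = B[:]
    for c in range(n - 1):
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]        # multiplier, e.g. the "2" in r2 - 2r1
            for k in range(c, n):
                A[r][k] -= m * A[c][k]
            B[r] -= m * B[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):       # back substitution, Eq. (21)
        s = sum(A[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (B[r] - s) / A[r][r]
    return x

# Example 3.1: 2x + 3y + z = 13;  x - y - 2z = -1;  3x + y + 4z = 15
sol = gauss_eliminate([[2, 3, 1], [1, -1, -2], [3, 1, 4]], [13, -1, 15])
```

The result reproduces x = 3, y = 2, z = 1 from the worked example.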
3.2. Gauss-Jordan Method
The evolution of Gauss elimination is known as the Gauss-Jordan method. The principle is almost on par with Gauss', yet this method reduces the (n × n) linear system with n unknown variables all the way to an identity matrix (I) ∈ ℝⁿ, as shown in Eq. (22), such that the final result is read off directly.
[α11 α12 α13 ... α1n | β1]        [1 0 0 ... 0 | β1∗]
[α21 α22 α23 ... α2n | β2]        [0 1 0 ... 0 | β2∗]
[α31 α32 α33 ... α3n | β3]   −→   [0 0 1 ... 0 | β3∗]   (22)
[ ⋮    ⋮    ⋮   ⋱   ⋮ |  ⋮]        [⋮ ⋮ ⋮  ⋱  ⋮ |  ⋮ ]
[αn1 αn2 αn3 ... αnn | βn]        [0 0 0 ... 1 | βn∗]
As mentioned, the method is similar to Gauss' with a slight difference at the end. It is initiated by dividing the first row (r1) by the coefficient of x1, therefore
[1    α12^(1) α13^(1) ... α1n^(1) | β1^(1)]
[α21  α22     α23     ... α2n     | β2    ]
[α31  α32     α33     ... α3n     | β3    ]   (23)
[ ⋮     ⋮       ⋮       ⋱   ⋮      |  ⋮    ]
[αn1  αn2     αn3     ... αnn     | βn    ]
Keep in mind that the variable ϕ^(k) with k = 1, . . . means the k-th change compared to the previous value. As r1 is divided by α11, its members are altered into ϕ^(1). Furthermore, r1 is multiplied by the coefficient of x1 in each of the following rows r2 → rn and subtracted from them, making changes throughout the matrix with ϕ^(1), such that
[1 α12^(1) α13^(1) ... α1n^(1) | β1^(1)]
[0 α22^(1) α23^(1) ... α2n^(1) | β2^(1)]
[0 α32^(1) α33^(1) ... α3n^(1) | β3^(1)]   (24)
[⋮   ⋮       ⋮       ⋱   ⋮      |  ⋮    ]
[0 αn2^(1) αn3^(1) ... αnn^(1) | βn^(1)]
Eq. (24) is the end of the first column c1. The next step is to divide the second row (r2) by the coefficient of x2, which makes ϕ^(2) in r2 as written in Eq. (25) (left). The following is to make c2 zeros by multiplying r2 by the coefficient of x2 in each of the other rows, r1 and r3 → rn, and subtracting, comprising ϕ^(2) throughout the matrix as in Eq. (25) (right),
[1 α12^(1) α13^(1) ... α1n^(1) | β1^(1)]        [1 0 α13^(2) ... α1n^(2) | β1^(2)]
[0 1       α23^(2) ... α2n^(2) | β2^(2)]        [0 1 α23^(2) ... α2n^(2) | β2^(2)]
[0 α32^(1) α33^(1) ... α3n^(1) | β3^(1)]   −→   [0 0 α33^(2) ... α3n^(2) | β3^(2)]   (25)
[⋮   ⋮       ⋮       ⋱   ⋮      |  ⋮    ]        [⋮ ⋮   ⋮       ⋱   ⋮      |  ⋮    ]
[0 αn2^(1) αn3^(1) ... αnn^(1) | βn^(1)]        [0 0 αn3^(2) ... αnn^(2) | βn^(2)]
The third step is the same: dividing r3 by the coefficient of x3 leads to ϕ^(3) as in Eq. (26) (left). Likewise, multiplying r3 by each coefficient in c3 and then subtracting results in Eq. (26) (right), therefore
[1 0 α13^(2) ... α1n^(2) | β1^(2)]        [1 0 0 ... α1n^(3) | β1^(3)]
[0 1 α23^(2) ... α2n^(2) | β2^(2)]        [0 1 0 ... α2n^(3) | β2^(3)]
[0 0 1       ... α3n^(3) | β3^(3)]   −→   [0 0 1 ... α3n^(3) | β3^(3)]   (26)
[⋮ ⋮   ⋮       ⋱   ⋮      |  ⋮    ]        [⋮ ⋮ ⋮  ⋱    ⋮      |  ⋮    ]
[0 0 αn3^(2) ... αnn^(2) | βn^(2)]        [0 0 0 ... αnn^(3) | βn^(3)]
This is done until xn, such that the whole matrix becomes the identity matrix I ∈ ℝⁿ as denoted below in Eq. (27).
[1 0 0 ... α1n^(n−1)  | β1^(n−1)]        [1 0 0 ... 0 | β1^(n)]
[0 1 0 ... α2n^(n−1)  | β2^(n−1)]        [0 1 0 ... 0 | β2^(n)]
[0 0 1 ... α3n^(n−1)  | β3^(n−1)]   −→   [0 0 1 ... 0 | β3^(n)]   (27)
[⋮ ⋮ ⋮  ⋱     ⋮        |  ⋮      ]        [⋮ ⋮ ⋮  ⋱  ⋮ |  ⋮    ]
[0 0 0 ... αnn^(n−1)  | βn^(n−1)]        [0 0 0 ... 1 | βn^(n)]
Example 3.2. Solve the same system using Gauss-Jordan:

2x + 3y + z = 13;  x − y − 2z = −1;  3x + y + 4z = 15

As before, the equation whose x coefficient is 1 is placed first (r1), and the first two eliminations repeat Gauss':

[1 −1 −2 | −1]             [1 −1 −2 | −1]             [1 −1 −2 | −1]
[2  3  1 | 13]  —r2−2r1→   [0  5  5 | 15]  —r3−3r1→   [0  5  5 | 15]
[3  1  4 | 15]             [3  1  4 | 15]             [0  4 10 | 18]

Continuing the reduction, normalising each pivot row and clearing its whole column, turns the matrix into the identity, and the final values of the unknown variables are x = 3, y = 2, z = 1, the same as Gauss'.
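The full reduction to [I | x] can be sketched as (an illustration written for this text; again no pivoting, assuming nonzero pivots as in the example):

```python
def gauss_jordan(A, B):
    """Reduce the augmented matrix [A|B] all the way to [I|x]: each pivot
    row is normalised to 1, then that column is cleared in EVERY other row,
    mirroring Eqs. (23)-(27)."""
    n = len(B)
    M = [row[:] + [b] for row, b in zip(A, B)]   # augmented matrix
    for c in range(n):
        p = M[c][c]
        M[c] = [v / p for v in M[c]]             # make the pivot 1
        for r in range(n):
            if r != c:
                m = M[r][c]
                M[r] = [vr - m * vc for vr, vc in zip(M[r], M[c])]
    return [row[n] for row in M]                 # last column is the solution

sol = gauss_jordan([[2, 3, 1], [1, -1, -2], [3, 1, 4]], [13, -1, 15])
```

Unlike plain Gauss elimination, no back substitution is needed; the answer column appears directly.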
3.3. Crout’s / Cholesky’s Method
The first two methods have a drawback: as the number of unknown variables increases, the number of operations eventually grows. To deal with that adversity and the time consumed, the so-called Crout's or Cholesky's technique is proposed. The algorithm switches the given question in Eq. (28) (left) to the unit upper-triangular form with new constants (right).
[α11 α12 α13 | β1]        [1 u12 u13 | γ1]
[α21 α22 α23 | β2]   −→   [0  1  u23 | γ2]   (28)
[α31 α32 α33 | β3]        [0  0   1  | γ3]

Ψ U X = B  or  Ψ Γ = B   (29)
The idea is to find the variables in Eq. (30) using the information from the question, in the order given in Eq. (32). The numbered steps show the order in which the unknown variables are found, along with the formula for each step, and the arrows mark the starting point of each step. Keep in mind that as the number of unknown variables grows, the number of steps also increases, yet the concept merely alternates between columns and rows.
Step (1): ψ11 = α11,  ψ21 = α21,  ψ31 = α31
Step (2): u12 = α12/ψ11,  u13 = α13/ψ11,  γ1 = β1/ψ11
Step (3): ψ22 = α22 − ψ21 u12,  ψ32 = α32 − ψ31 u12
Step (4): u23 = (α23 − ψ21 u13)/ψ22,  γ2 = (β2 − ψ21 γ1)/ψ22
Step (5): ψ33 = α33 − ψ31 u13 − ψ32 u23
Step (6): γ3 = (β3 − ψ31 γ1 − ψ32 γ2)/ψ33   (32)
In general, for an n-unknown system,

ψmn = αmn − Σ_{i=1}^{n−1} ψmi u_in,   m ≥ n

u_mn = (α_mn − Σ_{i=1}^{m−1} ψmi u_in) / ψmm,   m < n

γm = (βm − Σ_{i=1}^{m−1} ψmi γi) / ψmm
Example 3.3. Solve the linear system

4x + y − z = 13;  3x + 5y + 2z = 21;  2x + y + 6z = 14

The given question can be arranged as follows, with the unknown entries ψ, u and γ located below it; the formulas to solve them are explained in Eq. (32):

[4 1 −1 | 13]        [ψ11 u12 u13 | γ1]   [4  1/4   −1/4   | 13/4 ]
[3 5  2 | 21]   −→   [ψ21 ψ22 u23 | γ2] = [3  17/4  11/17  | 45/17]
[2 1  6 | 14]        [ψ31 ψ32 ψ33 | γ3]   [2  1/2   105/17 | 1    ]

Back substitution from U X = Γ then gives z = γ3 = 1, y = γ2 − u23 z = 2, and x = γ1 − u12 y − u13 z = 3.
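The decomposition, forward substitution, and back substitution can be sketched as (an illustration written for this text, following the general formulas above):

```python
def crout_solve(A, B):
    """Crout decomposition A = Psi * U (lower-triangular Psi, unit upper U),
    then forward-substitute Psi*Gamma = B and back-substitute U*X = Gamma."""
    n = len(B)
    psi = [[0.0] * n for _ in range(n)]
    u = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):      # column of Psi (m >= n case)
            psi[i][j] = A[i][j] - sum(psi[i][k] * u[k][j] for k in range(j))
        for i in range(j + 1, n):  # row of U (m < n case)
            u[j][i] = (A[j][i] - sum(psi[j][k] * u[k][i] for k in range(j))) / psi[j][j]
    gamma = [0.0] * n
    for i in range(n):             # Psi * Gamma = B, top-down
        gamma[i] = (B[i] - sum(psi[i][k] * gamma[k] for k in range(i))) / psi[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1): # U * X = Gamma, bottom-up
        x[i] = gamma[i] - sum(u[i][k] * x[k] for k in range(i + 1, n))
    return x

# Example 3.3: 4x + y - z = 13;  3x + 5y + 2z = 21;  2x + y + 6z = 14
sol = crout_solve([[4, 1, -1], [3, 5, 2], [2, 1, 6]], [13, 21, 14])
```

Internally this reproduces ψ22 = 17/4, u23 = 11/17, γ2 = 45/17, and so on, before returning (3, 2, 1).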
3.4. Jacobi's Method

Suppose we have three (k = 3) unknown variables with k equations. Eq. (34) describes the design of the algorithm: each unknown variable is represented by one of the equations, such that r1 → x, r2 → y and r3 → z.
x = β1/α11 − (α12/α11) y − (α13/α11) z
y = β2/α22 − (α21/α22) x − (α23/α22) z   (34)
z = β3/α33 − (α31/α33) x − (α32/α33) y
The first iteration starts from zeros, x0 = y0 = z0 = 0, and is constructed in Eq. (35),

x1 = β1/α11;  y1 = β2/α22;  z1 = β3/α33   (35)
The second iteration takes the information of (x1, y1, z1); keep in mind that the design of the equations stays the same, with only the unknown variables changing at each step of the iteration. This continues for some n iterations until ε ∈ ϕ.
Example 3.4. Solve the linear system

4x + y + 3z = 17;  x + 5y + z = 14;  2x − y + 8z = 12

From the given questions, the iteration set-up of the unknown variables should be built. The first iteration comes from the zero initial values (x0, y0, z0), such that

x1 = 17/4 − (1/4)y0 − (3/4)z0
y1 = 14/5 − (1/5)x0 − (1/5)z0   −→  x1 = 17/4;  y1 = 14/5;  z1 = 3/2
z1 = 3/2 − (1/4)x0 + (1/8)y0

and the second iteration with the information of (x1, y1, z1) comprises as follows,

x2 = 17/4 − (1/4)y1 − (3/4)z1
y2 = 14/5 − (1/5)x1 − (1/5)z1   −→  x2 = 97/40;  y2 = 33/20;  z2 = 63/80
z2 = 3/2 − (1/4)x1 + (1/8)y1
As it does not yet converge to a single point, the third and fourth are done,

x3 = 17/4 − (1/4)y2 − (3/4)z2
y3 = 14/5 − (1/5)x2 − (1/5)z2   −→  x3 = 3.25;  y3 = 2.16;  z3 = 1.10
z3 = 3/2 − (1/4)x2 + (1/8)y2

x4 = 17/4 − (1/4)y3 − (3/4)z3
y4 = 14/5 − (1/5)x3 − (1/5)z3   −→  x4 = 2.88;  y4 = 1.93;  z4 = 0.96
z4 = 3/2 − (1/4)x3 + (1/8)y3

The fifth step shows convergence toward a single point, (x, y, z) = (3, 2, 1), and the iteration stops

x5 = 17/4 − (1/4)y4 − (3/4)z4
y5 = 14/5 − (1/5)x4 − (1/5)z4   −→  x5 = 3.05;  y5 = 2.03;  z5 = 1.02
z5 = 3/2 − (1/4)x4 + (1/8)y4
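Jacobi's scheme generalises directly (an illustration written for this text; every new component uses only the previous iterate, exactly as in Eq. (35) onward):

```python
def jacobi(A, B, iterations=25):
    """Jacobi iteration: every component of the new iterate is computed
    from the PREVIOUS iterate only, starting from zeros as in Eq. (35)."""
    n = len(B)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Example 3.4: 4x + y + 3z = 17;  x + 5y + z = 14;  2x - y + 8z = 12
sol = jacobi([[4, 1, 3], [1, 5, 1], [2, -1, 8]], [17, 14, 12])
```

The first iterates reproduce (17/4, 14/5, 3/2) and (97/40, 33/20, 63/80) from the worked example, converging toward (3, 2, 1).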
3.5. Gauss-Seidel Method

The Gauss-Seidel method uses the same set-up as Jacobi's, except that each freshly computed value is used immediately within the same iteration; these changes influence the second iteration and the following ones as in Eq. (38), until the n-th iteration such that ε ∈ ϕ, therefore
From the given questions, the iteration set-up of the unknown variables should be built. Only the first iteration starts from the zero initial values, and each later value uses the freshest available ones, such that

x1 = 35/2 − (1/6)y0 − (1/6)z0
y1 = 155/8 − (1/2)x1 − (3/8)z0   −→  x1 = 35/2;  y1 = 85/8;  z1 = 13/2
z1 = −13/2 + (1/2)x1 + (2/5)y1
and as presented in Eq. (36), each following iteration uses the previous one; the first four steps are written in the table below:

Iteration n = 1 → 4                        | n = 1     | n = 2     | n = 3     | n = 4
xn = 35/2 − (1/6)yn−1 − (1/6)zn−1          | x1 = 17.5 | x2 = 14.6 | x3 = 15.1 | x4 = 14.9
yn = 155/8 − (1/2)xn − (3/8)zn−1           | y1 = 10.6 | y2 = 9.6  | y3 = 10.1 | y4 = 9.9
zn = −13/2 + (1/2)xn + (2/5)yn             | z1 = 6.5  | z2 = 4.7  | z3 = 5.1  | z4 = 4.9
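A sketch of the Gauss-Seidel sweep (written for this text; since the original statement of the system behind the table above is not preserved, the sketch reuses the system of Example 3.4 so the two iterative methods can be compared):

```python
def gauss_seidel(A, B, iterations=10):
    """Like Jacobi, but each freshly updated component is used IMMEDIATELY
    within the same sweep, which usually speeds up convergence."""
    n = len(B)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (B[i] - s) / A[i][i]   # overwrite in place
    return x

# Reusing the system of Example 3.4 for comparison with Jacobi:
sol = gauss_seidel([[4, 1, 3], [1, 5, 1], [2, -1, 8]], [17, 14, 12])
```

On this system Gauss-Seidel reaches (3, 2, 1) in noticeably fewer sweeps than Jacobi, illustrating the effect of using the freshest values.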
4. Numerical Solution of Ordinary Differential Equations

The last numerical methods deal with the solution of ordinary differential equations, with five different techniques as presented in the following sections.

4.1. Taylor Series Method

Consider

dy/dx = f(x, y)   (39)

and recall the Taylor series,

y = y0 + (x − x0) y¹(x0) + ((x − x0)²/2!) y²(x0) + ··· + ((x − x0)ⁿ/n!) yⁿ(x0)   (40)

where

y² = d²y/dx²;  y³ = d³y/dx³;  ···;  yⁿ = dⁿy/dxⁿ   (41)
Example 4.1. Solve dy/dx = 3x + y² using the Taylor Series method if y(0) = 1. Find the value of y(0.1) to four decimal places!

By differentiating the original ODE three times, four decimal places can be achieved, therefore

dy/dx = 3x + y²  −→  3(0) + (1)² = 1   (42a)
d²y/dx² = 3 + 2y(dy/dx)  −→  3 + 2(1)(1) = 5   (42b)
d³y/dx³ = 2y(d²y/dx²) + 2(dy/dx)²  −→  2(1)(5) + 2(1)² = 12   (42c)
d⁴y/dx⁴ = 2y(d³y/dx³) + 2(dy/dx)(d²y/dx²) + 4(dy/dx)(d²y/dx²)  −→  2(1)(12) + 2(1)(5) + 4(1)(5) = 54   (42d)

y(x) = 1 + x + (x²/2!)(5) + (x³/3!)(12) + (x⁴/4!)(54) + ···   (43)

y(0.1) = 1 + 0.1 + (5/2!)(0.01) + (12/3!)(0.001) + (54/4!)(0.0001)   (44a)
       = 1 + 0.1 + 0.025 + 0.002 + 0.000225
       = 1.127225   (44b)
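The evaluation of the truncated series in Eq. (43) can be sketched as (an illustration written for this text; the derivative values 1, 5, 12, 54 are taken from Eqs. (42a)–(42d)):

```python
from math import factorial

def taylor_eval(x, derivs, y0=1.0):
    """Evaluate y(x) ~ y0 + sum_k derivs[k] * x^(k+1) / (k+1)!, where
    derivs[k] is the (k+1)-th derivative of y at x0 = 0."""
    return y0 + sum(d * x**(k + 1) / factorial(k + 1)
                    for k, d in enumerate(derivs))

# Example 4.1: derivatives at (0, 1) are 1, 5, 12, 54
y = taylor_eval(0.1, [1, 5, 12, 54])
# y = 1 + 0.1 + 0.025 + 0.002 + 0.000225 = 1.127225
```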
4.2. Picard's Method

Given

dy/dx = f(x, y)   (45)

integrating both sides from the initial condition (x0, y0) to (x, y) turns Eq. (45) into

∫_{y0}^{y} dy = ∫_{x0}^{x} f(x, y) dx  −→  y = y0 + ∫_{x0}^{x} f(x, y) dx   (46)

Example 4.2. Apply Picard's method to solve dy/dx = 1 + xy with (x0, y0) = (0, 0) up to the third approximation!

dy/dx = 1 + xy  −→  ∫_{0}^{y} dy = ∫_{0}^{x} (1 + xy) dx   (48)
From the above equation, the three approximations (k = 1, 2, 3) can be explained as follows,

y1 = ∫_{0}^{x} (1 + x·y0) dx = ∫_{0}^{x} dx  −→  y1 = x   (49)

y2 = ∫_{0}^{x} (1 + x·y1) dx = ∫_{0}^{x} (1 + x²) dx  −→  y2 = x + x³/3   (50)

y3 = ∫_{0}^{x} (1 + x·y2) dx = ∫_{0}^{x} (1 + x(x + x³/3)) dx  −→  y3 = x + x³/3 + x⁵/15   (51)
4.3. Euler's Method

Given

dy/dx = f(x, y)   (52)

with points (xk, yk) for k = 0, 1, . . . , n lying on the curve of ϕ(x) with equal spacing ∆, so that xk+1 = xk + ∆. Recalling the Taylor series in Eq. (40), the solution can be written as

yk+1 = ϕ(xk) + ∆ϕ¹(xk) + (∆²/2!)ϕ²(xk) + ··· + (∆^z/z!)ϕ^z(xk),   k = 0, 1, . . . , n   (54)

and truncating after the first-order term gives Euler's formula, yk+1 = yk + ∆ f(xk, yk).
Example 4.3 tabulates Euler's method for dy/dx = xk + 2yk, with columns k, x, y, dy/dx, and yk+1 = yk + ∆(dy/dx); the worked values of the table are omitted here.

4.4. Euler's Modified Method

Differentiating Eq. (54) with respect to x, it is found that,
(dy/dx)_{k+1} = ϕ¹(xk) + ∆ϕ²(xk) + (∆²/2!)ϕ³(xk) + ··· + (∆^z/z!)ϕ^{z+1}(xk)

f(xk+1, yk+1) = f(xk, yk) + ∆ϕ²(xk) + (∆²/2!)ϕ³(xk) + ··· + (∆^z/z!)ϕ^{z+1}(xk)   (57)

By multiplying Eq. (57) by ∆/2 and subtracting it from Eq. (54), the equation becomes

yk+1 − (∆/2) f(xk+1, yk+1) = yk + (∆/2) f(xk, yk) − (∆³/12) ϕ³(xk) + ···   (58)
Neglecting the ∆³ term yields the modified Euler formula of Eq. (59), yk+1 = yk + (∆/2)[f(xk, yk) + f(xk+1, yk+1)]. However, Eq. (59) cannot be used directly since f(xk+1, yk+1) is unknown, so yk+1 must first be predicted with plain Euler before being corrected by its modification, such that
Example 4.4. Use the Euler Modified formula to find an approximate value of the unknown y from dy/dx = x + 3y subject to y(0) = 1 when x = 1! The worked table (columns k, x, y, f(xk, yk), Euler yk+1, xk+1, f(xk+1, yk+1), and modified-Euler yk+1) is omitted here.
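The predictor-corrector loop of Example 4.4 can be sketched as (an illustration written for this text; the step size is not stated in the surviving text, so ∆ = 0.1 is assumed here):

```python
def modified_euler(f, x0, y0, x_end, h):
    """Predictor: plain Euler y* = y + h*f(x, y);
    corrector: y_next = y + (h/2)[f(x, y) + f(x+h, y*)], Eq. (59)."""
    x, y = x0, y0
    while x < x_end - 1e-12:          # guard against float round-off
        y_pred = y + h * f(x, y)                        # Euler predictor
        y = y + h / 2 * (f(x, y) + f(x + h, y_pred))    # trapezoidal corrector
        x += h
    return y

# Example 4.4 with an assumed step of 0.1:
y1 = modified_euler(lambda x, y: x + 3 * y, 0.0, 1.0, 1.0, 0.1)
```

For comparison, the exact solution y = (10/9)e^{3x} − x/3 − 1/9 gives y(1) ≈ 21.87, so with this coarse step the corrected estimate (≈ 21.1) is close but not exact.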
4.5. Runge-Kutta Methods

The second-order Runge-Kutta method uses yk+1 = yk + (1/2)(ψ1 + ψ2), where ψ1 = ∆f(xk, yk) and ψ2 = ∆f(xk + ∆, yk + ψ1).

Example 4.5. Solve dy/dx = 3x + y² subject to y = 1.2 when x = 1 and ∆ = 0.1 using the Runge-Kutta second-order method to find y when x = 1.1!

From the question, x0 = 1, y0 = 1.2 with ∆ = 0.1, such that

ψ1 = 0.1(3(1) + (1.2)²) = 0.444;  ψ2 = 0.1(3(1.1) + (1.644)²) = 0.600

and,

y1 = y0 + (1/2)(0.444 + 0.600) = 1.722
The following is the third-order Runge-Kutta method, with an additional variable ψ3 and a slight change in ψ2 compared to the second order:

yk+1 = yk + (1/6)(ψ1 + 4ψ2 + ψ3)  −→  ψ1 = ∆f(xk, yk);  ψ2 = ∆f(xk + ∆/2, yk + ψ1/2);  ψ3 = ∆f(xk + ∆, yk + 2ψ2 − ψ1)   (63)
Example 4.6. Solve the differential equation dy/dx = x − y subject to y = 1 when x = 1 using Runge-Kutta third-order at x = 1.1;

ψ1 = ∆f(x0, y0) = 0.1(1 − 1) = 0
ψ2 = ∆f(x0 + ∆/2, y0 + ψ1/2) = 0.1(1.05 − 1) = 0.005
ψ3 = ∆f(x0 + ∆, y0 + 2ψ2 − ψ1) = 0.1(1.1 − 1.01) = 0.009

therefore,

yk+1 = yk + (1/6)(0 + 4(0.005) + 0.009) = 1.004833
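One third-order step, Eq. (63), can be sketched as (an illustration written for this text, using the data of Example 4.6):

```python
def rk3_step(f, xk, yk, h):
    """One third-order Runge-Kutta step, Eq. (63)."""
    psi1 = h * f(xk, yk)
    psi2 = h * f(xk + h / 2, yk + psi1 / 2)
    psi3 = h * f(xk + h, yk + 2 * psi2 - psi1)
    return yk + (psi1 + 4 * psi2 + psi3) / 6

# Example 4.6: dy/dx = x - y, y(1) = 1, step 0.1
y1 = rk3_step(lambda x, y: x - y, 1.0, 1.0, 0.1)
# psi values 0, 0.005, 0.009 give y1 ~ 1.004833
```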
The fourth-order Runge-Kutta method comprises

yk+1 = yk + (1/6)(ψ1 + 2ψ2 + 2ψ3 + ψ4)  −→  ψ1 = ∆f(xk, yk);  ψ2 = ∆f(xk + ∆/2, yk + ψ1/2);  ψ3 = ∆f(xk + ∆/2, yk + ψ2/2);  ψ4 = ∆f(xk + ∆, yk + ψ3)   (64)
Example 4.7. Solve dy/dx = x + y subject to y = 1 when x = 0 using Runge-Kutta fourth-order with ∆ = 0.1; the values ψ1 = 0.1, ψ2 = 0.11, ψ3 = 0.1105 and ψ4 = 0.12105 follow from Eq. (64), therefore,

yk+1 = yk + (1/6)(0.1 + 2(0.11) + 2(0.1105) + 0.12105) = 1.11034
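One fourth-order step, Eq. (64), can be sketched as (an illustration written for this text; the problem statement of Example 4.7 is reconstructed here from its ψ values, which match dy/dx = x + y with y(0) = 1 and ∆ = 0.1):

```python
def rk4_step(f, xk, yk, h):
    """One fourth-order Runge-Kutta step, Eq. (64)."""
    psi1 = h * f(xk, yk)
    psi2 = h * f(xk + h / 2, yk + psi1 / 2)
    psi3 = h * f(xk + h / 2, yk + psi2 / 2)
    psi4 = h * f(xk + h, yk + psi3)
    return yk + (psi1 + 2 * psi2 + 2 * psi3 + psi4) / 6

# Example 4.7 as reconstructed: dy/dx = x + y, y(0) = 1, step 0.1
y1 = rk4_step(lambda x, y: x + y, 0.0, 1.0, 0.1)
# psi values 0.1, 0.11, 0.1105, 0.12105 give y1 ~ 1.11034
```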