MTH313 Numerical Analysis PDF
FACULTY OF SCIENCE
DEPARTMENT OF MATHEMATICAL SCIENCES
MTH313: Numerical Analysis I
Lecture note
Methods of solution:
→ Analytic Methods: Analytic methods compute the exact values of the roots in a finite number of steps, and give all the roots at the same time.
→ Numerical Methods: Numerical methods are based on successive approximation to a root. Each application of such a method, however, produces only one root at a time.
Derivation of formula:
→ Geometrically
[Figure: the curve y = f(x) with the tangent at A(xₙ, f(xₙ)) meeting the x-axis at B = xₙ₊₁; N is the foot of the ordinate at xₙ, and the root lies at C.]
The point A on the curve y = f(x) has coordinates (xₙ, f(xₙ)), and the slope of the tangent to the curve at A is f′(xₙ). Clearly, B is nearer to the desired root C than N, and from the triangle ABN we have

f′(xₙ) = AN/BN = f(xₙ)/(xₙ − xₙ₊₁)

⇒ xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ),  n = 0, 1, 2, . . . (1)
For values of x near x₀, (x − x₀) will be small. If it is assumed that f″(x₀) and all higher derivatives are not unduly large, terms involving powers of (x − x₀) higher than the first may be neglected, and to a first approximation the expression for f(x) becomes

f(x) = f(x₀) + f′(x₀)(x − x₀) . . . (2)

Setting f(x) = 0 in (2) gives x = x₀ − f(x₀)/f′(x₀).
The value of x given by (2) will be nearer to the root than x₀, and so forms the basis for the iteration formula

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) . . . (3)

where n = 0, 1, 2, . . .
Example: Use the Newton-Raphson method to find the values of x for which the following functions equal zero.
i) f(x) = 24x³ − 14x² − 11x + 6, x₀ = 1
ii) f(x) = 36x⁴ − 168x³ + 121x² + 103x − 42, x₀ = 2
Solution:
i) f(x) = 24x³ − 14x² − 11x + 6 ⇒ f′(x) = 72x² − 28x − 11

n    xₙ        f(xₙ)      f′(xₙ)      xₙ₊₁
0    1.0000    5.0000     33.0000     0.8484
1    0.8484    1.2466     17.0691     0.7754
2    0.7754    0.2421     10.5785     0.7525
3    0.7525    0.0215     08.7005     0.7500
4    0.7500    0.0000     08.5000     0.7500
5    0.7500    0.0000     08.5000     0.7500
ii) f(x) = 36x⁴ − 168x³ + 121x² + 103x − 42 ⇒ f′(x) = 144x³ − 504x² + 242x + 103

n    xₙ    f(xₙ)    f′(xₙ)    xₙ₊₁
0
1
2
. (complete as in (i))
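As an illustration, formula (3) can be programmed directly. The sketch below is a minimal one (the function name, tolerance and iteration cap are our own choices), applied to part (i) of the example:

```python
def newton_raphson(f, fprime, x0, tol=1e-6, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until successive
    approximations agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: 24 * x**3 - 14 * x**2 - 11 * x + 6
fp = lambda x: 72 * x**2 - 28 * x - 11
root = newton_raphson(f, fp, 1.0)
print(round(root, 4))  # → 0.75
```

Starting from x₀ = 1 the iterates pass through 0.8484, 0.7754, 0.7525, as in the table, before settling at 0.75.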
Multipoint method:
→ Secant method:
The Newton-Raphson method is extremely powerful, but it can be difficult to find derivatives, especially when dealing with transcendental functions. To circumvent this problem, a slight variation is derived by defining

f′(xₙ) = lim(x→xₙ) [f(xₙ) − f(x)]/(xₙ − x)

Replacing the limit by the difference quotient taken at x = xₙ₋₁ gives

xₙ₊₁ = xₙ − f(xₙ)(xₙ − xₙ₋₁)/[f(xₙ) − f(xₙ₋₁)],  where n = 1, 2, 3, . . . (5)

Equation (5) is called the Secant Scheme.
This technique is essentially a modification of the Newton-Raphson method, with the derivative f′(xₙ) replaced by a difference quotient.
Example:
Use the Secant method to find the values of x for which the following functions equal zero.
i) f(x) = 8x³ − 20x² − 58x + 105, x₀ = 3, x₁ = 4
ii) f(x) = 48x⁴ − 42x³ − 64x² − 21x + 18, x₀ = 0, x₁ = 2
Solution:
i) f(x) = 8x³ − 20x² − 58x + 105, x₀ = 3, x₁ = 4

n    xₙ₋₁      f(xₙ₋₁)      xₙ        f(xₙ)        xₙ₊₁
1    3.0000    −33.0000     4.0000    65.0000      3.3367
2    4.0000    65.0000      3.3367    −14.0050     3.4543
3    3.3367    −14.0050     3.4543    −04.2543     3.5056
4    3.4543    −04.2543     3.5056    00.5396      3.4998
5    3.5056    00.5396      3.4998    −00.0192     3.5000
6    3.4998    −00.0192     3.5000    00.0000      3.5000
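Scheme (5) can be sketched in a few lines (names and stopping rule are our own), reproducing part (i):

```python
def secant(f, x0, x1, tol=1e-6, max_iter=50):
    """x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))"""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

f = lambda x: 8 * x**3 - 20 * x**2 - 58 * x + 105
print(round(secant(f, 3.0, 4.0), 4))  # → 3.5
```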
→ False Position (Regula Falsi) method:
[Figure: the curve y = f(x) through (x₀, f(x₀)) and (x₁, f(x₁)); the chord joining these points cuts the x-axis at x₂, and successive chords give x₃, . . . , approaching the root c.]
From the graph we can determine approximately a region in which the required root of f(x) = 0 lies. Let this region be denoted by (x₀, x₁), and let the values of f(x) at x₀ and x₁ be f(x₀) and f(x₁) respectively. Draw a line joining the points (x₀, f(x₀)) and (x₁, f(x₁)), and let this line intersect the x-axis at x₂. The point x₂ can easily be determined by noting that the equation of the line joining (x₀, f(x₀)) and (x₁, f(x₁)) can be written in the form

f(x₁)/(x₁ − x₂) = −f(x₀)/(x₂ − x₀) ⇒ x₂ = [x₁f(x₀) − x₀f(x₁)]/[f(x₀) − f(x₁)]

Keeping x₀ fixed, the general step is

xₙ₊₁ = [xₙf(x₀) − x₀f(xₙ)]/[f(x₀) − f(xₙ)], where n = 1, 2, 3, . . .
Example: Use the False Position method to find the values of x for which the following functions equal zero.
i) f(x) = 8x³ − 20x² − 58x + 105, x₀ = 3, x₁ = 4
ii) f(x) = 48x⁴ − 42x³ − 68x² − 21x + 18, x₀ = 1, x₁ = 2
Solution:
i) f(x) = 8x³ − 20x² − 58x + 105, x₀ = 3, x₁ = 4, f(x₀) = −33

n     xₙ        f(xₙ)        xₙ₊₁
1     4.0000    65.0000      3.3367
2     3.3367    −14.0050     3.5849
3     3.5849    08.6166      3.4638
4     3.4638    −03.3917     3.5169
5     3.5169    01.6929      3.4917
6     3.4917    −00.7924     3.5038
7     3.5038    00.3657      3.4983
8     3.4983    −00.1630     3.5008
9     3.5008    00.0768      3.4996
10    3.4996    −00.0384     3.5002
11    3.5002    00.0192      3.4999
12    3.4999    −00.0096     3.5000
13    3.5000    00.0000      3.5000
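A sketch of the iteration, keeping x₀ fixed exactly as the worked table does (the function name and tolerance are our own):

```python
def false_position(f, x0, x1, tol=1e-6, max_iter=100):
    """Regula falsi with x0 held fixed:
    x_{n+1} = (x_n f(x0) - x0 f(x_n)) / (f(x0) - f(x_n))"""
    f0 = f(x0)
    xn = x1
    for _ in range(max_iter):
        fn = f(xn)
        x_new = (xn * f0 - x0 * fn) / (f0 - fn)
        if abs(x_new - xn) < tol:
            return x_new
        xn = x_new
    return xn

f = lambda x: 8 * x**3 - 20 * x**2 - 58 * x + 105
print(round(false_position(f, 3.0, 4.0), 4))  # → 3.5
```

Convergence is slower than the secant method (compare the thirteen rows above with six for the secant scheme), because one end of the interval never moves.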
Theorem (Intermediate value theorem): Suppose that f is continuous on the closed interval [a, b] and ω is any number between f(a) and f(b). Then there is a number c ∈ [a, b] for which f(c) = ω.
[Figure: two illustrations; the horizontal line y = ω meets the curve once at c (left), or several times at c₁, c₂, c₃ (right).]
Theorem: Suppose that f is continuous on the closed interval [a, b] and f(a) and f(b) have opposite signs. Then there is at least one number c ∈ (a, b) for which f(c) = 0.
[Figure: the curve y = f(x) crossing the x-axis between a and b, with f(a) < 0 < f(b).]
→ Bisection method:
This method calls for repeated halving of the interval (x₀, x₁) containing the required root, at each step locating the half that contains the root c.
Example:
Use the Bisection method to find the values of x for which the following functions equal zero.
i) f(x) = 8x³ − 20x² − 58x + 105, x₀ = 2, x₁ = 4
ii) f(x) = 48x⁴ − 42x³ − 68x² − 21x + 18, x₀ = 0, x₁ = 2
iii) f(x) = x⁵ + 4x² − 9x + 3, x₀ = −1, x₁ = 1
Solution:
Do.
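The halving procedure is short to code (a minimal sketch; names and tolerance are ours), shown here on part (i):

```python
def bisection(f, a, b, tol=1e-6):
    """Repeatedly halve [a, b], keeping the half in which f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

f = lambda x: 8 * x**3 - 20 * x**2 - 58 * x + 105
print(round(bisection(f, 2.0, 4.0), 4))  # → 3.5
```

Note that f(2) = −27 and f(4) = 65, so the sign-change hypothesis of the theorem above is satisfied on [2, 4].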
→ Muller's method:
The method fits a quadratic a(x − x₂)² + b(x − x₂) + c through three points (x₀, f(x₀)), (x₁, f(x₁)), (x₂, f(x₂)), where

a = [(x₁ − x₂)(f(x₀) − f(x₂)) − (x₀ − x₂)(f(x₁) − f(x₂))] / [(x₀ − x₂)(x₁ − x₂)(x₀ − x₁)]

b = [(x₀ − x₂)²(f(x₁) − f(x₂)) − (x₁ − x₂)²(f(x₀) − f(x₂))] / [(x₀ − x₂)(x₁ − x₂)(x₀ − x₁)]

and c = f(x₂).
To determine x₃, we apply the quadratic formula. Because of the round-off error problems caused by the subtraction of nearly equal numbers, however, we apply the formula in the following manner:

(x − x₂) = −2c/[b ± √(b² − 4ac)] ⇒ x = x₃ = x₂ − 2c/[b ± √(b² − 4ac)]

This gives two possibilities for x₃, depending on the sign preceding the radical term. Once x₃ is determined, the procedure is re-initialized using x₁, x₂ and x₃ to determine the next approximation x₄. The value of x given by the formula above gives a basis for the iteration formula

xₙ₊₁ = xₙ − 2c/[b ± √(b² − 4ac)] . . . (6)
Note:
1) Since at each step the method involves the radical √(b² − 4ac), the method will approximate both real and complex roots.
2) In Muller's method, the sign is chosen to agree with the sign of b. Chosen in this manner, the denominator is the largest in magnitude, and results in x₃ being selected closer to the root of f(x) than x₂.
Example: Use Muller's method to find the values of x for which the following function equals zero.
i) f(x) = 16x⁴ − 40x³ + 5x² + 20x + 16, x₀ = 0.5, x₁ = 1.0, x₂ = 1.5
Solution: Do.
→ Bairstow's method:
This method extracts a quadratic factor of the form x² − px − q from the polynomial P(x) = 0, which may give a pair of real or complex roots. We start with the division by the quadratic.
Let the polynomial be

Pₙ(x) = a₀xⁿ + a₁xⁿ⁻¹ + a₂xⁿ⁻² + . . . + aₙ . . . (1)

Then the quadratic divisor is defined by Q(x) = x² − px − q.
We now find the coefficients of the quotient polynomial

Pₙ₋₂(x) = b₀xⁿ⁻² + b₁xⁿ⁻³ + b₂xⁿ⁻⁴ + . . . + bₙ₋₃x + bₙ₋₂

so that Pₙ(x) = (x² − px − q)Pₙ₋₂(x) + Rₙ(x),
where Rₙ(x) = bₙ₋₁(x − p) + bₙ is a linear remainder. That is,

Pₙ(x) = (x² − px − q)(b₀xⁿ⁻² + b₁xⁿ⁻³ + . . . + bₙ₋₃x + bₙ₋₂) + bₙ₋₁(x − p) + bₙ . . . (2)

Comparing the coefficients of Pₙ(x) in (1) and (2), we have
This shows that the desired coefficients of Pₙ₋₂(x) and the remainder Rₙ(x) are computed by the simple recursive algorithm
𝑏0 = 𝑎0
𝑏1 = 𝑎1 + 𝑝𝑏0
𝑏2 = 𝑎2 + 𝑝𝑏1 + 𝑞𝑏0
𝑏3 = 𝑎3 + 𝑝𝑏2 + 𝑞𝑏1
𝑏4 = 𝑎4 + 𝑝𝑏3 + 𝑞𝑏2
𝑏5 = 𝑎5 + 𝑝𝑏4 + 𝑞𝑏3
. . . . . . . . . . . . .
𝑏𝑛 = 𝑎𝑛 + 𝑝𝑏𝑛−1 + 𝑞𝑏𝑛−2
Now, the factor (x² − px − q) is a divisor of Pₙ(x) if and only if the remainder Rₙ(x) is zero, i.e. bₙ₋₁ = bₙ = 0.
For given values of p and q, the coefficients of Pₙ₋₂(x) and Rₙ(x) are uniquely determined. They are functions of the two variables p and q. Thus the problem of finding a quadratic divisor of Pₙ(x) is equivalent to solving the two non-linear equations

bₙ₋₁(p, q) = 0 and bₙ(p, q) = 0

for the unknowns p and q, which can be computed by Newton's method. To do this, we need the partial derivatives of these functions with respect to p and q. From the recursion,

∂bₙ/∂p = bₙ₋₁ + p(∂bₙ₋₁/∂p) + q(∂bₙ₋₂/∂p), with ∂b₀/∂p = 0 and ∂b₁/∂p = b₀

This structure suggests the definition of the new quantity cₙ₋₁ = ∂bₙ/∂p.
Thus we have
𝑐0 = 𝑏0
𝑐1 = 𝑏1 + 𝑝𝑐0
𝑐2 = 𝑏2 + 𝑝𝑐1 + 𝑞𝑐0
𝑐3 = 𝑏3 + 𝑝𝑐2 + 𝑞𝑐1
𝑐4 = 𝑏4 + 𝑝𝑐3 + 𝑞𝑐2
𝑐5 = 𝑏5 + 𝑝𝑐4 + 𝑞𝑐3
. . . . . . . . . . . . .
𝑐𝑛 = 𝑏𝑛 + 𝑝𝑐𝑛−1 + 𝑞𝑐𝑛−2
This is the computational scheme for the explicit form of Newton's method in the case of a system of two non-linear equations. After substituting we get

pₖ₊₁ = pₖ + (bₙcₙ₋₃ − bₙ₋₁cₙ₋₂)/(c²ₙ₋₂ − cₙ₋₁cₙ₋₃)

qₖ₊₁ = qₖ + (bₙ₋₁cₙ₋₁ − bₙcₙ₋₂)/(c²ₙ₋₂ − cₙ₋₁cₙ₋₃)
The computation can be arranged as a double synthetic division:

      a₀    a₁      a₂      a₃      a₄     . . .   aₙ₋₁      aₙ
p₀    *     p₀b₀    p₀b₁    p₀b₂    p₀b₃   . . .   p₀bₙ₋₂    p₀bₙ₋₁
q₀    *     *       q₀b₀    q₀b₁    q₀b₂   . . .   q₀bₙ₋₃    q₀bₙ₋₂
----------------------------------------------------------------------
      b₀    b₁      b₂      b₃      b₄     . . .   bₙ₋₁      bₙ
p₀    *     p₀c₀    p₀c₁    p₀c₂    p₀c₃   . . .   p₀cₙ₋₂    p₀cₙ₋₁
q₀    *     *       q₀c₀    q₀c₁    q₀c₂   . . .   q₀cₙ₋₃    q₀cₙ₋₂
----------------------------------------------------------------------
      c₀    c₁      c₂      c₃      c₄     . . .   cₙ₋₁      cₙ
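The whole cycle, synthetic division for the b's and c's followed by the Newton correction for p and q, fits in a few lines. The polynomial in the usage line is our own illustration (x³ − 6x² + 11x − 6, which has the quadratic factor x² − 5x + 6):

```python
def bairstow(a, p, q, tol=1e-12, max_iter=100):
    """Refine (p, q) so that x^2 - p x - q divides the polynomial with
    coefficients a[0..n] (a[0] is the leading coefficient)."""
    n = len(a) - 1
    for _ in range(max_iter):
        b = [0.0] * (n + 1)
        c = [0.0] * (n + 1)
        for k in range(n + 1):
            b[k] = a[k] + (p * b[k - 1] if k >= 1 else 0.0) \
                        + (q * b[k - 2] if k >= 2 else 0.0)
            c[k] = b[k] + (p * c[k - 1] if k >= 1 else 0.0) \
                        + (q * c[k - 2] if k >= 2 else 0.0)
        d = c[n - 2] ** 2 - c[n - 1] * c[n - 3]
        dp = (b[n] * c[n - 3] - b[n - 1] * c[n - 2]) / d
        dq = (b[n - 1] * c[n - 1] - b[n] * c[n - 2]) / d
        p, q = p + dp, q + dq
        if abs(dp) + abs(dq) < tol:
            break
    return p, q

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3); start near p = 5, q = -6
p, q = bairstow([1.0, -6.0, 11.0, -6.0], 4.5, -5.0)
print(round(p, 6), round(q, 6))  # → 5.0 -6.0
```

Once (p, q) has converged, the two roots of x² − px − q come from the quadratic formula, and the deflated polynomial b₀xⁿ⁻² + . . . + bₙ₋₂ carries the remaining roots.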
Methods of solution:
→ Analytic Methods: Determinant method, Matrix inversion, Gauss elimination, Gauss-Jordan elimination, factorization method.
→ Numerical Methods: Gauss-Seidel method, Jacobi method, Method of Successive Over-Relaxation.
Analytic Methods
Note: In analytic methods the system (1) is written as the matrix equation

AX = b . . . (2)

where

    ⎡ a₁₁  a₁₂  a₁₃  . . .  a₁ₙ ⎤        ⎡ x₁ ⎤        ⎡ b₁ ⎤
    ⎢ a₂₁  a₂₂  a₂₃  . . .  a₂ₙ ⎥        ⎢ x₂ ⎥        ⎢ b₂ ⎥
A = ⎢ a₃₁  a₃₂  a₃₃  . . .  a₃ₙ ⎥ ,  X = ⎢ x₃ ⎥ ,  b = ⎢ b₃ ⎥
    ⎢  .    .    .   . . .   .  ⎥        ⎢ .  ⎥        ⎢ .  ⎥
    ⎣ aₙ₁  aₙ₂  aₙ₃  . . .  aₙₙ ⎦        ⎣ xₙ ⎦        ⎣ bₙ ⎦
→ Determinant Method:
Let us denote the determinant of A by Δ, i.e. |A| = Δ. From A, let us obtain for each i, 1 ≤ i ≤ n, a matrix Aᵢ in which the iᵗʰ column of A is replaced by the matrix of constant terms b. Let us denote the determinant of Aᵢ by Δᵢ. The system has a unique solution if and only if Δ ≠ 0. This unique solution is given by

xᵢ = Δᵢ/Δ,  i = 1, 2, . . . , n
Example:
Use determinant method to obtain the solution of the following system of equations
2𝑥1 + 𝑥2 = 7 𝑥1 + 2𝑥2 − 3𝑥3 = 14
3𝑥1 − 5𝑥2 = 4 4 𝑥1 + 5𝑥2 + 6𝑥3 = 2
7𝑥1 + 8𝑥2 + 9𝑥3 = 3
Solution: Do.
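The rule xᵢ = Δᵢ/Δ translates directly. The `det` here is a naive cofactor expansion, fine for the small systems above though not for large n; the names are our own:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """x_i = det(A_i) / det(A), A_i being A with column i replaced by b."""
    d = det(A)
    if d == 0:
        raise ValueError("no unique solution: det(A) = 0")
    return [det([row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]) / d
            for i in range(len(A))]

# first system of the example: 2x1 + x2 = 7, 3x1 - 5x2 = 4
print(cramer([[2, 1], [3, -5]], [7, 4]))  # → [3.0, 1.0]
```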
→ Matrix inversion method:
Note: The system has a unique solution X = A⁻¹b if and only if A⁻¹ exists, i.e. if and only if Δ ≠ 0.
Example:
Use Matrix inversion to obtain the solution of the following system of equations
2𝑥1 − 3𝑥2 = 11 𝑥1 + 2𝑥2 − 3𝑥3 = 14 5𝑥1 + 7𝑥2 + 6𝑥3 + 5𝑥4 = 23
𝑥1 − 2𝑥2 = 4 4𝑥1 + 5𝑥2 + 6𝑥3 = 2 7𝑥1 +10𝑥2 + 8𝑥3 + 7𝑥4 = 32
7𝑥1 + 8𝑥2 + 9𝑥3 = 3 6𝑥1 + 8𝑥2 + 10𝑥3 + 9𝑥4 = 33
5𝑥1 + 7𝑥2 + 9𝑥3 + 10𝑥4 = 31
Solution: Do
→ Gauss elimination method:
Note: The solution vector remains unchanged if any of the following operations is performed.
− Multiplication (or division) of any equation by a constant
− Replacement of any equation by the sum (or difference) of that equation and any other equation
Step 1: Using a₁₁ as pivot element and R₁ as pivot row, make all the entries below a₁₁ zero.
Step 2: Repeat the process in Step 1, using the updated a⁽¹⁾₂₂ as pivot element and R₂ as pivot row, and make all the entries below a⁽¹⁾₂₂ zero.
Step 3: Continue the process in Steps 1 and 2 until a⁽ⁿ⁻¹⁾ₙₙ is used as pivot element and Rₙ as pivot row.
Example: Use Gauss elimination method to obtain the solution of the following system of equations
2𝑥1 − 3𝑥2 = 11 𝑥1 + 4𝑥2 − 3𝑥3 = 6 5𝑥1 + 7𝑥2 + 6𝑥3 + 5𝑥4 = 23
𝑥1 − 2𝑥2 = 4 𝑥1 + 2𝑥2 + 3𝑥3 = 14 7𝑥1 + 10𝑥2 + 8𝑥3 + 7𝑥4 = 32
𝑥1 + 4𝑥2 + 𝑥3 = 6 6𝑥1 + 8𝑥2 + 10𝑥3 + 9𝑥4 = 33
5𝑥1 + 7𝑥2 + 9𝑥3 + 10𝑥4 = 31
Solution: Do
Factorization method: The idea behind the factorization method is inspired in part by the observation that triangular systems are easy to solve. We consider the matrix equation (2),

AX = b

We decompose A into two matrices L and U, where L is a lower triangular matrix and U an upper triangular matrix with 1's on the leading diagonal, i.e.

A = LU . . . (a)

    ⎡ l₁₁  0    0    . . .  0   ⎤ ⎡ 1  u₁₂  u₁₃  . . .  u₁ₙ ⎤
    ⎢ l₂₁  l₂₂  0    . . .  0   ⎥ ⎢ 0  1    u₂₃  . . .  u₂ₙ ⎥
A = ⎢ l₃₁  l₃₂  l₃₃  . . .  0   ⎥ ⎢ 0  0    1    . . .  u₃ₙ ⎥
    ⎢  .    .    .   . . .  .   ⎥ ⎢ .  .    .    . . .   .  ⎥
    ⎣ lₙ₁  lₙ₂  lₙ₃  . . .  lₙₙ ⎦ ⎣ 0  0    0    . . .   1  ⎦

To get the entries of L and U corresponding to the coefficient matrix A, we use the following formulae:

lᵢⱼ = aᵢⱼ − Σₖ₌₁ʲ⁻¹ lᵢₖuₖⱼ,  j ≤ i, i = 1, 2, 3, . . . , n

uᵢⱼ = (aᵢⱼ − Σₖ₌₁ⁱ⁻¹ lᵢₖuₖⱼ)/lᵢᵢ,  i ≤ j, j = 1, 2, 3, . . . , n

Note: If j = 1, the rule for L reduces to lᵢ₁ = aᵢ₁, and if i = 1, the rule for U reduces to u₁ⱼ = a₁ⱼ/l₁₁.
Now if A has been decomposed into L and U, then the solution of the system AX = b is found by solving
𝐿(𝑈𝑋) = 𝑏 . . . (𝑏)
To solve (𝑏) we set
𝑈𝑋 = 𝑌 . . . (𝑐)
⇨ 𝐿𝑌 = 𝑏 . . . (𝑑)
We solve (𝑑) for 𝑌, then substitute in (𝑐) to solve for 𝑋.
Note: This substitution scheme is commonly called the Doolittle method (strictly, with the 1's on the diagonal of U as here, the factorization is Crout's variant; Doolittle's puts them on L).
Example:
Use factorization method to obtain the solution of the following system of equations
2𝑥1 − 3𝑥2 = 11 5𝑥1 + 4𝑥2 + 𝑥3 = 3.4 𝑥1 + 2𝑥2 − 𝑥3 + 3𝑥4 = 9
𝑥1 − 2𝑥2 = 4 10𝑥1 + 9𝑥2 + 4𝑥3 = 8.8 2𝑥1 − 𝑥2 + 3𝑥3 + 7𝑥4 = 23
10𝑥1 + 13𝑥2 + 15𝑥3 = 19.2 3𝑥1 + 3𝑥2 + 𝑥3 + 𝑥4 = 5
4𝑥1 + 5𝑥2 − 2𝑥3 + 2𝑥4 = −2
Solution: Do
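A sketch of the factorize-then-substitute cycle, with 1's on the diagonal of U as in the formulae above, applied to the first system of the example (function names are ours):

```python
def lu_decompose(A):
    """A = L U with U having 1's on its leading diagonal."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):                      # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):                  # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def lu_solve(A, b):
    """Solve L Y = b by forward substitution, then U X = Y backward."""
    n = len(A)
    L, U = lu_decompose(A)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

# 2x1 - 3x2 = 11, x1 - 2x2 = 4
print(lu_solve([[2, -3], [1, -2]], [11, 4]))  # → [10.0, 3.0]
```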
→ Cholesky method:
For a symmetric, positive definite matrix A we can, in A = LU, choose U = Lᵀ but impose no condition on the diagonal entries of L:

A = LLᵀ

In terms of the entries of L = (lⱼₖ), the formulae for the factorization are

l₁₁ = √a₁₁,  lⱼ₁ = aⱼ₁/l₁₁,  j = 2, 3, 4, . . . , n

lⱼⱼ = √(aⱼⱼ − Σₛ₌₁ʲ⁻¹ l²ⱼₛ),  j = 2, 3, 4, . . . , n

lₚⱼ = (1/lⱼⱼ)(aₚⱼ − Σₛ₌₁ʲ⁻¹ lⱼₛlₚₛ),  p = j + 1, j + 2, . . . , n

Note: If A is symmetric but not positive definite, this method could still be applied, but it then leads to a complex matrix L, so that it becomes impractical.
Example: Use Cholesky method to obtain the solution of the following system of equations.
4𝑥1 + 2𝑥2 + 14𝑥3 = 14 6.83𝑥1 + 2.01𝑥2 + 2.84𝑥3 − 0.84𝑥4 = −0.002
2𝑥1 + 17𝑥2 − 5𝑥3 = −101 2.01𝑥1 + 10.23𝑥2 + 0.0𝑥3 + 2.94𝑥4 = 0.003
14𝑥1 − 5𝑥2 + 83𝑥3 = 155 2.84𝑥1 + 0.0𝑥2 + 12.80𝑥3 + 3.21𝑥4 = 0.002
−0.84𝑥1 + 2.94𝑥2 + 3.21𝑥3 + 13.02𝑥4 = 0.007
Solution: Do.
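The formulae above in Python, applied to system (i) of the example; its exact solution is x = (3, −6, 1), and the factor is L = [[2,0,0],[1,4,0],[7,−3,5]]:

```python
import math

def cholesky(A):
    """L with A = L L^T, built column by column from the formulae above."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        L[j][j] = math.sqrt(A[j][j] - sum(L[j][s] ** 2 for s in range(j)))
        for p in range(j + 1, n):
            L[p][j] = (A[p][j] - sum(L[j][s] * L[p][s] for s in range(j))) / L[j][j]
    return L

def cholesky_solve(A, b):
    """Solve L Y = b forward, then L^T X = Y backward."""
    n = len(A)
    L = cholesky(A)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4, 2, 14], [2, 17, -5], [14, -5, 83]]
x = cholesky_solve(A, [14, -101, 155])
print([round(v, 6) for v in x])  # → [3.0, -6.0, 1.0]
```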
Numerical Methods:
In numerical methods we start from an approximation to the true solution and, if successful, obtain better and better approximations from a computational cycle repeated as often as necessary to achieve a required accuracy, so that the amount of arithmetic depends upon the accuracy required and varies from case to case. We use numerical methods if the convergence is rapid, so that we save operations compared to an analytic method. We also use numerical methods if a large system is sparse (has very many zero coefficients, so that an analytic method would waste space storing zeros).
→ Gauss-Seidel method:
This is a numerical method of great practical importance, and the formula for the solution is obtained as follows. We assume that aⱼⱼ = 1 for j = 1, 2, 3, . . . , n (note that this can be achieved if we can rearrange the equations so that no diagonal coefficient is zero; we may then divide each equation by its diagonal coefficient). Writing A = I + L + U, where L and U are the strictly lower and upper triangular parts of A,

(I + L + U)X = b ⇒ X + LX + UX = b

⇒ X = b − LX − UX . . . (2)

We obtain from (2) the desired iteration formula

X⁽ᵐ⁺¹⁾ = b − LX⁽ᵐ⁺¹⁾ − UX⁽ᵐ⁾,  m = 0, 1, 2, . . .

Note: Below the main diagonal we took "new" approximations, and above the main diagonal "old" approximations.
Example: Use the Gauss-Seidel method to obtain the approximate solution of the following system of equations; do ten steps, starting from a possibly poor approximation to the solution, say x₁⁽⁰⁾ = x₂⁽⁰⁾ = x₃⁽⁰⁾ = 1. (Use six significant digits.)
5x₁ + x₂ + 2x₃ = 19
x₁ + 4x₂ − 2x₃ = −2
2x₁ + 3x₂ + 8x₃ = 39
Solution: Do.
Example: Use the Gauss-Seidel method to obtain the approximate solution of the following system of equations; do ten steps, starting from a possibly poor approximation to the solution, say x₁⁽⁰⁾ = x₂⁽⁰⁾ = x₃⁽⁰⁾ = x₄⁽⁰⁾ = 100.
4x₁ − x₂ − x₃ = 200
−x₁ + 4x₂ − x₄ = 200
−x₁ + 4x₃ − x₄ = 100
−x₂ − x₃ + 4x₄ = 100
Solution: Do.
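A sketch of the cycle; each new component replaces the old one as soon as it is computed, as the note above says. Here the equations are not pre-scaled to aⱼⱼ = 1, so we divide by the diagonal entry instead (names are ours); it is applied to the first example:

```python
def gauss_seidel(A, b, x0, steps=10):
    """New components are used immediately within the same sweep."""
    n = len(A)
    x = list(x0)
    for _ in range(steps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[5, 1, 2], [1, 4, -2], [2, 3, 8]]
b = [19, -2, 39]
x = gauss_seidel(A, b, [1, 1, 1])  # exact solution is (2, 1, 4)
print([round(v, 5) for v in x])
```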
Jacobi Method:
The Jacobi method is a method of simultaneous corrections: no component of an approximation X⁽ᵐ⁾ is used until all the components of X⁽ᵐ⁾ have been computed. The method is similar to the Gauss-Seidel method, but improved values are not used until a step has been completed; X⁽ᵐ⁾ is then replaced by X⁽ᵐ⁺¹⁾ at once, directly before the beginning of the next cycle. Hence we write

AX = b . . . (1)

with aⱼⱼ = 1, in the form

IX − IX + AX = b ⇒ X = b + IX − AX = b + (I − A)X
Example: Use the Jacobi method to obtain the approximate solution of the following system of equations; do ten steps, starting from a possibly poor approximation to the solution, say x₁⁽⁰⁾ = x₂⁽⁰⁾ = x₃⁽⁰⁾ = 1. (Use six significant digits.)
10x₁ + x₂ + x₃ = 6
x₁ + 10x₂ + x₃ = 6
x₁ + x₂ + 10x₃ = 6
Solution: Do.
Example: Use the Jacobi method to obtain the approximate solution of the following system of equations; do five steps, starting from a possibly poor approximation to the solution, say x₁⁽⁰⁾ = x₂⁽⁰⁾ = x₃⁽⁰⁾ = 0.
Solution: Do.
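The simultaneous-corrections cycle differs from Gauss-Seidel in one line: the whole of X⁽ᵐ⁾ is kept until every component of X⁽ᵐ⁺¹⁾ has been computed. A sketch (names ours), applied to the first example above, whose exact solution is x₁ = x₂ = x₃ = 0.5:

```python
def jacobi(A, b, x0, steps=10):
    """Simultaneous corrections: x^(m) is kept intact until the
    whole of x^(m+1) has been computed."""
    n = len(A)
    x = list(x0)
    for _ in range(steps):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[10, 1, 1], [1, 10, 1], [1, 1, 10]]
b = [6, 6, 6]
x = jacobi(A, b, [1, 1, 1])
print([round(v, 6) for v in x])  # → [0.5, 0.5, 0.5]
```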
→ The operator Δ: The operator Δ is called the forward difference operator, and is defined by
Δf(x₀) = f(x₀ + h) − f(x₀) or Δyₖ = yₖ₊₁ − yₖ
Δf(x₀ + h) = f(x₀ + 2h) − f(x₀ + h) or Δyₖ₊₁ = yₖ₊₂ − yₖ₊₁
Δf(x₀ + 2h) = f(x₀ + 3h) − f(x₀ + 2h) or Δyₖ₊₂ = yₖ₊₃ − yₖ₊₂
. . .
The differences above are known as first order forward differences. By operating with Δ on the first order differences we get the second order forward differences, denoted Δ². In particular
Δ²f(x₀) = Δ[Δf(x₀)] = Δ[f(x₀ + h) − f(x₀)] = Δf(x₀ + h) − Δf(x₀)
= [f(x₀ + 2h) − f(x₀ + h)] − [f(x₀ + h) − f(x₀)] = f(x₀ + 2h) − 2f(x₀ + h) + f(x₀)
Proceeding in the same way we can obtain Δ³f(x₀), Δ⁴f(x₀), . . . , Δⁿf(x₀). These differences of various orders can be conveniently expressed in the form of a table called a difference table.
Setting f(x₀) = y₀,
x₁ = x₀ + h ⇒ f(x₁) = y₁
x₂ = x₀ + 2h ⇒ f(x₂) = y₂
. . .
xₖ = x₀ + kh ⇒ f(xₖ) = yₖ, we have
Table 1.1

xₖ    yₖ
x₀    y₀
            Δy₀
x₁    y₁            Δ²y₀
            Δy₁             Δ³y₀
x₂    y₂            Δ²y₁            Δ⁴y₀
            Δy₂             Δ³y₁            Δ⁵y₀
x₃    y₃            Δ²y₂            Δ⁴y₁            Δ⁶y₀
            Δy₃             Δ³y₂            Δ⁵y₁
x₄    y₄            Δ²y₃            Δ⁴y₂
            Δy₄             Δ³y₃
x₅    y₅            Δ²y₄
            Δy₅
x₆    y₆
𝑦0 is known as the first entry in the difference table and ∆𝑦0 , ∆2 𝑦0 , ∆3 𝑦0 , ∆4 𝑦0 . . . are known as the
leading differences, and the table is called the diagonal (forward) difference table.
→ The operator ∇: The operator ∇ is called the backward difference operator and is defined by
∇f(xₖ) = f(xₖ) − f(xₖ − h) or ∇yₖ = yₖ − yₖ₋₁
The corresponding table gives the arguments, entries and the backward differences up to 6ᵗʰ order. y₆ is known as the last entry in the difference table and ∇y₆, ∇²y₆, ∇³y₆, ∇⁴y₆, . . . are known as the leading differences, and the table is called the diagonal (backward) difference table.
Note: Unless stated otherwise, the interval of differencing will always be taken as unity (one).
→ The operator E: The operator E is called the shift operator and is defined by
Ef(xₖ) = f(xₖ + h) or Eyₖ = yₖ₊₁
→ The operator δ: The operator δ is called the central difference operator and is defined by
δyₖ = y_{k+1/2} − y_{k−1/2}, i.e. δf(xₖ) = f(xₖ + h/2) − f(xₖ − h/2)
In particular
δy₀ = y_{1/2} − y_{−1/2}
δy₁ = y_{3/2} − y_{1/2}
δy₂ = y_{5/2} − y_{3/2}
δy₃ = y_{7/2} − y_{5/2}
. . .
→ The operator D: The operator D is called the differential operator and is defined by Dyₖ = (d/dx)yₖ.
The second and higher order derivatives are given by
D²yₖ = (d²/dx²)yₖ,  D³yₖ = (d³/dx³)yₖ,  D⁴yₖ = (d⁴/dx⁴)yₖ,  . . . ,  Dⁿyₖ = (dⁿ/dxⁿ)yₖ
ii) E and ∇:
We have by definition that ∇yₖ = yₖ − yₖ₋₁ = yₖ − E⁻¹yₖ = (1 − E⁻¹)yₖ
Since yₖ is arbitrary, we get the relation ∇ = 1 − E⁻¹, or E⁻¹ = 1 − ∇,
and since (E⁻¹)⁻¹ = E, we have the relation
E = (1 − ∇)⁻¹
iii) E and δ:
δyₖ = y_{k+1/2} − y_{k−1/2} = E^(1/2)yₖ − E^(−1/2)yₖ = (E^(1/2) − E^(−1/2))yₖ
Therefore δ = E^(1/2) − E^(−1/2) = E^(−1/2)(E − 1) = E^(−1/2)Δ. Also δ = E^(1/2)(1 − E⁻¹) = E^(1/2)∇.
iv) D and Δ:
Dyₖ = Df(x) = f′(x) = (d/dx)yₖ
We have by Taylor's theorem that

f(x + h) = f(x) + (h/1!)f′(x) + (h²/2!)f″(x) + (h³/3!)f‴(x) + (h⁴/4!)f⁗(x) + . . .

so

Ef(x) = f(x) + (h/1!)Df(x) + (h²/2!)D²f(x) + (h³/3!)D³f(x) + (h⁴/4!)D⁴f(x) + . . .
= (1 + hD/1! + h²D²/2! + h³D³/3! + . . .)f(x) = e^(hD)f(x)

Since f(x) is arbitrary, E = e^(hD), and hence hD = log E = log(1 + Δ).
We may note that in the first example the second order finite differences are constant (equal to 16 in that case) and the third and higher order differences are zero; in the second example the third order finite differences are constant (equal to 6 in that case) and the fourth and higher order differences are zero. It may be shown that if yₖ is a polynomial of degree m in x, then the mᵗʰ order finite differences will be constant, and the (m + 1)ᵗʰ and higher order differences are zero.
Note:
1. We often use these results in interpolation.
2. It should be understood that the operators E, Δ and ∇ are defined only when the arguments are at equal intervals.
In particular, if
yₓ = 5ˣ
⇒ Δ5ˣ = 5ˣ⁺¹ − 5ˣ = 5·5ˣ − 5ˣ = (5 − 1)5ˣ = (4)5ˣ
⇒ Δ²5ˣ = Δ(Δ5ˣ) = Δ(4·5ˣ) = 4Δ5ˣ = 4(4)5ˣ = (4)²5ˣ
⇒ Δ³5ˣ = Δ(Δ²5ˣ) = Δ((4)²5ˣ) = (4)²Δ5ˣ = (4)²(4)5ˣ = (4)³5ˣ
⇒ Δ⁴5ˣ = Δ(Δ³5ˣ) = Δ((4)³5ˣ) = (4)³Δ5ˣ = (4)³(4)5ˣ = (4)⁴5ˣ
. . .
⇒ Δⁿ5ˣ = (4)ⁿ5ˣ
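The result Δⁿ5ˣ = 4ⁿ5ˣ, and difference tables in general, can be checked with a few lines (the function name is ours):

```python
def difference_columns(y):
    """Successive forward-difference columns: [y, Δy, Δ²y, ...]."""
    cols = [list(y)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
    return cols

y = [5 ** x for x in range(7)]        # y_x = 5^x for x = 0..6
cols = difference_columns(y)
print(cols[3][0], 4 ** 3 * 5 ** 0)    # Δ³5⁰ and 4³·5⁰ agree → 64 64
```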
Numerical differentiation:
Numerical differentiation techniques can be used in the following two situations:
a) The function values corresponding to distinct values of the argument are known but the function is unknown. For example, we may know the values of f(x) at various values xᵢ of x, i = 0, 1, 2, . . . , n.
b) The function to be differentiated is complicated, and therefore difficult to differentiate by the usual procedure.
Numerical differentiation is the process of calculating the value of the derivative of a function at some assigned value of the argument x from a given set of data points (xᵢ, yᵢ), i = 0, 1, 2, . . . , n, which correspond to values of an unknown function y = f(x).
To get the derivative, we first fit a curve y = f(x) through the points, then differentiate it and evaluate the result at the required point.
If the values of x are equally spaced, we use the interpolating polynomial due to Newton-Gregory. If the derivative is required at a point near the start of the table, we use Newton's forward interpolation formula. If we require the derivative near the end of the table, we use Newton's backward interpolation formula. If the value of the derivative is required near the middle of the table, we use one of the central difference interpolation formulae. In the case of unequal intervals, we can use Newton's divided difference formula or Lagrange's interpolation formula to get the derivative value.
y = y₀ + uΔy₀ + [u(u − 1)/2!]Δ²y₀ + [u(u − 1)(u − 2)/3!]Δ³y₀ + [u(u − 1)(u − 2)(u − 3)/4!]Δ⁴y₀ + . . . (1b)

y = y₀ + uΔy₀ + [(u² − u)/2!]Δ²y₀ + [(u³ − 3u² + 2u)/3!]Δ³y₀ + [(u⁴ − 6u³ + 11u² − 6u)/4!]Δ⁴y₀ + [(u⁵ − 10u⁴ + 35u³ − 50u² + 24u)/5!]Δ⁵y₀ + . . . (1c)
Differentiating eqn. (1c) with respect to x, using f′(x) = dy/dx = (dy/du)(du/dx) = (1/h)(dy/du), gives the first derivative

dy/dx = (1/h)[Δy₀ + ((2u − 1)/2)Δ²y₀ + ((3u² − 6u + 2)/6)Δ³y₀ + ((4u³ − 18u² + 22u − 6)/24)Δ⁴y₀ + ((5u⁴ − 40u³ + 105u² − 100u + 24)/120)Δ⁵y₀ + . . .] . . . (2)

Differentiating eqn. (2) with respect to x we have f″(x) = d²y/dx² = (d/du)(dy/dx)·(du/dx) = (1/h)(d/du)(dy/dx) = (1/h²)(d²y/du²), i.e.

d²y/dx² = (1/h²)[Δ²y₀ + (u − 1)Δ³y₀ + ((6u² − 18u + 11)/12)Δ⁴y₀ + ((2u³ − 12u² + 21u − 10)/12)Δ⁵y₀ + . . .] . . . (3)

Similarly, in terms of backward differences with v = (x − xₙ)/h,

d³y/dx³ = (1/h³)[∇³yₙ + ((2v + 3)/2)∇⁴yₙ + ((2v² + 8v + 7)/4)∇⁵yₙ + . . .] . . . (4)

Equations (2), (3) and (4) give the first, second and third derivatives at any general x.
For a table with h = 0.04, at u = 1 (x = 1.74) the coefficients of (2) reduce to 1, ½, −⅙, 1/12, −1/20, . . . , so

(dy/dx) at x = 1.74 = (1/0.04)[Δy₀ + ½Δ²y₀ − ⅙Δ³y₀ + (1/12)Δ⁴y₀ − (1/20)Δ⁵y₀ + . . .]
= (1/0.04)[(0.0059) + ½(−0.0194) − ⅙(0.0419) + (1/12)(−0.0841) − (1/20)(0) + . . .] ≅ −0.4448
Example: Find the first two derivatives of x^(1/3) at x = 50 and x = 56, given the table below.

xₖ    50        51        52        53        54        55        56
yₖ    3.6840    3.7084    3.7325    3.7563    3.7798    3.8030    3.8259
Solution:
We first form the difference table.

k    xₖ    yₖ        Δyₖ       Δ²yₖ       Δ³yₖ
0    50    3.6840
                  0.0244
1    51    3.7084            −0.0003
                  0.0241                  0
2    52    3.7325            −0.0003
                  0.0238                  0
3    53    3.7563            −0.0003
                  0.0235                  0
4    54    3.7798            −0.0003
                  0.0232
5    55    3.8030            −0.0003
                  0.0229
6    56    3.8259

(All fourth and higher order differences are zero.)
At x = 50 (u = 0, h = 1):

(dy/dx) at x = 50 = [Δy₀ − ½Δ²y₀ + ⅓Δ³y₀ − ¼Δ⁴y₀ + ⅕Δ⁵y₀ − . . .]/h
= [(0.0244) − ½(−0.0003) + ⅓(0) − ¼(0) + ⅕(0) − . . .] ≅ 0.0245

(d²y/dx²) at x = 50 = [Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − (10/12)Δ⁵y₀ + . . .]/h²
= [(−0.0003) − (0) + (11/12)(0) − (10/12)(0) + . . .] ≅ −0.0003

At x = 56 (end of the table, backward differences):

(dy/dx) at x = 56 = [∇yₙ + ½∇²yₙ + ⅓∇³yₙ + ¼∇⁴yₙ + ⅕∇⁵yₙ + . . .]/h
= [(0.0229) + ½(−0.0003) + ⅓(0) + ¼(0) + ⅕(0) + . . .] ≅ 0.02275

(d²y/dx²) at x = 56 = [∇²yₙ + ∇³yₙ + (11/12)∇⁴yₙ + (10/12)∇⁵yₙ + . . .]/h²
= [(−0.0003) + (0) + (11/12)(0) + (10/12)(0) + . . .] ≅ −0.0003
Example: The table given below gives the velocity v of a body at the specified times t. Find the acceleration at t = 1.1.

t    1.0     1.1     1.2     1.3     1.4
v    43.1    47.7    52.1    56.4    60.8

Solution:
We have v = v(t) and acceleration a = v′(t) = dv/dt at t = 1.1.

t      v       Δv     Δ²v     Δ³v     Δ⁴v
1.0    43.1
              4.6
1.1    47.7         −0.2
              4.4            0.1
1.2    52.1         −0.1            0.1
              4.3            0.2
1.3    56.4          0.1
              4.4
1.4    60.8
Since t = 1.1 corresponds to u = 1 and h = 0.1:

(dv/dt) at t = 1.1 = (1/0.1)[Δv₀ + ½Δ²v₀ − ⅙Δ³v₀ + (1/12)Δ⁴v₀ − . . .]
= (1/0.1)[(4.6) + ½(−0.2) − ⅙(0.1) + (1/12)(0.1) − . . .] ≅ 44.9166
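The computation above can be automated: take the leading differences from the table, then apply the u = 1 coefficients 1, ½, −⅙, 1/12 (the function name is ours):

```python
def leading_differences(y):
    """Δy0, Δ²y0, Δ³y0, ... from a table of equally spaced values."""
    out, cur = [], list(y)
    while len(cur) > 1:
        cur = [cur[k + 1] - cur[k] for k in range(len(cur) - 1)]
        out.append(cur[0])
    return out

v = [43.1, 47.7, 52.1, 56.4, 60.8]
h = 0.1
d1, d2, d3, d4 = leading_differences(v)
accel = (d1 + d2 / 2 - d3 / 6 + d4 / 12) / h   # dv/dt at u = 1, i.e. t = 1.1
print(round(accel, 4))  # → 44.9167
```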
For data tabulated at x = 1931, 1941, . . . , 1971 (so x₀ = 1931 and h = 10):

(dy/dx) at x = 1931 (u = 0):
= (1/10)[Δy₀ − ½Δ²y₀ + ⅓Δ³y₀ − ¼Δ⁴y₀ + ⅕Δ⁵y₀ − . . .]
= (1/10)[(20.18) − ½(−1.03) + ⅓(5.49) − ¼(−4.47) + . . .] ≅ 2.36425

(dy/dx) at x = 1941: here u = (x − x₀)/h = (1941 − 1931)/10 = 1, so
= (1/10)[Δy₀ + ½Δ²y₀ − ⅙Δ³y₀ + (1/12)Δ⁴y₀ − . . .]
= (1/10)[(20.18) + ½(−1.03) − ⅙(5.49) + (1/12)(−4.47) − . . .] ≅ 1.83775

(dy/dx) at x = 1971 (end of the table, backward differences, v = 0):
= (1/10)[∇yₙ + ½∇²yₙ + ⅓∇³yₙ + ¼∇⁴yₙ + ⅕∇⁵yₙ + . . .]
= (1/10)[(29.09) + ½(5.48) + ⅓(1.02) + ¼(−4.47) + . . .] ≅ 3.10525

(dy/dx) at x = 1961 (v = −1):
= (1/10)[∇yₙ − ½∇²yₙ − ⅙∇³yₙ − (1/12)∇⁴yₙ − . . .]
= (1/10)[(29.09) − ½(5.48) − ⅙(1.02) − (1/12)(−4.47) − . . .] ≅ 2.65525

Here x is the point of interpolation, i.e. the value at which the entry is required, x₀ is the first argument in the difference table, and

u = (point of interpolation − point of origin)/(interval of differencing) = (x − x₀)/h
For a point near the middle of the table we differentiate Stirling's central difference formula. With f′(x) = dy/dx = (dy/du)(du/dx) = (1/h)(dy/du),

dy/dx = (1/h)[½(Δy₀ + Δy₋₁) + (2u/2)Δ²y₋₁ + ((3u² − 1)/12)(Δ³y₋₁ + Δ³y₋₂) + ((4u³ − 2u)/24)Δ⁴y₋₂ + ((5u⁴ − 15u² + 4)/240)(Δ⁵y₋₂ + Δ⁵y₋₃) + . . .] . . . (2)

Differentiating eqn. (2) with respect to x, f″(x) = d²y/dx² = (d/du)(dy/dx)·(du/dx) = (1/h)(d/du)(dy/dx):

d²y/dx² = (1/h²)[Δ²y₋₁ + (u/2)(Δ³y₋₁ + Δ³y₋₂) + ((6u² − 1)/12)Δ⁴y₋₂ + ((2u³ − 3u)/24)(Δ⁵y₋₂ + Δ⁵y₋₃) + . . .] . . . (3)

d³y/dx³ = (1/h³)[½(Δ³y₋₁ + Δ³y₋₂) + uΔ⁴y₋₂ + ((2u² − 1)/8)(Δ⁵y₋₂ + Δ⁵y₋₃) + . . .] . . . (4)
Example: Find the gradient of the road at the middle point of the elevation above a datum line of seven points of road which are given below.

xₖ    0      300    600    900    1200    1500    1800
yₖ    135    149    157    183    201     205     193

Solution:
Since we are required to find y′ at x = 900, and x = 900 is in the middle of the table, we use Stirling's formula.

k    xₖ      yₖ     Δ      Δ²     Δ³     Δ⁴     Δ⁵     Δ⁶
0    0       135
                    14
1    300     149           −6
                    8              24
2    600     157           18             −50
                    26             −26            70
3    900     183           −8             20             −86
                    18             −6             −16
4    1200    201           −14            4
                    4              −2
5    1500    205           −16
                    −12
6    1800    193

With u = 0 at x = 900 (so Δy₀ = 18, Δy₋₁ = 26, Δ³y₋₁ = −6, Δ³y₋₂ = −26, Δ⁵y₋₂ = −16, Δ⁵y₋₃ = 70):

(dy/dx) at x = 900 = (1/300)[½(18 + 26) − (1/12)(−6 − 26) + (1/60)(−16 + 70)]
= (1/300)[22 + 2.6667 + 0.9] ≅ 0.0852
Example: Find dy/dx at x = 7.5 from the following table.

xₖ    7.47     7.48     7.49     7.50     7.51     7.52     7.53
yₖ    0.193    0.195    0.198    0.201    0.203    0.206    0.208

Solution:
Since we are required to find y′ at x = 7.5, and x = 7.5 is in the middle of the table, we use Stirling's formula.

k    xₖ      yₖ       Δ        Δ²        Δ³        Δ⁴        Δ⁵        Δ⁶
0    7.47    0.193
                      0.002
1    7.48    0.195             0.001
                      0.003              −0.001
2    7.49    0.198             0.000               0.000
                      0.003              −0.001              0.003
3    7.50    0.201             −0.001              0.003               −0.010
                      0.002               0.002              −0.007
4    7.51    0.203             0.001               −0.004
                      0.003              −0.002
5    7.52    0.206             −0.001
                      0.002
6    7.53    0.208

dy/dx = (1/h)[½(Δy₀ + Δy₋₁) + (2u/2)Δ²y₋₁ + ((3u² − 1)/12)(Δ³y₋₁ + Δ³y₋₂) + ((4u³ − 2u)/24)Δ⁴y₋₂ + ((5u⁴ − 15u² + 4)/240)(Δ⁵y₋₂ + Δ⁵y₋₃) + . . .]

u = (x − x₀)/h = (7.50 − 7.50)/0.01 = 0

⇒ (dy/dx) at x = 7.5 = (1/0.01)[½(0.002 + 0.003) − (1/12)(0.002 − 0.001) + (1/60)(−0.007 + 0.003) + . . .] ≅ 0.235
Example: Find the first and second derivatives at x = 0.6 of the function tabulated below.

xₖ    0.4       0.5       0.6       0.7       0.8
yₖ    1.5836    1.7974    2.0442    2.3275    2.6511

Solution:
We first form the difference table.

k    xₖ     yₖ        Δyₖ       Δ²yₖ      Δ³yₖ      Δ⁴yₖ
0    0.4    1.5836
                    0.2138
1    0.5    1.7974            0.0330
                    0.2468              0.0035
2    0.6    2.0442            0.0365              0.0003
                    0.2833              0.0038
3    0.7    2.3275            0.0403
                    0.3236
4    0.8    2.6511

Since we are required to find y′ and y″ at x = 0.6, and x = 0.6 is in the middle of the table, we use Stirling's formula:

dy/dx = (1/h)[½(Δy₀ + Δy₋₁) + (2u/2)Δ²y₋₁ + ((3u² − 1)/12)(Δ³y₋₁ + Δ³y₋₂) + ((4u³ − 2u)/24)Δ⁴y₋₂ + . . .]

With u = 0 and h = 0.1:

(dy/dx) at x = 0.6 = (1/0.1)[½(0.2833 + 0.2468) − (1/12)(0.0038 + 0.0035)] ≅ 2.64442

(d²y/dx²) at x = 0.6 = (1/(0.1)²)[(0.0365) − (1/12)(0.0003)] ≅ 3.6475
Numerical Integration
We know that a definite integral ∫ₐᵇ f(x)dx represents the area under the curve y = f(x) enclosed between the limits x = a and x = b. Direct integration is possible only if f(x) is explicitly given and if it is integrable. The problem of numerical integration can be stated as follows:
Given a set of (n + 1) data points (xᵢ, yᵢ), i = 0, 1, 2, . . . , n of the function y = f(x), where f(x) is not known explicitly, it is required to evaluate ∫ from x₀ to xₙ of f(x)dx.
The problem of numerical integration, like that of numerical differentiation, is solved by replacing f(x) with an interpolating polynomial Pₙ(x) and evaluating ∫ from x₀ to xₙ of Pₙ(x)dx, which is taken as an approximation to the required integral. Numerical integration is also known as numerical quadrature.
∫[x₀,xₙ] f(x)dx = nh[y₀ + (n/2)Δy₀ + (1/2!)(n²/3 − n/2)Δ²y₀ + (1/3!)(n³/4 − n² + n)Δ³y₀ + (1/4!)(n⁴/5 − 3n³/2 + 11n²/3 − 3n)Δ⁴y₀ + . . .] . . . (1)

Equation (1) is called the Newton-Cotes quadrature formula. From this general formula, we can get different integration formulae by putting n = 1, 2, 3, . . .
→ Trapezoidal rule: Putting n = 1 in (1), all differences higher than the first become zero, and for each strip

∫[x₀,x₁] f(x)dx = h[y₀ + ½Δy₀] = h[y₀ + ½(y₁ − y₀)] = (h/2)(y₀ + y₁)
∫[x₁,x₂] f(x)dx = h[y₁ + ½Δy₁] = h[y₁ + ½(y₂ − y₁)] = (h/2)(y₁ + y₂)
∫[x₂,x₃] f(x)dx = h[y₂ + ½Δy₂] = h[y₂ + ½(y₃ − y₂)] = (h/2)(y₂ + y₃)
∫[x₃,x₄] f(x)dx = h[y₃ + ½Δy₃] = h[y₃ + ½(y₄ − y₃)] = (h/2)(y₃ + y₄)
. . .
∫[xₙ₋₁,xₙ] f(x)dx = h[yₙ₋₁ + ½Δyₙ₋₁] = h[yₙ₋₁ + ½(yₙ − yₙ₋₁)] = (h/2)(yₙ₋₁ + yₙ)

Hence,
∫[x₀,xₙ] f(x)dx = ∫[x₀,x₁] f(x)dx + ∫[x₁,x₂] f(x)dx + ∫[x₂,x₃] f(x)dx + . . . + ∫[xₙ₋₁,xₙ] f(x)dx
= (h/2)(y₀ + y₁) + (h/2)(y₁ + y₂) + (h/2)(y₂ + y₃) + . . . + (h/2)(yₙ₋₁ + yₙ)
= (h/2)[(y₀ + yₙ) + 2(y₁ + y₂ + y₃ + . . . + yₙ₋₁)]

Thus,
∫[x₀,xₙ] f(x)dx = (h/2)[(sum of the first and the last ordinates) + 2(sum of the remaining ordinates)]
Geometrical Interpretation: Here each strip is a trapezium, and we find the area of each using the usual formula for the area of a trapezium.
[Figure: the curve y = f(x) over a = x₀ < x₁ < . . . < xₙ = b, divided into strips of equal width h with ordinates y₀, y₁, . . . , yₙ.]
The parallel sides of the trapezia are y₀ & y₁, y₁ & y₂, y₂ & y₃, . . . , yₙ₋₁ & yₙ respectively. Thus the total area of the n trapezoidal strips is the sum above.
Example: Use the Trapezoidal rule to evaluate i) ∫₀¹ x³ dx taking n = 5, and ii) ∫₀^π x sin x dx taking n = 6.
Solution:
i) a = x₀ = 0, b = xₙ = 1, and n = 5 ⇒ h = 0.2

xₖ       yₖ = xₖ³    F & L    Rem
0.000    0.000       0.000
0.200    0.008                0.008
0.400    0.064                0.064
0.600    0.216                0.216
0.800    0.512                0.512
1.000    1.000       1.000
Total                1.000    0.800

⇒ ∫₀¹ x³ dx ≅ (0.2/2)[(1.000) + 2(0.800)] ≅ 0.1[2.600] ≅ 0.260 square units
ii) a = x₀ = 0, b = xₙ = π, and n = 6 ⇒ h = π/6

xₖ       yₖ = xₖ sin xₖ    F & L     Rem
0        0.0000            0.0000
π/6      0.2618                      0.2618
π/3      0.9069                      0.9069
π/2      1.5708                      1.5708
2π/3     1.8138                      1.8138
5π/6     1.3090                      1.3090
π        0.0000            0.0000
Total                      0.0000    5.8623

⇒ ∫₀^π x sin x dx ≅ (π/12)[(0.0) + 2(5.8623)] ≅ (π/12)[11.7246] ≅ 3.0695 square units
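The composite rule in code, checked against both parts of the example (the function name is ours):

```python
import math

def trapezoidal(f, a, b, n):
    """h/2 [(first + last ordinate) + 2 (sum of the remaining ordinates)]"""
    h = (b - a) / n
    y = [f(a + k * h) for k in range(n + 1)]
    return h / 2 * ((y[0] + y[n]) + 2 * sum(y[1:n]))

print(round(trapezoidal(lambda x: x ** 3, 0, 1, 5), 3))                 # → 0.26
print(round(trapezoidal(lambda x: x * math.sin(x), 0, math.pi, 6), 4))  # → 3.0695
```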
𝑥 ( ) 2
∫𝑥 6 𝑓(𝑥) 𝑑𝑥 = 2ℎ[𝑦4 + 22∆𝑦4 + 2 4−3
12
∆ 𝑦4 ]
4
. . . . . . .
. . . . . . .
𝑥𝑛 ℎ
∫𝑥 𝑓(𝑥) 𝑑𝑥 = 3 (𝑦𝑛−2 + 4𝑦𝑛−1 + 𝑦𝑛 )
𝑛−2
Hence,
∫_{x₀}^{xₙ} f(x) dx = ∫_{x₀}^{x₂} f(x) dx + ∫_{x₂}^{x₄} f(x) dx + ∫_{x₄}^{x₆} f(x) dx + . . . + ∫_{xₙ₋₂}^{xₙ} f(x) dx
= (h/3)(y₀ + 4y₁ + y₂) + (h/3)(y₂ + 4y₃ + y₄) + (h/3)(y₄ + 4y₅ + y₆) + . . . + (h/3)(yₙ₋₂ + 4yₙ₋₁ + yₙ)
= (h/3)[(y₀ + yₙ) + 4(y₁ + y₃ + y₅ + . . . + yₙ₋₁) + 2(y₂ + y₄ + y₆ + . . . + yₙ₋₂)]
Thus,
∫_{x₀}^{xₙ} f(x) dx = (h/3)[(sum of the first and the last ordinates) + 4(sum of the odd ordinates) + 2(sum of the even ordinates)]
[Figure: the curve y = f(x) over a = x₀ ≤ x ≤ b = xₙ with ordinates y₀, y₁, y₂, y₃, . . ., yₙ at equal spacing h]
𝑖𝑖𝑖) ∫₀¹ dx/(1 + x²), taking h = 1/6
a = x₀ = 0, b = xₙ = 1, and h = 1/6 ⇒ n = 6

xₖ       yₖ = 1/(1 + xₖ²)     F & L      Rem        Odd        Even
0        1.00000              1.00000    ---        ---        ---
1/6      0.97297              ---        0.97297    0.97297    ---
2/6      0.90000              ---        0.90000    ---        0.90000
3/6      0.80000              ---        0.80000    0.80000    ---
4/6      0.69231              ---        0.69231    ---        0.69231
5/6      0.59016              ---        0.59016    0.59016    ---
1        0.50000              0.50000    ---        ---        ---
Total                         1.50000               2.36313    1.59231

⇒ ∫₀¹ dx/(1 + x²) ≅ (1/18)[(1.500) + 4(2.36313) + 2(1.59231)]
= (1/18)[1.500 + 9.45252 + 3.18462]
= (1/18)[14.13714]
= 0.785397 square units
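The composite 1/3 rule can be sketched in Python (the function name `simpson13` is ours), checked against the table above:

```python
def simpson13(f, a, b, n):
    """Composite Simpson's 1/3 rule; the number of intervals n must be even."""
    if n % 2:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    h = (b - a) / n
    y = [f(a + k * h) for k in range(n + 1)]
    # first + last, 4*(odd ordinates), 2*(even ordinates)
    return (h / 3) * (y[0] + y[n] + 4 * sum(y[1:n:2]) + 2 * sum(y[2:n:2]))

# integral of 1/(1 + x^2) over [0, 1] with n = 6: about 0.785398, close to pi/4
print(simpson13(lambda x: 1 / (1 + x * x), 0.0, 1.0, 6))
```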
Simpson's 3/8 Rule: Simpson's 1/3 rule was derived using three points that fit a quadratic. We
extend this approach by incorporating four successive points, so that the rule applies to a
polynomial of degree three. Putting 𝑛 = 3 in the Newton-Cotes quadrature formula, all differences higher
than the third become zero, and we obtain
∫_{x₀}^{x₃} f(x) dx = 3h[y₀ + (3/2)∆y₀ + (3(6−3)/12)∆²y₀ + (3(3−2)²/24)∆³y₀]
And
∫_{xₙ₋₃}^{xₙ} f(x) dx = 3h[yₙ₋₃ + (3/2)∆yₙ₋₃ + (3(6−3)/12)∆²yₙ₋₃ + (3(3−2)²/24)∆³yₙ₋₃]
= 3h[yₙ₋₃ + (3/2)(yₙ₋₂ − yₙ₋₃) + (3/4)(yₙ₋₁ − 2yₙ₋₂ + yₙ₋₃) + (1/8)(yₙ − 3yₙ₋₁ + 3yₙ₋₂ − yₙ₋₃)]
= (3h/8)(yₙ₋₃ + 3yₙ₋₂ + 3yₙ₋₁ + yₙ)
Hence,
∫_{x₀}^{xₙ} f(x) dx = ∫_{x₀}^{x₃} f(x) dx + ∫_{x₃}^{x₆} f(x) dx + ∫_{x₆}^{x₉} f(x) dx + . . . + ∫_{xₙ₋₃}^{xₙ} f(x) dx
= (3h/8)(y₀ + 3y₁ + 3y₂ + y₃) + (3h/8)(y₃ + 3y₄ + 3y₅ + y₆) + (3h/8)(y₆ + 3y₇ + 3y₈ + y₉) + . . . + (3h/8)(yₙ₋₃ + 3yₙ₋₂ + 3yₙ₋₁ + yₙ)
= (3h/8)[(y₀ + yₙ) + 3(y₁ + y₂ + y₄ + y₅ + . . . + yₙ₋₂ + yₙ₋₁) + 2(y₃ + y₆ + . . . + yₙ₋₃)]
Note: While there is no restriction on the number of intervals 𝑛 in the trapezoidal rule, the number of intervals 𝑛 in the
case of Simpson's 1/3 rule must be even, and for Simpson's 3/8 rule 𝑛 must be a multiple of three.
Boole's rule: Here the function 𝑓(𝑥) is approximated by the fourth-degree polynomial 𝑃₄(𝑥) which passes
through five points.
Putting 𝑛 = 4 in the Newton-Cotes quadrature formula, all differences higher than the fourth become
zero and we get
∫_{x₀}^{x₄} f(x) dx = 4h[y₀ + 2∆y₀ + (4(2(4)−3)/12)∆²y₀ + (4(4−2)²/24)∆³y₀ + (4[6(4)³ − 45(4)² + 110(4) − 90]/720)∆⁴y₀]
= 4h[y₀ + 2∆y₀ + (5/3)∆²y₀ + (2/3)∆³y₀ + (7/90)∆⁴y₀]
Expanding the differences in terms of the ordinates,
= (2h/45)(7y₀ + 32y₁ + 12y₂ + 32y₃ + 7y₄)
Weddle's rule: Here the function 𝑓(𝑥) is approximated by the sixth-degree polynomial 𝑃₆(𝑥) which passes
through seven points.
Putting 𝑛 = 6 in the Newton-Cotes quadrature formula, all differences higher than the sixth become
zero and we get
∫_{x₀}^{x₆} f(x) dx = (3h/10)(y₀ + 5y₁ + y₂ + 6y₃ + y₄ + 5y₅ + y₆)
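Both single-panel rules are easy to check in code; a sketch (the names `boole` and `weddle` are ours). Each integrates x⁴ over [0, 1] exactly, as one would hope from rules built on fourth- and sixth-degree interpolating polynomials:

```python
def boole(f, a, b):
    """Boole's rule on five equally spaced ordinates over [a, b]."""
    h = (b - a) / 4
    y = [f(a + k * h) for k in range(5)]
    return (2 * h / 45) * (7*y[0] + 32*y[1] + 12*y[2] + 32*y[3] + 7*y[4])

def weddle(f, a, b):
    """Weddle's rule on seven equally spaced ordinates over [a, b]."""
    h = (b - a) / 6
    y = [f(a + k * h) for k in range(7)]
    return (3 * h / 10) * (y[0] + 5*y[1] + y[2] + 6*y[3] + y[4] + 5*y[5] + y[6])

# integral of x^4 over [0, 1] is exactly 1/5
print(boole(lambda x: x**4, 0.0, 1.0))   # approximately 0.2
print(weddle(lambda x: x**4, 0.0, 1.0))  # approximately 0.2
```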
Introduction: Numerical methods for ordinary differential equations are of great importance,
particularly for equations that are very difficult, or sometimes impossible, to solve
analytically.
Methods of Solution:
− Single-Step Methods: In these methods each step uses only values obtained from the preceding single
step, and they include
1. Euler’s Method
2. Runge-Kutta Methods
3. Picard’s method
1. Euler’s Method.
Let us consider an initial value problem (IVP) of the form
y′ = f(x, y), y(x₀) = y₀
Assuming 𝑓 to be such that the problem has a unique solution on some interval containing x₀, we start
from the given y(x₀) = y₀ and proceed stepwise, computing approximate values of the solution y(x) at
the mesh points x₁ = x₀ + h, x₂ = x₀ + 2h, x₃ = x₀ + 3h, . . ., xₙ = x₀ + nh,
where the step size h is a fixed number.
Now for a small value of h the higher powers h², h³, h⁴, . . . are negligibly small, and this suggests the crude
approximation
y(x + h) = (h⁰/0!)y(x) + (h¹/1!)y′(x)
⇒ y(x + h) = y(x) + h f(x, y) . . . (2)
Applied at the mesh points this gives the iteration formula
yₙ₊₁ = yₙ + h f(xₙ, yₙ), n = 0, 1, 2, . . . (3)
Example1: Use Euler’s method to solve the following initial value problem choosing ℎ = 0.1
𝑖) 𝑦′ = 𝑥 + 𝑦, 𝑦(0) = 0 0≤𝑥≤1
′
𝑖𝑖) 𝑦 = 𝑥 − 𝑦 − 1, 𝑦(0) = 1 0≤𝑥≤1
Solution:
𝑖) The exact solution is 𝑦(𝑥) = 𝑒 𝑥 − 𝑥 − 1 . . . (*)
− 𝑦′ = 𝑓(𝑥, 𝑦) 𝑦(𝑥𝑜 ) = 𝑦𝑜 . . . (i)
𝑦 ′ = 𝑥 + 𝑦, 𝑦(0) = 0 ⇨ 𝑥𝑜 = 0, 𝑦𝑜 = 0 . . . (ii)
From (i) & (ii) we have
𝑓(𝑥, 𝑦) = 𝑥 + 𝑦, 𝑥0 = 0 𝑦0 = 0 and ℎ = 0.1
⇨ 𝑓(𝑥𝑛 , 𝑦𝑛 ) = 𝑥𝑛 + 𝑦𝑛 , 𝑥0 = 0 𝑦0 = 0 and ℎ = 0.1
Substituting in equation (3), we have
𝒚𝒏+𝟏 = 𝟏. 𝟏𝒚𝒏 + 𝟎. 𝟏𝒙𝒏
𝑛 𝑥𝑛 𝑦𝑛 𝑦𝑛+1 Exact V Error
0 0.0000 0.0000 0.0000 0.0000 0.0000
1 0.1000 0.0000 0.0100 0.0052 0.0048
2 0.2000 0.0100 0.0311 0.0214 0.0097
3 0.3000 0.0311 0.0642 0.0499 0.0143
4 0.4000 0.0642 0.1106 0.0918 0.0188
5 0.5000 0.1106 0.1717 0.1487 0.0230
6 0.6000 0.1717 0.2489 0.2221 0.0268
7 0.7000 0.2489 0.3438 0.3138 0.0300
8 0.8000 0.3438 0.4581 0.4255 0.0326
9 0.9000 0.4581 0.5939 0.5596 0.0343
10 1.0000 0.5939 0.7533 0.7183 0.0350
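The iteration is a one-line loop. A minimal sketch (the function name `euler` is ours) for Example1 i):

```python
import math

def euler(f, x0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# y' = x + y, y(0) = 0, h = 0.1: ten steps approximate y(1)
approx = euler(lambda x, y: x + y, 0.0, 0.0, 0.1, 10)
exact = math.e - 2               # y(x) = e^x - x - 1 at x = 1
print(approx, exact, exact - approx)   # the error grows with x, as in the table
```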
Example2: Use Euler’s method to solve the following initial value problem choosing ℎ = 0.1
𝑖) 𝑦 ′ = 𝑦 − 𝑥, 𝑦(0) = 3 0≤𝑥≤1
𝑖𝑖) 𝑦 ′ = 𝑥 − 𝑦 + 1, 𝑦(0) = 1 0≤𝑥≤1
Solution:
𝑖) The Exact solution is 𝑦(𝑥) = 2𝑒 𝑥 + 𝑥 + 1 . . . (*)
− 𝑦 ′ = 𝑓(𝑥, 𝑦) 𝑦(𝑥𝑜 ) = 𝑦𝑜 . . . (i)
− 𝑦 ′ = 𝑦 − 𝑥, 𝑦(0) = 3 ⇨ 𝑥0 = 0, 𝑦0 = 3 . . . (ii)
From (i) & (ii) we have
𝑓(𝑥, 𝑦) = 𝑦 − 𝑥, 𝑥0 = 0 𝑦0 = 3 and ℎ = 0.1
⇨ 𝑓(𝑥𝑛 , 𝑦𝑛 ) = 𝑦𝑛 − 𝑥𝑛 , 𝑥0 = 0 𝑦0 = 3 and ℎ = 0.1
Substituting in equation (3), we have
𝒚𝒏+𝟏 = 𝟏. 𝟏𝒚𝒏 − 𝟎. 𝟏𝒙𝒏
𝑛 𝑥𝑛 𝑦𝑛 𝑦𝑛+1 Exact V Error
0 0.0000 3.0000 3.3000 3.0000 0.3000
1 0.1000 3.3000 3.6200 3.3103 0.3097
2 0.2000 3.6200 3.9620 3.6428 0.3192
3 0.3000 3.9620 4.3282 3.9997 0.3285
4 0.4000 4.3282 4.7210 4.3836 0.3374
5 0.5000 4.7210 5.1431 4.7974 0.3457
6 0.6000 5.1431 5.5974 5.2442 0.3532
7 0.7000 5.5974 6.0871 5.7275 0.3596
8 0.8000 6.0871 6.6158 6.2511 0.3647
9 0.9000 6.6158 7.1874 6.8192 0.3682
10 1.0000 7.1874 7.8061 7.4366 0.3695
2. Runge-Kutta Methods:
The Runge-Kutta (R-K) methods are extensions of the basic ideas of Euler’s method using
approximations which agree with more terms of the Taylor series. The basic step length of the method is
ℎ as with Euler’s method, but some intermediate points are also computed and the slopes at these
points are used to improve the overall change between 𝑥𝑛 and 𝑥𝑛 + ℎ = 𝑥𝑛+1
Corrected Euler's Method: The corrected Euler's (or midpoint) method is a two-stage
formula
k₁ = f(xₙ, yₙ)
k₂ = f(xₙ + ½h, yₙ + ½hk₁)
yₙ₊₁ = yₙ + hk₂ . . . (4)
This is equivalent to taking an Euler step with half the step length, and then using the slope at the
midpoint between xₙ and xₙ₊₁ as the estimate of the average slope over the interval.
Example1: Use Corrected Euler’s method to solve the following initial value problem choosing ℎ = 0.1
𝑖) 𝑦′ = 𝑥 + 𝑦, 𝑦(0) = 0 0≤𝑥≤1
𝑖𝑖) 𝑦 ′ = 𝑥 − 𝑦 − 1, 𝑦(0) = 1 0≤𝑥≤1
Solution:
𝑖) The exact solution is y(x) = eˣ − x − 1
k₁ = xₙ + yₙ
k₂ = [xₙ + ½(0.1)] + [yₙ + ½(0.1)(xₙ + yₙ)] = 1.05yₙ + 1.05xₙ + 0.05
Substituting k₂ in equation (4), we have
yₙ₊₁ = 1.105yₙ + 0.105xₙ + 0.005
Example2: Use corrected Euler’s method to solve the following initial value problem choosing ℎ = 0.1
𝑖) 𝑦 ′ = 𝑦 − 𝑥, 𝑦(0) = 3 0≤𝑥≤1
𝑖𝑖) 𝑦 ′ = 2𝑥 − 𝑦, 𝑦(0) = 1 0≤𝑥≤1
Solution:
− The Exact solution is 𝑦(𝑥) = 2𝑒 𝑥 + 𝑥 + 1 . . . (*)
− 𝑦 ′ = 𝑓(𝑥, 𝑦) 𝑦(𝑥𝑜 ) = 𝑦𝑜 . . . (i)
′
− 𝑦 = 𝑦 − 𝑥, 𝑦(0) = 3 ⇨ 𝑥0 = 0, 𝑦0 = 3 . . . (ii)
From (i) & (ii) we have
𝑓(𝑥𝑛 , 𝑦𝑛 ) = 𝑦𝑛 − 𝑥𝑛 , 𝑥0 = 0 𝑦0 = 3 and ℎ = 0.1
k₁ = yₙ − xₙ
k₂ = [yₙ + ½(0.1)(yₙ − xₙ)] − [xₙ + ½(0.1)] = 1.05yₙ − 1.05xₙ − 0.05
Substituting k₂ in equation (4), we have
yₙ₊₁ = 1.105yₙ − 0.105xₙ − 0.005
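Equation (4) can be coded directly; a minimal sketch (the name `corrected_euler` is ours), checked against Example1 i):

```python
import math

def corrected_euler(f, x0, y0, h, steps):
    """Corrected Euler (midpoint) method, equation (4)."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)   # slope at the midpoint
        y += h * k2
        x += h
    return y

g = lambda x, y: x + y                      # y' = x + y, y(0) = 0
# One step reproduces y_{n+1} = 1.105*y_n + 0.105*x_n + 0.005:
print(corrected_euler(g, 0.0, 0.0, 0.1, 1))   # 0.005
# Ten steps land much closer to the exact value e - 2 than plain Euler:
print(corrected_euler(g, 0.0, 0.0, 0.1, 10), math.e - 2)
```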
Improved Euler Method: The improved Euler's method takes the full Euler step, but then improves it
by using the average of the slopes at the two endpoints for the corrected step.
k₁ = f(xₙ, yₙ)
k₂ = f(xₙ + h, yₙ + hk₁)
yₙ₊₁ = yₙ + ½h[k₁ + k₂] . . . (5)
𝑛 𝑥𝑛 𝑦𝑛 𝑦𝑛+1 Exact. V Error
0 𝑥0 𝑦0 𝑦1
1 𝑥1 𝑦1 𝑦2
2 𝑥2 𝑦2 𝑦3
. . . .
The first of these is equivalent to using the midpoint rule to integrate 𝑦 ′ over [𝑥𝑛 , 𝑥𝑛+1 ] with the slope at
the midpoint predicted by Euler’s method. The second is the same principle applied with the Trapezoidal
rule.
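Equation (5) in code, a minimal sketch (the name `improved_euler` is ours):

```python
def improved_euler(f, x0, y0, h, steps):
    """Improved Euler method, equation (5): average the slopes at both ends."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)      # slope after a full Euler step
        y += (h / 2) * (k1 + k2)
        x += h
    return y

# y' = x + y, y(0) = 0, h = 0.1: for this linear f the update works out to
# y_{n+1} = 1.105*y_n + 0.105*x_n + 0.005, the same recurrence as the midpoint form
print(improved_euler(lambda x, y: x + y, 0.0, 0.0, 0.1, 10))   # about 0.7141
```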
Example2: Use Improved Euler’s method to solve the following initial value problem choosing ℎ = 0.1
𝑖) 𝑦′ = 𝑦 − 𝑥, 𝑦(0) = 3 0≤𝑥≤1
𝑖𝑖) 𝑦 ′ = 2𝑥 − 𝑦, 𝑦(0) = 1 0≤𝑥≤1
Solution:
− The exact solution is 𝑦(𝑥) = 2𝑒 𝑥 + 𝑥 + 1 . . . (*)
k₁ = yₙ − xₙ
k₂ = yₙ + 0.1(yₙ − xₙ) − (xₙ + 0.1) = 1.1yₙ − 1.1xₙ − 0.1
Substituting k₁ & k₂ in equation (5), we have
⇒ yₙ₊₁ = 1.105yₙ − 0.105xₙ − 0.005
Heun's Method: In the form used here, Heun's method is a two-stage formula in which the second slope is sampled two-thirds of the way across the step:
k₁ = f(xₙ, yₙ)
k₂ = f(xₙ + ⅔h, yₙ + ⅔hk₁)
yₙ₊₁ = yₙ + ¼h[k₁ + 3k₂] . . . (6)
Example1: Use Heun's method to solve the following initial value problems choosing h = 0.1
𝑖) y′ = x + y, y(0) = 0, 0 ≤ x ≤ 1
𝑖𝑖) y′ = x − y + 1, y(0) = 1, 0 ≤ x ≤ 1
Solution:
𝑖) The exact solution is y(x) = eˣ − x − 1 . . . (*)
k₁ = xₙ + yₙ
k₂ = [xₙ + 0.0667] + [yₙ + 0.0667(xₙ + yₙ)] = 1.0667xₙ + 1.0667yₙ + 0.0667
Substituting k₁ & k₂ in equation (6), we have
⇒ yₙ₊₁ = 1.105yₙ + 0.105xₙ + 0.005
Example2: Use Heun’s method to solve the following initial value problem choosing ℎ = 0.1
𝑖) 𝑦 ′ = 𝑦 − 𝑥, 𝑦(0) = 3 0 ≤ 𝑥 ≤ 1,
𝑖𝑖) 𝑦 ′ = 2𝑥 − 𝑦 𝑦(0) = 1 0 ≤ 𝑥 ≤ 1,
Solution:
𝑖) The exact solution is 𝑦(𝑥) = 2𝑒 𝑥 + 𝑥 + 1 . . . (*)
𝑘1 = 𝑦𝑛 − 𝑥𝑛
𝑘2 = [𝑦𝑛 + 0.0667(𝑦𝑛 − 𝑥𝑛 )] − [𝑥𝑛 + 0.0667] = 1.0667𝑦𝑛 − 1.0667𝑥𝑛 − 0.0667
Substituting 𝑘1 & 𝑘2 in equation (6), we have
𝒚𝒏+𝟏 = 𝟏. 𝟏𝟎𝟓𝒚𝒏 − 𝟎. 𝟏𝟎𝟓𝒙𝒏 − 𝟎. 𝟎𝟎𝟓
𝑛 𝑥𝑛 𝑦𝑛 𝑦𝑛+1 Exact. V Error
0 0.0000 3.0000 3.3100 3.0000
1 0.1000 3.3100 3.6421 3.3103
2 0.2000 3.6421 3.9985 3.6428
3 0.3000 3.9985 4.3818 3.9997
4 0.4000 4.3818 4.7949 4.3836
5 0.5000 4.7949 5.2409 4.7974
6 0.6000 5.2409 5.7232 5.2442
7 0.7000 5.7232 6.2456 5.7275
8 0.8000 6.2456 6.8124 6.2511
9 0.9000 6.8124 7.4282 6.8192
10 1.0000 7.4282 8.0982 7.4366
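Equation (6) in code, assuming the two-stage form implied by the worked examples (second slope sampled at xₙ + 2h/3; the name `heun` is ours):

```python
def heun(f, x0, y0, h, steps):
    """Heun's two-stage method, equation (6)."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + 2 * h / 3, y + 2 * h * k1 / 3)
        y += (h / 4) * (k1 + 3 * k2)
        x += h
    return y

# y' = y - x, y(0) = 3, h = 0.1: ten steps reproduce the last row of the table
print(heun(lambda x, y: y - x, 0.0, 3.0, 0.1, 10))   # about 7.4282 (exact 2e + 2 = 7.4366)
```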
Classical Runge-Kutta Method: A still more accurate method of great practical importance is the
classical Runge-Kutta method of order four, often called simply the Runge-Kutta method. In this method we
first compute four auxiliary quantities k₁, k₂, k₃, and k₄, and then the new value yₙ₊₁,
where
k₁ = h f(xₙ, yₙ)
k₂ = h f(xₙ + ½h, yₙ + ½k₁)
k₃ = h f(xₙ + ½h, yₙ + ½k₂)
k₄ = h f(xₙ + h, yₙ + k₃)
yₙ₊₁ = yₙ + (1/6)[k₁ + 2k₂ + 2k₃ + k₄] . . . (7)
Example1: Use Classical Runge-Kutta method to approximate the solution of the initial value problem
choosing ℎ = 0.1
𝑖) 𝑦 ′ = 𝑥 + 𝑦 𝑦(0) = 0 0 ≤ 𝑥 ≤ 1,
𝑖𝑖) 𝑦 ′ = 𝑥 − 𝑦 + 1 , 𝑦(0) = 1 0 ≤ 𝑥 ≤ 1,
Solution:
𝑖) The exact solution is y(x) = eˣ − x − 1 . . . (*)
f(x, y) = x + y
k₁ = 0.1(xₙ + yₙ) = 0.1xₙ + 0.1yₙ
k₂ = 0.1(xₙ + 0.05 + yₙ + 0.5(0.1xₙ + 0.1yₙ)) = 0.105xₙ + 0.105yₙ + 0.005
k₃ = 0.1(xₙ + 0.05 + yₙ + 0.5(0.105xₙ + 0.105yₙ + 0.005)) = 0.10525xₙ + 0.10525yₙ + 0.00525
k₄ = 0.1(xₙ + 0.1 + yₙ + 0.10525xₙ + 0.10525yₙ + 0.00525) = 0.110525xₙ + 0.110525yₙ + 0.010525
Substituting k₁, k₂, k₃ & k₄ in equation (7), we have
⇒ yₙ₊₁ = 1.10517yₙ + 0.10517xₙ + 0.00517
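Equation (7) in code, a minimal sketch (the name `rk4` is ours), checked against the linear update just derived:

```python
import math

def rk4(f, x0, y0, h, steps):
    """Classical fourth-order Runge-Kutta method, equation (7)."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

g = lambda x, y: x + y
# One step from (0, 0) matches the update above: y1 is about 0.0051708
print(rk4(g, 0.0, 0.0, 0.1, 1))
# Ten steps: accurate to a few parts in a million against the exact value e - 2
print(rk4(g, 0.0, 0.0, 0.1, 10), math.e - 2)
```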
Taylor Series Method: Euler's method was derived from Taylor's theorem with 𝑛 = 2, and the method was
improved by taking one additional term, making 𝑛 = 3. A natural way to seek further improvement in
accuracy is to extend this technique of derivation to larger values of 𝑛. The
motivation for this method is that the derivatives of the solution function can often be found easily from
the differential equation itself.
Suppose the solution y(x) to the initial value problem
y′(x) = f(x, y(x)), a < x < b, y(x₀) = α
has (n + 1) continuous derivatives. If we expand the solution y(x) in terms of its nth Taylor polynomial
about xₙ we obtain
y(xₙ₊₁) = y(xₙ) + (h¹/1!)y′(xₙ) + (h²/2!)y″(xₙ) + . . . + (hⁿ/n!)y⁽ⁿ⁾(xₙ) + (hⁿ⁺¹/(n+1)!)y⁽ⁿ⁺¹⁾(zₙ)
i.e.
y(xₙ₊₁) = y(xₙ) + hy′(xₙ) + (h²/2!)y″(xₙ) + . . . + (hⁿ/n!)y⁽ⁿ⁾(xₙ) + (hⁿ⁺¹/(n+1)!)y⁽ⁿ⁺¹⁾(zₙ) . . . (i)
Example1: Using Taylor series method of order two solve the initial value problem with ℎ = 0.1
𝑖) 𝑦′ = 𝑥 − 𝑦 + 1 𝑦(0) = 1 0 ≤ 𝑥 ≤ 1,
𝑖𝑖) 𝑦′ = 𝑥 + 𝑦 𝑦(0) = 0 0 ≤ 𝑥 ≤ 1,
Solution:
𝑖) The exact solution is y(x) = e⁻ˣ + x . . . (*)
f(x, y) = x − y + 1
f(xₙ, yₙ) = xₙ − yₙ + 1
f′(xₙ, yₙ) = 1 − yₙ′ = 1 − (xₙ − yₙ + 1) = yₙ − xₙ
yₙ₊₁ = yₙ + h f(xₙ, yₙ) + (h²/2!) f′(xₙ, yₙ)
= yₙ + h[f(xₙ, yₙ) + ½h f′(xₙ, yₙ)] = yₙ + (0.1)[(xₙ − yₙ + 1) + (0.05)(yₙ − xₙ)]
⇒ yₙ₊₁ = 0.905yₙ + 0.095xₙ + 0.1
𝑛 𝑥𝑛 𝑦𝑛 𝑦𝑛+1 Exact Soln. Error
0 0.000 1.000 1.005 1.0000
1 0.100 1.005 0.8145
2 0.200 0.6562
3 0.300 0.5225
4 0.400 0.4110
5 0.500 0.3200
6 0.600 0.2464
7 0.700 0.1897
8 0.800 0.1480
9 0.900 0.1197
10 1.000 0.1036
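The order-two recurrence above can be iterated directly; a sketch for this particular IVP (the name `taylor2` is ours):

```python
import math

def taylor2(x0, y0, h, steps):
    """Order-two Taylor method for y' = x - y + 1, using y'' = y - x."""
    x, y = x0, y0
    for _ in range(steps):
        f1 = x - y + 1                   # y'
        f2 = y - x                       # y'' = 1 - y'
        y += h * f1 + (h * h / 2) * f2   # i.e. y_{n+1} = 0.905*y_n + 0.095*x_n + 0.1
        x += h
    return y

print(taylor2(0.0, 1.0, 0.1, 1))    # 1.005, the first table entry
approx = taylor2(0.0, 1.0, 0.1, 10)
print(approx, math.exp(-1) + 1)     # about 1.3685 vs the exact y(1) = 1/e + 1 = 1.3679
```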
= yₙ + 0.1[(0.951625)(xₙ − yₙ − 2) + 1]
= yₙ + 0.1[0.951625xₙ − 0.951625yₙ − 1.90325 + 1]
= yₙ + 0.0951625xₙ − 0.0951625yₙ − 0.090325
⇒ yₙ₊₁ = 0.904838yₙ + 0.095163xₙ − 0.090325
= yₙ + 0.1[(0.951625)(2xₙ − yₙ − 2) + 2]
= yₙ + 0.1[1.903251xₙ − 0.951625yₙ − 1.903251 + 2]
= yₙ + 0.1[1.903251xₙ − 0.951625yₙ + 0.096749]
= yₙ + 0.1903251xₙ − 0.0951625yₙ + 0.0096749
⇒ yₙ₊₁ = 0.904838yₙ + 0.190325xₙ + 0.009675
𝑛 𝑥𝑛 𝑦𝑛 𝑦𝑛+1 Exact Error
0 0.0000 1.0000 0.0000
1 0.1000 0.0052
2 0.2000 0.0214
3 0.3000 0.0499
4 0.4000 0.0918
5 0.5000 0.1487
6 0.6000 0.2221
7 0.7000 0.3138
8 0.8000 0.4255
9 0.9000 0.5596
10 1.0000 0.7183
Exercise:
1) Use the classical fourth order Runge-Kutta method to solve the following initial value problems
𝑦′ = 𝑥 − 𝑦2 𝑦(0) = 1 ℎ = 0.2 0 ≤ 𝑥 ≤ 0.5
2) Use Taylor series method of order three with ℎ = 0.1 to approximate the solution of the initial
value problem y′ = 2y/x + xeˣ, 1 ≤ x ≤ 2, y(1) = 0
⇨ dy = f(x, y) dx
⇨ ∫ dy = ∫ f(x, y) dx
⇨ y = ∫ f(x, y) dx + c . . . (2)
where c is the constant of integration.
𝑑𝑦
Example1: Solve the following differential equation 𝑑𝑥 = 𝑦 − 𝑥 2 𝑦(0) = 1 by Picard’s method up to the
third approximation. Hence, find the value of 𝑦(0.1), 𝑦(0.2).
Solution:
dy/dx = y − x², y(0) = 1 ⇒ x₀ = 0, y₀ = 1
Carrying out three Picard iterations gives the third approximation
y⁽³⁾ = 1 + x + x²/2 − x³/6 − x⁴/12 − x⁵/60 . . . (4)
Here x₀ = 0, y₀ = 1
⇨ y = 1 + ∫₀ˣ (x² + y²) dx . . . (1)
⇨ y⁽¹⁾ = 1 + ∫₀ˣ (1 + x²) dx = 1 + x + x³/3 . . . (2)
Example4: Solve the following differential equation dy/dx + y = eˣ, y(0) = 0, using Picard's method
Solution:
dy/dx + y = eˣ, y(0) = 0 ⇒ x₀ = 0, y₀ = 0
Here x₀ = 0, y₀ = 0
⇨ y = 0 + ∫₀ˣ (eˣ − y) dx . . . (1)
⇨ y⁽¹⁾ = ∫₀ˣ (eˣ − 0) dx = eˣ − 1 . . . (2)
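Picard iterates for Example1 (y′ = y − x², y(0) = 1) can be generated mechanically by integrating polynomials term by term; a sketch with exact rational coefficients (the helper names are ours):

```python
from fractions import Fraction

def picard_step(p):
    """One Picard iteration for y' = y - x^2, y(0) = 1.
    p holds polynomial coefficients, lowest degree first."""
    integrand = list(p) + [Fraction(0)] * max(0, 3 - len(p))
    integrand[2] -= 1                              # form y - x^2
    new_p = [Fraction(1)]                          # constant term from y(0) = 1
    new_p += [c / (k + 1) for k, c in enumerate(integrand)]   # termwise integral
    return new_p

p = [Fraction(1)]                                  # zeroth approximation y = 1
for _ in range(3):
    p = picard_step(p)
# coefficients 1, 1, 1/2, -1/6, -1/12, -1/60: equation (4) above
print(p)

def evaluate(p, x):
    return sum(float(c) * x**k for k, c in enumerate(p))

print(evaluate(p, 0.1), evaluate(p, 0.2))   # y(0.1) and y(0.2) to this order
```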
In eqn (2), to get the value of yₙ₊₁ we require yₙ₊₁ on the RHS. To overcome this difficulty, we first calculate
yₙ₊₁ using Euler's formula (1) and then use it on the RHS of eqn (2) to get a refined yₙ₊₁ on the LHS.
That is, we predict the value of yₙ₊₁ from the rough formula (1) and use it in (2) to correct it; each time the
correction is applied the value improves. Hence eqn (1) is a predictor and eqn (2) is a corrector: a predictor
formula is used to predict the value of yₙ₊₁ at xₙ₊₁, and a corrector formula is used to correct the error
and improve that value of yₙ₊₁.
where u = (x − x₀)/h ⇒ x − x₀ = uh
Replacing y by y′,
y′ = y₀′ + u∆y₀′ + ((u² − u)/2!)∆²y₀′ + ((u³ − 3u² + 2u)/3!)∆³y₀′ + ((u⁴ − 6u³ + 11u² − 6u)/4!)∆⁴y₀′ + ((u⁵ − 10u⁴ + 35u³ − 50u² + 24u)/5!)∆⁵y₀′ + . . . . . . (2)
= h[4y₀′ + 8∆y₀′ + (20/3)∆²y₀′ + (8/3)∆³y₀′ + (14/45)∆⁴y₀′ + . . .]
= h[4y₀′ + 8(E − 1)y₀′ + (20/3)(E − 1)²y₀′ + (8/3)(E − 1)³y₀′ + (14/45)∆⁴y₀′ + . . .]
= h[4y₀′ + 8(y₁′ − y₀′) + (20/3)(y₂′ − 2y₁′ + y₀′) + (8/3)(y₃′ − 3y₂′ + 3y₁′ − y₀′) + (14/45)∆⁴y₀′ + . . .]
= h[(4 − 8 + 20/3 − 8/3)y₀′ + (8 − 40/3 + 8)y₁′ + (20/3 − 8)y₂′ + (8/3)y₃′] + (14/45)h∆⁴y₀′ + . . .
= h[(8/3)y₁′ − (4/3)y₂′ + (8/3)y₃′] + (14/45)h∆⁴y₀′ + . . .
⇒ y₄ − y₀ = (4h/3)[2y₁′ − y₂′ + 2y₃′] + (14/45)h∆⁴y₀′ + . . . . . . (3)
And in general
yₙ₊₁ = yₙ₋₃ + (4h/3)[2yₙ₋₂′ − yₙ₋₁′ + 2yₙ′] + (14h⁵/45)y⁽ᵛ⁾(ξ₁), where xₙ₋₃ < ξ₁ < xₙ₊₁ . . . (6)
= h[2y₀′ + 2(y₁′ − y₀′) + (1/3)(y₂′ − 2y₁′ + y₀′) − (1/90)∆⁴y₀′ + . . .]
⇒ y₂ − y₀ = (h/3)[y₀′ + 4y₁′ + y₂′] − (h/90)∆⁴y₀′ + . . . . . . (7)
i.e. y₂ = y₀ + (h/3)[f₀ + 4f₁ + f₂]
The error committed in (7) is −(h/90)∆⁴y₀′ + . . ., and this can be shown to be −(h⁵/90)y⁽ᵛ⁾(ξ), where x₀ < ξ < x₂
Example: Using Milne's predictor formula, find y(2) given that dy/dx = ½(x + y), y(0) = 2, y(0.5) = 2.636, y(1.0) = 3.595 and y(1.5) = 4.968
Solution:
Here x₀ = 0, x₁ = 0.5, x₂ = 1.0, x₃ = 1.5, x₄ = 2.0, h = 0.5, y₀ = 2, y₁ = 2.636, y₂ = 3.595, y₃ = 4.968
f(x, y) = ½(x + y) = y′ . . . (1)
From eqn (1)
y₁′ = ½(x₁ + y₁) = ½(0.5 + 2.636) = 1.5680
y₂′ = ½(x₂ + y₂) = ½(1.0 + 3.595) = 2.2975
y₃′ = ½(x₃ + y₃) = ½(1.5 + 4.968) = 3.2340
∴ y₄ = 2 + (4(0.5)/3)[2(1.5680) − (2.2975) + 2(3.2340)] = 6.8710
Example: Using Milne's method find y(4.4) given 5xy′ + y² − 2 = 0 and y(4.0) = 1, y(4.1) = 1.0049, y(4.2) = 1.0097 and y(4.3) = 1.0143
Solution:
y′ = (2 − y²)/(5x)
x₀ = 4.0, x₁ = 4.1, x₂ = 4.2, x₃ = 4.3, x₄ = 4.4, h = 0.1, y₀ = 1.0000, y₁ = 1.0049, y₂ = 1.0097, y₃ = 1.0143
f(x, y) = (2 − y²)/(5x) = y′ . . . (1)
y₄ = y₀ + (4h/3)[2y₁′ − y₂′ + 2y₃′]
From eqn (1)
y₁′ = (2 − y₁²)/(5x₁) = (2 − (1.0049)²)/(5(4.1)) = 0.0483
y₂′ = (2 − y₂²)/(5x₂) = (2 − (1.0097)²)/(5(4.2)) = 0.0467
y₃′ = (2 − y₃²)/(5x₃) = (2 − (1.0143)²)/(5(4.3)) = 0.0452
∴ y₄ = 1 + (4(0.1)/3)[2(0.0483) − (0.0467) + 2(0.0452)] = 1.0187
We now replace 𝑓(𝑥, 𝑦(𝑥)) by an interpolation polynomial 𝑝(𝑥) so that we can later integrate. This gives
an approximation
yₙ₊₁ = yₙ + ∫_{xₙ}^{xₙ₊₁} p(x) dx . . . (2)
Different choices of p(x) now produce different methods. We explain the principle by considering a
polynomial p₃(x) of degree three interpolating at the equally spaced points xₙ, xₙ₋₁, xₙ₋₂, xₙ₋₃ with
𝑓𝑛 = 𝑓(𝑥𝑛 , 𝑦𝑛 )
𝑓𝑛−1 = 𝑓(𝑥𝑛−1 , 𝑦𝑛−1 )
} . . . (3)
𝑓𝑛−2 = 𝑓(𝑥𝑛−2 , 𝑦𝑛−2 )
𝑓𝑛−3 = 𝑓(𝑥𝑛−3 , 𝑦𝑛−3 )
This will lead to a practically useful formula. We can obtain 𝑝3 (𝑥) from Newton’s backward difference
formula
p₃(x) = fₙ + v∇fₙ + ½v(v + 1)∇²fₙ + (1/6)v(v + 1)(v + 2)∇³fₙ
where v = (x − xₙ)/h
We integrate p₃(x) over x from xₙ to xₙ₊₁, thus over v from 0 to 1, since x = xₙ + vh and dx = h dv:
∫_{xₙ}^{xₙ₊₁} p₃(x) dx = h∫₀¹ [fₙ + v∇fₙ + ½v(v + 1)∇²fₙ + (1/6)v(v + 1)(v + 2)∇³fₙ] dv
= h(fₙ + ½∇fₙ + (5/12)∇²fₙ + (3/8)∇³fₙ)
Expressing the differences in terms of fₙ, fₙ₋₁, fₙ₋₂, fₙ₋₃ gives Adams' predictor formula
yₙ₊₁ = yₙ + (h/24)(55fₙ − 59fₙ₋₁ + 37fₙ₋₂ − 9fₙ₋₃)
For the corrector we instead take p₃(x) through the points xₙ₊₁, xₙ, xₙ₋₁, xₙ₋₂ (Newton's backward formula at xₙ₊₁, with v = (x − xₙ₊₁)/h) and integrate over v from −1 to 0:
∫_{xₙ}^{xₙ₊₁} p₃(x) dx = h∫₋₁⁰ [fₙ₊₁ + v∇fₙ₊₁ + ½v(v + 1)∇²fₙ₊₁ + (1/6)v(v + 1)(v + 2)∇³fₙ₊₁] dv
⇒ yₙ₊₁ − yₙ = h(fₙ₊₁ − ½∇fₙ₊₁ − (1/12)∇²fₙ₊₁ − (1/24)∇³fₙ₊₁) . . . (6)
which, in terms of the f-values, is Adams' corrector formula
yₙ₊₁ = yₙ + (h/24)(9fₙ₊₁ + 19fₙ − 5fₙ₋₁ + fₙ₋₂)
Example: Find y(2) given that dy/dx = ½(x + y), y(0) = 2, y(0.5) = 2.636, y(1) = 3.595 and y(1.5) = 4.968,
by Adams' method
Solution:
We have
y₀′ = ½(x₀ + y₀) = ½(0.0 + 2.000) = 1.0000
y₁′ = ½(x₁ + y₁) = ½(0.5 + 2.636) = 1.5680
y₂′ = ½(x₂ + y₂) = ½(1.0 + 3.595) = 2.2975
y₃′ = ½(x₃ + y₃) = ½(1.5 + 4.968) = 3.2340
By Adams' predictor formula
yₙ₊₁ = yₙ + (h/24)(55yₙ′ − 59yₙ₋₁′ + 37yₙ₋₂′ − 9yₙ₋₃′)
y₄ = y₃ + (h/24)(55y₃′ − 59y₂′ + 37y₁′ − 9y₀′)
y₄ = 4.968 + ((0.5)/24)[55(3.2340) − 59(2.2975) + 37(1.5680) − 9(1.0000)] = 6.8708
y₄′ = ½(x₄ + y₄) = ½(2.0 + 6.8708) = 4.4354
By Adams' corrector formula we have
y₄ = y₃ + (h/24)(9y₄′ + 19y₃′ − 5y₂′ + y₁′)
y₄ = 4.968 + ((0.5)/24)[9(4.4354) + 19(3.2340) − 5(2.2975) + (1.5680)] = 6.8731
Example: Find y(0.4) given that dy/dx = ½xy, y(0.0) = 1, y(0.1) = 1.010, y(0.2) = 1.022 and y(0.3) = 1.023,
using Adams' method
Solution:
x₀ = 0.0, x₁ = 0.1, x₂ = 0.2, x₃ = 0.3, x₄ = 0.4, h = 0.1, y₀ = 1.000, y₁ = 1.010, y₂ = 1.022, y₃ = 1.023,
y₄ = ?
We have
y₀′ = ½(x₀y₀) = ½(0.0 × 1.000) = 0.0000
y₁′ = ½(x₁y₁) = ½(0.1 × 1.010) = 0.0505
y₂′ = ½(x₂y₂) = ½(0.2 × 1.022) = 0.1022
y₃′ = ½(x₃y₃) = ½(0.3 × 1.023) = 0.1535
By Adams' predictor formula
yₙ₊₁ = yₙ + (h/24)(55yₙ′ − 59yₙ₋₁′ + 37yₙ₋₂′ − 9yₙ₋₃′)
y₄ = y₃ + (h/24)(55y₃′ − 59y₂′ + 37y₁′ − 9y₀′)
y₄ = 1.023 + ((0.1)/24)[55(0.1535) − 59(0.1022) + 37(0.0505) − 9(0.0000)] = 1.0408
y₄′ = ½(x₄y₄) = ½(0.4 × 1.0408) = 0.20816
By Adams' corrector formula we have
yₙ₊₁ = yₙ + (h/24)(9f*ₙ₊₁ + 19fₙ − 5fₙ₋₁ + fₙ₋₂)
i.e. yₙ₊₁ = yₙ + (h/24)(9yₙ₊₁′ + 19yₙ′ − 5yₙ₋₁′ + yₙ₋₂′)
y₄ = y₃ + (h/24)(9y₄′ + 19y₃′ − 5y₂′ + y₁′)
y₄ = 1.023 + ((0.1)/24)[9(0.20816) + 19(0.1535) − 5(0.1022) + (0.0505)] = 1.0410
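One predictor-corrector step in code, checked against the second example (the name `adams_pc` is ours):

```python
def adams_pc(f, xs, ys, h):
    """One Adams predictor step followed by one corrector step,
    given four known points (xs[0..3], ys[0..3])."""
    d = [f(x, y) for x, y in zip(xs, ys)]     # y0', y1', y2', y3'
    x4 = xs[3] + h
    y4p = ys[3] + (h / 24) * (55*d[3] - 59*d[2] + 37*d[1] - 9*d[0])   # predictor
    y4c = ys[3] + (h / 24) * (9*f(x4, y4p) + 19*d[3] - 5*d[2] + d[1]) # corrector
    return y4p, y4c

f = lambda x, y: 0.5 * x * y                  # y' = (1/2)xy
y4p, y4c = adams_pc(f, [0.0, 0.1, 0.2, 0.3], [1.000, 1.010, 1.022, 1.023], 0.1)
print(y4p, y4c)   # about 1.0408 and 1.0410, as in the worked example
```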
24
Exercise: Find y(0.1), y(0.2), y(0.3) given that dy/dx = ½xy + y², y(0.0) = 1, using the Runge-Kutta method
Definition: An equation which expresses a relation between the independent variable, the dependent
variable and the successive differences of the dependent variable is called a difference equation.
Example 1:
𝑖) ∆2 𝑦𝑘 − 2∆𝑦𝑘 + 𝑦𝑘 = 0, 𝑖𝑖) ∆3 𝑦𝑘 − 4∆2 𝑦𝑘 + 7𝑦𝑘 = 𝑥 2 + cos 𝑥 + 7,
𝑖𝑖𝑖) ∆3 𝑦𝑘 − 3∆2 𝑦𝑘 + ∆𝑦𝑘 − 𝑦𝑘 = cos 2𝑥
Note: Combinations of differences are not always convenient; they may even obscure information. As a
result, difference equations are usually written directly in terms of yₖ.
Using the relation above the difference equations in example 1 can be written as
𝑖) ∆2 𝑦𝑘 − 2∆𝑦𝑘 + 𝑦𝑘 = (𝑦𝑘+2 − 2𝑦𝑘+1 + 𝑦𝑘 ) − 2(𝑦𝑘+1 − 𝑦𝑘 ) + 𝑦𝑘 = 0
𝑦𝑘+2 − 4𝑦𝑘+1 + 4𝑦𝑘 = 0
Note: The study of difference equation is analogous to the study of differential equation.
Example: Form a difference equation of the lowest order by eliminating the arbitrary constants
𝑖) 𝑦 = 𝐶2𝑥 + 𝑥3𝑥−1 𝑖𝑖) 𝑦 = 𝐴2𝑥 + 𝐵3𝑥 𝑖𝑖𝑖) 𝑦 = (𝐴 + 𝐵𝑥)2𝑥
Solution:
𝑖) yₓ = C2ˣ + x3ˣ⁻¹ . . . (1)
yₓ₊₁ = C2ˣ⁺¹ + (x + 1)3ˣ . . . (2)
Eqn (1) × 2:
2yₓ = C2ˣ⁺¹ + 2x3ˣ⁻¹ . . . (3)
Eqn (3) − eqn (2):
2yₓ − yₓ₊₁ = C2ˣ⁺¹ + 2x3ˣ⁻¹ − C2ˣ⁺¹ − (x + 1)3ˣ = −(x + 3)3ˣ⁻¹
⇒ 2yₓ − yₓ₊₁ = −(x + 3)3ˣ⁻¹
⇒ 3yₓ₊₁ − 6yₓ = (x + 3)3ˣ
is the required difference equation.
Note: We can observe that eliminating one arbitrary constant yields a difference equation of order one, eliminating
two arbitrary constants yields a difference equation of order two, and in general eliminating 𝑛 arbitrary constants
yields a difference equation of order 𝑛.
→ General solution:
Definition: A general solution of a difference equation of order 𝑛 is a solution which contains 𝑛 arbitrary
constants or 𝑛 arbitrary functions which are periodic of period equal to the interval of differencing
→ Particular solution:
A particular solution of a difference equation is a solution obtained from the general solution by giving
particular values to the arbitrary constants
Note: The solution of the non-homogeneous linear equation (3) depends upon the corresponding homogeneous
linear equation (4)
Note: If 𝑎0 , 𝑎1 , 𝑎2 , 𝑎3 , . . . 𝑎𝑛 are constant in equation (2), then equation (2) is called a linear equation with
constant coefficients.
Case2: If 𝑎1 = 𝑎2 , then 𝑦𝑘 = ∑𝑛𝑖=1 𝑐𝑖 𝑎𝑖𝑘 cannot be the general solution since 𝑐1 + 𝑐2 = 𝑐 will mean that
there are only (𝑛 − 1) arbitrary constants.
To solve (𝐸 − 𝑎1 )2 𝑦𝑘 = 0 we follow the steps below
Let 𝑦𝑘 = 𝑣𝑘 𝑎1𝑘 be a solution
Then (𝐸 − 𝑎1 )2 𝑦𝑘 = 0
⇒ 𝑦𝑘+2 − 2𝑎1 𝑦𝑘+1 + 𝑎12 𝑦𝑘 = 0
⇒ 𝑣𝑘+2 𝑎1𝑘+2 − 2𝑎1 𝑣𝑘+1 𝑎1𝑘+1 + 𝑎12 𝑣𝑘 𝑎1𝑘 = 0
⇒ ( 𝑣𝑘+2 − 2𝑣𝑘+1 + 𝑣𝑘 )𝑎1𝑘+2 = 0
⇒ 𝑣𝑘+2 − 2𝑣𝑘+1 + 𝑣𝑘 = 0 since 𝑎1𝑘+2 ≠ 0
⇒ (E − 1)²vₖ = 0 ⇒ ∆²vₖ = 0
⇒ vₖ = c₁ + c₂k
⇒ yₖ = (c₁ + c₂k)a₁ᵏ is the solution of (E − a₁)²yₖ = 0
⇒ (𝑎 − 8)(𝑎 + 1) = 0 ⇒ 𝑎 = 8, or 𝑎 = −1
Hence 𝑦 = 𝐴8𝑘 + 𝐵(−1)𝑘
⇒ (𝑎 − 1)(𝑎 + 1)(𝑎 − 2) = 0 ⇒ 𝑎 = 1, or 𝑎 = −1 or 𝑎 = 2
Hence 𝑦 = 𝐴1𝑘 + 𝐵(−1)𝑘 + 𝐶2𝑘
½ + i(√3/2) = 1·(cos(π/3) + i sin(π/3))
Hence y = 1ᵏ(A cos(πk/3) + B sin(πk/3)) = A cos(πk/3) + B sin(πk/3)
⇒ f(E)aˣ = f(a)aˣ
⇒ aˣ/f(E) = aˣ/f(a), provided f(a) ≠ 0
If f(a) = 0, write f(E) = (E − a)ω(E), where ω(a) ≠ 0; then
aˣ/f(E) = aˣ/((E − a)ω(E)) = (1/ω(a))·(aˣ/(E − a)) = (1/ω(a))·xaˣ⁻¹
Note:
aˣ/(E − a) = xaˣ⁻¹,
aˣ/(E − a)² = (x(x − 1)/2!)aˣ⁻² = (x⁽²⁾/2!)aˣ⁻²,
aˣ/(E − a)³ = (x(x − 1)(x − 2)/3!)aˣ⁻³ = (x⁽³⁾/3!)aˣ⁻³,
. . .
aˣ/(E − a)ʳ = (x⁽ʳ⁾/r!)aˣ⁻ʳ
Hence PI = (PI)₁ + (PI)₂ = 2ⁿ/3 + (n(n − 1)/8)3ⁿ⁻²
Hence y = CF + PI = A(−1)ⁿ + (B + Cn)3ⁿ + 2ⁿ/3 + (n(n − 1)/8)3ⁿ⁻²
(PI)₂ = 3ⁿ/(E² + 10E + 25) = 3ⁿ/(E + 5)² = 3ⁿ/64, replacing E by 3
(PI)₃ = π·1ⁿ/(E² + 10E + 25) = π·1ⁿ/(E + 5)² = π/(1 + 5)² = π/36
Hence y = CF + PI = (A + Bn)(−5)ⁿ + n(n − 1)2ⁿ⁻³ + 3ⁿ/64 + π/36
= −⅓[1 − ((∆² + 2∆)/(−3)) + ((∆² + 2∆)/(−3))² − ((∆² + 2∆)/(−3))³ + ((∆² + 2∆)/(−3))⁴ − . . .]x²
= −⅓[1 + 2∆/3 + 7∆²/9 + . . .]x²
= −⅓[1 + 2∆/3 + 7∆²/9]x², since higher differences of x² vanish
⇒ PI = yₓ = −⅓(x² + (4/3)x + 20/9)
Hence the complete solution is
y = CF + PI = A2ˣ + B(−2)ˣ − ⅓(x² + (4/3)x + 20/9)
⇒ PI = yₓ = ½x² + 2x + 15/4 = ½[x² + 4x + 15/2]
Hence the complete solution is
y = CF + PI = A2ˣ + B3ˣ + ½[x² + 4x + 15/2]
or
⇒ PI = ½x³ + (9/4)x² + (15/2)x − 97/16 = ½[x³ + (9/2)x² + 15x − 97/8]
Hence the complete solution is
y = CF + PI = A2ˣ + B3ˣ + ½[x³ + (9/2)x² + 15x − 97/8]
therefore
PI = (1/f(E))[aˣF(x)] = aˣ·(1/f(aE))F(x)
= −(1/9)2ˣ⁻¹[1 − ((2∆² − 3∆)/(−9)) + ((2∆² − 3∆)/(−9))²](x² − x)
= −(1/9)2ˣ⁻¹[1 − ∆/3 + ∆²/3](x² − x)
= −(1/9)2ˣ⁻¹[(x² − x) − ⅓(2x) + ⅓(2)] = −(1/9)2ˣ⁻¹[(x² − x) − ⅔x + ⅔] = −(1/9)2ˣ⁻¹[x² − (5/3)x + ⅔]
⇒ yₓ = CF + PI = A8ˣ + B(−1)ˣ − (1/9)2ˣ⁻¹[x² − (5/3)x + ⅔]
Hence CF = (A + Bx)1ˣ = A + Bx
PI = x²2ˣ/(E² − 2E + 1) = 2ˣ·x²/(2E − 1)² = 2ˣ·x²/(4E² − 4E + 1) = 2ˣ·x²/(4(1 + ∆)² − 4(1 + ∆) + 1) = 2ˣ·x²/(4∆² + 4∆ + 1) = 2ˣ[1 + (4∆² + 4∆)]⁻¹x²
= 2ˣ[1 − (4∆² + 4∆) + (4∆² + 4∆)² − (4∆² + 4∆)³ + (4∆² + 4∆)⁴ − . . .](x²)

= ½·4ˣ[1 − ((16∆² + 12∆)/2) + ((16∆² + 12∆)/2)²](x² − x + 5)
= ½·4ˣ[1 − 6∆ + 28∆²](x² − x + 5)
= ½·4ˣ[(x² − x + 5) − 6(2x) + 28(2)] = ½·4ˣ[x² − 13x + 61]
⇒ yₓ = CF + PI = A2ˣ + B3ˣ + ½·4ˣ[x² − 13x + 61]
= −(1/25)2ˣ[(x² − 3) + (1/5)∆(x² − 3) + (3/25)∆²(x² − 3)]
= −(1/25)2ˣ[(x² − 3) + (1/5)(2x + 1) + (3/25)(2)] = −(1/25)2ˣ[x² − 3 + (2/5)x + 1/5 + 6/25]
= −(1/25)2ˣ[x² + (2/5)x − 64/25]
⇒ yₓ = CF + PI = A(−7)ˣ + B8ˣ − (1/25)2ˣ[x² + (2/5)x − 64/25]
Type 4: If φ(x) = cos kx or φ(x) = sin kx, where k is any constant, we use cos kx = Re(e^{ikx})
and sin kx = Im(e^{ikx}), or we use cos kx = (e^{ikx} + e^{−ikx})/2 and sin kx = (e^{ikx} − e^{−ikx})/(2i)
Next, we find PI
PI = sin(½x)/(E² + 1) = Im(e^{½xi})/(E² + 1) = Im(e^{½xi})/(e^{2i} + 1) = Im(e^{½xi})(e^{−2i} + 1)/((e^{2i} + 1)(e^{−2i} + 1)) = Im(e^{i(x−4)/2} + e^{½xi})/(2 + e^{2i} + e^{−2i})
= [sin((x − 4)/2) + sin(½x)]/(2 + cos 2 + cos(−2)) = [sin((x − 4)/2) + sin(½x)]/(2 + 2 cos 2)
Hence yₓ = CF + PI = A cos(π/2)x + B sin(π/2)x + [sin((x − 4)/2) + sin(½x)]/(2(1 + cos 2))