
Approximations and Errors

Approximation of numbers by truncation and rounding-off; types of errors

Numerical methods are techniques by which mathematical problems are formulated so that they can be
solved with arithmetic operations. Although there are many kinds of numerical methods, they have one
common characteristic: they invariably involve large numbers of tedious arithmetic calculations. It is
little wonder that with the development of fast, efficient digital computers, the role of numerical
methods in engineering problem solving has increased dramatically in recent years.

A computer has a finite word length and so only a fixed number of digits are stored and used during
computation. This would mean that even in storing an exact decimal number in its converted form in
the computer memory, an error is introduced.

The quantity (True value − Approximate value) is called the error.

(A) Reasons for errors


The error in numerical computation can enter in two different ways:
(1) Inherent error: The inherent error is that quantity which is already present in the statement of
the problem before its solution.
The inherent error arises either due to the simplified assumptions in the mathematical formulation of
the problem or due to the physical measurements of the parameters of the problem.

(2) Round-off error: Error that arises due to the finite-precision representation of an infinite decimal during arithmetic computation is known as round-off error.
Round-off error occurs in two ways:
(i) Chopping (truncation)
(ii) Rounding

In chopping, after n significant digits we simply discard the (n + 1)-th and later digits.
In rounding, to round off a number to n significant digits, we discard all the digits to the right of the n-th digit if the (n + 1)-th digit is less than 5. If the (n + 1)-th digit is more than 5, we increase the n-th digit by 1. If the (n + 1)-th digit is equal to 5, we increase the n-th digit by 1 if it is odd and leave it unchanged if it is even.
Example. Find the round-off error in rounding and chopping correct up to 4 significant figures for the numbers (a) 1.7320508 (b) 0.014159.
Solution. (a) Let xT = 1.7320508.
The approximate value after chopping is xA = 1.732.
Therefore, the round-off error in chopping is xT − xA = 1.7320508 − 1.732 = 0.0000508.
Again, the approximate value after rounding is xA = 1.732.
Therefore, the round-off error in rounding is xT − xA = 1.7320508 − 1.732 = 0.0000508.

(b) Let xT = 0.014159.
The approximate value after chopping is xA = 0.01415.
Therefore, the round-off error in chopping is xT − xA = 0.014159 − 0.01415 = 0.000009.
Again, the approximate value after rounding is xA = 0.01416.
Therefore, the round-off error in rounding is xT − xA = 0.014159 − 0.01416 = −0.000001.
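These two rules are easy to mechanize. Below is a minimal Python sketch (the helper names chop_sig and round_sig are ours, not from the text) reproducing both parts of the example; Python's round applies round-half-to-even, which matches the odd/even rule above.

import math

def chop_sig(x, n):
    # Keep the first n significant digits and discard the rest (chopping).
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))      # exponent of the leading digit
    factor = 10 ** (n - 1 - e)
    return math.trunc(x * factor) / factor

def round_sig(x, n):
    # Round to n significant digits (round-half-to-even).
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - e)

for xt in (1.7320508, 0.014159):
    xa_c, xa_r = chop_sig(xt, 4), round_sig(xt, 4)
    print(xt, "chop:", xa_c, "error:", xt - xa_c,
              "round:", xa_r, "error:", xt - xa_r)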

(B) Measurement of errors


Whenever we approximate a number, we mainly measure errors in three different ways.

(1) Absolute error: It is the absolute value of the difference between the true value and the approximate value. Denoting the true value by xT and the approximate value by xA, the absolute error is defined as EA = |xT − xA|.
(2) Relative error: The relative error is the absolute error divided by the absolute true value, i.e., ER = |xT − xA| / |xT|.
(3) Percentage error: The percentage error is the relative error multiplied by 100, i.e., EP = (|xT − xA| / |xT|) × 100.

Example. Find the absolute error, relative error and percentage error in approximating the number 10.42857 correct up to 3 decimal places.

Solution. Let xT = 10.42857. Rounded to 3 decimal places, xA = 10.429.

Absolute error: EA = |xT − xA| = |−0.00043| = 0.00043.
Relative error: ER = |xT − xA| / |xT| = 0.00043 / 10.42857 = 0.0000412.
Percentage error: EP = ER × 100 = 0.00412%.
Numerical Solutions of Nonlinear Equations
A number ξ is a solution of f(x) = 0 if f(ξ) = 0. Such a solution ξ is called a root or a zero of f(x) = 0.
Geometrically, a root of the equation f(x) = 0 is the value of x at which the graph of y = f(x) intersects the x-axis.

1. Bisection Method
This method consists in locating a root of the equation f(x) = 0 between a and b. If f(x) is continuous on [a, b], and f(a) and f(b) are of opposite signs, then there is a root between a and b. The first approximation to the root is x1 = (a + b)/2. If f(x1) = 0, then x1 is a root of f(x) = 0. Otherwise, the root lies between a and x1 or between x1 and b according as f(a)f(x1) is negative or positive. We then bisect that interval as before and continue the process until the root is found to the desired accuracy.

Example 1. Find a root of the equation x³ − 4x − 9 = 0, using the bisection method, correct to two decimal places.

Let f(x) = x³ − 4x − 9.
Then f(2) = −9 < 0 and f(3) = 6 > 0. Therefore, a root lies between 2 and 3.
∴ the first approximation to the root is x1 = (2 + 3)/2 = 2.5.
Now f(x1) = f(2.5) = −3.375 < 0, so the root lies between x1 and 3. Thus the second approximation of the root is x2 = (x1 + 3)/2 = 2.75.
Then f(x2) = f(2.75) = 0.7969 > 0. ∴ the root lies between x1 and x2. Thus the third approximation of the root is x3 = (x1 + x2)/2 = 2.625.
Then f(x3) = f(2.625) = −1.4121 < 0. ∴ the root lies between x2 and x3. Thus the fourth approximation to the root is x4 = (x2 + x3)/2 = 2.6875.
Repeating this process, the successive approximations are x5 = 2.71875, x6 = 2.70313, x7 = 2.71094, x8 = 2.70703, x9 = 2.70508, x10 = 2.70605, x11 = 2.70654.
Hence the root is 2.70, correct to two decimal places.
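The bisection loop itself is only a few lines. A minimal Python sketch (function name ours, not from the text), applied to Example 1:

def bisection(f, a, b, tol=1e-4, max_iter=100):
    # f must be continuous on [a, b] with f(a)*f(b) < 0.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = (a + b) / 2.0
        if f(x) == 0 or (b - a) / 2.0 < tol:
            return x
        if f(a) * f(x) < 0:
            b = x          # root lies in [a, x]
        else:
            a = x          # root lies in [x, b]
    return (a + b) / 2.0

root = bisection(lambda x: x**3 - 4*x - 9, 2, 3)
print(round(root, 4))      # ≈ 2.7065, i.e. 2.70 to the accuracy of the worked example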

2. Fixed Point Iteration Method

This iteration method is based on the principle of constructing a sequence {xn}, each element of which successively approximates a real root α of the equation f(x) = 0 in [a, b].
We re-write f(x) = 0 as
(1) x = φ(x).
Thus, a root α of the given equation satisfies α = φ(α). Therefore, the point α remains fixed under the mapping φ, so a root of the equation is a fixed point of φ.
The function φ(x) is called the iteration function. We also assume that φ(x) is continuously differentiable in [a, b].
We first find a location, or crude approximation, [a0, b0] of a real root α (say) of f(x) = 0 and let x = x0 (a0 ≤ x0 ≤ b0) be the initial approximation of α. Thus α satisfies the equation
(2) α = φ(α).
Putting x = x0 in (1), we get the first approximation x1 of α as
x1 = φ(x0),
and then the successive approximations are calculated as
(3) x2 = φ(x1), x3 = φ(x2), ..., xn = φ(x_{n−1}), x_{n+1} = φ(xn).
The above iteration is generated by the formula x_{n+1} = φ(xn), called the iteration formula, where xn is the n-th approximation of the root α of f(x) = 0.

The sequence {xn} of iterations, or successive better approximations, may or may not converge to a limit. If {xn} converges, then it converges to α, and the number of iterations required depends upon the desired degree of accuracy of the root α.

Remark 1. (Convergence) The method of fixed point iteration is conditionally convergent. The condition of convergence is |φ′(x)| < 1. Before starting the computation, one should confirm that φ(x) is such that −1 < φ′(x) < 1.

Example 2. Find the root of the equation 3x − cos x − 1 = 0, by the iteration method, correct to four significant figures.

Let f(x) = 3x − cos x − 1.
∴ f(0) = −2 < 0 and f(1) = 3 − cos 1 − 1 ≈ 1.46 > 0. Thus, one root of f(x) = 0 lies between 0 and 1.
We re-write the equation as

x = (cos x + 1)/3 = φ(x),  so  φ′(x) = −(sin x)/3.

∴ |φ′(x)| ≤ 1/3 < 1, since |sin x| ≤ 1.
Here we take x0 = 0 and the iteration equation x_{n+1} = (cos xn + 1)/3.

n   xn        φ(xn)
0   0         0.66667
1   0.66667   0.59530
2   0.59530   0.60933
3   0.60933   0.60668
4   0.60668   0.60718
5   0.60718   0.60709
6   0.60709   0.60710
7   0.60710   0.60710

Thus, 0.6071 is a root of the equation, correct up to four significant figures.
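A minimal Python sketch of the iteration x_{n+1} = φ(xn) (function name ours), applied to Example 2:

import math

def fixed_point(phi, x0, tol=0.5e-4, max_iter=100):
    # Iterate x_{n+1} = phi(x_n) until successive values agree within tol.
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# phi(x) = (cos x + 1)/3 with |phi'(x)| <= 1/3 < 1, so the iteration converges.
root = fixed_point(lambda x: (math.cos(x) + 1) / 3, 0.0)
print(round(root, 4))   # 0.6071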

3. Newton-Raphson Method
This is also an iterative method, used to find isolated roots of an equation f(x) = 0. The object of this method is to correct an approximate root x0 (say) successively towards its exact value α. Initially, a crude approximation, a small interval [a0, b0], is found in which only one root α (say) of f(x) = 0 lies.
Let x = x0 (a0 ≤ x0 ≤ b0) be an approximation of the root α of the equation f(x) = 0. Let h be a small correction on x0, so that x1 = x0 + h is the correct root.
Therefore, f(x1) = 0 ⟹ f(x0 + h) = 0.
By Taylor series expansion, we get

f(x0) + h f′(x0) + (h²/2!) f″(x0) + ⋯ = 0.

As h is small, neglecting the second and higher powers of h, we get h = −f(x0)/f′(x0). Therefore,

(4) x1 = x0 − f(x0)/f′(x0).

Further, if h1 is the correction on x1, then x2 = x1 + h1 is the correct root.
Therefore, f(x2) = 0 ⟹ f(x1 + h1) = 0.
By Taylor series expansion, we get

f(x1) + h1 f′(x1) + (h1²/2!) f″(x1) + ⋯ = 0.

Neglecting the second and higher powers of h1, we get h1 = −f(x1)/f′(x1). Therefore,

(5) x2 = x1 − f(x1)/f′(x1).

Proceeding in this way, we get the (n + 1)-th corrected root as

(6) x_{n+1} = xn − f(xn)/f′(xn).

The formula (6) generates a sequence of successive corrections to an approximate root, converging to the root α of f(x) = 0, provided the sequence is convergent. The formula (6) is known as the iteration formula for the Newton-Raphson method. The number of iterations required depends upon the desired degree of accuracy of the root.
3.1. Convergence of Newton-Raphson Method. Comparing with the fixed point iteration method, we may take the iteration function as

φ(x) = x − f(x)/f′(x).

Thus, the above sequence will be convergent if and only if

|φ′(x)| = |1 − ({f′(x)}² − f(x)f″(x))/{f′(x)}²| = |f(x)f″(x)/{f′(x)}²| < 1.

Therefore, |{f′(x)}²| > |f(x)f″(x)|.
Example 3. Find the positive root of x⁴ − x − 10 = 0 correct to four decimal places, using the Newton-Raphson method.
Let f(x) = x⁴ − x − 10. Then f′(x) = 4x³ − 1.
Also f(1) = −10 and f(2) = 4. ∴ f(x) = 0 has a root between 1 and 2.
Let us take the initial approximation x0 = 2. The iteration formula is

x_{n+1} = xn − f(xn)/f′(xn) = xn − (xn⁴ − xn − 10)/(4xn³ − 1),  i.e.,  x_{n+1} = (3xn⁴ + 10)/(4xn³ − 1).

n   xn        x_{n+1} = (3xn⁴ + 10)/(4xn³ − 1)
0   2         1.87097
1   1.87097   1.85578
2   1.85578   1.85558
3   1.85558   1.85558

Thus 1.8556 is a root of the given equation correct to four decimal places.
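Formula (6) translates directly into code. A minimal Python sketch (function name ours), applied to Example 3:

def newton_raphson(f, df, x0, tol=0.5e-4, max_iter=50):
    # Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n).
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = newton_raphson(lambda x: x**4 - x - 10,
                      lambda x: 4*x**3 - 1, 2.0)
print(round(root, 4))   # 1.8556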
3.2. Rate of convergence. An iterative method is said to be of order p, or to have rate of convergence p, if p is the largest positive real number for which there exists a finite constant C ≠ 0 such that |ε_{n+1}| ≤ C|εn|^p, where εn is the error in the n-th iterate.
If x_{n+1} is the (n + 1)-th approximation of a root α of f(x) = 0 and ε_{n+1} is the corresponding error, we have

(7) α − x_{n+1} = ε_{n+1}

and x_{n+1} = xn + hn
⟹ α − x_{n+1} = α − xn − hn ⟹ ε_{n+1} = εn − hn

(8) εn = ε_{n+1} + hn

where

(9) hn = −f(xn)/f′(xn),  or  f(xn) + hn f′(xn) = 0.

Again we have α = xn + εn, so

(10) f(α) = 0 = f(xn + εn) = f(xn) + εn f′(xn) + (εn²/2!) f″(ξn);  (xn < ξn < α).

Subtracting (9) from (10), we get

(εn − hn) f′(xn) + (εn²/2!) f″(ξn) = 0,  or  ε_{n+1} f′(xn) + (εn²/2!) f″(ξn) = 0

⟹ ε_{n+1}/εn² = −(1/2) f″(ξn)/f′(xn).

If the iteration converges, i.e., xn, ξn → α as n → ∞,

lim_{n→∞} ε_{n+1}/εn² = lim_{n→∞} (α − x_{n+1})/(α − xn)² = −(1/2) f″(α)/f′(α).

Thus, it is clear that the Newton-Raphson iteration method is a second order iterative process.

4. Regula-Falsi Method
This is a method of finding a real root of the equation f(x) = 0, and it closely resembles the bisection method. Here we choose two points x0 and x1 such that f(x0) and f(x1) are of opposite signs. Then f(x) = 0 has a root between x0 and x1.
The equation of the chord joining A(x0, f(x0)) and B(x1, f(x1)) is

y − f(x0) = [(f(x1) − f(x0))/(x1 − x0)] (x − x0).

The method consists in replacing the part of the curve between A and B by the chord AB and taking the point of intersection of the chord with the x-axis as an approximation to the root. So, the abscissa of the point where the chord cuts the x-axis (y = 0) is given by

(11) x2 = x0 − f(x0) (x1 − x0)/(f(x1) − f(x0)),

which is an approximation to the root.
If now f(x0) and f(x2) are of opposite signs, then the root lies between x0 and x2. So, replacing x1 by x2 in (11), we obtain the next approximation x3. (The root could as well lie between x1 and x2, and we would obtain x3 accordingly.) This procedure is repeated till the root is found to the desired accuracy.
Example 4. Find the positive root of x⁴ − 32 = 0 correct to two decimal places, using the Regula-Falsi method.
Let f(x) = x⁴ − 32. Then f(2) = −16 and f(3) = 49. ∴ f(x) = 0 has a root between 2 and 3.
∴ taking x0 = 2, x1 = 3, f(x0) = −16, f(x1) = 49 in the Regula-Falsi formula, we get

x2 = x0 − f(x0) (x1 − x0)/(f(x1) − f(x0)) = 2 + 16/65 = 2.2462.

Now f(x2) = f(2.2462) = −6.5438. So the root lies between 2.2462 and 3.
Taking x0 = 2.2462, x1 = 3, f(x0) = −6.5438, f(x1) = 49 in the Regula-Falsi formula, we get

x3 = x0 − f(x0) (x1 − x0)/(f(x1) − f(x0)) = 2.2462 − (−6.5438)(3 − 2.2462)/(49 + 6.5438) = 2.335.

Repeating this process, the successive approximations are x4 = 2.3645, x5 = 2.3770, x6 = 2.3779.
∴ 2.38 is a root of the given equation correct to two decimal places.
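Formula (11), together with the sign test that keeps the root bracketed, gives the following minimal Python sketch (function name ours), applied to Example 4:

def regula_falsi(f, x0, x1, tol=0.5e-2, max_iter=100):
    # Replace the curve by the chord and keep the sign change bracketed.
    f0, f1 = f(x0), f(x1)
    x2 = x0
    for _ in range(max_iter):
        x_old = x2
        x2 = x0 - f0 * (x1 - x0) / (f1 - f0)    # formula (11)
        f2 = f(x2)
        if abs(x2 - x_old) < tol:
            break
        if f0 * f2 < 0:
            x1, f1 = x2, f2     # root in [x0, x2]
        else:
            x0, f0 = x2, f2     # root in [x2, x1]
    return x2

print(round(regula_falsi(lambda x: x**4 - 32, 2, 3), 2))   # ≈ 2.38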

5. Secant Method
The iteration formula for the Newton-Raphson method is

x_{n+1} = xn − f(xn)/f′(xn),

where x0 is the initial approximation of the root.
In the Secant method the derivative f′(xn) is replaced by the difference quotient

f′(xn) ≈ (f(xn) − f(x_{n−1}))/(xn − x_{n−1}).

∴ the iteration formula for the Secant method is

x_{n+1} = xn − f(xn) (xn − x_{n−1})/(f(xn) − f(x_{n−1})),

where x0 and x1 are initial approximations of the root.
Example 5. Find a root of the equation x³ − 5x + 1 = 0 correct to three decimal places.
Let f(x) = x³ − 5x + 1. Then f(0) = 1 and f(1) = −3.
∴ f(x) = 0 has a root between 0 and 1. Let us take x0 = 0 and x1 = 1. Then f(x0) = 1 and f(x1) = −3. By the Secant method,

x2 = x1 − f(x1) (x1 − x0)/(f(x1) − f(x0)) = 1 − (−3)(1 − 0)/(−3 − 1) = 0.25;  f(x2) = −0.234375.

Similarly,

x3 = x2 − f(x2) (x2 − x1)/(f(x2) − f(x1)) = 0.186441;  f(x3) = 0.074276.

Repeating this process, the successive approximations are x4 = 0.201736, x5 = 0.201640.
∴ 0.202 is a root of the given equation correct to three decimal places.
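The same loop with the bracket test dropped and the two most recent iterates retained gives the Secant method. A minimal Python sketch (function name ours), applied to Example 5:

def secant(f, x0, x1, tol=0.5e-3, max_iter=50):
    # Newton's formula with f' replaced by a difference quotient.
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

print(round(secant(lambda x: x**3 - 5*x + 1, 0.0, 1.0), 3))   # 0.202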
5.1. Rate of convergence. If x_{n+1} is the (n + 1)-th approximation of a root α of f(x) = 0 and ε_{n+1} is the corresponding error, we have

ε_{n+1} = α − x_{n+1} = α − xn + f(xn)(xn − x_{n−1})/(f(xn) − f(x_{n−1})).

Writing xn = α − εn and x_{n−1} = α − ε_{n−1}, and expanding f(α − εn) and f(α − ε_{n−1}) in Taylor series about α (where f(α) = 0), the leading terms give

ε_{n+1} = (1/2)(f″(α)/f′(α)) εn ε_{n−1} + O(εn² ε_{n−1} + εn ε_{n−1}²).

Thus we have

(12) ε_{n+1} = C εn ε_{n−1},

where C = (1/2) f″(α)/f′(α) and higher powers of εn are neglected.
The relation (12) is known as the error equation. Keeping in view the definition of the order of convergence, we seek a relation of the form

(13) ε_{n+1} = A εn^p,

where A and p are to be determined.
From (13), we have εn = A ε_{n−1}^p, or ε_{n−1} = A^{−1/p} εn^{1/p}.
Substituting for ε_{n+1} and ε_{n−1} in (12), we have

(14) A εn^p = C A^{−1/p} εn^{1 + 1/p}.

Comparing the powers of εn on both sides, we get

p = 1 + 1/p ⟹ p = (1 ± √5)/2.

Neglecting the minus sign, we find the rate of convergence of the Secant method is p = (1 + √5)/2 ≈ 1.618.
Also we have A = C^{p/(p+1)}. Hence the Secant method has a superlinear rate of convergence of about 1.62.
Interpolation and Approximation
This chapter is dedicated to finding an approximation to a given function by a class of simpler functions, mainly polynomials. The main uses of polynomial interpolation are

 Reconstructing the function when it is not given explicitly and only the values of f(x) and/or its certain order derivatives at a set of points, known as nodes/tabular points/arguments, are given.
 Replacing the function f(x) by an interpolating polynomial P(x) so that many common operations such as determination of roots, differentiation, integration, etc., which are intended for the function f(x), may be performed using P(x).

Definition: A polynomial 𝑃(𝑥) is called interpolating polynomial if the values of 𝑃(𝑥) and/or its
certain order derivatives coincide with those of 𝑓(𝑥) and/or its same order derivatives at one or
more tabular points.

The reason for choosing polynomials ahead of any other functions is that polynomials approximate continuous functions with any desired accuracy. That is, for any continuous function f(x) on an interval a ≤ x ≤ b and error bound β > 0 there is a polynomial p_n(x) (of sufficiently large degree) such that |f(x) − p_n(x)| < β for all x ∈ [a, b]. This is the famous Weierstrass approximation theorem.

In this section we will be focusing on the following methods of interpolation

1. Lagrange’s interpolation
2. Newton’s divided difference interpolation
3. Newton’s forward and backward difference interpolation

1. Lagrange’s Interpolation
Assume that f(x) is continuous on [a, b] and further assume that we have n + 1 distinct points a ≤ x0 < x1 < x2 < ⋯ < x_{n−1} < xn ≤ b. Let the values of the function f(x) at these points be known, denoted by f0 = f(x0), f1 = f(x1), ..., fn = f(xn). We aim to find a polynomial Pn(x) = a0 + a1 x + a2 x² + ⋯ + an xⁿ satisfying Pn(xi) = f(xi), i = 0, 1, ..., n.

Linear Interpolation
Let n = 1. Then the nodes are x0 and x1, and we seek a polynomial

P1(x) = a0 + a1 x,

where the coefficients a0 and a1 can be evaluated by solving the equations

f0 = a0 + a1 x0,
f1 = a0 + a1 x1.

Equivalently, the Lagrange linear interpolating polynomial is given by

P1(x) = l0(x) f0 + l1(x) f1,

where l0(x) = (x − x1)/(x0 − x1) and l1(x) = (x − x0)/(x1 − x0) are called Lagrange's fundamental polynomials. The properties of the Lagrange fundamental polynomials are as follows:

(i) l0(x) + l1(x) = 1.
(ii) l0(x0) = 1, l0(x1) = 0; l1(x0) = 0, l1(x1) = 1.
(iii) The degrees of l0(x) and l1(x) are one.

The error in linear interpolation is given by

E1(x, f) = (1/2)(x − x0)(x − x1) f″(ξ),  x0 ≤ ξ ≤ x1.

The bound for the truncation error in linear interpolation is given by

|E1(x, f)| ≤ ((x1 − x0)²/8) max_{x0 ≤ x ≤ x1} |f″(x)|.

Example 1: Given that f(2) = 4, f(2.5) = 5.5, find the linear interpolating polynomial using Lagrange's interpolation and hence find an approximate value of f(2.2).

Answer: Given that x0 = 2, x1 = 2.5, f0 = 4, f1 = 5.5. The Lagrange fundamental polynomials are

l0(x) = (x − x1)/(x0 − x1) = (x − 2.5)/(2 − 2.5) = 2(2.5 − x) = 5 − 2x,
l1(x) = (x − x0)/(x1 − x0) = (x − 2)/(2.5 − 2) = 2(x − 2) = 2x − 4.

The linear Lagrange interpolating polynomial is given by

P1(x) = l0(x) f0 + l1(x) f1 = 4(5 − 2x) + 5.5(2x − 4) = 3x − 2.

An approximate value of f(2.2) ≈ P1(2.2) = 3 × 2.2 − 2 = 4.6.

Quadratic Interpolation
Here n = 2. We need to find an interpolating polynomial of the form

P2(x) = a0 + a1 x + a2 x²,

where a0, a1 and a2 are arbitrary constants, satisfying the conditions P2(x0) = f0, P2(x1) = f1 and P2(x2) = f2. That is, we need to solve the following system of equations:

a0 + a1 x0 + a2 x0² = f0
a0 + a1 x1 + a2 x1² = f1
a0 + a1 x2 + a2 x2² = f2.

Equivalently, the Lagrange quadratic interpolating polynomial is given by

P2(x) = l0(x) f0 + l1(x) f1 + l2(x) f2,

where l0(x), l1(x) and l2(x) are Lagrange's fundamental polynomials, defined by

l0(x) = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)],  l1(x) = (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)]  and  l2(x) = (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)].

The truncation error in Lagrange's quadratic interpolation is given by

E2(x, f) = (1/3!)(x − x0)(x − x1)(x − x2) f‴(ξ),  x0 ≤ ξ ≤ x2.

The bound for the error in quadratic Lagrange interpolation is

|E2(x, f)| ≤ (M3/6) max_{x0 ≤ x ≤ x2} |(x − x0)(x − x1)(x − x2)|,  M3 = max_{x0 ≤ x ≤ x2} |f‴(x)|.


General Formula
The general Lagrange interpolating polynomial for n + 1 nodes x0, x1, ..., xn is given by

Pn(x) = Σ_{k=0}^{n} lk(x) fk,

where the n-th degree Lagrange fundamental polynomial is given by

lk(x) = [(x − x0)(x − x1) ⋯ (x − x_{k−1})(x − x_{k+1}) ⋯ (x − xn)] / [(xk − x0)(xk − x1) ⋯ (xk − x_{k−1})(xk − x_{k+1}) ⋯ (xk − xn)],  for k = 0, 1, 2, ..., n.

The truncation error in Lagrange interpolation is

En(x, f) = (w(x)/(n + 1)!) f^{(n+1)}(ξ),  x0 ≤ ξ ≤ xn,

where w(x) = (x − x0)(x − x1) ⋯ (x − xn).

Example 2. Given that f(0) = 1, f(1) = 3 and f(3) = 55, find the unique polynomial of degree 2 or less which fits the given data. Find the truncation error.

Answer: By hypothesis, x0 = 0, x1 = 1, x2 = 3; f0 = 1, f1 = 3, f2 = 55. The Lagrange fundamental polynomials are

l0(x) = (x − 1)(x − 3)/[(0 − 1)(0 − 3)] = (1/3)(x − 1)(x − 3),
l1(x) = (x − 0)(x − 3)/[(1 − 0)(1 − 3)] = −(1/2) x(x − 3),
l2(x) = (x − 0)(x − 1)/[(3 − 0)(3 − 1)] = (1/6) x(x − 1).

The Lagrange quadratic interpolating polynomial is

P2(x) = l0(x) f0 + l1(x) f1 + l2(x) f2 = (1/3)(x − 1)(x − 3) − (3/2) x(x − 3) + (55/6) x(x − 1)
      = 8x² − 6x + 1.

The truncation error is

E2(x) = (1/6) x(x − 1)(x − 3) f‴(ξ),  0 ≤ ξ ≤ 3.

Newton's Divided Difference Interpolation

Let f(x) be a function defined on the interval [a, b]. Let a = x0 < x1 < x2 < ⋯ < xn = b be a partition of [a, b]. The divided difference of f(x) at x0, written f[x0], is the value of the function at x0. That is,

f[x0] = f(x0) = f0.

The first order divided difference at the nodes x0 and x1 is defined as

f[x0, x1] = (f1 − f0)/(x1 − x0).

The second order divided difference at the nodes x0, x1, x2 is defined by

f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0)
             = f0/[(x0 − x1)(x0 − x2)] + f1/[(x1 − x0)(x1 − x2)] + f2/[(x2 − x0)(x2 − x1)].

The n-th order divided difference is defined by

f[x0, x1, ..., xn] = Σ_{i=0}^{n} fi / ∏_{j=0, j≠i}^{n} (xi − xj).

Divided difference table:

Nodes   Functional values   1st order div. diff.   2nd order div. diff.
x0      f[x0]
x1      f[x1]               f[x0, x1]
x2      f[x2]               f[x1, x2]               f[x0, x1, x2]

The Newton divided difference interpolating polynomial Pn(x), interpolating at the n + 1 points x0, x1, ..., xn, is given by

Pn(x) = f[x0] + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2] + ⋯ + (x − x0) ⋯ (x − x_{n−1}) f[x0, x1, ..., xn].
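A minimal Python sketch (function names ours) that builds the divided difference coefficients column by column and evaluates the Newton form by nested multiplication, checked against Example 2 of the Lagrange section (the two interpolants coincide):

def divided_differences(xs, fs):
    # Return the coefficients f[x0], f[x0,x1], ..., f[x0,...,xn].
    n = len(xs)
    table = list(fs)                       # column of functional values
    coef = [table[0]]
    for order in range(1, n):
        table = [(table[i + 1] - table[i]) / (xs[i + order] - xs[i])
                 for i in range(n - order)]
        coef.append(table[0])
    return coef

def newton_eval(xs, coef, x):
    # Evaluate the Newton form P_n(x) using nested multiplication.
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, fs = [0, 1, 3], [1, 3, 55]
coef = divided_differences(xs, fs)         # [1, 2, 8] -> 1 + 2x + 8x(x-1)
print(newton_eval(xs, coef, 2.0))          # 21.0, same as the Lagrange form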


Finite Difference Operators
Let the nodes x0, x1, ..., xn be equally spaced, that is, xi = x0 + ih, i = 0, 1, ..., n. We now define the following operators:

The shift operator: E f(xi) = f(xi + h).

The forward difference operator: Δf(xi) = f(xi + h) − f(xi).

The backward difference operator: ∇f(xi) = f(xi) − f(xi − h).

The central difference operator: δf(xi) = f(xi + h/2) − f(xi − h/2).

The averaging operator: μf(xi) = (1/2)[f(xi + h/2) + f(xi − h/2)].

Repeated application of the difference operators leads to the higher order differences Δ²f, Δ³f, ∇²f, and so on; the forward and backward difference tables are built by repeatedly differencing adjacent entries.
Interpolating polynomials using the forward difference operator
The Gregory-Newton forward difference interpolating polynomial is given by

Pn(x) = f0 + ((x − x0)/h) Δf0 + ((x − x0)(x − x1)/(2! h²)) Δ²f0 + ⋯ + ((x − x0) ⋯ (x − x_{n−1})/(n! hⁿ)) Δⁿf0.

Putting u = (x − x0)/h, the interpolating polynomial using the forward difference operator becomes

Pn(x) = Σ_{i=0}^{n} (u choose i) Δⁱf0,  where (u choose i) = u(u − 1) ⋯ (u − i + 1)/i!,

with the error

En(x, f) = (u(u − 1) ⋯ (u − n)/(n + 1)!) h^{n+1} f^{(n+1)}(ξ),  x0 ≤ ξ ≤ xn.

Interpolating polynomials using the backward difference operator

Let u = (x − xn)/h. The Gregory-Newton backward difference interpolating polynomial is given by

Pn(x) = fn + u ∇fn + (u(u + 1)/2!) ∇²fn + ⋯ + (u(u + 1) ⋯ (u + n − 1)/n!) ∇ⁿfn.

The truncation error becomes

En(x, f) = (u(u + 1) ⋯ (u + n)/(n + 1)!) h^{n+1} f^{(n+1)}(ξ),  x0 ≤ ξ ≤ xn.
Numerical Differentiation

Let y = f(x) be tabulated at the equally spaced points xi (= x0 + ih), i = 0, 1, 2, ..., n. The derivatives dy/dx and d²y/dx² are obtained by differentiating the Gregory-Newton interpolating polynomials, as follows.
With p = (x − x0)/h, i.e., x = x0 + ph, the forward difference polynomial is

y(x) = y(x0 + ph) = y0 + p Δy0 + (p(p − 1)/2!) Δ²y0 + (p(p − 1)(p − 2)/3!) Δ³y0 + ⋯
     = y0 + p Δy0 + ((p² − p)/2!) Δ²y0 + ((p³ − 3p² + 2p)/3!) Δ³y0 + ⋯.

Differentiating with respect to p,

dy/dp = Δy0 + ((2p − 1)/2!) Δ²y0 + ((3p² − 6p + 2)/3!) Δ³y0 + ⋯,

and since dp/dx = 1/h,

dy/dx = (dy/dp)(dp/dx) = (1/h)[Δy0 + ((2p − 1)/2!) Δ²y0 + ((3p² − 6p + 2)/3!) Δ³y0 + ((4p³ − 18p² + 22p − 6)/4!) Δ⁴y0 + ⋯].

At x = x0 we have p = 0, so

(dy/dx)_{x0} = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + (1/5)Δ⁵y0 − (1/6)Δ⁶y0 + ⋯].

Similarly,

d²y/dx² = (d/dp)(dy/dx) · (dp/dx) = (1/h²)[Δ²y0 + ((6p − 6)/3!) Δ³y0 + ((12p² − 36p + 22)/4!) Δ⁴y0 + ⋯],

and at x = x0 (p = 0),

(d²y/dx²)_{x0} = (1/h²)[Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 − (5/6)Δ⁵y0 + ⋯].

Using the backward difference polynomial with p = (x − xn)/h, one obtains in the same way

dy/dx = (1/h)[∇yn + ((2p + 1)/2!) ∇²yn + ((3p² + 6p + 2)/3!) ∇³yn + ⋯],

d²y/dx² = (1/h²)[∇²yn + ((6p + 6)/3!) ∇³yn + ((6p² + 18p + 11)/12) ∇⁴yn + ⋯].

Example: Find dy/dx and d²y/dx² at x = 1.1 and x = 1.6 from the following table.

x     y        Δ       Δ²       Δ³       Δ⁴       Δ⁵       Δ⁶
1.0   7.989
               0.414
1.1   8.403            −0.036
               0.378             0.006
1.2   8.781            −0.030            −0.002
               0.348             0.004              0.001
1.3   9.129            −0.026            −0.001              0.002
               0.322             0.003              0.003
1.4   9.451            −0.023             0.002
               0.299             0.005
1.5   9.750            −0.018
               0.281
1.6   10.031

At x = 1.1 we use the forward difference formulas at p = 0:

(dy/dx)_{x0} = (1/h)[Δy0 − (1/2)Δ²y0 + (1/3)Δ³y0 − (1/4)Δ⁴y0 + (1/5)Δ⁵y0 − ⋯],
(d²y/dx²)_{x0} = (1/h²)[Δ²y0 − Δ³y0 + (11/12)Δ⁴y0 − (5/6)Δ⁵y0 + ⋯].

With h = 0.1, x0 = 1.1: Δy0 = 0.378, Δ²y0 = −0.030, Δ³y0 = 0.004, Δ⁴y0 = −0.001, Δ⁵y0 = 0.003. Hence

(dy/dx)_{1.1} = (1/0.1)[0.378 − (1/2)(−0.030) + (1/3)(0.004) − (1/4)(−0.001) + (1/5)(0.003)] = 3.952,
(d²y/dx²)_{1.1} = (1/0.1²)[−0.030 − (0.004) + (11/12)(−0.001) − (5/6)(0.003)] = −3.74.

At x = 1.6 we use the backward difference formulas at p = 0:

(dy/dx)_{xn} = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + (1/5)∇⁵yn + (1/6)∇⁶yn + ⋯],
(d²y/dx²)_{xn} = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + (137/180)∇⁶yn + ⋯].

With h = 0.1, xn = 1.6: ∇yn = 0.281, ∇²yn = −0.018, ∇³yn = 0.005, ∇⁴yn = 0.002, ∇⁵yn = 0.003, ∇⁶yn = 0.002. Hence

(dy/dx)_{1.6} = (1/0.1)[0.281 + (1/2)(−0.018) + (1/3)(0.005) + (1/4)(0.002) + (1/5)(0.003) + (1/6)(0.002)] = 2.75,
(d²y/dx²)_{1.6} = (1/0.1²)[−0.018 + 0.005 + (11/12)(0.002) + (5/6)(0.003) + (137/180)(0.002)] = −0.715.
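A minimal Python sketch (function names ours) of the forward difference table and the p = 0 derivative formula, reproducing dy/dx at x = 1.1 (the table from the row 1.1 onward is passed so that x0 = 1.1):

def forward_difference_table(ys):
    # Columns of forward differences: [Δy, Δ²y, ...].
    cols, cur = [], list(ys)
    while len(cur) > 1:
        cur = [cur[i + 1] - cur[i] for i in range(len(cur) - 1)]
        cols.append(cur)
    return cols

def derivative_at_x0(ys, h):
    # (dy/dx) at x0 from (1/h)[Δy0 - Δ²y0/2 + Δ³y0/3 - ...].
    cols = forward_difference_table(ys)
    return sum((-1) ** k * cols[k][0] / (k + 1) for k in range(len(cols))) / h

ys = [7.989, 8.403, 8.781, 9.129, 9.451, 9.750, 10.031]
print(round(derivative_at_x0(ys[1:], 0.1), 3))   # dy/dx at x = 1.1 ≈ 3.952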

Numerical Integration

To evaluate the integral of y = f(x) between the x-axis and the ordinates x = a and x = b, that is,

I = ∫_a^b y dx = ∫_a^b f(x) dx,

we divide (a, b) into n equal subintervals, (a, b) = (a = x0, x1, x2, ..., xn = b), each of width

h = (b − a)/n.

Trapezoidal rule:

∫_a^b f(x) dx = (h/2)[y0 + 2(y1 + y2 + y3 + ⋯ + y_{n−1}) + yn].

Simpson's 1/3-rd rule (n even):

∫_a^b f(x) dx = (h/3)[y0 + 4(y1 + y3 + y5 + ⋯) + 2(y2 + y4 + y6 + ⋯) + yn].

Simpson's 3/8-th rule (n a multiple of 3):

∫_a^b f(x) dx = (3h/8)[y0 + 3(y1 + y2 + y4 + y5 + ⋯) + 2(y3 + y6 + y9 + ⋯) + yn].

Example: Evaluate ∫_0^1 dx/(1 + x²) taking n = 6.

Here h = (b − a)/n = (1 − 0)/6 = 1/6 and y = f(x) = 1/(1 + x²). Then

x0 = 0,   y0 = f(x0) = 1/(1 + 0²) = 1,
x1 = 1/6, y1 = f(x1) = 1/(1 + (1/6)²) = 36/37,
x2 = 2/6, y2 = f(x2) = 1/(1 + (2/6)²) = 0.9,
x3 = 3/6, y3 = f(x3) = 1/(1 + (3/6)²) = 0.8,
x4 = 4/6, y4 = f(x4) = 1/(1 + (4/6)²) = 9/13,
x5 = 5/6, y5 = f(x5) = 1/(1 + (5/6)²) = 36/61,
x6 = 1,   y6 = f(x6) = 1/(1 + 1²) = 0.5.

By the trapezoidal rule,

∫_0^1 dx/(1 + x²) = (h/2)[y0 + 2(y1 + y2 + y3 + y4 + y5) + y6]
  = (1/(6·2))[1 + 2(36/37 + 0.9 + 0.8 + 9/13 + 36/61) + 0.5]
  = 0.784241.

By Simpson's 1/3-rd rule,

∫_0^1 dx/(1 + x²) = (h/3)[y0 + 4(y1 + y3 + y5) + 2(y2 + y4) + y6]
  = (1/(6·3))[1 + 4(36/37 + 0.8 + 36/61) + 2(0.9 + 9/13) + 0.5]
  = 0.785398.

By Simpson's 3/8-th rule,

∫_0^1 dx/(1 + x²) = (3h/8)[y0 + 3(y1 + y2 + y4 + y5) + 2(y3) + y6]
  = (3/(6·8))[1 + 3(36/37 + 0.9 + 9/13 + 36/61) + 2(0.8) + 0.5]
  = 0.785396.
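All three composite rules drop straight into code. A minimal Python sketch (function names ours) reproducing the three values above:

def trapezoid(f, a, b, n):
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return (h / 2) * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

def simpson_13(f, a, b, n):          # n must be even
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return (h / 3) * (ys[0] + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-1:2]) + ys[-1])

def simpson_38(f, a, b, n):          # n must be a multiple of 3
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    s3 = sum(y for i, y in enumerate(ys[1:-1], start=1) if i % 3 == 0)
    return (3 * h / 8) * (ys[0] + 3 * (sum(ys[1:-1]) - s3) + 2 * s3 + ys[-1])

f = lambda x: 1 / (1 + x * x)
for rule in (trapezoid, simpson_13, simpson_38):
    print(rule.__name__, round(rule(f, 0.0, 1.0, 6), 6))
# trapezoid 0.784241, simpson_13 0.785398, simpson_38 0.785396 (exact: pi/4)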
Gauss-Legendre Integration Rules
The limits of integration for the Gauss-Legendre integration rules are [−1, 1]. Therefore, to evaluate

I = ∫_a^b f(x) dx,

we transform the limits [a, b] to [−1, 1] using a linear transformation.

The required transformation is x = (1/2)[(b − a)t + (b + a)].
Then f(x) = f{[(b − a)t + (b + a)]/2} and dx = [(b − a)/2] dt.
The integral becomes

I = ∫_a^b f(x) dx = ∫_{−1}^{1} f((1/2)[(b − a)t + (b + a)]) · ((b − a)/2) dt = ∫_{−1}^{1} g(t) dt,

where

g(t) = [(b − a)/2] f((1/2)[(b − a)t + (b + a)]).

Therefore, we shall derive formulas to evaluate ∫_{−1}^{1} g(t) dt. Without loss of generality, let us write this integral as ∫_{−1}^{1} f(x) dx.

Gauss one point rule (Gauss-Legendre one point rule)

The one point rule is given by

∫_{−1}^{1} f(x) dx = 2 f(0).

Gauss two point rule (Gauss-Legendre two point rule)

The two point rule is given by

∫_{−1}^{1} f(x) dx = f(−1/√3) + f(1/√3).

Gauss three point rule (Gauss-Legendre three point rule)

The three point rule is given by

∫_{−1}^{1} f(x) dx = (1/9)[5 f(−√(3/5)) + 8 f(0) + 5 f(√(3/5))].
Example: Evaluate the integral I = ∫_1^2 (2x/(1 + x⁴)) dx using the Gauss-Legendre one point, two point and three point rules.

Solution: We reduce the interval [1, 2] to [−1, 1] to apply the Gauss rules.
Here a = 1 and b = 2. Writing x = (1/2)[(b − a)t + (b + a)], we get x = (t + 3)/2, dx = dt/2. The integral becomes

I = ∫_{−1}^{1} (8(t + 3)/[16 + (t + 3)⁴]) dt = ∫_{−1}^{1} f(t) dt,

where f(t) = 8(t + 3)/[16 + (t + 3)⁴].

Using the one point Gauss rule, we obtain

I = 2 f(0) = 2 · 24/(16 + 81) = 48/97 = 0.494845.

Using the two point Gauss rule, we obtain

I = f(−1/√3) + f(1/√3) = f(−0.577350) + f(0.577350) = 0.384183 + 0.159193 = 0.543376.

Using the three point Gauss rule, we obtain

I = (1/9)[5 f(−√(3/5)) + 8 f(0) + 5 f(√(3/5))]
  = (1/9)[5 f(−0.774597) + 8 f(0) + 5 f(0.774597)]
  = (1/9)[5(0.439299) + 8(0.247423) + 5(0.137889)] = 0.540592.

Example: Evaluate the integral I = ∫_0^1 dx/(1 + x) using the Gauss three point formula.

Solution: We reduce the interval [0, 1] to [−1, 1] to apply the Gauss three point rule.
Here a = 0 and b = 1. Writing x = (1/2)[(b − a)t + (b + a)], we get x = (t + 1)/2, dx = dt/2. The integral becomes

I = ∫_{−1}^{1} dt/(t + 3) = ∫_{−1}^{1} f(t) dt,

where f(t) = 1/(t + 3).

Using the three point Gauss rule, we obtain

I = (1/9)[5 f(−√(3/5)) + 8 f(0) + 5 f(√(3/5))]
  = (1/9)[5(0.449357) + 8(0.333333) + 5(0.264929)] = 0.693122.
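A minimal Python sketch of the three point rule (names ours), using the transformation x = ((b − a)t + (b + a))/2 so it can be applied on any interval [a, b]; it reproduces both examples:

import math

# Nodes and weights of the Gauss-Legendre three point rule on [-1, 1].
NODES = (-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5))
WEIGHTS = (5 / 9, 8 / 9, 5 / 9)

def gauss3(f, a, b):
    # Map [a, b] onto [-1, 1] and apply the weighted sum of slopes.
    half = (b - a) / 2
    mid = (b + a) / 2
    return half * sum(w * f(half * t + mid) for w, t in zip(WEIGHTS, NODES))

print(round(gauss3(lambda x: 2 * x / (1 + x ** 4), 1.0, 2.0), 6))  # 0.540592
print(round(gauss3(lambda x: 1 / (1 + x), 0.0, 1.0), 6))           # 0.693122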
Numerical Solution of ODEs

We know that an ODE of the first order is of the form F(x, y, y′) = 0 and can often be written in the explicit form y′ = f(x, y). An initial value problem for this equation is of the form

y′ = f(x, y),  y(x0) = y0,

where x0 and y0 are given and we assume that the problem has a unique solution on some open interval a < x < b containing x0. We shall discuss methods of computing approximate numeric values of the solution of the above initial value problem at the equidistant points x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, ... on the x-axis, where the step size h is a fixed positive real number. These methods are suggested by the Taylor series expansion of an infinitely differentiable function.

Taylor Series Method

Suppose f is an infinitely differentiable function (i.e., f can be differentiated infinitely often). Then the Taylor series expansion of f about x0 is given by

Σ_{k=0}^{∞} (f^{(k)}(x0)/k!) (x − x0)^k.

Let f be an infinitely differentiable function on the interval (x0 − r, x0 + r) and let x ∈ (x0 − r, x0 + r). Define the remainder

Rn(x) = f(x) − Σ_{k=0}^{n} (f^{(k)}(x0)/k!) (x − x0)^k.

Then

Rn(x) = (f^{(n+1)}(ξ)/(n + 1)!) (x − x0)^{n+1}

for some ξ between x0 and x. Moreover, if

lim_{n→∞} Rn(x) = 0,

then the Taylor series for f expanded about x0 converges to f(x), that is,

f(x) = Σ_{k=0}^{∞} (f^{(k)}(x0)/k!) (x − x0)^k

for all x ∈ (x0 − r, x0 + r).


To solve the initial value problem of the form

y′ = f(x, y),  y(x0) = y0,

where x0 and y0 are given, we assume that the problem has a unique solution on some open interval a < x < b containing x0 and that y is infinitely differentiable on (a, b).
Let x = x0 + h; then by Taylor series expansion of y, we have

y(x0 + h) = y(x) = Σ_{k=0}^{∞} (y^{(k)}(x0)/k!) (x − x0)^k = Σ_{k=0}^{∞} (y^{(k)}(x0)/k!) h^k.

That is,

y(x0 + h) = y(x0) + h y′(x0) + (h²/2) y″(x0) + (h³/3!) y‴(x0) + ⋯.

To compute approximate numeric values of the solution of the above initial value problem at the equidistant points x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, ..., we use the above form of the Taylor series expansion. Thus the approximate values are

y(x1) = y(x0 + h) = y(x0) + h y′(x0) + (h²/2) y″(x0) + (h³/3!) y‴(x0) + ⋯,
y(x2) = y(x1 + h) = y(x1) + h y′(x1) + (h²/2) y″(x1) + (h³/3!) y‴(x1) + ⋯,
y(x3) = y(x2 + h) = y(x2) + h y′(x2) + (h²/2) y″(x2) + (h³/3!) y‴(x2) + ⋯,

and so on. In general,

y(x_{n+1}) = y(xn + h) = y(xn) + h y′(xn) + (h²/2) y″(xn) + (h³/3!) y‴(xn) + ⋯.

Example: Solve the initial value problem

y′ = xy,  y(0) = 1

on the interval [0, 1] with h = 0.2, using the Taylor series method up to the 3rd order derivative.

Solution: We have to find the values of y(x) at x1 = 0.2, x2 = 0.4, x3 = 0.6, x4 = 0.8, x5 = 1.0.
It is given that x0 = 0, y(x0) = 1, h = 0.2. Denote y(xn) = yn. We use the Taylor series method up to the 3rd order derivative, that is,

y(x_{n+1}) = y(xn) + h y′(xn) + (h²/2) y″(xn) + (h³/3!) y‴(xn),

i.e.,

y_{n+1} = yn + h y′n + (h²/2) y″n + (h³/3!) y‴n.

Given that y′ = xy, we get y″ = (1 + x²)y and y‴ = (3x + x³)y. Now using h = 0.2 and the given initial condition y(0) = 1, that is, x0 = 0, y0 = 1, we find

y′0 = x0 y0 = 0,  y″0 = (1 + x0²) y0 = 1,  y‴0 = (3x0 + x0³) y0 = 0.

Thus,

y1 = y0 + h y′0 + (h²/2) y″0 + (h³/3!) y‴0 = 1.02.

Similarly we calculate y2, y3, y4, y5, as given in the table below.

n   xn    yn      y′n     y″n     y‴n     y(x_{n+1})
0   0     1       0       1       0       1.0200
1   0.2   1.0200  0.2040  1.0608  0.6202  1.0828
2   0.4   1.0828  0.4331  1.2561  1.3687  1.1964
3   0.6   1.1964  0.7179  1.6271  2.4120  1.3757
4   0.8   1.3757  1.1006  2.2562  4.0062  1.6463
5   1.0   1.6463  ---     ---     ---     ---

From the above table we get the solution of the initial value problem as

y(0.2) = 1.0200, y(0.4) = 1.0828, y(0.6) = 1.1964, y(0.8) = 1.3757, y(1.0) = 1.6463.

Note: The exact solution is y(x) = e^{x²/2}, and so the exact value of y(1) is 1.6487.
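A minimal Python sketch of the third order Taylor method for this particular equation (function name ours; the derivative formulas y″ = (1 + x²)y and y‴ = (3x + x³)y are hard-coded, since the Taylor method needs them in closed form):

def taylor3_xy(x0, y0, h, steps):
    # Taylor method of order 3 for y' = x*y.
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(steps):
        d1 = x * y                     # y'
        d2 = (1 + x * x) * y           # y''
        d3 = (3 * x + x ** 3) * y      # y'''
        y += h * d1 + h ** 2 / 2 * d2 + h ** 3 / 6 * d3
        x += h
        history.append((round(x, 1), round(y, 4)))
    return history

for x, y in taylor3_xy(0.0, 1.0, 0.2, 5):
    print(x, y)    # ends near y(1.0) ≈ 1.6463 (exact e^{1/2} ≈ 1.6487)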

Example: Solve the initial value problem

y′ = x + y,  y(0) = 0

on the interval [0, 1] with h = 0.25, using the Taylor series method up to the 3rd order derivative.
Solution: We have to find the values of y(x) at x1 = 0.25, x2 = 0.50, x3 = 0.75, x4 = 1.0.
It is given that x0 = 0, y(x0) = 0, h = 0.25. Denote y(xn) = yn. We use the Taylor series method up to the 3rd order derivative, that is,

y_{n+1} = yn + h y′n + (h²/2) y″n + (h³/3!) y‴n.

Given that y′ = x + y, we get y″ = 1 + x + y and y‴ = 1 + x + y. Now using h = 0.25 and the given initial condition y(0) = 0, that is, x0 = 0, y0 = 0, we find

y′0 = x0 + y0 = 0,  y″0 = 1 + x0 + y0 = 1,  y‴0 = 1 + x0 + y0 = 1.

Thus,

y1 = y0 + h y′0 + (h²/2) y″0 + (h³/3!) y‴0 = 0.0339.

Similarly, we calculate y2, y3, y4, as given in the table below.

n   xn     yn      y′n     y″n     y‴n     y(x_{n+1})
0   0      0       0       1       1       0.0339
1   0.25   0.0339  0.2839  1.2839  1.2839  0.1483
2   0.50   0.1483  0.6483  1.6483  1.6483  0.3662
3   0.75   0.3662  1.1162  2.1162  2.1162  0.7168
4   1.0    0.7168  ---     ---     ---     ---

From the above table we get the solution of the initial value problem as

y(0.25) = 0.0339, y(0.50) = 0.1483, y(0.75) = 0.3662, y(1.0) = 0.7168.

Note: The exact solution is y(x) = e^x − x − 1, and so the exact value of y(1) is 0.7183.
Example: Solve the initial value problem

y′ = (x + y)²,  y(0) = 0;  h = 0.1

using the Taylor series method up to the 3rd order derivative. Do 10 steps.
Solution: We have to find the values of y(x) at x1 = 0.1, x2 = 0.2, x3 = 0.3, ..., x9 = 0.9, x10 = 1.0.
It is given that x0 = 0, y(x0) = 0, h = 0.1. Denote y(xn) = yn. We use the Taylor series method up to the 3rd order derivative, that is,

y_{n+1} = yn + h y′n + (h²/2) y″n + (h³/3!) y‴n.

Given that y′ = (x + y)², we get

y″ = 2(x + y)(1 + y′)

and

y‴ = 2(1 + 4y′ + 3y′²).

Using h = 0.1 and the given initial condition y(0) = 0, that is, x0 = 0, y0 = 0, we do the following 10 steps.
Step 1.

y′0 = (x0 + y0)² = 0,  y″0 = 2(x0 + y0)(1 + y′0) = 0,  y‴0 = 2(1 + 4y′0 + 3y′0²) = 2.

Thus,

y1 = y0 + h y′0 + (h²/2) y″0 + (h³/3!) y‴0 = 0.0003.

Similarly, we do the remaining steps, as given in the table below.

n    xn    yn      y′n     y″n     y‴n      y(x_{n+1})
1    0     0.0000  0.0000  0.0000  2.0000   0.0003
2    0.1   0.0003  0.0101  0.2007  2.0811   0.0027
3    0.2   0.0027  0.0411  0.4061  2.3388   0.0092
4    0.3   0.0092  0.0956  0.6241  2.8198   0.0224
5    0.4   0.0224  0.1784  0.8716  3.6181   0.0452
6    0.5   0.0452  0.2972  1.1867  4.9077   0.0816
7    0.6   0.0816  0.4646  1.6576  7.0124   0.1376
8    0.7   0.1376  0.7015  2.4995  10.5649  0.2220
9    0.8   0.2220  1.0444  4.2736  16.9005  0.3506
10   0.9   0.3506  1.5640  8.6194  29.1887  0.5550
11   1     0.5550  ---     ---     ---      ---

Note: The exact solution is y(x) = tan x − x, and so the exact value of y(1) is 0.5574.

Euler's Method
From the Taylor series method, we have

y(x_{n+1}) = y(xn) + h y′(xn) + (h²/2) y″(xn) + (h³/3!) y‴(xn) + ⋯.

For small h the higher powers h², h³, ... are very small, and dropping all of them gives the crude approximation

y(x_{n+1}) = y(xn) + h y′(xn),

i.e.,

y_{n+1} = yn + h f(xn, yn).

Geometrically, this approximates the solution curve by a polygon whose first side is tangent to the curve at x0.

Example: Apply the Euler method to the initial value problem

y′ = (x − y)²,  y(0) = 0,

choosing h = 0.1 and computing ten steps.
Solution: Given that

y′ = (x − y)²,  y(0) = 0;  h = 0.1.

We have to do ten steps, that is, calculate y1, y2, ..., y10. Given x0 = 0, y0 = 0, h = 0.1, we have x1 = 0.1, x2 = 0.2, ..., x10 = 1.0. Here f(x, y) = (x − y)². By the Euler method,

y_{n+1} = yn + h f(xn, yn).

Thus

y1 = y0 + h f(x0, y0) = 0 + 0.1 · (0 − 0)² = 0.

Similarly, we calculate y2, ..., y10, as given in the table below.

n     xn    yn
1     0.1   0.0000
2     0.2   0.0010
3     0.3   0.0050
4     0.4   0.0137
5     0.5   0.0286
6     0.6   0.0508
7     0.7   0.0810
8     0.8   0.1193
9     0.9   0.1656
10    1.0   0.2196
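A minimal Python sketch of Euler's method (function name ours), applied to the example above:

def euler(f, x0, y0, h, steps):
    # Euler's method y_{n+1} = y_n + h f(x_n, y_n).
    x, y = x0, y0
    out = []
    for _ in range(steps):
        y += h * f(x, y)
        x += h
        out.append((round(x, 1), round(y, 4)))
    return out

for x, y in euler(lambda x, y: (x - y) ** 2, 0.0, 0.0, 0.1, 10):
    print(x, y)    # matches the table above, ending at y(1.0) = 0.2196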

Modified Euler's Method

Euler's method is generally much too inaccurate; methods of higher order and precision are obtained by taking more terms of the Taylor series into account. But this involves an important practical problem, the calculation of successive derivatives. The general strategy is to avoid the computation of these derivatives and to replace it by evaluating f at one or several suitably chosen auxiliary values of (x, y). The Modified Euler method for solving an initial value problem of the form

y′ = f(x, y),  y(x0) = y0,

where x0 and y0 are given, is a predictor-corrector method, as described below.

Suppose yn has been calculated; to calculate y_{n+1} we do two steps.
Step 1. Find the predictor

y_{n+1}^{(0)} = yn + h f(xn, yn).

Step 2. Find the corrector by the iterations

y_{n+1}^{(k+1)} = yn + (h/2)[f(xn, yn) + f(x_{n+1}, y_{n+1}^{(k)})],  k = 0, 1, 2, ....

In this step we repeat the iteration till two consecutive values of y_{n+1} agree up to the required number of decimal places, and treat that value as y_{n+1}.

Example: Solve the initial value problem

y′ = xy²,  y(0) = 1;  h = 0.25

on the interval [0, 1] using the Modified Euler method correct to four decimal places.
Solution: According to the question, f(x, y) = xy², x0 = 0, y0 = 1, h = 0.25. We have to solve on the interval [0, 1], that is, find the values of y(x) at x1 = 0.25, x2 = 0.5, x3 = 0.75 and x4 = 1.0. By the Modified Euler method we have

y_{n+1}^{(0)} = yn + h f(xn, yn)

and

y_{n+1}^{(k+1)} = yn + (h/2)[f(xn, yn) + f(x_{n+1}, y_{n+1}^{(k)})],  k = 0, 1, 2, ....

First, to calculate y1 = y(x1), we have

y1^{(0)} = y0 + h f(x0, y0) = 1.

Now by iteration we calculate y1^{(k+1)} for k = 0, 1, 2, ..., till the value repeats:

y1^{(1)} = y0 + (h/2)[f(x0, y0) + f(x1, y1^{(0)})] = 1.0313,
y1^{(2)} = y0 + (h/2)[f(x0, y0) + f(x1, y1^{(1)})] = 1.0332,
y1^{(3)} = y0 + (h/2)[f(x0, y0) + f(x1, y1^{(2)})] = 1.0334,
y1^{(4)} = y0 + (h/2)[f(x0, y0) + f(x1, y1^{(3)})] = 1.0334.

Thus y1 = 1.0334.

In a similar way we calculate y2:

y2^{(0)} = 1.1001, y2^{(1)} = 1.1424, y2^{(2)} = 1.1483, y2^{(3)} = 1.1492, y2^{(4)} = 1.1493, y2^{(5)} = 1.1493.

Thus y2 = 1.1493. Again,

y3^{(0)} = 1.3144, y3^{(1)} = 1.3938, y3^{(2)} = 1.4140, y3^{(3)} = 1.4193, y3^{(4)} = 1.4207, y3^{(5)} = 1.4212, y3^{(6)} = 1.4212.

Thus y3 = 1.4212. Finally,

y4^{(0)} = 1.7999, y4^{(1)} = 2.0155, ..., y4^{(15)} = 2.2349, y4^{(16)} = 2.2349.

Thus y4 = 2.2349.
Therefore y1 = 1.0334, y2 = 1.1493, y3 = 1.4212, y4 = 2.2349.

Note: The exact solution of the above initial value problem is y(x) = 2/(2 − x²), giving y1 = 1.0323, y2 = 1.1429, y3 = 1.3913 and y4 = 2.
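A minimal Python sketch of the predictor-corrector loop (function name ours; the stopping tolerance 0.5e-4 is our choice for "correct to four decimal places"):

def modified_euler(f, x0, y0, h, steps, tol=0.5e-4):
    # Predictor-corrector: iterate the corrector until it repeats within tol.
    x, y = x0, y0
    out = []
    for _ in range(steps):
        x_next = x + h
        y_next = y + h * f(x, y)                              # predictor
        while True:
            y_new = y + h / 2 * (f(x, y) + f(x_next, y_next)) # corrector
            if abs(y_new - y_next) < tol:
                break
            y_next = y_new
        x, y = x_next, y_new
        out.append((round(x, 2), round(y, 4)))
    return out

for x, y in modified_euler(lambda x, y: x * y * y, 0.0, 1.0, 0.25, 4):
    print(x, y)      # 1.0334, 1.1493, 1.4212, 2.2349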


Runge-Kutta Methods

Let us consider the initial value problem

y′(x) = f(x, y),  y(x0) = y0.

Integrating the differential equation y′ = f(x, y) over the interval [xn, x_{n+1}], we get

∫_{xn}^{x_{n+1}} (dy/dx) dx = ∫_{xn}^{x_{n+1}} f(x, y) dx.

Note that y′, and hence f(x, y), is the slope of the solution curve. Further, the integrand on the right hand side is the slope of the solution curve, which changes continuously in [xn, x_{n+1}]. By approximating the continuously varying slope in [xn, x_{n+1}] by a fixed slope, we obtain the Euler, Heun and modified Euler methods. The basic idea of Runge-Kutta methods is to approximate the integral by a weighted average of slopes, and to approximate the slopes at a number of points in [xn, x_{n+1}].

The general Runge-Kutta method can be written as

(1) y_{n+1} = yn + Σ_{i=1}^{v} wi Ki,

where

Ki = h f(xn + ci h, yn + Σ_{m=1}^{i−1} a_{im} Km),  with c1 = 0.

For v = 1, w1 = 1, equation (1) becomes the Euler method with order p = 1, and this is the lowest order Runge-Kutta method.
We now list a few Runge-Kutta methods.

Runge-Kutta methods of second order

Euler-Cauchy (Heun) method:

y_{n+1} = yn + (1/2)(K1 + K2),
K1 = h f(xn, yn),
K2 = h f(xn + h, yn + K1).

Midpoint method:

y_{n+1} = yn + K2,
K1 = h f(xn, yn),
K2 = h f(xn + h/2, yn + K1/2).

Runge-Kutta method of fourth order

Classical method:

y_{n+1} = yn + (1/6)(K1 + 2K2 + 2K3 + K4),
K1 = h f(xn, yn),
K2 = h f(xn + h/2, yn + K1/2),
K3 = h f(xn + h/2, yn + K2/2),
K4 = h f(xn + h, yn + K3).

Example: Given y′ = x³ + y, y(0) = 2, compute y(0.2), y(0.4) and y(0.6) using the Runge-Kutta method of fourth order.

Solution: We have x0 = 0, y0 = 2, f(x, y) = x³ + y, h = 0.2.

For n = 0, we have x0 = 0, y0 = 2.

k1 = h f(x0, y0) = 0.2 f(0, 2) = (0.2)(2) = 0.4,
k2 = h f(x0 + h/2, y0 + k1/2) = 0.2 f(0.1, 2.2) = (0.2)(2.201) = 0.4402,
k3 = h f(x0 + h/2, y0 + k2/2) = 0.2 f(0.1, 2.2201) = (0.2)(2.2211) = 0.44422,
k4 = h f(x0 + h, y0 + k3) = 0.2 f(0.2, 2.44422) = (0.2)(2.45222) = 0.490444,

y(0.2) ≈ y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
  = 2.0 + (1/6)[0.4 + 2(0.4402) + 2(0.44422) + 0.490444]
  = 2.443214.

For n = 1, we have x1 = 0.2, y1 = 2.443214.

k1 = h f(x1, y1) = 0.2 f(0.2, 2.443214) = (0.2)(2.451214) = 0.490243,
k2 = h f(x1 + h/2, y1 + k1/2) = 0.2 f(0.3, 2.443214 + 0.245122) = (0.2)(2.715336) = 0.543067,
k3 = h f(x1 + h/2, y1 + k2/2) = 0.2 f(0.3, 2.443214 + 0.271534) = (0.2)(2.741748) = 0.548350,
k4 = h f(x1 + h, y1 + k3) = 0.2 f(0.4, 2.443214 + 0.548350) = (0.2)(3.055564) = 0.611113,

y(0.4) ≈ y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4)
  = 2.443214 + (1/6)[0.490243 + 2(0.543067) + 2(0.548350) + 0.611113]
  = 2.990579.

For n = 2, we have x2 = 0.4, y2 = 2.990579.

k1 = h f(x2, y2) = 0.2 f(0.4, 2.990579) = (0.2)(3.054579) = 0.610916,
k2 = h f(x2 + h/2, y2 + k1/2) = 0.2 f(0.5, 2.990579 + 0.305458) = (0.2)(3.421037) = 0.684207,
k3 = h f(x2 + h/2, y2 + k2/2) = 0.2 f(0.5, 2.990579 + 0.342104) = (0.2)(3.457683) = 0.691537,
k4 = h f(x2 + h, y2 + k3) = 0.2 f(0.6, 2.990579 + 0.691537) = (0.2)(3.898116) = 0.779623,

y(0.6) ≈ y3 = y2 + (1/6)(k1 + 2k2 + 2k3 + k4)
  = 2.990579 + (1/6)[0.610916 + 2(0.684207) + 2(0.691537) + 0.779623]
  = 3.680917.

Example: Solve the initial value problem

y′ = −2xy²,  y(0) = 1

with h = 0.2 on the interval [0, 0.4], using the fourth order classical Runge-Kutta method.

Solution: We have x0 = 0, y0 = 1, h = 0.2 and f(x, y) = −2xy².

For n = 0, we have x0 = 0, y0 = 1.

k1 = h f(x0, y0) = −2(0.2)(0)(1)² = 0,
k2 = h f(x0 + h/2, y0 + k1/2) = −2(0.2)(0.1)(1)² = −0.04,
k3 = h f(x0 + h/2, y0 + k2/2) = −2(0.2)(0.1)(0.98)² = −0.038416,
k4 = h f(x0 + h, y0 + k3) = −2(0.2)(0.2)(0.961584)² = −0.0739715,

y(0.2) ≈ y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
  = 1.0 + (1/6)[0.0 − 0.08 − 0.076832 − 0.0739715] = 0.9615328.

For n = 1, we have x1 = 0.2, y1 = 0.9615328.

k1 = h f(x1, y1) = −2(0.2)(0.2)(0.9615328)² = −0.0739636,
k2 = h f(x1 + h/2, y1 + k1/2) = −2(0.2)(0.3)(0.924551)² = −0.1025753,
k3 = h f(x1 + h/2, y1 + k2/2) = −2(0.2)(0.3)(0.9102451)² = −0.0994255,
k4 = h f(x1 + h, y1 + k3) = −2(0.2)(0.4)(0.8621073)² = −0.1189166,

y(0.4) ≈ y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4)
  = 0.9615328 + (1/6)[−0.0739636 − 0.2051506 − 0.1988510 − 0.1189166]
  = 0.8620525.
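A minimal Python sketch of the classical fourth order method (function name ours), checked against the first example:

def rk4(f, x0, y0, h, steps):
    # Classical fourth order Runge-Kutta method.
    x, y = x0, y0
    out = []
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
        out.append((round(x, 1), round(y, 6)))
    return out

print(rk4(lambda x, y: x ** 3 + y, 0.0, 2.0, 0.2, 3))
# [(0.2, 2.443214), (0.4, 2.990579), (0.6, 3.680917)]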
Solution of Systems of Linear Equations
(Gauss-Jacobi and Gauss-Seidel Methods)

Theorem (Diagonal Dominance):

 A square matrix A is said to be strictly diagonally dominant if

|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|,  i = 1, 2, ..., n,

where a_ij denotes the entry in the i-th row and j-th column of the matrix.

 A strictly diagonally dominant matrix (or an irreducibly diagonally dominant matrix) is non-singular.
 The Gauss-Jacobi and Gauss-Seidel methods for solving a linear system of equations converge if the matrix is strictly diagonally dominant. They converge for any initial approximation x^{(0)}. Generally, x^{(0)} = 0 is taken in the absence of any better initial approximation.

Iterative Methods:
 To solve a system of linear equations, we use two types of methods: direct methods and iterative methods. Direct methods include the Gauss elimination method and the Gauss-Jordan method.
 Iterative methods include the Gauss-Jacobi method and the Gauss-Seidel method.

(1) Gauss-Jacobi Method:

 This is an iterative method, also known as the "method of simultaneous displacements".
 Each equation is solved for its diagonal unknown to obtain an approximate value, and the process is then iterated until it converges.

Algorithm for the Gauss-Jacobi method:
Step 1:
Consider a square system of n linear equations Ax = b, where A = [a_ij]_{n×n}, x = [x1 x2 x3 ... xn]^T and b = [b1 b2 b3 ... bn]^T, and the diagonal elements a_ii ≠ 0 for i = 1, 2, ..., n. If any a_ii = 0, rearrange the system of equations in such a way that this condition holds.

Step 2:
Rewrite the system of equations as

x1 = (1/a11)[b1 − a12 x2 − a13 x3 − ⋯ − a1n xn],
x2 = (1/a22)[b2 − a21 x1 − a23 x3 − ⋯ − a2n xn],
...
x_{n−1} = (1/a_{n−1,n−1})[b_{n−1} − a_{n−1,1} x1 − a_{n−1,2} x2 − ⋯ − a_{n−1,n−2} x_{n−2} − a_{n−1,n} xn],
xn = (1/a_{nn})[bn − a_{n1} x1 − a_{n2} x2 − ⋯ − a_{n,n−1} x_{n−1}].

In general, xi = (1/a_ii)[bi − Σ_{j=1, j≠i}^{n} a_ij xj],  i = 1, 2, ..., n, a_ii ≠ 0.

Step 3:
Generate the iteration scheme x^{(k+1)} from x^{(k)} for k ≥ 0 as

xi^{(k+1)} = (1/a_ii)[bi − Σ_{j=1, j≠i}^{n} a_ij xj^{(k)}],  i = 1, 2, ..., n and a_ii ≠ 0 for all i.

Example 1:
Solve the linear system Ax = b given by

10x1 − x2 + 2x3 = 6
−x1 + 11x2 − x3 + 3x4 = 25
2x1 − x2 + 10x3 − x4 = −11
3x2 − x3 + 8x4 = 15

by the Gauss-Jacobi method, rounded to four decimal places.
Solution:
Here A is strictly diagonally dominant, since |10| > |−1| + |2|, |11| > |−1| + |−1| + |3|, |10| > |2| + |−1| + |−1| and |8| > |3| + |−1|.
Now, letting x^{(0)} = [0 0 0 0]^T, we get

x^{(1)} = [0.6000 2.2727 −1.1000 1.8750]^T,
x^{(2)} = [1.0473 1.7159 −0.8052 0.8852]^T, and
x^{(3)} = [0.9326 2.0533 −1.0493 1.1309]^T.

Proceeding similarly, one can obtain

x^{(5)} = [0.9890 2.0114 −1.0103 1.0214]^T and
x^{(10)} = [1.0001 1.9998 −0.9998 0.9998]^T.

The solution is x = [1 2 −1 1]^T. You may note that x^{(10)} is a good approximation to the exact solution compared to x^{(5)}.
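A minimal Python sketch of the Jacobi iteration (function name ours; the stopping test on the maximum componentwise change is our choice), applied to Example 1:

def gauss_jacobi(A, b, x0=None, tol=1e-4, max_iter=100):
    # Jacobi: every component is updated from the previous iterate.
    n = len(A)
    x = x0[:] if x0 else [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

A = [[10, -1, 2, 0],
     [-1, 11, -1, 3],
     [2, -1, 10, -1],
     [0, 3, -1, 8]]
b = [6, 25, -11, 15]
print([round(v, 4) for v in gauss_jacobi(A, b)])   # ≈ [1, 2, -1, 1]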

(2) Gauss-Seidel Method:

 This is a modification of the Gauss-Jacobi iteration method.
 This method is also known as the "method of successive displacements", since one always uses the most recent guesses in the iterations.

Algorithm for the Gauss-Seidel method:
Step 1:
Consider a square system of n linear equations in n unknowns, Ax = b, where A = [a_ij]_{n×n}, x = [x1 x2 x3 ... xn]^T and b = [b1 b2 b3 ... bn]^T, i.e.,

a11 x1 + a12 x2 + a13 x3 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ⋯ + a2n xn = b2
...
a_{n1} x1 + a_{n2} x2 + a_{n3} x3 + ⋯ + a_{nn} xn = bn,

and the diagonal elements a_ii ≠ 0 for i = 1, 2, ..., n.

Step 2:
If any a_ii = 0, rearrange the system of equations so that this condition holds, the first equation being rewritten with x1 on the left-hand side, the second with x2 on the left-hand side, and so on. That is, rewrite the system of equations as

x1 = (1/a11)[b1 − a12 x2 − a13 x3 − ⋯ − a1n xn],
x2 = (1/a22)[b2 − a21 x1 − a23 x3 − ⋯ − a2n xn],
...
x_{n−1} = (1/a_{n−1,n−1})[b_{n−1} − a_{n−1,1} x1 − a_{n−1,2} x2 − ⋯ − a_{n−1,n−2} x_{n−2} − a_{n−1,n} xn],
xn = (1/a_{nn})[bn − a_{n1} x1 − a_{n2} x2 − ⋯ − a_{n,n−1} x_{n−1}].

In general, xi = (1/a_ii)[bi − Σ_{j=1, j≠i}^{n} a_ij xj],  i = 1, 2, ..., n, a_ii ≠ 0.

Step 3:
Generate the iteration scheme x^{(k+1)} from x^{(k)} for k ≥ 0 as

xi^{(k+1)} = (1/a_ii)[bi − Σ_{j=1}^{i−1} a_ij xj^{(k+1)} − Σ_{j=i+1}^{n} a_ij xj^{(k)}],  i = 1, 2, ..., n and a_ii ≠ 0 for all i.

Step 4:
To find the xi's, one assumes an initial guess for the xi's and then uses the rewritten equations to calculate the new guesses. Remember, one always uses the most recent guesses to calculate xi.

Step 5:
At the end of each iteration, one calculates the absolute relative approximate error for each xi as

|εa|_i = |(xi_new − xi_old)/xi_new| × 100,

where xi_new is the most recently obtained value of xi and xi_old is the previous value of xi. When the absolute relative approximate error for each xi is less than the pre-specified tolerance, the iterations are stopped.

Example 1:
Solve the given system of equations by the Gauss-Seidel method:

12x1 + 3x2 − 5x3 = 1
x1 + 5x2 + 3x3 = 28
3x1 + 7x2 + 13x3 = 76

Given [x1 x2 x3]^T = [1 0 1]^T as the initial guess.
Solution:
The coefficient matrix

A = | 12  3  −5 |
    |  1  5   3 |
    |  3  7  13 |

is diagonally dominant, as

|a11| = |12| = 12 ≥ |a12| + |a13| = |3| + |−5| = 8,
|a22| = |5| = 5 ≥ |a21| + |a23| = |1| + |3| = 4,
|a33| = |13| = 13 ≥ |a31| + |a32| = |3| + |7| = 10,

and the inequality is strict for at least one row. Hence the solution should converge using the Gauss-Seidel method.
Rewriting the equations, we get

x1 = (1 − 3x2 + 5x3)/12,
x2 = (28 − x1 − 3x3)/5,
x3 = (76 − 3x1 − 7x2)/13.

The given initial guess is [x1 x2 x3]^T = [1 0 1]^T.

Iteration 1:
x1 = (1 − 3(0) + 5(1))/12 = 0.5000
x2 = (28 − 0.5000 − 3(1))/5 = 4.9000
x3 = (76 − 3(0.5000) − 7(4.9000))/13 = 3.0923

The absolute relative approximate errors at the end of the first iteration are

|εa|1 = |(0.5000 − 1.0000)/0.5000| × 100 = 100.000%
|εa|2 = |(4.9000 − 0)/4.9000| × 100 = 100.000%
|εa|3 = |(3.0923 − 1.0000)/3.0923| × 100 = 67.662%

The maximum absolute relative approximate error is 100.000%.

Iteration 2:
x1 = (1 − 3(4.9000) + 5(3.0923))/12 = 0.14679
x2 = (28 − 0.14679 − 3(3.0923))/5 = 3.7153
x3 = (76 − 3(0.14679) − 7(3.7153))/13 = 3.8118

The absolute relative approximate errors at the end of the second iteration are

|εa|1 = |(0.14679 − 0.5000)/0.14679| × 100 = 240.62%
|εa|2 = |(3.7153 − 4.9000)/3.7153| × 100 = 31.887%
|εa|3 = |(3.8118 − 3.0923)/3.8118| × 100 = 18.876%

The maximum absolute relative approximate error is 240.62%. This is greater than the maximum error obtained in the first iteration. As we conduct more iterations, the solution nevertheless converges as follows.

Iteration   x1        |εa|1 (%)   x2       |εa|2 (%)   x3       |εa|3 (%)
1           0.50000   100.00      4.9000   100.00      3.0923   67.662
2           0.14679   240.62      3.7153   31.887      3.8118   18.876
3           0.74275   80.23       3.1644   17.409      3.9708   4.0042
4           0.94675   21.547      3.0281   4.5012      3.9971   0.65798
5           0.99177   4.5394      3.0034   0.82240     4.0001   0.07499
6           0.99919   0.74260     3.0001   0.11000     4.0001   0.00000

Thus the iterates approach the exact solution [x1 x2 x3]^T = [1 3 4]^T.
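A minimal Python sketch of the Seidel iteration (function name ours; note the in-place update of x, which is exactly what distinguishes it from the Jacobi sketch earlier), applied to this example:

def gauss_seidel(A, b, x0=None, tol=1e-4, max_iter=100):
    # Seidel: each component uses the most recent values, updated in place.
    n = len(A)
    x = x0[:] if x0 else [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new
        if max_change < tol:
            break
    return x

A = [[12, 3, -5], [1, 5, 3], [3, 7, 13]]
b = [1, 28, 76]
print([round(v, 4) for v in gauss_seidel(A, b, [1.0, 0.0, 1.0])])  # ≈ [1, 3, 4]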
Q. Why is the Gauss-Seidel method preferable to the Gauss-Jacobi method?
Ans: The Gauss-Seidel method converges faster than the Gauss-Jacobi method; its rate of convergence is roughly twice that of the Gauss-Jacobi method. Hence, the Gauss-Seidel method is generally preferred.
