Approximations and Errors Approximation of Numbers by Truncation and Rounding-Off, Types of Errors
Numerical methods are techniques by which mathematical problems are formulated so that they can be
solved with arithmetic operations. Although there are many kinds of numerical methods, they have one
common characteristic: they invariably involve large numbers of tedious arithmetic calculations. It is
little wonder that with the development of fast, efficient digital computers, the role of numerical
methods in engineering problem solving has increased dramatically in recent years.
A computer has a finite word length and so only a fixed number of digits are stored and used during
computation. This would mean that even in storing an exact decimal number in its converted form in
the computer memory, an error is introduced.
(2) Round-off error: Error that arises due to the finite precision of an infinite decimal
representation during arithmetic computation is known as round-off error.
Round-off error arises in two ways:
(i) Chopping (Truncation)
(ii) Rounding
In chopping, after n-significant digits we simply discard the ( 𝑛 + 1)-th and later digits.
In rounding, to round-off a number to n-significant digits, we discard all the digits to the right of the n-
th digit if the ( 𝑛 + 1)-th digit is less than 5. If the ( 𝑛 + 1)-th digit is more than 5, we increase the n-
th digit by 1. If the ( 𝑛 + 1)-th digit is equal to 5, then we increase the n-th digit by 1 if it is odd and
leave the n-th digit unchanged if it is even.
Example. Find the round-off error in rounding and chopping correct up to 4 significant figures for the numbers (a) 1.7320508 (b) 0.014159.
Solution. (a) Let xT = 1.7320508.
The approximate value after chopping is xA = 1.732.
Therefore, the round-off error in chopping is xT − xA = 1.7320508 − 1.732 = 0.0000508.
Again, since the 5th significant digit (0) is less than 5, the approximate value after rounding is also xA = 1.732, and the round-off error in rounding is xT − xA = 1.7320508 − 1.732 = 0.0000508.
(b) Let xT = 0.014159.
The approximate value after chopping is xA = 0.01415, so the round-off error in chopping is 0.014159 − 0.01415 = 0.000009.
Since the 5th significant digit (9) is more than 5, rounding gives xA = 0.01416, so the round-off error in rounding is 0.014159 − 0.01416 = −0.000001.
(1) Absolute error: It is the absolute value of the difference between the true value and the approximate value. Let us denote the true value of a decimal number by xT and the approximate value by xA. Then the absolute error is defined as EA = |xT − xA|.
(2) Relative error: The relative error of a number is defined as the absolute error divided by the absolute true value, i.e., ER = |xT − xA| / |xT|.
(3) Percentage error: The percentage error is defined as the relative error multiplied by 100, i.e., EP = (|xT − xA| / |xT|) × 100.
Example. Find the absolute error, relative error and percentage error in approximating the number
10.42857 correct up to 3 decimal places.
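The three error measures can be sketched in a few lines of Python. The value xA = 10.429 below is an assumption (rounding 10.42857 to 3 decimal places); only the definitions EA, ER, EP come from the text.

```python
# Absolute, relative, and percentage error of an approximation (definitions as above).

def errors(x_true, x_approx):
    e_abs = abs(x_true - x_approx)        # E_A = |x_T - x_A|
    e_rel = e_abs / abs(x_true)           # E_R = E_A / |x_T|
    return e_abs, e_rel, 100 * e_rel      # E_P = 100 * E_R

# Assumed x_A = 10.429 (10.42857 rounded to 3 decimal places)
e_abs, e_rel, e_pct = errors(10.42857, 10.429)
print(e_abs, e_rel, e_pct)
```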
1. Bisection Method
This method consists in locating a root of the equation f(x) = 0 between a and b. If f(x) is continuous on [a, b] and f(a) and f(b) are of opposite signs, then there is a root between a and b. The first approximation to the root is x1 = (a + b)/2. If f(x1) = 0, then x1 is a root of f(x) = 0. Otherwise, the root lies between a and x1 or between x1 and b according as f(a)f(x1) is negative or positive. We then bisect that interval as before and continue the process until the root is found to the desired accuracy.
Example 1. Find a root of the equation x³ − 4x − 9 = 0, using the bisection method, correct to two decimal places.
Let f(x) = x³ − 4x − 9.
Then f(2) = −9 < 0 and f(3) = 6 > 0. Therefore, a root lies between 2 and 3.
∴ the first approximation to the root is x1 = (2 + 3)/2 = 2.5.
Now f(x1) = f(2.5) = −3.375 < 0, so the root lies between x1 and 3. Thus the second approximation of the root is x2 = (x1 + 3)/2 = 2.75.
Then f(x2) = f(2.75) = 0.7969 > 0. ∴ the root lies between x1 and x2. Thus the third approximation of the root is x3 = (x1 + x2)/2 = 2.625.
Then f(x3) = f(2.625) = −1.4121 < 0. ∴ the root lies between x2 and x3. Thus the fourth approximation to the root is x4 = (x2 + x3)/2 = 2.6875.
Repeating this process, the successive approximations are x5 = 2.71875, x6 = 2.70313, x7 = 2.71094, x8 = 2.70703, x9 = 2.70508, x10 = 2.70605, x11 = 2.70654.
Hence the root correct to two decimal places is 2.71.
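The bisection procedure above can be sketched in Python. The function and starting interval follow Example 1; the tolerance and iteration cap are illustrative assumptions.

```python
# Bisection method sketch for f(x) = x^3 - 4x - 9 on [2, 3].

def bisect(f, a, b, tol=1e-4, max_iter=100):
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = 0.5 * (a + b)                 # midpoint of current bracket
        if f(x) == 0 or (b - a) / 2 < tol:
            return x
        if f(a) * f(x) < 0:               # root in [a, x]
            b = x
        else:                             # root in [x, b]
            a = x
    return 0.5 * (a + b)

root = bisect(lambda x: x**3 - 4*x - 9, 2, 3)
print(round(root, 2))
```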
2. Fixed Point Iteration
To solve f(x) = 0, the equation is rewritten in the form x = φ(x) and, starting from an initial approximation x0, successive approximations are generated by xn+1 = φ(xn). The sequence {xn} of iterates, or successive better approximations, may or may not converge to a limit. If {xn} converges, then it converges to a root α, and the number of iterations required depends upon the desired degree of accuracy of the root α.
Remark 1. (Convergence) The method of fixed point iteration is conditionally convergent. The condition of convergence is |φ′(x)| < 1. Before starting the computation, one should confirm that φ(x) is such that −1 < φ′(x) < 1 in an interval containing the root.
Example 2. Find the root of the equation 3x − cos x − 1 = 0, by the iteration method, correct to four significant figures.
Writing the equation as x = (1 + cos x)/3 = φ(x), we have |φ′(x)| = |sin x|/3 < 1 for all x, so the iteration converges. Starting with x0 = 0:

n    xn         φ(xn)
0    0          0.66667
1    0.66667    0.59530
2    0.59530    0.60933
3    0.60933    0.60668
4    0.60668    0.60718
5    0.60718    0.60709
6    0.60709    0.60710
7    0.60710    0.60710

Hence the root correct to four significant figures is 0.6071.
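The iteration of Example 2 can be sketched as follows; the rewriting x = (1 + cos x)/3 matches the tabulated values above, while the stopping tolerance is an assumption.

```python
# Fixed-point iteration sketch for 3x - cos(x) - 1 = 0, rewritten as
# x = phi(x) = (1 + cos x)/3, for which |phi'(x)| = |sin x|/3 < 1.
import math

def fixed_point(phi, x0, tol=1e-6, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(lambda x: (1 + math.cos(x)) / 3, 0.0)
print(round(root, 4))
```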
3. Newton-Raphson Method
This is also an iterative method and is used to find isolated roots of an equation f(x) = 0. The object of this method is to correct the approximate root x0 (say) successively to its exact value α. Initially, a small interval [a0, b0] is found by a crude approximation, in which only one root α (say) of f(x) = 0 lies.
Let x = x0 (a0 ≤ x0 ≤ b0) be an approximation of the root α of the equation f(x) = 0. Let h be a small correction on x0, so that x1 = x0 + h is the correct root.
Therefore, f(x1) = 0 ⟹ f(x0 + h) = 0.
By Taylor series expansion, we get
f(x0) + h f′(x0) + (h²/2!) f″(x0) + ⋯ = 0.
As h is small, neglecting the second and higher powers of h, we get h = −f(x0)/f′(x0). Therefore,
(4)  x1 = x0 − f(x0)/f′(x0).
Further, if h1 is the correction on x1, then x2 = x1 + h1 is the correct root.
Therefore, f(x2) = 0 ⟹ f(x1 + h1) = 0.
By Taylor series expansion, we get
f(x1) + h1 f′(x1) + (h1²/2!) f″(x1) + ⋯ = 0.
Neglecting the second and higher powers of h1, we get h1 = −f(x1)/f′(x1). Therefore,
(5)  x2 = x1 − f(x1)/f′(x1).
Proceeding in this way, we get the (n + 1)th corrected root as
(6)  xn+1 = xn − f(xn)/f′(xn).
The formula (6) generates a sequence of successive corrections on an approximate root x0 to get the correct root α of f(x) = 0, provided the sequence is convergent. The formula (6) is known as the iteration formula for the Newton-Raphson method. The number of iterations required depends upon the desired degree of accuracy of the root.
3.1. Convergence of Newton-Raphson Method. Comparing with the iteration method, we may take the iteration function as
φ(x) = x − f(x)/f′(x).
Thus, the above sequence will be convergent if and only if
|φ′(x)| = |1 − ({f′(x)}² − f(x) f″(x))/{f′(x)}²| = |f(x) f″(x)/{f′(x)}²| < 1.
Therefore, |{f′(x)}²| > |f(x) f″(x)|.
Example 3. Find the positive root of x⁴ − x − 10 = 0 correct to four decimal places, using the Newton-Raphson method.
Let f(x) = x⁴ − x − 10. Then f′(x) = 4x³ − 1.
Also f(1) = −10 and f(2) = 4. ∴ f(x) = 0 has a root lying between 1 and 2.
Let us take the initial approximation x0 = 2. The iteration formula is
xn+1 = xn − f(xn)/f′(xn) = xn − (xn⁴ − xn − 10)/(4xn³ − 1),  i.e.,  xn+1 = (3xn⁴ + 10)/(4xn³ − 1).

n    xn         xn+1 = (3xn⁴ + 10)/(4xn³ − 1)
0    2          1.87097
1    1.87097    1.85578
2    1.85578    1.85558
3    1.85558    1.85558

Thus 1.8556 is a root of the given equation correct to four decimal places.
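Formula (6) applied to Example 3 can be sketched as follows; the derivative is supplied explicitly, and the tolerance is an assumption.

```python
# Newton-Raphson sketch for f(x) = x^4 - x - 10 with x0 = 2.

def newton(f, fprime, x0, tol=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)      # iteration formula (6)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = newton(lambda x: x**4 - x - 10, lambda x: 4*x**3 - 1, 2.0)
print(round(root, 4))
```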
3.2. Rate of convergence. An iterative method is said to be of order p, or to have rate of convergence p, if p is the largest positive real number for which there exists a finite constant C ≠ 0 such that |εn+1| ≤ C|εn|^p, where εn is the error in the nth iterate.
If xn+1 is the (n + 1)th approximation of a root α of f(x) = 0 and εn+1 is the corresponding error, we have
(7)  α − xn+1 = εn+1
and xn+1 = xn + hn
⟹ α − xn+1 = α − xn − hn  ⟹  εn+1 = εn − hn, i.e.,
(8)  εn = εn+1 + hn,
where
(9)  hn = −f(xn)/f′(xn),  or  f(xn) + hn f′(xn) = 0.
Again we have α = xn + εn, so
(10)  f(α) = 0 = f(xn + εn) = f(xn) + εn f′(xn) + (εn²/2!) f″(ξn),  (xn < ξn < α).
Subtracting (9) from (10), we get
(εn − hn) f′(xn) + (εn²/2!) f″(ξn) = 0,  or  εn+1 f′(xn) + (εn²/2!) f″(ξn) = 0,
so that
εn+1/εn² = −(1/2) f″(ξn)/f′(xn).
If the iteration converges, i.e., xn, ξn → α as n → ∞,
lim_{n→∞} (α − xn+1)/(α − xn)² = −(1/2) f″(α)/f′(α).
Thus, it is clear that the Newton-Raphson iteration method is a second order iterative process.
4. Regula-Falsi method
This is a method of finding a real root of the equation f(x) = 0 and closely resembles the bisection method. Here we choose two points x0 and x1 such that f(x0) and f(x1) are of opposite signs. Then f(x) = 0 has a root lying between x0 and x1.
The equation of the chord joining A(x0, f(x0)) and B(x1, f(x1)) is
y − f(x0) = [(f(x1) − f(x0))/(x1 − x0)] (x − x0).
The method consists in replacing the arc AB of the curve by the chord AB and taking the point of intersection of the chord with the x-axis as an approximation to the root. So, the abscissa of the point where the chord cuts the x-axis (y = 0) is given by
(11)  x2 = x0 − f(x0) (x1 − x0)/(f(x1) − f(x0)),
which is an approximation to the root.
If now f(x0) and f(x2) are of opposite signs, then the root lies between x0 and x2. So, replacing x1 by x2 in (11), we obtain the next approximation x3. (The root could as well lie between x1 and x2, and we would obtain x3 accordingly.) This procedure is repeated till the root is found to the desired accuracy.
Example 4. Find the positive root of x⁴ − 32 = 0 correct to two decimal places, using the Regula-Falsi method.
Let f(x) = x⁴ − 32. Then f(2) = −16 and f(3) = 49. ∴ f(x) = 0 has a root lying between 2 and 3.
∴ taking x0 = 2, x1 = 3, f(x0) = −16, f(x1) = 49 in the Regula-Falsi method, we get
x2 = x0 − f(x0)(x1 − x0)/(f(x1) − f(x0)) = 2 + 16/65 = 2.2462.
Now f(x2) = f(2.2462) = −6.5438. So, the root lies between 2.2462 and 3.
Taking x0 = 2.2462, x1 = 3, f(x0) = −6.5438, f(x1) = 49 in the Regula-Falsi method, we get
x3 = x0 − f(x0)(x1 − x0)/(f(x1) − f(x0)) = 2.2462 − (−6.5438)(3 − 2.2462)/(49 + 6.5438) = 2.335.
Repeating this process, the successive approximations are x4 = 2.3645, x5 = 2.3770, x6 = 2.3779.
∴ 2.38 is a root of the given equation correct to two decimal places.
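Formula (11) with the endpoint update rule can be sketched as below; the stopping test on |f(x2)| is an illustrative assumption.

```python
# Regula-Falsi sketch for f(x) = x^4 - 32 on [2, 3].

def regula_falsi(f, x0, x1, tol=1e-6, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x0 - f0 * (x1 - x0) / (f1 - f0)   # chord crosses the x-axis here
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        if f0 * f2 < 0:       # root lies between x0 and x2
            x1, f1 = x2, f2
        else:                 # root lies between x2 and x1
            x0, f0 = x2, f2
    return x2

root = regula_falsi(lambda x: x**4 - 32, 2, 3)
print(round(root, 2))
```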
5. Secant Method
The iteration formula for the Newton-Raphson method is
xn+1 = xn − f(xn)/f′(xn),
where x0 is the initial approximation of the root.
In the Secant method the derivative f′(x) is replaced by the difference quotient
f′(xn) ≈ (f(xn) − f(xn−1))/(xn − xn−1).
∴ the iteration formula for the Secant method is
xn+1 = xn − f(xn) (xn − xn−1)/(f(xn) − f(xn−1)),
where x0 and x1 are initial approximations of the root.
Example 5. Find a root of the equation x³ − 5x + 1 = 0 correct to three decimal places.
Let f(x) = x³ − 5x + 1. Then f(0) = 1 and f(1) = −3.
∴ f(x) = 0 has a root between 0 and 1. Let us take x0 = 0 and x1 = 1. Then f(x0) = 1 and f(x1) = −3. By the Secant method,
x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0)) = 1 − (−3)(1 − 0)/(−3 − 1) = 0.25;  f(x2) = −0.234375.
Similarly,
x3 = x2 − f(x2)(x2 − x1)/(f(x2) − f(x1)) = 0.186441;  f(x3) = 0.074276.
Repeating this process, the successive approximations are x4 = 0.201736, x5 = 0.201640.
∴ 0.202 is a root of the given equation correct to three decimal places.
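The secant iteration of Example 5 can be sketched as follows; the tolerance is an assumption.

```python
# Secant method sketch for f(x) = x^3 - 5x + 1 with x0 = 0, x1 = 1.

def secant(f, x0, x1, tol=1e-8, max_iter=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # derivative replaced by difference quotient
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

root = secant(lambda x: x**3 - 5*x + 1, 0.0, 1.0)
print(round(root, 3))
```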
5.1. Rate of convergence. If xn+1 is the (n + 1)th approximation of a root α of f(x) = 0 and εn+1 = α − xn+1 is the corresponding error, we have
εn+1 = α − xn+1 = α − xn + f(xn)(xn − xn−1)/(f(xn) − f(xn−1)).
Writing xn = α − εn and xn−1 = α − εn−1, so that xn − xn−1 = εn−1 − εn, this becomes
εn+1 = εn + (εn−1 − εn) f(α − εn)/[f(α − εn) − f(α − εn−1)].
Expanding in Taylor series about α and using f(α) = 0,
f(α − εn) = −εn f′(α) + (εn²/2) f″(α) − ⋯,
so that
f(α − εn) − f(α − εn−1) = (εn−1 − εn) f′(α) − ½(εn−1² − εn²) f″(α) + ⋯ = (εn−1 − εn)[f′(α) − ½(εn + εn−1) f″(α) + ⋯].
Hence
εn+1 = εn + [−εn f′(α) + (εn²/2) f″(α) + ⋯]/[f′(α) − ½(εn + εn−1) f″(α) + ⋯]
     = εn + [−εn + (εn²/2)(f″(α)/f′(α)) + ⋯][1 + ½(εn + εn−1)(f″(α)/f′(α)) + ⋯]
     = −½ (f″(α)/f′(α)) εn εn−1 + O(εn² εn−1 + εn εn−1²).
Thus we have
(12)  εn+1 = C εn εn−1,
where C = −½ f″(α)/f′(α) and higher powers of εn are neglected.
The relation (12) is known as the error equation. Keeping in view the definition of convergence, we seek a relation of the form
(13)  εn+1 = A εn^p,
where A and p are to be determined.
From (13), we have εn = A εn−1^p, or εn−1 = A^(−1/p) εn^(1/p).
Substituting εn+1 and εn−1 in (12), we have
(14)  A εn^p = C A^(−1/p) εn^(1 + 1/p).
Comparing the powers of εn on both sides, we get
p = 1 + 1/p  ⟹  p = ½(1 ± √5).
Neglecting the minus sign, we find that the rate of convergence of the Secant method is p = ½(1 + √5) ≈ 1.618. Comparing the coefficients in (14), A^(1 + 1/p) = C, i.e., A = C^(p/(p+1)). Hence the Secant method has superlinear rate of convergence 1.62.
Interpolation and Approximation
This chapter is devoted to finding an approximation to a given function by a class of simpler functions, mainly polynomials. The main uses of polynomial interpolation are:
(i) Reconstructing the function when it is not given explicitly and only the values of f(x) and/or its certain order derivatives at a set of points, known as nodes/tabular points/arguments, are given.
(ii) Replacing the function f(x) by an interpolating polynomial P(x) so that many common operations such as determination of roots, differentiation, and integration, which are intended for the function f(x), may be performed using P(x).
Definition: A polynomial P(x) is called an interpolating polynomial if the values of P(x) and/or its certain order derivatives coincide with those of f(x) and/or its same order derivatives at one or more tabular points.
The reason behind choosing polynomials ahead of any other functions is that polynomials approximate continuous functions with any desired accuracy. That is, for any continuous function f(x) on an interval a ≤ x ≤ b and error bound β > 0, there is a polynomial pn(x) (of sufficiently large degree) such that |f(x) − pn(x)| < β for all x ∈ [a, b]. This is the famous Weierstrass approximation theorem.
1. Lagrange’s interpolation
2. Newton’s divided difference interpolation
3. Newton’s forward and backward difference interpolation
1. Lagrange’s Interpolation
Assume that f(x) is continuous on [a, b] and further assume that we have n + 1 distinct points a ≤ x0 < x1 < x2 < ⋯ < xn−1 < xn ≤ b. Let the values of the function f(x) at these points be known and denoted by f0 = f(x0), f1 = f(x1), …, fn = f(xn). We aim to find a polynomial Pn(x) = a0 + a1x + a2x² + ⋯ + anxⁿ satisfying Pn(xi) = f(xi), i = 0, 1, …, n.
Linear Interpolation
Let n = 1. Then the nodes are x0 and x1. The Lagrange linear interpolating polynomial is P1(x) = a0 + a1x, where
f0 = a0 + a1x0,  f1 = a0 + a1x1.
Solving for a0 and a1, P1(x) can be written as
P1(x) = l0(x) f0 + l1(x) f1,
where l0(x) = (x − x1)/(x0 − x1) and l1(x) = (x − x0)/(x1 − x0) are called Lagrange's fundamental polynomials. The properties of the Lagrange fundamental polynomials are as follows:
(i) l0(x) + l1(x) = 1.
(ii) l0(x0) = 1, l0(x1) = 0; l1(x0) = 0, l1(x1) = 1.
(iii) The degrees of l0(x) and l1(x) are one.
The truncation error in linear interpolation is
E1(x, f) = ½ (x − x0)(x − x1) f″(ξ),  x0 ≤ ξ ≤ x1,
and the bound for the truncation error in linear interpolation is given by
|E1(x, f)| ≤ ((x1 − x0)²/8) max_{x0 ≤ x ≤ x1} |f″(x)|.
Example 1: Given that f(2) = 4, f(2.5) = 5.5, find the linear interpolating polynomial using Lagrange's interpolation and hence find an approximate value of f(2.2).
l0(x) = (x − x1)/(x0 − x1) = (x − 2.5)/(2 − 2.5) = 2(2.5 − x) = 5 − 2x,
l1(x) = (x − x0)/(x1 − x0) = (x − 2)/(2.5 − 2) = 2(x − 2) = 2x − 4.
P1(x) = l0(x) f0 + l1(x) f1 = 4(5 − 2x) + 5.5(2x − 4) = 3x − 2.
Hence f(2.2) ≈ P1(2.2) = 3(2.2) − 2 = 4.6.
Quadratic Interpolation
Here n = 2. We need to find an interpolating polynomial of the form P2(x) = a0 + a1x + a2x², where
a0 + a1x0 + a2x0² = f0,
a0 + a1x1 + a2x1² = f1,
a0 + a1x2 + a2x2² = f2.
Solving, P2(x) can be written as
P2(x) = l0(x) f0 + l1(x) f1 + l2(x) f2,
where l0(x), l1(x) and l2(x) are Lagrange's fundamental polynomials, defined by
l0(x) = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)],  l1(x) = (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)],  l2(x) = (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)].
The truncation error is
E2(x, f) = (1/3!) (x − x0)(x − x1)(x − x2) f‴(ξ),  x0 ≤ ξ ≤ x2,
and the bound for the quadratic Lagrange interpolating polynomial is
|E2(x, f)| ≤ (M3/6) max_x |(x − x0)(x − x1)(x − x2)|,  M3 = max_{x0 ≤ x ≤ x2} |f‴(x)|.
General Formula
The general Lagrange interpolating polynomial for n + 1 nodes x0, x1, …, xn is given by
Pn(x) = Σ_{k=0}^{n} lk(x) fk,  where  lk(x) = w(x)/[(x − xk) w′(xk)]  and  w(x) = (x − x0)(x − x1)⋯(x − xn).
Example: Find the Lagrange interpolating polynomial for the data f(0) = 1, f(1) = 3, f(3) = 55.
Here
l0(x) = (x − 1)(x − 3)/[(0 − 1)(0 − 3)] = (1/3)(x − 1)(x − 3),
l1(x) = (x − 0)(x − 3)/[(1 − 0)(1 − 3)] = −(1/2) x(x − 3),
l2(x) = (x − 0)(x − 1)/[(3 − 0)(3 − 1)] = (1/6) x(x − 1).
P2(x) = l0(x) f0 + l1(x) f1 + l2(x) f2 = (1/3)(x − 1)(x − 3) − (3/2) x(x − 3) + (55/6) x(x − 1)
      = 8x² − 6x + 1.
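The general formula can be sketched directly from the definition of the fundamental polynomials; the data below reuses the worked example, and checking P2(2) = 8(4) − 12 + 1 = 21 confirms the construction.

```python
# Lagrange interpolation sketch: P_n(x) = sum_k l_k(x) f_k.

def lagrange(xs, fs, x):
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        lk = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                lk *= (x - xj) / (xk - xj)   # fundamental polynomial l_k at x
        total += lk * fk
    return total

# P2(x) = 8x^2 - 6x + 1 for this data, so P2(2) = 21
print(lagrange([0, 1, 3], [1, 3, 55], 2))
```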
2. Newton's Divided Difference Interpolation
The zeroth order divided difference is f[x0] = f(x0) = f0, and the first order divided difference is
f[x0, x1] = (f1 − f0)/(x1 − x0).
Higher order divided differences are defined recursively, e.g., f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0). The divided differences are conveniently arranged in a table:

Nodes   Functional values   1st order div. diff.   2nd order div. diff.
x0      f[x0]
x1      f[x1]               f[x0, x1]
x2      f[x2]               f[x1, x2]              f[x0, x1, x2]

Newton's divided difference interpolating polynomial is
Pn(x) = f[x0] + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2] + ⋯ + (x − x0)⋯(x − xn−1) f[x0, x1, ⋯, xn].
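The divided-difference table and the Newton form can be sketched as below; the data f(0) = 1, f(1) = 3, f(3) = 55 from the Lagrange example is reused purely for illustration, and the two methods agree on the same polynomial.

```python
# Newton divided-difference sketch: build the leading coefficients, then
# evaluate P_n(x) in nested (Horner-like) form.

def divided_differences(xs, fs):
    """Return [f[x0], f[x0,x1], ..., f[x0..xn]]."""
    coef = list(fs)
    n = len(xs)
    for order in range(1, n):
        for i in range(n - 1, order - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - order])
    return coef

def newton_eval(xs, coef, x):
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, fs = [0, 1, 3], [1, 3, 55]
coef = divided_differences(xs, fs)    # [1, 2, 8] for this data
print(newton_eval(xs, coef, 2))       # same polynomial as Lagrange: 21
```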
3. Newton's Forward Difference Interpolation
For equispaced nodes xi = x0 + ih, the forward difference operator is defined by Δfi = fi+1 − fi; repeated application of the difference operator leads to the higher order differences Δ²fi = Δfi+1 − Δfi, and so on. In terms of forward differences, the interpolating polynomial becomes
Pn(x) = f0 + ((x − x0)/h) Δf0 + ((x − x0)(x − x1)/(2! h²)) Δ²f0 + ⋯ + ((x − x0)⋯(x − xn−1)/(n! hⁿ)) Δⁿf0.
Putting u = (x − x0)/h, the interpolating polynomial using the forward difference operator becomes
Pn(x) = f0 + u Δf0 + (u(u − 1)/2!) Δ²f0 + ⋯ + (u(u − 1)⋯(u − n + 1)/n!) Δⁿf0.
Numerical Differentiation
Differentiating Newton's forward difference interpolating polynomial with respect to p = (x − x0)/h, we get
dy/dp = Δy0 + ((2p − 1)/2!) Δ²y0 + ((3p² − 6p + 2)/3!) Δ³y0 + ⋯,
and since dp/dx = 1/h,
dy/dx = (dy/dp)(dp/dx) = (1/h)[Δy0 + ((2p − 1)/2!) Δ²y0 + ((3p² − 6p + 2)/3!) Δ³y0 + ((4p³ − 18p² + 22p − 6)/4!) Δ⁴y0 + ⋯].
At x = x0 we have p = 0, so
(dy/dx)_{x0} = (1/h)[Δy0 − (1/2) Δ²y0 + (1/3) Δ³y0 − (1/4) Δ⁴y0 + (1/5) Δ⁵y0 − (1/6) Δ⁶y0 + ⋯].
Similarly,
d²y/dx² = (d/dp)(dy/dx)(dp/dx) = (1/h²)[Δ²y0 + ((6p − 6)/3!) Δ³y0 + ((12p² − 36p + 22)/4!) Δ⁴y0 + ⋯],
and at x = x0 (p = 0),
(d²y/dx²)_{x=x0} = (1/h²)[Δ²y0 − Δ³y0 + (11/12) Δ⁴y0 − (5/6) Δ⁵y0 + ⋯].
Using Newton's backward difference formula with p = (x − xn)/h, one obtains likewise
dy/dx = (1/h)[∇yn + ((2p + 1)/2!) ∇²yn + ((3p² + 6p + 2)/3!) ∇³yn + ⋯],
d²y/dx² = (1/h²)[∇²yn + ((6p + 6)/3!) ∇³yn + ((6p² + 18p + 11)/12) ∇⁴yn + ⋯].
Example: From the following table of values, find dy/dx and d²y/dx² at x = 1.1 and at x = 1.6.
x     y        Δ        Δ²       Δ³       Δ⁴       Δ⁵       Δ⁶
1.0   7.989
               0.414
1.1   8.403             −0.036
               0.378              0.006
1.2   8.781             −0.030             −0.002
               0.348              0.004              0.001
1.3   9.129             −0.026             −0.001             0.002
               0.322              0.003              0.003
1.4   9.451             −0.023             0.002
               0.299              0.005
1.5   9.750             −0.018
               0.281
1.6   10.031
Using the forward difference formulas at x0 = 1.1 with h = 0.1, Δy0 = 0.378, Δ²y0 = −0.030, Δ³y0 = 0.004, Δ⁴y0 = −0.001, Δ⁵y0 = 0.003:
(dy/dx)_{1.1} = (1/0.1)[0.378 − (1/2)(−0.030) + (1/3)(0.004) − (1/4)(−0.001) + (1/5)(0.003)] = 3.952,
(d²y/dx²)_{1.1} = (1/0.1²)[−0.030 − 0.004 + (11/12)(−0.001) − (5/6)(0.003)] = −3.74.
Using the backward difference formulas at xn = 1.6 with h = 0.1, ∇yn = 0.281, ∇²yn = −0.018, ∇³yn = 0.005, ∇⁴yn = 0.002, ∇⁵yn = 0.003, ∇⁶yn = 0.002:
(dy/dx)_{xn} = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + (1/5)∇⁵yn + (1/6)∇⁶yn + ⋯],
(d²y/dx²)_{xn} = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + (137/180)∇⁶yn + ⋯].
Therefore,
(dy/dx)_{1.6} = (1/0.1)[0.281 + (1/2)(−0.018) + (1/3)(0.005) + (1/4)(0.002) + (1/5)(0.003) + (1/6)(0.002)] = 2.75,
(d²y/dx²)_{1.6} = (1/0.1²)[−0.018 + 0.005 + (11/12)(0.002) + (5/6)(0.003) + (137/180)(0.002)] = −0.715.
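The difference table and the p = 0 forward derivative formula can be sketched in Python; the y-values are taken from the table above starting at x0 = 1.1, and the result reproduces dy/dx ≈ 3.952.

```python
# Forward-difference table sketch, then dy/dx at x0 via
# (1/h)[Δy0 - Δ²y0/2 + Δ³y0/3 - Δ⁴y0/4 + ...].

def difference_table(ys):
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

ys = [8.403, 8.781, 9.129, 9.451, 9.750, 10.031]   # tabulated y from x = 1.1 on
h = 0.1
table = difference_table(ys)
leading = [table[k][0] for k in range(1, len(table))]      # Δy0, Δ²y0, ...
dydx = sum((-1) ** k * d / (k + 1) for k, d in enumerate(leading)) / h
print(round(dydx, 3))
```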
Numerical Integration
Let y = f(x) be a continuous function on [a, b]. We wish to evaluate
I = ∫_a^b y dx = ∫_a^b f(x) dx.
Divide (a, b) into n equal subintervals by the points a = x0, x1, x2, …, xn = b, where
h = (b − a)/n.
Trapezoidal rule:
∫_a^b f(x) dx ≈ (h/2)[y0 + 2(y1 + y2 + y3 + ⋯ + yn−1) + yn].
Simpson's 1/3rd rule (n even):
∫_a^b f(x) dx ≈ (h/3)[y0 + 4(y1 + y3 + y5 + ⋯) + 2(y2 + y4 + y6 + ⋯) + yn].
Simpson's 3/8th rule (n a multiple of 3):
∫_a^b f(x) dx ≈ (3h/8)[y0 + 3(y1 + y2 + y4 + y5 + ⋯) + 2(y3 + y6 + y9 + ⋯) + yn].
Example: Evaluate ∫_0^1 dx/(1 + x²) taking n = 6.
Here h = (b − a)/n = (1 − 0)/6 = 1/6 and y = f(x) = 1/(1 + x²), so
x0 = 0,   y0 = 1/(1 + 0²) = 1,
x1 = 1/6, y1 = 1/(1 + (1/6)²) = 36/37,
x2 = 2/6, y2 = 1/(1 + (2/6)²) = 0.9,
x3 = 3/6, y3 = 1/(1 + (3/6)²) = 0.8,
x4 = 4/6, y4 = 1/(1 + (4/6)²) = 9/13,
x5 = 5/6, y5 = 1/(1 + (5/6)²) = 36/61,
x6 = 1,   y6 = 1/(1 + 1²) = 0.5.
By the trapezoidal rule,
∫_0^1 dx/(1 + x²) = (h/2)[y0 + 2(y1 + y2 + y3 + y4 + y5) + y6]
= (1/12)[1 + 2(36/37 + 0.9 + 0.8 + 9/13 + 36/61) + 0.5]
= 0.784241.
By Simpson's 1/3rd rule,
∫_0^1 dx/(1 + x²) = (h/3)[y0 + 4(y1 + y3 + y5) + 2(y2 + y4) + y6]
= (1/18)[1 + 4(36/37 + 0.8 + 36/61) + 2(0.9 + 9/13) + 0.5]
= 0.785398.
By Simpson's 3/8th rule,
∫_0^1 dx/(1 + x²) = (3h/8)[y0 + 3(y1 + y2 + y4 + y5) + 2(y3) + y6]
= (1/16)[1 + 3(36/37 + 0.9 + 9/13 + 36/61) + 2(0.8) + 0.5]
= 0.785396.
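The composite rules above can be sketched as follows; applied to the same integral with n = 6, they reproduce the tabulated results.

```python
# Composite trapezoidal and Simpson 1/3 rules for ∫_0^1 dx/(1+x^2), n = 6.

def trapezoid(f, a, b, n):
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h / 2 * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

def simpson13(f, a, b, n):          # n must be even
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h / 3 * (ys[0] + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-1:2]) + ys[-1])

f = lambda x: 1 / (1 + x * x)
print(round(trapezoid(f, 0, 1, 6), 6))   # 0.784241
print(round(simpson13(f, 0, 1, 6), 6))   # 0.785398
```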
Gauss-Legendre Integration Rules
The limits of integration for Gauss-Legendre integration rules are [−1, 1]. Therefore, to evaluate
I = ∫_a^b f(x) dx,
the required transformation is
x = ½[(b − a)t + (b + a)].
Then f(x) = f{[(b − a)t + (b + a)]/2} and dx = [(b − a)/2] dt, and the integral becomes
I = ∫_a^b f(x) dx = ∫_{−1}^{1} g(t) dt,
where
g(t) = [(b − a)/2] f(½[(b − a)t + (b + a)]).
Therefore, we shall derive formulas to evaluate ∫_{−1}^{1} g(t) dt; without loss of generality, we write this integral as ∫_{−1}^{1} f(t) dt.
Example: Evaluate I = ∫_1^2 2x dx/(1 + x⁴) using the Gauss-Legendre one point, two point and three point rules.
With x = (t + 3)/2 and dx = dt/2, the integral becomes I = ∫_{−1}^{1} f(t) dt with f(t) = x/(1 + x⁴), x = (t + 3)/2.
Using the one point Gauss rule,
I ≈ 2 f(0) = 2 (24/(16 + 81)) = 48/97 = 0.494845.
Using the two point Gauss rule, we obtain
I ≈ f(−1/√3) + f(1/√3) = f(−0.577350) + f(0.577350)
= 0.384183 + 0.159193 = 0.543376.
Using the three point Gauss rule, we obtain
I ≈ (1/9)[5 f(−√(3/5)) + 8 f(0) + 5 f(√(3/5))]
= (1/9)[5 f(−0.774597) + 8 f(0) + 5 f(0.774597)]
= (1/9)[5(0.439299) + 8(0.247423) + 5(0.137889)] = 0.540592.
Example: Evaluate the integral I = ∫_0^1 dx/(1 + x) using the Gauss three point formula.
Solution: We reduce the interval [0, 1] to [−1, 1] to apply the Gauss three point rule. Here a = 0 and b = 1.
Writing x = ½[(b − a)t + (b + a)], we get x = (t + 1)/2 and dx = dt/2. The integral becomes
I = ∫_{−1}^{1} dt/(t + 3) = ∫_{−1}^{1} f(t) dt,  f(t) = 1/(t + 3).
Then
I ≈ (1/9)[5 f(−√(3/5)) + 8 f(0) + 5 f(√(3/5))]
= (1/9)[5(0.449357) + 8(0.333333) + 5(0.264929)] = 0.693122.
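The three point rule and the interval mapping can be sketched together; applied to ∫_0^1 dx/(1 + x), both forms reproduce the value above (exact: ln 2 = 0.693147).

```python
# Gauss-Legendre 3-point rule on [-1, 1], plus the affine map from [a, b].
import math

def gauss3(g):
    t = math.sqrt(3 / 5)
    return (5 * g(-t) + 8 * g(0.0) + 5 * g(t)) / 9

def gauss3_ab(f, a, b):
    # x = ((b - a) t + (b + a)) / 2, dx = (b - a)/2 dt
    return (b - a) / 2 * gauss3(lambda t: f(((b - a) * t + (b + a)) / 2))

print(round(gauss3(lambda t: 1 / (t + 3)), 6))          # mapped integrand
print(round(gauss3_ab(lambda x: 1 / (1 + x), 0, 1), 6))  # same value via the map
```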
Numerical Solution to ODE
We know that an ODE of the first order is of the form F(x, y, y′) = 0 and can often be written in the explicit form y′ = f(x, y). An initial value problem for this equation is of the form
y′ = f(x, y),  y(x0) = y0,
where x0 and y0 are given and we assume that the problem has a unique solution on some open interval a < x < b containing x0. We shall discuss methods of computing approximate numeric values of the solution of the above initial value problem at the equidistant points on the x-axis x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, …, where the step size h is a fixed positive real number. These methods are suggested by the Taylor series expansion of an infinitely differentiable function.
If y is (n + 1) times differentiable, the remainder after n terms is
Rn(x) = (y^(n+1)(c)/(n + 1)!) (x − x0)^(n+1)
for some c between x0 and x. Moreover, if
lim_{n→∞} Rn(x) = 0,
then the Taylor series for y expanded about x0 converges to y(x), that is,
y(x) = Σ_{k=0}^{∞} (y^(k)(x0)/k!) (x − x0)^k.
Taylor Series Method
For the initial value problem
y′ = f(x, y),  y(x0) = y0,
where x0 and y0 are given, we assume that the problem has a unique solution on some open interval a < x < b containing x0 and that y is an infinitely differentiable function on (a, b).
Let x = x0 + h. Then by Taylor series expansion of y we have
y(x0 + h) = y(x) = Σ_{k=0}^{∞} (y^(k)(x0)/k!) (x − x0)^k = Σ_{k=0}^{∞} (y^(k)(x0)/k!) h^k.
That is,
y(x0 + h) = y(x0) + h y′(x0) + (h²/2) y″(x0) + (h³/3!) y‴(x0) + ⋯.
To compute approximate numeric values of the solution of the above initial value problem at the equidistant points x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, …, we use this expansion repeatedly. Thus the approximate values are
y(x1) = y(x0 + h) = y(x0) + h y′(x0) + (h²/2) y″(x0) + (h³/3!) y‴(x0) + ⋯,
y(x2) = y(x1 + h) = y(x1) + h y′(x1) + (h²/2) y″(x1) + (h³/3!) y‴(x1) + ⋯,
y(x3) = y(x2 + h) = y(x2) + h y′(x2) + (h²/2) y″(x2) + (h³/3!) y‴(x2) + ⋯,
and so on. In general,
y(xn+1) = y(xn + h) = y(xn) + h y′(xn) + (h²/2) y″(xn) + (h³/3!) y‴(xn) + ⋯.
Truncating after the third derivative term gives
y(xn+1) ≈ y(xn) + h y′(xn) + (h²/2) y″(xn) + (h³/3!) y‴(xn),
that is,
yn+1 = yn + h y′n + (h²/2) y″n + (h³/3!) y‴n.
Example: Solve the initial value problem y′ = xy, y(0) = 1 with h = 0.2, using the Taylor series method up to the 3rd order derivative, and find y(1).
Given that y′ = xy, we get y″ = (1 + x²)y and y‴ = (3x + x³)y. Using h = 0.2 and the given initial condition y(0) = 1, that is x0 = 0, y0 = 1, we find y′0 = 0, y″0 = 1, y‴0 = 0.
Thus,
y1 = y0 + h y′0 + (h²/2) y″0 + (h³/3!) y‴0 = 1.02.
Similarly we calculate y2, y3, y4, y5, and obtain the solution of the initial value problem as
y(0.2) = 1.0200, y(0.4) = 1.0828, y(0.6) = 1.1964, y(0.8) = 1.3757, y(1.0) = 1.6463.
Note: The exact solution is y(x) = e^(x²/2), and so the exact value of y(1) is 1.6487.
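The third order Taylor step for this example (y′ = xy, with y″ = (1 + x²)y and y‴ = (3x + x³)y as derived above) can be sketched as follows; five steps of size h = 0.2 reproduce the tabulated y(1.0).

```python
# Third-order Taylor method sketch for y' = xy, y(0) = 1, h = 0.2.

def taylor3_step(x, y, h):
    d1 = x * y                      # y'
    d2 = (1 + x * x) * y            # y''
    d3 = (3 * x + x ** 3) * y       # y'''
    return y + h * d1 + h ** 2 / 2 * d2 + h ** 3 / 6 * d3

x, y, h = 0.0, 1.0, 0.2
for _ in range(5):
    y = taylor3_step(x, y, h)
    x += h
print(round(y, 4))   # compare with the exact e^{1/2} = 1.6487
```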
Example: Solve the initial value problem y′ = x + y, y(0) = 0 with h = 0.25, using the Taylor series method up to the 3rd order derivative, and find y(1).
Here y″ = 1 + y′ and y‴ = y″, so y′0 = 0, y″0 = 1, y‴0 = 1. Thus,
y1 = y0 + h y′0 + (h²/2) y″0 + (h³/3!) y‴0 = 0.0339.
Similarly, we calculate y2, y3, y4.
Note: The exact solution is y(x) = e^x − x − 1, and so the exact value of y(1) is 0.7183.
Example: Solve the initial value problem
y′ = (x + y)²,  y(0) = 0;  h = 0.1,
using the Taylor series method up to the 3rd order derivative. Do 10 steps.
Solution: We have to solve the initial value problem y′ = (x + y)², y(0) = 0 with h = 0.1. That is, we have to find the values of y(x) at
x1 = 0.1, x2 = 0.2, x3 = 0.3, …, x9 = 0.9, x10 = 1.0.
It is given that x0 = 0, y(x0) = 0, h = 0.1. Let y(xn) be denoted by yn. We have to use the Taylor series method up to the 3rd order derivative, that is,
yn+1 = yn + h y′n + (h²/2) y″n + (h³/3!) y‴n.
Given that y′ = (x + y)². So
y″ = 2(x + y)(1 + y′)
and
y‴ = 2(1 + y′)(1 + 3y′) = 2(1 + 4y′ + 3y′²).
Using h = 0.1 and the given initial condition y(0) = 0, that is x0 = 0, y0 = 0, we do the following 10 steps.
Step 1.
y′0 = (x0 + y0)² = 0,  y″0 = 2(x0 + y0)(1 + y′0) = 0,  y‴0 = 2(1 + 4y′0 + 3y′0²) = 2.
Thus,
y1 = y0 + h y′0 + (h²/2) y″0 + (h³/3!) y‴0 = 0.0003.
Similarly, we do the remaining steps.
Note: The exact solution is y(x) = tan x − x, and so the exact value of y(1) is 0.5574.
Euler's method
From the Taylor series method, we have
y(xn+1) = y(xn) + h y′(xn) + (h²/2) y″(xn) + (h³/3!) y‴(xn) + ⋯.
For small h the higher powers h², h³, … are very small, and dropping all of them gives the crude approximation
y(xn+1) ≈ y(xn) + h y′(xn),
i.e.,
yn+1 = yn + h f(xn, yn).
Geometrically, this is an approximation of the solution curve by a polygon whose first side is tangent to the curve at x0.
Example: Solve y′ = (x − y)², y(0) = 0 with h = 0.1 by Euler's method, doing 10 steps.
Using yn+1 = yn + h f(xn, yn), we have
y1 = y0 + h f(x0, y0) = 0 + 0.1 ⋅ (0 − 0)² = 0.
Similarly, we calculate y2, …, y10, as given in the table below.
n     xn     yn
1 0.1 0.0000
2 0.2 0.0010
3 0.3 0.0050
4 0.4 0.0137
5 0.5 0.0286
6 0.6 0.0508
7 0.7 0.0810
8 0.8 0.1193
9 0.9 0.1656
10 1.0 0.2196
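The Euler steps can be sketched directly; the right-hand side (x − y)² is inferred from the first step shown above, and ten steps of size 0.1 reproduce the last table entry.

```python
# Euler's method sketch for y' = (x - y)^2, y(0) = 0, h = 0.1, 10 steps.

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
        x += h
    return y

y10 = euler(lambda x, y: (x - y) ** 2, 0.0, 0.0, 0.1, 10)
print(round(y10, 4))   # 0.2196; exact solution y = x - tanh x gives y(1) = 0.2384
```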
Modified Euler (Predictor-Corrector) Method
Example: Solve y′ = xy², y(0) = 1 with h = 0.25 by the modified Euler method, iterating the corrector
yn+1^(k+1) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1^(k))]
until the iterates repeat, the predictor being yn+1^(0) = yn + h f(xn, yn). The first step gives y1 = 1.0334. Then
y2^(0) = 1.1001, y2^(1) = 1.1424, y2^(2) = 1.1483, y2^(3) = 1.1492, y2^(4) = 1.1493, y2^(5) = 1.1493.
Thus, y2 = 1.1493. Again,
y3^(0) = 1.3144, y3^(1) = 1.3938, y3^(2) = 1.4140, y3^(3) = 1.4193, y3^(4) = 1.4207, y3^(5) = 1.4212, y3^(6) = 1.4212.
Thus, y3 = 1.4212. Similarly, y4 = 2.2349.
Therefore, y1 = 1.0334, y2 = 1.1493, y3 = 1.4212, y4 = 2.2349.
Note: The exact solution of the above initial value problem is y(x) = 2/(2 − x²), so the exact value of y(x1) is 1.0323.
Runge-Kutta Methods
Integrating the differential equation y′ = f(x, y) over the interval [xn, xn+1], we get
∫_{xn}^{xn+1} (dy/dx) dx = ∫_{xn}^{xn+1} f(x, y) dx.
Note that y′, and hence f(x, y), is the slope of the solution curve. Further, the integrand on the right hand side is the slope of the solution curve, which changes continuously in [xn, xn+1]. By approximating the continuously varying slope in [xn, xn+1] by a fixed slope, we obtain the Euler, Heun's and modified Euler methods. The basic idea of Runge-Kutta methods is to approximate the integral by a weighted average of slopes, evaluated at a number of points in [xn, xn+1]:
(1)  yn+1 = yn + Σ_{i=1}^{v} wi Ki,
where
Ki = h f(xn + ci h, yn + Σ_{m=1}^{i−1} aim Km),
with c1 = 0.
For v = 1, w1 = 1, the equation (1) becomes the Euler method with order p = 1, and this is the lowest order Runge-Kutta method.
We now list a few Runge-Kutta methods.
Classical method
yn+1 = yn + (1/6)(K1 + 2K2 + 2K3 + K4),
K1 = h f(xn, yn),
K2 = h f(xn + h/2, yn + K1/2),
K3 = h f(xn + h/2, yn + K2/2),
K4 = h f(xn + h, yn + K3).
Example: Given y′ = x³ + y, y(0) = 2, compute y(0.2), y(0.4) and y(0.6) using the Runge-Kutta method of fourth order.
Solution: With h = 0.2, x0 = 0, y0 = 2:
k1 = h f(x0, y0) = 0.2 f(0, 2) = (0.2)(2.0) = 0.4,
k2 = h f(x0 + h/2, y0 + k1/2) = 0.2 f(0.1, 2.2) = (0.2)(2.201) = 0.4402,
k3 = h f(x0 + h/2, y0 + k2/2) = 0.2 f(0.1, 2.2201) = (0.2)(2.2211) = 0.44422,
k4 = h f(x0 + h, y0 + k3) = 0.2 f(0.2, 2.44422) = (0.2)(2.45222) = 0.490444,
y(0.2) ≈ y1 = 2.0 + (1/6)[0.4 + 2(0.4402) + 2(0.44422) + 0.490444] = 2.443214.
For n = 1, we have x1 = 0.2, y1 = 2.443214:
k1 = h f(x1, y1) = 0.2 f(0.2, 2.443214) = (0.2)(2.451214) = 0.490243,
k2 = h f(x1 + h/2, y1 + k1/2) = 0.2 f(0.3, 2.443214 + 0.245122) = (0.2)(2.715336) = 0.543067,
and continuing similarly, y(0.4) ≈ y2 = 2.990579.
For n = 2, x2 = 0.4, y2 = 2.990579:
k2 = h f(x2 + h/2, y2 + k1/2) = 0.2 f(0.5, 2.990579 + 0.305458) = (0.2)(3.421037) = 0.684207,
and so on.
Example: Given y′ = −2xy², y(0) = 1, compute y(0.2) and y(0.4) using the Runge-Kutta method of fourth order with h = 0.2.
Solution: With x0 = 0, y0 = 1:
k1 = h f(x0, y0) = −2(0.2)(0)(1)² = 0,
k2 = h f(x0 + h/2, y0 + k1/2) = −2(0.2)(0.1)(1)² = −0.04,
k3 = h f(x0 + h/2, y0 + k2/2) = −2(0.2)(0.1)(0.98)² = −0.038416,
k4 = h f(x0 + h, y0 + k3) = −2(0.2)(0.2)(0.961584)² = −0.0739715,
y(0.2) ≈ y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
= 1.0 + (1/6)[0.0 − 0.08 − 0.076832 − 0.0739715] = 0.9615328.
For n = 1, x1 = 0.2, y1 = 0.9615328:
k1 = h f(x1, y1) = −2(0.2)(0.2)(0.9615328)² = −0.0739636,
k2 = h f(x1 + h/2, y1 + k1/2) = −2(0.2)(0.3)(0.924551)² = −0.1025753,
k3 = h f(x1 + h/2, y1 + k2/2) = −2(0.2)(0.3)(0.9102451)² = −0.0994255,
k4 = h f(x1 + h, y1 + k3) = −2(0.2)(0.4)(0.8621073)² = −0.1189166,
y(0.4) ≈ y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4)
= 0.9615328 + (1/6)[−0.0739636 − 0.2051506 − 0.1988510 − 0.1189166]
= 0.8620525.
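The classical fourth order step can be sketched as below; two steps applied to y′ = −2xy², y(0) = 1 reproduce y(0.4) from the example.

```python
# Classical 4th-order Runge-Kutta sketch for y' = -2xy^2, y(0) = 1, h = 0.2.

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: -2 * x * y * y
x, y, h = 0.0, 1.0, 0.2
for _ in range(2):
    y = rk4_step(f, x, y, h)
    x += h
print(round(y, 7))   # y(0.4); exact solution 1/(1 + x^2) gives 0.8620690
```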
Solution of System of Linear Equations
(Gauss-Jacobi and Gauss-Seidel Methods)
A matrix A is said to be strictly diagonally dominant if
|aii| > Σ_{j≠i} |aij| for every i,
where aij denotes the entry in the ith row and jth column of the matrix.
A strictly diagonally dominant matrix (or an irreducibly diagonally dominant matrix) is non-singular. The Gauss-Jacobi and Gauss-Seidel methods for solving a linear system of equations converge if the matrix is strictly diagonally dominant. They converge for any initial approximation x^(0); generally x^(0) = 0 is taken in the absence of any better initial approximation.
Iterative Methods:
To solve a system of linear equations, we use two types of methods: direct methods and iterative methods. Direct methods include the Gauss elimination method and the Gauss-Jordan method. Iterative methods include the Gauss-Jacobi method and the Gauss-Seidel method.
Gauss-Jacobi Method
Step-1: Writing the system Ax = b with aii ≠ 0, solve the first equation for x1:
x1 = (1/a11)[b1 − a12 x2 − a13 x3 − ⋯ − a1n xn].
Step-2: Similarly,
x2 = (1/a22)[b2 − a21 x1 − a23 x3 − ⋯ − a2n xn],
⋮
x_{n−1} = (1/a_{n−1,n−1})[b_{n−1} − a_{n−1,1} x1 − a_{n−1,2} x2 − ⋯ − a_{n−1,n−2} x_{n−2} − a_{n−1,n} xn],
xn = (1/a_nn)[bn − a_{n1} x1 − a_{n2} x2 − ⋯ − a_{n,n−1} x_{n−1}].
In general, xi = (1/aii)[bi − Σ_{j=1, j≠i}^{n} aij xj], i = 1, 2, …, n, aii ≠ 0.
Step-3: Starting from an initial approximation x^(0), compute successive iterates by
xi^(k+1) = (1/aii)[bi − Σ_{j=1, j≠i}^{n} aij xj^(k)],  i = 1, 2, …, n, and aii ≠ 0 for all i.
Example 1:
Solve the linear system Ax = b given by
10x1 − x2 + 2x3 = 6,
−x1 + 11x2 − x3 + 3x4 = 25,
2x1 − x2 + 10x3 − x4 = −11,
3x2 − x3 + 8x4 = 15,
by the Gauss-Jacobi method, rounded to four decimal places.
Solution:
Here A is strictly diagonally dominant, since 10 > 1 + 2, 11 > 1 + 1 + 3, 10 > 2 + 1 + 1 and 8 > 3 + 1.
Now letting x^(0) = [0 0 0 0]^T, we get
x^(1) = [0.6000  2.2727  −1.1000  1.8750]^T,
x^(2) = [1.0473  1.7159  −0.8052  0.8852]^T and
x^(3) = [0.9326  2.0533  −1.0493  1.1309]^T.
Proceeding similarly, one can obtain
x^(4) = [0.9890  2.0114  −1.0103  1.0214]^T and
x^(10) = [1.0001  1.9998  −0.9998  0.9998]^T.
The solution is x = [1  2  −1  1]^T. You may note that x^(10) is a good approximation to the exact solution compared to x^(4).
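The Jacobi update can be sketched as below. The system is the one from Example 1 (the partially garbled third and fourth equations are taken as 2x1 − x2 + 10x3 − x4 = −11 and 3x2 − x3 + 8x4 = 15, consistent with the diagonal-dominance check and the tabulated iterates).

```python
# Gauss-Jacobi sketch: every component of x^{(k+1)} uses only x^{(k)}.

def jacobi(A, b, x0, iterations):
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

A = [[10, -1,  2,  0],
     [-1, 11, -1,  3],
     [ 2, -1, 10, -1],
     [ 0,  3, -1,  8]]
b = [6, 25, -11, 15]
x = jacobi(A, b, [0, 0, 0, 0], 10)
print([round(v, 4) for v in x])   # close to the exact solution [1, 2, -1, 1]
```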
Gauss-Seidel Method
Step-1: Writing the system Ax = b, solve the ith equation for xi, assuming aii ≠ 0 for all i.
Step-2: If any aii = 0, rearrange the system of equations so that this condition holds, i.e. the first equation is rewritten with x1 on the left-hand side, the second equation with x2 on the left-hand side, and so on:
x1 = (1/a11)[b1 − a12 x2 − a13 x3 − ⋯ − a1n xn],
x2 = (1/a22)[b2 − a21 x1 − a23 x3 − ⋯ − a2n xn],
⋮
x_{n−1} = (1/a_{n−1,n−1})[b_{n−1} − a_{n−1,1} x1 − a_{n−1,2} x2 − ⋯ − a_{n−1,n−2} x_{n−2} − a_{n−1,n} xn],
xn = (1/a_nn)[bn − a_{n1} x1 − a_{n2} x2 − ⋯ − a_{n,n−1} x_{n−1}].
In general, xi = (1/aii)[bi − Σ_{j=1, j≠i}^{n} aij xj], i = 1, 2, …, n, aii ≠ 0.
Step-3: Unlike the Jacobi method, the most recently computed values are used at once:
xi^(k+1) = (1/aii)[bi − Σ_{j=1}^{i−1} aij xj^(k+1) − Σ_{j=i+1}^{n} aij xj^(k)],  i = 1, 2, …, n, and aii ≠ 0 for all i.
Step-4: To find the xi's, one must assume an initial guess for the xi's and then use the rewritten equations to calculate the new guesses. Remember, one always uses the most recent guesses to calculate xi.
Step-5: At the end of each iteration, one calculates the absolute relative approximate error for each xi as
|εa|i = |(xi^new − xi^old)/xi^new| × 100,
where xi^new is the recently obtained value of xi and xi^old is the previous value of xi. When the absolute relative approximate error for each xi is less than the pre-specified tolerance, the iterations are stopped.
Example 1:
Solve the given system of equations by the Gauss-Seidel method:
12x1 + 3x2 − 5x3 = 1,
x1 + 5x2 + 3x3 = 28,
3x1 + 7x2 + 13x3 = 76,
given [x1, x2, x3] = [1, 0, 1] as the initial guess.
Solution:
The coefficient matrix
A = [ 12   3  −5
       1   5   3
       3   7  13 ]
is diagonally dominant, as
|a11| = 12 ≥ |a12| + |a13| = |3| + |−5| = 8,
|a22| = 5 ≥ |a21| + |a23| = |1| + |3| = 4,
|a33| = 13 ≥ |a31| + |a32| = |3| + |7| = 10,
and the inequality is strict for at least one row. Hence the solution should converge using the Gauss-Seidel method.
Rewriting the equations, we get
x1 = (1 − 3x2 + 5x3)/12,
x2 = (28 − x1 − 3x3)/5,
x3 = (76 − 3x1 − 7x2)/13.
Given initial guess [x1, x2, x3] = [1, 0, 1].
Iteration 1:
x1 = (1 − 3(0) + 5(1))/12 = 0.5000,
x2 = (28 − 0.5 − 3(1))/5 = 4.9000,
x3 = (76 − 3(0.5000) − 7(4.9000))/13 = 3.0923.
The absolute relative approximate errors after the first iteration are 100% for x1, 100% for x2 and 67.66% for x3. As we conduct more iterations, the solution converges.
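The Gauss-Seidel sweep, with the most recent values used immediately, can be sketched as below; it is applied to the system of Example 1 and converges to x = [1, 3, 4], which satisfies all three equations exactly.

```python
# Gauss-Seidel sketch: x[i] is overwritten in place, so later components
# of the same sweep already use the updated values.

def gauss_seidel(A, b, x0, tol=1e-6, max_iter=100):
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new
        if max_change < tol:
            break
    return x

A = [[12, 3, -5], [1, 5, 3], [3, 7, 13]]
b = [1, 28, 76]
x = gauss_seidel(A, b, [1, 0, 1])
print([round(v, 4) for v in x])   # converges to [1, 3, 4]
```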