Numerical Analysis
General objective
On completion of the course, successful students will be able to:
understand errors and their sources,
identify absolute and relative errors.
1.1 Introduction
Numerical Analysis is a subject concerned with devising methods for approximating the solution of mathematically expressed problems. Such problems may be formulated, for example, in terms of algebraic or transcendental equations, ordinary differential equations, partial differential equations, or integral equations. More often than not, the mathematical problems cannot be solved by exact methods. Moreover, the mathematical models ordinarily do not describe the physical problems exactly, so it is often more appropriate to seek an approximate solution. Generally, numerical analysis does not give an exact solution; instead it attempts to devise a method which will yield an approximation differing from the exact answer by less than a specified tolerance. The efficiency of the method used to solve a given problem depends both upon the accuracy required and the ease with which the method can be implemented.
Examples: … etc.
Approximate numbers: These are numbers that represent a quantity to a certain degree of accuracy, but not its exact value.
Example: 2.7182 is an approximate value of e.
Chopping: If n digits are used to represent a non-terminating number, the simplest scheme is to keep the first n digits and chop off all remaining digits.
Rounding off: Alternatively, one can round off to n digits by examining the values of the remaining digits. The rounding operation is a modified version of chopping. To round off a number to n digits we adopt the following procedure:
To round off a number to n significant digits, discard all digits to the right of the nth digit, and if this discarded part is:
i. less than half a unit in the nth place, leave the nth digit unaltered;
ii. greater than half a unit in the nth place, increase the nth digit by one unit;
iii. exactly half a unit in the nth place, increase the nth digit by one unit if it is odd, otherwise leave it unchanged.
Note:
A number thus rounded off is said to be correct to n significant figures (digits).
The digits that are used to express a number are called significant digits (or figures).
Example: The numbers 2.143, 0.3312 and 1.065 contain four significant digits, while 0.0012 has only two significant digits, 1 and 2; the zeros serve only to fix the position of the decimal point.
Example: round-off the following numbers correct to four significant figures.
Solution
Here we have to retain the first four significant figures. Therefore,
10.0537 becomes 10.05
0.583251 becomes 0.5833
3.14159 becomes 3.142
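As a small illustration (not part of the original text; the helper name round_sig is our own), the following Python sketch rounds a value to n significant figures using Python's built-in rounding:

    import math

    def round_sig(x, n):
        # round x to n significant figures
        if x == 0:
            return 0.0
        d = math.floor(math.log10(abs(x)))   # position of the leading digit
        return round(x, n - 1 - d)

    print(round_sig(10.0537, 4))   # 10.05
    print(round_sig(0.583251, 4))  # 0.5833
    print(round_sig(3.14159, 4))   # 3.142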
1.3 Error
Analysis of errors is the central concern in the study of numerical analysis and therefore we will
investigate the sources and types of errors that may occur in a given problem and the subsequent
propagation of errors.
Errors in the solution of a problem are due to the following reasons:
To solve physical problems, mathematical models are formulated to describe
them and these models do not describe the problems exactly and as a result errors
are introduced.
The methods used to solve the mathematical models are often not exact and as a
consequence errors are introduced.
A computer has a finite word length and so only a fixed number of digits of a
number are inserted and as a consequence errors are introduced.
From the numerical point of view, error does not mean mistake; the error is the difference between the exact (true) value and the approximate value,
i.e. error = true value − approximate value, absolute error = |true value − approximate value|, relative error = absolute error / |true value|, and percentage error = 100 × relative error.
Example:
An approximate value of is given by and its true value is
. Find the absolute, relative and percentage errors.
Solution
i.
ii.
iii.
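As an illustration of these definitions (a sketch using the 22/7 approximation of pi from Activity 1.2 below; the function name is our own):

    import math

    def errors(true_value, approx):
        # absolute, relative and percentage errors of an approximation
        abs_err = abs(true_value - approx)
        rel_err = abs_err / abs(true_value)
        return abs_err, rel_err, 100.0 * rel_err

    abs_e, rel_e, pct_e = errors(math.pi, 22.0 / 7.0)
    print(abs_e, rel_e, pct_e)   # about 1.26e-03, 4.0e-04, 0.04 %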
Activity 1.2
1. Which of the following better approximates the given quantity?
a.
b.
c.
2. An approximate value is given by x1 = 22/7 = 3.142871 and its true value is given. Find the absolute and relative errors.
3. The three approximate values of the number 1/3 are given as 0.30, 0.33 and 0.34. Which of the three is the best approximation?
4. Find the relative error of the number 8.6 if both of its digits are correct.
5. Evaluate the sum of the following:
a.
b.
6. Evaluate the sum S = √3 + √5 + √7 to four significant digits and find its absolute and relative errors.
Bisection method
Since the method is based on finding the root between two points, it falls under the category of bracketing methods. Since the root is bracketed between two points a1 and b1, one can find the mid-point x1 = (a1 + b1)/2 between a1 and b1. This gives us two new intervals, [a1, (a1 + b1)/2] and [(a1 + b1)/2, b1].
Figure 1: At least one root exists between the two points if the function is real, continuous, and changes sign.
- If f((a1 + b1)/2) f(b1) > 0, then the root lies in [a1, (a1 + b1)/2].
- If f((a1 + b1)/2) f(b1) < 0, then the root lies in [(a1 + b1)/2, b1].
The newly reduced interval, denoted by [a2, b2], is again halved and the same investigation is made. Finally, at some stage in the process we get either the exact root of (2.2) or a finite sequence of nested intervals [a1, b1] ⊇ [a2, b2] ⊇ … such that f(an) f(bn) < 0 and
bn − an = (b1 − a1)/2^(n−1)   for n ≥ 1.
We take the mid-point xn = (an + bn)/2 of the last subinterval as the desired approximate root of (2.2).
Theorem:
Let f be continuous on [a, b] and suppose f(a) f(b) < 0. Then the Bisection Method generates a sequence {xn} approximating the root ξ with the property
|xn − ξ| ≤ (b1 − a1)/2^n   for n ≥ 1.
Proof: For each n ≥ 1 we have bn − an = (b1 − a1)/2^(n−1) and ξ ∈ (an, bn). Since xn = (an + bn)/2 for all n ≥ 1, then
|xn − ξ| ≤ (bn − an)/2 = (b1 − a1)/2^n.
For example, to obtain an accuracy of 10^(-5) on an interval of unit length we need 2^n ≥ 10^5, i.e. n ≥ 5/log10(2) ≈ 16.6.
It would therefore appear to require 17 iterations to obtain an approximation accurate to 10^(-5).
a) The bisection method is always convergent. Since the method brackets the root, the
method is guaranteed to converge.
b) As iterations are conducted, the interval gets halved. So one can guarantee the error in
the solution of the equation.
However, if the function is not continuous, the theorem guaranteeing that a root exists is not applicable.
Figure: The equation has no root, but the function changes sign.
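A minimal Python sketch of the bisection idea described above (function and tolerance names are our own); it assumes f is continuous and f(a)·f(b) < 0:

    def bisect(f, a, b, tol=1e-5, max_iter=100):
        # assumes f(a) and f(b) have opposite signs
        fa = f(a)
        for _ in range(max_iter):
            x = (a + b) / 2.0
            fx = f(x)
            if fx == 0 or (b - a) / 2.0 < tol:
                return x
            if fa * fx < 0:        # root lies in [a, x]
                b = x
            else:                  # root lies in [x, b]
                a, fa = x, fx
        return (a + b) / 2.0

    # Example 2.5 below: f(x) = x^3 - 2x - 5 has a root between 2 and 3.
    print(bisect(lambda x: x**3 - 2*x - 5, 2.0, 3.0))   # about 2.0946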
Example 2.3: Solve the equation for the root between and by the
method of bisection
Solution:
Here f is continuous in [2, 4].
Since f(2) and f(4) have opposite signs, the root lies between 2 and 4.
Iteration 1
Iteration 5
and so on.
Example 2.4
which is positive
, which is negative
Hence the root lies between 1.5 and 1.25 and we obtain
Example 2.5: Find the real root of the equation x³ − 2x − 5 = 0 using the bisection method.
Solution:
Let f(x) = x³ − 2x − 5. Then f(2) = −1 and f(3) = 16.
Hence the root lies between 2 and 3, and we take the successive mid-points:
n    an        bn        xn = (an + bn)/2   f(xn)
1    2         3         2.5                5.6250
2    2         2.5       2.25               1.8906
3    2         2.25      2.125              0.3457
4    2         2.125     2.0625            -0.3513
5    2.0625    2.125     2.09375           -0.0089
6    2.09375   2.125     2.10938            0.1668
7    2.09375   2.10938   2.10156            0.07856
8    2.09375   2.10156   2.09766            0.03471
9    2.09375   2.09766   2.09570            0.00195
10   2.09375   2.09570   2.09473           -0.0035
11   2.09375   2.09473   2.09424
12   2.09424   2.09473
At n = 12 it is seen that the difference between two successive iterates is 0.0005, which is less than 0.001.
y − f(x0) = [(f(x1) − f(x0))/(x1 − x0)] (x − x0)     (*)
In this method the curve between the points A(x0, f(x0)) and B(x1, f(x1)) is replaced by the chord AB joining A and B, and the point of intersection of the chord with the x-axis is taken as an approximation to the root, obtained by putting y = 0 in (*). Thus we have
x2 = x0 − f(x0)(x1 − x0)/(f(x1) − f(x0)).
Example 2.7: Solve the equation by the regula falsi method, starting with the given interval, correct to 3 decimal places.
Solution: Let f denote the given function.
Since f(2.5) and f(3) have opposite signs, the root lies between 2.5 and 3. Taking x0 = 2.5 and x1 = 3, we have
Here the graph of the function in the neighborhood of the root is approximated by a secant line (chord). Further, the interval at each iteration need not contain the root. If the initial values are x0 and x1, then the first approximation is given by
x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0)).
In case f(xn) = f(xn−1) at any stage, this method fails. Thus this method does not always converge, whereas the regula-falsi method always converges. The only advantage of this method lies in the fact that if it converges, it converges more rapidly than the regula-falsi method.
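A short Python sketch of the secant iteration just described (names and stopping rule are our own, not the text's):

    def secant(f, x0, x1, tol=1e-6, max_iter=50):
        for _ in range(max_iter):
            f0, f1 = f(x0), f(x1)
            if f1 == f0:            # the method fails when f(x_n) = f(x_{n-1})
                raise ZeroDivisionError("secant method breaks down")
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, x1 = x1, x2
        return x1

    # e.g. the root of x^3 - 2x - 5 = 0 starting from 2 and 3
    print(secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0))   # about 2.0946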
Example 2.8: Find the root of the equation using secant method correct to four
decimal places.
Solution: Let .Taking the initial approximation
So that , then by secant method, we have
Example 2.9: Find the root of the equation using secant method correct to four
decimal places.
Solution
Let
.Let and
Then by secant method, we have
, for
Example: Consider x² − 2x − 3 = (x − 3)(x + 1) = 0, which has roots 3 and −1.
1. One rearrangement is x = g(x) = √(2x + 3). If we start with x0 = 4 and iterate with the fixed-point algorithm, successive values of xn are
x1 = √(2(4) + 3) = 3.31662, x2 = 3.10375, …, converging to the root 3.
2. Another rearrangement of the equation is x = g(x) = 3/(x − 2). Let us start with x0 = 4; then successive values are
x1 = 1.5, x2 = 3/(1.5 − 2) = −6, x3 = −0.375, …, x6 = −1.02762, …
In general, if lim(n→∞) xn = lim(n→∞) g(xn−1) does not exist, the iteration fails to locate a fixed point of g.
The fixed point of g is the intersection of the line y = x and the curve y = g(x).
Starting on the x-axis at the initial x0, go vertically to the curve, then horizontally to the line y = x, then vertically to the curve, and again horizontally to the line. Repeat the process until the points on the curve converge to a fixed point or else diverge. It appears from the graph that the different behaviour depends on whether the slope of the curve is greater than, less than, or of opposite sign to the slope of the line y = x (which is 1).
Proof: We have
xn+1 − ξ = g(xn) − g(ξ) = g'(ηn)(xn − ξ),   where ηn lies between xn and ξ (Mean Value Theorem).
Define the error of the nth iteration as en = xn − ξ. Then
en+1 = g'(ηn) en.
Hence, since |g'(x)| ≤ K < 1 on the interval,
|en+1| ≤ K|en| ≤ … ≤ K^(n+1)|e0|, so en → 0 as n → ∞.
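A minimal fixed-point iteration sketch (our own helper), applied to the rearrangement g(x) = √(2x + 3) used above:

    import math

    def fixed_point(g, x0, tol=1e-6, max_iter=100):
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x   # may not have converged if |g'| >= 1 near the root

    print(fixed_point(lambda x: math.sqrt(2*x + 3), 4.0))   # converges to 3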
Example 2.10: Find the real root of , correct to four decimal place by using
iteration method.
Here,
in
Thus we get
Example 2.11: Find the real root of the equation correct to four decimal places,
using iteration method.
Solution
Let
Let x0 be an approximate root of f(x) = 0 and let x0 + h be the exact root, so that f(x0 + h) = 0. Expanding in a Taylor series,
f(x0) + h f'(x0) + (h²/2!) f''(x0) + … = 0.
Since h is very small, neglecting second and higher order terms and taking the first approximation, we have
h ≈ −f(x0)/f'(x0), provided f'(x0) ≠ 0,
so that
x1 = x0 − f(x0)/f'(x0).     (3)
Relation (3) gives the improved value of the root over the previous one. Now, substituting x1 for x0 and x2 for x1, we get x2 = x1 − f(x1)/f'(x1). In general we have
x(n+1) = xn − f(xn)/f'(xn),  n = 0, 1, 2, …     (5)
Writing the Newton-Raphson iteration as x(n+1) = g(xn) with g(x) = x − f(x)/f'(x), we have
g'(x) = f(x) f''(x) / [f'(x)]².
The iteration converges when
|g'(x)| = |f(x) f''(x)| / [f'(x)]² < 1 for f'(x) ≠ 0,
i.e. |f(x) f''(x)| < [f'(x)]² for all x in (ξ − δ, ξ + δ).
Example 2.12: Use the Newton-Raphson method to compute √16, starting from x0 = 5.
Solution
Let f(x) = x² − 16, so f'(x) = 2x. Then
x(n+1) = xn − f(xn)/f'(xn) = xn − (xn² − 16)/(2xn) = (1/2)(xn + 16/xn).
Iteration I: x1 = (1/2)(5 + 16/5) = 4.1
Iteration II: x2 = (1/2)(4.1 + 16/4.1) = 4.001
Iteration III: x3 = (1/2)(4.001 + 16/4.001) = 4.0000
Solution: Let
Iteration I
Iteration II
Iteration III
Example 2.13: Find the real root of the equation , using Newton’s Raphson
method.
Solution
We have
Thus
Hence, the root of the equation correct to five decimal places is 2.79839
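A short Newton-Raphson sketch (our own function names), shown here on f(x) = x² − 16 as in the square-root example above:

    def newton(f, df, x0, tol=1e-8, max_iter=50):
        x = x0
        for _ in range(max_iter):
            fx, dfx = f(x), df(x)
            if dfx == 0:
                raise ZeroDivisionError("f'(x) = 0: Newton-Raphson fails")
            x_new = x - fx / dfx
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    print(newton(lambda x: x*x - 16, lambda x: 2*x, 5.0))   # 4.0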
3.1 Introduction
Systems of Linear Equations
Any straight line in the xy -plane can be represented algebraically by an equation of the form
ax + by = c, where x and y are variables and a, b, c are real constants (a and b not both zero). An equation of this form is called a linear equation in the variables x and y. More generally, we define a linear equation in the n variables x1, x2, …, xn to be one that can be expressed in the form
a1x1 + a2x2 + … + anxn = b,
where a1, a2, …, an (not all zero) and b are real constants. The variables in a linear equation are sometimes called unknowns.
Example: 3x + 2y = -9, 3x + 9y - z + 4u = 5
A finite set of linear equations in the variables x1, x2, …, xn is called a system of linear equations, or a linear system. A sequence of numbers s1, s2, …, sn is called a solution of the system if x1 = s1, x2 = s2, …, xn = sn is a solution of every equation in the system.
An arbitrary system of m linear equations in n unknowns can be written as
------------------- (3.1)
AX=b---------------------------------3.2
Where A= X=
Systems of linear Equations arise in a large number of areas, both directly in modeling physical
situations and indirectly in the numerical solution of other mathematical models. These
applications occur in virtually all areas of physical, biological and social sciences.
Linear systems are involved in the numerical solution of optimization problems, systems of nonlinear equations, approximation of functions, boundary value problems in ordinary differential equations, partial differential equations, integral equations, statistical inference, and so on. Because of the widespread importance of linear systems, much research has been devoted to their numerical solution, and excellent algorithms have been developed for the most common types of problems.
Here det(A) = 1. By Cramer's rule each unknown is the ratio of a determinant Di, obtained from det(A) by replacing the ith column with the right-hand side b, to det(A):
x1 = D1/det(A) = 1/1 = 1,
x2 = D2/det(A) = −1/1 = −1,
x3 = D3/det(A) = −4/1 = −4.
Cramer's Rule can be used for n × n systems where n = 2, 3, 4 or 5, but for n > 5 Cramer's rule becomes impractical; for n = 26, for example, the number of multiplications required is 25 · 26!, which is impractical to compute. Hence the use of numerical methods to solve such systems is more appropriate and efficient.
These methods are based on the idea of successive approximations: starting with one or more approximations to the solution, we obtain a sequence of approximations (iterates) {x^(k)} which converges to the solution. These methods use simple and uniform operations. They do not compute the exact solution directly but use an infinite number of steps which converge to the exact solution,
i.e. lim(k→∞) x^(k) = x, the exact solution.
A^(1) x = b^(1)     (3.6)
Set
m_i1 = a_i1^(1) / a_11^(1),   i = 2, 3, …, n.     (3.7')
Then we can eliminate x1 from the last (n − 1) equations by subtracting from the ith equation the multiple m_i1 of the first equation. The first rows of A^(1) and b^(1) are left unchanged, and the remaining rows are changed; as a result we get a new system
A^(2) x = b^(2)     (3.7)
whose first column is zero below the pivot:
a_11^(1) x1 + a_12^(1) x2 + … + a_1n^(1) xn = b_1^(1)
              a_22^(2) x2 + … + a_2n^(2) xn = b_2^(2)
              …
              a_n2^(2) x2 + … + a_nn^(2) xn = b_n^(2)
with
a_ij^(2) = a_ij^(1) − m_i1 a_1j^(1),  b_i^(2) = b_i^(1) − m_i1 b_1^(1),  i, j = 2, 3, …, n.     (3.8)
Step 2. If a_22^(2) ≠ 0, we can in a similar way eliminate x2 from the last (n − 2) of those equations and obtain a new system A^(3) x = b^(3). If we put
m_i2 = a_i2^(2) / a_22^(2)   for i = 3, 4, …, n,
the coefficients of this system are given by
a_ij^(3) = a_ij^(2) − m_i2 a_2j^(2),  b_i^(3) = b_i^(2) − m_i2 b_2^(2),  i, j = 3, …, n.
We continue to eliminate the unknowns, going on to columns 3, 4, and so on, until the system is reduced to upper triangular form; we then solve for the x's by back substitution.
We note that the right-hand side b is transformed in exactly the same way as the columns of A. Therefore the description of the elimination is simplified if we consider b as the last column of A and denote
a_i,n+1^(k) = b_i^(k),  i, k = 1, 2, …, n.
1st step: m21 = 2, m31 = −1. Then
[ 1   2   1 :  0 ]        [ 1   2   1 :  0 ]
[ 2   2   3 :  3 ]   →    [ 0  −2   1 :  3 ]
[−1  −3   0 :  2 ]        [ 0  −1   1 :  2 ]
2nd step: m32 = (−1)/(−2) = 1/2, giving the upper triangular system
[ 1   2    1  ] [x1]   [  0  ]
[ 0  −2    1  ] [x2] = [  3  ]
[ 0   0   1/2 ] [x3]   [ 1/2 ]
Back substitution gives
x3 = 1, x2 = -1, x1 = 1
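A compact Python sketch of forward elimination followed by back substitution (no pivoting, as in the description above; variable names are our own). It reproduces the worked example's solution (1, −1, 1):

    def gauss_solve(A, b):
        # A: list of rows, b: right-hand side; both are modified in place
        n = len(A)
        for k in range(n - 1):                  # elimination step k
            for i in range(k + 1, n):
                m = A[i][k] / A[k][k]           # multiplier m_ik
                for j in range(k, n):
                    A[i][j] -= m * A[k][j]
                b[i] -= m * b[k]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):          # back substitution
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        return x

    A = [[1.0, 2.0, 1.0], [2.0, 2.0, 3.0], [-1.0, -3.0, 0.0]]
    b = [0.0, 3.0, 2.0]
    print(gauss_solve(A, b))   # [1.0, -1.0, 1.0]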
Activity 3.1
Solve the following systems of linear equations using Cramer's rule and the Gauss elimination method.
1)
3)
Eliminate the unknown xk from the equations both above and below equation k using the formula
a_ij^(k+1) = a_ij^(k) − a_ik^(k) a_kj^(k+1),   j = k, …, n + 1,  i = 1, …, n,  i ≠ k,
where row k has first been divided by its pivot a_kk^(k).
At the end of the nth step this procedure converts [A | b] into [I | b^(n)].
Thus I x = b^(n), i.e. x = b^(n).
Example: Solve the 4 × 4 system Ax = b whose augmented matrix is
[A | b] =
[ 3  1  2  1 : 3 ]
[ 2  2  2  3 : 8 ]
[ 1  5  4  1 : 3 ]
[ 3  1  2  3 : 1 ]
Solution:
Apply the Gauss-Jordan reduction: at each step divide the pivot row by its pivot (e.g. divide R3 by its pivot, then R4 by its pivot) and reduce the elements below and above the pivoting element to zero. After the final step the augmented matrix becomes [I | x]:
[ 1  0  0  0 :  1 ]
[ 0  1  0  0 :  2 ]
[ 0  0  1  0 :  3 ]
[ 0  0  0  1 : -4 ]
Hence the solution is
x1 = 1, x2 = 2, x3 = 3, x4 = -4.
Augment A and I:
[ 1  1  2 : 1  0  0 ]
[ 3  0  1 : 0  1  0 ]
[ 1  0  2 : 0  0  1 ]
Using the Gauss-Jordan transformation
a_kj^(k+1) = a_kj^(k) / a_kk^(k),  j = k, …, 2n;   a_ij^(k+1) = a_ij^(k) − a_ik^(k) a_kj^(k+1)     (3.26)
we eliminate below and above each pivot. After the first step,
[ 1   1   2 :  1  0  0 ]
[ 0  −3  −5 : −3  1  0 ]
[ 0  −1   0 : −1  0  1 ]
and continuing the reduction we obtain
[ 1  0  0 :  0    2/5  −1/5 ]
[ 0  1  0 :  1     0    −1  ]
[ 0  0  1 :  0   −1/5   3/5 ]
Therefore
A^(-1) =
[ 0    2/5  −1/5 ]
[ 1     0    −1  ]
[ 0   −1/5   3/5 ]
1)
2)
3)
4)
5)
AX=b
Where A= X=
A=LU
Where L= ------------------------------------------3.9
and
U= -----------------------------------3.10
Using the matrix multiplication rule to multiply the matrix L and U and comparing the elements
of the resulting matrix with those of A we obtain
--------------3.11
Where
The system of equations (3.11) involves n² + n unknowns; thus there is an n-parameter family of solutions. To produce a unique solution it is convenient to choose either l_ii = 1, i = 1, …, n, or u_ii = 1, i = 1, …, n.
i) When we choose l_ii = 1, the method is called Doolittle's method.
ii) When we choose u_ii = 1, the method is called Crout's method.
When we take l_ii = 1, the solution of equations (3.11) may be written as
u_1j = a_1j,  j = 1, …, n,            (3.12)
l_i1 = a_i1 / u_11,  i = 2, …, n.     (3.13)
The first column of L and the first row of U have now been determined. We can then proceed to determine the second column of L and the second row of U, next the third column of L followed by the third row of U; thus, for the relevant indices i and j, the elements are computed alternately row of U, column of L.
Having determined the matrices L and U, the system of equations AX = b becomes
LUX = b,
which we write as the two triangular systems
UX = Z,     (3.14)
LZ = b.     (3.15)
The unknowns z1, …, zn in (3.15) are determined by forward substitution, and x1, …, xn in (3.14) are obtained by back substitution.
Alternatively, find L^(-1) and U^(-1) to get
X = U^(-1) L^(-1) b.     (3.16)
The inverse of A can also be determined from
A^(-1) = U^(-1) L^(-1).     (3.17)
The method fails if any of the diagonal elements u_ii (or l_ii) is zero. The LU decomposition is guaranteed to exist when the matrix A is positive definite; however, this is only a sufficient condition.
Write
L= AND U=
i.e =
4z1 -z2 = 6
3z1 +2z2 -10z3 = 4
Using UX=z
x1 +x2 + x3 = 1
x2 + 5x3 = -2
x3 =
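A minimal Doolittle LU sketch (l_ii = 1) with forward and back substitution; the 3 × 3 matrix used at the end is an illustrative assumption, not the text's example:

    def lu_doolittle(A):
        n = len(A)
        L = [[0.0]*n for _ in range(n)]
        U = [[0.0]*n for _ in range(n)]
        for i in range(n):
            L[i][i] = 1.0
            for j in range(i, n):            # row i of U
                U[i][j] = A[i][j] - sum(L[i][k]*U[k][j] for k in range(i))
            for j in range(i + 1, n):        # column i of L
                L[j][i] = (A[j][i] - sum(L[j][k]*U[k][i] for k in range(i))) / U[i][i]
        return L, U

    def lu_solve(L, U, b):
        n = len(b)
        z = [0.0]*n
        for i in range(n):                   # forward substitution: L z = b
            z[i] = b[i] - sum(L[i][k]*z[k] for k in range(i))
        x = [0.0]*n
        for i in range(n - 1, -1, -1):       # back substitution: U x = z
            x[i] = (z[i] - sum(U[i][k]*x[k] for k in range(i + 1, n))) / U[i][i]
        return x

    L, U = lu_doolittle([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    print(lu_solve(L, U, [6.0, 4.0, 6.0]))   # [2.0, 2.0, 2.0]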
A)
B)
C)
D)
E)
When a linear system is too large and sparse, the Gaussian elimination method becomes impractical and may be ill-conditioned. Under such conditions iterative methods are best for finding the solution of the system.
x1 = (1/a11) [ b1 − Σ(j=1, j≠1..n) a1j xj ]
x2 = (1/a22) [ b2 − Σ(j=1, j≠2..n) a2j xj ]
…
xn = (1/ann) [ bn − Σ(j=1, j≠n..n) anj xj ]
Example: Solve by Jacobi's iteration method the system
10x1 + 3x2 + x3 = 14
2x1 − 10x2 + 3x3 = −5
x1 + 3x2 + 10x3 = 14
whose true solution is x = (1, 1, 1)ᵀ.
Using
xi^(1) = (1/aii) [ bi − Σ(j≠i) aij xj^(0) ],  i = 1, 2, 3,  with x^(0) = (0, 0, 0)ᵀ:
First iteration: x1^(1) = 14/10 = 1.4, x2^(1) = 5/10 = 0.5, x3^(1) = 14/10 = 1.4.
Second iteration: x1^(2) = (14 − 3(0.5) − 1.4)/10 = 1.11, x2^(2) = (5 + 2(1.4) + 3(1.4))/10 = 1.20, x3^(2) = (14 − 1.4 − 3(0.5))/10 = 1.11.
And so on.
m    x1(m)      x2(m)      x3(m)      ||em||
0    0          0          0          1
1    1.4        0.5        1.4        0.5
2    1.11       1.20       1.11       0.2
3    0.929      1.055      0.929      0.071
4    0.9906     0.9645     0.9906     0.0355
5    1.01159    0.9953     1.01159    0.01159
6    1.00025    1.005795   1.000251   0.005795
Starting with the initial approximation x^(0) = (0, 0, 0)ᵀ and substituting it into the equations, we obtain the next iteration, and so on; in this case the answer converges to (1, −1, 1).
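A short Jacobi iteration sketch (our own code), applied to the example system 10x1 + 3x2 + x3 = 14, 2x1 − 10x2 + 3x3 = −5, x1 + 3x2 + 10x3 = 14 above:

    def jacobi(A, b, x0, iterations):
        n = len(b)
        x = list(x0)
        for _ in range(iterations):
            x_new = [0.0] * n
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x_new[i] = (b[i] - s) / A[i][i]
            x = x_new
        return x

    A = [[10.0, 3.0, 1.0], [2.0, -10.0, 3.0], [1.0, 3.0, 10.0]]
    b = [14.0, -5.0, 14.0]
    print(jacobi(A, b, [0.0, 0.0, 0.0], 1))   # [1.4, 0.5, 1.4]
    print(jacobi(A, b, [0.0, 0.0, 0.0], 6))   # close to [1, 1, 1]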
Activity 3.4
Solve, by Jacobi’s iteration method, the equations
a)
b)
||e^(m+1)|| ≤ M ||e^(m)||,  where M = max(i) Σ(j≠i) |aij| / |aii|,
so the iteration converges when
Σ(j=1, j≠i..n) |aij| ≤ |aii|,  i = 1, …, n.     (3.30)
Therefore the matrix A must be diagonally dominant for the iteration to converge.
Then, solving each equation of the system for the unknown with the largest coefficient, we substitute an initial value xi^(0) for the unknowns on the right and obtain new values xi^(1); these new values are substituted again to obtain improved values xi^(2), and the process is continued until two successive iterates agree to the required accuracy. The iteration is
xi^(m+1) = (1/aii) [ bi − Σ(j=1..i−1) aij xj^(m+1) − Σ(j=i+1..n) aij xj^(m) ],  i = 1, …, n,
i.e. it is the same as the Gauss-Jacobi method, except that as soon as a new approximation is found it is used immediately: each new component xi^(m+1) is used at once in the computation of the following components.
Example 8
Solve the following by the Gauss-Seidel method:
10x1 + 3x2 + x3 = 14
2x1 − 10x2 + 3x3 = −5
x1 + 3x2 + 10x3 = 14
with x^(0) = (0, 0, 0)ᵀ.
Solution
Solving each equation for its diagonal unknown and using each new value as soon as it is available:
First iteration: x1^(1) = 14/10 = 1.4, x2^(1) = (5 + 2(1.4))/10 = 0.78, x3^(1) = (14 − 1.4 − 3(0.78))/10 = 1.026.
Second iteration: and so on, as tabulated below.
Numerical results with the Gauss-Seidel method
m    x1(m)     x2(m)     x3(m)     ||em||
0    0         0         0         1
1    1.4       0.78      1.026     0.4
2    1.11      0.99248   1.1092    0.1092
3    0.9234    1.0310    0.99159   0.031
4    0.99134   0.99578   1.0021    0.0085
It is clear that the speed of convergence of the Gauss-Seidel method is faster than that of the Jacobi method.
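The same example with a Gauss-Seidel sketch (our own code), where each new component is used immediately:

    def gauss_seidel(A, b, x0, iterations):
        n = len(b)
        x = list(x0)
        for _ in range(iterations):
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - s) / A[i][i]   # overwrite immediately
        return x

    A = [[10.0, 3.0, 1.0], [2.0, -10.0, 3.0], [1.0, 3.0, 10.0]]
    b = [14.0, -5.0, 14.0]
    print(gauss_seidel(A, b, [0.0, 0.0, 0.0], 1))   # [1.4, 0.78, 1.026]
    print(gauss_seidel(A, b, [0.0, 0.0, 0.0], 4))   # close to [1, 1, 1]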
c)
Let ei^(m+1) = xi − xi^(m+1). For the Gauss-Seidel iteration the error components satisfy
ei^(m+1) = − Σ(j=1..i−1) (aij/aii) ej^(m+1) − Σ(j=i+1..n) (aij/aii) ej^(m),   i = 1, 2, …, n.
Define
αi = Σ(j=1..i−1) |aij|/|aii|,   βi = Σ(j=i+1..n) |aij|/|aii|,   i = 1, …, n,  with α1 = βn = 0.
The rate of convergence is linear, but with a faster rate than with the Gauss-Jacobi method.
Consider the same example as above. To solve AX = B we split A as
A = N − P, where N might be diagonal, triangular or tridiagonal,
and write Ax = B as (N − P)x = B, or Nx = B + Px. Define the iteration method by
N x^(m+1) = B + P x^(m),  m ≥ 0, with x^(0) given.
Now define η = max(1≤i≤n) βi/(1 − αi), and let k be the index for which ||e^(m+1)|| = |ek^(m+1)|. Then, with i = k,
|ek^(m+1)| ≤ αk ||e^(m+1)|| + βk ||e^(m)||,
so that
||e^(m+1)|| ≤ [βk/(1 − αk)] ||e^(m)|| ≤ η ||e^(m)||.
Hence, for a diagonally dominant matrix we have η ≤ M < 1; therefore
||e^(m+1)|| ≤ η ||e^(m)|| ≤ … ≤ η^(m+1) ||e^(0)||, and as m → ∞, ||e^(m)|| → 0.
X = (x1, …, xn)ᵀ and, similarly, f = (f1, …, fn)ᵀ.
Therefore system (3.32) in short form is
f(X) = 0.     (3.22)
System (3.22) is solved by the method of successive approximations. Suppose the kth approximation X^(k) is known; then the exact root can be represented in the form
X = X^(k) + e^(k),     (3.23)
where e^(k) = (e1^(k), e2^(k), …, en^(k))ᵀ is the error of the root. Putting (3.23) into (3.22) we have
f(X^(k) + e^(k)) = 0.     (3.24)
Example: Solve by Newton's method the system
x1² + x2² + x3² = 1
2x1² + x2² − 4x3 = 0
3x1² − 4x2 + x3² = 0
taking x^(0) = (0.5, 0.5, 0.5)ᵀ.
Then
f(x^(0)) = (−0.25, −1.25, −1.00)ᵀ,
and the Jacobian matrix is
W(x) =
[ 2x1   2x2   2x3 ]
[ 4x1   2x2   −4  ]
[ 6x1   −4    2x3 ]
Solving W(x^(0)) e^(0) = −f(x^(0)), i.e.
[ 1   1   1 ] [e1^(0)]   [ 0.25 ]
[ 2   1  −4 ] [e2^(0)] = [ 1.25 ]
[ 3  −4   1 ] [e3^(0)]   [ 1.00 ]
and setting x^(1) = x^(0) + e^(0), we obtain
x^(1) = (0.875, 0.500, 0.375)ᵀ.
Then, if ||e^(0)|| < tolerance, we take x^(1) as the approximate solution; if not, we repeat the process until the desired accuracy is reached.
One might expect that if the residual r = A x̃ − b has the property that ||r|| is small, then ||x − x̃|| would be small as well. Although this is often the case, certain special systems, which occur quite often in practice, fail to have this property.
Example 9
Consider the system
[ 1       2 ] [x1]   [ 3      ]
[ 1.0001  2 ] [x2] = [ 3.0001 ]
whose exact solution is x = (1, 1)ᵀ. For the approximate solution x̃ = (3, 0)ᵀ,
r = b − A x̃ = (3, 3.0001)ᵀ − (3, 3.0003)ᵀ = (0, −0.0002)ᵀ,     (3.38)
so ||r|| = 0.0002, yet ||x − x̃|| = 2.
This illustrates the fact that there are systems in which a small change in the right-hand side b leads to a large change in the solution. A linear system whose solution x is unstable with respect to small relative changes in the right-hand side b is called ill-conditioned.
The following theorem gives a criterion for well conditioned and ill-conditioned behavior in
matrix:
Theorem: If x̃ is an approximate solution of Ax = b with residual r = b − A x̃, and A is nonsingular, then
||x − x̃|| ≤ ||A^(-1)|| ||r||
and
||x − x̃|| / ||x|| ≤ K(A) ||r|| / ||b||,  provided x ≠ 0, b ≠ 0,     (3.39)
where K(A) = ||A|| ||A^(-1)|| is called the condition number of the nonsingular matrix A.
Proof: Since r = b − A x̃ = Ax − A x̃ and A is nonsingular, then x − x̃ = A^(-1) r and
||x − x̃|| = ||A^(-1) r|| ≤ ||A^(-1)|| ||r||.
Also ||b|| = ||Ax|| ≤ ||A|| ||x||, so 1/||x|| ≤ ||A||/||b||. Hence
||x − x̃|| / ||x|| ≤ ||A|| ||A^(-1)|| ||r|| / ||b|| = K(A) ||r|| / ||b||.     (3.40)
For the matrix above,
A = [ 1  2 ; 1.0001  2 ],   A^(-1) = [ −10,000  10,000 ; 5,000.5  −5,000 ],
so K(A) = ||A||∞ ||A^(-1)||∞ = (3.0001)(20,000) = 60,002.
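A quick numerical check of this condition number, using NumPy's infinity norm (NumPy is an assumption on our part; the text does not use it):

    import numpy as np

    A = np.array([[1.0, 2.0], [1.0001, 2.0]])
    K = np.linalg.norm(A, np.inf) * np.linalg.norm(np.linalg.inv(A), np.inf)
    print(K)   # approximately 60002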
b) x – 2y + 3z + 4t = 9
2
3x – y +2z + 5t = 19
2
2x + 4y – 5z + t = 15
4x + 2y – 3z + 3t = 12
c)
–
d)
4) Solve the system of equations using Jacobi iteration method (Iterate up to two steps)
a) 27x + 6y – z = 85
6x + 15y + 23 = 72
x + y + 54z = 110
c) 10x1 – 2x2 – x3 – x4 = 3
-2x1 + 10x2 – x3 – x4 = 15
- x1 – x2 – 2x3 + 10x4 = -9
8) Solve the system of equation by Gauss –Jacobi Method ( perform only 3 iteration )
4.1 INTRODUCTION
The calculus of finite differences deals with the changes that take place in the values of a function due to finite changes in the independent variable.
E³f(a) = f(a + 3h)
E⁻¹f(a) = f(a − h)
E⁻¹E = E⁰ = 1
We have
Δf(a) = f(a + h) − f(a)
Δ²f(a) = f(a + 2h) − 2f(a + h) + f(a)
Δ³f(a) = f(a + 3h) − 3f(a + 2h) + 3f(a + h) − f(a)
The coefficients occurring on the right-hand side being the binomial coefficients, we have in general
Δⁿf(a) = Σ(k=0..n) (−1)^k C(n, k) f(a + (n − k)h).
Newton's forward and backward interpolation formulas are applicable for interpolation near the beginning and the end of the tabulated values, respectively. In this section we discuss the central difference formulas, which are best suited for interpolation near the middle of the tabulated values.
Definition: δf(x) = f(x + h/2) − f(x − h/2) = f_{1/2} − f_{−1/2}
Central difference table
x      f        δf           δ²f        δ³f          δ⁴f
x_-2   f_-2
                δf_-3/2
x_-1   f_-1                  δ²f_-1
                δf_-1/2                  δ³f_-1/2
x_0    f_0                   δ²f_0                    δ⁴f_0
                δf_1/2                   δ³f_1/2
x_1    f_1                   δ²f_1
                δf_3/2
x_2    f_2
The averaging (mean) operator μ is defined by
μf(x) = ½ [ f(x + h/2) + f(x − h/2) ].
6) The divided difference operators.
3) δf(x) = f(x + h/2) − f(x − h/2) = (E^(1/2) − E^(-1/2)) f(x)
        = E^(-1/2)(E − 1) f(x)
        = E^(-1/2) Δ f(x),
so that δ = E^(1/2) − E^(-1/2) = E^(-1/2) Δ = Δ E^(-1/2).
So in general:
1) Δ = E − 1
2) ∇ = 1 − E^(-1)
3) δ f_i = f_{i+1/2} − f_{i−1/2} = (E^(1/2) − E^(-1/2)) f_i = E^(-1/2)(E − 1) f_i = E^(-1/2) Δ f_i
4) δ^m f_i = δ^(m−1)(δ f_i) = δ^(m−1)( f_{i+1/2} − f_{i−1/2} ) = δ^(m−1) E^(1/2) f_i − δ^(m−1) E^(-1/2) f_i.
For example,
δ² f_i = δ( f_{i+1/2} − f_{i−1/2} ) = ( f_{i+1} − f_i ) − ( f_i − f_{i−1} ) = f_{i+1} − 2 f_i + f_{i−1}
       = Δ² f_{i−1} = ∇² f_{i+1},
and
δ³ f_i = δ( f_{i+1} − 2 f_i + f_{i−1} )
       = f_{i+3/2} − f_{i+1/2} − 2( f_{i+1/2} − f_{i−1/2} ) + ( f_{i−1/2} − f_{i−3/2} )
       = f_{i+3/2} − 3 f_{i+1/2} + 3 f_{i−1/2} − f_{i−3/2}
       = Δ³ f_{i−3/2} = ∇³ E^(3/2) f_i.
So in general
δⁿ f_i = Δⁿ E^(-n/2) f_i = Δⁿ f_{i−n/2} = ∇ⁿ E^(n/2) f_i = ∇ⁿ f_{i+n/2}.
μ f(x) = ½ [ f(x + h/2) + f(x − h/2) ]
       = ½ [ E^(1/2) f(x) + E^(-1/2) f(x) ]
       = ½ [ E^(1/2) + E^(-1/2) ] f(x).     (4.14)
Hence μ = ½ [ E^(1/2) + E^(-1/2) ], and in general μⁿ = ( ½ [ E^(1/2) + E^(-1/2) ] )ⁿ.
Also
μ² = ¼ ( E^(1/2) + E^(-1/2) )² = ¼ ( E + 2 + E^(-1) ) = 1 + ¼ δ².     (4.15)
Again, since δ = E^(1/2) − E^(-1/2) and μ = ½ ( E^(1/2) + E^(-1/2) ),
2μ + δ = E^(1/2) + E^(-1/2) + E^(1/2) − E^(-1/2) = 2 E^(1/2).
Hence E^(1/2) = μ + ½ δ.
5) Proof that
6) Evaluate
7) Let , then construct the forward difference table for the argument
and compute
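For exercises such as item 7, a small sketch (our own code) that builds a forward difference table from tabulated values, shown on the sample values 6, 5, 5, 15, 50 (the data of Example 5.7 in the next chapter):

    def forward_differences(f_values):
        # returns [f, Δf, Δ²f, ...] as successive (shrinking) rows
        table = [list(f_values)]
        while len(table[-1]) > 1:
            prev = table[-1]
            table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
        return table

    for row in forward_differences([6, 5, 5, 15, 50]):
        print(row)
    # [6, 5, 5, 15, 50]
    # [-1, 0, 10, 35]
    # [1, 10, 25]
    # [9, 15]
    # [6]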
Objective
On completion of this chapter, successful students will be able to:
learn about polynomial interpolation,
know the uniqueness of the interpolating polynomial,
practice computation of the interpolating polynomial,
determine the error of the interpolating methods.
Introduction
In this chapter, we consider the interpolation problem: suppose we do not know the function f but only some information (data) about it; we then try to compute a function g that approximates f.
There is always a need to approximate functions for instance due to:
1. Given a set of discrete data {(xi, yi)|i = 1, …n} we want to find the relation between xi and yi
that can describe the physical phenomena sufficiently.
2. We may have a function f(x) that is complicated to differentiate or integrate, and then we can
find a simpler function that approximates the differentiation and integration of the
complicated function f(x) on a given closed interval.
3. We may need to determine the solution of differential equations. But finding it analytically
is difficult, so we find the approximate solution at finite points of the interval.
If (xi, yi), xi, yi ∈ ℝ, i = 0, 1, …, n, are n + 1 distinct data points, then there is a unique polynomial Pn of degree at most n such that
Pn(xi) = yi for i = 0, 1, …, n.     (5.1)
Proof:
Existence: Proof by mathematical induction. The theorem clearly holds for n = 0 (only one data point (x0, y0)), since one may choose the constant polynomial P0(x) = y0 for all x. Assume that the theorem holds for n ≤ k, that is, there is a polynomial Pk(x), deg(Pk) ≤ k, such that yi = Pk(xi) for 0 ≤ i ≤ k. Next we construct a polynomial of degree at most k + 1 to interpolate (xi, yi), 0 ≤ i ≤ k + 1. Let
P(k+1)(x) = Pk(x) + c (x − x0)(x − x1)…(x − xk),
where
c = [ y(k+1) − Pk(x(k+1)) ] / [ (x(k+1) − x0)(x(k+1) − x1)…(x(k+1) − xk) ].
Since the xi's are distinct, the polynomial P(k+1)(x) is well defined and deg(P(k+1)) ≤ k + 1. It is easy to verify that P(k+1)(xi) = yi for 0 ≤ i ≤ k + 1.
Uniqueness: Suppose there are two such polynomials Pn and Qn satisfying (5.1). Define Sn(x) = Pn(x) − Qn(x); then deg(Sn) ≤ n and Sn(xi) = Pn(xi) − Qn(xi) = 0 for 0 ≤ i ≤ n. This means that Sn has at least n + 1 zeros, so it must be Sn ≡ 0. Hence Pn = Qn.
Let the data points (x0, y0), (x1, y1), . . . , (xn, yn) belonging to an unknown smooth function
y = f (x) be plotted on a graph. Then the simplest way to estimate the value of y(x) when x lies in
the interval xi < x < xi+1 is to join the points (xi , yi ) and (xi+1, yi+1) by a straight line segment, and
then to use the point on the line segment with argument x as the approximation to y(x). This
process is called linear interpolation.
Linear interpolation is interpolation by the straight line through (x0, f0) and (x1, f1); see Fig. 5.1. Thus the linear polynomial P1 is the sum P1 = L0 f0 + L1 f1, with L0 the linear polynomial that is 1 at x0 and 0 at x1; similarly, L1 is 0 at x0 and 1 at x1. Obviously,
L0(x) = (x − x1)/(x0 − x1),   L1(x) = (x − x0)/(x1 − x0).
This gives the linear interpolation formula
P1(x) = L0(x)f0 + L1(x)f1
Example 5.1
Compute a 4D-value of ln9.2 from ln9.0=2.1972, ln9.5=2.2513.
Solution:
x0 = 9.0, x1 = 9.5, f0 = ln 9.0, f1 = ln 9.5; we need P1(9.2).
L0(x) = (x − 9.5)/(9.0 − 9.5) = −2.0(x − 9.5),  L0(9.2) = −2.0(−0.3) = 0.6
L1(x) = (x − 9.0)/(9.5 − 9.0) = 2.0(x − 9.0),   L1(9.2) = 2.0(0.2) = 0.4
ln 9.2 ≈ P1(9.2) = L0(9.2) f0 + L1(9.2) f1 = 0.6(2.1972) + 0.4(2.2513) = 2.2188.
For three points x0, x1, x2 the quadratic Lagrange basis polynomials are
L0(x) = (x − x1)(x − x2) / [(x0 − x1)(x0 − x2)]
L1(x) = (x − x0)(x − x2) / [(x1 − x0)(x1 − x2)]
L2(x) = (x − x0)(x − x1) / [(x2 − x0)(x2 − x1)]
In addition,
Li(xj) = 0 for i ≠ j, 0 ≤ i, j ≤ 2,
Li(xj) = 1 for i = j.
Example 5.2
Compute a 4D-value of ln9.2 from ln9.0=2.1972, ln9.5=2.2513 and ln11.0=2.3979.
Solution
L0(x) = (x − 9.5)(x − 11.0)/[(9.0 − 9.5)(9.0 − 11.0)] = x² − 20.5x + 104.5,  L0(9.2) = 0.5400
L1(x) = (x − 9.0)(x − 11.0)/[(9.5 − 9.0)(9.5 − 11.0)] = −(1/0.75)(x² − 20x + 99),  L1(9.2) = 0.4800
L2(x) = (x − 9.0)(x − 9.5)/[(11.0 − 9.0)(11.0 − 9.5)] = (1/3)(x² − 18.5x + 85.5),  L2(9.2) = −0.0200
ln 9.2 ≈ P2(9.2) = 0.5400(2.1972) + 0.4800(2.2513) − 0.0200(2.3979) = 2.2192.
Assume that we are given n + 1 data points (x0, f0), (x1, f1),…, (xn, fn) with all of the xi’s are
distinct. The interpolating polynomial of degree n is given by
Pn(x) = Σ(i=0..n) Li(x) fi,
where
Li(x) = Π(j=0, j≠i..n) (x − xj)/(xi − xj)
is a polynomial of degree n, with Li(xj) = 0 for j ≠ i and Li(xi) = 1.
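A compact sketch of evaluating the Lagrange form (our own helper), reproducing the ln 9.2 estimate of Example 5.2:

    def lagrange_eval(xs, fs, x):
        total = 0.0
        n = len(xs)
        for i in range(n):
            Li = 1.0
            for j in range(n):
                if j != i:
                    Li *= (x - xs[j]) / (xs[i] - xs[j])   # basis polynomial L_i(x)
            total += Li * fs[i]
        return total

    print(lagrange_eval([9.0, 9.5, 11.0], [2.1972, 2.2513, 2.3979], 9.2))   # about 2.2192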
Example 5.3:
P2(x) = Σ(i=0..2) Li(x) fi = L0(x) f0 + L1(x) f1 + L2(x) f2
      = [(x − x1)(x − x2)/((x0 − x1)(x0 − x2))] f0 + [(x − x0)(x − x2)/((x1 − x0)(x1 − x2))] f1 + [(x − x0)(x − x1)/((x2 − x0)(x2 − x1))] f2.
Example 5.4:
Use the Lagrange interpolation formula to find a polynomial P3(x) which passes through:
Solution:
Now, P3(x) = Σ(i=0..3) fi Li(x), with Li(x) = Π(j=0, j≠i..3) (x − xj)/(xi − xj). Here x0 = 0, x1 = 1, x2 = 2, x3 = 4, so
L0(x) = (x − 1)(x − 2)(x − 4)/[(0 − 1)(0 − 2)(0 − 4)] = −(1/8)(x³ − 7x² + 14x − 8)
L1(x) = x(x − 2)(x − 4)/[(1 − 0)(1 − 2)(1 − 4)] = (1/3)(x³ − 6x² + 8x)
L2(x) = x(x − 1)(x − 4)/[(2 − 0)(2 − 1)(2 − 4)] = −(1/4)(x³ − 5x² + 4x)
L3(x) = x(x − 1)(x − 2)/[(4 − 0)(4 − 1)(4 − 2)] = (1/24)(x³ − 3x² + 2x)
and
P3(x) = Σ(i=0..3) fi Li(x) = x³ − 2x + 3.
Pn(x) = Σ(i=0..n) [ Π(j=0, j≠i..n) (x − xj)/(xi − xj) ] fi.
If we let w(x) = Π(j=0..n) (x − xj), then
w'(x) = Σ(i=0..n) Π(j=0, j≠i..n) (x − xj),
and putting x = xi (i = 0, 1, 2, …, n) we have
w'(xi) = (xi − x0)(xi − x1)…(xi − x(i−1))(xi − x(i+1))…(xi − xn) = Π(j=0, j≠i..n) (xi − xj).
Suppose xj = x0 + jh and S = (x − x0)/h; then
w(x) = Π(j=0..n) (x − xj) = h^(n+1) S(S − 1)…(S − n) = h^(n+1) S^[n+1]
and
w'(xi) = (xi − x0)(xi − x1)…(xi − x(i−1))(xi − x(i+1))…(xi − xn) = (−1)^(n−i) i!(n − i)! hⁿ.
Hence, for the Lagrange polynomial at equidistant points we have the expression
Pn(x) = Σ(i=0..n) [ (−1)^(n−i) / (i!(n − i)!) ] [ S^[n+1] / (S − i) ] fi.
Theorem 5.3:
Let x0, x1, …, xn be distinct points and let f be a given real-valued function with (n + 1) continuous derivatives on the smallest interval I[x0, …, xn, x̄] containing the points and some argument x̄, with x̄ ≠ xi. Then there exists a number ξ in I such that
f(x̄) − Σ(j=0..n) f(xj) Lj(x̄) = [ f^(n+1)(ξ) / (n + 1)! ] w(x̄),
where w(x) = Π(j=0..n) (x − xj).
Proof: Let Pn(x) = Σ(j=0..n) f(xj) Lj(x) and define
F(x) = f(x) − Pn(x) − K w(x),
with K chosen so that F(x̄) = 0. Then F vanishes at the n + 2 points x0, …, xn, x̄. By Rolle's theorem applied repeatedly, F'(x) has at least n + 1 zeros in the interval, F''(x) has at least n zeros, and finally F^(n+1)(x) has at least one zero ξ in I. Since F^(n+1)(ξ) = f^(n+1)(ξ) − K (n + 1)! = 0,
K = f^(n+1)(ξ) / (n + 1)!.
Hence
f(x̄) − Pn(x̄) = [ f^(n+1)(ξ) / (n + 1)! ] w(x̄).
Since ξ ∈ [a, b], the error formula is valid for x̄ ∈ [a, b], including x̄ = xi for i = 0, 1, …, n.
Example: Estimate the error in interpolating f(x) = √x at x̄ = 115 from the data
xi :  100   121   144
fi :   10    11    12
Solution:
Here f(x) = x^(1/2), f'(x) = ½ x^(-1/2), f''(x) = −¼ x^(-3/2), f'''(x) = (3/8) x^(-5/2), so
M3 = max(100 ≤ x ≤ 144) |f'''(x)| = (3/8)(100)^(-5/2) = (3/8) × 10^(-5).
Hence
Error = |R2(115)| ≤ (M3/3!) |(115 − 100)(115 − 121)(115 − 144)| = (3/8)(10^(-5))(1/6)(15)(6)(29) ≈ 1.6 × 10^(-3).
1st order difference: f[x0, x1] = ( f[x1] − f[x0] ) / (x1 − x0).
2nd order difference: f[x0, x1, x2] = ( f[x1, x2] − f[x0, x1] ) / (x2 − x0), and in general the kth order difference is
f[x0, x1, x2, …, xk] = ( f[x1, x2, …, xk] − f[x0, x1, …, x(k−1)] ) / (xk − x0).
Theorem 5.4: f[x0, …, xk] = Σ(j=0..k) f(xj) / Π(i=0, i≠j..k) (xj − xi).
Proof (by induction): For k = 1,
f[x0, x1] = ( f[x1] − f[x0] ) / (x1 − x0) = f(x0)/(x0 − x1) + f(x1)/(x1 − x0) = Σ(j=0..1) f(xj) / Π(i=0, i≠j..1) (xj − xi).
Assume the formula holds for all differences of order r. Then
f[x0, x1, …, x(r+1)] = ( f[x1, …, x(r+1)] − f[x0, …, xr] ) / (x(r+1) − x0).
Expanding both rth-order differences by the induction hypothesis and collecting the coefficient of each f(xj): for j = r + 1 and j = 0 the coefficient is 1/Π(i≠j)(xj − xi) directly, and for 1 ≤ j ≤ r the two contributions combine as
[1/(x(r+1) − x0)] [ (xj − x0) − (xj − x(r+1)) ] / Π(i=0, i≠j..r+1) (xj − xi) = 1 / Π(i=0, i≠j..r+1) (xj − xi).
Hence
f[x0, x1, …, x(r+1)] = Σ(j=0..r+1) f(xj) / Π(i=0, i≠j..r+1) (xj − xi),
which is the required result.
Remark: The divided difference f[x0, x1, …, xn] is invariant under any reordering of its arguments,
i.e. f[x0, x1, …, xn] = f[x(i0), x(i1), …, x(in)] for any permutation (i0, …, in) of (0, …, n).
Divided differences are most easily computed recursively using the formula
f[xi, x(i+1), …, x(k−1), xk] = ( f[x(i+1), …, x(k−1), xk] − f[xi, x(i+1), …, x(k−1)] ) / (xk − xi).
Divided difference table:
x0  f0
          f[x0, x1]
x1  f1               f[x0, x1, x2]
          f[x1, x2]
x2  f2               f[x1, x2, x3]
          f[x2, x3]
x3  f3               f[x2, x3, x4]
          f[x3, x4]
x4  f4
Example 5.6
xi : 0  1  3  5
yi : 1  2  6  7
Divided difference table:
xi   fi     1st     2nd      3rd
0    1
            1
1    2              1/3
            2                 -17/120
3    6              -3/8
            1/2
5    7
So,
P(x) = 1 + x + (1/3) x(x − 1) − (17/120) x(x − 1)(x − 3)
Note that the xi can be reordered, but they must be distinct. When the order of the xi's is changed, one obtains the same polynomial, but in a different form.
xi 3 1 5 0
yi 6 2 7 1
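A sketch (our own code) of computing the divided-difference coefficients and evaluating the Newton form, using the data of Example 5.6:

    def divided_differences(xs, ys):
        # returns the coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]
        n = len(xs)
        coef = list(ys)
        for k in range(1, n):
            for i in range(n - 1, k - 1, -1):
                coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
        return coef

    def newton_eval(xs, coef, x):
        result = coef[-1]
        for i in range(len(coef) - 2, -1, -1):   # Horner-like evaluation
            result = result * (x - xs[i]) + coef[i]
        return result

    xs, ys = [0, 1, 3, 5], [1, 2, 6, 7]
    c = divided_differences(xs, ys)
    print(c)                       # [1, 1.0, 0.333..., -0.14166...]  i.e. 1, 1, 1/3, -17/120
    print(newton_eval(xs, c, 2))   # value of P(2)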
While theoretically important, Lagrange's formula is, in general, not suitable for actual calculations; there are other forms that are much more convenient, and one of them is Newton's divided difference formula.
If Pn(xi) = fi for i = 0, 1, …, n, then
Pn(x) = f0 + Σ(j=1..n) f[x0, x1, …, xj] (x − x0)…(x − x(j−1)),
and
f(x) = Pn(x) + [ f^(n+1)(ξ) / (n + 1)! ] (x − x0)(x − x1)…(x − xn).
From the recursion
f[x0, x1, …, x(k−1), xk, x] = ( f[x0, x1, …, x(k−1), x] − f[x0, x1, …, xk] ) / (x − xk).
If we set k = 1, then, writing a1 = f[x0, x1] and a2 = f[x0, x1, x2],
f[x0, x] = f[x0, x1] + (x − x1) f[x0, x1, x],
f[x0, x1, x2] = ( f[x0, x2] − f[x0, x1] ) / (x2 − x1) = [ ( f(x2) − f(x0) )/(x2 − x0) − a1 ] / (x2 − x1) = a2.
Continuing in this way,
Pn(x) = f(x0) + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1) + …
      = f(x0) + Σ(j=1..n) f[x0, x1, …, xj] (x − x0)(x − x1)…(x − x(j−1)).
If one further interpolation point x̄ is added, then
P(n+1)(x̄) = f0 + Σ(j=1..n+1) f[x0, x1, …, xj] (x̄ − x0)(x̄ − x1)…(x̄ − x(j−1)).
Therefore, comparing with the error formula, we obtain
f[x0, …, xn] = f^(n)(ξ)/n!  for some ξ in I.
0 132.654
81.13
85.87 1
89.11 1
95.79 1
104.44
0.9 216
Solution:
Here, P5(x) = f0 + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1) + … + f[x0, x1, x2, x3, x4, x5](x − x0)(x − x1)(x − x2)(x − x3)(x − x4).
Remark: The coefficients of the Newton divided difference form of the interpolating polynomial lie along the top diagonal of the table.
For equally spaced points xi = x0 + ih,
f[x0, x1, …, xj] = Δ^j f0 / (j! h^j).
For j = 2,
f[x0, x1, x2] = ( f[x1, x2] − f[x0, x1] ) / (x2 − x0)
             = [ ( f(x2) − f(x1) )/h − ( f(x1) − f(x0) )/h ] / (2h)
             = ( f(x2) − 2f(x1) + f(x0) ) / (2h²)
             = Δ² f0 / (2! h²).
Then, assuming the result for order j, when k = j + 1
f[x0, x1, …, xj, x(j+1)] = ( f[x1, x2, …, x(j+1)] − f[x0, x1, …, xj] ) / (x(j+1) − x0) = Δ^(j+1) f0 / ( (j + 1)! h^(j+1) ).
Thus if xi = x0 + ih for i = 0, 1, …, n,
f[x0, x1, …, xn] = Δⁿ f0 / (n! hⁿ).
Therefore
Pn(x) = f0 + (Δf0/h)(x − x0) + (Δ²f0/(2h²))(x − x0)(x − x1) + … + (Δⁿf0/(n! hⁿ))(x − x0)…(x − x(n−1)).
x0  f0
          Δf0
x1  f1            Δ²f0
          Δf1              Δ³f0
x2  f2            Δ²f1              Δ⁴f0
          Δf2              Δ³f1
x3  f3            Δ²f2
          Δf3
x4  f4
Example 5.7:
Given f(0) = 6, f(1) = 5 = f(2), f(3) = 15, f(4) = 50, find f(1/2).
Forward difference table:
x    f     Δf    Δ²f   Δ³f   Δ⁴f
0    6
          -1
1    5            1
           0             9
2    5           10             6
          10            15
3   15           25
          35
4   50
With h = 1,
f(x) ≈ f0 + Δf0 x + (Δ²f0/2!) x(x − 1) + (Δ³f0/3!) x(x − 1)(x − 2) + (Δ⁴f0/4!) x(x − 1)(x − 2)(x − 3)
     = 6 − x + (1/2!) x(x − 1) + (9/3!) x(x − 1)(x − 2) + (6/4!) x(x − 1)(x − 2)(x − 3).
Then
f(1/2) ≈ 6 − 1/2 + (1/2)(1/2)(−1/2) + (3/2)(1/2)(−1/2)(−3/2) + (1/4)(1/2)(−1/2)(−3/2)(−5/2)
       = 6 − 1/2 − 1/8 + 9/16 − 15/64
       = 365/64.
If we require a formula with ordinates at xn, x(n−1), x(n−2) and so forth, we may replace x0 by xn, x1 by x(n−1), …, xk by x(n−k) and get
Pn(x) = fn + (∇fn/h)(x − xn) + (∇²fn/(2! h²))(x − xn)(x − x(n−1)) + … + (∇ⁿfn/(n! hⁿ))(x − xn)(x − x(n−1))…(x − x1).
This is called Newton's backward interpolation formula, and it uses the difference path indicated below:
x(n−3)  f(n−3)
                ∇f(n−2)
x(n−2)  f(n−2)            ∇²f(n−1)
                ∇f(n−1)              ∇³fn
x(n−1)  f(n−1)            ∇²fn
                ∇fn
xn      fn
x    f     Δf    Δ²f   Δ³f   Δ⁴f
0    6
          -1
1    5            1
           0             9
2    5           10             6
          10            15
3   15           25
          35
4   50
f(x) ≈ P4(x) = f4 + (x − 4)∇f4 + [(x − 4)(x − 3)/2!]∇²f4 + [(x − 4)(x − 3)(x − 2)/3!]∇³f4 + [(x − 4)(x − 3)(x − 2)(x − 1)/4!]∇⁴f4.
At x = 1/2,
f(1/2) ≈ 50 + (−7/2)(35) + [(−7/2)(−5/2)/2!](25) + [(−7/2)(−5/2)(−3/2)/3!](15) + [(−7/2)(−5/2)(−3/2)(−1/2)/4!](6)
       = 50 − 122.5 + 109.375 − 32.8125 + 1.640625
       = 365/64.
More generally, the Newton Forward Formula is used near the beginning of a tabulation and the
backward formula is used near the end of the tabulation.
S = (x − x0)/h,  x = x0 + hS.
Pn(x) = f0 + S Δf0 + [S(S − 1)/2!] Δ²f0 + … + [S(S − 1)…(S − n + 1)/n!] Δⁿf0
      = Σ(k=0..n) C(S, k) Δ^k f0.
Similarly, for the backward formula,
Pn(x) = fn + S ∇fn + [S(S + 1)/2!] ∇²fn + … + [S(S + 1)…(S + k − 1)/k!] ∇^k fn + …,
where S = (x − xn)/h.
Then find f(1/2).
Solution:
First we interpolate f through the given points. Using Newton's forward interpolation formula,
f(x) ≈ f0 + S Δf0 + [S(S − 1)/2!] Δ²f0 + [S(S − 1)(S − 2)/3!] Δ³f0 + [S(S − 1)(S − 2)(S − 3)/4!] Δ⁴f0,
with the difference table as above (Δf0 = −1, Δ²f0 = 1, Δ³f0 = 9, Δ⁴f0 = 6). With x0 = 0 and h = 1 we have S = 1/2, so
f(1/2) ≈ 6 + (1/2)(−1) + [(1/2)(−1/2)/2!](1) + [(1/2)(−1/2)(−3/2)/3!](9) + [(1/2)(−1/2)(−3/2)(−5/2)/4!](6)
       = 6 − 1/2 − 1/8 + 9/16 − 15/64
       = 365/64.
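A quick numerical check of this forward-difference evaluation (our own sketch, not the text's code):

    def newton_forward(x0, h, f_values, x):
        # P(x) = sum_k C(S, k) * Δ^k f0, with S = (x - x0)/h
        diffs = [list(f_values)]
        while len(diffs[-1]) > 1:
            diffs.append([diffs[-1][i + 1] - diffs[-1][i] for i in range(len(diffs[-1]) - 1)])
        S = (x - x0) / h
        total, binom = 0.0, 1.0
        for k, col in enumerate(diffs):
            total += binom * col[0]
            binom *= (S - k) / (k + 1)   # update C(S, k+1) from C(S, k)
        return total

    print(newton_forward(0, 1, [6, 5, 5, 15, 50], 0.5))   # 5.703125 = 365/64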
Polynomial interpolation means the determination of a polynomial Pn(x) such that Pn(xj)=fj,
where j=0, …, n and (x0, f0), …, (xn, fn) are measured or observed values, values of a function,
etc. Pn(x) is called an interpolation polynomial. For given data, Pn(x) of degree n (or less) is
unique. However, it can be written in different forms, notably in Lagrange’s form, or in
Newton's divided difference form, which requires fewer operations. For regularly spaced x0, x1 = x0 + h, …, xn = x0 + nh, the latter becomes Newton's forward difference formula.
1. The population of a country in the census years is as under. Estimate the population for the year 1925.
Year x: 1891 1901 1911 1921 1931
Population y: 46 66 81 93 101
(in thousands)
4. The function y = f(x) is given at the points (7, 3), (8, 1), (9, 1) and (10, 5). Find the value of
y for x = 9.5 using Lagrange’s interpolation formula
u -2 -1 0 1 2 3
APPLICATION OF INTERPOLATIONS
Objectives
On completion of this chapter successfully, students will be able to: grasp practical
knowledge of polynomial interpolation in numerical differentiation and integration,
Introduction
In this chapter, we shall discuss numerical differentiation and numerical integration. Under this,
we shall first approximate the function with the help of interpolation formula and differentiating
this formula as many times as required.
The second part deals with the integration of functions by the trapezoidal rule and Simpson's rule.
6.1 Differentiation
If the function f(x) is very complicated to differentiate, or is known only through a table, we use a numerical differentiation method. Formulas for numerical differentiation may be obtained by differentiating the interpolating polynomial. The essential idea is that the derivatives f'(x), f''(x), … of the function f(x) are represented by the derivatives Pn'(x), Pn''(x), … respectively.
f(x) ≈ f0 + S Δf0 + [S(S − 1)/2!] Δ²f0 + [S(S − 1)(S − 2)/3!] Δ³f0 + [S(S − 1)(S − 2)(S − 3)/4!] Δ⁴f0 + …,
where S = (x − x0)/h and h = x(i+1) − xi (i = 0, 1, …). Writing the products out,
f(x) ≈ f0 + S Δf0 + [(S² − S)/2!] Δ²f0 + [(S³ − 3S² + 2S)/3!] Δ³f0 + [(S⁴ − 6S³ + 11S² − 6S)/4!] Δ⁴f0 + …
Since df/dx = (df/dS)(dS/dx) = (1/h)(df/dS),
f'(x) ≈ (1/h) [ Δf0 + ((2S − 1)/2!) Δ²f0 + ((3S² − 6S + 2)/3!) Δ³f0 + ((4S³ − 18S² + 22S − 6)/4!) Δ⁴f0 + … ].
Alternatively, interpolating with the quadratic through (x0, f0), (x1, f1), (x2, f2),
P2(x) = [(x − x1)(x − x2)/((x0 − x1)(x0 − x2))] f0 + [(x − x0)(x − x2)/((x1 − x0)(x1 − x2))] f1 + [(x − x0)(x − x1)/((x2 − x0)(x2 − x1))] f2
      = [(S − 1)(S − 2)/2] f0 − S(S − 2) f1 + [S(S − 1)/2] f2,
so
f'(x) ≈ (1/h) [ ½(2S − 3) f0 − (2S − 2) f1 + ½(2S − 1) f2 ],
and at x = x0 (S = 0)
f'(x0) ≈ (1/h) [ −(3/2) f0 + 2 f1 − (1/2) f2 ] = (1/(2h)) [ −3 f0 + 4 f1 − f2 ].
Similarly, since
d2 f d( f ) d 1 df 1 d3 f
(x) = = = = dx
dx 2 dx dx dx ds h 2 ds 2
12S 2 36S 22 4
(x) =
1
h2
Δ2
f 0
6S 6 3
Δ f 0 Δ f0
3! 24
If Rn(x) = f(x) − Pn(x) is the corresponding interpolation error, the error in determining f'(x) is Rn'(x). As we know,
Rn(x) = [ (x − x0)(x − x1)…(x − xn) / (n + 1)! ] f^(n+1)(ξ)
      = h^(n+1) [ S(S − 1)…(S − n) / (n + 1)! ] f^(n+1)(ξ),
so
Rn'(x) = [ hⁿ / (n + 1)! ] { f^(n+1)(ξ) d/dS [ S(S − 1)…(S − n) ] + h S(S − 1)…(S − n) d/dS [ f^(n+1)(ξ) ] }.
If we suppose d/dS f^(n+1)(ξ) to be bounded and take into account that
d/dS [ S(S − 1)…(S − n) ] at S = 0 equals (−1)ⁿ n!,
then for x = x0, and hence for S = 0, we get
Rn'(x0) = (−1)ⁿ [ hⁿ / (n + 1) ] f^(n+1)(ξ).
In many cases it is difficult to estimate f^(n+1)(ξ), since we do not know ξ; but for small h we may approximate f^(n+1)(ξ) ≈ Δ^(n+1) f0 / h^(n+1), and hence
Rn'(x0) ≈ (−1)ⁿ Δ^(n+1) f0 / ( (n + 1) h ).
More generally, assuming d/dx f^(n+1)(ξ) is bounded, the error of the derivative at the tabular points is
Rn'(xi) = (−1)^(n−i) [ i!(n − i)! / (n + 1)! ] hⁿ f^(n+1)(ξ).
x    : 50      55      60      65
f(x) : 1.6990  1.7404  1.7782  1.8129
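Assuming the task for this table is to estimate f'(x) at the beginning of the table (the example statement is not reproduced above), a sketch using the forward-difference derivative formula with h = 5; the tabulated values resemble log10 x, whose derivative at 50 is about 0.00869:

    # f'(x0) ≈ (1/h) [Δf0 − Δ²f0/2 + Δ³f0/3 − ...]
    x = [50, 55, 60, 65]
    f = [1.6990, 1.7404, 1.7782, 1.8129]
    h = x[1] - x[0]

    d1 = [f[i + 1] - f[i] for i in range(3)]      # first differences
    d2 = [d1[i + 1] - d1[i] for i in range(2)]    # second differences
    d3 = [d2[1] - d2[0]]                          # third difference

    fprime_x0 = (d1[0] - d2[0] / 2 + d3[0] / 3) / h
    print(fprime_x0)   # approximately 0.00867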
It is known that the maximum and minimum values of a function can be found by equating the
first derivative to zero and solving for the variable. The same procedure can be applied to
determine the maxima and minima of a tabulated function. Consider Newton’s forward
difference formula
f(x) ≈ f0 + S Δf0 + [S(S − 1)/2] Δ²f0 + [S(S − 1)(S − 2)/6] Δ³f0 + …
df/dS = Δf0 + [(2S − 1)/2] Δ²f0 + [(3S² − 6S + 2)/6] Δ³f0 + …     (*)
For a maximum or minimum,
df/dS = 0.
Terminating the series after third differences and collecting powers of S gives the quadratic
C0 + C1 S + C2 S² = 0,
where
C0 = Δf0 − (1/2) Δ²f0 + (1/3) Δ³f0
C1 = Δ²f0 − Δ³f0
C2 = (1/2) Δ³f0
Example 6.2:
Find x, correct to two decimal places, for which f(x) is a maximum, and find this value of f(x), given the table
x    : 1.2     1.3     1.4     1.5     1.6
f(x) : 0.9320  0.9636  0.9855  0.9975  0.9996
so that Δf0 = 0.0316, Δf1 = 0.0219, Δf2 = 0.0120, Δf3 = 0.0021 and Δ²f0 ≈ −0.0097.
Taking x0 = 1.2 and terminating after second differences,
Δf0 + [(2S − 1)/2] Δ²f0 = 0,
0.0316 − 0.00485(2S − 1) = 0,
which gives S ≈ 3.76, and
x = x0 + Sh ≈ 1.2 + 0.1(3.76) ≈ 1.58, with f(x) ≈ 1.00.
If a function f(x) is continuous on an interval [a, b] and its anti derivative F(x) is known, then the
definite integral of this function from a to b may be computed from:
∫(a..b) f(x)dx = F(b) − F(a).
Otherwise we approximate
∫(a..b) f(x)dx ≈ ∫(a..b) φ(x)dx,
where φ(x) is an approximating function; the integral ∫(a..b) φ(x)dx can then be evaluated directly.
Suppose, for the function f(x), we know the corresponding values at n + 1 arbitrary points x0, x1, x2, …, xn of [a, b]: f(xi) = fi, i = 0, 1, 2, …, n. Then
∫(a..b) f(x)dx ≈ ∫(a..b) Σ(i=0..n) [ w(x) / ((x − xi) w'(xi)) ] fi dx = Σ(i=0..n) Ai fi,
where
Ai = ∫(a..b) w(x) / ((x − xi) w'(xi)) dx.
The coefficients Ai are independent of the choice of the function f(x) for the given points, and if f(x) is a polynomial of degree at most n then
∫(a..b) f(x)dx = ∫(a..b) Pn(x)dx exactly.
The Ai may also be found by the method of undetermined coefficients, requiring the rule to be exact for 1, x, x², …, xⁿ:
Σ(i=0..n) Ai xi^k = Ik,  k = 0, 1, …, n,
where
Ik = ∫(a..b) x^k dx = ( b^(k+1) − a^(k+1) ) / (k + 1).
Example: Determine A0, A1, A2 so that ∫(0..1) f(x)dx ≈ A0 f(1/4) + A1 f(1/2) + A2 f(3/4) is exact for polynomials of degree at most 2.
Solution:
Requiring exactness for f = 1, x, x²:
1 = A0 + A1 + A2
1/2 = (1/4)A0 + (1/2)A1 + (3/4)A2
1/3 = (1/16)A0 + (1/4)A1 + (9/16)A2
Hence A0 = 2/3, A1 = −1/3, A2 = 2/3, and thus
∫(0..1) f(x)dx ≈ (2/3) f(1/4) − (1/3) f(1/2) + (2/3) f(3/4).
Checks:
1. f(x) = x³: (2/3)(1/4)³ − (1/3)(1/2)³ + (2/3)(3/4)³ = 0.25, which equals ∫(0..1) x³ dx exactly.
2. f(x) = x^(3/2): (2/3)(1/4)^(3/2) − (1/3)(1/2)^(3/2) + (2/3)(3/4)^(3/2) = 0.398, while ∫(0..1) x^(3/2) dx = 0.4.
If f(x) is the function and Pn(x) is the interpolating polynomial, then the error of Pn(x) in approximating f(x) for x ≠ xi is
f(x) − Pn(x) = [ f^(n+1)(ξ) / (n + 1)! ] w(x),
so the error due to the integral approximation is
R = ∫(a..b) f(x)dx − ∫(a..b) Pn(x)dx = (1/(n + 1)!) ∫(a..b) f^(n+1)(ξ(x)) w(x)dx.
Let xi = x0 + ih, i = 0, 1, …, n, with h = (b − a)/n, and let Pn be the interpolating polynomial of degree n or less with
Pn(xi) = f(xi), i = 0, …, n:
Pn(x) = Σ(i=0..n) Li(x) fi,  where Li(x) = Π(j=0, j≠i..n) (x − xj)/(xi − xj).
Let S = (x − x0)/h and S^[n+1] = S(S − 1)…(S − n); then
Pn(x) = Σ(i=0..n) [ (−1)^(n−i) / (i!(n − i)!) ] [ S^[n+1] / (S − i) ] fi.
To compute ∫(a..b) f(x)dx we proceed as follows:
∫(a..b) f(x)dx ≈ ∫(a..b) Σ(i=0..n) Li(x) fi dx
             = Σ(i=0..n) [ ∫(a..b) (−1)^(n−i) S^[n+1] / (i!(n − i)!(S − i)) dx ] fi
             = h Σ(i=0..n) [ (−1)^(n−i) / (i!(n − i)!) ∫(0..n) S^[n+1] / (S − i) dS ] fi,
since dx = h dS. Letting
Ai = [ (−1)^(n−i) / (i!(n − i)!) ] ∫(0..n) S^[n+1] / (S − i) dS,
we have
∫(a..b) f(x)dx ≈ h Σ(i=0..n) Ai fi.
For n = 1, ∫(a..b) f(x)dx ≈ h Σ(i=0..1) Ai fi, with
for i = 0: A0 = [(−1)¹/(0!1!)] ∫(0..1) S(S − 1)/S dS = −[S²/2 − S] from 0 to 1 = 1/2,
for i = 1: A1 = [(−1)⁰/(1!0!)] ∫(0..1) S(S − 1)/(S − 1) dS = ∫(0..1) S dS = 1/2.
Hence
∫(x0..x1) f(x)dx ≈ h[A0 f0 + A1 f1] = (h/2)[f0 + f1],
which is the trapezoidal rule.
We define the remainder
R(h) = ∫(x0..x0+h) f(x)dx − (h/2)[f(x0) + f(x0 + h)].
Differentiating with respect to h,
R'(h) = f(x0 + h) − (1/2)[f(x0) + f(x0 + h)] − (h/2) f'(x0 + h)
      = (1/2) f(x0 + h) − (h/2) f'(x0 + h) − (1/2) f(x0).
Expanding f(x0 + h) and f'(x0 + h) in Taylor series about x0,
R'(h) = (1/2)[f(x0) + h f'(x0) + (h²/2) f''(ξ)] − (h/2)[f'(x0) + h f''(ξ)] − (1/2) f(x0)
      = −(1/4) h² f''(ξ).
Hence
R(h) = −(1/4) ∫(0..h) t² f''(ξ) dt = −(h³/12) f''(ξ),  for some ξ in [x0, x0 + h].
If the interval [a, b] is large, then to evaluate ∫(a..b) f(x)dx we divide [a, b] into n equal parts [x0, x1], [x1, x2], …, [x(n−1), xn] and apply the trapezoidal rule to each. Setting h = (b − a)/n, we have
∫(a..b) f(x)dx = ∫(x0..x1) f(x)dx + ∫(x1..x2) f(x)dx + … + ∫(x(n−1)..xn) f(x)dx
             ≈ (h/2)[f0 + f1] + (h/2)[f1 + f2] + … + (h/2)[f(n−1) + fn]
             = (h/2)[f0 + 2(f1 + f2 + … + f(n−1)) + fn]
             = (h/2){(sum of the first and last ordinates) + 2(sum of the other ordinates)}.
The error is
R = ∫(x0..xn) f(x)dx − (h/2) Σ(i=1..n) (f(i−1) + fi)
  = Σ(i=1..n) [ ∫(x(i−1)..xi) f(x)dx − (h/2)(f(i−1) + fi) ]
  = −(h³/12) Σ(i=1..n) f''(ξi).
Since f'' is continuous on [a, b], there exists a point ξ in [a, b] such that f''(ξ) = (1/n) Σ(i=1..n) f''(ξi). We then have
R = −(n h³/12) f''(ξ) = −((b − a) h²/12) f''(ξ).
Example: Evaluate ∫(0..1) dx/(1 + x) using the trapezoidal rule and the following data.
Solution: Taking n = 10, so h = 0.1,
∫(0..1) dx/(1 + x) ≈ 0.1 [0.5 + 0.90909 + 0.83333 + … + 0.52632 + 0.25] = 0.69377.
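A short composite trapezoidal sketch (our own code), reproducing this estimate:

    def trapezoid(f, a, b, n):
        h = (b - a) / n
        s = 0.5 * (f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n))
        return h * s

    print(trapezoid(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, 10))
    # about 0.69377 (the exact value is ln 2 = 0.69315)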
For n = 2, ∫(a..b) f(x)dx ≈ h Σ(i=0..2) Ai fi, where
for i = 0: A0 = [(−1)²/(0!2!)] ∫(0..2) S(S − 1)(S − 2)/S dS = (1/2) ∫(0..2) (S² − 3S + 2) dS = (1/2)(8/3 − 6 + 4) = 1/3,
for i = 1: A1 = [(−1)¹/(1!1!)] ∫(0..2) S(S − 1)(S − 2)/(S − 1) dS = −∫(0..2) (S² − 2S) dS = −(8/3 − 4) = 4/3,
for i = 2: A2 = [(−1)⁰/(2!0!)] ∫(0..2) S(S − 1)(S − 2)/(S − 2) dS = (1/2) ∫(0..2) (S² − S) dS = (1/2)(8/3 − 2) = 1/3.
Hence
∫(x0..x2) f(x)dx ≈ h[A0 f0 + A1 f1 + A2 f2] = (h/3)[f0 + 4f1 + f2],
which is Simpson's 1/3 formula.
The remainder term of Simpson's 1/3 formula is
R = ∫(x0..x2) f(x)dx − (h/3)[f0 + 4f1 + f2].
Writing it as a function of h about the midpoint x1,
R(h) = ∫(x1−h..x1+h) f(x)dx − (h/3)[f(x1 − h) + 4f(x1) + f(x1 + h)],
R'(h) = f(x1 + h) + f(x1 − h) − (1/3)[f(x1 − h) + 4f(x1) + f(x1 + h)] − (h/3)[f'(x1 + h) − f'(x1 − h)]
      = (2/3)[f(x1 + h) + f(x1 − h)] − (4/3) f(x1) − (h/3)[f'(x1 + h) − f'(x1 − h)].
Expanding f(x1 ± h) and f'(x1 ± h) in Taylor series about x1,
f(x1 + h) + f(x1 − h) = 2f(x1) + h² f''(x1) + (h⁴/12) f⁽ⁱᵛ⁾(ξ),
f'(x1 + h) − f'(x1 − h) = 2h f''(x1) + (h³/3) f⁽ⁱᵛ⁾(ξ),
so that
R'(h) = −(1/18) h⁴ f⁽ⁱᵛ⁾(ξ)
and, integrating from 0 to h,
R(h) = −(1/90) h⁵ f⁽ⁱᵛ⁾(ξ).
Since each application of Simpson's rule requires two subintervals, n must be an even integer; let n = 2m and let f(xi) be the values of the function f at equally spaced points with
h = (b − a)/(2m).
Applying Simpson's rule to each doubled interval [x0, x2], [x2, x4], …, [x(2m−2), x(2m)] of length 2h, we get
∫(a..b) f(x)dx ≈ (h/3)(f0 + 4f1 + f2) + (h/3)(f2 + 4f3 + f4) + … + (h/3)(f(2m−2) + 4f(2m−1) + f(2m))
             = (h/3)[(f0 + f(2m)) + 4(f1 + f3 + f5 + … + f(2m−1)) + 2(f2 + f4 + f6 + … + f(2m−2))].
Letting S1 = f1 + f3 + f5 + … + f(2m−1) and S2 = f2 + f4 + f6 + … + f(2m−2),
∫(a..b) f(x)dx ≈ (h/3)[f0 + f(2m) + 4S1 + 2S2],
which is Simpson's general (composite) 1/3 formula. Its error is
R = −(h⁵/90) Σ(k=1..m) f⁽ⁱᵛ⁾(ξk).
Since f⁽ⁱᵛ⁾(x) is continuous on [a, b], there exists a point ξ in [a, b] such that Σ(k=1..m) f⁽ⁱᵛ⁾(ξk) = m f⁽ⁱᵛ⁾(ξ); therefore
R = −(m h⁵/90) f⁽ⁱᵛ⁾(ξ) = −((b − a) h⁴/180) f⁽ⁱᵛ⁾(ξ).
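A composite Simpson's 1/3 sketch (our own code), which on the next example's integrand gives about 0.69315:

    def simpson(f, a, b, m):
        # m pairs of subintervals, i.e. n = 2m points of spacing h = (b - a)/(2m)
        n = 2 * m
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i*h) for i in range(1, n, 2))   # odd ordinates
        s += 2 * sum(f(a + i*h) for i in range(2, n, 2))   # even interior ordinates
        return h * s / 3.0

    print(simpson(lambda x: 1.0 / (1.0 + x), 0.0, 1.0, 5))   # about 0.693150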
Activity 6.3:
One might expect that
lim(n→∞) h Σ(i=0..n) Ai fi = ∫(a..b) f(x)dx
for Ai = [(−1)^(n−i)/(i!(n − i)!)] ∫(0..n) S^[n+1]/(S − i) dS. The table below shows values of Σ(i=0..n) Ai fi for increasing n:
n :   2        4        6        8        10
Σ :   5.4902   2.2776   3.3288   1.9411   3.5956
This implies that lim(n→∞) Σ(i=0..n) Ai fi does not exist, and it illustrates the fact that the Newton-Cotes integration formulas need not converge to ∫(a..b) f(x)dx as n → ∞. However, if
lim(n→∞) Σ(i=0..n) |Ai| ≤ B < ∞,
then lim(n→∞) h Σ(i=0..n) Ai fi = ∫(a..b) f(x)dx for any continuous function f(x) on [a, b].
Example: Evaluate ∫(0..1) dx/(1 + x) by Simpson's 1/3 rule, taking n = 10.
Solution: Here h = 1/10 = 0.1, and the ordinates are
i  :  0   1        2        3        4        5        6      7        8        9        10
xi :  0   0.1      0.2      0.3      0.4      0.5      0.6    0.7      0.8      0.9      1.0
fi :  1   0.90909  0.83333  0.76923  0.71429  0.66667  0.625  0.58824  0.55556  0.52632  0.5
S1 = f1 + f3 + f5 + f7 + f9 = 3.45955,  S2 = f2 + f4 + f6 + f8 = 2.72818.
∫(0..1) dx/(1 + x) ≈ (h/3)(f0 + 4S1 + 2S2 + f10)
                   = (0.1/3)(1 + 4(3.45955) + 2(2.72818) + 0.5)
                   = 0.69315.
x 0 1 2 5
f(x) 2 3 12 147
2. The function f(x) = y is given at the points (7, 3) (8, 1), (9, 1) and (10, 5). Find the value of
y for x = 9.5 using Lagrange’s interpolation formula
y(x) -12 -8 3 5
Suppose we have a function f(x) given in tabular form. The problem of inverse interpolation consists in determining a value of the argument x from a given value of the function f(x). In this case we assume that the points are equidistant and the function f(x) is monotonic. Then, replacing f(x) by Newton's forward interpolation polynomial, we get
S = (1/Δf0) [ f(x) − f0 − S(S − 1) Δ²f0/2! − S(S − 1)(S − 2) Δ³f0/3! − … − S(S − 1)…(S − n + 1) Δⁿf0/n! ].
Then apply the method of successive approximations. For the initial approximation we neglect all the terms in S on the right to obtain
S0 = ( f(x) − f0 ) / Δf0.
The second approximation is obtained using S0 on the right-hand side, including now one more term:
S1 = (1/Δf0) [ f(x) − f(x0) − S0(S0 − 1) Δ²f0/2 ].
The next approximation uses S1 on the right and picks up another term:
S2 = (1/Δf0) [ f(x) − f(x0) − S1(S1 − 1) Δ²f0/2 − S1(S1 − 1)(S1 − 2) Δ³f0/3! ].
The process of iteration is continued until the required accuracy is obtained; then
x = x0 + S h.
Solution:
x    y     Δy    Δ²y   Δ³y
2    8
          19
3   27           18
          37            6
4   64           24
          61
5  125
Here Δy0 = 19, Δ²y0 = 18, Δ³y0 = 6, and we seek x such that y = 10.
S0 = (y − y0)/Δy0 = (10 − 8)/19 = 0.1
S1 = (1/Δy0)[ y − y0 − S0(S0 − 1) Δ²y0/2 ] = (1/19)[10 − 8 − 0.1(0.1 − 1)(9)] = 0.15
S2 = (1/Δy0)[ y − y0 − S1(S1 − 1) Δ²y0/2 − S1(S1 − 1)(S1 − 2) Δ³y0/3! ]
   = (1/19)[10 − 8 − 0.15(0.15 − 1)(9) − 0.15(0.15 − 1)(0.15 − 2)(1)] = 0.1532
Hence x = x0 + h S2 = 2 + 0.1532 = 2.1532.
The problem of inverse interpolation for the case of unequally spaced values of the argument x0, x1, …, xn can be solved directly by means of Lagrange's interpolation formula. To do this, it is sufficient to take f(x) as the independent variable and to write
x = Σ(i=0..n) [ (y − y0)(y − y1)…(y − y(i−1))(y − y(i+1))…(y − yn) / ((yi − y0)(yi − y1)…(yi − y(i−1))(yi − y(i+1))…(yi − yn)) ] xi,
where yi = f(xi), i = 0, 1, …, n.
Hence inverse interpolation can be used for finding the root of an equation, provided values of f(x) at some points are given.
Consider the improper integral
∫(a..∞) f(x)dx,
which is convergent if lim(b→∞) ∫(a..b) f(x)dx exists.
If the limit does not exist, then the integral is divergent and such an integral is considered meaningless. To evaluate a convergent improper integral to a given accuracy we represent it in the form
∫(a..∞) f(x)dx = ∫(a..b) f(x)dx + ∫(b..∞) f(x)dx.
Since the integral converges, the number b may be chosen so large that the inequality
| ∫(b..∞) f(x)dx | < ε/2
holds true; the finite integral ∫(a..b) f(x)dx is then evaluated by a quadrature rule.
Example: Evaluate ∫(2..∞) dx/(1 + x³) to within ε = 10⁻⁴ (ε = accuracy).
Write
∫(2..∞) dx/(1 + x³) = ∫(2..b) dx/(1 + x³) + ∫(b..∞) dx/(1 + x³),
and require | ∫(b..∞) dx/(1 + x³) | < ε/2 = 10⁻⁴/2. Since
∫(b..∞) dx/(1 + x³) < ∫(b..∞) dx/x³ = 1/(2b²),
it suffices that 1/(2b²) ≤ 10⁻⁴/2, i.e. b² ≥ 10⁴, so we may take b = 10² = 100. Hence
∫(2..∞) dx/(1 + x³) ≈ ∫(2..100) dx/(1 + x³),
and the finite integral is evaluated by quadrature to the accuracy ε/2.
Suppose now that the interval of integration [a, b] is finite and the integrand f(x) has a finite
number of discontinuities on [a, b]. Let us examine the case when there is a single discontinuity
point c of the function f(x) on [a, b].
In order to approximate the integral to a given accuracy ε, one chooses positive numbers δ1 and δ2 so small that the inequality
| ∫(c−δ1..c+δ2) f(x)dx | < ε/2
holds true. The integrals
∫(a..c−δ1) f(x)dx  and  ∫(c+δ2..b) f(x)dx
are then evaluated by quadrature, and
∫(a..b) f(x)dx ≈ ∫(a..c−δ1) f(x)dx + ∫(c+δ2..b) f(x)dx.
∫∫(R) f(x, y)dA,   R = {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}.
To illustrate the approximation technique we employ Simpson's rule, although any other Newton-Cotes formula could be used. Suppose that integers n and m are chosen to determine the step sizes h = (b − a)/(2n) and k = (d − c)/(2m). Then
∫∫(R) f(x, y)dA = ∫(a..b) [ ∫(c..d) f(x, y)dy ] dx.
Applying the composite Simpson rule first in y,
∫(c..d) f(x, y)dy ≈ (k/3) [ f(x, y0) + 2 Σ(j=1..m−1) f(x, y(2j)) + 4 Σ(j=1..m) f(x, y(2j−1)) + f(x, y(2m)) ],
and then applying the composite Simpson rule in x to each of the resulting integrals
∫(a..b) f(x, yj)dx ≈ (h/3) [ f(x0, yj) + 2 Σ(i=1..n−1) f(x(2i), yj) + 4 Σ(i=1..n) f(x(2i−1), yj) + f(x(2n), yj) ],
we obtain
∫(a..b) ∫(c..d) f(x, y)dy dx ≈ (hk/9) Σ(i=0..2n) Σ(j=0..2m) wij f(xi, yj),
where the weights wij are the products of the one-dimensional Simpson coefficients (1, 4, 2, 4, …, 2, 4, 1) in the x- and y-directions,
e.g. w00 = 1, w01 = 4, w02 = 2, w03 = 4, …, w10 = 4, w11 = 16, w12 = 8, …
Example 6.9
Approximate ∫(1.4..2.0) ∫(1.0..1.5) ln(x + 2y) dy dx.
Take 2n = 2·2 = 4 and 2m = 2·1 = 2, so that h = (2.0 − 1.4)/4 = 0.15 and k = (1.5 − 1.0)/2 = 0.25.
Applying Simpson's rule first in y,
∫(1.0..1.5) ln(x + 2y)dy ≈ (0.25/3) [ ln(x + 2y0) + 4 ln(x + 2y1) + ln(x + 2y2) ],
and then in x, we obtain, in short,
∫(1.4..2.0) ∫(1.0..1.5) ln(x + 2y) dy dx ≈ [(0.15)(0.25)/9] Σ(i=0..4) Σ(j=0..2) wij ln(xi + 2yj),
with the weights wij:
y \ x :  x0   x1   x2   x3   x4
1.50  :  1    4    2    4    1
1.25  :  4    16   8    16   4
1.00  :  1    4    2    4    1
Numerical differentiation is the process of obtaining the value of the derivative of a function
from a set of numerical values of that function
1. If the arguments are equally spaced:
a. We use the Newton forward formula if we want the derivative of the function at a point near the beginning of the table.
b. If we want the derivative of the function at a point near the end of the table, we use the Newton backward formula.
c. If the derivative is required at a point near the middle of the table, we apply Stirling's central difference formula.
2. In case the arguments are unequally spaced, we should use Newton's divided difference formula.
Numerical integration is the process of obtaining the value of a definite integral from a set of numerical values of the integrand. The process of finding the value of the definite integral
I = ∫(a..b) f(x)dx
of a function of a single variable is called numerical quadrature; when applied to a function of two variables it is called mechanical cubature.
The problem of numerical integration is solved by first approximating the function f(x) by an interpolating polynomial and then integrating it between the desired limits.
Thus f(x) ≈ Pn(x) and
∫(a..b) f(x)dx ≈ ∫(a..b) Pn(x)dx.
1. Find the first, second and third derivatives of the function tabulated below, at the point
x=1.5
x 1.5 2.0 2.5 3.0 3.5 4.0
F(x) 3.375 7.000 13.625 24.000 38.875 59.000
3. Evaluate ∫(0..1) dx/(1 + x²) by using Simpson's one-third and three-eighths rules. Hence obtain the