class notes Numerical Analysis

Class notes on Numerical analysis

Uploaded by fvs

Numerical Analysis

ERROR:
1.1 Introduction
Numerical analysis deals with finding the value of a function, its derivative, or its integral when the function itself is not known but its values at some points are known; it also deals with solving the algebraic and differential equations we want to manage. In fact there are two components: numerical methods and numerical analysis. The second part deals with error management along with the method.
1.2 Sources of ERROR:
In mathematics we often take approximations, say √2 = 1.414 or sin x ≈ x. This contributes some error. Sources of error are of two types: round-off error and truncation error.
Decimal places and significant figures: in the number 23.765 the number of significant figures is 5, whereas the number of decimal places is 3.
.002354 has 6 DP whereas it has 4 s.f.
.12300 has 5 DP but at least 3 s.f. (we will explain later)

1.3 Types of error:

There are three types of error: absolute error, relative error and percentage error.
Let x be a number (unless otherwise stated it will always be a real number) and x* its approximate value. Then the absolute error Ea is given by Ea = |x − x*|.
The relative error Er is given by Er = |x − x*|/|x| ≈ |x − x*|/|x*| = Ea/|x| ≈ Ea/|x*|.
The percentage error Ep is given by Ep = Er × 100.
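The three measures translate into a few lines of Python (a small sketch; the function names are illustrative, not standard):

```python
import math

def abs_error(x, x_star):
    """Absolute error Ea = |x - x*|."""
    return abs(x - x_star)

def rel_error(x, x_star):
    """Relative error Er = |x - x*| / |x|."""
    return abs(x - x_star) / abs(x)

def pct_error(x, x_star):
    """Percentage error Ep = Er * 100."""
    return rel_error(x, x_star) * 100

# sqrt(2) approximated by 1.414, as in the text
x, x_star = math.sqrt(2), 1.414
print(abs_error(x, x_star))   # ≈ 0.000214
print(pct_error(x, x_star))   # ≈ 0.0151 (percent)
```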


1.4 Method of rounding off:
Let us take some examples first:
12.24567 ≈ 12.246 correct to 5 s.f or 3 DP
Ea=.00033
12.24547 ≈ 12.245 correct to 5 s.f or 3 DP
Ea=.00047
12.24557 ≈ 12.246 correct to 5 s.f or 3 DP
Ea=.00043
12.24657 ≈ 12.247 correct to 5 s.f or 3 DP
Ea=.00043

12.24650 ≈ 12.246 correct to 5 s.f or 3 DP


Ea=.00050
12.24750 ≈ 12.248 correct to 5 s.f or 3 DP
Ea=.00050
So we can check easily that, whatever the approximate number, in each of the first four cases the absolute error is minimum for the given approximation. So we now make the rounding-off rule: “if the discarded part (which is .00067 and .00047 in the first two cases) is more than .0005 then add 1 at the 3rd decimal place, and if it is less than .0005 then discard it”. Fine; now what happens if it is exactly .0005? Look at the last two cases. There, if we take 12.247 as the approximate number in both cases, the absolute error remains the same. So we fix our strategy like this: “if the discarded part is exactly .0005 then add 1 to the 3rd decimal place if its digit is odd, else keep it unchanged”. The 3rd decimal place appears here because we are approximating up to 3 DP; an absolute error of .0005 means ½ × 10^−m where m = 3 for 3 DP.
So finally the rounding-off method becomes: “if the discarded part is more than ½ × 10^−m then add 1 at the m-th decimal place (or at the last significant figure); if it is less than ½ × 10^−m then discard it; and if the discarded part is exactly ½ × 10^−m then add 1 to the last decimal place (or last significant figure) if its digit is odd, otherwise keep it unchanged”. That makes the upper bound of the rounding-off error ½ × 10^−m, or half a unit at the last significant digit. Now the most relevant question is: what is the reason behind this rounding-off technique?
The obvious answer is to make the absolute error minimum. But why that odd-even principle? First of all, if we were always to increase (or always to decrease) the last digit by 1 in the exact-half case, the final answer of a long sum would attract more error; under this rule we increase it in about 50% of the cases and not in the other 50%, so the errors tend to cancel. Again, by this principle we make the last digit of the number even. A number whose last digit is even is divisible by 2 and may be divisible by some other numbers too, whereas an odd-ended number is not divisible by any even number; so ultimately the probability of attracting further error is less.
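The odd-even rule above is exactly what Python's decimal module calls ROUND_HALF_EVEN (sometimes “banker's rounding”); a small sketch:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Decimal avoids binary floating-point surprises with values like 12.24650.
def round_half_even(value: str, places: int) -> Decimal:
    exp = Decimal(1).scaleb(-places)          # e.g. 0.001 for 3 DP
    return Decimal(value).quantize(exp, rounding=ROUND_HALF_EVEN)

print(round_half_even("12.24650", 3))   # 12.246  (6 is even: unchanged)
print(round_half_even("12.24750", 3))   # 12.248  (7 is odd: add 1)
```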
Example: If x = 19.9995, round it off up to 5, 4, 3, 2, 1 significant figures.
x* =20.000 correct to 5 s.f
x* =20.00 correct to 4 s.f

x* =20.0 correct to 3 s.f


x* =20 correct to 2 s.f
x* =20 correct to 1 s.f
Example: Round off 237654 to 2 and 4 s.f. Answer: 240000 and 237700 respectively.
In the above example we observe that the rounded number 240000, taken by itself, may have 2 s.f. or more. So we will say that it has at least 2 s.f. till we are sure about its origin.
Example: Round off the following numbers to four significant digits:
i) 23.4251 ii) 32.4250 iii) 24.87500 iv) 19.995 v) 437.261 vi) 19.36235
Solution: i) 23.43 ii) 32.42 iii) 24.88 iv) 20.00 v) 437.3 vi) 19.36

Example: Round off the number 54762 to four significant digits and then calculate the absolute error, relative error and percentage error.
Solution: The given number is 54762 (= N).
After rounding off to four significant figures, the number becomes 54760 (= N1).
Absolute error Ea = |54762 − 54760| = 2

Relative error Er = 2/54762 = 3.652 × 10^−5
Relative percentage error Ep = Er × 100 = 3.652 × 10^−5 × 100 = 3.652 × 10^−3 %

Exercise: Round off the following numbers to four


significant digits and then calculate absolute error,
relative error and percentage error.
i)437.261 ii)19.36235

1.5 Errors in addition, subtraction, product and division:

Let x, y be two numbers whose approximate values are x*, y*, and let the corresponding errors (absolute errors with sign) be ex and ey. So
x = x* + ex and y = y* + ey,
x + y = (x* + y*) + (ex + ey) and x − y = (x* − y*) + (ex − ey),
(x + y) − (x* + y*) = (ex + ey) and (x − y) − (x* − y*) = (ex − ey).


So the maximum absolute error for addition and subtraction is the sum of the absolute errors of x and y (ex, ey may have opposite signs).
x·y = (x* + ex)(y* + ey) = x*y* + x*ey + y*ex + exey
relative error of (x·y) = (x·y − x*y*)/(x*y*) = ex/x* + ey/y*, neglecting small quantities of 2nd order. So the relative error for multiplication is the sum of the relative errors. Similarly for division,
x/y = (x* + ex)/(y* + ey) = x*(1 + ex/x*) / [y*(1 + ey/y*)] = (x*/y*)(1 + ex/x*)(1 + ey/y*)^−1
= (x*/y*)(1 + ex/x*)(1 − ey/y*) = (x*/y*)(1 + ex/x* − ey/y* − (ex/x*)(ey/y*))
∴ |x/y − x*/y*| ÷ |x*/y*| ≤ |ex/x*| + |ey/y*|,


neglecting small quantities of 2nd order. So the maximum relative error for division is also the sum of the relative errors.
This can also be done by taking differentials. Let
f(x1, x2, x3, ..., xn) = x1^α1 · x2^α2 · x3^α3 · ⋯ · xn^αn (the exponents may be negative); taking the logarithm and then the differential,
∆f/f = α1∆x1/x1 + α2∆x2/x2 + α3∆x3/x3 + ⋯ + αn∆xn/xn,
|∆f/f| ≤ |α1||∆x1/x1| + |α2||∆x2/x2| + |α3||∆x3/x3| + ⋯ + |αn||∆xn/xn|,
i.e. the maximum relative error is the sum of the exponents times the relative errors.
For a general function z = f(x1, x2, x3, ..., xn), taking the differential (here the differential represents the error with sign) we get
dz = (∂f/∂x1)dx1 + (∂f/∂x2)dx2 + ⋯ + (∂f/∂xn)dxn;
this can be used to find the error of any expression, including all the above.
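A rough numerical sketch of the differential rule dz = Σ (∂f/∂xi) dxi, estimating the partial derivatives by central differences (the helper name is mine):

```python
# Worst-case error bound |dz| <= Σ |∂f/∂xi| |dxi| for z = f(x1, ..., xn).
def propagated_error(f, xs, dxs, h=1e-6):
    total = 0.0
    for i in range(len(xs)):
        left, right = list(xs), list(xs)
        left[i] -= h
        right[i] += h
        dfdxi = (f(right) - f(left)) / (2 * h)   # central difference
        total += abs(dfdxi) * abs(dxs[i])
    return total

# z = x*y with x = 2 ± 0.01, y = 3 ± 0.02: |dz| <= y|dx| + x|dy| = 0.07
print(propagated_error(lambda v: v[0] * v[1], [2.0, 3.0], [0.01, 0.02]))
```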


Example: If π = 22/7 is approximated as 3.14, find the absolute error, relative error and relative percentage error.

Solution: Absolute error Ea = |22/7 − 3.14| = |(22 − 21.98)/7| = |0.02/7| = 0.002857
Relative error Er = |0.002857/(22/7)| = 0.0009
Relative percentage error Ep = Er × 100 = 0.0009 × 100 = 0.09 %
Example: Compute the percentage error in the time period T = 2π√(l/g) for l = 1 m if the error in the measurement of l is 0.01.
Solution: Given T = 2π√(l/g).
Taking log of both sides we have
log T = log 2π + (1/2) log l − (1/2) log g
∴ dT/T = (1/2) dl/l
(dT/T) × 100 = (1/2)(dl/l) × 100 = (0.01/(2 × 1)) × 100 = 0.5%.
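The result can be checked numerically. The value g = 9.8 below is an assumed constant; it cancels out of the relative change:

```python
import math

# T = 2*pi*sqrt(l/g); the log-differentiation result says dT/T ≈ dl/(2l).
def period(l, g=9.8):
    return 2 * math.pi * math.sqrt(l / g)

l, dl = 1.0, 0.01
rel_change = (period(l + dl) - period(l)) / period(l)
print(rel_change * 100)   # ≈ 0.499 %, matching the predicted 0.5 %
```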

1.6 Truncation errors: These errors occur due to the finite representation of an inherently infinite process. For example, the use of a finite number of terms in the infinite series to compute the value of cos x, sin x, e^x etc.

The Taylor series expansion of sin x is
sin x = x − x³/3! + x⁵/5! − ⋯
This is an infinite series expansion. If only the first five terms are taken to compute the value of sin x for a given x, then we obtain an approximate result. Here, the error occurs due to the truncation of the series. Suppose we retain the first n terms; the truncation error is given by
E_trunc ≤ |x|^(2n+1)/(2n + 1)!
It may be noted that the truncation error is independent of the computational machine.
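A short sketch checking the bound for x = 1 and n = 5 terms:

```python
import math

def sin_truncated(x, n):
    """First n terms of sin x = x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n))

x, n = 1.0, 5
err = abs(sin_truncated(x, n) - math.sin(x))
bound = abs(x)**(2*n + 1) / math.factorial(2*n + 1)
print(err <= bound)   # True: the actual error respects the bound
```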


Example: Find the number of terms of the exponential series such that their sum gives the value of e^x correct to six decimal places at x = 1.
Solution: We know
e^x = 1 + x + x²/2! + x³/3! + ⋯ + x^(n−1)/(n−1)! + Rn(x)
where Rn(x) = (x^n/n!) e^θ, 0 < θ < x.
The maximum absolute error (at θ = x) is (x^n/n!) e^x, and the maximum relative error is x^n/n!.
Hence the maximum relative error at x = 1 is 1/n!.
For six decimal accuracy at x = 1, we have
1/n! < ½ × 10^−6,
or n! > 2 × 10^6,
which gives n = 10.
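The condition n! > 2 × 10^6 can be checked directly:

```python
import math

# Find the smallest n with n! > 2e6, i.e. 1/n! < 0.5e-6,
# as required for six-decimal accuracy of the series at x = 1.
n = 1
while math.factorial(n) <= 2 * 10**6:
    n += 1
print(n)   # 10  (9! = 362880 is too small, 10! = 3628800 suffices)
```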
Here we have two important theorems on error.

INTERPOLATION
2.1 Introduction


The method of obtaining the value of a function for any intermediate value of the argument, when the values of the function are known for a set of values of the argument, is known as interpolation. Mathematically, if the values of the function y = f(x) at x = x0, x1, x2, ..., xn, where a = x0 < x1 < x2 < ⋯ < x_{n−1} < xn = b, are known, then finding the value of the function at x where a < x < b is known as interpolation. If x lies outside the above range, the corresponding process is called extrapolation.

2.2 Polynomial Interpolation:

Let f(x) ∈ C^∞(−∞, ∞). The principle of the interpolating polynomial is “the selection of a function φ(x) from a given class of functions such that the graph y = φ(x) passes through a finite set of given points”. When the function φ(x) is a polynomial, the process of representing f(x) by φ(x) is called polynomial interpolation. There are other types of interpolation where φ(x) is not a polynomial but, say, a trigonometric function. But here we will discuss only polynomial interpolation. Polynomial interpolation is


based on the following theorem, known as the Weierstrass theorem:
Theorem 1 (Weierstrass theorem): Let a function f(x) ∈ C[a, b] and let ε > 0 be any preassigned small number. Then ∃ a polynomial φ(x) for which |f(x) − φ(x)| < ε ∀ x ∈ [a, b], i.e. any continuous function can be uniformly approximated on a finite interval by a polynomial of sufficiently high degree, within any prescribed tolerance.
Theorem 2: Given any real valued function f(x) and (n + 1) distinct points x0, x1, x2, x3, ..., xn, there exists a unique polynomial of degree at most n which interpolates f(x) at the points x0, x1, x2, x3, ..., xn.
The above theorem has a few parts:
(1) Existence of the polynomial.
(2) Degree of the polynomial.
(3) Uniqueness of the polynomial.
(4) Error of this process.
Existence is given by the Weierstrass theorem, but here we are adding one more criterion, i.e.
φ(xi) = yi = f(xi) (∀ i = 0, 1, 2, ..., n),

where f(x) is the unknown function which we want to approximate. Next, since here we have (n + 1) conditions, we can find a polynomial of degree at most n, as it has (n + 1) unknown coefficients. Henceforward we call this polynomial Ln(x).
2.3 Uniqueness:
For uniqueness, let there be two such polynomials Ln(x) and L(x). From the given conditions, Ln(xi) = L(xi) = yi (∀ i = 0, 1, 2, ..., n).
Now take P(x) = Ln(x) − L(x). Then P(xi) = 0 (∀ i = 0, 1, 2, ..., n). So P(x) is a polynomial, as it is the difference of two polynomials, and of degree at most n, as Ln(x) and L(x) are. But P(x) = 0 at (n + 1) points, so it must be an identity; hence P(x) = 0 ∀x, hence Ln(x) = L(x) ∀x. Hence Ln(x) is unique.
Also, the polynomial Ln(x) of degree ≤ n is given by
Ln(x) = a0 + a1x + a2x² + ⋯ + anx^n
and Ln(xi) = f(xi) = yi (∀ i = 0, 1, 2, ..., n),
i.e. a0 + a1xi + a2xi² + ⋯ + anxi^n = f(xi) (∀ i = 0, 1, 2, ..., n).
Now this is a system of (n + 1) linear equations in the (n + 1) unknowns a0, a1, a2, ..., an. The coefficient determinant
| 1  x0  ⋯  x0^n |
| 1  x1  ⋯  x1^n |
| ⋯  ⋯  ⋯  ⋯  |
| 1  xn  ⋯  xn^n |  = ∏_{i>j} (xi − xj) ≠ 0,
a Vandermonde determinant, since the points x0, x1, x2, ..., xn are distinct. So the values of a0, a1, a2, ..., an can be uniquely determined, Ln(x) exists, and it is called the interpolating polynomial. The given points x0, x1, x2, ..., xn are called nodes, with x0 < x1 < x2 < ⋯ < xn.
What is the problem of polynomial interpolation?
Let f(x) be an unknown function known at (n + 1) points x0, x1, x2, ..., xn, with yi = f(xi) (∀ i = 0, 1, 2, ..., n). By the problem of polynomial interpolation we mean: find a polynomial Ln(x) of degree at most n for which Ln(xi) = f(xi) = yi (∀ i = 0, 1, 2, ..., n).
2.4 Error of interpolating polynomial:


Now let f(t) ∈ C^{n+1}[a, b] be an unknown function known at (n + 1) points xi, for which yi = f(xi) (∀ i = 0(1)n). Here a = min_i xi and b = max_i xi. Let
F(t) = f(t) − Ln(t) − kω(t), where ω(t) = (t − x0)(t − x1)(t − x2)⋯(t − xn) and k is a constant.
Now F(t) = 0 ∀ t = xi, i = 0(1)n. Moreover, k is so chosen that F(t) = 0 at t = x ∈ (a, b), the interpolating point. So F(t) = 0 at (n + 2) points, and F is continuous and derivable in [a, b]. So by Rolle's theorem F′(t) vanishes at (n + 1) points in a closed interval within (a, b). Using Rolle's theorem again on F′(t) (note F(t) ∈ C^{n+1}[a, b], as f(t), Ln(t) and ω(t) all are), we get that F″(t) vanishes at n points in a closed interval within (a, b). Repeating this use of Rolle's theorem (n + 1) times we get ξ ∈ (a, b) s.t. F^{(n+1)}(ξ) = 0. Now since Ln(t) is a polynomial of degree n, its (n + 1)-th derivative is 0. Also, since ω(t) is a monic polynomial of degree (n + 1), its (n + 1)-th derivative is (n + 1)!. So
F^{(n+1)}(ξ) = 0 = f^{(n+1)}(ξ) − k(n + 1)!, which gives


k = f^{(n+1)}(ξ)/(n + 1)!. Hence at the interpolating point t = x we get
0 = F(x) = f(x) − Ln(x) − kω(x), which implies
f(x) − Ln(x) = [f^{(n+1)}(ξ)/(n + 1)!] ω(x) = R_{n+1}(x), ξ ∈ (a, b)
= the remainder after (n + 1) terms, the error at x, or simply the error at the interpolating point x.
Now, the interesting fact is that we can find the error term without knowing the interpolating polynomial. The most important question is how to find the interpolating polynomial.
2.5 Lagrange interpolating polynomial:
Let f(x) be an unknown function known at (n + 1) points x0, x1, x2, ..., xn, with yi = f(xi) (∀ i = 0, 1, 2, ..., n).
Let Ln(x) = Σ_{i=0}^n ωi(x) yi. Now Ln(xi) = yi implies
ωi(x) = 0 for x = x0, x1, x2, ..., x_{i−1}, x_{i+1}, ..., xn and = 1 for x = xi,
or ωi(xk) = 0 for i ≠ k, and = 1 for i = k,
i.e. ωi(xk) = δ_{ki}, known as the Kronecker delta.


So ωi(x) takes the form
ωi(x) = ai (x − x0)(x − x1)(x − x2)⋯(x − x_{i−1})(x − x_{i+1})⋯(x − xn), and ωi(xi) = 1 gives
ai (xi − x0)(xi − x1)(xi − x2)⋯(xi − x_{i−1})(xi − x_{i+1})⋯(xi − xn) = 1, so
ai = 1 / [(xi − x0)(xi − x1)⋯(xi − x_{i−1})(xi − x_{i+1})⋯(xi − xn)]
∴ ωi(x) = (x − x0)(x − x1)⋯(x − x_{i−1})(x − x_{i+1})⋯(x − xn) / [(xi − x0)(xi − x1)⋯(xi − x_{i−1})(xi − x_{i+1})⋯(xi − xn)]
= ∏_{k=0, k≠i}^n (x − xk)/(xi − xk) = ω(x) / [(x − xi) ω′(xi)].

So the interpolation formula becomes
Ln(x) = Σ_{i=0}^n [(x − x0)(x − x1)⋯(x − x_{i−1})(x − x_{i+1})⋯(x − xn)] / [(xi − x0)(xi − x1)⋯(xi − x_{i−1})(xi − x_{i+1})⋯(xi − xn)] · yi,
i.e. Ln(x) = Σ_{i=0}^n ∏_{k=0, k≠i}^n (x − xk)/(xi − xk) · yi = Σ_{i=0}^n ω(x)/[(x − xi) ω′(xi)] · yi.

This is known as the Lagrange polynomial interpolation formula. The functions ωi(x) are known as Lagrangians. Some of their properties:
(1) ωi(x) are independent of the function values yi of f(x).
(2) ωi(x) is invariant under a linear transformation.
(3) Σ_{i=0}^n ωi(x) = 1.
Proof of (1) is obvious from the expression.
For (2), let x = a + bt; ∴ xi = a + bti, xk = a + btk,
x − xk = b(t − tk), xi − xk = b(ti − tk). Putting these in the expression,
ωi(x) = [b^n (t − t0)(t − t1)⋯(t − t_{i−1})(t − t_{i+1})⋯(t − tn)] / [b^n (ti − t0)(ti − t1)⋯(ti − t_{i−1})(ti − t_{i+1})⋯(ti − tn)]
= (t − t0)(t − t1)⋯(t − t_{i−1})(t − t_{i+1})⋯(t − tn) / [(ti − t0)(ti − t1)⋯(ti − t_{i−1})(ti − t_{i+1})⋯(ti − tn)] = ωi(t).
So it is invariant under a linear transformation.
For (3), using the error term we have
f(x) = Σ_{i=0}^n ωi(x) yi + [f^{(n+1)}(ξ)/(n + 1)!] ω(x); remember the ωi(x) are independent of the function values yi of f(x). So if we take f(x) = 1, then yi = 1 and f^{(n+1)} ≡ 0, and we get Σ_{i=0}^n ωi(x) = 1.
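The formula Ln(x) = Σ ωi(x) yi translates almost line by line into code (a sketch; the function name is mine):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        wi = 1.0
        for k, xk in enumerate(xs):
            if k != i:
                wi *= (x - xk) / (xi - xk)   # basis w_i(x), w_i(x_k) = δ_ki
        total += wi * yi
    return total

# Three points on f(x) = x^2 reproduce it exactly (degree <= n guarantee):
print(lagrange([0, 1, 3], [0, 1, 9], 2.0))   # ≈ 4.0
```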

2.6 Finite difference operators:

Shift operator E: Let h be a non-zero constant, the step length. The shift operator E, for any arbitrary function f(x) defined in (−∞, ∞), is given by Ef(x) = f(x + h).
Now E²f(x) = E·Ef(x) = Ef(x + h) = f(x + 2h), and in general E^n f(x) = f(x + nh).
Forward difference operator ∆: It is defined by ∆f(x) = f(x + h) − f(x), where h is the step length. ∆ is a linear operator and ∆ = E − 1, E = ∆ + 1.
Putting x = x0 we get
∆y0 = f(x0 + h) − f(x0) = y1 − y0.
The second order difference is given by
∆²y0 = ∆(∆y0) = ∆(y1 − y0) = ∆y1 − ∆y0 = y2 − y1 − (y1 − y0) = y2 − 2y1 + y0.
Similarly, the 3rd order difference is represented by
∆³y0 = y3 − 3y2 + 3y1 − y0,
and the k-th order difference is given by


∆^k y0 = Σ_{i=0}^k (−1)^i C(k, i) y_{k−i}. Note that the coefficients are binomial. The question is: why?
Exercise: i) Prove that first order difference of a
constant is 0.
ii) The first order difference of a polynomial of
degree 𝑛 is a polynomial of degree 𝑛 − 1.
Backward difference operator ∇: The first order backward difference operator is defined by
∇f(x) = f(x) − f(x − h).
The central difference operator δ: The central difference operator δ is defined by
δf(x) = f(x + h/2) − f(x − h/2) = (E^(1/2) − E^(−1/2)) f(x)
δf(x + h/2) = f(x + h) − f(x) = ∆f(x)
δf(x) = f(x + h/2) − f(x − h/2) = ∆f(x − h/2)
Thus we have the result δ ≡ E^(1/2) − E^(−1/2).

Example: i) Show that 𝐸 −1 ≡ 1 − ∇.



Proof: We know that
∇f(x) = f(x) − f(x − h) = f(x) − E^−1 f(x) = (1 − E^−1) f(x)
⇒ E^−1 ≡ 1 − ∇.
ii) Show that ∆ − ∇ ≡ δ².
Proof: We know that
δf(x) = f(x + h/2) − f(x − h/2) = (E^(1/2) − E^(−1/2)) f(x)
⇒ δ ≡ E^(1/2) − E^(−1/2)
⇒ δ² ≡ E − 2 + E^−1 = (1 + ∆) − 2 + (1 − ∇) = ∆ − ∇.
(iii) ∆^k y0 = (E − 1)^k y0 = Σ_{i=0}^k (−1)^i C(k, i) E^{k−i} y0 = Σ_{i=0}^k (−1)^i C(k, i) y_{k−i}.
𝑖
(iv) Simplify ∆[(x − x0)(x − x1)(x − x2)⋯(x − x_{n−1})], the nodes being equispaced with xk = x0 + kh:
∆[⋯] = (x + h − x0)(x + h − x1)(x + h − x2)⋯(x + h − x_{n−1}) − (x − x0)(x − x1)(x − x2)⋯(x − x_{n−1})
= (x + h − x0)(x − x0)(x − x1)⋯(x − x_{n−2}) − (x − x0)(x − x1)(x − x2)⋯(x − x_{n−1})   [since x + h − xk = x − x_{k−1}]
= (x − x0)(x − x1)(x − x2)⋯(x − x_{n−2})(x + h − x0 − x + x_{n−1})
= nh (x − x0)(x − x1)(x − x2)⋯(x − x_{n−2}).
So: ∆²[(x − x0)(x − x1)(x − x2)⋯(x − x_{n−1})] = n(n − 1)h² (x − x0)(x − x1)(x − x2)⋯(x − x_{n−3}),
and ∆^k[(x − x0)(x − x1)(x − x2)⋯(x − x_{n−1})]
= n(n − 1)⋯(n − k + 1) h^k (x − x0)(x − x1)(x − x2)⋯(x − x_{n−k−1}) for k < n
= n! h^n for k = n
= 0 for k > n.

The forward difference table looks like this:

x        y        ∆y           ∆²y          ∆³y     ⋯    ∆ⁿy
x0       y0
                  ∆y0
x1       y1                    ∆²y0
                  ∆y1                       ∆³y0
x2       y2                    ∆²y1
⋯        ⋯        ⋯            ⋯            ⋯            ∆ⁿy0
                               ∆²y_{n−2}
                  ∆y_{n−1}
xn       yn
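Building the table column by column is straightforward (a sketch; the function name is mine):

```python
# Column k of the result holds ∆^k y_0, ∆^k y_1, ...;
# each successive column is one entry shorter.
def forward_difference_table(ys):
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# y = x^2 at x = 0, 1, 2, 3: the second differences are constant (= 2).
for col in forward_difference_table([0, 1, 4, 9]):
    print(col)
```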

2.7 Newton’s Forward Interpolation Formula

Given a set of (n + 1) values (x0, y0), (x1, y1), ..., (xn, yn) of x and y, it is required to find Ln(x), a polynomial of degree n, so that y and Ln(x) coincide at the tabulated points, i.e. Ln(xi) = yi (∀ i = 0, 1, 2, ..., n). Let the values of x be equidistant, so that xi = x0 + ih (h > 0 is the step length, i = 0, 1, 2, ..., n). Since Ln(x) is a polynomial of degree n, it can be written in the form
Ln(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + ⋯ + an(x − x0)(x − x1)⋯(x − x_{n−1}).
We now determine the coefficients a0, a1, a2, ..., an using the conditions Ln(xi) = yi (i = 0, 1, 2, ..., n).
We have Ln(x0) = y0 = a0;
Ln(x1) = y1 = a0 + a1(x1 − x0); a1 = (y1 − y0)/(x1 − x0) = ∆y0/h;
Ln(x2) = y2 = a0 + a1(x2 − x0) + a2(x2 − x0)(x2 − x1);
a2 = (y2 − 2y1 + y0)/(2h²) = ∆²y0/(2! h²).
By continuing this method of calculating the coefficients we shall find that
a3 = ∆³y0/(3! h³), a4 = ∆⁴y0/(4! h⁴), ......, an = ∆ⁿy0/(n! hⁿ).
Substituting these values of a0, a1, a2, ..., an in the equation we get
Ln(x) = y0 + (x − x0)∆y0/h + (x − x0)(x − x1)∆²y0/(2! h²) + ⋯ + (x − x0)(x − x1)⋯(x − x_{n−1})∆ⁿy0/(n! hⁿ).
Setting u = (x − x0)/h, the equation becomes
Ln(x) = y0 + u∆y0 + [u(u − 1)/2!]∆²y0 + [u(u − 1)(u − 2)/3!]∆³y0 + ⋯ + [u(u − 1)(u − 2)⋯(u − n + 1)/n!]∆ⁿy0.

This equation is Newton’s forward interpolation formula.
The error term is given by
R_{n+1}(x) = [u(u − 1)(u − 2)⋯(u − n)/(n + 1)!] h^{n+1} f^{(n+1)}(ξ),
where min{x, x0, ..., xn} < ξ < max{x, x0, ..., xn}.
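The formula can be sketched as follows, reusing the top diagonal ∆^k y0 of the forward difference table (the function name is mine):

```python
# Ln(x) = y0 + u∆y0 + u(u-1)/2! ∆²y0 + ... with u = (x - x0)/h.
def newton_forward(x0, h, ys, x):
    diffs = list(ys)
    top = [diffs[0]]                    # ∆^k y0 for k = 0, 1, ...
    while len(diffs) > 1:
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        top.append(diffs[0])
    u = (x - x0) / h
    result, coeff = 0.0, 1.0
    for k, dk in enumerate(top):
        result += coeff * dk
        coeff *= (u - k) / (k + 1)      # u(u-1)...(u-k) / (k+1)!
    return result

# y = x^2 tabulated at x = 0, 1, 2, 3; interpolating at x = 1.5 gives 2.25.
print(newton_forward(0.0, 1.0, [0, 1, 4, 9], 1.5))
```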
2.8 Newton’s Backward Interpolation Formula:
Given a set of (𝑛 + 1) values


(x0, y0), (x1, y1), ..., (xn, yn) of x and y, it is required to find Ln(x), a polynomial of degree n, so that y and Ln(x) coincide at the tabulated points, i.e. Ln(xi) = yi (∀ i = 0, 1, 2, ..., n). Let the values of x be equidistant, so that xi = x0 + ih (h > 0 is the step length, i = 0, 1, 2, ..., n). Since Ln(x) is a polynomial of degree n, it can be written in the form

Ln(x) = a0 + a1(x − xn) + a2(x − xn)(x − x_{n−1}) + ⋯ + an(x − xn)(x − x_{n−1})⋯(x − x1).
We now determine the coefficients a0, a1, a2, ..., an using the conditions Ln(xi) = yi (i = 0, 1, 2, ..., n).
We have Ln(xn) = yn = a0;
Ln(x_{n−1}) = y_{n−1} = a0 + a1(x_{n−1} − xn);
a1 = (yn − y_{n−1})/(xn − x_{n−1}) = ∇yn/h = ∆y_{n−1}/h;
Ln(x_{n−2}) = y_{n−2} = a0 + a1(x_{n−2} − xn) + a2(x_{n−2} − xn)(x_{n−2} − x_{n−1});
a2 = (yn − 2y_{n−1} + y_{n−2})/(2h²) = ∇²yn/(2! h²) = ∆²y_{n−2}/(2! h²).

By continuing this method of calculating the coefficients we can find that
a3 = ∇³yn/(3! h³) = ∆³y_{n−3}/(3! h³), a4 = ∇⁴yn/(4! h⁴) = ∆⁴y_{n−4}/(4! h⁴), ......,
an = ∇ⁿyn/(n! hⁿ) = ∆ⁿy0/(n! hⁿ).
Substituting these values of a0, a1, a2, ..., an we get
Ln(x) = yn + (x − xn)∇yn/h + (x − xn)(x − x_{n−1})∇²yn/(2! h²) + ⋯ + (x − xn)(x − x_{n−1})⋯(x − x1)∇ⁿyn/(n! hⁿ),
or
Ln(x) = yn + (x − xn)∆y_{n−1}/h + (x − xn)(x − x_{n−1})∆²y_{n−2}/(2! h²) + ⋯ + (x − xn)(x − x_{n−1})⋯(x − x1)∆ⁿy0/(n! hⁿ).

Setting v = (x − xn)/h, we get


Ln(x) = yn + v∇yn + [v(v + 1)/2!]∇²yn + [v(v + 1)(v + 2)/3!]∇³yn + ⋯ + [v(v + 1)(v + 2)⋯(v + n − 1)/n!]∇ⁿyn; or
Ln(x) = yn + v∆y_{n−1} + [v(v + 1)/2!]∆²y_{n−2} + [v(v + 1)(v + 2)/3!]∆³y_{n−3} + ⋯ + [v(v + 1)(v + 2)⋯(v + n − 1)/n!]∆ⁿy0.
This is Newton’s backward interpolation formula.
The error term is given by
R_{n+1}(x) = [v(v + 1)(v + 2)⋯(v + n)/(n + 1)!] h^{n+1} f^{(n+1)}(ξ),
where min{x, x0, ..., xn} < ξ < max{x, x0, ..., xn}.
Note: Newton’s forward interpolation formula is used to interpolate values of y near the beginning of a set of tabulated values, and Newton’s backward interpolation formula is used to interpolate values of y near the end of a set of tabulated values.
Again the question is: why is it so?
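A matching sketch for the backward formula, using the bottom diagonal ∇^k yn of the difference table (function name mine):

```python
# Ln(x) = yn + v∇yn + v(v+1)/2! ∇²yn + ... with v = (x - xn)/h.
def newton_backward(x0, h, ys, x):
    diffs = list(ys)
    bottom = [diffs[-1]]                # ∇^k y_n for k = 0, 1, ...
    while len(diffs) > 1:
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        bottom.append(diffs[-1])
    xn = x0 + (len(ys) - 1) * h
    v = (x - xn) / h
    result, coeff = 0.0, 1.0
    for k, dk in enumerate(bottom):
        result += coeff * dk
        coeff *= (v + k) / (k + 1)      # v(v+1)...(v+k) / (k+1)!
    return result

# Same data, y = x^2 at x = 0..3; interpolating near the end, x = 2.5.
print(newton_backward(0.0, 1.0, [0, 1, 4, 9], 2.5))   # ≈ 6.25
```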
2.9 Effect of accidental error:


Here we see that if δ is a quantity introduced by accident (say y3 is recorded as y3 + δ) then, as we progress in the table, it produces large errors of alternating sign.

x        y           ∆y           ∆²y            ∆³y
x0       y0
                     ∆y0
x1       y1                       ∆²y0
                     ∆y1                         ∆³y0 + δ
x2       y2                       ∆²y1 + δ
                     ∆y2 + δ                     ∆³y1 − 3δ
x3       y3 + δ                   ∆²y2 − 2δ
                     ∆y3 − δ                     ∆³y2 + 3δ
x4       y4                       ∆²y3 + δ
                     ∆y4
⋯        ⋯           ⋯            ⋯              ⋯
                     ∆y_{n−2}                    ∆³y_{n−3}
x_{n−1}  y_{n−1}                  ∆²y_{n−2}
                     ∆y_{n−1}
xn       yn
So for a difference table where this type of situation occurs, i.e. entries of alternating sign appear, meaning the errors are larger than the actual entries, we should not proceed: we truncate the table and use the truncated formula.
2.10 Effect of round off error and noise level:

x        y                 ∆y                            ∆²y                        ∆³y
x0       y0 + ε0
                           ∆y0 + ε1 − ε0
x1       y1 + ε1                                         ∆²y0 + ε2 − 2ε1 + ε0
                           ∆y1 + ε2 − ε1                                            ∆³y0 + ε3 − 3ε2 + 3ε1 − ε0
x2       y2 + ε2                                         ∆²y1 + ε3 − 2ε2 + ε1
                           ∆y2 + ε3 − ε2                                            ∆³y1 + ε4 − 3ε3 + 3ε2 − ε1
x3       y3 + ε3                                         ∆²y2 + ε4 − 2ε3 + ε2
                           ∆y3 + ε4 − ε3                                            ∆³y2 + ⋯
x4       y4 + ε4                                         ∆²y3 + ⋯
                           ∆y4 + ε5 − ε4
⋯        ⋯                 ⋯                             ⋯                          ⋯
                           ∆y_{n−2} + ε_{n−1} − ε_{n−2}                             ∆³y_{n−3} + ⋯
x_{n−1}  y_{n−1} + ε_{n−1}                               ∆²y_{n−2} + ⋯
                           ∆y_{n−1} + ε_n − ε_{n−1}
xn       yn + ε_n

Here we can see again that, as we proceed in the difference table, the error component increases. Since here the εi's are less than ½ × 10^−m, where m is the number of places after the decimal (i.e. half a unit at the last significant place), the error in ∆y is up to one unit at the last significant place, as the εi's carry signs (the 1st one may be +ve and the 2nd one −ve). Similarly the error in ∆²y is up to 2 units at the last significant place, in ∆³y 4 units, in ∆⁴y 8 units, and so on. Clearly at ∆⁵y it is 16, which means not only the last significant place but the one before it is also erroneous. Also, the number of significant figures becomes smaller and smaller as we proceed in the table. So we should not go beyond a stage where the contribution of ∆^k y is less than the corresponding error. This stage is known as the noise level.
∆^k(yi + εi) = ∆^k yi + Σ_{j=0}^k (−1)^j C(k, j) ε_{k+i−j}
|∆^k εi| = |Σ_{j=0}^k (−1)^j C(k, j) ε_{k+i−j}| ≤ Σ_{j=0}^k C(k, j) × ½ × 10^−m = 2^k × ½ × 10^−m = 2^{k−1} × 10^−m.
So the error bound is 2^{k−1} units at m DP. Now if ∆^k yi ≤ 2^{k−1} × 10^−m, the noise level has arisen at the k-th order difference.
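The worst case 2^{k−1} × 10^−m is attained by alternating noise εi = ±½ × 10^−m, which is easy to verify:

```python
# Worst-case alternating round-off noise doubles in magnitude
# with every order of differencing: |∆^k ε| = 2^(k-1) * 10^-m.
m = 3
eps = [((-1) ** i) * 0.5 * 10**-m for i in range(9)]   # alternating ±0.0005

col, k = eps, 0
while len(col) > 1:
    col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    k += 1
    print(k, max(abs(v) for v in col))   # grows like 2^(k-1) * 1e-3
```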
From 2.9 and 2.10 it is clear that we should not take the complete formula but a truncated version of it. So, depending on the position of the interpolating point, different formulas will give minimum error. This is the reason why we use different formulas in spite of the uniqueness of the interpolating polynomial.
2.11 Uses of interpolating polynomials:
1) If the nodes are not equispaced then use the Lagrange or Newton divided difference interpolation formula.
2) If the nodes are equispaced then use the Newton forward difference formula when the interpolating point is near x0. Choose x0 in such a way that |u| < 0.5, where u = (x − x0)/h.
3) If the nodes are equispaced then use the Newton backward difference formula when the interpolating point is near xn. Choose xn in such a way that |v| < 0.5, where v = (x − xn)/h.
4) For the other zones we use the Bessel or Stirling interpolation formula in the equispaced case.
2.12 Divided difference:
Let y = f(x) be a real valued function defined on the interval [a, b] and let yi = f(xi) (∀ i = 0, 1, 2, ..., n). We define divided differences for the nodes xi as follows:
f[x0, x1] = (f(x0) − f(x1))/(x0 − x1) = (y0 − y1)/(x0 − x1) = (y1 − y0)/(x1 − x0) = f[x1, x0]; similarly
f[x1, x2] = (f(x1) − f(x2))/(x1 − x2) = (y1 − y2)/(x1 − x2) = (y2 − y1)/(x2 − x1) = f[x2, x1], and so on. Also
f[x0, x1, x2] = (f[x0, x1] − f[x1, x2])/(x0 − x2) = f[x2, x1, x0],
and in general
f[x0, x1, x2, x3, ..., xn] = (f[x0, x1, ..., x_{n−1}] − f[x1, x2, ..., xn])/(x0 − xn).
x        y        1st order            2nd order               3rd order                      4th order
x0       y0
                  f[x0,x1]
x1       y1                            f[x0,x1,x2]
                  f[x1,x2]                                     f[x0,x1,x2,x3]
x2       y2                            f[x1,x2,x3]                                            f[x0,x1,x2,x3,x4]
                  f[x2,x3]                                     f[x1,x2,x3,x4]
x3       y3                            f[x2,x3,x4]                                            f[x1,x2,x3,x4,x5]
                  f[x3,x4]                                     f[x2,x3,x4,x5]
x4       y4                            f[x3,x4,x5]
⋯        ⋯        ⋯                    ⋯                       ⋯                              f[x_{n−4},x_{n−3},x_{n−2},x_{n−1},xn]
                  f[x_{n−2},x_{n−1}]                           f[x_{n−3},x_{n−2},x_{n−1},xn]
x_{n−1}  y_{n−1}                       f[x_{n−2},x_{n−1},xn]
                  f[x_{n−1},xn]
xn       yn
Divided difference table
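The recursive definition gives the table column by column (a sketch; the function name is mine):

```python
# Column k holds f[x_i, ..., x_{i+k}] for i = 0, 1, ..., built from
# f[x_i,...,x_{i+k}] = (f[x_{i+1},...,x_{i+k}] - f[x_i,...,x_{i+k-1}]) / (x_{i+k} - x_i).
def divided_differences(xs, ys):
    cols = [list(ys)]
    for k in range(1, len(xs)):
        prev = cols[-1]
        cols.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                     for i in range(len(prev) - 1)])
    return cols

# f(x) = x^2 at unequally spaced nodes: all 2nd order differences are 1
# (property 5: the n-th divided difference of x^n is 1), higher ones are 0.
xs = [0.0, 1.0, 3.0, 6.0]
print(divided_differences(xs, [x * x for x in xs]))
```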
2.13 Properties of divided differences (try):
1) Divided differences are symmetric w.r.t. their arguments.
Prove f[x0, x1, x2, x3, ..., xn] = Σ_{i=0}^n f(xi) / ∏_{j=0, j≠i}^n (xi − xj) by induction and conclude.
2) The divided difference of a constant is zero.
3) The divided difference of k·f(x) is k times that of f(x).
4) The divided difference of f(x) ± g(x) is the sum or difference of the corresponding divided differences of f(x) and g(x).
5) If f(x) is x^n then for m ≤ n
f[x0, x1, x2, ..., xm] = Σ_{k0+k1+k2+⋯+km = n−m} x0^{k0} · x1^{k1} · x2^{k2} ⋯ xm^{km}.
Cor: for m = n, any divided difference of order n of x^n is 1, and that of order (n + 1) is zero.
6) If the nodes are equispaced with spacing h then a divided difference reduces to a finite difference, given by
f[x0, x1, x2, x3, ..., xn] = ∆^n f(x0)/(n! h^n).

2.14 Newton divided difference interpolation:

Given a set of (n + 1) values (x0, y0), (x1, y1), ..., (xn, yn) of x and y, it is required to find Ln(x), a polynomial of degree n, so that y and Ln(x) coincide at the tabulated points, i.e. Ln(xi) = yi (∀ i = 0, 1, 2, ..., n). Also
f(x) = Ln(x) + R_{n+1}(x), where R_{n+1}(x) is the remainder term of interpolation. Now
f[x, x0] = (f(x) − f(x0))/(x − x0)
f[x, x0, x1] = (f[x, x0] − f[x0, x1])/(x − x1)
f[x, x0, x1, x2] = (f[x, x0, x1] − f[x0, x1, x2])/(x − x2)
…………
f[x, x0, x1, x2, x3, ..., xn] = (f[x, x0, x1, ..., x_{n−1}] − f[x0, x1, x2, ..., xn])/(x − xn).

Multiplying the above (n + 1) equations by (x − x0), (x − x0)(x − x1), (x − x0)(x − x1)(x − x2), ………, (x − x0)(x − x1)(x − x2)⋯(x − xn) respectively and adding, we get
f(x) = f(x0) + (x − x0)f[x0, x1] + (x − x0)(x − x1)f[x0, x1, x2] + …
… + (x − x0)(x − x1)(x − x2)⋯(x − x_{n−1}) f[x0, x1, x2, x3, ..., xn]
+ (x − x0)(x − x1)(x − x2)⋯(x − xn) f[x, x0, x1, x2, x3, ..., xn]
= Ln(x) + R_{n+1}(x); so
Ln(x) = f(x0) + (x − x0)f[x0, x1] + (x − x0)(x − x1)f[x0, x1, x2] + … + (x − x0)(x − x1)(x − x2)⋯(x − x_{n−1}) f[x0, x1, x2, x3, ..., xn],
and the remainder or error term is
R_{n+1}(x) = (x − x0)(x − x1)(x − x2)⋯(x − xn) f[x, x0, x1, x2, x3, ..., xn].
Check that Ln(xi) = yi (∀ i = 0, 1, 2, ..., n) is satisfied.
Also f[x, x0, x1, x2, x3, ..., xn] = f^{(n+1)}(ξ)/(n + 1)!.
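A sketch of the resulting formula, using the top row f(x0), f[x0,x1], f[x0,x1,x2], ... of the divided difference table (function name mine):

```python
# Ln(x) = f(x0) + (x-x0) f[x0,x1] + (x-x0)(x-x1) f[x0,x1,x2] + ...
def newton_divided(xs, ys, x):
    col = list(ys)
    coeffs = [col[0]]                   # f[x0], f[x0,x1], f[x0,x1,x2], ...
    for k in range(1, len(xs)):
        col = [(col[i + 1] - col[i]) / (xs[i + k] - xs[i])
               for i in range(len(col) - 1)]
        coeffs.append(col[0])
    result, prod = 0.0, 1.0
    for k, c in enumerate(coeffs):
        result += c * prod
        prod *= (x - xs[k])             # (x-x0)(x-x1)...(x-xk)
    return result

# Unequally spaced nodes on f(x) = x^2: reproduced exactly at x = 2.
xs = [0.0, 1.0, 3.0, 6.0]
print(newton_divided(xs, [x * x for x in xs], 2.0))   # ≈ 4.0
```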

We can derive the Newton forward, Newton backward and Lagrange formulas using the above formula and the properties of divided differences.
2.15 Remark:

1) To find the various interpolation formulas we are actually peeping into the polynomial space Pn[a, b] of dimension (n + 1). We know the basis of a vector space is not unique; hence we are choosing different bases for different formulas to express the same polynomial. That is why it is unique. For Lagrange the ωi(x)'s are the basis, whereas 1, (x − x0), (x − x0)(x − x1), (x − x0)(x − x1)(x − x2), ………, (x − x0)(x − x1)(x − x2)⋯(x − x_{n−1}) is the basis for Newton forward interpolation.
2) In statistics we have done curve fitting, where the basic problem is the same: for some given (xi, yi), i = 0, 1, 2, ..., n, we have to fit a curve of given degree. Now the question is which one should be used when. In interpolation we have seen that the approximating polynomial passes through all the (xi, yi), but in curve fitting we use the least squares method, where the approximating polynomial does not necessarily pass through the given points. So if the given data contain only round-off error then it is wise to use interpolation; otherwise curve fitting will be better.
3) In the error of interpolation we have the term f^{(n+1)}, but for many functions the higher derivatives are unstable, i.e. for a small change in the argument the change in the value of f^{(n+1)} is large. For such functions the use of interpolation is not advisable. But without knowing the function, how can we predict that? It is expected for a stable function that the yi's will not vary much and that the differences will decrease as we proceed in the difference table. This precaution has to be taken.

2.16 Differentiation
Just as in interpolation, where we approximate f(x) by Ln(x), f′(x) can be approximated by Ln′(x) and f″(x) by Ln″(x). So using Newton forward interpolation we get:
f′(x) ≈ Ln′(x) = (1/h)[∆y0 + ((2u − 1)/2)∆²y0 + ((3u² − 6u + 2)/3!)∆³y0 + ⋯]
f″(x) ≈ Ln″(x) = (1/h²)[∆²y0 + (u − 1)∆³y0 + ((12u² − 36u + 22)/24)∆⁴y0 + ⋯]
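As a sketch, at x = x0 (u = 0) the first formula reduces to f′(x0) ≈ (1/h)[∆y0 − ∆²y0/2 + ∆³y0/3 − ⋯]; checking it on tabulated values of e^x (assumed test data, helper name mine):

```python
import math

# f'(x0) from the first few forward differences, coefficients at u = 0
# being 1, -1/2, 1/3, -1/4, ...
def derivative_at_x0(ys, h, terms=3):
    diffs = list(ys)
    top = []
    for _ in range(terms):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        top.append(diffs[0])            # ∆^k y0, k = 1, 2, ...
    return sum((-1)**k * top[k] / (k + 1) for k in range(terms)) / h

# f(x) = e^x tabulated at x = 0, 0.1, ..., 0.5; true f'(0) = 1.
ys = [math.exp(0.1 * i) for i in range(6)]
print(derivative_at_x0(ys, 0.1))   # ≈ 1.0
```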


Note: We already mentioned that as we proceed along the difference table the quantities become smaller, i.e. there is a loss of significant figures: if the number of significant figures in y is 12, then possibly it is 10 in ∆y, 8 in ∆²y, 6 in ∆³y, and so on. In the formula for Ln′(x) the leading term is ∆y0, so there is a loss of significant figures. Moreover, a factor of 1/h is present: if h is 1/10 then 1/h is 10, so the error is magnified 10 times. The situation is worse for f′′(x). So we can conclude that these approximations may not be reliable. This is the reason for saying:

Large errors are often found in approximating derivatives from the polynomial interpolation formula.
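To illustrate the formula above, putting u = 0 in it gives f′(x0) ≈ (1/h)(∆y0 − ∆²y0/2 + ∆³y0/3 − ∆⁴y0/4 + ⋯). A minimal Python sketch for an equally spaced table (the function names are my own, not from the text):

```python
import math

def forward_differences(ys):
    """Return [Δy0, Δ²y0, ..., Δⁿy0] from the tabulated values ys."""
    deltas, col = [], list(ys)
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]
        deltas.append(col[0])
    return deltas

def derivative_at_x0(xs, ys):
    """f'(x0) ≈ (1/h)(Δy0 - Δ²y0/2 + Δ³y0/3 - ...): the u = 0 case of
    differentiating Newton's forward interpolation formula."""
    h = xs[1] - xs[0]
    return sum((-1) ** k * d / (k + 1)
               for k, d in enumerate(forward_differences(ys))) / h

xs = [0.0, 0.1, 0.2, 0.3, 0.4]
ys = [math.exp(x) for x in xs]
approx = derivative_at_x0(xs, ys)   # should be close to (e^x)' = 1 at x = 0
```

Note that shrinking h below the noise level of the tabulated values magnifies round-off error, exactly as the note above warns.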

Numerical Integration

3.1 Introduction
The well-known method of evaluating a definite integral ∫_a^b f(x)dx is to find an indefinite integral or
a primitive of 𝑓(𝑥 ), i. e. a function 𝜑(𝑥) such that
𝜑 ′ (𝑥 ) = 𝑓(𝑥) and then calculate the values of
𝜑(𝑎 ), 𝜑(𝑏) and take the value of the integral to
be 𝜑(𝑏) − 𝜑(𝑎 ). But if the function 𝑓 (𝑥 ) is such that
its indefinite integral cannot be obtained in terms of
known functions, as is very often the case, then the
above method fails. In such cases we may try to
compute an approximate numerical value of the
definite integral up to a desired degree of accuracy.
This is the problem of numerical integration which is
also called mechanical quadrature.
Again, if the integrand f(x) is not known in its analytic form but is represented by a table of values, then the formal method becomes meaningless, and we turn to numerical integration.
Closed and open type quadrature formula: A
mechanical quadrature formula is called closed or
open type according as the limits of integration are
used as interpolating points or not.
Degree of Precision: A mechanical quadrature formula is said to have degree of precision k (k a positive integer) if it is exact, i.e. the error is zero, for every polynomial of degree ≤ k, but there exists a polynomial of degree k + 1 for which it is not exact, i.e. the error is not zero.
Composite rule: Sometimes it is more convenient to break up the interval of integration [a, b] into m sub-intervals [a_{j−1}, a_j] (j = 1, 2, 3, …, m) by points a0, a1, a2, …, a_m such that a = a0 < a1 < a2 < ⋯ < a_m = b, apply a given quadrature formula separately to each interval [a_{j−1}, a_j], and add the results. The formula thus obtained is called the composite rule corresponding to the given quadrature formula.
3.2: Newton-Cotes Formula (closed type)
Let the integral to be evaluated be I(f) = ∫_a^b f(x)dx. The interval [a, b] is sub-divided into n equal subintervals, each of length h. The nodes are x0, x1, x2, …, x_n, such that x0 = a, x_n = b, x_i = x0 + ih, h = (b − a)/n (i = 0, 1, 2, 3, …, n).

The corresponding entries f(x_i), i = 0, 1, 2, …, n are also available. Let us use Lagrange's interpolation formula to approximate f(x) by the interpolating polynomial Ln(x):
f(x) ≈ Ln(x) = ω(x) ∑_{i=0}^{n} f(x_i) / [(x − x_i) ω′(x_i)]

where ω(x) = (x − x0)(x − x1) ⋯ (x − x_n).


Integrating the interpolating polynomial Ln(x) we have the approximate value of the given integral as

I_n(f) = ∫_a^b ω(x) ∑_{i=0}^{n} f(x_i)/[(x − x_i) ω′(x_i)] dx = ∑_{i=0}^{n} H_i^n f(x_i)

where H_i^n = ∫_a^b ω(x)/[(x − x_i) ω′(x_i)] dx (i = 0, 1, 2, …, n).

Setting u = (x − x0)/h, so that dx = h du,

So ω(x) = h^{n+1} u(u − 1)(u − 2) ⋯ (u − n)

ω′(x_i) = (x_i − x0)(x_i − x1) ⋯ (x_i − x_{i−1})(x_i − x_{i+1}) ⋯ (x_i − x_n)
= ih {(i − 1)h} ⋯ (1h) · (−1h)(−2h) ⋯ {−(n − i)h}
= {i(i − 1)(i − 2) ⋯ 1} h^i · (−1)^{n−i} h^{n−i} (n − i)!
= (−1)^{n−i} h^n i! (n − i)!
Putting these in H_i^n we get

H_i^n = ∫_0^n [h^{n+1} u(u − 1)(u − 2) ⋯ (u − n)] / [(−1)^{n−i} h^n i! (n − i)! (u − i)h] · h du

= ((−1)^{n−i} (b − a)/(n · i! (n − i)!)) ∫_0^n u(u − 1)(u − 2) ⋯ (u − n)/(u − i) du
∴ H_i^n = (b − a) K_i^n (i = 0, 1, 2, …, n)

where K_i^n = ((−1)^{n−i}/(n · i! (n − i)!)) ∫_0^n u(u − 1)(u − 2) ⋯ (u − n)/(u − i) du (i = 0, 1, 2, …, n).

Thus we have

I(f) ≈ (b − a) ∑_{i=0}^{n} K_i^n y_i ; y_i = f(x_i)

where K_i^n is given above. This is called the (n + 1)-point Newton–Cotes numerical integration formula of the closed type.

Note: I(f) ≈ ∑_{i=0}^{n} H_i^n f(x_i) by itself is not the Newton–Cotes formula; rather, that form can also be used for unequally spaced nodes.
3.3: Trapezoidal Rule
For n = 1, we have from the Newton–Cotes formula

I(f) = I_T ≈ (b − a) ∑_{i=0}^{1} K_i^1 y_i = (b − a)[K_0^1 y0 + K_1^1 y1]

where K_0^1 = ((−1)^{1−0}/(1 · 0! (1 − 0)!)) ∫_0^1 (u − 1) du = 1/2

and K_1^1 = ((−1)^{1−1}/(1 · 1! (1 − 1)!)) ∫_0^1 u du = 1/2

I(f) = I_T ≈ ((b − a)/2) [y0 + y1] = (h/2) [y0 + y1]
Geometrically, the curve y = f(x) is replaced by the straight line passing through the points (a, f(a)) and (b, f(b)), and the integral ∫_a^b f(x)dx is approximated by the area of the trapezium bounded by the straight line, the ordinates at x = a, b, and the x-axis; hence the name trapezoidal rule.
Composite trapezoidal rule: Suppose the interval [a, b] is sub-divided into n equal subintervals, each of length h. The nodes are x0, x1, x2, …, x_n, such that x0 = a, x_n = b, x_i = x0 + ih, h = (b − a)/n (i = 0, 1, 2, 3, …, n). Then, applying the above trapezoidal rule to each subinterval [x_{i−1}, x_i] (i = 1, 2, 3, …, n) and summing over i, we obtain the composite trapezoidal rule:

I(f) = ∫_{x0}^{x1} f(x)dx + ∫_{x1}^{x2} f(x)dx + ⋯ + ∫_{x_{n−1}}^{x_n} f(x)dx

≈ (h/2) [y0 + 2(y1 + y2 + ⋯ + y_{n−1}) + y_n]
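The composite trapezoidal rule translates directly into code; a short Python sketch (the function name is mine, not from the text):

```python
def composite_trapezoidal(f, a, b, n):
    """h/2 * [y0 + 2(y1 + ... + y_{n-1}) + yn] over n equal subintervals."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h / 2 * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

# The rule is exact for straight lines (degree of precision 1):
area = composite_trapezoidal(lambda x: 2 * x + 1, 0.0, 1.0, 4)   # ∫_0^1 (2x+1)dx = 2
```

For curved integrands the result is only approximate, with the error shrinking as h decreases.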

3.4 Simpson’s Rule


For n = 2, we have from the Newton–Cotes formula

I(f) = I_S ≈ (b − a) ∑_{i=0}^{2} K_i^2 y_i = (b − a)[K_0^2 y0 + K_1^2 y1 + K_2^2 y2]

where K_0^2 = ((−1)^{2−0}/(2 · 0! (2 − 0)!)) ∫_0^2 (u − 1)(u − 2) du = 1/6

K_1^2 = ((−1)^{2−1}/(2 · 1! (2 − 1)!)) ∫_0^2 u(u − 2) du = 2/3

K_2^2 = ((−1)^{2−2}/(2 · 2! (2 − 2)!)) ∫_0^2 u(u − 1) du = 1/6

I(f) = I_S ≈ ((b − a)/6) [y0 + 4y1 + y2] = (h/3) [y0 + 4y1 + y2]

Composite Simpson's 1/3rd rule: Suppose the interval [a, b] is sub-divided into n (= 2m) equal subintervals, each of length h. The nodes are x0, x1, x2, …, x_n, such that x0 = a, x_n = b, x_i = x0 + ih, h = (b − a)/n (i = 0, 1, 2, 3, …, n). This divides the range of integration [a, b] into m = n/2 subranges [x0, x2], [x2, x4], …, [x_{n−2}, x_n]. Applying Simpson's rule to the subrange [x_{2j−2}, x_{2j}],

∫_{x_{2j−2}}^{x_{2j}} f(x)dx ≈ (h/3) [y_{2j−2} + 4y_{2j−1} + y_{2j}]

and summing over j = 1, 2, …, m,

I(f) = ∑_{j=1}^{m} ∫_{x_{2j−2}}^{x_{2j}} f(x)dx

≈ (h/3) ∑_{j=1}^{m} [y_{2j−2} + 4y_{2j−1} + y_{2j}]

I_S^c = (h/3) [{y0 + 4y1 + y2} + {y2 + 4y3 + y4} + ⋯ + {y_{n−2} + 4y_{n−1} + y_n}]

= (h/3) [y0 + 4(y1 + y3 + y5 + ⋯ + y_{n−1}) + 2(y2 + y4 + y6 + ⋯ + y_{n−2}) + y_n]

Note: for Simpson's 1/3rd rule n must be even.
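The composite Simpson's rule can be sketched in a few lines (assuming n even; names are my own):

```python
def composite_simpson(f, a, b, n):
    """h/3 * [y0 + 4*(odd-index ys) + 2*(even interior ys) + yn]; n must be even."""
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3rd rule")
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h / 3 * (ys[0] + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-1:2]) + ys[-1])

# Degree of precision 3: exact for cubics even with a single pair of strips
val = composite_simpson(lambda x: x ** 3, 0.0, 1.0, 2)   # ∫_0^1 x³ dx = 1/4
```

The slices `ys[1:-1:2]` and `ys[2:-1:2]` pick out the odd-index and interior even-index ordinates, matching the 4- and 2-coefficients of the formula above.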
For n = 1, 2, 3, 4, 5, 6 the calculated values of K_i^n are given in the table:

n \ i    0        1        2        3        4        5        6
1        1/2      1/2
2        1/6      4/6      1/6
3        1/8      3/8      3/8      1/8
4        7/90     32/90    12/90    32/90    7/90
5        19/288   75/288   50/288   50/288   75/288   19/288
6        41/840   216/840  27/840   272/840  27/840   216/840  41/840

Table: Newton–Cotes quadrature coefficients K_i^n (closed type)

3.5 Weddle’s Rule: n=6


The seven-point Newton–Cotes closed-type formula is

I(f) = ∫_a^b f(x)dx ≈ (h/140) [41y0 + 216y1 + 27y2 + 272y3 + 27y4 + 216y5 + 41y6]

where h = (b − a)/6. The coefficients of the ordinates are extremely cumbrous, which makes the formula unworthy of practical computation. Accordingly, we seek to modify the above formula so that the coefficients are simplified, by proceeding as follows. We know
∆⁶y0 = y0 − 6y1 + 15y2 − 20y3 + 15y4 − 6y5 + y6

Adding (h/140) ∆⁶y0 to I(f) we get

I_W = ∫_a^b f(x)dx ≈ (3h/10) [y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + y6]

This is called Weddle's rule, in which the coefficients of the ordinates are fairly simple. (The added term is of the order of ∆⁶y0, which is small for a smooth tabulated function, so the modification changes the result only slightly.)
Composite Weddle's rule: Suppose the interval [a, b] is sub-divided into n (= 6m) equal subintervals, each of length h. The nodes are x0, x1, x2, …, x_n, such that x0 = a, x_n = b, x_i = x0 + ih, h = (b − a)/n (i = 0, 1, 2, 3, …, n). This divides the range of integration [a, b] into m = n/6 subranges [x0, x6], [x6, x12], …, [x_{n−6}, x_n]. Applying Weddle's rule to the subrange [x_{6j−6}, x_{6j}] and summing over j = 1, 2, 3, …, m, we get

I(f) = ∑_{j=1}^{m} ∫_{x_{6j−6}}^{x_{6j}} f(x)dx
where I_W^c = (3h/10) [y0 + y_n + 5{y1 + y5 + y7 + y11 + ⋯ + y_{n−5} + y_{n−1}} + {y2 + y4 + y8 + y10 + ⋯ + y_{n−4} + y_{n−2}} + 6{y3 + y9 + ⋯ + y_{n−3}} + 2{y6 + y12 + ⋯ + y_{n−6}}]
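Rather than tracking the merged coefficients, a blockwise implementation simply applies the 7-point pattern [1, 5, 1, 6, 1, 5, 1] to each group of six subintervals; a sketch (function name is my own):

```python
def composite_weddle(f, a, b, n):
    """Weddle's 7-point pattern applied blockwise; n must be a multiple of 6."""
    if n % 6:
        raise ValueError("n must be a multiple of 6 for Weddle's rule")
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    w = [1, 5, 1, 6, 1, 5, 1]
    # Each block [x_j, x_{j+6}] contributes 3h/10 * Σ w_k * y_{j+k}
    blocks = sum(c * ys[j + k] for j in range(0, n, 6) for k, c in enumerate(w))
    return 3 * h / 10 * blocks

# Degree of precision 5: a single block already integrates x^5 exactly
val = composite_weddle(lambda x: x ** 5, 0.0, 6.0, 6)   # ∫_0^6 x⁵ dx = 7776
```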

3.6 Properties of Cotes' coefficients:

Cotes' coefficients are given by

K_i^n = ((−1)^{n−i}/(n · i! (n − i)!)) ∫_0^n u(u − 1)(u − 2) ⋯ (u − n)/(u − i) du ;

so clearly:

1) K_i^n (or K_i) are independent of the function values f(x_i) or y_i.

2) K_i = K_{n−i}; to prove this let us substitute u = n − t:

K_{n−i} = ((−1)^i/(n · i! (n − i)!)) ∫_0^n u(u − 1)(u − 2) ⋯ (u − n)/(u − n + i) du

= ((−1)(−1)^i/(n · i! (n − i)!)) ∫_n^0 (n − t)(n − t − 1)(n − t − 2) ⋯ (n − t − n)/(n − t − n + i) dt

= ((−1)^{n+i}/(n · i! (n − i)!)) ∫_0^n t(t − 1)(t − 2) ⋯ (t − n)/(t − i) dt = K_i ,

since (−1)^{n+i} = (−1)^{n−i} · (−1)^{2i}.
3) ∑_{i=0}^{n} K_i = 1; from interpolation we have

f(x) = Ln(x) + (f^{(n+1)}(ξ)/(n + 1)!) ω(x) ; so

∫_a^b f(x)dx = ∫_a^b Ln(x)dx + ∫_a^b (f^{(n+1)}(ξ)/(n + 1)!) ω(x)dx

= (b − a) ∑_{i=0}^{n} K_i y_i + ∫_a^b (f^{(n+1)}(ξ)/(n + 1)!) ω(x)dx.

Taking f(x) = 1 (for which the error term vanishes) we get ∑_{i=0}^{n} K_i = 1.

Using the above: for n = 1 we get K_0 = K_1 and K_0 + K_1 = 1, so K_0 = K_1 = 1/2, which gives the trapezoidal rule. Also, taking n = 2 we get K_0 = K_2 and K_0 + K_1 + K_2 = 1, so finding one value is enough:

K_2 = (1/(2 · 2!)) ∫_0^2 u(u − 1)du = (1/4)[u³/3 − u²/2]_0^2 = (1/4)(8/3 − 2) = 1/6 = K_0 ;

K_1 = 1 − 1/6 − 1/6 = 2/3,

which gives Simpson's 1/3rd rule.
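All three properties can be checked by computing K_i^n exactly in rational arithmetic; a sketch (not from the text) that builds the polynomial Π_{k≠i}(u − k) and integrates it term by term over [0, n]:

```python
from fractions import Fraction
from math import factorial

def cotes_coefficients(n):
    """Exact K_i^n = (-1)^(n-i)/(n·i!·(n-i)!) · ∫_0^n Π_{k≠i}(u-k) du."""
    ks = []
    for i in range(n + 1):
        poly = [Fraction(1)]                       # coefficients, lowest degree first
        for k in range(n + 1):
            if k == i:
                continue
            out = [Fraction(0)] * (len(poly) + 1)  # multiply poly by (u - k)
            for j, c in enumerate(poly):
                out[j + 1] += c
                out[j] -= k * c
            poly = out
        # exact integral of the polynomial over [0, n]
        integral = sum(c * Fraction(n) ** (j + 1) / (j + 1) for j, c in enumerate(poly))
        ks.append((-1) ** (n - i) * integral / (n * factorial(i) * factorial(n - i)))
    return ks

ks6 = cotes_coefficients(6)
```

For n = 2 this returns 1/6, 2/3, 1/6, and for every n the coefficients come out symmetric and summing to 1, as proved above.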
3.7 Deduction from Newton’s interpolation formula:
Since the interpolating polynomial is unique, all interpolation formulas represent the same polynomial, so the quadrature rules can also be deduced from any other interpolation formula. Remembering that we are using equispaced nodes here, this can be done from Newton's interpolation formula as well.
For n = 1: L1(x) = y0 + u∆y0, where u = (x − x0)/h

∫_a^b L1(x)dx = h ∫_0^1 (y0 + u∆y0)du ; a = x0, b = x1

= h (y0 + (1/2)∆y0) = (h/2)(y0 + y1) : Trapezoidal rule.

For n = 2: L2(x) = y0 + u∆y0 + (u(u − 1)/2!) ∆²y0

∫_a^b L2(x)dx ; a = x0, b = x2

= h ∫_0^2 (y0 + u∆y0 + (u(u − 1)/2!) ∆²y0)du

= h (2y0 + 2∆y0 + (1/3)∆²y0) = (h/3)(y0 + 4y1 + y2) : Simpson's 1/3rd rule.

𝟑.8 Errors in Mechanical quadrature formula:


Let the primitive of f(x) exist and equal φ(x), i.e. f(x) = φ′(x); E_T = ∫_{x0}^{x1} f(x)dx − I_T

E_T = φ(x1) − φ(x0) − (h/2)(y0 + y1)

= φ(x0 + h) − φ(x0) − (h/2){f(x0) + f(x0 + h)}

= hφ′(x0) + (h²/2!)φ′′(x0) + (h³/3!)φ′′′(x0) + ⋯ − (h/2){f(x0) + f(x0) + hf′(x0) + (h²/2!)f′′(x0) + (h³/3!)f′′′(x0) + ⋯ }

= {hf(x0) + (h²/2)f′(x0) + (h³/6)f′′(x0) + ⋯ } − (h/2){2f(x0) + hf′(x0) + (h²/2)f′′(x0) + ⋯ }

= h³(1/6 − 1/4)f′′(x0) + ⋯ = −(h³/12)f′′(x0) + ⋯

E_T = −(h³/12) f′′(ξ), where x0 < ξ < x1.

Clearly the degree of precision of the trapezoidal rule is 1.
For Simpson's 1/3rd rule (n = 2):

E_S = ∫_{x0}^{x2} f(x)dx − I_S

E_S = φ(x2) − φ(x0) − (h/3)(y0 + 4y1 + y2)

= φ(x0 + 2h) − φ(x0) − (h/3){f(x0) + 4f(x0 + h) + f(x0 + 2h)}

= {2hφ′(x0) + (4h²/2!)φ′′(x0) + (8h³/3!)φ′′′(x0) + (16h⁴/4!)φ^{iv}(x0) + (32h⁵/5!)φ^{v}(x0) + ⋯ } − (h/3)f(x0)

− (4h/3){f(x0) + hf′(x0) + (h²/2!)f′′(x0) + (h³/3!)f′′′(x0) + (h⁴/4!)f^{iv}(x0) + ⋯ }

− (h/3){f(x0) + 2hf′(x0) + (4h²/2!)f′′(x0) + (8h³/3!)f′′′(x0) + (16h⁴/4!)f^{iv}(x0) + ⋯ }

= hf(x0){2 − 1/3 − 4/3 − 1/3} + h²f′(x0){2 − 4/3 − 2/3} + h³f′′(x0){8/6 − 4/6 − 4/6} + h⁴f′′′(x0){16/24 − 4/18 − 8/18} + h⁵f^{iv}(x0){32/120 − 4/72 − 16/72} + ⋯

= −(1/90)h⁵f^{iv}(x0) + ⋯

E_S = −(h⁵/90) f^{iv}(ξ), where x0 < ξ < x2.

Clearly the degree of precision of Simpson's 1/3rd rule is 3.

Remark: The most interesting question is which formula should be used when. For example, for n = 2, composite trapezoidal or Simpson's 1/3rd rule: which one will give the better result? That depends on the value of h, i.e. on the length of the subinterval. E_T is of the order of h³, whereas E_S is of the order of h⁵. So if h is the same in both cases and less than 1, then obviously the error for Simpson's rule will be smaller, provided the values of the corresponding derivatives are bounded, not shooting up. Hence, as a rule of thumb, we use Simpson's rule and try to make the length of the subinterval small if the function values are not fluctuating much; otherwise we use the trapezoidal rule. If the calculation is done on a computer it is advisable to take h as 0.1 or less. The best method is to find a suitable value of n, i.e. the number of subintervals, for which the integral converges.

2) Error in Weddle's rule:

E_W = −(h⁷/140) f^{vi}(ξ) (a < ξ < b). Hence the degree of precision of Weddle's rule is 5.

3) Errors in composite formulas:

E_T = −(nh³/12) f′′(ξ) = −(h²(b − a)/12) f′′(ξ), where a = x0 < ξ < x_n = b, nh = b − a.

E_S = −(nh⁵/180) f^{iv}(ξ) = −(h⁴(b − a)/180) f^{iv}(ξ), where a = x0 < ξ < x_n = b, nh = b − a.

E_W = −(nh⁷/(140 · 6)) f^{vi}(ξ) = −(h⁶(b − a)/840) f^{vi}(ξ) (a < ξ < b).
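These h-orders can be observed numerically: halving h should divide the composite trapezoidal error by about 2² = 4. A small sketch (test integrand e^x on [0, 1] is my own choice):

```python
import math

def composite_trapezoidal(f, a, b, n):
    # h/2 * [y0 + 2*(interior ys) + yn]
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h / 2 * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

exact = math.e - 1                                   # ∫_0^1 e^x dx
err_h  = exact - composite_trapezoidal(math.exp, 0.0, 1.0, 10)
err_h2 = exact - composite_trapezoidal(math.exp, 0.0, 1.0, 20)
ratio = err_h / err_h2                               # ≈ 4, confirming E_T = O(h²)
```

The same experiment with Simpson's rule would give a ratio near 2⁴ = 16, matching the h⁴ factor above.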
Example: Evaluate I = ∫_0^6 1/(1 + x) dx using (i) Trapezoidal rule, (ii) Simpson's 1/3rd rule, (iii) Weddle's rule. Also check by direct integration.

Solution: Here we have y = f(x) = 1/(1 + x), 0 ≤ x ≤ 6. Divide the interval into six parts, so h = (6 − 0)/6 = 1.

Therefore, the values of y = 1/(1 + x) are:
x          0     1     2     3     4     5     6
y = f(x)   1    1/2   1/3   1/4   1/5   1/6   1/7

(i) By Trapezoidal rule:

∫_0^6 1/(1 + x) dx ≈ (h/2)[(y0 + y6) + 2(y1 + y2 + y3 + y4 + y5)]

= (1/2)[(1 + 1/7) + 2(1/2 + 1/3 + 1/4 + 1/5 + 1/6)]

= 2.021429
(ii) By Simpson's 1/3rd rule:

∫_0^6 1/(1 + x) dx ≈ (h/3)[(y0 + y6) + 4(y1 + y3 + y5) + 2(y2 + y4)]

= (1/3)[(1 + 1/7) + 4(1/2 + 1/4 + 1/6) + 2(1/3 + 1/5)]

= 1.958730
(iii) By Weddle's rule:

∫_0^6 1/(1 + x) dx ≈ (3h/10)[(y0 + y6) + 5(y1 + y5) + (y2 + y4) + 6y3]

= (3/10)[(1 + 1/7) + 5(1/2 + 1/6) + (1/3 + 1/5) + 6 × 1/4]

= 1.952857
By actual integration,
6
1
∫ 𝑑𝑥 = [log(1 + 𝑥 )]60
0 1+𝑥
= log 7 − log 1
= 1.945910
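The three hand computations above can be reproduced in a few lines (a quick check, not part of the original solution):

```python
import math

f = lambda x: 1 / (1 + x)
y = [f(x) for x in range(7)]          # h = 1, nodes x = 0, 1, ..., 6

trap    = 1 / 2 * (y[0] + 2 * sum(y[1:6]) + y[6])
simpson = 1 / 3 * (y[0] + 4 * (y[1] + y[3] + y[5]) + 2 * (y[2] + y[4]) + y[6])
weddle  = 3 / 10 * (y[0] + 5 * y[1] + y[2] + 6 * y[3] + y[4] + 5 * y[5] + y[6])
exact   = math.log(7)                 # direct integration
```

Running this gives 2.021429, 1.958730, 1.952857 and 1.945910 to six decimals; as expected, the higher-order rules land closer to log 7.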

Example: The velocity 𝑣 of a particle at distance 𝑠


from a point on its path is given in the table below:

𝑠 in meter 0 10 20 30 40 50 60
𝑣 in m/sec 47 58 64 65 61 52 38

Estimate the time to travel 60 meters by using Simpson's 1/3rd rule.

Solution: Here we have h = 10.

We know that v = ds/dt. Hence, dt = ds/v.

To find the time taken to travel 60 meters we have to evaluate

∫ dt = ∫_0^60 ds/v

Let y = 1/v; the values of y for different values of s are given below:

s          0        10       20       30       40       50       60
y = 1/v    0.0213   0.0172   0.0156   0.0154   0.0164   0.0192   0.0263

By Simpson's 1/3rd rule,

∫_0^60 y ds ≈ (h/3)[(y0 + y6) + 4(y1 + y3 + y5) + 2(y2 + y4)]

= (10/3)[(0.0213 + 0.0263) + 4(0.0172 + 0.0154 + 0.0192) + 2(0.0156 + 0.0164)]

= 1.0627
∴ Time taken to travel 60 meters is 1.0627 seconds.

6.6 Summary
In this unit numerical integration using the Newton–Cotes formula (closed type), the trapezoidal rule, Simpson's 1/3rd rule and Weddle's rule has been discussed, and the corresponding error terms have also been studied.

6.7 Exercises
1. Define the degree of precision of a mechanical quadrature formula. Show that the degree of precision of the trapezoidal rule is 1.
2. Deduce the trapezoidal, Simpson's 1/3rd and Weddle's rules (without error) by integrating Newton's forward interpolation formula.

3. Evaluate ∫_0^5 1/(4x + 5) dx by the Trapezoidal rule using 11 ordinates. Ans: 0.4055

4. Find the value of ∫_0^{π/2} √(cos x) dx by (i) Trapezoidal rule and (ii) Simpson's one-third rule, taking n = 6. Ans: (i) 1.170, (ii) 1.187

5. When a train is moving at 30 m/sec, steam is shut off and brakes are applied. The speed of the train in m/sec after t seconds is given by

time (t)    0    5    10     15    20     25     30     35    40
speed (v)   30   24   19.5   16    13.6   11.7   10.0   8.5   7.0

Using Simpson's rule, determine the distance moved by the train in 40 sec. (Ans: 606.66 m)
