SEM 3 - GE3 - Numerical Analysis 1

Numerical Analysis

Numerical analysis is a subject that involves computational methods for studying and solving
mathematical problems. It is a branch of mathematics and computer science that creates,
analyzes, and implements algorithms for solving mathematical problems numerically.
Numerical methods usually emphasize the implementation of these algorithms.

By an algorithm for a given numerical problem we mean a complete description of
well-defined operations through which each permissible input is transformed into an
output. By "operations" we mean here arithmetic and logical operations which a
computer can perform, together with references to previously defined algorithms. The aim of
these methods is, therefore, to provide systematic techniques for solving mathematical
problems numerically.

Numerical methods are well suited for solving mathematical problems by using
modern digital computers, which are very fast and efficient in performing arithmetic
operations. The process of solving problems using high-precision digital computers generally
involves starting from initial data; the appropriate algorithms are then executed
to yield the required results.

Inevitably, the numerical data and the methods used are approximate ones. Hence, the
error in the computed result may be caused by errors in the data, by errors in
the method, or by both.

Absolute, Relative and Percentage Errors


Let x_T be the true value of a quantity and x_A be its approximate value as given or obtained
by measurement or calculation. Then

the absolute error in approximating x_T by x_A is defined by E_A = x_T − x_A,

the relative error in approximating x_T by x_A is given by R_A = (x_T − x_A)/x_T, provided x_T ≠ 0,

and the percentage error in approximating x_T by x_A is given by P_A = [(x_T − x_A)/x_T] × 100.

Example:

Let x_T = 3.141592 and x_A = 3.14

Then E_A = x_T − x_A = 0.001592

And R_A = (x_T − x_A)/x_T ≈ 0.000507

Another example: let x_T = 1/3 and x_A = 0.333. Then E_A = 1/3 − 0.333 = 0.000333… and R_A = E_A/x_T = 0.001, i.e. a percentage error of 0.1%.
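
As a quick check, these error measures can be computed directly. The following is a minimal Python sketch (the function name errors and the printed sample values are illustrative, not part of the notes):

def errors(x_true, x_approx):
    """Return the absolute, relative and percentage errors of x_approx."""
    e_abs = x_true - x_approx          # E_A = x_T - x_A
    e_rel = e_abs / x_true             # R_A, defined only for x_T != 0
    return e_abs, e_rel, 100 * e_rel   # P_A = R_A * 100

print(errors(3.141592, 3.14))   # ≈ (0.001592, 0.000507, 0.0507)
print(errors(1/3, 0.333))       # ≈ (0.000333, 0.001, 0.1)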

Computer Representation of Numbers

Fixed-point Numbers

Digital computers are the principal means of calculation in numerical analysis, and
consequently it is very important to understand how they operate. Digital computers
work with a fixed finite number of places, the word length, when internally
representing a number. This number 𝑛 is determined by the make of the machine, although
some machines have built-in extensions to integer multiples 2𝑛, 3𝑛, … of 𝑛 (double word length,
triple word length, …) to offer greater precision if needed.
A word length of 𝑛 places can be used in several different fashions to represent a
number. Fixed-point representation specifies a fixed number 𝑛1 of places before and a fixed
number 𝑛2 of places after the decimal (binary) point, so that 𝑛 = 𝑛1 + 𝑛2 (usually 𝑛1 = 0 or 𝑛1 = 𝑛).

In this representation, the position of the decimal (binary) point is fixed. A few simple
digital devices, mainly for accounting purposes, are still restricted to fixed-point
representation.

Floating-point numbers

Most computers have a floating-point mode for representing numbers. The floating-point
form is used to represent real numbers. The numbers allowed can be of greatly varying size,
but there are limitations both on the magnitude of the number and on the number of digits.
The floating-point representation is closely related to what is called scientific notation in
many high school mathematics texts. The number base used in computers is seldom decimal:
most digital computers use the base 2 (binary) number system or some variant of it such as
base 8 (octal) or base 16 (hexadecimal).

An m-digit floating-point number x in base β has the form

x = ±(.d₁d₂ … dₘ) × β^e

where 0 ≤ dᵢ ≤ β − 1 and (.d₁d₂ … dₘ) = d₁/β + d₂/β² + … + dₘ/β^m is a β-fraction called
the mantissa, and e is called the exponent. β is also called the radix. Such a floating-point
number is said to be normalized if d₁ ≠ 0, or else d₁ = d₂ = … = dₘ = 0. We will always
assume d₁ ≠ 0.

• For most computers β = 2, although on some β = 16. In hand calculations and in
most calculators β = 10.

• The precision or length m of floating-point numbers on any particular computer is
usually determined by the word length of the computer and may therefore vary widely.

• The exponent e is limited to a range L ≤ e ≤ U for certain integers L and U,
which limits the possible size of x.
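
To make the notation concrete, the following Python sketch splits a real number into a normalized base-10 mantissa and exponent, i.e. x = ±(.d₁d₂ …) × 10^e with d₁ ≠ 0 (the helper name normalize10 is illustrative, not from the notes):

import math

def normalize10(x):
    """Return (sign, mantissa, exponent) with x = sign * mantissa * 10**exponent
    and 0.1 <= mantissa < 1, so that the leading digit d1 is nonzero."""
    if x == 0:
        return 1, 0.0, 0
    sign = 1 if x > 0 else -1
    e = math.floor(math.log10(abs(x))) + 1   # exponent of the normalized form
    return sign, abs(x) / 10**e, e

print(normalize10(3.14159))   # ≈ (1, 0.314159, 1)
print(normalize10(-0.00456))  # ≈ (-1, 0.456, -2)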

Significant digits of a floating point number

The digits used in the mantissa to express a number are called significant digits or significant
figures. More precisely, the digits in the mantissa of the normalized form of a number are its
significant digits. A significant digit of an approximate number in its decimal representation is any
nonzero digit, or any zero lying between significant digits or used as a placeholder to
indicate a retained place. All other zeros of the approximate number, which serve only to fix the
position of the decimal point, are not significant digits.

In the floating-point representation, the number

x = ±(.d₁d₂ … dₘ) × 10^n,  d₁ ≠ 0

is said to have m significant digits, and d₁, d₂, …, dₘ are respectively called the first,
second, …, m-th significant digit of the number.

Examples:
1) In the number 0.002070 the first three zeros are not significant digits because they
serve only to fix the position of the decimal point and indicate the place values of the other
digits. The fourth zero is significant since it lies between 2 and 7. The fifth zero is also
significant since it shows that we retain the decimal place 10⁻⁶ in the approximate number.
From this point of view, the numbers 0.002070 and 0.00207 are not the same, because the
former has four significant digits and the latter has only three.

2) Suppose an experimentalist gives the result of a length measurement to be 13.59 m.


Here all the four figures are significant if the length has been measured to an accuracy of
1 cm i.e. the actual length is established to lie between 13.585 m and 13.595 m. If however,
he quotes the result as 13.590 m, then it is understood that he means the last zero to be
significant because he has measured the length to an accuracy of 1 mm. The number of
significant figures is then five.

3) If a number is written as 463000, it is not clear whether the zeros are significant. The
ambiguity can be removed by writing the number in floating-point representation. If the
number is given as
.463 × 10⁶ then it has 3 significant digits.
Or if .4630 × 10⁶ then it has 4 significant digits.
Or if .46300 × 10⁶ then it has 5 significant digits.

5) The numbers 3.50, 65.0, 0.230 each have three significant digits.
6) The numbers 0.0003125, 0.004321, .05349 each have four significant figures.

Note: To count the significant digits in a number, simply convert the number to normalized
form and then count the digits of its mantissa.
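
A small Python sketch of this counting rule for numbers written with a decimal point (the function name significant_digits is illustrative; integers such as 463000 are excluded because, as noted above, their trailing zeros are ambiguous):

def significant_digits(s: str) -> int:
    """Count the significant digits of a decimal string such as '0.002070'.
    Leading zeros only fix the decimal point and are dropped; embedded and
    trailing zeros after the point are retained."""
    digits = s.lstrip('+-').replace('.', '')
    return len(digits.lstrip('0'))

for s in ['0.002070', '0.00207', '13.590', '3.50', '65.0', '0.230', '.05349']:
    print(s, significant_digits(s))
# 0.002070 -> 4, 0.00207 -> 3, 13.590 -> 5, 3.50 -> 3, 65.0 -> 3, 0.230 -> 3, .05349 -> 4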

Chopping and Rounding:

Most real numbers x cannot be represented exactly by the floating-point
representation stated above, and thus they are approximated by a nearby number representable in the
machine. The fact that on any computer only a subset 𝐹(𝛽, 𝑚, 𝐿, 𝑈) of ℝ is actually
available poses several practical problems, first of all the representation in 𝐹 of any given
real number. In this regard, notice that even if x and y are two numbers in 𝐹, the result of
an operation on them does not necessarily belong to 𝐹. Therefore, we must also define an
arithmetic on F.
We denote by fl(x) the machine approximation of a given real number x. There are
two principal ways of producing fl(x) from x: chopping and rounding.
Let 𝑥 be written in the form

x = ±(.d₁d₂ … dₘ dₘ₊₁ …) × β^e

The chopped machine representation is

fl(x) = ±(.d₁d₂ … dₘ) × β^e

Many computers use chopping after each arithmetic operation.

The rounded representation of x is given by

fl(x) = ±(.d₁d₂ … dₘ) × β^e                      if 0 ≤ dₘ₊₁ < β/2

fl(x) = ±[(.d₁d₂ … dₘ) + (.00 … 01)] × β^e        if dₘ₊₁ ≥ β/2

A variation of this definition is sometimes used in order to have unbiased rounding.



This rule is called the even-digit rule. In such a case,

if dₘ₊₁ = β/2 and dⱼ = 0 for j ≥ m + 2,

then the number is rounded up or down according as dₘ is odd or even.

Thus the rules for rounding off a number to m significant digits are:

i) If the (𝑚 + 1)th digit is 0, 1, 2, 3 or 4 then delete all the digits following the m-th digit.
ii) If the (𝑚 + 1)th digit is 6, 7, 8 𝑜𝑟 9 then delete all digits following the m-th digit and
add 1 to the m-th place.
iii) If the (𝑚 + 1)th digit is 5 followed by at least one non-zero digit, then proceed as in (ii).
iv) If the (𝑚 + 1)th digit is 5 and it is the last non-zero digit, then
a) delete the digit 5 only if the 𝑚-th digit is even.
b) Add 1 to the 𝑚-th place and delete 5 if the 𝑚-th digit is odd.

Example: π = 3.14159265… in normalized floating-point representation is

π = 0.314159265… × 10¹

The floating-point form of π using five-digit chopping is fl(π) = 0.31415 × 10¹ = 3.1415

And the floating-point form of π using five-digit rounding is fl(π) = 0.31416 × 10¹ = 3.1416

Roundoff Error

The error that results from representing a number x by its floating-point form fl(x)
is called the round-off error, which clearly depends on the size of x and is therefore best
measured relative to x. The floating-point representation fl(x) of x has the relative error
(fl(x) − x)/x, which admits the following bound:

Theorem: For the decimal system, |fl(x) − x| / |x| ≤ (1/2) × 10^(1−m) in the case of m-digit rounding.

Example:

i) 24.0349 rounded to 5 significant digits is 24.035
ii) 52.3682 rounded to 5 significant digits is 52.368
iii) 86.145 rounded to 4 significant digits is 86.14
iv) 86.135 rounded to 4 significant digits is 86.14
v) 79.9998 rounded to 5 significant digits is 80.000
vi) 52.184 rounded to 2 significant digits is 52
vii) 52.184 rounded to 1 significant digit is 50
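
Rules (i)–(iv) amount to round-half-to-even applied at the (m+1)-th significant digit, so they can be reproduced with Python's decimal module. A minimal sketch (the names fl_round and fl_chop are illustrative; inputs are passed as strings to avoid binary floating-point artifacts):

from decimal import Context, ROUND_HALF_EVEN, ROUND_DOWN

def fl_round(x: str, m: int):
    """Round the decimal number x to m significant digits (even-digit rule)."""
    return Context(prec=m, rounding=ROUND_HALF_EVEN).create_decimal(x)

def fl_chop(x: str, m: int):
    """Chop (truncate toward zero) the decimal number x to m significant digits."""
    return Context(prec=m, rounding=ROUND_DOWN).create_decimal(x)

print(fl_round('24.0349', 5))      # 24.035
print(fl_round('86.145', 4))       # 86.14 (digit before the 5 is even)
print(fl_round('86.135', 4))       # 86.14 (digit before the 5 is odd, so round up)
print(fl_round('52.184', 1))       # 5E+1, i.e. 50
print(fl_chop('3.1415926535', 5))  # 3.1415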

Finite difference Operators


Linear Operators:

Let ℱ be the class (or space) of all functions defined on (−∞, ∞).

An operator can be thought of as a mapping or a transformation which acts on a member of


the function space (a function) to produce another member of that space (another function). It
is denoted by A or L. Thus 𝐴𝑓(𝑥) = 𝑔(𝑥).

• If 𝐴𝑓(𝑥) = 𝐵𝑓(𝑥) for every 𝑓(𝑥) ∈ ℱ then 𝐴 = 𝐵.


• A is said to be linear if for any two functions 𝑓(𝑥), 𝑔(𝑥) ∈ ℱ and any constants 𝑐1 , 𝑐2
𝐴(𝑐1 𝑓(𝑥) + 𝑐2 𝑔(𝑥)) = 𝑐1 𝐴𝑓(𝑥) + 𝑐2 𝐴𝑔(𝑥)
• Sum: (𝐴 + 𝐵)𝑓(𝑥) = 𝐴𝑓(𝑥) + 𝐵𝑓(𝑥)
• Product: (𝐴𝐵)𝑓(𝑥) = 𝐴(𝐵𝑓(𝑥))
• Identity operator: 𝐼𝑓(𝑥) = 𝑓(𝑥) for every 𝑓(𝑥) ∈ ℱ
• 𝐴𝐴 = 𝐴2 , 𝐴𝐴𝐴 = 𝐴3 , … … … . . , 𝐴0 = 𝐼

Forward difference operator ∆:

Let h be a nonzero constant.

It is defined by ∆𝑓(𝑥) = 𝑓(𝑥 + ℎ) − 𝑓(𝑥).

∴ ∆2 𝑓(𝑥) = ∆(∆𝑓(𝑥)) = ∆𝑓(𝑥 + ℎ) − ∆𝑓(𝑥) = 𝑓(𝑥 + 2ℎ) − 2𝑓(𝑥 + ℎ) + 𝑓(𝑥)

k-th order difference of 𝑓(𝑥) is ∆𝑘 𝑓(𝑥) = ∑𝑘𝑖=0(−1)𝑖 (𝑘𝑖) 𝑓 (𝑥 + (𝑘 − 𝑖)ℎ)

It follows that 𝑓(𝑥 + 𝑘ℎ) = ∑𝑘𝑖=0(𝑘𝑖)∆𝑘−𝑖 𝑓(𝑥)

Consider a set of data points (𝑥𝑖, 𝑦𝑖) where 𝑥𝑟 = 𝑥0 + 𝑟ℎ, 𝑟 = 0(1)𝑛, ℎ > 0; here ℎ is the
common length of the intervals [𝑥𝑖−1, 𝑥𝑖] and is called the step-length or spacing of the
data points.

At 𝑥 = 𝑥𝑗 , ∆𝑘 𝑦𝑗 = ∑𝑘𝑖=0(−1)𝑖 (𝑘𝑖)𝑦𝑗+𝑘−𝑖

And 𝑦𝑗+𝑘 = ∑𝑘𝑖=0(𝑘𝑖)∆𝑘−𝑖 𝑦𝑗

In particular, ∆𝑘 𝑦0 = ∑𝑘𝑖=0(−1)𝑖 (𝑘𝑖)𝑦𝑘−𝑖

And 𝑦𝑘 = ∑𝑘𝑖=0(𝑘𝑖)∆𝑘−𝑖 𝑦0

Properties:

1. The first order difference of a constant is zero.

2. The first order difference of a polynomial of degree n is a polynomial of degree n-1.

3. The (n+1)-th order difference of a polynomial of degree n is zero.

Difference Table in terms of forward differences:

x 𝒚 ∆𝒚 ∆𝟐 𝒚 ∆𝟑 𝒚 ∆𝟒 𝒚

𝑥0 𝑦0
∆𝑦0
𝑥1 𝑦1 ∆2 𝑦0
∆𝑦1 ∆3 𝑦0
𝑥2 𝑦2 ∆2 𝑦1 ∆4 𝑦0
∆𝑦2 ∆3 𝑦1
𝑥3 𝑦3 ∆2 𝑦2
∆𝑦3
𝑥4 𝑦4
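
A short Python sketch that builds such a forward difference table column by column (the function name difference_table is illustrative). Applying it to values of a cubic also illustrates property 3 above, since the fourth differences vanish:

def difference_table(y):
    """Return the columns [y, Δy, Δ²y, …] of the forward difference table."""
    columns = [list(y)]
    while len(columns[-1]) > 1:
        prev = columns[-1]
        columns.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return columns

# y = x³ sampled at x = 0, 1, 2, 3, 4 (h = 1)
for col in difference_table([0, 1, 8, 27, 64]):
    print(col)
# [0, 1, 8, 27, 64]
# [1, 7, 19, 37]   Δy
# [6, 12, 18]      Δ²y
# [6, 6]           Δ³y (constant for a cubic)
# [0]              Δ⁴y (zero)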

Backward difference operator 𝛁:

Let h be a nonzero constant.

It is defined by ∇𝑓(𝑥) = 𝑓(𝑥) − 𝑓(𝑥 − ℎ).

∴ ∇²𝑓(𝑥) = ∇(∇𝑓(𝑥)) = ∇𝑓(𝑥) − ∇𝑓(𝑥 − ℎ) = 𝑓(𝑥) − 2𝑓(𝑥 − ℎ) + 𝑓(𝑥 − 2ℎ), and so on.

k-th order backward difference of 𝑓(𝑥) is ∇𝑘 𝑓(𝑥) = ∑𝑘𝑖=0(−1)𝑖 (𝑘𝑖) 𝑓(𝑥 − 𝑖ℎ)

At 𝑥 = 𝑥𝑗 , ∇𝑘 𝑦𝑗 = ∑𝑘𝑖=0(−1)𝑖 (𝑘𝑖)𝑦𝑗−𝑖

Difference Table in terms of backward differences:

x 𝒚 𝛁𝒚 𝛁𝟐𝒚 𝛁𝟑 𝒚 𝛁𝟒 𝒚

𝑥0 𝑦0
∇𝑦1
𝑥1 𝑦1 ∇2 𝑦2
∇𝑦2 ∇3 𝑦3
𝑥2 𝑦2 ∇2 𝑦3 ∇4 𝑦4
∇𝑦3 ∇3 𝑦4
𝑥3 𝑦3 ∇2 𝑦4
∇𝑦4
𝑥4 𝑦4

Shift operator E:

Let h be a nonzero constant.

It is defined by 𝐸𝑓(𝑥) = 𝑓(𝑥 + ℎ)

∴ 𝐸 2 𝑓(𝑥) = 𝐸(𝐸𝑓(𝑥)) = 𝐸𝑓(𝑥 + ℎ) = 𝑓(𝑥 + 2ℎ)

In general 𝐸 𝑘 𝑓(𝑥) = 𝑓(𝑥 + 𝑘ℎ)

𝐸 −1 is a linear operator defined by 𝐸 −1 𝑓(𝑥) = 𝑓(𝑥 − ℎ).

Then 𝐸𝐸 −1 = 𝐸 −1 𝐸 = 𝐼

Also (𝐸 −1 )𝑘 = 𝐸 −𝑘

In particular 𝐸 −1 𝑦𝑗 = 𝑦𝑗−1 and 𝐸 −𝑘 𝑦𝑗 = 𝑦𝑗−𝑘 .



Relation between ∆ and 𝛁:

∆𝑓(𝑥) = 𝑓(𝑥 + ℎ) − 𝑓(𝑥) = 𝐸𝑓(𝑥) − 𝐼𝑓(𝑥) = (𝐸 − 𝐼)𝑓(𝑥)

∆2 𝑓(𝑥) = 𝑓(𝑥 + 2ℎ) − 2𝑓(𝑥 + ℎ) + 𝑓(𝑥)

= 𝐸 2 𝑓(𝑥) − 2𝐸𝐼𝑓(𝑥) + 𝐼 2 𝑓(𝑥)

= (𝐸 − 𝐼)2 𝑓(𝑥)

Similarly ∆𝑘 𝑓(𝑥) = (𝐸 − 𝐼)𝑘 𝑓(𝑥)

Thus ∆≡ 𝐸 − 𝐼

∇𝑓(𝑥) = 𝑓(𝑥) − 𝑓(𝑥 − ℎ) = 𝐼𝑓(𝑥) − 𝐸 −1 𝑓(𝑥) = (𝐼 − 𝐸 −1 )𝑓(𝑥)

∇2 𝑓(𝑥) = 𝑓(𝑥) − 2𝑓(𝑥 − ℎ) + 𝑓(𝑥 − 2ℎ)

= 𝐼𝑓(𝑥) − 2𝐼𝐸 −1 𝑓(𝑥) + (𝐸 −1 )2 𝑓(𝑥)

= (𝐼 − 𝐸 −1 )2 𝑓(𝑥)

Similarly ∇𝑘 𝑓(𝑥) = (𝐼 − 𝐸 −1 )𝑘 𝑓(𝑥)

Thus ∇ ≡ 𝐼 − 𝐸 −1

Now ∇𝑘 𝑦𝑗 = (𝐼 − 𝐸 −1 )𝑘 𝑦𝑗

= (𝐸𝐸 −1 − 𝐸 −1 )𝑘 𝑦𝑗 = (𝐸 − 𝐼)𝑘 𝐸 −𝑘 𝑦𝑗

= ∆𝑘 (𝐸 −𝑘 𝑦𝑗 ) = ∆𝑘 𝑦𝑗−𝑘

∴ ∇𝑘 𝑦𝑗 = ∆𝑘 𝑦𝑗−𝑘

Some relations among the operators:


1. ∆ − ∇≡ ∆∇

2. ∇𝐸 ≡ 𝐸∇≡ ∆

3. (𝐼 + ∆)(𝐼 − ∇) ≡ 𝐼
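
These identities are easy to check numerically. The sketch below (the helper names Delta and Nabla are illustrative) verifies relation 1, ∆ − ∇ ≡ ∆∇, on a sample function:

import math

h = 0.1
f = math.sin

def Delta(g):   # forward difference operator: (Δg)(x) = g(x + h) − g(x)
    return lambda x: g(x + h) - g(x)

def Nabla(g):   # backward difference operator: (∇g)(x) = g(x) − g(x − h)
    return lambda x: g(x) - g(x - h)

x = 0.7
lhs = Delta(f)(x) - Nabla(f)(x)   # (Δ − ∇) f(x)
rhs = Delta(Nabla(f))(x)          # (Δ∇) f(x)
print(lhs, rhs)                   # the two values agree up to rounding error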

Principle of Interpolation:
Let 𝑓: [𝑎, 𝑏] → ℝ be a continuous function and let 𝑥0, 𝑥1, …, 𝑥𝑛 be distinct points in
[𝑎, 𝑏]. Suppose the analytical formula representing 𝑓 is unknown, but its values at
𝑥0, 𝑥1, …, 𝑥𝑛 are known, given by 𝑦𝑟 = 𝑓(𝑥𝑟), 𝑟 = 0, 1, …, 𝑛. The object of interpolation
is to find an approximate value of 𝑓(𝑥∗) for 𝑥∗ ∈ [𝑎, 𝑏] distinct from
𝑥0, 𝑥1, …, 𝑥𝑛. The principle of interpolation consists in approximating 𝑓 by a
known function 𝜑 (also defined on [𝑎, 𝑏]) such that
𝜑(𝑥𝑖) = 𝑓(𝑥𝑖), 𝑖 = 0, 1, …, 𝑛.
• The points 𝑥0, 𝑥1, …, 𝑥𝑛 are called interpolating points or nodes.

Polynomial Interpolation:
In polynomial interpolation the approximating function 𝜑(𝑥) is taken to be a
polynomial 𝐿𝑛(𝑥) of degree ≤ 𝑛 such that 𝐿𝑛(𝑥𝑖) = 𝑓(𝑥𝑖), 𝑖 = 0, 1, …, 𝑛. This
polynomial is called the interpolating polynomial.

Geometrical Interpretation:

LAGRANGE INTERPOLATION FORMULA:


Let y = f (x) be an unknown function whose values at x = xr are given by

yr = f (xr ), r = 0(1)n .

Then

L_n(x) = Σ_{i=0}^{n} y_i · [(x − x_0)(x − x_1) ⋯ (x − x_{i−1})(x − x_{i+1}) ⋯ (x − x_n)] / [(x_i − x_0)(x_i − x_1) ⋯ (x_i − x_{i−1})(x_i − x_{i+1}) ⋯ (x_i − x_n)]

is called Lagrange’s interpolation Polynomial (or formula).

In particular, if 𝑦 = 𝑓(𝑥) is given at 4 points by 𝑦𝑟 = 𝑓(𝑥𝑟), 𝑟 = 0(1)3, then the
corresponding Lagrange interpolation polynomial is

𝐿𝑛(𝑥) = 𝑓(𝑥0) [(𝑥 − 𝑥1)(𝑥 − 𝑥2)(𝑥 − 𝑥3)] / [(𝑥0 − 𝑥1)(𝑥0 − 𝑥2)(𝑥0 − 𝑥3)]
       + 𝑓(𝑥1) [(𝑥 − 𝑥0)(𝑥 − 𝑥2)(𝑥 − 𝑥3)] / [(𝑥1 − 𝑥0)(𝑥1 − 𝑥2)(𝑥1 − 𝑥3)]
       + 𝑓(𝑥2) [(𝑥 − 𝑥0)(𝑥 − 𝑥1)(𝑥 − 𝑥3)] / [(𝑥2 − 𝑥0)(𝑥2 − 𝑥1)(𝑥2 − 𝑥3)]
       + 𝑓(𝑥3) [(𝑥 − 𝑥0)(𝑥 − 𝑥1)(𝑥 − 𝑥2)] / [(𝑥3 − 𝑥0)(𝑥3 − 𝑥1)(𝑥3 − 𝑥2)]
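
A direct Python implementation of Lagrange's formula (an illustrative sketch; the function name lagrange is not from the notes):

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs[i], ys[i]) at x."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        li = 1.0
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])   # factor of the i-th basis polynomial
        total += ys[i] * li
    return total

# Example: interpolate y = x² at 4 nodes and evaluate at x = 2.5
print(lagrange([0, 1, 2, 3], [0, 1, 4, 9], 2.5))   # 6.25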

Newton-Gregory Forward Interpolation Formula


Let 𝑦 = 𝑓(𝑥) be an unknown function whose values at 𝑥 = 𝑥𝑟 are given by 𝑦𝑟 =
𝑓(𝑥𝑟 ), 𝑟 = 0(1)𝑛 where 𝑥𝑟 = 𝑥0 + 𝑟ℎ , 𝑟 = 0(1)𝑛 , ℎ > 0.

We want to construct a polynomial 𝐿𝑛 (𝑥) of degree ≤ 𝑛 such that

𝐿𝑛 (𝑥𝑟 ) = 𝑦𝑟 , 𝑟 = 0(1)𝑛 …………..(1)

Let 𝐿𝑛 (𝑥) = 𝑎0 + 𝑎1 (𝑥 − 𝑥0 ) + 𝑎2 (𝑥 − 𝑥0 )(𝑥 − 𝑥1 ) + 𝑎3 (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )(𝑥 − 𝑥2 ) +


⋯ … … ….

……….+𝑎𝑛 (𝑥 − 𝑥0 )(𝑥 − 𝑥1 ) … … … … . . (𝑥 − 𝑥𝑛−1 )



where 𝑎0 , 𝑎1 , … … … … 𝑎𝑛 are constants to be determined such that (1) holds.

Now 𝐿𝑛(𝑥0) = 𝑦0 ⟹ 𝑎0 = 𝑦0

𝐿𝑛(𝑥1) = 𝑦1 ⟹ 𝑎0 + 𝑎1(𝑥1 − 𝑥0) = 𝑦1

⟹ 𝑎1 = (𝑦1 − 𝑦0)/ℎ = ∆𝑦0/ℎ

𝐿𝑛(𝑥2) = 𝑦2 ⟹ 𝑎0 + 𝑎1(𝑥2 − 𝑥0) + 𝑎2(𝑥2 − 𝑥0)(𝑥2 − 𝑥1) = 𝑦2

Or, 𝑎0 + 𝑎1·2ℎ + 𝑎2·2ℎ·ℎ = 𝑦2

Or, 𝑦0 + 2∆𝑦0 + 2ℎ²𝑎2 = 𝑦2

⇒ 𝑎2 = (𝑦2 − 2𝑦1 + 𝑦0)/(2ℎ²) = ∆²𝑦0/(2!ℎ²)

𝐿𝑛(𝑥3) = 𝑦3 ⟹ 𝑎0 + 𝑎1(𝑥3 − 𝑥0) + 𝑎2(𝑥3 − 𝑥0)(𝑥3 − 𝑥1) + 𝑎3(𝑥3 − 𝑥0)(𝑥3 − 𝑥1)(𝑥3 − 𝑥2) = 𝑦3

Or, 𝑎0 + 𝑎1·3ℎ + 𝑎2·3ℎ·2ℎ + 𝑎3·3ℎ·2ℎ·ℎ = 𝑦3

Or, 𝑦0 + 3∆𝑦0 + 3∆²𝑦0 + 6ℎ³𝑎3 = 𝑦3

⇒ 𝑎3 = [𝑦3 − 𝑦0 − 3(𝑦1 − 𝑦0) − 3(𝑦2 − 2𝑦1 + 𝑦0)]/(6ℎ³) = ∆³𝑦0/(3!ℎ³)

Proceeding in this way we shall get

𝑎4 = ∆⁴𝑦0/(4!ℎ⁴), ………, 𝑎𝑛 = ∆ⁿ𝑦0/(𝑛!ℎⁿ)

∴ 𝐿𝑛(𝑥) becomes

𝐿𝑛(𝑥) = 𝑦0 + (∆𝑦0/ℎ)(𝑥 − 𝑥0) + (∆²𝑦0/(2!ℎ²))(𝑥 − 𝑥0)(𝑥 − 𝑥1) + (∆³𝑦0/(3!ℎ³))(𝑥 − 𝑥0)(𝑥 − 𝑥1)(𝑥 − 𝑥2)
       + ……… + (∆ⁿ𝑦0/(𝑛!ℎⁿ))(𝑥 − 𝑥0)(𝑥 − 𝑥1) ……… (𝑥 − 𝑥𝑛−1)

Let 𝑢 = (𝑥 − 𝑥0)/ℎ. Then 𝑥 − 𝑥𝑟 = (𝑢 − 𝑟)ℎ, 𝑟 = 0(1)𝑛

∴ 𝐿𝑛 (𝑥) becomes

𝐿𝑛(𝑥) = 𝑦0 + 𝑢∆𝑦0 + (𝑢(𝑢 − 1)/2!)∆²𝑦0 + (𝑢(𝑢 − 1)(𝑢 − 2)/3!)∆³𝑦0
       + ……… + (𝑢(𝑢 − 1) ……… (𝑢 − 𝑛 + 1)/𝑛!)∆ⁿ𝑦0

 = 𝑦0 + C(𝑢, 1)∆𝑦0 + C(𝑢, 2)∆²𝑦0 + ……… + C(𝑢, 𝑛)∆ⁿ𝑦0,

where C(𝑢, 𝑟) = 𝑢(𝑢 − 1) ……… (𝑢 − 𝑟 + 1)/𝑟! denotes the binomial coefficient.

This is called the Newton forward interpolation formula.
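
An illustrative Python sketch of this formula for equally spaced data (the function name newton_forward is not from the notes); it uses the forward differences ∆^k y0 taken from the top diagonal of the difference table:

def newton_forward(xs, ys, x):
    """Newton-Gregory forward interpolation at x for equally spaced xs."""
    n = len(xs)
    h = xs[1] - xs[0]
    # top diagonal of the difference table: y0, Δy0, Δ²y0, …
    col, diffs = list(ys), []
    while col:
        diffs.append(col[0])
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    u = (x - xs[0]) / h
    result, term = diffs[0], 1.0
    for k in range(1, n):
        term *= (u - (k - 1)) / k    # builds u(u−1)…(u−k+1)/k!
        result += term * diffs[k]
    return result

# Example: y = x² at x = 1, 2, 3, 4; interpolate at x = 2.5
print(newton_forward([1, 2, 3, 4], [1, 4, 9, 16], 2.5))   # 6.25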

Newton-Gregory Backward Interpolation Formula


Let 𝑦 = 𝑓(𝑥) be an unknown function whose values at 𝑥 = 𝑥𝑟 are given by 𝑦𝑟 =
𝑓(𝑥𝑟 ), 𝑟 = 0(1)𝑛 where 𝑥𝑟 = 𝑥0 + 𝑟ℎ , 𝑟 = 0(1)𝑛 , ℎ > 0.

Then Newton backward interpolation formula is given by

𝐿𝑛(𝑥) = 𝑦𝑛 + (∇𝑦𝑛/ℎ)(𝑥 − 𝑥𝑛) + (∇²𝑦𝑛/(2!ℎ²))(𝑥 − 𝑥𝑛)(𝑥 − 𝑥𝑛−1) + (∇³𝑦𝑛/(3!ℎ³))(𝑥 − 𝑥𝑛)(𝑥 − 𝑥𝑛−1)(𝑥 − 𝑥𝑛−2)
       + ……… + (∇ⁿ𝑦𝑛/(𝑛!ℎⁿ))(𝑥 − 𝑥𝑛)(𝑥 − 𝑥𝑛−1) ……… (𝑥 − 𝑥1)

Let 𝑠 = (𝑥 − 𝑥𝑛)/ℎ. Then 𝑥 − 𝑥𝑛−𝑟 = (𝑠 + 𝑟)ℎ, 𝑟 = 0(1)(𝑛 − 1)

∴ 𝐿𝑛 (𝑥) becomes

𝐿𝑛(𝑥) = 𝑦𝑛 + 𝑠∇𝑦𝑛 + (𝑠(𝑠 + 1)/2!)∇²𝑦𝑛 + (𝑠(𝑠 + 1)(𝑠 + 2)/3!)∇³𝑦𝑛
       + ……… + (𝑠(𝑠 + 1) ……… (𝑠 + 𝑛 − 1)/𝑛!)∇ⁿ𝑦𝑛

 = 𝑦𝑛 + C(𝑠, 1)∇𝑦𝑛 + C(𝑠 + 1, 2)∇²𝑦𝑛 + ……… + C(𝑠 + 𝑛 − 1, 𝑛)∇ⁿ𝑦𝑛,

where C(𝑠 + 𝑟 − 1, 𝑟) = 𝑠(𝑠 + 1) ……… (𝑠 + 𝑟 − 1)/𝑟!

Since we have the relation ∇𝑘 𝑦𝑗 = ∆𝑘 𝑦𝑗−𝑘 , 𝐿𝑛 (𝑥) can be written as



𝐿𝑛(𝑥) = 𝑦𝑛 + 𝑠∆𝑦𝑛−1 + (𝑠(𝑠 + 1)/2!)∆²𝑦𝑛−2 + (𝑠(𝑠 + 1)(𝑠 + 2)/3!)∆³𝑦𝑛−3
       + ……… + (𝑠(𝑠 + 1) ……… (𝑠 + 𝑛 − 1)/𝑛!)∆ⁿ𝑦0
𝑛!

This is the Newton backward difference interpolation formula with the coefficients expressed in
terms of ∆.
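
An illustrative Python sketch of the backward formula (the function name newton_backward is not from the notes); it uses the backward differences ∇^k y_n taken from the bottom diagonal of the difference table:

def newton_backward(xs, ys, x):
    """Newton-Gregory backward interpolation at x for equally spaced xs."""
    n = len(xs)
    h = xs[1] - xs[0]
    # bottom diagonal of the difference table: y_n, ∇y_n, ∇²y_n, …
    col, diffs = list(ys), []
    while col:
        diffs.append(col[-1])
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    s = (x - xs[-1]) / h
    result, term = diffs[0], 1.0
    for k in range(1, n):
        term *= (s + (k - 1)) / k    # builds s(s+1)…(s+k−1)/k!
        result += term * diffs[k]
    return result

# Example: y = x² at x = 1, 2, 3, 4; interpolate at x = 3.5
print(newton_backward([1, 2, 3, 4], [1, 4, 9, 16], 3.5))   # 12.25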
