Numerical Analysis

Contents

1.1 Introduction
1.2 Accuracy of Numbers
1.3 Error
1.4 Sources of Errors
1.4.2 Truncation Errors
1.4.3 Computational Errors
1.4.4 Round-off Error
1.4.6 Absolute, Relative and Percentage Errors
1.5 Propagation of Errors and Stability
2.1 Algebraic Equation
2.2 Bisection Method of Solving a Nonlinear Equation
2.2.1 Advantages of the Bisection Method
2.2.2 Drawbacks of the Bisection Method
2.3 Regula-Falsi Method (False Position Method)
2.4 Secant Method
2.5 Fixed-Point Iteration Method
2.6 The Newton-Raphson Method
2.6.1 Condition under which Newton's Method Converges
3.1 Introduction
3.2 Methods of Solution of Systems of Linear Equations
3.2.1 The Analytical Method of Solution
3.2.2 Numerical Methods
3.2.2.1 The Direct Methods of Numerical Solution
3.2.2.1.1 Gauss Elimination Method
3.2.2.1.2 Gauss-Jordan Method
3.2.2.1.3 The Decomposition Method
3.2.2.2 Iteration Methods
3.2.3 Systems of Nonlinear Equations
3.2.3.1 Newton's Method for Systems of Nonlinear Equations
3.4 Error Estimates and Conditioning of Matrices
4.1 Introduction
4.2 Finite Operators
4.3 Different Representations of the Same Difference Table
4.4 The Relation between Operators
5.1 Linear Interpolation
5.2 Quadratic Interpolation
5.3 Lagrange's Interpolation Formula
5.3.1 Compact Form of Lagrange's Polynomial
5.3.2 Error in Lagrange's Polynomial
5.4 Divided Difference Formula
5.5 Newton Interpolation Formula
5.4.1 Newton Divided Difference Polynomial
6.1 Differentiation
6.1.1 Differentiation Formulas Based on Newton's Forward Interpolation Formula
6.1.2 Maximum and Minimum Values of a Tabulated Function
6.2 Integration (Trapezoidal and Simpson's Rules)
6.2.1 Using the Lagrange Interpolation Polynomial
6.2.2 Error Due to Integral Approximation
6.2.3 Newton-Cotes Quadrature Formulas
6.2.3 Convergence Discussion of the Newton-Cotes Formula
6.3 Inverse Interpolation
6.4 Numerical Approximation of Improper Integrals
6.5 Integrals of Discontinuous Functions
6.6 Multiple Integrals

Mathematics Program Numerical Analysis I


CHAPTER ONE

APPROXIMATION AND SOURCE OF ERRORS

General objective
On completion of the course, successful students will be able to
 understand error,
 understand sources of errors,
 identify absolute and relative errors.

1.1 Introduction
Numerical analysis is a subject concerned with devising methods for approximating the
solutions of mathematically expressed problems. Such problems may be formulated, for
example, in terms of algebraic or transcendental equations, ordinary or partial
differential equations, or integral equations. More often than not, such problems cannot
be solved by exact methods. Moreover, the mathematical models themselves ordinarily do not
describe the physical problems exactly, so it is often more appropriate to seek an
approximate solution. In general, numerical analysis does not give exact solutions;
instead it attempts to devise methods that yield an approximation differing from exactness
by less than a specified tolerance. The efficiency of the method used to solve a given
problem depends both on the accuracy required of the method and on the ease with which it
can be implemented.

1.2 Accuracy of Numbers


There are two types of numbers, exact and approximate:
Exact numbers: numbers in which there is no uncertainty and no approximation are said to
be exact numbers.

Examples: … etc.

Approximate numbers: these are numbers that represent a value only to a certain degree of
accuracy, not the exact value.
Example: 2.7182 is an approximate value of e.

Chopping and Rounding off
Non-exact (approximate) numbers can be represented with a finite number of digits
of precision in two ways.

Chopping: If n digits are used to represent a non-terminating number, the simplest
scheme is to keep the first n digits and chop off all remaining digits.

Rounding off: Alternatively, one can round off to n digits by examining the values of the
remaining digits; the rounding operation is a modified version of chopping. To
round off a number to n significant digits, we adopt the following procedure:
 Discard all digits to the right of the nth digit, and if the discarded part is:
i. less than half a unit in the nth place, leave the nth digit unaltered;
ii. greater than half a unit in the nth place, increase the nth digit by one unit;
iii. exactly half a unit in the nth place, increase the nth digit by one unit if it is odd,
otherwise leave it unchanged.
Note:
 The number thus rounded off is said to be correct to n significant figures/digits.
 The digits used to express a number are called significant digits (or figures).
Example: The numbers 2.143, 0.3312 and 1.065 contain four significant digits, while 0.0012
has only two significant digits, 1 and 2; the zeros serve only to fix the position of the
decimal point.
Example: Round off the following numbers correct to four significant figures.

Solution
Here we have to retain the first four significant figures. Therefore,
becomes
10.0537 becomes 10.05
0.583251 becomes 0.5833
3.14159 becomes 3.142
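The rounding rules above can be sketched in code. The helper name `round_sig` is ours, not the text's; Python's `decimal` module is used because its `ROUND_HALF_EVEN` mode implements exactly rule (iii), rounding a discarded part of exactly half a unit to the nearest even digit.

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_sig(value, n):
    # Round value to n significant figures. ROUND_HALF_EVEN implements
    # rule (iii): a discarded part of exactly half a unit rounds to even.
    d = Decimal(str(value))
    if d == 0:
        return 0.0
    # d.adjusted() is the exponent of the most significant digit, so the
    # n-th significant digit sits at decimal position adjusted() - n + 1.
    quantum = Decimal(1).scaleb(d.adjusted() - n + 1)
    return float(d.quantize(quantum, rounding=ROUND_HALF_EVEN))

print(round_sig(10.0537, 4))   # 10.05
print(round_sig(3.14159, 4))   # 3.142
```

Converting through `str(value)` keeps the decimal digits the user wrote, avoiding binary floating-point artifacts in the rounding decision.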

Activity 1.1
1. Round off the following numbers correct to four significant figures
a.
b.
2. Find the sum of the following approximate numbers, each being correct to its last
figure

1.3 Error
Analysis of errors is the central concern in the study of numerical analysis; we will
therefore investigate the sources and types of errors that may occur in a given problem
and the subsequent propagation of errors.
Errors in the solution of a problem are due to the following reasons:
 To solve physical problems, mathematical models are formulated to describe
them and these models do not describe the problems exactly and as a result errors
are introduced.
 The methods used to solve the mathematical models are often not exact and as a
consequence errors are introduced.
 A computer has a finite word length and so only a fixed number of digits of a
number are inserted and as a consequence errors are introduced.

From the numerical point of view, "error" does not mean "mistake": the error is the
difference between the exact value and the approximate value,
i.e. Error = exact value - approximate value.

1.4 Sources of errors.


When we do numerical analysis there are several possible sources of errors.

The errors induced by the sources mentioned above are classified as:
1.4.1 Inherent Errors: These are errors that we cannot avoid; unless the mathematical
models formulated to describe the physical problems are exact, such errors will always be
induced, which is why they are called inherent errors.
- Inherent errors already exist in the statement of the problem before its solution.
- They are caused either by the given data being approximate, or by the limitations of
the computing aids: mathematical tables, calculators or digital computers.

1.4.2 Truncation errors


The mathematical models may be formulated as algebraic, transcendental or other types of
equations, and such equations often cannot be solved analytically.

- These are errors caused by using approximate formulae in computations.

- They arise from the process of truncating an infinite process or expression.
Hence, we use numerical methods to obtain the solutions of such equations, and in the
process errors are induced. Such errors are called truncation errors (errors due to the
method).

1.4.3 Computational Errors


Computational tools have limited space to store digits; a number with more digits than the
tool can accommodate will be truncated, and the resulting errors are called computational
errors.

1.4.4 Round off error:


This is the type of error that arises from the process of rounding numbers during the
computation.

1.4.6 Absolute, relative and percentage errors.


There are three ways to express the size of the error in a computed result.
Let x be the true value of a quantity and x_1 an approximate value of it. Then:
i. Absolute error: if x_1 is the approximate value of the exact number x, then the
absolute error, denoted E_a, is defined by E_a = |x - x_1|.
Remark:
If the number is rounded to N decimal places, then the absolute error satisfies
E_a <= (1/2) * 10^(-N).

ii. Relative error: it is often more meaningful to work with the relative error,
which is the absolute error normalized by the exact value.
- It is denoted by E_r, and defined as: E_r = E_a / |x| = |x - x_1| / |x|.

iii. The percentage error is defined by: E_p = 100 * E_r.

Example:
An approximate value of is given by and its true value is
. Find the absolute, relative and percentage errors.
Solution
i.
ii.

iii.
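The three definitions translate directly into code. The helper name `errors` and the sample values (pi approximated by 22/7) are our own illustration, not the textbook's example:

```python
def errors(x_true, x_approx):
    # Absolute, relative and percentage errors as defined above.
    e_abs = abs(x_true - x_approx)
    e_rel = e_abs / abs(x_true)   # requires x_true != 0
    e_pct = 100 * e_rel
    return e_abs, e_rel, e_pct

# Illustrative values: pi = 3.141593 approximated by 22/7.
e_abs, e_rel, e_pct = errors(3.141593, 22 / 7)
print(round(e_abs, 6), round(e_rel, 6), round(e_pct, 4))  # 0.001264 0.000402 0.0402
```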

 Activity 1.2
1. Which of the following best approximates

a.
b.
c.
2. An approximate value is given by x_1 = 22/7 = 3.142857 and its true value is
π. Find the absolute and relative errors.
3. Three approximate values of the number 1/3 are given as 0.30, 0.33 and 0.34. Which of
the three is the best approximation?

1.5 Propagation of Errors and Stability

Definition An algorithm is a procedure that describes a finite sequence of steps to be


performed in a specified order to solve a given problem.

Definition Propagated Errors is an error in the succeeding steps of an algorithm due to an


error at the initial step.
Propagated error is of critical importance.
- If errors are magnified continuously as the algorithm proceeds, they will eventually
overshadow the true value and destroy the validity of the algorithm; such an algorithm is
called unstable. If errors made at the initial stage die out as the algorithm continues,
the algorithm is said to be stable. Usually the initial error will not have died out
entirely by the last stage of the algorithm, so to assess the stability of an algorithm we
consider two cases. Suppose that an error E_0 is introduced at the initial stage of the
algorithm and that after n subsequent operations the resulting propagated error is E_n:
- If |E_n| ≈ k n E_0, where k is a constant, the growth of the propagated error is linear
and the algorithm is stable. Linear growth of error is usually unavoidable and is
acceptable, and algorithms that show linear growth are considered stable.
- If |E_n| ≈ k^n E_0 for some k > 1, the propagated error grows exponentially. Exponential
growth of error should be avoided, and algorithms that show exponential growth are said to
be unstable.
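The two growth models can be contrasted numerically. The snippet below is our own toy illustration (the values E_0 = 1e-6, k = 2 and n = 30 are arbitrary choices), not an example from the text:

```python
# Contrast the two growth models for a propagated error:
# E_n = k*n*E0 (linear, stable) versus E_n = k**n * E0 (exponential, unstable).
e0 = 1e-6   # initial error
n = 30      # number of subsequent operations

linear = 1.0 * n * e0        # k = 1: error grows linearly
exponential = 2.0 ** n * e0  # k = 2: error grows exponentially

print(linear)       # about 3e-05: still negligible after 30 steps
print(exponential)  # about 1.07e3: the error has swamped the computation
```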

Exercise 1.1

2. Find the relative error of the number 8.6 if both of its digits are correct.
3. Evaluate the sum of the following:
a.
b.
4. Evaluate the sum = 3+ 5 + 7 to four significant digits and find its absolute and
relative errors.

CHAPTER 2

SOLVING NONLINEAR EQUATIONS


Objective
 Understand a range of numerical methods for solving nonlinear equations
 Comprehend the convergence properties of the numerical methods
 Use the numerical methods to solve examples of finding roots of a nonlinear equation
 Enumerate the advantages and disadvantages of each method

2.1 Algebraic equation


Definition
An expression of the form f(x) = a_0 x^n + a_1 x^(n-1) + ... + a_(n-1) x + a_n = 0,
where a_0, a_1, ..., a_n are constants and n is a positive integer, is called an algebraic
equation of degree n in x.
Root of the equation
Definition
A value of x satisfying the equation f(x) = 0 is called a root of the equation. The roots
of linear, quadratic, cubic or bi-quadratic equations can be obtained by standard
algebraic methods, but transcendental equations and equations of higher degree cannot
easily be solved in this way. Such equations are solved by numerical methods such as the
bisection, secant, fixed-point iteration, Newton-Raphson and Regula-Falsi methods.

2.2 Bisection Method of Solving a Nonlinear Equation


What is the bisection method and what is it based on?
After reading this chapter, you should be able to:
1. Follow the algorithm of the bisection method of solving a nonlinear equation,
2. Use the bisection method to solve examples of finding roots of a nonlinear equation, and
3. Enumerate the advantages and disadvantages of the bisection method
One of the first numerical methods developed to find the root of a nonlinear equation
was the bisection method. The method is based on the following theorem.

Theorem
An equation f(x) = 0, where f(x) is a real continuous function, has at least one root
between x_l and x_u if f(x_l) f(x_u) < 0 (see Figure 1). Note that if f(x_l) f(x_u) > 0,
there may or may not be a root between x_l and x_u (Figure 2). If f(x_l) f(x_u) < 0, there
may be more than one root between x_l and x_u; the theorem only guarantees at least one
root between x_l and x_u.
Bisection method
Since the method is based on finding a root between two points, it falls under the
category of bracketing methods. Since the root is bracketed between two points, x_l and
x_u, one can find the midpoint x_m between x_l and x_u. This gives two new intervals,
[x_l, x_m] and [x_m, x_u].

Figure 1 At least one root exists between the two points if the function is real, continuous, and
changes sign.

Figure 2 If the function f(x) does not change sign between the two points, roots of the
equation f(x) = 0 may still exist between the two points.

Suppose we have an equation

f(x) = 0,  (2.1)

where the function f is continuous on [a_1, b_1] with f(a_1) < 0 and f(b_1) > 0, or vice
versa. To find a root of (2.1) lying in the interval [a_1, b_1], we divide the interval
in half and then:

- If f((a_1 + b_1)/2) = 0, then (a_1 + b_1)/2 is the root of the equation.

- If f((a_1 + b_1)/2) > 0, then the root lies in [a_1, (a_1 + b_1)/2].

- If f((a_1 + b_1)/2) < 0, then the root lies in [(a_1 + b_1)/2, b_1].

The newly reduced interval, denoted [a_2, b_2], is again halved and the same investigation
is made. Finally, at some stage in the process we get either the exact root of (2.1) or a
finite sequence of nested intervals [a_1, b_1] ⊇ [a_2, b_2] ⊇ ... such that f(a_n) < 0 and
f(b_n) > 0, where

b_n - a_n = (b_1 - a_1)/2^(n-1) for n ≥ 1.

We take the midpoint of the last subinterval, x_n = (a_n + b_n)/2, as the desired
approximate root of (2.1).
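The halving scheme above can be sketched as a short routine. This is our own minimal implementation (the name `bisect` and the stopping rule based on the half-interval width are our choices, not the text's); the test function x^3 + 4x^2 - 10 on [1, 2] is the one used in Example 2.2:

```python
def bisect(f, a, b, tol=1e-5, max_iter=100):
    # Bisection: repeatedly halve [a, b], keeping the half on which f
    # changes sign. Assumes f is continuous and f(a)*f(b) < 0.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2
        if f(mid) == 0 or (b - a) / 2 < tol:
            return mid
        if f(a) * f(mid) < 0:
            b = mid   # root is in the left half
        else:
            a = mid   # root is in the right half
    return (a + b) / 2

# Root of x^3 + 4x^2 - 10 = 0 on [1, 2]
print(bisect(lambda x: x**3 + 4 * x**2 - 10, 1, 2))  # close to 1.36523
```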

Example 2.1: Find the root of the equation sin x = (x/2)^2 in [1.5, 2] using the bisection
method.
Solution
We obtain the following:

n | a_n | b_n | x_n = (a_n + b_n)/2
1 | 1.5 | 2 | 1.75
2 | 1.75 | 2 | 1.875
3 | 1.875 | 2 | 1.9375
4 | 1.875 | 1.9375 | 1.90625
5 | 1.90625 | 1.9375 | 1.92187

Theorem:
Let f be continuous on [a_1, b_1] and suppose f(a_1) f(b_1) < 0. Then the bisection
method generates a sequence {x_n} approximating the root ξ with the property

|x_n - ξ| ≤ (b_1 - a_1)/2^n for n ≥ 1.

Proof: For each n ≥ 1, we have

b_n - a_n = (b_1 - a_1)/2^(n-1) and ξ ∈ (a_n, b_n) for n ≥ 1.

Since x_n = (a_n + b_n)/2 for all n ≥ 1, it follows that

|x_n - ξ| ≤ (b_n - a_n)/2 = (b_1 - a_1)/2^n.

Example 2.2: Determine approximately how many iterations are necessary to solve

x^3 + 4x^2 - 10 = 0 with an accuracy of 10^(-5), for a_1 = 1 and b_1 = 2.

Solution: This requires finding an integer n that satisfies

|x_n - ξ| ≤ (b_1 - a_1)/2^n = 2^(-n) ≤ 10^(-5).

To determine n we use logarithms to base 10. Since 2^(-n) ≤ 10^(-5),

log10 2^(-n) ≤ log10 10^(-5), i.e. -n log10 2 ≤ -5,

so n ≥ 5/log10 2 ≈ 16.6.

It would appear to require 17 iterations to obtain an approximation accurate to 10^(-5).
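The same count can be computed directly; the helper name `iterations_needed` is our own sketch of the bound just derived:

```python
import math

def iterations_needed(a1, b1, tol):
    # Smallest n with (b1 - a1)/2**n <= tol,
    # i.e. n >= log10((b1 - a1)/tol) / log10(2).
    return math.ceil(math.log10((b1 - a1) / tol) / math.log10(2))

print(iterations_needed(1, 2, 1e-5))  # 17
```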

2.2.1 Advantages of bisection method

a) The bisection method is always convergent. Since the method brackets the root, the
method is guaranteed to converge.
b) As iterations proceed, the interval is halved, so one can guarantee a bound on the
error in the solution of the equation.

2.2.2 Drawbacks of bisection method


a) The convergence of the bisection method is slow as it is simply based on halving the
interval.
b) Even if one of the initial guesses is close to the root, it may still take a large
number of iterations to reach the root.
c) If a function f(x) is such that it just touches the x-axis (Figure 6), such as
f(x) = x^2 = 0, one will be unable to find a lower guess x_l and an upper guess x_u such
that f(x_l) f(x_u) < 0.
d) For functions f(x) where there is a singularity and the sign reverses at the
singularity, the bisection method may converge on the singularity (Figure 7). An example
is f(x) = 1/x, where initial guesses x_l and x_u chosen on opposite sides of the
singularity satisfy f(x_l) f(x_u) < 0.

However, the function is not continuous, and the theorem that a root exists is not
applicable.

Figure: The equation f(x) = x^2 = 0 has a single root, at x = 0, that cannot be bracketed.

Figure: The equation f(x) = 1/x = 0 has no root, but f(x) changes sign.

Example 2.3: Solve the equation for the root between x = 2 and x = 4 by the
method of bisection.
Solution:
Here f(x) is continuous in [2, 4].
Since f(2) and f(4) are of opposite signs, the root lies between 2 and 4.
Iteration 1

Since , the root lies between 2 and 3.

Iteration 2

Since , the root lies between 2.5 and 3.

Iteration 3

Since , the root lies between 2.75 and 3.

Iteration 4

Since , the root lies between 2.875 and 3.

Iteration 5

and so on.

Hence, the approximate value of the root after 5 iterations is

Example 2.4

Find a real root of the equation , using the bisection method.

Solution
Since f(1) is negative and f(2) is positive, a root lies between 1 and 2, and therefore

which is positive.

Hence the root lies between 1 and 1.5, and we obtain

, which is negative.

Hence the root lies between 1.25 and 1.5, and we obtain

The procedure is repeated and the successive approximations are
etc.

Example 2.5: Find the real root of the equation x^3 - 2x - 5 = 0, using the bisection
method.
Solution:
Let f(x) = x^3 - 2x - 5. Then f(2) = -1 < 0 and f(3) = 16 > 0.
Hence the root lies between 2 and 3, and we take x_1 = (2 + 3)/2 = 2.5.

Since f(2.5) > 0, we choose [2, 2.5] as the new interval. Then

Proceeding in this way, the following table is obtained.

n | a_n | b_n | x_n | f(x_n)
1 | 2 | 3 | 2.5 | 5.6250
2 | 2 | 2.5 | 2.25 | 1.8906
3 | 2 | 2.25 | 2.125 | 0.3457
4 | 2 | 2.125 | 2.0625 | -0.3513
5 | 2.0625 | 2.125 | 2.09375 | -0.0089
6 | 2.09375 | 2.125 | 2.10938 | 0.1668
7 | 2.09375 | 2.10938 | 2.10156 | 0.07856
8 | 2.09375 | 2.10156 | 2.09766 | 0.03471
9 | 2.09375 | 2.09766 | 2.09570 | 0.00195
10 | 2.09375 | 2.09570 | 2.09473 | -0.0035
11 | 2.09375 | 2.09473 | 2.09424 |
12 | 2.09424 | 2.09473 | |

At n = 12, it is seen that the difference between two successive iterates is 0.0005, which
is less than 0.001.

 Activity 2.1
1. Solve the following equations using the bisection method, correct to 2 decimal places
a.
b.
2. Find the real root of the equation on the interval with an error of

2.3 Regula-Falsi method (False Position Method)


While the bisection method is simple and has a straightforward error analysis, it is not
very efficient. For most functions we can improve the rate at which we converge to the
root. One such method is the method of linear interpolation, the oldest method for finding
a real root of an equation. In this method we take two points x_0 and x_1 such that f(x_0)
and f(x_1) are of opposite signs, i.e. f(x_0) f(x_1) < 0. The root must lie between x_0
and x_1, since the graph crosses the x-axis between these two points.
Now the equation of the chord joining the two points (x_0, f(x_0)) and (x_1, f(x_1)) is

y - f(x_0) = [(f(x_1) - f(x_0))/(x_1 - x_0)] (x - x_0).  (*)

In this method the curve between the points A(x_0, f(x_0)) and B(x_1, f(x_1)) is replaced
by the chord AB joining the points A and B, and the point of intersection of the chord
with the x-axis is taken as an approximation to the root, which is obtained by putting
y = 0 in (*). Thus, we have

x_2 = x_0 - f(x_0)(x_1 - x_0)/(f(x_1) - f(x_0)).

Mathematics Program Numerical Analysis I


18
If now f(x_2) and f(x_0) are of opposite signs, the root lies between x_0 and x_2. Then we
replace the part of the curve between the points (x_0, f(x_0)) and (x_2, f(x_2)) by the
chord joining these points, and where this chord intersects the x-axis we get the second
approximation to the root, which is given by:

x_3 = x_0 - f(x_0)(x_2 - x_0)/(f(x_2) - f(x_0)).

The procedure is repeated until the root is found to the desired accuracy.
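The procedure just described can be sketched as follows. This is our own minimal implementation (the name `regula_falsi` and the stopping rule based on successive approximations are our choices); the test equation x^3 + 4x^2 - 10 = 0 is borrowed from Example 2.2, not from this section:

```python
def regula_falsi(f, x0, x1, tol=1e-6, max_iter=100):
    # False position: intersect the chord through (x0, f(x0)) and
    # (x1, f(x1)) with the x-axis, then keep the sub-interval on which
    # f changes sign.
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    x_prev = x0
    for _ in range(max_iter):
        x2 = x0 - f0 * (x1 - x0) / (f1 - f0)  # chord meets the x-axis
        f2 = f(x2)
        if f2 == 0 or abs(x2 - x_prev) < tol:
            return x2
        if f0 * f2 < 0:
            x1, f1 = x2, f2   # root between x0 and x2
        else:
            x0, f0 = x2, f2   # root between x2 and x1
        x_prev = x2
    return x_prev

print(regula_falsi(lambda x: x**3 + 4 * x**2 - 10, 1, 2))  # close to 1.36523
```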

Example 2.6: Solve the equation by the Regula-Falsi method.

Solution
Let

and . So let us choose and

Hence

Since , the root lies between 2.5 and 2.801252.

Now,

Since , the root lies between 2.5 and 2.798492.

Hence

Hence, the root correct to three decimal places is 2.798.

Example 2.7: Solve the equation by the Regula-Falsi method, starting with x_0 = 2.5 and
x_1 = 3, correct to 3 decimal places.
Solution: Let

Since f(2.5) f(3) < 0, the root lies between 2.5 and 3. On taking x_0 = 2.5 and x_1 = 3,
we have

Therefore the root lies between 2.5 and 2.801252.

Therefore the root lies between 2.5 and

Hence the root correct to three decimal places is 2.798.

2.4 Secant Method
This method is quite similar to the Regula-Falsi method, except that the condition
f(x_0) f(x_1) < 0 is not required.

Here the graph of the function in the neighborhood of the root is approximated by a
secant line (chord); further, the interval at each iteration may not contain the root. If
initially the limits of the interval are x_0 and x_1, the first approximation is given by

x_2 = x_1 - f(x_1)(x_1 - x_0)/(f(x_1) - f(x_0)).

The formula for successive approximations in general form is

x_(n+1) = x_n - f(x_n)(x_n - x_(n-1))/(f(x_n) - f(x_(n-1))), n ≥ 1.

If at any stage f(x_n) = f(x_(n-1)), this method fails. Thus, this method does not
always converge, whereas the Regula-Falsi method always converges. The advantage of this
method lies in the fact that if it converges, then it converges more rapidly than the
Regula-Falsi method.
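The general formula can be sketched as a short routine. This is our own implementation (the name `secant` is ours), and since the equation of Example 2.8 is not preserved in the text, the demonstration uses an illustrative equation of our own, cos x - x = 0:

```python
import math

def secant(f, x0, x1, tol=1e-6, max_iter=50):
    # Secant iteration:
    # x_(n+1) = x_n - f(x_n)(x_n - x_(n-1)) / (f(x_n) - f(x_(n-1))).
    # No sign condition is required; the interval need not contain the root.
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            raise ZeroDivisionError("f(x_n) = f(x_(n-1)): the method fails")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Illustrative equation (ours, not the text's): cos x - x = 0.
print(round(secant(lambda x: math.cos(x) - x, 0.0, 1.0), 4))  # 0.7391
```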

Example 2.8: Find the root of the equation using the secant method, correct to four
decimal places.
Solution: Let . Taking the initial approximations
so that , then by the secant method we have

Repeating this process, the successive approximations are

Hence, the root is 0.5177

Example 2.9: Find the root of the equation using the secant method, correct to four
decimal places.
Solution
Let
. Let and
Then by the secant method, we have

Since , the root is between 1 and 0.31467.

Repeating this process, the successive approximations are

Hence, the root is 0.5177.

 Activity 2.2
1. Solve the following equations using the Regula-Falsi and secant methods, correct to 2
decimal places
c.
d.
2. Explain the difference between the secant method and the Regula-Falsi method.
3. What is the advantage of the Regula-Falsi method over the secant method?

2.5 Fixed-Point Iteration Method


The method known as fixed-point iteration is a very useful way to obtain a root of
f(x) = 0. To use the method, we rearrange f(x) = 0 into an equivalent form x = g(x),
which usually can be done in several ways.

Suppose that x_(n+1) = g(x_n) and x_n → ξ, where ξ is a root of f(x) = 0; it follows that
ξ = g(ξ). Whenever we have ξ = g(ξ), ξ is said to be a fixed point of the function g, and
the iterative form is:

x_(n+1) = g(x_n), for n = 0, 1, 2, ...

Let us take an example where f(x) = 0 can be written as x = g(x):

x^2 - 2x - 3 = (x - 3)(x + 1) = 0.

The roots are x = 3 and x = -1.

Suppose we rearrange to get

1. x = √(2x + 3)

If we start with x_0 = 4 and iterate with the fixed-point algorithm, successive values of
x_n are:

x_1 = √(2(4) + 3) = √11 = 3.31662

x_2 = 3.10375, x_3 = 3.03438, ...; it appears that the values are converging to the root
ξ = 3. Since lim x_(n+1) = lim g(x_n) = g(lim x_n), ξ = 3 is a fixed point of
g(x) = √(2x + 3).

2. Another rearrangement of f(x) = 0 is: x = 3/(x - 2).
Let us start with x_0 = 4; the successive values are then:

x_1 = 3/(4 - 2) = 1.5
x_2 = 3/(1.5 - 2) = -6
x_3 = 3/(-6 - 2) = -0.375
x_4 = -1.26316, x_5 = -0.91935, x_6 = -1.02762, ...

It seems that the iteration converges to the root ξ = -1:
lim x_n = ξ = g(ξ), so ξ = -1 is a fixed point of g(x) = 3/(x - 2).

3. Consider a third rearrangement: x = (x^2 - 3)/2.
Starting with x_0 = 4, we get

x_1 = 6.5, x_2 = 19.625, x_3 = 191.07, ...

The iteration diverges.

Hence, lim x_n does not exist, and the iteration locates no fixed point of
g(x) = (x^2 - 3)/2.

The behavior of the three rearrangements is different:

the fixed point of x = g(x) is the intersection of the line y = x and the curve y = g(x).

Figure: Fixed-point iteration method

Start on the x-axis at the initial x_0, go vertically to the curve y = g(x), then
horizontally to the line y = x, then vertically to the curve, and again horizontally to
the line. Repeat the process until the points on the curve converge to a fixed point or
else diverge. It appears from the graph that the different behavior depends on whether the
slope of the curve near the root is greater than, less than, or of opposite sign to the
slope of the line y = x (which equals 1).

Theorem: What conditions are needed for convergence of x_(n+1) = g(x_n), so that if
x_n → ξ, then ξ = g(ξ)?

Proof: Since x_(n+1) - ξ = g(x_n) - g(ξ),

(x_(n+1) - ξ)/(x_n - ξ) = (g(x_n) - g(ξ))/(x_n - ξ).

Now if g' is continuous on the interval between x_n and ξ, then by the mean value theorem

g(x_n) - g(ξ) = g'(c_n)(x_n - ξ), where c_n lies between x_n and ξ.

Define the error of the nth iteration as

e_n = x_n - ξ; we then have

e_(n+1) = x_(n+1) - ξ = g'(c_n)(x_n - ξ) = g'(c_n) e_n.

Hence |e_(n+1)| = |g'(c_n)| |e_n|.

Now suppose that |g'(x)| ≤ K < 1 near ξ; then x_n will converge, since

|e_(n+1)| ≤ K |e_n| ≤ K^(n+1) |e_0| → 0 as n → ∞.

Hence |g'(x)| < 1 (in an interval about the root) is the condition for convergence of
x_(n+1) = g(x_n).

Example 2.10: Find the real root of , correct to four decimal places, using the
iteration method.

Solution: We have (1)

, therefore the root lies between 0 and

Now, the equation (1) can be written as

Here,

in

Therefore we can apply the iteration method. Let x0 be the initial approximation.

Thus we get

Therefore, the root correct to four decimal places is

Example 2.11: Find the real root of the equation correct to four decimal places,
using iteration method.
Solution
Let

. Then, the root lies between 3 and 4. Let


The given equation can be rewritten as

Let

, for all x in the interval. Then we can apply the iteration method.

Let

Hence, the root is 3.7892, correct to four decimal places.

2.6 The Newton-Raphson Method


This method is generally used to improve the result obtained by one of the previous methods. Let x0 be an approximate value of a root of the equation f(x) = 0 and x0 + h be the exact value of the corresponding root, where h is a very small quantity. Then

f(x0 + h) = 0 ... (1)

since x0 + h is a root of the equation. Expanding equation (1) by using the Taylor series expansion about x0, we have

f(x0) + h f′(x0) + (h²/2!) f″(x0) + ... = 0 ... (2)

Since h is very small, neglecting the second and higher order terms and taking the first approximation, we have

( )

f ( x0 )
, provided  ( )
f ( x0 )

f ( x0 )
f ( x0 )

(3)

Relation (3) gives the improved value of the root over the previous one. Now, substituting for
and for , we get

In general we have
(5)

The relation (5) is known as Newton-Raphosn formula

2.6.1 Condition under which Newton's Method converges

The Newton-Raphson iteration formula is:

xn+1 = xn - f(xn)/f′(xn)

Let g(xn) = xn - f(xn)/f′(xn).

We know that in general the fixed point iteration method converges when |g′(x)| < 1.

In the Newton-Raphson case, this means

|g′(x)| = |1 - ([f′(x)]² - f(x) f″(x))/[f′(x)]²|

= |f(x) f″(x)| / [f′(x)]² < 1 for f′(x) ≠ 0 ... (5)

or |f(x) f″(x)| < [f′(x)]², for x in (α - δ, α + δ).

In some cases Newton's method will not converge.

Example 2.12: Find the positive root of x² - 16 = 0 using the Newton-Raphson method correct to 3 decimal places, starting with x0 = 5.

Solution

f(x) = x² - 16, f′(x) = 2x

Then xn+1 = xn - f(xn)/f′(xn) = xn - (xn² - 16)/(2xn) = (1/2)(xn + 16/xn)

Iteration I

x1 = (1/2)(x0 + 16/x0) = (1/2)(5 + 16/5) = 4.1

Iteration II

x2 = (1/2)(x1 + 16/x1) = (1/2)(4.1 + 16/4.1) = 4.001

Iteration III

x3 = (1/2)(x2 + 16/x2) = (1/2)(4.001 + 16/4.001) = 4.0000

Hence, the correct root is 4.0000
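The iteration of Example 2.12 can be sketched in a few lines of Python; the helper name, tolerance and iteration cap below are our own illustrative assumptions, not from the text:

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:       # stop when the correction is negligible
            break
    return x

# f(x) = x^2 - 16, f'(x) = 2x, starting at x0 = 5 as in Example 2.12
root = newton(lambda x: x * x - 16.0, lambda x: 2.0 * x, 5.0)
print(round(root, 3))
```

The first few iterates are 4.1, 4.001, 4.0000, matching the hand computation above.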

Example: Use the Newton-Raphson method to find the root of the given equation.

Solution: Let

, therefore, a real root lies between 2 and 3.

Now, . Let the initial approximation be , then

Using Newton-Raphson formula, we get

Iteration I

Iteration II

Iteration III

Iteration IV

Hence, the correct root is

Example 2.13: Find the real root of the given equation, using the Newton-Raphson method.
Solution
We have
Thus

Thus, the iteration formula is

Now, on taking we get the successive iteration as

Hence, the root of the equation correct to five decimal places is 2.79839

Exercises 2.1

1. Find by iteration a real root of – correct to five decimal places.

2. By the Newton-Raphson method find the real root of x3 + x2
3. Find the positive root of – using the bisection method correct to 2 decimal places.
4. Find, correct to 3 decimal places, the two positive roots of 2e^x – 3x² = 2.5644 using the secant method.
5. Find the iterative methods based on the Newton-Raphson method for finding √N, 1/N, and N^(1/3), where N is a positive real number. Apply the methods to N = 18 to obtain the results correct to two decimal places.
6. Given the equation – x , determine the initial approximation for finding the smallest positive root. Find it by
a. Regula-falsi
b. Newton-Raphson method

CHAPTER 3
SYSTEMS OF LINEAR EQUATIONS

3.1 Introduction
Systems of Linear Equations
Any straight line in the xy-plane can be represented algebraically by an equation of the form ax + by = c, where x and y are variables and a, b and c are real constants (a and b not both zero). An equation of this form is called a linear equation in the variables x and y. More generally, we define a linear equation in the n variables x1, x2, ..., xn to be one that can be expressed in the form
a1x1 + a2x2 + ... + anxn = b
where a1, a2, ..., an (not all zero) and b are real constants. The variables in a linear equation are sometimes called unknowns.
Example: 3x + 2y = -9, 3x + 9y - z + 4u = 5
A finite set of linear equations in the variables x1, x2, ..., xn is called a system of linear equations or a linear system. A sequence of numbers s1, s2, ..., sn is called a solution of the system if x1 = s1, x2 = s2, ..., xn = sn is a solution of every equation in the system.
An arbitrary system of m linear equations in n unknowns can be written as

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm ------------------- (3.1)

This can be written as

AX = b --------------------------------- (3.2)

where A = (aij) is the m × n matrix of coefficients, X = (x1, x2, ..., xn)T is the column vector of unknowns, and b = (b1, b2, ..., bm)T is the column vector of right-hand sides.

Note
Every system of linear equations has either no solution, exactly one solution, or infinitely many solutions.

Systems of linear equations arise in a large number of areas, both directly in modeling physical situations and indirectly in the numerical solution of other mathematical models. These applications occur in virtually all areas of the physical, biological and social sciences. Linear systems are involved in the numerical solution of optimization problems, systems of nonlinear equations, approximation of functions, boundary value problems in ordinary differential equations, partial differential equations, integral equations, statistical inference, etc. Because of the widespread importance of linear systems, much research has been devoted to their numerical solution, and excellent algorithms have been developed for the most common types of problems.

3.2 Methods of Solution of systems of Linear Equations


3.2.1 The Analytical Method of Solution
3.2.1.1 Cramer's Rule
The most general analytical method is known as Cramer's rule. This method solves the given system Ax = b, and the solution takes the form:

xj = det(A(j)) / det(A), j = 1, 2, ..., n ... (3.3)

where A(j) is the matrix obtained by replacing the jth column of A by the vector b.

Example 3.1: Solve the following system using Cramer's rule


2x1 -3x2 + x3 = 1
x1 + x2 – x3 = 0
x1 – 2x2 + x3 = -1

2  3 1 
here A = 1 1  1
1  2 1 

det(A) = 1
1 3 1
0 1 1
1  2 1 1
Therefore x1 = = =1
det( A) 1

2 1 1
1 0 1
1 1 1 1
x2 = = = -1.
det( A) 1

2 3 1
1 1 0
1  2 1 4
x3 = = = -4.
det( A) 1
Cramer’s Rue will be used for system of n  n where n = 2, 3, 4 or 5 but for n > 5 Cramer’s rule
will be impractical, say for n = 26, the number multiplications required will be 25  26! and this
is impractical to compute, hence use of numerical methods to solve the systems, are more
appropriate and efficient.
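For a 3 × 3 system, Cramer's rule is easy to sketch directly in Python (the helper names below are our own, not from the text):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve a 3x3 system Ax = b by Cramer's rule (det(A) must be nonzero)."""
    dA = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]       # copy A
        for i in range(3):
            Aj[i][j] = b[i]              # replace column j by the vector b
        xs.append(det3(Aj) / dA)
    return xs

# The system of Example 3.1
A = [[2, -3, 1], [1, 1, -1], [1, -2, 1]]
b = [1, 0, -1]
sol = cramer3(A, b)
print(sol)
```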

3.2.2 Numerical Methods


Broadly there are two classes of methods, namely direct and iterative methods.
a) Direct Methods
The direct methods use a finite number of steps and result in an exact solution if there are no round-off errors; they are most efficient for systems where the matrix is dense. (A dense matrix is a matrix in which most of the elements are nonzero.) But when these methods are applied, the coefficient matrix is continuously modified at the end of each step, and hence such methods are not recommended for systems with sparse matrices.

b) Iterative Methods

These methods are based on the idea of successive approximations: starting with one or more approximations to the solution, we obtain a sequence of approximations, or iterates, {x(k)}, which converges to the solution. These methods use simple and uniform operations. They do not compute the exact solution directly but make use of an infinite process whose steps converge to the exact solution:

x(k) → x as k → ∞, where x is the exact solution.

3.2.2.1 The Direct Methods of Numerical Solutions.

3.2.2.1.1 Gauss elimination method

The Gauss elimination method is a suitable technique for solving systems of linear equations of any size. One advantage of this technique is its adaptability to the computer. This method involves a sequence of operations on a system of linear equations to obtain at each stage an equivalent system, that is, a system having the same solution as the original system. The reduction is complete when the original system has been transformed into a certain standard form from which the solution can be easily read.

The Gauss elimination method is the most important among the direct methods for solving a general linear system of equations. The idea behind this method is to eliminate the unknowns in a systematic way, so that we end up with a triangular system. Consider the system
Ax = b

Let us denote the original linear system by A(1)x = b(1):

[ a11(1)  a12(1)  ...  a1n(1) ] [ x1 ]   [ b1(1) ]
[ a21(1)  a22(1)  ...  a2n(1) ] [ x2 ] = [ b2(1) ]
[  ...     ...          ...   ] [ .. ]   [  ...  ]
[ an1(1)  an2(1)  ...  ann(1) ] [ xn ]   [ bn(1) ] ------------------------ (3.6)

Step 1. Assume a11(1) ≠ 0. Then we define the row multipliers

mi1 = ai1(1) / a11(1), i = 2, 3, ..., n. ... (3.7)

Then we can eliminate x1 from the last (n - 1) equations by subtracting from the ith equation the multiple mi1 of the first equation. The first rows of A(1) and b(1) are left unchanged, and the remaining rows are changed; as a result we get a new system A(2)x = b(2):

[ a11(1)  a12(1)  ...  a1n(1) ] [ x1 ]   [ b1(1) ]
[   0     a22(2)  ...  a2n(2) ] [ x2 ] = [ b2(2) ]
[  ...     ...          ...   ] [ .. ]   [  ...  ]
[   0     an2(2)  ...  ann(2) ] [ xn ]   [ bn(2) ]

where the new coefficients are given by:

aij(2) = aij(1) - mi1 a1j(1),  bi(2) = bi(1) - mi1 b1(1),  i = 2, 3, ..., n; j = 2, 3, ..., n ... (3.8)

Step 2. If a22(2) ≠ 0, we can in a similar way eliminate x2 from the last (n - 2) of those equations. We then get a new system A(3)x = b(3):

[ a11(1)  a12(1)  a13(1)  ...  a1n(1) ] [ x1 ]   [ b1(1) ]
[   0     a22(2)  a23(2)  ...  a2n(2) ] [ x2 ] = [ b2(2) ]
[   0       0     a33(3)  ...  a3n(3) ] [ .. ]   [ b3(3) ]
[  ...               ...        ...   ] [ .. ]   [  ...  ]
[   0       0     an3(3)  ...  ann(3) ] [ xn ]   [ bn(3) ]

where we put mi2 = ai2(2) / a22(2) for i = 3, 4, ..., n.

The coefficients of this system are given by

aij(3) = aij(2) - mi2 a2j(2),  bi(3) = bi(2) - mi2 b2(2),  i = 3, ..., n; j = 3, ..., n

We continue to eliminate the unknowns, going on to columns 3, 4, and so on; this is expressed generally in the following.

Step k. Let 1 ≤ k ≤ n - 1. Assume that in the system A(k)x = b(k) the unknowns x1, ..., xk-1 have been eliminated at the previous stages, so that A(k) has the form:

       [ a11(1)  a12(1)  ...           a1n(1) ]
       [   0     a22(2)  ...           a2n(2) ]
A(k) = [  ...                           ...   ] ... (3.9)
       [   0  ...  0  akk(k)  ...     akn(k)  ]
       [  ...                           ...   ]
       [   0  ...  0  ank(k)  ...     ann(k)  ]

Assume akk(k) ≠ 0. Define the multipliers

mik = aik(k) / akk(k),  i = k + 1, ..., n ... (3.10)

and use these to remove the unknown xk from equations k + 1 through n. Define

aij(k+1) = aij(k) - mik akj(k),  bi(k+1) = bi(k) - mik bk(k),  i, j = k + 1, ..., n ... (3.11)

The earlier rows 1 through k are left undisturbed, and zeroes are introduced into column k below the diagonal element. By continuing in this manner, after n - 1 steps we obtain the triangular system A(n)x = b(n):

[ a11(1)  ...  a1n(1) ] [ x1 ]   [ b1(1) ]
[   0     ...   ...   ] [ .. ] = [  ...  ]
[   0     ...  ann(n) ] [ xn ]   [ bn(n) ] ... (3.12)

Then, using the back substitution formula

xi = ( bi - Σ(k=i+1 to n) aik xk ) / aii,  i = n, n - 1, ..., 1,

we solve for the xi.

We note that the right-hand side b is transformed in exactly the same way as the columns of A. Therefore the description of the elimination is simplified if we consider b as the last column of A and denote

ai,n+1(k) = bi(k),  i, k = 1, 2, ..., n.

Example 3.2: Solve the linear system
x1 + 2x2 + x3 = 0
2x1 + 2x2 + 3x3 = 3
-x1 - 3x2 = 2
We represent the above linear system with the augmented matrix

[A|b] = [  1   2   1 : 0 ]
        [  2   2   3 : 3 ]
        [ -1  -3   0 : 2 ]

1st step: m21 = 2, m31 = -1

[  1   2   1 : 0 ]      [ 1   2   1 : 0 ]
[  2   2   3 : 3 ]  →   [ 0  -2   1 : 3 ]
[ -1  -3   0 : 2 ]      [ 0  -1   1 : 2 ]

2nd step: m32 = 1/2

[ 1   2   1 : 0 ]      [ 1   2    1  :  0  ]
[ 0  -2   1 : 3 ]  →   [ 0  -2    1  :  3  ]  =  Ux = b(3)
[ 0  -1   1 : 2 ]      [ 0   0   1/2 : 1/2 ]

Thus

[ 1   2    1  ] [ x1 ]   [  0  ]
[ 0  -2    1  ] [ x2 ] = [  3  ]
[ 0   0   1/2 ] [ x3 ]   [ 1/2 ]

and back substitution gives x3 = 1, x2 = -1, x1 = 1.
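The elimination and back-substitution steps above can be sketched as a small Python routine (no pivoting, assuming nonzero pivots as in the text; the function name is our own):

```python
def gauss_solve(A, b):
    """Gaussian elimination with back substitution (no pivoting;
    assumes the pivots a_kk are nonzero, as in the text)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]                  # row multiplier m_ik
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# The system of Example 3.2
x = gauss_solve([[1, 2, 1], [2, 2, 3], [-1, -3, 0]], [0, 3, 2])
print(x)
```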
Activity 3.1
Solve the following systems of linear equations using Cramer's rule and the Gauss elimination method.

1)

2)

3)

3.2.2.1.2 Gauss-Jordan Method

This procedure is much the same as regular elimination, including the possible use of pivoting. It differs in eliminating the unknown in the equations above the diagonal as well as below it.

In step k of the elimination, choose the pivot element as before, i.e.

|ark| = max(k ≤ i ≤ n) |aik|,

then interchange rows r and k. At this stage the augmented matrix has the form

[ 1  0  ...  a1k(k)  ...  a1n(k) ]
[ 0  1  ...  a2k(k)  ...  a2n(k) ]
[ ...        akk(k)  ...  akn(k) ]
[ ...          ...          ...  ]
[ 0  0  ...  ank(k)  ...  ann(k) ]

Then define

akj(k+1) = akj(k) / akk(k),  j = k, ..., n + 1

and eliminate the unknown xk in the equations both above and below equation k using the formula

aij(k+1) = aij(k) - aik(k) akj(k+1),  for j = k, ..., n + 1;  i = 1, ..., n, i ≠ k.

At the end of the nth step this procedure converts

[ A | b ]  →  [ I | b(n) ]

Thus Ix = b(n), i.e. x = b(n).

To solve Ax = b using the Gauss-Jordan method requires about n³/2 multiplications, roughly 50% more than regular elimination, and consequently the Gauss-Jordan method should not normally be used for solving linear systems. However, it can be used to produce a matrix inverse.

Example 3.3: Solve the following using the Gauss-Jordan method

A = [ 3   1  -2  -1 ]      [ x1 ]       [  3 ]
    [ 2  -2   2   3 ]  x = [ x2 ]   b = [ -8 ]
    [ 1   5  -4  -1 ]      [ x3 ]       [  3 ]
    [ 3   1   2   3 ]      [ x4 ]       [ -1 ]

Solution:

[A|b] = [ 3   1  -2  -1 |  3 ]
        [ 2  -2   2   3 | -8 ]
        [ 1   5  -4  -1 |  3 ]
        [ 3   1   2   3 | -1 ]

Notation: R1 means row one, R2 means row two, R3 means row three, R4 means row four.

Divide R1 by 3 and reduce the elements below the pivot element to zero:

[ 1   1/3   -2/3  -1/3 |   1 ]
[ 0  -8/3   10/3  11/3 | -10 ]      Interchange R2 and R3
[ 0  14/3  -10/3  -2/3 |   2 ]
[ 0    0      4     4  |  -4 ]

[ 1   1/3   -2/3  -1/3 |   1 ]      Divide R2 by 14/3
[ 0  14/3  -10/3  -2/3 |   2 ]      and reduce the elements below
[ 0  -8/3   10/3  11/3 | -10 ]      and above the pivot element to
[ 0    0      4     4  |  -4 ]      zero

 3  2 67 
1 0 
 7 7 
 5  1 3  Interchang e R2 and R3
0 1
7 7 7  
 10 23  62 
0 0  R3 and R4
 3 7 7 
0 0 4 4 4 
 

 3  2 67 
1 0  Divide R3 by 4
 7 7       
 5 1 3 
0 1
7 7 7  and reduce the elements below
0 0 4 4  4  and above the pivoting element to
 10 23  62 
0 0  zero
 7 7 7 

 1 3  13
1 0 0
7 7  Divide R4 by 7
 4  2        
0 1 0 7  and reduce the elements below
 7
1 
0 0 1 1  and above the pivoting element to
0 0 0 13 5 
 7 7  zero

1 0 0 0 1 
0 1 0 0 2 

0 0 1 0 3
 
0 0 0 1  4
Hence the solution is
x1 = 1, x2 = 2, x3 = 3, x4 = -4.
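A minimal Python sketch of the Gauss-Jordan reduction follows. Unlike the worked example, it does no row interchanges and simply assumes every pivot met along the way is nonzero (which happens to hold for this system); the function name is our own:

```python
def gauss_jordan(A, b):
    """Reduce [A | b] to [I | x] by Gauss-Jordan elimination:
    normalize each pivot row, then zero its column above and below."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        p = M[k][k]
        M[k] = [v / p for v in M[k]]               # divide row k by its pivot
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [v - f * w for v, w in zip(M[i], M[k])]
    return [row[n] for row in M]                   # last column is the solution

A = [[3, 1, -2, -1], [2, -2, 2, 3], [1, 5, -4, -1], [3, 1, 2, 3]]
b = [3, -8, 3, -1]
sol = gauss_jordan(A, b)
print([round(v, 6) for v in sol])
```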

Inverse of a square matrix using the Gauss-Jordan method

The procedure is to augment the given matrix with the identity matrix of the same order. One then reduces the original matrix to the identity matrix by the Gauss-Jordan method. When the identity matrix stands as the left half of the augmented matrix, the inverse of the original matrix stands as the right half. (No row or column interchanges are used.)

1  1 2
Example: Find the inverse of A = 3 0 1 
1 0 2

Augment A and I
1  1 2 1 0 0
 
3 0 1 0 1 0
1 0 2 0 0 1

Now using the Gauss-Jordan’s Formula!

k 1
a kjk
a kj = k
; j = k, … 2n, aijk 1 = a ijk - aikk a kjk 1 … (3.26)
a kk

1  1 2 1 0 0
 
0 3  5  3 1 0 
0 1 0  1 0 1

 1
0 1 0
1 0 3 3 
  5 1 1 
0 1 3 3
0
0 0 5 1 
 3 0
3
1
 
1 0 0 0 2 1 
 5 5
 0 1 0  1 0 1 
0 0 1 0  1 3 
 5 5 

0 2 1 
 5 5
Therefore A =  1 0-1
1 
 0 1 3 
 5 5

Activity 3.2
1. Find the inverses of the following matrices.

1)

2)

2. Solve the following systems of linear equations using the Gauss-Jordan method.

3)

4)

5)

3.2.2.1.3 The LU Decomposition Method

This method is also known as the decomposition method or the factorization method. In this method, the coefficient matrix A of the system of equations

AX = b,

where A = (aij) is the coefficient matrix and X = (x1, ..., xn)T the vector of unknowns,

is decomposed, or factorized, into the product of a lower triangular matrix L and an upper triangular matrix U. We write the matrix A as

A = LU

where

L = [ l11   0   ...   0  ]
    [ l21  l22  ...   0  ]
    [ ...            ... ]
    [ ln1  ln2  ...  lnn ] ------------------------------------------ (3.9)

and

U = [ u11  u12  ...  u1n ]
    [  0   u22  ...  u2n ]
    [ ...            ... ]
    [  0    0   ...  unn ] ----------------------------------- (3.10)

Using the matrix multiplication rule to multiply the matrices L and U and comparing the elements of the resulting matrix with those of A, we obtain

Σ(k=1 to min(i,j)) lik ukj = aij,  i, j = 1, ..., n --------------(3.11)

This system of equations involves n² + n unknowns, so there is an n-parameter family of solutions. To produce a unique solution it is convenient to choose either lii = 1 or uii = 1.

When we choose
i) lii = 1, the method is called Doolittle's method;
ii) uii = 1, the method is called Crout's method.

When we take uii = 1, the solution of the equations (3.11) may be written as

lij = aij - Σ(k=1 to j-1) lik ukj,  i ≥ j ---------------------------- (3.12)

uij = ( aij - Σ(k=1 to i-1) lik ukj ) / lii,  i < j ------------------------------- (3.13)

We note that the first column of the matrix L is identical with the first column of the matrix A, that is, li1 = ai1. We also note that u1j = a1j / l11.

Once the first column of L and the first row of U have been determined, we can proceed to determine the second column of L and the second row of U. Next we find the third column of L followed by the third row of U; thus, for the relevant indices i and j, the elements are computed alternately: a column of L, then the corresponding row of U.
Having determined the matrices L and U, the system of equations AX = b becomes

LUX = b.

We write this as the following two systems of equations:

Let UX = Z ---------------------------------------- (3.14)
Then LZ = b ----------------------------------- (3.15)

The unknowns in (3.15) are determined by forward substitution, and those in (3.14) are obtained by back substitution. Alternatively, find L⁻¹ and U⁻¹ to get

X = U⁻¹ L⁻¹ b -------------------------------------------------- (3.16)

The inverse of A can also be determined from

A⁻¹ = U⁻¹ L⁻¹ -------------------------------------------------- (3.17)

The method fails if any of the diagonal elements lii is zero. The LU decomposition is guaranteed to exist when the matrix A is positive definite; however, this is only a sufficient condition.

Example 5
Consider the equations
x1 + x2 + x3 = 1
4x1 + 3x2 - x3 = 6
3x1 + 5x2 + 3x3 = 4
Use the decomposition method to solve the system, taking uii = 1 (Crout's method).
Solution

Write A = LU:

[ 1  1   1 ]   [ l11   0    0  ] [ 1  u12  u13 ]
[ 4  3  -1 ] = [ l21  l22   0  ] [ 0   1   u23 ]
[ 3  5   3 ]   [ l31  l32  l33 ] [ 0   0    1  ]

On comparing the corresponding elements and solving the resulting equations, we obtain:

1st column: l11 = 1, l21 = 4, l31 = 3
1st row:    u12 = 1, u13 = 1
2nd column: l22 = 3 - 4(1) = -1, l32 = 5 - 3(1) = 2
2nd row:    u23 = (-1 - 4(1)) / (-1) = 5
3rd column: l33 = 3 - (3(1) + 2(5)) = -10

Thus we have

L = [ 1   0    0  ]         U = [ 1  1  1 ]
    [ 4  -1    0  ]   and       [ 0  1  5 ]
    [ 3   2  -10  ]             [ 0  0  1 ]

Using LZ = b, i.e.

[ 1   0    0  ] [ z1 ]   [ 1 ]
[ 4  -1    0  ] [ z2 ] = [ 6 ]
[ 3   2  -10  ] [ z3 ]   [ 4 ]

and forward substitution, we have

z1 = 1
4z1 - z2 = 6,  so z2 = -2
3z1 + 2z2 - 10z3 = 4,  so z3 = -1/2

Using UX = Z:

x1 + x2 + x3 = 1
x2 + 5x3 = -2
x3 = -1/2

Using backward substitution we get x3 = -1/2, x2 = -2 - 5(-1/2) = 1/2, x1 = 1 - 1/2 + 1/2 = 1.

Hence (x1, x2, x3) = (1, 1/2, -1/2) is the solution of the equations.
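Crout's factorization and the two triangular solves can be sketched as follows (the function name is our own; zero pivots lii are assumed not to occur, as noted in the text):

```python
def crout_solve(A, b):
    """Solve Ax = b via Crout's LU factorization (u_ii = 1), then
    forward substitution Lz = b and back substitution Ux = z."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):                      # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):                  # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    z = [0.0] * n
    for i in range(n):                             # forward substitution: Lz = b
        z[i] = (b[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution: Ux = z
        x[i] = z[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

# The system of Example 5
x = crout_solve([[1, 1, 1], [4, 3, -1], [3, 5, 3]], [1, 6, 4])
print(x)
```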

Activity 3.3
1. Find the L and U factors of the following matrices.

A)

B)

2. Solve the following systems of linear equations using the LU decomposition method.

C)

D)

E)

3.2.2.2 Iteration Methods

When a linear system is too large and sparse, Gaussian elimination becomes expensive and the accumulated round-off error can be severe. Under such conditions, iterative methods are best for finding the solution of the system.

I. The Gauss-Jacobi Method

Consider the system of n linear equations in n unknowns, written as

AX = b

where A = (aij) is the coefficient matrix, X = (x1, ..., xn)T and b = (b1, ..., bn)T.

Assuming aii ≠ 0 and that the diagonal entries are large compared to the off-diagonal coefficients, rewrite the system as

x1 = (1/a11) ( b1 - Σ(j≠1) a1j xj )

x2 = (1/a22) ( b2 - Σ(j≠2) a2j xj )

...

xn = (1/ann) ( bn - Σ(j≠n) anj xj )

Generally, we define the Gauss-Jacobi iteration as:

xi(m+1) = (1/aii) ( bi - Σ(j≠i) aij xj(m) ),  i = 1, ..., n, m ≥ 0 ... (3.18)

Assuming the initial approximations xi(0), i = 1, ..., n, are given, we then compute a sequence of approximations x(1), x(2), .... The truncation error results from the truncation of this sequence, which is expected to converge to the true solution.

Example 6: Solve the following by the Gauss-Jacobi method

10x1 + 3x2 + x3 = 14
2x1 - 10x2 + 3x3 = -5          with x(0) = (0, 0, 0)T
x1 + 3x2 + 10x3 = 14

True solution: x = (1, 1, 1)T.

Using xi(m+1) = (1/aii) ( bi - Σ(j≠i) aij xj(m) ):

The first iteration:
x1(1) = 14/10 = 1.4
x2(1) = (-5 - 0)/(-10) = 0.5
x3(1) = 14/10 = 1.4

The second iteration:
x1(2) = (14 - 3(0.5) - 1.4)/10 = 1.11
x2(2) = (-5 - 2(1.4) - 3(1.4))/(-10) = 1.20
x3(2) = (14 - 1.4 - 3(0.5))/10 = 1.11

And so on.

The numerical results are as follows:

m     x1(m)      x2(m)      x3(m)      ||e(m)||
0     0          0          0          1
1     1.4        .5         1.4        .5
2     1.11       1.20       1.11       .2
3     .929       1.055      .929       .071
4     .9906      .9645      .9906      .0355
5     1.01159    .9953      1.01159    .01159
6     1.000251   1.005795   1.000251   .005795
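The Jacobi sweep is short to sketch in Python; it is applied here to the diagonally dominant system of Example 6 (the function name and iteration count are our own choices):

```python
def jacobi(A, b, x0, n_iter):
    """Gauss-Jacobi: every component of x^(m+1) is computed from x^(m) only."""
    n = len(A)
    x = x0[:]
    for _ in range(n_iter):
        x_new = [0.0] * n
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new                       # whole vector updated at once
    return x

A = [[10, 3, 1], [2, -10, 3], [1, 3, 10]]
b = [14, -5, 14]
x = jacobi(A, b, [0.0, 0.0, 0.0], 25)
print([round(v, 4) for v in x])
```

The first sweep reproduces (1.4, 0.5, 1.4) from the table, and the iterates approach the true solution (1, 1, 1).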

Example 7
Solve, by Jacobi's iteration method, the equations

starting with the initial approximation x(0) = (0, 0, 0)T.

We start with the initial approximation

and get

Substituting the above approximation into the equations, we obtain the second iteration.

Substituting the above approximation into the equations, we have

Similarly, the fourth iteration is given by

and so on. The answer converges to (1, -1, 1).

Activity 3.4
Solve, by Jacobi’s iteration method, the equations

a)

b)

Error Analysis of the Gauss-Jacobi method

Let x be the exact solution of Ax = b, and let x(m) be the mth-step iterated solution. Then the errors are

ei(m) = xi - xi(m),  m ≥ 0,

and from (3.18),

ei(m+1) = - Σ(j≠i) (aij/aii) ej(m).

Let M = max(1≤i≤n) Σ(j≠i) |aij| / |aii|.

Then |ei(m+1)| ≤ M ||e(m)||, so

||e(m+1)|| ≤ M ||e(m)||,

and hence ||e(m)|| ≤ Mᵐ ||e(0)||.

If M < 1, then e(m) → 0 as m → ∞. But M < 1 means

max(1≤i≤n) Σ(j≠i) |aij| / |aii| < 1,

i.e.  Σ(j≠i) |aij| < |aii| ... (3.30)

Therefore the matrix A must be diagonally dominant for the iteration to be guaranteed to converge.

Consider the above example, where

A = [ 10    3   1 ]
    [  2  -10   3 ]
    [  1    3  10 ]

It is clear that Σ(j≠i) |aij| < |aii| for each row.

II) The Gauss-Seidel Method

A system of simultaneous linear equations is called diagonal when in each equation the coefficient of a different unknown is greater in absolute value than the sum of the absolute values of the other coefficients. We solve each equation of the system for the unknown with the largest coefficient. Let the system be given; then, solving each equation for the unknown with the largest coefficient, we obtain

x1 = c12 x2 + c13 x3 + ... + c1n xn + d1
x2 = c21 x1 + c23 x3 + ... + c2n xn + d2
...
xn = cn1 x1 + cn2 x2 + ... + c(n,n-1) x(n-1) + dn ----------------------------------------------- (3.20)

Substituting initial values xi(0) into the right-hand sides, we obtain new values xi(1). Again we substitute the new values into the right-hand sides and obtain improved values xi(2), and we continue the process until xi(m+1) agrees with xi(m) to the desired accuracy. The xi(m) are then the required roots of the system. The iteration is

xi(m+1) = (1/aii) ( bi - Σ(j=1 to i-1) aij xj(m+1) - Σ(j=i+1 to n) aij xj(m) ),  i = 1, ..., n.

It is the same as the Gauss-Jacobi method except that, as soon as a new approximation is found, it is immediately used in the next step: each new component xi(m+1) is used immediately in the computation of the next component.
Example 8
Solve the following by the Gauss-Seidel method

10x1 + 3x2 + x3 = 14
2x1 - 10x2 + 3x3 = -5          with x(0) = (0, 0, 0)T
x1 + 3x2 + 10x3 = 14

Solution

The first iteration:
x1(1) = 14/10 = 1.4
x2(1) = (-5 - 2(1.4))/(-10) = 0.78
x3(1) = (14 - 1.4 - 3(0.78))/10 = 1.026

The second iteration:
x1(2) = (14 - 3(0.78) - 1.026)/10 = 1.0634
x2(2) = (-5 - 2(1.0634) - 3(1.026))/(-10) = 1.02048
x3(2) = (14 - 1.0634 - 3(1.02048))/10 = 0.98752

And so on.

Numerical results with the Gauss-Seidel method:

m     x1(m)      x2(m)      x3(m)      ||e(m)||
0     0          0          0          1
1     1.4        .78        1.026      .4
2     1.0634     1.02048    .98752     .0634
3     .99510     .99528     1.00191    .0049
4     1.00123    1.00082    .99963     .0012

It is clear that the speed of convergence of the Gauss-Seidel method is faster than that of Jacobi.

Activity 3.5
Solve, by the Gauss-Seidel iteration method, the equations

c)

Error Analysis of Gauss-Seidel: Let ei(m) = xi - xi(m). Then

ei(m+1) = - Σ(j=1 to i-1) (aij/aii) ej(m+1) - Σ(j=i+1 to n) (aij/aii) ej(m),  i = 1, 2, ..., n.

Define

αi = Σ(j=1 to i-1) |aij| / |aii|,   βi = Σ(j=i+1 to n) |aij| / |aii|,  i = 1, ..., n, with α1 = βn = 0.

The rate of convergence is linear, but with a faster rate than with the Gauss-Jacobi method. Consider the same example as above.

General Framework for Iteration Methods

To solve Ax = b we split A as

A = N - P,

where N might be diagonal, triangular or tridiagonal, and write Ax = b as (N - P)x = b, or

Nx = b + Px.

Define the iteration method by

Nx(m+1) = b + Px(m),  m ≥ 0, with x(0) given.

Then Ne(m+1) = Pe(m), so

e(m+1) = Me(m), where M = N⁻¹P.

Thus if Mᵐ → 0 then e(m) = Mᵐ e(0) → 0 as m → ∞.

Let M = max(1≤i≤n) (αi + βi) < 1, i.e. the same bound M as for Gauss-Jacobi, and define

η = max(1≤i≤n) βi / (1 - αi).

Then |ei(m+1)| ≤ αi ||e(m+1)|| + βi ||e(m)||, i = 1, ..., n.

Let k be the subscript for which |ek(m+1)| = ||e(m+1)||. Then with i = k,

||e(m+1)|| ≤ αk ||e(m+1)|| + βk ||e(m)||

||e(m+1)|| ≤ ( βk / (1 - αk) ) ||e(m)||

Hence ||e(m+1)|| ≤ η ||e(m)||, and since for each i

βi / (1 - αi) ≤ (M - αi) / (1 - αi) ≤ M  (because M ≤ 1),

we have η ≤ M < 1. Therefore

||e(m+1)|| ≤ η ||e(m)|| ≤ ... ≤ η^(m+1) ||e(0)|| → 0 as m → ∞,

i.e. ||e(m)|| → 0.

3.2.3 Systems of Nonlinear Equations

3.2.3.1 Newton's Method for Systems of Nonlinear Equations

Consider a nonlinear system of equations

f1(x1, x2, ..., xn) = 0
f2(x1, x2, ..., xn) = 0
...                                        ... (3.21)
fn(x1, x2, ..., xn) = 0

This system can be rewritten in a more concise form. The totality of the arguments x1, x2, ..., xn may be considered as an n-dimensional vector

X = (x1, ..., xn)T.

Similarly, f = (f1, ..., fn)T. Therefore, system (3.21) in short form is

f(X) = 0 ... (3.22)

System (3.22) is solved by the method of successive approximations. Suppose the kth approximation is

X(k) = (x1(k), x2(k), ..., xn(k))T;

then the exact root of the equation can be represented in the form

X = X(k) + e(k) ... (3.23)

where e(k) = (e1(k), e2(k), ..., en(k))T is the error of the kth approximation. Putting (3.23) into (3.22) we have

f(X(k) + e(k)) = 0 ... (3.24)

On the assumption that the function f is continuously differentiable in a region containing X and X(k), we expand (3.24) about X(k) using the Taylor series:

f(X(k) + e(k)) = f(X(k)) + f′(X(k)) e(k) ≈ 0 ... (3.25)

where

f′(X(k)) = [ ∂f1/∂x1  ∂f1/∂x2  ...  ∂f1/∂xn ]
           [   ...      ...           ...   ]  =  W(X(k)),
           [ ∂fn/∂x1  ∂fn/∂x2  ...  ∂fn/∂xn ]

the Jacobian matrix evaluated at X(k). Eq. (3.25) may be written as

f(X(k)) + W(X(k)) e(k) = 0,

thus

W(X(k)) e(k) = -f(X(k)).

Then, using Gaussian elimination, we solve this linear system for e(k) and set

X(k+1) = X(k) + e(k), i.e.

xi(k+1) = xi(k) + ei(k),  i = 1, ..., n ... (3.26)
Example: Using Newton’s Method, find approximate solution of
x12  x22  x32  1

2 x12  x22  4 x3  0

3x12  4 x2  x32  0

Given x10  x20  x30  0.5

 0.25
f(x0) =   1.25 
  1.00 

2 x1 2 x2 2 x3 
W(x) = 4 x1 2 x2  4 
6 x1 4 2 x3 

Mathematics Program Numerical Analysis I


61
1 1 1 
W(x ) = 2 1  4
0 
3  4 1 

Then

W(x0) (e0) = -f(x0)

1 1 1  e10  0.25
2 1  4 e 0  = 1.25 
   2  
3  4 1  e30  1.00 

e10  e20  e30 = 0.25

Then 2e10  e20  4e30 = 1.25

3e10  4e20  e30 = 1.00

Then using Gaussian Elimination Method we solve

e10 = 0.375, e20 = 0.000, e30 = 0.125

Then we compute the next approximation

x1 = x0 + e0

x11 = x10  e10 , x12  x20  e20 , x31  x30  e30

 x11  0.875
 1  
i.e.  x 2  = 0.500
 x31  0.375
 

then if ||e0|| < tolerance we take the approximate solution if not repeat the process until a desired
approximate
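One Newton step for a system — solve W(x) e = -f(x), then update x — can be sketched as below, using the same example system; the inner linear solve is plain Gaussian elimination, and all function names and the fixed step count are our own:

```python
def F(x):
    """The example system f(x) = 0."""
    return [x[0]**2 + x[1]**2 + x[2]**2 - 1,
            2 * x[0]**2 + x[1]**2 - 4 * x[2],
            3 * x[0]**2 - 4 * x[1] + x[2]**2]

def W(x):
    """Jacobian matrix of F."""
    return [[2 * x[0], 2 * x[1], 2 * x[2]],
            [4 * x[0], 2 * x[1], -4],
            [6 * x[0], -4, 2 * x[2]]]

def solve(A, b):
    """Plain Gaussian elimination with back substitution."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    e = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * e[j] for j in range(i + 1, n))
        e[i] = (M[i][n] - s) / M[i][i]
    return e

x = [0.5, 0.5, 0.5]
for _ in range(8):                         # Newton steps: W(x) e = -F(x), x <- x + e
    e = solve(W(x), [-v for v in F(x)])
    x = [xi + ei for xi, ei in zip(x, e)]
print([round(v, 5) for v in x])
```

The first step reproduces the hand computation x(1) = (0.875, 0.5, 0.375), and subsequent steps drive the residual F(x) toward zero.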

3.4 Error Estimates and Conditioning of Matrices
Any computed solution of a linear system must, because of round-off and other errors, be considered an approximate solution. If x̃ is an approximate solution to Ax = b, then the residual is defined by

r = b - Ax̃.

One would hope that if ||r|| is small, then ||x - x̃|| would be small as well. Although this is often the case, certain special systems, which occur quite often in practice, fail to have this property.

Example 9

[ 1       2 ] [ x1 ]   [ 3      ]
[ 1.0001  2 ] [ x2 ] = [ 3.0001 ]

has the unique solution x = (1, 1)T.

For the approximate solution x̃ = (3, 0)T,

r = b - Ax̃ = [ 3      ] - [ 1       2 ] [ 3 ] = [  0      ] ... (3.38)
             [ 3.0001 ]   [ 1.0001  2 ] [ 0 ]   [ -0.0002 ]

||r|| = 0.0002.

Although the residual is small, the approximation x̃ = (3, 0)T is quite poor:

||x - x̃|| = 2.

This illustrates the fact that there are systems in which small changes in the right side b lead to large changes in the solution. A linear system whose solution x is unstable with respect to small relative changes in the right side b is called ill-conditioned.

The following theorem gives a criterion for well-conditioned and ill-conditioned behavior of a matrix:

Theorem: If x̃ is an approximation to the solution of Ax = b and A is a nonsingular matrix, then

||x - x̃|| ≤ K(A) ||r|| / ||A||

and

||x - x̃|| / ||x|| ≤ K(A) ||r|| / ||b||,  provided x ≠ 0, b ≠ 0 ... (3.39)

Proof: Since r = b - Ax̃ = Ax - Ax̃ and A is nonsingular,

x - x̃ = A⁻¹ r,

and so

||x - x̃|| = ||A⁻¹ r|| ≤ ||A⁻¹|| ||r|| = ||A|| ||A⁻¹|| ||r|| / ||A||.

Since b = Ax, we have ||b|| ≤ ||A|| ||x||, hence 1/||x|| ≤ ||A|| / ||b||, and therefore

||x - x̃|| / ||x|| ≤ ||A|| ||A⁻¹|| ||r|| / ||b||.

The number ||A|| ||A⁻¹|| is called the condition number of the nonsingular matrix A, denoted by K(A); this gives

||x - x̃|| / ||x|| ≤ K(A) ||r|| / ||b||. ... (3.40)

Note that for any nonsingular matrix A,

1 = ||I|| = ||A A⁻¹|| ≤ ||A|| ||A⁻¹|| = K(A).

The matrix A behaves well (is well conditioned) if K(A) is close to one, and does not behave well (is ill conditioned) when K(A) is significantly greater than one.

Example 10: From the previous example,

A = [ 1       2 ],      A⁻¹ = [ -10000    10000 ]
    [ 1.0001  2 ]             [  5000.5   -5000 ]

Then, using the infinity norm, K(A) = ||A|| ||A⁻¹|| = 3.0001 × 20000 = 60002.

K(A) is large, hence the system is ill conditioned.
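The condition number computation of Example 10 can be checked with a few lines of Python (infinity norm, i.e. maximum absolute row sum; the 2 × 2 inverse is formed explicitly; the helper names are our own):

```python
def inf_norm(M):
    """Matrix infinity norm: maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in M)

def cond2x2(A):
    """Condition number K(A) = ||A|| * ||A^-1|| (infinity norm), 2x2 case."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]
    return inf_norm(A) * inf_norm(Ainv)

K = cond2x2([[1, 2], [1.0001, 2]])
print(round(K))   # about 60002
```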

Exercises

1. Solve by the Gauss elimination method.


– –
a) – –
– –

b) x – 2y + 3z + 4t = 9
2

3x – y +2z + 5t = 19
2
2x + 4y – 5z + t = 15
4x + 2y – 3z + 3t = 12

c)


d)

2. Solve by Gauss-Jordan elimination.


– –
a) – –
– –

b) x – 2y + 3z + 4t = 9
2

3x – y +2z + 5t = 19
2
2x + 4y – 5z + t = 15
4x + 2y – 3z + 3t = 12

c)


d)

3) Solve by using LU decomposition method
– –
a) – –
– –

b) x – 2y + 3z + 4t = 9
2

3x – y +2z + 5t = 19
2
2x + 4y – 5z + t = 15
4x + 2y – 3z + 3t = 12

c)


d)

4) Solve the system of equations using the Jacobi iteration method (iterate up to two steps)
a) 27x + 6y – z = 85
6x + 15y + 2z = 72

x + y + 54z = 110

b) take initial approximation x= 0.5, y= -0.5,z= 0.5

c) 10x1 – 2x2 – x3 – x4 = 3

-2x1 + 10x2 – x3 – x4 = 15

-x1 – x2 + 10x3 – 2x4 = 27

- x1 – x2 – 2x3 + 10x4 = -9

5) Solve by the Gauss-Seidel method
a) 27x + 6y – z = 85
6x + 15y + 2z = 72

x + y + 54z = 110

b) take initial approximation x= 0.5, y= -0.5,z= 0.5

c) 10x1 – 2x2 – x3 – x4 = 3

-2x1 + 10x2 – x3 – x4 = 15

-x1 – x2 + 10x3 – 2x4 = 27

- x1 – x2 – 2x3 + 10x4 = -9

6) Solve the system of equations by the matrix inversion method:

7) Solve the system of equations by the Gauss elimination method.

8) Solve the system of equations by Crout's method (LU decomposition method).

9) Solve the system of equations by the Gauss–Jacobi method (perform only 3 iterations).

10) Solve the system of equations by the Gauss–Seidel method (perform only 3 iterations).

11) Solve by Newton's method


x1 + x1² − 2x2x3 = 0.1

x2 − x2² + 3x1x3 = −0.2        for x1(0) = x2(0) = x3(0) = 0.0

x3 + x3² + 2x1x2 = 0.3

CHAPTER FOUR
FINITE DIFFERENCES

4.1 INTRODUCTION
The calculus of finite differences deals with the changes that take place in the values of a
function due to finite changes in the independent variable.

4.2 FINITE OPERATORS


Let the tabular points be equally spaced, that is, xi = x0 + ih (i = 0, 1, 2, …), and let
their corresponding functional values f(xi) = fi be given. We now define the following
operators.
1) The shift operator, denoted by E and defined as
Ef(x) = f(x + h).
Sometimes it is known as the displacement or translation operator.

Some of its properties

E2f(a) = E(Ef(a)) = E(f(a + h))

= f(a + h + h) = f(a + 2h)

E3f(a) = E(E2f(a)) = E(f(a + 2h))

= f(a + 3h)

Enf(a) = f(a + nh)

E-1f(a) = f(a – h)

E-2f(a) = f(a – 2h)

E-nf(a) = f(a – nh).

E-1E = 1 (the identity operator).

E0 = 1

2) The forward difference operator, denoted by ∆ and defined as

∆f(x) = f(x + h) − f(x).

The first forward difference operator:  ∆f(a) = f(a + h) − f(a)
The second forward difference operator: ∆²f(a) = ∆(∆f(a))
The third forward difference operator:  ∆³f(a) = ∆(∆²f(a))
⋮
The nth forward difference operator:    ∆ⁿf(a) = ∆(∆ⁿ⁻¹f(a))

Any higher-order forward difference can be expressed in terms of the tabulated entries.

We have

∆f(a)  = f(a + h) − f(a)

∆²f(a) = f(a + 2h) − 2f(a + h) + f(a)

∆³f(a) = f(a + 3h) − 3f(a + 2h) + 3f(a + h) − f(a)

The coefficients occurring on the right-hand side being the binomial coefficients, we have, in
general,

∆ⁿf(a) = Σ_{k=0}^{n} (−1)ᵏ C(n, k) f(a + (n − k)h).

3) The backward difference operator, denoted by ∇ and defined as

∇f(x) = f(x) − f(x − h).

The first backward difference operator:  ∇f(a) = f(a) − f(a − h)
The second backward difference operator: ∇²f(a) = ∇(∇f(a))
The third backward difference operator:  ∇³f(a) = ∇(∇²f(a))
⋮
The nth backward difference operator:    ∇ⁿf(a) = ∇(∇ⁿ⁻¹f(a))

4) The central difference operator

Newton's forward and backward interpolation formulas are applicable for interpolation near the
beginning and the end of the tabulated values, respectively. The central differences introduced
here are best suited for interpolation near the middle of the tabulated values.

Central Difference Operator, δ

Definition: δf(x) = f(x + h/2) − f(x − h/2); in subscript notation, δf_i = f_{i+1/2} − f_{i−1/2}.

Central Difference Table

x      f       δf          δ²f       δ³f         δ⁴f

x-2    f-2
               δf-3/2
x-1    f-1                 δ²f-1
               δf-1/2                 δ³f-1/2
x0     f0                  δ²f0                   δ⁴f0
               δf1/2                  δ³f1/2
x1     f1                  δ²f1
               δf3/2
x2     f2

5) The average operator, denoted by μ and defined as:

μf(x) = (1/2)[ f(x + h/2) + f(x − h/2) ]

6) The divided difference operators

1st divided difference: f[x0, x1] = (f(x1) − f(x0)) / (x1 − x0)

2nd divided difference: f[x0, x1, x2] = (f[x1, x2] − f[x0, x1]) / (x2 − x0)

3rd divided difference: f[x0, x1, x2, x3] = (f[x1, x2, x3] − f[x0, x1, x2]) / (x3 − x0)
⋮
nth divided difference: f[x0, x1, …, xn] = (f[x1, …, xn] − f[x0, …, xn−1]) / (xn − x0)

4.3 Different Representation of the same Difference Table.

Forward Difference Table

x      f       ∆f        ∆²f       ∆³f       ∆⁴f

x-2    f-2
               ∆f-2
x-1    f-1               ∆²f-2
               ∆f-1                 ∆³f-2
x0     f0                ∆²f-1                ∆⁴f-2
               ∆f0                  ∆³f-1
x1     f1                ∆²f0
               ∆f1
x2     f2

Backward Difference Table

x      f       ∇f        ∇²f       ∇³f       ∇⁴f

x-2    f-2
               ∇f-1
x-1    f-1               ∇²f0
               ∇f0                  ∇³f1
x0     f0                ∇²f1                 ∇⁴f2
               ∇f1                  ∇³f2
x1     f1                ∇²f2
               ∇f2
x2     f2

Central Difference Table

x      f       δf          δ²f       δ³f         δ⁴f

x-2    f-2
               δf-3/2
x-1    f-1                 δ²f-1
               δf-1/2                 δ³f-1/2
x0     f0                  δ²f0                   δ⁴f0
               δf1/2                  δ³f1/2
x1     f1                  δ²f1
               δf3/2
x2     f2
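The three tables arrange the same repeated-subtraction numbers along different diagonals, so they can be generated mechanically. A small Python sketch (the function name `difference_table` and the test data f(x) = x³ are illustrative choices, not from the text):

```python
def difference_table(values):
    """Return the columns [f, Δf, Δ²f, ...] built by repeated subtraction.

    The same numbers serve as forward, backward, or central differences;
    only the pairing with the x's (the diagonal one reads) changes.
    """
    columns = [list(values)]
    while len(columns[-1]) > 1:
        prev = columns[-1]
        columns.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return columns

# f(x) = x^3 tabulated at x = 0, 1, 2, 3, 4 (h = 1).
cols = difference_table([0, 1, 8, 27, 64])
for c in cols:
    print(c)
# Third differences of a cubic are constant (3! * h^3 = 6);
# the fourth difference vanishes.
```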

Activity 4.2
1) If f(x) = 2x³ + e^(2x), use h = 1 to
   a) construct the forward difference table on the interval [0, 10]
   b) construct the backward difference table on the interval [0, 10]
   c) construct the divided difference table on the interval [0, 10]
2) Write the relation between the backward and forward differences.
3) Prove that
a) -=
 
b) += 

c) ==

4.4 The relation between operators


1) ∆f(x) = f(x + h) − f(x) = Ef(x) − f(x) = (E − 1)f(x).
Therefore ∆ = E − 1.
Similarly,
2) ∇f(x) = f(x) − f(x − h) = f(x) − E⁻¹f(x) = (1 − E⁻¹)f(x).
Therefore ∇ = 1 − E⁻¹.

3) δf = f(x + h/2) − f(x − h/2) = (E^(1/2) − E^(−1/2)) f(x)

      = E^(−1/2)(E − 1) f(x)

      = E^(−1/2) ∆ f(x).

∴ δ = E^(1/2) − E^(−1/2) = ∆E^(−1/2)

4) Also δf(x) = f(x + h/2) − f(x − h/2) = (E^(1/2) − E^(−1/2)) f(x)

             = E^(1/2)(1 − E⁻¹) f(x)

             = E^(1/2) ∇ f(x).

∴ δ = ∇E^(1/2)

So in general:

1) ∆ = E − 1
2) ∇ = 1 − E⁻¹

3) δfi = f_{i+1/2} − f_{i−1/2} = (E^(1/2) − E^(−1/2)) fi = E^(−1/2)(E − 1) fi = E^(−1/2) ∆ fi

4) δᵐfi = δᵐ⁻¹(δfi) = δᵐ⁻¹( f_{i+1/2} − f_{i−1/2} ) = δᵐ⁻¹ ∆ E^(−1/2) fi,
   and repeating the argument, δᵐfi = ∆ᵐ E^(−m/2) fi.

For instance,

δ²fi = δf_{i+1/2} − δf_{i−1/2}

     = (f_{i+1} − fi) − (fi − f_{i−1}) = f_{i+1} − 2fi + f_{i−1}

     = ∆²f_{i−1}

     = ∇²f_{i+1}

δ³fi = δ[δ²fi] = δ[ f_{i+1} − 2fi + f_{i−1} ]

     = δf_{i+1} − 2δfi + δf_{i−1}

     = ( f_{i+3/2} − f_{i+1/2} ) − 2( f_{i+1/2} − f_{i−1/2} ) + ( f_{i−1/2} − f_{i−3/2} )

     = f_{i+3/2} − 3 f_{i+1/2} + 3 f_{i−1/2} − f_{i−3/2}

     = ∆³ f_{i−3/2} = ∆³ E^(−3/2) fi.

Likewise δ³fi = ∇³ f_{i+3/2} = ∇³ E^(3/2) fi.

So in general,

δⁿfi = ∆ⁿ E^(−n/2) fi = ∆ⁿ f_{i−n/2}

     = ∇ⁿ E^(n/2) fi = ∇ⁿ f_{i+n/2}

The average operator, μ, is defined as:

μf(x) = (1/2)[ f(x + h/2) + f(x − h/2) ]

      = (1/2)[ E^(1/2) f(x) + E^(−1/2) f(x) ]

      = (1/2)[ E^(1/2) + E^(−1/2) ] f(x)        … (4.14)

Hence μ = (1/2)[ E^(1/2) + E^(−1/2) ],

and μⁿ = (1/2ⁿ)[ E^(1/2) + E^(−1/2) ]ⁿ.

Also μ² = (1/4)( E^(1/2) + E^(−1/2) )²

1 1 1
= [( E 2 - E 2 )2 + 4]
4

1 2
= +1
4

1
2 = 2
+1 … (4.15
4

1 1
2 2
Again since = E -E

1 1 1
and = (E 2 +E 2)
2

1 1 1 1
2 2 2 2
2 + = E + E +E - E

1
2
= 2E

1
Hence, E 2
=+ 1
2
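The identities E^(1/2) = μ + δ/2 and μ² = 1 + δ²/4 can be spot-checked on a concrete grid. A minimal sketch (the test function f(x) = x², the point x0, and the step h are illustrative assumptions; a half-step grid makes E^(1/2) a shift by one slot):

```python
# Tabulate f(x) = x*x around x0; F(k) below is E^(k/2) f(x0).
h = 0.5
f = lambda x: x * x
x0 = 2.0

def F(k):
    # f at x0 + k*(h/2), i.e. the operator E^(k/2) applied to f at x0.
    return f(x0 + k * h / 2)

delta = F(1) - F(-1)              # δf(x0) = f(x0+h/2) - f(x0-h/2)
mu = 0.5 * (F(1) + F(-1))         # μf(x0)

# Identity E^(1/2) = μ + δ/2 applied to f at x0:
assert abs((mu + delta / 2) - F(1)) < 1e-12

# Identity μ² = 1 + δ²/4: expand both operators at x0.
mu2 = 0.25 * (F(2) + 2 * F(0) + F(-2))     # μ²f(x0) = ¼(E + 2 + E⁻¹)f(x0)
delta2 = F(2) - 2 * F(0) + F(-2)           # δ²f(x0)
assert abs(mu2 - (F(0) + delta2 / 4)) < 1e-12
print("operator identities verified")
```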

Exercise 4
1) For the following data, construct the forward, backward and divided difference tables.

x     0.1   0.2   0.3   0.4   0.5
f(x)  1.4   1.56  1.76  2     2.28

2) If f(x)= ,show that 


3) Evaluate
a) tan-1x,
b) (( )
c) 
d) 
4) Prove that

.

5) Prove that

6) Evaluate
7) Let , then construct the forward difference table for the argument
and compute

CHAPTER 5
INTERPOLATIONS

Objective
On completion of this chapter, successful students will be able to:
 Learn about polynomial interpolation
 Know uniqueness of the interpolating polynomial
 Practice computation of the interpolating polynomial
 Determine the error of the interpolation methods

Introduction

In this chapter, we consider the interpolation problem: suppose we do not know the function f,
but only some information (data) about it; we then try to compute a function g that approximates f.
There is always a need to approximate functions, for instance because:
1. Given a set of discrete data {(xi, yi) | i = 1, …, n}, we want to find a relation between xi and yi
that describes the physical phenomenon sufficiently well.
2. We may have a function f(x) that is complicated to differentiate or integrate; we can then
find a simpler function that approximates the derivative and integral of the complicated
function f(x) on a given closed interval.
3. We may need to determine the solution of a differential equation. When finding it analytically
is difficult, we find an approximate solution at finitely many points of the interval.

An algebraic polynomial is the most convenient function to handle in practice. It is easy to
construct, evaluate, differentiate and integrate, and polynomials are therefore widely used to
approximate functions, a practice justified by the Weierstrass theorem.

Theorem 5.1: (Weierstrass Approximation Theorem)
If f is defined and continuous on [a, b] and ε > 0 is given, then there exists a polynomial p
defined on [a, b] with the property that:
|f(x) – p(x)| < ε for all x ∈ [a, b].

Theorem 5.2: (Existence and Uniqueness)

If (xi, yi), xi, yi ∈ ℝ, i = 0, 1, …, n, are n + 1 distinct pairs of data points, then there is a unique
polynomial Pn of degree at most n such that

Pn(xi) = yi, (0 ≤ i ≤ n)        (5.1)

Proof:

Existence: Proof by mathematical induction. The theorem clearly holds for n = 0 (only one data
point (x0, y0)), since one may choose the constant polynomial P0(x) = y0 for all x. Assume that the
theorem holds for n ≤ k, that is, there is a polynomial Pk(x), deg(Pk) ≤ k, such that yi = Pk(xi), for
0 ≤ i ≤ k. Next we try to construct a polynomial of degree at most k + 1 to interpolate (xi, yi),
0 ≤ i ≤ k + 1.

Let

Pk+1(x) = Pk(x) + c(x − x0)(x − x1)…(x − xk),

c = [ yk+1 − Pk(xk+1) ] / [ (xk+1 − x0)(xk+1 − x1)…(xk+1 − xk) ].

Since the xi's are distinct, the polynomial Pk+1(x) is well defined and deg(Pk+1) ≤ k + 1. It is easy
to verify that

Pk+1(xi) = yi, 0 ≤ i ≤ k + 1.

Uniqueness: Suppose there are two such polynomials Pn and Qn satisfying (5.1). Define

Sn(x) = Pn(x) − Qn(x).

Since both deg(Pn) ≤ n and deg(Qn) ≤ n, we have deg(Sn) ≤ n. Moreover,

Sn(xi) = Pn(xi) − Qn(xi) = yi − yi = 0,

for 0 ≤ i ≤ n. This means that Sn has at least n + 1 zeros; it therefore must be that Sn ≡ 0. Hence Pn = Qn.

5.1 Linear interpolation

Let the data points (x0, y0), (x1, y1), . . . , (xn, yn) belonging to an unknown smooth function

y = f (x) be plotted on a graph. Then the simplest way to estimate the value of y(x) when x lies in
the interval xi < x < xi+1 is to join the points (xi , yi ) and (xi+1, yi+1) by a straight line segment, and
then to use the point on the line segment with argument x as the approximation to y(x). This
process is called linear interpolation.

Linear interpolation is interpolation by the straight line through (x0, f0) and (x1, f1): see Fig. 5.1.
Thus the linear polynomial P1 is a sum P1 = L0f0 + L1f1, with L0 the linear polynomial that is 1
at x0 and 0 at x1; similarly, L1 is 0 at x0 and 1 at x1. Obviously,

L0(x) = (x − x1)/(x0 − x1),    L1(x) = (x − x0)/(x1 − x0).

This gives the linear interpolation formula

P1(x) = L0(x)f0 + L1(x)f1

Fig.5.1 Linear Interpolation

Example 5.1
Compute a 4D-value of ln9.2 from ln9.0=2.1972, ln9.5=2.2513.

Solution:
x0 = 9.0, x1 = 9.5, f0 = ln 9.0 = 2.1972, f1 = ln 9.5 = 2.2513; we need ln 9.2.

L0(x) = (x − 9.5)/(−0.5) = −2.0(x − 9.5),    L0(9.2) = −2.0(−0.3) = 0.6

L1(x) = (x − 9.0)/0.5 = 2.0(x − 9.0),        L1(9.2) = 2.0(0.2) = 0.4

ln 9.2 ≈ P1(9.2) = L0(9.2) f0 + L1(9.2) f1 = 0.6 × 2.1972 + 0.4 × 2.2513 = 2.2188.
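Example 5.1 is easy to reproduce in code. A sketch (the helper name `lerp` is illustrative):

```python
import math

def lerp(x0, f0, x1, f1, x):
    """Linear interpolation P1(x) = L0(x) f0 + L1(x) f1."""
    L0 = (x - x1) / (x0 - x1)
    L1 = (x - x0) / (x1 - x0)
    return L0 * f0 + L1 * f1

approx = lerp(9.0, 2.1972, 9.5, 2.2513, 9.2)
print(round(approx, 4))         # 2.2188
print(round(math.log(9.2), 4))  # 2.2192, so the interpolation error is about 4e-4
```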

5.2 Quadratic Interpolation


Quadratic interpolation is interpolation of given(x0, f0), (x1, f1), (x2, f2) by a second degree
polynomial P2(x) as follows.
P2(x) = L0(x) f0 + L1(x) f1 + L2(x) f2

with

L0(x) = (x − x1)(x − x2) / [(x0 − x1)(x0 − x2)]

L1(x) = (x − x0)(x − x2) / [(x1 − x0)(x1 − x2)]

L2(x) = (x − x0)(x − x1) / [(x2 − x0)(x2 − x1)]

and look at the interpolation basis L0(x), L1(x), L2(x): each polynomial Li(x) has degree 2,
hence P2(x) has degree at most 2.

In addition,
Li(xj) = 0 for i ≠ j, 0 ≤ i, j ≤ 2
Li(xj) = 1 for i = j

Example 5.2
Compute a 4D-value of ln9.2 from ln9.0=2.1972, ln9.5=2.2513 and ln11.0=2.3979.

Solution
L0(x) = (x − 9.5)(x − 11.0) / [(9.0 − 9.5)(9.0 − 11.0)] = x² − 20.5x + 104.5,      L0(9.2) = 0.5400,
L1(x) = (x − 9.0)(x − 11.0) / [(9.5 − 9.0)(9.5 − 11.0)] = −(x² − 20x + 99)/0.75,   L1(9.2) = 0.4800,
L2(x) = (x − 9.0)(x − 9.5) / [(11.0 − 9.0)(11.0 − 9.5)] = (x² − 18.5x + 85.5)/3,   L2(9.2) = −0.0200,
ln 9.2 ≈ P2(9.2) = 0.5400 × 2.1972 + 0.4800 × 2.2513 − 0.0200 × 2.3979 = 2.2192.

5.3 Lagrange's interpolation formula

Assume that we are given n + 1 data points (x0, f0), (x1, f1), …, (xn, fn) with all of the xi's
distinct. The interpolating polynomial of degree n is given by

Pn(x) = Σ_{i=0}^{n} Li(x) fi

where Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj)

is a polynomial of degree n, and for i = 0, 1, …, n,

Li(xj) = 0 for i ≠ j,    Li(xj) = 1 for i = j.

It is clear that Pn(xi) = fi for i = 0, 1, …, n.

Example 5.3:

P2(x) = Σ_{i=0}^{2} Li(x) fi = L0(x)f0 + L1(x)f1 + L2(x)f2

      = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)] f0 + (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)] f1
        + (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)] f2

Example 5.4:

Use the Lagrange interpolation formula to find a polynomial P3(x) which passes through:

(0, 3), (1, 2), (2, 7), (4, 59)

and hence approximate f(3).

Solution:

Now, P3(x) = Σ_{i=0}^{3} fi Li(x), with Li(x) = Π_{j=0, j≠i}^{3} (x − xj)/(xi − xj).

L0(x) = (x − 1)(x − 2)(x − 4) / [(0 − 1)(0 − 2)(0 − 4)] = −(1/8)(x³ − 7x² + 14x − 8)

L1(x) = x(x − 2)(x − 4) / [(1 − 0)(1 − 2)(1 − 4)] = (1/3)(x³ − 6x² + 8x)

L2(x) = x(x − 1)(x − 4) / [(2 − 0)(2 − 1)(2 − 4)] = −(1/4)(x³ − 5x² + 4x)

L3(x) = x(x − 1)(x − 2) / [(4 − 0)(4 − 1)(4 − 2)] = (1/24)(x³ − 3x² + 2x)

P3(x) = Σ_{i=0}^{3} fi Li(x) = x³ − 2x + 3.

Then f(3) ≈ P3(3) = 3³ − 2(3) + 3 = 24.
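Example 5.4 can be checked with a direct implementation of the Lagrange formula above (a sketch; the function name `lagrange` is illustrative):

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(points):
        Li = 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                Li *= (x - xj) / (xi - xj)   # basis polynomial Li(x)
        total += fi * Li
    return total

pts = [(0, 3), (1, 2), (2, 7), (4, 59)]
print(round(lagrange(pts, 3), 6))   # 24.0, matching P3(3) = 3**3 - 2*3 + 3
```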

5.3.1 Compact Form of Lagrange's Polynomial

We recall the Lagrange polynomial of degree n:

Pn(x) = Σ_{i=0}^{n} Π_{j=0, j≠i}^{n} [ (x − xj)/(xi − xj) ] fi.

If we let w(x) = Π_{j=0}^{n} (x − xj),

then w′(x) = Σ_{i=0}^{n} Π_{j=0, j≠i}^{n} (x − xj).

Putting x = xi (i = 0, 1, 2, …, n), we have

w′(xi) = (xi − x0)(xi − x1)…(xi − xi−1)(xi − xi+1)…(xi − xn)

       = Π_{j=0, j≠i}^{n} (xi − xj).

Combining these, Lagrange's formula can be expressed as:

Pn(x) = Σ_{i=0}^{n} [ w(x) / ((x − xi) w′(xi)) ] fi.

Suppose xj = x0 + jh and S = (x − x0)/h. Then

w(x) = Π_{j=0}^{n} (x − xj) = hⁿ⁺¹ S(S − 1)…(S − n) = hⁿ⁺¹ S^[n+1]

and w′(xi) = (xi − x0)(xi − x1)…(xi − xi−1)(xi − xi+1)…(xi − xn)

           = hⁿ i(i − 1)(i − 2)…1 · (−1)…(−(n − i))

           = hⁿ i! (−1)ⁿ⁻ⁱ (n − i)!

Hence, for equidistant points the Lagrange polynomial takes the form:

Pn(x) = Σ_{i=0}^{n} [ (−1)ⁿ⁻ⁱ fi / (i!(n − i)!) ] · S^[n+1]/(S − i).

5.3.2 Error in Lagrange’s Polynomial

Theorem 5.3:

Let x0, x1, …, xn be distinct points and let f be a given real-valued function with n + 1
continuous derivatives on the smallest interval I[x0, …, xn, x̄] containing some argument x̄ with
x̄ ≠ xi. Then there exists a number ξ in I such that

f(x̄) − Σ_{j=0}^{n} f(xj) Lj(x̄) = [ f^(n+1)(ξ) / (n + 1)! ] w(x̄),

where w(x) = Π_{j=0}^{n} (x − xj).

Proof:

Let Pn(x) = Σ_{j=0}^{n} f(xj) Lj(x).

Suppose x̄ ≠ xi; then we can find a constant K such that the function

F(x) = f(x) − Pn(x) − K w(x)

vanishes at x = x̄. Consequently, F(x) has at least n + 2 zeros,

x0, x1, …, xn, x̄, in I.

Then, by Rolle's theorem applied repeatedly, F′(x) has at least n + 1 zeros in the above interval,
F″(x) has at least n zeros, and finally F^(n+1)(x) has at least one zero ξ ∈ I, so that

F^(n+1)(ξ) = f^(n+1)(ξ) − Pn^(n+1)(ξ) − K w^(n+1)(ξ) = 0.

Since Pn^(n+1)(x) = 0 and w^(n+1)(x) = (n + 1)!,

F^(n+1)(ξ) = f^(n+1)(ξ) − K(n + 1)! = 0.

Hence, K = f^(n+1)(ξ) / (n + 1)!

This proves the proposition that

f(x̄) − Pn(x̄) = [ f^(n+1)(ξ) / (n + 1)! ] w(x̄).

Let us estimate f^(n+1)(ξ).

Since ξ ∈ [a, b] and the error formula is valid for x ∈ [a, b], including the xi for i = 0, 1, …, n,
we let

Mn+1 = max_{a ≤ x ≤ b} |f^(n+1)(x)|.

Then the absolute error satisfies

|Rn(x)| = |f(x) − Pn(x)| ≤ Mn+1 |w(x)| / (n + 1)!

Example 5.5: Given

xi     fi

100    10

121    11

144    12

for f(x) = √x, find the error committed in approximating f by Lagrange's polynomial at

x = 115.

Solution:

Here, f′(x) = (1/2) x^(−1/2), f″(x) = −(1/4) x^(−3/2) and f‴(x) = (3/8) x^(−5/2).

M3 = max_{100 ≤ x ≤ 144} |f‴(x)| = (3/8)(100)^(−5/2) = (3/8) × 10⁻⁵

Hence, Error = |R2(115)| ≤ (3/8) × 10⁻⁵ × (1/3!) |(115 − 100)(115 − 121)(115 − 144)|

             ≈ 1.6 × 10⁻³

5.4 Divided difference formula

The notion of a divided difference is a generalization of that of a derivative. Divided
differences of order 0, 1, 2, …, k are defined recursively by the relations:

0th order difference: f[x0] = f(x0)

1st order difference: f[x0, x1] = (f[x1] − f[x0]) / (x1 − x0)

2nd order difference: f[x0, x1, x2] = (f[x1, x2] − f[x0, x1]) / (x2 − x0),

and in general the kth order difference

f[x0, x1, x2, …, xk] = (f[x1, x2, …, xk] − f[x0, x1, …, xk−1]) / (xk − x0).

Theorem 5.4: f[x0, …, xk] = Σ_{j=0}^{k} f(xj) / Π_{i=0, i≠j}^{k} (xj − xi)

Proof: We carry out the proof by induction.

For k = 1, f[x0, x1] = (f[x1] − f[x0]) / (x1 − x0)

                     = Σ_{j=0}^{1} f(xj) / Π_{i=0, i≠j}^{1} (xj − xi).

Suppose the equation is true for k ≤ r. Then

f[x0, x1, …, xr+1] = ( f[x1, …, xr+1] − f[x0, …, xr] ) / (xr+1 − x0)

= 1/(xr+1 − x0) [ Σ_{j=1}^{r+1} f(xj)/Π_{i=1, i≠j}^{r+1}(xj − xi) − Σ_{j=0}^{r} f(xj)/Π_{i=0, i≠j}^{r}(xj − xi) ].

The extreme terms, j = r + 1 from the first sum and j = 0 from the second, already carry the
required coefficients 1/Π_{i≠j}(xj − xi). For 1 ≤ j ≤ r, the value f(xj) appears in both sums, and
the two contributions combine, using (xj − x0) − (xj − xr+1) = xr+1 − x0, into

f(xj) / Π_{i=0, i≠j}^{r+1} (xj − xi).

Therefore

f[x0, x1, …, xr+1] = Σ_{j=0}^{r+1} f(xj) / Π_{i=0, i≠j}^{r+1} (xj − xi).

Hence, the proof.

Remark: The divided difference f[x0, x1, …, xn] is invariant under any permutation of its
arguments,

i.e. f[x0, x2, x1, x3, …, xn] = f[x0, x1, x2, x3, …, xn], and similarly for any other reordering.

Divided differences are most easily computed recursively using the following formula:

f[xi, xi+1, …, xk−1, xk] = ( f[xi+1, …, xk−1, xk] − f[xi, xi+1, …, xk−1] ) / (xk − xi)

Table of Divided Differences

x0   f0
               f[x0, x1]
x1   f1                    f[x0, x1, x2]
               f[x1, x2]                     f[x0, x1, x2, x3]
x2   f2                    f[x1, x2, x3]                         f[x0, x1, x2, x3, x4]
               f[x2, x3]                     f[x1, x2, x3, x4]
x3   f3                    f[x2, x3, x4]
               f[x3, x4]
x4   f4

Example 5.6

Given the following 4 points

xi 0 1 3 5

yi 1 2 6 7

Find a polynomial of degree 3 in Newton’s form to interpolate these data.

Solution:

xi   fi     1st     2nd      3rd

0    1
            1
1    2              1/3
            2                 −17/120
3    6              −3/8
            1/2
5    7

So,

P(x) = 1 + x + (1/3) x(x − 1) − (17/120) x(x − 1)(x − 3).

Note that the xi can be reordered, but must be distinct. When the order of the xi's is changed, one
obtains the same polynomial but in a different form.
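The coefficients along the top diagonal of the divided difference table can be produced by a short routine (a sketch; `newton_coeffs` and `newton_eval` are illustrative names, and exact fractions are used so the output matches the table):

```python
from fractions import Fraction

def newton_coeffs(xs, ys):
    """Top-diagonal divided differences f[x0], f[x0,x1], ..., f[x0..xn]."""
    coef = list(ys)
    n = len(xs)
    for order in range(1, n):
        # Update bottom-up so each slot still holds the lower-order value it needs.
        for i in range(n - 1, order - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - order])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form with Horner-like nesting."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [Fraction(v) for v in (0, 1, 3, 5)]
ys = [Fraction(v) for v in (1, 2, 6, 7)]
coef = newton_coeffs(xs, ys)
print(coef)  # [Fraction(1, 1), Fraction(1, 1), Fraction(1, 3), Fraction(-17, 120)]
```

The printed coefficients are exactly 1, 1, 1/3, −17/120, i.e. the Newton form found in Example 5.6.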

Activity 5.1

Given the following 4 points

xi 3 1 5 0

yi 6 2 7 1

Find a polynomial of degree 3 in Newton’s form to interpolate these data.

5.5 Newton interpolation formula

5.5.1 Newton Divided Difference Polynomial

While theoretically important, Lagrange's formula is, in general, not suitable for actual
calculation; there are other forms that are much more convenient, and one of them is the
Newton divided difference formula.

Theorem 5.5.1: Let the interpolating polynomial Pn(x) be such that:

Pn(xi) = fi for i = 0, 1, …, n.

Then Pn(x) = f0 + Σ_{j=1}^{n} f[x0, x1, …, xj] (x − x0)(x − x1)…(x − xj−1).

Proof: The interpolation polynomial of nth degree which agrees with the values of f(x) at xi, for
i = 0, 1, …, n, can be represented as:

Pn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + … + an(x − x0)…(x − xn−1)

and f(x) = Pn(x) + [ f^(n+1)(ξ)/(n + 1)! ] (x − x0)(x − x1)…(x − xn).

1) If we let x = x0, then f(x0) = Pn(x0) = a0.

2) If we set x = x1, then f(x1) = Pn(x1) = f(x0) + a1(x1 − x0), so that

a1 = (f(x1) − f(x0)) / (x1 − x0) = f[x0, x1].

We now define recursively the divided difference:

f[x0, x1, …, xk−1, xk, x] = ( f[x0, x1, …, xk−1, x] − f[x0, x1, …, xk] ) / (x − xk).

If we set k = 1, then

f[x0, x1, x] = ( f[x0, x] − f[x0, x1] ) / (x − x1).

If we let x = x2, then

f[x0, x1, x2] = ( f[x0, x2] − f[x0, x1] ) / (x2 − x1).

Also f(x2) = f(x0) + a1(x2 − x0) + a2(x2 − x0)(x2 − x1), since the higher terms vanish at x = x2. Thus

a2 = [ f(x2) − f(x0) − a1(x2 − x0) ] / [ (x2 − x0)(x2 − x1) ]

and one verifies that a2 = f[x0, x1, x2].

Hence, by induction one can show that

ak = f[x0, x1, …, xk−1, xk].

Therefore:

Pn(x) = f(x0) + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1)

        + … + f[x0, x1, …, xn](x − x0)(x − x1)…(x − xn−1)

      = f(x0) + Σ_{j=1}^{n} f[x0, x1, …, xj](x − x0)(x − x1)…(x − xj−1).

If, in addition to the n + 1 support points, we introduce an (n + 2)th support point

(x̄, f(x̄)), where x̄ ≠ xi for i = 0, 1, …, n,

then

Pn+1(x̄) = f0 + Σ_{j=1}^{n} f[x0, x1, …, xj](x̄ − x0)(x̄ − x1)…(x̄ − xj−1)

           + f[x0, x1, …, xn, x̄](x̄ − x0)…(x̄ − xn) = f(x̄).

Therefore

f(x̄) − Pn(x̄) = f[x0, x1, …, xn, x̄](x̄ − x0)…(x̄ − xn).

Thus

f[x0, x1, …, xn, x̄] = f^(n+1)(ξ)/(n + 1)!  for some ξ ∈ I[x0, x1, …, xn, x̄].

This leads to:

f[x0, …, xn] = f^(n)(ξ)/n!  for some ξ ∈ I.

Example: Form the Newton divided difference formula for a function specified by

xi     f(xi)      f[xi−1, xi]   f[xi−2, xi−1, xi]   f[xi−3, xi−2, xi−1, xi]

0      132.654
                  81.13
0.2    148.877                  15.8
                  85.87                              1
0.3    157.464                  16.2
                  89.11                              1
0.4    166.375                  16.7
                  95.79                              1
0.7    195.112                  17.3
                  104.44
0.9    216

Solution:

Here, P5(x) = f0 + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1)

              + f[x0, x1, x2, x3](x − x0)(x − x1)(x − x2)

              + f[x0, x1, x2, x3, x4](x − x0)(x − x1)(x − x2)(x − x3)

              + f[x0, x1, x2, x3, x4, x5](x − x0)(x − x1)(x − x2)(x − x3)(x − x4).

Since the third divided differences are constant (equal to 1), the fourth and fifth divided
differences vanish, and

P5(x) = 132.654 + 81.13(x − x0) + 15.8(x − x0)(x − x1) + (x − x0)(x − x1)(x − x2)

Approximate

f(0.5) ≈ P5(0.5) = 132.654 + 81.13(0.5) + 15.8(0.5)(0.5 − 0.2) + 0.5(0.5 − 0.2)(0.5 − 0.3) ≈ 175.62

Remark: The coefficients of the Newton divided difference form of the interpolating polynomial
lie along the top diagonal of the table.

Theorem 5.5.2: If xj = x0 + jh, then

f[x0, x1, …, xj] = ∆ʲf0 / (j! hʲ).

Proof: Using mathematical induction.

For j = 2,

f[x0, x1, x2] = ( f[x1, x2] − f[x0, x1] ) / (x2 − x0)

              = [ (f(x2) − f(x1))/h − (f(x1) − f(x0))/h ] / (2h)

              = ( f(x2) − 2f(x1) + f(x0) ) / (2h²)

              = ∆²f0 / (2! h²).

Assume that it is true for k = j.

Then, when k = j + 1,

f[x0, x1, …, xj, xj+1] = ( f[x1, x2, …, xj+1] − f[x0, x1, …, xj] ) / (xj+1 − x0)

                       = [ ∆ʲf1/(j! hʲ) − ∆ʲf0/(j! hʲ) ] / ((j + 1)h)

                       = ∆ʲ⁺¹f0 / ((j + 1)! hʲ⁺¹).

Hence, it is true for any k ∈ ℕ.

Thus, if xi = x0 + ih for i = 0, 1, …, n,

f[x0, x1, …, xn] = ∆ⁿf0 / (n! hⁿ).

Therefore

Pn(x) = f0 + (∆f0/h)(x − x0) + (∆²f0/(2h²))(x − x0)(x − x1)

        + … + (∆ⁿf0/(n! hⁿ))(x − x0)…(x − xn−1).

This formula is called Newton's Forward Difference Formula.

This formula makes use of the difference path indicated as follows:

x f f 2f 3f 4f

x0 f0

f0

x1 f1 2f0

f1 20

x2 f2 2f1 4f0

f2 3f1

x3 f3 2f2

f3

x4 f4

Example 5.7:

f(x) is a function such that

f(0) = 6, f(1) = 5 = f(2), f(3) = 15, f(4) = 50. Find f(1/2).

First we interpolate f through the above given points.

Solution:

From the Forward Difference Table, we have the following

xi fi f 2f 3f 4f

0 6

-1

1 5 1

0 9

2 5 10 6

10 15

3 15 25

35

4 50

Then, using the Newton forward difference interpolating formula, we get:

f(x) ≈ f0 + (∆f0/h)(x − x0) + (∆²f0/2!h²)(x − x0)(x − x1) + (∆³f0/3!h³)(x − x0)(x − x1)(x − x2)

       + (∆⁴f0/4!h⁴)(x − x0)(x − x1)(x − x2)(x − x3)

     = 6 − x + (1/2!) x(x − 1) + (9/3!) x(x − 1)(x − 2) + (6/4!) x(x − 1)(x − 2)(x − 3)

     = (1/4)(x⁴ − 5x² + 24).

Then f(1/2) ≈ 6 − 1/2 + (1/2)(1/2)(−1/2) + (9/6)(1/2)(−1/2)(−3/2) + (6/24)(1/2)(−1/2)(−3/2)(−5/2)

            = 6 − 1/2 − 1/8 + 9/16 − 15/64 = 365/64.

If we require a formula with ordinates at xn, xn−1, xn−2 and so forth, we may replace x0 by xn,
x1 by xn−1, …, xk by xn−k, and get

Pn(x) = fn + (x − xn) f[xn, xn−1] + (x − xn)(x − xn−1) f[xn, xn−1, xn−2]

        + … + (x − xn)(x − xn−1)…(x − x1) f[xn, xn−1, …, x1, x0]

      = fn + (∇fn/h)(x − xn) + (∇²fn/2!h²)(x − xn)(x − xn−1)

        + … + (∇ⁿfn/n!hⁿ)(x − xn)(x − xn−1)…(x − x1).

This is called Newton’s Backward Interpolating formula and it uses the difference path indicated
below.

 

xn-3 fn-3

fn-2

xn-2 fn-2 2fn-1

fn-1 3fn

xn-1 fn-1 2fn

fn

xn fn

If n + 1 points are retained in each Newton formula, the two formulas involve the same
ordinates and yield the same polynomial approximation.

Example 5.8: Taking the previous example

x f(x) f(x) 2f(x) 3f(x) 4f(x)

0 6

-1

1 5 1

0 9

2 5 10 6

10 15

3 15 25

35

4 50

Using the Newton backward difference formula (h = 1), we get

f(x) ≈ P4(x) = f4 + (x − 4)∇f4 + ((x − 4)(x − 3)/2!)∇²f4

               + ((x − 4)(x − 3)(x − 2)/3!)∇³f4 + ((x − 4)(x − 3)(x − 2)(x − 1)/4!)∇⁴f4

             = (1/4)(x⁴ − 5x² + 24)

1 1
f    P4  
 2  2

 7  5   7  5  3 
           15
 7
= 50 +    (35) + 
2  2   2  2  2 
(25) +
 2 2! 3!

  7   5   3   1 
    
+ 
2  2  2  2 
(6)
4!

545
=
64

More generally, the Newton Forward Formula is used near the beginning of a tabulation and the
backward formula is used near the end of the tabulation.

The formulas take on a simpler form if we let

S = (x − x0)/h,  i.e.  x = x0 + hS.

Newton's forward interpolating formula becomes:

Pn(x) = f0 + S∆f0 + (S(S − 1)/2!)∆²f0 + … + (S(S − 1)…(S − n + 1)/n!)∆ⁿf0

      = Σ_{k=0}^{n} C(S, k) ∆ᵏf0.

Newton's backward interpolating formula:

Pn(x) = fn + (S − N)∇fn + ((S − N)(S − N + 1)/2!)∇²fn

        + … + ((S − N)(S − N + 1)…(S − N + k − 1)/k!)∇ᵏfn

      = Σ_{k=0}^{N} C(S − N + k − 1, k) ∇ᵏfn,

where N = n and, as before, S = (x − x0)/h, so that S − N = (x − xn)/h.

Example: f(x) is a function such that

f(0) = 6, f(1) = 5 = f(2), f(3) = 15, f(4) = 50.

Then find f(1/2).

Solution:

First we interpolate f through the above given points. Using Newton's forward interpolating
formula we get:

f(x) ≈ f0 + S∆f0 + (S(S − 1)/2!)∆²f0 + (S(S − 1)(S − 2)/3!)∆³f0

       + (S(S − 1)(S − 2)(S − 3)/4!)∆⁴f0

x f f 2f 3f 4f

0 6

-1

1 5 1

0 9

2 5 10 +6

10 +15

3 15 25

35

4 50

1
1 1
    
1 1  3 1   1   3   5 
   
x +  
1 2 2 2 9 2 2 2  2  6
f   =6 + (-1)+ 2 x+ 2 x
 2 2 2 3! 4!

=6+
1 1
(-1) - (1) +
1
(9) -
5
6
2 8 16 128

545
=
64

Summary

Polynomial interpolation means the determination of a polynomial Pn(x) such that Pn(xj) = fj,
where j = 0, …, n and (x0, f0), …, (xn, fn) are measured or observed values, values of a function,
etc. Pn(x) is called an interpolation polynomial. For given data, Pn(x) of degree n (or less) is
unique. However, it can be written in different forms, notably in Lagrange's form, or in
Newton's divided difference form, which requires fewer operations. For regularly spaced x0,
x1 = x0 + h, …, xn = x0 + nh, the latter becomes Newton's forward difference formula.

Exercises

1. The population of a country in successive censuses is as under. Estimate the population for
   the year 1925.

Year x:         1891   1901   1911   1921   1931

Population y:   46     66     81     93     101
(in thousands)

2. Find the polynomial satisfied by


(-4, 1245), (-1, 33), (0, 5), (2, 9) and (5, 1335)

3. Show that the nth divided difference of xn is 1.

4. The function y = f(x) is given at the points (7, 3), (8, 1), (9, 1) and (10, 5). Find the value of
y for x = 9.5 using Lagrange’s interpolation formula

5. Use Gauss’s forward formulas to find the value of y when


x = 3.75 given the table

x 2.5 3.0 3.5 4.0 4.5 5.0

u -2 -1 0 1 2 3

CHAPTER 6

APPLICATIONS OF INTERPOLATION

Objectives

On completion of this chapter, successful students will be able to grasp practical
knowledge of polynomial interpolation in numerical differentiation and integration.

Introduction

In this chapter, we shall discuss numerical differentiation and numerical integration. For
differentiation, we first approximate the function with the help of an interpolation formula and
then differentiate this formula as many times as required.

The second part deals with the integration of functions by the trapezoidal rule and Simpson's rule.

6.1 Differentiation

If the function f(x) is very complicated to differentiate, or is known only through a table, we
use a numerical differentiation method. Formulas for numerical differentiation may be obtained
by differentiating the interpolating polynomial. The essential idea is that the derivatives f′(x),
f″(x), … of the function f(x) are represented by the derivatives P′n(x), P″n(x), … respectively.

6.1.1 Differentiation Formulas Based on Newton’s Forward Interpolation
Formula

Suppose we have a function f(x) specified at equally spaced points xi (i = 0, 1, 2, …, n) on an
interval [a, b] by means of the values f(xi). In order to find the derivatives f′(x), f″(x), etc. on
[a, b], we replace the function f(x) by Newton's interpolation polynomial constructed for the set
of points x0, x1, …, xn. We have:

f(x) = f0 + S∆f0 + (S(S − 1)/2!)∆²f0 + (S(S − 1)(S − 2)/3!)∆³f0 + (S(S − 1)(S − 2)(S − 3)/4!)∆⁴f0 + …

where S = (x − x0)/h, h = xi+1 − xi (i = 0, 1, …).

Multiplying the binomials together, we get:

f(x) = f0 + S∆f0 + ((S² − S)/2!)∆²f0 + ((S³ − 3S² + 2S)/3!)∆³f0 + ((S⁴ − 6S³ + 11S² − 6S)/24)∆⁴f0 + …

Since df/dx = (df/dS)(dS/dx) = (1/h)(df/dS),

f′(x) = (1/h)[ ∆f0 + ((2S − 1)/2!)∆²f0 + ((3S² − 6S + 2)/3!)∆³f0 + ((4S³ − 18S² + 22S − 6)/24)∆⁴f0 + … ]

Example 6.1: For n = 2,

L2(x) = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)] f0 + (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)] f1
        + (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)] f2

      = ((S − 1)(S − 2)/2) f0 − S(S − 2) f1 + (S(S − 1)/2) f2

f′(x) = (1/h)[ (1/2) f0 (2S − 3) − f1 (2S − 2) + (1/2) f2 (2S − 1) ]

In particular, for f′(xi) with i = 0 (S = 0):

f′(x0) = (1/h)[ (1/2) f0 (−3) − f1 (−2) + (1/2) f2 (−1) ]

       = (1/h)[ −(3/2) f0 + 2 f1 − (1/2) f2 ]

       = (1/(2h)) ( −3f0 + 4f1 − f2 )
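The three-point formula can be sanity-checked against a function with a known derivative (a sketch; f(x) = sin x and the step h are illustrative test choices):

```python
import math

def deriv_3pt_forward(f, x0, h):
    """f'(x0) ≈ (-3 f(x0) + 4 f(x0+h) - f(x0+2h)) / (2h), an O(h²) formula."""
    return (-3 * f(x0) + 4 * f(x0 + h) - f(x0 + 2 * h)) / (2 * h)

x0, h = 1.2, 0.05
approx = deriv_3pt_forward(math.sin, x0, h)
exact = math.cos(x0)
print(abs(approx - exact))  # error is O(h²), here a few times 1e-4
```

Halving h should cut the error by roughly a factor of four, which is a quick way to confirm the second-order accuracy claimed by the error analysis below.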

Similarly, since

f″(x) = d²f/dx² = (d/dx)(f′) = (1/h)(df′/dS) = (1/h²)(d²f/dS²),

f″(x) = (1/h²)[ ∆²f0 + ((6S − 6)/3!)∆³f0 + ((12S² − 36S + 22)/24)∆⁴f0 + … ]

If Pn(x) is Newton’s interpolation polynomial and

Rn(x) = f(x) – Pn(x) is the corresponding error, the error in determining  (x) is:

R n(x) =  (x) – Pn (x).

As we know

( x  x0 )( x  x1 )( x  xn ) ( n 1)
Rn(x) = f  
(n  1)!

S ( S  1)( S  n) ( n 1)
= hn+1 f  
(n  1)!

Where, is an intermediate number between the values x0, x1, … xn, x

R′n(x) = (dRn(x)/dS)(dS/dx) = (1/h)(dRn(x)/dS)

       = [ hⁿ/(n + 1)! ] { f^(n+1)(ξ) (d/dS)[S(S − 1)…(S − n)] + S(S − 1)…(S − n) (d/dS)[f^(n+1)(ξ)] }.

If we suppose (d/dS) f^(n+1)(ξ) to be bounded and take into account that

(d/dS)[S(S − 1)…(S − n)] at S = 0 equals (−1)ⁿ n!,

then for x = x0, and hence for S = 0, we get

R′n(x0) = (−1)ⁿ [ hⁿ/(n + 1) ] f^(n+1)(ξ).

In many cases it is difficult to estimate f^(n+1)(ξ) since we do not know ξ, but for small h we
approximate f^(n+1)(ξ) ≈ ∆ⁿ⁺¹f0 / hⁿ⁺¹ and hence

R′n(x0) ≈ (−1)ⁿ ∆ⁿ⁺¹f0 / ((n + 1)h).

Assuming f^(n+1) to be bounded, we similarly get the error of the derivative at the points xi:

R′n(xi) = (−1)ⁿ⁻ⁱ hⁿ [ i!(n − i)!/(n + 1)! ] f^(n+1)(ξ),

where ξ is a value lying between x0 and xn.

Activity 6.1:

Find f′(50) from the following data.

x     f(x)

50    1.6990

55    1.7404

60    1.7782

65    1.8129

6.1.2 Maximum and Minimum Values of a Tabulated Function

It is known that the maximum and minimum values of a function can be found by equating the
first derivative to zero and solving for the variable. The same procedure can be applied to
determine the maxima and minima of a tabulated function. Consider Newton’s forward
difference formula

f(x) = f0 + S∆f0 + (S(S − 1)/2)∆²f0 + (S(S − 1)(S − 2)/6)∆³f0 + …

Differentiating this with respect to S, we obtain

df/dS = ∆f0 + ((2S − 1)/2)∆²f0 + ((3S² − 6S + 2)/6)∆³f0 + …        (*)

For maxima or minima,

df/dS = 0.

Terminating the right-hand side of (*), for simplicity, after the third difference and equating it to
zero, we obtain S such that:

C0 + C1S + C2S² = 0,

where C0 = ∆f0 − (1/2)∆²f0 + (1/3)∆³f0

      C1 = ∆²f0 − ∆³f0

      C2 = (1/2)∆³f0,

and then we find x = x0 + hS.

Example 6.2:

Find x correct to two decimal places for which f(x) is maximum and find this value of f(x)

from the following table.

x 1.2 1.3 1.4 1.5 1.6

y 0.9320 0.9636 0.9855 0.9975 0.9996

Solution: We form the forward difference table:

x     f(x)     Δf      Δ²f

1.2 0.9320

0.0316

1.3 0.9636 -0.0097

0.0219

1.4 0.9855 -0.0099

0.0120

1.5 0.9975 -0.0099

0.0021

1.6 0.9996

Let x0 = 1.2. Terminating the formula after the 2nd difference, we get

$$\Delta f_0 + \frac{2S-1}{2}\,\Delta^2 f_0 = 0$$

$$0.0316 + \frac{2S-1}{2}\,(-0.0097) = 0,$$

from which we obtain S ≈ 3.8, and hence x = x0 + Sh gives

x = 1.2 + 0.1(3.8) = 1.58.

Thus x = 1.58 is the point at which f has its maximum value.

The maximum value is

S ( S  1) 2
f(1.58)  0.9320 + sf0 +  f0
2

= 0.9320 + 3.8(0.0316) + (3.8 2.8)/2  (0.0097

= 1.104
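The whole computation of Example 6.2 fits in a few lines of Python; the closed form S = ½ − Δf0/Δ²f0 is just the root of Δf0 + (2S−1)/2 · Δ²f0 = 0:

```python
# Example 6.2 in code: solve Δf0 + (2S-1)/2 · Δ²f0 = 0 for S, then
# evaluate the truncated Newton forward formula at that S.
xs = [1.2, 1.3, 1.4, 1.5, 1.6]
ys = [0.9320, 0.9636, 0.9855, 0.9975, 0.9996]

h = xs[1] - xs[0]
d1 = [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]   # Δf
d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]   # Δ²f

S = 0.5 - d1[0] / d2[0]            # root of Δf0 + (2S-1)/2 · Δ²f0 = 0
x_max = xs[0] + S * h
f_max = ys[0] + S * d1[0] + S * (S - 1) / 2 * d2[0]
print(round(x_max, 2), round(f_max, 4))                # → 1.58 1.0005
```

Working with the exact root S ≈ 3.758 rather than the rounded 3.8 gives the same x to two decimals.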

6.2 Integration (Trapezoidal and Simpson's rule)

If a function f(x) is continuous on an interval [a, b] and its antiderivative F(x) is known, then the definite integral of this function from a to b may be computed from

$$\int_a^b f(x)\,dx = F(b) - F(a), \qquad \text{where } F'(x) = f(x).$$

However, in many cases the antiderivative F(x) cannot be found by elementary means, and as a result computation of the definite integral by the formula above may be difficult. Because of this, numerical methods are used for computing definite integrals. The numerical computation of a single integral is called quadrature. The technique consists in replacing the given function f(x) on the interval [a, b] under consideration by an interpolating or approximating function φ(x) of a simple kind:

$$\int_a^b f(x)\,dx \approx \int_a^b \varphi(x)\,dx.$$

The integral $\int_a^b \varphi(x)\,dx$ can be evaluated directly.

6.2.1 Using Lagrange Interpolation Polynomial

Suppose that, for the function f(x), we know its values at n + 1 arbitrary points x0, x1, x2, …, xn of [a, b]: f(xi) = fi (i = 0, 1, 2, …, n).

It is required to find approximately

$$\int_a^b f(x)\,dx.$$

Using the given values fi, construct the Lagrange polynomial

$$P_n(x) = \sum_{i=0}^{n} \frac{w(x)}{(x - x_i)\,w'(x_i)}\, f_i, \qquad w(x) = \prod_{j=0}^{n}(x - x_j),$$

such that Pn(xi) = fi (i = 0, 1, 2, …, n).


Replacing the function f(x) by the polynomial Pn(x), we get

$$\int_a^b f(x)\,dx = \int_a^b P_n(x)\,dx + R_n(x),$$

$$\int_a^b f(x)\,dx \approx \int_a^b \sum_{i=0}^{n} \frac{w(x)}{(x - x_i)\,w'(x_i)}\, f_i\,dx = \sum_{i=0}^{n} A_i f_i, \qquad \text{where } A_i = \int_a^b \frac{w(x)}{(x - x_i)\,w'(x_i)}\,dx.$$

The coefficients Ai are independent of the choice of the function f(x) for the given points. And if f(x) is a polynomial of degree at most n, then

$$\int_a^b f(x)\,dx = \int_a^b P_n(x)\,dx$$

exactly, since f(x) = Pn(x); in particular for f(x) = x^k, k = 0, 1, …, n.


Putting f(x) = x^k (k = 0, 1, 2, …, n), we get a linear system of n + 1 equations

$$I_0 = \sum_{i=0}^{n} A_i,\qquad I_1 = \sum_{i=0}^{n} A_i x_i,\qquad \ldots,\qquad I_n = \sum_{i=0}^{n} A_i x_i^{\,n},$$

where

$$I_k = \int_a^b x^k\,dx = \frac{b^{k+1} - a^{k+1}}{k+1}.$$

Example 6.3:

Determine the coefficients of a quadrature formula of the form

$$\int_0^1 f(x)\,dx \approx A_0\, f\!\left(\tfrac{1}{4}\right) + A_1\, f\!\left(\tfrac{1}{2}\right) + A_2\, f\!\left(\tfrac{3}{4}\right).$$

Solution:

Putting f(x) = x^k (k = 0, 1, 2):

$$\int_0^1 dx = 1,\qquad \int_0^1 x\,dx = \frac{1}{2},\qquad \int_0^1 x^2\,dx = \frac{1}{3},$$

we get the system

$$1 = A_0 + A_1 + A_2$$
$$\frac{1}{2} = \frac{1}{4}A_0 + \frac{1}{2}A_1 + \frac{3}{4}A_2$$
$$\frac{1}{3} = \frac{1}{16}A_0 + \frac{1}{4}A_1 + \frac{9}{16}A_2$$

Hence A0 = 2/3, A1 = −1/3, A2 = 2/3. Thus

$$\int_0^1 f(x)\,dx \approx \frac{2}{3}\, f\!\left(\tfrac{1}{4}\right) - \frac{1}{3}\, f\!\left(\tfrac{1}{2}\right) + \frac{2}{3}\, f\!\left(\tfrac{3}{4}\right).$$

Activity 6.2: Check the following:

1. For f(x) = x³: $\;0.25 = \int_0^1 x^3\,dx = \frac{2}{3}\left(\frac{1}{4}\right)^3 - \frac{1}{3}\left(\frac{1}{2}\right)^3 + \frac{2}{3}\left(\frac{3}{4}\right)^3 = 0.25.$

2. For f(x) = x^{3/2}: $\;0.4 = \int_0^1 x^{3/2}\,dx \approx \frac{2}{3}\left(\frac{1}{4}\right)^{3/2} - \frac{1}{3}\left(\frac{1}{2}\right)^{3/2} + \frac{2}{3}\left(\frac{3}{4}\right)^{3/2} = 0.398.$

6.2.2 Error Due to Integral Approximation

If f(x) is the function and Pn(x) is the interpolating polynomial, then the error of Pn(x) in approximating f(x) for x ≠ xi is

$$f(x) - P_n(x) = w(x)\,\frac{f^{(n+1)}(\xi)}{(n+1)!},$$

so that

$$R = \int_a^b f(x)\,dx - \int_a^b P_n(x)\,dx = \frac{1}{(n+1)!}\int_a^b w(x)\, f^{(n+1)}(\xi)\,dx.$$

Hence the error due to the integral approximation is

$$R = \frac{f^{(n+1)}(\xi)}{(n+1)!}\int_a^b w(x)\,dx,$$

provided [a, b] contains none of the nodes xi, i = 0, 1, …, n (so that w(x) does not change sign on [a, b]).

6.2.3 Newton-Cotes-Quadrature Formulas

Consider a uniform partition of the closed interval [a, b] given by

$$x_i = x_0 + ih,\quad i = 0, 1, \ldots, n,\qquad h = \frac{b-a}{n},$$

and let Pn be the interpolating polynomial of degree n or less with Pn(xi) = f(xi), i = 0, …, n. By Lagrange's interpolating formula,

$$L_n(x) = \sum_{i=0}^{n} L_i(x)\, f_i, \qquad L_i(x) = \prod_{\substack{j=0\\ j\neq i}}^{n} \frac{x - x_j}{x_i - x_j}.$$

Let $S = \frac{x - x_0}{h}$ and $S^{[n+1]} = S(S-1)\cdots(S-n)$. Then

$$P_n(x) = \sum_{i=0}^{n} \frac{(-1)^{n-i}}{i!\,(n-i)!}\,\frac{S^{[n+1]}}{S - i}\, f_i.$$

To compute $\int_a^b f(x)\,dx$, we replace the function f(x) by this Lagrange interpolation polynomial and obtain

$$\int_a^b f(x)\,dx \approx \int_a^b \sum_{i=0}^{n} \frac{(-1)^{n-i}}{i!\,(n-i)!}\,\frac{S^{[n+1]}}{S - i}\, f_i\,dx.$$

x  x0 dx
Since S = , ds =
h h

(1) n i S n 1
b n n

 f ( x)dx h  fi  ds .
a i  0 i!( n  i )! 0 S  i

h(1) n i S n 1
n
letting Ai =
i!(n  i )! 0 S  i ds

b n

 f ( x)dx h Ai f i
a i 0

This is Newton-Cotes Formula. Ai is called the coefficient

Example 6.4: The Trapezoidal Formula

For n = 1, $\int_a^b f(x)\,dx \approx h \sum_{i=0}^{1} A_i f_i$.

For i = 0:

$$A_0 = \frac{(-1)^1}{0!\,1!}\int_0^1 \frac{S(S-1)}{S}\,dS = -\left(\frac{1}{2} - 1\right) = \frac{1}{2}.$$

For i = 1:

$$A_1 = \frac{(-1)^0}{1!\,0!}\int_0^1 \frac{S(S-1)}{S-1}\,dS = \frac{1}{2}.$$

Hence

$$\int_a^b f(x)\,dx \approx h\,[A_0 f_0 + A_1 f_1] = \frac{h}{2}\,[f_0 + f_1],$$

i.e.

$$\int_{x_0}^{x_1} f(x)\,dx \approx \frac{h}{2}\,[f_0 + f_1].$$

Error in Trapezoidal Formula

We define

$$R = \int_{x_0}^{x_1} f(x)\,dx - \frac{h}{2}\,(f_0 + f_1)$$

and regard R = R(h) as a function of the step length h. Then

$$R(h) = \int_{x_0}^{x_0+h} f(x)\,dx - \frac{h}{2}\,[f(x_0) + f(x_0 + h)].$$

Differentiating with respect to h (the integral term differentiates to f(x0 + h)),

$$R'(h) = f(x_0 + h) - \frac{1}{2}\,[f(x_0) + f(x_0 + h)] - \frac{h}{2}\, f'(x_0 + h) = \frac{1}{2}\, f(x_0 + h) - \frac{h}{2}\, f'(x_0 + h) - \frac{1}{2}\, f(x_0).$$

Expanding f(x0 + h) and f′(x0 + h) in Taylor series about x0,

$$R'(h) = \frac{1}{2}\left[f(x_0) + h f'(x_0) + \frac{h^2}{2}\, f''(\xi)\right] - \frac{h}{2}\left[f'(x_0) + h f''(\xi)\right] - \frac{1}{2}\, f(x_0) = -\frac{1}{4}\, h^2 f''(\xi).$$

Integrating,

$$R(h) = -\frac{1}{4}\int_0^h t^2\, f''(\xi(t))\,dt,$$

and using the mean value theorem for integrals we finally obtain

$$R(h) = -\frac{1}{4}\, f''(\xi)\int_0^h t^2\,dt = -\frac{h^3}{12}\, f''(\xi).$$

General Trapezoidal Formula

If the interval [a, b] is large, then to evaluate $\int_a^b f(x)\,dx$ we divide [a, b] into n equal parts [x0, x1], [x1, x2], …, [xn−1, xn] and apply the trapezoidal rule to each. Setting $h = \frac{b-a}{n}$, we have

$$\int_a^b f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \cdots + \int_{x_{n-1}}^{x_n} f(x)\,dx$$

$$\approx \frac{h}{2}\,[f_0 + f_1] + \frac{h}{2}\,[f_1 + f_2] + \cdots + \frac{h}{2}\,[f_{n-1} + f_n] = \frac{h}{2}\,\big[f_0 + 2(f_1 + f_2 + \cdots + f_{n-1}) + f_n\big]$$

$$= \frac{h}{2}\,\{\text{sum of first and last ordinates} + 2\,(\text{sum of the other ordinates})\}.$$

Error in the General Trapezoidal Formula

$$R = \int_{x_0}^{x_n} f(x)\,dx - \frac{h}{2}\sum_{i=1}^{n}\,[f_{i-1} + f_i] = \sum_{i=1}^{n}\left[\int_{x_{i-1}}^{x_i} f(x)\,dx - \frac{h}{2}\,(f_{i-1} + f_i)\right] = -\frac{h^3}{12}\sum_{i=1}^{n} f''(\xi_i).$$

Since f″ is continuous on [a, b], there exists a point ξ ∈ [a, b] such that $f''(\xi) = \frac{1}{n}\sum_{i=1}^{n} f''(\xi_i)$. We then have

$$R = -\frac{n h^3}{12}\, f'' (\xi) = -\frac{(b-a)h^2}{12}\, f''(\xi).$$

Example 6.5: Using the trapezoidal formula, compute the integral $\int_0^1 \frac{dx}{1+x}$ using the following data.

xi:  0,  0.1,      0.2,      0.3,      0.4,      0.5,      0.6,      0.7,      0.8,      0.9,      1.0
fi:  1,  0.90909,  0.83333,  0.76923,  0.71429,  0.66667,  0.62500,  0.58824,  0.55556,  0.52632,  0.50000

Solution: taking n = 10, h = 0.1,

$$\int_0^1 \frac{dx}{1+x} \approx \frac{0.1}{2}\,\big[1 + 2\,(0.90909 + 0.83333 + \cdots + 0.52632) + 0.5\big] \approx 0.69377.$$
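The general trapezoidal formula translates directly into code. A short Python sketch, sampling f(x) = 1/(1 + x) itself rather than the rounded table values:

```python
# Composite trapezoidal rule; applied to ∫_0^1 dx/(1+x) with n = 10
# (the exact value is ln 2 ≈ 0.69315).
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + inner)

approx = trapezoid(lambda x: 1 / (1 + x), 0.0, 1.0, 10)
print(round(approx, 5))          # → 0.69377 (vs ln 2 = 0.69315)
```

The gap of about 6 × 10⁻⁴ from ln 2 is consistent with the error bound −(b − a)h²/12 · f″(ξ).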

Simpson’s Formula

In the Newton–Cotes formula, selecting n = 2 gives

$$\int_a^b f(x)\,dx \approx h \sum_{i=0}^{2} A_i f_i.$$

For i = 0:

$$A_0 = \frac{(-1)^2}{0!\,2!}\int_0^2 \frac{S(S-1)(S-2)}{S}\,dS = \frac{1}{2}\int_0^2 (S^2 - 3S + 2)\,dS = \frac{1}{2}\left[\frac{8}{3} - 6 + 4\right] = \frac{1}{3}.$$

For i = 1:

$$A_1 = \frac{(-1)^{2-1}}{1!\,1!}\int_0^2 \frac{S(S-1)(S-2)}{S-1}\,dS = -\int_0^2 (S^2 - 2S)\,dS = -\left[\frac{8}{3} - 4\right] = \frac{4}{3}.$$

For i = 2:

$$A_2 = \frac{(-1)^{2-2}}{2!\,0!}\int_0^2 \frac{S(S-1)(S-2)}{S-2}\,dS = \frac{1}{2}\int_0^2 (S^2 - S)\,dS = \frac{1}{2}\left[\frac{8}{3} - 2\right] = \frac{1}{3}.$$

Therefore

$$\int_a^b f(x)\,dx \approx h\,[A_0 f_0 + A_1 f_1 + A_2 f_2] = \frac{h}{3}\,[f_0 + 4f_1 + f_2],$$

which is Simpson's 1/3 formula.

The remainder term of Simpson's 1/3 formula is

$$R = \int_{x_0}^{x_2} f(x)\,dx - \frac{h}{3}\,[f_0 + 4f_1 + f_2].$$

Fixing the midpoint x1 and regarding R = R(h) as a function of h, we get

$$R(h) = \int_{x_1-h}^{x_1+h} f(x)\,dx - \frac{h}{3}\,[f(x_1 - h) + 4 f(x_1) + f(x_1 + h)],$$

$$R'(h) = f(x_1 + h) + f(x_1 - h) - \frac{1}{3}\,[f(x_1 - h) + 4 f(x_1) + f(x_1 + h)] - \frac{h}{3}\,[f'(x_1 + h) - f'(x_1 - h)]$$

$$= \frac{2}{3}\,[f(x_1 + h) + f(x_1 - h)] - \frac{4}{3}\, f(x_1) - \frac{h}{3}\,[f'(x_1 + h) - f'(x_1 - h)].$$

Expanding f(x1 ± h) and f′(x1 ± h) in Taylor series about x1 through the fourth-order terms, the lower-order terms cancel and we are left with

$$R'(h) = -\frac{1}{18}\, h^4 f^{(iv)}(\xi), \qquad \xi \in (x_1 - h,\, x_1 + h),$$

and integrating with respect to h,

$$R(h) = -\frac{1}{90}\, h^5 f^{(iv)}(\xi).$$

Simpson’s General Formula

Since each application of Simpson's rule requires two subintervals, n must be an even integer; let n = 2m, and let f(xi) be the values of the function f at the equally spaced points a = x0, …, x2m = b with $h = \frac{b-a}{2m}$. Applying Simpson's rule to each of the doubled intervals [x0, x2], [x2, x4], …, [x2m−2, x2m] of length 2h, we get

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\,(f_0 + 4f_1 + f_2) + \frac{h}{3}\,(f_2 + 4f_3 + f_4) + \cdots + \frac{h}{3}\,(f_{2m-2} + 4f_{2m-1} + f_{2m})$$

$$= \frac{h}{3}\,\big[(f_0 + f_{2m}) + 4\,(f_1 + f_3 + f_5 + \cdots + f_{2m-1}) + 2\,(f_2 + f_4 + f_6 + \cdots + f_{2m-2})\big].$$

Letting S1 = f1 + f3 + f5 + … + f2m−1 and S2 = f2 + f4 + f6 + … + f2m−2,

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\,[f_0 + 4S_1 + 2S_2 + f_{2m}],$$

which is Simpson's 1/3 general formula.

The error in Simpson's general formula is the sum of the errors on the doubled intervals [x2k−2, x2k], k = 1, 2, …, m:

$$R = -\frac{h^5}{90}\sum_{k=1}^{m} f^{(iv)}(\xi_k).$$

Since f⁗(x) is continuous on [a, b], there exists a point ξ ∈ [a, b] with

$$\min_k f^{(iv)}(\xi_k) \le f^{(iv)}(\xi) \le \max_k f^{(iv)}(\xi_k), \qquad f^{(iv)}(\xi) = \frac{1}{m}\sum_{k=1}^{m} f^{(iv)}(\xi_k).$$

Therefore, since m = (b − a)/(2h),

$$R = -\frac{m h^5}{90}\, f^{(iv)}(\xi) = -\frac{(b-a)h^4}{180}\, f^{(iv)}(\xi).$$

The method is of order 4.

6.2.4 Convergence Discussion of the Newton–Cotes Formulas

Activity 6.3:

Will the Newton–Cotes formula converge as n → ∞? That is, does

$$\lim_{n\to\infty} h \sum_{i=0}^{n} A_i f_i = \int_a^b f(x)\,dx$$

hold for

$$A_i = \frac{(-1)^{n-i}}{i!\,(n-i)!}\int_0^n \frac{S^{[n+1]}}{S - i}\,dS\,?$$

Consider the following example.

$$\int_{-4}^{4} \frac{dx}{1+x^2} = 2\tan^{-1}(4) \approx 2.6516.$$

Using the Newton–Cotes formulas:

n     h Σ Ai fi
2     5.4902
4     2.2776
6     3.3288
8     1.9411
10    3.5956

This implies that $h\sum_{i=0}^{n} A_i f_i$ does not converge as n → ∞.

This illustrates the fact that the Newton–Cotes integration formulas need not converge to $\int_a^b f(x)\,dx$, and the reason for this is that

$$\lim_{n\to\infty} \sum_{i=0}^{n} |A_i| = \infty.$$

But if we can find B < ∞ such that

$$\sum_{i=0}^{n} |A_i| \le B \quad\text{for all } n,$$

then

$$\lim_{n\to\infty} h \sum_{i=0}^{n} A_i f_i = \int_a^b f(x)\,dx$$

for any continuous function f(x) on [a, b].

For cases where the coefficients are not all positive, the integration formulas suffer a build-up of round-off error, besides the coefficients being complicated to evaluate for large n. Therefore the Newton–Cotes formulas are seldom used for n larger than 9.

Example 6.6: Using Simpson's formula, compute the integral $\int_0^1 \frac{dx}{1+x}$, taking n = 10.

Solution: Here n = 10, hence h = 1/10 = 0.1.

i     xi     fi (odd i)   fi (even i)
0     0.0                 f0 = 1
1     0.1    0.90909
2     0.2                 0.83333
3     0.3    0.76923
4     0.4                 0.71429
5     0.5    0.66667
6     0.6                 0.62500
7     0.7    0.58824
8     0.8                 0.55556
9     0.9    0.52632
10    1.0                 f10 = 0.50000

             S1 = 3.45955  S2 = 2.72818

$$\int_0^1 \frac{dx}{1+x} \approx \frac{h}{3}\,(f_0 + 4S_1 + 2S_2 + f_{10}) = \frac{0.1}{3}\,(1 + 4\times 3.45955 + 2\times 2.72818 + 0.5) \approx 0.69315.$$
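A sketch of the composite Simpson computation in Python (sampling f(x) = 1/(1 + x) directly instead of the 5-decimal table entries):

```python
# Composite Simpson's 1/3 rule; applied to ∫_0^1 dx/(1+x) with 2m = 10.
def simpson(f, a, b, n):                  # n = number of subintervals, even
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s1 = sum(f(a + i * h) for i in range(1, n, 2))   # odd ordinates   -> ×4
    s2 = sum(f(a + i * h) for i in range(2, n, 2))   # even interior   -> ×2
    return h / 3 * (f(a) + 4 * s1 + 2 * s2 + f(b))

approx = simpson(lambda x: 1 / (1 + x), 0.0, 1.0, 10)
print(round(approx, 5))                   # → 0.69315
```

Simpson's rule already agrees with ln 2 = 0.693147… to five decimals here, where the trapezoidal rule with the same nodes was off in the fourth.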

Example 6.7: Find log10 656, given

x:        654     658     659     661
log10 x:  2.8156  2.8182  2.8189  2.8202

Solution: Let y = f(x) = log10 x. Hence, by the Lagrange interpolation formula,

$$\log_{10} 656 = \frac{(656-658)(656-659)(656-661)}{(654-658)(654-659)(654-661)}\,(2.8156) + \frac{(656-654)(656-659)(656-661)}{(658-654)(658-659)(658-661)}\,(2.8182)$$
$$+\, \frac{(656-654)(656-658)(656-661)}{(659-654)(659-658)(659-661)}\,(2.8189) + \frac{(656-654)(656-658)(656-659)}{(661-654)(661-658)(661-659)}\,(2.8202)$$
$$= 2.8169.$$

Exercises:

1. The following table is given. Find the form of the function:

x 0 1 2 5

f(x) 2 3 12 147

2. The function y = f(x) is given at the points (7, 3), (8, 1), (9, 1) and (10, 5). Find the value of y for x = 9.5 using Lagrange's interpolation formula.

3. Given the table of values


x 150 152 154 156

y = √x    12.247  12.329  12.410  12.490

Evaluate √155 by using Lagrange's interpolation formula.

4. Find a cubic polynomial which approximates the following data:


x -2 -1 2 3

y(x) -12 -8 3 5

6.3 Inverse Interpolation

I) The case of equally spaced points.

Suppose we have a function f(x) given in tabular form. The problem of inverse interpolation consists in determining a value of the argument x from a given value of the function f(x). In this case we assume that the points are equidistant and that the function f(x) is monotonic. Replacing f(x) by Newton's forward interpolation polynomial, we get

$$f(x) = f_0 + S\,\Delta f_0 + \frac{S(S-1)}{2!}\,\Delta^2 f_0 + \cdots + \frac{S(S-1)\cdots(S-n+1)}{n!}\,\Delta^n f_0.$$

We rearrange to solve for the S in the second term:

$$S = \frac{1}{\Delta f_0}\left[f(x) - f_0 - \frac{S(S-1)}{2!}\,\Delta^2 f_0 - \frac{S(S-1)(S-2)}{3!}\,\Delta^3 f_0 - \cdots - \frac{S(S-1)\cdots(S-n+1)}{n!}\,\Delta^n f_0\right].$$

Then apply the method of successive approximations. For the initial approximation we neglect all the terms in S on the right to obtain

$$S_0 = \frac{f(x) - f_0}{\Delta f_0}.$$

The next approximation is obtained by substituting S0 on the right-hand side, now including one more term:

$$S_1 = \frac{1}{\Delta f_0}\left[f(x) - f_0 - \frac{S_0(S_0-1)}{2}\,\Delta^2 f_0\right].$$

The approximation after that substitutes S1 on the right and picks up another term:

$$S_2 = \frac{1}{\Delta f_0}\left[f(x) - f_0 - \frac{S_1(S_1-1)}{2}\,\Delta^2 f_0 - \frac{S_1(S_1-1)(S_1-2)}{3!}\,\Delta^3 f_0\right].$$

The iteration is continued until the required accuracy is obtained.

Having found S, we determine x from the formula

x = x0 + Sh.

Example 6.8: Tabulate y = x³ for x = 2, 3, 4 and 5, and calculate the cube root of y = 10 correct to three decimal places.

Solution:

x y y 2y 3y

2 8

19

3 27 18

37 6

4 64 24

61

5 125

We compute S by successive approximation, with x0 = 2, h = 1 and y = 10.

Mathematics Program Numerical Analysis I


134
$$S_0 = \frac{y - y_0}{\Delta y_0} = \frac{10 - 8}{19} \approx 0.1$$

$$S_1 = \frac{1}{\Delta y_0}\left[y - y_0 - \frac{S_0(S_0-1)}{2}\,\Delta^2 y_0\right] = \frac{1}{19}\,[10 - 8 - 0.1(0.1-1)\cdot 9] = 0.15$$

$$S_2 = \frac{1}{\Delta y_0}\left[y - y_0 - \frac{S_1(S_1-1)}{2}\,\Delta^2 y_0 - \frac{S_1(S_1-1)(S_1-2)}{3!}\,\Delta^3 y_0\right]$$
$$= \frac{1}{19}\left[10 - 8 - 0.15(0.15-1)\,\frac{18}{2} - 0.15(0.15-1)(0.15-2)\,\frac{6}{3!}\right] = 0.1532.$$

Hence x = x0 + hS2 = 2 + 1(0.1532) = 2.1532, so the cube root of 10 is approximately 2.1532 (the true value is 2.15443…).
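The successive-approximation scheme can be iterated past S2 in code. A Python sketch (the loop count of 10 is an arbitrary choice, ample for the iteration to settle here):

```python
# Inverse interpolation by successive approximation (equally spaced nodes),
# reproducing the cube-root-of-10 computation above.
xs = [2, 3, 4, 5]
ys = [8, 27, 64, 125]                  # y = x³
y, h = 10, xs[1] - xs[0]

d = [list(ys)]                         # forward differences Δ, Δ², Δ³
for _ in range(3):
    d.append([d[-1][i + 1] - d[-1][i] for i in range(len(d[-1]) - 1)])
d1, d2, d3 = d[1][0], d[2][0], d[3][0]

S = (y - ys[0]) / d1                   # S0
for _ in range(10):                    # S_{k+1} from S_k
    S = (y - ys[0] - S * (S - 1) * d2 / 2
                   - S * (S - 1) * (S - 2) * d3 / 6) / d1
x = xs[0] + S * h
print(round(x, 4))                     # → 2.1544 (true value 2.15443...)
```

Because the cubic through four points of y = x³ is x³ exactly, the iteration's fixed point is the exact cube root; stopping after two passes reproduces the hand value 2.1532.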

II. Inverse interpolation for the case of unequally spaced points.

The problem of inverse interpolation of a function for the case of unequally spaced values of the argument x0, x1, …, xn can be solved directly by means of Lagrange's interpolation formula. To do this, it is sufficient to take y = f(x) as the independent variable and write

$$x = \sum_{i=0}^{n} \frac{(y - y_0)\cdots(y - y_{i-1})(y - y_{i+1})\cdots(y - y_n)}{(y_i - y_0)\cdots(y_i - y_{i-1})(y_i - y_{i+1})\cdots(y_i - y_n)}\; x_i,$$

where yi = f(xi), i = 0, 1, …, n.

Hence, inverse interpolation can be used for finding the root of an equation, provided values of f(x) at some points are given.
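As a sketch of the last remark: treating x as a function of y and evaluating the Lagrange formula at y = 0 gives a rough root estimate. The data below, f(x) = x³ − 10 sampled at x = 2, 3, 4, 5, is a made-up illustration, not from the text:

```python
# Root estimate via inverse Lagrange interpolation: swap the roles of the
# variables and evaluate x(y) at y = 0.
def lagrange(ts, vs, t):
    total = 0.0
    for i in range(len(ts)):
        term = vs[i]
        for j in range(len(ts)):
            if j != i:
                term *= (t - ts[j]) / (ts[i] - ts[j])
        total += term
    return total

ys = [-2, 17, 54, 115]                 # f(x) = x³ - 10 at x = 2, 3, 4, 5
xs = [2, 3, 4, 5]
root = lagrange(ys, xs, 0.0)           # rough estimate of the root 10^(1/3)
print(root)
```

Since x(y) = (y + 10)^(1/3) is far from polynomial over these widely spaced y-values, the estimate is only approximate (within a few hundredths of 2.1544); nodes nearer the root would sharpen it.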

6.4 Numerical Approximation of Improper Integrals

Let us first consider the approximate computation of the improper integral

$$\int_a^{\infty} f(x)\,dx \qquad (*)$$

where f(x) is continuous on a ≤ x < ∞. The integral (*) is convergent if the finite limit

$$\lim_{b\to\infty}\int_a^b f(x)\,dx$$

exists. If the limit does not exist, then the integral is divergent, and such an integral is considered to be meaningless. To evaluate a convergent improper integral to a given accuracy ε, we represent it in the form

$$\int_a^{\infty} f(x)\,dx = \int_a^b f(x)\,dx + \int_b^{\infty} f(x)\,dx.$$

Since the integral converges, the number b may be chosen so large that the inequality

$$\left|\int_b^{\infty} f(x)\,dx\right| < \frac{\varepsilon}{2}$$

holds true. Then the proper integral $\int_a^b f(x)\,dx$ may be computed using one of the quadrature formulas.

Example 6.9: Find the approximate value of the integral $\int_2^{\infty}\frac{dx}{1+x^2}$ to within the accuracy ε = 10⁻⁴.

Solution:

$$\int_2^{\infty}\frac{dx}{1+x^2} = \int_2^{b}\frac{dx}{1+x^2} + \int_b^{\infty}\frac{dx}{1+x^2}.$$

Now we choose b such that

$$\int_b^{\infty}\frac{dx}{1+x^2} < \frac{\varepsilon}{2} = \frac{10^{-4}}{2}.$$

Since

$$\int_b^{\infty}\frac{dx}{1+x^2} < \int_b^{\infty}\frac{dx}{x^2} = \frac{1}{b},$$

it suffices to take $\frac{1}{b} \le \frac{10^{-4}}{2}$, i.e. $b \ge \frac{2}{10^{-4}} = 2\times 10^4$. Hence

$$\int_2^{\infty}\frac{dx}{1+x^2} \approx \int_2^{2\times 10^4}\frac{dx}{1+x^2},$$

and the integral on the right is computed by a quadrature formula.
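The truncate-then-quadrature recipe can be sketched in Python. The subinterval count 200 000 (step ≈ 0.1) is an arbitrary choice for the illustration, and composite Simpson stands in for "one of the quadrature formulas":

```python
# Truncate the tail (1/b ≤ ε/2 bounds ∫_b^∞ dx/(1+x²)), then apply a
# quadrature rule on the finite interval [2, b].
import math

eps = 1e-4
b = 2 / eps                            # = 2·10⁴, so the tail is < ε/2

def simpson(f, a, b, n):
    h = (b - a) / n
    s1 = sum(f(a + i * h) for i in range(1, n, 2))
    s2 = sum(f(a + i * h) for i in range(2, n, 2))
    return h / 3 * (f(a) + 4 * s1 + 2 * s2 + f(b))

approx = simpson(lambda x: 1 / (1 + x * x), 2.0, b, 200_000)
exact = math.pi / 2 - math.atan(2.0)   # ∫_2^∞ dx/(1+x²)
print(abs(approx - exact) < eps)       # → True
```

The remaining error is dominated by the discarded tail (≈ 5 × 10⁻⁵), safely inside the accuracy ε.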

6.5 Integrals of discontinuous functions

Suppose now that the interval of integration [a, b] is finite and the integrand f(x) has a finite number of discontinuities on [a, b]. Let us examine the case when there is a single discontinuity point c of the function f(x) on [a, b].

In order to approximate the integral to a given accuracy ε, one chooses positive numbers δ1 and δ2 so small that the inequality

$$\left|\int_{c-\delta_1}^{c+\delta_2} f(x)\,dx\right| < \varepsilon$$

holds true.

Then, using the quadrature formulas, one approximately calculates the proper integrals

$$\int_a^{c-\delta_1} f(x)\,dx \quad\text{and}\quad \int_{c+\delta_2}^{b} f(x)\,dx,$$

so that

$$\int_a^b f(x)\,dx \approx \int_a^{c-\delta_1} f(x)\,dx + \int_{c+\delta_2}^{b} f(x)\,dx.$$

6.6 Multiple Integrals


The techniques discussed in the previous sections can be extended for use in the approximation of multiple integrals. We consider the approximation of a double integral

$$\iint_R f(x, y)\,dA,$$

where R is a rectangular region in the plane:

R = {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}

for some constants a, b, c and d. To illustrate the approximation technique, we employ Simpson's rule, although any other Newton–Cotes formula could be used. Suppose that integers n and m are chosen to determine the step sizes h = (b − a)/2n and k = (d − c)/2m. Writing the double integral as an iterated integral,

$$\iint_R f(x, y)\,dA = \int_a^b\left(\int_c^d f(x, y)\,dy\right)dx,$$

we first use Simpson's rule to evaluate

$$\int_c^d f(x, y)\,dy,$$

treating x as a constant. Letting yj = c + jk for j = 0, 1, …, 2m gives

$$\int_c^d f(x, y)\,dy \approx \frac{k}{3}\left[f(x, y_0) + 2\sum_{j=1}^{m-1} f(x, y_{2j}) + 4\sum_{j=1}^{m} f(x, y_{2j-1}) + f(x, y_{2m})\right].$$

Hence,

$$\int_a^b\!\int_c^d f(x, y)\,dy\,dx \approx \frac{k}{3}\int_a^b f(x, y_0)\,dx + \frac{2k}{3}\sum_{j=1}^{m-1}\int_a^b f(x, y_{2j})\,dx + \frac{4k}{3}\sum_{j=1}^{m}\int_a^b f(x, y_{2j-1})\,dx + \frac{k}{3}\int_a^b f(x, y_{2m})\,dx.$$

We now apply Simpson's rule to each of these integrals. Letting xi = a + ih for i = 0, 1, …, 2n produces, for each yj (j = 0, 1, …, 2m),

$$\int_a^b f(x, y_j)\,dx \approx \frac{h}{3}\left[f(x_0, y_j) + 2\sum_{i=1}^{n-1} f(x_{2i}, y_j) + 4\sum_{i=1}^{n} f(x_{2i-1}, y_j) + f(x_{2n}, y_j)\right].$$

The resulting approximation has the form

$$\int_a^b\!\int_c^d f(x, y)\,dy\,dx \approx \frac{hk}{9}\Bigg\{\left[f(x_0, y_0) + 2\sum_{i=1}^{n-1} f(x_{2i}, y_0) + 4\sum_{i=1}^{n} f(x_{2i-1}, y_0) + f(x_{2n}, y_0)\right]$$
$$+\, 2\sum_{j=1}^{m-1}\left[f(x_0, y_{2j}) + 2\sum_{i=1}^{n-1} f(x_{2i}, y_{2j}) + 4\sum_{i=1}^{n} f(x_{2i-1}, y_{2j}) + f(x_{2n}, y_{2j})\right]$$
$$+\, 4\sum_{j=1}^{m}\left[f(x_0, y_{2j-1}) + 2\sum_{i=1}^{n-1} f(x_{2i}, y_{2j-1}) + 4\sum_{i=1}^{n} f(x_{2i-1}, y_{2j-1}) + f(x_{2n}, y_{2j-1})\right]$$
$$+ \left[f(x_0, y_{2m}) + 2\sum_{i=1}^{n-1} f(x_{2i}, y_{2m}) + 4\sum_{i=1}^{n} f(x_{2i-1}, y_{2m}) + f(x_{2n}, y_{2m})\right]\Bigg\}.$$

In short,

$$\int_a^b\!\int_c^d f(x, y)\,dy\,dx \approx \frac{hk}{9}\sum_{i=0}^{2n}\sum_{j=0}^{2m} w_{ij}\, f(x_i, y_j),$$

where wij is the product of the one-dimensional Simpson weights in x and in y: corner weights are 1, the edge weights alternate 4, 2, 4, …, and the interior weights are the products 16, 8, etc.

Example 6.10: Approximate $\int_{1.4}^{2.0}\!\int_{1.0}^{1.5} \ln(x + 2y)\,dy\,dx$.

Solution: Using Simpson's rule with n = 2 and m = 1, so that 2n = 4 and 2m = 2, the nodes are (xi, yj) with i = 0, 1, 2, 3, 4 and j = 0, 1, 2, and

$$h = \frac{2.0 - 1.4}{4} = 0.15, \qquad k = \frac{1.5 - 1.0}{2} = 0.25.$$

$$\int_{1.4}^{2.0}\!\int_{1.0}^{1.5} \ln(x + 2y)\,dy\,dx \approx \int_{1.4}^{2.0}\frac{0.25}{3}\,\big[\ln(x + 2y_0) + 4\ln(x + 2y_1) + \ln(x + 2y_2)\big]\,dx$$
$$\approx \frac{(0.15)(0.25)}{9}\,\big[\ln(x_0 + 2y_0) + 4\ln(x_1 + 2y_0) + 2\ln(x_2 + 2y_0) + 4\ln(x_3 + 2y_0) + \ln(x_4 + 2y_0)$$
$$+\, 4\ln(x_0 + 2y_1) + 16\ln(x_1 + 2y_1) + 8\ln(x_2 + 2y_1) + 16\ln(x_3 + 2y_1) + 4\ln(x_4 + 2y_1)$$
$$+\, \ln(x_0 + 2y_2) + 4\ln(x_1 + 2y_2) + 2\ln(x_2 + 2y_2) + 4\ln(x_3 + 2y_2) + \ln(x_4 + 2y_2)\big].$$

In short,

$$\int_{1.4}^{2.0}\!\int_{1.0}^{1.5} \ln(x + 2y)\,dy\,dx \approx \frac{(0.15)(0.25)}{9}\sum_{i=0}^{4}\sum_{j=0}^{2} w_{ij}\,\ln(x_i + 2y_j),$$

where wij is the coefficient of the node (xi, yj). The table of wij for 2n = 4, 2m = 2 is:

yj \ xi    1.40   1.55   1.70   1.85   2.00
1.50        1      4      2      4      1
1.25        4     16      8     16      4
1.00        1      4      2      4      1
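Since each wij is a product of one-dimensional Simpson weights, the whole example fits in a short Python sketch:

```python
# Double Simpson: the weight of node (xi, yj) is the product of the 1-D
# Simpson weights, scaled by hk/9; applied to the integral of Example 6.10.
import math

def simpson_weights(n):                # n even: 1, 4, 2, 4, ..., 4, 1
    return [1 if i in (0, n) else (4 if i % 2 else 2) for i in range(n + 1)]

def double_simpson(f, a, b, c, d, two_n, two_m):
    h, k = (b - a) / two_n, (d - c) / two_m
    wx, wy = simpson_weights(two_n), simpson_weights(two_m)
    total = sum(wx[i] * wy[j] * f(a + i * h, c + j * k)
                for i in range(two_n + 1) for j in range(two_m + 1))
    return h * k / 9 * total

val = double_simpson(lambda x, y: math.log(x + 2 * y),
                     1.4, 2.0, 1.0, 1.5, 4, 2)
print(round(val, 4))                   # → 0.4296
```

A quick plausibility check: the integrand averages about ln 4.2 ≈ 1.435 over the 0.3-area rectangle, so a value near 0.43 is expected.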

Summary

Numerical differentiation is the process of obtaining the value of the derivative of a function from a set of numerical values of that function.

1. If the arguments are equally spaced:
   a. We use Newton's forward formula if we want the derivative of the function at a point near the beginning of the table.
   b. If we want the derivative of the function at a point near the end of the table, we use Newton's backward formula.
   c. If the derivative is wanted at a point near the middle of the table, we apply Stirling's difference formula.
2. In case the arguments are unequally spaced, we use Newton's divided difference formula.

Numerical integration is the process of obtaining the value of a definite integral from a set of numerical values of the integrand. The process of finding the value of the definite integral

$$I = \int_a^b f(x)\,dx$$

of a function of a single variable is called numerical quadrature; applied to a function of two variables, it is called mechanical cubature.

The problem of numerical integration is solved by first approximating the function f(x) by an interpolating polynomial and then integrating it between the desired limits. Thus f(x) ≈ Pn(x) and

$$\int_a^b f(x)\,dx \approx \int_a^b P_n(x)\,dx.$$

Exercises:

1. Find the first, second and third derivatives of the function tabulated below at the point x = 1.5.

x     1.5    2.0    2.5     3.0     3.5     4.0
f(x)  3.375  7.000  13.625  24.000  38.875  59.000

2. From the following table, find the first derivative at x = 4.

x  1  2  4  8   10
y  0  1  5  21  27

3. Evaluate $\int_0^1 \frac{dx}{1+x^2}$ by using Simpson's one-third and three-eighth rules. Hence obtain an approximate value of π in each case.
