
NUMERICAL METHODS (ANALYSIS)

Written by Nandwa V.C.

School of Mathematics

University of Nairobi
Lecture Notes

March, 2020
DEDICATION

I dedicate this book to my son Einstein Nandwa Chiteri and all my Numerical Analysis
students.

Einstein Nandwa Chiteri

ABOUT THE AUTHOR
Nandwa V.C. is a passionate mathematician and a part-time lecturer in Applied Mathematics
at the University of Nairobi (UoN), Daystar University and KAG-East University. Mr
Nandwa received his Bachelor of Science (Mathematics) from the University of Nairobi
(First Class Honors, 2016) and his Master of Science (Applied Mathematics) from the University of
Nairobi (2018), and is currently working on his Ph.D. research, "Mathematical Analysis and
Simulations of Models for Spatio-temporal Dynamics of Rho GTPases", at the University of
Nairobi.
Nandwa is also a tutor at St Theresse Training College, where he teaches Mathematics,
Physics and Chemistry. He has also taught Mathematics in top-performing schools, including
Alliance High School, Alliance Girls' High School, Moi Girls' School Nairobi and Precious Blood
Secondary School-Riruta, and has always produced excellent results. He is a member of
the Kenya Mathematical Olympiad (KMO) and helps in the training of students who
participate in the KMO competitions and represent the country in the Pan African Mathematical
Olympiad (PAMO) and the International Mathematical Olympiad (IMO). He also teaches
younger learners in pre-school and primary school!

From left: Cynthia Migika (Alliance Girls' High School), Nandwa Vincent Chiteri (tutor)
and Victor Momanyi (Alliance High School) during International Mathematical Olympiad
(IMO) training at Alliance Girls' High School, 2016.

"Mathematics is good for our health!"

PREFACE
Numerical analysis is the area of mathematics and computer science that creates, analyzes,
and implements algorithms for solving numerically the problems of continuous mathematics.
Such problems originate generally from real-world applications of algebra, geometry, and cal-
culus, and they involve variables that vary continuously; these problems occur throughout
the natural sciences, social sciences, engineering, medicine, and business. During the second
half of the twentieth century and continuing up to the present day, digital computers have
grown in power and availability. This has led to the use of increasingly realistic mathematical
models in science and engineering, and numerical analysis of increasing sophistication has
been needed to solve these more sophisticated mathematical models of the world. The formal
academic area of numerical analysis varies, from quite foundational mathematical studies to
the computer science issues involved in the creation and implementation of algorithms.
The study of Numerical Methods/Analysis is indispensable for a prospective student of
Education (Mathematics), Pure or Applied Mathematics, Statistics, Actuarial Science, Computer
Science and Engineering. It has become an integral part of the mathematical background
necessary for diverse fields such as Mathematics, Chemistry, Physics, Economics, Education,
Business, Engineering and Computer Science. In writing this book, I was guided by my
experience in teaching Numerical Methods/Analysis at the University of Nairobi, Daystar
University and KAG-EAST University. I also owe it greatly to my Numerical Methods
lecturers, Prof. Wandera Ogana and Dr Juma Victor, both of whom are my PhD supervisors. The
book is based on my lecture notes for the course entitled Numerical Methods.
The choice of material is not entirely mine; it is laid down by the University of Nairobi SMA
322, SMA 423, TMA 322, FEB 312, FEE 472 and FEM 472 Numerical Methods/Analysis
syllabus, together with the MAT 325 Numerical Analysis syllabus of Daystar University. For
students, my purpose was to introduce those studying sciences and Mathematics to all the
mathematical foundations they need for their future studies in the world of Numerical
Methods; Numerical Methods is the key to the door of research in modelling and most
real-world problems. For instructors, my purpose was to design a flexible, comprehensive teaching
tool using proven pedagogical techniques in Mathematics. I wanted to provide instructors
with a package of materials that they could use to teach Numerical Methods/Analysis
effectively and efficiently in the most appropriate manner for their particular students. I hope I
have accomplished these goals without watering down the material.

...STILL WORKING ON THE FULL CONTENTS!!!!!...

"Mathematics can Smile!"

ACKNOWLEDGEMENT
First and foremost, I wish to thank The Almighty God for His unwavering love, care, life
and the wisdom He has instilled in me. I also wish to thank Prof. Ogana Wandera for planting
strong muscles of Numerical Analysis during my undergraduate studies. Prof, your smart way
of teaching, in terms of simplification of terms and ideas, is a great thing I have inherited
from you, together with your guidance and continued support towards the quality of my research.
I also wish to thank Dr Nyandwi Charles for "hardening me" in Mathematics. You taught me that
"doing Mathematics is the only way of doing Mathematics!" Thanks so much.

GENERAL INTRODUCTION
0.1 Pieces of Advice to Students
1. The secret of excelling in this course is CONSTANT revision; apply "The Pentagon Theorem",
which states that any serious science, engineering or mathematics student MUST solve
at least FIVE (mathematical) problems a day!

2. Before you start revising the Numerical Analysis course, make sure that you have a pen,
a book/writing material and a calculator!

3. The main tools for this course are a CALCULATOR, a pen and a writing
surface (pieces of paper/an exercise book); without these three basics you can't solve
any problem in this course!

4. Solve as many problems in Numerical Analysis as you can.

5. Without proper practice in this course, YOU WILL FAIL! If you do proper and regular
practice, YOU WILL EXCEL!

0.2 Introduction
The main question to ask ourselves is: what is Numerical Analysis? According to L.N. Trefethen,
Numerical Analysis can be defined as the study of algorithms for the problems of
continuous mathematics. The field of numerical analysis predates the invention of modern
computers by many centuries. Linear interpolation was already in use more than 2000 years
ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is
obvious from the names of important algorithms such as Newton's method, the Lagrange
interpolation polynomial, Gaussian elimination, and Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of
data such as interpolation points and function coefficients. Using these tables, often calcu-
lated out to 16 decimal places or more for some functions, one could look up values to plug
into the formulas given and achieve very good numerical estimates of some functions. The
canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a
1000-plus page book of a very large number of commonly used formulas and functions and
their values at many points. The function values are no longer very useful when a computer
is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These cal-
culators evolved into electronic computers in the 1940s, and it was then found that these
computers were also useful for administrative purposes. But the invention of the computer

also influenced the field of numerical analysis, since now longer and more complicated calcu-
lations could be done.

0.3 Aim
1. Studying the numerical methods for solving problems, and mastering the methodological
approaches used in developing numerical calculations.

2. Studying the methods for solving research and applied tasks.

3. Studying problem-solving methods based on the application of special software
(MATLAB).

0.4 Objectives
The main goal of this course is to devise algorithms that give quick and accurate answers
(solutions with minimal errors) to mathematical problems for scientists and engineers, nowadays
using computers.
Other objectives are:

1. To familiarize students with ways of solving complicated
mathematical problems numerically.

2. To understand the basics of numerical methods for the analysis of
experimental results.

3. To be aware of the basic methods for solving linear and nonlinear problems of algebra.

4. To develop practical skills in the use of numerical methods, including the use of software.

5. To obtain numerical solutions to problems of mathematics.

6. To describe and understand the various errors and approximations in numerical
methods.

7. To understand the available methods for solving equations in one variable.

8. To explain and understand the available methods for solving simultaneous
equations.

9. To analyze and evaluate problem solutions.

10. To study curve fitting and interpolation.

0.5 Learning Outcomes
To know:

1. The basics of the theory of errors and of approximation theory.

2. The fundamental principles of mathematical modeling.

3. The numerical methods for solving problems of algebra.

4. The methods of numerical integration and differentiation.

5. The algorithms for implementing numerical methods.

To be able to:

(i) Formulate the problem and find ways to solve it.
(ii) Classify and select numerical methods.
(iii) Develop the algorithms of numerical methods and implement them in practice by
means of software products.
(iv) Analyze and evaluate the problem solutions.
(v) Find numerically the required solutions of mathematical models for various
technological processes.
(vi) Perform calculations to solve problems with the help of the software package.

To apply:

(i) Computational methods and software resources to solve different tasks of industry.
(ii) Skills to evaluate in practice the accuracy of the results.
(iii) The major techniques of using computational methods in solving various problems
of professional activity.

0.6 Course Syllabus


1. The theory of errors.

2. Solution of Algebraic and Transcendental Equations.

3. Finite Differences.

4. Interpolation with Equal Intervals.

5. Interpolation with Unequal Intervals.

6. Inverse Interpolation.

7. Central Difference Interpolation Formulae.

8. Numerical Differentiation.

9. Numerical Integration.

10. Numerical Solution of Ordinary Differential Equations.

11. Solution of Linear Equations.

12. Curve Fitting.

0.7 References
1. Richard L. Burden & J. Douglas Faires (2011). Numerical Analysis, 9th edition,
international edition.

2. G. Shanker Rao, Numerical Analysis.

3. M.K. Jain, S.R.K. Iyengar & R.K. Jain, Numerical Methods: Problems and Solutions.

4. M.K. Jain, S.R.K. Iyengar & R.K. Jain, Numerical Methods for Scientific and
Engineering Computation, 5th edition, New Age International Publishers.

5. J.N. Sharma, Numerical Methods for Engineers and Scientists, 2nd edition, Alpha Science.

6. Chapra, S., & Canale, R. (2008). Numerical Methods for Engineers, 5th edition.
McGraw-Hill.

7. Gilat, A. (2009). Numerical Methods with MATLAB, 2nd edition. Wiley.

8. Mathews, J. H., & Fink, K. D. (2004). Numerical Methods Using MATLAB, 4th edition.
Prentice Hall.

9. Epperson, J. F. (2007). An Introduction to Numerical Methods and Analysis.
Wiley-Interscience.

10. Brandimarte, P. (2006). Numerical Methods in Finance and Economics: A MATLAB-Based
Introduction (Statistics in Practice), 2nd edition. Wiley-Interscience.

Contents
ABOUT THE AUTHOR ii

PREFACE iii

ACKNOWLEDGEMENT iv

GENERAL INTRODUCTION v
0.1 Pieces of Advice to Students . . . . . . . . . . . . . . . . . . . . . . . . . . . v
0.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
0.3 Aim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
0.4 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
0.5 Learning Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
0.6 Course Syllabus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
0.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

1 INTRODUCTION TO ERRORS 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Significant Digits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Rounding off Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4.1 Types of Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 General Error Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Application of Errors to the Fundamental Operations of Arithmetic . . . . . 5
1.6.1 Errors in Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2 SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS 13
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 The Bisection Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 Stopping Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Regula–Falsi Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 The Newton–Raphson (or Newton Iteration) Method . . . . . . . . . . . . . 17
2.5 Secant Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 The Iteration Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 Muller’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.8 Generalized Newton’s Method for Multiple Roots . . . . . . . . . . . . . . . 19
2.9 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3 FINITE DIFFERENCES 1
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
3.2 Forward Difference Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
3.3 Forward Difference Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3.4 The Shift Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.5 The Backward Difference Operator . . . . . . . . . . . . . . . . . . . . . . . 5
3.6 Backward Difference Table . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.7 The Central Difference Operator . . . . . . . . . . . . . . . . . . . . . . . . 8
3.8 The Central Difference Table . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.9 The Mean Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.10 The Differential Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.11 Relationship Among Operators . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.12 Error Propagation in a Difference Table . . . . . . . . . . . . . . . . . . . . 13

4 INTERPOLATION WITH EQUAL INTERVALS 15


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2 Missing Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.3 Newton’s Binomial Expansion Formula . . . . . . . . . . . . . . . . . . . . . 15
4.4 Newton’s Forward Interpolation Formula . . . . . . . . . . . . . . . . . . . . 16
4.5 Newton-Gregory Backward Interpolation Formula . . . . . . . . . . . . . . . 18

5 INTERPOLATION WITH UNEQUAL INTERVALS 21


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Newton’s General Divided Differences Formula . . . . . . . . . . . . . . . . . 21
5.3 Lagrange’s Interpolation Formula . . . . . . . . . . . . . . . . . . . . . . . . 21
5.4 Inverse (Lagrange’s) Interpolation . . . . . . . . . . . . . . . . . . . . . . . . 25

6 CENTRAL DIFFERENCE INTERPOLATION FORMULAE 26


6.1 Gauss Forward Interpolation Formula . . . . . . . . . . . . . . . . . . . . . . 26
6.2 Gauss Backward Interpolation Formula . . . . . . . . . . . . . . . . . . . . . 26
6.3 Bessel’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.4 Stirling’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.5 Laplace-Everett Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

7 INVERSE INTERPOLATION 28
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.2 Method of Successive Approximations . . . . . . . . . . . . . . . . . . . . . 28
7.3 Method of Reversion Series . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

7.4 Applications of Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

8 CURVE FITTING 29
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.2 The Straight Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.3 Fitting a Straight Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.4 Fitting a Parabola . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.5 Exponential Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
8.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

9 MATRICES AND SIMULTANEOUS LINEAR EQUATIONS 30


9.1 Matrix Inversion Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
9.2 Gaussian Elimination Method . . . . . . . . . . . . . . . . . . . . . . . . . . 30
9.3 Gauss-Jordan Elimination Method . . . . . . . . . . . . . . . . . . . . . . . 30
9.4 LU Decomposition Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
9.5 Iteration Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
9.5.1 Jacobi Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
9.5.2 Gauss-Seidel Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
9.6 Introduction to SOR Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 31
9.7 Crout’s Triangulation Method (Method of Factorization) . . . . . . . . . . . 31
9.8 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

10 NUMERICAL SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS 32


10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
10.2 Taylor’s Series Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
10.3 Euler’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10.4 Modified Euler’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10.5 Predictor-Corrector Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10.6 Milne’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10.7 Adams Bashforth-Moulton Method . . . . . . . . . . . . . . . . . . . . . . . 35
10.8 Runge-Kutta Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10.9 Picard’s Method of Successive Approximation . . . . . . . . . . . . . . . . . 36
10.10 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

11 NUMERICAL DIFFERENTIATION 37
11.1 Derivatives Using Newton’s Forward Interpolation Formula . . . . . . . . . . 37
11.2 Derivatives Using Newton’s Backward Interpolation Formula . . . . . . . . . 38

12 NUMERICAL INTEGRATION 43

12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
12.2 General Quadrature Formula for Equidistant Ordinates . . . . . . . . . . . . 43
12.3 Trapezoidal Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
12.4 Simpson’s one-third Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
12.5 Simpson’s three-eighths Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
12.6 Weddle’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
12.7 Newton-Cotes Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
12.8 Romberg Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
12.9 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

1 INTRODUCTION TO ERRORS
1.1 Introduction
There are two kinds of numbers—exact and approximate numbers. An approximate number
x is a number that differs, but slightly, from an exact number X and is used in place of
the latter in calculations. The numbers 1, 2, 3, ..., 3/5, 3/7, ..., etc., are all exact, and
π, √2, e, ..., etc., written in this manner are also exact. 1.41 is an approximate value of √2,
and 1.414 is also an approximate value of √2. Similarly 3.14, 3.141, 3.14159, ..., etc., are
all approximate values of π.

1.2 Significant Digits


The digits that are used to express a number are called significant digits. Figure is synony-
mous with digit.

Definition 1.1. A significant digit of an approximate number is any non-zero digit in its
decimal representation, or any zero lying between significant digits, or used as place holder
to indicate a retained place.

The digits 1, 2, 3, 4, 5, 6, 7, 8, 9 are significant digits. '0' is also a significant figure except when
it is used to fix the decimal point, or to fill the places of unknown or discarded digits. For
example, in the number 0.0005010, the first four '0's are not significant digits, since they
serve only to fix the position of the decimal point and indicate the place values of the other
digits. The other two '0's are significant. Two notational conventions which make clear how
many digits of a given number are significant are given below.

1. The significant figures in a number in positional notation consist of:
(a) All non-zero digits, and
(b) Zero digits which
(i) lie between significant digits,
(ii) lie to the right of the decimal point and at the same time to the right of a non-zero digit, or
(iii) are specifically indicated to be significant.

2. The significant figures in a number written in scientific notation (M × 10^n) consist of all
the digits explicitly shown in M.

Significant figures are counted from left to right starting with the leftmost non-zero digit.

Example 1.2. The following table illustrates the way of identifying the significant figures
and number of significant figures of a number.

Number          Significant figures   No. of significant figures
37.89           3, 7, 8, 9            4
5090            5, 0, 9               3
7.00            7, 0, 0               3
0.00082         8, 2                  2
0.000620        6, 2, 0               3
5.2 × 10^4      5, 2                  2
3.506 × 10      3, 5, 0, 6            4
8 × 10^−3       8                     1
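The counting rules above can be checked mechanically. The following Python sketch is my own illustration (the function name `significant_figures` and the string-based approach are not from the text); it follows the table's conventions, including treating bare trailing zeros in an integer such as 5090 as place holders.

```python
def significant_figures(s: str) -> int:
    """Count the significant figures of a decimal numeral given as a string.

    Conventions from the text: leading (place-holding) zeros are not
    significant; trailing zeros count only when a decimal point is present;
    in scientific notation every digit of M counts.
    """
    s = s.lstrip("+-")
    if "e" in s or "E" in s:                 # scientific notation M x 10^n
        s = s.replace("E", "e").split("e")[0]
    digits = s.replace(".", "").lstrip("0")  # drop leading zeros
    if "." not in s:
        digits = digits.rstrip("0")          # bare trailing zeros: place holders
    return len(digits)

# Rows from the table above
print(significant_figures("37.89"))     # 4
print(significant_figures("5090"))      # 3
print(significant_figures("0.000620"))  # 3
print(significant_figures("8e-3"))      # 1
```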

1.3 Rounding off Numbers


With a computer it is easy to input a vast number of data and perform an immense number
of calculations. Sometimes it may be necessary to cut numbers with large numbers of
digits. This process of cutting numbers is called rounding off. In rounding
off a number after a computation, the number chosen is the one that has the required number of
significant figures and is closest to the number being rounded off. Usually numbers are
rounded off according to the following rule.

Rounding-off rule. To round off a number to n significant digits, drop all the digits to the
right of the nth significant digit (or replace them by '0's if the '0's are needed as place
holders), and if the first discarded digit is:

1. Less than 5, leave the remaining digits unchanged

2. Greater than 5, add 1 to the last retained digit

3. Exactly 5 and there are non-zero digits among those discarded, add unity to the last
retained digit.

However, if the first discarded digit is exactly 5 and all the other discarded digits are ‘0’s’,
the last retained digit is left unchanged if even and is increased by unity if odd. In other
words, if the discarded number is less than half a unit in the nth place, the nth digit is
unaltered. But if the discarded number is greater than half a unit in the nth place, the nth
digit is increased by unity. And if the discarded number is exactly half a unit in the nth
place, the even digit rule is applied.

Example 1.3. The following table illustrates the method of rounding off numbers.

Number          Round-off to 3 s.f   Round-off to 4 s.f   Round-off to 5 s.f   Round-off to 6 s.f
0.522341        0.522                0.5223               0.52234              0.522341
93.21550        93.2                 93.22                93.216               93.2155
0.66666666666   0.667                0.6667               0.66667              0.666667
9.6782          9.68                 9.678                9.6782               9.67820
29.1568         29.2                 29.16                29.157               29.1568
8.24159         8.24                 8.242                8.2416               8.24159
30.0567         30.1                 30.06                30.057               30.0567
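The even-digit rule above is exactly round-half-to-even, which Python's `decimal` module provides as `ROUND_HALF_EVEN`. Here is a minimal sketch (the helper name `round_sf` is my own); exact decimal arithmetic is used so that ties are genuine ties, which binary floating point cannot guarantee.

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_sf(x: str, n: int) -> Decimal:
    """Round the decimal numeral x to n significant figures by the even-digit rule."""
    d = Decimal(x)
    if d == 0:
        return d
    # d.adjusted() is the base-10 exponent of the leading digit, so the
    # last retained place is 10^(adjusted - (n - 1)).
    place = Decimal(1).scaleb(d.adjusted() - (n - 1))
    return d.quantize(place, rounding=ROUND_HALF_EVEN)

print(round_sf("93.21550", 5))   # 93.216 (tie: the odd digit 5 is rounded up)
print(round_sf("29.1568", 3))    # 29.2
print(round_sf("9.6782", 6))     # 9.67820
```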

1.4 Errors
One of the most important aspects of numerical analysis is the error analysis. Errors may
occur at any stage of the process of solving a problem. By the error we mean the difference
between the true value and the approximate value.

Error = True value - Approximate value.

1.4.1 Types of Errors

Usually we come across the following types of errors in numerical analysis.

1. Inherent Errors. These are the errors involved in the statement of a problem. When
a problem is first presented to the numerical analyst it may contain certain data or
parameters. If the data or parameters are in some way determined by physical measure-
ment, they will probably differ from the exact values. Errors inherent in the statement
of the problem are called inherent errors.

2. Analytic Errors. These are the errors introduced in transforming a physical
or mathematical problem into a computational problem. Once a problem has been
carefully stated, it is time to begin the analysis of the problem, which involves
certain simplifying assumptions. The functions involved in mathematical formulas are
frequently specified in the form of infinite sequences or series. For example, consider

   sin x = x − x^3/3! + x^5/5! − x^7/7! + ...

   If we compute sin x by the formula

   sin x = x − x^3/3! + x^5/5!,

   then it leads to an error. Similarly, the transformation of e^(−x) − x = 0 into the equation

   (1 − x + x^2/2! − x^3/3!) − x = 0

   involves an analytic error.
   The magnitude of the error in the value of the function due to cutting (truncation) of
   its series is equal to the sum of all the discarded terms. It may be large and may even
   exceed the sum of the terms retained, thus making the calculated result meaningless.

3. Round-off errors. When representing even rational numbers in the decimal system or some
   other positional system, there may be an infinity of digits to the right of the decimal
   point, and it is not possible for us to use an infinity of digits in a computational
   problem. Therefore it is obvious that we can only use a finite number of digits in
   our computations. This is the source of the so-called rounding errors. Each of the
   FORTRAN operations +, −, ×, ÷ is subject to possible round-off error.
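The analytic (truncation) error in item 2 can be observed numerically. The sketch below is my own illustration, not from the text: it sums the sine series with a given number of terms and compares the partial sum with the library value, showing the truncation error shrink as terms are added.

```python
import math

def sin_series(x: float, terms: int) -> float:
    """Partial sum of sin x = x - x^3/3! + x^5/5! - ..., keeping `terms` terms."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 0.5
for terms in (1, 2, 3):
    err = abs(math.sin(x) - sin_series(x, terms))
    print(terms, err)   # truncation error decreases as more terms are kept
```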

1.5 General Error Formula


Let u be a function of several independent quantities x_1, x_2, x_3, ..., x_n which are subject to
errors of magnitudes ϵ_1, ϵ_2, ϵ_3, ..., ϵ_n respectively. If ϵ_u denotes the error in u, then

   u = f(x_1, x_2, x_3, ..., x_n)

   u + ϵ_u = f(x_1 + ϵ_1, x_2 + ϵ_2, x_3 + ϵ_3, ..., x_n + ϵ_n)

Using Taylor's theorem for a function of several variables and expanding the right hand side
we get

   u + ϵ_u = f(x_1, x_2, x_3, ..., x_n) + ϵ_1 ∂f/∂x_1 + ϵ_2 ∂f/∂x_2 + ϵ_3 ∂f/∂x_3 + ... + ϵ_n ∂f/∂x_n + O(ϵ^2)

i.e.

   u + ϵ_u = u + ϵ_1 ∂f/∂x_1 + ϵ_2 ∂f/∂x_2 + ϵ_3 ∂f/∂x_3 + ... + ϵ_n ∂f/∂x_n + O(ϵ^2)

The errors ϵ_1, ϵ_2, ϵ_3, ..., ϵ_n are very small quantities. Therefore, neglecting the squares and
higher powers of the errors, we can write

   ϵ_u = ϵ_1 ∂f/∂x_1 + ϵ_2 ∂f/∂x_2 + ϵ_3 ∂f/∂x_3 + ... + ϵ_n ∂f/∂x_n

The relative error in u is

   ϵ_R = ϵ_u/u = (1/u)[ϵ_1 ∂u/∂x_1 + ϵ_2 ∂u/∂x_2 + ϵ_3 ∂u/∂x_3 + ... + ϵ_n ∂u/∂x_n]

(since ∂f/∂x_i = ∂u/∂x_i). This is the general error formula.
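A quick numerical sanity check of the general error formula, as an illustrative sketch: the function f and the error values below are made up for the demonstration, and the partial derivatives are written out by hand.

```python
def f(x1, x2):
    """A sample function u = f(x1, x2) with easily computed partials."""
    return x1 * x2 + x1 ** 2

def df_dx1(x1, x2):
    return x2 + 2 * x1

def df_dx2(x1, x2):
    return x1

x1, x2 = 2.0, 3.0
e1, e2 = 1e-4, -2e-4   # small errors in x1 and x2

# eps_u from the general error formula vs the actual change in u
eps_u = e1 * df_dx1(x1, x2) + e2 * df_dx2(x1, x2)
actual = f(x1 + e1, x2 + e2) - f(x1, x2)
print(eps_u, actual)   # agree up to the neglected O(eps^2) terms
```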

1.6 Application of Errors to the Fundamental Operations of Arithmetic
Let x, x̄ and ϵ_x be the exact value, the approximate value and the error in the quantity x
respectively. Similarly, let y, ȳ and ϵ_y be the exact value, the approximate value and the
error in the quantity y respectively.

1. Addition
   We have that

   ϵ_x = x − x̄,  ϵ_y = y − ȳ
   x̄ = x − ϵ_x,  ȳ = y − ϵ_y
   ⇒ x̄ + ȳ = (x − ϵ_x) + (y − ϵ_y)
   ⇒ (x + y) − (x̄ + ȳ) = ϵ_x + ϵ_y = ϵ_{x+y}

   Thus the error in the sum of two quantities x and y is given by

   ϵ_{x+y} = (x + y) − (x̄ + ȳ)

   The relative error in the sum of two quantities x and y is given by

   R.E_{x+y} = ϵ_{x+y}/(x + y)

   while the percentage error in the sum of two quantities x and y is given by

   P.E_{x+y} = ϵ_{x+y}/(x + y) × 100%

2. Subtraction
   We have that

   ϵ_x = x − x̄,  ϵ_y = y − ȳ
   x̄ = x − ϵ_x,  ȳ = y − ϵ_y
   ⇒ x̄ − ȳ = (x − ϵ_x) − (y − ϵ_y)
   ⇒ (x − y) − (x̄ − ȳ) = ϵ_x − ϵ_y = ϵ_{x−y}

   Thus the error in the difference of two quantities x and y is given by

   ϵ_{x−y} = (x − y) − (x̄ − ȳ)

   The relative error in the difference of two quantities x and y is given by

   R.E_{x−y} = ϵ_{x−y}/(x − y)

   while the percentage error in the difference of two quantities x and y is given by

   P.E_{x−y} = ϵ_{x−y}/(x − y) × 100%

3. Multiplication
   We have that

   ϵ_x = x − x̄,  ϵ_y = y − ȳ
   x̄ = x − ϵ_x,  ȳ = y − ϵ_y
   ⇒ x̄·ȳ = (x − ϵ_x)(y − ϵ_y)
   ⇒ x̄·ȳ = xy − yϵ_x − xϵ_y + ϵ_x ϵ_y

   To a first order approximation, we neglect the product of the errors, i.e.

   x̄·ȳ = xy − yϵ_x − xϵ_y
   ⇒ xy − x̄·ȳ = yϵ_x + xϵ_y = ϵ_{xy}

   Thus the error in the product of two quantities x and y is given by

   ϵ_{xy} = yϵ_x + xϵ_y

   The relative error in the product of two quantities x and y is given by

   R.E_{xy} = ϵ_{xy}/(xy)

   while the percentage error in the product of two quantities x and y is given by

   P.E_{xy} = ϵ_{xy}/(xy) × 100%

Lemma 1.4. The relative error in the product of two quantities may be expressed as
the sum of the relative errors in the respective quantities.

PROOF:
We know that

   ϵ_{xy} = yϵ_x + xϵ_y

and

   R.E_{xy} = ϵ_{xy}/(xy)

Thus

   R.E_{xy} = (yϵ_x + xϵ_y)/(xy) = yϵ_x/(xy) + xϵ_y/(xy) = ϵ_x/x + ϵ_y/y

   ⇒ R.E_{xy} = R.E_x + R.E_y
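Lemma 1.4 is easy to verify numerically. In the sketch below the exact and approximate values are invented purely for illustration:

```python
x, x_bar = 3.0, 2.997    # exact value and an approximation to it
y, y_bar = 5.0, 5.004
eps_x, eps_y = x - x_bar, y - y_bar

re_product = (x * y - x_bar * y_bar) / (x * y)   # true relative error of the product
re_sum = eps_x / x + eps_y / y                   # R.E_x + R.E_y from the lemma
print(re_product, re_sum)   # differ only by the neglected eps_x*eps_y term
```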

4. Division We have that


ϵx = x − x̄ ϵy = y − ȳ
x̄ = x − ϵx ȳ = y − ϵy
x̄ x − ϵx x − ϵx x − ϵx ϵy
⇒ = = ϵy = (1 − )−1
ȳ y − ϵy y(1 − y ) y y
i.e.
x̄ x − ϵx ϵy
= (1 − )−1
ȳ y y
We now apply The Binomial Theorem to rewrite (1 − ϵyy )−1 as a series [since | ϵy
y
|< 1.]
NOTE: Binomial theorem states that:

(1 − h)−1 = 1 + h + h2 + h3 + h4 + ...

(1 − h)−1 = 1 − h + h2 − h3 + ...
∀h ∈ R such that | h |< 1

Now we have
x̄ x ϵx ϵy ϵ2y ϵ3y ϵ4y
= ( − )(1 + + 2 + 3 + 4 + ...)
ȳ y y y y y y

x xϵy xϵ2y xϵ3y ϵx ϵx ϵy ϵx ϵ2y ϵx ϵ3y


= + 2 + 3 + 4 + ... − − 2 − 3 − 4 − ...
y y y y y y y y
[To first order approximation, we neglect the second and other higher order terms or products
of errors]

x̄ x xϵy ϵx
⇒ = + 2 −
ȳ y y y

x x̄ ϵx xϵy yϵx − xϵy


⇒ − = − 2 = = ϵ xy
y ȳ y y y2
Thus the error in division of two quantities x and y is given by
ϵx xϵy yϵx − xϵy
ϵ xy = − 2 =
y y y2

Relative error in the division of two quantities x and y is given by

R.E_{x/y} = ϵ_{x/y} /(x/y)

while the percentage error in the division of two quantities x and y is given by

P.E_{x/y} = [ϵ_{x/y} /(x/y)] × 100%

Lemma 1.5. The relative error in the quotient of two quantities may be expressed as the
difference of the relative errors in the respective quantities.

PROOF:
We know that

ϵ_{x/y} = ϵx /y − xϵy /y² = (yϵx − xϵy )/y²

and

R.E_{x/y} = ϵ_{x/y} /(x/y)

i.e.

R.E_{x/y} = [(yϵx − xϵy )/y²] ÷ (x/y) = [(yϵx − xϵy )/y²] × (y/x) = (yϵx − xϵy )/(xy)

⇒ R.E_{x/y} = (yϵx − xϵy )/(xy) = ϵx /x − ϵy /y

i.e.
R.E_{x/y} = R.Ex − R.Ey

1.6.1 Errors in Functions

Consider the function f (x). We evaluate the function at x = a and also at x = ā.
Notice that
ϵa = a − ā

By Taylor’s theorem,

f (x) = f (a) + (x − a)f ′ (a) + [(x − a)²/2!]f ′′ (a) + [(x − a)³/3!]f ′′′ (a) + ...

f (ā) = f (a) + (ā − a)f ′ (a) + [(ā − a)²/2!]f ′′ (a) + [(ā − a)³/3!]f ′′′ (a) + ...

[To the first order approximation we neglect the second and higher order terms.] Thus we
have

f (ā) = f (a) + (ā − a)f ′ (a) = f (a) − ϵa f ′ (a)

The error in the function is

ϵf = f (a) − f (ā) = ϵa f ′ (a)

Example 1.6. Let f (x) = ex . Calculate the error in f when a = 1.001 and ā = 1.0.

SOLUTION:
Here we have
ϵa = 1.001 − 1.0 = 0.001
f ′ (x) = ex
⇒ ϵf = 0.001 × e1.001 = 0.002721001
Note that the exact value is
e1.001 − e1.0 = 0.002719641
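The first-order rule ϵf = ϵa f ′(a) from this example can be verified directly. A small sketch with the values of Example 1.6:

```python
import math

a, a_bar = 1.001, 1.0
eps_a = a - a_bar                      # 0.001 (up to float rounding)

estimate = eps_a * math.exp(a)         # first-order estimate eps_a * f'(a)
exact = math.exp(a) - math.exp(a_bar)  # exact error e^1.001 - e^1.0

# The estimate agrees with the exact error to about 1.4e-6.
assert abs(estimate - exact) < 2e-6
```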

Example 1.7. Round-off 27.8793 correct to four significant figures.

SOLUTION:
The number 27.8793 rounded-off to four significant figures is 27.88.

Example 1.8. Round-off the number 0.00243468 to four significant figures.

SOLUTION:
The rounded-off number is 0.002435.

Example 1.9. Find the sum of the approximate numbers 0.348, 0.1834, 345.4, 235.2, 11.75,
9.27, 0.0849, 0.0214, 0.000354 each correct to the indicated significant digits.

SOLUTION
345.4 and 235.2 are the numbers with the least accuracy, whose absolute error may attain 0.1.
Rounding the remaining numbers to 0.01 and adding we get

345.4 + 235.2 + 11.75 + 9.27 + 0.35 + 0.18 + 0.08 + 0.02 + 0.00 = 602.25

Applying the even-digit rule for rounding the result we get the sum to be equal to 602.2.
Therefore the sum of the given numbers = 602.2.

Example 1.10. Find the number of significant figures in the approximate number 11.2461
given its absolute error as 0.25 × 10−2 .

SOLUTION
Given that the absolute error = 0.25 × 10−2 = 0.0025 ≤ 0.5 × 10−2 , the number is correct
to two decimal places, i.e. 11.25.
Therefore the number of significant figures is 4.

Example 1.11. Find the product 349.1 × 863.4 and state how many figures of the result are
trustworthy, assuming that each number is correct to four significant figures.

SOLUTION
Let
x = 349.1, ϵx = 0.05, y = 863.4, ϵy = 0.05

u = xy = 349.1 × 863.4 = 301412.94

Now
ϵu /u = ϵx /x + ϵy /y

⇒ ϵu /u ≤ (0.05)(1/x + 1/y) = (0.05)[(x + y)/(x.y)]

⇒ ϵu ≤ (0.05)u[(x + y)/(x.y)] = 0.05(x + y)

⇒ ϵu ≤ (0.05)[349.1 + 863.4] = 60.625 ≈ 60.63

Therefore, the true value of u lies between

301412.94 − 60.63 and 301412.94 + 60.63

i.e. between 301352.31 and 301473.57, that is, between 3013.52 × 10² and 3014.74 × 10².

We infer that only the first three figures are reliable.
Example 1.12. Find the difference √2.01 − √2 to three correct digits.

SOLUTION

We know that

√2.01 = 1.41774469... and √2 = 1.41421356...

Let X denote the difference. Therefore

X = √2.01 − √2 = (1.41774469...) − (1.41421356...)

= 0.00353 = 3.53 × 10−3

Example 1.13. If ϵx = 0.005, ϵy = 0.001 be the absolute errors in x = 2.11 and y = 4.15,
find the relative error in the computation of x + y

SOLUTION

x = 2.11 y = 4.15 ϵx = 0.005 ϵy = 0.001


x + y = 2.11 + 4.15 = 6.26
⇒ ϵx + ϵy = 0.005 + 0.001 = 0.006
Therefore the relative error in (x + y) is

R.Ex+y = (ϵx + ϵy )/(x + y) = 0.006/6.26

The relative error in (x + y) = 0.001 approximately.


Example 1.14. Given that u = 5xy²/z³; ϵx , ϵy and ϵz denote the errors in x, y and z respectively
such that x = y = z = 1 and ϵx = ϵy = ϵz = 0.001, find the maximum relative error in u.

SOLUTION
We have

∂u/∂x = 5y²/z³     ∂u/∂y = 10xy/z³     ∂u/∂z = −15xy²/z⁴

ϵu = (∂u/∂x)ϵx + (∂u/∂y)ϵy + (∂u/∂z)ϵz

(ϵu )max = | (∂u/∂x)ϵx | + | (∂u/∂y)ϵy | + | (∂u/∂z)ϵz |

(ϵu )max = | (0.001)(5y²/z³) | + | (0.001)(10xy/z³) | + | −(0.001)(15xy²/z⁴) | = 0.03

(R.Eu )max = 0.03/5 = 0.006
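The computation of Example 1.14 can be reproduced with the partial-derivative (total differential) rule. A minimal sketch:

```python
# Data of Example 1.14: u = 5*x*y**2 / z**3, x = y = z = 1, all errors 0.001.
x = y = z = 1.0
ex = ey = ez = 0.001

du_dx = 5 * y**2 / z**3          # partial derivatives of u
du_dy = 10 * x * y / z**3
du_dz = -15 * x * y**2 / z**4

eps_u_max = abs(du_dx * ex) + abs(du_dy * ey) + abs(du_dz * ez)   # 0.03
u = 5 * x * y**2 / z**3                                           # 5.0
rel_err_max = eps_u_max / u                                       # 0.006

assert abs(rel_err_max - 0.006) < 1e-12
```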
Example 1.15. If X = 2.536, find the absolute error and relative error when:
(i) X is rounded and
(ii) X is truncated to two decimal digits.

SOLUTION

(i) Here X = 2.536
Rounded-off value of X is x = 2.54
The absolute error in X is

EA =| 2.536 − 2.54 |=| −0.004 |= 0.004


Relative error = ER = 0.004/2.536 = 0.0015772 = 1.5772 × 10−3
(ii) Here X = 2.536
Truncated value of X is x = 2.53
The absolute error in X is

EA =| 2.536 − 2.53 |=| −0.006 |= 0.006


Relative error = ER = 0.006/2.536 = 0.0023659 = 2.3659 × 10−3
Example 1.16. The number X = 37.46235 is rounded off to four significant figures. Compute
the absolute error, relative error and the percentage error.

SOLUTION:
We have
X = 37.46235 x = 37.46000

Absolute error =| X − x |=| 37.46235 − 37.46000 |= 0.00235

R.E = 0.00235/37.46235 = 6.27 × 10−5

%error = 6.27 × 10−5 × 100 = 6.27 × 10−3 %

2 SOLUTION OF ALGEBRAIC AND TRANSCEN-
DENTAL EQUATIONS
2.1 Introduction
In this chapter we shall discuss some numerical methods for solving algebraic and tran-
scendental equations. The equation f (x) = 0 is said to be algebraic if f (x) is purely a
polynomial in x. If f (x) contains some other functions, namely, Trigonometric, Logarithmic,
Exponential, etc., then the equation f (x) = 0 is called a Transcendental Equation. The
equations
x3 − 7x + 8 = 0
and
x4 + 4x3 + 7x2 + 6x + 3 = 0
are algebraic. The equations
3 tan 3x = 3x + 1
x − 2 sin x = 0
e^x = 4x
are transcendental.
Algebraically, the real number α is called the real root (or zero of the function f (x)) of the
equation f (x) = 0 if and only if f (α) = 0 and geometrically the real root of an equation
f (x) = 0 is the value of x where the graph of f (x) meets the x−axis in rectangular coordinate
system. We will assume that the equation

f (x) = 0 (1)

has only isolated roots, that is for each root of the equation there is a neighbourhood which
does not contain any other roots of the equation. Approximating the isolated roots of the
equation involves two stages.

1. Isolating the roots that is finding the smallest possible interval (a, b) containing one
and only one root of the equation (1).

2. Improving the values of the approximate roots to the specified degree of accuracy. Now
we state a very useful theorem of mathematical analysis without proof.

Theorem 2.1. If a function f (x) assumes values of opposite sign at the end points of
interval (a, b), i.e., f (a)f (b) < 0 then the interval will contain at least one root of the
equation f (x) = 0, in other words, there will be at least one number c ∈ (a, b) such that
f (c) = 0.

Throughout our discussion in this chapter we assume that

1. f (x) is continuous and continuously differentiable up to sufficient number of times.

2. f (x) = 0 has no multiple root, that is, if c is a real root of f (x) = 0 then f (c) = 0, and
f ′ (x) keeps a constant sign (either f ′ (x) < 0 or f ′ (x) > 0) in (a, b).

2.2 The Bisection Method


Algorithm for the Bisection Method

1. Start with two points xL and xR at which it is known that f (x) has opposite signs i.e.

f (xL )f (xR ) < 0

2. From the two values xL and xR known to be on opposite sides of the root, determine
a quantity c half-way between xL and xR i.e.
1
c = (xL + xR )
2

3. If f (xL ) and f (c) have opposite signs, then the root lies between xL and c. So replace
xR by c and repeat step two above. Similarly, if f (c) and f (xR ) have opposite signs,
then the root lies between c and xR . So replace xL by c and repeat step two above.

4. The procedure is repeated until the required accuracy is achieved.

2.2.1 Stopping Criterion

Choose a small positive number ε and stop when

| xn+1 − xn |< ε

ε ≈ 0.5 × 10−m
where m is the number of decimal places.

Example 2.2. Find the real root for the equation

xe^x − 2 = 0

to the nearest four significant figures using The Bisection Method.

SOLUTION

k   xL           f (xL )   xR            f (xR )   ck            f (ck )

1   0.8          −0.2      0.9           0.2       0.85          −0.01
2   0.85         −0.01     0.9           0.2       0.875         0.1
3   0.85         −0.01     0.875         0.1       0.8625        0.04
4   0.85         −0.01     0.8625        0.04      0.85625       0.02
5   0.85         −0.01     0.85625       0.02      0.853125      0.002
6   0.85         −0.01     0.853125      0.002     0.8515625     −0.005
7   0.8515625    −0.005    0.853125      0.002     0.85234375    −0.001
8   0.85234375   −0.001    0.853125      0.002     0.852734375   0.0005
9   0.85234375   <0        0.852734375   >0        0.85253906

If you continue the process up to the 12th iteration, you get that the root is

≈ 0.8526 (4 s.f.)
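The four steps of the algorithm can be sketched as a short function. This is an illustrative implementation (the function name and tolerance are our own), applied to f(x) = x e^x − 2 on [0.8, 0.9] as in Example 2.2; the tolerance 0.5 × 10⁻⁴ follows the stopping criterion above.

```python
import math

def bisect(f, xl, xr, tol=0.5e-4):
    """Bisection: halve [xl, xr] while keeping a sign change inside."""
    assert f(xl) * f(xr) < 0, "f must have opposite signs at the end points"
    while abs(xr - xl) >= tol:
        c = 0.5 * (xl + xr)
        if f(c) == 0.0:
            return c
        if f(xl) * f(c) < 0:
            xr = c            # root lies between xl and c
        else:
            xl = c            # root lies between c and xr
    return 0.5 * (xl + xr)

f = lambda x: x * math.exp(x) - 2
root = bisect(f, 0.8, 0.9)
assert abs(root - 0.852605502) < 1e-4   # agrees with 0.8526 to 4 s.f.
```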

2.3 Regula–Falsi Method


Consider the equation f (x) = 0 and let a, b be two values of x such that f (a) and f (b) are of
opposite signs. Also let a < b. The graph of y = f (x) will meet the x−axis at some point
between a and b. The equation of the chord joining the two points [a, f (a)] and [b, f (b)] is

(y − f (a))/(x − a) = (f (b) − f (a))/(b − a)     (2)

In the small interval (a, b) the graph of the function can be considered as a straight line, so
that the x−coordinate of the point of intersection of the chord joining [a, f (a)] and [b, f (b)]
with the x−axis will give an approximate value of the root. So putting y = 0 in (2) we get

−f (a)/(x − a) = (f (b) − f (a))/(b − a)

or
x = a − f (a)(b − a)/(f (b) − f (a))

or
x = (af (b) − bf (a))/(f (b) − f (a)) = x0

(say). If f (a) and f (x0 ) are of opposite signs then the root lies between a and x0 , otherwise
it lies between x0 and b. If the root lies between a and x0 then the next approximation is

x1 = (af (x0 ) − x0 f (a))/(f (x0 ) − f (a))

otherwise

x1 = (x0 f (b) − bf (x0 ))/(f (b) − f (x0 ))

The above method is applied repeatedly till the desired accuracy is obtained.
We can as well write the formula for the Regula-Falsi method as

c = (xL f (xR ) − xR f (xL ))/(f (xR ) − f (xL ))
Example 2.3. Find the real root for the equation

xe^x − 2 = 0

to the nearest four significant figures using the Regula-Falsi Method.

SOLUTION
Here
xL = 0.8, xR = 0.9, f (xL ) = −0.219567257, f (xR ) = 0.2136428

c1 = [0.8(0.2136428) − 0.9(−0.219567257)]/[0.2136428 + 0.219567257]
   = 0.368524771/0.433210056 = 0.850683785

f (c1 ) = −0.008338958, xL = 0.850683785, xR = 0.9

c2 = [xL f (xR ) − xR f (xL )]/[f (xR ) − f (xL )]

c2 = [(0.850683785)(0.2136428) − (0.9)(−0.008338958)]/[0.2136428 − (−0.008338958)]

c2 = 0.189247527/0.221981756 = 0.852536396

f (c2 ) = −3.0030153×10−4 , xL = 0.852536396, f (xL ) = −3.0030153×10−4 , xR = 0.9, f (xR ) = 0.2136428

c3 = [(0.852536396)(0.2136428) + 0.9(3.0030153 × 10−4 )]/[0.2136428 + 3.0030153 × 10−4 ]
   = 0.182408534/0.213942873 = 0.852603018

The root is
≈ 0.8526 (4 s.f.)
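The iteration of Example 2.3 can be sketched as a loop around the chord formula c = [xL f(xR) − xR f(xL)]/[f(xR) − f(xL)]; the function name, tolerance and iteration cap below are illustrative choices, not from the text.

```python
import math

def regula_falsi(f, xl, xr, tol=1e-6, max_iter=100):
    """Keep the root bracketed; intersect the chord with the x-axis."""
    c = xl
    for _ in range(max_iter):
        c = (xl * f(xr) - xr * f(xl)) / (f(xr) - f(xl))
        if abs(f(c)) < tol:
            break
        if f(xl) * f(c) < 0:
            xr = c            # root lies between xl and c
        else:
            xl = c            # root lies between c and xr
    return c

f = lambda x: x * math.exp(x) - 2
root = regula_falsi(f, 0.8, 0.9)
assert abs(root - 0.852605502) < 1e-5
```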

2.4 The Newton–Raphson (or Newton Iteration) Method
This is also an iteration method and is used to find the isolated roots of an equation f (x) = 0,
when the derivative of f (x) is a simple expression. It is derived as follows:
Let x = x0 be an approximate value of one root of the equation f (x) = 0. If x = x1 is the
exact root then

f (x1 ) = 0     (3)

where the difference between x0 and x1 is very small and if h denotes the difference then

x1 = x0 + h     (4)

Substituting in (3) we get

f (x1 ) = f (x0 + h) = 0

Expanding by Taylor’s theorem we get

f (x0 ) + (h/1!)f ′ (x0 ) + (h²/2!)f ′′ (x0 ) + (h³/3!)f ′′′ (x0 ) + ... = 0     (5)

Since h is small, neglecting all the powers of h above the first, from (5) we get

f (x0 ) + (h/1!)f ′ (x0 ) = 0

approximately

⇒ h = −f (x0 )/f ′ (x0 )

Therefore from (4) we get

x1 = x0 + h = x0 − f (x0 )/f ′ (x0 )     (6)

The above value of x1 is a closer approximation to the root of f (x) = 0 than x0 . Similarly
if x2 denotes a better approximation, starting with x1 , we get

x2 = x1 − f (x1 )/f ′ (x1 )     (7)

Proceeding in this way we get

xn+1 = xn − f (xn )/f ′ (xn )     (8)

The above is a general formula, known as Newton–Raphson formula.

Example 2.4. Find the real root for the equation

xe^x − 2 = 0

to the nearest four significant figures using The Newton-Raphson Method.

SOLUTION

f (x) = xe^x − 2
f ′ (x) = e^x + xe^x = e^x (x + 1)

xn+1 = xn − (xn e^{xn} − 2)/[(1 + xn )e^{xn}]     n = 0, 1, 2, 3, 4, ...

x0 = 0.85     x1 = 0.85 + 0.011300175/4.328346676 = 0.852610737

x2 = 0.852605502
x3 = 0.852605502
The root is
≈ 0.8526 (4 s.f.)
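Formula (8) translates directly into code. A minimal sketch applied to the same equation as Example 2.4 (the helper names and tolerance are our own):

```python
import math

def newton_raphson(f, df, x, tol=1e-9, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until successive iterates agree."""
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x * math.exp(x) - 2
df = lambda x: (1 + x) * math.exp(x)     # f'(x) = e^x (x + 1)
root = newton_raphson(f, df, 0.85)
assert abs(root - 0.852605502) < 1e-8
```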

2.5 Secant Method


Let two iterates xn−1 and xn be given. Then the formula for the Secant Method is

xn+1 = [xn−1 f (xn ) − xn f (xn−1 )]/[f (xn ) − f (xn−1 )]     n = 1, 2, 3, 4, 5, 6, ...

Example 2.5. Find the real root for the equation

xe^x − 2 = 0

to the nearest four significant figures using the Secant Method.

SOLUTION
Let
x0 = 0.85, x1 = 0.875

x2 = [x0 f (x1 ) − x1 f (x0 )]/[f (x1 ) − f (x0 )]

When you substitute the values
x2 = 0.852560863
x3 = 0.852604737
x4 = 0.852605501
The root is
≈ 0.8526 (4 s.f.)
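The Secant Method needs no derivative. A minimal sketch with the same starting values as Example 2.5 (function name and tolerance are our own):

```python
import math

def secant(f, x0, x1, tol=1e-9, max_iter=50):
    """Apply the secant formula to the last two iterates."""
    for _ in range(max_iter):
        x2 = (x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

f = lambda x: x * math.exp(x) - 2
root = secant(f, 0.85, 0.875)
assert abs(root - 0.852605502) < 1e-8
```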

2.6 The Iteration Method


Coming soon...!

2.7 Muller’s Method


Coming soon...!

2.8 Generalized Newton’s Method for Multiple Roots


Coming soon...!

2.9 Applications
Coming soon...!

3 FINITE DIFFERENCES
3.1 Introduction
Numerical Analysis is a branch of mathematics which produces approximate solutions by
repeated application of the four basic operations of algebra. The knowledge of finite differences
is essential for the study of Numerical Analysis. In this section we introduce a few basic
operators.

Definition 3.1. The tabular(mesh or base) points

x0 , x1 , x2 , x3 , ...xn , ...

are said to be equally spaced if

xk+1 − xk = h, ∀k ∈ Z

where h is the interval of spacing. E.g., for the mesh points

0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, ... ⇒ h = 0.2

If x0 , x0 + h, x0 + 2h, x0 + 3h, x0 + 4h, x0 + 5h, x0 + 6h, ... are equally spaced base points, then
any arbitrary mesh point is given by

xk = x0 + kh

3.2 Forward Difference Operator


Definition 3.2. The forward difference operator, △, is defined as

△f (x) = f (x + h) − f (x)

In particular if x = xk , xk+1 = xk + h then we can also define the forward difference operator
as
△f (xk ) = f (xk + h) − f (xk )
or
△fk = fk+1 − fk
We can also define higher order differences as:
Second Forward Differences :

△2 fk = △(△fk ) = △(fk+1 − fk )

= △fk+1 − △fk = fk+2 − 2fk+1 + fk
Third Forward Differences :

△3 fk = △(△2 fk ) = △(fk+2 − 2fk+1 + fk )

= △fk+2 − 2 △ fk+1 + △fk


= fk+3 − 3fk+2 + 3fk+1 − fk
Fourth Forward Differences :

△4 fk = △(△3 fk ) = △(fk+3 − 3fk+2 + 3fk+1 − fk )

= △fk+3 − 3 △ fk+2 + 3 △ fk+1 − △fk


= fk+4 − 4fk+3 + 6fk+2 − 4fk+1 + fk
Fifth Forward Differences :

△5 fk = △(△4 fk ) = △(fk+4 − 4fk+3 + 6fk+2 − 4fk+1 + fk )

= △fk+4 − 4 △ fk+3 + 6 △ fk+2 − 4 △ fk+1 + △fk


= fk+5 − 5fk+4 + 10fk+3 − 10fk+2 + 5fk+1 − fk

.
.
.
n-th Forward Differences :

△n fk = △(△n−1 fk ) = △n−1 fk+1 − △n−1 fk

= fk+n − (n/1!)fk+n−1 + [n(n − 1)/2!]fk+n−2 + ... + (−1)^n fk

3.3 Forward Difference Table


It is a convenient method for displaying the successive differences of a function. The follow-
ing table is an example to show how the differences are formed.

x f △f △2 f △3 f △4 f △5 f
x0 f0
△f0
x1 f1 △2 f 0
△f1 △3 f 0
x2 f2 △2 f 1 △4 f 0
△f2 △3 f 1 △5 f 0
x3 f3 △2 f 2 △4 f 1
△f3 △3 f 2
x4 f4 △2 f 3
△f4
x5 f5

The above table is called a diagonal difference table. The first term in the table is f0 . It is
called the leading term. The differences △f0 , △2 f0 , △3 f0 , ..., are called the leading differences.
The differences △n fn with a fixed subscript are called forward differences. In forming such
a difference table care must be taken to maintain correct sign.
A convenient check may be obtained by noting that the sum of the entries in any column equals
the difference between the first and the last entries in the preceding column.

Example 3.3. Construct a forward difference table for y = f (x) = x3 + 2x + 1 for x =


1, 2, 3, 4, 5, 6.

Solution:
x f △f △2 f △3 f △4 f
1 4
9
2 13 12
21 6
3 34 18 0
39 6 0
4 73 24 0
63 6
5 136 30
93
6 229
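The table of Example 3.3 can be generated mechanically: each column is the list of consecutive differences of the previous one. A small sketch:

```python
# f(x) = x^3 + 2x + 1 tabulated at x = 1, ..., 6.
f = [x**3 + 2 * x + 1 for x in range(1, 7)]     # [4, 13, 34, 73, 136, 229]

columns = [f]
while len(columns[-1]) > 1:
    prev = columns[-1]
    columns.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

assert columns[1] == [9, 21, 39, 63, 93]    # first differences
assert columns[3] == [6, 6, 6]              # third differences are constant
assert columns[4] == [0, 0]                 # fourth differences vanish
```

The constant third differences of this cubic illustrate the theorem that follows.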

Theorem 3.4. The nth differences of a polynomial of the nth degree are constant when the
values of independent variable are at equal intervals.

3.4 The Shift Operator
Let y = f (x) be function of x and x, x + h, x + 2h, x + 3h, ..., etc., be the consecutive values
of x, then the operator E is defined as

Ef (x) = f (x + h),

E is called shift operator. It is also called displacement operator.


Note: E is only a symbol and not an algebraic quantity. E 2 f (x) means the operator E is applied
twice on f (x), i.e.,
E 2 f (x) = E[Ef (x)]
= Ef (x + h)
= f (x + 2h)
.
.
.
E n f (x) = f (x + nh)
We can also define
E −n f (x) = f (x − nh).
Properties of E

1. E(f1 (x) + f2 (x) + ... + fn (x)) = Ef1 (x) + Ef2 (x) + ... + Efn (x)

2. E(cf (x)) = cEf (x) (where c is constant)

3. E m (E n f (x)) = E n (E m f (x)) = E m+n f (x) where m, n are positive integers.

4. If n is positive integer, then E n [E −n f (x)] = f (x)

Alternative notation:
If f0 , f1 , f2 , ..., fn , ...,etc., are consecutive values of the function y = f (x) corresponding
to equally spaced values x0 , x1 , x2 , ..., xn , etc., of x then in alternative notation

Ef0 = f1

Ef1 = f2
.

.
.
and in general
E n f0 = fn

3.5 The Backward Difference Operator


Definition 3.5. The backward difference operator, ∇, is defined as

∇f (x) = f (x) − f (x − h)

In particular if x = xk , xk+1 = xk + h then we can also define the backward difference operator
as
∇f (xk ) = f (xk ) − f (xk − h)
or
∇fk = fk − fk−1
We can also define higher order differences as:
Second Backward Differences :

∇2 fk = ∇(∇fk ) = ∇(fk − fk−1 )

= ∇fk − ∇fk−1 = fk − 2fk−1 + fk−2


Third Backward Differences :

∇3 fk = ∇(∇2 fk ) = ∇(fk − 2fk−1 + fk−2 )

= ∇fk − 2∇fk−1 + ∇fk−2


= fk − 3fk−1 + 3fk−2 − fk−3
Fourth Backward Differences :

∇4 fk = ∇(∇3 fk ) = ∇(fk − 3fk−1 + 3fk−2 − fk−3 )

= ∇fk − 3∇fk−1 + 3∇fk−2 − ∇fk−3


= fk − 4fk−1 + 6fk−2 − 4fk−3 + fk−4
Fifth Backward Differences :

∇5 fk = ∇(∇4 fk ) = ∇(fk − 4fk−1 + 6fk−2 − 4fk−3 + fk−4 )

= ∇fk − 4∇fk−1 + 6∇fk−2 − 4∇fk−3 + ∇fk−4
= fk − 5fk−1 + 10fk−2 − 10fk−3 + 5fk−4 − fk−5

.
.
.
Alternative Notation:
Let the function y = f (x) be given at equal spaces of the independent variable x at x =
a, a + h, a + 2h, a + 3h, ... then we define

∇f (a) = f (a) − f (a − h)

where ∇ is called the backward difference operator, h is called the interval of differencing.
In general we can define
∇f (x) = f (x) − f (x − h)
We observe that
∇f (x + h) = f (x + h) − f (x) = ∆f (x)
∇f (x + 2h) = f (x + 2h) − f (x + h) = ∆f (x + h)
.
.
.
∇f (x + nh) = f (x + nh) − f (x + (n − 1)h)
= ∆f [x + (n − 1)h]
Similarly we get
∇2 f (x + 2h) = ∇[∇f (x + 2h)]
= ∇[∆f (x + h)]
= ∆[∆f (x)]
= ∆2 f (x)
.
.
.
∇n f (x + nh) = ∆n f (x)

3.6 Backward Difference Table
It is a convenient method for displaying the successive differences of a function. The follow-
ing table is an example to show how the differences are formed.

x f ∇f ∇2 f ∇3 f ∇4 f ∇5 f
x0 f0
∇f1
x1 f1 ∇2 f2
∇f2 ∇3 f3
x2 f2 ∇2 f3 ∇4 f4
∇f3 ∇3 f4 ∇5 f5
x3 f3 ∇2 f4 ∇4 f5
∇f4 ∇3 f5
x4 f4 ∇2 f5
∇f5
x5 f5

The above table is called a diagonal difference table. The first term in the table is f0 . It is
called the leading term. The differences ∇f5 , ∇2 f5 , ∇3 f5 , ..., are called the leading differences.
The differences ∇n fn with a fixed subscript are called backward differences. In forming such
a difference table care must be taken to maintain correct sign.
A convenient check may be obtained by noting that the sum of the entries in any column equals
the difference between the first and the last entries in the preceding column.

Example 3.6. Construct a backward difference table for y = f (x) = x3 + 2x + 1 for


x = 1, 2, 3, 4, 5, 6.

Solution:

x f ∇f ∇2 f ∇3 f ∇4 f
1 4
9
2 13 12
21 6
3 34 18 0
39 6 0
4 73 24 0
63 6
5 136 30
93
6 229

3.7 The Central Difference Operator


Definition 3.7. The central difference operator, δ, is defined as
δf (x) = f (x + h/2) − f (x − h/2)
In particular if x = xk , xk+1 = xk + h then we can also define the central difference operator
as

δf (xk ) = f (xk + h/2) − f (xk − h/2)

or

δfk = fk+1/2 − fk−1/2

3.8 The Central Difference Table


It is a convenient method for displaying the successive differences of a function. The follow-
ing table is an example to show how the differences are formed.

x f δf δ2f δ3f δ4f δ5f
x0 f0
δf1/2
x1 f1 δ2 f1
δf3/2 δ3 f3/2
x2 f2 δ2 f2 δ4 f2
δf5/2 δ3 f5/2 δ5 f5/2
x3 f3 δ2 f3 δ4 f3
δf7/2 δ3 f7/2
x4 f4 δ2 f4
δf9/2
x5 f5

3.9 The Mean Operator


Definition 3.8. The mean (averaging) operator, µ, is defined as

µf (x) = (1/2)[f (x + h/2) + f (x − h/2)]

In particular if x = xk , xk+1 = xk + h then we can also define the mean operator as

µf (xk ) = (1/2)[f (xk + h/2) + f (xk − h/2)]

or

µfk = (1/2)[fk+1/2 + fk−1/2 ]

3.10 The Differential Operator


Definition 3.9. The differential operator, D = d/dx, is defined as

Df (x) = (d/dx)f (x) = f ′ (x)

D2 f (x) = (d²/dx²)f (x) = f ′′ (x)

D3 f (x) = (d³/dx³)f (x) = f ′′′ (x)

D4 f (x) = (d⁴/dx⁴)f (x) = f (iv) (x)
.
.
.
Dn f (x) = (dⁿ/dxⁿ)f (x) = f (n) (x)

3.11 Relationship Among Operators


1. Relationship between E and ∆ :
From the definition of ∆, we know that

∆f (x) = f (x + h) − f (x),

where h is the interval of differencing.Using the operator E we can write

∆f (x) = Ef (x) − f (x)

⇒ ∆f (x) = (E − 1)f (x)


The above relation can be expressed as an identity

∆=E−1

i.e.,
E =1+∆

QUESTION Prove that


E∆ = ∆E

Proof:

E∆f (x) = E[f (x + h) − f (x)]

= Ef (x + h) − Ef (x)
= f (x + 2h) − f (x + h)
= ∆f (x + h)
= ∆Ef (x)
Therefore E∆ = ∆E.

QUESTION Prove that


∆ log f (x) = log[1 + ∆f (x)/f (x)]

Proof:
Let h be the interval of differencing

f (x + h) = Ef (x) = (∆ + 1)f (x) = ∆f (x) + f (x)

⇒ f (x + h)/f (x) = ∆f (x)/f (x) + 1
Applying logarithms on both sides we get

log[f (x + h)/f (x)] = log[1 + ∆f (x)/f (x)]

⇒ log f (x + h) − log f (x) = log[1 + ∆f (x)/f (x)]

⇒ ∆ log f (x) = log[1 + ∆f (x)/f (x)]
2. Relationship between E and ∇ :

∇f (x) = f (x) − f (x − h) = f (x) − E −1 f (x) = (1 − E −1 )f (x)


⇒ ∇ = 1 − E −1   ⇒ ∇ = (E − 1)/E
3. Relationship between E and D :

We know that
d
Df (x) = f (x) = f ′ (x)
dx

d2
D2 f (x) = f (x) = f ′′ (x)
dx2
d3
D3 f (x) = 3 f (x) = f ′′′ (x)
dx
d4
D4 f (x) = 4 f (x) = f (iv) (x)
dx
.
.
.
n
d
Dn f (x) = n f (x) = f (n) (x)
dx
From definition we have

Ef (x) = f (x + h) (h being the interval of differencing)

= f (x) + (h/1!)f ′ (x) + (h²/2!)f ′′ (x) + (h³/3!)f ′′′ (x) + ... (expanding by Taylor’s series)

= [1 + hD/1! + (h²/2!)D² + (h³/3!)D³ + ...]f (x)

= [1 + hD/1! + (hD)²/2! + (hD)³/3! + ...]f (x)

= e^{hD} f (x)
Hence the identity
E = e^{hD}
We have already proved that

E = 1 + ∆;   E = e^{hD}

Now consider E = e^{hD}. Applying logarithms, we get

hD = log E = log[1 + ∆]

= ∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ...

⇒ D = (1/h)[∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ...]
4. Relationship between δ and ∆ :
From the definition we know that

δf (x) = f (x + h/2) − f (x − h/2)

δf (x) = E^{1/2} f (x) − E^{−1/2} f (x)

= (E^{1/2} − E^{−1/2} )f (x)

Therefore
δ = E^{1/2} − E^{−1/2}

Further
δf (x) = E^{−1/2} (E − 1)f (x) = E^{−1/2} ∆f (x)

Therefore
δ = E^{−1/2} ∆

From the above result we get

E^{1/2} δ = ∆

5. Relationship between µ and E :

µf (x) = (1/2)[f (x + h/2) + f (x − h/2)]

= (1/2)[E^{1/2} + E^{−1/2} ]f (x)

µ = (1/2)[E^{1/2} + E^{−1/2} ]

3.12 Error Propagation in a Difference Table


Let f0 , f1 , f2 , f3 , f4 , ..., fn be the values of the function y = f (x) and let the value f5 be affected
with an error ϵ such that the erroneous value of f5 is f5 + ϵ. In this case the error ϵ affects
the successive differences and spreads out fanwise as higher order differences are formed in the
table. The table given below shows us the effect of the error.

f ∆f ∆2 f ∆3 f
f0
∆f0
f1 ∆2 f0
∆f1 ∆3 f0
f2 ∆2 f1
∆f2 ∆3 f1
f3 ∆2 f2
∆f3 ∆3 f2 + ϵ
f4 ∆2 f3 + ϵ
∆f4 + ϵ ∆3 f3 − 3ϵ
f5 + ϵ ∆2 f4 − 2ϵ
∆f5 − ϵ ∆3 f4 + 3ϵ
f6 ∆2 f5 + ϵ
∆f6 ∆3 f5 − ϵ
f7 ∆2 f6
∆f7 ∆3 f6
f8 ∆2 f7
∆f8
f9

4 INTERPOLATION WITH EQUAL INTERVALS
4.1 Introduction
The word interpolation denotes the method of computing the value of the function y = f (x)
for any given value of the independent variable x when a set of values of y = f (x) for certain
values of x are given.

Definition 4.1. Interpolation is the estimation of a most likely value under given conditions.
It is the technique of estimating a past figure (Hiral).

According to Theile: ”Interpolation is the art of reading between the lines of a table”.
According to W.M. Harper: ”Interpolation consists in reading a value which lies between
two extreme points”.
The study of interpolation is based on the assumption that there are no sudden jumps in
the values of the dependent variable for the period under consideration. It is also assumed
that the rate of change of figures from one period to another is uniform.
Let y = f (x) be a function which takes the values f0 , f1 , f2 , f3 , f4 , ..., fn corresponding to
the values x0 , x1 , x2 , x3 , x4 , ..., xn of the independent variable x. If the form of the function
y = f (x) is known we can very easily calculate the value of y corresponding to any value of x.
But in most of the practical problems, the exact form of the function is not known. In such
cases the function f (x) is replaced by a simpler function say ϕ(x) which has the same values
as f (x) for x0 , x1 , x2 , x3 , x4 , ..., xn . The function ϕ(x) is called an interpolating function.

4.2 Missing Values


Let a function y = f (x) be given for equally spaced values x0 , x1 , x2 , x3 , x4 , ..., xn of the
argument and f0 , f1 , f2 , f3 , ..., fn denote the corresponding values of the function. If one or
more values of y = f (x) are missing we can find the missing values by using the relation
between the operators ∆ and E.

4.3 Newton’s Binomial Expansion Formula


Let f0 , f1 , f2 , f3 , ..., fn denote the values of the function y = f (x) corresponding to the values
x0 , x0 + h, x0 + 2h, x0 + 3h, ..., x0 + nh of x and let one of the values of y be missing, so that
only n values of the function are known. The n known values determine a polynomial of
degree n − 1, for which

∆n f0 = 0

⇒ (E − 1)n f0 = 0

⇒ [E^n − C(n, 1)E^{n−1} + C(n, 2)E^{n−2} + ... + (−1)^n ]f0 = 0

⇒ E^n f0 − nE^{n−1} f0 + [n(n − 1)/2!]E^{n−2} f0 + ... + (−1)^n f0 = 0

⇒ fn − nfn−1 + [n(n − 1)/2!]fn−2 + ... + (−1)^n f0 = 0
The above formula is called Newton’s binomial expansion formula and is useful in finding
the missing values without constructing the difference table.
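As a small illustration of the formula (with hypothetical data, not from the text): if a quadratic is tabulated at four equally spaced points and one value is missing, setting ∆³f0 = 0 recovers it.

```python
from math import comb

# f(x) = x^2 at x = 0, 1, 2, 3 with f2 missing (hypothetical data).
f0, f1, f3 = 0.0, 1.0, 9.0

# Delta^3 f0 = f3 - 3 f2 + 3 f1 - f0 = 0  =>  solve for the missing f2.
f2 = (f3 + 3 * f1 - f0) / 3
assert f2 == 4.0                       # indeed 2^2

# Check the expansion coefficients (-1)^k C(n, k) on the completed data.
vals = [0.0, 1.0, f2, 9.0]
delta3 = sum((-1)**k * comb(3, k) * vals[3 - k] for k in range(4))
assert delta3 == 0.0
```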

4.4 Newton’s Forward Interpolation Formula


Let
x = x0 + ph(≡ xp )
f (x) = f (xp ) = E p f0 = (1 + ∆)p f0
Using Binomial Theorem we get
p(p − 1) 2 p(p − 1)(p − 2) 4
f (x) = f (xp ) = (1 + ∆)p f0 = [1 + p∆ + ∆ + ∆ + ...]f0
2! 3!
p(p − 1) 2 p(p − 1)(p − 2) 4
⇒ f (x) = f (xp ) = (1 + ∆)p f0 = f0 + p∆f0 + ∆ f0 + ∆ f0 + ...
2! 3!
This formula is called the Newton’s Forward Difference Formula .
Suppose that the differences terminate at ∆n f0 then
p(p − 1) 2 p(p − 1)(p − 2) 4
⇒ f (x) ≈ Pn (x) = f0 + p∆f0 + ∆ f0 + ∆ f0 + ...
2! 3!
p(p − 1)(p − 2)(p − 3)...(p − n + 1) n
+ ∆ f0
n!
Newton Forward Difference Interpolating polynomial continues from this equation i.e.
p(p − 1)(p − 2)(p − 3)...(p − n + 1) n p(p − 1)(p − 2)(p − 3)...(p − n) n
P (x) = ...+ ∆ f0 + ∆ f0 +
n! (n + 1)!

p(p − 1)(p − 2)(p − 3)...(p − n + 1) n+2 p(p − 1)(p − 2)(p − 3)...(p − n + 2) n+3
∆ f0 + ∆ f0 +...
(n + 2)! (n + 3)!
The truncation error ϵn (p) associated with this formula is

ϵn (p) = f (x) − Pn (p)

ϵn (p) = [p(p − 1)(p − 2)(p − 3)...(p − n)/(n + 1)!]∆^{n+1} f0 + [p(p − 1)...(p − n − 1)/(n + 2)!]∆^{n+2} f0 + ...

i.e.

ϵn (p) ≈ [p(p − 1)(p − 2)(p − 3)...(p − n)/(n + 1)!]∆^{n+1} f0
where
x = x0 + ph
To apply this formula, we choose x0 so that

| p |< 1

We choose x0 as the base point immediately before x, so that

0<p<1

This choice will lead to a smaller truncation error.

Example 4.2. Obtain the interpolating polynomial which passes through all of the following
points. Hence evaluate the following f (0.5), f (−0.5), f (1.5), f ′ (1).
x −1 0 1 2
f (x) 0 −1 0 15

SOLUTION

x f (x) ∆f (x) ∆2 f (x) ∆3 f (x)


−1 0
−1
0 −1 2
1 12
1 0 14
15
2 15
P3 (x) = f0 + p∆f0 + [p(p − 1)/2!]∆²f0 + [p(p − 1)(p − 2)/3!]∆³f0
where
x = x0 + ph   ⇒ p = (x − x0 )/h
Since
x0 = −1 h=1 ⇒ x = −1 + p ⇒p=x+1
P3 (p) = 0 − p + [p(p − 1)/2](2) + [p(p − 1)(p − 2)/6](12)
P3 (p) = −p + p(p − 1) + 2p(p − 1)(p − 2)
⇒ P3 (x) = −(x + 1) + (x + 1)(x) + 2(x + 1)(x)(x − 1)

P3 (x) = −x − 1 + x2 + x + (2x + 2)(x2 − x)
P3 (x) = −x − 1 + x2 + x + 2x3 − 2x2 + 2x2 − 2x
P3 (x) = 2x3 + x2 − 2x − 1
⇒ f (x) = 2x3 + x2 − 2x − 1
⇒ f (−0.5) = 2(−0.5)³ + (−0.5)² − 2(−0.5) − 1 = 0
⇒ f (0.5) = 2(0.5)³ + (0.5)² − 2(0.5) − 1 = −1.5
⇒ f (1.5) = 2(1.5)³ + (1.5)² − 2(1.5) − 1 = 5
⇒ f ′ (x) = 6x² + 2x − 2, so f ′ (1) = 6
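The evaluation in Example 4.2 can be checked by coding the Newton forward polynomial directly from the difference-table entries (f0 = 0, ∆f0 = −1, ∆²f0 = 2, ∆³f0 = 12 with x0 = −1, h = 1); the function name below is our own.

```python
def newton_forward(x):
    """P3 at x, built from the leading differences of Example 4.2."""
    p = (x - (-1.0)) / 1.0                     # p = (x - x0)/h, x0 = -1, h = 1
    return (0.0                                # f0
            + p * (-1.0)                       # p * Delta f0
            + p * (p - 1) / 2.0 * 2.0          # p(p-1)/2! * Delta^2 f0
            + p * (p - 1) * (p - 2) / 6.0 * 12.0)  # p(p-1)(p-2)/3! * Delta^3 f0

# Reproduces the tabulated points and the interpolated values.
for xv, fv in [(-1, 0), (0, -1), (1, 0), (2, 15), (0.5, -1.5), (1.5, 5.0)]:
    assert abs(newton_forward(xv) - fv) < 1e-12
```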

4.5 Newton-Gregory Backward Interpolation Formula


Let
x = x0 + ph(≡ xp )
f (x) = f (xp ) = E p f0
We know that
E = (1 − ∇)−1 = 1 + ∇ + ∇² + ∇³ + ∇⁴ + ...
We also know that
E −n f0 = f−n
⇒ f (x) = (1 − ∇)−p f0
Using the Binomial Theorem we get

f (x) = f (xp ) = (1 − ∇)^{−p} f0 = [1 + p∇ + (p(p + 1)/2!)∇² + (p(p + 1)(p + 2)/3!)∇³ + ...]f0

⇒ f (x) = f (xp ) = (1 − ∇)^{−p} f0 = f0 + p∇f0 + [p(p + 1)/2!]∇²f0 + [p(p + 1)(p + 2)/3!]∇³f0 + ...
This formula is called Newton’s Backward Difference Formula.
Suppose that the differences terminate at ∇^n f0 ; then

f (x) ≈ Pn (x) = f0 + p∇f0 + [p(p + 1)/2!]∇²f0 + [p(p + 1)(p + 2)/3!]∇³f0 + ...
+ [p(p + 1)(p + 2)(p + 3)...(p + n − 1)/n!]∇^n f0

The full Newton backward difference interpolating series continues from this equation, i.e.

f (x) = ... + [p(p + 1)...(p + n − 1)/n!]∇^n f0 + [p(p + 1)...(p + n)/(n + 1)!]∇^{n+1} f0
+ [p(p + 1)...(p + n + 1)/(n + 2)!]∇^{n+2} f0 + ...
The truncation error ϵn (p) associated with this formula is

ϵn (p) = f (x) − Pn (p)

ϵn (p) = [p(p + 1)(p + 2)(p + 3)...(p + n)/(n + 1)!]∇^{n+1} f0 + [p(p + 1)...(p + n + 1)/(n + 2)!]∇^{n+2} f0 + ...

i.e.

ϵn (p) ≈ [p(p + 1)(p + 2)(p + 3)...(p + n)/(n + 1)!]∇^{n+1} f0
where
x = x0 + ph
To apply this formula, we choose x0 so that

| p |< 1

We choose x0 as the base point immediately after x, so that

−1 < p < 0

This choice will lead to a smaller truncation error.

Example 4.3. Obtain the interpolating polynomial which passes through all of the following
points. Hence evaluate the following f (0.5), f (−0.5), f (1.5), f ′ (1).
x −1 0 1 2
f (x) 0 −1 0 15

SOLUTION

x f (x) ∇f (x) ∇2 f (x) ∇3 f (x)


−1 0
−1
0 −1 2
1 12
1 0 14
15
2 15
P3 (x) = f0 + p∇f0 + [p(p + 1)/2!]∇²f0 + [p(p + 1)(p + 2)/3!]∇³f0

where
x = x0 + ph   ⇒ p = (x − x0 )/h
Since
x0 = 2 h=1 ⇒x=2+p ⇒p=x−2
P3 (p) = 15 + 15p + [p(p + 1)/2](14) + [p(p + 1)(p + 2)/6](12)
P3 (p) = 15 + 15p + 7p(p + 1) + 2p(p + 1)(p + 2)
⇒ P3 (x) = 15 + 15(x − 2) + 7(x − 2)(x − 1) + 2(x − 2)(x − 1)(x)
P3 (x) = 15 + 15x − 30 + (7x − 14)(x − 1) + (2x2 − 4x)(x − 1)
P3 (x) = 15 + 15x − 30 + 7x2 − 21x + 14 + 2x3 − 6x2 + 4x
P3 (x) = 2x3 + x2 − 2x − 1
⇒ f (x) = 2x3 + x2 − 2x − 1
⇒ f (−0.5) = 2(−0.5)³ + (−0.5)² − 2(−0.5) − 1 = 0
⇒ f (0.5) = 2(0.5)³ + (0.5)² − 2(0.5) − 1 = −1.5
⇒ f (1.5) = 2(1.5)³ + (1.5)² − 2(1.5) − 1 = 5
⇒ f ′ (x) = 6x² + 2x − 2, so f ′ (1) = 6

5 INTERPOLATION WITH UNEQUAL INTERVALS
5.1 Introduction
The Newton’s forward and backward interpolation formulae which were derived in the pre-
vious section are applicable only when the values of n are given at equal intervals. In
this section we study the problem of interpolation when the values of the independent
variable x are given at unequal intervals. The concept of divided differences: Let the
function y = f (x) be given at the point x0 , x1 , x2 , x3 , ...xn (which need not be equally
spaced) f (x0 ), f (x1 ), f (x2 ), ..., f (xn ) denote the (n + 1) values the function at the points
x0 , x1 , x2 , x3 , ..., xn . Then the first divided difference of f (x) for the arguments x0 and x1 is
defined as

[f (x0 ) − f (x1 )]/(x0 − x1 )

It is denoted by [x0 , x1 ]. Therefore

f (x0 , x1 ) = [f (x0 ) − f (x1 )]/(x0 − x1 )

Similarly we can define

f (x1 , x2 ) = [f (x1 ) − f (x2 )]/(x1 − x2 )

f (x2 , x3 ) = [f (x2 ) − f (x3 )]/(x2 − x3 )

The second divided difference for the arguments x0 , x1 , x2 is defined as

f (x0 , x1 , x2 ) = [f (x0 , x1 ) − f (x1 , x2 )]/(x0 − x2 )

similarly the third divided difference for the arguments x0 , x1 , x2 , x3 is defined as

f (x0 , x1 , x2 , x3 ) = [f (x0 , x1 , x2 ) − f (x1 , x2 , x3 )]/(x0 − x3 )
The first divided differences are called the divided differences of order one, the second di-
vided differences are called the divided differences of order two, etc.

5.2 Newton’s General Divided Differences Formula

5.3 Lagrange’s Interpolation Formula


Let y = f (x) be a function which assumes the values f (x0 ), f (x1 ), f (x2 ), ..., f (xn ) corre-
sponding to the values x = x0 , x1 , x2 , ..., xn , where the values of x are not equispaced. Since
(n + 1) values of the function are given corresponding to the (n + 1) values of the independent

variable x, we can represent the function y = f (x) by a polynomial in x of degree n.
Let the polynomial be

f (x) = a0 (x − x1 )(x − x2 )...(x − xn ) + a1 (x − x0 )(x − x2 )...(x − xn )+

a2 (x − x0 )(x − x1 )(x − x3 )...(x − xn ) + ... + an (x − x0 )(x − x1 )...(x − xn−1 ) (9)

Each term in equation (9) is a product of n factors in x, so f(x) is a polynomial of degree n. Putting x = x0 in (9) we get

f(x0) = a0(x0 − x1)(x0 − x2)...(x0 − xn)

⇒ a0 = f(x0)/[(x0 − x1)(x0 − x2)(x0 − x3)...(x0 − xn)]

Putting x = x1 in (9) we get

f(x1) = a1(x1 − x0)(x1 − x2)...(x1 − xn)

⇒ a1 = f(x1)/[(x1 − x0)(x1 − x2)...(x1 − xn)]

Similarly, putting x = x2, x = x3, ..., x = xn in (9) we get

⇒ a2 = f(x2)/[(x2 − x0)(x2 − x1)...(x2 − xn)]
...
⇒ an = f(xn)/[(xn − x0)(xn − x1)...(xn − xn−1)]
Substituting the values of a0 , a1 , ..., an in (9) we get
y = f(x) = [(x − x1)(x − x2)(x − x3)...(x − xn)]/[(x0 − x1)(x0 − x2)(x0 − x3)...(x0 − xn)] f(x0) + [(x − x0)(x − x2)(x − x3)...(x − xn)]/[(x1 − x0)(x1 − x2)(x1 − x3)...(x1 − xn)] f(x1) + ... + [(x − x0)(x − x1)(x − x2)...(x − xn−1)]/[(xn − x0)(xn − x1)(xn − x2)...(xn − xn−1)] f(xn)    (10)

The formula given by (10) is called Lagrange’s interpolation formula. It is simple and easy
to remember but the calculations in the formula are more complicated than in Newton’s
divided difference formula. The application of the formula is not speedy and there is always
a chance of committing some error due to the number of positive and negative signs in the
numerator and denominator of each term.

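Formula (10) translates almost directly into code. The sketch below is illustrative; the function name `lagrange_interpolate` is ours, not from the notes:

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial at x.

    Each term is f(xk) multiplied by the product of
    (x - xi)/(xk - xi) over all i != k, as in formula (10).
    """
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        term = fk
        for i, xi in enumerate(xs):
            if i != k:
                term *= (x - xi) / (xk - xi)
        total += term
    return total

# The cubic through (-1, 0), (0, -1), (1, 0), (2, 15) is 2x^3 + x^2 - 2x - 1,
# so interpolating at x = 0.5 gives -1.5.
value = lagrange_interpolate([-1.0, 0.0, 1.0, 2.0], [0.0, -1.0, 0.0, 15.0], 0.5)
```

The double loop makes the cost O(n^2) per evaluation, which matches the remark above that the formula, while easy to remember, is slower in hand computation than Newton's divided difference form.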
Example 5.1. Obtain the interpolating polynomial which passes through all of the following points. Hence evaluate f(0.5), f(−0.5), f(1.5) and f′(1).
x −1 0 1 2
f (x) 0 −1 0 15

SOLUTION

P3 (x) = f0 L0 (x) + f1 L1 (x) + f2 L2 (x) + f3 L3 (x)


L0(x) = [(x − x1)(x − x2)(x − x3)]/[(x0 − x1)(x0 − x2)(x0 − x3)] = [(x − 0)(x − 1)(x − 2)]/[(−1 − 0)(−1 − 1)(−1 − 2)] = x(x − 1)(x − 2)/(−6)

L1(x) = [(x − x0)(x − x2)(x − x3)]/[(x1 − x0)(x1 − x2)(x1 − x3)] = [(x + 1)(x − 1)(x − 2)]/[(0 + 1)(0 − 1)(0 − 2)] = (x − 1)(x + 1)(x − 2)/2

L2(x) = [(x − x0)(x − x1)(x − x3)]/[(x2 − x0)(x2 − x1)(x2 − x3)] = [(x + 1)(x − 0)(x − 2)]/[(1 + 1)(1 − 0)(1 − 2)] = x(x + 1)(x − 2)/(−2)

L3(x) = [(x − x0)(x − x1)(x − x2)]/[(x3 − x0)(x3 − x1)(x3 − x2)] = [(x + 1)(x − 0)(x − 1)]/[(2 + 1)(2 − 0)(2 − 1)] = x(x + 1)(x − 1)/6

P3(x) = 0 × L0(x) + (−1) × L1(x) + 0 × L2(x) + 15 × L3(x)

P3(x) = −L1(x) + 15 L3(x)

P3(x) = −(1/2)(x^3 − 2x^2 − x + 2) + (5/2)(x^3 − x)

P3(x) = 2x^3 + x^2 − 2x − 1

⇒ f(x) = 2x^3 + x^2 − 2x − 1

⇒ f(−0.5) = 2(−0.5)^3 + (−0.5)^2 − 2(−0.5) − 1 = 0

⇒ f(0.5) = 2(0.5)^3 + (0.5)^2 − 2(0.5) − 1 = −1.5

⇒ f(1.5) = 2(1.5)^3 + (1.5)^2 − 2(1.5) − 1 = 5

⇒ f′(x) = 6x^2 + 2x − 2, so f′(1) = 6(1)^2 + 2(1) − 2 = 6

Example 5.2. Using Lagrange’s interpolating formula find a polynomial which passes through
the points
(0, −12), (1, 0), (3, 6), (4, 12)

SOLUTION
We have
x0 = 0, x1 = 1, x2 = 3, x3 = 4, f0 = −12, f1 = 0, f2 = 6, f3 = 12
Using Lagrange’s interpolating formula we can write

P3 (x) = f0 L0 (x) + f1 L1 (x) + f2 L2 (x) + f3 L3 (x)


L0(x) = [(x − x1)(x − x2)(x − x3)]/[(x0 − x1)(x0 − x2)(x0 − x3)] = [(x − 1)(x − 3)(x − 4)]/[(0 − 1)(0 − 3)(0 − 4)] = (x^3 − 8x^2 + 19x − 12)/(−12)

L1(x) = [(x − x0)(x − x2)(x − x3)]/[(x1 − x0)(x1 − x2)(x1 − x3)] = [x(x − 3)(x − 4)]/[(1 − 0)(1 − 3)(1 − 4)] = (x^3 − 7x^2 + 12x)/6

L2(x) = [(x − x0)(x − x1)(x − x3)]/[(x2 − x0)(x2 − x1)(x2 − x3)] = [x(x − 1)(x − 4)]/[(3 − 0)(3 − 1)(3 − 4)] = (x^3 − 5x^2 + 4x)/(−6)

L3(x) = [(x − x0)(x − x1)(x − x2)]/[(x3 − x0)(x3 − x1)(x3 − x2)] = [x(x − 1)(x − 3)]/[(4 − 0)(4 − 1)(4 − 3)] = (x^3 − 4x^2 + 3x)/12

Since f1 = 0, the L1 term drops out, and

P3(x) = [(x^3 − 8x^2 + 19x − 12)/(−12)] × (−12) + [(x^3 − 5x^2 + 4x)/(−6)] × 6 + [(x^3 − 4x^2 + 3x)/12] × 12

P3(x) = x^3 − 7x^2 + 18x − 12

⇒ f(x) = x^3 − 7x^2 + 18x − 12

Example 5.3. Using Lagrange’s interpolating formula find the value of y = f (x) which
corresponds to x = 10 from the following table:

x 5 6 9 11
y = f (x) 12 13 14 16

SOLUTION
We have
x0 = 5, x1 = 6, x2 = 9, x3 = 11, f0 = 12, f1 = 13, f2 = 14, f3 = 16
Using Lagrange’s interpolating formula we can write

P3 (x) = f0 L0 (x) + f1 L1 (x) + f2 L2 (x) + f3 L3 (x)


Substituting x = 10:

L0(10) = [(10 − 6)(10 − 9)(10 − 11)]/[(5 − 6)(5 − 9)(5 − 11)] = 1/6

L1(10) = [(10 − 5)(10 − 9)(10 − 11)]/[(6 − 5)(6 − 9)(6 − 11)] = −1/3

L2(10) = [(10 − 5)(10 − 6)(10 − 11)]/[(9 − 5)(9 − 6)(9 − 11)] = 5/6

L3(10) = [(10 − 5)(10 − 6)(10 − 9)]/[(11 − 5)(11 − 6)(11 − 9)] = 1/3

P3(10) = f0 L0(10) + f1 L1(10) + f2 L2(10) + f3 L3(10) = 2 − 13/3 + 35/3 + 16/3 = 44/3 ≈ 14.67

5.4 Inverse (Lagrange’s) Interpolation


In interpolation we have discussed various methods of estimating the missing value of the function y = f (x) corresponding to a value of x intermediate between two given values. Now we discuss inverse interpolation, in which we interpolate the value of the argument x corresponding to an intermediate value y of the entry.
Use of Lagrange’s interpolation formula for inverse interpolation
In Lagrange's interpolation formula y = f (x) is expressed as a function of x as given below:

y = f(x) = [(x − x1)(x − x2)...(x − xn)]/[(x0 − x1)(x0 − x2)...(x0 − xn)] f(x0) + [(x − x0)(x − x2)(x − x3)...(x − xn)]/[(x1 − x0)(x1 − x2)(x1 − x3)...(x1 − xn)] f(x1) + ... + [(x − x0)(x − x1)(x − x2)...(x − xn−1)]/[(xn − x0)(xn − x1)(xn − x2)...(xn − xn−1)] f(xn)    (11)

By interchanging the roles of x and y = f (x) we can express x as a function of y = f (x) as follows:

x = [(f − f1)(f − f2)...(f − fn)]/[(f0 − f1)(f0 − f2)...(f0 − fn)] x0 + [(f − f0)(f − f2)(f − f3)...(f − fn)]/[(f1 − f0)(f1 − f2)(f1 − f3)...(f1 − fn)] x1 + ... + [(f − f0)(f − f1)(f − f2)...(f − fn−1)]/[(fn − f0)(fn − f1)(fn − f2)...(fn − fn−1)] xn    (12)

The formula given by (12) is called Lagrange's inverse interpolation formula.
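Formula (12) is the same Lagrange sum with the roles of the abscissae and ordinates exchanged. A minimal Python sketch (the function name `inverse_lagrange` is ours) is:

```python
def inverse_lagrange(fs, xs, f):
    """Estimate the argument x for a given function value f by
    Lagrange interpolation with x and f(x) interchanged (formula (12))."""
    total = 0.0
    for k, (fk, xk) in enumerate(zip(fs, xs)):
        term = xk
        for i, fi in enumerate(fs):
            if i != k:
                term *= (f - fi) / (fk - fi)
        total += term
    return total

# Three sample tabulated values: find x where f(x) = 0.3.
x = inverse_lagrange([0.3683, 0.3332, 0.2897], [0.4, 0.6, 0.8], 0.3)
```

Inverse interpolation is only reliable when f is monotone over the tabulated range, so that a unique x corresponds to the given value.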

Example 5.4. The following table gives the value of the elliptic integral

F(ϕ) = ∫_0^ϕ dθ/√(1 − (1/2) sin^2 θ)

for certain values of ϕ. Find the value of ϕ for which F(ϕ) = 0.3887.

ϕ     21°     23°     25°
F(ϕ)  0.3706  0.4068  0.4433

SOLUTION
We have ϕ0 = 21°, ϕ1 = 23°, ϕ2 = 25°, F = 0.3887, F0 = 0.3706, F1 = 0.4068, F2 = 0.4433
Using the inverse interpolation formula we can write

ϕ = [(F − F1)(F − F2)]/[(F0 − F1)(F0 − F2)] ϕ0 + [(F − F0)(F − F2)]/[(F1 − F0)(F1 − F2)] ϕ1 + [(F − F0)(F − F1)]/[(F2 − F0)(F2 − F1)] ϕ2

ϕ = [(0.3887 − 0.4068)(0.3887 − 0.4433)]/[(0.3706 − 0.4068)(0.3706 − 0.4433)] × 21 + [(0.3887 − 0.3706)(0.3887 − 0.4433)]/[(0.4068 − 0.3706)(0.4068 − 0.4433)] × 23 + [(0.3887 − 0.3706)(0.3887 − 0.4068)]/[(0.4433 − 0.3706)(0.4433 − 0.4068)] × 25

ϕ = 7.886 + 17.203 − 3.087 = 22.002

ϕ ≈ 22°

Example 5.5. Find the value of x when y = f (x) = 0.3 by applying Lagrange’s formula
inversely.
x 0.4 0.6 0.8
y = f (x) 0.3683 0.3332 0.2897

SOLUTION
We have x0 = 0.4, x1 = 0.6, x2 = 0.8, f = 0.3, f0 = 0.3683, f1 = 0.3332, f2 = 0.2897
Using the inverse interpolation formula we can write

x = [(f − f1)(f − f2)]/[(f0 − f1)(f0 − f2)] x0 + [(f − f0)(f − f2)]/[(f1 − f0)(f1 − f2)] x1 + [(f − f0)(f − f1)]/[(f2 − f0)(f2 − f1)] x2

x = [(0.3 − 0.3332)(0.3 − 0.2897)]/[(0.3683 − 0.3332)(0.3683 − 0.2897)] × 0.4 + [(0.3 − 0.3683)(0.3 − 0.2897)]/[(0.3332 − 0.3683)(0.3332 − 0.2897)] × 0.6 + [(0.3 − 0.3683)(0.3 − 0.3332)]/[(0.2897 − 0.3683)(0.2897 − 0.3332)] × 0.8

x ≈ −0.04958 + 0.27645 + 0.53056 = 0.75743

6 CENTRAL DIFFERENCE INTERPOLATION FOR-


MULAE
Coming soon...!

6.1 Gauss Forward Interpolation Formula


Coming soon...!

6.2 Gauss Backward Interpolation Formula


Coming soon...!

6.3 Bessel’s Formula


Coming soon...!

6.4 Stirling’s Formula
Coming soon...!

6.5 Laplace–Everett Formula


Coming soon...!

7 INVERSE INTERPOLATION
Coming soon...!

7.1 Introduction
Coming soon...!

7.2 Method of Successive Approximations


Coming soon...!

7.3 Method of Reversion Series


Coming soon...!

7.4 Applications of Interpolation


Coming soon...!

8 CURVE FITTING
8.1 Introduction
Coming soon...!

8.2 The Straight Line


Coming soon...!

8.3 Fitting a Straight Line


Coming soon...!

8.4 Fitting a Parabola


Coming soon...!

8.5 Exponential Function


Coming soon...!

8.6 Applications
Coming soon...!

9 MATRICES AND SIMULTANEOUS LINEAR EQUA-
TIONS
9.1 Matrix Inversion Method
Systems of linear equations arise frequently; if n equations in n unknowns are given, we write

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1 (13)


a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2 (14)
a31 x1 + a32 x2 + a33 x3 + ... + a3n xn = b3 (15)
a41 x1 + a42 x2 + a43 x3 + ... + a4n xn = b4 (16)
... ... (17)
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn (18)

Coming soon...!

9.2 Gaussian Elimination Method


Coming soon...!

9.3 Gauss-Jordan Elimination Method


Coming soon...!

9.4 LU Decomposition Methods


Coming soon...!

9.5 Iteration Methods


Coming soon...!

9.5.1 Jacobi Method

Coming soon...!

9.5.2 Gauss-Seidel Method

Coming soon...!

9.6 Introduction to SOR Methods
Coming soon...!

9.7 Crout’s Triangulation Method (Method of Factorization)


Coming soon...!

9.8 Applications
Coming soon...!

10 NUMERICAL SOLUTION OF ORDINARY DIF-
FERENTIAL EQUATIONS
10.1 Introduction
The most general form of an ordinary differential equation of nth order is given by

ϕ(x, y, dy/dx, d^2y/dx^2, d^3y/dx^3, ..., d^ny/dx^n) = 0    (19)
A general solution of an ordinary differential equation such as (19) is a relation between y, x and n arbitrary constants which satisfies the equation, and it is of the form

f (x, y, c1 , c2 , c3 , ..., cn ) = 0    (20)

If particular values are given to the constants c1, c2, c3, ..., cn, the resulting solution is called a particular solution. To obtain a particular solution from the general solution (20), we must be given n conditions so that the constants can be determined. If all n conditions are specified at the same value of x, the problem is termed an initial value problem. Though there are many analytical methods for finding solutions of equations of the form (19), there exist a large number of ordinary differential equations whose solutions cannot be obtained by the known analytical methods. In such cases we use numerical methods to get an approximate solution of a given differential equation under the prescribed initial condition. In this chapter we restrict ourselves to developing numerical methods for finding a solution of an ordinary differential equation of first order and first degree, which is of the form

dy/dx = f (x, y)

with the initial condition y(x0) = y0; together these constitute an initial value problem. The solution will be obtained in two forms: (1) as a power series in the independent variable x, and (2) as a set of tabulated values of x and y. We shall now develop numerical methods for the solution of this initial value problem. We partition the interval [a, b] on which the solution is desired into a finite number of
subintervals by the points

a = x0 < x1 < x2 < x3 < ... < xn = b


The points are called mesh points. We assume that the points are spaced uniformly with the
relation
xn = x0 + nh

The existence and uniqueness of the solution of an initial value problem on [x0 , b] is guaranteed by the theorem due to Lipschitz, which states that if

1. f (x, y) is a real function defined and continuous for x ∈ [x0 , b], y ∈ (−∞, ∞), where x0 and b are finite, and

2. there exists a constant L > 0, called the Lipschitz constant, such that for any two values y = y1 and y = y2

| f (x, y1 ) − f (x, y2 ) | ≤ L | y1 − y2 |

where x ∈ [x0 , b],

then for any y(x0 ) = y0 the initial value problem has a unique solution for x ∈ [x0 , b].

10.2 Taylor’s Series Method


Let y = f(x) be a solution of the equation

dy/dx = f(x, y)

with y(x0) = y0. Expanding f(x) by Taylor's series about the point x0, we get

f(x) = f(x0) + ((x − x0)/1!) f′(x0) + ((x − x0)^2/2!) f′′(x0) + ((x − x0)^3/3!) f′′′(x0) + ...

This may be written as

y = f(x) = y0 + ((x − x0)/1!) y0′ + ((x − x0)^2/2!) y0′′ + ((x − x0)^3/3!) y0′′′ + ...

Putting x = x1 = x0 + h we get

f(x1) = y1 = y0 + (h/1!) y0′ + (h^2/2!) y0′′ + (h^3/3!) y0′′′ + ...

Similarly we obtain

yn+1 = yn + (h/1!) yn′ + (h^2/2!) yn′′ + (h^3/3!) yn′′′ + ...

i.e.,

yn+1 = yn + (h/1!) yn′ + (h^2/2!) yn′′ + O(h^3)
where O(h^3) denotes all the succeeding terms, which contain the third and higher powers of h. If the terms containing the third and higher powers of h are neglected, then the local truncation error in the solution is kh^3, where k is a constant. For a better approximation,
terms containing higher powers of h are considered.
Note: Taylor’s series method is applicable only when the various derivatives of f (x, y) exist
and the value of (x − x0 ) in the expansion of y = f (x) near x0 must be very small so that
the series converge.

Example 10.1. Solve

dy/dx = x + y,  y(1) = 0

numerically up to x = 1.2, with h = 0.1.

SOLUTION
We have x0 = 1, y0 = 0 and

dy/dx = y′ = x + y ⇒ y0′ = 1 + 0 = 1

d^2y/dx^2 = y′′ = 1 + y′ ⇒ y0′′ = 1 + 1 = 2

d^3y/dx^3 = y′′′ = y′′ ⇒ y0′′′ = 2

d^4y/dx^4 = y^(iv) = y′′′ ⇒ y0^(iv) = 2

d^5y/dx^5 = y^(v) = y^(iv) ⇒ y0^(v) = 2

...
Substituting the above values in

y1 = y0 + (h/1!) y0′ + (h^2/2!) y0′′ + (h^3/3!) y0′′′ + (h^4/4!) y0^(iv) + (h^5/5!) y0^(v) + ...

we get

y1 = 0 + (0.1)(1) + ((0.1)^2/2)(2) + ((0.1)^3/6)(2) + ((0.1)^4/24)(2) + ((0.1)^5/120)(2) + ...

⇒ y1 = 0.1103418

Therefore

y1 = y(1.1) ≈ 0.110
Now
x1 = x0 + h = 1 + 0.1 = 1.1

y1′ = x1 + y1 = 1.1 + 0.110 = 1.21
y1′′ = 1 + y1′ = 1 + 1.21 = 2.21
y1′′′ = y1′′ = 2.21
y1iv = 2.21
y1v = 2.21
.
.
.

y2 = 0.110 + (0.1)(1.21) + ((0.1)^2/2)(2.21) + ((0.1)^3/6)(2.21) + ((0.1)^4/24)(2.21) + ((0.1)^5/120)(2.21)

Therefore

y2 = 0.24243 ≈ 0.242
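The computation in Example 10.1 can be checked with a short script. This is an illustrative sketch specialised to y′ = x + y (the helper name `taylor_step` is ours):

```python
from math import factorial

def taylor_step(x, y, h, terms=6):
    """One Taylor-series step for the particular equation y' = x + y.

    Here y'' = 1 + y' and every higher derivative equals y'', so the
    derivatives at (x, y) are easy to list to any order.
    """
    d = [x + y]          # y'
    d.append(1 + d[0])   # y''
    while len(d) < terms:
        d.append(d[-1])  # y''' = y'''' = ... = y''
    return y + sum(h ** (k + 1) / factorial(k + 1) * dk
                   for k, dk in enumerate(d))

y1 = taylor_step(1.0, 0.0, 0.1)   # y(1.1) ≈ 0.110342
y2 = taylor_step(1.1, y1, 0.1)    # y(1.2) ≈ 0.242806
```

Carrying full precision (rather than rounding y1 to 0.110) gives y2 ≈ 0.24281, which agrees with the exact solution y = 2e^(x−1) − x − 1 at x = 1.2 to six decimal places.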

10.3 Euler’s Method


Coming soon...!

10.4 Modified Euler’s Method


Coming soon...!

10.5 Predictor-Corrector Methods


Coming soon...!

10.6 Milne’s Method


Coming soon...!

10.7 Adams Bashforth-Moulton Method


Coming soon...!

10.8 Runge-Kutta Method


Coming soon...!

10.9 Picard’s Method of Successive Approximation
Coming soon...!

10.10 Applications
Coming soon...!

11 NUMERICAL DIFFERENTIATION
The process of computing the value of the derivative dy/dx for some particular value of x from given data, when the actual form of the function is not known, is called numerical differentiation. When the values of the argument are equally spaced and we are to find the derivative for some given x lying near the beginning of the table, we can represent the function by Newton's forward interpolation formula. When the value of dy/dx is required at a point near the end of the table, we use Newton's backward interpolation formula, and we may use a suitable central difference interpolation formula when the derivative is to be found at some point lying near the middle of the tabulated values. If the values of the argument x are not equally spaced, we should use Newton's divided difference formula to approximate the function y = f (x).

11.1 Derivatives Using Newton’s Forward Interpolation Formula


Consider Newton's forward interpolation formula

y = f(x) = f0 + p△f0 + [p(p − 1)/2!]△^2 f0 + [p(p − 1)(p − 2)/3!]△^3 f0 + [p(p − 1)(p − 2)(p − 3)/4!]△^4 f0 + ...    (21)

where

p = (x − x0)/h.    (22)

Differentiating Equation (21) w.r.t. p we get

dy/dp = △f0 + [(2p − 1)/2!]△^2 f0 + [(3p^2 − 6p + 2)/3!]△^3 f0 + ...    (23)

Differentiating Equation (22) w.r.t. x we get

dp/dx = 1/h    (24)

Now from Equation (24) and Equation (23)

dy/dx = (dy/dp)(dp/dx) ⇒ dy/dx = (1/h)[△f0 + ((2p − 1)/2!)△^2 f0 + ((3p^2 − 6p + 2)/3!)△^3 f0 + ...]    (25)

Equation (25) gives the value of dy/dx at any x which is not tabulated. The formula becomes simple for tabulated values of x, in particular when x = x0 and p = 0.
Putting p = 0 in Equation (25) we get

(dy/dx)_{x=x0} = (1/h)[△f0 − (1/2)△^2 f0 + (1/3)△^3 f0 − (1/4)△^4 f0 + ...]    (26)

Differentiating Equation (25) w.r.t. x,

d^2y/dx^2 = (d/dp)(dy/dx) (dp/dx) = (1/h^2)[△^2 f0 + (p − 1)△^3 f0 + ((6p^2 − 18p + 11)/12)△^4 f0 + ...]    (27)

Putting p = 0 in Equation (27) we have

(d^2y/dx^2)_{x=x0} = (1/h^2)[△^2 f0 − △^3 f0 + (11/12)△^4 f0 − ...]    (28)

Similarly

(d^3y/dx^3)_{x=x0} = (1/h^3)[△^3 f0 − (3/2)△^4 f0 + ...]    (29)

We know that

E = e^{hD}
⇒ 1 + △ = e^{hD}
⇒ hD = log(1 + △)
⇒ hD = △ − (1/2)△^2 + (1/3)△^3 − (1/4)△^4 + ...
⇒ D = (1/h)[△ − (1/2)△^2 + (1/3)△^3 − (1/4)△^4 + ...]
⇒ D^2 = (1/h^2)[△ − (1/2)△^2 + (1/3)△^3 − (1/4)△^4 + ...]^2
⇒ D^2 = (1/h^2)[△^2 − △^3 + (11/12)△^4 − (5/6)△^5 + ...]

Applying the above identities to f0, we have

Df0 = (dy/dx)_{x=x0} = (1/h)[△f0 − (1/2)△^2 f0 + (1/3)△^3 f0 − (1/4)△^4 f0 + ...]

D^2 f0 = (d^2y/dx^2)_{x=x0} = (1/h^2)[△^2 f0 − △^3 f0 + (11/12)△^4 f0 + ...]

11.2 Derivatives Using Newton’s Backward Interpolation Formula


Consider Newton's backward interpolation formula

y = f(x) = f0 + p▽f0 + [p(p + 1)/2!]▽^2 f0 + [p(p + 1)(p + 2)/3!]▽^3 f0 + [p(p + 1)(p + 2)(p + 3)/4!]▽^4 f0 + ...    (30)

where

p = (x − x0)/h,    (31)

h being the interval of differencing and x0 taken as the last tabulated point. Differentiating Equation (30) w.r.t. p we get

dy/dp = ▽f0 + [(2p + 1)/2!]▽^2 f0 + [(3p^2 + 6p + 2)/3!]▽^3 f0 + ...    (32)

Differentiating Equation (31) w.r.t. x we get

dp/dx = 1/h    (33)

Now from Equation (33) and Equation (32)

dy/dx = (dy/dp)(dp/dx) ⇒ dy/dx = (1/h)[▽f0 + ((2p + 1)/2!)▽^2 f0 + ((3p^2 + 6p + 2)/3!)▽^3 f0 + ...]    (34)

Equation (34) gives the value of dy/dx at any x which is not tabulated. The formula becomes simple for tabulated values of x, in particular when x = x0 and p = 0.
Putting p = 0 in Equation (34) we get

(dy/dx)_{x=x0} = (1/h)[▽f0 + (1/2)▽^2 f0 + (1/3)▽^3 f0 + (1/4)▽^4 f0 + ...]    (35)

Differentiating Equation (34) w.r.t. x,

d^2y/dx^2 = (d/dp)(dy/dx) (dp/dx) = (1/h^2)[▽^2 f0 + (p + 1)▽^3 f0 + ((6p^2 + 18p + 11)/12)▽^4 f0 + ...]    (36)

Putting p = 0 in Equation (36) we have

(d^2y/dx^2)_{x=x0} = (1/h^2)[▽^2 f0 + ▽^3 f0 + (11/12)▽^4 f0 + ...]    (37)

In a similar manner we can find the derivatives of higher order at x = x0 .
Example 11.1. From the table of values below compute dy/dx and d^2y/dx^2 at x = 1.
x 1 2 3 4 5 6
y = f (x) 1 8 27 64 125 216
Solution The difference table is

x y = f (x) △f △2 f △3 f △4 f
1 1
7
2 8 12
19 6
3 27 18 0
37 6
4 64 24 0
61 6
5 125 30
91
6 216

We have x0 = 1, h = 1; x = 1 is at the beginning of the table.
We use Newton's forward formula:

(dy/dx)_{x=x0} = (1/h)[△f0 − (1/2)△^2 f0 + (1/3)△^3 f0 − (1/4)△^4 f0 + ...]

⇒ (dy/dx)_{x=1} = (1/1)[7 − (1/2)(12) + (1/3)(6) − (1/4)(0) + ...] = 7 − 6 + 2 = 3

and

(d^2y/dx^2)_{x=x0} = (1/h^2)[△^2 f0 − △^3 f0 + (11/12)△^4 f0 − ...]

(d^2y/dx^2)_{x=1} = (1/1)[12 − 6 + ...] = 6

Example 11.2. From the table of values below compute dy/dx and d^2y/dx^2 at x = 1.05.

x        1.00    1.05    1.10    1.15    1.20    1.25    1.30
y = f(x) 1.00000 1.02470 1.04881 1.07238 1.09544 1.11803 1.14017

Solution The difference table is

x     y = f(x)   △f        △^2 f      △^3 f      △^4 f      △^5 f
1.00  1.00000
                 0.02470
1.05  1.02470               −0.00059
                 0.02411                0.00005
1.10  1.04881               −0.00054               −0.00002
                 0.02357                0.00003                0.00003
1.15  1.07238               −0.00051                0.00001              −0.00003
                 0.02306                0.00004               −0.00003
1.20  1.09544               −0.00047               −0.00002
                 0.02259                0.00002
1.25  1.11803               −0.00045
                 0.02214
1.30  1.14017

We have x0 = 1.05, h = 0.05; x = 1.05 is near the beginning of the table, so we take f0 = 1.02470 and use Newton's forward formula:

(dy/dx)_{x=x0} = (1/h)[△f0 − (1/2)△^2 f0 + (1/3)△^3 f0 − (1/4)△^4 f0 + (1/5)△^5 f0 − ...]

⇒ (dy/dx)_{x=1.05} = (1/0.05)[0.02411 − (1/2)(−0.00054) + (1/3)(0.00003) − (1/4)(0.00001) + (1/5)(−0.00003)]

= 0.48763

and

(d^2y/dx^2)_{x=x0} = (1/h^2)[△^2 f0 − △^3 f0 + (11/12)△^4 f0 − (5/6)△^5 f0 + ...]

(d^2y/dx^2)_{x=1.05} = (1/(0.05)^2)[−0.00054 − 0.00003 + (11/12)(0.00001) − (5/6)(−0.00003)] = −0.2143.

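Formula (26) can be checked numerically against Example 11.1. The sketch below is illustrative (the helper names are ours):

```python
def forward_diffs(fs):
    """Columns of the forward difference table: [f, Δf, Δ²f, ...]."""
    cols = [list(fs)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

def dydx_at_x0(xs, fs):
    """dy/dx at x = x0 by formula (26):
    (1/h)[Δf0 − Δ²f0/2 + Δ³f0/3 − Δ⁴f0/4 + ...]."""
    h = xs[1] - xs[0]
    d = forward_diffs(fs)
    return sum((-1) ** (k + 1) * d[k][0] / k for k in range(1, len(d))) / h

# Data of Example 11.1 (y = x^3); the derivative at x = 1 is 3.
slope = dydx_at_x0([1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                   [1.0, 8.0, 27.0, 64.0, 125.0, 216.0])
```

For exact cubic data the series terminates (the fourth and higher differences vanish), so the computed slope is exact.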
Example 11.3. A rod is rotating in a plane about one of its ends. If the following table gives the angle θ in radians through which the rod has turned for different values of time t in seconds, find its angular velocity and angular acceleration at t = 0.7 seconds.

t (seconds) 0.0  0.2  0.4  0.6  0.8  1.0
θ (radians) 0.00 0.12 0.48 1.10 2.00 3.20

Solution The difference table is

t    θ      ▽θ      ▽^2 θ    ▽^3 θ    ▽^4 θ
0.0  0.00
            0.12
0.2  0.12            0.24
            0.36              0.02
0.4  0.48            0.26              0
            0.62              0.02
0.6  1.10            0.28              0
            0.90              0.02
0.8  2.00            0.30
            1.20
1.0  3.20

We have t0 = 1.0, h = 0.2, t = 0.7, and p = (t − t0)/h = (0.7 − 1.0)/0.2 = −1.5; the point lies near the end of the table, so we use Newton's backward interpolation formula:

dθ/dt = (1/h)[▽θ0 + ((2p + 1)/2!)▽^2 θ0 + ((3p^2 + 6p + 2)/3!)▽^3 θ0 + ...]

(dθ/dt)_{t=0.7} = (1/0.2)[1.20 + ((2(−1.5) + 1)/2)(0.30) + ((3(−1.5)^2 + 6(−1.5) + 2)/6)(0.02)] = 4.496 radians/sec

d^2θ/dt^2 = (1/h^2)[▽^2 θ0 + (p + 1)▽^3 θ0 + ((6p^2 + 18p + 11)/12)▽^4 θ0 + ...]

(d^2θ/dt^2)_{t=0.7} = (1/(0.2)^2)[0.30 + (−0.5)(0.02)] = 7.25 radians/sec^2

12 NUMERICAL INTEGRATION
12.1 Introduction
Numerical integration is used to obtain approximate answers for definite integrals that cannot
be solved analytically.
Numerical integration is a process of finding the numerical value of a definite integral

I = ∫_a^b f(x) dx

when a function y = f (x) is not known explicitly, but we are given only a set of values of the function y = f (x) corresponding to some values of x.
To evaluate the integral, we fit a suitable interpolating polynomial to the given set of values of f (x) and then integrate it within the desired limits: we integrate an approximate interpolation formula instead of f (x) itself. When this technique is applied to a function of a single variable, the process is called quadrature. Suppose we are required to evaluate the definite integral

I = ∫_a^b f(x) dx

First we approximate f (x) by a polynomial ϕ(x) of suitable degree; then we integrate ϕ(x) within the limits [a, b], i.e.,

∫_a^b f(x) dx ≈ ∫_a^b ϕ(x) dx

The difference

ε = ∫_a^b f(x) dx − ∫_a^b ϕ(x) dx

is called the error of approximation.

12.2 General Quadrature Formula for Equidistant Ordinates


Consider an integral

I = ∫_a^b f(x) dx    (38)

Let f (x) take the values f (x0) = f0, f (x0 + h) = f1, ..., f (x0 + nh) = fn when x = x0, x = x0 + h, ..., x = x0 + nh respectively.
To evaluate I, we replace f (x) by a suitable interpolation formula. Let the interval [a, b] be divided into n subintervals with the division points a = x0 < x0 + h < ... < x0 + nh = b, where h is the width of each subinterval. Approximating f (x) by Newton's forward interpolation formula, we can write the integral (38) as

I = ∫_{x0}^{x0+nh} f(x) dx = ∫_{x0}^{x0+nh} [f0 + p△f0 + (p(p − 1)/2!)△^2 f0 + ...] dx    (39)

Since p = (x − x0)/h, we have x = x0 + ph, so dx = h dp; x = x0 gives p = 0 and x = x0 + nh gives p = n. Expression (39) can be rewritten as

I = h ∫_0^n [f0 + p△f0 + ((p^2 − p)/2)△^2 f0 + ((p^3 − 3p^2 + 2p)/6)△^3 f0 + ((p^4 − 6p^3 + 11p^2 − 6p)/24)△^4 f0 + ...] dp

⇒ I = h[n f0 + (n^2/2)△f0 + (n^3/3 − n^2/2)(△^2 f0/2) + (n^4/4 − n^3 + n^2)(△^3 f0/6) + (n^5/5 − (3/2)n^4 + (11/3)n^3 − 3n^2)(△^4 f0/24) + ...]    (40)

Equation (40) is the general quadrature formula for equidistant ordinates (the basis of the Newton–Cotes formulas), from which we can generate a numerical integration formula by assigning a suitable positive integral value to n. Now we deduce four quadrature formulae, namely:

1. Trapezoidal rule

2. Simpson’s one-third rule

3. Simpson’s three-eighths rule

4. Weddle’s rule from the general quadrature formula (40).

12.3 Trapezoidal Rule


Setting n = 1 in the relation (40) and neglecting all differences above the first, we get

I1 = ∫_{x0}^{x0+h} f(x) dx = h[f0 + (1/2)△f0] = (h/2)(2f0 + f1 − f0) = (h/2)(f0 + f1)

for the first subinterval [x0, x0 + h]. Similarly, we get

I2 = ∫_{x0+h}^{x0+2h} f(x) dx = (h/2)(f1 + f2)

I3 = ∫_{x0+2h}^{x0+3h} f(x) dx = (h/2)(f2 + f3)

...

In = ∫_{x0+(n−1)h}^{x0+nh} f(x) dx = (h/2)(fn−1 + fn)

for the other subintervals. Adding I1, I2, I3, ..., In we get

I1 + I2 + I3 + ... + In = ∫_{x0}^{x0+nh} f(x) dx

= (h/2)(f0 + f1) + (h/2)(f1 + f2) + (h/2)(f2 + f3) + ... + (h/2)(fn−1 + fn)

⇒ I = ∫_a^b f(x) dx = (h/2)[(f0 + fn) + 2(f1 + f2 + f3 + ... + fn−1)]    (41)

The formula (41) is called the trapezoidal rule for numerical integration. The total error committed in this formula is given by

ε ≈ −(n h^3/12) f′′(ξ) = −((b − a)^3/(12 n^2)) f′′(ξ)

where a = x0 < ξ < xn = b.
NOTE: The trapezoidal rule can be applied for any number of subintervals, odd or even.
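The composite rule (41) is straightforward to program. The following sketch is illustrative (the function name `trapezoidal` is ours):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule (41) with n equal subintervals:
    (h/2)[(f0 + fn) + 2(f1 + ... + f_{n-1})]."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return (h / 2) * (f(a) + f(b) + 2 * inner)

# ∫_0^1 dx/(1 + x^2) with n = 5; the exact value is π/4 ≈ 0.7853982.
approx = trapezoidal(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 5)
```

Doubling n reduces the error by roughly a factor of four, as the (b − a)^3/(12 n^2) error term predicts.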

Example 12.1. Calculate the value of ∫_0^1 x/(1 + x) dx correct up to three significant figures, taking six intervals, by the trapezoidal rule.

Solution:
Here we have

f(x) = x/(1 + x), a = 0, b = 1, n = 6

⇒ h = (b − a)/n = (1 − 0)/6 = 1/6

x        0       1/6     2/6     3/6     4/6     5/6     1
y = f(x) 0.00000 0.14286 0.25000 0.33333 0.40000 0.45455 0.50000
fi       f0      f1      f2      f3      f4      f5      f6

The trapezoidal rule can be written as

I = (h/2)[(f0 + f6) + 2(f1 + f2 + f3 + f4 + f5)]

I = (1/12)[(0.00000 + 0.50000) + 2(0.14286 + 0.25000 + 0.33333 + 0.40000 + 0.45455)]

= 0.30512 ≃ 0.305

correct to three significant figures.
Example 12.2. Calculate the value of ∫_0^1 dx/(1 + x^2) correct up to five significant figures, taking five intervals, by the trapezoidal rule. Also compare it with the exact value.

Solution:
Here we have

f(x) = 1/(1 + x^2), a = 0, b = 1, n = 5

⇒ h = (b − a)/n = (1 − 0)/5 = 0.2

x        0.0      0.2      0.4      0.6      0.8      1.0
y = f(x) 1.000000 0.961538 0.862069 0.735294 0.609756 0.500000
fi       f0       f1       f2       f3       f4       f5

The trapezoidal rule can be written as

I = (h/2)[(f0 + f5) + 2(f1 + f2 + f3 + f4)]

I = (0.2/2)[(1.000000 + 0.500000) + 2(0.961538 + 0.862069 + 0.735294 + 0.609756)]

= 0.7837314 ≃ 0.78373

correct to five significant figures. The exact value is

I = ∫_0^1 dx/(1 + x^2) = [tan^(−1) x]_0^1 = tan^(−1) 1 − tan^(−1) 0 = π/4 = 0.7853982

Therefore the absolute error is 0.00167.
Example 12.3. Calculate the value of ∫_1^5 log10 x dx correct up to four decimal places, taking 8 subintervals, by the trapezoidal rule.

Solution:
Here we have

f(x) = log10 x, a = 1, b = 5, n = 8

⇒ h = (b − a)/n = (5 − 1)/8 = 0.5

x        1.0     1.5     2.0     2.5     3.0     3.5     4.0     4.5     5.0
y = f(x) 0.00000 0.17609 0.30103 0.39794 0.47712 0.54407 0.60206 0.65321 0.69897
fi       f0      f1      f2      f3      f4      f5      f6      f7      f8

The trapezoidal rule can be written as

I = (h/2)[(f0 + f8) + 2(f1 + f2 + f3 + f4 + f5 + f6 + f7)]

I = (0.5/2)[(0.00000 + 0.69897) + 2(0.17609 + 0.30103 + 0.39794 + 0.47712 + 0.54407 + 0.60206 + 0.65321)]

= 1.7505025 ≃ 1.7505

correct to four decimal places.

12.4 Simpson’s one-third Rule
Substituting n = 2 in the general quadrature formula (40) and neglecting the third and higher order differences, we get

I1 = ∫_{x0}^{x0+2h} f(x) dx = h[2f0 + 2△f0 + (8/3 − 2)(△^2 f0/2)]

= h[2f0 + 2(f1 − f0) + (1/3)(f2 − 2f1 + f0)]

⇒ I1 = (h/3)[f0 + 4f1 + f2]

Similarly

I2 = ∫_{x0+2h}^{x0+4h} f(x) dx = (h/3)[f2 + 4f3 + f4]

...

I_{n/2} = ∫_{x0+(n−2)h}^{x0+nh} f(x) dx = (h/3)[fn−2 + 4fn−1 + fn]

Adding I1, I2, ..., I_{n/2} we get

I1 + I2 + ... + I_{n/2} = ∫_{x0}^{x0+2h} f(x) dx + ∫_{x0+2h}^{x0+4h} f(x) dx + ... + ∫_{x0+(n−2)h}^{x0+nh} f(x) dx

= (h/3)[f0 + 4f1 + f2] + (h/3)[f2 + 4f3 + f4] + ... + (h/3)[fn−2 + 4fn−1 + fn]

= (h/3)[(f0 + fn) + 4(f1 + f3 + f5 + ... + fn−1) + 2(f2 + f4 + f6 + ... + fn−2)]

Hence

I = ∫_a^b f(x) dx = (h/3)[(f0 + fn) + 4(sum of odd ordinates) + 2(sum of even ordinates)]    (42)

The formula (42) is called Simpson's one-third rule. The total error committed in this formula is given by

ε ≈ −(n h^5/180) f^(iv)(ξ) = −((b − a)^5/(180 n^4)) f^(iv)(ξ)

where a = x0 < ξ < xn = b (for n subintervals of length h).
NOTE:

1. Here a = x0 and b = x0 + nh, so the formula may also be written as I = ∫_{x0}^{x0+nh} f(x) dx = (h/3)[(f0 + fn) + 4(f1 + f3 + ... + fn−1) + 2(f2 + f4 + ... + fn−2)].

2. Simpson's one-third rule can be applied only when the given interval [a, b] is subdivided into an even number of subintervals, each of width h, and within any two consecutive subintervals the interpolating polynomial ϕ(x) is of degree 2.
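Rule (42), with its even/odd weighting, can be sketched as follows (an illustrative helper, named by us):

```python
from math import exp

def simpson_one_third(f, a, b, n):
    """Composite Simpson's 1/3 rule (42); n must be even."""
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))
    even = sum(f(a + i * h) for i in range(2, n, 2))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

# ∫_0^0.6 e^x dx with n = 6; the exact value is e^0.6 − 1 ≈ 0.8221188.
approx = simpson_one_third(exp, 0.0, 0.6, 6)
```

The O(h^4) error term makes this far more accurate than the trapezoidal rule for the same number of ordinates, provided f has a continuous fourth derivative.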
Example 12.4. Calculate the value of ∫_0^0.6 e^x dx correct up to five significant figures, taking six intervals, by Simpson's one-third rule.

Solution:
Here we have

f(x) = e^x, a = 0, b = 0.6, n = 6

⇒ h = (b − a)/n = (0.6 − 0)/6 = 0.1

x        0.0     0.1     0.2     0.3     0.4     0.5     0.6
y = f(x) 1.00000 1.10517 1.22140 1.34986 1.49182 1.64872 1.82212
fi       f0      f1      f2      f3      f4      f5      f6

Simpson's one-third rule can be written as

I = (h/3)[(f0 + f6) + 4(f1 + f3 + f5) + 2(f2 + f4)]

I = (0.1/3)[(1.00000 + 1.82212) + 4(1.10517 + 1.34986 + 1.64872) + 2(1.22140 + 1.49182)]

= (0.1/3)[2.82212 + 4(4.10375) + 2(2.71322)]

= 0.8221186 ≃ 0.82212

correct to five significant figures.

Example 12.6. The velocity of a train which starts from rest is given by the following table, the time being reckoned in minutes from the start and the speed in km/hour. Estimate, by Simpson's rule, the distance run by the train in 20 minutes.

t (minutes) 2  4    6  8    10   12   14   16 18  20
v (km/hr)   16 28.8 40 46.4 51.2 32.0 17.6 8  3.2 0

Solution:
Since v = ds/dt, we have ds = v dt, and integrating,

s = ∫_0^20 v dt
The train starts from rest, therefore the velocity is v = 0 when t = 0. The given table of velocities can be written
t 0 2 4 6 8 10 12 14 16 18 20
v 0 16 28.8 40 46.4 51.2 32.0 17.6 8 3.2 0
fi f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10

h = 2/60 hrs = 1/30 hrs
Simpson's one-third rule gives

s = ∫_0^20 v dt = (h/3)[(f0 + f10) + 4(f1 + f3 + f5 + f7 + f9) + 2(f2 + f4 + f6 + f8)]

= (1/(30 × 3))[(0 + 0) + 4(16 + 40 + 51.2 + 17.6 + 3.2) + 2(28.8 + 46.4 + 32.0 + 8)]

= (1/90)[0 + 4 × 128 + 2 × 115.2]

= 8.25 km

Therefore the distance run by the train in 20 minutes is 8.25 km.

12.5 Simpson’s three-eighths Rule


We assume that within any three consecutive subintervals of width h, the interpolating polynomial ϕ(x) approximating f (x) is of degree 3. Hence, substituting n = 3 in the general quadrature formula and neglecting all differences above △^3, we get

I1 = ∫_{x0}^{x0+3h} f(x) dx = h[3f0 + (9/2)△f0 + (9 − 9/2)(△^2 f0/2) + (81/4 − 27 + 9)(△^3 f0/6)]

= h[3f0 + (9/2)(f1 − f0) + (9/4)(f2 − 2f1 + f0) + (3/8)(f3 − 3f2 + 3f1 − f0)]

= (3h/8)[f0 + 3f1 + 3f2 + f3]

Similarly

I2 = ∫_{x0+3h}^{x0+6h} f(x) dx = (3h/8)[f3 + 3f4 + 3f5 + f6]

I3 = ∫_{x0+6h}^{x0+9h} f(x) dx = (3h/8)[f6 + 3f7 + 3f8 + f9]

...

I_{n/3} = ∫_{x0+(n−3)h}^{x0+nh} f(x) dx = (3h/8)[fn−3 + 3fn−2 + 3fn−1 + fn]

Adding I1, I2, I3, ..., I_{n/3} we get

I = (3h/8)[f0 + 3f1 + 3f2 + f3] + (3h/8)[f3 + 3f4 + 3f5 + f6] + ... + (3h/8)[fn−3 + 3fn−2 + 3fn−1 + fn]

Therefore,

I = ∫_a^b f(x) dx = (3h/8)[(f0 + fn) + 3(f1 + f2 + f4 + f5 + ... + fn−2 + fn−1) + 2(f3 + f6 + ... + fn−3)]    (43)

The formula (43) is called Simpson's three-eighths rule. The error committed in this formula is given by

ε ≈ −(n h^5/80) f^(iv)(ξ)

where a = x0 < ξ < xn = b (for n subintervals of length h).
NOTE: Simpson's three-eighths rule can be applied when the range [a, b] is divided into a number of subintervals which is a multiple of 3.
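The 3/8 weighting of rule (43) — interior ordinates at indices divisible by 3 get weight 2, the rest get weight 3 — can be sketched as follows (illustrative helper, named by us):

```python
def simpson_three_eighths(f, a, b, n):
    """Composite Simpson's 3/8 rule (43); n must be a multiple of 3.
    Interior ordinates whose index is a multiple of 3 get weight 2,
    all other interior ordinates get weight 3."""
    if n % 3:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)
    return (3 * h / 8) * total

# ∫_0^1 dx/(1 + x^2) with n = 6 (seven ordinates); the exact value is π/4.
approx = simpson_three_eighths(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 6)
```

Despite using a cubic on each block, the rule has the same O(h^4) order of accuracy as the one-third rule; its main use is handling subinterval counts divisible by 3.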

Example 12.7. Evaluate ∫_0^1 dx/(1 + x^2) by taking seven ordinates.

Solution:
We have n + 1 = 7 ⇒ n = 6. The points of division are

0, 1/6, 2/6, 3/6, 4/6, 5/6, 1

x                      0          1/6        2/6        3/6        4/6        5/6        1
y = f(x) = 1/(1+x^2)   1.0000000  0.9729730  0.9000000  0.8000000  0.6923077  0.5901639  0.5000000
fi                     f0         f1         f2         f3         f4         f5         f6

Here h = 1/6; Simpson's three-eighths rule is

I = (3h/8)[(f0 + f6) + 3(f1 + f2 + f4 + f5) + 2f3]

= (3/(6 × 8))[(1 + 0.5000000) + 3(0.9729730 + 0.9000000 + 0.6923077 + 0.5901639) + 2(0.8000000)]

= (1/16)[1.5000000 + 9.4663338 + 1.6000000]

= 0.7853959.
Example 12.8. Calculate ∫_0^{π/2} e^{sin x} dx, correct to four decimal places.

Solution:
We divide the range into three equal subintervals with the division points

x0 = 0, x1 = π/6, x2 = π/3, x3 = π/2

where h = π/6. The table of values of the function is:

x             0    π/6      π/3      π/2
y = e^{sin x} 1    1.64872  2.37744  2.71828
fi            f0   f1       f2       f3

By Simpson's three-eighths rule we get

I = ∫_0^{π/2} e^{sin x} dx = (3h/8)[(f0 + f3) + 3(f1 + f2)]

= (3π/(6 × 8))[(1 + 2.71828) + 3(1.64872 + 2.37744)]

= (π/16)[3.71828 + 12.07848]

= 3.1017

12.6 Weddle’s Rule


Here we assume that, within any six consecutive subintervals of width h each, the interpolating polynomial approximating f (x) will be of degree 6. Substituting n = 6 in the general quadrature formula given by expression (40) and neglecting all differences above △⁶, we get

I1 = ∫_{x0}^{x0+6h} f (x)dx = h[6f0 + 18△f0 + 27△²f0 + 24△³f0 + (123/10)△⁴f0 + (33/10)△⁵f0 + (41/140)△⁶f0 ]

Since

3/10 − 41/140 = 1/140,

we take the coefficient of △⁶f0 as 3/10, so that the error committed is (1/140)h△⁶f0 , and we write

I1 = ∫_{x0}^{x0+6h} f (x)dx = (3h/10)[f0 + 5f1 + f2 + 6f3 + f4 + 5f5 + f6 ]
Similarly

I2 = ∫_{x0+6h}^{x0+12h} f (x)dx = (3h/10)[f6 + 5f7 + f8 + 6f9 + f10 + 5f11 + f12 ]
...

...

...

I_{n/6} = ∫_{x0+(n−6)h}^{x0+nh} f (x)dx = (3h/10)[fn−6 + 5fn−5 + fn−4 + 6fn−3 + fn−2 + 5fn−1 + fn ]

Adding I1 , I2 , ..., I_{n/6} , we get

I = ∫_{x0}^{x0+nh} f (x)dx = (3h/10)[f0 + 5f1 + f2 + 6f3 + f4 + 5f5 + 2f6 + 5f7 + f8 + 6f9 + f10 + 5f11 + 2f12 + ... + 2fn−6 + 5fn−5 + fn−4 + 6fn−3 + fn−2 + 5fn−1 + fn ]

= (3h/10)[(f0 + fn ) + (f2 + f4 + f8 + f10 + f14 + f16 + ... + fn−4 + fn−2 ) + 5(f1 + f5 + f7 + f11 + ... + fn−5 + fn−1 ) + 6(f3 + f9 + f15 + ... + fn−3 ) + 2(f6 + f12 + ... + fn−6 )]
NOTE:

1. Weddle's rule requires at least seven consecutive equispaced ordinates within the given interval (a, b).

2. It is more accurate than the Trapezoidal and Simpson's rules.

3. If f (x) is a polynomial of degree 5 or lower, Weddle's rule gives an exact result.
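A minimal Python sketch of the composite rule (the function name `weddle` is ours, and this is an illustration rather than a definitive implementation). The exactness property in point 3 can be checked on f (x) = x⁵:

```python
def weddle(f, a, b, n):
    """Composite Weddle's rule; n must be a multiple of 6."""
    if n % 6 != 0:
        raise ValueError("number of subintervals n must be a multiple of 6")
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    total = 0.0
    for p in range(0, n, 6):  # one panel per six subintervals
        w = (1, 5, 1, 6, 1, 5, 1)  # interior panel boundaries get 1 + 1 = 2
        total += sum(wj * y[p + j] for j, wj in enumerate(w))
    return 3 * h / 10 * total

# Exact for a degree-5 polynomial: integral of x^5 over [0, 1] is 1/6
print(weddle(lambda x: x ** 5, 0.0, 1.0, 6))
```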


Example 12.9. Compute the integral ∫_0^{π/2} √(1 − 0.162 sin²ϕ) dϕ by Weddle's rule.

Solution:

y = f (ϕ) = √(1 − 0.162 sin²ϕ),  a = 0,  b = π/2

Taking n = 12 we get

h = (b − a)/n = (π/2 − 0)/12 = π/24
h 12 24
ϕ         y = f (ϕ)   fi
0         1.000000    f0
π/24      0.998619    f1
2π/24     0.994559    f2
3π/24     0.988067    f3
4π/24     0.979541    f4
5π/24     0.969518    f5
6π/24     0.958645    f6
7π/24     0.947647    f7
8π/24     0.937283    f8
9π/24     0.928291    f9
10π/24    0.921332    f10
11π/24    0.916930    f11
12π/24    0.915423    f12

By Weddle's rule we have

I = ∫_0^{π/2} √(1 − 0.162 sin²ϕ) dϕ

= (3h/10)[(f0 + f12 ) + 5(f1 + f5 + f7 + f11 ) + (f2 + f4 + f8 + f10 ) + 6(f3 + f9 ) + 2f6 ]

= (3π/240)[(1.000000 + 0.915423) + 5(0.998619 + 0.969518 + 0.947647 + 0.916930) + (0.994559 + 0.979541 + 0.937283 + 0.921332) + 6(0.988067 + 0.928291) + 2(0.958645)]

= (π/80)(38.327146)

= 1.505104
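The tabulated computation can be verified with a short Python sketch (illustrative only):

```python
import math

# Weddle's rule with n = 12 (two panels of six subintervals each)
f = lambda p: math.sqrt(1 - 0.162 * math.sin(p) ** 2)
h = math.pi / 24
y = [f(i * h) for i in range(13)]             # f0, ..., f12
w = (1, 5, 1, 6, 1, 5, 2, 5, 1, 6, 1, 5, 1)   # composite Weddle weights; f6 counted twice
I = 3 * h / 10 * sum(wi * yi for wi, yi in zip(w, y))
print(I)  # ≈ 1.5051
```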
Example 12.10. Find the value of ∫_4^{5.2} loge x dx by Weddle's rule by taking 6 subintervals.

Solution:

Here h = (5.2 − 4)/6 = 0.2

x          4.0      4.2      4.4      4.6      4.8      5.0      5.2
y = f (x)  1.3863   1.4351   1.4816   1.5261   1.5686   1.6094   1.6487

Weddle's rule is

I = (3h/10)[f0 + 5f1 + f2 + 6f3 + f4 + 5f5 + f6 ]

= (3 × 0.2/10)[1.3863 + 7.1755 + 1.4816 + 9.1566 + 1.5686 + 8.0470 + 1.6487]

= 0.06 × 30.4643

= 1.827858
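This example can likewise be checked in a few lines of Python; since ∫ ln x dx = x ln x − x, the antiderivative gives an exact reference value (sketch only, not part of the notes):

```python
import math

# Weddle's rule on [4, 5.2] with six subintervals, h = 0.2
h = 0.2
y = [math.log(4.0 + i * h) for i in range(7)]
I = 3 * h / 10 * (y[0] + 5 * y[1] + y[2] + 6 * y[3] + y[4] + 5 * y[5] + y[6])

# Exact value from the antiderivative x ln x - x
exact = (5.2 * math.log(5.2) - 5.2) - (4.0 * math.log(4.0) - 4.0)
print(I, exact)  # both ≈ 1.82785
```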

12.7 Newton-Cotes Formula


Coming soon...!

12.8 Romberg Integration


Coming soon...!

12.9 Applications
Coming soon...!
