Numerical I Module-1
Module on Numerical Analysis I
Module Introduction
This module consists of six chapters. The first chapter deals with one of the central concepts in numerical analysis: error analysis. In it we look briefly at the terms and concepts related to numerical errors and discuss their sources, types, and measures, focusing on how to quantify and control them. The second chapter deals with computing approximate solutions of non-linear algebraic equations that are complicated or impossible to solve by analytical methods, using various numerical methods; it focuses on how to minimize and control the approximation errors and presents a computer algorithm for each method. Like the previous chapter, the third chapter shows how to compute approximate solutions of systems of both linear and non-linear equations using different numerical methods, treating the two broad categories of methods, Direct Methods and Indirect (or Iterative) Methods, and explaining the merits and demerits of each.
The last three chapters focus on one of the most important concepts in numerical analysis: the approximation of complicated functions by simpler functions, such as polynomials, trigonometric functions, and rational functions, that are easy to manipulate mathematically, so that the operations intended for the complicated function can instead be carried out on the simpler approximating function. The emphasis is on one of the most widely used approximation techniques, polynomial interpolation, and its applications. Briefly, chapter four discusses the difference operators that are important in developing interpolating polynomials, chapter five develops the various polynomial interpolation techniques, and chapter six explains the applications of polynomial interpolation, with special focus on its two basic uses: numerical integration and differentiation.
Module Objectives:
Table of content
CHAPTER ONE ............................................................................................................................. 6
1.1. Introduction ................................................................................................................................... 6
1.2. Errors............................................................................................................................................. 7
1.2.1. Sources of Errors................................................................................................................... 8
1.2.2. Measuring Errors................................................................................................................... 8
1.2.3. Classification of Errors ....................................................................................................... 16
1.3. Computer Representation of Numbers ........................................................................................ 19
1.3.1. Fixed Point Representation ................................................................................................. 19
1.3.2. Floating Point Representation ............................................................................................. 22
1.4. Propagation of Errors .................................................................................................................. 27
1.5. Stability of Algorithms and Conditioning numbers .................................................................... 30
1.5.1. Stability of Algorithms........................................................................................................ 31
1.5.2. Conditioning or Condition of a Problem............................................................................. 32
CHAPTER TWO .......................................................................................................................... 36
2. SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS ..................... 36
2.1. Introduction ................................................................................................................................. 36
2.2. Preliminaries: .............................................................................................................................. 37
2.2.5. The secant Method: ................................................................................................................... 56
CHAPTER THREE ...................................................................................................................... 69
3.2. Exact Method .............................................................................................................................. 70
3.2.1. Gaussian Elimination .......................................................................................................... 71
3.2.2. Gaussian Elimination Method ............................................................................................. 75
3.2.3. The backward or forward substitution method formula ...................................................... 80
3.2.4. Gaussian Elimination with partial pivoting......................................................................... 81
3.2.5. Gauss-Jordan Method ......................................................................................................... 82
3.2.6. Matrix Inversion Using Jordan Elimination........................................................................ 86
3.2.7. LU Matrix Decomposition .................................................................................................. 87
3.3. Indirect/Iterative/ methods of solving systems of linear equation ........................................ 91
3.3.1. Gauss-Seidel Method .......................................................................................................... 91
UNIT FOUR ............................................................................................................................. 100
4.1. INTRODUCTION................................................................................................................... 100
4.2. FORWARD DIFFERENCE OPERATOR ........................................................................... 100
4.2.1. The Operator E .................................................................................................................. 103
4.2.2. Relation between the Operator E and ∆ .................................................................................. 104
4.3. Backward Differences ............................................................................................................. 106
4.4. Central Difference ................................................................................................... 109
UNIT FIVE ................................................................................................................................ 111
5. INTERPOLATION ............................................................................................................. 111
5.1. Introduction .................................................................................................................................. 111
5.2. Interpolation with Evenly Spaced Points ................................................................................... 112
5.2.3 Gauss interpolating polynomial ............................................................................................... 121
5.3 Interpolation with Unevenly Spaced Points ................................................................................ 131
5.3.2 Newton’s Divided Difference Interpolation............................................................................. 138
CHAPTER SIX ......................................................................................................................... 146
6.1 Introduction ................................................................................................................................... 147
6.2 Numerical Differentiation ............................................................................................................ 147
6.2.1 Formulae for derivatives .......................................................................................................... 148
6.2.2 Maxima and Minima of a tabulated function ........................................................................... 155
6.3 Numerical Integration .................................................................................................................. 158
6.3.1 Newton-Cotes quadrature formula ........................................................................................... 159
6.3.2 Errors in quadrature formulae .................................................................................................. 167
6.3.3 Romberg's Method ................................................................................................................... 169
6.3.4 Euler-Maclaurin formula.......................................................................................................... 172
6.4 Method of undetermined coefficients .......................................................................................... 175
6.4.1 Differentiation formulae .......................................................................................................... 175
6.4.2 Integration formulae................................................................................................................. 176
6.5 Numerical Double Integration ..................................................................................................... 178
CHAPTER ONE
1.1. Introduction
Numerical Analysis is the area of mathematics that creates, analyzes, and implements algorithms
for solving numerically the problems of continuous mathematics. Such problems generally
originate from real-world applications of algebra, geometry, and calculus, and they involve
variables that vary continuously; such problems occur throughout the natural sciences,
social sciences, engineering, medicine, and business.
During the past half century, the growth in power and availability of digital computers has led to
an increasing use of realistic mathematical models in science and engineering, and numerical
analysis of increasing sophistication has been needed to solve these more detailed mathematical
models of the real world problems.
Thus, to solve a real-life problem using numerical methods, the following three steps are
usually followed:
Convert the real-life (or physical) problem into a mathematical model.
Apply an appropriate numerical method that can solve the mathematical model, and
develop an algorithm for the method.
Finally, implement the algorithm on a computational tool (most commonly a
computer) to compute the required result.
More often than not, all of the above steps are exposed to error due to various assumptions and
limitations. Most numerical methods therefore give answers that are only approximations to the
desired true solution, and it is important to understand and, where possible, to estimate or
bound the resulting error. The study of errors is thus a central concern of numerical analysis.
This chapter examines the various sources and types of errors that may occur in a problem. The
representation of numbers in computers is examined, along with the error in computer arithmetic.
General results on the propagation of errors in a calculation are also considered. Finally, the
concepts of stability of algorithms and conditioning of problems in numerical methods are
introduced and illustrated.
After reading this chapter, students will be able to:
define and explain numerical analysis
describe the errors that arise in numerical methods
identify, measure, and control errors in numerical calculations
differentiate the various sources and types of errors
represent numbers on computers and calculate the maximum error bounds
corresponding to the representation
understand and control the propagation of errors in numerical calculations
define and understand the algorithms of numerical methods
assess the stability of algorithms
identify well-conditioned and ill-conditioned problems by calculating their condition
numbers.
1.2. Errors
Whenever we solve a problem using numerical analysis, errors arise during the
calculations. To be able to deal with the issue of errors, we need to understand where they
come from, how to measure them, and how they are classified.
1.2.1. Sources of Errors
1.2.2. Measuring Errors
Suppose x is the exact value and x̄ is the approximate value of x. The error incurred by
such an approximation can be measured in one of the following ways, depending on the accuracy
required.
a) True Error
True error, denoted by Et, is the difference between the true value (also called the exact value)
and the approximate value.
Example 1.1
The derivative of a function f(x) at a point can be approximated by
f'(x) ≈ (f(x + h) − f(x)) / h.
For f(x) = 7e^(0.5x), approximate f'(2) using a step size h = 0.3, and find the true error.
Solution
Using the approximate expression with x = 2 and h = 0.3,
f'(2) ≈ (f(2.3) − f(2)) / 0.3
= (22.107 − 19.028) / 0.3
= 10.265
The true value of f'(2) can be found using calculus.
f(x) = 7e^(0.5x)
f'(x) = 7 × 0.5 e^(0.5x) = 3.5e^(0.5x)
So the true value of f'(2) is
f'(2) = 3.5e^(0.5(2)) = 9.5140
The true error is calculated as
Et = True value – Approximate value
= 9.5140 − 10.265
= −0.75061
b) Absolute True Error
Absolute true error, denoted by E_A, is the absolute value of the true error, that means:
E_A = |Et| = |True value − Approximate value|
Since, when we talk about error, the main focus lies on the magnitude of the error rather than its
sign, absolute errors are used in place of actual errors.
In Example 1.1 above the absolute true error becomes
E_A = |−0.75061| = 0.75061
The magnitude of the true error does not by itself show how bad the error is. An absolute true error of
E_A = 0.75061 may seem small; but if the function given in Example 1.1 were
f(x) = 7 × 10⁻⁶ e^(0.5x), the absolute true error would be E_A = 0.75061 × 10⁻⁶. This value of
absolute true error is much smaller, even though the two problems are similar in that they use the
same value of the function argument, x = 2, and the same step size, h = 0.3. This brings us to the
definition of relative true error.
c) Relative True Error
Relative true error, denoted by ∈t, is defined as the ratio of the true error to the true value:
∈t = True Error / True Value = (True value − Approximate value) / True value
Example 1.2
For the problem in Example 1.1 above, find the relative true error at x = 2.
Solution
From Example 1.1,
Et = True value – Approximate value
= 9.5140 − 10.265
= −0.75061
Relative true error is calculated as
∈t = True Error / True Value
= −0.75061 / 9.5140
= −0.078895
Relative true errors are also presented as percentages. For this example,
∈t = −0.078895 × 100% = −7.8895%
Absolute relative true errors may also need to be calculated. In such cases,
|∈t| = |−0.078895|
= 0.078895
= 7.8895%
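The true and relative true errors of Examples 1.1 and 1.2 can be verified with a short script (a Python sketch, not part of the original module; the function and step size are those of Example 1.1):

```python
import math

def f(x):
    return 7 * math.exp(0.5 * x)

# Forward-difference approximation of f'(2) with h = 0.3 (Example 1.1)
h = 0.3
approx = (f(2 + h) - f(2)) / h    # ≈ 10.265

# Exact derivative from calculus: f'(x) = 3.5 e^(0.5x)
true = 3.5 * math.exp(0.5 * 2)    # ≈ 9.5140

Et = true - approx                # true error ≈ -0.75061
rel_t = Et / true                 # relative true error ≈ -0.078895
print(approx, true, Et, rel_t)
```

Running it reproduces the values of both examples to the rounding shown in the text.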
d) Approximate Error
In the previous section, we discussed how to calculate true errors. Such errors can be calculated
only if the true values are known. This is useful, for example, when one is checking whether a
program is in working order against cases where the true error is known. Mostly, however, we do
not have the luxury of knowing true values, for why would one want to find an approximate value
if the true value were known? So when we solve a problem numerically, we only have access to
approximate values, and we need to know how to quantify the error in such cases.
Approximate error, denoted by Ea, is defined as the difference between the present
approximation and the previous approximation:
Ea = Present Approximation – Previous Approximation
Example 1.3
For the problem in Example 1.1 above, find the following:
a) f'(2) using h = 0.3
b) f'(2) using h = 0.15
c) the approximate error for the value of f'(2) for part (b)
Solution
a) The approximate expression for the derivative of a function is
f'(x) ≈ (f(x + h) − f(x)) / h.
For x = 2 and h = 0.3,
f'(2) ≈ (f(2 + 0.3) − f(2)) / 0.3
= (f(2.3) − f(2)) / 0.3
= (22.107 − 19.028) / 0.3
= 10.265
b) For x = 2 and h = 0.15,
f'(2) ≈ (f(2 + 0.15) − f(2)) / 0.15
= (f(2.15) − f(2)) / 0.15
= 9.8799
c) The approximate error is
Ea = Present Approximation – Previous Approximation
= 9.8799 − 10.265
= −0.38474
The magnitude of the approximate error does not by itself show how bad the error is. But if the
function given in Example 1.1 were f(x) = 7 × 10⁻⁶ e^(0.5x), the approximate error in
calculating f'(2) with h = 0.15 would be Ea = −0.38474 × 10⁻⁶. This value of approximate error
is much smaller, even though the two problems are similar in that they use the same value of the
function argument, x = 2, and the step sizes h = 0.15 and h = 0.3. This brings us to the definition
of relative approximate error.
Relative approximate error, denoted by ∈a, is defined as the ratio of the approximate error to the
present approximation:
∈a = Approximate Error / Present Approximation
Example 1.4
For the problem in Example 1.1 above, find the relative approximate error in calculating f'(2)
using values from h = 0.3 and h = 0.15.
Solution
From Example 1.3, the approximate value of f'(2) is 10.265 using h = 0.3 and 9.8799
using h = 0.15.
Ea = Present Approximation – Previous Approximation
= 9.8799 − 10.265
= −0.38474
The relative approximate error is calculated as
∈a = Approximate Error / Present Approximation
= −0.38474 / 9.8799
= −0.038942
Relative approximate errors are also presented as percentages. For this example,
∈a = −0.038942 × 100% = −3.8942%
Absolute relative approximate errors may also need to be calculated. In this example,
|∈a| = |−0.038942|
= 0.038942 or 3.8942%
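Examples 1.3 and 1.4 can likewise be checked numerically (a Python sketch; the function and step sizes are those of Example 1.1):

```python
import math

def f(x):
    return 7 * math.exp(0.5 * x)

def forward_diff(x, h):
    # f'(x) ≈ (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

previous = forward_diff(2, 0.30)   # ≈ 10.265
present = forward_diff(2, 0.15)    # ≈ 9.8799

Ea = present - previous            # approximate error ≈ -0.38474
rel_a = Ea / present               # relative approximate error ≈ -0.038942
print(Ea, rel_a)
```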
e) The Limiting Errors
In a numerical method that uses iterative methods, a user can calculate the relative approximate
error ∈a at the end of each iteration. The user may pre-specify a minimum acceptable tolerance,
called the pre-specified tolerance ∈s. If the absolute relative approximate error is less than
or equal to the pre-specified tolerance, that is, |∈a| ≤ ∈s, then the acceptable error has been
reached and no more iterations are required.
Alternatively, one may pre-specify how many significant digits should be correct in
the answer. In that case, if one wants at least m significant digits to be correct, then
the absolute relative approximate error must satisfy |∈a| ≤ 0.5 × 10^(2−m) %.
Example 1.5
If one chooses 6 terms of the Maclaurin series for e^x to calculate e^0.7, how many significant
digits can you trust in the solution? Find your answer without knowing or using the exact
answer.
Solution: e^x = 1 + x + x²/2! + x³/3! + ⋯
Using 6 terms, we get the current approximation as
e^0.7 ≈ 1 + 0.7 + 0.7²/2! + 0.7³/3! + 0.7⁴/4! + 0.7⁵/5!
= 2.0136
Using 5 terms, we get the previous approximation as
e^0.7 ≈ 1 + 0.7 + 0.7²/2! + 0.7³/3! + 0.7⁴/4!
= 2.0122
The percentage absolute relative approximate error is
|∈a| = |(2.0136 − 2.0122) / 2.0136| × 100
= 0.069527%
Since |∈a| ≤ 0.5 × 10^(2−2) %, at least 2 significant digits are correct in the answer
e^0.7 ≈ 2.0136
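The check in Example 1.5 can be automated: sum the Maclaurin terms, compute |∈a|, and read off the largest m satisfying |∈a| ≤ 0.5 × 10^(2−m) % (a Python sketch):

```python
import math

def maclaurin_exp(x, n):
    # Sum of the first n terms of the Maclaurin series for e^x
    return sum(x**k / math.factorial(k) for k in range(n))

present = maclaurin_exp(0.7, 6)    # ≈ 2.0136
previous = maclaurin_exp(0.7, 5)   # ≈ 2.0122
ea = abs((present - previous) / present) * 100   # ≈ 0.0696 %

# Largest m with |ea| <= 0.5 * 10**(2 - m) percent
m = 0
while ea <= 0.5 * 10 ** (2 - (m + 1)):
    m += 1
print(m)  # at least this many significant digits are trustworthy
```

For this data the loop stops at m = 2, matching the conclusion of the example.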
Q: But what do you mean by significant digits?
A: Significant digits indicate the confidence one has in a reported number. For
example, if someone asked me what the population of my country is, I would respond, “The
population of Ethiopia is around 80 million”. But if someone was going to give me 100 birr for
every citizen of the country, I would have to get an exact count. That count was
80,079,587 in the year 2007 G.C. So you can see that in the statement that the population is 80
million there is only one significant digit, that is, 8, while in the statement that the population
is 80,079,587, there are eight significant digits. So, how do we differentiate the number of digits
correct in 80,000,000 and 80,079,587? For that, one may use scientific notation. For our
data we write
80,000,000 = 8 × 10⁷
80,079,587 = 8.0079587 × 10⁷
to signify the correct number of significant digits.
Example 1.6
Give some examples showing the number of significant digits.
Solution
a) 0.0459 has three significant digits
b) 4.590 has four significant digits
c) 4008 has four significant digits
d) 4008.0 has five significant digits
e) 1.079 × 10³ has four significant digits
f) 1.0790 × 10³ has five significant digits
g) 1.07900 × 10³ has six significant digits
Activity 1.1:
1. Let x = 23.457609 be the exact value of a number and let x̄ = 23.458 be its approximate
value, then find
a) the true error
b) the absolute true error
c) the relative true error
d) both the percentage absolute and relative errors
2. For the question in Example 1.1 above, compute the approximate value of f'(2) for
h = 0.05 and h = 0.001 and then find
a) the approximate absolute error
b) the approximate relative error
c) and compare these errors with their corresponding errors in Examples 1.3 and 1.4
3. Let x = 20 and y = 1000 be the exact measurements and let x̄ = 15 and
ȳ = 1005 be their respective approximations, then compare the significance of the two
errors by using
a) absolute true error
b) absolute relative error
c) percentage absolute error and percentage relative error
1.2.3. Classification of Errors
The errors induced by the different sources mentioned above are broadly classified into the
following three types of errors:
1) Inherent Errors: errors which occur in the development of the mathematical model
for a given physical problem. These types of errors are mostly unavoidable, and they are
caused by:
the approximate values of the initial data;
the various assumptions made in the model;
the limitations of the computing aids.
Even though such errors are beyond the control of the numerical analyst, they can be
minimized by selecting:
better initial data;
a better mathematical model to represent the problem;
computing aids of higher precision.
2) Truncation (or Numerical) Errors
Truncation error is defined as the error caused by truncating (or cutting) an infinite
mathematical procedure into a finite one. For example, the Maclaurin series for e^x is given as
e^x = 1 + x + x²/2! + x³/3! + ⋯
This series has an infinite number of terms, but when using this series to calculate e^x, only a
finite number of terms can be used. For example, if one uses three terms to calculate e^x, then
e^x ≈ 1 + x + x²/2!.
The truncation error for such an approximation is
Truncation error = e^x − (1 + x + x²/2!)
= x³/3! + x⁴/4! + ⋯
But how can truncation error be controlled in this example? We can use the concept of relative
approximate error to see how many terms need to be considered. Assume that one is calculating
e^1.2 using the Maclaurin series, and that one wants the absolute relative approximate error to be
less than 1%. In Table 1, we show the value of e^1.2, the approximate error, and the absolute
relative approximate error as a function of the number of terms, n.

n   e^1.2     Ea         |∈a| %
1   1         −          −
2   2.2       1.2        54.546
3   2.92      0.72       24.658
4   3.208     0.288      8.9776
5   3.2944    0.0864     2.6226
6   3.3151    0.020736   0.62550
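Table 1 can be reproduced with a few lines of code (a Python sketch; the values match the table to the rounding shown):

```python
import math

x = 1.2
previous = None
for n in range(1, 7):
    # Partial sum of the first n terms of the Maclaurin series for e^x
    approx = sum(x**k / math.factorial(k) for k in range(n))
    if previous is None:
        print(n, approx, "-", "-")
    else:
        Ea = approx - previous                # approximate error
        ea = abs(Ea / approx) * 100           # absolute relative approx. error, %
        print(n, round(approx, 4), round(Ea, 6), round(ea, 5))
    previous = approx
```

At n = 6 the absolute relative approximate error drops below 1%, so six terms suffice for the stated tolerance.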
3) Rounding Error
A computer can only represent a number approximately. For example, a number like 1/3 may be
represented as 0.333333 on a PC. The round-off error in this case is
1/3 − 0.333333 = 0.00000033…
There are also numbers that cannot be represented exactly. For example, π and √2 are
numbers that need to be approximated in computer calculations.
Q: What problems can be created by round off errors?
A: Twenty-eight Americans were killed on February 25, 1991, when an Iraqi Scud hit the Army
barracks in Dhahran, Saudi Arabia. The Patriot defense system had failed to track and intercept
the Scud. What was the cause of this failure?
The Patriot defense system consists of an electronic detection device called the range gate. It
calculates the area in the air space where it should look for a Scud. To find out where it should
aim next, it calculates the velocity of the Scud and the last time the radar detected the Scud.
Time is saved in a register that is 24 bits long. Since the internal clock of the system
ticks every one-tenth of a second, 1/10 is stored in the 24-bit register as
0.00011001100110011001100.
However, this is not an exact representation; in fact, it would need an infinite number of bits to
represent 1/10 exactly. The error of the representation, in decimal format, is
1/10 − (0×2⁻¹ + 0×2⁻² + 0×2⁻³ + 1×2⁻⁴ + ⋯ + 1×2⁻²² + 0×2⁻²³ + 0×2⁻²⁴)
≈ 9.537 × 10⁻⁸
The battery was on for 100 consecutive hours, hence causing an inaccuracy of
9.537 × 10⁻⁸ × (100 hr × 3600 s/hr) / (0.1 s)
≈ 0.3433 s
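The arithmetic above can be reproduced directly. The register pattern shown keeps 23 fractional bits of 1/10 (an interpretation of the 24-bit register layout, stated here as an assumption); chopping there yields the 9.537 × 10⁻⁸ error (a Python sketch):

```python
frac_bits = 23
# Chop 1/10 after 23 fractional binary bits, as in the register shown above
stored = int(0.1 * 2**frac_bits) / 2**frac_bits

err = 0.1 - stored          # representation error per tick, ≈ 9.537e-08
ticks = 100 * 3600 / 0.1    # number of tenth-of-second ticks in 100 hours
drift = err * ticks         # accumulated clock error, ≈ 0.3433 s
print(err, drift)
```

A third of a second of clock error, multiplied by the Scud's speed, is what moved the range gate far enough to lose the target.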
The shift calculated in the range gate due to the 0.3433 s error was 687 m. For the Patriot
missile defense system, the target is considered out of range if the shift is more than
137 m.
1.3. Computer Representation of Numbers
After completing this section's lesson, students are expected to:
identify how to represent numbers on a computer using different techniques
represent numbers using both fixed point and floating point representation
understand round-off errors
measure the maximum error bounds in number representation
A computer is in general built to handle pieces of information of a fixed size called a word. The
number of digits in a word (usually binary) is called the Word-Length of the computer. Typically
word-lengths are 16, 32, 48, or 64 bits. A real or integer number is usually stored in a word.
Integers can be represented exactly, provided that the word-length suffices to store all the digits
of the representation. But it is not as easy to store real numbers: even simple rational numbers
such as 1/3 = 0.3333… cannot be represented exactly within a word.
Since there is a fixed space (word-length) of memory in the digital computer, a given number in
a certain base must be represented in a finite space in the memory of the computer. Thus, all
digits of a given number may not be represented in the memory. There are two conventional
ways for the representation of data in the word-length.
1.3.1. Fixed Point Representation
In the first generation of computers, calculations were made in a Fixed Point number system;
that is, real numbers were represented with a fixed number t of binary digits. If the word-length
of the computer is s + 1 bits (including the sign bit), then only numbers in the interval
I = [−2^s, 2^s] are permitted.
Suppose the number to be represented has n digits. In the fixed point representation system
the n digits are subdivided into n_I and n_F, where n_I digits are reserved for the integral part
and n_F digits for the fractional part.
Integral part (n_I digits) . Fractional part (n_F digits)
n = n_I + n_F
Consider an old time cash register that would ring any purchase between 0 and 999.99 units of
money. Note that there are five (not six) working spaces in the cash register (the decimal
number is shown just for clarification).
0 0 0 . 0 0
9 9 9 . 9 9
Q: Now look at any typical number between 0 and 999.99, such as 256.78. How would it be
represented?
A: The number 256.78 will be represented as
2 5 6 . 7 8
Q: What is the smallest change between consecutive numbers?
A: It is 0.01, like between the numbers 256.78 and 256.79.
Q: What amount would one pay for an item, if it costs 256.789?
A: The amount one would pay would be rounded off to 256.79 or chopped to 256.78. In either
case, the maximum error in the payment would be less than 0.01.
Q: What magnitude of relative errors would occur in a transaction?
A: Relative error for representing small numbers is going to be high, while for large numbers the
relative error is going to be small.
For example, for 256.786, rounding it off to 256.79 gives a round-off error of
|256.786 − 256.79| = 0.004. The relative error in this case is
∈t = (0.004 / 256.786) × 100
= 0.001558%.
For another number, 3.546, rounding it off to 3.55 gives the same round-off error of
|3.546 − 3.55| = 0.004. The relative error in this case is
∈t = (0.004 / 3.546) × 100
= 0.11280%
Example 1.7
Let the digits of a number be 13042. Represent it in a fixed point system if:
i) n_I = 3 and n_F = 2
Solution:
130 | 42, so x = 130.42
The computer understands the number as 130.42.
ii) n_I = 3 = n_F
Solution:
130 | 420, so x = 130.420
The computer understands the number as 130.420.
iii) n_I = 1 and n_F = 4
Solution:
1 | 3042, so x = 1.3042
The computer understands the number as 1.3042.
iv) n_I = 2 = n_F
Solution:
13 | 04, so x = 13.04
The computer understands the number as 13.04.
As we can observe from the example above, this way of representing numbers has several
limitations; in particular, it is exposed to large absolute or relative errors, because numbers
must be rounded to fit the fixed layout.
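The digit-splitting of Example 1.7 can be sketched with a small helper (`fixed_point` is a hypothetical function written for illustration; its padding and chopping behaviour on the right are assumptions, not part of the module):

```python
def fixed_point(digits, nI, nF):
    # Place a digit string into nI integer digits and nF fractional digits,
    # padding with zeros or chopping on the right as needed (an assumption).
    s = digits.ljust(nI + nF, "0")[: nI + nF]
    return float(s[:nI] + "." + s[nI:])

print(fixed_point("13042", 3, 2))   # 130.42
print(fixed_point("13042", 1, 4))   # 1.3042
print(fixed_point("13042", 2, 2))   # 13.04  (last digit lost)
```

The last call shows the limitation discussed above: when the layout is too small, digits are silently dropped.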
1.3.2. Floating Point Representation
One of the most common ways of representing numbers on a digital computer is the
Floating Point representation, in which the position of the decimal (or binary) point is not fixed
at the outset; rather, its position with respect to the first digit is indicated for each number
separately.
On a digital computer, any t-digit floating point number in base β can be written in the general
form
x = ±(.d₁d₂…d_t)_β × β^e
where the d_i are digits or bits with values from 0 to (β − 1), and e is the exponent. The stored
word thus consists of the mantissa digits d₁d₂…d_t together with the signed exponent ±e.
Let us go back to the example where we had five spaces available for a number, and limit
ourselves to positive numbers with positive exponents. If we use the same five spaces, with four
for the mantissa and the last one for the exponent, then the smallest number that can be
represented is 1 and the largest is 9.999 × 10⁹. With the floating-point representation, what we
lose in accuracy we gain in the range of numbers that can be represented: for our example, the
maximum representable number changed from 999.99 to 9.999 × 10⁹.
What is the error in representing numbers in this format? Take the previous example of
256.78. It would be represented as 2.568 × 10² and stored in the five spaces as
2 5 6 8 2
As another example, the number 576329.78 would be represented as 5.763 × 10⁵ and stored in
the five spaces as
5 7 6 3 5
So how much error is caused by such representation? In representing 256.78, the round-off error
created is 256.78 − 256.8 = −0.02, and the relative error is
∈t = (0.02 / 256.78) × 100 = 0.0077888%.
In representing 576329.78, the round-off error created is 576329.78 − 5.763 × 10⁵ = 29.78, and
the relative error is
∈t = (29.78 / 576329.78) × 100 = 0.0051672%.
What we see is that although the absolute errors are larger for large numbers, the relative
errors are of the same order for both large and small numbers.
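The two relative errors above can be checked with a short script (a Python sketch; `float4` is an illustrative helper for a 4-digit mantissa, not a standard function):

```python
import math

def float4(x):
    # Round x to a 4-digit mantissa: d.ddd * 10**e
    e = math.floor(math.log10(abs(x)))
    return round(x / 10**e, 3) * 10**e

for x in (256.78, 576329.78):
    xb = float4(x)                  # 256.8 and 576300.0
    rel = abs(x - xb) / x * 100
    print(x, xb, rel)               # rel ≈ 0.0077888 % and 0.0051672 %
```

Both relative errors come out in the same order of magnitude, illustrating the point of this section.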
Example 1.8
Represent the following numbers in standard floating point representation on a five digit
computational tool (that is, t = 5):
i) x = 13456
ii) x = 134.5634
iii) x = 1345.267
iv) x = 1345.236
and compute the absolute true error of each approximation.
Solution:
i) x = 0.13456 × 10⁵, stored as
13456 | +05
Here x̄ = 0.13456 × 10⁵ as well, thus no error is induced.
ii) x = 0.1345634 × 10³, but here x̄ = 0.13456 × 10³, stored as
13456 | +03
E_A = |x − x̄| = |0.1345634 × 10³ − 0.13456 × 10³|
= |0.1345634 − 0.13456| × 10³
= 0.0000034 × 10³
= 0.0034
Note:
1) A number cannot be represented exactly if its mantissa contains more than t bits.
2) In the case of binary (base two) representation, there are two most commonly used
standard formats:
i) 32 bits of word length, of which
24 bits are reserved for the mantissa,
7 bits are reserved for the exponent,
1 bit is used by the sign.
ii) 64 bits of word length, of which
52 bits are reserved for the mantissa,
11 bits are reserved for the exponent,
1 bit is used by the sign.
In a floating point representation system with t digits for the mantissa, a number whose mantissa
has more than t digits cannot be represented exactly. Thus, such a number must somehow be
rounded off to t digits. There are two ways of reducing the number of digits of a given number,
which are discussed as follows.
i) By Chopping
In a t-digit computation, if all the digits of the mantissa to the right of the t-th digit are just
dropped off, then we say that the number is approximated by chopping.
Let x = ±m × 10^e, where m = 0.d₁d₂…d_t d_{t+1}…
Then in a t-digit computer the mantissa is chopped as
m̄ = 0.d₁d₂…d_t, where all the digits starting from d_{t+1} to the right are dropped.
Thus x̄ = ±m̄ × 10^e = ±0.d₁d₂…d_t × 10^e, and
Error = |x − x̄| = 0.00…0d_{t+1}d_{t+2}… × 10^e
= 0.d_{t+1}d_{t+2}… × 10^{−t} × 10^e
= d_{t+1}.d_{t+2}… × 10^{−t−1} × 10^e
≤ 9.999… × 10^{−t−1} × 10^e    (since 0 ≤ d_i ≤ 9)
≤ 10 × 10^{−t−1} × 10^e = 10^{e−t}
Therefore, the maximum absolute error bound by chopping is
Error = |x − x̄| ≤ 10^{e−t}
Example 1.9
Let t = 4 and x = 14.28625. Using chopping, find the maximum error committed in a 4-digit
computation.
Solution
x = 14.28625 = 0.1428625 × 10², so on a 4-digit computer x is approximated by
x̄ = 0.1428 × 10² using chopping.
Then the absolute error becomes
Error = |x − x̄| = |0.1428625 − 0.1428| × 10²
= 0.0000625 × 10² = 0.625 × 10⁻⁴ × 10²
= 0.625 × 10⁻² = 6.25 × 10⁻³ < 9.9 × 10⁻³
< 10 × 10⁻³ = 10⁻²
Therefore, the maximum error committed is less than 10⁻² = 0.01.
Or simply, since t = 4 and e = 2, we have
Error = |x − x̄| ≤ 10^{e−t} = 10^{2−4} = 10⁻² = 0.01.
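Chopping a mantissa to t digits can be sketched as follows (a Python sketch; `chop` is an illustrative helper, not part of the module):

```python
import math

def chop(x, t):
    # Normalise x = m * 10**e with mantissa m in [0.1, 1),
    # then keep only the first t mantissa digits (drop the rest).
    e = math.floor(math.log10(abs(x))) + 1
    m = abs(x) / 10**e
    m_chopped = math.floor(m * 10**t) / 10**t
    return math.copysign(m_chopped * 10**e, x), e

xbar, e = chop(14.28625, 4)      # 14.28, e = 2
err = abs(14.28625 - xbar)       # 0.00625
bound = 10 ** (e - 4)            # 10**(e - t) = 0.01
print(xbar, err, bound)
```

The computed error 0.00625 indeed stays below the bound 10^(e−t) = 0.01 of Example 1.9.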
ii) Rounding (Rounding Off)
Let x = ±0.d₁d₂…d_t d_{t+1}… × 10^e.
In this case, if
d_{t+1} ≥ 5, we add one to d_t, i.e. we round up;
d_{t+1} < 5, we merely chop off all the digits after d_t.
Thus,
x̄ = (0.d₁d₂…d_t + 10^{−t}) × 10^e,  if d_{t+1} ≥ 5
x̄ = 0.d₁d₂…d_t × 10^e,  if d_{t+1} < 5
For Case I, i.e. d_{t+1} ≥ 5:
The absolute error becomes
E = |x̄ − x| = |(0.d₁d₂…d_t + 10^{−t}) − 0.d₁d₂…d_t d_{t+1}…| × 10^e
= (1 − 0.d_{t+1}d_{t+2}…) × 10^{−t} × 10^e
But d_{t+1} ≥ 5 implies 0.d_{t+1}d_{t+2}… ≥ 0.5, so
E ≤ (1 − 0.5) × 10^{−t} × 10^e = 0.5 × 10^{e−t}
Therefore, the maximum absolute error bound by rounding up is
E = |x − x̄| ≤ ½ × 10^{e−t}
For Case II, i.e. d_{t+1} < 5, show that the maximum error bound is the same as in the
previous case, i.e.
E = |x − x̄| ≤ ½ × 10^{e−t}
Example 1.10
Let t = 4 and x = 14.28625. Using rounding, find the maximum error committed in a 4-digit computation.
Solution
x = 14.28625 = 0.1428625 × 10², and on a 4-digit computer x is approximated by rounding as
fl(x) = 0.1429 × 10².
Then the absolute error becomes
Error = |x − fl(x)| = |0.1428625 − 0.1429| × 10²
= 0.0000375 × 10² = 0.375 × 10⁻⁴ × 10²
= 0.375 × 10⁻² = 3.75 × 10⁻³ < 4.99 × 10⁻³
< 5 × 10⁻³ = (1/2) × 10⁻² = (1/2) × 10^(e−t).
Activity 1.2
1. Compute the absolute and relative errors committed when the numbers in Example 1.8 are represented on a five-digit computational tool.
2. Derive a formula for the maximum relative error bound when a number x = ±m × 10^e, where m = 0.d₁d₂…dₜdₜ₊₁…, is represented on a t-digit computational tool, using both chopping and rounding.
3. Find the maximum absolute error committed in a six-digit computation when x = 4/3 is approximated using chopping.
4. Find the maximum error committed in a six-digit computation approximated by using rounding for:
a) x = 4/9
b) x = 2/3
If a calculation is made with numbers that are not exact, then the calculation itself will have an error. How do the errors in each individual number propagate through the calculations? Let us look at the concept via some examples.
Example 1.11
Find the bounds for the propagation error in adding two numbers. For example, calculate X + Y where
X = 1.5 ± 0.05,
Y = 3.4 ± 0.04.
Solution
By looking at the numbers, the maximum possible values of X and Y are
X = 1.55 and Y = 3.44.
Hence
X + Y = 1.55 + 3.44 = 4.99
is the maximum value of X + Y.
The minimum possible values of X and Y are
X = 1.45 and Y = 3.36.
Hence
X + Y = 1.45 + 3.36 = 4.81
is the minimum value of X + Y.
Hence
4.81 ≤ X + Y ≤ 4.99.
One can find similar intervals of the bound for the other arithmetic operations X − Y, X × Y, and X / Y. What if the evaluations we are making are function evaluations instead? How do we find the value of the propagation error in such cases?
If f is a function of several variables X₁, X₂, X₃, …, Xₙ₋₁, Xₙ, then the maximum possible error in f is
Δf ≈ |∂f/∂X₁| ΔX₁ + |∂f/∂X₂| ΔX₂ + … + |∂f/∂Xₙ₋₁| ΔXₙ₋₁ + |∂f/∂Xₙ| ΔXₙ.
Example 1.12
The strain in an axial member of square cross-section is given by
ε = F / (h² E)
where
F = axial force in the member, N
h = length or width of the cross-section, m
E = Young's modulus, Pa
Given
F = 72 ± 0.9 N
h = 4 ± 0.1 mm
E = 70 ± 1.5 GPa
find the maximum possible error in the measured strain.
Solution
ε = 72 / ((4 × 10⁻³)² (70 × 10⁹))
= 64.286 × 10⁻⁶
The propagated error is
Δε ≈ |∂ε/∂F| ΔF + |∂ε/∂h| Δh + |∂ε/∂E| ΔE,
with
∂ε/∂F = 1/(h² E),
∂ε/∂h = −2F/(h³ E),
∂ε/∂E = −F/(h² E²).
Thus
Δε = (1/(h² E)) ΔF + (2F/(h³ E)) Δh + (F/(h² E²)) ΔE
= 0.9/((4 × 10⁻³)²(70 × 10⁹)) + 2(72)(0.1 × 10⁻³)/((4 × 10⁻³)³(70 × 10⁹)) + 72(1.5 × 10⁹)/((4 × 10⁻³)²(70 × 10⁹)²)
= 8.0357 × 10⁻⁷ + 3.2143 × 10⁻⁶ + 1.3776 × 10⁻⁶
= 5.3955 × 10⁻⁶.
Hence
ε = (64.286 ± 5.3955) × 10⁻⁶,
implying that the axial strain is between 58.8905 × 10⁻⁶ and 69.6815 × 10⁻⁶.
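The arithmetic of Example 1.12 can be checked numerically with the partial derivatives derived above; the variable names below are illustrative assumptions, not part of the module.

```python
# Strain eps = F/(h^2 * E) and its maximum propagated error
F, dF = 72.0, 0.9        # axial force, N
h, dh = 4e-3, 0.1e-3     # cross-section width, m
E, dE = 70e9, 1.5e9      # Young's modulus, Pa

eps = F / (h**2 * E)
# d(eps) = |1/(h^2 E)| dF + |2F/(h^3 E)| dh + |F/(h^2 E^2)| dE
deps = dF / (h**2 * E) + 2 * F * dh / (h**3 * E) + F * dE / (h**2 * E**2)
```

Evaluating `eps` and `deps` reproduces 64.286 × 10⁻⁶ and 5.3955 × 10⁻⁶ from the worked solution.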
Example 1.13
Subtraction of numbers that are nearly equal can create unwanted inaccuracies. Using the formula for error propagation, show that this is true.
Solution
Let
z = x − y.
Then
Δz = |∂z/∂x| Δx + |∂z/∂y| Δy
= |1| Δx + |−1| Δy
= Δx + Δy.
The relative error is
Δz/|z| = (Δx + Δy)/|x − y|.
As x and y become close to each other, the denominator becomes small and hence creates large relative errors.
For example, if
x = 2 ± 0.001,
y = 2.003 ± 0.001,
then
Δz/|z| = (0.001 + 0.001)/|2 − 2.003|
= 0.6667
= 66.67 %.
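A minimal sketch of the cancellation effect in Example 1.13 (variable names are illustrative):

```python
# z = x - y with nearly equal x and y: the relative error blows up
x, dx = 2.0, 0.001
y, dy = 2.003, 0.001
dz = dx + dy                  # absolute error in z propagates additively
rel = dz / abs(x - y)         # relative error: 0.002 / 0.003 ≈ 0.6667 (66.67 %)
```

Although the absolute error in z is only 0.002, the tiny value of |x − y| makes the relative error two thirds, exactly as computed in the example.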
1.5. Stability of Algorithms and Condition Numbers
After completing this section's lesson, students are expected to:
identify whether an algorithm is stable or unstable
identify whether a given problem is well or ill conditioned
calculate the condition numbers of different problems
Since we must live with errors in our numerical computations, the next natural question regards the appraisal of a given computed solution: in view of the fact that both the problem and the numerical algorithm yield errors, can we trust the numerical solution of a nearby problem (or the same problem with slightly different data) to differ by only a little from our computed solution? A negative answer could make our computed solution meaningless.
This question can be complicated to answer in general, and it leads to notions such as problem sensitivity and algorithm stability. A problem is said to be too sensitive, or ill-conditioned, if even a small perturbation in the data produces a large difference in the result.
Figure 1.2: Graphs of stable and unstable algorithms.
The words condition and conditioning are used informally to indicate how sensitive the solution of a problem may be to small relative changes in the input data. The condition of a numerical problem is a qualitative or quantitative statement about how difficult it is to solve, irrespective of the algorithm used to solve it.
As a qualitative example, consider the solution of two simultaneous linear equations. The
problem may be described graphically by the pair of straight lines representing each equation:
the solution is then the point of intersection of the lines.
Quantitatively, the condition number K of a problem is a measure of the sensitivity of the problem to a small perturbation or change. If this number is large, the problem is said to be ill-conditioned; in contrast, if the number is modest, the problem is recognized as well-conditioned.
For example, we can consider the problem of evaluating a differentiable function f(x). Let x̂ be a point close to x. In this case K is a function of x, defined as the relative change in f(x) caused by a unit relative change in x. That is,
K(x) = lim(x̂→x) |[f(x̂) − f(x)]/f(x)| / |(x̂ − x)/x|
= lim(x̂→x) (|f(x̂) − f(x)|/|x̂ − x|) · (|x|/|f(x)|)
= |x · f′(x)| / |f(x)|.
Example 1.14
Suppose f(x) = √x. We get
K(x) = |x · f′(x)| / |f(x)| = |x · 1/(2√x)| / √x = 1/2.
So K is a constant, which implies that taking square roots is equally well conditioned for all non-negative x, and that the relative error is reduced by half in the process.
Example 1.15
Suppose f(x) = 1/(1 − x). In this case we get
K(x) = |x · f′(x)| / |f(x)| = |x · 1/(1 − x)²| / |1/(1 − x)| = |x/(1 − x)|.
So K(x) can get arbitrarily large for values of x close to 1, and it can be used to measure the relative error in f(x) for such values; e.g. if x = 1.000001 then the relative error will increase by a factor of about 10⁶.
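As a sketch, the condition number K(x) = |x f′(x)/f(x)| can also be estimated numerically with a finite difference; the helper name `cond` and the step size are assumptions for this illustration.

```python
import math

def cond(f, x, h=1e-7):
    """Estimate K(x) = |x f'(x) / f(x)| using a central difference for f'."""
    fp = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fp / f(x))

K_sqrt = cond(math.sqrt, 2.0)                  # ≈ 1/2 for any x > 0 (Example 1.14)
K_recip = cond(lambda x: 1 / (1 - x), 0.999)   # ≈ |x/(1-x)| ≈ 999 (Example 1.15)
```

The two calls reproduce the analytic values: square roots are well conditioned everywhere, while 1/(1 − x) becomes severely ill-conditioned as x approaches 1.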
Review Exercise
ii) 1.3526, 2.00462, 1.532, 28.201, 31.0012
3. Find the relative error in the computation of x − y for x = 12.05 and y = 8.02, having absolute errors Δx = 0.005 and Δy = 0.001.
4. Find the relative error in the computation of x + y for x = 11.75 and y = 6.56, having absolute errors Δx = 0.001 and Δy = 0.003, respectively.
5. If =4 − 5 , find the percentage error in at = 1 , if the error is = 0.04.
error.
7. If f(x) = 4 cos x − 6x, find the relative percentage error in f(x) for x = 0, if the error in x is Δx = 0.005.
8. Determine the number of correct digits in the number x, given its relative error Eᵣ.
i) x = 386.4, Eᵣ = 0.3.
ii) x = 86.34, Eᵣ = 0.1.
iii) = 0.4785, = 0.2 × 10 .
9. Evaluate √5.01 − √5 correct to three significant digits.
10. If 2/3 is approximated to 0.6667, find
i) the absolute error,
ii) the relative error, and
iii) the percentage error.
11. If u = xyz and the errors in x, y, z are 0.001, 0.002, 0.003, compute the relative error in u, where x = y = z = 1.
12. If the true value of a number is 2.546282 and 2.5463 is its approximate value; find the
absolute error, relative error and the percentage error in the number.
13. If x = 10.00 ± 0.05, y = 0.0356 ± 0.002, z = 15300 ± 100, and w = 62000 ± 500, then find the maximum value of the absolute error in
i) x + y + z + w
ii) +5 −
iii)
14. If y = (0.31x + 2.73)/(x + 0.35), where the coefficients are rounded off, find the relative and absolute error in y when x = 0.5 ± 0.1.
15. If l = 5.43 and b = 3.82, where l and b denote the length and breadth of a rectangular plate, measured accurate up to 1 mm, find the error in computing the area.
CHAPTER TWO
2. SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL
EQUATIONS
2.1. Introduction:
Consider the equation
f(x) = 0   (2.1)
which may be given explicitly as a polynomial of degree n in x, or f(x) may be defined implicitly as a transcendental function. An equation which contains polynomials, exponential functions, logarithmic functions, trigonometric functions, etc. is called a transcendental equation. Finding one or more roots of Eq. (2.1) is one of the most commonly occurring problems of applied mathematics. Since there is no general formula for the solution of polynomial equations of degree five or higher, no general formula will exist for the solution of an arbitrary nonlinear equation of the form Eq. (2.1), where f is a continuous real-valued function. How can we then decide whether or not such an equation possesses a solution in the set of real numbers, and how can we find a solution? The present chapter is devoted to the study of these questions. Our goal is to develop simple numerical methods for the approximate solution of Eq. (2.1), where f is a real-valued function, defined and continuous on a bounded and closed interval of the real line.
Unit Objectives:
At the end of the unit students will be able to:
use the bisection method to find roots of a nonlinear equation,
enumerate the advantages and disadvantages of the bisection method,
use the regula-falsi method to find roots of a nonlinear equation,
enumerate the advantages and disadvantages of the regula-falsi method,
use the Newton-Raphson method to solve a nonlinear equation,
discuss the advantages and drawbacks of the Newton-Raphson method,
use the secant method to solve a nonlinear equation,
discuss the advantages and drawbacks of the secant method,
use the iteration method to numerically solve a nonlinear equation,
discuss the advantages and drawbacks of the iteration method,
discuss the convergence of iteration methods.
2.2. Preliminaries:
Geometrically, the root of the equation (2.1) is the value of x at which the graph of y = f(x) intersects the x-axis.
Definition 2.2: (Simple root) A number ξ is a simple root of f(x) = 0 if f(ξ) = 0 and f′(ξ) ≠ 0. In this case we may write f(x) = (x − ξ) g(x), where g(ξ) ≠ 0.
Definition 2.3: (Multiple root) A number ξ is a multiple root, of multiplicity m, of f(x) = 0, if f(ξ) = 0, f′(ξ) = … = f⁽ᵐ⁻¹⁾(ξ) = 0 and f⁽ᵐ⁾(ξ) ≠ 0, so that
f(x) = (x − ξ)ᵐ g(x), g(ξ) ≠ 0.
(i) Direct methods: For example, a direct method gives the root of a linear (first degree) equation
a₀x + a₁ = 0   (2.2)
as
x = −a₁/a₀.   (2.3)
Similarly, the roots of the quadratic equation
a₀x² + a₁x + a₂ = 0   (2.4)
are given by
x = (−a₁ ± √(a₁² − 4a₀a₂)) / (2a₀).   (2.5)
(ii) Iterative methods: These methods are based on the idea of successive approximations, i.e. starting with one or more initial approximations to the root, we obtain a sequence of approximations or iterates {xₖ} which, in the limit as k → ∞, converges to the exact root ξ.
An iteration method may use one initial approximation to the root, x₀. The sequence of approximations is then given by
x₁ = φ(x₀), x₂ = φ(x₁), x₃ = φ(x₂), …   (2.7)
If a method uses two initial approximations x₀, x₁ to the root, then we can write the method as
xₖ₊₁ = φ(xₖ₋₁, xₖ), k = 1, 2, 3, …   (2.8)
Remark: Given one or two initial approximations to the root, we require a suitable iteration function φ for a given function f(x), such that the sequence of iterates {xₖ} converges to the exact root ξ. Further, we also require a suitable criterion to terminate the iteration.
Criterion to terminate the iteration procedure:
Since we cannot perform an infinite number of iterations, we need a criterion to stop the iterations. We select a tolerance ε > 0 and generate x₁, x₂, x₃, …, xₖ, xₖ₊₁, until one of the following conditions is met:
(i) The equation f(x) = 0 is satisfied to a given accuracy, i.e. |f(xₖ)| is bounded by the error tolerance ε:
|f(xₖ)| ≤ ε.   (2.11)
(ii) The magnitude of the difference between two successive iterates is smaller than a given tolerance:
|xₖ₊₁ − xₖ| ≤ ε.   (2.12)
(iii) The relative error is smaller than a given tolerance:
|xₖ₊₁ − xₖ| / |xₖ₊₁| ≤ ε.   (2.13)
Generally, we use the second criterion. For example, if we require two decimal place accuracy, then we iterate until |xₖ₊₁ − xₖ| < 0.005. If we require three decimal place accuracy, then we iterate until |xₖ₊₁ − xₖ| < 0.0005. But inequality (2.13) is the best stopping criterion to apply.
Theorem 2.1: If f(x) is continuous on some interval [a, b] and f(a) f(b) < 0, then the equation f(x) = 0 has at least one real root, or an odd number of real roots, in the interval (a, b).
This result is very simple to use. We set up a table of values of f(x) for various values of x. Studying the changes in sign in the values of f(x), we determine the intervals in which the roots lie. For example, if f(1) and f(2) are of opposite signs, then there is a root in the interval (1, 2).
Example 2.1: Determine an interval of length one unit in which the negative real root, which is smallest in magnitude, lies for the equation f(x) = 9x³ + 18x² − 37x − 70 = 0.
Solution: Let f(x) = 9x³ + 18x² − 37x − 70. Since the smallest negative real root in magnitude is required, we form a table of values for x ≤ 0:
x:     −5    −4    −3    −2    −1    0
f(x): −560  −210  −40    4    −24  −70
Since f(−2) f(−1) < 0, the negative real root smallest in magnitude lies in the interval (−2, −1).
One alternative to obtain an approximate solution is to plot the function and determine where it crosses the x-axis. If the equation f(x) = 0 can be conveniently written in the form f₁(x) = f₂(x), then the point of intersection of the graphs of y = f₁(x) and y = f₂(x) gives the root of f(x) = 0.
Figure 2.1: Graphs of f₁(x) = e⁻ˣ cos x and f₂(x) = x; their intersection shows that x ≈ 0.5 is an approximate solution of f(x) = 0.
Remark: Graphical techniques are of limited practical value because they are not precise. However, graphical methods can be utilized to obtain rough estimates of roots. These estimates can be employed as starting guesses for the numerical methods discussed in the next sections.
2.2.2 Bisection method:
This method is based on the repeated application of the Intermediate Value Theorem. Suppose f is a continuous function defined on the interval [a, b], with f(a) and f(b) of opposite sign. The Intermediate Value Theorem implies that a number m exists in (a, b) with f(m) = 0. Although the procedure will work when there is more than one root in the interval (a, b), we assume for simplicity that the root in this interval is unique. The method calls for a repeated halving (or bisecting) of subintervals of [a, b] and, at each step, locating the half containing m.
To begin, set a₁ = a and b₁ = b, and let m₁ be the midpoint of [a, b]; that is, m₁ = (a₁ + b₁)/2.
• If f(m₁) = 0, then m = m₁, and we are done.
• If f(m₁) ≠ 0, then f(m₁) has the same sign as either f(a₁) or f(b₁).
• If f(m₁) and f(a₁) have the same sign, then m ∈ (m₁, b₁). Set a₂ = m₁ and b₂ = b₁.
• If f(m₁) and f(a₁) have opposite signs, then m ∈ (a₁, m₁). Set a₂ = a₁ and b₂ = m₁.
Then reapply the process to the interval [a₂, b₂]. After repeating the bisection process a number of times, we either find the root or find a subinterval which contains the root. We take the midpoint of the last subinterval as an approximation to the root.
The method is shown graphically in Fig. 2.2.
Figure 2.2: Graphical representation of the bisection method.
Example 2.3: Perform five iterations of the bisection method to obtain the smallest positive root of the equation f(x) = x³ − 5x + 1 = 0.
Solution: Since f(0) = 1 > 0 and f(1) = −3 < 0, the smallest positive root lies in the interval (0, 1).
Taking a₀ = 0, b₀ = 1, we get m₁ = (a₀ + b₀)/2 = (0 + 1)/2 = 0.5, f(m₁) = f(0.5) = −1.375 < 0, and f(a₀) f(m₁) < 0, so the root lies in (0, 0.5). Next, f(0.25) = −0.234375 < 0, and thus the root lies in the interval (0, 0.25).
The sequence of intervals is given in Table 2.2:
n   aₙ₋₁     bₙ₋₁    mₙ        f(mₙ) f(aₙ₋₁)
1   0        1       0.5       < 0
2   0        0.5     0.25      < 0
3   0        0.25    0.125     > 0
4   0.125    0.25    0.1875    > 0
5   0.1875   0.25    0.21875   < 0
Hence, the root lies in (0.1875, 0.21875). The approximate root is taken as the midpoint of this interval, that is, 0.203125.
Example 2.4:
Find the interval in which the smallest positive root of the equation f(x) = x³ − x − 4 = 0 lies. Determine the root correct to two decimal places using the bisection method.
Solution:
For f(x) = x³ − x − 4, we find f(0) = −4, f(1) = −4, f(2) = 2.
Therefore, the root lies in the interval (1, 2). The sequence of intervals is given in Table 2.3:
n    aₙ          bₙ         mₙ          f(mₙ) f(aₙ)
1    1           2          1.5         > 0
2    1.5         2          1.75        > 0
3    1.75        2          1.875       < 0
4    1.75        1.875      1.8125      > 0
5    1.75        1.8125     1.78125     > 0
6    1.78125     1.8125     1.796875    < 0
7    1.78125     1.796875   1.7890625   > 0
8    1.7890625   1.796875   1.792969    > 0
9    1.792969    1.796875   1.794922    > 0
10   1.794922    1.796875   1.795898    > 0
After 10 iterations, we find that the root lies in the interval (1.795898, 1.796875). Therefore, the approximate root is m = 1.796387. The root correct to two decimal places is 1.80.
Example 2.5: Show that f(x) = x³ + 4x² − 10 = 0 has a root in [1, 2], and use the bisection method to determine an approximation to the root that is accurate to at least within 10⁻⁴.
Solution: Because f(1) = −5 and f(2) = 14, the Intermediate Value Theorem ensures that this continuous function has a root in [1, 2].
For the first iteration of the bisection method we use the fact that at the midpoint of [1, 2] we have f(1.5) = 2.375 > 0. This indicates that we should select the interval [1, 1.5] for our second iteration. Taking again the midpoint of [1, 1.5], we find that f(1.25) = −1.796875 < 0, so our new interval becomes [1.25, 1.5], whose midpoint is 1.375. Continuing in this manner gives the values in Table 2.4 (with columns n, aₙ, bₙ, mₙ, f(mₙ)).
You might suspect the approximation is accurate because |f(m₉)| < |f(m₁₃)|, but we cannot be sure of this. The bisection method, though conceptually clear, can be slow to converge (that is, n may become quite large before |m − mₙ| is sufficiently small), and a good intermediate approximation might be inadvertently discarded. However, the method has the important property that it always converges to a solution.
Theorem 2.2: Let f be continuous on [a, b] and f(a) f(b) < 0. Then the bisection method generates a sequence {mₙ}, n ≥ 1, approximating the root m with the property
|mₙ − m| ≤ (b − a)/2ⁿ, for n ≥ 1.
Proof: For each n ≥ 1, we have
bₙ − aₙ = (b − a)/2ⁿ⁻¹ and m ∈ (aₙ, bₙ).   (2.14)
Since mₙ = (aₙ + bₙ)/2 for all n ≥ 1, it follows that
|mₙ − m| ≤ (bₙ − aₙ)/2 = (b − a)/2ⁿ.
Example 2.6: Determine approximately how many iterations are necessary to solve f(x) = x³ + 4x² − 10 = 0 with accuracy ε = 10⁻⁵ for a = 1 and b = 2.
Solution: This requires finding an integer n that satisfies
|mₙ − m| ≤ (b − a)/2ⁿ = (2 − 1)/2ⁿ = 2⁻ⁿ < 10⁻⁵.
To determine n we use logarithms to base 10. Since 2⁻ⁿ < 10⁻⁵ gives
−n log₁₀ 2 < −5,
we obtain
n > 5 / log₁₀ 2 ≈ 16.6.
It would appear to require 17 iterations to obtain an approximation accurate to within 10⁻⁵.
REMARK: If an error tolerance ε is prescribed, then the approximate number of iterations required may be determined from the relation
n ≥ [log(b − a) − log ε] / log 2.
Activity 2.3:
1. Determine the number of iterations necessary to solve f(x) = x³ + 4x² − 10 = 0 with accuracy 10⁻³ using a = 1 and b = 2.
2. Perform five iterations of the bisection method to obtain the smallest positive root of the following equations:
i) x 5 4 x 2 0 ii ) cos x 3x 1 iii) x 3 2 x 2 1 0 iv) 5 x 3 20 x 2 3 0
3. Find the root of the equation sin x = 1 + x³, which lies in the interval (−2, −1), correct to three decimal places.
4. Use the bisection method to find solutions accurate to within 10⁻⁵ for the following equations:
i) 3x − eˣ = 0 for 1 ≤ x ≤ 2    ii) 2x + 3 cos x − eˣ = 0 for 0 ≤ x ≤ 1
2.2.3 Method of False Position:
The method is also called the linear interpolation method, chord method, or regula-falsi method. At the start of each iteration of the method, we require the interval in which the root lies. Let the root of the equation f(x) = 0 lie in the interval (xₖ₋₁, xₖ), that is, fₖ₋₁ fₖ < 0, where fₖ₋₁ = f(xₖ₋₁) and fₖ = f(xₖ). Let P(xₖ₋₁, fₖ₋₁) and Q(xₖ, fₖ) be the corresponding points on the curve. Draw the straight line joining the points P and Q (see Fig. 2.3). The line PQ is taken as an approximation of the curve in the interval [xₖ₋₁, xₖ]. The equation of the line PQ is given by
(y − fₖ)/(fₖ₋₁ − fₖ) = (x − xₖ)/(xₖ₋₁ − xₖ).
The point of intersection of this line PQ with the x-axis is taken as the next approximation to the root. Setting y = 0 and solving for x, we get
x = xₖ − ((xₖ₋₁ − xₖ)/(fₖ₋₁ − fₖ)) fₖ = xₖ − ((xₖ − xₖ₋₁)/(fₖ − fₖ₋₁)) fₖ.
The next approximation to the root is taken as
xₖ₊₁ = xₖ − ((xₖ − xₖ₋₁)/(fₖ − fₖ₋₁)) fₖ.
Simplifying, we can also write the approximation as
xₖ₊₁ = (xₖ₋₁ fₖ − xₖ fₖ₋₁)/(fₖ − fₖ₋₁),  k = 1, 2, 3, …   (2.15)
Therefore, starting with the initial interval (x₀, x₁) in which the root lies, we compute
x₂ = (x₀ f₁ − x₁ f₀)/(f₁ − f₀).
Now, if f(x₀) f(x₂) < 0, then the root lies in the interval (x₀, x₂). Otherwise, the root lies in the interval (x₂, x₁). The iteration is continued using the interval in which the root lies, until the required accuracy criterion is satisfied.
The method is shown graphically in Fig. 2.3.
Figure 2.3: Method of false position.
Remark: i) At the start of each iteration, the required root lies in an interval whose length is decreasing. Hence, the method always converges.
ii) The method of false position has a disadvantage: if the root lies initially in the interval (x₀, x₁), then one of the end points remains fixed for all iterations.
Example 2.7: Locate the intervals which contain the smallest positive real roots of the equation x³ − 3x + 1 = 0. Obtain these roots correct to three decimal places, using the method of false position.
Solution: We form the following table of values for the function f(x):
x:     0    1    2    3
f(x):  1   −1    3   19
Table 2.5
There is one positive real root in the interval (0, 1) and another in the interval (1, 2). There is no real root for x > 2, as f(x) > 0 for all x > 2.
First we find the root in (0, 1). We have
x₀ = 0, x₁ = 1, f₀ = f(x₀) = f(0) = 1, f₁ = f(x₁) = f(1) = −1.
x₂ = (x₀f₁ − x₁f₀)/(f₁ − f₀) = (0 − 1)/(−1 − 1) = 0.5, f(x₂) = f(0.5) = −0.375.
Since f₀ f(x₂) < 0, the root lies in (0, 0.5).
x₃ = (x₀f₂ − x₂f₀)/(f₂ − f₀) = (0 − 0.5(1))/(−0.375 − 1) = 0.36364, f(x₃) = f(0.36364) = −0.04283.
x₄ = (x₀f₃ − x₃f₀)/(f₃ − f₀) = (0 − 0.36364(1))/(−0.04283 − 1) = 0.34870, f(x₄) = f(0.34870) = −0.00370.
x₅ = (x₀f₄ − x₄f₀)/(f₄ − f₀) = (0 − 0.34870(1))/(−0.00370 − 1) = 0.34741, f(x₅) = f(0.34741) = −0.00030.
x₆ = (x₀f₅ − x₅f₀)/(f₅ − f₀) = (0 − 0.34741(1))/(−0.00030 − 1) = 0.347306.
The root has been computed correct to three decimal places. The required root can be taken as x ≈ x₆ = 0.347306. We may also give the result as 0.347, even though x₆ is more accurate. Note that the left end point x₀ is fixed for all iterations.
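The chord formula (2.15), together with the bracket-update rule above, can be sketched as follows; the function name `false_position` is an illustrative assumption.

```python
def false_position(f, a, b, tol=1e-6, maxit=100):
    """Regula-falsi (Eq. 2.15): chord through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    x_old = a
    for _ in range(maxit):
        x = (a * fb - b * fa) / (fb - fa)   # chord meets the x-axis
        fx = f(x)
        if abs(x - x_old) < tol:
            return x
        x_old = x
        if fa * fx < 0:
            b, fb = x, fx                   # root in (a, x)
        else:
            a, fa = x, fx                   # root in (x, b)
    return x

root = false_position(lambda x: x**3 - 3*x + 1, 0, 1)   # Example 2.7, ≈ 0.347296
```

Note how one end of the bracket (here x₀ = 0) stays fixed throughout, exactly as observed in the worked example.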
Example 2.8: Find the root correct to two decimal places of the equation cos x = x eˣ, using the method of false position.
Solution: Define f(x) = cos x − x eˣ. There is no negative root of the equation. We have
f(0) = 1, f(1) = cos 1 − e = −2.17798.
Since f(0) f(1) < 0, the root lies in the interval (0, 1).
x₂ = (x₀f₁ − x₁f₀)/(f₁ − f₀) = (0(−2.17798) − 1(1))/(−2.17798 − 1) = 0.31467, f(x₂) = f(0.31467) = 0.51986.
Since f(0.31467) f(1) < 0, the root lies in the interval (0.31467, 1).
x₃ = (x₂f₁ − x₁f₂)/(f₁ − f₂) = (0.31467(−2.17798) − 1(0.51986))/(−2.17798 − 0.51986) = 0.44673, f(x₃) = f(0.44673) = 0.20354.
Repeating this procedure, we obtain x₄ = 0.49402, and since f(0.49402) f(1) < 0, the root lies in the interval (0.49402, 1). Continuing the iterations,
the root has been computed correct to two decimal places. The required root can be taken as x ≈ x₇ = 0.51692.
Note that the right end point x₁ = 1 is fixed for all iterations.
Activity 2.4:
In the following problems, find the root as specified using the regula-falsi method.
1. Find the positive root of x³ = 2x + 5. (Do only four iterations.)
2. Find an approximate root of x log₁₀ x − 1.2 = 0, correct to three decimal places.
3. Solve the equation x tan x = −1, starting with a = 2.5 and b = 3, correct to three decimal places.
ALGORITHM: The False Position Method:
INPUT: initial approximations p0, p1; tolerance TOL; maximum number of iterations N0.
OUTPUT: approximate solution p or message of failure.
Step 1 Set i = 2;
q0 = f ( p0);
q1 = f ( p1).
Step 2 While i ≤ N0 do Steps 3–7.
Step 3 Set p = p1 − q1( p1 − p0)/(q1 − q0). (Compute pi.)
Step 4 If | p − p1| < TOL then
OUTPUT (p); (The procedure was successful.)
STOP.
Step 5 Set i = i + 1;
q = f ( p).
Step 6 If q · q1 < 0 then set p0 = p1;
q0 = q1.
Step 7 Set p1 = p;
q1 = q.
Step 8 OUTPUT (‘Method failed after N0 iterations, N0 =’, N0);
(The procedure was unsuccessful.) STOP.
2.2.4. Newton-Raphson Method:
This method is also called Newton's method.
Let x₀ be an initial approximation to the root of f(x) = 0. Then P(x₀, f₀), where f₀ = f(x₀), is a point on the curve. Draw the tangent to the curve at P (see Fig. 2.4). We approximate the curve in the neighborhood of the root by the tangent to the curve at the point P. The point of intersection of the tangent with the x-axis is taken as the next approximation to the root. The process is repeated until the required accuracy is obtained. The equation of the tangent to the curve y = f(x) at the point P(x₀, f₀) is given by
y − f(x₀) = (x − x₀) f′(x₀),
where f′(x₀) is the slope of the tangent to the curve at P.
Setting y = 0 and solving for x, we get
x = x₀ − f(x₀)/f′(x₀),  f′(x₀) ≠ 0.
The next approximation to the root is given by
x₁ = x₀ − f(x₀)/f′(x₀),  f′(x₀) ≠ 0.
We repeat the procedure. The iteration method is defined as
xₖ₊₁ = xₖ − f(xₖ)/f′(xₖ),  f′(xₖ) ≠ 0,  k = 0, 1, 2, …   (2.16)
The method can also be derived from the Taylor series expansion. Setting f(xₖ + Δx) = 0 and expanding, we get
f(xₖ) + Δx f′(xₖ) + ((Δx)²/2!) f″(xₖ) + … = 0.
Neglecting the second and higher powers of Δx, we obtain
f(xₖ) + Δx f′(xₖ) ≈ 0, or Δx = −f(xₖ)/f′(xₖ).
Hence, we obtain the iteration method
xₖ₊₁ = xₖ + Δx = xₖ − f(xₖ)/f′(xₖ),  f′(xₖ) ≠ 0,
which is the same as the method derived earlier.
Geometrically, the method consists in replacing the part of the curve between the point (x₀, f(x₀)) and the x-axis by the tangent to the curve at that point.
The method is shown graphically in Fig. 2.4.
Figure 2.4: Newton-Raphson method.
Remark: 1. Convergence of Newton's method depends on the initial approximation to the root. If the approximation is far away from the exact root, the method may diverge. However, if the root lies in a small interval (a, b) and x₀ ∈ (a, b), then the method converges.
2. The computational cost of the method is one evaluation of the function f(x) and one evaluation of the derivative f′(x) per iteration.
Example 2.9: Perform four iterations of Newton's method to find the smallest positive root of the equation f(x) = x³ − 5x + 1 = 0.
Solution: We have f(0) = 1, f(1) = −3.
Since f(0) f(1) < 0, the smallest positive root lies in the interval (0, 1). Newton's method gives
xₖ₊₁ = xₖ − (xₖ³ − 5xₖ + 1)/(3xₖ² − 5) = (2xₖ³ − 1)/(3xₖ² − 5).
Taking x₀ = 0.5, we obtain
x₁ = (2(0.5)³ − 1)/(3(0.5)² − 5) = 0.176471,
x₂ = (2(0.176471)³ − 1)/(3(0.176471)² − 5) = 0.201568,
x₃ = (2(0.201568)³ − 1)/(3(0.201568)² − 5) = 0.201640,
x₄ = (2(0.201640)³ − 1)/(3(0.201640)² − 5) = 0.201640.
Therefore, the root correct to six decimal places is x ≈ 0.201640.
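Iteration (2.16) can be sketched as follows; the function name `newton` is an illustrative assumption.

```python
def newton(f, fprime, x0, tol=1e-10, maxit=50):
    """Newton-Raphson iteration (Eq. 2.16): x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(maxit):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example 2.9: smallest positive root of x^3 - 5x + 1 = 0 from x0 = 0.5
root = newton(lambda x: x**3 - 5*x + 1, lambda x: 3*x**2 - 5, 0.5)
```

Starting from x₀ = 0.5 the iterates match the worked example (0.176471, 0.201568, 0.201640, …), converging in a handful of steps.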
Example 2.10: Derive Newton's method for finding the q-th root of a positive number N, i.e. for solving x^q − N = 0, where N > 0, q > 0. Hence compute 17^(1/3), correct to four decimal places.
Solution: For f(x) = x^q − N we have f′(x) = q x^(q−1), and Newton's method gives
xₖ₊₁ = xₖ − (xₖ^q − N)/(q xₖ^(q−1)) = ((q − 1)xₖ^q + N)/(q xₖ^(q−1)).
For N = 17 and q = 3, the iteration is
xₖ₊₁ = (2xₖ³ + 17)/(3xₖ²).
With x₀ = 2, we obtain the following results:
x₁ = (2(2)³ + 17)/(3(2)²) = 33/12 = 2.75,
x₂ = (2(2.75)³ + 17)/(3(2.75)²) = 2.582645,
x₃ = (2(2.582645)³ + 17)/(3(2.582645)²) = 2.571332,
x₄ = (2(2.571332)³ + 17)/(3(2.571332)²) = 2.571282.
We may take x ≈ 2.571282 as the required root correct to four decimal places.
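The q-th root iteration derived in Example 2.10 can be sketched directly; the function name `qth_root` is an illustrative assumption.

```python
def qth_root(N, q, x0, tol=1e-12, maxit=100):
    """Newton iteration for x^q = N:  x_{k+1} = ((q-1) x^q + N) / (q x^(q-1))."""
    x = x0
    for _ in range(maxit):
        x_new = ((q - 1) * x**q + N) / (q * x**(q - 1))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

cube_root_17 = qth_root(17, 3, 2.0)   # iterates 2.75, 2.582645, 2.571332, 2.571282, ...
```

The computed value agrees with 17^(1/3) ≈ 2.571282 from the example.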
Activity 2.5:
1. Determine the initial approximations for finding the smallest positive root. Use these to find the root correct to three decimal places, using the Newton-Raphson method.
decimal places.
3. Use Newton's method to find solutions accurate to within 10⁻⁴ for the following problems.
2.2.5. The Secant Method:
We have seen that the Newton-Raphson method requires the evaluation of derivatives of the function, and this is not always possible, particularly in the case of functions arising in practical problems. In the secant method, the derivative at xₖ is approximated by the formula
f′(xₖ) ≈ (f(xₖ) − f(xₖ₋₁))/(xₖ − xₖ₋₁).   (2.17)
Hence, the Newton-Raphson formula (2.16) becomes
xₖ₊₁ = xₖ − f(xₖ)(xₖ − xₖ₋₁)/(f(xₖ) − f(xₖ₋₁)),  f(xₖ) ≠ f(xₖ₋₁),  k = 1, 2, 3, …   (2.18)
Example 2.11: Perform iterations of the secant method to obtain the smallest positive root of f(x) = x³ − 5x + 1 = 0.
Solution: Starting with x₀ = 0, x₁ = 1 (f₀ = 1, f₁ = −3), the first iteration gives x₂ = 1 − (−3)(1 − 0)/(−3 − 1) = 0.25, f(x₂) = −0.234375. Then
x₃ = x₂ − f(x₂)(x₂ − x₁)/(f(x₂) − f(x₁)) = 0.25 − (−0.234375)(0.25 − 1)/(−0.234375 + 3) = 0.186441, f(x₃) = 0.074276,
x₄ = x₃ − f(x₃)(x₃ − x₂)/(f(x₃) − f(x₂)) = 0.186441 − 0.074276(0.186441 − 0.25)/(0.074276 + 0.234375) = 0.201736, f(x₄) = −0.000470,
x₅ = x₄ − f(x₄)(x₄ − x₃)/(f(x₄) − f(x₃)) = 0.201736 − (−0.000470)(0.201736 − 0.186441)/(−0.000470 − 0.074276) = 0.201640.
Example 2.12: Given f(x) = x⁴ − x − 10 = 0, determine the initial approximations for finding the smallest positive root. Use these to find the root correct to three decimal places using the secant method.
Solution: For f(x) = x⁴ − x − 10,
we find that f(0) = −10, f(1) = −10, f(2) = 4.
Hence, the smallest positive root lies in the interval (1, 2).
The secant method gives the iteration scheme
xᵢ₊₁ = xᵢ − f(xᵢ)(xᵢ − xᵢ₋₁)/(f(xᵢ) − f(xᵢ₋₁)),  i = 1, 2, 3, …
With x₀ = 1, x₁ = 2, we obtain
x₂ = 1.7143,
x₃ = 1.8385,
x₄ = 1.8578,
x₅ = 1.8556,
x₆ = 1.8556.
The root correct to three decimal places is 1.856.
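Formula (2.18) can be sketched as follows; the function name `secant` is an illustrative assumption.

```python
def secant(f, x0, x1, tol=1e-8, maxit=50):
    """Secant iteration (Eq. 2.18): Newton's method with a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1        # shift the two most recent iterates
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x**4 - x - 10, 1.0, 2.0)   # Example 2.12, ≈ 1.8556
```

Only one new function evaluation is needed per iteration, which is the method's main advantage over Newton's method.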
Activity 2.6:
1. Use the secant method to obtain the smallest positive root, correct to three decimal places, of the following equations:
i) x 3 3 x 2 3 0
ii ) x 3 x 2 x 7 0
iii) x − e⁻ˣ = 0
2. Use the secant method to find solutions, accurate to within 10⁻⁵, for the following problems:
i) x² − 4x + 4 − ln x = 0 for 1 ≤ x ≤ 2
ii) x + 1 − 2 sin πx = 0 for 0 ≤ x ≤ 0.5
2.2.6. General Iteration Method:
The method is also called the method of successive approximations or fixed point iteration method. The first step in this method is to rewrite the given equation f(x) = 0 in an equivalent form
x = φ(x).   (2.19)
There are many ways of rewriting f(x) = 0 in this form.
The root of f(x) = 0 is then a fixed point of φ(x). A fixed point of a function φ is a point ξ such that ξ = φ(ξ).
Using Eq. (2.19), the iteration method is written as
xₖ₊₁ = φ(xₖ),  k = 0, 1, 2, …   (2.21)
so that
x₁ = φ(x₀), x₂ = φ(x₁), x₃ = φ(x₂), …   (2.22)
The stopping criterion is the same as used earlier.
Since there are many ways of writing f(x) = 0 as x = φ(x), it is important to know whether all or at least one of these iteration methods converges.
Convergence of an iteration method xₖ₊₁ = φ(xₖ), k = 0, 1, 2, …, depends on the choice of the iteration function φ(x) and a suitable initial approximation x₀ to the root. Consider again the equation x³ − 5x + 1 = 0, which has a root in (0, 1). It can be rewritten, for example, as
(i) xₖ₊₁ = (xₖ³ + 1)/5,  k = 0, 1, 2, …   (2.23)
With x₀ = 1, we get the sequence of approximations x₁ = 0.4, x₂ = 0.2128, x₃ = 0.20193, x₄ = 0.20165, …, which converges to the root.
(ii) xₖ₊₁ = (5xₖ − 1)^(1/3),  k = 0, 1, 2, …   (2.24)
(iii) xₖ₊₁ = ((5xₖ − 1)/xₖ)^(1/2),  k = 0, 1, 2, …   (2.25)
which does not converge to the root in (0, 1).
Now, we derive the condition that the iteration function φ(x) should satisfy in order that the method converges.
Condition of convergence
Let ξ be the exact root, so that ξ = φ(ξ), and let εₖ = xₖ − ξ denote the error in the k-th iterate. By the mean value theorem,
εₖ₊₁ = xₖ₊₁ − ξ = φ(xₖ) − φ(ξ) = φ′(tₖ) εₖ, where tₖ lies between xₖ and ξ.
If |φ′(x)| ≤ c < 1 for all x in the interval containing the root and the iterates, then
|εₖ₊₁| ≤ c |εₖ| ≤ c² |εₖ₋₁| ≤ … ≤ cᵏ⁺¹ |ε₀| → 0 as k → ∞.   (2.29)
Hence, the iteration converges if |φ′(x)| < 1 in an interval containing the root.
We can test this condition using x₀, the initial approximation, before the computations are done.
Let us now check whether the methods (2.23), (2.24), (2.25) converge to a root in (0, 1) of the equation f(x) = x³ − 5x + 1 = 0.
(i) We have φ(x) = (x³ + 1)/5, φ′(x) = 3x²/5, and |φ′(x)| ≤ 3/5 < 1 for all x in 0 < x < 1. Hence, the method converges to a root in (0, 1).
(ii) We have φ(x) = (5x − 1)^(1/3), φ′(x) = 5/(3(5x − 1)^(2/3)).
Now |φ′(x)| < 1 when x is close to 1, and |φ′(x)| > 1 in the other part of the interval. Convergence is not guaranteed.
(iii) We have φ(x) = ((5x − 1)/x)^(1/2), φ′(x) = 1/(2x^(3/2)(5x − 1)^(1/2)).
Again |φ′(x)| < 1 when x is close to 1, and |φ′(x)| > 1 in the other part of the interval. Convergence is not guaranteed.
Remark: Sometimes it may not be possible to find a suitable iteration function φ(x) by manipulating the given function f(x). Then, we may use the following procedure. Write f(x) = 0 as x = x + α f(x) = φ(x), where α is a constant to be determined. Let x₀ be an initial approximation contained in the interval in which the root lies. For convergence, we require
|φ′(x₀)| = |1 + α f′(x₀)| < 1.   (2.31)
Simplifying, we find the interval in which α lies. We choose a value for α from this interval and compute the approximations. A judicious choice of the value of α in this interval may give faster convergence.
Example 2.13: Find the smallest positive root of the equation x³ − x − 10 = 0, using the general iteration method.
Solution: We have f(x) = x³ − x − 10, f(0) = −10, f(1) = −10,
f(2) = 8 − 2 − 10 = −4, f(3) = 27 − 3 − 10 = 14.
Since f(2) f(3) < 0, the smallest positive root lies in the interval (2, 3).
Write x³ = x + 10, so that x = (x + 10)^(1/3) = φ(x). We define the iteration method as
xₖ₊₁ = (xₖ + 10)^(1/3). We obtain φ′(x) = 1/(3(x + 10)^(2/3)).
We find |φ′(x)| < 1 for all x in the interval (2, 3). Hence, the iteration converges. Taking x₀ = 2.5, we obtain
x₁ = (12.5)^(1/3) = 2.3208,
x₂ = (12.3208)^(1/3) = 2.3097,
x₃ = (12.3097)^(1/3) = 2.3090,
x₄ = (12.3090)^(1/3) = 2.3089.
The smallest positive root, correct to four decimal places, is 2.3089.
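The fixed point iteration (2.21) can be sketched as follows; the function name `fixed_point` is an illustrative assumption.

```python
def fixed_point(phi, x0, tol=1e-6, maxit=100):
    """Iterate x_{k+1} = phi(x_k); converges when |phi'(x)| < 1 near the root."""
    x = x0
    for _ in range(maxit):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example 2.13: x^3 - x - 10 = 0 rewritten as x = (x + 10)^(1/3)
root = fixed_point(lambda x: (x + 10) ** (1 / 3), 2.5)   # ≈ 2.3089
```

Because |φ′(x)| ≈ 0.06 on (2, 3), the iterates settle down in only a few steps, matching the sequence 2.3208, 2.3097, 2.3090, 2.3089 above.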
As another example, consider the equation 3x⁴ + x³ + 12x + 4 = 0, which has a root in the interval (−1, 0). It can be written as x = −4/(3x³ + x² + 12) = φ(x), so that
φ′(x) = 4(9x² + 2x)/(3x³ + x² + 12)².
We find |φ′(x)| < 1 for all x in the interval (−1, 0). Hence, the iteration converges. Taking x₀ = −0.25, we obtain
x₁ = −4/(3(−0.25)³ + (−0.25)² + 12) = −0.33290,
x₂ = −4/(3(−0.33290)³ + (−0.33290)² + 12) = −0.33333,
x₃ = −4/(3(−0.33333)³ + (−0.33333)² + 12) = −0.33333.
( )
we have '' ( ) <1 , if | α | < | β |.
2
63
= 1 (9 x 2 8 x 4) < 1
for all x ∈ (– 1, 0). This condition is also to be satisfied at the initial approximation.
Hence, takes negative values. The interval for α depends on the initial approximation x0 .
Activity 2.7
1. Define the fixed point iteration method to obtain a root of f(x) = 0. When does the method
converge?
2. In the following problems, find the smallest positive root as specified using the fixed point
iteration method.
i) x^2 − 5x + 1 = 0, correct to four decimal places.
The constant c, which is independent of k, is called the asymptotic error constant and it
depends on the derivatives of f(x) at x = α.
Let us now obtain the orders of the methods that were derived earlier.
Method of false position:
We have noted earlier that if the root lies initially in the interval (x0, x1), then one of the end
points is fixed for all iterations. If the left end point x0 is fixed and the right end point moves
towards the required root, the method behaves like
x_{k+1} = (x0 f_k − x_k f_0)/(f_k − f_0).
Substituting x_k = α + ε_k, x_{k+1} = α + ε_{k+1}, x0 = α + ε_0, we expand each term in Taylor
series and simplify using the fact that f(α) = 0. We obtain the error equation as
ε_{k+1} = c_0 ε_0 ε_k, where c_0 = f''(α)/(2 f'(α)).
Since ε_0 is finite and fixed, the error equation becomes
ε_{k+1} = c ε_k, where c = c_0 ε_0. (2.33)
Hence, the method of false position has order 1 or has linear rate of convergence.
Method of successive approximations or fixed point iteration method:
We have x_{k+1} = φ(x_k), and α = φ(α).
Subtracting, we get x_{k+1} − α = φ(x_k) − φ(α) = φ(α + ε_k) − φ(α)
= [φ(α) + ε_k φ'(α) + ...] − φ(α)
or ε_{k+1} = ε_k φ'(α) + O(ε_k^2).
Hence, the fixed point iteration method has order 1 or has linear rate of convergence.
Newton-Raphson method:
The method is given by x_{k+1} = x_k − f(x_k)/f'(x_k), f'(x_k) ≠ 0.
Substituting x_k = α + ε_k, x_{k+1} = α + ε_{k+1}, we obtain
ε_{k+1} = ε_k − f(α + ε_k)/f'(α + ε_k).
Expand the terms in Taylor's series. Using the fact that f(α) = 0, and canceling f'(α), we obtain
ε_{k+1} = ε_k − [ε_k f'(α) + (1/2) ε_k^2 f''(α) + ...] / [f'(α) + ε_k f''(α) + ...]
= ε_k − [ε_k + (f''(α)/(2f'(α))) ε_k^2 + ...][1 + (f''(α)/f'(α)) ε_k + ...]^(−1)
= ε_k − [ε_k + (f''(α)/(2f'(α))) ε_k^2 + ...][1 − (f''(α)/f'(α)) ε_k + ...]
= ε_k − [ε_k − (f''(α)/(2f'(α))) ε_k^2 + ...]
Therefore,
ε_{k+1} = c ε_k^2, where c = f''(α)/(2 f'(α)),
and |ε_{k+1}| = |c| |ε_k|^2. (2.35)
Hence, the Newton-Raphson method has order 2, or a quadratic rate of convergence.
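This quadratic rate can be checked numerically; the sketch below (our own illustration, not part of the original text) applies Newton's method to f(x) = x^3 − x − 10 from Example 2.13 and compares ε_{k+1}/ε_k^2 with c = f''(α)/(2f'(α)):

```python
# Newton-Raphson on f(x) = x^3 - x - 10; the error ratio e_{k+1}/e_k^2
# should approach c = |f''(a) / (2 f'(a))| at the root a, confirming order 2.
def newton(f, df, x0, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x - f(x) / df(x))
    return xs

f  = lambda x: x**3 - x - 10
df = lambda x: 3 * x**2 - 1
xs = newton(f, df, 2.0, 8)          # 8 steps is ample for full precision
root = xs[-1]
c = abs(6 * root / (2 * (3 * root**2 - 1)))   # f''(a) / (2 f'(a)) with a ~ root
errs = [abs(x - root) for x in xs[:4]]
ratio = errs[2] / errs[1] ** 2                # e_2 / e_1^2, close to c
```

Running it shows the ratio settling near 0.46, which is the predicted asymptotic error constant for this equation.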
Use the endpoints of each interval as the initial approximations in (i) and (ii) and the midpoints
as the initial approximation in (iii).
6. Given the following equations:
i) x^4 − x − 10 = 0,
ii) cos x − x^2 − x = 0,
iii) e^x + 2^(−x) + 2 cos x − 6 = 0,
determine the initial approximations for finding the smallest positive root. Use these to
find the root correct to three decimal places with the following methods:
(i) Secant method, (ii) Regula-Falsi method, (iii) Newton-Raphson method.
7. Use a fixed-point iteration method to determine a solution accurate to within 10^(−2) for
x^4 − 3x^2 − 3 = 0 on [1, 2]. Use x0 = 1.
8. (a) Show that the equation ln x = x^2 − 1 has exactly two real roots, α1 = 0.45 and α2 = 1.
(b) Determine the initial approximations for which the iteration converges to α1 or to α2.
9. The equation x^2 − ax + b = 0 has two real roots α and β. Show that the iteration method
x_{k+1} = (x_k^2 + b)/a
is convergent near x = α if 2|α| < |α + β|.
10. What is the rate of convergence of the following methods:
(i) The Bisection method (ii) Method of false position, (iii) Newton-Raphson method,
(iv) Secant method, (v) Fixed point iteration method?
CHAPTER THREE
3. SYSTEMS OF EQUATIONS
3.1. INTRODUCTION
Systems of equations are used to represent physical problems that involve the
interaction of various properties. The variables in the system represent the
properties being studied, and the equations describe the interaction between the
variables.
The system is easiest to study when the equations are all linear. Often the number
of equations is the same as the number of variables, for only in this case is it likely
that a unique solution will exist.
Although not all physical problems can be reasonably represented using a linear
system with the same number of equations as unknowns, the solutions to many
problems either have this form or can be approximated by such a system. In fact,
this is quite often the only approach that can give quantitative information about a
physical problem.
Systems of linear equations occur in solving problems in a wide variety of disciplines,
including mathematics and statistics, the physical, biological, and social sciences,
as well as engineering and business. There are two different approaches for finding
the numerical solution of a system of equations, namely direct methods and iterative methods.
Unit Objectives:
At the end of the unit students will be able to:
use exact (direct) methods to find the solution of a system of linear equations,
use the Gaussian elimination method to solve a system of linear equations,
use the backward or forward substitution formulas to solve triangular systems,
use the Gaussian elimination with partial pivoting method to numerically solve a system of
linear equations,
discuss the advantages and drawbacks of the LU matrix decomposition method.
3.2. Exact Methods
In this chapter we consider exact methods for approximating the solution of a system of n linear
equations in n unknowns. An exact method is one that gives the exact solution to the system, if it
is assumed that all calculations can be performed without round-off error effects. This
assumption is idealized. We will need to consider quite carefully the role of finite-digit
arithmetic error in the approximation to the solution to the system and how to arrange the
calculations to minimize its effect.
A system of n linear equations in n unknowns has the form
a11 x1 + a12 x2 + ⋯ + a1n xn = b1
⋮
an1 x1 + an2 x2 + ⋯ + ann xn = bn
In matrix notation this is AX = B, where A = [aij] is the n × n coefficient matrix,
X = (x1, x2, …, xn)^T is the vector of unknowns, and B = (b1, b2, …, bn)^T is the constant vector.
NB: A linear system of n equations in n variables, with coefficient matrix A and constant
vector B ≠ 0, has a unique solution iff the determinant of A ≠ 0.
3.2.1. Gaussian Elimination
If you have studied linear algebra or matrix theory, you probably have been introduced
to Gaussian elimination, the most elementary method for systematically determining the solution
of a system of linear equations. Variables are eliminated from the equations until one equation
involves only one variable, a second equation involves only that variable and one other, a third
has only these two and one additional, and so on. The solution is found by solving for the
variable in the single equation, using this to reduce the second equation to one that now contains
a single variable, and so on, until values for all the variables are found.
By a sequence of the operations just given, a linear system can be transformed to a more easily
solved linear system with the same solutions. The sequence of operations is illustrated in the
next example.
Example 1: The system
E1:  x1 +  x2        + 3x4 =  4,
E2: 2x1 +  x2 −  x3 +  x4 =  1,
E3: 3x1 −  x2 −  x3 + 2x4 = −3,
E4: −x1 + 2x2 + 3x3 −  x4 =  4,
will be solved for x1, x2, x3, and x4. First use equation E1 to eliminate the unknown x1 from
E2, E3, and E4 by performing (E2 − 2E1) → (E2),
(E3 − 3E1) → (E3), and
(E4 + E1) → (E4). This gives
E1:  x1 +  x2        + 3x4 =  4,
E2:      −  x2 −  x3 − 5x4 = −7,
E3:      − 4x2 −  x3 − 7x4 = −15,
E4:        3x2 + 3x3 + 2x4 =  8,
where, for simplicity, the new equations are again labeled E1, E2, E3, and E4. Next, E2 is used to
eliminate x2 from E3 and E4 by (E3 − 4E2) → (E3) and (E4 + 3E2) → (E4), giving
E1:  x1 + x2       + 3x4  =  4,
E2:      − x2 − x3 − 5x4  = −7,
E3:             3x3 + 13x4 = 13,
E4:                 − 13x4 = −13.
The system of equations is now in triangular (or reduced) form and can be solved for the
unknowns by a backward-substitution process. Noting that E4 implies x4 = 1, we can solve E3 for x3:
x3 = (1/3)(13 − 13x4) = (1/3)(13 − 13) = 0.
Continuing, E2 gives
x2 = −(−7 + 5x4 + x3) = −(−7 + 5 + 0) = 2,
and E1 gives
x1 = 4 − 3x4 − x2 = 4 − 3 − 2 = −1.
The solution is, therefore, x1 = −1, x2 = 2, x3 = 0, and x4 = 1. It is easy to verify that these
values solve the original system of equations.
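The elimination and backward-substitution steps of Example 1 can be written as a short Python sketch (illustrative code, assuming no zero pivots are encountered):

```python
# Naive Gaussian elimination with backward substitution (no pivoting),
# applied to the 4x4 system of Example 1.
def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    for k in range(n - 1):             # elimination stage k
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]      # multiplier m_{ik}
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                      # backward substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[1, 1, 0, 3], [2, 1, -1, 1], [3, -1, -1, 2], [-1, 2, 3, -1]]
b = [4, 1, -3, 4]
print(gauss_solve(A, b))   # [-1.0, 2.0, 0.0, 1.0]
```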
When performing the calculations of Example 1, we did not need to write out the full equations
at each step or to carry the variables , , , and through the calculations, since they
always remained in the same column. The only variation from system to system occurred in the
coefficients of the unknowns and in the values on the right side of the equations. For this reason,
a linear system is often replaced by a matrix, a rectangular array of elements in which not only is
the value of an element important, but also its position in the array. The matrix contains all the
information about the system that is necessary to determine its solution in a compact form. The
notation for an n ×m (n by m) matrix will be a capital letter, such as A, for the matrix and
lowercase letters with double subscripts, such as aij, to refer to the entry at the intersection of the
ith row and jth column; that is
A = [aij] =
a11 a12 ⋯ a1m
 ⋮        ⋱   ⋮
an1 an2 ⋯ anm
For example,
A =
2 −1 7
3  1 0
is a 2 × 3 matrix with a11 = 2, a12 = −1, a13 = 7, a21 = 3, a22 = 1, and a23 = 0.
The 1 × n matrix A = [a11 a12 ⋯ a1n] is called an n-dimensional row vector, and an n × 1 matrix
A =
a11
a21
 ⋮
an1
is called an n-dimensional column vector. Usually the unnecessary subscript is omitted for
vectors and a boldface lowercase letter is used for notation. So
x =
x1
x2
 ⋮
xn
denotes the vector of unknowns of the system
a11 x1 + a12 x2 + ⋯ + a1n xn = b1
a21 x1 + a22 x2 + ⋯ + a2n xn = b2
 ⋮
an1 x1 + an2 x2 + ⋯ + ann xn = bn
An n × (n + 1) matrix can be used to represent this linear system by first constructing
A =
a11 a12 ⋯ a1n
 ⋮        ⋱   ⋮
an1 an2 ⋯ ann
and b =
b1
 ⋮
bn
and then combining them into the augmented matrix [A, b]:
a11 a12 ⋯ a1n  b1
 ⋮        ⋱   ⋮    ⋮
an1 an2 ⋯ ann  bn
Repeating the operations involved in Example 1 with the matrix notation results in first
considering the augmented matrix of the system
x1 + x2 + 3x4 = 4,
2x1 + x2 − x3 + x4 = 1,
3x1 − x2 − x3 + 2x4 = −3,
−x1 + 2x2 + 3x3 − x4 = 4,
which is
 1  1  0  3   4
 2  1 −1  1   1
 3 −1 −1  2  −3
−1  2  3 −1   4
Performing the row operations as in Example 1 produces the augmented matrices
1  1  0  3   4        1  1  0  3    4
0 −1 −1 −5  −7   and  0 −1 −1 −5   −7
0 −4 −1 −7 −15        0  0  3  13   13
0  3  3  2   8        0  0  0 −13  −13
The latter matrix can now be transformed into its corresponding linear system and solutions for
x1, x2, x3, and x4 obtained. The procedure involved in this process is called Gaussian
Elimination with Backward Substitution.
The general Gaussian elimination procedure applied to the linear system
E1: a11 x1 + a12 x2 + ⋯ + a1n xn = b1
E2: a21 x1 + a22 x2 + ⋯ + a2n xn = b2
 ⋮
En: an1 x1 + an2 x2 + ⋯ + ann xn = bn
starts from the augmented matrix
a11 a12 ⋯ a1n  b1
 ⋮        ⋱   ⋮    ⋮
an1 an2 ⋯ ann  bn
Suppose that a11 ≠ 0. To convert the entries in the first column, below a11, to zero, we form the
multipliers mk1 = ak1/a11 for k = 2, 3, . . . , n, and perform the operations
E2 − m21E1 → E2, E3 − m31E1 → E3, . . . , En − mn1E1 → En.
This eliminates (that is, changes to zero) the coefficient of x1 in each of these rows:
a11 a12 ⋯ a1n  b1
 0  a22 ⋯ a2n  b2
 ⋮        ⋱   ⋮    ⋮
 0  an2 ⋯ ann  bn
Although the entries in rows 2, 3, . . . , n are expected to change, for ease of notation, we again
denote the entry in the ith row and the jth column by aij.
If the pivot element a22 ≠ 0, we form the multipliers mk2 = ak2/a22 and perform the operations
(Ek − mk2E2) → Ek for each k = 3, . . . , n, i.e. E3 − m32E2 → E3, . . . , En − mn2E2 → En.
Continuing in the same way through all n − 1 stages yields
a11 a12 a13 ⋯ a1n  b1
 0  a22 a23 ⋯ a2n  b2
 0   0  a33 ⋯ a3n  b3
 ⋮            ⋱   ⋮    ⋮
 0   0   0  ⋯ ann  bn
Since the new linear system is triangular, backward substitution can be performed. Solving the
nth equation for xn gives
xn = bn/ann.
Solving the (n − 1)st equation for x_{n−1} and using the known value for xn yields
x_{n−1} = (b_{n−1} − a_{n−1,n} xn)/a_{n−1,n−1}.
Note:
Consider the system AX = B.
Let us denote the original system by A^(1)X = B^(1):
a11^(1) ⋯ a1n^(1)     x1     b1^(1)
  ⋮     ⋱    ⋮         ⋮   =   ⋮
an1^(1) ⋯ ann^(1)     xn     bn^(1)
Step 1. Eliminate x1 from the last (n − 1) equations by subtracting the multiple
mi1 = ai1^(1)/a11^(1)
of the first equation from the ith equation.
As a result, the first row of A^(1) and B^(1) are left unchanged and the remaining rows are
changed.
And so, we get a new system, denoted by A^(2)X = B^(2):
a11^(1) a12^(1) ⋯ a1n^(1)     x1     b1^(1)
  0     a22^(2) ⋯ a2n^(2)     x2     b2^(2)
  ⋮          ⋱       ⋮          ⋮   =   ⋮
  0     an2^(2) ⋯ ann^(2)     xn     bn^(2)
where the new coefficients are given by
aij^(2) = aij^(1) − mi1 a1j^(1),
bi^(2) = bi^(1) − mi1 b1^(1),  i, j = 2, 3, 4, . . . , n.
Step 2. If a22^(2) ≠ 0, we can eliminate x2 from the last (n − 2) equations by generating the
multipliers
mi2 = ai2^(2)/a22^(2),  i = 3, 4, 5, . . . , n.
In general, let 1 ≤ k ≤ n − 1 and assume that x1, . . . , x_{k−1} have been eliminated at the
successive stages, so that A^(k)X = B^(k) has the form
a11^(1) a12^(1)  ⋯           a1n^(1)
  0     a22^(2)  ⋯           a2n^(2)
  ⋮            ⋱                ⋮
  0   ⋯  0   akk^(k)  ⋯      akn^(k)
  ⋮            ⋮                ⋮
  0   ⋯  0   ank^(k)  ⋯      ann^(k)
and generate the multipliers
mik = aik^(k)/akk^(k),  i = k + 1, k + 2, . . . , n.
After n − 1 steps we arrive at the upper triangular system A^(n)X = B^(n):
a11^(1) a12^(1) ⋯ a1n^(1)     x1     b1^(1)
  0     a22^(2) ⋯ a2n^(2)     x2     b2^(2)
  ⋮          ⋱       ⋮          ⋮   =   ⋮
  0      0    ⋯ ann^(n)       xn     bn^(n)
Then, using the backward substitution formula,
xn = bn^(n)/ann^(n)  and  xi = (bi^(i) − Σ_{j=i+1}^{n} aij^(i) xj)/aii^(i),  i = n − 1, n − 2, … , 1.
Example: Solve the system
x1 + 2x2 + x3 = 0,
2x1 + 2x2 + 3x3 = 3,
−x1 − 3x2 = 2,
using Gaussian elimination.
Step 1: The multipliers are
m21 = a21/a11 = 2/1 = 2 and
m31 = a31/a11 = −1/1 = −1.
Using aij^(2) = aij^(1) − mi1 a1j^(1), i, j = 2, 3:
a22^(2) = a22 − m21 a12 = 2 − 2(2) = −2
a23^(2) = a23 − m21 a13 = 3 − 2(1) = 1
a32^(2) = a32 − m31 a12 = −3 − (−1)(2) = −1
a33^(2) = a33 − m31 a13 = 0 + (1)(1) = 1
and bi^(2) = bi − mi1 b1, i = 2, 3:
b2^(2) = b2 − m21 b1 = 3 − 2(0) = 3
b3^(2) = b3 − m31 b1 = 2 − (−1)(0) = 2
⟹ 1  2 1 ⋮ 0
   0 −2 1 ⋮ 3
   0 −1 1 ⋮ 2
Step 2: a22^(2) = −2 ≠ 0, and then generate the multiplier
m32 = a32^(2)/a22^(2) = (−1)/(−2) = 1/2
⟹ 1  2  1  ⋮  0
   0 −2  1  ⋮  3   ⟹ by backward substitution, x3 = 1, x2 = −1, x1 = 1.
   0  0 1/2 ⋮ 1/2
Quick Exercise for students
A system AX = B, where the matrix A is upper triangular, has the form
a11 x1 + a12 x2 + ⋯ + a1n xn = b1
        a22 x2 + ⋯ + a2n xn = b2
                 ⋱        ⋮     ⋮
                     ann xn = bn
From the last equation,
xn = bn/ann,  ann ≠ 0,
and then, for i = n − 1, n − 2, … , 1, each xi is obtained from the already computed values
x_{i+1}, x_{i+2}, … , xn by
xi = (bi − Σ_{j=i+1}^{n} aij xj)/aii.
This is the backward substitution formula (BSF).
Hey, not bad. Now you're making progress! The general rule to follow is: at each
elimination stage, arrange the rows of the augmented matrix so that the new pivot element is
larger in absolute value than the elements beneath it in its column.
Example
Solve the following system using Gaussian elimination with partial pivoting.
x1 + x2 − 2x3 = 3
4x1 − 2x2 + x3 = 5
3x1 − x2 + 3x3 = 8
Solution
Consider the augmented matrix
[1]  1 −2 ⋮ 3
 4  −2  1 ⋮ 5
 3  −1  3 ⋮ 8
→ Interchange the 2nd and the 1st rows, because |a11| = 1 ≤ |a21| = 4, to get multipliers of
small magnitude:
4 −2  1 ⋮ 5
1  1 −2 ⋮ 3
3 −1  3 ⋮ 8
→ Eliminate x1 from the 2nd and 3rd rows: apply the row operations R2 = R2 − (0.25)R1 and
R3 = R3 − (0.75)R1 to the above augmented matrix. Hence the modified
augmented matrix is given by
4 −2    1    ⋮ 5
0  1.5 −2.25 ⋮ 1.75
0  0.5  2.25 ⋮ 4.25
which corresponds to the system
4x1 − 2x2 + x3 = 5
1.5x2 − 2.25x3 = 1.75
0.5x2 + 2.25x3 = 4.25
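A minimal Python sketch of elimination with partial pivoting, applied to this system (illustrative code, not part of the original module):

```python
# Gaussian elimination with partial pivoting: at each stage the row with the
# largest pivot candidate (in absolute value) is swapped up before eliminating.
def gauss_pp(A, b):
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                      # backward substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[1, 1, -2], [4, -2, 1], [3, -1, 3]]
b = [3, 5, 8]
x = gauss_pp(A, b)                     # x2 = 3 exactly; x1, x3 are fractions
```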
3.2.5. Gauss-Jordan Method
Step 1
Assume that a11 ≠ 0 and make a11 = 1, i.e. divide the first row by a11:
a1j = a1j/a11,  j = 1, 2, 3, … , n + 1.
Now, make the non-diagonal elements of the first column zero by applying
aij = aij − ai1 a1j,  i = 2, 3, … , n,  j = 1, 2, 3, … , n + 1.
Then the new system is:
1  a12 ⋯ a1,n+1
0  a22 ⋯ a2,n+1
⋮        ⋱    ⋮
0  an2 ⋯ an,n+1
Step 2
Assume that a22 ≠ 0 and make a22 = 1, i.e. divide the second row by a22:
a2j = a2j/a22,  j = 2, 3, … , n + 1.
Now, make the non-diagonal elements of the 2nd column zero by applying
aij = aij − ai2 a2j,  where i = 1, 3, … , n and j = 1, 2, 3, … , n + 1.
Continue the process until the system takes the form IX = B':
1 0 0 ⋯ 0  b'1
0 1 0 ⋯ 0  b'2
⋮        ⋱      ⋮
0 0 0 ⋯ 1  b'n
so that xi = b'i for i = 1, 2, … , n.
Example 1: Consider the augmented matrix
1 1 2 ⋮ 23/3
2 1 3 ⋮ 7
1 1 1 ⋮ 4
→ R2 → R2 − 2R1, R3 → R3 − R1:
1 1  2 ⋮ 23/3
0 1 −3 ⋮ 17
0 2 −1 ⋮ 10/3
→ R1 → R1 − R2, R3 → R3 − 2R2, and, since a33 ≠ 0, make a33 = 1 by R3 → R3/a33:
1 0  3 ⋮ −5
0 1 −3 ⋮ 17
0 0  1 ⋮ −8
→ Make all the non-diagonal entries of the third column zero:
R1 → R1 − 3R3, R2 → R2 + 3R3:
1 0 0 ⋮ 19
0 1 0 ⋮ −7
0 0 1 ⋮ −8
Therefore x1 = 19, x2 = −7, and x3 = −8.
Example 2: Use the Gauss-Jordan elimination method to solve the linear system
 x1 + 2x2 + 3x3 = 3
3x1 −  x2 − 5x3 = 2
2x1 + 4x2 −  x3 = −1
Solution: The augmented matrix is
M = 1  2  3   3
    3 −1 −5   2
    2  4 −1  −1
Then perform Gauss-Jordan elimination. R2 → R2 − 3R1 and R3 → R3 − 2R1 give
1  2   3   3
0 −7 −14  −7
0  0  −7  −7
Dividing R2 by −7 and applying R1 → R1 − 2R2 gives
1 0 −1  1
0 1  2  1
0 0 −7 −7
Finally, dividing R3 by −7 and applying R1 → R1 + R3, R2 → R2 − 2R3 gives
1 0 0  2
0 1 0 −1
0 0 1  1
Hence, the solution is X = (x1, x2, x3) = (2, −1, 1).
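The whole reduction can be sketched in Python as follows (illustrative; it assumes the pivots stay non-zero):

```python
# Gauss-Jordan elimination: reduce [A | b] all the way to [I | x].
def gauss_jordan(A, b):
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]   # augmented matrix
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]         # normalise the pivot row
        for i in range(n):                     # clear the rest of column k
            if i != k:
                m = M[i][k]
                M[i] = [M[i][j] - m * M[k][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

A = [[1, 2, 3], [3, -1, -5], [2, 4, -1]]
b = [3, 2, -1]
print(gauss_jordan(A, b))   # [2.0, -1.0, 1.0]
```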
Exercise: Solve the following systems using the Gauss-Jordan method.
(a) x + 2y + z = 3
    2x + 3y + 3z = 10
    3x − y + 2z = 13
(b) x − 2y − 6z = 14
    9x + 4y + z = −17
    x + 6z = 4
Remark: The Gauss-Jordan method is more expensive than Gaussian elimination for
solving systems of linear equations:
it involves approximately 50% more arithmetic operations than Gaussian elimination does.
3.2.6. Matrix Inversion Using Jordan Elimination
Example: Find the inverse of
A =  1 −1 −2
     2 −3 −5
    −1  3  5
Solution: Form the augmented matrix [A ⋮ I]:
 1 −1 −2 ⋮ 1 0 0
 2 −3 −5 ⋮ 0 1 0
−1  3  5 ⋮ 0 0 1
→ a11 = 1 already, so apply
R2 → R2 − 2R1,
R3 → R3 + R1:
⟹ 1 −1 −2 ⋮  1 0 0
   0 −1 −1 ⋮ −2 1 0
   0  2  3 ⋮  1 0 1
→ a22 = −1, so make a22 = 1 by
R2 → −R2:
⟹ 1 −1 −2 ⋮ 1  0 0
   0  1  1 ⋮ 2 −1 0
   0  2  3 ⋮ 1  0 1
→ R1 → R1 + R2,
R3 → R3 − 2R2:
⟹ 1 0 −1 ⋮  3 −1 0
   0 1  1 ⋮  2 −1 0
   0 0  1 ⋮ −3  2 1
→ a33 = 1 already, so apply
R1 → R1 + R3,
R2 → R2 − R3:
⟹ 1 0 0 ⋮  0  1  1
   0 1 0 ⋮  5 −3 −1
   0 0 1 ⋮ −3  2  1
Therefore A^(−1) =  0  1  1
                    5 −3 −1
                   −3  2  1
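The same row reduction of [A ⋮ I] can be sketched in Python (illustrative; no pivoting, so it assumes the pivots are non-zero):

```python
# Matrix inversion by Gauss-Jordan: reduce [A | I] to [I | A^-1].
def invert(A):
    n = len(A)
    M = [A[i][:] + [float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]         # normalise pivot row
        for i in range(n):                     # clear column k elsewhere
            if i != k:
                m = M[i][k]
                M[i] = [M[i][j] - m * M[k][j] for j in range(2 * n)]
    return [row[n:] for row in M]

A = [[1, -1, -2], [2, -3, -5], [-1, 3, 5]]
print(invert(A))   # [[0.0, 1.0, 1.0], [5.0, -3.0, -1.0], [-3.0, 2.0, 1.0]]
```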
Procedure (LU decomposition):
Step 1: Write AX = B as
LUX = B
⇔ L(UX) = B.
Step 2: Put UX = Y and solve the lower triangular system LY = B for Y by forward substitution.
Step 3: Solve the upper triangular system UX = Y for X by backward substitution.
Remark: make sure A is a non-singular matrix.
Example: Solve the system
−x1 + 2x2 + x3 = 0
8x2 + 6x3 = 10
−2x1 + 5x3 = −11
using LU decomposition.
Solution:
A = −1 2 1
     0 8 6
    −2 0 5
→ Check: det(A) = −1(40 − 0) − 2(0 + 12) + 1(0 + 16) = −48 ≠ 0, so A is non-singular.
Now factorize A as A = LU with L lower triangular and U unit upper triangular:
−1 2 1     l11  0   0     1 u12 u13
 0 8 6  =  l21 l22  0     0  1  u23
−2 0 5     l31 l32 l33    0  0   1
Equating entries of the first column gives l11 = −1, l21 = 0, l31 = −2.
From the first row, l11 u12 = 2 ⟹ u12 = −2, and l11 u13 = 1 ⟹ u13 = −1.
Continuing in the same way, l22 = 8, u23 = 6/8 = 3/4, l32 = 0 − (−2)(−2) = −4, and
l33 = 5 − [(−2)(−1) + (−4)(3/4)] = 6. Hence
A = LU = −1  0 0     1 −2  −1
          0  8 0     0  1 3/4
         −2 −4 6     0  0   1
Now AX = B ⇔ L(UX) = B. Put UX = Y, so LY = B:
−1  0 0     y1      0
 0  8 0     y2  =  10
−2 −4 6     y3    −11
⇔ −y1 = 0,  8y2 = 10,  −2y1 − 4y2 + 6y3 = −11
⟹ y1 = 0, 8y2 = 10 ⟹ y2 = 5/4, and y3 = −1.
Then UX = Y:
1 −2  −1     x1      0
0  1 3/4     x2  =  5/4
0  0   1     x3     −1
⟹ x3 = −1, x2 = 5/4 − (3/4)(−1) = 2, and x1 = 2x2 + x3 = 3.
Hence
X = 3
    2
   −1
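A Python sketch of the Crout-style factorization used above (L lower triangular, U unit upper triangular), followed by forward and backward substitution (illustrative code):

```python
# Crout LU factorisation, then LY = B by forward and UX = Y by backward
# substitution, for the system of the worked example.
def crout_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):          # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):      # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    y = [0.0] * n                      # forward substitution: L y = b
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                      # backward substitution: U x = y
    for i in range(n - 1, -1, -1):
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

A = [[-1, 2, 1], [0, 8, 6], [-2, 0, 5]]
b = [0, 10, -11]
print(crout_solve(A, b))   # [3.0, 2.0, -1.0]
```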
Exercise: Solve the following systems using LU decomposition.
1.  x + y + z = 0
    2x − y + z = 6
    3x + 2y − 4z = −4
2.  2x + 6y + 4z = 5
    6x + 19y + 12z = 6
    2x + 8y + 14z = 7
3.  2x + y + 3z = −1
    4x + y + 7z = 5
    −6x − 22y − 12z = −2
Tri-Diagonal Matrix
A tri-diagonal matrix has non-zero entries only on the main diagonal and the two diagonals
adjacent to it:
α1 β1  0  0 ⋯  0
γ1 α2 β2  0 ⋯  0
 0 γ2 α3 β3 ⋯  ⋮
 ⋮           ⋱  β_{n−1}
 0  0  0  0 γ_{n−1} αn
For example, for the augmented matrix
 2 −1  0  0 ⋮ 1
−1  2 −1  0 ⋮ 0
 0 −1  2 −1 ⋮ 0
 0  0 −1  2 ⋮ 1
the coefficient matrix A is tri-diagonal; its LU factorization, with L lower bidiagonal and U unit
upper bidiagonal, is found by equating the entries of A = LU.
Iteration is a popular technique for finding roots of equations. A generalization of fixed point
iteration can be applied to systems of linear equations to produce accurate results. The Gauss-Seidel
method is the most common iterative method and is attributed to Philipp Ludwig von Seidel
(1821-1896).
Consider that the n × n square matrix A is split into three parts: the main diagonal D, the
strictly lower triangular part −L, and the strictly upper triangular part −U. We have
A = D − L − U,
where D contains the diagonal entries aii of A, L contains the negatives of the entries below the
diagonal, and U the negatives of the entries above the diagonal.
The solution to the linear system AX = B can be obtained starting with an initial vector P0 and
using the iteration scheme
P_{k+1} = M P_k + C,
where
M = (D − L)^(−1) U
and
C = (D − L)^(−1) B.
A sufficient condition for the method to be applicable is that A is strictly diagonally dominant.
For the purpose of hand calculation let's see a set of 3 linear equations containing 3 unknowns.
If the diagonal elements are all nonzero, the first equation can be solved for x1, the second for
x2, and the third for x3:
(a) x1 = (b1 − a12 x2 − a13 x3)/a11
(b) x2 = (b2 − a21 x1 − a23 x3)/a22
(c) x3 = (b3 − a31 x1 − a32 x2)/a33
Steps to be followed
i. Using the initial values x2 = 0.0 and x3 = 0.0, solve for x1 from (a)
ii. Using the value of x1 from step i and x3 = 0.0, solve for x2 from (b)
iii. Using the value of x1 from step i and that of x2 from step ii, solve for x3 from (c)
iv. Using the value of x2 from step ii and that of x3 from step iii, solve for x1 from (a)
v. Using the value of x1 from step iv and that of x3 from step iii, solve for x2 from (b)
vi. Using the value of x1 from step iv and that of x2 from step v, solve for x3 from (c)
Example 2: Use the Gauss-Seidel method to obtain the solution of the following system of linear
equations.
5x1 − x2 + x3 = 4
x1 + 3x2 + x3 = 2
−x1 + x2 + 4x3 = 3
Solving for x1 from eq. 1: x1 = (4 + x2 − x3)/5
for x2 from eq. 2: x2 = (2 − x1 − x3)/3
for x3 from eq. 3: x3 = (3 + x1 − x2)/4
Executing the above steps repetitively, starting from x1 = x2 = x3 = 0, the values start to repeat
after the 8th iteration, hence we can stop the calculation
and take the final values as the solution of the linear system of equations.
Hence, x1 = 0.65625
x2 = 0.15625
x3 = 0.875
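The hand computation can be mirrored by a short Python sketch of the Gauss-Seidel sweeps for this system (illustrative code):

```python
# Gauss-Seidel iteration for the strictly diagonally dominant system of
# Example 2, updating each unknown with the newest available values.
def gauss_seidel(x0, n_iter=15):
    x1, x2, x3 = x0
    for _ in range(n_iter):
        x1 = (4 + x2 - x3) / 5.0
        x2 = (2 - x1 - x3) / 3.0
        x3 = (3 + x1 - x2) / 4.0
    return x1, x2, x3

x1, x2, x3 = gauss_seidel((0.0, 0.0, 0.0))
print(round(x1, 5), round(x2, 5), round(x3, 5))   # 0.65625 0.15625 0.875
```

Because each new value is used immediately, convergence is noticeably faster than Jacobi-style simultaneous updates would be.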
3.3. Review Exercise
1 3 7 5
1 2 3 1 0 1
1 3 1 2 1 3
(a ) (b) 3 1 2 (c ) (d ) 2 1 2
8 4 9 5 6 2 0 1 4 1 2 1
1 1 1 3
2. Find the values of x for which the matrix
A = 1 1 0
    1 0 1
    1 2 x
is invertible. In that case give A^(−1).
4. Given that
1 2 0         1 2 0
2 1 0    A    2 1 0    = 5 I3,
0 0 1         0 0 1
what is det(A)?
Evaluate the following determinants:
(a) the 4 × 4 identity matrix
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
(b) 1  1  1
    a  b  c
    a² b² c²
(c) x+m  x    x
    x    x+m  x
    x    x    x+m
Solve the following systems of linear equations:
(a) 2x + 5y = 1
    3x + 2y = 4
(b) 2x + y + 6z = 6
    3x + 2y + z = 1
    x + 2y + 9z = 9
(c) 3x + 4y + 7z = 0
    3x + 2y + 2z = 2
    x + y + 2z = 4
(d) y + 2z = 3
    x + 3y + z = 5
(e) x + y + z + w = 4
    2y + z + 3w = 4
    2x + y + z + 2w = 5
    x + y + w = 4
A =  25  5 1
     64  8 1
    144 12 1
8. Use the LU decomposition method to solve the following simultaneous linear equations.
 25  5 1     a1     106.8
 64  8 1     a2  =  177.2
144 12 1     a3     279.2
9. A man refused to tell anyone his age, but he likes to drop hints about it. He remarks that
his age plus twice his mother's age adds up to 140, and also that his age plus his father's age
adds up to 105. Furthermore, he says that the sum of his age and his mother's age is 30 more
than his father's age. Calculate the man's age or show that his hints contradict one another.
UNIT FOUR
4. FINITE DIFFERENCES
4.1. INTRODUCTION
Let y = f(x) be any function given by the values y0, y1, y2, …, yn, which it takes for the
equidistant values x0, x1, x2, …, xn of the independent variable x. Then y1 − y0, y2 − y1,
y3 − y2, …, yn − y_{n−1} are called the first differences of the function y.
They are denoted by ∆y0, ∆y1, … etc.
We have ∆y0 = y1 − y0
∆y1 = y2 − y1
...
∆y_{n−1} = yn − y_{n−1}
The symbol ∆ is called the difference operator. The differences of the first differences, denoted
by ∆²y0, ∆²y1, ..., ∆²y_{n−1}, are called second differences, where
∆²y0 = ∆(∆y0) = ∆(y1 − y0) = ∆y1 − ∆y0
= (y2 − y1) − (y1 − y0)
= y2 − 2y1 + y0
∆²y1 = ∆(∆y1) = y3 − 2y2 + y1
∆² is called the second difference operator. Similarly,
∆³y0 = ∆²y1 − ∆²y0 = y3 − 3y2 + 3y1 − y0
...
∆ⁿy0 = ∆ⁿ⁻¹y1 − ∆ⁿ⁻¹y0
4.2.1. Difference Table
It is a convenient method for displaying the successive differences of a function. The following
table is an example to show how the differences are formed
x    y     ∆      ∆²      ∆³     ⋯
x0   y0
           ∆y0
x1   y1           ∆²y0
           ∆y1            ∆³y0
x2   y2           ∆²y1
           ∆y2            ∆³y1
x3   y3           ∆²y2
           ∆y3
x4   y4
The above table is called a diagonal difference table. The first term in the table is y0. It is called
the leading term.
The differences ∆y0, ∆²y0, ∆³y0, …, are called the leading differences. The differences ∆ⁿyi with
a fixed subscript i are called forward differences. In forming such a difference table care must be
taken to maintain the correct sign.
Example: Form the difference table for the data
x: 0, 10, 20, 30
y: 0, 0.174, 0.347, 0.518
Solution
x     y       ∆        ∆²       ∆³
0     0
             0.174
10    0.174          −0.001
             0.173            −0.001
20    0.347          −0.002
             0.171
30    0.518
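Building such a table is mechanical; a short Python sketch (illustrative) is:

```python
# Build a forward difference table: column k holds the k-th differences.
def difference_table(y):
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

y = [0.0, 0.174, 0.347, 0.518]
for col in difference_table(y):
    print([round(v, 3) for v in col])
# [0.0, 0.174, 0.347, 0.518]
# [0.174, 0.173, 0.171]
# [-0.001, -0.002]
# [-0.001]
```

Rounding is needed only for display; floating-point subtraction introduces tiny errors in the last digits.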
Group work Exercise
Let y = f(x) be a function of x and let x, x + h, x + 2h, x + 3h, etc., be the consecutive values
of x. Then the shift operator E is defined by
Ef(x) = f(x + h),
E²f(x) = E(Ef(x)) = Ef(x + h) = f(x + 2h),
and, in general, Eⁿf(x) = f(x + nh). The operator satisfies
3. Eᵐ(Eⁿf(x)) = Eⁿ(Eᵐf(x)) = Eⁿ⁺ᵐf(x), where m, n are positive integers.
In the y-notation, Ey0 = y1,
Ey1 = y2, and, for example,
E²y0 = (1 + ∆)²y0 = y0 + 2∆y0 + ∆²y0.
Relation between ∆ and E: we have
∆f(x) = f(x + h) − f(x)
= Ef(x) − f(x)
⇒ ∆f(x) = (E − 1)f(x).
Hence ∆ = E − 1,
i.e., E = 1 + ∆.
Also, E∆ ≡ ∆E, since
E∆f(x) = E[f(x + h) − f(x)]
= Ef(x + h) − Ef(x)
= f(x + 2h) − f(x + h)
= ∆f(x + h)
= ∆Ef(x).
∴ E∆ ≡ ∆E.
Example: Evaluate (∆²/E)x³, taking the interval of differencing to be h.
Solution:
(∆²/E)x³ = (∆²E⁻¹)x³
= (E − 1)²E⁻¹x³
= (E² − 2E + 1)E⁻¹x³
= (E − 2 + E⁻¹)x³
= (x + h)³ − 2x³ + (x − h)³
= 6xh².
Find the first term of the series whose second and subsequent terms are
8, 3, 0, –1, 0, …
4.3. Backward Differences
Let y = f(x) be a function given by the values y0, y1, …, yn which it takes for the equally spaced
values x0, x1, …, xn of the independent variable x. Then y1 − y0, y2 − y1, …, yn − y_{n−1} are called
the first backward differences of y = f(x). They are denoted by ∇y1, ∇y2, ..., ∇yn, respectively.
Thus we have
y1 − y0 = ∇y1
y2 − y1 = ∇y2
...
yn − y_{n−1} = ∇yn.
A backward difference table can be formed with columns x, y, ∇y, ∇²y, … .
Note: In such a table the differences ∇ⁿyi with a fixed subscript i lie along a diagonal
sloping upward.
Alternative notation: Let the function y = f(x) be given at equal spaces of the independent
variable x at x = a, a + h, a + 2h, …; then we define
∇f(x) = f(x) − f(x − h),
where ∇ is called the backward difference operator and h is called the interval of differencing.
In this notation, ∇ⁿf(x + nh) = ∇ⁿ⁻¹f(x + nh) − ∇ⁿ⁻¹f(x + (n − 1)h).
Similarly we get
∇²f(x + 2h) = ∇(∇f(x + 2h))
= ∇(∆f(x + h))
= ∆(∆f(x))
= ∆²f(x)
….
∇ⁿf(x + nh) = ∆ⁿf(x).
Relations between ∇, E and ∆:
(a) ∇f(x) = f(x) − f(x − h) = f(x) − E⁻¹f(x)
⇒ ∇ = 1 − E⁻¹,
or ∇ = (E − 1)/E.
(b) ∆∇ = ∆ − ∇.
(c) ∇ = E⁻¹∆.
Proof of (b):
∆∇f(x) = (E − 1)(1 − E⁻¹)f(x)
= Ef(x) − f(x) − Ef(x − h) + f(x − h)
= f(x + h) − f(x) − f(x) + f(x − h)
= [(E − 1) − (1 − E⁻¹)]f(x)
= (∆ − ∇)f(x)
∴ ∆∇f(x) = (∆ − ∇)f(x), i.e. ∆∇ = ∆ − ∇.
Proof of (c): E⁻¹∆f(x) = E⁻¹[f(x + h) − f(x)]
= f(x) − f(x − h) = ∇f(x).
∴ ∇ = E⁻¹∆.
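These operator identities can also be checked numerically; the sketch below (our own illustration) encodes E, E⁻¹, ∆ and ∇ for f(x) = x³ with h = 0.5 and compares both sides at a sample point:

```python
# Numerical check of the operator identities (Δ∇)f = (Δ − ∇)f and ∇f = (E⁻¹Δ)f.
h = 0.5
f  = lambda x: x**3
Ei = lambda g: (lambda x: g(x - h))         # inverse shift E⁻¹
D  = lambda g: (lambda x: g(x + h) - g(x))  # forward difference Δ
B  = lambda g: (lambda x: g(x) - g(x - h))  # backward difference ∇

x = 2.0
lhs  = D(B(f))(x)                 # Δ∇ f
rhs  = D(f)(x) - B(f)(x)          # (Δ − ∇) f
lhs2 = B(f)(x)                    # ∇ f
rhs2 = Ei(D(f))(x)                # E⁻¹ Δ f
```

Up to floating-point rounding, lhs equals rhs and lhs2 equals rhs2, as the proofs above guarantee for any f and h.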
4.4. Central Differences
The central difference operator δ is defined by
y1 − y0 = δy_{1/2}, y2 − y1 = δy_{3/2}, ..., yn − y_{n−1} = δy_{n−1/2}.
Similarly, higher-order central differences can be defined. With the values of x and y as in the
preceding two tables, a central difference table can be formed.
It is clear from the three tables that in a definite numerical case the same numbers occur in the
same positions whether we use forward, backward or central differences.
Thus we obtain
∆y0 = ∇y1 = δy_{1/2},
∆²y0 = ∇²y2 = δ²y1.
Exercise
1. Find the forward difference table corresponding to the data points (1, 3), (2, 5), (3, 7), and
(4, 10).
2. Find the forward table corresponding to the data points (1, 3), (2, 5), (3, 8), and (4, 10).
3. Find the backward difference table corresponding to the data points (1, 3), (3, 5), (5, 7),
and (7, 10).
4. Find the backward difference table corresponding to the data points (1, 3), (3, 5), (5, 8),
and (7, 10).
UNIT FIVE
5. INTERPOLATION
5.1. Introduction
Unit Objectives:
At the end of the unit students will be able to:
Derive Newton’s method of interpolation,
Solve problems using Newton’s method of interpolation
Derive Lagrangian method of interpolation,
Solve problems using Lagrangian method of interpolation
Derive Newton’s divided difference method of interpolation
Apply Newton’s divided difference method of interpolation
5.2.1 Newton's Forward Difference Interpolation Formula
Given the set of (n + 1) values (x0, y0), (x1, y1), (x2, y2), . . . , (xn, yn) of x and y, it is
required to find yn(x), a polynomial of nth degree such that y and yn(x) agree at the tabulated
points. Let the values of x be equidistant, i.e.
xi − x_{i−1} = h, for i = 1, 2, 3, ..., n.
Therefore, x1 = x0 + h, x2 = x0 + 2h, etc., and in general
xi = x0 + ih, for i = 1, 2, 3, ..., n.
Since yn(x) is a polynomial of the nth degree, it may be written as
yn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + a3(x − x0)(x − x1)(x − x2) + ...
+ an(x − x0)(x − x1)(x − x2)...(x − x_{n−1}). (5.2)
Putting x = x0 in (5.2) we obtain a0 = y0, and
again putting x = x1 in (5.2) we obtain
a1 = (y1 − y0)/(x1 − x0), i.e. a1 = ∆y0/h.
Similarly,
a2 = ∆²y0/(2! h²), a3 = ∆³y0/(3! h³), . . . , an = ∆ⁿy0/(n! hⁿ).
Replacing a0, a1, a2, ..., an by these values in (5.2), we obtain
yn(x) = y0 + (∆y0/h)(x − x0) + (∆²y0/(2! h²))(x − x0)(x − x1)
+ (∆³y0/(3! h³))(x − x0)(x − x1)(x − x2)
+ ... + (∆ⁿy0/(n! hⁿ))(x − x0)(x − x1)(x − x2)...(x − x_{n−1}). (5.3)
Setting x = x0 + ph, so that p = (x − x0)/h, this becomes
yn(x) = y0 + p∆y0 + [p(p − 1)/2!]∆²y0 + [p(p − 1)(p − 2)/3!]∆³y0 + ...
+ [p(p − 1)(p − 2)...(p − n + 1)/n!]∆ⁿy0. (5.4)
This is Newton's forward difference interpolation formula and is useful for interpolating near the
beginning of a set of tabular values.
5.2.2 Newton’s Backward Difference Interpolation Formula
Given the set of (n + 1) values (x0, y0), (x1, y1), (x2, y2), . . . , (xn, yn) of x and y, it is
required to find yn(x), a polynomial of nth degree such that y and yn(x) agree at the tabulated
points. Let the values of x be equidistant, i.e. xi − x_{i−1} = h for i = 1, 2, 3, ..., n, so that
xi = x0 + ih. This time we write
yn(x) = a0 + a1(x − xn) + a2(x − xn)(x − x_{n−1}) + a3(x − xn)(x − x_{n−1})(x − x_{n−2}) + ...
+ an(x − xn)(x − x_{n−1})(x − x_{n−2})...(x − x1). (5.5)
Putting x = xn gives a0 = yn. Putting x = x_{n−1} gives
yn(x_{n−1}) = y_{n−1} = a0 + a1(x_{n−1} − xn) = yn + a1(x_{n−1} − xn),
so a1 = (yn − y_{n−1})/(xn − x_{n−1}), i.e. a1 = ∇yn/h.
Similarly,
a2 = ∇²yn/(2! h²), a3 = ∇³yn/(3! h³), . . . , an = ∇ⁿyn/(n! hⁿ).
Replacing a0, a1, a2, ..., an by these values in (5.5), we obtain
yn(x) = yn + (∇yn/h)(x − xn) + (∇²yn/(2! h²))(x − xn)(x − x_{n−1})
+ (∇³yn/(3! h³))(x − xn)(x − x_{n−1})(x − x_{n−2})
+ ... + (∇ⁿyn/(n! hⁿ))(x − xn)(x − x_{n−1})(x − x_{n−2})...(x − x1). (5.6)
Setting x = xn + ph, so that p = (x − xn)/h, this becomes
yn(x) = yn + p∇yn + [p(p + 1)/2!]∇²yn + [p(p + 1)(p + 2)/3!]∇³yn + ...
+ [p(p + 1)(p + 2)...(p + n − 1)/n!]∇ⁿyn. (5.7)
This is Newton's backward difference interpolation formula and is useful for interpolating near the
end of the tabular values.
Example 5.1: Using Newton's forward difference interpolation formula, find the form of the
function y(x) from the following table.
x:     0  1  2  3
f(x):  1  2  1  10
Solution: We have the following forward difference table for the data (Table 5.1).
x   f(x)   ∆    ∆²    ∆³
0    1
           1
1    2         −2
          −1         12
2    1         10
           9
3   10
Here x0 = 0 and h = 1, so p = x, and formula (5.4) gives
y3(x) = 1 + x(1) + [(x² − x)/2](−2) + [x(x − 1)(x − 2)/6](12)
= 1 + x − (x² − x) + 2x(x² − 3x + 2).
Hence y3(x) = 2x³ − 7x² + 6x + 1.
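Formula (5.4) can be implemented directly; the Python sketch below (illustrative) reproduces Example 5.1:

```python
# Newton's forward difference interpolation on equispaced data.
def newton_forward(xs, ys):
    h = xs[1] - xs[0]
    # leading differences: y0, Δy0, Δ²y0, ...
    diffs, col = [ys[0]], list(ys)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])
    def p(x):
        t = (x - xs[0]) / h          # the p of formula (5.4)
        total, fact, prod = 0.0, 1, 1.0
        for k, d in enumerate(diffs):
            if k > 0:
                fact *= k
                prod *= (t - (k - 1))
            total += d * prod / fact
        return total
    return p

p = newton_forward([0, 1, 2, 3], [1, 2, 1, 10])
print([p(x) for x in [0, 1, 2, 3]])   # [1.0, 2.0, 1.0, 10.0]
```

By construction the interpolant reproduces the data exactly, and between the nodes it evaluates the cubic 2x³ − 7x² + 6x + 1 found above.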
Example 5.2: Find the interpolating polynomial corresponding to the data (1, 5), (2, 9), (3, 14), and
(4, 21), using Newton's backward difference interpolation polynomial.
Solution: We have the following backward difference table for the data (Table 5.2).
x   f(x)   ∇    ∇²    ∇³
1    5
           4
2    9          1
           5          1
3   14          2
           7
4   21
With xn = 4 and h = 1, formula (5.6) gives
y3(x) = 21 + 7(x − 4) + (x − 4)(x − 3) + (1/6)(x − 4)(x − 3)(x − 2)
= (x³ − 3x² + 26x + 6)/6.
Example 5.3: The following table gives values of tan x.
x:      0.10    0.15    0.20    0.25    0.30
tan x:  0.1003  0.1511  0.2027  0.2553  0.3093
Find: i) tan 0.12  ii) tan 0.26.
Solution: We have the following forward difference table for the data (Table 5.3).
x     f(x)     ∆       ∆²      ∆³      ∆⁴
0.10  0.1003
              0.0508
0.15  0.1511          0.0008
              0.0516          0.0002
0.20  0.2027          0.0010          0.0002
              0.0526          0.0004
0.25  0.2553          0.0014
              0.0540
0.30  0.3093
i) To find tan 0.12 we use Newton's forward difference interpolation polynomial with
x0 = 0.10, h = 0.05, and p = (0.12 − 0.10)/0.05 = 0.4:
y4(0.12) = tan 0.12 ≈ 0.1003 + 0.4(0.0508) + [0.4(0.4 − 1)/2](0.0008)
+ [0.4(0.4 − 1)(0.4 − 2)/6](0.0002) + [0.4(0.4 − 1)(0.4 − 2)(0.4 − 3)/24](0.0002)
= 0.1205.
ii) To find tan 0.26 we use Newton's backward difference interpolation polynomial.
With x4 = 0.30 and h = x_{i+1} − xi = 0.05, we have p = (x − x4)/h = (0.26 − 0.30)/0.05 = −0.8.
Hence formula (5.7) gives
y4(x) = y4 + p∇y4 + [p(p + 1)/2!]∇²y4 + [p(p + 1)(p + 2)/3!]∇³y4
+ [p(p + 1)(p + 2)(p + 3)/4!]∇⁴y4,
y4(0.26) = tan 0.26 ≈ 0.3093 + (−0.8)(0.0540) + [(−0.8)(−0.8 + 1)/2](0.0014)
+ [(−0.8)(−0.8 + 1)(−0.8 + 2)/6](0.0004) + [(−0.8)(−0.8 + 1)(−0.8 + 2)(−0.8 + 3)/24](0.0002)
= 0.2662.
Example 5.4: Using Newton's forward difference formula, find the sum
Sn = 1³ + 2³ + 3³ + ... + n³.
Solution: We have S_{n+1} = Sn + (n + 1)³,
or ∆Sn = S_{n+1} − Sn = (n + 1)³.
The first few sums are S1 = 1, S2 = 9, S3 = 36, S4 = 100, S5 = 225, with differences
∆: 8, 27, 64, 125; ∆²: 19, 37, 61; ∆³: 18, 24; ∆⁴: 6.
Since the fourth differences are constant, Newton's forward formula with x0 = 1, h = 1, and
p = n − 1 gives
Sn = 1 + (n − 1)(8) + [(n − 1)(n − 2)/2](19) + [(n − 1)(n − 2)(n − 3)/6](18)
+ [(n − 1)(n − 2)(n − 3)(n − 4)/24](6)
= [n(n + 1)/2]².
Activity 5.1:
1. For the following table, find the value of f(5.5) using Newton's forward difference interpolation
formula.
x:     3   4   5   6   7   8   9
f(x): 13  21  31  43  57  73  91
x 4 5 7 10 11 13
3. Given
5.2.3 Gauss Interpolating Polynomial
Newton-Gregory forward and backward difference interpolation polynomials are suitable for
interpolating near the beginning and the end of the given data points, respectively. For interpolating
near the middle, we present the method called the Gauss interpolating polynomial. Such
formulae involve differences near the horizontal line through x = xi. Let (xi, f(xi)) for 0 ≤ i ≤ n
be n + 1 data points such that x_{i+1} − xi = h. Let xi be any point for which ∆ⁿf(xi) exists,
and let x = xi + ph.
Gauss' forward formula is assumed in the form
y_p = y0 + G1∆y0 + G2∆²y_{−1} + G3∆³y_{−1} + G4∆⁴y_{−2} + ... (5.8)
where G1, G2, . . . have to be determined. The y_p on the left side can be expressed in terms of
y0, ∆y0 and higher order differences, arranged in the central difference table:
x     f(x)    ∆       ∆²       ∆³       ∆⁴       ∆⁵       ∆⁶
x−3   y−3
             ∆y−3
x−2   y−2           ∆²y−3
             ∆y−2            ∆³y−3
x−1   y−1           ∆²y−2             ∆⁴y−3
             ∆y−1            ∆³y−2             ∆⁵y−3
x0    y0            ∆²y−1             ∆⁴y−2             ∆⁶y−3
             ∆y0             ∆³y−1             ∆⁵y−2
x1    y1            ∆²y0              ∆⁴y−1
             ∆y1             ∆³y0
x2    y2            ∆²y1
             ∆y2
x3    y3
Clearly, y_p = E^p y0
= (1 + ∆)^p y0
= y0 + p∆y0 + [p(p − 1)/2!]∆²y0 + [p(p − 1)(p − 2)/3!]∆³y0 + ...
Similarly, the right side of (5.8) can be expressed in terms of y0, ∆y0 and higher order
differences.
We have ∆²y_{−1} = ∆²E⁻¹y0
= ∆²(1 + ∆)⁻¹y0
= ∆²(1 − ∆ + ∆² − ∆³ + ...)y0
= ∆²(y0 − ∆y0 + ∆²y0 − ∆³y0 + ...)
= ∆²y0 − ∆³y0 + ∆⁴y0 − ∆⁵y0 + ...
∆³y_{−1} = ∆³y0 − ∆⁴y0 + ∆⁵y0 − ∆⁶y0 + ...
∆⁴y_{−2} = ∆⁴E⁻²y0
= ∆⁴(1 + ∆)⁻²y0
= ∆⁴(1 − 2∆ + 3∆² − ...)y0 = ∆⁴y0 − 2∆⁵y0 + ...
Comparing the coefficients of the two sides of (5.8), we obtain
G1 = p,
G2 = p(p − 1)/2!,
G3 = (p + 1)p(p − 1)/3!, (5.10)
G4 = (p + 1)p(p − 1)(p − 2)/4!, ...
Substituting these in (5.8), Gauss' forward interpolation formula becomes
y_p(x) = y0 + p∆y0 + [p(p − 1)/2!]∆²y_{−1} + [(p + 1)p(p − 1)/3!]∆³y_{−1}
+ [(p + 1)p(p − 1)(p − 2)/4!]∆⁴y_{−2} + ... (5.11)
Gauss' backward formula is assumed in the form
y_p(x) = y0 + G'1∆y_{−1} + G'2∆²y_{−1} + G'3∆³y_{−2} + G'4∆⁴y_{−2} + ... (5.12)
where G'1, G'2, . . . have to be determined. Following the same procedure as in Gauss' forward
formula, we obtain
G'1 = p,
G'2 = p(p + 1)/2!,
G'3 = (p + 1)p(p − 1)/3!, (5.13)
G'4 = (p + 2)(p + 1)p(p − 1)/4!, ...
Hence Gauss' backward interpolation formula is
y_p(x) = y0 + p∆y_{−1} + [(p + 1)p/2!]∆²y_{−1} + [(p + 1)p(p − 1)/3!]∆³y_{−2}
+ [(p + 2)(p + 1)p(p − 1)/4!]∆⁴y_{−2} + ... (5.14)
Example 5.5: For the following table, find the value of e^1.17 using Gauss' forward formula.
x:    1.00    1.05    1.10    1.15    1.20    1.25    1.30
e^x:  2.7183  2.8577  3.0042  3.1582  3.3201  3.4903  3.6693
Solution:
Clearly, h = x_{i+1} − xi = 0.05 and x = 1.17 is nearest to x3 = 1.15. Taking x0 = 1.15,
p = (x − x0)/h = (1.17 − 1.15)/0.05 = 0.4. The corresponding difference table is given by
x     e^x     ∆       ∆²      ∆³      ∆⁴
1.00  2.7183
              0.1394
1.05  2.8577          0.0071
              0.1465          0.0004
1.10  3.0042          0.0075          0
              0.1540          0.0004
1.15  3.1582          0.0079          0
              0.1619          0.0004
1.20  3.3201          0.0083          0.0001
              0.1702          0.0005
1.25  3.4903          0.0088
              0.1790
1.30  3.6693
Applying formula (5.11),
y6(x) = y0 + p∆y0 + [p(p − 1)/2!]∆²y_{−1} + [(p + 1)p(p − 1)/3!]∆³y_{−1}
+ [(p + 1)p(p − 1)(p − 2)/4!]∆⁴y_{−2} + ...
= 3.1582 + 0.4(0.1619) + [0.4(−0.6)/2](0.0079) + [(1.4)(0.4)(−0.6)/6](0.0004) + ...
= 3.2221.
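Formula (5.11) can be checked with a short Python sketch on the same table (illustrative; the index bookkeeping ∆ᵏy_j = t[k][j + c] is our own convention):

```python
# Gauss forward formula (5.11) around x0 = 1.15 for e^1.17, h = 0.05.
xs = [1.00, 1.05, 1.10, 1.15, 1.20, 1.25, 1.30]
ys = [2.7183, 2.8577, 3.0042, 3.1582, 3.3201, 3.4903, 3.6693]

def diff_table(y):
    t = [list(y)]
    while len(t[-1]) > 1:
        prev = t[-1]
        t.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return t

t = diff_table(ys)
c = 3                                    # index of x0 = 1.15
p = (1.17 - xs[c]) / 0.05                # p = 0.4
terms = [
    t[0][c],                             # y0
    p * t[1][c],                         # p Δy0
    p * (p - 1) / 2 * t[2][c - 1],       # Δ²y_{-1} term
    (p + 1) * p * (p - 1) / 6 * t[3][c - 1],             # Δ³y_{-1} term
    (p + 1) * p * (p - 1) * (p - 2) / 24 * t[4][c - 2],  # Δ⁴y_{-2} term
]
approx = sum(terms)                      # close to e^1.17 = 3.2220...
```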
Example 5.6: For the same table as in Example 5.5, find the value of e^1.17 using Gauss'
backward formula.
Solution: As before, h = 0.05 and, taking x0 = 1.15,
p = (x − x0)/h = (1.17 − 1.15)/0.05 = 0.4, with the same difference table as in Example 5.5.
Applying formula (5.14),
y6(x) = y0 + p∆y_{−1} + [(p + 1)p/2!]∆²y_{−1} + [(p + 1)p(p − 1)/3!]∆³y_{−2}
+ [(p + 2)(p + 1)p(p − 1)/4!]∆⁴y_{−2} + ...
= 3.1582 + 0.4(0.1540) + [(1.4)(0.4)/2](0.0079) + [(1.4)(0.4)(−0.6)/6](0.0004) + ...
= 3.2220.
Activity 5.2:
1. For the following table, find the value of f(5.5) using Gauss' forward formula.
x:     3   4   5   6   7   8   9
f(x): 13  21  31  43  57  73  91
2. For the following table, find the value of f(8.5) using Gauss' backward formula.
x: 4 5 7 10 11 13
3. Given
5.3 Interpolation with Unevenly Spaced Points
Suppose the data
x:      x0      x1      x2     …    xn
f(x):  f(x0)  f(x1)  f(x2)  …  f(xn)
is given at distinct, not necessarily evenly spaced, points. We seek an interpolating polynomial
of the form
pn(x) = ℓ0(x) f0 + ℓ1(x) f1 + ... + ℓn(x) fn,
that is,
pn(x) = Σ_{i=0}^{n} ℓi(x) fi, (5.15)
where f(xi) = fi and the ℓi(x), i = 0, 1, 2, ..., n, are polynomials of degree n. This polynomial fits
the given data exactly.
At x = x0, we get
f(x0) = pn(x0) = ℓ0(x0) f(x0) + ℓ1(x0) f(x1) + ... + ℓn(x0) f(xn),
and in general, at x = xi,
f(xi) = pn(xi) = ℓ0(xi) f(x0) + ℓ1(xi) f(x1) + ... + ℓi(xi) f(xi) + ... + ℓn(xi) f(xn).
Therefore, the ℓi(x), which are polynomials of degree n, satisfy the conditions
ℓi(xj) = 0 if i ≠ j, and ℓi(xj) = 1 if i = j. (5.16)
Since ℓi(x) = 0 at x = x0, x1, x2, . . . , x_{i−1}, x_{i+1}, . . . , xn, we know that the factors
(x − x0), (x − x1), (x − x2), . . . , (x − x_{i−1}), (x − x_{i+1}), . . . , (x − xn)
divide ℓi(x), so that
ℓi(x) = c (x − x0)(x − x1)...(x − x_{i−1})(x − x_{i+1})...(x − xn),
where c is a constant.
Now, since ℓi(xi) = 1, we get
ℓi(xi) = 1 = c (xi − x0)(xi − x1)(xi − x2)...(xi − x_{i−1})(xi − x_{i+1})...(xi − xn).
Hence,
c = 1 / [(xi − x0)(xi − x1)...(xi − x_{i−1})(xi − x_{i+1})...(xi − xn)].
Therefore,
ℓi(x) = [(x − x0)(x − x1)...(x − x_{i−1})(x − x_{i+1})...(x − xn)]
/ [(xi − x0)(xi − x1)...(xi − x_{i−1})(xi − x_{i+1})...(xi − xn)]. (5.17)
Note that the denominator on the right hand side of ℓi(x) is obtained by setting x = xi in
the numerator. The polynomial given in (5.15), where the ℓi(x) are defined by (5.17), i.e.
pn(x) = Σ_{i=0}^{n} ℓi(x) fi, (5.18)
is called the Lagrange interpolating polynomial, and the ℓi(x) are called the Lagrange fundamental
polynomials.
We can write the Lagrange fundamental polynomials ℓi(x) in a simple notation. Denote
w(x) = (x − x0)(x − x1)...(x − xn),
which is the product of all the factors. Differentiating w(x) with respect to x and substituting x = xi,
we get
w'(xi) = (xi − x0)(xi − x1)(xi − x2)...(xi − x_{i−1})(xi − x_{i+1})...(xi − xn),
so that
ℓi(x) = w(x) / [(x − xi) w'(xi)]
and (5.18) becomes
pn(x) = Σ_{i=0}^{n} [w(x) / ((x − xi) w'(xi))] fi. (5.19)
LINEAR INTERPOLATION
For n = 1, we have the data
x:     x0      x1
f(x): f(x0)  f(x1)
The Lagrange linear interpolation polynomial is
p1(x) = ℓ0(x) f(x0) + ℓ1(x) f(x1)
= [(x − x1)/(x0 − x1)] f(x0) + [(x − x0)/(x1 − x0)] f(x1).
QUADRATIC INTERPOLATION
133
For n = 2, we have the data
x:     x0      x1      x2
f(x): f(x0)  f(x1)  f(x2)
The Lagrange fundamental polynomials are
ℓ0(x) = (x − x1)(x − x2) / [(x0 − x1)(x0 − x2)],
ℓ1(x) = (x − x0)(x − x2) / [(x1 − x0)(x1 − x2)],
ℓ2(x) = (x − x0)(x − x1) / [(x2 − x0)(x2 − x1)],
and the Lagrange quadratic interpolation polynomial is
p2(x) = ℓ0(x) f(x0) + ℓ1(x) f(x1) + ℓ2(x) f(x2)
= [(x − x1)(x − x2)/((x0 − x1)(x0 − x2))] f(x0) + [(x − x0)(x − x2)/((x1 − x0)(x1 − x2))] f(x1)
+ [(x − x0)(x − x1)/((x2 − x0)(x2 − x1))] f(x2).
Example 5. 7: Determine the linear Lagrange interpolating polynomial that passes through the
points (2,4) and (5, 1).
Solution: In this case we have
$\ell_0(x) = \frac{x - x_1}{x_0 - x_1} = \frac{x - 5}{2 - 5} = -\frac{1}{3}(x - 5), \quad \ell_1(x) = \frac{x - x_0}{x_1 - x_0} = \frac{x - 2}{5 - 2} = \frac{1}{3}(x - 2)$
The Lagrange linear interpolation polynomial is given by
$p_1(x) = \ell_0(x) f(x_0) + \ell_1(x) f(x_1)$
$= -\frac{1}{3}(x - 5)(4) + \frac{1}{3}(x - 2)(1)$
$= -x + 6.$
Example 5.8: Given that $f(0) = 1$, $f(1) = 3$, $f(3) = 55$, find the unique polynomial of degree
2 or less which fits the given data.
Solution: We have $x_0 = 0$, $f_0 = 1$, $x_1 = 1$, $f_1 = 3$, $x_2 = 3$, $f_2 = 55$. The Lagrange fundamental
polynomials are given by
$\ell_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} = \frac{(x - 1)(x - 3)}{(-1)(-3)} = \frac{1}{3}(x^2 - 4x + 3).$
$\ell_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} = \frac{(x - 0)(x - 3)}{(1)(-2)} = \frac{1}{2}(3x - x^2).$
$\ell_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} = \frac{(x - 0)(x - 1)}{(3)(2)} = \frac{1}{6}(x^2 - x).$
Hence, the Lagrange quadratic polynomial is given by
$P_2(x) = \ell_0(x) f(x_0) + \ell_1(x) f(x_1) + \ell_2(x) f(x_2)$
$= \frac{1}{3}(x^2 - 4x + 3)(1) + \frac{1}{2}(3x - x^2)(3) + \frac{1}{6}(x^2 - x)(55)$
$= 8x^2 - 6x + 1.$
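The construction above translates directly into code. The following Python fragment is an illustrative sketch (not part of the module; the function name `lagrange_interp` is our own) that evaluates the Lagrange form from the data of Example 5.8:

```python
def lagrange_interp(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial at x.

    xs, fs -- the data points (x_i, f_i); x -- the evaluation point.
    """
    n = len(xs)
    total = 0.0
    for i in range(n):
        # Fundamental polynomial l_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)
        li = 1.0
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += li * fs[i]
    return total

# Data of Example 5.8: f(0) = 1, f(1) = 3, f(3) = 55
xs, fs = [0.0, 1.0, 3.0], [1.0, 3.0, 55.0]
for x in [0.0, 1.0, 3.0, 2.0]:
    print(x, lagrange_interp(xs, fs, x), 8 * x**2 - 6 * x + 1)
```

Since the interpolant through three points is a unique quadratic, the two printed columns agree at every x, not only at the nodes.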
Example 5.9: Using Lagrange interpolation formula, find the form of the function y(x) from the
following table
x 0 1 3 4
f (x ) -12 0 12 24
Solution: Since $y(1) = 0$, $(x - 1)$ is a factor of $y(x)$. Write $y(x) = (x - 1)R(x)$, where
$R(x) = y(x)/(x - 1)$ takes the values

x       0    3    4
R(x)    12   6    8

Applying the Lagrange quadratic formula to this table gives
$R(x) = x^2 - 5x + 12.$
Hence the required polynomial approximation to $y(x)$ is given by
$y(x) = (x - 1)(x^2 - 5x + 12)$
$= x^3 - 6x^2 + 17x - 12.$
Remark: For a given data set, it is always possible to construct the Lagrange interpolation polynomial.
However, it is tedious and time consuming to collect and simplify the coefficients
of $x^i$, $i = 0, 1, 2, \ldots, n$. Now, assume that we have determined the Lagrange interpolation
polynomial of degree $n$ based on the data values $(x_i, f(x_i))$, $i = 0, 1, 2, \ldots, n$, at the $(n + 1)$
distinct points. Suppose that to this given data, a new value $(x_{n+1}, f(x_{n+1}))$ at the distinct
point $x_{n+1}$ is added at the end of the table. If we require the Lagrange interpolating
polynomial for this new data, then we need to compute all the Lagrange fundamental
polynomials again. The $n$th degree polynomial obtained earlier is of no use. This is the
disadvantage of Lagrange interpolation. However, Lagrange interpolation is a
fundamental result and is used in proving many theoretical results of interpolation.
Activity 5.3:
2. What is the disadvantage of Lagrange interpolation?
Construct the quadratic Lagrange interpolating polynomial that fits the data. Hence, find
f ( 12 ) . Compare with the exact value.
5.3.2 Newton’s Divided Difference Interpolation
Divided Differences
Let the data values $(x_i, f(x_i))$, $i = 0, 1, 2, \ldots, n$, be given. We define the divided differences as
follows. First divided difference:
Consider any two consecutive data values $(x_i, f(x_i))$ and $(x_{i+1}, f(x_{i+1}))$. Then,
we define the first divided difference as
$f[x_i, x_{i+1}] = \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i}, \quad i = 0, 1, 2, \ldots, n - 1.$
Therefore,
$f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}, \quad f[x_1, x_2] = \frac{f(x_2) - f(x_1)}{x_2 - x_1}$, etc.
Note that
$f[x_i, x_{i+1}] = \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} = \frac{f(x_i) - f(x_{i+1})}{x_i - x_{i+1}} = f[x_{i+1}, x_i].$
We say that the divided differences are symmetrical about their arguments.
Second divided difference: Consider any three consecutive data values $(x_i, f(x_i))$, $(x_{i+1}, f(x_{i+1}))$,
$(x_{i+2}, f(x_{i+2}))$. Then, we define the second divided difference as
$f[x_i, x_{i+1}, x_{i+2}] = \frac{f[x_{i+1}, x_{i+2}] - f[x_i, x_{i+1}]}{x_{i+2} - x_i}, \quad i = 0, 1, 2, \ldots, n - 2.$
Therefore,
$f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0}, \quad f[x_1, x_2, x_3] = \frac{f[x_2, x_3] - f[x_1, x_2]}{x_3 - x_1}$, etc.
The nth divided difference using all the data values in the table, is defined as
$f[x_0, x_1, \ldots, x_n] = \frac{f[x_1, x_2, \ldots, x_n] - f[x_0, x_1, \ldots, x_{n-1}]}{x_n - x_0}$          (5.20)
The divided differences can be arranged in a table as follows:

x      f(x)       First d.d.      Second d.d.          Third d.d.
x_0    f(x_0)
                  f[x_0, x_1]
x_1    f(x_1)                     f[x_0, x_1, x_2]
                  f[x_1, x_2]                          f[x_0, x_1, x_2, x_3]
x_2    f(x_2)                     f[x_1, x_2, x_3]
                  f[x_2, x_3]
x_3    f(x_3)
Example 5.10: Obtain the divided difference table for the data
x -1 0 2 3
f (x ) -8 3 1 12
Solution: We have the following divided difference table for the data (Table 5.9).

x     f(x)    First d.d.    Second d.d.    Third d.d.
-1    -8
              11
0     3                     -4
              -1                            2
2     1                     4
              11
3     12

Table 5.9: Divided differences (d.d.)
x -4 -1 0 2 5
f (x ) 245 23 4 6 335
We mentioned earlier that the interpolating polynomial representing a given set of data values is
unique, but the polynomial can be represented in various forms.
We write the interpolating polynomial as
$f(x) \approx p_n(x) = c_0 + (x - x_0)c_1 + (x - x_0)(x - x_1)c_2 + \cdots + (x - x_0)(x - x_1)\cdots(x - x_{n-1})c_n$
Setting $p_n(x_0) = f_0$, we obtain $c_0 = f_0$, since all the remaining terms vanish.
Setting $p_n(x_1) = f_1$, we obtain
$f_1 = c_0 + (x_1 - x_0)c_1$, or $c_1 = \frac{f_1 - c_0}{x_1 - x_0} = \frac{f_1 - f_0}{x_1 - x_0} = f[x_0, x_1].$
Setting $p_n(x_2) = f_2$, we obtain $f_2 = c_0 + (x_2 - x_0)c_1 + (x_2 - x_0)(x_2 - x_1)c_2$,
or
$c_2 = \frac{f_2 - f_0 - (x_2 - x_0) f[x_0, x_1]}{(x_2 - x_0)(x_2 - x_1)}$
$= \frac{1}{(x_2 - x_0)(x_2 - x_1)}\left[f_2 - f_0 - (x_2 - x_0)\left(\frac{f_1 - f_0}{x_1 - x_0}\right)\right]$
$= \frac{f_0}{(x_0 - x_1)(x_0 - x_2)} + \frac{f_1}{(x_1 - x_0)(x_1 - x_2)} + \frac{f_2}{(x_2 - x_0)(x_2 - x_1)}$
$= f[x_0, x_1, x_2].$
Continuing in this manner, we find $c_k = f[x_0, x_1, \ldots, x_k]$. In particular, the quadratic Newton interpolating polynomial is
$p_2(x) = f(x_0) + (x - x_0) f[x_0, x_1] + (x - x_0)(x - x_1) f[x_0, x_1, x_2].$
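The divided-difference coefficients and the evaluation of the Newton form are easily programmed. The sketch below is an illustrative Python fragment (the helper names are ours, not part of the module), applied to the data of Example 5.10:

```python
def divided_differences(xs, fs):
    """Return the Newton coefficients f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]."""
    coef = list(fs)
    n = len(xs)
    for k in range(1, n):
        # Overwrite in place, working from the bottom of the column upward,
        # so coef[i] ends up holding f[x_{i-k}, ..., x_i].
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form by nested multiplication (Horner-like)."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Data of Example 5.10; the top edge of Table 5.9 is [-8, 11, -4, 2]
xs, fs = [-1.0, 0.0, 2.0, 3.0], [-8.0, 3.0, 1.0, 12.0]
coef = divided_differences(xs, fs)
print(coef)                         # → [-8.0, 11.0, -4.0, 2.0]
print(newton_eval(xs, coef, 3.0))   # reproduces f(3) = 12
```

Adding one more data point only appends one coefficient, which is the permanence property discussed below.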
Example 5.11: Find f (x ) as a polynomial in x for the following data by Newton’s divided
difference formula
x -4 -1 0 2 5
f (x ) 1245 33 5 9 1335
Solution: First we form the divided difference table for the data.
x f (x ) First d.d Second d.d Third d.d Fourth d.d
-4 1245
-404
-1 33 94
-28 -14
0 5 10 3
2 13
2 9 88
442
5 1335
The Newton divided difference formula gives
$f(x) \approx f(x_0) + (x - x_0)f[x_0, x_1] + (x - x_0)(x - x_1)f[x_0, x_1, x_2]$
$\qquad + (x - x_0)(x - x_1)(x - x_2)f[x_0, x_1, x_2, x_3] + (x - x_0)(x - x_1)(x - x_2)(x - x_3)f[x_0, x_1, x_2, x_3, x_4]$
$= 1245 + (x + 4)(-404) + (x + 4)(x + 1)(94) + (x + 4)(x + 1)(x)(-14) + (x + 4)(x + 1)(x)(x - 2)(3)$
$= 3x^4 - 5x^3 + 6x^2 - 14x + 5.$

Example 5.12: Find f(x) as a polynomial in x by Newton's divided difference formula for the data

x       -2   -1   0    1    3    4
f(x)     9   16   17   18   44   81

Hence, interpolate at x = 0.5 and x = 3.1.
Solution: We form the divided difference table for the given data.
x     f(x)    First d.d.    Second d.d.    Third d.d.    Fourth d.d.
-2    9
              7
-1    16                    -3
              1                             1
0     17                    0                             0
              1                             1
1     18                    4                             0
              13                            1
3     44                    8
              37
4     81
Since, the fourth order differences are zeros, the data represents a third degree polynomial.
Newton’s divided difference formula gives the polynomial as
$f(x) \approx f(x_0) + (x - x_0)f[x_0, x_1] + (x - x_0)(x - x_1)f[x_0, x_1, x_2] + (x - x_0)(x - x_1)(x - x_2)f[x_0, x_1, x_2, x_3]$
$= 9 + (x + 2)(7) + (x + 2)(x + 1)(-3) + (x + 2)(x + 1)(x)(1)$
$= 9 + 7x + 14 - 3x^2 - 9x - 6 + x^3 + 3x^2 + 2x = x^3 + 17.$
Hence, $f(0.5) \approx (0.5)^3 + 17 = 17.125$
and $f(3.1) \approx (3.1)^3 + 17 = 46.791.$
Remark: Newton's divided difference interpolating polynomial possesses the permanence
property. Suppose that we add a new data value $(x_{n+1}, f(x_{n+1}))$ at the distinct point $x_{n+1}$ at
the end of the given table of values. This new data set can be represented by an
$(n + 1)$th degree polynomial. Now, the $(n + 1)$th column of the divided difference table
contains the $(n + 1)$th divided difference. Therefore, we need only add the term
$(x - x_0)(x - x_1)\cdots(x - x_{n-1})(x - x_n)\,f[x_0, x_1, \ldots, x_n, x_{n+1}]$
to the previously obtained polynomial.
Activity5.5:
x 1 3 4 5 7 10
2. Does the Newton’s divided difference interpolating polynomial have the permanence
property?
x 4 5 7 10 11 13
using the information that they satisfy the conditions $\ell_i(x_j) = 0$ for $i \ne j$ and $\ell_i(x_j) = 1$ for $i = j$.
3. Given
x 0.20 0.22 0.24 0.26 0.28 0.3
ii) $f(x) = \ln x$; $x_0 = 1$, $x_1 = 1.1$, $x_2 = 1.3$, $x_3 = 1.4$,
5.Use appropriate Lagrange interpolating polynomials of degrees one, two, and three to
approximate each of the following:
i) $f(8.4)$ if $f(8.1) = 16.94410$, $f(8.3) = 17.56492$, $f(8.6) = 18.50515$, $f(8.7) = 18.82091$
ii) $f(1/3)$ if $f(0.75) = -0.07181250$, $f(0.5) = -0.02475000$, $f(0.25) = 0.33493750$,
6. Use the Lagrange and the Newton divided difference formulas to calculate $f(3)$ from
the following table:
x 0 1 2 4 5 6
f (x ) 1 14 15 5 6 19
CHAPTER SIX
6.1 Introduction
Numerical differentiation is the process of calculating the values of the derivative of a function at
some assigned values of $x$ from a given set of values $(x_i, y_i)$. To compute $dy/dx$, we first replace
the exact relation $y = f(x)$ by the best-fitting interpolating polynomial $y = \phi(x)$ and then differentiate
the latter as many times as we desire. The choice of the interpolation formula to be used depends on
the assigned value of $x$ at which $dy/dx$ is desired.
1. If the values of $x$ are equi-spaced, then $dy/dx$:
   i. near the beginning of the table, is calculated by means of Newton's forward difference formula;
   ii. near the end of the table, is calculated by means of Newton's backward difference formula;
   iii. near the middle of the table, is calculated by means of Stirling's or
   Bessel's formula.
2. If the values are not equi-spaced, we use Newton's divided difference formula to
represent the function.
Hence corresponding to each of the interpolation formulae we can derive the formula for finding
the derivative.
Obs: While using these formulae, it must be observed that the table of values defines the function
at those points only; it does not completely define the function, and the function may not even be
differentiable. As such, the process of numerical differentiation should be used only if the
tabulated values are such that the differences of some order are nearly constant. Otherwise, errors
are bound to creep in, and they go on increasing as derivatives of higher order are found. This is due to
the fact that the difference between $f(x)$ and the approximating polynomial $p(x)$ may be small
at the data points, but $f'(x) - p'(x)$ may still be large.
Consider the function $y = f(x)$ tabulated at the equally spaced points $x_i = x_0 + ih$,
$i = 0, 1, 2, \ldots, n$, with their corresponding functional values $y_i = f(x_i)$. Depending on these
data points we can derive different numerical differentiation formulae as follows.

Derivatives using Newton's forward difference formula:
Newton's forward difference formula, with $p = (x - x_0)/h$, is
$y = y_0 + p\,\Delta y_0 + \frac{p(p-1)}{2!}\Delta^2 y_0 + \frac{p(p-1)(p-2)}{3!}\Delta^3 y_0 + \cdots$
Differentiating both sides with respect to $p$,
$\frac{dy}{dp} = \Delta y_0 + \frac{2p - 1}{2!}\Delta^2 y_0 + \frac{3p^2 - 6p + 2}{3!}\Delta^3 y_0 + \cdots$
Since $p = (x - x_0)/h$, therefore $\frac{dp}{dx} = \frac{1}{h}$.
Now
$\frac{dy}{dx} = \frac{dy}{dp}\cdot\frac{dp}{dx} = \frac{1}{h}\left[\Delta y_0 + \frac{2p - 1}{2!}\Delta^2 y_0 + \frac{3p^2 - 6p + 2}{3!}\Delta^3 y_0 + \cdots\right]$          . . . (1)
At $x = x_0$, $p = 0$, and (1) reduces to
$\left(\frac{dy}{dx}\right)_{x_0} = \frac{1}{h}\left[\Delta y_0 - \frac{1}{2}\Delta^2 y_0 + \frac{1}{3}\Delta^3 y_0 - \frac{1}{4}\Delta^4 y_0 + \frac{1}{5}\Delta^5 y_0 - \frac{1}{6}\Delta^6 y_0 + \cdots\right]$          . . . (2)
Again,
$\frac{d^2 y}{dx^2} = \frac{d}{dp}\left(\frac{dy}{dx}\right)\cdot\frac{dp}{dx} = \frac{1}{h^2}\left[\Delta^2 y_0 + \frac{6p - 6}{3!}\Delta^3 y_0 + \frac{12p^2 - 36p + 22}{4!}\Delta^4 y_0 + \cdots\right]$
Putting $p = 0$, we obtain
$\left(\frac{d^2 y}{dx^2}\right)_{x_0} = \frac{1}{h^2}\left[\Delta^2 y_0 - \Delta^3 y_0 + \frac{11}{12}\Delta^4 y_0 - \frac{5}{6}\Delta^5 y_0 + \cdots\right]$          . . . (3)
Similarly,
$\left(\frac{d^3 y}{dx^3}\right)_{x_0} = \frac{1}{h^3}\left[\Delta^3 y_0 - \frac{3}{2}\Delta^4 y_0 + \cdots\right]$          . . . (4)
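Formula (2) is easy to check numerically. The sketch below (an illustrative Python fragment; the helper names are ours) tabulates $y = x^3$ at equally spaced points, builds the forward difference columns, and applies (2). Since the data come from a cubic, differences beyond $\Delta^3$ vanish and the result is exact:

```python
def forward_diff_table(ys):
    """Return the columns [y, Δy, Δ²y, ...] of the forward difference table."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def derivative_at_x0(xs, ys):
    """Apply (dy/dx)_{x0} = (1/h)[Δy0 - Δ²y0/2 + Δ³y0/3 - Δ⁴y0/4 + ...]."""
    h = xs[1] - xs[0]
    cols = forward_diff_table(ys)
    total = 0.0
    for k in range(1, len(cols)):
        total += (-1) ** (k + 1) * cols[k][0] / k
    return total / h

# y = x^3 tabulated with h = 0.1; the exact derivative at x0 = 1 is 3
xs = [1.0 + 0.1 * i for i in range(5)]
ys = [x ** 3 for x in xs]
print(derivative_at_x0(xs, ys))   # ≈ 3.0
```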
Derivatives using Newton's backward difference formula:
Newton's backward difference formula, with $p = (x - x_n)/h$, is
$y = y_n + p\,\nabla y_n + \frac{p(p+1)}{2!}\nabla^2 y_n + \frac{p(p+1)(p+2)}{3!}\nabla^3 y_n + \cdots$
Differentiating both sides with respect to $p$, we have
$\frac{dy}{dp} = \nabla y_n + \frac{2p + 1}{2!}\nabla^2 y_n + \frac{3p^2 + 6p + 2}{3!}\nabla^3 y_n + \cdots$
Since $p = (x - x_n)/h$, therefore $\frac{dp}{dx} = \frac{1}{h}$.
Now
$\frac{dy}{dx} = \frac{dy}{dp}\cdot\frac{dp}{dx} = \frac{1}{h}\left[\nabla y_n + \frac{2p + 1}{2!}\nabla^2 y_n + \frac{3p^2 + 6p + 2}{3!}\nabla^3 y_n + \cdots\right]$          . . . (5)
At $x = x_n$, $p = 0$, and (5) reduces to
$\left(\frac{dy}{dx}\right)_{x_n} = \frac{1}{h}\left[\nabla y_n + \frac{1}{2}\nabla^2 y_n + \frac{1}{3}\nabla^3 y_n + \frac{1}{4}\nabla^4 y_n + \frac{1}{5}\nabla^5 y_n + \cdots\right]$          . . . (6)
Again,
$\frac{d^2 y}{dx^2} = \frac{1}{h^2}\left[\nabla^2 y_n + \frac{6p + 6}{3!}\nabla^3 y_n + \frac{12p^2 + 36p + 22}{4!}\nabla^4 y_n + \cdots\right]$
Putting $p = 0$, we obtain
$\left(\frac{d^2 y}{dx^2}\right)_{x_n} = \frac{1}{h^2}\left[\nabla^2 y_n + \nabla^3 y_n + \frac{11}{12}\nabla^4 y_n + \frac{5}{6}\nabla^5 y_n + \cdots\right]$          . . . (7)
Similarly,
$\left(\frac{d^3 y}{dx^3}\right)_{x_n} = \frac{1}{h^3}\left[\nabla^3 y_n + \frac{3}{2}\nabla^4 y_n + \cdots\right]$          . . . (8)
Derivatives using Stirling's formula:
Stirling's formula is
$y = y_0 + p\,\frac{\Delta y_0 + \Delta y_{-1}}{2} + \frac{p^2}{2!}\Delta^2 y_{-1} + \frac{p(p^2 - 1)}{3!}\cdot\frac{\Delta^3 y_{-1} + \Delta^3 y_{-2}}{2} + \frac{p^2(p^2 - 1)}{4!}\Delta^4 y_{-2} + \cdots$
Differentiating both sides with respect to $p$, we get
$\frac{dy}{dp} = \frac{\Delta y_0 + \Delta y_{-1}}{2} + \frac{2p}{2!}\Delta^2 y_{-1} + \frac{3p^2 - 1}{3!}\cdot\frac{\Delta^3 y_{-1} + \Delta^3 y_{-2}}{2} + \frac{4p^3 - 2p}{4!}\Delta^4 y_{-2} + \cdots$
Since $p = (x - x_0)/h$, therefore $\frac{dp}{dx} = \frac{1}{h}$.
Now
$\frac{dy}{dx} = \frac{dy}{dp}\cdot\frac{dp}{dx} = \frac{1}{h}\left[\frac{\Delta y_0 + \Delta y_{-1}}{2} + p\,\Delta^2 y_{-1} + \frac{3p^2 - 1}{6}\cdot\frac{\Delta^3 y_{-1} + \Delta^3 y_{-2}}{2} + \cdots\right]$          . . . (9)
Putting $p = 0$, this gives
$\left(\frac{dy}{dx}\right)_{x_0} = \frac{1}{h}\left[\frac{\Delta y_0 + \Delta y_{-1}}{2} - \frac{1}{6}\cdot\frac{\Delta^3 y_{-1} + \Delta^3 y_{-2}}{2} + \frac{1}{30}\cdot\frac{\Delta^5 y_{-2} + \Delta^5 y_{-3}}{2} - \cdots\right]$
Similarly,
$\left(\frac{d^2 y}{dx^2}\right)_{x_0} = \frac{1}{h^2}\left[\Delta^2 y_{-1} - \frac{1}{12}\Delta^4 y_{-2} + \frac{1}{90}\Delta^6 y_{-3} - \cdots\right]$          . . . (10)
Obs: we can similarly use any other interpolation formula for computing derivatives.
Example 6.1: From the following table of values of x and y, find dy/dx and d²y/dx² (a) near the beginning of the table and (b) at x = 1.6.
Solution:
x       y        Δy       Δ²y      Δ³y      Δ⁴y      Δ⁵y      Δ⁶y
1.0 7.989
0.414
0.378 0.006
0.299 0.005
0.281
1.6 10.031
(a) Here h = 0.1. Using the forward difference formulae (2) and (3) with the differences
$\Delta y = 0.378$, $\Delta^2 y = -0.03$, $\Delta^3 y = 0.004$, $\Delta^4 y = 0$, $\Delta^5 y = -0.001$, $\Delta^6 y = -0.003$
read from the table, we have
$\frac{dy}{dx} = \frac{1}{0.1}\left[0.378 - \frac{1}{2}(-0.03) + \frac{1}{3}(0.004) - \frac{1}{4}(0) + \frac{1}{5}(-0.001) - \frac{1}{6}(-0.003)\right]$
$= 3.946$
and, retaining the $\Delta^6$ term of (3) (its coefficient is 137/180),
$\frac{d^2 y}{dx^2} = \frac{1}{(0.1)^2}\left[-0.03 - (0.004) + \frac{11}{12}(0) - \frac{5}{6}(-0.001) + \frac{137}{180}(-0.003)\right]$
$= -3.545$
(b) We use the same difference table with the backward difference operator ∇ instead of Δ.
Here h = 0.1, $x_n = 1.6$, $\nabla y_n = 0.281$, $\nabla^2 y_n = -0.018$, $\nabla^3 y_n = 0.005$, $\nabla^4 y_n = -0.001$,
$\nabla^5 y_n = -0.001$, $\nabla^6 y_n = -0.003$. Putting these values in (6) and (7), we get
$\left(\frac{dy}{dx}\right)_{1.6} = \frac{1}{0.1}\left[0.281 + \frac{1}{2}(-0.018) + \frac{1}{3}(0.005) + \frac{1}{4}(-0.001) + \frac{1}{5}(-0.001) + \frac{1}{6}(-0.003)\right]$
$= 2.727$
$\left(\frac{d^2 y}{dx^2}\right)_{1.6} = \frac{1}{(0.1)^2}\left[-0.018 + 0.005 + \frac{11}{12}(-0.001) + \frac{5}{6}(-0.001) + \frac{137}{180}(-0.003)\right]$
$= -1.703$
Example 6.2: A slider in a machine moves along a fixed straight rod. Its distance x (in cm) along the
rod is given below for various values of the time t (in seconds). Find the velocity of the slider and its
acceleration when t = 0.3 second.
solution:
t       x        Δx       Δ²x      Δ³x      Δ⁴x      Δ⁵x      Δ⁶x
0 30.13
1.49
1.25 -0.24
-0.14 0.02
-0.57
0.6 33.24
As the derivatives are required near the middle of the table, we use Stirling's formulae:
$\left(\frac{dx}{dt}\right)_{t_0} = \frac{1}{h}\left[\frac{\Delta x_{-1} + \Delta x_0}{2} - \frac{1}{6}\cdot\frac{\Delta^3 x_{-2} + \Delta^3 x_{-1}}{2} + \frac{1}{30}\cdot\frac{\Delta^5 x_{-3} + \Delta^5 x_{-2}}{2} - \cdots\right]$          . . . (1)
$\left(\frac{d^2 x}{dt^2}\right)_{t_0} = \frac{1}{h^2}\left[\Delta^2 x_{-1} - \frac{1}{12}\Delta^4 x_{-2} + \frac{1}{90}\Delta^6 x_{-3} - \cdots\right]$          . . . (2)
Here $t_0 = 0.3$ and h = 0.1. Substituting the relevant differences from the table into (1), we get
$\left(\frac{dx}{dt}\right)_{0.3} = 5.33$
and, from (2),
$\left(\frac{d^2 x}{dt^2}\right)_{0.3} = \frac{1}{(0.1)^2}\left[-0.46 - \frac{1}{12}(-0.01) + \frac{1}{90}(0.29)\right] = -45.6$
Hence the required velocity is 5.33 cm/sec and the acceleration is -45.6 cm/sec².
To find the maxima and minima of a function given by a table of values, consider Newton's forward
difference formula
$y = y_0 + p\,\Delta y_0 + \frac{p(p-1)}{2!}\Delta^2 y_0 + \frac{p(p-1)(p-2)}{3!}\Delta^3 y_0 + \cdots$
so that
$\frac{dy}{dp} = \Delta y_0 + \frac{2p - 1}{2!}\Delta^2 y_0 + \frac{3p^2 - 6p + 2}{3!}\Delta^3 y_0 + \cdots$          . . . (1)
For maxima or minima, dy/dx = 0, and hence dy/dp = 0. Equating the right-hand side of (1) to zero and
retaining terms only up to third differences, we obtain
$\Delta y_0 + \frac{2p - 1}{2}\Delta^2 y_0 + \frac{3p^2 - 6p + 2}{6}\Delta^3 y_0 = 0$
i.e.
$\frac{1}{2}\Delta^3 y_0\,p^2 + \left(\Delta^2 y_0 - \Delta^3 y_0\right)p + \left(\Delta y_0 - \frac{1}{2}\Delta^2 y_0 + \frac{1}{3}\Delta^3 y_0\right) = 0$
Substituting the values of $\Delta y_0$, $\Delta^2 y_0$ and $\Delta^3 y_0$ from the difference table, we solve this quadratic
for p. Then the corresponding values of x are given by $x = x_0 + ph$, at which y is a maximum or a
minimum.
Example 6.3: From the table below, for what value of x is y minimum? Also find this minimum value of y.

x    3        4        5        6        7        8
y    0.205    0.240    0.259    0.262    0.250    0.224
solution:
The difference table is:

x    y        Δy       Δ²y      Δ³y      Δ⁴y
3    0.205
              0.035
4    0.240             -0.016
              0.019             0.000
5    0.259             -0.016            0.001
              0.003             0.001
6    0.262             -0.015            0.000
              -0.012            0.001
7    0.250             -0.014
              -0.026
8    0.224
Taking $x_0 = 3$ and h = 1, Newton's forward difference formula gives
$y = 0.205 + p(0.035) + \frac{p(p - 1)}{2!}(-0.016) + \cdots$          . . . (i)
Differentiating with respect to p,
$\frac{dy}{dp} = 0.035 + \frac{2p - 1}{2!}(-0.016) + \cdots$
For y to be minimum, dy/dp = 0, i.e.
$0.035 - 0.008(2p - 1) = 0$, giving $p = 2.6875.$
Hence y is minimum at $x = x_0 + ph = 3 + 2.6875 = 5.6875$, and from (i) the minimum value is
$y = 0.205 + 2.6875 \times (0.035) + \frac{2.6875 \times (2.6875 - 1)}{2}(-0.016)$
$= 0.2628$
Activity 6.1
: 4 8 15 7 6 2
2. Find the first and second derivatives of ( ) at = 1.5 if
: 1.5 2.0 2.5 3.0 3.5 4.0
4. Given sin 0° = 0.0000, sin 10° = 0.1736, sin 20° = 0.3420, sin 30° = 0.5000, and
sin 40° = 0.6428, then
i) find the value of sin 23°
5. The population of a certain town (as obtained from census data) is shown in the following
table:
( in thousands)
Estimate the population in the years 1966 and 1993. And also find the rate of growth of
population in 1981.
Objectives: At the end of this chapter, you should be able to:
- understand the need for numerical integration techniques;
- derive the multiple-segment quadrature formulae of integration;
- use the multiple-segment quadrature formulae of integration to solve problems;
- calculate the truncation errors in each quadrature formula;
- derive and use Romberg's method and the Euler-Maclaurin formula.
The process of evaluating a definite integral from a set of tabulated values of the
integrand f(x) is called numerical integration. This process, when applied to a function
of a single variable, is known as quadrature.
The problem of numerical integration, like that of numerical differentiation, is solved by
representing f(x) by an interpolation formula and then integrating it between the given
limits. In this way, we can derive quadrature formulae for the approximate integration of
functions defined only by a set of numerical values.
Let
$I = \int_a^b f(x)\,dx = \int_a^b y\,dx$
Divide the interval [a, b] into n equal sub-intervals of width h, so that $x_0 = a$, $x_i = x_0 + ih$ and
$x_n = x_0 + nh = b$. Integrating Newton's forward difference formula over n sub-intervals gives the
general quadrature formula
$\int_{x_0}^{x_0 + nh} y\,dx = nh\left[y_0 + \frac{n}{2}\Delta y_0 + \frac{n(2n - 3)}{12}\Delta^2 y_0 + \frac{n(n - 2)^2}{24}\Delta^3 y_0 + \cdots\right]$          . . . (1)
from which the following rules are deduced.
1) Trapezoidal rule
Putting n = 1 in (1) and neglecting all differences above the first order, we get
$\int_{x_0}^{x_1} y\,dx = h\left(y_0 + \frac{1}{2}\Delta y_0\right) = \frac{h}{2}(y_0 + y_1)$
Similarly,
$\int_{x_1}^{x_2} y\,dx = h\left(y_1 + \frac{1}{2}\Delta y_1\right) = \frac{h}{2}(y_1 + y_2)$
.............................................................................
$\int_{x_{n-1}}^{x_n} y\,dx = h\left(y_{n-1} + \frac{1}{2}\Delta y_{n-1}\right) = \frac{h}{2}(y_{n-1} + y_n)$
Adding these n integrals, we obtain
$\int_{x_0}^{x_n} y\,dx = \frac{h}{2}\left[(y_0 + y_n) + 2(y_1 + y_2 + \cdots + y_{n-1})\right]$          . . . (2)
This is known as the trapezoidal rule.
2) Simpson's 1/3 rule
Putting n = 2 in (1) above and neglecting all differences above the second order, we get
$\int_{x_0}^{x_2} y\,dx = 2h\left(y_0 + \Delta y_0 + \frac{1}{6}\Delta^2 y_0\right) = \frac{h}{3}(y_0 + 4y_1 + y_2)$, n being even.
Similarly, $\int_{x_2}^{x_4} y\,dx = \frac{h}{3}(y_2 + 4y_3 + y_4)$, and so on. Adding all these integrals, we obtain
$\int_{x_0}^{x_n} y\,dx = \frac{h}{3}\left[(y_0 + y_n) + 4(y_1 + y_3 + \cdots + y_{n-1}) + 2(y_2 + y_4 + \cdots + y_{n-2})\right]$          . . . (3)
This is known as Simpson's 1/3 rule. While applying (3), the number of sub-intervals must be even.
3) Simpson's 3/8 rule
Putting n = 3 in (1) above and neglecting all differences above the third order, we get
$\int_{x_0}^{x_3} y\,dx = 3h\left(y_0 + \frac{3}{2}\Delta y_0 + \frac{3}{4}\Delta^2 y_0 + \frac{1}{8}\Delta^3 y_0\right)$
$= \frac{3h}{8}(y_0 + 3y_1 + 3y_2 + y_3)$
Similarly,
$\int_{x_3}^{x_6} y\,dx = \frac{3h}{8}(y_3 + 3y_4 + 3y_5 + y_6)$
Adding all such expressions from $x_0$ to $x_n$, where n is a multiple of 3, we obtain
$\int_{x_0}^{x_n} y\,dx = \frac{3h}{8}\left[(y_0 + y_n) + 3(y_1 + y_2 + y_4 + y_5 + \cdots + y_{n-1}) + 2(y_3 + y_6 + \cdots + y_{n-3})\right]$          . . . (4)
This is known as Simpson's 3/8 rule.
4. Boole's rule
Putting n = 4 in equation (1) above and neglecting all differences above the
fourth order, we get
$\int_{x_0}^{x_4} y\,dx = 4h\left(y_0 + 2\Delta y_0 + \frac{5}{3}\Delta^2 y_0 + \frac{2}{3}\Delta^3 y_0 + \frac{7}{90}\Delta^4 y_0\right)$
$= \frac{2h}{45}(7y_0 + 32y_1 + 12y_2 + 32y_3 + 7y_4)$
Similarly,
$\int_{x_4}^{x_8} y\,dx = \frac{2h}{45}(7y_4 + 32y_5 + 12y_6 + 32y_7 + 7y_8)$
and so on.
and so on.
Adding all these integrals from to + ℎ , where n is a multiple of 4, we
obtain
$\int_{x_0}^{x_n} y\,dx = \frac{2h}{45}\left(7y_0 + 32y_1 + 12y_2 + 32y_3 + 14y_4 + 32y_5 + 12y_6 + 32y_7 + 14y_8 + \cdots\right)$          . . . (5)
This is known as Boole's rule.
Obs. While applying (5), the number of sub-intervals should be taken as a
multiple of 4.
5. Weddle's rule
Putting n = 6 in equation (1) above and neglecting all differences above the sixth
order, we obtain
$\int_{x_0}^{x_6} y\,dx = 6h\left(y_0 + 3\Delta y_0 + \frac{9}{2}\Delta^2 y_0 + 4\Delta^3 y_0 + \frac{123}{60}\Delta^4 y_0 + \frac{11}{20}\Delta^5 y_0 + \frac{41}{140}\Delta^6 y_0\right)$
If we replace $\frac{41}{140}\Delta^6 y_0$ by $\frac{42}{140}\Delta^6 y_0 = \frac{3}{10}\Delta^6 y_0$, the error made is negligible, and we get
$\int_{x_0}^{x_6} y\,dx = \frac{3h}{10}(y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + y_6)$
Similarly,
$\int_{x_6}^{x_{12}} y\,dx = \frac{3h}{10}(y_6 + 5y_7 + y_8 + 6y_9 + y_{10} + 5y_{11} + y_{12})$
and so on.
Adding all these integrals from to + ℎ , where n is a multiple of 6, we
obtain
$\int_{x_0}^{x_n} y\,dx = \frac{3h}{10}\left(y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + 2y_6 + 5y_7 + y_8 + 6y_9 + \cdots\right)$          . . . (6)
This is known as Weddle's rule.
Obs. While applying (6), the number of sub-intervals should be taken as a
multiple of 6.
Weddle's rule is generally more accurate than any of the others. Of the two
Simpson's rules, the 1/3 rule is better.
Example 6.4:
Evaluate $\int_0^6 \frac{dx}{1 + x^2}$ by using
i) Trapezoidal rule
ii) Simpson's 1/3 rule
iii) Simpson's 3/8 rule
iv) Weddle's rule and compare the results with its actual value.
Solution:
Divide the interval (0, 6) into six equal parts, each of width h = 1. The values of
$y = f(x) = \frac{1}{1 + x^2}$ are given below:

x    0    1      2      3      4         5         6
y    1    0.5    0.2    0.1    0.0588    0.0385    0.027
i) Trapezoidal rule
$I = \frac{h}{2}\left[(y_0 + y_6) + 2(y_1 + y_2 + y_3 + y_4 + y_5)\right]$
$= \frac{1}{2}\left[(1 + 0.027) + 2(0.5 + 0.2 + 0.1 + 0.0588 + 0.0385)\right]$
= 1.4108.
ii) Simpson's 1/3 rule
$I = \frac{h}{3}\left[(y_0 + y_6) + 4(y_1 + y_3 + y_5) + 2(y_2 + y_4)\right]$
$= \frac{1}{3}\left[(1 + 0.027) + 4(0.5 + 0.1 + 0.0385) + 2(0.2 + 0.0588)\right]$
= 1.3662.
iii) Simpson's 3/8 rule
$I = \frac{3h}{8}\left[(y_0 + y_6) + 3(y_1 + y_2 + y_4 + y_5) + 2y_3\right]$
$= \frac{3}{8}\left[(1 + 0.027) + 3(0.5 + 0.2 + 0.0588 + 0.0385) + 2(0.1)\right]$
= 1.3571.
iv) Weddle's rule
$I = \frac{3h}{10}\left(y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + y_6\right)$
$= 0.3\left[1 + 5(0.5) + 0.2 + 6(0.1) + 0.0588 + 5(0.0385) + 0.027\right]$
= 1.3735.
Also, the actual value is
$\int_0^6 \frac{dx}{1 + x^2} = \left[\tan^{-1} x\right]_0^6 = \tan^{-1} 6 = 1.4056.$
Comparing, for this integrand and step size the trapezoidal value (1.4108) happens to be the
closest to the actual value, followed by Weddle's rule (1.3735) and then Simpson's 1/3
rule (1.3662).
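The four computations of Example 6.4 can be reproduced in a few lines of Python (an illustrative sketch; the variable names are ours):

```python
import math

def f(x):
    return 1.0 / (1.0 + x * x)

# Six sub-intervals of width h = 1 on [0, 6], as in Example 6.4
h = 1.0
y = [f(i * h) for i in range(7)]

trap   = h / 2 * ((y[0] + y[6]) + 2 * sum(y[1:6]))
simp13 = h / 3 * ((y[0] + y[6]) + 4 * (y[1] + y[3] + y[5]) + 2 * (y[2] + y[4]))
simp38 = 3 * h / 8 * ((y[0] + y[6]) + 3 * (y[1] + y[2] + y[4] + y[5]) + 2 * y[3])
weddle = 3 * h / 10 * (y[0] + 5*y[1] + y[2] + 6*y[3] + y[4] + 5*y[5] + y[6])

print(trap, simp13, simp38, weddle)   # ≈ 1.4108, 1.3662, 1.3571, 1.3734
print(math.atan(6.0))                 # actual value ≈ 1.4056
```

The tiny discrepancy in the last digit of Weddle's value arises because the module tabulates y to four decimal places, while the code keeps full precision.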
Example 6.5:
The velocity v of a moped, which starts from rest, is given at
fixed intervals of the time t as follows:
2 4 6 8 10 12 14 16 18 20
10 18 25 29 32 20 11 5 2 0
Since $v = \frac{dx}{dt}$, the required distance is $x = \int_0^{20} v\,dt$, which can be evaluated by the quadrature rules above.
Activity 6.2
1. Use trapezoidal rule to evaluate ∫ considering five sub-
intervals.
2. Evaluate ∫ using
.
Evaluate ∫ log using
i) Trapezoidal rule
ii) Simpson's 1/3 rule
iii) Simpson's 3/8 rule
iv) Weddle's rule.
6. The following table gives the velocity of a particle at time :
( ): 0 2 4 6 8 10 12
( / ): 4 6 16 34 60 94 136
Find the distance moved by the particle in 12 seconds and also the
acceleration at t = 2 s.
The error in approximating the integral is
$E = \int_a^b y\,dx - \int_a^b p_n(x)\,dx$
where $p_n(x)$ is the polynomial representing the function $y = f(x)$ in the interval [a, b].
1) Error in the trapezoidal rule
Expanding $y = f(x)$ around $x = x_0$ by Taylor's series, we get
$y = y_0 + (x - x_0)y_0' + \frac{(x - x_0)^2}{2!}y_0'' + \cdots$          . . . (1)
Therefore,
$\int_{x_0}^{x_1} y\,dx = \int_{x_0}^{x_1}\left[y_0 + (x - x_0)y_0' + \frac{(x - x_0)^2}{2!}y_0'' + \cdots\right]dx = h y_0 + \frac{h^2}{2!}y_0' + \frac{h^3}{3!}y_0'' + \cdots$          . . . (2)
Also, the area of the first trapezium is
$A_1 = \frac{h}{2}(y_0 + y_1)$          . . . (3)
Putting $x = x_1$ in (1) gives $y_1 = y_0 + h y_0' + \frac{h^2}{2!}y_0'' + \cdots$, so that
$A_1 = h y_0 + \frac{h^2}{2}y_0' + \frac{h^3}{2 \cdot 2!}y_0'' + \cdots$          . . . (4)
Subtracting (4) from (2), the error in the interval $[x_0, x_1]$ is
$\int_{x_0}^{x_1} y\,dx - A_1 = \left(\frac{1}{3!} - \frac{1}{2 \cdot 2!}\right)h^3 y_0'' + \cdots = -\frac{h^3}{12}y_0'' + \cdots$
Similarly, the principal part of the error in $[x_1, x_2]$ is $-\frac{h^3}{12}y_1''$, and so on. Hence the total error is
$E = -\frac{h^3}{12}\left(y_0'' + y_1'' + \cdots + y_{n-1}''\right)$
Assuming $y''(\bar{x})$ is the largest of these n quantities, we get
$|E| < \frac{nh^3}{12}\,y''(\bar{x}) = \frac{(b - a)h^2}{12}\,y''(\bar{x})$, since $nh = b - a.$
Hence the error in the trapezoidal rule is of order $h^2$.
2) Error in Simpson's 1/3 rule
Expanding as before,
$\int_{x_0}^{x_2} y\,dx = 2h y_0 + \frac{(2h)^2}{2!}y_0' + \frac{(2h)^3}{3!}y_0'' + \frac{(2h)^4}{4!}y_0''' + \frac{(2h)^5}{5!}y_0^{(4)} + \cdots$          . . . (6)
Also, the area over the first double strip by Simpson's 1/3 rule is
$A_1 = \frac{h}{3}(y_0 + 4y_1 + y_2)$          . . . (7)
Substituting $y_1 = y_0 + h y_0' + \frac{h^2}{2!}y_0'' + \frac{h^3}{3!}y_0''' + \cdots$ and
$y_2 = y_0 + 2h y_0' + \frac{(2h)^2}{2!}y_0'' + \frac{(2h)^3}{3!}y_0''' + \cdots$ into (7), we get
$A_1 = 2h y_0 + 2h^2 y_0' + \frac{4h^3}{3}y_0'' + \frac{2h^4}{3}y_0''' + \frac{5h^5}{18}y_0^{(4)} + \cdots$          . . . (8)
Therefore, the error in the interval $[x_0, x_2]$ is
$\int_{x_0}^{x_2} y\,dx - A_1 = \left(\frac{4}{15} - \frac{5}{18}\right)h^5 y_0^{(4)} + \cdots = -\frac{h^5}{90}y_0^{(4)} + \cdots$
Similarly, the principal part of the error in $[x_2, x_4]$ is $-\frac{h^5}{90}y_2^{(4)}$, and so on. Hence the total error is
$E = -\frac{h^5}{90}\left(y_0^{(4)} + y_2^{(4)} + \cdots\right)$, so that $|E| < \frac{(b - a)h^4}{180}\,y^{(4)}(\bar{x})$, since $2nh = b - a$,
where $y^{(4)}(\bar{x})$ is the largest fourth derivative.
Hence the error in Simpson's 1/3 rule is of order $h^4$.
3) Error in Simpson's 3/8 rule
Proceeding as above, the principal part of the error in the interval $[x_0, x_3]$ is
$-\frac{3h^5}{80}\,y_0^{(4)}$          . . . (10)
Similarly, the principal part of the error in Boole's rule over $[x_0, x_4]$ is
$-\frac{8h^7}{945}\,y_0^{(6)}$          . . . (11)
and in Weddle's rule over $[x_0, x_6]$ it is
$-\frac{h^7}{140}\,y_0^{(6)}$          . . . (12)
As an illustration, let us improve upon the value of the integral
$I = \int_a^b f(x)\,dx$
obtained by the trapezoidal rule. If $I_1$, $I_2$ are the values of I computed with sub-intervals of widths $h_1$, $h_2$
and $E_1$, $E_2$ are their corresponding errors respectively, then
$E_1 = -\frac{(b - a)h_1^2}{12}\,y''(\bar{x}_1), \qquad E_2 = -\frac{(b - a)h_2^2}{12}\,y''(\bar{x}_2)$
Assuming that $y''(\bar{x}_1)$ and $y''(\bar{x}_2)$ are nearly equal, we have
$\frac{E_1}{E_2} = \frac{h_1^2}{h_2^2}$, or $E_2 = \frac{h_2^2}{h_1^2}E_1$          . . . (1)
Since $I = I_1 + E_1 = I_2 + E_2$, we get $E_2 - E_1 = I_1 - I_2$, so that
$E_2 = \frac{h_2^2}{h_2^2 - h_1^2}(I_1 - I_2)$          . . . (2)
Therefore,
$I = I_2 + E_2 = \frac{I_1 h_2^2 - I_2 h_1^2}{h_2^2 - h_1^2}$          . . . (3)
In particular, taking $h_1 = h$ and $h_2 = h/2$, (3) gives the improved value
$I(h, h/2) = \frac{4I_2 - I_1}{3}$          . . . (4)
Now we apply the trapezoidal rule several times, successively halving h, and apply
(4) to each pair of values as per the following scheme:

I(h)
            I(h, h/2)
I(h/2)                     I(h, h/2, h/4)
            I(h/2, h/4)                       I(h, h/2, h/4, h/8)
I(h/4)                     I(h/2, h/4, h/8)
            I(h/4, h/8)
I(h/8)

The computation is continued till successive values are close to each other. This
method is called Richardson's deferred approach to the limit and its systematic
refinement is called Romberg's method.
Example 6.6:
Use Romberg's method to compute $\int_0^1 \frac{dx}{1 + x^2}$ correct to 4 decimal places.
Solution:
Let us take h = 0.5, 0.25 and 0.125 successively and evaluate the given integral
using the trapezoidal rule.
i) When h = 0.5, the values of $y = (1 + x^2)^{-1}$ are
x:  0      0.5    1.0
y:  1      0.8    0.5
Therefore, $I_1 = I(h) = \frac{0.5}{2}[1 + 2(0.8) + 0.5] = 0.775$
ii) When h = 0.25, the values of y are
x:  0      0.25      0.5    0.75    1.0
y:  1      0.9412    0.8    0.64    0.5
Therefore, $I_2 = I(h/2) = \frac{0.25}{2}[1 + 2(0.9412 + 0.8 + 0.64) + 0.5] = 0.7828$
iii) Similarly, when h = 0.125, $I_3 = I(h/4) = 0.7848.$
Using (4),
$I(h, h/2) = \frac{1}{3}[4I(h/2) - I(h)] = \frac{1}{3}[3.1312 - 0.775] = 0.7854$
$I(h/2, h/4) = \frac{1}{3}[4I(h/4) - I(h/2)] = \frac{1}{3}[3.1392 - 0.7828] = 0.7855$
and $I(h, h/2, h/4) = \frac{1}{3}[4(0.7855) - 0.7854] = 0.7855$
The scheme of values is:
0.775
            0.7854
0.7828                    0.7855
            0.7855
0.7848
Hence the value of the integral is 0.7855, correct to 4 decimal places.
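Romberg's scheme is conveniently organized as a triangular array. The Python sketch below (our own function names) uses the general Richardson weights $4^j/(4^j - 1)$ for the later columns, of which formula (4) is the first-column case, applied to the integral of Example 6.6:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n sub-intervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def romberg(f, a, b, levels):
    """Build the Romberg triangle: column 0 holds the trapezoidal values
    for h, h/2, h/4, ...; column j applies the weight 4^j/(4^j - 1)."""
    R = [[trapezoid(f, a, b, 2 ** k)] for k in range(levels)]
    for k in range(1, levels):
        for j in range(1, k + 1):
            R[k].append((4 ** j * R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1))
    return R

import math
R = romberg(lambda x: 1.0 / (1.0 + x * x), 0.0, 1.0, 4)
print(R[-1][-1])   # ≈ 0.7854 (the exact value is π/4 = 0.785398...)
```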
The Euler-Maclaurin formula
Let $y_0, y_1, \ldots, y_n$ be the (n + 1) equi-spaced values of y with spacing h, and let F(x) be a
function such that $\Delta F(x) = y(x)$. Then
$y(x) = \Delta F(x) = (E - 1)F(x) = \left(e^{hD} - 1\right)F(x)$, since $E = e^{hD}.$
Therefore,
$F(x) = \left(e^{hD} - 1\right)^{-1} y(x) = \left(hD + \frac{h^2 D^2}{2!} + \frac{h^3 D^3}{3!} + \cdots\right)^{-1} y(x)$
$= (hD)^{-1}\left(1 + \frac{hD}{2!} + \frac{h^2 D^2}{3!} + \cdots\right)^{-1} y(x)$
$= \frac{1}{hD}\left(1 - \frac{hD}{2} + \frac{h^2 D^2}{12} - \frac{h^4 D^4}{720} + \cdots\right) y(x)$
$= \frac{1}{h}\int y\,dx - \frac{1}{2}y(x) + \frac{h}{12}y'(x) - \frac{h^3}{720}y'''(x) + \cdots$          . . . (3)
Now, since $\Delta F(x_i) = F(x_{i+1}) - F(x_i) = y_i$, summation over $i = 0, 1, \ldots, n - 1$ gives
$F(x_n) - F(x_0) = y_0 + y_1 + \cdots + y_{n-1}$
Also, from (3),
$F(x_n) - F(x_0) = \frac{1}{h}\int_{x_0}^{x_n} y\,dx - \frac{1}{2}(y_n - y_0) + \frac{h}{12}(y_n' - y_0') - \frac{h^3}{720}(y_n''' - y_0''') + \cdots$          . . . (4)
Equating the two expressions and solving for the integral, we get
$\frac{1}{h}\int_{x_0}^{x_n} y\,dx = (y_0 + y_1 + \cdots + y_{n-1}) + \frac{1}{2}(y_n - y_0) - \frac{h}{12}(y_n' - y_0') + \frac{h^3}{720}(y_n''' - y_0''') - \cdots$
that is,
$\int_{x_0}^{x_n} y\,dx = \frac{h}{2}\left[y_0 + 2y_1 + 2y_2 + \cdots + 2y_{n-1} + y_n\right] - \frac{h^2}{12}(y_n' - y_0') + \frac{h^4}{720}(y_n''' - y_0''') - \cdots$          . . . (5)
This is the Euler-Maclaurin formula.
173
successive corrections to this value. This formula is often used to find the sum of
a series of the form
( )+ ( + ℎ) + ⋯ + ( + ℎ) .
Example 6.7:
Using the Euler-Maclaurin formula, find the value of $\log_e 2$ from $\int_0^1 \frac{dx}{1 + x}$.
Solution:
Taking $y = \frac{1}{1 + x}$, $a = 0$, $n = 10$, $h = 0.1$, we have
$y' = -(1 + x)^{-2}$ and $y''' = -6(1 + x)^{-4}.$
Substituting the tabulated values of y at $x = 0, 0.1, \ldots, 1$ into (5) gives $\log_e 2 \approx 0.69315$, correct to five decimal places.
Example 6.8:
Apply the Euler-Maclaurin formula to evaluate
$\frac{1}{51^2} + \frac{1}{53^2} + \frac{1}{55^2} + \cdots + \frac{1}{99^2}$
Solution:
Taking $y = \frac{1}{x^2}$, $a = 51$, $n = 24$, $h = 2$, we have
$y' = -\frac{2}{x^3}$ and $y''' = -\frac{24}{x^5}.$
Therefore, by (5),
$\frac{1}{51^2} + \frac{1}{53^2} + \frac{1}{55^2} + \cdots + \frac{1}{99^2} = \frac{1}{2}\int_{51}^{99}\frac{dx}{x^2} + \frac{1}{2}\left(\frac{1}{51^2} + \frac{1}{99^2}\right) + \frac{1}{3}\left(\frac{1}{51^3} - \frac{1}{99^3}\right)$
$\qquad - \frac{4}{15}\left(\frac{1}{51^5} - \frac{1}{99^5}\right) + \cdots$
$= \frac{1}{2}\left(\frac{1}{51} - \frac{1}{99}\right) + 0.000243 + 0.0000022 - \cdots$
$= 0.004753 + 0.000243 + 0.0000022 - \cdots$
= 0.00499 approximately.
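The Euler-Maclaurin estimate of Example 6.8 can be checked against the directly computed sum. A small Python sketch (the variable names are ours; only the first two correction terms of (5) are kept):

```python
# Direct evaluation of 1/51^2 + 1/53^2 + ... + 1/99^2 (Example 6.8)
direct = sum(1.0 / k**2 for k in range(51, 100, 2))

# Euler-Maclaurin approximation of the same sum:
# (1/h)*integral + (y0 + yn)/2 + (h/12)*(y'_n - y'_0) - ...
a, b, h = 51.0, 99.0, 2.0
integral = 1.0 / a - 1.0 / b                 # exact value of ∫ x^-2 dx on [51, 99]
em = (integral / h
      + 0.5 * (1.0 / a**2 + 1.0 / b**2)
      + (h / 12.0) * (-2.0 / b**3 + 2.0 / a**3))
print(direct, em)   # both ≈ 0.00499
```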
Numerical differentiation formulae can also be derived by the method of undetermined
coefficients. Assume a formula of the form
$y_i' = a\,y_{i-1} + b\,y_i + c\,y_{i+1}$          . . . (3)
where the unknown constants a, b, c are determined by making (3) exact for
$f(x) = 1$, $x$ and $x^2$ respectively. Therefore,
$0 = a + b + c$
$1 = a(x_i - h) + b\,x_i + c(x_i + h)$
$2x_i = a(x_i - h)^2 + b\,x_i^2 + c(x_i + h)^2$
To solve these equations, we shift the origin to $x_i$, that is, we set $x_i = 0$. As such, $y_i'$,
being the slope of the tangent to the curve $y = f(x)$ at $x = x_i$, remains unaltered.
Thus the equations reduce to
$a + b + c = 0$
$-a + c = 1/h$
and $a + c = 0$
giving $a = -\frac{1}{2h}$, $b = 0$, $c = \frac{1}{2h}.$
Hence $y_i' = \frac{1}{2h}(y_{i+1} - y_{i-1})$          . . . (4)
Similarly, one obtains the central difference formula for the second derivative:
$y_i'' = \frac{1}{h^2}(y_{i-1} - 2y_i + y_{i+1})$          . . . (5)
Quadrature formulae can be derived in the same way. Assume
$\int_{x_i}^{x_i + h} y\,dx = a\,y_i + b\,y_{i+1}$          . . . (6)
where the unknown constants a, b are determined by making (6) exact for
$f(x) = 1$ and $x$ respectively.
So, putting $f(x) = 1$, $x$ successively in (6), we get
$a + b = \int_{x_i}^{x_i + h} 1\,dx = h$
$a\,x_i + b(x_i + h) = \int_{x_i}^{x_i + h} x\,dx = \frac{1}{2}\left[(x_i + h)^2 - x_i^2\right]$
To solve these, we shift the origin to $x_i$ and take $x_i = 0$. Therefore, the above
equations reduce to
$a + b = h$ and $bh = \frac{h^2}{2}$,
giving $a = b = \frac{h}{2}.$
Hence $\int_{x_i}^{x_i + h} y\,dx = \frac{h}{2}(y_i + y_{i+1})$          . . . (7)
which is the trapezoidal rule. Next, assume
$\int_{x_i - h}^{x_i + h} y\,dx = a\,y_{i-1} + b\,y_i + c\,y_{i+1}$          . . . (8)
where the unknown constants a, b, c are determined by making (8) exact for
$f(x) = 1$, $x$ and $x^2$ respectively.
So, putting $f(x) = 1$, $x$, $x^2$ successively in (8), we obtain
$a + b + c = \int_{x_i - h}^{x_i + h} 1\,dx = 2h$
$a(x_i - h) + b\,x_i + c(x_i + h) = \int_{x_i - h}^{x_i + h} x\,dx = \frac{1}{2}\left[(x_i + h)^2 - (x_i - h)^2\right]$
$a(x_i - h)^2 + b\,x_i^2 + c(x_i + h)^2 = \int_{x_i - h}^{x_i + h} x^2\,dx = \frac{1}{3}\left[(x_i + h)^3 - (x_i - h)^3\right]$
Shifting the origin to $x_i$, these reduce to $a + b + c = 2h$, $-ah + ch = 0$ and $ah^2 + ch^2 = \frac{2h^3}{3}$,
giving $a = c = \frac{h}{3}$, $b = \frac{4h}{3}.$
Hence $\int_{x_i - h}^{x_i + h} y\,dx = \frac{h}{3}(y_{i-1} + 4y_i + y_{i+1})$          . . . (9)
which is Simpson's 1/3 rule.
Activity 6.3
1. Obtain an estimate of the number of sub-intervals that should be chosen so
that the error in evaluating the given integral is less than 0.001.
4. Using the Euler-Maclaurin formula, find the value of $\int_0^{\pi/2} \sin x\,dx$ correct to five
decimal places.
5. Apply Euler-Maclaurin formula, to evaluate
i) + + +⋯+
ii) ( )
+( )
+( )
+ ⋯+ ( )
Consider the double integral
$I = \int_c^d \int_a^b f(x, y)\,dx\,dy$
It is evaluated numerically by two successive integrations in the x and y directions, considering
one variable at a time. Repeated application of the trapezoidal rule (or Simpson's rule)
yields formulae for evaluating I.
1) Trapezoidal rule
Divide the interval (a, b) into m equal sub-intervals each of length h, and
the interval (c, d) into n equal sub-intervals each of length k, so that
$x_i = x_0 + ih$, $x_0 = a$, $x_m = b$
$y_j = y_0 + jk$, $y_0 = c$, $y_n = d.$
Using the trapezoidal rule in both directions, we get
$I = \frac{hk}{4}\big[(\text{sum of the values of } f \text{ at the four corners}) + 2(\text{sum of the values of } f \text{ at the remaining boundary nodes}) + 4(\text{sum of the values of } f \text{ at the interior nodes})\big]$
where $f_{i,j} = f(x_i, y_j)$.
2) Simpson's rule
Divide the interval (a, b) into 2m equal sub-intervals each of length h,
and the interval (c, d) into 2n equal sub-intervals each of length k. Then,
applying Simpson's rule in both directions, the contribution of the elementary rectangle
$[x_{i-1}, x_{i+1}] \times [y_{j-1}, y_{j+1}]$ is
$\frac{hk}{9}\left[f_{i-1,j-1} + f_{i-1,j+1} + f_{i+1,j-1} + f_{i+1,j+1} + 4\left(f_{i,j-1} + f_{i-1,j} + f_{i,j+1} + f_{i+1,j}\right) + 16 f_{i,j}\right]$
Adding the contributions of all such rectangles, we obtain the value of I.
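The weight pattern of the two-dimensional trapezoidal rule (corners 1, edges 2, interior 4, scaled by hk/4) can be sketched as follows in Python (the function name is ours; the test integrand xy is a hypothetical example for which the rule is exact, since xy is bilinear):

```python
def trapezoid_2d(f, a, b, c, d, m, n):
    """Composite trapezoidal rule on [a,b] x [c,d] with m x n sub-intervals.

    Corner nodes get weight 1, edge nodes 2, interior nodes 4,
    all scaled by h*k/4."""
    h, k = (b - a) / m, (d - c) / n
    total = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            w = (1 if i in (0, m) else 2) * (1 if j in (0, n) else 2)
            total += w * f(a + i * h, c + j * k)
    return total * h * k / 4.0

# Hypothetical check: the exact value of the double integral of xy
# over [0,1] x [0,1] is 1/4, and the rule reproduces it exactly
print(trapezoid_2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0, 4, 4))   # → 0.25
```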
Example 6.9:
Using trapezoidal rule evaluate
Solution:
Taking h = k = 0.25 so that m = n = 4, we obtain
= [ ( , ) + ( , ) + 2( ( , . ) + ( , . ) + ( , . ))
+ ( , ) + ( , ) + 2( ( , . ) + ( , . ) + ( , . ))
+2{ ( . , ) + ( . , ) + 2( ( . , . ) + ( . , . ) + ( . , . ))
+ ( . , ) + ( . , ) + 2( ( . , . ) + ( . , . ) + ( . , . ))
+ ( . , ) + ( . , ) + 2( ( . , . ) + ( . , . ) + ( . , . ) )}]
= 0.3407.
Example 6.10:
Apply Simpson's rule to evaluate the integral
. .
=∫ ∫ , taking two sub-intervals.
Solution:
Taking h = 0.2 and k = 0.3 so that m = n = 2, we get
Activity 6.4
1. Evaluate ∫ ∫ using the trapezoidal rule (by taking h = k = 0.5).
2. Using Trapezoidal and Simpson's rules, evaluate
∫ ∫ , by taking two sub-intervals.
. .
4. Evaluate ∫ ∫ , using Simpson's rule.
Review Exercise
1. Find the first and second derivative of the function tabulated below, at the point = 1.1:
: 1.0 1.2 1.4 1.6 1.8 2.0
( ): 0 0.128 0.544 1.296 2.432 4.00
: 0 1 5 21 27
3. Find the value of cos(1.74) using the values given in the table below:
: 1.70 1.74 1.78 1.82 1.86
= ( − )− ( − )+ ( − )
6. Using the following data , find for which is minimum and find this value of .
: 0.60 0.65 0.70 0.75
7.
Bibliography
Brice Carnahan, H. L. (1969). Applied Numerical Methods. New York: John Wiley and Sons Inc.
Atkinson, K. E. (1978). An Introduction to Numerical Analysis. Canada: John Wiley and Sons Inc.
Stoer, J., & Bulirsch, R. (1991). Introduction to Numerical Analysis. New York: Springer-Verlag New York Inc.
Jain, M. (2007). Numerical Methods for Engineering and Computation. New Delhi: New Age International.
Lee W. Johnson, R. R. (1977). Numerical Analysis. Philippines: Addison-Wesley Publishing Company Inc.
P. Kandasamy (2008). Numerical Methods. Ram Nagar, New Delhi, 110055.