Introduction to Numerical Analysis
Study Material for MA 214
Spring 2018-19
Department of Mathematics
Indian Institute of Technology Bombay
Powai, Mumbai 400 076.
1 Mathematical Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Sequences of Real Numbers 1
1.2 Limits and Continuity 4
1.3 Differentiation 7
1.4 Integration 10
1.5 Taylor’s Theorem 11
1.6 Orders of Convergence 17
1.6.1 Big Oh and Little oh Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6.2 Order of Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7 Exercises 21
2 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.1 Floating-Point Representation 26
2.1.1 Floating-Point Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.1.2 Underflow and Overflow of Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.1.3 Chopping and Rounding a Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1.4 Arithmetic Using n-Digit Rounding and Chopping . . . . . . . . . . . . . . . . . . . 32
2.2 Types of Errors 33
2.3 Loss of Significance 34
2.4 Propagation of Relative Error in Arithmetic Operations 38
2.4.1 Addition and Subtraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.2 Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.4.3 Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.4.4 Total Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5.1 Polynomial Interpolation 168
5.1.1 Existence and Uniqueness of Interpolating Polynomial . . . . . . . . . . . . . . . 169
5.1.2 Lagrange’s Form of Interpolating Polynomial . . . . . . . . . . . . . . . . . . . . . 172
5.1.3 Newton’s Form of Interpolating Polynomial . . . . . . . . . . . . . . . . . . . . . . . 175
7 Numerical Ordinary Differential Equations . . . . . . . . . . . . . . . . . . . 235
7.1 Review of Theory 236
7.2 Discretization Notations 240
7.3 Euler’s Method 241
7.3.1 Error in Euler’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
7.4 Modified Euler’s Methods 247
7.5 Runge-Kutta Methods 249
7.5.1 Order Two . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
7.5.2 Order Four . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
7.6 Exercises 252
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Preface
These lecture notes are prepared to serve as supplementary study material for the students of the course MA 214, Introduction to Numerical Analysis, at the Department of Mathematics, IIT Bombay, in the Spring Semester 2018-19. The notes are prepared mainly from the books quoted in the syllabus; however, other books were also consulted, and in particular several exercise problems are taken from other sources.
CHAPTER 1
Mathematical Preliminaries
This chapter reviews some of the concepts and results from calculus that are frequently
used in this course. We recall important definitions and theorems, and outline proofs of
certain theorems. The readers are assumed to be familiar with a first course in calculus.
In Section 1.1, we introduce sequences of real numbers and discuss the concept of limit
and continuity in Section 1.2 with the intermediate value theorem. This theorem plays a
basic role in finding initial guesses in iterative methods for solving nonlinear equations.
In Section 1.3 we define the notion of derivative of a function, and prove Rolle’s theorem
and mean-value theorem for derivatives. The mean-value theorem for integration is
discussed in Section 1.4. These two theorems are crucially used in devising methods
for numerical integration and differentiation. Finally, Taylor’s theorem is discussed in
Section 1.5, which is essential for derivation and error analysis of almost all numerical
methods discussed in this course. In Section 1.6 we introduce tools that are useful in discussing the speed of convergence of sequences and the rate at which a function f(x) approaches a limiting value f(x0) as x → x0.
1.1 Sequences of Real Numbers
A sequence of real numbers is an ordered list of real numbers
a1, a2, . . . , an, an+1, . . . .
In other words, a sequence is a function that associates to each natural number n the real number an. The notation {an} is used to denote the sequence, i.e.
{an} := a1, a2, . . . , an, an+1, . . . .
The following result, known as the sandwich theorem, is very useful in computing the limit of a sequence sandwiched between two sequences having a common limit: if an ≤ cn ≤ bn for all sufficiently large n and lim_{n→∞} an = lim_{n→∞} bn = L, then lim_{n→∞} cn = L.
The following theorem shows the advantage of working with monotonic sequences.
Theorem 1.1.7.
Bounded monotonic sequences always converge.
Theorem 1.1.8.
Let {an} and {bn} be two sequences. Assume that lim_{n→∞} an and lim_{n→∞} bn exist. Then
1. lim_{n→∞} (an + bn) = lim_{n→∞} an + lim_{n→∞} bn,
2. lim_{n→∞} (c an) = c lim_{n→∞} an for any real number c,
3. lim_{n→∞} (an bn) = (lim_{n→∞} an)(lim_{n→∞} bn),
4. lim_{n→∞} (1/an) = 1 / (lim_{n→∞} an), provided lim_{n→∞} an ≠ 0.
1.2 Limits and Continuity
In the previous section, we introduced the concept of limit for a sequence of real numbers. We now define the "limit" in the context of functions.
1. Let f be a function defined on the left side (or both sides) of a, except possibly
at a itself. Then, we say “the left-hand limit of f (x) as x approaches a,
equals l” and denote
lim_{x→a−} f(x) = l,
if we can make the values of f (x) arbitrarily close to l (as close to l as we like)
by taking x to be sufficiently close to a and x less than a.
2. Let f be a function defined on the right side (or both sides) of a, except possibly
at a itself. Then, we say “the right-hand limit of f (x) as x approaches a,
equals r” and denote
lim_{x→a+} f(x) = r,
if we can make the values of f (x) arbitrarily close to r (as close to r as we like)
by taking x to be sufficiently close to a and x greater than a.
3. We say "the limit of f(x) as x approaches a equals L" and denote
lim_{x→a} f(x) = L,
if both the left-hand and right-hand limits of f(x) as x approaches a exist and are equal to L.
Remark 1.2.2.
Note that in each of the above definitions the value of the function f at the point
a does not play any role. In fact, the function f need not be defined at a.
In the previous section, we have seen some limit laws in the context of sequences.
Similar limit laws also hold for limits of functions. We have the following result, often
referred to as “the limit laws” or as “algebra of limits”.
Theorem 1.2.3.
Let f, g be two functions defined on both sides of a, except possibly at a itself. Assume that lim_{x→a} f(x) and lim_{x→a} g(x) exist. Then
1. lim_{x→a} (f(x) + g(x)) = lim_{x→a} f(x) + lim_{x→a} g(x),
2. lim_{x→a} (c f(x)) = c lim_{x→a} f(x) for any real number c,
3. lim_{x→a} (f(x) g(x)) = (lim_{x→a} f(x))(lim_{x→a} g(x)),
4. lim_{x→a} (1/g(x)) = 1 / (lim_{x→a} g(x)), provided lim_{x→a} g(x) ≠ 0.
Remark 1.2.4.
Polynomials, rational functions, and all trigonometric functions, wherever they are defined, have the direct substitution property: lim_{x→a} f(x) = f(a).
Theorem 1.2.5.
If f(x) ≤ g(x) when x is in an interval containing a (except possibly at a) and the limits of f and g both exist as x approaches a, then
lim_{x→a} f(x) ≤ lim_{x→a} g(x).
A sandwich theorem also holds for functions: if f(x) ≤ g(x) ≤ h(x) when x is in an interval containing a (except possibly at a), and lim_{x→a} f(x) = lim_{x→a} h(x) = L, then
lim_{x→a} g(x) = L.
We will now give a rigorous definition of the limit of a function. Similar definitions can
be written down for left-hand and right-hand limits of functions.
Definition 1.2.7.
Let f be a function defined on some open interval that contains a, except possibly
at a itself. Then we say that the limit of f (x) as x approaches a is L and we write
lim_{x→a} f(x) = L,
if for every ϵ > 0 there exists a δ > 0 such that |f(x) − L| < ϵ whenever 0 < |x − a| < δ.
3. continuous at a if
lim_{x→a} f(x) = f(a).
Remark 1.2.9.
Note that the definition of continuity of a function f at a means that the following three conditions are satisfied:
1. the function f is defined at a,
2. lim_{x→a} f(x) exists, and
3. lim_{x→a} f(x) = f(a).
Equivalently, f is continuous at a if for any given ϵ > 0, there exists a δ > 0 such that |f(x) − f(a)| < ϵ whenever |x − a| < δ.
Theorem 1.2.10.
If f and g are continuous at a, then the functions f + g, f − g, cg (c is a constant),
f g, f /g (provided g(a) ̸= 0), f ◦g (composition of f and g, whenever it makes sense)
are all continuous.
1.3 Differentiation
We next give the basic definition of a differentiable function.
f′(a) = lim_{h→0} [f(a + h) − f(a)] / h,   (1.1)
if this limit exists. We say f is differentiable at a. A function f is said to be
differentiable on (c, d) if f is differentiable at every point in (c, d).
There are alternate ways of defining the derivative of a function, which lead to different difference formulae.
Remark 1.3.2.
The derivative of f at a can also be written as
f′(a) = lim_{h→0} [f(a) − f(a − h)] / h,   (1.2)
and
f′(a) = lim_{h→0} [f(a + h) − f(a − h)] / (2h),   (1.3)
provided the limits exist.
Theorem 1.3.4.
If f is differentiable at a, then f is continuous at a.
Proof.
Using the identity
f(x) = [ (f(x) − f(a)) / (x − a) ] (x − a) + f(a)
and taking limit as x → a yields the desired result.
The converse of Theorem 1.3.4 is not true. For instance, the function f(x) = |x| is continuous at x = 0 but is not differentiable there.
Theorem 1.3.5.
Suppose f is differentiable at a. Then there exists a function ϕ such that
f(x) = f(a) + (x − a) f′(a) + (x − a) ϕ(x) for all x near a, and lim_{x→a} ϕ(x) = 0.
Proof.
Define ϕ by
ϕ(x) = [f(x) − f(a)] / (x − a) − f′(a).
Since f is differentiable at a, the result follows on taking limits on both sides of the
last equation as x → a.
Theorem 1.3.6 [Rolle's theorem].
Let f be continuous on [a, b], differentiable in (a, b), and let f(a) = f(b). Then there exists a c ∈ (a, b) such that f′(c) = 0.
Proof.
If f is a constant function i.e., f (x) = f (a) for every x ∈ [a, b], clearly such a c exists.
If f is not a constant, then at least one of the following holds.
Case 1: The graph of f goes above the line y = f (a) i.e., f (x) > f (a) for some
x ∈ (a, b).
Case 2: The graph of f goes below the line y = f (a) i.e., f (x) < f (a) for some
x ∈ (a, b).
In case (1), i.e., if the graph of f goes above the line y = f (a), then the global
maximum cannot be at a or b. Therefore, it must lie in the open interval (a, b).
Denote that point by c. That is, global maximum on [a, b] is actually a local maxi-
mum, and hence f ′ (c) = 0. A similar argument can be given in case (2), to show the
existence of local minimum in (a, b). This completes the proof of Rolle’s theorem.
The mean value theorem plays an important role in obtaining error estimates for certain
numerical methods.
Theorem 1.3.7 [Mean Value Theorem].
Let f be continuous on [a, b] and differentiable in (a, b). Then there exists a c ∈ (a, b) such that
f′(c) = [f(b) − f(a)] / (b − a),
or, equivalently,
f(b) − f(a) = f′(c)(b − a).
Proof.
Define ϕ on [a, b] by
ϕ(x) = f(x) − f(a) − [ (f(b) − f(a)) / (b − a) ] (x − a).
By Rolle’s theorem there exists a c ∈ (a, b) such that ϕ′ (c) = 0 and hence the proof
is complete.
1.4 Integration
In Theorem 1.3.7, we have discussed the mean value property for the derivative of a
function. We now discuss the mean value theorems for integration.
Theorem 1.4.1.
Let f be continuous on [a, b]. Then there exists a c ∈ [a, b] such that
∫_a^b f(x) dx = f(c)(b − a).
Proof.
Let m and M be minimum and maximum values of f in the interval [a, b], respectively.
Then,
m(b − a) ≤ ∫_a^b f(x) dx ≤ M(b − a).
Since f is continuous, the result follows from the intermediate value theorem.
The quantity
[1 / (b − a)] ∫_a^b f(x) dx
is the average of the integrable function f over the interval [a, b]. Observe that the first mean value theorem for integrals asserts that the average of an integrable function f on an interval [a, b] belongs to the range of the function f.
Theorem 1.4.1 is often referred to as the first mean value theorem for integrals.
We now state the second mean value theorem for integrals, which is a general form of
Theorem 1.4.1.
If f and g are continuous on [a, b] and g does not change sign in [a, b], then there exists a c ∈ [a, b] such that
∫_a^b f(x) g(x) dx = f(c) ∫_a^b g(x) dx.
1.5 Taylor's Theorem
The most important result used very frequently in numerical analysis, especially in the error analysis of numerical methods, is Taylor's expansion of a C^∞ function in a neighborhood of a point a ∈ R. In this section, we define the Taylor polynomial and prove Taylor's theorem. The idea of the proof of this theorem is similar to the one used in proving the mean value theorem, where we construct a function and apply Rolle's theorem several times to it.
The Taylor polynomial of degree n for f at the point a is defined by
Tn(x) = f(a) + f′(a)(x − a) + [f″(a)/2!](x − a)² + · · · + [f^(n)(a)/n!](x − a)^n.   (1.4)
Taylor's theorem (Theorem 1.5.2) states that if f is (n + 1)-times differentiable on an open interval containing the points a and x, then there exists a ξ between a and x such that
f(x) = Tn(x) + [f^(n+1)(ξ) / (n + 1)!] (x − a)^(n+1),   (1.5)
where Tn is the Taylor polynomial of degree n for f at the point a given by (1.4), and the second term on the right hand side is called the remainder term.
Proof.
Let us assume x > a and prove the theorem. The proof is similar if x < a.
Define g(t) by
g(t) = f(t) − Tn(t) − A(t − a)^(n+1)
and choose A so that g(x) = 0, which gives
A = [f(x) − Tn(x)] / (x − a)^(n+1).
Note that
g^(k)(a) = 0 for k = 0, 1, . . . , n.
Also, observe that the function g is continuous on [a, x] and differentiable in (a, x).
Apply Rolle’s theorem to g on [a, x] (after verifying all the hypotheses of Rolle’s
theorem) to get
a < c1 < x satisfying g ′ (c1 ) = 0.
Next, apply Rolle's theorem to g′ on [a, c1] to get a < c2 < c1 satisfying g″(c2) = 0. In turn, apply Rolle's theorem to g^(2), g^(3), . . . , g^(n) on the intervals [a, c2], [a, c3], . . . , [a, cn], respectively.
At the last step, we get a < cn+1 < cn satisfying g^(n+1)(cn+1) = 0.
But
g^(n+1)(cn+1) = f^(n+1)(cn+1) − A(n + 1)!,
which gives
A = f^(n+1)(cn+1) / (n + 1)!.
Equating both values of A, we get
f(x) = Tn(x) + [f^(n+1)(cn+1) / (n + 1)!] (x − a)^(n+1).
Observe that the mean value theorem (Theorem 1.3.7) is a particular case of Taylor's theorem.
Remark 1.5.3.
The representation (1.5) is called Taylor's formula for the function f about the point a.
If we write x = a + h for some real number h, then Taylor's theorem can be used to get
f(a + h) = Tn(a + h) + [f^(n+1)(ξ) / (n + 1)!] h^(n+1).
The remainder term on the right hand side is not known, since it involves the evaluation of f^(n+1) at some unknown value ξ lying between a and a + h. Also, observe that as h → 0, the remainder term approaches zero, provided f^(n+1) is bounded. This means that for smaller values of h, the Taylor polynomial gives a good approximation of f(a + h).
As remarked above, the remainder term involves an unknown parameter and often this
term cannot be calculated explicitly. However, an estimate of the error involved in the
approximation of the function by its Taylor’s polynomial can be obtained by finding a
bound for the remainder term.
Suppose f ∈ C^(n+1)(I) on an interval I and there exists a constant Mn+1 such that |f^(n+1)(x)| ≤ Mn+1 for all x ∈ I. Then for fixed points a, x ∈ I, the remainder term in (1.5) satisfies the estimate
| [f^(n+1)(ξ) / (n + 1)!] (x − a)^(n+1) | ≤ [Mn+1 / (n + 1)!] |x − a|^(n+1),
which holds for all x ∈ I. Observe that the right hand side of the above estimate is a fixed number. We refer to such estimates as remainder estimates.
Note
In most applications of Taylor’s theorem, one never knows ξ precisely. However
in view of remainder estimate given above, it does not matter as long as we know
that the remainder can be bounded by obtaining a bound Mn+1 which is valid
for all ξ between a and x.
Example 1.5.6.
A second degree polynomial approximation to
f(x) = √(x + 1),  x ∈ [−1, ∞),
obtained from the Taylor polynomial about the point a = 0, is
f(x) ≈ 1 + x/2 − x²/8,
where the remainder term is neglected and hence what we obtained here is only an approximate representation of f.
The truncation error is obtained using the remainder term in the formula (1.5) with
n = 2 and is given by
x^3 / [16 (√(1 + ξ))^5],
for some point ξ between 0 and x.
Note that we cannot obtain a remainder estimate in the present example as f ′′′ is not
bounded in [−1, ∞). However, for any 0 < δ < 1, if we restrict the domain of f to
[−δ, ∞), then we can obtain the remainder estimate for a fixed x ∈ [−δ, ∞) as
x^3 / [16 (√(1 − δ))^5].
Further, if we restrict the domain of f to [−δ, b] for some real number b > 0, then we
get the remainder estimate independent of x as
b^3 / [16 (√(1 − δ))^5].
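The quality of this approximation is easy to check numerically. The following MATLAB sketch (the sample points are our own arbitrary choices for illustration) compares the second degree Taylor polynomial with the exact value and with the bound x^3/16, which is valid for x ≥ 0 since √(1 + ξ) ≥ 1 there.

% Second degree Taylor polynomial of f(x) = sqrt(x+1) about a = 0 (Example 1.5.6),
% compared with the exact value and with the remainder bound x^3/16 (valid for x >= 0).
f  = @(x) sqrt(x + 1);
T2 = @(x) 1 + x/2 - x.^2/8;
for x = [0.1 0.5 1.0]
    fprintf('x = %4.1f   |f - T2| = %.2e   bound = %.2e\n', ...
            x, abs(f(x) - T2(x)), x^3/16);
end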
Figure 1.1: Comparison between the graph of f(x) = cos(x) and its Taylor polynomials of degree 2 and degree 10 about the point a = 0.
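A comparison like the one in Figure 1.1 can be generated with a few lines of MATLAB; the following sketch plots cos(x) together with its Taylor polynomials of degree 2 and degree 10 about a = 0.

% Reproduce a comparison in the spirit of Figure 1.1.
x   = linspace(-6, 6, 400);
T2  = 1 - x.^2/2;                          % Taylor polynomial of degree 2
T10 = zeros(size(x));
for k = 0:5                                % degree 10: sum of (-1)^k x^(2k)/(2k)! for k = 0,...,5
    T10 = T10 + (-1)^k * x.^(2*k) / factorial(2*k);
end
plot(x, cos(x), x, T2, '--', x, T10, '-.');
legend('cos(x)', 'Taylor degree 2', 'Taylor degree 10');
ylim([-2 2]);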
Theorem 1.5.8.
Let f ∈ C ∞ (I) and let a ∈ I. Assume that there exists an open neighborhood
(interval) Na ⊂ I of the point a and there exists a constant M (may depend on a)
such that
|f^(k)(x)| ≤ M^k for all x ∈ Na and k = 0, 1, 2, . . . .
Then, for each x ∈ Na, the Taylor series of f about the point a converges to f(x).
Example 1.5.9.
The Taylor series of f(x) = cos(x) about the point a = 0 is
f(x) = cos(0) − sin(0) x − [cos(0)/2!] x² + [sin(0)/3!] x³ + · · ·
     = Σ_{k=0}^{∞} [(−1)^k / (2k)!] x^(2k).
Truncating the series at k = n gives
Σ_{k=0}^{n} [(−1)^k / (2k)!] x^(2k),
which is the Taylor polynomial of degree 2n for the function f(x) = cos(x) about the point a = 0. The remainder term is given by
(−1)^(n+1) [cos(ξ) / (2(n + 1))!] x^(2(n+1)),
for some ξ between 0 and x.
1.6 Orders of Convergence
1.6.1 Big Oh and Little oh Notations
The notions of big Oh and little oh are well understood through the following exam-
ple.
Example 1.6.1.
Consider the two sequences {n} and {n2 } both of which are unbounded and tend to
infinity as n → ∞. However we feel that the sequence {n} grows ‘slowly’ compared
to the sequence {n2 }.
Consider also the sequences {1/n} and {1/n2 } both of which decrease to zero as
n → ∞. However we feel that the sequence {1/n2 } decreases more rapidly compared
to the sequence {1/n}.
The above examples motivate us to develop tools that compare two sequences {an} and {bn}. Landau introduced the concepts of Big Oh and Little oh for comparing two sequences, which we now discuss.
Remark 1.6.3.
1. If bn ≠ 0 for every n, then we have an = O(bn) if and only if the sequence {an/bn} is bounded. That is, there exists a constant C such that
|an / bn| ≤ C for all n.
2. If bn ≠ 0 for every n, then we have an = o(bn) if and only if the sequence {an/bn} converges to 0. That is,
lim_{n→∞} an / bn = 0.
3. For any pair of sequences {an } and {bn } such that an = o(bn ), it follows that
an = O(bn ).
The converse is not true. Consider the sequences an = n and bn = 2n + 3,
for which an = O(bn ) holds but an = o(bn ) does not hold.
4. Let {an } and {bn } be two sequences that converge to 0. Then an = O(bn )
means the sequence {an } tends to 0 at least as fast as the sequence {bn };
and an = o(bn ) means the sequence {an } tends to 0 faster than the sequence
{bn }.
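These characterizations are easy to check numerically. The following MATLAB sketch prints the ratios an/bn for the two pairs mentioned above: for an = n, bn = 2n + 3 the ratio stays bounded near 1/2 but does not tend to zero, while for an = 1/n², bn = 1/n the ratio tends to zero.

% Numerical illustration of Big Oh and little oh (items 3 and 4 above).
n  = (1:10)'*10;                % n = 10, 20, ..., 100
a1 = n;        b1 = 2*n + 3;    % a_n = O(b_n) but not o(b_n): ratio stays near 1/2
a2 = 1./n.^2;  b2 = 1./n;       % a_n = o(b_n): ratio 1/n tends to 0
disp([n, a1./b1, a2./b2]);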
The Big Oh and Little oh notations can be adapted for functions as follows.
Example 1.6.5.
From Example 1.5.9, we have
cos(x) = Σ_{k=0}^{n} [(−1)^k / (2k)!] x^(2k) + (−1)^(n+1) [cos(ξ) / (2(n + 1))!] x^(2(n+1)),
where ξ lies between 0 and x. Let
g(x) = (−1)^(n+1) [cos(ξ) / (2(n + 1))!] x^(2(n+1))
denote the remainder term. Since |cos(ξ)| ≤ 1, we have |g(x)| ≤ |x|^(2(n+1)) / (2(n + 1))!, and therefore g(x) = O(x^(2(n+1))) as x → 0.
1.6.2 Order of Convergence
Suppose that the sequence {an} converges to a number a, i.e.,
lim_{n→∞} an = a.
We would like to measure the speed at which the convergence takes place. For example,
consider
lim_{n→∞} 1/(2n + 3) = 0
and
lim_{n→∞} 1/n² = 0.
We feel that the first sequence goes to zero linearly and the second goes with a much
superior speed because of the presence of n2 in its denominator. We will define the
notion of order of convergence precisely.
1. We say that the order of convergence is at least linear if there exists a constant c < 1 and a natural number N such that
|an+1 − a| ≤ c |an − a| for all n ≥ N.
1.7 Exercises
Sequences of Real Numbers
1. Let L be a real number and let {an } be a sequence of real numbers. If there exists
a positive integer N such that
|an − L| ≤ µ|an−1 − L|,
for all n ≥ N and for some fixed µ ∈ (0, 1), then show that an → L as n → ∞.
2. Consider the sequences {an } and {bn }, where
an = 1/n,  bn = 1/n²,  n = 1, 2, · · · .
Clearly, both the sequences converge to zero. For the given ϵ = 10−2 , obtain the
smallest positive integers Na and Nb such that
|an | < ϵ whenever n ≥ Na , and |bn | < ϵ whenever n ≥ Nb .
For any ϵ > 0, show that Na > Nb .
3. Let {xn } and {yn } be two sequences such that xn , yn ∈ [a, b] and xn < yn for each
n = 1, 2, · · · . If xn → b as n → ∞, then show that the sequence {yn } converges.
Find the limit of the sequence {yn }.
4. Let In = [(n − 2)/(2n), (n + 2)/(2n)], n = 1, 2, · · · , and let {an} be a sequence with an chosen arbitrarily in In for each n = 1, 2, · · · . Show that an → 1/2 as n → ∞.
Limits and Continuity
5. Let f be a real-valued function such that f(x) ≥ sin(x) for all x ∈ R. If lim_{x→0} f(x) = L exists, then show that L ≥ 0.
6. Determine the limits
lim_{x→∞} P(x)/Q(x) and lim_{x→0} P(x)/Q(x),
where P(x) and Q(x) are polynomials.
7. Show that the equation sin x + x² = 1 has at least one solution in the interval [0, 1].
8. Let f(x) be continuous on [a, b], let x1, · · · , xn be points in [a, b], and let g1, · · · , gn be real numbers having the same sign. Show that
Σ_{i=1}^{n} f(xi) gi = f(ξ) Σ_{i=1}^{n} gi,  for some ξ ∈ [a, b].
9. Let f : [0, 1] → [0, 1] be a continuous function. Prove that the equation f (x) = x
has at least one solution lying in the interval [0, 1] (Note: A solution of this equa-
tion is called a fixed point of the function f ).
12. Suppose f is differentiable in an open interval (a, b). Prove the following statements.
(a) If f ′ (x) ≥ 0 for all x ∈ (a, b), then f is non-decreasing.
(b) If f ′ (x) = 0 for all x ∈ (a, b), then f is constant.
(c) If f ′ (x) ≤ 0 for all x ∈ (a, b), then f is non-increasing.
13. Let f : [a, b] → R be given by f(x) = x². Find a point c specified by the mean-value theorem for derivatives. Verify that this point lies in the interval (a, b).
Integration
14. Let g : [0, 1] → R be a continuous function. Show that there exists a c ∈ (0, 1) such that
∫_0^1 x²(1 − x)² g(x) dx = (1/30) g(c).
where √(nπ) ≤ c ≤ √((n + 1)π).
Taylor’s Theorem
16. Find the Taylor polynomial of degree 2 for the function
f(x) = √(x + 1).
17. Use Taylor’s formula about a = 0 to evaluate approximately the value of the func-
tion f (x) = ex at x = 0.5 using three terms (i.e., n = 2) in the formula. Obtain
the remainder R2 (0.5) in terms of the unknown c. Compute approximately the
possible values of c and show that these values lie in the interval (0, 0.5).
CHAPTER 2
Error Analysis
Numerical analysis deals with developing methods, called numerical methods, to approximate a solution of a given mathematical problem (whenever a solution exists). The approximate solution obtained by a method involves an error, which we call the mathematical error, and which is precisely the difference between the exact solution and the approximate solution. Thus, we have
Mathematical Error = Exact Solution − Approximate Solution.
When a numerical method is implemented on a computing device, an additional error creeps in because the device can work only with finitely many digits; we call this the arithmetic error. The error involved in the computed (numerical) solution when compared to the exact solution can therefore be worse than the mathematical error, and is now given by
Total Error = Exact Solution − Computed Solution.
A digital calculating device can hold only a finite number of digits because of memory
restrictions. Therefore, a number cannot be stored exactly. Certain approximation
needs to be done, and only an approximate value of the given number will finally be
stored in the device. For further calculations, this approximate value is used instead of
the exact value of the number. This is the source of arithmetic error.
In this chapter, we introduce the floating-point representation of a real number and
illustrate a few ways to obtain floating-point approximation of a given real number.
We further introduce different types of errors that we come across in numerical analysis
and their effects in the computation. At the end of this chapter, we will be familiar
with the arithmetic errors, their effect on computed results and some ways to minimize
this error in the computation.
2.1 Floating-Point Representation
Remark 2.1.1.
When β = 2, the floating-point representation (2.1) is called the binary floating-
point representation and when β = 10, it is called the decimal floating-point
representation.
Note
Throughout this course, we always take β = 10.
2.1.1 Floating-Point Approximation
Due to memory restrictions, a computing device can store only a finite number of digits in the mantissa. In this section, we introduce the floating-point approximation and discuss how a given real number can be approximated.
Although different computing devices have different ways of representing numbers, here
we introduce a mathematical form of this representation, which we will use throughout
this course.
Definition 2.1.2 [n-Digit Floating-Point Form].
A real number x is said to be in n-digit floating-point form if it is written as
x = (−1)^s × (.d1 d2 · · · dn)β × β^e,   (2.3)
where
(.d1 d2 · · · dn)β = d1/β + d2/β² + · · · + dn/β^n,   (2.4)
the digits satisfy 0 ≤ di ≤ β − 1 with d1 ≠ 0, s ∈ {0, 1} determines the sign, and e ∈ Z is the exponent.
Remark 2.1.3.
When β = 2, the n-digit floating-point representation (2.3) is called the n-digit
binary floating-point representation and when β = 10, it is called the n-digit
decimal floating-point representation.
Example 2.1.4.
The following are examples of real numbers in the decimal floating point representa-
tion.
1. The real number x = 6.238 is represented in the decimal floating-point repre-
sentation as
6.238 = (−1)0 × 0.6238 × 101 ,
in which case, we have s = 0, β = 10, e = 1, d1 = 6, d2 = 2, d3 = 3 and d4 = 8.
Remark 2.1.5.
The floating-point representation of the number 1/3 is
1/3 = 0.33333 · · · = (−1)^0 × (0.33333 · · · )10 × 10^0.
An n-digit decimal floating-point representation of this number has to contain only
n digits in its mantissa. Therefore, 1/3 cannot be represented as an n-digit floating-
point number.
Any computing device has its own memory limitations in storing a real number. In
terms of the floating-point representation, these limitations lead to the restrictions in
the number of digits in the mantissa (n) and the range of the exponent (e). In Section 2.1.2, we introduce the concepts of underflow and overflow of memory, which result from the restriction on the exponent. The restriction on the length of the mantissa is discussed in Section 2.1.3.
2.1.2 Underflow and Overflow of Memory
When the value of the exponent e in a floating-point number exceeds the maximum
limit of the memory, we encounter the overflow of memory, whereas when this value
goes below the minimum of the range, then we encounter underflow. Thus, for a given
computing device, there are integers m and M such that the exponent e is limited to a
range
m ≤ e ≤ M. (2.5)
During the calculation, if some computed number has an exponent e > M then we say,
the memory overflow occurs and if e < m, we say the memory underflow occurs.
Remark 2.1.6.
In the case of overflow of memory in a floating-point number, a computer will usually produce meaningless results or simply print the symbol inf or NaN. When a computation involves an undetermined quantity (like 0 × ∞, ∞ − ∞, 0/0), then the output of the computed value on a computer will be the symbol NaN (meaning 'not a number'). For instance, if X is a sufficiently large number that results in an overflow of memory when stored on a computing device, and x is another number that results in an underflow, then their product will be returned as NaN.
On the other hand, we feel that the underflow is more serious than overflow in a
computation. Because, when underflow occurs, a computer will simply consider the
number as zero without any warning. However, by writing a separate subroutine,
one can monitor and get a warning whenever an underflow occurs.
When the value of j is further reduced slightly as shown in the following program
j=-323.64;
if(10^j>0)
fprintf('The given number is greater than zero\n');
elseif (10^j==0)
fprintf('The given number is equal to zero\n');
else
fprintf('The given number is less than zero\n');
end
the output shows
The given number is equal to zero
If your computer is not showing the above output, try decreasing the value of j till
you get the above output.
In this example, we see that the number 10^(−323.64) is recognized as zero by the computer. This is due to the underflow of memory. Note that multiplying any large
number by this number will give zero as answer. If a computation involves such an
underflow of memory, then there is a danger of having a large difference between the
actual value and the computed value.
2.1.3 Chopping and Rounding a Number
The number of digits in the mantissa, as given in Definition 2.1.2, is called the precision or length of the floating-point number. In general, a real number can have
infinitely many digits, which a computing device cannot hold in its memory. Rather,
each computing device will have its own limitation on the length of the mantissa. If
a given real number has infinitely many (or sufficiently large number of) digits in the
mantissa of the floating-point form as in (2.1), then the computing device converts this
number into an n-digit floating-point form as in (2.3). Such an approximation is called
the floating-point approximation of a real number.
There are many ways to get floating-point approximation of a given real number. Here
we introduce two types of floating-point approximation.
Note
As already mentioned, throughout this course, we always take β = 10. Also, we
do not assume any restriction on the exponent e ∈ Z.
Example 2.1.10.
The floating-point representation of π is given by
π = (−1)^0 × (.31415926 · · · ) × 10^1.
The floating-point approximation of π using five-digit chopping is
fl(π) = (−1)^0 × (.31415) × 10^1,
which is equal to 3.1415. Since the sixth digit of the mantissa in the floating-point representation of π is a 9, the floating-point approximation of π using five-digit rounding is given by
fl(π) = (−1)^0 × (.31416) × 10^1,
which is equal to 3.1416.
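The chopping and rounding operations can be mimicked on a computer. The following MATLAB sketch defines our own illustrative helpers (fl_chop and fl_round are not built-in functions); with x = π and n = 5 it reproduces the values 3.1415 and 3.1416 of Example 2.1.10, up to the machine's internal binary representation of these decimal values.

% n-digit decimal chopping and rounding of a nonzero real number x (illustrative sketch).
e        = @(x) floor(log10(abs(x))) + 1;                       % exponent with 0.1 <= |mantissa| < 1
fl_chop  = @(x, n) fix(  x ./ 10.^e(x) * 10^n) / 10^n .* 10.^e(x);
fl_round = @(x, n) round(x ./ 10.^e(x) * 10^n) / 10^n .* 10.^e(x);

fl_chop(pi, 5)      % gives 3.1415 (five-digit chopping)
fl_round(pi, 5)     % gives 3.1416 (five-digit rounding)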
Remark 2.1.11.
Most modern processors, including Intel processors, use the IEEE 754 standard format. In the 64-bit binary representation, this format uses 52 bits for the mantissa, 11 bits for the exponent and 1 bit for the sign. This representation is called the double precision format.
2.1.4 Arithmetic Using n-Digit Rounding and Chopping
Example 2.1.12.
Consider the function
f(x) = x (√(x + 1) − √x).
Carrying out the evaluation of f(100000) step by step, with every intermediate result rounded to six digits, the value of f(100000) using six-digit rounding is 100. Similarly, we can see that using six-digit chopping, the value of f(100000) is 200.
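The six-digit computation can be simulated with the rounding helper sketched in the previous subsection (again our own illustration; every intermediate result is rounded to six digits).

% Simulate Example 2.1.12 with six-digit rounding at every step.
e   = @(x) floor(log10(abs(x))) + 1;
fl6 = @(x) round(x ./ 10.^e(x) * 1e6) / 1e6 .* 10.^e(x);

x  = 100000;
s1 = fl6(sqrt(fl6(x + 1)));      % sqrt(100001) rounded to six digits: 316.229
s2 = fl6(sqrt(x));               % sqrt(100000) rounded to six digits: 316.228
f  = fl6(x * fl6(s1 - s2))       % gives 100, far from the true value 158.113...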
2.2 Types of Errors
The approximate representation of a real number obviously differs from the actual number; this difference is called an error.
Remark 2.2.2.
Let xA denote the approximation to the real number x. We use the following notations:
E(xA) := x − xA (the error),
Ea(xA) := |x − xA| (the absolute error),
Er(xA) := |x − xA| / |x|, x ≠ 0 (the relative error).
The absolute error has to be understood more carefully because a relatively small
difference between two large numbers can appear to be large, and a relatively large
difference between two small numbers can appear to be small. On the other hand, the
relative error gives a percentage of the difference between two numbers, which is usually
more meaningful as illustrated below.
Example 2.2.3.
Let x = 100000, xA = 99999, y = 1 and yA = 1/2. We have
Ea(xA) = 1,  Ea(yA) = 1/2.
Although Ea(xA) > Ea(yA), we have
Er(xA) = 10^(−5),  Er(yA) = 1/2.
Hence, in terms of percentage error, xA has only a 10^(−3)% error when compared to x, whereas yA has a 50% error when compared to y.
The errors defined above are between a given number and its approximate value. Quite
often we also approximate a given function by another function that can be handled
more easily. For instance, a sufficiently differentiable function can be approximated
using Taylor’s theorem (Theorem 1.5.2). The error between the function value and
the value obtained from the corresponding Taylor’s polynomial is defined as truncation
error as defined in Definition 1.5.5.
In place of relative error, we often use the concept of significant digits that is closely
related to relative error.
2.3 Loss of Significance
Example 2.3.2.
1. For x = 1/3, the approximate number xA = 0.333 has three significant digits, since
|x − xA| / |x| = 0.001 < 0.005 = 0.5 × 10^(−2).
Thus, r = 3.
2. For x = 0.02138, the approximate number xA = 0.02144 has three significant digits, since
|x − xA| / |x| ≈ 0.0028 < 0.005 = 0.5 × 10^(−2).
Thus, r = 3.
3. For x = 0.02132, the approximate number xA = 0.02144 has two significant digits, since
|x − xA| / |x| ≈ 0.0056 < 0.05 = 0.5 × 10^(−1).
Thus, r = 2.
4. For x = 0.02138, the approximate number xA = 0.02149 has two significant digits, since
|x − xA| / |x| ≈ 0.0051 < 0.05 = 0.5 × 10^(−1).
Thus, r = 2.
5. For x = 0.02108, the approximate number xA = 0.0211 has three significant digits, since
|x − xA| / |x| ≈ 0.0009 < 0.005 = 0.5 × 10^(−2).
Thus, r = 3.
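The largest such r can also be computed directly. The following sketch uses the shortcut r = ⌊−log10(2|x − xA|/|x|)⌋ + 1, which is our own compact restatement of the requirement |x − xA|/|x| ≤ 0.5 × 10^(1−r); it reproduces the values of r found above.

% Number of significant digits of xA as an approximation to x (illustrative sketch).
sigdig = @(x, xA) floor(-log10(2*abs(x - xA)./abs(x))) + 1;

sigdig(1/3,     0.333)      % 3
sigdig(0.02138, 0.02144)    % 3
sigdig(0.02132, 0.02144)    % 2
sigdig(0.02138, 0.02149)    % 2
sigdig(0.02108, 0.0211)     % 3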
Remark 2.3.3.
Number of significant digits roughly measures the number of leading non-zero digits
of xA that are correct relative to the corresponding digits in the true value x.
However, this is not a precise way to get the number of significant digits as it is
evident from the above examples.
The role of significant digits in numerical calculations is very important in the sense
that the loss of significant digits may result in drastic amplification of the relative error
as illustrated in the following example.
Example 2.3.4.
Let us consider two real numbers x and y. The numbers xA and yA are approximations to x and y, correct to seven and eight significant digits, respectively. The exact difference between xA and yA is
zA = xA − yA = 0.12210000 × 10^(−3),
while the exact difference between x and y is
z = x − y = 0.12270000 × 10^(−3).
Therefore,
|z − zA| / |z| ≈ 0.0049 < 0.5 × 10^(−2),
and hence zA has only three significant digits with respect to z. Thus, we started with
two approximate numbers xA and yA which are correct to seven and eight significant
digits with respect to x and y, respectively. But their difference zA has only three
significant digits with respect to z. Hence, there is a loss of significant digits in the
process of subtraction. A simple calculation shows that the relative error has been amplified enormously; for instance, we have
Er(zA) ≈ 375067 × Er(yA).
Loss of significant digits is therefore dangerous. The loss of significant digits in the
process of calculation is referred to as loss of significance.
Example 2.3.5.
Consider the function
f(x) = x (√(x + 1) − √x).
From Example 2.1.12, the value of f (100000) using six-digit rounding is 100, whereas
the true value is 158.113. There is a drastic error in the value of the function, which
is due to the loss of significant digits. It is evident that as x increases, the terms √(x + 1) and √x come closer to each other, and therefore the loss of significance in their computed values increases.
Such a loss of significance can be avoided by rewriting the given expression of f in such a way that subtraction of nearby non-negative numbers is avoided. For instance, we can rewrite the expression of the function f as
f(x) = x / (√(x + 1) + √x).
With this new form of f , we obtain f (100000) = 158.114000 using six-digit rounding.
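The same phenomenon shows up in double precision arithmetic once x is large enough. The following sketch (the value x = 10^15 is our own choice for illustration) compares the two formulas.

% Loss of significance in f(x) = x*(sqrt(x+1) - sqrt(x)) for large x, and its cure.
x  = 1e15;
f1 = x*(sqrt(x+1) - sqrt(x));     % direct formula: suffers from cancellation
f2 = x/(sqrt(x+1) + sqrt(x));     % rewritten formula: no subtraction of nearby numbers
fprintf('direct    : %.10f\n', f1);
fprintf('rewritten : %.10f\n', f2);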
Example 2.3.6.
Consider evaluating the function
f (x) = 1 − cos x
near x = 0. Since cos x ≈ 1 for x near zero, there will be loss of significance in the
process of evaluating f (x) for x near zero. So, we have to use an alternative formula
for f (x) such as
f(x) = 1 − cos x
     = (1 − cos²x) / (1 + cos x)
     = sin²x / (1 + cos x),
which can be evaluated quite accurately for small x.
Remark 2.3.7.
Unlike the above examples, we may not always be able to write an equivalent formula for a given function that avoids loss of significance in the evaluation. In such cases, we have to settle for a suitable approximation of the given function by other functions, for instance a Taylor polynomial of the desired degree, that do not involve loss of significance.
2.4 Propagation of Relative Error in Arithmetic Operations
Let xA and yA denote the approximate numbers used in the calculation, and let xT and yT be the corresponding true values. We will now see how the relative error propagates through the four basic arithmetic operations.
2.4.1 Addition and Subtraction
Let xT = xA + ϵ and yT = yA + η be positive real numbers. The relative error Er(xA ± yA) is given by
Er(xA ± yA) = [(xT ± yT) − (xA ± yA)] / (xT ± yT)
            = [(xT ± yT) − ((xT − ϵ) ± (yT − η))] / (xT ± yT),
which simplifies to
Er(xA ± yA) = (ϵ ± η) / (xT ± yT).   (2.12)
The above expression shows that there can be a drastic increase in the relative error
during subtraction of two approximate numbers whenever xT ≈ yT as we have witnessed
in Example 2.3.4 and Example 2.3.5. On the other hand, it is easy to see from (2.12)
that
|Er (xA + yA )| ≤ |Er (xA )| + |Er (yA )|,
which shows that the relative error propagates slowly in addition. Note that such an
inequality in the case of subtraction is not possible.
2.4.2 Multiplication
The relative error Er(xA × yA) is given by
Er(xA × yA) = [(xT × yT) − (xA × yA)] / (xT × yT)
            = [(xT × yT) − (xT − ϵ)(yT − η)] / (xT × yT)
            = (η xT + ϵ yT − ϵη) / (xT × yT)
            = ϵ/xT + η/yT − (ϵ/xT)(η/yT).
Thus, we have
|Er(xA × yA)| ≤ |Er(xA)| + |Er(yA)| + |Er(xA)| |Er(yA)|.
Note that when |Er(xA)| and |Er(yA)| are very small, their product is negligible when compared to |Er(xA)| + |Er(yA)|. Therefore, the above inequality reduces to
|Er(xA × yA)| ⪅ |Er(xA)| + |Er(yA)|,
which shows that the relative error propagates slowly in multiplication.
2.4.3 Division
Example 2.4.1.
Consider evaluating the integral
In = ∫_0^1 x^n / (x + 5) dx,  for n = 0, 1, · · · , 30.
The following table shows the computed value of In using both iterative formulas
along with the exact value. The numbers are computed using MATLAB using double
precision arithmetic and the final answer is rounded to 6 digits after the decimal
point.
Clearly the backward iteration gives exact value up to the number of digits shown,
whereas forward iteration tends to increase error and give entirely wrong values. This
is due to the propagation of error from one iteration to the next iteration. In forward
iteration, the total error from one iteration is magnified by a factor of 5 at the next
iteration. In backward iteration, the total error from one iteration is divided by 5 at
the next iteration. Thus, in this example, with each iteration, the total error tends
to increase rapidly in the forward iteration and tends to increase very slowly in the
backward iteration.
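The forward and backward formulas follow from the identity x^n/(x + 5) = x^(n−1) − 5 x^(n−1)/(x + 5), which gives In = 1/n − 5 In−1 with I0 = ln(6/5), and, run backwards, In−1 = (1/n − In)/5. A table of this kind can be reproduced with the following sketch; the backward iteration is started at n = 40 with the crude guess I40 = 0, which is safe since 0 ≤ In ≤ 1/(5(n + 1)).

% Forward and backward evaluation of I_n = integral_0^1 x^n/(x+5) dx.
Nf = 30;  Nb = 40;
If = zeros(1, Nf+1);  If(1) = log(6/5);        % forward: error multiplied by -5 each step
for n = 1:Nf
    If(n+1) = 1/n - 5*If(n);
end
Ib = zeros(1, Nb+1);  Ib(Nb+1) = 0;            % backward: error divided by 5 each step
for n = Nb:-1:1
    Ib(n) = (1/n - Ib(n+1))/5;
end
fprintf('%4s %15s %15s\n', 'n', 'forward', 'backward');
for n = [5 10 15 20 25 30]
    fprintf('%4d %15.6f %15.6f\n', n, If(n+1), Ib(n+1));
end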
2.5 Propagation of Relative Error in Function Evaluation
Consider evaluating a function f at an approximate value xA instead of at the value x. By the mean value theorem,
f(x) − f(xA) = f′(ξ)(x − xA),
where ξ is an unknown point between x and xA. The relative error in f(xA) when compared to f(x) is given by
Er(f(xA)) = [f′(ξ) / f(x)] (x − xA).
Thus, we have
Er(f(xA)) = [x f′(ξ) / f(x)] Er(xA).   (2.15)
Since xA and x are assumed to be very close to each other and ξ lies between x and xA, we may make the approximation
Er(f(xA)) ≈ [x f′(x) / f(x)] Er(xA).   (2.16)
The expression inside the bracket on the right hand side of (2.16) is the amplification factor for the relative error in f(xA) in terms of the relative error in xA. Thus, this expression plays an important role in understanding the propagation of relative error in evaluating the function value f(x), and hence motivates the following definition.
The quantity
|c f′(c) / f(c)|
is called the condition number of the function f at the point x = c. The condition number can be used to decide whether the evaluation of the function at x = c is well-conditioned or ill-conditioned, depending on whether this number is small or large as we approach this point. It is not possible to decide a priori how large the condition number should be before we say that the function evaluation is ill-conditioned; it depends on the circumstances in which we are working.
Example 2.5.3.
Consider the function
f(x) = √x, for all x ∈ [0, ∞). Then
f′(x) = 1/(2√x), for all x ∈ (0, ∞),
so that
x f′(x) / f(x) = x [1/(2√x)] / √x = 1/2.
Thus the condition number of f at any point x > 0 is 1/2, and the process of evaluating this function is well-conditioned.
Example 2.5.4.
Consider the function
f(x) = 10 / (1 − x²), for all x ∈ R with x ≠ ±1. Then
f′(x) = 20x / (1 − x²)²,
so that
| x f′(x) / f(x) | = | x [20x / (1 − x²)²] / [10 / (1 − x²)] | = 2x² / |1 − x²|,
and this number can be quite large for |x| near 1. Thus, for x near 1 or −1, the process of evaluating this function is ill-conditioned.
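The growth of this condition number as x approaches 1 can be tabulated with a short sketch:

% Condition number |x*f'(x)/f(x)| of f(x) = 10/(1-x^2) near x = 1.
f      = @(x) 10./(1 - x.^2);
fprime = @(x) 20*x./(1 - x.^2).^2;
condf  = @(x) abs(x.*fprime(x)./f(x));        % equals 2x^2/|1-x^2|
for x = [0.5 0.9 0.99 0.999]
    fprintf('x = %6.3f   condition number = %10.2f\n', x, condf(x));
end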
The above two examples give us a feeling that if the process of evaluating a function is well-conditioned, then we tend to get a smaller propagated relative error. But this is not true in general, as shown in the following example.
Example 2.5.5.
Consider the function
f(x) = √(x + 1) − √x, for all x ∈ (0, ∞).
Suppose there are n steps to evaluate a function f (x) at a point x = c. Then the total
process of evaluating this function is said to have instability if at least one of the n
steps is ill-conditioned. If all the steps are well-conditioned, then the process is said to
be stable.
Example 2.5.6.
We continue the discussion in Example 2.5.5 and check the stability in evaluating the
function f . Let us analyze the computational process. The function f consists of the
following four computational steps in evaluating the value of f at x = x0:
x1 := x0 + 1,  x2 := √x1,  x3 := √x0,  x4 := x2 − x3.
Now consider the last two steps, where we have already computed x2 and are now going to compute x3 and finally evaluate the function
f4(t) := x2 − t.
The last step x4 := x2 − x3 involves the subtraction of two nearly equal numbers when x0 is large and is ill-conditioned, so the process of evaluating f in this form is unstable. Instead, as in Example 2.3.5, we may rewrite
f(x) = 1/(√(x + 1) + √x),
which consists of the computational steps
x1 := x0 + 1,  x2 := √x1,  x3 := √x0,  x4 := x2 + x3,  x5 := 1/x4.
It is easy to verify that each of the above steps is well-conditioned. For instance, the last step defines
f̃5(t) = 1/(x2 + t),
and the condition number of this function is approximately
| t f̃5′(t) / f̃5(t) | = t / (x2 + t) ≈ 1/2
for t ≈ x2, so this step is well-conditioned.
Remark 2.5.7.
As discussed in Remark 2.3.7, we may not always be lucky enough to come up with an alternate expression that leads to a stable evaluation for a given function when the original expression leads to an unstable evaluation. In such situations, we have to compromise and go for a suitable approximation by other functions with stable evaluation processes. For instance, we may try approximating the given function by its Taylor polynomial, if possible.
2.6 Exercises
Floating-Point Approximation
1. Let X be a sufficiently large number which results in an overflow of memory on a computing device. Let x be a sufficiently small number which results in an underflow of memory on the same computing device. Then give the output of the following operations:
operations:
(i) x × X (ii) 3 × X (iii) 3 × x (iv) x/X (v) X/x.
2. In the following problems, show all the steps involved in the computation.
i) Using 5-digit rounding, compute 37654 + 25.874 − 37679.
ii) Let a = 0.00456, b = 0.123, c = −0.128. Using 3-digit rounding, compute
(a + b) + c, and a + (b + c). What is your conclusion?
iii) Let a = 2, b = −0.6, c = 0.602. Using 3-digit rounding, compute a × (b + c),
and (a × b) + (a × c). What is your conclusion?
3. To find the mid-point of an interval [a, b], the formula (a + b)/2 is often used. Compute the mid-point of the interval [0.982, 0.987] using 3-digit chopping. On the number line represent all the three points. What do you observe? Now use the more geometric formula a + (b − a)/2 to compute the mid-point, once again using 3-digit chopping. What do you observe this time? Why is the second formula more geometric?
4. Consider a computing device having exponents e in the range m ≤ e ≤ M ,
m, M ∈ Z. If the device uses n-digit rounding binary floating-point arithmetic,
then show that δ = 2−n is the machine epsilon when n ≤ |m| + 1.
Types of Errors
5. If fl(x) is the approximation of a real number x in a computing device, and ϵ is
the corresponding relative error, then show that fl(x) = (1 − ϵ)x.
6. Let x, y and z be real numbers whose floating point approximations in a com-
puting device coincide with x, y and z respectively. Show that the relative er-
ror in computing x(y + z) equals ϵ1 + ϵ2 − ϵ1 ϵ2 , where ϵ1 = Er (fl(y + z)) and
ϵ2 = Er (fl(x × fl(y + z))).
7. Let ϵ = Er(fl(x)). Show that
i) |ϵ| ≤ 10^(−n+1) if the computing device uses n-digit (decimal) chopping.
ii) |ϵ| ≤ (1/2) 10^(−n+1) if the computing device uses n-digit (decimal) rounding.
iii) Can the equality hold in the above inequalities?
8. Let the approximation sin x ≈ x be used on the interval [−δ, δ], where δ > 0. Show that for all x ∈ [−δ, δ],
| sin x − x| < |x|^3 / 6.
Find a δ > 0 such that the inequality
| sin x − x| < (1/2) × 10^(−6)
holds for all x ∈ [−δ, δ].
9. Let xA = 3.14 and yA = 2.651 be obtained from xT and yT using 4-digit rounding.
Find the smallest interval that contains
(i) xT (ii) yT (iii) xT + yT (iv) xT − yT (v) xT × yT (vi) xT /yT .
10. The ideal gas law is given by PV = nRT, where R is the gas constant. We are interested in knowing the value of T for which P = V = n = 1. If R is known only approximately as RA = 8.3143, with an absolute error of at most 0.12 × 10^(−2), what is the relative error in the computation of T that results from using RA instead of R?
15. Let f, g : R → R be differentiable functions such that
• there exists a constant M > 0 such that |f′(x)| ≥ M and |g′(x)| ≤ M for all x ∈ R,
• the condition number of f is less than 1, and
• the condition number of g is greater than 1.
Show that |g(x)| < |f (x)| for all x ∈ R. (Mid-Sem, Spring 2011)
16. Find the condition number at a point x = c for the following functions
(i) f (x) = x2 , (ii) g(x) = π x , (iii) h(x) = bx .
17. Let xT be a real number. Let xA = 2.5 be an approximate value of xT with
an absolute error at most 0.01. The function f (x) = x3 is evaluated at x = xA
instead of x = xT . Estimate the resulting absolute error.
18. Is the process of computing the function f (x) = (ex − 1)/x stable or unstable for
x ≈ 0? Justify your answer. (Quiz1, Autumn 2010)
19. Show that the process of evaluating the function
f(x) = (1 − cos x) / x²
for x ≈ 0 is unstable. Suggest an alternate formula for evaluating f for x ≈ 0, and check if the computation process is stable using this new formula.
20. Check whether the process of evaluating the function
h(x) = sin²x / (1 − cos²x)
is stable or unstable for values of x very close to 0.
using your calculator (without simplifying the given formula) at x = 89.9, 89.95, 89.99
degrees. Also compute the values of the function tan x at these values of x and
compare the number of significant digits.
CHAPTER 3
Numerical Linear Algebra
In this chapter, we study methods for solving systems of linear equations and for computing an eigenvalue and a corresponding eigenvector of a matrix. The methods
for solving linear systems are categorized into two types, namely, the direct methods
and the iterative methods. Theoretically, direct methods give exact solution of a linear
system and therefore these methods do not involve mathematical error. However, when
we implement the direct methods on a computer, because of the presence of arithmetic
error, the computed value from a computer will still be an approximate solution. On
the other hand, an iterative method generates a sequence of approximate solutions to
a given linear system which is expected to converge to the exact solution.
Some matrices are sensitive to even a small error in the right hand side vector of a
linear system. Such a matrix can be identified with the help of the condition number of
the matrix. The condition number of a matrix is defined in terms of the matrix norm. In Section 3.3, we introduce the notion of matrix norms, define the condition number of a matrix, and discuss a few important theorems that are used in the error analysis of iterative methods. We continue the chapter with the discussion of iterative methods for linear systems in Section 3.4, where we introduce two basic iterative methods and discuss sufficient conditions under which the methods converge. We end this section with the definition of the residual error and another iterative method called the residual corrector method.
Finally, in Section 3.5 we discuss the power method, which is used to capture the dominant eigenvalue and a corresponding eigenvector of a given matrix. We end the chapter with Gerschgorin's theorem and its application to the power method.
3.1 System of Linear Equations
Theorem 3.1.1.
Let A be an n × n matrix and b ∈ Rn . Then the following statements concerning the
system of linear equations Ax = b are equivalent.
1. det(A) ̸= 0
2. For each right hand side vector b, the system Ax = b has a unique solution x.
Note
We always assume that the coefficient matrix A is invertible. Any discussion of
what happens when A is not invertible is outside the scope of this course.
3.2 Direct Methods for Linear Systems
In this section, we discuss two direct methods, namely the Gaussian elimination method and the LU factorization method. We also introduce the Thomas algorithm, which is a particular case of the Gaussian elimination method for tridiagonal systems.
3.2.1 Naive Gaussian Elimination Method
Let us describe the Gaussian elimination method to solve a system of linear equations in three variables. The method for a general system is similar.
Consider the following system of three linear equations in three variables x1, x2, x3:
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2   (3.4)
a31 x1 + a32 x2 + a33 x3 = b3.
For convenience, we denote the first, second, and third equations by E1 , E2 , and E3 ,
respectively.
Step 1: If a11 ̸= 0, then define
m21 = a21 / a11,  m31 = a31 / a11.   (3.5)
We will now obtain a new system that is equivalent to the system (3.4) as follows:
• Retain the first equation E1 as it is.
• Replace the second equation by E2 − m21 E1.
• Replace the third equation by E3 − m31 E1.
The resulting system is denoted by (3.6), and its new coefficients are written as aij^(2) and bi^(2).
Note that the variable x1 has been eliminated from the last two equations.
Step 2: If a22^(2) ≠ 0, then define
m32 = a32^(2) / a22^(2).   (3.8)
We still use the same names E1 , E2 , E3 for the first, second, and third equations of the
modified system (3.6), respectively. We will now obtain a new system that is equivalent
to the system (3.6) as follows:
• Retain the first two equations in (3.6) as they are.
• Replace the third equation by the equation E3 − m32 E2 .
The new system is given by
a11 x1 + a12 x2 + a13 x3 = b1
        a22^(2) x2 + a23^(2) x3 = b2^(2)   (3.9)
                a33^(3) x3 = b3^(3).
Note that the variable x2 has been eliminated from the last equation. This phase of
the (Naive) Gaussian elimination method is called forward elimination phase.
Step 3: Observe that the system (3.9) is readily solvable for x3 if the coefficient a33^(3) ≠ 0. Substituting the value of x3 in the second equation of (3.9), we can solve
for x2. Substituting the values of x2 and x3 in the first equation, we can solve for x1. This solution phase of the (Naive) Gaussian elimination method is called the backward substitution phase.
Note
The coefficient matrix of the system (3.9) is an upper triangular matrix given by
U = [ a11   a12       a13
      0     a22^(2)   a23^(2)
      0     0         a33^(3) ].   (3.10)
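The three steps above translate directly into a short program. The following MATLAB sketch (our own illustration) performs naive Gaussian elimination followed by backward substitution for a general n × n system; it assumes that no zero pivot is encountered (see the examples below for when this assumption fails).

% Naive Gaussian elimination with backward substitution (no pivoting).
function x = naive_gauss(A, b)
    n = length(b);
    for k = 1:n-1                          % forward elimination phase
        for i = k+1:n
            m = A(i,k) / A(k,k);           % multiplier m_ik
            A(i,k:n) = A(i,k:n) - m*A(k,k:n);
            b(i)     = b(i) - m*b(k);
        end
    end
    x = zeros(n,1);                        % backward substitution phase
    x(n) = b(n) / A(n,n);
    for i = n-1:-1:1
        x(i) = (b(i) - A(i,i+1:n)*x(i+1:n)) / A(i,i);
    end
end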
1. First of all, we do not know if the method described here can be successfully
applied for all systems of linear equations which are uniquely solvable (that
is, the coefficient matrix is invertible).
2. Secondly, even when we apply the method successfully, it is not clear if the
computed solution is the exact solution. In fact, it is not even clear that the
computed solution is close to the exact solution.
Example 3.2.2.
Consider the system of equations
[ 0  1 ] [ x1 ]   [ 1 ]
[ 1  1 ] [ x2 ] = [ 2 ].   (3.12)
Step 1 cannot be carried out, since a11 = 0. Thus the naive Gaussian elimination method fails.
Example 3.2.3.
Let 0 < ϵ ≪ 1. Consider the system of equations
[ ϵ  1 ] [ x1 ]   [ 1 ]
[ 1  1 ] [ x2 ] = [ 2 ].   (3.13)
Since ϵ ≠ 0, after Step 1 of the Naive Gaussian elimination method, we get the system
[ ϵ  1            ] [ x1 ]   [ 1            ]
[ 0  1 − ϵ^(−1)   ] [ x2 ] = [ 2 − ϵ^(−1)   ].   (3.14)
Using backward substitution, the solution is given by
x2 = (2 − ϵ^(−1)) / (1 − ϵ^(−1)),  x1 = (1 − x2) ϵ^(−1).   (3.15)
Note that for a sufficiently small ϵ, the computer evaluates 2 − ϵ^(−1) as −ϵ^(−1), and 1 − ϵ^(−1) also as −ϵ^(−1). Thus, x2 ≈ 1 and as a consequence x1 ≈ 0. However, the exact/correct solution is given by
x1 = 1/(1 − ϵ) ≈ 1,  x2 = (1 − 2ϵ)/(1 − ϵ) ≈ 1.   (3.16)
Thus, in this particular example, the solution obtained by the naive Gaussian elimination method is completely wrong.
To understand this example better, we instruct the reader to solve the system (3.13)
for the cases (1) ϵ = 10−3 , and (2) ϵ = 10−5 using 3-digit rounding.
In the following example, we illustrate the need for a pivoting strategy in the Gaussian elimination method.
Example 3.2.4.
Consider the linear system
Let us solve this system using (naive) Gaussian elimination method using 4-digit
rounding.
After eliminating x1 from the second and third equations, we get (with m21 = 0.3333,
m31 = 0.1667)
After eliminating x2 from the third equation, we get (with m32 = 16670)
The above examples highlight the inadequacy of the naive Gaussian elimination method. These inadequacies can be overcome by modifying the procedure of the naive Gaussian elimination method. There are many kinds of modification. We will discuss one of the most popular modified methods, which is called the modified Gaussian elimination method with partial pivoting.
3.2.2 Modified Gaussian Elimination Method with Partial Pivoting
Consider again the system of three equations (3.4).
Step 1: Define s1 = max{ |a11|, |a21|, |a31| }. Note that s1 ≠ 0 (why?). Let k be the least number such that s1 = |ak1|. Interchange the first equation and the k-th equation. Let us re-write the system after this modification.
a11^(1) x1 + a12^(1) x2 + a13^(1) x3 = b1^(1)
a21^(1) x1 + a22^(1) x2 + a23^(1) x3 = b2^(1)   (3.20)
a31^(1) x1 + a32^(1) x2 + a33^(1) x3 = b3^(1),
where
a11^(1) = ak1, a12^(1) = ak2, a13^(1) = ak3,  ak1^(1) = a11, ak2^(1) = a12, ak3^(1) = a13;  b1^(1) = bk, bk^(1) = b1,   (3.21)
and the rest of the coefficients aij^(1) are the same as aij, as all equations other than the first and the k-th remain untouched by the interchange of the first and k-th equations. Now eliminate the variable x1 from the second and third equations of the system (3.20). Define
m21 = a21^(1) / a11^(1),  m31 = a31^(1) / a11^(1).   (3.22)
We will now obtain a new system that is equivalent to the system (3.20) by retaining the first equation and replacing the second and third equations by E2 − m21 E1 and E3 − m31 E1, respectively; the coefficients of the resulting system are denoted by aij^(2) and bi^(2). Note that the variable x1 has been eliminated from the last two equations.
Step 2: Define s2 = max{ |a22^(2)|, |a32^(2)| }. Note that s2 ≠ 0 (why?). Let l be the least number such that s2 = |al2^(2)|. Interchange the second equation and the l-th equation, and then eliminate the variable x2 from the third equation exactly as in Step 2 of the naive method. The resulting upper triangular system is solved by backward substitution.
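In code, partial pivoting changes only the beginning of each elimination step in the naive sketch given earlier: before the multipliers are formed, the equation with the largest candidate pivot is swapped into the pivot position. A sketch of the modification:

% Inside the elimination loop over k = 1, 2, ..., n-1 of naive_gauss:
[~, p] = max(abs(A(k:n, k)));     % s_k is attained first at row p
p = p + k - 1;
A([k p], :) = A([p k], :);        % interchange equations k and p
b([k p])    = b([p k]);
% ...then eliminate x_k from rows k+1, ..., n exactly as before.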
3.2.3 Operations Count
It is important to know the length of a computation and for that reason, we count the
number of arithmetic operations involved in the naive Gaussian elimination method for
the system Ax = b.
Forward elimination phase: The forward elimination phase consists of (1) the modification of the coefficient matrix A and (2) the modification of the right hand side vector b.
1. Modification of the coefficient matrix:
We now count the additions/subtractions, multiplications and divisions in going
from the given system to the triangular system.
Let us explain the first row of the above table. In the first step, the computation of m21, m31, · · · , mn1 involves (n − 1) divisions. For each i, j = 2, 3, · · · , n, the computation of aij^(2) involves a multiplication and a subtraction. In total, there are (n − 1)² multiplications and (n − 1)² subtractions. Note that we do not count the operations involved in computing the coefficients of x1 in the 2nd to n-th equations (namely, ai1^(2)), as we do not compute them and simply take them as zero. Similarly, other entries in the above table can be accounted for.
Observe that the total number of the operations involved in the modification of
the coefficient matrix is equal to
n(n − 1)(4n + 1) / 6,
which is of order O(n³) as n → ∞.
n(n − 1)
Addition/Subtraction = (n − 1) + (n − 2) + · · · + 1 =
2
n(n − 1)
Multiplication/Division = (n − 1) + (n − 2) + · · · + 1 =
2
Thus the total number of the operations involved in the modification of the right
hand side vector is equal to n(n − 1), which is of order O(n2 ) as n → ∞.
Backward substitution phase: The operations involved in the backward substitution are
$$\text{Addition/Subtraction} = (n-1) + (n-2) + \cdots + 1 = \frac{n(n-1)}{2},$$
$$\text{Multiplication/Division} = n + (n-1) + \cdots + 1 = \frac{n(n+1)}{2}.$$
Thus the total number of operations involved in the backward substitution phase is equal to $n^2$.
Hence the total number of arithmetic operations involved in solving the system $Ax = b$ using the naive Gaussian elimination method is
$$\frac{4n^3 + 9n^2 - 7n}{6},$$
which is of order $O(n^3)$ as $n \to \infty$.
The Gaussian elimination method can be simplified in the case of a tri-diagonal system
so as to increase the efficiency. The resulting simplified method is called the Thomas
method.
A tri-diagonal system of linear equations is of the form
$$\begin{array}{rcl}
\beta_1 x_1 + \gamma_1 x_2 &=& b_1\\
\alpha_2 x_1 + \beta_2 x_2 + \gamma_2 x_3 &=& b_2\\
\alpha_3 x_2 + \beta_3 x_3 + \gamma_3 x_4 &=& b_3\\
\alpha_4 x_3 + \beta_4 x_4 + \gamma_4 x_5 &=& b_4\\
&\vdots&\\
\alpha_{n-1} x_{n-2} + \beta_{n-1} x_{n-1} + \gamma_{n-1} x_n &=& b_{n-1}\\
\alpha_n x_{n-1} + \beta_n x_n &=& b_n
\end{array} \qquad (3.27)$$
where, in each equation, the coefficients of all the remaining variables are zero.
Remark 3.2.5.
If the denominator of any of the $e_j$'s or $f_j$'s is zero, then the Thomas method fails.
This is the situation when $\beta_j - \alpha_j e_{j-1} = 0$, which is the coefficient of $x_j$ in the
reduced equation. A suitable partial pivoting, as done in the modified Gaussian
elimination method, may sometimes help us to overcome this problem.
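Although the derivation of the $e_j$'s and $f_j$'s is not reproduced in this extract, a Python sketch of the Thomas method is given below. It assumes the standard recurrences $e_j = \gamma_j/(\beta_j - \alpha_j e_{j-1})$ and $f_j = (b_j - \alpha_j f_{j-1})/(\beta_j - \alpha_j e_{j-1})$ (with $\alpha_1 = 0$), followed by back substitution; this is consistent with the failure condition $\beta_j - \alpha_j e_{j-1} = 0$ noted in Remark 3.2.5, but the function name and the test system are our own choices.

```python
def thomas(alpha, beta, gamma, b):
    """Solve the tri-diagonal system (3.27).
    alpha[0] and gamma[-1] are not used (taken as 0)."""
    n = len(beta)
    e = [0.0] * n
    f = [0.0] * n
    e[0] = gamma[0] / beta[0]
    f[0] = b[0] / beta[0]
    for j in range(1, n):
        den = beta[j] - alpha[j] * e[j - 1]   # the Thomas method fails if den == 0
        e[j] = gamma[j] / den if j < n - 1 else 0.0
        f[j] = (b[j] - alpha[j] * f[j - 1]) / den
    x = [0.0] * n
    x[-1] = f[-1]
    for j in range(n - 2, -1, -1):            # back substitution: x_j = f_j - e_j x_{j+1}
        x[j] = f[j] - e[j] * x[j + 1]
    return x

# 4x4 example with main diagonal 2 and sub-/super-diagonals -1; the solution is (1,1,1,1)
print(thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 0, 0, 1]))
```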
3.2.5 LU Factorization
In Theorem 3.1.1, we have stated that when a matrix is invertible, the corresponding
linear system can be solved. Let us now ask the next question:
'Can we give examples of classes of invertible matrices for which the system of linear
equations (3.2) given by
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{pmatrix}\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{pmatrix} = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n\end{pmatrix}$$
is "easily" solvable?'
There are three types of matrices whose simple structure makes the linear system solv-
able “readily”. These matrices are as follows:
and $l_{ii} \neq 0$ for each $i = 1, 2, \cdots, n$. The linear system takes the form
$$\begin{pmatrix} l_{11} & 0 & 0 & \cdots & 0\\ l_{21} & l_{22} & 0 & \cdots & 0\\ \vdots & \vdots & & \ddots & \vdots\\ l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn}\end{pmatrix}\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{pmatrix} = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_n\end{pmatrix} \qquad (3.28)$$
From the first equation, we solve for $x_1$, given by
$$x_1 = \frac{b_1}{l_{11}}.$$
Substituting this value of $x_1$ in the second equation, we get the value of $x_2$ as
$$x_2 = \frac{b_2 - l_{21}\dfrac{b_1}{l_{11}}}{l_{22}}.$$
Proceeding in this manner, we solve for the vector $x$. This procedure of obtaining the
solution may be called the forward substitution.
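A direct transcription of this procedure into Python might look like the following sketch (our own illustration, not from the original notes; the test matrix is arbitrary).

```python
def forward_substitution(L, b):
    """Solve L x = b for a lower triangular matrix L with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))   # uses already-computed x_1, ..., x_{i-1}
        x[i] = (b[i] - s) / L[i][i]
    return x

# small 2x2 example: 2*x1 = 6 and 3*x1 + 4*x2 = 13 give x = (3, 1)
print(forward_substitution([[2.0, 0.0], [3.0, 4.0]], [6.0, 13.0]))
```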
In general, an invertible matrix A need not be one among the simple structures listed
above. However, in certain situations we can always find an invertible lower triangular
matrix L and an invertible upper triangular matrix U in such a way that
A = LU .
In that case, to solve $Ax = b$ we first solve the lower triangular system
$$Lz = b$$
for the vector $z$, which can be obtained easily using forward substitution. After obtaining
$z$, we solve the upper triangular system
$$Ux = z$$
for the vector $x$, which is again obtained easily using backward substitution.
Remark 3.2.6.
In Gaussian elimination method discussed in Section 3.2.1, we have seen that a
given matrix A can be reduced to an upper triangular matrix U by an elimination
procedure and thereby can be solved using backward substitution. In the elim-
ination procedure, we also obtained a lower triangular matrix L in such a way
that
A = LU
as remarked in this section.
Remark 3.2.8.
Clearly if a matrix has an LU decomposition, the matrices L and U are not unique
as
A = LU = (LD)(D−1 U ) = L̃Ũ
for any invertible diagonal matrix D. Note that A = L̃Ũ is also an LU decompo-
sition of A as L̃ is a lower triangular matrix and Ũ is an upper triangular matrix.
Doolittle’s factorization
We now state the sufficient condition under which the Doolittle’s factorization of a
given matrix exists. For this, we need the notion of leading principal minors of a given
matrix, which we define first and then state the required theorem.
Let A be an n × n matrix.
1. A sub-matrix of order k (< n) of the matrix A is a k × k matrix obtained by
removing n − k rows and n − k columns from A.
The determinant of such a sub-matrix of order k of A is called a minor of
order k of the matrix A.
2. The principal sub-matrix of order k of the matrix A is obtained by removing
the last n − k rows and the last n − k columns from A.
The determinant of the principal sub-matrix of order k of A is called
the principal minor of order k of the matrix A.
3. A principal sub-matrix and the corresponding principal minor are called the
leading principal sub-matrix and the leading principal minor of order
k, respectively, if k < n.
Example 3.2.11.
Consider the 3 × 3 matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{pmatrix}.$$
The leading principal sub-matrix of order 1 of A is the $1\times 1$ matrix $(a_{11})$, and the leading
principal minor of order 1 is its determinant $|a_{11}| = a_{11}$.
We now state the sufficient condition for the existence of a Doolittle factorization of a
given matrix.
Theorem 3.2.12.
Let n ≥ 2, and A be an n × n invertible matrix such that all of its first n − 1 leading
principal minors are non-zero. Then A has an LU -decomposition where L is a unit
lower triangular matrix (i.e. all its diagonal elements equal 1). That is, A has a
Doolittle’s factorization.
We omit the proof of this theorem but illustrate the construction procedure of a Doolit-
tle factorization in the case when n = 3.
These give the first column of L and the first row of U. Next, multiply row 2 of L with
columns 2 and 3 of U, to obtain
equations that can be solved for $u_{22}$ and $u_{23}$. Next, multiply row 3 of L with columns 2 and 3 of U, to obtain
$$l_{31}u_{12} + l_{32}u_{22} = a_{32}, \qquad l_{31}u_{13} + l_{32}u_{23} + u_{33} = a_{33}. \qquad (3.33)$$
These equations yield the values of $l_{32}$ and $u_{33}$, completing the construction of L and U.
In this process, we must have $u_{11} \neq 0$ and $u_{22} \neq 0$ in order to solve for L, which is true
because of the assumption that all the leading principal minors of A are non-zero.
The decomposition we have found is the Doolittle's factorization of A.
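The same pattern of alternately matching a row of U and a column of L against the entries of A works for a general $n \times n$ matrix. The following Python sketch (our own illustration, following the $n=3$ computation above) carries out this construction; it assumes the leading principal minors are non-zero, as in Theorem 3.2.12.

```python
def doolittle(A):
    """Doolittle factorization A = LU with unit diagonal in L.
    Assumes all leading principal minors of A are non-zero."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        L[k][k] = 1.0
        for j in range(k, n):          # row k of U
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        for i in range(k + 1, n):      # column k of L (requires U[k][k] != 0)
            L[i][k] = (A[i][k] - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U

L, U = doolittle([[1, 1, -1], [1, 2, -2], [-2, 1, 1]])
print(L)   # [[1,0,0], [1,1,0], [-2,3,1]]
print(U)   # [[1,1,-1], [0,1,-1], [0,0,2]]
```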
Example 3.2.14.
Consider the matrix
$$A = \begin{pmatrix} 1 & 1 & -1\\ 1 & 2 & -2\\ -2 & 1 & 1\end{pmatrix}.$$
Using (3.31), we get
$$u_{11} = 1,\quad u_{12} = 1,\quad u_{13} = -1,\qquad l_{21} = \frac{a_{21}}{u_{11}} = 1,\quad l_{31} = \frac{a_{31}}{u_{11}} = -2.$$
Using (3.32) and (3.33),
Further, taking $b = (1, 1, 1)^T$, we now solve the system $Ax = b$ using the LU factorization,
with the matrix A given above. As discussed earlier, we first have to solve the lower
triangular system
$$\begin{pmatrix} 1 & 0 & 0\\ 1 & 1 & 0\\ -2 & 3 & 1\end{pmatrix}\begin{pmatrix} z_1\\ z_2\\ z_3\end{pmatrix} = \begin{pmatrix} 1\\ 1\\ 1\end{pmatrix}.$$
Forward substitution yields $z_1 = 1$, $z_2 = 0$, $z_3 = 3$. Keeping the vector $z = (1, 0, 3)^T$
as the right hand side, we now solve the upper triangular system
$$\begin{pmatrix} 1 & 1 & -1\\ 0 & 1 & -1\\ 0 & 0 & 2\end{pmatrix}\begin{pmatrix} x_1\\ x_2\\ x_3\end{pmatrix} = \begin{pmatrix} 1\\ 0\\ 3\end{pmatrix},$$
and backward substitution then yields the solution $x = (1, 3/2, 3/2)^T$.
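The two substitutions in this example can be checked with a few lines of Python (our own illustration; the variable names are arbitrary). The forward sweep reproduces $z = (1, 0, 3)^T$ and the backward sweep the solution $x$.

```python
import numpy as np

L = np.array([[1.0, 0, 0], [1, 1, 0], [-2, 3, 1]])
U = np.array([[1.0, 1, -1], [0, 1, -1], [0, 0, 2]])
b = np.array([1.0, 1, 1])

# forward substitution: L z = b
z = np.zeros(3)
for i in range(3):
    z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]

# backward substitution: U x = z
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

print(z)                           # [1. 0. 3.]
print(x)                           # [1.  1.5 1.5]
print(np.allclose(L @ U @ x, b))   # True
```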
Crout’s factorization
In Doolittle's factorization, the lower triangular matrix has the special property that its
diagonal entries are all equal to 1. If instead the upper triangular matrix is required to
have this property in the LU decomposition, the decomposition is known as Crout's factorization.
Cholesky’s factorization
A symmetric matrix $A \in M_n(\mathbb{R})$ is said to be positive definite if
$$x^T A x > 0$$
for every non-zero vector $x \in \mathbb{R}^n$.
We recall (from a course on Linear Algebra) a useful theorem concerning positive defi-
nite matrices.
Lemma 3.2.17.
The following statements concerning a symmetric n × n matrix A are equivalent.
1. The matrix A is positive definite.
2. All the principal minors of the matrix A are positive.
3. All the eigenvalues of the matrix A are positive.
The statements (1) and (2) are equivalent by definition. The proof of equivalence of
(3) with other two statements is out of the scope of this course.
We now define Cholesky’s factorization.
Remark 3.2.19.
It is clear from the definition that if a matrix has a Cholesky’s factorization, then
the matrix has to be necessarily a symmetric matrix.
We now establish a sufficient condition for the existence of the Cholesky’s factoriza-
tion.
Theorem 3.2.20.
If A is an n × n symmetric and positive definite matrix, then A has a unique factor-
ization
A = LLT ,
where L is a lower triangular matrix with positive diagonal elements.
Proof.
We prove the theorem by induction. The proof of the theorem is trivial when n = 1.
Let us assume that the theorem holds for n = k for some k ∈ N. That is, we assume
that for every k × k symmetric and positive definite matrix Bk , there exists a unique
lower triangular $k \times k$ matrix $\tilde{L}_k$ such that
$$B_k = \tilde{L}_k\tilde{L}_k^T.$$
Now let A be a $(k+1)\times(k+1)$ symmetric and positive definite matrix. Write A in the block form
$$A = \begin{pmatrix} A_k & a\\ a^T & a_{(k+1)(k+1)}\end{pmatrix}$$
and seek L in the block form
$$L = \begin{pmatrix} L_k & 0\\ l^T & l_{(k+1)(k+1)}\end{pmatrix},$$
where $L_k$ is the factor of $A_k$ given by the induction hypothesis, and
where the real number $l_{(k+1)(k+1)}$ and the vector $l = (l_{1(k+1)}, l_{2(k+1)}, \cdots, l_{k(k+1)})^T$ are
to be chosen such that $A = LL^T$. That is,
$$\begin{pmatrix} A_k & a\\ a^T & a_{(k+1)(k+1)}\end{pmatrix} = \begin{pmatrix} L_k & 0\\ l^T & l_{(k+1)(k+1)}\end{pmatrix}\begin{pmatrix} L_k^T & l\\ 0^T & l_{(k+1)(k+1)}\end{pmatrix}. \qquad (3.34)$$
Comparing the blocks on both sides, we get
$$L_k l = a, \qquad (3.35)$$
which by forward substitution yields the vector $l$. Here, we need to justify that $L_k$ is
invertible, which is left as an exercise.
Finally, we have $l^T l + l_{(k+1)(k+1)}^2 = a_{(k+1)(k+1)}$, and this gives
$$l_{(k+1)(k+1)}^2 = a_{(k+1)(k+1)} - l^T l, \qquad (3.36)$$
provided the positivity of $l_{(k+1)(k+1)}^2$ is justified, which follows from taking determinants
on both sides of (3.34) and using property (3) of Lemma 3.2.17. A complete proof
of this justification is left as an exercise.
Example 3.2.21.
Consider the matrix
$$A = \begin{pmatrix} 9 & 3 & -2\\ 3 & 2 & 3\\ -2 & 3 & 23\end{pmatrix}.$$
We can check that this matrix is positive definite by any of the equivalent conditions
listed in Lemma 3.2.17. Therefore, we expect a unique Cholesky's factorization for A.
For the construction, we follow the proof of Theorem 3.2.20.
1. For n = 1, we have $A_1 = (9)$ and therefore let us take $L_1 = (3)$.
2. For n = 2, we have
$$A_2 = \begin{pmatrix} 9 & 3\\ 3 & 2\end{pmatrix}.$$
Therefore,
$$L_2 = \begin{pmatrix} L_1 & 0\\ l & l_{22}\end{pmatrix} = \begin{pmatrix} 3 & 0\\ l & l_{22}\end{pmatrix}.$$
This gives $l = 1$ and $l^2 + l_{22}^2 = 2$, or $l_{22} = 1$. Thus, we have
$$L_2 = \begin{pmatrix} 3 & 0\\ 1 & 1\end{pmatrix}.$$
3. For n = 3, we take
$$L = \begin{pmatrix} L_2 & 0\\ l^T & l_{33}\end{pmatrix},$$
where $l^T = (l_{13}, l_{23})$ and $l_{33}$ are to be obtained in such a way that $A = LL^T$.
The vector $l$ is obtained by solving the lower triangular system (3.35) with
$a = (-2, 3)^T$ (by forward substitution), which gives $l = (-2/3, 11/3)^T$.
Finally, from (3.36), we have $l_{33}^2 = 82/9$. Thus, the required lower triangular
matrix L is
$$L = \begin{pmatrix} 3 & 0 & 0\\ 1 & 1 & 0\\ -2/3 & 11/3 & \sqrt{82}/3\end{pmatrix}.$$
It is straightforward to check that $A = LL^T$.
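The bordering construction used in the proof of Theorem 3.2.20, and in the example above, translates directly into the following Python sketch (our own illustration; the function name is arbitrary).

```python
import math

def cholesky(A):
    """Cholesky factorization A = L L^T for a symmetric positive
    definite matrix A; L is lower triangular with positive diagonal."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            # forward-substitution step, as in (3.35), for the new row of L
            L[i][j] = (A[i][j] - sum(L[i][s] * L[j][s] for s in range(j))) / L[j][j]
        # diagonal entry, as in (3.36): l_ii^2 = a_ii - l^T l
        L[i][i] = math.sqrt(A[i][i] - sum(L[i][s] ** 2 for s in range(i)))
    return L

L = cholesky([[9, 3, -2], [3, 2, 3], [-2, 3, 23]])
for row in L:
    print([round(v, 4) for v in row])
# [3.0, 0, 0], [1.0, 1.0, 0], [-0.6667, 3.6667, 3.0185]   (sqrt(82)/3 = 3.0185...)
```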
The Cholesky’s factorization can also be obtained using Doolittle or Crout factorizations
as illustrated by the following example.
Example 3.2.22.
Find the Doolittle, Crout, and Cholesky's factorizations of the matrix
$$A = \begin{pmatrix} 60 & 30 & 20\\ 30 & 20 & 15\\ 20 & 15 & 12\end{pmatrix}.$$
Combining the first two matrices into one and the last two matrices into another,
3.3 Matrix Norms and Condition Number of a Matrix
Note
We use the following notation for a vector x ∈ Rn in terms of its components:
x = (x1 , x2 , · · · , xn )T .
Example 3.3.2.
There can be many vector norms on Rn . We define three important vector norms on
Rn , which are frequently used in matrix analysis.
1. The Euclidean norm (also called the $l_2$-norm) on $\mathbb{R}^n$ is denoted by $\|\cdot\|_2$,
and is defined by
$$\|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2}. \qquad (3.37)$$
All the three norms defined above are indeed norms; it is easy to verify that they
satisfy the defining conditions of a norm given in Definition 3.3.1.
Let us illustrate each of the above defined norms with some specific vectors.
Example 3.3.3.
Let us compute norms of some vectors now. Let x = (4, 4, −4, 4)T , y = (0, 5, 5, 5)T ,
z = (6, 0, 0, 0)T . Verify that ∥x∥1 = 16, ∥y∥1 = 15, ∥z∥1 = 6; ∥x∥2 = 8, ∥y∥2 = 8.66,
∥z∥2 = 6; ∥x∥∞ = 4, ∥y∥∞ = 5, ∥z∥∞ = 6.
From this example we see that asking which vector is the biggest does not make sense in
absolute terms. Once a norm is fixed, however, the question does make sense, and the answer
depends on the norm used. In this example, each of the three vectors is the biggest of the
three with respect to one of the norms.
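The three norms for the vectors of Example 3.3.3 can be computed, for instance, with numpy (our own illustration; `numpy.linalg.norm` with `ord` equal to 1, 2, or `np.inf` gives the $l_1$, Euclidean, and maximum norms respectively).

```python
import numpy as np

x = np.array([4, 4, -4, 4])
y = np.array([0, 5, 5, 5])
z = np.array([6, 0, 0, 0])

for name, v in [("x", x), ("y", y), ("z", z)]:
    print(name,
          np.linalg.norm(v, 1),               # l1 norm: sum of absolute values
          round(np.linalg.norm(v, 2), 2),     # Euclidean norm
          np.linalg.norm(v, np.inf))          # maximum norm
# x 16.0 8.0  4.0
# y 15.0 8.66 5.0
# z  6.0 6.0  6.0
```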
Remark 3.3.4.
In our computations, we employ any one of the norms depending on convenience.
It is a fact that “all vector norms on Rn are equivalent”; we will not elaborate
further on this.
We can also define matrix norms on the vector space Mn (R) of all n × n real matrices,
which helps us in finding distance between two matrices.
Note
As in the case of vector norms, the condition 4 in the above definition is called
the triangle inequality.
For a matrix $A \in M_n(\mathbb{R})$, we use the notation $A = (a_{ij})$,
where $a_{ij}$ denotes the element in the $i$th row and $j$th column of A.
There can be many matrix norms on Mn (R). We will describe some of them now.
Example 3.3.6.
The following define norms on Mn (R).
1. $\displaystyle \|A\| = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} |a_{ij}|^2}$.
All the three norms defined above are indeed norms; it is easy to verify that they
satisfy the defining conditions of a matrix norm of Definition 3.3.5.
Among matrix norms, there are special ones that satisfy very useful and important
properties. They are called matrix norms subordinate to a vector norm. As the name
suggests, to define them we need to fix a vector norm. We will give a precise definition
now.
The formula (3.40) indeed defines a matrix norm on Mn (R). The proof of this fact is
beyond the scope of our course. In this course, by matrix norm, we always mean a norm
subordinate to some vector norm. An equivalent and more useful formula for the matrix
norm subordinate to a vector norm is given in the following lemma.
Lemma 3.3.8.
For any $A \in M_n(\mathbb{R})$ and a given vector norm $\|\cdot\|$, we have
$$\|A\| = \max_{z \neq 0} \frac{\|Az\|}{\|z\|}. \qquad (3.41)$$
Proof.
For any $z \neq 0$, the vector $x = z/\|z\|$ is a unit vector. Hence
$$\max_{\|x\|=1} \|Ax\| = \max_{z \neq 0} \left\|A\left(\frac{z}{\|z\|}\right)\right\| = \max_{z \neq 0} \frac{\|Az\|}{\|z\|}.$$
The matrix norm subordinate to a vector norm has additional properties as stated in
the following theorem whose proof is left as an exercise.
Theorem 3.3.9.
Let ∥ · ∥ be a matrix norm subordinate to a vector norm. Then
1. ∥Ax∥ ≤ ∥A∥∥x∥ for all x ∈ Rn .
2. ∥I∥ = 1 where I is the identity matrix.
3. ∥AB∥ ≤ ∥A∥ ∥B∥ for all A, B ∈ Mn (R).
Note
We do not use different notations for a matrix norm and the subordinate vector
norm. These are to be understood depending on the argument.
We will now state a few results concerning matrix norms subordinate to some of the
vector norms described in Example 3.3.2. We omit their proofs.
Description and computation of the matrix norm subordinate to the Euclidean vector
norm on Rn is more subtle.
Example 3.3.13.
Let us now compute $\|A\|_\infty$ and $\|A\|_2$ for the matrix
$$A = \begin{pmatrix} 1 & 1 & -1\\ 1 & 2 & -2\\ -2 & 1 & 1\end{pmatrix}.$$
1. $\|A\|_\infty = 5$, since
$$\sum_{j=1}^{3} |a_{1j}| = |1| + |1| + |-1| = 3,\qquad \sum_{j=1}^{3} |a_{2j}| = |1| + |2| + |-2| = 5,\qquad \sum_{j=1}^{3} |a_{3j}| = |-2| + |1| + |1| = 4.$$
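Since $\|A\|_\infty$ is the maximum absolute row sum, it can be checked quickly in Python (our own illustration); `numpy.linalg.norm` with `ord=np.inf` computes exactly this quantity for a matrix, and `ord=2` gives the norm subordinate to the Euclidean vector norm.

```python
import numpy as np

A = np.array([[1, 1, -1],
              [1, 2, -2],
              [-2, 1, 1]])

row_sums = np.abs(A).sum(axis=1)
print(row_sums)                           # [3 5 4]
print(np.linalg.norm(A, np.inf))          # 5.0, the maximum absolute row sum
print(round(np.linalg.norm(A, 2), 4))     # matrix norm subordinate to the Euclidean norm
```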
The following theorem motivates the definition of the condition number of an invertible matrix, which is
similar to the condition number defined for a function in Section 2.5.1.
Theorem 3.3.14.
Let A be an invertible $n \times n$ matrix. Let $x$ and $\tilde{x}$ be the solutions of the systems
$Ax = b$ and $A\tilde{x} = \tilde{b}$, respectively, where $b \neq 0$. Then
$$\frac{\|x - \tilde{x}\|}{\|x\|} \leq \|A\|\,\|A^{-1}\|\,\frac{\|b - \tilde{b}\|}{\|b\|} \qquad (3.45)$$
for any fixed vector norm and the matrix norm subordinate to this vector norm.
Proof.
Since A is invertible, we have
$$x - \tilde{x} = A^{-1}\big(b - \tilde{b}\big).$$
Taking norms on both sides and using the fact that $\|Ax\| \leq \|A\|\,\|x\|$ (see Theorem
3.3.9) holds for every $x \in \mathbb{R}^n$, we get
$$\|x - \tilde{x}\| \leq \|A^{-1}\|\,\|b - \tilde{b}\|. \qquad (3.46)$$
The inequality (3.46) estimates the error in the solution caused by an error in the right
hand side vector of the linear system $Ax = b$. The inequality (3.45) is concerned
with estimating the relative error in the solution in terms of the relative error in the
right hand side vector $b$.
Since $Ax = b$, we get $\|b\| = \|Ax\| \leq \|A\|\,\|x\|$. Therefore $\|x\| \geq \|b\|/\|A\|$. Using this
inequality in (3.46), we get (3.45).
Remark 3.3.15.
1. In the above theorem, it is important to note that a vector norm is fixed and
the matrix norm used is subordinate to this fixed vector norm.
2. The theorem holds no matter which vector norm is fixed as long as the matrix
norm subordinate to it is used.
3. In fact, whenever we do analysis on linear systems, we always fix a vector
norm and then use matrix norm subordinate to it.
Notice that the constant appearing on the right hand side of the inequality (3.45) (which
is ∥A∥ ∥A−1 ∥) depends only on the matrix A (for a given vector norm). This number
is called the condition number of the matrix A. Notice that this condition number
depends very much on the vector norm being used on Rn and the matrix norm that is
subordinate to the vector norm.
Remark 3.3.17.
From Theorem 3.3.14, it is clear that if the condition number is small, then the
relative error in the solution will also be small whenever the relative error in the
right hand side vector is small. On the other hand, if the condition number is very
large, then the relative error could be large even though the relative error in the
right hand side vector is small. We illustrate this in the following example.
Example 3.3.18.
The linear system
has the solution x1 = 0, x2 = 0.1. Let us denote this by x = (0, 0.1)T , and the right
hand side vector by b = (0.7, 1)T . The perturbed system
$$\frac{\|x - \tilde{x}\|_\infty}{\|x\|_\infty} = 1.7,$$
which is too high compared to the relative error in the right hand side vector, which
is given by
$$\frac{\|b - \tilde{b}\|_\infty}{\|b\|_\infty} = 0.01.$$
The condition number of the coefficient matrix of the system is 289. Therefore the
magnification of the relative error is expected (see the inequality (3.45)).
Definition 3.3.19.
A matrix with a large condition number is said to be ill conditioned. A matrix
with a small condition number is said to be well conditioned.
Remark 3.3.20.
An immediate question is how large the condition number should be before we declare
a matrix to be ill-conditioned. This quantification is very difficult in practical
situations as it depends on how large the relative error in the right hand side vector is,
and also on the tolerance level of the user, that is, on how much error a user can tolerate
in the application for which the linear system is solved. For instance, in finance
related applications, even a 20% error may be tolerable, whereas in computing the
path of a missile even a 0.2% error may lead to fatal disasters.
Example 3.3.21.
The Hilbert matrix of order n is given by
$$H_n = \begin{pmatrix} 1 & \tfrac{1}{2} & \tfrac{1}{3} & \cdots & \tfrac{1}{n}\\ \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{4} & \cdots & \tfrac{1}{n+1}\\ \vdots & \vdots & \vdots & & \vdots\\ \tfrac{1}{n} & \tfrac{1}{n+1} & \tfrac{1}{n+2} & \cdots & \tfrac{1}{2n-1}\end{pmatrix}. \qquad (3.48)$$
For n = 4, we have
$$\kappa(H_4) = \|H_4\|_\infty\,\|H_4^{-1}\|_\infty = \frac{25}{12}\times 13620 \approx 28000,$$
which may be taken as an ill-conditioned matrix. In fact, as the value of n increases,
the condition number $\kappa(H_n)$ grows very rapidly and $H_n$ becomes more and more ill-conditioned.
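The value quoted above, and the rapid growth with n, can be reproduced with a short numpy computation (our own illustration). For larger n the computed inverse itself becomes unreliable, precisely because of the ill-conditioning.

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix of order n, as in (3.48)."""
    return np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

for n in (4, 6, 8):
    H = hilbert(n)
    kappa = np.linalg.norm(H, np.inf) * np.linalg.norm(np.linalg.inv(H), np.inf)
    print(n, f"{kappa:.3e}")
# n = 4 gives about 2.8e+04, and the condition number grows rapidly with n
```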
An interesting and important question is what kind of matrices could have large
condition numbers. A partial answer is stated in the following theorem.
Theorem 3.3.22.
Let $A \in M_n(\mathbb{R})$ be non-singular. Then, for any singular $n \times n$ matrix B, we have
$$\frac{1}{\kappa(A)} \leq \frac{\|A - B\|}{\|A\|}. \qquad (3.49)$$
Proof.
We have
$$\frac{1}{\kappa(A)} = \frac{1}{\|A\|\,\|A^{-1}\|} = \frac{1}{\|A\|\displaystyle\max_{x\neq 0}\frac{\|A^{-1}x\|}{\|x\|}} \leq \frac{1}{\|A\|\dfrac{\|A^{-1}y\|}{\|y\|}},$$
where $y \neq 0$ is arbitrary. Take $y = Az$, for an arbitrary non-zero vector $z$. Then we get
$$\frac{1}{\kappa(A)} \leq \frac{1}{\|A\|}\left(\frac{\|Az\|}{\|z\|}\right).$$
Since B is singular, there exists a non-zero vector $z$ such that $Bz = 0$. For this choice of $z$
we have $Az = (A-B)z$, and therefore
$$\frac{1}{\kappa(A)} \leq \frac{\|(A-B)z\|}{\|A\|\,\|z\|} \leq \frac{\|A-B\|\,\|z\|}{\|A\|\,\|z\|} = \frac{\|A-B\|}{\|A\|}.$$
From the above theorem it is apparent that if A is close to a singular matrix, then the
reciprocal of the condition number is close to zero, i.e., $\kappa(A)$ is large. Let us illustrate
this in the following example.
Example 3.3.23.
Clearly the matrix
$$B = \begin{pmatrix} 1 & 1\\ 1 & 1\end{pmatrix}$$
Let us use the $l_\infty$ norm on $\mathbb{R}^n$. Then $\|A\|_\infty = 2 + \epsilon$ and $\|A^{-1}\|_\infty = \epsilon^{-2}(2 + \epsilon)$. Hence
In Section 3.2.1 and Section 3.2.5 we have discussed methods that obtain the exact solution
of a linear system $Ax = b$ in the absence of floating point errors (i.e., when exact
arithmetic is used). Such methods are called the direct methods. The solution of
a linear system can also be obtained using iterative procedures. Such methods are
called iterative methods. There are many iterative procedures out of which Jacobi
and Gauss-Seidel methods are the simplest ones. In this section we introduce these
methods and discuss their convergence.
In Section 3.2.5, we have seen that when a linear system Ax = b is such that the
coefficient matrix A is a diagonal matrix, then this system can be solved very easily.
We explore this idea to build a new method based on iterative procedure. For this, we
Example 3.4.1.
Let us illustrate the Jacobi method in the case of 3 × 3 system
which is the Jacobi iterative sequence given by (3.51) in the case of a 3 × 3 system.
Now the question is: will the sequence of vectors $\{x^{(k)}\}$ generated by the iterative
procedure (3.51) always converge to the exact solution $x$ of the given linear system?
The following example gives a system for which the Jacobi iterative sequence converges
to the exact solution.
Example 3.4.2.
The Jacobi iterative sequence for the system
$$\begin{aligned} 6x_1 + x_2 + 2x_3 &= -2,\\ x_1 + 4x_2 + 0.5x_3 &= 1,\\ -x_1 + 0.5x_2 - 4x_3 &= 0 \end{aligned}$$
is given by
$$\begin{aligned} x_1^{(k+1)} &= \tfrac{1}{6}\big(-2 - x_2^{(k)} - 2x_3^{(k)}\big),\\ x_2^{(k+1)} &= \tfrac{1}{4}\big(1 - x_1^{(k)} - 0.5x_3^{(k)}\big),\\ x_3^{(k+1)} &= -\tfrac{1}{4}\big(0 + x_1^{(k)} - 0.5x_2^{(k)}\big). \end{aligned}$$
The exact solution (up to 6-digit rounding) of this system is
and so on. We observe from the above computed results that the sequence {x(k) }
seems to be approaching the exact solution.
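A sketch of the Jacobi iteration for this system in Python is given below (our own illustration; the tolerance and the maximum number of iterations are arbitrary choices).

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-6, maxit=100):
    """Jacobi iteration: every component of the new iterate uses only the previous iterate."""
    A, b = np.array(A, float), np.array(b, float)
    x = np.array(x0, float)
    for k in range(maxit):
        x_new = np.empty_like(x)
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]        # sum over j != i
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

A = [[6, 1, 2], [1, 4, 0.5], [-1, 0.5, -4]]   # the system of Example 3.4.2
b = [-2, 1, 0]
x, its = jacobi(A, b, [0, 0, 0])
print(x, its)        # the residual b - Ax is essentially zero on convergence
```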
In the following example we discuss a system for which the Jacobi iterative sequence
does not converge to the exact solution.
Example 3.4.3.
Consider the system
$$\begin{aligned} x_1 + 4x_2 + 0.5x_3 &= 1,\\ 6x_1 + x_2 + 2x_3 &= -2,\\ -x_1 + 0.5x_2 - 4x_3 &= 0, \end{aligned}$$
which is exactly the same as the system discussed in Example 3.4.2, the only
difference being the interchange of the first and second equations. Hence, the exact solution
is the same as given in (3.52). The Jacobi iterative sequence for this system is given by
$$\begin{aligned} x_1^{(k+1)} &= 1 - 4x_2^{(k)} - 0.5x_3^{(k)},\\ x_2^{(k+1)} &= -2 - 6x_1^{(k)} - 2x_3^{(k)},\\ x_3^{(k+1)} &= -\tfrac{1}{4}\big(0 + x_1^{(k)} - 0.5x_2^{(k)}\big). \end{aligned}$$
The above two examples show that the Jacobi iterative sequence need not always converge, and
so we need to look for a condition on the system under which the Jacobi iterative sequence
converges to the exact solution.
Define the error in the $k$th iterate $x^{(k)}$ compared to the exact solution $x$ by
$$e^{(k)} = x - x^{(k)}.$$
It follows from (3.51) that the errors satisfy
$$e^{(k+1)} = Be^{(k)},$$
where B is as defined in (3.51). Using any vector norm and the matrix norm subordinate
to it in the above equation, we get
$$\|e^{(k+1)}\| = \|Be^{(k)}\| \leq \|B\|\,\|e^{(k)}\| \leq \|B\|^2\,\|e^{(k-1)}\| \leq \cdots \leq \|B\|^{k+1}\,\|e^{(0)}\|.$$
Thus, when ∥B∥ < 1, the iteration method (3.51) always converges for any initial guess
x(0) .
Again the question is
“what are all the matrices A for which the corresponding matrices B in (3.51) have
the property ∥B∥ < 1, for some matrix norm subordinate to some vector norm?”
We would like an answer that is "easily verifiable" using the entries of the matrix
A. One such class of matrices is the class of diagonally dominant matrices, which we define
now.
We now prove the sufficient condition for the convergence of the Jacobi method. This
theorem asserts that if A is a diagonally dominant matrix, then B in (3.51) of the
Jacobi method is such that ∥B∥∞ < 1.
Theorem 3.4.5.
If the coefficient matrix A is diagonally dominant, then the Jacobi method (3.51)
converges to the exact solution of Ax = b.
Proof.
Since A is diagonally dominant, the diagonal entries are all non-zero and hence the
Jacobi iterating sequence $x^{(k)}$ given by
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{\substack{j=1\\ j\neq i}}^{n} a_{ij}x_j^{(k)}\right), \quad i = 1, 2, \cdots, n, \qquad (3.53)$$
is well-defined. Since the exact solution satisfies $x_i = \frac{1}{a_{ii}}\big(b_i - \sum_{j\neq i} a_{ij}x_j\big)$, subtracting (3.53)
from this equation shows that the error components satisfy
$$e_i^{(k+1)} = -\sum_{\substack{j=1\\ j\neq i}}^{n} \frac{a_{ij}}{a_{ii}}\, e_j^{(k)}, \quad i = 1, 2, \cdots, n, \qquad (3.54)$$
which gives
$$|e_i^{(k+1)}| \leq \left(\sum_{\substack{j=1\\ j\neq i}}^{n} \left|\frac{a_{ij}}{a_{ii}}\right|\right)\|e^{(k)}\|_\infty. \qquad (3.55)$$
Define
$$\mu = \max_{1\leq i\leq n} \sum_{\substack{j=1\\ j\neq i}}^{n} \left|\frac{a_{ij}}{a_{ii}}\right|. \qquad (3.56)$$
Then
$$|e_i^{(k+1)}| \leq \mu\,\|e^{(k)}\|_\infty, \qquad (3.57)$$
and, since (3.57) holds for every $i$,
$$\|e^{(k+1)}\|_\infty \leq \mu\,\|e^{(k)}\|_\infty.$$
The matrix A is diagonally dominant if and only if $\mu < 1$. Iterating the last
inequality, we then get
$$\|e^{(k+1)}\|_\infty \leq \mu^{k+1}\,\|e^{(0)}\|_\infty. \qquad (3.59)$$
Therefore, if A is diagonally dominant, the Jacobi method converges.
Remark 3.4.6.
Observe that the system given in Example 3.4.2 is diagonally dominant, whereas
the system in Example 3.4.3 is not so.
Example 3.4.7.
Consider the 3 × 3 system
When the diagonal elements of this system are non-zero, we can rewrite the above
system as
$$\begin{aligned} x_1 &= \frac{1}{a_{11}}(b_1 - a_{12}x_2 - a_{13}x_3),\\ x_2 &= \frac{1}{a_{22}}(b_2 - a_{21}x_1 - a_{23}x_3),\\ x_3 &= \frac{1}{a_{33}}(b_3 - a_{31}x_1 - a_{32}x_2). \end{aligned}$$
Let
$$x^{(0)} = \big(x_1^{(0)}, x_2^{(0)}, x_3^{(0)}\big)^T$$
Remark 3.4.8.
Compare (JS) and (GSS).
Theorem 3.4.9.
If the coefficient matrix is diagonally dominant, then the Gauss-Seidel method con-
verges to the exact solution of the system Ax = b.
Proof.
Since A is diagonally dominant, all the diagonal elements of A are non-zero, and
hence the Gauss-Seidel iterative sequence given by
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left\{b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}x_j^{(k)}\right\}, \quad i = 1, 2, \cdots, n, \qquad (3.60)$$
is well-defined. As in the proof of Theorem 3.4.5, the error components satisfy
$$e_i^{(k+1)} = -\sum_{j=1}^{i-1} \frac{a_{ij}}{a_{ii}}\, e_j^{(k+1)} - \sum_{j=i+1}^{n} \frac{a_{ij}}{a_{ii}}\, e_j^{(k)}, \quad i = 1, 2, \cdots, n. \qquad (3.61)$$
with the convention that $\alpha_1 = \beta_n = 0$. Note that $\mu$ given in (3.56) can be written as
$$\mu = \max_{1\leq i\leq n}(\alpha_i + \beta_i).$$
Since $\mu < 1$, we have $\alpha_l < 1$, and therefore the above inequality gives
$$\|e^{(k+1)}\|_\infty \leq \frac{\beta_l}{1 - \alpha_l}\,\|e^{(k)}\|_\infty. \qquad (3.64)$$
Define
$$\eta = \max_{1\leq i\leq n} \frac{\beta_i}{1 - \alpha_i}. \qquad (3.65)$$
Since
$$(\alpha_i + \beta_i) - \frac{\beta_i}{1 - \alpha_i} = \frac{\alpha_i\,[1 - (\alpha_i + \beta_i)]}{1 - \alpha_i} \geq \frac{\alpha_i}{1 - \alpha_i}\,[1 - \mu] \geq 0, \qquad (3.67)$$
we have
$$\eta \leq \mu < 1. \qquad (3.68)$$
Thus, when the coefficient matrix A is diagonally dominant, the Gauss-Seidel method
converges.
Remark 3.4.10.
Comparing (3.68) with (3.59), a careful observation of the proof of the above theorem
reveals that the Gauss-Seidel method converges faster than the Jacobi method.
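A Gauss-Seidel sketch in Python differs from the Jacobi sketch given earlier only in that each component is updated in place, so the newest values are used immediately, following (3.60) (our own illustration; tolerance and iteration limit are arbitrary).

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-6, maxit=100):
    """Gauss-Seidel iteration (3.60): updated components are used immediately."""
    A, b = np.array(A, float), np.array(b, float)
    x = np.array(x0, float)
    for k in range(maxit):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]     # uses new x_j for j < i, old x_j for j > i
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, maxit

A = [[6, 1, 2], [1, 4, 0.5], [-1, 0.5, -4]]   # the system of Example 3.4.2
b = [-2, 1, 0]
print(gauss_seidel(A, b, [0, 0, 0]))   # typically needs fewer iterations than Jacobi
```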
Let x∗ denote the computed solution using some method. The mathematical error
in the approximate solution when compared to the exact solution of a linear system
Ax = b is given by
e = x − x∗ . (3.69)
Recall from Chapter 2 that the mathematical error is due to the approximation made
in the numerical method where the computation is done without any floating-point
approximation (i.e., without rounding or chopping). Observe that to get the mathe-
matical error, we need to know the exact solution. But an astonishing feature of linear
systems (which is not there in nonlinear equations) is that this error can be obtained
exactly without the knowledge of the exact solution. To do this, we first define the
residual vector
r = b − Ax∗ (3.70)
in the approximation of b by Ax∗ . This vector is also referred to as residual error.
Since b = Ax, we have
r = b − Ax∗ = Ax − Ax∗ = A(x − x∗ ).
The above identity can be written as
Ae = r. (3.71)
This shows that the error e satisfies a linear system with the same coefficient matrix
A as in the original system Ax = b, but a different right hand side vector. Thus, by
having the approximate solution x∗ in hand, we can obtain the error e without knowing
the exact solution x of the system.
When we use a computing device for solving a linear system, irrespective of whether
we use direct methods or iterative methods, we always get an approximate solution.
An attractive feature (as discussed in the above section) of linear systems is that the
error involved in the approximate solution when compared to the exact solution can
theoretically be obtained exactly. In this section, we discuss how to use this error to
develop an iterative procedure to increase the accuracy of the obtained approximate
solution using any other numerical method.
There is an obvious difficulty in the process of obtaining e as the solution of the system
(3.71), especially on a computer. Since b and $Ax^*$ are very close to each other, the
computation of r involves loss of significant digits, which may lead to a computed residual
consisting mostly of rounding noise (possibly even zero), which is of no use. To avoid this
situation, the calculation of (3.70) should be carried out at a higher precision. For instance,
if $x^*$ is computed using single-precision, then r can be computed using double-precision
and then rounded back to single precision.
Let us illustrate the computational procedure of the residual error in the following
example.
The solution of the system by Gaussian elimination without pivoting using 4-digit
rounding leads to
Thus e∗ gives a good idea of the size of the error e in the computed solution x∗ .
Let us now propose an iterative procedure by first predicting the error by solving the
system (3.71) and then correcting the approximate solution x∗ by adding the predicted
error to the vector x∗ .
If we take x∗ = x(0) , and define r (0) = b − Ax(0) , then the error e(0) = x − x(0) can be
obtained by solving the linear system
Ae(0) = r (0) .
Now, define
x(1) = x(0) + e(0) .
We expect the vector $x^{(1)}$ to be closer to the exact solution than $x^{(0)}$. Again compute
the residual error vector $r^{(1)}$ using the formula
$$r^{(1)} = b - Ax^{(1)},$$
solve the linear system
$$Ae^{(1)} = r^{(1)},$$
and define
$$x^{(2)} = x^{(1)} + e^{(1)}.$$
Continuing in this way, we can generate a sequence $\{x^{(k)}\}$ using the formula
$$x^{(k+1)} = x^{(k)} + e^{(k)},$$
where
$$Ae^{(k)} = r^{(k)}$$
with
$$r^{(k)} = b - Ax^{(k)},$$
for $k = 0, 1, \cdots$. The above iterative procedure is called the residual corrector
method (also called the iterative refinement method). Note that in computing
$r^{(k)}$ and $e^{(k)}$, we use a higher precision than the precision used in computing
$x^{(k)}$.
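A schematic Python implementation of the residual corrector method is given below (our own illustration, not from the notes). The "lower precision" solve is mimicked by working in single precision (float32) while the residual is accumulated in double precision, in the spirit of the remark above; the test system is the one of Example 3.4.12 below.

```python
import numpy as np

def iterative_refinement(A, b, n_corrections=3):
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)     # low-precision solve
    for _ in range(n_corrections):
        r = b - A @ x                                    # residual in double precision
        e = np.linalg.solve(A32, r.astype(np.float32))   # predicted error e^(k)
        x = x + e.astype(np.float64)                     # corrected solution x^(k+1)
    return x

A = np.array([[1, 0.5, 0.3333], [0.5, 0.3333, 0.25], [0.3333, 0.25, 0.2]])
b = np.array([1.0, 0.0, 0.0])
print(iterative_refinement(A, b))   # close to (9.062, -36.32, 30.30)
```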
Example 3.4.12.
Using Gaussian elimination with pivoting and four digit rounding in solving the linear
system
x1 + 0.5x2 + 0.3333x3 = 1
0.5x1 + 0.3333x2 + 0.25x3 = 0
0.3333x1 + 0.25x2 + 0.2x3 = 0
we obtain the solution as x = (9.062, −36.32, 30.30)T . Let us start with an initial
guess of
x(0) = (8.968, −35.77, 29.77)T .
Using 8-digit rounding arithmetic, we obtain
After solving the system $Ae^{(0)} = r^{(0)}$ using Gaussian elimination with pivoting and
8-digit rounding, and rounding the final answer to four digits, we get
Similarly, we can predict the error in x(1) when compared to the exact solution and
correct the solution to obtain the second iterated vector as
and so on.
The convergence analysis of this iterative procedure to the exact solution is omitted for
this course.
In the iterative methods discussed above, we have a sequence $\{x^{(k)}\}$ that is expected
to converge to the exact solution of the given linear system. In practical situations, we
cannot go on computing $x^{(k)}$ indefinitely, and we need to terminate the computation
once $x^{(k)}$ reaches a desired accuracy for a sufficiently large k, that is, when the error
$$\|e^{(k)}\| = \|x - x^{(k)}\|$$
in the $k$th iteration is sufficiently small in some norm. Since we do not know the exact
solution $x$, the error given above cannot be computed easily; it requires another linear
system (3.71) to be solved. Therefore, the question is how to decide when to stop the
computation (without solving this linear system). In other words, how do we
know whether the computed vector $x^{(k)}$ at the $k$th iteration is sufficiently close to the
exact solution or not? This can be decided by looking at the residual error vector of
the $k$th iteration, defined as
$$r^{(k)} = b - Ax^{(k)}.$$
Thus, for a given sufficiently small positive number $\epsilon$, we stop the iteration if
$$\|r^{(k)}\| < \epsilon.$$
discussed in the next chapter (on methods for solving nonlinear equations) to compute
a root of this polynomial. But this is not an efficient way of computing eigenvalues
because of two reasons. One reason is that obtaining explicit form of (3.74) is itself a
difficult task when the dimension of the matrix is very large. Secondly, if we make any
small error (like floating-point error) in obtaining the explicit form of the polynomial
(3.74), the resulting polynomial may have a root which is entirely different from any
of the eigenvalues that we are looking for. This is illustrated in the following example
by Wilkinson where we see that “the roots of polynomials are extremely sensitive to
perturbations in the coefficients”.
Example 3.5.1 [Wilkinson’s example].
Let f (x) and g(x) be two polynomials given by
f (x) = (x − 1)(x − 2) · · · (x − 10), g(x) = x10 .
The roots of the polynomial f (x) are 1, 2, · · · , 10, and all these roots are simple roots.
If we perturb this polynomial as F (x) = f (x) + 0.01g(x), then all the roots lie in the
interval [1, 3.5] (verified graphically). In fact, the largest root of the polynomial f (x)
is 10 and the largest root of the polynomial F (x) is approximately equal to 3.398067.
Thus, if the coefficient of $x^{10}$ is perturbed by a small amount of 0.01, the root 10 of
f(x) can move by a distance of approximately 6.6.
Due to the two reasons discussed above, we look for an alternate method to compute
the eigenvalues of a given matrix. One such method is the power method, which can be
used to obtain the eigenvalue that is the largest in magnitude among all the
eigenvalues, and a corresponding eigenvector. In Subsection 3.5.1, we present the
power method and discuss the condition under which this method can be applied. In
Subsection 3.5.2 we prove the Gerschgorin theorem which may be used as a tool to find
a class of matrices for which power method can be applied successfully.
There are many variations of power method in the literature. We will present the most
elementary form of power method. We always deal with matrices with real entries, all
of whose eigenvalues are real numbers.
Power method is used to obtain a specific eigenvalue called dominant eigenvalue and
a corresponding eigenvector for a given n × n matrix A. The concept of a dominant
eigenvalue plays a very important role in many applications. The power method pro-
vides an approximation to it under some conditions on the matrix. We now define the
Remark 3.5.3.
A matrix may have a unique dominant eigenvalue or more than one dominant eigen-
values. Further, even if dominant eigenvalue is unique the corresponding algebraic and
geometric multiplicities could be more than one, and also both algebraic and geometric
multiplicities may not be the same. All these possibilities are illustrated in the following
example.
Example 3.5.4.
1. The matrix
$$A = \begin{pmatrix} 1 & 0 & 0\\ 0 & -2 & 1\\ 0 & 0 & -1\end{pmatrix}$$
has eigenvalues 1, −1, and −2. The matrix A has a unique dominant eigenvalue,
which is −2, as this is the largest in absolute value of all the eigenvalues. Note that
the dominant eigenvalue of A is a simple eigenvalue.
2. The matrix
$$B = \begin{pmatrix} 1 & 3 & 4\\ 0 & 2 & 1\\ 0 & 0 & -2\end{pmatrix}$$
has eigenvalues 1, −2, and 2. According to our definition, the matrix B has
two dominant eigenvalues, namely −2 and 2. Note that both the dominant
eigenvalues of B are simple eigenvalues.
3. Consider the matrices
$$C_1 = \begin{pmatrix} 1 & 3 & 4\\ 0 & 2 & 5\\ 0 & 0 & 2\end{pmatrix}, \qquad C_2 = \begin{pmatrix} 2 & 0\\ 0 & 2\end{pmatrix}, \qquad C_3 = \begin{pmatrix} 2 & 1\\ 0 & 2\end{pmatrix}.$$
The matrix C1 has a unique dominant eigenvalue 2, which has algebraic multiplicity 2
and geometric multiplicity 1. The matrix C2 has a unique dominant eigenvalue 2,
whose algebraic and geometric multiplicities equal 2. The matrix C3 has a unique
dominant eigenvalue 2, which has algebraic multiplicity 2 and geometric multiplicity 1.
As mentioned above, the power method is used to compute the dominant eigenvalue
and a corresponding eigenvector of a given n × n matrix provided this eigenvalue is
unique. Thus, in the above examples, the power method can be used for the matrix A
but not for B, even though B has distinct eigenvalues. Let us now detail the power
method.
(H2) There exists a basis of Rn consisting of eigenvectors of A. That is, there exists
v 1 , v 2 , · · · , v n satisfying Av k = λk v k for k = 1, 2, · · · , n; and such that for each
v ∈ Rn there exists unique real numbers c1 , c2 , · · · , cn such that
v = c1 v 1 + c2 v 2 + · · · + cn v n .
Equivalently, the matrix A is diagonalizable.
The initial guess $x^{(0)}$ is assumed to satisfy $x^{(0)} \notin \bigcup_{k=1}^{\infty}\operatorname{Ker}A^k$ and to have the expansion
$$x^{(0)} = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$$
for some scalars $c_1, c_2, \cdots, c_n \in \mathbb{R}$ with $c_1 \neq 0$.
• Using the assumption (3.75), we get $|\lambda_j/\lambda_1| < 1$ for $j = 2, \cdots, n$. Therefore, we
have
$$\lim_{k\to\infty} \frac{A^k x^{(0)}}{\lambda_1^k} = c_1 v_1. \qquad (3.78)$$
For $c_1 \neq 0$, the right hand side of the above equation is a scalar multiple of the
eigenvector $v_1$.
• From the above expression for $A^k x^{(0)}$, we also see that
$$\lim_{k\to\infty} \frac{(A^{k+1}x^{(0)})_i}{(A^k x^{(0)})_i} = \lambda_1, \qquad (3.79)$$
where $i$ is any index such that the fractions on the left hand side are meaningful
(which is the case when $x^{(0)} \notin \bigcup_{k=1}^{\infty}\operatorname{Ker}A^k$).
The power method generates two sequences $\{\mu_k\}$ and $\{x^{(k)}\}$, using the results (3.78) and
(3.79), that converge to the dominant eigenvalue $\lambda_1$ and a corresponding eigenvector
$v_1$, respectively.
We now describe the steps involved in the power method for generating these two
sequences.
Setting up the iterative sequences:
Step 1: Choose a vector $x^{(0)}$ arbitrarily and set $y^{(1)} := Ax^{(0)}$.
Step 2: Define $\mu_1 := y_i^{(1)}$, where $i \in \{1, \cdots, n\}$ is the least index such that
$$\|y^{(1)}\|_\infty = |y_i^{(1)}|,$$
and set
$$x^{(1)} := \frac{y^{(1)}}{\mu_1}.$$
Step 3: From $x^{(1)}$, we can obtain $\mu_2$ and $x^{(2)}$ as in Step 2, and continue this procedure.
General form of the power method iterative sequences:
After choosing the initial vector $x^{(0)}$ arbitrarily, we generate the sequences $\{\mu_k\}$ and
$\{x^{(k)}\}$ using the formulas
$$\mu_{k+1} = y_i^{(k+1)}, \qquad x^{(k+1)} = \frac{y^{(k+1)}}{\mu_{k+1}}, \qquad (3.80)$$
where
$$y^{(k+1)} = Ax^{(k)} \quad\text{and $i$ is the least index such that}\quad |y_i^{(k+1)}| = \|y^{(k+1)}\|_\infty, \qquad (3.81)$$
for $k = 0, 1, \cdots$.
This iterative procedure is called the power method.
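A direct implementation of (3.80)–(3.81) in Python is sketched below (our own illustration; the number of iterations is an arbitrary choice, and the test matrix is the one of Example 3.5.7 below).

```python
import numpy as np

def power_method(A, x0, niter=10):
    """Power method (3.80)-(3.81): returns the sequence mu_k and the last iterate x^(k)."""
    x = np.array(x0, dtype=float)
    mus = []
    for _ in range(niter):
        y = A @ x
        i = int(np.argmax(np.abs(y)))   # least index such that |y_i| = ||y||_inf
        mu = y[i]
        x = y / mu                      # scaling keeps ||x^(k)||_inf = 1
        mus.append(mu)
    return mus, x

A = np.array([[3, 0, 0], [-4, 6, 2], [16, -15, -5]], dtype=float)
mus, x = power_method(A, [1, 0.5, 0.25])
print(mus[-1], x)   # mu_k -> 3 and x^(k) -> (0.5, 0, 1)
```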
Remark 3.5.5.
The scaling factor µk introduced in (3.80) makes sure that x(k) has its maximum
norm equal to 1, i.e., ∥x(k) ∥∞ = 1. This rules out the possibilities of limk→∞ x(k)
being 0 or the vector x(k) escaping to infinity.
Remark 3.5.6.
If the initial guess $x^{(0)} \in \mathbb{R}^n$ is such that $x^{(0)} \notin \bigcup_{k=1}^{\infty}\operatorname{Ker}(A^k)$ and $c_1 \neq 0$, then it
can be shown (proof omitted) that, under the assumptions stated above, we have
Point 2 means that whatever may be the case (among the three cases stated), the
sequence {x(k) } approaches the eigenspace associated with the eigenvalue λ1 , as
k → ∞.
We now give a numerical example illustrating power method procedure.
Example 3.5.7.
Consider the matrix
$$A = \begin{pmatrix} 3 & 0 & 0\\ -4 & 6 & 2\\ 16 & -15 & -5\end{pmatrix}.$$
The eigenvalues of this matrix are
$$\lambda_1 = 3, \qquad \lambda_2 = 1, \qquad \lambda_3 = 0.$$
Thus, the hypotheses (H1) and (H2) are satisfied. Choose the initial guess $x^{(0)} =
(1, 0.5, 0.25)^T$, which also satisfies the hypothesis (H3).
The first ten terms of the iterative sequence in the power method given by (3.80)–(3.81)
for the given matrix A are as follows:
Iteration No: 1
Iteration No: 9
These ten iterates suggest that the sequence $\{\mu_k\}$ converges to the eigenvalue $\lambda_1 = 3$
and the sequence $\{x^{(k)}\}$ converges to $(0.5, 0, 1)^T = \tfrac{1}{2}v_1$.
2
1. The Power method requires at the beginning that the matrix has only one
dominant eigenvalue, and this information is generally unavailable.
2. Even when there is only one dominant eigenvalue, it is not clear how to
choose the initial guess x(0) such that it has a non-zero component (c1 in the
notation of the theorem) along the eigenvector v 1 .
Note that in the above example, all the hypotheses are satisfied. Now let us ask the
question
“What happens when any of the hypotheses of power method is violated?”
We discuss these situations through examples.
Example 3.5.9 [Dominant eigenvalue is not unique (Failure of H1)].
Consider the matrix
$$B = \begin{pmatrix} 1 & 3 & 4\\ 0 & 2 & 1\\ 0 & 0 & -2\end{pmatrix},$$
which has eigenvalues 1, −2, and 2. Clearly, the matrix B has two dominant eigenvalues,
namely −2 and 2. We start with an initial guess $x^{(0)} = (1, 1, 1)^T$ and the first
five iterations generated using the power method are given below:
Iteration No: 1
Thus we conclude that the power method when applied to a matrix which has more
than one dominant eigenvalue may not converge.
Let us now illustrate the situation when the hypothesis (H3) is violated.
Example 3.5.11 [Failure of hypothesis (H3): Initial guess x(0) is such that c1 = 0].
λ1 = 3, λ2 = 1 and λ3 = 0.
Iteration No: 1
This makes the iteration converge to $\lambda_2$, which is the next dominant eigenvalue.
Remark 3.5.12.
It is important that we understand the hypothesis (H3) on the initial guess x(0)
correctly. Note that (H3) says that the coefficient of v 1 (which was denoted by c1 )
should be non-zero when x(0) is expressed as
x(0) = c1 v 1 + c2 v 2 + · · · + cn v n .
Example 3.5.13.
Consider the matrix
$$A = \begin{pmatrix} 91.4 & -22.0 & -44.8\\ 175.2 & -41.0 & -86.4\\ 105.2 & -26.0 & -51.4\end{pmatrix}.$$
Note that the matrix A satisfies the hypothesis (H1), since −5 is the unique dominant
eigenvalue and it is also a simple eigenvalue. The matrix A satisfies the hypothesis
(H2) as all eigenvalues are distinct and hence the eigenvectors form a basis for $\mathbb{R}^3$. Thus
the fate of the power method iterates depends solely on the choice of the initial guess
$x^{(0)}$ and whether it satisfies the hypothesis (H3).
• Let us take the initial guess $x^{(0)} = (1, 0.5, 0.25)^T$. Note that $c_1 \neq 0$ for this
initial guess. Thus the initial guess satisfies the hypothesis (H3), and the iterative
sequences generated by the power method converge to the dominant eigenvalue
$\lambda_1 = -5$ and the corresponding eigenvector (up to a scalar multiple) $\tfrac{1}{5}v_1$.
• Let us take the initial guess $x^{(0)} = (0, 0.5, 0.25)^T$. Note that $c_1 \neq 0$ for this
initial guess. Thus the initial guess satisfies the hypothesis (H3), and the iterative
sequences generated by the power method converge to the dominant eigenvalue
$\lambda_1 = -5$ and the corresponding eigenvector (up to a scalar multiple) $\tfrac{1}{5}v_1$. Compare
this with Example 3.5.11. In the present case the first coordinate of the
initial guess vector is zero, just as in Example 3.5.11. In Example 3.5.11 the
power method iterate converged to the second dominant eigenvalue and the
corresponding eigenvector, which does not happen in the present case. The
reason is that in the Example 3.5.11, c1 = 0 for the initial guess chosen, but in
the current example c1 ̸= 0.
and $D_k$ denotes the closed disk in the complex plane with centre $a_{kk}$ and radius $\rho_k$,
i.e.,
$$D_k = \{z \in \mathbb{C} : |z - a_{kk}| \leq \rho_k\}. \qquad (3.83)$$
Proof.
We will prove only (i) as it is easy; proving (ii) is beyond the scope of this
course.
Let λ be an eigenvalue of A. Then there exists a v = (v1 , v2 , · · · , vn ) ∈ Rn and v ̸= 0
such that
Av = λv (3.84)
Let 1 ≤ r ≤ n be such that |vr | = max{|v1 |, |v2 |, · · · , |vn |}. The rth equation of the
system of equations (3.84) is given by (actually, of Av − λv = 0)
Observe that the right hand side of the inequality (3.87) is ρr . This proves that
λ ∈ Dr .
Example 3.5.15.
For the matrix
$$\begin{pmatrix} 4 & 1 & 1\\ 0 & 2 & 1\\ -2 & 0 & 9\end{pmatrix},$$
the Gerschgorin disks are given by
$$D_1 = \{z \in \mathbb{C} : |z - 4| \leq 2\}, \qquad D_2 = \{z \in \mathbb{C} : |z - 2| \leq 1\}, \qquad D_3 = \{z \in \mathbb{C} : |z - 9| \leq 2\}.$$
Draw a picture of these disks and observe that D3 intersects neither D1 nor D2. By
(ii) of Theorem 3.5.14, D3 contains one eigenvalue and $D_1 \cup D_2$ contains two eigenvalues, counting
multiplicities. Note that the eigenvalues are approximately $4.6318, 1.8828 \in D_1 \cup D_2$
and $8.4853 \in D_3$.
Remark 3.5.16.
Gerschgorin’s circle theorem is helpful in finding bound for eigenvalues. For the
matrix in Example 3.5.15, any number z in D1 satisfies |z| ≤ 6. Similarly any
number z in D2 satisfies |z| ≤ 3, and any number z in D3 satisfies |z| ≤ 11. Since
any eigenvalue λ lies in one of three disks, we can conclude that |λ| ≤ 11.
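The disks, the resulting bound on $|\lambda|$, and the actual eigenvalues for the matrix of Example 3.5.15 can be produced with a few lines of Python (our own illustration; the function name is arbitrary).

```python
import numpy as np

def gerschgorin_disks(A):
    """Return the (centre, radius) pairs of the Gerschgorin row disks of A."""
    A = np.asarray(A, dtype=float)
    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return list(zip(np.diag(A), radii))

A = [[4, 1, 1], [0, 2, 1], [-2, 0, 9]]
disks = gerschgorin_disks(A)
print(disks)                                # [(4.0, 2.0), (2.0, 1.0), (9.0, 2.0)]
print(max(abs(c) + r for c, r in disks))    # 11.0, so every eigenvalue satisfies |lambda| <= 11
# eigenvalues are real for this matrix, approximately 1.8828, 4.6318, 8.4853
print(np.sort(np.linalg.eigvals(A).real))
```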
Remark 3.5.17.
The main disadvantage of the power method discussed in Section 3.5.1 is that if a
given matrix has more than one dominant eigenvalue, then the method may not
converge. So, for a given matrix, we do not know beforehand whether the power method will
converge or not. Also, as the power method is reasonably slow (see Example 3.5.13
for an illustration), we may have to perform a reasonably large number of iterations
before we come to know that the method is not actually converging.
Thus, a tool to find out whether the given matrix has a unique dominant eigen-
value or not is highly desirable. The Gerschgorin theorem (Theorem 3.5.14) can
sometimes be used to see if power method can be used for a given matrix. For
instance, in Example 3.5.15, we see that the power method can be used to obtain
an approximation to the dominant eigenvalue.
Since a matrix A and its transpose (denoted by $A^T$) have the same eigenvalues, we can apply
the Gerschgorin Circle Theorem to $A^T$ and conclude the following corollary.
Corollary 3.5.18.
Let A be an n × n matrix. For each $k = 1, 2, \cdots, n$, define $\tau_k$ by
$$\tau_k = \sum_{\substack{j=1\\ j\neq k}}^{n} |a_{jk}|, \qquad (3.88)$$
and let $B_k$ denote the closed disk in the complex plane with centre $a_{kk}$ and radius $\tau_k$.
That is,
$$B_k = \{z \in \mathbb{C} : |z - a_{kk}| \leq \tau_k\}. \qquad (3.89)$$
1. Each eigenvalue of A lies in one of the disks $B_k$. That is, no eigenvalue of A
lies in $\mathbb{C} \setminus \bigcup_{k=1}^{n} B_k$.
2. Suppose that among the disks $B_1, B_2, \cdots, B_n$, there is a collection of m disks
whose union (denoted by $C_1$) is disjoint from the union of the remaining $n - m$
disks (denoted by $C_2$). Then exactly m eigenvalues lie in $C_1$ and $n - m$
eigenvalues lie in $C_2$ (here each eigenvalue is counted as many times as its
algebraic multiplicity).
Optimal Bounds
Using the Gerschgorin theorem, we wish to identify an annular region in the complex
plane of the form
$$\{z \in \mathbb{C} : r \leq |z| \leq R\}, \qquad (3.90)$$
for some non-negative real numbers r and R, containing all the eigenvalues of a given
matrix.
Let $A = (a_{ij})$ be an n × n matrix with real entries. For $k = 1, 2, \cdots, n$, consider the
Gerschgorin disk $D_k$ defined by (3.83). Then, for $z \in D_k$ we have
$$|z - a_{kk}| \leq \rho_k,$$
Therefore, by the Gerschgorin theorem, all the eigenvalues of A are contained in the
annular region (3.90) with
$$r = \min_{1\leq k\leq n}\,(|a_{kk}| - \rho_k) \qquad\text{and}\qquad R = \max_{1\leq k\leq n}\,(|a_{kk}| + \rho_k).$$
Example 3.5.19.
For the matrix
$$\begin{pmatrix} 6 & 0 & 2\\ -1 & -5 & 0\\ -2 & 2 & -3\end{pmatrix},$$
the Gerschgorin disks can be obtained using (3.83) and are given by
Thus, the smallest annular region, obtained using the Gerschgorin theorem, that
contains all the eigenvalues of A is given by
$$\{z \in \mathbb{C} : 1 \leq |z| \leq 8\}.$$
3.6 Exercises
ii)
iii)
x1 − x2 + 3x3 = 2,
3x1 − 3x2 + x3 = −1,
x1 + x2 = 3.
x1 + x2 + x3 = 6,
3x1 + (3 + ϵ)x2 + 4x3 = 20,
2x1 + x2 + 3x3 = 13
using naive Gaussian elimination method, and using modified Gaussian elimina-
tion method with partial pivoting. Obtain the residual error in each case on a
computer for which the ϵ is less than its machine epsilon. The residual error
vector corresponding to an approximate solution x∗ is defined as r = b − Ax∗ ,
where A and b are the coefficient matrix and the right side vector, respectively,
of the given linear system.
i) Find the exact solution of the given system using Gaussian elimination
method with partial pivoting (i.e., with infinite precision arithmetic).
ii) Solve the given system using naive Gaussian elimination method using 4-
digit rounding.
iii) Obtain a system of linear equations $\tilde{A}x = \tilde{b}$ that is equivalent to the given
system, where the matrix $\tilde{A}$ is row-equilibrated. Solve the system $\tilde{A}x = \tilde{b}$
using naive Gaussian elimination method with 4-digit rounding.
iv) Compute the relative errors involved in the two approximations obtained
above.
5. Count the number of operations involved in finding a solution using naive Gaus-
sian elimination method to the following special class of linear systems having the
form
a11 x1 + · · · +a1n xn = b1 ,
···
···
an1 x1 + · · · +ann xn = bn ,
2x1 + 3x2 = 1,
x1 + 2x2 + 3x3 = 4,
x2 + 2x3 + 3x4 = 5,
x3 + 2x4 = 2.
LU Decomposition
7. Prove or disprove the following statements:
i) An invertible matrix has at most one Doolittle factorization.
ii) If a singular matrix has a Doolittle factorization, then the matrix has at
least two Doolittle factorizations.
9. Give an example of a non-invertible 3×3 matrix A such that the leading principal
minors of order 1 and 2 are non-zero, and A has Doolittle factorization.
4x1 + x2 + x3 = 4,
x1 + 4x2 − 2x3 = 4,
3x1 + 2x2 − 4x3 = 6.
15. Prove the uniqueness of the factorization $A = LL^T$, where L is a lower triangular
matrix all of whose diagonal entries are positive. (Hint: Assume that
there are lower triangular matrices $L_1$ and $L_2$ with positive diagonals. Prove that
$L_1L_2^{-1} = I$.)
18. Show that the norm defined on the set of all n × n matrices by
$$\|A\| := \max_{1\leq i\leq n,\ 1\leq j\leq n} |a_{ij}|$$
19. Let A be an invertible matrix. Show that its condition number κ(A) satisfies
κ(A) ≥ 1.
20. Let A and B be invertible matrices with condition numbers κ(A) and κ(B) re-
spectively. Show that κ(AB) ≤ κ(A)κ(B).
21. Let A be an n × n matrix with real entries. Let κ2 (A) and κ∞ (A) denote the
condition numbers of a matrix A that are computed using the matrix norms ∥A∥2
and ∥A∥∞ , respectively. Answer the following questions.
i) Determine all the diagonal matrices such that κ∞ (A) = 1.
ii) Let Q be a matrix such that QT Q = I (such matrices are called orthogonal
matrices). Show that κ2 (Q) = 1.
iii) If κ2 (A) = 1, show that all the eigenvalues of AT A are equal. Further, de-
duce that A is a scalar multiple of an orthogonal matrix.
predict how slight changes in b will affect the solution x. Test your prediction in
the concrete case when b = (4, 4)T and b̃ = (3, 5)T . Use the maximum norm for
vectors in R2 .
$$x_1 + x_2 = 1, \qquad x_1 + 2x_2 = 2,$$
and
$$10^{-4}x_1 + 10^{-4}x_2 = 10^{-4}, \qquad x_1 + 2x_2 = 2.$$
Let us denote the first and second systems by A1 x = b1 and A2 x = b2 re-
spectively. Use maximum-norm for vectors and the matrix norm subordinate to
maximum-norm for matrices in your computations.
i) Solve each of the above systems using Naive Gaussian elimination method.
ii) Compute the condition numbers of A1 and A2 .
iii) For each of the systems, find an upper bound for the relative error in the
solution if the right hand sides are approximated by $\tilde{b}_1$ and $\tilde{b}_2$, respectively.
iv) Solve the systems
$$A_1x = \tilde{b}_1 \qquad\text{and}\qquad A_2x = \tilde{b}_2,$$
where $\tilde{b}_1 = (1.02, 1.98)^T$ and $\tilde{b}_2 = (1.02 \times 10^{-4}, 1.98)^T$, using naive Gaussian
elimination method. Compute the relative error in each case. Compare the
computed relative errors with the bounds obtained above.
25. In the following problems, the matrix norm $\|\cdot\|$ denotes a matrix norm subordinate
to a fixed vector norm.
i) Let A be an invertible matrix and B be any singular matrix. Prove the
following inequality:
$$\frac{1}{\|A - B\|} \leq \|A^{-1}\|.$$
ii) Let A be an invertible matrix, and B be a matrix such that
$$\frac{1}{\|A - B\|} > \|A^{-1}\|.$$
26. The spectral radius of a square matrix A is defined as $\rho(A) := \max_{j=1,\cdots,n} |\lambda_j|$, where the $\lambda_j$'s
are the eigenvalues of A.
i) For any subordinate matrix norm ∥ · ∥, show that ρ(A) ≤ ∥A∥.
ii) If A is invertible and if an iterative method of the form x(k+1) = Bx(k) + c
converges to the solution of Ax = b for any initial guess x(0) and any vector
b, then show that ρ(B) < 1.
Diagonally Dominant Matrix
27. Let A be a diagonally dominant matrix. Then show that Naive Gaussian elimi-
nation method to solve the system of linear equations Ax = b never fails to give
an approximate solution of the given linear system.
28. Let A be a diagonally dominant matrix such that aij = 0 for every i, j ∈
{1, 2, · · · , n} such that i > j + 1. Does naive Gaussian elimination method
preserve the diagonal dominance? Justify your answer.
Iterative Methods
29. Write the formula for the Jacobi iterative sequence of the system
7x1 − 15x2 − 21x3 = 2,
7x1 − x2 − 5x3 = −3,
7x1 + 5x2 + x3 = 1.
Without performing the iterations, show that the sequence does not converge to
the exact solution of this system. Can you make a suitable interchange of rows
so that the resulting system is diagonally dominant?
30. Let A be a diagonally dominant matrix. Show that all the diagonal elements of
A are non-zero (i.e., aii ̸= 0 for i = 1, 2, · · · , n.). As a consequence, the iterating
sequences of Jacobi and Gauss-Seidel methods are well-defined if the coefficient
matrix A in the linear system Ax = b is a diagonally dominant matrix.
31. Find the n × n matrix B and the n-dimensional vector c such that the Gauss-
Seidel method can be written in the form
x(k+1) = Bx(k) + c, k = 0, 1, 2, · · ·
32. For each of the following systems, write down the formula for iterating sequences
of Jacobi and Gauss-Seidel methods. Compute three iterates by taking x0 =
(0, 0, 0)T . Discuss if you can guarantee that these sequences converge to the exact
solution. In case you are not sure about convergence, suggest another iterating
sequence that converges to the exact solution if possible; and justify that the new
sequence converges to the exact solution.
i)
5x1 + 2x2 + x3 = 0.12,
1.75x1 + 7x2 + 0.5x3 = 0.1,
x1 + 0.2x2 + 4.5x3 = 0.5.
ii)
x1 − 2x2 + 2x3 = 1,
x1 + x2 − x3 = 1,
2x1 − 2x2 + x3 = 1.
iii)
x1 + x2 + 10x3 = −1,
2x1 + 3x2 + 5x3 = −6,
3x1 + 2x2 − 3x3 = 4.
Eigenvalue Problems
34. The matrix
$$A = \begin{pmatrix} 2 & 0 & 0\\ 2 & 1 & 0\\ 3 & 0 & 1\end{pmatrix}$$
lie, given that all eigenvalues of A are real. Show that power method can be
applied for this matrix to find the dominant eigenvalue without computing eigen-
values explicitly. Compute the first three iterates of Power method sequences.
$$A = \begin{pmatrix} -2.7083 & -2.6824 & 0.4543\\ 0.1913 & 0.7629 & 0.1007\\ -0.3235 & -0.4052 & 5.0453\end{pmatrix}.$$
38. Use the Gerschgorin Circle theorem to determine bounds for the eigenvalues for
the following matrices. Also find optimum bounds wherever possible. Also draw
39. Prove that the following two statements concerning n × n matrices are equivalent.
i) Every diagonally dominant matrix is invertible.
ii) Each of the eigenvalues of a matrix A, belongs to at least one Gerschgorin
disk corresponding to A.
40. Prove that the eigenvalues of the matrix
$$\begin{pmatrix} 6 & 2 & 1\\ 1 & -5 & 0\\ 2 & 1 & 4\end{pmatrix}$$
i) Gerschgorin theorem was used to conclude that the matrix A satisfies Hy-
pothesis (H1) of Power method. Find the set of all such values of α ∈ R.
ii) Gerschgorin theorem was used to conclude that the matrix A has distinct
eigenvalues. Find the set of all such values of α ∈ R.
43. Find the set of all α ∈ R such that the vector (α −1, α +1)T satisfies Hypothesis
(H3) of the Power method for the matrix
[ 1 0 0 ; 0 2 1 ; 0 0 −3 ].
CHAPTER 4
Nonlinear Equations
One of the most frequently occurring problems in practical applications is to find the
roots of equations of the form
f (x) = 0, (4.1)
where f : [a, b] → R is a given nonlinear function. It is well-known that not all nonlinear
equations can be solved explicitly to obtain the exact value of the roots and hence, we
need to look for methods to compute approximate values of the roots. By an approximate
root of (4.1), we mean a point x∗ ∈ R for which the value f (x∗ ) is very near to zero, i.e.,
f (x∗ ) ≈ 0.
This process of improving the approximation to the root is called the iterative process
(or iterative procedure), and such methods are called iterative methods. In an
4.1 Closed Domain Methods
The idea behind the closed domain methods is to start with an interval (denoted by
[a0 , b0 ]) in which there exists at least one root of the given nonlinear equation and then
reduce the length of this interval iteratively, while ensuring that the interval obtained
at each iteration contains at least one root of the equation.
Note that the initial interval [a0 , b0 ] can be obtained using the intermediate value theo-
rem (as we always assume that the nonlinear function f is continuous) by checking the
condition that
f (a0 )f (b0 ) < 0.
That is, f (a0 ) and f (b0 ) are of opposite sign. The closed domain methods differ from
each other only by the way they go on reducing the length of this interval at each
iteration.
In the following subsections we discuss two closed domain methods, namely, (i) the
bisection method and (ii) the regula-falsi method.
The simplest way of reducing the length of the interval is to sub-divide the interval
into two equal parts and then take the sub-interval that contains a root of the equation
and discard the other part of the interval. This method is called the bisection method.
Let us explain the procedure of generating the first iteration of this method.
Step 1: Define x1 to be the mid-point of the interval [a0 , b0 ]. That is,
x1 = (a0 + b0 )/2.
Step 2: Now, exactly one of the following two statements hold.
1. x1 solves the nonlinear equation. That is, f (x1 ) = 0.
2. Either f (a0 )f (x1 ) < 0 or f (b0 )f (x1 ) < 0.
If case (1) above holds, then x1 is a required root of the given equation f (x) = 0 and
therefore we stop the iterative procedure. If f (x1 ) ̸= 0, then case (2) holds as f (a0 )
and f (b0 ) are already of opposite signs. In this case, we define a subinterval [a1 , b1 ] of
[a0 , b0 ] as follows:
[a1 , b1 ] = [a0 , x1 ], if f (a0 )f (x1 ) < 0,
[a1 , b1 ] = [x1 , b0 ], if f (b0 )f (x1 ) < 0.
The outcome of the first iteration of the bisection method is the interval [a1 , b1 ] and the
first member of the corresponding iterative sequence is the real number x1 . Observe
that
• the length of the interval [a1 , b1 ] is exactly half of the length of [a0 , b0 ] and
• [a1 , b1 ] has at least one root of the nonlinear equation f (x) = 0.
Similarly, we can obtain x2 and [a2 , b2 ] as the result of the second iteration and so on.
We now present the algorithm for the bisection method.
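For readers who wish to experiment on a computer, here is a minimal Python sketch of one way to implement the bisection iteration described above; the function name, the use of a maximum number of iterations as a stopping rule, and returning the last mid-point are our own choices.

def bisection(f, a0, b0, max_iter):
    # assumes f is continuous on [a0, b0], f(a0) * f(b0) < 0, and max_iter >= 1
    a, b = a0, b0
    for n in range(1, max_iter + 1):
        x = (a + b) / 2.0          # the n-th member of the iterative sequence
        if f(x) == 0.0:            # case (1): x solves the equation exactly
            return x
        if f(a) * f(x) < 0.0:      # case (2): keep the half that contains a root
            b = x
        else:
            a = x
    return x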
Remark 4.1.2.
In practice, one may also use any of the stopping criteria listed in Section 4.2, either
single or multiple criteria.
Assuming that, for each n = 1, 2, · · · , the number xn is not a root of the nonlinear
equation f (x) = 0, we get a sequence of real numbers {xn }. The question is whether
this sequence converges to a root of the nonlinear equation f (x) = 0. We now discuss
the error estimate and convergence of the iterative sequence generated by the bisection
method.
Proof.
It directly follows from the construction of the intervals [an , bn ] that
bn − an = (1/2)(bn−1 − an−1 ) = · · · = (1/2)^n (b0 − a0 ).
As a consequence, we get
lim_{n→∞} (bn − an ) = 0.
Moreover, the sequence {an } is non-decreasing and bounded above by b0 , and the
sequence {bn } is non-increasing and bounded below by a0 ; hence both sequences
converge, and consequently
lim_{n→∞} an = lim_{n→∞} bn .
Since for each n = 0, 1, 2, · · · , the number xn+1 is the mid-point of the interval [an , bn ],
we also have
an < xn+1 < bn .
Now by sandwich theorem for sequences, we conclude that the sequence {xn } of mid-
points also converges to the same limit as the sequences of end-points. Thus we
have
lim_{n→∞} an = lim_{n→∞} bn = lim_{n→∞} xn = r (say). (4.3)
Since for each n = 0, 1, 2, · · · , we have f (an )f (bn ) < 0, applying limits on both sides
of the inequality and using the continuity of f , we get f (r)f (r) ≤ 0, that is, (f (r))^2 ≤ 0,
from which we conclude that f (r) = 0. That is, the sequence of mid-points {xn }
defined by the bisection method converges to a root of the nonlinear equation f (x) =
0.
Since the sequences {an } and {bn } are non-decreasing and non-increasing, respec-
tively, for each n = 0, 1, 2, · · · , we have r ∈ [an , bn ]. Also, xn+1 is the mid-point of
Corollary 4.1.4.
Let ϵ > 0 be given. Let f satisfy the hypothesis of bisection method with the interval
[a0 , b0 ]. Let xn be as in the bisection method, and r be the root of the nonlinear
equation f (x) = 0 to which the bisection method converges. Then
|xn − r| ≤ ϵ (4.5)
whenever n satisfies
n ≥ (log(b0 − a0 ) − log ϵ)/log 2. (4.6)
Proof.
By the error estimate of bisection method given by (4.2), we are sure that
|xn − r| ≤ ϵ,
Remark 4.1.5.
The Corollary 4.1.4 tells us that if we want an approximation xn to the root r of
the given equation such that the absolute error is less than a pre-assigned positive
quantity, then it is enough to perform n iterations, where n is the least integer that
satisfies the inequality (4.6). It is interesting to observe that to obtain this n, we
don’t need to know the root r.
However, it is not always true that the smallest n satisfying the inequality (4.6) is
the smallest n for which (4.5) is satisfied (see Example 4.1.6).
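For instance, the least n satisfying (4.6) can be computed directly from a0 , b0 and ϵ; the following few lines of Python (our own illustration, not part of the corollary) do exactly this.

import math

def bisection_iter_count(a0, b0, eps):
    # least integer n with n >= (log(b0 - a0) - log(eps)) / log(2)
    return math.ceil((math.log(b0 - a0) - math.log(eps)) / math.log(2.0))

print(bisection_iter_count(0.0, 1.0, 0.125))   # prints 3, consistent with Example 4.1.6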
Example 4.1.6.
Let us find an approximate root to the nonlinear equation
sin x + x2 − 1 = 0
using bisection method so that the resultant absolute error is at most ϵ = 0.125.
To apply bisection method, we must choose an interval [a0 , b0 ] such that the function
f (x) = sin x + x2 − 1
satisfies the hypothesis of bisection method. Note that f satisfies hypothesis of bi-
section on the interval [0, 1]. In order to achieve the required accuracy, we should
first decide how many iterations are needed. The inequality (4.6) says that the required
accuracy is achieved provided n satisfies
n ≥ (log(1) − log(0.125))/log 2 = 3.
Thus we have to compute x3 . We will do it now.
Iteration 1: We have a0 = 0, b0 = 1. Thus x1 = 0.5. Since,
The regula-falsi method is similar to the bisection method. Although the bisection
method (discussed in the previous subsection) always converges to the root, the con-
vergence is very slow, especially when the length of the initial interval [a0 , b0 ] is very
large and the equation has a root very close to one of the end points. This is
because, at every iteration, we are subdividing the interval [an , bn ] into two equal parts
and taking the mid-point as xn+1 (the (n + 1)th member of the iterative sequence).
Therefore, it takes several iterations to reduce the length of the interval to a very small
number, and, as a consequence, to reduce the distance between the root and xn+1 .
The regula-falsi method differs from the bisection method only in the choice of xn+1
in the interval [an , bn ] for each n = 0, 1, 2, · · · . Instead of taking the midpoint of the
interval, we now take the x-coordinate of the point of intersection of the line joining
the points (an , f (an )) and (bn , f (bn )) with the x-axis. Let us now explain the procedure
of generating the first iteration of this method.
Step 1: Assume the hypothesis of the bisection method and let [a0 , b0 ] be the initial
interval. The line joining the points (a0 , f (a0 )) and (b0 , f (b0 )) is given by
y = f (a0 ) + ((f (b0 ) − f (a0 ))/(b0 − a0 )) (x − a0 ),
The first member x1 of the regula-falsi iterative sequence is the x-coordinate of the point
of intersection of the above line with the x-axis. Therefore, x1 satisfies the equation
f (a0 ) + ((f (b0 ) − f (a0 ))/(b0 − a0 )) (x1 − a0 ) = 0
and is given by
x1 = a0 − f (a0 ) (b0 − a0 )/(f (b0 ) − f (a0 )),
which can also be written as
x1 = (a0 f (b0 ) − b0 f (a0 ))/(f (b0 ) − f (a0 )).
Step 2: Now, exactly one of the following two statements hold.
1. x1 solves the nonlinear equation. That is, f (x1 ) = 0.
2. Either f (a0 )f (x1 ) < 0 or f (b0 )f (x1 ) < 0.
If case (1) above holds, then x1 is a required root of the given equation f (x) = 0 and
therefore we stop the iterative procedure. If f (x1 ) ̸= 0, then case (2) holds as f (a0 )
and f (b0 ) are already of opposite signs. We now define a subinterval [a1 , b1 ] of [a0 , b0 ]
as follows:
[a1 , b1 ] = [a0 , x1 ], if f (a0 )f (x1 ) < 0,
[a1 , b1 ] = [x1 , b0 ], if f (b0 )f (x1 ) < 0.
The outcome of the first iteration of the regula-falsi method is the interval [a1 , b1 ] and
the first member of the corresponding iterative sequence is the real number x1 . Observe
that
• the length of the interval [a1 , b1 ] may be (although not always) much less than
half of the length of [a0 , b0 ] and
• [a1 , b1 ] has at least one root of the nonlinear equation f (x) = 0.
We now summarize the regula-falsi method.
Algorithm:
Step 3: Stop the iteration if the case (1) in step 2 holds and declare the value of
xn+1 as the required root. Otherwise repeat step 1 with n replaced by n + 1.
Continue this process till a desired accuracy is achieved.
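A minimal Python sketch of one way to implement the regula-falsi iteration is given below; the function name and the residual-based stopping rule (only one of the criteria of Section 4.2) are our own choices.

def regula_falsi(f, a0, b0, eps, max_iter):
    # assumes f is continuous on [a0, b0], f(a0) * f(b0) < 0, and max_iter >= 1
    a, b = a0, b0
    for n in range(1, max_iter + 1):
        # x-coordinate of the intersection of the secant through (a, f(a)), (b, f(b)) with the x-axis
        x = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if abs(f(x)) < eps:        # stopping criterion based on the residual error
            return x
        if f(a) * f(x) < 0.0:
            b = x
        else:
            a = x
    return x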
a0 ≤ a1 ≤ · · · ≤ an ≤ · · · ≤ b0 .
b0 ≥ b1 ≥ · · · ≥ bn ≥ · · · ≥ a0 .
lim_{n→∞} an ≤ lim_{n→∞} bn .
as an < xn+1 < bn for all n = 0, 1, 2, · · · . In this case, the common limit will
be a root of the nonlinear equation, as is the case with bisection method.
It is important to observe that it may happen that the lengths of the subintervals
chosen by regula-falsi method do not go to zero. In other words, if an → α and
bn → β, then it may happen that α < β. Let us illustrate this case by the following
example.
Example 4.1.11.
Consider the nonlinear equation
ex − 2 = 0.
The last example ruled out any hopes of proving that α = β. However, it is true that
the sequence {xn } converges to a root of the nonlinear equation f (x) = 0, and as can
be expected, the proof is by a contradiction argument. Let us now state the theorem on the
regula-falsi method.
Theorem 4.1.12 [Convergence of Regula-falsi method].
Hypothesis: Let f : [a0 , b0 ] → R be a continuous function such that the numbers
f (a0 ) and f (b0 ) have opposite signs.
Proof is omitted.
Example 4.1.13.
Let us find an approximate root to the nonlinear equation
sin x + x2 − 1 = 0
4.2 Stopping Criteria
The outcome of any iterative method for a given nonlinear equation is a sequence of real
numbers that is expected to converge to a root of the equation. When we implement
such a method on a computer, we cannot go on computing the iterations indefinitely
and need to stop the computation at some point. It is desirable to stop computing
the iterations when the xn ’s are reasonably close to an exact root r for a sufficiently
large n. In other words, we want to stop the computation at the nth iteration when the
computed value is such that
|xn − r| < ϵ
for a pre-assigned small positive number ϵ.
In general, we do not know the root r of the given nonlinear equation to which the
iterative sequence is converging. Therefore, we have no idea of when to stop the iteration,
as we have seen in the case of the regula-falsi method. In fact, this situation arises
for any of the open domain methods discussed in the next section. An alternative is to look
for some criterion that does not use the knowledge of the root r, but gives a rough idea
of how close we are to this root. Such a criterion is called a stopping criterion. We
now list some of the commonly used stopping criteria for iterative methods for nonlinear
equations.
Stopping Criterion 1: Fix a K ∈ N, and ask the iteration to stop after finding xK .
This criterion is borne out of fatigue, as there is clearly no mathematical reason why
the K fixed at the beginning of the iteration is more important than any other natural
number! If we stop the computation using this criterion, we will declare xK to be the
approximate root to the nonlinear equation f (x) = 0.
Stopping Criterion 2: Fix a real number ϵ > 0 and a natural number N . Ask the
iteration to stop after finding xk such that
|xk − xk−N | < ϵ.
One may interpret this stopping criterion by saying that there is ‘not much’ improve-
ment in the value of xk compared to a previous value xk−N . If we stop the computation
using this criterion, we will declare xk to be the approximate root of the nonlinear
equation f (x) = 0.
It is more convenient to take N = 1, in which case the criterion reads |xk − xk−1 | < ϵ.
Stopping Criterion 3: Fix a real number ϵ > 0 and a natural number N . Ask the
iteration to stop after finding xk such that
|xk − xk−N | / |xk | < ϵ.
If we stop the computation using this criterion, we will declare xk to be the approximate
root to the nonlinear equation f (x) = 0.
As in the above case, it is convenient to take N = 1.
Stopping Criterion 4: Fix a real number ϵ > 0 and ask the iteration to stop after
finding xk such that
|f (xk )| < ϵ.
If we stop the computation using this criterion, we will declare xk to be the approximate
root to the nonlinear equation f (x) = 0. Sometimes the number |f (xk )| is called the
residual error corresponding to the approximate root xk of the nonlinear equation
f (x) = 0.
In practice, one may use any of the stopping criteria listed above, either single or
multiple criteria.
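As an illustration, a combined check of Criterion 3 (with N = 1) and Criterion 4 might be coded as follows; the function name and the tolerance values are ours and purely indicative.

def should_stop(x_prev, x_curr, residual, eps_rel=1e-8, eps_res=1e-10):
    # Criterion 3 with N = 1: relative change between successive iterates
    small_step = abs(x_curr - x_prev) < eps_rel * abs(x_curr)
    # Criterion 4: residual error |f(x_k)|
    small_residual = abs(residual) < eps_res
    return small_step and small_residual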
Remark 4.2.1.
We can also use any of the above stopping criteria in bisection method.
4.3 Open Domain Methods
Step 2: Choose any one of the stopping criteria (or a combination of them) discussed
in Section 4.2. If this criterion is satisfied, stop the iteration. Otherwise, repeat the
step 1 by replacing n with n + 1 until the criterion is satisfied.
Recall that xn+1 for each n = 1, 2, · · · given by (4.9) (or (4.10)) is the x-coordinate
of the point of intersection of the secant line joining the points (xn−1 , f (xn−1 )) and
(xn , f (xn )) with the x-axis and hence the name secant method.
Remark 4.3.2.
It is evident that the secant method fails to determine xn+1 if we have f (xn−1 ) =
f (xn ). Observe that such a situation never occurs in regula-falsi method.
Example 4.3.3.
Consider the equation
sin x + x2 − 1 = 0.
Let x0 = 0, x1 = 1. Then the iterations from the secant method are given by
n xn ϵ
2 0.543044 0.093689
3 0.626623 0.010110
4 0.637072 0.000339
5 0.636732 0.000001
Figure 4.1 shows the iterative points x2 and x3 as black bullets. Recall that the exact
value of the root (to which the iteration seems to converge), up to 6 significant digits,
is r ≈ 0.636733. Obviously, the secant method is much faster than the bisection method
in this example.
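The table above can be reproduced (up to rounding) with a few lines of Python; the following sketch, with our own function name, implements the secant formula xn+1 = xn − f (xn )(xn − xn−1 )/(f (xn ) − f (xn−1 )).

import math

def secant(f, x0, x1, n_steps):
    # x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
    for _ in range(n_steps):
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    return x1

f = lambda x: math.sin(x) + x * x - 1.0
print(secant(f, 0.0, 1.0, 4))   # about 0.636732 after four steps, as in the table above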
Remark 4.3.5.
The expression (4.11) implies that the order of convergence (see Chapter 1 for the
definition) of the iterative sequence of the secant method is approximately 1.62 (the
golden ratio (1 + √5)/2 ≈ 1.618).
In Theorem 4.3.4, we have seen that the secant method has more than linear order of
convergence. This method can further be modified to achieve quadratic convergence.
To do this, we first observe that when the iterative sequence of the secant method
converges, then as n increases, we see that xn−1 approaches xn . Thus, for a sufficiently
large value of n, we have
f ′ (xn ) ≈ (f (xn ) − f (xn−1 ))/(xn − xn−1 ),
provided f is a C 1 function. Thus, if f is differentiable, then on replacing the slope of
the secant in (4.10) by the slope of the tangent at xn , we get the iteration formula
xn+1 = xn − f (xn )/f ′ (xn ), (4.12)
which is called the Newton-Raphson method.
We may assume that for x ≈ x0 , f (x) ≈ g(x). This can also be interpreted as follows.
If we choose the initial guess x0 very close to the root r of f (x) = 0, that is, if r ≈ x0 ,
then we have g(r) ≈ f (r) = 0. This gives (approximately)
r ≈ x0 − f (x0 )/f ′ (x0 ).
Upon replacing r by x1 and using the ‘=’ symbol instead of the ‘≈’ symbol, we get the
first iteration of the Newton-Raphson iterative formula (4.12).
Recall in secant method, we need two initial guesses x0 and x1 to start the iteration.
In Newton-Raphson method, we need one initial guess x0 to start the iteration. The
consecutive iteration x1 is the x-coordinate of the point of intersection of the x-axis
and the tangent line at x0 , and similarly for the other iterations. This geometrical
interpretation of the Newton-Raphson method is clearly observed in Figure 4.2.
We now derive the Newton-Raphson method under the assumption that f is a C 2
function.
f (x) = f (x0 ) + f ′ (x0 )(x − x0 ) + ((x − x0 )^2 / 2!) f ′′ (ξ),
where ξ lies between x0 and x. When x0 is very close to x, the last term in the above
equation is smaller when compared to the other two terms on the right hand side. By
neglecting this term, we have
f (x) ≈ f (x0 ) + f ′ (x0 )(x − x0 ).
Notice that the graph of the function g(x) = f (x0 ) + f ′ (x0 )(x − x0 ) is precisely the
tangent line to the graph of f at the point (x0 , f (x0 )). We now define x1 to be the x-
coordinate of the point of intersection of this tangent line with the x-axis. That
is, the point x1 is such that g(x1 ) = 0, which gives
x1 = x0 − f (x0 )/f ′ (x0 ). (4.14)
This gives the first member of the iterative sequence of the Newton-Raphson’s method.
We now summarize the Newton-Raphson’s method.
Algorithm 4.3.7.
Hypothesis:
1. Let the function f be C 1 and r be the root of the equation f (x) = 0 with
f ′ (r) ̸= 0.
2. The initial guess x0 is chosen sufficiently close to the root r.
Algorithm:
Step 1: For n = 0, 1, 2, · · · , the iterative sequence of the Newton-Raphson’s method
is given by (4.12)
xn+1 = xn − f (xn )/f ′ (xn ).
Step 2: Choose any one of the stopping criteria (or a combination of them) discussed
in Section 4.2. If this criterion is satisfied, stop the iteration. Otherwise, repeat the
step 1 by replacing n with n + 1 until the criterion is satisfied.
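A minimal Python sketch of this iteration (our own function name; a residual-based stopping rule is used here only for concreteness) could look as follows.

import math

def newton_raphson(f, fprime, x0, eps, max_iter):
    # x_{n+1} = x_n - f(x_n) / f'(x_n)
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:          # stopping criterion based on the residual error
            break
        x = x - fx / fprime(x)     # fails if fprime(x) == 0 (cf. Remark 4.3.8)
    return x

root = newton_raphson(lambda x: math.sin(x) + x * x - 1.0,
                      lambda x: math.cos(x) + 2.0 * x,
                      1.0, 1e-10, 20)
print(root)                        # about 0.636733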
Figure 4.2: Iteration procedure of the Newton-Raphson method for f (x) = sin(x) + x2 − 1.
Remark 4.3.8.
It is evident that if the initial guess x0 is such that f ′ (xn ) = 0, for some n ∈ N, then
the Newton-Raphson method fails. Geometrically, this means that the tangent line
to the graph of f at the point (xn , f (xn )) is parallel to the x-axis. Therefore, this
line never intersects the x-axis and hence xn+1 does not exist. See Remark 4.3.2 for the
failure of secant method and compare it with the present case.
Example 4.3.9.
Consider the equation
sin x + x2 − 1 = 0.
Let x0 = 1. Then the iterates from the Newton-Raphson method are given by
n xn ϵ
1 0.668752 0.032019
2 0.637068 0.000335
3 0.636733 0.000000
Figure 4.2 shows the iterative points x2 and x3 as black bullets. Recall that the exact
root is x∗ ≈ 0.636733. Obviously, the Newton-Raphson method is much faster than
both the bisection and secant methods in this example.
The proofs of the conclusions (1)-(3) are omitted. Conclusion (4) implies that the
Newton-Raphson method converges quadratically. The proof is a direct application of
the Taylor’s theorem and is left as an exercise.
The theorem on the Newton-Raphson method says that if we start near a root of the non-
linear equation, then the Newton-Raphson iterative sequence is well-defined and converges.
For increasing convex functions, we need not be very careful in choosing the initial guess.
For such functions, the Newton-Raphson iterative sequence always converges, whatever
may be the initial guess. This is the content of the next theorem.
Theorem 4.3.11 [Convergence Result for Convex Functions].
Hypothesis: Let f : R → R be a twice continuously differentiable function such
that
1. f is convex, i.e., f ′′ (x) > 0 for all x ∈ R.
2. f is strictly increasing, i.e., f ′ (x) > 0 for all x ∈ R.
3. there exists an r ∈ R such that f (r) = 0.
Conclusion: Then
1. r is the unique root of f (x) = 0.
2. For every choice of x0 , the Newton-Raphson iterative sequence converges to r.
Proof.
Proof of (1): Since f is strictly increasing, the function cannot take the same value
en+1 = en + f (xn )/f ′ (xn ),
Example 4.3.12.
Start with x0 = −2.4 and use Newton-Raphson iteration to find the root r = −2.0 of
the polynomial
f (x) = x3 − 3x + 2.
The iteration formula is
xn+1 = (2 xn^3 − 2)/(3 xn^2 − 3).
It is easy to verify that |r − xn+1 |/|r − xn |2 ≈ 2/3, which shows the quadratic con-
vergence of Newton-Raphson’s method.
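The ratio |r − xn+1 |/|r − xn |^2 can be observed numerically with a few lines of Python (our own sketch):

r = -2.0
x = -2.4
for n in range(4):
    x_new = (2.0 * x**3 - 2.0) / (3.0 * x**2 - 3.0)   # Newton-Raphson step for x^3 - 3x + 2
    print(n, abs(r - x_new) / abs(r - x) ** 2)        # the printed ratios approach 2/3
    x = x_new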
In fixed point iteration method, the problem of finding a root of a nonlinear equation
f (x) = 0 is equivalently viewed as the problem of finding a fixed point of a suitably
defined function g.
A real number α is called a fixed point of the function g if α = g(α).
The idea behind the choice of g is to rewrite the given nonlinear equation f (x) = 0 in
the form x = g(x) for some function g. In general, there may be more than one choice
of g with this property as illustrated by the following example.
Example 4.3.14.
Note that α ∈ R is a root of the equation x2 − x − 2 = 0 if and only if α is a root of
each of the following equations.
1. x = x2 − 2,
2. x = √(x + 2),
3. x = 1 + 2/x.
In other words, obtaining a root of the equation f (x) = 0, where f (x) = x2 − x − 2
is equivalent to finding the fixed point of any one of the following functions:
1. g1 (x) = x2 − 2,
2. g2 (x) = √(x + 2),
3. g3 (x) = 1 + 2/x.
The fixed-point iteration method for finding a root of g(x) = x consists of a sequence
of iterates {xn }, starting from an initial guess x0 , defined by
xn = g(xn−1 ) (4.17)
fined.
Example 4.3.15.
Consider the equation
x2 − x = 0.
This equation can be re-written as x = ±√x. Let us take the iterative function
g(x) = −√x.
Since g(x) is defined only for x > 0, we have to choose x0 > 0. For this value of x0 ,
we have g(x0 ) < 0 and therefore, x1 cannot be calculated.
Therefore, the choice of g(x) has to be made carefully so that the sequence of iterates
can be calculated.
How to choose such an iteration function g(x)?
Note that x1 = g(x0 ), and x2 = g(x1 ). Thus x1 is defined whenever x0 belongs to the
domain of g. Thus we must take the initial guess x0 from the domain of g. For defining
x2 , we need that x1 is in the domain of g once again. In fact the sequence is given by
x0 , g(x0 ), g ◦ g(x0 ), g ◦ g ◦ g(x0 ), · · ·
Thus to have a well-defined iterative sequence, we require that
Range of the function g is contained in the domain of g.
A function with this property is called a self map. We make our first assumption on
the iterative function as
Assumption 1: a ≤ g(x) ≤ b for all a ≤ x ≤ b.
It follows that if a ≤ x0 ≤ b, then for all n, xn ∈ [a, b] and therefore xn+1 = g(xn ) is
defined and belongs to [a, b].
Let us now discuss point 3, namely that the limit ξ of the sequence should satisfy
ξ = g(ξ). To achieve this, we need g(x) to be a continuous function. For if xn → ξ, then
ξ = lim_{n→∞} xn = lim_{n→∞} g(xn−1 ) = g( lim_{n→∞} xn−1 ) = g(ξ).
Therefore, we need
Assumption 2: The iterative function g is continuous.
It is easy to prove that a continuous self map on a bounded interval always has a fixed
point. However, the question is whether the sequence (4.17) generated by the iterative
function g converges, which is the requirement stated in point 2. This point is well
understood geometrically. Figure 4.3(a) and Figure 4.3(c) illustrate the convergence of
the fixed-point iterations whereas Figure 4.3(b) and Figure 4.3(d) illustrate the diverging
iterations. In this geometrical observation, we see that when |g ′ (x)| < 1, we have
convergence and otherwise, we have divergence. Therefore, we make the assumption
Assumption 3: The iteration function g(x) is differentiable on I = [a, b]. Further,
there exists a constant 0 < K < 1 such that
|g ′ (x)| ≤ K, x ∈ I. (4.18)
Such a function is called a contraction map.
Let us now present the algorithm of the fixed-point iteration method.
Algorithm 4.3.16.
Hypothesis: Let g : [a, b] → [a, b] be an iteration function such that Assumptions 1,
2, and 3 stated above hold.
Algorithm:
Step 1: Choose an initial guess x0 ∈ [a, b].
Step 2: Define the iteration methods as
xn+1 = g(xn ), n = 0, 1, · · ·
Step 3: For a pre-assigned positive quantity ϵ, check for one of the (fixed) stop-
ping criteria discussed in Section 4.2. If the criterion is satisfied, stop the iteration.
Otherwise, repeat step 2 by replacing n with n + 1 until the criterion is satisfied.
Conclusion: Then
1. x = g(x) has a unique root r in [a, b].
2. For any choice of x0 ∈ [a, b], with xn+1 = g(xn ), n = 0, 1, · · · ,
lim xn = r.
n→∞
3. We further have
|xn − r| ≤ λ^n |x0 − r| ≤ (λ^n /(1 − λ)) |x1 − x0 | (4.20)
and
lim_{n→∞} (r − xn+1 )/(r − xn ) = g ′ (r). (4.21)
Proof.
Proof for (1) is easy.
From the mean-value theorem and (4.18), we have
|r − xn+1 | = |g(r) − g(xn )| ≤ λ |r − xn |, n = 0, 1, · · · .
By induction, we have
|r − xn+1 | ≤ λ^{n+1} |r − x0 |, n = 0, 1, · · · .
Further,
|r − x0 | = |r − x1 + x1 − x0 | ≤ |r − x1 | + |x1 − x0 | ≤ λ|r − x0 | + |x1 − x0 |.
Remark 4.3.18.
From the inequality (4.22), we see that the fixed point iteration method has linear
convergence. In other words, the order of convergence of this method is at least 1.
Example 4.3.19.
The nonlinear equation
x3 + 4x2 − 10 = 0
has a unique root in [1, 2]. Note that the solution of each of the following fixed-point
be seen easily. Thus the successive iterates using g1 have increasing moduli,
which also shows that g1 is not a self map.
2. Let us consider the iterative function g2 . It is easy to check that g2 is not a self
map of [1, 2] to itself. In our computation above, we see that the entire iterative
sequence is not defined as one of the iterates becomes negative, when the initial
guess is taken as 1.5. The exact root is approximately equal to r = 1.365. There
is no interval containing r on which |g2′ (x)| < 1. In fact, g2′ (r) ≈ 3.4 and as a
consequence |g2′ (x)| > 3 on an interval containing r. Thus we don’t expect a
convergent iterative sequence even if the sequence is well-defined!
3. Regarding the iterative function g3 , note that this iteration function is a de-
creasing function on [1, 2] as
g3′ (x) = −3x2 /(4√(10 − x3 )) < 0
on [1, 2]. Thus the maximum of g3 on [1, 2] is attained at x = 1 and equals 1.5, while the
minimum is attained at x = 2 and is approximately equal to 0.707. Since 0.707 < 1,
g3 is not a self map of [1, 2] (although it does map the smaller interval [1, 1.5] into
itself). Moreover, |g3′ (2)| ≈ 2.12. Thus the condition
Example 4.3.20.
Consider the equation
sin x + x2 − 1 = 0.
Take the initial interval as [0, 1]. There are at least three possible choices for the
iteration function, namely,
1. g1 (x) = sin−1 (1 − x2 ),
2. g2 (x) = −√(1 − sin x),
3. g3 (x) = √(1 − sin x).
Here we have
g1′ (x) = −2/√(2 − x2 ).
We can see that |g1′ (x)| > 1. Taking x0 = 0.8 and denoting the absolute error as ϵ,
we have
n g1 (x) ϵ
0 0.368268 0.268465
1 1.043914 0.407181
2 -0.089877 0.726610
3 1.443606 0.806873
The sequence of iterations is diverging as expected.
If we take g2 (x), clearly Assumption 1 is violated and therefore g2 is not suitable for
the iteration process.
Let us take g3 (x). Here, we have
g3′ (x) = − cos x/(2√(1 − sin x)).
Therefore,
|g3′ (x)| = √(1 − sin2 x)/(2√(1 − sin x)) = √(1 + sin x)/2 ≤ 1/√2 < 1.
Taking x0 = 0.8 and denoting the absolute error as ϵ, we have
n g3 (x) ϵ
0 0.531643 0.105090
1 0.702175 0.065442
2 0.595080 0.041653
3 0.662891 0.026158
The sequence is converging.
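The converging iteration with g3 can be reproduced on a computer with a short Python sketch (our own illustration; the function name and the number of iterations are ours).

import math

def fixed_point(g, x0, n_steps):
    # x_{n+1} = g(x_n)
    x = x0
    for _ in range(n_steps):
        x = g(x)
    return x

g3 = lambda x: math.sqrt(1.0 - math.sin(x))
print(fixed_point(g3, 0.8, 30))   # approaches the root r ≈ 0.636733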
4.4 Comparison and Pitfalls of Iterative Methods
Closed domain methods: Bisection and Regula-falsi methods
1. In both these methods, where we are trying to find a root of the nonlinear equa-
tion f (x) = 0, we are required to find an interval [a, b] such that f (a) and f (b)
have opposite signs. This calls for a complete study of the function f . In case
the function has no roots on the real line, this search for an interval will be fu-
tile. There is no way to realize this immediately, thus necessitating a full-fledged
understanding of the function f .
2. Once it is known that we can start these methods, then surely the iterative se-
quences converge to a root of the nonlinear equation.
3. In bisection method, we can keep track of the error by means of an upper bound.
But such a bound is not available for the regula-falsi method. In general, convergence
of the bisection method iterates is slower compared to that of the regula-falsi method
iterates.
4. If the initial interval [a, b] is such that the equation f (x) = 0 has a unique root in
it, then both the methods converge to that root. If there is more than one root
in [a, b], then the two methods may find different roots. The only way of finding
the desired root is to find an interval in which there is exactly one root to the
nonlinear equation.
Open domain methods: Secant, Newton-Raphson, and Fixed point methods
1. The main advantage of the open domain methods when compared to closed do-
main methods is that we don’t need to locate a root in an interval. Rather, we
can start the iteration with an arbitrarily chosen initial guess(es).
2. The disadvantage of the open domain methods is that the iterative sequence may
not be well-defined for all initial guesses. Even if the sequence is well-defined, it
may not converge. Even if it converges, it may not converge to a specific root of
interest.
3. In situations where both open and closed domain methods converge, open domain
methods are generally faster than closed domain methods. In particular, the
Newton-Raphson method is faster than the other methods discussed here, as its
order of convergence is 2; it is the fastest among the methods presented in this chapter.
4. In these methods, it may happen that we are trying to find a particular root of the
nonlinear equation, but the iterative sequence may converge to a different root.
Thus we have to be careful in choosing the initial guess. If the initial guess is far
away from the expected root, then there is a danger that the iteration converges
to another root of the equation.
In the case of Newton-Raphson’s method, this usually happens when the slope
f ′ (x0 ) is small and the tangent line to the curve y = f (x) is nearly parallel to
the x-axis. Similarly, in the case of the secant method, this usually happens when
the secant line joining (x0 , f (x0 )) and (x1 , f (x1 )) is nearly parallel to the
x-axis.
For example, if
f (x) = cos x
and we seek the root x∗ = π/2 and start with x0 = 3, calculation reveals that
x1 = −4.01525, x2 = −4.85266, · · · ,
and the iteration converges to x = −4.71238898 ≈ −3π/2. The iterative sequence
for n = 1, 2 is depicted in Figure 4.4.
and the sequence diverges to +∞. This particular function has another surprising
problem: the value of f (x) goes to zero rapidly as x gets large, for example
f (x15 ) = 0.0000000536, and it is possible that x15 could be mistaken for a root as
per the residual error. Thus, using the residual error as a stopping criterion for
iterative methods for nonlinear equations is often not preferred.
6. The method can get stuck in a cycle. For instance, let us compute the iterative
sequence generated by the Newton-Raphson’s method for the function f (x) =
x3 − x − 3 with the initial guess x0 = 0. The iterative sequence is
7. If f (x) has no real root, then these methods give no indication of this, and the
iterative sequence may simply oscillate. For example, compute the Newton-Raphson
iteration for
f (x) = x2 − 4x + 5.
4.5 Exercises
Bisection Method and Regula-falsi Method
In the following problems on the bisection method, the notation xn is used to denote the
mid-point of the interval [an−1 , bn−1 ], and it is termed the bisection method’s nth iterate
(or simply, the nth iterate, as the context of the bisection method is clear).
1. Let bisection method be used to solve the nonlinear equation
2x6 − 5x4 + 2 = 0
starting with the initial interval [0, 1]. In order to approximate a solution of the
nonlinear equation with an absolute error less than or equal to 10−3 , what is the
number of iterations required as per the error estimate of the bisection method?
Also find the corresponding approximate solution.
2. Let bisection method be used to solve the nonlinear equation (x is in radians)
x sin x − 1 = 0
starting with the initial interval [0, 2]. In order to approximate a solution of the
nonlinear equation with an absolute error less than or equal to 10−3 , what is the
number of iterations required as per the error estimate of the bisection method?
Also find the corresponding approximate solution.
3. Let bisection method be used to solve a nonlinear equation f (x) = 0 starting with
the initial interval [a0 , b0 ] where a0 > 0. Let xn be as in the bisection method, and
r be the solution of the nonlinear equation f (x) = 0 to which bisection method
converges. Let ϵ > 0. Show that the absolute value of the relative error of xn
w.r.t. r is at most ϵ whenever n satisfies
n ≥ (log(b0 − a0 ) − log ϵ − log a0 )/log 2.
What happens if a0 < 0 < b0 ?
4. Consider the nonlinear equation
10^x + x − 4 = 0
i) Find an interval [a0 , b0 ] such that the function f (x) = 10^x + x − 4 satisfies
the hypothesis of bisection method.
ii) Let r be the solution of the nonlinear equation to which bisection method
iterative sequence converges. Find an n such that xn (notation as in the
bisection method) approximates r to two significant digits. Also find xn .
5. If bisection method is used with the initial interval [a0 , b0 ] = [2, 3], how many
iterations are required to assure that an approximate solution of the nonlinear
equation f (x) = 0 is obtained with an absolute error that is at most 10^{−5} ?
6. Assume that in solving a nonlinear equation f (x) = 0 with the initial interval
[a0 , b0 ], the iterative sequence {xn } given by bisection method is never an exact
solution of f (x) = 0. Let us define a sequence of numbers {dn } by
dn = 0, if [an , bn ] is the left half of the interval [an−1 , bn−1 ],
dn = 1, if [an , bn ] is the right half of the interval [an−1 , bn−1 ].
Using the sequence {dn } defined above, express the solution of f (x) = 0 to which
the bisection method converges. (Hint: Try the case [a0 , b0 ] = [0, 1] first and
think of binary representation of a number. Then try for the case [a0 , b0 ] = [0, 2],
then for the case [a0 , b0 ] = [1, 3], and then the general case!)
7. In the notation of bisection method, determine (with justification) if the following
are possible.
i) a0 < a1 < a2 < · · · < an < · · ·
ii) b0 > b1 > b2 > · · · > bn > · · ·
iii) a0 = a1 < a2 = a3 < · · · < a2m = a2m+1 < · · · (Hint: First guess what
should be the solution found by bisection method in such a case, and then
find the simplest function having it as a root! Do not forget the connection
between bisection method and binary representation of a number, described
in the last problem)
8. Draw the graph of a function that satisfies the hypothesis of bisection method
on the interval [0, 1] and having exactly one root in [0, 1] such that the errors
e1 , e2 , e3 satisfy |e1 | > |e2 |, and |e2 | < |e3 |. Give formula for one such function.
9. Draw the graph of a function for which bisection method iterates satisfy x1 = 2,
x2 = 0, and x3 = 1 (in the usual notation of bisection method). Indicate in the
graph why x1 = 2, x2 = 0, and x3 = 1 hold. Also mention precisely the corre-
sponding intervals [a0 , b0 ],[a1 , b1 ], [a2 , b2 ].
10. Draw the graph of a function (there is no need to give a formula for the func-
tion) for which a0 , a1 , a2 , a3 (in the usual notation of bisection method) satisfy
a0 < a1 = a2 < a3 . (Mark these points clearly on the graph.)
method results from one of the following two situations: (i) the iterative sequence
is not well-defined, and (ii) the iterative sequence does not converge at all.
12. Let α be a positive real number. Find formulas for iterative sequences based on
the Newton-Raphson method for finding √α and α^{1/3} . Apply the methods to α = 18
to obtain results which are correct to two significant digits when compared
to their exact values.
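One possible answer applies Newton-Raphson to f (x) = x^2 − α, giving xn+1 = (xn + α/xn )/2, and to f (x) = x^3 − α, giving xn+1 = (2xn + α/xn^2 )/3. A short Python sketch (our own, with our own function names) applied to α = 18:

def newton_sqrt(alpha, x0, n_steps):
    # Newton-Raphson for f(x) = x^2 - alpha:  x_{n+1} = (x_n + alpha / x_n) / 2
    x = x0
    for _ in range(n_steps):
        x = 0.5 * (x + alpha / x)
    return x

def newton_cbrt(alpha, x0, n_steps):
    # Newton-Raphson for f(x) = x^3 - alpha:  x_{n+1} = (2 x_n + alpha / x_n^2) / 3
    x = x0
    for _ in range(n_steps):
        x = (2.0 * x + alpha / (x * x)) / 3.0
    return x

print(newton_sqrt(18.0, 4.0, 4))   # about 4.2426
print(newton_cbrt(18.0, 3.0, 4))   # about 2.6207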
13. Consider the nonlinear equation
(1/3) x3 − x2 + x + 1 = 0.
Show that there exists an initial guess x0 ∈ (0, 4) for which x2 of the Newton-
Raphson method iterative sequence is not defined.
14. Let a be a real number such that 0 < a ≤ 1. Let {xn }_{n=1}^{∞} be the iterative sequence
of the Newton-Raphson method to solve the nonlinear equation e^{−ax} = x. If x∗
of the Newton-Raphson method to solve the nonlinear equation e−ax = x. If x∗
denotes the exact root of this equation and x0 > 0, then show that
|x∗ − xn+1 | ≤ (1/2) (x∗ − xn )^2 .
15. Newton-Raphson method is to be applied for approximating a root of the nonlin-
ear equation x4 − x − 10 = 0.
i) How many solutions of the nonlinear equation are there in [1, ∞)? Are they
simple?
ii) Find an interval [1, b] that contains the smallest positive solution of the
nonlinear equation.
iii) Compute five iterates of Newton-Raphson method, for each of the initial
guesses x0 = 1, x0 = 2, x0 = 100. What are your observations?
iv) A solution of the nonlinear equation is approximately equal to 1.85558. Find
a δ as in the proof of theorem on Newton-Raphson method, so that iterative
sequence of Newton-Raphson method always converges for every initial guess
x0 ∈ [1.85558 − δ, 1.85558 + δ].
v) Can we appeal to the theorem for convex functions in the context of Newton-
Raphson method? Justify.
16. Newton-Raphson method is to be applied for approximating a root of the equation
sin x = 0.
i) Find formula for the Newton-Raphson iterative sequence.
ii) Let α ∈ (−π/2, π/2) and α ̸= 0 be such that if x0 = α, then the iteration
becomes a cycle i.e.,
α = x0 = x2 = · · · = x2k = x2k+2 = · · · , x1 = x3 = · · · = x2k+1 = x2k+3 = · · ·
Find a non-linear equation g(y) = 0 whose solution is α.
Figure 4.5: Graph of y = x sin x − 1.
19. Draw the graph of a function for which secant method iterates satisfy x0 = 0,
x1 = 3, and x2 = 1, x3 = 2 (in the usual notation of secant method). Indicate in
the graph why x2 = 1, x3 = 2 hold.
22. To solve the nonlinear equation e−x − cos x = 0 by fixed-point iteration method,
the following fixed-point formulations may be considered.
i) x = − ln(cos x)
ii) x = cos^{−1} (e^{−x} )
Discuss the convergence of the fixed-point iterative sequences generated by the
two formulations.
23. Show that g(x) = π + (1/2) sin(x/2) has a unique fixed point in [0, 2π]. Use the fixed-
point iteration method with g as the iteration function and x0 = 0 to find an
approximate solution for the equation (1/2) sin(x/2) − x + π = 0 with the stopping
criterion that the residual error is less than 10^{−4} .
24. Let α ∈ R and β ∈ R be the roots of x2 + ax + b = 0, and such that |α| > |β|.
Let g and h be two iterating functions satisfying the hypothesis of the theorem
on fixed-point method on some intervals. Consider the iterative sequences {xn }
and {yn } corresponding to the two iterating functions g and h, given by
xn+1 = −(a xn + b)/xn , and yn+1 = −b/(yn + a),
respectively. Show that the iterative sequences {xn } and {yn } converge to α and
β respectively.
25. Let {xn } ⊂ [a, b] be a sequence generated by a fixed point iteration method with
a continuously differentiable iteration function g(x). If this sequence converges
to x∗ , then show that
|xn+1 − x∗ | ≤ (λ/(1 − λ)) |xn+1 − xn |,
where λ := max_{x∈[a,b]} |g ′ (x)|. (This estimate helps us to decide when to stop iterating
if we are using a stopping criterion based on the distance between successive it-
erates.)
26. Explain why the sequence of iterates xn+1 = 1 − 0.9 xn^2 , with initial guess x0 = 0,
does not converge to any solution of the quadratic equation 0.9x2 + x − 1 = 0.
[Hint: Observe what happens after 25 iterations, perhaps using a computer.]
27. Let x∗ be the smallest positive root of the equation 20x3 − 20x2 − 25x + 4 = 0.
The following question is concerning the fixed-point formulation of the nonlinear
equation given by x = g(x), where g(x) = x3 − x2 − x/4 + 1/5.
i) Show that x∗ ∈ [0, 1].
ii) Does the function g satisfy the hypothesis of theorem on fixed-point method?
If yes, we know that x∗ is the only fixed point of g lying in the interval
[0, 1]. In the notation of fixed-point iteration method, find an n such that
|x∗ − xn | < 10−3 , when the initial guess x0 is equal to 0.
28. Let f : [a, b] → R be a function such that f ′ is continuous, f (a)f (b) < 0, and
there exists an α > 0 such that f ′ (x) ≥ α > 0.
i) Show that f (x) = 0 has exactly one solution in the interval [a, b].
ii) Show that with a suitable choice of the parameter λ, the solution of the
nonlinear equation f (x) = 0 can be obtained by applying the fixed-point
iteration method applied to the function F (x) = x + λf (x).
29. Let p > 1 be a real number. Show that the following expression has a meaning
and find its value:
x = √(p + √(p + √(p + · · · ))).
Note that the last equation is interpreted as x = lim_{n→∞} xn , where x1 = √p,
x2 = √(p + √p), · · · . (Hint: Note that xn+1 = √(p + xn ), and show that the se-
quence {xn } converges using the theorem on fixed-point method.)
30. Let p be a positive real number. Show that the following expression has a meaning
and find its value.
x = 1/(p + 1/(p + 1/(p + · · · ))).
Note that the last equation is interpreted as x = lim_{n→∞} xn , where x1 = 1/p,
x2 = 1/(p + 1/p), · · · . (Hint: You may have to consider the three cases 0 < p < 1/4,
p = 1/4, and p > 1/4 separately.)
31. Draw the graph of a function having the following properties: (i) The function
has exactly TWO fixed points. (ii) Give two choices of the initial guess x0 and
y0 such that the corresponding sequences {xn } and {yn } have the properties that
{xn } converges to one of the fixed points and the sequence {yn } moves away from the
fixed points and diverges. Point out the first three terms of both the sequences on the graph.
CHAPTER 5
Interpolation
Let a physical experiment be conducted and the outcome is recorded only at some finite
number of times. If we want to know the outcome at some intermediate time where
the data is not available, then we may have to repeat the whole experiment once again
to get this data. In the mathematical language, suppose that the finite set of values
{f (xi ) : i = 0, 1, · · · , n}
of a function f at a given finite set of points
{xi : i = 0, 1, · · · , n}
is known and we want to find the value of f (x), where x ∈ (xj , xk ), for some j =
1, 2, · · · , n and k = 1, 2, · · · , n. One way of obtaining the value of f (x) is to compute
this value directly from the expression of the function f . Often, we may not know the
expression of the function explicitly and only the data
{(xi , yi ) : i = 0, 1, · · · , n}
In certain circumstances, the function f may be known explicitly, but still too dif-
ficult to perform certain operations like differentiation and integration. Thus, it is
often preferred to restrict the class of interpolating functions to polynomials, where the
differentiation and the integration can be done more easily.
In Section 5.1, we introduce the basic problem of polynomial interpolation and prove
the existence and uniqueness of polynomial interpolating the given data. There are at
least two ways to obtain the unique polynomial interpolating a given data set: one is
due to Lagrange and the other is due to Newton. In Section 5.1.2, we introduce the Lagrange form
of interpolating polynomial, whereas Section 5.1.3 introduces the notion of divided
differences and Newton form of interpolating polynomial. The error analysis of the
polynomial interpolation is studied in Section 5.3. In certain cases, the interpolating
polynomial can differ significantly from the exact function. This is illustrated by Carl
Runge and is called the Runge Phenomenon. In Section 5.3.4, we present the example
due to Runge and state a few results on convergence of the interpolating polynomials.
The concepts of piecewise polynomial interpolation and spline interpolation are discussed
in Section 5.5.
5.1 Polynomial Interpolation
Definition 5.1.1.
Any collection of distinct real numbers x0 , x1 , · · · , xn (not necessarily in increasing
order) is called a set of nodes.
pn (xi ) = yi , i = 0, 1, · · · n. (5.1)
Remark 5.1.3.
Let x0 , x1 , · · · , xn be given nodes, and y0 , y1 , · · · , yn be real numbers. Let pn (x) be
a polynomial interpolating the given data. Then the graph of pn (x) passes through
the set of (n + 1) distinct points in the xy-plane given by the table
x: x0 x1 x2 x3 · · · xn
y: y0 y1 y2 y3 · · · yn
We call the set {(xi , yi ), i = 0, 1, · · · , n} as data and quite often we represent this
set in the above form of a table.
The following result asserts that an interpolating polynomial exists and is unique.
Proof.
Proof of uniqueness: Assume that pn (x) and qn (x) are interpolating polynomials
of degree less than or equal to n that satisfy the interpolation condition (5.1). Let
rn (x) = pn (x) − qn (x).
Then, rn (x) is also a polynomial of degree less than or equal to n, and by the inter-
polation condition, we have
rn (xi ) = 0,
for every i = 0, 1, · · · , n. Thus, rn (x) is a polynomial of degree less than or equal to n
with n + 1 distinct roots. By the fundamental theorem of algebra, we conclude that
rn (x) is the zero polynomial. That is, the interpolating polynomial is unique.
Proof of existence (by induction on n): For n = 0, the constant polynomial
p0 (x) = y0
is the required polynomial and its degree is less than or equal to 0. Assume that the
result is true for n = k. We will now prove that the result is true for n = k + 1.
Let the data be given by
x x0 x1 x2 x3 ··· xk xk+1
y y0 y1 y2 y3 ··· yk yk+1
By the assumption, there exists a polynomial pk (x) of degree less than or equal to k
such that the first k interpolating conditions
pk (xi ) = yi , i = 0, 1, · · · , k,
hold. Define the polynomial
pk+1 (x) = pk (x) + c (x − x0 )(x − x1 ) · · · (x − xk ), (5.2)
where the constant c is such that the (k + 1)th interpolation condition pk+1 (xk+1 ) =
yk+1 holds. This is achieved by choosing
c = (yk+1 − pk (xk+1 ))/((xk+1 − x0 )(xk+1 − x1 ) · · · (xk+1 − xk )).
Note that pk+1 (xi ) = yi for i = 0, 1, · · · , k and therefore pk+1 (x) is an interpolating
polynomial for the given data. This proves the result for n = k + 1. By the principle
of mathematical induction, the result is true for any natural number n.
Remark 5.1.5.
A special case is when the data values yi , i = 0, 1, · · · , n, are the values of a function
f at given nodes xi , i = 0, 1, · · · , n. In such a case, a polynomial interpolating the
given data
x x0 x1 x2 x3 ··· xn
y f (x0 ) f (x1 ) f (x2 ) f (x3 ) ··· f (xn )
is said to be the polynomial interpolating the given function or the interpolating
polynomial for the given function and has a special significance in applications of
Numerical Analysis for computing approximate solutions of differential equations
and numerically computing complicated integrals.
Example 5.1.6.
Let the following data represent the values of f :
x 0 0.5 1
f (x) 1.0000 0.5242 −0.9037
The questions are the following:
1. What is the exact expression for the function f ?
2. What is the value of f (0.75)?
We cannot get the exact expression for the function f just from the given data,
because there are infinitely many functions having same value at the given set of
points. Due to this, we cannot expect an exact value for f (0.75), in fact, it can be
any real number. On the other hand, if we look for f in the class of polynomials of
degree less than or equal to 2, then Theorem 5.1.4 tells us that there is exactly one
such polynomial and hence we can obtain a unique value for f (0.75).
The interpolating polynomial happens to be
p2 (x) = −1.9042x2 + 0.0005x + 1,
and we have
p2 (0.75) = −0.0707380.
The function used to generate the above table of data is
f (x) = sin((π/2) e^x ).
With this expression of f , we have (using 7-digit rounding)
f (0.75) ≈ −0.1827495.
That is, at the point x = 0.75 the polynomial approximation to the given function f
has more than 61% error. The graph of the function f (blue solid line) and p2 (green
dash line) are depicted in Figure 5.1. The blue dots denote the given data, magenta
‘+’ symbol indicates the value of p2 (0.75) and the red ‘O’ symbol represents the value
of f (0.75). It is also observed from the graph that if we approximate the function f
for x ∈ [0, 0.5], then we obtain a better accuracy than approximating f in the interval
(0.5, 1).
Figure 5.1: The function f (x) = sin((π/2) e^x ) (blue solid line) and p2 (x) (green dashed line). Blue dots represent the given data, the magenta ‘+’ symbol indicates the value of p2 (0.75), and the red ‘O’ symbol represents the value of f (0.75).
Remark 5.1.8.
Note that the k th Lagrange polynomial depends on all the n+1 nodes x0 , x1 , · · · , xn .
pn (x) = ∑_{i=0}^{n} f (xi ) li (x). (5.4)
This form of the interpolating polynomial is called the Lagrange’s form of In-
terpolating Polynomial.
Proof.
Firstly, we will prove that q(x) := ∑_{i=0}^{n} f (xi ) li (x) is an interpolating polynomial for
the function f at the nodes x0 , x1 , · · · , xn . Since
li (xj ) = 1 if i = j, and li (xj ) = 0 if i ̸= j,
we have q(xj ) = f (xj ) for every j = 0, 1, · · · , n. Hence q is an interpolating polynomial
for the given data, and by the uniqueness of the interpolating polynomial, pn (x) = q(x).
Example 5.1.10.
Consider the case n = 1 in which we have two distinct points x0 and x1 . Thus, we
have
l0 (x) = (x − x1 )/(x0 − x1 ), l1 (x) = (x − x0 )/(x1 − x0 ),
and therefore,
p1 (x) = f (x0 ) + ((f (x1 ) − f (x0 ))/(x1 − x0 )) (x − x0 ). (5.5)
This is the linear interpolating polynomial of the function f . Similarly, if we are
given three nodes with corresponding values, then we can generate the quadratic
interpolating polynomial, and so on.
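For completeness, the Lagrange form (5.4) can be evaluated with a short Python sketch (the function name is ours); applied to the data of Example 5.1.6 it reproduces p2 (0.75) up to rounding.

def lagrange_eval(nodes, values, x):
    # evaluates the Lagrange form (5.4): sum_i f(x_i) * l_i(x)
    total = 0.0
    for i, xi in enumerate(nodes):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += values[i] * li
    return total

# the data of Example 5.1.6
print(lagrange_eval([0.0, 0.5, 1.0], [1.0, 0.5242, -0.9037], 0.75))   # about -0.0707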
Example 5.1.11.
Let the values of the function f (x) = e^x at x0 = 0.82 and x1 = 0.83 be given (using
7-digit rounding) by f (x0 ) ≈ 2.270500 and f (x1 ) ≈ 2.293319.
In this example, we would like to obtain an approximate value of e^{0.826} using the
polynomial p1 (x) that interpolates f at the nodes x0 , x1 . The polynomial p1 (x) is
given by
p1 (x) ≈ 2.270500 + ((2.293319 − 2.270500)/(0.83 − 0.82)) (x − 0.82) = 2.2819x + 0.399342.
The approximate value of e0.826 is taken to be p1 (0.826), which is given by
p1 (0.826) ≈ 2.2841914.
p2 (0.826) ≈ 2.2841639.
Note that the approximation to e0.826 obtained using the interpolating polynomial
p2 (x), namely 2.2841639, approximates the exact value to at least eight significant
digits.
Remark 5.1.12.
The above example gives us a feeling that if we increase the number of nodes,
and thereby increasing the degree of the interpolating polynomial, the polynomial
approximates the original function more accurately. But this is not true in general,
and we will discuss this further in Section 5.3.4.
Remark 5.1.13.
Let x0 , x1 , · · · , xn be nodes, and f be a function. Recall that computing an in-
terpolating polynomial in Lagrange’s form requires us to compute for each k =
0, 1, · · · , n, the k th Lagrange’s polynomial lk (x) which depends on the given nodes
x0 , x1 , · · · , xn . Suppose that we have found the corresponding interpolating poly-
nomial pn (x) of f in the Lagrange’s form for the given data. Now if we add one
more node xn+1 , the computation of the interpolating polynomial pn+1 (x) in the
Lagrange’s form requires us to compute a new set of Lagrange’s polynomials cor-
responding to the set of (n + 1) nodes, and no advantage can be taken of the fact
that pn is already available.
We saw in the last section that it is easy to write the Lagrange form of the interpolating
polynomial once the Lagrange polynomials associated to a given set of nodes have been
written. However we observed in Remark 5.1.13 that the knowledge of pn (in Lagrange
form) cannot be utilized to construct pn+1 in the Lagrange form. In this section we
describe Newton’s form of interpolating polynomial, which uses the knowledge of pn in
constructing pn+1 .
pn (x) = A0 + A1 (x − x0 ) + A2 (x − x0 )(x − x1 ) + A3 ∏_{i=0}^{2} (x − xi ) + · · · + An ∏_{i=0}^{n−1} (x − xi ). (5.6)
Proof.
Recall that in the proof of Theorem 5.1.4, we proved the existence of an interpolating
polynomial using mathematical induction. In fact, we have given an algorithm for
constructing interpolating polynomial. The interpolating polynomial given by (5.2)
was precisely the Newton’s form of interpolating polynomial.
Remark 5.1.15.
Let us recall the equation (5.2) from the proof of Theorem 5.1.4 now.
1. It says that for each n ∈ N, we have
pn (x) = pn−1 (x) + An ∏_{i=0}^{n−1} (x − xi ) (5.7)
for some constant An . This shows the recursive nature of computing New-
ton’s form of interpolating polynomial.
2. Indeed, evaluating (5.7) at x = xn shows that, for n ≥ 1, An is given by
An = (f (xn ) − pn−1 (xn )) / ∏_{i=0}^{n−1} (xn − xi ), (5.8)
while A0 = f (x0 ).
From the last equality, note that A0 depends only on f (x0 ). A1 depends on
the values of f at x0 and x1 only. In general, An depends on the values of f
at x0 , x1 , x2 , · · · , xn only.
3. To compute Newton’s form of interpolating polynomial pn (x), it is enough
to compute Ak for k = 0, 1, · · · , n. However note that the formula (5.8) is
not well-suited to compute Ak because we need to evaluate all the successive
interpolating polynomials pk (x) for k = 0, 1, · · · , n−1 and then evaluate them
at the node xk which is computationally costly. It then appears that we are
in a similar situation to that of Lagrange’s form of interpolating polynomial
as far as computational costs are concerned. But this is not the case, as we
shall see shortly that we can compute An directly using the given data (that
is, the given values of the function at the nodes), and this will be done in
Section 5.2.
5.2 Newton’s Divided Differences
Remark 5.2.2.
pn (x) = f [x0 ] + ∑_{k=1}^{n} f [x0 , x1 , · · · , xk ] ∏_{i=0}^{k−1} (x − xi ). (5.9)
Example 5.2.3.
As a continuation of Example 5.1.10, let us construct the linear interpolating polyno-
mial of a function f in the Newton’s form. In this case, the interpolating polynomial
is given by
p1 (x) = f [x0 ] + f [x0 , x1 ](x − x0 ),
where
f[x_0] = f(x_0), \qquad f[x_0, x_1] = \frac{f(x_0) - f(x_1)}{x_0 - x_1}    (5.10)
are zeroth and first order divided differences, respectively. Observe that this polyno-
mial is exactly the same as the interpolating polynomial obtained using Lagrange’s
form in Example 5.1.10.
Proof.
Since z_0, z_1, \cdots, z_n is a permutation of x_0, x_1, \cdots, x_n, the nodes x_0, x_1, \cdots, x_n have only been re-labelled as z_0, z_1, \cdots, z_n, and hence the polynomial interpolating the function f at both these sets of nodes is the same. By definition,
f [x0 , x1 , · · · , xn ] is the coefficient of xn in the polynomial interpolating the func-
tion f at the nodes x0 , x1 , · · · , xn , and f [z0 , z1 , · · · , zn ] is the coefficient of xn in the
polynomial interpolating the function f at the nodes z0 , z1 , · · · , zn . Since both the
interpolating polynomials are equal, so are the coefficients of xn in them. Thus, we
get
f [x0 , x1 , · · · , xn ] = f [z0 , z1 , · · · , zn ].
This completes the proof.
The following result (Theorem 5.2.5) helps us in computing the divided differences of higher order recursively: for distinct nodes x_0, x_1, \cdots, x_n,
f[x_0, x_1, \cdots, x_n] = \frac{f[x_1, x_2, \cdots, x_n] - f[x_0, x_1, \cdots, x_{n-1}]}{x_n - x_0}.    (5.12)
Proof.
Let us start the proof by setting up the following notations.
• Let pn (x) be the polynomial interpolating f at the nodes x0 , x1 , · · · , xn .
• Let pn−1 (x) be the polynomial interpolating f at the nodes x0 , x1 , · · · , xn−1 .
• Let q(x) be the polynomial interpolating f at the nodes x1 , x2 , · · · , xn .
Claim: We will prove the following relation between pn−1 , pn , and q:
p_n(x) = p_{n-1}(x) + \frac{x - x_0}{x_n - x_0}\big( q(x) - p_{n-1}(x) \big)    (5.13)
Since both sides of the equality in (5.13) are polynomials of degree less than or equal
to n, and pn (x) is the polynomial interpolating f at the nodes x0 , x1 , · · · , xn , the
equality in (5.13) holds for all x if and only if it holds for x ∈ { x0 , x1 , · · · , xn } and
both sides of the equality reduce to f (x) for x ∈ { x0 , x1 , · · · , xn }. Let us now verify
the equation (5.13) for x ∈ { x0 , x1 , · · · , xn }.
1. When x = x_0,
p_{n-1}(x_0) + \frac{x_0 - x_0}{x_n - x_0}\big( q(x_0) - p_{n-1}(x_0) \big) = p_{n-1}(x_0) = f(x_0) = p_n(x_0).
2. When x = x_k for 1 ≤ k ≤ n − 1, q(x_k) = p_{n-1}(x_k) and thus we have
p_{n-1}(x_k) + \frac{x_k - x_0}{x_n - x_0}\big( q(x_k) - p_{n-1}(x_k) \big) = p_{n-1}(x_k) = f(x_k) = p_n(x_k).
3. When x = x_n, we have
p_{n-1}(x_n) + \frac{x_n - x_0}{x_n - x_0}\big( q(x_n) - p_{n-1}(x_n) \big) = p_{n-1}(x_n) + \big( f(x_n) - p_{n-1}(x_n) \big) = f(x_n) = p_n(x_n).
This finishes the proof of the Claim.
The coefficient of xn in the polynomial pn (x) is f [x0 , x1 , · · · , xn ]. The coefficient of
xn using the right hand side of the equation (5.13) is given by
\big(\text{coefficient of } x^n \text{ in } p_{n-1}(x)\big) + \frac{1}{x_n - x_0}\big(\text{coefficient of } x^n \text{ in } (x - x_0)\big(q(x) - p_{n-1}(x)\big)\big).
On noting that the coefficient of x^{n-1} in the polynomial p_{n-1} is f[x_0, x_1, \cdots, x_{n-1}], the coefficient of x^{n-1} in the polynomial q is f[x_1, x_2, \cdots, x_n], and the coefficient of x^n in the polynomial p_{n-1} is zero, we get that the coefficient of x^n on the right hand side of (5.13) is
\frac{f[x_1, x_2, \cdots, x_n] - f[x_0, x_1, \cdots, x_{n-1}]}{x_n - x_0}.
Equating this with the coefficient of x^n in p_n(x) gives the recursion formula (5.12).
Remark 5.2.6.
Let i, j ∈ N. Applying Theorem 5.2.5 to a set of nodes xi , xi+1 , · · · , xi+j , we
conclude
f[x_i, x_{i+1}, \cdots, x_{i+j}] = \frac{f[x_{i+1}, x_{i+2}, \cdots, x_{i+j}] - f[x_i, x_{i+1}, \cdots, x_{i+j-1}]}{x_{i+j} - x_i}.    (5.14)
Note that the divided differences f [x0 , x1 , · · · , xn ] are defined only for distinct nodes
x0 , x1 , · · · , xn .
Recall from Remark 5.2.2 that Newton's form of the interpolating polynomial is
p_n(x) = f[x_0] + \sum_{k=1}^{n} f[x_0, x_1, \cdots, x_k] \prod_{i=0}^{k-1}(x - x_i).    (5.15)
One can explicitly write the formula (5.15) for n = 1, 2, 3, 4, 5, · · · . For instance, when
n = 5, the formula (5.15) reads
p_5(x) = f[x_0]
       + f[x_0, x_1](x - x_0)
       + f[x_0, x_1, x_2](x - x_0)(x - x_1)
       + f[x_0, x_1, x_2, x_3](x - x_0)(x - x_1)(x - x_2)
       + f[x_0, x_1, x_2, x_3, x_4](x - x_0)(x - x_1)(x - x_2)(x - x_3)
       + f[x_0, x_1, x_2, x_3, x_4, x_5](x - x_0)(x - x_1)(x - x_2)(x - x_3)(x - x_4)    (5.16)
For easy computation of the divided differences in view of the formula (5.12), it is
convenient to write the divided differences in a table form. For n = 5, the divided
difference table is given by
x_0   f(x_0)
                 f[x_0,x_1]
x_1   f(x_1)                  f[x_0,x_1,x_2]
                 f[x_1,x_2]                      f[x_0,x_1,x_2,x_3]
x_2   f(x_2)                  f[x_1,x_2,x_3]                         f[x_0,x_1,x_2,x_3,x_4]
                 f[x_2,x_3]                      f[x_1,x_2,x_3,x_4]                          f[x_0,x_1,x_2,x_3,x_4,x_5]
x_3   f(x_3)                  f[x_2,x_3,x_4]                         f[x_1,x_2,x_3,x_4,x_5]
                 f[x_3,x_4]                      f[x_2,x_3,x_4,x_5]
x_4   f(x_4)                  f[x_3,x_4,x_5]
                 f[x_4,x_5]
x_5   f(x_5)
Comparing the above divided difference table and the interpolating polynomial p_5 given by (5.16), we see that the leading member of each column (the topmost entry) is a divided difference used in p_5(x).
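As a quick illustration, the following Python sketch (not part of the notes; the names are illustrative) builds the divided-difference coefficients column by column via the recursion (5.12)/(5.14) and evaluates Newton's form (5.15) by nested multiplication.

```python
import numpy as np

def divided_differences(x, y):
    """Return f[x0], f[x0,x1], ..., f[x0,...,xn] using the recursion (5.12)."""
    x = np.asarray(x, dtype=float)
    coef = np.array(y, dtype=float)            # zeroth column: f(x_i)
    n = len(x)
    for j in range(1, n):
        # after step j, coef[i] holds f[x_{i-j}, ..., x_i]
        coef[j:] = (coef[j:] - coef[j-1:-1]) / (x[j:] - x[:-j])
    return coef                                 # coef[k] = f[x_0, ..., x_k]

def newton_eval(x_nodes, coef, t):
    """Evaluate p_n(t) in Newton's form by a Horner-like nested scheme."""
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = coef[k] + (t - x_nodes[k]) * p
    return p

# sample data taken from f(x) = 1/x at the nodes 1, 2, 3, 4
x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 1/2, 1/3, 1/4]
c = divided_differences(x, y)
print(c)                       # the leading entries of the divided-difference table
print(newton_eval(x, c, 2.5))  # value of the interpolating polynomial at 2.5
```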
Theorem 5.2.5, in view of Theorem 5.2.4, suggests that extending the notion of the nth order divided difference f[x_0, x_1, \cdots, x_n] to repeated arguments is not immediate. However, such an extension is useful in numerical analysis, for instance in the error analysis of certain quadrature formulas.
The following result gives an expression for f [x0 , x1 , · · · , xn ] in the case of distinct
arguments and this expression helps in extending the notion of divided differences to
the case of repeated arguments.
(Hermite–Genocchi formula) If f is n-times continuously differentiable and the nodes x_0, x_1, \cdots, x_n are distinct, then
f[x_0, x_1, \cdots, x_n] = \int \cdots \int_{\tau_n} f^{(n)}(t_0 x_0 + t_1 x_1 + \cdots + t_n x_n)\, dt_1 \cdots dt_n,    (5.17)
where
\tau_n = \Big\{ (t_1, t_2, \cdots, t_n) : t_i \ge 0,\ i = 1, 2, \cdots, n;\ \sum_{i=1}^{n} t_i \le 1 \Big\}, \qquad t_0 = 1 - \sum_{i=1}^{n} t_i.    (5.18)
Proof.
We prove the theorem by induction.
Step 1 : First, we prove (5.17) for n = 1 . From (5.18), we see that τ1 = [0, 1]. Using
the expression for t0 in (5.18), we have
\int_0^1 f'(t_0 x_0 + t_1 x_1)\, dt_1 = \int_0^1 f'\big(x_0 + t_1(x_1 - x_0)\big)\, dt_1 = \frac{1}{x_1 - x_0}\Big[ f\big(x_0 + t_1(x_1 - x_0)\big) \Big]_{t_1=0}^{t_1=1} = \frac{f(x_1) - f(x_0)}{x_1 - x_0} = f[x_0, x_1].
Thus, we have proved the result (5.17) for n = 1.
Step 2 : Now assume that the formula (5.17) holds for n = k ≥ 1 and prove the
formula for n = k + 1. We have
\int \cdots \int_{\tau_{k+1}} f^{(k+1)}(t_0 x_0 + t_1 x_1 + \cdots + t_{k+1} x_{k+1})\, dt_1 \cdots dt_{k+1}
= \int \cdots \int_{\tau_k} \Big[ \int_0^{1-(t_1+\cdots+t_k)} f^{(k+1)}\big(x_0 + t_1(x_1 - x_0) + \cdots + t_{k+1}(x_{k+1} - x_0)\big)\, dt_{k+1} \Big] dt_1 \cdots dt_k
= \frac{1}{x_{k+1} - x_0} \int \cdots \int_{\tau_k} \Big[ f^{(k)}\big(x_0 + t_1(x_1 - x_0) + \cdots + t_{k+1}(x_{k+1} - x_0)\big) \Big]_{t_{k+1}=0}^{t_{k+1}=1-(t_1+\cdots+t_k)} dt_1 \cdots dt_k
= \frac{1}{x_{k+1} - x_0}\Big( \int \cdots \int_{\tau_k} f^{(k)}\big(x_{k+1} + t_1(x_1 - x_{k+1}) + \cdots + t_k(x_k - x_{k+1})\big)\, dt_1 \cdots dt_k
\qquad - \int \cdots \int_{\tau_k} f^{(k)}\big(x_0 + t_1(x_1 - x_0) + \cdots + t_k(x_k - x_0)\big)\, dt_1 \cdots dt_k \Big)
= \frac{f[x_1, \cdots, x_k, x_{k+1}] - f[x_0, x_1, \cdots, x_k]}{x_{k+1} - x_0}
= f[x_0, x_1, \cdots, x_{k+1}].
Remark 5.2.8.
Since f is n-times continuously differentiable, the right hand side of (5.17) is mean-
ingful for any set of points x0 , x1 , · · · , xn , which are not necessarily distinct. Thus,
the notion of divided difference can be extended to the case of repeated arguments.
Corollary 5.2.9.
Hypothesis:
1. Let x0 , x1 , · · · , xn ∈ [a, b] be a set of points, not necessarily distinct.
2. Let f be n-times continuously differentiable function on the interval [a, b].
Conclusion:
1. The function (x_0, x_1, \cdots, x_n) ↦ f[x_0, x_1, \cdots, x_n] is continuous on R^{n+1}.
2. For any x ∈ [a, b], the limit
\lim_{(x_0, x_1, \cdots, x_n) \to (x, x, \cdots, x)} f[x_0, x_1, \cdots, x_n]
exists and this limit is the nth-order divided difference f[x, x, \cdots, x] (x repeated (n + 1) times). Further, we have
f[x, x, \cdots, x] = \frac{f^{(n)}(x)}{n!}.    (5.19)
Proof.
1. The proof is outside the scope of this course and hence omitted.
2. The proof follows from the fact that
\int \cdots \int_{\tau_n} 1 \, dt_1 \cdots dt_n = \frac{1}{n!}.
The following theorem gives an expression for the divided difference of an (n + 2)-times
differentiable function when two nodes are repeated.
Theorem 5.2.10.
Hypothesis:
1. Let x0 , x1 , · · · , xn be given (distinct) nodes in an interval [a, b].
2. Let x ∈ [a, b], x ∉ {x_0, x_1, \cdots, x_n}.
Conclusion: The (n + 2)nd-order divided difference f[x_0, x_1, \cdots, x_n, x, x] of an (n + 2)-times continuously differentiable function f is given by
f[x_0, x_1, \cdots, x_n, x, x] = \frac{d}{dx} f[x_0, x_1, \cdots, x_n, x].    (5.20)
Proof.
It follows from (5.17) that the function F (x) = f [x0 , x1 , · · · , xn , x] for all x ∈ [a, b]
is well-defined and continuous. Therefore, using the continuity and the symmetry
properties of the divided differences, we get
f[x_0, x_1, \cdots, x_n, x, x] = \lim_{h \to 0} f[x_0, x_1, \cdots, x_n, x, x + h].
As all the points used on the right hand side are distinct, we can use the formula
(5.12) to get
f[x_0, x_1, \cdots, x_n, x, x] = \lim_{h \to 0} \frac{f[x_0, x_1, \cdots, x_n, x + h] - f[x, x_0, x_1, \cdots, x_n]}{h}
= \lim_{h \to 0} \frac{f[x_0, x_1, \cdots, x_n, x + h] - f[x_0, x_1, \cdots, x_n, x]}{h}
= \frac{d}{dx} f[x_0, x_1, \cdots, x_n, x].
This completes the proof.
5.3 Error in Polynomial Interpolation
The following theorem provides a formula for the interpolation error when we assume that the necessary data are given exactly without any floating-point approximation: if f ∈ C^{n+1}[a, b] and p_n is the polynomial interpolating f at the distinct nodes x_0, x_1, \cdots, x_n ∈ [a, b], then for each x ∈ [a, b] there exists a point ξ_x ∈ (a, b) such that
ME_n(x) := f(x) - p_n(x) = \frac{f^{(n+1)}(\xi_x)}{(n+1)!} \prod_{i=0}^{n}(x - x_i).    (5.23)
Proof.
If x is one of the nodes, then (5.23) holds trivially for any choice of ξx ∈ (a, b).
Therefore, it is enough to prove the theorem when x is not a node. The idea is to
obtain a function having at least n + 2 distinct zeros; and then apply Rolle’s theorem
n + 1 times to get the desired conclusion.
For a given x ∈ I with x ̸= xi (i = 0, 1, · · · , n), define a new function ψ on the
interval I by
\psi(t) = f(t) - p_n(t) - \lambda \prod_{i=0}^{n}(t - x_i), \quad t ∈ I,    (5.24)
where
\lambda = \frac{f(x) - p_n(x)}{\prod_{i=0}^{n}(x - x_i)}.
The function ψ vanishes at the n + 2 distinct points x_0, x_1, \cdots, x_n and x. Applying Rolle's theorem repeatedly, ψ^{(n+1)} has at least one zero ξ_x ∈ (a, b); that is,
0 = \psi^{(n+1)}(\xi_x) = f^{(n+1)}(\xi_x) - \frac{f(x) - p_n(x)}{\prod_{i=0}^{n}(x - x_i)}\,(n+1)!.
Thus,
f^{(n+1)}(\xi_x) - \frac{ME_n(x)}{\prod_{i=0}^{n}(x - x_i)}\,(n+1)! = 0.
Solving for ME_n(x) gives the formula (5.23).
The following theorem plays an important role in the error analysis of numerical inte-
gration. The idea behind the proof of this theorem is similar to the idea used in the
above theorem and is left as an exercise.
Theorem 5.3.2.
If f ∈ C n+1 [a, b] and if x0 , x1 , · · · , xn are distinct nodes in [a, b], then for any x ̸= xi ,
i = 0, 1, · · · , n, there exists a point ξx ∈ (a, b) such that
f[x_0, x_1, \cdots, x_n, x] = \frac{f^{(n+1)}(\xi_x)}{(n+1)!}.    (5.25)
Remark 5.3.3.
It is interesting to note that when all the nodes coincide, then (5.25) reduces to
(5.19).
Example 5.3.5.
Let us obtain an upper bound of the mathematical error for the linear interpolat-
ing polynomial with respect to the infinity norm. As in Example 5.1.10, the linear
interpolating polynomial for f at x0 and x1 (x0 < x1 ) is given by
p_1(x) = f(x_0) + f[x_0, x_1](x - x_0),
where f[x_0, x_1] is given by (5.10). For each x ∈ I := [x_0, x_1], using the formula (5.23),
the error ME1 (x) is given by
ME_1(x) = \frac{(x - x_0)(x - x_1)}{2} f''(\xi_x),
where ξ_x ∈ (x_0, x_1) depends on x. Therefore,
|ME_1(x)| \le |(x - x_0)(x - x_1)|\, \frac{\|f''\|_{\infty,I}}{2}.
Note that the maximum value of |(x - x_0)(x - x_1)|, as x varies in the interval [x_0, x_1], occurs at x = (x_0 + x_1)/2. Therefore, we have
|(x - x_0)(x - x_1)| \le \frac{(x_1 - x_0)^2}{4}.
Using this inequality, we get the upper bound
|ME_1(x)| \le \frac{\|f''\|_{\infty,I}}{8}(x_1 - x_0)^2, \quad \text{for all } x ∈ [x_0, x_1].
Since this inequality is true for all x ∈ [x_0, x_1], it is in particular true for an x at which the function |ME_1| attains its maximum. Thus, we have
\|ME_1\|_{\infty,I} \le \frac{\|f''\|_{\infty,I}}{8}(x_1 - x_0)^2.
The right hand side quantity, which is a real number, is an upper bound for the
mathematical error in linear interpolation with respect to the infinity norm.
Example 5.3.6.
Let the function
f (x) = sin x
be approximated by an interpolating polynomial p9 (x) of degree less than or equal to
9 for f at the nodes x0 , x1 , · · · , x9 in the interval I := [0, 1]. Let us obtain an upper
bound for ∥ME9 ∥∞,I , where (from (5.23))
ME_9(x) = \frac{f^{(10)}(\xi_x)}{10!} \prod_{i=0}^{9}(x - x_i).
Since |f^{(10)}(\xi_x)| \le 1 and \prod_{i=0}^{9} |x - x_i| \le 1, we have
|\sin x - p_9(x)| = |ME_9(x)| \le \frac{1}{10!} < 2.8 \times 10^{-7}, \quad \text{for all } x ∈ [0, 1].
Since this holds for all x ∈ [0, 1], we have
\|ME_9\|_{\infty,I} \le \frac{1}{10!} < 2.8 \times 10^{-7}.
The right hand side number is the upper bound for the mathematical error ME9 with
respect to the infinity norm on the interval I.
Quite often, the polynomial interpolation that we compute is based on the function
data subjected to floating-point approximation. In this subsection, we analyze the
arithmetic error arising due to the floating-point approximation fl(f (xi )) of f (xi ) for
each node point xi , i = 0, 1, · · · , n in the interval I = [a, b]. All other computations are
assumed to be performed with infinite precision, for the sake of simplicity.
The Lagrange form of interpolating polynomial that uses the values fl(f (xi )) instead of
f (xi ), i = 0, 1, · · · , n is given by
\tilde{p}_n(x) = \sum_{i=0}^{n} \mathrm{fl}(f(x_i))\, l_i(x).
Figure 5.2: (a)-(c) depict the graph of the function l given by (5.28) for x ∈ [0, 1] when n = 2, 8, and 18. (d) plots M_n given by (5.29) against n.
The upper bound in (5.27) might grow quite large as n increases, especially when the
nodes are equally spaced as we will study now.
Assume that the nodes are equally spaced in the interval [a, b], with x0 = a and xn = b,
and xi+1 − xi = h for all i = 0, 1, · · · , n − 1. Note that h = (b − a)/n. We write
xi = a + ih, i = 0, 1, · · · , n.
Clearly, the Lagrange polynomials are not dependent on the choice of a, b, and h. They
depend entirely on n and η (which depends on x). The Figure 5.2 (a), (b) and (c) shows
the graph of the function l for n = 2, 8 and 18. It is observed that as n increases, the maximum of the function l
increases. In fact, as n → ∞, the maximum of l tends to infinity and it is observed in
Figure 5.2 (d) which depicts n in the x-axis and the function
M_n = \sum_{i=0}^{n} \|l_i\|_{\infty,I}    (5.29)
in the y-axis. This shows that the upper bound of the arithmetic error AEn given in
(5.27) tends to infinity as n → ∞. This gives the possibility that the arithmetic error
may tend to increase as n increases. Thus, as we increase the degree of the interpolating polynomial, the approximation may become worse due to the presence of floating-point approximation. In fact, this behavior of the arithmetic error in polynomial interpolation
can also be analyzed theoretically, but this is outside the scope of the present course.
TEn (x) = f (x) − p̃n (x) = (f (x) − pn (x)) + (pn (x) − p̃n (x)). (5.30)
Taking infinity norm on both sides of the equation (5.30) and using triangle inequality,
we get
\|TE_n\|_{\infty,I} = \|f - \tilde{p}_n\|_{\infty,I} \le \|f - p_n\|_{\infty,I} + \|p_n - \tilde{p}_n\|_{\infty,I} \le \|f - p_n\|_{\infty,I} + \|\epsilon\|_{\infty} M_n.
It is clear from the Figure 5.2 (d) that Mn increases exponentially with respect to n.
This implies that even if ||ϵ||∞ is very small, a large enough n can bring in a significantly
large error in the interpolating polynomial.
Thus, for a given function and a set of equally spaced nodes, even if the mathematical
error is bounded, the presence of floating-point approximation in the given data can
lead to significantly large arithmetic error for larger values of n.
In the previous section, we have seen that even a small arithmetic error may lead to a
drastic magnification of total error as we go on increasing the degree of the polynomial.
Figure 5.3: Runge Phenomenon is illustrated. Figure (a) depicts the graph of the
function f given by (5.31) (blue solid line) along with the interpolating polynomial
of degree 2 (red dash line) and 8 (magenta dash dot line) with equally spaced nodes.
Figure (b) shows the graph of f (blue solid line) along with the interpolating polynomial
of degree 18 (red dash line) with equally spaced nodes.
This gives us a feeling that if the calculation is done with infinite precision (that is,
without any finite digit floating point arithmetic) and the function f is smooth, then we
always have a better approximation for a larger value of n. In other words, we expect
\lim_{n \to \infty} \|f - p_n\|_{\infty,I} = 0.
But this is not true in the case of equally spaced nodes. This was first shown by Carl
Runge, where he discovered that there are certain functions for which, as we go on
increasing the degree of interpolating polynomial, the total error increases drastically
and the corresponding interpolating polynomial oscillates near the boundary of the
interval in which the interpolation is done. Such a phenomenon is called the Runge
Phenomenon. This phenomenon is well understood by the following example given by
Carl Runge.
Example 5.3.7 [Runge’s Function].
Consider the Runge’s function defined on the interval [−1, 1] given by
f(x) = \frac{1}{1 + 25x^2}.    (5.31)
The interpolating polynomials with n = 2, n = 8 and n = 18 are depicted in Figure
5.3. This figure clearly shows that as we increase the degree of the polynomial, the
interpolating polynomial differs significantly from the actual function especially, near
the end points of the interval.
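A small numerical experiment (a sketch, not part of the notes) makes this growth visible: interpolate Runge's function (5.31) at equally spaced nodes and monitor the maximum error on a fine grid.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)      # Runge's function (5.31)
xx = np.linspace(-1.0, 1.0, 2001)            # fine grid to estimate the sup-norm error

for n in (2, 8, 18):
    nodes = np.linspace(-1.0, 1.0, n + 1)    # equally spaced nodes
    p = BarycentricInterpolator(nodes, f(nodes))
    print(n, np.max(np.abs(f(xx) - p(xx))))  # the maximum error grows with n
```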
In the light of the discussion made in Subsection 5.3.2, we may think that the Runge
phenomenon is due to the amplification of the arithmetic error. But, even if the cal-
culation is done with infinite precision (that is, without any finite digit floating point
arithmetic), we may still have the Runge phenomenon due to the amplification in
mathematical error. This can be observed by taking infinity norm on both sides of the
formula (5.23). This gives an upper bound of the infinity norm of MEn (x) as
\|ME_n\|_{\infty,I} \le \frac{(b - a)^{n+1}}{(n+1)!}\, \|f^{(n+1)}\|_{\infty,I}.
Although the first factor, (b - a)^{n+1}/(n+1)!, in the upper bound tends to zero as n → ∞, if the second factor, \|f^{(n+1)}\|_{\infty,I}, increases significantly as n increases, then the upper
bound can still increase and makes it possible for the mathematical error to be quite
high.
A deeper analysis is required to understand the Runge phenomenon more rigorously.
We end this section by stating without proof a negative result and a positive result
concerning the convergence of sequences of interpolating polynomials.
Theorem 5.3.8. Let a sequence of nodes
a \le x_0^{(n)} < x_1^{(n)} < \cdots < x_n^{(n)} \le b, \quad \text{for } n ∈ N,
be given. Then there exists a continuous function f defined on the interval [a, b] such
that the polynomials pn (x) that interpolate the function f at these nodes have the
property that ∥pn − f ∥∞,[a,b] does not tend to zero as n → ∞.
Example 5.3.9.
In fact, the interpolating polynomial p_n(x) for the Runge's function becomes worse and worse, as shown in Figure 5.3, for increasing values of n with equally spaced nodes. That is, \|f - p_n\|_{\infty,[-1,1]} → ∞ as n → ∞ for any sequence of equally spaced nodes.
Let us now state a positive result concerning the convergence of sequence of interpo-
lating polynomials to a given continuous function.
Theorem 5.3.10.
Let f be a continuous function on the interval [a, b]. Then there exists a sequence of
nodes
a \le x_0^{(n)} < x_1^{(n)} < \cdots < x_n^{(n)} \le b, \quad \text{for } n ∈ N,
such that the polynomials pn (x) that interpolate the function f at these nodes satisfy
∥pn − f ∥∞,[a,b] → 0 as n → ∞.
The Theorem 5.3.10 is very interesting because it implies that for the Runge’s function,
we can find a sequence of nodes for which the corresponding interpolating polynomial
yields a good approximation even for a large value of n.
Example 5.3.11.
For instance, define a sequence of nodes
x_i^{(n)} = \cos\left( \frac{(2i + 1)\pi}{2(n + 1)} \right), \quad i = 0, 1, \cdots, n    (5.32)
for each n = 0, 1, 2, \cdots. The nodes x_i^{(n)} defined by (5.32) are called Chebyshev nodes.
In particular, if n = 4, the nodes are
x_0^{(4)} = \cos(\pi/10), \quad x_1^{(4)} = \cos(3\pi/10), \quad x_2^{(4)} = \cos(5\pi/10), \quad x_3^{(4)} = \cos(7\pi/10), \quad x_4^{(4)} = \cos(9\pi/10).
Figure 5.4 depicts pn (x) for n = 4, 18, 32, and 64 along with the Runge’s function.
From these figures, we observe that the interpolating polynomial pn (x) agrees well
with the Runge’s function.
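The following sketch (not part of the notes) repeats the previous experiment with the Chebyshev nodes (5.32); in contrast with equally spaced nodes, the error now decreases as n grows.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xx = np.linspace(-1.0, 1.0, 2001)

for n in (4, 18, 32, 64):
    i = np.arange(n + 1)
    nodes = np.cos((2 * i + 1) * np.pi / (2 * (n + 1)))   # Chebyshev nodes (5.32)
    p = BarycentricInterpolator(nodes, f(nodes))
    print(n, np.max(np.abs(f(xx) - p(xx))))               # the error decreases with n
```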
Figure 5.4: Runge Phenomenon is illustrated with Chebyshev nodes. Figure (a) to (d)
shows the graph of the Runge function (blue solid line) and the interpolating polynomial
with Chebyshev nodes (red dash line) for n = 4, 18, 32 and 64 respectively. Note that
the two graphs in Figure (c) and (d) are indistinguishable.
5.4 Piecewise Polynomial Interpolation
Let us start with piecewise linear interpolation over an interval I = [a, b]: given nodes x_0 < x_1 < x_2 in I, on each of the subintervals [x_0, x_1] and [x_1, x_2] we take the linear polynomial interpolating f at the endpoints of that subinterval, and denote the resulting function by s(x). Note that s(x) is a continuous function on [x_0, x_2], which interpolates f(x) and is linear in [x_0, x_1] and [x_1, x_2]. Such an interpolating function is called a piecewise linear interpolating function.
In a similar way as done above, we can also obtain piecewise quadratic, cubic interpo-
lating functions and so on.
Example 5.4.1.
Consider the Example 5.1, where we have obtained the quadratic interpolating polynomial for the function
f(x) = \sin\left( \frac{\pi}{2} e^x \right).
The piecewise linear polynomial interpolating function for the data
x      0        0.5      1
f(x)   1.0000   0.5242   −0.9037
is given by
s(x) = 1 − 0.9516 x,        x ∈ [0, 0.5],
s(x) = 1.9521 − 2.8558 x,   x ∈ [0.5, 1].
The following table shows the value of the function f at x = 0.25 and x = 0.75 along
with the values of p2 (x) and s(x) with relative errors.
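As a quick check, the following sketch (assuming the data and the piecewise formula for s(x) given above; names are illustrative) evaluates f and s at x = 0.25 and x = 0.75 and the corresponding relative errors.

```python
import numpy as np

f = lambda x: np.sin(0.5 * np.pi * np.exp(x))   # the function being interpolated

def s(x):
    # piecewise linear interpolant from the example above
    return 1.0 - 0.9516 * x if x <= 0.5 else 1.9521 - 2.8558 * x

for x in (0.25, 0.75):
    fx, sx = f(x), s(x)
    print(x, fx, sx, abs(fx - sx) / abs(fx))    # value, approximation, relative error
```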
Figure 5.5: The function f(x) = sin((π/2) e^x) (blue line), p_2(x) (green dash line) and the piecewise linear interpolant s(x) (red dash-dot line) are shown. Blue dots represent the given data, the blue 'x' symbols indicate the values of f(0.25) and f(0.75), the green '+' symbols indicate the values of p_2(0.25) and p_2(0.75), and the red 'O' symbols represent the values of s(0.25) and s(0.75).
5.5 Spline Interpolation
We shall now study how to interpolate a function f by spline interpolating functions instead of by a single polynomial. For the sake of simplicity, we restrict ourselves to splines of degree d = 3, called cubic spline interpolating functions.
Remark 5.5.2.
Note that in each subinterval [xi−1 , xi ], i = 1, 2, · · · , n, we only know f (xi−1 ) and
f (xi ). But we look for a cubic polynomial in this subinterval. Therefore, we cannot
follow the Lagrange’s or Newton’s interpolating formula, as these formulas demand
the function values at four distinct nodes in the subinterval. We need to adopt a
different method for the construction of the polynomial in each subinterval in order
to obtain the spline interpolation.
In the construction below we write
M_i = s''(x_i), \quad i = 0, 1, \cdots, n,
for the second derivatives of the spline s at the nodes. For the natural cubic spline, we impose the additional conditions
M_0 = M_n = 0.    (5.36)
Example 5.5.3.
Calculate the natural cubic spline interpolating the data
\{ (1, 1),\ (2, 1/2),\ (3, 1/3),\ (4, 1/4) \}.
Figure 5.6: The function f(x) = 1/x (blue line) and the corresponding natural cubic spline interpolating function s(x) (red dash line) are shown. The data are represented by blue dots.
K_2 = \begin{cases} \frac{3}{2} - \frac{2M_0 - M_1}{6}, & x ∈ [1, 2] \\ \frac{5}{6} - \frac{3M_1 - 2M_2}{6}, & x ∈ [2, 3] \\ \frac{7}{12} - \frac{4M_2 - 3M_3}{6}, & x ∈ [3, 4] \end{cases}
Substituting these expressions in the expression of s(x), we get the required cubic
spline as given in (5.34).
Step 3: Since we are constructing the natural spline, we take M0 = M3 = 0. The
system (5.35) gives
\frac{2}{3} M_1 + \frac{1}{6} M_2 = \frac{1}{3},
\frac{1}{6} M_1 + \frac{2}{3} M_2 = \frac{1}{12}.
Solving this system, we get M_1 = 1/2, M_2 = 0. Substituting these values into (5.34),
we obtain
s(x) = \begin{cases} \frac{1}{12}x^3 - \frac{1}{4}x^2 - \frac{1}{3}x + \frac{3}{2}, & x ∈ [1, 2] \\ -\frac{1}{12}x^3 + \frac{3}{4}x^2 - \frac{7}{3}x + \frac{17}{6}, & x ∈ [2, 3] \\ -\frac{1}{12}x + \frac{7}{12}, & x ∈ [3, 4] \end{cases}
which is the required natural cubic spline approximation to the given data. A com-
parison result is depicted in Figure 5.6.
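The same natural spline can be reproduced with SciPy, as in the following sketch (not part of the notes); the option bc_type='natural' imposes M_0 = M_n = 0 as in (5.36).

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 1.0 / x                               # the data (1,1), (2,1/2), (3,1/3), (4,1/4)
s = CubicSpline(x, y, bc_type='natural')  # natural cubic spline

print(s([1.5, 2.5, 3.5]))   # spline values between the nodes
print(s(x, 2))              # second derivatives at the nodes: up to rounding,
                            # M_0 = M_3 = 0, M_1 = 1/2, M_2 = 0 as computed above
```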
5.6 Exercises
Polynomial Interpolation
1. Let x0 , x1 , · · · , xn be distinct nodes. If p(x) is a polynomial of degree less than or
equal to n, then show that
p(x) = \sum_{i=0}^{n} p(x_i)\, l_i(x),
7. Find the Lagrange form of interpolating polynomial p_2(x) that interpolates the function f(x) = e^{-x^2} at the nodes x_0 = -1, x_1 = 0 and x_2 = 1. Further, find the value of p_2(-0.9) (use 6-digit rounding). Compare the value with the true value f(-0.9) (use 6-digit rounding). Find the percentage error in this calculation.
12. If f ∈ C n+1 [a, b] and if x0 , x1 , · · · , xn are distinct nodes in [a, b], then show that
there exists a point ξx ∈ (a, b) such that
f[x_0, x_1, \cdots, x_n, x] = \frac{f^{(n+1)}(\xi_x)}{(n+1)!}.
13. Let N be a natural number. Let p1 (x) denote the linear interpolating polynomial
on the interval [N , N +1] interpolating the function f (x) = x2 at the nodes N and
N + 1. Find an upper bound for the mathematical error ME1 using the infinity
norm on the interval [N , N + 1] (i.e., ∥ME1 ∥∞, [N ,N +1] ).
14. Let p_3(x) denote a polynomial of degree less than or equal to 3 that interpolates the function f(x) = ln x at the nodes x_0 = 1, x_1 = 4/3, x_2 = 5/3, x_3 = 2. Find a lower bound on the absolute value of the mathematical error |ME_3(x)| at the point x = 3/2, using the formula for the mathematical error in interpolation.
15. Let f : [0, π/6] → R be a given function. The following is the meaning of cubic interpolation in a table of function values
x      x_0      x_1      · · ·   x_N
f(x)   f(x_0)   f(x_1)   · · ·   f(x_N)
The values of f(x) are tabulated for a set of equally spaced points in [a, b], say x_i for i = 0, 1, · · · , N with x_0 = 0, x_N = π/6, and h = x_{i+1} − x_i > 0 for every i = 0, 1, · · · , N − 1. For an x̄ ∈ [0, π/6] at which the function value f(x̄) is not tabulated, the value of f(x̄) is taken to be the value of p_3(x̄), where p_3(x) is the polynomial of degree less than or equal to 3 that interpolates f at the nodes x_i, x_{i+1}, x_{i+2}, x_{i+3}, where i is the least index such that x̄ ∈ [x_i, x_{i+3}].
Take f(x) = sin x for x ∈ [0, π/6], and answer the following questions.
i) When x̄ and p_3 are as described above, show that |f(x̄) − p_3(x̄)| ≤ h^4/48.
ii) If h = 0.005, show that cubic interpolation in the table of function values yields the value of f(x̄) with at least 10 decimal-place accuracy.
16. Let x0 , x1 , · · · , xn be n + 1 distinct nodes, and f be a function. For each
i = 0, 1, · · · , n, let fl (f (xi )) denote the floating point approximation of f (xi ) ob-
tained by rounding to 5 decimal places (note this is different from using 5-digit
rounding). Assume that 0.1 ≤ f (xi ) < 1 for all i = 0, 1, · · · , n. Let pn (x) de-
note the Lagrange form of interpolating polynomial corresponding to the data
{(xi , f (xi )) : i = 0, 1, · · · , n}. Let p̃n (x) denote the Lagrange form of interpolat-
ing polynomial corresponding to the data {(xi , fl (f (xi ))) : i = 0, 1, · · · , n}. Show
that the arithmetic error at a point x̃ satisfies the inequality
|p_n(x̃) - \tilde{p}_n(x̃)| \le \frac{1}{2} \times 10^{-5} \sum_{k=0}^{n} |l_k(x̃)|.
Spline Interpolation
17. Find a natural cubic spline interpolating function for the data
x -1 0 1
.
y 5 7 9
18. Determine whether the natural cubic spline function that interpolates the table
x -1 0 1
y −3 −1 −1
is a natural cubic spline interpolating function on the interval [−4, 5] for the data
x -4 -3 4 5
.
y 9 7 29 -3
20. Do there exist real numbers a and b so that the function given on [−1, 2] by
(x − 2)^3 + a(x − 1)^2, \quad x ∈ [−1, 2],
is a natural cubic spline interpolating function on the interval [−1, 5] for the data
x -1 2 3 5
?
y -31 -1 1 17
21. Show that the natural cubic spline interpolation function for a given data is
unique.
CHAPTER 6
Numerical Integration and Differentiation
There are two reasons for approximating derivatives and integrals of a function f(x). One is when the function is very difficult to differentiate or integrate, or only the tabular values are available for the function. Another reason is to obtain the solution of a
differential or integral equation. In this chapter we introduce some basic methods to
approximate integral and derivative of a function given either explicitly or by tabulated
values.
In Section 6.1, we obtain numerical methods for evaluating the integral of a given
integrable function f defined on the interval [a, b]. Section 6.2 introduces various ways
to obtain numerical formulas for approximating derivatives of a given differentiable
function.
6.1 Numerical Integration
In this section we derive and analyze numerical methods for evaluating definite integrals.
The problem is to evaluate the number
I(f) = \int_a^b f(x)\, dx.    (6.1)
Most such integrals cannot be evaluated explicitly, and with many others, it is faster to
integrate numerically than explicitly. The process of approximating the value of I(f )
is usually referred to as numerical integration or quadrature rule.
We now consider the case when n = 0 in (6.2). Then, the corresponding interpolating polynomial is the constant function p_0(x) = f(x_0), and therefore
I(f) ≈ (b − a) f(x_0).
From this, we can obtain two quadrature rules depending on the choice of x_0.
• If x_0 = a, then this approximation becomes
I(f) ≈ I_R(f) := (b − a) f(a),
and is called the rectangle rule. The geometrical interpretation of the rectangle rule is illustrated in Figure 6.1.
• If x_0 = (a + b)/2, we get
I(f) ≈ I_M(f) := (b − a) f\Big(\frac{a + b}{2}\Big)    (6.4)
and is called the mid-point rule.
[Figure 6.1: the graph of y = f(x) over [a, b]; the shaded rectangle of height f(a) illustrates the rectangle rule.]
Proof.
Let p_0(x) be the polynomial interpolating the function f at the node a. For each x ∈ [a, b], we have
f(x) = p_0(x) + f[a, x](x − a) = f(a) + f[a, x](x − a).
Integrating over [a, b], we get
ME_R(f) = I(f) − I_R(f) = \int_a^b f[a, x](x − a)\, dx.    (6.6)
From Corollary 5.2.9 (Conclusion 1), we see that the function x ↦ f[a, x] is continu-
ous. Therefore, from the mean value theorem for integrals (after noting that (x − a) does not change sign on [a, b]), the expression (6.6) for the mathematical error takes the form
ME_R(f) = f[a, \xi] \int_a^b (x − a)\, dx,
for some ξ ∈ (a, b). By the mean value theorem, f[a, ξ] = f'(η) for some η ∈ (a, ξ). Thus, we get
ME_R(f) = \frac{f'(\eta)(b − a)^2}{2},
for some η ∈ (a, b).
and therefore
I(f) ≈ I_T(f) := \int_a^b \big( f(x_0) + f[x_0, x_1](x − x_0) \big)\, dx.
Taking x_0 = a and x_1 = b and evaluating the integral, we obtain the trapezoidal rule
I_T(f) = \frac{b − a}{2}\big( f(a) + f(b) \big).    (6.7)
Proof.
For each x ∈ [a, b], we have
f(x) = p_1(x) + f[a, b, x](x − a)(x − b).
Integrating over [a, b] gives
I(f) = I_T(f) + \int_a^b f[a, b, x](x − a)(x − b)\, dx,
and therefore
ME_T(f) = I(f) − I_T(f) = \int_a^b f[a, b, x](x − a)(x − b)\, dx.    (6.9)
From Corollary 5.2.9 (Conclusion 1), we see that the function x ↦ f[a, b, x] is con-
tinuous. Therefore, from the mean value theorem for integrals (after noting that
(x − a)(x − b) is negative for all x ∈ [a, b]), the expression (6.9) for the mathematical
error takes the form
ME_T(f) = f[a, b, \eta] \int_a^b (x − a)(x − b)\, dx,
for some η ∈ (a, b). The formula (6.8), namely
ME_T(f) = -\frac{(b − a)^3}{12} f''(\eta) \quad \text{for some } \eta ∈ (a, b),
now follows from (5.25) and a direct evaluation of the above integral.
Example 6.1.3.
For the function f(x) = 1/(x + 1), we approximate the integral
I = \int_0^1 f(x)\, dx
using the trapezoidal rule (6.7), which gives
I_T(f) = \frac{1}{2}\Big( 1 + \frac{1}{2} \Big) = 0.75.
The true value is I(f) = log(2) ≈ 0.693147. Therefore, the error is ME_T(f) ≈ −0.0569. Using the formula (6.8), we get the bounds for ME_T(f) as
-\frac{1}{6} < ME_T(f) < -\frac{1}{48},
which clearly holds in the present case.
We can improve the approximation of trapezoidal rule by breaking the interval [a, b]
into smaller subintervals and apply the trapezoidal rule (6.7) on each subinterval. We
will derive a general formula for this.
Let us subdivide the interval [a, b] into n equal subintervals of length
b−a
h=
n
with endpoints of the subintervals as
xj = a + jh, j = 0, 1, · · · , n.
Then, we get
\int_a^b f(x)\, dx = \int_{x_0}^{x_n} f(x)\, dx = \sum_{j=0}^{n-1} \int_{x_j}^{x_{j+1}} f(x)\, dx.
Applying the trapezoidal rule (6.7) on each subinterval [x_j, x_{j+1}] yields the composite trapezoidal rule
I_T^n(f) := \frac{h}{2}\Big[ f(x_0) + 2 \sum_{j=1}^{n-1} f(x_j) + f(x_n) \Big].
Example 6.1.4.
Using composite trapezoidal rule with n = 2, let us approximate the integral
I = \int_0^1 f(x)\, dx, \quad \text{where } f(x) = \frac{1}{1 + x}.
As we have seen in Example 6.1.3, the true value is I(f ) = log(2) ≈ 0.693147. Now,
the composite trapezoidal rule with x0 = 0, x1 = 1/2 and x2 = 1 gives
IT2 (f ) ≈ 0.70833.
Thus the error is -0.0152. Recall from Example 6.1.3 that with n = 1, the trapezoidal
rule gave an error of -0.0569.
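The following sketch (not part of the notes; names are illustrative) implements the composite trapezoidal rule with equally spaced nodes and applies it to f(x) = 1/(1+x) on [0, 1], as in Examples 6.1.3 and 6.1.4.

```python
import numpy as np

def composite_trapezoidal(f, a, b, n):
    """Composite trapezoidal rule on n equal subintervals of [a, b]."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    y = f(x)
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = lambda x: 1.0 / (1.0 + x)
exact = np.log(2.0)
for n in (1, 2, 4, 8, 16):
    approx = composite_trapezoidal(f, 0.0, 1.0, n)
    # errors exact - approx: about -0.0569 for n = 1 and -0.0152 for n = 2, as in the text
    print(n, approx, exact - approx)
```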
We now calculate I(p2 (x)) to obtain the formula for the case when n = 2. Let us choose
x0 = a, x1 = (a + b)/2 and x2 = b. The Lagrange form of interpolating polynomial is
p2 (x) = f (x0 )l0 (x) + f (x1 )l1 (x) + f (x2 )l2 (x).
Then
\int_a^b p_2(x)\, dx = f(x_0) \int_a^b l_0(x)\, dx + f(x_1) \int_a^b l_1(x)\, dx + f(x_2) \int_a^b l_2(x)\, dx.
[Figure 6.3: the graphs of y = f(x) and y = p_2(x) over [a, b] with nodes a, (a + b)/2, b, illustrating Simpson's rule.]
Using the substitution x = x_1 + t(b − a)/2, t ∈ [−1, 1], we get
\int_a^b l_0(x)\, dx = \int_a^b \frac{(x − x_1)(x − x_2)}{(x_0 − x_1)(x_0 − x_2)}\, dx = \int_{-1}^{1} \frac{t(t − 1)}{2}\, \frac{b − a}{2}\, dt = \frac{b − a}{6}.
Similarly, we can see
\int_a^b l_1(x)\, dx = \frac{4}{6}(b − a), \qquad \int_a^b l_2(x)\, dx = \frac{b − a}{6}.
Therefore,
I(f) ≈ I_S(f) := \frac{b − a}{6}\Big[ f(a) + 4 f\Big(\frac{a + b}{2}\Big) + f(b) \Big],    (6.11)
which is the famous Simpson's rule. The Simpson's rule is illustrated in Figure 6.3.
We now obtain the mathematical error in Simpson’s rule, given by,
MES (f ) := I(f ) − I(p2 ).
Proof.
Let x, y ∈ (a, b) be such that a, (a + b)/2, b, x, y are distinct. From the fourth order divided difference formula (5.12), we have
f[a, (a+b)/2, b, x] = f[y, a, (a+b)/2, b] + f[y, a, (a+b)/2, b, x](x − y).
Note that the above equality holds for every x, y ∈ [a, b] in view of Corollary 5.2.9 (Conclusion 1).
The mathematical error in Simpson’s method can therefore be written as
ME_S(f) = \int_a^b f[y, a, (a+b)/2, b]\, \phi(x)\, dx + \int_a^b f[y, a, (a+b)/2, b, x](x − y)\phi(x)\, dx,
where
\phi(x) = (x − a)\big(x − (a+b)/2\big)(x − b).
A direct integration shows
\int_a^b \phi(x)\, dx = \int_a^b (x − a)\Big(x − \frac{a + b}{2}\Big)(x − b)\, dx = 0.
Thus, we have
ME_S(f) = \int_a^b f[y, a, (a+b)/2, b, x](x − y)\phi(x)\, dx.
Recall in the proof of Theorem 6.1.2, we used mean value theorem for integrals
(Theorem 1.4.3) at this stage to arrive at the conclusion. But, in the present case,
we cannot use this theorem because the function (x − y)ϕ(x) need not be of one sign
on [a, b]. However, the choice y = (a + b)/2 makes this function one signed (nega-
tive). Now using Corollary 5.2.9 (Conclusion 1) and following the idea of the proof of
Theorem 6.1.2, we arrive at the formula (6.12) (this is left as an exercise).
Example 6.1.6.
We now use the Simpson’s rule to approximate the integral
I(f) = \int_0^1 f(x)\, dx, \quad \text{where } f(x) = \frac{1}{1 + x}.
The true value is I(f) = log(2) ≈ 0.693147. Using the Simpson's rule (6.11), we get
I_S(f) = \frac{1}{6}\Big( 1 + \frac{8}{3} + \frac{1}{2} \Big) = \frac{25}{36} ≈ 0.694444.
Let us now derive the composite Simpson’s rule. First let us subdivide the interval [a, b]
into 2n equal parts, where n ∈ N. For k = 0, 1, · · · , 2n, define xk := a + kh, where
h = (b − a)/2n. Applying Simpson’s rule (6.11) on the interval [x2i , x2i+2 ], we get
\int_{x_{2i}}^{x_{2i+2}} f(x)\, dx ≈ \frac{2h}{6}\big\{ f(x_{2i}) + 4f(x_{2i+1}) + f(x_{2i+2}) \big\}.
Summing over i = 0, \cdots, n − 1, we get
\int_a^b f(x)\, dx = \sum_{i=0}^{n-1} \int_{x_{2i}}^{x_{2i+2}} f(x)\, dx ≈ \frac{2h}{6} \sum_{i=0}^{n-1} \big\{ f(x_{2i}) + 4f(x_{2i+1}) + f(x_{2i+2}) \big\}.
Therefore, the composite Simpson’s rule takes the form
I_S^n(f) = \frac{h}{3}\Big[ f(x_0) + f(x_{2n}) + 2 \sum_{i=1}^{n-1} f(x_{2i}) + 4 \sum_{i=0}^{n-1} f(x_{2i+1}) \Big].    (6.12)
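A short sketch (not part of the notes) of the composite Simpson rule (6.12), tested again on f(x) = 1/(1+x) over [0, 1]:

```python
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson rule on 2n equal subintervals, h = (b - a)/(2n)."""
    h = (b - a) / (2 * n)
    x = np.linspace(a, b, 2 * n + 1)
    y = f(x)
    return (h / 3.0) * (y[0] + y[-1] + 2.0 * y[2:-1:2].sum() + 4.0 * y[1::2].sum())

f = lambda x: 1.0 / (1.0 + x)
exact = np.log(2.0)
for n in (1, 2, 4, 8):
    approx = composite_simpson(f, 0.0, 1.0, n)
    print(2 * n, approx, exact - approx)   # n = 1 reproduces I_S(f) = 25/36
```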
So far we have derived quadrature rules of the form
I(f) ≈ \sum_{i=0}^{n} w_i f(x_i),    (6.13)
where w_i's are weights. In deriving those rules, we have fixed the nodes x_0, x_1, \cdots,
xn (n = 0 for rectangle rule, n = 1 for trapezoidal rule and n = 2 for Simpson’s rule),
and used interpolating polynomials to obtain the corresponding weights. Instead, we
may use another approach in which for a fixed set of nodes weights are determined
by imposing the condition that the resulting rule is exact for polynomials of degree
less than or equal to n. Such a method is called the method of undetermined coeffi-
cients.
Example 6.1.7.
Let us find w_0, w_1, and w_2 such that the approximate formula
\int_a^b f(x)\, dx ≈ w_0 f(a) + w_1 f\Big(\frac{a + b}{2}\Big) + w_2 f(b)    (6.14)
is exact for all polynomials of degree less than or equal to 2.
• The condition that the formula (6.14) is exact for the polynomial p(x) = 1 yields
b − a = \int_a^b 1\, dx = w_0 + w_1 + w_2.
• The condition that the formula (6.14) is exact for the polynomial p(x) = x yields
\frac{b^2 − a^2}{2} = \int_a^b x\, dx = a w_0 + \frac{a + b}{2} w_1 + b w_2.
• The condition that the formula (6.14) is exact for the polynomial p(x) = x^2 yields
\frac{b^3 − a^3}{3} = \int_a^b x^2\, dx = a^2 w_0 + \Big(\frac{a + b}{2}\Big)^2 w_1 + b^2 w_2.
Example 6.1.9.
Let us determine the degree of precision of Simpson’s rule. It will suffice to apply
the rule over the interval [0, 2] (in fact any interval is good enough and we chose this
interval for the sake of having easy computation).
\int_0^2 1\, dx = 2 = \frac{2}{6}(1 + 4 + 1),
\int_0^2 x\, dx = 2 = \frac{2}{6}(0 + 4 + 2),
\int_0^2 x^2\, dx = \frac{8}{3} = \frac{2}{6}(0 + 4 + 4),
\int_0^2 x^3\, dx = 4 = \frac{2}{6}(0 + 4 + 8),
\int_0^2 x^4\, dx = \frac{32}{5} \ne \frac{2}{6}(0 + 4 + 16) = \frac{20}{3}.
Hence the rule is exact for all polynomials of degree up to 3 but not for x^4, so its degree of precision is 3.
Remark 6.1.10.
In Example 6.1.7 we obtained Simpson's rule using the method of undetermined coefficients by requiring exactness for polynomials of degree less than or equal to 2; the above example shows that the rule is in fact exact for polynomials of degree three as well.
In Example 6.1.7 we have fixed the nodes and obtained the weights in the quadrature
rule (6.14) such that the rule is exact for polynomials of degree less than or equal to
2. In general, by fixing the nodes, we can obtain the weights in (6.13) such that the
rule is exact for polynomials of degree less than or equal to n. But it is also possible to
derive a quadrature rule such that the rule is exact for polynomials of degree less than
or equal to 2n + 1 by choosing the n + 1 nodes and the weights appropriately. This is
the basic idea of Gaussian rules.
Let us consider the special case
\int_{-1}^{1} f(x)\, dx ≈ \sum_{i=0}^{n} w_i f(x_i).    (6.15)
The weights wi and the nodes xi (i = 0, · · · , n) are to be chosen in such a way that the
rule (6.15) is exact, that is
\int_{-1}^{1} f(x)\, dx = \sum_{i=0}^{n} w_i f(x_i),    (6.16)
whenever f (x) is a polynomial of degree less than or equal to 2n + 1. Note that (6.16)
holds for every polynomial f (x) of degree less than or equal to 2n + 1 if and only if
(6.16) holds for f (x) = 1, x, x2 , · · · , x2n+1 .
Case 1: (n = 0). In this case, the quadrature formula (6.15) takes the form
\int_{-1}^{1} f(x)\, dx ≈ w_0 f(x_0).
The exactness condition (6.16) for f(x) = 1 and f(x) = x gives
\int_{-1}^{1} 1\, dx = 2 = w_0 \quad \text{and} \quad \int_{-1}^{1} x\, dx = 0 = w_0 x_0,
so that w_0 = 2 and x_0 = 0. Thus the one-point Gaussian rule is
\int_{-1}^{1} f(x)\, dx ≈ 2 f(0) =: I_{G_0}(f).    (6.17)
Case 2: (n = 1). In this case, the quadrature formula (6.15) takes the form
\int_{-1}^{1} f(x)\, dx ≈ w_0 f(x_0) + w_1 f(x_1).
Imposing the exactness condition (6.16) for f(x) = 1, x, x^2, x^3 leads to w_0 = w_1 = 1 and x_0 = -1/\sqrt{3}, x_1 = 1/\sqrt{3}, so that
\int_{-1}^{1} f(x)\, dx ≈ f\big(-1/\sqrt{3}\big) + f\big(1/\sqrt{3}\big) =: I_{G_1}(f).    (6.18)
Case 3: (General). In general, the quadrature formula is given by (6.15), where there
are 2(n + 1) free parameters xi and wi for i = 0, 1, · · · , n. The condition (6.16) leads
to the nonlinear system
∑n 0 , i = 1, 3, · · · , 2n + 1
i
w j xj = 2 .
, i = 0, 2, · · · , 2n
j=0 i+1
These are nonlinear equations and their solvability is not at all obvious and therefore
the discussion is outside the scope of this course.
So far, we derived Gaussian rule for integrals over [−1, 1]. But this is not a limitation
as any integral on the interval [a, b] can easily be transformed to an integral on [−1, 1]
by using the linear change of variable
x = \frac{b + a + t(b − a)}{2}, \quad -1 \le t \le 1.    (6.19)
Thus, we have
\int_a^b f(x)\, dx = \frac{b − a}{2} \int_{-1}^{1} f\Big( \frac{b + a + t(b − a)}{2} \Big)\, dt.
Example 6.1.11.
We now use the Gaussian rule to approximate the integral
I(f) = \int_0^1 f(x)\, dx, \quad \text{where } f(x) = \frac{1}{1 + x}.
Note that the true value is I(f) = log(2) ≈ 0.693147.
To use the Gaussian quadrature, we first make the linear change of variable (6.19) with a = 0 and b = 1, which gives
x = \frac{t + 1}{2}, \quad -1 \le t \le 1.
Thus the required integral is
I(f) = \int_0^1 \frac{dx}{1 + x} = \int_{-1}^{1} \frac{dt}{3 + t}.
We need to take f(t) = 1/(3 + t) in the Gaussian quadrature formula (6.18), and we get
\int_0^1 \frac{dx}{1 + x} = \int_{-1}^{1} \frac{dt}{3 + t} ≈ f\big(-1/\sqrt{3}\big) + f\big(1/\sqrt{3}\big) = I_{G_1}(f) ≈ 0.692308.
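The next sketch (not part of the notes) reproduces this two-point computation and, for comparison, applies higher-order Gauss-Legendre rules obtained from numpy.polynomial.legendre.leggauss after the change of variable (6.19).

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + x)
a, b = 0.0, 1.0
exact = np.log(2.0)

# integrand after the change of variable (6.19), including the (b - a)/2 factor
g = lambda t: f((b + a + t * (b - a)) / 2.0) * (b - a) / 2.0

two_point = g(-1.0 / np.sqrt(3.0)) + g(1.0 / np.sqrt(3.0))   # the rule (6.18)
print(two_point, exact - two_point)                          # about 0.692308

for n in (2, 3, 4):                                          # (n+1)-point Gauss rules
    t, w = np.polynomial.legendre.leggauss(n + 1)
    approx = np.sum(w * g(t))
    print(n + 1, approx, exact - approx)
```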
Theorem 6.1.12.
Let f(x) be continuous for a ≤ x ≤ b, and let n ≥ 1. Then the absolute error |E_n(f)| in using the Gaussian numerical integration rule to approximate I(f) satisfies
|E_n(f)| \le 2(b − a)\, \rho_{2n+1}(f),
where
\rho_{2n+1}(f) = \inf_{\deg q \le 2n+1} \|f − q\|_{\infty}.
Proof.
E_n(p) = 0 for any polynomial p(x) of degree ≤ 2n + 1. Also, the error functional E_n satisfies E_n(F + G) = E_n(F) + E_n(G) for all F, G ∈ C[a, b]. Let p(x) = q^*_{2n+1}(x), the minimax approximation of degree ≤ 2n + 1 to f(x) on [a, b]. Then
E_n(f) = E_n(f) − E_n(q^*_{2n+1}) = E_n(f − q^*_{2n+1}) = \int_a^b \big( f(x) − q^*_{2n+1}(x) \big)\, dx − \sum_{j=0}^{n} w_j \big( f(x_j) − q^*_{2n+1}(x_j) \big).
Therefore, we have
|E_n(f)| \le \|f − q^*_{2n+1}\|_{\infty} \Big[ (b − a) + \sum_{j=0}^{n} |w_j| \Big].
But \sum_{j=0}^{n} |w_j| = b − a, and therefore we get the desired result.
6.2 Numerical Differentiation
The simplest way to obtain a numerical method for approximating the derivative of a C^1 function f is to use the definition of the derivative
f'(x) = \lim_{h \to 0} \frac{f(x + h) − f(x)}{h}.
f'(x) ≈ \frac{f(x + h) − f(x)}{h} =: D_h^{+} f(x)    (6.21)
for a sufficiently small value of h > 0. The formula Dh+ f (x) is called the forward
difference formula for the derivative of f at the point x.
Theorem 6.2.1.
Let f ∈ C 2 [a, b]. The mathematical error in the forward difference formula is given
by
f'(x) − D_h^{+} f(x) = -\frac{h}{2} f''(\eta)    (6.22)
for some η ∈ (x, x + h).
Proof.
By Taylor’s theorem, we have
h2 ′′
f (x + h) = f (x) + hf ′ (x) + f (η) (6.23)
2
for some η ∈ (x, x + h). Using (6.21) and (6.23), we obtain
{[ ] }
1 h2 ′′ h
+
Dh f (x) = f (x) + hf (x) + f (η) − f (x) = f ′ (x) + f ′′ (η).
′
h 2 2
Remark 6.2.2.
If we consider the left hand side of (6.22) as a function of h, i.e., if
g(h) = f'(x) − D_h^{+} f(x),
then (6.22) gives a bound on g. Let M > 0 be such that |f''(x)| ≤ M for all x ∈ [a, b]. Then we see that
\Big| \frac{g(h)}{h} \Big| \le \frac{M}{2}.
That is, g = O(h) as h → 0. We say that the forward difference formula Dh+ f (x)
is of order 1 (order of accuracy).
The derivative of f can equally well be written as
f'(x) = \lim_{h \to 0} \frac{f(x) − f(x − h)}{h}.
Therefore, the approximating formula for the first derivative of f can also be taken as
f'(x) ≈ \frac{f(x) − f(x − h)}{h} =: D_h^{-} f(x).    (6.24)
The formula Dh− f (x) is called the backward difference formula for the derivative of f
at the point x.
Deriving the mathematical error for backward difference formula is similar to that of
the forward difference formula. It can be shown that the backward difference formula
is of order 1.
Similarly, we may write
f'(x) = \lim_{h \to 0} \frac{f(x + h) − f(x − h)}{2h}.
Therefore, the approximating formula for the first derivative of f can also be taken as
f'(x) ≈ \frac{f(x + h) − f(x − h)}{2h} =: D_h^{0} f(x),    (6.25)
for a sufficiently small value of h > 0. The formula Dh0 f (x) is called the central difference
formula for the derivative of f at the point x.
The central difference formula is of order 2 as shown in the following theorem.
Theorem 6.2.3.
Let f ∈ C 3 [a, b]. The mathematical error in the central difference formula is given
by
f'(x) − D_h^{0} f(x) = -\frac{h^2}{6} f'''(\eta),    (6.26)
where η ∈ (x − h, x + h).
[Figure: the forward, backward, and central difference approximations of f'(x) interpreted as slopes of chords through the points x − h, x, and x + h on the graph of y = f(x).]
Proof.
Using Taylor’s theorem, we have
′ h2 ′′ h3 ′′′
f (x + h) = f (x) + hf (x) + f (x) + f (η1 )
2! 3!
and
h2 ′′ h3
f (x − h) = f (x) − hf ′ (x) + f (x) − f ′′′ (η2 ),
2! 3!
where η1 ∈ (x, x + h) and η2 ∈ (x − h, x).
Therefore, we have
h3 ′′′
f (x + h) − f (x − h) = 2hf ′ (x) + (f (η1 ) + f ′′′ (η2 )).
3!
Since f ′′′ (x) is continuous, by intermediate value theorem applied to f ′′ , we have
223
S. Baskar and S. Sivaji Ganesh Spring 2018-19
Section 6.2 Numerical Differentiation
Example 6.2.4.
To find the value of the derivative of the function given by f (x) = sin x at x = 1 with
h = 0.003906, we use the three primitive difference formulas. We have
f (x − h) = f (0.996094) = 0.839354,
f (x) = f (1) = 0.841471,
f (x + h) = f (1.003906) = 0.843575.
1. Backward difference: D_h^{-} f(x) = \frac{f(x) − f(x − h)}{h} = 0.541935.
2. Central difference: D_h^{0} f(x) = \frac{f(x + h) − f(x − h)}{2h} = 0.540303.
3. Forward difference: D_h^{+} f(x) = \frac{f(x + h) − f(x)}{h} = 0.538670.
Note that the exact value is f ′ (1) = cos 1 = 0.540302.
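The same computation in a short Python sketch (not part of the notes):

```python
import numpy as np

f, x, h = np.sin, 1.0, 0.003906

forward  = (f(x + h) - f(x)) / h             # D_h^+ f(x), order 1
backward = (f(x) - f(x - h)) / h             # D_h^- f(x), order 1
central  = (f(x + h) - f(x - h)) / (2 * h)   # D_h^0 f(x), order 2

exact = np.cos(1.0)
for name, val in (("forward", forward), ("backward", backward), ("central", central)):
    print(name, val, abs(val - exact))       # the central formula is the most accurate
```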
Using the polynomial interpolation we can obtain formula for derivatives of any order
for a given function. For instance, to calculate f ′ (x) at some point x, we use the
approximate formula
f ′ (x) ≈ p′n (x),
where pn (x) denotes the interpolating polynomial for f (x). Many formulas can be
obtained by varying n and by varying the placement of the nodes x0 , · · · , xn relative to
the point x of interest.
Let us take n = 1. The linear interpolating polynomial is given by
p_1(x) = f(x_0) + f[x_0, x_1](x − x_0), \quad \text{so that} \quad p_1'(x) = f[x_0, x_1] = \frac{f(x_1) − f(x_0)}{x_1 − x_0}.
In particular,
• if we take x0 = x and x1 = x + h for a small value h > 0, we obtain the forward
difference formula Dh+ f (x).
• if we take x0 = x − h and x1 = x for small value of h > 0, we obtain the backward
difference formula Dh− f (x).
• if we take x0 = x − h and x1 = x + h for small value of h > 0, we get the central
difference formula Dh0 f (x).
We next prove the formula for mathematical error in approximating the first derivative
using interpolating polynomial.
Theorem 6.2.5 [Mathematical Error].
Hypothesis:
1. Let f be an (n + 2)-times continuously differentiable function on the interval
[a, b].
2. Let x0 , x1 , · · · , xn be n + 1 distinct nodes in [a, b].
3. Let pn (x) denote the polynomial that interpolates f at the nodes x0 , x1 , · · · , xn .
4. Let x be any point in [a, b] such that x ∉ {x_0, x_1, \cdots, x_n}.
Conclusion: Then
f'(x) − p_n'(x) = \frac{f^{(n+2)}(\eta_x)}{(n+2)!} w_n(x) + \frac{f^{(n+1)}(\xi_x)}{(n+1)!} w_n'(x)    (6.28)
with w_n(x) = \prod_{i=0}^{n}(x − x_i), where ξ_x and η_x are points in between the maximum and minimum of x_0, x_1, \cdots, x_n and x, that depend on x.
Proof.
For any x ∈ [a, b] with x ∉ {x_0, x_1, \cdots, x_n}, by Newton's form of interpolating polynomial, we have
f(x) = p_n(x) + f[x_0, \cdots, x_n, x]\, w_n(x).
Differentiating with respect to x and using Theorem 5.2.10, which gives
\frac{d}{dx} f[x_0, \cdots, x_n, x] = f[x_0, \cdots, x_n, x, x],
we obtain
f'(x) − p_n'(x) = f[x_0, \cdots, x_n, x]\, w_n'(x) + f[x_0, \cdots, x_n, x, x]\, w_n(x).
Further, from Theorem 5.3.2, we see that there exists an ξx ∈ (a, b) such that
f[x_0, \cdots, x_n, x] = \frac{f^{(n+1)}(\xi_x)}{(n+1)!}.
Using Theorem 5.3.2 along with the Hermite-Genocchi formula, we see that there exists an η_x ∈ (a, b) such that
f[x_0, \cdots, x_n, x, x] = \frac{f^{(n+2)}(\eta_x)}{(n+2)!}.
Therefore, we get the desired formula (6.28).
Difference formulas for higher order derivatives and their mathematical error can be
obtained similarly. The derivation of the mathematical error for the formulas of higher
order derivatives are omitted for this course.
Example 6.2.6.
Let x0 , x1 , and x2 be the given nodes. Then, the Newton’s form of interpolating
polynomial for f is given by
p_2(x) = f(x_0) + f[x_0, x_1](x − x_0) + f[x_0, x_1, x_2](x − x_0)(x − x_1).
With the choice x_0 = x, x_1 = x + h, x_2 = x + 2h for a small h > 0, differentiating p_2 and using Theorem 6.2.5 gives the mathematical error
f'(x) − p_2'(x) = \frac{h^2}{3} f'''(\xi),
for some ξ ∈ (x, x + 2h).
Another method to derive formulas for numerical differentiation is called the method of
undetermined coefficients. The idea behind this method is similar to the one discussed
in deriving quadrature formulas.
Suppose we seek a formula for f^{(k)}(x) that involves the nodes x_0, x_1, \cdots, x_n. Then, write the formula in the form
f^{(k)}(x) ≈ w_0 f(x_0) + w_1 f(x_1) + \cdots + w_n f(x_n),
where w_i, i = 0, 1, \cdots, n are free variables that are obtained by imposing the condition that this formula is exact for polynomials of degree less than or equal to n.
Example 6.2.7.
We will illustrate the method by deriving the formula for f ′′ (x) at nodes x0 = x − h,
x1 = x and x2 = x + h for a small value of h > 0.
For a small value of h > 0, let
f''(x) ≈ w_0 f(x − h) + w_1 f(x) + w_2 f(x + h),    (6.30)
where w0 , w1 and w2 are to be obtained so that this formula is exact when f (x) is
a polynomial of degree less than or equal to 2. This condition is equivalent to the
exactness for the three polynomials 1, x and x2 .
Step 1: When f (x) = 1 for all x. Then the formula of the form (6.30) is assumed to
be exact and we get
w0 + w1 + w2 = 0. (6.31)
Step 2: When f (x) = x for all x. Then the formula of the form (6.30) is assumed to
be exact and we get
w0 (x − h) + w1 x + w2 (x + h) = 0.
Using (6.31), we get
w2 − w0 = 0. (6.32)
Step 3: When f (x) = x2 for all x. Then the formula of the form (6.30) is assumed
to be exact and we get
w0 (x − h)2 + w1 x2 + w2 (x + h)2 = 2.
Using (6.31) and (6.32), this yields w_0 = w_2 = \frac{1}{h^2} and w_1 = -\frac{2}{h^2}, so that
D_h^{(2)} f(x) = \frac{f(x + h) − 2f(x) + f(x − h)}{h^2},    (6.34)
which is the required formula.
Let us now derive the mathematical error involved in this formula. For this, we use
the Taylor’s series
f(x \pm h) = f(x) \pm h f'(x) + \frac{h^2}{2!} f''(x) \pm \frac{h^3}{3!} f^{(3)}(x) + \cdots
in (6.34) to get
D_h^{(2)} f(x) = \frac{1}{h^2}\Big( f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f^{(3)}(x) + \frac{h^4}{4!} f^{(4)}(x) + \cdots \Big) − \frac{2}{h^2} f(x)
\qquad + \frac{1}{h^2}\Big( f(x) − h f'(x) + \frac{h^2}{2!} f''(x) − \frac{h^3}{3!} f^{(3)}(x) + \frac{h^4}{4!} f^{(4)}(x) − \cdots \Big),
which simplifies to
D_h^{(2)} f(x) = f''(x) + \frac{h^2}{24}\big[ (f^{(4)}(x) + \cdots) + (f^{(4)}(x) − \cdots) \big].
Now treating the fourth order terms on the right hand side as remainders in Taylor's series, we get
D_h^{(2)} f(x) = f''(x) + \frac{h^2}{24}\big[ f^{(4)}(\xi_1) + f^{(4)}(\xi_2) \big],
for some ξ1 , ξ2 ∈ (x − h, x + h). Using intermediate value theorem for the function
f (4) , we get the mathematical error as
f''(x) − D_h^{(2)} f(x) = -\frac{h^2}{12} f^{(4)}(\xi)    (6.35)
for some ξ ∈ (x − h, x + h), which is the required mathematical error.
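A quick numerical check of the second-order accuracy predicted by (6.35), using f(x) = cos x at x = π/6 (a sketch, not part of the notes):

```python
import numpy as np

f, x = np.cos, np.pi / 6.0
exact = -np.cos(x)                                     # the true value of f''(x)

for h in (0.1, 0.05, 0.025, 0.0125):
    d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2     # formula (6.34)
    print(h, d2, exact - d2)   # the error shrinks roughly by a factor 4 as h halves
```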
Difference formulas are useful when deriving methods for solving differential equations.
But they can lead to serious errors when applied to function values that are subjected
to floating-point approximations. Let
f (xi ) = fi + ϵi , i = 0, 1, 2.
To illustrate the effect of such errors, we choose the approximation D_h^{(2)} f(x) given by (6.34) for the second derivative of f, with x_0 = x − h, x_1 = x and x_2 = x + h. Instead of using the exact values f(x_i), we use the approximate values f_i in the difference formula (6.34). That is,
\bar{D}_h^{(2)} f(x_1) = \frac{f_2 − 2f_1 + f_0}{h^2}.
The total error committed is
f''(x_1) − \bar{D}_h^{(2)} f(x_1) = f''(x_1) − \frac{f(x_2) − 2f(x_1) + f(x_0)}{h^2} + \frac{\epsilon_2 − 2\epsilon_1 + \epsilon_0}{h^2} = -\frac{h^2}{12} f^{(4)}(\xi) + \frac{\epsilon_2 − 2\epsilon_1 + \epsilon_0}{h^2}.
Using the notation \epsilon_\infty := \max\{|\epsilon_0|, |\epsilon_1|, |\epsilon_2|\}, we have
|f''(x_1) − \bar{D}_h^{(2)} f(x_1)| \le \frac{h^2}{12} |f^{(4)}(\xi)| + \frac{4\epsilon_\infty}{h^2}.    (6.36)
The error bound in (6.36) clearly shows that, although the first term (bound of math-
ematical error) tends to zero as h → 0, the second term (bound of arithmetic error)
can tend to infinity as h → 0. Hence the total error can become very large as h → 0. In fact, there is an optimal value of h that minimizes the right
side of (6.36) (as shown in Figure 6.5), which we will illustrate in the following example.
Example 6.2.8.
In finding f''(π/6) for the function f(x) = cos x, if we use function values f_i that have six significant digits when compared to f(x_i), then
\frac{|f(x_i) − f_i|}{|f(x_i)|} \le 0.5 \times 10^{-5}.
Figure 6.5: A sketch of the upper bound in total error as given in (6.36) as a function
of h. The black star indicates the optimal value of h.
The error bound (6.36) then takes the form
|f''(\pi/6) − \bar{D}_h^{(2)} f(\pi/6)| \le \frac{h^2}{12} |f^{(4)}(\xi)| + \frac{4\epsilon_\infty}{h^2},
12 h
where ϵ∞ ≤ 0.5 × 10−5 and ξ ≈ π/6. Thus, we have
h2 (π ) 4 2 × 10−5
|f ′′ (π/6) − D̄h f (π/6)| ≤ + 2 (0.5 × 10−5 ) ≈ 0.0722h2 +
(2)
cos =: E(h).
12 6 h h2
The bound E(h) indicates that there is an optimal value of h, call it h*, below which the bound increases rapidly as h → 0. To find it, we solve E'(h) = 0; its root is h* ≈ 0.129. Thus, for values of h slightly larger than h* ≈ 0.129, the error bound is smaller than for 0 < h < h*. This behavior of E(h) is observed in the following table. Note that the true value is f''(π/6) = −cos(π/6) ≈ −0.86603.
h        \bar{D}_h^{(2)} f(π/6)    Total Error    E(h)
0.2      −0.86313                  −0.0029        0.0034
0.129    −0.86479                  −0.0012        0.0024
0.005    −0.80000                  −0.0660        0.8000
0.001     0.00000                  −0.8660        20
When h is very small, f(x − h), f(x) and f(x + h) are very close numbers, and therefore their difference in the numerator of the formula (6.34) tends to suffer loss of significance. This is clearly observed in the values of \bar{D}_h^{(2)} f(π/6) when compared to the true value: not many significant digits are lost for h > 0.129, whereas for h < 0.129 there is a drastic loss of significant digits.
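The effect in the table can be reproduced with the following sketch (not part of the notes), in which the function values are rounded to six significant digits before differencing; the helper round_sig is an illustrative stand-in for that rounding.

```python
import numpy as np

def round_sig(v, ndigits=6):
    # crude rounding to ndigits significant digits, used only for this illustration
    return float(f"{v:.{ndigits - 1}e}")

x = np.pi / 6.0
exact = -np.cos(x)

for h in (0.2, 0.129, 0.05, 0.005, 0.001):
    f0, f1, f2 = (round_sig(np.cos(x + d)) for d in (-h, 0.0, h))
    d2 = (f2 - 2.0 * f1 + f0) / h**2
    print(h, d2, exact - d2)   # the total error first decreases, then blows up as h shrinks
```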
6.3 Exercises
Numerical Integration
1. Apply Rectangle, Trapezoidal, Simpson and Gaussian methods to evaluate
i) I = \int_0^{\pi/2} \frac{\cos x}{1 + \cos^2 x}\, dx  (exact value ≈ 0.623225)
ii) I = \int_0^{\pi} \frac{dx}{5 + 4\cos x}  (exact value ≈ 1.047198)
iii) I = \int_0^1 e^{-x^2}\, dx  (exact value ≈ 0.746824)
iv) I = \int_0^{\pi} \sin^3 x \, \cos^4 x\, dx  (exact value ≈ 0.114286)
v) I = \int_0^1 \big(1 + e^{-x}\sin(4x)\big)\, dx  (exact value ≈ 1.308250)
Compute the relative error (when compared to the given exact values) in each
method.
2. Write down the errors in the approximation of
\int_0^1 x^4\, dx \quad \text{and} \quad \int_0^1 x^5\, dx
by the Trapezoidal rule and Simpson's rule. Find the value of the constant C for which the Trapezoidal rule gives the exact result for the calculation of the integral
\int_0^1 (x^5 − C x^4)\, dx.
4. Obtain expressions for the arithmetic error in approximating the integral \int_a^b f(x)\, dx using the trapezoidal and the Simpson's rules. Also obtain upper bounds.
5. Let a = x0 < x1 < · · · < xn = b be equally spaced nodes (i.e., xk = x0 + kh for
k = 1, 2, · · · , n) in the interval [a, b]. Note that h = (b − a)/n. Let f be a twice
continuously differentiable function on [a, b].
i) Show that the expression for the mathematical error in approximating the integral \int_a^b f(x)\, dx using the composite trapezoidal rule, denoted by E_T^n(f), is given by
E_T^n(f) = -\frac{(b − a)h^2}{12} f''(\xi),
for some ξ ∈ (a, b).
ii) Show that the mathematical error ETn (f ) tends to zero as n → ∞ (one uses
the terminology composite trapezoidal rule is convergent in such a case).
6. Determine the minimum number of subintervals and the corresponding step size
h so that the error for the composite trapezoidal rule is less than 5 × 10−9 for
approximating the integral $\int_2^7 dx/x$.
7. Let a = x0 < x1 < · · · < xn = b be equally spaced nodes (i.e., xk = x0 + kh for
k = 1, 2, · · · , n) in the interval [a, b], and n is an even natural number. Note that
h = (b − a)/n. Let f be a four times continuously differentiable function on [a, b].
   i) Show that the expression for the mathematical error in approximating the integral $\int_a^b f(x)\, dx$ using the composite Simpson rule, denoted by $E^n_S(f)$, is given by
   $$E^n_S(f) = -\frac{(b-a)h^4}{180}\, f^{(4)}(\xi),$$
   for some ξ ∈ (a, b).
ii) Show that the mathematical error ESn (f ) tends to zero as n → ∞ (one uses
the terminology composite Simpson rule is convergent in such a case).
8. Use the composite Simpson's and composite Trapezoidal rules to obtain an approximate value for the improper integral
   $$\int_1^{\infty} \frac{1}{x^2 + 9}\, dx, \qquad \text{with } n = 4.$$
such that the formula is exact for all polynomials of degree as high as possible.
What is the degree of precision?
10. Use the two-point Gaussian quadrature rule to approximate $\int_{-1}^{1} dx/(x+2)$ and compare the result with the trapezoidal and Simpson's rules.
11. Assume that xk = x0 + kh are equally spaced nodes. The quadrature formula
    $$\int_{x_0}^{x_3} f(x)\, dx \approx \frac{3h}{8}\big(f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3)\big)$$
    is called Simpson's 3/8 rule. Determine the degree of precision of Simpson's 3/8 rule.
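For working out the numerical integration exercises above, the following is a minimal sketch (not part of the original notes) of the composite trapezoidal and Simpson rules, applied here to integral (iii) of Exercise 1; the function names are illustrative, and the other integrals can be handled by changing f, the limits and the exact value.

import math

def trapezoidal(f, a, b, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

def simpson(f, a, b, n):
    # composite Simpson rule; n must be even
    if n % 2:
        raise ValueError("n must be even for Simpson's rule")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))
    return h * s / 3.0

f = lambda x: math.exp(-x * x)
exact = 0.746824
for rule in (trapezoidal, simpson):
    approx = rule(f, 0.0, 1.0, 8)
    print(rule.__name__, approx, "relative error:", abs(approx - exact) / abs(exact))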
Numerical Differentiation
12. In this problem, perform the calculations using 6-digit rounding arithmetic.
i) Find the value of the derivative of the function f (x) = sin x at x = 1 using
the forward, backward, and central difference formulae with h1 = 0.015625,
and h2 = 0.000015.
ii) Find f ′ (1) directly and compare with the values obtained for each hi (i =
1, 2).
13. Obtain the central difference formula for f ′ (x) using polynomial interpolation
with nodes at x − h, x, x + h, where h > 0.
14. Given the values of the function f (x) = ln x at x0 = 2.0, x1 = 2.2 and x2 =
2.6, find the approximate value of f ′ (2.0) using the method based on quadratic
interpolation. Obtain an error bound.
15. The following data corresponds to the function f (x) = sin x.
x 0.5 0.6 0.7
f (x) 0.4794 0.5646 0.6442
Obtain the approximate value of f ′ (0.5), f ′ (0.6), and f ′ (0.7) using forward, back-
ward, and central difference formulae whichever are applicable. Compute the
relative error in all the three cases.
16. The following data corresponds to the function f (x) = ex − 2x2 + 3x + 1.
x 0.0 0.2 0.4
f (x) 0.0 0.7414 1.3718
Obtain the approximate value of f ′ (0.0), f ′ (0.2), and f ′ (0.4) using forward, back-
ward, and central difference formulae whichever are applicable. Compute the
relative error in all the three cases.
17. Obtain expressions for the arithmetic error in approximating the first derivative
of a function using the forward, backward, and central difference formulae.
CHAPTER 7
Numerical Ordinary Differential Equations

Consider a first order ordinary differential equation
$$y' = f(x, y), \qquad (7.1a)$$
together with an initial condition
$$y(x_0) = y_0. \qquad (7.1b)$$
We call the problem of solving the above ODE along with this initial condition the initial value problem, and refer to it as (7.1).
It is well-known that there are many ODEs of physical interest that cannot be solved
exactly although we know that such problems have unique solutions. If one wants
the solution of such problems, then the only way is to obtain it approximately. One
common way of obtaining an approximate solution to a given initial value problem is to
numerically compute the solution using a numerical method (or numerical scheme). In
this chapter, we introduce some basic numerical methods for approximating the solution
of a given initial value problem.
In Section 7.1, we review the exact solvability of a given initial value problem and motivate the need for numerical methods. In Section 7.3, we introduce a basic numerical method, Euler's method, for obtaining an approximate solution of an initial value problem, and discuss the error involved in this method. We also show in this section that the Euler method is of order 1. Modified forms of the Euler method are discussed in Section 7.4, and Runge-Kutta methods of orders 2 and 4 in Section 7.5.
7.1 Review of Theory
Lemma 7.1.1 states that a continuous function y is a solution of the initial value problem (7.1) on an interval I containing x0 if and only if it satisfies the integral equation
$$y(x) = y_0 + \int_{x_0}^{x} f(s, y(s))\, ds \qquad (7.2)$$
for all x ∈ I.

Proof.
If y is a solution of the initial value problem (7.1), then we have
$$y'(x) = f(x, y(x)).$$
Integrating the above equation from x0 to x yields the integral equation (7.2).
On the other hand, let y be a solution of the integral equation (7.2). Observe that, due to continuity of the function x → y(x), the function s → f(s, y(s)) is continuous. Hence the right hand side of (7.2) is a differentiable function of x; differentiating (7.2) gives y'(x) = f(x, y(x)), and setting x = x0 in (7.2) gives y(x0) = y0. Thus y is a solution of (7.1).
Remark 7.1.2.
The following are the consequences of Lemma 7.1.1.
Before applying a numerical method, it is important to ensure that the given initial value problem has a unique solution. Otherwise, we may be numerically solving an initial value problem that actually has no solution, or one that has many solutions, in which case we do not know which solution the numerical method approximates. Let us illustrate the non-uniqueness of the solution of an initial value problem.
Example 7.1.3.
Consider the initial value problem
$$y' = 3y^{2/3}, \qquad y(0) = 0.$$
Note that y(x) = 0 for all x ∈ ℝ is clearly a solution of this initial value problem, and so is y(x) = x³ for all x ∈ ℝ. In fact, this initial value problem has an infinite family of solutions parametrized by c ≥ 0, given by
$$y_c(x) = \begin{cases} 0 & \text{if } x \le c,\\ (x - c)^3 & \text{if } x > c,\end{cases}$$
defined for all x ∈ ℝ. Thus, we see that a solution to an initial value problem need not be unique.
Theorem 7.1.4.
Let f be a continuous function that is Lipschitz continuous with respect to the variable y on the rectangle
$$R = \{x : |x - x_0| \le a\} \times \{y : |y - y_0| \le b\}, \qquad (7.5)$$
and let M = max_{(x,y)∈R} |f(x, y)|. Then the initial value problem (7.1) has at least one solution on the interval |x − x0| ≤ δ, where δ = min{a, b/M}. Moreover, the initial value problem (7.1) has exactly one solution on this interval.
Remark 7.1.5.
We state without proof that the function f(x, y) = 3y^{2/3} in Example 7.1.3 is not a Lipschitz function on any rectangle containing the point (0, 0).
Verifying directly that a given function f is Lipschitz continuous can be a little difficult. However, one can give a set of alternative conditions that guarantee Lipschitz continuity, and these conditions are easy to verify.
Lemma 7.1.6.
Let D ⊆ ℝ² be an open set and f : D → ℝ be a continuous function such that its partial derivative with respect to the variable y is also continuous on D, i.e., ∂f/∂y : D → ℝ is a continuous function. Let the rectangle
$$R = \{x : |x - x_0| \le a\} \times \{y : |y - y_0| \le b\}$$
be contained in D. Then f is Lipschitz continuous with respect to the variable y on R.
Proof.
Let (x, y1), (x, y2) ∈ R. Applying the mean value theorem with respect to the y variable, we get
$$f(x, y_1) - f(x, y_2) = \frac{\partial f}{\partial y}(x, \xi)\,(y_1 - y_2), \qquad (7.6)$$
for some ξ between y1 and y2. Since we apply the mean value theorem with x fixed, this ξ will also depend on x. However, since ∂f/∂y : D → ℝ is a continuous function, it is bounded on the closed and bounded rectangle R. That is, there exists a number L > 0 such that
$$\left|\frac{\partial f}{\partial y}(x, y)\right| \le L \quad \text{for all } (x, y) \in R. \qquad (7.7)$$
Taking modulus in the equation (7.6) and using the last inequality, we get
$$|f(x, y_1) - f(x, y_2)| = \left|\frac{\partial f}{\partial y}(x, \xi)\right|\,|y_1 - y_2| \le L\,|y_1 - y_2|.$$
The following theorem can be used as a tool to check the existence and uniqueness of
solution of a given initial value problem (7.1).
Corollary 7.1.7 [Existence and Uniqueness Theorem].
Let D ⊆ R2 be a domain and I ⊆ R be an interval. Let f : D → R be a continuous
function. Let (x0 , y0 ) ∈ D be a point such that the rectangle R defined by
R = {x : |x − x0 | ≤ a} × {y : |y − y0 | ≤ b} (7.8)
is contained in D.
If the partial derivative ∂f/∂y is also continuous in D, then there exists a unique solution y = y(x) of the initial value problem (7.1) defined on the interval |x − x0| ≤ δ, where δ = min{a, b/M} and M = max_{(x,y)∈R} |f(x, y)|.
We state an example (without details) of an initial value problem that has a unique
solution but the right hand side function in the differential equation is not a Lipschitz
function. Thus this example illustrates the fact that the condition of Lipschitz continuity is only a sufficient condition for uniqueness, and is by no means necessary.
Example 7.1.8.
The initial value problem
$$y' = \begin{cases} y \sin\dfrac{1}{y} & \text{if } y \ne 0,\\[4pt] 0 & \text{if } y = 0,\end{cases} \qquad y(0) = 0,$$
has a unique solution, despite the right hand side not being Lipschitz continuous with respect to the variable y on any rectangle containing (0, 0).
As another example, the initial value problem
$$y' = e^{-x^2}, \qquad y(0) = 0,$$
has a unique solution, but the solution cannot be expressed in terms of elementary functions, which again motivates the use of numerical methods.
7.2 Discretization Notations
To compute the solution numerically, we discretize the interval of interest using the nodes
$$x_j = x_0 + jh, \qquad j = 0, 1, \cdots, n, \qquad (7.10)$$
for a sufficiently small positive real number h. The approximate value of the solution at the node xj is denoted by yj, i.e., yj ≈ y(xj).
7.3 Euler's Method
The basic idea of Euler's method is to replace the derivative in (7.1a) at x = x0 by the forward difference approximation, which gives
$$\frac{1}{h}\big(y(x_1) - y(x_0)\big) \approx f(x_0, y(x_0)), \quad \text{i.e.,} \quad y(x_1) \approx y(x_0) + h\, f(x_0, y(x_0)).$$
Since we assumed that we know the value of y(x0 ), the right hand side is fully known
and hence y(x1 ) can now be computed explicitly.
In general, if we know the value of y(xj), we can obtain the value of y(xj+1) by using the forward difference approximation for the derivative in (7.1a) at x = xj to get
$$\frac{1}{h}\big(y(x_{j+1}) - y(x_j)\big) \approx f(x_j, y(x_j)).$$
Denoting the approximate value of y(xj) by yj, we adopt the formula
$$y_{j+1} = y_j + h\, f(x_j, y_j), \qquad j = 0, 1, \cdots, n - 1, \qquad (7.12)$$
which is called the forward Euler's method.
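As a minimal sketch (not part of the original notes), the forward Euler method (7.12) can be implemented in a few lines; the function name euler and the driver below are illustrative, and the driver applies the method to the initial value problem of Example 7.3.2 below.

import math

def euler(f, x0, y0, h, n):
    # forward Euler: y_{j+1} = y_j + h f(x_j, y_j), j = 0, 1, ..., n-1
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# y' = y, y(0) = 1 with h = 0.01 (Example 7.3.2); exact solution is exp(x)
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.01, 5)
for x, y in zip(xs, ys):
    print(f"{x:4.2f}  {y:.6f}  exact {math.exp(x):.6f}  error {math.exp(x) - y:.6f}")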
Example 7.3.2.
Consider the initial value problem
y ′ = y, y(0) = 1.
The Euler method (7.12) for this equation takes the form
$$y_{j+1} = y_j + h\, y_j = (1 + h)\, y_j.$$
Note that the exact solution of the given initial value problem is y(x) = e^x. We apply Euler's method with h = 0.01 and use 7-digit rounding arithmetic.
The numerical results along with the errors are presented in the following table for h = 0.01.
h x yh (x) Exact Solution Error Relative Error
0.01 0.00 1.000000 1.000000 0.000000 0.000000
0.01 0.01 1.010000 1.010050 0.000050 0.000050
0.01 0.02 1.020100 1.020201 0.000101 0.000099
0.01 0.03 1.030301 1.030455 0.000154 0.000149
0.01 0.04 1.040604 1.040811 0.000207 0.000199
0.01 0.05 1.051010 1.051271 0.000261 0.000248
Since the exact solution of this equation is y = ex , the correct value at x = 0.04 is
1.040811.
By taking smaller values of h, we may improve the accuracy of Euler's method. The numerical results along with the errors are shown in the following table for h = 0.005.
h x yh (x) Exact Solution Error Relative Error
0.005 0.00 1.000000 1.000000 0.000000 0.000000
0.005 0.005 1.005000 1.005013 0.000013 0.000012
0.005 0.01 1.010025 1.010050 0.000025 0.000025
0.005 0.015 1.015075 1.015113 0.000038 0.000037
0.005 0.02 1.020151 1.020201 0.000051 0.000050
0.005 0.025 1.025251 1.025315 0.000064 0.000062
0.005 0.03 1.030378 1.030455 0.000077 0.000075
0.005 0.035 1.035529 1.035620 0.000090 0.000087
0.005 0.04 1.040707 1.040811 0.000104 0.000100
0.005 0.045 1.045910 1.046028 0.000117 0.000112
0.005 0.05 1.051140 1.051271 0.000131 0.000125
7.3.1 Error in Euler's Method
In Example 7.3.2, we illustrated that as we reduce the step size h, we tend to get a more accurate solution of a given IVP at a given point x = xj. The truncation error supports this observation when y'' is a bounded function. However, the mathematical error in the computed solution yj involves both the truncation error and the error propagated from the computation of the solution at x = xi for i = 0, 1, · · · , j − 1. In addition to the mathematical error, we also have the arithmetic error due to floating-point approximation in each arithmetic operation. In this section, we study the total error involved in the forward Euler's method. The total error involved in the backward Euler's method can be obtained in a
similar way.
Using Taylor's theorem, write
$$y(x_{j+1}) = y(x_j) + h\, y'(x_j) + \frac{h^2}{2}\, y''(\xi_j)$$
for some xj < ξj < xj+1. Since y(x) satisfies the ODE y' = f(x, y(x)), we get
$$y(x_{j+1}) = y(x_j) + h\, f(x_j, y(x_j)) + \frac{h^2}{2}\, y''(\xi_j).$$
Thus, the local truncation error in the forward Euler's method is
$$T_{j+1} = \frac{h^2}{2}\, y''(\xi_j), \qquad (7.14)$$
which is the error involved in obtaining the value y(xj+1) using the exact value y(xj). The forward Euler's method uses the approximate value yj in the formula, and therefore the finally computed value yj+1 involves not only the truncation error but also the error propagated in computing yj. Thus, the local mathematical error in the forward Euler's method is given by
$$ME(y_{j+1}) := y(x_{j+1}) - y_{j+1} = y(x_j) - y_j + h\big(f(x_j, y(x_j)) - f(x_j, y_j)\big) + \frac{h^2}{2}\, y''(\xi_j).$$
Here, y(xj ) − yj + h(f (xj , y(xj )) − f (xj , yj )) is the propagated error.
The propagated error can be simplified by applying the mean value theorem to f(x, z), considering it as a function of z:
$$f(x_j, y(x_j)) - f(x_j, y_j) = \frac{\partial f(x_j, \eta_j)}{\partial z}\,\big(y(x_j) - y_j\big),$$
for some ηj lying between y(xj) and yj. Using this, we get the mathematical error
$$ME(y_{j+1}) = \left[1 + h\,\frac{\partial f(x_j, \eta_j)}{\partial z}\right] ME(y_j) + \frac{h^2}{2}\, y''(\xi_j) \qquad (7.15)$$
for some xj < ξj < xj+1 , and ηj lying between y(xj ) and yj .
We now assume that, over the interval of interest,
$$\left|\frac{\partial f(x_j, y(x_j))}{\partial z}\right| < L, \qquad |y''(x)| < Y,$$
where L and Y are fixed positive constants. On taking absolute values in (7.15), we obtain
$$|ME(y_{j+1})| \le (1 + hL)\,|ME(y_j)| + \frac{h^2}{2}\, Y. \qquad (7.16)$$
Applying (7.16) recursively, we get
$$|ME(y_{j+1})| \le (1 + hL)^2\, |ME(y_{j-1})| + \big(1 + (1 + hL)\big)\frac{h^2}{2}\, Y \le \cdots \le (1 + hL)^{j+1}\, |ME(y_0)| + \big(1 + (1 + hL) + (1 + hL)^2 + \cdots + (1 + hL)^j\big)\frac{h^2}{2}\, Y.$$
Using the formulas
1. for any α ≠ 1,
$$1 + \alpha + \alpha^2 + \cdots + \alpha^j = \frac{\alpha^{j+1} - 1}{\alpha - 1},$$
2. for any x ≥ −1,
$$(1 + x)^N \le e^{Nx},$$
in the above inequality, we have proved the following theorem.
Theorem 7.3.3.
Let y ∈ C 2 [a, b] be a solution of the IVP (7.1) with
$$\left|\frac{\partial f(x, y)}{\partial y}\right| < L, \qquad |y''(x)| < Y,$$
for all x and y, and some constants L > 0 and Y > 0. The mathematical error in the forward Euler's method at a point xj = x0 + jh satisfies
$$|ME(y_j)| \le \frac{hY}{2L}\left(e^{(x_n - x_0)L} - 1\right) + e^{(x_n - x_0)L}\, |y(x_0) - y_0|. \qquad (7.17)$$
Example 7.3.4.
Consider the initial value problem
$$y' = y, \qquad y(0) = 1, \qquad x \in [0, 1].$$
Let us now find the upper bound for the mathematical error of the forward Euler's method in solving this problem.
Here f (x, y) = y. Therefore, ∂f /∂y = 1 and hence we can take L = 1.
Since y = ex , y ′′ = ex and |y ′′ (x)| ≤ e for 0 ≤ x ≤ 1. Therefore, we take Y = e.
Substituting L = 1, Y = e, x0 = 0 and xn = 1 in (7.17), we get
$$|ME(y_j)| \le \frac{he}{2}\,(e - 1) \approx 2.3354\, h.$$
Here, we assume that there is no approximation in the initial condition and therefore
the second term in (7.17) is zero.
To validate the upper bound obtained above, we shall compute the approximate
solution of the given IVP using forward Euler’s method. The method for the given
IVP reads
yj+1 = yj + hf (xj , yj ) = (1 + h)yj .
The solution of this difference equation satisfying y0 = 1 is
$$y_j = (1 + h)^j.$$
Now, if h = 0.1, then n = 10 and y10 = (1.1)^{10}. Therefore, the forward Euler's method gives y(1) ≈ y10 ≈ 2.5937, whereas the exact value is y(1) = e ≈ 2.71828. The actual error is about 0.1246, whereas the bound obtained from (7.17) is 2.3354 × 0.1 ≈ 0.2335.
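The comparison above can be repeated for several step sizes. The following is a small sketch (not from the original notes) that computes the actual error at x = 1 and the bound 2.3354 h for a few values of h; the loop values are chosen only for illustration.

import math

for h in (0.1, 0.05, 0.01):
    n = round(1.0 / h)
    y_n = (1.0 + h) ** n                         # closed form of the Euler iterates for y' = y, y(0) = 1
    actual = abs(math.e - y_n)                   # error at x = 1
    bound = (h * math.e / 2.0) * (math.e - 1.0)  # bound from (7.17) with L = 1, Y = e
    print(f"h = {h:5.2f}   actual error = {actual:.5f}   bound = {bound:.5f}")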
Remark 7.3.5.
The error bound (7.17) is valid for a large family of initial value problems. However, it usually gives a very pessimistic estimate due to the presence of the exponential terms. For instance, in the above example, if we take xn to be very large, then the corresponding bound will also be very large.
The above error analysis assumes that all computations are carried out with infinite precision, i.e., without floating-point approximation. When we account for the floating-point approximation, writing yn = ỹn + ϵn, a bound on the total error is given in the following theorem. The proof of this theorem is left as an exercise.
Theorem 7.3.6.
Let y ∈ C 2 [a, b] be a solution of the IVP (7.1) with
$$\left|\frac{\partial f(x, y)}{\partial y}\right| < L, \qquad |y''(x)| < Y,$$
for all x and y, and some constants L > 0 and Y > 0. Let yj be the approximate
solution of (7.1) computed using the forward Euler’s method (7.12) with infinite
precision and let ỹj be the corresponding computed value using finite digit floating-
point arithmetic. If
yj = ỹj + ϵj ,
then the total error TE(yj ) := y(xj ) − ỹj in forward Euler’s method at a point
xj = x0 + jh satisfies
$$|TE(y_j)| \le \frac{1}{L}\left(\frac{hY}{2} + \frac{\epsilon}{h}\right)\left(e^{(x_n - x_0)L} - 1\right) + e^{(x_n - x_0)L}\, |\epsilon_0|, \qquad (7.18)$$
where ϵ denotes a bound on the rounding errors, i.e., |ϵj| ≤ ϵ for all j.
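A short side calculation (not part of the original statement, but analogous to the discussion of the optimal step size for numerical differentiation in Section 6.2): the h-dependent factor hY/2 + ϵ/h in the bound (7.18) is minimized where its derivative with respect to h vanishes,
$$\frac{d}{dh}\left(\frac{hY}{2} + \frac{\epsilon}{h}\right) = \frac{Y}{2} - \frac{\epsilon}{h^2} = 0 \quad\Longrightarrow\quad h_* = \sqrt{\frac{2\epsilon}{Y}},$$
so that reducing h below h* makes the floating-point contribution to the total error dominate, just as observed for numerical differentiation.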
The Euler’s method can be obtained by replacing the integral on the right hand side
by the rectangle rule.
Using the integral form over the interval [xj−1, xj+1] and the mid-point quadrature formula
$$\int_{x_{j-1}}^{x_{j+1}} f(s, y)\, ds \approx f(x_j, y_j)\,(x_{j+1} - x_{j-1}),$$
we get the Euler's mid-point method
$$y_{j+1} = y_{j-1} + 2h\, f(x_j, y_j). \qquad (7.20)$$
To compute the value of yj+1, we need to know the values of yj−1 and yj. Note that the above formula cannot be used to compute the value of y1. Hence, we need another method to obtain y1; then yj, for j = 2, 3, · · · , n, can be obtained using (7.20). This method belongs to the class of 2-step methods.
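The following is a minimal sketch (not from the original notes) of the mid-point method (7.20), where the starting value y1 is generated by one forward Euler step; the function name and the driver are illustrative, and the driver applies the method to the initial value problem of Example 7.4.1 below.

def midpoint_method(f, x0, y0, h, n):
    # 2-step mid-point method: y_{j+1} = y_{j-1} + 2 h f(x_j, y_j)
    xs = [x0 + j * h for j in range(n + 1)]
    ys = [y0, y0 + h * f(x0, y0)]            # y1 from a single forward Euler step
    for j in range(1, n):
        ys.append(ys[j - 1] + 2.0 * h * f(xs[j], ys[j]))
    return xs, ys

# y' = y, y(0) = 1, h = 0.01 (Example 7.4.1); prints the approximation of y(0.04)
xs, ys = midpoint_method(lambda x, y: y, 0.0, 1.0, 0.01, 4)
print(xs[-1], ys[-1])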
Example 7.4.1.
Consider the initial-value problem
y ′ = y, y(0) = 1.
Applying the mid-point method (7.20) with h = 0.01 (with y1 obtained, for instance, by a single Euler step), we compute the approximate solution up to x = 0.04. Since the exact solution of this equation is y = e^x, the correct value at x = 0.04 is 1.040811. The error in the mid-point approximation is 0.000003, whereas the error in Euler's method with the same step size was 0.000199.
The methods derived above are explicit methods in the sense that the value of yj+1 is computed using already known values. If we instead use the trapezoidal rule for the integration on the right hand side of (7.19), we get
$$y_{j+1} = y_j + \frac{h}{2}\big(f(x_j, y_j) + f(x_{j+1}, y_{j+1})\big). \qquad (7.21)$$
This method is called the Euler's trapezoidal method. Here, we see that the formula (7.21) involves an implicit relation for yj+1. Such methods are referred to as implicit methods.
Although the Euler's trapezoidal method gives an implicit relation for yj+1, sometimes yj+1 can still be computed explicitly, as illustrated in the following example.
Example 7.4.2.
Let us use the Euler's trapezoidal rule with h = 0.2 to obtain the approximate solution of the initial value problem
$$y' = xy, \qquad y(0) = 1.$$
We have y0 = 1 and
$$y_1 = y_0 + \frac{h}{2}\big(x_0 y_0 + x_1 y_1\big) = 1 + 0.1\,(0 + 0.2\, y_1),$$
which gives (1 − 0.02) y1 = 1, and this implies y1 ≈ 1.0204. Similarly,
$$y_2 = y_1 + \frac{h}{2}\big(x_1 y_1 + x_2 y_2\big) = 1.0204 + 0.1\,(0.2 \times 1.0204 + 0.4\, y_2),$$
and
$$y(0.4) \approx y_2 = \frac{1.0408}{1 - 0.04} \approx 1.0842.$$
In general, the Euler’s trapezoidal rule gives a nonlinear equation for yj+1 as illustrated
below.
Example 7.4.3.
Consider the initial value problem
y ′ = e−y , y(0) = 1.
We use the Euler's trapezoidal rule with h = 0.2 to solve the above problem. We have
$$y_1 = y_0 + \frac{h}{2}\big(e^{-y_0} + e^{-y_1}\big) = 1 + 0.1\,\big(e^{-1} + e^{-y_1}\big),$$
which gives the nonlinear equation
$$y_1 - 0.1\, e^{-y_1} = 1 + 0.1\, e^{-1},$$
and the solution of this equation is the approximate value of the solution y(x1) of the given initial value problem.
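Such an equation can be solved with any of the root-finding or fixed-point methods discussed earlier in these notes. As a sketch (not part of the original example), a simple fixed-point iteration for the equation above converges quickly, since the iteration map has a small Lipschitz constant; the variable names are illustrative only.

import math

# fixed-point iteration y1 <- 1 + 0.1 * (exp(-1) + exp(-y1)) for Example 7.4.3
g = lambda y: 1.0 + 0.1 * (math.exp(-1.0) + math.exp(-y))
y1 = 1.0                       # initial guess: y1 close to y0
for _ in range(20):
    y1 = g(y1)
print("y1 ≈", y1)              # approximate value of y(0.2)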
7.5 Runge-Kutta Methods
7.5.1 Order Two
Let y be a solution of the ODE (7.1a). The Runge-Kutta method of order 2 is obtained by truncating the Taylor expansion of y(x + h) after the quadratic term. We now derive a formula for this method. The Taylor expansion of y(x + h) at the point x up to the quadratic term is
$$y(x + h) = y(x) + h\, y'(x) + \frac{h^2}{2}\, y''(x) + O(h^3). \qquad (7.22)$$
Since y satisfies the given ODE y' = f(x, y), differentiating this ODE with respect to x on both sides gives
$$y''(x) = \frac{\partial f}{\partial x}(x, y(x)) + y'(x)\,\frac{\partial f}{\partial y}(x, y(x)). \qquad (7.23)$$
Substituting (7.23) into (7.22) and using y'(x) = f(x, y(x)), we obtain
$$y(x + h) = y(x) + \frac{h}{2}\, f(x, y(x)) + \frac{h}{2}\left[ f(x, y(x)) + h\,\frac{\partial f}{\partial x}(x, y(x)) + h\, f(x, y(x))\,\frac{\partial f}{\partial y}(x, y(x)) \right] + O(h^3). \qquad (7.24)$$
Taking x = xj, so that x + h = xj+1, this reads
$$y(x_{j+1}) = y(x_j) + \frac{h}{2}\, f(x_j, y(x_j)) + \frac{h}{2}\left[ f(x_j, y(x_j)) + h\,\frac{\partial f}{\partial x}(x_j, y(x_j)) + h\, f(x_j, y(x_j))\,\frac{\partial f}{\partial y}(x_j, y(x_j)) \right] + O(h^3). \qquad (7.25)$$
Let us now expand the function f = f(s, t), which is a function of two variables, into its Taylor series at the point (ξ, τ) and truncate the series after the linear term. It is given by
$$f(s, t) = f(\xi, \tau) + (s - \xi)\,\frac{\partial f}{\partial s}(\xi, \tau) + (t - \tau)\,\frac{\partial f}{\partial t}(\xi, \tau) + O\big((s - \xi)^2\big) + O\big((t - \tau)^2\big).$$
Taking (ξ, τ) = (xj, y(xj)) and comparing the above expansion with the term in the square brackets in the equation (7.25), we get
$$y(x_{j+1}) = y(x_j) + \frac{h}{2}\, f(x_j, y(x_j)) + \frac{h}{2}\Big[ f\big(x_{j+1},\, y(x_j) + h\, f(x_j, y(x_j))\big)\Big] + O(h^3). \qquad (7.26)$$
Truncating the higher order terms and denoting the approximate value of y(xj+1) by yj+1, we get
$$y_{j+1} = y_j + \frac{h}{2}\, f(x_j, y_j) + \frac{h}{2}\Big[ f\big(x_{j+1},\, y_j + h\, f(x_j, y_j)\big)\Big]. \qquad (7.27)$$
Although the terms dropped from (7.26) to obtain (7.27) are of order 3 (namely, O(h³)), the resulting approximation is of order 2, as is evident from the Taylor formula (7.22). The formula (7.27) is therefore known as the Runge-Kutta method of order 2. It may be written as
$$y_{j+1} = y_j + \frac{h}{2}\,(k_1 + k_2),$$
where
$$k_1 = f(x_j, y_j), \qquad k_2 = f(x_{j+1},\, y_j + h\, k_1).$$
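A minimal sketch (not part of the original notes) of the order 2 Runge-Kutta method (7.27); the function name rk2 and the driver are illustrative, and the driver applies the method to the initial value problem of Example 7.5.1 below.

def rk2(f, x0, y0, h, n):
    # Runge-Kutta method of order 2: y_{j+1} = y_j + (h/2) (k1 + k2)
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        ys.append(y + 0.5 * h * (k1 + k2))
        xs.append(x + h)
    return xs, ys

# y' = y, y(0) = 1; prints the approximation of y(0.04) for h = 0.01
xs, ys = rk2(lambda x, y: y, 0.0, 1.0, 0.01, 4)
print(xs[-1], ys[-1])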
Example 7.5.1.
Consider the initial-value problem
y ′ = y, y(0) = 1.
7.5.2 Order Four
We state, without derivation, the formula for the Runge-Kutta method of order 4:
$$y_{j+1} = y_j + \frac{h}{6}\,(k_1 + 2k_2 + 2k_3 + k_4),$$
where
$$k_1 = f(x_j, y_j), \qquad k_2 = f\left(x_j + \frac{h}{2},\, y_j + \frac{h}{2}\, k_1\right), \qquad k_3 = f\left(x_j + \frac{h}{2},\, y_j + \frac{h}{2}\, k_2\right), \qquad k_4 = f(x_j + h,\, y_j + h\, k_3).$$
The local truncation error of the 4th order Runge-Kutta method is of O(h⁵).
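A minimal sketch (not part of the original notes) of the classical fourth order Runge-Kutta method stated above; the function name rk4 and the driver are illustrative, and the driver applies the method to the initial value problem of Example 7.5.2 below.

def rk4(f, x0, y0, h, n):
    # classical 4th order Runge-Kutta method
    xs, ys = [x0], [y0]
    for _ in range(n):
        x, y = xs[-1], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
        xs.append(x + h)
    return xs, ys

# y' = y, y(0) = 1; prints the approximation of y(0.04) for h = 0.01
xs, ys = rk4(lambda x, y: y, 0.0, 1.0, 0.01, 4)
print(xs[-1], ys[-1])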
Example 7.5.2.
Consider the initial-value problem
y ′ = y, y(0) = 1.
7.6 Exercises
1. Let h > 0 and let xj = x0 + jh (j = 1, 2, · · · , n) be given nodes. Consider the initial value problem y'(x) = f(x, y), y(x0) = y0, with
   $$\frac{\partial f(x, y)}{\partial y} \le 0.$$
   i) Using the error analysis of Euler's method, show that there exists an h > 0 such that
   $$|e_n| \le |e_{n-1}| + \frac{h^2}{2}\, f''(\xi) \quad \text{for some } \xi \in (x_{n-1}, x_n),$$
   where en = y(xn) − yn with yn obtained using the Euler method.
   ii) Applying the conclusion of (i) above recursively, prove that
   $$|e_n| \le |e_0| + \frac{1}{2}\, n h^2\, Y \quad \text{where } Y = \max_{x_0 \le x \le x_n} |y''(x)|. \qquad (**)$$
2. The solution of the initial value problem
is y(x) = sin x. For λ = −20, find the approximate value of y(3) using the Euler’s
method with h = 0.5. Compute the error bound given in (∗∗), and show that the actual absolute error exceeds this computed error bound. Explain why this does not contradict the validity of (∗∗).
3. Derive the backward Euler’s method for finding the approximate value of y(xn ) for
some xn < 0, where y satisfies the initial value problem y ′ (x) = f (x, y), y(0) = y0 .
4. Consider the initial value problem y ′ = −2y, 0 ≤ x ≤ 1, y(0) = 1.
i) Find an upper bound on the error in approximating the value of y(1) com-
puted using the Euler’s method (at x = 1) in terms of the step size h.
ii) For each h, solve the difference equation which results from the Euler’s
method, and obtain an approximate value of y(1).
iii) Find the error involved in the approximate value of y(1) obtained in (ii)
above by comparing with the exact solution.
iv) Compare the error bound obtained in (i) with the actual error obtained in
(iii) for h = 0.1, and for h = 0.01.
   v) If we want the absolute value of the error obtained in (iii) to be at most 0.5 × 10⁻⁶, then how small should the step size h be?
5. Consider the initial value problem y ′ = xy, y(0) = 1. Estimate the error involved
in the approximate value of y(1) computed using the Euler’s method (with infinite
precision) with step size h = 0.01.
6. Find an upper bound for the propagated error in Euler method (with infinite
precision) with h = 0.1 for solving the initial value problem y ′ = y, y(0) = 1, in
the interval
i) [0, 1] and
Derive a 2-step method, based on the two-point Gaussian quadrature rule, for the initial value problem
$$y' = e^{-x^2}, \qquad y(x_0) = y_0,$$
in the form
$$y_{j+1} = y_{j-1} + h\left( e^{-\left(x_j - \frac{h}{\sqrt{3}}\right)^2} + e^{-\left(x_j + \frac{h}{\sqrt{3}}\right)^2} \right),$$
when the nodes are equally spaced with spacing h = xj+1 − xj, j ∈ ℤ (h > 0). Let x0 = 0 and y0 = 1. Using the method derived above, obtain approximate values of y(−0.1) and y(0.1).