
Numerical Mathematics

MS213 Additional Revision Sheet X


Worked Examples

X.1 Compute 5 iterations of the following methods (using the indicated starting values) to approximate a root of

$$f(x) = x - \cos(x)$$

in the interval [0, 1]. Tabulate your results neatly showing, for each iteration n, the values of n, p_{n-1}, p_n and |p_n − p_{n-1}|:

(a) Bisection Method, [a, b] = [0, 1]
(b) Fixed Point Iteration, p_0 = 0.5
(c) Newton-Raphson Method, p_0 = 0.5
(d) Secant Method, p_0 = 0, p_1 = 1
(e) Steffensen's Method, p_0 = 0.5

Solution

(a) To find a solution of f(x) = x − cos(x) = 0, the Bisection method generates a sequence of approximations p_n defined by:

$$p_n = \frac{1}{2}(a_n + b_n)$$

where [a_0, b_0] = [0, 1]. The first 5 iterations are as follows:
Iter an−1 bn−1 pn f (an−1 ) f (pn ) |pn − pn−1 |
1 0.000000 1.000000 0.500000 -1.000 -0.378 0.5000000
2 0.500000 1.000000 0.750000 -0.378 0.018 0.2500000
3 0.500000 0.750000 0.625000 -0.378 -0.186 0.1250000
4 0.625000 0.750000 0.687500 -0.186 -0.085 0.0625000
5 0.687500 0.750000 0.718750 -0.085 -0.034 0.0312500
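The bisection update above is mechanical enough to script. The following Python sketch is my own illustration (not part of the original sheet); in particular, initialising p_{n-1} to a for the first |p_n − p_{n-1}| entry is a convention I chose to match the table:

import math

def bisect(f, a, b, n_iter=5):
    # Assumes f(a) and f(b) have opposite signs, so a root lies in [a, b].
    p_prev = a  # convention for the first |p_n - p_{n-1}| entry
    for n in range(1, n_iter + 1):
        p = 0.5 * (a + b)
        print(f"{n}  a={a:.6f}  b={b:.6f}  p={p:.6f}  |dp|={abs(p - p_prev):.7f}")
        if f(a) * f(p) < 0:   # root lies in the left half: move b
            b = p
        else:                 # otherwise move a
            a = p
        p_prev = p

bisect(lambda x: x - math.cos(x), 0.0, 1.0)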
(b) To use fixed-point iteration to solve the equation:

x = g(x) = cos(x)

we generate a sequence of approximations pn+1 = g (pn ) starting with p0 = 0.5. The first 5
iterations are:
n pn−1 pn |pn − pn−1 |
1 0.500000000 0.877582562 0.377582562
2 0.877582562 0.639012494 0.238570068
3 0.639012494 0.802685101 0.163672607
4 0.802685101 0.694778027 0.107907074
5 0.694778027 0.768195831 0.073417804
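As a quick machine check, a minimal Python sketch of the iteration p_{n+1} = g(p_n) (my own illustration, not from the sheet):

import math

def fixed_point(g, p, n_iter=5):
    for n in range(1, n_iter + 1):
        p_new = g(p)  # one fixed-point step
        print(f"{n}  {p:.9f}  {p_new:.9f}  {abs(p_new - p):.9f}")
        p = p_new

fixed_point(math.cos, 0.5)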



(c) To use Newton’s method to solve the equation:

f (x) = x − cos(x) = 0

we generate a sequence of approximations, defined by

$$p_{n+1} = p_n - \frac{f(p_n)}{f'(p_n)} = p_n - \frac{p_n - \cos(p_n)}{1 + \sin(p_n)}$$

starting with p0 = 0.5. The iterations are:

n    p_n           f(p_n)      f'(p_n)     p_{n+1}       |p_{n+1} − p_n|


1 0.50000000 -0.377583 1.479426 0.75522242 0.25522242
2 0.75522242 0.027103 1.685451 0.73914167 0.01608075
3 0.73914167 0.000095 1.673654 0.73908513 0.00005653
4 0.73908513 0.000000 1.673612 0.73908513 0.00000000
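A hedged Python sketch of the Newton-Raphson loop (my own illustration; the stopping rule on |p_{n+1} − p_n| is my own choice):

import math

def newton(f, fprime, p, tol=1e-8, max_iter=20):
    for n in range(1, max_iter + 1):
        p_new = p - f(p) / fprime(p)   # Newton step
        print(f"{n}  p={p:.8f}  p_next={p_new:.8f}  |dp|={abs(p_new - p):.8f}")
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

newton(lambda x: x - math.cos(x), lambda x: 1.0 + math.sin(x), 0.5)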

(d) To use the Secant method to solve the equation:

f (x) = x − cos(x) = 0

we generate a sequence of approximations, defined by

$$p_{n+1} = p_n - \frac{f(p_n)\,(p_n - p_{n-1})}{f(p_n) - f(p_{n-1})}$$

starting with p0 = 0 and p1 = 1. The first 5 iterations are:

n pn−1 f (pn−1 ) pn f (pn ) pn+1 |pn+1 − pn |


1 0.00000000 -1.000000 1.00000000 0.459698 0.68507336 0.31492664
2 1.00000000 0.459698 0.68507336 -0.089299 0.73629900 0.05122564
3 0.68507336 -0.089299 0.73629900 -0.004660 0.73911936 0.00282036
4 0.73629900 -0.004660 0.73911936 0.000057 0.73908511 0.00003425
5 0.73911936 0.000057 0.73908511 -0.000000 0.73908513 0.00000002
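The secant recurrence translates directly into Python; this sketch (my own, not part of the sheet) carries the two most recent iterates:

import math

def secant(f, p0, p1, n_iter=5):
    for n in range(1, n_iter + 1):
        # Newton step with f' replaced by a difference quotient
        p2 = p1 - f(p1) * (p1 - p0) / (f(p1) - f(p0))
        print(f"{n}  p={p1:.8f}  p_next={p2:.8f}  |dp|={abs(p2 - p1):.8f}")
        p0, p1 = p1, p2
    return p1

secant(lambda x: x - math.cos(x), 0.0, 1.0)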

(e) To use Steffensen’s method to solve the equation:

x = g(x) = cos(x)

we first generate two fixed-point approximations pn+1 = g (pn ) and pn+2 = g (pn+1 ) starting
with p0 = 0.5. Steffensen’s method then provides the improved approximations p∗n given by:

$$p_n^* = p_n - \frac{(\Delta p_n)^2}{\Delta^2 p_n} = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}$$

The iterations are:


n pn pn+1 pn+2 p∗n |p∗n − pn |
1 0.50000000 0.87758256 0.63901249 0.73138519 0.23138519
2 0.73138519 0.74424995 0.73559621 0.73907634 0.00769115
3 0.73907634 0.73909106 0.73908114 0.73908513 0.00000879
4 0.73908513 0.73908513 0.73908513 0.73908513 0.00000000
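Steffensen's method is Aitken's Δ² formula applied to two fresh fixed-point iterates at every step; a short Python sketch of that reading (my own illustration):

import math

def steffensen(g, p, n_iter=4):
    for n in range(1, n_iter + 1):
        p1 = g(p)          # first fixed-point iterate
        p2 = g(p1)         # second fixed-point iterate
        p_star = p - (p1 - p) ** 2 / (p2 - 2 * p1 + p)   # Aitken acceleration
        print(f"{n}  p={p:.8f}  p*={p_star:.8f}  |dp|={abs(p_star - p):.8f}")
        p = p_star
    return p

steffensen(math.cos, 0.5)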



X.2 (a) With 4-digit arithmetic and rounding, solve the linear system:

2.51 x_1 + 1.48 x_2 + 4.53 x_3 = 0.05
1.48 x_1 + 0.93 x_2 − 1.30 x_3 = 1.03
2.68 x_1 + 3.04 x_2 − 1.48 x_3 = −0.53

using the Gaussian elimination algorithm:


(i) without pivoting;
(ii) with partial pivoting; and
(iii) scaled column pivoting.
(b) Find an LU decomposition for the matrix A using:
(i) Doolittle’s algorithm (i.e. where L is a unit lower triangular matrix);
(ii) Crout’s algorithm (i.e. where U is a unit upper triangular matrix); and
(iii) Choleski’s method (i.e. where A = LLT ).
and hence solve Ax = b where
    
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad b = \begin{pmatrix} 20 \\ 144 \\ 262 \end{pmatrix}$$

Solution

(a) (i) Writing the system in augmented matrix form, the basic steps in the Gaussian elim-
ination process (without pivoting and using 4-digit arithmetic with rounding) are as
follows:
   
$$\left[\begin{array}{ccc|c} 2.51 & 1.48 & 4.53 & 0.05 \\ 1.48 & 0.93 & -1.30 & 1.03 \\ 2.68 & 3.04 & -1.48 & -0.53 \end{array}\right] \to \left[\begin{array}{ccc|c} 2.51 & 1.4800 & 4.530 & 0.0500 \\ 0 & 0.0574 & -3.971 & 1.0010 \\ 0 & 1.4590 & -6.318 & -0.5834 \end{array}\right] \to \left[\begin{array}{ccc|c} 2.51 & 1.4800 & 4.530 & 0.050 \\ 0 & 0.0574 & -3.971 & 1.001 \\ 0 & 0 & 94.580 & -26.030 \end{array}\right]$$

Back-substitution provides the solution:

$$x_3 = \frac{-26.03}{94.58} = -0.2752, \qquad x_2 = \frac{1.001 - 1.093}{0.0574} = -1.603, \qquad x_1 = \frac{0.05 + 3.619}{2.51} = 1.462$$
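For comparison, a Python/NumPy sketch of Gaussian elimination with back-substitution. It is my own illustration: it works in full double precision rather than the 4-digit rounded arithmetic of the worked example (so its answers track the exact solution more closely than the hand computation), and the pivot flag switches on the partial-pivoting strategy used in part (ii) below:

import numpy as np

def gauss_solve(A, b, pivot=False):
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    M = np.hstack([A, b.reshape(-1, 1)])          # augmented matrix [A | b]
    for k in range(n - 1):
        if pivot:                                  # bring the largest |entry| to the pivot row
            r = k + np.argmax(np.abs(M[k:, k]))
            M[[k, r]] = M[[r, k]]
        for i in range(k + 1, n):
            M[i, k:] -= (M[i, k] / M[k, k]) * M[k, k:]   # eliminate column k
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back-substitution
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = [[2.51, 1.48, 4.53], [1.48, 0.93, -1.30], [2.68, 3.04, -1.48]]
b = [0.05, 1.03, -0.53]
print(gauss_solve(A, b, pivot=True))   # close to (1.45, -1.59, -0.275)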
(ii) Writing the system in augmented matrix form, the basic steps in the Gaussian elimi-
nation process (using a partial pivoting strategy and 4-digit arithmetic with rounding)
are as follows:



Interchange Row 1 with Row 3
   
$$\left[\begin{array}{ccc|c} 2.51 & 1.48 & 4.53 & 0.05 \\ 1.48 & 0.93 & -1.30 & 1.03 \\ 2.68 & 3.04 & -1.48 & -0.53 \end{array}\right] \to \left[\begin{array}{ccc|c} 2.68 & 3.04 & -1.48 & -0.53 \\ 1.48 & 0.93 & -1.30 & 1.03 \\ 2.51 & 1.48 & 4.53 & 0.05 \end{array}\right] \to \left[\begin{array}{ccc|c} 2.68 & 3.040 & -1.4800 & -0.5300 \\ 0 & -0.749 & -0.4827 & 1.3230 \\ 0 & -1.367 & 5.9160 & 0.5464 \end{array}\right]$$
Interchange Row 2 with Row 3
   
$$\left[\begin{array}{ccc|c} 2.68 & 3.040 & -1.4800 & -0.5300 \\ 0 & -0.749 & -0.4827 & 1.3230 \\ 0 & -1.367 & 5.9160 & 0.5464 \end{array}\right] \to \left[\begin{array}{ccc|c} 2.68 & 3.040 & -1.4800 & -0.5300 \\ 0 & -1.367 & 5.9160 & 0.5464 \\ 0 & -0.749 & -0.4827 & 1.3230 \end{array}\right] \to \left[\begin{array}{ccc|c} 2.68 & 3.040 & -1.480 & -0.5300 \\ 0 & -1.367 & 5.916 & 0.5464 \\ 0 & 0 & -3.724 & 1.0240 \end{array}\right]$$
Back-substitution provides the solution:

$$x_3 = \frac{1.024}{-3.724} = -0.275, \qquad x_2 = \frac{0.5464 + 1.627}{-1.367} = -1.59, \qquad x_1 = \frac{-0.53 + 4.427}{2.68} = 1.454$$
(iii) Writing the system in augmented matrix form, the basic steps in the Gaussian elim-
ination process (using a scaled-column pivoting strategy and 4-digit arithmetic with
rounding) are as follows:
Interchange Row 1 with Row 2
   
$$\left[\begin{array}{ccc|c} 2.51 & 1.48 & 4.53 & 0.05 \\ 1.48 & 0.93 & -1.30 & 1.03 \\ 2.68 & 3.04 & -1.48 & -0.53 \end{array}\right] \to \left[\begin{array}{ccc|c} 1.48 & 0.93 & -1.30 & 1.03 \\ 2.51 & 1.48 & 4.53 & 0.05 \\ 2.68 & 3.04 & -1.48 & -0.53 \end{array}\right] \to \left[\begin{array}{ccc|c} 1.48 & 0.930 & -1.300 & 1.030 \\ 0 & -0.097 & 6.735 & -1.697 \\ 0 & 1.356 & 0.874 & -2.395 \end{array}\right]$$
Interchange Row 2 with Row 3
   
$$\left[\begin{array}{ccc|c} 1.48 & 0.930 & -1.300 & 1.030 \\ 0 & -0.097 & 6.735 & -1.697 \\ 0 & 1.356 & 0.874 & -2.395 \end{array}\right] \to \left[\begin{array}{ccc|c} 1.48 & 0.930 & -1.300 & 1.030 \\ 0 & 1.356 & 0.874 & -2.395 \\ 0 & -0.097 & 6.735 & -1.697 \end{array}\right] \to \left[\begin{array}{ccc|c} 1.48 & 0.930 & -1.300 & 1.030 \\ 0 & 1.356 & 0.874 & -2.395 \\ 0 & 0 & 6.798 & -1.868 \end{array}\right]$$
Back-substitution provides the solution:

$$x_3 = \frac{-1.868}{6.798} = -0.2748, \qquad x_2 = \frac{-2.395 + 0.2402}{1.356} = -1.589, \qquad x_1 = \frac{1.03 + 1.121}{1.48} = 1.453$$



(b) (i) The starting point for the Doolittle Algorithm is the matrix equation:
     
$$\begin{pmatrix} 1 & 0 & 0 \\ l_{2,1} & 1 & 0 \\ l_{3,1} & l_{3,2} & 1 \end{pmatrix} \cdot \begin{pmatrix} u_{1,1} & u_{1,2} & u_{1,3} \\ 0 & u_{2,2} & u_{2,3} \\ 0 & 0 & u_{3,3} \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{pmatrix}$$
Noting that u_{1,1} = a_{1,1} = 1, we work systematically across a row of U followed by the corresponding column of L, using row-column multiplication:

u_{1,1} = 1,  u_{1,2} = 2,  u_{1,3} = 3
l_{2,1} = 2,  l_{3,1} = 3
u_{2,2} = 16,  u_{2,3} = 20
l_{3,2} = 5/4
u_{3,3} = 36
The LU matrix factorization is therefore:

$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & \frac{5}{4} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 & 3 \\ 0 & 16 & 20 \\ 0 & 0 & 36 \end{pmatrix}$$
To solve the system Ax = b, writing A = LU , we first solve Ly = b and then back-
substitute, by solving U x = y, to find x. Solving Ly = b yields:
   
$$[L\,y \mid b] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 20 \\ 2 & 1 & 0 & 144 \\ 3 & \frac{5}{4} & 1 & 262 \end{array}\right] \;\Rightarrow\; y = \begin{pmatrix} 20 \\ 104 \\ 72 \end{pmatrix}$$

Solving Ux = y yields:

$$[U\,x \mid y] = \left[\begin{array}{ccc|c} 1 & 2 & 3 & 20 \\ 0 & 16 & 20 & 104 \\ 0 & 0 & 36 & 72 \end{array}\right] \;\Rightarrow\; x = \begin{pmatrix} 6 \\ 4 \\ 2 \end{pmatrix}$$
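The same row-of-U / column-of-L sweep can be written compactly in NumPy. This is my own sketch, under the assumption that no pivoting is required (true here, since the leading minors of A are nonzero):

import numpy as np

def doolittle(A):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]                          # row k of U
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]  # column k of L
    return L, U

L, U = doolittle([[1, 2, 3], [2, 20, 26], [3, 26, 70]])
print(L)   # [[1, 0, 0], [2, 1, 0], [3, 1.25, 1]]
print(U)   # [[1, 2, 3], [0, 16, 20], [0, 0, 36]]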
(ii) The starting point for the Crout Algorithm is the matrix equation:
     
$$\begin{pmatrix} l_{1,1} & 0 & 0 \\ l_{2,1} & l_{2,2} & 0 \\ l_{3,1} & l_{3,2} & l_{3,3} \end{pmatrix} \cdot \begin{pmatrix} 1 & u_{1,2} & u_{1,3} \\ 0 & 1 & u_{2,3} \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{pmatrix}$$
Noting that l_{1,1} = a_{1,1} = 1, we work systematically across a row of U followed by the corresponding column of L, using row-column multiplication:

l_{1,1} = 1,  u_{1,2} = 2,  u_{1,3} = 3
l_{2,1} = 2,  l_{3,1} = 3
l_{2,2} = 16,  u_{2,3} = 5/4
l_{3,2} = 20
l_{3,3} = 36



The LU matrix factorization is therefore:

$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 16 & 0 \\ 3 & 20 & 36 \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & \frac{5}{4} \\ 0 & 0 & 1 \end{pmatrix}$$

To solve the system Ax = b, writing A = LU, we first solve Ly = b and then back-substitute, by solving Ux = y, to find x. Solving Ly = b yields:

$$[L\,y \mid b] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 20 \\ 2 & 16 & 0 & 144 \\ 3 & 20 & 36 & 262 \end{array}\right] \;\Rightarrow\; y = \begin{pmatrix} 20 \\ \frac{13}{2} \\ 2 \end{pmatrix}$$

Solving Ux = y yields:

$$[U\,x \mid y] = \left[\begin{array}{ccc|c} 1 & 2 & 3 & 20 \\ 0 & 1 & \frac{5}{4} & \frac{13}{2} \\ 0 & 0 & 1 & 2 \end{array}\right] \;\Rightarrow\; x = \begin{pmatrix} 6 \\ 4 \\ 2 \end{pmatrix}$$

(iii) The starting point for the Choleski Algorithm is the matrix equation:

$$\begin{pmatrix} l_{1,1} & 0 & 0 \\ l_{2,1} & l_{2,2} & 0 \\ l_{3,1} & l_{3,2} & l_{3,3} \end{pmatrix} \cdot \begin{pmatrix} l_{1,1} & l_{2,1} & l_{3,1} \\ 0 & l_{2,2} & l_{3,2} \\ 0 & 0 & l_{3,3} \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{pmatrix}$$

Noting that l_{1,1}² = a_{1,1} = 1, we work systematically across a row of Lᵀ and the corresponding column of L, using row-column multiplication:

l_{1,1} = 1,  l_{2,1} = 2,  l_{3,1} = 3
l_{2,2} = 4,  l_{3,2} = 5
l_{3,3} = 6

The LLᵀ matrix factorization is therefore:

$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 4 & 0 \\ 3 & 5 & 6 \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix}$$

To solve the system Ax = b, writing A = LLT , we first solve Ly = b and then back-
substitute, by solving LT x = y, to find x. Solving Ly = b yields:
   
$$[L\,y \mid b] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 20 \\ 2 & 4 & 0 & 144 \\ 3 & 5 & 6 & 262 \end{array}\right] \;\Rightarrow\; y = \begin{pmatrix} 20 \\ 26 \\ 12 \end{pmatrix}$$

Solving Lᵀx = y yields:

$$[L^T x \mid y] = \left[\begin{array}{ccc|c} 1 & 2 & 3 & 20 \\ 0 & 4 & 5 & 26 \\ 0 & 0 & 6 & 12 \end{array}\right] \;\Rightarrow\; x = \begin{pmatrix} 6 \\ 4 \\ 2 \end{pmatrix}$$
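NumPy's built-in Cholesky routine returns exactly the lower-triangular factor found above; the forward and back substitutions below are a hand-rolled sketch (my own) of the two triangular solves:

import numpy as np

A = np.array([[1, 2, 3], [2, 20, 26], [3, 26, 70]], dtype=float)
b = np.array([20, 144, 262], dtype=float)

L = np.linalg.cholesky(A)        # lower triangular, A = L @ L.T
n = len(b)
y = np.zeros(n)
for i in range(n):               # forward substitution: L y = b
    y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
x = np.zeros(n)
for i in range(n - 1, -1, -1):   # back-substitution: L^T x = y
    x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
print(L)       # [[1,0,0],[2,4,0],[3,5,6]]
print(y, x)    # y = [20, 26, 12], x = [6, 4, 2]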



X.3 By performing 4 iterations and starting with the initial guess x1 = x2 = x3 = 0, apply:

(a) the Jacobi iterative method;


(b) the Gauss-Seidel iterative method; and
(c) the Successive Over-Relaxation (SOR) iterative method, with acceleration parameter ω =
1.06;

to approximate the solution of the linear system:

3 x1 − x2 = 2
−x1 + 3 x2 − x3 = 1
−x2 + 3 x3 = 2

In the case of (a) and (b), calculate the spectral radius of the iteration matrix and hence comment
on the expected convergence properties of the two methods. Using the results of part (a), calculate
the optimum value of ω for the SOR method.

Solution

(a) The iteration scheme for the Jacobi method may be written in the following form:

$$x_1^{(k+1)} = \frac{1}{3}\left[2 + x_2^{(k)}\right], \qquad x_2^{(k+1)} = \frac{1}{3}\left[1 + x_1^{(k)} + x_3^{(k)}\right], \qquad x_3^{(k+1)} = \frac{1}{3}\left[2 + x_2^{(k)}\right]$$
This leads to the sequence of iterations:

k          0    1         2         3         4
x_1^(k)    0    0.66667   0.77778   0.92593   0.95062
x_2^(k)    0    0.33333   0.77778   0.85185   0.95062
x_3^(k)    0    0.66667   0.77778   0.92593   0.95062

The iteration matrix for Jacobi's method is:

$$T_J = -D^{-1}(L + U) = -\begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix}^{-1} \begin{pmatrix} 0 & -1 & 0 \\ -1 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & \frac{1}{3} & 0 \\ \frac{1}{3} & 0 & \frac{1}{3} \\ 0 & \frac{1}{3} & 0 \end{pmatrix}$$

For Jacobi's method, the characteristic polynomial

$$|\lambda I - T_J| = \begin{vmatrix} \lambda & -\frac{1}{3} & 0 \\ -\frac{1}{3} & \lambda & -\frac{1}{3} \\ 0 & -\frac{1}{3} & \lambda \end{vmatrix} = \lambda\left(\lambda^2 - \frac{2}{9}\right)$$

has roots

$$\lambda = 0 \qquad \text{and} \qquad \lambda = \pm\sqrt{0.222222} = \pm 0.471405$$

which leads to ρ(T_J) = 0.471405.
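Both the iteration and the spectral radius are easy to check numerically; the following NumPy sketch (my own illustration) runs the four Jacobi sweeps and evaluates ρ(T_J) from the eigenvalues directly:

import numpy as np

A = np.array([[3., -1., 0.], [-1., 3., -1.], [0., -1., 3.]])
b = np.array([2., 1., 2.])

D = np.diag(np.diag(A))
LU = A - D                         # strictly lower plus strictly upper part
x = np.zeros(3)
for k in range(1, 5):
    x = (b - LU @ x) / np.diag(A)  # Jacobi sweep: x_new = D^{-1}(b - (L+U)x)
    print(k, x)

T_J = -np.linalg.inv(D) @ LU
print("rho(T_J) =", max(abs(np.linalg.eigvals(T_J))))   # ~0.471405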
(b) The iteration scheme for the Gauss-Seidel method may be written in the following form:

$$x_1^{(k+1)} = \frac{1}{3}\left[2 + x_2^{(k)}\right], \qquad x_2^{(k+1)} = \frac{1}{3}\left[1 + x_1^{(k+1)} + x_3^{(k)}\right], \qquad x_3^{(k+1)} = \frac{1}{3}\left[2 + x_2^{(k+1)}\right]$$
This leads to the sequence of iterations:

k          0    1         2         3         4
x_1^(k)    0    0.66667   0.85185   0.96708   0.99268
x_2^(k)    0    0.55556   0.90123   0.97805   0.99512
x_3^(k)    0    0.85185   0.96708   0.99268   0.99837

The iteration matrix for the Gauss-Seidel method is:

$$T_G = -(L + D)^{-1} U = -\begin{pmatrix} 3 & 0 & 0 \\ -1 & 3 & 0 \\ 0 & -1 & 3 \end{pmatrix}^{-1} \begin{pmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix} = -\begin{pmatrix} \frac{1}{3} & 0 & 0 \\ \frac{1}{9} & \frac{1}{3} & 0 \\ \frac{1}{27} & \frac{1}{9} & \frac{1}{3} \end{pmatrix} \cdot \begin{pmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & \frac{1}{3} & 0 \\ 0 & \frac{1}{9} & \frac{1}{3} \\ 0 & \frac{1}{27} & \frac{1}{9} \end{pmatrix}$$

For the Gauss-Seidel method, the characteristic polynomial

$$|\lambda I - T_G| = \begin{vmatrix} \lambda & -\frac{1}{3} & 0 \\ 0 & \lambda - \frac{1}{9} & -\frac{1}{3} \\ 0 & -\frac{1}{27} & \lambda - \frac{1}{9} \end{vmatrix} = \lambda^2\left(\lambda - \frac{2}{9}\right)$$

has roots

$$\lambda = 0, \qquad \lambda = 0.222222$$

so that ρ(T_G) = 0.222222. As expected, ρ(T_G) = ρ(T_J)², so that convergence will be twice as rapid for Gauss-Seidel.
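A matching Gauss-Seidel sketch (again my own illustration); the only change from the Jacobi code is that each component is overwritten in place, so updated values are used immediately within a sweep:

import numpy as np

A = np.array([[3., -1., 0.], [-1., 3., -1.], [0., -1., 3.]])
b = np.array([2., 1., 2.])

x = np.zeros(3)
for k in range(1, 5):
    for i in range(3):   # use freshly updated components as soon as they exist
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    print(k, x)

T_G = -np.linalg.inv(np.tril(A)) @ np.triu(A, 1)
print("rho(T_G) =", max(abs(np.linalg.eigvals(T_G))))   # ~0.222222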
(c) Let x̂ denote the vector obtained from an application of the Gauss-Seidel method and let ω denote the acceleration parameter; then the SOR method may be written in the form:

$$x_i^{(k+1)} = \omega \hat{x}_i^{(k+1)} + (1 - \omega)\, x_i^{(k)}$$

for each component i = 1, 2, …, n. For the linear system given by:

$$3x_1 - x_2 = 2, \qquad -x_1 + 3x_2 - x_3 = 1, \qquad -x_2 + 3x_3 = 2$$



the Gauss-Seidel equations are:

$$\hat{x}_1^{(k+1)} = \frac{1}{3}\left[2 + x_2^{(k)}\right], \qquad \hat{x}_2^{(k+1)} = \frac{1}{3}\left[1 + x_1^{(k+1)} + x_3^{(k)}\right], \qquad \hat{x}_3^{(k+1)} = \frac{1}{3}\left[2 + x_2^{(k+1)}\right]$$

With ω = 1.06, the scheme

$$x_i^{(k+1)} = \omega \hat{x}_i^{(k+1)} + (1 - \omega)\, x_i^{(k)}$$

then leads to the following sequence of iterations:

k          0    1         2         3         4
x_1^(k)    0    0.70667   0.87733   0.99044   0.99888
x_2^(k)    0    0.60302   0.95212   0.99522   0.99955
x_3^(k)    0    0.91973   0.98790   0.99904   0.99990

Note: As an alternative and possibly simpler approach, we can first calculate the Gauss-Seidel iterate and then combine it with the previous iteration value. As this is an algebraically equivalent formulation, it leads to the same sequence of iterations. Results are presented correct to 6 decimal places:

                                            x_1        x_2        x_3
x_i^(0)                                     0.000000   0.000000   0.000000
x̂_i^(1)                                     0.666667   0.568889   0.867674
x_i^(1) = 1.06 x̂_i^(1) − 0.06 x_i^(0)       0.706667   0.603022   0.919735
x̂_i^(2)                                     0.867674   0.932356   0.984039
x_i^(2) = 1.06 x̂_i^(2) − 0.06 x_i^(1)       0.877335   0.952116   0.987897
x̂_i^(3)                                     0.984039   0.992779   0.998406
x_i^(3) = 1.06 x̂_i^(3) − 0.06 x_i^(2)       0.990441   0.995219   0.999037
x̂_i^(4)                                     0.998406   0.999307   0.999851
x_i^(4) = 1.06 x̂_i^(4) − 0.06 x_i^(3)       0.998884   0.999552   0.999900




From part (a), we found that ρ(TJ ) = 0.222222 and so we obtain
2 2
ω= p = √ ≈ 1.06275
1 + 1 − ρ(TJ ) 2 1 + 1 − 0.222222
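A short SOR sketch (my own illustration) that relaxes each Gauss-Seidel component update with ω and then evaluates the optimal ω from the Jacobi spectral radius:

import numpy as np

A = np.array([[3., -1., 0.], [-1., 3., -1.], [0., -1., 3.]])
b = np.array([2., 1., 2.])
omega = 1.06

x = np.zeros(3)
for k in range(1, 5):
    for i in range(3):
        gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        x[i] = omega * gs + (1 - omega) * x[i]    # relaxed Gauss-Seidel update
    print(k, x)

rho_J = 0.471405
print("omega_opt =", 2 / (1 + np.sqrt(1 - rho_J**2)))   # ~1.06275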



X.4 (a) Let f(x) = sin(x) and a = π/4. Use the formula

$$f'(x) \approx \frac{1}{h}\mu\delta f(x) = \frac{f(x+h) - f(x-h)}{2h}$$

to approximate f′(a) = cos(π/4), using h = 0.5, 0.25, 0.125 and 0.0625. Derive the truncation error for the approximation and compare it with the actual error for each value of h. The error appears to decrease by a constant factor as h is decreased by a factor of 2. Explain. [Note: Tabulate your results neatly.]
(b) The following table presents second-order approximations to the first derivative of the function f(x) = sin x at the point x = π/4 using the finite-difference formula:

$$f'(x) \approx R_1(h) = \frac{f(x+h) - f(x-h)}{2h}$$

using successively smaller mesh spacings h = 0.5/2ⁱ, i = 0, 1, 2, 3:
R1 R2 R3 R4
R1 (0.5) = 0.678010
R1 (0.25) = 0.699764 R2 (0.5)
R1 (0.125) = 0.705267 R2 (0.25) R3 (0.25)
R1 (0.0625) = 0.706647 R2 (0.125) R3 (0.125) R4 (0.125)
Complete the Richardson extrapolation table to obtain R4 (0.125).
(c) Assume that f ∈ C⁴[a, b], that x − h, x, x + h ∈ [a, b] where h > 0, and that

$$\delta_x^2(h) \equiv \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}$$

is an approximation to the second derivative of f(x).
(i) Derive an expression for the leading terms (in powers of h) in the truncation error of
this approximation.
(ii) Assume that a computer is utilised to make numerical computations and that

f (x0 − h) = y−1 + e−1 , f (x0 ) = y0 + e0 and f (x0 + h) = y1 + e1 ,

where f(x₀ − h), f(x₀) and f(x₀ + h) are approximated by the numerical values y₋₁, y₀ and y₁, and e₋₁, e₀ and e₁ are the associated round-off errors, respectively. Assume further that the round-off errors are bounded in magnitude by the machine epsilon ε. Derive an upper bound, in terms of h and ε, for the total error (round-off and truncation) in the computational formula δ_x²(h) and hence determine the value of h which minimizes this error.
(iii) Let f(x) = cos(x). Use the formula for δ_x²(h) with h = 0.1, 0.01 and 0.001 to find approximations to f″(0.8). Carry full calculator precision in all cases. Compare with the true value f″(0.8) = −cos(0.8). Using your results from part (ii) and assuming a machine epsilon with value ε = 0.5 × 10⁻⁹, estimate the optimal step-size for this approximation and, in the context of the errors observed above, comment on your results.




(d) Compile a table of values for the function f(x) = √x for 1 ≤ x ≤ 1.3 in steps of 0.05. Construct a forward difference table, using 5 decimal places of accuracy, to estimate y′(1.0) and y″(1.0) using the respective formulae:

$$y' \approx \frac{1}{h}\left(\Delta - \frac{1}{2}\Delta^2 + \frac{1}{3}\Delta^3\right) y_0, \qquad y'' \approx \frac{1}{h^2}\left(\Delta^2 - \Delta^3\right) y_0,$$

where h = 0.05. Find the error in these approximations.

Solution

(a) By Taylor expansions, we find that

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6} f'''(\xi)$$

where the latter term is the truncation error of the approximation, for some ξ ∈ [x − h, x + h]. Using the given formula:

$$\frac{1}{h}\mu\delta f(x) = \frac{f(x+h) - f(x-h)}{2h}$$

for different values of h, we obtain the following approximations to f′(a) = cos(π/4):

h        (1/h) μδf(x)   Abs Error        Ratio     max (h²/6)|f‴(x)|
0.5      0.678010       2.9097 × 10⁻²              3.9981 × 10⁻²
0.25     0.699764       7.3427 × 10⁻³    3.9627    9.9953 × 10⁻³
0.125    0.705267       1.8400 × 10⁻³    3.9906    2.4988 × 10⁻³
0.0625   0.706647       4.6027 × 10⁻⁴    3.9977    6.2471 × 10⁻⁴

(b) Using the formula

$$R_i(h) = \frac{4^{i-1}\, R_{i-1}\!\left(\frac{h}{2}\right) - R_{i-1}(h)}{4^{i-1} - 1}$$

for each i = 2, …, 4, the Richardson extrapolation table is completed as follows:
R1 R2 R3 R4
R1 (0.5) = 0.678010
R1 (0.25) = 0.699764 0.707015
R1 (0.125) = 0.705267 0.707101 0.707107
R1 (0.0625) = 0.706647 0.707106 0.707107 0.707107
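The table can be generated mechanically. This Python sketch is my own illustration; it uses 0-based indexing, so the factor 4^j plays the role of the 4^{i−1} in the formula above:

import math

def richardson(f, x, h0, levels=4):
    # R[i][0] is the central difference with step h0 / 2^i
    R = [[(f(x + h0 / 2**i) - f(x - h0 / 2**i)) / (2 * h0 / 2**i)]
         for i in range(levels)]
    for i in range(1, levels):
        for j in range(1, i + 1):
            R[i].append((4**j * R[i][j - 1] - R[i - 1][j - 1]) / (4**j - 1))
    return R

for row in richardson(math.sin, math.pi / 4, 0.5):
    print(["%.6f" % v for v in row])   # final entry ~0.707107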
(c) (i) Using Taylor series expansions about x, we find:

$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + \frac{h^4}{4!} f^{(iv)}(c_1)$$

$$f(x-h) = f(x) - h f'(x) + \frac{h^2}{2!} f''(x) - \frac{h^3}{3!} f'''(x) + \frac{h^4}{4!} f^{(iv)}(c_2)$$

$$f(x+h) - 2f(x) + f(x-h) = h^2 f''(x) + \frac{h^4}{12}\left[\frac{f^{(iv)}(c_1) + f^{(iv)}(c_2)}{2}\right]$$

$$\Rightarrow\quad f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{h^2}{12} f^{(iv)}(c),$$

where we have used the Intermediate Value Theorem and the continuity of f^{(iv)} to find a value c such that

$$\frac{f^{(iv)}(c_1) + f^{(iv)}(c_2)}{2} = f^{(iv)}(c), \qquad c, c_1, c_2 \in (x-h, x+h).$$
(ii) The total error is given by

$$E(f, h) = \frac{e_1 - 2e_0 + e_{-1}}{h^2} - \frac{h^2}{12} f^{(iv)}(c),$$

i.e. involving both round-off and truncation error components. Given that |e_i| ≤ ε, i = −1, 0, 1, and letting M = max_{[x−h, x+h]} |f^{(iv)}|, we write

$$|E(f, h)| \le \frac{4\varepsilon}{h^2} + \frac{M h^2}{12}.$$

The optimum step size will minimize the quantity

$$g(h) = \frac{4\varepsilon}{h^2} + \frac{M h^2}{12}.$$

Setting g′(h) = 0 results in −8ε/h³ + Mh/6 = 0, which yields the equation h⁴ = 48ε/M, from which we obtain the value of h which minimizes |E(f, h)|:

$$h = \left(\frac{48\varepsilon}{M}\right)^{1/4}.$$
(iii) The three calculations, compared with the true value f″(0.8) = −cos(0.8) = −0.696707, may be summarized as follows:

Step Size h   D₂(h)          Abs Error
0.100         −0.69612631    5.803954 × 10⁻⁴
0.010         −0.69670090    5.805870 × 10⁻⁶
0.001         −0.69670665    5.808741 × 10⁻⁸

Using the formula from part (ii),

$$h = \left(\frac{48\varepsilon}{M}\right)^{1/4},$$

with |f^{(iv)}(x)| = |cos(x)| ≤ 1 = M and ε = 0.5 × 10⁻⁹, we obtain

$$h = \left(\frac{24 \times 10^{-9}}{1}\right)^{1/4} = 0.01244666,$$

which would give an error of approximately 10⁻⁵. However, note that greater precision than this is available even on the calculator. Taking ε = 2.22 × 10⁻¹⁶, for example, we can compute that the optimal step-size is now 3.2123 × 10⁻⁴, which gives D₂(h) = −0.69670670 with an absolute error of 6.6 × 10⁻⁹.
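The trade-off between truncation and round-off is easy to see numerically; a sketch (my own) evaluating δ_x²(h) for f = cos at x = 0.8, including the near-optimal step for double precision:

import math

def d2(f, x, h):
    # central-difference approximation to f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

true = -math.cos(0.8)
for h in (0.1, 0.01, 0.001, 3.2123e-4):
    approx = d2(math.cos, 0.8, h)
    print(f"h={h:<10g} D2={approx:.8f}  err={abs(approx - true):.2e}")

eps = 2.22e-16   # double-precision machine epsilon; M = max|f''''| = 1
print("h_opt =", (48 * eps / 1) ** 0.25)   # ~3.2e-4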



(d) The forward difference table is as follows:
x f (x) ∆f ∆2 f ∆3 f
1.00 1.00000
0.02470
1.05 1.02470 -0.00059
0.02411 0.00005
1.10 1.04881 -0.00054
0.02357 0.00004
1.15 1.07238 -0.00050
0.02307 0.00002
1.20 1.09544 -0.00048
0.02259 0.00003
1.25 1.11803 -0.00045
0.02214
1.30 1.14017
Using these results, we can write

$$y' \approx \frac{1}{0.05}\left(0.02470 + \frac{0.00059}{2} + \frac{0.00005}{3}\right) = \frac{1}{0.05}(0.02470 + 0.000295 + 0.000017) = 0.50024$$

$$y'' \approx \frac{1}{0.0025}(-0.00059 - 0.00005) = -0.256$$

The correct results are y′(1) = 1/2 = 0.5 and y″(1) = −1/4 = −0.25, and the numerical results are disappointing in terms of accuracy.



X.5 With the following tabulated values of x and f (x) = ex :

x 0.00 0.25 0.50 0.75 1.00


f (x) 1.0000 1.2840 1.6487 2.1170 2.7183

(a) Use the appropriate Lagrange interpolating polynomial of degree 2 to find an approximation
to f (0.125).
(b) Construct a divided-difference table and hence construct the Newton divided-difference poly-
nomial of degree 4 interpolating f at these points and use it to estimate f (0.125).
(c) Construct a forward-difference table and hence construct the Newton forward-difference
polynomial of degree 4 interpolating f at these points and use it to estimate f (0.125).
(d) Use the forward-difference table of part (c) to construct the Newton backward-difference
polynomial of degree 4 interpolating f at these points and use it to estimate f (0.875).
(e) Suppose that a table of values of f (x) = ex , 0 ≤ x ≤ 1, is to be constructed, with the values
of ex given with a spacing of h. If (i) linear interpolation and (ii) quadratic interpolation
is to be used in this table, how small should h be chosen in order to have an interpolation
error which is less than 10−7 .

(Note: Round all calculations to 4 decimal places.)

Solution

(a) The appropriate nodes are x₀ = 0, x₁ = 0.25 and x₂ = 0.5, where the Lagrange form of the interpolating polynomial is:

$$p_n(x) = \sum_{k=0}^{n} L_k(x)\, f_k, \qquad L_k(x) = \prod_{\substack{i=0 \\ i \neq k}}^{n} \frac{x - x_i}{x_k - x_i}$$

First, calculate the values of L_k(0.125):

$$L_0(0.125) = \frac{(0.125 - 0.25)(0.125 - 0.5)}{(0 - 0.25)(0 - 0.5)} = 0.375$$

$$L_1(0.125) = \frac{(0.125 - 0)(0.125 - 0.5)}{(0.25 - 0)(0.25 - 0.5)} = 0.75$$

$$L_2(0.125) = \frac{(0.125 - 0)(0.125 - 0.25)}{(0.5 - 0)(0.5 - 0.25)} = -0.125$$

We then compute p₂(0.125) as follows:

$$p_2(0.125) = \sum_{k=0}^{2} L_k(0.125)\, f_k = (0.375)(1) + (0.75)(1.284) + (-0.125)(1.6487) = 0.375 + 0.963 - 0.2061 = 1.1319$$
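For checking by machine, a direct Python sketch of the Lagrange form (my own illustration):

def lagrange_eval(xs, fs, x):
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        Lk = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                Lk *= (x - xi) / (xk - xi)   # build the basis polynomial L_k(x)
        total += Lk * fk
    return total

# degree-2 interpolation of e^x at the three nearest nodes
print(lagrange_eval([0.0, 0.25, 0.5], [1.0000, 1.2840, 1.6487], 0.125))  # ~1.1319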

(b) The divided-difference table takes the following form:



x      f(x)     f[·,·]   f[·,·,·]   f[·,·,·,·]   f[·,·,·,·,·]
0.00 1.0000
1.1360
0.25 1.2840 0.6456
1.4588 0.2443
0.50 1.6487 0.8288 0.0693
1.8732 0.3136
0.75 2.1170 1.0640
2.4052
1.00 2.7183
We can then construct the Newton divided-difference polynomial of degree 4:

$$p_4(x) = f[x_0] + \sum_{k=1}^{4} f[x_0, x_1, \ldots, x_k] \prod_{i=0}^{k-1} (x - x_i)$$

Evaluating at x = 0.125:

p₄(0.125) = 1.0000 + (1.136)(0.125) + (0.6456)(−0.0156) + (0.2443)(0.0059) + (0.0693)(−0.0037) = 1.1330
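The table and the nested evaluation both mechanise cleanly; a Python sketch (my own) that computes the coefficients in place and evaluates the Newton form Horner-style:

def divided_differences(xs, fs):
    coef = list(fs)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):   # update bottom-up, column by column
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef            # top edge of the table: f[x0], f[x0,x1], ...

def newton_eval(xs, coef, x):
    p = coef[-1]
    for i in range(len(coef) - 2, -1, -1):        # nested (Horner) evaluation
        p = p * (x - xs[i]) + coef[i]
    return p

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
fs = [1.0000, 1.2840, 1.6487, 2.1170, 2.7183]
coef = divided_differences(xs, fs)
print(coef)                          # ~[1.0, 1.136, 0.6456, 0.2443, 0.0693]
print(newton_eval(xs, coef, 0.125))  # ~1.1331 (1.1330 with 4-decimal rounding)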

(c) The forward-difference table takes the following form:


x f (x) ∆f ∆2 f ∆3 f ∆4 f
0.00 1.0000
0.2840
0.25 1.2840 0.0807
0.3647 0.0229
0.50 1.6487 0.1036 0.0065
0.4683 0.0294
0.75 2.1170 0.1330
0.6013
1.00 2.7183
We can then construct the Newton forward-difference polynomial of degree 4. Since x = x₀ + sh with x = 0.125, x₀ = 0 and h = 0.25, we find that s = 0.5. We can compute the s-coefficients using the fact that $\binom{s}{1} = s = 0.5$ and $\binom{s}{k+1} = \frac{s-k}{k+1}\binom{s}{k}$. Therefore,

$$p_4(s = 0.5) = f_0 + \binom{s}{1}\Delta f_0 + \binom{s}{2}\Delta^2 f_0 + \binom{s}{3}\Delta^3 f_0 + \binom{s}{4}\Delta^4 f_0$$

= 1.0000 + (0.5)(0.284) + (−0.125)(0.0807) + (0.0625)(0.0229) + (−0.0391)(0.0065) = 1.1330

(d) We can construct the Newton backward-difference polynomial of degree 4 from the same table. Since x = x₀ − sh with x = 0.875, x₀ = 1 and h = 0.25, we find that s = 0.5. We again compute the s-coefficients using the fact that $\binom{s}{1} = s = 0.5$ and $\binom{s}{k+1} = \frac{s-k}{k+1}\binom{s}{k}$. Therefore,

$$p_4(s = 0.5) = f_0 - \binom{s}{1}\Delta f_0 + \binom{s}{2}\Delta^2 f_0 - \binom{s}{3}\Delta^3 f_0 + \binom{s}{4}\Delta^4 f_0$$

where f₀ = f(1.00) = 2.7183 and the differences are read along the bottom diagonal of the forward-difference table:

= 2.7183 − (0.5)(0.6013) + (−0.125)(0.133) − (0.0625)(0.0294) + (−0.0391)(0.0065) = 2.3989

(e) (i) The error in linear interpolation is:

$$|f(x) - p_1(x)| \le \max_{x \in [x_i, x_{i+1}]} |(x - x_i)(x - x_{i+1})| \;\; \max_{\xi \in [0,1]} \left|\frac{f^{(2)}(\xi)}{2!}\right|$$

We find upper bounds for both terms, noting that x_{i+1} = x_i + h:

$$(1) \quad \max_{x \in [x_i, x_{i+1}]} |(x - x_i)(x - x_{i+1})| = \frac{h^2}{4}$$

$$(2) \quad \max_{\xi \in [0,1]} \left|\frac{f^{(2)}(\xi)}{2!}\right| = \max_{\xi \in [0,1]} \frac{e^{\xi}}{2!} \le \frac{e^1}{2!}$$

Combining (1) and (2), the accuracy requirement of 10⁻⁷ leads to:

$$\frac{e^1 h^2}{8} < 10^{-7} \;\Rightarrow\; h < \sqrt{2.9430 \times 10^{-7}} = 5.4250 \times 10^{-4}$$
8
(ii) The error in quadratic interpolation is:

$$|f(x) - p_2(x)| \le \max_{x \in [x_{i-1}, x_{i+1}]} |(x - x_{i-1})(x - x_i)(x - x_{i+1})| \;\; \max_{\xi \in [0,1]} \left|\frac{f^{(3)}(\xi)}{3!}\right|$$

We find upper bounds for both terms, noting that x_i = x_{i-1} + h and x_{i+1} = x_i + h:

$$(1) \quad \max_{x \in [x_{i-1}, x_{i+1}]} |(x - x_{i-1})(x - x_i)(x - x_{i+1})| = \max_{y \in [-h, h]} |(y + h)\,y\,(y - h)| = \frac{2h^3}{3\sqrt{3}}$$

$$(2) \quad \max_{\xi \in [0,1]} \left|\frac{f^{(3)}(\xi)}{3!}\right| = \max_{\xi \in [0,1]} \frac{e^{\xi}}{3!} \le \frac{e^1}{3!}$$

Combining (1) and (2), the accuracy requirement of 10⁻⁷ leads to:

$$\frac{e^1 h^3}{9\sqrt{3}} < 10^{-7} \;\Rightarrow\; h < \left(5.7347 \times 10^{-7}\right)^{1/3} = 0.00831$$



X.6 (a) Apply (i) the Composite Trapezoidal Rule and (ii) Simpson's Composite Rule to approximate ∫₀¹ eˣ dx using 5 equally spaced points on [0, 1]. Given that the true value of this integral is 1.718282 (correct to 6 decimal places), determine the error in your approximation and compare it with that obtained using the theoretical error bound in each case.
(b) Determine the number of sub-intervals required by (i) the Trapezoidal Rule and (ii) Simpson's Rule to approximate ∫₀¹ eˣ dx, so that the truncation error is less than 10⁻⁶.
(c) The following table provides composite Trapezoidal Rule approximations to ∫₀¹ eˣ dx using successively smaller mesh spacings h = 2⁻ⁱ, i = 0, 1, 2, 3, and is the first step of the Romberg integration method:
Ri,1 Ri,2 Ri,3 Ri,4
1.859141
1.753931 R2,2
1.727222 R3,2 R3,3
1.720519 R4,2 R4,3 R4,4
Complete the Romberg integration table to obtain R4,4 .
(d) By transforming the interval of integration, apply the 2-point Gaussian quadrature formula

$$I_2(f) = f\!\left(-\frac{1}{\sqrt{3}}\right) + f\!\left(\frac{1}{\sqrt{3}}\right)$$

to approximate ∫₀¹ eˣ dx
and compare your approximation with the true value 1.718282.

Solution

(a) (i) With h = (b − a)/n = 1/4 and the nodes x_i = (i − 1)h, i = 1, …, 5, we apply the integration rule as follows:

$$I_5(f) = \frac{h}{2}\left[f_0 + 2(f_1 + f_2 + f_3) + f_4\right] = \frac{h}{2}\left[f(0) + 2\left(f\!\left(\tfrac{1}{4}\right) + f\!\left(\tfrac{1}{2}\right) + f\!\left(\tfrac{3}{4}\right)\right) + f(1)\right]$$

$$= 0.125\,[1 + 2(5.049747) + 2.718282] = 1.727222$$

The actual error is |1.718282 − 1.727222| = 0.00894, whereas the error bound formula yields

$$\frac{(b-a)h^2}{12} \max_{[a,b]} |f''(\mu)| = \frac{(1-0)h^2}{12} \max_{[0,1]} |e^{\mu}| = \frac{0.0625}{12}\,(2.718282) = 0.014158$$



(ii) With h = (b − a)/n = 1/4 and the same nodes, we apply the integration rule as follows:

$$I_5(f) = \frac{h}{3}\left[f_0 + 4(f_1 + f_3) + 2f_2 + f_4\right] = \frac{h}{3}\left[f(0) + 4\left(f\!\left(\tfrac{1}{4}\right) + f\!\left(\tfrac{3}{4}\right)\right) + 2f\!\left(\tfrac{1}{2}\right) + f(1)\right]$$

$$= \frac{h}{3}\,[1 + 4(3.401025) + 2(1.648721) + 2.718282] = 1.718319$$

The actual error is |1.718282 − 1.718319| = 0.000037, whereas the error bound formula yields

$$\frac{(b-a)h^4}{180} \max_{[a,b]} |f^{(4)}(\mu)| = \frac{(1)h^4}{180} \max_{[0,1]} |e^{\mu}| = \frac{0.00390625}{180}\,(2.718282) = 0.000059$$
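Both composite rules are short functions in Python; this sketch (my own illustration) reproduces the two approximations above:

import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n sub-intervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd interior nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes
    return h * s / 3

print(trapezoid(math.exp, 0, 1, 4))  # 1.727222
print(simpson(math.exp, 0, 1, 4))    # 1.718319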
(b) (i) We solve for h using the theoretical error bound:

$$\frac{(b-a)h^2}{12} \max_{[a,b]} |f''(\mu)| < 10^{-6}$$

$$\Rightarrow\; h^2 < \frac{12 \times 10^{-6}}{(b-a)\max_{[0,1]} |e^{\mu}|} = \frac{12 \times 10^{-6}}{(1-0)(2.718282)} = 4.4146 \times 10^{-6} \;\Rightarrow\; h < 0.002101$$

Since n = (b − a)/h = 475.945, we must choose n = 476 sub-intervals to achieve the desired accuracy.
(ii) We solve for h using the theoretical error bound:

$$\frac{(b-a)h^4}{180} \max_{[a,b]} |f^{(4)}(\mu)| < 10^{-6}$$

$$\Rightarrow\; h^4 < \frac{180 \times 10^{-6}}{(b-a)\max_{[0,1]} |e^{\mu}|} = \frac{180 \times 10^{-6}}{(1-0)(2.718282)} = 6.6218 \times 10^{-5} \;\Rightarrow\; h < 0.090208$$

Since m = (b − a)/(2h) = 5.54275, we must choose m = 6 sub-intervals (i.e. using a total of 13 equally spaced points) to achieve the desired accuracy.
(c) Using the formula

$$R_{i,j} = \frac{4^{j-1} R_{i,j-1} - R_{i-1,j-1}}{4^{j-1} - 1}$$

for each i = 2, 3, …, n and j = 2, …, i, the table is completed as follows:
Ri,1 Ri,2 Ri,3 Ri,4
1.859141
1.753931 1.718861
1.727222 1.718319 1.718283
1.720519 1.718284 1.718282 1.718282
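A Romberg sketch (my own, again with 0-based indexing) that halves the trapezoid mesh by adding only the new midpoints, then extrapolates along each row:

import math

def romberg(f, a, b, levels=4):
    R = [[0.5 * (b - a) * (f(a) + f(b))]]            # coarsest trapezoid estimate
    for i in range(1, levels):
        h = (b - a) / 2**i
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2**(i - 1) + 1))
        R.append([0.5 * R[i - 1][0] + h * new])      # refined trapezoid estimate
        for j in range(1, i + 1):                    # Richardson extrapolation
            R[i].append((4**j * R[i][j - 1] - R[i - 1][j - 1]) / (4**j - 1))
    return R

for row in romberg(math.exp, 0, 1):
    print(["%.6f" % v for v in row])   # bottom-right entry ~1.718282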



(d) We will use Gauss quadrature to integrate f(x) over the closed interval [a, b] = [0, 1] by mapping it onto the interval [−1, 1] using the linear transformation:

$$x = a + \frac{b-a}{2}(z+1) = \frac{1}{2}(z+1), \qquad dx = \frac{b-a}{2}\,dz = \frac{1}{2}\,dz$$

Making this substitution in ∫ₐᵇ f(x) dx gives:

$$\int_a^b f(x)\,dx = \int_{-1}^{1} \frac{b-a}{2}\, f\!\left(a + \frac{b-a}{2}(z+1)\right) dz = \int_{-1}^{1} \frac{1}{2}\, f\!\left(\frac{1}{2}(z+1)\right) dz$$

If we define the new integrand as:

$$F(z) = \frac{1}{2}\, f\!\left(\frac{1}{2}(z+1)\right)$$

the n = 2 Gaussian quadrature approximation is computed as

$$I_2(F) = F\!\left(-\frac{1}{\sqrt{3}}\right) + F\!\left(\frac{1}{\sqrt{3}}\right) = (1)(0.617657) + (1)(1.100240) = 1.717896$$
The error in the approximation is

|1.718282 − 1.717896| = 0.000385
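The change of variables makes a reusable routine; a 2-point Gauss-Legendre sketch (my own illustration) for a general interval [a, b]:

import math

def gauss2(f, a, b):
    half = (b - a) / 2           # Jacobian of the map from [-1, 1] to [a, b]
    mid = (a + b) / 2
    z = 1 / math.sqrt(3)         # 2-point Gauss-Legendre nodes, both weights 1
    return half * (f(mid - half * z) + f(mid + half * z))

approx = gauss2(math.exp, 0, 1)
print(approx, abs(approx - (math.e - 1)))   # 1.717896, error ~3.9e-4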

MS213: Numerical Mathematics Worked Examples JC 22/09/2014

