MS213 Worked Examples
X.1 Compute 5 iterations of the following methods (using the indicated starting values) to approximate a root of
\[ f(x) = x - \cos(x) \]
in the interval [0, 1]. Tabulate your results neatly showing, for each iteration n, the values of n, p_{n−1}, p_n and |p_n − p_{n−1}|:
(a) Bisection Method, [a, b] = [0, 1]
(b) Fixed-Point Iteration, p_0 = 0.5
(c) Newton-Raphson Method, p_0 = 0.5
(d) Secant Method, p_0 = 0, p_1 = 1
(e) Steffensen's Method, p_0 = 0.5
Solution
(a) To find a solution of f(x) = x − cos(x) = 0, the Bisection method generates a sequence of approximations p_n defined by:
\[ p_n = \tfrac{1}{2}(a_n + b_n) \]
where [a_0, b_0] = [0, 1]. The first 5 iterations are as follows:
 Iter   a_{n−1}    b_{n−1}    p_n        f(a_{n−1})   f(p_n)    |p_n − p_{n−1}|
  1     0.000000   1.000000   0.500000   −1.000       −0.378    0.5000000
  2     0.500000   1.000000   0.750000   −0.378        0.018    0.2500000
  3     0.500000   0.750000   0.625000   −0.378       −0.186    0.1250000
  4     0.625000   0.750000   0.687500   −0.186       −0.085    0.0625000
  5     0.687500   0.750000   0.718750   −0.085       −0.034    0.0312500
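The sweep above can be reproduced with a short Python sketch (a minimal illustration; the helper name and row layout are our own choices, while the function and interval come from the example):

```python
import math

def bisect(f, a, b, n_iter):
    """Bisection: halve [a, b] each step, keeping the half where f changes sign."""
    rows, p_prev = [], a
    for n in range(1, n_iter + 1):
        p = (a + b) / 2
        rows.append((n, a, b, p, abs(p - p_prev)))
        if f(a) * f(p) < 0:      # root lies in [a, p]
            b = p
        else:                    # root lies in [p, b]
            a = p
        p_prev = p
    return rows

rows = bisect(lambda x: x - math.cos(x), 0.0, 1.0, 5)
```

Each row holds (n, a_{n−1}, b_{n−1}, p_n, |p_n − p_{n−1}|), matching the columns of the table above.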
(b) To use fixed-point iteration to solve the equation:
\[ x = g(x) = \cos(x) \]
we generate a sequence of approximations p_{n+1} = g(p_n) starting with p_0 = 0.5. The first 5 iterations are:
 n    p_{n−1}        p_n            |p_n − p_{n−1}|
 1    0.500000000    0.877582562    0.377582562
 2    0.877582562    0.639012494    0.238570068
 3    0.639012494    0.802685101    0.163672607
 4    0.802685101    0.694778027    0.107907074
 5    0.694778027    0.768195831    0.073417804
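A minimal Python sketch of the same fixed-point iteration (the helper name is our own):

```python
import math

def fixed_point(g, p0, n_iter):
    """Iterate p_{n+1} = g(p_n), recording successive differences."""
    p, rows = p0, []
    for n in range(1, n_iter + 1):
        p_new = g(p)
        rows.append((n, p, p_new, abs(p_new - p)))
        p = p_new
    return rows

rows = fixed_point(math.cos, 0.5, 5)
```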
(e) To apply Steffensen's method to the equation f(x) = x − cos(x) = 0, written in fixed-point form as
\[ x = g(x) = \cos(x), \]
we first generate two fixed-point approximations p_{n+1} = g(p_n) and p_{n+2} = g(p_{n+1}) starting with p_0 = 0.5. Steffensen's method then provides the improved approximations p_n^* given by:
\[
p_n^* = p_n - \frac{(\Delta p_n)^2}{\Delta^2 p_n} = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}
\]
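The acceleration formula can be sketched in Python; we assume the standard Steffensen scheme in which each cycle restarts from the improved value p_n^*:

```python
import math

def steffensen(g, p0, n_cycles):
    """Steffensen's method: from p form p1 = g(p) and p2 = g(p1), then
    accelerate with p <- p - (p1 - p)**2 / (p2 - 2*p1 + p)."""
    p = p0
    for _ in range(n_cycles):
        p1 = g(p)
        p2 = g(p1)
        p = p - (p1 - p) ** 2 / (p2 - 2 * p1 + p)
    return p

root = steffensen(math.cos, 0.5, 3)
```

Starting from p_0 = 0.5, the first cycle already gives p_0^* ≈ 0.73139, and three cycles locate the root 0.739085… to high accuracy, reflecting the quadratic convergence of the method.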
Solution
(a) (i) Writing the system in augmented matrix form, the basic steps in the Gaussian elimination process (without pivoting and using 4-digit arithmetic with rounding) are as follows:
\[
\left[\begin{array}{ccc|c} 2.51 & 1.48 & 4.53 & 0.05 \\ 1.48 & 0.93 & -1.30 & 1.03 \\ 2.68 & 3.04 & -1.48 & -0.53 \end{array}\right]
\rightarrow
\left[\begin{array}{ccc|c} 2.51 & 1.4800 & 4.530 & 0.0500 \\ 0 & 0.0574 & -3.971 & 1.0010 \\ 0 & 1.4590 & -6.318 & -0.5834 \end{array}\right]
\]
\[
\rightarrow
\left[\begin{array}{ccc|c} 2.51 & 1.4800 & 4.530 & 0.050 \\ 0 & 0.0574 & -3.971 & 1.001 \\ 0 & 0 & 94.580 & -26.030 \end{array}\right]
\]
The remaining entries are:
\[
u_{2,2} = 16, \qquad u_{2,3} = 20, \qquad l_{3,2} = \tfrac{5}{4}, \qquad u_{3,3} = 36
\]
The LU matrix factorization is therefore:
\[
\begin{bmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & \tfrac{5}{4} & 1 \end{bmatrix}
\cdot
\begin{bmatrix} 1 & 2 & 3 \\ 0 & 16 & 20 \\ 0 & 0 & 36 \end{bmatrix}
\]
To solve the system Ax = b, writing A = LU, we first solve Ly = b and then back-substitute, by solving Ux = y, to find x. Solving Ly = b yields:
\[
[Ly \,|\, b] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 20 \\ 2 & 1 & 0 & 144 \\ 3 & \tfrac{5}{4} & 1 & 262 \end{array}\right]
\;\Rightarrow\;
y = \begin{bmatrix} 20 \\ 104 \\ 72 \end{bmatrix}
\]
Solving Ux = y yields:
\[
[Ux \,|\, y] = \left[\begin{array}{ccc|c} 1 & 2 & 3 & 20 \\ 0 & 16 & 20 & 104 \\ 0 & 0 & 36 & 72 \end{array}\right]
\;\Rightarrow\;
x = \begin{bmatrix} 6 \\ 4 \\ 2 \end{bmatrix}
\]
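For comparison, the Doolittle factorization and the two triangular solves can be sketched in Python (a generic dense implementation in ordinary floating-point arithmetic; names are our own):

```python
def lu_doolittle(A):
    """Doolittle factorization A = L·U with unit diagonal in L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve_lu(L, U, b):
    """Forward solve Ly = b, then back-substitute Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[1, 2, 3], [2, 20, 26], [3, 26, 70]]
L, U = lu_doolittle(A)
x = solve_lu(L, U, [20, 144, 262])   # expect x = [6, 4, 2]
```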
(ii) The starting point for the Crout Algorithm is the matrix equation:
\[
\begin{bmatrix} l_{1,1} & 0 & 0 \\ l_{2,1} & l_{2,2} & 0 \\ l_{3,1} & l_{3,2} & l_{3,3} \end{bmatrix}
\cdot
\begin{bmatrix} 1 & u_{1,2} & u_{1,3} \\ 0 & 1 & u_{2,3} \\ 0 & 0 & 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{bmatrix}
\]
Noting that l_{1,1} = a_{1,1} = 1, we work systematically across a row of U followed by the corresponding column of L, using row-column multiplication:
\[
\begin{aligned}
&l_{1,1} = 1, \quad u_{1,2} = 2, \quad u_{1,3} = 3 \\
&l_{2,1} = 2, \quad l_{3,1} = 3 \\
&l_{2,2} = 16, \quad u_{2,3} = \tfrac{5}{4} \\
&l_{3,2} = 20 \\
&l_{3,3} = 36
\end{aligned}
\]
Solving Ux = y yields:
\[
[Ux \,|\, y] = \left[\begin{array}{ccc|c} 1 & 2 & 3 & 20 \\ 0 & 1 & \tfrac{5}{4} & \tfrac{13}{2} \\ 0 & 0 & 1 & 2 \end{array}\right]
\;\Rightarrow\;
x = \begin{bmatrix} 6 \\ 4 \\ 2 \end{bmatrix}
\]
(iii) The starting point for the Choleski Algorithm is the matrix equation:
\[
\begin{bmatrix} l_{1,1} & 0 & 0 \\ l_{2,1} & l_{2,2} & 0 \\ l_{3,1} & l_{3,2} & l_{3,3} \end{bmatrix}
\cdot
\begin{bmatrix} l_{1,1} & l_{2,1} & l_{3,1} \\ 0 & l_{2,2} & l_{3,2} \\ 0 & 0 & l_{3,3} \end{bmatrix}
=
\begin{bmatrix} 1 & 2 & 3 \\ 2 & 20 & 26 \\ 3 & 26 & 70 \end{bmatrix}
\]
Noting that l_{1,1}^2 = a_{1,1} = 1, we work systematically across a row of L^T and the corresponding column of L, using row-column multiplication:
\[
\begin{aligned}
&l_{2,1} = 2, \quad l_{3,1} = 3 \\
&l_{2,2} = 4, \quad l_{3,2} = 5 \\
&l_{3,3} = 6
\end{aligned}
\]
To solve the system Ax = b, writing A = LL^T, we first solve Ly = b and then back-substitute, by solving L^T x = y, to find x. Solving Ly = b yields:
\[
[Ly \,|\, b] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 20 \\ 2 & 4 & 0 & 144 \\ 3 & 5 & 6 & 262 \end{array}\right]
\;\Rightarrow\;
y = \begin{bmatrix} 20 \\ 26 \\ 12 \end{bmatrix}
\]
Solving L^T x = y yields:
\[
[L^T x \,|\, y] = \left[\begin{array}{ccc|c} 1 & 2 & 3 & 20 \\ 0 & 4 & 5 & 26 \\ 0 & 0 & 6 & 12 \end{array}\right]
\;\Rightarrow\;
x = \begin{bmatrix} 6 \\ 4 \\ 2 \end{bmatrix}
\]
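The Choleski factorization can likewise be sketched (a minimal dense implementation; it assumes a symmetric positive-definite input):

```python
import math

def cholesky(A):
    """Choleski factorization A = L·L^T with L lower triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)    # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]   # below-diagonal entry
    return L

L = cholesky([[1, 2, 3], [2, 20, 26], [3, 26, 70]])
```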
\[
\begin{aligned}
3x_1 - x_2 &= 2 \\
-x_1 + 3x_2 - x_3 &= 1 \\
-x_2 + 3x_3 &= 2
\end{aligned}
\]
In the case of (a) and (b), calculate the spectral radius of the iteration matrix and hence comment
on the expected convergence properties of the two methods. Using the results of part (a), calculate
the optimum value of ω for the SOR method.
Solution
(a) The iteration scheme for the Jacobi method may be written in the following form:
\[
\begin{aligned}
x_1^{(k+1)} &= \tfrac{1}{3}\left[2 + x_2^{(k)}\right] \\
x_2^{(k+1)} &= \tfrac{1}{3}\left[1 + x_1^{(k)} + x_3^{(k)}\right] \\
x_3^{(k+1)} &= \tfrac{1}{3}\left[2 + x_2^{(k)}\right]
\end{aligned}
\]
This leads to the sequence of iterations:
 k          0    1        2        3        4
 x_1^(k)    0    0.66667  0.77778  0.92593  0.95062
 x_2^(k)    0    0.33333  0.77778  0.85185  0.95062
 x_3^(k)    0    0.66667  0.77778  0.92593  0.95062
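The Jacobi sweep can be sketched directly from the component equations above (the system is hard-coded for brevity):

```python
def jacobi(x, n_iter):
    """Jacobi iteration for 3x1 - x2 = 2, -x1 + 3x2 - x3 = 1, -x2 + 3x3 = 2;
    each sweep uses only values from the previous iterate."""
    for _ in range(n_iter):
        x1, x2, x3 = x
        x = [(2 + x2) / 3, (1 + x1 + x3) / 3, (2 + x2) / 3]
    return x

x = jacobi([0.0, 0.0, 0.0], 4)
```

After 4 sweeps each component equals 77/81 ≈ 0.95062, matching the table.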
(b) The Gauss-Seidel scheme, in which each updated component is used immediately, leads to the sequence of iterations:

 k          0    1        2        3        4
 x_1^(k)    0    0.66667  0.85185  0.96708  0.99268
 x_2^(k)    0    0.55556  0.90123  0.97805  0.99512
 x_3^(k)    0    0.85185  0.96708  0.99268  0.99837
The characteristic polynomial of the Gauss-Seidel iteration matrix T_G has roots
\[ \lambda = 0, \qquad \lambda = 0.222222 \]
so that ρ(T_G) = 0.222222. As expected, ρ(T_G) = ρ(T_J)^2, so that convergence will be twice as rapid for Gauss-Seidel.
(c) Let x̂ denote the vector obtained from an application of the Gauss-Seidel method and let ω denote the acceleration parameter; then the SOR method may be written in the form:
\[
x_i^{(k+1)} = \omega \hat{x}_i^{(k+1)} + (1 - \omega)\, x_i^{(k)}
\]
for each component i = 1, 2, …, n. For the linear system given by:
\[
\begin{aligned}
3x_1 - x_2 &= 2 \\
-x_1 + 3x_2 - x_3 &= 1 \\
-x_2 + 3x_3 &= 2
\end{aligned}
\]
the Gauss-Seidel components are given by:
\[
\begin{aligned}
\hat{x}_1^{(k+1)} &= \tfrac{1}{3}\left[2 + x_2^{(k)}\right] \\
\hat{x}_2^{(k+1)} &= \tfrac{1}{3}\left[1 + x_1^{(k+1)} + x_3^{(k)}\right] \\
\hat{x}_3^{(k+1)} &= \tfrac{1}{3}\left[2 + x_2^{(k+1)}\right]
\end{aligned}
\]
With ω = 1.06, the scheme
\[
x_i^{(k+1)} = \omega \hat{x}_i^{(k+1)} + (1 - \omega)\, x_i^{(k)}
\]
leads to the sequence of iterations:

 k          0    1        2        3        4
 x_1^(k)    0    0.70667  0.87733  0.99044  0.99888
 x_2^(k)    0    0.60302  0.95212  0.99522  0.99955
 x_3^(k)    0    0.91973  0.98790  0.99904  0.99990
Note: As an alternative and possibly simpler approach, we can first calculate the Gauss-Seidel iteration, and then combine it with the previous iteration value. As this is an algebraically equivalent formulation, it leads to the same sequence of iterations. Results are presented correct to 6 decimal places:

            x_1        x_2        x_3
 x_i^(0)    0.000000   0.000000   0.000000
 x̂_i^(1)    0.666667   0.568889   0.867674
 x_i^(1)    0.706667   0.603022   0.919735
 x̂_i^(2)    0.867674   0.932356   0.984039
 x_i^(2)    0.877335   0.952116   0.987897
 x̂_i^(3)    0.984039   0.992779   0.998406
 x_i^(3)    0.990441   0.995219   0.999037
 x̂_i^(4)    0.998406   0.999307   0.999851
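The relaxation step can be sketched in Python, forming each Gauss-Seidel component x̂ and immediately relaxing it, exactly as in the tables above:

```python
def sor(x, omega, n_iter):
    """SOR for 3x1 - x2 = 2, -x1 + 3x2 - x3 = 1, -x2 + 3x3 = 2:
    form the Gauss-Seidel value and relax, x = omega*x_hat + (1-omega)*x."""
    x1, x2, x3 = x
    for _ in range(n_iter):
        x1h = (2 + x2) / 3                     # Gauss-Seidel component
        x1 = omega * x1h + (1 - omega) * x1    # relaxed update
        x2h = (1 + x1 + x3) / 3
        x2 = omega * x2h + (1 - omega) * x2
        x3h = (2 + x2) / 3
        x3 = omega * x3h + (1 - omega) * x3
    return [x1, x2, x3]

x = sor([0.0, 0.0, 0.0], 1.06, 4)
```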
(a) Use the formula
\[
f'(x) \approx \frac{1}{h}\,\mu\delta f(x) = \frac{f(x+h) - f(x-h)}{2h}
\]
to approximate f′(a) = cos(π/4), using h = 0.5, 0.25, 0.125 and 0.0625. Derive the truncation error for the approximation and compare it with the actual error for each value of h. The error appears to decrease by a constant factor as h is decreased by a factor of 2. Explain.
[Note: Tabulate your results neatly.]
(b) The following table presents second-order approximations to the first derivative of the function f(x) = sin x at the point x = π/4 using the finite-difference formula:
\[
f'(x) \approx R_1(h) = \frac{f(x+h) - f(x-h)}{2h}
\]
using successively smaller mesh spacings h = 0.5/2^i, i = 0, 1, 2, 3:

 R_1                        R_2          R_3          R_4
 R_1(0.5)    = 0.678010
 R_1(0.25)   = 0.699764    R_2(0.5)
 R_1(0.125)  = 0.705267    R_2(0.25)    R_3(0.25)
 R_1(0.0625) = 0.706647    R_2(0.125)   R_3(0.125)   R_4(0.125)

Complete the Richardson extrapolation table to obtain R_4(0.125).
(c) Assume that f ∈ C^4[a, b], that x − h, x, x + h ∈ [a, b] where h > 0, and that
\[
\delta_x^2(h) \equiv \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}
\]
is an approximation to the second derivative of f(x).
(i) Derive an expression for the leading terms (in powers of h) in the truncation error of this approximation.
(ii) Assume that a computer is utilised to make numerical computations and that
\[
f(x_0 - h) = y_{-1} + e_{-1}, \qquad f(x_0) = y_0 + e_0, \qquad f(x_0 + h) = y_1 + e_1,
\]
where f(x_0 − h), f(x_0) and f(x_0 + h) are approximated by the numerical values y_{−1}, y_0 and y_1, and e_{−1}, e_0 and e_1 are the associated round-off errors, respectively. Assume further that the round-off errors are bounded in magnitude by the machine epsilon ε. Derive an upper bound, in terms of h and ε, for the total error (round-off and truncation) in the computational formula δ_x^2(h) and hence determine the value of h which minimizes this error.
(iii) Let f(x) = cos(x). Use the formula for δ_x^2(h) with h = 0.1, 0.01 and 0.001 to find approximations to f″(0.8). Carry full calculator precision in all cases. Compare with the true value f″(0.8) = − cos(0.8). Using your results from part (ii) and assuming a machine epsilon with value ε = 0.5 × 10^{-9}, estimate the optimal step-size for this approximation and, in the context of the errors observed above, comment on your results.
Solution
(a) Expanding f(x + h) and f(x − h) in Taylor series about x and subtracting gives:
\[
\frac{f(x+h) - f(x-h)}{2h} = f'(x) + \frac{h^2}{6}\, f^{(3)}(\xi)
\]
where the latter term is the truncation error of the approximation, for some ξ ∈ [x − h, x + h].
Using the given formula:
\[
\frac{1}{h}\,\mu\delta f(x) = \frac{f(x+h) - f(x-h)}{2h}
\]
for different values of h, we obtain the following approximations to f′(a) = cos(π/4):
 h        (1/h) μδf(x)   Abs Error          Ratio     max (h^2/6)|f^{(3)}(x)|
 0.5      0.678010       2.9097 × 10^{-2}             3.9981 × 10^{-2}
 0.25     0.699764       7.3427 × 10^{-3}   3.9627    9.9953 × 10^{-3}
 0.125    0.705267       1.8400 × 10^{-3}   3.9906    2.4988 × 10^{-3}
 0.0625   0.706647       4.6027 × 10^{-4}   3.9977    6.2471 × 10^{-4}
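The table can be regenerated with a few lines of Python (the error ratios tending to 4 confirm the O(h^2) behaviour):

```python
import math

def central_diff(f, x, h):
    """First-derivative central difference, O(h^2) accurate."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = math.pi / 4
exact = math.cos(x0)
errors = [abs(central_diff(math.sin, x0, h) - exact)
          for h in (0.5, 0.25, 0.125, 0.0625)]
ratios = [errors[i] / errors[i + 1] for i in range(3)]   # tend to 4
```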
The total error is therefore bounded by:
\[
|E(f, h)| \leq \frac{4\epsilon}{h^2} + \frac{M h^2}{12}.
\]
The optimum step size will minimize the quantity
\[
g(h) = \frac{4\epsilon}{h^2} + \frac{M h^2}{12}.
\]
Setting g′(h) = 0 results in −8ε/h^3 + Mh/6 = 0, which yields the equation h^4 = 48ε/M, from which we obtain the value of h which minimizes |E(f, h)|:
\[
h = \left( \frac{48\epsilon}{M} \right)^{1/4}.
\]
(iii) The three calculations may be summarized as follows:
\[
D_2(0.100) = -0.69612631, \qquad
D_2(0.010) = -0.69670090, \qquad
D_2(0.001) = -0.69670665
\]
Using the result of part (ii) with ε = 0.5 × 10^{-9} and M ≈ 1, the optimal step-size is:
\[
h = \left( \frac{24 \times 10^{-9}}{1} \right)^{1/4} = 0.01244666,
\]
which would give an error of approximately 10^{-5}. However, note that greater precision than this is available even on the calculator. Taking ε = 2.22 × 10^{-16}, for example, we can compute that the optimal step-size is now 3.2123 × 10^{-4}, which gives D_2(h) = −0.69670670 with an absolute error of 6.6 × 10^{-9}.
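These calculations can be sketched in Python (here M, the bound on |f^{(4)}|, is taken as 1 since f = cos):

```python
import math

def second_diff(f, x, h):
    """Three-point approximation to f''(x) with O(h^2) truncation error."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

exact = -math.cos(0.8)
approx = {h: second_diff(math.cos, 0.8, h) for h in (0.1, 0.01, 0.001)}

# optimal step from h = (48*eps/M)**(1/4), with eps = 0.5e-9 and M = 1
h_opt = (48 * 0.5e-9 / 1) ** 0.25
```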
(a) Use the appropriate Lagrange interpolating polynomial of degree 2 to find an approximation to f(0.125).
(b) Construct a divided-difference table and hence construct the Newton divided-difference polynomial of degree 4 interpolating f at these points and use it to estimate f(0.125).
(c) Construct a forward-difference table and hence construct the Newton forward-difference polynomial of degree 4 interpolating f at these points and use it to estimate f(0.125).
(d) Use the forward-difference table of part (c) to construct the Newton backward-difference polynomial of degree 4 interpolating f at these points and use it to estimate f(0.875).
(e) Suppose that a table of values of f(x) = e^x, 0 ≤ x ≤ 1, is to be constructed, with the values of e^x given with a spacing of h. If (i) linear interpolation and (ii) quadratic interpolation are to be used in this table, how small should h be chosen in order to have an interpolation error which is less than 10^{-7}?
Solution
(a) The appropriate nodes are x_0 = 0, x_1 = 0.25 and x_2 = 0.5, where the Lagrange form of the interpolating polynomial is:
\[
p_n(x) = \sum_{k=0}^{n} L_k(x)\, f_k, \qquad
L_k(x) = \prod_{\substack{i=0 \\ i \neq k}}^{n} \frac{x - x_i}{x_k - x_i}
\]
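The evaluation can be sketched generically in Python. The tabulated data itself is not reproduced in this extract, so for illustration we assume f(x) = e^x sampled at the three nodes of part (a) (an assumption; substitute the actual tabulated values):

```python
import math

def lagrange(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, fs) at x."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        Lk = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                Lk *= (x - xi) / (xk - xi)   # cardinal basis polynomial L_k
        total += Lk * fk
    return total

# assumed illustrative data: f(x) = e^x at the nodes of part (a)
xs = [0.0, 0.25, 0.5]
fs = [math.exp(v) for v in xs]
p = lagrange(xs, fs, 0.125)
```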
(d) We can then construct the Newton backward-difference polynomial of degree 4: since x = x_0 − sh with x = 0.875, x_0 = 1 and h = 0.25, we find that s = 0.5.
(e) (i) The error in linear interpolation is:
\[
|f(x) - p_1(x)| \leq \max_{x \in [x_i, x_{i+1}]} |(x - x_i)(x - x_{i+1})| \;\cdot\; \max_{\xi \in [0,1]} \left| \frac{f^{(2)}(\xi)}{2!} \right|
\]
We find upper bounds for both terms:
\[
(1) \quad \max_{x \in [x_i, x_{i+1}]} |(x - x_i)(x - x_{i+1})| = \frac{h^2}{4}
\]
\[
(2) \quad \max_{\xi \in [0,1]} \left| \frac{f^{(2)}(\xi)}{2!} \right| = \max_{\xi \in [0,1]} \frac{e^{\xi}}{2!} \leq \frac{e^1}{2!}
\]
Combining (1) and (2), the accuracy requirement of 10^{-7} leads to:
\[
\frac{e^1 h^2}{8} < 10^{-7} \;\Rightarrow\; h < \sqrt{2.943 \times 10^{-7}} = 5.4249 \times 10^{-4}
\]
(ii) The error in quadratic interpolation is:
\[
|f(x) - p_2(x)| \leq \max_{x \in [x_{i-1}, x_{i+1}]} |(x - x_{i-1})(x - x_i)(x - x_{i+1})| \;\cdot\; \max_{\xi \in [0,1]} \left| \frac{f^{(3)}(\xi)}{3!} \right|
\]
We find upper bounds for both terms, noting that x_i = x_{i-1} + h and x_{i+1} = x_i + h:
\[
(1) \quad \max_{x \in [x_{i-1}, x_{i+1}]} |(x - x_{i-1})(x - x_i)(x - x_{i+1})| = \frac{2h^3}{3\sqrt{3}}
\]
\[
(2) \quad \max_{\xi \in [0,1]} \left| \frac{f^{(3)}(\xi)}{3!} \right| = \max_{\xi \in [0,1]} \frac{e^{\xi}}{3!} \leq \frac{e^1}{3!}
\]
Combining (1) and (2), the accuracy requirement of 10^{-7} leads to:
\[
\frac{e^1 h^3}{9\sqrt{3}} < 10^{-7} \;\Rightarrow\; h < \left( 5.7347 \times 10^{-7} \right)^{1/3} = 0.00831
\]
Solution
(a) (i) With h = (b − a)/n = 1/4 and the nodes x_i = 0 + (i − 1)h, we apply the integration rule as follows:
\[
\begin{aligned}
I_5(f) &= \frac{h}{2}\left[ f_0 + 2(f_1 + f_2 + f_3) + f_4 \right] \\
&= \frac{h}{2}\left[ f(0) + 2\left( f\left(\tfrac{1}{4}\right) + f\left(\tfrac{1}{2}\right) + f\left(\tfrac{3}{4}\right) \right) + f(1) \right] \\
&= 0.125\left[ 1 + 2(5.049747) + 2.718282 \right] \\
&= 1.727222
\end{aligned}
\]
The actual error is |1.718282 − 1.727222| = 0.00894, whereas the error bound formula yields:
\[
\frac{(b-a)h^2}{12} \max_{[a,b]} |f''(\mu)| = \frac{(1-0)h^2}{12} \max_{[0,1]} |e^{\mu}| = \frac{0.0625}{12}\,(2.718282) = 0.014158
\]
Solving for h so that (b − a)h^2/12 · max|f″(μ)| < 10^{-6}:
\[
h^2 < \frac{12 \times 10^{-6}}{(b-a) \max_{[0,1]} |e^{\mu}|} = \frac{12 \times 10^{-6}}{(1-0)(2.718282)} = 4.4146 \times 10^{-6}
\;\Rightarrow\; h < 0.002101
\]
Since n = (b − a)/h = 475.945, we must choose n = 476 sub-intervals to achieve the desired accuracy.
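The composite trapezoidal computation, and the check that n = 476 meets the tolerance, can be sketched as:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n sub-intervals."""
    h = (b - a) / n
    return h / 2 * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

exact = math.e - 1                           # integral of e^x over [0, 1]
I5 = trapezoid(math.exp, 0.0, 1.0, 4)        # 5-point rule of part (i)
I476 = trapezoid(math.exp, 0.0, 1.0, 476)    # n chosen from the error bound
```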
(ii) We solve for h using the theoretical error bound:
\[
\frac{(b-a)h^4}{180} \max_{[a,b]} |f^{(4)}(\mu)| < 10^{-6}
\]
\[
\Rightarrow\; h^4 < \frac{180 \times 10^{-6}}{(b-a) \max_{[0,1]} |e^{\mu}|} = \frac{180 \times 10^{-6}}{(1-0)(2.718282)} = 6.6218 \times 10^{-5}
\;\Rightarrow\; h < 0.090208
\]
Since m = (b − a)/(2h) = 5.54275, we must choose m = 6 sub-intervals (i.e. using a total of 13 equally spaced points) to achieve the desired accuracy.
(c) Using the formula
\[
R_{i,j} = \frac{4^{\,j-1} R_{i,j-1} - R_{i-1,j-1}}{4^{\,j-1} - 1}
\]
for each i = 2, 3, …, n and j = 2, …, i, the table is completed as follows:

 R_{i,1}     R_{i,2}     R_{i,3}     R_{i,4}
 1.859141
 1.753931    1.718861
 1.727222    1.718319    1.718283
 1.720519    1.718284    1.718282    1.718282
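The whole table can be generated programmatically; the sketch below uses 0-based indices, so the extrapolation factor appears as 4^j with j = 1, 2, …, which is the same formula:

```python
import math

def romberg(f, a, b, levels):
    """Romberg integration: trapezoidal estimates with 2^i sub-intervals,
    refined column by column via Richardson extrapolation."""
    R = []
    for i in range(levels):
        n = 2 ** i
        h = (b - a) / n
        t = h / 2 * (f(a) + f(b) + 2 * sum(f(a + k * h) for k in range(1, n)))
        row = [t]
        for j in range(1, i + 1):
            row.append((4 ** j * row[j - 1] - R[i - 1][j - 1]) / (4 ** j - 1))
        R.append(row)
    return R

R = romberg(math.exp, 0.0, 1.0, 4)
```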
Using the substitution
\[
x = a + \frac{b-a}{2}(z+1) = \frac{1}{2}(z+1), \qquad
dx = \frac{b-a}{2}\,dz = \frac{1}{2}\,dz
\]
Making this substitution in \(\int_a^b f(x)\,dx\) gives:
\[
\int_a^b f(x)\,dx = \frac{b-a}{2} \int_{-1}^{1} f\left( a + \frac{b-a}{2}(z+1) \right) dz = \frac{1}{2} \int_{-1}^{1} f\left( \tfrac{1}{2}(z+1) \right) dz
\]
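As an illustration of why the substitution is useful, the two-point Gauss-Legendre rule (nodes ±1/√3 with unit weights; an assumed example, not derived in the text above) can be applied after the change of variable:

```python
import math

def gauss_legendre_2(f, a, b):
    """Two-point Gauss-Legendre rule on [a, b], mapped from [-1, 1] via
    x = a + (b - a)/2 * (z + 1), dx = (b - a)/2 dz."""
    half = (b - a) / 2
    z = 1 / math.sqrt(3)          # nodes z = -1/sqrt(3) and +1/sqrt(3); weights 1
    return half * (f(a + half * (1 - z)) + f(a + half * (1 + z)))

approx = gauss_legendre_2(math.exp, 0.0, 1.0)   # integral of e^x over [0, 1]
```

With just two function evaluations the rule integrates e^x over [0, 1] to within about 4 × 10^{-4}, far better than the 5-point trapezoidal estimate above.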