MATH3511 Assignment 1
Phoebe Grosser, u7294693
14th March 2023
Question 1
The intersection of the graphs y = 3x and y = e^x (or, equivalently, the root of the function f(x) = e^x − 3x) was found
using the Bisection Method, correct to two decimal places.
The initial bracketing values were selected as a = 1.4 and b = 1.6. The Bisection Method was carried out using Matlab, and
the values of the iteration step n, the approximate root x_n, the function value f(x_n), and the updated bounds a and
b were recorded in Table 1.
The algorithm used to execute the Bisection Method, which is the same as the one presented in the lectures, is included
in the code listing below. The algorithm was terminated once x_n agreed, to two decimal places, with the true root of f(x) = e^x − 3x
that exists within the initial bracket [a, b] (approximately 1.51213).
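For reference, the bisection bracket halves at every step, so the midpoint x_n satisfies the standard error bound (a worked instance with b − a = 0.2 here):

$$|x_n - r| \le \frac{b-a}{2^{n+1}} = \frac{0.2}{2^{n+1}} < 0.005 \quad \text{once } n \ge 5$$

so two-decimal-place accuracy is guaranteed by n = 5, although in this run x_n already agrees with the rounded root at n = 3.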
n    x_n                  f(x_n)                 a                    b
0    1.500000000000000    -0.018310929661935     1.500000000000000    1.600000000000000
1    1.550000000000000     0.061470182590742     1.500000000000000    1.550000000000000
2    1.525000000000000     0.020143569306689     1.500000000000000    1.525000000000000
3    1.512500000000000     0.000561779129503     1.500000000000000    1.512500000000000

Table 1. Bisection Method performed on f(x) = e^x − 3x
Hence, the root of f(x) = e^x − 3x within the initial bracket [1.4, 1.6] was found to be 1.51 (to two decimal places) via the
Bisection Method. The code used to perform the above calculations is included below.
%% Assignment 1, Question 1 Code (Bisection Method)
f = @(x) exp(x) - 3*x;

a1 = 1.4;
b1 = 1.6;
[a, b] = bisect(f, a1, b1, 1.0e-10);

% Bisection: assumes f(a) < 0 < f(b), as holds for this f on [1.4, 1.6]
function [a, b] = bisect(f, inita, initb, tol)
    format long; format compact

    fval = tol + 1.0;
    a = inita;
    b = initb;
    n = -1;

    while abs(fval) > tol
        n = n + 1;

        % evaluate f at the midpoint of the current bracket
        m = (a + b)/2.0;
        fval = f(m);

        % keep the half-interval in which the sign change occurs
        if fval < 0.0
            a = m;
        else
            b = m;
        end

        % display n, the midpoint, f(midpoint) and the updated bounds
        [n, m, fval, a, b]
    end
end
Question 2
The functions f below were approximated with Taylor series centered at c (to the order n specified next to each
function), and the Taylor error formula was then used to determine an upper bound on the error of the approximation
when x lies in the specified interval I.

1. f(x) = √x, c = 4, n = 2, I = [4.0, 4.2]
2. f(x) = x sin(x), c = 0, n = 4, I = [−1, 1]
For the first function, f(x) = √x, the second-order Taylor series approximated around c = 4 is:

$$f(x) \approx \sum_{k=0}^{2} \frac{f^{(k)}(4)}{k!}(x-4)^k = \frac{f(4)}{0!}(x-4)^0 + \frac{f'(4)}{1!}(x-4) + \frac{f''(4)}{2!}(x-4)^2$$
Then, by noting the following derivatives:

$$f(x) = \sqrt{x} \implies f(4) = 2$$
$$f'(x) = \frac{1}{2\sqrt{x}} \implies f'(4) = \frac{1}{4}$$
$$f''(x) = \frac{-1}{4x^{3/2}} \implies f''(4) = \frac{-1}{32}$$
the Taylor series can subsequently be specified as:

$$f(x) \approx 2 + \frac{1}{4}(x-4) - \frac{1}{64}(x-4)^2 = \frac{3}{4} + \frac{3x}{8} - \frac{x^2}{64}$$

The second-order Taylor series approximation to f(x) = √x is therefore f(x) ≈ 3/4 + 3x/8 − x²/64.
The error can then be determined via Taylor’s remainder formula:

$$E_{n+1} = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x-c)^{n+1} \tag{1}$$

where ξ is some unknown point that lies between c and x and depends on both. We want to find an upper bound on the
error when x lies in [4.0, 4.2], so we want to maximise the ξ term. We first note that, for n = 2 and f(x) = √x:

$$f^{(n+1)}(\xi) = f^{(3)}(\xi) = \frac{3}{8\xi^{5/2}}$$

This is a decreasing function of ξ, so on [4.0, 4.2] it is largest at ξ = 4, where it equals 3/(8 · 4^{5/2}) = 3/256. The error is therefore bounded by:

$$|E_3| \le \frac{3/256}{3!}(x-4)^3 = \frac{(x-4)^3}{512} \le \frac{(0.2)^3}{512} \approx 1.56 \times 10^{-5}$$

It can be clearly seen that the true error between f(x) and the Taylor approximation remains below this error bound
for x ∈ [4.0, 4.2], as expected (see Figure 1).
Figure 1: Comparison of the true error and error formula estimate for a 2nd-order Taylor approximation to f(x) = √x
For the second function, f(x) = x sin(x), the fourth-order Taylor series around c = 0 is:

$$f(x) \approx \sum_{k=0}^{4} \frac{f^{(k)}(0)}{k!}x^k = \frac{f(0)}{0!} + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \frac{f''''(0)}{4!}x^4$$
As before, by noting the following derivatives:

$$f(x) = x\sin(x) \implies f(0) = 0$$
$$f'(x) = \sin(x) + x\cos(x) \implies f'(0) = 0$$
$$f''(x) = 2\cos(x) - x\sin(x) \implies f''(0) = 2$$
$$f'''(x) = -3\sin(x) - x\cos(x) \implies f'''(0) = 0$$
$$f''''(x) = -4\cos(x) + x\sin(x) \implies f''''(0) = -4$$

the Taylor series can subsequently be specified as:

$$f(x) \approx \frac{2}{2!}x^2 - \frac{4}{4!}x^4 = x^2 - \frac{x^4}{6}$$

We want to find an upper bound on E_5 for x ∈ [−1, 1]; since both f and the approximation are even functions, it suffices to consider the x ∈ [0, 1] case. Firstly, we can note:

$$f^{(5)}(\xi) = 5\sin(\xi) + \xi\cos(\xi)$$

On [0, 1] the magnitude of this expression is largest at ξ = 1, where |f^{(5)}(ξ)| ≤ 5 sin(1) + cos(1) ≈ 4.748. Hence:

$$|E_5| \le \frac{4.748}{5!}|x|^5 \approx 0.03956\,|x|^5 \le 0.0396 \quad \text{for } x \in [-1, 1]$$

As shown in Figure 2, the true error again remains below this error bound across the interval.
Figure 2: Comparison of the true error and error formula estimate for a 4th-order Taylor approximation to f(x) = x sin(x)
% Assignment 1, Question 2 Code (Taylor Series)

% Part 1: f(x) = sqrt(x) on [4, 4.2]
x = linspace(4, 4.2, 200);
taylor_err = 1/512*(x - 4).^3;                              % error bound (x-4)^3/512
true_err = abs(sqrt(x) - ((3.*x)./8 - (x.^2)./64 + 3/4));   % |f(x) - T_2(x)|

figure(1)
plot(x, true_err, '-r')
hold on
plot(x, taylor_err, '-b')
hold off
xlim([4 4.2])
ylim([0 1.8e-5])
xlabel('x')
ylabel('Error')
legend('True error', 'Error formula estimate', 'Location', 'southeast')

% Part 2: f(x) = x*sin(x) on [-1, 1]
x_1 = linspace(-1, 1, 200);
taylor_err_1 = 0.03956.*abs(x_1).^5;                        % error bound 0.03956|x|^5
true_err_1 = abs(x_1.*sin(x_1) - (x_1.^2 - x_1.^4/6));      % |f(x) - T_4(x)|

figure(2)
plot(x_1, true_err_1, '-r')
hold on
plot(x_1, taylor_err_1, '-b')
hold off
xlim([-1 1])
xlabel('x')
ylabel('Error')
legend('True error', 'Abs(Error formula estimate)', 'Location', 'northeast')
Question 3
We want to show that, if r is a double zero of a function f (i.e. f(r) = f'(r) = 0 ≠ f''(r)), then Newton’s Method will
have e_{n+1} ≈ (1/2)e_n (i.e. linear convergence).
First, define the error as e_{n+1} = x_{n+1} − r. Then, via Newton’s Method, we know that x_{n+1} = x_n − f(x_n)/f'(x_n). Substituting
this into the e_{n+1} error term, we get:

$$e_{n+1} = x_{n+1} - r = x_n - \frac{f(x_n)}{f'(x_n)} - r = e_n - \frac{f(x_n)}{f'(x_n)}$$
Then, we can take Taylor expansions of f(x_n) and f'(x_n) around the root r, assuming we are sufficiently close to it.
Expanding about the root avoids the division by f'(r) = 0 that breaks the standard quadratic-convergence formula derived in the lectures. The
Taylor expansions are:

$$f(x_n) = f(r + e_n) \approx f(r) + e_n f'(r) + \frac{1}{2}e_n^2 f''(\xi_n)$$
$$f'(x_n) = f'(r + e_n) \approx f'(r) + e_n f''(\eta_n)$$

where ξ_n and η_n are the intermediate points from the two remainder terms. Substituting f(r) = f'(r) = 0, the above Taylor expansions simplify to:

$$f(x_n) \approx \frac{1}{2}e_n^2 f''(\xi_n)$$
$$f'(x_n) \approx e_n f''(\eta_n)$$
We should note that these intermediate points are not the same (as they come from different Taylor series), but both are
bounded between r and x_n, so we can reasonably approximate ξ_n ≈ η_n ≈ r if x_n is close enough to the root. Thus, as f'' is continuous, the
two f'' terms converge to the common value f''(r). Substituting these Taylor expansions into the formula for e_{n+1} derived above,
with ξ_n ≈ η_n ≈ r, we can determine the convergence rate:
$$e_{n+1} = e_n - \frac{f(x_n)}{f'(x_n)} \approx e_n - \frac{\frac{1}{2}e_n^2 f''(r)}{e_n f''(r)} = e_n - \frac{1}{2}e_n = \frac{1}{2}e_n$$
Thus, e_{n+1} ≈ (1/2)e_n (i.e. linear convergence with constant c = 1/2) when Newton’s Method is performed on a double root r.
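This behaviour is easy to check numerically. The minimal sketch below uses the illustrative choice f(x) = (x − 1)², which has a double root at r = 1 (this example is not taken from the question); it prints the error ratio e_{n+1}/e_n, which settles at 1/2:

% Linear convergence of Newton's Method at a double root
% Illustrative example: f(x) = (x - 1)^2, double root at r = 1
f  = @(x) (x - 1).^2;
df = @(x) 2*(x - 1);
r  = 1;

x = 2;                        % initial guess
e_prev = abs(x - r);
for n = 1:8
    x = x - f(x)/df(x);       % Newton step
    e = abs(x - r);
    fprintf('n = %d, e_n = %.3e, e_n/e_{n-1} = %.4f\n', n, e, e/e_prev);
    e_prev = e;
end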
Question 4
We want to determine the conditions on α that ensure the iteration

$$x_{n+1} = x_n - \alpha f(x_n) \tag{2}$$

will converge to a root r of f if started near the root. The broadest condition for convergence is linear convergence, so
we will only require that the iteration converges linearly. First, consider the first-order (linear) Taylor expansion of
f(x_n) around the root r:
$$f(x_n) \approx f(r) + \frac{f'(r)}{1!}(x_n - r) + O((x_n - r)^2)$$
$$= f'(r)(x_n - r) + O((x_n - r)^2), \quad \text{as } f(r) = 0 \text{ by definition}$$
$$= f'(r)e_n + O(e_n^2), \quad \text{via } e_n = x_n - r$$
Substituting this into Equation 2 and subtracting r from both sides, we get:

$$e_{n+1} = e_n - \alpha f'(r)e_n + O(e_n^2) = (1 - \alpha f'(r))e_n + O(e_n^2)$$

For linear convergence, we require that |e_{n+1}| ≤ C|e_n| for some constant C < 1. That is, via comparison to the above equation, we
require:

$$|1 - \alpha f'(r)| < 1 \implies 0 < \alpha < \frac{2}{f'(r)}$$

(taking f'(r) > 0; equivalently, 0 < αf'(r) < 2, so for f'(r) < 0 the bounds reverse sign).
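As a quick numerical sketch of this window (reusing f(x) = e^x − 3x from Question 1, whose root r ≈ 1.51213 has f'(r) = e^r − 3 ≈ 1.54, so convergence is predicted for 0 < α < 2/f'(r) ≈ 1.30):

% Damped iteration x_{n+1} = x_n - alpha*f(x_n) near a root
% Illustrative test on f(x) = exp(x) - 3x, root r ~ 1.51213 (Question 1);
% f'(r) ~ 1.54, so convergence is predicted for 0 < alpha < 1.30
f = @(x) exp(x) - 3*x;
r = 1.51213;                    % root quoted to 5 dp, as in Question 1

for alpha = [0.5, 1.0, 1.5]     % the last value lies outside the window
    x = 1.4;                    % start near the root
    for n = 1:30
        x = x - alpha*f(x);
    end
    fprintf('alpha = %.1f: x_30 = %.6g, |x_30 - r| = %.2e\n', alpha, x, abs(x - r));
end

The first two values of α drive the error towards zero, while α = 1.5 leaves the iterates stuck away from r, consistent with |1 − αf'(r)| > 1.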
Question 5
a.
We want to find the root of the equation:

$$p(x) = 4x^3 - 2x^2 + 3 = 0$$

using Newton’s Method, with an initial guess of x_0 = −1. The algorithm used to perform Newton’s Method is the one
provided in Matlab Grader with a few modifications (shown at the end of the assignment). The output of Newton’s
Method is shown in the table below:
n    x_n                   f(x_n)                e_n (true)
0    -1.000000000000000    -3.000000000000000    2.3117191415079 · 10^-1
1    -0.812500000000000    -0.465820312500000    4.3671914150790 · 10^-2
2    -0.770804195804196    -0.020137886720229    1.976109954986 · 10^-3
3    -0.768832384255760    -0.000043708433414    4.298406550 · 10^-6
4    -0.768828085869608    -0.000000000207412    2.0399 · 10^-11

Table 2. Newton’s Method performed on p(x) = 4x^3 − 2x^2 + 3
These results behave as expected, as the error at each step is approximately equal to the square of the error at the step
beforehand. This is particularly evident when an extra column is added to the previous table, containing the square of the error term e_n:
n    x_n                   f(x_n)                e_n (true)                 e_n^2
0    -1.000000000000000    -3.000000000000000    2.3117191415079 · 10^-1    5.3440453892140 · 10^-2
1    -0.812500000000000    -0.465820312500000    4.3671914150790 · 10^-2    1.907236085594 · 10^-3
2    -0.770804195804196    -0.020137886720229    1.976109954986 · 10^-3     3.9050105541948 · 10^-6
3    -0.768832384255760    -0.000043708433414    4.298406550 · 10^-6        1.8476298869083 · 10^-11
4    -0.768828085869608    -0.000000000207412    2.0399 · 10^-11            4.161192010000 · 10^-22

Table 2 (Modified). Newton’s Method performed on p(x) = 4x^3 − 2x^2 + 3
Clearly, e_{n+1} ≈ e_n^2, satisfying quadratic convergence. Furthermore, the iterates x_n converge towards the only real root of
p(x), which is -0.76882808584920981.
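For reference, the standard error analysis of Newton’s Method predicts the constant in e_{n+1} ≈ C e_n^2; evaluating it at the root value above (a quick check, not required by the question) agrees with the later rows of the table:

$$C = \left|\frac{p''(r)}{2p'(r)}\right| = \frac{|24r - 4|}{2\,|12r^2 - 4r|} \approx \frac{22.45}{20.34} \approx 1.10, \qquad \frac{e_4}{e_3^2} \approx 1.104$$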
b.
We now want to find a root of the equation:

$$p(x) = x^3 + 94x^2 - 389x + 294 = 0$$

using Newton’s Method, with an initial guess of x_0 = 2. As p(x) has three real roots, at x = 1, 3 and -98, it is expected
that the initial guess x_0 = 2 should lead to convergence to either x = 1 or x = 3. However, performing Newton’s Method with
the same algorithm as previously, we get:
n    x_n    f(x_n)    e_n (for r = 1)    e_n (for r = 3)
0    2      -100      1                  1
1    -98    0         99                 101
2    -98    0         99                 101

Table 3. Newton’s Method performed on p(x) = x^3 + 94x^2 − 389x + 294
What can be seen is that, after only one iteration, the method lands exactly on the most distant root, x = −98. The
reason for this becomes clear when p(x) = x^3 + 94x^2 − 389x + 294 is plotted, as below.
As shown in Figure 3, the derivative of p(x) is very close to 0 at x = 2 (albeit not exactly 0). Newton’s Method converges
on a root by finding the zero of the linear tangent line at the point xn . When the derivative is very close to 0 at a point
xn , the tangent line at xn will be almost horizontal and hence only marginally slope towards the x-axis (consequently
only crossing the axis at a point very far from xn ). This overshoot due to a very small first derivative is precisely what
has happened in this case, which caused Newton’s Method to converge to the root that is furthermost from x = 2.
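Concretely, the first Newton step from x_0 = 2 can be worked out by hand (using p'(x) = 3x^2 + 188x − 389, so p(2) = −100 and p'(2) = −1):

$$x_1 = x_0 - \frac{p(x_0)}{p'(x_0)} = 2 - \frac{-100}{-1} = -98$$

and since p(−98) = 0 exactly, the method terminates there.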
The change in the slope around x = 2 is in fact so subtle that initiating the algorithm at x_0 = 2.1 converges the
method to x = 3, and initiating the algorithm at x_0 = 1.9 converges the method to x = 1. The slopes of the tangent
lines at these initial points are sufficiently non-horizontal that the method "zooms in" on the closest root.
Figure 3: Plot of the function p(x) = x^3 + 94x^2 − 389x + 294
% (Preamble of the Matlab Grader Newton's Method function omitted; the
% modified portion of the function body is shown below.)

% set the maximum possible number of iterations
max_n = 100;

% find the starting values x_0, f(x_0)
x = x_init;
Fx = f(x);
% err = abs(x - (-0.76882808584920981));   % true-error column for part (a)
err_1 = abs(x - 1);                        % true-error columns for part (b)
err_2 = abs(x - 3);

% initialise the lists
x_list = [x];
Fx_list = [Fx];
% err_list = [err];
err_1_list = [err_1];
err_2_list = [err_2];

% start iterating
for n = 1:max_n

    % evaluate the derivative and the Newton step
    dFx = df(x);
    dx = -Fx/dFx;

    % stop if the derivative is close to zero
    if abs(dFx) < eps(0)
        disp('Small Derivative')
        return
    end

    % find x_{n+1} and f(x_{n+1})
    x = x + dx;
    Fx = f(x);
    % err = abs(x - (-0.76882808584920981));
    err_1 = abs(x - 1);
    err_2 = abs(x - 3);

    % add the results to the lists
    x_list = [x_list, x];
    Fx_list = [Fx_list, Fx];
    % err_list = [err_list, err];
    err_1_list = [err_1_list, err_1];
    err_2_list = [err_2_list, err_2];

    % exit if the step size is small
    if abs(dx) < tol
        return
    end
end

% print a warning message if the root was not found
disp('The method did not converge')
end
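For completeness, assuming the omitted Grader preamble declares a signature along the lines of function newton(f, df, x_init, tol) (a hypothetical reconstruction, since that part of the template is not reproduced here), part b would be run as:

% Hypothetical call for Question 5b, assuming the signature newton(f, df, x_init, tol)
p  = @(x) x.^3 + 94*x.^2 - 389*x + 294;
dp = @(x) 3*x.^2 + 188*x - 389;
newton(p, dp, 2, 1.0e-10);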