
MATH3511 - Scientific Computing

Assignment 1
Phoebe Grosser, u7294693
14th March 2023

Question 1
The intersection of the graphs y = 3x and y = e^x (or, equivalently, the root of the function f(x) = e^x − 3x) was found
using the Bisection Method, correct to two decimal places.

The initial starting values were selected as a = 1.4 and b = 1.6. The Bisection Method was carried out using Matlab and
values for the iteration step n, the approximate root xn , the value of the function f (xn ) and the updated bounds a and
b were inserted into Table 1.

The algorithm used to execute the Bisection Method, which is the same as the one presented in the lectures, is included
in the code listing at the end of this question. The algorithm was terminated when the value of x_n converged, correct to
two decimal places, to the true root of f(x) = e^x − 3x within the initial bracket [a, b] (approximately 1.51213).

n xn f (xn ) a b
0 1.500000000000000 -0.018310929661935 1.500000000000000 1.600000000000000
1 1.550000000000000 0.061470182590742 1.500000000000000 1.550000000000000
2 1.525000000000000 0.020143569306689 1.500000000000000 1.525000000000000
3 1.512500000000000 0.000561779129503 1.500000000000000 1.512500000000000
Table 1. Bisection Method performed on f (x) = ex − 3x

Hence, the root of f (x) = ex − 3x within the initial bounds [1.4, 1.6] was found to be 1.51 (to two decimal places), via the
Bisection Method. The code used to perform the above calculations is included on the next page.

%% Assignment 1, Question 1 Code (Bisection Method)
f = @(x) exp(x) - 3*x;

a1 = 1.4;
b1 = 1.6;
[a, b] = bisect(f, a1, b1, 1.0e-10);

function [a, b] = bisect(f, inita, initb, tol)
    format long; format compact

    fval = tol + 1.0;
    a = inita;
    b = initb;
    n = -1;

    while abs(fval) > tol
        n = n + 1;

        m = (a + b)/2.0;
        fval = f(m);
        if fval < 0.0
            a = m;
        else
            b = m;
        end

        [n, m, fval, a, b]
    end
end

Matlab code used for Question 1
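As a quick cross-check of the tabulated result (a Python sketch, separate from the submitted MATLAB code; the function, bracket and stopping rule are the ones used above):

```python
import math

# f(x) = e^x - 3x, with a sign change on the bracket [1.4, 1.6]
f = lambda x: math.exp(x) - 3 * x

def bisect(f, a, b, tol=1e-10):
    """Bisection with the same stopping rule as the MATLAB code: |f(m)| <= tol.

    Assumes f(a) < 0 < f(b), which holds on [1.4, 1.6].
    """
    while True:
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) <= tol:
            return m
        if fm < 0.0:
            a = m
        else:
            b = m

root = bisect(f, 1.4, 1.6)
print(round(root, 2))  # 1.51, matching Table 1
```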

Question 2
The following functions f were approximated with Taylor series centred at c (to the order n specified next to each
function), and the Taylor error formula was then used to determine an upper bound on the error of the approximation
when x lies in the specified interval I.

1. f(x) = √x, c = 4, n = 2, I = [4.0, 4.2]
2. f(x) = x sin(x), c = 0, n = 4, I = [−1, 1]

For the first function, f(x) = √x, the second-order Taylor series approximated around c = 4 is:

$$f(x) \approx \sum_{k=0}^{2} \frac{f^{(k)}(4)}{k!}(x-4)^k = \frac{f(4)}{0!}(x-4)^0 + \frac{f'(4)}{1!}(x-4) + \frac{f''(4)}{2!}(x-4)^2$$
Then, by noting the following derivatives:

$$f(x) = \sqrt{x} \implies f(4) = 2$$
$$f'(x) = \frac{1}{2\sqrt{x}} \implies f'(4) = \frac{1}{4}$$
$$f''(x) = -\frac{1}{4x^{3/2}} \implies f''(4) = -\frac{1}{32}$$
the Taylor series can subsequently be specified as:

$$f(x) \approx 2 + \frac{1}{4}(x-4) - \frac{1}{64}(x-4)^2 = \frac{3}{4} + \frac{3x}{8} - \frac{x^2}{64}$$

The second-order Taylor series approximation to f(x) = √x is therefore f(x) ≈ 3/4 + 3x/8 − x²/64. The error can then be
determined via Taylor's remainder formula:

$$E_{n+1} = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x-c)^{n+1} \qquad (1)$$

where ξ is some unknown point that lies between c and x and depends on both. We want to find an upper bound on the
error when x lies in [4.0, 4.2], so we want to maximise the ξ-dependent factor. We first note:

$$f^{(n+1)}(\xi) = f^{(3)}(\xi) = \frac{3}{8\xi^{5/2}} \quad \text{for } n = 2 \text{ and } f(x) = \sqrt{x}$$

The error can consequently be specified as:

$$E_3 = \frac{3}{8\xi^{5/2}} \cdot \frac{(x-4)^3}{3!}$$

Since $\frac{3}{8\xi^{5/2}}$ is a monotonically decreasing function of ξ for ξ ∈ [4.0, 4.2] (i.e. between c and x), it takes its maximal value at ξ = 4.
Hence, the error term will be bounded by:

$$E_3 = \frac{3}{8\xi^{5/2}} \cdot \frac{(x-4)^3}{3!} \le \frac{3}{8(4)^{5/2}} \cdot \frac{(x-4)^3}{3!} = \frac{(x-4)^3}{512}$$
Thus, the error of the Taylor series approximation will be bounded by $\frac{1}{512}(x-4)^3$ for x in the range [4.0, 4.2]. The true
error was plotted against the error formula estimate for each x in Figure 1.

It can be clearly seen that the true error between f (x) and the Taylor approximation remains lower than the error estimate
bound for x ∈ [4.0, 4.2], as expected.
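This bound can also be confirmed numerically; the following is a small Python check (the 201-point grid is an arbitrary choice, not from the assignment):

```python
import math

# Second-order Taylor polynomial of sqrt(x) about c = 4, and the derived bound
T2 = lambda x: 3/4 + 3*x/8 - x**2/64
bound = lambda x: (x - 4)**3 / 512

# Verify |sqrt(x) - T2(x)| <= (x - 4)^3 / 512 across [4.0, 4.2]
xs = [4.0 + 0.2 * i / 200 for i in range(201)]
ok = all(abs(math.sqrt(x) - T2(x)) <= bound(x) + 1e-15 for x in xs)
print(ok)  # True
```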


Figure 1: Comparison of the true error and error formula estimate for a 2nd order Taylor approximation to f(x) = √x

For the second function, f(x) = x sin(x), the fourth-order Taylor series around c = 0 is:

$$f(x) \approx \sum_{k=0}^{4} \frac{f^{(k)}(0)}{k!}x^k = \frac{f(0)}{0!}x^0 + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \frac{f''''(0)}{4!}x^4$$
As before, by noting the following derivatives:

$$f(x) = x\sin x \implies f(0) = 0$$
$$f'(x) = \sin x + x\cos x \implies f'(0) = 0$$
$$f''(x) = 2\cos x - x\sin x \implies f''(0) = 2$$
$$f'''(x) = -3\sin x - x\cos x \implies f'''(0) = 0$$
$$f''''(x) = -4\cos x + x\sin x \implies f''''(0) = -4$$

the Taylor series can subsequently be specified as:

$$f(x) \approx 0 + \frac{0}{1!}x + \frac{2}{2!}x^2 + \frac{0}{3!}x^3 + \frac{-4}{4!}x^4 = x^2 - \frac{1}{6}x^4$$

The fourth-order Taylor series approximation to f(x) = x sin x is therefore f(x) ≈ x² − (1/6)x⁴. The error can also be
determined via Taylor's remainder formula in Equation 1. We first note that Equation 1 depends on ξ, which must
lie between c and x. However, as c = 0 and the x domain is x ∈ [−1, 1], it is natural to split this error into two cases:
x ∈ [−1, 0] and x ∈ [0, 1].

For the x ∈ [0, 1] case, we want to find an upper bound on E5. Firstly, we can note:

$$f^{(n+1)}(\xi) = f^{(5)}(\xi) = 5\sin\xi + \xi\cos\xi \quad \text{for } n = 4 \text{ and } f(x) = x\sin x$$

The error term, E5, can consequently be specified as:

$$E_5 = \frac{5\sin\xi + \xi\cos\xi}{5!}\,x^5 \qquad (2)$$

As 5 sin ξ + ξ cos ξ is a monotonically increasing function on ξ ∈ [0, 1], it reaches its maximum value at ξ = 1. Hence,
the error term will be bounded by:

$$E_5 = \frac{5\sin\xi + \xi\cos\xi}{5!}\,x^5 \le \frac{5\sin(1) + \cos(1)}{5!}\,x^5 \approx 0.03956\,x^5 \qquad (3)$$
For the x ∈ [−1, 0] case, the error term E5 takes the same form as in Equation 2. On ξ ∈ [−1, 0] the factor 5 sin ξ + ξ cos ξ
is negative and increasing towards 0, so its absolute value is maximal at ξ = −1. Hence, the absolute value of the error
term will be bounded by:

$$|E_5| = \left|\frac{5\sin\xi + \xi\cos\xi}{5!}\,x^5\right| \le \left|\frac{5\sin(-1) + (-1)\cos(-1)}{5!}\right| |x|^5 \approx 0.03956\,|x|^5 \qquad (4)$$

The bound from the error formula has the same absolute value in both regions, so we can simply take |E5| ≤ 0.03956|x|⁵
as the upper bound on the error formula estimate over the entire interval x ∈ [−1, 1]. The true error was plotted against
the error formula estimate for each x in Figure 2. Again, it can be clearly seen that the true error between f(x) and the
Taylor approximation remains below the error estimate bound for x ∈ [−1, 1], as expected. The code used to create
these plots is included below.

Figure 2: Comparison of the true error and error formula estimate for a 4th order Taylor approximation to f (x) = x sin (x)

% Assignment 1, Question 2 Code (Taylor Series)
% Part 1
x = linspace(4, 4.2, 200);
taylor_err = 1/512*(x - 4).^3;
true_err = abs(sqrt(x) - ((3.*x)./8 - (x.^2)./64 + 3/4));

figure(1)
plot(x, true_err, '-r')
hold on
plot(x, taylor_err, '-b')
hold off
xlim([4 4.2])
ylim([0 1.8*10^(-5)])
xlabel('x')
ylabel('Error')
legend('True error', 'Error formula estimate', 'Location', 'southeast')

% Part 2
x_1 = linspace(-1, 1, 200);
taylor_err_1 = 0.03956.*abs(x_1).^5;
true_err_1 = abs(x_1.*sin(x_1) - (x_1.^2 - x_1.^4/6));

figure(2)
plot(x_1, true_err_1, '-r')
hold on
plot(x_1, taylor_err_1, '-b')
hold off
xlim([-1 1])
xlabel('x')
ylabel('Error')
legend('True error', 'Abs(Error formula estimate)', 'Location', 'northeast')

Matlab code used for Question 2
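The Part 2 bound can likewise be confirmed numerically (a Python sketch, separate from the MATLAB above; the grid size is an arbitrary choice):

```python
import math

# Fourth-order Taylor polynomial of x*sin(x) about c = 0, and the derived bound
T4 = lambda x: x**2 - x**4 / 6
bound = lambda x: 0.03956 * abs(x) ** 5

# Verify |x*sin(x) - T4(x)| <= 0.03956*|x|^5 across [-1, 1]
xs = [-1 + 2 * i / 200 for i in range(201)]
ok = all(abs(x * math.sin(x) - T4(x)) <= bound(x) + 1e-15 for x in xs)
print(ok)  # True
```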

Question 3
We want to show that, if r is a double zero of a function f (i.e. f(r) = f'(r) = 0 ≠ f''(r)), then Newton's method will
have e_{n+1} ≈ (1/2)e_n (i.e. linear convergence).
First, define the error as e_{n+1} = x_{n+1} − r. Then, via Newton's method, we know that $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. Substituting
this into the e_{n+1} error term, we get:

$$e_{n+1} = x_{n+1} - r = x_n - \frac{f(x_n)}{f'(x_n)} - r = e_n - \frac{f(x_n)}{f'(x_n)}$$

Then, we can take Taylor expansions of f(x_n) and f'(x_n) around the root r, assuming we are sufficiently close to it.
(We expand about the root because the standard convergence formula derived in the lectures divides by f'(r), which is
zero at a double root.) The Taylor expansions are:

$$f(x_n) = f(r + e_n) \approx f(r) + e_n f'(r) + \frac{1}{2}e_n^2 f''(\xi_n)$$
$$f'(x_n) = f'(r + e_n) \approx f'(r) + e_n f''(\xi_n)$$

where ξ_n denotes an intermediate point between r and x_n. Substituting f(r) = f'(r) = 0, the above Taylor expansions
simplify to:

$$f(x_n) \approx \frac{1}{2}e_n^2 f''(\xi_n), \qquad f'(x_n) \approx e_n f''(\xi_n)$$

We should note that the intermediate points ξ_n in the two expansions are not the same (they come from different Taylor
series), but each is bounded between r and x_n, so if x_n is close enough to the root we can approximate ξ_n ≈ r in both.
Thus, as f'' is continuous, both f''(ξ_n) factors tend to f''(r). Substituting these Taylor expansions into the formula for
e_{n+1} derived above, with ξ_n ≈ r, we can determine the convergence rate:

$$e_{n+1} = e_n - \frac{f(x_n)}{f'(x_n)} \approx e_n - \frac{\tfrac{1}{2}e_n^2 f''(r)}{e_n f''(r)} = e_n - \frac{1}{2}e_n = \frac{1}{2}e_n$$

Thus, e_{n+1} ≈ (1/2)e_n (i.e. linear convergence with constant c = 1/2) when Newton's method is performed on a double root r.
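The halving of the error can be observed directly. As an illustration (in Python; the double-root test function f(x) = (x − 1)² is my own choice, not part of the assignment):

```python
# f(x) = (x - 1)^2 has a double zero at r = 1, so each Newton step should
# roughly halve the error.
f = lambda x: (x - 1) ** 2
df = lambda x: 2 * (x - 1)

x, r = 2.0, 1.0
errs = []
for _ in range(20):
    x = x - f(x) / df(x)
    errs.append(abs(x - r))

# consecutive error ratios e_{n+1}/e_n sit at 1/2
ratios = [errs[i + 1] / errs[i] for i in range(len(errs) - 1)]
print(ratios[:3])  # [0.5, 0.5, 0.5]
```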

Question 4
We want to determine the conditions on α that ensure the iteration

$$x_{n+1} = x_n - \alpha f(x_n) \qquad (5)$$

will converge to a root of f, if started near the root. It suffices to require that the iteration converges (at least) linearly.
First, consider the first-order (linear) Taylor expansion of f(x_n) around its root r:

$$f(x_n) \approx f(r) + \frac{f'(r)}{1!}(x_n - r) + O((x_n - r)^2) = f'(r)(x_n - r) + O((x_n - r)^2) = f'(r)\,e_n + O(e_n^2)$$

using f(r) = 0 (by the definition of a root) and e_n = x_n − r.

Substituting this into Equation 5 and subtracting r from both sides, we get:

$$x_{n+1} - r = x_n - \alpha\left(f'(r)e_n + O(e_n^2)\right) - r$$
$$\implies e_{n+1} = e_n - \alpha\left(f'(r)e_n + O(e_n^2)\right) \approx \left(1 - \alpha f'(r)\right)e_n$$

For convergence, we require |e_{n+1}| ≤ C|e_n| for some constant C < 1. That is, by comparison with the above equation,
we require:

$$|1 - \alpha f'(r)| < 1 \implies 0 < \alpha f'(r) < 2$$

Equivalently, α must have the same sign as f'(r) and satisfy |α| < 2/|f'(r)|; when f'(r) > 0 this reads 0 < α < 2/f'(r).
This is the condition on α to ensure (linear) convergence.
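As an illustration of this condition (a Python sketch; the test function f(x) = x² − 2 is my own choice): the root is r = √2 with f'(r) = 2√2 ≈ 2.83, so convergence is expected for 0 < α < 2/f'(r) ≈ 0.707.

```python
import math

def iterate(alpha, x0=1.5, steps=200):
    """Run x <- x - alpha*f(x) with f(x) = x^2 - 2, starting near the root."""
    x = x0
    for _ in range(steps):
        x = x - alpha * (x * x - 2)
    return x

r = math.sqrt(2)
print(abs(iterate(0.3) - r) < 1e-10)  # True: alpha inside (0, 0.707)
print(abs(iterate(0.6) - r) < 1e-10)  # True: still inside
print(abs(iterate(0.8) - r) > 1e-3)   # True: outside; the iterates settle into
                                      # an oscillation instead of reaching r
```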

Question 5
a.
We want to find the root of the equation:

$$p(x) = 4x^3 - 2x^2 + 3 \qquad (6)$$

using Newton's Method, with an initial guess of x0 = −1. The algorithm used to perform Newton's Method is the one
provided in Matlab Grader with a few modifications (shown at the end of the assignment). The output of Newton's
Method is shown in the table below:
n xn f(xn) en (true)
0 -1.000000000000000 -3.000000000000000 2.3117191415079·10^-1
1 -0.812500000000000 -0.465820312500000 4.3671914150790·10^-2
2 -0.770804195804196 -0.020137886720229 1.976109954986·10^-3
3 -0.768832384255760 -0.000043708433414 4.298406550·10^-6
4 -0.768828085869608 -0.000000000207412 2.0399·10^-11
Table 2. Newton's Method performed on p(x) = 4x^3 − 2x^2 + 3

These results behave as expected: the error at each step is approximately the square of the error at the previous step.
This is particularly evident when an extra column containing the square of the error term e_n is added to the previous table:

n xn f(xn) en (true) en^2
0 -1.000000000000000 -3.000000000000000 2.3117191415079·10^-1 5.3440453892140·10^-2
1 -0.812500000000000 -0.465820312500000 4.3671914150790·10^-2 1.907236085594·10^-3
2 -0.770804195804196 -0.020137886720229 1.976109954986·10^-3 3.9050105541948·10^-6
3 -0.768832384255760 -0.000043708433414 4.298406550·10^-6 1.8476298869083·10^-11
4 -0.768828085869608 -0.000000000207412 2.0399·10^-11 4.161192010000·10^-22
Table 2 (Modified). Newton's Method performed on p(x) = 4x^3 − 2x^2 + 3
Clearly, en+1 ≈ e2n , satisfying quadratic convergence. Furthermore, the root xn converges towards the only real root of
p(x), which is -0.76882808584920981.
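The same iteration can be reproduced with a short Python cross-check (separate from the submitted MATLAB; the reference root value is the one quoted above):

```python
# Newton's method on p(x) = 4x^3 - 2x^2 + 3 from x0 = -1
p = lambda x: 4 * x**3 - 2 * x**2 + 3
dp = lambda x: 12 * x**2 - 4 * x

r = -0.76882808584920981   # real root quoted above
x = -1.0
errs = [abs(x - r)]
for _ in range(4):
    x = x - p(x) / dp(x)
    errs.append(abs(x - r))

# quadratic convergence: each error is roughly the square of the previous one
print(errs[-1] < 1e-10)  # True
```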

b.
We now want to find the root of the equation:

$$p(x) = x^3 + 94x^2 - 389x + 294 \qquad (7)$$

using Newton's Method, with an initial guess of x0 = 2. As p(x) has three real roots, at x = 1, 3 and −98, an initial guess
of x0 = 2 would be expected to converge to either x = 1 or x = 3. However, performing Newton's Method with the same
algorithm as previously, we get:

n xn f(xn) en (for r = 1) en (for r = 3)
0 2 -100 1 1
1 -98 0 99 101
2 -98 0 99 101
Table 3. Newton's Method performed on p(x) = x^3 + 94x^2 − 389x + 294

What can be seen is that, after only one iteration, the method lands exactly on the most distant root, x = −98. The
reason for this becomes clear when p(x) = x^3 + 94x^2 − 389x + 294 is plotted, as below.

As shown in Figure 3, the derivative of p(x) is very close to 0 at x = 2 relative to the scale of the function (p'(2) = −1
while p(2) = −100). Newton's Method approaches a root by finding the zero of the tangent line at the point x_n. When
the derivative is very small at x_n, the tangent line at x_n is almost horizontal and slopes only marginally towards the
x-axis, so it crosses the axis at a point very far from x_n. This overshoot due to a very small first derivative is precisely
what happened here, causing Newton's Method to converge to the root furthest from x = 2 (indeed, the first step is
x_1 = 2 − (−100)/(−1) = −98, which is exactly a root).

The flat region around x = 2 is so narrow that initiating the algorithm at x0 = 2.1 converges the method to x = 3, while
initiating it at x0 = 1.9 converges it to x = 1: at those starting points, the tangent lines are sufficiently non-horizontal
that the method "zooms in" on one of the nearby roots.
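This sensitivity to the starting point can be reproduced with a short Python sketch of Newton's method on p(x) (a cross-check, separate from the MATLAB code at the end of the assignment):

```python
# p(x) = x^3 + 94x^2 - 389x + 294, with roots 1, 3 and -98
p = lambda x: x**3 + 94 * x**2 - 389 * x + 294
dp = lambda x: 3 * x**2 + 188 * x - 389

def newton(x, steps=100, tol=1e-12):
    for _ in range(steps):
        fx = p(x)
        if abs(fx) < tol:
            break
        x = x - fx / dp(x)
    return x

print(round(newton(2.0)))  # -98: the nearly flat tangent at x = 2 overshoots
print(round(newton(2.1)))  # 3
print(round(newton(1.9)))  # 1
```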

Figure 3: Plot of the function p(x) = x^3 + 94x^2 − 389x + 294

% Assignment 1, Question 5 Code (Newton's Method)

% Part a
% f = @(x) 4*x^3 - 2*x^2 + 3;
% df = @(x) 12*x^2 - 4*x;
% [x, fx, err] = newton(f, df, -1, 1.0e-15)

% Part b
f = @(x) x^3 + 94*x^2 - 389*x + 294;
df = @(x) 3*x^2 + (94*2)*x - 389;
[x, fx, err_1, err_2] = newton(f, df, 2, 1.0e-10)

function [x_list, Fx_list, err_1_list, err_2_list] = newton(f, df, x_init, tol)
    format long; format compact
    % Find the root of f(x) using Newton's iteration
    %   x_{n+1} = x_n - f(x_n)/f'(x_n)
    %
    % Input:
    %   f      : the function whose root needs to be found
    %   df     : the derivative of f
    %   x_init : the initial guess
    %   tol    : the solution tolerance, |f(x)| < tol
    %
    % Output:
    %   x_list  : a list of the x_n values
    %   Fx_list : a list containing f(x_n)

    % set the maximum possible number of iterations
    max_n = 100;

    % find the starting values x_0, f(x_0)
    x = x_init;
    Fx = f(x);
    % err = abs(x - (-0.76882808584920981));  % true error for Part a
    err_1 = abs(x - 1);
    err_2 = abs(x - 3);

    % initialise the lists
    x_list = [x];
    Fx_list = [Fx];
    % err_list = [err];
    err_1_list = [err_1];
    err_2_list = [err_2];

    % start iterating
    for n = 1:max_n

        % evaluate the derivative, stopping if it is close to zero
        dFx = df(x);
        if abs(dFx) < eps(0)
            disp('Small Derivative')
            return
        end
        dx = -Fx/dFx;

        % find x_{n+1} and f(x_{n+1})
        x = x + dx;
        Fx = f(x);
        % err = abs(x - (-0.76882808584920981));
        err_1 = abs(x - 1);
        err_2 = abs(x - 3);

        % add the results to the lists
        x_list = [x_list, x];
        Fx_list = [Fx_list, Fx];
        % err_list = [err_list, err];
        err_1_list = [err_1_list, err_1];
        err_2_list = [err_2_list, err_2];

        % exit if the step is small
        if abs(dx) < tol
            return
        end
    end

    % print a warning message if the root was not found
    disp('The method did not converge')
end

Matlab code used for Question 5
