APM2613 Tutorial 01 2022
Working Example
Consider the function
f(x) = x^3 - 7x^2 + 14x - 6,
whose roots are to be approximated using the various methods.
The graph of the function is as shown in Fig. 1 below, produced with the commands
octave:13> f=inline('x.^3-7*x.^2+14*x-6');
octave:14> x=0:0.1:4; plot(x,f(x))
So we have an idea of where the roots are: one in [0,1], and the other two in [2.5,4]. Let us
focus on the one root in [0,1] to demonstrate the various methods.
Fig. 1: Graph of f(x) = x^3 - 7x^2 + 14x - 6
Bisection method:
Here we use 𝑎 = 0, 𝑏 = 1.
By the intermediate value theorem, since f(0) = -6 < 0 and f(1) = 2 > 0, f(x) has a root in [0,1].
Iteration 1: c1 = (a + b)/2 = 0.5 and f(c1) = -0.6250.
So we replace a with c1 because f(a)f(c1) > 0 (i.e. f(a) and f(c1) have the same sign).
Iteration 2: Our new bracket is [c1, b] = [0.5, 1] and
c2 = (c1 + b)/2 = 0.7500 and f(c2) = 0.9844.
Now f(c2)f(c1) < 0, so we replace b with c2 and our new bracket is [c1, c2].
Iteration 3: With a = c1 and b = c2, c3 = (c1 + c2)/2 = 0.6250,
etc.
We can continue in this way iteratively: at each step we evaluate f at the new midpoint, compare its sign with the sign of f at the two endpoints of the current bracket, and replace the endpoint at which f has the same sign as at the midpoint, so that the new bracket still contains a sign change (f(a)f(b) < 0). The midpoints obtained this way are:
 i    c_i
 1    0.5000
 2    0.7500
 3    0.6250
 4    0.5625
 5    0.5938
 6    0.5781
It is apparent that it will take quite a number of iterations for the approximation to converge to the expected root of x = 0.5858 in the given interval.
Note that in this case the root oscillates around the expected root, so iterations will converge
from both sides. In some cases convergence is from one side.
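The bisection steps above can also be sketched as a short program. The tutorial's code is in Octave; this is a Python translation, and the function name `bisect` and its interface are my own, not from the tutorial:

```python
def f(x):
    return x**3 - 7*x**2 + 14*x - 6

def bisect(f, a, b, n):
    """Return the midpoints c1..cn produced by repeatedly halving [a, b]."""
    cs = []
    for _ in range(n):
        c = (a + b) / 2
        cs.append(c)
        if f(a) * f(c) > 0:   # f(a) and f(c) share a sign: root lies in [c, b]
            a = c
        else:                 # sign change between a and c: root lies in [a, c]
            b = c
    return cs

cs = bisect(f, 0, 1, 20)
# cs[:6] reproduces the table above: 0.5, 0.75, 0.625, 0.5625, 0.59375, 0.578125
```

After 20 halvings the bracket has width 2^-20, so the last midpoint agrees with the root 0.5858 to well within 10^-4.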
The natural question to ask is how many iterations we can expect to perform before we reach a certain level of accuracy, say, to within 10^-4. For this we can easily estimate the number of iterations by using the relation
(b - a)/2^N < 10^-4.
With b - a = 1, a little algebra leads to N > log2(10^4) = 13.2877. So we can say at least 14 iterations are needed to reach the required accuracy of within 10^-4.
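This count can be computed directly; a minimal Python check, assuming [a, b] = [0, 1] and tolerance 10^-4:

```python
import math

a, b, tol = 0.0, 1.0, 1e-4
# Smallest integer N with (b - a)/2**N < tol, i.e. N > log2((b - a)/tol).
# Here log2(10**4) = 13.2877... is not an integer, so ceil gives the strict bound.
N = math.ceil(math.log2((b - a) / tol))   # N = 14
```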
Fixed-point method
We now illustrate an approximation of the root using the fixed-point method.
The general idea of the fixed-point scheme is ideally to find an x so that
𝑥 = 𝑔(𝑥)
i.e. to find a value 𝑥 which when substituted in 𝑔(𝑥) gives back the same value.
The iterative formula 𝑔(𝑥) derived from a given root equation 𝑓(𝑥) = 0 is not unique. At a basic
level, 𝑔(𝑥) can be obtained by rearranging 𝑓(𝑥) = 0 such that 𝑥 is isolated to one side of the
equation.
For our example the root equation is 𝑓(𝑥) = 𝑥 3 − 7𝑥 2 + 14𝑥 − 6 = 0, which we can arrange in
more than one way to isolate 𝑥 to one side.
e.g. x = (6 - x^3 + 7x^2)/14,  x = (7x^2 - 14x + 6)/x^2,  x = sqrt((7x^2 - 14x + 6)/x), ... are a few possibilities.
What is important to note though is that these schemes can lead to different results: converging
to different roots or not converging to any root (diverging).
Let us demonstrate some results using the fixed-point scheme x = (6 - x^3 + 7x^2)/14, in which case g(x) = (6 - x^3 + 7x^2)/14. We show the results of iteratively using
x_{i+1} = (6 - x_i^3 + 7x_i^2)/14,
for different values of the initial value x0
(recall that the known roots are 3.4142, 3.0000 and 0.5858)
g=inline('(6-x.^3+7*x.^2)/14');
N=20;      % maximum number of iterations
x=0.5;     % initial value x0
for i=1:N
  x(i+1)=g(x(i));
end
x          % display the iterates
It seems the chosen initial values converge to the roots 0.5858 and 3.4142 after different numbers of iterations. For the 20 iterations performed with the various initial values we get
- x0 = 2.5 seems to converge to 0.5858 after more than 20 iterations
- x0 = 3.2 seems to converge to 3.4142 after just over 20 iterations
- x0 = 2.8 converges to 0.5858, but after 43 iterations
Note that this scheme does not seem to pick up the root 𝑥 = 3.0 using the chosen initial values.
More iterations and initial values would have to be used to possibly converge to this root.
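The runs above can be reproduced with a short Python sketch of the same iteration (the helper name `fixed_point` is my own):

```python
def g(x):
    return (6 - x**3 + 7*x**2) / 14

def fixed_point(g, x0, n):
    """Iterate x_{i+1} = g(x_i) a fixed number of times."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

root = fixed_point(g, 0.5, 100)     # settles at 0.5858 (2 - sqrt(2))
root2 = fixed_point(g, 3.2, 2000)   # settles at 3.4142 (2 + sqrt(2)), but slowly
```

The second run needs far more iterations because |g'| is about 0.92 near x = 3.4142, so the error shrinks by only about 8% per step.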
Remark:
A note on convergence of the fixed-point method. The question that we may ask is how can we
know that the sequence generated by a given iterative scheme 𝑥𝑖+1 = 𝑔(𝑥𝑖 ) will converge to
some root, and what initial value can we choose to ensure convergence.
The answer to this question is found in Theorem 2.4 (the so-called fixed-point theorem), which states that if g is continuous on [a, b] with g(x) in [a, b] for all x in [a, b], and |g'(x)| <= k < 1 on (a, b), then g has a unique fixed point in [a, b] and the iteration x_{i+1} = g(x_i) converges to it for any initial value x0 in [a, b].
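For our g(x) = (6 - x^3 + 7x^2)/14 the contraction condition can be checked directly, since g'(x) = (-3x^2 + 14x)/14. The check below is my own illustration, not part of the tutorial:

```python
def dg(x):
    return (-3*x**2 + 14*x) / 14   # derivative of g(x) = (6 - x^3 + 7x^2)/14

# |g'| < 1 near the roots 0.5858 and 3.4142, so convergence is possible there,
# but |g'| > 1 at x = 3, consistent with the scheme never finding that root.
vals = {x: abs(dg(x)) for x in (0.5858, 3.0, 3.4142)}
```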
Exercise:
Try other iterative formulas with different initial values to approximate the root of the given
function 𝑓(𝑥).
Newton’s method
As we can see from the above method, the choice of an iterative scheme g(x) and a suitable initial value x0 are critical in obtaining convergence of a scheme. The emphasis on g(x) suggests that if a 'smart' way of formulating g(x) can be found, then convergence can be managed, if not achieved.
One such way of formulating g(x) is Newton's formula, which is based on using the tangent to the function and the intercept of this tangent with the x-axis. Newton's iterative scheme is given by
x_{i+1} = x_i - f(x_i)/f'(x_i),
for a given initial value x0.
A simple code would be
f=inline('x.^3-7*x.^2+14*x-6');
df=inline('3*x.^2-14*x+14');  % derivative of f
…                             % set N and the initial value x(1)
for i=1:N
  x(i+1)=x(i)-f(x(i))/df(x(i));
end
Using the same initial values as for the fixed-point method we obtain
How interesting!
The various initial values led to all three known roots.
More than that, the iterative scheme converged in far fewer iterations:
- 𝑥0 = 0.5 to 0.5858 in 2 iterations
- 𝑥0 = 2.5 to 3.0000 in 4 iterations
- 𝑥0 = 3.2 to 3.4142 in 4 iterations
- 𝑥0 = 3.6 to 3.4142 in 4 iterations
Notice the speed at which the scheme converges to the known values. This is in line with the
quadratic convergence discussed on p69 of the textbook. Notwithstanding the possible
complications of Newton’s method (see Lesson 2), the method generally converges much faster
than the other methods.
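The same runs can be sketched in Python (the function names are mine, not the tutorial's):

```python
def f(x):
    return x**3 - 7*x**2 + 14*x - 6

def df(x):
    return 3*x**2 - 14*x + 14   # f'(x)

def newton(f, df, x0, n):
    """Iterate Newton's scheme x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(n):
        x = x - f(x) / df(x)
    return x

r1 = newton(f, df, 0.5, 10)   # -> 0.5858 (2 - sqrt(2))
r2 = newton(f, df, 2.5, 10)   # -> 3.0000
r3 = newton(f, df, 3.2, 10)   # -> 3.4142 (2 + sqrt(2))
```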
The secant method replaces the derivative in Newton's formula with a difference quotient, giving the scheme
x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1})),
with x0 and x1 given.
Using our famous example root equation f(x) = x^3 - 7x^2 + 14x - 6 = 0, we apply the two methods and compare. A simple code would be
f=inline('x.^3-7*x.^2+14*x-6');
x=[0 1];
for i=2:10
x(i+1)=x(i)-f(x(i))*(x(i)-x(i-1))/(f(x(i))-f(x(i-1)));
end
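An equivalent Python sketch of the secant loop, with a guard for the 0/0 division that produces NaN once the iterates repeat (the function name `secant` is mine):

```python
def f(x):
    return x**3 - 7*x**2 + 14*x - 6

def secant(f, x0, x1, n, tol=1e-12):
    """Secant iteration from x0, x1; stops on convergence or a 0/0 division."""
    xs = [x0, x1]
    for _ in range(n):
        f0, f1 = f(xs[-2]), f(xs[-1])
        if f1 == f0:              # the Octave loop would divide 0/0 -> NaN here
            break
        xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
        if abs(xs[-1] - xs[-2]) < tol:
            break
    return xs

xs = secant(f, 0, 1, 20)
# xs[2], xs[3], ... reproduce the first column of the results: 0.75, 0.5077, ...
```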
Secant method:
The table shows the iterates x_i for the initial pairs (x0, x1) heading each column:

 i    (0, 1)    (0.5, 3.6)   (2.5, 4.0)   (2.8, 3.2)   (3.2, 3.6)
 0    0.0000     0.5000       2.5000       2.8000       3.2000
 1    1.0000     3.6000       4.0000       3.2000       3.6000
 2    0.7500     2.5161       1.3333       3.0833       3.3000
 3    0.5077     4.3242      13.0000       2.8971       3.3650
 4    0.5961     2.1024       1.3079       3.0165       3.4403
 5    0.5864     0.6436       1.2825       3.0028       3.4105
 6    0.5858     0.2529      -1.5516       2.9999       3.4140
 7    0.5858     0.5985       1.1404       3.0000       3.4142
 8    0.5858     0.5885       1.0157       3.0000       3.4142
 9    0.5858     0.5858       0.1623       3.0000       3.4142
10    0.5858     0.5858       0.7224       3.0000       3.4142
11    0.5858     0.5858       0.6236       3.0000       NaN
12    NaN        0.5858       0.5814       3.0000       NaN
Regula falsi Method
Using some of the same initial intervals as used for the secant method, and a modification of the above code to include an if…else condition to compare signs, we get the following results for the regula falsi method:
The iterates x_i for the initial brackets heading each column:

 i    (0, 1)    (0.5, 3.6)   (2.8, 3.2)   (3.2, 3.6)
 0    0.0000     0.5000       2.8000       3.2000
 1    1.0000     3.6000       3.2000       3.6000
 2    0.7500     2.5161       3.0833       3.3000
 3    0.6443     1.3608       3.0261       3.3650
 4    0.6058     0.6664       3.0073       3.3954
 5    0.5925     0.5911       3.0019       3.4074
 6    0.5880     0.5861       3.0005       3.4118
 7    0.5865     0.5858       3.0001       3.4134
 8    0.5860     0.5858       3.0000       3.4139
 9    0.5859     0.5858       3.0000       3.4141
10    0.5858     0.5858       3.0000       3.4142
11    0.5858     0.5858       3.0000       3.4142
12    0.5858     0.5858       3.0000       3.4142
Notice that the subsequent iterates by the secant and regula falsi methods differ because of the
bracketing condition used in the regula falsi method.
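A Python sketch of regula falsi, showing the if…else sign comparison the text refers to (the function name `regula_falsi` is mine):

```python
def f(x):
    return x**3 - 7*x**2 + 14*x - 6

def regula_falsi(f, a, b, n):
    """Secant formula on [a, b], keeping the endpoint that preserves the sign change."""
    xs = [a, b]
    for _ in range(n):
        c = b - f(b) * (b - a) / (f(b) - f(a))
        xs.append(c)
        if f(a) * f(c) < 0:   # sign change in [a, c]: move b
            b = c
        else:                 # sign change in [c, b]: move a
            a = c
    return xs

xs = regula_falsi(f, 0, 1, 12)
# reproduces the first column above: 0.75, 0.6443, 0.6058, ...
```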
Muller’s Method
Polynomials often have complex roots, which may not be picked up by the methods used so far since geometrically those methods rely on values on the x-axis. Starting with p0 = 0, p1 = 2, p2 = 2.8 and following Algorithm 2.8 we get
 i     0        1        2        3        4        5
 p_i   0        2.0000   2.8000   2.8669   2.9806   2.9806
This procedure converges to the approximation 𝑥 = 2.9806 after 2 iterations for the selected
initial values.
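A sketch of one way to implement the scheme (variable names follow Algorithm 2.8 loosely and are otherwise my own; complex arithmetic via `cmath` lets the same code reach complex roots):

```python
import cmath

def f(x):
    return x**3 - 7*x**2 + 14*x - 6

def muller(f, p0, p1, p2, n, tol=1e-10):
    """Muller iteration: fit a parabola through the last three points."""
    ps = [p0, p1, p2]
    for _ in range(n):
        h1, h2 = p1 - p0, p2 - p1
        d1 = (f(p1) - f(p0)) / h1          # first divided differences
        d2 = (f(p2) - f(p1)) / h2
        d = (d2 - d1) / (h2 + h1)          # second divided difference
        b = d2 + h2 * d
        D = cmath.sqrt(b*b - 4*f(p2)*d)    # discriminant, possibly complex
        E = b - D if abs(b - D) > abs(b + D) else b + D   # larger denominator
        h = -2*f(p2) / E
        p0, p1, p2 = p1, p2, p2 + h
        ps.append(p2)
        if abs(h) < tol:
            break
    return ps

ps = muller(f, 0, 2, 2.8, 20)
# ps[3], ps[4] reproduce 2.8669 and 2.9806; further steps home in on the root 3.0
```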
This tutorial should give you a kick-start in working on the assignment problems.
Have a go!