Lecture 5
The Bisection Method and Locating Roots
Recall the bisection method. Suppose that c = f (a) < 0 and d = f (b) > 0. If f is continuous, then obviously
it must be zero at some x∗ between a and b. The bisection method then consists of looking halfway between
a and b for the zero of f, i.e. let x = (a + b)/2 and evaluate y = f(x). Unless this is zero, the signs of
c, d and y tell us which half contains the root and should become the new interval. In particular, if c and y have the same
sign, then [x, b] should be the new interval, but if c and y have different signs, then [a, x] should be the new
interval. (See Figure 5.1.)
Figure 5.1: The bisection method: successive intervals [a0, b0], [a1, b1], [a2, b2] with midpoints x0, x1, x2.
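As a concrete one-step illustration, take, say, f(x) = x^2 − 2 on [1, 2]. Then c = f(1) = −1 < 0 and d = f(2) = 2 > 0, the midpoint is x = 1.5 and y = f(1.5) = 0.25 > 0, so c and y have different signs and the new interval is [1, 1.5]. In MATLAB this single step is:

f = @(x) x^2 - 2;        % illustrative function with a root at sqrt(2)
a = 1; b = 2;
x = (a + b)/2;           % midpoint, 1.5
y = f(x);                % 0.25, same sign as f(b), so the root lies in [a, x]
b = x;                   % new interval is [1, 1.5]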
Deciding to do different things in different situations in a program is called flow control. The most common
way to do this is the if ... else ... end statement, which is an extension of the if ... end statement
we have used already.
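For example, the decision described above could be coded with if ... else ... end. A minimal sketch, assuming a, b, c = f(a), x and y = f(x) have already been computed:

if sign(y) == sign(c)    % f(x) has the same sign as f(a): the root is in [x, b]
    a = x;
else                     % signs differ: the root is in [a, x]
    b = x;
end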
One good thing about the bisection method, which we don’t have with Newton’s method, is that we always
know that the actual solution x∗ is inside the current interval [a, b], since f (a) and f (b) have different signs.
This allows us to be sure about what the maximum error can be. Precisely, the error is always less than half
of the length of the current interval [a, b], i.e.

|x − x∗| < (b − a)/2,

where x = (a + b)/2 is the midpoint of the current interval.
The following function program (available to download as mybisect.m) does n iterations of the bisection
method and returns not only the final value, but also the maximum possible error:
function [x e] = mybisect(f, a, b, n)
% function [x e] = mybisect(f, a, b, n)
% Does n iterations of the bisection method for a function f
% Inputs: f -- a function
%         a, b -- left and right edges of the interval
%         n -- the number of bisections to do.
% Outputs: x -- the estimated solution of f(x) = 0
%          e -- an upper bound on the error
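% (The rest of the program is a minimal sketch consistent with the header
%  above; the downloadable mybisect.m may differ in its details.)
c = f(a);                      % sign reference at the left endpoint
if c*f(b) > 0
    error('f(a) and f(b) must have opposite signs')
end
for i = 1:n
    x = (a + b)/2;             % bisect the current interval
    y = f(x);                  % evaluate at the midpoint
    if sign(y) == sign(c)      % same sign as f(a): root is in [x, b]
        a = x; c = y;
    else                       % signs differ: root is in [a, x]
        b = x;
    end
end
x = (a + b)/2;                 % midpoint of the final interval
e = (b - a)/2;                 % the error is at most half its length
end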
Another important aspect of bisection is that it always works. We saw that Newton’s method can fail to
converge to x∗ if x0 is not close enough to x∗ . In contrast, the current interval [a, b] will always be decreased
by a factor of 2 at each step and so it will always eventually shrink down as small as you wish.
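Since the interval is halved at each step, after n bisections the final interval has length (b − a)/2^n and the error of its midpoint is at most (b − a)/2^(n+1). A small sketch of how one might predict the number of steps needed for a given tolerance (the values of a, b and tol here are only illustrative):

a = 0; b = 1; tol = 1e-8;                % illustrative interval and tolerance
n = ceil(log2((b - a)/tol) - 1);         % smallest n with (b-a)/2^(n+1) <= tol
fprintf('%d bisections are enough\n', n)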
The bisection method and Newton’s method are both used to obtain closer and closer approximations of a
solution, but both require starting places. The bisection method requires two points a and b that have a root
between them, and Newton’s method requires one point x0 which is reasonably close to a root. How do you
come up with these starting points? It depends. If you are solving an equation once, then the best thing to
do first is to just graph it. From an accurate graph you can see approximately where the graph crosses zero.
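For instance, a quick way to graph the function from Exercise 5.1 (an illustrative sketch; the plotting range is arbitrary):

f = @(x) 2*x.^3 + 3*x - 1;       % the function from Exercise 5.1
x = linspace(-2, 2, 200);        % sample the interval of interest
plot(x, f(x)), grid on           % the curve crosses zero near x = 0.3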
In other situations you are not solving an equation just once, but have to solve the same equation
many times with different coefficients. This happens often when you are developing software
for a specific application. In this situation the first thing you want to take advantage of is the natural domain
of the problem, i.e. on what interval is a solution physically reasonable. If that is known, then it is easy
to get close to the root by simply checking the sign of the function at a fixed number of points inside the
interval. Whenever the sign changes from one point to the next, there is a root between those points. The
following program will look for the roots of a function f on a specified interval [a0 , b0 ].
function [a, b] = myrootfind(f, a0, b0)
% function [a, b] = myrootfind(f, a0, b0)
% Looks for subintervals where the function changes sign
% Inputs: f -- a function
%         a0 -- the left edge of the domain
%         b0 -- the right edge of the domain
% Outputs: a -- an array, giving the left edges of subintervals
%               on which f changes sign
%          b -- an array, giving the right edges of the subintervals
n = 1001;                        % number of test points to use
a = [];                          % start empty arrays
b = [];
% split the interval into n-1 intervals and evaluate at the break points
x = linspace(a0, b0, n);
y = f(x);
% loop through the intervals
for i = 1:(n-1)
    if y(i)*y(i+1) <= 0          % the sign changed, record it
        a = [a x(i)];
        b = [b x(i+1)];
    end
end
if size(a,1) == 0
    warning('no roots were found')
end
end
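As an illustration (not part of the program above), one might bracket the roots of the cosine on [0, 10]; note that f must accept a vector, since myrootfind calls f(x) on an array:

f = @(x) cos(x);                % illustrative function with roots at pi/2, 3*pi/2, 5*pi/2 on [0, 10]
[a, b] = myrootfind(f, 0, 10)   % each pair a(i), b(i) brackets one sign change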
The final situation is writing a program that will look for roots with no given information. This is a difficult
problem and one that is not often encountered in actual applications.
Once a root has been located on an interval [a, b], these a and b can serve as the beginning points for the
bisection and secant methods (see the next section). For Newton’s method one would want to choose x0
between a and b. One obvious choice would be to let x0 be the bisector of a and b, i.e. x0 = (a + b)/2. An
even better choice would be to use the secant method to choose x0 .
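Continuing the illustrative example above, the brackets returned by myrootfind can be refined with mybisect, and their midpoints can serve as starting points x0 for Newton's method:

[a, b] = myrootfind(f, 0, 10);
for i = 1:length(a)
    [x, e] = mybisect(f, a(i), b(i), 20);        % refine the i-th bracketed root
    fprintf('root near %.6f, error at most %g\n', x, e)
end
x0 = (a(1) + b(1))/2;                            % a reasonable x0 for Newton's method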
Exercises
5.1 Modify mybisect to solve until the absolute error is bounded by a given tolerance. Use a while loop
to do this. Run your program on the function f(x) = 2x^3 + 3x − 1 with starting interval [0, 1] and a
tolerance of 10^-8. How many steps does the program use to achieve this tolerance? (You can count
the steps by adding 1 to a counting variable i in the loop of the program.) How big is the final residual
f (x)? Turn in your program and a brief summary of the results.
5.2 Perform 3 iterations of the bisection method on the function f(x) = x^3 − 4, with starting interval
[1, 3]. (By hand, but use a calculator.) Calculate the errors and percentage errors of x0, x1, x2, and
x3. Compare the errors with those in exercise 3.2.