5
Bracketing Methods
This chapter on roots of equations deals with methods that exploit the fact that a function
typically changes sign in the vicinity of a root. These techniques are called bracketing
methods because two initial guesses for the root are required. As the name implies, these
guesses must “bracket,” or be on either side of, the root. The particular methods described
herein employ different strategies to systematically reduce the width of the bracket and,
hence, home in on the correct answer.
As a prelude to these techniques, we will briefly discuss graphical methods
for depicting functions and their roots. Beyond their utility for providing rough guesses,
graphical techniques are also useful for visualizing the properties of the functions and
the behavior of the various numerical methods.
c        f(c)
4        34.190
8        17.712
12       6.114
16       −2.230
20       −8.368
These points are plotted in Fig. 5.1. The resulting curve crosses the c axis between 12 and
16. Visual inspection of the plot provides a rough estimate of the root of 14.75. The validity
of the graphical estimate can be checked by substituting it into Eq. (E5.1.1) to yield
f(14.75) = (668.06/14.75)(1 − e^(−0.146843(14.75))) − 40 = 0.100
which is close to zero. It can also be checked by substituting it into Eq. (PT2.3) along
with the parameter values from this example to give
v = (9.81(68.1)/14.75)(1 − e^(−(14.75/68.1)(10))) = 40.100
which is very close to the desired fall velocity of 40 m/s.
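Both checks are easy to reproduce. The following Python sketch uses the parameter values taken from the example (g = 9.81 m/s², m = 68.1 kg, t = 10 s, desired velocity 40 m/s); the function f(c) is the residual form implied by the substitution shown above:

```python
import math

# Parameter values from the example: g = 9.81 m/s^2, m = 68.1 kg,
# t = 10 s, and a desired fall velocity of 40 m/s.
g, m, t, v_target = 9.81, 68.1, 10.0, 40.0

def v(c):
    # Fall velocity from Eq. (PT2.3) as a function of the drag coefficient c.
    return (g * m / c) * (1.0 - math.exp(-(c / m) * t))

def f(c):
    # Residual used in Eq. (E5.1.1): zero when c yields the desired velocity.
    return v(c) - v_target

print(f(14.75))   # about 0.100, close to zero
print(v(14.75))   # about 40.100 m/s
```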
FIGURE 5.1
The graphical approach for determining the roots of an equation. [Plot of f(c) versus c; the curve crosses the c axis (the root) between c = 12 and c = 16.]
5.1 GRAPHICAL METHODS
Graphical techniques are of limited practical value because they are not precise. However, graphical methods can be utilized to obtain rough estimates of roots. These estimates can be employed as starting guesses for numerical methods discussed in this and the next chapter.
Aside from providing rough estimates of the root, graphical interpretations are important tools for understanding the properties of the functions and anticipating the pitfalls of the numerical methods. For example, Fig. 5.2 shows a number of ways in which roots can occur (or be absent) in an interval prescribed by a lower bound xl and an upper bound xu. Figure 5.2b depicts the case where a single root is bracketed by negative and positive values of f(x). However, Fig. 5.2d, where f(xl) and f(xu) are also on opposite sides of the x axis, shows three roots occurring within the interval. In general, if f(xl) and f(xu) have opposite signs, there are an odd number of roots in the interval. As indicated by Fig. 5.2a and c, if f(xl) and f(xu) have the same sign, there are either no roots or an even number of roots between the values.
Although these generalizations are usually true, there are cases where they do not hold. For example, functions that are tangential to the x axis (Fig. 5.3a) and discontinuous functions (Fig. 5.3b) can violate these principles. An example of a function that is tangential to the axis is the cubic equation f(x) = (x − 2)(x − 2)(x − 4). Notice that x = 2 makes two terms in this polynomial equal to zero. Mathematically, x = 2 is called a multiple root. At the end of Chap. 6, we will present techniques that are expressly designed to locate multiple roots.
The existence of cases of the type depicted in Fig. 5.3 makes it difficult to develop general computer algorithms guaranteed to locate all the roots in an interval. However, when used in conjunction with graphical approaches, the methods described in the following sections are extremely useful for this purpose.

FIGURE 5.2
Illustration of a number of general ways that a root may occur in an interval prescribed by a lower bound xl and an upper bound xu. Parts (a) and (c) indicate that if both f(xl) and f(xu) have the same sign, either there will be no roots or there will be an even number of roots within the interval. Parts (b) and (d) indicate that if the function has different signs at the end points, there will be an odd number of roots in the interval.

FIGURE 5.3
Illustration of some exceptions to the general cases depicted in Fig. 5.2. (a) Multiple root that occurs when the function is tangential to the x axis. For this case, although the end points are of opposite signs, there are an even number of axis intersections for the interval. (b) Discontinuous function where end points of opposite sign bracket an even number of roots. Special strategies are required for determining the roots for these cases.
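The sign test underlying these generalizations is a one-line check. The Python sketch below (an illustration, not code from the text) applies it to the multiple-root cubic discussed above: the test flags an interval around the simple root at x = 4, but misses the multiple root at x = 2 because the function touches the axis there without crossing it.

```python
def sign_change(f, xl, xu):
    # True if f(xl) and f(xu) have opposite signs, i.e., the interval
    # [xl, xu] brackets an odd number of roots (barring exceptions).
    return f(xl) * f(xu) < 0

# The multiple-root cubic from the text: f(x) = (x - 2)(x - 2)(x - 4).
f = lambda x: (x - 2.0) * (x - 2.0) * (x - 4.0)

# An interval around the simple root at x = 4 passes the test ...
print(sign_change(f, 3.0, 5.0))   # True

# ... but an interval around the multiple root at x = 2 does not.
print(sign_change(f, 1.0, 3.0))   # False
```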
5.2 THE BISECTION METHOD
FIGURE 5.5
Step 1: Choose lower xl and upper xu guesses for the root such that the function changes
sign over the interval. This can be checked by ensuring that f(xl)f(xu) < 0.
Step 2: An estimate of the root xr is determined by
xr = (xl + xu)/2
Step 3: Make the following evaluations to determine in which subinterval the root lies:
(a) If f(xl)f(xr) < 0, the root lies in the lower subinterval. Therefore, set xu = xr and
return to step 2.
(b) If f(xl)f(xr) > 0, the root lies in the upper subinterval. Therefore, set xl = xr and
return to step 2.
(c) If f(xl)f(xr) = 0, the root equals xr; terminate the computation.
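The three steps of Fig. 5.5 translate directly into a short routine. The Python sketch below is one possible rendering; the iteration cap and the percent-relative-error stopping test are assumptions added so the loop terminates, since the boxed algorithm simply returns to step 2:

```python
import math

def bisect(f, xl, xu, es=0.01, max_it=50):
    # Bisection per Fig. 5.5. es is a stopping criterion (assumed here)
    # on the approximate percent relative error.
    if f(xl) * f(xu) > 0:                    # Step 1: must bracket a root
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_it):
        xr_old = xr
        xr = (xl + xu) / 2.0                 # Step 2: midpoint estimate
        if xr != 0 and abs((xr - xr_old) / xr) * 100.0 < es:
            break
        test = f(xl) * f(xr)                 # Step 3: pick the subinterval
        if test < 0:
            xu = xr                          # (a) root in lower subinterval
        elif test > 0:
            xl = xr                          # (b) root in upper subinterval
        else:
            break                            # (c) f(xr) == 0: exact root
    return xr

# The falling-parachutist function from the earlier example.
f = lambda c: (9.81 * 68.1 / c) * (1 - math.exp(-(c / 68.1) * 10)) - 40
print(bisect(f, 12, 16))   # converges toward the root near 14.80
```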
FIGURE 5.6
A graphical depiction of the bisection method. This plot conforms to the first three iterations from Example 5.3.
Therefore, the root is between 14 and 15. The upper bound is redefined as 15, and the
root estimate for the third iteration is calculated as

xr = (14 + 15)/2 = 14.5

which represents a true percent relative error of εt = 2.0%. The method can be repeated until the result is accurate enough to satisfy your needs.
In the previous example, you may have noticed that the true error does not decrease
with each iteration. However, the interval within which the root is located is halved with
each step in the process. As discussed in the next section, the interval width provides an
exact estimate of the upper bound of the error for the bisection method.
Recall that the true percent relative error for the root estimate of 15 was 1.3%. Therefore,
εa is greater than εt. This behavior is manifested for the other iterations:

Iteration   xl       xu        xr         εa (%)    εt (%)
1           12       16        14         —         5.413
2           14       16        15         6.667     1.344
3           14       15        14.5       3.448     2.035
4           14.5     15        14.75      1.695     0.345
5           14.75    15        14.875     0.840     0.499
6           14.75    14.875    14.8125    0.422     0.077
Thus, when εa falls below the stopping criterion, the computation can be terminated. The discrepancy between εa and εt is due to the fact that, for bisection, the true root can lie anywhere within the bracketing interval. The true and approximate errors are far apart when the interval happens to be centered on the true root.
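The claim that the bracket bounds the error can be checked numerically. In this Python sketch, the true root is taken as approximately 14.8011, a value inferred from the εt values above rather than stated in this excerpt; the true error never exceeds half the current bracket width:

```python
import math

f = lambda c: (9.81 * 68.1 / c) * (1 - math.exp(-(c / 68.1) * 10)) - 40
true_root = 14.8011   # approximate value implied by the tabulated errors

xl, xu = 12.0, 16.0
for i in range(6):
    xr = (xl + xu) / 2.0
    half_width = (xu - xl) / 2.0
    # The true error is bounded by half the current bracket width.
    assert abs(xr - true_root) <= half_width
    if f(xl) * f(xr) < 0:
        xu = xr
    else:
        xl = xr
    print(i + 1, xr, half_width)
```

The successive xr values (14, 15, 14.5, 14.75, 14.875, 14.8125) reproduce the iterates tabulated above.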
5.3 THE FALSE-POSITION METHOD
xr = xu − [f(xu)(xl − xu)] / [f(xl) − f(xu)]        (5.7)
FIGURE 5.12
A graphical depiction of the method of false position. Similar triangles used to derive the formula for the method are shaded.
Cross-multiply Eq. (5.6) to yield

f(xl)(xr − xu) = f(xu)(xr − xl)

Collect terms and rearrange:

xr = [xu f(xl) − xl f(xu)] / [f(xl) − f(xu)]

Then add and subtract xu on the right-hand side and collect terms to obtain Eq. (5.7).
This is the false-position formula. The value of xr computed with Eq. (5.7) then replaces
whichever of the two initial guesses, xl or xu, yields a function value with the same sign
as f(xr). In this way, the values of xl and xu always bracket the true root. The process is
repeated until the root is estimated adequately. The algorithm is identical to the one for
bisection (Fig. 5.5) with the exception that Eq. (5.7) is used for step 2. In addition, the
same stopping criterion [Eq. (5.2)] is used to terminate the computation.
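Since the algorithm is identical to bisection except for step 2, a bisection routine needs only one changed line. The Python sketch below (loop structure and stopping test are assumptions, as with any rendering of the boxed algorithm) uses Eq. (5.7) for the root estimate:

```python
import math

def false_position(f, xl, xu, es=0.01, max_it=50):
    # Identical to bisection except that the root estimate comes from
    # Eq. (5.7) rather than the interval midpoint.
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_it):
        xr_old = xr
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))   # Eq. (5.7)
        if xr != 0 and abs((xr - xr_old) / xr) * 100.0 < es:
            break
        if f(xl) * f(xr) < 0:
            xu = xr        # root lies in the lower subinterval
        else:
            xl = xr        # root lies in the upper subinterval
    return xr

# The same falling-parachutist function as in the earlier examples.
f = lambda c: (9.81 * 68.1 / c) * (1 - math.exp(-(c / 68.1) * 10)) - 40
print(false_position(f, 12, 16))   # converges toward the root near 14.80
```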
First iteration:

xl = 12    f(xl) = 6.1139
xu = 16    f(xu) = −2.2303

xr = 16 − [−2.2303(12 − 16)] / [6.1139 − (−2.2303)] = 14.9309

which has a true relative error of 0.88 percent.
Second iteration:

f(xl)f(xr) = −1.5376
Therefore, the root lies in the first subinterval, and xr becomes the upper limit for the
next iteration, xu = 14.9309:

xl = 12         f(xl) = 6.1139
xu = 14.9309    f(xu) = −0.2515

xr = 14.9309 − [−0.2515(12 − 14.9309)] / [6.1139 − (−0.2515)] = 14.8151
which has true and approximate relative errors of 0.09 and 0.78 percent. Additional
iterations can be performed to refine the estimate of the root.
A feeling for the relative efficiency of the bisection and false-position methods
can be appreciated by referring to Fig. 5.13, where we have plotted the true percent
relative errors for Examples 5.4 and 5.5. Note how the error for false position decreases
much faster than for bisection because of the more efficient scheme for root
location in the false-position method.
Recall in the bisection method that the interval between xl and xu grew smaller
during the course of a computation. The interval, as defined by Δx/2 = |xu − xl|/2
for the first iteration, therefore provided a measure of the error for this approach. This
is not the case
FIGURE 5.13
Comparison of the relative errors of the bisection and the false-position methods. [Semilog plot of true percent relative error versus iterations; the false-position curve drops well below the bisection curve.]
for the method of false position because one of the initial guesses may stay fixed
throughout the computation as the other guess converges on the root. For instance, in
Example 5.5 the lower guess xl remained at 12 while xu converged on the root. For
such cases, the interval does not shrink but rather approaches a constant value.
Example 5.5 suggests that Eq. (5.2) represents a very conservative error criterion.
In fact, Eq. (5.2) actually constitutes an approximation of the discrepancy of the previous
iteration. This is because for a case such as Example 5.5, where the method is converging
quickly (for example, the error is being reduced nearly an order of magnitude per
iteration), the root for the present iteration xr^new is a much better estimate of the true value
than the result of the previous iteration xr^old. Thus, the quantity in the numerator of Eq. (5.2)
actually represents the discrepancy of the previous iteration. Consequently, we are assured
that satisfaction of Eq. (5.2) ensures that the root will be known with greater accuracy
than the prescribed tolerance. However, as described in the next section, there are cases
where false position converges slowly. For these cases, Eq. (5.2) becomes unreliable, and
an alternative stopping criterion must be developed.
Thus, after five iterations, the true error is reduced to less than 2 percent. For
false position, a very different outcome is obtained:
FIGURE 5.14
Plot of f(x) = x^10 − 1, illustrating slow convergence of the false-position method.
Insight into these results can be gained by examining a plot of the function. As in Fig. 5.14, the curve violates the premise upon which false position was based; that is, if f(xl) is much closer to zero than f(xu), then the root should be much closer to xl than to xu. Because of the shape of the present function, the opposite is true.
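This slow convergence is easy to reproduce. The Python sketch below (the bracket [0, 1.3] is an assumption chosen for illustration) runs 20 iterations of each method on f(x) = x^10 − 1. Bisection closes in on the root at x = 1, while false position creeps toward it from the left with its upper guess pinned at 1.3, because the nearly flat curve keeps f(xl) close to zero relative to f(xu):

```python
f = lambda x: x ** 10 - 1.0   # root at x = 1

def bisect_step(xl, xu):
    # Halve the bracket, keeping the sign change.
    xr = (xl + xu) / 2.0
    return (xl, xr) if f(xl) * f(xr) < 0 else (xr, xu)

def falsepos_step(xl, xu):
    # Eq. (5.7): intersection of the chord with the x axis.
    xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
    return (xl, xr) if f(xl) * f(xr) < 0 else (xr, xu)

bl, bu = 0.0, 1.3   # bisection bracket
fl, fu = 0.0, 1.3   # false-position bracket
for _ in range(20):
    bl, bu = bisect_step(bl, bu)
    fl, fu = falsepos_step(fl, fu)

print(abs((bl + bu) / 2.0 - 1.0))   # bisection: tiny true error
print(abs(fl - 1.0), 1.3 - fu)      # false position: xl still far from 1,
                                    # and xu has never moved from 1.3
```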
PROBLEMS