False Position by Muhiemin Mazin
“False-Position Method”

A report submitted to the Petroleum Engineering Department of the University of Karbala

Prepared by: Muhiemin Mazen
Supervised by: Dr. Falah Hasan
Numerical methods Page |2
Contents

Introduction
Halting Conditions
The Effect of Non-linear Functions
Theory
Iteration Process
Halting Conditions
Error Analysis
Application
Example 1
Example 2
Reference
Introduction
The false-position method is a modification of the bisection method: if it is
known that a root lies on [a, b], then it is reasonable to approximate the
function on that interval by the line interpolating the points (a, f(a)) and (b, f(b)).
In that case, why not use the root of this linear interpolation as our next
approximation to the root? The bisection method, by contrast, simply chooses the
midpoint as the next approximation; consider, for example, the function in Figure (1).
Using the root of the interpolating line should, and usually does, give better
approximations of the root, especially when approximating the function by a linear
one is valid. This method is called the false-position method, also known as
regula falsi. Later, we look at a case where the false-position method fails
because the function is highly non-linear.
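The iterate used by the false-position method follows directly from the linear interpolation described above. The line through (a, f(a)) and (b, f(b)) is

```latex
y = f(a) + \frac{f(b) - f(a)}{b - a}\,(x - a),
```

and setting y = 0 and solving for x gives the next approximation

```latex
c = a - f(a)\,\frac{b - a}{f(b) - f(a)} = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)},
```

which is the formula used in the iteration process described later.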
Halting Conditions
The halting conditions for the false-position method are different from those of the
bisection method. If you view the sequence of iterations of the false-position method
in Figure (3), you will note that only the left bound is ever updated: because the
function is concave up, the left bound is the only one that will ever move.
Thus, instead of checking the width of the interval, we check the change in the
endpoints to determine when to stop.
Theory
Iteration Process
Given the interval [a, b], define c = (a f(b) − b f(a))/(f(b) − f(a)). Then
• if f(c) = 0 (unlikely in practice), halt, as we have found a root;
• if f(c) and f(a) have opposite signs, then a root must lie on [a, c], so
assign step = b − c and assign b = c;
• otherwise f(c) and f(b) must have opposite signs, and thus a root must lie on [c, b],
so assign step = c − a and assign a = c.
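The update above can be sketched in a few lines of Python (a minimal sketch; the function name and the return convention are our own, not part of the standard description of the method):

```python
def false_position_step(f, a, b):
    """One false-position update on a bracketing interval [a, b].

    Returns (a, b, c, step): the new interval, the iterate c, and the
    size of the change in the endpoint that moved.
    """
    fa, fb = f(a), f(b)
    # Root of the line interpolating (a, f(a)) and (b, f(b)).
    c = (a * fb - b * fa) / (fb - fa)
    fc = f(c)
    if fc == 0:                 # unlikely in practice: c is an exact root
        return c, c, c, 0.0
    if fc * fa < 0:             # f changes sign on [a, c]
        return a, c, c, b - c
    else:                       # f changes sign on [c, b]
        return c, b, c, c - a
```

For example, for f(x) = x² − 3 on [1, 2] the first iterate is c = (1·1 − 2·(−2))/(1 − (−2)) = 5/3, and since f(5/3) < 0 the interval becomes [5/3, 2].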
Halting Conditions
There are three conditions which may cause the iteration process to halt:
1. As indicated above, if f(c) = 0.
2. We halt if both of the following conditions are met:
o The step size is sufficiently small, that is, step < εstep, and
o The function is sufficiently small at one of the endpoints, that is,
|f(a)| < εabs or |f(b)| < εabs.
3. If we have iterated some maximum number of times, say N, without meeting
either of the previous conditions, we halt and indicate that a solution was not found.
If we halt due to Condition 1, we state that c is our approximation to the root. If we
halt according to Condition 2, we choose either a or b, depending on whether |f(a)| <
|f(b)| or |f(a)| > |f(b)|, respectively.
If we halt due to Condition 3, then we indicate that a solution may not exist (the
function may be discontinuous).
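Putting the iteration process and the three halting conditions together, a complete routine might look as follows (a sketch under the conventions above; the parameter names eps_step, eps_abs and max_iter are our own):

```python
def false_position(f, a, b, eps_step=1e-5, eps_abs=1e-5, max_iter=100):
    """Find a root of f on the bracketing interval [a, b].

    Halts when (1) f(c) == 0 exactly, or (2) the last endpoint change is
    below eps_step and |f| at an endpoint is below eps_abs, or (3) after
    max_iter iterations, in which case an error is raised.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # false-position iterate
        fc = f(c)
        if fc == 0:                          # Condition 1
            return c
        if fc * fa < 0:                      # root lies on [a, c]
            step, b, fb = b - c, c, fc
        else:                                # root lies on [c, b]
            step, a, fa = c - a, c, fc
        # Condition 2: small step and small |f| at an endpoint.
        if step < eps_step and min(abs(fa), abs(fb)) < eps_abs:
            return a if abs(fa) < abs(fb) else b
    raise ValueError("no root found in max_iter iterations")  # Condition 3
```

Applied to f(x) = x² − 3 on [1, 2], this converges to √3 ≈ 1.7321 in a handful of iterations.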
Error Analysis
The error analysis for the false-position method is not as straightforward as it is for
the bisection method; however, if one of the endpoints becomes fixed, it can be shown
that the method is still an O(h) operation, that is, it converges at the same rate as the
bisection method, usually faster, but possibly slower. For differentiable functions, the
closer the fixed endpoint is to the actual root, the faster the convergence.
To see this, let r be the root, and assume that the point b is fixed. Then the
change in a will be proportional to the difference between the slope of the line through
(r, 0) and (b, f(b)) and the derivative at r. To view this, first let the error be h = a − r
and assume that we are sufficiently close to the root that f(a) ≈ f′(r) h. The slope of the
line connecting (a, f(a)) and (b, f(b)) is approximately f(b)/(b − r). This is shown in
Figure (5).
The error after one iteration is h minus the width of the smaller interval shown there.
Therefore, the closer b is to r, the better an approximation f(b)/(b − r) is to the
derivative f′(r), and therefore the faster the convergence.
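This linear (O(h)) behaviour can be observed numerically. In the following small experiment of our own, with f(x) = x² − 3 on [1, 2], the right endpoint stays fixed (f is concave up) and the ratio of successive errors in the left endpoint settles to a constant, the signature of linear convergence:

```python
import math

def errors_false_position(f, a, b, root, n):
    """Track the error |a - root| over n false-position iterations
    (for this f only the left endpoint moves)."""
    errs = []
    fa, fb = f(a), f(b)
    for _ in range(n):
        c = (a * fb - b * fa) / (fb - fa)   # false-position iterate
        fc = f(c)
        if fc * fa < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
        errs.append(abs(a - root))
    return errs

errs = errors_false_position(lambda x: x * x - 3, 1.0, 2.0, math.sqrt(3), 8)
ratios = [e2 / e1 for e1, e2 in zip(errs, errs[1:])]
# The ratios approach a constant strictly between 0 and 1:
# the error shrinks by roughly the same factor each iteration.
```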
To visualize this, suppose the right endpoint b is fixed and the other, a, is
sufficiently close to the root that the function f(x) is closely approximated by its
Taylor series, that is, f(a) ≈ f′(r)(a − r). The lines interpolating the points (a, f(a)) and
(b, f(b)) are then essentially parallel to the line interpolating (r, 0) and (b, f(b)), which is
demonstrated in Figure (7) for the function f(x) = x² − 5 starting with the interval [2, 7].
Thus, focusing in on the region around the root and iterating, we have the behaviour
seen in Figure (8), where each next approximation follows a slope which is approximately
parallel to the blue line shown in Figure (7). (The aspect ratios of both Figures (7) and (8) are
equally distorted.)
Application
Example 1
Thus, with the third iteration, we note that the last step, 1.7273 → 1.7317, is less
than 0.01 and |f(1.7317)| < 0.01, and therefore we choose b = 1.7317 as our
approximation of the root.
Note that after three iterations of the false-position method we have an acceptable
answer (1.7317, where f(1.7317) = −0.0044), whereas with the bisection method it took
seven iterations to find a (notably less accurate) acceptable answer (1.73144, where
f(1.73144) = 0.0082).
Example 2
Consider finding the root of f(x) = e^(−x)(3.2 sin(x) − 0.5 cos(x)) on the interval [3, 4], this
time with εstep = 0.001 and εabs = 0.001.
Table 2. False-position method applied to f(x) = e^(−x)(3.2 sin(x) − 0.5 cos(x)).
Thus, after the sixth iteration, we note that the final step, 3.2978 → 3.2969, has a size less
than 0.001 and |f(3.2969)| < 0.001, and therefore we choose b = 3.2969 as our approximation
of the root.
In this case the solution we found is not as good as the one found using the
bisection method (f(3.2963) = 0.000034799); however, we used only six iterations instead
of eleven.
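This example can be reproduced with a short script (a sketch of our own; individual iterates may differ from the table in the last digit due to rounding):

```python
import math

def f(x):
    # Example 2 test function: a damped combination of sine and cosine.
    return math.exp(-x) * (3.2 * math.sin(x) - 0.5 * math.cos(x))

a, b = 3.0, 4.0
fa, fb = f(a), f(b)
root = None
for _ in range(100):
    c = (a * fb - b * fa) / (fb - fa)   # false-position iterate
    fc = f(c)
    if fc * fa < 0:                     # root lies on [a, c]
        step, b, fb = b - c, c, fc
    else:                               # root lies on [c, b]
        step, a, fa = c - a, c, fc
    if step < 0.001 and min(abs(fa), abs(fb)) < 0.001:
        root = a if abs(fa) < abs(fb) else b
        break
# root comes out near 3.2969, where 3.2 sin(x) = 0.5 cos(x).
```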
In addition, the false-position method can be implemented in tools such as Matlab and Maple.
Reference
• Bradie, Section 2.2, The Method of False Position, p.71.
• Mathews, Section 2.2, p.56.
• Weisstein, http://mathworld.wolfram.com/MethodofFalsePosition.html.
• https://web.mit.edu/10.001/Web/Course_Notes/NLAE/node5.html
• https://atozmath.com/example/CONM/Bisection.aspx?he=e&q=fp
• http://purescience.uobabylon.edu.iq/lecture.aspx?fid=21&lcid=81355