
Roots of Equations

Introduction
Before the advent of digital computers, there were several ways to solve for roots of
algebraic and transcendental equations. For some cases, the roots could be obtained by
direct methods, as in the case of quadratic formula for quadratic equations. Although there
were equations like this that could be solved directly, there were many more that could not.
For example, even an apparently simple function such as f(x) = e^(-x) − x cannot be solved
analytically. In such instances, the only alternative is an approximate solution technique.
One method to obtain an approximate solution is to plot the function and determine
where it crosses the x-axis. This point, which represents the x value for which f  x   0 , is
the root.
[Figure: graph of y = f(x) crossing the x-axis at xr, the root of the function f(x). Objective: to find xr such that f(xr) = 0.]

Although graphical methods are useful for obtaining rough estimates of roots, they are
limited by their lack of precision, and considerable time can be spent just plotting the
function. Better techniques were developed in numerical methods, where an initial value of
x is guessed and evaluated to determine whether it results in an f(x) close to zero. If f(x) is
not close to zero, several methods can be employed to provide a value of x leading to a
better estimate of the root. The process is usually iterative, which is tedious and prone to
error without the use of a digital computer.
In general, methods of finding roots of equations fall into two classifications:
1. Bracketing Methods
2. Open Methods

Bracketing Methods
To find the root of a function f(x), bracketing methods exploit the fact that a function
typically changes sign in the vicinity of a root. These techniques are called bracketing
methods because two initial guesses for the root are required. As the name implies, these
guesses must “bracket” or be on either side of the root. The particular methods described
herein employ different strategies to systematically reduce the width of the bracket and,
hence, home in on the correct value.
a) The Bisector Method
If f(x) is real and continuous in the interval from xl to xu, and f(xl) and f(xu) have opposite
signs, that is, f(xl)·f(xu) < 0, then there is at least one real root between xl and xu.
Incremental search methods capitalize on this observation by locating an interval where
the function changes sign. Then the location of the sign change (and consequently the root)
is identified more precisely by dividing the interval into a number of subintervals. Each of
these subintervals is searched to locate the sign change. The process is repeated and the root
estimate refined by dividing the subintervals into finer increments.
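To make the sign-change condition concrete before turning to the individual methods, the following is a minimal Python check; the function f(x) = e^(-x) − x from the introduction and the helper name brackets_root are used purely for illustration:

```python
import math

def brackets_root(f, x_l, x_u):
    """Return True if f changes sign on [x_l, x_u], i.e. f(x_l)*f(x_u) < 0."""
    return f(x_l) * f(x_u) < 0

# The function from the introduction: f(x) = e^(-x) - x
f = lambda x: math.exp(-x) - x

print(brackets_root(f, 0.0, 1.0))   # True: f(0) = 1 > 0 and f(1) is about -0.63 < 0
print(brackets_root(f, 2.0, 3.0))   # False: f is negative at both endpoints
```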
The bisector method, which is alternatively called binary chopping or interval halving, is
one type of incremental search method in which the interval is always divided in half. If the
function changes sign over an interval, the function value at the midpoint is evaluated. The
location of the root is then determined as lying at the midpoint of the subinterval within
which the sign change occurs. [See Figure Below for Bisector Method]

f x  f x 

xl xm
x
xu
xr
xm 1 nearer to xr

The following is a simple algorithm for the bisector method; a Python sketch of these steps is given after the note below.


1) Choose limiting values xl and xu (with xl < xu )
2) Compute either fl = f(xl) or fu = f(xu)
3) Compute the interval midpoint xm = (xl + xu)/2 and compute fm = f(xm)
4) Use either (i) or (ii) depending on which of fl or fu is available from Step (2):
(i) If fl·fm < 0, the root lies between xl and xm, so reset xu to xm; otherwise, reset xl to xm
(ii) If fu·fm < 0, the root lies between xm and xu, so reset xl to xm; otherwise, reset xu to xm
5) If (xu − xl) ≤ ε, [where ε, the error bound, is sufficiently small], proceed to Step (6); otherwise, return to Step (3)
6) If f(xl)·f(xm) = 0, the root is xr = xm; otherwise, use linear interpolation to estimate the
root from either of the equations:
xr = xl − (xu − xl)·f(xl) / [f(xu) − f(xl)]   or   xr = xu − (xu − xl)·f(xu) / [f(xu) − f(xl)]
Note: The major step in the procedure is Step (4) in which we determine which half of the
previous interval we are to retain.
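As a concrete companion to Steps (1) to (6) above, the following is a minimal Python sketch of the bisector method; the name bisect_root and the parameters eps and max_iter are illustrative choices, not part of the original algorithm:

```python
def bisect_root(f, x_l, x_u, eps=1e-6, max_iter=100):
    """Bisector (interval-halving) method following Steps (1)-(6) above."""
    f_l, f_u = f(x_l), f(x_u)              # Step (2): endpoint function values
    if f_l * f_u > 0:
        raise ValueError("f(x_l) and f(x_u) must have opposite signs")
    for _ in range(max_iter):
        x_m = (x_l + x_u) / 2              # Step (3): interval midpoint
        f_m = f(x_m)
        if f_l * f_m < 0:                  # Step (4): root lies in [x_l, x_m]
            x_u, f_u = x_m, f_m
        else:                              # root lies in [x_m, x_u]
            x_l, f_l = x_m, f_m
        if (x_u - x_l) <= eps:             # Step (5): bracket is small enough
            break
    # Step (6): final linear interpolation between the bracket endpoints
    return x_l - (x_u - x_l) * f(x_l) / (f(x_u) - f(x_l))
```

For instance, with math imported as in the earlier snippet, bisect_root(lambda x: math.exp(-x) - x, 0.0, 1.0) should return a value near 0.567.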
b) The Regula-Falsi Method [The False-Position Method]
The Regula-Falsi method may be thought of as an attempt to improve the convergence
characteristics of the bisector method. We begin with limiting values xl and xu such that f(x)
changes sign only once on the interval from xl to xu . An approximate root is found by
linear interpolation between xl and xu and serves as an intermediate value xint . The new
interval containing the root is now either from xl to xint or from xint to xu . The logic for
determining which interval is retained is the same as in the bisector method. Because the
intermediate point xint is found by interpolation, the separate interpolation step used at the
end of the bisector method becomes unnecessary; instead, the last computed value of xint is
taken to be the approximate solution. [See Figure Below for Regula-Falsi Method]

f x  f x 

xint 1 x
xl
xr xu

xint 2 nearer to xr

The following is the algorithm for the regula-falsi method; a Python sketch follows the steps.


1) Choose limiting values xl and xu (with xl < xu )
2) Compute either fl = f(xl) or fu = f(xu) and set a counter i = 0
3) Increase i by 1, and compute the intermediate point xint from either:
xint = xl − (xu − xl)·f(xl) / [f(xu) − f(xl)]   or   xint = xu − (xu − xl)·f(xu) / [f(xu) − f(xl)]
4) Compute fint = f(xint)
5) Use either (i) or (ii) depending on which of fl or fu is available from Step (2):
(i) If fl·fint < 0, the root lies between xl and xint, so reset xu to xint; otherwise, reset xl to xint
(ii) If fu·fint < 0, the root lies between xint and xu, so reset xl to xint; otherwise, reset xu to xint
6) If (xu − xl) ≤ ε, [where ε, the error bound, is sufficiently small], proceed to Step (7); otherwise, return to Step (3)
7) If |f(xint)| is sufficiently small, that is, less than or equal to some small prescribed
quantity ε, or if i reaches an iteration limit N, take xint as the approximate root;
otherwise, return to Step (3).
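The following is a minimal Python sketch of Steps (1) to (7); it terminates on the size of |f(xint)| or on the iteration limit N (the interval-width check of Step (6) is omitted for brevity), and the names regula_falsi_root, eps, and max_iter are again only illustrative:

```python
def regula_falsi_root(f, x_l, x_u, eps=1e-6, max_iter=100):
    """Regula-Falsi (false-position) method following Steps (1)-(7) above."""
    f_l, f_u = f(x_l), f(x_u)                  # Step (2): endpoint function values
    if f_l * f_u > 0:
        raise ValueError("f(x_l) and f(x_u) must have opposite signs")
    x_int = None
    for i in range(1, max_iter + 1):           # Step (3): i counts the iterations
        # Intermediate point by linear interpolation between the endpoints
        x_int = x_l - (x_u - x_l) * f_l / (f_u - f_l)
        f_int = f(x_int)                       # Step (4)
        if f_l * f_int < 0:                    # Step (5): root lies in [x_l, x_int]
            x_u, f_u = x_int, f_int
        else:                                  # root lies in [x_int, x_u]
            x_l, f_l = x_int, f_int
        if abs(f_int) <= eps:                  # Step (7): |f(x_int)| small enough
            return x_int
    return x_int                               # iteration limit N reached
```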
Termination Criteria and Error Estimates
In employing the algorithms for the Bisector and Regula-Falsi Methods, an important
question is what criterion should be used to decide when to terminate the iterations.
Common sense suggests comparing the result with the ‘true value of the root’. However,
this is not possible, since the ‘true value of the root’ is not known at the start of the process.
It is also not advisable to terminate the methods after a prescribed maximum number of
iterations, since the number of iterations needed to converge to the ‘true value of the root’
is likewise unknown.
Therefore, an error estimate that is not contingent on prior knowledge of the ‘true value
of the root’ or on a specific number of iterations must be developed. The common strategy
is to terminate the iterations when the relative error is less than ε, an error bound which is
some very small value. This is implemented by comparing the absolute value of the
difference between the previous and the new xint with ε; that is, |(xint)n − (xint)n−1| ≤ ε.
Example: Find the root of f(x) = x^3 − 3x + 1 = 0 that lies between xl = 1 and xu = 2.
Using Regula-Falsi, the results after 10 iterations are shown in the table below:

k     xl      xu      f(xl)    f(xu)
1 1.000 2.000 -1.000 3.000
2 1.250 2.000 -0.797 3.000
3 1.407 2.000 -0.434 3.000
4 1.482 2.000 -0.190 3.000
5 1.513 2.000 -0.075 3.000
6 1.525 2.000 -0.028 3.000
7 1.529 2.000 -0.011 3.000
8 1.531 2.000 -0.004 3.000
9 1.532 2.000 -0.001 3.000
10 1.532 2.000 -0.001 3.000
Hence, the root is approximately 1.532; it is computed more accurately to be 1.532089.
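Assuming the regula_falsi_root sketch from the previous section, this example can be reproduced as follows:

```python
f = lambda x: x**3 - 3*x + 1

# Bracket from the example: xl = 1, xu = 2
root = regula_falsi_root(f, 1.0, 2.0, eps=1e-6)
print(root)   # approximately 1.532089, in agreement with the tabulated result
```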
