Lecture 4

This chapter discusses the numerical solution of nonlinear equations, focusing on single equations due to their prevalence and the complexity of simultaneous equations. It introduces definitions for algebraic and transcendental equations, outlines the importance of locating roots, and presents numerical methods such as the bisection method, false position method, and Newton-Raphson method for finding approximate solutions. The chapter emphasizes the necessity for initial estimates and the iterative nature of these methods to achieve desired accuracy.


1. Numerical solution of nonlinear equations


This chapter will be concerned with the numerical solution of
nonlinear equations and systems of equations. First we shall restrict
ourselves to single equations. One reason is that this case is more
common than that of simultaneous nonlinear equations. An equally
important reason, however, is that the solution of simultaneous
nonlinear equations is a difficult problem. For the single nonlinear
equations of interest here, we assume no solution in closed form can
be found. Thus we must seek methods which lead to approximate
solutions.
The first basic nonlinear equation in algebra is the quadratic equation
ax² + bx + c = 0,
and we all know that the solution of this equation is
x = (−b ± √(b² − 4ac)) / (2a).
For cubic and quartic equations, formulas exist but are so complex as
to be rarely used; for higher-degree equations it is difficult to exhibit
the solution in explicit form. So it is typical to employ numerical
methods for the solution of such equations. Before going on we
introduce the following definitions.

Definition 1.1 (Algebraic equation) An algebraic equation is an
equation of the polynomial form
f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀ = 0,
where a₀, a₁, ⋯ , aₙ₋₁ and aₙ are real numbers.

For example, the following equations are algebraic equations:



x³ + 7x² − 5x + 10 = 0,
x − 4 = 0.

Definition 1.2 (Transcendental equation) Any equation which cannot
be represented in polynomial form is called a transcendental equation.

For example, the equations


sin 𝑥 + cos 2𝑥 = 0, ln 𝑥 − 4𝑥 = 0,
are transcendental equations.
A nonlinear equation is either algebraic or transcendental. For an
algebraic equation all roots, both real and complex, are to be
determined, but for a transcendental equation the search is usually
restricted to real roots. The purpose of this chapter is to present some
numerical methods to solve the general nonlinear equations problem.
One of the most frequently occurring problems in scientific work is to
solve the equation
𝑓 (𝑥) = 0, (1)
whether this equation is an algebraic or a transcendental equation.

Definition 1.3 (Roots of an equation) The values of 𝑥 that satisfy


equation (1) are called the roots of this equation. These values are
also called the 𝒛𝒆𝒓𝒐𝒔 of the function 𝑓(𝑥).

We assume in what follows that the exact root of equation (1) will be
denoted by α and that the approximate solution x_app will be generated
by a sequence of estimates x₁, x₂, ⋯, with
α = lim (n→∞) xₙ.


Numerical analysis provides a means whereby a solution may be
found, or at least approximated as closely as desired. Many of these
numerical procedures follow a scheme that may be thought of as
providing a series of successive approximations, each more precise
than the previous one, so that enough repetitions of the procedure
eventually give an approximation that differs from the true value by
less than some prescribed error tolerance. All of these numerical
methods require starting values before the method can begin, so the
numerical solution of equation (1) is carried out in two steps:
 Step 1: Location of roots
In practical problems, a priori knowledge of the nature, number and
approximate location of the roots is often required to ensure the
convergence of the iterative methods we now discuss. That is to say,
in this step the required roots of equation (1) are located in some
intervals so that each root is restricted to some interval (𝑎, 𝑏), 𝑎, 𝑏 ∈
𝓡.
 Step 2: Using iterative methods of solution
In this step, we introduce the following elementary iterative methods
for finding a solution of equation (1):
1. Bisection method,
2. False position method,
3. Secant method,
4. Simple iteration method,
5. Newton-Raphson method.
Most practical computer routines implementing these methods will
quit when:

 either x_app is obtained to a pre-determined tolerance, or
 f(x_app) is so small that to machine accuracy it registers as zero.
 Some routines also include a maximum number of iterations as a
safety measure (a small illustrative check of these stopping tests is
sketched below).
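To make these stopping criteria concrete, here is a minimal Python sketch that combines them into a single test; the helper name should_stop and the default tolerance values are illustrative assumptions, not part of the text.

```python
def should_stop(x_new, x_old, fx_new, n_iter,
                tol=1e-6, eps=1e-15, max_iter=100):
    """Return True when an iterative root-finder should stop:
    successive estimates agree to within tol, f(x) is zero to machine
    accuracy (eps), or the iteration budget max_iter is exhausted."""
    return (abs(x_new - x_old) <= tol
            or abs(fx_new) <= eps
            or n_iter >= max_iter)
```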

1.1 Location of roots

Note that all the methods we will use require an initial estimate of the
root we are computing. It often requires as much thought and effort to
get a good starting value as it does to refine it to acceptable accuracy.
Sometimes one’s knowledge of the physical problem will suggest a
starting value. When this is not available, one normally finds starting
values by initial trial-and-error computations (analytical approach), or
by making a rough graph of the function (graphical approach). We
now discuss the usual approaches to guess the initial approximations.

1.1.1 Analytical approach

In this approach, trial-and-error computations are used to locate the
intervals that contain the required roots. By the use of the following
theorem we can locate the roots of the equation f(x) = 0.

Theorem 1.1 For any continuous real function f(x): R → R on the
interval (a, b), a, b ∈ R, if f(a) and f(b) have opposite signs, i.e.,
f(a) f(b) < 0, then there is at least one root in this interval (in fact
there is an odd number of roots).

The above theorem is illustrated by the following figure.



Fig. 1.1. Location of the roots of the equation f(x) = 0

Due to the opposite signs of the sketched function over the different
intervals, we have
f(a) f(b) < 0  ⟹  (one root)
f(c) f(d) < 0  ⟹  (one root)
f(a) f(e) < 0  ⟹  (three roots)

In general, opposite signs of f at the endpoints of an interval imply
an odd number of roots of f inside that interval.
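In practice the trial-and-error search of Theorem 1.1 can be automated by scanning an interval with a fixed step and recording every subinterval on which f changes sign. The sketch below is only illustrative; the function name locate_roots and the number of subintervals are assumptions.

```python
def locate_roots(f, a, b, n=100):
    """Scan [a, b] in n equal subintervals and return those on which
    f changes sign; by Theorem 1.1 each returned subinterval contains
    at least one root (in fact an odd number of roots)."""
    h = (b - a) / n
    brackets = []
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        if f(x0) * f(x1) < 0:
            brackets.append((x0, x1))
    return brackets

# Example: f(x) = x^3 - 8x + 5 (used later in this chapter) changes
# sign once in (0, 1); scanning with 10 steps brackets it near 0.66.
print(locate_roots(lambda x: x**3 - 8*x + 5, 0.0, 1.0, n=10))
```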

1.1.2 Graphical approach

Another approach to locate the zeros of a function 𝑓 (𝑥) is to draw its


graph 𝑦 = 𝑓 (𝑥) and observe where it crosses the x axis. (i.e., for
which f(x) = 0). On occasion it is helpful to obtain a graphical
solution of this type in a slightly different manner by writing f(x) in
the form f(x) = g(x) − h(x), whereupon the zeros of f(x) correspond
to the solutions of the equation g(x) = h(x). To see how this may help,
suppose that we seek the roots of the transcendental equation

sin 𝑥 − cosh 𝑥 + 1 = 0,

where 𝑓 (𝑥) = sin 𝑥 − cosh 𝑥 + 1. Then by writing


𝑓(𝑥) = 𝑔(𝑥) − ℎ(𝑥 ),
with g(x) = sin x and h(x) = cosh x − 1, the roots of f(x) = 0 will
correspond to those values of x for which the graphs of y = g(x) and
y = h(x) intersect, as shown in Fig. 1.2. Familiarity with the two
graphs involved makes it easier to sketch them and to appreciate how
many zeros there are likely to be. The two zeros are seen to be x = 0
(exact) and x ≈ 1.3.

Fig. 1.2. Roots of sin x − cosh x + 1 = 0 (graphs of y = sin x and
y = cosh x − 1 intersecting at x = 0 and x ≈ 1.3)

Example 1 Determine approximate values of the roots of the equation


sin 𝑥 − 𝑥 + 0.5 = 0

Solution We first sketch the graphs of the two curves y = sin x and
y = x − 0.5 (see Fig. 1.3). Since the roots of the given equation
represent the points of intersection of these two curves, and we know
that |sin x| ≤ 1, we must have |x − 0.5| ≤ 1. This gives
−1 ≤ x − 0.5 ≤ 1,
or
−0.5 ≤ x ≤ 1.5.

So, we are only interested in this interval (− 0.5, 1.5).

Fig. 1.3 Example 1

We deduce from the graph that the equation has only one root near

𝑥 = 1.5. We then tabulate 𝑓 (𝑥 ) near 𝑥 = 1.5 as follows:

x        f(x)
1.45     0.0427
1.49     0.0067
1.50    −0.0025

(the sign change between x = 1.49 and x = 1.50 shows there is at
least one root there)

Since the sign of f(x) changes from positive at x = 1.45 to negative
at x = 1.5, we conclude that a root of f(x) = 0 lies between x = 1.45
and x = 1.5. But the value of f(x) at x = 1.49 is also positive, and
smaller than the value of f(x) at x = 1.45. Therefore we conclude
that the root of the equation lies in the interval
1.49 < x < 1.5.
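The tabulation above is easy to reproduce. A short Python sketch for Example 1 (the formatting choices are, of course, arbitrary) is:

```python
import math

# Tabulate f(x) = sin(x) - x + 0.5 near x = 1.5, as in Example 1.
f = lambda x: math.sin(x) - x + 0.5
for x in (1.45, 1.49, 1.50):
    print(f"{x:.2f}  {f(x):+.4f}")
# The sign change between x = 1.49 and x = 1.50 brackets the root.
```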

1.2 Methods of solutions

We have seen in the above section how to estimate the number, nature
and values of the roots of general nonlinear equations in the form

𝑓 (𝑥 ) = 0. In the present section we shall introduce some methods for


solving such equations with better accuracy.

1.2.1 Bisection method

The first technique, based on the intermediate value theorem, is called
the bisection algorithm or binary-search method.
Suppose a continuous function f, defined on the interval [a, b], is
given with f(a) and f(b) of opposite signs. Then by Theorem 1.1,
there exists a root α of the equation f(x) = 0, a < α < b, for which
f(α) = 0. Although the procedure will work for the case when f(a)
and f(b) have opposite signs and there is more than one root in the
interval [a, b], it will be assumed for simplicity that the root in this
interval is unique.
To solve the equation
𝑓 (𝑥) = 0, (1)
the bisection method calls for a repeated halving of subintervals of
[𝑎, 𝑏] and, at each step, locating the “half” containing the
approximate value of the root. To begin, compute the first
approximation as the midpoint of [𝑎, 𝑏]; that is,
x₁ = (a + b)/2.   (2)
At this point, we have one of the following cases:

1. If f(x₁) = 0, then α = x₁ is the required root. If not, then
f(x₁) has the same sign as either f(a) or f(b).
2. If f(x₁) and f(a) have opposite signs, then α ∈ (a, x₁) and a
new approximation is given by
x₂ = (a + x₁)/2.   (3)
3. If f(x₁) and f(b) have opposite signs, then α ∈ (x₁, b) and a
new approximation is given by
x₂ = (x₁ + b)/2.   (4)
We can obviously continue this interval-halving to obtain a smaller
and smaller subinterval within which a root lies. This produces the
bisection algorithm (see Fig. 1.4).

Fig. 1.4. The bisection algorithm

The bisection method, although conceptually clear, converges slowly;
nevertheless it is well suited to automatic computation. It also has
the important property that it always converges to a solution, and for
that reason it is often used as a "starter" for the more efficient
methods presented later in this chapter. Another important
advantage of the bisection method, beyond its simplicity, is our
knowledge of the accuracy of the current approximation to the root.
The accuracy of a computed value is usually expressed either as the
absolute error, namely

Absolute error = |true value − approximate value|,
or as the relative error, namely
Relative error = Absolute error / |true value|.
The relative error is often the better measure of accuracy for very
large or very small values. Sometimes the accuracy is expressed as the
number of correct digits after the decimal point.
The error of the bisection method is bounded by
|α − xₙ| ≤ (b − a)/2ⁿ,
where the exact root α lies in the interval (a, b). The number of
iterations n required to determine the root under consideration to
within a tolerance Tol therefore satisfies
(b − a)/2ⁿ ≤ Tol,
which implies that
2ⁿ ≥ (b − a)/Tol,
or
n ≥ (1/ln 2) ln((b − a)/Tol).
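Collecting the midpoint formula (2), the interval-update rules (3)-(4) and the iteration bound just derived, a minimal Python sketch of the bisection method could look as follows; the function name bisection and the default tolerance are illustrative assumptions, not a prescribed implementation.

```python
import math

def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Bisection sketch: assumes f is continuous on [a, b] with
    f(a) and f(b) of opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    # Smallest n with (b - a)/2**n <= tol, i.e. the bound derived above.
    n_needed = max(1, math.ceil(math.log((b - a) / tol) / math.log(2)))
    for n in range(1, min(max_iter, n_needed) + 1):
        x = (a + b) / 2                      # midpoint, eq. (2)
        fx = f(x)
        if fx == 0:
            return x, n                      # exact root found
        if fa * fx < 0:                      # root in (a, x), eq. (3)
            b, fb = x, fx
        else:                                # root in (x, b), eq. (4)
            a, fa = x, fx
    return (a + b) / 2, n
```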

Example 1 Find the smallest positive root of the equation
x³ − 8x + 5 = 0,
by the bisection method correct to 5 decimal places.

Solution The solution of this problem lies in two main steps. In the
first step we locate the interval containing the root. Put
f(x) = x³ − 8x + 5,
then

x      0    1
f(x)   5   −2

f(0) f(1) < 0

The inequality f(0) f(1) < 0 implies that there is at least one root in
the interval (0, 1).

[Sketch: f(x) is positive at x = 0, 0.5 and 0.625, and negative at
x = 0.75 and x = 1.]

The bisection method gives the following values:

x₁ = (0 + 1)/2 = 0.5,              f(0.5) = 1.125
x₂ = (0.5 + 1)/2 = 0.75,           f(0.75) = −0.578125
x₃ = (0.5 + 0.75)/2 = 0.625,       f(0.625) = 0.244141
x₄ = (0.625 + 0.75)/2 = 0.6875,    f(0.6875) = −0.175049
x₅ = (0.625 + 0.6875)/2 = 0.65625, f(0.65625) = 0.032623
⋮
and so on.

The first eighteen iterations are listed in the following table.


𝒏 𝒂(+) 𝒃(−) 𝒙 𝒇(𝒙)
1 0 1 0.5 1.125000
2 0.500000 1.000000 0.750000 -0.578125
3 0.500000 0.750000 0.625000 0.244141
4 0.625000 0.750000 0.687500 -0.175049
5 0.625000 0.687500 0.656250 0.032623
6 0.656250 0.687500 0.671875 -0.071705
7 0.656250 0.671875 0.664063 -0.019666
8 0.656250 0.664063 0.660157 0.006445
9 0.660157 0.664063 0.662110 -0.006618
10 0.660157 0.662110 0.661134 -0.000092
11 0.660157 0.661134 0.660646 0.003173
12 0.660646 0.661134 0.660890 0.001541
13 0.660890 0.661134 0.661012 0.000725
14 0.661012 0.661134 0.661073 0.000316
15 0.661073 0.661134 0.661104 0.000109
16 0.661104 0.661134 0.661119 0.000009
17 0.661119 0.661134 0.661127 -0.000045
18 0.661119 0.661127 0.661123 -0.000018

From this table, the positive root of the given equation is
approximated to five decimal places by x = 0.66112 after 18 iterations.
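As a hypothetical check, the bisection sketch given earlier reproduces this result; the tolerance 0.5 × 10⁻⁵ is chosen here to correspond to five-decimal accuracy.

```python
# Using the bisection sketch given earlier on Example 1:
root, steps = bisection(lambda x: x**3 - 8*x + 5, 0.0, 1.0, tol=0.5e-5)
print(round(root, 5), steps)   # about 0.66112, reached in 18 halvings
```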

Example 2 Find a point of intersection of the two curves
y = x⁴   and   y = 12x² + 10,
by the bisection method.

Solution To find a point of intersection of the given two curves we
have to solve the equation
x⁴ = 12x² + 10,
that is, we solve the equation
f(x) = x⁴ − 12x² − 10 = 0.
Here, we have
𝑥 0 1 2 3 4
𝑓 (𝑥 ) −10 −21 −42 −37 54

Then there is at least one root in the interval (3, 4). Using the
bisection method gives the following values.

𝒏 𝒂(−) 𝒃(+) 𝒙 𝒇(𝒙)


1 3 4 3.500000 -6.937500
2 3.500000 4.000000 3.750000 19.003906
3 3.500000 3.750000 3.625000 4.988525
4 3.500000 3.625000 3.562500 -1.225082
5 3.562500 3.625000 3.593750 1.817765
6 3.562500 3.593750 3.578125 0.280517
7 3.562500 3.578125 3.570313 -0.476170
8 3.570313 3.578125 3.574219 -0.098813
9 3.574219 3.578125 3.576172 0.090605
10 3.574219 3.576172 3.575196 -0.004117
11 3.575196 3.576172 3.575684 0.043228
12 3.575196 3.575684 3.575440 0.019552
13 3.575196 3.575440 3.575318 0.007716
14 3.575196 3.575318 3.575257 0.001799
15 3.575196 3.575257 3.575227 -0.001111
16 3.575227 3.575257 3.575242 0.000344
17 3.575227 3.575242 3.575235 -0.000335
18 3.575235 3.575242 3.575239 0.000053
19 3.575235 3.575239 3.575237 -0.000141
20 3.575237 3.575239 3.575238 -0.000044
21 3.575238 3.575239 3.575239 0.000053
22 3.575238 3.575239 3.575239 0.000053

After 22 iterations, we get x = 3.575239, correct to six decimal
places. Therefore, the coordinates of the point of intersection are
(3.575239, 163.38806).

Advantages and drawbacks of the bisection method

 The bisection method is very easy and simple.
 The method is always convergent.
 The error bound decreases by one half with each iteration;
that is, the error can be controlled.
 The bisection method converges very slowly, so it is often used
only to get an initial estimate for much faster methods.
 The bisection method cannot detect multiple roots.

Example 3 Find the positive root of the equation x = e⁻ˣ correct to 3
decimal places by the bisection method.

Solution First we sketch the graphs of the curves
y = x   and   y = e⁻ˣ,
as shown in Fig. 1.5, and we notice from the table

x       0.5        0.6
f(x)   −0.1756    0.0933

that
f(x) = x eˣ − 1
changes its sign in the interval (0.5, 0.6).

Fig. 1.5. The curves y = x and y = e⁻ˣ

Applying the bisection method gives the following table:
𝒏 𝒂(−) 𝒃(+) 𝒙 𝒇(𝒙)
1 0.5 0.6 0.550000 -0.046711
2 0.550000 0.600000 0.575000 0.021850
3 0.550000 0.575000 0.562500 -0.012782
4 0.562500 0.575000 0.568750 0.004446
5 0.562500 0.568750 0.565625 -0.004190
6 0.565625 0.568750 0.567188 0.000124
7 0.565625 0.567188 0.566407 -0.002033
8 0.566407 0.567188 0.566798 -0.000954
9 0.566798 0.567188 0.566993 -0.000415
10 0.566993 0.567188 0.567091 -0.000144
11 0.567091 0.567188 0.567140 -0.000009

After 11 iterations, the difference between the last two values of x is
0.567140 − 0.567091 ≅ 0.000049 = 4.9 × 10⁻⁵ < 5 × 10⁻⁴,
so the last value of x gives the required root, i.e.,
x ≅ 0.567,
correct to 3 decimal places.

Example 4 Solve the equation
4e⁻ˣ sin x − 1 = 0,
in the interval (0, 0.5).

Solution The bisection method yields.



𝒏 𝒂(−) 𝒃(+) 𝒙 𝒇(𝒙)


1 0 0.5 0.25 -0.229286411
2 0.25 0.5 0.375 0.006940729
3 0.25 0.375 0.3125 -0.100292711
4 0.3125 0.375 0.34375 -0.044067942
5 0.34375 0.375 0.359375 -0.017925398
6 0.359375 0.375 0.3671875 -0.005334495
7 0.3671875 0.375 0.37109375 0.000842363
8 0.3671875 0.37109375 0.369140625 -0.002236228
9 0.369140625 0.37109375 0.370117188 -0.000694476
10 0.370117188 0.37109375 0.370605469 7.45574E-05
11 0.370117188 0.370605469 0.370361328 -0.000309806
12 0.370361328 0.370605469 0.370483398 -0.000117586
13 0.370483398 0.370605469 0.370544434 -2.15046E-05
14 0.370544434 0.370605469 0.370574951 2.65288E-05
15 0.370544434 0.370574951 0.370559692 2.51269E-06
16 0.370544434 0.370559692 0.370552063 -9.49581E-06
17 0.370552063 0.370559692 0.370555878 -3.49152E-06
18 0.370555878 0.370559692 0.370557785 -4.89403E-07
19 0.370557785 0.370559692 0.370558739 1.01165E-06
20 0.370557785 0.370558739 0.370558262 2.61123E-07
21 0.370557785 0.370558262 0.370558023 -1.14139E-07
22 0.370558023 0.370558262 0.370558143 7.3492E-08
23 0.370558023 0.370558143 0.370558083 -2.03237E-08
24 0.370558083 0.370558143 0.370558113 2.65841E-08
25 0.370558083 0.370558113 0.370558098 3.13021E-09

From this table, we get


𝑥 = 0.370558,
correct to six decimal places.

1.2.2 False position method

This is an alternative method for solving the general nonlinear
continuous equation of the form
f(x) = 0,   (1)
when it is known that the required root is located between two values
a and b given that f(a) and f(b) have opposite signs. The false
position method can be summarized in the following steps.

Fig. 1.6. The false position method

Step 1. Approximate the function f(x) on the interval (a, b) by the
straight line joining the two points (a, f(a)) and (b, f(b)), as shown
in Fig. 1.6. The equation of this line is
y − f(a) = [(f(b) − f(a)) / (b − a)] (x − a).
Step 2. The first approximate solution is taken to be the value x₁ at
which this straight line intersects the x-axis, that is, the value of x
for which y = 0. Substituting y = 0, x = x₁ in the equation of the
straight line, we get
0 − f(a) = [(f(b) − f(a)) / (b − a)] (x₁ − a),

from which we get
x₁ = a − [(b − a) / (f(b) − f(a))] f(a),   (2)
or
x₁ = [a f(b) − b f(a)] / [f(b) − f(a)].   (3)
This equation may be written in the determinant form

       | a   f(a) |
       | b   f(b) |
x₁ =  --------------- .   (4)
        f(b) − f(a)

Step 3. In the second approximation step we shrink the interval under
consideration: we form a new straight line joining those two of the
three points (a, f(a)), (b, f(b)), (x₁, f(x₁)) at which the values of
f have opposite signs. For example, if
f(a) ≡ −ve,
f(b) ≡ +ve,
f(x₁) ≡ −ve,
then we consider the two points (x₁, f(x₁)) and (b, f(b)) and
substitute in the equation

       | x₁   f(x₁) |
       | b    f(b)  |
x₂ =  ----------------- .
        f(b) − f(x₁)

Step 3 is repeated until the difference between two successive
approximations is within the assumed allowable tolerance.
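A minimal Python sketch of Steps 1-3, using formula (3) for the intersection point and the sign test of Step 3 to shrink the bracket, might look as follows; the function name false_position and the stopping tolerance are illustrative assumptions.

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    """False position sketch: keeps a bracket [a, b] with
    f(a)*f(b) < 0 and replaces the endpoint whose sign matches f(x)."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x_old = a
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)   # eq. (3): secant meets x-axis
        fx = f(x)
        if abs(x - x_old) <= tol or fx == 0:
            return x
        if fa * fx < 0:                     # root in (a, x): move b
            b, fb = x, fx
        else:                               # root in (x, b): move a
            a, fa = x, fx
        x_old = x
    return x
```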

Advantages and drawbacks of the false position method

The false position method is sometimes called the method of linear
interpolation. It is a two-point method, whereas Newton-Raphson is a
one-point method. It has almost assured convergence, and it may
converge to a root faster. However, it may happen that most or all of
the calculated x values lie on the same side of the root, in which case
convergence may be slow.

Example 1 Find the smallest positive root of the equation


x³ − 8x + 5 = 0,
by the false position method.

Solution Here we have f(x) = x³ − 8x + 5. The zero of this
polynomial that we are searching for lies in the interval (0, 1). Thus

a = 0,   f(a) = 5,
b = 1,   f(b) = −2.

The first approximation is, by formula (3),
x₁ = [a f(b) − b f(a)] / [f(b) − f(a)]
   = [(0)(−2) − (1)(5)] / (−2 − 5) = 0.714286,
f(x₁) = −0.349854.
The sign of f(x₁) implies that the required root lies in the interval
(0, 0.714286). Therefore, the second approximation is computed with

a = 0,                 f(a) = 5,
b = x₁ = 0.714286,     f(b) = −0.349854.

Therefore, the second approximation is given by

x₂ = [(0)(−0.349854) − (0.714286)(5)] / (−0.349854 − 5) = 0.667575,
f(x₂) = −0.04309.

Repeating the process, we get the following table

𝒏 𝒙 𝒇 (𝒙)
0 5.000000
1 -2.000000
0.714286 -0.349856
0.667575 -0.043091
0.661871 -0.005020
0.661207 -0.000580
0.661130 -0.000065
0.661121 -0.000005
0.661120 0.000002
0.661120 0.000002

After 8 iterations, we see that x = 0.66112 approximates the required
root correct to 5 decimal places by the false position method, while
the bisection method needed 18 iterations.
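As a hypothetical check, the false-position sketch given above reproduces this behaviour on the same equation:

```python
# Using the false-position sketch given earlier on Example 1:
print(false_position(lambda x: x**3 - 8*x + 5, 0.0, 1.0, tol=1e-6))
# converges to about 0.661120 in far fewer steps than bisection
```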

Example 2 Using the false position method, find the three roots of the
equation
x³ − 4x² − 5x + 10 = 0

Solution First, to locate the roots of this equation we construct a
table of values of the function f(x) = x³ − 4x² − 5x + 10:

x       0    1    2     3     4    5
f(x)   10    2   −8   −14   −10   10

f(1) f(2) < 0        f(4) f(5) < 0

It is evident that the given equation has two roots in the intervals (1,2)
and (4,5), respectively. On finding these roots by the use of the false
position method we obtain

Root in (1, 2):              Root in (4, 5):
x          f(x)              x          f(x)
1           2                4         −10
2          −8                5          10
1.200000   −0.032000         4.500000   −2.375000
1.196850    0.000378         4.595960   −0.391427
1.196887   −0.000002         4.611179   −0.060413
1.196887   −0.000002         4.613514   −0.009224
                             4.613870   −0.001410
                             4.613924   −0.000225
                             4.613933   −0.000027
                             4.613934   −0.000005
                             4.613934   −0.000005

From the above tables we conclude that
x₁ = 1.196887   and   x₂ = 4.613934
are two roots of the given equation, correct to six decimal places. To
get the last root we use the relation between the roots and the
coefficients of an algebraic equation (here the sum of the roots is 4):
1.196887 + 4.613934 + x₃ = 4,
from which the third root is given by x₃ = −1.810821.

Example 3 Find the positive root, between 5 and 8, of the equation
x^2.2 = 69, correct to 3 decimal places.

Solution Using the false position method with the starting values
(5, −34.506758) and (8, 28.005860),
we get the table
𝒏 𝒙 𝒇 (𝒙)
5 -34.506758
8 28.005860
6.655990 -4.275627
6.834002 -0.406143
6.850670 -0.037546
6.852209 -0.003458
6.852351 -0.000313
6.852364 -0.000025
6.852365 -0.000003
6.852365 -0.000003

so that the root is x = 6.852365, correct to six decimal places (i.e.,
x ≈ 6.852 to the required 3 decimal places).

1.2.3 The secant method

The secant method is a modification of the false position method.


Instead of requiring that the function have opposite signs at the two
values used for interpolation, we can choose the two values nearest the
root (as indicated by the magnitude of the function at the various
points) and interpolate from these. Usually the nearest values to the
root will be the last two values calculated. This makes the interval

under consideration shorter and hence improves the assumption that


the function can be represented by the line through the two points.
To solve the equation

𝑓 (𝑥) = 0, (1)
the secant method, as illustrated in Fig. 1.7, uses the secant to the
curve y = f(x) employing the two previous approximations xₙ₋₁ and xₙ.
That is,
[0 − f(xₙ)] / (xₙ₊₁ − xₙ) = [f(xₙ) − f(xₙ₋₁)] / (xₙ − xₙ₋₁),
or
xₙ₊₁ = xₙ − [(xₙ − xₙ₋₁) / (f(xₙ) − f(xₙ₋₁))] f(xₙ),   n = 1, 2, ⋯   (2)
or, alternatively,

        | xₙ₋₁   f(xₙ₋₁) |
        | xₙ     f(xₙ)   |
xₙ₊₁ = -------------------- ,   n = 1, 2, ⋯   (3)
          f(xₙ) − f(xₙ₋₁)

After using (2) or (3) to generate xₙ₊₁, xₙ₋₁ takes on the value of xₙ
and xₙ takes on the value of xₙ₊₁; the old value of xₙ₋₁ is discarded.
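A minimal Python sketch of this iteration, implementing formula (2) together with the shift of the two stored points, could look as follows; the function name secant and the default tolerance are illustrative assumptions.

```python
def secant(f, x0, x1, tol=1e-9, max_iter=50):
    """Secant sketch: iterate eq. (2), always keeping the two most
    recent points and discarding the oldest one."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                            # flat secant: cannot proceed
            break
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1    # eq. (2)
        if abs(x2 - x1) <= tol:
            return x2
        x0, f0 = x1, f1                         # old x_{n-1} is discarded
        x1, f1 = x2, f(x2)
    return x1
```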

Convergence condition of the secant method

A study of Fig. 1.7 shows how the secant method proceeds, but, unlike
the bisection and false position methods, it is not guaranteed to
converge for every continuous function. From equation (2), we have
xₙ₊₁ − xₙ = −f(xₙ)/mₙ,   (4)
where mₙ is the slope defined by
mₙ = [f(xₙ) − f(xₙ₋₁)] / (xₙ − xₙ₋₁).
Equation (4) represents the difference between two successive
iterations, i.e., it can be regarded as a "correction term". Then for
convergence it must be that
lim (n→∞) f(xₙ)/mₙ = 0,
i.e., the correction term takes decreasing values in going from one
iteration step to the next.

Fig. 1.7. The secant method

Example 1 Find the root of the equation


x³ − 8x + 5 = 0,
in the interval (0,1) using the secant method.

Solution Using the secant method to solve the equation


f(x) = x³ − 8x + 5 = 0,

with

𝑓(0) = 5 & 𝑓(1) = −2

so that we get the following results

𝑛 𝑥 𝑓(𝑥 )
0 5
1 −2
1 0.714285714 -0.349854
2 0.653710295 0.049672
3 0.661241475 -0.000810
4 0.661120635 -0.000002
5 0.661120336 0.000000
6 0.661120336 0.000000

From the above table the approximate root correct to 9 decimal places
is 0.661120336.

Example 2 Find the positive root of sin x − x/2 = 0, using the secant
method and the method of false position.

Solution Since the root lies between π/2 and π (see Fig. 1.8), we use
x₀ = π/2 and x₁ = π. Using the formulas of the secant and false
position methods we obtain the table below.

Fig. 1.8. The function f(x) = sin x − x/2


            False position method        Secant method
n           x          f(x)              x          f(x)
1.570796 0.214602 1.570796 0.214602
3.141593 -1.570796 3.141593 -1.570796
1 1.759603 0.102427 1.759604 0.102427
2 1.844202 0.040756 1.844203 0.040755
3 1.877013 0.014974 1.900109 -0.003790
4 1.888954 0.005336 1.895352 0.000117
5 1.893195 0.001881 1.895494 0.000000
6 1.894688 0.000660 1.895494 0.000000
7 1.895212 0.000231
8 1.895395 0.000081
9 1.895460 0.000028
10 1.895482 0.000010
11 1.895490 0.000003
12 1.895493 0.000001

As we would expect, the secant method converges faster to the true


value 1.895494 than the method of false position.
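As a hypothetical side-by-side check (assuming the false_position and secant sketches defined earlier in this chapter are available), Example 2 can be reproduced as follows.

```python
import math

# f(x) = sin(x) - x/2 with starting values pi/2 and pi, as in Example 2.
f = lambda x: math.sin(x) - x / 2
print(false_position(f, math.pi / 2, math.pi, tol=1e-6))  # slower
print(secant(f, math.pi / 2, math.pi, tol=1e-6))          # ~1.895494
```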

Exercise 1.1

In problems 1 − 10, approximate to within 10⁻⁵ the roots of the
following equations by using the bisection method.
1. x³ − 4x − 8.95 = 0
[Answer: x = 2.7037429]

𝟐. 𝑥 + 3𝑥 − 1 = 0, (− 4, 1)

3. x¹⁰ − 10 = 0,   (0, 1.3)
[Answer: x = 1.258925]
