
Journal of AppliedMath 2024, 2(1), 477. https://doi.org/10.59400/jam.v2i1.477

Article

A new optimal iterative algorithm for solving nonlinear equations


Dhyan R. Gorashiya1, Rajesh C. Shah2,*

1 Department of Metallurgical and Materials Engineering, Faculty of Technology & Engineering, The Maharaja Sayajirao University of Baroda, Vadodara 390001, Gujarat State, India
2 Department of Applied Mathematics, Faculty of Technology & Engineering, The Maharaja Sayajirao University of Baroda, Vadodara 390001, Gujarat State, India
* Corresponding author: Rajesh C. Shah, [email protected]

Abstract: The aim of this paper is to propose a new iterative algorithm (scheme or method) for solving algebraic and transcendental equations, considering a fixed point and an initial guess value on the x-axis. The concepts of the slope of a line and the Taylor series are used in the derivation. The algorithm has second-order convergence and requires two function evaluations in each step, which shows that it is optimal, with computational efficiency index 1.414 and informational efficiency 1. The validity of the algorithm is examined by solving some examples and comparing the results with Newton's method.

Keywords: iterative algorithm; second-order convergence; nonlinear equations; Newton's method; optimal method

CITATION: Gorashiya DR, Shah RC. A new optimal iterative algorithm for solving nonlinear equations. Journal of AppliedMath. 2024; 2(1): 477. https://doi.org/10.59400/jam.v2i1.477

ARTICLE INFO: Received: 12 January 2024; Accepted: 29 January 2024; Available online: 28 March 2024

COPYRIGHT: Copyright © 2024 by author(s). Journal of AppliedMath is published by Academic Publishing Pte. Ltd. This work is licensed under the Creative Commons Attribution (CC BY) license. https://creativecommons.org/licenses/by/4.0/

1. Introduction
Solving an equation f(x) = 0 has long attracted investigators because of its many applications in science, technology and engineering. Many investigators [1–8] have developed new iterative methods for better and faster convergence. Among the various numerical methods, optimal methods draw significant attention due to their computational and informational efficiency. An n-step iterative method is optimal when it attains order of convergence 2^n while requiring (n + 1) function evaluations. The two most important parameters mentioned above, computational efficiency and informational efficiency, are usually expressed [9,10] by

E = r^(1/c) and I = r/c,

respectively, where r is the order of convergence and c is the computational cost (the number of function evaluations in each step).
In this paper, a new optimal iterative algorithm (scheme or method) for solving an equation f(x) = 0 is proposed, wherein a fixed point xp and an initial guess value x0 on the x-axis are considered. The concepts of the slope of a line and Taylor's series are used in the derivation. The algorithm is single-step and has second-order convergence. The validity of the algorithm is examined by solving some examples and comparing the results with Newton's method. The algorithm works both when f′(x) ≠ 0 and when f′(x) = 0.

2. The proposed optimal iterative algorithm


Figure 1 shows the geometric view of the proposed algorithm for solving f(x) = 0. Let ξ be the exact (simple) root in I ⊂ R, where I is some open interval and R is the set of real numbers. Let f(x) and its first- and second-order derivatives be continuous near ξ. To develop the proposed algorithm, consider a fixed point A(xp, 0) and an initial guess value (if possible, sufficiently close to ξ) B(x0, 0) on the x-axis. Draw a perpendicular from B to meet the curve y = f(x) at C(x0, f(x0)) as shown in the figure. Then, for 0 < k < 1, let D(x0, kf(x0)) be a point inside the line segment BC. Draw the line segment AD. Using the same slope α as AD, draw another line segment BE, giving the first approximation x1 (say F(x1, 0)) by the following procedure, where x1 = x0 + h.

Figure 1. The proposed algorithm.

As discussed above, slope of AD = slope of BE implies

kf(x0)/(x0 − xp) = f(x1)/(x1 − x0) (1)

Using x1 = x0 + h and the Taylor series f(x1) = f(x0 + h) ≈ f(x0) + h f′(x0), Equation (1) becomes

x1 = [(x0 − xp)(f(x0) − x0 f′(x0)) + k x0 f(x0)] / [k f(x0) − (x0 − xp) f′(x0)]

Repeating the above procedure gives the general algorithm

xn+1 = [(xn − xp)(f(xn) − xn f′(xn)) + k xn f(xn)] / [k f(xn) − (xn − xp) f′(xn)]; 0 < k < 1, n = 0, 1, 2, … (2)
This algorithm converges to the root ξ provided kf(xn) − (xn − xp)f′(xn) ≠ 0. Here, the value of k is restricted to 0 < k < 1 for the following reasons.
(a) At k = 0, α = 0 and Algorithm (2) reduces to Newton's formula.
(b) At k = 1, α takes its maximum possible angle, which may result in more iterations and delayed convergence.
It should be noted that after the first iteration, the factor kf(x1) decreases the slope (to β, say, with β < α), and this decrease continues in the subsequent iterations as well.
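As an illustration, Algorithm (2) can be sketched in Python as below. The helper names, the default k = 0.00001, and the stopping rule |xn+1 − xn| < 0.00001 (the criterion used later in the results section) are our illustrative choices, not prescribed by the algorithm itself.

```python
def proposed_step(f, df, x, xp, k):
    """One step of Algorithm (2): map x_n to x_{n+1} using fixed point xp, 0 < k < 1."""
    fx, dfx = f(x), df(x)
    denom = k * fx - (x - xp) * dfx
    if denom == 0.0:
        raise ZeroDivisionError("k*f(x) - (x - xp)*f'(x) = 0: the algorithm fails here")
    return ((x - xp) * (fx - x * dfx) + k * x * fx) / denom

def solve(f, df, x0, xp, k=1e-5, tol=1e-5, max_iter=200):
    """Iterate Algorithm (2) until |x_{n+1} - x_n| < tol; return (root, iteration count)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = proposed_step(f, df, x, xp, k)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# A case from Table 1: x^3 - x^2 - x - 1 = 0 with x0 = 1.5, xp = 3.0, k = 0.00001
root, n = solve(lambda x: x**3 - x**2 - x - 1,
                lambda x: 3 * x**2 - 2 * x - 1, x0=1.5, xp=3.0)
```

With this setup the sketch converges to the root near 1.8392867552, in agreement with Table 1.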

3. Convergence analysis
In this article, the convergence rate is analysed by Taylor series expansion, in a manner similar to that for Newton's formula.
Let εn and εn+1 be the errors in the nth and (n + 1)th iterations, respectively; then
εn = ξ − xn, εn+1 = ξ − xn+1. (3)


Using Equation (3), expanding the right-hand side of Algorithm (2) by Taylor's series up to O(εn²) and using the fact that f(ξ) = 0 gives

[(xn − xp)(f(xn) − xn f′(xn)) + k xn f(xn)] / [k f(xn) − (xn − xp) f′(xn)]
= (a + b εn + c εn²) / (d + e εn + g εn²)
= [a/d + (b/d) εn + (c/d) εn²] / [1 + ((e/d) εn + (g/d) εn²)]
= [a/d + (b/d) εn + (c/d) εn²] [1 − ((e/d) εn + (g/d) εn²) + ((e/d) εn + (g/d) εn²)²]
= a/d + ((bd − ae)/d²) εn + (c/d − be/d² − ag/d² + ae²/d³) εn²,

where
a = −ξ² f′(ξ) + ξ xp f′(ξ),
b = ξ² f″(ξ) + ξ f′(ξ) − ξ xp f″(ξ) − k ξ f′(ξ),
c = −(3/2) ξ f″(ξ) − (1/2) ξ² f‴(ξ) + (xp/2) f″(ξ) + (1/2) ξ xp f‴(ξ) + (kξ/2) f″(ξ) + k f′(ξ),
d = −ξ f′(ξ) + xp f′(ξ),
e = −k f′(ξ) + ξ f″(ξ) + f′(ξ) − xp f″(ξ),
g = (k/2) f″(ξ) − (ξ/2) f‴(ξ) − f″(ξ) + (xp/2) f‴(ξ).
Since
a/d = ξ and (bd − ae)/d² = 0,
Algorithm (2) gives
εn+1 ≈ (finite quantity) · εn².
Thus, the algorithm converges quadratically.
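The quadratic order can also be checked numerically. In the sketch below, ξ is the root of x^3 − x^2 − x − 1 = 0 (the case from Table 1), and the ratio εn+1/εn² is observed to stay bounded as εn shrinks; the variable names are illustrative.

```python
f  = lambda x: x**3 - x**2 - x - 1
df = lambda x: 3 * x**2 - 2 * x - 1
xi = 1.8392867552141612                 # the simple root (tribonacci constant)
x, xp, k = 1.5, 3.0, 1e-5               # a case from Table 1

errs = []
for _ in range(5):
    errs.append(abs(xi - x))
    fx, dfx = f(x), df(x)
    # one step of Algorithm (2)
    x = ((x - xp) * (fx - x * dfx) + k * x * fx) / (k * fx - (x - xp) * dfx)

# second-order convergence: eps_{n+1} / eps_n^2 should stay bounded
ratios = [errs[i + 1] / errs[i] ** 2 for i in range(len(errs) - 1)]
```

The errors drop roughly as ε → Cε² per step, consistent with the analysis above.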

4. Results and discussion


From the above sections, it is seen that the proposed algorithm is single-step, has second-order convergence, and uses two function evaluations in each step, with computational efficiency index E = 2^(1/2) = 1.414 and informational efficiency I = 2/2 = 1, the same as Newton's method. Thus, the proposed algorithm for solving an equation f(x) = 0 is optimal as per the definition given in the introduction.
The comparative study between the proposed Algorithm (2) and Newton's method is given in Tables 1–6, where the stopping criterion is |xn+1 − xn| < 0.00001.
Table 1 shows results for x^3 − x^2 − x − 1 = 0 considering different values of xp and the same values of x0 and k, where the value x0 = 1.5 is chosen using the intermediate value theorem (IVT). It should be noted that the use of the IVT is not necessary. The table shows that, when k = 0.00001, the proposed algorithm gives the same result as Newton's method, correct to at least fourteen decimal places, with the same number of iterations.


Table 1. Comparative study of the proposed Algorithm (2) with Newton's method for x^3 − x^2 − x − 1 = 0 considering different values of xp and the same values of x0 and k.

x0  | xp   | k       | Iterations, Algorithm (2) | Iterations, Newton | Root by Algorithm (2) | Root by Newton
1.5 | −3.0 | 0.00001 | 5 | 5 | 1.83928675521416 | 1.83928675521416
1.5 | −1.5 | 0.00001 | 5 | 5 | 1.83928675521416 | 1.83928675521416
1.5 | 3.0  | 0.00001 | 5 | 5 | 1.83928675521416 | 1.83928675521416
1.5 | 6.0  | 0.00001 | 5 | 5 | 1.83928675521416 | 1.83928675521416

Table 2. Comparative study of the proposed Algorithm (2) with Newton's method for different equations considering the same values of x0, xp and k.

Equation | x0 | xp | k | Iterations, Algorithm (2) | Iterations, Newton | Root by Algorithm (2) | Root by Newton
x^3 − x^2 − x + 1 = 0 | 1.5 | 0.5 | 0.00001 | 16 | 16 | 1.000009413559562 | 1.000009413617196
4x^4 − 4x^2 = 0 | √21/7 | 1.65 | 0.00001 | 19 | 26 | 0.000007441381281 | 0.000007484300647
x^3 − e^(−x) = 0 | 0 | −1 | 0.00001 | 5 | 5 | 0.772882959152200 | 0.772882959152202
sin x = 0 | 1.5 | 0.5 | 0.00001 | 4 | 4 | −12.566370614359171 | −12.566370614359172
x^10 − 1 = 0 | 0.5 | −0.5 | 0.00001 | 43 | 43 | 1.000000000000000 | 1.000000000000000
e^(x^2 + 7x − 30) − 1 = 0 | 3.5 | 2 | 0.00001 | 11 | 11 | 3.000000000000253 | 3.000000000000253
x^3 + x^2 − 2 = 0 | 2.2 | 0 | 0.00001 | 6 | 6 | 1.000000000000000 | 1.000000000000000
(x − 2)^23 − 1 = 0 | 3.5 | −4 | 0.00001 | 13 | 13 | 3.000000000022981 | 3.000000000022981

Table 2 shows results for different equations considering the same values of x0, xp and k (k = 0.00001) for both methods.
Tables 1 and 2 show that, for this value of k, the results of the proposed algorithm are in close agreement with those of Newton's method.
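For instance, the Table 2 row for x^3 − e^(−x) = 0 can be reproduced with a short sketch of Algorithm (2); the helper name and tolerance below are our choices.

```python
import math

def solve(f, df, x0, xp, k=1e-5, tol=1e-5, max_iter=200):
    # Algorithm (2): x_{n+1} = ((x - xp)(f - x f') + k x f) / (k f - (x - xp) f')
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        x_new = ((x - xp) * (fx - x * dfx) + k * x * fx) / (k * fx - (x - xp) * dfx)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("no convergence")

# Row of Table 2: x^3 - e^{-x} = 0 with x0 = 0, xp = -1, k = 0.00001
root, n = solve(lambda x: x**3 - math.exp(-x),
                lambda x: 3 * x**2 + math.exp(-x), x0=0.0, xp=-1.0)
```

The sketch lands on the root near 0.7728829591522, matching the tabulated value.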

Table 3. Comparative study of the proposed Algorithm (2) with Newton's method for x e^(x^2) − sin^2 x + 3 cos x + 5 = 0 considering different values of x0 and the same values of xp and k.

Equation | x0 | xp | k | Iterations, Algorithm (2) | Iterations, Newton | Root by Algorithm (2) | Root by Newton
x e^(x^2) − sin^2 x + 3 cos x + 5 = 0 | −1 | 0 | 0.00001 | 5 | 5 | −1.207647827130919 | −1.207647827130919
x e^(x^2) − sin^2 x + 3 cos x + 5 = 0 | −2 | 0 | 0.00001 | 7 | 7 | −1.207647827173526 | −1.207647827173531

Table 3 shows results for x e^(x^2) − sin^2 x + 3 cos x + 5 = 0 considering different values of x0 and the same values of xp and k. The table shows that a change in x0 changes the number of iterations needed to obtain the same result, for both the proposed method and Newton's method.


Table 4. Comparative study of the proposed Algorithm (2) with Newton's method for sin x = 0 considering different values of k and the same values of x0 and xp.

x0  | xp  | k           | Iterations, Algorithm (2) | Root by Algorithm (2)
1.5 | 0.5 | 0.5         | 6 | 3.141592653589793
1.5 | 0.5 | 0.25        | 5 | 6.283185307179575
1.5 | 0.5 | 0.125       | 7 | 18.849555921538759
1.5 | 0.5 | 0.0625      | 6 | −116.238928182822363
1.5 | 0.5 | 0.03125     | 7 | −31.415926535897924
1.5 | 0.5 | 0.015625    | 5 | −15.707963267948967
1.5 | 0.5 | 0.0078125   | 5 | −18.849555921538759
1.5 | 0.5 | 0.00390625  | 5 | −12.566370614359171
1.5 | 0.5 | 0.001953125 | 4 | −12.566370614359164
1.5 | 0.5 | 0.0009765   | 4 | −12.566370614359172
1.5 | 0.5 | 0.00001     | 4 | −12.566370614359171

Newton's method (independent of k): 4 iterations, root −12.566370614359172.

Table 4 shows results for sin x = 0 considering different values of k and the same values of x0 and xp. This table illustrates the case of an equation with multiple roots. It is observed that the proposed algorithm gives different roots for different values of k, with variation in the number of iterations.
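This behaviour is easy to reproduce: iterating Algorithm (2) on sin x = 0 with x0 = 1.5, xp = 0.5 and a few of the k values from Table 4 lands on different multiples of π. The sketch below is illustrative (helper name and tolerance are our choices).

```python
import math

def solve(f, df, x0, xp, k, tol=1e-5, max_iter=200):
    # Algorithm (2) with the stopping rule |x_{n+1} - x_n| < tol
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        x_new = ((x - xp) * (fx - x * dfx) + k * x * fx) / (k * fx - (x - xp) * dfx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# sin x = 0 with x0 = 1.5, xp = 0.5 and three of the k values from Table 4;
# different k values steer the iteration toward different multiples of pi.
roots = {k: solve(math.sin, math.cos, 1.5, 0.5, k) for k in (0.5, 0.25, 0.00001)}
```

Each converged value is a zero of sin x, but not the same zero for every k.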

Table 5. Comparative study of the proposed Algorithm (2) with Newton's method for x^10 − 1 = 0 considering different values of k and the same values of x0 and xp.

x0  | xp   | k           | Iterations, Algorithm (2) | Root by Algorithm (2)
0.5 | −0.5 | 0.5         | 13 | 1.000000000005467
0.5 | −0.5 | 0.25        | 19 | 1.000000000000000
0.5 | −0.5 | 0.125       | 24 | 1.000000000000095
0.5 | −0.5 | 0.0625      | 29 | 1.000000000002376
0.5 | −0.5 | 0.03125     | 34 | 1.000000000000000
0.5 | −0.5 | 0.015625    | 37 | 1.000000000000537
0.5 | −0.5 | 0.0078125   | 39 | 1.000000000226450
0.5 | −0.5 | 0.00390625  | 41 | 1.000000000000017
0.5 | −0.5 | 0.001953125 | 42 | 1.000000000000000
0.5 | −0.5 | 0.0009765   | 42 | 1.000000000002626
0.5 | −0.5 | 0.00001     | 43 | 1.000000000000000

Newton's method (independent of k): 43 iterations, root 1.000000000000000.

Table 5 shows results for x^10 − 1 = 0 considering different values of k and the same values of x0 and xp. Here, it is observed that the proposed algorithm gives the same root as Newton's method, with variation in the number of iterations.


Table 6. Comparative study of the proposed Algorithm (2) with Newton's method for x^3 + x^2 − 2 = 0 with the same values of x0, xp and k.

x0 | xp | k       | Iterations, Algorithm (2) | Iterations, Newton | Root by Algorithm (2) | Root by Newton
0  | 2  | 0.00001 | 36 | Fails | 1.000000000020332 | Fails

Table 6 shows the result for x^3 + x^2 − 2 = 0 with the same values of x0, xp and k. It is observed that the proposed algorithm works even if f′(x) = 0, which is a limitation of the methods suggested by the authors of [1–7]. Of course, Newton's method is also unable to give a solution here, although according to Traub [10] it converges linearly in such cases.
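The Table 6 case can be reproduced directly. In the sketch below (helper names and iteration cap are ours), Newton's very first step at x0 = 0 divides by f′(0) = 0, while Algorithm (2) proceeds because its denominator reduces to kf(0) = −2k, which is nonzero.

```python
f  = lambda x: x**3 + x**2 - 2          # Table 6 equation; f'(0) = 0 at the guess x0 = 0
df = lambda x: 3 * x**2 + 2 * x

def solve(f, df, x0, xp, k=1e-5, tol=1e-5, max_iter=500):
    # Algorithm (2); at x0 = 0 the denominator is k*f(0) = -2k, which is nonzero
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        x_new = ((x - xp) * (fx - x * dfx) + k * x * fx) / (k * fx - (x - xp) * dfx)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("no convergence")

root, n = solve(f, df, x0=0.0, xp=2.0)  # the proposed algorithm reaches the root x = 1

try:                                    # Newton's first step divides by f'(0) = 0
    x_newton = 0.0 - f(0.0) / df(0.0)
    newton_failed = False
except ZeroDivisionError:
    newton_failed = True
```

The first step of Algorithm (2) here is very large (the denominator −2k is tiny), but the iteration subsequently contracts toward the root x = 1.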

5. Conclusions
This paper proposes a new single-step, second-order optimal iterative algorithm for finding a root of an equation f(x) = 0, in which the concepts of the slope of a line and Taylor's series are used in the derivation. The algorithm has computational efficiency index 1.414 and informational efficiency 1, and it works both when f′(x) ≠ 0 and when f′(x) = 0. The condition f′(x) = 0 is usually a limitation of the methods suggested by the authors of [1–7]. The algorithm may be considered a generalization of Newton's method involving two parameters, k and xp. The proposed algorithm is valid if
kf(xn) − (xn − xp)f′(xn) ≠ 0.
If kf(xn) − (xn − xp)f′(xn) = 0, then
kf(xn) = (xn − xp)f′(xn). (4)
As 0 < k < 1, and xn and xp are two different points, k ≠ 0 and (xn − xp) ≠ 0. Hence, kf(xn) = (xn − xp)f′(xn) is satisfied when f(xn) = 0 and f′(xn) = 0 simultaneously.
Again, rearranging Equation (4) gives
k/(xn − xp) = f′(xn)/f(xn). (5)
Whether this condition is satisfied depends on k and xp. If Equation (5) is satisfied at any stage of the iterations, the proposed algorithm fails, but this happens rarely.

Author contributions: Conceptualization, DRG and RCS; methodology, DRG and RCS; software, DRG and RCS; validation, DRG and RCS; formal analysis, DRG and RCS; investigation, DRG and RCS; data curation, DRG and RCS; writing—original draft preparation, RCS; writing—review and editing, DRG and RCS. All authors have read and agreed to the published version of the manuscript.
Conflict of interest: The authors declare no conflict of interest.

6
Journal of AppliedMath 2024, 2(1), 477.

References
1. Ujević N. A method for solving nonlinear equations. Applied Mathematics and Computation. 2006, 174(2): 1416-1426. doi:
10.1016/j.amc.2005.05.036
2. Sharma JR. A one-parameter family of second-order iteration methods. Applied Mathematics and Computation. 2007,
186(2): 1402-1406. doi: 10.1016/j.amc.2006.07.140
3. Saeed RK, Aziz KM. An iterative method with quartic convergence for solving nonlinear equations. Applied Mathematics
and Computation. 2008, 202(2): 435-440. doi: 10.1016/j.amc.2008.02.037
4. Maheshwari AK. A fourth order iterative method for solving nonlinear equations. Applied Mathematics and Computation.
2009, 211(2): 383-391. doi: 10.1016/j.amc.2009.01.047
5. Singh MK. A six-order variant of Newton's method for solving nonlinear equations. Computational Methods in Science and Technology. 2009, 15(2): 185-193. doi: 10.12921/CMST.2009.15.02.185-193
6. Thukral R. A new eighth-order iterative method for solving nonlinear equations. Applied Mathematics and Computation.
2010, 217(1): 222-229. doi: 10.1016/j.amc.2010.05.048
7. Matinfar M, Aminzadeh M. An iterative method with six-order convergence for solving nonlinear equations. International
Journal of Mathematical Modeling and Computations. 2012, 2(1): 45-51.
8. Saheya B, Chen G, Sui Y, et al. A new Newton-like method for solving nonlinear equations. SpringerPlus. 2016, 5(1). doi:
10.1186/s40064-016-2909-7
9. Ostrowski AM. Solution of equations and systems of equations. Academic Press, New York. 1960.
10. Traub JF. Iterative methods for the solution of equations. Prentice-Hall, Englewood Cliffs, New Jersey. 1964.
