
Optimal derivative-free root finding methods based on the Hermite interpolation
Article in Journal of Nonlinear Science and Applications · August 2016
DOI: 10.22436/jnsa.009.06.82



Available online at www.tjnsa.com
J. Nonlinear Sci. Appl. 9 (2016), 4427–4435
Research Article

Optimal derivative-free root finding methods based


on the Hermite interpolation
Nusrat Yasmin, Fiza Zafar∗, Saima Akram
Centre for Advanced Studies in Pure and Applied Mathematics, Bahauddin Zakariya University Multan, Pakistan.

Communicated by N. Hussain

Abstract
We develop n-point optimal derivative-free root finding methods of order 2^n, based on the Hermite interpolation, by applying a first-order derivative transformation. Analysis of convergence confirms that the optimal order of convergence of the transformed methods is preserved, in accordance with the conjecture of Kung and Traub. To check the effectiveness and reliability of the newly presented methods, different types of nonlinear functions are taken and compared. ©2016 All rights reserved.
Keywords: Root finding methods, optimal order of convergence, derivative approximation, Hermite interpolation.
2010 MSC: 65H04, 65H05.

1. Introduction

Nonlinear problems frequently arise in real-life situations. Mathematical models of problems in electrical, mechanical, civil, and chemical engineering commonly end up as a nonlinear equation or a system of nonlinear equations. Moreover, nonlinear problems are commonly found in number theory, modeling and simulation, and cryptography. Therefore, the development of efficient methods for finding solutions of nonlinear equations has been of great importance. Various one-point and multipoint root finding methods were developed in the recent past (see, for example, [1-3, 5, 7, 12]). A significant one-point method is Newton's method, which has second-order convergence and is optimal according to the conjecture of Kung and Traub [3]. Newton's method requires one functional and one derivative


∗Corresponding author
Email addresses: [email protected] (Nusrat Yasmin), [email protected] (Fiza Zafar), [email protected]
(Saima Akram)

Received 2016-06-09

evaluation for the completion of one computational cycle. However, the method is very sensitive to the choice of the initial guess. Another drawback is the frequent unavailability of the first derivative of the function, which motivates the development of iterative schemes that require no derivatives. For the comparison of root finding methods, Ostrowski [4] coined the concept of the efficiency index: "the efficiency index of an iterative method is θ^(1/ν), where θ represents the order of convergence of an iterative method and ν is the number of functional evaluations per iteration". An iterative method is considered more efficient if it takes less time and fewer evaluations to provide fast and accurate approximations. The development of highly efficient multipoint root finding methods has been the focus of many researchers in the last decade.
Steffensen [9] approximated the first derivative arising in Newton's method by

f'(x_n) ≈ (f(x_n) − f(w_n))/(x_n − w_n) = f[x_n, w_n]    (1.1)

and obtained the following iterative scheme

x_{n+1} = x_n − f(x_n)/f[w_n, x_n],  n ≥ 0,    (1.2)
with second-order convergence, where w_n = x_n + f(x_n). Recently, Cordero and Torregrosa [1] used the following approximation of the first derivative

f'(x_n) ≈ f[x_n, w_n],  w_n = x_n + γ f(x_n)^m,  m ≥ 2,  γ ∈ R \ {0},    (1.3)

to transform a three-point eighth-order with-derivative method into a derivative-free method while preserving the order of convergence. We generalize the transformation (1.3) to transform n-point optimal methods involving the first derivative, based on the Hermite interpolation.
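As a concrete illustration, Steffensen's scheme (1.2) can be implemented in a few lines of double-precision Python. This is our own sketch, not code from the paper; the function name, tolerance, and iteration cap are our choices:

```python
def steffensen(f, x, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free iteration (1.2):
    x_{n+1} = x_n - f(x_n)/f[w_n, x_n] with w_n = x_n + f(x_n)."""
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        w = x + fx                      # shifted node w_n = x_n + f(x_n)
        dd = (f(w) - fx) / (w - x)      # divided difference f[w_n, x_n] ~ f'(x_n)
        x_new = x - fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# sqrt(2) as the positive root of x^2 - 2 = 0
root = steffensen(lambda x: x**2 - 2.0, 1.0)
```

Replacing `w = x + fx` by `w = x + gamma * fx**m` gives the more general shift of the transformation (1.3).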
In Section 2, we present two-point, three-point, four-point, and n-point optimal derivative-free methods based on the Hermite interpolation. The first derivative at each successive iterate is approximated using the Hermite interpolation. Hence, we first construct a two-point with-derivative iterative method and then transform it into a derivative-free one. The three-point, four-point, and n-point methods are transformed forms of existing optimal with-derivative methods based on the Hermite interpolation. We apply the transformation (1.3) in such a way that the optimal order of convergence of the newly developed Hermite interpolation based schemes is preserved. Analysis of convergence of the newly developed schemes is also given. In Section 3, we give a numerical comparison of the presented methods with existing methods of the same kind.

2. Transforming with-derivative methods based on the Hermite interpolation into derivative-free methods

In this section, we first develop two-, three-, four-, and n-point with-derivative methods based on the Hermite interpolation and then use (1.3) to transform these with-derivative optimal methods into derivative-free ones.

2.1. Optimal two-point fourth-order method

For the construction of an optimal fourth-order two-point method using three function evaluations, we use the following quadratic polynomial

f(t) = H_2(t) = a_0 + a_1(t − x) + a_2(t − x)^2,    (2.1)

satisfying

f(x) = H_2(x),  f'(x) = H_2'(x),  f(y) = H_2(y).    (2.2)

By using (2.2), we get

a_0 = f(x),    (2.3)

a_1 = f'(x) = H_2'(x),    (2.4)

a_2 = (f[y, x] − f'(x))/(y − x).    (2.5)

Hence, by using the values of a_1 and a_2, we obtain

H_2'(y_n) = 2f[y_n, x_n] − f'(x_n).    (2.6)

By using (2.6), we obtain the following two-point optimal with-derivative method:

y_n = x_n − f(x_n)/f'(x_n),  n ≥ 0,
x_{n+1} = y_n − f(y_n)/(2f[y_n, x_n] − f'(x_n)).    (2.7)
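In double precision, scheme (2.7) can be sketched as follows (our own code, not from the paper; the stopping tolerance is our choice):

```python
def two_point_optimal(f, fp, x, n_iter=10, tol=1e-12):
    """Two-point fourth-order scheme (2.7); fp is the derivative f'."""
    for _ in range(n_iter):
        fx = f(x)
        if abs(fx) < tol:                # stop before y - x underflows
            break
        y = x - fx / fp(x)               # first step: Newton
        dd = (f(y) - fx) / (y - x)       # divided difference f[y_n, x_n]
        x = y - f(y) / (2.0 * dd - fp(x))  # denominator H_2'(y_n) from (2.6)
    return x

root = two_point_optimal(lambda v: v**2 - 2.0, lambda v: 2.0 * v, 1.0)
```

Each cycle uses three evaluations, f(x_n), f(y_n), and f'(x_n), for order four, matching the optimality bound 2^(ν−1) of Kung and Traub.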
Theorem 2.1. Let f : D ⊆ R → R be a sufficiently differentiable function in an open interval D and α a simple root of f. If x_0 is close enough to α, then the iterative scheme defined by (2.7) is of optimal order 4 and has the following error equation:

e_{n+1} = c_2(c_2^2 − c_3)e_n^4 + O(e_n^5),    (2.8)

where c_j = f^(j)(α)/(j! f'(α)), j ≥ 2, and e_n = x_n − α.
Proof. By using Taylor's expansions about α, we have

f(x_n) = f'(α)[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4] + O(e_n^5)

and

f'(x_n) = f'(α)[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3] + O(e_n^4).

By substituting the above expressions in the first step of (2.7), we get

y_n − α = c_2 e_n^2 + (2c_3 − 2c_2^2)e_n^3 + (3c_4 − 7c_3 c_2 + 4c_2^3)e_n^4 + O(e_n^5).

Again, by using Taylor's expansion, we have

f(y_n) = f'(α)[c_2 e_n^2 + (2c_3 − 2c_2^2)e_n^3 + (3c_4 − 7c_3 c_2 + 4c_2^3)e_n^4] + O(e_n^5).

Hence, by using the above expressions in the second step of (2.7), we get the following error equation:

e_{n+1} = c_2(c_2^2 − c_3)e_n^4 + O(e_n^5).

This completes the proof.

Now, we intend to transform the optimal two-point method (2.7) into a derivative-free one by using (1.3) so that the optimal order is preserved. By using the approximation given in (1.3), the scheme (2.7) transforms into

y_n = x_n − f(x_n)/f[w_n, x_n],  w_n = x_n + f(x_n)^2,  n ≥ 0,
x_{n+1} = y_n − f(y_n)/(2f[y_n, x_n] − f[w_n, x_n]),    (2.9)

where the denominator of the second step is the transformed form of H_2'(y_n) in (2.6). Similarly to Theorem 2.1, it can easily be shown that the scheme (2.9) has optimal order four with the following error equation:

e_{n+1} = −c_2(c_3 + f'(α)^2 c_2 − 2c_2^2)e_n^4 + O(e_n^5).
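A double-precision sketch of the derivative-free scheme (2.9) follows (our own code; the stopping tolerance is deliberately coarse because f(x_n)^2 falls below the floating-point spacing of x_n once |f(x_n)| is small, which would make w_n indistinguishable from x_n):

```python
def two_point_df(f, x, n_iter=10, tol=1e-8):
    """Derivative-free two-point fourth-order scheme (2.9)."""
    for _ in range(n_iter):
        fx = f(x)
        if abs(fx) < tol:                # f(x)^2 would underflow w - x
            break
        w = x + fx ** 2                  # w_n = x_n + f(x_n)^2
        dwx = (f(w) - fx) / (w - x)      # f[w_n, x_n] replaces f'(x_n)
        y = x - fx / dwx
        fy = f(y)
        dyx = (fy - fx) / (y - x)        # f[y_n, x_n]
        x = y - fy / (2.0 * dyx - dwx)
    return x

root = two_point_df(lambda v: v**2 - 2.0, 1.0)
```

With fourth-order convergence, the final pass before the tolerance trips already drives the error to roughly machine precision.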



2.2. Optimal three-point eighth-order method

Petković in [6] proposed a three-point method, based on the Hermite interpolation, given by

y_n = x_n − f(x_n)/f'(x_n),  n ≥ 0,
z_n = φ_f(x_n, y_n),  φ_f ∈ Ψ_4,    (2.10)
x_{n+1} = z_n − f(z_n)/H_3'(z_n),

where Ψ_4 is a class of real functions chosen in such a way that φ_f requires only the already computed values f(x_n), f'(x_n), and f(y_n) and provides fourth-order convergence of the sequence {x_n}. The scheme (2.9) is an example of such a function. The value of H_3'(z_n) in [6] is given by

H_3'(z_n) = f[z_n, x_n](2 + (z_n − x_n)/(z_n − y_n)) − f[y_n, x_n](z_n − x_n)^2/((y_n − x_n)(z_n − y_n)) + f'(x_n)(z_n − y_n)/(y_n − x_n).    (2.11)
By applying the transformation (1.3) to the three-point scheme (2.10) and using the scheme (2.9) at the second step, we obtain a new optimal three-point method given by

y_n = x_n − f(x_n)/f[w_n, x_n],  w_n = x_n + f(x_n)^3,  n ≥ 0,
z_n = y_n − f(y_n)/(2f[y_n, x_n] − f[w_n, x_n]),    (2.12)
x_{n+1} = z_n − f(z_n)/K_3'(z_n),

where

K_3'(z_n) = f[z_n, x_n](2 + (z_n − x_n)/(z_n − y_n)) − f[y_n, x_n](z_n − x_n)^2/((y_n − x_n)(z_n − y_n)) + f[w_n, x_n](z_n − y_n)/(y_n − x_n).
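A double-precision sketch of (2.12) is given below (our own code; with m = 3 the shift f(x_n)^3 underflows once |f(x_n)| is of order 1e-5, hence the coarse stopping tolerance, whereas the paper's experiments avoid this issue by using multiprecision arithmetic):

```python
def three_point_df(f, x, n_iter=8, tol=1e-5):
    """Derivative-free three-point eighth-order scheme (2.12)."""
    for _ in range(n_iter):
        fx = f(x)
        if abs(fx) < tol:                # f(x)^3 would underflow w - x
            break
        w = x + fx ** 3                  # w_n = x_n + f(x_n)^3
        dwx = (f(w) - fx) / (w - x)      # f[w_n, x_n]
        y = x - fx / dwx                 # first step of (2.12)
        fy = f(y)
        dyx = (fy - fx) / (y - x)        # f[y_n, x_n]
        z = y - fy / (2.0 * dyx - dwx)   # second step
        fz = f(z)
        dzx = (fz - fx) / (z - x)        # f[z_n, x_n]
        k3 = (dzx * (2.0 + (z - x) / (z - y))        # K_3'(z_n)
              - dyx * (z - x) ** 2 / ((y - x) * (z - y))
              + dwx * (z - y) / (y - x))
        x = z - fz / k3                  # third step
    return x

root = three_point_df(lambda v: v**2 - 2.0, 1.0)
```

Because the order is eight, the last full pass before the tolerance trips reaches machine-level accuracy from an error of about 1e-2.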
Theorem 2.2. Let f : D ⊆ R → R be a sufficiently differentiable function in an open interval D and α a simple root of f. If x_0 is close enough to α, then the iterative scheme defined by (2.12) is of optimal order 8 and has the following error equation:

e_{n+1} = c_2^2(c_2 c_3^2 − 2c_2^3 c_3 + c_4 c_2^2 + c_2^5 − c_4 c_3)e_n^8 + O(e_n^9),    (2.13)

where c_j = f^(j)(α)/(j! f'(α)), j ≥ 2, and e_n = x_n − α.

Proof. By using Taylor's expansions, the proof is similar to that of Theorem 2.1 and those already given in [6, 12]. Hence it is omitted.

2.3. Optimal four-point sixteenth-order method

Fiza et al. in [11] developed an optimal sixteenth-order four-point scheme based on the Hermite interpolation. We add the fourth step of their scheme to (2.12) and obtain a new derivative-free optimal four-point method, using the transformation (1.3), as follows:

y_n = x_n − f(x_n)/f[w_n, x_n],  w_n = x_n + f(x_n)^4,  n ≥ 0,
z_n = y_n − f(y_n)/(2f[y_n, x_n] − f[w_n, x_n]),
t_n = z_n − f(z_n)/K_3'(z_n),    (2.14)
x_{n+1} = t_n − f(t_n)/K_4'(t_n),

where

K_3'(z) = f[z, x](2 + (z − x)/(z − y)) − f[y, x](z − x)^2/((y − x)(z − y)) + f[w, x](z − y)/(y − x),    (2.15)
K_4'(t) = f[t, z] + (t − z)f[t, z, y] + (t − z)(t − y)f[t, z, y, x] + (t − z)(t − y)(t − x)f[t, z, y, x, 2],

and f[t, z, y, x, 2] is defined by

f[t, z, y, x, 2] = (f[t, z] − f[z, y])/((t − x)^2(t − y)) − (f[z, y] − f[y, x])/((t − x)^2(z − x)) − (f[z, y] − f[y, x])/((t − x)(z − x)^2) + (f[y, x] − f[w, x])/((t − x)(z − x)(y − x)).

Theorem 2.3. Let f : D ⊆ R → R be a sufficiently differentiable function in an open interval D and α a simple root of f. If x_0 is close enough to α, then the iterative scheme defined by (2.14) is of optimal order 16 and has the following error equation:

e_{n+1} = −c_2^4(c_2^2 − c_3)^2(c_2^3 − c_3 c_2 + c_4)(−c_2^4 + c_3 c_2^2 + f'(α)^4 c_2 − c_4 c_2 + c_5)e_n^16 + O(e_n^17),

where c_j = f^(j)(α)/(j! f'(α)), j ≥ 2, and e_n = x_n − α.

Proof. By using Taylor's expansions, the proof is similar to those already given in [11]. Hence it is omitted.

2.4. Optimal n-point method of order 2^n

In [6], Petković et al. designed an n-point n-step method of order 2^n requiring n evaluations of the function and one evaluation of the first derivative f'(x_n) at each step. By applying the transformation (1.3), we obtain the following derivative-free n-point n-step method:

φ_1(x_k) = x_k − f(x_k)/f[w_k, x_k],  w_k = x_k + f(x_k)^m,  m ≥ n,
φ_2(x_k) = Ψ_f(x_k, φ_1(x_k)),  Ψ_f ∈ Ψ_4,
φ_3(x_k) = φ_2 − f(φ_2(x_k))/K_3'(φ_2(x_k)),    (2.16)
...
x_{k+1} = φ_n(x_k) = φ_{n−1}(x_k) − f(φ_{n−1}(x_k))/K_n'(φ_{n−1}(x_k)),

where Ψ_4 is the class of fourth-order iterative methods and K_n is the Hermite interpolating polynomial of degree n given by

f(x) = K_n(x) = a_0 + a_1(x − x_k) + a_2(x − x_k)^2 + · · · + a_n(x − x_k)^n,    (2.17)

with the conditions

f(x_k) = K_n(x_k),  f'(x_k) = K_n'(x_k),
f(φ_1) = K_n(φ_1), ..., f(φ_{n−1}) = K_n(φ_{n−1}).    (2.18)

By the use of the conditions (2.18) and implementing the scheme (2.9) at the second step, the coefficients a_0, a_1, ..., a_n can be determined easily, and hence we obtain an n-point derivative-free method of the following form:

φ_1(x_k) = x_k − f(x_k)/f[w_k, x_k],  w_k = x_k + f(x_k)^m,  m ≥ n,
φ_2(x_k) = φ_1 − f(φ_1(x_k))/K_2'(φ_1(x_k)),
φ_3(x_k) = φ_2 − f(φ_2(x_k))/K_3'(φ_2(x_k)),    (2.19)
...
x_{k+1} = φ_n(x_k) = φ_{n−1}(x_k) − f(φ_{n−1}(x_k))/K_n'(φ_{n−1}(x_k)),

where K_n is defined by (2.17).

Theorem 2.4. Let f : D ⊆ R → R be a sufficiently differentiable function in an open interval D and α a simple root of f. If x_0 is close enough to α, then the n-point iterative scheme defined by (2.19) is of optimal order 2^n.

3. Numerical results

In this section, we first transform the optimal four-point method of Sharifi et al. [7], denoted by SL16, which is given by

y_n = x_n − f(x_n)/f'(x_n),  n ≥ 0,
r_n = y_n − L_1(u_n) f(y_n)/f'(x_n),
s_n = r_n − L_2(u_n, v_n, w_n) f(r_n)/f'(x_n),    (3.1)
x_{n+1} = s_n − L_3(u_n, v_n, w_n, p_n, q_n, t_n) f(s_n)/f'(x_n),

where

L_1(u_n) = 1 + 2u_n + 5u_n^2 − 6u_n^3,
L_2(u_n, v_n, w_n) = 1 + 2u_n + 4w_n + 6u_n^2 + v_n,
L_3(u_n, v_n, w_n, p_n, q_n, t_n) = 1 + 6u_n^2 + 2u_n − v_n^3 + v_n + 4w_n − 4w_n^2 + u_n w_n + 6u_n^2 w_n + 2u_n^3 w_n − 10u_n w_n^2 + t_n + 2q_n + 8p_n + 2u_n t_n + 2v_n w_n + 6u_n^2 t_n − 4v_n^2 w_n + 24u_n^4 w_n,

are weight functions, with

u_n = f(y_n)/f(x_n),  v_n = f(r_n)/f(y_n),  w_n = f(r_n)/f(x_n),  p_n = f(s_n)/f(x_n),  q_n = f(s_n)/f(y_n),  t_n = f(s_n)/f(r_n),    (3.2)

and the modified form of which, denoted by MSL16, is given by

y_n = x_n − f(x_n)/f[w_n, x_n],  w_n = x_n + f(x_n)^4,  n ≥ 0,
r_n = y_n − L_1(u_n) f(y_n)/f[w_n, x_n],
s_n = r_n − L_2(u_n, v_n, w_n) f(r_n)/f[w_n, x_n],
x_{n+1} = s_n − L_3(u_n, v_n, w_n, p_n, q_n, t_n) f(s_n)/f[w_n, x_n],

where the argument w_n of the weight functions L_2 and L_3 still denotes the ratio f(r_n)/f(x_n) from (3.2), while the node used in the divided difference is w_n = x_n + f(x_n)^4.

We now test all the discussed methods on a number of nonlinear equations. We employ multiprecision arithmetic with 4000 significant decimal digits in the programming package Maple 16 to obtain high accuracy and avoid loss of significant digits. We compare the convergence behavior of the modified methods MSL16 and MFM16 (the method (2.14)) with their with-derivative versions, i.e., SL16 and the optimal four-point method of Fiza et al. (FM16) [11], and with the optimal sixteenth-order class of Soleymani et al. (SSS16) [8], using the nonlinear functions given in Table 1. Table 1 also includes the exact root α and the initial approximation x_0, calculated using Maple 16. The error |x_n − α| and the computational order of convergence (coc) for the first three iterations of the various methods are displayed in Tables 2-6, which support the theoretical order of convergence. The formula to compute the computational order of convergence (coc) is given by
coc ≈ log|(x_{n+1} − α)/(x_n − α)| / log|(x_n − α)/(x_{n−1} − α)|.
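For instance, the coc of three stored consecutive iterates can be computed as follows (our helper, not from the paper):

```python
import math

def coc(x_prev, x_curr, x_next, alpha):
    """Computational order of convergence from three consecutive iterates."""
    return (math.log(abs(x_next - alpha) / abs(x_curr - alpha))
            / math.log(abs(x_curr - alpha) / abs(x_prev - alpha)))

# A toy quadratically convergent sequence (errors 1e-1, 1e-2, 1e-4)
# gives a coc close to 2.
order = coc(1.1, 1.01, 1.0001, 1.0)
```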
It can be seen from Tables 1-6 that, for the presented examples, the modified four-point methods MSL16, MFM16, and MSSS16 (the analogous modification of SSS16) are comparable and competitive with the methods SL16, FM16, and SSS16.

Table 1: Test functions

Example  Test function                                       Exact root α    x_0
1        f_1(x) = (2 + x^3) cos(πx/2) + log(x^2 + 2x + 2)    −1              −0.93
2        f_2(x) = x^2 e^x + x cos(1/x^3) + 1                 −1.5650602...   −2.0
3        f_3(x) = x e^x + log(1 + x + x^4)                   0               −0.5
4        f_4(x) = e^{sin(8x)} − 4x                           0.34985721...   7.0
5        f_5(x) = −20x^5 − x^2 + 1/2                         0.42767729...   0.38
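The tabulated roots can be sanity-checked in ordinary double precision (our own snippet; α is exact for examples 1 and 3, and quoted to truncated digits for example 2, hence the looser tolerance there):

```python
import math

f1 = lambda x: (2 + x**3) * math.cos(math.pi * x / 2) + math.log(x**2 + 2*x + 2)
f2 = lambda x: x**2 * math.exp(x) + x * math.cos(1 / x**3) + 1
f3 = lambda x: x * math.exp(x) + math.log(1 + x + x**4)

assert abs(f1(-1.0)) < 1e-12       # cos(-pi/2) = 0 and log(1) = 0 exactly
assert abs(f2(-1.5650602)) < 1e-4  # alpha truncated to 8 digits in Table 1
assert abs(f3(0.0)) < 1e-12        # f3(0) = 0 + log(1) = 0
```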

Table 2: Numerical results for example 1

f_1(x) = (2 + x^3) cos(πx/2) + log(x^2 + 2x + 2),  x_0 = −0.93
           SL16         FM16         SSS16        MSL16        MFM16        MSSS16
|x_1 − α|  2.63(−9)     2.60(−12)    4.97(−11)    2.59(−9)     1.86(−12)    4.35(−11)
|x_2 − α|  1.61(−127)   2.75(−177)   1.18(−154)   1.23(−127)   1.15(−179)   1.36(−155)
|x_3 − α|  6.48(−2019)  6.94(−2817)  1.23(−2452)  8.22(−2021)  4.91(−2855)  1.17(−2467)
coc        16.00        16.00        16.00        16.00        16.00        16.00

Table 3: Numerical results for example 2

f_2(x) = x^2 e^x + x cos(1/x^3) + 1,  x_0 = −2.0
           SL16         FM16         SSS16        MSL16        MFM16        MSSS16
|x_1 − α|  9.71(−15)    8.50(−15)    5.80(−15)    4.34(−15)    2.86(−15)    7.33(−15)
|x_2 − α|  1.08(−228)   8.63(−231)   6.47(−234)   2.89(−234)   2.95(−238)   1.11(−232)
|x_3 − α|  5.76(−3652)  1.11(−3686)  3.77(−3737)  4.42(−3714)  4.91(−3806)  9.19(−3718)
coc        16.00        16.00        16.00        16.00        16.00        16.00

Table 4: Numerical results for example 3

f_3(x) = x e^x + log(1 + x + x^4),  x_0 = −0.5
           SL16      FM16         SSS16       MSL16        MFM16        MSSS16
|x_1 − α|  2.97      6.04(−6)     3.80(−4)    1.52(−7)     4.12(−10)    1.83(−10)
|x_2 − α|  7.70(−1)  2.67(−88)    8.79(−60)   1.24(−112)   4.37(−154)   2.29(−160)
|x_3 − α|  4.59(−6)  5.85(−1406)  5.72(−950)  4.90(−1794)  1.16(−2457)  7.91(−2559)
coc        8.94      16.00        16.00       16.00        16.00        16.00

Table 5: Numerical results for example 4

f_4(x) = e^{sin(8x)} − 4x,  x_0 = 7
           SL16  FM16       SSS16  MSL16       MFM16       MSSS16
|x_1 − α|  D*    33.22      D      3.00(−2)    2.42(−3)    3.04(−3)
|x_2 − α|  D     3.26(−4)   D      5.71(−11)   2.06(−32)   3.79(−30)
|x_3 − α|  D     5.75(−51)  D      8.21(−155)  3.90(−498)  1.38(−461)
coc        D     9.33       D      16.49       16.02       16.03

Table 6: Numerical results for example 5

f_5(x) = −20x^5 − x^2 + 1/2,  x_0 = 0.38
           SL16        FM16         SSS16        MSL16       MFM16        MSSS16
|x_1 − α|  1.28(−3)    1.11(−13)    2.19(−9)     1.05(−3)    4.86(−14)    1.51(−9)
|x_2 − α|  2.68(−34)   4.53(−200)   6.00(−128)   1.05(−35)   3.87(−205)   1.65(−131)
|x_3 − α|  5.39(−525)  2.70(−3182)  6.02(−2025)  1.60(−547)  1.01(−3262)  7.74(−2083)
coc        16.00       16.00       16.00        16.00       16.00        16.00

*D stands for divergence

3.1. Conclusions
We applied the transformation of Cordero and Torregrosa [1] to convert n-point optimal methods based on the Hermite interpolation into derivative-free optimal methods. Some existing derivative-based methods were also modified using this transformation. Convergence analysis was performed for the transformed methods. Finally, numerical examples were supplied and a comparison was provided, which supports the theoretical results. It can be seen that the modified derivative-free methods can compete with, and even outperform, their with-derivative versions. Especially in the case of example 4, the derivative-involved methods SL16 and SSS16 fail to converge to any root, while their transformed derivative-free forms converge even when the initial guess is taken far from the required root.

References
[1] A. Cordero, J. R. Torregrosa, Low-complexity root-finding iteration functions with no derivatives of any order of convergence, J. Comput. Appl. Math., 275 (2015), 502-515.
[2] Y. H. Geum, Y. I. Kim, A biparametric family of optimally convergent sixteenth-order multipoint methods with their fourth-step weighting function as a sum of a rational and a generic two-variable function, J. Comput. Appl. Math., 235 (2011), 3178-3188.
[3] H. T. Kung, J. F. Traub, Optimal order of one-point and multipoint iteration, J. Assoc. Comput. Mach., 21 (1974), 643-651.
[4] A. M. Ostrowski, Solution of equations and systems of equations, Academic Press, New York, (1960).
[5] M. S. Petković, B. Neta, L. D. Petković, J. Džunić, Multipoint methods for solving nonlinear equations, Elsevier, Amsterdam, (2013).
[6] M. S. Petković, L. D. Petković, Families of optimal multipoint methods for solving nonlinear equations: a survey, Appl. Anal. Discrete Math., 4 (2010), 1-22.
[7] S. Sharifi, M. Salimi, S. Siegmund, T. Lotfi, A new class of optimal four-point methods with convergence order 16 for solving nonlinear equations, Math. Comput. Simulation, 119 (2016), 69-90.
[8] F. Soleymani, S. Shateyi, H. Salmani, Computing simple roots by an optimal sixteenth-order class, J. Appl. Math., 2012 (2012), 13 pages.
[9] J. F. Steffensen, Remarks on iteration, Scandinavian Actuarial J., 16 (1933), 64-72.
[10] J. F. Traub, Iterative methods for the solution of equations, Prentice-Hall, Englewood Cliffs, New Jersey, (1964).
[11] F. Zafar, N. Hussain, Z. Fatima, A. Kharal, Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots, Sci. World J., 2014 (2014), 18 pages.
[12] F. Zafar, N. Yasmin, S. Akram, M. D. Junjua, A general class of derivative free optimal root finding methods based on rational interpolation, Sci. World J., 2015 (2015), 12 pages.
