Hermite Optimal Derivative-Free Root Finding Methods Based On The Hermite Interpolation
Nusrat Yasmin, Fiza Zafar, Saima Akram
Communicated by N. Hussain
Abstract
We develop n-point optimal derivative-free root finding methods of order $2^n$, based on the Hermite interpolation, by applying a first-order derivative transformation. Convergence analysis confirms that the optimal order of convergence of the transformed methods is preserved, in accordance with the conjecture of Kung and Traub. To check the effectiveness and reliability of the newly presented methods, different types of nonlinear functions are tested and compared. ©2016 All rights reserved.
Keywords: Root finding methods, optimal order of convergence, derivative approximation, Hermite
interpolation.
2010 MSC: 65H04, 65H05.
1. Introduction
Nonlinear problems frequently arise in real-life situations. Mathematical models of problems in electrical, mechanical, civil, and chemical engineering commonly end up as a nonlinear equation or a system of nonlinear equations. Moreover, nonlinear problems are commonly found in number theory, modeling and simulation, and cryptography. Therefore, the development of efficient methods for finding the solutions of nonlinear equations has been of great importance. Various one-point and multipoint root finding methods were developed in the recent past (see, for example, [1–3, 5, 7, 12]). A significant one-point method is Newton's method, which has second order of convergence and is optimal according to the conjecture of Kung and Traub [3]. Newton's method requires one functional and one derivative
evaluation for the completion of one computational cycle. However, the method is very sensitive to the choice of the initial guess. Another drawback is that the first derivative of the function is often unavailable, which motivates the development of iterative schemes that do not require any derivatives. For the comparison of root finding methods, Ostrowski [4] coined the concept of the efficiency index: "the efficiency index of an iterative method is $\theta^{1/\nu}$, where $\theta$ represents the order of convergence of an iterative method and $\nu$ is the number of functional evaluations per iteration". An iterative method is considered more efficient if it provides fast and accurate approximations with fewer evaluations and less computational time. The development of highly efficient multipoint root finding methods has been the focus of many researchers in the last decade.
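As a quick illustration of this definition (a Python sketch, not part of the original paper), the efficiency indices of optimal methods in the Kung–Traub sense, which reach order $2^n$ with $n + 1$ evaluations, can be tabulated as follows.

```python
# Ostrowski's efficiency index: theta**(1/nu), where theta is the order of
# convergence and nu is the number of functional evaluations per iteration.
def efficiency_index(order: float, evaluations: int) -> float:
    return order ** (1.0 / evaluations)

# Optimal n-point methods in the Kung-Traub sense: order 2**n with n + 1 evaluations.
for n in range(1, 5):
    order, evals = 2 ** n, n + 1
    print(f"n = {n}: order {order:2d}, {evals} evaluations, "
          f"efficiency index {efficiency_index(order, evals):.3f}")
```

The index grows from about 1.414 for Newton's method (n = 1) towards 1.741 for an optimal four-point, sixteenth-order method, which is why multipoint methods are attractive.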
Steffensen [9] approximated the first derivative arising in Newton's method by
$$f'(x_n) \approx \frac{f(x_n) - f(w_n)}{x_n - w_n} = f[x_n, w_n], \qquad (1.1)$$
and obtained the following iterative scheme
$$x_{n+1} = x_n - \frac{f(x_n)}{f[w_n, x_n]}, \quad n \ge 0, \qquad (1.2)$$
with second order of convergence, where $w_n = x_n + f(x_n)$. Recently, Cordero and Torregrosa [1] used the following approximation of the first derivative,
$$f'(x_n) \approx f[w_n, x_n] = \frac{f(w_n) - f(x_n)}{w_n - x_n}, \qquad w_n = x_n + f(x_n)^m, \qquad (1.3)$$
to transform a three-point eighth-order with-derivative method into a derivative-free method while preserving the order of convergence. We generalize transformation (1.3) to transform n-point optimal methods involving the first derivative, based on the Hermite interpolation.
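To fix ideas, the following Python sketch (not part of the original paper) applies the approximation (1.3) to Newton's method, which for m = 1 recovers the Steffensen-type scheme (1.2); the test equation, starting point, and tolerance are illustrative assumptions.

```python
def derivative_free_newton(f, x0, m=1, tol=1e-12, max_iter=50):
    """Steffensen-type iteration (1.2): the derivative f'(x_n) in Newton's
    method is replaced by the divided difference f[w_n, x_n] of (1.3),
    with w_n = x_n + f(x_n)**m."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx ** m
        divided_diff = (f(w) - fx) / (w - x)   # f[w_n, x_n]
        x = x - fx / divided_diff
    return x

# Illustrative test equation: x**3 + 4*x**2 - 10 = 0, root near 1.36523
print(derivative_free_newton(lambda x: x**3 + 4*x**2 - 10, 1.3))
```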
In Section 2, we present 2-point, 3-point, 4-point and n-point optimal derivative-free methods based
on the Hermite interpolation. The first derivative at each successive iterate is approximated using the Hermite interpolation. Hence, we first construct a two-point with-derivative iterative method and then transform this newly developed two-point iterative method into a derivative-free one. The 3-point, 4-point and n-point methods are transformed forms of existing optimal with-derivative methods based on the Hermite interpolation. We apply the transformation (1.3) in such a way that the optimal order of convergence of the newly developed Hermite-interpolation-based schemes is preserved. Analysis of convergence of the newly developed schemes is also given. In Section 3, we give a numerical comparison of the presented methods with existing methods of the same kind.
satisfying
$$f(x) = H_2(x), \qquad f'(x) = H_2'(x), \qquad f(y) = H_2(y). \qquad (2.2)$$
By using (2.2) we get
$$a_0 = f(x), \qquad (2.3)$$
and
$$f'(x_n) = f'(\alpha)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3\right] + O(e_n^4).$$
By substituting the above expressions in the first step of (2.7), we get
$$f(y_n) = c_2 f'(\alpha) e_n^2 + (2c_3 - 2c_2^2) f'(\alpha) e_n^3 + (3c_4 - 7c_3 c_2 + 4c_2^3) f'(\alpha) e_n^4 + O(e_n^5).$$
Hence, by using the above expressions in the second step of (2.7), we obtain its error equation.
Now, we intend to transform the optimal two-point method (2.7) into a derivative-free one by using (1.3) so that the optimal order is preserved.
By using the approximation given in (1.3), the scheme (2.7) transforms into
$$y_n = x_n - \frac{f(x_n)}{f[w_n, x_n]}, \qquad w_n = x_n + f(x_n)^2, \quad n \ge 0,$$
$$x_{n+1} = y_n - \frac{f(y_n)}{2 f[y_n, x_n] - f[w_n, x_n]}, \qquad (2.9)$$
where $H_2'(y_n)$ is given by (2.6). Similar to Theorem 2.1, it can easily be shown that the scheme (2.9) has optimal order four.
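The following Python sketch (not part of the original paper) implements the transformed two-point scheme (2.9); the test equation, initial guess, and stopping tolerance are illustrative assumptions.

```python
import math

def two_point_derivative_free(f, x0, tol=1e-14, max_iter=25):
    """Scheme (2.9): an optimal fourth-order derivative-free iteration using
    three new function evaluations per step: f(x_n), f(w_n), f(y_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx ** 2                      # w_n = x_n + f(x_n)^2
        if w == x:                           # step below machine resolution
            break
        fwx = (f(w) - fx) / (w - x)          # f[w_n, x_n]
        y = x - fx / fwx                     # first step of (2.9)
        fy = f(y)
        fyx = (fy - fx) / (y - x)            # f[y_n, x_n]
        x = y - fy / (2.0 * fyx - fwx)       # second step of (2.9)
    return x

# Illustrative test equation: cos(x) - x = 0, root near 0.739085
print(two_point_derivative_free(lambda x: math.cos(x) - x, 0.5))
```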
The derivative approximations used in the higher-order steps below are given by
$$K_3'(z) = f[z, x]\left(2 + \frac{z - x}{z - y}\right) - \frac{(z - x)^2}{(y - x)(z - y)}\, f[y, x] + \frac{z - y}{y - x}\, f[w, x],$$
$$K_4'(t) = f[t, z] + (t - z) f[t, z, y] + (t - z)(t - y) f[t, z, y, x] + (t - z)(t - y)(t - x) f[t, z, y, x, w], \qquad (2.15)$$
and the error equation of the resulting sixteenth-order scheme (2.14) is
$$e_{n+1} = -c_2^4\,(-c_3 + c_2^2)^2\,(c_2^3 - c_3 c_2 + c_4)\,(-c_2^4 + c_3 c_2^2 + c_1^4 c_2 - c_4 c_2 + c_5)\, e_n^{16} + O(e_n^{17}),$$
where $c_j = \dfrac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}$, $j \ge 2$, and $e_n = x_n - \alpha$.
Proof. By using Taylor expansions, the proof is similar to those already given in [11]; hence it is omitted.
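For concreteness, the divided differences and the derivative approximations $K_3'$ and $K_4'$ of (2.15) can be sketched in Python as below (not part of the original paper). For simplicity these helpers re-evaluate f at the nodes, whereas the methods themselves reuse already computed function values.

```python
def divided_difference(f, *nodes):
    """Newton divided difference f[nodes[0], ..., nodes[-1]] (recursive form)."""
    if len(nodes) == 1:
        return f(nodes[0])
    return (divided_difference(f, *nodes[1:])
            - divided_difference(f, *nodes[:-1])) / (nodes[-1] - nodes[0])

def K3_prime(f, z, y, x, w):
    """Derivative approximation K_3'(z) from (2.15)."""
    return (divided_difference(f, z, x) * (2 + (z - x) / (z - y))
            - (z - x) ** 2 / ((y - x) * (z - y)) * divided_difference(f, y, x)
            + (z - y) / (y - x) * divided_difference(f, w, x))

def K4_prime(f, t, z, y, x, w):
    """Derivative approximation K_4'(t) from (2.15)."""
    return (divided_difference(f, t, z)
            + (t - z) * divided_difference(f, t, z, y)
            + (t - z) * (t - y) * divided_difference(f, t, z, y, x)
            + (t - z) * (t - y) * (t - x) * divided_difference(f, t, z, y, x, w))
```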
In general, an n-point derivative-free scheme of this type can be written as
$$\begin{aligned}
\phi_1(x_k) &= x_k - \frac{f(x_k)}{f[z_k, x_k]}, \qquad z_k = x_k + f(x_k)^m, \quad m \ge n \ge 0,\\
\phi_2(x_k) &= \Psi_f(x, \phi_1(x_k)), \qquad \Psi_f \in \Psi_4,\\
\phi_3(x_k) &= \phi_2(x_k) - \frac{f(\phi_2(x_k))}{K_3'(\phi_2(x_k))},\\
&\;\;\vdots\\
x_{k+1} = \phi_n(x_k) &= \phi_{n-1}(x_k) - \frac{f(\phi_{n-1}(x_k))}{K_n'(\phi_{n-1}(x_k))},
\end{aligned} \qquad (2.16)$$
where $\Psi_4$ is any fourth-order iterative method and $K_n$ is the Hermite interpolating polynomial of degree $n$ given by
$$K_n(x) = a_0 + a_1\,(f(x) - f(x_k)) + a_2\,(f(x) - f(x_k))^2 + \cdots + a_n\,(f(x) - f(x_k))^n. \qquad (2.17)$$
By the use of conditions (2.18) and implementing scheme (2.9) at the second step, the coefficients $a_0, a_1, \ldots, a_n$ can be determined easily, and hence we obtain an n-point derivative-free method of the following form:
$$\begin{aligned}
\phi_1(x_k) &= x_k - \frac{f(x_k)}{f[w_k, x_k]}, \qquad w_k = x_k + f(x_k)^m, \quad m \ge n \ge 0,\\
\phi_2(x_k) &= \phi_1(x_k) - \frac{f(\phi_1(x_k))}{K_2'(\phi_1(x_k))},\\
\phi_3(x_k) &= \phi_2(x_k) - \frac{f(\phi_2(x_k))}{K_3'(\phi_2(x_k))},\\
&\;\;\vdots\\
x_{k+1} = \phi_n(x_k) &= \phi_{n-1}(x_k) - \frac{f(\phi_{n-1}(x_k))}{K_n'(\phi_{n-1}(x_k))}.
\end{aligned} \qquad (2.19)$$
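To make the recursion in (2.19) concrete, here is a Python sketch of its four-point instance (n = 4, m = 4, order sixteen), not taken from the paper. It reuses the K3_prime and K4_prime helpers sketched after (2.15) and assumes, as in (2.9), that $K_2'$ reduces to $2f[\phi_1, x_k] - f[w_k, x_k]$; the test equation and starting point are illustrative choices.

```python
import math

def four_point_derivative_free(f, x0, m=4, tol=1e-14, max_iter=10):
    """Four-point instance of scheme (2.19). Conceptually it uses five new
    function evaluations per iteration; this sketch lets the divided-difference
    helpers re-evaluate f at the stored nodes instead of caching values."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + fx ** m                              # w_k = x_k + f(x_k)^m
        if w == x:                                   # step below machine resolution
            break
        fwx = (f(w) - fx) / (w - x)                  # f[w_k, x_k]
        y = x - fx / fwx                             # phi_1
        fy = f(y)
        fyx = (fy - fx) / (y - x)                    # f[y_k, x_k]
        z = y - fy / (2.0 * fyx - fwx)               # phi_2, second step of (2.9)
        t = z - f(z) / K3_prime(f, z, y, x, w)       # phi_3
        x = t - f(t) / K4_prime(f, t, z, y, x, w)    # phi_4 = x_{k+1}
    return x

# Illustrative test equation: cos(x) - x = 0, root near 0.739085
print(four_point_derivative_free(lambda x: math.cos(x) - x, 0.5))
```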
3. Numerical results
In this section, we first transform an optimal four-point method of Sharifi et al. [7], denoted by SL16,
which is given by
$$\begin{aligned}
y_n &= x_n - \frac{f(x_n)}{f'(x_n)}, \quad n \ge 0,\\
r_n &= y_n - L_1(u_n)\,\frac{f(y_n)}{f'(x_n)},\\
s_n &= r_n - L_2(u_n, v_n, w_n)\,\frac{f(r_n)}{f'(x_n)},\\
x_{n+1} &= s_n - L_3(u_n, v_n, w_n, p_n, q_n, t_n)\,\frac{f(s_n)}{f'(x_n)},
\end{aligned} \qquad (3.1)$$
where $L_1$, $L_2$ and $L_3$ are the weight functions defined in [7]. Applying the transformation (1.3) with $m = 4$, the transformed derivative-free method, denoted by MSL16, becomes
$$\begin{aligned}
y_n &= x_n - \frac{f(x_n)}{f[w_n, x_n]}, \qquad w_n = x_n + f(x_n)^4, \quad n \ge 0,\\
r_n &= y_n - L_1(u_n)\,\frac{f(y_n)}{f[w_n, x_n]},\\
s_n &= r_n - L_2(u_n, v_n, w_n)\,\frac{f(r_n)}{f[w_n, x_n]},\\
x_{n+1} &= s_n - L_3(u_n, v_n, w_n, p_n, q_n, t_n)\,\frac{f(s_n)}{f[w_n, x_n]}.
\end{aligned}$$
We now test all the discussed methods using a number of nonlinear equations. We employ multiprecision arithmetic with 4000 significant decimal digits in the programming package Maple 16 to obtain high accuracy and avoid loss of significant digits. We compare the convergence behavior of the modified methods MSL16, MFM16 (the transformed scheme (2.14)) and MSSS16 with their with-derivative versions, i.e., SL16, the optimal four-point method of Fiza et al. (FM16) [11] and the optimal sixteenth-order class of Soleymani et al. (SSS16) [8], respectively, using the nonlinear functions given in Table 1. Table 1 also includes the exact root $\alpha$ and the initial approximation $x_0$, which are calculated using Maple 16. The error $|x_n - \alpha|$ and the computational order of convergence (coc) for the first three iterations of the various methods are displayed in Tables 2–6, which support the theoretical order of convergence. The formula to compute the computational order of convergence (coc) is given by
$$\mathrm{coc} \approx \frac{\log\left|\,(x_{n+1} - \alpha)/(x_n - \alpha)\,\right|}{\log\left|\,(x_n - \alpha)/(x_{n-1} - \alpha)\,\right|}.$$
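The coc formula can be evaluated directly from three successive iterates, as in the following Python sketch (not from the paper); the sample iterates are made-up values exhibiting roughly second-order behaviour, used only to show the call.

```python
import math

def coc(x_prev, x_curr, x_next, alpha):
    """Computational order of convergence from three successive iterates."""
    return (math.log(abs((x_next - alpha) / (x_curr - alpha)))
            / math.log(abs((x_curr - alpha) / (x_prev - alpha))))

# Made-up iterates around alpha = 1.0 with errors 1e-2, 1e-4, 1e-8
print(coc(1.0 + 1e-2, 1.0 + 1e-4, 1.0 + 1e-8, alpha=1.0))  # approx 2.0
```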
It can be seen from Tables 1–6 that, for the presented examples, the modified four-point methods MSL16, MFM16 and MSSS16 are comparable and competitive with the methods SL16, FM16 and SSS16.
3.1. Conclusions
We applied the transformation technique of Cordero and Torregrosa [1] to transform n-point optimal methods based on the Hermite interpolation into derivative-free optimal methods. Some existing derivative-based methods are also modified using this transformation. Convergence analysis is performed for the transformed methods. Finally, numerical examples are supplied and a comparison is provided, which supports the theoretical results. It can be seen that the modified derivative-free methods can compete with, and even work better than, their with-derivative versions. In particular, in the case of Example 5, the derivative-involved methods SL16 and SSS16 fail to converge to any root, while their transformed derivative-free forms converge even when the initial guess is taken far from the required root.
References
[1] A. Cordero, J. R. Torregrosa, Low-complexity root-finding iteration functions with no derivatives of any order of convergence, J. Comput. Appl. Math., 275 (2015), 502–515.
[2] Y. H. Geum, Y. I. Kim, A biparametric family of optimally convergent sixteenth-order multipoint methods with their fourth-step weighting function as a sum of a rational and a generic two-variable function, J. Comput. Appl. Math., 235 (2011), 3178–3188.
[3] H. T. Kung, J. F. Traub, Optimal order of one-point and multipoint iteration, J. Assoc. Comput. Mach., 21 (1974), 643–651.
[4] A. M. Ostrowski, Solution of equations and systems of equations, Academic Press, New York, (1960).
[5] M. S. Petković, B. Neta, L. D. Petković, J. Džunić, Multipoint methods for solving nonlinear equations, Elsevier, Amsterdam, (2013).
[6] M. S. Petković, L. D. Petković, Families of optimal multipoint methods for solving nonlinear equations: A survey, Appl. Anal. Discrete Math., 4 (2010), 1–22.
[7] S. Sharifi, M. Salimi, S. Siegmund, T. Lotfi, A new class of optimal four-point methods with convergence order 16 for solving nonlinear equations, Math. Comput. Simulation, 119 (2016), 69–90.
[8] F. Soleymani, S. Shateyi, H. Salmani, Computing simple roots by an optimal sixteenth-order class, J. Appl. Math., 2012 (2012), 13 pages.
[9] J. F. Steffensen, Remarks on iteration, Scandinavian Actuarial J., 16 (1933), 64–72.
[10] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, (1964).
[11] F. Zafar, N. Hussain, Z. Fatima, A. Kharal, Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots, Sci. World J., 2014 (2014), 18 pages.
[12] F. Zafar, N. Yasmin, S. Akram, M. D. Junjua, A general class of derivative free optimal root finding methods based on rational interpolation, Sci. World J., 2015 (2015), 12 pages.