Extending the Applicability of the Super-Halley-Like Method Using ω-Continuous Derivatives and Restricted Convergence Domains
DOI: 10.2478/amsil-2018-0008
1. Introduction
In this paper we are concerned with the problem of approximating a locally unique solution $x^*$ of the equation
$$F(x) = 0,$$
where $F : \Omega \subset B_1 \to B_2$ is a Fréchet-differentiable operator between Banach spaces $B_1$, $B_2$ and $\Omega$ is a nonempty convex set. In particular, we study the local convergence analysis of the super-Halley method defined for each $n = 0, 1, 2, \ldots$ by
(1.1) $y_n = x_n - F'(x_n)^{-1}F(x_n)$, $\quad x_{n+1} = y_n - \frac{1}{2}L_n(I - L_n)^{-1}F'(x_n)^{-1}F(x_n)$,
where $x_0$ is an initial point and $L_n = F'(x_n)^{-1}F''(x_n)F'(x_n)^{-1}F(x_n)$.
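To make the iteration concrete, the following Python sketch shows one possible realization of method (1.1) for a system in $\mathbb{R}^m$; the function names, the toy test system and the stopping rule are our own illustrative choices and are not part of the analysis in this paper.

```python
import numpy as np

def super_halley(F, dF, d2F, x0, tol=1e-12, max_iter=50):
    """Illustrative sketch of method (1.1) in R^m.

    F   : callable returning the residual vector F(x)
    dF  : callable returning the Jacobian F'(x) as an (m, m) matrix
    d2F : callable returning F''(x) as an (m, m, m) tensor, so the bilinear
          operator applied to v is np.tensordot(d2F(x), v, axes=([2], [0]))
    """
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = dF(x)
        step = np.linalg.solve(J, Fx)                 # F'(x)^{-1} F(x)
        # L = F'(x)^{-1} F''(x) F'(x)^{-1} F(x): contract F''(x) with the
        # Newton correction, then solve with the Jacobian again.
        L = np.linalg.solve(J, np.tensordot(d2F(x), step, axes=([2], [0])))
        # x_{n+1} = y_n - (1/2) L (I - L)^{-1} F'(x)^{-1} F(x)
        x = x - step - 0.5 * (L @ np.linalg.solve(I - L, step))
    return x

# Toy system F(x) = (x1^2 - 2, x2^2 - 3) with solution (sqrt 2, sqrt 3).
F = lambda x: np.array([x[0]**2 - 2.0, x[1]**2 - 3.0])
dF = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 2.0 * x[1]]])

def d2F(x):
    T = np.zeros((2, 2, 2))
    T[0, 0, 0] = T[1, 1, 1] = 2.0   # only the diagonal second derivatives
    return T

print(super_halley(F, dF, d2F, [1.0, 1.0]))  # approx. [1.41421356 1.73205081]
```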
The efficiency and importance of method (1.1) were discussed in [4–9, 14]. The study of the convergence of iterative algorithms is usually centered on two categories: semi-local and local convergence analysis. The semi-local convergence analysis is based on the information around an initial point to obtain conditions ensuring the convergence of these algorithms, while the local convergence analysis is based on the information around a solution to find estimates of the radii of the convergence balls. Local results are important, since they indicate the degree of difficulty in choosing initial points. The local convergence of method (1.1) was shown using hypotheses given in non-affine invariant form by:
(C1) $F : \Omega \subset B_1 \to B_2$ is a thrice continuously differentiable operator.
(C2) There exists $x_0 \in \Omega$ such that $F'(x_0)^{-1} \in L(B_2, B_1)$ and $\|F'(x_0)^{-1}\| \le \beta$.
There exist $\eta \ge 0$, $\beta_1 \ge 0$, $\beta_2 \ge 0$ and $\beta_3 \ge 0$ such that
(C3) $\|F'(x_0)^{-1}F(x_0)\| \le \eta$,
(C4) $\|F''(x)\| \le \beta_1$ for each $x \in \Omega$,
(C5) $\|F'''(x)\| \le \beta_2$ for each $x \in \Omega$,
(C6) $\|F'''(x) - F'''(y)\| \le \beta_3\|x - y\|$ for each $x, y \in \Omega$.
The hypotheses for the local convergence analysis of these methods are the same, but with $x_0$ replaced by $x^*$. Notice, however, that hypotheses (C5) and (C6) limit the applicability of these methods. As a motivational example, let us define the function $f$ on $\Omega = [-\frac{1}{2}, \frac{5}{2}]$ by
$$f(x) = \begin{cases} x^3\ln x^2 + x^5 - x^4, & x \ne 0,\\ 0, & x = 0. \end{cases}$$
Then $x^* = 1$ is a solution, and
$$f'''(x) = 6\ln x^2 + 60x^2 - 24x + 22$$
is unbounded on $\Omega$, so hypotheses (C5) and (C6) cannot hold.
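A quick numerical check (a sketch; it only evaluates the closed form of $f'''$ computed above) confirms the blow-up near the origin:

```python
import numpy as np

# f'''(x) = 6 ln x^2 + 60 x^2 - 24 x + 22 is unbounded on [-1/2, 5/2]:
# the logarithmic term diverges to -infinity as x -> 0, so no beta_2 as
# in (C5) can exist.
f3 = lambda x: 6 * np.log(x**2) + 60 * x**2 - 24 * x + 22
for x in (1e-1, 1e-3, 1e-6):
    print(x, f3(x))
```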
2. Local Convergence Analysis

Let $w_0 : [0, +\infty) \to [0, +\infty)$ be a continuous and increasing function with $w_0(0) = 0$, and define
(2.1) $r_0 = \sup\{t \ge 0 : w_0(t) < 1\}$.
Let also $w : [0, r_0) \to [0, +\infty)$, $v : [0, r_0) \to [0, +\infty)$ and $v_1 : [0, r_0) \to [0, +\infty)$ be continuous and increasing functions with $w(0) = 0$. Moreover, define functions $g_1$ and $h_1$ on the interval $[0, r_0)$ by
$$g_1(t) = \frac{\int_0^1 w((1 - \theta)t)\,d\theta}{1 - w_0(t)}$$
and
$$h_1(t) = g_1(t) - 1.$$
Let $p : [0, r_0) \to [0, +\infty)$ be a continuous and increasing function majorizing $\|L_n\|$ (see (2.18) below), and define
$$h_p(t) = p(t) - 1$$
and, with
$$g_2(t) = g_1(t) + \frac{p(t)\int_0^1 v(\theta t)\,d\theta}{2(1 - p(t))(1 - w_0(t))},$$
$$h_2(t) = g_2(t) - 1.$$
Suppose that the functions $h_1$ and $h_2$ have smallest zeros $r_1$ and $r_2$, respectively, in the interval $(0, r_0)$, and define the radius of convergence $r$ by
(2.2) $r = \min\{r_1, r_2\}$.
Let $U(v, \rho)$ and $\bar U(v, \rho)$ stand, respectively, for the open and closed balls in $B_1$ with center $v \in B_1$ and of radius $\rho > 0$.
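Given concrete choices of $w_0$ and $w$, the radius $r_1$ can be computed numerically as the smallest zero of $h_1$. The sketch below is our own illustration (the linear majorants at the end are sample inputs, not taken from the paper) and uses SciPy's root bracketing:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def radius_r1(w0, w, r0):
    """Smallest zero of h1(t) = g1(t) - 1 on (0, r0), per (2.1)-(2.2)."""
    g1 = lambda t: quad(lambda th: w((1.0 - th) * t), 0.0, 1.0)[0] / (1.0 - w0(t))
    # h1(0+) = -1 < 0 and g1(t) -> +infinity as t -> r0, so a zero exists.
    return brentq(lambda t: g1(t) - 1.0, 1e-12, r0 * (1.0 - 1e-9))

# Sample linear majorants w0(t) = L0 t, w(t) = L t give r1 = 2/(2 L0 + L).
L0, L = 2.0, 3.0
print(radius_r1(lambda t: L0 * t, lambda t: L * t, r0=1.0 / L0))  # 0.28571...
print(2.0 / (2.0 * L0 + L))                                       # 0.28571...
```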
Next, we present the local convergence analysis of method (1.1) using the
preceding notation.
Theorem 2.1. Suppose that the conditions (2.3)–(2.9) hold and $\bar U(x^*, r) \subseteq \Omega$, where the radii $r_0$ and $r$ are defined by (2.1) and (2.2), respectively. Then, the sequence $\{x_n\}$ starting from $x_0 \in U(x^*, r) \setminus \{x^*\}$ and given by (1.1) is well defined, remains in $U(x^*, r)$ and converges to $x^*$. Moreover, the following estimates hold:
(2.10) $\|y_n - x^*\| \le g_1(\|x_n - x^*\|)\|x_n - x^*\| \le \|x_n - x^*\| < r$
and
(2.11) $\|x_{n+1} - x^*\| \le g_2(\|x_n - x^*\|)\|x_n - x^*\| \le \|x_n - x^*\|$.
Proof. We shall show estimates (2.10) and (2.11) using induction on the integer $k$. Let $x \in U(x^*, r)$. Using (2.1), (2.2) and (2.5), we get that
(2.13) $\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le w_0(\|x - x^*\|) \le w_0(r) < 1$.
In view of (2.13) and the Banach lemma on invertible operators ([1, 4, 15, 16]), we have that $F'(x)^{-1} \in L(B_2, B_1)$ and
(2.14) $\|F'(x)^{-1}F'(x^*)\| \le \dfrac{1}{1 - w_0(\|x - x^*\|)}$.
Using (2.2), (2.3) (for $i = 1$), (2.7), (2.14) and (2.15), we obtain in turn that
$$\|y_0 - x^*\| = \|x_0 - x^* - F'(x_0)^{-1}F(x_0)\| \le g_1(\|x_0 - x^*\|)\|x_0 - x^*\| \le \|x_0 - x^*\| < r,$$
which shows (2.10) for $n = 0$ and $y_0 \in U(x^*, r)$. Next, we must show that $(I - L_0)^{-1} \in L(B_2, B_1)$. By (2.2), (2.8), (2.9) and (2.14) we have in turn that
$$\|L_0\| \le p(\|x_0 - x^*\|) \le p(r) < 1,$$
so
(2.18) $\|(I - L_0)^{-1}\| \le \dfrac{1}{1 - p(\|x_0 - x^*\|)}$.
We also have that x1 is well defined by the second substep of method (1.1)
and (2.18). Moreover, by (2.2), (2.3) (for i = 2), (2.14), (2.16)-(2.19), we get
in turn that
$$\begin{aligned}
\|x_1 - x^*\| &= \big\|x_0 - x^* - F'(x_0)^{-1}F(x_0) - \tfrac{1}{2}L_0(I - L_0)^{-1}F'(x_0)^{-1}F(x_0)\big\| \\
&\le \|x_0 - x^* - F'(x_0)^{-1}F(x_0)\| + \tfrac{1}{2}\|L_0\|\,\|(I - L_0)^{-1}\|\,\|F'(x_0)^{-1}F'(x^*)\|\,\|F'(x^*)^{-1}F(x_0)\| \\
&\le g_1(\|x_0 - x^*\|)\|x_0 - x^*\| + \frac{p(\|x_0 - x^*\|)\int_0^1 v(\theta\|x_0 - x^*\|)\,d\theta\,\|x_0 - x^*\|}{2(1 - p(\|x_0 - x^*\|))(1 - w_0(\|x_0 - x^*\|))} \\
&= g_2(\|x_0 - x^*\|)\|x_0 - x^*\| < \|x_0 - x^*\| < r,
\end{aligned}$$
which shows (2.11) for $n = 0$ and $x_1 \in U(x^*, r)$. The induction is completed if we simply replace $x_0, y_0, x_1$ by $x_k, y_k, x_{k+1}$, respectively, in the preceding estimates. Furthermore, from the estimate
$$\|x_{k+1} - x^*\| \le c\|x_k - x^*\| < r, \qquad c = g_2(\|x_0 - x^*\|) \in [0, 1),$$
we deduce that $\lim_{k \to \infty} x_k = x^*$ and $x_{k+1} \in U(x^*, r)$. $\square$
Remark 2.2.
(a) Let $w_0(t) = L_0 t$, $w(t) = Lt$ and $w_*(t) = L_* t$ ($w_*$ replacing $w$ in (2.7)). In [3], Argyros and Ren used, instead of (2.7), the condition
(2.20) $\|F'(x^*)^{-1}(F'(x) - F'(y))\| \le L_*\|x - y\|$ for each $x, y \in \Omega$,
but then
$$L \le L_* \quad \text{and} \quad L_0 \le L_*.$$
The advantages are obtained under the same computational cost as before, since in practice the computation of the constant $L_*$ requires the computation of $L_0$ and $L$ as special cases. In the literature (with the exception of our works), (2.20) is only used for the computation of the upper bounds on the inverses of the operators involved.
(b) The radius $r_A$ was obtained by Argyros in [4] as the convergence radius for Newton's method under the conditions (2.3)–(2.7). Notice that the convergence radius for Newton's method obtained independently by Rheinboldt ([15]) and Traub ([16]) is
$$\rho = \frac{2}{3L_*} < r_A = \frac{2}{2L_0 + L}.$$
Moreover, the new error bounds are
$$\|x_{n+1} - x^*\| \le \frac{L}{1 - L_0\|x_n - x^*\|}\,\|x_n - x^*\|^2,$$
instead of the less precise old bounds
$$\|x_{n+1} - x^*\| \le \frac{L}{1 - L\|x_n - x^*\|}\,\|x_n - x^*\|^2.$$
Clearly, the new error bounds are more precise if $L_0 < L$. Moreover, the radius of convergence of method (1.1) given by $r$ is smaller than $r_A$ (see (2.2)); a numerical comparison of $\rho$ and $r_A$ is sketched after this remark.
(c) The local results can be used for projection methods such as Arnoldi's method, the generalized minimum residual method (GMREM) and the generalized conjugate method (GCM) for combined Newton/finite projection methods, as well as in connection with the mesh independence principle, in order to develop the cheapest and most efficient mesh refinement strategy ([4–7]).
(d) The results can also be used to solve equations where the operator $F'$ satisfies the autonomous differential equation ([5, 7])
$$F'(x) = T(F(x)),$$
where $T$ is a known continuous operator. Since $F'(x^*) = T(F(x^*)) = T(0)$, we can apply the results without actually knowing the solution $x^*$.
(e) The convergence order can be verified numerically, without using higher-order derivatives, through the computational order of convergence
$$\xi = \frac{\ln\frac{\|x_{n+2} - x_{n+1}\|}{\|x_{n+1} - x_n\|}}{\ln\frac{\|x_{n+1} - x_n\|}{\|x_n - x_{n-1}\|}}, \quad \text{for each } n = 1, 2, \ldots$$
(see the numerical sketch after this remark).
(f) Note that, since $\|F'(x^*)^{-1}F'(x)\| \le 1 + w_0(\|x - x^*\|)$, in the case $w_0(t) = L_0 t$ one can choose
$$v(t) = 1 + L_0 t \quad \text{or} \quad v(t) = M = 2,$$
because $1 + L_0 t \le 2$ for each $t \in [0, 1/L_0)$.
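As an illustration of items (b) and (e), the following sketch (with made-up sample constants and iterates; nothing here is data from the paper) compares the two radii and estimates the computational order of convergence:

```python
import numpy as np

# Item (b): for any L0 < L <= L*, the radius r_A exceeds the classical rho.
L0, L, Lstar = 1.0, 2.0, 3.0          # sample values only
rho = 2.0 / (3.0 * Lstar)             # Rheinboldt/Traub radius
rA = 2.0 / (2.0 * L0 + L)             # Argyros radius
print(rho, rA, rho < rA)              # 0.2222... 0.5 True

# Item (e): computational order of convergence from consecutive differences.
def coc(x_prev, x_cur, x_next, x_next2):
    e0 = np.linalg.norm(np.subtract(x_cur, x_prev))
    e1 = np.linalg.norm(np.subtract(x_next, x_cur))
    e2 = np.linalg.norm(np.subtract(x_next2, x_next))
    return np.log(e2 / e1) / np.log(e1 / e0)

# Scalar iterates converging quadratically to 0 give xi close to 2.
print(coc(1e-1, 1e-2, 1e-4, 1e-8))
```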
3. Semi-Local Convergence

The following conditions (H), in non-affine invariant form, have been used to show the semi-local convergence of method (1.1) ([5, 14]):
(H1) $\|F'(x_0)^{-1}\| \le \beta$.
(H2) $\|F'(x_0)^{-1}F(x_0)\| \le \eta$.
(H3) $\|F''(x)\| \le M$ for each $x \in \Omega$.
(H4) There exists a continuous and increasing function $\varphi : [0, +\infty) \to [0, +\infty)$ with $\varphi(0) = 0$ such that $\|F''(x) - F''(y)\| \le \varphi(\|x - y\|)$ for each $x, y \in \Omega$.
(H5) There exists a continuous and increasing function $\psi : [0, 1] \to [0, +\infty)$ such that $\varphi(ts) \le \psi(t)\varphi(s)$ for each $t \in [0, 1]$ and $s \in [0, +\infty)$.
(H6) $\bar U(x_0, \bar r) \subseteq \Omega$, where $\bar r$ is a positive zero of some scalar equation.
In many applications the iterates $\{x_n\}$ remain in a subset $\Omega_0$ of $\Omega$. If we locate $\Omega_0$ and then determine $M$, $\varphi$ and $\psi$ on $\Omega_0$ rather than on $\Omega$, the resulting semi-local convergence criteria will be weaker. Consequently, the new convergence domain will be at least as large as the one obtained using $\Omega$. To achieve this goal we consider the weaker conditions (A), in affine invariant form:
(A1) $\|F'(x_0)^{-1}F(x_0)\| \le \eta$.
(A2) There exists a continuous and increasing function $w_0 : [0, +\infty) \to [0, +\infty)$ with $w_0(0) = 0$ such that for each $x \in \Omega$
$$\|F'(x_0)^{-1}(F'(x) - F'(x_0))\| \le w_0(\|x - x_0\|).$$
Set $r_0 = \sup\{t \ge 0 : w_0(t) < 1\}$ and $\Omega_0 = \Omega \cap U(x_0, r_0)$.
(A3) There exist continuous and increasing functions $w, w_1, w_2 : [0, r_0) \to [0, +\infty)$ (majorizing, in affine invariant form, $\|F'(x_0)^{-1}F''(x)\|$ and $\|F'(x_0)^{-1}(F''(x) - F''(y))\|$ on $\Omega$ and on $\Omega_0$, respectively). Define, for each $t \in [0, r_0)$,
$$q(t) = \frac{w(t)\eta}{(1 - w_0(t))^2},$$
$$d(t) = \frac{1}{1 - w_0(t)}\Bigg[\frac{\int_0^1 w_2(\theta\eta)(1 - \theta)\,d\theta}{1 - q(t)} + \frac{q(t)\int_0^1 w(t + \theta\eta)(1 - \theta)\,d\theta}{1 - q(t)} + \frac{1}{2}\int_0^1 w(t + \theta\eta)\,d\theta\,\frac{w(t)}{(1 - w_0(t))^2(1 - q(t))} + \frac{1}{2}\int_0^1 w\Big(t + \frac{q(t)}{2(1 - q(t))}\,\theta\eta\Big)(1 - \theta)\,d\theta\,\frac{w(t)}{(1 - w_0(t))^2(1 - q(t))}\Bigg]$$
and
$$c(t) = \Big(1 + \frac{1}{2}\,\frac{q(t)}{1 - q(t)}\Big)d(t)\,t.$$
(A4) The equation $\dfrac{\eta}{1 - c(t)} - t = 0$ has zeros in the interval $(0, r_0)$. Denote by $r$ the smallest such zero. Moreover, the zero $r$ satisfies $q(r) < 1$ and $c(r) < 1$.
(A5) $\bar U(x_0, r) \subseteq \Omega$.
(A6) There exists $r^* \ge r$ such that $\int_0^1 w_0((1 - \theta)r + \theta r^*)\,d\theta < 1$.
For the semi-local convergence analysis of method (1.1) that follows, it is convenient to define the scalar sequences $\{p_n\}$, $\{q_n\}$, $\{s_n\}$, $\{t_n\}$ for each $n = 0, 1, 2, \ldots$ by
(3.1) $t_0 = 0, \quad s_0 = \eta$,
(3.2) $q_n = \dfrac{w(t_n - t_0)(s_n - t_n)}{(1 - w_0(t_n - t_0))^2}$,
(3.3) $t_{n+1} = s_n + \dfrac{1}{2}\,\dfrac{q_n}{1 - q_n}(s_n - t_n)$,
(3.4) $\begin{aligned}[t] s_{n+1} = t_{n+1} + \frac{1}{1 - w_0(t_{n+1} - t_0)}\Bigg[&\frac{\int_0^1 \bar w(\theta(s_n - t_n))(1 - \theta)\,d\theta\,(s_n - t_n)^2}{1 - q_n} \\ &+ \frac{\int_0^1 w(t_n - t_0 + \theta(s_n - t_n))(1 - \theta)\,d\theta\, q_n(s_n - t_n)^2}{1 - q_n} \\ &+ \frac{1}{2}\int_0^1 w(t_n - t_0 + \theta(s_n - t_n))\,d\theta\,\frac{q_n}{1 - q_n}(s_n - t_n)^2 \\ &+ \frac{1}{2}\int_0^1 w\Big(s_n - t_0 + \frac{q_n(s_n - t_n)}{2(1 - q_n)}\,\theta\Big)(1 - \theta)\,d\theta\,\frac{q_n}{1 - q_n}(s_n - t_n)^2\Bigg], \end{aligned}$
where
$$\bar w = \begin{cases} w_1, & n = 0,\\ w_2, & n > 0. \end{cases}$$
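The majorizing recursion (3.1)–(3.4) can be generated numerically. The sketch below follows the reconstruction of (3.2)–(3.4) given above (whose exact coefficients should be checked against the published version), and assumes the caller supplies $w_0, w, w_1, w_2$ and $\eta$ such that $w_0(t) < 1$ and $q_n < 1$ along the iteration.

```python
from scipy.integrate import quad

def majorizing_sequences(w0, w, w1, w2, eta, n_steps=10):
    """Sketch of (3.1)-(3.4) with t0 = 0; validity of the inputs
    (q_n < 1, w0 < 1) is the caller's responsibility."""
    t, s = 0.0, eta                                    # (3.1)
    ts, ss = [t], [s]
    for n in range(n_steps):
        d = s - t
        q = w(t) * d / (1.0 - w0(t)) ** 2              # (3.2)
        t_next = s + 0.5 * q / (1.0 - q) * d           # (3.3)
        wbar = w1 if n == 0 else w2                    # first step uses w1
        I1 = quad(lambda th: wbar(th * d) * (1 - th), 0, 1)[0] * d**2 / (1 - q)
        I2 = quad(lambda th: w(t + th * d) * (1 - th), 0, 1)[0] * q * d**2 / (1 - q)
        I3 = 0.5 * quad(lambda th: w(t + th * d), 0, 1)[0] * q / (1 - q) * d**2
        I4 = 0.5 * quad(lambda th: w(s + q * d / (2 * (1 - q)) * th) * (1 - th),
                        0, 1)[0] * q / (1 - q) * d**2
        s_next = t_next + (I1 + I2 + I3 + I4) / (1.0 - w0(t_next))  # (3.4)
        t, s = t_next, s_next
        ts.append(t)
        ss.append(s)
    return ts, ss
```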
Lemma 3.1. Suppose that method (1.1) is well defined for each $n = 0, 1, 2, \ldots$. Then the following equality holds:
(3.5) $\begin{aligned}[t] F(x_{n+1}) ={}& \int_0^1 [F''(x_n + \theta(y_n - x_n)) - F''(x_n)](1 - \theta)\,d\theta\,(I - L_n)^{-1}(y_n - x_n)^2 \\ &- \int_0^1 F''(x_n + \theta(y_n - x_n))(1 - \theta)\,d\theta\,(I - L_n)^{-1}L_n(y_n - x_n)^2 \\ &+ \int_0^1 F''(x_n + \theta(y_n - x_n))\,d\theta\,(y_n - x_n)(x_{n+1} - y_n) \\ &+ \int_0^1 F''(y_n + \theta(x_{n+1} - y_n))(1 - \theta)\,d\theta\,(x_{n+1} - y_n)^2. \end{aligned}$
Proof. Using method (1.1) and the Taylor series expansion about $y_n \in \Omega$, we can write
$$F(x_{n+1}) = F(y_n) + F'(y_n)(x_{n+1} - y_n) + \int_{y_n}^{x_{n+1}} F''(x)(x_{n+1} - x)\,dx,$$
which, together with the two substeps of method (1.1), gives (3.5) after elementary algebraic manipulations. $\square$
Theorem 3.2. Suppose that the conditions (A) hold. Then, the sequence $\{x_n\}$ generated by method (1.1) is well defined, remains in $\bar U(x_0, r)$ and converges to a solution $x^* \in \bar U(x_0, r)$ of the equation $F(x) = 0$, which is unique in $\bar U(x_0, r^*)$. Moreover, the following estimates hold:
$$\|x_n - x^*\| \le r - t_n.$$
Proof. Let $x \in \bar U(x_0, r)$. Using (A2) and (A4), we get that
(3.6) $\|F'(x_0)^{-1}(F'(x) - F'(x_0))\| \le w_0(\|x - x_0\|) \le w_0(r) < 1$.
It follows from (3.6) and the Banach lemma on invertible operators that $F'(x)^{-1} \in L(B_2, B_1)$ and
(3.7) $\|F'(x)^{-1}F'(x_0)\| \le \dfrac{1}{1 - w_0(\|x - x_0\|)}$.
We also have that $y_0$ is well defined by the first substep of method (1.1), and $L_0$ exists for $n = 0$. In view of the definition of $L_0$, (A1), (A3), (A4), (3.1), (3.2) and (3.7), we get that
(3.8) $\|L_0\| \le q_0 < 1$,
so
(3.9) $\|(I - L_0)^{-1}\| \le \dfrac{1}{1 - q_0}$.
The point $x_1$ is also well defined by the second substep of method (1.1) for $n = 0$. Suppose that
(3.10) $\|y_k - x_k\| \le s_k - t_k$
and
(3.11) $\|x_{k+1} - y_k\| \le t_{k+1} - s_k$
hold for each integer up to $k$. Using Lemma 3.1, (3.4), (3.10) and (3.11), we obtain in turn the estimate
(3.12) $\|F'(x_0)^{-1}F(x_{k+1})\| \le \alpha_{k+1}$,
so
$$\|y_{k+1} - x_{k+1}\| \le \|F'(x_{k+1})^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_{k+1})\| \le \frac{\alpha_{k+1}}{1 - w_0(t_{k+1} - t_0)} = s_{k+1} - t_{k+1}$$
and
$$\|x_{k+2} - y_{k+1}\| \le \frac{1}{2}\|L_{k+1}\|\,\|(I - L_{k+1})^{-1}\|\,\|y_{k+1} - x_{k+1}\| \le \frac{1}{2}\,\frac{q_{k+1}}{1 - q_{k+1}}(s_{k+1} - t_{k+1}) = t_{k+2} - s_{k+1},$$
and the induction for (3.10) and (3.11) is completed. Moreover, we have that
$$s_{k+1} - t_{k+1} \le c(s_k - t_k) \le \cdots \le c^{k+1}\eta, \qquad c = c(r),$$
so
(3.13) $t_{k+2} \le c^{k+1}\eta + \cdots + \eta = \dfrac{1 - c^{k+2}}{1 - c}\,\eta < \dfrac{\eta}{1 - c} = r.$
The scalar sequences $\{t_k\}$, $\{s_k\}$ are increasing and bounded from above by $r$ (see (3.10), (3.11), (3.13) and (3.14)), so they converge to their unique least upper bound $r_1 \le r$. In view of (3.10) and (3.11), the sequences $\{x_k\}$, $\{y_k\}$ are Cauchy in the complete space $B_1$, and as such they converge to some $x^* \in \bar U(x_0, r)$ (since $\bar U(x_0, r)$ is a closed set). By letting $k \to \infty$ in (3.12) we get $F(x^*) = 0$.
Finally, to show uniqueness, let $y^* \in \bar U(x_0, r^*)$ be such that $F(y^*) = 0$, and set $T = \int_0^1 F'(x^* + \theta(y^* - x^*))\,d\theta$. Using (A2) and (A6), we get that
$$\|F'(x_0)^{-1}(T - F'(x_0))\| \le \int_0^1 w_0((1 - \theta)\|x^* - x_0\| + \theta\|y^* - x_0\|)\,d\theta \le \int_0^1 w_0((1 - \theta)r + \theta r^*)\,d\theta < 1,$$
so $T^{-1} \in L(B_2, B_1)$. Then, from the identity
$$0 = F(y^*) - F(x^*) = T(y^* - x^*),$$
we conclude that $x^* = y^*$. $\square$
Remark 3.3.
(a) The conditions (A) are weaker than the conditions (H) used before. Moreover, the third condition in (A3) implies the second condition, but not necessarily vice versa. Consider, instead of (H4), the affine invariant condition
(H4') $\|F'(x_0)^{-1}(F''(x) - F''(y))\| \le \varphi_1(\|x - y\|)$ for each $x, y \in \Omega$,
and choose $w(t) = M_0$. Then, we have that
$$M_0 \le M_1 \quad \text{and} \quad w_2(t) \le \varphi_1(t),$$
since $\Omega_0 \subseteq \Omega$.
(b) Suppose, instead of (H5), that
(H5') $\varphi_1(ts) \le \psi_1(t)\varphi_1(s)$ for each $t \in [0, 1]$ and $s \in [0, +\infty)$.
Then, there exists a continuous and increasing function $\psi_0 : [0, 1] \to [0, +\infty)$ such that
$$w_2(ts) \le \psi_0(t)w_2(s)$$
and
$$\psi_0(t) \le \psi(t).$$
4. Numerical Examples

Example 4.1. Consider the motivational function $f$ defined on $\Omega = [-\frac{1}{2}, \frac{5}{2}]$ in the introduction, with solution $x^* = 1$. The parameters for method (1.1) are
$$r_1 = 0.6667, \qquad r_2 = 0.4384 = r.$$
As already noted in the introduction, earlier results using the (C) conditions cannot be used, since $f'''$ is unbounded on $\Omega$.
Example 4.2. Let $B_1 = B_2 = \mathbb{R}^3$, $\Omega = \bar U(0, 1)$ and $x^* = (0, 0, 0)^T$. Define the function $F$ on $\Omega$ for $w = (x, y, z)^T$ by
$$F(w) = \Big(e^x - 1,\; \frac{e - 1}{2}\,y^2 + y,\; z\Big)^T.$$
The Fréchet derivative is given by
$$F'(v) = \begin{pmatrix} e^x & 0 & 0\\ 0 & (e - 1)y + 1 & 0\\ 0 & 0 & 1 \end{pmatrix}.$$
Notice that, using the conditions (2.5)–(2.9), we get $w_0(t) = (e - 1)t$, $w(t) = e^{\frac{1}{e - 1}}t$, $v_1(t) = v(t) = e^{\frac{1}{e - 1}}$. The parameters for method (1.1) are
$$r_1 = 0.3827, \qquad r_2 = 0.1262 = r.$$
Under the old approach ([2, 3, 5–8, 12, 15, 16]) we have $\tilde w_0(t) = \tilde w(t) = et$ and $\tilde v_1(t) = \tilde v(t) = e$. The old parameters for method (1.1) are smaller; for instance, $\tilde r_1 = \frac{2}{3e} \approx 0.2453 < 0.3827 = r_1$. Here $\tilde g_1, \tilde g_2$ are $g_1, g_2$ with $\tilde w_0(t) = \tilde w(t) = et$, $\tilde v_1(t) = \tilde v(t) = e$. Table 1 gives the comparison of $g_1(\|x_n - x^*\|)$, $g_2(\|x_n - x^*\|)$, $\tilde g_1(\|x_n - x^*\|)$ and $\tilde g_2(\|x_n - x^*\|)$.
Table 1. Comparison table

$n$ | $g_1(\|x_n - x^*\|)$ | $g_2(\|x_n - x^*\|)$ | $\tilde g_1(\|x_n - x^*\|)$ | $\tilde g_2(\|x_n - x^*\|)$
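As a check, the value $r_1 = 0.3827$ of this example can be reproduced numerically with a sketch along the lines of the one given after (2.2) (the bisection bracket and tolerance are our own choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

e = np.e
w0 = lambda t: (e - 1.0) * t                 # w0(t) = (e - 1) t
w = lambda t: e ** (1.0 / (e - 1.0)) * t     # w(t) = e^{1/(e-1)} t
g1 = lambda t: quad(lambda th: w((1 - th) * t), 0, 1)[0] / (1.0 - w0(t))
r0 = 1.0 / (e - 1.0)                         # w0(r0) = 1
r1 = brentq(lambda t: g1(t) - 1.0, 1e-12, r0 * (1 - 1e-9))
print(round(r1, 4))                          # 0.3827
```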
Example 4.3. Let $B_1 = B_2 = C[0, 1]$, the space of continuous functions defined on $[0, 1]$ equipped with the max norm, and let $\Omega = \bar U(0, 1)$. Define the function $F$ on $\Omega$ by
$$F(\varphi)(x) = \varphi(x) - 5\int_0^1 x\theta\varphi(\theta)^3\,d\theta.$$
We have that
$$F'(\varphi(\xi))(x) = \xi(x) - 15\int_0^1 x\theta\varphi(\theta)^2\xi(\theta)\,d\theta, \quad \text{for each } \xi \in \Omega.$$
Then, we get that $x^* = 0$, $w_0(t) = 7.5t$, $w(t) = 15t$, $v(t) = 15$, $v_1(t) = 30$. The parameters for method (1.1) are
$$r_1 = 0.0667, \qquad r_2 = 0.0021 = r.$$
Under the old approach, $\tilde w_0(t) = \tilde w(t) = 15t$, $\tilde v(t) = 15$, $\tilde v_1(t) = 30$, and the old parameters for method (1.1) are smaller; for instance, $\tilde r_1 = \frac{2}{45} \approx 0.0444 < 0.0667 = r_1$.
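Since both majorants in this example are linear, the same check reduces to the closed form $r_1 = 2/(2L_0 + L)$ noted in Remark 2.2(b):

```python
# Linear majorants w0(t) = 7.5 t, w(t) = 15 t: r1 = 2/(2*L0 + L).
L0, L = 7.5, 15.0
print(2.0 / (2.0 * L0 + L))   # 0.0667 = r1, matching Example 4.3
```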
References
[1] Argyros I.K., On the Newton–Kantorovich hypothesis for solving equations, J. Comput.
Appl. Math. 169 (2004), 315–332.
[2] Argyros I.K., Ezquerro J.A., Gutiérrez J.M., Hernández M.A., Hilout S., On the semilo-
cal convergence of efficient Chebyshev-Secant-type methods, J. Comput. Appl. Math.
235 (2011), 3195–3206.
[3] Argyros I.K., Ren H., Efficient Steffensen-type algorithms for solving nonlinear equa-
tions, Int. J. Comput. Math. 90 (2013), 691–704.
[4] Argyros I.K., Computational Theory of Iterative Methods, Studies in Computational
Mathematics, 15, Elsevier B.V., New York, 2007.
[5] Ezquerro J.A., Hernández M.A., An optimization of Chebyshev’s method, J. Complex-
ity 25 (2009), 343–361.
[6] Ezquerro J.A., Grau A., Grau-Sánchez M., Hernández M.A., Construction of deri-
vative-free iterative methods from Chebyshev’s method, Anal. Appl. (Singap.) 11
(2013), 1350009, 16 pp.
[7] Ezquerro J.A., Gutiérrez J.M., Hernández M.A., Salanova M.A., Chebyshev-like meth-
ods and quadratic equations, Rev. Anal. Numér. Théor. Approx. 28 (1999), 23–35.
[8] Grau M., Díaz-Barrero J.L., An improvement of the Euler–Chebyshev iterative method,
J. Math. Anal. Appl. 315 (2006), 1–7.
[9] Grau-Sánchez M., Gutiérrez J.M., Some variants of the Chebyshev–Halley family of
methods with fifth order of convergence, Int. J. Comput. Math. 87 (2010), 818–833.
[10] Hueso J.L., Martinez E., Teruel C., Convergence, efficiency and dynamics of new
fourth and sixth order families of iterative methods for nonlinear systems, J. Comput.
Appl. Math. 275 (2015), 412–420.
[11] Magreñán Á.A., Estudio de la dinámica del método de Newton amortiguado, PhD
Thesis, Universidad de La Rioja, Servicio de Publicaciones, Logroño, 2013. Available
at http://dialnet.unirioja.es/servlet/tesis?codigo=38821
[12] Magreñán Á.A., Different anomalies in a Jarratt family of iterative root-finding meth-
ods, Appl. Math. Comput. 233 (2014), 29–38.
[13] Magreñán Á.A., A new tool to study real dynamics: the convergence plane, Appl. Math.
Comput. 248 (2014), 215–224.
[14] Prashanth M., Motsa S.S., Gupta D.K., Semi-local convergence of the Super-Halley's method under ω-continuous second derivative in Banach space. Submitted.
[15] Rheinboldt W.C., An adaptive continuation process for solving systems of nonlinear
equations, in: Tikhonov A.N., et al. (eds.), Mathematical Models and Numerical Meth-
ods, Banach Center Publ., 3, PWN, Warsaw, 1978, pp. 129–142.
[16] Traub J.F., Iterative Methods for the Solution of Equations, Prentice-Hall Series in
Automatic Computation, Prentice Hall, Inc., Englewood Cliffs, New Jersey, 1964.