Ph. D. Research Proposal Iterative Processes For Solving Nonlinear Operator Equations
Research Proposal
Iterative Processes for Solving Nonlinear Operator Equations
Oganeditse A. Boikanyo
Supervisor: Professor G. Moroşanu
Department of Mathematics and its Applications
Central European University
1 Introduction
An important and interesting topic in nonlinear analysis and convex optimization con-
cerns solving inclusions of the form 0 ∈ A(x), where A is a maximal monotone operator on a
Hilbert space H. Its importance in convex optimization is evidenced by the fact that many
problems involving convexity can be formulated as finding zeros of maximal monotone oper-
ators. For example, convex minimization and convex-concave mini-max problems, to mention
but a few, can be formulated in this way. In particular, the subdifferential ∂f of a proper, convex
and lower semi-continuous (lsc) function f is a maximal monotone operator, and a point
p ∈ H minimizes f if and only if 0 ∈ ∂f(p). One of the most powerful and versatile
techniques for solving variational inequalities, convex minimization problems, and convex-concave mini-
max (saddle-point) problems is the proximal point algorithm (PPA).
The PPA was first introduced by B. Martinet (1970) and is based on the notion of the proxi-
mal mapping Jβ(x) = xβ = arg min{f(z) + ‖z − x‖²/(2β) : z ∈ H}, introduced by J. J. Moreau
(1965). For the problem of minimizing a proper, lower semi-continuous convex function f on a
Hilbert space, the proximal point algorithm in exact form generates a sequence {xn} by taking
the (n + 1)th iterate to be the minimizer of f(x) + ‖x − xn‖²/(2βn), where βn > 0. It was shown
by Y. Censor and S. A. Zenios (1992) that the quadratic additive term appearing above can be
replaced by more general D-functions, which resemble (but are not strictly) distance functions.
They characterized the properties of such D-functions which, when used in the proximal mini-
mization algorithm, preserve its convergence. It was further shown by J. Eckstein (1993) that
for every Bregman function (a strictly convex differentiable function that induces the distance
measure, or D-function, on the Euclidean space) there exists a “nonlinear” version of the PPA.
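For a concrete feel of the exact PPA described above, the toy sketch below runs it on the one-dimensional problem of minimizing f(x) = |x|, whose proximal mapping is the well-known soft-thresholding operator; the function and parameters are illustrative choices only, not part of the cited works.

```python
# Toy illustration of the exact proximal point algorithm (PPA) for
# minimizing f(x) = |x| on the real line.  The proximal mapping of |.|
# with parameter beta is the soft-thresholding operator.

def prox_abs(x, beta):
    """arg min_z { |z| + (z - x)**2 / (2*beta) }: soft-thresholding."""
    if x > beta:
        return x - beta
    if x < -beta:
        return x + beta
    return 0.0

def ppa(x0, betas):
    """Exact PPA: x_{n+1} = prox_{beta_n f}(x_n)."""
    x = x0
    for beta in betas:
        x = prox_abs(x, beta)
    return x

print(ppa(x0=5.0, betas=[1.0] * 10))  # reaches 0.0, the minimizer of |x|
```

Each iterate moves a distance βn toward the minimizer and then stays there, which is the expected behaviour of the exact PPA on this particular f.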
Many mathematicians have studied the PPA, and other iterative processes such as the Mann and
the Mann-Ishikawa iteration processes for solving nonlinear operator equations. They investi-
gated the convergence of such iterative processes and in some cases gave the rate of convergence
of such methods. Among them, the work of R. T. Rockafellar (1976), O. Güler (1991), C. D. Ha
(1990), P. Tseng (2000), H. K. Xu (2002) and P. Tossings (1994), is worth mentioning. Other
methods for finding zeros of operators have been shown to be strongly connected with the above
mentioned methods. For instance, J. Eckstein and D. P. Bertsekas (1992) showed by means of an
operator called a “splitting operator” that the Douglas-Rachford splitting method for finding a
zero of the sum of two operators is a special case of the PPA. They observed that applications of
Douglas-Rachford splitting, such as the alternating direction method of multipliers for convex
programming decomposition, are also special cases of the PPA, an observation which allows the
unification and generalization of a variety of convex programming algorithms.
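To fix ideas, the Douglas-Rachford splitting iteration mentioned above can be sketched for two simple convex functions on the real line; the functions f(x) = |x| and g(x) = (x − 3)²/2 and the unit step size below are illustrative choices of mine, not taken from the cited works.

```python
# Toy Douglas-Rachford splitting for min f(x) + g(x) on the real line,
# with f(x) = |x| and g(x) = (x - 3)**2 / 2 (illustrative choices).
# Iteration: x_k = prox_g(y_k); z_k = prox_f(2*x_k - y_k);
#            y_{k+1} = y_k + z_k - x_k;  then x_k -> argmin (f + g) = 2.

def prox_abs(v, gamma=1.0):
    """Proximal map of |.|: soft-thresholding."""
    if v > gamma:
        return v - gamma
    if v < -gamma:
        return v + gamma
    return 0.0

def prox_quad(v, gamma=1.0):
    """Proximal map of (x - 3)**2 / 2: a weighted average with 3."""
    return (v + 3.0 * gamma) / (1.0 + gamma)

def douglas_rachford(y=0.0, iters=60):
    for _ in range(iters):
        x = prox_quad(y)
        z = prox_abs(2.0 * x - y)
        y = y + z - x
    return prox_quad(y)

print(douglas_rachford())  # ~2.0, the minimizer of |x| + (x - 3)**2 / 2
```

Here the minimizer of f + g solves 0 ∈ sign(x) + (x − 3), i.e. x = 2, and the iterates approach it geometrically; only the proximal maps of f and g are ever evaluated, which is the point of the splitting.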
Consider again the inclusion

0 ∈ A(x). (1)

As pointed out earlier, one method for finding the zeros of (1) is the PPA, which starts at an
arbitrary point x0 ∈ H and generates recursively a sequence of points

xn+1 = Jβn xn + en , n ≥ 0, (2)

where Jβ = (I + βA)−1 denotes the resolvent of A, {βn} ⊂ (0, ∞) and {en} is considered to be
the error sequence. Güler [7] constructed an example showing that Rockafellar’s algorithm (2)
with en = 0 for all n ≥ 0 does not converge
strongly, in general. Since weak convergence is not enough for an efficient algorithm and the PPA
does not converge strongly in general, much research has been devoted to finding algorithms
which always converge strongly, or at least to modifying Rockafellar’s algorithm in such a way
that strong convergence is guaranteed. One such modification has been obtained by Solodov
and Svaiter [15], who proposed an algorithm generating a sequence {xn} satisfying
0 ∈ A(x) + µn (x − xn),

that is, given xn and µn > 0, a pair (yn , vn) with vn ∈ A(yn) is found such that
vn + µn (yn − xn) = 0 (approximately, in the sense of [15]), and the next iterate is
xn+1 = PHn ∩Wn x0 , the projection of x0 onto Hn ∩ Wn , where

Hn := {z ∈ H : ⟨z − yn , vn⟩ ≤ 0} and Wn := {z ∈ H : ⟨z − xn , x0 − xn⟩ ≤ 0}.
It was proved in [15] that if the sequence {µn} is bounded from above, then the sequence {xn}
constructed above converges strongly to PA−1(0) x0 , the projection of x0 onto the solution set
A−1(0). Though their algorithm is strongly convergent, it requires the computation of a
projection at each iteration, a task which may not always be easy, and is therefore more time
consuming. Xu’s idea was to construct a less time consuming algorithm which still converges
strongly. In view of Halpern’s algorithm, Xu [20] proposed the following algorithm

xn+1 = αn x0 + (1 − αn)Jβn xn + en , n ≥ 0, (3)

and showed that algorithm (3) converges strongly provided that {en} ∈ ℓ1 and the sequences {αn},
{βn} of real numbers are chosen appropriately. In [2], we showed that strong convergence is still
ensured even if x0 is replaced by an arbitrary point u of H (not necessarily the starting point
of the PPA) and {en} ∈ ℓp for 1 ≤ p < 2.
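A small numerical sketch of a Halpern-type regularized PPA of this kind is given below; the operator (A = ∂f for f(x) = max(0, |x| − 1), whose zero set is [−1, 1]), the constant βn, the choice αn = 1/(n + 2) and en = 0 are illustrative assumptions of mine, so this is a toy model rather than the algorithm of [20] verbatim.

```python
# Toy Halpern-regularized PPA: x_{n+1} = a_n * u + (1 - a_n) * J(x_n),
# where J = (I + beta*A)^{-1} is the resolvent of A = subdifferential of
# f(x) = max(0, |x| - 1).  The zero set of A is [-1, 1], and the iterates
# should tend strongly to the projection of the anchor u onto that set.

def resolvent(x, beta=1.0):
    """J_beta(x): proximal map of beta * max(0, |x| - 1)."""
    s = 1.0 if x >= 0 else -1.0
    a = abs(x)
    if a <= 1.0:
        return x             # already in the zero set: fixed point
    if a <= 1.0 + beta:
        return s             # thresholded onto the boundary +-1
    return s * (a - beta)    # shifted toward the zero set

def halpern_ppa(u, n_iters=200000):
    x = u
    for n in range(n_iters):
        a_n = 1.0 / (n + 2)  # a_n -> 0 and sum a_n = infinity
        x = a_n * u + (1.0 - a_n) * resolvent(x)
    return x

print(halpern_ppa(5.0))  # close to 1.0 = projection of 5 onto [-1, 1]
```

Note that the limit is the projection of the anchor onto the zero set, not an arbitrary zero, which is precisely the strong-convergence feature that distinguishes such regularized schemes from the plain PPA.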
Recently, Takahashi [16] studied the PPA in a Banach space by the viscosity approximation
method, where the (n + 1)th iterate is given by a viscosity-type scheme of the form

xn+1 = αn f(xn) + (1 − αn)Jβn xn , n ≥ 0,

with f a contraction.
3 Further Investigations
First we will consider Xu’s modified algorithm (algorithm 1 of [2]) and hope to prove a strong
convergence result under general errors. More precisely, we will show that for ‖en‖ → 0 and
βn → ∞ the sequence generated by algorithm 1 is strongly convergent. This will lead us
to the following open question: can one design a PPA by choosing appropriate regularization
parameters αn such that strong convergence of {xn} is preserved, for ‖en‖ → 0 and βn bounded?
We will also try to investigate the convergence rates of some algorithms.
Secondly, we note that Marino et al. [10] proved some convergence results of the Mann
iterative process for strict pseudo-contractions without any error terms. However, since errors
are bound to occur in any practical algorithm, we will investigate the results of [10] taking into
account the errors. Xu [19] proposed a regularization method for the PPA which essentially
includes the prox-Tikhonov method of Lehdili and Moudafi [9]. The algorithm proposed in [19]
is easily seen to be equivalent to algorithm 1 of [2]. On the other hand, following the ideas
contained in [2], one can generalize Xu’s regularization method by considering the case when
I is replaced by any nonexpansive map f, and in this case the resulting algorithm cannot be
reduced to any of the algorithms of section 3 of [2] (except of course when f = I).
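As a simple reminder of how the Mann scheme referred to above operates, the sketch below runs the iteration xn+1 = (1 − αn)xn + αn T xn for a plane rotation T, a nonexpansive (hence 0-strict pseudo-contractive) map whose only fixed point is the origin; the map, the constant step size, and the error-free setting are illustrative choices of mine, not the setting of [10].

```python
import math

# Toy Mann iteration x_{n+1} = (1 - a) x_n + a T(x_n) for a plane
# rotation T (nonexpansive; its only fixed point is the origin).

def T(p, theta=math.pi / 2):
    """Rotation of the plane by angle theta about the origin."""
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def mann(p0, alpha=0.5, n_iters=100):
    p = p0
    for _ in range(n_iters):
        q = T(p)
        p = ((1 - alpha) * p[0] + alpha * q[0],
             (1 - alpha) * p[1] + alpha * q[1])
    return p

p = mann((3.0, 4.0))
print(math.hypot(*p))  # the norm tends to 0, the fixed point of T
```

The averaging step is essential here: the rotation itself never converges (Picard iterates simply circle the origin), whereas each Mann step contracts the norm by the factor |cos(θ/2)| < 1.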
Lastly, we intend to consider iterative processes for semigroups, and prove some convergence
theorems associated with them.
References
[1] V. Barbu, Nonlinear Semigroups and Differential Equations in Banach Spaces, Noordhoff,
Leyden, 1976.
[2] O. A. Boikanyo and G. Moroşanu, Modified Rockafellar’s algorithms, Math. Sci. Res. J.,
accepted.
[3] H. Brézis, Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Es-
paces de Hilbert, North-Holland, Amsterdam, 1973.
[4] Y. Censor and S. A. Zenios, Proximal minimization algorithm with D-functions, J. Optim.
Theory Appl., 73 (1992), no. 3, 451-464.
[5] J. Eckstein, Nonlinear proximal point algorithms using Bregman functions with applications
to convex programming, Math. Oper. Res., 18 (1993), no. 1, 202-226.
[6] J. Eckstein and D. P. Bertsekas, On the Douglas-Rachford splitting method and the proximal
point algorithm for maximal monotone operators, Math. Programming, 55 (1992), no. 3, Ser.
A, 293-318.
[7] O. Güler, On the convergence of the proximal point algorithm for convex minimization,
SIAM J. Control Optim. 29 (1991), 403-419.
[8] C. D. Ha, A generalization of the proximal point algorithm, SIAM J. Control Optim. 28
(1990), 503-512.
[9] N. Lehdili and A. Moudafi, Combining the proximal algorithm and Tikhonov regularization,
Optimization 37 (1996), 239-252.
[10] G. Marino, V. Colao, X. Qin and S. M. Kang, Strong convergence of the modified Mann
iterative method for strict pseudo-contractions, Comput. Math. Appl. 57 (2009), no. 3,
455-465.
[12] J. J. Moreau, Proximité et dualité dans un espace hilbertien, Bull. Soc. Math. France, 93
(1965), 273-299.
[13] G. Moroşanu, Nonlinear Evolution Equations and Applications, Reidel, Dordrecht, 1988.
[14] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control
Optim. 14 (1976), 877-898.
[15] M. V. Solodov and B. F. Svaiter, Forcing strong convergence of proximal point iterations in
a Hilbert space, Math. Program. Ser. A 87 (2000), 189-202.
[17] P. Tossings, The perturbed proximal point algorithm and some of its applications, Appl.
Math. Optim., 29 (1994), no. 2, 125-159.
[18] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings,
SIAM J. Control Optim. 38 (2000), no. 2, 431-446.
[19] H. K. Xu, A regularization method for the proximal point algorithm, J. Glob. Optim. 36
(2006), 115-125.
[20] H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc. (2) 66 (2002),
240-256.