
Applicable Analysis: An International Journal

Algorithms for split equality variational inequality and fixed point problems

Gedefaw Mekuriaw, Habtu Zegeye, Mollalgn Haile Takele & Abebe Regassa Tufa

To cite this article: Gedefaw Mekuriaw, Habtu Zegeye, Mollalgn Haile Takele & Abebe Regassa Tufa (20 May 2024): Algorithms for split equality variational inequality and fixed point problems, Applicable Analysis, DOI: 10.1080/00036811.2024.2348669

To link to this article: https://doi.org/10.1080/00036811.2024.2348669

© 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.

Published online: 20 May 2024.


Algorithms for split equality variational inequality and fixed point


problems
Gedefaw Mekuriawa,b , Habtu Zegeyec , Mollalgn Haile Takelea and Abebe Regassa Tufad
a Department of Mathematics, Bahir Dar University, Bahir Dar, Ethiopia; b Department of Mathematics, Debre Markos
University, Debre Markos, Ethiopia; c Department of Mathematics, Botswana International University of Science and
Technology, Palapye, Botswana; d Department of Mathematics, University of Botswana, Gaborone, Botswana

ABSTRACT
This study presents algorithms for addressing split equality variational inequality and fixed point problems in real Hilbert spaces, namely an inertial-like subgradient extragradient algorithm and an inertial-like Tseng extragradient algorithm. We prove that the sequences generated by the proposed algorithms converge strongly to solutions of the problem, provided that the underlying mappings are quasi-monotone and uniformly continuous, and quasi-nonexpansive, respectively, under some mild conditions. Furthermore, numerical experiments are presented to demonstrate the effectiveness of our techniques.

ARTICLE HISTORY
Received 30 May 2023; Accepted 20 April 2024

COMMUNICATED BY
J.-C. Yao

KEYWORDS
Inertial iterative algorithm; quasi-monotone mappings; weakly sequentially continuous mappings; variational inequality problem; split equality problems

MATHEMATICS SUBJECT CLASSIFICATIONS
47H09; 47J20; 65K15; 47J05; 90C25

1. Introduction
Let C and D be closed convex subsets of the real Hilbert spaces H1 and H2 , respectively. The split
equality feasibility problem (SEFP) is a problem of finding

x ∈ C, y ∈ D such that Ax = By, (1)

where A: H1 → H3 and B: H2 → H3 are two bounded linear mappings and H3 is also a real Hilbert
space. The SEFP was initially studied by Moudafi [1] and became one of the main concerns of
researchers in the field of optimization (see, for example, [2–7]). The SEFP (1) permits partial and
asymmetric relationships between the variables x and y. It also generalizes many other essential prob-
lems. For example, if H3 = H2 and B = I, the identity mapping on H2 , then the SEFP (1) reduces
to the problem of finding x ∈ C such that Ax ∈ D, which is the well known split feasibility problem (SFP). The SFP was introduced by Censor and Elfving [8] and is known to be applicable in many fields
such as medical image reconstruction, phase retrieval, signal processing, radiation therapy treatment
planning, among others. The main interest in SEFPs is to cover more general types of problems and many situations, such as decomposition methods for partial differential equations and applications in game theory.

CONTACT Habtu Zegeye [email protected]


This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License
(http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided
the original work is properly cited, and is not altered, transformed, or built upon in any way. The terms on which this article has been published allow
the posting of the Accepted Manuscript in a repository by the author(s) or with their consent.

The other interesting optimization problem which was introduced and studied by both Stampac-
chia [9] and Fichera [10] in the early 1960s, is the classical Variational Inequality Problem (VIP).
A VIP associated with a nonempty closed convex subset C of a Hilbert space H and a mapping
T : C → H is defined as a problem of finding

x∗ ∈ C such that ⟨Tx∗, x − x∗⟩ ≥ 0, ∀ x ∈ C. (2)

We simply denote the solution set of the VIP by VI(C, T). It is well known that z ∈ VI(C, T) if and only if z = PC (z − λTz), where λ is any positive real number and PC is the metric projection of H onto C, defined as follows. For every x ∈ H, there is a unique nearest point in C, denoted by PC (x), such that
‖PC x − x‖ = inf{‖x − y‖ : y ∈ C}.
Sometimes this type of problem is called the Stampacchia variational inequality problem. This terminology is used to distinguish VIPs from other variational-like problems. One such problem is the Minty
variational inequality problem (MVIP) which requires finding

x∗ ∈ C such that ⟨Tx, x − x∗⟩ ≥ 0, ∀ x ∈ C,

where T and C are identical to those in (2). We refer to the set of solutions for the Minty vari-
ational inequality problem that is associated with C and T by MVI(C, T). The set MVI(C, T) is
obviously closed and convex. If T is continuous and C is convex, then MVI(C, T) ⊆ VI(C, T) (see,
[11]). However, the inclusion VI(C, T) ⊆ MVI(C, T) is not necessarily true when T is continuous
and quasi-monotone (see, [12]). In fact, if T is continuous and pseudomonotone, then we
have that VI(C, T) = MVI(C, T) (see, [13]).
Variational inequality problems have a wide range of applications in industry, finance, economics,
pure and applied sciences (see, for example, [14, 15]). These types of problems led to remarkable
developments in the theory of existence and regularity of solutions, algorithms, and applications.
Extensive studies of variational inequality problems have been made by many authors. They devel-
oped efficient and implementable numerical methods for solving variational inequality and other
related optimization problems in Hilbert, Banach and Hadamard spaces, (see, for example, [15–19]
and the relevant references therein).
Under different suitable conditions, different projection-like iterative methods were developed for
solving VIP (2). The simplest one is the projection gradient method for solving optimization problems
given by

x1 ∈ H,
xn+1 = PC (xn − τ Txn ), ∀ n ≥ 1,    (3)

where τ is a positive constant. One can show that the iterative method (3) converges weakly to a solution of VI(C, T) provided that T is inverse strongly monotone and τ is chosen appropriately.
In order to weaken the strong monotonicity assumption to monotonicity, Korpelevich [20]
proposed an extragradient approach to solving VIP (2):


x1 ∈ H,
yn = PC (xn − τ Txn ),    (4)
xn+1 = PC (xn − τ Tyn ), ∀ n ≥ 1,

where τ is a positive constant and T is monotone and Lipschitz continuous from C into a real Hilbert space H, where C is a nonempty closed convex subset of H. The weak convergence of this method was obtained for τ ∈ (0, 1/L), where L is the Lipschitz constant of T. The major weakness of this method is that it requires us to calculate
APPLICABLE ANALYSIS 3

two metric projections onto C in each iterative step. If the set C is not simple, then the extragradient
method becomes very difficult and its implementation becomes costly. This motivated many authors
to propose some modified extragradient methods.
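A minimal sketch of Korpelevich's scheme (4) (our own illustration, not the paper's code): we assume T(x) = Mx with a skew-symmetric M, which is monotone and 1-Lipschitz but not inverse strongly monotone, and take C to be the closed unit ball, so that VI(C, T) = {0}.

```python
import numpy as np

def project_ball(x, r=1.0):
    # Euclidean projection onto the closed unit ball (our choice of C)
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def extragradient(T, proj_C, x1, tau, iters=200):
    # Korpelevich's scheme (4): two projections onto C per iteration
    x = x1
    for _ in range(iters):
        y = proj_C(x - tau * T(x))   # predictor step
        x = proj_C(x - tau * T(y))   # corrector step
    return x

# Skew-symmetric example: monotone and 1-Lipschitz, so tau = 0.5 < 1/L;
# the plain projection gradient scheme (3) would not converge here.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = extragradient(lambda x: M @ x, project_ball, np.array([0.5, 0.0]), tau=0.5)
```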
In order to solve this shortcoming, the subgradient extragradient approach presented below was
introduced by Censor et al. [21] and substitutes the projection onto a simple half-space Cn for the
second projection onto C:



x1 ∈ H,
yn = PC (xn − τ Txn ),
Cn = {x ∈ H : ⟨xn − τ Txn − yn , x − yn ⟩ ≤ 0},    (5)
xn+1 = PCn (xn − τ Tyn ), ∀ n ≥ 1,

where τ ∈ (0, 1/L) and L is the Lipschitz constant of T. They proved that if T is monotone and Lipschitz,
then (5) converges weakly to an element u ∈ VI(C, T). Several authors have established weak con-
vergence of subgradient extragradient method (5) when T is monotone (or pseudomonotone) and
Lipschitz continuous (see, for example, [22, 23] for some recent results).
Note that the Censor et al. [21] technique still has to compute two projections and two operator
evaluations for each iteration. These make the subgradient extragradient method (5) computationally
expensive in situations where T has a complex evaluation and the structure of C is complex.
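The computational point of scheme (5) is that the half-space Cn admits a closed-form projection. A sketch of one possible implementation (ours, reusing the illustrative skew-monotone T(x) = Mx and unit-ball C from above):

```python
import numpy as np

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def project_halfspace(x, a, y):
    # Projection onto C_n = {z : <a, z - y> <= 0}: closed form, no inner solver
    viol = a @ (x - y)
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def subgradient_extragradient(T, proj_C, x1, tau, iters=200):
    # Scheme (5): the second projection onto C is replaced by a projection
    # onto the half-space C_n with normal a_n = x_n - tau*T(x_n) - y_n
    x = x1
    for _ in range(iters):
        y = proj_C(x - tau * T(x))
        a = (x - tau * T(x)) - y
        x = project_halfspace(x - tau * T(y), a, y)
    return x

# Illustrative monotone Lipschitz T(x) = Mx (skew-symmetric M), solution 0
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = subgradient_extragradient(lambda x: M @ x, project_ball,
                                   np.array([0.5, 0.0]), tau=0.5)
```

When the first projection is inactive the normal a is zero and C_n is the whole space, so the half-space projection reduces to the identity, as in the `viol <= 0` branch.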
In 2000, Tseng [24] introduced the following method called the Tseng extragradient method,
which has received great attention from many authors.



x1 ∈ H,
yn = PC (xn − τ Txn ),
xn+1 = yn − τ (Tyn − Txn ), ∀ n ≥ 1,

with τ ∈ (0, 1/L), where L is the Lipschitz constant of T. He proved that the sequence {xn } converges weakly to a point of VI(C, T). The Tseng technique requires only a single projection onto the feasible set per iteration, which gives it an advantage over the method of Censor et al. Several extensions of Tseng's
algorithm have appeared in the literature (see, for example, [25–27]).
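Tseng's single-projection step can be sketched in the same way (our own illustration, again with the skew-symmetric T(x) = Mx on the unit ball, for which VI(C, T) = {0}):

```python
import numpy as np

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def tseng(T, proj_C, x1, tau, iters=200):
    # Tseng's scheme: one projection onto C per iteration, followed by
    # the explicit correction x_{n+1} = y_n - tau*(T(y_n) - T(x_n))
    x = x1
    for _ in range(iters):
        y = proj_C(x - tau * T(x))
        x = y - tau * (T(y) - T(x))
    return x

M = np.array([[0.0, 1.0], [-1.0, 0.0]])  # illustrative monotone, 1-Lipschitz T
x_star = tseng(lambda x: M @ x, project_ball, np.array([0.5, 0.0]), tau=0.5)
```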
Replacing C and D in the definition of the SEFP (1) by solution sets of variational inequality problems, we get the split equality variational inequality problem (SEVIP), which consists of finding points

x ∈ VI(C, T), y ∈ VI(D, S) such that Ax = By, (6)

where C and D are nonempty closed convex subsets of the Hilbert spaces H1 and H2 , respectively,
A : H1 → H3 and B : H2 → H3 are bounded linear mappings provided that H3 is also a Hilbert
space and T : H1 → H1 and S : H2 → H2 are mappings. The SEVIP is also a generalization of many
other essential problems like split equality zero point problem [28], common solution of variational
inequality problem [28] and split equality feasibility problem [1]. It is applicable in data compression,
phase retrievals and medical image reconstruction (see, for example, [29, 30]).
In 2022, Kwelegano et al. [31] considered mappings T : H1 → H1 and S : H2 → H2 that are pseudomonotone, uniformly continuous, and sequentially weakly continuous on bounded subsets of the nonempty closed convex subsets C and D of H1 and H2, respectively. They developed the following iterative method to solve the corresponding SEVIP in real Hilbert spaces.
Let ℓ ∈ (0, 1), μ > 0, λ ∈ (0, 1/μ). Choose (x0 , u0 ) ∈ C × D arbitrarily.

For n ≥ 1, compute in parallel,

zn = PC (xn − λTxn ),
yn = xn − τn (xn − zn ),
vn = PD (un − λSun ),
sn = un − δn (un − vn ),
dn = PC [xn − γn A∗ (Axn − Bvn )],    (7)
xn+1 = αn f (xn ) + (1 − αn )[an PCn xn + (1 − an )dn ],
tn = PD [un − γn B∗ (Bun − Axn )],
un+1 = αn g(un ) + (1 − αn )[an PDn un + (1 − an )tn ],

where

Cn = {x ∈ C : ⟨Tyn , x − yn ⟩ ≤ 0}, Dn = {u ∈ D : ⟨Ssn , u − sn ⟩ ≤ 0},

τn = ℓ^jn and δn = ℓ^kn, where jn and kn are the smallest nonnegative integers j and k, respectively, such that

⟨Txn − T[xn − ℓ^j (xn − zn )], xn − zn ⟩ ≤ μ‖xn − zn‖²,
⟨Sun − S[un − ℓ^k (un − vn )], un − vn ⟩ ≤ μ‖un − vn‖²,    (8)

and {γn }, {αn }, {an } ⊂ R+. Under suitable assumptions, they showed that the resulting sequence converges strongly to a solution of problem (6). A line search method is employed to avoid the Lipschitz continuity requirement on the underlying mappings T and S, which has been imposed by several researchers.
Note that we can derive the split equality fixed point problem by considering the closed, convex and nonempty subsets of Hilbert spaces in (1) as sets of solutions of fixed point problems. The split equality fixed point problem (SECFP), introduced by Moudafi [32], is defined by

finding x ∈ F(T), y ∈ F(S) such that Ax = By,

where T : H1 → H1 and S : H2 → H2 are nonlinear mappings with nonempty fixed point sets F(T) := {x ∈ H1 : Tx = x} and F(S) := {x ∈ H2 : Sx = x}. Several results are already available for this problem (see, for instance, [33, 34]).
In 2015, Zhao [35] introduced the following iterative process for the class of quasi-nonexpansive
mappings:
un = xn − γn A∗ (Axn − Byn ),
xn+1 = αn un + (1 − αn )Tun ,
vn = yn − γn B∗ (Byn − Axn ),    (9)
yn+1 = αn vn + (1 − αn )Svn ,
where T : H1 → H1 and S : H2 → H2 are quasi-nonexpansive mappings with nonempty fixed point
sets F(T) and F(S). Without requiring previous knowledge of the norms of A and B, he demonstrated
that the method in (9) weakly converges to a solution of the split equality fixed point problem under
specific assumptions. Also, see [36] for information on the split variational inclusion problem.
Another interesting general case of the SEFP is obtained by replacing C and D in the definition of SEFP (1) with sets of solutions of variational inequality and fixed point problems. Then we get the split equality variational inequality and fixed point problem (SEVIFPP), which consists of finding points

x∗ ∈ VI(C, T) ∩ F(K) and u∗ ∈ VI(D, S) ∩ F(G) such that Ax∗ = Bu∗ ,    (10)

where T, K : H1 → H1 and S, G : H2 → H2 are nonlinear mappings.

Many authors were drawn to the SEVIFPP because of its remarkable usefulness and wide range of applications in practical mathematics, particularly in inverse problems arising from phase retrieval and medical image reconstruction (see, for example, [14, 15]). In decision sciences, it makes it possible to consider agents who interact only via some components of their decision variables (see, [37]). In intensity-modulated radiation therapy (IMRT), it amounts to envisaging a weak coupling between the vector of doses absorbed in all voxels and that of the radiation intensity (see, [38]).
There has been a growing interest in accelerating the rate at which iterative algorithms converge. In
order to accelerate the pace of convergence of an algorithm, Polyak [39] created the inertial approach,
which is one of the more modern techniques. This method is an iterative procedure through which
subsequent terms of the sequence are obtained from the preceding two terms.
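In its simplest form the inertial step first extrapolates w_n = x_n + θ(x_n − x_{n−1}) from the two preceding iterates and then applies the base update at w_n. A sketch of our own (minimizing f(x) = ½‖x − b‖², not anything from the paper):

```python
import numpy as np

def inertial_gradient(grad, x0, step, theta, iters=100):
    # Inertial iteration: extrapolate from the two preceding iterates,
    # w_n = x_n + theta*(x_n - x_{n-1}), then apply the base update at w_n.
    x_prev = x = x0
    for _ in range(iters):
        w = x + theta * (x - x_prev)       # inertial extrapolation
        x_prev, x = x, w - step * grad(w)  # base (gradient) update at w
    return x

# Illustrative strongly convex objective: grad f(x) = x - b, minimizer b
b = np.array([3.0, -1.0])
x_star = inertial_gradient(lambda x: x - b, np.zeros(2), step=0.5, theta=0.3)
```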
Recently, a number of authors have investigated various methods using the inertial technique to
find common solutions for fixed point and variational inequality problems (see, for example, [40–44]).
In 2021, Tan [45] presented the following approach to find a common solution of the variational inequality problem associated with a monotone and Lipschitz continuous mapping T and the fixed point problem associated with a demicontractive mapping S:

wn = xn + θn (xn − xn−1 );
yn = PC (wn − τn Twn );
zn = yn + τn (Twn − Tyn );    (11)
xn+1 = αn f (xn ) + (1 − αn )[(1 − βn )zn + βn Szn ].

Using the inertial and viscosity approach, the algorithm (11) converges strongly to a common point of VI(C, T) ∩ F(S).
Inspired and motivated by the results of Tan and Cho [46], Kwelegano et al. [31], Thong and
Vuong [47], Zhao [35] and Polyak [39], in order to solve split equality variational inequality and
fixed point problems involving uniformly continuous quasi-monotone mappings for the MVIP and
quasi-nonexpansive demiclosed mappings for the fixed point problem in Hilbert spaces, we suggest an
inertial-like subgradient extragradient algorithm and an inertial-like Tseng extragradient algorithm.
We establish strong convergence results for our methods to solutions of the aforementioned problems. Finally, we give some numerical experiments to show the applicability and efficiency of the proposed methods.
Some of the contributions of our results over some well known results found in the literature are:

(1) The Lipschitz continuity requirement on the mappings is not needed in our method.
(2) We use uniform continuity and quasi-monotonicity assumptions on the underlying mappings in our inertial-like extragradient and inertial-like Tseng extragradient algorithms, which are weaker assumptions than those in the literature.
(3) We suggest techniques for approximating the solution of a more comprehensive problem, the split equality variational inequality and fixed point problem, which encompasses other crucial problem classes such as VIPs, SFPs, split equality problems, and common null point problems.

The rest of the paper is organized as follows: In Section 2, we state some definitions and known
results which will be used to prove the main results. In Section 3, we present our iterative methods
and show that the ensuing sequences strongly converge to a solution of SEVIFPP. In Section 4, we
provide numerical examples to show how effective our methods are. Finally, concluding remarks are
given in Section 5.

2. Preliminaries
Here, some well-known and practical definitions and lemmas that are required for the proof of our key
results are stated. In the sequel, we denote strong and weak convergence by ‘→’ and ‘ ’, respectively.

Consider a real Hilbert space H. A nonlinear mapping T : H → H is called

(i) L-Lipschitz continuous if there is a constant L > 0 such that ‖Tx − Ty‖ ≤ L‖x − y‖, ∀ x, y ∈ H. If 0 < L < 1, then T is called a contraction mapping, and if L = 1, then T is called nonexpansive.
(ii) quasi-nonexpansive if F(T) is nonempty and ‖Tx − y‖ ≤ ‖x − y‖, ∀ x ∈ C, y ∈ F(T).
(iii) monotone if ⟨Tx − Ty, x − y⟩ ≥ 0, ∀ x, y ∈ H.
(iv) pseudomonotone if ⟨Ty, x − y⟩ ≥ 0 implies that ⟨Tx, x − y⟩ ≥ 0, ∀ x, y ∈ H.
(v) quasi-monotone if ⟨Ty, x − y⟩ > 0 implies that ⟨Tx, x − y⟩ ≥ 0, ∀ x, y ∈ C.

The mapping I − T, where I is the identity mapping on H, is said to be demiclosed at 0 if, for any sequence {xn } ⊂ H converging weakly to x0 with (I − T)xn → 0, we have (I − T)x0 = 0.
It can be shown that monotone mappings are pseudomonotone and pseudomonotone mappings
are quasi-monotone. However, we observe that quasi-monotone is not necessarily pseudomonotone
(see, the following example).

Example 2.1: Let S : H = R → R be defined by S(x) = x². The quasi-monotonicity of S is easily seen. But if we take, for example, x = 0 and y = −2, then we get ⟨S(x), y − x⟩ = −2S(0) = 0 and ⟨S(y), y − x⟩ = −2S(−2) = −8 < 0, which shows that S is not pseudomonotone. Hence, S is a continuous and quasi-monotone mapping on C = [1, 2], with MVI(C, S) = {1} ≠ ∅.
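The two inner products in Example 2.1 are simple to verify numerically (a quick check of our own; in the ambient space R, ⟨a, b⟩ = ab):

```python
# Numerical check of Example 2.1: S(x) = x^2 on the real line.
S = lambda x: x ** 2

x, y = 0.0, -2.0
forward = S(x) * (y - x)   # <S(x), y - x> = -2*S(0) = 0 (premise holds weakly)
backward = S(y) * (y - x)  # <S(y), y - x> = -2*S(-2) = -8 < 0 (conclusion fails)
# Pseudomonotonicity would force backward >= 0 whenever forward >= 0,
# so S is not pseudomonotone, although it is quasi-monotone.
```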

The following characteristics of PC are widely known:

(P1) ‖PC x − PC y‖² ≤ ⟨PC x − PC y, x − y⟩.

That is, PC is firmly nonexpansive. In particular, PC is nonexpansive from H onto C. Furthermore, we have

(P2) ⟨x − PC x, y − PC x⟩ ≤ 0, ∀ y ∈ C.

Consequently, for every x ∈ H, we get

(P3) ‖PC x − y‖² ≤ ‖x − y‖² − ‖PC x − x‖², ∀ y ∈ C.

The following lemmas will be employed to prove our main results.

Lemma 2.2 ([48]): Consider a real Hilbert space H. Then the following hold:

(1) ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖², ∀ x, y ∈ H.
(2) ‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, ∀ x, y ∈ H.
(3) ‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − αγ‖x − z‖² − βγ‖y − z‖², ∀ x, y, z ∈ H, where α, β, γ ∈ [0, 1] with α + β + γ = 1.

Lemma 2.3 ([12]): Let H be a Hilbert space, T : H → H a mapping, and C a nonempty, closed, convex subset of H. If either

(i) T is pseudomonotone on C and VI(C, T) ≠ ∅, or
(ii) T is quasi-monotone on C, T ≠ 0 on C and C is bounded, or
(iii) T is quasi-monotone on C, int(C) ≠ ∅ and there exists x∗ ∈ VI(C, T) such that Tx∗ ≠ 0,

then MVI(C, T) is nonempty.

Lemma 2.4 ([49]): If T : C → C is continuous and quasi-nonexpansive, and C is a closed convex subset
of a Hilbert space, then F(T) is closed, convex and nonempty.

Lemma 2.5 ([50]): Let C be a nonempty, closed, convex subset of a real Hilbert space H and let A : H → H be a mapping. Then, for all u ∈ H and α ≥ β > 0,

‖u − PC (u − αAu)‖ / α ≤ ‖u − PC (u − βAu)‖ / β.

Lemma 2.6 ([3]): Let H = H1 × H2 , where H1 and H2 are real Hilbert spaces. If C is a nonempty, closed and convex subset of H, (u, v) ∈ H and (u∗ , v∗ ) = PC (u, v), then we have

⟨(u, v) − (u∗ , v∗ ), (x, y) − (u∗ , v∗ )⟩ ≤ 0, for all (x, y) ∈ C.

Lemma 2.7 ([51]): Let {an } be a sequence of nonnegative real numbers, {bn } be a sequence of real
numbers, and {αn } be a sequence in (0, 1) satisfying

an+1 ≤ (1 − αn )an + αn bn , ∀ n ≥ 1.

If lim supn→∞ bn ≤ 0, then limn→∞ an = 0.

Lemma 2.8 ([52]): If {an } is a sequence of nonnegative real numbers with a subsequence {anj } satisfying anj < anj +1 for all j ∈ N, then there exists a nondecreasing sequence {lk } ⊂ N such that limk→∞ lk = ∞ and, for all sufficiently large k,

max{alk , ak } ≤ alk +1 .

3. Main results
This section covers the convergence analysis of the proposed inertial-like subgradient extragradient and inertial-like Tseng extragradient algorithms. In the sequel, we make the following assumptions.
Conditions:

(C1) Let the sets C and D be nonempty closed convex subsets of the real Hilbert spaces H1 and H2 ,
respectively.
(C2) Let T : H1 → H1 and S : H2 → H2 be quasi-monotone, uniformly continuous mappings such that Txn ⇀ Tx and Sun ⇀ Su whenever {xn } and {un } are sequences in C and D, respectively, with xn ⇀ x and un ⇀ u.
(C3) Let K : H1 → H1 and G : H2 → H2 be quasi-nonexpansive mappings such that I − K and I − G are demiclosed at zero.
(C4) Let A : H1 → H3 and B : H2 → H3 be bounded linear mappings and let A∗ and B∗ be the adjoints of A and B, respectively, where H3 is another real Hilbert space.
(C5) Let ϒ = {(p, q) ∈ (MVI(C, T) ∩ F(K)) × (MVI(D, S) ∩ F(G)) : Ap = Bq} ≠ ∅.
(C6) Let {ϵn }, {βn }, {σn } and {an } be real sequences satisfying Σ∞n=1 ϵn < ∞ and limn→∞ ϵn /αn = 0, where {αn } ⊂ (0, 1) with Σ∞n=1 αn = ∞ and limn→∞ αn = 0, and {βn }, {an }, {σn } ⊆ [a, b] ⊂ (0, 1), for some a, b > 0.

3.1. Inertial-like subgradient extragradient algorithm

Algorithm 3.1: Initialization: Let x, x0 , x1 ∈ C, u, u0 , u1 ∈ D, 0 ≤ θ < 1 and ρ, ξ, κ, ω ∈ (0, 1). Set n = 1.

Iterative steps:

Step 1. Given the iterates (xn−1 , un−1 ) and (xn , un ) in C × D, choose θn such that 0 ≤ θn ≤ θ̄n , where

θ̄n = min{θ, ϵn /‖xn − xn−1‖, ϵn /‖un − un−1‖}, if xn ≠ xn−1 and un ≠ un−1 ,
θ̄n = θ, otherwise.    (12)

Step 2. Set

wn = xn + θn (xn−1 − xn ), and
vn = un + θn (un−1 − un ).    (13)

Step 3. Compute

yn = PC (wn − τn Twn ),
zn = PTn (wn − τn Tyn ),
sn = PD (vn − ϕn Svn ),    (14)
tn = PSn (vn − ϕn Ssn ),

where

Tn = {x ∈ C : ⟨wn − τn Twn − yn , x − yn ⟩ ≤ 0} and
Sn = {u ∈ D : ⟨vn − ϕn Svn − sn , u − sn ⟩ ≤ 0},

τn = ξω^jm with jm the smallest nonnegative integer j satisfying

ξω^j ‖Twn − Tyn‖ ≤ ρ‖wn − yn‖,    (15)

and ϕn = κ^ki with ki the smallest nonnegative integer k satisfying

κ^k ‖Svn − Ssn‖ ≤ ρ‖vn − sn‖.    (16)

Step 4. Compute

dn = PC (wn − γn A∗ (Awn − Bvn )),
hn = PC (σn dn + (1 − σn )Kdn ),
xn+1 = αn x + (1 − αn )[(1 − βn )wn + βn (an zn + (1 − an )hn )],    (17)
en = PD (vn + γn B∗ (Awn − Bvn )),
rn = PD (σn en + (1 − σn )Gen ),
un+1 = αn u + (1 − αn )[(1 − βn )vn + βn (an tn + (1 − an )rn )],

where 0 < γ ≤ γn ≤ ρ̂n with

ρ̂n = min{ γ + 1, ‖Awn − Bvn‖² / [‖A∗ (Awn − Bvn )‖² + ‖B∗ (Bvn − Awn )‖²] },    (18)

for n ∈ ϒ̄ = {m ∈ N : Awm ≠ Bvm }, and otherwise γn = γ, for some γ > 0. Set n := n + 1 and go to Step 1.
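The point of the stepsize rule (18) is that γn is computable from the current residual Awn − Bvn alone, without any knowledge of the operator norms of A and B. A sketch of ours for matrix operators, where A∗ is the transpose:

```python
import numpy as np

def rho_hat(A, B, w, v, gamma):
    # Upper bound rho_hat_n from rule (18); an implementation then picks
    # gamma_n anywhere in [gamma, rho_hat_n].
    r = A @ w - B @ v                     # split-equality residual Aw - Bv
    if np.allclose(r, 0.0):
        return gamma                      # the rule sets gamma_n = gamma here
    # ||B^T(Bv - Aw)|| = ||B^T r||, so we can reuse r directly
    denom = np.linalg.norm(A.T @ r) ** 2 + np.linalg.norm(B.T @ r) ** 2
    return min(gamma + 1.0, np.linalg.norm(r) ** 2 / denom)

# Illustrative data (ours): with A = B = I and residual (1, 0),
# the ratio is 1/2, below gamma + 1.
A = B = np.eye(2)
bound = rho_hat(A, B, np.array([1.0, 0.0]), np.zeros(2), gamma=0.1)
```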

Remark 3.1: Note that from (12) and condition (C6), we have θn‖xn − xn−1‖ ≤ ϵn for all n ≥ 1 and ϵn /αn → 0 as n → ∞, which implies that

limn→∞ (θn /αn )‖xn − xn−1‖ = 0,

and hence

limn→∞ θn‖xn − xn−1‖ = limn→∞ αn (θn /αn )‖xn − xn−1‖ = 0.

Similarly, we get limn→∞ (θn /αn )‖un − un−1‖ = 0 and limn→∞ θn‖un − un−1‖ = 0.
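The bound (12), together with Remark 3.1, is what keeps the inertia summable. A sketch of ours of the parameter choice:

```python
import numpy as np

def theta_bar(x_curr, x_prev, u_curr, u_prev, theta, eps_n):
    # Upper bound (12) for the inertial parameter: any 0 <= theta_n <= theta_bar
    # guarantees theta_n*||x_n - x_{n-1}|| <= eps_n (and likewise for u).
    dx = np.linalg.norm(x_curr - x_prev)
    du = np.linalg.norm(u_curr - u_prev)
    if dx > 0.0 and du > 0.0:
        return min(theta, eps_n / dx, eps_n / du)
    return theta

# Illustrative iterates: dx = 1, du = 2, so the bound is eps_n/du = 0.025
bound = theta_bar(np.array([1.0, 0.0]), np.zeros(2),
                  np.array([0.0, 2.0]), np.zeros(2), theta=0.9, eps_n=0.05)
```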

Remark 3.2: Assume that conditions (C1)–(C6) are satisfied. Then the line search rules (15) and (16)
are well defined.

Proof: If wn ∈ VI(C, T), then wn = PC (wn − τn Twn ). Thus, we have wn = yn and hence (15) is satisfied for j = 0. Let wn ∉ VI(C, T) and assume on the contrary that

ξω^j ‖Twn − Tyn‖ > ρ‖wn − yn‖, ∀ j ≥ 0.    (19)

That is,

(1/ρ)‖Twn − TPC (wn − ξω^j Twn )‖ > (1/ξω^j )‖wn − PC (wn − ξω^j Twn )‖, ∀ j ≥ 0.    (20)

Since PC is continuous, we have

limj→∞ ‖PC (wn − ξω^j Twn ) − wn‖ = 0,    (21)

which implies, by the uniform continuity of T, that

limj→∞ ‖TPC (wn − ξω^j Twn ) − Twn‖ = 0.    (22)

Thus, we have from (20) and (22) that

limj→∞ (1/ξω^j )‖wn − PC (wn − ξω^j Twn )‖ = 0.    (23)

Now, put qj = PC (wn − ξω^j Twn ). Then, we have by the projection property that

⟨wn − ξω^j Twn − qj , x − qj ⟩ ≤ 0, ∀ x ∈ C,    (24)

which implies that

⟨(wn − qj )/(ξω^j ), x − qj ⟩ + ⟨Twn , qj − wn ⟩ ≤ ⟨Twn , x − wn ⟩, ∀ x ∈ C.    (25)

Taking the limit as j → ∞ in (25) and making use of (21) and (23), we obtain that ⟨Twn , x − wn ⟩ ≥ 0, ∀ x ∈ C, which contradicts our assumption that wn ∉ VI(C, T). Thus, there exists a nonnegative integer j which satisfies (15), that is, (15) is well defined. The proof concerning the line search rule (16) is similar and hence the proof is complete. □
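Remark 3.2 guarantees that the backtracking in (15) terminates. A sketch of ours of the rule, with the unit-ball C and the skew-symmetric T(x) = Mx used only for illustration:

```python
import numpy as np

def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def line_search_tau(T, proj_C, w, xi, omega, rho, max_j=100):
    # Rule (15): find the smallest nonnegative j with
    #   xi*omega^j*||T(w) - T(y_j)|| <= rho*||w - y_j||,
    # where y_j = P_C(w - xi*omega^j*T(w)); return tau_n = xi*omega^j and y_n.
    Tw = T(w)
    for j in range(max_j):
        tau = xi * omega ** j
        y = proj_C(w - tau * Tw)
        if tau * np.linalg.norm(Tw - T(y)) <= rho * np.linalg.norm(w - y):
            return tau, y
    # Remark 3.2 rules this out for uniformly continuous T
    raise RuntimeError("line search failed to terminate")

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative skew-symmetric T
tau_n, y_n = line_search_tau(lambda x: M @ x, project_ball,
                             np.array([0.5, 0.0]), xi=0.9, omega=0.5, rho=0.5)
```

Since this T is an isometry, ‖T(w) − T(y)‖ = ‖w − y‖ and the condition reduces to ξω^j ≤ ρ, so the search stops at j = 1 with τn = 0.45.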

Lemma 3.3: Suppose that conditions (C1)–(C6) hold. Let Tx ≠ 0, ∀ x ∈ C and Su ≠ 0, ∀ u ∈ D. Let {wn }, {yn }, {vn } and {sn } be sequences generated by Algorithm 3.1, and let {(wnk , vnk )} be a subsequence of {(wn , vn )} such that ‖wnk − ynk‖ → 0, ‖vnk − snk‖ → 0 and (wnk , vnk ) ⇀ (z, v). Then (z, v) ∈ MVI(C, T) × MVI(D, S).

Proof: The facts that ‖ynk − wnk‖ → 0 and wnk ⇀ z imply that z ∈ C, as C is closed and convex. Now, let pnk = PC (wnk − τnk ω⁻¹ Twnk ). Then, by Lemma 2.5, we have

‖wnk − pnk‖ ≤ (1/ω)‖wnk − ynk‖ → 0, as k → ∞.    (26)

Thus, pnk ⇀ z and hence the sequence {pnk } is bounded. Since T is uniformly continuous, we have from (26) that

limk→∞ ‖Twnk − Tpnk‖ = 0.    (27)

We also have by (15) that

τnk ω⁻¹ ‖Tpnk − Twnk‖ > ρ‖wnk − pnk‖,    (28)

that is,

(1/ρ)‖Tpnk − Twnk‖ > (1/(τnk ω⁻¹))‖wnk − pnk‖.    (29)

From (27) and (29), we obtain

limk→∞ (1/(τnk ω⁻¹))‖wnk − pnk‖ = 0.    (30)

By the definition of pnk and the property of the projection, we have

⟨wnk − τnk ω⁻¹ Twnk − pnk , x − pnk ⟩ ≤ 0, ∀ x ∈ C,

or equivalently,

⟨(1/(τnk ω⁻¹))(wnk − pnk ), x − pnk ⟩ ≤ ⟨Twnk , x − wnk ⟩ + ⟨Twnk , wnk − pnk ⟩, ∀ x ∈ C,

and thus

⟨(1/(τnk ω⁻¹))(wnk − pnk ), x − pnk ⟩ + ⟨Twnk , pnk − wnk ⟩ ≤ ⟨Twnk , x − wnk ⟩, ∀ x ∈ C.    (31)

From (30) and (31), the boundedness of {pnk } and {Twnk }, and the fact that ‖wnk − pnk‖ → 0 as k → ∞, we obtain

lim supk→∞ ⟨Twnk , x − wnk ⟩ ≥ lim infk→∞ ⟨Twnk , x − wnk ⟩ ≥ 0, ∀ x ∈ C.    (32)

Next, we consider two possible cases.

Case 1. If lim supk→∞ ⟨Twnk , x − wnk ⟩ > 0, then there exists a subsequence {wnkj } of {wnk } such that limj→∞ ⟨Twnkj , x − wnkj ⟩ > 0. That is, there exists N0 such that ⟨Twnkj , x − wnkj ⟩ > 0 for all j > N0 . The quasi-monotonicity of T implies that ⟨Tx, x − wnkj ⟩ ≥ 0 for all j > N0 and hence ⟨Tx, x − z⟩ ≥ 0, that is, z ∈ MVI(C, T).

Case 2. If lim supk→∞ ⟨Twnk , x − wnk ⟩ = 0, then from (32) we get limk→∞ ⟨Twnk , x − wnk ⟩ = 0. Let φk = |⟨Twnk , x − wnk ⟩| + 1/(k + 1), k = 1, 2, . . .. Then, we obtain

⟨Twnk , x − wnk ⟩ + φk > 0, for all k ≥ 1.    (33)

Since Tx ≠ 0, ∀ x ∈ C, we have Twnk ≠ 0, ∀ k ∈ N. Now, set ψnk = Twnk /‖Twnk‖². Thus, we have

⟨Twnk , ψnk ⟩ = 1.    (34)

Now, from (33) and (34), it follows that

⟨Twnk , x − wnk ⟩ + φk ⟨Twnk , ψnk ⟩ > 0 and hence ⟨Twnk , x + φk ψnk − wnk ⟩ > 0,

which implies, by the quasi-monotonicity of T, that

⟨T(x + φk ψnk ), x + φk ψnk − wnk ⟩ ≥ 0,    (35)

or equivalently,

⟨T(x), x − wnk ⟩ + ⟨T(x + φk ψnk ) − Tx, x − wnk ⟩ + ⟨T(x + φk ψnk ), φk ψnk ⟩ ≥ 0, for all k ≥ 1.    (36)

We now show that limk→∞ ‖φk ψnk‖ = 0. In fact, since wnk ⇀ z, by (C2) we have Twnk ⇀ Tz and hence, by the weak lower semicontinuity of the norm and the fact that Tz ≠ 0 (as z ∈ C), we have

0 < ‖Tz‖ ≤ lim infk→∞ ‖Twnk‖.    (37)

Therefore, we obtain

0 ≤ lim supk→∞ ‖φk ψnk‖ = lim supk→∞ (φk /‖Twnk‖) ≤ lim supk→∞ φk / lim infk→∞ ‖Twnk‖ ≤ 0/‖Tz‖ = 0.

Consequently, limk→∞ ‖φk ψnk‖ = 0. So, taking the limit on both sides of (36) as k → ∞, we get ⟨Tx, x − z⟩ ≥ 0 for all x ∈ C and hence z ∈ MVI(C, T). Similarly, we get that v ∈ MVI(D, S) and hence the proof is complete. □

Theorem 3.4: Suppose conditions (C1)–(C6) are satisfied. Then, the sequences {xn } and {un } generated
by Algorithm 3.1 are bounded.

Proof: Suppose (p, q) ∈ ϒ and let

bn = (1 − βn )wn + βn (an zn + (1 − an )hn );
cn = (1 − βn )vn + βn (an tn + (1 − an )rn ).    (38)

Then, we obtain

‖bn − p‖² ≤ (1 − βn )‖wn − p‖² + βn an ‖zn − p‖² + βn (1 − an )‖hn − p‖²,    (39)

and

‖cn − q‖² ≤ (1 − βn )‖vn − q‖² + βn an ‖tn − q‖² + βn (1 − an )‖rn − q‖².    (40)

Moreover, we have from the quasi-nonexpansiveness of K that

‖hn − p‖² = ‖PC (σn dn + (1 − σn )Kdn ) − PC (p)‖²
≤ ‖(σn dn + (1 − σn )Kdn ) − p‖²
= ‖σn (dn − p) + (1 − σn )(Kdn − p)‖²
= σn ‖dn − p‖² + (1 − σn )‖Kdn − p‖² − σn (1 − σn )‖Kdn − dn‖²
≤ ‖dn − p‖² − σn (1 − σn )‖Kdn − dn‖².    (41)

Similarly, we obtain

‖rn − q‖² ≤ ‖en − q‖² − σn (1 − σn )‖Gen − en‖².    (42)

Substituting (41) into (39), we get

‖bn − p‖² ≤ (1 − βn )‖wn − p‖² + βn an ‖zn − p‖²
+ βn (1 − an )‖dn − p‖² − σn βn (1 − an )(1 − σn )‖Kdn − dn‖².    (43)

Similarly, we get

‖cn − q‖² ≤ (1 − βn )‖vn − q‖² + βn an ‖tn − q‖²
+ βn (1 − an )‖en − q‖² − σn βn (1 − an )(1 − σn )‖Gen − en‖².    (44)

From (17), (38) and Lemma 2.2, we get

‖xn+1 − p‖² = ‖αn x + (1 − αn )bn − p‖²
= ‖αn (x − p) + (1 − αn )(bn − p)‖²
≤ αn ‖x − p‖² + (1 − αn )‖bn − p‖².    (45)

Similarly, we have

‖un+1 − q‖² ≤ αn ‖u − q‖² + (1 − αn )‖cn − q‖².    (46)

Combining (45) and (46), we get

‖xn+1 − p‖² + ‖un+1 − q‖² ≤ αn (‖x − p‖² + ‖u − q‖²) + (1 − αn )(‖bn − p‖² + ‖cn − q‖²).    (47)

Substituting (43) and (44) into (47), we obtain

‖xn+1 − p‖² + ‖un+1 − q‖² ≤ αn (‖x − p‖² + ‖u − q‖²)
+ (1 − αn )(1 − βn )[‖wn − p‖² + ‖vn − q‖²]
+ (1 − αn )βn an [‖zn − p‖² + ‖tn − q‖²]
+ (1 − αn )βn (1 − an )[‖dn − p‖² + ‖en − q‖²]
− (1 − αn )σn βn (1 − an )(1 − σn )‖Kdn − dn‖²
− (1 − αn )σn βn (1 − an )(1 − σn )‖Gen − en‖².    (48)

Since p ∈ Tn , using the definition of zn in (14), property (P1) of the metric projection and Lemma 2.2, we have

‖zn − p‖² = ‖PTn (wn − τn Tyn ) − p‖²
= ‖PTn (wn − τn Tyn ) − PTn p‖²
≤ ⟨zn − p, wn − τn Tyn − p⟩
= (1/2)[‖zn − p‖² + ‖wn − τn Tyn − p‖² − ‖zn − wn + τn Tyn‖²]
= (1/2)[‖zn − p‖² + ‖wn − p‖² + τn²‖Tyn‖² − ‖zn − wn‖² − τn²‖Tyn‖²
− 2⟨wn − p, τn Tyn ⟩ − 2⟨zn − wn , τn Tyn ⟩]
= (1/2)[‖zn − p‖² + ‖wn − p‖² − ‖zn − wn‖² + 2⟨p − zn , τn Tyn ⟩].    (49)

Thus, the inequality in (49) implies that

‖zn − p‖² ≤ ‖wn − p‖² − ‖zn − wn‖² + 2⟨p − zn , τn Tyn ⟩.    (50)

Since p ∈ MVI(C, T), we have ⟨Tx, x − p⟩ ≥ 0, ∀ x ∈ C. Taking x = yn ∈ C and rearranging, we get ⟨Tyn , p − yn ⟩ ≤ 0. Hence,

⟨p − zn , τn Tyn ⟩ = τn ⟨Tyn , p − yn ⟩ + τn ⟨Tyn , yn − zn ⟩ ≤ τn ⟨Tyn , yn − zn ⟩.    (51)

Using (50) and (51) and Lemma 2.2,

‖zn − p‖² ≤ ‖wn − p‖² − ‖zn − wn‖² + 2τn ⟨Tyn , yn − zn ⟩
= ‖wn − p‖² − ‖zn − yn‖² − ‖yn − wn‖² − 2⟨zn − yn , yn − wn ⟩ + 2τn ⟨Tyn , yn − zn ⟩
= ‖wn − p‖² − ‖yn − zn‖² − ‖wn − yn‖² + 2⟨wn − τn Tyn − yn , zn − yn ⟩.    (52)

Since zn ∈ Tn , by the definition of Tn we obtain ⟨wn − τn Twn − yn , zn − yn ⟩ ≤ 0. This, together with the definition of τn , the Schwarz inequality and the fact that 2ab ≤ a² + b² for any two real numbers a and b, gives

2⟨wn − τn Tyn − yn , zn − yn ⟩ = 2⟨wn − τn Twn − yn , zn − yn ⟩ + 2τn ⟨Twn − Tyn , zn − yn ⟩
≤ 2τn ⟨Twn − Tyn , zn − yn ⟩
≤ 2τn ‖Twn − Tyn‖ ‖yn − zn‖
≤ 2ρ‖wn − yn‖ ‖yn − zn‖
≤ ρ[‖wn − yn‖² + ‖yn − zn‖²].    (53)

Combining (52) and (53),

‖zn − p‖² ≤ ‖wn − p‖² − (1 − ρ)[‖yn − wn‖² + ‖zn − yn‖²].    (54)

Similarly, we get

‖tn − q‖² ≤ ‖vn − q‖² − (1 − ρ)[‖sn − vn‖² + ‖tn − sn‖²].    (55)

Adding (54) and (55), we get
\begin{align*}
\|z_n-p\|^2 + \|t_n-q\|^2 &\le \|w_n-p\|^2 + \|v_n-q\|^2 - (1-\rho)\big[\|y_n-w_n\|^2 + \|z_n-y_n\|^2\big] \\
&\quad - (1-\rho)\big[\|s_n-v_n\|^2 + \|t_n-s_n\|^2\big]. \tag{56}
\end{align*}

Now, using the definition of $d_n$ in (17) and property (P3) of the metric projection $P_C$, we have
\begin{align*}
\|d_n-p\|^2 &= \|P_C(w_n-\gamma_n A^*(Aw_n-Bv_n)) - p\|^2 \\
&\le \|w_n-\gamma_n A^*(Aw_n-Bv_n) - p\|^2 - \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 \\
&= \|w_n-p\|^2 + \gamma_n^2\|A^*(Aw_n-Bv_n)\|^2 - 2\gamma_n\langle w_n-p,\ A^*(Aw_n-Bv_n)\rangle \\
&\quad - \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 \\
&= \|w_n-p\|^2 + \gamma_n^2\|A^*(Aw_n-Bv_n)\|^2 - 2\gamma_n\langle Aw_n-Ap,\ Aw_n-Bv_n\rangle \\
&\quad - \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2. \tag{57}
\end{align*}
Similarly,
\begin{align*}
\|e_n-q\|^2 &\le \|v_n-q\|^2 + \gamma_n^2\|B^*(Aw_n-Bv_n)\|^2 + 2\gamma_n\langle Bv_n-Bq,\ Aw_n-Bv_n\rangle \\
&\quad - \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\|^2. \tag{58}
\end{align*}
Adding (57) and (58), and using (18) together with $Ap = Bq$ for the case $Aw_n \ne Bv_n$, we have
\begin{align*}
\|d_n-p\|^2 + \|e_n-q\|^2 &\le \|w_n-p\|^2 + \|v_n-q\|^2 + \gamma_n^2\big[\|A^*(Aw_n-Bv_n)\|^2 + \|B^*(Aw_n-Bv_n)\|^2\big] \\
&\quad - 2\gamma_n\langle Aw_n-Ap-Bv_n+Bq,\ Aw_n-Bv_n\rangle \\
&\quad - \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 - \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\|^2 \\
&\le \|w_n-p\|^2 + \|v_n-q\|^2 + \gamma_n\|Aw_n-Bv_n\|^2 - 2\gamma_n\|Aw_n-Bv_n\|^2 \\
&\quad - \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 - \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\|^2 \\
&\le \|w_n-p\|^2 + \|v_n-q\|^2 - \gamma_n\|Aw_n-Bv_n\|^2 \\
&\quad - \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 - \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\|^2. \tag{59}
\end{align*}
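The projection inequality invoked here as property (P3), in the form it is used in (57) and (78), can be checked numerically. The sketch below is an illustration under our own choice of $C$ as a Euclidean ball (not part of the paper): it samples random points and verifies $\|P_C x - p\|^2 \le \|x-p\|^2 - \|P_C x - x\|^2$ for $p \in C$.

```python
import numpy as np

# Numerical sanity check (illustrative) of the projection inequality
#   ||P_C x - p||^2 <= ||x - p||^2 - ||P_C x - x||^2  for p in C,
# here with C a closed Euclidean ball.
def project_ball(x, radius=1.0):
    # metric projection onto the closed ball {y : ||y|| <= radius}
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

rng = np.random.default_rng(0)
for _ in range(1000):
    x = 3.0 * rng.normal(size=3)          # arbitrary point of the space
    p = project_ball(rng.normal(size=3))  # arbitrary point of C
    Px = project_ball(x)
    lhs = np.linalg.norm(Px - p) ** 2
    rhs = np.linalg.norm(x - p) ** 2 - np.linalg.norm(Px - x) ** 2
    assert lhs <= rhs + 1e-9
```

The same inequality holds for the projection onto any nonempty, closed, convex set; the ball is chosen only because its projection has a closed form.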

For the case $Aw_n = Bv_n$ in (18), we can easily show that (59) holds. Using the definitions of $w_n$ and $v_n$ given in (12) and applying Lemma 2.2, we have
\begin{align*}
\|w_n-p\|^2 &= \|x_n+\theta_n(x_{n-1}-x_n) - p\|^2 \\
&= \|(1-\theta_n)(x_n-p) + \theta_n(x_{n-1}-p)\|^2 \\
&\le (1-\theta_n)\|x_n-p\|^2 + \theta_n\|x_{n-1}-p\|^2,
\end{align*}
and
\[ \|v_n-q\|^2 \le (1-\theta_n)\|u_n-q\|^2 + \theta_n\|u_{n-1}-q\|^2, \]
which imply that
\[ \|w_n-p\|^2 + \|v_n-q\|^2 \le (1-\theta_n)\big[\|x_n-p\|^2 + \|u_n-q\|^2\big] + \theta_n\big[\|x_{n-1}-p\|^2 + \|u_{n-1}-q\|^2\big]. \tag{60} \]
From (48), (56) and (59), we obtain
\begin{align*}
\|x_{n+1}-p\|^2 + \|u_{n+1}-q\|^2 &\le \alpha_n\big(\|x-p\|^2+\|u-q\|^2\big) + (1-\alpha_n)(1-\beta_n)\big[\|w_n-p\|^2+\|v_n-q\|^2\big] \\
&\quad + (1-\alpha_n)\beta_n a_n\Big[\|w_n-p\|^2+\|v_n-q\|^2 - (1-\rho)\big(\|y_n-w_n\|^2+\|z_n-y_n\|^2\big) \\
&\qquad - (1-\rho)\big(\|s_n-v_n\|^2+\|t_n-s_n\|^2\big)\Big] \\
&\quad + (1-\alpha_n)\beta_n(1-a_n)\Big[\|w_n-p\|^2+\|v_n-q\|^2 - \gamma_n\|Aw_n-Bv_n\|^2 \\
&\qquad - \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 - \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\|^2\Big] \\
&\quad - (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\|Kd_n-d_n\|^2 \\
&\quad - (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\|Ge_n-e_n\|^2, \tag{61}
\end{align*}

and hence
\begin{align*}
\|x_{n+1}-p\|^2 + \|u_{n+1}-q\|^2 &\le \alpha_n\big(\|x-p\|^2+\|u-q\|^2\big) + (1-\alpha_n)\big[\|w_n-p\|^2+\|v_n-q\|^2\big] \\
&\quad - (1-\alpha_n)\beta_n a_n\Big[(1-\rho)\big(\|y_n-w_n\|^2+\|z_n-y_n\|^2\big) + (1-\rho)\big(\|s_n-v_n\|^2+\|t_n-s_n\|^2\big)\Big] \\
&\quad - (1-\alpha_n)\beta_n(1-a_n)\Big[\gamma_n\|Aw_n-Bv_n\|^2 + \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 \\
&\qquad + \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\|^2\Big] \\
&\quad - (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\|Kd_n-d_n\|^2 \\
&\quad - (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\|Ge_n-e_n\|^2. \tag{62}
\end{align*}

Substituting (60) into (62) and taking the properties of $\rho$, $\beta_n$, $\alpha_n$, $\sigma_n$ and $a_n$ into account, we obtain
\begin{align*}
\|x_{n+1}-p\|^2 + \|u_{n+1}-q\|^2 &\le \alpha_n\big(\|x-p\|^2+\|u-q\|^2\big) \\
&\quad + (1-\alpha_n)\Big\{(1-\theta_n)\big[\|x_n-p\|^2+\|u_n-q\|^2\big] + \theta_n\big[\|x_{n-1}-p\|^2+\|u_{n-1}-q\|^2\big]\Big\} \\
&\quad - (1-\alpha_n)\beta_n a_n\Big[(1-\rho)\big(\|y_n-w_n\|^2+\|z_n-y_n\|^2\big) + (1-\rho)\big(\|s_n-v_n\|^2+\|t_n-s_n\|^2\big)\Big] \\
&\quad - (1-\alpha_n)\beta_n(1-a_n)\Big[\gamma_n\|Aw_n-Bv_n\|^2 + \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 \\
&\qquad + \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\|^2\Big] \\
&\quad - (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\|Kd_n-d_n\|^2 - (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\|Ge_n-e_n\|^2 \\
&\le \alpha_n\big(\|x-p\|^2+\|u-q\|^2\big) \\
&\quad + (1-\alpha_n)\Big\{(1-\theta_n)\big[\|x_n-p\|^2+\|u_n-q\|^2\big] + \theta_n\big[\|x_{n-1}-p\|^2+\|u_{n-1}-q\|^2\big]\Big\}, \tag{63}
\end{align*}
which can be written as
\[ \Gamma_{n+1}(p,q) \le (1-\alpha_n)\big[(1-\theta_n)\Gamma_n(p,q) + \theta_n\Gamma_{n-1}(p,q)\big] + \alpha_n\big(\|x-p\|^2+\|u-q\|^2\big), \tag{64} \]
where $\Gamma_n(p,q) = \|x_n-p\|^2 + \|u_n-q\|^2$.
Using (64) and the property of convex combinations of real numbers, we get
\[ \Gamma_{n+1}(p,q) \le \max\big\{\|x-p\|^2+\|u-q\|^2,\ \max\{\Gamma_n(p,q),\ \Gamma_{n-1}(p,q)\}\big\}. \tag{65} \]
But
\begin{align*}
\max\{\Gamma_n(p,q),\ \Gamma_{n-1}(p,q)\} &\le \max\Big\{(1-\alpha_{n-1})\big[(1-\theta_{n-1})\Gamma_{n-1}(p,q) + \theta_{n-1}\Gamma_{n-2}(p,q)\big] \\
&\qquad + \alpha_{n-1}\big(\|x-p\|^2+\|u-q\|^2\big),\ \Gamma_{n-1}(p,q)\Big\} \\
&\le \max\Big\{\max\big\{\|x-p\|^2+\|u-q\|^2,\ \max\{\Gamma_{n-1}(p,q),\ \Gamma_{n-2}(p,q)\}\big\},\ \Gamma_{n-1}(p,q)\Big\} \\
&= \max\big\{\|x-p\|^2+\|u-q\|^2,\ \max\{\Gamma_{n-1}(p,q),\ \Gamma_{n-2}(p,q)\}\big\}. \tag{66}
\end{align*}
Substituting (66) into (65) and repeating the process $n-2$ times, we get
\[ \Gamma_{n+1}(p,q) \le \max\big\{\|x-p\|^2+\|u-q\|^2,\ \max\{\Gamma_2(p,q),\ \Gamma_1(p,q)\}\big\}, \]
which implies that $\{\Gamma_n(p,q)\}$ is bounded. Therefore, $\{x_n\}$ and $\{u_n\}$ are bounded, and hence $\{w_n\}$, $\{v_n\}$, $\{Aw_n\}$ and $\{Bv_n\}$ are bounded too. □
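The induction behind (64)–(66) can be illustrated numerically: whenever the recursion (64) holds, each new term $\Gamma_{n+1}(p,q) = \|x_{n+1}-p\|^2 + \|u_{n+1}-q\|^2$ is bounded by a convex combination of quantities already below $\max\{\|x-p\|^2+\|u-q\|^2,\ \Gamma_1, \Gamma_2\}$, so the bound propagates. A minimal scalar sketch, with arbitrary illustrative values (not taken from the paper):

```python
# Illustrative check of the boundedness argument in (64)-(66): if
#   Gamma_{n+1} <= (1 - alpha_n)[(1 - theta_n) Gamma_n + theta_n Gamma_{n-1}]
#                  + alpha_n * c,
# then every Gamma_n stays below max{c, Gamma_1, Gamma_2}, since the
# right-hand side is a convex combination of quantities below that bound.
c = 4.0                      # plays the role of ||x - p||^2 + ||u - q||^2
gamma = [7.0, 2.5]           # Gamma_1, Gamma_2 (arbitrary)
bound = max(c, gamma[0], gamma[1])
for n in range(2, 5000):
    alpha = 1.0 / (n + 1)    # alpha_n in (0, 1)
    theta = 0.3              # theta_n in [0, 1)
    # worst case: take the recursion with equality
    gamma.append((1 - alpha) * ((1 - theta) * gamma[-1] + theta * gamma[-2])
                 + alpha * c)
assert max(gamma) <= bound + 1e-9
```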

Theorem 3.5: Suppose conditions (C1)–(C6) hold. Let $Tx \ne 0$ for all $x \in C$ and $Su \ne 0$ for all $u \in D$. Then the sequence $\{(x_n, u_n)\}$ produced by Algorithm 3.1 strongly converges to a point $(p^*, q^*) \in \Upsilon$ such that $(p^*, q^*) = P_\Upsilon(x, u)$.

Proof: Let $(p^*, q^*) = P_\Upsilon(x, u)$. From the definitions of $x_{n+1}$ and $u_{n+1}$, (38), (43), (44) and Lemma 2.2, we have
\begin{align*}
\|x_{n+1}-p^*\|^2 + \|u_{n+1}-q^*\|^2 &= \|\alpha_n(x-p^*) + (1-\alpha_n)(b_n-p^*)\|^2 + \|\alpha_n(u-q^*) + (1-\alpha_n)(c_n-q^*)\|^2 \\
&\le (1-\alpha_n)^2\|b_n-p^*\|^2 + 2\alpha_n\langle x-p^*,\ x_{n+1}-p^*\rangle \\
&\quad + (1-\alpha_n)^2\|c_n-q^*\|^2 + 2\alpha_n\langle u-q^*,\ u_{n+1}-q^*\rangle \\
&\le (1-\alpha_n)^2\big(\|b_n-p^*\|^2 + \|c_n-q^*\|^2\big) \\
&\quad + 2\alpha_n\big[\langle x-p^*,\ x_{n+1}-p^*\rangle + \langle u-q^*,\ u_{n+1}-q^*\rangle\big].
\end{align*}
This yields
\begin{align*}
\|x_{n+1}-p^*\|^2 + \|u_{n+1}-q^*\|^2 &\le (1-\alpha_n)(1-\beta_n)\big[\|w_n-p^*\|^2+\|v_n-q^*\|^2\big] \\
&\quad + (1-\alpha_n)\beta_n a_n\big[\|z_n-p^*\|^2+\|t_n-q^*\|^2\big] \\
&\quad + (1-\alpha_n)\beta_n(1-a_n)\big[\|d_n-p^*\|^2+\|e_n-q^*\|^2\big] \\
&\quad - (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\big[\|Kd_n-d_n\|^2+\|Ge_n-e_n\|^2\big] \\
&\quad + 2\alpha_n\langle (x,u)-(p^*,q^*),\ (x_{n+1},u_{n+1})-(p^*,q^*)\rangle. \tag{67}
\end{align*}
But (56) and (59) imply that $\|z_n-p^*\|^2+\|t_n-q^*\|^2 \le \|w_n-p^*\|^2+\|v_n-q^*\|^2$ and $\|d_n-p^*\|^2+\|e_n-q^*\|^2 \le \|w_n-p^*\|^2+\|v_n-q^*\|^2$, respectively. Thus, substituting these two inequalities into (67) and using the property of $\sigma_n$, we get
\begin{align*}
\|x_{n+1}-p^*\|^2 + \|u_{n+1}-q^*\|^2 &\le (1-\alpha_n)(1-\beta_n)\big[\|w_n-p^*\|^2+\|v_n-q^*\|^2\big] \\
&\quad + (1-\alpha_n)\beta_n a_n\big[\|w_n-p^*\|^2+\|v_n-q^*\|^2\big] \\
&\quad + (1-\alpha_n)\beta_n(1-a_n)\big[\|w_n-p^*\|^2+\|v_n-q^*\|^2\big] \\
&\quad - (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\big[\|Kd_n-d_n\|^2+\|Ge_n-e_n\|^2\big] \\
&\quad + 2\alpha_n\langle (x,u)-(p^*,q^*),\ (x_{n+1},u_{n+1})-(p^*,q^*)\rangle \\
&\le (1-\alpha_n)\big[\|w_n-p^*\|^2+\|v_n-q^*\|^2\big] \\
&\quad + 2\alpha_n\langle (x,u)-(p^*,q^*),\ (x_{n+1},u_{n+1})-(p^*,q^*)\rangle. \tag{68}
\end{align*}

From Remark 3.1, we have that the sequences $\{\theta_n\|x_n-x_{n-1}\|\}$ and $\{\theta_n\|u_n-u_{n-1}\|\}$ are bounded. Thus, we obtain from the boundedness of $\{x_n\}$ and $\{u_n\}$ that
\begin{align*}
\|w_n-p^*\|^2 &= \|x_n-p^* + \theta_n(x_{n-1}-x_n)\|^2 \\
&\le \|x_n-p^*\|^2 + \theta_n^2\|x_n-x_{n-1}\|^2 + 2\theta_n\|x_n-p^*\|\,\|x_n-x_{n-1}\| \\
&\le \|x_n-p^*\|^2 + M_1\theta_n\|x_n-x_{n-1}\|, \tag{69}
\end{align*}
for some $M_1 \ge 0$. Similarly, we have
\[ \|v_n-q^*\|^2 \le \|u_n-q^*\|^2 + M_2\theta_n\|u_n-u_{n-1}\|, \tag{70} \]
for some $M_2 \ge 0$. From (69) and (70), we obtain
\[ \|w_n-p^*\|^2 + \|v_n-q^*\|^2 \le \|x_n-p^*\|^2 + \|u_n-q^*\|^2 + M_3\theta_n\big[\|x_n-x_{n-1}\| + \|u_n-u_{n-1}\|\big], \tag{71} \]
where $M_3 = \max\{M_1, M_2\}$. Combining (68) and (71), we get
\begin{align*}
\|x_{n+1}-p^*\|^2 + \|u_{n+1}-q^*\|^2 &\le (1-\alpha_n)\big[\|x_n-p^*\|^2+\|u_n-q^*\|^2\big] \\
&\quad + \alpha_n\Big[M_3\frac{\theta_n}{\alpha_n}\big(\|x_n-x_{n-1}\| + \|u_n-u_{n-1}\|\big) \\
&\qquad + 2\langle (x,u)-(p^*,q^*),\ (x_{n+1},u_{n+1})-(p^*,q^*)\rangle\Big],
\end{align*}

which can be written as
\[ \Gamma_{n+1}(p^*,q^*) \le (1-\alpha_n)\Gamma_n(p^*,q^*) + \alpha_n\omega_n, \tag{72} \]
where $\omega_n = M_3\frac{\theta_n}{\alpha_n}\big(\|x_n-x_{n-1}\| + \|u_n-u_{n-1}\|\big) + 2\langle (x,u)-(p^*,q^*),\ (x_{n+1},u_{n+1})-(p^*,q^*)\rangle$.
We obtain from (62) and (71) that
\begin{align*}
(1-\alpha_n)\beta_n a_n&\Big[(1-\rho)\big(\|y_n-w_n\|^2+\|z_n-y_n\|^2\big) + (1-\rho)\big(\|s_n-v_n\|^2+\|t_n-s_n\|^2\big)\Big] \\
&+ (1-\alpha_n)\beta_n(1-a_n)\Big[\gamma_n\|Aw_n-Bv_n\|^2 + \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 \\
&\qquad + \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\|^2\Big] \\
&+ (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\|Kd_n-d_n\|^2 + (1-\alpha_n)\sigma_n\beta_n(1-a_n)(1-\sigma_n)\|Ge_n-e_n\|^2 \\
&\le \Gamma_n(p^*,q^*) - \Gamma_{n+1}(p^*,q^*) - \alpha_n\Gamma_n(p^*,q^*) + (1-\alpha_n)M_3\theta_n\big[\|x_n-x_{n-1}\| + \|u_n-u_{n-1}\|\big] \\
&\quad + \alpha_n\big(\|x-p^*\|^2+\|u-q^*\|^2\big) \\
&\le \Gamma_n(p^*,q^*) - \Gamma_{n+1}(p^*,q^*) + \alpha_n\Big[M_3\frac{\theta_n}{\alpha_n}\big(\|x_n-x_{n-1}\| + \|u_n-u_{n-1}\|\big) - \Gamma_n(p^*,q^*)\Big] \\
&\quad + \alpha_n\big(\|x-p^*\|^2+\|u-q^*\|^2\big). \tag{73}
\end{align*}
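The recursion (72) is the setting of Lemma 2.7 (assuming this is the standard Xu-type lemma): if $\sum_n \alpha_n = \infty$ and $\limsup_n \omega_n \le 0$, then $\Gamma_n(p^*,q^*) \to 0$. A scalar simulation with illustrative sequences of our own choosing:

```python
# Scalar illustration (assumption: Lemma 2.7 is the standard Xu-type
# lemma): if Gamma_{n+1} <= (1 - alpha_n) Gamma_n + alpha_n * omega_n
# with sum(alpha_n) divergent and limsup omega_n <= 0, then Gamma_n -> 0.
g = 10.0                        # Gamma_1, an arbitrary starting value
for n in range(1, 200001):
    alpha = 1.0 / (n + 1)       # sum of alpha_n diverges
    omega = n ** -0.5           # omega_n -> 0
    g = (1 - alpha) * g + alpha * omega
assert g < 1e-2                 # Gamma_n has become small
```

The divergence of $\sum\alpha_n$ is essential: with a summable step sequence the simulated value would stall at a positive limit.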

We now consider two cases for the sequence $\{\Gamma_n(p^*,q^*)\}$.

Case 1. Suppose $\Gamma_{n+1}(p^*,q^*) \le \Gamma_n(p^*,q^*)$ for all $n \ge N$, for some $N \in \mathbb{N}$. Since the sequence is bounded, $\lim_{n\to\infty}\Gamma_n(p^*,q^*)$ exists. Then, taking the limit on both sides of (73) as $n \to \infty$ and taking the conditions on the parameters $a_n$, $\rho$, $\alpha_n$, $\beta_n$, $\sigma_n$ into account, we obtain
\[ \|y_n-w_n\| \to 0 \ \text{and}\ \|z_n-y_n\| \to 0, \ \text{and hence}\ \|z_n-w_n\| \to 0, \ \text{as}\ n \to \infty. \tag{74} \]
In the same way, we also get
\[ \|s_n-v_n\| \to 0 \ \text{and}\ \|t_n-s_n\| \to 0, \ \text{and hence}\ \|t_n-v_n\| \to 0. \tag{75} \]
Furthermore, we have
\[ \|Aw_n-Bv_n\| \to 0, \qquad \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\| \to 0, \tag{76} \]
and
\[ \|e_n-(v_n+\gamma_n B^*(Aw_n-Bv_n))\| \to 0, \quad \|Kd_n-d_n\| \to 0, \quad \|Ge_n-e_n\| \to 0 \ \text{as}\ n \to \infty. \tag{77} \]
By the definition of $d_n$ in (17) and property (P3) of the metric projection, we have
\[ \|d_n-(w_n-\gamma_n A^*(Aw_n-Bv_n))\|^2 + \|d_n-w_n\|^2 \le \gamma_n^2\|A^*(Aw_n-Bv_n)\|^2. \tag{78} \]
Thus, combining (76) and (78), we get
\[ \|d_n-w_n\| \to 0 \ \text{and also}\ \|e_n-v_n\| \to 0. \tag{79} \]
We also have from (77) that
\[ \|h_n-d_n\| = (1-\sigma_n)\|Kd_n-d_n\| \to 0 \ \text{as}\ n \to \infty. \tag{80} \]
By the boundedness of $\{(x_n,u_n)\}$ and Theorem 3.4, there is a subsequence $\{(x_{n_k},u_{n_k})\}$ of $\{(x_n,u_n)\}$ with $(x_{n_k},u_{n_k}) \rightharpoonup (z,v)$ and
\[ \limsup_{n\to\infty}\langle (x,u)-(p^*,q^*),\ (x_n,u_n)-(p^*,q^*)\rangle = \lim_{k\to\infty}\langle (x,u)-(p^*,q^*),\ (x_{n_k},u_{n_k})-(p^*,q^*)\rangle. \tag{81} \]
Hence, from (74) and (75), we get
\[ \lim_{k\to\infty}\|y_{n_k}-w_{n_k}\| = 0 \quad\text{and}\quad \lim_{k\to\infty}\|s_{n_k}-v_{n_k}\| = 0. \tag{82} \]
Now, from (12), we get that $\|w_{n_k}-x_{n_k}\| \to 0$ and $\|v_{n_k}-u_{n_k}\| \to 0$ as $k \to \infty$, and hence $w_{n_k} \rightharpoonup z$ and $v_{n_k} \rightharpoonup v$. Using (82) and Lemma 3.3, we obtain $(z,v) \in MVI(C,T) \times MVI(D,S)$. From (79) and (12), we obtain
\[ \|d_{n_k}-x_{n_k}\| \le \|d_{n_k}-w_{n_k}\| + \|w_{n_k}-x_{n_k}\| \to 0 \ \text{as}\ k \to \infty. \tag{83} \]
Thus, we have from (83) that $d_{n_k} \rightharpoonup z$, which together with (77) and the demiclosedness of $I-K$ gives that $z \in F(K)$. Similarly, we obtain that $v \in F(G)$. Hence, we obtain
\[ (z,v) \in (MVI(C,T) \cap F(K)) \times (MVI(D,S) \cap F(G)). \]
Then, using Lemma 2.2, we get
\begin{align*}
\|Az-Bv\|^2 &= \|(Aw_{n_k}-Bv_{n_k}) + (Az-Aw_{n_k}+Bv_{n_k}-Bv)\|^2 \\
&\le \|Aw_{n_k}-Bv_{n_k}\|^2 + 2\langle Az-Aw_{n_k}+Bv_{n_k}-Bv,\ Az-Bv\rangle. \tag{84}
\end{align*}
Since $A$ is a bounded linear mapping, we get $Aw_{n_k} \rightharpoonup Az$, and similarly we have $Bv_{n_k} \rightharpoonup Bv$. Thus, taking the limsup on both sides of (84) and using (76), we obtain $\|Az-Bv\| = 0$. Thus, we obtain $Az = Bv$ and hence $(z,v) \in \Upsilon$. In addition, since $(p^*,q^*) = P_\Upsilon(x,u)$, Equation (81) and Lemma 2.6 give
\begin{align*}
\limsup_{n\to\infty}\langle (x,u)-(p^*,q^*),\ (x_n,u_n)-(p^*,q^*)\rangle &= \lim_{k\to\infty}\langle (x,u)-(p^*,q^*),\ (x_{n_k},u_{n_k})-(p^*,q^*)\rangle \\
&= \langle (x,u)-(p^*,q^*),\ (z,v)-(p^*,q^*)\rangle \le 0. \tag{85}
\end{align*}

From (12), (14) and (17),
\begin{align*}
\|x_{n+1}-x_n\| &\le \|x_{n+1}-w_n\| + \|w_n-x_n\| \\
&= \big\|\alpha_n(x-w_n) + (1-\alpha_n)\beta_n\big[a_n(z_n-w_n) + (1-a_n)(h_n-w_n)\big]\big\| + \|w_n-x_n\| \\
&\le \alpha_n\|x-w_n\| + (1-\alpha_n)\beta_n\big[a_n\|z_n-w_n\| + (1-a_n)\|h_n-w_n\|\big] + \|w_n-x_n\|. \tag{86}
\end{align*}
From (79) and (80), we have that
\[ \|h_n-w_n\| \le \|h_n-d_n\| + \|d_n-w_n\| \to 0 \ \text{as}\ n \to \infty. \tag{87} \]
Using the assumptions on $\alpha_n$, $a_n$ and $\beta_n$, since $\{w_n\}$ is bounded, Remark 3.1, (74) and (87) imply that the limit of the right-hand side of the last inequality in (86) is zero. Thus, we get
\[ \|x_{n+1}-x_n\| \to 0. \tag{88} \]
Similarly, we can get
\[ \|u_{n+1}-u_n\| \to 0. \tag{89} \]
Using (88) and (89) together with (85), we have
\[ \limsup_{n\to\infty}\langle (x,u)-(p^*,q^*),\ (x_{n+1},u_{n+1})-(p^*,q^*)\rangle \le 0. \tag{90} \]
Hence, combining the assumption on $\alpha_n$, Remark 3.1 and (90), we obtain $\limsup_{n\to\infty}\omega_n \le 0$. Therefore, from (72) and Lemma 2.7, we get $\Gamma_n(p^*,q^*) \to 0$, which implies that $x_n \to p^*$ and $u_n \to q^*$ as $n \to \infty$.

Case 2. Assume that there is a subsequence $\{\Gamma_{n_j}(p^*,q^*)\}$ of $\{\Gamma_n(p^*,q^*)\}$ such that $\Gamma_{n_j}(p^*,q^*) < \Gamma_{n_j+1}(p^*,q^*)$ for all $j \in \mathbb{N}$. In this case, by Lemma 2.8, there exists a nondecreasing sequence $\{l_k\}$ of $\mathbb{N}$ such that $\lim_{k\to\infty} l_k = \infty$ and the following inequality holds for each $k \in \mathbb{N}$:
\[ \max\{\Gamma_{l_k}(p^*,q^*),\ \Gamma_k(p^*,q^*)\} \le \Gamma_{l_k+1}(p^*,q^*). \tag{91} \]
From (72) we have
\[ \Gamma_{l_k+1}(p^*,q^*) \le (1-\alpha_{l_k})\Gamma_{l_k}(p^*,q^*) + \alpha_{l_k}\omega_{l_k}. \tag{92} \]
Combining (91) and (92), we obtain
\[ \Gamma_{l_k+1}(p^*,q^*) \le (1-\alpha_{l_k})\Gamma_{l_k+1}(p^*,q^*) + \alpha_{l_k}\omega_{l_k}, \ \text{and hence}\ \Gamma_k(p^*,q^*) \le \omega_{l_k}. \]
This yields
\[ \limsup_{k\to\infty}\Gamma_k(p^*,q^*) \le \limsup_{k\to\infty}\omega_{l_k}. \]
Similar reasoning to that of Case 1 leads us to the conclusion that $\limsup_{k\to\infty}\omega_{l_k} \le 0$, which implies that $\lim_{k\to\infty}\Gamma_k(p^*,q^*) = 0$, that is, $(x_k,u_k) \to (p^*,q^*)$ as $k \to \infty$. □

3.2. Inertial-like Tseng's extragradient algorithm

In this subsection, we solve the split equality variational inequality and fixed point problem (SEVIFFP) using an inertial-type Tseng extragradient algorithm.

Algorithm 3.2: Initialization: Choose $\{\epsilon_n\}$, $\{\alpha_n\}$, $\{a_n\}$, $\{\sigma_n\}$ and $\{\beta_n\}$ satisfying assumption (C6). Let $x, x_0, x_1 \in C$, $u, u_0, u_1 \in D$, $0 \le \theta < 1$ and $\rho, \xi, \kappa, \omega \in (0,1)$. Set $n = 1$.

Step 1. Provided that $\theta_n$ is updated by (12), compute
\[ w_n = x_n + \theta_n(x_{n-1}-x_n) \quad\text{and}\quad v_n = u_n + \theta_n(u_{n-1}-u_n). \tag{93} \]

Step 2. Compute
\[
\begin{cases}
y_n = P_C(w_n - \tau_n Tw_n), \\
z_n = y_n - \tau_n(Ty_n - Tw_n), \\
s_n = P_D(v_n - \varphi_n Sv_n), \\
t_n = s_n - \varphi_n(Ss_n - Sv_n),
\end{cases} \tag{94}
\]
where $\tau_n$ and $\varphi_n$ are defined by (15) and (16), respectively.

Step 3. Compute
\[
\begin{cases}
d_n = P_C(w_n - \gamma_n A^*(Aw_n - Bv_n)), \\
h_n = \sigma_n d_n + (1-\sigma_n)Kd_n, \\
x_{n+1} = \alpha_n x + (1-\alpha_n)\big[(1-\beta_n)w_n + \beta_n(a_n z_n + (1-a_n)h_n)\big], \\
e_n = P_D(v_n + \gamma_n B^*(Aw_n - Bv_n)), \\
r_n = \sigma_n e_n + (1-\sigma_n)Ge_n, \\
u_{n+1} = \alpha_n u + (1-\alpha_n)\big[(1-\beta_n)v_n + \beta_n(a_n t_n + (1-a_n)r_n)\big],
\end{cases} \tag{95}
\]
where $\gamma_n$ is updated by (18).

Step 4. Set $n := n+1$ and go to Step 1.
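The steps above can be sketched in code. The sketch below is an illustration only, under simplifying assumptions that are not taken from the paper: the adaptive step sizes $\tau_n$, $\varphi_n$, $\gamma_n$ of (15), (16) and (18) are replaced by small constants, $C$ and $D$ are taken to be Euclidean balls, $K = G = I$ (so $h_n = d_n$ and $r_n = e_n$), and the anchor points $x$, $u$ are set to $0$ as in Corollary 3.7 below.

```python
import numpy as np

# Toy sketch of one pass of Algorithm 3.2 (Steps 1-3) under the
# simplifying assumptions stated above; NOT the paper's exact scheme.
def proj_ball(p, r=2.0):
    nrm = np.linalg.norm(p)
    return p if nrm <= r else (r / nrm) * p

def sweep(x, x_prev, u, u_prev, T, S, A, B, n,
          theta=0.3, tau=0.1, gamma=0.05, beta=0.75, a=0.5):
    alpha = 1.0 / (n + 1)
    # Step 1: inertial extrapolation (93)
    w = x + theta * (x_prev - x)
    v = u + theta * (u_prev - u)
    # Step 2: Tseng-type forward-backward-forward updates (94)
    y = proj_ball(w - tau * T(w)); z = y - tau * (T(y) - T(w))
    s = proj_ball(v - tau * S(v)); t = s - tau * (S(s) - S(v))
    # Step 3: split-equality correction and Halpern-type averaging (95)
    res = A @ w - B @ v
    h = proj_ball(w - gamma * A.T @ res)   # = d_n, since K = I
    r = proj_ball(v + gamma * B.T @ res)   # = e_n, since G = I
    x_new = (1 - alpha) * ((1 - beta) * w + beta * (a * z + (1 - a) * h))
    u_new = (1 - alpha) * ((1 - beta) * v + beta * (a * t + (1 - a) * r))
    return x_new, u_new

# Monotone test data: with T = S = I and A = B = I the split problem has
# the unique solution x* = u* = 0, and the iterates drift toward it.
T = S = lambda p: p
A = B = np.eye(2)
x = x_prev = np.array([1.0, 1.0])
u = u_prev = np.array([1.0, -1.0])
for n in range(1, 1001):
    (x, u), (x_prev, u_prev) = sweep(x, x_prev, u, u_prev, T, S, A, B, n), (x, u)
```

Running the loop drives both $\|x_n\|$ and $\|u_n\|$ toward zero; the constant step sizes here are chosen small enough for the Tseng condition $\tau\|Ty-Tw\| \le \rho\|y-w\|$ to hold trivially for $T = I$.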

Theorem 3.6: Suppose conditions (C1)–(C6) hold. Let $Tx \ne 0$ for all $x \in C$ and $Su \ne 0$ for all $u \in D$. Then the sequence $\{(x_n, u_n)\}$ produced by Algorithm 3.2 strongly converges to a point $(p^*, q^*) \in \Upsilon$ such that $(p^*, q^*) = P_\Upsilon(x, u)$.

Proof: Let $(p^*, q^*) = P_\Upsilon(x, u)$ and let $(p,q) \in \Upsilon$. For the proof, let us first show that
\[ \|z_n-p\|^2 + \|t_n-q\|^2 \le \|w_n-p\|^2 + \|v_n-q\|^2 - (1-\rho^2)\|w_n-y_n\|^2 - (1-\rho^2)\|v_n-s_n\|^2. \tag{96} \]
As a direct consequence of the definitions of $z_n$, $s_n$, $\tau_n$ and $\varphi_n$, we find
\[ \|z_n-y_n\| \le \rho\|w_n-y_n\| \quad\text{and}\quad \|s_n-t_n\| \le \rho\|v_n-s_n\|. \tag{97} \]
Now, since $(p,q) \in \Upsilon$, using the definition of $z_n$,
\begin{align*}
\|z_n-p\|^2 &= \|y_n - \tau_n(Ty_n-Tw_n) - p\|^2 \\
&= \|y_n-p\|^2 + \tau_n^2\|Ty_n-Tw_n\|^2 - 2\tau_n\langle y_n-p,\ Ty_n-Tw_n\rangle \\
&= \|w_n-p\|^2 + \|y_n-w_n\|^2 - 2\langle y_n-w_n,\ y_n-w_n\rangle + 2\langle y_n-w_n,\ y_n-p\rangle \\
&\quad + \tau_n^2\|Ty_n-Tw_n\|^2 - 2\tau_n\langle y_n-p,\ Ty_n-Tw_n\rangle \\
&= \|w_n-p\|^2 - \|y_n-w_n\|^2 + 2\langle y_n-w_n,\ y_n-p\rangle \\
&\quad + \tau_n^2\|Ty_n-Tw_n\|^2 - 2\tau_n\langle y_n-p,\ Ty_n-Tw_n\rangle. \tag{98}
\end{align*}

Since $y_n = P_C(w_n-\tau_n Tw_n)$ and $p \in C$, we have $\langle y_n-(w_n-\tau_n Tw_n),\ y_n-p\rangle \le 0$, which implies that
\[ \langle y_n-w_n,\ y_n-p\rangle \le -\tau_n\langle Tw_n,\ y_n-p\rangle. \tag{99} \]
Furthermore, using the definition of $\tau_n$, we have
\[ \tau_n\|Tw_n-Ty_n\| \le \rho\|w_n-y_n\|. \tag{100} \]
Combining (98), (99) and (100), we get
\[ \|z_n-p\|^2 \le \|w_n-p\|^2 - (1-\rho^2)\|w_n-y_n\|^2 - 2\tau_n\langle Ty_n,\ y_n-p\rangle. \tag{101} \]
But $p \in MVI(C,T)$ and $y_n \in C$ imply that $\langle Ty_n,\ y_n-p\rangle \ge 0$. Therefore,
\[ \|z_n-p\|^2 \le \|w_n-p\|^2 - (1-\rho^2)\|w_n-y_n\|^2. \tag{102} \]
Similarly, we get
\[ \|t_n-q\|^2 \le \|v_n-q\|^2 - (1-\rho^2)\|v_n-s_n\|^2. \tag{103} \]
Combining (102) and (103), we get
\[ \|z_n-p\|^2 + \|t_n-q\|^2 \le \|w_n-p\|^2 + \|v_n-q\|^2 - (1-\rho^2)\|w_n-y_n\|^2 - (1-\rho^2)\|v_n-s_n\|^2. \tag{104} \]
Now, replacing the estimate (56) by (104), taking $(p,q) = (p^*,q^*)$ and following the method of proof of Theorem 3.5, we obtain the required assertion. □

If $u = 0$ and $x = 0$ in Theorems 3.5 and 3.6, then we get the following corollary.

Corollary 3.7: Suppose conditions (C1)–(C6) hold. Then the sequence $\{(x_n,u_n)\}$ produced by Algorithm 3.1 or 3.2 with $u = 0$ and $x = 0$ strongly converges to the point $(p^*,q^*) = P_\Upsilon(0,0)$.

In Theorems 3.5 and 3.6, if $T$ and $S$ are uniformly continuous on bounded subsets of $H_1$ and $H_2$, respectively, and pseudomonotone on $H_1$ and $H_2$, respectively, then $MVI(C,T) = VI(C,T)$. In this case, the requirement that $T$ and $S$ be nonzero on $C$ and $D$, respectively, is not needed. Indeed, we have the following result.

Corollary 3.8: Suppose conditions (C1), (C3), (C4) and (C6) hold. Let $T : H_1 \to H_1$ and $S : H_2 \to H_2$ be pseudomonotone and uniformly continuous, and suppose that $Tx_n \rightharpoonup Tx$ and $Su_n \rightharpoonup Su$ whenever $\{x_n\}$ and $\{u_n\}$ are sequences in $C$ and $D$, respectively, such that $x_n \rightharpoonup x$ and $u_n \rightharpoonup u$. Let the solution set $\Omega = \{(p,q) \in (VI(C,T) \cap F(K)) \times (VI(D,S) \cap F(G)) : Ap = Bq\}$ be nonempty. Then the sequence $\{(x_n,u_n)\}$ produced by Algorithm 3.1 or 3.2 strongly converges to a point $(p^*,q^*) = P_\Omega(x,u)$.

Remark 3.9: If, in Corollary 3.8, we assume $H_1 = H_2 = H_3$, $A = B = I$, $T = S : H \to H$, $C = D$, $K = G = I$ and $a_n = 1$ for all $n$, then our algorithms become algorithms for solving VIP($C,T$). Thus, our result extends the results of Tan and Cho [46], which use Lipschitz continuity and pseudomonotonicity assumptions on the underlying mapping $T$, to the more general class of uniformly continuous and quasi-monotone mappings.

Next, we present some more remarks with corollaries of the theorems proved above.

Remark 3.10: If we take $H_1 = H_2 = H_3 = H$ and $A = B = I_H$, then our main result reduces to the problem of finding a common solution of Minty variational inequality and fixed point problems, that is, finding two points $(x^*,u^*) \in (MVI(C,T) \cap F(K)) \times (MVI(D,S) \cap F(G))$ such that $x^* = u^*$. Let
\[ \Omega^* = \{(x^*,x^*) \in (MVI(C,T) \cap F(K)) \times (MVI(D,S) \cap F(G))\}. \]
We now deduce the following corollary.

Corollary 3.11: Assume that conditions (C1)–(C3) and (C6) hold with $H_1 = H_2 = H_3 = H$. Assume that $Tx \ne 0$ for all $x \in C$ and $Su \ne 0$ for all $u \in D$. If $\Omega^* \ne \emptyset$, then the sequence $\{(x_n,u_n)\}$ produced by Algorithm 3.1 or 3.2 with $A = B = I_H$ strongly converges to $(x^*,u^*)$, where $(x^*,u^*) = P_{\Omega^*}(x,u)$.

Remark 3.12: Let $H_1$ and $H_2$ be real Hilbert spaces, and let $C$ and $D$ be nonempty, closed and convex subsets of $H_1$ and $H_2$, respectively. Let $T : H_1 \to H_1$ and $S : H_2 \to H_2$ be two nonlinear mappings, and let $K : H_1 \to H_1$ and $G : H_2 \to H_2$ be quasi-nonexpansive mappings. Let $A : H_1 \to H_2$ be a bounded linear mapping and let $A^*$ be its adjoint. The Minty split variational inequality and fixed point problem (MSVIFFP) (see, e.g. Censor et al. [53]) is to find $x^* \in H_1$ and $u^* \in H_2$ such that
\[ x^* \in MVI(C,T) \cap F(K), \quad u^* \in MVI(D,S) \cap F(G) \quad\text{and}\quad u^* = Ax^*. \tag{105} \]
In 2012, Censor et al. [53] studied the SVIP. Phase retrieval, image and signal processing, and data compression are among the areas where this problem is applicable (see [54, 55]).

In Algorithms 3.1 and 3.2, if $H_2 = H_3$ and $B$ is the identity mapping on $H_2$, then the subsequent result is obtained.

Corollary 3.13: Suppose conditions (C1)–(C4) and (C6) hold with $H_2 = H_3$ and $B = I$, the identity mapping on $H_2$. Let $Tx \ne 0$ for all $x \in C$ and $Su \ne 0$ for all $u \in D$. Assume that $\Upsilon' = \{(p,q) \in (MVI(C,T) \cap F(K)) \times (MVI(D,S) \cap F(G)) : Ap = q\}$ is nonempty. Then the sequence $\{(x_n,u_n)\}$ produced by Algorithm 3.1 or 3.2 strongly converges to a point $(p^*,q^*) = P_{\Upsilon'}(x,u)$.

Proof: Taking $H_2 = H_3$ and $B = I$, the conclusion follows from Theorems 3.5 and 3.6. □

4. Numerical examples

This section contains some numerical examples that illustrate the behaviour of the proposed algorithms.


Example 4.1: Let $H_1 = H_2 = \mathbb{R}^2$ with norm $\|y\| = \sqrt{y_1^2+y_2^2}$ and inner product $\langle y,z\rangle = \langle (y_1,y_2),(z_1,z_2)\rangle = y_1z_1 + y_2z_2$, for $y = (y_1,y_2), z = (z_1,z_2) \in \mathbb{R}^2$. Let $C = [1,2] \times [-1,1]$ and $D = [-1,2] \times [1,2]$, which are nonempty, closed and convex subsets of $\mathbb{R}^2$. We define $T$ and $S$ on $\mathbb{R}^2$ by
\[ T(y_1,y_2) = \Big(0,\ \frac{y_2^2}{1+y_2^2}\Big) \quad\text{and}\quad S(y_1,y_2) = \Big(\frac{y_1^2}{1+y_1^2},\ 0\Big). \]
We see that both $T$ and $S$ are quasi-monotone and uniformly continuous on $\mathbb{R}^2$. If we also define $A : \mathbb{R}^2 \to \mathbb{R}^2$ and $B : \mathbb{R}^2 \to \mathbb{R}^2$ by $A(y_1,y_2) = (2y_1, 3y_2)$ and $B(y_1,y_2) = (1.5y_2, 3y_1)$, respectively, then $A$ and $B$ are both bounded linear mappings with adjoints given by $A^*(y_1,y_2) = (2y_1, 3y_2)$ and $B^*(y_1,y_2) = (3y_2, 1.5y_1)$, respectively. Let $K, G : \mathbb{R}^2 \to \mathbb{R}^2$ be given by $K(y_1,y_2) = (y_1, -1)$ and $G(y_1,y_2) = (-1, y_2)$. Take $\xi = 0.5$, $\kappa = 0.1$, $\omega = 1/2$, $\theta = 0.7$, $\rho = 0.2$, $\sigma = 0.3$, $\epsilon_n = \frac{1}{n^2}$, $\alpha_n = \frac{1}{n}$, $\beta_n = 0.75$, $a_n = 0.5$, $\sigma_n = 0.8$ and $\xi_n = \frac{1}{n^{1.1}}$, with given point $(x,u) = ((1.2,-0.2),(-0.2,1.6)) \in C \times D$ and initial points $(x_0,u_0) = ((1.5,0.5),(1.2,-0.5))$ and $(x_1,u_1) = ((1.5,1.5),(0,1.1))$. Then conditions (C1)–(C6) are satisfied. Using MATLAB, we obtain the figure below, which shows that the sequences generated by both Algorithms 3.1 (Alg. 2) and 3.2 (Alg. 3) converge strongly to the solution $(p^*,q^*) = ((1.2,-1),(-1,1.6))$ (see Figure 1).
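The claimed limit can be verified directly. The following checks (our own, independent of the MATLAB run reported above) confirm that $(p^*,q^*) = ((1.2,-1),(-1,1.6))$ satisfies the fixed-point and split-equality conditions, and that the adjoint of $B$ is its matrix transpose:

```python
import numpy as np

# Sanity checks for Example 4.1 (illustrative, not from the paper's code).
A = np.array([[2.0, 0.0], [0.0, 3.0]])   # A(y1, y2) = (2y1, 3y2)
B = np.array([[0.0, 1.5], [3.0, 0.0]])   # B(y1, y2) = (1.5y2, 3y1)
K = lambda y: np.array([y[0], -1.0])
G = lambda y: np.array([-1.0, y[1]])

p = np.array([1.2, -1.0])
q = np.array([-1.0, 1.6])
assert np.allclose(A @ p, B @ q)                      # Ap* = Bq* = (2.4, -3)
assert np.allclose(K(p), p) and np.allclose(G(q), q)  # p* in F(K), q* in F(G)

# The adjoint of B is its transpose, i.e. B*(y1, y2) = (3y2, 1.5y1):
x, y = np.array([1.0, 2.0]), np.array([3.0, 4.0])
assert np.isclose((B @ x) @ y, x @ (B.T @ y))         # <Bx, y> = <x, B*y>
```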

Figure 1. Convergence of $\{(x_n,u_n)\}$ with tolerance $D_n < 10^{-4}$.

Example 4.2: Let $H = H_1 = H_2 = H_3 = L_2([0,1])$ endowed with norm $\|x\| = \big(\int_0^1 |x(t)|^2\,dt\big)^{1/2}$ for all $x \in H$ and inner product $\langle x,y\rangle = \int_0^1 x(t)y(t)\,dt$ for all $x,y \in H$. Consider $C = \{x \in H : \|x\| \le 1\}$ and $D = \{x \in H : \|x\| \le 4\}$. Then $C$ and $D$ are nonempty, closed and convex subsets of $H$. Let $T, S : H \to H$ be defined by
\[ Tx(t) = \begin{cases} |2x(t)-4|, & \text{if } \|x\| > 1, \\ x^2(t)+1, & \text{if } \|x\| \le 1, \end{cases} \]
and
\[ Sy(t) = \begin{cases} |3y(t)-12|, & \text{if } \|y\| > 4, \\ y^2(t)+2, & \text{if } \|y\| \le 4. \end{cases} \]
Then it can be shown that $T$ and $S$ are quasi-monotone and uniformly continuous mappings on $H$ which are not pseudomonotone. In fact, if we take $x(t) = 2$ and $y(t) = -1$, then $\langle Tx, y-x\rangle = 0$, but $\langle Ty, y-x\rangle = \int_0^1 2(-1-2)\,dt = -6 < 0$, showing that $T$ is not pseudomonotone. In the same way, we can show that $S$ is not pseudomonotone. In addition, one can show that $MVI(C,T) = \{-1\}$ and $MVI(D,S) = \{-2\}$.
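Because the counterexample uses constant functions on $[0,1]$, the $L^2$ inner products reduce to scalar products, so the computation can be checked in a few lines (our own illustration; the values $Tx = 0$ and $Ty = 2$ follow from the definition of $T$):

```python
# Scalar check of the counterexample above: for constant functions on
# [0,1], the L^2 inner product <f, g> equals f*g and the L^2 norm of the
# constant c is |c|, so everything reduces to arithmetic.
def T(val):
    # T applied to the constant function x(t) = val; the branch is
    # selected by the L^2 norm ||x|| = |val|.
    return abs(2 * val - 4) if abs(val) > 1 else val ** 2 + 1

x, y = 2.0, -1.0           # x(t) = 2 (||x|| = 2 > 1), y(t) = -1 (||y|| = 1)
Tx, Ty = T(x), T(y)        # Tx = |4 - 4| = 0, Ty = (-1)^2 + 1 = 2
assert Tx * (y - x) == 0   # <Tx, y - x> = 0 >= 0
assert Ty * (y - x) == -6  # <Ty, y - x> = -6 < 0: T is not pseudomonotone
```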

Define the mappings $K, G : H \to H$ by
\[ Kx(t) = \begin{cases} \dfrac{x(t)-1}{2}, & \text{if } x \in C, \\ -1, & \text{if } x \notin C, \end{cases} \]
and
\[ Gy(t) = \begin{cases} \dfrac{2y(t)-2}{3}, & \text{if } y \in D, \\ -2, & \text{if } y \notin D. \end{cases} \]
One can easily show that $F(K) = \{-1\}$ and $F(G) = \{-2\}$, and thus $MVI(C,T) \cap F(K) = \{-1\}$ and $MVI(D,S) \cap F(G) = \{-2\}$. Moreover, we have
\[ \|Kx(t) - K(-1)\| = \Big\|\frac{x(t)-1}{2} - (-1)\Big\| = \frac{1}{2}\|x(t)+1\| \le \|x(t)+1\|. \]
Thus, $K$ is quasi-nonexpansive. Similarly, one can show that $G$ is quasi-nonexpansive. We also note that the mappings $I-K$ and $I-G$ are demiclosed at zero.

Define $A, B : H \to H$ by $Ax(t) = 4x(t)$ and $By(t) = 2y(t)$. Then $A$ and $B$ are bounded linear mappings with $A^*x(t) = 4x(t)$ and $B^*y(t) = 2y(t)$. Moreover, we have that $A(-1) = -4 = B(-2)$, and
hence $(-1,-2) \in \Upsilon$. Taking $\xi = 0.4$, $\kappa = 0.4$, $\omega = 0.7$, $\theta = 0.7$, $\rho = 0.9$, $\epsilon_n = \frac{1}{n^2+5}$, $\alpha_n = \frac{1}{n+3}$, $\beta_n = 0.75$, $a_n = 0.5$ and $\sigma_n = 0.8$, the conditions (C1)–(C6) are satisfied. Using MATLAB, we obtain the figures below, which show the convergence of the error term $\|E_n - (x^*,u^*)\|$, where $E_n = (x_n,u_n)$, for the sequences generated by both Algorithms 3.1 (Alg. 2) and 3.2 (Alg. 3) for different initial points.
for the sequence generated by both Algorithms 3.1 (Alg. 2) and 3.2 (Alg. 3) for different initial points.

Remark 4.3: From Figure 2, we can see that the error term $\|E_n - (x^*,u^*)\|$ converges to zero as $n \to \infty$ for different values of the initial point $(x_1,u_1)$. Thus, the sequence $\{(x_n,u_n)\}$ produced by Algorithms 3.1 (Alg. 2) and 3.2 (Alg. 3) strongly converges to the solution $(x^*,u^*) = (-1,-2) \in \Upsilon$. Furthermore, Figure 3 reveals that the sequence generated by the inertial-like Tseng extragradient algorithm converges faster than that generated by the inertial-like subgradient extragradient algorithm.

Figure 2. Convergence of Alg. 2 with different initial points $(x_1,u_1)$.



Figure 3. Convergence rates of the sequences generated by Alg. 2 and Alg. 3.

5. Conclusion

In this study, two inertial extragradient algorithms for split equality variational inequality and fixed point problems in real Hilbert spaces were proposed. For solving these problems, we established strongly convergent sequences under the assumption that the mappings associated with the VIPs are uniformly continuous and quasi-monotone, which is more general than the Lipschitz continuity and pseudomonotonicity assumptions made by several authors in the literature (see, for example, Korpelevich [20], Censor et al. [21], Thong and Vuong [47], Kwelegano et al. [31]). We used an inertial-like subgradient extragradient method and an inertial-like Tseng extragradient method for better performance of our schemes. Additionally, we provided some applications of our findings to other types of problems. Finally, we illustrated the effectiveness of our methods with numerical examples.

Disclosure statement
No potential conflict of interest was reported by the author(s).

References
[1] Moudafi A. Alternating CQ-algorithm for convex feasibility and split fixed-point problems. J Nonlinear Convex
Anal. 2014;15:809–818.
[2] Boikanyo OA, Zegeye H. The split equality fixed point problem for quasi-pseudo-contractive mappings without
prior knowledge of norms. Numer Funct Anal Optim. 2020;41:759–777. doi: 10.1080/01630563.2019.1675170
[3] Boikanyo OA, Zegeye H. Split equality variational inequality problems for pseudomonotone mappings in Banach
spaces. Stud Univ Babes-Bolyai Math. 2021;66:139–158. doi: 10.24193/subbmath

[4] Chang SS, Lin W, Lijuan Q, et al. Strongly convergent iterative methods for split equality variational inclusion
problems in Banach spaces. Acta Math Sci. 2016;36:1641–1650. doi: 10.1016/S0252-9602(16)30096-0
[5] Guo H, He H, Chen R. Strong convergence theorems for the split equality variational inclusion problem and fixed
point problem in Hilbert spaces. Fixed Point Theory Appl. 2015;2015:1–18. doi: 10.1186/1687-1812-2015-1
[6] Zhang Y, Li Y. A relaxed CQ algorithm involving the alternated inertial technique for the multiple-sets split
feasibility problem. J Nonlinear Var Anal. 2022;6:317–332.
[7] Zhang X, Zhang Y, Wang Y. Viscosity approximation of a relaxed alternating CQ algorithm for the split equality
problem. J Nonlinear Funct Anal. 2022;2022:43.
[8] Censor Y, Elfving T. A multiprojection algorithm using Bregman projections in a product space. Numer
Algorithms. 1994;8:221–239. doi: 10.1007/BF02142692
[9] Stampacchia G. Formes bilinéaires coercitives sur les ensembles convexes. C R Hebd Séances Acad Sci.
1964;258:4413–4416.
[10] Fichera G. Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al
contorno. Accademia nazionale dei Lincei., serie VIII, v. VII; 1964.
[11] Zheng L. A double projection algorithm for quasimonotone variational inequalities in Banach space. J Inequal
Appl. 2018;2018:256. doi: 10.1186/s13660-018-1852-2
[12] Ye ML, He YR. A double projection method for solving variational inequalities without monotonicity. Comput
Optim Appl. 2015;60:141–150. doi: 10.1007/s10589-014-9659-7
[13] Cottle RW, Yao JC. Pseudo-monotone complementarity problems in Hilbert space. J Optim Theory Appl.
1992;75:281–295. doi: 10.1007/BF00941468
[14] Kinderlehrer D, Stampacchia G. An introduction to variational inequalities and their applications. New York and
London: SIAM, Academic Press; 2000.
[15] Trémolières R, Lions JL, Glowinski R. Numerical analysis of variational inequalities. Amsterdam: Elsevier; 2011.
Adv. Nonlinear Var. Inequalities.
[16] Jolaoso LO, Taiwo A, Alakoya TO, et al. A strong convergence theorem for solving pseudo-monotone variational
inequalities using projection methods. J Optim Theory Appl. 2020;185:744–766. doi: 10.1007/s10957-020-01
672-3
[17] Marcotte P. Application of Khobotov’s algorithm to variational inequalities and network equilibrium problems.
INFOR. 1991;29:258–270.
[18] Ogwo G, Izuchukwu C, Aremu K, et al. A viscosity iterative algorithm for a family of monotone inclusion problems
in an Hadamard space. B Belg Math Soc-Sim. 2020;27:127–152.
[19] Zegeye H, Ofoedu EU, Shahzad N. Convergence theorems for equilibrium problem, variational inequality prob-
lem and countably infinite relatively quasi-nonexpansive mappings. Appl Math Comput. 2010;216:3439–3449.
[20] Korpelevich GM. The extragradient method for finding saddle points and other problems. Matecon.
1976;12:747–756.
[21] Censor Y, Gibali A, Reich S. Strong convergence of subgradient extragradient methods for the variational inequal-
ity problem in Hilbert space. Optim Methods Softw. 2011;26:827–845. doi: 10.1080/10556788.2010.551536
[22] Censor Y, Gibali A, Reich S. The subgradient extragradient method for solving variational inequalities in Hilbert
space. J Optim Theory Appl. 2011;148:318–335. doi: 10.1007/s10957-010-9757-3
[23] Kraikaew R, Saejung S. Strong convergence of the Halpern subgradient extragradient method for solving varia-
tional inequalities in Hilbert spaces. J Optim Theory Appl. 2014;163:399–412. doi: 10.1007/s10957-013-0494-2
[24] Tseng P. Modified forward-backward splitting method for maximal monotone mappings. SIAM J Control Optim.
2000;38:431–446. doi: 10.1137/S0363012998338806
[25] Shehu Y. Single projection algorithm for variational inequalities in Banach space with application to contact
problem. Acta Math Sci. 2020;40:1045–1063. doi: 10.1007/s10473-020-0412-2
[26] Zegeye H, Shahzad N. Extragradient method for solutions of variational inequality problems in Banach spaces.
Abstr Appl Anal. 2013;2013:832548.
[27] Zhu LJ, Liou YC. A Tseng-Type algorithm with self-adaptive techniques for solving the split problem
of fixed points and pseudomonotone variational inequalities in Hilbert spaces. Axioms. 2021;10:152. doi:
10.3390/axioms10030152
[28] Chang SS, Yao JC, Wen CF, et al. Common zero for a finite family of monotone mappings in Hadamard spaces
with applications. Mediterr J Math. 2018;15:1–16. doi: 10.1007/s00009-018-1205-x
[29] Byrne C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse
Probl. 2003;20:103–120. doi: 10.1088/0266-5611/20/1/006
[30] Combettes P. The convex feasibility problem in image recovery. Adv Imaging Electron Phys. 1996;95:155–270.
doi: 10.1016/S1076-5670(08)70157-5
[31] Kwelegano KM, Zegeye H, Boikanyo OA. An iterative method for split equality variational inequality
problems for non-Lipschitz pseudomonotone mappings. Rend Circ Mat Palermo. 2022;71:325–348. doi:
10.1007/s12215-021-00608-8
[32] Moudafi A. Alternating CQ algorithm for convex feasibility and split fixed point problems. J Nonlinear Convex
Anal. 2014;15:809–818.

[33] Moudafi A. Split monotone variational inclusions. J Optim Theory Appl. 2011;150:275–283. doi: 10.1007/s10957-
011-9814-6
[34] Moudafi A, Al-Shemas E. Simultaneous iterative methods for split equality problem. Trans Math Program Appl.
2013;1:1–11.
[35] Zhao J. Solving split equality fixed-point problem of quasi-nonexpansive mappings without prior knowledge of
operators norms. Optimization. 2015;64:2619–2630. doi: 10.1080/02331934.2014.883515
[36] Ogbuisi FU. Approximation methods for solutions of some nonlinear problems in Banach spaces [PhD Thesis].
South Africa: School of Mathematics, Statistics and Computer Science University of KwaZulu-Natal; 2017.
[37] Attouch H, Bolte J, Redont P, et al. Alternating proximal algorithms for weakly couple dminimization problems,
applications to dynamical games and PDEs. J Convex Anal. 2008;15:485–506.
[38] Censor Y, Bortfeld T, Martin B, et al. A unified approach for inversion problems in intensity modulated radiation
therapy. Phys Med Biol. 2006;51:2353–2365. doi: 10.1088/0031-9155/51/10/001
[39] Polyak BT. Some methods of speeding up the convergence of iteration methods. Comput Math Math Phys.
1964;4:1–17. doi: 10.1016/0041-5553(64)90137-5
[40] Ceng L-C, Petrusel A, Wen C-F, et al. Inertial-Like subgradient extragradient methods for variational inequal-
ities and fixed points of asymptotically nonexpansive and strictly pseudocontractive mappings. Mathematics.
2019;7:860. doi: 10.3390/math7090860
[41] Gibali A, Shehu Y. An efficient iterative method for finding common fixed point and variational inequalities in
Hilbert spaces. Optimization. 2019;68:13–32. doi: 10.1080/02331934.2018.1490417
[42] Oyewole OK, Reich S. A totally relaxed self-adaptive algorithm for solving a variational inequality and fixed point
problems in Banach spaces. Appl Set-Valued Anal Optim. 2022;4:349–366.
[43] Shehu Y, Ogbuisi FU. An iterative algorithm for approximating a solution of split common fixed point problem
for demi-contractive maps. Dyn Contin Discrete Impuls Syst Ser B Appl Algorithms. 2016;23:205–216.
[44] Yao Y, Shahzad N, Yao JC. Convergence of Tseng-type self-adaptive algorithms for variational inequalities and
fixed point problems. Carpathian J Math. 2021;37:541–550. doi: 10.37193/CJM
[45] Tan B, Zhou Z, Li S. Viscosity-type inertial extragradient algorithms for solving variational inequality problems
and fixed point problems. J Appl Math Comput. 2022;68:1387–1411. doi: 10.1007/s12190-021-01576-z
[46] Tan B, Cho SY. Inertial extragradient algorithms with non-monotone stepsizes for pseudomonotone variational
inequalities and applications. Comput Appl Math. 2022;41:1–25. doi: 10.1007/s40314-022-01819-0
[47] Thong DV, Vuong PT. Modified Tseng’s extragradient methods for solving pseudo-monotonevariational inequal-
ities. Optimization. 2019;68:2207–2226. doi: 10.1080/02331934.2019.1616191
[48] Zegeye H, Shahzad N. Convergence of Mann’s type iteration method for generalized asymptotically nonexpansive
mappings. Comput Math with Appl. 2011;62:4007–4014. doi: 10.1016/j.camwa.2011.09.018
[49] Dotson WG. Fixed points of quasi-nonexpansive mappings. J Aust Math Soc. 1972;13:167–170. doi: 10.1017/S144678870001123X
[50] Denisov SV, Semenov VV, Chabak LM. Convergence of the modified extragradient method for variational
inequalities with non-Lipschitz operators. Cybern Syst Anal. 2015;51:757–765. doi: 10.1007/s10559-015-9768-z
[51] Maingé PE. A hybrid extragradient-viscosity method for monotone mappings and fixed point problems. SIAM J
Control Optim. 2008;47:1499–1515. doi: 10.1137/060675319
[52] Xu HK. Iterative algorithms for nonlinear operators. J London Math Soc. 2002;66:240–256. doi: 10.1112/jlms.2002.
66.issue-1
[53] Censor Y, Gibali A, Reich S. Algorithms for the split variational inequality problem. Numer Algorithms.
2012;59:301–323. doi: 10.1007/s11075-011-9490-5
[54] Vanderbei R. Uniform continuity is almost Lipschitz continuity. Statistics and Operations Research Series; 1991.
(Tech. rep. Technical Report. SOR-91-11).
[55] Yao Y, Chen R, Marino G, et al. Applications of fixed-point and optimization methods to the multiple-set split
feasibility problem. J Appl Math. 2012;2012:927530.