A least-change secant algorithm for solving the generalized complementarity problem
https://doi.org/10.1007/s11075-024-01870-4
ORIGINAL PAPER
Abstract
In this paper, we propose a least-change secant algorithm to solve the generalized complementarity problem indirectly through its reformulation as a nonsmooth system of nonlinear equations using a one-parametric family of complementarity functions. We present local and superlinear convergence results for the new algorithm and analyze its numerical performance.
1 Introduction
The generalized complementarity problem (GCP(F, G)) consists of finding a vector x ∈ R^n such that
F(x) ≥ 0, G(x) ≥ 0, F(x)ᵀG(x) = 0,
where F, G : R^n → R^n.
H. Vivas (corresponding author)
[email protected]
R. Pérez
[email protected]
C. Arias
[email protected]
1 Departamento de Matemáticas, Universidad del Cauca, Calle 5 No 4-70, Popayán, Cauca, Colombia
Numerical Algorithms
2 Preliminaries
In this section we present some basic concepts and results necessary for the development of this work.
with D_L the set of points where L is differentiable. The convex hull of ∂_B L(x) is called the generalized Jacobian of L at x, denoted ∂L(x).
To define the concepts of BD-regularity and R-regularity for the generalized complementarity problem, we consider a solution x* of GCP(F, G) and the index sets α = {i ∈ I : F_i(x*) > 0 = G_i(x*)} and β = {i ∈ I : F_i(x*) = 0 = G_i(x*)}.
Definition 2 [13]. Let x* be a solution of GCP(F, G).
1. If all matrices H ∈ ∂_B Φ(x*) are nonsingular, x* is called a BD-regular solution.
2. Let F′(x*) be nonsingular and K = G′(x*) F′(x*)^{−1}. If the submatrix¹ K_{αα} is nonsingular and the Schur complement of K,
K_{ββ} − K_{βα} K_{αα}^{−1} K_{αβ} ∈ R^{|β|×|β|},
is a P-matrix², x* is called an R-regular solution.
where D_a(x) = diag(a₁(x) − 1, . . . , a_n(x) − 1) and D_b(x) = diag(b₁(x) − 1, . . . , b_n(x) − 1) are diagonal matrices with a_i(x) and b_i(x) given by
where D_a(x) and D_b(x) are negative semidefinite diagonal matrices such that D_a(x) + D_b(x) is negative definite.
¹ Let A = (a_ij) ∈ R^{m×n}. The matrix A_{αβ} is the submatrix whose elements are the a_ij with i ∈ α and j ∈ β.
² A matrix M ∈ R^{n×n} is a P-matrix if for every nonzero vector z there is an index j ∈ {1, . . . , n} such that z_j [Mz]_j > 0.
The following results are useful to guarantee the nonsingularity of the elements of the generalized Jacobian of Φ_λ at a solution x* of the GCP(F, G).
Lemma 1 [5] If F is strongly BD-regular at x, then there exist a neighborhood N of x and a constant c such that, for any y ∈ N and V ∈ ∂_B F(y), V is nonsingular and
‖V^{−1}‖ ≤ c. (7)
The next lemma gives an upper bound for the partial derivatives of (4) and (5) when (F_i(x), G_i(x)) ≠ (0, 0). This bound will be used in later results.
Lemma 2 [7]. Let f_λ : R² → R be defined by
f_λ(a, b) = √((a − b)² + λab), λ ∈ (0, 4). (8)
Then ‖∇f_λ(a, b)‖ ≤ √2 for every nonzero vector (a, b) ∈ R².
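The bound of Lemma 2 is easy to probe numerically. The sketch below (an illustration, not part of the paper's experiments) evaluates the gradient of f_λ on a grid of nonzero points for several values of λ ∈ (0, 4) and checks that its Euclidean norm never exceeds √2:

```python
import math
import itertools

def grad_f(a, b, lam):
    # Gradient of f_lam(a, b) = sqrt((a - b)^2 + lam*a*b) at (a, b) != (0, 0);
    # the radicand is positive for every nonzero (a, b) when lam is in (0, 4).
    root = math.sqrt((a - b) ** 2 + lam * a * b)
    return ((2.0 * (a - b) + lam * b) / (2.0 * root),
            (-2.0 * (a - b) + lam * a) / (2.0 * root))

grid = [t / 7.0 for t in range(-21, 22)]  # sample points in [-3, 3]
worst = 0.0
for lam in (0.1, 1.0, 2.0, 3.9):          # several values of lam in (0, 4)
    for a, b in itertools.product(grid, repeat=2):
        if (a, b) != (0.0, 0.0):
            ga, gb = grad_f(a, b, lam)
            worst = max(worst, math.hypot(ga, gb))

print(worst <= math.sqrt(2))  # True: the norm stays below sqrt(2)
```

The supremum √2 is approached only as λ tends to 0 or 4, which is why the sampled maximum remains strictly below it.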
The following result will be useful to prove that the new algorithmic proposal is well
defined.
‖A^{−1}‖ ≤ ‖C^{−1}‖ / (1 − ‖I_n − C^{−1}A‖).
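This Banach-perturbation-type bound can be checked numerically. The following sketch, with hypothetical 2×2 matrices A and C in the induced infinity norm, verifies that when ‖I_n − C^{−1}A‖ < 1 the stated bound on ‖A^{−1}‖ holds:

```python
def inv2(M):
    # Inverse of a 2x2 matrix.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def mul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm_inf(M):
    # Induced infinity norm: maximum absolute row sum.
    return max(abs(M[i][0]) + abs(M[i][1]) for i in range(2))

C = [[4.0, 1.0], [0.0, 3.0]]   # nonsingular reference matrix (example data)
A = [[3.9, 1.1], [0.1, 2.8]]   # small perturbation of C

Cinv = inv2(C)
CA = mul2(Cinv, A)
E = [[(1.0 if i == j else 0.0) - CA[i][j] for j in range(2)] for i in range(2)]
t = norm_inf(E)                      # ||I - C^{-1} A|| = 0.1 < 1 here
bound = norm_inf(Cinv) / (1.0 - t)   # right-hand side of the lemma
print(t < 1.0, norm_inf(inv2(A)) <= bound)  # True True
```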
B_k d^k = −Φ_λ(x^k). (12)
3: Do
x^{k+1} = x^k + d^k,
4: Update B_k such that B_{k+1} is nonsingular. Go to Step 1.
5: end while
6: return x*
Remark 3 When λ → 0⁺, Algorithm 1 reduces to the one presented in [16] for the generalized complementarity problem.
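To make the generic iteration concrete, here is a minimal sketch of the local scheme B_k d^k = −Φ_λ(x^k), x^{k+1} = x^k + d^k on a hypothetical two-dimensional GCP. The complementarity function φ(a, b) = √((a − b)² + λab) − a − b built from f_λ, the toy pair F, G, and the plain Broyden update for B_k are all assumptions of this sketch, not the paper's exact construction:

```python
import math

LAM = 2.0  # parameter of the one-parametric family; must lie in (0, 4)

def phi(a, b):
    # Assumed complementarity function built from f_lam in (8):
    # phi(a, b) = f_lam(a, b) - a - b, which vanishes iff a >= 0, b >= 0, a*b = 0.
    return math.sqrt((a - b) ** 2 + LAM * a * b) - a - b

def Phi(x):
    # Reformulation Phi_lam(x) = 0 of a hypothetical toy GCP with
    # F(x) = (x1, x2) and G(x) = (1 - x1, x2 + 1); one solution is x* = (0, 0).
    F = [x[0], x[1]]
    G = [1.0 - x[0], x[1] + 1.0]
    return [phi(F[i], G[i]) for i in range(2)]

def solve2(B, rhs):
    # Cramer's rule for the 2x2 linear system B d = rhs.
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(rhs[0] * B[1][1] - B[0][1] * rhs[1]) / det,
            (B[0][0] * rhs[1] - rhs[0] * B[1][0]) / det]

def fd_jacobian(x, h=1e-7):
    # Forward-difference Jacobian of Phi, used only to seed B_0.
    fx = Phi(x)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp = list(x)
        xp[j] += h
        fp = Phi(xp)
        for i in range(2):
            J[i][j] = (fp[i] - fx[i]) / h
    return J

def algorithm1(x, tol=1e-10, kmax=100):
    # Local secant iteration: solve B_k d^k = -Phi_lam(x^k), set x^{k+1} = x^k + d^k,
    # then update B_k; Broyden's rank-one formula is used here as a simple
    # stand-in for the paper's least-change updates of A_k and C_k.
    B = fd_jacobian(x)
    for _ in range(kmax):
        fx = Phi(x)
        if max(abs(v) for v in fx) < tol:
            break
        d = solve2(B, [-v for v in fx])
        x_new = [x[0] + d[0], x[1] + d[1]]
        f_new = Phi(x_new)
        y = [f_new[i] - fx[i] for i in range(2)]
        ss = d[0] * d[0] + d[1] * d[1]
        Bd = [B[i][0] * d[0] + B[i][1] * d[1] for i in range(2)]
        for i in range(2):
            for j in range(2):  # B += (y - B d) d^T / (d^T d)
                B[i][j] += (y[i] - Bd[i]) * d[j] / ss
        x = x_new
    return x

x_star = algorithm1([0.2, 0.3])
print(x_star, Phi(x_star))  # the residual drops below tol for this start
```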
Under the following assumptions, we prove that the sequence {x^k} generated by Algorithm 1 is well defined and converges linearly to a solution of (3).
for all x ∈ B(x*; δ), where ‖·‖ denotes any norm in R^n and also its respective induced matrix norm.
A3. x ∗ is an R-regular solution of GCP(F, G).
From Theorem 1, Lemma 1 and Assumption A3, we can deduce that the generalized Jacobian matrices are nonsingular and their inverses are bounded. That is, for H_* ∈ ∂Φ_λ(x*), there exists a positive constant μ such that ‖H_*^{−1}‖ ≤ μ.
Let H ∈ ∂Φ_λ(x) and H_* ∈ ∂Φ_λ(x*) be defined by (6), and the matrix B defined by (9). The following two lemmas guarantee that, for every x close enough to a solution of GCP(F, G), ‖H − B‖∞ and ‖H − H_*‖∞ are bounded from above.
‖H − B‖∞ ≤ θ.
Suppose that the maximum is attained in row j. We consider two cases.
If j ∉ β̄, then
‖H − B‖∞ = ‖[H]_j − [B]_j‖₁
= ‖(a_j − 1)(∇F_j(x)ᵀ − [A]_j) + (b_j − 1)(∇G_j(x)ᵀ − [C]_j)‖₁
≤ |a_j − 1| ‖∇F_j(x)ᵀ − [A]_j‖₁ + |b_j − 1| ‖∇G_j(x)ᵀ − [C]_j‖₁
≤ (|a_j| + 1) ‖∇F_j(x)ᵀ − [A]_j‖₁ + (|b_j| + 1) ‖∇G_j(x)ᵀ − [C]_j‖₁.
From Lemma 2 and Proposition 1, |a_j| < √2 and |b_j| < √2, so
‖H − B‖∞ ≤ (1 + √2) ( ‖∇F_j(x)ᵀ − [A]_j‖₁ + ‖∇G_j(x)ᵀ − [C]_j‖₁ ). (13)
On the other hand, using the equivalence between the norms ‖·‖∞ and ‖·‖₁, Assumption A2 and the fact that A ∈ B(F′(x*); δ_F), we get
‖∇F_j(x) − [A]_j‖₁ ≤ n ( ‖∇F_j(x) − ∇F_j(x*)‖∞ + ‖∇F_j(x*) − [A]_j‖∞ )
≤ n ( γ ‖x − x*‖∞ + δ_F ) ≤ n ( γε + δ_F ),
where δ = max{δ_F, δ_G}.
Now, if j ∈ β, we have that
‖H − B‖∞ = ‖[H]_j − [B]_j‖₁
≤ |χ_j − 1| ‖∇F_j(x)ᵀ − [A]_j‖₁ + |ξ_j − 1| ‖∇G_j(x)ᵀ − [C]_j‖₁
≤ (|χ_j| + 1) ‖∇F_j(x)ᵀ − [A]_j‖₁ + (|ξ_j| + 1) ‖∇G_j(x)ᵀ − [C]_j‖₁.
‖H − B‖∞ ≤ 6n ( γε + δ ) = θ₂, (15)
where δ = max{δ_F, δ_G}.
Finally, from (14) and (15), since θ₁ < θ₂, defining θ = θ₂ = 6n(γε + δ), we conclude that
‖H − B‖∞ ≤ θ. (16)
Lemma 5 Assume that Assumptions A1 and A2 hold. Then, for each H ∈ ∂Φ_λ(x) and H_* ∈ ∂Φ_λ(x*), there exists a positive constant α such that, if ‖x − x*‖∞ < ε, then ‖H − H_*‖∞ < α.
Since F and G satisfy Assumption A2, from Lemma 2 and Proposition 1 we have that
‖H − H_*‖∞ ≤ 2γ ‖x − x*‖∞ + 2√2 γ ‖x − x*‖∞ + 4√2 M
≤ (2 + 2√2)γε + 4√2 M = θ̂,
where M = max{ ‖F′(x*)‖∞, ‖G′(x*)‖∞ }; that is, there exists the positive constant θ̂ := (2 + 2√2)γε + 4√2 M such that
‖H − H_*‖∞ < θ̂, (17)
where the matrix B is given by (10). If x* satisfies Assumption A3, then there exist constants ε, δ > 0 such that, if
‖x − x*‖∞ ≤ ε, ‖C − G′(x*)‖∞ ≤ δ and ‖A − F′(x*)‖∞ ≤ δ,
then
‖Q(x, A, C) − x*‖∞ ≤ r ‖x − x*‖∞. (19)
Proof Let r ∈ (0, 1), ε₁ < r/(96nμγ) and δ < r/(96nμ), where γ is the Lipschitz constant in Assumption A2 and μ is the constant of Lemma 1.
To prove that Q is well defined, we must guarantee that B^{−1} exists. For this, let us consider the inequality
‖B − H_*‖∞ ≤ ‖B − H‖∞ + ‖H − H_*‖∞. (20)
Moreover, taking
ε₂ < ( r/(8(2 + 2√2)μ) − 4√2 M ) / γ,
we get
‖H − H_*‖∞ ≤ θ̂(ε₂) = (2 + 2√2)γε₂ + 4√2 M < r/(8μ).
Consequently, if we define ε = min{ε₁, ε₂}, we have that
‖B − H‖∞ < r/(8μ) and ‖H − H_*‖∞ < r/(8μ). (21)
Hence,
‖B − H_*‖∞ ≤ ‖B − H‖∞ + ‖H − H_*‖∞ < r/(8μ) + r/(8μ) = r/(4μ).
Thus,
‖B − H_*‖∞ ≤ r/(4μ) < 1/(4‖H_*^{−1}‖∞).
Then,
‖H_*^{−1}B − I_n‖∞ ≤ ‖H_*^{−1}‖∞ ‖B − H_*‖∞ < 1/4,
therefore ‖H_*^{−1}B − I_n‖∞ < 1 and, by Lemma 3, the function Q is well defined. Moreover,
‖B^{−1}‖ ≤ ‖H_*^{−1}‖ / (1 − ‖I_n − H_*^{−1}B‖) < μ/(1 − 1/4) = (4/3)μ.
Now, to prove (19), we subtract x* in (18) and, after some algebraic manipulations, we have
‖Q(x, A, C) − x*‖∞ = ‖x − x* − B^{−1}Φ_λ(x)‖∞
= ‖(x − x*) − B^{−1}Φ_λ(x) + B^{−1}H_*(x − x*) − B^{−1}H_*(x − x*)‖∞
≤ ‖B^{−1}‖∞ ‖(B − H_*)(x − x*) − [Φ_λ(x) − Φ_λ(x*) − H_*(x − x*)]‖∞
≤ (4/3)μ ( ‖B − H_*‖∞ ‖x − x*‖∞ + ‖Φ_λ(x) − Φ_λ(x*) − H_*(x − x*)‖∞ ).
lim_{x→x*} ‖Φ_λ(x) − Φ_λ(x*) − H(x − x*)‖∞ / ‖x − x*‖∞ = 0,
thus, for all ρ > 0, there exists ε₃ > 0 such that, if ‖x − x*‖∞ < ε₃, then
‖Φ_λ(x) − Φ_λ(x*) − H(x − x*)‖∞ / ‖x − x*‖∞ < ρ.
In particular, for ρ = r/(8μ), we have that
‖Φ_λ(x) − Φ_λ(x*) − H(x − x*)‖∞ / ‖x − x*‖∞ < r/(8μ). (23)
The following result, analogous to the Theorem of two neighborhoods [15], guarantees q-linear convergence of Algorithm 1 in the infinity norm. Here, we have three neighborhoods: one of the solution, another of the Jacobian matrix of F at the solution and the third of the Jacobian matrix of G at the solution.
Theorem 2 Given r ∈ (0, 1), there exist ε > 0 and δ > 0 such that, if
‖x⁰ − x*‖∞ ≤ ε, ‖A_k − F′(x*)‖∞ ≤ δ and ‖C_k − G′(x*)‖∞ ≤ δ,
then the sequence {x^k} generated by Algorithm 1 satisfies
‖x^{k+1} − x*‖∞ ≤ r ‖x^k − x*‖∞, k = 0, 1, . . .
Proof Consider the function Q defined by (18); thus, x^{k+1} is given by (24). Let r ∈ (0, 1), ε₁ ∈ (0, ε) and δ₁ ∈ (0, δ), where ε and δ are the constants of Lemma 6. We will use mathematical induction on k.
• For k = 0. If ‖x⁰ − x*‖∞ ≤ ε₁ ≤ ε, since
‖A₀ − F′(x*)‖∞ ≤ δ₁ ≤ δ and ‖C₀ − G′(x*)‖∞ ≤ δ₁ ≤ δ,
‖x¹ − x*‖∞ ≤ r ‖x⁰ − x*‖∞.
‖x^m − x*‖∞ ≤ r ‖x^{m−1} − x*‖∞. (25)
‖x^m − x*‖∞ ≤ r ‖x^{m−1} − x*‖∞ ≤ r^m ‖x⁰ − x*‖∞ ≤ r^m ε ≤ ε.
‖x^{m+1} − x*‖∞ ≤ r ‖x^m − x*‖∞
Theorem 3 Let B_k be defined by (10). Assume Assumptions A1 to A3 and that the sequence {x^k} generated by Algorithm 1 converges to x*. If
lim_{k→∞} ‖(B_k − H_*) d^k‖ / ‖d^k‖ = 0, (27)
lim_{x^k→x*} ‖Φ_λ(x^k) − H_*(x^k − x*)‖ / ‖x^k − x*‖ = 0. (28)
Furthermore,
| ‖Φ_λ(x^k)‖ − ‖H_*(x^k − x*)‖ | ≤ ‖Φ_λ(x^k) − H_*(x^k − x*)‖. (30)
lim_{x^k→x*} | ‖Φ_λ(x^k)‖ − ‖H_*(x^k − x*)‖ | / ( ‖H_*^{−1}‖ ‖H_*(x^k − x*)‖ ) = 0,
then
lim_{x^k→x*} | ‖Φ_λ(x^k)‖ − ‖H_*(x^k − x*)‖ | / ‖H_*(x^k − x*)‖ = 0.
This implies that, for ρ = 1/2, there exists ε > 0 such that, if ‖x^k − x*‖ < ε, then
−1/2 < ( ‖Φ_λ(x^k)‖ − ‖H_*(x^k − x*)‖ ) / ‖H_*(x^k − x*)‖ < 1/2,
thus,
(1/2) ‖H_*(x^k − x*)‖ < ‖Φ_λ(x^k)‖ < (3/2) ‖H_*(x^k − x*)‖.
From the left inequality and by (29), we get
‖Φ_λ(x^k)‖ > (1/2) ‖H_*(x^k − x*)‖ ≥ (1/(2‖H_*^{−1}‖)) ‖x^k − x*‖. (31)
0 = B_k d^k + Φ_λ(x^k), (32)
Applying norms to both sides and the triangle inequality, we deduce that
‖Φ_λ(x^{k+1})‖ = ‖B_k d^k − H_* d^k + Φ_λ(x^k) + H_* d^k − Φ_λ(x^{k+1})‖
≤ ‖(B_k − H_*) d^k‖ + ‖Φ_λ(x^k) + H_* d^k − Φ_λ(x^{k+1})‖;
then, dividing by ‖d^k‖, the first term of the right side converges to zero by the hypothesis (27), and the second one converges to zero by the semismoothness of Φ_λ. So,
lim_{x^k→x*} ‖Φ_λ(x^{k+1})‖ / ‖d^k‖ = 0. (33)
0 = lim_{x^k→x*} ‖Φ_λ(x^{k+1})‖ / ‖s^k‖
≥ lim_{x^k→x*} (1/(2‖H_*^{−1}‖)) ‖x^{k+1} − x*‖ / ‖s^k‖
= (1/(2‖H_*^{−1}‖)) lim_{x^k→x*} ‖x^{k+1} − x*‖ / ‖x^{k+1} − x* + x* − x^k‖
≥ (1/(2μ)) lim_{x^k→x*} ‖x^{k+1} − x*‖ / ( ‖x^{k+1} − x*‖ + ‖x^k − x*‖ )
= (1/(2μ)) lim_{x^k→x*} ( ‖x^{k+1} − x*‖/‖x^k − x*‖ ) / ( ‖x^{k+1} − x*‖/‖x^k − x*‖ + 1 ).
Hence,
lim_{x^k→x*} ‖x^{k+1} − x*‖ / ‖x^k − x*‖ = 0,
whereby the sequence {x^k} converges q-superlinearly to x*.
It is clear that Algorithm 1 is generic because it does not specify how to update the matrices B_k. In this section, we propose least-change secant updates for those matrices following the theory developed in [1], which was used in [4] for nonlinear complementarity. In this sense, we will see that the proofs are practically analogous to those presented in that work.
Observe that updating B_k in Algorithm 1 amounts to updating the matrices A_k and C_k, respectively. Following [1], the updates A_{k+1} and C_{k+1} must satisfy the secant equations:
and the change between A_k and A_{k+1}, and between C_k and C_{k+1}, must be minimal; i.e.,
where the sets V_F = V_F(x, y) and V_G = V_G(x, y) contain all matrices that satisfy the secant equations (34) and (35), respectively. Thus, it is natural to think of the orthogonal projections of these matrices onto V_F and V_G, that is, A_{k+1} = P_{V_F}(A_k) and C_{k+1} = P_{V_G}(C_k). Some examples of least-change secant updates are the following:
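The updates used in the experiments of Section 7 (good Broyden, bad Broyden, Schubert) are instances of such projections. As an illustrative sketch, the code below implements the good Broyden update, which is the Frobenius-norm orthogonal projection of A onto the affine set {M : Ms = y} of matrices satisfying the secant equation, and checks both the secant equation and the least-change property against a competitor in that set (the data A, s, y are hypothetical):

```python
def broyden_good(A, s, y):
    # Rank-one least-change secant update:
    # A+ = A + (y - A s) s^T / (s^T s), the Frobenius-orthogonal projection
    # of A onto V = {M : M s = y}.
    n = len(A)
    As = [sum(A[i][j] * s[j] for j in range(n)) for i in range(n)]
    ss = sum(v * v for v in s)
    return [[A[i][j] + (y[i] - As[i]) * s[j] / ss for j in range(n)]
            for i in range(n)]

def fro(A, B):
    # Frobenius norm of the difference A - B.
    return sum((A[i][j] - B[i][j]) ** 2 for i in range(len(A))
               for j in range(len(A))) ** 0.5

A = [[1.0, 0.0, 2.0], [0.0, 3.0, 1.0], [1.0, 1.0, 1.0]]
s = [1.0, 2.0, -1.0]
y = [2.0, 0.5, 1.0]
Aplus = broyden_good(A, s, y)

# A+ satisfies the secant equation (34): A+ s = y.
Aplus_s = [sum(Aplus[i][j] * s[j] for j in range(3)) for i in range(3)]
print(all(abs(Aplus_s[i] - y[i]) < 1e-12 for i in range(3)))  # True

# Least change: any competitor M in V is at least as far from A.
# Build M = A+ + w z^T with z orthogonal to s, so M s = y still holds.
z = [2.0, 1.0, 4.0]           # z . s = 2 + 2 - 4 = 0
w = [1.0, -1.0, 0.5]
M = [[Aplus[i][j] + w[i] * z[j] for j in range(3)] for i in range(3)]
print(fro(A, Aplus) <= fro(A, M))  # True
```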
Next, we present a least-change secant update algorithm to solve the GCP(F, G).
In order to develop the convergence theory of Algorithm 2, we add one more assumption to the previous three.
A4. Suppose that for all x, z ∈ Rn there exist matrices A ∈ VF (x, z) and C ∈
VG (x, z) such that
B_k d^k = −Φ_λ(x^k). (36)
3: Do
x^{k+1} = x^k + d^k,
4: Update A_k and C_k such that
A_{k+1} = P_{V_F(x^k, x^{k+1})}(A_k),
C_{k+1} = P_{V_G(x^k, x^{k+1})}(C_k).
Go back to Step 1.
5: end while
6: return x*
and
‖P_{V_G(x,y)}(C) − G′(x*)‖ ≤ ‖C − G′(x*)‖ + γ̄ ‖x − x*‖,
for all x, y ∈ B(x*; ε), A ∈ B(F′(x*); δ), C ∈ B(G′(x*); δ) and ‖y − x*‖ ≤ ‖x − x*‖.
In the following lemma, we show that the updated matrices A_{k+1} and C_{k+1} can deteriorate, but in a controlled way.
Lemma 8 Suppose that Assumptions A1 to A4 hold. Let A₊ be the orthogonal projection of A onto the set V_F(x, z), C₊ the orthogonal projection of C onto V_G(x, z), Ā the orthogonal projection of F′(x) onto V_F(x, z) and C̄ the orthogonal projection of G′(x) onto V_G(x, z). Then
‖A₊ − F′(x)‖ ≤ ‖A − F′(x)‖ + β₁ σ(x, z) (37)
and
‖C₊ − G′(x)‖ ≤ ‖C − G′(x)‖ + β₂ σ(x, z), (38)
where β₁ > 0, β₂ > 0 and σ(x, z) = max{ ‖x − x*‖, ‖z − x*‖ }.
‖A₊ − F′(x)‖ ≤ ‖A₊ − Ā‖ + ‖Ā − F′(x)‖. (39)
Since A₊ is the orthogonal projection of A onto V_F(x, z),
‖A₊ − A‖ = min_{Â ∈ V_F(x,z)} ‖A − Â‖,
and, because Ā ∈ V_F(x, z) and the projection is nonexpansive,
‖A₊ − Ā‖ ≤ ‖A − Ā‖ ≤ ‖A − F′(x)‖ + ‖Ā − F′(x)‖. (40)
Combining (39) and (40),
‖A₊ − F′(x)‖ ≤ ‖A − F′(x)‖ + 2 ‖Ā − F′(x)‖.
The following result guarantees that Algorithm 2 is well defined and converges. Its proof is analogous to that of a similar result for nonlinear complementarity demonstrated in [4].
Theorem 4 Suppose Assumptions A1 to A4 hold. Let {A_k} and {C_k} be the sequences generated by Algorithm 2. Given r ∈ (0, 1), there exist positive constants ε and δ such that, if
‖x₀ − x*‖ ≤ ε, ‖A₀ − F′(x*)‖ ≤ δ and ‖C₀ − G′(x*)‖ ≤ δ,
then
‖x_{k+1} − x*‖ ≤ r ‖x_k − x*‖.
where
δ̂ = (δ − δ̄)/γ̄ and δ_r = (1 − r)(δ − δ̄)/γ̄,
with γ̄ the constant of (37).
From (41), we can deduce that
δ̄ + γ̄δ₀ < δ̄ + γ̄δ̂ = δ̄ + γ̄(δ − δ̄)/γ̄ = δ
and
δ̄ + γ̄δ_r/(1 − r) ≤ δ̄ + γ̄(δ − δ̄)(1 − r)/(γ̄(1 − r)) = δ. (42)
We use induction on k.
1. For k = 0. If
‖x₀ − x*‖ ≤ ε, ‖A₀ − F′(x*)‖ ≤ δ̄ and ‖C₀ − G′(x*)‖ ≤ δ̄,
then
‖x₁ − x*‖ ≤ r ‖x₀ − x*‖,
x_m = x_{m−1} − B_{m−1}^{−1} Φ_λ(x_{m−1}),
‖A_m − F′(x*)‖ ≤ ‖A_{m−1} − F′(x*)‖ + γ̄ ‖x_{m−1} − x*‖
≤ δ̄ + γ̄ε Σ_{j=0}^{m−1} r^j
≤ δ̄ + γ̄ε Σ_{j=0}^{∞} r^j < δ̄ + γ̄ε/(1 − r) ≤ δ,
that is,
‖A_m − F′(x*)‖ < δ.
‖x_{m+1} − x*‖ ≤ r ‖x_m − x*‖,
Lemma 10 Under the hypotheses of Theorem 4,
lim_{k→∞} ‖B_{k+1} − B_k‖ = 0.
Proof From (10) and (11), we have that, if [ΔB_k]_i = [B_{k+1}]_i − [B_k]_i, then
[ΔB_k]_i = (a_i − 1)[A_{k+1}]_i + (b_i − 1)[C_{k+1}]_i − ( (a_i − 1)[A_k]_i + (b_i − 1)[C_k]_i ), i ∉ β,
[ΔB_k]_i = (χ_i − 1)[A_{k+1}]_i + (ξ_i − 1)[C_{k+1}]_i − ( (χ_i − 1)[A_k]_i + (ξ_i − 1)[C_k]_i ), i ∈ β, (44)
whereby
‖B_{k+1} − B_k‖∞ ≤ (1 + √2) ‖A_{k+1} − A_k‖∞ + (1 + √2) ‖C_{k+1} − C_k‖∞,
lim_{k→∞} ‖B_{k+1} − B_k‖ = 0,
The following theorem gives a sufficient condition for q-superlinear convergence
of Algorithm 2.
Theorem 5 Let us suppose that Assumptions A1 to A4 hold, {A_k} and {C_k} are the sequences defined by Algorithm 2 and lim_{k→∞} x_k = x*. If
lim_{k→∞} ‖(B_{k+1} − H_*) d_k‖ / ‖d_k‖ = 0 (45)
‖x₀ − x*‖ ≤ ε, ‖A_k − F′(x*)‖ ≤ δ and ‖C_k − G′(x*)‖ ≤ δ,
lim_{k→∞} ‖(B_{k+1} − H_*) s_k‖ / ‖s_k‖ = 0
7 Numerical experimentation
that define the remaining three problems, with their starting points and solution(s).
For simplicity, we denote by e_n the vector of ones of order n.
Problem 8 [6]. The function F is the same as in Problem 4 in [14] and G is defined by G(x) = (2.5 − x₁², 2.5 − x₂², 2.5 − x₃², 2.5 − x₄²).
We consider an initial radius r₀ such that, for 1000 random points in B_{r₀}(x*), the respective algorithm converges. Then we increase the radius by 0.1 units to get a radius r₁ and again generate 1000 random points in B_{r₁}(x*). If the algorithm fails to converge for at least one of those points, then the radius of convergence is taken to be r₀; otherwise, we continue the process until the radius is estimated.
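The radius-estimation procedure can be sketched generically. Below, a toy one-dimensional solver (Newton's method on arctan, which has a well-known finite basin of attraction around its root, standing in here for Algorithm 2) is probed with random starting points in balls of growing radius, exactly as described above; the solver, sample size and seed are assumptions of the sketch:

```python
import math
import random

def newton_atan(x, tol=1e-8, kmax=50):
    # Toy local solver: Newton's method on f(x) = arctan(x), root x* = 0.
    # It converges only for starting points with |x| below roughly 1.39.
    for _ in range(kmax):
        if abs(x) < tol:
            return True
        x = x - math.atan(x) * (1.0 + x * x)
    return abs(x) < tol

def convergence_radius(solver, x_star, n_points=500, step=0.1, seed=0):
    # Grow the radius by `step` while every sampled starting point converges;
    # report the last radius at which all samples converged.
    rng = random.Random(seed)
    r = step
    while True:
        if not all(solver(x_star + rng.uniform(-r, r)) for _ in range(n_points)):
            return r - step
        r += step

r_est = convergence_radius(newton_atan, 0.0)
print(r_est)  # matches the true basin radius (about 1.39) up to the 0.1 step
```

As in the paper's experiment, the estimate is only as fine as the step size and as reliable as the sampling density near the basin boundary.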
The results of this experiment are shown in Table 1 for the three ways in which the
parameter λ was chosen. The table contains the following information: Problem (P),
convergence radius (r ) and average number of iterations (k) for which each variant of
the algorithm converged.
In general, we observe that in 50% of the cases (Problems P2, P4, P6, P7 and P9)
there are no significant variations in the radius of convergence for the choices of the
parameter λ. For Problems P3, P5 and P10 the radius of convergence of the algorithm
increases significantly for λ = 2 with the good Broyden and bad Broyden updates.
Also, for Problems P1 and P8 the radius of convergence increases when λ = 10−5
with the good Broyden update. This suggests that the size of the convergence radius
depends not only on the characteristics of each problem, but also on the value of λ.
It is important to mention that we carried out the experiments for other values of λ (namely, λ = 3.9, 3.7, 3.5, 3.1, 2.7, 2.52, 1.5, 1, 0.5, 0.1, 10⁻³, 10⁻⁵); the results were similar to those shown in Table 1, without significant changes in the convergence radii of the algorithm. For this reason, we do not report them in the paper.
In the second experiment, in order to illustrate numerically the rate of convergence of Algorithm 2, we choose Problem 2, the starting point x¹ and the nine versions of the algorithm. We calculate the quotient R_k defined by
R_k = ‖x_{k+1} − x*‖ / ‖x_k − x*‖.
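For intuition on why R_k → 0 signals superlinear convergence, the following sketch computes the quotients R_k for a q-quadratically convergent scalar Newton iteration (a stand-in example, not Problem 2):

```python
import math

# Newton's method on f(x) = x^2 - 2 converges q-quadratically to sqrt(2),
# so the quotients R_k = |x_{k+1} - x*| / |x_k - x*| must tend to zero.
x_star = math.sqrt(2.0)
x = 2.0
errors = [abs(x - x_star)]
for _ in range(4):
    x = 0.5 * (x + 2.0 / x)  # Newton step for x^2 - 2
    errors.append(abs(x - x_star))

ratios = [errors[k + 1] / errors[k] for k in range(4)]
print(ratios)  # strictly decreasing, tending to zero
```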
Table 1 Results of Experiment 1. Convergence radius varying λ
The results are shown in Table 2, which contains the iteration k and the value of R_k for each version of the algorithm. The symbol "−" indicates divergence.
We observe that, in all cases where there is convergence, the quotient R_k goes to zero as k increases. This indicates at least superlinear convergence of the proposed algorithm, which agrees with the theoretical results of the previous section. This experiment also suggests that the convergence rate is not affected by the choice of the parameter λ.
In the third experiment, we illustrate the region of convergence of three versions of Algorithm 2 for problem P5, and of six versions for problem P10. We associate the set X_i* = {y = x_i* + 0.1t : t = −10, −9, . . . , 9, 10} with each component of the solution x* and run the algorithms with all points of the Cartesian product ∏_{i=1}^n X_i* as starting points.
Figures 1, 2 and 3 present in blue the points for which the algorithms converged and in magenta those for which they diverged.
Fig. 1 Region and percent of convergence of Algorithm 2 for problem P5, with λ dynamic choice
Fig. 2 Region and percent convergence of Algorithm 2 for problem P10, with λ = 10−5
In these figures, we can see that, for problem P5, the good Broyden update with the dynamic choice of λ improves the region of convergence.
For problem P10, the dynamic λ turns out to be the best option, since for λ = 10⁻⁵ the Schubert update converges in only 38.4% of the cases, compared to the other updates. In general, we see the need to implement a globalization strategy in each of these methods.
In the fourth experiment, we analyze the efficiency and robustness of the algorithms, that is, their ability to solve a problem from different starting points. For this, we use the efficiency (E), robustness (R) and combined robustness and efficiency (C) indices [20–22]. An algorithm is more robust, efficient or balanced the closer the corresponding index is to 1.
Table 3 shows that Algorithm 2 is more robust with the bad Broyden update for the different λ choices; for λ = 10⁻⁵ and λ = 2 the algorithm was more efficient when B_k was updated using the bad Broyden formula, while for the dynamic λ the algorithm was more efficient with the Schubert update. Finally, the most balanced versions of Algorithm 2 were obtained for λ = 10⁻⁵ and the dynamic λ strategy, respectively, with the Schubert update.
Fig. 3 Region and percent convergence of Algorithm 2 for problem P10, with λ dynamic
8 Final remarks
In this work we propose a nonsmooth least-change secant algorithm to solve the generalized complementarity problem through its reformulation as a system of nonlinear equations using a one-parametric family of complementarity functions. This algorithm can be seen as an extension to generalized complementarity of the least-change secant family of methods proposed in [4] for nonlinear complementarity. We prove, under suitable assumptions, local and q-superlinear convergence.
The numerical tests show a good performance of the algorithmic proposal and allow us to observe that most of the convergence radii of the algorithms are relatively small. The numerical tests indicate that the choice of the parameter λ does not affect the convergence rate of the algorithm, but it does affect its radius of convergence, robustness and efficiency. In addition, it was observed that a small value of λ, or one tending to zero, can improve the performance of the algorithm. This is in agreement with observations of other researchers who, from their numerical experiments, have concluded that for local algorithms the minimum function performs better than other complementarity functions.
On the other hand, since the methods are local, the choice of the starting point influences the region of convergence, as evidenced in the first experiment. An alternative may be to incorporate a globalization strategy and do the numerical analysis of the globalized algorithm.
Acknowledgements We are grateful to the University of Cauca for the time allowed for this research.
Funding Open Access funding provided by Colombia Consortium. This work was not funded by any entity.
Availability of Data and Material The data obtained in this research are available.
Code Availability The algorithm introduced in this work is completely available to readers.
Declarations
Conflicts of Interest The authors declare that they have no conflict of interest.
Competing Interests The authors have no financial or proprietary interests in any material discussed in this
article.
Ethics Approval The authors declare that they followed all the rules of a good scientific practice.
Consent for Publication The authors approve the publication of this research.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which
permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence,
and indicate if changes were made. The images or other third party material in this article are included
in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If
material is not included in the article’s Creative Commons licence and your intended use is not permitted
by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the
copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
1. Martínez, J.M.: On the relation between two local convergence theories of least-change secant update
methods. Math. Comput. 59(200), 457–481 (1992)
2. Martínez, J.M.: Local convergence theory of inexact Newton methods based on structured least change updates. Math. Comput. 55(191), 143–167 (1990)
3. Arenas, F., Martínez, H.J., Pérez, R.: Least change secant update methods for nonlinear complementarity problem. Ingeniería y Ciencia 11(21), 11–36 (2015). https://doi.org/10.17230/ingciencia.11.21.1
4. Lopes, V.L.R., Martínez, J.M., Pérez, R.: On the local convergence of quasi-Newton methods for nonlinear complementarity problems. Appl. Numer. Math. 30(1), 3–22 (1999). https://doi.org/10.1016/S0168-9274(98)00080-4
5. Qi, L.: Convergence analysis of some algorithms for solving nonsmooth equations. Math. Oper. Res.
18(1), 227–244 (1993)
6. Outrata, J.V., Zowe, J.: A Newton method for a class of quasi-variational inequalities. Comput. Optim. Appl. 4, 5–21 (1995). https://doi.org/10.1007/BF01299156
7. Kanzow, C., Kleinmichel, H.: A new class of semi-smooth Newton-type methods for nonlinear complementarity problems. Comput. Optim. Appl. 11(3), 227–251 (1998)
8. Clarke, F.H.: Necessary conditions for nonsmooth problems in optimal control and the calculus of
variations. PhD thesis, University of Washington (1973)
9. Jiang, H., Fukushima, M., Qi, L., Sun, D.: A trust region method for solving generalized complementarity problems. SIAM J. Optim. 8(1), 140–157 (1998). https://doi.org/10.1137/S1052623495296541
10. De Luca, T., Facchinei, F., Kanzow, C.: A semi-smooth equation approach to the solution of nonlinear
complementarity problems. Math. Program. 75, 407–439 (1996)
11. Song, L., Gao, Y.: On the local convergence of a Levenberg-Marquardt method for nonsmooth nonlinear complementarity problems. ScienceAsia 43, 377 (2017). https://doi.org/10.2306/scienceasia1513-1874.2017.43.377
12. Du, S.-q.: Generalized Newton method for a kind of complementarity problem. In: Abstr. Appl. Anal.,
vol. 2014 (2014). https://doi.org/10.1155/2014/745981
13. Arias, C.A., Martínez, H.J., Pérez, R.: A global quasi-Newton algorithm for nonlinear complementarity problems. Pacific J. Optim. 13(1), 1–15 (2017)
14. Vivas, H., Pérez, R., Arias, C.A.: A nonsmooth Newton method for solving the generalized complementarity problem. Numer. Algor. (2023). https://doi.org/10.1007/s11075-023-01581-2
15. Dennis, J.E., Jr., Schnabel, R.B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. SIAM, Philadelphia (1996)
16. Grueso, G.Z.T.: Un algoritmo local secante de cambio mínimo para complementariedad generalizada.
Master’s thesis, Universidad del Cauca (2021)
17. Broyden, C.G.: A class of methods for solving nonlinear simultaneous equations. Math. Comput.
19(92), 577–593 (1965)
18. Brown, P.N., Saad, Y.: Convergence theory of nonlinear Newton-Krylov algorithms. SIAM J. Optim. 4(2), 297–330 (1994). https://doi.org/10.1137/0804017
19. Schubert, L.K.: Modification of a quasi-Newton method for nonlinear equations with a sparse Jacobian. Math. Comput. 24(109), 27–30 (1970)
20. Bogle, I.D.L., Perkins, J.D.: A new sparsity preserving quasi-Newton update for solving nonlinear equations. SIAM J. Sci. Comput. 11(4), 621–630 (1990)
21. Ralevic, N.M., Cebic, D.: The properties of modifications of Newton's method for solving nonlinear equations. In: 2013 IEEE 11th International Symposium on Intelligent Systems and Informatics (SISY), pp. 341–345 (2013). https://doi.org/10.1109/SISY.2013.6662599
22. Buhmiler, S., Krejic, N.: A new smoothing quasi-Newton method for nonlinear complementarity problems. J. Comput. Appl. Math. 211(2), 141–155 (2008)
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.