DOI: 10.3934/math.2022455
Received: 19 October 2021
Revised: 24 January 2022
Accepted: 06 February 2022
http://www.aimspress.com/journal/Math
Published: 25 February 2022
Research article
Two new generalized iteration methods for solving absolute value equations
using M-matrix
1 School of Mathematics and Statistics, HNP-LAMA, Central South University, Changsha 410083, Hunan, China
2 Department of Mathematics, College of Science Al-Zulfi, Majmaah University, Al-Majmaah 11952, Saudi Arabia
3 Research Centre, Future University in Egypt, New Cairo 11745, Egypt
Abstract: In this paper, we present two new generalized Gauss-Seidel iteration methods for solving
absolute value equations Ax − |x| = b, where A is an M-matrix. Furthermore, we demonstrate their
convergence under specific assumptions. Numerical tests indicate the efficiency of the suggested
methods with suitable parameters.
Keywords: absolute value equations; convergence analysis; M-matrix; numerical tests
Mathematics Subject Classification: 90C30, 65F10
1. Introduction
In this paper, we consider the absolute value equation (AVE)
Ax − |x| = b, (1.1)
where A ∈ R^{n×n} is an M-matrix, |x| denotes the componentwise absolute value of the vector x ∈ R^n, and b ∈ R^n. If “|x|” is replaced by “B|x|” in (1.1), then the general AVE is obtained, see [24, 30]. The
AVE has received considerable attention recently, as it is suitable for a wide variety of optimization
problems, e.g., linear programming, linear complementarity problems (LCP) and convex quadratic
programming [1–7, 9–16, 23, 25, 26].
In recent years, a wide variety of procedures have been developed for solving the AVE (1.1). For
example, Wu and Li [34] presented a special shift splitting technique for solving the AVE (1.1)
and performed a convergence analysis. Ke and Ma [19] proposed the SOR-like iteration method to solve the
AVE (1.1). Chen et al. [8] modified the approach of [19] and analyzed the SOR-like approach with
optimal parameters. Fakharzadeh and Shams [12] recommended a mixed-type splitting iterative
scheme for solving (1.1) and established its convergence properties. Hu and Huang [17]
reformulated the AVE as an LCP without any additional assumption and demonstrated existence and
convexity properties. Caccetta et al. [7] studied a smoothing Newton procedure for solving (1.1) and
established that the procedure is globally convergent when ‖A^{-1}‖ < 1. Ning and Zhou [40] evaluated
an improved adaptive differential evolution algorithm for AVEs; this technique combines local and global search.
Salkuyeh [41] addressed the Picard-HSS iteration approach and provided sufficient conditions for its
convergence, while Edalatpour et al. [11] offered a generalization of the Gauss-Seidel (GGS)
method for the AVE (1.1). Cruz et al. [39] utilized the inexact semi-smooth Newton approach and
established global linear convergence of the approach. Moosaei et al. [22] proposed two techniques
for solving the AVE (1.1), namely, the Newton method with the Armijo step and the homotopy
perturbation method. For more details, see [18, 20, 27–29, 31–38, 43].
In this article, inspired by the work in [11] and building on the GGS iteration method, two new generalized
Gauss-Seidel (NGGS) iteration methods are presented for solving the AVE (1.1), and their convergence
conditions are discussed in detail. Numerical tests demonstrate the efficacy of the newly developed methods.
The rest of the article is organized as follows: Section 2 discusses some preliminary information.
Section 3 provides details of the proposed methods and their convergence conditions. Section 4
reports some tests to indicate the efficiency of the offered methods. Finally, Section 5 draws some
conclusions.
2. Preliminaries
Here, we provide some notation, the definition of an M-matrix, and some lemmas that are helpful
for the later analysis.
Let A = (a_ij) ∈ R^{n×n}. We denote the absolute value, the tridiagonal part, and the infinity norm of A by |A| =
(|a_ij|), Trd(A), and ‖A‖_∞, respectively. The matrix A ∈ R^{n×n} is called a Z-matrix if a_ij ≤ 0 for i ≠ j,
and an M-matrix if it is a nonsingular Z-matrix with A^{-1} ≥ 0.
Lemma 2.1. [33] The matrix A = (a_ij) ∈ R^{n×n} is said to be strictly diagonally dominant when |a_ii| > ∑_{j≠i} |a_ij| for all i = 1, 2, ..., n.
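As a small illustration of these definitions, the following Python sketch checks strict diagonal dominance and the Z-matrix/nonnegative-inverse characterization of an M-matrix numerically. It is not part of the original MATLAB experiments; the helper names are ours, and the 3 × 3 matrix Trd(−1, 4, −1) is chosen only for illustration.
```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum_{j != i} |a_ij| for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off))

def is_m_matrix(A, tol=1e-12):
    """Check the definition above: Z-matrix (a_ij <= 0 for i != j),
    nonsingular, and A^{-1} >= 0 entrywise."""
    A = np.asarray(A, dtype=float)
    off_diag = A - np.diag(np.diag(A))
    if np.any(off_diag > tol):        # a positive off-diagonal entry: not a Z-matrix
        return False
    try:
        A_inv = np.linalg.inv(A)
    except np.linalg.LinAlgError:     # singular: not an M-matrix
        return False
    return bool(np.all(A_inv >= -tol))

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])   # Trd(-1, 4, -1) of order 3
print(is_strictly_diagonally_dominant(A), is_m_matrix(A))   # expected: True True
```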
3. The new generalized Gauss-Seidel (NGGS) methods
Here, we discuss the two NGGS methods: Method I denotes the first method, while Method II
denotes the second.
Consider the AVE
Ax − |x| = b. (3.1)
Let
A = D_A − L − U = (Ω̄ + D_A − L) − (Ω̄ + U), (3.2)
where D_A, L, and U are, respectively, the diagonal, the strictly lower-triangular, and the strictly upper-triangular parts of A.
Moreover, Ω̄ = Ψ(2 − Ψ)(I − D)^{-1}, where 0 ≤ Ψ ≤ 2 and I stands for the identity matrix.
Using Eqs (3.1) and (3.2), Method I is defined by the iteration in Eqs (3.3) and (3.4) (written out explicitly in
step (3) of the algorithm in the Appendix), where i = 0, 1, 2, ... and 0 < λ ≤ 1. Note that if λ = 1 and Ω̄ = 0, then Eq (3.4) reduces to
the GGS method [11].
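To make the splitting concrete, the short Python sketch below extracts D_A, L, and U from A according to Eq (3.2) and forms Ω̄. We read the matrix D in the formula Ψ(2 − Ψ)(I − D)^{-1} as D_A; this reading and the function name `nggs_splitting` are our assumptions, not taken from the original text.
```python
import numpy as np

def nggs_splitting(A, psi):
    """Split A = D_A - L - U as in Eq (3.2) and form
    Omega_bar = Psi(2 - Psi)(I - D_A)^{-1}  (reading D as D_A; 0 <= Psi <= 2)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    D_A = np.diag(np.diag(A))      # diagonal part of A
    L = -np.tril(A, k=-1)          # strictly lower-triangular part (minus-sign convention)
    U = -np.triu(A, k=1)           # strictly upper-triangular part (minus-sign convention)
    assert np.allclose(A, D_A - L - U)
    Omega_bar = psi * (2.0 - psi) * np.linalg.inv(np.eye(n) - D_A)
    return D_A, L, U, Omega_bar
```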
In order to demonstrate the convergence of Method I, we prove the theorem listed below.
Theorem 3.1. Assume that the diagonal elements of the matrix A are all greater than one, and that the matrix D_A − L − I
is strictly row diagonally dominant. If
‖(Ω̄ + D_A − λL)^{-1}[(1 − λ)(Ω̄ + D_A) + λ(Ω̄ + U)]‖_∞ < 1 − λ‖(Ω̄ + D_A − λL)^{-1}‖_∞, (3.5)
then the sequence {x_i} generated by Method I converges to the unique solution x* of the AVE (1.1).
Proof. We first show that ‖(Ω̄ + D_A − λL)^{-1}‖_∞ < 1. Clearly, if we put L = 0, then
According to this inequality, the convergence of Method I is guaranteed when condition (3.5) is
fulfilled.
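In practice, condition (3.5) can be verified numerically before running Method I. The sketch below (the function name is ours) evaluates both sides of the inequality in the infinity norm for given splitting matrices, which can be produced, e.g., by the `nggs_splitting` helper sketched after Eq (3.2).
```python
import numpy as np

def check_condition_3_5(D_A, L, U, Omega_bar, lam):
    """Evaluate both sides of condition (3.5) and report whether it holds."""
    Minv = np.linalg.inv(Omega_bar + D_A - lam * L)
    lhs = np.linalg.norm(Minv @ ((1.0 - lam) * (Omega_bar + D_A) + lam * (Omega_bar + U)),
                         ord=np.inf)
    rhs = 1.0 - lam * np.linalg.norm(Minv, ord=np.inf)
    return lhs, rhs, bool(lhs < rhs)
```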
In order to demonstrate the convergence of Method II, we prove the theorem listed below.
Theorem 3.2. Assume that the diagonal elements of the matrix A are all greater than one, and that the matrix D_A − L − I
is strictly row diagonally dominant. Then the sequence {x_i} generated by Method II converges
to the unique solution x* of the AVE (1.1).
Proof. The uniqueness result follows from Theorem 3.1. To demonstrate the convergence, consider
x_{i+1} − x* = λ(Ω̄ + D_A − λL)^{-1}|x_{i+1}| + (Ω̄ + D_A − λL)^{-1}[((1 − λ)(Ω̄ + D_A) + λ(Ω̄ + U))x_{i+1} + λb]
− (λ(Ω̄ + D_A − λL)^{-1}|x*| + (Ω̄ + D_A − λL)^{-1}[((1 − λ)(Ω̄ + D_A) + λ(Ω̄ + U))x* + λb]),
from which it follows that
Ax_{i+1} − |x_{i+1}| = b.
Therefore, x_{i+1} solves the AVE (1.1).
4. Numerical tests
The purpose of this section is to present a number of numerical tests that demonstrate the
effectiveness of the new approaches from three perspectives: the number of iteration steps (Itr), the computing time
(Time), and the relative residual norm (RVS). Here, RVS is defined by
RVS := ‖Ax_i − |x_i| − b‖_2 / ‖b‖_2,
and the iterations are terminated once RVS ≤ 10^{-6}.
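For concreteness, the stopping test can be written as the short helper below (a sketch; the function name `rvs` is ours).
```python
import numpy as np

def rvs(A, x, b):
    """Relative residual RVS = ||A x - |x| - b||_2 / ||b||_2."""
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)

# the iteration is stopped as soon as rvs(A, x_i, b) <= 1e-6
```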
All calculations are run on an Intel(R) Core(TM) i5-3337U CPU with 4 GB RAM at 1.80 GHz, using MATLAB
R2016a. Furthermore, the zero vector is used as the initial vector for Problem 4.1.
Table 1 reports, for all methods, the computed solution x* for various values of n. Clearly, Method
I is more effective than the SLM and SSM procedures, and the ‘Time’ of Method I is less than that of the GGS method.
Moreover, Method II demonstrates high computational performance in terms of both ‘Itr’ and
‘Time’.
Problem 4.2. Let A = M + I ∈ R^{n×n} and the vector b = Ax* − |x*| ∈ R^n, such that
Problem 4.3. Let A = M + 4I ∈ R^{n×n} and the vector b = Ax* − |x*| ∈ R^n, such that
M = Trd(−I, H, −I) ∈ R^{n×n}, x* = ((−1)^1, (−1)^2, ..., (−1)^n)^T ∈ R^n,
where H = Trd(−1, 4, −1) ∈ R^{v×v}, I ∈ R^{v×v} is the identity matrix, and n = v^2. In this problem, we use the
same initial vector and stopping criteria described in [12]. We compare the offered procedures with
the AOR method [21], the mixed-type splitting (MT) iterative scheme [12], and the technique
presented in [14] (expressed by SISA). The computational outcomes are listed in Table 3.
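A possible construction of the Problem 4.3 data in Python is sketched below; the helper name and the Kronecker-product formulation are ours, chosen to reproduce the block tridiagonal structure Trd(−I, H, −I) described above.
```python
import numpy as np

def problem_4_3_data(v):
    """Build A = M + 4I with M = Trd(-I, H, -I), H = Trd(-1, 4, -1) of order v,
    n = v^2, x*_i = (-1)^i, and b = A x* - |x*|."""
    n = v * v
    H = 4.0 * np.eye(v) - np.eye(v, k=1) - np.eye(v, k=-1)   # Trd(-1, 4, -1)
    T = np.eye(v, k=1) + np.eye(v, k=-1)                     # pattern of the off-diagonal blocks
    M = np.kron(np.eye(v), H) - np.kron(T, np.eye(v))        # block tridiagonal Trd(-I, H, -I)
    A = M + 4.0 * np.eye(n)
    x_star = np.array([(-1.0) ** i for i in range(1, n + 1)])
    b = A @ x_star - np.abs(x_star)
    return A, x_star, b
```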
Table 3 reports, for all methods, the computed solution x* for various values of n. Clearly, Method
I is more effective than the AOR and MT procedures, and the ‘Time’ of Method I is less than that of the SISA
method. Moreover, Method II demonstrates high computational performance in terms of both
‘Itr’ and ‘Time’.
Problem 4.4. Let
A = Trd(−1, 8, −1) ∈ R^{n×n}, x* = ((−1)^1, (−1)^2, ..., (−1)^n)^T ∈ R^n,
and b = Ax* − |x*| ∈ R^n. We use the same initial vector and stopping criterion described in [14]. We
compare the novel approaches with the technique offered in [14] (expressed by SISA, using ω = 1.0455),
the SOR-like method proposed in [19] (written as SOR), and the modulus-based SOR method presented
in [42] (written as MSOR). The outcomes are listed in Table 4.
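The Problem 4.4 data admit a similar construction; as before, the helper name is ours.
```python
import numpy as np

def problem_4_4_data(n):
    """Build A = Trd(-1, 8, -1), x*_i = (-1)^i, and b = A x* - |x*|."""
    A = 8.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    x_star = np.array([(-1.0) ** i for i in range(1, n + 1)])
    b = A @ x_star - np.abs(x_star)
    return A, x_star, b
```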
It is clear from Table 4 that all the tested methods solve the AVE (1.1) quickly. We
observe that the ‘Itr’ and ‘Time’ of the recommended methods are smaller than those of the existing techniques.
These results indicate that our suggested methods for AVEs are feasible and highly effective.
5. Conclusions
In this work, two NGGS methods (Method I and Method II) are presented to solve the AVEs. The
convergence properties of the strategies are examined. A number of experiments have been conducted
in order to establish the effectiveness of the new approaches.
The GGS technique has been successfully extended by two additional parameters for the case where A is an
M-matrix. The case of more general coefficient matrices is the next issue to be considered.
Appendix
The following is an explanation of how our proposed techniques can be implemented. From Ax −
|x| = b, we have
x = A^{-1}(|x| + b).
Thus, we can approximate x_{i+1} as follows:
x_{i+1} ≈ A^{-1}(|x_i| + b).
This process is known as the Picard technique [31]. Now, we examine the procedure for Method I.
Algorithm for Method I.
(1) Choose the parameters, a starting vector x_0 ∈ R^n, and set i = 0.
(2) Compute y_i = A^{-1}(|x_i| + b) (the Picard approximation of x_{i+1}).
(3) Calculate x_{i+1} = λ(Ω̄ + D_A − λL)^{-1}|y_i| + (Ω̄ + D_A − λL)^{-1}[((1 − λ)(Ω̄ + D_A) + λ(Ω̄ + U))x_i + λb].
(4) If x_{i+1} = x_i, then stop. Otherwise, set i = i + 1 and return to step (2).
For Method II, follow the same steps.
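The algorithm above translates directly into code. The following Python sketch implements Method I under our assumptions: the matrix D in the definition of Ω̄ is read as D_A, the iteration is stopped with the residual test RVS ≤ 10^{-6} from Section 4 instead of the exact equality test in step (4), and the default parameters λ = Ψ = 1 are illustrative rather than the tuned values used in the experiments. Method II is not sketched here.
```python
import numpy as np

def nggs_method_1(A, b, lam=1.0, psi=1.0, tol=1e-6, max_iter=1000):
    """Sketch of Method I: a Picard predictor followed by the NGGS update of step (3)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    D_A = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)
    U = -np.triu(A, k=1)
    Omega_bar = psi * (2.0 - psi) * np.linalg.inv(np.eye(n) - D_A)
    Minv = np.linalg.inv(Omega_bar + D_A - lam * L)             # (Omega_bar + D_A - lam*L)^{-1}
    N = (1.0 - lam) * (Omega_bar + D_A) + lam * (Omega_bar + U)

    x = np.zeros(n)                                             # zero starting vector
    for it in range(1, max_iter + 1):
        y = np.linalg.solve(A, np.abs(x) + b)                   # step (2): Picard predictor
        x = lam * Minv @ np.abs(y) + Minv @ (N @ x + lam * b)   # step (3): NGGS update
        if np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b) <= tol:
            break
    return x, it
```
Combined with the problem_4_4_data helper sketched earlier, a typical call is A, x_star, b = problem_4_4_data(1000) followed by x, it = nggs_method_1(A, b).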
Conflict of interest
References
1. J. Feng, S. Liu, An improved generalized Newton method for absolute value equations,
SpringerPlus, 5 (2016), 1042. https://doi.org/10.1186/s40064-016-2720-5
2. J. Feng, S. Liu, A new two-step iterative method for solving absolute value equations, J. Inequal.
Appl., 2019 (2019), 39. https://doi.org/10.1186/s13660-019-1969-y
3. L. Abdallah, M. Haddou, T. Migot, Solving absolute value equation using
complementarity and smoothing functions, J. Comput. Appl. Math., 327 (2018), 196–207.
https://doi.org/10.1016/j.cam.2017.06.019
4. F. Mezzadri, On the solution of general absolute value equations, Appl. Math. Lett., 107 (2020),
106462. https://doi.org/10.1016/j.aml.2020.106462
5. I. Ullah, R. Ali, H. Nawab, Abdussatar, I. Uddin, T. Muhammad, et al., Theoretical analysis of
activation energy effect on Prandtl-Eyring nanoliquid flow subject to melting condition, J. Non-
Equil. Thermody., 47 (2022), 1–12. https://doi.org/10.1515/jnet-2020-0092
6. M. Amin, M. Erfanian, A dynamic model to solve the absolute value equations, J. Comput. Appl.
Math., 333 (2018), 28–35. https://doi.org/10.1016/j.cam.2017.09.032
7. L. Caccetta, B. Qu, G. L. Zhou, A globally and quadratically convergent method for absolute value
equations, Comput. Optim. Appl., 48 (2011), 45–58. https://doi.org/10.1007/s10589-009-9242-9
8. C. Chen, D. Yu, D. Han, Optimal parameter for the SOR-like iteration method for solving the
system of absolute value equations, arXiv. Available from:
https://arxiv.org/abs/2001.05781.
9. M. Dehghan, A. Shirilord, Matrix multisplitting Picard-iterative method for solving
generalized absolute value matrix equation, Appl. Numer. Math., 158 (2020), 425–438.
https://doi.org/10.1016/j.apnum.2020.08.001
10. X. Dong, X. H. Shao, H. L. Shen, A new SOR-like method for solving absolute value equations,
Appl. Numer. Math., 156 (2020), 410–421. https://doi.org/10.1016/j.apnum.2020.05.013
11. V. Edalatpour, D. Hezari, D. K. Salkuyeh, A generalization of the Gauss-Seidel iteration
method for solving absolute value equations, Appl. Math. Comput., 293 (2017), 156–167.
https://doi.org/10.1016/j.amc.2016.08.020
12. A. J. Fakharzadeh, N. N. Shams, An efficient algorithm for solving absolute value equations, J.
Math. Ext., 15 (2021), 1–23. https://doi.org/10.30495/JME.2021.1393
13. X. M. Gu, T. Z. Huang, H. B. Li, S. F. Wang, L. Li, Two-CSCS based iteration
methods for solving absolute value equations, J. Appl. Math. Comput., 7 (2017), 1336–1356.
https://doi.org/10.11948/2017082
14. P. Guo, S. L. Wu, C. X. Li, On the SOR-like iteration method for solving absolute value equations,
Appl. Math. Lett., 97 (2019), 107–113. https://doi.org/10.1016/j.aml.2019.03.033
15. F. Hashemi, S. Ketabchi, Numerical comparisons of smoothing functions for optimal correction
of an infeasible system of absolute value equations, Numer. Algebra Control Optim., 10 (2020),
13–21. https://doi.org/10.3934/naco.2019029
16. I. Uddin, I. Ullah, R. Ali, I. Khan, K. S. Nisar, Numerical analysis of nonlinear mixed convective
MHD chemically reacting flow of Prandtl-Eyring nanofluids in the presence of activation energy
and Joule heating, J. Therm. Anal. Calorim., 145 (2021), 495–505. https://doi.org/10.1007/s10973-
020-09574-2
17. S. L. Hu, Z. H. Huang, A note on absolute value equations, Optim. Lett., 4 (2010), 417–424.
https://doi.org/10.1007/s11590-009-0169-y
18. S. Ketabchi, H. Moosaei, Minimum norm solution to the absolute value equation in the convex
case, J. Optim. Theory Appl., 154 (2012), 1080–1087. https://doi.org/10.1007/s10957-012-0044-3
19. Y. F. Ke, C. F. Ma, SOR-like iteration method for solving absolute value equations, Appl. Math.
Comput., 311 (2017), 195–202. https://doi.org/10.1016/j.amc.2017.05.035
20. Y. F. Ke, The new iteration algorithm for absolute value equation, Appl. Math. Lett., 99 (2020),
105990. https://doi.org/10.1016/j.aml.2019.07.021
21. C. X. Li, A preconditioned AOR iterative method for the absolute value equations, Int. J. Comput.
Methods, 14 (2017), 1750016. https://doi.org/10.1142/S0219876217500165
22. H. Moosaei, S. Ketabchi, M. A. Noor, J. Iqbal, V. Hooshyarbakhsh, Some techniques
for solving absolute value equations, Appl. Math. Comput., 268 (2015), 696–705.
https://doi.org/10.1016/j.amc.2015.06.072
23. O. L. Mangasarian, R. R. Meyer, Absolute value equation, Linear Algebra Appl., 419 (2006), 359–
367. https://doi.org/10.1016/j.laa.2006.05.004
24. O. L. Mangasarian, Absolute value programming, Comput. Optim. Appl., 36 (2007), 43–53.
https://doi.org/10.1007/s10589-006-0395-5
25. O. L. Mangasarian, Absolute value equation solution via concave minimization, Optim. Lett., 1
(2007), 3–8. https://doi.org/10.1007/s11590-006-0005-6
26. O. L. Mangasarian, Linear complementarity as absolute value equation solution, Optim. Lett., 8
(2014), 1529–1534. https://doi.org/10.1007/s11590-013-0656-z
27. X. H. Miao, J. T. Yang, B. Saheya, J. S. Chen, A smoothing Newton method for absolute
value equation associated with second-order cone, Appl. Numer. Math., 120 (2017), 82–96.
https://doi.org/10.1016/j.apnum.2017.04.012
28. C. T. Nguyen, B. Saheya, Y. L. Chang, J. S. Chen, Unified smoothing functions for absolute
value equation associated with second-order cone, Appl. Numer. Math., 135 (2019), 206–227.
https://doi.org/10.1016/j.apnum.2018.08.019
29. O. A. Prokopyev, On equivalent reformulations for absolute value equations, Comput. Optim. Appl.,
44 (2009), 363. https://doi.org/10.1007/s10589-007-9158-1
30. J. Rohn, A theorem of the alternatives for the equation Ax + B|x| = b, Linear Multilinear Algebra,
52 (2004), 421–426. https://doi.org/10.1080/0308108042000220686
31. J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value
equations and sufficient conditions for unique solvability, Optim. Lett., 8 (2014), 35–44.
https://doi.org/10.1007/s11590-012-0560-y
32. B. Saheya, C. H. Yu, J. S. Chen, Numerical comparisons based on four smoothing
functions for absolute value equation, J. Appl. Math. Comput., 56 (2018), 131–149.
https://doi.org/10.1007/s12190-016-1065-0
33. R. S. Varga, Matrix iterative analysis, New Jersey: Prentice-Hall, Englewood Cliffs, 1962.
34. S. L. Wu, C. X. Li, A special shift splitting iteration method for absolute value equation, AIMS
Math., 5 (2020), 5171–5183. https://doi.org/10.3934/math.2020332
35. S. L. Wu, The unique solution of a class of the new generalized absolute value equation, Appl.
Math. Lett., 116 (2021), 107029. https://doi.org/10.1016/j.aml.2021.107029
36. R. Ali, M. R. Khan, A. Abidi, S. Rasheed, A. M. Galal, Application of PEST and PEHF in
magneto-Williamson nanofluid depending on the suction/injection, Case Stud. Therm. Eng., 27
(2021), 101329. https://doi.org/10.1016/j.csite.2021.101329
37. C. X. Li, S. L. Wu, Modified SOR-like iteration method for absolute value equations, Math. Probl.
Eng., 2020 (2020), 9231639. https://doi.org/10.1155/2020/9231639
38. M. R. Khan, M. X. Li, S. P. Mao, R. Ali, S. Khan, Comparative study on heat transfer and friction
drag in the flow of various hybrid nanofluids effected by aligned magnetic field and nonlinear
radiation, Sci. Rep., 11 (2021), 3691.
39. J. Y. Bello Cruz, O. P. Ferreira, L. F. Prudente, On the global convergence of the inexact semi-
smooth Newton method for absolute value equation, Comput. Optim. Appl., 65 (2016), 93–108.
https://doi.org/10.1007/s10589-016-9837-x
40. G. Ning, Y. Zhou, An improved differential evolution algorithm for solving absolute value
equations, In: J. Xie, Z. Chen, C. Douglas, W. Zhang, Y. Chen, High performance
computing and applications, Lecture Notes in Computer Science, Springer, 9576 (2016), 38–47.
https://doi.org/10.1007/978-3-319-32557-6
41. D. K. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8
(2014), 2191–2202. https://doi.org/10.1007/s11590-014-0727-9
42. Z. Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems,
Numer. Linear Algebra Appl., 17 (2010), 917–933. https://doi.org/10.1002/nla.680
43. R. Ali, A. Ali, S. Iqbal, Iterative methods for solving absolute value equations, J. Math. Comput.
Sci., 26 (2022), 322–329. https://doi.org/10.22436/jmcs.026.04.01