A New Class of Quasi Newton Updating Formulas For Unconstrained Optimization
To cite this article: Basim A. Hassan & Mohammed Abdulrazaq Kahya (2021), "A new class of quasi-Newton updating formulas for unconstrained optimization", Journal of Interdisciplinary Mathematics, 24:8, 2355-2366, DOI: 10.1080/09720502.2021.1961980
Basim A. Hassan †
Department of Mathematics
College of Computer Sciences and Mathematics
University of Mosul
Mosul, Iraq
Abstract
Problems in the sciences that involve minimizing an objective function of real variables, without restrictions on their values, are called unconstrained optimization problems. Quasi-Newton methods are among the most common approaches for solving unconstrained optimization problems, and the quasi-Newton equation is their central ingredient. In this paper, we extend the quasi-Newton equation introduced by Razieh et al. [1] and present some new quasi-Newton methods. The convergence behavior of the proposed methods is discussed. Numerical results show that the proposed method performs better than competitive methods and is able to solve unconstrained optimization problems.
† E-mail: [email protected]
* E-mail: [email protected] (Corresponding Author)
1. Introduction
Many iterative methods have been proposed to find a minimizer of a problem of the form $\min f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}$, $f \in C^2$; for more details see [2]. Quasi-Newton methods are an important class of iterative methods for solving optimization problems. The goal of a quasi-Newton method is to build a good approximation to the Hessian matrix by constructing a sequence of symmetric positive definite matrices $B_{k+1}$. The quasi-Newton methods take the form:
$$x_{k+1} = x_k + \alpha_k d_k \qquad (1)$$
where the step length $\alpha_k$ is computed by a line search, for instance one satisfying the Wolfe conditions with parameters $0 < \delta < \sigma < 1$ [3]. The search direction $d_k$ is obtained by solving:

$$B_k d_k + g_k = 0 \qquad (5)$$
where $g_k = \nabla f(x_k)$, and the matrices $\{B_k\}$ approximate the Hessian matrix $\nabla^2 f(x_k)$ and satisfy the quasi-Newton equation:

$$B_{k+1} s_k = y_k \qquad (6)$$

with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$.
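For concreteness, one iteration of the scheme (1), (5), (6) can be sketched in Python/NumPy as follows (an illustration only, not the paper's Matlab code; the helper name and the externally supplied step length are assumptions of the sketch):

```python
import numpy as np

def quasi_newton_step(grad, B, x, alpha):
    """One quasi-Newton iteration: solve (5) for the search direction,
    take the step (1), and form the secant pair appearing in (6).
    In the actual methods alpha comes from a Wolfe line search."""
    g = grad(x)
    d = np.linalg.solve(B, -g)   # B_k d_k + g_k = 0, i.e. (5)
    x_new = x + alpha * d        # x_{k+1} = x_k + alpha_k d_k, i.e. (1)
    s = x_new - x                # s_k = x_{k+1} - x_k
    y = grad(x_new) - g          # y_k = g_{k+1} - g_k
    return x_new, s, y
```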
Writing $H_k = B_k^{-1}$ for the inverse Hessian approximation, the BFGS update is:

$$H_{k+1}^{\mathrm{BFGS}} = H_k - \frac{H_k y_k s_k^T + s_k y_k^T H_k}{s_k^T y_k} + \left[1 + \frac{y_k^T H_k y_k}{s_k^T y_k}\right] \frac{s_k s_k^T}{s_k^T y_k}. \qquad (8)$$
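A direct transcription of (8) in NumPy (a sketch for illustration; the paper's experiments were written in Matlab):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """BFGS inverse-Hessian update (8); the curvature condition
    s^T y > 0 must hold for H to stay positive definite."""
    sy = s @ y                  # s_k^T y_k
    Hy = H @ y                  # H_k y_k
    return (H
            - (np.outer(Hy, s) + np.outer(s, Hy)) / sy
            + (1.0 + (y @ Hy) / sy) * np.outer(s, s) / sy)
```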
The BFGS method gives accurate numerical results, but it may fail to converge for non-convex functions with an inexact line search [6]. To overcome this convergence problem, different approaches have been proposed that modify the quasi-Newton equation in order to obtain different updating formulas approximating the Hessian matrix (Wei et al. [7], Biglari et al. [8], Chen et al. [9] and Hassan [10]). Many other modified methods were presented in [3], [11]–[14]. An important point to be addressed for BFGS methods (and quasi-Newton methods in general) is the global convergence property when solving nonconvex minimization problems; this has been regarded as one of the most fundamental problems in the field of unconstrained optimization [13]. In particular, many authors have proved global convergence of quasi-Newton methods via enhanced line search techniques [12], [14]. On the other hand, another way to obtain global convergence is through improved approximations of the Hessian matrix. In order to overcome the BFGS hurdles and to get a more accurate approximation of the second-order curvature of the objective function, we present an alternative quasi-Newton equation as below:
$$s_k^T B_{k+1} s_k = \frac{1}{\gamma}\, s_k^T y_k + \frac{12}{\gamma}\,(f_k - f_{k+1}) + \frac{5}{\gamma}\, g_{k+1}^T s_k + \left(\frac{8-\alpha_k-\gamma}{\gamma}\right) g_k^T s_k \qquad (12)$$

Writing the right-hand side as $s_k^T \hat{y}_k$, the modified quasi-Newton equation becomes $B_{k+1} s_k = \hat{y}_k$, with

$$s_k^T \hat{y}_k = \frac{1}{\gamma}\, s_k^T y_k + \frac{12}{\gamma}\,(f_k - f_{k+1}) + \frac{5}{\gamma}\, g_{k+1}^T s_k + \left(\frac{8-\alpha_k-\gamma}{\gamma}\right) g_k^T s_k \qquad (14)$$

$$= \frac{12}{\gamma}\,(f_k - f_{k+1}) + \frac{6}{\gamma}\, s_k^T g_{k+1} + \left(\frac{7-\alpha_k-\gamma}{\gamma}\right) g_k^T s_k,$$

where the second line uses $y_k = g_{k+1} - g_k$.
Using the Wolfe conditions with the above equation, we get:

$$s_k^T \hat{y}_k \ge g_k^T d_k \left(-\frac{12}{\gamma}\,\delta\alpha_k + \frac{6}{\gamma}\,\sigma\alpha_k + \frac{7-\alpha_k-\gamma}{\gamma}\,\alpha_k\right) \qquad (15)$$
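Inequality (15) follows by applying the two Wolfe inequalities to (14): the sufficient-decrease condition bounds $f_k - f_{k+1}$ from below by $-\delta \alpha_k g_k^T d_k$, and the curvature condition bounds $s_k^T g_{k+1}$ from below by $\sigma \alpha_k g_k^T d_k$. For reference, a sketch of the standard Wolfe test (assuming the usual two-condition form; the helper name is illustrative):

```python
def wolfe_conditions_hold(f, grad, x, d, alpha, delta=0.1, sigma=0.9):
    """Standard Wolfe conditions with 0 < delta < sigma < 1:
    sufficient decrease: f(x + a*d) <= f(x) + delta*a*g^T d
    curvature:           grad(x + a*d)^T d >= sigma*g^T d
    The defaults are the values used in Section 4."""
    gd = grad(x) @ d   # g_k^T d_k, negative for a descent direction
    decrease = f(x + alpha * d) <= f(x) + delta * alpha * gd
    curvature = grad(x + alpha * d) @ d >= sigma * gd
    return decrease and curvature
```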
Noting that $s_k^T g_k = \alpha_k d_k^T g_k < 0$, we know that there exists a constant $\omega < 0$ such that:

$$s_k^T \hat{y}_k \ge \omega\, d_k^T g_k > 0. \qquad (17)$$
Algorithm 1
and

$$\|g(x)\| \le \bar{g}, \qquad \forall\, x, y \in N \qquad (19)$$
Theorem 2: Let $\{x_k\}$ be generated by the new algorithm, and suppose there exist constants $a_1$ and $a_2$ satisfying:

$$\|B_k s_k\| \le a_1 \|s_k\| \quad \text{and} \quad s_k^T B_k s_k \ge a_2 \|s_k\|^2 \qquad (21)$$
$$s_k^T \hat{y}_k = \frac{1}{\gamma}\, s_k^T y_k + \max\left\{0,\ \frac{12}{\gamma}(f_k - f_{k+1}) + \frac{5}{\gamma} g_{k+1}^T s_k + \left(\frac{8-\alpha_k-\gamma}{\gamma}\right) g_k^T s_k\right\} \ge \frac{1}{\gamma}\, s_k^T y_k. \qquad (26)$$
and
$$\hat{y}_k = \frac{1}{\gamma}\, y_k + \frac{\max\left\{0,\ \frac{12}{\gamma}(f_k - f_{k+1}) + \frac{5}{\gamma} g_{k+1}^T s_k + \left(\frac{8-\alpha_k-\gamma}{\gamma}\right) g_k^T s_k\right\}}{s_k^T r_k}\, r_k$$

so that

$$\|\hat{y}_k\| \le \frac{1}{\gamma}\,\|y_k\| + \|y_k\|. \qquad (27)$$
Hence

$$\frac{\|\hat{y}_k\|^2}{s_k^T \hat{y}_k} \le \frac{\left[\frac{1}{\gamma}+1\right]^2 \|y_k\|^2}{\frac{1}{\gamma}\, s_k^T y_k} \le M. \qquad (28)$$
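A sketch of the capped construction of $\hat{y}_k$ in NumPy (for illustration; the paper leaves $r_k$ generic, so the choice $r_k = s_k$, which gives $s_k^T r_k = \|s_k\|^2 > 0$, is an assumption of this sketch):

```python
import numpy as np

def modified_y(s, y, fk, fk1, gk, gk1, alpha, gamma):
    """Modified secant vector y_hat with the correction term capped
    at zero as in (26). r_k = s_k is an assumed choice; the paper
    only requires a vector with s_k^T r_k > 0."""
    theta = ((12.0 / gamma) * (fk - fk1)
             + (5.0 / gamma) * (gk1 @ s)
             + ((8.0 - alpha - gamma) / gamma) * (gk @ s))
    r = s                        # assumed choice of r_k
    return y / gamma + (max(0.0, theta) / (s @ r)) * r
```

By construction $s_k^T \hat{y}_k = \frac{1}{\gamma} s_k^T y_k + \max\{0, \theta_k\}$, which reproduces (26) for any admissible $r_k$.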
4. Numerical Experiments
For a fair test of method performance, the proposed method (an extension of BFGS) was compared with the standard BFGS method and the BBFGS method proposed by Hassan [10].

The following tables contain the number of iterations and the number of function evaluations. The methods were terminated by the Himmelblau stopping rule as used by Yuan et al. [18]: if $|f(x_k)| > 10^{-5}$, let stop1 $= |f(x_k) - f(x_{k+1})| / |f(x_k)|$; otherwise, let stop1 $= |f(x_k) - f(x_{k+1})|$. For every problem, the program is stopped as soon as $\|g_k\| < \varepsilon$ or stop1 $< 10^{-5}$ is satisfied. In all algorithms $H_0 = I$, where $I$ is the identity matrix, and the step length $\alpha_k$ was computed to satisfy the Wolfe conditions with $\delta = 0.1$, $\sigma = 0.9$ and $\varepsilon = 10^{-5}$. All programs were written in Matlab and run on 30 test problems, the same benchmark problems used by Moré et al. [19].
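The stopping test translates directly into code; a minimal sketch (function name and argument layout are illustrative):

```python
def himmelblau_stop(fk, fk1, gnorm, eps=1e-5, tol=1e-5):
    """Himmelblau stopping rule of Section 4: stop when ||g_k|| < eps
    or when stop1 < tol, where stop1 is the relative decrease in f
    if |f(x_k)| > 1e-5 and the absolute decrease otherwise."""
    stop1 = abs(fk - fk1) / abs(fk) if abs(fk) > 1e-5 else abs(fk - fk1)
    return gnorm < eps or stop1 < tol
```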
Table 1
Comparison between the influence of gamma values on the proposed method and rival methods in terms of number of iterations (NI) and number of function evaluations (NF). The five NI/NF column pairs correspond, from left to right, to: the BFGS algorithm, the BBFGS algorithm, BFGS with $\gamma = 5$, BFGS with $\gamma = 12$, and BFGS with $\gamma = (8 - \alpha_k)/2$.
Problems n NI NF NI NF NI NF NI NF NI NF
ROSE 2 35 140 13 51 8 35 5 17 21 90
FROTH 2 9 26 9 26 7 22 9 30 8 47
BADSCB 2 3 30 3 30 3 30 3 30 3 30
BEALE 2 15 50 15 51 10 38 7 24 6 46
JENSAM 2 2 27 2 27 2 27 2 27 2 27
BARD 3 16 54 9 28 17 56 12 38 11 38
GAUSS 3 2 4 2 4 2 4 2 4 2 4
BOX 3 2 27 2 27 2 27 2 27 2 27
SING 4 20 60 34 105 26 104 25 90 28 98
WOOD 4 19 61 9 33 9 32 8 29 8 28
KOWOSB 4 21 65 26 94 11 32 8 24 11 33
BD 4 17 54 13 44 11 40 6 21 11 39
OSB1 5 2 27 2 27 2 27 2 27 2 27
BIGGS 6 25 72 4 12 3 9 3 9 3 9
OSB2 11 3 31 3 31 3 31 3 31 3 31
WATSON 20 31 102 6 20 6 43 5 17 6 21
ROSEX 100 231 806 10 41 8 35 5 17 10 50
SINGX 400 64 209 31 116 40 143 29 105 37 135
PEN1 400 2 27 2 27 2 27 2 27 2 27
PEN2 200 2 5 2 5 2 5 2 5 2 5
VARDIM 100 2 27 2 27 2 27 2 27 2 27
TRIG 500 9 33 8 33 9 32 5 18 10 36
BV 500 2 4 2 4 2 4 2 4 2 4
IE 500 6 16 6 16 8 22 12 34 7 19
TRID 500 53 170 12 45 11 41 9 33 11 42
BAND 500 57 281 9 54 5 17 5 17 5 17
LIN 500 2 4 2 4 2 4 2 4 2 4
LIN1 500 3 7 3 7 3 7 3 7 3 7
LIN0 500 3 7 3 7 3 7 3 7 3 7
Total 658 2426 244 996 219 928 183 750 223 975
Table 2
Relative efficiency of the proposed method compared to BFGS
Table 3
Relative efficiency of the proposed method compared to BBFGS
5. Conclusion
In this paper, a new class of quasi-Newton updating formulas was investigated to show the efficiency and effectiveness of updating techniques based on the quasi-Newton equation. The proposed approach was presented as an extension of the quasi-Newton equation, inspired by the technique presented by Razieh et al. [1]. Numerical experiments showed that the proposed approach is effective on the test problems. From Table 1 it can be concluded that BFGS with the values of gamma recommended in this paper outperformed BFGS and BBFGS. In particular, the proposed method with gamma = 12 was the best of all: it reduced the number of iterations and function evaluations, and it delivered high numerical accuracy with more favorable convergence. Based on the above discussion, the proposed method can be recommended as a successful approach for solving unconstrained optimization problems.
References
[12] G. Yuan, Z. Wei, and Y. Wu, “Modified limited memory BFGS method
with nonmonotone line search for unconstrained optimization,” J.
Korean Math. Soc., vol. 47, no. 4, pp. 767–788, 2010.
[13] Z. Wan, K. L. Teo, X. Shen, and C. Hu, “New BFGS method for
unconstrained optimization problem based on modified Armijo line
search,” Optimization, vol. 63, no. 2, pp. 285–304, 2014.
[14] P. Mtagulwa and P. Kaelo, “A convergent modified HS-DY hybrid
conjugate gradient method for unconstrained optimization
problems,” J. Inf. Optim. Sci., vol. 40, no. 1, pp. 97–113, 2019.
[15] B. A. Hassan and M. W. Taha, “A new variants of quasi-newton
equation based on the quadratic function for unconstrained
optimization,” Indones. J. Electr. Eng. Comput. Sci., vol. 19, no. 2, pp.
701–708, 2020.
[16] X. Fang, Q. Ni, and M. Zeng, “A modified quasi-Newton method for
nonlinear equations,” J. Comput. Appl. Math., vol. 328, pp. 44–58, 2018.
[17] Y. H. Xiao, Z. X. Wei, and L. Zhang, “A modified BFGS method
without line searches for nonconvex unconstrained optimization,”
Adv. Theor. Appl. Math, vol. 1, no. 2, pp. 149–162, 2006.
[18] W. Findeisen, J. Szymanowski, and A. Wierzbicki, “Theory and
Methods of Optimization,” Warsaw Polish Sci. Publ., 1977.
[19] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing unconstrained
optimization software,” ACM Trans. Math. Softw., vol. 7, no. 1, pp.
17–41, 1981.