Talib Dbouk*,1 and Jean-Luc Harion1
American Journal of Algorithms and Computing (2015), Vol. 2, No. 1, pp. 32-56
Received: 16 March 2015; Received in revised form: 22 May 2015; Accepted: 12 June 2015; Published online: 27 June 2015
* Corresponding e-mail: [email protected]
1 Mines Douai, EI, F-59508 Douai, France. University of Lille.
Abstract
A new optimization algorithm is developed in this work based on the globally convergent method of moving asymptotes (GCMMA) of Svanberg (1987, 2002). The new algorithm relies on a new internal iterative strategy, and it is analyzed via a parametric study for the best possible performance. It serves as an important numerical tool (written in C++ with external linear algebra libraries) for solving inequality-constrained nonlinear programming problems with a large number of variables. The new numerical tool is applied to two well-known academic problems, both large-scale nonlinear constrained minimization problems. Compared with previous results from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008, 2010, 2011), the convergence time is reduced by a factor of up to 28. The algorithm is an efficient and robust scientific computation tool that can be used in many complex applied engineering optimization problems. Moreover, thanks to a single class of IO (input/output) arguments, it can be easily coupled to many existing external commercial and free multiphysics solvers, such as finite element, finite volume, computational fluid dynamics and solid mechanics solvers.
1. Introduction
The design process of industrial devices is a vital step before any manufacturing procedure. For example, conjugate heat transfer devices are very important, representing different products in many sectors such as the automotive industry, heat exchanger networks, engine design, generators and converters. Optimizing these industrial devices towards designs that are more compact, with less mass, lower frictional losses and increased thermal efficiency, is essential for cost reduction and better performance.
It is well known that the complexity of an optimization problem increases with the
number of design variables. Hence, solving a complex optimization problem requires an efficient optimization algorithm that can handle this complexity and ensures convergence towards an optimal solution.
For these reasons, optimization algorithms are nowadays considered an unavoidable numerical tool, a kind of "Swiss Army knife", for most engineers in many engineering fields such as aerospace, chemical, automotive, electrical, infrastructure, process and manufacturing. These optimization algorithms, when embedded inside multiphysics solvers (i.e. fluid and solid mechanics, heat transfer, electromagnetics, etc.), allow the engineer to design better optimized systems that are more efficient, less expensive and offer improved performance with respect to an initial unoptimized system or design. For example, a civil engineer uses an optimization algorithm to optimize the size, shape or even the material distribution of a bridge, as in the topology optimization of Fig. 1, taken from Bendsoe and Sigmund (2004). A mechanical engineer may use an optimization algorithm to optimize the shape of a Formula One car or an airplane to reduce drag forces, while a thermal engineer may use one to optimize the shape of a solar cell to increase its thermal efficiency.
Fig 1. Taken from Bendsoe and Sigmund (2004). a) Sizing optimization of a truss structure, b) shape optimization and c) topology optimization. The initial problems are shown on the left side and the optimal solutions on the right side.
Moreover, optimization algorithms have recently been gaining more attention in many complex new research and development areas, such as electromagnetics research for designing new microstrip antennas, as done by Hassan et al. (2014).
Over the years, scientists have developed a variety of optimization algorithms depending on the
complexity of the numerical problem to be solved. Among them are linear programming (LP) and nonlinear programming (NLP) methods such as the Simplex, Karmarkar, Fibonacci, Newton and secant methods, penalty function methods, and augmented Lagrangian methods (ALM) (see Rao, 2013).
In the present work, we are interested in developing, via template meta-programming (in C++ with external linear algebra libraries), a fast and high-performance optimization algorithm. This follows an investigation of the numerical performance (convergence and computational cost) of several optimization algorithms from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008, 2010, 2011) on different computer system architectures. When it comes to large-scale problems, these algorithms require further improvements (and dedicated linear algebra libraries) to achieve a high computational speed. By large-scale problems we mean inequality-constrained nonlinear programming problems with a huge number n of bounded variables (n ≥ 10000).
For this type of optimization problem, we present here a fast and high-performance algorithm based on a new iterative strategy (following the method initially provided by Svanberg, 2002) which is globally convergent in solving constrained nonlinear optimization problems. The developed algorithm serves as a powerful numerical tool that can be coupled easily to different existing multiphysics software packages for optimizing many applied engineering problems (Bendsoe and Sigmund, 2004; Rao, 2013), thanks to a simple object-oriented structure: a single C++ class with clear IO arguments, as sketched below.
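To make this coupling concrete, the following minimal C++ sketch shows the kind of single-class, single-IO-set interface we have in mind. The class and method names (MMAOptimizer, update) and the argument layout are illustrative assumptions of ours, not the actual interface of the present code.

```cpp
#include <vector>

// Hypothetical coupling interface (illustrative names only): at each design
// iteration the external multiphysics solver evaluates the objective f0, the
// m constraint values fi and all gradients, then receives updated variables.
class MMAOptimizer {
public:
    MMAOptimizer(int n, int m) : n_(n), m_(m), x_(n, 0.5) {}

    // Single set of IO arguments: df0 has size n, fi has size m,
    // dfi is the m x n constraint Jacobian stored row-major.
    void update(double f0,
                const std::vector<double>& df0,
                const std::vector<double>& fi,
                const std::vector<double>& dfi)
    {
        // ... generate and solve the MMA/GCMMA subproblem here, then
        // write the next iterate into x_ (omitted in this sketch).
        (void)f0; (void)df0; (void)fi; (void)dfi;
    }

    const std::vector<double>& design() const { return x_; }

private:
    int n_, m_;
    std::vector<double> x_;  // current design variables
};
```

An external finite element or CFD solver would simply call update() once per design cycle and read back design().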
The importance of this work is that the new algorithm developed here is optimized in terms of CPU speed via a new internal iterative strategy, and analyzed via a parametric study for the best possible performance, an aspect that had not been tackled before. It serves as an important numerical tool for solving inequality-constrained nonlinear problems with a large number of variables.
2. The MMA and GCMMA optimization methods
The history of the globally convergent method of moving asymptotes (GCMMA) goes back to the initial work of Svanberg (2002), as a member of the NLP class of methods. It is a special optimization algorithm for complex problems in which the objective function (a function of multiple bounded variables) is subject to numerous inequality constraints. It is worth noting that the GCMMA is an extended version of the method of moving asymptotes (MMA) of Svanberg (1987), a famous optimization algorithm (but not globally convergent) that was previously used in numerous structural optimization problems (see Bendsoe and Sigmund, 2004).
Both methods address inequality-constrained optimization problems of the following form:
$$
\begin{aligned}
\text{minimize}\quad & f_0(x) \\
\text{subject to}\quad & f_i(x) \le 0, \quad i = 1,\dots,m \\
& x \in X,
\end{aligned}
\tag{2.1}
$$
where $X = \{\, x \in \Re^n \mid x_j^{\min} \le x_j \le x_j^{\max},\ j = 1,\dots,n \,\}$ is the set of bounded variables.
Following Svanberg's approach, by introducing artificial variables, problem (2.1) can be written in the following general form:
$$
\begin{aligned}
\text{minimize}\quad & f_0(x) + a_0 z + \sum_{i=1}^{m} \left( c_i y_i + \tfrac{1}{2} d_i y_i^2 \right) \\
\text{subject to}\quad & f_i(x) - a_i z - y_i \le 0, \quad i = 1,\dots,m \\
& x \in X, \quad y \ge 0, \quad z \ge 0.
\end{aligned}
\tag{2.2}
$$
Optimization problems of the form (2.2) can be solved by special optimization algorithms such as the ordinary MMA algorithm (Svanberg, 1987) or the GCMMA algorithm (Svanberg, 2002), as shown in the coming sections.
3. The method of moving asymptotes (MMA)
We now briefly describe the mathematical structure of the MMA. Note that this algorithm is not globally convergent; we will show later another version of this algorithm that is globally convergent.
Starting from an iteration k (given a point $(x^{(k)}, y^{(k)}, z^{(k)})$), the MMA algorithm generates the following subproblem:
$$
\begin{aligned}
\text{minimize}\quad & \xi_0^{(k)}(x) + a_0 z + \sum_{i=1}^{m} \left( c_i y_i + \tfrac{1}{2} d_i y_i^2 \right) \\
\text{subject to}\quad & \xi_i^{(k)}(x) - a_i z - y_i \le 0, \quad i = 1,\dots,m \\
& x \in X^{(k)}, \quad y \ge 0, \quad z \ge 0,
\end{aligned}
\tag{3.1}
$$
where
$$
X^{(k)} = \left\{\, x \in X \;\middle|\; 0.9\, l_j^{(k)} + 0.1\, x_j^{(k)} \le x_j \le 0.9\, u_j^{(k)} + 0.1\, x_j^{(k)}, \ j = 1,\dots,n \,\right\}.
$$
The subproblem (3.1) is thus generated by replacing the functions $f_0(x)$ and $f_i(x)$ of (2.2) by certain chosen convex functions $\xi_0^{(k)}(x)$ and $\xi_i^{(k)}(x)$, respectively. These convex functions are updated iteratively, based on the gradient information at the current iteration point $(x^{(k)}, y^{(k)}, z^{(k)})$, and also on the lower and upper moving asymptotes ($l_j^{(k)}$ and $u_j^{(k)}$) that are updated based on information from the two previous iteration points $(x^{(k-1)}, y^{(k-1)}, z^{(k-1)})$ and $(x^{(k-2)}, y^{(k-2)}, z^{(k-2)})$.
The subproblem (3.1) is solved at iteration k, and the optimal solution becomes the next iteration point $(x^{(k+1)}, y^{(k+1)}, z^{(k+1)})$. A new subproblem is then generated from this last point, and the iterative loop continues by regenerating subproblems until a certain convergence stopping criterion is satisfied (when the squared norm of the Karush-Kuhn-Tucker (KKT) conditions becomes less than a positive real number $\varepsilon$ such that $\varepsilon \ll 1$). The KKT conditions are explained below.
Let us consider that $\hat{x}$ is an optimal solution vector of a problem of the following form:
$$
\begin{aligned}
\text{minimize}\quad & W_0(x) \\
\text{subject to}\quad & W_i(x) \le 0, \quad i = 1,\dots,m \\
& x \in \Re^n.
\end{aligned}
\tag{3.2}
$$
Then, if there is a vector $\Delta x$ which satisfies $\nabla W_i(\hat{x})\,\Delta x < 0$ for all $i > 0$ such that $W_i(\hat{x}) = 0$, there exist Lagrange multipliers $\hat{\psi}_i$, $i = 1,\dots,m$, that satisfy what is known as the KKT conditions:
$$
\begin{aligned}
& \frac{\partial W_0}{\partial x_j}(\hat{x}) + \sum_{i=1}^{m} \hat{\psi}_i \frac{\partial W_i}{\partial x_j}(\hat{x}) = 0, \quad j = 1,\dots,n \\
& W_i(\hat{x}) \le 0, \quad i = 1,\dots,m \\
& \hat{\psi}_i \ge 0, \quad i = 1,\dots,m \\
& \hat{\psi}_i W_i(\hat{x}) = 0, \quad i = 1,\dots,m.
\end{aligned}
\tag{3.3}
$$
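As a minimal sketch of how the corresponding stopping criterion can be evaluated (variable names are ours, and dense gradients in row-major storage are an assumption), the squared norm of the KKT residual of (3.3) can be assembled as follows:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Squared norm of the KKT residual of (3.3): stationarity, primal
// feasibility, dual feasibility and complementarity. dW0 has size n,
// W and psi have size m, dW is the m x n Jacobian in row-major order.
double kkt_residual_sq(const std::vector<double>& dW0,
                       const std::vector<double>& W,
                       const std::vector<double>& psi,
                       const std::vector<double>& dW,
                       int n, int m)
{
    double r = 0.0;
    for (int j = 0; j < n; ++j) {                     // stationarity
        double g = dW0[j];
        for (int i = 0; i < m; ++i) g += psi[i] * dW[i * n + j];
        r += g * g;
    }
    for (int i = 0; i < m; ++i) {
        r += std::pow(std::max(W[i], 0.0), 2);        // primal feasibility
        r += std::pow(std::max(-psi[i], 0.0), 2);     // dual feasibility
        r += std::pow(psi[i] * W[i], 2);              // complementarity
    }
    return r;  // iterate until this falls below eps << 1
}
```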
There are different approaches for solving the subproblem (3.1), such as the "primal-dual (PD) interior point approach" and the "dual approach (DA)" (see Rao, 2013). The first approach, used in the present manuscript, is based on a sequence of relaxed KKT conditions that are solved by Newton's method. The second approach (used in Svanberg, 1987) is based on solving the dual problem corresponding to the subproblem (3.1) (thus a maximization) by a modified Newton method (the Fletcher-Reeves method) that properly handles the non-negativity constraints on the dual variables.
The functions $\xi_0^{(k)}(x)$ and $\xi_i^{(k)}(x)$ are given by first-order approximations of the original functions $f_0(x)$ and $f_i(x)$ as follows:
$$
\xi_i^{(k)}(x) = \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k)}}{u_j^{(k)} - x_j} + \frac{q_{ij}^{(k)}}{x_j - l_j^{(k)}} \right) + r_i^{(k)}; \quad i = 0,1,\dots,m,
\tag{3.4}
$$
$$
p_{ij}^{(k)} = \left( u_j^{(k)} - x_j^{(k)} \right)^2 \left( 1.001\, \nabla f_{ij}^{(k)+} + 0.001\, \nabla f_{ij}^{(k)-} + \frac{\rho_i}{x_j^{\max} - x_j^{\min}} \right),
\tag{3.5}
$$
$$
q_{ij}^{(k)} = \left( x_j^{(k)} - l_j^{(k)} \right)^2 \left( 0.001\, \nabla f_{ij}^{(k)+} + 1.001\, \nabla f_{ij}^{(k)-} + \frac{\rho_i}{x_j^{\max} - x_j^{\min}} \right),
\tag{3.6}
$$
$$
r_i^{(k)} = f_i\!\left( x^{(k)} \right) - \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k)}}{u_j^{(k)} - x_j^{(k)}} + \frac{q_{ij}^{(k)}}{x_j^{(k)} - l_j^{(k)}} \right).
\tag{3.7}
$$
Here,
$$
\nabla f_{ij}^{(k)+} = \max\!\left( \frac{\partial f_i}{\partial x_j}\!\left( x^{(k)} \right),\, 0 \right) \quad \text{and} \quad \nabla f_{ij}^{(k)-} = \max\!\left( -\frac{\partial f_i}{\partial x_j}\!\left( x^{(k)} \right),\, 0 \right).
\tag{3.8}
$$
Note that $p_{ij}^{(k)}$, $q_{ij}^{(k)}$, $\nabla f_{ij}^{(k)+}$ and $\nabla f_{ij}^{(k)-}$ are matrices (of real-number coefficients), each of dimension $[(m+1) \times n]$ since $i = 0, 1, \dots, m$.
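A minimal sketch of (3.5)-(3.8), assuming the gradients are supplied as a dense array G with G[i][j] = ∂f_i/∂x_j(x^(k)) for i = 0, ..., m (all names below are ours):

```cpp
#include <algorithm>
#include <vector>

// Builds p_ij, q_ij (3.5)-(3.6) and r_i (3.7) from the gradient matrix
// G[i][j] = df_i/dx_j(x^k), using the positive/negative split (3.8).
// p, q must be pre-sized like G, and r must have G.size() entries.
void build_approximation(const std::vector<std::vector<double>>& G,
                         const std::vector<double>& f,    // f_i(x^k)
                         const std::vector<double>& x,    // x^k
                         const std::vector<double>& u,    // u^k
                         const std::vector<double>& l,    // l^k
                         const std::vector<double>& xmax,
                         const std::vector<double>& xmin,
                         const std::vector<double>& rho,
                         std::vector<std::vector<double>>& p,
                         std::vector<std::vector<double>>& q,
                         std::vector<double>& r)
{
    for (std::size_t i = 0; i < G.size(); ++i) {
        r[i] = f[i];                                   // start of (3.7)
        for (std::size_t j = 0; j < x.size(); ++j) {
            const double gp = std::max(G[i][j], 0.0);  // grad^+   (3.8)
            const double gm = std::max(-G[i][j], 0.0); // grad^-   (3.8)
            const double du = u[j] - x[j], dl = x[j] - l[j];
            const double reg = rho[i] / (xmax[j] - xmin[j]);
            p[i][j] = du * du * (1.001 * gp + 0.001 * gm + reg); // (3.5)
            q[i][j] = dl * dl * (0.001 * gp + 1.001 * gm + reg); // (3.6)
            r[i] -= p[i][j] / du + q[i][j] / dl;                 // (3.7)
        }
    }
}
```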
The upper and lower asymptotes are updated at each iteration as follows:
$$
u_j^{(k)} = \begin{cases} x_j^{(k)} + 0.5\left( x_j^{\max} - x_j^{\min} \right) & \text{if } k = 1, 2 \\ x_j^{(k)} + \gamma_j^{(k)}\left( u_j^{(k-1)} - x_j^{(k-1)} \right) & \text{if } k \ge 3 \end{cases},
\tag{3.9a}
$$
$$
l_j^{(k)} = \begin{cases} x_j^{(k)} - 0.5\left( x_j^{\max} - x_j^{\min} \right) & \text{if } k = 1, 2 \\ x_j^{(k)} - \gamma_j^{(k)}\left( x_j^{(k-1)} - l_j^{(k-1)} \right) & \text{if } k \ge 3 \end{cases},
\tag{3.9b}
$$
$$
\gamma_j^{(k)} = \begin{cases} \gamma_a & \text{if } \left( x_j^{(k)} - x_j^{(k-1)} \right)\left( x_j^{(k-1)} - x_j^{(k-2)} \right) < 0 \\ \gamma_b & \text{if } \left( x_j^{(k)} - x_j^{(k-1)} \right)\left( x_j^{(k-1)} - x_j^{(k-2)} \right) > 0 \\ 1 & \text{if } \left( x_j^{(k)} - x_j^{(k-1)} \right)\left( x_j^{(k-1)} - x_j^{(k-2)} \right) = 0 \end{cases}.
\tag{3.9c}
$$
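A minimal sketch of the update (3.9a)-(3.9c); the vectors u and l are assumed to hold u^(k-1) and l^(k-1) on entry and are overwritten in place (naming is ours):

```cpp
#include <vector>

// Moving-asymptote update (3.9a)-(3.9c). x, xm1, xm2 are x^k, x^(k-1),
// x^(k-2); u and l hold the previous asymptotes and are updated in place.
void update_asymptotes(int k,
                       const std::vector<double>& x,
                       const std::vector<double>& xm1,
                       const std::vector<double>& xm2,
                       const std::vector<double>& xmax,
                       const std::vector<double>& xmin,
                       std::vector<double>& u, std::vector<double>& l,
                       double gamma_a = 0.7, double gamma_b = 1.2)
{
    for (std::size_t j = 0; j < x.size(); ++j) {
        if (k <= 2) {                                       // k = 1, 2
            u[j] = x[j] + 0.5 * (xmax[j] - xmin[j]);        // (3.9a)
            l[j] = x[j] - 0.5 * (xmax[j] - xmin[j]);        // (3.9b)
        } else {                                            // k >= 3
            const double s = (x[j] - xm1[j]) * (xm1[j] - xm2[j]);
            const double g = (s < 0.0) ? gamma_a
                           : (s > 0.0) ? gamma_b : 1.0;     // (3.9c)
            u[j] = x[j] + g * (u[j] - xm1[j]);              // (3.9a)
            l[j] = x[j] - g * (xm1[j] - l[j]);              // (3.9b)
        }
    }
}
```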
The values $\gamma_a = 0.7$, $\gamma_b = 1.2$ and $\rho_i = 10^{-5}$ for all $i = 0,1,\dots,m$ were chosen in (Svanberg, 2002; Gomes-Ruggiero et al., 2008, 2010). However, in the original algorithm of Svanberg (1987), $0 < \gamma_a < 1$ and $\gamma_b = 1/\gamma_a$ or $\gamma_b = 1/\sqrt{\gamma_a}$ were proposed.
4. The globally convergent MMA (GCMMA)
Since the ordinary MMA is not guaranteed to converge globally, the author in Svanberg (2002) moved from the ordinary MMA towards a new version, named GCMMA, that is globally convergent towards a feasible solution of the original problem. The GCMMA introduces a new inner iteration loop (of index $\eta$), in which the approximation functions are updated at both indices $(k,\eta)$ and the subproblem is solved repeatedly until its optimal solution is a feasible solution of the original problem.
More precisely, the GCMMA subproblem (4.1) is solved at the iterations $(k,\eta)$, and the optimal solution $(x^{(k,\eta)}, y^{(k,\eta)}, z^{(k,\eta)})$ is tested for feasibility. If it is not a feasible solution, the subproblem is regenerated in inner iterations $(k, \eta: \eta_{initial} \to \eta_{final})$ and solved until it becomes feasible at $(k, \eta_{final})$. This last optimal and feasible solution of the subproblem then becomes the next outer iteration point $(x^{(k+1,\eta)}, y^{(k+1,\eta)}, z^{(k+1,\eta)})$.
$$
\begin{aligned}
\text{minimize}\quad & \xi_0^{(k,\eta)}(x) + a_0 z + \sum_{i=1}^{m} \left( c_i y_i + \tfrac{1}{2} d_i y_i^2 \right) \\
\text{subject to}\quad & \xi_i^{(k,\eta)}(x) - a_i z - y_i \le 0, \quad i = 1,\dots,m \\
& x \in X^{(k)}, \quad y \ge 0, \quad z \ge 0.
\end{aligned}
\tag{4.1}
$$
The functions $\xi_0^{(k,\eta)}(x)$ and $\xi_i^{(k,\eta)}(x)$ are also given by first-order approximations of the original functions $f_0(x)$ and $f_i(x)$ of (2.2), as follows:
$$
\xi_i^{(k,\eta)}(x) = \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k,\eta)}}{u_j^{(k)} - x_j} + \frac{q_{ij}^{(k,\eta)}}{x_j - l_j^{(k)}} \right) + r_i^{(k,\eta)}; \quad i = 0,1,\dots,m,
\tag{4.2}
$$
$$
p_{ij}^{(k,\eta)} = \left( u_j^{(k)} - x_j^{(k)} \right)^2 \left( 1.001\, \nabla f_{ij}^{(k)+} + 0.001\, \nabla f_{ij}^{(k)-} + \frac{\rho_i^{(k,\eta)}}{x_j^{\max} - x_j^{\min}} \right),
\tag{4.3}
$$
$$
q_{ij}^{(k,\eta)} = \left( x_j^{(k)} - l_j^{(k)} \right)^2 \left( 0.001\, \nabla f_{ij}^{(k)+} + 1.001\, \nabla f_{ij}^{(k)-} + \frac{\rho_i^{(k,\eta)}}{x_j^{\max} - x_j^{\min}} \right),
\tag{4.4}
$$
$$
r_i^{(k,\eta)} = f_i\!\left( x^{(k)} \right) - \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k,\eta)}}{u_j^{(k)} - x_j^{(k)}} + \frac{q_{ij}^{(k,\eta)}}{x_j^{(k)} - l_j^{(k)}} \right),
\tag{4.5}
$$
with equations (3.8) and (3.9a)-(3.9c) still holding here too. It is important to note that the main difference between the two algorithms lies in the parameter $\rho_i$, which was kept constant (at a value of $10^{-5}$) in the original MMA algorithm, but is now dynamically updated at each $(k,\eta)$ iteration, as we show next.
$$
\rho_i^{(k,0)} = \max\!\left( \varepsilon,\; \frac{\nu}{n} \sum_{j=1}^{n} \left( x_j^{\max} - x_j^{\min} \right) \left| \frac{\partial f_i}{\partial x_j}\!\left( x^{(k)} \right) \right| \right); \quad \nu = 0.1; \quad i = 0,1,\dots,m,
\tag{4.6}
$$
where $\varepsilon$ is a positive real number such that $\varepsilon \ll 1$.
$$
\rho_i^{(k,\eta+1)} = \begin{cases} \min\!\left( \varsigma\, \rho_i^{(k,\eta)},\; \lambda \left( \rho_i^{(k,\eta)} + \omega_i^{(k,\eta)} \right) \right) & \text{if } \omega_i^{(k,\eta)} > 0 \\ \rho_i^{(k,\eta)} & \text{if } \omega_i^{(k,\eta)} \le 0 \end{cases}; \quad \varsigma = 10; \quad \lambda = 1.1.
\tag{4.7}
$$
$$
\xi_i^{(k,\eta)}(x) = v_i^{(k)}\!\left( x, x^{(k)}, \sigma^{(k)} \right) + \rho_i^{(k,\eta)}\, \tau_i^{(k)}\!\left( x, x^{(k)}, \sigma^{(k)} \right).
\tag{4.8}
$$
Note that there exist in the literature different forms of conservative convex separable approximation (CCSA) functions $v_i^{(k)}$ and $\tau_i^{(k)}$, such as "linear and separable quadratic approximations", "linear and separable logarithmic approximations", "linear and separable square root approximations", etc. (see Rao, 2013). However, as in Svanberg (2002), $v_i^{(k)}$ and $\tau_i^{(k)}$ are chosen here as the following approximations:
$$
v_i^{(k)}\!\left( x, x^{(k)}, \sigma^{(k)} \right) = f_i\!\left( x^{(k)} \right) + \sum_{j=1}^{n} \frac{ \left( \sigma_j^{(k)} \right)^2 \frac{\partial f_i}{\partial x_j}\!\left( x^{(k)} \right)\left( x_j - x_j^{(k)} \right) + \sigma_j^{(k)} \left| \frac{\partial f_i}{\partial x_j}\!\left( x^{(k)} \right) \right| \left( x_j - x_j^{(k)} \right)^2 }{ \left( \sigma_j^{(k)} \right)^2 - \left( x_j - x_j^{(k)} \right)^2 },
\tag{4.9}
$$
$$
\tau_i^{(k)}\!\left( x, x^{(k)}, \sigma^{(k)} \right) = \frac{1}{2} \sum_{j=1}^{n} \frac{ \left( x_j - x_j^{(k)} \right)^2 }{ \left( \sigma_j^{(k)} \right)^2 - \left( x_j - x_j^{(k)} \right)^2 }.
\tag{4.10}
$$
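A minimal sketch of the evaluation of (4.9) and (4.10) for one index i, given the gradient row Gi[j] = ∂f_i/∂x_j(x^(k)) (names are ours; validity requires |x_j − x_j^(k)| < σ_j^(k)):

```cpp
#include <cmath>
#include <vector>

// tau_i of (4.10): separable quadratic-over-denominator term.
double tau_i(const std::vector<double>& x, const std::vector<double>& xk,
             const std::vector<double>& sigma)
{
    double t = 0.0;
    for (std::size_t j = 0; j < x.size(); ++j) {
        const double d = x[j] - xk[j];
        t += d * d / (sigma[j] * sigma[j] - d * d);
    }
    return 0.5 * t;
}

// v_i of (4.9): first-order CCSA term built from f_i(x^k) and its gradient.
double v_i(double fi_xk, const std::vector<double>& Gi,
           const std::vector<double>& x, const std::vector<double>& xk,
           const std::vector<double>& sigma)
{
    double v = fi_xk;
    for (std::size_t j = 0; j < x.size(); ++j) {
        const double d  = x[j] - xk[j];
        const double s2 = sigma[j] * sigma[j];
        v += (s2 * Gi[j] * d + sigma[j] * std::fabs(Gi[j]) * d * d)
             / (s2 - d * d);
    }
    return v;
}
```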
Let $\hat{x}^{(k,\eta)}$ be the solution of the most recent subproblem (4.1); then $\omega_i^{(k,\eta)}$ is defined as:
$$
\omega_i^{(k,\eta)} = \frac{ f_i\!\left( \hat{x}^{(k,\eta)} \right) - \xi_i^{(k,\eta)}\!\left( \hat{x}^{(k,\eta)} \right) }{ \tau_i^{(k)}\!\left( \hat{x}^{(k,\eta)}, x^{(k)}, \sigma^{(k)} \right) }.
\tag{4.11}
$$
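Putting (4.6), (4.7) and (4.11) together, a minimal sketch of the ρ_i bookkeeping could look as follows (names are ours; eps = 10⁻⁶ is an assumed value for ε):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// rho_i^(k,0) of (4.6), from the gradient row Gi[j] = df_i/dx_j(x^k).
double rho_init(const std::vector<double>& Gi,
                const std::vector<double>& xmax,
                const std::vector<double>& xmin,
                double eps = 1e-6, double nu = 0.1)
{
    double s = 0.0;
    for (std::size_t j = 0; j < Gi.size(); ++j)
        s += (xmax[j] - xmin[j]) * std::fabs(Gi[j]);
    return std::max(eps, nu * s / double(Gi.size()));
}

// rho_i^(k,eta+1) of (4.7): increased only when omega_i^(k,eta) > 0,
// i.e. when xi_i underestimated f_i at the solution (not conservative).
double rho_update(double rho, double omega,
                  double varsigma = 10.0, double lambda = 1.1)
{
    return (omega > 0.0)
        ? std::min(varsigma * rho, lambda * (rho + omega))
        : rho;
}
```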
The global convergence of the GCMMA comes at the price of these extra inner iterations and their computational cost. For that reason, some authors (Gomes-Ruggiero et al., 2008, 2010, 2011) tried to improve the computational speed of both algorithms, while others developed completely new robust algorithms (also globally convergent, but sometimes only tested and applied to linear problems), seeking better performance at a lower computational cost.
For example, Gomes and Senne (2014) presented a new sequential piecewise linear programming (SPLP) algorithm applied to topology optimization problems of geometrically nonlinear structures. Their method was based on solving convex piecewise linear programming subproblems, including second-order information about the objective function and allowing for structures that are no longer under small displacements. They showed interesting topology optimization results for different structural problems.
Earlier, the same authors (Gomes and Senne, 2011) had developed a sequential linear programming (SLP) algorithm based on a trust-region (TR) constraint technique. This SLP was limited to linear compliance optimization problems. They applied their SLP algorithm to topology optimization problems and showed that it is faster than the GCMMA of Svanberg (2002) when applied to the same linear problems. Nevertheless, the SLP algorithm of Gomes and Senne (2011) was only applied to problems with at most 3750 bounded variables.
In addition, this SLP algorithm cannot be applied to complex large-scale NLP bound-constrained optimization problems, in contrast with what we are seeking, where the number of variables may exceed 10000.
That is why Gomes-Ruggiero et al. (2008, 2010, 2011) tried to analyze and improve the robustness and convergence speed of the algorithms of Svanberg (1987, 2002). Their techniques were based on different strategies: first, a new updating strategy for the spectral parameter (Gomes-Ruggiero et al., 2008, 2010), and later, solving the dual subproblem of the MMA using a trust-region (TR) scheme (Gomes-Ruggiero et al., 2011).
Despite all these improvements reported in the literature, the computational cost still needs to be reduced in order to optimize systems with a very large number of variables (e.g. a large number of mesh cells in CFD problems with complex geometries). Table 1 summarizes the different computer system characteristics that were used previously in the literature.
Table 1. Different computer system characteristics used by the different authors for their optimization algorithms.
It is obvious from Table 1 that each author used different computer system characteristics; even so, these authors compared the numerical results (computational times of their improved algorithms) with one another. Of course, this can be confusing, but we are forced to do the same.
5. The present algorithm and its new iterative strategy
Our new algorithm is developed as a new generation thanks to its implementation via template meta-programming (C++) with the following external open-source linear algebra (LA) libraries: LAPACK, BLAS and Armadillo (v 4.550.2). These libraries support multiple matrix operations and decompositions and, as stated by their author, can be used "for fast prototyping and computationally intensive experiments" (Sanderson, 2010).
Our developed algorithm is constructed in the spirit of the GCMMA of Svanberg (2002), but with a new, modified iteration strategy. After many numerical tests on problems of different scales, we found that most resolutions of the subproblem (4.1) (at a single outer iteration k) converged to a feasible solution after an average of $\eta_{final} = 5$ inner iterations. We therefore adopted the following new strategy: we impose $\eta_{final} = 1$ at every $k' = \beta k$ outer iterations ($\beta \in \mathbb{N},\ \beta > 0$; here $\beta = 2$), and otherwise leave $\eta_{final}$ free to grow without limit. In this manner, the test on feasibility of the most recent solution of the subproblem (4.1) is preserved but periodically accelerated across the outer iterations, and the global convergence of the algorithm is not violated. A minimal sketch of this strategy is given below.
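The following sketch reflects our reading of the modified outer/inner loop described above; the three callbacks stand in for the actual subproblem machinery of Section 4.

```cpp
#include <climits>
#include <functional>

// New iterative strategy: on every beta-th outer iteration the inner loop
// is capped at eta_final = 1; otherwise eta grows freely until the
// subproblem solution is feasible for the original problem.
void gcmma_new_strategy(const std::function<bool(int, int)>& solve_subproblem,
                        const std::function<void()>& accept_solution,
                        const std::function<bool()>& kkt_converged,
                        int beta = 2, int k_max = 1000)
{
    for (int k = 1; k <= k_max && !kkt_converged(); ++k) {
        const int eta_cap = (k % beta == 0) ? 1 : INT_MAX;   // the new cap
        for (int eta = 0; eta < eta_cap; ++eta)
            if (solve_subproblem(k, eta))   // returns true when feasible
                break;                      // stop the inner loop early
        accept_solution();                  // promote to (x, y, z)^(k+1)
    }
}
```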
First, two sets A and B of algorithm parameter values are used, as presented in Table 2. Note that $\gamma_a = 0.7$ was used for the data in sets A and B. A deeper parametric study is conducted later to assess the effect of all of the parameters on the global performance of the algorithm with the present new strategy.
6. Results
In order to compare our numerical results (convergence and computational speed) with those from the literature, we chose the same two academic problems that were used by Svanberg (2002) and Gomes-Ruggiero et al. (2008, 2010, 2011). These nonlinear problems are given below:
Problem1:
$$
\begin{aligned}
\text{minimize}\quad & f_0(x) = x^T [S]\, x \\
\text{subject to}\quad & f_1(x) = \tfrac{n}{2} - x^T [P]\, x \le 0, \\
& f_2(x) = \tfrac{n}{2} - x^T [Q]\, x \le 0, \\
& -1 \le x_j \le 1, \quad j = 1,\dots,n,
\end{aligned}
$$
with $x^{(k=1)} = (0.5, 0.5, \dots, 0.5)^T \in \Re^n$.

Problem2:
$$
\begin{aligned}
\text{minimize}\quad & f_0(x) = -x^T [S]\, x \\
\text{subject to}\quad & f_1(x) = -\tfrac{n}{2} + x^T [P]\, x \le 0, \\
& f_2(x) = -\tfrac{n}{2} + x^T [Q]\, x \le 0, \\
& -1 \le x_j \le 1, \quad j = 1,\dots,n,
\end{aligned}
$$
with $x^{(k=1)} = (0.25, 0.25, \dots, 0.25)^T \in \Re^n$.

Here $[S] = [s_{ij}]$, $[P] = [p_{ij}]$ and $[Q] = [q_{ij}]$ are $[n \times n]$ matrices ($n \in \mathbb{N},\ n > 1$) defined respectively by the following real-valued coefficients:
$$
s_{ij} = \frac{2 + \sin(4\pi \alpha_{ij})}{(1 + |i - j|)\ln n}, \quad p_{ij} = \frac{1 + 2\alpha_{ij}}{(1 + |i - j|)\ln n} \quad \text{and} \quad q_{ij} = \frac{3 - 2\alpha_{ij}}{(1 + |i - j|)\ln n},
\tag{6.1}
$$
with $\alpha_{ij} = \frac{i + j - 2}{2n - 2} \in [0,1]$ for all $(i, j) \in [1, n]^2$.
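As a minimal sketch (assuming the absolute value |i − j| in the denominators of (6.1), as reconstructed above), the three matrices can be assembled with Armadillo, the library used in the present implementation, as follows:

```cpp
#include <armadillo>
#include <cmath>
#include <cstdlib>

// Builds [S], [P], [Q] from the coefficients (6.1); i, j below are the
// 1-based indices of the text, mapped to Armadillo's 0-based storage.
void build_matrices(int n, arma::mat& S, arma::mat& P, arma::mat& Q)
{
    const double pi   = std::acos(-1.0);
    const double ln_n = std::log(double(n));
    S.set_size(n, n); P.set_size(n, n); Q.set_size(n, n);
    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= n; ++j) {
            const double alpha = double(i + j - 2) / double(2 * n - 2);
            const double denom = (1.0 + std::abs(i - j)) * ln_n;
            S(i - 1, j - 1) = (2.0 + std::sin(4.0 * pi * alpha)) / denom;
            P(i - 1, j - 1) = (1.0 + 2.0 * alpha) / denom;
            Q(i - 1, j - 1) = (3.0 - 2.0 * alpha) / denom;
        }
    }
}
```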
It is important to note that the nonlinear objective function $f_0(x)$ is strictly convex in Problem1 and strictly concave in Problem2, whereas the nonlinear inequality constraint functions $f_1(x)$ and $f_2(x)$ are strictly concave in Problem1 and strictly convex in Problem2.
Problems 1 and 2 are solved iteratively (with the two parameter sets A and B) for n = 100, 500, 1000, 2000, 5000 and 10000, until the convergence criterion (the squared norm of the KKT conditions) reaches a positive small value $\varepsilon \ll 1$. Figure 2 shows that, using the present strategy applied to Problem1 and thanks to the numerical libraries used, the convergence time is reduced by a factor of at least 13 when compared to the previous results of Svanberg (2002) and Gomes-Ruggiero et al. (2008, 2010, 2011).
The detailed results for all the values of Fig. 2 are shown in Table 4. It shows that a slight change in the choice of the algorithm parameters (sets A and B of Table 2) affects the convergence time but does not affect the computed objective function values at all. Despite this slight effect on the convergence time, the latter is still reduced at least 13 times with respect to the best previous results in the literature (Gomes-Ruggiero et al., 2010, 2011).
The precision in computing the objective function values for Problem1, for all the selected numbers of variables n (between 100 and 10000), is more than satisfying, as shown in Table 5. For a deeper comparison with previous results, Table 6 presents the total numbers of inner and outer iterations needed by the solver to achieve convergence in solving Problem1 with the parameters of sets A and B.
We now apply the solver to Problem2. Fig. 3 shows that, using the present strategy applied to Problem2 and thanks to the numerical libraries used, the convergence time is reduced by a factor of at least 28 when compared to the previous results from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008, 2010, 2011).
Table 7 presents all the detailed results of Fig. 3 and shows that a slight change in the choice of the algorithm parameters (sets A and B of Table 2) affects the convergence time but does not affect the objective function values for Problem2 at all. Despite this slight effect on the convergence time, the latter is still reduced at least 28 times with respect to the best previous results in the literature (Gomes-Ruggiero et al., 2010, 2011). The precision in computing the objective function values for Problem2, for all n between 100 and 10000, is more than convincing, as shown in Table 8.
The memory required for all computations (for each problem) using our system characteristics (Table 1) varied between 0.001 and 4.8 GB, depending on the number of variables. Table 9 shows the total numbers of inner and outer iterations needed by the solver to achieve convergence in solving Problem2 using the present strategy with parameter sets A and B.
In order to quantify the effect of the parameters on the global performance of the algorithm (with the present strategy), a parametric study is conducted next. Figs. 4, 5 and 6 show the effect of different parameter values on the global performance of the algorithm using the present strategy while solving Problem1 at n = 1000.
Similarly, the effect of these parameters on the global performance of the algorithm (with the present strategy) is also examined for Problem2 at n = 1000. Figs. 7, 8 and 9 show the effect of these parameter values on the global performance of the algorithm using the present strategy.
Thus, after analyzing the effect of the different algorithm parameter values (with the present strategy), it can be observed that the following parameter values stand for the best global performance of the algorithm: $\gamma_b = 1/\gamma_a$; $\gamma_a \in [0.3, 0.5]$; $\lambda \in [1.1, 1.15]$; $\nu \in [0.1, 0.15]$. Of course, one may ask whether a greater (or different) value of n might make the algorithm performance sensitive to another range of parameter values. To address this issue, we conducted the sensitivity analysis again for Problem1, now at n = 3000 (Fig. 10), confirming that the parameter intervals identified at n = 1000 still hold for the best performance of the algorithm at higher values of n. Moreover, Fig. 11 presents a logarithmic plot of the effect of the number of variables n on the convergence computation time (in seconds) when solving both problems (Problem1 and Problem2).
In Fig. 12 we show the effect of choosing a uniform initial vector, $x^{(k=1)} = \text{constant}\ \forall n$, on the algorithm performance (using set A). We observe that for 11 different initial vector values (equal for all of the 2000 variables) the algorithm converges very well in solving Problem1, with only a maximum increase of around 11% with respect to the minimum convergence time of t_min = 27.2 seconds. The solutions of both problems are shown in Fig. 13 for further illustration.
However, we observe from Fig. 14 that, starting from a sinusoidal initial vector, $x^{(k=1)}_i = \sin\!\left( \frac{(1000 - i)}{n}\, \pi \right)$ with $i \in \{1, 2, \dots, 1000\}$ and $n = 2000$, the algorithm (also with set A) converges very well in solving Problem1 in only 11.02 seconds, which is about three times less than t_min = 27.2 seconds in the case of $x^{(k=1)} = \text{constant}\ \forall n$. This finding is of course logical, but what is surprising is that the convergence time is reduced by a factor of around three.
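For illustration, one possible reading of this sinusoidal start (the indexing in the text is ambiguous, so the formula below is an assumption on our part) is:

```cpp
#include <cmath>
#include <vector>

// One possible reading of the sinusoidal initial vector of Fig. 14:
// x_i^(1) = sin(pi * (1000 - i) / n), here with n = 2000 variables.
std::vector<double> sinusoidal_start(int n = 2000)
{
    const double pi = std::acos(-1.0);
    std::vector<double> x0(n);
    for (int i = 1; i <= n; ++i)
        x0[i - 1] = std::sin(pi * double(1000 - i) / double(n));
    return x0;
}
```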
Fig 12. Effect of the initial vector on the general performance of the algorithm (using set A).
Thus, a conclusion can be drawn: if one would like to push the algorithm towards the best possible performance, an initial vector that is nonuniform over n is recommended at the start.
Fig 14. Effect of a sinusoidal initial vector. Convergence is achieved in 11.02 seconds using set A.
7. Conclusion
Thanks to the C++ programming language and external linear algebra libraries (Sanderson, 2010), a fast, robust and high-performance globally convergent optimization algorithm has been developed. It serves as an important numerical tool for solving inequality-constrained nonlinear programming problems with a large number of design variables n (n ≥ 10000).
The developed tool is validated by solving two well-known, complex, large-scale academic nonlinear constrained minimization problems (Problem1 and Problem2). The convergence time is reduced by a factor of up to 28 when compared to previous results from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008, 2010, 2011), thanks to a new internal iterative strategy applied to the GCMMA algorithm (Svanberg, 2002).
The effect of the different parameters on the global performance of the new algorithm is investigated. The best performance (with the new iterative strategy) is found for the following algorithm parameter values: $\gamma_b = 1/\gamma_a$; $\gamma_a \in [0.3, 0.5]$; $\lambda \in [1.1, 1.15]$; $\nu \in [0.1, 0.15]$.
The effect of the form of the initial vector on the algorithm performance is also analyzed. We found that an initial vector with a nonuniform form (e.g. sinusoidal) over the number of variables n greatly improves the general performance of the algorithm.
The numerical achievement of this work is a promising and robust scientific computation tool that can be used and applied to different complex engineering optimization problems. For example, it may be coupled easily (thanks to an object-oriented C++ class) to many existing external commercial and free multiphysics solvers (such as finite element, finite volume, computational fluid dynamics and solid mechanics solvers).
Acknowledgements
The authors are very grateful to Prof. K. Svanberg for the useful discussions and for supplying the Matlab® source codes of his algorithms (Svanberg, 1987, 2002).
References
Bendsoe, M. P., & Sigmund, O. (2004). Topology Optimization: Theory, Methods and Applications. Second
Edition, ISBN 3-540-42992-1, Springer.
http://dx.doi.org/10.1007/978-3-662-05086-6
Burger F.H., Dirker J. and Meyer J.P. (2013). Three-dimensional conductive heat transfer topology
optimisation in a cubic domain for the volume-to-surface problem. International Journal of Heat and
Mass Transfer, vol. 67, pp. 214-224.
http://dx.doi.org/10.1016/j.ijheatmasstransfer.2013.08.015
Dede E. M. (2009). Multiphysics Topology Optimization of Heat Transfer and Fluid Flow Systems.
Proceedings of the COMSOL Conference, Boston.
Gersborg-Hansen A., Bendsøe M. P. and Sigmund O. (2006). Topology Optimization of Heat Conduction
Problems Using The Finite Volume Method. Structural and Multidisciplinary Optimization, vol. 31, no.
4, pp. 251–259.
http://dx.doi.org/10.1007/s00158-005-0584-3
Gomes, F. A. M., & Senne, T. A. (2014). An algorithm for the topology optimization of geometrically
nonlinear structures. Int. J. Numer. Meth. Engng, 99, 391-409. DOI: 10.1002/nme.4686
http://dx.doi.org/10.1002/nme.4686
Gomes, F. A. M., & Senne, T. A. (2011). An SLP algorithm and its application to topology optimization.
Computational and Applied Mathematics, 30(1), 53-89.
Gomes-Ruggiero M. A., Sachine, M. & Santos S. A. (2008). Analysis of a Spectral Updating for the Method of
Moving Asymptotes. EngOpt - International Conference on Engineering Optimization, Rio de Janeiro,
Brazil, 01 - 05 June.
Gomes-Ruggiero, M. A., Sachine, M., & Santos, S. A. (2010). A spectral updating for the method of moving
asymptotes. Optim. Methods Softw., 25(6), 883–893.
http://dx.doi.org/10.1080/10556780902906282
Gomes-Ruggiero, M. A., Sachine, M., & Santos, S. A. (2010). Globally convergent modifications to the Method
of Moving Asymptotes and the solution of the subproblems using trust regions: theoretical and
numerical results. Instituto de Matemática, Estatística e Computação Científica.
http://www.ime.unicamp.br/conteudo/globally-convergent-modifications-method-moving-asymptotes-and-solution-subproblems-using-t
Gomes-Ruggiero, M. A., Sachine, M., & Santos, S. A. (2011). Solving the dual subproblem of the Method of
Moving Asymptotes using a trust-region scheme. Comput. Appl. Math. 30 (1).
Hassan, E., Wadbro, E., & Berggren, M. (2014). Patch and Ground Plane Design of Microstrip Antennas by
Material Distribution Topology Optimization. Progress In Electromagnetics Research B, 59, 89-102.
http://dx.doi.org/10.2528/PIERB14030605
Lee, K. (2012). Topology optimization of convective cooling system designs. PhD thesis, University of Michigan.
Marck, G., Nemer, M., Harion, J.-L., Russeil, S., & Bougeard, D. (2012). Topology optimization using the SIMP method for multiobjective conductive problems. Numerical Heat Transfer, Part B: Fundamentals, 61(6), 439-470.
http://dx.doi.org/10.1080/10407790.2012.687979
Oevelen, T. V., & Baelmans, M. (2014). OPT-i - An International Conference on Engineering and Applied Sciences Optimization, Kos Island, Greece, 4-6 June.
Rao, S. S. (2013). Engineering optimization: Theory and Practice. Third Enlarged Edition, ISBN: 978-81-
224-2723-3, New Age International (P) Ltd., Publishers.
Sanderson, C. (2010). Armadillo: An Open Source C++ Linear Algebra Library for Fast Prototyping and
Computationally Intensive Experiments. NICTA Technical Report - October.
http://www.nicta.com.au/research/research_publications
Svanberg, K. (1987). The method of moving asymptotes - a new method for structural optimization.
Internat. J. Numer. Methods Engrg., 24, 359–373.
http://dx.doi.org/10.1002/nme.1620240207
Svanberg, K. (2002). A class of globally convergent optimization methods based on conservative convex
separable approximations. SIAM J. Optim., 12, 555–573.
http://dx.doi.org/10.1137/S1052623499362822