
Columbia International Publishing

American Journal of Algorithms and Computing


(2015) Vol. 2 No. 1 pp. 32-56
doi:10.7726/ajac.2015.1003
Research Article

Performance of Optimization Algorithms Applied to Large Nonlinear Constrained Problems

Talib Dbouk1* and Jean-Luc Harion1

Received: 16 March 2015; Received in revised form: 22 May 2015; Accepted: 12 June 2015; Published
online 27 June 2015

© The author(s) 2015. Published with open access at www.uscip.us

Abstract
A new optimization algorithm is developed in this work based on the globally convergent method of moving asymptotes (GCMMA) of Svanberg (1987, 2002). The new algorithm is accelerated via a new internal iterative strategy and is then analyzed via a parametric study for the best possible performance. It serves as an important numerical tool (written in C++ with external linear algebra libraries) for solving inequality-constrained nonlinear programming problems with a large number of variables. The new numerical tool is applied to two well-known academic problems, both of which are large scale nonlinear constrained minimization problems. The convergence time is reduced by a factor of up to 28 when the numerical results are compared with previous ones from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011). The algorithm is an efficient and robust scientific computation tool that can be used in many applied complex engineering optimization problems. Moreover, it can be coupled easily (through a single class of IO (Input/Output) arguments) to many existing external commercial and free multiphysics solvers, such as Finite Element, Finite Volume, Computational Fluid Dynamics and Solid Mechanics solvers.

Keywords: Optimization Algorithms; Nonlinear Programming; Scientific Computing; Convex Separable Approximations; Method of Moving Asymptotes

1. Introduction
The design process of industrial devices is a vital step before any manufacturing procedure. For example, conjugate heat transfer devices represent important products in many sectors, such as the automotive industry, heat exchanger networks, engine design, generators and converters. Optimizing these industrial devices towards designs that are more compact, with less mass, lower frictional losses and increased thermal efficiency, is essential for cost reduction and better performance.

It is known that the complexity of an optimization problem increases with the number of design variables. Therefore, in order to solve a complex optimization problem, one needs an efficient optimization algorithm that can handle this complexity and ensure convergence towards an optimal solution.

__________________________________________________________________________________________________________________________
* Corresponding e-mail: [email protected]
1 Mines Douai, EI, F-59508 Douai, France. University of Lille.

For these reasons, optimization algorithms are nowadays considered an indispensable numerical tool, a kind of "Swiss Army knife", for engineers in many applications such as aerospace, chemical, automotive, electrical, infrastructure, process and manufacturing engineering. These optimization algorithms, when embedded inside multiphysics solvers (i.e. fluid and solid mechanics, heat transfer, electromagnetics, etc.), allow the engineer to design systems that are more efficient, less expensive and better performing than an initial unoptimized system or design. For example, a civil engineer may use an optimization algorithm to optimize the size, shape or even the material distribution of a bridge, as in the topology optimization of Fig. 1 taken from Bendsoe and Sigmund (2004). A mechanical engineer may use an optimization algorithm to optimize the shape of a Formula One car or an airplane in order to reduce drag forces, while a thermal engineer may use one to optimize the shape of a solar cell so as to increase its thermal efficiency.

Fig 1. Taken from Bendsoe and Sigmund (2004). a) Sizing optimization of a truss structure, b) shape optimization and c) topology optimization. The initial problems are shown on the left and the optimal solutions on the right.

Moreover, optimization algorithms have recently been gaining more attention in many complex research and development areas, such as electromagnetics research for the design of new microstrip antennas, as done by Hassan et al. (2014).

An optimization problem is defined mathematically as the minimization (or maximization) of a single (or multi-) objective function that may be subject to one or several linear or nonlinear constraints. The objective function may depend on a single variable or on many variables (which may be linear or nonlinear, bounded or not). The constraints can be equality constraints, inequality constraints, or a mix of both. As the nonlinearity of the problem is taken into account and as the number of variables and constraints increases, the complexity of the optimization problem grows, so that solving it numerically requires specialized optimization algorithms and a considerable computational effort.

Over the years, scientists have developed a variety of optimization algorithms, depending on the complexity of the numerical problem to be solved. Among them are linear programming (LP) and nonlinear programming (NLP) methods such as the Simplex, Karmarkar, Fibonacci, Newton and secant methods, penalty function methods, and augmented Lagrangian methods (ALM) (see Rao, 2013).

In the present work, we are interested in developing, via template meta-programming (in C++ with external linear algebra libraries), a fast and high-performance optimization algorithm. This follows an investigation of the numerical performance (convergence and computational cost) of several optimization algorithms from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011) on different computer system architectures. When it comes to large scale problems, these algorithms require further improvements (and specialized linear algebra libraries) to achieve high computational speed. By large scale problems we mean inequality-constrained nonlinear programming problems with a huge number n of bounded variables (n ≥ 10000).
For this type of optimization problem, we present here a fast and high-performance algorithm, based on a new iterative strategy (following the method initially provided by Svanberg, 2002), which is globally convergent in solving constrained nonlinear optimization problems. The developed algorithm serves as a powerful numerical tool that can be coupled easily to existing multiphysics software for optimizing many applied engineering problems (Bendsoe and Sigmund, 2004; Rao, 2013), thanks to a simple object-oriented structure: a single C++ class with clear IO arguments.

The importance of this work is that the new algorithm developed here is optimized in terms of CPU speed via a new internal iterative strategy and analyzed via a parametric study for the best possible performance, which had not been tackled before. It serves as an important numerical tool for solving inequality-constrained nonlinear problems with a large number of variables.

The history of the Globally Convergent Method of Moving Asymptotes (GCMMA) goes back to the initial work of Svanberg (2002), as an inherited class of NLP methods. It is a special optimization algorithm for complex optimization problems in which the objective function (a function of multiple bounded variables) is subject to numerous inequality constraints. It is worth noting that the GCMMA is an extended version of the Method of Moving Asymptotes (MMA) of Svanberg (1987), a well-known optimization algorithm (but not a globally convergent one) that has been used previously in numerous structural optimization problems (see Bendsoe and Sigmund, 2004).

First, we introduce mathematically the definition of the optimization problems we mean to solve numerically. Second, the ordinary MMA algorithm is described briefly. Then our new solver (implemented via template meta-programming in C++) is detailed in the continuity of the Globally Convergent Method of Moving Asymptotes (GCMMA) of Svanberg (2002), but with an improved iterative strategy. After solving two well-known large scale NLP constrained minimization problems, results are presented and the algorithm performance is assessed and compared to previous results from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011). Finally, a parametric study is conducted and conclusions are drawn.


2. Optimization Problem Definition


We consider optimization problems of the following form:

$$
\begin{aligned}
\text{minimize}\quad & f_0(x) \\
\text{subject to}\quad & f_i(x) \le 0, \quad i = 1,\dots,m, \\
& x \in X,
\end{aligned}
\qquad (2.1)
$$

where $x = (x_1,\dots,x_n)^T \in \mathbb{R}^n$, $X = \{\, x \in \mathbb{R}^n \mid x_j^{\min} \le x_j \le x_j^{\max},\ j = 1,\dots,n \,\}$, and $f_0, f_1, f_2, \dots, f_m$ are given continuously differentiable (at least twice) real-valued functions on $X$. The bounds $x_j^{\min}$ and $x_j^{\max}$ belong to $\mathbb{R}$ and satisfy $x_j^{\min} \le x_j^{\max}$ for all $j$.

Following Svanberg's approach, by introducing artificial variables, problem (2.1) can be written in the following general form:

$$
\begin{aligned}
\text{minimize}\quad & f_0(x) + a_0 z + \sum_{i=1}^{m}\left(c_i y_i + \tfrac{1}{2} d_i y_i^2\right) \\
\text{subject to}\quad & f_i(x) - a_i z - y_i \le 0, \quad i = 1,\dots,m, \\
& x \in X, \quad y \ge 0, \quad z \ge 0.
\end{aligned}
\qquad (2.2)
$$

Here $y = (y_1,\dots,y_m)^T \in \mathbb{R}^m$ and $z \in \mathbb{R}$ are the introduced artificial variables. $a_0$, $a_i$, $c_i$ and $d_i$ are given real numbers that satisfy $a_0 > 0$, $a_i \ge 0$, $c_i \ge 0$, $d_i \ge 0$, $c_i + d_i > 0$ for all $i$, and $a_i c_i > a_0$ for all $i$ with $a_i > 0$.

Now, if one sets $a_0 = 1$, $a_i = 0$ for all $i$, $d_i = 1$ and $c_i \gg 1$ for all $i$, then $z = 0$ and $y = 0$ in any optimal solution of (2.2), and the corresponding $x$ is an optimal solution of (2.1). In this way, problems (2.1) and (2.2) are equivalent. The main advantage of recasting problem (2.1) in the form (2.2) is that the latter always has feasible solutions (and at least one optimal solution). In this work, $c_i = 10^3$ for all $i$, as in the previous studies (Svanberg, 1987; Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011).

Optimization problems of the form (2.2) can be solved by special optimization algorithms such as the ordinary MMA algorithm (Svanberg, 1987) or the GCMMA algorithm (Svanberg, 2002), as will be shown in the coming sections.
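To make the "single class of IO arguments" idea concrete, the following is a minimal sketch (not the exact interface of our solver) of how a problem of the form (2.1) could be handed to an optimizer through one object; the names NlpProblem and evaluate are illustrative assumptions, and Armadillo is the linear algebra library used in this work.

```cpp
#include <armadillo>

// Hypothetical interface sketch: one object carries all IO arguments of the
// optimization problem (2.1).  Names are illustrative, not the paper's API.
struct NlpProblem {
    arma::vec xmin, xmax;          // bounds x_j^min, x_j^max (size n)
    int m = 0;                     // number of inequality constraints

    // Evaluate f_0,...,f_m at x together with their gradients:
    //   f    : vector of size m+1 (f(0) = objective, f(i) = i-th constraint)
    //   dfdx : (m+1) x n matrix of partial derivatives
    virtual void evaluate(const arma::vec& x, arma::vec& f, arma::mat& dfdx) const = 0;
    virtual ~NlpProblem() = default;
};
```

A multiphysics solver only has to derive from such a class and implement evaluate(), which is why coupling to external Finite Element or CFD codes remains simple.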


3. The Ordinary MMA Optimization Algorithm


Topology optimization is a recent research topic where further improvements, open questions, and new scientific contributions are still needed. It couples different multiphysics conservation laws as constraints to the optimization problem, making the solution more complex. This complexity arises from the large number of design variables, which usually corresponds to the size of the mesh defined to discretize the conservation laws of physics. To overcome this difficulty, one needs an efficient optimization algorithm that is robust and always converges to an optimal solution in a reasonably short CPU time. One of these algorithms is the Method of Moving Asymptotes (MMA) of Svanberg (1987), which has been widely used in the literature for topology optimization problems. Examples of its use can be found in the works of Van Oevelen and Baelmans (2014), Lee (2012), Marck et al. (2012), Dede (2009), Gersborg-Hansen et al. (2006) and Burger et al. (2013).

We now briefly describe the mathematical structure of the MMA. Note that this algorithm is not globally convergent; another version of the algorithm that is globally convergent will be shown later.

Starting from an iteration k (given a point $(x^{(k)}, y^{(k)}, z^{(k)})$), the MMA algorithm generates the following subproblem:

$$
\begin{aligned}
\text{minimize}\quad & \xi_0^{(k)}(x) + a_0 z + \sum_{i=1}^{m}\left(c_i y_i + \tfrac{1}{2} d_i y_i^2\right) \\
\text{subject to}\quad & \xi_i^{(k)}(x) - a_i z - y_i \le 0, \quad i = 1,\dots,m, \\
& x \in X^{(k)}, \quad y \ge 0, \quad z \ge 0,
\end{aligned}
\qquad (3.1)
$$

where $X^{(k)} = \{\, x \in X \mid 0.9\, l_j^{(k)} + 0.1\, x_j^{(k)} \le x_j \le 0.9\, u_j^{(k)} + 0.1\, x_j^{(k)},\ j = 1,\dots,n \,\}$. The subproblem (3.1) is thus generated by replacing the functions $f_0(x)$ and $f_i(x)$ of (2.2) by certain chosen convex functions $\xi_0^{(k)}(x)$ and $\xi_i^{(k)}(x)$, respectively. These convex functions are updated iteratively, based on the gradient information at the current iteration point $(x^{(k)}, y^{(k)}, z^{(k)})$, and on the lower and upper moving asymptotes $l_j^{(k)}$ and $u_j^{(k)}$, which are updated using information from the two previous iteration points $(x^{(k-1)}, y^{(k-1)}, z^{(k-1)})$ and $(x^{(k-2)}, y^{(k-2)}, z^{(k-2)})$.

The subproblem (3.1) is solved at iteration k, and its optimal solution becomes the next iteration point $(x^{(k+1)}, y^{(k+1)}, z^{(k+1)})$. A new subproblem is then generated from this last point, and the iterative loop continues, regenerating subproblems until a convergence stopping criterion is satisfied (when the squared norm of the Karush-Kuhn-Tucker (KKT) conditions becomes smaller than a positive real number $\varepsilon \ll 1$). The KKT conditions are explained below.
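Purely as an illustration of this outer loop (a sketch, not the exact structure of our solver), the iteration can be written as follows; it reuses the NlpProblem interface sketched in Section 2, while updateAsymptotes, solveSubproblem and kktResidualSq are assumed helper routines corresponding to equations (3.9a)-(3.9c), subproblem (3.1) and the KKT conditions (3.3) given below.

```cpp
#include <armadillo>

// Assumed helpers (declarations only): asymptote update of eqs (3.9a)-(3.9c),
// the subproblem solve of (3.1) via the primal-dual interior point approach,
// and the squared norm of the KKT conditions (3.3) of the original problem.
void updateAsymptotes(arma::vec& l, arma::vec& u, const arma::vec& x,
                      const arma::vec& xPrev, const arma::vec& xPrevPrev,
                      const arma::vec& xmin, const arma::vec& xmax, int k);
arma::vec solveSubproblem(const NlpProblem& prob, const arma::vec& x,
                          const arma::vec& f, const arma::mat& dfdx,
                          const arma::vec& l, const arma::vec& u);
double kktResidualSq(const NlpProblem& prob, const arma::vec& x);

// Sketch of the ordinary MMA outer loop.
arma::vec mmaOuterLoop(const NlpProblem& prob, arma::vec x, double eps, int maxOuter)
{
    arma::vec xPrev = x, xPrevPrev = x;                      // x^(k-1), x^(k-2)
    arma::vec l, u;                                          // moving asymptotes
    for (int k = 1; k <= maxOuter; ++k) {
        arma::vec f;  arma::mat dfdx;
        prob.evaluate(x, f, dfdx);                           // f_i and gradients at x^(k)
        updateAsymptotes(l, u, x, xPrev, xPrevPrev, prob.xmin, prob.xmax, k);
        arma::vec xNew = solveSubproblem(prob, x, f, dfdx, l, u);  // subproblem (3.1)
        xPrevPrev = xPrev;  xPrev = x;  x = xNew;            // advance to x^(k+1)
        if (kktResidualSq(prob, x) < eps) break;             // stopping criterion
    }
    return x;
}
```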

Let us consider that $\hat{x}$ is an optimal solution vector of a problem of the following form:

$$
\begin{aligned}
\text{minimize}\quad & W_0(x) \\
\text{subject to}\quad & W_i(x) \le 0, \quad i = 1,\dots,m, \\
& x \in \mathbb{R}^n.
\end{aligned}
\qquad (3.2)
$$

Then, if there is a vector $\Delta x$ which satisfies $\nabla W_i(\hat{x})\,\Delta x < 0$ for all $i > 0$ such that $W_i(\hat{x}) = 0$, there exist Lagrange multipliers $\psi_i$, $i = 1,\dots,m$, that satisfy what are known as the KKT conditions:

$$
\begin{aligned}
& \frac{\partial W_0}{\partial x_j}(\hat{x}) + \sum_{i=1}^{m} \psi_i\, \frac{\partial W_i}{\partial x_j}(\hat{x}) = 0, \quad j = 1,\dots,n, \\
& W_i(\hat{x}) \le 0, \quad i = 1,\dots,m, \\
& \psi_i \ge 0, \quad i = 1,\dots,m, \\
& \psi_i\, W_i(\hat{x}) = 0, \quad i = 1,\dots,m.
\end{aligned}
\qquad (3.3)
$$
There are different approaches for solving the subproblem (3.1), such as the primal-dual (PD) interior point approach and the dual approach (DA) (see Rao, 2013). The first approach is based on a sequence of relaxed KKT conditions solved by Newton's method (this is the approach used in the present manuscript). The second approach (used in Svanberg, 1987) is based on solving the dual problem corresponding to the subproblem (3.1) (thus a maximization) by a modified Newton method (the Fletcher-Reeves method) that properly handles the non-negativity constraints on the dual variables.

The functions $\xi_0^{(k)}(x)$ and $\xi_i^{(k)}(x)$ are given by first order approximations of the original functions $f_0(x)$ and $f_i(x)$ as follows:

$$
\xi_i^{(k)}(x) = \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k)}}{u_j^{(k)} - x_j} + \frac{q_{ij}^{(k)}}{x_j - l_j^{(k)}} \right) + r_i^{(k)}; \quad i = 0,1,\dots,m; \ j = 1,2,\dots,n,
\qquad (3.4)
$$

$$
p_{ij}^{(k)} = \left(u_j^{(k)} - x_j^{(k)}\right)^2 \left( 1.001\,\nabla f_{ij}^{(k)+} + 0.001\,\nabla f_{ij}^{(k)-} + \frac{\rho_i}{x_j^{\max} - x_j^{\min}} \right),
\qquad (3.5)
$$

$$
q_{ij}^{(k)} = \left(x_j^{(k)} - l_j^{(k)}\right)^2 \left( 0.001\,\nabla f_{ij}^{(k)+} + 1.001\,\nabla f_{ij}^{(k)-} + \frac{\rho_i}{x_j^{\max} - x_j^{\min}} \right),
\qquad (3.6)
$$

$$
r_i^{(k)} = f_i\!\left(x^{(k)}\right) - \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k)}}{u_j^{(k)} - x_j^{(k)}} + \frac{q_{ij}^{(k)}}{x_j^{(k)} - l_j^{(k)}} \right).
\qquad (3.7)
$$

Here,

$$
\nabla f_{ij}^{(k)+} = \max\!\left( \frac{\partial f_i}{\partial x_j}\!\left(x^{(k)}\right),\, 0 \right)
\quad \text{and} \quad
\nabla f_{ij}^{(k)-} = \max\!\left( -\frac{\partial f_i}{\partial x_j}\!\left(x^{(k)}\right),\, 0 \right).
\qquad (3.8)
$$

Note that $p_{ij}^{(k)}$, $q_{ij}^{(k)}$, $\nabla f_{ij}^{(k)+}$ and $\nabla f_{ij}^{(k)-}$ are matrices of real coefficients, each of dimension $[m \times n]$.

The upper and lower asymptotes are updated at each iteration as follows:

$$
u_j^{(k)} =
\begin{cases}
x_j^{(k)} + 0.5\left(x_j^{\max} - x_j^{\min}\right) & \text{if } k = 1, 2, \\
x_j^{(k)} + \gamma_j^{(k)}\left(u_j^{(k-1)} - x_j^{(k-1)}\right) & \text{if } k \ge 3,
\end{cases}
\qquad (3.9a)
$$

$$
l_j^{(k)} =
\begin{cases}
x_j^{(k)} - 0.5\left(x_j^{\max} - x_j^{\min}\right) & \text{if } k = 1, 2, \\
x_j^{(k)} - \gamma_j^{(k)}\left(x_j^{(k-1)} - l_j^{(k-1)}\right) & \text{if } k \ge 3,
\end{cases}
\qquad (3.9b)
$$

$$
\gamma_j^{(k)} =
\begin{cases}
\gamma_a & \text{if } \left(x_j^{(k)} - x_j^{(k-1)}\right)\left(x_j^{(k-1)} - x_j^{(k-2)}\right) < 0, \\
\gamma_b & \text{if } \left(x_j^{(k)} - x_j^{(k-1)}\right)\left(x_j^{(k-1)} - x_j^{(k-2)}\right) > 0, \\
1 & \text{if } \left(x_j^{(k)} - x_j^{(k-1)}\right)\left(x_j^{(k-1)} - x_j^{(k-2)}\right) = 0.
\end{cases}
\qquad (3.9c)
$$

The values $\gamma_a = 0.7$, $\gamma_b = 1.2$ and $\rho_i = 10^{-5}$ for all $i = 0,1,\dots,m$ were chosen in (Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010). In the original algorithm of Svanberg (1987), however, $0 < \gamma_a < 1$ and $\gamma_b = 1/\gamma_a$ were proposed.
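Purely as an illustration of equations (3.5)-(3.7) (a sketch, not the exact implementation of our solver), the approximation coefficients could be assembled with Armadillo roughly as follows; the arguments rho (one value per function), the asymptotes l, u and the bounds are assumed to have been set as discussed above.

```cpp
#include <armadillo>
using arma::mat; using arma::vec; using arma::rowvec;

// Sketch: build p^(k), q^(k), r^(k) of eqs (3.5)-(3.7) from the gradient matrix
// dfdx ((m+1) x n), the values fval, the current point x, the asymptotes l, u
// and the bounds xmin, xmax.
void buildApproximation(const mat& dfdx, const vec& fval, const vec& x,
                        const vec& l, const vec& u,
                        const vec& xmin, const vec& xmax, const vec& rho,
                        mat& p, mat& q, vec& r)
{
    const mat gradPlus  = arma::clamp( dfdx, 0.0, arma::datum::inf);  // max( df, 0)
    const mat gradMinus = arma::clamp(-dfdx, 0.0, arma::datum::inf);  // max(-df, 0)
    const rowvec du2 = arma::square(u - x).t();                       // (u_j - x_j)^2
    const rowvec dl2 = arma::square(x - l).t();                       // (x_j - l_j)^2
    const rowvec invRange = (1.0 / (xmax - xmin)).t();                // 1/(x^max - x^min)

    p = 1.001 * gradPlus + 0.001 * gradMinus + rho * invRange;        // inner term of (3.5)
    q = 0.001 * gradPlus + 1.001 * gradMinus + rho * invRange;        // inner term of (3.6)
    p.each_row() %= du2;                                              // scale by (u_j - x_j)^2
    q.each_row() %= dl2;                                              // scale by (x_j - l_j)^2
    r = fval - p * (1.0 / (u - x)) - q * (1.0 / (x - l));             // eq (3.7)
}
```

Thanks to Armadillo's delayed evaluation (discussed in Section 5), these element-wise expressions are combined without unnecessary temporaries.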

4. A Review of the Globally Convergent Optimization Algorithm

The ordinary MMA algorithm solves the subproblem (3.1) without any test of whether the approximating functions $\xi_0^{(k)}(x)$ and $\xi_i^{(k)}(x)$ are conservative. This means that the optimal solution of (3.1) obtained at an iteration k (the outer iteration index) may not be a feasible solution of the original problem (2.2). By feasibility we mean that the optimal solution point must satisfy the inequality constraints of the original problem (2.2).

This is why the author in (Svanberg, 2002) moved from the ordinary MMA to a new version, named GCMMA, that is globally convergent towards a feasible solution of the original problem. The GCMMA introduces a new inner iteration loop (of index $\eta$), in which the approximation functions are updated at each $(k,\eta)$ and the subproblem is solved repeatedly until its optimal solution is a feasible solution of the original problem.
More precisely, the GCMMA subproblem (4.1) is solved at the iterations $(k,\eta)$, and the optimal solution $\left(x^{(k,\eta)}, y^{(k,\eta)}, z^{(k,\eta)}\right)$ is tested for feasibility. If it is not a feasible solution, the subproblem is regenerated in inner iterations $(k,\ \eta: \eta_{\text{initial}} \to \eta_{\text{final}})$ and solved until it becomes feasible at $(k, \eta_{\text{final}})$. This last, feasible optimal solution of the subproblem then becomes the next outer iteration point $\left(x^{(k+1,\eta)}, y^{(k+1,\eta)}, z^{(k+1,\eta)}\right)$.

$$
\begin{aligned}
\text{minimize}\quad & \xi_0^{(k,\eta)}(x) + a_0 z + \sum_{i=1}^{m}\left(c_i y_i + \tfrac{1}{2} d_i y_i^2\right) \\
\text{subject to}\quad & \xi_i^{(k,\eta)}(x) - a_i z - y_i \le 0, \quad i = 1,\dots,m, \\
& x \in X^{(k)}, \quad y \ge 0, \quad z \ge 0.
\end{aligned}
\qquad (4.1)
$$

The functions $\xi_0^{(k,\eta)}(x)$ and $\xi_i^{(k,\eta)}(x)$ are also given by first order approximations of the original functions $f_0(x)$ and $f_i(x)$ of (2.2), as follows:

$$
\xi_i^{(k,\eta)}(x) = \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k,\eta)}}{u_j^{(k)} - x_j} + \frac{q_{ij}^{(k,\eta)}}{x_j - l_j^{(k)}} \right) + r_i^{(k,\eta)}; \quad i = 0,1,\dots,m; \ j = 1,2,\dots,n,
\qquad (4.2)
$$

$$
p_{ij}^{(k,\eta)} = \left(u_j^{(k)} - x_j^{(k)}\right)^2 \left( 1.001\,\nabla f_{ij}^{(k)+} + 0.001\,\nabla f_{ij}^{(k)-} + \frac{\rho_i^{(k,\eta)}}{x_j^{\max} - x_j^{\min}} \right),
\qquad (4.3)
$$

$$
q_{ij}^{(k,\eta)} = \left(x_j^{(k)} - l_j^{(k)}\right)^2 \left( 0.001\,\nabla f_{ij}^{(k)+} + 1.001\,\nabla f_{ij}^{(k)-} + \frac{\rho_i^{(k,\eta)}}{x_j^{\max} - x_j^{\min}} \right),
\qquad (4.4)
$$

$$
r_i^{(k,\eta)} = f_i\!\left(x^{(k)}\right) - \sum_{j=1}^{n} \left( \frac{p_{ij}^{(k,\eta)}}{u_j^{(k)} - x_j^{(k)}} + \frac{q_{ij}^{(k,\eta)}}{x_j^{(k)} - l_j^{(k)}} \right),
\qquad (4.5)
$$

with equations (3.8) and (3.9a)-(3.9c) still holding here. It is important to note that the main difference between the two algorithms lies in the parameter $\rho_i$, which was kept constant (at a value of $10^{-5}$) in the original MMA algorithm but is now dynamically updated at each $(k,\eta)$ iteration, as shown next.

At the start of each outer iteration k, the following holds:

$$
\rho_i^{(k,0)} = \max\!\left\{ \varepsilon,\ \frac{\nu}{n} \sum_{j=1}^{n} \left( x_j^{\max} - x_j^{\min} \right) \left| \frac{\partial f_i}{\partial x_j}\!\left(x^{(k)}\right) \right| \right\}; \qquad \nu = 0.1; \quad i = 0, 1, \dots, m,
\qquad (4.6)
$$

where $\varepsilon$ is a positive real number such that $\varepsilon \ll 1$.


In the inner iterations, $\rho_i$ is then updated as:

$$
\rho_i^{(k,\eta+1)} =
\begin{cases}
\min\!\left\{ \varsigma\,\rho_i^{(k,\eta)},\ \lambda\left(\rho_i^{(k,\eta)} + \omega_i^{(k,\eta)}\right) \right\} & \text{if } \omega_i^{(k,\eta)} > 0, \\
\rho_i^{(k,\eta)} & \text{if } \omega_i^{(k,\eta)} \le 0,
\end{cases}
\qquad \varsigma = 10; \ \lambda = 1.1.
\qquad (4.7)
$$

$$
\xi_i^{(k,\eta)}(x) = v_i^{(k)}\!\left(x, x^{(k)}, \sigma^{(k)}\right) + \rho_i^{(k,\eta)}\, \tau_i^{(k)}\!\left(x, x^{(k)}, \sigma^{(k)}\right).
\qquad (4.8)
$$

Note that different forms of conservative convex separable approximation (CCSA) functions $v_i^{(k)}$ and $\tau_i^{(k)}$ exist in the literature, such as linear and separable quadratic approximations, linear and separable algorithmic approximations, linear and separable square root approximations, etc. (see Rao, 2013).

However, following Svanberg (2002), $v_i^{(k)}$ and $\tau_i^{(k)}$ are chosen here as the following approximations:

$$
v_i^{(k)}\!\left(x, x^{(k)}, \sigma^{(k)}\right) = f_i\!\left(x^{(k)}\right) + \sum_{j=1}^{n} \frac{ \left(\sigma_j^{(k)}\right)^2 \dfrac{\partial f_i}{\partial x_j}\!\left(x^{(k)}\right) \left(x_j - x_j^{(k)}\right) + \sigma_j^{(k)} \left| \dfrac{\partial f_i}{\partial x_j}\!\left(x^{(k)}\right) \right| \left(x_j - x_j^{(k)}\right)^2 }{ \left(\sigma_j^{(k)}\right)^2 - \left(x_j - x_j^{(k)}\right)^2 },
\qquad (4.9)
$$

$$
\tau_i^{(k)}\!\left(x, x^{(k)}, \sigma^{(k)}\right) = \frac{1}{2} \sum_{j=1}^{n} \frac{ \left(x_j - x_j^{(k)}\right)^2 }{ \left(\sigma_j^{(k)}\right)^2 - \left(x_j - x_j^{(k)}\right)^2 }.
\qquad (4.10)
$$

Let $\hat{x}^{(k,\eta)}$ be the solution of the most recent subproblem (4.1); then $\omega_i^{(k,\eta)}$ is defined as:

$$
\omega_i^{(k,\eta)} = \frac{ f_i\!\left(\hat{x}^{(k,\eta)}\right) - \xi_i^{(k,\eta)}\!\left(\hat{x}^{(k,\eta)}\right) }{ \tau_i^{(k)}\!\left(\hat{x}^{(k,\eta)}, x^{(k)}, \sigma^{(k)}\right) }.
\qquad (4.11)
$$

Here $\sigma^{(k)} \in X^{(k)}$, with $X^{(k)} = \{\, x \in X \mid x_j^{(k)} - 0.9\,\sigma_j^{(k)} \le x_j \le x_j^{(k)} + 0.9\,\sigma_j^{(k)},\ j = 1,\dots,n \,\}$.
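To make the role of this conservativeness/feasibility test concrete, the following is a minimal sketch (reusing the NlpProblem interface of Section 2; solveGcmmaSubproblem and evaluateApproximation are assumed helpers, not a published API) of one GCMMA outer step with its inner loop updating $\rho_i$ via (4.7):

```cpp
#include <algorithm>
#include <armadillo>

// Assumed helpers (declarations only): a subproblem solve of (4.1) for a given
// rho, and the evaluation of xi_i^(k,eta)(xHat) and tau_i^(k)(xHat), eqs (4.8)-(4.10).
arma::vec solveGcmmaSubproblem(const NlpProblem& prob, const arma::vec& x,
                               const arma::vec& rho);
void evaluateApproximation(const NlpProblem& prob, const arma::vec& x,
                           const arma::vec& xHat, const arma::vec& rho,
                           arma::vec& xi, arma::vec& tau);

// Sketch of one GCMMA outer step with its inner (conservativeness) loop.
arma::vec gcmmaOuterStep(const NlpProblem& prob, const arma::vec& x,
                         arma::vec rho /* rho_i^(k,0), size m+1 */, int maxInner)
{
    const double varsigma = 10.0, lambda = 1.1;      // constants of eq (4.7)
    arma::vec xHat = x;
    for (int eta = 0; eta < maxInner; ++eta) {
        xHat = solveGcmmaSubproblem(prob, x, rho);   // solve subproblem (4.1)

        arma::vec f, xi, tau;  arma::mat dfdx;
        prob.evaluate(xHat, f, dfdx);                // f_i(xHat)
        evaluateApproximation(prob, x, xHat, rho, xi, tau);

        const arma::vec omega = (f - xi) / tau;      // eq (4.11)
        if (omega.max() <= 0.0) break;               // approximations conservative: accept

        for (arma::uword i = 0; i < rho.n_elem; ++i) // rho update, eq (4.7)
            if (omega(i) > 0.0)
                rho(i) = std::min(varsigma * rho(i), lambda * (rho(i) + omega(i)));
    }
    return xHat;                                     // becomes x^(k+1)
}
```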

5. A New Algorithm of High Performance

Despite the change from the MMA (Svanberg, 1987) to the GCMMA (Svanberg, 2002), the computational cost of these two algorithms remains high when tackling large scale bounded constrained NLP problems (especially when the number of bounded variables n is ≥ 10000). The GCMMA is at least guaranteed to converge towards an optimal feasible solution, unlike the MMA, whose parameters must be adjusted by trial and error to achieve convergence, and even then with only a small probability of success.

For that reason, some authors have tried to improve the computational speed of both algorithms (Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011), while others have developed completely different new robust algorithms (also globally convergent, but sometimes only tested and applied on linear problems), seeking better performance at a lower computational cost.

For example, Gomes and Senne (2014) presented a new Sequential Piecewise Linear Programming (SPLP) algorithm applied to topology optimization problems of geometrically nonlinear structures. Their method is based on solving convex piecewise linear programming subproblems, including second order information about the objective function, and no longer assumes that structures in topology optimization undergo only small displacements. They showed interesting topology optimization results for different structural problems.

Moreover, the same authors, Gomes and Senne (2011), developed a new sequential linear programming (SLP) algorithm based on a trust-region (TR) constraint technique. This SLP was limited to linear compliance optimization problems. They applied their SLP algorithm to topology optimization problems and showed that it is faster than the GCMMA of Svanberg (2002) when applied to the same linear problems. Nevertheless, the SLP algorithm of Gomes and Senne (2011) was only applied to problems with at most 3750 bounded variables.

In addition, this SLP algorithm cannot be applied to the complex large scale bounded constrained NLP optimization problems we are targeting, where the number of variables may exceed 10000.

That is why Gomes-Ruggiero et al. (2008, 2010, 2011) tried to analyze and improve the robustness and convergence speed of the algorithms of Svanberg (1987, 2002). Their techniques were based on different strategies: a new updating strategy for the spectral parameter (Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010), and the solution of the dual subproblem of the MMA using a trust-region (TR) scheme (Gomes-Ruggiero et al., 2011).

Despite all these improvements reported in the literature, the computational cost still needs to be reduced in order to optimize systems with a very large number of variables (e.g. a large number of mesh cells in CFD problems with complex geometries). The different computer system characteristics used previously in the literature are presented in the following table:


Table 1 Computer system characteristics used by the different authors for their optimization algorithms.

| Reference | Used CPU | RAM | Compiler | Syst. Model | Oper. Syst. |
|---|---|---|---|---|---|
| Svanberg (2002) | Single | - | Fortran77 | Sun Enterprise 4000 | - |
| Gomes-Ruggiero et al. (2008) | Single (3.4 GHz) | 2 GB | Matlab® 7 | Pentium D® | Windows XP® |
| Gomes-Ruggiero et al. (2010), (2011) | Single (2.8 GHz) | 12 GB | Matlab® | Two-Xeon E5462 | Mac Pro® |
| Present Study | Single (3.4 GHz) Intel Core i3 | 8 GB | C++ | LENOVO® 10AHS1MC00 | Linux® Ubuntu 14.04 LTS |

It is clear from Table 1 that each author used different computer system characteristics; even so, these authors compared the numerical results (the computational times of their improved algorithms) with one another. This may of course be somewhat confusing, but we are obliged to do the same.

Our new algorithm is developed as a new generation thanks to its implementation via template meta-programming (C++) with the following external open source linear algebra (LA) libraries: LAPACK, BLAS and Armadillo (v 4.550.2). These libraries support multiple matrix operations and decompositions and can be used, as stated by their author, "for fast prototyping and computationally intensive experiments" (Sanderson, 2010).

In fact, the template C++ meta-programming technique is based on a delayed evaluation approach (carried out at compile time) that combines multiple operations into one. This reduces (or eliminates) the need for temporaries, and thus increases performance and reduces computational cost.
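As a small illustration (not taken from our solver), Armadillo's expression templates let a compound element-wise expression such as the one below be evaluated in a single pass, without intermediate temporaries:

```cpp
#include <armadillo>
#include <iostream>

int main()
{
    const arma::uword n = 5;
    arma::vec a(n, arma::fill::randu), b(n, arma::fill::randu);
    arma::vec c(n, arma::fill::randu), d(n, arma::fill::randu);

    // Delayed evaluation: the whole element-wise expression is combined at
    // compile time into a single loop, without intermediate temporaries.
    arma::vec y = 2.0 * a + b % c - d;   // % is element-wise multiplication

    std::cout << y.t();                  // print the result as a row
    return 0;
}
```

This mechanism is what keeps the assembly of quantities such as (3.5)-(3.7) efficient when n is large.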

Our developed algorithm is constructed in the spirit of the GCMMA of Svanberg (2002), but with a new, modified iteration strategy. After many numerical tests on problems of different scales, we found that most resolutions of the subproblem (4.1) (at a single outer iteration k) converged to a feasible solution within inner iterations whose final index $\eta_{\text{final}}$ was on average about 5. We therefore adopted the following new strategy: we retain the value $\eta_{\text{final}} = 1$ at each $k' = \beta k$ outer iterations ($\beta \in \mathbb{N},\ \beta > 0$; here $\beta = 2$), and otherwise leave $\eta_{\text{final}}$ free to change without limit. In this manner, the feasibility test on the most recent solution of the subproblem (4.1) is preserved, but it is periodically accelerated over the outer iterations, and the global convergence of the algorithm is not violated. A sketch of this strategy is given below.
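The following sketch shows our reading of this strategy (the cap on the inner loop at every β-th outer iteration is our interpretation of k' = βk); gcmmaStep is an assumed wrapper around the inner loop of Section 4, and kktResidualSq is the assumed KKT helper introduced earlier.

```cpp
#include <limits>
#include <armadillo>

// Assumed helpers (declarations only): one GCMMA outer step whose inner loop is
// limited to at most maxInner iterations, and the squared KKT norm of Section 3.
arma::vec gcmmaStep(const NlpProblem& prob, const arma::vec& x, int maxInner);
double kktResidualSq(const NlpProblem& prob, const arma::vec& x);

// Sketch of the modified iteration strategy of the present work.
arma::vec runNewStrategy(const NlpProblem& prob, arma::vec x,
                         double eps, int maxOuter, int beta /* = 2 here */)
{
    for (int k = 1; k <= maxOuter; ++k) {
        // At every beta-th outer iteration the inner loop is capped at a single
        // iteration (eta_final = 1); otherwise it is free to run until the
        // approximations become conservative.
        const int maxInner = (k % beta == 0) ? 1 : std::numeric_limits<int>::max();
        x = gcmmaStep(prob, x, maxInner);
        if (kktResidualSq(prob, x) < eps) break;   // squared-norm KKT criterion
    }
    return x;
}
```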
First, two sets, A and B, of algorithm parameter values are used; they are presented in Table 2.

Table 2 Two sets of algorithm parameters

| Present Study | γ_b value of (3.9c) | ν value of (4.6) | λ value of (4.7) |
|---|---|---|---|
| Set A | 1.20 | 0.10 | 1.1 |
| Set B | 1.30 | 0.13 | 1.05 |

Note that $\gamma_a = 0.7$ was used for the data in sets A and B. A deeper parametric study is, however, conducted later to study the effect of all the parameters on the global performance of the algorithm with the present new strategy.

6. Results
In order to properly compare our numerical results (convergence and computational speed) with those from the literature, we chose the same two academic problems used by (Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011). These nonlinear problems are given in the following table:

Table 3 Two nonlinear constrained minimization problems

Problem1:

$$
\begin{aligned}
\text{Minimize}\quad & f_0(x) = x^T [S]\, x \\
\text{Subject to:}\quad & f_1(x) = \frac{n}{2} - x^T [P]\, x \le 0, \\
& f_2(x) = \frac{n}{2} - x^T [Q]\, x \le 0, \\
& -1 \le x_j \le 1, \quad j = 1,\dots,n, \\
& \text{with } x^{(k=1)} = (0.5, 0.5, \dots, 0.5)^T \in \mathbb{R}^n.
\end{aligned}
$$

Problem2:

$$
\begin{aligned}
\text{Minimize}\quad & f_0(x) = -x^T [S]\, x \\
\text{Subject to:}\quad & f_1(x) = -\frac{n}{2} + x^T [P]\, x \le 0, \\
& f_2(x) = -\frac{n}{2} + x^T [Q]\, x \le 0, \\
& -1 \le x_j \le 1, \quad j = 1,\dots,n, \\
& \text{with } x^{(k=1)} = (0.25, 0.25, \dots, 0.25)^T \in \mathbb{R}^n.
\end{aligned}
$$

$[S] = [s_{ij}]$, $[P] = [p_{ij}]$ and $[Q] = [q_{ij}]$ are $[n \times n]$ matrices ($n \in \mathbb{N},\ n > 1$) defined respectively by the following real valued coefficients:

$$
s_{ij} = \frac{2 + \sin(4\pi \alpha_{ij})}{(1 + |i - j|)\,\ln n}, \qquad
p_{ij} = \frac{1 + 2\alpha_{ij}}{(1 + |i - j|)\,\ln n}, \qquad
q_{ij} = \frac{3 - 2\alpha_{ij}}{(1 + |i - j|)\,\ln n},
\qquad (6.1)
$$

with $\alpha_{ij} = \dfrac{i + j - 2}{2n - 2} \in [0, 1]$ for all $(i, j) \in [1, n]$.
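As an illustration of Table 3 and equation (6.1) (a minimal sketch, independent of our solver implementation), the matrices and the functions of Problem1 can be set up with Armadillo as follows:

```cpp
#include <armadillo>
#include <cmath>

// Build S, P, Q of eq. (6.1) for a given n (assumes n > 1 so that ln(n) > 0).
void buildMatrices(arma::uword n, arma::mat& S, arma::mat& P, arma::mat& Q)
{
    S.set_size(n, n);  P.set_size(n, n);  Q.set_size(n, n);
    const double logn = std::log(static_cast<double>(n));
    for (arma::uword i = 0; i < n; ++i)
        for (arma::uword j = 0; j < n; ++j) {
            // With 1-based indices of eq. (6.1): alpha_ij = (i + j - 2)/(2n - 2)
            const double alpha = (double(i) + double(j)) / (2.0 * n - 2.0);
            const double denom = (1.0 + std::abs(double(i) - double(j))) * logn;
            S(i, j) = (2.0 + std::sin(4.0 * arma::datum::pi * alpha)) / denom;
            P(i, j) = (1.0 + 2.0 * alpha) / denom;
            Q(i, j) = (3.0 - 2.0 * alpha) / denom;
        }
}

// Problem1: f0 = x'Sx, f1 = n/2 - x'Px, f2 = n/2 - x'Qx, with their gradients.
void problem1(const arma::vec& x, const arma::mat& S, const arma::mat& P,
              const arma::mat& Q, arma::vec& f, arma::mat& dfdx)
{
    const double half_n = 0.5 * x.n_elem;
    f.set_size(3);
    f(0) = arma::as_scalar(x.t() * S * x);
    f(1) = half_n - arma::as_scalar(x.t() * P * x);
    f(2) = half_n - arma::as_scalar(x.t() * Q * x);

    dfdx.set_size(3, x.n_elem);
    dfdx.row(0) = (  (S + S.t()) * x ).t();   // gradient of f0
    dfdx.row(1) = ( -(P + P.t()) * x ).t();   // gradient of f1
    dfdx.row(2) = ( -(Q + Q.t()) * x ).t();   // gradient of f2
}
```

These two routines could, for instance, be wrapped inside the NlpProblem-style interface sketched in Section 2.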

It is important to note that the nonlinear objective function f_0(x) is strictly convex in Problem1 and strictly concave in Problem2, whereas the nonlinear inequality constraint functions f_1(x) and f_2(x) are strictly concave in Problem1 and strictly convex in Problem2.

Problems 1 and 2 are solved iteratively (with the two parameter sets A and B) for n = 100, 500, 1000, 2000, 5000 and 10000, until the convergence criterion (the squared norm of the KKT conditions) reaches a positive small value ε << 1. Figure 2 shows that, using the present strategy applied to Problem1 and thanks to the numerical libraries used, the convergence time is reduced by a factor of at least 13 when compared to the previous results of (Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011).

Fig 2. Convergence time in seconds for different algorithm strategies – Problem1.

The detailed results behind Fig. 2 are given in Table 4. They show that a slight change in the choice of the algorithm parameters (sets A and B of Table 2) affects the convergence time, but does not affect the computed objective function values at all. Despite this slight effect on the convergence time, the latter is still reduced by at least a factor of 13 with respect to the best previous results in the literature (Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011).


Table 4 Problem1: convergence time in seconds. The columns Stgy. 1-3 (PD/TR) and DA-TR refer to the strategies of Gomes-Ruggiero et al. (2010, 2011); the present strategy uses the PD approach with sets A and B of Table 2.

| n | Svanberg (2002), PD | Gomes-Ruggiero et al. (2008), PD | Stgy. 1, PD | Stgy. 2, PD | Stgy. 3, PD | Stgy. 1, TR | Stgy. 2, TR | Stgy. 3, TR | DA-TR | Present, Set A | Present, Set B |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 4.1267 | 15.76 | 3.8254 | 3.8818 | 2.1315 | 2.0159 | 1.7122 | 1.9412 | 1.1947 | 0.1328 | 0.1495 |
| 500 | 49.666 | 433.55 | 45.977 | 36.454 | 23.287 | 45.821 | 42.304 | 32.617 | 19.804 | 1.9369 | 1.6669 |
| 1000 | 214.74 | 1983.34 | 197.36 | 184.98 | 96.354 | 204.46 | 186.98 | 172.34 | 90.175 | 7.1938 | 5.8387 |
| 2000 | 868.98 | 7380.22 | 819.99 | 1206.3 | 404.32 | 857.63 | 802.26 | 866.71 | 353.22 | 29.8357 | 27.5651 |
| 5000 | ≈3000 | - | - | - | - | - | - | - | - | 192.6573 | 178.6349 |
| 10000 | ≈12000 | - | - | - | - | - | - | - | - | 807.205 | 843.076 |

The precision in computing the objective function values for Problem1, for all the selected numbers of variables n (between 100 and 10000), is more than satisfactory, as shown in Table 5.

Table 5 Problem1: computed objective function values

| n | Svanberg (2002), PD | Gomes-Ruggiero et al. (2008), PD | Present, Set A | Present, Set B |
|---|---|---|---|---|
| 100 | 24.9 | - | 24.9 | 24.9 |
| 500 | 129.65 | - | 129.65 | 129.65 |
| 1000 | 260.85 | 260.85 | 260.85 | 260.85 |
| 2000 | 523.51 | 523.51 | 523.51 | 523.51 |
| 5000 | 1312.05 | - | 1312.05 | 1312.05 |
| 10000 | 2627.76 | - | 2627.76 | 2627.76 |

For a deeper comparison with previous results, Table 6 presents the total number of inner and outer iterations needed by the solver to achieve convergence when solving Problem1 with the parameters of set A and set B.


Table 6 Problem1: total number of outer (inner) iterations. The columns Stgy. 1-3 (PD/TR) and DA-TR refer to the strategies of Gomes-Ruggiero et al. (2010, 2011).

| n | Svanberg (2002), PD | Gomes-Ruggiero et al. (2008), PD | Stgy. 1, PD | Stgy. 2, PD | Stgy. 3, PD | Stgy. 1, TR | Stgy. 2, TR | Stgy. 3, TR | DA-TR | Present, Set A | Present, Set B |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 102 (104) | - | 108 (101) | 132 (59) | 97 (11) | 106 (134) | 103 (96) | 121 (63) | 99 (9) | 66 (90) | 71 (95) |
| 500 | 154 (159) | - | 153 (138) | 158 (36) | 115 (0) | 151 (184) | 156 (147) | 150 (39) | 105 (0) | 99 (137) | 81 (115) |
| 1000 | 177 (209) | 186 (178) | 179 (162) | 223 (40) | 128 (0) | 177 (214) | 180 (161) | 223 (28) | 124 (0) | 122 (174) | 95 (139) |
| 2000 | 190 (224) | 200 (199) | 189 (185) | 368 (82) | 138 (0) | 186 (232) | 190 (186) | 274 (60) | 123 (0) | 142 (199) | 129 (181) |
| 5000 | 221 (263) | - | - | - | - | - | - | - | - | 162 (220) | 144 (205) |
| 10000 | 251 (296) | - | - | - | - | - | - | - | - | 147 (218) | 150 (212) |

Now we apply the solver to Problem2. Fig. 3 shows that, using the present strategy applied to Problem2 and thanks to the numerical libraries used, the convergence time is reduced by a factor of at least 28 when compared to the previous results from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011).

Table 7 presents the detailed results behind Fig. 3 and shows that a slight change in the choice of the algorithm parameters (sets A and B of Table 2) affects the convergence time, but does not affect the computed objective function values for Problem2 at all. Despite this slight effect on the convergence time, the latter is still reduced by at least a factor of 28 with respect to the best previous results in the literature (Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011).

The precision in computing the objective function values for Problem2, for all n between 100 and 10000, is again more than convincing, as shown in Table 8.


Fig 3. Convergence time in seconds for different algorithm strategies – Problem2.

Table 7 Problem2: convergence time in seconds. The columns Stgy. 1-3 (PD/TR) and DA-TR refer to the strategies of Gomes-Ruggiero et al. (2010, 2011); the present strategy uses the PD approach with sets A and B of Table 2.

| n | Svanberg (2002), PD | Gomes-Ruggiero et al. (2008), PD | Stgy. 1, PD | Stgy. 2, PD | Stgy. 3, PD | Stgy. 1, TR | Stgy. 2, TR | Stgy. 3, TR | DA-TR | Present, Set A | Present, Set B |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 11.553 | 37.03 | 12.949 | 6.9761 | 6.3205 | 3.896 | 3.4818 | 2.9401 | 2.6735 | 0.2922 | 0.278 |
| 500 | 130.99 | 1163.69 | 121.86 | 105.12 | 86.954 | 109.26 | 100.49 | 90.385 | 74.973 | 4.0224 | 3.6634 |
| 1000 | 566.14 | 5033.81 | 475.37 | 459.24 | 378.66 | 469.01 | 450.5 | 423.04 | 346.09 | 13.4675 | 11.4151 |
| 2000 | 2208.1 | 18896.39 | 2084.9 | 1947.3 | 1678.3 | 2081.4 | 2008.4 | 1861.9 | 1603.6 | 58.943 | 57.204 |
| 5000 | ≈7500 | - | - | - | - | - | - | - | - | 366.7132 | 304.1883 |
| 10000 | ≈30000 | - | - | - | - | - | - | - | - | 1907.998 | 1563.043 |

Table 8 Problem2: computed objective function values

| n | Svanberg (2002), PD | Gomes-Ruggiero et al. (2008), PD | Present, Set A | Present, Set B |
|---|---|---|---|---|
| 100 | -75.1 | - | -75.1 | -75.1 |
| 500 | -370.35 | - | -370.35 | -370.35 |
| 1000 | -739.15 | -739.15 | -739.15 | -739.15 |
| 2000 | -1476.49 | -1476.49 | -1476.49 | -1476.49 |
| 5000 | -3687.95 | - | -3687.95 | -3687.95 |
| 10000 | -7373.24 | - | -7373.24 | -7373.24 |

The memory required for all computations (for each problem), using our system characteristics (Table 1), varied between 0.001 and 4.8 GB depending on the number of variables.

Table 9 shows the total number of inner and outer iterations needed by the solver to achieve convergence in solving Problem2, using the present strategy with parameter sets A and B.

Table 9 Problem2: total number of outer (inner) iterations. The columns Stgy. 1-3 (PD/TR) and DA-TR refer to the strategies of Gomes-Ruggiero et al. (2010, 2011).

| n | Svanberg (2002), PD | Gomes-Ruggiero et al. (2008), PD | Stgy. 1, PD | Stgy. 2, PD | Stgy. 3, PD | Stgy. 1, TR | Stgy. 2, TR | Stgy. 3, TR | DA-TR | Present, Set A | Present, Set B |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 227 (240) | - | 222 (198) | 189 (158) | 199 (60) | 223 (268) | 224 (204) | 189 (154) | 201 (89) | 139 (185) | 130 (181) |
| 500 | 394 (388) | - | 392 (317) | 353 (280) | 357 (97) | 389 (430) | 390 (339) | 355 (284) | 353 (123) | 194 (288) | 181 (255) |
| 1000 | 436 (415) | 449 (408) | 443 (337) | 416 (350) | 418 (142) | 441 (434) | 445 (379) | 417 (351) | 410 (153) | 221 (322) | 190 (278) |
| 2000 | 465 (471) | 491 (481) | 477 (379) | 442 (423) | 452 (185) | 481 (505) | 487 (450) | 445 (422) | 443 (241) | 267 (397) | 240 (437) |
| 5000 | 584 (606) | - | - | - | - | - | - | - | - | 281 (463) | 232 (390) |
| 10000 | 682 (704) | - | - | - | - | - | - | - | - | 319 (572) | 258 (466) |

In order to quantify the effect of the parameters on the global performance of the algorithm (with the present strategy), a parametric study is conducted next. Figs. 4, 5 and 6 show the effect of different parameter values on the global performance of the algorithm, using the present strategy, when solving Problem1 at n = 1000.

Fig 4. Convergence time for different algorithm parameters ($\gamma_b = 1/\gamma_a$) - Problem1.


Fig 5. Number of outer iterations for different algorithm parameters ($\gamma_b = 1/\gamma_a$) - Problem1.

Fig 6. Number of inner iterations for different algorithm parameters ($\gamma_b = 1/\gamma_a$) - Problem1.

Similarly, the effect of these parameters on the global performance of the algorithm (with the present strategy) is also examined for Problem2 at n = 1000. Figs. 7, 8 and 9 show the effect of these parameter values on the global performance of the algorithm using the present strategy.


Fig 7. Convergence time for different algorithm parameters ($\gamma_b = 1/\gamma_a$) - Problem2.

Fig 8. Number of outer iterations for different algorithm parameters ($\gamma_b = 1/\gamma_a$) - Problem2.

Fig 9. Number of inner iterations for different algorithm parameters ($\gamma_b = 1/\gamma_a$) - Problem2.


Thus, after analyzing the effect of the different algorithm parameter values (with the present strategy), it can be observed that the parameter values $\gamma_b = 1/\gamma_a$; $\gamma_a \in [0.3, 0.5]$; $\lambda \in [1.1, 1.15]$; $\nu \in [0.1, 0.15]$ give the best global performance of the algorithm. One may of course ask whether a larger (or different) value of n might make the algorithm performance sensitive to another range of parameter values. To address this issue, we repeated the sensitivity analysis for Problem1, now at n = 3000, in Fig. 10, confirming that the parameter intervals identified at n = 1000 still hold for the best performance of the algorithm at higher values of n. Moreover, in Fig. 11 we present a logarithmic plot of the effect of the number of variables n on the convergence time (in seconds) when solving both problems (Problem1 and Problem2).

Fig 10. Convergence time for different algorithm parameters ($\gamma_b = 1/\gamma_a$) at n = 3000 - Problem1.


Fig 11. Effect of number of variables n on the algorithm convergence time.

In Fig. 12 we show the effect of choosing a uniform initial vector, x(k=1) = constant for all n, on the algorithm performance (using set A). We observe that, for 11 different initial vector values (the value being equal for all of the 2000 variables), the algorithm converges very well when solving Problem1, with only a maximum increase of around 11 % with respect to the minimum convergence time of t_min = 27.2 seconds. The solutions of both problems are shown in Fig. 13 for further illustration.

However, we observe from Fig. 14 that, starting from a sinusoidal initial vector $x(k=1) = \sin\!\left(\frac{(1000 - i)}{n}\pi\right)$ with $i \in [1, 2, \dots, 1000]$ and n = 2000, the algorithm (also with set A) converges very well when solving Problem1 in only 11.02 seconds, which is about three times less than t_min = 27.2 seconds obtained with x(k=1) = constant for all n. This finding is of course logical, but what is surprising is that the convergence time is reduced by around a factor of three.
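For illustration, the two kinds of starting vectors can be built as follows (a sketch: the text gives the sinusoidal expression for i = 1..1000 with n = 2000, and we assume here that the same expression is applied to every component):

```cpp
#include <armadillo>
#include <cmath>

// Uniform start: x_j = value for all j.
arma::vec uniformInit(arma::uword n, double value)
{
    return arma::vec(n, arma::fill::ones) * value;
}

// Nonuniform (sinusoidal) start.  Assumption: x_i = sin((1000 - i) * pi / n)
// is applied to every component i = 1..n.
arma::vec sinusoidalInit(arma::uword n)
{
    arma::vec x(n);
    for (arma::uword i = 1; i <= n; ++i)
        x(i - 1) = std::sin((1000.0 - double(i)) * arma::datum::pi / double(n));
    return x;
}
```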


Fig 12. Effect of the initial vector on algorithm general performance (using set A).

Fig 13. Solutions of both problems Problem1 and Problem2.

Thus, we can conclude that if one wishes to push the algorithm towards the best possible performance, a starting vector that is nonuniform over the n variables is recommended.


Fig 14. Effect of a sinusoidal initial vector. Convergence is achieved in 11.02 seconds using set A.

7. Conclusion
Thanks to the C++ programming language and external linear algebra libraries (Sanderson, 2010), a fast, robust and high-performance globally convergent optimization algorithm has been developed. It serves as an important numerical tool for solving inequality-constrained nonlinear programming problems with a large number of design variables n (n ≥ 10000).

The developed tool is validated by solving two well-known, complex, large scale nonlinear constrained minimization problems (Problem1 and Problem2). The convergence time is reduced by up to a factor of 28 when compared to previous results from the literature (Svanberg, 2002; Gomes-Ruggiero et al., 2008; Gomes-Ruggiero et al., 2010; Gomes-Ruggiero et al., 2011), thanks to a new internal iterative strategy applied to the GCMMA algorithm (Svanberg, 2002).

The effect of the different parameters on the global performance of the new algorithm has been investigated. The best performance (with the new iterative strategy) is found for the following algorithm parameter values: $\gamma_b = 1/\gamma_a$; $\gamma_a \in [0.3, 0.5]$; $\lambda \in [1.1, 1.15]$; $\nu \in [0.1, 0.15]$.

The effect of the form of the initial vector on the algorithm performance has also been analyzed. We found that an initial vector with a nonuniform form (e.g. sinusoidal) over the number of variables n considerably improves the general performance of the algorithm.

The numerical achievement of this work is a promising and robust scientific computation tool that can be used and applied to different complex engineering optimization problems. For example, it can be coupled easily (thanks to an object-oriented C++ class) to many existing external commercial and free multiphysics solvers (such as Finite Element, Finite Volume, Computational Fluid Dynamics and Solid Mechanics solvers).


Acknowledgements
The authors are very grateful to Prof. K. Svanberg for the useful discussions and for supplying the Matlab® source codes of his algorithms (Svanberg, 1987; Svanberg, 2002).

References
Bendsoe, M. P., & Sigmund, O. (2004). Topology Optimization: Theory, Methods and Applications. Second
Edition, ISBN 3-540-42992-1, Springer.
http://dx.doi.org/10.1007/978-3-662-05086-6
Burger F.H., Dirker J. and Meyer J.P. (2013). Three-dimensional conductive heat transfer topology
optimisation in a cubic domain for the volume-to-surface problem. International Journal of Heat and
Mass Transfer, vol. 67, pp. 214-224.
http://dx.doi.org/10.1016/j.ijheatmasstransfer.2013.08.015
Dede E. M. (2009). Multiphysics Topology Optimization of Heat Transfer and Fluid Flow Systems.
Proceedings of the COMSOL Conference, Boston.
Gersborg-Hansen A., Bendsøe M. P. and Sigmund O. (2006). Topology Optimization of Heat Conduction
Problems Using The Finite Volume Method. Structural and Multidisciplinary Optimization, vol. 31, no.
4, pp. 251–259.
http://dx.doi.org/10.1007/s00158-005-0584-3
Gomes, F. A. M., & Senne, T. A. (2014). An algorithm for the topology optimization of geometrically
nonlinear structures. Int. J. Numer. Meth. Engng, 99, 391-409. DOI: 10.1002/nme.4686
http://dx.doi.org/10.1002/nme.4686
Gomes, F. A. M., & Senne, T. A. (2011). An SLP algorithm and its application to topology optimization.
Computational and Applied Mathematics, 30(1), 53-89.
Gomes-Ruggiero M. A., Sachine, M. & Santos S. A. (2008). Analysis of a Spectral Updating for the Method of
Moving Asymptotes. EngOpt - International Conference on Engineering Optimization, Rio de Janeiro,
Brazil, 01 - 05 June.
Gomes-Ruggiero, M. A., Sachine, M., & Santos, S. A. (2010). A spectral updating for the method of moving
asymptotes. Optim. Methods Softw., 25(6), 883–893.
http://dx.doi.org/10.1080/10556780902906282
Gomes-Ruggiero, M. A., Sachine, M., & Santos, S. A. (2010). Globally convergent modifications to the Method
of Moving Asymptotes and the solution of the subproblems using trust regions: theoretical and
numerical results. Instituto de Matemática, Estatística e Computação Científica.
http://www.ime.unicamp.br/conteudo/globally-convergent-modifications-method-moving-
asymptotes-and-solution-subproblems-using-t
Gomes-Ruggiero, M. A., Sachine, M., & Santos, S. A. (2011). Solving the dual subproblem of the Method of
Moving Asymptotes using a trust-region scheme. Comput. Appl. Math. 30 (1).
Hassan, E., Wadbro, E., & Berggren, M. (2014). Patch and Ground Plane Design of Microstrip Antennas by
Material Distribution Topology Optimization. Progress In Electromagnetics Research B, 59, 89-102.
http://dx.doi.org/10.2528/PIERB14030605
Lee Kyungjun (2012). Topology optimization of convective cooling system designs, PhD thesis, University of
Michigan.
Marck G., Nemer M., Harion J.-L., Russeil S. and Bougeard D. Topology optimization using the SIMP method
for multiobjective conductive problems. Numerical Heat Transfer, Part B: Fundamentals, 61(6):439–
470, June 2012.
http://dx.doi.org/10.1080/10407790.2012.687979
Oevelen T. V. and Baelmans M. (2014). OPT-i An International Conference on Engineering and Applied
Sciences Optimization. Kos Island, Greece, 4-6, June.

Rao, S. S. (2013). Engineering optimization: Theory and Practice. Third Enlarged Edition, ISBN: 978-81-
224-2723-3, New Age International (P) Ltd., Publishers.
Sanderson, C. (2010). Armadillo: An Open Source C++ Linear Algebra Library for Fast Prototyping and
Computationally Intensive Experiments. NICTA Technical Report - October.
http://www.nicta.com.au/research/research_publications
Svanberg, K. (1987). The method of moving asymptotes - a new method for structural optimization.
Internat. J. Numer. Methods Engrg., 24, 359–373.
http://dx.doi.org/10.1002/nme.1620240207
Svanberg, K. (2002). A class of globally convergent optimization methods based on conservative convex
separable approximations. SIAM J. Optim., 12, 555–573.
http://dx.doi.org/10.1137/S1052623499362822
