
Struct Multidisc Optim
DOI 10.1007/s00158-011-0634-y

RESEARCH PAPER

A feasible directions method for nonsmooth convex optimization

José Herskovits · Wilhelm P. Freire · Mario Tanaka Fo. · Alfredo Canelas

Received: 26 July 2010 / Revised: 14 December 2010 / Accepted: 20 January 2011
© Springer-Verlag 2011


Abstract We propose a new technique for the minimization of convex functions that are not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximate linear programs obtained with cutting planes. At each iteration a search direction and a step length are computed. If the step length is considered non-serious, a cutting plane is added and a new search direction is computed. This procedure is repeated until a serious step is obtained. When this happens, the search direction is a feasible descent direction of the constrained equivalent problem. To compute the search directions we employ the same formulation as in FDIPA, the Feasible Directions Interior Point Algorithm for constrained optimization. We prove global convergence of the present method. A set of numerical tests is described. The present technique was also successfully applied to the topology optimization of robust trusses. Our results are comparable to those obtained with other well-known established methods.

J. Herskovits · M. Tanaka Fo. (B)
Mechanical Engineering Program, COPPE, Federal University of Rio de Janeiro, PO Box 68503, CEP 21945-970, CT, Cidade Universitária, Ilha do Fundão, Rio de Janeiro, Brazil
e-mail: [email protected]
J. Herskovits
e-mail: [email protected]
W. P. Freire
UFJF, Federal University of Juiz de Fora, Juiz de Fora, Brazil
e-mail: [email protected]
A. Canelas
IET, Facultad de Ingeniería, UDELAR, Montevideo, Uruguay
e-mail: [email protected]
Present Address:
M. Tanaka Fo.
Faculdade de Matemática, UFOPA, Universidade Federal do Oeste do Pará, Santarém, PA, Brazil

Keywords Unconstrained convex optimization · Nonsmooth optimization · Cutting planes method · Feasible direction interior point methods

1 Introduction

In this paper, we propose a new algorithm for solving the unconstrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x) \qquad \text{(P)}$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a convex function, not necessarily smooth.

Nonsmooth optimization problems arise in advanced structural analysis and optimization. This is the case for several types of unilateral problems, such as contact and impact in solids, or delamination analysis in composite materials. Dynamic analysis and stability studies involve eigenvalues and nonsmooth functions. Most of these applications involve nonconvex problems with constraints. The authors consider the present study as the first step of a new technique leading to numerical algorithms for nonconvex optimization with smooth and nonsmooth constraints.

It is assumed that $f$ has bounded level sets. Thus, there is $a \in \mathbb{R}$ such that $\Omega_a \equiv \{x \in \mathbb{R}^n \mid f(x) \le a\}$ is compact. Let $\partial f(x)$ be the subdifferential (Clarke 1983) of $f$ at $x$. In what follows, it is assumed that one arbitrary subgradient $s \in \partial f(x)$ can be computed at any point $x \in \mathbb{R}^n$.
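For example, for a pointwise maximum of affine functions, $f(x) = \max_i \{c_i^T x + b_i\}$, the gradient of any piece attaining the maximum is such a subgradient. A minimal Python sketch of this kind of oracle (the function name and data are ours, for illustration only):

```python
import numpy as np

def max_affine_oracle(C, b, x):
    """Return f(x) and one subgradient s of f(x) = max_i (C[i] @ x + b[i]),
    a convex piecewise-linear function: s is the gradient of an active piece."""
    values = C @ x + b          # value of every affine piece at x
    i = int(np.argmax(values))  # index of one maximal (active) piece
    return values[i], C[i]

# f(x) = max(x1, -x1, x2, -x2) = max(|x1|, |x2|)
C = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.zeros(4)
fx, s = max_affine_oracle(C, b, np.array([0.3, -0.7]))
print(fx, s)  # 0.7 [ 0. -1.]
```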
A special feature of nonsmooth optimization is the fact that $\nabla f(x)$ can change discontinuously and is not necessarily small in the neighborhood of a local extremum of the objective function, see (Hiriart-Urruty and Lemaréchal 1993; Bonnans et al. 2003). For this reason, the usual smooth gradient-based optimization methods cannot be employed.

Several methods have been proposed for solving Problem P, see (Auslender 1987; Bonnans et al. 2003; Kiwiel 1985; Mäkelä and Neittaanmäki 1992). Cutting plane methods approximate the function with a set of tangent planes. At each iteration the approximated function is minimized and a new tangent plane is added. The classical reference is (Kelley 1960), where a trial point is computed by solving a linear programming problem. We mention (Nesterov and Vial 1999; Goffin and Vial 2002) for analytic center cutting plane methods and (Schramm and Zowe 1992; Mitchell 2003) for logarithmic potential and volumetric barrier cutting plane methods. Bundle methods based on the stabilized cutting plane idea are numerically and theoretically well understood (Bonnans et al. 2003; Kiwiel 1985; Mäkelä and Neittaanmäki 1992; Schramm and Zowe 1992).

Here we show that the search direction of FDIPA (Herskovits 1982, 1986, 1998), the Feasible Direction Interior Point Algorithm for constrained smooth optimization, can be successfully employed in the frame of cutting plane methods. In this way, we obtain an algorithm for nonsmooth optimization with a simple structure and without the need of solving quadratic programming subproblems. The absence of quadratic subproblems makes our approach very easy to implement and enables the development of codes for very large problems.
The nonsmooth unconstrained Problem P is reformulated as the equivalent constrained Problem EP with a linear objective function and one nonsmooth inequality constraint,

$$\min_{(x,z) \in \mathbb{R}^{n+1}} z \quad \text{s.t.} \quad f(x) \le z, \qquad \text{(EP)}$$

where $z \in \mathbb{R}$ is an auxiliary variable. With the present approach, a decreasing sequence of feasible points $\{(x^k, z^k)\}$ converging to a minimum of $f(x)$ is obtained. That is, we have $z^{k+1} < z^k$ and $z^k > f(x^k)$ for all $k$. At each iteration, an auxiliary linear program is defined using cutting planes. A feasible descent direction of the linear program is obtained employing FDIPA, and a step length is computed. Then, a new iterate $(x^{k+1}, z^{k+1})$ is defined according to suitable rules. To determine a new iterate, the algorithm produces auxiliary points $(y_i, w_i)$; when an auxiliary point is in the interior of $\mathrm{epi}(f)$, that is, when $w_i > f(y_i)$, we say that the step is serious and we take it as the new iterate. Otherwise, the iterate is not changed and we say that the step is null. A new cutting plane is then added and the procedure is repeated until a serious step is obtained. It will be proved that, when a serious step is obtained, the search direction given by FDIPA is also a feasible descent direction for Problem EP.

This paper is organized in six sections. In the next one we describe the FDIPA. In Section 3 the main features of the new method are presented. Global convergence of the algorithm is shown in Section 4. In the subsequent section, numerical results are presented. Compared with four bundle algorithms, our results show that the present approach is robust and efficient. An application to robust structural optimization is described in Section 6. Finally, our conclusions and suggestions for future research are presented.

2 The feasible direction interior point method

The present approach for nonsmooth optimization employs procedures and concepts involved in the Feasible Direction Interior Point Algorithm. Even if FDIPA (Herskovits 1998) is a technique for smooth optimization, to improve the clearness and completeness of the present paper, we describe some of the basic ideas involved.

We consider now the inequality constrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad g(x) \le 0, \qquad (1)$$

where $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}^m$ are continuously differentiable. The FDIPA requires the following assumptions about Problem 1:

Assumption 1 Let $\Omega \equiv \{x \in \mathbb{R}^n \mid g(x) \le 0\}$ be the feasible set. There exists a real number $a$ such that the set $\Omega_a \equiv \{x \in \Omega \mid f(x) \le a\}$ is compact and has an interior $\mathrm{int}(\Omega_a)$.

Assumption 2 Each $x \in \mathrm{int}(\Omega_a)$ satisfies $g(x) < 0$.

Assumption 3 The functions $f$ and $g$ are continuously differentiable in $\Omega_a$ and their derivatives satisfy a Lipschitz condition.

Assumption 4 (Regularity Condition) A point $x \in \Omega_a$ is regular if the gradient vectors $\nabla g_i(x)$, for $i$ such that $g_i(x) = 0$, are linearly independent. FDIPA requires the regularity assumption at a local solution of Problem 1.

Let us remind the reader of some well-known concepts (Luenberger 1984), widely employed in this paper.

Definition 1 $d \in \mathbb{R}^n$ is a descent direction for a smooth function $\phi : \mathbb{R}^n \to \mathbb{R}$ at $x \in \Omega$ if $d^T \nabla\phi(x) < 0$.

Definition 2 $d \in \mathbb{R}^n$ is a feasible direction for Problem 1 at $x \in \Omega$ if for some $\tau > 0$ we have $x + td \in \Omega$ for all $t \in [0, \tau]$.

Definition 3 A vector field $d(x)$ defined on $\Omega$ is said to be a uniformly feasible directions field of Problem 1 if there exists a step length $\tau > 0$ such that $x + td(x) \in \Omega$ for all $t \in [0, \tau]$ and for all $x \in \Omega$.

It can be shown that $d$ is a feasible direction if $d^T \nabla g_i(x) < 0$ for any $i$ such that $g_i(x) = 0$ (Herskovits 1998). Definition 3 introduces a condition on the vector field $d(x)$ which is stronger than the simple feasibility of any element of $d(x)$. When $d(x)$ constitutes a uniformly feasible directions field, it supports a feasible segment $[x, x + \tau(x)d(x)]$ such that $\tau(x)$ is bounded below in $\Omega$ by $\tau > 0$.

Let $x^*$ be a regular point of Problem 1. The Karush-Kuhn-Tucker (KKT) first order necessary optimality conditions are expressed as follows: If $x^*$ is a local minimum of Problem 1 then there exists $\lambda^* \in \mathbb{R}^m$ such that

$$\nabla f(x^*) + \nabla g(x^*)\lambda^* = 0 \qquad (2)$$
$$G(x^*)\lambda^* = 0 \qquad (3)$$
$$\lambda^* \ge 0 \qquad (4)$$
$$g(x^*) \le 0, \qquad (5)$$

where $G(x)$ is a diagonal matrix with $G_{ii}(x) \equiv g_i(x)$.


We say that $x$ such that $g(x) \le 0$ is a Primal Feasible Point, and $\lambda \ge 0$ a Dual Feasible Point. Given an initial feasible pair $(x^0, \lambda^0)$, FDIPA finds KKT points by solving iteratively the nonlinear system of Eqs. 2 and 3 in $(x, \lambda)$, in such a way that all the iterates are primal and dual feasible. Therefore, convergence to feasible points is obtained.

A Newton-like iteration to solve the nonlinear system of Eqs. 2 and 3 in $(x, \lambda)$ can be stated as

$$\begin{bmatrix} S^k & \nabla g(x^k) \\ \Lambda^k \nabla g(x^k)^T & G(x^k) \end{bmatrix} \begin{bmatrix} x^{k+1} - x^k \\ \lambda_\alpha^{k+1} - \lambda^k \end{bmatrix} = - \begin{bmatrix} \nabla f(x^k) + \nabla g(x^k)\lambda^k \\ G(x^k)\lambda^k \end{bmatrix} \qquad (6)$$

where $(x^k, \lambda^k)$ is the starting point of the iteration, $(x^{k+1}, \lambda_\alpha^{k+1})$ is a new estimate, and $\Lambda$ is a diagonal matrix with $\Lambda_{ii} \equiv \lambda_i$. In the case when $S^k \equiv \nabla^2 f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla^2 g_i(x^k)$, Eq. 6 is a Newton iteration. However, $S^k$ can be a quasi-Newton approximation or even the identity matrix. FDIPA requires $S^k$ symmetric and positive definite. Calling $d_\alpha^k = x^{k+1} - x^k$, we obtain the following linear system in $(d_\alpha^k, \lambda_\alpha^{k+1})$:

$$S^k d_\alpha^k + \nabla g(x^k)\lambda_\alpha^{k+1} = -\nabla f(x^k) \qquad (7)$$
$$\Lambda^k \nabla g(x^k)^T d_\alpha^k + G(x^k)\lambda_\alpha^{k+1} = 0. \qquad (8)$$

It is easy to prove that $d_\alpha^k$ is a descent direction of the objective function (Herskovits 1998). However, $d_\alpha^k$ cannot be employed as a search direction, since it is not necessarily a feasible direction. In effect, in the case when $g_l(x^k) = 0$ it follows from Eq. 8 that $\nabla g_l(x^k)^T d_\alpha^k = 0$.

To obtain a feasible direction, the following perturbed linear system with unknowns $d^k$ and $\lambda^{k+1}$ is defined by adding the negative vector $-\rho^k \lambda^k$ to the right side of Eq. 8, with $\rho^k > 0$:

$$S^k d^k + \nabla g(x^k)\lambda^{k+1} = -\nabla f(x^k)$$
$$\Lambda^k \nabla g(x^k)^T d^k + G(x^k)\lambda^{k+1} = -\rho^k \lambda^k.$$

The addition of a negative vector in the right hand side of Eq. 8 produces the effect of deflecting $d_\alpha^k$ into the feasible region, where the deflection is proportional to $\rho^k$. As the deflection of $d_\alpha^k$ grows with $\rho^k$, it is necessary to bound $\rho^k$ in a way that ensures $d^k$ remains a descent direction. Since $(d_\alpha^k)^T \nabla f(x^k) < 0$, we can get these bounds by imposing

$$(d^k)^T \nabla f(x^k) \le \xi\, (d_\alpha^k)^T \nabla f(x^k), \qquad (9)$$

with $\xi \in (0, 1)$, which implies $(d^k)^T \nabla f(x^k) < 0$. Thus, $d^k$ is a feasible descent direction. To obtain the upper bound on $\rho^k$, the following auxiliary linear system in $(d_\beta^k, \lambda_\beta^k)$ is solved:

$$S^k d_\beta^k + \nabla g(x^k)\lambda_\beta^{k+1} = 0$$
$$\Lambda^k \nabla g(x^k)^T d_\beta^k + G(x^k)\lambda_\beta^{k+1} = -\lambda^k.$$

Now defining $d^k = d_\alpha^k + \rho^k d_\beta^k$ and substituting in Eq. 9, it follows that $d^k$ is a descent direction for any $\rho^k > 0$ in the case when $(d_\beta^k)^T \nabla f(x^k) \le 0$. Otherwise, the following condition is required:

$$\rho^k \le (\xi - 1)\,\frac{(d_\alpha^k)^T \nabla f(x^k)}{(d_\beta^k)^T \nabla f(x^k)}.$$

In (Herskovits 1998), $\rho$ is defined as follows: If $(d_\beta^k)^T \nabla f(x^k) \le 0$, then $\rho^k = \varphi \|d_\alpha^k\|^2$. Otherwise,

$$\rho^k = \min\left\{ \varphi \|d_\alpha^k\|^2,\ (\xi - 1)\,\frac{(d_\alpha^k)^T \nabla f(x^k)}{(d_\beta^k)^T \nabla f(x^k)} \right\},$$

where $\varphi > 0$ is a given parameter.
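The direction computation therefore amounts to two linear solves with one and the same coefficient matrix. The following Python sketch is our illustrative reading of the systems above, not the authors' code; for a strictly feasible pair ($g(x) < 0$, $\lambda > 0$) the coefficient matrix is nonsingular (cf. Lemma 1 in Section 4):

```python
import numpy as np

def fdipa_direction(S, g, Dg, lam, grad_f, xi=0.7, phi=1.0):
    """FDIPA search direction: solve the pair (7)-(8) and the auxiliary
    system with the same matrix, then deflect d_alpha by rho * d_beta."""
    n, m = S.shape[0], len(g)
    M = np.block([[S, Dg],                      # Dg: n x m, columns grad g_i
                  [np.diag(lam) @ Dg.T, np.diag(g)]])
    sol_a = np.linalg.solve(M, np.concatenate([-grad_f, np.zeros(m)]))
    sol_b = np.linalg.solve(M, np.concatenate([np.zeros(n), -lam]))
    d_a, d_b = sol_a[:n], sol_b[:n]
    # Choose rho so that d = d_a + rho * d_b keeps the descent property (9)
    rho = phi * (d_a @ d_a)
    if d_b @ grad_f > 0.0:
        rho = min(rho, (xi - 1.0) * (d_a @ grad_f) / (d_b @ grad_f))
    return d_a + rho * d_b, sol_a[n:] + rho * sol_b[n:]
```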

A new feasible primal point with a lower objective value is obtained through an inexact line search along $d^k$. FDIPA has global convergence in the primal space for any $S^{k+1}$ positive definite and $\lambda^{k+1} > 0$.

3 Description of the present technique for nonsmooth optimization

We employ ideas of the cutting planes method (Kelley 1960) to build piecewise linear approximations of the constraint of Problem EP. Let $g_i^k(x, z)$ be the current set of cutting planes, such that

$$g_i^k(x, z) = f(y_i^k) + (s_i^k)^T(x - y_i^k) - z, \qquad i = 0, 1, \ldots, \ell,$$

where $y_i^k \in \mathbb{R}^n$ are auxiliary points, $s_i^k \in \partial f(y_i^k)$ are subgradients at those points and $\ell$ represents the number of current cutting planes. We call

$$g^k(x, z) \equiv [g_0^k(x, z), \ldots, g_\ell^k(x, z)]^T, \qquad g^k : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^{\ell+1},$$

and the current auxiliary problem

$$\min_{(x,z) \in \mathbb{R}^{n+1}} \phi(x, z) = z \quad \text{s.t.} \quad g^k(x, z) \le 0. \qquad \text{(AP)}$$

Instead of solving this problem, the present algorithm only computes with FDIPA a search direction $d_\ell^k$ of Problem AP. We note that $d_\ell^k$ can be computed even if Problem AP does not have a finite minimum.

The largest feasible step is

$$\bar{t} = \max\left\{t \mid g^k\big((x^k, z^k) + t d_\ell^k\big) \le 0\right\}.$$

Since $\bar{t}$ is not always finite, it is taken

$$t_\ell^k := \min\{t_{max}/\nu,\ \bar{t}\},$$

where $\nu \in (0, 1)$. Then,

$$(x_{\ell+1}^k, z_{\ell+1}^k) = (x^k, z^k) + \nu\, t_\ell^k d_\ell^k \qquad (10)$$

is feasible with respect to Problem AP. Next we compute the following auxiliary point:

$$(y_{\ell+1}^k, w_{\ell+1}^k) = (x^k, z^k) + t_\ell^k d_\ell^k. \qquad (11)$$

If $(y_{\ell+1}^k, w_{\ell+1}^k)$ is strictly feasible with respect to Problem EP, that is, if $w_{\ell+1}^k > f(y_{\ell+1}^k)$, we consider that the current set of cutting planes is a good local approximation of $f(x)$ in a neighborhood of $x^k$. Then, we say that the step is serious and set the new iterate $(x^{k+1}, z^{k+1}) = (y_{\ell+1}^k, w_{\ell+1}^k)$. Otherwise, a new cutting plane $g_{\ell+1}^k(x, z)$ is added to the approximated problem and the procedure is repeated until a serious step is obtained. We are now in position to state our algorithm.

3.1 Nonsmooth Feasible Direction Algorithm - NFDA

Parameters. $\xi, \nu \in (0, 1)$, $\varphi > 0$, $t_{max} > 0$.

Data. $x^0$, $a > z^0 > f(x^0)$, $\lambda_0^0 \in \mathbb{R}_+$, $B^0 \in \mathbb{R}^{(n+1) \times (n+1)}$ symmetric and positive definite. Set $y_0^0 = x^0$, $k = 0$ and $\ell = 0$.

Step 1) Compute $s_\ell^k \in \partial f(y_\ell^k)$. A new cutting plane at the current iterate $(x^k, z^k)$ is defined by

$$g_\ell^k(x, z) = f(y_\ell^k) + (s_\ell^k)^T(x - y_\ell^k) - z.$$

Consider now

$$\nabla g_\ell^k(x, z) = \begin{bmatrix} s_\ell^k \\ -1 \end{bmatrix} \in \mathbb{R}^{n+1},$$

define

$$g^k(x, z) = [g_0^k(x, z), \ldots, g_\ell^k(x, z)]^T \in \mathbb{R}^{\ell+1}$$

and

$$\nabla g^k(x, z) = [\nabla g_0^k(x, z), \ldots, \nabla g_\ell^k(x, z)] \in \mathbb{R}^{(n+1) \times (\ell+1)}.$$

Step 2) Feasible descent direction $d_\ell^k$ for Problem AP.

i) Compute $d_\alpha^k$ and $\lambda_\alpha^k$, solving

$$B^k d_\alpha^k + \nabla g^k(x^k, z^k)\lambda_\alpha^k = -\nabla\phi(x, z) \qquad (12)$$
$$\Lambda^k [\nabla g^k(x^k, z^k)]^T d_\alpha^k + G^k(x^k, z^k)\lambda_\alpha^k = 0. \qquad (13)$$

Compute $d_\beta^k$ and $\lambda_\beta^k$, solving

$$B^k d_\beta^k + \nabla g^k(x^k, z^k)\lambda_\beta^k = 0 \qquad (14)$$
$$\Lambda^k [\nabla g^k(x^k, z^k)]^T d_\beta^k + G^k(x^k, z^k)\lambda_\beta^k = -\lambda^k, \qquad (15)$$

where

$$\lambda^k := (\lambda_0^k, \ldots, \lambda_\ell^k), \quad \lambda_\alpha^k := (\lambda_{\alpha 0}^k, \ldots, \lambda_{\alpha \ell}^k), \quad \lambda_\beta^k := (\lambda_{\beta 0}^k, \ldots, \lambda_{\beta \ell}^k), \quad \Lambda^k := \mathrm{diag}(\lambda_0^k, \ldots, \lambda_\ell^k)$$

and

$$G^k(x, z) := \mathrm{diag}(g_0^k(x, z), \ldots, g_\ell^k(x, z)).$$

ii) If $(d_\beta^k)^T \nabla\phi(x, z) \le 0$, set $\rho = \varphi \|d_\alpha^k\|^2$. Otherwise, set

$$\rho = \min\left\{ \varphi \|d_\alpha^k\|^2,\ (\xi - 1)\,\frac{(d_\alpha^k)^T \nabla\phi(x, z)}{(d_\beta^k)^T \nabla\phi(x, z)} \right\}.$$

iii) Compute the feasible descent direction

$$d_\ell^k = d_\alpha^k + \rho\, d_\beta^k.$$

Step 3) Compute the step length

$$t_\ell^k = \min\left\{ t_{max}/\nu,\ \max\{t \mid g^k((x^k, z^k) + t d_\ell^k) \le 0\} \right\}. \qquad (16)$$

Step 4) Compute a new point.

i) Set $(y_{\ell+1}^k, w_{\ell+1}^k) = (x^k, z^k) + t_\ell^k d_\ell^k$.
ii) If $w_{\ell+1}^k \le f(y_{\ell+1}^k)$, we have a null step. Then, define $\lambda_{\ell+1}^k > 0$ and set $\ell := \ell + 1$.
iii) Otherwise, we have a serious step. Then, call $\bar{d}^k = d_\ell^k$, $\bar{d}_\alpha^k = d_\alpha^k$, $\bar{d}_\beta^k = d_\beta^k$, $\bar{\rho}^k = \rho^k$, $\bar{\lambda}^k = \lambda^k$ and $\bar{t}^k = t_\ell^k$. Take $(x^{k+1}, z^{k+1}) = (y_{\ell+1}^k, w_{\ell+1}^k)$, define $\lambda_0^{k+1} > 0$, $B^{k+1}$ symmetric and positive definite, and set $k = k + 1$, $\ell = 0$, $y_0^k = x^k$.

Go to Step 1).
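Below is a compact Python sketch of the whole loop with $B = I$. It is our illustration, not the authors' MATLAB implementation: the multipliers are simply reset to 1 (instead of the update rule 23 of Section 5) and the bundle is discarded after a serious step rather than keeping the last $5n$ planes.

```python
import numpy as np

def nfda(f, subgrad, x0, z0, xi=0.75, nu=0.1, phi=0.7, tmax=1.0,
         tol=1e-4, max_iter=500):
    """Nonsmooth Feasible Direction Algorithm sketch for min f(x), f convex.
    Requires z0 > f(x0). The bundle stores y_i, f(y_i) and s_i in the
    subdifferential of f at y_i."""
    n = len(x0)
    x, z = np.asarray(x0, float), float(z0)
    Y, F, S, lam = [x.copy()], [f(x)], [subgrad(x)], [1.0]
    for _ in range(max_iter):
        # Cutting planes at (x, z): g_i = f(y_i) + s_i^T (x - y_i) - z < 0
        g = np.array([F[i] + S[i] @ (x - Y[i]) - z for i in range(len(Y))])
        Dg = np.vstack([np.append(S[i], -1.0) for i in range(len(Y))]).T
        grad_phi = np.zeros(n + 1); grad_phi[-1] = 1.0
        # FDIPA direction, Eqs. 12-15, with B = I
        M = np.block([[np.eye(n + 1), Dg],
                      [np.diag(lam) @ Dg.T, np.diag(g)]])
        sa = np.linalg.solve(M, np.concatenate([-grad_phi, np.zeros(len(g))]))
        sb = np.linalg.solve(M, np.concatenate([np.zeros(n + 1), -np.array(lam)]))
        d_a, d_b = sa[:n + 1], sb[:n + 1]
        if np.linalg.norm(d_a) <= tol:        # stopping rule (see Section 4)
            return x, z
        rho = phi * (d_a @ d_a)
        if d_b @ grad_phi > 0.0:
            rho = min(rho, (xi - 1.0) * (d_a @ grad_phi) / (d_b @ grad_phi))
        d = d_a + rho * d_b
        # Step length (16): largest t keeping every linear cut nonpositive
        t, slopes = tmax / nu, Dg.T @ d
        for gi, si in zip(g, slopes):
            if si > 0.0:
                t = min(t, -gi / si)
        y, w = x + t * d[:n], z + t * d[-1]
        fy = f(y)
        if w > fy:                            # serious step: new iterate
            x, z = y.copy(), w
            Y, F, S, lam = [x.copy()], [fy], [subgrad(x)], [1.0]
        else:                                 # null step: add a cutting plane
            Y.append(y); F.append(fy); S.append(subgrad(y)); lam.append(1.0)
    return x, z
```

As a quick check, the sketch can be run with the piecewise-linear oracle of Section 1 supplying `f` and `subgrad`.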

The new values of $\lambda$ and $B$ must satisfy the following assumptions:

Assumption 5 There exist positive numbers $\sigma_1$ and $\sigma_2$ such that $\sigma_1 \|v\|^2 \le v^T B v \le \sigma_2 \|v\|^2$ for any $v \in \mathbb{R}^{n+1}$.

Assumption 6 There exist positive numbers $\lambda^I$, $\lambda^S$ such that $\lambda^I \le \lambda_i \le \lambda^S$, for $i = 0, 1, \ldots, \ell$.

Global convergence of the present algorithm will be proved for any updating rule for $\lambda$ and $B$ satisfying Assumptions 5 and 6. In the numerical section we shall give a practical choice for the updating rules.

4 Convergence analysis

In this section, we prove global convergence of the present algorithm. We first show that the search direction $d_\ell^k$ is a descent direction for $\phi$. Then, we prove that the number of null steps at each iteration is finite. That is, since $(x^k, z^k) \in \mathrm{int}(\mathrm{epi} f)$, after a finite number of subiterations we obtain $(x^{k+1}, z^{k+1}) \in \mathrm{int}(\mathrm{epi} f)$. In consequence, the sequence $\{(x^k, z^k)\}_{k \in \mathbb{N}}$ is bounded and belongs to the interior of the epigraph of $f$. In what follows, it is proved that $\bar{d}^k$ converges to zero when $k \to \infty$. This fact is employed to establish a stopping criterion for the present algorithm. Finally, we show that the optimality condition $0 \in \partial f(x^*)$ is satisfied for the accumulation points $(x^*, z^*)$ of the sequence $\{(x^k, z^k)\}_{k \in \mathbb{N}}$. Thus, any accumulation point of the sequence $\{(x^k, z^k)\}_{k \in \mathbb{N}}$ is a solution of Problem P. In some cases, indices will be omitted to simplify the notation.

We remark that the solutions $d_\alpha$, $\lambda_\alpha$, $d_\beta$ and $\lambda_\beta$ of the linear systems 12, 13, 14 and 15 are unique. This fact is a consequence of Assumptions 5 and 6 and a lemma proved in (Panier et al. 1988; Tits et al. 2003) and stated as follows:

Lemma 1 Let the vectors in the set $\{\nabla g_i(x^k, z^k) \mid g_i(x^k, z^k) = 0\}$ be linearly independent. Then, for any vector $(x, z) \in \mathrm{int}(\mathrm{epi} f)$ and any positive definite matrix $B \in \mathbb{R}^{(n+1) \times (n+1)}$, the matrix

$$\begin{bmatrix} B & \nabla g(x, z) \\ \Lambda [\nabla g(x, z)]^T & G(x, z) \end{bmatrix}$$

is nonsingular.

It follows that $d_\alpha$, $d_\beta$, $\lambda_\alpha$ and $\lambda_\beta$ are bounded in $\Omega_a$. Since $\rho$ is bounded above, we also have that $\lambda = \lambda_\alpha + \rho\lambda_\beta$ is bounded.

Lemma 2 The vector $d_\alpha$ satisfies

$$d_\alpha^T \nabla\phi(x, z) \le -d_\alpha^T B d_\alpha.$$

Proof It follows from Eq. 12 that

$$d_\alpha^T B d_\alpha + d_\alpha^T \nabla g(x, z)\lambda_\alpha = -d_\alpha^T \nabla\phi(x, z), \qquad (17)$$

and from Eq. 13 that

$$d_\alpha^T \nabla g(x, z) = -\lambda_\alpha^T \Lambda^{-1} G(x, z). \qquad (18)$$

Replacing Eq. 18 in Eq. 17 we have

$$-d_\alpha^T \nabla\phi(x, z) = d_\alpha^T B d_\alpha - \lambda_\alpha^T \Lambda^{-1} G(x, z)\lambda_\alpha.$$

Since $\Lambda^{-1}$ is positive definite, in consequence of Assumption 6, and $G(x, z)$ is taken negative definite in the algorithm, the result of the lemma is obtained.

As a consequence, we also have that the search direction $d$ is descent for the objective function of Problem AP.

Proposition 1 The direction $d = d_\alpha + \rho d_\beta$ is a descent direction for the objective function of Problem AP at the point $(x, z)$.

Proof Since $d = d_\alpha + \rho d_\beta$, we have

$$d^T \nabla\phi(x, z) = d_\alpha^T \nabla\phi(x, z) + \rho\, d_\beta^T \nabla\phi(x, z).$$

In the case when $d_\beta^T \nabla\phi(x, z) > 0$, we have

$$\rho \le (\xi - 1)\,\frac{d_\alpha^T \nabla\phi(x, z)}{d_\beta^T \nabla\phi(x, z)}.$$

Therefore,

$$d^T \nabla\phi(x, z) \le d_\alpha^T \nabla\phi(x, z) + (\xi - 1)\, d_\alpha^T \nabla\phi(x, z) = \xi\, d_\alpha^T \nabla\phi(x, z) < 0.$$

On the other hand, when $d_\beta^T \nabla\phi(x, z) \le 0$, it follows from Lemma 2 that $d^T \nabla\phi(x, z) \le d_\alpha^T \nabla\phi(x, z) < 0$ for any $\rho > 0$.

As a consequence of the previous proposition, since $z^{k+1} = z^k + \bar{t}^k \bar{d}_z^k$, we have that $z^{k+1} < z^k$ for all $k$. Thus, the sequence $\{(x^k, z^k)\}_{k \in \mathbb{N}}$ generated by the present algorithm belongs to the bounded set $\mathrm{int}(\mathrm{epi} f) \cap \{(x, z) \in \mathbb{R}^{n+1} \mid z < z^0\}$.

Lemma 3 Let $X \subset \mathbb{R}^n$ be a convex set. Consider $x_0 \in \mathrm{int}(X)$ and $\bar{x} \in X$. Let $\{x^k\}_{k \in \mathbb{N}} \subset \mathbb{R}^n \setminus X$ be a sequence such that $x^k \to \bar{x}$. Let $\{\hat{x}^k\}_{k \in \mathbb{N}} \subset \mathbb{R}^n$ be a sequence defined by $\hat{x}^k = x_0 + \theta(x^k - x_0)$ with $\theta \in (0, 1)$. Then there exists $k_0 \in \mathbb{N}$ such that $\hat{x}^k \in \mathrm{int}(X)$, $\forall k > k_0$.

Proof We have

$$\hat{x}^k = x_0 + \theta(x^k - x_0) \to x_0 + \theta(\bar{x} - x_0) = \hat{x}.$$

Since the segment $[x_0, \bar{x}] \subset X$ and $\theta \in (0, 1)$, we have that $\hat{x} \in \mathrm{int}(X)$ and, in consequence, there exists $\varepsilon > 0$ such that $B(\hat{x}, \varepsilon) \subset \mathrm{int}(X)$. Since $\hat{x}^k \to \hat{x}$, there exists $k_0 \in \mathbb{N}$ such that $\hat{x}^k \in B(\hat{x}, \varepsilon) \subset \mathrm{int}(X)$, $\forall k > k_0$.

Remark 1 The sequence $\{(x_\ell^k, z_\ell^k)\}_{\ell \in \mathbb{N}}$ is in a compact set. In fact, there exists $r > 0$ such that $\Omega_a$ is included in the ball centered at the origin with radius $r$. Then, the sequence is in the ball centered at the origin with radius $r + t_{max}$. For the same reasons, the sequence $\{(y_\ell^k, w_\ell^k)\}_{\ell \in \mathbb{N}}$ is in the same compact set.

Proposition 2 Let $(\bar{x}^k, \bar{z}^k)$ be an accumulation point of the sequence $\{(x_\ell^k, z_\ell^k)\}_{\ell \in \mathbb{N}}$ defined in Eq. 10 for $k$ fixed. Then $\bar{z}^k = f(\bar{x}^k)$.

Proof By definition of the sequence $\{(x_\ell^k, z_\ell^k)\}_{\ell \in \mathbb{N}}$, we always have $\bar{z}^k \le f(\bar{x}^k)$. Suppose now that $\bar{z}^k < f(\bar{x}^k)$, and consider a convergent subsequence $\{(x_\ell^k, z_\ell^k)\}_{\ell \in N'} \to (\bar{x}^k, \bar{z}^k)$ such that $\{s_\ell^k\}_{\ell \in N'} \to \bar{s}^k$, where $N' \subset \mathbb{N}$. This subsequence exists because $\{(x_\ell^k, z_\ell^k)\}_{\ell \in \mathbb{N}}$ as well as $\{s_\ell^k\}_{\ell \in \mathbb{N}}$ are in compact sets. The corresponding cutting plane is represented by $f(x_\ell^k) + (s_\ell^k)^T(x - x_\ell^k) - z = 0$. Then, $z_\ell(\bar{x}^k) = f(x_\ell^k) + (s_\ell^k)^T(\bar{x}^k - x_\ell^k)$ is the vertical projection of $(\bar{x}^k, \bar{z}^k)$ on the cutting plane. Taking the limit for $\ell \to \infty$, we get $z_\ell(\bar{x}^k) \to f(\bar{x}^k)$. Then, for $\ell \in N'$ large enough, $(\bar{x}^k, \bar{z}^k)$ is under the $\ell$-th cutting plane. Thus we arrive at a contradiction, and $\bar{z}^k = f(\bar{x}^k)$.

Proposition 3 Let $(x^k, z^k) \in \mathrm{int}(\mathrm{epi} f)$. The next iterate $(x^{k+1}, z^{k+1}) \in \mathrm{int}(\mathrm{epi} f)$ is obtained after a finite number of subiterations.

Proof Our proof starts with the observation that in Step 4) of the algorithm we have $(x^{k+1}, z^{k+1}) = (y_{\ell+1}^k, w_{\ell+1}^k)$ only if $w_{\ell+1}^k > f(y_{\ell+1}^k)$ (i.e., if we have a serious step); consequently, we have that $(x^{k+1}, z^{k+1}) \in \mathrm{int}(\mathrm{epi} f)$.

The sequence $\{(x_\ell^k, z_\ell^k)\}_{\ell \in \mathbb{N}}$ is bounded by construction and, by Proposition 2, it has an accumulation point $(\bar{x}^k, \bar{z}^k)$ such that $\bar{z}^k = f(\bar{x}^k)$. Considering now the sequence defined by Eq. 11,

$$(y_\ell^k, w_\ell^k) = (x^k, z^k) + \nu^{-1}\left((x_\ell^k, z_\ell^k) - (x^k, z^k)\right), \quad \nu \in (0, 1),$$

it follows from Lemma 3 that there exists $\ell_0 \in \mathbb{N}$ such that $(y_\ell^k, w_\ell^k) \in \mathrm{int}(\mathrm{epi} f)$ for some $\ell > \ell_0$. But this is the condition for a serious step, and the proof is complete.

Lemma 4 There exists $\tau > 0$ such that $g_\ell((x, z) + td) \le 0$, $\forall t \in [0, \tau]$, for any $(x, z) \in \mathrm{int}(\mathrm{epi} f)$ and any direction $d$ given by the algorithm.

Proof Let us denote by $b$ a vector such that $b_i = s_i^T y_i - f(y_i)$ for all $i = 0, 1, \ldots, \ell$. Then $g((x, z) + td) = [\nabla g(x, z)]^T((x, z) + td) - b$, since

$$g_i(x, z) = f(y_i) + s_i^T(x - y_i) - z = (s_i^T, -1)(x, z) - b_i = (\nabla g_i(x, z))^T(x, z) - b_i$$

for all $i = 0, 1, \ldots, \ell$. The step length $t$ is defined in Eq. 16 in Step 3) of the algorithm. Since the constraints of Problem AP are linear, to satisfy the line search condition the following inequalities must hold:

$$g_i((x, z) + t_i d) = (\nabla g_i(x, z))^T((x, z) + t_i d) - b_i = g_i(x, z) + t_i (\nabla g_i(x, z))^T d \le 0 \qquad (19)$$

for all $i = 0, 1, \ldots, \ell$. If $(\nabla g_i(x, z))^T d \le 0$, the inequality is satisfied for all $t_i > 0$. Otherwise, it follows from iii) in Step 2) that

$$(\nabla g_i(x, z))^T d = (\nabla g_i(x, z))^T(d_\alpha + \rho d_\beta).$$

It follows from Eqs. 13 and 15 that

$$(\nabla g_i(x, z))^T d_\alpha = -g_i(x, z)\,\frac{\lambda_{\alpha i}}{\lambda_i} \quad \text{and} \quad (\nabla g_i(x, z))^T d_\beta = -1 - g_i(x, z)\,\frac{\lambda_{\beta i}}{\lambda_i}.$$

Then, Eq. 19 is equivalent to

$$g_i(x, z)\left(1 - t_i\,\frac{\omega_i}{\lambda_i}\right) - t_i \rho \le 0,$$

where $\omega = \lambda_\alpha + \rho\lambda_\beta$. Since $t_i \rho > 0$, the last inequality is true when $t_i \le \lambda_i/\omega_i$.

By Assumption 6, $\lambda > 0$ is bounded, and we also have that $\omega$ is bounded from above. Thus, there exists $0 < \tau < t_{max}/\nu$ such that $\tau < \lambda_i/\omega_i$ for all $i = 0, 1, \ldots, \ell$. Therefore, for all $t \in [0, \tau]$ the line search condition $g_i((x, z) + td) \le 0$ is satisfied for all $i = 0, 1, \ldots, \ell$.

Proposition 4 Let $\bar{d}$ be an accumulation point of the sequence $\{\bar{d}^k\}_{k \in \mathbb{N}}$. Then $\bar{d} = 0$.

Proof From Steps 3) and 4) of the algorithm, we have

$$(x^{k+1}, z^{k+1}) = (x^k, z^k) + \bar{t}^k(\bar{d}_x^k, \bar{d}_z^k),$$

thus

$$z^{k+1} = z^k + \bar{t}^k \bar{d}_z^k \qquad (20)$$

and the sequence $\{z^k\}_{k \in \mathbb{N}}$ is decreasing and belongs to the compact set $\Omega_a$. Let us denote $z^* = \lim_{k \to \infty} z^k$ and take $N' \subset \mathbb{N}$ such that $\{\bar{t}^k\}_{k \in N'} \to t^*$ and $\{\bar{d}^k\}_{k \in N'} \to \bar{d}$. It follows from Lemma 4 that $t^* > 0$. Taking $k \to \infty$, $k \in N'$ in Eq. 20, we have $z^* = z^* + t^* \bar{d}_z$, thus $\bar{d}_z = 0$. From Proposition 1 it follows that

$$0 = \bar{d}_z = \bar{d}^T \nabla\phi(x, z) \le \xi\, \bar{d}_\alpha^T \nabla\phi(x, z) = \xi\, \bar{d}_{\alpha z} \le 0,$$

thus $\bar{d}_{\alpha z} = 0$. Further, by Lemma 2, we have

$$0 = \bar{d}_{\alpha z} = \bar{d}_\alpha^T \nabla\phi(x, z) \le -\bar{d}_\alpha^T B \bar{d}_\alpha \le 0,$$

thus $\bar{d}_\alpha = 0$, since $B$ is positive definite. Then

$$\bar{d}^k = \bar{d}_\alpha^k + \bar{\rho}^k \bar{d}_\beta^k \to 0 \quad \text{when } k \to \infty,\ k \in N',$$

since $\bar{\rho}^k \to 0$ if $\bar{d}_\alpha^k \to 0$. In consequence, the iterations can be stopped when $\|d_\alpha^k\|$ is small enough.

Proposition 5 For any accumulation point $(x^*, z^*)$ of the sequence $\{(x^k, z^k)\}_{k \in \mathbb{N}}$, we have $0 \in \partial f(x^*)$.

Proof Consider

$$Y := \{y_0^0, y_1^0, \ldots, y_{\ell_0}^0,\ y_0^1, y_1^1, \ldots, y_{\ell_1}^1,\ \ldots,\ y_0^k, y_1^k, \ldots, y_{\ell_k}^k, \ldots\},$$

the sequence of all the points obtained by the subiterations of the algorithm. As this sequence is bounded, we can find convergent subsequences. In particular, since $x^{k+1} = y_{\ell_k}^k$, we can assert that there is $\tilde{Y} \subset Y$ that converges to $x^*$. We call $\tilde{Y}^k \equiv \tilde{Y} \cap \{y_0^k, y_1^k, \ldots, y_{\ell_k}^k\}$. It follows from Eq. 12 in Step 2) of the algorithm that

$$\lim_{k \to \infty} \sum_{i=0}^{\ell_k} \lambda_i^k s_i^k = 0 \quad \text{and} \quad \lim_{k \to \infty} \sum_{i=0}^{\ell_k} \lambda_i^k = 1. \qquad (21)$$

We define $I^k = \{i \mid y_i^k \in \tilde{Y}^k\}$. In consequence, $\lambda_i^k \to 0$ for $i \notin I^k$ and $k$ large enough. Then,

$$\lim_{k \to \infty} \sum_{i \in I^k} \lambda_i^k s_i^k = 0 \quad \text{and} \quad \lim_{k \to \infty} \sum_{i \in I^k} \lambda_i^k = 1. \qquad (22)$$

Consider the auxiliary point $y_i^k$ and the subgradient $s_i^k \in \partial f(y_i^k)$ such that $i \in I^k$. By definition of the subdifferential (Clarke 1983), we can write

$$f(x) \ge f(y_i^k) + (s_i^k)^T(x - y_i^k) = f(x^k) + (s_i^k)^T(x - x^k) - e_i^k,$$

where $e_i^k = f(x^k) - f(y_i^k) - (s_i^k)^T(x^k - y_i^k)$. Since $x^k \to x^*$ and $y_i^k \to x^*$ for $i \in I^k$, we have that $\lim_{k \to \infty} e_i^k = 0$. Thus, $s_i^k \in \partial_{e_i^k} f(x^k)$, where $\partial_e f(x)$ represents the $e$-subdifferential of $f$ (Clarke 1983). Then, we can write

$$\sum_{i \in I^k} \lambda_i^k f(x) \ge \sum_{i \in I^k} \lambda_i^k f(x^k) + \left(\sum_{i \in I^k} \lambda_i^k s_i^k\right)^T (x - x^k) - \sum_{i \in I^k} \lambda_i^k e_i^k,$$

and

$$f(x) \ge f(x^k) + (\tilde{s}^k)^T(x - x^k) - \tilde{e}^k,$$

where

$$\tilde{s}^k = \frac{\sum_{i \in I^k} \lambda_i^k s_i^k}{\sum_{i \in I^k} \lambda_i^k} \quad \text{and} \quad \tilde{e}^k = \frac{\sum_{i \in I^k} \lambda_i^k e_i^k}{\sum_{i \in I^k} \lambda_i^k}.$$

It follows from the previous results that $\lim_{k \to \infty} \tilde{e}^k = 0$ and, from (22), that $\lim_{k \to \infty} \tilde{s}^k = 0$. Considering $\tilde{s}^k \in \partial_{\tilde{e}^k} f(x^k)$ and taking the limit, we obtain $0 \in \partial f(x^*)$.

With this result, the proof of convergence is complete.

5 Numerical results

In this section we present the numerical results obtained with the NFDA algorithm, employing a set of fixed default parameters. The algorithm was implemented in MATLAB and the numerical experiments were performed on a microcomputer with 2 GB of RAM.

Section 5.1 reports the collection of test problems and solvers employed to study the performance of NFDA. In Section 5.2 we show a visual representation of the results by using performance profiles (Dolan and Moré 2002).

5.1 Test problems and solvers

We employed the following set of test problems: CB2, CB3, DEM, QL, LQ, Mifflin1, Rosen-Suzuki, Shor, Maxquad, Maxq, Maxl, Goffin and TR48, described in detail in Lukšan and Vlček (2000) or in Mäkelä and Neittaanmäki (1992).

The default parameters for the present method are:

$$B = I, \quad \xi = 0.75, \quad \nu = 0.1, \quad \varphi = 0.7, \quad \text{and} \quad t_{max} = 1.$$

The iterations stop when $\|d_\alpha^k\| \le 10^{-4}$.

The Lagrange multipliers vector is updated according to the following formulation: set, for $i = 0, 1, \ldots, \ell$,

$$\lambda_i := \max\left\{\lambda_i;\ \epsilon\|d_\alpha\|^2\right\}, \quad \epsilon > 0. \qquad (23)$$

With the purpose of improving the numerical results, at each iteration $k$ we add to the set of cutting planes defined by the algorithm the last $5n$ cutting planes computed in the previous iterations. That is, for a subiteration $\ell$ the algorithm works with up to $\ell + 5n$ cutting planes.

We shall compare our numerical results with the following classic solvers:

M1FC1: ε-steepest descent,
BTC: Bundle Trust,
PB: Proximal Bundle,
SBM: Standard Bundle.

These methods can be found in (Lemaréchal and Bancora Imbert 1985), (Schramm and Zowe 1992), (Mäkelä and Neittaanmäki 1992) and (Lukšan and Vlček 2000). The results are summarized in Tables 1, 2 and 3, where NI represents the number of iterations for each method and NF the number of function and subgradient computations. The optimal and the computed objective values are denoted f̄ and f̃, respectively.

Table 1 Numerical results: number of iterations (NI); n is the number of variables

Problem    n    SBM   M1FC1   BTC   PB    NFDA
CB2        2    31    11      13    15    13
CB3        2    14    12      13    15    24
DEM        2    17    10      9     7     22
QL         2    13    12      12    17    27
LQ         2    11    16      10    14    15
Mifflin1   2    66    143     49    22    21
Rosen      4    43    22      22    40    49
Shor       5    27    21      29    26    51
Maxquad    10   74    29      45    41    115
Maxq       20   150   144     125   158   272
Maxl       20   39    138     74    34    70
TR48       48   245   163     165   152   317
Goffin     50   52    72      51    51    79

Table 2 Numerical results: number of function and subgradient evaluations (NF); n is the number of variables

Problem    n    SBM   M1FC1   BTC   PB    NFDA
CB2        2    33    31      16    16    14
CB3        2    16    44      21    16    25
DEM        2    19    33      13    8     22
QL         2    15    30      17    18    28
LQ         2    12    52      11    15    16
Mifflin1   2    68    281     74    23    22
Rosen      4    45    61      32    41    50
Shor       5    29    71      30    27    52
Maxquad    10   75    69      56    42    116
Maxq       20   151   207     128   159   273
Maxl       20   40    213     84    35    71
TR48       48   251   284     179   153   318
Goffin     50   53    94      53    52    80

Table 3 Numerical results: optimal value f̄ and computed values f̃ for each solver

Problem    n    f̄              SBM            M1FC1          BTC            PB             NFDA
CB2        2     1.95222e+0     1.95222e+0     1.95225e+0     1.95222e+0     1.95222e+0     1.95220e+0
CB3        2     2.00000e+0     2.00000e+0     2.00141e+0     2.00000e+0     2.00000e+0     2.00010e+0
DEM        2    -3.00000e+0    -3.00000e+0    -3.00000e+0    -3.00000e+0    -3.00000e+0    -2.98630e+0
QL         2     7.20000e+0     7.20000e+0     7.20001e+0     7.20000e+0     7.20001e+0     7.20000e+0
LQ         2    -1.41421e+0    -1.41421e+0    -1.41421e+0    -1.41421e+0    -1.41421e+0    -1.41390e+0
Mifflin1   2    -1.00000e+0    -9.99990e-1    -9.99960e-1    -1.00000e+0    -1.00000e+0    -9.99800e-1
Rosen      4    -4.400000e+1   -4.399999e+1   -4.399998e+1   -4.399998e+1   -4.399994e+1   -4.399990e+1
Shor       5     2.260016e+1    2.260016e+1    2.260018e+1    2.260016e+1    2.260016e+1    2.260020e+1
Maxquad    10   -8.4140e-1     -8.4140e-1     -8.4140e-1     -8.4140e-1     -8.4140e-1     -8.4135e-1
Maxq       20    0.00000e+0     0.16712e-6     0.00000e+0     0.00000e+0     0.00000e+0     3.17800e-7
Maxl       20    0.00000e+0     0.12440e-12    0.00000e+0     0.00000e+0     0.00000e+0     2.83150e-4
TR48       48   -6.3856500e+5  -6.3853048e+5  -6.3362550e+5  -6.3856500e+5  -6.3856000e+5  -6.3856499e+5
Goffin     50    0.00000e+0     0.11665e-11    0.00000e+0     0.00000e+0     0.00000e+0     1.33720e-4

5.2 Performance profiles

In order to present a visual comparison of the solvers' behavior, we give the performance profiles as described in (Dolan and Moré 2002). These compare the performance of $n_s$ solvers of a set $S$ with respect to the solution of $n_p$ problems of a set $P$, using some performance measure, such as the number of iterations, the number of function computations or the CPU time. Let $t_{p,s}$ denote the performance measure required to solve problem $p$ by solver $s$. For each problem $p$ and solver $s$, the performance ratio is

$$r_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in S\}},$$

if a solution of problem $p$ was obtained by solver $s$. Otherwise, $r_{p,s} = r_M$, where $r_M$ is a large parameter. The distribution function $\rho_s : \mathbb{R} \to [0, 1]$ for the performance ratio $r_{p,s}$ represents the total performance of the solver $s$ on the set of test problems $P$. This function is defined by

$$\rho_s(\tau) = \frac{1}{n_p}\left|\{p \in P : r_{p,s} \le \tau\}\right|,$$

where $|\{\cdot\}|$ is the number of elements of the set. The value $\rho_s(1)$ indicates the probability of the method $s$ being the best method (among all those belonging to the set $S$) with respect to the performance measure $t_{p,s}$.
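In Python, the profile curves can be generated with a few lines (our sketch; `perf_profile` and its arguments are illustrative, not from the paper):

```python
import numpy as np

def perf_profile(T, r_M=1e3):
    """T[p, s]: performance measure of solver s on problem p (np.inf if failed).
    Returns rho(s, tau) = fraction of problems with ratio r_{p,s} <= tau."""
    T = np.asarray(T, float)
    best = np.nanmin(np.where(np.isfinite(T), T, np.nan), axis=1)
    R = np.where(np.isfinite(T), T / best[:, None], r_M)   # ratios r_{p,s}
    def rho(s, tau):
        return np.mean(R[:, s] <= tau)
    return rho

# Example: 3 problems, 2 solvers (iteration counts; inf = failure)
rho = perf_profile([[10, 12], [50, 40], [np.inf, 30]])
print(rho(0, 1.0), rho(1, 1.25))  # solver 0 is best on 1/3 of the problems
```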

Function and derivative computations are generally very expensive in terms of processing time for engineering applications. For this reason we consider the number of function evaluations as a measure of performance. Figures 1 and 2 show the performance profiles for these solvers using the measures mentioned.

Fig. 1 Performance profile: iteration counts

Fig. 2 Performance profile: number of function evaluations


6 Truss topology robust design via nonsmooth optimization

This section presents an application of the proposed algorithm to the topology design of robust trusses. A largely employed model for truss topology optimization considers structures submitted to a set of nodal loadings, which we call primary loadings, and looks for the volumes of the bars that minimize the structural compliance, see (Bendsøe 1995). The structural topology changes if the volumes of some of the bars are zero in the solution.

A truss can be considered robust if it is reasonably rigid when any set of small possible uncertain loads acts on it. The model considered here, in addition to the primary loadings, includes a set of secondary loadings that are uncertain in size and direction and can act over the structure. The compliance to be minimized is the worst possible one of those produced by a loading in either of the two sets. This model was proposed by Ben-Tal and Nemirovski (1997), as well as a formulation based on semidefinite programming. In this section we derive an alternative formulation that leads to a nonsmooth convex optimization problem, which we solve with the present algorithm.

Let us consider a two- or three-dimensional ground elastic truss with n nodes and m degrees of freedom, submitted to a finite set of loading conditions $P \equiv \{p^1, p^2, \ldots, p^s\}$ such that $p^i \in \mathbb{R}^m$ for $i = 1, 2, \ldots, s$, and let b be the number of initial bars. The design variables of the problem are the volumes of the bars, denoted $x_j$, $j = 1, 2, \ldots, b$. The reduced stiffness matrix is

$$K(x) = \sum_{j=1}^{b} x_j K_j, \qquad (24)$$

where $K_j \in \mathbb{R}^{m \times m}$, $j = 1, 2, \ldots, b$, are the reduced stiffness matrices corresponding to bars of unitary volume. To obtain a well-posed problem, the matrix $\sum_{j=1}^{b} K_j$ must be positive definite (Ben-Tal and Nemirovski 1997). The compliance related to the loading condition $p^i \in P$ can be defined as (Bendsøe 1995):

$$c(x, p^i) = \sup_u \left\{ 2u^T p^i - u^T K(x)u,\ u \in \mathbb{R}^m \right\}. \qquad (25)$$

Let $\hat{c}(x) = \sup_{p^i} \{c(x, p^i),\ p^i \in P\}$ be the worst possible compliance for the set P. An energy model for topology optimization with several loading conditions can be stated as follows:

$$\begin{aligned} \min_{x \in \mathbb{R}^b}\ & \hat{c}(x) \\ \text{s.t.}\ & \sum_{j=1}^{b} x_j \le V, \\ & x_j \ge 0,\ j = 1, \ldots, b. \end{aligned} \qquad (26)$$

The value $V > 0$ is the maximum quantity of material to distribute in the truss.

Instead of maximizing on the finite domain P, we consider a model proposed by Ben-Tal and Nemirovski (1997) that maximizes on the ellipsoid M of loading conditions defined as follows:

$$M = \left\{ Qe \mid e \in \mathbb{R}^q,\ e^T e \le 1 \right\}, \qquad (27)$$

where

$$Q = \left[ p^1, \ldots, p^s,\ r f^1, \ldots, r f^{q-s} \right]. \qquad (28)$$

The vectors $p^1, \ldots, p^s$ must be linearly independent, and $r f^i$ represents the i-th secondary loading. The value r is the magnitude of the secondary loadings, and the set $\{f^1, \ldots, f^{q-s}\}$ must be chosen as an orthonormal basis of a linear subspace orthogonal to the linear span of P. The procedure to choose a convenient basis $\{f^1, \ldots, f^{q-s}\}$ is explained later.

For this model we have

$$\hat{c}(x) = \sup_p \{c(x, p),\ p \in M\}.$$

A robust design is then obtained by solving Eq. 26 with $\hat{c}(x)$ as previously defined.

Other convex formulations of the truss optimization problem are described in (Makrodimopoulos et al. 2010). An interesting discussion considering similar problems of topology optimization of structures is presented in (Stolpe 2010). Other similar formulations for obtaining robust designs are studied in (Cherkaev and Cherkaev 2008).

In (Ben-Tal and Nemirovski 1997), the equivalence of the following two expressions was proved:

$$\hat{c}(x) \le \gamma \qquad (29)$$

and

$$A(\gamma, x) = \begin{bmatrix} \gamma I_q & Q^T \\ Q & K(x) \end{bmatrix} \succeq 0, \qquad (30)$$

where $\gamma \in \mathbb{R}$ and $A \succeq 0$ means that A is positive semidefinite. Since the epigraph of $\hat{c}$ coincides with $\{(\gamma, x) \mid A(\gamma, x) \succeq 0\}$, and this last set is convex (Vandenberghe and Boyd 1996), $\hat{c}$ is a convex function. Then, for robust optimization we can employ the model proposed by Ben-Tal and Nemirovski (1997) and solve the problem of Eq. 26 using the present nonsmooth optimization algorithm. Note that this problem has linear inequality constraints. To solve the optimization problem we must include all these constraints in the initial set of linear inequality constraints of the auxiliary Problem AP. In addition, the initial point $x^0$ must be interior to the feasible region defined by the linear inequality constraints. It remains to show how to compute the function $\hat{c}$ at an interior point x.

It follows that Eq. 30 is equivalent to (Ben-Tal and Nemirovski 1997):

$$K(x) - \gamma^{-1} Q Q^T \succeq 0. \qquad (31)$$
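As a short justification of this equivalence (our addition, using the standard Schur complement argument for $\gamma > 0$ and $K(x) \succ 0$):

$$\begin{bmatrix} \gamma I_q & Q^T \\ Q & K(x) \end{bmatrix} \succeq 0 \;\iff\; K(x) - Q(\gamma I_q)^{-1}Q^T = K(x) - \gamma^{-1}QQ^T \succeq 0 \;\iff\; \gamma K(x) - QQ^T \succeq 0,$$

and the last condition holds exactly when $\gamma$ is at least the largest generalized eigenvalue of the pencil $(QQ^T, K(x))$.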

Then, the equivalence of Eqs. 29 and 31 shows that $\hat{c}(x)$ is equal to the value $\gamma$ that solves the problem:

$$\min\ \gamma \quad \text{s.t.} \quad K(x) - \gamma^{-1} Q Q^T \succeq 0. \qquad (32)$$

Since in our case $K(x) \succ 0$, we have a so-called Generalized Eigenvalue Problem. In consequence, $\hat{c}(x)$ is the highest generalized eigenvalue of the system $(QQ^T, K(x))$ (Vandenberghe and Boyd 1996). There are efficient routines available for the computation of generalized eigenvalues (Bai et al. 2000).

If the highest eigenvalue is simple, the function $\hat{c}$ is differentiable. If it is multiple, the function is generally nondifferentiable. In both cases it is possible to compute the required subgradients, see (Seyranian et al. 1994; Rodrigues et al. 1995; Choi and Kim 2004).
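For instance, with SciPy the worst-case compliance and a gradient can be evaluated along the following lines. This is our sketch, not the authors' code: it assumes the highest generalized eigenvalue $\gamma$ of $QQ^T u = \gamma K(x)u$ is simple and uses the classical sensitivity formula $\partial\hat{c}/\partial x_j = -\gamma\, u^T K_j u / (u^T K(x) u)$ for a fixed left-hand matrix (cf. Seyranian et al. 1994).

```python
import numpy as np
from scipy.linalg import eigh

def worst_compliance(x, Ks, Q):
    """hat{c}(x) = largest generalized eigenvalue of (Q Q^T, K(x)), and its
    gradient w.r.t. the bar volumes x, assuming the eigenvalue is simple."""
    K = sum(xj * Kj for xj, Kj in zip(x, Ks))   # K(x) = sum_j x_j K_j, Eq. 24
    w, U = eigh(Q @ Q.T, K)                     # generalized symmetric problem
    gamma, u = w[-1], U[:, -1]                  # highest eigenvalue and eigenvector
    denom = u @ K @ u
    grad = np.array([-gamma * (u @ Kj @ u) / denom for Kj in Ks])
    return gamma, grad
```

When the eigenvalue is multiple, only a subgradient is available, computed from the eigenvectors as discussed in the references above.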
We consider four test problems. In all of them, Young's modulus of the material is E = 1.0 and the maximum volume is V = 1.0.
6.1 Example 1

The first example considers the ground structure of Fig. 3. The length of each of the horizontal and vertical bars is equal to 1.0 and the magnitude of the loads is 2.0. The secondary loadings have a magnitude r = 0.3 and define a basis of the orthogonal complement of the linear span of P, L(P), in the linear space F of the degrees of freedom of nodes 2 and 4. According to the numeration of the degrees of freedom of Fig. 3, the primary loading and the matrix $A = [e^1, e^2, e^3, e^4]$ of the vectors of a basis of F are:

$$p^1 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ -2 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \qquad A = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

We have to find the orthonormal basis $\{f^1, f^2, f^3\}$ of the orthogonal complement of L(P) in F. As each of the vectors $f^i$ is in F, they satisfy $f^i = Av^i$ for some $v^i \in \mathbb{R}^4$. As they are normal to $p^1$, the vectors $v^i$ satisfy $(p^1)^T A v^i = 0$. Then, we can find $\{v^1, v^2, v^3\}$ as an orthonormal basis of the kernel of $(p^1)^T A$. This basis can be found using the singular value decomposition of $(p^1)^T A$ (Golub and Van Loan 1996). The result obtained for $v^i$ is

$$[v^1, v^2, v^3] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

The final result is:

$$Q = [p^1, r f^1, r f^2, r f^3] = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0.3 & 0 & 0 \\ -2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0.3 & 0 \\ 0 & 0 & 0 & 0.3 \end{bmatrix}.$$
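The basis construction above reduces to a null-space computation. A small Python sketch using the SVD (our illustration; the helper name is not from the paper, and the data reproduce Example 1 as reconstructed above):

```python
import numpy as np

def span_complement_basis(A, P, tol=1e-12):
    """Columns of A: orthonormal basis of F. Columns of P: primary loadings
    lying in F. Returns vectors f_i = A v_i forming an orthonormal basis of
    the orthogonal complement of span(P) in F."""
    M = P.T @ A                          # rows (p^j)^T A
    _, sv, Vh = np.linalg.svd(M)
    rank = int(np.sum(sv > tol))         # numerical rank of M
    return A @ Vh[rank:].T               # kernel basis, lifted back to F

# Example 1 data: F spanned by the dofs of nodes 2 and 4 (A = [e3, e4, e7, e8]),
# one primary loading p^1 = -2 e4
A = np.zeros((8, 4)); A[[2, 3, 6, 7], [0, 1, 2, 3]] = 1.0
p1 = np.zeros((8, 1)); p1[3, 0] = -2.0
F = span_complement_basis(A, p1)
print(F.T)  # three orthonormal vectors supported on dofs 3, 7 and 8
```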

Fig. 3 Truss of Example 1

Fig. 4 Result of Example 1

Figure 4 shows the optimal truss obtained using the present algorithm and Fig. 5 shows the evolution of the four highest eigenvalues of the system $(QQ^T, K(x))$.

Fig. 5 Example 1: evolution of the four highest eigenvalues
6.2 Example 2

This example considers the same ground structure as Example 1, and a loading condition as shown in Fig. 6. The magnitude of the primary loads is 2.0. The secondary loadings have a magnitude r = 0.4 and define a basis of the orthogonal complement of L(P) in the linear space F of all the degrees of freedom of the structure. Figure 7 shows the optimal truss obtained and Fig. 8 shows the evolution of the four highest eigenvalues of the system $(QQ^T, K(x))$.

Fig. 6 Truss of Example 2

Fig. 7 Result of Example 2

Fig. 8 Example 2: evolution of the four highest eigenvalues

6.3 Example 3

This example consists of a three-dimensional truss with fixed nodes on the horizontal plane z = 0 and free nodes on the horizontal plane z = 2. The structure has 8 nodes, of coordinates

$$x = \cos(2\pi i/N), \quad y = \sin(2\pi i/N), \quad z = 0, \quad i \in \{1, \ldots, N\},$$
$$x = \tfrac{1}{2}\cos(2\pi i/N), \quad y = \tfrac{1}{2}\sin(2\pi i/N), \quad z = 2, \quad i \in \{N+1, \ldots, 2N\}, \qquad (33)$$

with N = 4. All the possible bars between free-free or free-fixed nodes are considered. The loading condition consists of four forces acting simultaneously, applied at the nodes on the plane z = 2. The force at node i is

$$p^i = \frac{1}{\sqrt{N(1+\alpha^2)}}\left(\sin(2\pi i/N),\ -\cos(2\pi i/N),\ -\alpha\right)^T, \quad i \in \{N+1, \ldots, 2N\}, \qquad (34)$$

with α = 0.001. The secondary loadings have a magnitude r = 0.3 and define a basis of the orthogonal complement of L(P) in the linear space F of all the degrees of freedom of the structure. Figure 9 shows the optimal truss obtained for this example and Fig. 10 shows the evolution of the six highest eigenvalues of the system $(QQ^T, K(x))$.

Fig. 9 Result of Example 3

Fig. 10 Example 3: evolution of the six highest eigenvalues

6.4 Example 4

This example is similar to the previous one. The nodal coordinates and nodal forces are given by Eqs. 33 and 34, respectively, but with N = 5 and α = 0.01. The secondary loadings have a magnitude r = 0.3 and define a basis of the orthogonal complement of L(P) in the linear space F of all the degrees of freedom of the structure. The optimal truss of this example and the evolution of the six highest eigenvalues of the system $(QQ^T, K(x))$ are shown in Figs. 11 and 12.

Fig. 11 Result of Example 4

Fig. 12 Example 4: evolution of the six highest eigenvalues

Tables 4 and 5 describe the numerical results obtained using the present algorithm.

Table 4 Results of the truss optimization examples

            NB    NI    NF     F
Example 1   10    42    72     258.24
Example 2   10    37    161    278.40
Example 3   22    25    35     110.56
Example 4   35    35    112    135.27

NB: number of bars, NI: number of iterations, NF: number of function evaluations, F: optimal value of the objective function

Table 5 Bar volumes of the optimal structures (bars identified by their end nodes)

Example 1           Example 2           Example 3           Example 4
bar    volume       bar    volume       bar    volume       bar    volume
5-3    2.478e-1     5-3    2.448e-1     1-6    1.247e-1     1-7    1.000e-1
6-4    1.276e-1     3-1    1.195e-1     1-8    1.246e-1     1-10   9.926e-2
4-2    1.251e-1     6-4    2.448e-1     2-5    1.246e-1     2-6    9.947e-2
5-4    3.715e-3     4-2    1.195e-1     2-7    1.247e-1     2-8    1.003e-1
6-3    2.478e-1     5-4    1.265e-2     3-6    1.246e-1     3-7    9.922e-2
3-2    2.478e-1     6-3    1.265e-2     3-8    1.247e-1     3-9    1.002e-1
                    3-2    2.368e-1     4-5    1.247e-1     4-8    9.943e-2
                    4-1    9.195e-3     4-7    1.246e-1     4-10   1.001e-1
                                        5-6    4.842e-4     5-6    1.003e-1
                                        5-7    4.343e-4     5-9    9.935e-2
                                        5-8    4.847e-4     6-7    2.573e-4
                                        6-7    4.847e-4     6-8    2.169e-4
                                        6-8    4.343e-4     6-9    2.366e-4
                                        7-8    4.842e-4     6-10   2.530e-4
                                                            7-8    2.548e-4
                                                            7-9    2.345e-4
                                                            7-10   2.162e-4
                                                            8-9    2.728e-4
                                                            8-10   2.137e-4
                                                            9-10   2.669e-4

7 Conclusions and future work

In this paper, a new approach for unconstrained nonsmooth convex optimization was introduced and global convergence was proved. The present algorithm is very simple and does not require the solution of quadratic programming subproblems, but just of two linear systems with the same matrix. All test problems were solved with the same values of the parameters, indicating that our approach is robust and that the corresponding code can be employed in practical applications by engineers and other professionals who are not experts in mathematical programming.

When compared with well-established nonsmooth optimization techniques, our algorithm obtained the same results, requiring a comparable number of iterations and function evaluations and exhibiting similar performance profiles.

The main advantage of our approach, as in FDIPA, comes from the fact that we solve linear systems instead of quadratic programs. The state of the art of numerical techniques for linear systems is vast, including direct and iterative methods for simple or high performance computers. These techniques take advantage of the structure of the system. This suggests the possibility of solving very large problems with the present approach.

Even if the present algorithm is formally restricted to unconstrained convex problems, we can include linear constraints by adding them in Problem AP. This is a significant advantage with respect to existing nonsmooth optimization techniques.

Our algorithm was successfully applied to the topology optimization of trusses employing a model proposed by Ben-Tal and Nemirovski (1997), based on semidefinite programming. We reformulated this model in such a way as to obtain a convex optimization problem with linear constraints.

The present paper brings a new methodology for nonsmooth optimization, and the authors believe that our algorithm can be significantly improved in future research. We are also working on the generalization of this approach to nonconvex problems and on the treatment of nonsmooth inequality constraints. Since this technique works with internal linear systems similar to those of FDIPA, it seems possible to elaborate an efficient code for problems involving simultaneously smooth and nonsmooth functions.

Acknowledgements The authors wish to thank Claudia Sagastizábal (CEPEL, Brazil) and Napsu Karmitsa (University of Turku, Finland) for their careful reading of this manuscript and the helpful comments. This research was partially supported by CNPq, FAPERJ (Brazil) and ANII (Uruguay).

References

Auslender A (1987) Numerical methods for nondifferentiable convex optimization. Math Program Stud 30:102-126
Bai Z, Demmel J, Dongarra J, Ruhe A, van der Vorst H (eds) (2000) Templates for the solution of algebraic eigenvalue problems: a practical guide. Software, Environments, and Tools, vol 11. Society for Industrial and Applied Mathematics (SIAM), Philadelphia
Ben-Tal A, Nemirovski A (1997) Robust truss topology design via semidefinite programming. SIAM J Optim 7(4):991-1016
Bendsøe MP (1995) Optimization of structural topology, shape, and material. Springer-Verlag, Berlin
Bonnans JF, Gilbert JC, Lemaréchal C, Sagastizábal C (2003) Numerical optimization: theoretical and practical aspects. Springer-Verlag
Cherkaev E, Cherkaev A (2008) Minimax optimization problem of structural design. Comput Struct 86(13-14):1426-1435
Choi KK, Kim NH (2004) Structural sensitivity analysis and optimization 1: linear systems. Springer, Berlin
Clarke FH (1983) Optimization and nonsmooth analysis. John Wiley and Sons
Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91:201-213
Goffin J, Vial J (2002) Convex nondifferentiable optimization: a survey focused on the analytic center cutting plane method. Optim Methods Softw 17:808-867
Golub GH, Van Loan CF (1996) Matrix computations, 3rd edn. Johns Hopkins studies in the mathematical sciences. Johns Hopkins University Press, Baltimore
Herskovits J (1982) Développement d'une méthode numérique pour l'optimisation non-linéaire. PhD thesis, Paris IX University
Herskovits J (1986) A two-stage feasible directions algorithm for nonlinear constrained optimization. Math Program 36:19-38
Herskovits J (1998) Feasible direction interior-point technique for nonlinear optimization. J Optim Theory Appl 99:121-146
Hiriart-Urruty JB, Lemaréchal C (1993) Convex analysis and minimization algorithms I: fundamentals. Springer-Verlag
Kelley JE (1960) The cutting-plane method for solving convex programs. J Soc Ind Appl Math 8:703-712
Kiwiel KC (1985) Methods of descent for nondifferentiable optimization. Springer-Verlag
Lemaréchal C, Bancora Imbert M (1985) Le module M1FC1. Tech rep, Institut de Recherche d'Automatique
Luenberger DG (1984) Linear and nonlinear programming, 2nd edn. Addison-Wesley
Lukšan L, Vlček J (2000) Test problems for nonsmooth unconstrained and linearly constrained optimization, N-798. Tech rep, Institute of Computer Science, Academy of Sciences of the Czech Republic
Mäkelä M, Neittaanmäki P (1992) Nonsmooth optimization: analysis and algorithms with applications to optimal control. World Scientific Publishing
Makrodimopoulos A, Bhaskar A, Keane AJ (2010) Second-order cone programming formulations for a class of problems in structural optimization. Struct Multidiscipl Optim 40(1-6):365-380
Mitchell JE (2003) Polynomial interior point cutting plane methods. Optim Methods Softw 18:507-534
Nesterov Y, Péton O, Vial J (1999) Homogeneous analytic center cutting plane methods with approximate centers. Optim Methods Softw 11:243-273
Panier E, Tits A, Herskovits J (1988) A QP-free, globally convergent, locally superlinearly convergent algorithm for inequality constrained optimization. SIAM J Control Optim 26:788-811
Rodrigues HC, Guedes JM, Bendsøe MP (1995) Necessary conditions for optimal design of structures with a nonsmooth eigenvalue based criterion. Struct Optim 9(1):52-56
Schramm H, Zowe J (1992) A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results. SIAM J Optim 2:121-152
Seyranian AP, Lund E, Olhoff N (1994) Multiple eigenvalues in structural optimization problems. Struct Optim 8(4):207-227
Stolpe M (2010) On some fundamental properties of structural topology optimization problems. Struct Multidiscipl Optim 41(5):661-670
Tits A, Wächter A, Bakhtiari S, Urban T, Lawrence C (2003) A primal-dual interior-point method for nonlinear programming with strong global and local convergence properties. SIAM J Optim 14:173-199
Vandenberghe L, Boyd S (1996) Semidefinite programming. SIAM Rev 38(1):49-95
