
Applied Mathematics and Computation 220 (2013) 324–330


Global convergence of trust-region algorithms for convex constrained minimization without derivatives

P.D. Conejo a,*,1, E.W. Karas b,2, L.G. Pedroso b,3, A.A. Ribeiro b,3, M. Sachine b,3

a State University of Paraná, Dep. of Mathematics, 85819-110 Cascavel, PR, Brazil
b Federal University of Paraná, Dep. of Mathematics, CP 19081, 81531-980 Curitiba, PR, Brazil

Keywords: Derivative-free optimization; Convex constrained optimization; Trust region

Abstract. In this work we propose a trust-region algorithm for the problem of minimizing a function within a convex closed domain. We assume that the objective function is differentiable but no derivatives are available. The algorithm has a very simple structure and allows a great deal of freedom in the choice of the models. Under reasonable assumptions for derivative-free schemes, we prove global convergence for the algorithm, that is to say, that all accumulation points of the sequence generated by the algorithm are stationary.

© 2013 Published by Elsevier Inc.

1. Introduction

In this work we will discuss global convergence of a general derivative-free trust-region algorithm for solving the nonlinear programming problem

$$\begin{array}{ll} \text{minimize} & f(x) \\ \text{subject to} & x \in \Omega, \end{array} \tag{1}$$

where $\Omega \subset \mathbb{R}^n$ is a nonempty closed convex set and $f : \mathbb{R}^n \to \mathbb{R}$ is a differentiable function. We assume that $\Omega$ is a "simple" set, in the sense that it is easy to compute the orthogonal projection of an arbitrary point onto the feasible set $\Omega$. Although the objective function is smooth, we assume that its derivatives are not available. This situation is common in a wide range of applications [9], particularly when the objective function is provided by a simulation package or a black box. Such practical situations have motivated research on derivative-free optimization in recent years [9,11].
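For instance, when the feasible set is a box or a Euclidean ball, the projection has a closed form. The following minimal sketch (our own illustration in NumPy, not part of the original paper; the paper does not prescribe any particular $\Omega$) shows two such "simple" sets:

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box {z : lower <= z <= upper},
    an example of a 'simple' feasible set with a closed-form projection."""
    return np.minimum(np.maximum(x, lower), upper)

def project_ball(x, center, radius):
    """Euclidean projection onto the ball {z : ||z - center|| <= radius},
    another common simple set."""
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + (radius / nd) * d
```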
There are many derivative-free methods with global convergence results for handling problem (1). When the feasible set is defined by linear constraints, the problem can be solved by the GSS (Generating Set Search) method presented in [19]. This method encompasses many algorithms, including Generalized Pattern Search [20]. In [3], the authors propose an inexact-restoration scheme where the GSS algorithm is used in the optimality phase, thereby avoiding the evaluation of the gradient of the objective function. In [14,18], the authors present Augmented Lagrangian methods for generally constrained problems. In [18], the subproblems are linearly constrained and solved by the GSS algorithm. In [14], any class of constraints can be considered in the subproblems, provided a suitable derivative-free algorithm is available for handling them.

* Corresponding author.
E-mail addresses: [email protected] (P.D. Conejo), [email protected] (E.W. Karas), [email protected] (L.G. Pedroso), [email protected] (A.A. Ribeiro), [email protected] (M. Sachine).
1 Ph.D. Program in Numerical Methods in Engineering, Federal University of Paraná. Partially supported by Fundação Araucária, Brazil.
2 Partially supported by CNPq, Brazil. Grants 472313/2011-8 and 307714/2011-0.
3 Partially supported by CNPq, Brazil. Grant 472313/2011-8.

0096-3003/$ - see front matter © 2013 Published by Elsevier Inc.
http://dx.doi.org/10.1016/j.amc.2013.06.041

Here we consider the class of derivative-free trust-region methods, which was pioneered by Winfield [31] and exhaustively studied for unconstrained and box-constrained problems by Powell [22-25,27], Conn and Toint [12], Conn, Scheinberg and Toint [8], Conn, Scheinberg and Vicente [11], Fasano, Morales and Nocedal [15], Gratton, Toint and Tröltzsch [17], and for linearly constrained problems by Powell [26]. Global convergence results for the unconstrained case are presented in [11,17,27]. In these works the models are based on polynomial interpolation; essentially, they differ in how the interpolation set is updated and how the models are constructed.
With regard to derivative-based trust-region methods, for both constrained and unconstrained problems, well-established algorithms with global convergence results can be found in the literature, for example in [5-7,21,28]. When the derivatives of the objective function are not available, there are also trust-region methods with good practical performance [1,25,29,30]. However, until now, theoretical results have not been established for the constrained case. As far as we know, this work is the first one to present global convergence results for a class of derivative-free trust-region methods for constrained optimization.
The proposed algorithm considers quadratic models that approximate the objective function based only on zero-order information. Nevertheless, the gradient of the models must adequately represent the gradient of the objective function at the current point. This property can be achieved by many derivative-free techniques, most of them based on polynomial interpolation [4,10,11,13].
At each iteration, the algorithm considers the Euclidean projection of the gradient of the model at the current point onto the feasible set $\Omega$. The model is minimized subject to $\Omega$ and to the trust region, in the sense that an approximate solution of this subproblem must satisfy a Cauchy-type condition. This solution will be accepted or rejected as the new iterate according to the ratio between the actual and predicted reductions, a classical trust-region procedure. In particular, the proposed algorithm can be applied to unconstrained problems; in this case, the projection reduces to the gradient of the model, and the classical condition on the Cauchy step can be used as an acceptance criterion for the solution of the subproblem.
The paper is organized as follows. In Section 2, we propose a derivative-free trust-region algorithm. In Section 3, we present its global convergence analysis. Conclusions are stated in Section 4. Throughout the paper, the symbol $\|\cdot\|$ denotes the Euclidean norm.

2. The algorithm

In this section we present a general trust-region algorithm for solving the problem (1) that generates a sequence of
approximate minimizers of quadratic constrained subproblems. The algorithm allows a great deal of freedom in the con-
struction and resolution of the subproblems.
At each iteration $k \in \mathbb{N}$, we consider the current iterate $x^k \in \Omega$ and the quadratic model

$$Q_k(d) = f(x^k) + (g^k)^T d + \frac{1}{2} d^T G_k d,$$

where $g^k \in \mathbb{R}^n$ and $G_k \in \mathbb{R}^{n \times n}$ is a symmetric matrix. Any quadratic model in this form can be used, as long as it provides a sufficiently accurate approximation of the objective function, in the sense that $g^k = \nabla Q_k(0)$ and $G_k$ satisfy Hypotheses H3 and H4 discussed ahead. We assume little about the Hessians of the models, just symmetry and uniform boundedness, so that even linear models may be used by setting $G_k = 0$. We do not use Taylor models because we are interested in the case where derivatives are not available, although we assume they exist.
We consider the stationarity measure at $x^k$ for the problem of minimizing $Q_k$ over the set $\Omega$ defined by

$$\pi_k = \| P_\Omega(x^k - g^k) - x^k \|,$$

where $P_\Omega$ denotes the orthogonal projection onto $\Omega$, which exists because $\Omega$ is a closed convex set. We say that a point $x^* \in \Omega$ is stationary for the original problem (1) when $\| P_\Omega(x^* - \nabla f(x^*)) - x^* \| = 0$. This is a classical definition of stationarity, since $\Omega$ is convex [2,7,28].
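Both the model and the stationarity measure are cheap to evaluate once $g^k$, $G_k$ and the projection are available. A minimal sketch (our own; the helper name `proj` is an assumption standing for $P_\Omega$, not a name from the paper):

```python
import numpy as np

def model_value(fxk, gk, Gk, d):
    """Q_k(d) = f(x^k) + (g^k)^T d + (1/2) d^T G_k d."""
    return fxk + gk @ d + 0.5 * d @ (Gk @ d)

def stationarity_measure(xk, gk, proj):
    """pi_k = ||P_Omega(x^k - g^k) - x^k||; it vanishes exactly at points that
    are stationary for the problem of minimizing Q_k over Omega."""
    return np.linalg.norm(proj(xk - gk) - xk)
```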
For proving convergence to stationary points, we assume that an approximate solution $d^k \in \mathbb{R}^n$ of the trust-region subproblem

$$\begin{array}{ll} \text{minimize} & Q_k(d) \\ \text{subject to} & x^k + d \in \Omega, \quad \|d\| \le \Delta_k \end{array} \tag{2}$$

satisfies the efficiency condition given by

$$Q_k(0) - Q_k(d^k) \ge c_1 \pi_k \min\left\{ \frac{\pi_k}{1 + \|G_k\|},\; \Delta_k,\; 1 \right\}, \tag{3}$$

where $c_1 > 0$ is a constant independent of $k$.
Conditions of this kind are well known in trust-region approaches and have been used by several authors in different situations. In the unconstrained case, in which $\Omega = \mathbb{R}^n$, the stationarity measure $\pi_k$ is simply $\|g^k\|$ and the classical Cauchy step satisfies a similar condition, as proved in [21, Lemma 4.5] and [11, Theorem 10.1], with and without derivatives of the objective function, respectively. Conditions of this type also appear throughout [7], under different contexts. In [16], the authors prove the global convergence of a filter method for nonlinear programming by assuming that an approximate solution of the subproblems satisfies a condition analogous to (3). For bound-constrained nonlinear optimization without derivatives, Tröltzsch [30] also assumes such a condition.
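For intuition, one classical way to produce a step fulfilling a Cauchy-type decrease is a backtracking search along the projected-gradient path of the model, in the spirit of the generalized Cauchy point of [7]. The sketch below is our own illustration under the stated assumptions, not the solver prescribed by the paper, and the sufficient-decrease test shown is an Armijo-type surrogate rather than condition (3) itself:

```python
import numpy as np

def cauchy_type_step(xk, gk, Gk, delta, proj, sigma=1e-4, shrink=0.5, max_tries=50):
    """Backtrack along d(t) = P_Omega(x^k - t g^k) - x^k until the step fits in
    the trust region and yields Q_k(0) - Q_k(d) >= sigma * (-(g^k)^T d),
    an Armijo-type sufficient decrease on the model."""
    t = 1.0
    for _ in range(max_tries):
        d = proj(xk - t * gk) - xk
        if np.linalg.norm(d) <= delta:
            pred = -(gk @ d + 0.5 * d @ (Gk @ d))  # model reduction Q_k(0) - Q_k(d)
            if pred >= -sigma * (gk @ d):          # note: (g^k)^T d <= 0 on this path
                return d
        t *= shrink
    return np.zeros_like(xk)  # null step if the search fails
```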
Once an approximate solution of (2) has been computed, we analyse whether or not it produces a satisfactory decrease in the model. As usual in trust-region methods, the trial step is assessed by means of the ratio

$$\rho_k = \frac{f(x^k) - f(x^k + d^k)}{Q_k(0) - Q_k(d^k)}. \tag{4}$$
We present now a general derivative-free trust-region algorithm with no specification on the model update and on the
internal solver for the subproblems. Later we will state assumptions to prove that any accumulation point of the sequence
generated by the algorithm is stationary.

Algorithm 1. General Algorithm

Data: $x^0 \in \Omega$, $\alpha > 0$, $\Delta_0 > 0$, $0 < \tau_1 < 1 \le \tau_2$, $\eta_1 \in (0,1)$, $0 \le \eta < \eta_1 \le \eta_2$.
Set $k = 0$.
REPEAT
  Construct the model $Q_k$.
  IF $\Delta_k > \alpha\pi_k$, then
    $\Delta_{k+1} = \tau_1\Delta_k$, $d^k = 0$ and $x^{k+1} = x^k$.
  ELSE
    Find an approximate solution $d^k$ of (2).
    IF $\rho_k > \eta$, then
      $x^{k+1} = x^k + d^k$.
    ELSE
      $x^{k+1} = x^k$.
    IF $\rho_k < \eta_1$, then
      $\Delta_{k+1} = \tau_1\Delta_k$.
    ELSE
      IF $\rho_k > \eta_2$ and $\|d^k\| = \Delta_k$, then
        $\Delta_{k+1} = \tau_2\Delta_k$.
      ELSE
        $\Delta_{k+1} = \Delta_k$.
  $k = k + 1$.

Note that the model is updated whenever a new point is computed. Otherwise, the trust-region radius is reduced by the factor $\tau_1$. We will prove in the next section that $\Delta_k \to 0$ as $k \to \infty$, which will be important in the proofs of the convergence results. This also suggests that, in light of Hypothesis H4 stated ahead, given a tolerance $\varepsilon > 0$ and parameters $b_1, b_2 > 0$, the combination of $\Delta_k \le b_1\varepsilon$ and $\pi_k \le b_2\varepsilon$ could make a reasonable stopping criterion in implementations of the algorithm. When $\pi_k$ is small, the iterate is probably close to a solution of the problem of minimizing the model within the feasible set $\Omega$. On the other hand, if $\Delta_k$ is large, we cannot guarantee that the model properly represents the objective function. Hence, when $\Delta_k > \alpha\pi_k$, the trust-region radius is reduced in an attempt to find a more accurate model. Although we could always set $\alpha = 1$, this parameter might be used to balance the magnitudes of $\pi_k$ and $\Delta_k$, according to the problem.
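To fix ideas, the following Python sketch assembles Algorithm 1 together with the stopping test suggested above. It is our own illustration under the stated hypotheses, not part of the original paper: the model construction (`build_model`) and the subproblem solver (`solve_subproblem`) are left as user-supplied callables, since the paper deliberately does not fix them, and they are assumed to deliver models satisfying H3 and H4 and steps satisfying (3).

```python
import numpy as np

def trust_region_dfo(f, x0, proj, build_model, solve_subproblem,
                     alpha=1.0, delta=1.0, tau1=0.5, tau2=2.0,
                     eta=1e-4, eta1=0.25, eta2=0.75,
                     eps=1e-6, b1=1.0, b2=1.0, max_iter=1000):
    """Sketch of Algorithm 1. build_model(x, delta) must return (g, G) with
    g close to grad f(x) up to O(delta) (Hypothesis H4) and G symmetric and
    uniformly bounded (H3); solve_subproblem(x, fx, g, G, delta, proj) must
    return d with x + d in Omega, ||d|| <= delta, satisfying condition (3)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        g, G = build_model(x, delta)
        pi = np.linalg.norm(proj(x - g) - x)        # stationarity measure pi_k
        if delta <= b1 * eps and pi <= b2 * eps:    # suggested stopping test
            break
        if delta > alpha * pi:                      # model may be inaccurate:
            delta *= tau1                           # shrink radius, null step
            continue
        d = solve_subproblem(x, fx, g, G, delta, proj)
        pred = -(g @ d + 0.5 * d @ (G @ d))         # Q_k(0) - Q_k(d)
        fxd = f(x + d)                              # pred > 0 by (3), as pi > 0 here
        rho = (fx - fxd) / pred                     # ratio (4)
        if rho > eta:                               # accept the trial point
            x, fx = x + d, fxd
        if rho < eta1:
            delta *= tau1
        elif rho > eta2 and np.isclose(np.linalg.norm(d), delta):
            delta *= tau2
    return x
```

In a genuinely derivative-free setting, `build_model` would typically come from polynomial interpolation; a crude finite-difference alternative is sketched after Hypothesis H4 in the next section.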

3. Convergence

In this section we prove, under reasonable assumptions, that every accumulation point of the sequence generated by the algorithm is stationary. From now on we assume that the algorithm generates an infinite sequence $(x^k) \subset \Omega$. For proving the convergence, we consider the following hypotheses.

H1. The function $f$ is differentiable and its gradient $\nabla f$ is Lipschitz continuous, with constant $L > 0$, in $\Omega$.
H2. The function $f$ is bounded below in $\Omega$.
H3. The matrices $G_k$ are uniformly bounded, that is, there exists a constant $\beta \ge 1$ such that $\|G_k\| \le \beta - 1$ for all $k \ge 0$.
H4. There exists a constant $c_2 > 0$ such that $\|g^k - \nabla f(x^k)\| \le c_2 \Delta_k$ for all $k \in \mathbb{N}$.

Hypotheses H1 and H2 impose conditions on the objective function, whereas H3 and H4 describe properties that the interpolation models must satisfy. The first three hypotheses are usual in convergence analyses of both derivative-free and derivative-based trust-region algorithms. Hypothesis H4 states that the model has to properly represent the objective function near the current point. There are algorithms able to find models with such properties without computing $\nabla f(x^k)$, for instance [11, Chapter 6]. The proposed algorithm allows the usage of any technique that fulfills Hypothesis H4, although in the literature the most usual procedure is polynomial interpolation [11,15,17,23,29].
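For illustration only: a forward-difference gradient with stepsize equal to $\Delta_k$ already satisfies a bound of the form required by H4, since for $L$-Lipschitz $\nabla f$ its error is $O(\Delta_k)$ (with constant $c_2 = \sqrt{n}L/2$). The sketch below is our own, not a procedure from the paper; it spends $n$ extra function evaluations per iteration, which is exactly what interpolation schemes avoid by reusing previously sampled points:

```python
import numpy as np

def fd_linear_model(f, x, delta):
    """Return (g, G) for the model Q_k: forward differences with h = delta give
    ||g - grad f(x)|| <= (sqrt(n) L / 2) delta for L-Lipschitz grad f, matching
    Hypothesis H4; G = 0 yields a linear model, which Hypothesis H3 allows.
    Caveat: the sample points x + h e_i may leave Omega, so f must be
    evaluable slightly outside the feasible set."""
    n, fx = x.size, f(x)
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta
        g[i] = (f(x + e) - fx) / delta
    return g, np.zeros((n, n))
```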
For the purpose of our analysis, we shall consider the following sets of indices:

$$S = \{ k \in \mathbb{N} \mid \rho_k > \eta \} \quad \text{and} \quad \bar{S} = \{ k \in \mathbb{N} \mid \rho_k \ge \eta_1 \}. \tag{5}$$

The set $S$ is referred to as the set of successful iterations. Note that $\bar{S} \subset S$.

In the following lemma, the constants $c_1$, $L$, $\beta$ and $c_2$ are the ones defined in (3) and in Hypotheses H1, H3 and H4, respectively. The lemma establishes that if the trust-region radius is sufficiently small, then the algorithm will perform a successful iteration.

Lemma 3.1. Suppose that Hypotheses H1, H3 and H4 hold. Consider the set

$$K = \left\{ k \in \mathbb{N} \;\Big|\; \Delta_k \le \min\left\{ \frac{\pi_k}{\beta},\; \frac{(1-\eta_1)\pi_k}{c},\; \alpha\pi_k,\; 1 \right\} \right\}, \tag{6}$$

where $c = \dfrac{L + c_2 + \beta/2}{c_1}$. If $k \in K$, then $k \in \bar{S}$.

Proof. By the Mean Value Theorem, there exists $t_k \in (0,1)$ such that

$$f(x^k + d^k) = f(x^k) + \nabla f(x^k + t_k d^k)^T d^k.$$

Therefore, by Hypotheses H1, H3 and H4,

$$\begin{aligned}
\left| f(x^k) - f(x^k + d^k) - Q_k(0) + Q_k(d^k) \right|
&= \left| -\left( \nabla f(x^k + t_k d^k) - g^k \right)^T d^k + \tfrac{1}{2} (d^k)^T G_k d^k \right| \\
&\le \left( \| \nabla f(x^k + t_k d^k) - \nabla f(x^k) \| + \| \nabla f(x^k) - g^k \| \right) \| d^k \| + \tfrac{1}{2} \| d^k \|^2 \| G_k \| \\
&\le t_k L \| d^k \|^2 + c_2 \Delta_k \| d^k \| + \tfrac{\beta}{2} \| d^k \|^2.
\end{aligned}$$

Since $\| d^k \| \le \Delta_k$ and $t_k \in (0,1)$, we have that

$$\left| f(x^k) - f(x^k + d^k) - Q_k(0) + Q_k(d^k) \right| \le c_0 \Delta_k^2, \tag{7}$$

where $c_0 = L + c_2 + \beta/2$.

From (6), for every $k \in K$ we have that $\Delta_k \le \alpha\pi_k$ and consequently $\pi_k > 0$. Then, it follows from (3) that $Q_k(0) - Q_k(d^k) \ne 0$. Thus, from expressions (4), (7) and (3), for all $k \in K$,

$$| \rho_k - 1 | = \frac{\left| f(x^k) - f(x^k + d^k) - Q_k(0) + Q_k(d^k) \right|}{Q_k(0) - Q_k(d^k)} \le \frac{c_0 \Delta_k^2}{c_1 \pi_k \min\left\{ \pi_k/\beta,\; \Delta_k,\; 1 \right\}} = \frac{c\,\Delta_k^2}{\pi_k \min\left\{ \pi_k/\beta,\; \Delta_k,\; 1 \right\}}.$$

By (6),

$$\Delta_k = \min\left\{ \frac{\pi_k}{\beta},\; \Delta_k,\; 1 \right\} \quad \text{and} \quad \frac{c\,\Delta_k}{\pi_k} \le 1 - \eta_1.$$

Therefore $| \rho_k - 1 | \le c\Delta_k/\pi_k \le 1 - \eta_1$ and hence $\rho_k \ge \eta_1$. Consequently $k \in \bar{S}$. □

Hypothesis H4 says that the smaller $\Delta_k$ is, the better the models represent the objective function. Therefore, it is reasonable to expect that the trust-region radius converges to zero. In the following lemma, we show that the proposed algorithm has this property.

Lemma 3.2. Suppose that Hypotheses H2 and H3 hold. Then the sequence $(\Delta_k)$ converges to zero.

Proof. If $\bar{S}$ is finite, then there exists $k_0 \in \mathbb{N}$ such that for all $k \ge k_0$, $\Delta_{k+1} \le \tau_1 \Delta_k$. Thus, $(\Delta_k)$ converges to zero. We assume henceforth that $\bar{S}$ is infinite. For any $k \in \bar{S}$, using (3) and Hypothesis H3 we have

$$f(x^k) - f(x^{k+1}) \ge \eta_1 \left( Q_k(0) - Q_k(d^k) \right) \ge \eta_1 c_1 \pi_k \min\left\{ \frac{\pi_k}{\beta},\; \Delta_k,\; 1 \right\}.$$

As $k \in \bar{S}$, we have that $\Delta_k \le \alpha\pi_k$ and hence

$$f(x^k) - f(x^{k+1}) \ge \eta_1 c_1 \frac{\Delta_k}{\alpha} \min\left\{ \frac{\Delta_k}{\alpha\beta},\; \Delta_k,\; 1 \right\}.$$

Since $(f(x^k))$ is a nonincreasing sequence and bounded below by Hypothesis H2, the left-hand side of the above expression converges to zero. Then,

$$\lim_{k \in \bar{S}} \Delta_k = 0. \tag{8}$$

Consider the set

$$U = \{ k \in \mathbb{N} \mid k \notin \bar{S} \}.$$

If $U$ is finite, then by (8) we have that $\lim_{k \to \infty} \Delta_k = 0$. Now suppose that $U$ is infinite. Consider $k \in U$ and define $\ell_k$ as the last index in $\bar{S}$ before $k$. Then $\ell_k$ is well defined for all large $k$ and $\Delta_k \le \tau_2 \Delta_{\ell_k}$, which implies that

$$\lim_{k \in U} \Delta_k \le \tau_2 \lim_{k \in U} \Delta_{\ell_k} = \tau_2 \lim_{\ell_k \in \bar{S}} \Delta_{\ell_k}.$$

By (8) it follows that $\lim_{k \in U} \Delta_k = 0$, which completes the proof. □

The next lemma provides a weak convergence result for the problem of minimizing the model within the feasible set $\Omega$. We prove that the sequence $(\pi_k)$ has a subsequence converging to zero.

Lemma 3.3. Suppose that Hypotheses H1 to H4 hold. Then

$$\liminf_{k \to \infty} \pi_k = 0.$$

Proof. The proof is by contradiction. Suppose that there exist a constant $\varepsilon > 0$ and an index $k_0 > 0$ such that $\pi_k \ge \varepsilon$ for each $k \ge k_0$. Take

$$\tilde{\Delta} = \min\left\{ \frac{\varepsilon}{\beta},\; \frac{(1-\eta_1)\varepsilon}{c},\; \alpha\varepsilon,\; 1 \right\},$$

where $\beta$ is the constant of Hypothesis H3, $c$ is defined in Lemma 3.1, and $\eta_1$ and $\alpha > 0$ are given in Algorithm 1.

Consider $k \ge k_0$. If $\Delta_k \le \tilde{\Delta}$, then $k \in K$, with $K$ given in (6). By Lemma 3.1, $k \in \bar{S}$ and thus $\Delta_{k+1} \ge \Delta_k$. It follows that the trust-region radius can only decrease if $\Delta_k > \tilde{\Delta}$, and in this case $\Delta_{k+1} = \tau_1 \Delta_k > \tau_1 \tilde{\Delta}$. Therefore, one can see that for all $k \ge k_0$,

$$\Delta_k \ge \min\{ \tau_1 \tilde{\Delta},\; \Delta_{k_0} \}, \tag{9}$$

which contradicts Lemma 3.2 and concludes the proof. □

Assuming a sufficient decrease in the objective function, that is, setting $\eta > 0$ in the algorithm, we can prove not only that there exists a subsequence of $(\pi_k)$ converging to zero, as stated in the previous lemma, but also that the convergence holds for the whole sequence.

Lemma 3.4. Suppose that Hypotheses H1 to H4 hold and $\eta > 0$. Then

$$\lim_{k \to \infty} \pi_k = 0.$$

Proof. Suppose by contradiction that for some $\varepsilon > 0$ the set

$$\mathbb{N}_0 = \{ k \in \mathbb{N} \mid \pi_k \ge \varepsilon \} \tag{10}$$

is infinite. By Lemma 3.2, the sequence $(\Delta_k)$ converges to zero. Then there exists $k_0 \in \mathbb{N}$ such that for all $k \ge k_0$,

$$\Delta_k \le \min\left\{ \frac{\varepsilon}{\beta},\; \frac{(1-\eta_1)\varepsilon}{c},\; \alpha\varepsilon,\; 1 \right\}, \tag{11}$$

where the constants $\beta$ and $c$ are given in Lemma 3.1, and $\alpha > 0$ and $\eta_1$ are defined in Algorithm 1.

By (10), for all $k \in \mathbb{N}_0$ with $k \ge k_0$,

$$\Delta_k \le \min\left\{ \frac{\pi_k}{\beta},\; \frac{(1-\eta_1)\pi_k}{c},\; \alpha\pi_k,\; 1 \right\}, \tag{12}$$

and consequently, by Lemma 3.1, $k \in \bar{S} \subset S$.

Given $k \in \mathbb{N}_0$ with $k \ge k_0$, let $\ell_k$ be the first index such that $\ell_k > k$ and $\pi_{\ell_k} \le \varepsilon/2$. The existence of $\ell_k$ is ensured by Lemma 3.3. So, $\pi_k - \pi_{\ell_k} \ge \varepsilon/2$. Using the definition of $\pi_k$, the triangle inequality and the contraction property of projections, we have that

$$\begin{aligned}
\frac{\varepsilon}{2} &\le \| P_\Omega(x^k - g^k) - x^k \| - \| P_\Omega(x^{\ell_k} - g^{\ell_k}) - x^{\ell_k} \| \\
&\le \| P_\Omega(x^k - g^k) - x^k - P_\Omega(x^{\ell_k} - g^{\ell_k}) + x^{\ell_k} \| \le 2\| x^k - x^{\ell_k} \| + \| g^k - g^{\ell_k} \| \\
&= 2\| x^k - x^{\ell_k} \| + \| g^k - \nabla f(x^k) + \nabla f(x^k) - \nabla f(x^{\ell_k}) + \nabla f(x^{\ell_k}) - g^{\ell_k} \| \\
&\le 2\| x^k - x^{\ell_k} \| + \| g^k - \nabla f(x^k) \| + \| \nabla f(x^k) - \nabla f(x^{\ell_k}) \| + \| \nabla f(x^{\ell_k}) - g^{\ell_k} \|.
\end{aligned}$$

So, using Hypotheses H1 and H4,

$$\frac{\varepsilon}{2} \le (2 + L)\| x^k - x^{\ell_k} \| + c_2 (\Delta_k + \Delta_{\ell_k}). \tag{13}$$
Let us consider $C_k = \{ i \in S \mid k \le i < \ell_k \}$. Note that, by (12), $k \in S$, so $C_k \ne \emptyset$. For each $i \in C_k$, using the fact that $i \in S$, condition (3) and Hypothesis H3, we conclude that

$$f(x^i) - f(x^{i+1}) \ge \eta \left( Q_i(0) - Q_i(d^i) \right) \ge \eta\, c_1 \pi_i \min\left\{ \frac{\pi_i}{\beta},\; \Delta_i,\; 1 \right\}.$$

By the definition of $\ell_k$, we have that $\pi_i > \varepsilon/2$ for all $i \in C_k$. As $i \ge k$, by (11) $\Delta_i \le \varepsilon/\beta$ and $\Delta_i \le 1$. Therefore

$$\frac{\Delta_i}{2} \le \frac{\varepsilon}{2\beta} \le \frac{\pi_i}{\beta}.$$

It follows that

$$f(x^i) - f(x^{i+1}) > \frac{\eta c_1 \varepsilon \Delta_i}{4}$$

and hence

$$\Delta_i < \frac{4}{\eta c_1 \varepsilon} \left( f(x^i) - f(x^{i+1}) \right). \tag{14}$$

On the other hand,

$$\| x^k - x^{\ell_k} \| \le \sum_{i \in C_k} \| x^i - x^{i+1} \| \le \sum_{i \in C_k} \Delta_i,$$

which combined with (14) provides

$$\| x^k - x^{\ell_k} \| < \frac{4}{\eta c_1 \varepsilon} \left( f(x^k) - f(x^{\ell_k}) \right).$$

By Hypothesis H2, the sequence $(f(x^k))$ is bounded below, and since it is nonincreasing, $f(x^k) - f(x^{\ell_k}) \to 0$. Therefore the subsequence $(\| x^k - x^{\ell_k} \|)_{k \in \mathbb{N}_0}$ converges to zero, which together with Lemma 3.2 contradicts (13), completing the proof. □

Now we have all the ingredients for proving global convergence to first-order stationary points. In the following theorem
we establish a relation between the measure of stationarity for the original problem and the measure of stationarity given in
Lemmas 3.3 and 3.4, which provides the global convergence result.

Theorem 3.5. Suppose that Hypotheses H1 to H4 hold. Then

(i) If $\eta = 0$, then $\displaystyle\liminf_{k \to \infty} \| P_\Omega(x^k - \nabla f(x^k)) - x^k \| = 0$.
(ii) If $\eta > 0$, then $\displaystyle\lim_{k \to \infty} \| P_\Omega(x^k - \nabla f(x^k)) - x^k \| = 0$.

Proof. By the triangle inequality, the contraction property of projections and Hypothesis H4, we have that

$$\begin{aligned}
\| P_\Omega(x^k - \nabla f(x^k)) - x^k \|
&= \| P_\Omega(x^k - \nabla f(x^k)) - P_\Omega(x^k - g^k) + P_\Omega(x^k - g^k) - x^k \| \\
&\le \| P_\Omega(x^k - \nabla f(x^k)) - P_\Omega(x^k - g^k) \| + \| P_\Omega(x^k - g^k) - x^k \| \\
&\le \| \nabla f(x^k) - g^k \| + \| P_\Omega(x^k - g^k) - x^k \| \\
&\le c_2 \Delta_k + \pi_k.
\end{aligned}$$

Using Lemmas 3.2, 3.3 and 3.4, we complete the proof. □

We emphasize that the liminf-type convergence result is still guaranteed if the new iterate is moved to a point with a lower objective function value, since Lemmas 3.1, 3.2 and 3.3 remain valid. This can be an interesting strategy if a better point is found during the construction of the model, which can occur, for example, when the model is obtained by interpolation.

4. Conclusions

In this work we proposed a general derivative-free trust-region algorithm for minimizing a smooth objective function in a
closed convex set. The algorithm has a very simple structure and it allows a certain degree of freedom on the choice of the
models, as long as they approximate sufficiently well the objective function, in the sense of Hypothesis H4. Furthermore, any
internal algorithm can be utilized for solving the subproblems, provided that it generates a sequence of points satisfying the
efficiency condition (3). Under these hypotheses and other standard assumptions, we have established global convergence of
the algorithm in a neat way. Further research is necessary in order to extend the analysis to derivative-free problems in gen-
eral domains.

Acknowledgement

The authors are grateful to José Mario Martínez, who encouraged them to pursue this investigation, and to Celso Penteado Serra (in memoriam) and Luis Mauricio Graña Drummond for their valuable comments and suggestions. We also thank the anonymous referees, whose suggestions led to great improvements in the paper.

References

[1] B.M. Arouxét, N. Echebest, A. Pilotta, Active-set strategy in Powell’s method for optimization without derivatives, Computational and Applied
Mathematics 30 (1) (2011) 171–196.
[2] D.P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, USA, 1995.
[3] L.F. Bueno, A. Friedlander, J.M. Martínez, F.N.C. Sobral, Inexact restoration method for derivative-free optimization with smooth constraints, SIAM Journal on Optimization 23 (2) (2013) 1189–1213.
[4] P.G. Ciarlet, P.A. Raviart, General Lagrange and Hermite interpolation in $\mathbb{R}^n$ with applications to finite element methods, Archive for Rational Mechanics and Analysis 46 (1972) 177–199.
[5] A.R. Conn, N.I.M. Gould, A. Sartenaer, Ph.L. Toint, Convergence properties of minimization algorithms for convex constraints using a structured trust
region, SIAM Journal on Optimization 6 (4) (1996) 1059–1086.
[6] A.R. Conn, N.I.M. Gould, Ph.L. Toint, Global convergence of a class of trust region algorithms for optimization with simple bounds, SIAM Journal on
Numerical Analysis 25 (2) (1988) 433–460.
[7] A.R. Conn, N.I.M. Gould, Ph.L. Toint, Trust-Region Methods, MPS-SIAM Series on Optimization, SIAM, Philadelphia, 2000.
[8] A.R. Conn, K. Scheinberg, Ph.L. Toint, On the convergence of derivative-free methods for unconstrained optimization, in: M.D. Buhmann, A. Iserles
(Eds.), Approximation Theory and Optimization: Tributes to M.J.D. Powell, Cambridge University Press, 1997, pp. 83–108.
[9] A.R. Conn, K. Scheinberg, Ph.L. Toint, A derivative free optimization algorithm in practice, in: Proceedings of the AIAA Conference, St Louis, 1998.
[10] A.R. Conn, K. Scheinberg, L.N. Vicente, Geometry of interpolation sets in derivative free optimization, Mathematical Programming 111 (2008) 141–172.
[11] A.R. Conn, K. Scheinberg, L.N. Vicente, Introduction to Derivative-Free Optimization, MPS-SIAM Series on Optimization, SIAM, Philadelphia, 2009.
[12] A.R. Conn, Ph. L. Toint, An algorithm using quadratic interpolation for unconstrained derivative free optimization, in: G. Di Pillo, F. Gianessi (Eds.),
Nonlinear Optimization and Applications, Plenum, 1996, pp. 27–47.
[13] P.J. Davis, Interpolation and Approximation, Blaisdell, New York, 1963.
[14] M.A. Diniz-Ehrhardt, J.M. Martínez, L.G. Pedroso, Derivative-free methods for nonlinear programming with general lower-level constraints,
Computational and Applied Mathematics 30 (2011) 19–52.
[15] G. Fasano, J.L. Morales, J. Nocedal, On the geometry phase in model-based algorithms for derivative-free optimization, Optimization Methods and
Software 24 (2009) 145–154.
[16] C.C. Gonzaga, E.W. Karas, M. Vanti, A globally convergent filter method for nonlinear programming, SIAM Journal on Optimization 14 (3) (2003) 646–
669.
[17] S. Gratton, Ph.L. Toint, A. Tröltzsch, An active set trust-region method for derivative-free nonlinear bound-constrained optimization, Optimization
Methods and Software 26 (2011) 873–894.
[18] T.G. Kolda, R.M. Lewis, V. Torczon, A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and
linear constraints, Technical Report SAND2006-5315, Sandia National Laboratories, 2006.
[19] T.G. Kolda, R.M. Lewis, V. Torczon, Stationarity results for generating set search for linearly constrained optimization, SIAM Journal on Optimization 17
(2006) 943–968.
[20] R.M. Lewis, V. Torczon, Pattern search algorithms for linearly constrained minimization, SIAM Journal on Optimization 10 (2000) 917–941.
[21] J. Nocedal, S.J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer-Verlag, 1999.
[22] M.J.D. Powell, A Direct Search Optimization Method that Models the Objective and Constraint Functions by Linear Interpolation, in: S. Gomez, J.P.
Hennart (Eds.), Advances in Optimization and Numerical Analysis, Kluwer Academic, Dordrecht, 1994, pp. 51–67.
[23] M.J.D. Powell, The NEWUOA software for unconstrained optimization without derivatives, in: G. Di Pillo, M. Roma (Eds.), Large-Scale Nonlinear
Optimization, Springer, New York, 2006, pp. 255–297.
[24] M.J.D. Powell, Developments of NEWUOA for minimization without derivatives, IMA Journal of Numerical Analysis 28 (2008) 649–664.
[25] M.J.D. Powell, The BOBYQA algorithm for bound constrained optimization without derivatives, Technical Report DAMTP 2009/NA06, Department of
Applied Mathematics and Theoretical Physics, Cambridge, England, August 2009.
[26] M.J.D. Powell, On derivative-free optimization with linear constraints, in: 21st ISMP, Berlin, Germany, 2012.
[27] M.J.D. Powell, On the convergence of trust region algorithms for unconstrained minimization without derivatives, Computational Optimization and
Applications, 2012, 1–29. (Online First).
[28] A.A. Ribeiro, E.W. Karas, Otimização Contínua: aspectos teóricos e computacionais, Cengage Learning, São Paulo, Brazil, in press (in Portuguese).
[29] K. Scheinberg, Ph.L. Toint, Self-correcting geometry in model-based algorithms for derivative-free unconstrained optimization, SIAM Journal on
Optimization 20 (6) (2010) 3512–3532.
[30] A. Tröltzsch, An active-set trust-region method for bound-constrained nonlinear optimization without derivatives applied to noisy aerodynamic design problems, Ph.D. thesis, Université de Toulouse, 2011.
[31] D. Winfield, Function minimization by interpolation in a data table, IMA Journal of Applied Mathematics 12 (3) (1973) 339–347.
