
Math. Program., Ser. A (2010) 125:123–137
DOI 10.1007/s10107-008-0263-4

FULL LENGTH PAPER

Global optimization problems and domain reduction strategies

Alberto Caprara · Marco Locatelli

Received: 21 May 2007 / Accepted: 25 November 2008 / Published online: 20 January 2009
© Springer-Verlag 2009

Abstract In this paper we discuss domain reduction strategies for global optimization problems with a nonconvex objective function over a bounded convex feasible region. After introducing a standard domain reduction and its iterated version, we will introduce a new reduction strategy. Under mild assumptions, we will prove the equivalence between the new domain reduction and the iterated version of the standard one, allowing a new interpretation of the latter and a new way of computing it. Finally, we prove that any “reasonable” domain reduction strategy is independent of the order by which variables are processed.

Mathematics Subject Classification (2000) 90C26

1 Introduction

In this paper we consider constrained global optimization problems in maximization form with a general (nonconcave) objective function and with a convex bounded (and closed) feasible region. We indicate explicitly the lower and upper bounds on the variables implied by the boundedness of the feasible region, considering problems of the form
max { f (x) : x ∈ F ∩ B}, (1)

A. Caprara
DEIS, Università di Bologna, Viale Risorgimento 2,
40136 Bologna, Italy
e-mail: [email protected]

M. Locatelli (B)
Dipartimento di Informatica, Università di Torino,
Corso Svizzera 185, 10149 Turin, Italy
e-mail: [email protected]


where F ⊆ Rn is a closed convex set, f : Rn → R, x = (x1, . . . , xn) ∈ Rn and N = {1, . . . , n}, and

B = {x ∈ Rn : lj ≤ xj ≤ uj, j ∈ N}

is the box defined by the lower and upper bounds of the variables. These bounds might either be explicitly given in the definition of the problem or be derived by minimizing and maximizing each variable in turn over the original feasible region, solving the associated convex problem. In any case, we will assume throughout the paper that, for each k ∈ N and t ∈ [lk, uk], there exists (x1, . . . , xk−1, t, xk+1, . . . , xn) ∈ F, observing that the bounds on xk can be tightened if this is not the case. For the sake of simplicity, in the paper we will employ the following notation:
• x −k = (x1 , . . . , xk−1 , xk+1 , . . . , xn ) ∈ Rn−1 ;
• x (k) (t) = (x1 , . . . , xk−1 , t, xk+1 , . . . , xn ) ∈ Rn .
These problems are difficult ones. Even maximizing a quadratic function over the unit simplex

{x ∈ Rn : Σj∈N xj = 1, 0 ≤ xi ≤ 1, i ∈ N},

turns out to be strongly NP-hard (this follows from the continuous reformulation of the MAX-CLIQUE problem proposed in [11]). We refer, e.g. to [4,19] for more complexity results.
In spite of their difficulty, these global optimization problems often have enough
structure to be solved by deterministic approaches returning an optimal or, at least,
approximate solution in a finite time (we refer to [7,8,12] for a review of such
approaches and we mention software packages like BARON [15,17] that can solve
problems of this kind).
Among the solution approaches a prominent role is played by branch-and-bound ones. These are based on: (a) branching, which is the successive subdivision of the original problem (1) into smaller and smaller subproblems, generally corresponding to the subdivision of the original box B into smaller and smaller subboxes; (b) the computation of a lower bound, denoted in what follows with ℓ∗, on the optimal value of (1) (the best, i.e. highest, function value observed within the feasible region); (c) the computation of upper bounds for the subproblems into which the original one is subdivided.
We do not give here the details of branching and lower bounding, for which we
refer, e.g. to [8], but we briefly discuss the upper bounding procedure. We restrict our
attention to the computation of an upper bound for the original problem (1) because
upper bounds over subboxes are obtained in a completely analogous way.
First, we need an overestimator of f over the box B, i.e. a function g such that

g(x) ≥ f (x), x ∈ B. (2)


The results of this paper apply to general overestimators, although the overestimators
of interest are generally concave functions, as discussed below. Note that it would be
enough to have an overestimator over the feasible region B ∩ F but, except in special
cases, this is much more difficult to obtain. As a result, the specific structure of F will
not play a relevant role in the strategies that we will discuss.
Remark 1 Function g usually depends on the box B, i.e. on the lower and upper bounds
of all variables.
Once we have an overestimator g, an upper bound for (1) is computed by solving the
following problem

max {g(x) : x ∈ F ∩ B}. (3)

In case g is concave, (3) is a convex problem and can be solved efficiently.

Remark 2 In some cases we can use the concave envelope of f over B, i.e. the best
(smallest) concave overestimator of f over B, denoted by ConcEnv f,B . Besides sat-
isfying (2), the concave envelope also satisfies the following

ConcEnv f,B (x) ≤ g(x), x ∈ B,

for any concave overestimator g of f over B.
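As a one-dimensional illustration (an example of ours, not from the paper): for a convex univariate f, the concave envelope over [l, u] is the chord through the endpoint values, which both overestimates f and touches it at l and u. A minimal sketch:

```python
# Concave envelope of a univariate convex function over [l, u]: the chord
# through (l, f(l)) and (u, f(u)). Illustrated here for f(x) = x^2.
def chord_envelope(f, l, u):
    slope = (f(u) - f(l)) / (u - l)
    return lambda x: f(l) + slope * (x - l)

f = lambda x: x * x
g = chord_envelope(f, -1.0, 2.0)   # g(x) = x + 2 on [-1, 2]

# check on a grid that g overestimates f over [-1, 2]
overestimates = all(g(x) >= f(x) - 1e-12
                    for x in (-1.0 + 3.0 * i / 1000 for i in range(1001)))
```

By construction g coincides with f at both endpoints, so no smaller concave overestimator exists there.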


It is well known that the efficiency of a branch-and-bound approach strongly depends upon the fathoming rule: fathom a node (subproblem) if its upper bound is not larger than the current lower bound ℓ∗. Therefore, it is important to compute upper bounds which are as low as possible. But how can we improve (reduce) the upper bound computed in (3)? Improving the concave overestimator over the current box B is usually difficult, and even impossible if g is already the concave envelope of f over B. Nevertheless, in view of the dependency of g on the box B, instead of trying to improve g by keeping B fixed, we can try to improve it by reducing the box B. This finally leads us to the main subject of this paper: domain reduction strategies.

2 Domain reduction strategies

Domain reduction strategies aim at reducing the current box B into a smaller one
B̃ ⊆ B. They can be classified into two broad classes:
• feasibility-based domain reductions, which reduce B without losing feasible solu-
tions, i.e.

B ∩ F = B̃ ∩ F;

• optimality-based domain reductions, which reduce B without losing optimal solutions but (possibly) losing feasible ones, i.e. given the current lower bound ℓ∗,

B̃ ∩ {x ∈ F : f(x) ≥ ℓ∗} = B ∩ {x ∈ F : f(x) ≥ ℓ∗}.


We can further classify domain reduction strategies into:

• problem dependent;
• general purpose.

The former exploit the special structure of the problem at hand and cannot be gen-
eralized to other problems, while the latter are applicable to all the problems we are
discussing.
Many domain reduction strategies have been proposed in the literature (see, e.g.
[5,6,10,14,21]). We also recall that in [18] a theory of domain reduction tools is
developed and many earlier results are re-derived from it.
The importance of domain reduction strategies is widely recognized in the liter-
ature. For instance, in [16, p. 23] it is stated that a preprocessing of the problem in
order to tighten the lower and upper bounds on the original problem variables can
significantly enhance algorithmic performance. This fact has also been confirmed by
personal experiences of the authors (see [2,9]).
What we will show in this paper is that the iterated version of a well known (prob-
ably the best known) optimality-based, general purpose domain reduction strategy
is equivalent to a strategy based on parameterizing the problem with respect to the
single variable whose domain we want to reduce. In this way, we will provide a new
interpretation of this widely employed reduction tool.
As a side result of this interpretation we will also observe that, for some subclasses
of problems with a special structure, parametric analysis allows one to perform the
domain reduction in a different way with respect to the iterated version (although the
final result is the same). In particular, we will observe that, while the iterated version is
just convergent to the final result after infinitely many iterations, parametric analysis
may return such a result after a finite time.
The paper is organized as follows. In Sect. 3 we will present the well known opti-
mality-based, general purpose domain reduction strategy together with its iterated
version, whereas in Sect. 4 we will introduce the new domain reduction strategy. In
Sect. 5 we will prove, under mild assumptions, the equivalence between these two
strategies, which, as pointed out above, allows a new interpretation of the former and
a different way to apply it. These mild assumptions are shown to be verified when the
overestimators used are the concave envelopes in Sect. 6. Finally, in Sect. 7 we discuss
the (in)dependency of the results on the order in which the (domains of the) variables
are processed, a fact that, to the authors’ knowledge, has not been explicitly stated in
the literature.

3 A customary reduction strategy

If a lower bound ℓ∗ on the optimal value of (1) is available, then the best we can do to reduce the domain of variable xk, excluding feasible points with objective function value lower than ℓ∗, is to solve the two problems

min/max {xk : f(x) ≥ ℓ∗, x ∈ F ∩ B}.


Unfortunately, these problems can be as difficult as the original problem (1); in particular they are nonconvex if f is not a concave function. What we can do is to substitute f with its overestimator g. This leads to a quite popular domain reduction strategy, based on moving the objective function of (3) into the constraints by imposing that it does not fall below ℓ∗, and then minimizing/maximizing in turn each variable xk. Formally, the following two programs are solved for a given variable xk:

min/max {xk : g(x) ≥ ℓ∗, x ∈ F ∩ B}. (4)

In case g is concave, these are two convex programs. We call this standard domain reduction (SDR) and denote the corresponding new lower and upper bound for xk respectively by lkSDR and ukSDR.
An obvious way to improve the results of SDR is by iterating it (we stress that in the procedure below both k and ℓ∗ are fixed).

Iterated standard domain reduction (ISDR)

Step 0 Set lk0 = lk, uk0 = uk and i = 0.
Step 1 Solve the two programs

min/max {xk : gi(x) ≥ ℓ∗, x ∈ F ∩ Bi}, (5)

where

Bi = {x ∈ Rn : lki ≤ xk ≤ uik, lj ≤ xj ≤ uj, j ∈ N\{k}}, (6)

is the current box and gi is the overestimator of f with respect to Bi, and denote by lki+1 and uki+1 respectively the new lower and upper bounds on xk obtained.
Step 2 If lki+1 = lki and uki+1 = uik then stop. Otherwise, set i = i + 1 and repeat Step 1.
The above procedure either terminates after a finite number of iterations, or generates an infinite nondecreasing sequence {lki} of lower bounds and an infinite nonincreasing sequence {uik} of upper bounds. In both cases the values

lkISDR = supi lki = limi→∞ lki and ukISDR = infi uik = limi→∞ uik

define a valid domain reduction for variable xk.
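A minimal runnable sketch of the ISDR loop on a toy one-variable instance (the instance and incumbent value are invented for illustration): maximize x² over [−1, 2] with incumbent lower bound ℓ∗ = 3. The concave envelope of the convex function x² over [l, u] is its chord (l + u)x − lu, so each iteration of (5) tightens the lower endpoint.

```python
# ISDR sketch for max{x^2 : x in [-1, 2]} with incumbent lower bound lb = 3
# (toy instance; the concave envelope of x^2 over [l, u] is the chord
# g(x) = (l + u) * x - l * u).
def isdr(l, u, lb, tol=1e-12, max_iter=1000):
    for _ in range(max_iter):
        a, b = l + u, -l * u          # chord g(x) = a*x + b over the current box
        # in this instance a > 0 and g(u) = u^2 >= lb, so
        # {x in [l, u] : g(x) >= lb} is the interval [max(l, (lb - b)/a), u]
        new_l = max(l, (lb - b) / a)
        if abs(new_l - l) <= tol:     # fixpoint reached (within tolerance)
            break
        l = new_l
    return l, u

l_isdr, u_isdr = isdr(-1.0, 2.0, 3.0)
```

Here the iterates l0 = −1, l1 = 1, l2 = 5/3, … approach √3 geometrically but reach it only in the limit, matching the earlier remark that the iterated version is just convergent to its final result after infinitely many iterations.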


In the next sections we will introduce a new domain reduction strategy and establish
its relations with SDR and ISDR.

4 A new reduction strategy

The new reduction strategy is based on the definition of a one-dimensional function hk for each variable xk. In order to define hk, first we fix xk = t in (1), thus reducing by one the dimension of the problem and eliminating all the nonlinearities due to variable xk. The latter fact justifies the name nonlinearities removal domain reduction (NRDR) of this domain reduction. By fixing xk = t we are led to the following problem:

max{ f kt (x −k ) : x −k ∈ Fkt ∩ B −k }. (7)

where

B −k = {x −k ∈ Rn−1 : l j ≤ x j ≤ u j , j ∈ N \{k}} (8)

is the box for the variables x1 , . . . , xk−1 , xk+1 , . . . , xn ,

Fkt = {x −k ∈ Rn−1 : x (k) (t) ∈ F}

is the (closed and nonempty) convex set given by the projection of the intersection
of F with the hyperplane {x ∈ Rn : xk = t} on the subspace of the variables
x1 , . . . , xk−1 , xk+1 , . . . , xn , and

f kt (x −k ) = f (x (k) (t))

is the associated restriction of f. Then, we consider an overestimator gkt for fkt over B−k (again, gkt will be concave in most cases of interest). Finally, function hk is defined as the upper bound for (7) derived as in (3), i.e.

h k (t) = max{gkt (x −k ) : x −k ∈ Fkt ∩ B −k }. (9)

Once function hk is available along with a lower bound ℓ∗ on the optimal value of (1), we compute the values

lkNRDR = inf{t : hk(t) ≥ ℓ∗, t ∈ [lk, uk]} (10)

and

ukNRDR = sup{t : hk(t) ≥ ℓ∗, t ∈ [lk, uk]}. (11)

Then, obviously, the domain of xk can be reduced to [lkNRDR, ukNRDR]. For the sake of precision, we point out that the two extremes in (10) and (11) are attained, namely, “inf” and “sup” can be replaced by “min” and “max”, if hk is a continuous function, which is the case under mild assumptions, as we will show in Observation 3.
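A minimal sketch of (10)–(11) on a toy instance (invented for illustration): take hk(t) = t² on [−1, 2] with incumbent bound ℓ∗ = 3, so the reduced domain is {t ∈ [−1, 2] : t² ≥ 3} = [√3, 2]. A plain grid scan approximates the inf and sup:

```python
def nrdr_bounds(h, lk, uk, lb, n=200000):
    """Grid approximation of (10)-(11): inf/sup of {t in [lk, uk] : h(t) >= lb}.
    Accuracy is limited by the grid step; a real implementation would exploit
    the structure of h_k for the special cases where it is known in closed form."""
    step = (uk - lk) / n
    good = [lk + i * step for i in range(n + 1) if h(lk + i * step) >= lb]
    if not good:
        return None                  # the whole box can be fathomed
    return good[0], good[-1]

# toy parametric bound h_k(t) = t^2 on [-1, 2] with incumbent lb = 3
l_nrdr, u_nrdr = nrdr_bounds(lambda t: t * t, -1.0, 2.0, 3.0)
```

Note that a single evaluation of the set {t : hk(t) ≥ ℓ∗} returns the reduced interval directly, with no iteration over boxes.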
Having defined the new domain reduction, we would like to establish its relation
with SDR and ISDR. In order to establish the relation with SDR we introduce the
following natural assumption.
Assumption 1 For each t ∈ [lk , u k ] and for each x ∈ B ∩ F with xk = t, it holds that

gkt (x −k ) ≤ g(x).


The assumption simply states that, after fixing the value of variable xk to t, we are able
to find an overestimator gkt of f which is better (i.e. smaller or, at least, not larger)
than g.
Under this assumption we can prove that NRDR dominates SDR.
Observation 1 Under Assumption 1, it holds that

[lkNRDR, ukNRDR] ⊆ [lkSDR, ukSDR].

Proof We just prove that lkSDR ≤ lkNRDR. The other inequality is proved in a completely analogous way. We notice that proving lkSDR ≤ lkNRDR is implied by proving that

hk(t) < ℓ∗, t ∈ [lk, lkSDR). (12)

For all feasible solutions of (1) with xk < lkSDR it holds, by definition, that g(x) < ℓ∗. Moreover, also in view of Assumption 1, it holds that

gkt(x−k) ≤ g(x) for t such that x(k)(t) ∈ B ∩ F ⇒ hk(t) ≤ g(x),

which combined with g(x) < ℓ∗ gives (12) and then the desired result.

Actually, strict dominance can hold, as can be seen through simple examples. The dominance of NRDR with respect to SDR can also be extended to ISDR, if the following extension of Assumption 1 holds.
Assumption 2 For each i, for each t ∈ [lki, uik], and for each x ∈ B ∩ F with xk = t, it holds that

gkt(x−k) ≤ gi(x).

Indeed, if Assumption 2 holds, the proof of Observation 1 can be immediately extended to show that each intermediate bound lki computed by ISDR is dominated by lkNRDR. Then, we have also the following dominance result.

Observation 2 Under Assumption 2, it holds that

[lkNRDR, ukNRDR] ⊆ [lkISDR, ukISDR].

But the above result can actually be strengthened: under mild assumptions we will prove in the next section that equality holds. We will also show in Sect. 6 that these assumptions are verified when gi and gkt are the concave envelopes of f and fkt in the respective domains.
This result is of theoretical interest because it allows a new interpretation of ISDR. Indeed, it states the non obvious fact that iterating the standard domain reduction has the same effect as removing all the nonlinearities involving variable xk in (1) and studying the resulting relaxation. Moreover, the result also shows that we can substitute the iterative solution of problems performed by ISDR with the direct analysis of function hk performed by NRDR. Of course, the latter requires knowing how function hk is defined, which depends on the particular problem at hand. Special cases for which hk is defined are discussed in [1].

5 Equivalence between ISDR and NRDR

Recall that at iteration i of the ISDR procedure we need to solve (5) to compute lki+1 and uki+1. We introduce the following two, quite reasonable, assumptions.
Assumption 3 The functions gkt are continuous both with respect to the variables
x1 , . . . , xk−1 , xk+1 , . . . , xn and with respect to the parameter t ∈ [lk , u k ].
Assumption 3 immediately implies the following continuity result for hk, whose proof is reported below in order to make the paper self-contained, but which could also be derived from other results already existing in the literature [3,13,20].
Observation 3 Under Assumption 3, function h k is continuous over [lk , u k ].
Proof Given ε > 0 and t̄ ∈ [lk, uk], we have to prove that there exists δ > 0 such that

|hk(t) − hk(t̄)| ≤ ε

for each t ∈ [lk, uk] with |t − t̄| ≤ δ. Since B−k is compact, all continuous functions on B−k are uniformly continuous. Hence Assumption 3 implies that there exists δ′ > 0 such that

|gku(y−k) − gkv(z−k)| ≤ ε

for all u, v ∈ [lk, uk], y−k ∈ B−k ∩ Fku, z−k ∈ B−k ∩ Fkv with

‖y(k)(u) − z(k)(v)‖ ≤ δ′.

We let x̄−k ∈ B−k ∩ Fkt̄ be such that hk(t̄) = gkt̄(x̄−k). Moreover, we define the required δ > 0 so that, for each t ∈ [lk, uk] with |t − t̄| ≤ δ, letting x−k ∈ B−k ∩ Fkt be such that hk(t) = gkt(x−k), the following hold:

(i) there exists y−k ∈ B−k ∩ Fkt such that

‖y(k)(t) − x̄(k)(t̄)‖ ≤ δ′,

(ii) there exists z−k ∈ B−k ∩ Fkt̄ such that

‖z(k)(t̄) − x(k)(t)‖ ≤ δ′.

Note that the convexity and closedness of F guarantees the existence of such a δ > 0. The proof now follows from

hk(t) − hk(t̄) = gkt(x−k) − gkt̄(x̄−k) ≤ gkt̄(z−k) + ε − gkt̄(x̄−k) ≤ ε,


and

hk(t̄) − hk(t) = gkt̄(x̄−k) − gkt(x−k) ≤ gkt(y−k) + ε − gkt(x−k) ≤ ε.



Assumption 4 There exists some η∗ > 0 such that for each i, for each t ∈ [lki, uik], and for each x−k ∈ B−k ∩ Fkt

gi(x(k)(t)) ≤ gkt(x−k) + η∗ min{t − lki, uik − t}.

Assumption 4 simply states that the gap between gi and gkt cannot grow too fast as we move away from the borders of the current domain [lki, uik] for xk. In particular, the assumption implies that gkt(x−k) = gi(x(k)(t)) for t = lki and t = uik and x−k ∈ B−k. Note that Assumptions 2 and 4 together give rise to the following bracketing property

gkt(x−k) ≤ gi(x(k)(t)) ≤ gkt(x−k) + η∗ min{t − lki, uik − t}, x(k)(t) ∈ B.

Under the above assumptions we can state the announced equivalence result.

Theorem 1 Under Assumptions 2, 3 and 4, it holds that

[lkNRDR, ukNRDR] ≡ [lkISDR, ukISDR].

Proof We just prove that lkNRDR = lkISDR. The proof that ukNRDR = ukISDR is completely analogous. Let us assume, by contradiction, that lkISDR < lkNRDR. Then, in view of the continuity of hk, established in Observation 3, and in view of the definition (10) of lkNRDR, we must have

hk(lki) ≤ ℓ∗ − ρk,

for some ρk > 0 small enough and for each i. In view of Assumption 4 we have that

gi(x) ≤ hk(t) + η∗ min{t − lki, uik − t},

for each x ∈ B ∩ F with xk = t. By definition we also have that there exists a feasible (in fact, optimal) solution (x1, . . . , xn) of problem (5) with xk = lki+1. Then

ℓ∗ ≤ gi(x) ≤ hk(lki+1) + η∗(lki+1 − lki) ≤ ℓ∗ − ρk + η∗(lki+1 − lki),

from which

lki+1 − lki ≥ ρk/η∗ > 0,

which contradicts the boundedness of the sequence {lki}.



6 Concave envelope overestimators

In this section we prove that Assumptions 2, 3 and 4 hold if the original objective function f is Lipschitzian and the overestimators considered are the concave envelopes, i.e. if, for each i,

gi = ConcEnv f,Bi , (13)

with Bi defined by (6), and, for each t ∈ [lk, uk],

gkt = ConcEnv fkt,B−k , (14)

with B−k defined by (8).

Assumption 5 Function f is Lipschitzian over the box B with (possibly unknown) Lipschitz constant L, i.e. for all x, y ∈ B

|f(x) − f(y)| ≤ L‖x − y‖.

Observation 4 If gi , gkt are as in (13), (14), then Assumption 2 holds.

Observation 5 Under Assumption 5, if gkt is as in (14), then Assumption 3 holds.

Proof We first prove continuity of the gkt functions with respect to parameter t. Consider t̄ ∈ [lk, uk] and the associated gkt̄. We show that for any ε > 0 and for any t ∈ [lk, uk] such that |t − t̄| ≤ ε/L, we have

|gkt(x−k) − gkt̄(x−k)| ≤ ε (15)

for all x−k ∈ B−k. Indeed, the relation

f(x(k)(t)) ≤ f(x(k)(t̄)) + L|t − t̄| ≤ f(x(k)(t̄)) + ε

implies that the function gkt̄(x−k) + ε is a concave overestimator of fkt over B−k, i.e.

gkt(x−k) ≤ gkt̄(x−k) + ε.

Analogously,

f(x(k)(t̄)) ≤ f(x(k)(t)) + ε

implies

gkt̄(x−k) ≤ gkt(x−k) + ε,

showing (15) and hence the continuity with respect to t.


We complete the proof by observing that the concave envelope g of a Lipschitzian function f over the box B is continuous. Suppose this is not the case. Since any concave function over an open convex set is continuous, the only discontinuity can arise at a boundary point, say without loss of generality x̄(k)(lk), such that

lim_{xk→lk+} g(x̄(k)(xk)) > g(x̄(k)(lk)).

Let gklk be the concave envelope of fklk over B−k, for which gklk(x−k) ≤ g(x(k)(lk)) since the map x−k → g(x(k)(lk)) is concave over B−k. We have, for each x ∈ B,

f(x) ≤ f(x(k)(lk)) + L(xk − lk) ≤ gklk(x−k) + L(xk − lk),

i.e. the right-hand side is a concave overestimator of f over B, implying

g(x) ≤ gklk(x−k) + L(xk − lk).

Putting xk = lk and x−k = x̄−k, we arrive at

g(x̄(k)(lk)) = gklk(x̄−k).

But then the new function

q(x) = min{g(x), gklk(x−k) + L(xk − lk)}

is a concave overestimator of f over B such that

lim_{xk→lk+} q(x̄(k)(xk)) = g(x̄(k)(lk)),

i.e. a better overestimator than g, which is a contradiction.


Observation 6 Under Assumption 5, if gi, gkt are as in (13), (14), then Assumption 4 holds.

Proof First we note that, in view of Assumption 5, for each x ∈ Bi it holds that

f (x) ≤ f (x (k) (lki )) + L(xk − lki ).

In view of the same assumption it also holds that for each fixed t ∈ [lki , u ik ]

f (x (k) (lki )) ≤ f (x (k) (t)) + L(t − lki ).

Then, combining the two inequalities above, we get

f (x) ≤ f kt (x −k ) + L(xk − lki ) + L(t − lki ).


Since gkt overestimates f kt , we have that over the whole box Bi

f (x) ≤ gkt (x −k ) + L(xk − lki ) + L(t − lki ),

where the right-hand side is a concave overestimator of f over Bi and, consequently (recalling that t is fixed)

gi(x) ≤ gkt(x−k) + L(xk − lki) + L(t − lki).

In a completely similar way we can also prove that

gi (x) ≤ gkt (x −k ) + L(u ik − xk ) + L(u ik − t).

Then, for each x (k) (t) ∈ Bi we have

gi (x (k) (t)) ≤ gkt (x −k ) + 2L min{t − lki , u ik − t}

proving Assumption 4 with η∗ = 2L.
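A quick numerical check of this bracketing with η∗ = 2L on a toy separable instance (invented for illustration; we use the fact that the concave envelope of a separable sum of univariate convex terms over a box is the sum of the univariate chords): f(x1, x2) = x1² + x2² over [0, 1]², with k = 1, i = 0, so the current domain of x1 is [0, 1].

```python
import math

# f(x1, x2) = x1^2 + x2^2 over B = [0, 1]^2; its concave envelope is the
# sum of the chords of x^2 over [0, 1], i.e. g_box(x1, x2) = x1 + x2,
# while fixing x1 = t gives the envelope g_t(x2) = t^2 + x2
L = 2 * math.sqrt(2)               # Lipschitz constant of f over [0, 1]^2
g_box = lambda x1, x2: x1 + x2
g_t = lambda t, x2: t * t + x2

# verify g_t <= g_box <= g_t + 2*L*min(t, 1 - t) on a grid
bracketing_holds = all(
    g_t(i / 200, j / 200) - 1e-12
    <= g_box(i / 200, j / 200)
    <= g_t(i / 200, j / 200) + 2 * L * min(i / 200, 1 - i / 200) + 1e-12
    for i in range(201) for j in range(201)
)
```

In this instance the gap g_box − g_t equals t(1 − t), which is indeed at most min{t, 1 − t} and hence well within the 2L factor.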

Therefore, we have the following corollary of Theorem 1.

Corollary 1 Under Assumption 5, if gi, gkt are as in (13), (14), it holds that

[lkNRDR, ukNRDR] ≡ [lkISDR, ukISDR].

In [1] three special classes of problems are considered for which Assumption 5 holds
and for which we can compute gi and gkt as in (13), (14), implying that the equivalence
result stated in the previous section holds. For each of these classes, function h k is
provided in [1].

7 Iterating domain reduction over all the variables

In this section we consider a broad family of domain reduction procedures, which includes the procedures considered so far, and prove that, by applying any procedure in the family to all the variables (and iterating), the final outcome does not depend on the order in which the variables are considered.
Any domain reduction procedure can be viewed as a procedure that takes on input
a given variable xk and the current domain [l j , u j ] for all the variables x j , and returns
a new, possibly reduced, domain for xk , i.e.

[lk , u k ] = domain reduction(xk ; [l j , u j ], j ∈ N ).

A common practice is to fix some order P of the variables, to reduce the domain of
each variable in the given order, and to iteratively repeat this procedure until there are
reductions (in practical implementations, large enough reductions) from one iteration
to the next. The procedure is the following.


Multiple domain reduction

Step 0 Let P = {xj1, . . . , xjn} be a given order of the variables. Set l_j^0 = lj, u_j^0 = uj for j ∈ N and i = 0.
Step 1 For k = 1, . . . , n set

[l_{jk}^{i+1}, u_{jk}^{i+1}] = domain reduction(x_{jk}; [l_{jh}^{i+1}, u_{jh}^{i+1}], h = 1, . . . , k − 1, [l_{jh}^{i}, u_{jh}^{i}], h = k, . . . , n).

Step 2 If l_j^{i+1} = l_j^i and u_j^{i+1} = u_j^i for j ∈ N then stop. Otherwise, set i = i + 1 and repeat Step 1.

Assumption 6 Domain reduction procedure domain reduction is monotone, i.e. given domains [l̃j, ũj] and [lj, uj] that satisfy

[l̃j, ũj] ⊆ [lj, uj], j ∈ N,

we have that, for any variable xk ,

domain reduction(xk ; [l˜j , ũ j ], j ∈ N ) ⊆ domain reduction(xk ; [l j , u j ], j ∈ N ).

Note that the domain reduction procedures of the previous sections are monotone, as
well as all reasonable domain reductions we can think of.
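A minimal runnable sketch of such a monotone procedure and of order independence (the two constraints x + y ≤ 6 and x ≥ 2y over [0, 10]², and the interval-propagation rules, are invented for illustration): sweeping the variables in the two opposite orders reaches the same fixpoint box.

```python
# feasibility-based interval propagation for x + y <= 6 and x >= 2*y
def reduce_var(name, box):
    """Reduce one variable's domain; monotone: tighter inputs never
    produce looser outputs."""
    (lx, ux), (ly, uy) = box["x"], box["y"]
    if name == "x":
        # x >= 2*ly (from x >= 2y) and x <= 6 - ly (from x + y <= 6)
        return (max(lx, 2 * ly), min(ux, 6 - ly))
    # y <= x/2 <= ux/2 and y <= 6 - lx
    return (ly, min(uy, ux / 2, 6 - lx))

def multiple_reduction(order, box, tol=1e-12):
    """Sweep the variables in the given order until no domain changes."""
    while True:
        old = dict(box)
        for v in order:
            box[v] = reduce_var(v, box)
        if all(abs(box[v][i] - old[v][i]) <= tol for v in box for i in (0, 1)):
            return box

start = lambda: {"x": (0.0, 10.0), "y": (0.0, 10.0)}
box_xy = multiple_reduction(["x", "y"], start())
box_yx = multiple_reduction(["y", "x"], start())
```

The order ["y", "x"] needs one more sweep than ["x", "y"], but both stop at the same box, x ∈ [0, 6] and y ∈ [0, 3].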
Proposition 1 Under Assumption 6, the domains computed by Multiple domain reduction satisfy:

l_j^i → l̄j and u_j^i → ūj as i → ∞, j ∈ N,

and the limits l̄j and ūj do not depend on the order P of the variables.

Proof Let P = {xj1, . . . , xjn} and P′ = {xk1, . . . , xkn} be two distinct orderings of the variables. We denote by l_j^q(P) and u_j^q(P) (l_j^q(P′) and u_j^q(P′)) the lower and upper bound for variable xj after q executions of Step 1 when ordering P (P′) is employed. Note that lower and upper bounds are initialized independently of the ordering of the variables. It follows from the monotonicity condition that

[l_{j1}^1(P′), u_{j1}^1(P′)] ⊆ [l_{j1}^1(P), u_{j1}^1(P)].

Next a further application of the monotonicity condition leads to

[l_{j2}^2(P′), u_{j2}^2(P′)] ⊆ [l_{j2}^1(P), u_{j2}^1(P)].

Iterating the above results we can prove that for any q = 1, . . . , n

[l_{jq}^q(P′), u_{jq}^q(P′)] ⊆ [l_{jq}^1(P), u_{jq}^1(P)].


Therefore, after n iterations of the iterative procedure with variable ordering P′, the domain of each variable is at least as tight as its domain after one iteration with variable ordering P, i.e.

[l_j^n(P′), u_j^n(P′)] ⊆ [l_j^1(P), u_j^1(P)], j ∈ N.

Next we can iterate the proof to show that for each ν = 1, 2, . . ., after at most νn iterations with variable ordering P′, the domain of each variable is at least as good as its domain after ν iterations with variable ordering P. Symmetrically we have that after at most νn iterations with variable ordering P, the domain of each variable is at least as good as its domain after ν iterations with variable ordering P′. This is only possible if

l_j^q(P), l_j^q(P′) → l̄j and u_j^q(P), u_j^q(P′) → ūj as q → ∞, j ∈ N,

and the above limits do not depend on variable orderings.


8 Conclusions

In this paper we have discussed domain reduction strategies for nonconvex problems
over convex domains and box constraints. We proved that, under mild assumptions, the
iterated version of a well known domain reduction strategy is equivalent to a domain
reduction strategy based on fixing the value of the variable whose domain has to be
reduced, thus removing all the nonlinearities introduced by this variable. The result
is of theoretical interest because it allows a new interpretation of the results for the
iterated version of the standard domain reduction. Moreover, it also allows a different
way to compute such results by direct analysis of the function h k , on which the new
reduction strategy depends and whose definition is strictly related to the particular
problem at hand (see [1]).
Finally, we also proved that, under the natural monotonicity assumption, the result
of applying domain reduction strategies to all the variables in a given order and iterat-
ing until reductions are observed does not depend on the order in which the variables
are considered.

Acknowledgments We are grateful to Andrea Cassioli, Nick Sahinidis, Fabio Schoen and Fabio Tardella for helpful discussions on the subject and to two anonymous referees for their comments. Michele Monaci contributed to this work with some useful observations but still asked (or, better, insisted) not to be a co-author; in the absence of explicit laws that force people who deserve it to be included as co-authors we had to remove his name eventually.

References

1. Caprara, A., Locatelli, M.: Domain reduction strategies: general theory and special cases. Technical
Report, Dipartimento di Informatica, Università di Torino (2008)
2. Caprara, A., Monaci, M.: Bidimensional packing by bilinear programming. Math. Program. (to appear)
3. Choquet, G.: Outils topologiques et métriques de l’analyse mathématique. Technical Report, Centre
de documentation universitaire et SEDES, Paris (1969)


4. de Klerk, E.: The complexity of optimizing over a simplex, hypercube or sphere: a short survey. CentER
Discussion Paper Series No. 2006-85 (2006)
5. Hamed, A.S.E., McCormick, G.P.: Calculation of bounds on variables satisfying nonlinear inequality
constraints. J. Global Optim. 3, 25–47 (1993)
6. Hansen, P., Jaumard, B., Lu, S.-H.: An analytical approach to global optimization. Math. Program.
52, 227–254 (1991)
7. Horst, R., Pardalos, P.M., Thoai, N.V.: Introduction to Global Optimization. Kluwer, Dordrecht (1995)
8. Horst, R., Tuy, H.: Global Optimization: Deterministic Approaches, 3rd edn. Springer, Berlin (1996)
9. Locatelli, M., Raber, U.: Packing equal circles into a square: a deterministic global optimization
approach. Discrete Appl. Math. 122, 139–166 (2002)
10. Maranas, C.D., Floudas, C.A.: Global optimization in generalized geometric programming. Comp.
Chem. Eng. 21, 351–370 (1997)
11. Motzkin, T.S., Straus, E.G.: Maxima for graphs and a new proof of a theorem of Turan. Can. J. Math.
17, 533–540 (1965)
12. Neumaier, A.: Complete search in continuous global optimization and constraint satisfaction. Acta
Numer. 13, 271–369 (2004)
13. Rockafellar, R.T.: Integrals which are convex functionals. Pacific J. Math. 39, 439–469 (1971)
14. Ryoo, H.S., Sahinidis, N.V.: A branch-and-reduce approach to global optimization. J. Global Optim.
8, 107–139 (1996)
15. Sahinidis, N.V.: BARON: A general purpose global optimization software package. J. Global Optim.
8, 201–205 (1996)
16. Sherali, H.D.: Tight relaxations for nonconvex optimization problems using the Reformulation-Lin-
earization/Convexification Technique (RLT). In: Pardalos, P.M., Romeijn, H.E. (eds.) Handbook of
global optimization, vol. 2, pp. 1–64. Kluwer, Dordrecht (2002)
17. Tawarmalani, M., Sahinidis, N.V.: BARON on the Web, http://archimedes.scs.uiuc.edu/cgi/run.pl
18. Tawarmalani, M., Sahinidis, N.V.: Global optimization of mixed-integer nonlinear programs: a theo-
retical and computational study. Math. Program. 99, 563–591 (2004)
19. Vavasis, S.A.: Complexity issues in global optimization: a survey. In: Horst, R., Pardalos, P.M. (eds.)
Handbook of Global Optimization. Kluwer, Dordrecht (1995)
20. Wets, R.J.-B.: On inf-compact mathematical programs. Lect. Notes Comp. Sci. 5, 426–436 (1974)
21. Zamora, J.M., Grossmann, I.E.: A branch and contract algorithm for problems with concave univariate,
bilinear and linear fractional terms. J. Global Optim. 14, 217–249 (1999)
