


Expert Systems With Applications 81 (2017) 309–320

http://dx.doi.org/10.1016/j.eswa.2017.03.036

A robust ant colony optimization for continuous functions


Zhiming Chen a,∗, Shaorui Zhou b, Jieting Luo c

a Department of Credit Management, Guangdong University of Finance, No. 527 YingFu Road, Guangzhou, 510521, China
b Department of Management Science, School of Business, Sun Yat-sen University, Guangzhou, 510275, China
c Department of Information and Computing Sciences, Faculty of Science, Utrecht University, Utrecht, 3584CC, Netherlands

∗ Corresponding author. Tel.: +86 13662521750. E-mail addresses: [email protected] (Z. Chen), [email protected] (S. Zhou), [email protected] (J. Luo).

Article info
Article history: Received 7 September 2016; Revised 15 March 2017; Accepted 16 March 2017; Available online 25 March 2017.
Keywords: Broad-range search; Ant colony algorithm; Continuous optimization; Robustness.

Abstract
Ant colony optimization (ACO) for continuous functions has been widely applied in recent years in different areas of expert and intelligent systems, such as steganography in medical systems, modelling signal strength distribution in communication systems, and water resources management systems. For these problems that have been addressed previously, the optimal solutions were known a priori and contained in the pre-specified initial domains. However, for practical problems in expert and intelligent systems, the optimal solutions are often not known beforehand. In this paper, we propose a robust ant colony optimization for continuous functions (RACO), which is robust to domains of variables. RACO applies self-adaptive approaches in terms of domain adjustment, pheromone increment, domain division, and ant size without any major conceptual change to ACO's framework. These new characteristics make the search of ants not limited to the given initial domain, but extended to a completely different domain. In the case of initial domains without the optimal solution, RACO can still obtain the correct result no matter how the initial domains vary. In the case of initial domains with the optimal solution, we also show that RACO is a competitive algorithm. With the assistance of RACO, there is no need to estimate proper initial domains for practical continuous optimization problems in expert and intelligent systems.

© 2017 Elsevier Ltd. All rights reserved.

1. Introduction

When solving practical problems, ideally people can set up mathematical models to make better decisions. However, since the models' complexity increases proportionally with the problems' difficulties, mathematical methods show poor performance in handling highly complicated models. Fortunately, people see hope as intelligent algorithms emerge. Due to the advantage of computers' powerful calculation, the optimal solution can be achieved in a short time. Among the various intelligent algorithms, ant colony optimization (ACO), which is inspired by the foraging behavior of ants, has outstanding performance in solving combinatorial optimization problems (Demirel & Toksarı, 2006; Ding, Hu, Sun, & Wang, 2012; Dorigo & Gambardella, 1997; Huang, Yang, & Cheng, 2013). However, due to the limitation of its searching mechanism, ACO is not good at dealing with continuous variables. Over the past years, many attempts have been made to fill this gap. Building on this development, ACO can now be widely applied in continuous optimization decisions, which can improve expert and intelligent systems in terms of data clustering, training neural networks, scheduling in power systems, steganography in medical systems, modeling signal strength distribution in communication systems, and water resources management systems (Afshar, Massoumi, Afshar, & Mariño, 2015; Chen & Wang, 2014; Dorigo & Stützle, 2010; Edward, Ramu, & Swaminathan, 2016; Fetanat & Khorasaninejad, 2015; Fetanat & Shafipour, 2011).

The framework of ACO consists of three parts: ant-based solution construction, pheromone update, and daemon actions (optional). Based on whether or not this framework is used, relevant algorithms can be divided into two categories: ant-related algorithms and extensions of ACO to continuous functions. One of the early attempts is CACO (Bilchev & Parmee, 1995), in which a domain is divided into randomly distributed regions updated by local search and global search. API (Monmarché, Venturini, & Slimane, 2000) adopted a strategy in which ants try to cover a given area around their nest, which is moved periodically, and their searching sites are sensitive to the successful sites. CIAC (Dréo & Siarry, 2004) used stigmergic information and direct inter-individual communication to intensify ants' search. COAC (Hu, Zhang, & Li, 2008) divided a domain into many regions of various sizes and utilized pheromone and orthogonal exploration to guide ants' movement. Although the above algorithms got inspiration from ACO, they did not follow the


framework of ACO strictly. Therefore, they are considered ant-related algorithms, not real extensions of ACO to continuous functions.

The first algorithm that can be classified as an extension of ACO to continuous functions is ACOR (Socha & Dorigo, 2008), where a weighted Gaussian kernel is used to generate solutions which are stored in an archive, and the pheromone update is accomplished by replacing the worst solutions in the archive with the new solutions. Based on the principles of ACOR, in order to escape from a local optimum, DACOR (Leguizamón & Coello, 2010) was proposed to improve diversity in the population to explore more regions. Inspired by ACOR, HACO (Xiao & Li, 2011) combined continuous population-based incremental learning and differential evolution so that it could learn the mean and variance values of the next generation and avoid local optima. As another variant of ACOR, IACOR-LS (Liao, Montes de Oca, Aydın, Stützle, & Dorigo, 2011) used three types of local search procedures and increased the size of the archive over time. UACOR (Liao, Stützle, Montes de Oca, & Dorigo, 2014) put forward a framework that, in combination with an automatic parameter tuning method, enables the automatic synthesis of ACOR, DACOR and IACOR-LS. ACOMV (Liao, Socha, Montes de Oca, Stützle, & Dorigo, 2014) divided the archive into three parts to store continuous variables, ordinal variables and categorical variables respectively, and made innovations in the calculation of the weight; thus, it can deal with not only continuous optimization but also mixed-variable optimization. AM-ACO (Yang et al., 2016), which adopted an adaptive parameter adjusting strategy, a differential evolution operator and local search, can achieve a good balance between exploration and exploitation in addressing multimodal continuous optimization.

Although there are many improved ACOs for continuous functions, they all belong to a category we call limited-range-search algorithms, which search for the optimal solution in the pre-specified domains. It has been shown in the literature that these algorithms are highly sensitive to the independent variable's domain, since their results are affected by the domain's properties such as length, border and symmetry (Eiben & Bäck, 1997; Fogel & Bayer, 1995). If the given domains do not contain the optimal solution, these algorithms will result in an incorrect solution, since the ants cannot go outside the domains. Thus, it is of great importance to correctly estimate the initial domains beforehand. However, the optimal solution is sometimes hard to estimate due to the complexity of a function derived from the real world. These algorithms will not work well when the initial domains do not contain the optimal solution.

In this paper, we propose a robust ant colony optimization (RACO) which is robust to domains of variables. RACO is developed based on the grid method (Gao, Zhong, & Mo, 2003), which discretizes continuous variables such that continuous optimization is transformed into combinatorial optimization. In particular, if the initial domains are estimated incorrectly, RACO is still able to find the optimal solution. Compared with the limited-range-search algorithms, RACO uses an extensive search that is not restricted within the initial domains. Instead, it can change the borders of the domains based on the quality of the solution found, so that the ants can reach completely different domains. Thus, RACO is a broad-range-search algorithm. Given arbitrary domains, RACO is able to correct their borders towards the direction of the optimal solution.

RACO also belongs to the extensions of ACO to continuous functions. Compared with other algorithms in this category, RACO is more aligned with the original ACO, for three reasons. (1) For those algorithms, pheromone information is reflected by the solutions stored in an archive; for RACO, pheromone information is stored in a matrix whose structure is consistent with the distribution of the discrete values derived from the continuous domains. (2) Those algorithms use a continuous probability density function to generate a candidate solution, whereas RACO uses a discrete probability distribution. (3) For those algorithms, pheromone update is accomplished by replacing the worst solutions with the newly generated solutions, but RACO follows the conventional way, in which the pheromone values associated with promising solutions are increased and all pheromone values are decreased by pheromone evaporation. Since RACO does not change the original ACO's methods in terms of solution construction, transition probability and pheromone update, it is much simpler to use than other ACOs for continuous functions.

The remainder of this paper is organized as follows. In Section 2, we present the basic problem and an overview of the key principles of the grid method. In Section 3, we introduce RACO, which is based on the grid method, and present its detailed steps. In Section 4, we show the experimental results of RACO when the initial domains contain and do not contain the optimal solution, and discuss the self-adaptive mechanism in depth. Finally, in Section 5, we present some conclusions and future research directions.

2. Problem and grid method

A continuous optimization problem can be formally defined as the following model: P = (X, Ω, f), where X is an n-dimensional solution vector with continuous variables x_i (i = 1, 2, ..., n), Ω is a set of constraints among the variables, and f is an objective function mapping from X to a set of non-negative real numbers. S is assumed to be the value space of X. To solve the continuous optimization problem, the optimal X* which minimizes the function should be obtained: f(X*) ≤ f(X), ∀X ∈ S. If Ω = ∅, then P is called an unconstrained problem model; otherwise, P is called a constrained one. A constrained problem model can typically be converted to an unconstrained one through a penalty function. Hence, the P discussed in this study is an unconstrained model. The grid method is similar to drawing a grid over the domain of each variable; thereafter, the continuous domain is discretized into a few points. The principles of the grid method are introduced in the following subsections.

2.1. Solution construction

Based on the nature of the problem, we first estimate the initial domains: x_i ∈ [x_i^min, x_i^max] (i = 1, 2, ..., n). Assume our estimation is correct and the initial domains contain the optimal solution. Then we divide the domain of x_i into k equivalent shares, so that the value of x_i can be selected from the k + 1 points. The equivalent share's value is h_i = (x_i^max − x_i^min)/k (i = 1, 2, ..., n). Beginning from the first variable, each ant selects a point in the domain of each variable. A route that corresponds to a solution vector X is then completed after the n choices of an ant. Fig. 1 describes one route as (2, 1, 3, ..., k), which corresponds to the following solution:

(x_1, x_2, ..., x_n) = (x_1^min + 2·h_1, x_2^min + h_2, x_3^min + 3·h_3, ..., x_n^min + k·h_n)

Fig. 1. Construction of the search space of an ant.
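To make the grid method concrete, the following sketch shows the discretization and the route-to-solution mapping just described (an illustrative Python sketch, not the authors' code; the names build_grid and route_to_solution are ours).

```python
import numpy as np

def build_grid(lower, upper, k):
    """Divide each domain [lower[i], upper[i]] into k equivalent shares.

    Returns h (the share length per variable) and a (k+1) x n matrix of
    candidate points, with points[j, i] = lower[i] + j * h[i]."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    h = (upper - lower) / k
    points = lower + np.outer(np.arange(k + 1), h)
    return h, points

def route_to_solution(route, points):
    """Map a route (one point index per variable) to a solution vector X."""
    return np.array([points[j, i] for i, j in enumerate(route)])

# The route (2, 1, 3) selects x1_min + 2*h1, x2_min + h2, x3_min + 3*h3.
h, pts = build_grid([-3.0, -3.0, -3.0], [7.0, 7.0, 7.0], k=10)
x = route_to_solution([2, 1, 3], pts)   # -> [-1., -2., 0.]
```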
2.2. Transition probability and pheromone update

Based on the distribution of the points, we construct a corresponding pheromone matrix Tau. Note that Tau is a (k + 1) × n matrix. The number of rows is equal to the number of discrete values assigned to each variable; the number of columns is equal to the number of variables. An element of Tau is denoted as τ_ij, which represents the amount of pheromone left on the ith point within the domain of the jth variable. Each ant begins its trip from the first variable x_1 and completes a route by going through all the other variables. For each ant, the probability of choosing the ith
point within the domain of the jth variable is denoted by p_ij, given as follows:

p_ij = τ_ij / Σ_{i=1}^{k+1} τ_ij    (1)

Eq. (1) follows the format of the conventional transition probability; that is, τ_ij is used to represent the attraction intensity of a point. Note that we do not use the weighting function η(·) or the positive parameters α and β which determine the relation between pheromone information and heuristic information, and therefore p_ij is very simple.
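As a concrete reading of Eq. (1), the sketch below normalises each column of Tau into selection probabilities and draws one point index per variable (illustrative Python; choose_route is our own name).

```python
import numpy as np

def choose_route(tau, rng):
    """Build one ant's route from the (k+1) x n pheromone matrix Tau (Eq. (1)).

    For variable j, point i is chosen with probability tau[i, j] / sum_i tau[i, j]."""
    n = tau.shape[1]
    route = np.empty(n, dtype=int)
    for j in range(n):
        p = tau[:, j] / tau[:, j].sum()
        route[j] = rng.choice(len(p), p=p)
    return route

rng = np.random.default_rng(0)
tau = np.ones((11, 3))          # k = 10 shares -> 11 points, 3 variables
route = choose_route(tau, rng)  # uniform choice while Tau is still flat
```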
After one trip, each ant leaves a pheromone increment Δτ_ij upon each point that it went through. We calculate Δτ_ij using the ant-cycle system model as follows:

Δτ_ij = Q / f    (2)

where f is the objective function value that corresponds to the route of an ant, and Q is a constant that represents the pheromone amount. The lower the value of the objective function is, the larger the pheromone increment is. In the new trip, a point with more pheromone has a more positive effect on directing the search of an ant. In order to prevent the algorithm from being prematurely trapped in a local optimum, we use the elite tactic for the pheromone update. In other words, only the pheromones of the best route, whose objective function value is minimal, are increased; the pheromones of the other routes remain unchanged. The update method is as follows:

τ_ij^new = (1 − ρ) · τ_ij^old + Δτ_ij    (3)

where ρ is a constant below 1 and represents the evaporation rate. The larger ρ is, the stronger the forgetting effect is. Hence, it is possible for ants to explore a new area in the new round.
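The elite update of Eqs. (2) and (3) could then be implemented along the following lines (a sketch that assumes a positive objective value, as in the paper's formulation; update_pheromone is our own name).

```python
def update_pheromone(tau, best_route, best_f, Q, rho):
    """Elite pheromone update (Eqs. (2) and (3)).

    Every entry evaporates by the factor (1 - rho); only the points on the
    best route of the loop receive the increment Q / f_best."""
    delta = Q / best_f              # Eq. (2): lower f -> larger increment (f assumed > 0)
    tau *= (1.0 - rho)              # evaporation part of Eq. (3)
    for j, i in enumerate(best_route):
        tau[i, j] += delta          # reward only the elite route
    return tau
```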

3. RACO for continuous functions

It is of great importance for conventional ACO for continuous functions to correctly estimate the domains of the independent variables. If the given domains do not contain the optimal solution, the algorithm will obtain a wrong solution because the search of the ants is conducted in the wrong area. Thus, offering ants the ability to correct the domains is the key to solving this problem. In this section, we propose a robust ant colony optimization (RACO) that applies novel approaches in terms of domain adjustment, pheromone increment, domain division and ant size. It is worthwhile to note that RACO uses the grid method, and thus the solution construction, transition probability and pheromone update remain the same as described above.

3.1. Self-adaptive domain adjustment

In the grid method, the unequal amounts of pheromone left on different points represent the quality of routes: a point with more pheromone must be closer to the optimal solution. If a given domain contains the optimal solution, then the pheromone tends to converge to a point (near the optimal solution) within the domain after iterative searches. If a given domain does not contain the optimal solution, then the pheromone tends to converge to a point near the border, because the border is closer to the optimal solution. When the point with the most pheromone is near the border, the ants emit a signal that the optimal solution is near the border, either within or outside of it. Hence, the ants should be given the ability to adjust the domain. We use θ as a threshold for deciding whether to launch the mechanism of the self-adaptive domain adjustment. Here θ, with a recommended value of 0.1–0.3, represents a percentage of the number of rows. The value of θ should not make θ(k + 1) too large or (1 − θ)(k + 1) too small, so that it properly represents that the node with the most pheromone is near the domain border. The row number of the largest element in each column of Tau is denoted by r_i (i = 1, 2, ..., n).

When r_i ≤ θ(k + 1) or r_i ≥ (1 − θ)(k + 1), we know that the point with the most pheromone is near the lower or upper border. This means that the optimal solution is probably outside of the border. Therefore, the ants need to explore a new domain using r_i as the center. The new borders are reset as follows:

x_i^min ← r_i − (k/2 + Δ_1) · h_i    (4)

x_i^max ← r_i + (k/2 + Δ_1) · h_i    (5)

(i = 1, 2, ..., n)

where Δ_1 represents the increment of the domain. Note that Δ_1 is valued based on the performance of the algorithm. Moreover, Δ_1 should be small in order to make the new domain only slightly longer than the old one. The reason for widening the domain is that the value h_i of the equivalent share can then be changed dynamically, and changing h_i facilitates finding the optimal solution. The new domain is 2Δ_1·h_i longer than the old one.

When θ(k + 1) ≤ r_i ≤ (1 − θ)(k + 1), we know that the point with the most pheromone is far from the border. This means that the optimal solution is most likely within the domain. Therefore, the ants need to narrow the domain to improve the accuracy of the search. The new borders are reset as follows:

x_i^min ← x_i^min + (x_i^max − x_i^min) · Δ_2    (6)

x_i^max ← x_i^max − (x_i^max − x_i^min) · Δ_2    (7)

(i = 1, 2, ..., n)

where Δ_2 represents a percentage of the length of the domain. Note that Δ_2 should be small, since otherwise the optimal solution would be missed due to substantial narrowing. The new domain is 2Δ_2(x_i^max − x_i^min) shorter than the old one.

In the case of initial domains without the optimal solution, Eqs. (4) and (5) allow the ants to perform a broad-range search. Because the range of the search can be expanded, the ants can finally reach a new domain which may be far from the initial one. With the help of the broad-range search, RACO is able to find the optimum beyond the given domains. In the case of given domains that do contain the optimal solution, there is no need to explore a new area; therefore, we can use another method to make the ants search more efficiently.

If r_i ≤ θ(k + 1), then we keep the lower border unchanged and merely diminish the upper border as follows:

x_i^max ← x_i^max − (x_i^max − x_i^min) · Δ_3    (8)

If r_i ≥ (1 − θ)(k + 1), then we keep the upper border unchanged and merely increase the lower border as follows:

x_i^min ← x_i^min + (x_i^max − x_i^min) · Δ_3    (9)

where Δ_3 also represents a percentage of the length of the domain, and its value doubles the value of Δ_2. Note that only one side of the domain is adjusted using Eq. (8) or (9), whereas both sides of the domain are adjusted using Eqs. (6) and (7). Thus, if we require that the domains decrease to the same degree in these two cases, we should set Δ_3 = 2Δ_2. If θ(k + 1) ≤ r_i ≤ (1 − θ)(k + 1), we change the lower and upper borders using Eqs. (6) and (7). The purpose of Eqs. (8) and (9) is to make the ants search within the initial domains.
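The case analysis of Eqs. (4)–(9) is summarised in the sketch below. Two points are our interpretation rather than the paper's text: the centre used for widening is taken as the coordinate of the point with the most pheromone (row r_i of Tau), and whether a domain is treated as containing the optimal solution is passed in as a flag, since the paper does not spell out how that case is detected.

```python
def adjust_domain(lo, hi, h, r, k, theta, d1, d2, d3, inside_optimum):
    """Self-adaptive adjustment of one variable's domain (Eqs. (4)-(9)).

    r is the row number (1..k+1) of the largest pheromone entry in this
    variable's column of Tau; c is the coordinate of that point and is used
    as the centre when the domain is widened."""
    c = lo + (r - 1) * h
    near_lower = r <= theta * (k + 1)
    near_upper = r >= (1 - theta) * (k + 1)
    if near_lower or near_upper:
        if not inside_optimum:
            # Eqs. (4)-(5): widen around the best point; new length is (k + 2*d1)*h
            return c - (k / 2 + d1) * h, c + (k / 2 + d1) * h
        if near_lower:
            # Eq. (8): keep the lower border, pull in the upper border
            return lo, hi - (hi - lo) * d3
        # Eq. (9): keep the upper border, push in the lower border
        return lo + (hi - lo) * d3, hi
    # Eqs. (6)-(7): the best point is far from the borders, so shrink both sides
    return lo + (hi - lo) * d2, hi - (hi - lo) * d2
```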
3.2. Self-adaptive pheromone increment

When the length of the initial domain is very large, it is difficult for the conventional ACO for continuous functions to obtain an accurate result. The reason is that the parameter Q used for the pheromone increment in Eq. (2) is constant, whereas the function value changes as the ants' search area varies. The order of magnitude of Q may become inconsistent with that of the function value, leading to little change in the pheromone increment Δτ. Because the ants choose the points based on the pheromones, the algorithm cannot converge to the optimal solution if different points carry similar amounts of pheromone.

In the case of initial domains without the optimal solution, RACO can shift the search area of the ants to a new, completely different domain. Moreover, the order of magnitude of the function value may change significantly. If RACO uses a fixed Q, the accuracy problem becomes more serious. Hence, we propose a self-adaptive pheromone increment that changes the value of Q based on the order of magnitude of the function value. At the beginning of each iteration, we randomly generate a number of routes and calculate the corresponding function values. The order of magnitude of the minimum function value is denoted by OM_min. For each iteration, we set Q dynamically as follows:

Q = 10^(OM_min + 1)    (10)

The self-adaptive property guarantees that the order of magnitude of Q is consistent with that of the function value; thus, the precision of the algorithm can be improved. Moreover, generating routes at random can be considered a form of function sampling, which can not only infer the order of magnitude of Q but also optimize the initial pheromone matrix Tau. Since the conventional ACO generally makes the initial elements of Tau equal, the ants select different routes with equal probability, and it thus takes a long time to find better routes. If Tau is initialized with certain heuristic information, the algorithm's convergence speed can be improved. For example, we randomly generate 100 routes and sort them in descending order of function value. The first 30 routes are given the pheromone increments calculated by Eqs. (2) and (10). In this way, the ants' search in the early stages becomes more effective, because they are inclined to choose the better points.
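A minimal sketch of Eq. (10) follows; how OM_min is obtained for zero or negative function values is not specified in the paper, so taking the floor of log10 of the absolute value with a zero guard is our assumption.

```python
import math

def adaptive_q(sample_values):
    """Self-adaptive pheromone amount (Eq. (10)).

    OM_min is the order of magnitude of the smallest sampled objective value;
    Q is set one order above it so that Q / f stays well scaled."""
    f_min = min(sample_values)
    om_min = math.floor(math.log10(abs(f_min))) if f_min != 0 else 0
    return 10.0 ** (om_min + 1)

# A sampled minimum around 3.2e4 gives OM_min = 4 and hence Q = 1e5.
Q = adaptive_q([5.7e4, 3.2e4, 8.9e4])
```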
3.3. Self-adaptive domain division and ant size

The substance of the grid method is that the continuous domain is converted into finite discrete points. We find that the domain division is very important for the performance of RACO, because a different number of equivalent shares for the same domain may lead to a completely different result. Note that the structure of the ants' search space (the values of the points) depends on the number of equivalent shares. The ants will inevitably miss some values in the continuous domains because they search only the limited points. Although the ants can find the best result among the given points, this best result may not be the global best because of the structural limitation of the search space. Moreover, if the length of a given domain is substantial, too few equivalent shares may have a negative effect on the algorithm's correctness, whereas numerous equivalent shares may increase the search time. Thus, obtaining a reasonable number of equivalent shares is a trial-and-error process.

In this paper, we propose a self-adaptive domain division method. The number of equivalent shares is denoted by k, the value of which is changed dynamically depending on the performance of RACO. At the beginning, we assign a small value to k to increase the convergence speed (k is a positive integer). If the ants cannot find a better result after several iterations, we let k ← k + 1. In this way, the structure of the search space is changed dynamically, so that the ants can search other values in the continuous domains and avoid getting stuck. When k is small, there is no need to set a large ant size (i.e., the number of ants) m. However, as k increases, the ant size needs to be consistent with k. Thus, we make the ant size self-adaptive by letting m ← k + Δm, where Δm is a small positive integer, thereby making m just slightly larger than k and preventing too large an ant size from producing a long search time.
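These division and ant-size rules reduce to a couple of lines; the sketch below uses the values t = 15 and Δm = 2 reported later in Section 4 (the function name and the stagnation bookkeeping are ours).

```python
def adapt_division(k, delta_m, stagnant_loops, t=15):
    """Self-adaptive domain division and ant size (Section 3.3).

    If no better result has been found for t consecutive outer iterations,
    refine the grid by one share (k <- k + 1); the ant size then tracks k
    via m <- k + delta_m, so m stays only slightly larger than k."""
    if stagnant_loops >= t:
        k += 1
    m = k + delta_m
    return k, m

k, m = adapt_division(k=11, delta_m=2, stagnant_loops=15)   # -> k = 12, m = 14
```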

Fig. 2. The flow of RACO for continuous functions.

3.4. Procedures of the RACO algorithm

Step 1: Initialization. Determine the number of loops nc_max for the search of the ants in one iteration, the accuracy ε of the termination condition, and the ant size's increment Δm. Assign an initial value to k. Provide an arbitrary initial domain for each variable: x_i ∈ [x_i^min, x_i^max] (i = 1, 2, ..., n).
Step 2: Divide the domain of each variable into k equivalent shares: h_i = (x_i^max − x_i^min)/k (i = 1, 2, ..., n), where h_i is the value of the equivalent share.
Step 3: If max(h_1, h_2, ..., h_n) < ε (other termination conditions can be used here), the algorithm terminates; output the global minimal function value F_min. Otherwise, go to Step 4.
Step 4: Set the loop number nc ← 0 and the self-adaptive ant size m ← k + Δm. Initialize the pheromone matrix Tau by letting each element be 1. Thereafter, randomly generate a certain number of routes and update Tau according to some of the better routes, so that Tau owns heuristic information. Calculate the self-adaptive pheromone amount Q using Eq. (10).
Step 5: Beginning from x_1 to x_n, each ant selects one point in the domain of each variable based on the transition probability p_ij (i = 1, 2, ..., k + 1; j = 1, 2, ..., n) calculated by Eq. (1).
Step 6: For the nc-th loop, find the best route with the minimal objective function value among the m routes generated by the m ants. Update Tau using Eqs. (2) and (3) based on the best route. Let nc ← nc + 1.
Step 7: If nc ≤ nc_max, go back to Step 5. Otherwise, find the minimal function value f_min among all loops. If f_min < F_min, then F_min ← f_min. If the condition f_min ≥ F_min has been satisfied t times, let k ← k + 1.
Step 8: Find the row number r_i (i = 1, 2, ..., n) of the largest element in each column of Tau. If r_i is near the domain border, then widen the domain using Eqs. (4) and (5) in the case of a domain without the optimal solution, or narrow the domain using Eq. (8) or (9) in the case of a domain with the optimal solution. If r_i is far from the domain border, narrow the domain using Eqs. (6) and (7). Go back to Step 2.

The description of our algorithm is provided in Fig. 2.
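Putting the steps together, one possible shape of the outer loop is sketched below. It reuses the helper functions sketched in Sections 2 and 3, omits the heuristic seeding of Tau with the better sampled routes, always treats a border hit as the "widen" case, and assumes an evaporation rate ρ = 0.1, which the paper does not report; it is our reading of the procedure, not the authors' implementation.

```python
import numpy as np

def raco(f, lower, upper, k=11, delta_m=2, theta=0.2, rho=0.1,
         d1=1.25, d2=0.05, d3=0.1, nc_max=50, eps=1e-10, t=15,
         max_outer=500, seed=0):
    """Sketch of the RACO outer loop (Steps 1-8); relies on build_grid,
    route_to_solution, choose_route, update_pheromone, adaptive_q and
    adjust_domain as sketched earlier."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float).copy()
    upper = np.asarray(upper, dtype=float).copy()
    n = len(lower)
    F_min, best_x, stagnant = np.inf, None, 0
    for _ in range(max_outer):
        h, pts = build_grid(lower, upper, k)                  # Step 2
        if h.max() < eps:                                     # Step 3
            break
        m = k + delta_m                                       # Step 4: ant size
        tau = np.ones((k + 1, n))
        samples = [f(route_to_solution(rng.integers(0, k + 1, n), pts))
                   for _ in range(100)]                       # random routes to scale Q
        Q = adaptive_q(samples)                               # Eq. (10)
        f_min, best_route = np.inf, None
        for _ in range(nc_max):                               # Steps 5-7
            routes = [choose_route(tau, rng) for _ in range(m)]
            values = [f(route_to_solution(r, pts)) for r in routes]
            b = int(np.argmin(values))
            tau = update_pheromone(tau, routes[b], values[b], Q, rho)
            if values[b] < f_min:
                f_min, best_route = values[b], routes[b]
        if f_min < F_min:
            F_min, best_x, stagnant = f_min, route_to_solution(best_route, pts), 0
        else:
            stagnant += 1
        rows = 1 + np.argmax(tau, axis=0)                     # Step 8
        for i in range(n):
            lower[i], upper[i] = adjust_domain(
                lower[i], upper[i], h[i], rows[i], k, theta, d1, d2, d3,
                inside_optimum=False)     # the sketch always widens at a border hit
        if stagnant >= t:                                     # Section 3.3: refine the grid
            k, stagnant = k + 1, 0
    return best_x, F_min
```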
4. Experimental results and analysis

In this section, we evaluate the performance of RACO through various numerical experiments, which can be summarized in three parts. (1) Given initial domains with the optimal solution, compare RACO with other metaheuristics for continuous optimization. (2) Given initial domains without the optimal solution, test RACO's robustness. (3) Perform sensitivity analysis for the key parameters of RACO, and demonstrate the mechanism of the self-adaptive domain adjustment. The initial parameters used in RACO are k = 11, θ = 0.2, Δm = 2, Δ_1 = 1.25, Δ_2 = 0.05, Δ_3 = 0.1, and nc_max = 50. Once the number of iterations in which the ants cannot find a better solution reaches 15, we let k ← k + 1.

Table 1
First part of benchmark functions.

Sphere: f(x) = Σ_{i=1}^{n} x_i²; x* = (0, ..., 0); f_min = 0.
Ellipsoid: f(x) = Σ_{i=1}^{n} (100^{(i−1)/(n−1)} x_i)²; x* = (0, ..., 0); f_min = 0.
Cigar: f(x) = x_1² + 10⁴ Σ_{i=2}^{n} x_i²; x* = (0, ..., 0); f_min = 0.
Tablet: f(x) = 10⁴ x_1² + Σ_{i=2}^{n} x_i²; x* = (0, ..., 0); f_min = 0.
Rosenbrock: f(x) = Σ_{i=1}^{n−1} [100(x_i² − x_{i+1})² + (x_i − 1)²]; x* = (1, ..., 1); f_min = 0.
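For reference, two of these benchmarks written out in code (a direct transcription of the formulas in Table 1; x is assumed to be a NumPy vector).

```python
import numpy as np

def sphere(x):
    """Sphere: sum_i x_i^2, minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rosenbrock(x):
    """Rosenbrock: sum_i [100 (x_i^2 - x_{i+1})^2 + (x_i - 1)^2], minimum 0 at (1, ..., 1)."""
    return float(np.sum(100.0 * (x[:-1] ** 2 - x[1:]) ** 2 + (x[:-1] - 1.0) ** 2))
```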

Note that RACO can solve different benchmark functions under the same parameter setting, and the value of k will vary dynamically during computation, as will the ant size m.

4.1. Tests on initial domains with the optimal solution

Many algorithms have been proposed to solve continuous optimization problems, and it is obviously impractical to compare RACO with each of them. Since ACOR (Socha & Dorigo, 2008) is recognized by ACO's founder as an effective algorithm, we compare the results found and referenced in that paper with those obtained by RACO. As they did, the comparison metaheuristics are divided into three groups:

(1) Probability-learning methods, which explicitly model and sample probability distributions.
(2) Other metaheuristics, which were originally developed for combinatorial optimization and later adapted to continuous domains.
(3) Ant-related methods, which draw their inspiration from the behavior of ants.

We use the number of function evaluations rather than CPU time as the performance measure, because it is insensitive to different programming languages, code-optimization skills and computer configurations. In order to make the results obtained by RACO comparable to those obtained with other algorithms, the performance measure, which is consistent with that used in the original papers, differs slightly between experiment groups: the median number of function evaluations is used in the 1st experiment group, and the average number of function evaluations is used in the 2nd and 3rd experiment groups. Moreover, we follow the same initial domains, termination conditions and running times as the competing algorithms used.

The probability-learning methods in the comparison include the (1 + 1)-Evolution Strategy with 1/5th-success-rule ((1 + 1)-ES), the Evolution Strategy with Cumulative Step Size Adaptation (CSA-ES), the Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), the Iterated Density Estimation Evolutionary Algorithm (IDEA) and the Mixed Bayesian Optimization Algorithm (MBOA). The size of the population used in the above algorithms is chosen separately for each algorithm-problem pair; the smallest population is selected from the set p ∈ {10, 20, 50, 100, 200, 400, 800, 1600, 3200}. For a detailed description of the parameter settings, we refer the reader to Kern, Müller, Hansen, Büche, Ocenasek, and Koumoutsakos (2004). The basic parameters of ACOR are as follows: the number of ants m = 2, speed of convergence n = 0.85, locality of the search process q = 10−4 and archive size k = 50. The benchmark functions, ranging from simple to complex, with dimension n = 10, are listed in Table 1. For a fair comparison, we follow the same experimental setups as the literature. All the benchmark functions have been executed 20 times each. The termination condition is given as:

|f − f*| < ε

where f is the best function value found by RACO, f* is the (known a priori) optimal value for the benchmark function, and ε = 10−10. We directly take the results of ACOR, (1 + 1)ES, CSA-ES, CMA-ES, IDEA and MBOA from Socha and Dorigo (2008). Table 2 shows the median number of function evaluations (MNFE) obtained by RACO and the other six algorithms, with the best results highlighted. In addition, the average error (AE) obtained by RACO is reported in the last column to help future researchers perform accuracy comparisons. The AE represents the average difference over runs between the minimum function values found and the optimal value. We can see that RACO performs quite well in terms of search speed: it achieves the lowest MNFE, much smaller than that of the second best algorithm. Note that the order of magnitude of the MNFE achieved by the second best algorithm on each function is 4, while for RACO the order of magnitude of the MNFE is 3 on 80% of the benchmark functions. Although RACO's MNFE exceeds 1000 on the Rosenbrock function, the result is still 82% less than that of the second best algorithm. The main cause of the large differences between the performance of RACO and the reference algorithms is the mechanism of domain adjustment. A domain including the optimal solution is narrowed smaller and smaller by RACO in the iterative process, and a smaller search area makes it easier for the ants to locate the optimal solution. Moreover, the grid method discretizes a continuous domain into a limited number of values, so there is no need for the ants to consider numerous candidate solutions in each iteration. Thus, the exploration phase is fostered.

The other metaheuristics used in the comparison are the Continuous Genetic Algorithm (CGA), Enhanced Continuous Tabu Search (ECTS), Enhanced Simulated Annealing (ESA) and Differential Evolution (DE). The parameters of these algorithms are essentially picked by a trial-and-error procedure. The ant-related methods used in the comparison are Continuous ACO (CACO), the API algorithm and the Continuous Interacting Ant Colony (CIAC). The parameter settings are as follows: for CACO, the ant size m = 100, number of regions r = 200, mutation probability p1 = 0.5, fashion crossover probability p2 = 1; for API, the ant size m = 20, number of explorations for each ant t = 50, failed searching times Placal = 50; for CIAC, the ant size m = 100, ranges distribution ratio σ = 0.5, persistence of pheromonal spots ρ = 0.1, initial messages number μ = 10; for ACOR, the ant size m = 2, speed of convergence n = 0.85, locality of the search process q = 10−1 and archive size k = 50. All the results of the referenced algorithms are provided by Socha and Dorigo (2008). The benchmark sets, consisting of unimodal and multimodal functions, are presented in Tables 3 and 4, with the exception of the Sphere and Rosenbrock functions listed in Table 1. In order to ensure a fair comparison, the experimental setups are the same as in the literature. RACO was independently run 100 times on each function, and the termination condition is given as:

|f − f*| < ε_1·f* + ε_2

Table 2
Comparison of results of RACO, ACOR and probability-learning methods.

Function (domain, n)    |    MNFE: ACOR / (1+1)ES / CSA-ES / CMA-ES / IDEA / MBOA / RACO    |    AE: RACO

Sphere x :[−3,7]n , n=10 1371.1 1370 1371.6 1371.3 1375 1418 192.5 8.11E−11
Ellipsoid x :[−3,7]n , n= 10 4452.6 4516 4560 4450 4451.6 4464 218 8.96E−11
Cigar x :[−3,7]n , n = 10 3841.4 4450 4640 3840 3844.6 3852 235 8.19E−11
Tablet x :[−3,7]n , n = 10 2567 2613 2632 2568.7 2569.9 2591 203 8.74E−11
Rosenbrock x :[−5,5]n , n = 10 7191.1 7241 7370 7190 7400 8290 1256 9.91E−11

Table 3
Second part of benchmark functions.

Hartmann (H3,4): f(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{3} a_ij (x_j − p_ij)²), with
a_ij = [[3.0, 10.0, 30.0], [0.1, 10.0, 35.0], [3.0, 10.0, 30.0], [0.1, 10.0, 35.0]], c_i = (1.0, 1.2, 3.0, 3.2),
p_ij = [[0.3689, 0.1170, 0.2673], [0.4699, 0.4387, 0.7570], [0.1091, 0.8732, 0.5547], [0.0382, 0.5743, 0.8828]];
x* = (0.114, 0.555, 0.855); f_min = −3.8628.

Hartmann (H6,4): f(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{6} a_ij (x_j − p_ij)²), with
a_ij = [[10.0, 3.00, 17.0, 3.50, 1.70, 8.00], [0.05, 10.0, 17.0, 0.10, 8.00, 14.0], [3.00, 3.50, 1.70, 10.0, 17.0, 8.00], [17.0, 8.00, 0.05, 10.0, 0.10, 14.0]], c_i = (1.0, 1.2, 3.0, 3.2),
p_ij = [[0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886], [0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991], [0.2348, 0.1451, 0.3522, 0.2883, 0.3047, 0.6650], [0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381]];
x* = (0.201, 0.150, 0.477, 0.275, 0.311, 0.657); f_min = −3.3223.

Shekel (S4,k, k = 5, 7, 10): f(x) = −Σ_{i=1}^{k} [(x − a_i)(x − a_i)^T + c_i]^{−1}, with rows of a_ij given by
(4, 4, 4, 4), (1, 1, 1, 1), (8, 8, 8, 8), (6, 6, 6, 6), (3, 7, 3, 7), (2, 9, 2, 9), (5, 5, 3, 3), (8, 1, 8, 1), (6, 2, 6, 2), (7, 3.6, 7, 3.6),
c_i = (0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5);
x* = (4, 4, 4, 4); f_min = −10.1532 (k = 5), −10.4029 (k = 7), −10.5364 (k = 10).

Table 4
Third part of benchmark functions.

Griewangk: f(x) = Σ_{i=1}^{n} x_i²/4000 − Π_{i=1}^{n} cos(x_i/√i) + 1; x* = (0, ..., 0); f_min = 0.
Goldstein & Price: f(x) = [1 + (x_1 + x_2 + 1)²(19 − 14x_1 + 3x_1² − 14x_2 + 6x_1x_2 + 3x_2²)] · [30 + (2x_1 − 3x_2)²(18 − 32x_1 + 12x_1² + 48x_2 − 36x_1x_2 + 27x_2²)]; x* = (0, −1); f_min = 3.
Martin & Gaddy: f(x) = (x_1 − x_2)² + ((x_1 + x_2 − 10)/3)²; x* = (5, 5); f_min = 0.
B2: f(x) = x_1² + 2x_2² − 0.3cos(3πx_1) − 0.4cos(4πx_2) + 0.7; x* = (0, 0); f_min = 0.
Branin RCOS: f(x) = (x_2 − 5.1x_1²/(4π²) + 5x_1/π − 6)² + 10(1 − 1/(8π))cos(x_1) + 10; 3 optima; f_min = 0.397887.
Easom: f(x) = −cos(x_1)cos(x_2)exp(−((x_1 − π)² + (x_2 − π)²)); x* = (π, π); f_min = −1.
Zakharov: f(x) = Σ_{i=1}^{n} x_i² + (Σ_{i=1}^{n} 0.5·i·x_i)² + (Σ_{i=1}^{n} 0.5·i·x_i)⁴; x* = (0, ..., 0); f_min = 0.
De Jong: f(x) = x_1² + x_2² + x_3²; x* = (0, 0, 0); f_min = 0.

where f is the best function value found by RACO, f* is the (known a priori) optimal value for the benchmark function, and ε_1 = ε_2 = 10−4 are respectively the relative and absolute errors.

Tables 5 and 6 present the results obtained by RACO, ACOR, and, respectively, the other ant-related algorithms and the other metaheuristics. In addition to the average number of function evaluations (ANFE), the success rates are reported, because some of the algorithms failed to find the optima. The highlighted results indicate where RACO outperforms, or performs equally well as, the other algorithms. Also, the average error (AE) obtained by RACO is reported in the last column.

Table 5
Comparison of results of RACO, ACOR and ant-related methods.

Function (domain, n)    |    ANFE: RACO / ACOR / CACO / API / CIAC    |    Success rate (%): RACO / ACOR / CACO / API / CIAC    |    AE: RACO

Rosenbrock (R2 ) x : [−5,10]n , n=2 78.9 820 828.3 832 834 100 100 100 100 100 5.61E−05
Sphere x : [−5.12,5.12]n , n= 6 47 781 809 794 845 100 100 100 100 100 9.90E−05
Griewangk x :[−5.12,5.12]n , n = 10 30 1390 1426 – 1426 97 61 100 – 52 8.13E−05
Goldstein and Price x :[−2,2]n , n = 2 49 384 398 – 445 100 100 100 – 56 2.56E−04
Martin and Gaddy x :[−20,20]n , n = 2 16 345 350 – 379 100 100 100 – 20 2.11E−05
B2 x :[−100,100]n , n = 2 81 544 – – 566 100 100 – – 100 8.91E−05
Rosenbrock (R5 ) x :[−5,10]n , n = 5 184.3 2487 – – 2503 100 97 – – 90 7.17E−05
Shekel (S4,5 ) x :[0,10]n , n = 4 47.9 787 – – 837 56 57 – – 5 7.63E−04

Table 6
Comparison of results of RACO, ACOR and other metaheuristics.

Function (domain, n)    |    ANFE: RACO / ACOR / CGA / ECTS / ESA / DE    |    Success rate (%): RACO / ACOR / CGA / ECTS / ESA / DE    |    AE: RACO

Branin RCOS x :[−5,15]n , n = 2 79.8 248.5 247.5 245 – – 100 100 100 100 – – 6.68E−05
B2 x :[−100,100]n , n = 2 81 431.3 430 – – – 100 100 100 – – – 8.91E−05
Easom x :[−100,100]n , n = 2 55.2 772 773.9 – – – 70 98 100 – – – 1.27E−04
Goldstein and Price x :[−2,2]n , n = 2 49 232.7 232.8 231 234.4 – 100 100 100 100 100 – 2.56E−04
Rosenbrock (R2) x :[−5,10]n , n = 2 78.9 481.7 482 480 481.7 481.3 100 100 100 100 100 100 5.61E−05
Zakharov (Z2) x :[−5,10]n , n = 2 20 196.5 198.2 195 276 – 100 100 100 100 100 – 1.62E−05
De Jong x :[−5.12,5.12]n , n = 3 42 393 393.9 – – 392 100 100 100 – – 100 9.47E−05
Hartmann (H3,4) x :[0,1]n , n = 3 26.8 342 343.7 343.6 344 – 100 100 100 100 100 – 3.19E−04
Shekel (S4,5) x :[0,10]n , n = 4 47.9 611.3 610 611.4 611.9 – 56 57 76 75 54 – 7.63E−04
Shekel (S4,7) x :[0,10]n , n = 4 48.9 681.1 680 681.3 681.8 – 92 79 83 80 54 – 7.87E−04
Shekel (S4,10) x :[0,10]n , n = 4 49.6 651.1 650 651.4 651.8 – 97 81 83 80 50 – 7.23E−04
Rosenbrock (R5) x :[−5,10]n , n = 5 184.3 2143.2 2143.9 2142 2144.5 – 100 97 100 100 100 – 7.17E−05
Zakharov (Z5)x :[−5,10]n , n = 5 61.8 727 728.9 730.1 823 – 100 100 100 100 100 – 6.58E−05
Hartmann (H6,4) x :[0,1]n , n = 6 28.6 722 723.3 724.1 725.7 – 50 100 100 100 100 – 2.67E−04
Griewangk x :[−5.12,5.12]n , n = 10 29.7 1390 – – – 1399.2 97 61 – – – 100 8.13E−05

Note that the ANFE and AE are computed in relation to only the successful runs. The symbol "—" indicates that the datum is unavailable for the algorithm. With respect to the ANFE, we can clearly see that RACO is the winner in both groups of comparisons. The differences in the ANFE between RACO and the considered algorithms are huge, since the second best one used 2–45 times more ANFE. The reason is that, as the iterative process continues, the length of the domain is reduced gradually and the ants just need to search limited values in each newly-formed domain, which speeds up the optimization process. In terms of the success rate, RACO gets the best results on 6 of the 8 functions in the 2nd group (Table 5) and on 11 of the 15 functions in the 3rd group (Table 6). When compared with ACOR and the other ant-related algorithms, RACO obtains slightly worse success rates than the best one on the Griewangk and Shekel (S4,5) functions. When compared with ACOR and the other metaheuristics, the success rates obtained by RACO on the Easom, Shekel (S4,5) and Hartmann (H6,4) functions are much worse than the best one. These less ideal results reveal that there is a risk in the domain adjustment. In the iterative process, the domain is adjusted towards a smaller area containing the point with the most pheromone. However, the point with the most pheromone might be near not the global optimum but a local optimum. Thus, there is a possibility that the global optimum is removed from the newly-formed domain. Because RACO does not take into account the correlation between variables or local search, the point with the most pheromone might mislead the direction of domain adjustment and leave the ants stuck in a local optimum. Nevertheless, given its comprehensive performance, RACO can be considered a competitive approach.

4.2. Tests on initial domains without the optimal solution

Since there is no literature on ACO concerning the robustness of domains of variables, we do not compare RACO with other algorithms in this subsection. Here we use a benchmark set consisting of 9 functions with integer solutions and 5 functions with fractional solutions. The benchmark functions, except for those already given, are presented in Table 7. The results obtained by RACO are reported in terms of the average number of function evaluations (ANFE), average error (AE) and success rate. Note that the ANFE and AE are evaluated only for the successful runs. If the difference between the minimum function value found by RACO and the optimum is smaller than or equal to 10−5, a run is considered successful. On each benchmark function, RACO was run 20 times using the following termination condition:

max(h_1, h_2, ..., h_n) < ε

where h_i (i = 1, 2, ..., n) is the value of the equivalent share and ε = 10−5. In all experiments, we use 2-dimensional functions and give 5 types of initial domains without the optimal solution: (1) both domains are positive and large; (2) both domains are negative and large; (3) one positive domain and one negative domain, both of which are far from the optimal solution; (4) one positive domain and one negative domain, both of which are extremely narrow; (5) one positive and small domain, and one negative and large domain.

Table 8 presents the results obtained by RACO, where the symbol "×" indicates that RACO failed to find the optimal solution. We can see that RACO shows good performance under the condition of initial domains without the optimal solution. Across the 5 types of domains, RACO can find the correct results with a 100% success rate on 9 of the 14 functions. Statistically speaking, the 3rd type of domains costs RACO the most ANFE, and the 4th type costs the least. The results reflect a general trend of RACO spending more ANFE on domains far from the optimal solution and less ANFE on domains close to the optimal solution.

Table 7
Fourth part of benchmark functions.

Rastrigin: f(x) = x_1² + x_2² − cos(18x_1) − cos(18x_2); x* = (0, 0); f_min = −2.
Shubert: f(x) = (Σ_{i=1}^{5} i·cos(i + (i+1)x_1)) · (Σ_{i=1}^{5} i·cos(i + (i+1)x_2)); 18 optima; f_min = −186.7309.
Ackley: f(x) = −20exp(−0.2√((1/n)Σ_{i=1}^{n} x_i²)) − exp((1/n)Σ_{i=1}^{n} cos(2πx_i)) + 20 + e; x* = (0, ..., 0); f_min = 0.
Levy: f(x) = sin²(πy_1) + Σ_{i=1}^{n−1} [(y_i − 1)²(1 + 10sin²(πy_i + 1))] + (y_n − 1)²(1 + 10sin²(2πy_n)), where y_i = 1 + (x_i − 1)/4; x* = (1, ..., 1); f_min = 0.
Beale: f(x) = (1.5 − x_1 + x_1x_2)² + (2.25 − x_1 + x_1x_2²)² + (2.625 − x_1 + x_1x_2³)²; x* = (3, 0.5); f_min = 0.
Six-Hump Camel-Back: f(x) = 4x_1² − 2.1x_1⁴ + x_1⁶/3 + x_1x_2 − 4x_2² + 4x_2⁴; x* = (0.08983, −0.7126), (−0.08983, 0.7126); f_min = −1.0316.
Bohachevsky: f(x) = x_1² + x_2² − 0.3cos(3πx_1) + 0.3cos(4πx_2) + 0.3; x* = (0, 0.24), (0, −0.24); f_min = −0.24.
Hansen: f(x) = (cos(1) + 2cos(x_1 + 2) + 3cos(2x_1 + 3) + 4cos(3x_1 + 4) + 5cos(4x_1 + 5)) · (cos(2x_2 + 1) + 2cos(3x_2 + 2) + 3cos(4x_2 + 3) + 4cos(5x_2 + 4) + 5cos(6x_2 + 5)); 9 optima, e.g. x* = (−1.30671, 4.85806); f_min = −176.5418.

Table 8
Results of RACO given the incorrect domains. The five columns correspond to five types of initial domains; for each function, the ANFE, AE and success rate (%) are listed. The symbol "×" indicates RACO failed to find the optimal solution.

Initial domains:
(1) x1: [100, 200], x2: [50, 80]
(2) x1: [−300, −180], x2: [−600, −50]
(3) x1: [1800, 1900], x2: [−230, −110]
(4) x1: [1, 2], x2: [−3, −1]
(5) x1: [100, 110], x2: [−300, −190]

Goldstein & Price:    ANFE 383 / 370 / 1559 / 111 / 292;  AE 9.18E−10 / 1.99E−09 / 7.22E−10 / 5.17E−11 / 1.17E−09;  Success rate 100 / 100 / 100 / 100 / 100
Zakharov:             ANFE 155 / 171 / 187 / 108 / 163;  AE 1.84E−11 / 2.58E−12 / 4.53E−12 / 2.14E−11 / 8.42E−12;  Success rate 100 / 100 / 100 / 100 / 100
Martin & Gaddy:       ANFE 155 / 169 / 184 / 125 / 164;  AE 4.10E−12 / 2.23E−12 / 3.74E−12 / 2.87E−12 / 2.11E−12;  Success rate 100 / 100 / 100 / 100 / 100
Griewangk:            ANFE 188 / 220 / 217 / 108 / 249;  AE 2.64E−12 / 1.15E−12 / 1.42E−12 / 5.40E−12 / 1.23E−12;  Success rate 70 / 80 / 40 / 100 / 25
Rastrigin:            ANFE 156 / 189 / 190 / 111 / 159;  AE 1.16E−10 / 2.91E−11 / 9.18E−11 / 2.08E−11 / 1.23E−09;  Success rate 100 / 100 / 100 / 100 / 100
Shubert:              ANFE 151 / 282 / 284 / × / 266;  AE 0 / 0 / 0 / × / 0;  Success rate 100 / 100 / 100 / 0 / 100
Ackley:               ANFE 171 / × / × / 108 / ×;  AE 2.77E−06 / × / × / 1.15E−05 / ×;  Success rate 100 / 0 / 0 / 100 / 0
B2:                   ANFE 157 / 170 / 182 / 108 / 156;  AE 3.84E−11 / 5.64E−10 / 7.23E−12 / 6.55E−11 / 3.33E−10;  Success rate 100 / 100 / 100 / 100 / 100
Levy:                 ANFE 157 / 170 / 261 / × / ×;  AE 1.70E−12 / 4.91E−13 / 5.93E−12 / × / ×;  Success rate 100 / 100 / 100 / 0 / 0
Beale:                ANFE 178 / 178 / 190 / 118 / 167;  AE 2.93E−12 / 1.11E−11 / 3.41E−12 / 2.03E−12 / 3.95E−12;  Success rate 100 / 100 / 100 / 100 / 100
Rosenbrock:           ANFE 253 / 242 / 802 / 163 / 423;  AE 4.74E−11 / 3.35E−11 / 1.37E−11 / 1.29E−10 / 2.01E−11;  Success rate 100 / 100 / 100 / 100 / 100
Bohachevsky:          ANFE 153 / 169 / 182 / 119 / 168;  AE 0 / 0 / 0 / 0 / 0;  Success rate 100 / 100 / 100 / 100 / 100
Hansen:               ANFE 205 / 261 / 260 / 113 / 188;  AE 1.31E−07 / 1.30E−07 / 1.36E−07 / 1.33E−07 / 1.29E−07;  Success rate 100 / 100 / 100 / 100 / 95
Six-Hump Camel-Back:  ANFE 166 / 174 / 177 / × / 176;  AE 4.65E−08 / 4.65E−08 / 4.65E−08 / × / 4.65E−08;  Success rate 100 / 100 / 100 / 0 / 100

Fig. 3. Domain adjustment for x1 of Rosenbrock (each bar shows the domain of x1 after one iteration; iterations 0–300).

On the Griewangk and Ackley functions, RACO gets stuck in the 2nd, 3rd and 5th types of domains. On the Shubert, Levy and Six-Hump Camel-Back functions, RACO gets stuck in the 4th type of domains. In terms of accuracy, the orders of magnitude of the average errors are well controlled within 10−14–10−7, except for the Ackley function. These experiments show that the deviation of an initial domain from the optimal solution has a very small negative effect on RACO. The self-adaptive domain adjustment is the key reason why RACO owns this robustness with respect to the domains of variables. Using this mechanism, the ants can change the border of the initial domain to explore a completely new area. The direction of border adjustment is not blind, but based on the position of the element owning the most pheromone in Tau. Once a newly-formed domain contains the optimal solution, its border can be narrowed gradually to speed up the convergence.

Table 9
Sensitivity analysis for the parameters Δ1 and Δ2 on Rosenbrock (R2), x1: [100, 110], x2: [−300, −190]. The benchmark values are Δ1 = 1.25 and Δ2 = 0.05, the initial values provided at the beginning of Section 4. The symbol "×" indicates RACO failed to find the optimal solution.

Parameter  Value   ANFE    Success rate (%)  AE
Δ1         0       ×       0                 ×
Δ1         0.25    1130.3  100               3.96E−11
Δ1         0.75    521.1   100               1.52E−10
Δ1         1.25    421.1   100               2.58E−11
Δ1         1.75    678.4   100               7.56E−11
Δ1         2.25    499.4   70                1.00E−10
Δ2         0.025   1105.5  100               1.55E−11
Δ2         0.05    421.1   100               2.58E−11
Δ2         0.1     687.9   100               7.60E−11
Δ2         0.15    429     100               3.19E−10
Δ2         0.2     1380.3  80                1.69E−10
Δ2         0.25    ×       0                 ×

4.3. Discussion

The characteristic of RACO is the self-adaptive domain adjustment, from which the self-adaptive pheromone increment, domain division and ant size derive. In order to see clearly how the self-adaptive domain adjustment works, we take one run of RACO on the Rosenbrock function, given the 5th type of domains, as an example. Figs. 3 and 4 display the process of domain adjustment, with each bar presenting a new domain obtained by RACO after one iteration. We can see that a new domain may be extended so that the ants are able to explore a new area. After the corrected domain contains a promising solution, it tends to shrink. Finally, it converges to the optimum after iterative adjustments. It is also found that a newly-formed domain may, after several iterations, be far from the old one. Thus, the function value can change significantly as the domain varies. Fig. 5 describes the change of the order of magnitude of the Rosenbrock function value. At the beginning of the iterative process, the order of magnitude is 10; near the end of the iterations, it falls to −11. The dramatic change of the function value requires the Q in Eq. (2) to be changed dynamically, which is why we put forward the self-adaptive pheromone increment. Moreover, in the scenario where a domain is enlarged significantly, the small number of equivalent shares, which speeds up the searching in the early stage, is no longer applicable. Thus, we put forward the self-adaptive domain division, which in turn requires the ant size to become self-adaptive.

In order to help understand the self-adaptive domain adjustment in depth, we perform sensitivity analysis for the key parameters in Eqs. (4)–(7). Note that the larger Δ1 is, the longer the expanded domain is; the larger Δ2 is, the shorter the reduced domain is. Given different values of the parameters Δ1 and Δ2, RACO was run 20 times on the Rosenbrock function, and the average number of function evaluations (ANFE), average error (AE) and success rate are reported in Table 9, together with the benchmark results. The symbol "×" indicates RACO failed to find the optimal solution. Compared with the benchmark results, a decline in Δ1 makes the ANFE increase. The reason is that a smaller domain increment costs RACO more rounds of domain adjustment before the correct domain is located. The case of Δ1 = 0 indicates that the new domain is as long as the old one, which makes RACO fail to find the optimum. However, the value of Δ1 should not be large either; otherwise, it has a negative effect on RACO's convergence and correctness. As Δ1 increases to 2.25, the ANFE increases from 421.1 to 499.4, and the success rate decreases from 100% to 70%. Further, compared with the benchmark results, a growth in Δ2 makes the ANFE increase and the success rate decrease. The reason is that the area containing the optimal solution may be missed if the domain is reduced on a large scale. Although decreasing the value of Δ2 can improve the accuracy of RACO, the searching speed is sacrificed.

Fig. 4. Domain adjustment for x2 of Rosenbrock (each bar shows the domain of x2 after one iteration; iterations 0–300).

Fig. 5. The order of magnitude of the Rosenbrock function value Y over iterations 0–300.

In the case where Δ2 decreases from 0.05 to 0.025, the AE decreases simultaneously, at the cost of an increase in the ANFE. In short, the parameter setting is a trial-and-error procedure.

5. Conclusions

Although there are many improved ACOs for continuous functions, they all belong to a category that we call limited-range-search algorithms, in which the ants search for the solution only in the given initial domains. The quality of their results is quite sensitive to the domain's properties, such as length, symmetry and border. For a complex function established from a real problem, if people cannot give a correct domain containing the optimal solution, those algorithms cannot work. To overcome this shortcoming, we propose a robust ant colony optimization (RACO) that owns robustness with respect to the domains of variables. RACO can change the border of the domain based on the quality of the solution found. Thus, the ants' search is not limited to the given initial domain, but can extend to a completely different domain. In this sense, RACO is a broad-range-search algorithm which is able to find the correct result even if the given domains do not contain the optimal solution. Because RACO inherits the framework of conventional ACO, it is very simple to use. There are several new characteristics in RACO: self-adaptive domain adjustment, self-adaptive ant size, self-adaptive pheromone increment and self-adaptive domain division. Experiments show that RACO can obtain the optimal solution based on arbitrary domains. With this ability of RACO, people no longer need to estimate a correct domain with the optimal solution beforehand.

The work presented here can be extended along two directions. Firstly, due to RACO's closeness to the formulation of the original ACO used to solve combinatorial optimization problems, it is possible to apply RACO to mixed discrete-continuous optimization problems. Secondly, when the dimension of the benchmark function increases to a large size, the precision of RACO decreases in the case of given domains without the optimal solution; local search heuristics can be employed to intensify the ants' search.

Acknowledgment

This research is supported by the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (Grant No. 15YJC630013).

References

Afshar, A., Massoumi, F., Afshar, A., & Mariño, M. A. (2015). State of the art review of ant colony optimization applications in water resource. Water Resources Management, 29(11), 3891–3904.
Bilchev, G., & Parmee, I. C. (1995). The ant colony metaphor for searching continuous design spaces. In Proceedings of the AISB workshop on evolutionary computation, University of Sheffield, UK. LNCS 933 (pp. 25–39). Springer-Verlag.
Chen, Z., & Wang, C. (2014). Modeling RFID signal distribution based on neural network combined with continuous ant colony optimization. Neurocomputing, 123, 354–361.
Demirel, N. Ç., & Toksarı, M. D. (2006). Optimization of the quadratic assignment problem using an ant colony algorithm. Applied Mathematics & Computation, 183(1), 427–435.
Ding, Q. L., Hu, X. P., Sun, L. J., & Wang, Y. Z. (2012). An improved ant colony optimization and its application to vehicle routing problem with time windows. Neurocomputing, 98, 101–107.
Dorigo, M., & Gambardella, L. M. (1997). Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1(1), 53–66.
Dorigo, M., & Stützle, T. (2010). Ant colony optimization: Overview and recent advances. In Handbook of metaheuristics (pp. 227–263). US: Springer.
Dréo, J., & Siarry, P. (2004). Continuous interacting ant colony algorithm based on dense hierarchy. Future Generation Computer Systems, 20(5), 841–856.
Edward, J. S., Ramu, P., & Swaminathan, R. (2016). Imperceptibility—Robustness tradeoff studies for ECG steganography using continuous ant colony optimization. Expert Systems with Applications, 49, 123–135.
Eiben, A. E., & Bäck, T. (1997). Empirical investigation of multiparent recombination operators in evolution strategies. Evolutionary Computation, 5(3), 347–365.
Fetanat, A., & Khorasaninejad, E. (2015). Size optimization for hybrid photovoltaic–wind energy system using ant colony optimization for continuous domains based integer programming. Applied Soft Computing, 31, 196–209.
Fetanat, A., & Shafipour, G. (2011). Generation maintenance scheduling in power systems using ant colony optimization for continuous domains based 0–1 integer programming. Expert Systems with Applications, 38(8), 9729–9735.
Fogel, D. B., & Bayer, H. G. (1995). A note on the empirical evaluation of intermediate recombination. Evolutionary Computation, 3(4), 491–495.
Gao, S., Zhong, J., & Mo, S. J. (2003). Research on ant colony algorithm for continuous optimization problem. Microcomputer Development, 13(1), 21–22.
Hu, X. M., Zhang, J., & Li, Y. (2008). Orthogonal methods based ant colony search for solving continuous optimization problems. Journal of Computer Science and Technology, 23(1), 2–18.
Huang, R. H., Yang, C. L., & Cheng, W. C. (2013). Flexible job shop scheduling with due window—A two-pheromone ant colony approach. International Journal of Production Economics, 141(2), 685–697.
Kern, S., Müller, S. D., Hansen, N., Büche, D., Ocenasek, J., & Koumoutsakos, P. (2004). Learning probability distributions in continuous evolutionary algorithms—A comparative review. Natural Computing, 3(1), 77–112.
Leguizamón, G., & Coello, C. A. C. (2010). An alternative ACOR algorithm for continuous optimization problems. In Proceedings of the seventh international conference on swarm intelligence, ANTS 2010. LNCS 6234 (pp. 48–59). Springer.
Liao, T. J., Montes de Oca, M. A., Aydın, D., Stützle, T., & Dorigo, M. (2011). An incremental ant colony algorithm with local search for continuous optimization. In Proceedings of the genetic and evolutionary computation conference, GECCO'11 (pp. 125–132). ACM.
Liao, T. J., Socha, K., Montes de Oca, M., Stützle, T., & Dorigo, M. (2014). Ant colony optimization for mixed-variable optimization problems. IEEE Transactions on Evolutionary Computation, 18(4), 503–518.
Liao, T. J., Stützle, T., Montes de Oca, M. A., & Dorigo, M. (2014). A unified ant colony optimization algorithm for continuous optimization. European Journal of Operational Research, 234(3), 597–609.
Monmarché, N., Venturini, G., & Slimane, M. (2000). On how Pachycondyla apicalis ants suggest a new search algorithm. Future Generation Computer Systems, 16(8), 937–946.
Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155–1173.
Xiao, J., & Li, L. P. (2011). A hybrid ant colony optimization for continuous domains. Expert Systems with Applications, 38(9), 11072–11077.
Yang, Q., Chen, W. N., Yu, Z., Gu, T., Li, Y., & Zhang, H. (2016). Adaptive multimodal continuous ant colony optimization. IEEE Transactions on Evolutionary Computation, 99, 1–14.

