An Improved Differential Evolution Algorithm
Abstract
The traditional differential evolution (DE) algorithm is easily trapped in local minima, its search is inadequate, and its convergence rate is slow. This paper therefore proposes an improved differential evolution algorithm based on bidirectional optimal search and normal perturbation. Through directional learning, the algorithm acquires information about promising solution directions, which greatly improves the probability of finding good solutions. To further improve performance, avoid dimensional stagnation, and strengthen local search ability, a normal perturbation mutation is also proposed. Experimental studies on benchmark test functions show that the algorithm is effective and competitive with other algorithms.
Keywords: differential evolution; bidirectional search; normal perturbation; function optimization
1 Introduction
In the process of solving high-dimensional optimization problems, a variety of competitive swarm intelligence optimization algorithms have been developed, including the Genetic Algorithm [1], Differential Evolution (DE) [2], the Ant Colony Algorithm [3], and Particle Swarm Optimization (PSO) [4]. Among them, the DE algorithm is simple to implement and highly effective, and it has attracted increasing research attention. DE was first proposed by Storn and Price in 1995 and obtained excellent results in the IEEE Evolutionary Computing Competition in 1996. Over the past 20 years, DE and its improvements have become an important research direction in evolutionary computing and have been widely applied to multi-objective optimization, function optimization, and other problems [5].
The DE algorithm is a nature-inspired heuristic that does not rely on gradient information of the problem: it guides the mutation direction of new individuals through the differences between existing individuals and performs selection greedily. To further improve the performance of DE, Brest et al. proposed the jDE algorithm with dynamic parameters [6]: its mutation factor F and crossover rate CR are adjusted with a certain probability in each generation, with the parameter adjustment following a normal distribution. Zhang and Sanderson proposed the JADE algorithm [7]: with the improved DE/current-to-pbest strategy, it selects the search direction from the p best individuals and generates new parameters F and CR from normal and Cauchy distributions. Qin et al. proposed the SaDE algorithm [8], which selects the mutation strategy for each generation according to the success rates in a candidate pool of mutation strategies, and generates the F and CR values of each individual from a normal distribution. Mallipeddi et al. proposed the EPSDE algorithm [9], which uses a mutation-strategy pool and a parameter pool to generate new evolutionary adaptations competitively. In addition, combining DE with other algorithms is a hot research direction, and a large number of results have emerged, such as the DEPSO algorithm [10] formed by combining DE and PSO, and the combination of DE and Simulated Annealing [11].
According to the "no free lunch" theorem, no single DE variant can simultaneously achieve the best search efficiency, breadth, and depth, nor can any fully avoid falling into local extrema. To improve the efficiency of the algorithm and balance search breadth against depth, this paper improves the search effect and the performance of the algorithm through a new mutation strategy and a normal random perturbation. Comparison with existing algorithms shows that the proposed algorithm has better optimization performance.
In this formula, rand denotes a random number in the range (0, 1), and randj is a random integer in the range [1, D], which guarantees that at least one dimension of the mutant Vi,G enters the newly generated trial individual Ui,G.
Here i ∈ [1, NP] is the index of the individual in the population, and f is the fitness function.
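The mutation and binomial crossover described above can be sketched as follows. This is an illustrative implementation of the classic DE/rand/1/bin operator (the function name and the use of NumPy are choices of this sketch, not part of the paper):

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.9, CR=0.9, rng=None):
    """Build one DE/rand/1/bin trial vector U_i,G for individual i.

    pop is an (NP, D) array of candidate solutions.
    """
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    # pick three distinct individuals r1, r2, r3, all different from i
    candidates = [k for k in range(NP) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    # mutation: V_i,G = X_r1 + F * (X_r2 - X_r3)
    v = pop[r1] + F * (pop[r2] - pop[r3])
    # binomial crossover: randj guarantees at least one dimension of V enters U
    randj = rng.integers(D)
    u = pop[i].copy()
    for j in range(D):
        if rng.random() < CR or j == randj:
            u[j] = v[j]
    return u
```

With CR = 0 the trial vector inherits exactly one dimension (the randj-th) from the mutant, which is the guarantee the text describes.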
3 BNDE
The evolution of individuals in the DE algorithm depends on the differences and directions of vectors between other individuals, and in population evolution such vector combinations are uncertain. During evolution, a considerable number of mutation attempts are discarded because they worsen the individual's result, and the information they carry is not used effectively. In addition, the population easily falls into local extrema, and individual dimensions can stagnate due to convergence and stop improving as the number of iterations increases. To use the population's evolutionary information effectively and to avoid local extrema and dimensional stagnation, this paper proposes an improved Differential Evolution algorithm based on Bidirectional searching optimal and Normal perturbation (BNDE).
σ = |f_best,G − f_best,G−1| / |f_best,1| + η (12)
where f_best,1 represents the optimal fitness value of the first generation, and f_best,G and f_best,G−1 are the optimal fitness values of the current and previous generations. During the evolution of the population, the standard deviation of each generation is generated by formula (12) and is directly related to the initial, current, and previous-generation fitness values. As the fitness values converge, the standard deviation decreases, so the search gradually concentrates in each individual's neighborhood while dimensional stagnation is avoided. To prevent the standard deviation from becoming 0 when consecutive fitness values are equal, which would stop mutation, the term η is added as the minimum standard deviation of the normal distribution, with initial value η = 0.01.
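A minimal sketch of the normal-perturbation mutation, assuming σ scales with the relative change in best fitness and is floored by η as described (the exact scaling is an assumption of this sketch, since formula (12) is incomplete in the source):

```python
import numpy as np

def normal_perturbation(x, f_init, f_prev, f_curr, eta=0.01, rng=None):
    """Perturb individual x with N(0, sigma) noise.

    sigma shrinks as the best fitness converges (f_prev ~ f_curr) but is
    floored by eta so the perturbation never vanishes, which is what
    prevents dimensional stagnation.
    """
    rng = rng or np.random.default_rng()
    # assumed form: relative fitness change plus the eta floor
    sigma = abs(f_prev - f_curr) / (abs(f_init) + 1e-12) + eta
    return x + rng.normal(0.0, sigma, size=x.shape)
```

Even when f_prev equals f_curr, sigma equals eta rather than 0, so mutation never stops, matching the role the text assigns to η.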
4.1 Benchmark functions
F1 = Shifted Schwefel's Problem 1.2 with Noise in Fitness, luBounds = [-100, 100]
F2 = Schwefel's Problem 2.6 with Global Optimum on Bounds, luBounds = [-100, 100]
F3 = Shifted Rotated Rastrigin's Function, luBounds = [-5, 5]
F4 = Hybrid Composition Function, luBounds = [-5, 5]
F5 = Rotated Hybrid Composition Function, luBounds = [-5, 5]
F6 = Rotated Hybrid Composition Function with a Narrow Basin for the Global Optimum, luBounds = [-5, 5]
F7 = Rotated Hybrid Composition Function with the Global Optimum on the Bounds, luBounds = [-5, 5]
F8 = Rotated Hybrid Composition Function with High Condition Number Matrix, luBounds = [-5, 5]
EPSDE [9]: F = [0.4 : 0.9], CR = [0.1 : 0.9]
BDE: F = 0.9, CR = 0.9
BNDE: F = 0.9, CR = 0.9, ε = 0.05, η = 0.01
4.3 Results
The experimental environment is as follows: Intel Core i7 3.4 GHz CPU, 8 GB memory, Windows 10 operating system, and MATLAB as the experimental software.
To verify the effectiveness of the BNDE algorithm, DE/rand/1/bin (DEr1) and BDE are compared with BNDE. Each algorithm is run independently 30 times, and the mean and standard deviation are calculated. The comparison is shown in Table 2. The hit counts in Table 2 refer to the number of times a solution better than the current one is obtained.
According to the description above, the theoretical number of iterations lies in the range (5000, 10000), and the larger the value, the better the optimization effect. Moreover, it satisfies the following formula:
hit counts / 2 + 5000 = iteration counts in BNDE where F = 0.9 (13)
The reason is as follows: in the BNDE algorithm, when a step with F = 0.9 fails to obtain a better value, the search must jump back with F = -F/2, which yields the baseline of 5000 iterations. When every step with F = 0.9 obtains a better value, the number of iterations is 10000; that is, 5000 extra iterations are gained when all 10000 hits are made, so every n additional hits yield 5000 + n/2 iterations, as expressed in formula (13).
In addition, the empirical estimation formula (14), obtained from the actual statistical data, shows that the new mutation strategy F = -F/2 in BNDE hits more often than the original F = 0.9, which improves the search accuracy:
(hits of F = -F/2 in BNDE) ≈ (hits of F = 0.9 in BNDE) × 3 (14)
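One bidirectional selection step can be sketched as follows. This sketch assumes the same base and difference vectors are reused when the reversed half-step F → -F/2 is tried after an F = 0.9 failure; the function names and the `sphere` objective are illustrative, not from the paper:

```python
import numpy as np

def sphere(x):
    """Simple test objective (not one of the paper's benchmarks)."""
    return float(np.sum(x ** 2))

def bidirectional_step(pop, fit, i, f_obj, F=0.9, rng=None):
    """Try the forward mutant with F; on failure, retry with -F/2.

    The extra evaluation is spent only when the forward step misses,
    which is the accounting behind formula (13).
    """
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    candidates = [k for k in range(NP) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    diff = pop[r2] - pop[r3]
    for f_step in (F, -F / 2):       # forward first, then backward half-step
        trial = pop[r1] + f_step * diff
        f_trial = f_obj(trial)
        if f_trial < fit[i]:         # greedy selection: keep only improvements
            return trial, f_trial
    return pop[i].copy(), fit[i]
```

By construction the returned fitness never worsens, and a hit with either sign of F counts toward the hit statistics compared in formula (14).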
As Table 2 shows, the solution accuracy of BNDE is ahead of DEr1 on most test functions; BDE and BNDE obtain better results on most functions and are inferior to DEr1 only on F2, which demonstrates the effectiveness of the bidirectional searching optimization. In addition, the results obtained by BNDE are better than those of BDE overall, which demonstrates the effectiveness of the normal perturbation.
[Table 2: mean value (standard deviation) of each compared algorithm on every test function, with ≈ / ↓ significance marks; the numeric entries were lost in extraction.]
Fig. 1 Convergence curve of function F1 (average log10(f) vs. times of fitness evaluation, ×10^5)
[Convergence curves of the remaining test functions: average log10(f) vs. times of fitness evaluation (×10^5), comparing BNDE, jDE, SaDE, EPSDE, and JADE; the plot data were lost in extraction.]
5 Conclusion
To address the slow convergence rate and strong randomness of the DE algorithm, this paper proposes a new improved differential evolution algorithm based on bidirectional search and normal perturbation. The algorithm improves the optimization effect through bidirectional search, and enhances local search ability and escapes local extrema through normal perturbation. The optimization results on benchmark test functions are compared with those of standard and mainstream optimization algorithms, showing that the BNDE algorithm has advantages in solution accuracy and convergence speed.
The algorithm can also be extended to boundary handling, which has always been an important step in DE and its variants. When a new individual generated with F = 0.9 crosses the boundary, F can be adjusted to -F/2, which theoretically reduces the probability of crossing the boundary and is easy to implement. The algorithm is also useful in practical applications; for example, in robot path planning, the search direction can be adjusted in time to improve the chance of obtaining the optimal path. In addition, on complex functions such as hybrid composition functions, both the convergence speed and the convergence accuracy of this algorithm are better than those of the other algorithms, giving it a clear advantage on complex problems.
The BNDE algorithm effectively improves the performance of the DE algorithm on 30-dimensional data and is competitive. However, there is still room to improve the iterative hit rate; for example, replacing the scalar F with a matrix F that varies per individual would save a large number of evaluations. Likewise, much work remains on effectiveness for high-dimensional data. Future work will therefore focus on improving the number of hits, high-dimensional optimization, and practical applications.
References
[1] Storn R, Price K. Differential Evolution – A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces. Berkeley, CA: Tech. Rep. TR-95-012, 1995 [Online]. Available: citeseer.ist.psu.edu/article/storn95differential.html.
[2] Storn R, Price K. Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces[J]. Journal of Global Optimization, 1997, 11: 341-359.
[3] Dorigo M, Gambardella L M. Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem[J]. IEEE Transactions on Evolutionary Computation, 1997, 1(1): 53-66.
[4] Brest J, Greiner S, Boskovic B, et al. Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems[J]. IEEE Transactions on Evolutionary Computation, 2006, 10: 646-657.
[5] Zhang J Q, Sanderson A C. JADE: Adaptive Differential Evolution with Optional External Archive[J]. IEEE Transactions on Evolutionary Computation, 2009, 13: 945-958.
[6] Qin A K, Huang V L, Suganthan P N. Differential Evolution Algorithm with Strategy Adaptation for Global Numerical Optimization[J]. IEEE Transactions on Evolutionary Computation, 2009, 13: 398-417.
[7] Mallipeddi R, Suganthan P N, Pan Q K, et al. Differential Evolution Algorithm with Ensemble of Parameters and Mutation Strategies[J]. Applied Soft Computing, 2011, 11: 1679-1696.
[8] Chakraborty U K, Das S, Konar A. Differential Evolution with Local Neighborhood[C]. Proc. of the IEEE Congress on Evolutionary Computation. Vancouver, Canada, 2006: 2042-2049.
[9] Suganthan P N, Hansen N, Liang J J, et al. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization [EB/OL]. [2012-05-01].