
An improved differential evolution algorithm

Yurui Wang, Jiamian Wang, Shulin Gao

Abstract
The traditional differential evolution (DE) algorithm is easily trapped in local minima, searches inadequately, and converges slowly. This paper proposes an improved Differential Evolution algorithm based on bidirectional optimal searching and normal perturbation. Through direction learning, the algorithm acquires information about the direction of good solutions, which greatly improves the probability of finding them. To further improve performance, avoid dimensional stagnation, and strengthen local search ability, a normal-perturbation mutation is also proposed. Experimental studies on benchmark test functions show that the algorithm is effective and competitive with other algorithms.
Keywords: differential evolution; bidirectional searching; normal disturbance;
function optimization

1 Introduction
In the process of solving high-dimensional optimization problems, a variety of
competitive swarm-intelligence optimization algorithms have been developed,
including the Genetic Algorithm [1], Differential Evolution (DE) [2], the Ant
Colony Algorithm [3], and Particle Swarm Optimization (PSO) [4]. Among them, the
DE algorithm is simple to implement and highly effective, and it has attracted
steadily growing research interest. DE was first proposed by Storn and Price in
1995 and obtained excellent results in the IEEE Evolutionary Computation
Competition in 1996. Over the past 20 years, DE and its improvements have become
an important research direction in evolutionary computation and have been widely
applied to multi-objective optimization, function optimization, and other
problems [5].
The DE algorithm is a nature-inspired heuristic that does not rely on gradient
information about the problem: it guides the mutation direction of new
individuals through the differences between existing individuals and selects
individuals greedily. To further improve the performance of DE, Brest et al.
proposed the jDE algorithm with dynamic parameters [6]: its mutation factor F
and crossover factor CR are re-sampled with a certain probability in each
generation, the new values being drawn uniformly at random. Zhang and Sanderson
proposed the JADE algorithm [7]: with the improved DE/current-to-pbest strategy,
it selects the solution direction from the p best individuals and generates new
parameters F and CR from normal and Cauchy distributions. Qin et al. proposed
the SaDE algorithm [8], which selects each generation's mutation strategy
according to the success rates in a candidate strategy pool and generates the F
and CR values of each individual from a normal distribution. Mallipeddi et al.
proposed the EPSDE algorithm [9], in which a mutation-strategy pool and a
parameter pool compete to generate new trial vectors. In addition, combining DE
with other algorithms is an active research direction, and many results have
emerged, such as the DEPSO algorithm [10] formed by combining DE with PSO, and
the combination of DE with Simulated Annealing [11].
According to the "no free lunch" theorem, no single DE variant can achieve the
best search efficiency, breadth, and depth at once, nor entirely avoid falling
into local extrema. To improve the efficiency of the algorithm and balance
search breadth against depth, this paper improves the search effect and the
performance of the algorithm through a new mutation strategy and normal random
disturbance. Comparison with existing algorithms shows that the proposed
algorithm has better optimization performance.

2 Differential Evolution algorithm


The Differential Evolution (DE) algorithm is an iterative greedy search
algorithm that uses search-direction information. Its main idea is similar to
genetic operations: it generates new individuals through mutation with a certain
probability and selects individuals greedily. The difference is that DE adopts
real-number coding, and its evolution direction is not completely random but is
guided by difference vectors between individuals. The basic DE algorithm
consists of population initialization, mutation, correction, crossover, and
selection steps.

2.1 Population initialization


Let the initial population size be NP and the dimension of each individual be D.
For each individual Xi (i = 1, 2, ..., NP), each component Xi,j (j = 1, 2, ..., D)
is generated randomly between the lower and upper bounds (Xmin, Xmax) as follows:
Xi,j = Xmin + rand*(Xmax - Xmin) (1)
where rand is a random number in the range (0, 1).

2.2 Mutation operation


To enhance the effectiveness of evolution, the DE algorithm generates mutation
vectors from individual differences. Experiments have shown the following
mutation strategies to be effective [12]:
rand/1/bin
Vi,G = Xi1,G + F*(Xi2,G - Xi3,G) (2)
current-to-best/1/bin
Vi,G = Xi1,G + F1*(Xbest,G - Xi1,G) + F2*(Xi2,G - Xi3,G) (3)
best/2/bin
Vi,G = Xbest,G + F1*(Xi1,G - Xi2,G) + F2*(Xi3,G - Xi4,G) (4)
In the formulas above, Vi,G is the new mutant individual of generation G;
Xk,G (k = i1, i2, i3, i4, mutually distinct) are the individuals participating
in the generation-G mutation; F, F1, and F2 are mutation factors that scale the
difference vectors, with recommended values in [0.1, 1.0]; Xbest,G is the
individual with the best fitness in generation G.

2.3 Correction operation


The new individual Vi,G may exceed the range (Xmin, Xmax) with a certain
probability, so its values need to be corrected. Let Vi,j (j ∈ [1, D]) be an
out-of-bounds component. Common correction methods are to regenerate a random
value with formula (1), or to bounce the value back off the boundary:
Vi,j = Xmin + (Xmin - Vi,j) = 2*Xmin - Vi,j   if Vi,j < Xmin (5)
Vi,j = Xmax - (Vi,j - Xmax) = 2*Xmax - Vi,j   if Vi,j > Xmax (6)

2.4 Crossover operation


The newly generated mutant Vi,G is crossed with the original individual Xi,G,
with a certain probability CR ∈ (0, 1), to produce a trial individual Ui,G:

Ui,j = Vi,j   if (rand < CR) or (j = randj)
Ui,j = Xi,j   otherwise (7)

where rand is a random number in the range (0, 1), and randj is a random integer
in the range [1, D], which ensures that at least one dimension of Vi,G enters
the newly generated individual Ui,G.

2.5 Selection operation


The DE algorithm selects between the trial individual Ui,G and the original
individual Xi,G according to fitness, using the greedy principle, to obtain the
next generation Xi,G+1:

Xi,G+1 = Ui,G   if f(Ui,G) is better than f(Xi,G)
Xi,G+1 = Xi,G   otherwise (8)

where i ∈ [1, NP] is the individual's index in the population and f is the
fitness function; for the minimization problems considered here, "better" means
f(Ui,G) < f(Xi,G).
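As a concrete illustration, the basic loop of this section can be sketched in Python. This is a minimal sketch, not the authors' implementation; the function name `de_rand_1_bin`, the bounds, and the parameter defaults are choices made for the example:

```python
import numpy as np

def de_rand_1_bin(f, d, np_size=30, f_w=0.5, cr=0.9, max_fes=10000,
                  x_min=-100.0, x_max=100.0, seed=0):
    """Minimal classic DE/rand/1/bin for minimization (Eqs. 1-2, 5-8)."""
    rng = np.random.default_rng(seed)
    # Eq. (1): random initialization inside the bounds
    x = x_min + rng.random((np_size, d)) * (x_max - x_min)
    fit = np.apply_along_axis(f, 1, x)
    fes = np_size
    while fes < max_fes:
        for i in range(np_size):
            # Eq. (2): rand/1 mutation with three distinct partners
            i1, i2, i3 = rng.choice([k for k in range(np_size) if k != i],
                                    size=3, replace=False)
            v = x[i1] + f_w * (x[i2] - x[i3])
            # Eqs. (5)-(6): reflect out-of-bounds components back inside
            v = np.where(v < x_min, 2 * x_min - v, v)
            v = np.where(v > x_max, 2 * x_max - v, v)
            # Eq. (7): binomial crossover, forcing at least one mutant gene
            j_rand = rng.integers(d)
            mask = rng.random(d) < cr
            mask[j_rand] = True
            u = np.where(mask, v, x[i])
            # Eq. (8): greedy selection (keep the lower objective value)
            fu = f(u)
            fes += 1
            if fu < fit[i]:
                x[i], fit[i] = u, fu
    return x[np.argmin(fit)], fit.min()
```

For example, minimizing the 5-dimensional sphere function `f(z) = sum(z**2)` with this sketch drives the best objective value close to zero within a few hundred generations.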

3 BNDE
The evolution of individuals in the DE algorithm depends on the differences and
directions of vectors between other individuals, and such vector combinations
are uncertain during population evolution. In the process of evolution, a
considerable number of evolutionary steps are abandoned because the individual's
result deteriorates, and this information is not used effectively. In addition,
the population easily falls into local extrema, and some dimensions stagnate due
to convergence and no longer improve as the number of iterations increases. To
use the population's evolutionary information effectively and avoid local
extrema and dimensional stagnation, this paper proposes an improved Differential
Evolution algorithm based on Bidirectional searching optimal and Normal
perturbation (BNDE).

3.1 The new mutation strategy


The rand/1/bin strategy is a mutation scheme with good overall optimization
effect and serves as the basis of many evolutionary optimization algorithms.
Compared with the greedier and more selective best/2/bin strategy, it
effectively avoids local extrema and has strong global search ability.
In the rand/1/bin search, when the fitness of an evolved individual is poor,
that individual is abandoned; that is, no evolution takes place. This paper
adopts a new mutation strategy: when the overall fitness of the new population
does not improve, the search direction is reversed by updating the mutation
factor F to -F/2, giving a high probability of obtaining a better fitness value.
The specific formula is:
Vnew,i,G = Xi1,G + F*(Xi2,G - Xi3,G) (9)
where Vnew,i,G is each individual in the newly generated population, i ∈ [1, NP].
When min(new population fitness) < min(original population fitness), i.e. the
new population's best value is better than the original population's best value,
the current round of updating ends. Otherwise,
Vrev,i,G = Xi1,G - (F/2)*(Xi2,G - Xi3,G) (10)
generates a population in the reverse direction. If the new mutation factor is
-F or larger in magnitude, instability increases; if a scaling of -F/10 or
smaller is used, optimization efficiency is low. The balanced mutation factor
-F/2 gives stronger stability, better evolutionary efficiency, and a higher
probability of obtaining better solutions. Let the original population be V;
the next population consists of the best of each individual among Vi,G,
Vnew,i,G, and Vrev,i,G.
Under this mutation strategy, the number of iterations changes dynamically. If a
better solution is obtained through formula (9) in every round, each generation
costs NP evaluations, so the theoretical maximum number of iterations is
10000*D/NP = 10000 (with NP = D); if no round succeeds, each generation costs
twice as many evaluations, so the theoretical minimum is 10000*D/NP/2 = 5000
iterations.
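One round of the bidirectional strategy of formulas (9)-(10) can be sketched as follows. This is a simplified NumPy illustration: crossover and perturbation are omitted, a clip-to-bounds correction stands in for the bounce-back of Section 2.3, and the helper name `bidirectional_step` is ours, not the paper's:

```python
import numpy as np

def bidirectional_step(f, x, fit, f_w, rng, x_min, x_max):
    """One BNDE-style mutation round (Eqs. 9-10): search forward first, and
    only if the population best did not improve, search the reverse direction
    with factor -F/2; then keep the per-individual best candidate."""
    np_size, d = x.shape

    def mutate(scale):
        v = np.empty_like(x)
        for i in range(np_size):
            i1, i2, i3 = rng.choice([k for k in range(np_size) if k != i],
                                    size=3, replace=False)
            v[i] = x[i1] + scale * (x[i2] - x[i3])
        return np.clip(v, x_min, x_max)       # simplified boundary correction

    v_new = mutate(f_w)                        # Eq. (9): forward direction
    fit_new = np.apply_along_axis(f, 1, v_new)
    cands, cand_fits = [x, v_new], [fit, fit_new]
    if fit_new.min() >= fit.min():             # no improvement: reverse search
        v_rev = mutate(-f_w / 2.0)             # Eq. (10)
        cands.append(v_rev)
        cand_fits.append(np.apply_along_axis(f, 1, v_rev))
    # per-individual greedy selection over the candidate populations
    stacked = np.stack(cand_fits)              # shape (n_candidates, NP)
    best = stacked.argmin(axis=0)
    x_out = np.stack(cands)[best, np.arange(np_size)]
    return x_out, stacked.min(axis=0)
```

Because the current population is always among the candidates, the population best can never deteriorate from one round to the next.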

3.2 Normal disturbance strategy


As the population evolves, its individuals tend to assimilate. Because the
direction of differential evolution depends on the difference between two
individual vectors, when their values in a dimension become the same,
stagnation in that dimension cannot be avoided. Therefore, this paper proposes
a new strategy: apply a normal random disturbance with a certain probability to
enhance local search and the ability to jump out of local extrema:
Vi,G = Vi,G + normrnd(0, σ)   if rand < ε (11)
where normrnd(0, σ) is a normally distributed random value with mean 0 and
standard deviation σ, applied with disturbance probability ε (default ε = 0.05).
Analysis of different test functions shows that each function has a different
numerical range, and the fitness values differ greatly, so a single fixed value
cannot be used for the normal disturbance scale σ. Analysis also shows that the
greater the difference between two adjacent generations, the more a large-scale
mutation can enlarge the search space, while the smaller the difference between
adjacent generations, the less a large mutation range helps search accuracy, so
the mutation range should then be reduced. According to the differing fitness of
each function, this paper uses formula (12) to set the normal standard deviation
for each function at different evolutionary stages:

σ = |f*N+1 - f*N| / |f*1| + η (12)

where f*N+1 is the best fitness value of generation N+1 of the population, f*N
is the best fitness value of generation N, and f*1 is the best fitness value of
the first generation. During evolution, the standard deviation of each
generation is thus directly related to the initial, current, and
previous-generation best fitness values, as generated by formula (12). As the
fitness values converge, the standard deviation decreases, so the disturbance
gradually searches the neighborhood of each individual while avoiding
dimensional stagnation. To avoid a standard deviation of 0 when adjacent best
fitness values are identical, which would stop the mutation, η is added to the
formula as the minimum standard deviation of the normal distribution, with
initial value η = 0.01.
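Formulas (11)-(12) can be sketched as a single perturbation step. This is an illustrative NumPy sketch: the exact form of formula (12) is a reconstruction from the surrounding text, and the function name `normal_perturb` is ours:

```python
import numpy as np

def normal_perturb(v, best_prev, best_curr, best_init, rng,
                   eps=0.05, eta=0.01):
    """Normal-disturbance step (Eqs. 11-12). sigma shrinks as the best
    fitness converges between generations, normalized by the first
    generation's best so functions with different value ranges get
    comparable scales; eta is the floor that keeps mutation alive."""
    # Eq. (12), as reconstructed: adaptive standard deviation with floor eta
    sigma = abs(best_curr - best_prev) / max(abs(best_init), 1e-12) + eta
    # Eq. (11): perturb each component with probability eps
    mask = rng.random(v.shape) < eps
    return v + mask * rng.normal(0.0, sigma, size=v.shape)
```

On average only a fraction ε of the components are moved per call, so the disturbance explores each individual's neighborhood without disrupting the whole population.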

3.3 Algorithm steps


According to the analysis above, the main steps of the improved differential
evolution algorithm based on bidirectional optimization and normal disturbance
are as follows:
Step 1 Set the population size NP, the problem dimension D, and the maximum
number of fitness evaluations maxFES.
Step 2 Set the mutation factor F, the crossover factor CR, the normal
disturbance probability ε, and the base normal standard deviation η.
Step 3 Generate the initial population X and calculate its fitness.
Step 4 Use formula (9) to calculate Vnew,i,G.
Step 5 Use formula (7) to perform the crossover operation on the population.
Step 6 Use formula (12) to generate the normal standard deviation σ, and
disturb the population with formula (11).
Step 7 Calculate the fitness of the new population. If its best fitness is
better than the current best, go to Step 4; otherwise go to Step 8.
Step 8 Use formula (10) to generate a new population Vrev,i,G, apply formula (7)
for crossover, and apply formulas (12) and (11) to disturb the population.
Step 9 For each individual i (i ∈ [1, NP]), compare Vi,G, Vnew,i,G, and
Vrev,i,G, and select the one with the best fitness to form the new population.
Step 10 If the current number of evaluations FES < maxFES, go to Step 4;
otherwise, stop the calculation and exit.

4 Experiment and analysis


To verify the effectiveness of the algorithm, benchmark functions are selected
for experimental verification; their formulas and descriptions are given in
reference [13]. Eight representative test functions are analyzed, including
unimodal, multimodal, and hybrid composition functions. These functions have
different characteristics and can therefore verify the effectiveness of the
algorithm well.

4.1 Benchmark functions
F1 = Shifted Schwefel's Problem 1.2 with Noise in Fitness, luBounds = [-100, 100]
F2 = Schwefel's Problem 2.6 with Global Optimum on Bounds, luBounds = [-100, 100]
F3 = Shifted Rotated Rastrigin's Function, luBounds = [-5, 5]
F4 = Hybrid Composition Function, luBounds = [-5, 5]
F5 = Rotated Hybrid Composition Function, luBounds = [-5, 5]
F6 = Rotated Hybrid Composition Function with a Narrow Basin for the Global Optimum, luBounds = [-5, 5]
F7 = Rotated Hybrid Composition Function with the Global Optimum on the Bounds, luBounds = [-5, 5]
F8 = Rotated Hybrid Composition Function with High Condition Number Matrix, luBounds = [-5, 5]

4.2 Comparison algorithms and settings


Four mainstream optimization algorithms (jDE, SaDE, EPSDE, JADE) and the classic
DE algorithm (DEr1) are used for comparison. To isolate the contribution of each
component of the improved differential evolution algorithm based on
bidirectional optimization and normal disturbance (BNDE), another reference is
added: the differential evolution algorithm with only bidirectional optimization
and no normal disturbance (BDE). The initial settings of all algorithms are
dimension D = 30, population size NP = D, and maximum evaluation count
maxFES = 10000*D. The parameter settings of each algorithm are shown in Table 1.

Table 1 Parameter settings of each algorithm

Algorithm   Parameter setting                   Reference
DEr1        F=0.5, CR=0.9                       [5]
jDE         τ1=0.1, τ2=0.1                      [6]
SaDE        LG=50, F=normrnd(0.5, 0.3)          [8]
EPSDE       F=[0.4 : 0.9], CR=[0.1 : 0.9]       [9]
JADE        p=0.05, c=0.1                       [7]
BDE         F=0.9, CR=0.9
BNDE        F=0.9, CR=0.9, ε=0.05, η=0.01

4.3 Results
The experimental environment is as follows: Intel Core i7 3.4 GHz CPU, 8 GB of
memory, Windows 10, and MATLAB.
To verify the effectiveness of the BNDE algorithm, DE/rand/1/bin (DEr1) and BDE
are compared with BNDE. Each algorithm is run independently 30 times, and the
mean and standard deviation are calculated. The comparison is shown in Table 2.
The hit counts in Table 2 are the numbers of rounds that obtained a solution
better than the current one.
According to the description above, the theoretical number of iterations lies in
[5000, 10000], and the greater the value, the better the optimization effect.
Moreover, the results satisfy the following formula:
(hits at F = 0.9 in BNDE) / 2 + 5000 = BNDE iteration count (13)
The reason is as follows: in the BNDE algorithm, when no round with F = 0.9
obtains a better value, every round falls back to F = -F/2, giving 5000
iterations; when every round with F = 0.9 obtains a better value, the iteration
count is 10000. That is, 5000 extra iterations are gained when all 10000 rounds
hit, so every n additional hits yield n/2 additional iterations, as formula (13)
shows.
In addition, an empirical estimate obtained from the actual statistics shows
that the new mutation step F = -F/2 in BNDE hits more often than the original
F = 0.9, improving search accuracy:
(hits at F = -F/2 in BNDE) ≈ (hits at F = 0.9 in BNDE) * 3 (14)
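Formulas (13) and (14) can be checked directly against the BNDE iteration and hit counts reported in Table 2, as this small consistency check shows:

```python
# Table 2 BNDE columns: (iterations, hits at F=0.9, hits at F=-F/2)
rows = {
    "F1": (5061, 121, 364), "F2": (5119, 239, 629),
    "F3": (5069, 138, 431), "F4": (5045,  90, 305),
    "F5": (5167, 334, 840), "F6": (5181, 362, 885),
    "F7": (5169, 338, 881), "F8": (5093, 185, 456),
}
for name, (iters, hits_fwd, hits_rev) in rows.items():
    # formula (13): every 2 extra forward hits add one iteration above 5000
    assert abs(iters - (5000 + hits_fwd / 2)) <= 1, name
    # formula (14): reverse-direction hits are roughly 3x the forward hits
    assert 1.5 * hits_fwd <= hits_rev <= 4.5 * hits_fwd, name
```

Every row satisfies formula (13) to within rounding, and the ratio of reverse to forward hits ranges from about 2.4 to 3.4, consistent with the factor-of-3 estimate in formula (14).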
Table 2 shows that the solution accuracy of BNDE is ahead of DEr1 on most test
functions; BDE and BNDE obtain better results on most functions and are inferior
to DEr1 only on F2, which demonstrates the effectiveness of the bidirectional
searching optimization. In addition, the results obtained by BNDE are better
than those of BDE overall, which demonstrates the effectiveness of the normal
perturbation strategy.

Table 2 Performance comparison of BNDE, DEr1 and BDE

Function  Evaluations  DEr1 mean (std)            BDE mean (std)             BNDE mean (std)            BNDE iterations  Hits (F=0.9)  Hits (F=-F/2)
F1        300000       3.7485e+01 (4.4619e+01)    3.6872e+01 (2.0191e+02)    7.3100e-02 (3.4130e-01)    5061             121           364
F2        300000       8.2119e+02 (2.9109e+02)    1.4039e+03 (6.6778e+02)    1.2514e+03* (6.5371e+02)   5119             239           629
F3        300000       1.0711e+02 (8.7329e+01)    4.8366e+01 (2.9034e+01)    3.5553e+01 (9.9528e+00)    5069             138           431
F4        300000       4.2000e+02 (8.9443e+01)    2.9151e+02 (3.5827e+01)    2.8264e+02 (3.0757e+01)    5045             90            305
F5        300000       9.0078e+02 (2.3764e+01)    8.1722e+02 (7.2970e-01)    8.1749e+02 (7.3150e-01)    5167             334           840
F6        300000       9.0597e+02 (1.7626e+00)    8.1767e+02 (8.6740e-01)    8.1740e+02 (9.5330e-01)    5181             362           885
F7        300000       9.0597e+02 (1.9799e+00)    8.1735e+02 (6.6370e-01)    8.1739e+02* (5.7550e-01)   5169             338           881
F8        300000       9.1696e+02 (1.4834e+01)    5.1103e+02 (5.8553e+01)    5.0011e+02 (1.6310e-01)    5093             185           456

To further verify the effectiveness of the algorithm, BNDE is compared with the
mainstream jDE, SaDE, EPSDE, and JADE algorithms. To reduce error, 30
independent runs were carried out for each algorithm. The statistical results
were analyzed with a t-test at the significance level p = 0.05: "↑" indicates
that the algorithm's result on that function is significantly better than
BNDE's; "↓" indicates that it is significantly inferior to BNDE's; "≈" indicates
no significant difference from BNDE, in which case the sample size should be
increased for further comparison. The experimental results are shown in Table 3.
In each row, the lowest mean is the optimal value and * marks the suboptimal
mean.
Table 3 Performance comparison of BNDE, jDE, SaDE, EPSDE and JADE (mean over 30 runs, standard deviation in parentheses)

Function  jDE                         SaDE                        EPSDE                       JADE                        BNDE
F1        1.6403e+02 (4.9021e+02) ≈   1.1297e+03 (1.3451e+03) ≈   1.6492e+03 (2.6122e+03) ↓   7.5632e+01 (2.4189e+02) ↓   7.3100e-02 (3.4130e-01)
F2        1.8775e+03 (6.5049e+02) ↓   4.5791e+03 (6.8971e+02) ↓   2.0710e+03 (1.0643e+03) ↓   3.5516e+02 (6.6895e+02) ↑   1.2514e+03* (6.5371e+02)
F3        3.6689e+01* (8.1335e+00) ≈  5.7343e+01 (1.5841e+01) ↓   5.0605e+01 (1.1588e+01) ↓   3.7458e+01 (9.0613e+00) ≈   3.5553e+01 (9.9528e+00)
F4        3.5507e+02 (9.5055e+01) ↓   3.4582e+02 (6.6921e+01) ↓   2.2944e+02 (5.1615e+01) ↑   3.5336e+02 (1.3890e+02) ↓   2.8264e+02* (3.0757e+01)
F5        9.0688e+02 (1.8937e+00) ↓   9.0439e+02 (5.9313e+01) ↓   8.2765e+02 (5.2852e+00) ↓   9.0683e+02 (2.0601e+00) ↓   8.1749e+02* (7.3150e-01)
F6        9.0640e+02 (2.2451e+00) ↓   9.0940e+02 (5.6711e+01) ↓   8.2858e+02 (4.8647e+00) ↓   9.0647e+02 (1.9000e+00) ↓   8.1740e+02 (9.5330e-01)
F7        9.0766e+02 (2.3825e+00) ↓   8.9869e+02 (6.0927e+01) ↓   8.2770e+02 (3.2334e+00) ↓   9.0615e+02 (1.9020e+00) ↓   8.1739e+02* (5.7550e-01)
F8        8.8535e+02 (1.7290e+01) ↓   9.5150e+02 (2.0647e+01) ↓   5.3331e+02 (6.4071e+01) ↓   9.0217e+02 (3.2596e+01) ↓   5.0011e+02 (1.6310e-01)

4.4 Analysis of experimental results


As seen from Table 2, the BNDE algorithm performs well on unimodal, multimodal,
and hybrid composition functions. Compared with the other mainstream
optimization algorithms under the same conditions, it obtains better solutions
with better stability in most cases, and in the remaining few cases it still
obtains the suboptimal solution among the five comparison algorithms. The t-test
verifies that, compared with the other algorithms, BNDE significantly improves
the quality of the solution and is an optimization algorithm with good
optimization ability.
The convergence curves of the functions in Table 3 are shown in Fig. 1 to
Fig. 8. In the figures, the horizontal axis is the number of fitness evaluations
and the vertical axis is the average value log10(f).
Fig. 1 Convergence curve of function F1
Fig. 2 Convergence curve of function F2
Fig. 3 Convergence curve of function F3
Fig. 4 Convergence curve of function F4
Fig. 5 Convergence curve of function F5
Fig. 6 Convergence curve of function F6
Fig. 7 Convergence curve of function F7
Fig. 8 Convergence curve of function F8
(each figure plots average log10(f) for BNDE, jDE, SaDE, EPSDE and JADE over 3e5 fitness evaluations)


The BNDE algorithm has good optimization ability because it makes full use of
the population's "evolutionary failure" information and changes the search
direction in time, so it can jump toward the optimal solution with greater
probability. This strategy avoids blind search, enhances regional exploration,
speeds up the population's global optimization and convergence rate, and
improves the evolutionary efficiency of the algorithm.

5 Conclusion
To address the slow convergence rate and strong randomness of the DE algorithm,
this paper proposes a new improved differential evolution algorithm based on
bidirectional optimization and normal disturbance. The algorithm improves the
optimization effect through bidirectional searching, and enhances local search
ability and escape from local extrema through normal disturbance. The
optimization results on benchmark test functions are compared with those of
standard and mainstream optimization algorithms, showing that the BNDE algorithm
has advantages in solution accuracy and convergence speed.
The algorithm can also be extended to boundary handling, which has always been
one of the more important steps in DE and its variants. When a new individual
crosses the boundary at F = 0.9, the factor can be adjusted to F = -F/2, which
can theoretically reduce the probability of crossing the boundary and is easy to
implement. The algorithm is also useful in practical applications; for example,
in robot path planning, the search direction can be adjusted in time to improve
the chance of obtaining the optimal path. In addition, on complex functions such
as hybrid composition functions, the convergence speed and accuracy of this
algorithm are better than those of the other algorithms compared, so it has
clear advantages on complex problems.
The BNDE algorithm effectively improves the performance of the DE algorithm on
30-dimensional data and is competitive. However, the algorithm still has room to
improve its iterative hit rate: for example, changing the global factor F to a
per-individual vector F would let it adapt separately for each individual and
save many evaluations. Likewise, its effectiveness on high-dimensional data
still needs considerable improvement. Follow-up work will therefore focus on
improving the hit count, high-dimensional optimization, and practical
applications.

References
[1] Storn R, Price K. Differential Evolution - A Simple and Efficient Adaptive
Scheme for Global Optimization over Continuous Spaces. Berkeley, CA, Tech. Rep.
TR-95-012, 1995 [Online]. Available:
citeseer.ist.psu.edu/article/storn95differential.html.
[2] Storn R, Price K. Differential Evolution - A Simple and Efficient Heuristic
for Global Optimization over Continuous Spaces[J]. Journal of Global
Optimization, 1997, 11: 341-359.
[3] Dorigo M, Gambardella L M. Ant colony system: a cooperative learning
approach to the traveling salesman problem[J]. IEEE Transactions on Evolutionary
Computation, 1997, 1(1): 53-66.
[4] Brest J, Greiner S, Boskovic B, et al. Self-adapting control parameters in
differential evolution: a comparative study on numerical benchmark problems[J].
IEEE Transactions on Evolutionary Computation, 2006, 10: 646-657.
[5] Zhang J Q, Sanderson A C. JADE: adaptive differential evolution with
optional external archive[J]. IEEE Transactions on Evolutionary Computation,
2009, 13: 945-958.
[6] Qin A K, Huang V L, Suganthan P N. Differential evolution algorithm with
strategy adaptation for global numerical optimization[J]. IEEE Transactions on
Evolutionary Computation, 2009, 13: 398-417.
[7] Mallipeddi R, Suganthan P N, Pan Q K, et al. Differential evolution
algorithm with ensemble of parameters and mutation strategies[J]. Applied Soft
Computing, 2011, 11: 1679-1696.
[8] Chakraborty U K, Das S, Konar A. Differential evolution with local
neighborhood[C]. Proc. of the IEEE Congress on Evolutionary Computation,
Vancouver, Canada, 2006: 2042-2049.
[9] Suganthan P N, Hansen N, Liang J J, et al. Problem Definitions and
Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter
Optimization[R]. [2012-05-01].
