
Received December 17, 2019, accepted January 22, 2020, date of publication February 3, 2020, date of current version March 2, 2020.
Digital Object Identifier 10.1109/ACCESS.2020.2971060

A Hybrid Genetic Algorithm Based on Information Entropy and Game Theory

LI JIACHENG AND LI LEI
Faculty of Science and Engineering, Hosei University, Tokyo 184-8584, Japan
Corresponding author: Li Jiacheng ([email protected])

ABSTRACT To overcome the disadvantage of traditional genetic algorithms, which easily fall into local optima, this paper proposes a hybrid genetic algorithm based on information entropy and game theory. First, the species diversity of the initial population is calculated from the information entropy, and parallel genetic algorithms are combined: the standard genetic algorithm (SGA), the partheno-genetic algorithm (PGA) and a syncretic hybrid genetic algorithm based on both SGA and PGA are used for the evolutionary operations. Furthermore, with parallel nodes, complete-information game operations are implemented to achieve an optimum for the entire population based on the values of both the information entropy and the fitness of each subgroup population. Additionally, the Rosenbrock, Rastrigin and Schaffer functions are introduced to analyse the performance of the different algorithms. The results show that, compared with traditional genetic algorithms, the proposed algorithm performs better, with higher optimization ability, solution accuracy and stability and a superior convergence rate.

INDEX TERMS Genetic algorithm, partheno-genetic algorithm, information entropy, game theory, parallel genetic algorithm.

I. INTRODUCTION

A. GENETIC ALGORITHM
A genetic algorithm is a computational model that simulates the natural selection of Darwinian biological evolution and the biological evolution process of a genetic mechanism; specifically, this approach searches for the optimal solution by simulating the natural evolution process. Genetic algorithms are typically based on finding a potential solution set for a given population. After the original population is generated, according to the evolutionary principle of survival of the fittest, each subsequent generation evolves to produce a better approximate solution considering the fitness level and size of the problem domain for individual selection in each generation. Then, the processes of crossover and mutation occur with genetic operators. Finally, a population that represents a new solution set is produced. Generally, this process results in an epigenetic population and is similar to natural population evolution but with better performance related to environmental adaptability. In addition, by decoding the optimal individual from the previous generation, the approximate optimal solution of the problem can be obtained.
The genetic algorithm process can be approximately expressed as shown in Figure 1.
As an intelligent algorithm for solving NP problems, genetic algorithms have been extensively studied. Davis edited and published the book ''Handbook of Genetic Algorithms'', which covers many application examples of genetic algorithms in the engineering, technology and scientific computing fields. In ''Genetic Algorithms for Search, Optimization and Machine Learning'', Goldberg proposed combining Pareto theory and genetic algorithms in economics to solve optimization problems. Bernabe introduced the cell genetic algorithm with real-number coding to solve continuous optimization problems, and the results were better than those for other algorithms. Nagham proposed a new structured population approach for genetic algorithms based on the customs, behaviours and patterns of the human community.

The associate editor coordinating the review of this manuscript and approving it for publication was Amjad Ali.
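The generational loop described in this subsection (fitness-based selection, crossover with genetic operators, mutation, replacement) can be sketched in a few lines of Python. The real-valued encoding, tournament selection, arithmetic crossover and the toy fitness function below are illustrative assumptions for the sketch, not details taken from the paper.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def evolve(fitness, pop_size=50, generations=100,
           crossover_rate=0.8, mutation_rate=0.1, lo=-10.0, hi=10.0):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover and Gaussian mutation."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Survival of the fittest: keep the fitter of two random individuals.
        parents = [max(random.sample(pop, 2), key=fitness)
                   for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < crossover_rate:  # arithmetic crossover
                w = random.random()
                a, b = w * a + (1 - w) * b, w * b + (1 - w) * a
            children += [a, b]
        # Gaussian mutation, clamped to the search bounds.
        pop = [min(hi, max(lo, c + random.gauss(0, 0.1)))
               if random.random() < mutation_rate else c
               for c in children]
    return max(pop, key=fitness)

best = evolve(lambda x: -(x - 3.0) ** 2)  # single optimum at x = 3
```

After a few hundred generations the population concentrates near the optimum, which is the behaviour the later sections try to improve when the landscape has many local optima instead of one.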

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see http://creativecommons.org/licenses/by/4.0/
36602 VOLUME 8, 2020
L. Jiacheng, L. Lei: Hybrid Genetic Algorithm Based on Information Entropy and Game Theory

Tu Chengyuan et al. proposed overcoming the problem of ''premature convergence'' by constructing new genetic operators, such as restoration, reconstruction and self-intersection operators, in the so-called partheno-genetic algorithm.
Generally, the above algorithms are relatively classic genetic algorithms; they have efficient search capabilities, but there are still inherent drawbacks to these algorithms. To find a better solution to the NP problem, many scholars have studied hybrid genetic algorithms and improved genetic algorithms.
Li Jia proposed a new genetic algorithm, namely, a hybrid genetic algorithm that integrated the tabu search algorithm into a genetic algorithm. Zhang Tao et al. also proposed a new hybrid genetic algorithm in which a genetic algorithm and the 3-OPT algorithm are combined; this approach utilizes the efficient global search ability of the genetic algorithm and the local search ability of the 3-OPT algorithm. Chen Xiangzhou et al. embedded a reversal operator in the genetic algorithm and improved the convergence speed of the algorithm in the late stage of operation. Zhang et al. introduced the idea of cloning in genetic algorithms. Dai Xiaoming et al. introduced the concept of parallel evolution in genetic algorithms. Fang Xia et al. applied an immunity algorithm with a genetic algorithm to solve the VRP problem.
These hybrid algorithms effectively balance the diversity and convergence properties of the underlying algorithms and embody the advantages of each algorithm. However, the diversity of most hybrid algorithms is relatively low, and the information exchange between populations is not considered. With the advent of interdisciplinary studies, scholars have introduced information entropy and game theory into genetic algorithms to further optimize these algorithms.
Xue Feng et al. used information entropy to generate the initial population, which increased the diversity of the initial population and provided the basis for subsequent processing. Chen Xiaofeng used the concept of information entropy to improve and fuse quantum evolution algorithms and immune genetic algorithms and proposed a quantum immune genetic algorithm based on information entropy. Wei Qinfang et al. proposed a genetic algorithm for wireless sensor network intrusion detection based on information entropy. Yang Mei and others borrowed relevant ideas from game theory for strategy optimization and proposed a hybrid optimization mechanism for multi-subgroup and multi-strategy methods; this approach improved the search accuracy and convergence speed, but it requires the selection of subpopulations.
The optimization ability of a genetic algorithm is reflected by the diversity of the population and the convergence speed of the algorithm. The traditional genetic algorithm easily falls into local optima, and this issue is largely related to the diversity of individuals in the population. Therefore, it is necessary to maintain the diversity of the population in each generation. Additionally, the genetic algorithm involves genetic and selection operations; thus, whether the new population is retained and how it is retained will affect the final optimization result and the speed of optimization. Therefore, the concept of information entropy is introduced in this paper, and game theory optimization is applied to the new populations generated by genetic operations. Based on a review of relevant studies, scholars have begun to study the combination of information entropy and genetic algorithms and the combination of game theory and genetic algorithms to optimize algorithms. However, applications combining information entropy, game theory and genetic algorithms for optimization have not been reported in China.

B. INFORMATION ENTROPY
Entropy was introduced in the 1860s by the German physicist Clausius as a concept of thermodynamics. This term was originally used to describe the amount of energy conversion and the direction of transformation. In 1854, Clausius defined the state function entropy of a reversible thermodynamic system. The mathematical expression is as follows:

dS = dQ/T (1)

That is, the change in entropy is equal to the ratio of the heat absorbed to the temperature in a reversible elementary process from state X0 to state X.
Information entropy is an important concept in the field of information theory. The famous paper ''The Mathematical Theory of Communication'' published by the American scientist C. E. Shannon in 1948 laid the theoretical foundation for information theory. In information theory, entropy describes the degree of uncertainty of random variables, and information entropy is often used to reflect the degree of uncertainty in selection or information.
The information entropy can be defined as the average amount of information provided by each representation of source X, that is, the statistical average of the amount of information contained in the various representations sent by information source X in its probability space. This average value is H(X):

H(X) = − Σ_{x∈X} P(x) ln P(x) (2)

where X represents all solutions.
The distribution degree of individuals in a population can be calculated based on the information entropy. If the population diversity is high, the corresponding information entropy will be high, and vice versa, so the population can be prevented from falling into local optima.

C. GAME THEORY
Game theory is related to the direct interaction of decision-making behaviours and the equilibrium problem associated with such decisions. Game theory originated in the early


FIGURE 1. Genetic algorithm process.

20th century. In 1944, von Neumann and Morgenstern co-authored ''Theory of Games and Economic Behavior'', laying the theoretical foundation for this theory. Since the 1950s, Nash, Selten and Harsanyi have expanded on and improved game theory. In the past 20 years, game theory has been widely used to analyse and solve conflict and cooperation problems in fields such as economics, complex networks, power systems, transportation and path planning.
A typical problem in game theory research is that two or more participants (called players) make decisions on a confrontational or competitive basis so that their own party obtains the best possible result. Moreover, the so-called game is a set of rules that stipulate the methods and regulations that should be followed throughout the game (or competition, struggle, etc.), including the players, the strategies, the outcomes after strategy selection, and so on.
The elements of the game include the participants, the strategies and the utility functions. Players are the decision-making bodies involved in the game, also called participants; they are typically represented by the common symbol Yi (i = 1, 2, ..., n). The strategy involves the actions S of each participant. The utility function F is a function of the strategy set S and is used to measure the profit of the participants in the game. Notably, this function provides an important basis for the participants to make rational decisions.

II. APPLICATION OF INFORMATION ENTROPY AND GAME THEORY IN GENETIC ALGORITHMS

A. APPLICATION OF INFORMATION ENTROPY IN POPULATIONS
1) CALCULATE THE INFORMATION ENTROPY BASED ON THE FITNESS FUNCTION
In the genetic algorithm, the fitness function value of each individual can be calculated, and the distribution of these values is obtained by statistical analysis. Next, the probability P(x) of the different solutions is obtained, and the information entropy is calculated with formula (2). Finally, the degree of diversity of the population is determined.

FIGURE 2. Path optimization diagram.

2) CALCULATE THE INFORMATION ENTROPY BASED ON CODING
In the process of using the genetic algorithm to optimize a route, calculating the information entropy based solely on the fitness function cannot accurately reflect the degree of diversity of the population. Typically, such issues are encountered in the VRP problem. In Figure 2(a), paths a and b are basically the same and can be considered similar, but if the fitness function is based on the length of each path, the two fitness function values will greatly differ. For paths c and d in Fig. 2(b), in a fitness function with the path length as the main criterion, the difference in the resulting values for these two paths may not be large; however, based on individual habits, these two paths may vary greatly. Thus, the fitness function value is not suitable as a measure of the path-based population diversity.
Therefore, a metric of population diversity called the encoding information entropy can be obtained based on the differences in coding. Let the population consist of N chromosomes of length L. First, the degree of coding similarity over the L positions of the N chromosomes is determined. Then, chromosomes with similarity values higher than 90% are classified into one class. Next, according to the probability of each class P(x), formula (2) is used to calculate the information entropy. To ensure the accuracy of the information entropy values, the coding information entropy is used to determine the population diversity.

3) INITIALIZE THE POPULATION USING THE INFORMATION ENTROPY
To uniformly distribute the initial population in the solution space, avoid a centralized distribution in a local region of the solution space, and increase the diversity of the initial population, the population can be initialized by calculating the information entropy. The process of using the information entropy to generate an initial population is as follows.
1) Step 1: Set the critical entropy value S0.
2) Step 2: Randomly generate the first chromosome in the chromosome domain.
3) Step 3: Generate one chromosome at a time in the same way and calculate the entropy value S of the chromosome and the existing individuals. If S > S0, then accept the chromosome; otherwise, reject the chromosome. In this case, regenerate a new chromosome, and


calculate the entropy value S until the condition S > S0 is met.
4) Step 4: Repeat Step 3 until the number of chromosomes reaches the specified initial population size.

4) SETTING THE INFORMATION ENTROPY THRESHOLD S0
The execution process of a genetic algorithm is similar to a system evolution process. At the beginning of the algorithm, the internal diversity of the population is high, and the algorithm has a wide search space. As the population multiplies from generation to generation, some individuals with large adaptation values and their descendants account for the majority of the population. As a result, the population diversity decreases, the algorithm search space shrinks, and the population tends to be stable. The process of population evolution in the genetic algorithm is consistent with this phylogenetic process; therefore, the entropy threshold S0 needs to be gradually reduced according to the evolution of the population. The method of changing S0 used in this paper is as follows:

S0(K) = S0(K − 1) ∗ γ, K = 1, 2, ..., n (3)

where K represents the evolutionary generation and γ is the threshold reduction factor.

B. PARALLEL ALGORITHM
To perform game calculations with the standard genetic algorithm, the partheno-genetic algorithm and the standard partheno-genetic hybrid algorithm, three groups of populations must be generated. A coarse-grained model, also known as a distributed model or an island model, can be constructed, which is a type of parallel genetic algorithm. This approach divides a group into several subgroups according to the number of nodes. The algorithm is run independently for each subgroup in parallel on the respective nodes. During each evolutionary generation time step, the subgroups exchange individual information. This process can identify and maintain the best individuals, enriches the diversity of the population and prevents early convergence.

C. APPLICATION OF GAME THEORY IN THE GENETIC OPERATION MODE
The genetic operations can be optimized using game theory. Classical game theory is based on individual rationality, and the purpose of each participant is to maximize their own income function. This paper instead assumes that each subgroup population is a participant that emphasizes collective rationality and that the parties reach a cooperative agreement to maximize the benefit of the entire population. The game at this time is a complete-information cooperation game.
In addition, this paper introduces information entropy to evaluate the diversity of the populations during the evolution process, calculates the value of the information entropy through formula (2), and combines the strategy set to make game choices for individuals. The game theory and element mapping relationships in this algorithm are as follows.

TABLE 1. Element mapping relations for game theory and the hybrid optimization algorithm.

1) GAME MODEL IN THE ALGORITHM
We define three subgroups, pop1, pop2 and pop3, participating in independent optimization, and three strategies, strategy1, strategy2 and strategy3. Then, for the participants N = {1, 2, 3}, each subgroup i ∈ N is a decision-making subject in the game problem, and it is assumed that all three subgroups have collective rationality. The purpose of the game is to maximize the global benefit. Each subgroup popi has a separate strategy, yielding the strategy set Si = {strategy1, strategy2, strategy3}. The game result is represented by (q1, q2, q3), where q1, q2 and q3 correspond to the fitness values of the best individuals in pop1, pop2 and pop3, respectively. These values are determined by equation (4):

qi = best(popi) (4)

2) SOLVING THE GAME MODEL
The income of the three subgroups is recorded as P = (P1, P2, P3), and the income of subgroup i is Pi. This value depends not only on the strategy of each subgroup but also on the strategies of the other subgroups as a function of the combination of strategies.
Classical game theory is based on personal rationality: the objective of each participant is to maximize their own income function. This paper assumes that the three subgroups are participants who emphasize collective rationality and that the three parties reach a cooperative agreement to maximize P∗ for the entire population. That is, each individual implements a single strategy but shares the optimal solution of the combination of strategies, namely, P∗ = best(P).
Under these assumptions, when the subpopulations are searched in parallel, the population strategy at the nodes is


updated according to formula (5):

X_i^{t+1} = X_i^t + δx_i^{t+1}(P∗) (5)

where X_i^t represents the position of subgroup i at its node in generation t and δx_i^{t+1}(P∗) is the strategy for updating the subgroup based on the following three modes.

a: COOPERATION MODE
When the population has been optimized to the Kth (K = 1, 2, ..., n) generation, the entropy values Q1, Q2 and Q3 of the three populations are calculated. Cooperation mode begins when

max(Q1, Q2, Q3) < S0 (6)

At this time, the three populations are merged into one population, and according to the fitness function value f of the individuals in the population, the individuals with function values less than the median value are eliminated; the remaining individuals are retained and evenly distributed among the three populations. These populations are used to regenerate the population in the same manner as discussed for the initial population.
The cooperation mode is a multi-group self-coordination mechanism. By synergizing with the other populations, the performance of at least one population is improved, and the diversity and convergence of the algorithm are balanced.

b: COMPETITION MODE
When the population has been optimized to the Kth (K = 1, 2, ..., n) generation, the entropy values Q1, Q2 and Q3 of the three populations are calculated. Competition mode begins when

min(Q1, Q2, Q3) > S0 (7)

At this time, the maximum fitness value Fmax of each population is calculated, and the largest of these values is identified. The individual represented by this function value is copied into the other two populations, and the individuals with the smallest function values are excluded. Competition mode yields excellent populations in the algorithm and improves the convergence.

c: COORDINATION MODE
Coordination mode is divided into two situations.
a. The population has been optimized to the Kth (K = 1, 2, ..., n) generation, and the entropy values Q1, Q2 and Q3 of the three populations are calculated; the entropies of two of the populations are greater than the critical entropy S0, and the information entropy of one population is lower than the critical entropy S0. At this time, each of the two populations with information entropy values that meet the threshold requirement contributes 1/4 of the population used to obtain the best fitness function, and the population whose information entropy is less than the critical threshold S0 excludes the individuals whose fitness function values are less than the median.
b. The population has been optimized to the Kth (K = 1, 2, ..., n) generation, and the entropy values Q1, Q2 and Q3 of the three populations are calculated; the entropies of two of the populations are less than the critical entropy S0, and the information entropy of one population is greater than the critical entropy S0. At this time, from the population with an information entropy that satisfies the threshold requirement, the individuals with fitness values larger than the median are retained, and the other individuals are excluded.
In general, coordination mode is used to improve the diversity of the algorithm and reduce the possibility of the populations being trapped in local optima.

III. IMPROVED GENETIC ALGORITHM BASED ON INFORMATION ENTROPY AND GAME THEORY
Based on the above analysis, the steps used to develop the hybrid genetic algorithm in this paper based on information entropy and game theory are as follows.
1) First, determine the coding method according to the relevant requirements. Then, complete the initialization of the three populations based on the population information entropy in the search space. Next, set the search boundary and initialize the maximum iteration number MaxDT and the parallel optimization threshold MaxJT. Finally, initialize the global optimal solution fbest.
2) Implement the parallel genetic algorithm for the three populations according to the standard genetic algorithm, partheno-genetic algorithm and hybrid genetic algorithm methods. Record the fitness value of each population i.
3) Calculate the global maximum return P∗ and the population information entropy according to the payment utility rule; then, obtain the population updating strategy δx_i^{t+1}(P∗).
4) Update the population according to the population updating strategy determined in step 3).
5) Determine whether the global maximum return P∗ is better than the global optimum fbest and whether to update fbest.
6) Evolve the MaxJT generations according to the free strategy for the updated population.
7) Determine whether the maximum number of iterations MaxDT is reached. If so, the process proceeds to step 8); otherwise, the process returns to step 3).
8) Output the results.
The flow chart of this process is shown in Figure 3.

IV. SIMULATION EXPERIMENT
In this section, the hybrid optimization algorithm is tested based on the information entropy and game theory process developed in the previous sections. Three simulation functions, namely, the Rosenbrock function, the Rastrigin function and the Schaffer function, are used in the numerical simulation experiments. SGA, PGA and SGA-PGA are selected as the reference algorithms to verify the rationality and effectiveness of the proposed algorithm.
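The mode-selection core of the algorithm can be sketched as follows. The coding-similarity entropy of formula (2), the threshold decay of formula (3) and the switching rules (6) and (7) are taken from the text; the population size, chromosome length and the greedy 90%-similarity clustering are simplifying assumptions for the sketch, and the three per-subgroup genetic operators are not implemented here.

```python
import math
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def coding_entropy(pop, sim=0.90):
    """Formula (2) over coding classes: chromosomes whose position-wise
    similarity exceeds `sim` are grouped, then H = -sum(p * ln p)."""
    classes = []
    for c in pop:
        for cl in classes:
            same = sum(a == b for a, b in zip(c, cl[0])) / len(c)
            if same > sim:
                cl.append(c)
                break
        else:
            classes.append([c])
    n = len(pop)
    return -sum((len(cl) / n) * math.log(len(cl) / n) for cl in classes)

def choose_mode(entropies, s0):
    """Rules (6) and (7): cooperation when every subgroup has collapsed
    below the threshold, competition when all are still diverse,
    coordination otherwise."""
    if max(entropies) < s0:
        return "cooperation"
    if min(entropies) > s0:
        return "competition"
    return "coordination"

# Formula (3): the threshold decays by gamma each generation.
s0 = 0.7
for k in range(1, 4):
    s0 *= 0.99

# A fresh random binary population is highly diverse, so its coding
# entropy is high and the rules select competition mode.
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
h = coding_entropy(pop)
mode = choose_mode([h, h, h], s0)
```

In a full implementation the three subgroups would each run their own SGA, PGA or SGA-PGA generation between mode decisions, with the chosen mode dictating how individuals are merged, copied or excluded.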


FIGURE 3. Flow chart of the hybrid genetic algorithm based on the information entropy game.

To examine the scalability of the algorithm, different variable dimensions are used for each function test, namely, 10, 20 and 30 dimensions. In the simulation experiments, the common parameters are set as follows: 100 chromosomes, 300 generations, a crossover probability of 0.8, a mutation probability of 0.1, an information entropy threshold S0 of 0.7, and a threshold reduction factor γ of 0.99. All the optimization algorithms were implemented in the MATLAB R2014b environment.
To measure the convergence accuracy, robustness and convergence speed of the different optimization algorithms, the optimal value, average optimal value, worst value and standard deviation of each function were determined from 50 independent runs as the final evaluation indexes. The average optimal fitness curve was plotted with the number of iterations as the abscissa and the average optimal value of each function as the ordinate. The average optimal fitness value characterizes the accuracy that the algorithm can achieve for a given number of iterations, reflecting the convergence speed of the algorithm.

A. ROSENBROCK FUNCTION

f(X) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (1 − x_i)^2] (8)

The test function takes a minimum value of 0 at (1, 1, ..., 1), and a three-dimensional diagram of the test function is shown in Fig. 4.

FIGURE 4. 3D diagram of the Rosenbrock function.

As shown in Fig. 4, the Rosenbrock function is a unimodal function. The function is very simple in regions far from the optimum, but the area near the optimum is banana shaped, with strong correlation between the variables and the gradient information. Thus, it is often difficult to optimize the search direction of the algorithm and find the extreme values of the function. Selecting this test function can therefore test the optimization ability of the algorithm.
For n = 20, the average optimal fitness value of the Rosenbrock function changes with the number of iterations as shown in Fig. 5.

FIGURE 5. Curve of the average optimal fitness value of the Rosenbrock function with the iteration number.
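For reference, the three benchmark functions of equations (8)–(10) are straightforward to code; the Schaffer form below follows the expression given in Section IV-C.

```python
import math

def rosenbrock(x):
    """Equation (8): global minimum 0 at (1, 1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """Equation (9): global minimum 0 at (0, 0, ..., 0)."""
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)

def schaffer(x1, x2):
    """Equation (10): takes the value 0 at (0, 0)."""
    r2 = x1 ** 2 + x2 ** 2
    return (math.sin(math.sqrt(r2)) ** 2 + 0.5) / (1.0 + 0.001 * r2) ** 2 - 0.5
```

These definitions make it easy to reproduce the convergence comparisons: each candidate chromosome decodes to a vector x, and the (negated) function value serves as the fitness.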


TABLE 2. Standard test functions and parameter values.

B. RASTRIGIN FUNCTION

f(X) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2π x_i) + 10] (9)

The Rastrigin test function takes the minimum value of 0 at (0, 0, ..., 0), and a three-dimensional diagram of the test function is shown in Fig. 6.

FIGURE 6. 3D diagram of the Rastrigin function.

Fig. 6 shows that the test function contains many extreme points. As a result, an algorithm easily falls into local optima when the minimum value of the test function is sought, so the function can be used to verify the optimization ability of the algorithm.
For n = 20, the average fitness value of the Rastrigin function varies with the number of iterations as shown in Figure 7.

FIGURE 7. The average optimal fitness value of the Rastrigin function varies with the number of iterations.

C. SCHAFFER FUNCTION
The mathematical expression of the Schaffer function is shown in equation (10):

f(X) = [sin^2(√(x_1^2 + x_2^2)) + 0.5] / [1 + 0.001(x_1^2 + x_2^2)]^2 − 0.5 (10)

The test function takes a minimum value of 0 at (0, 0), and a three-dimensional diagram of the test function is shown in Fig. 8.

FIGURE 8. 3D diagram of the Schaffer function.

Fig. 8 shows that there are multiple extreme points in the test function and that there is oscillation between the extreme points; therefore, the test function can be selected to verify the optimization ability of the algorithm.
For n = 20, the average fitness value of the Schaffer function varies with the number of iterations, as shown in Figure 9.
By comparing the test function optimization results in Table 2, Table 3 and Table 4 with the fitness value


TABLE 3. Results for the Rosenbrock function under different optimization algorithms.

TABLE 4. Results for the Rastrigin function under different optimization algorithms.

FIGURE 9. The average optimal fitness value of the Schaffer function varies with the number of iterations.

curves in Figure 5, Figure 7 and Figure 9, we find that the algorithm developed in this paper has a strong optimization ability. In the optimization of each of the above three functions, the dimension of the function has a significant influence on the obtained optimal value. As the dimension increases, the optimal value of each algorithm also changes. For the Rosenbrock function, the variable dimension has a considerable influence on each algorithm. For low-dimensional functions, the proposed algorithm achieves global optimization along with the other algorithms, but for high-dimensional functions, the proposed algorithm still yields good results. Furthermore, the information entropy game genetic algorithm has the best optimization effect in the convergence tests with the three test functions, and its convergence speed is faster than that of the SGA, PGA and SGA-PGA optimization methods. For the Rosenbrock and Schaffer functions, the SGA optimization method quickly reaches only a local optimum due to a premature convergence phenomenon. Although the SGA-PGA optimization algorithm has a fast search speed and overcomes local optima, its convergence speed and convergence accuracy are lower than those of the information entropy game genetic algorithm. Additionally, the information entropy game genetic algorithm also reaches local optima during the search process; however, due to the introduction of information entropy, the optimization algorithm can quickly escape local optima and find the global


TABLE 5. Results for the Rosenbrock function under different optimization algorithms.

solution in the feasible domain. The obtained optimal values indicate that the information entropy game genetic algorithm has good global and local search abilities.

In the comparison of algorithm stability, the experiments in this paper are repeated 50 times. In addition to the strong search ability of the proposed algorithm, the stability of the algorithm is another indicator of performance. Based on the 50 repetitions, the variance of the experimental results was determined to assess the fluctuations in the optimal value obtained by each algorithm. Although the Rosenbrock and Schaffer functions yielded unstable results for most algorithms, the proposed method displayed good stability.

For the function value of the initial population, the information entropy game genetic algorithm yielded better fitness values than can be obtained using traditional genetic methods, indicating that the method based on information entropy can increase population diversity and optimize the initial population. Additionally, the information entropy game genetic algorithm uses parallel genetic operations; therefore, its optimization efficiency can far exceed that of traditional genetic methods. Thus, with the proposed algorithm, the optimal value is found earlier, and the information entropy-based population game operations are associated with genetic nodes, avoiding the main disadvantage of the traditional genetic algorithm, which easily falls to local optima.

Based on a comprehensive analysis of the above experimental results, the information entropy game genetic algorithm can be applied to complex nonlinear and high-dimensional functions with multiple extreme points and can obtain high-precision global optimal values at low computational cost. Notably, the proposed algorithm not only has a fast convergence speed but also has better global and local optimization performance than traditional methods.

V. CONCLUSION
In this paper, the genetic algorithm is improved, and multi-group genetic operations are performed in parallel. Information entropy is introduced to quantitatively analyse the diversity in the evolution process and ensure diversity in population genetics. Additionally, game theory is combined with the evolutionary process so that multiple types of evolutionary operations occur, and the game strategy whose change in group information entropy is most conducive to the diversity and adaptability of the whole group is selected to strengthen good individuals and eliminate invalid individuals. Three commonly used test functions are introduced to assess the validity and convergence of the algorithms. Based on the Rosenbrock, Rastrigin and Schaffer functions and a coding test, the results show that the proposed improved hybrid genetic algorithm has a considerable advantage over traditional genetic algorithms in initializing the population and in obtaining high fitness values, rapid convergence, and a high optimization speed.
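The diversity measure described above can be sketched as follows; the real-valued gene encoding, the equal-width binning scheme and the gene bounds are illustrative assumptions here, not the paper's exact formulation.

```python
import math
from collections import Counter

def population_entropy(population, bins=10, low=-5.12, high=5.12):
    """Estimate the Shannon entropy (in bits) of a real-coded
    population by discretizing all gene values into equal-width
    bins over [low, high] and measuring the bin distribution."""
    width = (high - low) / bins
    genes = [g for individual in population for g in individual]
    # Clamp the top edge so g == high falls into the last bin.
    counts = Counter(min(int((g - low) / width), bins - 1) for g in genes)
    total = len(genes)
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

A population whose individuals cluster in one region of the search space yields an entropy near zero, while a well-spread population approaches the upper bound of log2(bins); a game strategy that preserves diversity would therefore favour moves that keep this value high.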


LI LEI received the Ph.D. degree in science from Xi'an Jiaotong University, China, in 1989, and the Ph.D. degree in engineering from Tohoku University, Japan, in 1994. He was a Ph.D. student and a Research Assistant with Hirosaki University, Japan, from April 1989 to March 1992. Since April 1992, he has been on the Faculty of Computer Science and Engineering. He was an Associate Professor with Aomori University, Japan, from 1992 to 1997, and with Yamaguchi University, Japan, from 1997 to 2001. He was a Visiting Professor with the State University of New York at Stony Brook from 1999 to 2000. Since 2002, he has been a Professor and the Dean at Hosei University, Japan. He has been the author/coauthor or the Editor/Co-Editor of 20 books. His research interests include fast algorithms, parallel algorithms, genetic algorithms, neural networks, machine learning algorithms, and so on. He has published around 220 articles in refereed journals, conference proceedings, and book chapters in these areas, and has been involved in more than 35 conferences and workshops as a Program/General/Organizing Chair. He received the Hitachi to Software Papers International Award in Sendai, Japan, in 1990, the Outstanding Leadership Award from the International Conference on Computer Convergence Technology, Seoul, Korea, in 2011, the Outstanding Service Award from the ACM Research in Applied Computation Symposium, Miami, USA, in 2011, and the Asia Education Friendship Award from the Education Forum for Asia Annual Conference in Chengdu, China, in 2019.

LI JIACHENG was born in Sheyang County, Yancheng, Jiangsu, China, in 1990. He received the bachelor's degree in computer engineering from Jiangsu University, China, in 2012, and the master's degree in computer engineering from Hosei University, Japan, in 2015, where he is currently pursuing the Ph.D. degree. His research interests include fast algorithms, parallel algorithms, genetic algorithms, optimization theory, the vehicle routing problem, and so on. He has published seven articles in refereed journals, conference proceedings, and book chapters in these fields. His awards and honors include the scholarship for the 100th anniversary of Hosei University and a scholarship from the Ministry of Science and Culture of Japan.
