
IEEE TRANSACTIONS ON GAMES, VOL. 16, NO. 1, MARCH 2024

A Novel Evolutionary Algorithm With Column and Sub-Block Local Search for Sudoku Puzzles

Chuan Wang, Bing Sun, Student Member, IEEE, Ke-Jing Du, Jian-Yu Li, Member, IEEE, Zhi-Hui Zhan, Senior Member, IEEE, Sang-Woon Jeon, Member, IEEE, Hua Wang, Senior Member, IEEE, and Jun Zhang, Fellow, IEEE

Abstract—Sudoku puzzles are not only popular intellectual games but also NP-hard combinatorial problems related to various real-world applications, and they have attracted much attention worldwide. Although many efficient tools, such as evolutionary computation algorithms, have been proposed for solving Sudoku puzzles, they still face great challenges on hard and large instances. Therefore, to solve Sudoku puzzles efficiently, this article proposes a genetic algorithm (GA) based method with a novel local search technique, called local search-based GA (LSGA). The LSGA includes three novel design aspects. First, it adopts a matrix coding scheme to represent individuals and designs the corresponding crossover and mutation operations. Second, a novel local search strategy based on column search and sub-block search is proposed to increase the convergence speed of the GA. Third, an elite population learning mechanism is proposed to let the population evolve by learning from the historical optimal solutions. Based on these techniques, LSGA can greatly improve the search ability for solving complex Sudoku puzzles. LSGA is compared with some state-of-the-art algorithms on Sudoku puzzles of different difficulty levels, and the results show that LSGA performs well in terms of both convergence speed and success rate on the tested Sudoku puzzle instances.

Index Terms—Combinatorial optimization problems, evolutionary computation (EC), genetic algorithm (GA), local search, Sudoku puzzle.

I. INTRODUCTION

SUDOKU is a popular logic-based combinatorial puzzle game for people of all ages; it was invented in 1979 and was officially named "Sudoku" in 1984 [1]. The typical Sudoku is composed of 81 cells (a 9 × 9 grid), as shown in Fig. 1: Fig. 1(a) is a Sudoku puzzle with several given numbers, and Fig. 1(b) is the solution to this puzzle. Moreover, with rapid development, more complex high-dimensional Sudoku puzzles have appeared in recent years, with dimensions of 16 × 16, 25 × 25, and even 100 × 100.

The rules of Sudoku are as follows. The game begins with several given numbers in an N × N grid. The player must then fill in the empty cells with numbers 1 to N in such a way that no number appears twice in the same row, column, or sub-block. Sudoku puzzles are simple in form and definition, but it is not easy to find their solutions [2]; in 2003, Yato and Seta proved that solving Sudoku is an NP-hard problem [3]. Generally, the factors for evaluating the difficulty of a Sudoku puzzle include the dimension of the problem, the percentage and distribution of the given numbers, and the time cost to solve the puzzle with a baseline solver. To evaluate the difficulty of Sudoku, some typical tools have been developed, including SUDOKUSAT, Sudoku Explainer (SE), and the Hodoku explainer [4]. Currently, the most widely used tool is SE, which assigns a corresponding SE score; a higher SE score generally indicates a more difficult puzzle. For example, Sudoku puzzles can be divided into levels of easy, medium, hard, evil, and even more difficult.

Nowadays, Sudoku is not only a game but also a core problem in many real-world applications in daily life and industrial engineering, such as data encryption [5], radar waveform design [6], and education [7]. For example, Jana et al. [5] proposed a video steganography technique that hides encrypted data in videos via a Sudoku-based reference matrix; this technique resists fault attacks well, provided the Sudoku puzzle can be solved efficiently. Li et al. [8] applied retracing extended Sudoku to image data-hiding technology; as retracing extended Sudoku is a kind of Sudoku containing multiple solutions, it imposes high demands on the robustness of the algorithm solving the puzzle. To improve the efficiency of photovoltaic systems, Horoufiany and Ghandehari [9] proposed a Sudoku-based arrangement rule to avoid mutual shading between fixed photovoltaic arrays and obtained the optimal arrangement by solving the corresponding Sudoku puzzle with a genetic algorithm (GA).

Manuscript received 5 July 2022; revised 1 November 2022 and 20 December 2022; accepted 4 January 2023. Date of publication 12 January 2023; date of current version 19 March 2024. This work was supported in part by the National Natural Science Foundation of China under Grant 62176094, in part by the Guangdong Natural Science Foundation Research Team under Grant 2018B030312003, in part by the National Research Foundation of Korea under Grant NRF-2022H1D3A2A01093478, and in part by the Korea Institute of Ocean Science and Technology under Grant PE99732. (Corresponding authors: Zhi-Hui Zhan; Jun Zhang.)
Chuan Wang is with the College of Software, Henan Normal University, Xinxiang 453007, China (e-mail: [email protected]).
Bing Sun is with the College of Computer and Information, Henan Normal University, Xinxiang 453007, China (e-mail: [email protected]).
Ke-Jing Du and Hua Wang are with the Institute for Sustainable Industries and Liveable Cities, Victoria University, Melbourne, VIC 8001, Australia (e-mail: [email protected]; [email protected]).
Jian-Yu Li and Zhi-Hui Zhan are with the School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China (e-mail: [email protected]; [email protected]).
Sang-Woon Jeon is with the Department of Electronics and Communication Engineering, Hanyang University, Ansan 15588, South Korea (e-mail: [email protected]).
Jun Zhang is with Zhejiang Normal University, Jinhua 321004, China, and also with Hanyang University, Ansan 15588, South Korea (e-mail: [email protected]).
This article has supplementary material provided by the authors and color versions of one or more figures available at https://doi.org/10.1109/TG.2023.3236490.
Digital Object Identifier 10.1109/TG.2023.3236490
© 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Fig. 1. Example of a 9 × 9 Sudoku puzzle and its solution. (a) Sudoku puzzle. (b) Solution to the puzzle.

Sudoku is also studied as a representative of the exact cover problem [10]. Therefore, the Sudoku puzzle widely exists in various applications. The development of algorithms for solving Sudoku is not only of academic research significance but also helpful for real-world applications, and it has attracted increasing attention.

So far, the existing algorithms for solving Sudoku puzzles can be divided into mathematical algorithms [11] and heuristic algorithms [12]. The exact algorithms are faster at solving Sudoku puzzles but lack portability [13]. Therefore, as a type of heuristic algorithm, the GA has gained widespread attention due to its powerful search ability and versatility. During the past decade, some studies have shown the great potential of GA in solving Sudoku puzzles [14]. However, the GA-based methods still have some shortcomings: when solving difficult Sudoku puzzles, some of them still need a long time or even fail to find a solution [15]. Therefore, developing a more efficient method to solve Sudoku puzzles remains a challenge.

In this article, we propose an improved GA with local search (LSGA) to solve Sudoku puzzles effectively and efficiently. Specifically, the LSGA has three novel designs. First, we adopt a matrix-based encoding for Sudoku, and based on this encoding scheme, the crossover and mutation operations of LSGA are designed. Second, we present a novel local search mechanism based on column search and sub-block search to increase the convergence speed of the GA. Third, to avoid being trapped in local optimal solutions, an elite population learning mechanism is proposed to randomly replace poor individuals with new individuals, which is very effective when solving difficult Sudoku puzzles. To illustrate the efficiency of the proposed LSGA, we evaluate it on Sudoku puzzles of different difficulty levels and compare it with some state-of-the-art approaches.

The rest of the article is organized as follows. Section II reviews recent studies on solving Sudoku puzzles. Section III elaborates the matrix-based GA. The effectiveness and efficiency of the proposed LSGA are illustrated by extensive experiments in Section IV. Finally, the conclusion is given in Section V.

II. RELATED WORK

The charm of Sudoku is that it is easy to learn but difficult to master. Therefore, it has received much attention since it was first published in the newspaper "Times" [16]. Fig. 2 shows the structure of a standard Sudoku puzzle. To summarize the definition, the N × N (√N is an integer greater than 0) Sudoku puzzle must satisfy the following constraints.
1) Unique Solution Restriction: A Sudoku puzzle has only one unique solution.
2) Rule of Rows: All numbers 1 to N should appear in each row and not be repeated.
3) Rule of Columns: All numbers 1 to N should appear in each column and not be repeated.
4) Rule of Sub-Blocks: All numbers 1 to N should appear in each √N × √N sub-block and not be repeated.

Fig. 2. Structure of a standard N × N Sudoku puzzle.

Many researchers have tried tackling Sudoku through different methods. A widely used method is dancing links [17], which is a brute-force algorithm. This method transforms Sudoku puzzles into exact cover problems and employs backtracking to solve them. However, brute-force algorithms cannot handle high-dimensional Sudoku puzzles at an acceptable time and memory cost [18].

To overcome the shortage of brute-force algorithms, many heuristic approaches have been reported in the literature to solve Sudoku. For example, Sevkli and Hamza [19] proposed two novel models based on the variable neighborhood search (VNS) algorithm to solve Sudoku: unfiltered-VNS and filtered-VNS. The experiments showed that filtered-VNS obtains better solution quality than unfiltered-VNS on easy- and medium-level puzzles, while unfiltered-VNS performs better on hard-level puzzles. Al-Betar et al. [20] introduced an improved hill-climbing algorithm called the β-Hill-Climbing algorithm, which escapes local optima by using a random operator. Experimental results showed that the β-Hill-Climbing algorithm can find solutions within a very short time under the best parameter configuration.

Traditional Sudoku solution methods are ineffective in solving complex and high-dimensional Sudoku puzzles because these puzzles have a huge search space [21].
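The row, column, and sub-block rules listed above can be checked mechanically. As a brief illustration (a sketch, not code from the paper; the function name `is_valid_solution` is our own), a fully filled N × N grid can be validated as follows:

```python
# Sketch: validate a filled N x N grid (N a perfect square) against the
# row, column, and sub-block rules of Sudoku. Not the authors' code.
import math

def is_valid_solution(grid):
    n = len(grid)
    b = math.isqrt(n)                      # sub-block side length, sqrt(N)
    full = set(range(1, n + 1))            # every row/column/block must equal this
    rows_ok = all(set(row) == full for row in grid)
    cols_ok = all({grid[r][c] for r in range(n)} == full for c in range(n))
    blocks_ok = all(
        {grid[br + i][bc + j] for i in range(b) for j in range(b)} == full
        for br in range(0, n, b) for bc in range(0, n, b)
    )
    return rows_ok and cols_ok and blocks_ok
```

A 4 × 4 grid is enough to exercise all three rules; the same check scales to 9 × 9 or larger.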
Evolutionary computation (EC) algorithms, such as the GA [22], [23], [24], ant colony optimization (ACO) [25], [26], [27], particle swarm optimization (PSO) [28], [29], [30], [31], [32], differential evolution [34], [35], [36], and estimation of distribution algorithms [37], have shown promising performance in solving Sudoku puzzles and many other complex or real-world problems [38], [39], [40], [41]. For example, Mantere and Koljonen [15] adopted a GA to solve Sudoku puzzles, but their method could not effectively solve difficult ones. Pathak and Kumar [42] proposed a wisdom-of-the-crowd aggregate function for Sudoku puzzles, which can effectively prevent the GA from being trapped in local optima. Deng and Li [43] proposed an efficient hybrid algorithm for Sudoku, in which an improved GA produces more diverse individuals to participate in crossover; they then combined PSO with the GA so that the population could evolve better toward the optimal solution. Lloyd and Amos [16] adopted ACO to solve high-dimensional Sudoku puzzles and compared the effect of the percentage of given numbers on the difficulty of Sudoku.

Summarizing the above algorithms, we can conclude that search ability and convergence speed are key indicators for solving Sudoku puzzles, because of the huge search space and unique solutions of Sudoku puzzles. In this article, we present an improved GA, called LSGA, which adopts a new local search method designed for Sudoku puzzles and a new elite population learning mechanism to solve Sudoku puzzles more effectively and efficiently.

III. PROPOSED LSGA METHOD

A. Representations and Initialization

When solving Sudoku puzzles, it is necessary to encode the possible solutions into data structures that facilitate the evolutionary operations of the GA. For example, Rodríguez-Vázquez [44] recorded the given numbers and solutions of Sudoku with two strings of length N², plus a string of length N to record the numbers waiting to be selected in each row. Mantere and Koljonen [15] used two arrays of N² numbers to represent the Sudoku solution: one represented the solution, and the other recorded the positions of the given numbers.

In our algorithm, we adopt two matrices to represent a chromosome. One is the major matrix, which records the number in each position of the Sudoku grid; the other is the associated matrix, which records whether each position is occupied by a given number. Specifically, the number in row i and column j of the Sudoku is recorded at position (i, j) of the major matrix. Meanwhile, if there is a given number at a position, the corresponding value of the associated matrix is "1"; otherwise, it is "0." This coding method facilitates the implementation of the crossover operation and the local search operation.

The initialization stage is an important process of the GA. To reduce the complexity of the puzzle, all non-given numbers in each row are randomly assigned to the empty cells of that row, as shown in Fig. 3. Therefore, the initial solutions satisfy the row rule of Sudoku.

Fig. 3. Initialization stage: non-given numbers in row i are randomly assigned to empty cells.

B. Fitness Function

The goal of solving a Sudoku puzzle is to find the solution where each number occurs only once in each row, column, and sub-block. Therefore, when evaluating an individual, we count how many rows, columns, and sub-blocks are incorrect (i.e., do not include all the numbers from 1 to N). The fitness of the optimal solution is thus 0, which means that the rows, columns, and sub-blocks of this individual satisfy all rules of Sudoku.

As the individuals generated by the initialization already satisfy the row rule, LSGA only needs to optimize the numbers in the columns and sub-blocks without breaking the row rule. In summary, the fitness of each individual can be evaluated by

F = Σ_{i=1}^{N} c_i + Σ_{j=1}^{N} s_j    (1)

where N is the dimension of the Sudoku puzzle and c_i indicates whether the ith column satisfies the rule of Sudoku: c_i equals 0 if the ith column satisfies the rule and 1 if it does not. Correspondingly, s_j represents whether the jth sub-block satisfies the rule. F is the sum of the c_i and s_j. When F equals 0, the algorithm has found the optimal solution.

C. Crossover and Mutation

Crossover operations emphasize the exchange of genes among individuals. In LSGA, the crossover operator is performed by rows. Specifically, the parents are selected based on the individual crossover rate PC1. Subsequently, based on the row crossover rate PC2, the same rows from the parents are selected to participate in the swap operation. An example of the crossover is shown in Fig. 4: two parents are selected based on PC1, and then the same rows of the two parents are swapped based on PC2. Because the given numbers in the same row are in the same positions, such as "2" and "7" in Fig. 4, the swap operation does not change the original Sudoku puzzle.

The pseudocode of the crossover operation is shown in Algorithm 1. In lines 2–3 of Algorithm 1, we select two individuals based on the individual crossover rate PC1. Subsequently, in lines 4–8, rows are selected to swap based on the row crossover rate PC2. Finally, in line 10, the offspring of the crossover are preserved.

Mutation is an important operation for the population to explore the solution space, and it helps the population escape local optima.
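As a concrete illustration of the representation and of the fitness in (1), here is a minimal sketch (our own illustration, not the authors' implementation). It assumes a candidate is stored as a list of N rows, with 0 marking an empty cell in the givens:

```python
# Sketch only (not the paper's code): row-rule-preserving initialization
# of one row, and the fitness F of Eq. (1), which counts the illegal
# columns (c_i) and illegal sub-blocks (s_j). 0 marks an empty cell.
import math
import random

def initialize_row(givens_row):
    """Fill the empty cells of one row with its missing numbers in random
    order, so every initial individual satisfies the row rule."""
    n = len(givens_row)
    missing = [v for v in range(1, n + 1) if v not in givens_row]
    random.shuffle(missing)
    it = iter(missing)
    return [v if v != 0 else next(it) for v in givens_row]

def fitness(grid):
    n = len(grid)
    b = math.isqrt(n)                      # sub-block side length
    full = set(range(1, n + 1))
    # c_i = 1 for each column missing some number; s_j likewise per sub-block
    c = sum({grid[r][i] for r in range(n)} != full for i in range(n))
    s = sum(
        {grid[br + u][bc + v] for u in range(b) for v in range(b)} != full
        for br in range(0, n, b) for bc in range(0, n, b)
    )
    return c + s                           # F = sum c_i + sum s_j; 0 is optimal
```

Because initialization keeps every row legal, only columns and sub-blocks contribute to F, exactly as the text above argues.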
Two different mutation strategies help the GA improve its exploration capability: swap mutation and reinitialization mutation.

The swap mutation is performed as a swap of two positions inside a random row, ensuring that each row still satisfies constraint 2) of Sudoku (see Section II). The associated matrix is used to check whether a position is appropriate for mutation: if its value is "1," the position is occupied by a given number and is illegal to exchange; thus, the given numbers are not changed during mutation. The probability of the swap is determined by the swap mutation rate PM1. As shown in Fig. 5(a), the upper mutation is legal, while the lower one is illegal.

Fig. 4. Crossover between two individuals.

Fig. 5. Two designed mutation operations. (a) Swap mutation. (b) Reinitialization mutation.

Algorithm 1: Pseudocode of Crossover.
Input: population, individual crossover rate PC1, row crossover rate PC2
1: For each individual in the population:
2:   If rand1 < PC1: // rand1 is a random variable in [0, 1]
3:     Select the second parent from the population randomly;
4:     For each row in the individual:
5:       If rand2 < PC2: // rand2 is a random variable in [0, 1]
6:         Parents exchange the selected rows;
7:       End If
8:     End For
9:   End If
10:  Save the offspring to the new population;
11: End For
Output: new population

The reinitialization mutation performs the mutation by reinitializing the distribution of random rows. As shown in Fig. 5(b), the given numbers are retained while the non-given numbers are randomly reassigned to the empty cells. The reinitialization mutation can help the algorithm jump out of local optima better than the swap mutation. However, a high reinitialization mutation probability slows the convergence of the algorithm, so the reinitialization mutation rate PM2 is set to a value smaller than 0.1.

Algorithm 2: Pseudocode of Mutation.
Input: population, swap mutation rate PM1 and reinitialization mutation rate PM2
1: For each individual in the population:
2:   For each row in the individual:
3:     If rand1 < PM1: // rand1 is a random variable in [0, 1]
4:       If the number of non-given numbers ≥ 2:
5:         Select two non-given numbers to exchange positions;
6:       End If
7:     End If
8:     If rand2 < PM2: // rand2 is a random variable in [0, 1]
9:       Reinitialize the row;
10:    End If
11:  End For
12: End For
Output: new population

The pseudocode of the mutation operator is shown in Algorithm 2. In lines 3–7 of Algorithm 2, rows are selected to participate in the swap mutation based on PM1. Lines 4–6 check the feasibility of the swap: if a row contains fewer than two non-given numbers, it cannot participate in the swap. In lines 8–10, rows are reinitialized based on PM2.

D. Column and Sub-Block Local Search

Many studies have shown that local search is an effective technique for improving the convergence speed of an algorithm [45]. Therefore, we design a novel local search method in LSGA for solving Sudoku puzzles. It has two components: column local search and sub-block local search.

The first component, the column local search, is designed to eliminate the repeated numbers in columns. First, all columns that do not meet the rules (called illegal columns) are counted, and we define the set C to record these columns. Then, each illegal column is randomly paired with another column in C; two repeated numbers are swapped if they are in the same row and neither of them already occurs in the other's column. For example, Fig. 6 depicts a part of the solution to a 9 × 9 Sudoku puzzle, where we use 1 to mark the positions of the repeated numbers. According to the rules of Sudoku, the number "1" is the repeated number in column A and "2" is the repeated number in column B.
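The column local-search swap just described can be sketched as follows. This is an illustrative simplification (the function names are ours): the paper pairs illegal columns randomly and also respects the given-number mask via the associated matrix, both omitted here. Note that swapping two values within the same row keeps the row rule intact:

```python
# Sketch of one greedy pass of the column local search (not the paper's
# code): for pairs of illegal columns, swap repeated numbers that sit in
# the same row, provided neither value already occurs in the other column.
def column_local_search(grid):
    n = len(grid)
    full = set(range(1, n + 1))

    def col(c):
        return [grid[r][c] for r in range(n)]

    def repeats(c):
        """Map row index -> value for the second (and later) occurrences."""
        seen, rep = set(), {}
        for r in range(n):
            v = grid[r][c]
            if v in seen:
                rep[r] = v
            seen.add(v)
        return rep

    illegal = [c for c in range(n) if set(col(c)) != full]
    for a in illegal:
        for b in illegal:
            if a == b:
                continue
            common_rows = set(repeats(a)) & set(repeats(b))
            for r in common_rows:
                va, vb = grid[r][a], grid[r][b]
                # swap only if it removes a repeat from both columns
                if va not in col(b) and vb not in col(a):
                    grid[r][a], grid[r][b] = vb, va
    return grid
```

In the Fig. 6 scenario, the repeated "1" of column A and the repeated "2" of column B share a row, so one swap legalizes both columns.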
Fig. 6. Example of column local search. Repeated numbers are marked as 1, and others are marked as 0.

Both columns have repeated numbers in the sixth row, so we can exchange "1" and "2" to make column B a legal column. Column A then continues to swap with column C, which can make both of them meet the column rule.

The second component is the sub-block local search. Similar to the column local search, the sub-block local search swaps repeated numbers in the same row. First, all sub-blocks that do not meet the rules (called illegal sub-blocks) are counted, and we define the set S to record these sub-blocks. Then, each illegal sub-block is randomly paired with another sub-block in S; two repeated numbers are swapped if they are in the same row and neither of them already occurs in the other's sub-block. For example, in Fig. 7, sub-block A and sub-block B both have repeated numbers, one of which is "9" and the other "8." Therefore, we can exchange them in the same row to make both sub-blocks satisfy the Sudoku rules.

Fig. 7. Example of sub-block local search. Repeated numbers are marked as 1, and others are marked as 0.

In summary, the basic idea of the local search is to make the columns and sub-blocks on both sides gradually satisfy the rules of Sudoku by exchanging repeated values. Algorithm 3 describes the basic framework of the local search.

Algorithm 3: Pseudocode of Local Search.
Input: population
1: For each individual in the population:
2:   Record all illegal columns (sub-blocks) in the set C (S);
3:   For each column (sub-block) in C (S):
4:     Randomly select another column (sub-block) from C (S);
5:     If the repeated numbers are in the same row:
6:       If the repeated numbers do not exist in both columns (sub-blocks):
7:         Swap these repeated numbers;
8:       End If
9:     End If
10:  End For
11: End For
Output: new population

E. Elite Population Learning

As the local optimal solutions and the global optimal solution of a Sudoku puzzle can be very different, it is difficult for the GA to jump out of local optima. Thus, a learning mechanism based on an elite population is proposed to prevent the GA from falling into local optima. The elite population is a queue structure that records the best individuals of each generation and is updated with new optimal individuals. In elite population learning, the worst individuals in the population are replaced by a random individual x_random from the elite population or are reinitialized. We define the probability Pb to control this process.

The replacement operation is as follows:

x_worst = { x_random, if rand() < Pb
          { init(),   otherwise    (2)

s.t. Pb = (Maxfx − fx(x_random)) / Maxfx    (3)

where x_worst is the worst individual, Maxfx is the fitness of x_worst, x_random is a randomly selected elite individual with fitness fx(x_random), rand() outputs a random variable in (0, 1), and init() is the initialization function.

According to (2), the worst individual in each generation has only two choices: to be replaced or to be reinitialized. Therefore, in most cases, the algorithm tends to search toward the current optimal solution via replacement but still explores new search directions via reinitialization. Thus, LSGA can balance exploration and exploitation.

F. Overall LSGA Method

Integrating the above techniques into the GA, the developed LSGA is outlined in Algorithm 4. In detail, the individuals are generated through initialization in line 1. Then, the population is optimized by the evolutionary operations in lines 4–6. Subsequently, in lines 7–8, the local search operations are applied to speed up the convergence of the algorithm. Then, the fitness of the individuals is evaluated in line 9, and the elite population learning strategy is executed in line 10.
The algorithm iteratively repeats the above operations until the optimal solution is found or the maximum number of generations is reached.

Algorithm 4: Pseudocode of LSGA.
Input: maximum number of generations FESmax, population
1: Initialize population;
2: Evaluate population;
3: While (count ≤ FESmax) do:
4:   Tournament selection;
5:   Crossover;
6:   Mutation;
7:   Column local search;
8:   Sub-block local search;
9:   Evaluate population;
10:  Elite population learning;
11:  Reserve the best individual as gbest;
12:  If fx(gbest) == 0:
13:    Break;
14:  End If
15: End While
16: Obtain the best solution gbest and its fitness fx(gbest);
Output: fx(gbest) and gbest

IV. EXPERIMENTAL STUDIES

A. Comparisons With State-of-the-Art Methods

To illustrate the performance of LSGA, we compare it with state-of-the-art algorithms, including the node-based coincidence algorithm named NB-COIN [13], the preserve-building-blocks GA named GA-I [46], and the GA with local optima handling named GA-II [18]. To make a fair comparison, the population size is set to 150, and all algorithms run 1 × 10⁴ generations. The parameter settings of LSGA are given in Table I.

TABLE I: PARAMETERS IN LSGA

In the experiments, six classic Sudoku puzzles, which are also solved by the compared algorithms NB-COIN, GA-I, and GA-II, are selected. These Sudoku puzzles cover three difficulty levels (i.e., easy, medium, and hard), as shown in Fig. 8. Each algorithm runs 100 times on each puzzle, where Succ_Count is the number of runs among the 100 that find the optimal solution within 1 × 10⁴ generations, and Avg_Gen is the average number of generations required to find the optimal solution. Note that, to ensure the fairness of the comparison, the experimental results of the compared algorithms are obtained directly from their original papers. Table II gives the experimental results.

From Table II, we can see that on these six Sudoku puzzles, LSGA, NB-COIN, and GA-II obtain the final results in all 100 runs, while GA-I finds the solution of Hard 106 in only 96 runs. Thus, LSGA, NB-COIN, and GA-II are better than GA-I. Compared with NB-COIN, the performance of LSGA is worse on easy-level puzzles. Furthermore, as NB-COIN depends on probability distributions to generate solutions for Sudoku, it is less influenced by local optimal solutions than the other GAs. Specifically, both LSGA and the other comparison algorithms solve Hard 106 with more generations than Hard 77, but the performance of NB-COIN on the two puzzles is not very different. By analyzing the solution process, we found that there is a very competitive local optimal solution in Hard 106: it has only two columns that do not conform to the rules of Sudoku, but its structure is different from that of the best solution. As a result, the GAs can easily fall into this local optimum. In general, comparing the results on the medium-level and hard-level Sudoku puzzles, the average number of generations of LSGA is less than those of GA-I, GA-II, and NB-COIN. Therefore, the performance of LSGA in solving Sudoku puzzles is very competitive.

Next, we conduct an experiment on three so-called super difficult Sudoku puzzles, named super difficult-1 (SD1), AI-escargot (SD2), and super difficult-2 (SD3), selected from [14] and shown in Fig. 9. Among these three puzzles, AI-escargot is one of the most difficult Sudoku puzzles in the world [46]. Table III gives the comparison of LSGA with some other algorithms that have successfully solved these super difficult puzzles: GA-III [14], GPU-GA [47], and GA-I [46]. Each algorithm runs 100 times on each puzzle, where Succ_Count is the solution success rate among the 100 runs within 1 × 10⁴ generations, and Avg_Gen is the average number of generations required to find the optimal solution.

From Table III, we see that LSGA and GPU-GA solve all puzzles with a 100% success rate, while GA-I and GA-III cannot. Moreover, both LSGA and GPU-GA require fewer generations than the other algorithms on each puzzle, and LSGA requires the fewest. Furthermore, we evaluate the difficulty of SD1, SD2, and SD3 with the help of SE and obtain scores of 7.2, 10.5, and 2.8, respectively. This means that, for the Sudoku solving methods recorded in SE (such as WXYZ-Wing, Swampfish, ALS-Wing, etc.), SD2 is very difficult to solve, but SD3 is much simpler. However, the experimental results in Table III are different: compared with SD1 and SD2, LSGA uses many more generations to solve SD3, and this situation occurs not only with LSGA but also with the other compared GAs in Table III. Therefore, we consider that if some known methods for solving Sudoku can be introduced into the GA, the efficiency of solving difficult Sudoku puzzles could be greatly improved.

B. Statistical Performance on Open Sudoku Puzzles

To illustrate the statistical performance of LSGA on more Sudoku puzzles, we conduct experiments based on a large number of Sudoku puzzles selected from the open website

Fig. 8. Six Sudoku puzzles and their solutions. (a) Initial Sudoku puzzle. (b) Solution to puzzle. (c) Initial Sudoku puzzle. (d) Solution to puzzle. (e) Initial Sudoku puzzle. (f) Solution to puzzle. (g) Initial Sudoku puzzle. (h) Solution to puzzle. (i) Initial Sudoku puzzle. (j) Solution to puzzle. (k) Initial Sudoku puzzle. (l) Solution to puzzle.

TABLE II: RESULTS OF THE PROPOSED LSGA AND OTHER METHODS FOR SOLVING SUDOKU ON SIX DIFFERENT SUDOKU PUZZLES

TABLE III: RESULTS OF THE PROPOSED LSGA AND OTHER METHODS FOR SOLVING SUDOKU ON THREE SUPER DIFFICULT SUDOKU PUZZLES
WANG et al.: NOVEL EVOLUTIONARY ALGORITHM WITH COLUMN AND SUB-BLOCK LOCAL SEARCH FOR SUDOKU PUZZLES 169

TABLE IV
RESULTS OF PROPOSED LSGA FOR SOLVING SUDOKU PUZZLES AT
WWW.WEBSUDOKU.COM

Fig. 10. Distributions of the average generations needed by LSGA to solve


Sudoku puzzles with different difficulty levels.

www.websudoku.com. We select Sudoku puzzles from four difficulty levels: easy, medium, hard, and evil. At each difficulty level, 30 puzzles are randomly selected, so 120 puzzles in total are adopted for testing. The details of all 120 puzzles are provided in the Supplemental_Material. The configurations of LSGA are the same as those in Table I. LSGA runs 10 times on each puzzle, and the average performance over the 10 runs is given as Avg_Gen in Table S.I in the Supplemental_Material. Then, the mean performance of LSGA over all 30 puzzles (i.e., the 30 Avg_Gen values) at each difficulty level is given as Mean_Gen in the last row of Table S.I. Moreover, the Mean_Gen values of the four difficulty levels and other statistical values are given in Table IV. For example, in the second row of Table IV, for all 30 puzzles at the easy level, the average number of generations needed by LSGA to obtain the optimal solution to each puzzle over the 10 runs is calculated; the maximal and minimal of these average numbers over the 30 puzzles are given as Max_Gen and Min_Gen, the mean of the 30 average numbers is given as Mean_Gen, and Mean_Succ_Rate is the success rate of LSGA in solving all 30 puzzles over all 10 runs.

From Table IV, we see that LSGA efficiently solves all puzzles. As the difficulty level increases, the number of generations needed by LSGA to obtain the optimal solution also increases exponentially, especially for the puzzles at the evil level. Therefore, we conduct a further investigation of the factors affecting the performance of LSGA in the following part.

Fig. 9. Three super difficult Sudoku puzzles and their solutions. (a) Initial Sudoku puzzle. (b) Solution to puzzle. (c) Initial Sudoku puzzle. (d) Solution to puzzle. (e) Initial Sudoku puzzle. (f) Solution to puzzle.

C. Further Investigation and Discussion

To further study the factors affecting the performance of LSGA, we rate and generate Sudoku puzzles by using SE. The SE score of a Sudoku puzzle is determined by the complexity of the skills required to solve it: the more complex the required skills, the higher the SE score, which in turn determines the difficulty level of the puzzle. This type of evaluation is very effective for players and is widely used [4]. Therefore, we use SE to generate Sudoku puzzles with seven difficulty levels, each containing 10 different Sudoku puzzles. These seven levels are called easy, medium, hard, superior, fiendish, super, and advance, and their SE score intervals are [1.0, 1.2], [1.3, 1.5], [1.6, 2.6], [2.7, 3.9], [4.0, 5.9], [6.0, 6.9], and [7.0, 8.0], respectively. The details of all 70 puzzles are given in the Supplemental_Material. LSGA is adopted to solve these Sudoku puzzles. Similar to the experiments in Section IV-B, each puzzle is solved ten times, and the average number of generations needed by LSGA to obtain the optimal solution over the ten runs is calculated. Thus, we obtain ten average results at each difficulty level (i.e., ten puzzles, each with one average result). The details of these ten average results are given in Table V, and their distributions are plotted as box plots in Fig. 10, with seven columns in Table V and seven boxes in Fig. 10, one for each difficulty level.
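The per-level statistics above are straightforward aggregations of the raw run data. The following Python sketch illustrates how Avg_Gen, Max_Gen, Min_Gen, Mean_Gen, and Mean_Succ_Rate could be computed; the function name and the demo values are ours for illustration, not taken from the paper's code.

```python
# Sketch (ours, not the authors' code) of the per-level statistics.
# Each puzzle is solved 10 times; a run that fails within the generation
# budget is recorded as MAX_GEN. The demo data below is hypothetical.
from statistics import mean

MAX_GEN = 10_000  # generation budget per run, as in the experiments

def level_stats(runs_per_puzzle):
    """Compute the statistics reported for one difficulty level.

    runs_per_puzzle: one inner list per puzzle, holding the number of
    generations each run needed (MAX_GEN for an unsolved run).
    """
    avg_gen = [mean(runs) for runs in runs_per_puzzle]        # Avg_Gen per puzzle
    all_runs = [g for runs in runs_per_puzzle for g in runs]  # every single run
    return {
        "Max_Gen": max(avg_gen),    # largest per-puzzle average
        "Min_Gen": min(avg_gen),    # smallest per-puzzle average
        "Mean_Gen": mean(avg_gen),  # mean of the per-puzzle averages
        "Mean_Succ_Rate": sum(g < MAX_GEN for g in all_runs) / len(all_runs),
    }

# Hypothetical level with three puzzles, 10 runs each, all solved.
demo = [[120] * 10, [60] * 10, [300] * 10]
stats = level_stats(demo)
```

Note that Mean_Succ_Rate here counts every run of every puzzle at the level, matching the definition used in Section IV-B.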
TABLE V
AVERAGE GENERATIONS NEEDED BY LSGA TO SOLVE EACH SUDOKU PUZZLE WITH DIFFERENT DIFFICULTY LEVELS

Moreover, we also look into the number of given numbers in all 70 puzzles. Fig. 11 shows the average number of generations needed by LSGA to solve Sudoku puzzles with different numbers of given numbers. For example, the first bar means that, for the several puzzles among the 70 that have 23 given numbers, the average number of generations needed by LSGA to solve each of them over the 10 runs is calculated, and the mean of these average values, which is 177, is plotted.

Fig. 11. Mean generations needed by LSGA to solve Sudoku puzzles with different given numbers.

As shown in Fig. 10 and Table V, we can conclude that the generations required by LSGA to solve Sudoku are not significantly affected by the difficulty levels. For example, the generations required to solve most Sudoku puzzles at the hard and superior levels are less than those at the medium level. That is, LSGA inherits the problem-independent characteristics of GA and generalizes well across Sudoku puzzles. Moreover, as shown in Fig. 11, we can conclude that there is a correlation between the difficulty of solving Sudoku puzzles and the number of given numbers. More given numbers give LSGA more help in finding a solution, because the given numbers effectively reduce the search space and eliminate interfering solutions. However, the relationship between the given numbers and difficulty is not strictly linear or exponential. That is, some Sudoku puzzles with more given numbers are more difficult to solve, because those given numbers do not provide enough clues to determine the non-given numbers. For example, in Section IV-A, Sudoku puzzle Hard 106 (with 24 given numbers) is more difficult than Sudoku puzzle SD2 (with 23 given numbers), because puzzles like Hard 106 have many locally optimal solutions and their given numbers cannot effectively help LSGA escape from these local optima.

V. CONCLUSION

In this article, we propose an improved GA with local search (named LSGA) for Sudoku. In particular, we adopt a matrix-based encoding GA and devise mutation and crossover operators for this coding scheme. Then, to improve the convergence speed of LSGA, a local search method incorporating column and sub-block search is proposed. Finally, compared with other GA-based algorithms on Sudoku puzzles of different dimensions and difficulty levels, LSGA successfully solves all of these puzzles and shows good performance.

LSGA can also be applied to solve other types of Sudoku puzzles such as Mini Sudoku and Ring Sudoku. However, for Sudoku variants such as Killer Sudoku and Kakuro Sudoku [48], the initialization and local search strategies need to be redesigned, because the local search in LSGA is designed for regularly shaped sub-blocks. Furthermore, for puzzles without sub-blocks, such as Futoshiki and Takuzu [49], the column local search strategy is still applicable. Therefore, LSGA deserves further research to better solve other Sudoku puzzles.

Although our algorithm is successful in solving many Sudoku puzzles, there is still room for improvement. For example, with the help of manual Sudoku solving methods, such as the direct hidden pair and fish methods [50], humans can easily find numbers in irrational positions and adjust them, whereas LSGA needs several, tens, or even hundreds of generations to achieve the same results. Therefore, in the future, we can improve the performance of LSGA by combining it with other Sudoku-solving methods.

REFERENCES

[1] J.-P. Delahaye, "The science behind SUDOKU," Sci. Amer., vol. 294, no. 6, pp. 80–87, Jun. 2006.
[2] X. Qi, G. Li, N. Wang, X. Wang, and L. Wen, "Method study on solving sudoku problem," in Proc. 3rd Int. Conf. Data Sci. Bus. Analytics, 2019, pp. 269–271.
[3] T. Yato and T. Seta, "Complexity and completeness of finding another solution and its application to puzzles," IEICE Trans. Fundamentals Electron., Commun. Comput. Sci., vol. E86-A, no. 5, pp. 1052–1060, May 2003.
[4] M. Henz and H.-M. Truong, "SudokuSat—A tool for analyzing difficult sudoku puzzles," in Tools and Applications with Artificial Intelligence, vol. 166. New York, NY, USA: Springer, 2009, pp. 25–35.
[5] S. Jana, A. K. Maji, and R. K. Pal, "A novel SPN-based video steganographic scheme using Sudoku puzzle for secured data hiding," Innov. Syst. Softw. Eng., vol. 15, no. 1, pp. 65–73, Jan. 2019.
[6] N. R. Munson, T. D. Bufler, and R. M. Narayanan, "Sudoku based phase-coded radar waveforms," in Proc. Radar Sensor Technol., Apr. 2021, vol. 11742, pp. 81–93.
[7] S. Jose and R. Abraham, "Influence of Chess and Sudoku on cognitive abilities of secondary school students," Issues Ideas Educ., vol. 7, no. 1, pp. 23–30, Mar. 2019.
[8] X. Li, Y. Luo, and W. Bian, "Retracing extended sudoku matrix for high-capacity image steganography," Multimedia Tools Appl., vol. 80, no. 12, pp. 18627–18651, Feb. 2021.
[9] M. Horoufiany and R. Ghandehari, "Optimization of the Sudoku based reconfiguration technique for PV arrays power enhancement under mutual shading conditions," Sol. Energy, vol. 159, pp. 1037–1046, Jan. 2018.
[10] M. Harrysson and H. Laestander, "Solving sudoku efficiently with dancing links," Exp. Math., vol. 23, pp. 190–217, Dec. 2014.
[11] J. Gunther and T. Moon, "Entropy minimization for solving Sudoku," IEEE Trans. Signal Process., vol. 60, no. 1, pp. 508–513, Jan. 2012.
[12] L. Clementis, "Advantage of parallel simulated annealing optimization by solving sudoku puzzle," in Proc. Emergent Trends Robot. Intell. Syst., 2015, pp. 207–213.
[13] K. Waiyapara, W. Wattanapornprom, and P. Chongstitvatana, "Solving sudoku puzzles with node based coincidence algorithm," in Proc. 10th Int. Joint Conf. Comput. Sci. Softw. Eng., 2013, pp. 11–16.
[14] M. Becker and S. Balci, "Improving an evolutionary approach to Sudoku puzzles by intermediate optimization of the population," in Proc. Int. Conf. Inf. Sci. Appl., 2019, pp. 369–375.
[15] T. Mantere and J. Koljonen, "Solving and rating sudoku puzzles with genetic algorithms," in Proc. 12th Finnish Artif. Intell. Conf. Step, 2006, pp. 86–92.
[16] H. Lloyd and M. Amos, "Solving Sudoku with ant colony optimization," IEEE Trans. Games, vol. 12, no. 3, pp. 302–311, Sep. 2020.
[17] D. E. Knuth, "Dancing links," in Millennial Perspectives in Computer Science. Boston, MA, USA: Palgrave Macmillan, 2000, pp. 187–214.
[18] F. Gerges, G. Zouein, and D. Azar, "Genetic algorithms with local optima handling to solve sudoku puzzles," in Proc. Int. Conf. Comput. Artif. Intell., Mar. 2018, pp. 19–22.
[19] A. Z. Sevkli and K. A. Hamza, "General variable neighborhood search for solving sudoku puzzles: Unfiltered and filtered models," Soft Comput., vol. 23, no. 15, pp. 6585–6601, Aug. 2019.
[20] M. A. Al-Betar, M. A. Awadallah, A. L. Bolaji, and B. O. Alijla, "β-hill climbing algorithm for Sudoku game," in Proc. Palestinian Int. Conf. Inf. Commun. Technol., 2017, pp. 84–88.
[21] J. Horn, "Solving a large sudoku by co-evolving numerals," in Proc. Genet. Evol. Comput. Conf. Companion, 2017, pp. 29–30.
[22] J.-Y. Li, Z.-H. Zhan, H. Wang, and J. Zhang, "Data-driven evolutionary algorithm with perturbation-based ensemble surrogates," IEEE Trans. Cybern., vol. 51, no. 8, pp. 3925–3937, Aug. 2021.
[23] Z. H. Zhan et al., "Matrix-based evolutionary computation," IEEE Trans. Emerg. Topics Comput. Intell., vol. 6, no. 2, pp. 315–328, Apr. 2022.
[24] J.-Y. Li, Z.-H. Zhan, C. Wang, H. Jin, and J. Zhang, "Boosting data-driven evolutionary algorithm with localized data generation," IEEE Trans. Evol. Comput., vol. 24, no. 5, pp. 923–937, Oct. 2020.
[25] X. Zhang, Z.-H. Zhan, W. Fang, P. Qian, and J. Zhang, "Multipopulation ant colony system with knowledge-based local searches for multiobjective supply chain configuration," IEEE Trans. Evol. Comput., vol. 26, no. 3, pp. 512–526, Jun. 2022.
[26] L. Shi, Z. H. Zhan, D. Liang, and J. Zhang, "Memory-based ant colony system approach for multi-source data associated dynamic electric vehicle dispatch optimization," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 10, pp. 17491–17505, Oct. 2022, doi: 10.1109/TITS.2022.3150471.
[27] J. Y. Li et al., "A multipopulation multiobjective ant colony system considering travel and prevention costs for vehicle routing in COVID-19-like epidemics," IEEE Trans. Intell. Transp. Syst., vol. 23, no. 12, pp. 25062–25076, Dec. 2022, doi: 10.1109/TITS.2022.3180760.
[28] J. R. Jian, Z. G. Chen, Z. H. Zhan, and J. Zhang, "Region encoding helps evolutionary computation evolve faster: A new solution encoding scheme in particle swarm for large-scale optimization," IEEE Trans. Evol. Comput., vol. 25, no. 4, pp. 779–793, Aug. 2021.
[29] J. Y. Li, Z. H. Zhan, R. D. Liu, C. Wang, S. Kwong, and J. Zhang, "Generation level parallelism for evolutionary computation: A pipeline-based parallel particle swarm optimization," IEEE Trans. Cybern., vol. 51, no. 10, pp. 4848–4859, Oct. 2021.
[30] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Trans. Evol. Comput., vol. 23, no. 4, pp. 587–602, Aug. 2019.
[31] Z. J. Wang et al., "Dynamic group learning distributed particle swarm optimization for large-scale optimization and its application in cloud workflow scheduling," IEEE Trans. Cybern., vol. 50, no. 6, pp. 2715–2729, Jun. 2020.
[32] X. Xia et al., "Triple archives particle swarm optimization," IEEE Trans. Cybern., vol. 50, no. 12, pp. 4862–4875, Dec. 2020.
[33] J. Y. Li, Z. H. Zhan, K. C. Tan, and J. Zhang, "Dual differential grouping: A more general decomposition method for large-scale optimization," IEEE Trans. Cybern., to be published, doi: 10.1109/TCYB.2022.3158391.
[34] J. Y. Li, K. J. Du, Z. H. Zhan, H. Wang, and J. Zhang, "Distributed differential evolution with adaptive resource allocation," IEEE Trans. Cybern., to be published, doi: 10.1109/TCYB.2022.3153964.
[35] Z. H. Zhan, Z. J. Wang, H. Jin, and J. Zhang, "Adaptive distributed differential evolution," IEEE Trans. Cybern., vol. 50, no. 11, pp. 4633–4647, Nov. 2020.
[36] J. Y. Li, Z. H. Zhan, K. C. Tan, and J. Zhang, "A meta-knowledge transfer-based differential evolution for multitask optimization," IEEE Trans. Evol. Comput., vol. 26, no. 4, pp. 719–734, Aug. 2022.
[37] J. Y. Li, Z. H. Zhan, J. Xu, S. Kwong, and J. Zhang, "Surrogate-assisted hybrid-model estimation of distribution algorithm for mixed-variable hyperparameters optimization in convolutional neural networks," IEEE Trans. Neural Netw. Learn. Syst., to be published, doi: 10.1109/TNNLS.2021.3106399.
[38] Z. H. Zhan, L. Shi, K. C. Tan, and J. Zhang, "A survey on evolutionary computation for complex continuous optimization," Artif. Intell. Rev., vol. 55, no. 1, pp. 59–110, Jan. 2022.
[39] Z. H. Zhan, J. Y. Li, and J. Zhang, "Evolutionary deep learning: A survey," Neurocomputing, vol. 483, pp. 42–58, Apr. 2022.
[40] Z. G. Chen, Z. H. Zhan, S. Kwong, and J. Zhang, "Evolutionary computation for intelligent transportation in smart cities: A survey," IEEE Comput. Intell. Mag., vol. 17, no. 2, pp. 83–102, May 2022.
[41] J. Y. Li, Z. H. Zhan, and J. Zhang, "Evolutionary computation for expensive optimization: A survey," Mach. Intell. Res., vol. 19, no. 1, pp. 3–23, Jan. 2022.
[42] N. Pathak and R. Kumar, "Improved wisdom of crowds heuristic for solving sudoku puzzles," in Proc. Soft Comput. Signal Process., 2019, pp. 369–377.
[43] X. Q. Deng and Y. D. Li, "A novel hybrid genetic algorithm for solving sudoku puzzles," Optim. Lett., vol. 7, no. 2, pp. 241–257, Oct. 2013.
[44] K. Rodríguez-Vázquez, "GA and entropy objective function for solving sudoku puzzle," in Proc. Genet. Evol. Comput. Conf. Companion, 2018, pp. 67–68.
[45] N. Musliu and F. Winter, "A hybrid approach for the Sudoku problem: Using constraint programming in iterated local search," IEEE Intell. Syst., vol. 32, no. 2, pp. 52–62, Mar./Apr. 2017.
[46] Y. Sato and H. Inoue, "Solving Sudoku with genetic operations that preserve building blocks," in Proc. IEEE Conf. Comput. Intell. Games, 2010, pp. 23–29.
[47] Y. Sato, N. Hasegawa, and M. Sato, "GPU acceleration for Sudoku solution with genetic operations," in Proc. IEEE Congr. Evol. Comput., 2011, pp. 296–303.
[48] N. Pillay, "Finding solutions to Sudoku puzzles using human intuitive heuristics," South Afr. Comput. J., vol. 49, no. 1, pp. 25–34, Sep. 2012.
[49] A. Groza, "Japanese puzzles," in Modelling Puzzles in First Order Logic. Berlin, Germany: Springer, 2021, pp. 221–253.
[50] X. Peng, Y. Huang, and F. Li, "A steganography scheme in a low-bit rate speech codec based on 3D-sudoku matrix," in Proc. IEEE 8th Int. Conf. Commun. Softw. Netw., 2016, pp. 13–18.

Chuan Wang received the B.S. degree in computer science and the M.S. degree in education from Henan Normal University, Xinxiang, China, in 1999 and 2009, respectively. He is currently an Associate Professor with the College of Software, Henan Normal University. His current research interests include computational intelligence and its applications in intelligent information processing and Big Data.

Bing Sun (Student Member, IEEE) received the B.S. degree in computer science and technology from Henan University of Science and Technology, Henan, China, in 2020. He is currently working toward the M.S. degree in electronic and information engineering with Henan Normal University, Xinxiang, China. His current research interests mainly include evolutionary computation, swarm intelligence, and their applications in real-world problems.
Ke-Jing Du received the B.S. degree from Sun Yat-Sen University, Guangzhou, China, in 2012, and the M.S. degree from City University of Hong Kong, Hong Kong, in 2014. She is currently working toward the Ph.D. degree with Victoria University, Melbourne, VIC, Australia. Her current research interests include evolutionary computation (EC) and supply chain management, especially distributed EC and the application of EC in supply chains, feature selection, and games.

Jian-Yu Li (Member, IEEE) received the Bachelor's and Ph.D. degrees in computer science and technology from the South China University of Technology, Guangzhou, China, in 2018 and 2022, respectively. His research interests mainly include computational intelligence, data-driven optimization, machine learning including deep learning, and their applications in real-world problems and in environments of distributed computing and Big Data. Dr. Li has been a reviewer for IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION and the Neurocomputing journal, and a program committee member and reviewer of some international conferences.

Zhi-Hui Zhan (Senior Member, IEEE) received the Bachelor's and Ph.D. degrees in computer science from Sun Yat-Sen University, Guangzhou, China, in 2007 and 2013, respectively. He is currently the Changjiang Scholar Young Professor with the School of Computer Science and Engineering, South China University of Technology, Guangzhou, China. His current research interests include evolutionary computation, swarm intelligence, and their applications in real-world problems and environments of cloud computing and Big Data. Dr. Zhan was a recipient of the IEEE Computational Intelligence Society Outstanding Early Career Award in 2021, the Outstanding Youth Science Foundation from the National Natural Science Foundation of China in 2018, and the Wu Wen-Jun Artificial Intelligence Excellent Youth Award from the Chinese Association for Artificial Intelligence in 2017. His doctoral dissertation was awarded the IEEE CIS Outstanding Ph.D. Dissertation and the China Computer Federation Outstanding Ph.D. Dissertation. He is one of the World's Top 2% Scientists for both Career-Long Impact and Year Impact in Artificial Intelligence and one of the Highly Cited Chinese Researchers in Computer Science. He is currently the Chair of the Membership Development Committee in the IEEE Guangzhou Section and the Vice-Chair of the IEEE CIS Guangzhou Chapter. He is currently an Associate Editor for IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, Neurocomputing, Memetic Computing, and Machine Intelligence Research.

Sang-Woon Jeon (Member, IEEE) received the B.S. and M.S. degrees from Yonsei University, Seoul, South Korea, in 2003 and 2006, respectively, and the Ph.D. degree from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 2011, all in electrical engineering. He has been an Associate Professor with the Department of Military Information Engineering (undergraduate school) and the Department of Electronics and Communication Engineering (graduate school), Hanyang University, Ansan, South Korea, since 2017. From 2011 to 2013, he was a Postdoctoral Associate with the School of Computer and Communication Sciences, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland. From 2013 to 2017, he was an Assistant Professor with the Department of Information and Communication Engineering, Andong National University, Andong, South Korea. His research interests include network information theory, wireless communications, sensor networks, and their applications to the Internet of Things and Big Data. Dr. Jeon was a recipient of the Haedong Young Scholar Award in 2017, which was sponsored by the Haedong Foundation and given by the Korea Institute of Communications and Information Science (KICS), the Best Paper Award of the KICS journals in 2016, the Best Paper Award of the IEEE International Conference on Communications in 2015, the Best Thesis Award from the Department of Electrical Engineering, KAIST, in 2012, the Best Paper Award of the KICS Summer Conference in 2010, and the Bronze Prize of the Samsung Humantech Paper Awards in 2009.

Hua Wang (Senior Member, IEEE) received the Ph.D. degree from the University of Southern Queensland, Toowoomba, QLD, Australia, in 2004. He is currently a Full-Time Professor with Victoria University, Footscray, VIC, Australia. He has expertise in electronic commerce, business process modeling, and enterprise architecture. As a Chief Investigator, he has been awarded three Australian Research Council Discovery grants since 2006, and he has published 200 peer-reviewed scholarly papers.

Jun Zhang (Fellow, IEEE) received the Ph.D. degree from the City University of Hong Kong, Hong Kong, in 2002. He is currently a Korea Brain Pool Fellow Professor with Hanyang University, South Korea. He has authored or coauthored more than 150 IEEE Transactions papers in his research areas. His current research interests include computational intelligence, cloud computing, operations research, and power electronic circuits. Dr. Zhang was a recipient of the Changjiang Chair Professorship from the Ministry of Education, China, in 2013, the National Science Fund for Distinguished Young Scholars of China in 2011, and the First-Grade Award in Natural Science Research from the Ministry of Education, China, in 2009. He is currently an Associate Editor for IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION and IEEE TRANSACTIONS ON CYBERNETICS.