Accepted Manuscript
PII: S1877-7503(17)31053-0
DOI: https://doi.org/10.1016/j.jocs.2017.09.015
Reference: JOCS 768
To appear in: Journal of Computational Science
Please cite this article as: Chunteng Bao, Lihong Xu, Erik
D. Goodman, Leilei Cao, A Novel Non-Dominated Sorting Algorithm
for Evolutionary Multi-objective Optimization, Journal of Computational
Science https://doi.org/10.1016/j.jocs.2017.09.015
This is a PDF file of an unedited manuscript that has been accepted for publication.
As a service to our customers we are providing this early version of the manuscript.
The manuscript will undergo copyediting, typesetting, and review of the resulting proof
before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that
apply to the journal pertain.
A Novel Non-Dominated Sorting Algorithm for Evolutionary
Multi-objective Optimization
Chunteng Bao a, Lihong Xu a, Erik D. Goodman b, Leilei Cao a
a College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
b BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI 48824, USA
* Corresponding author.
E-mail addresses: [email protected] (C. Bao), [email protected] (L. Xu), [email protected] (E. Goodman),
[email protected] (L. Cao).
Highlights
1. A Hierarchical Non-Dominated Sorting (HNDS) algorithm for evolutionary multi-objective optimization is proposed.
2. HNDS uses quick-sort to obtain a non-dominated solution at each round of comparison and adopts a hierarchical strategy to distinguish rapidly between solutions of different quality.
3. Solutions that are non-dominated with the first one are transferred to the next round according to their current order.
4. HNDS avoids many unnecessary dominance comparisons, thereby reducing the computational complexity.
5. HNDS is effective for non-dominated sorting, especially for populations with large numbers of solutions and many objectives.
Abstract—Evolutionary computation has shown great performance in solving many multi-objective optimization
problems; in many such algorithms, non-dominated sorting plays an important role in determining the relative quality of
solutions in a population. However, the implementation of non-dominated sorting can be computationally expensive,
especially for populations with a large number of solutions and many objectives. The main reason is that most existing
non-dominated sorting algorithms need to compare one solution with almost all others to determine its front, and many of
these comparisons are redundant or unnecessary. Another reason is that as the number of objectives increases, more and
more candidate solutions become non-dominated solutions, and most existing time-saving approaches cannot work
effectively. In this paper, we present a novel non-dominated sorting strategy, called Hierarchical Non-Dominated Sorting
(HNDS). HNDS first sorts all candidate solutions in ascending order by their first objective. Then it compares the first
solution with all others one by one to make a rapid distinction between solutions of different quality, thereby avoiding many
unnecessary comparisons. Experiments on populations with different numbers of solutions, different numbers of
objectives and different problems have been conducted. The results show that HNDS has better computational efficiency than
fast non-dominated sort, Arena’s principle and deductive sort.
Index Terms-- Multi-objective Evolutionary Algorithm, Non-dominated sort, Pareto front, Computational complexity.
I. INTRODUCTION
During the past two decades, there has been a surge in research on evolutionary multi-/many-objective optimization
(EMO) [1, 2], especially for optimization problems with more than three objectives. The interest in multi-objective evolutionary algorithms (MOEAs) [3] is mainly driven by the large number of recognized real-world multi-objective optimization problems (MOPs) [4-7]. In fact, there are many scientific and engineering problems
involving multiple conflicting objectives in the real world—e.g., greenhouse microclimate control [8, 9], control system
design [10, 11], and search-based software engineering [12, 13]. These objectives must be optimized simultaneously to
obtain a tradeoff among the multiple objectives. To solve such problems, some effective MOEAs have been developed,
such as genetic algorithm (GA) [14, 15], differential evolution (DE)[16, 17], indicator-based evolutionary algorithm
(IBEA)[1, 18], particle swarm optimization (PSO)[19, 20], ant colony optimization (ACO)[21], decomposition-based
algorithm (MOEA/D)[22] and so on.
It is worth noting that among these solution strategies, many MOEAs based on Pareto dominance [23, 24] have
shown excellent performance in solving MOPs. A large number of MOEAs based on this dominance mechanism have
been developed, such as NSGA [25], SPEA [4], PESA [26], TDEA [27], MOGA [28], PAES [29], NPGA
[30], MOPSO [20], NSGA-II [14], SPEA2 [15], PESA-II [31] and so on. Recently, there has been increasing interest in
developing many-objective evolutionary algorithms—for example, Deb et al.'s NSGA-III [32, 33], Yang et al.'s Grid-based strategy [34] and several others [35-37].
Although various strategies have been proposed to solve the original problems faced by Pareto-based MOEAs, a
common problem most existing Pareto-based MOEAs face is that checking Pareto dominance requires a large number of objective comparisons, which is computationally expensive, especially for populations with large numbers of solutions
and many objectives. As an example, we implemented a multi-objective evolutionary algorithm called MODE [38] on a
WFG6 [39] problem with five objectives and 800 initial individuals for 400 generations. We noted that nearly 78% of the
computer instructions and 84% of the runtime were spent on non-dominated sorting. For different MOEAs and MOPs,
this ratio may change, but the runtime spent on non-dominated sorting occupies a large proportion of the total time when
populations are large and more than two objectives are involved.
In the past decade, there has been much research on improving the computational efficiency of non-dominated
sorting in multi-/many-objective evolutionary algorithms. Originally, when Srinivas et al. [25] applied non-dominated sorting to multi-objective evolutionary algorithms, it had a computational complexity of O(MN³) and a space complexity of O(N), where M is the number of objectives and N is the population size. In 2002, Deb et al. [14] put forward an
improved version of non-dominated sorting called fast non-dominated sorting; the idea of this method is to avoid
duplicated comparisons in determining the dominance relationships among solutions, and it reduced the computational
complexity to O(MN²), at the cost of increasing its space complexity to O(N²). In 2014, Deb and Jain proposed an
improved version of NSGA-II called NSGA-III [32, 33], which uses a reference-point-based non-dominated sorting
approach to solve many-objective optimization problems: two parts of the algorithm use the usual dominance principle
[40].
In 2003, Jensen [41] applied the divide-and-conquer strategy [42] to non-dominated sorting, and reduced the
run-time complexity to O(N(log N)^(M−1)). It is worth noting that Jensen's approach assumed that no individuals share the
same value for any objective, and it cannot work efficiently for some cases. Luckily, in 2013, Fortin et al.[43] put
forward an improved version of Jensen’s algorithm and compensated for this incompleteness without changing the
original time complexity. In 2003, Ding et al. [44] proposed an improved non-dominated sorting approach called MLNFC, which adopts two techniques: obtaining an integer rank set and locking the non-dominated solutions as soon as possible. The computational complexity of this approach is O(MN log N) + O(NL), where L = N · Σ_{i=0}^{N−1} C_{N−i}^M / C_N^M. In 2008, Tang et al. [45]
applied Arena’s principle to non-dominated sorting—that is, each winner is preserved as the host and continues to take
part in the next round of comparisons. Experiments showed that this approach outperformed Deb’s fast non-dominated
sort, especially for populations with a large proportion of dominated solutions. The time complexity of this approach can
achieve O(MN·N̄), where N̄ is the number of non-dominated solutions in the population. In 2012, McClymont and
Keedwell [46] proposed two notable algorithms for non-dominated sorting, called climbing sort and deductive sort,
which use the existing results of comparisons to infer certain dominance relationships between solutions.
Recently, in 2015, Drozdik et al. [47] introduced a data structure, called an M-front, to reduce the time complexity of non-dominated sorting. This data structure holds the non-dominated solutions and updates this information as the population changes. The average time complexity of this dynamic approach can achieve O(M²N^(2(1−1/M))). Experimental results show that this approach outperforms Jensen-Fortin's algorithm, especially for the many-objective case. In addition, there are also many other novel non-dominated sorting methods [44, 48, 49] or algorithms that
can efficiently determine the non-dominated set [50-52]. All of these methods produce identical Pareto fronts; they
simply aim to reduce the time complexity of non-dominated sorting as far as possible.
As mentioned above, the non-dominated sorting in most existing Pareto-based MOEAs is computationally
expensive, especially for the case of many-objective optimization algorithms. This mechanism needs to spend a large
amount of time determining the dominance relationships between solutions in the population; that is, most of the running
time is consumed by objective comparisons. This can be attributed to two factors: first, non-dominated sorting is a
complex process and must be implemented serially, so it costs a lot of time to determine the dominance relationships
between solutions. Second, as the number of objectives increases, the number of non-dominated solutions increases very
quickly, so the existing time-saving MOEAs do not work very effectively. Thus, how to avoid unnecessary or duplicative
comparisons between solutions in many-objective evolutionary algorithms is an urgent problem to solve. In view of this,
we perform this study, mainly engaging in the further improvement of non-dominated sorting in terms of time
complexity.
In this paper, we present a new non-dominated sorting algorithm, termed Hierarchical Non-Dominated Sorting
(HNDS). Unlike other non-dominated sort algorithms, HNDS does not need to compare each solution with all others
before determining its Pareto rank; it first sorts all solutions by their first objective values in ascending order, which
means that the first individual is a non-dominated solution and can never be dominated by any solutions further down the
list. Then, HNDS compares the first solution with all other solutions one by one; the solutions dominated by the first
solution will be discarded and will not be compared again in the determination of the current front. Thus, this method can
save a large number of duplicative and unnecessary dominance comparisons, which improves the efficiency of the
algorithm significantly. The running-time complexity of HNDS can achieve O(MN√N) in the best case, while in the worst case it is O(MN²), the same as that of Deb's fast non-dominated sort; its average time complexity, however, is much lower. In addition, it is worth noting that HNDS has a space
complexity of O (N). Theoretical analysis and experimental results both demonstrate that HNDS is more computationally
efficient than the state-of-the-art non-dominated sorting algorithms to which we compare it, especially for large
population sizes and many objectives.
The remainder of this paper is organized as follows. Section II gives a brief review of non-dominated sorting
methods and a few well-known existing improved approaches. Section III describes the concrete implementation of our
proposed non-dominated sorting algorithm and theoretically analyzes its time complexity. Extensive experiments are
carried out in Section IV, and the results are presented to empirically compare with three state-of-the-art non-dominated
sorting algorithms. Finally, in Section V, conclusions of this study are drawn.
II. NON-DOMINATED SORTING AND EXISTING IMPROVED APPROACHES
In this section, we give a simple description of non-dominated sorting and then briefly review currently widely used non-dominated sorting algorithms, together with an analysis of their computational complexities.
A. Non-dominated Sorting
Non-dominated sorting is mainly used to sort the solutions in a population according to the Pareto dominance
principle, which plays a very important role in the selection operation of many multi-objective evolutionary algorithms.
Although different non-dominated sorting algorithms have been proposed based on different ideas, most of them follow a
unified framework as shown in Fig. 1, and all of them achieve the same result, as shown in Fig. 2. In non-dominated
sorting, an individual A is said to dominate another individual B if and only if A is no worse than B in every objective and strictly better than B in at least one objective. Taking a minimization problem
without loss of generality, we assume that the solutions of a population P can be assigned to K Pareto fronts Fi, i =1,
2, … , K. Non-dominated sorting first selects all the non-dominated solutions from population P and assigns them to F1
(the rank 1 front); it then selects all the non-dominated solutions from the remaining solutions and assigns them to F2 (the
rank 2 front); it repeats the above process until all individuals have been assigned to a front. (Note that these fronts are
relative to the given population, without consideration of the optima of any process that may be generating them.)
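For illustration, the dominance test described above can be written as a small MATLAB helper (a minimal sketch assuming minimization and solutions stored as 1-by-M row vectors of objective values; the function name dominates is our own and is reused in the later sketches):

function d = dominates(a, b)
% DOMINATES is true if solution a Pareto-dominates solution b (minimization).
% a and b are 1-by-M row vectors of objective values.
    d = all(a <= b) && any(a < b);
end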
Since the non-dominated sorting algorithm [53] was first applied to the selection operation of multi-objective
evolutionary algorithms, there have been many improved versions of the original approach, all of which try to reduce the
number of redundant objective comparisons required to obtain the right dominance relationships among solutions. Here,
we review a few of the widely used non-dominated sorting methods.
1) Fast Non-dominated Sorting: Fast non-dominated sorting [14] is the first improved version of the original
non-dominated sorting algorithm. It compares every pair of solutions once and stores the results so as to avoid duplicate comparisons. For a population with N solutions, this method needs to conduct MN(N−1) objective comparisons, which means that it has a time complexity of O(MN²). In addition, for each solution p, it needs a set Sp to store the individuals that are dominated by p and a counter np to record the number of individuals that dominate p, so this method has a space complexity of O(N²). As the number of solutions increases, both the time complexity and
the space complexity increase quickly.
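The bookkeeping just described (the dominated sets Sp and the domination counters np) can be sketched as follows; this is our own minimal MATLAB rendering of the published procedure and uses the dominates helper defined earlier:

function frontNo = fast_nondominated_sort(P)
% P: N-by-M matrix of objective values (minimization).
% frontNo(i) is the front index assigned to solution i.
    N = size(P, 1);
    S = cell(N, 1);                 % S{p}: indices of solutions dominated by p
    n = zeros(N, 1);                % n(p): number of solutions dominating p
    frontNo = zeros(N, 1);
    for p = 1:N
        for q = 1:N
            if dominates(P(p, :), P(q, :))
                S{p}(end+1) = q;
            elseif dominates(P(q, :), P(p, :))
                n(p) = n(p) + 1;
            end
        end
    end
    front = find(n == 0)';          % the first front
    k = 1;
    while ~isempty(front)
        frontNo(front) = k;
        next = [];
        for p = front
            for q = S{p}
                n(q) = n(q) - 1;    % p has been ranked: release its dominatees
                if n(q) == 0
                    next(end+1) = q; %#ok<AGROW>
                end
            end
        end
        front = next;
        k = k + 1;
    end
end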
2) Arena’s Principle: Arena’s principle [45] is an efficient strategy to reduce the runtime of non-dominated
sorting. This method first selects a solution x from the population randomly, then it compares x with each other solution
(for example y) in the population one by one; if x dominates y, y is removed from the population; if x and y are mutually
non-dominating, y is removed to a set R; otherwise, x is replaced by y and the comparisons continue until all solutions in
the population have been compared. Finally, x will be a non-dominated solution, and it is put into the non-dominated
solution set Q. If there are any other solutions in set R, all of them are treated as new candidate solutions and the above
process is repeated until set R is empty. The time complexity of this approach can achieve O(MN·N̄), where N̄ is the number of non-dominated solutions in the population. Experimental results show that this approach outperforms
Deb’s fast non-dominated sort.
3) Deductive Sort: Deductive sort [46] is another novel approach to reduce the unnecessary objective
comparisons in non-dominated sorting. It assesses each solution in a fixed natural order and compares each solution with
all following solutions. In this process, any solution that is dominated by the current solution will be marked and ignored.
At the same time, if the current solution is found to be dominated by a certain solution, it will also be marked and the
algorithm will skip over it to assess the next solution. By recording the results of dominance comparisons between solutions, deductive sort can avoid some duplicate comparisons. It has a time complexity of O(MN√N) in the best case and a time complexity of O(MN²) in the worst case. It has a space complexity of O(N).
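A compact MATLAB rendering of the procedure just described (our own sketch; the dominance marks are reset for every front, and the dominates helper defined earlier is used):

function frontNo = deductive_sort(P)
% Deductive sort sketch (minimization). Solutions are scanned in a fixed
% order; solutions found to be dominated in the current pass are marked and
% skipped, and unmarked survivors are assigned to the current front.
    N = size(P, 1);
    frontNo = zeros(N, 1);           % 0 means not yet assigned to a front
    k = 0;
    while any(frontNo == 0)
        k = k + 1;
        dominated = false(N, 1);     % marks are reset for every front
        for i = 1:N
            if frontNo(i) == 0 && ~dominated(i)
                for j = i+1:N
                    if frontNo(j) == 0 && ~dominated(j)
                        if dominates(P(i, :), P(j, :))
                            dominated(j) = true;
                        elseif dominates(P(j, :), P(i, :))
                            dominated(i) = true;
                            break;
                        end
                    end
                end
                if ~dominated(i)
                    frontNo(i) = k;  % i survives the pass: current front
                end
            end
        end
    end
end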
III. A NEW STRATEGY FOR NON-DOMINATED SORTING
The main program of HNDS is given in Algorithm 1. Taking a minimization problem as an example, it first sorts
all solutions in set Q in ascending order by their first objective values, from which we can get the following conclusions:
1) According to the definition of a non-dominated solution, the first solution is a non-dominated solution. 2) No matter what the number of objectives is, this approach can obtain a non-dominated solution by comparing only the solutions' first objective values. 3) There are only two possible relationships between two solutions: either the preceding solution dominates the succeeding one, or they are mutually non-dominated. 4) A preceding solution can never be dominated by any succeeding solution. 5) Since all solutions have been sorted by their first objective values, it is necessary to compare only the remaining (M−1) objective values when determining their dominance relationships. This comparison strategy can
continue until all solutions in a population have been sorted.
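In MATLAB terms, the pre-sorting step amounts to the following (a sketch in which Q is assumed to be the N-by-M matrix of objective values; note that sortrows keeps tied rows in their original order, so conclusions 1), 3) and 4) above implicitly assume distinct first-objective values or a tie-breaking sort over the remaining objectives):

Qs = sortrows(Q, 1);   % candidates in ascending order of the first objective
first = Qs(1, :);      % a non-dominated solution: no later row can dominate it
% sortrows(Q) (no column argument) breaks ties over all objectives, which keeps
% conclusion 4) valid even when first-objective values coincide.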
Based on the sorted solutions, HNDS begins to determine the front of a given rank one rank at a time, starting from
F1 and ending with Fend. As pointed out earlier, a solution may dominate any one of the succeeding solutions but it can
never be dominated by any one of them, which means that the earlier a solution appears in the sorted list, the more
solutions it may dominate. With this in mind, we can further infer that if we compare the first solution with all
succeeding solutions one by one and discard the ones dominated by the first solution, it can reduce the number of
objective comparisons needed as much as possible in this context. That is because once a solution is discarded, it will not
again be compared with the first solution and also not with any solutions which are non-dominated with the first solution.
In order to apply this comparison mechanism throughout the whole algorithm, as shown in Fig. 4, the solutions
which are non-dominated with the first one should be transferred to the next round according to their current order. For
example, in the first round of comparison, solution 3 is the first solution that is non-dominated with solution 6, so it will
be ranked first in the second round of comparisons; solution 18 is the second one that is non-dominated with solution 6,
so it will be ranked second in the second round of comparison. If it is also the first solution non-dominated with solution
3 in the second round of comparison, it will be ranked first in the third round of comparisons. It is worth noting that the
solutions dominated by the first one in each round of comparisons do not necessarily follow this pattern, because a
solution may be non-dominated with the first solution in the current round of comparisons, but may be dominated by one of the first solutions in succeeding rounds; if this occurs, the latter part of list R may not be sorted according to the
first objective.
In this way, the first solution in each round of comparison is always a non-dominated solution, because it is always
non-dominated with the solutions that have been assigned to the current front. It also means that, from the second round
of comparison, HNDS can obtain a non-dominated solution without any dominance comparison, which is different from
other non-dominated sorting algorithms.
Within the framework of HNDS, the fronts of the solutions are determined one by one, in ascending order of their first objective. The pseudo-code of the implementation process is given in Algorithm 2. At the beginning of each iteration, HNDS first assigns the first solution of Q to the current front; it then compares the first
individual with the succeeding solutions one by one; if a succeeding solution is non-dominated with the first one, it will
be moved to set ND for the next round of comparison; if it is dominated by the first one, it will be moved to set R for the
determination of the next front. In this process, the solutions which are moved to set R do not involve sort operations,
while the solutions that are non-dominated with the first one should be transferred to set ND according to their original
order as shown in lines 5 to 10.
It is worth noting that the while loop in Algorithm 2 applies only to the case in which set Q contains more than one solution, which is different from the while loop in Algorithm 1. The while loop in Algorithm 2 is mainly used to
generate a non-dominated solution in each round of comparison and to exclude dominated solutions as soon as possible.
However, the while loop in Algorithm 1 is mainly used to ensure that each solution in set Q can be assigned to a front.
The premise of the while loop in Algorithm 1 is that the set Q is not empty, which includes three possible cases: 1) if set
Q is empty, the program will break. 2) If there is exactly one solution in set Q, it will be assigned to the next front in line
15 of Algorithm 2 and the program will stop at the end of this loop. 3) If there is more than one solution in set Q, the
program will jump to the while loop in Algorithm 2 until the number of solutions in set Q is no more than one, then
execute the while loop in Algorithm 1.
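Since Algorithms 1 and 2 appear as pseudo-code listings in the figures, the following compact MATLAB sketch of the overall procedure they describe may help to fix ideas. It is an illustrative rendering rather than the exact pseudo-code (the function name hnds and the set names ND and R follow the text, and the dominates helper defined earlier is used); it breaks ties in the first objective by the remaining objectives so that a preceding row is never dominated by a succeeding one:

function fronts = hnds(P)
% Hierarchical Non-Dominated Sorting (sketch). P: N-by-M objective matrix
% (minimization). fronts{k} holds the rows of P assigned to front k.
    fronts = {};
    Q = P;
    while ~isempty(Q)
        Q = sortrows(Q);                 % ascending first objective, ties broken
        Fk = [];                         % members of the current front
        R  = [];                         % candidates deferred to the next front
        while size(Q, 1) > 1
            first = Q(1, :);             % always non-dominated in this round
            Fk = [Fk; first];            %#ok<AGROW>
            ND = [];                     % non-dominated with 'first': next round
            for j = 2:size(Q, 1)
                if dominates(first, Q(j, :))
                    R = [R; Q(j, :)];    %#ok<AGROW> dominated: defer to next front
                else
                    ND = [ND; Q(j, :)];  %#ok<AGROW> keep current order
                end
            end
            Q = ND;
        end
        Fk = [Fk; Q];                    % at most one solution left in this round
        fronts{end+1} = Fk;              %#ok<AGROW>
        Q = R;                           % determine the next front from R
    end
end

For example, fronts = hnds(rand(100, 3)) sorts a random three-objective population, with fronts{1} holding the first front.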
As an example, we use the necessity analysis method [54] to compare the number of comparisons conducted by
HNDS with that of deductive sort on a population shown in Fig. 5. According to [55], the necessity of comparison
between two solutions A and B can be divided into four cases: 1) Solution A dominates solution B, which means that they belong to different fronts; if the front to which solution B belongs is next to the current front, it is only necessary to compare solution A with solution B, and the other comparisons are redundant. 2) Solutions A and B are mutually non-dominated; if they belong to the same front and that front is the current front, the comparison is necessary. 3) Solutions A and B are mutually non-dominated; if they belong to the same front and that front is not the current front, the comparison is unnecessary. 4) Solutions A and B are mutually non-dominated; if they belong to different fronts, the comparison is unnecessary.
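To make this taxonomy concrete, a small helper (our own, purely for illustration) can label a single comparison, given the true front indices fa and fb of the two solutions and the index fcur of the front currently being built:

function c = comparison_case(a, b, fa, fb, fcur)
% Classify a dominance comparison into the four cases described above.
    if dominates(a, b) || dominates(b, a)
        c = 1;                   % one solution dominates the other
    elseif fa == fb && fa == fcur
        c = 2;                   % same front, and it is the current front
    elseif fa == fb
        c = 3;                   % same front, but not the current one
    else
        c = 4;                   % mutually non-dominated, different fronts
    end
end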
Deductive sort determines the dominance relationships among solutions in a natural order. It must compare each
solution with all the following solutions one by one. For the population shown in Fig. 5, it first compares p1 with p2;
because p1 is not dominated by p2, the algorithm continues to compare p1 with p3; p1 is also not dominated by p3. Then
p1 is compared with p4, p5 and so on, until it is dominated by p6. At this point, p1 is ignored and will no longer be
compared in the determination of the current front. Then the algorithm continues to compare p2 with the following
solutions and so on. In the comparison of p4, the algorithm first compares it with p5; because it is not dominated by p5,
the algorithm continues to compare it with p6; it is also not dominated by p6, which means that it is a non-dominated
solution and should be assigned to F1. Solutions p5 and p6 will be assigned to a corresponding front similarly to the
assignment process for p4.
The details of the comparisons conducted by deductive sort for the population in Fig. 5 are listed in Table I. In general, a
total of 18 comparisons have been made. More concretely, there are three instances of Case 1, six instances of Case 2,
three instances of Case 3 and six instances of Case 4. The comparisons of Case 3 and Case 4 account for half of the total
comparisons, from which we know that there are quite a few redundant comparisons in deductive sort. Table II lists the
number of comparisons conducted by Hierarchical Non-Dominated Sorting, from which we can easily find that there are
three instances of Case 1, six instances of Case 2, and no comparisons resulting in Case 3 or Case 4. It only conducts the
necessary comparisons to accomplish non-dominated sorting for the population in Fig. 5, which are many fewer than
those of deductive sort. It is worth noting that although HNDS does not conduct any comparison of Case 4 for the
population in Fig. 5, this is not always the case for all populations. But from the idea of Hierarchical Non-Dominated
Sorting, we can further infer that HNDS will never encounter an instance of Case 3 for any form of population.
According to the implementation of HNDS, it must first sort all candidate solutions in ascending order according to
their first objective values. Then it continues using a hierarchical strategy to assign each solution to a certain front, one
by one. Its computational complexity can also be divided into two parts: the computational complexity of the sorting and
the computational complexity of the dominance comparisons.
Let N be the number of solutions in the population, and let M be the number of objectives. Let F be the number of fronts Fi that result from the completion of the HNDS algorithm, and denote by Ni the number of individuals in front Fi, where 1 ≤ i ≤ F. Before the determination of each front Fi, the number of current candidate solutions is Pi = N − N1 − N2 − … − Ni−1. The computational complexity of sorting these candidate solutions (using quick-sort) is O(Pi log Pi).
Then the total computational complexity of sorting candidate solutions that belong to different fronts is
T1 = O( Σ_{i=1}^{F} Pi log Pi )    (1)
From the idea of Hierarchical Non-Dominated Sorting, we know that it can determine at least one non-dominated
solution at each round of comparison, from which we can infer that it must conduct at most Ni rounds of comparisons to
determine each front. In the first round of comparisons, HNDS must compare the first solution with all following
solutions one by one, which requires (N-1) dominance comparisons. At the same time, HNDS moves the first solution to
the current front and moves the solutions dominated by the first one to a set R for the determination of the next front, while the remaining solutions keep their original order. Based on these remaining solutions, HNDS begins to conduct
the second round of comparisons; we know that the first solution is also a non-dominated solution. It is compared with
the following solutions one by one, which requires (N-2-d1) dominance comparisons, where d1 is the number of solutions
dominated by the first one in the first round of comparisons. During this comparison process, HNDS obtains the
candidate solutions for the third round of comparisons. In what follows, HNDS continues the third round of comparisons;
it must conduct (N-3-d1-d2) dominance comparisons, where d2 is the number of solutions dominated by the first one in
the second round of comparison. Similarly, for the ith round of comparison, HNDS must conduct (N-i-d1-d2-…-di-1)
dominance comparisons. HNDS repeats this process until all solutions belonging to the current front have been
determined. The number of dominance comparisons in determining front Fi can be calculated by adding up the comparisons conducted in each round; summing the per-round counts derived above gives Ci = Σj (Pi − j − d1 − d2 − … − dj−1), where the sum runs over the at most Ni rounds needed for front Fi.
The worst case of HNDS occurs in two cases: 1) all solutions in the population are non-dominated ones, which also
means that all solutions in the population can be assigned to one and only one front; then Ni=N1=N and di=0.
Alternatively, 2) all solutions in the population can be assigned to different fronts and each solution is dominated by all
solutions belonging to preceding fronts. In these two cases, all solutions must be compared with each other, and the
computational complexity of determining all fronts is:
T2_worst = M · N(N − 1)/2 = O(MN²).
In the best case, the N solutions in the population are evenly assigned to √N fronts, and each solution is dominated by all other solutions that belong to preceding fronts. According to HNDS, the number of comparisons for determining all fronts is N(√N − 1) + 1, and the computational complexity is:
T2_best = M · [N(√N − 1) + 1] = O(MN√N).
Through the above analysis, we can conclude that the total computational complexity of HNDS in the worst case is T1 + T2_worst = O(MN²).
In the implementation of HNDS, all solutions in the population should be sorted according to their first objective. At the same time, the solutions that are non-dominated with the first one at each round of comparison are moved to the next round according to their current order, and the dominated ones are moved to the candidate set for the next front. Thus, HNDS has a space complexity of O(N).
IV. EXPERIMENTAL RESULTS
In order to verify the efficiency of our approach, we conduct four groups of experiments and compare HNDS with three well-known non-dominated sorting approaches: fast non-dominated sort [14], Arena's principle [45] and deductive sort [46]. Each non-dominated sorting approach is embedded in the naive NSGA-II, and each approach
produces exactly the same output; the only difference is that they use different non-dominated sorting mechanisms and
consume different amounts of runtime.
The experiments are implemented on different kinds of populations, including randomly generated populations with different numbers of individuals, random populations with different numbers of objectives and random populations generated from different benchmark problems. The performance indicators of the non-dominated sorting algorithms are the number of dominance comparisons and the execution time they consume. All experiments are conducted on a PC with a Windows 7 SP1 64-bit operating system and a 1.60 GHz AMD E-350 CPU with 6.00 GB of memory.
Each algorithm is implemented in MATLAB R2014a.
A. Experiments on Random Populations with Different Numbers of Solutions
In the first group of experiments, we explore the relationship between the computational complexity and the number
of solutions (i.e., population size). The experiments are conducted on three types of populations: a population with two
objectives, a population with five objectives and a population with ten objectives. Each type of population is generated
randomly—in other words, each objective value of all solutions in a population is randomly generated from a uniform
distribution over the interval [0, 1]. The number of solutions in each type of population is set from 100 to 5000 with an
increment of 100.
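The random benchmark populations and the timing measurements can be reproduced with a few lines of MATLAB (a sketch; hnds stands for whichever sorting routine is being measured, and the particular sizes follow the text):

% Generate a random population and time one non-dominated sorting run.
N = 1000;                         % population size (100 to 5000 in the experiments)
M = 5;                            % number of objectives (2, 5 or 10)
P = rand(N, M);                   % objective values uniform on [0, 1]
tic;
fronts = hnds(P);                 % replace hnds with any sorter under test
t = toc;
fprintf('%d solutions, %d objectives: %d fronts, %.3f s\n', N, M, numel(fronts), t);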
For each non-dominated sorting algorithm, Figs. 6 and 7 present the numbers of comparisons of individual
objectives and the execution times versus the number of solutions, respectively. In these figures, each algorithm has been
run 10 times independently and the average values have been recorded.
From Figs. 6 and 7, we can see that for each non-dominated sorting approach, both the number of dominance
comparisons and the execution time grow polynomially as the number of solutions increases. For each type
of population, the naïve NDS from NSGA-II conducts the most comparisons and consumes the most execution time,
Arena’s principle ranks second. For the case of two objectives, HNDS clearly outperforms the other three non-dominated
sorting algorithms. For the case of five objectives, when the population size is no more than 700, HNDS does about the
same number or even a few more objective comparisons than deductive sort, but as the population size continues to
increase, HNDS clearly outperforms deductive sort. The case of ten objectives has similar results. Also, as the number of
objectives increases, the difference between HNDS and deductive sort decreases slightly, while the difference between
deductive sort and Arena’s principle increases slightly.
In order to further explore the relationship between the number of objectives and the execution times of different
non-dominated sorting algorithms, we consider random populations containing 200, 500 or 800 solutions, and for each
size, having 2, 5 or 10 objectives. Table III shows the mean values and standard deviations of execution times of each
algorithm. The best performance in each case is highlighted with bold font and gray shading. From Table III, we can see that HNDS outperforms the naïve NSGA-II, Arena's principle and deductive sort in all but one case. The exception is the population with 800 solutions and 5 objectives, for which deductive sort consumes less time than HNDS.
B. Explanation Regarding Populations with Fixed (Artificially Generated) Fronts
Many papers [46, 55] have used a very artificial procedure (given in Algorithm 3) to enable them to generate
problem instances with a specified number of fronts. It is our contention that the extremely uncharacteristic distribution
of the solutions generated by this algorithm means that it provides no useful insight into the relative performance of NDS
algorithms in sorting sets of instances that arise either from random generation or from populations in a multi-objective
optimization problem. That is, the performance of NDS algorithms on a set with a particular number of fronts should be evaluated either by 1) generating random populations and counting the number of fronts, then tabulating the sorting
performance for THIS number of fronts, or 2) analyzing the behavior of an NDS algorithm embedded in a particular
multi-objective optimization algorithm, once again tabulating the performance versus the number of fronts encountered
at any particular generation. This type of analysis will provide far greater insight into the expected relative performance
of the NDS algorithm in actual usage.
Let’s examine briefly the problem with the commonly used algorithm for generating problem instances with a
pre-specified number of fronts. Figure 8 shows the effect of generating an instance with a population of 9 solutions and
with 3 fronts. As you will observe, the generation algorithm (Algorithm 3) generates a problem instance with a very
special property: every individual in any front dominates ALL individuals in any higher-numbered front. This is FAR
from the usual distribution of individuals in a set of fronts, and changes significantly the ease of sorting such individuals.
Thus, it is a very poor benchmark. It is a convenient benchmark, in that it allows, for example, generating a population of
50 individuals in 50 fronts, but that is clearly very far from a typical distribution of fronts in a population with 50
individuals.
Therefore, results on this benchmark will not be analyzed in this paper, although they again proved favorable to
HNDS.
Algorithm 3: Creation of populations with fixed numbers of fronts
% Generate populations PF{t} (t = 1, 2, ..., 50) whose non-dominated sorting
% yields a fixed number t of fronts.
N0 = 2000;                          % nominal number of solutions
M  = 10;                            % number of objectives
P  = rand(N0, M);                   % generate a random population P
P  = sort(P, 1);                    % sort every objective column in ascending order
f  = P(:, 2);                       % second objectives of all solutions
S  = [];                            % second-objective columns of all fixed-front populations
for m = 1:50                        % the number of fronts is set from 1 to 50
    N = N0;
    while mod(N, m) ~= 0            % make sure that each front has the same number of individuals
        N = N - 1;
    end
    n = N / m;                      % the number of solutions in each front
    s = [];                         % second objectives of the solutions in the current fixed fronts
    for i = 0:(m - 1)               % create the second objectives of each front
        j = i * n;
        block = sort(f(j+1 : j+n), 'descend');
        s = [s; block];             %#ok<AGROW>
    end
    if mod(N0, m) ~= 0              % if all solutions cannot be assigned equally,
        j = (m - 1) * n;            % move the remaining solutions to the last front
        tail = sort(f(j+1 : end), 'descend');
        s = [s(1 : n*(m-1)); tail];
    end
    S = [S, s];                     % store the second objectives of each fixed front
end
PF = cell(1, 50);                   % populations with fixed numbers of fronts
for t = 1:50
    PF{t} = [P(:, 1), S(:, t), P(:, 3:M)];
end
C. Experiments on Random Populations with Different Numbers of Objectives
In order to further explore the influence of the number of objectives on random populations, we conduct a group of experiments on populations with 1000, 3000 and 5000 solutions, respectively. The number of objectives is set from 2 to
30 with an increment of one for each type of population, and each objective value of each solution is randomly
generated from a uniform distribution over the interval [0, 1]. The relationship between the number of fronts and the
number of objectives for random populations is shown in Fig. 9, from which we can see that the number of fronts
decreases very quickly as the number of objectives increases.
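The trend in Fig. 9 is easy to reproduce by counting the fronts of freshly generated random populations as M grows (a sketch using the hnds routine from the earlier listing):

N = 1000;                                   % also 3000 and 5000 in the experiments
for M = 2:30
    fronts = hnds(rand(N, M));              % fronts of a fresh random population
    fprintf('M = %2d objectives: %3d fronts\n', M, numel(fronts));
end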
Figs. 10 and 11 present the results of the four non-dominated sorting algorithms on populations used in this group of
experiments. We can easily see that the number of comparisons conducted by fast non-dominated sort remains the same
as the number of objectives increases. The numbers of comparisons conducted by the other three non-dominated sorting
algorithms tend to be similar to the numbers of fast non-dominated sort when the number of fronts exceeds 15 to 20; for
lower numbers of fronts, HNDS performs the fewest comparisons. For populations with 1000 and 3000 solutions, fast
non-dominated sort consumes the most execution time, while Arena’s principle and deductive sort rank second and third,
respectively, and HNDS consumes the least. In addition, for the population with 5000 solutions, fast non-dominated sort
also consumes the most execution time, HNDS consumes the least. The execution time consumed by Arena’s principle
tends to be similar to or even less than that of deductive sort as the number of objectives increases.
D. Experiments on the Various Non-dominated Sorting Algorithms Embedded in the NSGA-II Framework
To demonstrate the performance of HNDS in multi-objective evolutionary algorithms, we embed HNDS and the
other three non-dominated sorting algorithms into the naïve NSGA-II, respectively. In the experiments, we choose the TNK problem and the DTLZ2 problem [56] as the test problems on which to compare the algorithms' performances. The TNK problem has a Pareto front composed of several discontinuous curves, while the Pareto front of the DTLZ2 problem is the portion of the unit hypersphere lying in the non-negative orthant. The number of generations is set to 200, simulated binary crossover (SBX) is used, and the other parameters are the same as in [57]. For the TNK problem, 2 objectives are used, and populations contain 200, 300 or 500 individuals. For the DTLZ2 problem, which has a concave, continuous Pareto front, 2, 3, 5, 8 and
10 objectives are used, and population size is 200 individuals. In addition, to ensure that each non-dominated sorting
algorithm is tested on exactly the same problems, matching random seeds are employed across algorithms.
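One simple way to realize the matching random seeds (our own illustration, using MATLAB's rng) is to reset the random stream to the same seed before each run, so that every sorting variant is exercised on exactly the same sequence of populations:

seeds = 1:10;                         % one seed per independent run
for r = 1:numel(seeds)
    rng(seeds(r), 'twister');         % identical random stream for every sorter
    % ... run the NSGA-II framework with the chosen non-dominated sorter here ...
end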
Table IV shows the time ratio (TR) [55] of the NSGA-II framework embedded with different non-dominated sorting
approaches. The best performances are shown in bold font and gray shading. The definition of time ratio is shown in
formula 3. The NSGA-II framework embedded with each non-dominated sorting algorithm is run ten times
independently and each execution time is recorded. The time ratio is the ratio of the total execution time consumed by the
NSGA-II framework embedded with a certain non-dominated sorting algorithm and the total execution time consumed
by the naive NSGA-II framework:
TR = ( Σ_{i=1}^{10} Ti ) / ( Σ_{i=1}^{10} Ti,naive )    (3)
where Ti is the execution time of the i-th run of the framework with the sorting algorithm under consideration and Ti,naive is that of the i-th run of the naive framework.
From Table IV, we can easily see that the NSGA-II frameworks embedded with HNDS consume the least execution
time for all cases, which also means that HNDS can effectively reduce the computational intensity of multi-objective
evolutionary algorithms in terms of execution time.
V. CONCLUSION
In this paper, we propose a novel non-dominated sorting algorithm, called HNDS, to speed up the non-dominated
sorting of a population of solutions. This approach adopts a hierarchical search strategy to reduce the number of
dominance comparisons, so as to reduce the computational complexity. HNDS has a best-case time complexity of O(MN√N) and a space complexity of O(N), although its worst-case time complexity is O(MN²), similar to several others. But
experimental results show that in terms of dominance comparisons and objective comparisons performed and execution
time, HNDS outperforms fast non-dominated sort, Arena’s principle and deductive sort across a wide range of situations.
HNDS has played an effective role in non-dominated sorting, especially for populations with large numbers of
solutions and many objectives. It is worth noting that in the process of creating candidates for the second through the last front, the sorting of all these candidate solutions by their first objectives may take advantage of their already
partially-sorted order. Moving these solutions according to their initial order in the first objective can save many
comparisons and reduce runtime. We plan to explore sorting that takes advantage of the special partially-sorted nature of
the solutions internal to the HNDS process. This could prove to be beneficial, especially for populations with large
numbers of solutions and many objectives.
VI. ACKNOWLEDGMENTS
This work was supported in part by the National High-Tech R&D Program of China (863 Program) under grant
2013AA102305, in part by the National 863 Program of China under grant 2013AA103006-2, in part by the National
Natural Science Foundation of China under grant 61573258, and in part by the U.S. National Science Foundation’s
BEACON Center for the Study of Evolution in Action, under cooperative Agreement DBI-0939454.
VII. REFERENCES
[1] Wagner T, Beume N, Naujoks B. Pareto-, aggregation-, and indicator-based methods in many-objective optimization. In: Obayashi
S, Deb K, Poloni C, Hiroyasu T, Murata T, editors. Evolutionary Multi-Criterion Optimization, Proceedings. Lecture Notes in
Computer Science. 44032007. p. 742-56.
[2] Hisao I, Noritaka T, Yusuke N, editors. Evolutionary many-objective optimization: A short review. 2008 IEEE Congress on
Evolutionary Computation (IEEE World Congress on Computational Intelligence); 2008 1-6 June 2008.
[3] Zhou AM, Qu BY, Li H, Zhao SZ, Suganthan PN, Zhang QF. Multiobjective evolutionary algorithms: A survey of the state of the
art[J]. Swarm and Evolutionary Computation. 2011,1(1):32-49.
[4] Zitzler E, Thiele L. Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach[J]. IEEE
Transactions on Evolutionary Computation. 1999,3(4):257-71.
[5] Yuan Y, Xu H. Multiobjective Flexible Job Shop Scheduling Using Memetic Algorithms[J]. Ieee Transactions on Automation
Science and Engineering. 2015,12(1):336-53.
[6] Khare V, Yao X, Deb K. Performance Scaling of Multi-objective Evolutionary Algorithms: Springer Berlin Heidelberg; 2003.
376-90 p.
[7] Coello CAC, Pulido GT, Lechuga MS. Handling multiple objectives with particle swarm optimization[J]. IEEE Transactions on
Evolutionary Computation. 2004,8(3):256-79.
[8] Gerasimov DN, Lyzlova MV. Adaptive control of microclimate in greenhouses[J]. Journal of Computer and Systems Sciences
International. 2014,53(6):896-907.
[9] Ramdani M, Hamza A, Boughamsa M, Ieee. Multiscale fuzzy model-based short term predictive control of greenhouse
microclimate. Proceedings 2015 Ieee International Conference on Industrial Informatics. IEEE International Conference on
Industrial Informatics INDIN2015. p. 1348-53.
[10] Herrero JG, Berlanga A, Lopez JMM. Effective Evolutionary Algorithms for Many-Specifications Attainment: Application to Air
Traffic Control Tracking Filters[J]. Ieee Transactions on Evolutionary Computation. 2009,13(1):151-68.
[11] Fleming PJ, Purshouse RC, Lygoe RJ. Many-objective optimization: An engineering design perspective. In: Coello CAC, Aguirre
AH, Zitzler E, editors. Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science. 34102005. p. 14-32.
[12] Sayyad AS, Menzies T, Ammar H, editors. On the value of user preferences in search-based software engineering: A case study in
software product lines. 2013 35th International Conference on Software Engineering (ICSE); 2013 18-26 May 2013.
[13] Praditwong K, Harman M, Yao X. Software Module Clustering as a Multi-Objective Search Problem[J]. IEEE Transactions on
Software Engineering. 2011,37(2):264-82.
[14] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II[J]. Ieee Transactions on
Evolutionary Computation. 2002,6(2):182-97.
[15] Zitzler E, Laumanns M, Thiele L. SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization[J].
Evolutionary Methods for Design, Optimization, and Control. 2002:95-100.
[16] Storn R, Price K. Differential evolution - A simple and efficient heuristic for global optimization over continuous spaces[J].
Journal of Global Optimization. 1997,11(4):341-59.
[17] Robic T, Filipic B. DEMO: Differential evolution for multiobjective optimization. In: Coello CAC, Aguirre AH, Zitzler E, editors.
Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science. 34102005. p. 520-33.
[18] Ishibuchi H, Doi T, Nojima Y. Incorporation of scalarizing fitness functions into evolutionary multiobjective optimization
algorithms. In: Runarsson TP, Beyer HG, Burke E, MereloGuervos JJ, Whitley LD, Yao X, editors. Parallel Problem Solving from
Nature - Ppsn Ix, Proceedings. Lecture Notes in Computer Science. 41932006. p. 493-502.
[19] Kennedy J, Eberhart R, editors. Particle swarm optimization. Neural Networks, 1995 Proceedings, IEEE International Conference
on; 1995 Nov/Dec 1995.
[20] Coello CAC, Lechuga MS, editors. MOPSO: a proposal for multiple objective particle swarm optimization. Evolutionary
Computation, 2002 CEC '02 Proceedings of the 2002 Congress on; 2002 2002.
[21] Chaharsooghi SK, Kermani AHM, Ieee. An intelligent multi-colony multi-objective ant colony optimization (ACO) for the 0-1
knapsack problem2008. 1195-202 p.
[22] Zhang Q, Li H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition[J]. IEEE Transactions on
Evolutionary Computation. 2007,11(6):712-31.
[23] Van Veldhuizen DA, Lamont GB. Multiobjective Evolutionary Algorithms: Analyzing the State-of-the-Art[J]. Evolutionary
Computation. 2000,8(2):125-47.
[24] Chen MR, Weng J, Li X, Zhang X. Handling Multiple Objectives with Integration of Particle Swarm Optimization and Extremal
Optimization. In: Wen Z, Li T, editors. Foundations of Intelligent Systems. Advances in Intelligent Systems and Computing. 2772014.
p. 287-97.
[25] Srinivas N, Deb K. Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms[J]. Evolutionary
Computation. 1994,2(3):221-48.
[26] Corne DW, Knowles JD, Oates MJ, editors. The Pareto Envelope-Based Selection Algorithm for Multiobjective Optimization.
International Conference on Parallel Problem Solving From Nature; 2000.
[27] Karahan I, Koksalan M. A Territory Defining Multiobjective Evolutionary Algorithms and Preference Incorporation[J]. IEEE
Transactions on Evolutionary Computation. 2010,14(4):636-64.
[28] Fonseca CM, Fleming PJ, editors. Genetic algorithms for multiobjective optimization: formulation, discussion and generalization.
Proceedings of ICGA-93: Fifth International Conference on Genetic Algorithms, 17-22 July 1993; 1993; San Mateo, CA, USA:
Morgan Kaufmann.
[29] Knowles J, Corne D, editors. The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective
optimisation. Evolutionary Computation, 1999 CEC 99 Proceedings of the 1999 Congress on; 1999 1999.
[30] Horn J, Nafpliotis N, Goldberg DE, editors. A niched Pareto genetic algorithm for multiobjective optimization. Evolutionary
Computation, 1994 IEEE World Congress on Computational Intelligence, Proceedings of the First IEEE Conference on; 1994 27-29
Jun 1994.
[31] Corne D W JNR, Knowles J D, et al., editor PESA-II: Region-based selection in evolutionary multiobjective optimization.
Proceedings of the genetic and evolutionary computation conference; 2001; San Francisco: Morgan Kaufmann Publisher.
[32] Deb K, Jain H. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting
Approach, Part I: Solving Problems With Box Constraints[J]. Ieee Transactions on Evolutionary Computation. 2014,18(4):577-601.
[33] Jain H, Deb K. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting
Approach, Part II: Handling Constraints and Extending to an Adaptive Approach[J]. IEEE Transactions on Evolutionary Computation.
2014,18(4):602-22.
[34] Yang SX, Li MQ, Liu XH, Zheng JH. A Grid-Based Evolutionary Algorithm for Many-Objective Optimization[J]. Ieee
Transactions on Evolutionary Computation. 2013,17(5):721-36.
[35] Zhang XY, Tian Y, Jin YC. A Knee Point-Driven Evolutionary Algorithm for Many-Objective Optimization[J]. Ieee Transactions
on Evolutionary Computation. 2015,19(6):761-76.
[36] Li K, Deb K, Zhang QF, Kwong S. An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and
Decomposition[J]. Ieee Transactions on Evolutionary Computation. 2015,19(5):694-716.
[37] Yuan Y, Xu H, Wang B, Yao X. A New Dominance Relation-Based Evolutionary Algorithm for Many-Objective Optimization[J].
IEEE Transactions on Evolutionary Computation. 2016,20(1):16-37.
[38] Xue F, Sanderson AC, Graves RJ, Ieee. Pareto-based multi-objective differential evolution2003. 862-9 p.
[39] Huband S, Hingston P, Barone L, While L. A review of multiobjective test problems and a scalable test problem toolkit[J]. Ieee
Transactions on Evolutionary Computation. 2006,10(5):477-506.
[40] Vira C, Haimes YY. Multiobjective decision making: theory and methodology: North-Holland; 1983.
[41] Jensen MT. Reducing the run-time complexity of multiobjective EAs: The NSGA-II and other algorithms[J]. Ieee Transactions on
Evolutionary Computation. 2003,7(5):503-15.
[42] Kung HT, Luccio F, Preparata FP. On finding the maxima of a set of vectors[J]. Journal of the ACM (JACM). 1975,22(4):469-76.
[43] Fortin FA, Grenier S, Parizeau M. Generalizing the Improved Run-Time Complexity Algorithm for Non-Dominated Sorting[J].
Gecco'13: Proceedings of the 2013 Genetic and Evolutionary Computation Conference. 2013:615-22.
[44] Ding LX, Zeng SY, Kang LS, Ieee. A fast algorithm on finding the non-dominated set in multi-objective optimization2003.
2565-71 p.
[45] Tang S, Cai Z, Zheng J. A Fast Method of Constructing the Non-Dominated Set: Arena's Principle. Guo MZ, Zhao L, Wang LP,
editors2008. 391-5 p.
[46] McClymont K, Keedwell E. Deductive Sort and Climbing Sort: New Methods for Non-Dominated Sorting[J]. Evolutionary
Computation. 2012,20(1):1-26.
[47] Drozdik M, Akimoto Y, Aguirre H, Tanaka K. Computational Cost Reduction of Nondominated Sorting Using the
M-Front[J]. IEEE Transactions on Evolutionary Computation. 2015,19(5):659-78.
[48] Chuan S, Ming C, Zhongzhi S, editors. A Fast Nondominated Sorting Algorithm. 2005 International Conference on Neural
Networks and Brain; 2005 13-15 Oct. 2005.
[49] Fang H, Wang Q, Tu YC, Horstemeyer MF. An Efficient Non-dominated Sorting Method for Evolutionary Algorithms[J].
Evolutionary Computation. 2008,16(3):355-84.
[50] Zhou X, Shen J, Shen JX. An Immune Recognition Based Algorithm for Finding Non-dominated Set in Multi-objective
Optimization. Zhang Y, Tan H, Luo Q, editors2008. 291-6 p.
[51] Zheng JH, Shi ZZ, Ling CX, Xie Y. A new method to construct the non-dominated set in multi-objective genetic algorithms. In:
Shi Z, He Q, editors. Intelligent Information Processing Ii. International Federation for Information Processing. 1632005. p. 457-70.
[52] Fieldsend JE, Everson RM, Singh S. Using unconstrained elite archives for multiobjective optimization[J]. IEEE Transactions on
Evolutionary Computation. 2003,7(3):305-23.
[53] Goldberg DE. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley; 1989.
[54] Zhang XY, Tian Y, Cheng R, Jin YC. An Efficient Approach to Nondominated Sorting for Evolutionary Multiobjective
Optimization[J]. Ieee Transactions on Evolutionary Computation. 2015,19(2):201-13.
[55] Zhou Y, Chen Z, Zhang J. Ranking Vectors by Means of the Dominance Degree Matrix[J]. IEEE Transactions on Evolutionary
Computation. 2016,PP(99):1-.
[56] Deb K, Thiele L, Laumanns M, Zitzler E. Scalable test problems for evolutionary multiobjective optimization[J]. Evolutionary
Multiobjective Optimization Theoretical Advances and Applications. 2005:105-45.
[57] Deb K, Sundar J. Reference point based multi-objective optimization using evolutionary algorithms. Keijzer M, editor. New York:
Assoc Computing Machinery; 2006. 635-42 p.
Chunteng Bao received the B.S. degree in Mechanical design manufacturing and automation from Henan
University of Technology, Zhengzhou, China, in 2010, and the M.S. degree in control theory and control
engineering from Shanghai Maritime University, Shanghai, China, in 2013. He is currently pursuing the Ph.D.
degree in Control Science and Engineering at the College of Electronics and Information Engineering, Tongji
University, Shanghai, China. His research interests include intelligent control, evolutionary computation, and
multi-objective optimization.
Lihong Xu received the Ph.D. degree from the Department of Automatic Control, Southeast University in Nanjing,
China in 1991. He became a member of IEEE in 1989.
In 1994, he was specially appointed as a professor in Southeast University. He transferred to Tongji University
in August, 1997, and has been a professor in Tongji University since then. His research fields include control
theory, computational intelligence, and optimization theory. He got the first prize of Science and Technology
Advancement Award of Ministry of Education of China in 2005 and the second prize of National Science and
Technology Advancement Award of China in 2007, respectively.
Dr. Xu is a member of ACM, and the president of IEEE CIS’s Shanghai Chapter. He was the co-Chair of the
2009 GEC Summit in Shanghai. He is now doing joint research work as a visiting professor and advisor with the
greenhouse research team of BEACON, USA.
Erik D. Goodman received the B.S. degree in mathematics and the M.S. degree in system science from Michigan
State University (MSU), East Lansing, MI, USA, and the Ph.D. degree in computer and communication sciences
from the University of Michigan, Ann Arbor, MI, USA, in 1972.
He is currently the Director of the BEACON Center for the Study of Evolution in Action, an NSF Science
and Technology Center, MSU. He has been a Professor with the Department of Electrical and Computer
Engineering, MSU, since 1984, where he also co-directs the Genetic Algorithms Research and Applications Group.
He is the Co-Founder in 1999 and the former Vice President for Technology of Red Cedar Technology, Inc. He is
an Advisory Professor with Tongji University, Shanghai, China, and East China Normal University, Shanghai.
Dr. Goodman is a Senior Fellow of the International Society for Genetic and Evolutionary Computation. He
was the Chair of the Executive Board of that society from 2001 to 2004. From 2005 to 2007, he was the Founding
Chair of the ACM SIG for Genetic and Evolutionary Computation.
Leilei Cao received the B.S. degree in vehicle engineering from Yangzhou University, Yangzhou, China, in 2011,
and the M.S. degree in vehicle engineering from Jiangsu University, Zhenjiang, China, in 2014. He is currently
pursuing the Ph.D. degree in Control Science and Engineering at the College of Electronics and Information
Engineering, Tongji University, Shanghai, China. His research interests include evolutionary computation, dynamic
optimization, and multi-objective optimization.
Figure 2. An example of a population with 12 solutions and 4 Pareto fronts (plot in the f1-f2 objective space; not reproduced here).
Figure 4. Transfer rule of the solutions non-dominated with the first one in each round of comparison (diagram of sets Q and R over the 1st, 2nd and 3rd rounds of comparison; not reproduced here).
Figure 5. The example population of six solutions p1-p6 in the f1-f2 plane, forming fronts F1 and F2 (plot not reproduced here).
Figure 9. Relationship between the number of fronts and the number of objectives for random populations (N = 1000, 2000, 3000, 4000 and 5000).
TABLE I
THE NUMBER OF DOMINANCE COMPARISONS CONDUCTED BY DEDUCTIVE SORT FOR THE POPULATION SHOWN IN FIG. 5
Front | Comparison | Operation | Comparison Result
F1 | (p1, p2) | | Case 3
F1 | (p1, p3) | | Case 3
F1 | (p1, p4) | | Case 4
F1 | (p1, p5) | | Case 4
F1 | (p1, p6) | p1 is ignored | Case 1
F1 | (p2, p3) | | Case 3
F1 | (p2, p4) | | Case 4
F1 | (p2, p5) | | Case 4
F1 | (p2, p6) | p2 is ignored | Case 1
F1 | (p3, p4) | | Case 4
F1 | (p3, p5) | | Case 4
F1 | (p3, p6) | p3 is ignored | Case 1
F1 | (p4, p5) | | Case 2
F1 | (p4, p6) | p4 is assigned to F1 | Case 2
F1 | (p5, p6) | p5 and p6 are assigned to F1 | Case 2
F2 | (p1, p2) | | Case 2
F2 | (p1, p3) | p1 is assigned to F2 | Case 2
F2 | (p2, p3) | p2 and p3 are assigned to F2 | Case 2
Totals: Case 1: 3; Case 2: 6; Case 3: 3; Case 4: 6; Total: 18
TABLE II
THE NUMBER OF DOMINANCE COMPARISONS CONDUCTED BY HNDS FOR THE POPULATION SHOWN IN FIG. 5
Front | Comparison | Operation | Comparison Result
F1 | (p6, p3) | | Case 1
F1 | (p6, p2) | | Case 1
F1 | (p6, p1) | | Case 1
F1 | (p6, p5) | | Case 2
F1 | (p6, p4) | p6 is assigned to F1 | Case 2
F1 | (p5, p4) | p5 is assigned to F1; p4 is assigned to F1 | Case 2
F2 | (p3, p2) | | Case 2
F2 | (p3, p1) | p3 is assigned to F2 | Case 2
F2 | (p2, p1) | p2 is assigned to F2; p1 is assigned to F2 | Case 2
Totals: Case 1: 3; Case 2: 6; Case 3: 0; Case 4: 0; Total: 9
TABLE III
EXECUTION TIME OF HNDS ON NINE TYPES OF POPULATIONS COMPARED WITH THREE WELL-KNOWN
NON-DOMINATED SORTING ALGORITHMS. THE BEST PERFORMANCE IS HIGHLIGHTED WITH BOLD FONT AND
GRAY SHADING.
TABLE IV
TIME RATIO OF THE NSGA-II FRAMEWORK EMBEDDED WITH DIFFERENT NON-DOMINATED SORTING ALGORITHMS ON THE TNK AND DTLZ2 PROBLEMS. THE BEST PERFORMANCES ARE SHOWN IN BOLD FONT AND GRAY SHADING.
Test problem | Number of solutions | Number of objectives | TR: NSGA-II | TR: Arena's Principle | TR: Deductive Sort | TR: HNDS
TNK | 200 | 2 | 1.000 | 0.623 | 0.412 | 0.409
TNK | 300 | 2 | 1.000 | 0.492 | 0.319 | 0.278
TNK | 500 | 2 | 1.000 | 0.465 | 0.321 | 0.292