A Multiobjective Multitask Optimization Algorithm Using Transfer Rank
The effectiveness of the proposed algorithm was verified by studying benchmark MMO problems. The experimental results showed the proposed algorithm was more effective than other conventional MMO techniques.

Index Terms—KNN model classifier, knowledge transfer, multiobjective multitask optimization (MMO), transfer rank.

subject to x_k ∈ ∏_{s=1}^{n_k} [a_s^k, b_s^k], k = 1, 2, . . . , K    (1)

where ∏_{s=1}^{n_k} [a_s^k, b_s^k] represents the decision space of the kth task, n_k denotes the dimension of the kth task decision space, and F_k represents the kth multiobjective optimization task. f_k : ∏_{s=1}^{n_k} [a_s^k, b_s^k] → R^{m_k} consists of m_k real-valued objective functions, and R^{m_k} represents the objective space of the kth task.
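To make the formulation concrete, the following minimal sketch (the two task functions and all names here are illustrative assumptions, not taken from the article) evaluates K = 2 tasks, each with its own box-bounded decision space and its own objective vector in R^{m_k}:

```python
import numpy as np

def zdt1_like(x):
    """A 2-objective task on [0, 1]^10 (illustrative, ZDT1-style)."""
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

def sphere_pair(x):
    """Another 2-objective task on a differently bounded 5-D space."""
    f1 = np.sum(x ** 2)
    f2 = np.sum((x - 2.0) ** 2)
    return np.array([f1, f2])

# Each task k carries its own decision space prod_s [a_s^k, b_s^k].
tasks = [
    {"f": zdt1_like, "lower": np.zeros(10), "upper": np.ones(10)},
    {"f": sphere_pair, "lower": -5 * np.ones(5), "upper": 5 * np.ones(5)},
]

rng = np.random.default_rng(0)
for k, t in enumerate(tasks):
    x = rng.uniform(t["lower"], t["upper"])  # x_k sampled in its own box
    print(k, t["f"](x).shape)                # each f_k maps into R^{m_k}
```

The point of the sketch is only that the tasks share nothing but the optimizer: dimensions, bounds, and objectives all differ per task, which is exactly what makes cross-task transfer nontrivial.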
introduced a denoising auto-encoder for explicit genetic transfer. A method of identifying valuable solutions for efficient knowledge transfer (EMTET) [38] was later proposed. In EMTET, the neighbors of solutions that achieve positive transfer are also transferred.

Several new multitasking algorithms have been proposed recently. For example, EMaTOMKT [39] is a multisource knowledge transfer technique that includes three strategies: 1) adaptive mating probability (AMP); 2) maximum mean discrepancy (MMD)-based task selection (MTS); and 3) local distribution estimation knowledge transfer (LEKT). The MOMFEA-SADE [40] algorithm is based on subspace alignment (SA) and self-adaptive differential evolution (DE), which are used to reduce the probability of negative transfer and to generate promising solutions. EEMTA [41] is a credit assignment technique used for selecting appropriate tasks based on feedback across solutions.

Although these algorithms have shown promise for solving MMO problems, their ability to find solutions that achieve positive transfer could be improved further. Some common knowledge exists between MMO tasks [42], which plays a significant role in improving algorithm performance [43], [44]. In MO-MFEA [31], all solutions are selected as transferred solutions with the same probability; in other words, transfer decisions are made randomly. Solutions that are not useful for other tasks are therefore likely to be transferred, which wastes computing resources. In EMEA [37], the nondominated solutions in each task are selected as transferred solutions, the effectiveness of which depends on a high similarity between tasks. Drawing on machine learning, EMTIL [45] uses an incremental Bayes classifier to divide a population into positive-transferred and negative-transferred solutions, selecting the positive ones for transfer. This division of a population into two categories represents an improvement over previous algorithms, but a two-way split remains relatively coarse.

In this study, a KNN model classifier is used to divide a population into four categories, which is more accurate than a Bayes classifier. When it is unknown how one task can assist another, the accurate selection of useful solutions for knowledge transfer is of significant importance. Since most transferred solutions belong to either a positive or a negative category, neighboring solutions have a higher probability of falling into the same group. This study also defines a transfer rank, which quantifies the priority of transferred solutions in order to improve the probability of positive transfer. However, there are often multiple solutions with the same transfer rank. To solve this problem, a KNN model classifier [46] is introduced to divide solutions with the same rank into four categories and thereby increase the probability of positive transfer. The KNN classifier is used not only because it has no neighborhood parameter but also because it solves classification problems efficiently, with accuracy that is relatively high compared to a Bayes classifier. The proposed MMO algorithm using transfer rank and a KNN model (MMOTK) was used to select transferred solutions: by calculating individual transfer ranks, solutions with higher transfer rank were selected first, and the KNN model classifier could be used to distinguish solutions with the same transfer rank.

The primary contributions of this article are as follows.
1) Transfer rank is calculated in a population, and solutions with a high transfer rank are selected first.
2) A KNN model classifier is used to divide solutions with the same transfer rank into four categories: a) solutions within the positive transfer area; b) solutions near the positive transfer area; c) solutions near the negative transfer area; and d) solutions within the negative transfer area. The first category is prioritized.

The remainder of this article is organized as follows. Section II introduces the details of transfer rank and KNN model classification and further analyzes the performance of these two strategies. Section III introduces the proposed algorithm. Section IV reports and analyzes the experimental results. The final section concludes this article.

II. TRANSFER RANK AND KNN MODEL CLASSIFICATION

In this section, the definition of transfer rank and the construction process for the KNN model classifier are introduced.

A. Definition of Transfer Rank

Transfer rank, defined for a population P = {p_1, . . . , p_N}, is used to quantify the priority of transferred solutions and determine the optimal choice. The historical transferred solutions of size u can be represented as HTS = {s_1, s_2, . . . , s_u}. In generation t, HTS_t is composed of the positive transferred solutions Pos_t and the negative transferred solutions Neg_t ∪ Neg_{t−1} ∪ · · · ∪ Neg_{t−m+1}. Here, Neg_t denotes solutions labeled 1, i.e., dominated in the corresponding target task. It is worth noting that, in this study, a transferred solution achieves a positive transfer if it is nondominated in the target task. The steps for calculating transfer rank can be described as follows.
1) Calculate the distance matrix D between the historical transferred solutions HTS and the current population P.
2) For each s_j in HTS, use D to identify the solution p_i at the smallest distance from s_j, and place s_j into the subset Ω_i of historical transferred solutions associated with p_i. This subset can be expressed as follows:

Ω_i = {s ∈ HTS | d(s, p_i) ≤ d(s, p_j), j = 1, . . . , N}    (2)

where d(s, p_i) is the Euclidean distance between s and p_i. In other words, s is contained in Ω_i if and only if p_i is the closest to s among all N solutions. It is also worth noting that, since the number of solutions in HTS is smaller than N, some sets Ω_i will be empty.
3) For each s in HTS, if s is a positive-transferred solution, its associated degree is α_s = 1; otherwise, α_s = −1.
These steps determine the associated degrees α_s and define the corresponding transfer rank.

Definition 1 (Transfer Rank): If Ω_i ≠ ∅, the transfer rank φ_i of p_i is given by φ_i = Σ_{s∈Ω_i} α_s; otherwise, φ_i = 0.

Solutions with a high transfer rank are adjacent to multiple positive transferred solutions and have a higher probability of achieving positive transfer.
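As a minimal sketch of the three steps above (variable names are mine, and plain NumPy stands in for the article's implementation), each historical transferred solution votes +1 or −1 for its nearest population member, and φ_i sums the votes:

```python
import numpy as np

def transfer_rank(population, hts, alpha):
    """population: (N, d); hts: (u, d) historical transferred solutions;
    alpha: (u,) associated degrees, +1 positive / -1 negative.
    Returns phi: (N,) transfer ranks (0 where the subset is empty)."""
    # Step 1: distance matrix D between HTS and the current population.
    diff = hts[:, None, :] - population[None, :, :]
    D = np.linalg.norm(diff, axis=2)        # shape (u, N)
    # Step 2: each s_j joins the subset of its nearest p_i, as in eq. (2).
    nearest = np.argmin(D, axis=1)          # shape (u,)
    # Step 3 / Definition 1: phi_i sums alpha over the associated subset.
    phi = np.zeros(len(population))
    np.add.at(phi, nearest, alpha)
    return phi

P = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
HTS = np.array([[0.1, 0.0], [0.9, 1.1], [1.2, 1.0], [4.8, 5.1]])
alpha = np.array([1, 1, -1, 1])             # +1 positive, -1 negative
print(transfer_rank(P, HTS, alpha))         # -> [1. 0. 1.]
```

Note how p_2 (the middle point) ends with rank 0: its one positive and one negative historical neighbor cancel, which is exactly the case the KNN model classifier is later used to disambiguate.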
Authorized licensed use limited to: BEIHANG UNIVERSITY. Downloaded on January 24,2024 at 05:58:43 UTC from IEEE Xplore. Restrictions apply.
CHEN et al.: MMO ALGORITHM USING TRANSFER RANK 239
240 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 27, NO. 2, APRIL 2023
Fig. 2. Illustration of the model construction process. (a) Original distribution of the data. (b) Construction of the first model. (c) Construction of the second model. (d) Construction of the third model. (e) Construction of the model set M is complete. (f) Discard the training data and save the model set M.
Fig. 3. Process of the KNN model classification. (a) Model set M. (b) Distribution of the test data. (c) Result of the classification.
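As a concrete illustration of how the classifier of Fig. 3 assigns categories, the sketch below represents each trained model as a hypersphere with a center, radius, category, and sample count Num; this hypersphere representation and the field names are my assumptions for illustration, since the chunk does not show the construction details:

```python
import numpy as np

# Hypothetical model set M: each model covers a ball of a given radius.
models = [
    {"center": np.array([0.0, 0.0]), "radius": 1.0, "cat": "positive", "num": 5},
    {"center": np.array([3.0, 0.0]), "radius": 1.0, "cat": "negative", "num": 2},
]

def classify(st, models):
    """Assign a category to test point st from the model set."""
    covering = [m for m in models
                if np.linalg.norm(st - m["center"]) <= m["radius"]]
    if covering:
        # Covered: take the covering model with the largest Num(s_j).
        return max(covering, key=lambda m: m["num"])["cat"]
    # Not covered: use the model whose boundary is closest to st.
    return min(models, key=lambda m: abs(
        np.linalg.norm(st - m["center"]) - m["radius"]))["cat"]

print(classify(np.array([0.2, 0.1]), models))  # inside the positive model
print(classify(np.array([4.5, 0.0]), models))  # nearest boundary is negative
```

The same two rules cover all four situations in the text: a point inside one or more models takes the strongest covering model's label, and a point outside all models takes the label of the nearest boundary.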
3) If s_t is covered by at least two models, the category of s_t is the same as that of the model with the largest Num(s_j).
4) If no model in M covers s_t, the category of s_t is the same as that of the model whose boundary is closest to s_t.

Based on whether the transferred solutions are covered by the model set M, the solutions can thus be divided into four categories: 1) solutions within the positive transfer area; 2) solutions near the positive transfer area; 3) solutions near the negative transfer area; and 4) solutions within the negative transfer area. In this way, the solutions within the positive transfer area are selected, as they have a higher probability of achieving positive transfer.

An example illustrating the use of the KNN model classifier is provided in Fig. 3. The model set M is shown in Fig. 3(a), and the distribution of the test data is demonstrated by the black dots in Fig. 3(b). From left to right, these test data are called the first, second, third, and fourth points. The first point is contained in the negative class M_2 and is indicated by a blue dot. The second point lies outside the model set M and is near the boundary of M_3; its category is the same as that of M_3 and is marked by a blue dot. The third point is contained in both M_1 and M_3, with Num(s_j) values of 5 and 2, respectively. The category of this solution is consistent with that of the model with the largest Num(s_j), and it is thus marked by a red (positive) dot. In the same way, the last point is classified as a positive-transferred solution, as shown in Fig. 3(c).

D. Performance Analysis of the Transfer Rank and KNN Model Classifier

The effectiveness of the transfer rank and KNN model classifier in improving the accuracy of positive transfer was demonstrated by comparing the proposed MMOTK algorithm with MOMFEA [31] and EMEA [37] on the test function PIHS. A 2-D decision space with 11 generations was used for visualization purposes.

The accuracy of positive transfer in MOMFEA was calculated using a population P_2 of size 100. P_2 was transferred to P_1 to obtain P_12. A nondominated ranking of P_12 ∪ P_1 identified 46 nondominated individuals in P_12, corresponding to an accuracy of 46/100 = 0.46.

In EMEA, the nondominated solutions in the current population were selected for transfer, and the accuracy of positive transfer was calculated using 60 nondominated solutions in P_2, which were transferred to P_1. Among these, 33 solutions achieved positive transfer, resulting in an accuracy of 33/60 = 0.55.

Finally, the accuracy of positive transfer in MMOTK can be calculated as shown in Fig. 4.
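The accuracy figures above all reduce to counting which transferred solutions remain nondominated in the target task. A minimal sketch of that check (the objective values below are made up for illustration):

```python
import numpy as np

def nondominated_mask(F):
    """F: (n, m) objective values (minimization). True where no other
    row dominates (<= in every objective, < in at least one)."""
    n = len(F)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                mask[i] = False
                break
    return mask

# Count transferred solutions that stay nondominated in the target task.
target = np.array([[1.0, 4.0], [2.0, 3.0], [4.0, 1.0]])
transferred = np.array([[1.5, 2.5], [3.0, 3.5]])
F = np.vstack([target, transferred])
mask = nondominated_mask(F)[len(target):]
print(mask.sum() / len(transferred))  # -> 0.5
```

Here the first transferred point survives the joint nondominated ranking while the second is dominated by a target-task solution, giving a positive-transfer accuracy of 1/2, analogous to the 46/100 and 33/60 counts in the text.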
Fig. 4. Performance analysis of the transfer rank and KNN model classifier. (a) Distribution of transferred solutions in the decision space of Task 2. (b) Solutions realizing positive transfer in the decision space of Task 1. (c) Distribution of transferred solutions in the objective space of Task 2. (d) Solutions realizing positive transfer in the objective space of Task 1.
Fig. 4(a) and (c) show the distribution of transferred solutions in the decision space and objective space of Task 2, respectively. Fig. 4(b) and (d) show the solutions that achieve positive transfer in the decision space and objective space of Task 1, respectively. In Fig. 4(a), the light blue circles, red triangles, red circles, and blue circles represent P_2, the transferred solutions, the positive models, and the negative models trained by HTS, respectively. The green circles in Fig. 4(b) represent P_1; the red triangles denote the transferred solutions in the target task; and the blue triangles indicate solutions that achieved positive transfer in the target task. The light blue circles and red triangles in Fig. 4(c) denote P_2 and the transferred solutions in the objective space, respectively. In Fig. 4(d), the green circles, red triangles, and blue triangles represent P_1, positive transferred solutions, and negative transferred solutions in the objective space, respectively. There are eight blue triangles in Fig. 4(b), indicating that the number of positive-transferred solutions is 8 and the accuracy is 8/n = 0.8 (for the value of n, see Section IV-C).

This analysis suggests the accuracy of positive transfer in the proposed MMOTK algorithm applied to PIHS (0.8) is higher than that of EMEA (0.55) and MOMFEA (0.46), indicating that the transfer rank and the KNN model classifier can improve the accuracy of positive transfer. The reason for this result may be that solutions close to the positive transferred solutions of the previous generation generally have a greater probability of achieving positive transfer. The transfer rank associates the current population with the previous generation of transferred solutions: if there are more positive transferred solutions of the previous generation near a solution, the transfer rank of that solution will be higher. Thus, the solutions with higher transfer rank are selected as the transferred solutions. Furthermore, when the transfer ranks of solutions are the same, they are classified by the KNN model classifier, which is trained on the previous generation of transferred solutions. The KNN model classifier divides the population into four categories: 1) solutions within the positive transfer area; 2) solutions near the positive transfer area; 3) solutions near the negative transfer area; and 4) solutions within the negative transfer area. Priority is given to the first category, because these solutions have a higher probability of achieving positive transfer.
TABLE II
PROPERTIES OF MULTIOBJECTIVE MULTITASKING OPTIMIZATION PROBLEMS
6) MO-MFEA-II [34]; and 7) NSGA-II [10]. The benchmark MMO problems [42], [49], [50] were used to verify the effectiveness of each algorithm, and the experimental results showed the proposed technique outperformed comparable models in selecting effective transferred solutions.

A. Test Problems

In this study, the revised CEC 2017 evolutionary multitasking optimization competition benchmark (test suite 1) [45] was used to assess the performance of the proposed algorithm. The CEC 2017 test suite was modified in [45] to increase the difficulty of the test problems and prevent unfair advantages for specific Pareto sets (PSs), wherein S_{ph2} = (0, . . . , 0, 5.1, . . . , 5.1)^T, with 40 zeros followed by 10 entries equal to 5.1, and S_{pl2} = (0, . . . , 0, 40, . . . , 40)^T, with 25 zeros followed by 25 entries equal to 40 (see Table II). The test suite included nine MMO problems, each consisting of two tasks, each of which is a multiobjective optimization task with two or three objectives. These problems are designed by considering the degree of intersection of the Pareto-optimal solutions and the similarity of the fitness landscapes between optimization tasks. The degree of intersection for the global minimum can be divided into three categories: 1) complete intersection; 2) partial intersection; and 3) no intersection. The test problems in each category were divided into high similarity, medium similarity, and low similarity according to the fitness landscape, calculated using a Spearman rank correlation coefficient [42].

The WCCI2020 competition evolutionary MMO test suite (test suite 2) was also used to assess the performance of the proposed algorithm. These test problems included 10 MMO benchmark questions, each containing 50 multiobjective optimization tasks involving both ZDT [49] and DTLZ [50] problems. There were some similarities between the Pareto-optimal solutions and the fitness landscapes of the 50 tasks.

B. Performance Indicators

In this study, the inverted generational distance (IGD) [51] and hypervolume (HV) [52] metrics were used to evaluate the performance of the seven algorithms.
1) Inverted Generational Distance: The IGD metric represents the average distance between the optimal solutions and the nondominated solution set obtained by an algorithm. Small IGD values indicate better comprehensive performance and reflect not only the convergence of an MOEA but also the uniformity and diversity of the distribution. Suppose P* is the uniformly distributed optimal solution set on the Pareto front (PF) and P denotes the nondominated solution set obtained by an
TABLE III
REFERENCE POINTS OF HV IN NINE TEST QUESTIONS
TABLE IV
AVERAGE AND SIGNIFICANCE ANALYSIS (SHOWN IN THE BRACKETS) OF IGD OBTAINED BY MMOTK, MOMFEA-SADE, EMTIL, EMEA, MO-MFEA, MO-MFEA-II, AND NSGA-II ON NINE MMO BENCHMARKS
TABLE V
AVERAGE AND SIGNIFICANCE ANALYSIS (SHOWN IN THE BRACKETS) OF HV OBTAINED BY MMOTK, MOMFEA-SADE, EMTIL, EMEA, MO-MFEA, MO-MFEA-II, AND NSGA-II ON NINE MMO BENCHMARKS
algorithm. IGD can then be calculated as follows:

IGD(P, P*) = ( Σ_{p*∈P*} d(p*, P) ) / |P*|    (4)

where d(p*, P) = min_{p∈P} ||p* − p|| denotes the minimum Euclidean distance from the point p* on the PF to the individuals p in the final solution set P. The magnitude of P* was |P*| = 1000 for 2-D problems and 10 000 for 3-D problems.

2) Hypervolume: The HV evaluation metric is popular because of its theoretical foundation. It evaluates the comprehensive performance of an MOEA by calculating the hypervolume of the space enclosed by the nondominated solution set and a reference point. A larger HV indicates higher quality in the final solution, strictly abiding by the principle of Pareto domination. Taking a minimization problem as an example, if solution A dominates solution B, the HV value of A must be greater than that of B. The calculation of HV, which is independent of the PF, is given by

HV(P | z^r) = Vol( ∪_{p∈P} ([f_1^p, z_1^r] × · · · × [f_m^p, z_m^r]) )    (5)

where Vol(·) is the Lebesgue measure and z^r = (z_1^r, z_2^r, . . . , z_m^r) is the reference point dominated by all points in the PF.
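Equations (4) and (5) can be checked with a small sketch. The HV routine below handles only m = 2 objectives, where the union of boxes in (5) can be swept by sorting the nondominated points; that two-dimensional restriction is an assumption of this illustration:

```python
import numpy as np

def igd(P_star, P):
    """Eq. (4): mean over p* in P* of the minimum distance to P."""
    d = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return d.min(axis=1).mean()

def hv_2d(P, z):
    """Eq. (5) for m = 2: area dominated by P and bounded by z."""
    pts = sorted((p for p in P if p[0] < z[0] and p[1] < z[1]),
                 key=lambda p: p[0])
    area, prev_f2 = 0.0, z[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                    # skip dominated points
            area += (z[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

P = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]])
ref = np.array([4.0, 4.0])
print(igd(P, P))      # -> 0.0 (the set matches the reference exactly)
print(hv_2d(P, ref))  # -> 6.0
```

The sanity checks behave as the text describes: IGD of a set against itself is 0, and adding any point that dominates an existing one can only increase the swept area, consistent with strict Pareto compliance of HV.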
TABLE VI
AVERAGED IGD VALUES OBTAINED BY NSGA-II, MOMFEA, EBS, MaTEA, EMaTOMKT, MMOTK, AND MMOTK' ON WCCI20-PROBLEM 1 AFTER 30 INDEPENDENT RUNS, WHERE THE GRAY BACKGROUND IS THE BEST RESULT OF ALL ALGORITHMS
HV reference points for the nine test questions CIHS-NILS are shown in Table III.

C. Experimental Settings

Parameter settings for test suite 1 [45] can be described as follows.
1) The population size of each algorithm was set to 100 for 2-D optimization problems and 120 for 3-D optimization problems. All algorithms were run 30 times independently.
2) The maximum number of generations for all algorithms was 1000.
3) The parameters of the crossover and mutation operators were set to η_c = 2, η_m = 20, p_c = 1, and p_m = 1.
4) The number of transferred solutions for all algorithms was n = 10.
5) The number of generations saved by the negative class set was m = 3.
6) The random mating probability (rmp) in MO-MFEA was set to 0.3.
7) The size of the transferred population was r = 5.
8) The significance of the algorithms was analyzed using a Wilcoxon rank-sum test at a significance level of 0.05. The symbols “+,” “=,” and “−” indicate that the results of the other algorithms were significantly better than, similar to, and worse than those of the MMOTK approach, respectively.

D. Simulation Results and Analysis for Test Suite 1

Tables IV and V list the average IGD and HV values for each algorithm run 30 times independently.
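The significance marking in setting 8) can be sketched as follows; the normal-approximation rank-sum test stands in for a library routine such as scipy.stats.ranksums, and the IGD samples are synthetic placeholders, not values from the tables:

```python
import numpy as np
from math import erf, sqrt

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (assumes no ties, which holds for continuous metric values)."""
    n1, n2 = len(x), len(y)
    ranks = np.concatenate([x, y]).argsort().argsort() + 1.0
    w = ranks[:n1].sum()                 # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = abs(w - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

rng = np.random.default_rng(1)
# Hypothetical IGD samples over 30 independent runs (smaller is better).
igd_other = rng.normal(0.020, 0.002, 30)
igd_mmotk = rng.normal(0.010, 0.001, 30)

p = ranksum_p(igd_other, igd_mmotk)
# "-": other significantly worse; "+": significantly better; "=": similar.
if p >= 0.05:
    symbol = "="
else:
    symbol = "-" if np.median(igd_other) > np.median(igd_mmotk) else "+"
print(symbol)
```

With two clearly separated samples as above, the test rejects at α = 0.05 and the comparison is marked “−,” matching how the tally rows of Tables IV and V are produced.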
TABLE VII
AVERAGED IGD VALUES OBTAINED BY NSGA-II, MOMFEA, EBS, MaTEA, EMaTOMKT, MMOTK, AND MMOTK' ON WCCI20-PROBLEM 2 AFTER 30 INDEPENDENT RUNS, WHERE THE GRAY BACKGROUND IS THE BEST RESULT OF ALL ALGORITHMS
The MMOTK and MOMFEA-SADE algorithms were run in MATLAB 2020b, and the results of EMTIL, EMEA, MO-MFEA, MO-MFEA-II, and NSGA-II were collected from the experimental portion of [45]. The algorithms that performed best are marked with a gray background. It is evident the proposed MMOTK approach outperformed the other algorithms in 12 of the 18 tasks. The results of the Wilcoxon rank-sum test were 12/2/4, 14/1/3, 15/0/3, 15/0/3, 16/0/2, and 18/0/0 in the last row of Table IV and 12/0/6, 16/1/1, 16/1/1, 16/1/1, 17/0/1, and 16/2/0 in Table V. The quantities of “−” were also significantly higher than those of “+” in the two tables. Thus, the MMOTK algorithm exhibited better performance for most problems, owing to the transfer rank and the KNN model classifier.

NSGA-II is a single-task optimization model with no communication between tasks. As such, the performance of the NSGA-II algorithm is poor for most problems, indicating that transfer between tasks is beneficial to population evolution. Transfer between tasks is random in MO-MFEA, and some solutions that may achieve negative transfer are also transferred, resulting in a waste of resources. As the experimental results showed, this approach failed for most problems. In EMEA, nondominated solutions are selected as transferred solutions, and the algorithm performs well given high similarity between tasks, but not for low similarity. The results of EMEA, MO-MFEA, and MO-MFEA-II were superior to those of NSGA-II, which offered no advantages over the other algorithms.
TABLE VIII
AVERAGE AND SIGNIFICANCE ANALYSIS OF IGD OBTAINED BY MMOTK, MMOTK', AND EMaTOMKT ON NINE MMO BENCHMARKS

TABLE IX
AVERAGE AND SIGNIFICANCE ANALYSIS OF HV OBTAINED BY MMOTK, MMOTK', AND EMaTOMKT ON NINE MMO BENCHMARKS

EMTIL uses a Bayesian classifier to divide transferred solutions into two categories. While this approach is unique, the accuracy of the Bayesian classifier was relatively low compared with that of the KNN model classifier. EMTIL offers some advantages over EMEA, MO-MFEA, MO-MFEA-II, and NSGA-II, but performs worse than the proposed MMOTK approach.

MOMFEA-SADE performed well in the five tasks CIMS-T1, CIMS-T2, PIMS-T2, NIHS-T1, and NIMS-T1 of Table IV because CIMS provides more intertask information. However, becoming trapped in a local optimum is common when solving PIMS-T2, which MOMFEA-SADE avoids by using the two mechanisms of SA and DE simultaneously. NIHS-T1 and NIMS-T1 offer less intertask information, and the transfers between tasks are of little help for population evolution. As such, the advantages of MMOTK are not evident in NIHS-T1 and NIMS-T1. The performance of MOMFEA-SADE in Table V is similar to that in Table IV, excluding NILS-T1.

The nine test suites discussed above are composed of both easy and difficult tasks. The suites were divided into CIHS-CILS, PIHS-PILS, and NIHS-NILS based on the degree of intersection for the global minimum. The first category (CIHS-CILS) is composed of two tasks with the same Pareto set. This increases their relevance, especially when the easy task converges to the Pareto set and can directly assist with the difficult task. Knowledge transfer is often effective in these cases, as can be seen in Tables IV and V. Here, the proposed MMOTK algorithm achieved the best results in four out of the six CIHS-CILS cases, which confirms the effectiveness of MMOTK.

The second category (PIHS-PILS) consists of two tasks with different PSs that intersect only partially, and it is more difficult to solve than CIHS-CILS. As shown in Tables IV and V, MMOTK outperformed the other algorithms for most problems in PIHS-PILS.

The last category (NIHS-NILS) consists of two tasks exhibiting no intersection in the Pareto set. The relevance of these questions is low, and knowledge transfer is of little help in these cases. The results produced by MMOTK do not obviously reflect its advantages on tasks with less intertask information.

Finally, the effectiveness of transfers in MMOTK is illustrated using the CIHS function in the 29th generation as an example. The blue circles in Fig. 5 represent nondominated solutions in a population, and the red dots represent transferred solutions. It is evident that the transferred solutions in the first task dominate most of the nondominated solutions in the population, thereby accelerating population evolution. This result demonstrates the effectiveness of MMOTK in improving the probability of positive transfers and accelerating population evolution using the transfer rank and KNN model classifier strategy. MMOTK also narrowed the selection range of transferred solutions and was able to choose positive solutions with higher probability. These experimental results confirm the validity of MMOTK for solving MMO problems.

E. Simulation Results for Test Suite 2 and Validity Analysis of the Transfer Strategy

Test suite 2 includes ten test problems. Each test problem has 50 tasks and comes from the WCCI2020 competition on evolutionary many-task optimization. The decision spaces of these test problems are rotated by a matrix, and it is not very efficient to generate good solutions with the classical crossover and mutation operators for the rotated problems. Therefore, a variant of MMOTK, denoted MMOTK', was designed for these problems; a Gaussian distribution is used to generate offspring, as in EMaTOMKT [39].

Tables VI and VII list the average IGD obtained by the compared algorithms on problems 1 and 2, with each algorithm running 30 times independently. (The experimental results for the remaining eight test problems are provided in the supplementary file.) The results of NSGA-II, MOMFEA, EBS [53], and MaTEA [54] were taken from the experimental studies of [39], and the results of MMOTK, MMOTK', and EMaTOMKT [39] were produced by running them in MATLAB 2020.
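The Gaussian offspring generation of MMOTK' can be sketched as sampling children around each parent; the mean and variance choices below are my illustrative assumptions, not the actual operator of EMaTOMKT [39]:

```python
import numpy as np

def gaussian_offspring(parents, scale=0.1, rng=None):
    """Sample one child per parent from N(parent, scale^2 * I),
    clipped to the unit box (assumes decision variables are
    normalized to [0, 1]^d, as is common for rotated benchmarks)."""
    rng = rng or np.random.default_rng()
    children = parents + rng.normal(0.0, scale, size=parents.shape)
    return np.clip(children, 0.0, 1.0)

parents = np.random.default_rng(0).uniform(size=(4, 5))
children = gaussian_offspring(parents, scale=0.05,
                              rng=np.random.default_rng(1))
print(children.shape)  # same shape as the parent population
```

Because the isotropic Gaussian perturbation is rotation-invariant, it is unaffected by the rotation matrices applied to the decision spaces, which is consistent with the explanation given for why this operator outperforms classical crossover and mutation on test suite 2.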
TABLE X
AVERAGE AND STANDARD DEVIATION OF IGD VALUES OBTAINED BY MMOTK WITH n = 6, n = 8, n = 10, n = 12, n = 14, n = 16, n = 18, AND n = 20 ON TEST SUITE 1
From these tables, we can see that MMOTK' and EMaTOMKT were superior to the other compared algorithms. EMaTOMKT and MMOTK' perform well on test suite 2 because all of the problems in test suite 2 involve a rotation matrix, and generating offspring with a Gaussian distribution works well for them. On the other hand, the performance of MMOTK is poor when it does not use a Gaussian distribution to produce offspring. This also shows that generating offspring with a Gaussian distribution is more conducive to solving these problems.

To further illustrate the transfer efficiency of MMOTK, we compared MMOTK with MMOTK' and EMaTOMKT on test suite 1. The experimental results are shown in Tables VIII and IX. From these tables, we can see that both MMOTK and MMOTK' perform better than EMaTOMKT, indicating that MMOTK is more adaptable and works for both 50 and 2 dimensions. This verifies the effectiveness of the transfer strategy based on the transfer rank and KNN model classifier.

F. Parameter Sensitivity Analysis

The influence of the number of transferred solutions n on the performance of MMOTK was also investigated as part of this study. In the experiment, n was set to eight different values: 6, 8, 10, 12, 14, 16, 18, and 20. The experimental results are provided in Tables X and XI, which display the mean and standard deviation of IGD and HV over ten runs for the different values of n. As seen, the results were highly similar, with the best performance marked by a gray background. This demonstrates the performance of MMOTK is not sensitive to n, which was set to 10 in this study.

V. CONCLUSION

In this study, a novel MMO algorithm, MMOTK, was proposed for knowledge transfer between tasks. A transfer rank and a KNN model classifier were developed to improve the probability of positive transfers. The effectiveness of MMOTK was verified through a comparison with seven
TABLE XI
AVERAGE AND STANDARD DEVIATION OF HV VALUES OBTAINED BY MMOTK WITH n = 6, n = 8, n = 10, n = 12, n = 14, n = 16, n = 18, AND n = 20 ON TEST SUITE 1
state-of-the-art algorithms (EMTIL, EMaTOMKT, MOMFEA-SADE, EMEA, MO-MFEA, MO-MFEA-II, and NSGA-II). The results showed MMOTK was significantly better for MMO problems. There remain several challenges in [55] that are worthy of further discussion and research.

REFERENCES

[1] A. Arias-Montano, C. A. Coello Coello, and E. Mezura-Montes, “Multiobjective evolutionary algorithms in aeronautical and aerospace engineering,” IEEE Trans. Evol. Comput., vol. 16, no. 5, pp. 662–694, Oct. 2012.
[2] A. Goicoechea, D. R. Hansen, and L. Duckstein, “Multi-objective decision analysis with engineering and business applications,” J. Oper. Res. Soc., vol. 34, no. 5, pp. 449–450, 1982.
[3] V. Bhaskar, S. K. Gupta, and A. K. Ray, “Applications of multiobjective optimization in chemical engineering,” Rev. Chem. Eng., vol. 16, no. 1, pp. 1–54, 2000.
[4] W. Gong, Z. Cai, and Z. Li, “An efficient multiobjective differential evolution algorithm for engineering design,” Struct. Multidiscip. Optim., vol. 38, no. 2, pp. 137–157, 2012.
[5] I. Otero-Muras, A. A. Mannan, J. R. Banga, and D. Oyarzún, “Multiobjective optimization of gene circuits for metabolic engineering,” IFAC-PapersOnLine, vol. 52, no. 26, pp. 13–16, 2019.
[6] J.-Y. Tzeng, T. Liu, and J. Chou, “Applications of multi-objective evolutionary algorithms to cluster tool scheduling,” in Proc. 1st Int. Conf. Innov. Comput. Inf. Control (ICICIC), vol. 2, 2006, pp. 531–534.
[7] B. Rosenberg, M. D. Richards, J. T. Langton, S. Tenenbaum, and D. W. Stouch, “Applications of multi-objective evolutionary algorithms to air operations mission planning,” in Proc. GECCO, 2008, pp. 1879–1886.
[8] M. G. C. Tapia and C. A. Coello Coello, “Applications of multi-objective evolutionary algorithms in economics and finance: A survey,” in Proc. IEEE Congr. Evol. Comput., Singapore, 2007, pp. 532–539.
[9] X. Zou, Y. Chen, M. Liu, and L. Kang, “A new evolutionary algorithm for solving many-objective optimization problems,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 5, pp. 1402–1412, Oct. 2008.
[10] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, “A fast elitist nondominated sorting genetic algorithm for multi-objective optimization: NSGA-II,” in Proc. Int. Conf. Parallel Problem Solving Nat., 2000,
[13] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence and diversity in evolutionary multiobjective optimization,” Evol. Comput., vol. 10, no. 3, pp. 263–282, 2002.
[14] R. Tanabe and H. Ishibuchi, “A niching indicator-based multi-modal many-objective optimizer,” Swarm Evol. Comput., vol. 49, pp. 134–146, Sep. 2019.
[15] L. Wei, X. Li, and R. Fan, “A new multi-objective particle swarm optimisation algorithm based on R2 indicator selection mechanism,” Int. J. Syst. Sci., vol. 50, no. 10, pp. 1920–1932, 2019.
[16] N. Yang, H.-L. Liu, and J. Yuan, “Performance investigation of the Iε-indicator and Iε+-indicator based on the Lp-norm,” Neurocomputing, vol. 458, pp. 546–558, Oct. 2021.
[17] J. Yuan, H.-L. Liu, F. Gu, Q. Zhang, and Z. He, “Investigating the properties of indicators and an evolutionary many-objective algorithm using promising regions,” IEEE Trans. Evol. Comput., vol. 25, no. 1, pp. 75–86, Feb. 2021.
[18] J. Falcón-Cardona, H. Ishibuchi, C. A. C. Coello, and M. Emmerich, “On the effect of the cooperation of indicator-based multi-objective evolutionary algorithms,” IEEE Trans. Evol. Comput., vol. 25, no. 4, pp. 681–695, Aug. 2021.
[19] E. Zitzler and S. Künzli, “Indicator-based selection in multiobjective search,” in Proc. Int. Conf. Parallel Problem Solving Nat., 2004, pp. 832–842.
[20] K. Li, Á. Fialho, S. Kwong, and Q. Zhang, “Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition,” IEEE Trans. Evol. Comput., vol. 18, no. 1, pp. 114–130, Feb. 2014.
[21] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, Dec. 2007.
[22] J. Yuan, H.-L. Liu, and C. Peng, “Population decomposition-based greedy approach algorithm for the multi-objective knapsack problems,” Int. J. Pattern Recognit. Artif. Intell., vol. 31, no. 4, 2017, Art. no. 1759006.
[23] H.-L. Liu, F. Gu, and Q. Zhang, “Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems,” IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 450–455, Jun. 2014.
[24] Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, “Balancing convergence and diversity in decomposition-based many-objective optimizers,” IEEE Trans. Evol. Comput., vol. 20, no. 2, pp. 180–198, Apr. 2016.
[25] J. Rice, C. R. Cloninger, and T. Reich, “Multifactorial inheritance with cultural transmission and assortative mating. I. Description and basic
pp. 849–858. properties of the unitary models,” Amer. J. Human Genet., vol. 30, no. 6,
[11] G. Wang and Y. Wang, “Fuzzy-dominance-driven GA and its application pp. 618–643, 1978.
in evolutionary many objective optimization,” Dyn. Continuous Discrete [26] C. R. Cloninger, J. Rice, and T. Reich, “Multifactorial inheritance with
Impulsive Syst., vol. 14, no. 4, pp. 538–543, 2007. cultural transmission and assortative mating. II. A general model of
[12] K. Deb and H. Jain, “An evolutionary many-objective optimization algo- combined polygenic and cultural inheritance,” Amer. J. Human Genet.,
rithm using reference-point-based nondominated sorting approach, part vol. 31, no. 2, pp. 176–198, 1979.
I: Solving problems with box constraints,” IEEE Trans. Evol. Comput., [27] S. J. Pan and Y. Qiang, “A survey on transfer learning,” IEEE Trans.
vol. 18, no. 4, pp. 577–601, Aug. 2014. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, Oct. 2010.
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 27, NO. 2, APRIL 2023
[28] Z. Xu and K. Zhang, “Multiobjective multifactorial immune algorithm for multiobjective multitask optimization problems,” Appl. Soft Comput., vol. 107, Aug. 2021, Art. no. 107399.
[29] S. O. Zare, B. Saghafian, A. Shamsai, and S. Nazif, “Multi-objective optimization,” in Encyclopedia of Machine Learning and Data Mining. Boston, MA, USA: Springer, 2017.
[30] Z. Chen, R. Guo, Z. Lin, T. Peng, and X. Peng, “A data-driven health monitoring method using multiobjective optimization and stacked autoencoder based health indicator,” IEEE Trans. Ind. Informat., vol. 17, no. 9, pp. 6379–6389, Sep. 2021.
[31] A. Gupta, Y.-S. Ong, L. Feng, and K. C. Tan, “Multiobjective multifactorial optimization in evolutionary multitasking,” IEEE Trans. Cybern., vol. 47, no. 7, pp. 1652–1665, Jul. 2017.
[32] A. Gupta, Y.-S. Ong, and L. Feng, “Multifactorial evolution: Toward evolutionary multitasking,” IEEE Trans. Evol. Comput., vol. 20, no. 3, pp. 343–357, Jun. 2016.
[33] M. Gong, Z. Tang, H. Li, and J. Zhang, “Evolutionary multitasking with dynamic resource allocating strategy,” IEEE Trans. Evol. Comput., vol. 23, no. 5, pp. 858–869, Oct. 2019.
[34] K. K. Bali, A. Gupta, Y.-S. Ong, and P. S. Tan, “Cognizant multitasking in multiobjective multifactorial evolution: MO-MFEA-II,” IEEE Trans. Cybern., vol. 51, no. 4, pp. 1784–1796, Apr. 2021.
[35] X. Ma, Q. Chen, Y. Yu, Y. Sun, and Z. Zhu, “A two-level transfer learning algorithm for evolutionary multitasking,” Front. Neurosci., vol. 13, p. 1408, Jan. 2020.
[36] K. K. Bali, A. Gupta, L. Feng, Y. S. Ong, and T. P. Siew, “Linearized domain adaptation in evolutionary multitasking,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2017, pp. 1295–1302.
[37] L. Feng et al., “Evolutionary multitasking via explicit autoencoding,” IEEE Trans. Cybern., vol. 49, no. 9, pp. 3457–3470, Sep. 2019.
[38] J. Lin, H.-L. Liu, K. C. Tan, and F. Gu, “An effective knowledge transfer approach for multiobjective multitasking optimization,” IEEE Trans. Cybern., vol. 51, no. 6, pp. 3238–3248, Jun. 2021.
[39] Z. Liang, X. Xu, L. Liu, Y. Tu, and Z. Zhu, “Evolutionary many-task optimization based on multi-source knowledge transfer,” IEEE Trans. Evol. Comput., early access, Aug. 2, 2021, doi: 10.1109/TEVC.2021.3101697.
[40] Z. Liang, H. Dong, C. Liu, W. Liang, and Z. Zhu, “Evolutionary multitasking for multiobjective optimization with subspace alignment and adaptive differential evolution,” IEEE Trans. Cybern., early access, Jun. 24, 2020, doi: 10.1109/TCYB.2020.2980888.
[41] Q. Shang et al., “A preliminary study of adaptive task selection in explicit evolutionary many-tasking,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2019, pp. 2153–2159.
[42] Y. Yuan et al., “Evolutionary multitasking for multiobjective continuous optimization: Benchmark problems, performance metrics and baseline results,” 2017, arXiv:1706.02766.
[43] K. Deb, Multiobjective Optimization Using Evolutionary Algorithms. Chichester, U.K.: Wiley, 2001.
[44] A. Gupta and Y.-S. Ong, “Genetic transfer or population diversification? Deciphering the secret ingredients of evolutionary multitask optimization,” in Proc. IEEE Symp. Ser. Comput. Intell., 2016, pp. 1–7.
[45] J. Lin, H.-L. Liu, B. Xue, M. Zhang, and F. Gu, “Multiobjective multitasking optimization based on incremental learning,” IEEE Trans. Evol. Comput., vol. 24, no. 5, pp. 824–838, Oct. 2020.
[46] G. Guo, H. Wang, D. A. Bell, Y. Bi, and K. Greer, “KNN model-based approach in classification,” in Proc. OTM Int. Conf. Move Meaningful Internet Syst., 2003, pp. 986–996.
[47] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola, “A kernel two-sample test,” J. Mach. Learn. Res., vol. 13, pp. 723–773, Mar. 2012.
[48] K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Syst., vol. 9, no. 3, pp. 115–148, 1994.
[49] E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: Empirical results,” Evol. Comput., vol. 8, no. 2, pp. 173–195, Jun. 2000.
[50] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable multi-objective optimization test problems,” in Proc. Congr. Evol. Comput., 2002, pp. 825–830.
[51] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, “Performance assessment of multiobjective optimizers: An analysis and review,” IEEE Trans. Evol. Comput., vol. 7, no. 2, pp. 117–132, Apr. 2003.
[52] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach,” IEEE Trans. Evol. Comput., vol. 3, no. 4, pp. 257–271, 1999.
[53] R.-T. Liaw and C.-K. Ting, “Evolutionary many-tasking based on biocoenosis through symbiosis: A framework and benchmark problems,” in Proc. Congr. Evol. Comput., Donostia, Spain, 2017, pp. 2266–2273.
[54] Y. Chen, J. Zhong, L. Feng, and J. Zhang, “An adaptive archive-based evolutionary framework for many-task optimization,” IEEE Trans. Emerg. Topics Comput. Intell., vol. 4, no. 3, pp. 369–384, Jun. 2020.
[55] K. C. Tan, L. Feng, and M. Jiang, “Evolutionary transfer optimization—A new frontier in evolutionary computation research,” IEEE Comput. Intell. Mag., vol. 16, no. 1, pp. 22–33, Feb. 2021.

Hongyan Chen is currently pursuing the master’s degree with the School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, China.
Her current research interests include evolutionary computation, multitask learning, and machine learning.

Hai-Lin Liu (Senior Member, IEEE) received the B.S. degree in mathematics from Henan Normal University, Xinxiang, China, in 1984, the M.S. degree in applied mathematics from Xidian University, Xi’an, China, in 1989, and the Ph.D. degree in control theory and engineering from the South China University of Technology, Guangzhou, China, in 2002.
He is a Full Professor with the School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou. He is a Postdoctoral Fellow with the Institute of Electronic and Information, South China University of Technology. He has published over 100 research papers in journals and conferences. His research interests include evolutionary computation and optimization, wireless network planning and optimization, and their applications.
Prof. Liu currently serves as an Associate Editor for the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION.

Fangqing Gu received the B.S. degree from Changchun University, Jilin, China, in 2007, the M.S. degree from the Guangdong University of Technology, Guangzhou, Guangdong, China, in 2011, and the Ph.D. degree from the Department of Computer Science, Hong Kong Baptist University, Hong Kong, in 2016.
He joined the School of Mathematics and Statistics, Guangdong University of Technology, as a Lecturer. His research interests include data mining, machine learning, and evolutionary computation.

Kay Chen Tan (Fellow, IEEE) received the B.Eng. (First Class Hons.) and Ph.D. degrees from the University of Glasgow, Glasgow, U.K., in 1994 and 1997, respectively.
He is currently the Chair Professor of Computational Intelligence with the Department of Computing, The Hong Kong Polytechnic University, Hong Kong. He has published over 300 refereed articles and seven books.
Prof. Tan is currently the Vice-President (Publications) of the IEEE Computational Intelligence Society, USA. He served as the Editor-in-Chief of IEEE Computational Intelligence Magazine from 2010 to 2013 and of the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION from 2015 to 2020. He currently serves as an editorial board member for more than ten journals. He is an IEEE Distinguished Lecturer Program Speaker and the Chief Co-Editor of the Springer book series on Machine Learning: Foundations, Methodologies, and Applications.