
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 27, NO. 2, APRIL 2023 237

A Multiobjective Multitask Optimization Algorithm Using Transfer Rank

Hongyan Chen, Hai-Lin Liu, Senior Member, IEEE, Fangqing Gu, and Kay Chen Tan, Fellow, IEEE

Abstract—Multiobjective multitask optimization (MMO) attempts to solve several problems simultaneously. This is commonly done by identifying useful knowledge to transfer between tasks, thereby producing optimal solutions more quickly. In this study, an MMO algorithm using transfer rank and a KNN model is proposed to achieve this goal. The definition of transfer rank is first introduced to quantify the priority of transferred solutions and thereby improve the probability of a positive transfer. Solutions are sorted in descending order of transfer rank, with the higher-ranked solutions assumed to be the most suitable for transfer. Priority is given to previously positive-transferred solutions, and solutions with the same transfer rank are distinguished using a KNN model classifier. The effectiveness of the proposed algorithm was verified on benchmark MMO problems. The experimental results showed the proposed algorithm to be more effective than other conventional MMO techniques.

Index Terms—KNN model classifier, knowledge transfer, multiobjective multitask optimization (MMO), transfer rank.

I. INTRODUCTION

ENGINEERING problems often require the simultaneous optimization of several potentially conflicting objectives [1]–[5]. Such problems are termed multiobjective optimization problems (MOPs) [6]–[8] and often exhibit Pareto-optimal solutions that have long been of interest to researchers. Several algorithms have been proposed to improve the performance of MOPs, which can be categorized as dominance-based [9]–[13], indicator-based [14]–[19], and decomposition-based techniques [20]–[24].

Prior studies have identified potential relationships between MOPs [25], [26] and have proposed corresponding multiobjective multitask optimization (MMO) strategies [27], [28]. Unlike traditional MOPs [29], [30], MMO optimizes several multiobjective tasks simultaneously using knowledge transfer between tasks. An MMO problem with k tasks can generally be expressed as follows:

Minimize:
    F1(x1) = (f1^1(x1), ..., f1^{m1}(x1))
    F2(x2) = (f2^1(x2), ..., f2^{m2}(x2))
    ...
    Fk(xk) = (fk^1(xk), ..., fk^{mk}(xk))

subject to  xi ∈ ∏_{s=1}^{ni} [ai^s, bi^s],  i = 1, 2, ..., k        (1)

where ∏_{s=1}^{nk} [ak^s, bk^s] represents the decision space of the kth task, nk denotes the dimension of the kth task's decision space, and Fk represents the kth multiobjective optimization task. fk : ∏_{s=1}^{nk} [ak^s, bk^s] → R^{mk} consists of mk real-valued objective functions, and R^{mk} represents the objective space of the kth task. In this expression, all multiobjective optimization tasks are minimized simultaneously, and the extent to which one task can assist another differs. The knowledge of one task can be transferred to another through a unified space Y [31].

Gupta et al. [32] proposed one of the earliest single-objective multitask optimization algorithms, the multifactorial evolutionary algorithm (MFEA). Later, a multiobjective multifactorial optimization algorithm (MO-MFEA) [31], based on NSGA-II, was proposed. MO-MFEA uses a single population to optimize multiple related tasks simultaneously through a unified space Y, which contains the decision spaces of the k tasks. Solutions for different tasks can be transformed in this space to achieve transfer between tasks.
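One simple realization of such a unified space is to normalize each task's box-constrained decision vector into [0, 1]^D, with D the largest task dimensionality, and denormalize it into another task's bounds. The sketch below illustrates this idea with toy bounds of my own choosing; it is a minimal illustration of the mechanism, not the authors' implementation:

```python
import numpy as np

# A unified space Y = [0, 1]^D, with D the largest task dimensionality.
# Each task's decision vector is normalized into Y and can then be
# denormalized into another task's box. Bounds here are illustrative.
def to_unified(x, lower, upper, D):
    y = (x - lower) / (upper - lower)      # normalize into [0, 1]^d
    return np.pad(y, (0, D - len(x)))      # pad up to dimension D

def from_unified(y, lower, upper):
    d = len(lower)
    return lower + y[:d] * (upper - lower)  # rescale into the target box

# Task 1 lives in [0, 1]^2, Task 2 in [-5, 5]^3 (toy bounds).
l1, u1 = np.zeros(2), np.ones(2)
l2, u2 = -5.0 * np.ones(3), 5.0 * np.ones(3)
D = 3

x1 = np.array([0.25, 0.75])            # a Task-1 solution
y = to_unified(x1, l1, u1, D)          # into the unified space
x2 = from_unified(y, l2, u2)           # transferred into Task 2's space
print(y, x2)                           # x2 is [-2.5, 2.5, -5.0]
```

The round trip back into the originating task's bounds recovers the original vector, which is what makes the shared space a safe meeting point for solutions from differently scaled tasks.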
A multifactorial evolutionary algorithm with online transfer parameter estimation (MFEA-II) [33] was later introduced to learn useful knowledge without any human intervention. MO-MFEA-II [34] learns relationships between tasks through self-awareness and adjusts the extent of genetic transfers accordingly. In other words, MFEA-II solves single-objective multitask optimization problems, while MO-MFEA-II solves MMO problems. Ma et al. [35] proposed a two-level learning algorithm (TLTL), in which the upper level uses elite individuals to transfer knowledge between tasks and the lower level transfers information from one dimension to other dimensions within a task. Negative knowledge transfer can also be alleviated using a linearized domain adaptation (LDA) [36] strategy, which attempts to induce a search space that is highly correlated with complex tasks in order to promote knowledge transfer. Similarly, Liang et al. [37]

Manuscript received 13 June 2021; revised 17 September 2021 and 29 December 2021; accepted 25 January 2022. Date of publication 31 January 2022; date of current version 31 March 2023. This work was supported in part by the National Natural Science Foundation of China under Grant 62172110 and Grant 61876162; in part by the Natural Science Foundation of Guangdong Province under Grant 2020A1515011500; in part by the Programme of Science and Technology of Guangdong Province under Grant 2021A0505110004 and Grant 2020A0505100056; and in part by the Research Grants Council of the Hong Kong SAR under Grant PolyU11202418 and Grant PolyU11209219. (Corresponding author: Hai-Lin Liu.)

Hongyan Chen, Hai-Lin Liu, and Fangqing Gu are with the School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou 510520, China (e-mail: [email protected]; [email protected]; [email protected]).

Kay Chen Tan is with the Department of Computing, The Hong Kong Polytechnic University, Hong Kong, SAR (e-mail: [email protected]).

This article has supplementary material provided by the authors and color versions of one or more figures available at https://doi.org/10.1109/TEVC.2022.3147568.

Digital Object Identifier 10.1109/TEVC.2022.3147568
1089-778X © 2022 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: BEIHANG UNIVERSITY. Downloaded on January 24,2024 at 05:58:43 UTC from IEEE Xplore. Restrictions apply.

introduced a denoising auto-encoder for explicit genetic transfer. A method of identifying valuable solutions for efficient knowledge transfer (EMTET) [38] was later proposed. In EMTET, the neighbors of solutions that achieve positive transfer are also transferred.

Several new multitasking algorithms have been proposed recently. For example, EMaTOMKT [39] is a multisource knowledge transfer technique that includes three strategies: 1) adaptive mating probability (AMP); 2) maximum mean discrepancy (MMD)-based task selection (MTS); and 3) local distribution estimation knowledge transfer (LEKT). The MOMFEA-SADE [40] algorithm is based on subspace alignment (SA) and self-adaptive differential evolution (DE), which are used to reduce the probability of a negative transfer and generate promising solutions. EEMTA [41] is a credit assignment technique used for selecting appropriate tasks, based on feedback across solutions.

Although these algorithms have shown promise for solving MMO problems, their ability to find solutions that achieve positive transfer could be improved further. Some common knowledge exists between MMO tasks [42], which plays a significant role in improving algorithm performance [43], [44]. In MO-MFEA [31], all solutions are selected as transferred solutions with the same probability; in other words, transfer decisions are made randomly. Solutions that are not useful for other tasks are therefore likely to be transferred, which wastes computing resources. In EMEA [37], nondominated solutions in each task are selected as transferred solutions, the effectiveness of which depends on a high similarity between tasks. Drawing on machine learning, EMTIL [45] uses an incremental Bayes classifier to divide a population into positive-transferred and negative-transferred solutions, selecting the positive outcomes for transfer. This division of a population into two categories represents an improvement over previous algorithms, but a two-way split remains relatively coarse.

In this study, a KNN model classifier is used to divide a population into four categories, which is more accurate than a Bayes classifier. When information concerning the assistance of one task by another is unknown, the accurate selection of useful solutions for knowledge transfer is of significant importance. Since most of the transferred solutions belong to a positive or negative category, neighboring solutions have a higher probability of being found in the same group. This study also defines a transfer rank, which quantifies the priority of transferred solutions in order to improve the probability of a positive transfer. However, there are often multiple solutions with the same transfer rank. To solve this problem, a KNN model classifier [46] is introduced to categorize solutions with the same rank into four categories and thereby increase the probability of a positive transfer. The KNN model classifier is used not only because it has no neighborhood parameters, but also because it solves classification problems efficiently, with relatively high accuracy compared to a Bayes classifier. The proposed MMO algorithm using transfer rank and a KNN model (MMOTK) uses these tools to select transferred solutions. By calculating individual transfer ranks, solutions with a higher transfer value are selected first, and the KNN model classifier is used to distinguish solutions with the same transfer rank.

The primary contributions of this article are as follows.
1) Transfer rank is calculated in a population and solutions with a high transfer rank are selected first.
2) A KNN model classifier is used to divide solutions with the same transfer rank into four categories: a) solutions within the positive transfer area; b) solutions near the positive transfer area; c) solutions near the negative transfer area; and d) solutions within the negative transfer area. The first category is prioritized.

The remainder of this article is organized as follows. Section II introduces the details of transfer rank and KNN model classification and analyzes the performance of these two strategies. Section III introduces the proposed algorithm. Section IV reports and analyzes the experimental results. The final section concludes this article.

II. TRANSFER RANK AND KNN MODEL CLASSIFICATION

In this section, the definition of transfer rank and the construction process for the KNN model classifier are introduced.

A. Definition of Transfer Rank

Transfer rank, defined for a population P = {p1, ..., pN}, is used to quantify the priority of transferred solutions and determine the optimal choice. Historical transferred solutions of size u can be represented as HTS = {s1, s2, ..., su}. In generation t, HTSt is composed of the positive transferred solutions Post and the negative transferred solutions Negt ∪ Negt−1 ∪ ··· ∪ Negt−m+1. Here, Negt denotes solutions that are labeled 1 and dominated in the corresponding target task. It is worth noting that, in this study, a transferred solution achieves a positive transfer if it is nondominated in the target task. The steps for calculating transfer rank are as follows.

1) Calculate the distance matrix D between the historical transferred solution set HTS and the current population P.
2) For each sj in HTS, use D to identify the solution pi at the smallest distance from sj, placing sj into the associated subset Ωi of historical transferred solutions associated with pi. This subset can be expressed as follows:

Ωi = {s ∈ HTS | d(s, pi) ≤ d(s, pj), j = 1, ..., N}        (2)

where d(s, pi) is the Euclidean distance between s and pi. In other words, s is contained in Ωi if and only if pi is at the shortest distance to s among all N solutions. It is also worth noting that, since the size of HTS is smaller than N, some sets Ωi will be empty.
3) For each s in HTS, if s is a positive-transferred solution, its associated degree is given by αs = 1; otherwise, αs = −1.

These steps determine the associated degree αs and allow the corresponding transfer rank to be defined.

Definition 1 (Transfer Rank): If Ωi ≠ ∅, the transfer rank ϕi of pi is given by ϕi = Σ_{s∈Ωi} αs; otherwise, ϕi = 0.

Solutions with a high transfer rank are adjacent to multiple positive transferred solutions and have a higher probability of


achieving positive transfer. As such, when selecting transferred solutions, those with a high transfer rank will be given priority.

Fig. 1. Illustration of the calculation of the transfer rank.

TABLE I
RESULT OF CALCULATING TRANSFER RANK

Fig. 1 illustrates the calculation of transfer rank, in which all solutions are in a decision space, P = {p1, p2, ..., p10} and HTS = {s1, s2, ..., s8}; s1, s3, s4, and s5 are positive-transferred solutions, and s2, s6, s7, and s8 are negative-transferred solutions. The detailed steps for calculating the transfer rank of P are as follows. First, the distance matrix D is calculated from s1, s2, ..., s8 to p1, p2, ..., p10. p1 is the solution at the smallest distance from s1, so s1 is placed into Ω1. Similarly, p1 is the solution at the smallest distance from s2, so s2 is also placed into Ω1. s3, s4, ..., s8 can be handled in the same way, as shown in Table I. For example, according to (2), the associated subset of p1 is given by Ω1 = {s1, s2}. Since s1 is a positive-transferred solution and s2 is a negative-transferred solution, αs1 = 1, αs2 = −1, and the transfer rank of p1 is ϕ1 = αs1 + αs2 = 0. The transfer ranks of p1, p2, ..., p10 were calculated and the solutions arranged in descending order based on the value of ϕi, producing the sequence F1, F2, F3, F4, and F5. Here, F1 = {p4}, F2 = {p2}, F3 = {p1, p3, p5, p6, p7, p9}, F4 = {p10}, and F5 = {p8}, such that the solutions in the same Fi have the same transfer value. Suppose, for example, that five solutions need to be transferred. The solutions in F1 and F2 will be selected first, and the primary issue then becomes the selection of three solutions from F3 that are most likely to achieve positive transfer. The KNN model classifier [46] can be applied to classify solutions with the same transfer rank in F3 for the selection of the optimal solutions.

B. Construction of the KNN Model Classifier

The training data HTS = {s1, s2, ..., su} are used to build a KNN model classifier, in which M is the model set. The training data include positive-transferred and negative-transferred solutions, and the models are constructed based on these categories. The details of the KNN model classifier construction [46] are as follows.

1) Create a similarity matrix from the training data, set all training data to "ungrouped," and calculate the largest local neighborhoods N1, N2, ..., Nu, each covering the maximum number of neighbors with the same category.
2) Create a set Q that includes the largest local neighborhoods of all "ungrouped" data.
3) Find the largest Ni in Q, and establish a model Mi = ⟨Cls(si), Sim(si), Num(si), Rep(si)⟩ in M. Set the data covered by this model to "grouped," where Num(si) = Ni and Ni is the largest local neighborhood of si.
4) Repeat steps 2 and 3 until all data are set to "grouped."

These four steps construct the model set M, in which Cls(si) is the class label for si, Sim(si) is the lowest similarity of si (the farthest distance between si and the data covered by the model Mi), Num(si) represents the number of data covered by model Mi, and Rep(si) denotes si itself. If more than one solution has the same maximum number of neighbors in step 2, the solution with the smallest Sim(si) is chosen.

An example is provided below to illustrate how the KNN model classifier is constructed and applied. There are nine 2-D training data in Fig. 2(a), in which the red and blue stars represent the positive-transferred and negative-transferred solutions, respectively. In Fig. 2(b), Ni (= 5) is the largest in Q, and si thus covers the maximum number of neighbors within the same category (Num(si) = Ni = 5). Sim(si) = 3 cm represents the distance from si to the most distant data in this circle. As such, Sim(sj) = 2 cm [Fig. 2(c)] and Sim(sk) = 1 cm [Fig. 2(d)]. Once the first model M1 = ⟨+, 3, 5, si⟩ is constructed, the set Q can be updated to obtain the second model M2 = ⟨−, 2, 3, sj⟩, as shown in Fig. 2(c). This process can then be repeated until all data are marked as "grouped." The last model, M3 = ⟨−, 1, 2, sk⟩ in Fig. 2(d), can be constructed in this way to complete the set M = {M1, M2, M3}. The trained data are then discarded and the model set M is saved. This process is illustrated in Fig. 2(e) and (f), where the red and blue circles represent positive and negative models, respectively.

C. KNN Model Classification

After construction of the KNN model classifier, the model set is applied to classify samples as follows.
1) For each data point st to be classified, calculate the Euclidean distance from st to all models in M.
2) If st is covered by only one model Mj = ⟨Cls(sj), Sim(sj), Num(sj), Rep(sj)⟩, that is, the Euclidean distance from st to sj is less than Sim(sj), the category of st is the same as Cls(sj).


Fig. 2. Illustration of the process of the model construction. (a) Original distribution of data. (b) Construction of the first model. (c) Construction of the second model. (d) Construction of the third model. (e) Construction of the model set M is complete. (f) Discard the trained data and save model set M.

Fig. 3. Process of the KNN model classification. (a) Model set M. (b) Distribution of the test data. (c) Result of classification.

3) If st is covered by at least two models, the category of st is the same as that of the covering model with the largest Num(sj).
4) If no model in M covers st, the category of st is the same as that of the model whose boundary is closest to st.

Based on whether the transferred solutions are covered by the model set M, solutions can thus be divided into four categories: 1) solutions within the positive transfer area; 2) solutions near the positive transfer area; 3) solutions near the negative transfer area; and 4) solutions within the negative transfer area. In this way, the solutions within the positive transfer area are selected, as they have a higher probability of achieving positive transfer.

An example illustrating the use of the KNN model classifier is provided in Fig. 3. The model set M is shown in Fig. 3(a), and the distribution of the test data is indicated by the black dots in Fig. 3(b). These test data, from left to right, are called the first, second, third, and fourth points. The first point is contained in the negative-class model M2 and is indicated by a blue dot. The second point lies outside the model set M and is near the boundary of M3; its category is marked by a blue dot and is the same as that of M3. The third point is contained in M1 and M3, with Num(sj) values of 5 and 2, respectively; the category of this solution is consistent with that of the model with the largest Num(sj), and it is thus marked by a red (positive) dot. In the same way, the last point is classified as a positive-transferred solution, as shown in Fig. 3(c).

D. Performance Analysis of the Transfer Rank and KNN Model Classifier

The effectiveness of the transfer rank and KNN model classifier for improving the accuracy of positive transfer was demonstrated by comparing the proposed MMOTK algorithm with MOMFEA [31] and EMEA [37] applied to the test function PIHS. A 2-D decision space with 11 generations was used for visualization purposes.

The accuracy of positive transfer in MOMFEA was calculated using a population P2 of size 100. P2 was transferred to P1 to obtain P12. A nondominated ranking of P12 ∪ P1 identified 46 nondominated individuals in P12, corresponding to an accuracy of 46/100 = 0.46.

In EMEA, the nondominated solutions in the current population were selected for transfer, and the accuracy of positive transfer was calculated using 60 nondominated solutions in P2, which were transferred to P1. Among these, 33 solutions achieved positive transfer, resulting in an accuracy of 33/60 = 0.55.

Finally, the accuracy of positive transfer in MMOTK can be calculated as shown in Fig. 4. Fig. 4(a) and (c) show the distribution of transferred solutions in the decision space


Fig. 4. Performance analysis of the transfer rank and KNN model classifier. (a) Distribution of transferred solutions in the decision space of Task 2. (b) Solutions realizing positive transfer in the decision space of Task 1. (c) Distribution of transferred solutions in the objective space of Task 2. (d) Solutions realizing positive transfer in the objective space of Task 1.

and objective space of Task 2, respectively. Fig. 4(b) and (d) represent solutions that achieve positive transfer in the decision space and objective space of Task 1, respectively. In Fig. 4(a), the light blue circles, red triangles, red circles, and blue circles represent P2, the transferred solutions, the positive models, and the negative models trained by HTS, respectively. The green circles in Fig. 4(b) represent P1; the red triangles denote the transferred solutions in the target task; and the blue triangles indicate solutions that achieved positive transfer in the target task. The light blue circles and red triangles in Fig. 4(c) denote P2 and the transferred solutions in the objective space, respectively. In Fig. 4(d), the green circles, red triangles, and blue triangles represent P1, positive transferred solutions, and negative transferred solutions in the objective space, respectively. There are eight blue triangles in Fig. 4(b), indicating that the number of positive-transferred solutions is 8 and the accuracy is 8/n = 0.8 (for the value of n, see Section IV-C).

This analysis suggests that the accuracy of positive transfer in the proposed MMOTK algorithm applied to PIHS (0.8) is higher than that of EMEA (0.55) and MOMFEA (0.46), indicating that transfer rank and the KNN model classifier can improve the accuracy of positive transfer. The reason may be that solutions close to the positive transferred solutions of the previous generation generally have a greater probability of achieving positive transfer. The transfer rank is calculated by associating the current population with the previous generation of transferred solutions: the more positive transferred solutions of the previous generation lie near a solution, the higher that solution's transfer rank, and solutions with a higher transfer rank are selected as the transferred solutions. Furthermore, when the transfer ranks of solutions are the same, they are classified by a KNN model classifier trained on the previous generation of transferred solutions, which divides the population into four categories: 1) solutions within the positive transfer area; 2) solutions near the positive transfer area; 3) solutions near the negative transfer area; and 4) solutions within the negative transfer area. Priority is given to the first category, because those solutions have the highest probability of achieving positive transfer.
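The selection behavior summarized above — compute transfer ranks from the previous generation's transferred solutions, then use the KNN model rules to break ties — can be sketched in Python. All coordinates, radii, and counts below are hypothetical, and the model set is hand-built rather than trained with the neighborhood procedure of Section II-B; this is a minimal illustration, not the authors' implementation:

```python
import numpy as np

def transfer_rank(P, HTS, alpha):
    """Transfer rank of each p_i (Definition 1): each historical transferred
    solution s_j joins the subset of its nearest p_i, and phi_i sums the
    associated degrees (+1 for positive transfer, -1 for negative)."""
    # Distance matrix between HTS (u x d) and the population P (N x d).
    D = np.linalg.norm(HTS[:, None, :] - P[None, :, :], axis=2)
    phi = np.zeros(len(P))
    for j, i in enumerate(np.argmin(D, axis=1)):
        phi[i] += alpha[j]
    return phi

def classify(x, models):
    """Coverage rules of the KNN model classification (Section II-C).
    Each model is <cls, sim, num, rep>: label, radius, count, center."""
    d = np.array([np.linalg.norm(x - m["rep"]) for m in models])
    covered = [m for m, di in zip(models, d) if di <= m["sim"]]
    if len(covered) == 1:                       # one covering model decides
        return covered[0]["cls"]
    if covered:                                 # several: largest Num wins
        return max(covered, key=lambda m: m["num"])["cls"]
    # none: the model with the nearest boundary decides
    return models[int(np.argmin(d - np.array([m["sim"] for m in models])))]["cls"]

# Hypothetical data: rank the population, then resolve a sample with the models.
P = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
HTS = np.array([[0.1, 0.0], [0.9, 0.1], [1.1, -0.1]])
alpha = np.array([1, 1, -1])              # first two positive, last negative
print(transfer_rank(P, HTS, alpha))       # higher rank -> transferred first

models = [
    {"cls": +1, "sim": 3.0, "num": 5, "rep": np.array([0.0, 0.0])},
    {"cls": -1, "sim": 2.0, "num": 3, "rep": np.array([6.0, 0.0])},
]
print(classify(np.array([1.0, 0.0]), models))   # inside the positive model
```

Solutions tied on transfer rank would each be passed through `classify`, and those falling in (or nearest to) a positive model would be transferred first.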


Algorithm 1: Pseudocode of Proposed Algorithm
Input:
1) The population size of each task: N;
2) The number of transferred solutions: n;
3) The number of generations saved by the negative class set: m;
4) The size of the transferred population: r;
5) The number of tasks: K.
Output: K nondominated solution sets.
1  Initialize populations P1^t, ..., PK^t with labels of 0 and evaluate; set t = 0.
2  while the stopping criterion is unsatisfied do
3    Calculate the MMD between tasks.
4    for i = 1 to K do
5      r similar populations TPi are selected from P1^t, ..., PK^t (TPi has the smaller MMD value with Pi^t).
6      TPi^1 ← Transfer TPi to the unified space Y.
7      if t == 0 then
8        Si^1 ← n solutions randomly selected from TPi^1;
9      else
10       Si^1 = Select(HTS_{i−1}, TPi^1, n);
11     Si is obtained by transferring Si^1 to Pi^t and assigning a label of 1 to Si;
12     Qi^t ← Apply crossover and mutation operators on Pi^t ∪ Si;
13     foreach c with label 1 in Pi^t ∪ Si do
14       Suppose c1 is the solution before c was transferred;
15       if c is nondominated in Pi^t ∪ Qi^t ∪ Si then
16         put c1 in the positive-class set Posi;
17       else
18         put c1 in the negative-class set Negi^t;
19     if t < m then
20       HTSi ← Posi ∪ (Negi^1 ∪ ··· ∪ Negi^t);
21     else
22       HTSi ← Posi ∪ (Negi^{t−m+1} ∪ ··· ∪ Negi^t);
23     Pi^{t+1} ← Use NSGA-II on Pi^t ∪ Qi^t ∪ Si;
24   Set t = t + 1;

Algorithm 2: Select(HTS, P, n)
Input:
1) Historical transferred solution set: HTS;
2) The current population: P;
3) The number of transferred solutions: n.
Output: n transferred solutions S.
1  Calculate the transfer rank of P by using HTS (see Section II-A).
2  Divide P into F1, F2, ..., FN based on transfer rank.
3  Find k that makes |F1 ∪ ··· ∪ Fk| ≤ n hold.
4  S ← F1 ∪ F2 ∪ ··· ∪ Fk.
5  if |F1 ∪ ··· ∪ Fk| < n and |F1 ∪ ··· ∪ Fk+1| > n then
6    Classifier M ← Train the KNN model classifier by using HTS (see Section II-B).
7    M is used to classify Fk+1 (see Section II-C).
8    C ← n − |F1 ∪ F2 ∪ ··· ∪ Fk| positive-transferred solutions selected in Fk+1.
9    S ← S ∪ C.

III. PROPOSED ALGORITHM

MMOTK pseudocode is presented in Algorithm 1, where P1^t, P2^t, ..., PK^t are initialized with labels of 0 and evaluated. The following steps are performed while the stopping criterion is unsatisfied. First, the MMD [39], [47] is calculated between the high-dimensional distributions, and r similar populations TPi are identified for each task. To achieve communication between tasks, TPi^1 is acquired by mapping the decision space of TPi to the unified space Y = [0, 1]^D. In the zeroth generation, n transferred solutions Si^1 are randomly selected from TPi^1; otherwise, Si^1 are selected using Algorithm 2. Si is then obtained by transferring Si^1 to Pi^t and assigning a label of 1 to Si. Here, D = max_k {Dk} is the dimensionality of Tk, xk^j ∈ [lk^j, uk^j] is the jth variable in Tk, Lk = (lk^1, ..., lk^j, ..., lk^{Dk}) is the lower bound of Tk, and Uk = (uk^1, ..., uk^j, ..., uk^{Dk}) denotes the upper bound of Tk. The transfer of a solution x from Ti to Tj can be represented as follows:

x' = ((x − Lk)/(Uk − Lk)) · (Uj − Lj) + Lj        (3)

where x' is the transferred solution.

Crossover and mutation [19], [48] operators can then be applied to Pi^t ∪ Si to produce offspring Qi^t. If a point c with label 1 in Pi^t ∪ Si is nondominated in Pi^t ∪ Qi^t ∪ Si, the point c1 is placed into the positive class set Posi; otherwise, c1 is placed into the negative class set Negi (c1 is the point before c was transferred). The historical negative class sets increase the number of training samples and thus the model accuracy. If t < m, then HTSi = Posi ∪ Negi^1 ∪ ··· ∪ Negi^t; otherwise, HTSi = Posi ∪ Negi^{t−m+1} ∪ ··· ∪ Negi^t (Negi^t is the negative class set in generation t). Finally, the next-generation population is generated based on nondominated sorting and crowding distance [10].

The details of selecting the transferred solutions are given in Algorithm 2. First, the transfer rank of a population P is calculated using the historical transferred solutions in HTS (see Section II-A). Solutions with the same transfer rank are placed in the same layer, and the layers are sorted in descending order based on transfer rank. The population can thus be divided into F1, F2, ..., FN, where Fi represents layer i and k is the largest index satisfying |F1 ∪ F2 ∪ ··· ∪ Fk| ≤ n (|F1 ∪ F2 ∪ ··· ∪ Fk| represents the size of F1 ∪ F2 ∪ ··· ∪ Fk). Solutions in F1, F2, ..., Fk are then put into S and, if |F1 ∪ F2 ∪ ··· ∪ Fk| = n, the selection of transferred solutions is complete. Otherwise, the following steps are implemented. First, the KNN model classifier M is trained on HTS, and M is used to classify Fk+1 (the classification method is described in Section II-C). Next, the solutions within the positive transfer area are selected first and the solutions near the positive transfer area second, until n − |F1 ∪ F2 ∪ ··· ∪ Fk| transferred solutions C have been selected from Fk+1. Finally, S and C are merged to obtain the n transferred solutions S.

IV. EXPERIMENTAL SETUP AND DISCUSSION OF THE RESULTS

In this section, the proposed MMOTK approach is compared with seven state-of-the-art evolutionary algorithms: 1) EMTIL [45]; 2) EMaTOMKT [39]; 3) MOMFEA-SADE [40]; 4) EMEA [37]; 5) MO-MFEA [31];


TABLE II
PROPERTIES OF MULTIOBJECTIVE MULTITASKING OPTIMIZATION PROBLEMS

6) MO-MFEA-II [34]; and 7) NSGA-II [10]. The benchmark MMO problems [42], [49], [50] were used to verify the effectiveness of each algorithm, and the experimental results showed that the proposed technique outperformed comparable models in selecting effective transferred solutions.

A. Test Problems

In this study, the revised CEC 2017 evolutionary multitasking optimization competition benchmark (test suite 1) [45] was used to assess the performance of the proposed algorithm. Test suite CEC2017 was modified in [45] to increase the difficulty of test problems and prevent unfair advantages for specific Pareto sets (PSs), wherein Sph2 = (0, ..., 0, 5.1, ..., 5.1)^T (40 zeros followed by ten entries of 5.1) and Spl2 = (0, ..., 0, 40, ..., 40)^T (25 zeros followed by 25 entries of 40) (see Table II). The test suite included nine MMO problems, each consisting of two tasks involving a multiobjective optimization task with two or three objectives. These problems are designed by considering the degree of intersection of Pareto-optimal solutions and the similarity of the fitness landscape between optimization tasks. The degree of intersection for the global minimum

medium similarity, and low similarity according to the fitness landscape calculated using a Spearman rank correlation coefficient [42].

The WCCI2020 competition evolutionary MMO test suite (test suite 2) was also used to assess the performance of the proposed algorithm. These test problems included 10 MMO benchmark questions, each containing 50 multiobjective optimization tasks involving both ZDT [49] and DTLZ problems [50]. There were some similarities between the Pareto-optimal solutions and fitness landscapes across the 50 tasks.

B. Performance Indicators

In this study, the inverted generational distance (IGD) [51] and hypervolume (HV) [52] metrics were used to evaluate the performance of the seven algorithms.

1) Inverted Generational Distance: The IGD metric represents the average distance between the optimal solutions and the nondominated solution set obtained by an algorithm. Small
can be divided into three categories: 1) complete intersection; mity and diversity of a distribution. Suppose P∗ is the optimal
2) partial intersection; and 3) no intersection. The test prob- solution set of the uniform distribution in Pareto front (PF)
lems in each category were divided into high similarity, and P denotes the nondominant solution set obtained by an
algorithm. IGD can then be calculated as follows:

    IGD(P, P*) = ( Σ_{p*∈P*} d(p*, P) ) / |P*|                    (4)

where d(p*, P) = min_{p∈P} ||p* − p|| denotes the minimum Euclidean distance from the point p* in the PF to the individuals p in the final solution set P. The magnitude of P* was |P*| = 1000 for 2-D problems and 10 000 for 3-D problems.

TABLE III
REFERENCE POINTS OF HV IN NINE TEST QUESTIONS

TABLE IV
AVERAGE AND SIGNIFICANCE ANALYSIS (SHOWN IN THE BRACKETS) OF IGD OBTAINED BY MMOTK, MOMFEA-SADE, EMTIL, EMEA, MO-MFEA, MO-MFEA-II, AND NSGA-II ON NINE MMO BENCHMARKS

TABLE V
AVERAGE AND SIGNIFICANCE ANALYSIS (SHOWN IN THE BRACKETS) OF HV OBTAINED BY MMOTK, MOMFEA-SADE, EMTIL, EMEA, MO-MFEA, MO-MFEA-II, AND NSGA-II ON NINE MMO BENCHMARKS

2) Hypervolume: The HV evaluation metric is popular because of its theoretical foundation. It evaluates the comprehensive performance of an MOEA by calculating the hypervolume of the space enclosed by the nondominated solution set and a reference point. A larger HV indicates higher quality in the final solution, strictly abiding by the principle of Pareto domination. Taking a minimization problem as an example, if solution A dominates solution B, the HV value of A must be greater than that of B. The calculation of HV, which is independent of the PF, is given by

    HV(P | z^r) = Vol( ∪_{p∈P} ( [f_1^p, z_1^r] × ··· × [f_m^p, z_m^r] ) )    (5)

where Vol(·) is the Lebesgue measure and z^r = (z_1^r, z_2^r, ..., z_m^r) is the reference point dominated by all points in the PF.

HV reference points for the nine test problems CIHS–NILS are shown in Table III.

TABLE VI
AVERAGED IGD VALUES OBTAINED BY NSGA-II, MOMFEA, EBS, MATEA, EMATOMKT, MMOTK, AND MMOTK' ON THE WCCI20-PROBLEM 1 PROBLEMS AFTER 30 INDEPENDENT RUNS, WHERE THE GRAY BACKGROUND IS THE BEST RESULT OF ALL ALGORITHMS

C. Experimental Settings

Parameter settings for test suite 1 [45] can be described as follows.
1) The population size of each algorithm was set to 100 for 2-D optimization problems and 120 for 3-D optimization problems. All algorithms were run 30 times independently.
2) The maximum number of generations for all algorithms was 1000.
3) Parameters of the crossover and mutation operators were set to ηc = 2, ηm = 20, pc = 1, and pm = 1.
4) The number of transferred solutions for all algorithms was n = 10.
5) The number of generations saved by the negative class set was m = 3.
6) The random mating probability (rmp) in MO-MFEA was set to 0.3.
7) The size of the transferred population was r = 5.
8) The significance of the algorithms was analyzed using a Wilcoxon rank-sum test at a significance level of 0.05. The symbols "+," "=," and "−" indicate that the results of the other algorithms were significantly better than, similar to, and worse than those of the MMOTK approach, respectively.

D. Simulation Results and Analysis for Test Suite 1

Tables IV and V list the average IGD and HV values obtained by each algorithm over 30 independent runs.
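The significance analysis in setting 8) can be sketched as follows; this pure-Python version uses the normal approximation of the Wilcoxon rank-sum (Mann–Whitney) statistic, with the tie correction on the variance omitted for brevity, and is only an illustration of the test, not the routine used in the study.

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (z, p); suitable for samples of ~30 runs as in the settings above."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):                # assign average ranks over ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1               # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                     # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

Given two samples of per-run IGD values, a p-value below 0.05 would mark the difference as significant, with the sign of z indicating which sample tends to rank lower.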

The MMOTK and MOMFEA-SADE algorithms were run in MATLAB 2020b, and the results of EMTIL, EMEA, MO-MFEA, MO-MFEA-II, and NSGA-II were collected from the experimental study in [45]. The best-performing algorithms are marked with a gray background. It is evident that the proposed MMOTK approach outperformed the other algorithms in 12 of the 18 tasks. The results of the Wilcoxon rank-sum test were 12/2/4, 14/1/3, 15/0/3, 15/0/3, 16/0/2, and 18/0/0 in the last row of Table IV and 12/0/6, 16/1/1, 16/1/1, 16/1/1, 17/0/1, and 16/2/0 in Table V. The quantities of "−" were also significantly higher than those of "+" in the two tables. Thus, the MMOTK algorithm exhibited better performance for most problems, owing to the transfer rank and the KNN model classifier.

TABLE VII
AVERAGED IGD VALUES OBTAINED BY NSGA-II, MOMFEA, EBS, MATEA, EMATOMKT, MMOTK, AND MMOTK' ON THE WCCI20-PROBLEM 2 PROBLEMS AFTER 30 INDEPENDENT RUNS, WHERE THE GRAY BACKGROUND IS THE BEST RESULT OF ALL ALGORITHMS

NSGA-II is a single-task optimization model with no communication between tasks. As such, the performance of the NSGA-II algorithm is poor for most problems, indicating that transfer between tasks is beneficial to population evolution. Transfer between tasks is random in MO-MFEA, and some solutions that may cause negative transfer are also transferred, resulting in a waste of resources. As the experimental results showed, this approach failed for most problems. Nondominated solutions are selected as transferred solutions in EMEA, and the algorithm performs well given high similarity between tasks, but not for low similarity. The results of EMEA, MO-MFEA, and MO-MFEA-II were superior to those of NSGA-II, which offered no advantages over the other algorithms.

TABLE VIII
AVERAGE AND SIGNIFICANCE ANALYSIS OF IGD OBTAINED BY MMOTK, MMOTK', AND EMATOMKT ON NINE MMO BENCHMARKS

EMTIL uses a Bayesian classifier to divide transferred solutions into two categories. While this approach is unique, the accuracy of the Bayesian classifier was relatively low compared with that of the KNN model classifier. EMTIL therefore offers some advantages over EMEA, MO-MFEA, MO-MFEA-II, and NSGA-II, but performs worse than the proposed MMOTK approach.

MOMFEA-SADE performed well in the five tasks CIMS-T1, CIMS-T2, PIMS-T2, NIHS-T1, and NIMS-T1 of Table IV because CIMS provides more intertask information. However, becoming trapped in a local optimum is common when solving PIMS-T2, which MOMFEA-SADE avoids by using the two mechanisms of SA and DE simultaneously. For NIHS-T1 and NIMS-T1, there is less intertask information and the transfers between tasks are of little help for population evolution. As such, the advantages of MMOTK are not evident in NIHS-T1 and NIMS-T1. The performance of MOMFEA-SADE in Table V is similar to that in Table IV, except for NILS-T1.

TABLE IX
AVERAGE AND SIGNIFICANCE ANALYSIS OF HV OBTAINED BY MMOTK, MMOTK', AND EMATOMKT ON NINE MMO BENCHMARKS

The nine test problems discussed above are composed of both easy and difficult tasks. They were divided into CIHS–CILS, PIHS–PILS, and NIHS–NILS based on the degree of intersection for the global minimum. The first category (CIHS–CILS) is composed of two tasks with the same Pareto set. This increases their relevance, especially when the easy task converges to the Pareto set and can directly assist with the difficult task. Knowledge transfer is often effective in these cases, as can be seen in Tables IV and V. Here, the proposed MMOTK algorithm achieved the best results in four out of the six CIHS–CILS cases, which confirms the effectiveness of MMOTK.

The second category (PIHS–PILS) consists of two tasks with different PSs that represent partial intersections and is more difficult to solve than CIHS–CILS. As shown in Tables IV and V, MMOTK outperformed the other algorithms for most problems in this category.

The last category (NIHS–NILS) consists of two tasks exhibiting no intersection in the Pareto set. The relevance of these problems is low, and knowledge transfer is of little help in these cases. The results produced by MMOTK therefore do not obviously reflect its advantages on tasks with less intertask information.

Finally, the effectiveness of transfers in MMOTK is illustrated using the CIHS function in the 29th generation as an example. The blue circles in Fig. 5 represent nondominated solutions in a population and the red dots represent transferred solutions. It is evident that the transferred solutions in the first task dominate most of the nondominated solutions in the population, thereby accelerating population evolution. This result demonstrates the effectiveness of MMOTK for improving the probability of positive transfers and accelerating population evolution using the transfer rank and the KNN model classifier strategy. MMOTK also narrowed the selection range of transferred solutions and was able to choose positive solutions with higher probability. These experimental results confirm the validity of MMOTK for solving MMO problems.

E. Simulation Results of Test Suite 2 and the Validity Analysis of the Transfer Strategy

Test suite 2 includes ten test problems. Each test problem has 50 tasks and comes from the WCCI2020 competition on evolutionary many-task optimization. The decision spaces of these test problems are rotated by a matrix, and it is not very efficient to generate good solutions for the rotated problems with the classical crossover and mutation operators. Therefore, a variant of MMOTK, denoted MMOTK', was designed for these problems, in which a Gaussian distribution is used to generate offspring, as in EMaTOMKT [39].

Tables VI and VII list the average IGD obtained by the compared algorithms on problems 1 and 2, with each algorithm running 30 times independently. (The experimental results for the remaining eight test problems are provided in the supplementary file.) The results of NSGA-II, MOMFEA, EBS [53], and MaTEA [54] were taken from the experimental studies in [39], and the results of MMOTK, MMOTK', and EMaTOMKT [39] were produced by running them in MATLAB 2020.
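The Gaussian reproduction mentioned above can be sketched as follows; this illustrative snippet samples each child around a parent in the unified space, and the standard deviation and bounds are assumed values for illustration, not the actual settings of MMOTK' or EMaTOMKT.

```python
import random

def gaussian_offspring(parent, sigma=0.1, lower=0.0, upper=1.0):
    """Sample one child around `parent` in the unified space [0, 1]^D,
    clipping each variable back into its bounds."""
    return [min(max(random.gauss(x, sigma), lower), upper) for x in parent]

# Example: generate five children of a 3-D parent.
random.seed(0)
children = [gaussian_offspring([0.2, 0.5, 0.9]) for _ in range(5)]
```

Because the perturbation is isotropic around the parent, it is insensitive to rotations of the decision space, which is consistent with the motivation given above for the rotated problems.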

Fig. 5. Explanation of the effect of the transfer in MMOTK.

TABLE X
AVERAGE AND STANDARD DEVIATION OF IGD VALUES OBTAINED BY MMOTK WITH n = 6, n = 8, n = 10, n = 12, n = 14, n = 16, n = 18, AND n = 20 ON TEST SUITE 1

TABLE XI
AVERAGE AND STANDARD DEVIATION OF HV VALUES OBTAINED BY MMOTK WITH n = 6, n = 8, n = 10, n = 12, n = 14, n = 16, n = 18, AND n = 20 ON TEST SUITE 1

From these tables, we can see that MMOTK' and EMaTOMKT were superior to the other compared algorithms. EMaTOMKT and MMOTK' perform well on test suite 2 because the problems in this test suite all involve a rotation matrix, and generating offspring with the Gaussian distribution works well there. On the other hand, the performance of MMOTK is poor when it does not use the Gaussian distribution to produce offspring. This also shows that generating offspring with the Gaussian distribution is more conducive to solving these problems.

To further illustrate the transfer efficiency of MMOTK, we compared MMOTK with MMOTK' and EMaTOMKT on test suite 1. The experimental results are shown in Tables VIII and IX. From these tables, we can see that both MMOTK and MMOTK' perform better than EMaTOMKT, indicating that MMOTK is more adaptable and works in both 50 and two dimensions. This verified the effectiveness of the transfer strategy based on the transfer rank and the KNN model classifier.

F. Parameter Sensitivity Analysis

The influence of the number of transfers n on the performance of MMOTK was also investigated as part of the study. In the experiment, n was set to eight different values: 6, 8, 10, 12, 14, 16, 18, and 20. The experimental results are provided in Tables X and XI, which display the mean and standard deviation of IGD and HV over ten runs for the different values of n. As seen, the results were highly similar, with the best performance marked by a gray background. This demonstrates that the performance of MMOTK is not sensitive to n, which was set to 10 in this study.

V. CONCLUSION

In this study, a novel MMO algorithm, MMOTK, was proposed for knowledge transfer between tasks. A transfer rank and a KNN model classifier were developed to improve the probability of positive transfers. The effectiveness of MMOTK was verified through a comparison with seven state-of-the-art algorithms (EMTIL, EMaTOMKT, MOMFEA-SADE, EMEA, MO-MFEA, MO-MFEA-II, and NSGA-II). The results showed that MMOTK was significantly better for MMO problems. There remain several challenges in [55] that are worthy of further discussion and research.

REFERENCES

[1] A. Arias-Montano, C. A. Coello Coello, and E. Mezura-Montes, "Multiobjective evolutionary algorithms in aeronautical and aerospace engineering," IEEE Trans. Evol. Comput., vol. 16, no. 5, pp. 662–694, Oct. 2012.
[2] A. Goicoechea, D. R. Hansen, and L. Duckstein, "Multi-objective decision analysis with engineering and business applications," J. Oper. Res. Soc., vol. 34, no. 5, pp. 449–450, 1982.
[3] V. Bhaskar, S. K. Gupta, and A. K. Ray, "Applications of multiobjective optimization in chemical engineering," Rev. Chem. Eng., vol. 16, no. 1, pp. 1–54, 2000.
[4] W. Gong, Z. Cai, and Z. Li, "An efficient multiobjective differential evolution algorithm for engineering design," Struct. Multidiscip. Optim., vol. 38, no. 2, pp. 137–157, 2012.
[5] I. Otero-Muras, A. A. Mannan, J. R. Banga, and D. Oyarún, "Multiobjective optimization of gene circuits for metabolic engineering," IFAC-PapersOnLine, vol. 52, no. 26, pp. 13–16, 2019.
[6] J.-Y. Tzeng, T. Liu, and J. Chou, "Applications of multi-objective evolutionary algorithms to cluster tool scheduling," in Proc. 1st Int. Conf. Innov. Comput. Inf. Control (ICICIC), vol. 2, 2006, pp. 531–534.
[7] B. Rosenberg, M. D. Richards, J. T. Langton, S. Tenenbaum, and D. W. Stouch, "Applications of multi-objective evolutionary algorithms to air operations mission planning," in Proc. GECCO, 2008, pp. 1879–1886.
[8] M. G. C. Tapia and C. A. Coello Coello, "Applications of multi-objective evolutionary algorithms in economics and finance: A survey," in Proc. IEEE Congr. Evol. Comput., Singapore, 2007, pp. 532–539.
[9] X. Zou, Y. Chen, M. Liu, and L. Kang, "A new evolutionary algorithm for solving many-objective optimization problems," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 5, pp. 1402–1412, Oct. 2008.
[10] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II," in Proc. Int. Conf. Parallel Problem Solving Nat., 2000, pp. 849–858.
[11] G. Wang and Y. Wang, "Fuzzy-dominance-driven GA and its application in evolutionary many objective optimization," Dyn. Continuous Discrete Impulsive Syst., vol. 14, no. 4, pp. 538–543, 2007.
[12] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints," IEEE Trans. Evol. Comput., vol. 18, no. 4, pp. 577–601, Aug. 2014.
[13] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evol. Comput., vol. 10, no. 3, pp. 263–282, 2002.
[14] R. Tanabe and H. Ishibuchi, "A niching indicator-based multi-modal many-objective optimizer," Swarm Evol. Comput., vol. 49, pp. 134–146, Sep. 2019.
[15] L. Wei, X. Li, and R. Fan, "A new multi-objective particle swarm optimisation algorithm based on R2 indicator selection mechanism," Int. J. Syst. Sci., vol. 50, no. 10, pp. 1920–1932, 2019.
[16] N. Yang, H.-L. Liu, and J. Yuan, "Performance investigation of Iε-indicator and Iε+-indicator based on Lp-norm," Neurocomputing, vol. 458, pp. 546–558, Oct. 2021.
[17] J. Yuan, H.-L. Liu, F. Gu, Q. Zhang, and Z. He, "Investigating the properties of indicators and an evolutionary many-objective algorithm using promising regions," IEEE Trans. Evol. Comput., vol. 25, no. 1, pp. 75–86, Feb. 2021.
[18] J. Falcón-Cardona, H. Ishibuchi, C. A. C. Coello, and M. Emmerich, "On the effect of the cooperation of indicator-based multi-objective evolutionary algorithms," IEEE Trans. Evol. Comput., vol. 25, no. 4, pp. 681–695, Aug. 2021.
[19] E. Zitzler and S. Künzli, "Indicator-based selection in multiobjective search," in Proc. Int. Conf. Parallel Problem Solving Nat., 2004, pp. 832–842.
[20] K. Li, Á. Fialho, S. Kwong, and Q. Zhang, "Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition," IEEE Trans. Evol. Comput., vol. 18, no. 1, pp. 114–130, Feb. 2014.
[21] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, Dec. 2007.
[22] J. Yuan, H.-L. Liu, and C. Peng, "Population decomposition-based greedy approach algorithm for the multi-objective knapsack problems," Int. J. Pattern Recognit. Artif. Intell., vol. 31, no. 4, 2017, Art. no. 1759006.
[23] H.-L. Liu, F. Gu, and Q. Zhang, "Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems," IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 450–455, Jun. 2014.
[24] Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, "Balancing convergence and diversity in decomposition-based many-objective optimizers," IEEE Trans. Evol. Comput., vol. 20, no. 2, pp. 180–198, Apr. 2016.
[25] J. Rice, C. R. Cloninger, and T. Reich, "Multifactorial inheritance with cultural transmission and assortative mating. I. Description and basic properties of the unitary models," Amer. J. Human Genet., vol. 30, no. 6, pp. 618–643, 1978.
[26] C. R. Cloninger, J. Rice, and T. Reich, "Multifactorial inheritance with cultural transmission and assortative mating. II. A general model of combined polygenic and cultural inheritance," Amer. J. Human Genet., vol. 31, no. 2, pp. 176–198, 1979.
[27] S. J. Pan and Y. Qiang, "A survey on transfer learning," IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, Oct. 2010.

[28] Z. Xu and K. Zhang, "Multiobjective multifactorial immune algorithm for multiobjective multitask optimization problems," Appl. Soft Comput., vol. 107, Aug. 2021, Art. no. 107399.
[29] S. O. Zare, B. Saghafian, A. Shamsai, and S. Nazif, "Multi-objective optimization," in Encyclopedia of Machine Learning and Data Mining. Boston, MA, USA: Springer, 2017.
[30] Z. Chen, R. Guo, Z. Lin, T. Peng, and X. Peng, "A data-driven health monitoring method using multiobjective optimization and stacked autoencoder based health indicator," IEEE Trans. Ind. Informat., vol. 17, no. 9, pp. 6379–6389, Sep. 2021.
[31] A. Gupta, Y.-S. Ong, L. Feng, and K. C. Tan, "Multiobjective multifactorial optimization in evolutionary multitasking," IEEE Trans. Cybern., vol. 47, no. 7, pp. 1652–1665, Jul. 2017.
[32] A. Gupta, Y.-S. Ong, and L. Feng, "Multifactorial evolution: Toward evolutionary multitasking," IEEE Trans. Evol. Comput., vol. 20, no. 3, pp. 343–357, Jun. 2016.
[33] M. Gong, Z. Tang, H. Li, and J. Zhang, "Evolutionary multitasking with dynamic resource allocating strategy," IEEE Trans. Evol. Comput., vol. 23, no. 5, pp. 858–869, Oct. 2019.
[34] K. K. Bali, A. Gupta, Y.-S. Ong, and P. S. Tan, "Cognizant multitasking in multiobjective multifactorial evolution: MO-MFEA-II," IEEE Trans. Cybern., vol. 51, no. 4, pp. 1784–1796, Apr. 2021.
[35] X. Ma, Q. Chen, Y. Yu, Y. Sun, and Z. Zhu, "A two-level transfer learning algorithm for evolutionary multitasking," Front. Neurosci., vol. 13, p. 1408, Jan. 2020.
[36] K. K. Bali, A. Gupta, L. Feng, Y. S. Ong, and T. P. Siew, "Linearized domain adaptation in evolutionary multitasking," in Proc. IEEE Congr. Evol. Comput. (CEC), 2017, pp. 1295–1302.
[37] L. Feng et al., "Evolutionary multitasking via explicit autoencoding," IEEE Trans. Cybern., vol. 49, no. 9, pp. 3457–3470, Sep. 2019.
[38] J. Lin, H.-L. Liu, K. C. Tan, and F. Gu, "An effective knowledge transfer approach for multiobjective multitasking optimization," IEEE Trans. Cybern., vol. 51, no. 6, pp. 3238–3248, Jun. 2021.
[39] Z. Liang, X. Xu, L. Liu, Y. Tu, and Z. Zhu, "Evolutionary many-task optimization based on multi-source knowledge transfer," IEEE Trans. Evol. Comput., early access, Aug. 2, 2021, doi: 10.1109/TEVC.2021.3101697.
[40] Z. Liang, H. Dong, C. Liu, W. Liang, and Z. Zhu, "Evolutionary multitasking for multiobjective optimization with subspace alignment and adaptive differential evolution," IEEE Trans. Cybern., early access, Jun. 24, 2020, doi: 10.1109/TCYB.2020.2980888.
[41] Q. Shang et al., "A preliminary study of adaptive task selection in explicit evolutionary many-tasking," in Proc. IEEE Congr. Evol. Comput. (CEC), 2019, pp. 2153–2159.
[42] Y. Yuan et al., "Evolutionary multitasking for multiobjective continuous optimization: Benchmark problems, performance metrics and baseline results," 2017, arXiv:1706.02766.
[43] K. Deb, Multiobjective Optimization Using Evolutionary Algorithms. Chichester, U.K.: Wiley, 2001.
[44] A. Gupta and Y.-S. Ong, "Genetic transfer or population diversification? Deciphering the secret ingredients of evolutionary multitask optimization," in Proc. IEEE Symp. Ser. Comput. Intell., 2016, pp. 1–7.
[45] J. Lin, H.-L. Liu, B. Xue, M. Zhang, and F. Gu, "Multiobjective multitasking optimization based on incremental learning," IEEE Trans. Evol. Comput., vol. 24, no. 5, pp. 824–838, Oct. 2020.
[46] G. Guo, H. Wang, D. A. Bell, Y. Bi, and K. Greer, "KNN model-based approach in classification," in Proc. OTM Int. Conf. Move Meaningful Internet Syst., 2003, pp. 986–996.
[47] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola, "A kernel two-sample test," J. Mach. Learn. Res., vol. 13, pp. 723–773, Mar. 2012.
[48] R. B. Agrawal, K. Deb, and R. B. Agrawal, "Simulated binary crossover for continuous search space," Complex Syst., vol. 9, no. 3, pp. 115–148, 1994.
[49] E. Zitzler, K. Deb, and L. Thiele, "Comparison of multiobjective evolutionary algorithms: Empirical results," Evol. Comput., vol. 8, no. 2, pp. 173–195, Jun. 2000.
[50] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable multi-objective optimization test problems," in Proc. Congr. Evol. Comput., 2002, pp. 825–830.
[51] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, "Performance assessment of multiobjective optimizers: An analysis and review," IEEE Trans. Evol. Comput., vol. 7, no. 2, pp. 117–132, Apr. 2003.
[52] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach," IEEE Trans. Evol. Comput., vol. 3, no. 4, pp. 257–271, 1999.
[53] R.-T. Liaw and C.-K. Ting, "Evolutionary many-tasking based on biocoenosis through symbiosis: A framework and benchmark problems," in Proc. Congr. Evol. Comput., Donostia, Spain, 2017, pp. 2266–2273.
[54] Y. Chen, J. Zhong, L. Feng, and J. Zhang, "An adaptive archive-based evolutionary framework for many-task optimization," IEEE Trans. Emerg. Topics Comput. Intell., vol. 4, no. 3, pp. 369–384, Jun. 2020.
[55] K. C. Tan, L. Feng, and M. Jiang, "Evolutionary transfer optimization—A new frontier in evolutionary computation research," IEEE Comput. Intell. Mag., vol. 16, no. 1, pp. 22–33, Feb. 2021.

Hongyan Chen is currently pursuing the master's degree with the School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, China.
Her current research interests include evolutionary computation, multitask learning, and machine learning.

Hai-Lin Liu (Senior Member, IEEE) received the B.S. degree in mathematics from Henan Normal University, Xinxiang, China, in 1984, the M.S. degree in applied mathematics from Xidian University, Xi'an, China, in 1989, and the Ph.D. degree in control theory and engineering from the South China University of Technology, Guangzhou, China, in 2002.
He is a Full Professor with the School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou. He is a Postdoctoral Fellow with the Institute of Electronic and Information, South China University of Technology. He has published over 100 research papers in journals and conferences. His research interests include evolutionary computation and optimization, wireless network planning and optimization, and their applications.
Prof. Liu currently serves as an Associate Editor for the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION.

Fangqing Gu received the B.S. degree from Changchun University, Jilin, China, in 2007, the M.S. degree from the Guangdong University of Technology, Guangzhou, Guangdong, China, in 2011, and the Ph.D. degree from the Department of Computer Science, Hong Kong Baptist University, Hong Kong, in 2016.
He joined the School of Mathematics and Statistics, Guangdong University of Technology, as a Lecturer. His research interests include data mining, machine learning, and evolutionary computation.

Kay Chen Tan (Fellow, IEEE) received the B.Eng. (First Class Hons.) and Ph.D. degrees from the University of Glasgow, Glasgow, U.K., in 1994 and 1997, respectively.
He is currently the Chair Professor of Computational Intelligence with the Department of Computing, The Hong Kong Polytechnic University, Hong Kong. He has published over 300 refereed articles and seven books.
Prof. Tan is currently the Vice-President (Publications) of the IEEE Computational Intelligence Society, USA. He served as the Editor-in-Chief for IEEE Computational Intelligence Magazine from 2010 to 2013 and the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION from 2015 to 2020. He currently serves as an editorial board member for more than ten journals. He is an IEEE Distinguished Lecturer Program Speaker and the Chief Co-Editor of the Springer Book Series on Machine Learning: Foundations, Methodologies, and Applications.