
1750 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 27, NO. 6, DECEMBER 2023

A Knowledge Guided Transfer Strategy for Evolutionary Dynamic Multiobjective Optimization

Yinan Guo, Member, IEEE, Guoyu Chen, Min Jiang, Senior Member, IEEE, Dunwei Gong, Member, IEEE, and Jing Liang, Senior Member, IEEE

Abstract—The key task in dynamic multiobjective optimization problems (DMOPs) is to find Pareto-optima closer to the true ones as soon as possible once a new environment occurs. Previous dynamic multiobjective evolutionary algorithms (DMOEAs) normally focus on DMOPs with regular environmental changes, but neglect the widespread random ones, limiting their applications in real-world fields. To address this issue, a knowledge guided transfer strategy (KTS)-based DMOEA is proposed in this article. First, knowledge described as a two-tuple is extracted under each historical environment and preserved in a knowledge pool. Redundant knowledge is recognized and adaptively removed so as to guarantee the diversity of the pool. Second, a knowledge matching strategy is developed to re-evaluate the representatives of each stored knowledge under a new environment, with the purpose of finding the most valuable one to promote positive knowledge transfer. Third, an improved knowledge transfer mechanism based on subspace alignment is introduced. By integrating it with the knowledge reuse mechanism, a hybrid transfer strategy is constructed that adaptively selects the most suitable one in terms of the similarity degree of the selected knowledge to the current environment, and then generates a new initial population. Experiments on 20 benchmark problems demonstrate that KTS outperforms five state-of-the-art algorithms, achieving good versatility in solving DMOPs with both regular and random changes.

Index Terms—Dynamic multiobjective optimization, knowledge transfer, random change, regular change.

Manuscript received 17 January 2022; revised 30 July 2022 and 30 September 2022; accepted 13 November 2022. Date of publication 16 November 2022; date of current version 1 December 2023. This work was supported in part by the National Natural Science Foundation of China under Grant 61973305, Grant 61573361, Grant 52121003, Grant 61922072, and Grant 62133015; in part by the Six Talent Peak Project in Jiangsu Province under Grant 2017-DZXX-046; in part by the Royal Society International Exchanges 2020 Cost Share; and in part by the 111 Project under Grant B21014. (Corresponding authors: Guoyu Chen; Jing Liang.)

Yinan Guo is with the School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China, and also with the School of Mechanical Electronic and Information Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China (e-mail: [email protected]).

Guoyu Chen and Dunwei Gong are with the School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China (e-mail: [email protected]; [email protected]).

Min Jiang is with the Department of Cognitive Science and Technology, Xiamen University, Xiamen 361005, China (e-mail: [email protected]).

Jing Liang is with the School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China (e-mail: [email protected]).

This article has supplementary material provided by the authors and color versions of one or more figures available at https://doi.org/10.1109/TEVC.2022.3222844.

Digital Object Identifier 10.1109/TEVC.2022.3222844

I. INTRODUCTION

MANY real-world optimization problems involve multiple conflicting and time-varying objectives, termed dynamic multiobjective optimization problems (DMOPs) [1], [2]. To address them, rich studies have been devoted to tracking the Pareto-optima varying over time as closely as possible whenever an environmental change occurs [3]. More especially, many population-based optimization algorithms, such as particle swarm optimization and the multiobjective evolutionary algorithm based on decomposition [4], [5], have obtained promising success in maintaining good population diversity after retriggering the evolution process as a new environment appears [6], [7].

In dynamic multiobjective evolutionary algorithms (DMOEAs), various change response techniques have been designed to rationally utilize historical Pareto-optima for tracking the ones of a new environment [8], [9], [10]. The prediction mechanism, as a widely used one, produces an initial population in terms of the moving trend of Pareto-optima under changing environments. For DMOPs with periodic change, the memory strategy reuses historical Pareto-optima found under similar environments. Different from them, transfer learning (TL) is introduced to mine the latent correlation between the distributions of Pareto-optima among changing environments, providing a promising problem solver for DMOPs whose solution distributions are non-independently identically distributed (non-IID) across environments [11]. More especially, Jiang et al. [12] speeded up the convergence by introducing domain adaptation to efficiently track time-varying Pareto fronts. On this basis, a memory that preserves historical optima was integrated with manifold TL, forming the memory-driven manifold TL-based DMOEA (MMTL-DMOEA) [13], with the purpose of fusing various forms of historical information into a current initial population. Intuitively, the information of the source and target domains in TL-based DMOEAs is derived from historical optima and current candidate solutions, respectively. As in traditional TL, negative transfer may misguide the evolution when the distributions of individuals in the source and target domains are extremely distinct. To tackle it, Jiang et al. [14] filtered out some elite solutions with good diversity under the current environment, forming a more efficient transfer of historical knowledge.

No matter which kind of change response technique is used, it has been proved to be successful in solving DMOPs with regular
1089-778X © 2022 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: BEIHANG UNIVERSITY. Downloaded on January 24,2024 at 06:21:21 UTC from IEEE Xplore. Restrictions apply.
change patterns. The so-called regular change refers to Pareto-optima under adjacent environments varying with certain rules, such as a parallel mode or specific moving trends, at a steady frequency or severity [15], [16]. Previous prediction or memory models mine and utilize implicit knowledge from Pareto-optima changing with the above regular patterns, with the purpose of assisting DMOEAs to explore the current search space with higher efficiency [17], [18]. However, in real-world dynamic optimization problems, Pareto-optima may change over time in an irregular mode at dynamic frequency or severity. We call this a random change pattern of DMOPs. Under this pattern, the Pareto-optima under two adjacent environments may have weak similarity, especially for a significant severity of change [19]. For example, the power supply in magnesium production needs to be dynamically adjusted so as to maximize the yield and grade of magnesia grain. The granularity and grade of the raw material may severely fluctuate with time, resulting in the optimal electric current jumping far away from the past one [20]. Apparently, there are insignificant changing rules among dynamic environments under random change, leading to inefficient initialization under a new environment by the prediction mechanism [21]. Similarly, diversity introduction strategies may be an unpromising problem solver for DMOPs with this change pattern, because they rely on the Pareto-optima of the last environment to produce a new initial population, and the diversity generated by mutation or reinitialization provides only limited convergence pressure [22]. Regarding the TL mechanism, an essential challenge for solving DMOPs with random change is to guarantee the similarity between the source samples and the target ones, avoiding the increasing occurrence of negative transfer. Although a few works seek the latent correlation across environments to improve the utilization efficiency of historical knowledge [23], [24], finding appropriate historical knowledge to promote positive transfer under different change patterns is still an open issue.

To further improve the versatility of the TL-based change response technique on DMOPs with both regular and random changes, we develop a knowledge guided transfer strategy (KTS) for dynamic multiobjective evolutionary optimization in this article. In KTS, a knowledge pool is constructed to preserve knowledge extracted from the Pareto-optima of historical environments. As a new environment appears, the knowledge of the most similar environment is found from the pool and transferred to adapt to the current environment, producing an initial population. The threefold contributions of this article can be summarized as follows.

1) Knowledge refers to the Pareto-optima and their implicit features found under each historical environment, and all historical knowledge is preserved in a knowledge pool in chronological order. In order to maintain the diversity of the pool, a knowledge update strategy is designed to adaptively remove redundant historical knowledge in terms of the similarity among the pool members.

2) A knowledge matching strategy is developed to find the most valuable historical knowledge from the knowledge pool. After re-evaluating the representatives stored in each historical knowledge under a new environment, the most appropriate knowledge, i.e., the one most similar to the current environment, is chosen to promote positive knowledge transfer.

3) A novel hybrid knowledge transfer mechanism is designed, with the purpose of adapting historical knowledge to a new environment with higher computational efficiency and faster evolutionary speed. According to the similarity degree of the selected knowledge to the current environment, we employ either the knowledge reuse technique or the transfer technique based on subspace distribution alignment to generate an initial population.

The remainder of this article is organized as follows. Section II provides the fundamental concepts of dynamic multiobjective optimization and existing DMOEAs. Section III describes the principle of the proposed KTS-based DMOEA. The experimental results of the proposed method and state-of-the-art DMOEAs are compared and further analyzed in Section IV. Finally, Section V concludes the main contributions and gives future works.

II. PRELIMINARY AND RELATED RESEARCH

A. Dynamic Multiobjective Optimization

Without loss of generality, a DMOP can be formulated as follows:

min F(x, t) = (f1(x, t), . . . , fm(x, t))^T
s.t. x ∈ Ω  (1)

where t is the time index, x represents an n-dimensional decision vector in the decision space Ω, fi(x, t), i = 1, . . . , m, refers to the ith objective function at time t, and m is the number of objectives.

Definition 1 (Dynamic Pareto Domination): Suppose x and y are two candidate solutions at time t. x dominates y, represented by x ≺t y, if and only if

fi(x, t) ≤ fi(y, t), ∀i = 1, . . . , m
fi(x, t) < fi(y, t), ∃i = 1, . . . , m.  (2)

Definition 2 (Dynamic Pareto-Optimal Set): Denoting POS(t) as the dynamic Pareto-optimal set at time t, all solutions in it are not dominated by any other individuals, satisfying

POS(t) = {x | ¬∃y, y ≺t x}.  (3)

Definition 3 (Dynamic Pareto-Optimal Front): At time t, the dynamic Pareto-optimal front, represented by POF(t), is the objective vector set of POS(t)

POF(t) = {F(x, t) | x ∈ POS(t)}.  (4)

DMOPs can be categorized into the following four types in terms of the dynamic characteristics of POF(t) and POS(t) [7].

Type I: POS(t) changes over time, but POF(t) is fixed.
Type II: Both POS(t) and POF(t) change over time.
Type III: POS(t) is fixed, but POF(t) changes over time.
Type IV: Both POS(t) and POF(t) remain unchanged over time.

In DMOPs, each dynamic can produce environmental changes with various characteristics [25], [26].
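The dominance relation of Definition 1 and the nondominated set of (3) translate directly into code. The following is a minimal Python sketch (the function names are ours, not from the paper), assuming the objective vectors at the current time step have already been evaluated:

```python
import numpy as np

def dominates(fx, fy):
    """Definition 1: fx dominates fy iff it is no worse in every
    objective and strictly better in at least one (minimization)."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def nondominated(F):
    """Eq. (3): indices of the objective vectors in F that are not
    dominated by any other member."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi)
                       for j, fj in enumerate(F) if j != i)]
```

The same test applies at every time step for all four problem types; only the objective values F(x, t) change with t.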
Except for the above-mentioned dynamics in Section I, other types of environmental change are also investigated: 1) quasi-static indicates DMOPs with low change severity and frequency; 2) low change severity but high change frequency results in problems with progressive change; 3) abrupt change has high change severity and low change frequency; and 4) chaotic change expresses high change severity and frequency.

B. Dynamic Multiobjective Evolutionary Algorithms

To the best of our knowledge, the change response techniques of DMOEAs mainly include diversity introduction, memory strategy, and prediction mechanism [26].

In diversity introduction strategies [27], [28], solutions produced randomly or by mutation replace a certain proportion of individuals to form a new initial population once an environment changes, with the purpose of increasing the diversity. Following this idea, Deb et al. [29] designed two kinds of diversity introduction strategies (i.e., D-NSGA-II-A and D-NSGA-II-B) and compared their performances. These simple mechanisms can prevent the population from falling into local optima [30], but show weak performance in solving DMOPs with severe change. Different from them, the memory mechanism [31], [32], [33] records useful information obtained from past environments and reuses it when an environmental change appears. Chen et al. [34] proposed a dynamic two-archive evolutionary algorithm, in which two co-evolving archives, concerning convergence and diversity, respectively, were complementary to each other via a mating selection mechanism. Also, Sahmoud and Topcuoglu [35] introduced the memory mechanism into NSGA-II. An explicit memory was developed to store all nondominated solutions during the evolution process and to provide the ones reused for a similar newly appeared environment. Intuitively, how to reuse useful information from memory with the least computational cost is a key challenge of this mechanism [36].

The prediction-based approach [37], [38], [39], as a popular one in DMOEAs, learns the changing trend of the Pareto-optima obtained from historical environments to generate a more rational initial population, with the purpose of speeding up the convergence at a new time. Several regression methods, including autoregression [40], Kalman filter [41], grey model [42], and support vector regression [43], have been introduced to construct the prediction model. Following that, Rambabu et al. [44] developed a mixture-of-experts-based ensemble framework, in which a gating network was utilized to manage the switching among various predictors in terms of their prediction performances. Also, Guo et al. [45] developed an ensemble prediction model based on three heterogeneous predictors, and three kinds of synthesis strategies were designed to estimate the fitness values of candidates under a new environment. Prediction-based change response techniques show competitive performance in solving DMOPs with regular change. However, there are insignificant changing rules among dynamic environments under random change. In this case, the prediction mechanism may not accurately capture the trend of the random change, thus producing an inefficient initial population under a new environment and causing low evolution efficiency [46]. In addition, hybrid techniques generally integrate more than one change response strategy to initialize a population once a new environment appears. Liang et al. [36] proposed a hybrid of memory and prediction strategies, initializing the population at the new time. Once a historical environment was similar to the current one, the memory strategy was employed. Otherwise, a prediction method was performed to generate initial individuals.

Apart from the above-mentioned mainstream strategies, the change response technique based on TL has attracted increasing attention in recent years [47], [48]. Jiang et al. [12] put forward solving DMOPs via TL, and a domain adaptation method, called transfer component analysis (TCA), was introduced to construct a transfer model for generating an initial population at the new time. To improve the computational efficiency of this mechanism, a memory-driven manifold TL strategy (MMTL-DMOEA) was presented [13], in which the Pareto-optimal solutions obtained from past environments were stored in an external storage and employed to generate an initial population under a new environment by manifold TL. Following that, the transfer Adaboost (TrAdaboost) method was introduced to construct a transfer model based on some good solutions with better diversity filtered out by a presearch strategy [14], with the purpose of overcoming the negative transfer in TL-based DMOEAs. However, constructing an appropriate transfer model needs extra computational cost. Thus, transferring the most valuable historical information to the current environment with higher efficiency is still an open problem.

C. Subspace Distribution Alignment Between Infinite Subspaces

Subspace distribution alignment between infinite subspaces (SDA-IS) [49], as a domain adaptation method, mines the association between the search spaces of historical and current environments. A geodesic flow path is constructed from the source subspace (a historical search space) to the target one (the current search space), and a kernel trick is employed to integrate over an infinite number of subspaces on the path. Following that, the distributions of the subspaces are aligned along each part of the geodesic flow kernel.

Suppose that DS = {xS1, . . . , xSN} ⊆ R^S is the source data, and the target data is denoted as DT = {xT1, . . . , xTN} ⊆ R^T. First, PCA is employed to extract the key features of the source and target data, and the corresponding subspaces are formed, represented by SS ∈ R^{n×d} and ST ∈ R^{n×d}, respectively. Here, n is the number of decision variables, and d is the dimension of the subspace. In this way, the mapping matrix can be formulated as follows:

MS = [SS U1  RS U2] [Λ1 Λ2; Λ2 Λ3] ATS [SS U1  RS U2]^T.  (5)

In (5), RS is the orthogonal complement of SS, and U1 and U2 are the orthogonal matrices obtained from SS^T ST and RS^T ST by singular value decomposition (SVD). Λ1, Λ2, and Λ3 are diagonal matrices based on the principal angles θ of SS and ST.
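As a concrete illustration, the construction in (5) can be sketched in numpy as follows. This is our reading of the formulas, not reference code from [49]: we take D̄S = DS[SS U1  RS U2] (a projection onto the joint 2d-dimensional basis), clip the principal angles away from zero, and leave sign conventions aside; all function names are ours.

```python
import numpy as np

def pca_basis(X, d):
    # top-d principal directions of the centered data (columns = basis)
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:d].T                                        # (n, d)

def psd_power(A, p):
    # fractional power of a symmetric PSD matrix via eigendecomposition
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.clip(w, 1e-12, None) ** p) @ Q.T

def sda_is_mapping(DS, DT, d):
    """Sketch of the mapping matrix MS in Eq. (5)."""
    SS, ST = pca_basis(DS, d), pca_basis(DT, d)
    # RS: orthogonal complement of SS, shape (n, n-d)
    RS = np.linalg.svd(SS, full_matrices=True)[0][:, d:]
    # U1, U2 and the principal angles from SS^T ST and RS^T ST
    U1, cos_t, Vt = np.linalg.svd(SS.T @ ST)
    theta = np.arccos(np.clip(cos_t, 0.0, 1.0))
    sin_t = np.clip(np.sin(theta), 1e-12, None)
    U2 = -RS.T @ ST @ Vt.T @ np.diag(1.0 / sin_t)          # (n-d, d)
    # diagonal elements of L1, L2, L3 (limits 2, 0, 0 as theta -> 0)
    t = np.clip(2.0 * theta, 1e-12, None)
    l1 = 1.0 + np.sin(t) / t
    l2 = (np.cos(t) - 1.0) / t
    l3 = 1.0 - np.sin(t) / t
    L = np.block([[np.diag(l1), np.diag(l2)],
                  [np.diag(l2), np.diag(l3)]])             # (2d, 2d)
    Omega = np.hstack([SS @ U1, RS @ U2])                  # (n, 2d)
    # ATS aligns the subspace distributions: cov(DS.Omega)^-0.5 cov(DT.Omega)^0.5
    ATS = psd_power(np.cov((DS @ Omega).T), -0.5) @ \
          psd_power(np.cov((DT @ Omega).T), 0.5)
    return Omega @ L @ ATS @ Omega.T                       # (n, n)
```

With MS in hand, the stored source solutions are mapped into the target-aligned space by a single matrix product, DS @ MS, which is the "simple calculation" the text refers to.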
The diagonal elements of Λ1, Λ2, and Λ3 are λ1i = 1 + sin(2θi)/(2θi), λ2i = (cos(2θi) − 1)/(2θi), and λ3i = 1 − sin(2θi)/(2θi), i = 1, . . . , d, respectively. Additionally, ATS = cov(D̄S)^{−0.5} cov(D̄T)^{0.5}, where D̄S = DS[SS U1  RS U2]^T and D̄T = DT[SS U1  RS U2]^T. Based on the mapping matrix MS, the source data can well adapt to the target space via simple calculation.

In summary, there are two advantages of the SDA-IS-based transfer strategy. First, SDA-IS aligns the distributions as well as the bases of the source and target subspaces on the geodesic flow path, avoiding misalignment after adaptation. Second, historical knowledge can generate transferred individuals via the learned mapping matrix without additional time-consuming processes, such as the training of a classifier in [14] or a single-objective optimization process for finding transferred solutions in [13], achieving promising computational efficiency.

III. DMOEA BASED ON KNOWLEDGE GUIDED TRANSFER STRATEGY

The framework of the DMOEA based on KTS (KTS-DMOEA) is shown in Algorithm 1. To detect a change, we randomly select 0.1N individuals from the current population and re-evaluate them; a difference between their objective values in two adjacent generations means that a new environment has appeared [40]. Under any environment, a static MOEA is conducted to find the Pareto-optimal solutions. Once an environmental change occurs, the proposed KTS is performed to generate a new initial population.

Algorithm 1 Framework of KTS-DMOEA
Input: Dynamic optimization function F(x, t), population size N
Output: A series of approximated populations P
1: Set time step t = 0, knowledge pool KP = ∅;
2: Initialize a population P0 with size N;
3: while termination criterion not met
4:   if an environmental change occurs
5:     t = t + 1;
6:     KP = KnowledgeExtractionAndUpdate(P_{t−1}, KP);
7:     [Best_K(t), Dif] = KnowledgeMatching(KP, F(x, t));
8:     Pt = KnowledgeTransfer(Best_K(t), Dif);
9:   end if
10:  Perform MOEA;
11: end while

Fig. 1. Workflow of KTS.

The core of KTS-DMOEA lies in KTS. Fig. 1 depicts its workflow. Under each environment, a part of the POS is selected and stored in a knowledge pool as that environment's specific knowledge. In order to maintain the diversity of the knowledge in the pool, a redundant item is deleted when the capacity of the pool reaches the maximum. Once an environmental change occurs, the knowledge matching strategy is triggered to seek the best knowledge from the pool, that is, the one whose corresponding environment is most similar to the current one. Following that, the selected knowledge is transferred into the current environment via a hybrid transfer strategy, forming a new initial population. Intuitively, the three key issues of KTS are knowledge extraction and update, the knowledge matching strategy, and the knowledge transfer mechanism.

A. Knowledge Extraction and Update

Knowledge refers to the Pareto-optima and their implicit features in each dynamic environment that may provide valuable guidance for exploring a future environment [32], [34]. All historical knowledge is preserved in a knowledge pool in chronological order. Generally, the historical Pareto-optima in a knowledge pool are updated by deleting the earliest one once the maximum storage space is reached [13]. This update strategy ignores the diversity of the knowledge. To address this issue, the KTS-DMOEA proposed in this article updates the pool by deleting the redundant knowledge in terms of its crowding degree. In this way, diverse knowledge is maintained under the limited storage space, with the purpose of providing appropriate source information for knowledge transfer with high computational efficiency. Following that, a knowledge extraction and update mechanism is developed, as shown in Algorithm 2.

1) Extracting Knowledge:

Definition 4: Knowledge is described by a two-tuple, denoted as K = ⟨Q, C⟩. Here, Q ⊂ POS(t) is a Pareto-optimal subset obtained in the tth historical environment. More especially, all extreme solutions and a center one of POS(t) are contained in Q.
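The two-tuple of Definition 4 amounts to a small record type. A Python sketch (type and function names are ours), with the centroid C computed as the per-dimension mean of the solutions kept in Q:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Knowledge:
    Q: np.ndarray   # (0.5N, n): extreme solutions, center, and selected elites
    C: np.ndarray   # (n,): per-dimension mean of Q, the knowledge centroid

def make_knowledge(Q):
    """Build the two-tuple K = <Q, C> from a stored Pareto-optimal subset."""
    Q = np.asarray(Q, dtype=float)
    return Knowledge(Q=Q, C=Q.mean(axis=0))
```

One `Knowledge` record per historical environment is appended to the pool KP, in chronological order.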
C = (μ1, . . . , μn), and each element is defined as the mean of all solutions in Q along each dimension of the decision space

μj = (1/|Q|) Σ_{i=1}^{|Q|} xij,  xi ∈ Q.  (6)

To the best of our knowledge, each axis can be defined as a reference vector to find an extreme solution. Thus, the number of extreme solutions is equal to the number of objectives [50]. Assuming that there are m objectives in an environment, the extreme solution along the lth dimension is found by minimizing the achievement scalarizing function (ASF). Let wl = (wl1, . . . , wlm)^T be the reference vector of the lth axis; the ASF is described as follows:

ASF(xi, wl) = max_k {fk(xi)/wlk}  (7)

where fk(xi) represents the kth objective value of the ith solution in POS(t), k = 1, . . . , m. wlk is defined as follows:

wlk = 1 if l = k, and wlk = 1e−6 if l ≠ k.  (8)

The solution that has the minimal ASF value is selected from POS(t), namely, xi with i = arg min ASF(xi, wl), ∀xi ∈ POS(t), and saved to Q as an extreme solution in this historical environment. Subsequently, m extreme solutions are found, and the remaining Pareto-optimal solutions form a Pareto-optimal subset, represented by POS(t) = POS(t) − Q.

Following that, an angle-based decomposition method is introduced to select Pareto-optimal solutions from POS(t) iteratively and form Q with good distribution and convergence, with the purpose of preserving more valuable knowledge under the limited storage capacity. First, the angle between two individuals is defined as follows:

θ(xi, xp) = min arccos⟨f(xi), f(xp)⟩,  xi ∈ Q.  (9)

The solution xp ∈ POS(t) that has the maximal angle to xi is found to construct a reference vector, denoted as v = f(xp)

p = arg max θ(xi, xp),  xp ∈ POS(t).  (10)

Second, an elite solution xq that has the minimal ASF with respect to v is removed from POS(t) and saved to Q

q = arg min ASF(xi, v),  xi ∈ POS(t).  (11)

The above two steps are repeated until the size of Q reaches 0.5N. It is worth mentioning that the first solution selected by the angle-based decomposition method is the center one, because it has a larger angle to the extreme solutions than the others and thus lies in the center of the POF. Therefore, 0.5N Pareto-optima and a mean vector are recorded in each historical knowledge. Apparently, under limited storage space, the knowledge extraction strategy ensures that the knowledge pool preserves more diverse historical knowledge.

Algorithm 2 Knowledge Extraction and Update
Input: The Pareto-optima at time t−1 POS, knowledge pool KP
Output: Updated knowledge pool KP
1: Set Q = ∅;
2: /* Extracting knowledge */
3: for i = 1 to m
4:   Find ith extreme solution by Eq. (7) and (8);
5:   Save extreme solution into Q;
6: end for
7: while |Q| < 0.5N
8:   Update POS = POS − Q;
9:   Identify a reference vector v by Eq. (9) and (10);
10:  Find an elite solution xq by Eq. (11);
11:  Save solution xq into Q;
12: end while
13: Construct knowledge K = ⟨Q, C⟩;
14: /* Updating knowledge */
15: Store K into knowledge pool KP;
16: if |KP| > L
17:   Calculate crowding distance Dis of each knowledge by Eq. (12);
18:   Delete the most redundant knowledge having minimum Dis;
19: end if

2) Updating Knowledge: The knowledge extracted from each environment is preserved in a knowledge pool, represented by KP. Denoting L as its maximum size, KP is updated when the number of knowledge items exceeds L. In order to maintain the diversity of KP, the knowledge is evaluated at the centroid level, and the items with significant differences are retained. Define the similarity of knowledge as the crowding distance among the centroids C of the various knowledge items, represented by Dis

Disi = min D(Ci, Cj) / max D(Ci, Cj),  j ≠ i, j = 1, . . . , |KP|  (12)

where min D(Ci, Cj) and max D(Ci, Cj) represent the minimum and maximum Euclidean distances between Ci and the others, respectively. Apparently, a smaller Dis means that the corresponding knowledge provides similar but redundant information for the evolution under subsequent environments, leading to worse diversity of the knowledge pool. Thus, the knowledge having the minimum Dis will be removed from KP. That is, KP = KP\{Kr}, r = arg min Disi, i = 1, . . . , |KP|.

B. Knowledge Matching

Knowledge acquired from a similar historical environment may provide valuable guidance for the evolution under the current environment and effectively avoid negative transfer. Thus, how to find the most appropriate knowledge from the knowledge pool in terms of the similarity between historical and current environments is a key issue for high-efficiency knowledge transfer.

Without loss of generality, two environments that appear in a DMOP are judged to be similar if the corresponding POS and POF approximate each other [32]. Following this assumption, a knowledge matching strategy is designed to find the most valuable historical knowledge, i.e., the one belonging to the environment that is most similar to the current one. Its evaluation process is presented in Algorithm 3.
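Putting Eqs. (7)-(12) together, the selection loop of Algorithm 2 can be sketched as follows. This is a simplified Python reading under our own assumptions (minimization, objective values precomputed row-wise, helper names ours), not the authors' implementation:

```python
import numpy as np

def asf(f, w):
    # Eq. (7): achievement scalarizing function of one objective vector
    return np.max(np.asarray(f) / w)

def extract_indices(F, size):
    """Pick |Q| = size solutions from the objective matrix F (N, m):
    one extreme per objective via Eqs. (7)-(8), then the angle-based
    rule of Eqs. (9)-(11)."""
    F = np.asarray(F, dtype=float)
    m = F.shape[1]
    remaining = list(range(len(F)))
    Q = []
    for l in range(m):                        # extreme solutions
        w = np.full(m, 1e-6)
        w[l] = 1.0                            # Eq. (8)
        i = min(remaining, key=lambda i: asf(F[i], w))
        Q.append(i)
        remaining.remove(i)
    unit = F / np.linalg.norm(F, axis=1, keepdims=True)
    while len(Q) < size and remaining:
        # Eqs. (9)-(10): candidate with the largest minimum angle to Q
        angles = {p: np.min(np.arccos(np.clip(unit[Q] @ unit[p], -1, 1)))
                  for p in remaining}
        p = max(angles, key=angles.get)
        v = F[p]                              # reference vector v = f(xp)
        q = min(remaining, key=lambda i: asf(F[i], v))   # Eq. (11)
        Q.append(q)
        remaining.remove(q)
    return Q

def redundancy(C):
    # Eq. (12): Dis_i = min_j D(C_i, C_j) / max_j D(C_i, C_j), j != i
    D = np.linalg.norm(C[:, None] - C[None, :], axis=-1)
    np.fill_diagonal(D, np.nan)
    return np.nanmin(D, axis=1) / np.nanmax(D, axis=1)
```

The update step of Algorithm 2 then simply drops the knowledge item whose centroid attains the minimum of `redundancy` when the pool exceeds its capacity L.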
Algorithm 3 Knowledge Matching
Input: The knowledge pool KP, the dynamic optimization function at time t, F(x, t)
Output: The subset Q of the most similar knowledge Best_K(t), the corresponding difference Dif
1: Determine the representatives of each knowledge;
2: for i = 1 : |KP|
3:   Calculate the difference Difi according to Eq. (13);
4: end for
5: Determine the best knowledge having the minimum Dif value;
6: Define the Pareto-optima subset of the best knowledge as Best_K(t);

Fig. 2. Illustration of evaluating the difference.

According to Definition 4, a historical knowledge item records 0.5N Pareto-optima and their feature. Once a new environment occurs, judging the similarity between all historical environments and the current one by the complete knowledge is time consuming. In order to balance the computational cost and the measuring ability, the extreme solutions and the center one saved in each historical knowledge are utilized as its representatives for knowledge matching. Let xik ∈ Qk be the ith representative in the kth historical knowledge, and F(xik) its objective values. After re-evaluating it under the current environment, its new objective values, represented by F̄(xik), are obtained. Denoting D(F(xik), F̄(xik)) as the Euclidean distance between the historical and current objective values of xik, the difference Dif between the historical and current environments is evaluated as

Dif_k = (1/(m+1)) Σ_{i=1}^{m+1} D(F(xik), F̄(xik)).  (13)

Intuitively, a smaller Difk means that the environment of the kth knowledge is more similar to the current one. The corresponding historical knowledge may provide potential candidate solutions closer to the current true POS, thus guaranteeing effective knowledge transfer. Based on this, we take the subset of the most similar historical knowledge for the current environment as Best_K(t) = Ql, l = arg min Dif_k, k = 1, . . . , L.

Apparently, the proposed knowledge matching process only re-evaluates the (m+1) representative solutions obtained under each historical environment, not all the solutions in each knowledge item. Thus, the maximal total number of re-evaluated solutions is (m+1)L once a new environment appears, definitely improving the computational efficiency. Taking dMOP2 as an example, the knowledge obtained in two environments is labeled K1 and K2, respectively. Each of them contains 20 Pareto-optimal solutions. We observe from Fig. 2 that K2

Fig. 3. Knowledge shows different similarity.

[...] arrived environments may be distinct. For example, K1 is the most similar historical knowledge for FDA3 at the second and third time steps, respectively, as shown in Fig. 3. That is, K1 shows different similarity under various environments. Apparently, once a historical environment is significantly similar to the current one, the corresponding knowledge can be directly reused by the same mechanism introduced in [31], [32], and [33] without an extra transfer process, saving time cost. In contrast, the SDA-IS-based transfer strategy can mine more useful historical knowledge to generate the initial population when the historical environments show a certain difference to the current one. In order to achieve promising transfer efficiency, a hybrid knowledge transfer mechanism is presented, and its pseudocode is listed in Algorithm 4.

When Dif_k ≤ 1e−4 for Best_K(t), the true POS under the current environment closely approximates the Pareto-optima recorded in the selected historical knowledge [36]. Thus, all solutions in Best_K(t) are directly transferred into the initial population of the current environment, which is called the knowledge reuse technique. That is, the current initial population consists of the 0.5N historical Pareto-optima in Best_K(t) and 0.5N
achieves smaller Dif k and provides more valuable potential randomly generated individuals.
candidate for a new environment. As the difference of Best_K(t) cannot reach the above
threshold, the Pareto-optima saved in Best_K(t) are closer
C. Knowledge Transfer to true POS under a newly appeared environment than other
Under the premise of guaranteeing higher computational historical knowledge, but still have the relatively distinct dis-
efficiency, a good transfer strategy can adapt historical knowl- tribution with the current true POS. In order to efficiently
edge to a new environment for speeding up the convergence. adapt historical knowledge to the current environment, SDA-
Though the knowledge of the most similar environment is IS is employed. Denoted Best_K(t) as source data, the target
selected from a knowledge pool in terms of Dif k under a new one, represented by DT (t), consists of N individuals at time
environment, the similarity degree of Best_K(t) for each newly t that are generated by the strategy in [13]. To the best of
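The matching step of Algorithm 3 can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary layout of the knowledge pool and the `evaluate` callback are our assumptions, with each knowledge carrying its m+1 representatives and the objective values stored when it was extracted, as Eq. (13) requires.

```python
import math

def match_knowledge(pool, evaluate):
    """Eq. (13): for each knowledge, re-evaluate its m+1 representatives
    (the extreme solutions plus the center one) under the current
    environment, average the Euclidean distances between the stored and
    re-evaluated objective vectors, and return the most similar knowledge."""
    best, best_dif = None, float("inf")
    for knowledge in pool:
        reps = knowledge["reps"]          # (m+1) representative decision vectors
        old_objs = knowledge["rep_objs"]  # their objective values when stored
        dif = sum(math.dist(evaluate(x), f_old)
                  for x, f_old in zip(reps, old_objs)) / len(reps)
        if dif < best_dif:
            best, best_dif = knowledge, dif
    return best, best_dif
```

When the returned difference is at most 1e−4, Algorithm 4 reuses Best_K(t) directly; otherwise the SDA-IS-based transfer is applied.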


Algorithm 4 Knowledge Transfer
Input: The subset Q of most similar knowledge Best_K(t); the corresponding difference Dif
Output: The initial population of the new environment
1: if Dif ≤ 1e−4
2:   Generate 0.5N random solutions;
3:   Combine the random solutions and Best_K(t) as the initial population;
4: else
5:   Set d = m−1, TrS = ∅;
6:   Generate the set DT(t) containing N solutions at time t;
7:   Use PCA for DT(t) to get ST ∈ R^{n×d};
8:   Use PCA for Best_K(t) to get SS ∈ R^{n×d};
9:   Construct the mapping matrix MS according to Eq. (5);
10:  for all x ∈ Best_K(t)
11:    x' = x · MS;
12:    TrS = TrS ∪ {x'};
13:  end for
14:  Combine TrS and Best_K(t) as the initial population;
15: end if

To the best of our knowledge, nondominated solutions can be viewed as an (m−1)-dimensional segmented manifold; thus, the dimension of the subspace d is set to (m−1) in SDA-IS [13], [51]. Based on this, the source subspace and the target one, represented by SS ∈ R^{n×d} and ST ∈ R^{n×d}, are formed by feature vectors, respectively. According to (5), the mapping matrix between the two spaces, denoted MS, is calculated. All x ∈ Best_K(t) can thus adapt to the current search space and form candidate solutions x' after the transfer

    x' = x \cdot M_S, \quad \forall x \in Best\_K(t).    (14)

The number of transferred individuals is 0.5N. Thus, the initial population in the new environment is formed by all transferred individuals and the ones in Best_K(t), with the purpose of making full use of the historical knowledge.

D. Computational Complexity

The knowledge transfer strategy presented in this article consists of three main operations: knowledge extraction and update, knowledge matching, and knowledge transfer. The operations for extracting and updating knowledge call for O(mN^2) and O(Ln), respectively. Because N ≫ n and N ≫ L, the computational cost of extracting and updating knowledge is O(mN^2). For knowledge matching, the computational cost is O(m(m+1)L). Additionally, the hybrid knowledge transfer mechanism requires O(nN). To sum up, since N ≫ m, the computational complexity of the proposed knowledge transfer strategy is O(mN^2) + O(m(m+1)L) + O(nN) = O(mN^2).

IV. EXPERIMENTAL RESULTS AND DISCUSSION

To verify the rationality of the knowledge transfer strategy proposed in this article, the experimental studies consist of six components: sensitivity analysis of the key parameter, effectiveness of knowledge extraction and update, rationality of the hybrid transfer mechanism, performance comparisons under various environmental changes, performance investigation on different dimensions of decision variables, and sensitivity of the population size on three-objective DMOPs.

A. Benchmark Functions and Performance Metrics

Twenty benchmark functions, including dMOP1–dMOP3 [7], FDA1–FDA5 [28], and DF3–DF14 [52], are employed in the experiments. For DMOPs with regular change, a new environment occurs as t = (1/n_t)⌊τ/τ_t⌋ [52]. Here, τ, n_t, and τ_t represent the total generation number, the severity, and the frequency of change, respectively. Intuitively, a smaller τ_t means that the environment changes rapidly. Differently, t = rand(0, 1)⌊τ/τ_t⌋ for DMOPs with random change [53], where rand(0, 1) is a random number obeying a uniform distribution in (0, 1], which determines the severity of change. Apparently, new environments appear with random severity of change, while various τ_t can be employed to reflect the change speed of environments.

In order to compare algorithm performance fairly, the mean inverted generational distance (MIGD) [54], mean hypervolume (MHV) [55], and mean IGD+ (MIGD+) [56] are employed as performance metrics.

1) MIGD can comprehensively reflect the diversity and convergence of the Pareto-optimal solutions obtained over all environments. Assuming that a set of uniformly distributed samples of the true POF(t) is denoted as R(t), the MIGD value is calculated as follows:

    \mathrm{MIGD} = \frac{1}{T} \sum_{t=1}^{T} \frac{\sum_{i=1}^{|R(t)|} D(\mathrm{POF}(t), r_i)}{|R(t)|}    (15)

where T is the number of environments that occurred in a run, and D(POF(t), r_i) denotes the minimum Euclidean distance in the objective space between the ith point in R(t) and the POF(t) obtained by an algorithm. A smaller MIGD value indicates better algorithm performance.

2) MHV is a representative metric that measures the comprehensive performance of an algorithm, and is formulated as follows:

    \mathrm{MHV} = \frac{1}{T} \sum_{t=1}^{T} \bigcup_{i=1}^{|\mathrm{POF}(t)|} \{\mathrm{volume}_i\}    (16)

where volume_i denotes the volume of the hypercube formed by the reference point z_ref and the ith solution in POF(t).

3) MIGD+ can reflect the comprehensive quality of the Pareto-optimal solutions obtained over all environments, and is calculated as follows:

    \mathrm{MIGD}^{+} = \frac{1}{T} \sum_{t=1}^{T} \frac{\sum_{i=1}^{|R(t)|} \min_{p \in \mathrm{POF}(t)} \mathrm{dis}(p, r_i)}{|R(t)|}    (17)

with

    \mathrm{dis}(p, r_i) = \sqrt{\left(\max\{p^1 - r_i^1, 0\}\right)^2 + \cdots + \left(\max\{p^m - r_i^m, 0\}\right)^2}.

A smaller MIGD+ value means better algorithm performance.

Each experiment is independently run 20 times, and the statistical results are analyzed.
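The per-environment quantities inside Eqs. (15) and (17) can be sketched as follows. This is a minimal illustration with plain Python sequences, where `pof` is the obtained POF(t) and `ref_set` the sampled reference set R(t); the function names are ours, not the paper's.

```python
import math

def igd(pof, ref_set):
    """Inner term of Eq. (15): average, over the reference points r_i in
    R(t), of the minimum Euclidean distance from r_i to the obtained POF(t)."""
    return sum(min(math.dist(p, r) for p in pof) for r in ref_set) / len(ref_set)

def igd_plus(pof, ref_set):
    """Inner term of Eq. (17), using the dominance-aware distance
    dis(p, r) = sqrt(sum_j max(p_j - r_j, 0)^2): only the amount by which
    p fails to dominate r is penalized."""
    def dis(p, r):
        return math.sqrt(sum(max(pj - rj, 0.0) ** 2 for pj, rj in zip(p, r)))
    return sum(min(dis(p, r) for p in pof) for r in ref_set) / len(ref_set)

def mean_over_environments(values_per_env):
    """Outer 1/T average of Eqs. (15)-(17) over the T environments of a run."""
    return sum(values_per_env) / len(values_per_env)
```

For instance, a solution that dominates a reference point contributes nothing to IGD+, but still contributes its full Euclidean distance to IGD.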


In particular, the best results among the compared algorithms on each test instance are highlighted. Also, the Wilcoxon rank-sum test [57], [58] is employed to indicate the significance of differences between results at the 0.05 significance level, where "+," "−," and "=" indicate that the results obtained by another DMOEA are significantly better than, significantly worse than, and statistically similar to those obtained by the compared algorithm, respectively.

B. Compared Algorithms and Parameter Settings

In order to fully compare with mainstream change response strategies, five popular DMOEAs are employed for comparison: D-NSGA-II-A [29], D-NSGA-II-B [29], KF [41], PPS [40], and MMTL-DMOEA [13]. The former two adopt traditional diversity introduction strategies to respond to a new environment. Different from them, prediction approaches are employed in KF and PPS. As for the newly published MMTL-DMOEA, TL is introduced to speed up convergence by utilizing historical POSs. For D-NSGA-II-A, 20% of the nondominated solutions are replaced with randomly generated individuals when a new environment appears. Differently, 20% of the nondominated solutions after mutation substitute for the original ones in D-NSGA-II-B. As for KF, the equal element values of the Q and R diagonal matrices are set to 0.04 and 0.01, respectively. PPS adopts an AR-based prediction model; the order of the AR model p is set to 3, and the length of the history mean point series M is set to 23. The key parameter settings of MMTL-DMOEA refer to [13]. The size of the knowledge pool is L = 16 in KTS-DMOEA. In addition, NSGA-II is employed as the static optimizer under each environment in all compared algorithms for fair comparison. In NSGA-II, simulated binary crossover (SBX) and polynomial mutation are employed. For SBX, the distribution index and crossover probability are set to ηc = 20 and pc = 1.0, respectively, while ηm = 20 and pm = 1/n for polynomial mutation. Also, the population sizes for bi- and three-objective test instances are set to 100 and 190, respectively.

For all benchmarks, the dimension of the search space is 14. Each run contains 100 environmental changes. It is worth noting that no change takes place in the first 60 generations, so as to minimize the effect of static optimization [40], [41]. There are two types of environmental change in the experiments, i.e., regular change and random change. In order to verify the robustness of the proposed algorithm, three groups of environmental parameters with τt = 10 and nt = 5, 10, 20 are employed for regular change, whereas τt = 5, 10, 20 for random change.

C. Sensitivity Analysis of L

In KTS-DMOEA, L determines how much historical knowledge can be stored in the knowledge pool, which has a direct impact on the diversity of the knowledge. As L varies from 2 to 30 in steps of 2, Fig. 4 depicts the MIGD values obtained by KTS-DMOEA under regular and random environmental changes. We observe from the statistical results that, with the increase of L, the MIGD values become smaller, indicating better algorithm performance, except on dMOP1 and FDA2. To the best of our knowledge, these two benchmark functions both have a fixed POS. That means the true POSs of all environments are the same, and the corresponding Pareto-optima found by KTS-DMOEA are similar. The knowledge stored in the knowledge pool thus has weak diversity, so L has an insignificant impact on algorithm performance. In addition, setting an appropriate L is necessary for achieving a promising tradeoff between algorithm performance and computational efficiency. More specifically, when L is larger than 16, the MIGD values for most test instances are only slightly improved; however, more historical knowledge needs to be evaluated for matching the current environment, bringing higher computational cost. Therefore, L = 16 is a good choice for the following experiments.

D. Effectiveness of Knowledge Extraction and Update

In order to verify the effectiveness of the knowledge extraction and update mechanism, two comparative counterparts, named KTS-S2 and KTS-S3, are introduced. KTS-S2 transfers the Pareto-optima obtained from the last environment to the current one without constructing a knowledge pool. Different from it, KTS-S3 employs the same knowledge extraction technique as KTS-DMOEA, but updates the knowledge pool by deleting the earliest knowledge stored in it. Here, the KTS-DMOEA proposed in this article is renamed KTS-S1.

The significance test of MIGD on all test instances for the three comparative algorithms is listed in Table I, and the corresponding statistical results of MIGD, MHV, and MIGD+ on DMOPs with regular and random environmental changes are compared in Tables S1–S6 of the Supplementary Material, respectively. Intuitively, KTS-S1 achieves the best performance under both regular and random change. This is because KTS-S1 adopts the knowledge extraction and update mechanism to maintain diverse knowledge in the knowledge pool, which can ensure reliable historical knowledge for knowledge transfer. Compared with KTS-S1, the knowledge pool constructed by KTS-S3 has weaker diversity, leading to worse knowledge transfer efficiency. Regarding KTS-S2, the method shows similar performance to KTS-S3 under regular change, but the worst performance under random change. Apparently, KTS-S2 always employs the knowledge of the last environment for knowledge transfer, whereas this knowledge cannot guarantee good guidance under random environmental changes.

E. Rationality of Hybrid Transfer Mechanism

A hybrid knowledge transfer mechanism is developed to adapt historical Pareto-optima to the current environment. To validate its rationality, two compared versions are constructed, termed KTS-C2 and KTS-C3, respectively. Among them, KTS-C2 only adopts the knowledge reuse strategy for utilizing the selected historical knowledge, while the SDA-IS-based knowledge transfer strategy is employed in KTS-C3. In addition, KTS-DMOEA is renamed KTS-C1 for intuitive comparison.

The significance test of MIGD on all test instances for the three comparative algorithms is listed in Table II, and the corresponding statistical results of MIGD, MHV, and MIGD+ on DMOPs with regular and random environmental changes are compared in Tables S7–S12 of the Supplementary Material, respectively.
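The "+/−/=" labels used throughout these significance tables can be reproduced with a rank-sum test. The sketch below is our illustration rather than the authors' code: it uses the normal approximation of the Wilcoxon rank-sum statistic (tie correction omitted for brevity; with 20 independent runs per algorithm, the approximation is reasonable).

```python
import math

def _ranks(values):
    """1-based ranks of a sequence; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1               # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_label(a, b, z_crit=1.96):
    """'+', '-' or '=' comparison of two samples of MIGD values at the
    0.05 level, via the normal approximation of the rank-sum statistic.
    Smaller MIGD is better, so '+' means sample a is significantly better."""
    n1, n2 = len(a), len(b)
    ranks = _ranks(list(a) + list(b))
    w = sum(ranks[:n1])                     # rank sum of sample a
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    if z < -z_crit:
        return "+"                          # a significantly smaller (better)
    if z > z_crit:
        return "-"
    return "="
```

In practice a library routine such as scipy's rank-sum test would be used instead; the pure-Python version above only makes the labeling rule explicit.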

Fig. 4. MIGD values of KTS-DMOEA with different L on regular and random environmental changes.

Fig. 5. Running time (in seconds) on all benchmark problems.

It is clear that KTS-C1 obtains the most competitive performance on more test instances than the two compared counterparts, and shows a promising tradeoff between transfer efficiency and time cost, as seen from Table II and Fig. 5. In contrast, KTS-C2 consumes the least time to transfer knowledge because historical knowledge is directly reused to generate a new initial population. This technique is only suitable for the situation in which the historical and current environments are significantly similar. For DMOPs with distinct changes, it may result in low efficiency, or even invalid transfer for a new environment, obtaining worse performance. By comparison, KTS-C3 is superior to KTS-C2 on MIGD, especially for DMOPs with random change. However, the most expensive computational cost is paid for adapting historical Pareto-optima to the current search space, due to the time-consuming SDA-IS-based transfer strategy.

To sum up, the proposed hybrid transfer mechanism adaptively selects the most appropriate transfer technique in terms of the similarity among environments, and the number of times each of the two response mechanisms is employed in solving DMOPs is depicted in Fig. S1 of the Supplementary Material. For DMOPs with regular change, the number of environments with similar POSs grows with increasing severity of change, and the knowledge reuse technique is mainly employed due to its high efficiency in generating the initial population under a similar environment. Different from that, POSs for DMOPs with random change normally have distinct distributions. In this case, the SDA-IS-based transfer strategy is mainly performed to initialize the population. In this way, the hybrid mechanism achieves a tradeoff between promising transfer efficiency and computational efficiency.

F. Performance Comparisons Among Various Algorithms on DMOPs With Regular and Random Environmental Changes


TABLE I. SIGNIFICANCE TEST OF MIGD ON ALL TEST INSTANCES

TABLE II. SIGNIFICANCE TEST OF MIGD ON ALL TEST INSTANCES

In this section, KTS-DMOEA is compared with the five popular change response strategies on DMOPs with regular and random environmental change. Tables III and IV present the significance tests on all benchmarks for DMOPs with regular and random environmental changes, respectively. In addition, the statistical results of MIGD and MHV for DMOPs with regular environmental change are listed in Tables S13 and S14 of the Supplementary Material, whereas those for DMOPs with random environmental change are shown in Tables S16 and S17. To further investigate algorithm performance, the MIGD+ values of the Pareto-optima for DMOPs with regular and random changes are recorded in Tables S15 and S18 of the Supplementary Material, respectively.

We observe from the experimental results that the KTS-DMOEA proposed in this article is superior to the other competitors on MIGD, MHV, and MIGD+ on about 40 to 59 test instances, showing especially competitive robustness in solving different change patterns of DMOPs. By way of contrast, MMTL-DMOEA outperforms the others on DF3 and DF9, in which the decision variables are correlated. The transfer strategy of MMTL-DMOEA explores the correlation between decision variables from the source information, with the purpose of seeking the most appropriate transferred individuals in manifold space.

PPS and KF, as typical prediction-based response techniques, obtain better MIGD, MHV, and MIGD+ values on DMOPs whose POSs translate over time under regular environmental changes, e.g., DF5, DF6, and DF14. Conversely, they show poorer performance in solving benchmarks with random changes, due to their more significant prediction errors. Unlike the other comparative algorithms, D-NSGA-II-A and D-NSGA-II-B have the most competitive performance on DMOPs with fixed POSs, i.e., dMOP1 and FDA2. Apparently, building a predictive or transfer model based on similar Pareto-optima under various environments is inefficient, but the simple replacement strategies of D-NSGA-II provide abundant historical information for generating a diverse initial population in the current environment, especially for dMOP1 and FDA2 with random environmental change.


TABLE III. SIGNIFICANCE TEST ON ALL TEST INSTANCES FOR DMOPS WITH REGULAR ENVIRONMENTAL CHANGE

To further investigate algorithm performance, Figs. S2 and S3 of the Supplementary Material depict the IGD values of the initial populations obtained by all comparative methods under each regularly and randomly changed environment, respectively. Apparently, under the first several environments, the historical knowledge stored in the knowledge pool generally has weak diversity, and the pool may not provide promising source knowledge for transfer. In this case, the transfer strategy may produce an initial population far away from the true POS of the new environment, leading to worse evolutionary efficiency. Intuitively, no matter which kind of environmental change occurs, the initial populations generated by KTS-DMOEA have very competitive and relatively stable IGD values under changing environments, which demonstrates the robustness of the proposed knowledge transfer strategy.

G. Analysis of Running Time

To further analyze computational efficiency, the average running time and the computational complexity of all comparative algorithms on various benchmarks are compared in Tables V and VI, respectively. Apparently, KTS-DMOEA consumes less time and computational cost for the evolution than MMTL-DMOEA, due to the fewer evaluations required by the proposed knowledge transfer strategy. In contrast, even fewer computational costs are paid by the other four comparative algorithms. Especially, D-NSGA-II-A and D-NSGA-II-B do not need to build transfer or prediction models to generate an initial population at a new time step, thus bringing the best computational efficiency. To sum up, KTS-DMOEA shows a competitive tradeoff between performance and computational efficiency in solving DMOPs.

H. Algorithm Performance on Different Dimensions of Decision Variables

In this section, KTS-DMOEA is further investigated on different dimensions of decision variables as n varies from 10 to 30 in steps of 4. All experimental results are summarized in the Supplementary Material: Tables S19–S21 list the MIGD, MHV, and MIGD+ values under regular change, and Tables S22–S24 record those under random change, respectively.


TABLE IV. SIGNIFICANCE TEST ON ALL TEST INSTANCES FOR DMOPS WITH RANDOM ENVIRONMENTAL CHANGE

As observed from the experimental results, KTS-DMOEA generally achieves the best performance when n = 10. More specifically, with an increasing number of decision variables, the search space of a multiobjective optimization problem becomes larger, increasing the difficulty of solving the problem and lowering the evolutionary efficiency.

I. Sensitivity of Population Size on Three-Objective DMOPs

In DMOPs, the population size on bi-objective test instances is generally set to 100, whereas the population size on three-objective ones varies from 100 to about 200, e.g., 150, 200, and 210. To investigate the algorithm performance under different population sizes on three-objective DMOPs, N varies from 150 to 210 in steps of 20, and the statistical results on DMOPs with regular and random environmental changes are compared in Tables S25 and S26 of the Supplementary Material. As observed from the experimental results, algorithm performance increases with population size. To sum up, N = 190 is a good choice to achieve a promising tradeoff between algorithm performance and computational efficiency.

V. CONCLUSION

In this article, we develop a knowledge guided transfer strategy-based DMOEA, termed KTS-DMOEA, for solving DMOPs with both regular and random environmental changes. After defining knowledge as a two-tuple, historical knowledge is extracted and saved into a knowledge pool. In order to maintain diverse knowledge in the pool, a knowledge update strategy is developed. Among the historical knowledge, there exists a most appropriate one that can promote highly efficient positive transfer from historical to current environments. Thus, a knowledge matching strategy is designed to determine the most valuable historical knowledge by re-evaluating its representatives under the new environment. Following that, the SDA-IS-based knowledge transfer mechanism is introduced. By integrating it with the knowledge reuse mechanism, a hybrid transfer strategy is constructed to adaptively select the most suitable one in terms of the similarity degree of the transferred knowledge to the current environment, and then generate a new initial population.
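The hybrid strategy summarized above can be sketched as follows. This is a minimal numpy illustration under a loud assumption: since Eq. (5) lies outside this excerpt, the mapping matrix is stood in for by plain subspace alignment, M_S = S_S S_S^T S_T S_T^T (project onto the source PCA subspace, align it with the target subspace, and map back to decision space); the real SDA-IS mapping additionally aligns the subspace distributions.

```python
import numpy as np

def pca_basis(rows, d):
    """Top-d principal directions (as columns) of a row-sample matrix."""
    centered = rows - rows.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d].T                           # shape (n, d)

def hybrid_initial_population(best_k, dif, random_pop, target_pop, d):
    """Sketch of Algorithm 4: reuse Best_K(t) directly when the matched
    environment is nearly identical; otherwise adapt it through a
    subspace-alignment style mapping before mixing."""
    if dif <= 1e-4:                           # knowledge reuse branch
        return np.vstack([random_pop, best_k])
    ss = pca_basis(best_k, d)                 # source subspace S_S
    st = pca_basis(target_pop, d)             # target subspace S_T
    ms = ss @ ss.T @ st @ st.T                # assumed stand-in for Eq. (5)
    transferred = best_k @ ms                 # x' = x . M_S, Eq. (14)
    return np.vstack([transferred, best_k])
```

Either branch returns N individuals: 0.5N reused Pareto-optima plus 0.5N random or transferred ones.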


TABLE V. AVERAGE RUNNING TIME OF COMPARATIVE ALGORITHMS (UNIT: SECONDS)

TABLE VI. COMPUTATIONAL COMPLEXITY OF COMPARATIVE ALGORITHMS

Experimental results on 20 benchmark functions show that the knowledge extraction and update strategy provides a more diverse pool that promotes more efficient knowledge transfer, and that the hybrid transfer strategy is capable of adapting historical knowledge to the current search space, speeding up convergence. Furthermore, the statistical results indicate that KTS-DMOEA outperforms five state-of-the-art DMOEAs, achieving good versatility in solving various change patterns of DMOPs. In the future, multiple kinds of environmental knowledge, such as the distribution and location of POSs or POFs, could simultaneously assist the TL mechanism to explore the future environment, which may be a potential research direction. In addition, it is also meaningful to investigate dynamic constrained multiobjective optimization problems and dynamic large-scale multiobjective optimization problems in the future.

REFERENCES

[1] R. Azzouz, S. Bechikh, and L. B. Said, "Dynamic multi-objective optimization using evolutionary algorithms: A survey," in Adaptation, Learning, and Optimization. Cham, Switzerland: Springer, 2017, pp. 31–70.
[2] L. T. Bui, Z. Michalewicz, E. Parkinson, and M. B. Abello, "Adaptation in dynamic environments: A case study in mission planning," IEEE Trans. Evol. Comput., vol. 16, no. 2, pp. 190–209, Apr. 2012.
[3] H. Zhang, J. Ding, M. Jiang, K. C. Tan, and T. Chai, "Inverse Gaussian process modeling for evolutionary dynamic multiobjective optimization," IEEE Trans. Cybern., vol. 52, no. 10, pp. 11240–11253, Oct. 2022.
[4] A. Zhou, B.-Y. Qu, H. Li, S.-Z. Zhao, P. N. Suganthan, and Q. Zhang, "Multiobjective evolutionary algorithms: A survey of the state of the art," Swarm Evol. Comput., vol. 1, no. 1, pp. 32–49, Mar. 2011.
[5] L. Li, Q. Lin, S. Liu, D. Gong, C. A. C. Coello, and Z. Ming, "A novel multi-objective immune algorithm with a decomposition-based clonal selection," Appl. Soft Comput., vol. 81, Aug. 2019, Art. no. 105490.
[6] K. Zhang, C. Shen, X. Liu, and G. G. Yen, "Multiobjective evolution strategy for dynamic multiobjective optimization," IEEE Trans. Evol. Comput., vol. 24, no. 5, pp. 974–988, Oct. 2020.
[7] M. Farina, K. Deb, and P. Amato, "Dynamic multiobjective optimization problems: Test cases, approximations, and applications," IEEE Trans. Evol. Comput., vol. 8, no. 5, pp. 425–442, Oct. 2004.
[8] J. Zhou, J. Zou, S. Yang, G. Ruan, J. Ou, and J. Zheng, "An evolutionary dynamic multi-objective optimization algorithm based on center-point prediction and sub-population autonomous guidance," in Proc. IEEE Symp. Series Comput. Intell., 2018, pp. 2148–2154.
[9] D. Gong, B. Xu, Y. Zhang, Y. Guo, and S. Yang, "A similarity-based cooperative co-evolutionary algorithm for dynamic interval multiobjective optimization problems," IEEE Trans. Evol. Comput., vol. 24, no. 1, pp. 142–156, Feb. 2020.
[10] J. K. Kordestani, A. E. Ranginkaman, M. R. Meybodi, and P. Novoa-Hernández, "A novel framework for improving multi-population algorithms for dynamic optimization problems: A scheduling approach," Swarm Evol. Comput., vol. 44, pp. 788–805, Feb. 2019.
[11] L. Feng et al., "Solving generalized vehicle routing problem with occasional drivers via evolutionary multitasking," IEEE Trans. Cybern., vol. 51, no. 6, pp. 3171–3184, Jun. 2021.
[12] M. Jiang, Z. Huang, L. Qiu, W. Huang, and G. G. Yen, "Transfer learning-based dynamic multiobjective optimization algorithms," IEEE Trans. Evol. Comput., vol. 22, no. 4, pp. 501–514, Aug. 2018.
[13] M. Jiang, Z. Wang, L. Qiu, S. Guo, X. Gao, and K. C. Tan, "A fast dynamic evolutionary multiobjective algorithm via manifold transfer learning," IEEE Trans. Cybern., vol. 51, no. 7, pp. 3417–3428, Jul. 2021.
[14] M. Jiang, Z. Wang, S. Guo, X. Gao, and K. C. Tan, "Individual-based transfer learning for dynamic multiobjective optimization," IEEE Trans. Cybern., vol. 51, no. 10, pp. 4968–4981, Oct. 2021.
[15] C. Rossi, M. Abderrahim, and J. C. Díaz, "Tracking moving optima using Kalman-based predictions," Evol. Comput., vol. 16, no. 1, pp. 1–30, Mar. 2008.
[16] M. R. Behnamfar, H. Barati, and M. Karami, "Multi-objective antlion algorithm for short-term hydro-thermal self-scheduling with uncertainties," IETE J. Res., to be published.
[17] S. Yang, H. Cheng, and F. Wang, "Genetic algorithms with immigrants and memory schemes for dynamic shortest path routing problems in mobile ad hoc networks," IEEE Trans. Syst. Man, Cybern. C, Appl. Rev., vol. 40, no. 1, pp. 52–63, Jan. 2010.
[18] L. Zhou et al., "Solving dynamic vehicle routing problem via evolutionary search with learning capability," in Proc. IEEE Congr. Evol. Comput., 2017, pp. 890–896.


[19] K. Trojanowski and Z. Michalewicz, "Evolutionary algorithms for non-stationary environments," in Proc. 8th Work. Intell. Inf. Syst., 1999, pp. 229–240.
[20] W. Kong, T. Chai, S. Yang, and J. Ding, "A hybrid evolutionary multiobjective optimization strategy for the dynamic power supply problem in magnesia grain manufacturing," Appl. Soft Comput., vol. 13, no. 5, pp. 2960–2969, May 2013.
[21] P. Rohlfshagen and X. Yao, "Evolutionary dynamic optimization: Challenges and perspectives," in Studies in Computational Intelligence, vol. 490. Heidelberg, Germany: Springer, 2013, pp. 65–84.
[22] T. T. Nguyen, S. Yang, J. Branke, and X. Yao, "Evolutionary dynamic optimization: Methodologies," in Studies in Computational Intelligence, vol. 490. Heidelberg, Germany: Springer, 2013, pp. 39–64.
[23] L. Feng et al., "Evolutionary multitasking via explicit autoencoding," IEEE Trans. Cybern., vol. 49, no. 9, pp. 3457–3470, Sep. 2019.
[24] M. Jiang, L. Qiu, Z. Huang, and G. G. Yen, "Dynamic multi-objective estimation of distribution algorithm based on domain adaptation and nonparametric estimation," Inf. Sci., vol. 435, pp. 203–223, Apr. 2018.
[25] J. G. O. L. Duhain and A. P. Engelbrecht, "Towards a more complete classification system for dynamically changing environments," in Proc. IEEE Congr. Evol. Comput., 2012, pp. 1–8.
[26] D. Yazdani, R. Cheng, D. Yazdani, J. Branke, Y. Jin, and X. Yao, "A survey of evolutionary continuous dynamic optimization over two decades—Part B," IEEE Trans. Evol. Comput., vol. 25, no. 4, pp. 630–650, Aug. 2021.
[27] M. Greeff and A. P. Engelbrecht, "Solving dynamic multi-objective problems with vector evaluated particle swarm optimisation," in Proc. IEEE Congr. Evol. Comput., 2008, pp. 2917–2924.
[28] C.-K. Goh and K. C. Tan, "A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization," IEEE Trans. Evol. Comput., vol. 13, no. 1, pp. 103–127, Feb. 2009.
[29] K. Deb, N. U. B. Rao, and S. Karthik, "Dynamic multi-objective optimization and decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling," in Evolutionary Multi-Criterion Optimization. Berlin, Heidelberg: Springer, 2007, pp. 803–817.
[30] S. Sahmoud and H. R. Topcuoglu, "Sensor-based change detection schemes for dynamic multi-objective optimization problems," in
[42] C. Wang, G. G. Yen, and M. Jiang, "A grey prediction-based evolutionary algorithm for dynamic multiobjective optimization," Swarm Evol. Comput., vol. 56, Aug. 2020, Art. no. 100695.
[43] L. Cao, L. Xu, E. D. Goodman, C. Bao, and S. Zhu, "Evolutionary dynamic multiobjective optimization assisted by a support vector regression predictor," IEEE Trans. Evol. Comput., vol. 24, no. 2, pp. 305–319, Apr. 2020.
[44] R. Rambabu, P. Vadakkepat, K. C. Tan, and M. Jiang, "A mixture-of-experts prediction framework for evolutionary dynamic multiobjective optimization," IEEE Trans. Cybern., vol. 50, no. 12, pp. 5099–5112, Dec. 2020.
[45] Y. Guo, H. Yang, M. Chen, J. Cheng, and D. Gong, "Ensemble prediction-based dynamic robust multi-objective optimization methods," Swarm Evol. Comput., vol. 48, pp. 156–171, Aug. 2019.
[46] I. Hatzakis and D. Wallace, "Dynamic multi-objective optimization with evolutionary algorithms," in Proc. 8th Annu. Conf. Genet. Evol. Comput., vol. 27, 2006, p. 1201.
[47] M. Jiang, Z. Wang, H. Hong, and G. G. Yen, "Knee point-based imbalanced transfer learning for dynamic multiobjective optimization," IEEE Trans. Evol. Comput., vol. 25, no. 1, pp. 117–129, Feb. 2021.
[48] L. Feng, W. Zhou, W. Liu, Y.-S. Ong, and K. C. Tan, "Solving dynamic multiobjective problem via autoencoding evolutionary search," IEEE Trans. Cybern., vol. 52, no. 5, pp. 2649–2662, May 2022.
[49] B. Sun and K. Saenko, "Subspace distribution alignment for unsupervised domain adaptation," in Proc. Brit. Mach. Vis. Conf., 2015, pp. 24.1–24.10.
[50] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Trans. Evol. Comput., vol. 23, no. 3, pp. 361–375, Jun. 2019.
[51] Q. Zhang, A. Zhou, and Y. Jin, "RM-MEDA: A regularity model-based multiobjective estimation of distribution algorithm," IEEE Trans. Evol. Comput., vol. 12, no. 1, pp. 41–63, Feb. 2008.
[52] S. Jiang, S. Yang, X. Yao, K. Tan, M. Kaiser, and N. Krasnogor, "Benchmark problems for CEC2018 competition on dynamic multiobjective optimisation," in Proc. CEC Competition, 2018, pp. 1–18.
[53] S. Yang, T. T. Nguyen, and C. Li, "Evolutionary dynamic optimization: Test and evaluation environments," in Studies in Computational
Proc. IEEE Symp. Series Comput. Intell., 2016, pp. 1–8. Intelligence, vol. 490. Heidelberg, Germany: Springer, 2013, pp. 3–37.
[31] Z. Peng, J. Zheng, and J. Zou, “A population diversity maintaining strat- [54] S. Zeng, R. Jiao, C. Li, X. Li, and J. S. Alkasassbeh, “A general frame-
egy based on dynamic environment evolutionary model for dynamic work of dynamic constrained multiobjective evolutionary algorithms for
multiobjective optimization,” in Proc. IEEE Congr. Evol. Comput., 2014, constrained optimization,” IEEE Trans. Cybern., vol. 47, no. 9, pp. 1–11,
pp. 274–281. Sep. 2017.
[32] X. Xu, Y. Tan, W. Zheng, and S. Li, “Memory-enhanced dynamic multi- [55] L. While, P. Hingston, L. Barone, and S. Huband, “A faster algorithm
objective evolutionary algorithm based on LP decomposition,” Appl. Sci., for calculating hypervolume,” IEEE Trans. Evol. Comput., vol. 10, no. 1,
vol. 8, no. 9, p. 1673, Sep. 2018. pp. 29–38, Feb. 2006.
[33] R. Azzouz, S. Bechikh, and L. B. Said, “A dynamic multi-objective [56] H. Ishibuchi, H. Masuda, Y. Tanigaki, and Y. Nojima, “Modified
evolutionary algorithm using a change severity-based adaptive popula- distance calculation in generational distance and inverted genera-
tion management strategy,” Soft Comput., vol. 21, no. 4, pp. 885–906, tional distance,” in Evolutionary Multi-Criterion Optimization (Lecture
Feb. 2017. Notes in Computer Science 9019). Cham, Switzerland: Springer, 2015,
[34] R. Chen, K. Li, and X. Yao, “Dynamic multiobjectives optimization with pp. 110–125.
a changing number of objectives,” IEEE Trans. Evol. Comput., vol. 22, [57] J. Derrac, S. García, D. Molina, and F. Herrera, “A practical tutorial on
no. 1, pp. 157–171, Feb. 2018. the use of nonparametric statistical tests as a methodology for comparing
[35] S. Sahmoud and H. R. Topcuoglu, “A memory-based NSGA-II algorithm evolutionary and swarm intelligence algorithms,” Swarm Evol. Comput.,
for dynamic multi-objective optimization problems,” in Applications of vol. 1, no. 1, pp. 3–18, Mar. 2011.
Evolutionary Computation (Lecture Notes in Computer Science). Cham, [58] R. Jiao, S. Zeng, C. Li, and Y.-S. Ong, “Two-type weight adjustments
Switzerland: Springer, 2016, pp. 296–310. in MOEA/D for highly constrained many-objective optimization,” Inf.
[36] Z. Liang, S. Zheng, Z. Zhu, and S. Yang, “Hybrid of memory and Sci., vol. 578, pp. 592–614, Nov. 2021.
prediction strategies for dynamic multiobjective optimization,” Inf. Sci.,
vol. 485, pp. 200–218, Jun. 2019.
Yinan Guo (Member, IEEE) received the B.E. degree in automation and the Ph.D. degree in control theory and control engineering from the China University of Mining and Technology, Xuzhou, China, in 1997 and 2003, respectively.

She is currently a Professor of Computational Intelligence and Machine Learning with the School of Information and Control Engineering, China University of Mining and Technology, and also with the School of Mechanical Electronic and Information Engineering, China University of Mining and Technology (Beijing), Beijing, China. She has more than 90 publications. Her current research interests include computational intelligence in dynamic and uncertain optimization and its applications in scheduling, path planning, and big data processing, as well as class imbalance learning and its applications in fault diagnosis.

Authorized licensed use limited to: BEIHANG UNIVERSITY. Downloaded on January 24,2024 at 06:21:21 UTC from IEEE Xplore. Restrictions apply.
1764 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 27, NO. 6, DECEMBER 2023

Guoyu Chen received the B.E. degree from the Shandong Technology and Business University, Yantai, China, in 2017, and the M.E. degree from Nanchang Hangkong University, Nanchang, China, in 2020. He is currently pursuing the Ph.D. degree with the School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China.

His main research interests include dynamic multiobjective optimization and dynamic constrained multiobjective optimization.

Dunwei Gong (Member, IEEE) received the B.Sc. degree in mathematics from the China University of Mining and Technology, Xuzhou, China, in 1992, the M.E. degree in automation from Beihang University, Beijing, China, in 1995, and the Ph.D. degree in control theory and control engineering from the China University of Mining and Technology in 1999.

He is currently a Professor of Computational Intelligence and the Director of the Centre for Intelligent Optimization and Control with the School of Information and Control Engineering, China University of Mining and Technology. He has more than 180 publications. His current research interests include computational intelligence in multiobjective optimization, dynamic and uncertain optimization, and applications in software engineering, scheduling, path planning, big data processing, and analysis.

Min Jiang (Senior Member, IEEE) received the bachelor's and Ph.D. degrees in computer science from Wuhan University, Wuhan, China, in 2001 and 2007, respectively.

He was subsequently a Postdoctoral Researcher with the Department of Mathematics, Xiamen University, Xiamen, China, where he is currently a Professor with the Department of Artificial Intelligence. His main research interests are machine learning, computational intelligence, and robotics. He has a special interest in dynamic multiobjective optimization, transfer learning, software development, and the basic theories of robotics.

Prof. Jiang received the Outstanding Reviewer Award from the IEEE Transactions on Cybernetics in 2016. He is the Chair of the IEEE CIS Xiamen Chapter. He is currently serving as an Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems and the IEEE Transactions on Cognitive and Developmental Systems.

Jing Liang (Senior Member, IEEE) received the B.E. degree in automation from the Harbin Institute of Technology, Harbin, China, in 2003, and the Ph.D. degree in electrical and electronic engineering from Nanyang Technological University, Singapore, in 2009.

She is currently a Professor with the School of Electrical Engineering, Zhengzhou University, Zhengzhou, China. Her main research interests are evolutionary computation, swarm intelligence, multiobjective optimization, and neural networks.

Prof. Liang currently serves as an Associate Editor for the IEEE Transactions on Evolutionary Computation and Swarm and Evolutionary Computation.
