A Knowledge Guided Transfer Strategy For Evolutionary Dynamic Multiobjective Optimization
change patterns. The so-called regular change refers to Pareto-optima under adjacent environments that vary with certain rules, such as a parallel mode and specific moving trends, at steady frequency or severity [15], [16]. Previous prediction or memory models mine and utilize implicit knowledge from Pareto-optima changing with the above regular patterns, with the purpose of assisting DMOEAs to explore the current search space with higher efficiency [17], [18]. However, Pareto-optima may change over time in an irregular mode at dynamic frequency or severity in real-world dynamic optimization problems. We call this a random change pattern of DMOPs. Under this pattern, Pareto-optima under two adjacent environments may have weak similarity, especially for a significant severity of change [19]. For example, the power supply in magnesium production needs to be dynamically adjusted so as to maximize the yield and grade of magnesia grain. The granularity and grade of the raw material may fluctuate severely with time, causing the optimal electric current to jump far away from the past one [20]. Apparently, there are insignificant changing rules among dynamic environments in random change, leading to inefficient initialization under a new environment by the prediction mechanism [21]. Similarly, diversity introduction strategies may be an unpromising problem solver for DMOPs with this change pattern, because they rely on the Pareto-optima of the last environment to produce a new initial population, and the diversity generated by mutation or reinitialization only increases the limited convergence pressure [22]. Regarding the TL mechanism, an essential challenge for solving DMOPs with random change is to guarantee the similarity between the source samples and target ones, avoiding the increasing occurrence of negative transfer. Although a few works seek the latent correlation across environments to improve the utilization efficiency of historical knowledge [23], [24], finding appropriate historical knowledge to promote positive transfer under different change patterns is still an open issue.

To further improve the versatility of the TL-based change response technique on DMOPs with both regular and random changes, we develop a knowledge guided transfer strategy (KTS) for dynamic multiobjective evolutionary optimization in this article. In KTS, a knowledge pool is constructed to preserve knowledge extracted from the Pareto-optima of historical environments. As a new environment appears, the knowledge of the most similar environment is found from the pool and transferred to adapt to the current environment, producing an initial population. The threefold contributions of this article can be summarized as follows.

1) Knowledge refers to the Pareto-optima and their implicit features found under each historical environment, and all historical knowledge is preserved in a knowledge pool in chronological order. In order to maintain the diversity of the pool, a knowledge update strategy is designed to adaptively remove redundant historical knowledge in terms of the similarity among them.

2) A knowledge matching strategy is developed to find the most valuable historical knowledge from the knowledge pool. After re-evaluating the representative stored in each historical knowledge under a new environment, the most appropriate knowledge, that is, the one most similar to the current environment, is chosen to promote positive knowledge transfer.

3) A novel hybrid knowledge transfer mechanism is designed, with the purpose of adapting historical knowledge to a new environment with higher computational efficiency and faster evolutionary speed. According to the similarity degree of the selected knowledge in the current environment, we employ either the knowledge reuse technique or the transfer technique based on subspace distribution alignment to generate an initial population.

The remainder of this article is organized as follows. Section II provides the fundamental concepts of dynamic multiobjective optimization and existing DMOEAs. Section III describes the principle of the proposed KTS-based DMOEA. The experimental results of the proposed method and state-of-the-art DMOEAs are compared and further analyzed in Section IV. Finally, Section V concludes the main contributions and gives future works.

II. PRELIMINARY AND RELATED RESEARCH

A. Dynamic Multiobjective Optimization

Without loss of generality, a DMOP can be formulated as follows:

min F(x, t) = (f1(x, t), . . . , fm(x, t))^T
s.t. x ∈ Ω    (1)

where t is the time index, x represents an n-dimensional decision vector in the decision space Ω, fi(x, t), i = 1, . . . , m, refers to the ith objective function at time t, and m is the number of objectives.

Definition 1 (Dynamic Pareto Domination): Suppose x and y are two candidate solutions at time t. x dominates y, represented by x ≺t y, if and only if

fi(x, t) ≤ fi(y, t), ∀i = 1, . . . , m
fi(x, t) < fi(y, t), ∃i = 1, . . . , m.    (2)

Definition 2 (Dynamic Pareto-Optimal Set): Denote POS(t) as the dynamic Pareto-optimal set at time t; all solutions in it are not dominated by any other individuals, satisfying

POS(t) = {x | ¬∃y, y ≺t x}.    (3)

Definition 3 (Dynamic Pareto-Optimal Front): At time t, the dynamic Pareto-optimal front, represented by POF(t), is the objective vector of POS(t)

POF(t) = {F(x, t) | x ∈ POS(t)}.    (4)

DMOPs can be categorized into the following four types in terms of the dynamic characteristics of POF(t) and POS(t) [7].
Type I: POS(t) changes over time, but POF(t) is fixed.
Type II: Both POS(t) and POF(t) change over time.
Type III: POS(t) is fixed, but POF(t) changes over time.
Type IV: Both POS(t) and POF(t) remain unchanged over time.

In DMOPs, each dynamic can produce environmental changes with various characteristics [25], [26].
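As a minimal illustration of Definition 1, the dominance test in (2) can be written down directly. The following Python sketch is ours, not code from this article; the objective vectors are toy values rather than outputs of any benchmark DMOP:

```python
import numpy as np

def dominates(fx: np.ndarray, fy: np.ndarray) -> bool:
    """Return True if x dominates y at time t, given objective vectors
    fx = F(x, t) and fy = F(y, t), following Eq. (2): fx is no worse in
    every objective and strictly better in at least one."""
    return bool(np.all(fx <= fy) and np.any(fx < fy))

# Toy bi-objective vectors (illustrative values only).
fx = np.array([0.2, 0.5])
fy = np.array([0.3, 0.5])
print(dominates(fx, fy))  # True: no worse in both objectives, strictly better in f1
print(dominates(fy, fx))  # False
```

A nondominated filter built on this test yields exactly the set POS(t) of Definition 2.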
Authorized licensed use limited to: BEIHANG UNIVERSITY. Downloaded on January 24,2024 at 06:21:21 UTC from IEEE Xplore. Restrictions apply.
1752 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 27, NO. 6, DECEMBER 2023
Except for the above-mentioned dynamics in Section I, other types of environmental change are also investigated: 1) quasi-static indicates DMOPs with low change severity and frequency; 2) low change severity but high change frequency results in problems with progressive change; 3) abrupt change has high change severity and low change frequency; and 4) chaotic change exhibits both high change severity and frequency.

B. Dynamic Multiobjective Evolutionary Algorithms

To the best of our knowledge, the change response techniques of DMOEAs mainly include diversity introduction, memory strategies, and prediction mechanisms [26].

In diversity introduction strategies [27], [28], solutions that are produced randomly or after mutation replace a certain proportion of individuals to form a new initial population once an environment changes, with the purpose of increasing diversity. Following the above idea, Deb et al. [29] designed two kinds of diversity introduction strategies (i.e., D-NSGA-II-A and D-NSGA-II-B) and compared their performances. These simple mechanisms can prevent the population from falling into local optima [30], but show weak performance in solving DMOPs with severe change. Different from them, the memory mechanism [31], [32], [33] records useful information obtained from past environments and reuses it when an environmental change appears. Chen et al. [34] proposed a dynamic two-archive evolutionary algorithm, in which two co-evolving archives concerned with convergence and diversity, respectively, were complementary to each other via a mating selection mechanism. Also, Sahmoud and Topcuoglu [35] introduced the memory mechanism into NSGA-II. An explicit memory was developed to store all nondominated solutions during the evolution process and provide the ones to be reused for a similar newly appeared environment. Intuitively, how to reuse useful information from memory with the least computational cost is a key challenge of this mechanism [36].

The prediction-based approach [37], [38], [39], as a popular one in DMOEAs, learns the changing trend of the Pareto-optima obtained from historical environments to generate a more rational initial population, with the purpose of speeding up convergence at a new time. Several regression methods, including autoregression [40], the Kalman filter [41], the grey model [42], and support vector regression [43], have been introduced to construct the prediction model. Following that, Rambabu et al. [44] developed a mixture-of-experts-based ensemble framework, in which a gating network was utilized to manage the switching among various predictors in terms of their prediction performances. Also, Guo et al. [45] developed an ensemble prediction model based on three heterogeneous predictors, and three kinds of synthesis strategies were designed to estimate the fitness values of candidates under a new environment. Prediction-based change response techniques show competitive performance in solving DMOPs with regular change. However, there are insignificant changing rules among dynamic environments in random change. In this case, the prediction mechanism may not accurately capture the trend of random change, thus producing an inefficient initial population under a new environment and causing low evolution efficiency [46]. In addition, hybrid techniques generally integrate more than one change response strategy to initialize a population once a new environment appears. Liang et al. [36] proposed a hybrid of memory and prediction strategies, initializing the population at the new time. Once a historical environment was similar to the current one, the memory strategy was employed. Otherwise, a prediction method was performed to generate initial individuals.

Except for the above-mentioned mainstream strategies, the change response technique based on TL has attracted increasing attention in recent years [47], [48]. Jiang et al. [12] put forward solving DMOPs via TL, and a domain adaptation method, called transfer component analysis (TCA), was introduced to construct a transfer model for generating an initial population at the new time. To improve the computational efficiency of this mechanism, a memory-driven manifold TL strategy (MMTL-DMOEA) was presented [13], in which the Pareto-optimal solutions obtained from past environments were stored in an external storage and employed to generate an initial population under a new environment by manifold TL. Following that, the transfer Adaboost (TrAdaboost) method was introduced to construct a transfer model based on some good solutions with better diversity that were filtered out by a presearch strategy [14], with the purpose of overcoming the negative transfer in TL-based DMOEAs. However, constructing an appropriate transfer model needs extra computational cost. Thus, transferring the most valuable historical information to the current environment with higher efficiency is still an open problem.

C. Subspace Distribution Alignment Between Infinite Subspaces

Subspace distribution alignment between infinite subspaces (SDA-IS) [49], as a domain adaptation method, mines the association between the search spaces in historical and current environments. A geodesic flow path is constructed from the source subspace (a historical search space) to the target one (the current search space), and a kernel trick is employed to integrate over an infinite number of subspaces on the path. Following that, the distributions of subspaces are aligned along each part of the geodesic flow kernel.

Suppose that DS = {xS1, . . . , xSN} ⊆ R^S is the source data, and the target data is denoted by DT = {xT1, . . . , xTN} ⊆ R^T. First, PCA is employed to extract the key features of the source and target data, and the corresponding subspaces are formed, represented by SS ∈ R^(n×d) and ST ∈ R^(n×d), respectively. Here, n is the number of decision variables, and d is the dimension of the subspace. In this way, the mapping matrix can be formulated as follows:

MS = [SS U1  RS U2] [Λ1 Λ2; Λ2 Λ3] ATS [SS U1  RS U2]^T.    (5)

In (5), RS is the orthogonal complement of SS, and U1 and U2 are the orthogonal matrices of SS^T ST and RS^T ST given by singular value decomposition (SVD). Λ1, Λ2, and Λ3 are diagonal matrices based on the principal angles θ between SS and ST; their diagonal elements are λ1i = 1 + sin(2θi)/(2θi), λ2i = (cos(2θi) − 1)/(2θi), and λ3i = 1 − sin(2θi)/(2θi), respectively.
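To make the subspace construction above concrete, the following sketch (ours, not code from [49]) builds the d-dimensional PCA subspaces SS and ST from random stand-in data and recovers the principal angles θ as the arccosines of the singular values of SS^T ST:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, d = 5, 40, 2          # decision variables, samples, subspace dimension

# Stand-in source/target data matrices (rows are solutions in decision space).
DS = rng.normal(size=(N, n))
DT = rng.normal(size=(N, n)) + 0.5

def pca_subspace(D: np.ndarray, d: int) -> np.ndarray:
    """Top-d principal directions of the centered data, as an n x d basis."""
    X = D - D.mean(axis=0)
    # Right singular vectors of the centered data span the principal axes.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:d].T

SS = pca_subspace(DS, d)    # S_S in R^{n x d}
ST = pca_subspace(DT, d)    # S_T in R^{n x d}

# cos(theta_i) are the singular values of S_S^T S_T (the principal angles).
sv = np.linalg.svd(SS.T @ ST, compute_uv=False)
theta = np.arccos(np.clip(sv, -1.0, 1.0))
print(theta)                # d principal angles, each in [0, pi/2]
```

The angles θ are what parameterize the diagonal matrices Λ1, Λ2, and Λ3 in (5); the full SDA-IS mapping additionally requires the alignment matrix ATS, which is omitted here.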
GUO et al.: KTS FOR EVOLUTIONARY DYNAMIC MULTIOBJECTIVE OPTIMIZATION 1753
Algorithm 1 Framework of KTS-DMOEA
Input: Dynamic optimization function F(x, t), population size N
Output: A series of approximated populations P
1: Set time step t = 0, knowledge pool KP = ∅;
2: Initialize a population P0 with size N;
3: while termination criterion not met
4:   if an environmental change occurs
5:     t = t + 1;
6:     KP = KnowledgeExtractionAndUpdate(Pt−1, KP);
7:     Best_K(t), Dif = KnowledgeMatching(KP, F(x, t));
8:     Pt = KnowledgeTransfer(Best_K(t), Dif);
9:   end if
10:  Perform MOEA;
11: end while

Once an environmental change occurs, the proposed KTS is performed to generate a new initial population.

The core of KTS-DMOEA lies in KTS. Fig. 1 depicts its workflow. Under each environment, a part of the POS is selected and stored into a knowledge pool as that environment's specific knowledge. In order to maintain the diversity of knowledge in the pool, a redundant item is deleted when the capacity of the pool reaches its maximum. Once an environmental change occurs, the knowledge matching strategy is triggered to seek the best knowledge from the pool, that is, the knowledge whose corresponding environment is most similar to the current one. Following that, the selected knowledge is transferred into the current environment via a hybrid transfer strategy, forming a new initial population. Intuitively, the three key issues of KTS are knowledge extraction and update, the knowledge matching strategy, and the knowledge transfer mechanism.
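The control flow of Algorithm 1 can be sketched as a plain Python loop. All components below are placeholders of our own (the actual strategies are defined in the following subsections); only the framework's structure is shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Placeholder components; the paper's actual strategies go here. ---
def knowledge_extraction_and_update(pop, KP):
    KP.append(pop.copy())          # stand-in for Algorithm 2
    return KP

def knowledge_matching(KP, t):
    return KP[-1], 0.0             # stand-in: simply pick the latest knowledge

def knowledge_transfer(best_k, dif):
    # Stand-in for the hybrid transfer: perturb the selected knowledge.
    return best_k + rng.normal(scale=0.01, size=best_k.shape)

def run_moea_one_step(pop):
    return pop                     # stand-in for the static optimizer (NSGA-II)

def environment_changed(step):
    return step > 0 and step % 10 == 0   # toy change schedule

# --- Framework of KTS-DMOEA (Algorithm 1) ---
N, n_var, t = 8, 3, 0
pop = rng.random((N, n_var))       # initial population P0
KP = []                            # knowledge pool
for step in range(30):             # termination criterion: a step budget
    if environment_changed(step):
        t += 1
        KP = knowledge_extraction_and_update(pop, KP)
        best_k, dif = knowledge_matching(KP, t)
        pop = knowledge_transfer(best_k, dif)
    pop = run_moea_one_step(pop)
print(t, len(KP))  # → 2 2: two changes (steps 10 and 20) handled, pool holds 2 items
```

The three stubbed functions correspond one-to-one to lines 6–8 of Algorithm 1, which the next subsections instantiate.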
Algorithm 2 Knowledge Extraction and Update
Input: The Pareto-optima POS at time t−1, knowledge pool KP
Output: Updated knowledge pool KP
1: Set Q = ∅;
2: /* Extracting knowledge */
3: for i = 1 to m
4:   Find the ith extreme solution by Eqs. (7) and (8);
5:   Save the extreme solution into Q;
6: end for
7: while |Q| < 0.5N
8:   Update POS = POS − Q;
9:   Identify a reference vector v by Eqs. (9) and (10);
10:  Find an elite solution xq by Eq. (11);
11:  Save solution xq into Q;
12: end while
13: Construct knowledge K = ⟨Q, C⟩;
14: /* Updating knowledge */
15: Store K into knowledge pool KP;
16: if |KP| > L
17:   Calculate the crowding distance Dis of each knowledge item by Eq. (12);
18:   Delete the most redundant knowledge item having the minimum Dis;
19: end if

Here, the jth element of the centroid C of Q in the decision space is

μj = (1/|Q|) Σ_{i=1}^{|Q|} xij,  xi ∈ Q.    (6)

To the best of our knowledge, each axis can be defined as a reference vector to find an extreme solution. Thus, the number of extreme solutions is equal to that of the objectives [50]. Assuming that there are m objectives in an environment, the extreme solution along the lth dimension is found by minimizing the achievement scalarizing function (ASF). Let wl = (wl1, . . . , wlm)^T be the reference vector of the lth axis; the ASF is described as follows:

ASF(xi, wl) = max_k fk(xi)/wlk    (7)

where fk(xi) represents the kth objective value of the ith solution in POS(t), k = 1, . . . , m, and wlk is defined as follows:

wlk = 1 if l = k, and wlk = 10^−6 if l ≠ k.    (8)

A solution that has the minimal ASF value is selected from POS(t), namely, xi with i = arg min ASF(xi, wl) ∀xi ∈ POS(t), and saved to Q as an extreme solution in this historical environment. Subsequently, m extreme solutions are found, and the remaining Pareto-optimal solutions form a Pareto-optimal subset, represented by POS(t) = POS(t) − Q.

Following that, an angle-based decomposition method is introduced to select Pareto-optimal solutions from POS(t) iteratively and form a Q with good distribution and convergence, with the purpose of preserving more valuable knowledge under the limited storage capacity. First, the angle between two individuals is defined as follows:

θ(xi, xp) = min arccos⟨f(xi), f(xp)⟩,  xi ∈ Q.    (9)

The solution xp ∈ POS(t) that has the maximal angle with xi is found to construct a reference vector, denoted as v = f(xp)

p = arg max θ(xi, xp),  xp ∈ POS(t).    (10)

Second, an elite solution xq that has the minimal ASF with respect to v is removed from POS(t) and saved to Q

q = arg min ASF(xi, v),  xi ∈ POS(t).    (11)

The above two steps are repeated until the size of Q reaches 0.5N. It is worth mentioning that the first solution selected by the angle-based decomposition method is the center one, because it has a larger angle to the extreme solutions than the others and thus lies in the center of the POF. Therefore, 0.5N Pareto-optima and a mean vector are recorded in each item of historical knowledge. Apparently, under limited storage space, the knowledge extraction strategy ensures that the knowledge pool preserves more diverse historical knowledge.

2) Updating Knowledge: Knowledge extracted from each environment is preserved in a knowledge pool, represented by KP. Denoting L as its maximum size, KP is updated when the number of knowledge items exceeds L. In order to maintain the diversity of KP, knowledge is evaluated at the centroid level, and the items with significant differences are retained. Define the similarity of knowledge as the crowding distance among the centroids C of the various knowledge items, represented by Dis

Disi = min D(Ci, Cj) / max D(Ci, Cj),  j ≠ i, j = 1, . . . , |KP|    (12)

where min D(Ci, Cj) and max D(Ci, Cj) represent the minimum and maximum Euclidean distances between Ci and the others, respectively. Apparently, a smaller Dis means that the corresponding knowledge provides similar but redundant information for the evolution under subsequent environments, leading to worse diversity of the knowledge pool. Thus, the knowledge having the minimum Dis will be removed from KP. That is, KP = KP\Kr, r = arg min Disi, i = 1, . . . , |KP|.

B. Knowledge Matching

Knowledge that is acquired from a similar historical environment may provide valuable guidance for the evolution under the current environment and effectively avoid negative transfer. Thus, how to find the most appropriate knowledge from the knowledge pool in terms of the similarity between historical and current environments is a key issue for high-efficiency knowledge transfer.

Without loss of generality, two environments that appear in a DMOP are judged to be similar when the corresponding POS and POF approximate each other [32]. Following this assumption, a knowledge matching strategy is designed to find the most valuable historical knowledge, belonging to the environment that is most similar to the current one. Its evaluation process is presented in Algorithm 3.
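The extraction and update computations of Section III-A, Eqs. (6)–(12), can be sketched compactly in NumPy. This is our own simplified illustration, not the paper's implementation: the function names (`asf`, `extract_knowledge`, `crowding_dis`), the toy linear front, and the tie handling are all assumptions, and the stand-in decision vectors are arbitrary:

```python
import numpy as np

def asf(F, w):
    """Achievement scalarizing function of Eq. (7): max_k f_k / w_k."""
    return np.max(F / w, axis=-1)

def extract_knowledge(X, F, half_n):
    """Sketch of the extraction step of Algorithm 2: pick one extreme
    solution per objective by Eqs. (7)-(8), then fill Q up to half_n
    solutions with the angle-based decomposition of Eqs. (9)-(11)."""
    m = F.shape[1]
    remaining = list(range(len(X)))
    chosen = []
    for l in range(m):                       # extreme solution along axis l
        w = np.full(m, 1e-6); w[l] = 1.0     # weights of Eq. (8)
        i = remaining[int(np.argmin(asf(F[remaining], w)))]
        chosen.append(i); remaining.remove(i)
    while len(chosen) < half_n and remaining:
        Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
        cos = Fn[remaining] @ Fn[chosen].T                  # pairwise cosines
        ang = np.arccos(np.clip(cos, -1.0, 1.0))            # angles, Eq. (9)
        p = remaining[int(np.argmax(ang.min(axis=1)))]      # Eq. (10)
        v = F[p]                                            # reference vector
        q = remaining[int(np.argmin(asf(F[remaining], v)))] # Eq. (11)
        chosen.append(q); remaining.remove(q)
    Q = X[chosen]
    C = Q.mean(axis=0)                                      # centroid, Eq. (6)
    return Q, C

def crowding_dis(centroids):
    """Eq. (12): ratio of min to max pairwise distance from each centroid."""
    D = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    np.fill_diagonal(D, np.nan)
    return np.nanmin(D, axis=1) / np.nanmax(D, axis=1)

# Toy front f2 = 1 - f1 (illustrative only, not a benchmark POS).
f1 = np.linspace(0.01, 0.99, 20)
F = np.stack([f1, 1 - f1], axis=1)
X = np.stack([f1, f1], axis=1)           # stand-in decision vectors
Q, C = extract_knowledge(X, F, half_n=6)
print(Q.shape, C.shape)                  # (6, 2) (2,)
```

The knowledge item with the smallest `crowding_dis` value is the one Algorithm 2 would evict from the pool once |KP| > L.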
The best results among the compared algorithms on each test instance are highlighted. Also, the Wilcoxon rank-sum test [57], [58] is employed to point out the significance between different results at the 0.05 significance level, where "+," "−," and "=" indicate that the results obtained by another DMOEA are significantly better than, significantly worse than, and statistically similar to those obtained by the compared algorithm, respectively.

B. Compared Algorithms and Parameter Settings

In order to fully compare with the mainstream change response strategies, five popular DMOEAs are employed for comparison, including D-NSGA-II-A [29], D-NSGA-II-B [29], KF [41], PPS [40], and MMTL-DMOEA [13]. The former two adopt traditional diversity introduction strategies to respond to a new environment. Different from them, prediction approaches are employed in KF and PPS. As for the newly published MMTL-DMOEA, TL was introduced to speed up convergence by utilizing historical POSs. For D-NSGA-II-A, 20% of the nondominated solutions are replaced with randomly generated individuals when a new environment appears. Differently, 20% of the nondominated solutions after mutation substitute for the original ones in D-NSGA-II-B. As for KF, the equal element values of the Q and R diagonal matrices are set to 0.04 and 0.01, respectively. PPS adopts an AR-based prediction model. The order of the AR model p is set to 3, and the length of the history mean point series M is set to 23. The key parameter settings of MMTL-DMOEA refer to [13]. The size of the knowledge pool is L = 16 in KTS-DMOEA. In addition, NSGA-II is employed as the static optimizer under each environment in all compared algorithms for fair comparisons. In NSGA-II, simulated binary crossover (SBX) and polynomial mutation are employed. For SBX, the distribution index and crossover probability are set to ηc = 20 and pc = 1.0, respectively, while ηm = 20 and pm = 1/n for polynomial mutation. Also, the population sizes for bi- and three-objective test instances are set to 100 and 190, respectively.

For all benchmarks, the dimension of the search space is 14. Each run contains 100 environmental changes. It is worth noting that no change takes place in the first 60 generations, so as to minimize the effect of static optimization [40], [41]. There are two types of environmental change in the experiments, i.e., regular change and random change. In order to verify the robustness of the proposed algorithm, three groups of environmental parameters with τt = 10 and nt = 5, 10, 20 are employed for regular change, whereas τt = 5, 10, 20 for random change.

C. Sensitivity Analysis of L

In KTS-DMOEA, L determines how many items of historical knowledge can be stored in the knowledge pool, which has a direct impact on the diversity of knowledge. As L varies from 2 to 30 in steps of 2, Fig. 4 depicts the MIGD values obtained by KTS-DMOEA under regular and random environmental changes. We observe from the statistical results that with the increase of L, the MIGD values become smaller, indicating better algorithm performance, except for dMOP1 and FDA2. To the best of our knowledge, these two benchmark functions both have a fixed POS. That means the true POSs of all environments are the same, and the corresponding Pareto-optima found by KTS-DMOEA are similar. Knowledge stored in the knowledge pool thus has weak diversity, and L has an insignificant impact on algorithm performance. In addition, setting an appropriate L is necessary for achieving a promising tradeoff between algorithm performance and computational efficiency. More specifically, when L is larger than 16, the MIGD values for most test instances are only slightly improved; however, more historical knowledge needs to be evaluated for matching the current environment, bringing higher computational cost. Therefore, L = 16 is a good choice for the following experiments.

D. Effectiveness of Knowledge Extraction and Update

In order to verify the effectiveness of the knowledge extraction and update mechanism, two comparative counterparts, named KTS-S2 and KTS-S3, are introduced. KTS-S2 transfers the Pareto-optima obtained from the last environment to the current one without constructing a knowledge pool. Different from it, KTS-S3 employs the same knowledge extraction technique as KTS-DMOEA, but updates the knowledge pool by deleting the earliest knowledge stored in it. Here, the KTS-DMOEA proposed in this article is renamed KTS-S1.

The significance test of MIGD on all test instances for the three comparative algorithms is listed in Table I, and the corresponding statistical results of MIGD, MHV, and MIGD+ for the comparison algorithms on DMOPs with regular and random environmental changes are compared in Tables S1–S6 of the Supplementary Material, respectively. Intuitively, KTS-S1 achieves the best performance under both regular and random change. This is because KTS-S1 adopts the knowledge extraction and update mechanism to maintain diverse knowledge in the knowledge pool, which can ensure reliable historical knowledge for knowledge transfer. Compared with KTS-S1, the knowledge pool constructed by KTS-S3 has weaker diversity, leading to worse efficiency of knowledge transfer. Regarding KTS-S2, the method shows similar performance to KTS-S3 under regular change, but the worst performance under random change. Apparently, KTS-S2 always employs the knowledge of the last environment for knowledge transfer, whereas this knowledge cannot guarantee good guidance under random environmental changes.

E. Rationality of Hybrid Transfer Mechanism

A hybrid knowledge transfer mechanism is developed to adapt historical Pareto-optima to the current environment. To validate its rationality, two compared versions are constructed, termed KTS-C2 and KTS-C3, respectively. Among them, KTS-C2 only adopts a knowledge reuse strategy for utilizing the selected historical knowledge, while the SDA-IS-based knowledge transfer strategy is employed in KTS-C3. In addition, KTS-DMOEA is renamed KTS-C1 for intuitive comparison.

The significance test of MIGD on all test instances for the three comparative algorithms is listed in Table II, and the corresponding statistical results of MIGD, MHV, and MIGD+ for the comparison algorithms on DMOPs with regular and random environmental changes are compared in Tables S7–S12 of the Supplementary Material, respectively. It is clear that KTS-C1 obtains the most competitive performance under random environmental change.
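The significance-testing protocol described in Section IV-A can be reproduced with SciPy's rank-sum test. The two MIGD samples below are synthetic stand-ins of our own, not values from the paper's tables, and the "+/−/=" marking simply encodes the 0.05-level decision described above (lower MIGD is better):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)
# Synthetic MIGD samples over 20 independent runs (illustrative only).
migd_a = rng.normal(loc=0.010, scale=0.002, size=20)   # algorithm A
migd_b = rng.normal(loc=0.014, scale=0.002, size=20)   # algorithm B

stat, p = ranksums(migd_a, migd_b)
if p >= 0.05:
    mark = "="                      # statistically similar
else:
    # Significant difference: lower mean MIGD is significantly better.
    mark = "+" if np.mean(migd_a) < np.mean(migd_b) else "-"
print(mark, round(p, 4))
```

Running one such test per (algorithm, instance) pair and tallying the marks yields summary rows of the form "+/−/=" as reported in Tables I–IV.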
Fig. 4. MIGD values of KTS-DMOEA with different L on regular and random environmental changes.
TABLE I
SIGNIFICANCE TEST OF MIGD ON ALL TEST INSTANCES

TABLE II
SIGNIFICANCE TEST OF MIGD ON ALL TEST INSTANCES

Tables III and IV present the significance tests of MIGD and MHV on all benchmarks, respectively. In addition, the statistical results of MIGD and MHV for DMOPs with regular environmental change are listed in Tables S13 and S14 of the Supplementary Material, whereas those for DMOPs with random environmental change are shown in Tables S16 and S17 of the Supplementary Material. To further investigate the algorithm performance, the MIGD+ values of the Pareto-optima for DMOPs with regular or random changes are recorded in Tables S15 and S18 of the Supplementary Material, respectively.

We observed from the experimental results that the KTS-DMOEA proposed in this article is superior to the other competitors on MIGD, MHV, and MIGD+ for about 40 to 59 test instances, showing especially competitive robustness in solving different change patterns of DMOPs. By way of contrast, MMTL-DMOEA outperforms the others on DF3 and DF9, in which the variables are correlated. The transfer strategy of MMTL-DMOEA explores the correlation between decision variables from the source information, with the purpose of seeking the most appropriate transferred individuals in the manifold space.

PPS and KF, as typical prediction-based response techniques, obtain better MIGD, MHV, and MIGD+ values on DMOPs whose POSs translate over time under regular environmental changes, e.g., DF5, DF6, and DF14. Conversely, they show poorer performance in solving benchmarks with random changes, due to their more significant prediction errors. Unlike the other comparative algorithms, D-NSGA-II-A and D-NSGA-II-B have the most competitive performance on DMOPs with fixed POSs, i.e., dMOP1 and FDA2. Apparently, building a predictive or transfer model based on the similar Pareto-optima under various environments is inefficient. But the simple replacement strategies given by D-NSGA-II provide abundant historical information for generating a diverse initial population in the current environment, especially for dMOP1 and FDA2 with random environmental change.
TABLE III
SIGNIFICANCE TEST ON ALL TEST INSTANCES FOR DMOPS WITH REGULAR ENVIRONMENTAL CHANGE

To further investigate the algorithm performances, Figs. S2 and S3 of the Supplementary Material depict the IGD values of the initial populations obtained by all comparative methods under each environment with regular and random change, respectively. Apparently, under the first several environments, the historical knowledge stored in the knowledge pool generally has weak diversity, and the pool may not provide promising source knowledge for transfer. In this case, the transfer strategy may produce an initial population far away from the true POS of the new environment, leading to worse evolutionary efficiency. Intuitively, no matter which kind of environmental change occurs, the initial populations generated by KTS-DMOEA have very competitive and relatively stable IGD values under changing environments. This demonstrates the robustness of the proposed knowledge transfer strategy.

G. Analysis of Running Time

To further analyze the computational efficiency, the average running time and computational complexity of all comparative algorithms on various benchmarks are compared in Tables V and VI, respectively. Apparently, KTS-DMOEA consumes less time and computational cost for the evolution than MMTL-DMOEA, due to the fewer evaluations required by the proposed knowledge transfer strategy. In contrast, lower computation costs are paid by the other four comparative algorithms. Especially, D-NSGA-II-A and D-NSGA-II-B do not need to build transfer or prediction models for generating an initial population at the new time and thus bring the best computational efficiency. To sum up, KTS-DMOEA shows a competitive tradeoff between performance and computational efficiency in solving DMOPs.

H. Algorithm Performance on Different Dimensions of Decision Variables

In this section, KTS-DMOEA is further investigated on different dimensions of decision variables as n varies from 10 to 30 in steps of 4. All experimental results are summarized in the Supplementary Material. Tables S19–S21 list the MIGD, MHV, and MIGD+ values under regular change, and Tables S22–S24 record those under random change, respectively. As observed from the experimental results, KTS-DMOEA generally
TABLE IV
SIGNIFICANCE TEST ON ALL TEST INSTANCES FOR DMOPS WITH RANDOM ENVIRONMENTAL CHANGE
TABLE V
AVERAGE RUNNING T IME OF C OMPARATIVE A LGORITHMS (U NIT: S ECONDS )
TABLE VI
COMPUTATIONAL COMPLEXITY OF COMPARATIVE ALGORITHMS

Results on 20 benchmark functions show that the knowledge extraction and update strategy provides a more diverse pool that promotes more efficient knowledge transfer, and the hybrid transfer strategy is capable of adapting historical knowledge to the current search space, speeding up convergence. Furthermore, statistical results indicate that KTS-DMOEA outperforms five state-of-the-art DMOEAs, achieving good versatility in solving various change patterns of DMOPs. In the future, multiple types of environmental knowledge, such as the distribution and location of POSs or POFs, can simultaneously assist the TL mechanism to explore the future environment, which may be a potential research direction. In addition, it is also meaningful to investigate dynamic constrained multiobjective optimization problems and dynamic large-scale multiobjective optimization problems in the future.

REFERENCES

[1] R. Azzouz, S. Bechikh, and L. B. Said, "Dynamic multi-objective optimization using evolutionary algorithms: A survey," in Adaptation, Learning, and Optimization. Cham, Switzerland: Springer, 2017, pp. 31–70.
[2] L. T. Bui, Z. Michalewicz, E. Parkinson, and M. B. Abello, "Adaptation in dynamic environments: A case study in mission planning," IEEE Trans. Evol. Comput., vol. 16, no. 2, pp. 190–209, Apr. 2012.
[3] H. Zhang, J. Ding, M. Jiang, K. C. Tan, and T. Chai, "Inverse Gaussian process modeling for evolutionary dynamic multiobjective optimization," IEEE Trans. Cybern., vol. 52, no. 10, pp. 11240–11253, Oct. 2022.
[4] A. Zhou, B.-Y. Qu, H. Li, S.-Z. Zhao, P. N. Suganthan, and Q. Zhang, "Multiobjective evolutionary algorithms: A survey of the state of the art," Swarm Evol. Comput., vol. 1, no. 1, pp. 32–49, Mar. 2011.
[5] L. Li, Q. Lin, S. Liu, D. Gong, C. A. C. Coello, and Z. Ming, "A novel multi-objective immune algorithm with a decomposition-based clonal selection," Appl. Soft Comput., vol. 81, Aug. 2019, Art. no. 105490.
[6] K. Zhang, C. Shen, X. Liu, and G. G. Yen, "Multiobjective evolution strategy for dynamic multiobjective optimization," IEEE Trans. Evol. Comput., vol. 24, no. 5, pp. 974–988, Oct. 2020.
[7] M. Farina, K. Deb, and P. Amato, "Dynamic multiobjective optimization problems: Test cases, approximations, and applications," IEEE Trans. Evol. Comput., vol. 8, no. 5, pp. 425–442, Oct. 2004.
[8] J. Zhou, J. Zou, S. Yang, G. Ruan, J. Ou, and J. Zheng, "An evolutionary dynamic multi-objective optimization algorithm based on center-point prediction and sub-population autonomous guidance," in Proc. IEEE Symp. Series Comput. Intell., 2018, pp. 2148–2154.
[9] D. Gong, B. Xu, Y. Zhang, Y. Guo, and S. Yang, "A similarity-based cooperative co-evolutionary algorithm for dynamic interval multiobjective optimization problems," IEEE Trans. Evol. Comput., vol. 24, no. 1, pp. 142–156, Feb. 2020.
[10] J. K. Kordestani, A. E. Ranginkaman, M. R. Meybodi, and P. Novoa-Hernández, "A novel framework for improving multi-population algorithms for dynamic optimization problems: A scheduling approach," Swarm Evol. Comput., vol. 44, pp. 788–805, Feb. 2019.
[11] L. Feng et al., "Solving generalized vehicle routing problem with occasional drivers via evolutionary multitasking," IEEE Trans. Cybern., vol. 51, no. 6, pp. 3171–3184, Jun. 2021.
[12] M. Jiang, Z. Huang, L. Qiu, W. Huang, and G. G. Yen, "Transfer learning-based dynamic multiobjective optimization algorithms," IEEE Trans. Evol. Comput., vol. 22, no. 4, pp. 501–514, Aug. 2018.
[13] M. Jiang, Z. Wang, L. Qiu, S. Guo, X. Gao, and K. C. Tan, "A fast dynamic evolutionary multiobjective algorithm via manifold transfer learning," IEEE Trans. Cybern., vol. 51, no. 7, pp. 3417–3428, Jul. 2021.
[14] M. Jiang, Z. Wang, S. Guo, X. Gao, and K. C. Tan, "Individual-based transfer learning for dynamic multiobjective optimization," IEEE Trans. Cybern., vol. 51, no. 10, pp. 4968–4981, Oct. 2021.
[15] C. Rossi, M. Abderrahim, and J. C. Díaz, "Tracking moving optima using Kalman-based predictions," Evol. Comput., vol. 16, no. 1, pp. 1–30, Mar. 2008.
[16] M. R. Behnamfar, H. Barati, and M. Karami, "Multi-objective antlion algorithm for short-term hydro-thermal self-scheduling with uncertainties," IETE J. Res., to be published.
[17] S. Yang, H. Cheng, and F. Wang, "Genetic algorithms with immigrants and memory schemes for dynamic shortest path routing problems in mobile ad hoc networks," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 40, no. 1, pp. 52–63, Jan. 2010.
[18] L. Zhou et al., "Solving dynamic vehicle routing problem via evolutionary search with learning capability," in Proc. IEEE Congr. Evol. Comput., 2017, pp. 890–896.
[19] K. Trojanowski and Z. Michalewicz, "Evolutionary algorithms for non-stationary environments," in Proc. 8th Work. Intell. Inf. Syst., 1999, pp. 229–240.
[20] W. Kong, T. Chai, S. Yang, and J. Ding, "A hybrid evolutionary multiobjective optimization strategy for the dynamic power supply problem in magnesia grain manufacturing," Appl. Soft Comput., vol. 13, no. 5, pp. 2960–2969, May 2013.
[21] P. Rohlfshagen and X. Yao, "Evolutionary dynamic optimization: Challenges and perspectives," in Studies in Computational Intelligence, vol. 490. Heidelberg, Germany: Springer, 2013, pp. 65–84.
[22] T. T. Nguyen, S. Yang, J. Branke, and X. Yao, "Evolutionary dynamic optimization: Methodologies," in Studies in Computational Intelligence, vol. 490. Heidelberg, Germany: Springer, 2013, pp. 39–64.
[23] L. Feng et al., "Evolutionary multitasking via explicit autoencoding," IEEE Trans. Cybern., vol. 49, no. 9, pp. 3457–3470, Sep. 2019.
[24] M. Jiang, L. Qiu, Z. Huang, and G. G. Yen, "Dynamic multi-objective estimation of distribution algorithm based on domain adaptation and nonparametric estimation," Inf. Sci., vol. 435, pp. 203–223, Apr. 2018.
[25] J. G. O. L. Duhain and A. P. Engelbrecht, "Towards a more complete classification system for dynamically changing environments," in Proc. IEEE Congr. Evol. Comput., 2012, pp. 1–8.
[26] D. Yazdani, R. Cheng, D. Yazdani, J. Branke, Y. Jin, and X. Yao, "A survey of evolutionary continuous dynamic optimization over two decades—Part B," IEEE Trans. Evol. Comput., vol. 25, no. 4, pp. 630–650, Aug. 2021.
[27] M. Greeff and A. P. Engelbrecht, "Solving dynamic multi-objective problems with vector evaluated particle swarm optimisation," in Proc. IEEE Congr. Evol. Comput., 2008, pp. 2917–2924.
[28] C.-K. Goh and K. C. Tan, "A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization," IEEE Trans. Evol. Comput., vol. 13, no. 1, pp. 103–127, Feb. 2009.
[29] K. Deb, N. U. B. Rao, and S. Karthik, "Dynamic multi-objective optimization and decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling," in Evolutionary Multi-Criterion Optimization. Berlin, Heidelberg: Springer, 2007, pp. 803–817.
[30] S. Sahmoud and H. R. Topcuoglu, "Sensor-based change detection schemes for dynamic multi-objective optimization problems," in Proc. IEEE Symp. Series Comput. Intell., 2016, pp. 1–8.
[31] Z. Peng, J. Zheng, and J. Zou, "A population diversity maintaining strategy based on dynamic environment evolutionary model for dynamic multiobjective optimization," in Proc. IEEE Congr. Evol. Comput., 2014, pp. 274–281.
[32] X. Xu, Y. Tan, W. Zheng, and S. Li, "Memory-enhanced dynamic multi-objective evolutionary algorithm based on LP decomposition," Appl. Sci., vol. 8, no. 9, p. 1673, Sep. 2018.
[33] R. Azzouz, S. Bechikh, and L. B. Said, "A dynamic multi-objective evolutionary algorithm using a change severity-based adaptive population management strategy," Soft Comput., vol. 21, no. 4, pp. 885–906, Feb. 2017.
[34] R. Chen, K. Li, and X. Yao, "Dynamic multiobjectives optimization with a changing number of objectives," IEEE Trans. Evol. Comput., vol. 22, no. 1, pp. 157–171, Feb. 2018.
[35] S. Sahmoud and H. R. Topcuoglu, "A memory-based NSGA-II algorithm for dynamic multi-objective optimization problems," in Applications of Evolutionary Computation (Lecture Notes in Computer Science). Cham, Switzerland: Springer, 2016, pp. 296–310.
[36] Z. Liang, S. Zheng, Z. Zhu, and S. Yang, "Hybrid of memory and prediction strategies for dynamic multiobjective optimization," Inf. Sci., vol. 485, pp. 200–218, Jun. 2019.
[37] M. Rong, D. Gong, W. Pedrycz, and L. Wang, "A multimodel prediction method for dynamic multiobjective evolutionary optimization," IEEE Trans. Evol. Comput., vol. 24, no. 2, pp. 290–304, Apr. 2020.
[38] R. Liu, Y. Chen, W. Ma, C. Mu, and L. Jiao, "A novel cooperative coevolutionary dynamic multi-objective optimization algorithm using a new predictive model," Soft Comput., vol. 18, no. 10, pp. 1913–1929, Oct. 2014.
[39] J. Zou, Q. Li, S. Yang, H. Bai, and J. Zheng, "A prediction strategy based on center points and knee points for evolutionary dynamic multi-objective optimization," Appl. Soft Comput., vol. 61, pp. 806–818, Dec. 2017.
[40] A. Zhou, Y. Jin, and Q. Zhang, "A population prediction strategy for evolutionary dynamic multiobjective optimization," IEEE Trans. Cybern., vol. 44, no. 1, pp. 40–53, Jan. 2014.
[41] A. Muruganantham, K. C. Tan, and P. Vadakkepat, "Evolutionary dynamic multiobjective optimization via Kalman filter prediction," IEEE Trans. Cybern., vol. 46, no. 12, pp. 2862–2873, Dec. 2016.
[42] C. Wang, G. G. Yen, and M. Jiang, "A grey prediction-based evolutionary algorithm for dynamic multiobjective optimization," Swarm Evol. Comput., vol. 56, Aug. 2020, Art. no. 100695.
[43] L. Cao, L. Xu, E. D. Goodman, C. Bao, and S. Zhu, "Evolutionary dynamic multiobjective optimization assisted by a support vector regression predictor," IEEE Trans. Evol. Comput., vol. 24, no. 2, pp. 305–319, Apr. 2020.
[44] R. Rambabu, P. Vadakkepat, K. C. Tan, and M. Jiang, "A mixture-of-experts prediction framework for evolutionary dynamic multiobjective optimization," IEEE Trans. Cybern., vol. 50, no. 12, pp. 5099–5112, Dec. 2020.
[45] Y. Guo, H. Yang, M. Chen, J. Cheng, and D. Gong, "Ensemble prediction-based dynamic robust multi-objective optimization methods," Swarm Evol. Comput., vol. 48, pp. 156–171, Aug. 2019.
[46] I. Hatzakis and D. Wallace, "Dynamic multi-objective optimization with evolutionary algorithms," in Proc. 8th Annu. Conf. Genet. Evol. Comput., vol. 27, 2006, p. 1201.
[47] M. Jiang, Z. Wang, H. Hong, and G. G. Yen, "Knee point-based imbalanced transfer learning for dynamic multiobjective optimization," IEEE Trans. Evol. Comput., vol. 25, no. 1, pp. 117–129, Feb. 2021.
[48] L. Feng, W. Zhou, W. Liu, Y.-S. Ong, and K. C. Tan, "Solving dynamic multiobjective problem via autoencoding evolutionary search," IEEE Trans. Cybern., vol. 52, no. 5, pp. 2649–2662, May 2022.
[49] B. Sun and K. Saenko, "Subspace distribution alignment for unsupervised domain adaptation," in Proc. Brit. Mach. Vis. Conf., 2015, pp. 24.1–24.10.
[50] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Trans. Evol. Comput., vol. 23, no. 3, pp. 361–375, Jun. 2019.
[51] Q. Zhang, A. Zhou, and Y. Jin, "RM-MEDA: A regularity model-based multiobjective estimation of distribution algorithm," IEEE Trans. Evol. Comput., vol. 12, no. 1, pp. 41–63, Feb. 2008.
[52] S. Jiang, S. Yang, X. Yao, K. Tan, M. Kaiser, and N. Krasnogor, "Benchmark problems for CEC2018 competition on dynamic multiobjective optimisation," in Proc. CEC Competition, 2018, pp. 1–18.
[53] S. Yang, T. T. Nguyen, and C. Li, "Evolutionary dynamic optimization: Test and evaluation environments," in Studies in Computational Intelligence, vol. 490. Heidelberg, Germany: Springer, 2013, pp. 3–37.
[54] S. Zeng, R. Jiao, C. Li, X. Li, and J. S. Alkasassbeh, "A general framework of dynamic constrained multiobjective evolutionary algorithms for constrained optimization," IEEE Trans. Cybern., vol. 47, no. 9, pp. 1–11, Sep. 2017.
[55] L. While, P. Hingston, L. Barone, and S. Huband, "A faster algorithm for calculating hypervolume," IEEE Trans. Evol. Comput., vol. 10, no. 1, pp. 29–38, Feb. 2006.
[56] H. Ishibuchi, H. Masuda, Y. Tanigaki, and Y. Nojima, "Modified distance calculation in generational distance and inverted generational distance," in Evolutionary Multi-Criterion Optimization (Lecture Notes in Computer Science 9019). Cham, Switzerland: Springer, 2015, pp. 110–125.
[57] J. Derrac, S. García, D. Molina, and F. Herrera, "A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms," Swarm Evol. Comput., vol. 1, no. 1, pp. 3–18, Mar. 2011.
[58] R. Jiao, S. Zeng, C. Li, and Y.-S. Ong, "Two-type weight adjustments in MOEA/D for highly constrained many-objective optimization," Inf. Sci., vol. 578, pp. 592–614, Nov. 2021.

Yinan Guo (Member, IEEE) received the B.E. degree in automation and the Ph.D. degree in control theory and control engineering from the China University of Mining and Technology, Xuzhou, China, in 1997 and 2003, respectively.
She currently is a Professor of Computational Intelligence and Machine Learning with the School of Information and Control Engineering, China University of Mining and Technology, and also with the School of Mechanical Electronic and Information Engineering, China University of Mining and Technology (Beijing), Beijing, China. She has more than 90 publications. Her current research interests include computational intelligence in dynamic and uncertain optimization and its applications in scheduling, path planning, big data processing, and class imbalance learning and its applications in fault diagnosis.
Guoyu Chen received the B.E. degree from the Shandong Technology and Business University, Yantai, China, in 2017, and the M.E. degree from Nanchang Hangkong University, Nanchang, China, in 2020. He is currently pursuing the Ph.D. degree with the School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China.
His main research interests include dynamic multiobjective optimization and dynamic constrained multiobjective optimization.

Dunwei Gong (Member, IEEE) received the B.Sc. degree in mathematics from the China University of Mining and Technology, Xuzhou, China, in 1992, the M.E. degree in automation from Beihang University, Beijing, China, in 1995, and the Ph.D. degree in control theory and control engineering from the China University of Mining and Technology in 1999.
He currently is a Professor of Computational Intelligence and the Director of the Centre for Intelligent Optimization and Control with the School of Information and Control Engineering, China University of Mining and Technology. He has more than 180 publications. His current research interests include computational intelligence in multiobjective optimization, dynamic and uncertain optimization, and applications in software engineering, scheduling, path planning, big data processing, and analysis.

Min Jiang (Senior Member, IEEE) received the bachelor's and Ph.D. degrees in computer science from Wuhan University, Wuhan, China, in 2001 and 2007, respectively.
Subsequently, he was a Postdoctoral Researcher with the Department of Mathematics, Xiamen University, Xiamen, China, where he is currently a Professor with the Department of Artificial Intelligence. His main research interests are machine learning, computational intelligence, and robotics. He has a special interest in dynamic multiobjective optimization, transfer learning, software development, and in the basic theories of robotics.
Prof. Jiang received the Outstanding Reviewer Award from IEEE TRANSACTIONS ON CYBERNETICS in 2016. He is the Chair of the IEEE CIS Xiamen Chapter. He is currently serving as an Associate Editor for the IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS and IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS.

Jing Liang (Senior Member, IEEE) received the B.E. degree in automation from the Harbin Institute of Technology, Harbin, China, in 2003, and the Ph.D. degree in electrical and electronic engineering from Nanyang Technological University, Singapore, in 2009.
She is currently a Professor with the School of Electrical Engineering, Zhengzhou University, Zhengzhou, China. Her main research interests are evolutionary computation, swarm intelligence, multiobjective optimization, and neural networks.
Prof. Liang currently serves as an Associate Editor for the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION and the Swarm and Evolutionary Computation.