A Collaborative Neurodynamic Approach To Multiple-Objective Distributed Optimization
Abstract— This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence of the collaborative neurodynamic system to a Pareto optimal solution. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

Index Terms— Collaborative neurodynamic approach, distributed optimization, multiobjective optimization, neural networks, Pareto optimal solutions.

Manuscript received July 11, 2016; revised October 4, 2016; accepted December 31, 2016. Date of publication February 1, 2017; date of current version March 15, 2018. This work was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region of China under Grant 14207614, and in part by the National Natural Science Foundation of China under Grant 61673330 and Grant 61473333.
S. Yang is with the School of Computer Science and Engineering, Southeast University, Nanjing 210018, China, and was also with the Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong (e-mail: [email protected]).
Q. Liu is with the School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China (e-mail: [email protected]).
J. Wang is with the Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNNLS.2017.2652478

I. INTRODUCTION

MANY problems in science and engineering, including multicriteria decision making, machine learning, model predictive control, and smart building design, can be formulated as multiple-objective optimization problems (see [1]–[6], to name a few), which inherently contain several conflicting objectives to be optimized simultaneously. Usually, no single solution can be found such that every objective function attains its optimum. Therefore, a concept of optimality known as Pareto optimality is used in the multiobjective framework, which describes a compromise between the competing objectives. The set of all Pareto optimal values is called the Pareto front.

Different from single-objective optimization, multiobjective optimization requires computing a set of Pareto optimal solutions distributed along the Pareto front [7]. Various methods have been proposed for obtaining Pareto optimal solutions. One class of methods is the scalarization approach: a Pareto optimal solution is obtained by solving a parameterized single-objective optimization problem, and by varying the parameters, multiple Pareto optimal solutions are obtained. Many techniques have been developed to aggregate objective functions, such as the weighted-sum method [8], goal programming [9], and the normal boundary intersection method [10]. Another class of approaches consists of population-based methods, among which evolutionary multiobjective optimization approaches [11], [12] have been widely investigated. Numerous algorithms have been proposed, such as NSGA-II [13], SPEA2 [14], and MOEA/D [15], to name a few. In addition, particle swarm optimization has also been applied to multiobjective optimization [16].

Note that the aforementioned methods are mainly designed in a centralized manner. In the scalarization approach, all objective functions need to be combined in advance. In population-based methods, each individual has to have full knowledge of all objective functions to evaluate its quality, which may not be possible in some multiobjective optimization problems, such as multiparty negotiations, where decision makers know only their own value functions due to privacy protection [17]. Another example is multiobjective optimization problems modeling business activities within a large international corporation, in which decisions under multiple objectives are made locally in each country so that the corporation performs at its best [18]. For these problems, a distributed approach is necessary. In addition, with an increase in problem size, a distributed approach is also demanded due to the limited computing ability of a single computer.

In recent years, many efforts have been devoted to distributed optimization with a single objective function based on multiagent systems, in which each agent knows partial information of an objective function, and all agents cooperatively seek an optimal solution. Many multiagent systems have been developed [19]–[28]. In contrast, few works on multiobjective distributed optimization are available. In [18] and [29], multiobjective distributed optimization approaches based on iterative augmented Lagrangian coordination techniques and on diffusion strategies are proposed. More recently, a subgradient-based algorithm is proposed in [30] for multiobjective distributed optimization, and the relationship between the selection of the weight vector and the approximation error of the Pareto front is discussed. The aforementioned subgradient-based approaches require a diminishing step size to obtain an exact solution, which may limit the performance of the algorithms [20]. In [31], neural network models are developed for multiobjective optimization with equality constraints only, based on a decomposition-coordination principle.
Over the past three decades, neurodynamic optimization based on neural networks, which have the inherent nature of parallel computing and the potential for electronic implementation, has received great development. A neurodynamic approach can solve optimization problems in running times orders of magnitude faster than the most popular optimization algorithms executed on digital computers. In the mid-1980s, Hopfield and Tank [32] and Tank and Hopfield [33] spearheaded a neural network model for solving the traveling salesman problem. Since then, the neurodynamic optimization approach has been extensively investigated, and various neural network models have been developed for different single-objective optimization problems (see [34]–[46] and references therein). The key idea is to establish neural network models based on optimality conditions by using the Lagrange multiplier method, the projection method, duality theory, and so on. More recently, a neural network model with two time scales was proposed for multiobjective optimization [47]; however, it still operates in a centralized manner. In addition, by using multiple neural network models, collaborative neurodynamic approaches have been developed in [48] and [49] for global optimization with a single objective function.

Motivated by the above discussions, this paper aims to develop a collaborative neurodynamic approach for multiobjective distributed optimization. By using the weighted-sum method [8], a single-objective optimization problem can be formulated, in which the scalar objective is a summation of the weighted original objective functions. Based on a decomposition principle, a system consisting of multiple neural networks is developed to cooperatively seek Pareto optimal solutions of a multiobjective optimization problem. Each neural network is required to know a single objective function only, and possibly only partial information of the constraints. The cooperative scheme relies on the transmission of partial state information among the networks: each neural network optimizes its local objective function while seeking state agreement with the others, under the guidance of a consensus-based coupling scheme. Sufficient conditions are presented to guarantee the convergence of the collaborative neurodynamic system to a Pareto optimal solution.
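For concreteness, the weighted-sum scalarization of [8] referred to above can be written as follows (this display is ours; the positivity requirement on the weights matches the assumption w > 0 used in Theorem 1 below, and any normalization of the weights is optional):

```latex
\min_{x \in X}\; \sum_{i=1}^{k} w_i f_i(x), \qquad w_i > 0,\; i \in \mathcal{I}_k,
```

where X denotes the common feasible region introduced in Section II.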
In order to characterize a Pareto front, a switching-topology-based method is proposed for computing a set of Pareto optimal solutions. In this method, a switching process of the communication topology in the developed system is designed. Except for the last step, the communication topology is not connected during the process. We show that each connected component is also able to converge to a Pareto optimal solution. Therefore, when the system consists of multiple connected components, the method has the merit of generating multiple Pareto optimal solutions in parallel.

In summary, the distinguishing features of the approach are twofold. First, a collaborative neurodynamic system is developed for computing Pareto optimal solutions to a multiobjective distributed optimization problem by integrating an existing neurodynamic model for single-objective optimization with a decomposition principle. In contrast to the existing algorithms in [18], [29], and [30] for multiobjective distributed optimization, the approach herein can deal with general constraints. Second, switching topology is used for generating multiple Pareto optimal solutions to characterize Pareto fronts.

The remaining part of this paper is organized as follows. In Section II, preliminary concepts on multiobjective optimization, the projection operator, and graph theory are introduced. In Section III, the collaborative neurodynamic system is formulated, and then the stability properties of the system are analyzed. In Section IV, a switching-topology-based method is proposed to generate multiple Pareto optimal solutions. In Section V, numerical examples are presented to illustrate the proposed approach. In Section VI, an application of the collaborative neurodynamic approach to a portfolio selection problem is shown. Finally, concluding remarks are given in Section VII.

Notation: Throughout this paper, vectors are column vectors and written in boldface. ‖·‖ denotes the 2-norm. diag{A_1, A_2, ..., A_n} denotes a (block) diagonal matrix consisting of diagonal entries (blocks) A_1, A_2, ..., A_n. col{x_1, x_2, ..., x_n} denotes a column vector formed by stacking the vectors x_1, x_2, ..., x_n on top of each other. I_n denotes the identity matrix of dimension n. 1_k denotes a vector of dimension k with all entries being 1. I_k denotes the set {1, 2, ..., k}. R^m_+ denotes the half space {x ∈ R^m | x ≥ 0}. ⌊k⌋ refers to the largest integer no larger than k.

II. PRELIMINARIES

A. Multiobjective Optimization

A multiobjective optimization problem has a number of objective functions to be minimized or maximized [7]. In mathematical notation, it can be posed as

min f(x) = (f_1(x), f_2(x), ..., f_k(x))^T
s.t. x ∈ X        (1)

where x ∈ R^n is the vector of decision variables and X = {x ∈ R^n : Ax = b, g(x) ≤ 0, x ∈ Ω} is called the feasible region. f(x) ∈ R^k is a vector of objective functions or criteria with f_i(x): R^n → R (i ∈ I_k); k is the number of objective functions and satisfies k ≥ 2. g(x): R^n → R^m represents the inequality constraints. A ∈ R^{p×n} has full row rank. Ω ⊂ R^n is a closed convex set. The feasible objective region F is defined as the set {f(x) : x ∈ X}. Problem (1) is said to be convex if all the objective functions and the feasible region X are convex.

Due to the conflict among the objective functions, it is often not possible to find a single solution that would be optimal for all the objectives simultaneously. Consequently, the solution of problem (1) is described by the concept of Pareto optimality, which refers to a solution at which none of the objective functions can be improved without detriment to at least one of the other objective functions. A formal definition of Pareto optimality is given in the following.

Definition 1 [7]: A decision vector x* ∈ X is called Pareto optimal if there does not exist x ∈ X such that f_i(x) ≤ f_i(x*) for all i ∈ I_k and f_j(x) < f_j(x*) for at least one j ∈ I_k. A decision vector x* ∈ X is called weak Pareto optimal if there does not exist x ∈ X such that f_i(x) < f_i(x*) for all i ∈ I_k. An objective vector z* ∈ F is called (weak) Pareto optimal if the decision vector corresponding to it is (weak) Pareto optimal.
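As an illustration of Definition 1 (not part of the original paper), the following sketch filters a finite set of candidate objective vectors and keeps only the nondominated ones; the helper names are ours.

```python
import numpy as np

def dominates(za, zb):
    """za Pareto-dominates zb (minimization): za <= zb componentwise
    and za is strictly smaller in at least one objective."""
    return bool(np.all(za <= zb) and np.any(za < zb))

def nondominated(Z):
    """Return the rows of an (N, k) array of objective vectors that are
    not dominated by any other row, mirroring Definition 1 on a finite set."""
    keep = [i for i, zi in enumerate(Z)
            if not any(dominates(zj, zi) for j, zj in enumerate(Z) if j != i)]
    return Z[keep]

# Small example with k = 2 objectives: the third vector is dominated.
Z = np.array([[1.0, 3.0], [2.0, 2.0], [2.5, 3.5]])
print(nondominated(Z))
```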
Lemma 3: U is an invariant set with respect to system (7).

Proof: Let u(t) be the trajectory of system (7) with initial point u(t_0) ∈ U. The lemma is proved if we can show u(t) ⊂ U for t ≥ t_0. From system (7), we have u̇ + u = P_U(u − T(u)). It follows that

u(t) = e^{t_0−t} u(t_0) + e^{−t} ∫_{t_0}^{t} e^{s} P_U(u(s) − T(u(s))) ds
     = e^{t_0−t} u(t_0) + e^{−t} P_U(u(s_0) − T(u(s_0))) ∫_{t_0}^{t} e^{s} ds
     = e^{t_0−t} u(t_0) + (1 − e^{t_0−t}) P_U(u(s_0) − T(u(s_0)))

where t_0 ≤ s_0 ≤ t. The second equality follows from the integral mean value theorem. Note that P_U(u(s_0) − T(u(s_0))) ∈ U, u(t_0) ∈ U, and U is convex. Therefore, u(t) ⊂ U for any t ≥ t_0, which completes the proof.

In the following, the convergence result of the collaborative neurodynamic system is derived.

Theorem 1: Let Assumptions 1 and 2 hold, and further assume that at least one objective function is strictly convex on Ω. Given any w > 0, for any initial value u(t_0) ∈ U, the trajectory of each x_i(t) in system (5) is asymptotically convergent to a Pareto optimal solution of problem (1).

Proof: Let u* be an equilibrium point of system (5). Consider the following Lyapunov function:

V(u) = V_1(u) + (1/2) ‖u − u*‖²

where V_1(u) = −T(u)^T H(u) − (1/2) ‖H(u)‖². According to the inequalities in Lemma 2, we have

−T(u)^T H(u) ≥ ‖H(u)‖²        (8)

which implies V(u) ≥ (1/2) ‖u − u*‖². Therefore, V(u) is positive definite and radially unbounded. According to [34], V(u) is differentiable and

∇V_1(u) = T(u) − ∇T(u) H(u) + H(u)        (9)

where ∇T(u) is given by

∇T(u) = [ J(x̃, ỹ) + L    ∇g̃(x̃)    A      L
          −∇g̃^T(x̃)       O_1       O_2    O_3
          −A^T            O_4       O_5    O_6
          −L              O_7       O_8    O_9 ]        (10)

where O_i (i ∈ I_9) is a zero matrix with compatible dimensions and J(x̃, ỹ) = diag{J_1(x_1, y_1), J_2(x_2, y_2), ..., J_k(x_k, y_k)} with

J_i(x_i, y_i) = ∇²f_i(x_i) + Σ_{j=1}^{m} ∇²g_j(x_i) y_{ij}        (11)

where ∇²f_i(x_i) denotes the Hessian matrix of f_i with respect to x_i. Since f_i and g_j are all convex, J_i ≥ 0. Furthermore, since at least one f_i is strictly convex, we can conclude that J(x̃, ỹ) + L is positive definite. In fact, x̃^T L x̃ ≥ 0, and x̃^T L x̃ = 0 if and only if x̃ = 1_k ⊗ x̂, where x̂ ∈ R^n. Note that when x̃ = 1_k ⊗ x̂ ≠ 0, there is x̃^T J(x̃, ỹ) x̃ = Σ_{i=1}^{k} x̂^T J_i(x̂, y_i) x̂ > 0. Therefore, J(x̃, ỹ) + L is positive definite. Then

V̇ ≤ −T(u)^T (u − u*) − H(u)^T ∇T(u) H(u).        (12)

Note that ∇T(u) is not symmetric. Therefore

H(u)^T ∇T(u) H(u) = (1/2) H(u)^T (∇T(u) + ∇T^T(u)) H(u)
                  = H_x̃(u)^T (J(x̃, ỹ) + L) H_x̃(u) ≥ 0        (13)

where H_x̃(u) denotes the x̃-component of H(u). Next, we consider T(u)^T (u − u*). Since T(u) is differentiable, by the fundamental theorem of integral calculus, we have

T(u) − T(u*) = ∫_0^1 ∇T(u* + s(u − u*)) (u − u*) ds        (14)

where s ∈ [0, 1] is the integration variable. It follows from (14) that

(T(u(t)) − T(u*))^T (u(t) − u*)
  = ∫_0^1 (u(t) − u*)^T ∇T(u* + s(u − u*)) (u − u*) ds
  = ∫_0^1 (x̃(t) − x̃*)^T (J(x̃(s), ỹ(s)) + L) (x̃(t) − x̃*) ds ≥ 0        (15)

where x̃(s) = (1 − s) x̃* + s x̃(t) ∈ Ω^k and ỹ(s) = (1 − s) ỹ* + s ỹ(t) ∈ R^{km}_+. Note that T(u*)^T (u − u*) ≥ 0 holds for any u ∈ U. Since u(t_0) ∈ U, it follows from Lemma 3 that u(t) ⊂ U. Then, there is

T(u(t))^T (u(t) − u*) = (T(u(t)) − T(u*))^T (u(t) − u*) + T(u*)^T (u(t) − u*)
                      ≥ T(u*)^T (u(t) − u*) ≥ 0.        (16)

Then, we have V̇(t) ≤ 0. Based on the invariant set theorem, we can conclude that all trajectories u(t) of system (7) converge to the largest invariant set M = {u ∈ U | V̇(u(t)) = 0}.

In what follows, we characterize the set M, which is shown to be the set of equilibrium points of system (7). Clearly, u̇ = 0 implies V̇(u(t)) = 0. We now show that the converse also holds. Let û ∈ M; then V̇(û) = 0 implies that

T(û)^T (û − u*) = 0        (17)
(T(û) − T(u*))^T (û − u*) = 0        (18)
H(û)^T ∇T(û) H(û) = 0.        (19)

Combining (15) and (18), we have

∫_0^1 (ˆx̃ − x̃*)^T (J(x̃(s), ỹ(s)) + L) (ˆx̃ − x̃*) ds = 0        (20)

where x̃(s) = (1 − s) x̃* + s ˆx̃ ∈ Ω^k and ỹ(s) = (1 − s) ỹ* + s ˆỹ ∈ R^{km}_+. Since J(x̃(s), ỹ(s)) + L is positive definite, it follows from (20) that (ˆx̃ − x̃*)^T (J(x̃(s), ỹ(s)) + L) (ˆx̃ − x̃*) = 0 holds for 0 ≤ s ≤ 1. Let s = 0; then x̃(s) = x̃* and ỹ(s) = ỹ*. Thus, we have (ˆx̃ − x̃*)^T (J(x̃*, ỹ*) + L) (ˆx̃ − x̃*) = 0. Therefore, x̂ = x*. Furthermore, we have ˙z̃ = A x̂ − b = A x* − b = 0. Similarly, ˙η̃ = 0. In addition, combining (13) and (19) gives H_x̃(û)^T (J(ˆx̃, ˆỹ) + L) H_x̃(û) = 0. Then, we can conclude that H_x̃(û) = 0. Hence, ˙x̃ = 0.

Next, we prove ˙ỹ = 0. According to the expression of T(u) and (17), we have (ˆỹ − ỹ*)^T g̃(ˆx̃) = 0.
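Systems (5)–(7) themselves are defined on pages not reproduced above, but the relation u̇ + u = P_U(u − T(u)) used in the proof identifies the class of projected dynamics involved. The following is a minimal, illustrative sketch (our own toy objective, box set, and step size, not the paper's system) of simulating such dynamics by forward Euler:

```python
import numpy as np

# Toy instance of the projected dynamics du/dt = -u + P_Omega(u - F(u)):
# minimize 0.5*||u - p||^2 over the box Omega = [0, 1]^2, so F(u) = u - p
# and the unique equilibrium is P_Omega(p).
p = np.array([1.5, -0.3])
F = lambda u: u - p                      # gradient of the toy objective
P = lambda v: np.clip(v, 0.0, 1.0)       # projection onto the box

u = np.array([0.9, 0.9])                 # initial state inside Omega
h = 0.05                                 # Euler step size (illustrative)
for _ in range(2000):
    u = u + h * (-u + P(u - F(u)))

print(u)                                 # approaches P_Omega(p) = (1.0, 0.0)
```

Since each Euler step is a convex combination of a point in Omega and a projected point, the iterate stays in Omega whenever u(0) ∈ Omega and h ≤ 1, mirroring the invariance argument of Lemma 3.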
Fig. 4. Transient behavior of x_i(t) in system (5) under a ring topology in Example 1.

Fig. 6. Transient behavior of x_i(t) in system (5) under a disconnected communication topology.
Fig. 7. Boundary of the Pareto front of problem (22) in Example 1.

Fig. 8. Transient behavior of x(t) in system (6), Case I, Example 2.

The decision set X is defined by the following linear inequalities:

2x_1 + 4x_2 − 5 ≤ 0        (24a)
−8x_1 − 6x_2 + 5 ≤ 0       (24b)
4x_1 − 6x_2 − 3 ≤ 0.       (24c)

In this example, we show the effectiveness of system (6) for computing Pareto optimal solutions under different constraint decompositions. As there are two objective functions, the system consists of two neural networks. The neural network knowing f_i is labeled as the ith neural network. In the following, two cases of constraint decomposition are considered.
1) Case I: Both neural networks know all constraints.
2) Case II: The first neural network knows (24a) and (24b), and the second one knows (24c).
Let w_1 = (1, 1/16)^T. The simulation results are shown in Figs. 8 and 9. One can see that system (6) converges to the Pareto optimal solution x*_w1 = (0.271, 0.472)^T in both cases. That is to say, in system (6), it is not required that each neural network have full information of the constraints. Hence, the structure of the individual neural networks can be simplified.

Next, by letting w_2 = (1, 1/32)^T, the Pareto optimal solution is obtained as x*_w2 = (0.282, 0.457)^T. In [17], the corresponding Pareto optimal solutions are given as x^o_w1 = (0.271, 0.473)^T and x^o_w2 = (0.283, 0.456)^T. By comparing the weighted-sum values w^T f(x), it can be seen that x* is better than x^o, as w_1^T f(x*_w1) = −1.924 > w_1^T f(x^o_w1) = −1.925 and w_2^T f(x*_w2) = −1.867 > w_2^T f(x^o_w2) = −1.868. As a result, the neurodynamic approach here finds better Pareto optimal solutions in comparison with the method in [17].
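As a quick arithmetic check (ours, not from the paper), the reported solutions can be substituted into the linear inequalities (24a)–(24c). The residuals should be nonpositive up to the three-decimal rounding of the reported values; the near-zero residual of (24b) suggests that this constraint is active at both points at the reported precision.

```python
import numpy as np

# Inequalities (24a)-(24c) written as A @ x - b <= 0.
A = np.array([[ 2.0,  4.0],    # (24a)
              [-8.0, -6.0],    # (24b)
              [ 4.0, -6.0]])   # (24c)
b = np.array([5.0, -5.0, 3.0])

for name, x in {"x*_w1": (0.271, 0.472), "x*_w2": (0.282, 0.457)}.items():
    r = A @ np.array(x) - b
    # loose tolerance accounts for the 3-decimal rounding of the reported points
    print(name, np.round(r, 3), bool(np.all(r <= 1e-2)))
```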
Example 3: Consider the following bilateral negotiation problem given in [30]:

min f_1(x) = (1/5)(2(x_1 − 1)² + x_2² − 2x_1(x_2 − 1))
min f_2(x) = (2/5)(2x_1² + (x_2 + 1)² + (x_1 − 2)x_2)

where the decision sets are X_1 = [−1, 1] × [−1, 1] and X_2 = [0, 2] × [0, 2]. The feasible constraint set is X = X_1 ∩ X_2. As shown in [30], f_1 and f_2 are strongly convex.

In the following, we use system (7) to attain Pareto optimal solutions. Here, system (7) consists of two neural networks, and each neural network consists of an x-layer and an η-layer only, as there are no equality or inequality constraints. The trajectory of system (7) is simulated by using Euler's method [55]. The associated difference equation is given as

x_i(t + 1) = (1 − h) x_i(t) + h P_{X_i}(x_i(t) − ∇f_i(x_i(t)) − c_{i,3−i}(x_i(t) − x_j(t) + η_i(t) − η_j(t)))
η_i(t + 1) = η_i(t) + h c_{i,3−i}(x_i(t) − x_j(t))        (25)

where t ∈ N and i = 1, 2. Let x_1(0) = x_2(0) = (5, −5)^T, η_1(0) = η_2(0) = (0, 0)^T, h = 0.4, and c_12 = c_21 = 0.8. As in [30], we use Δ(t) = ‖0.5(x_1(t) + x_2(t)) − x*‖ to measure the convergence error of system (25). Let T be the smallest positive integer such that Δ(T) ≤ 0.01. System (25) is run under different weight vectors w = (w_1, 1 − w_1)^T, w_1 = 0, 0.1, ..., 0.9, 1. The corresponding T values are presented in the third row of Table I. The second row in Table I is from [30] and refers to the T required by the algorithm therein. Table I shows that system (25) requires considerably fewer iterations than the algorithm in [30] when w_1 is larger than 0.8.
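The iteration (25) can be transcribed directly; a sketch with the stated parameters is given below. The per-objective weights used for the Table I sweep do not appear explicitly in (25) as reproduced above, so this run corresponds to unweighted objectives (to sweep the weights, one would presumably scale each ∇f_i by its weight, which is our assumption rather than a statement from the paper).

```python
import numpy as np

h, c12, c21 = 0.4, 0.8, 0.8

def grad_f1(x):   # f1(x) = (1/5)(2(x1-1)^2 + x2^2 - 2 x1 (x2-1))
    x1, x2 = x
    return np.array([4*(x1 - 1) - 2*(x2 - 1), 2*x2 - 2*x1]) / 5.0

def grad_f2(x):   # f2(x) = (2/5)(2 x1^2 + (x2+1)^2 + (x1-2) x2)
    x1, x2 = x
    return np.array([4*x1 + x2, 2*(x2 + 1) + x1 - 2]) * (2.0 / 5.0)

P1 = lambda v: np.clip(v, -1.0, 1.0)   # projection onto X1 = [-1,1] x [-1,1]
P2 = lambda v: np.clip(v,  0.0, 2.0)   # projection onto X2 = [0,2] x [0,2]

x1 = x2 = np.array([5.0, -5.0])        # stated initial values
e1 = e2 = np.zeros(2)                  # eta_1(0) = eta_2(0) = (0, 0)^T
for t in range(300):
    x1n = (1 - h)*x1 + h*P1(x1 - grad_f1(x1) - c12*(x1 - x2 + e1 - e2))
    x2n = (1 - h)*x2 + h*P2(x2 - grad_f2(x2) - c21*(x2 - x1 + e2 - e1))
    e1n = e1 + h*c12*(x1 - x2)
    e2n = e2 + h*c21*(x2 - x1)
    x1, x2, e1, e2 = x1n, x2n, e1n, e2n

print(0.5*(x1 + x2))   # consensus estimate of a Pareto optimal solution
```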
VI. PORTFOLIO OPTIMIZATION

This section presents an application of the collaborative neurodynamic approach to portfolio optimization.
TABLE I
WEIGHT VECTOR w AND THE NUMBER OF ITERATIONS T

Fig. 12. Tradeoff between objective functions f_1 and f_2.

min f_1(x) = −Σ_{k∈C_1} f_1^{(k)}(x) = −Σ_{k∈C_1} E[u_{k,i}^T x]        (26a)
min f_2(x) = Σ_{k∈C_2} f_2^{(k)}(x) = Σ_{k∈C_2} E[|s_{k,i}^T x|²]        (26b)
s.t. 1_m^T x ≤ d,  x ≥ 0        (26c)
     a_k^T x ≥ b_k,  k ∈ C_3        (26d)

where f_1 and f_2 refer to the expected return and the variance of return, respectively. Constraint (26c) means that the investment in each asset should be nonnegative and the total amount of investment is bounded by d. Constraint (26d) describes the tax requirements and tax deductions, which are known only to the agents in C_3, where C_3 is a subset of C_1 ∪ C_2.

In this example, let m = 5, d = 20, C_1 = {1, 2, 3, 4}, C_2 = {5, 6}, and C_3 = {1, 3, 5}. An illustration is given in Fig. 10. The vectors u_{k,i} and s_{k,i} are randomly generated based on the Gaussian distributions N(1_5, I_5) and N(0_5, I_5), respectively. In addition, let

a_1 = (1, 2, 3, 1, 1)^T,  b_1 = 2
a_3 = (1, 1, 2, 3, 4)^T,  b_3 = 2
a_5 = (4, 5, 6, 2, 3)^T,  b_5 = 2.

It can be seen that each agent in C_3 has an inequality constraint (26d). We also assume that only the sixth neural network knows the constraint on the total amount. Note that only two objectives are contained in problem (26); however, the information of f_1 is distributed among the agents in C_1 and, similarly, f_2 is distributed among the agents in C_2. Therefore, a distributed manner is required to optimize f_1 and f_2.
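To make the setting of problem (26) concrete, the data can be generated and the two cluster objectives evaluated as in the sketch below (the sample size N used to approximate the expectations and the evaluation point are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, N = 5, 20.0, 1000
C1, C2, C3 = [1, 2, 3, 4], [5, 6], [1, 3, 5]

# Samples u_{k,i} ~ N(1_5, I_5) for k in C1 and s_{k,i} ~ N(0_5, I_5) for k in C2.
u = {k: rng.normal(1.0, 1.0, size=(N, m)) for k in C1}
s = {k: rng.normal(0.0, 1.0, size=(N, m)) for k in C2}

a = {1: np.array([1., 2., 3., 1., 1.]),
     3: np.array([1., 1., 2., 3., 4.]),
     5: np.array([4., 5., 6., 2., 3.])}
b = {1: 2.0, 3: 2.0, 5: 2.0}

def f1(x):   # negative expected return, distributed over the agents in C1, cf. (26a)
    return -sum(np.mean(u[k] @ x) for k in C1)

def f2(x):   # variance of return, distributed over the agents in C2, cf. (26b)
    return sum(np.mean((s[k] @ x) ** 2) for k in C2)

x = np.full(m, d / m)   # an example feasible point: 1_m^T x = d, x >= 0
feasible = np.sum(x) <= d and np.all(x >= 0) and all(a[k] @ x >= b[k] for k in C3)
print(f1(x), f2(x), feasible)
```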
Problem (26) can also be viewed as a six-objective optimization problem, in which the weights of the objective functions in the same cluster are always equal.
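One way to make the preceding sentence explicit (our reading, not a display from the paper): with the per-agent objectives of (26a) and (26b) and a two-dimensional weight vector (w_1, w_2), the six per-agent weights ω_k are tied within each cluster,

```latex
\min_{x}\; \sum_{k \in C_1} \omega_k \bigl(-f_1^{(k)}(x)\bigr)
         + \sum_{k \in C_2} \omega_k f_2^{(k)}(x),
\qquad \omega_1 = \omega_2 = \omega_3 = \omega_4 = w_1, \quad \omega_5 = \omega_6 = w_2 .
```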
In the following, six neural networks are utilized to compute the Pareto optimal solutions of problem (26). As the constraints are distributed over multiple agents, system (6) is applied here. According to Remark 3, the communication topology should be connected; we chose it as shown in Fig. 3. Simulation results are presented in Figs. 11 and 12. Fig. 11 shows the transient behavior of system (6): all neural networks reach a decision consensus. The Pareto front presented in Fig. 12 shows the tradeoff between f_1 and f_2.

VII. CONCLUSION

This paper presents a collaborative neurodynamic approach for solving multiobjective optimization problems with general constraints. By transforming the multiobjective optimization problem into a single-objective one using the weighted-sum method, and by using a decomposition principle, a collaborative neurodynamic system is developed. Such a system can also be viewed as a novel continuous-time model for distributed optimization that can deal with general constraints. To obtain a set of Pareto optimal solutions, a method based on switching topology is proposed. This method has the merit of parallelized computing and provides a good characterization of the boundary of Pareto fronts.

Many avenues are open for further investigation. The collaborative model herein is based on the weighted-sum method and cannot generate nonconvex regions of a Pareto front. Based on other scalarization methods (e.g., the Tchebycheff method), Pareto optimal solutions could instead be obtained for multiobjective distributed optimization problems with nonconvex objective functions. Future work may also include applications of the approach to distributed machine learning and to distributed estimation and control of networked systems.
REFERENCES

[1] M. Zeleny and J. L. Cochrane, Multiple Criteria Decision Making. Columbia, SC, USA: Univ. South Carolina Press, 1973.
[2] Y. Jin and B. Sendhoff, "Pareto-based multiobjective machine learning: An overview and case studies," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 38, no. 3, pp. 397–415, May 2008.
[3] A. Bemporad and D. Muñoz de la Peña, "Multiobjective model predictive control," Automatica, vol. 45, no. 12, pp. 2823–2830, 2009.
[4] M. H. Ahmadi, M.-A. Ahmadi, and M. Feidt, "Thermodynamic analysis and evolutionary algorithm based on multi-objective optimization of performance for irreversible four-temperature-level refrigeration," Mech. Ind., vol. 16, no. 2, p. 207, Mar. 2015.
[5] N. Delgarm, B. Sajadi, F. Kowsary, and S. Delgarm, "Multi-objective optimization of the building energy performance: A simulation-based approach by means of particle swarm optimization (PSO)," Appl. Energy, vol. 170, pp. 293–303, May 2016.
[6] Z. Dimitrova and F. Marechal, "Techno-economic design of hybrid electric vehicles and possibilities of the multi-objective optimization structure," Appl. Energy, vol. 161, pp. 746–759, Jan. 2016.
[7] K. Miettinen, Nonlinear Multiobjective Optimization. New York, NY, USA: Springer, 2012.
[8] R. T. Marler and J. S. Arora, "The weighted sum method for multi-objective optimization: New insights," Structural Multidisciplinary Optim., vol. 41, no. 6, pp. 853–862, 2010.
[9] A. Charnes and W. W. Cooper, "Goal programming and multiple objective optimizations: Part I," Eur. J. Oper. Res., vol. 1, no. 1, pp. 39–54, 1977.
[10] I. Das and J. E. Dennis, "Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems," SIAM J. Optim., vol. 8, no. 3, pp. 631–657, Jul. 1998.
[11] K. Deb, Multi-objective Optimization Using Evolutionary Algorithms. Hoboken, NJ, USA: Wiley, 2001.
[12] A. Zhou, B.-Y. Qu, H. Li, S.-Z. Zhao, P. N. Suganthan, and Q. Zhang, "Multiobjective evolutionary algorithms: A survey of the state of the art," Swarm Evol. Comput., vol. 1, no. 1, pp. 32–49, 2011.
[13] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.
[14] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the strength Pareto evolutionary algorithm," in Proc. Evol. Methods Design, Optim. Control Appl. Ind. Problems (EUROGEN), Athens, Greece, 2001, pp. 95–100.
[15] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, Dec. 2007.
[16] C. A. Coello Coello and M. S. Lechuga, "MOPSO: A proposal for multiple objective particle swarm optimization," in Proc. Congr. Evol. Comput., vol. 2, 2002, pp. 1051–1056.
[17] P. Heiskanen, "Decentralized method for computing Pareto solutions in multiparty negotiations," Eur. J. Oper. Res., vol. 117, no. 3, pp. 578–590, 1999.
[18] B. Dandurand and M. M. Wiecek, "Distributed computation of Pareto sets," SIAM J. Optim., vol. 25, no. 2, pp. 1083–1109, 2015.
[19] A. Nedić and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Trans. Autom. Control, vol. 54, no. 1, pp. 48–61, Jan. 2009.
[20] J. Wang and N. Elia, "Control approach to distributed optimization," in Proc. 48th Annu. Allerton Conf. Commun., Control, Comput., vol. 1, Sep. 2010, pp. 557–561.
[21] M. Zhu and S. Martínez, "On distributed convex optimization under inequality and equality constraints," IEEE Trans. Autom. Control, vol. 57, no. 1, pp. 151–164, Jan. 2012.
[22] B. Gharesifard and J. Cortés, "Distributed continuous-time convex optimization on weight-balanced digraphs," IEEE Trans. Autom. Control, vol. 59, no. 3, pp. 781–786, Mar. 2014.
[23] S. S. Kia, J. Cortés, and S. Martínez, "Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication," Automatica, vol. 55, pp. 254–264, 2015.
[24] Q. Liu and J. Wang, "A second-order multi-agent network for distributed convex optimization subject to bound constraints," IEEE Trans. Autom. Control, vol. 60, no. 12, pp. 3310–3315, Dec. 2015.
[25] Q. Liu, S. Yang, and J. Wang, "A collective neurodynamic approach to distributed constrained optimization," IEEE Trans. Neural Netw. Learn. Syst., in press, doi: 10.1109/TNNLS.2016.2549566.
[26] S. Yang, Q. Liu, and J. Wang, "Distributed optimization based on a multiagent system in the presence of communication delays," IEEE Trans. Syst., Man, Cybern., Syst., in press, doi: 10.1109/TSMC.2016.2531649.
[27] Y. Lou, Y. Hong, and S. Wang, "Distributed continuous-time approximate projection protocols for shortest distance optimization problems," Automatica, vol. 69, pp. 289–297, Jul. 2016.
[28] S. Yang, Q. Liu, and J. Wang, "A multi-agent system with a proportional-integral protocol for distributed constrained optimization," IEEE Trans. Autom. Control, in press, doi: 10.1109/TAC.2016.2610945.
[29] J. Chen and A. H. Sayed, "Distributed Pareto optimization via diffusion strategies," IEEE J. Sel. Topics Signal Process., vol. 7, no. 2, pp. 205–220, Apr. 2013.
[30] Y. Lou and S. Wang, "Approximate representation of the Pareto frontier in multiparty negotiations: Decentralized methods and privacy preservation," Eur. J. Oper. Res., vol. 254, no. 3, pp. 968–976, 2016.
[31] M. Mestari, M. Benzirar, N. Saber, and M. Khouil, "Solving nonlinear equality constrained multiobjective optimization problems using neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 10, pp. 2500–2520, Oct. 2015.
[32] J. J. Hopfield and D. W. Tank, "'Neural' computation of decisions in optimization problems," Biol. Cybern., vol. 52, no. 3, pp. 141–152, 1985.
[33] D. W. Tank and J. J. Hopfield, "Simple 'neural' optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit," IEEE Trans. Circuits Syst., vol. 33, no. 5, pp. 533–541, May 1986.
[34] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 49, no. 4, pp. 447–458, Apr. 2002.
[35] Y. Xia and J. Wang, "A general projection neural network for solving monotone variational inequalities and related optimization problems," IEEE Trans. Neural Netw., vol. 15, no. 2, pp. 318–328, Mar. 2004.
[36] Y. Xia, "An extended projection neural network for constrained optimization," Neural Comput., vol. 16, no. 4, pp. 863–883, 2004.
[37] Y. Xia, G. Feng, and M. Kamel, "Development and analysis of a neural dynamical approach to nonlinear programming problems," IEEE Trans. Autom. Control, vol. 52, no. 11, pp. 2154–2159, Nov. 2007.
[38] Q. Liu and J. Wang, "A one-layer recurrent neural network with a discontinuous hard-limiting activation function for quadratic programming," IEEE Trans. Neural Netw., vol. 19, no. 4, pp. 558–570, Apr. 2008.
[39] X. Hu and B. Zhang, "An alternative recurrent neural network for solving variational inequalities and related optimization problems," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 39, no. 6, pp. 1640–1645, Dec. 2009.
[40] Q. Liu and J. Wang, "Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions," IEEE Trans. Neural Netw., vol. 22, no. 4, pp. 601–613, Apr. 2011.
[41] Q. Liu, Z. Guo, and J. Wang, "A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization," Neural Netw., vol. 26, pp. 99–109, Feb. 2012.
[42] Q. Liu and J. Wang, "A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 5, pp. 812–824, May 2013.
[43] Q. Liu, T. Huang, and J. Wang, "One-layer continuous- and discrete-time projection neural networks for solving variational inequalities and related optimization problems," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1308–1318, Jul. 2014.
[44] Q. Liu and J. Wang, "A projection neural network for constrained quadratic minimax optimization," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 11, pp. 2891–2900, Nov. 2015.
[45] Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," IEEE Trans. Neural Netw. Learn. Syst., vol. 27, no. 2, pp. 214–224, Feb. 2016.
[46] Q. Liu and J. Wang, "L1-minimization algorithms for sparse signal reconstruction based on a projection neural network," IEEE Trans. Neural Netw. Learn. Syst., vol. 27, no. 3, pp. 698–707, Mar. 2016.
[47] S. Yang, J. Wang, and Q. Liu, "Multiple-objective optimization based on a two-time-scale neurodynamic system," in Proc. 8th Int. Conf. Adv. Comput. Intell. (ICACI), Chiang Mai, Thailand, Feb. 2016, pp. 193–199.
[48] Z. Yan, J. Wang, and G. Li, "A collective neurodynamic optimization approach to bound-constrained nonconvex optimization," Neural Netw., vol. 55, pp. 20–29, Jul. 2014.
[49] Z. Yan, J. Fan, and J. Wang, "A collective neurodynamic approach to constrained global optimization," IEEE Trans. Neural Netw. Learn. Syst., in press, doi: 10.1109/TNNLS.2016.2524619.
[50] H. Attouch, G. Garrigos, and X. Goudou, "A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions," J. Math. Anal. Appl., vol. 422, no. 1, pp. 741–771, 2015.
[51] A. Nedić, A. Ozdaglar, and P. A. Parrilo, "Constrained consensus and optimization in multi-agent networks," IEEE Trans. Autom. Control, vol. 55, no. 4, pp. 922–938, Apr. 2010.
[52] N. Biggs, Algebraic Graph Theory. Cambridge, U.K.: Cambridge Univ. Press, 1993.
[53] D. Mueller-Gritschneder, H. Graeb, and U. Schlichtmann, "A successive approach to compute the bounded Pareto front of practical multiobjective optimization problems," SIAM J. Optim., vol. 20, no. 2, pp. 915–934, 2009.
[54] R. H. C. Takahashi, E. G. Carrano, and E. F. Wanner, "On a stochastic differential equation approach for multiobjective optimization up to Pareto-criticality," in Evolutionary Multi-Criterion Optimization. New York, NY, USA: Springer, 2011, pp. 61–75.
[55] W. Gautschi, Numerical Analysis. New York, NY, USA: Springer, 2011.
[56] H. Markowitz, "Portfolio selection," J. Finance, vol. 7, no. 1, pp. 77–91, 1952.

Shaofu Yang received the B.S. and M.S. degrees in applied mathematics from the Department of Mathematics, Southeast University, Nanjing, China, in 2010 and 2013, respectively, and the Ph.D. degree from the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, in 2016.
He was a Post-Doctoral Fellow with the City University of Hong Kong in 2016. He is currently an Associate Professor with the School of Computer Science and Engineering, Southeast University. His current research interests include neurodynamic optimization, distributed optimization, and multiagent systems.

Qingshan Liu (S'07–M'08–SM'15) received the B.S. degree in mathematics from Anhui Normal University, Wuhu, China, in 2001, the M.S. degree in applied mathematics from Southeast University, Nanjing, China, in 2005, and the Ph.D. degree in automation and computer-aided engineering from The Chinese University of Hong Kong, Hong Kong, in 2008.
He was a Research Associate with the City University of Hong Kong, Hong Kong, in 2009 and 2011. In 2010 and 2012, he was a Post-Doctoral Fellow with The Chinese University of Hong Kong. In 2012 and 2013, he was a Visiting Scholar with Texas A&M University at Qatar, Doha, Qatar. From 2008 to 2014, he was an Associate Professor with the School of Automation, Southeast University. He is currently a Professor with the School of Automation, Huazhong University of Science and Technology, Wuhan, China. His current research interests include optimization theory and applications, artificial neural networks, computational intelligence, and multiagent systems.
Dr. Liu is a member of the Editorial Board of Neural Networks. He serves as an Associate Editor of the IEEE TRANSACTIONS ON CYBERNETICS.

Jun Wang (S'89–M'90–SM'93–F'07) received the B.S. degree in electrical engineering and the M.S. degree in systems engineering from the Dalian University of Technology, Dalian, China, in 1982 and 1985, respectively, and the Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, OH, USA, in 1991.
He held various academic positions with the Dalian University of Technology, with Case Western Reserve University, with the University of North Dakota, Grand Forks, ND, USA, and with The Chinese University of Hong Kong, Hong Kong. He also held various short-term or part-time visiting positions with the U.S. Air Force Armstrong Laboratory, Dayton, OH, USA, with the RIKEN Brain Science Institute, Wako, Japan, with the Chinese Academy of Sciences, Beijing, China, with the Huazhong University of Science and Technology, Wuhan, China, with Shanghai Jiao Tong University, Shanghai, China, as a Cheung Kong Chair Professor, and with the Dalian University of Technology. He is currently the Chair Professor of Computational Intelligence with the Department of Computer Science, City University of Hong Kong, Hong Kong. His current research interests include neural networks and their applications.
Dr. Wang was a member of the Editorial Board of Neural Networks. He also served as a member of the Editorial Advisory Board of the International Journal of Neural Systems. He was a recipient of the Research Excellence Award from The Chinese University of Hong Kong from 2008 to 2009, two first-class Natural Science Awards, from the Shanghai Municipal Government in 2009 and from the Ministry of Education of China in 2011, the Outstanding Achievement Award from the Asia Pacific Neural Network Assembly, the IEEE TRANSACTIONS ON NEURAL NETWORKS Outstanding Paper Award (with Q. Liu) in 2011, and the Neural Networks Pioneer Award in 2014 from the IEEE Computational Intelligence Society. He is the Editor-in-Chief of the IEEE TRANSACTIONS ON CYBERNETICS and served as an Associate Editor of the journal and its predecessor from 2003 to 2013. He served as the President of the Asia Pacific Neural Network Assembly in 2006, the General Chair of the 13th International Conference on Neural Information Processing in 2006 and of the IEEE World Congress on Computational Intelligence in 2008, and the Program Chair of the IEEE International Conference on Systems, Man, and Cybernetics in 2012. He also served as an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS and the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS - PART C. He was a Guest Editor of Special Issues of the European Journal of Operational Research, the International Journal of Neural Systems, and Neurocomputing. He has served on many committees, such as the IEEE Fellow Committee. He was an IEEE Computational Intelligence Society Distinguished Lecturer from 2010 to 2012 and from 2014 to 2016.