
Received May 11, 2019, accepted June 13, 2019, date of publication June 24, 2019, date of current version July 11, 2019.

Digital Object Identifier 10.1109/ACCESS.2019.2924590

A Lagrange Multiplier Method for Distributed Optimization Based on Multi-Agent Network With Private and Shared Information

YAN ZHAO (1) AND QINGSHAN LIU (2,3), (Senior Member, IEEE)
(1) School of Common Courses, Wannan Medical College, Wuhu 241000, China
(2) School of Mathematics, Southeast University, Nanjing 210096, China
(3) Jiangsu Provincial Key Laboratory of Networked Collective Intelligence, Nanjing 210096, China

Corresponding author: Qingshan Liu ([email protected])


This work was supported in part by the National Natural Science Foundation of China under Grant 61876036, in part by the
‘‘333 Engineering’’ Foundation of Jiangsu Province of China under Grant BRA2018329, in part by the Scientific Research Projects of
Wannan Medical College under Grant WK2018Z08, and in part by the Fundamental Research Funds for the Central Universities.

ABSTRACT In this paper, a Lagrange multiplier method is investigated for designing a distributed optimization algorithm, whose convergence is analyzed from the viewpoint of multi-agent networks with connected graphs. In the network, each agent holds both private and shared information, and the shared information is exchanged with the agent's neighbors over a network with a connected graph. Furthermore, a Lagrange-multiplier-based algorithm with a parallel computing architecture is designed for distributed optimization. Under mild conditions, the convergence of the algorithm, corresponding to the consensus of the Lagrange multipliers, is presented and proved. Simulation experiments are presented to illustrate the performance of the proposed method.

INDEX TERMS Distributed optimization, Lagrange multiplier, multi-agent network, convergence.

I. INTRODUCTION
Distributed optimization has attracted considerable attention in the research of science and engineering [1]–[5]. Recently, combined with the research of multi-agent networks or collective networks, distributed systems have been widely investigated for solving optimization problems with analysis methods for dynamic systems [6]–[9]. Furthermore, to solve the class of decomposable optimization problems in a decentralized manner, many distributed/parallel algorithms have been developed in the past years (see, e.g., [10]–[14] and the references therein). In decomposable optimization problems, an agent in the network generally knows only private information or globally shared information. In fact, a multi-agent network often includes both private and shared information, in which the private information is known by the agent itself and the shared information is known by the agent's neighbors.

For constrained optimization problems, especially the decomposable ones, centralized optimization algorithms are of course capable of solving them [15]. However, due to the decomposable properties, distributed/parallel algorithms provide more efficient and robust ways to solve large-scale optimization problems. In fact, the consensus algorithms based on multi-agent networks were proposed precisely for solving these decomposable distributed optimization problems. Under directed graphs of the networks, [16] proposes a continuous-time multi-agent system for solving the distributed optimization problem with a sum of convex objective functions. Considering the bound constraint of the optimization problem, a second-order multi-agent network is investigated for distributed optimization in [6]. Projection and subgradient methods are investigated to construct distributed algorithms for nonsmooth convex optimization [3], [17] and minimax optimization [18]. Inspired by the neurodynamic optimization methods [19]–[22], the collective neurodynamic approach is proposed for distributed optimization and multiple-objective distributed optimization [9], [14], [23]. Moreover, distributed optimization algorithms are extensively used in engineering problems, such as multi-robot formation control [24], energy management of multiple smart homes [25], distributed model predictive control [26], regression of distributed data [27], and optimal resource allocation [28].

Statement of contributions: First, a distributed optimization algorithm is proposed that considers both private and shared information in the multi-agent network, and it can be easily implemented as a single-program-multiple-data (SPMD) parallel algorithm. Second, the distributed algorithm here uses a fixed step size, which gives a faster convergence rate than algorithms with diminishing step sizes, especially when the iterations are close to the optimal solutions. Third, the proposed distributed algorithm is initialization-free, without any restriction on the initial values.

The remainder of the paper is organized as follows. In Section II, the investigated problem is formally stated. Section III presents the distributed algorithm and proves its convergence under a connected graph communication topology. In Section IV, the proposed method is utilized to solve the optimal placement problem with two numerical examples. Section V gives the conclusions of the paper.

II. PROBLEM FORMULATION
We consider an m-agent system to find the optimal solutions of a distributed optimization problem with private and shared information, which is described as

  min Σ_{i=1}^m f_i(x_i, y_i),
  subject to {information sharing rules for y_i},
       x_i ∈ X_i, y_i ∈ Y_i (i = 1, 2, . . . , m),   (1)

where f_i is a differentiable convex function, x_i ∈ R^{n_i} is the local private information of agent i, y_i ∈ R^{m_i} represents the information which agent i shares with its neighbors, and X_i and Y_i are the bound constraints for x_i and y_i respectively.

Next, we give an example to illustrate the optimization problem in (1).

Example: Consider an optimization problem to find the optimal least square error between four regions:

  min ||x_1 − x_2||^2 + ||x_2 − x_4||^2 + ||x_3 − x_4||^2,
  subject to x_i ∈ X_i (i = 1, 2, 3, 4).   (2)

Here, x_1 and x_3 are two private vectors for agents 1 and 3 respectively, and x_2 and x_4 are two shared vectors between agents 1 and 2, and agents 2 and 3, respectively. We rename the shared vectors in the agents. Let y_1 = x_2 for agent 1, y_2 = (x_2^T, x_4^T)^T for agent 2, and y_3 = x_4 for agent 3. Then the problem in (2) derives the following one:

  min ||x_1 − y_1||^2 + ||y_21 − y_22||^2 + ||x_3 − y_3||^2,
  subject to x_1 ∈ X_1, y_1 ∈ X_2, y_21 ∈ X_2, y_22 ∈ X_4, x_3 ∈ X_3, y_3 ∈ X_4,   (3)

in which (y_21^T, y_22^T)^T = y_2. To make the problems (2) and (3) equivalent, the following constraints are necessary:

  y_1 = y_21, y_22 = y_3.   (4)

Furthermore, let

  A_0 = [ I  −I  0   0 ]
        [ 0   0  I  −I ],

in which I is the identity matrix with appropriate dimension. Then the constraints in (4) can be equivalently written as

  A_0 y = 0,

where y = (y_1^T, y_2^T, y_3^T)^T.

Following this example, for the optimization problem (1), a constraint needs to be added to make the shared information coincide between related agents; the problem is then written in the following form:

  min Σ_{i=1}^m f_i(x_i, y_i),
  subject to A_0 y = 0,
       x_i ∈ X_i, y_i ∈ Y_i,   (5)

where x = (x_1^T, x_2^T, . . . , x_m^T)^T, y = (y_1^T, y_2^T, . . . , y_m^T)^T, and A_0 is the connection matrix between agents for sharing the information on y.

In (5), the objective function is in decomposable form. In order to design a distributed algorithm to solve (5), the equality constraint is rewritten in the following form:

  Σ_{i=1}^m A_i y_i = 0,

where A_i ∈ R^{s×r_i} and y_i ∈ R^{r_i}.

Then the Lagrange function of (5) can be defined as

  Ψ(x, y, β_0) = Σ_{i=1}^m f_i(x_i, y_i) + β_0^T Σ_{i=1}^m A_i y_i,   (6)

where β_0 ∈ R^s is the Lagrange multiplier. Furthermore, by the Saddle Point Theorem [15], (x*, y*) is an optimal solution to problem (5) if and only if there exists β_0* ∈ R^s such that (x*, y*, β_0*) is a saddle point of the Lagrange function in (6) on (x, y, β_0) ∈ Π_{i=1}^m X_i × Π_{i=1}^m Y_i × R^s; i.e., for any x ∈ Π_{i=1}^m X_i, y ∈ Π_{i=1}^m Y_i and β_0 ∈ R^s, (x*, y*, β_0*) satisfies the following inequalities:

  Ψ(x*, y*, β_0) ≤ Ψ(x*, y*, β_0*) ≤ Ψ(x, y, β_0*),

which further derives (x*, y*, β_0*) to be an optimal solution of the minimax problem

  min_{x∈X, y∈Y} max_{β_0∈R^s} Ψ(x, y, β_0),   (7)

where X = Π_{i=1}^m X_i and Y = Π_{i=1}^m Y_i. For problem (7), the Lagrange function (6) can be rewritten as

  Ψ(x, y, β) = Σ_{i=1}^m [f_i(x_i, y_i) + β_i^T A_i y_i],

where β = (β_1^T, β_2^T, . . . , β_m^T)^T ∈ R^{ms} and β_i is the estimate of the ith agent on the Lagrange multiplier.
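As a minimal illustration of how such a constraint matrix is assembled, the following NumPy sketch builds A_0 for the example above and checks that A_0 y = 0 holds exactly when y_1 = y_21 and y_22 = y_3. The block dimension n = 2 and the sample vectors are assumptions chosen here only for illustration.

```python
import numpy as np

n = 2                       # dimension of each shared block (assumption)
I = np.eye(n)
O = np.zeros((n, n))

# A0 = [I -I 0 0; 0 0 I -I] stacks the consensus constraints
# y1 = y21 and y22 = y3 from (4), with y = (y1, y21, y22, y3).
A0 = np.block([[I, -I, O, O],
               [O, O, I, -I]])

y1 = np.array([1.0, 2.0])
y21, y22 = y1.copy(), np.array([3.0, -1.0])
y3 = y22.copy()
y = np.concatenate([y1, y21, y22, y3])

print(np.allclose(A0 @ y, 0))   # True: the shared copies agree
```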


Lemma 1: Assume the graph of the multi-agent network for sharing the information on y to be connected and undirected. Then the following problem is equivalent to the minimax problem in (7):

  min_{x∈X, y∈Y} max_{β∈R^{ms}} Ψ(x, y, β),
  subject to Lβ = 0,   (8)

where L = L_0 ⊗ I, ⊗ is the Kronecker product, and L_0 is the graph's Laplacian matrix.

Proof: Inspired by the proof of Lemma 3 in [6], for a connected graph G, Lβ = 0 if and only if there is some β̄ ∈ R^s such that β_1 = β_2 = · · · = β_m = β̄. Then the problems in (7) and (8) are equivalent.

Theorem 1: (x*, y*, β*) is an optimal solution of (8) if and only if there exists α* ∈ R^{ms} such that (x*, y*, α*, β*) is a solution of the following equations:

  x* = g_X(x* − η∇x f(x*, y*)),
  y* = g_Y(y* − η(∇y f(x*, y*) + A^T β*)),
  Lβ* = 0,
  Ay* + σLα* = 0,   (9)

in which g_X and g_Y are the projection operators onto X and Y respectively, defined in Appendix A, A is the block diagonal matrix with blocks A_1, A_2, . . . , A_m, and η and σ are positive constants.

Proof: From the above analysis, (x*, y*, β*) is an optimal solution of (8) if and only if, for any (x, y, β) ∈ Θ = {(x, y, β) : x ∈ X, y ∈ Y, Lβ = 0},

  Ψ(x*, y*, β) ≤ Ψ(x*, y*, β*) ≤ Ψ(x, y, β*).   (10)

According to the right inequality in (10), (x*, y*) is a minimum point of Ψ(x, y, β*) on X × Y if and only if, for any x ∈ X and y ∈ Y, (x*, y*, β*) satisfies

  (x − x*)^T ∇x Ψ(x*, y*, β*) ≥ 0,
  (y − y*)^T ∇y Ψ(x*, y*, β*) ≥ 0,   (11)

and these are further equivalent to the following projection equations:

  x* = g_X(x* − η∇x Ψ(x*, y*, β*)),
  y* = g_Y(y* − η∇y Ψ(x*, y*, β*)),

where η > 0 is a constant. Because ∇x Ψ(x*, y*, β*) = ∇x f(x*, y*) and ∇y Ψ(x*, y*, β*) = ∇y f(x*, y*) + A^T β*, the first and second equations in (9) are obtained.

Furthermore, the left inequality in (10) implies that β* is a maximum point of Ψ(x*, y*, β) on β ∈ Θ_0 = {β : Lβ = 0}. For this maximum problem, the Lagrange function is defined as

  Ψ′(α, β) = Ψ(x*, y*, β) + σα^T Lβ,

where σ > 0 and the Lagrange multiplier α ∈ R^{ms}. From the Karush-Kuhn-Tucker (KKT) conditions [15], β* is an optimal solution if and only if there exists α* such that ∇_α Ψ′(α*, β*) = 0 and ∇_β Ψ′(α*, β*) = 0; i.e., Lβ* = 0 and ∇_β Ψ(x*, y*, β*) + σLα* = 0, where ∇_β Ψ(x*, y*, β*) = Ay*. This gives the last two equations in (9).

III. DISTRIBUTED ALGORITHM AND CONVERGENCE ANALYSIS
A. DISTRIBUTED ALGORITHM
From the equalities in (9), the distributed optimization algorithm for solving (5) is designed as

  x(k+1) = g_X(x(k) − η∇x f(x(k), y(k))),
  y(k+1) = g_Y(y(k) − η(∇y f(x(k), y(k)) + A^T(β(k) + Ay(k) + σL(α(k) − β(k))))),
  α(k+1) = α(k) − β(k) − Ay(k) − σL(α(k) − β(k)),
  β(k+1) = β(k) + Ay(k+1) + σL(α(k) − β(k)),   (12)

where k is the iteration step. The component form of algorithm (12) can be written as

  x_i(k+1) = g_{X_i}(x_i(k) − η∇x f_i(x_i(k), y_i(k))),
  y_i(k+1) = g_{Y_i}(y_i(k) − η(∇y f_i(x_i(k), y_i(k)) + A_i^T(β_i(k) + A_i y_i(k) + σ Σ_{j=1, j≠i}^m a_ij(α_i(k) − α_j(k) − β_i(k) + β_j(k))))),
  α_i(k+1) = α_i(k) − β_i(k) − A_i y_i(k) − σ Σ_{j=1, j≠i}^m a_ij(α_i(k) − α_j(k) − β_i(k) + β_j(k)),
  β_i(k+1) = β_i(k) + A_i y_i(k+1) + σ Σ_{j=1, j≠i}^m a_ij(α_i(k) − α_j(k) − β_i(k) + β_j(k)),   (13)

where a_ij denotes the connection weight of agents i and j. For the convenience of the following analysis, subscripts are used to denote the iterations. Let x_k = x(k), y_k = y(k), α_k = α(k) and β_k = β(k). Then algorithm (12) is rewritten in the compact form

  x_{k+1} = g_X(x_k − η∇x f(x_k, y_k)),
  y_{k+1} = g_Y(y_k − η(∇y f(x_k, y_k) + A^T(β_k + Ay_k + σL(α_k − β_k)))),
  α_{k+1} = α_k − β_k − Ay_k − σL(α_k − β_k),
  β_{k+1} = β_k + Ay_{k+1} + σL(α_k − β_k).   (14)
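As a minimal illustration of how (14) can be realized in code, the following NumPy sketch is provided (an illustrative sketch under stated assumptions, not a verbatim artifact of the paper): the box projections g_X and g_Y are taken to be componentwise clipping, which matches the bound constraints used in Section IV, while the gradient callbacks grad_fx and grad_fy, the block-diagonal matrix A, and the lifted Laplacian L are supplied by the problem instance.

```python
import numpy as np

def run_algorithm(grad_fx, grad_fy, A, L, lo, hi, x, y, eta, sigma, iters=2000):
    """Sketch of iteration (14): projected primal updates in (x, y) and
    multiplier updates in (alpha, beta). A is the block diagonal matrix of
    A_1, ..., A_m and L = L0 kron I_s is the lifted graph Laplacian."""
    alpha = np.zeros(A.shape[0])     # alpha_0, beta_0 are arbitrary:
    beta = np.zeros(A.shape[0])      # the algorithm is initialization-free
    for _ in range(iters):
        lap = sigma * (L @ (alpha - beta))            # sigma * L * (alpha_k - beta_k)
        zeta = beta + A @ y + lap                     # zeta_k of the analysis below
        x_next = np.clip(x - eta * grad_fx(x, y), lo, hi)                 # g_X as clipping
        y_next = np.clip(y - eta * (grad_fy(x, y) + A.T @ zeta), lo, hi)  # g_Y as clipping
        alpha, beta = alpha - zeta, beta + A @ y_next + lap
        x, y = x_next, y_next
    return x, y, alpha, beta
```

In a genuine SPMD deployment, each agent i would hold only its own blocks x_i, y_i, α_i, β_i and exchange (α_j, β_j) with its neighbors at every step, exactly as the component form (13) prescribes.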
B. CONVERGENCE ANALYSIS
Assume (x*, y*) to be a solution of problem (5). From Theorem 1, there exist α* ∈ R^{ms} and β* ∈ R^{ms} such that (x*, y*, α*, β*) satisfies the equations in (9). First, the following functions are introduced:

  V1(x_k, y_k) = ||x_k − x*||^2 + ||y_k − y*||^2,
  V2(α_k) = σ(α_k − α*)^T L(α_k − α*),
  V3(β_k) = ||β_k − β*||^2,

where || · || is the Euclidean norm.


Lemma 2: V1(x_k, y_k), V2(α_k) and V3(β_k) satisfy the following relations along the iterations in (14):
(i) V1(x_{k+1}, y_{k+1}) − V1(x_k, y_k) ≤ −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2 + 2η(x_{k+1} − x_k)^T(∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k)) + 2η(y_{k+1} − y_k)^T(∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k)) − 2η(Ay_{k+1} + σLα*)^T(ζ_k − β*);
(ii) V2(α_{k+1}) − V2(α_k) = 2σζ_k^T L(β_k − α_k + α*) + σ(ζ_k − β_k)^T L(ζ_k − β_k) − σβ_k^T Lβ_k;
(iii) V3(β_{k+1}) − V3(β_k) = −||β_{k+1} − β_k||^2 + 2(Ay_{k+1} + σL(α_k − β_k))^T(β_{k+1} − β*),
where ζ_k = β_k + Ay_k + σL(α_k − β_k).
Proof: The details of the proof are given in Appendix B.

Theorem 2: Assume the graph of the multi-agent network for sharing the information on y to be connected and undirected. Then (x_k, y_k) in the distributed optimization algorithm (14) is globally convergent to an optimal solution of problem (5) if η < min{1/(2l_x), 1/(2λ_max(A^T A + l_y I))} and σ < 1/λ_max(L), where λ_max(·) denotes the maximum eigenvalue of a matrix, and l_x and l_y are the Lipschitz constants of ∇x f and ∇y f respectively.

Proof: Assume (x*, y*) to be an optimal solution of (5). From Theorem 1, there exist α* ∈ R^{ms} and β* ∈ R^{ms} such that (x*, y*, α*, β*) satisfies the equations in (9).

Let V(x_k, y_k, α_k, β_k) = V1(x_k, y_k) + η(V2(α_k) + V3(β_k)). Then, combining the results in Lemma 2, it follows that

  V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k)
   ≤ V1(x_{k+1}, y_{k+1}) − V1(x_k, y_k) + η(V2(α_{k+1}) − V2(α_k) + V3(β_{k+1}) − V3(β_k))
   = −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2
    + 2η(x_{k+1} − x_k)^T(∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k))
    − 2η(Ay_{k+1} + σLα*)^T(ζ_k − β*) + 2ησζ_k^T L(β_k − α_k + α*)
    + ησ(ζ_k − β_k)^T L(ζ_k − β_k) − ησβ_k^T Lβ_k − η||β_{k+1} − β_k||^2
    + 2η(Ay_{k+1} + σL(α_k − β_k))^T(β_{k+1} − β*)
   = −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2
    + 2η(x_{k+1} − x_k)^T(∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k))
    − 2η(Ay_{k+1} + σLα*)^T(ζ_k − β*) + 2ησ(ζ_k − β*)^T L(β_k − α_k + α*)
    + ησ(ζ_k − β_k)^T L(ζ_k − β_k) − ησβ_k^T Lβ_k − η||β_{k+1} − β_k||^2
    + 2η(Ay_{k+1} + σL(α_k − β_k))^T(β_{k+1} − β*)
   = −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2
    + 2η(x_{k+1} − x_k)^T(∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k))
    − 2η(Ay_{k+1} + σL(α_k − β_k))^T(ζ_k − β*)
    + ησ(ζ_k − β_k)^T L(ζ_k − β_k) − ησβ_k^T Lβ_k − η||β_{k+1} − β_k||^2
    + 2η(Ay_{k+1} + σL(α_k − β_k))^T(β_{k+1} − β*)
   = −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2
    + 2η(x_{k+1} − x_k)^T(∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k))
    + 2η(Ay_{k+1} + σL(α_k − β_k))^T A(y_{k+1} − y_k)
    + ησ(ζ_k − β_k)^T L(ζ_k − β_k) − ησβ_k^T Lβ_k − η||β_{k+1} − β_k||^2,   (15)

where the second equality holds due to Lβ* = 0 and the last equality holds due to β_{k+1} − ζ_k = A(y_{k+1} − y_k).

For the terms in (15), we have

  −η||β_{k+1} − β_k||^2 + 2η(Ay_{k+1} + σL(α_k − β_k))^T A(y_{k+1} − y_k)
   = −η||β_{k+1} − β_k||^2 + 2η||Ay_{k+1} − Ay_k||^2 + 2η(Ay_k + σL(α_k − β_k))^T A(y_{k+1} − y_k)
   = −η||β_{k+1} − β_k||^2 + 2η||Ay_{k+1} − Ay_k||^2
    + 2η(β_k + Ay_k + σL(α_k − β_k) − β_{k+1})^T A(y_{k+1} − y_k) + 2η(β_{k+1} − β_k)^T A(y_{k+1} − y_k)
   = −η||β_{k+1} − β_k||^2 + 2η||Ay_{k+1} − Ay_k||^2 − 2η||β_{k+1} − ζ_k||^2 + 2η(β_{k+1} − β_k)^T(β_{k+1} − ζ_k)
   = 2η||Ay_{k+1} − Ay_k||^2 − η||β_{k+1} − ζ_k||^2
    − η||β_{k+1} − β_k||^2 − η||β_{k+1} − ζ_k||^2 + 2η(β_{k+1} − β_k)^T(β_{k+1} − ζ_k)
   = 2η||Ay_{k+1} − Ay_k||^2 − η||β_{k+1} − ζ_k||^2 − η||β_{k+1} − β_k − β_{k+1} + ζ_k||^2
   = 2η||Ay_{k+1} − Ay_k||^2 − η||β_{k+1} − ζ_k||^2 − η||β_k − ζ_k||^2
   ≤ 2η||Ay_{k+1} − Ay_k||^2 − η||β_k − ζ_k||^2,   (16)

where the third equality holds since ζ_k = β_k + Ay_k + σL(α_k − β_k) and A(y_{k+1} − y_k) = β_{k+1} − ζ_k.

Substituting (16) into (15) gives

  V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k)
   ≤ −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2
    + 2η(x_{k+1} − x_k)^T(∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k))
    + 2η||Ay_{k+1} − Ay_k||^2 − η||β_k − ζ_k||^2
    + ησ(ζ_k − β_k)^T L(ζ_k − β_k) − ησβ_k^T Lβ_k
   ≤ −(1 − 2ηl_x)(x_{k+1} − x_k)^T(x_{k+1} − x_k)
    − (y_{k+1} − y_k)^T(I − 2η(A^T A + l_y I))(y_{k+1} − y_k)
    − η(β_k − ζ_k)^T(I − σL)(β_k − ζ_k) − ησβ_k^T Lβ_k,   (17)

in which I is the identity matrix, and l_x and l_y are the Lipschitz constants of ∇x f and ∇y f respectively.

If η < min{1/(2l_x), 1/(2λ_max(A^T A + l_y I))} and σ < 1/λ_max(L), we have V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k) ≤ 0.

Combining with the LaSalle invariance principle [29] (Theorem 6.4), (x_k, y_k) converges to the largest invariant set


of

  Ξ = {(x_k, y_k) : V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k) = 0}.

Note that, from (17), if V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k) = 0, then (x_k, y_k, α_k, β_k) satisfies the equations in (9). Thus (x_k, y_k) is a solution to problem (5).

From the definition of V(x_k, y_k, α_k, β_k) and (17), we have V(x_k, y_k, α_k, β_k) ≤ V(x_0, y_0, α_0, β_0). Then x_k, y_k, Lα_k and β_k are bounded. There exists an increasing subsequence {k_m} (m = 1, 2, . . .) with limit points x′, y′, α′_L and β′ such that lim_{m→∞} x_{k_m} = x′, lim_{m→∞} y_{k_m} = y′, lim_{m→∞} Lα_{k_m} = α′_L and lim_{m→∞} β_{k_m} = β′, in which (x′, y′) is a solution of problem (5) according to the above analysis. Since lim_{m→∞} Lα_{k_m} = α′_L, there exists a vector α′ ∈ R^{ms} such that Lα′ = α′_L, according to the completeness of the linear space {Lα}. Then x′, y′, α′ and β′ satisfy the equations in (9). Finally, the following function is defined:

  V′(x_k, y_k, α_k, β_k) = ||x_k − x′||^2 + ||y_k − y′||^2 + η(σ(α_k − α′)^T L(α_k − α′) + ||β_k − β′||^2).

On one hand, from the above analysis, one gets V′(x_k, y_k, α_k, β_k) ≤ V′(x_{k−1}, y_{k−1}, α_{k−1}, β_{k−1}). On the other hand, for any k there exists k_m such that V′(x_k, y_k, α_k, β_k) ≤ V′(x_{k_m}, y_{k_m}, α_{k_m}, β_{k_m}). Letting m → ∞, then k → ∞, we get lim_{k→∞} V′(x_k, y_k, α_k, β_k) = lim_{m→∞} V′(x_{k_m}, y_{k_m}, α_{k_m}, β_{k_m}) = 0. From V′(x_k, y_k, α_k, β_k) ≥ ||x_k − x′||^2 + ||y_k − y′||^2, it follows that lim_{k→∞} (x_k, y_k) = (x′, y′). It is noted that V′(x_k, y_k, α_k, β_k) is radially unbounded in x_k and y_k, which gives the global convergence of (x_k, y_k) to an optimal solution of problem (8) for any initial point.

Furthermore, combining the above analysis with Lemma 1, the state vector (x_k, y_k) is globally convergent to an optimal solution of (5), provided that the graph of the multi-agent network for sharing the information on y is undirected and connected.
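Before moving to the application, note that the step-size conditions of Theorem 2 are directly computable offline. Below is a minimal sketch, assuming a quadratic objective so that the Lipschitz constants l_x and l_y are the largest eigenvalues of Hessian blocks Hxx and Hyy (names chosen here purely for illustration):

```python
import numpy as np

def step_sizes(Hxx, Hyy, A, L, safety=0.9):
    """Step sizes satisfying Theorem 2 for a quadratic objective, where the
    Lipschitz constants l_x, l_y are the largest eigenvalues of the Hessian
    blocks Hxx, Hyy (an assumption made here for illustration)."""
    lx = np.linalg.eigvalsh(Hxx).max()
    ly = np.linalg.eigvalsh(Hyy).max()
    bound_y = np.linalg.eigvalsh(A.T @ A + ly * np.eye(A.shape[1])).max()
    eta = safety * min(1.0 / (2.0 * lx), 1.0 / (2.0 * bound_y))
    sigma = safety / np.linalg.eigvalsh(L).max()
    return eta, sigma
```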
IV. APPLICATION TO OPTIMAL PLACEMENT
Next, the optimal placement problem [30] is investigated using the proposed distributed optimization algorithms. We consider some entities in R^n, with some pairs of the entities connected by links. Among the entities, the positions of some are fixed, and the remaining ones are free. Our aim is to place the free entities to minimize the total length of the connections. For this problem, all the entities compose a network with an undirected graph. If we set the location of each node to be x_i ∈ R^n, the problem can be formulated as

  min Σ_{(i,j)∈E} f_ij(x_i, x_j),

in which E is the set of edges in the graph, f_ij : R^n × R^n → R is a cost function on link (i, j), and f_ij = 0 for unconnected nodes i and j.

FIGURE 1. The topology structure and optimal locations of the free nodes in example 1.

Example 1: In the plane R^2, 16 points are considered. Among them, 10 points are fixed, labeled as b_l ∈ R^2 (l = 1, 2, . . . , 10), and the other 6 points are free, labeled as x_i ∈ R^2 (i = 1, 2, . . . , 6). Fig. 1 depicts the links of all nodes. In addition, a bounding box constraint is set to restrict the location of each free point. The squared Euclidean norm is used to measure the distance between two connected nodes. Thus we get the following optimization problem:

  min Σ_{i=1}^5 (1/2)(||x_i − x_6||^2 + Σ_{l∈N_i} ||x_i − b_l||^2),
  subject to −2 ≤ x_i ≤ 2 (i = 1, 2, . . . , 6),   (18)

in which N_i denotes the index set of neighbors of node i.

We use the proposed algorithm in (13) to solve this problem. Each node i is assigned the local objective function

  f_i(x) = (1/2)(||x_i − x_6||^2 + Σ_{l∈N_i} ||x_i − b_l||^2).

In the network, x_i (i = 1, 2, . . . , 5) is the private information of agent i, and x_6 is shared information between agent 6 and agents i (i = 1, 2, . . . , 5). We rename the shared vectors in the agents. Let y_1 = y_2 = · · · = y_5 = x_6 for the five agents. Then the problem in (18) derives the following problem:

  min Σ_{i=1}^5 (1/2)(||x_i − y_i||^2 + Σ_{l∈N_i} ||x_i − b_l||^2),
  subject to −2 ≤ x_i, y_i ≤ 2 (i = 1, 2, . . . , 5),
       A_0 y = 0,

VOLUME 7, 2019 83301


Y. Zhao, Q. Liu: Lagrange Multiplier Method for Distributed Optimization Based on Multi-Agent Network

where

  A_0 = [ I  −I   0   0   0 ]
        [ 0   I  −I   0   0 ]
        [ 0   0   I  −I   0 ]
        [ 0   0   0   I  −I ].

FIGURE 2. Convergence behaviors of output variables of the network for solving the problem in example 1.

With random initial values, the simulation results are shown in Figs. 1 and 2. The optimal locations of the free nodes are shown in Fig. 1, in which the blue solid squares are the fixed points, the red solid disks are the free points, and the green and pink dashed lines are the links. The convergence behaviors of the outputs of the multi-agent network are depicted in Fig. 2, which shows that the algorithm attains the optimal solution of the problem.
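To make this setup concrete, the following sketch assembles Example 1 for the run_algorithm sketch given after (14). The anchor positions b_l and the path communication graph among the five agents are hypothetical placeholders, since the paper specifies these data only through Fig. 1.

```python
import numpy as np

n, m = 2, 5                     # points in R^2, five agents
I2 = np.eye(n)

# A0 y = 0 chains the five copies of x6: y1 = y2 = ... = y5.
A0 = np.zeros((4 * n, m * n))
for r in range(4):
    A0[r*n:(r+1)*n, r*n:(r+1)*n] = I2
    A0[r*n:(r+1)*n, (r+1)*n:(r+2)*n] = -I2

# Hypothetical anchor positions b_l per agent (the true ones appear in Fig. 1).
anchors = [[np.array([-1.5, 1.0])], [np.array([-1.0, -1.5])],
           [np.array([1.2, 1.3])], [np.array([1.5, -1.0])], [np.array([0.0, 1.8])]]

def grad_fx(x, y):
    # For f_i = (1/2)(||x_i - y_i||^2 + sum_l ||x_i - b_l||^2):
    # the gradient w.r.t. x_i is (x_i - y_i) + sum_l (x_i - b_l).
    g = np.empty_like(x)
    for i in range(m):
        xi = x[i*n:(i+1)*n]
        g[i*n:(i+1)*n] = (xi - y[i*n:(i+1)*n]) + sum(xi - bl for bl in anchors[i])
    return g

def grad_fy(x, y):
    return y - x                # gradient w.r.t. y_i is -(x_i - y_i)

# A = blockdiag(A_1, ..., A_5) with A_i the i-th column block of A0,
# and L = L0 kron I_s for an assumed path graph among the agents.
s = A0.shape[0]
A = np.zeros((m * s, m * n))
for i in range(m):
    A[i*s:(i+1)*s, i*n:(i+1)*n] = A0[:, i*n:(i+1)*n]
L0 = np.diag([1., 2., 2., 2., 1.]) - (np.eye(m, k=1) + np.eye(m, k=-1))
L = np.kron(L0, np.eye(s))

x_opt, y_opt, _, _ = run_algorithm(grad_fx, grad_fy, A, L, -2, 2,
                                   np.random.uniform(-2, 2, m * n),
                                   np.random.uniform(-2, 2, m * n),
                                   eta=0.05, sigma=0.05, iters=5000)
```

The fixed values eta=0.05 and sigma=0.05 are small illustrative choices; Theorem 2 gives principled bounds.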
Example 2: We consider the problem in Example 1 with a different topology of the graph. The six free points x_i ∈ R^2 (i = 1, 2, . . . , 6) are connected in a circular graph, and the neighbors of each node are given in Fig. 3. The optimization problem is formulated as

  min Σ_{i=1}^6 (1/2)(Σ_{j∈N_i} ||x_i − x_j||^2 + Σ_{l∈N_i} ||x_i − b_l||^2),
  s.t. Σ_{i=1}^6 x_i = 0,
     −2 ≤ x_i ≤ 2 (i = 1, 2, . . . , 6).   (19)

FIGURE 3. The topology structure and optimal locations of the free nodes in example 2.

In the network, since each x_i (i = 1, 2, . . . , 6) is shared with its neighbors, we rename the shared vectors in the agents. Let y_ij (i = 1, 2, . . . , 6, j = 1, 2) be the shared vectors; i.e., y_11 = y_12 = x_1, y_21 = y_22 = x_2, . . . , y_61 = y_62 = x_6. Then the problem in (19) derives the following problem (in which y_71 = y_11):

  min Σ_{i=1}^6 (||y_i2 − y_(i+1)1||^2 + (1/2) Σ_{l∈N_i} ||y_i2 − b_l||^2),
  s.t. −2 ≤ y_ij ≤ 2, i ∈ {1, 2, . . . , 6}, j ∈ {1, 2},
     A_0 y = 0,

where y = (y_12^T, y_21^T, y_22^T, y_31^T, . . . , y_62^T, y_11^T)^T and

  A_0 = [ I  0   I  0   I  0   I  0   I  0   I   0 ]
        [ I  0   0  0   0  0   0  0   0  0   0  −I ]
        [ 0  I  −I  0   0  0   0  0   0  0   0   0 ]
        [ 0  0   0  I  −I  0   0  0   0  0   0   0 ]
        [ 0  0   0  0   0  I  −I  0   0  0   0   0 ]
        [ 0  0   0  0   0  0   0  I  −I  0   0   0 ]
        [ 0  0   0  0   0  0   0  0   0  I  −I   0 ].

FIGURE 4. Convergence behaviors of output variables of the network for solving the problem in example 2.

We use a six-agent network to solve this problem. Simulation results in Figs. 3 and 4 are generated with random initial values. The convergence of the output vectors is depicted in Fig. 4, and the optimal locations of the free nodes are shown in Fig. 3.
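The block structure of this A_0 is mechanical to build; a small sketch assembling it with np.kron from the 7-by-12 sign pattern above (the block dimension n = 2 and the sample data are assumptions for illustration):

```python
import numpy as np

n = 2   # each block is an n-by-n identity (points in R^2)

# Sign pattern of A0: row 1 enforces sum_i x_i = 0 via the y_i2 copies,
# rows 2-7 enforce consensus between the two shared copies of each x_i.
pattern = [
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1],
    [0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0],
]
A0 = np.kron(np.array(pattern, dtype=float), np.eye(n))   # shape (7n, 12n)

# Sanity check: consensus copies of zero-sum points satisfy A0 y = 0.
x = np.random.uniform(-2, 2, (6, n))
x -= x.mean(axis=0)                       # enforce sum_i x_i = 0
idx = [0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 0]  # (y12, y21, y22, ..., y62, y11)
y = np.concatenate([x[i] for i in idx])
print(np.allclose(A0 @ y, 0))             # True
```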


V. CONCLUSIONS
In this paper, a distributed convergence/consensus algorithm based on the Lagrange multiplier method was proposed to seek the optimal solutions of decomposable optimization problems using the communication protocol of multi-agent networks. Both private and shared information were considered in the networks. By a simple mathematical transformation, the optimization problem was converted into a separable one, which is easily implemented as a single-program-multiple-data (SPMD) parallel algorithm. Combining with the dynamic behavior analysis method for multi-agent networks, the convergence/consensus was proved under a connected and undirected graph. Finally, the optimal placement problem was investigated, and two numerical examples were given to illustrate the good performance of the proposed method.
APPENDIX A
A. GRAPH THEORY
A weighted graph describing the multi-agent network is generally denoted by G = (V, E, A), which consists of three parts: a vertex set V = {v_1, v_2, . . . , v_m}; an edge set E ⊆ V × V describing all links between vertices; and an adjacency matrix A = {a_ij}_{m×m} with a_ij > 0 if (v_i, v_j) ∈ E and a_ij = 0 otherwise. The pair of nodes (v_i, v_j) is used to denote an edge e_ij, meaning that nodes v_i and v_j can receive information from each other. We consider undirected graphs, and the degree of node v_i is generally defined as d_i = Σ_{j=1, j≠i}^m a_ij. The Laplacian matrix of a graph is defined as L_0 = D − A, which is important for the analysis of the graph's properties, where D = diag{d_1, d_2, . . . , d_m}. For an undirected graph, the Laplacian matrix L_0 is symmetric. A path between distinct nodes v_i and v_j in the graph is a sequence of edges (v_i, v_i1), (v_i1, v_i2), . . . , (v_ik, v_j) with distinct nodes v_il ∈ V. In addition, from algebraic graph theory, a graph is connected if and only if L_0 is positive semi-definite with a simple zero eigenvalue and L_0 1_m = 0, where 1_m = (1, 1, . . . , 1)^T ∈ R^m.
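A brief sketch of these definitions, building L_0 = D − A from a symmetric adjacency matrix and checking the connectivity criterion numerically (the four-node ring is an arbitrary example chosen here for illustration):

```python
import numpy as np

def laplacian(adj):
    """L0 = D - A for a symmetric, nonnegative adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

# A ring of four nodes as a small example.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
L0 = laplacian(adj)
eigs = np.linalg.eigvalsh(L0)
print(np.all(eigs >= -1e-12))                  # positive semi-definite
print(np.isclose(eigs[0], 0), eigs[1] > 1e-9)  # simple zero eigenvalue <=> connected
print(np.allclose(L0 @ np.ones(4), 0))         # L0 * 1 = 0

# The matrix used in Lemma 1 is the Kronecker lift L = L0 kron I_s.
L = np.kron(L0, np.eye(3))
```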
B. PROJECTION OPERATOR
Assume Ω to be a closed convex set. The map g_Ω(·) from R^n to Ω ⊆ R^n defined as

  g_Ω(v) = arg min_{γ∈Ω} ||γ − v||

is called the projection operator.

Lemma 3 [31]: Assume g_Ω(u) is the projection of u onto Ω. Then we have

  (u − g_Ω(u))^T (g_Ω(u) − v) ≥ 0, for all u ∈ R^n, v ∈ Ω.
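For the box sets X_i and Y_i used throughout the paper, the projection is a componentwise clip; a small sketch that also spot-checks the variational inequality of Lemma 3 (the bounds and sample points are arbitrary illustrative choices):

```python
import numpy as np

def project_box(v, lo, hi):
    """Projection onto the box {lo <= z <= hi}, the g_Omega used for X and Y."""
    return np.clip(v, lo, hi)

rng = np.random.default_rng(0)
u = rng.uniform(-5, 5, 4)              # arbitrary point
v = rng.uniform(-2, 2, 4)              # any point inside the box [-2, 2]^4
gu = project_box(u, -2, 2)
print((u - gu) @ (gu - v) >= 0)        # Lemma 3: always True
```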
APPENDIX B
Proof of Lemma 2:
(i) Let

  g̃_X = g_X(x_k − η∇x f(x_k, y_k)),

which gives x_{k+1} = g̃_X. Then,

  ||x_{k+1} − x*||^2 − ||x_k − x*||^2
   = ||g̃_X − x*||^2 − ||x_k − x*||^2
   = (g̃_X − x_k)^T (g̃_X + x_k − 2x*)
   = −||g̃_X − x_k||^2 + 2(g̃_X − x_k)^T (g̃_X − x*)
   = −||g̃_X − x_k||^2 + 2(g̃_X − x_k + η∇x f(x_k, y_k))^T (g̃_X − x*) − 2η∇x f(x_k, y_k)^T (g̃_X − x*).

Letting u = x_k − η∇x f(x_k, y_k) and v = x* in the inequality of Lemma 3, one gets

  (g̃_X − x_k + η∇x f(x_k, y_k))^T (g̃_X − x*) ≤ 0.

Combining with x_{k+1} = g̃_X, we have

  ||x_{k+1} − x*||^2 − ||x_k − x*||^2 ≤ −||x_{k+1} − x_k||^2 − 2η(x_{k+1} − x*)^T ∇x f(x_k, y_k).   (20)

Let

  g̃_Y = g_Y(y_k − η(∇y f(x_k, y_k) + A^T(β_k + Ay_k + σL(α_k − β_k)))),

which gives y_{k+1} = g̃_Y. Then,

  ||y_{k+1} − y*||^2 − ||y_k − y*||^2
   = ||g̃_Y − y*||^2 − ||y_k − y*||^2
   = (g̃_Y − y_k)^T (g̃_Y + y_k − 2y*)
   = −||g̃_Y − y_k||^2 + 2(g̃_Y − y_k)^T (g̃_Y − y*)
   = −||g̃_Y − y_k||^2 + 2(g̃_Y − y_k + η(∇y f(x_k, y_k) + A^T(β_k + Ay_k + σL(α_k − β_k))))^T (g̃_Y − y*)
    − 2η(∇y f(x_k, y_k) + A^T(β_k + Ay_k + σL(α_k − β_k)))^T (g̃_Y − y*).

Letting u = y_k − η(∇y f(x_k, y_k) + A^T(β_k + Ay_k + σL(α_k − β_k))) and v = y* in the inequality of Lemma 3, one gets

  (g̃_Y − y_k + η(∇y f(x_k, y_k) + A^T(β_k + Ay_k + σL(α_k − β_k))))^T (g̃_Y − y*) ≤ 0.

As y_{k+1} = g̃_Y, we have

  ||y_{k+1} − y*||^2 − ||y_k − y*||^2 ≤ −||y_{k+1} − y_k||^2 − 2η(y_{k+1} − y*)^T ∇y f(x_k, y_k)
   − 2η(y_{k+1} − y*)^T A^T(β_k + Ay_k + σL(α_k − β_k)).   (21)

From the convexity of f, we have

  (x_k − x*)^T ∇x f(x_k, y_k) + (y_k − y*)^T ∇y f(x_k, y_k) ≥ f(x_k, y_k) − f(x*, y*),

and

  (x_{k+1} − x_k)^T ∇x f(x_{k+1}, y_{k+1}) + (y_{k+1} − y_k)^T ∇y f(x_{k+1}, y_{k+1}) ≥ f(x_{k+1}, y_{k+1}) − f(x_k, y_k),

which implies

  (x_{k+1} − x*)^T ∇x f(x_k, y_k) + (y_{k+1} − y*)^T ∇y f(x_k, y_k)
   = (x_{k+1} − x_k)^T ∇x f(x_k, y_k) + (y_{k+1} − y_k)^T ∇y f(x_k, y_k)
    + (x_k − x*)^T ∇x f(x_k, y_k) + (y_k − y*)^T ∇y f(x_k, y_k)
   ≥ (x_{k+1} − x_k)^T ∇x f(x_k, y_k) + (y_{k+1} − y_k)^T ∇y f(x_k, y_k) + f(x_k, y_k) − f(x*, y*)
   = (x_{k+1} − x_k)^T (∇x f(x_k, y_k) − ∇x f(x_{k+1}, y_{k+1}))
    + (y_{k+1} − y_k)^T (∇y f(x_k, y_k) − ∇y f(x_{k+1}, y_{k+1}))
    + (x_{k+1} − x_k)^T ∇x f(x_{k+1}, y_{k+1}) + (y_{k+1} − y_k)^T ∇y f(x_{k+1}, y_{k+1}) + f(x_k, y_k) − f(x*, y*)
   ≥ (x_{k+1} − x_k)^T (∇x f(x_k, y_k) − ∇x f(x_{k+1}, y_{k+1}))
    + (y_{k+1} − y_k)^T (∇y f(x_k, y_k) − ∇y f(x_{k+1}, y_{k+1}))
    + f(x_{k+1}, y_{k+1}) − f(x*, y*).

Substituting this into (20) and (21) results in

  V1(x_{k+1}, y_{k+1}) − V1(x_k, y_k)
   ≤ −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2
    + 2η(x_{k+1} − x_k)^T (∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T (∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k))
    + 2η(f(x*, y*) − f(x_{k+1}, y_{k+1}))
    − 2η(y_{k+1} − y*)^T A^T(β_k + Ay_k + σL(α_k − β_k)).   (22)

From (11), for any (x, y) ∈ X × Y we have

  (x − x*)^T ∇x f(x*, y*) + (y − y*)^T (∇y f(x*, y*) + A^T β*) ≥ 0.

Since (x_{k+1}, y_{k+1}) ∈ X × Y, it follows that

  (x_{k+1} − x*)^T ∇x f(x*, y*) + (y_{k+1} − y*)^T (∇y f(x*, y*) + A^T β*) ≥ 0.

From the convexity of f(x, y), one gets

  (x_{k+1} − x*)^T ∇x f(x*, y*) + (y_{k+1} − y*)^T ∇y f(x*, y*) ≤ f(x_{k+1}, y_{k+1}) − f(x*, y*).

Then

  f(x_{k+1}, y_{k+1}) − f(x*, y*) + (y_{k+1} − y*)^T A^T β* ≥ 0.

Combining with (22), it follows that

  V1(x_{k+1}, y_{k+1}) − V1(x_k, y_k)
   ≤ −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2
    + 2η(x_{k+1} − x_k)^T (∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T (∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k))
    − 2η(y_{k+1} − y*)^T A^T(β_k + Ay_k + σL(α_k − β_k) − β*).

Note that Ay* + σLα* = 0; substituting it into the previous inequality results in

  V1(x_{k+1}, y_{k+1}) − V1(x_k, y_k)
   ≤ −||x_{k+1} − x_k||^2 − ||y_{k+1} − y_k||^2
    + 2η(x_{k+1} − x_k)^T (∇x f(x_{k+1}, y_{k+1}) − ∇x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T (∇y f(x_{k+1}, y_{k+1}) − ∇y f(x_k, y_k))
    − 2η(Ay_{k+1} + σLα*)^T (ζ_k − β*),   (23)

where ζ_k = β_k + Ay_k + σL(α_k − β_k).

(ii) Since α_{k+1} = α_k − β_k − Ay_k − σL(α_k − β_k), we have α_{k+1} = α_k − ζ_k and

  V2(α_{k+1}) − V2(α_k)
   = σ(α_{k+1} − α*)^T L(α_{k+1} − α*) − σ(α_k − α*)^T L(α_k − α*)
   = σ(α_k − α* − ζ_k)^T L(α_k − α* − ζ_k) − σ(α_k − α*)^T L(α_k − α*)
   = −2σζ_k^T L(α_k − α*) + σζ_k^T Lζ_k
   = −2σζ_k^T L(α_k − α*) + σ(ζ_k − β_k)^T L(ζ_k − β_k) + 2σζ_k^T Lβ_k − σβ_k^T Lβ_k
   = 2σζ_k^T L(β_k − α_k + α*) + σ(ζ_k − β_k)^T L(ζ_k − β_k) − σβ_k^T Lβ_k.

(iii) Since β_{k+1} = β_k + Ay_{k+1} + σL(α_k − β_k), we have

  V3(β_{k+1}) − V3(β_k)
   = ||β_{k+1} − β*||^2 − ||β_k − β*||^2
   = (β_{k+1} − β_k)^T (β_{k+1} + β_k − 2β*)
   = −||β_{k+1} − β_k||^2 + 2(β_{k+1} − β_k)^T (β_{k+1} − β*)
   = −||β_{k+1} − β_k||^2 + 2(Ay_{k+1} + σL(α_k − β_k))^T (β_{k+1} − β*).

REFERENCES
[1] M. Rabbat and R. Nowak, "Distributed optimization in sensor networks," in Proc. 3rd Int. Symp. Inform. Process. Sensor Netw., Berkeley, CA, USA, Apr. 2004, pp. 20–27.
[2] L. Xiao, S. Boyd, and S.-J. Kim, "Distributed average consensus with least-mean-square deviation," J. Parallel Distrib. Comput., vol. 67, no. 1, pp. 33–46, 2007.
[3] A. Nedic and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Trans. Autom. Control, vol. 54, no. 1, pp. 48–61, Jan. 2009.
[4] D. Feijer and F. Paganini, "Stability of primal–dual gradient dynamics and applications to network optimization," Automatica, vol. 46, no. 12, pp. 1974–1981, 2010.
[5] A. J. Wood, B. F. Wollenberg, and G. B. Sheble, Power Generation, Operation, and Control. Hoboken, NJ, USA: Wiley, 2013.
[6] Q. Liu and J. Wang, "A second-order multi-agent network for bound-constrained distributed optimization," IEEE Trans. Autom. Control, vol. 60, no. 12, pp. 3310–3315, Dec. 2015.
[7] X. Wang, Y. Hong, and H. Ji, "Distributed optimization for a class of nonlinear multiagent systems with disturbance rejection," IEEE Trans. Cybern., vol. 46, no. 7, pp. 1655–1666, Jul. 2016.
[8] S. Yang, Q. Liu, and J. Wang, "A multi-agent system with a proportional-integral protocol for distributed constrained optimization," IEEE Trans. Autom. Control, vol. 62, no. 7, pp. 3461–3467, Jul. 2017.
[9] Q. Liu, S. Yang, and J. Wang, "A collective neurodynamic approach to distributed constrained optimization," IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 8, pp. 1747–1758, Aug. 2017.


[10] R. Mudumbai, S. Dasgupta, and B. B. Cho, "Distributed control for optimal economic dispatch of a network of heterogeneous power generators," IEEE Trans. Power Syst., vol. 27, no. 4, pp. 1750–1760, Nov. 2012.
[11] C. Li, X. Yu, W. Yu, T. Huang, and Z.-W. Liu, "Distributed event-triggered scheme for economic dispatch in smart grids," IEEE Trans. Ind. Informat., vol. 12, no. 5, pp. 1775–1785, Oct. 2016.
[12] I. Kouveliotis-Lysikatos and N. Hatziargyriou, "Fully distributed economic dispatch of distributed generators in active distribution networks considering losses," IET Gener. Transmiss. Distrib., vol. 11, no. 3, pp. 627–636, Feb. 2017.
[13] Q. Liu, S. Yang, and Y. Hong, "Constrained consensus algorithms with fixed step size for distributed convex optimization over multiagent networks," IEEE Trans. Autom. Control, vol. 62, no. 8, pp. 4259–4265, Aug. 2017.
[14] X. Le, S. Chen, Z. Yan, and J. Xi, "A neurodynamic approach to distributed optimization with globally coupled constraints," IEEE Trans. Cybern., vol. 48, no. 11, pp. 3149–3158, Nov. 2018.
[15] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, 3rd ed. Hoboken, NJ, USA: Wiley, 2006.
[16] B. Gharesifard and J. Cortés, "Distributed continuous-time convex optimization on weight-balanced digraphs," IEEE Trans. Autom. Control, vol. 59, no. 3, pp. 781–786, Mar. 2014.
[17] A. Nedic, A. Ozdaglar, and P. A. Parrilo, "Constrained consensus and optimization in multi-agent networks," IEEE Trans. Autom. Control, vol. 55, no. 4, pp. 922–938, Apr. 2010.
[18] S. Yang, J. Wang, and Q. Liu, "Cooperative–competitive multiagent systems for distributed minimax optimization subject to bounded constraints," IEEE Trans. Autom. Control, vol. 64, no. 4, pp. 1358–1372, Apr. 2019.
[19] X. Hu and J. Wang, "An improved dual neural network for solving a class of quadratic programming problems and its k-winners-take-all application," IEEE Trans. Neural Netw., vol. 19, no. 12, pp. 2022–2031, Dec. 2008.
[20] X. Hu, C. Sun, and B. Zhang, "Design of recurrent neural networks for solving constrained least absolute deviation problems," IEEE Trans. Neural Netw., vol. 21, no. 7, pp. 1073–1086, Jul. 2010.
[21] Q. Liu and Y. Yang, "Global exponential system of projection neural networks for system of generalized variational inequalities and related nonlinear minimax problems," Neurocomputing, vol. 73, nos. 10–12, pp. 2069–2076, 2010.
[22] Q. Liu, T. Huang, and J. Wang, "One-layer continuous- and discrete-time projection neural networks for solving variational inequalities and related optimization problems," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1308–1318, Jul. 2014.
[23] S. Yang, Q. Liu, and J. Wang, "A collaborative neurodynamic approach to multiple-objective distributed optimization," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 4, pp. 981–992, Apr. 2018.
[24] H. Xiao and C. L. P. Chen, "Leader-follower consensus multi-robot formation control using neurodynamic-optimization-based nonlinear model predictive control," IEEE Access, vol. 7, pp. 43581–43590, Apr. 2019.
[25] I.-Y. Joo and D.-H. Choi, "Distributed optimization framework for energy management of multiple smart homes with distributed energy resources," IEEE Access, vol. 5, pp. 15551–15560, Aug. 2017.
[26] H. Zhang, D. Yue, and X. Xie, "Distributed model predictive control for hybrid energy resource system with large-scale decomposition coordination approach," IEEE Access, vol. 4, pp. 9332–9344, Dec. 2016.
[27] S. S. Ram, A. Nedić, and V. V. Veeravalli, "A new class of distributed optimization algorithms: Application to regression of distributed data," Optim. Methods Softw., vol. 27, no. 1, pp. 71–88, 2012.
[28] K. Li, Q. Liu, S. Yang, J. Cao, and G. Lu, "Cooperative optimization of dual multiagent system for optimal resource allocation," IEEE Trans. Syst., Man, Cybern., Syst., to be published. doi: 10.1109/TSMC.2018.2859364.
[29] J. P. La Salle, The Stability of Dynamical Systems. Philadelphia, PA, USA: SIAM, 1976.
[30] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[31] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications. New York, NY, USA: Academic, 1982.

YAN ZHAO received the B.S. and M.S. degrees in mathematics from Anhui Normal University, Wuhu, China, in 2001 and 2012, respectively. She is currently a Lecturer with the School of Common Courses, Wannan Medical College, Wuhu, China. Her current research interests include computational mathematics and optimization theory.

QINGSHAN LIU (S'07–M'08–SM'15) received the B.S. degree in mathematics from Anhui Normal University, Wuhu, China, in 2001, the M.S. degree in applied mathematics from Southeast University, Nanjing, China, in 2005, and the Ph.D. degree in automation and computer-aided engineering from The Chinese University of Hong Kong, Hong Kong, in 2008. He is currently a Professor with the School of Mathematics, Southeast University, Nanjing, China. His current research interests include optimization theory and applications, artificial neural networks, computational intelligence, and multi-agent systems. He is a member of the Editorial Board of Neural Networks. He serves as an Associate Editor for the IEEE TRANSACTIONS ON CYBERNETICS and the IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS.
