A Lagrange Multiplier Method For Distributed Optimization Based On Multi-Agent Network With Private and Shared Information
ABSTRACT In this paper, a Lagrange multiplier method is investigated for designing a distributed optimization algorithm, whose convergence is analyzed from the viewpoint of multi-agent networks with connected graphs. In the network, each agent holds both private and shared information. The shared information is exchanged with the agent's neighbors over a network with a connected graph. Furthermore, a Lagrange-multiplier-based algorithm with a parallel computing architecture is designed for distributed optimization. Under mild conditions, the convergence of the algorithm, which corresponds to the consensus of the Lagrange multipliers, is stated and proved. Simulation experiments are presented to illustrate the performance of the proposed method.
smart homes [25], distributed model predictive control [26], regression of distributed data [27], and optimal resource allocation [28].

Statement of contributions: First, a distributed optimization algorithm is proposed that takes into account both private and shared information in the multi-agent network and can be easily implemented as a single-program-multiple-data (SPMD) parallel algorithm. Second, the distributed algorithm uses a fixed step size, which yields a faster convergence rate than algorithms with diminishing step sizes, especially when the iterates are close to the optimal solutions. Third, the proposed distributed algorithm is initialization-free, without any restriction on the initial values.

The remainder of the paper is organized as follows. In Section II, the investigated problem is formally stated. Section III presents the distributed algorithm and proves its convergence under a connected communication graph. In Section IV, the proposed method is applied to the optimal placement problem with two numerical examples. Section V gives the conclusions of the paper.

II. PROBLEM FORMULATION
We consider an m-agent system for finding the optimal solutions of a distributed optimization problem with private and shared information, which is described as

  min Σ_{i=1}^{m} f_i(x_i, y_i),
  subject to {information sharing rules for y_i},
  x_i ∈ X_i, y_i ∈ Y_i (i = 1, 2, ..., m),   (1)

where f_i is a differentiable convex function, x_i ∈ R^{n_i} is the local private information of agent i, y_i ∈ R^{m_i} represents the information which agent i shares with its neighbors, and X_i and Y_i are the bound constraints for x_i and y_i respectively.

Next, we give an example to illustrate the optimization problem in (1).

Example: Consider an optimization problem to find the optimal least square error between four regions:

  min ||x_1 − x_2||² + ||x_2 − x_4||² + ||x_3 − x_4||².   (2)

Here, x_1 and x_3 are two private vectors for agents 1 and 3 respectively, and x_2 and x_4 are two vectors shared between agents 1 and 2 and between agents 2 and 3, respectively. We rename the shared vectors in the agents. Let y_1 = x_2 for agent 1, y_2 = (x_2^T, x_4^T)^T for agent 2, and y_3 = x_4 for agent 3. Then the problem in (2) derives the following one:

  min ||x_1 − y_1||² + ||y_{21} − y_{22}||² + ||x_3 − y_3||²,
  subject to x_1 ∈ X_1, y_1 ∈ X_2, y_{21} ∈ X_2, y_{22} ∈ X_4,
             x_3 ∈ X_3, y_3 ∈ X_4,   (3)

in which (y_{21}^T, y_{22}^T)^T = y_2. To make the problems (2) and (3) equivalent, the following constraints are necessary:

  y_1 = y_{21},  y_{22} = y_3.   (4)

Furthermore, let

  A_0 = [ I  −I   0   0
          0   0   I  −I ],

in which I is the identity matrix with appropriate dimension. Then the constraints in (4) can be equivalently written as

  A_0 y = 0,

where y = (y_1^T, y_2^T, y_3^T)^T.

Following the approach in the above example, for the optimization problem (1), a constraint needs to be added to make the shared information coincide between related agents; the problem is then written in the following form:

  min Σ_{i=1}^{m} f_i(x_i, y_i),
  subject to A_0 y = 0,
  x_i ∈ X_i, y_i ∈ Y_i,   (5)

where x = (x_1^T, x_2^T, ..., x_m^T)^T, y = (y_1^T, y_2^T, ..., y_m^T)^T, and A_0 is the connection matrix between agents for sharing the information on y.

In (5), the objective function is in decomposable form. In order to design a distributed algorithm to solve (5), the equality constraint is rewritten as

  Σ_{i=1}^{m} A_i y_i = 0,

where A_i ∈ R^{s×r_i} and y_i ∈ R^{r_i}.

Then the Lagrange function of (5) can be defined as

  Ψ(x, y, β_0) = Σ_{i=1}^{m} f_i(x_i, y_i) + β_0^T Σ_{i=1}^{m} A_i y_i,   (6)

where β_0 ∈ R^s is the Lagrange multiplier. Furthermore, by the Saddle Point Theorem [15], (x*, y*) is an optimal solution to problem (5) if and only if there exists β_0* ∈ R^s such that (x*, y*, β_0*) is a saddle point of the Lagrange function in (6) on (x, y, β_0) ∈ Π_{i=1}^{m} X_i × Π_{i=1}^{m} Y_i × R^s; i.e., for any x ∈ Π_{i=1}^{m} X_i, y ∈ Π_{i=1}^{m} Y_i and β_0 ∈ R^s, (x*, y*, β_0*) satisfies

  Ψ(x*, y*, β_0) ≤ Ψ(x*, y*, β_0*) ≤ Ψ(x, y, β_0*),

which further derives (x*, y*, β_0*) to be an optimal solution of the following minimax problem:

  min_{x∈X, y∈Y} max_{β_0∈R^s} Ψ(x, y, β_0),   (7)

where X = Π_{i=1}^{m} X_i and Y = Π_{i=1}^{m} Y_i. For problem (7), the Lagrange function (6) can be rewritten as

  Ψ(x, y, β) = Σ_{i=1}^{m} [f_i(x_i, y_i) + β_i^T A_i y_i],

where β = (β_1^T, β_2^T, ..., β_m^T)^T ∈ R^{ms} and β_i is the estimate of agent i on the Lagrange multiplier.
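To make the reformulation concrete, the following minimal sketch (with an assumed block dimension d = 2) builds the A_0 of the Example and checks that A_0 y = 0 holds exactly when the copies satisfy the consistency constraints (4).

```python
import numpy as np

d = 2                                   # assumed dimension of each block
I, O = np.eye(d), np.zeros((d, d))

# A_0 = [I -I 0 0; 0 0 I -I] acting on y = (y_1, y_21, y_22, y_3)
A0 = np.block([[I, -I, O, O],
               [O, O, I, -I]])

rng = np.random.default_rng(0)
x2, x4 = rng.standard_normal(d), rng.standard_normal(d)

y_ok = np.concatenate([x2, x2, x4, x4])   # consistent copies: y_1 = y_21, y_22 = y_3
assert np.allclose(A0 @ y_ok, 0)          # constraints (4) hold

y_bad = np.concatenate([x2, x2 + 1.0, x4, x4])   # inconsistent copy of x_2
assert not np.allclose(A0 @ y_bad, 0)
```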
Lemma 1: Assume the graph of the multi-agent network for sharing the information on y to be connected and undirected. Then the following problem is equivalent to the minimax problem in (7):

  min_{x∈X, y∈Y} max_{β∈R^{ms}} Ψ(x, y, β),
  subject to Lβ = 0,   (8)

where L = L_0 ⊗ I, ⊗ is the Kronecker product, and L_0 is the graph's Laplacian matrix.

Proof: Inspired by the proof of Lemma 3 in [6], for a connected graph G, Lβ = 0 if and only if there exists β̄ ∈ R^s such that β_1 = β_2 = ... = β_m = β̄. Then the problems in (7) and (8) are equivalent.

Theorem 1: (x*, y*, β*) is an optimal solution of (8) if and only if there exists α* ∈ R^{ms} such that (x*, y*, α*, β*) is a solution of the following equations:

  x* = g_X(x* − η∇_x f(x*, y*)),
  y* = g_Y(y* − η(∇_y f(x*, y*) + A^T β*)),
  Lβ* = 0,
  Ay* + σLα* = 0,   (9)

in which g_X and g_Y are the projection operators onto X and Y respectively (defined in Appendix A), A is the block diagonal matrix of A_1, A_2, ..., A_m, and η and σ are positive constants.

Proof: From the above analysis, (x*, y*, β*) is an optimal solution of (8) if and only if, for any (x, y, β) ∈ Θ = {(x, y, β) : x ∈ X, y ∈ Y, Lβ = 0},

  Ψ(x*, y*, β) ≤ Ψ(x*, y*, β*) ≤ Ψ(x, y, β*).   (10)

According to the right inequality in (10), (x*, y*) is a minimum point of Ψ(x, y, β*) on X × Y if and only if, for any x ∈ X and y ∈ Y, (x*, y*, β*) satisfies

  (x − x*)^T ∇_x Ψ(x*, y*, β*) ≥ 0,
  (y − y*)^T ∇_y Ψ(x*, y*, β*) ≥ 0,   (11)

and these are further equivalent to the following projection equations:

  x* = g_X(x* − η∇_x Ψ(x*, y*, β*)),
  y* = g_Y(y* − η∇_y Ψ(x*, y*, β*)),

where η > 0 is a constant. Because ∇_x Ψ(x*, y*, β*) = ∇_x f(x*, y*) and ∇_y Ψ(x*, y*, β*) = ∇_y f(x*, y*) + A^T β*, the first and second equations in (9) are obtained.

Furthermore, the left inequality in (10) derives β* to be a maximum point of Ψ(x*, y*, β) on β ∈ Θ_0 = {β : Lβ = 0}. For this maximum problem, the Lagrange function is defined as

  Ψ′(α, β) = Ψ(x*, y*, β) + σα^T Lβ,

where σ > 0 and α ∈ R^{ms} is the Lagrange multiplier. From the Karush-Kuhn-Tucker (KKT) conditions [15], β* is an optimal solution if and only if there exists α* such that ∇_α Ψ′(α*, β*) = 0 and ∇_β Ψ′(α*, β*) = 0; i.e., Lβ* = 0 and ∇_β Ψ(x*, y*, β*) + σLα* = 0, where ∇_β Ψ(x*, y*, β*) = Ay*. This gives the last two equations in (9).

III. DISTRIBUTED ALGORITHM AND CONVERGENCE ANALYSIS
A. DISTRIBUTED ALGORITHM
From the equalities in (9), the distributed optimization algorithm for solving (5) is designed as

  x(k+1) = g_X(x(k) − η∇_x f(x(k), y(k))),
  y(k+1) = g_Y(y(k) − η(∇_y f(x(k), y(k)) + A^T(β(k) + Ay(k) + σL(α(k) − β(k))))),
  α(k+1) = α(k) − β(k) − Ay(k) − σL(α(k) − β(k)),
  β(k+1) = β(k) + Ay(k+1) + σL(α(k) − β(k)),   (12)

where k is the iteration step. The component form of algorithm (12) can be written as

  x_i(k+1) = g_{X_i}(x_i(k) − η∇_x f_i(x_i(k), y_i(k))),
  y_i(k+1) = g_{Y_i}(y_i(k) − η(∇_y f_i(x_i(k), y_i(k)) + A_i^T(β_i(k) + A_i y_i(k)
             + σ Σ_{j=1, j≠i}^{m} a_{ij}(α_i(k) − α_j(k) − β_i(k) + β_j(k))))),
  α_i(k+1) = α_i(k) − β_i(k) − A_i y_i(k) − σ Σ_{j=1, j≠i}^{m} a_{ij}(α_i(k) − α_j(k) − β_i(k) + β_j(k)),
  β_i(k+1) = β_i(k) + A_i y_i(k+1) + σ Σ_{j=1, j≠i}^{m} a_{ij}(α_i(k) − α_j(k) − β_i(k) + β_j(k)),   (13)

where a_{ij} denotes the connection weight between agents i and j.

For the convenience of the following analysis, subscripts are used to denote the iterations. Let x_k = x(k), y_k = y(k), α_k = α(k) and β_k = β(k). Then algorithm (12) is rewritten in the compact form

  x_{k+1} = g_X(x_k − η∇_x f(x_k, y_k)),
  y_{k+1} = g_Y(y_k − η(∇_y f(x_k, y_k) + A^T(β_k + Ay_k + σL(α_k − β_k)))),
  α_{k+1} = α_k − β_k − Ay_k − σL(α_k − β_k),
  β_{k+1} = β_k + Ay_{k+1} + σL(α_k − β_k).   (14)

B. CONVERGENCE ANALYSIS
Assume (x*, y*) to be a solution of problem (5). From Theorem 1, there exist α* ∈ R^{ms} and β* ∈ R^{ms} such that (x*, y*, α*, β*) satisfies the equations in (9). In the first place, the following functions are introduced:

  V_1(x_k, y_k) = ||x_k − x*||² + ||y_k − y*||²,
  V_2(α_k) = σ(α_k − α*)^T L(α_k − α*),
  V_3(β_k) = ||β_k − β*||²,

where ||·|| is the Euclidean norm.
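As a quick illustration, iteration (14) can be prototyped directly. The following is a minimal centralized sketch, assuming box constraints so that g_X and g_Y reduce to clipping; the helper names (project_box, step_14, lyapunov_V) and the gradient callables are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def project_box(z, lo, hi):
    # Projection onto the box [lo, hi]; plays the role of g_X and g_Y
    # when X and Y are bound constraints.
    return np.clip(z, lo, hi)

def step_14(x, y, alpha, beta, grad_x, grad_y, A, L, eta, sigma, box_x, box_y):
    """One pass of the compact iteration (14), simulated centrally."""
    s = sigma * (L @ (alpha - beta))
    zeta = beta + A @ y + s          # zeta_k = beta_k + A y_k + sigma L(alpha_k - beta_k)
    x_new = project_box(x - eta * grad_x(x, y), *box_x)
    y_new = project_box(y - eta * (grad_y(x, y) + A.T @ zeta), *box_y)
    alpha_new = alpha - zeta         # = alpha_k - beta_k - A y_k - sigma L(alpha_k - beta_k)
    beta_new = beta + A @ y_new + s  # = beta_k + A y_{k+1} + sigma L(alpha_k - beta_k)
    return x_new, y_new, alpha_new, beta_new

def lyapunov_V(x, y, alpha, beta, x_s, y_s, alpha_s, beta_s, L, eta, sigma):
    """V = V_1 + eta (V_2 + V_3); the starred arguments are an optimum of (5)."""
    v1 = np.sum((x - x_s) ** 2) + np.sum((y - y_s) ** 2)
    v2 = sigma * (alpha - alpha_s) @ (L @ (alpha - alpha_s))
    v3 = np.sum((beta - beta_s) ** 2)
    return v1 + eta * (v2 + v3)
```

In a fully distributed run, each agent i evaluates only its own blocks of (13) and exchanges α_j and β_j with its neighbors; the centralized form above is convenient only for checking the analysis, e.g., that lyapunov_V is nonincreasing under the step sizes of Theorem 2 below.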
Lemma 2: V_1(x_k, y_k), V_2(α_k) and V_3(β_k) satisfy the following relations along the iterations in (14):
(i) V_1(x_{k+1}, y_{k+1}) − V_1(x_k, y_k) ≤ −||x_{k+1} − x_k||² − ||y_{k+1} − y_k||² + 2η(x_{k+1} − x_k)^T(∇_x f(x_{k+1}, y_{k+1}) − ∇_x f(x_k, y_k)) + 2η(y_{k+1} − y_k)^T(∇_y f(x_{k+1}, y_{k+1}) − ∇_y f(x_k, y_k)) − 2η(Ay_{k+1} + σLα*)^T(ζ_k − β*);
(ii) V_2(α_{k+1}) − V_2(α_k) = 2σζ_k^T L(β_k − α_k + α*) + σ(ζ_k − β_k)^T L(ζ_k − β_k) − σβ_k^T Lβ_k;
(iii) V_3(β_{k+1}) − V_3(β_k) = −||β_{k+1} − β_k||² + 2(Ay_{k+1} + σL(α_k − β_k))^T(β_{k+1} − β*),
where ζ_k = β_k + Ay_k + σL(α_k − β_k).
Proof: The details of the proof are given in Appendix B.

Theorem 2: Assume the graph of the multi-agent network for sharing the information on y to be connected and undirected. Then (x_k, y_k) in the distributed optimization algorithm (14) is globally convergent to an optimal solution of problem (5) if η < min{1/(2l_x), 1/(2λ_max(A^T A + l_y I))} and σ < 1/λ_max(L), where λ_max(·) denotes the maximum eigenvalue of a matrix, and l_x and l_y are the Lipschitz constants of ∇_x f and ∇_y f respectively.

Proof: Assume (x*, y*) to be an optimal solution of (5). From Theorem 1, there exist α* ∈ R^{ms} and β* ∈ R^{ms} such that (x*, y*, α*, β*) satisfies the equations in (9).

Let V(x_k, y_k, α_k, β_k) = V_1(x_k, y_k) + η(V_2(α_k) + V_3(β_k)). Then, combining the results in Lemma 2, it follows that

  V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k)
  = V_1(x_{k+1}, y_{k+1}) − V_1(x_k, y_k) + η(V_2(α_{k+1}) − V_2(α_k) + V_3(β_{k+1}) − V_3(β_k))
  ≤ −||x_{k+1} − x_k||² − ||y_{k+1} − y_k||²
    + 2η(x_{k+1} − x_k)^T(∇_x f(x_{k+1}, y_{k+1}) − ∇_x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇_y f(x_{k+1}, y_{k+1}) − ∇_y f(x_k, y_k))
    − 2η(Ay_{k+1} + σLα*)^T(ζ_k − β*)
    + 2ησζ_k^T L(β_k − α_k + α*) + ησ(ζ_k − β_k)^T L(ζ_k − β_k)
    − ησβ_k^T Lβ_k − η||β_{k+1} − β_k||²
    + 2η(Ay_{k+1} + σL(α_k − β_k))^T(β_{k+1} − β*)
  = −||x_{k+1} − x_k||² − ||y_{k+1} − y_k||²
    + 2η(x_{k+1} − x_k)^T(∇_x f(x_{k+1}, y_{k+1}) − ∇_x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇_y f(x_{k+1}, y_{k+1}) − ∇_y f(x_k, y_k))
    − 2η(Ay_{k+1} + σLα*)^T(ζ_k − β*)
    + 2ησ(ζ_k − β*)^T L(β_k − α_k + α*) + ησ(ζ_k − β_k)^T L(ζ_k − β_k)
    − ησβ_k^T Lβ_k − η||β_{k+1} − β_k||²
    + 2η(Ay_{k+1} + σL(α_k − β_k))^T(β_{k+1} − β*)
  = −||x_{k+1} − x_k||² − ||y_{k+1} − y_k||²
    + 2η(x_{k+1} − x_k)^T(∇_x f(x_{k+1}, y_{k+1}) − ∇_x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇_y f(x_{k+1}, y_{k+1}) − ∇_y f(x_k, y_k))
    − 2η(Ay_{k+1} + σL(α_k − β_k))^T(ζ_k − β*)
    + ησ(ζ_k − β_k)^T L(ζ_k − β_k)
    − ησβ_k^T Lβ_k − η||β_{k+1} − β_k||²
    + 2η(Ay_{k+1} + σL(α_k − β_k))^T(β_{k+1} − β*)
  = −||x_{k+1} − x_k||² − ||y_{k+1} − y_k||²
    + 2η(x_{k+1} − x_k)^T(∇_x f(x_{k+1}, y_{k+1}) − ∇_x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇_y f(x_{k+1}, y_{k+1}) − ∇_y f(x_k, y_k))
    + 2η(Ay_{k+1} + σL(α_k − β_k))^T A(y_{k+1} − y_k)
    + ησ(ζ_k − β_k)^T L(ζ_k − β_k)
    − ησβ_k^T Lβ_k − η||β_{k+1} − β_k||²,   (15)

where the second equality holds due to Lβ* = 0 and the last equality holds due to β_{k+1} − ζ_k = A(y_{k+1} − y_k).

For the last terms in the inequality in (15), we have

  −η||β_{k+1} − β_k||² + 2η(Ay_{k+1} + σL(α_k − β_k))^T A(y_{k+1} − y_k)
  = −η||β_{k+1} − β_k||² + 2η||Ay_{k+1} − Ay_k||²
    + 2η(Ay_k + σL(α_k − β_k))^T A(y_{k+1} − y_k)
  = −η||β_{k+1} − β_k||² + 2η||Ay_{k+1} − Ay_k||²
    + 2η(β_k + Ay_k + σL(α_k − β_k) − β_{k+1})^T A(y_{k+1} − y_k)
    + 2η(β_{k+1} − β_k)^T A(y_{k+1} − y_k)
  = −η||β_{k+1} − β_k||² + 2η||Ay_{k+1} − Ay_k||²
    − 2η||β_{k+1} − ζ_k||² + 2η(β_{k+1} − β_k)^T(β_{k+1} − ζ_k)
  = 2η||Ay_{k+1} − Ay_k||² − η||β_{k+1} − ζ_k||²
    − η||β_{k+1} − β_k||² − η||β_{k+1} − ζ_k||² + 2η(β_{k+1} − β_k)^T(β_{k+1} − ζ_k)
  = 2η||Ay_{k+1} − Ay_k||² − η||β_{k+1} − ζ_k||² − η||β_{k+1} − β_k − β_{k+1} + ζ_k||²
  = 2η||Ay_{k+1} − Ay_k||² − η||β_{k+1} − ζ_k||² − η||β_k − ζ_k||²
  ≤ 2η||Ay_{k+1} − Ay_k||² − η||β_k − ζ_k||²,   (16)

where the third equality holds since ζ_k = β_k + Ay_k + σL(α_k − β_k) and A(y_{k+1} − y_k) = β_{k+1} − ζ_k.

Substituting (16) into (15) gives

  V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k)
  ≤ −||x_{k+1} − x_k||² − ||y_{k+1} − y_k||²
    + 2η(x_{k+1} − x_k)^T(∇_x f(x_{k+1}, y_{k+1}) − ∇_x f(x_k, y_k))
    + 2η(y_{k+1} − y_k)^T(∇_y f(x_{k+1}, y_{k+1}) − ∇_y f(x_k, y_k))
    + 2η||Ay_{k+1} − Ay_k||² − η||β_k − ζ_k||²
    + ησ(ζ_k − β_k)^T L(ζ_k − β_k) − ησβ_k^T Lβ_k
  ≤ −(1 − 2ηl_x)(x_{k+1} − x_k)^T(x_{k+1} − x_k)
    − (y_{k+1} − y_k)^T(I − 2η(A^T A + l_y I))(y_{k+1} − y_k)
    − η(β_k − ζ_k)^T(I − σL)(β_k − ζ_k) − ησβ_k^T Lβ_k,   (17)

in which I is the identity matrix, and l_x and l_y are the Lipschitz constants of ∇_x f and ∇_y f respectively.

If η < min{1/(2l_x), 1/(2λ_max(A^T A + l_y I))} and σ < 1/λ_max(L), then 1 − 2ηl_x > 0, I − 2η(A^T A + l_y I) ≻ 0 and I − σL ≻ 0, so V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k) ≤ 0.

Combining with the LaSalle invariance principle [29] (Theorem 6.4), (x_k, y_k) converges to the largest invariant set of

  Δ = {(x_k, y_k) : V(x_{k+1}, y_{k+1}, α_{k+1}, β_{k+1}) − V(x_k, y_k, α_k, β_k) = 0}.
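The step-size conditions of Theorem 2 are directly computable once l_x, l_y, A and L are available. A small sketch follows; the function name and the use of a dense symmetric eigensolver are assumptions for illustration.

```python
import numpy as np

def step_size_bounds(A, L, l_x, l_y):
    """Upper bounds on eta and sigma from Theorem 2 (strict inequalities)."""
    lam_A = np.linalg.eigvalsh(A.T @ A + l_y * np.eye(A.shape[1])).max()
    lam_L = np.linalg.eigvalsh(L).max()   # L is symmetric for an undirected graph
    eta_bound = min(1.0 / (2 * l_x), 1.0 / (2 * lam_A))
    sigma_bound = 1.0 / lam_L
    return eta_bound, sigma_bound
```

Any η and σ strictly below these bounds keep the quadratic forms in (17) nonnegative, which is what the proof uses.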
where

  A_0 = [ I  −I   0   0   0
          0   I  −I   0   0
          0   0   I  −I   0
          0   0   0   I  −I ].
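For illustration, this chain-structured A_0 can be assembled as follows; the block size d = 2 is an assumption for the sketch, and each block row [ ... I  −I ... ] forces two consecutive shared copies in y to agree under A_0 y = 0.

```python
import numpy as np

# Sketch: chain-structured A_0 with 4 block rows and 5 block columns.
d, n_blocks = 2, 5                          # assumed block size and number of copies
I = np.eye(d)
A0 = np.zeros(((n_blocks - 1) * d, n_blocks * d))
for i in range(n_blocks - 1):
    A0[i*d:(i+1)*d, i*d:(i+1)*d] = I        #  I on the diagonal block
    A0[i*d:(i+1)*d, (i+1)*d:(i+2)*d] = -I   # -I on the superdiagonal block
```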
With random initial values, the simulation results are shown in Figs. 1 and 2. The optimal locations of the free nodes are shown in Fig. 1, in which the blue solid squares are the fixed points, the red solid disks are the free points, and the green and pink dashed lines are the links. The convergence behaviors of the outputs of the multi-agent network are depicted in Fig. 2, which shows that the algorithm attains the optimal solution of the problem.
Example 2: We consider the problem in Example 1 with a different graph topology. The six free points x_i ∈ R² (i = 1, 2, ..., 6) are connected in a ring (cycle) graph, and the neighbors of each node are given in Fig. 3. The optimization problem is formulated as

  min Σ_{i=1}^{6} ( (1/2) Σ_{j∈N_i} ||x_i − x_j||² + Σ_{l∈N_i} ||x_i − b_l||² ),
  s.t. Σ_{i=1}^{6} x_i = 0,
       −2 ≤ x_i ≤ 2 (i = 1, 2, ..., 6),   (19)

where, as in Example 1, N_i is the set of neighbors of node i and the b_l are the fixed points.

In the network, since each x_i (i = 1, 2, ..., 6) is shared with its neighbors, we rename the shared vectors in the agents. Let y_{ij} (i = 1, 2, ..., 6, j = 1, 2) be the shared vectors; i.e., y_{11} = y_{12} = x_1, y_{21} = y_{22} = x_2, ..., y_{61} = y_{62} = x_6. Then the problem in (19) derives the following problem (in which y_{71} = y_{11}):

  min Σ_{i=1}^{6} ( ||y_{i2} − y_{(i+1)1}||² + (1/2) Σ_{l∈N_i} ||y_{i2} − b_l||² ),
  s.t. −2 ≤ y_{ij} ≤ 2, i ∈ {1, 2, ..., 6}, j ∈ {1, 2},
       A_0 y = 0,

where y = (y_{12}^T, y_{21}^T, y_{22}^T, y_{31}^T, ..., y_{62}^T, y_{11}^T)^T and

  A_0 = [ I  0   I  0   I  0   I  0   I  0   I   0
          I  0   0  0   0  0   0  0   0  0   0  −I
          0  I  −I  0   0  0   0  0   0  0   0   0
          0  0   0  I  −I  0   0  0   0  0   0   0
          0  0   0  0   0  I  −I  0   0  0   0   0
          0  0   0  0   0  0   0  I  −I  0   0   0
          0  0   0  0   0  0   0  0   0  I  −I   0 ].

We use a six-agent network to solve this problem. Simulation results in Figs. 3 and 4 are generated with random initial values. The convergence of the output vectors is depicted in Fig. 4, and the optimal locations of the free nodes are shown in Fig. 3.

FIGURE 3. The topology structure and optimal locations of the free nodes in example 2.
FIGURE 4. Convergence behaviors of output variables of the network for solving the problem in example 2.
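As a sketch (again with an assumed block size d = 2), the block matrix A_0 above can be assembled from the 0/±1 pattern of its block rows: the first block row encodes the centroid constraint Σ_i x_i = 0 through the shared copies, and the remaining rows tie together the two copies held for each x_i.

```python
import numpy as np

d = 2                                         # assumed block size
I, O = np.eye(d), np.zeros((d, d))
pattern = [
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1,  0],   # y_12 + y_22 + ... + y_62 = 0
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1],   # y_12 - y_11 = 0
    [0, 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0],   # y_21 - y_22 = 0
    [0, 0, 0, 1, -1, 0, 0, 0, 0, 0, 0, 0],   # y_31 - y_32 = 0
    [0, 0, 0, 0, 0, 1, -1, 0, 0, 0, 0, 0],   # y_41 - y_42 = 0
    [0, 0, 0, 0, 0, 0, 0, 1, -1, 0, 0, 0],   # y_51 - y_52 = 0
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -1, 0],   # y_61 - y_62 = 0
]
A0 = np.block([[c * I if c else O for c in row] for row in pattern])
assert A0.shape == (7 * d, 12 * d)
```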
V. CONCLUSIONS
In this paper, a distributed convergence/consensus algorithm based on the Lagrange multiplier method was proposed to seek the optimal solutions of decomposable optimization problems based on the communication protocol in multi-agent networks. Both private and shared information were considered in the networks. By a simple mathematical transformation, the optimization problem was converted into a separable one, which is easily implemented as a single-program-multiple-data (SPMD) parallel algorithm. Combining with the dynamic behavior analysis method for multi-agent networks, the convergence/consensus was proved under a connected and undirected graph. Finally, the optimal placement problem was investigated, and two numerical examples were given to illustrate the good performance of the proposed method.

APPENDIX B
With g̃_X = g_X(x_k − η∇_x f(x_k, y_k)) (so that x_{k+1} = g̃_X), we have

  ||x_{k+1} − x*||² − ||x_k − x*||²
  = ||g̃_X − x*||² − ||x_k − x*||²
  = (g̃_X − x_k)^T(g̃_X + x_k − 2x*)
  = −||g̃_X − x_k||² + 2(g̃_X − x_k)^T(g̃_X − x*)
  = −||g̃_X − x_k||² + 2(g̃_X − x_k + η∇_x f(x_k, y_k))^T(g̃_X − x*)
    − 2η∇_x f(x_k, y_k)^T(g̃_X − x*).

Let u = x_k − η∇_x f(x_k, y_k) and v = x*; by the inequality in Lemma 3, one gets

  (g̃_X − x_k + η∇_x f(x_k, y_k))^T(g̃_X − x*) ≤ 0.

Combining with x_{k+1} = g̃_X, we have

  ||x_{k+1} − x*||² − ||x_k − x*||² ≤ −||x_{k+1} − x_k||² − 2η∇_x f(x_k, y_k)^T(x_{k+1} − x*).
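The inequality of Lemma 3 invoked here, (g_X(u) − u)^T(g_X(u) − v) ≤ 0 for any v ∈ X, can be spot-checked numerically for the box projection; the snippet below is a sketch with hypothetical bounds, not part of the appendix.

```python
import numpy as np

rng = np.random.default_rng(1)
lo, hi = -2.0, 2.0                       # hypothetical box X = [lo, hi]^4
for _ in range(1000):
    u = rng.uniform(-5.0, 5.0, size=4)   # arbitrary point
    v = rng.uniform(lo, hi, size=4)      # any point of X
    g = np.clip(u, lo, hi)               # projection g_X(u)
    assert (g - u) @ (g - v) <= 1e-12    # Lemma 3's variational inequality
```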
[10] R. Mudumbai, S. Dasgupta, and B. B. Cho, "Distributed control for optimal economic dispatch of a network of heterogeneous power generators," IEEE Trans. Power Syst., vol. 27, no. 4, pp. 1750–1760, Nov. 2012.
[11] C. Li, X. Yu, W. Yu, T. Huang, and Z.-W. Liu, "Distributed event-triggered scheme for economic dispatch in smart grids," IEEE Trans. Ind. Informat., vol. 12, no. 5, pp. 1775–1785, Oct. 2016.
[12] I. Kouveliotis-Lysikatos and N. Hatziargyriou, "Fully distributed economic dispatch of distributed generators in active distribution networks considering losses," IET Gener. Transmiss. Distrib., vol. 11, no. 3, pp. 627–636, Feb. 2017.
[13] Q. Liu, S. Yang, and Y. Hong, "Constrained consensus algorithms with fixed step size for distributed convex optimization over multiagent networks," IEEE Trans. Autom. Control, vol. 62, no. 8, pp. 4259–4265, Aug. 2017.
[14] X. Le, S. Chen, Z. Yan, and J. Xi, "A neurodynamic approach to distributed optimization with globally coupled constraints," IEEE Trans. Cybern., vol. 48, no. 11, pp. 3149–3158, Nov. 2018.
[15] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, 3rd ed. Hoboken, NJ, USA: Wiley, 2006.
[16] B. Gharesifard and J. Cortés, "Distributed continuous-time convex optimization on weight-balanced digraphs," IEEE Trans. Autom. Control, vol. 59, no. 3, pp. 781–786, Mar. 2014.
[17] A. Nedic, A. Ozdaglar, and P. A. Parrilo, "Constrained consensus and optimization in multi-agent networks," IEEE Trans. Autom. Control, vol. 55, no. 4, pp. 922–938, Apr. 2010.
[18] S. Yang, J. Wang, and Q. Liu, "Cooperative–competitive multiagent systems for distributed minimax optimization subject to bounded constraints," IEEE Trans. Autom. Control, vol. 64, no. 4, pp. 1358–1372, Apr. 2019.
[19] X. Hu and J. Wang, "An improved dual neural network for solving a class of quadratic programming problems and its k-winners-take-all application," IEEE Trans. Neural Netw., vol. 19, no. 12, pp. 2022–2031, Dec. 2008.
[20] X. Hu, C. Sun, and B. Zhang, "Design of recurrent neural networks for solving constrained least absolute deviation problems," IEEE Trans. Neural Netw., vol. 21, no. 7, pp. 1073–1086, Jul. 2010.
[21] Q. Liu and Y. Yang, "Global exponential system of projection neural networks for system of generalized variational inequalities and related nonlinear minimax problems," Neurocomputing, vol. 73, nos. 10–12, pp. 2069–2076, 2010.
[22] Q. Liu, T. Huang, and J. Wang, "One-layer continuous- and discrete-time projection neural networks for solving variational inequalities and related optimization problems," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1308–1318, Jul. 2014.
[23] S. Yang, Q. Liu, and J. Wang, "A collaborative neurodynamic approach to multiple-objective distributed optimization," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 4, pp. 981–992, Apr. 2018.
[24] H. Xiao and C. L. P. Chen, "Leader-follower consensus multi-robot formation control using neurodynamic-optimization-based nonlinear model predictive control," IEEE Access, vol. 7, pp. 43581–43590, Apr. 2019.
[25] I.-Y. Joo and D.-H. Choi, "Distributed optimization framework for energy management of multiple smart homes with distributed energy resources," IEEE Access, vol. 5, pp. 15551–15560, Aug. 2017.
[26] H. Zhang, D. Yue, and X. Xie, "Distributed model predictive control for hybrid energy resource system with large-scale decomposition coordination approach," IEEE Access, vol. 4, pp. 9332–9344, Dec. 2016.
[27] S. S. Ram, A. Nedić, and V. V. Veeravalli, "A new class of distributed optimization algorithms: Application to regression of distributed data," Optim. Methods Softw., vol. 27, no. 1, pp. 71–88, 2012.
[28] K. Li, Q. Liu, S. Yang, J. Cao, and G. Lu, "Cooperative optimization of dual multiagent system for optimal resource allocation," IEEE Trans. Syst., Man, Cybern., Syst., to be published. doi: 10.1109/TSMC.2018.2859364.
[29] J. P. La Salle, The Stability of Dynamical Systems. Philadelphia, PA, USA: SIAM, 1976.
[30] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[31] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications. New York, NY, USA: Academic, 1982.

YAN ZHAO received the B.S. and M.S. degrees in mathematics from Anhui Normal University, Wuhu, China, in 2001 and 2012, respectively. She is currently a Lecturer with the School of Common Courses, Wannan Medical College, Wuhu, China. Her current research interests include computational mathematics and optimization theory.

QINGSHAN LIU (S'07–M'08–SM'15) received the B.S. degree in mathematics from Anhui Normal University, Wuhu, China, in 2001, the M.S. degree in applied mathematics from Southeast University, Nanjing, China, in 2005, and the Ph.D. degree in automation and computer-aided engineering from The Chinese University of Hong Kong, Hong Kong, in 2008.
He is currently a Professor with the School of Mathematics, Southeast University, Nanjing, China. His current research interests include optimization theory and applications, artificial neural networks, computational intelligence, and multi-agent systems. He is a member of the Editorial Board of Neural Networks. He serves as an Associate Editor for the IEEE TRANSACTIONS ON CYBERNETICS and the IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS.