ZHANG-Distributed Optimal Control
Abstract—This paper develops a novel approach to the consensus problem of multi-agent systems by minimizing a weighted state error with neighbor agents via linear quadratic (LQ) optimal control theory. Existing consensus control algorithms utilize only the current state of each agent, and the design of the distributed controller depends on the nonzero eigenvalues of the communication topology. The presented optimal consensus controller is obtained by solving Riccati equations and designing appropriate observers that account for the agents' historical state information. It is shown that the corresponding cost function under the proposed controllers is asymptotically optimal. Simulation examples demonstrate the effectiveness of the proposed scheme and a much faster convergence speed than conventional consensus methods. Moreover, the new method avoids computing the nonzero eigenvalues of the communication topology, as required by traditional consensus methods.

Index Terms—Consensus, Distributed control, Observer, Heterogeneous multi-agent system

This work was supported by the Original Exploratory Program Project of National Natural Science Foundation of China (62250056), National Natural Science Foundation of China (62103240), Youth Foundation of Natural Science Foundation of Shandong Province (ZR2021QF147), Foundation for Innovative Research Groups of the National Natural Science Foundation of China (61821004), Major Basic Research of Natural Science Foundation of Shandong Province (ZR2021ZD14), High-level Talent Team Project of Qingdao West Coast New Area (RCTD-JC-2019-05), Key Research and Development Program of Shandong Province (2020CXGC01208), and Science and Technology Project of Qingdao West Coast New Area (2019-32, 2020-20, 2020-1-4).

L. Zhang and H. Zhang are with the College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, China (e-mail: [email protected]; [email protected]). J. Xu is with the School of Control Science and Engineering, Shandong University, Jinan 250061, China (e-mail: [email protected]). L. Xie is with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore (e-mail: [email protected]).

I. INTRODUCTION

The cooperative control problem for multi-agent systems has attracted considerable attention from different scientific communities in recent years. Multiple agents can coordinate with each other via a communication topology to accomplish tasks that may be difficult for a single agent, and potential applications include unmanned aerial vehicles, satellite formation, distributed robotics, and wireless sensor networks [1]–[3].

In the area of cooperative control of multi-agent systems, consensus is a fundamental and crucial problem, which refers to designing an appropriate distributed control protocol to steer all agents to agreement on a certain variable [4]. Thus, the consensus problem has been widely studied by numerous researchers from various perspectives.

Homogeneous multi-agent systems, in which all agents have identical dynamics, mainly involve leaderless consensus and leader-follower consensus. [5] proposed a general framework for the consensus problem of networks of first-order integrator agents with switching topologies based on relative states. [6] derived a sufficient condition for achieving consensus of multi-agent systems that is more relaxed than that in [5]. Extending the first-order consensus protocols in [6], the author of [7] further studied distributed leader-following consensus algorithms for second-order integrators. [8] considered the consensusability of discrete-time multi-agent systems, and derived an upper bound on the eigenratio (the ratio of the second smallest to the largest eigenvalue of the graph Laplacian matrix) to characterize the convergence rate. [4] proposed a consensus region approach to designing distributed adaptive consensus protocols over undirected and directed graphs for general continuous-time linear dynamics. [9] recently derived an optimal consensus region over directed communication graphs with a diagonalizable Laplacian matrix. Besides, variants of these algorithms have also been applied to tackle communication uncertainties, such as fading communication channels [10], packet loss [11], and communication time-delays [12]. It should be pointed out that the aforementioned consensus control protocols use only each agent's and its neighbors' current state information, ignoring their historical state information. Additionally, the solvability condition for the consensus gain matrix K depends on the nonzero eigenvalues of the Laplacian matrix, or even requires the communication topology to be a complete graph [13]. In particular, when the number of agents is large, the eigenvalues of the corresponding Laplacian matrix are difficult to determine; even when they are computable, their calculation still imposes a significant computational burden. [14] presented a distributed consensus algorithm based on optimal control theory, but the state weight matrix in the given performance index takes a special form.

On the other hand, many actual systems are heterogeneous, with agents having different dynamics. So far, distributed feedforward control [15]–[17] and the internal model principle [18] have commonly been used to solve the cooperative output regulation problem. These tools have also been generalized to deal with robust output regulation, switching networks, and cooperation-competition networks [19]–[23]. In fact, the essence of both classes of algorithms can be attributed to two aspects: first, the reference generator [24] or the distributed observer estimating the reference system's state is a critical technology for designing distributed controllers; second, the solvability conditions of output regulator equations or transmission zero conditions of
the system are also necessary for solving the output consensus problem.

Motivated by the above analyses, in this paper we study the consensus problem of discrete-time linear heterogeneous multi-agent systems with a novel consensus control protocol based on LQ optimal control theory. Compared with the existing results, the main contributions of this work are: 1) We develop a novel consensus algorithm by minimizing the weighted state errors between neighboring agents. An optimal consensus controller, with observers incorporating each agent's historical state information, is designed by solving Riccati equations. The corresponding global cost function under the proposed controllers is shown to be asymptotically optimal. 2) The proposed consensus controller achieves a much faster consensus speed than the traditional consensus method and avoids computing the nonzero eigenvalues of the Laplacian matrix associated with the communication topology.

The following notation will be used throughout this paper: R^{n×m} represents the set of n × m real matrices. I is the identity matrix of a given dimension. diag{a1, a2, ···, aN} denotes the diagonal matrix with diagonal elements a1, ···, aN. ρ(A) is the spectral radius of matrix A. ⊗ denotes the Kronecker product.

Let the interaction among N agents be described by a directed graph G = {V, E, A}, where V = {1, 2, ···, N} is the set of vertices (nodes), E ⊆ V × V is the set of edges, and A = [aij] ∈ R^{N×N} is the signed weight matrix of G, with aij ≠ 0 if and only if the edge (vj, vi) ∈ E. We assume that the graph has no self-loops, i.e., aii = 0. The neighbor set of vi is denoted by Ni = {j | (vj, vi) ∈ E}. The Laplacian matrix L = [lij]_{N×N} associated with the adjacency matrix A is defined by lii = Σ_{j∈Ni} aij and lij = −aij for i ≠ j. A directed path from vi to vj is represented by a sequence of edges (vi, vi1), (vi1, vi2), ···, (vim, vj). A directed graph is strongly connected if there exists a directed path between any pair of distinct nodes.

II. PRELIMINARY AND PROBLEM FORMULATION

A. Problem Formulation

We aim to design a distributed control protocol ui(k) based on the available information from the neighbors in (1) to minimize the performance index (2).

Based on optimal control theory, it is clear that if the optimal controller exists, it must satisfy

    lim_{k→∞} ‖xi(k) − xj(k)‖ = 0,  i = 1, ···, N.      (3)

In other words, the multi-agent system (1) achieves consensus, and the protocol is termed an optimal-control-based protocol, which is completely different from classical approaches. In fact, the commonly used consensus protocol [5], [6] for multi-agent systems is designed as

    ui(k) = F Σ_{j∈Ni} aij (xj(k) − xi(k)),      (4)

where F is a feedback gain matrix, which is actually dependent on λ2(L) and λN(L) [8]. That is to say, one needs to solve for the nonzero eigenvalues of the Laplacian matrix L associated with the communication topology in order to determine the feedback gain F.

Different from the commonly used consensus protocol, where the protocol is artificially defined, the protocol in this paper is derived by optimizing a given LQ performance index, and the performance index (2) is more general, with a positive semi-definite weight matrix Q ≥ 0.

B. Preliminary

Define the neighbor error variable among agents as eij(k) = xi(k) − xj(k). Then, it can be obtained from (1) that

    eij(k + 1) = A eij(k) + Bi ui(k) − Bj uj(k).      (5)

Let δi(k) = [e^T_{ij1}  e^T_{ij2}  ···  e^T_{ijℓ}]^T be the error vector between the i-th agent and its neighbor agents µ, with µ = j1, ···, jℓ. By stacking the error vectors, the global error dynamics for the multi-agent system (1) take the form

    e(k + 1) = Ã e(k) + Σ_{i=1}^{N} B̄i ui(k),      (6)
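The Laplacian-eigenvalue dependence of the traditional protocol (4) can be made concrete with a small numerical sketch. The graph below is an illustrative undirected 4-cycle, not one of the paper's examples; the code builds L from the adjacency matrix exactly as defined above and extracts λ2(L) and λN(L), the quantities a conventional gain design must compute.

```python
import numpy as np

# Illustrative graph (an undirected 4-cycle, not from the paper): build the
# Laplacian l_ii = sum_{j in N_i} a_ij, l_ij = -a_ij, then compute the
# second-smallest and largest eigenvalues that protocol (4)'s gain F needs.
Adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.]])

Lap = np.diag(Adj.sum(axis=1)) - Adj      # Laplacian L = D - A
eigs = np.sort(np.linalg.eigvalsh(Lap))   # graph is undirected, so L is symmetric

lambda_2, lambda_N = eigs[1], eigs[-1]    # here 2.0 and 4.0
print(lambda_2, lambda_N, lambda_2 / lambda_N)  # eigenratio characterizes the rate [8]
```

For large N this eigendecomposition is exactly the computation the proposed method avoids.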
Assumption 1. The directed graph G is strongly connected.

In the ideal (complete graph) case, the error information e(k) is available to all agents, and the solvability of the optimal control problem for system (6) with the cost function (8) is equivalent to that of the following standard LQ optimal control problem [25]. Moreover, under the centralized optimal controller (9), the multi-agent system (1) is able to achieve consensus.

Lemma 1. [25] Suppose that the error information e(k) is available to all agents. The optimal controller with respect to the cost function (8) is given by

    u*(k) = Ke e(k),      (9)

where the feedback gain Ke is given by

    Ke = −(R + B̄^T Pe B̄)^{−1} B̄^T Pe Ã      (10)

and Pe is the solution of the following algebraic Riccati equation (ARE):

    Pe = Ã^T Pe Ã + Q̃ − Ã^T Pe B̄ (R + B̄^T Pe B̄)^{−1} B̄^T Pe Ã.      (11)

The corresponding optimal cost is

    J*(s, ∞) = e^T(s) Pe e(s).      (12)

Moreover, if Pe is the unique positive definite solution to (11), then Ã + B̄Ke is stable.

III. MAIN RESULTS

A. Consensus of multi-agent systems (1) based on relative error feedback

In this subsection, we design a novel distributed observer-based controller using only the relative error information from neighbors.

We design the following distributed controllers:

    u*_i(k) = Kei êi(k),  i = 1, 2, ···, N,      (13)

where êi(k), i = 1, 2, ···, N, are distributed observers that estimate the global error e(k) in system (6) based on the available information of agent i, with Yi(k) = Hi e(k) the error information measured by agent i. Thus,

    ê1(k + 1) = Ã ê1(k) + B̄1 u*_1(k) + B̄2 Ke2 ê1(k) + ··· + B̄N KeN ê1(k)
                + Υ1 (Y1(k) − H1 ê1(k)),      (14a)
    ···
    êi(k + 1) = Ã êi(k) + B̄1 Ke1 êi(k) + ··· + B̄(i−1) Ke(i−1) êi(k) + B̄i u*_i(k)
                + B̄(i+1) Ke(i+1) êi(k) + ··· + B̄N KeN êi(k)
                + Υi (Yi(k) − Hi êi(k)),      (14b)
    ···
    êN(k + 1) = Ã êN(k) + B̄1 Ke1 êN(k) + ··· + B̄(N−1) Ke(N−1) êN(k) + B̄N u*_N(k)
                + ΥN (YN(k) − HN êN(k)),      (14c)

with Kei = [0 ··· I 0 ··· 0] Ke, which is obtained by solving the ARE (11); the observer gain Υi is to be determined later to ensure the stability of the observers.

Theorem 1. Consider the global error system (6) and the distributed control laws (13) and (14). If there exist observer gains Υi, i = 1, ···, N, such that the matrix

    Ãec = [   Θ1       −B̄2 Ke2     ···      −B̄N KeN
            −B̄1 Ke1      Θ2        ···      −B̄N KeN
               ⋮           ⋮         ⋱          ⋮
            −B̄1 Ke1      ···   −B̄(N−1) Ke(N−1)   ΘN ]      (15)

is stable, where Θi = Ã + B̄Ke − B̄i Kei − Υi Hi, then the observers (14) are stable under the controller (13), i.e.,

    lim_{k→∞} ‖êi(k) − e(k)‖ = 0.      (16)

Moreover, if the Riccati equation (11) has a positive definite solution Pe, then under the distributed feedback controllers (13) the multi-agent system (1) achieves consensus.

Proof. Denote the observer error vectors

    ẽi(k) = e(k) − êi(k).      (17)

Then, combining system (6) with the observers (14), one obtains

    e(k + 1) = (Ã + B̄Ke) e(k) − B̄1 Ke1 ẽ1(k) − B̄2 Ke2 ẽ2(k) − ··· − B̄N KeN ẽN(k)      (18)

and

    ẽ1(k + 1) = (Ã + B̄Ke − B̄1 Ke1 − Υ1 H1) ẽ1(k)
                − B̄2 Ke2 ẽ2(k) − ··· − B̄N KeN ẽN(k),      (19a)
    ···
    ẽi(k + 1) = (Ã + B̄Ke − B̄i Kei − Υi Hi) ẽi(k)
                − B̄1 Ke1 ẽ1(k) − ··· − B̄(i−1) Ke(i−1) ẽ(i−1)(k)
                − B̄(i+1) Ke(i+1) ẽ(i+1)(k) − ··· − B̄N KeN ẽN(k),      (19b)
    ···
    ẽN(k + 1) = (Ã + B̄Ke − B̄N KeN − ΥN HN) ẽN(k)
                − B̄1 Ke1 ẽ1(k) − ··· − B̄(N−1) Ke(N−1) ẽ(N−1)(k).      (19c)

According to (19), we have

    ẽ(k + 1) = Ãec ẽ(k),      (20)

where ẽ(k) = [ẽ^T_1(k), ẽ^T_2(k), ···, ẽ^T_N(k)]^T. Obviously, if there exist matrices Υi such that Ãec is stable, then the observer errors ẽ(k) converge to zero as k → ∞, i.e., Eq. (16) holds.

Furthermore, it follows from (18) and (19) that

    [e(k + 1); ẽ(k + 1)] = Āec [e(k); ẽ(k)],      (21)

where Āec = [Ã + B̄Ke, Ωe; 0, Ãec] and Ωe = [−B̄1 Ke1 ··· −B̄N KeN]. Since Pe is the positive definite solution to the Riccati equation (11), Ã + B̄Ke is stable; then, based on LQ control theory, consensus of the multi-agent system (1) is achieved. The proof is completed.
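Lemma 1's recipe, namely iterating the Riccati map (11) to a fixed point Pe and then forming the gain (10), can be sketched on stand-in matrices. The A, B, Q, R below are toy data playing the roles of Ã, B̄, Q̃, R; they are not the paper's stacked error system.

```python
import numpy as np

# Sketch of Lemma 1 on stand-in matrices (toy A, B in place of the paper's
# \tilde{A}, \bar{B}): iterate the Riccati map (11) to a fixed point Pe,
# then form the feedback gain Ke from (10).
A = np.array([[1.1, 0.3],
              [0.0, 0.8]])   # open-loop unstable (eigenvalue 1.1)
B = np.array([[1.0],
              [0.5]])
Q = np.eye(2)                # state weight, Q >= 0
R = np.array([[1.0]])        # input weight, R > 0

P = np.eye(2)
for _ in range(500):         # fixed-point iteration of the ARE (11)
    S = np.linalg.inv(R + B.T @ P @ B)
    P = A.T @ P @ A + Q - A.T @ P @ B @ S @ B.T @ P @ A

Ke = -np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A   # gain (10)
rho_cl = max(abs(np.linalg.eigvals(A + B @ Ke)))     # spectral radius of A + B Ke
print(rho_cl)                                        # < 1: the closed loop is stable
```

In practice a dedicated solver such as scipy.linalg.solve_discrete_are handles (11) directly; the loop above only makes the fixed-point structure of the lemma explicit.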
Observe from (21) that, since Ke has been given in (10), the consensus error dynamics depend on Ãec, which is determined by the observer gains Υi. Thus, to speed up the convergence of consensus, the remaining problem is to choose the matrices Υi, i = 1, 2, ···, N, such that the maximum eigenvalue of Ãec is as small as possible. To this end, let ρ > 0 be such that

    Ãec^T Ãec ≤ ρI,

or equivalently

    [−ρI, Ãec^T; Ãec, −I] ≤ 0,      (22)

where Ãec is as in (15). Then the optimal gain matrices Υi are chosen by

    min_{Υi} ρ  s.t. (22),  i = 1, ···, N.      (23)

Next, we derive the cost difference between the proposed distributed controller (13) and the centralized optimal controller (9), and then analyze the asymptotic optimality of the corresponding cost function.

For convenience of analysis, denote

    Me1 = (Ã + B̄Ke)^T Pe Ωe − Ωe1,
    Me2 = diag{Ke1^T R1 Ke1, Ke2^T R2 Ke2, ···, KeN^T RN KeN} + Ωe2,
    Ωe  = [−B̄1 Ke1 ··· −B̄N KeN],
    Ωe1 = [Ke1^T R1 Ke1 ··· KeN^T RN KeN],
    Ωe2 = Ωe^T Pe Ωe.

Theorem 2. Under the proposed distributed controllers (13) and (14), with Υi, i = 1, 2, ···, N, chosen from the optimization in (23), the corresponding cost is given by

    J⋆(s, ∞) = e^T(s) Pe e(s) + Σ_{k=s}^{∞} [e(k); ẽ(k)]^T [0, Me1; Me1^T, Me2] [e(k); ẽ(k)].      (24)

Moreover, the cost difference between the cost function (24) and the cost under the centralized optimal control is given by

    ΔJ(s, ∞) = J⋆(s, ∞) − J*(s, ∞)
             = Σ_{k=s}^{∞} [e(k); ẽ(k)]^T [0, Me1; Me1^T, Me2] [e(k); ẽ(k)].      (25)

In particular, the optimal cost difference approaches zero as s becomes sufficiently large. That is to say, the proposed consensus controller asymptotically achieves the optimal cost.

Proof. According to (11) and (18), we have

    e^T(k) Pe e(k) − e^T(k + 1) Pe e(k + 1)
    = e^T(k)(Q̃ + Ke^T R Ke) e(k) − ẽ^T(k) Ωe^T Pe (Ã + B̄Ke) e(k)
      − e^T(k)(Ã + B̄Ke)^T Pe Ωe ẽ(k) − ẽ^T(k) Ωe^T Pe Ωe ẽ(k).

Based on the cost function (8) under the centralized optimal control, performing the summation over k from s to ∞ and applying algebraic manipulation yields

    e^T(s) Pe e(s) − e^T(∞) Pe e(∞)
    = J(s, ∞) − Σ_{k=s}^{∞} [e(k); ẽ(k)]^T [0, Me1; Me1^T, Me2] [e(k); ẽ(k)].      (26)

It follows from Theorem 1 that lim_{k→∞} e^T(k) Pe e(k) = 0. Therefore, the corresponding cost J⋆(s, ∞) under the proposed distributed observer-based controller (13) is as derived in (24). Furthermore, in line with (12), the optimal cost difference (25) holds.

According to Theorem 1, the closed-loop system (21) is stable; thus there exist two constants a > 0 and 0 < γ < 1 such that

    ‖[e(k); ẽ(k)]‖ ≤ a γ^k ‖[e(0); ẽ(0)]‖.      (27)

Then, we have

    ΔJ(s, ∞) = Σ_{k=s}^{∞} [e(k); ẽ(k)]^T [0, Me1; Me1^T, Me2] [e(k); ẽ(k)]
             ≤ Σ_{k=s}^{∞} ‖[0, Me1; Me1^T, Me2]‖ ‖[e(k); ẽ(k)]‖^2
             ≤ Σ_{k=s}^{∞} ‖[0, Me1; Me1^T, Me2]‖ a^2 γ^{2k} ‖[e(0); ẽ(0)]‖^2
             = ā γ^{2s}      (28)

with ā = ‖[0, Me1; Me1^T, Me2]‖ ‖[e(0); ẽ(0)]‖^2 a^2 / (1 − γ^2).

Since 0 < γ < 1, for any given ε > 0 there exists a sufficiently large integer M such that γ^{2M} < ε/ā. Based on (28), it holds that

    Σ_{k=M}^{∞} [e(k); ẽ(k)]^T [0, Me1; Me1^T, Me2] [e(k); ẽ(k)] < ε.      (29)

In this case, when M is large enough, the cost difference (25) satisfies ΔJ(M, ∞) < ε. That is to say, the optimal cost difference (25) tends to zero as s → ∞. The proof is completed.

B. Comparison with traditional consensus algorithms

First, consensus under the new controller has a faster convergence speed than under the traditional consensus algorithms. In fact, from the closed-loop system (21), one has

    ‖[e(k); ẽ(k)]‖ ≤ ρ(Āec) ‖[e(k − 1); ẽ(k − 1)]‖,      (30)

where ρ(Āec) is the spectral radius of Āec, i.e., the larger of the spectral radii of Ã + B̄Ke and Ãec. In particular, Ã + B̄Ke is the closed-loop system matrix obtained by the optimal feedback control (10); that is, e(k + 1) = (Ã + B̄Ke) e(k) while e^T(k + 1) Q̃ e(k + 1) is minimized as in (8), so the modulus of the eigenvalues of Ã + B̄Ke is minimized in a certain sense. Besides, based on the
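The optimization (22)-(23) rests on a standard spectral-norm bound: Ãec^T Ãec ≤ ρI holds exactly when σmax(Ãec)^2 ≤ ρ, and since ρ(Ãec) ≤ σmax(Ãec) always, minimizing ρ pushes down an upper bound on the spectral radius. A quick numerical check on a random stand-in matrix (not the paper's Ãec):

```python
import numpy as np

# Numerical check of the bound behind (22)-(23) on a random stand-in matrix
# (not the paper's A_ec): the smallest rho with A^T A <= rho*I is
# sigma_max(A)^2, and the spectral radius never exceeds sigma_max(A).
rng = np.random.default_rng(0)
A_ec = 0.5 * rng.standard_normal((6, 6))

sigma_max = np.linalg.norm(A_ec, 2)             # largest singular value
spec_rad = max(abs(np.linalg.eigvals(A_ec)))    # spectral radius
rho = sigma_max ** 2                            # minimal rho satisfying (22)

# A_ec^T A_ec - rho*I must be negative semidefinite at this rho
slack = np.linalg.eigvalsh(A_ec.T @ A_ec - rho * np.eye(6)).max()
print(spec_rad <= sigma_max, slack <= 1e-9)     # both True
```

The LMI form (22) matters because the gains Υi enter Ãec affinely through Θi = Ã + B̄Ke − B̄i Kei − Υi Hi, so the constraint remains convex in (ρ, Υi) and can be handed to a semidefinite-programming solver.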
optimization in (23), we can appropriately select Υi such that the upper bound on the spectral radius ρ(Ãec) is as small as possible. From these perspectives, ρ(Āec) is made as small as possible. This is in contrast with the conventional consensus algorithms, where the maximum eigenvalue of the matrix Āec is not minimized but is determined by the eigenvalues of the Laplacian matrix L. Therefore, it can be expected that the proposed approach achieves faster convergence than the conventional algorithms, as demonstrated in the simulation examples in Section IV.

Second, the cost difference ΔJ(s, ∞) between the new distributed controller (13) and the centralized optimal control (9) is provided in Theorem 2, and it equals zero as s → ∞. That is to say, the corresponding cost function under the proposed distributed controllers (13) is asymptotically optimal.

C. Special case: consensus of multi-agent systems (1) via state feedback controller

By stacking the state vectors, the multi-agent systems (1) can be rewritten as

    X(k + 1) = Ã X(k) + B̃ u(k),      (31)

where X(k) = [x^T_1(k), ···, x^T_N(k)]^T is the global state variable, Ã = I_N ⊗ A and B̃ = diag{B1, B2, ···, BN}. The cost function (2) is rewritten as

    J(s, ∞) = Σ_{k=s}^{∞} [X^T(k) Q X(k) + u^T(k) R u(k)],      (32)

where the global weight Q = [Qij] ≥ 0 has blocks Qii = (N − 1)Q and Qij = −Q for i ≠ j, and R = diag{R1, R2, ···, RN} > 0.

Lemma 2. Assume the state information X(k) is available to all agents subject to (1). The optimal controller with respect to the cost function (2) is given by

    u*(k) = K X(k),      (33)

where the feedback gain matrix K = L ⊗ F is given as

    K = −(R + B̃^T P B̃)^{−1} B̃^T P Ã      (34)

and P is the solution of the following ARE:

    P = Ã^T P Ã + Q − Ã^T P B̃ (R + B̃^T P B̃)^{−1} B̃^T P Ã.      (35)

The corresponding optimal cost is

    J*(s, ∞) = X^T(s) P X(s).      (36)

Moreover, if P is the unique positive definite solution to (35), Ã + B̃K is stable.

The system (31) is rewritten as

    X(k + 1) = Ã X(k) + Σ_{i=1}^{N} B̃i ui(k),      (37)
    Yi(k) = Ci X(k),      (38)

where B̃i = [0 ··· B^T_i 0 ··· 0]^T, Yi(k) is the measurement, and Ci is composed of 0 and I_n blocks, determined by the interaction among agents.

We design the distributed observer-based state feedback controllers

    u*_i(k) = Ki X̂i(k),  i = 1, 2, ···, N,      (39)

where the distributed observers X̂i(k) are given by

    X̂i(k + 1) = Ã X̂i(k) + B̃1 K1 X̂i(k) + ··· + B̃(i−1) K(i−1) X̂i(k) + B̃i u*_i(k)
                + ··· + B̃N KN X̂i(k) + Li (Yi(k) − Ci X̂i(k)),      (40)

with Li the observer gains to be designed, and Ki = [0 0 ··· I ··· 0] K.

Theorem 3. Let Assumption 1 hold. Consider the global system (37) for the multi-agent systems (1) and the control laws (39). If there exist observer gains Li such that the matrix

    Ãc = [   W1       −B̃2 K2     ···     −B̃N KN
           −B̃1 K1      W2       ···     −B̃N KN
              ⋮          ⋮        ⋱         ⋮
           −B̃1 K1    −B̃2 K2    ···       WN ]      (41)

is stable, with Wi = Ã + B̃K − B̃i Ki − Li Ci, then the observers (40) are stable, that is,

    lim_{k→∞} ‖X̂i(k) − X(k)‖ = 0,  i = 1, ···, N.      (42)

Moreover, if P is the positive definite solution of (35), then under the feedback controllers (39) the multi-agent systems (1) achieve consensus.

Proof. The proof is similar to that of Theorem 1, so we do not repeat the details.

From Theorem 3, under the distributed state feedback controllers (39) and (40), all agents achieve consensus, and the state of each agent converges to zero, which is also consistent with the single-system result [26].

Similar to Theorem 2, we also discuss the asymptotic optimality of the new distributed controllers (39).

Theorem 4. Under the proposed distributed controllers (39) and (40), with Li, i = 1, 2, ···, N, chosen from Theorem 3, the cost is given by

    J⋆(s, ∞) = X^T(s) P X(s) + Σ_{k=s}^{∞} [X(k); X̃(k)]^T [0, M1; M1^T, M2] [X(k); X̃(k)],      (43)

where

    M1 = (Ã + B̃K)^T P Ω − [K1^T R1 K1 ··· KN^T RN KN],
    M2 = diag{K1^T R1 K1, K2^T R2 K2, ···, KN^T RN KN} + Ω^T P Ω.
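The stacked objects in (31)-(32) are mechanical to build; the construction can be sketched with toy data (the A, Bi, Q below are illustrative, not the paper's). Note that the block pattern Qii = (N − 1)Q, Qij = −Q is exactly Lc ⊗ Q for the complete-graph Laplacian Lc = N·I − 11^T.

```python
import numpy as np

# Building the stacked system (31)-(32) from toy data (A, B_i, Q below are
# illustrative): A_tilde = I_N (x) A, B_tilde block diagonal, and the global
# weight with blocks Q_ii = (N-1)Q, Q_ij = -Q.
N, n = 3, 2
A = np.array([[1.0, 0.2],
              [0.0, 0.9]])
Bs = [np.array([[1.0], [0.0]]),
      np.array([[0.0], [1.0]]),
      np.array([[1.0], [1.0]])]
Q = np.eye(n)

A_tilde = np.kron(np.eye(N), A)                 # I_N (x) A

B_tilde = np.zeros((N * n, N))                  # diag{B_1, ..., B_N}
for i, Bi in enumerate(Bs):
    B_tilde[i * n:(i + 1) * n, i:i + 1] = Bi

Lc = N * np.eye(N) - np.ones((N, N))            # complete-graph Laplacian
Q_glob = np.kron(Lc, Q)                         # Q_ii = (N-1)Q, Q_ij = -Q

print(A_tilde.shape, B_tilde.shape)
```

Q_glob is only positive semidefinite: Lc has a zero eigenvalue along the all-ones direction, so X^T Q_glob X = Σ_{i&lt;j} (xi − xj)^T Q (xi − xj) penalizes disagreement between agents rather than the states themselves.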
Moreover, the cost difference between the cost function (43) and the cost under the centralized optimal control is given by

    ΔJ(s, ∞) = J⋆(s, ∞) − J*(s, ∞)
             = Σ_{k=s}^{∞} [X(k); X̃(k)]^T [0, M1; M1^T, M2] [X(k); X̃(k)].      (44)

Proof. The proof is similar to that of Theorem 2, so the details are omitted.

IV. NUMERICAL SIMULATION

In this section, we validate the proposed theoretical results through numerical examples.

Example 1. Consider a multi-agent system consisting of four homogeneous agents with the system matrices taken from [27]:

    A = [1.1, 0.3; 0, 0.8],  B = [1; 0.5].      (45)

Fig. 1. Communication topology among four agents (edges e12, e23, e34).

The interactions among the agents are given in Fig. 1, in which each agent receives neighbor error information. Then, we can determine Hi, i = 1, 2, 3, 4, as

    H1 = [I2 0 0],  H2 = [0 I2 0],  H3 = [0 I2 0],  H4 = 0.

We choose

    Q = I2,  R1 = R2 = R3 = R4 = 1.

According to the ARE (11) and the optimization in (23), the feedback gains Kei and the observer gains can be obtained, respectively. Fig. 2 displays the evolution of each agent's state under the proposed consensus algorithm (M2); it is shown that all agents' states reach the consensus value after 12 steps. The corresponding observer error vector e1(k) − ê1(k) under the proposed controller (13) converges to zero, as shown in Fig. 4. With the same initial conditions, Fig. 3 shows the state trajectories under the traditional state feedback method (M1) [5]. One can see that the second state of each agent reaches consensus only after 25 steps. To further compare the convergence performance of the different consensus algorithms, quantitative comparisons based on the spectral radius ρ(Ãec) and the norm of the first agent's state at different time instants are shown in Table I. It can be observed that the proposed consensus algorithm reduces the maximum eigenvalue of Ãec and the norm of each agent's state ‖xi(k0)‖. Therefore, the proposed distributed observer-based consensus algorithm (13) ensures that all agents reach consensus with a faster convergence speed.

Fig. 2. The state trajectories of each agent xi(k), i = 1, 2, 3, 4.

Fig. 3. The state trajectories xi(k) under the traditional consensus method.

Fig. 4. Observer error trajectories ẽ1(k).

Fig. 5. Communication topology among three agents.

Example 2. We first consider a multi-agent system that consists of three agents over a directed graph described in Fig. 5.
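Two facts in Example 1 can be checked quickly: the system (45) is open-loop unstable, so consensus genuinely requires feedback, and the spectral radii reported for M1 and M2 in Table I quantify the convergence gap, since the consensus error norm decays roughly like ρ^k. The script below reuses only those reported values; everything else is elementary.

```python
import numpy as np

# Checks on Example 1: (45) is open-loop unstable, and the spectral radii
# reported in Table I imply the M1-vs-M2 convergence gap, since the error
# norm decays roughly like rho^k.
A = np.array([[1.1, 0.3],
              [0.0, 0.8]])
print(sorted(abs(np.linalg.eigvals(A))))   # magnitudes 0.8 and 1.1: one unstable mode

rho_M1, rho_M2 = 0.8777, 0.7876            # spectral radii from Table I
k = 12                                     # consensus step count reported for M2
gap = (rho_M2 / rho_M1) ** k               # ratio of the two decay bounds at k = 12
print(gap)                                 # ~0.27: M2's bound is about 3.7x tighter
```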
TABLE I
ρ(Ãec) AND THE NORM OF THE FIRST AGENT'S STATE UNDER DIFFERENT ALGORITHMS

                        ‖x1‖2 at step
Method  ρ(Ãec)    1       2       4       6       8       10      12      14      16      18      20
M1      0.8777  1.4142  1.3670  1.3639  1.2424  0.9817  0.6218  0.4494  0.9513  1.7692  2.7805  3.9881
M2      0.7876  1.4142  1.2972  0.7293  0.3349  0.3702  0.6243  0.8271  1.0638  1.3096  1.6056  1.9587

Fig. 6. Observer error trajectories X̃i(k) in Theorem 3.

V. CONCLUSIONS

In this paper, we have studied the consensus problem for discrete-time linear multi-agent systems by LQ optimal control theory.

REFERENCES

[3] Y. Yang, Y. Xiao, and T. Li, "Attacks on formation control for multiagent systems," IEEE Transactions on Cybernetics, vol. 52, no. 12, pp. 12805–12817, Dec. 2022.
[4] Z. Li and Z. Duan, "Distributed consensus protocol design for general linear multi-agent systems: a consensus region approach," IET Control Theory & Applications, vol. 8, no. 18, pp. 2145–2161, Dec. 2014.
[5] R. Olfati-Saber and R. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, Sep. 2004.
[6] W. Ren and R. Beard, "Consensus seeking in multiagent systems under dynamically changing interaction topologies," IEEE Transactions on Automatic Control, vol. 50, no. 5, pp. 655–661, May 2005.
[7] W. Ren and E. Atkins, "Distributed multi-vehicle coordinated control via local information exchange," International Journal of Robust and Nonlinear Control, vol. 17, no. 10-11, pp. 1002–1033, 2007.
[8] K. You and L. Xie, "Network topology and communication data rate for consensusability of discrete-time multi-agent systems," IEEE Transactions on Automatic Control, vol. 56, no. 10, pp. 2262–2275, Oct. 2011.
[9] T. Feng, J. Zhang, Y. Tong, and H. Zhang, "Consensusability and global optimality of discrete-time linear multiagent systems," IEEE Transactions on Cybernetics, vol. 52, no. 8, pp. 8227–8238, Aug. 2022.