Zhongkui Li • Zhisheng Duan

Cooperative Control of Multi-Agent Systems: A Consensus Region Approach
Series Editors

FRANK L. LEWIS, Ph.D., Fellow IEEE, Fellow IFAC
Professor, The University of Texas Research Institute, The University of Texas at Arlington

SHUZHI SAM GE, Ph.D., Fellow IEEE
Professor, Interactive Digital Media Institute, The National University of Singapore
CRC Press, Taylor & Francis Group, Boca Raton, FL
CONTENTS
1.1 Introduction to Cooperative Control of Multi-Agent Systems . . . . . . . 1
1.1.1 Consensus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Formation Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.3 Flocking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Overview of This Monograph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Mathematical Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.1 Notations and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.2 Basic Algebraic Graph Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.3 Stability Theory and Technical Tools . . . . . . . . . . . . . . . . . . . . . . . 16
1.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Agent Dynamics
For different scenarios, the dynamics of the employed agents may also be
different. The simplest agent dynamics are those modeled by first-order inte-
grators, which can be adopted if the velocities of the agents such as mobile
sensors and robots can be directly manipulated. When the accelerations rather
than the velocities of the agents can be directly manipulated, it is common
to use second-order integrators to describe the agent dynamics. Most existing
works on cooperative control of multi-agent systems focus on first-order and
second-order integrator agent dynamics. More complicated agent dynamics are
described by high-order linear systems, which contain integrator-type agents
as special cases and can also be regarded as the linearized models of nonlinear
agent dynamics. If we are concerned with cooperative control of multiple
robotic manipulators or attitude synchronization in satellite formation flying,
the agents are generally described by second-order nonlinear Euler-Lagrange
equations. In some cases, the agents may be perturbed by parameter uncertainties,
unknown parameters, or external disturbances, which should also be
taken into account.
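For concreteness, the two integrator models can be written as one-step Euler updates (an illustrative sketch, not from the book; the function names are hypothetical):

```python
def first_order_step(x, u, dt):
    # First-order integrator: x_i' = u_i, i.e., the velocity is commanded
    return x + dt * u

def second_order_step(x, v, u, dt):
    # Second-order integrator: x_i' = v_i, v_i' = u_i, i.e., the
    # acceleration is commanded and the velocity is itself a state
    return x + dt * v, v + dt * u
```

Here u is whatever cooperative control law is in force; the high-order linear model ẋ = Ax + Bu studied in Chapter 2 contains both integrator types as special cases.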
For multi-agent systems studied by the systems and control community,
the agents are generally dynamically decoupled from each other, which implies
the necessity of cooperation, in terms of information exchange, between the
agents to achieve collective behavior. Specifically, each agent needs to receive
information from other agents via a direct sensing or communication network.
The interaction (information exchange) topology among the agents is usually
represented as a graph, where each agent is a node and the information ex-
changes between the agents are represented as edges. The relevant graph
theory is summarized in Section 1.3. An edge in a graph implies
that there exists information flow between the nodes on this edge. The infor-
mation flows among the agents can be directed or undirected. The interaction
graph among the agents can be static or dynamic. For static graphs, the edges
are not time varying, i.e., the graph remains unchanged throughout the whole
process. In contrast, the edges in dynamic graphs are time-varying, due,
for example, to disturbances or to communication range limitations.
For the cooperative control problem, the main task is to design appropriate
controllers to achieve a desired coordination objective. For cooperative control
design, two approaches, namely, the centralized approach and the distributed
approach, are commonly adopted. The centralized approach assumes that at
least a central hub (which can be one of the agents) is available and has the
ability to collect information from and send control signals to all the other
agents. The distributed approach is based on local interactions only, i.e., each
agent exchanges information with its neighbors. Due to the large number of
agents, the spatial distribution of actuators, limited sensing capability of sen-
sors, and short wireless communication ranges, it is considered too expensive
or even infeasible in practice to implement centralized controllers. Thus, dis-
tributed control, depending only on local information of the agents and their
neighbors, appears to be a promising resolution for multi-agent systems. In
some cases, only the relative states or output information between neighboring
agents, rather than the absolute states or outputs of the agents, is accessible
for controller design. One typical case is that the agents are equipped with
only ultrasonic range sensors. Another instance is deep-space formation flying,
where the measurement of the absolute position of the spacecraft is accurate
only on the order of kilometers, which is thus useless for control purposes [155].
In the past decade, cooperative control of multi-agent systems has been
extensively studied and a large number of research works have been
reported; see the recent surveys [4, 20, 115, 120, 134], the monographs
[8, 74, 112, 127, 138, 139], and the references therein. In the following
subsections, we will briefly introduce some typical cooperative control problems:
consensus, formation control, and flocking.
1.1.1 Consensus
In the area of cooperative control of multi-agent systems, consensus is an
important and fundamental problem, which is closely related to formation
control, flocking, distributed estimation, and so on. Consensus means that a
group of agents reaches agreement on a quantity of interest via local interactions.
1.1.3 Flocking
Flocking is a typical collective behavior in autonomous multi-agent systems,
which can be widely observed in nature, for example, in flocks of birds,
schools of fish, and swarms of bacteria. In practice, understanding
the mechanisms responsible for the emergence of flocking in animal groups can
help develop many artificial autonomous systems such as formation control of
unmanned air vehicles, motion planning of mobile robots, and scheduling of
automated highway systems.
The classical flocking model proposed by Reynolds [140] in the 1980s con-
sists of three heuristic rules: 1) collision avoidance: attempt to avoid collisions
with nearby flockmates; 2) velocity matching: attempt to match velocity with
nearby flockmates; and 3) flock centering: attempt to stay close to nearby
flockmates. These three rules are usually known as separation, alignment, and
cohesion, respectively. In 1995, Vicsek et al. proposed a simple model for phase
transition of a group of self-driven particles and numerically observed the col-
lective behavior of synchronization of multiple agents’ headings [174]. In a
sense, Vicsek’s model can be viewed as a special case of Reynolds’ flocking
model, because only velocity matching is considered in the former. To sys-
tematically investigate the flocking control problem in multi-agent systems,
distributed algorithms consisting of a local artificial potential function and a
velocity consensus component were proposed in [119, 165]. The local artificial
potential function guarantees separation and cohesion whereas the velocity
consensus component ensures alignment in such multi-agent systems. These
three-rule flocking algorithms were then extended to deal
with obstacle avoidance in [28]. The flocking control problem of multiple
nonholonomic mobile robots was investigated in [39] by using tools from nonlinear
control theory. Distributed protocols to achieve flocking in the presence of a
single or multiple virtual leaders were designed in [160, 161, 162]. The flocking
control problem of multi-agent systems with nonlinear velocity couplings was
investigated in [52, 179]. Distributed hybrid controllers were proposed in [194],
which ensure that the flocking behavior emerges while the network
connectivity is preserved.
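As a minimal illustration of the velocity-matching (alignment) rule, the sketch below replaces each agent's velocity by the average over its neighborhood (numpy-based; the 0/1 adjacency matrix and the function name are our assumptions, not the models of [140, 174]):

```python
import numpy as np

def alignment_step(vel, adj):
    # Velocity matching: replace each velocity by the average of the
    # agent's own velocity and those of its neighbors (adj is 0/1).
    n = len(vel)
    nbr = adj + np.eye(n)                     # include self in the average
    return (nbr @ vel) / nbr.sum(axis=1, keepdims=True)

# All-to-all neighborhood: a single step aligns every velocity to the mean.
vel = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
adj = np.ones((3, 3)) - np.eye(3)
aligned = alignment_step(vel, adj)
```

A full flocking algorithm in the sense of [119, 165] would add the gradient of a local artificial potential function to this update to handle separation and cohesion.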
the set of positive real numbers. The superscript T means transpose for real
matrices and H means conjugate transpose for complex matrices. I_N represents
the identity matrix of dimension N. Matrices, if not explicitly stated,
are assumed to have compatible dimensions. Denote by 1 a column vector
with all entries equal to one. ‖x‖ denotes the 2-norm of a vector x and ‖A‖
denotes the induced 2-norm of a real matrix A. For ζ ∈ C, denote by Re(ζ)
its real part and by Im(ζ) its imaginary part. rank(A) denotes the rank of a
matrix A. diag(A_1, · · · , A_n) represents a block-diagonal matrix with matrices
A_i, i = 1, · · · , n, on its diagonal. det(B) denotes the determinant of a matrix
B. For real symmetric matrices X and Y, X > (≥) Y means that X − Y
is positive (semi-)definite. For a symmetric matrix A, λ_min(A) and λ_max(A)
denote, respectively, the minimum and maximum eigenvalues of A. range(B)
denotes the column space of a matrix B, i.e., the span of its column vectors. ι
denotes the imaginary unit.
Definition 1 A matrix A ∈ Cn×n is neutrally stable in the continuous-time
sense if it has no eigenvalue with positive real part and the Jordan block cor-
responding to any eigenvalue on the imaginary axis is of size one, while it is
Hurwitz if all of its eigenvalues have strictly negative real parts.
Definition 2 A matrix A ∈ Cn×n is neutrally stable in the discrete-time
sense if it has no eigenvalue with magnitude larger than 1 and the Jordan block
corresponding to any eigenvalue with unit magnitude is of size one, while it is
Schur stable if all of its eigenvalues have magnitude less than 1.
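Definitions 1 and 2 can be checked numerically: neutral stability amounts to requiring that every imaginary-axis (respectively, unit-circle) eigenvalue be semisimple. A sketch for the continuous-time case (numpy-based; the function name and tolerances are our assumptions):

```python
import numpy as np

def is_neutrally_stable_ct(A, tol=1e-9):
    """Check continuous-time neutral stability (Definition 1): no
    eigenvalue in the open right half plane, and every eigenvalue on the
    imaginary axis is semisimple (Jordan blocks of size one)."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    if np.any(eigvals.real > tol):
        return False
    for lam in eigvals:
        if abs(lam.real) <= tol:  # eigenvalue on the imaginary axis
            alg_mult = int(np.sum(np.abs(eigvals - lam) <= 1e-6))
            geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=1e-6)
            if geo_mult < alg_mult:  # a Jordan block larger than one
                return False
    return True
```

For example, A = [[0, 1], [−1, 0]] is neutrally stable (eigenvalues ±ι, diagonalizable), while the double integrator [[0, 1], [0, 0]] is not, since its zero eigenvalue carries a Jordan block of size two.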
Definition 3 A matrix A ∈ C^{n×n} is a Hermitian matrix if A = A^H. Real
Hermitian matrices are called symmetric matrices. A is a unitary matrix if A^H A =
AA^H = I. Real unitary matrices are called orthogonal matrices.
Definition 4 Matrix P ∈ Rn×n is an orthogonal projection onto a subspace
S, if range(P ) = S, P 2 = P , and P T = P . The orthogonal projection onto a
subspace is unique. For an orthogonal projection P onto a subspace S, if the
columns of C ∈ R^{n×m} are an orthonormal basis for S, then P = CC^T and
S = range(P) [168].
Definition 5 The Kronecker product of matrices A ∈ Rm×n and B ∈ Rp×q
is defined as the mp × nq block matrix

            ( a_11 B  · · ·  a_1n B )
    A ⊗ B = (   ⋮       ⋱       ⋮   ),
            ( a_m1 B  · · ·  a_mn B )
which satisfies the following properties:
(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD),
(A ⊗ B)T = AT ⊗ B T ,
A ⊗ (B + C) = A ⊗ B + A ⊗ C,
where the matrices are assumed to be compatible for multiplication.
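These properties are easy to confirm numerically with numpy's kron (an illustrative check on randomly generated matrices of compatible dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 2))
C = rng.standard_normal((3, 2))
D = rng.standard_normal((2, 2))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)

# Transpose property and distributivity over addition
t_ok = np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
d_ok = np.allclose(np.kron(A, B + D), np.kron(A, B) + np.kron(A, D))
```

The mixed-product property is used repeatedly in later chapters, e.g., to decompose the closed-loop network dynamics (2.3) along the eigenvalues of the Laplacian matrix.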
with a simple path to all the other nodes. A graph is said to have or contain
a directed spanning tree if a subset of the edges forms a directed spanning
tree. This is equivalent to saying that the graph has at least one node with
directed paths to all other nodes. For undirected graphs, the existence of a
directed spanning tree is equivalent to being connected. However, in directed
graphs, the existence of a directed spanning tree is a weaker condition than
being strongly connected. A strongly connected graph contains at least one
directed spanning tree.
FIGURE 1.2: Different types of graphs of six nodes: (a) an undirected con-
nected graph, (b) a strongly connected graph, (c) a balanced and strongly
connected graph, (d) a directed spanning tree.
FIGURE 1.2 shows several examples of different types of graphs with six
nodes. In this book, a directed edge in a graph is denoted by a line with
a directional arrow, while an undirected edge is denoted by a line with a
bidirectional arrow or no arrows.
The adjacency matrix A = [a_ij] ∈ R^{N×N} associated with the directed
graph G is defined such that a_ii = 0, a_ij is a positive value if (v_j, v_i) ∈ E,
and a_ij = 0 otherwise. The adjacency matrix of an undirected graph is defined
analogously except that a_ij = a_ji for all i ≠ j. Note that a_ij denotes the
weight of the edge (v_j, v_i) ∈ E. If the weights are not relevant, then a_ij is set
equal to 1 if (v_j, v_i) ∈ E. The Laplacian matrix L = [L_ij] ∈ R^{N×N} of the graph
G is defined as L_ii = Σ_{j≠i} a_ij and L_ij = −a_ij, i ≠ j. The Laplacian matrix
can be written in the compact form L = D − A, where D = diag(d_1, · · · , d_N)
is the degree matrix with d_i the in-degree of the i-th node.
Example 1 For the graphs depicted in FIGURE 1.2, we compute the corresponding
Laplacian matrices as

         (  3  −1   0   0  −1  −1 )        (  3   0   0  −1  −1  −1 )
         ( −1   3  −1  −1   0   0 )        ( −1   1   0   0   0   0 )
    La = (  0  −1   2   0   0  −1 ),  Lb = ( −1  −1   2   0   0   0 ),
         (  0  −1   0   2   0  −1 )        ( −1   0   0   1   0   0 )
         ( −1   0   0   0   1   0 )        (  0   0   0  −1   1   0 )
         ( −1   0  −1  −1   0   3 )        (  0   0   0   0  −1   1 )

         (  2   0  −1  −1   0   0 )        (  0   0   0   0   0   0 )
         ( −1   2   0   0   0  −1 )        ( −1   1   0   0   0   0 )
    Lc = (  0  −1   1   0   0   0 ),  Ld = ( −1   0   1   0   0   0 ).
         ( −1   0   0   2  −1   0 )        (  0  −1   0   1   0   0 )
         (  0   0   0  −1   2  −1 )        (  0   0  −1   0   1   0 )
         (  0  −1   0   0  −1   2 )        (  0   0   0   0  −1   1 )
From the definition of the Laplacian matrix and also the above example,
it is easy to see that L is diagonally dominant and has nonnegative diagonal
entries. Since L has zero row sums, 0 is an eigenvalue of L with an associ-
ated eigenvector 1. According to Gershgorin’s disc theorem [58], all nonzero
eigenvalues of L are located within a disk in the complex plane centered at
dmax and having radius of dmax , where dmax denotes the maximum in-degree
of all nodes. According to the definition of M -matrix in the last subsection,
we know that the Laplacian matrix L is a singular M -matrix.
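These properties can be verified on a small example (a sketch assuming numpy; the three-node directed cycle is our choice, not the book's):

```python
import numpy as np

def laplacian(adj):
    # L = D - A, with D the diagonal in-degree matrix (row sums of A)
    return np.diag(adj.sum(axis=1)) - adj

# Directed cycle on 3 nodes: a_ij > 0 iff (v_j, v_i) is an edge,
# so row i of the adjacency matrix lists the in-neighbors of v_i.
A = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
L = laplacian(A)

row_sums = L.sum(axis=1)          # all zero, so L @ 1 = 0
eigvals = np.linalg.eigvals(L)
```

For this graph d_max = 1, and indeed every eigenvalue lies in the Gershgorin disk |z − d_max| ≤ d_max, with 0 among the eigenvalues and all real parts nonnegative.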
where y_i denotes the i-th entry of y. For strongly connected and balanced
directed graphs, y^T L y ≥ 0, ∀ y ∈ R^N, still holds, due to the property that
(L + L^T)/2 represents the Laplacian matrix of the undirected graph obtained
from the original directed graph by replacing the directed edges with
undirected ones [121]. However, for general directed graphs which are not balanced,
the Laplacian matrix L is not symmetric and y^T L y can be sign-indefinite.
Fortunately, by exploiting the property of L being a singular M -matrix, we
have the following result.
The stochastic matrix is more convenient for dealing with the consensus
problem for discrete-time multi-agent systems, as detailed in Chapter 3.
For a directed graph G with N nodes, the row-stochastic matrix D ∈ R^{N×N}
is defined such that d_ii > 0, d_ij > 0 if (j, i) ∈ E and d_ij = 0 otherwise, and
Σ_{j=1}^N d_ij = 1 for every i. The stochastic matrix of an undirected graph can
be defined analogously except that d_ij = d_ji for all i ≠ j. For undirected
graphs, the stochastic matrix D is doubly stochastic, since it is symmetric
and all its row sums and column sums equal 1. Because the stochastic matrix
D has unit row sums, 1 is clearly an eigenvalue of D with an associated
eigenvector 1. According to Gershgorin's disc theorem [58], all of the
eigenvalues of D are located in the unit disk centered at the origin.
Lemma 7 ([133]) All of the eigenvalues of the stochastic matrix D are either
in the open unit disk centered at the origin or equal to 1. Furthermore, 1 is a
simple eigenvalue of D if and only if the graph G contains a directed spanning
tree.
Note that in the above lemma I − εL with ε ∈ (0, 1/d_max) is actually a
row-stochastic matrix, since I − εL has nonnegative entries and has row sums
equal to 1, which follows from the fact that (I − εL)1 = 1.
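This construction can be checked numerically (a sketch; the three-node directed cycle and the choice ε = 1/(2 d_max) are illustrative assumptions):

```python
import numpy as np

# Laplacian of a three-node directed cycle (contains a directed spanning tree)
L = np.array([[ 1., -1.,  0.],
              [ 0.,  1., -1.],
              [-1.,  0.,  1.]])
d_max = np.max(np.diag(L))        # maximum in-degree
eps = 0.5 / d_max                 # any eps in (0, 1/d_max) works

D = np.eye(3) - eps * L           # row-stochastic: (I - eps*L) @ 1 = 1
eigvals_D = np.linalg.eigvals(D)
```

Consistent with Lemma 7, this D has the simple eigenvalue 1, and all other eigenvalues lie strictly inside the unit disk centered at the origin.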
lim_{t→∞} W_3(x(t)) = 0.
Lemma 15 ([26]) For a system ẋ = f (x, t), where f (x, t) is locally Lipschitz
in x and piecewise continuous in t, assume that there exists a continuously
differentiable function V (x, t) such that along any trajectory of the system,
For the special case of an autonomous system ẋ = f(x), the condition (1.4)
can be modified to: there exists a continuous, positive definite, and radially
unbounded function V(x) such that V̇(x) ≤ −W(x) + ε, where W(x) is also
positive definite and radially unbounded.
existence of the solution u(t), and suppose u(t) ∈ J for all t ∈ [t0, T). Let v(t)
be a continuous function whose upper right-hand derivative D⁺v(t)² satisfies
the differential inequality D⁺v(t) ≤ f(t, v(t)),
where v(t) ∈ J for all t ∈ [t0, T). Then, v(t) ≤ u(t) for all t ∈ [t0, T).
2x^T y ≤ x^T P x + y^T P^{-1} y.
(1) S < 0;
(2) S_11 < 0, S_22 − S_12^T S_11^{-1} S_12 < 0;
(3) S_22 < 0, S_11 − S_12 S_22^{-1} S_12^T < 0.
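The equivalence of the three conditions (the Schur complement lemma) can be spot-checked numerically (an illustrative example with scalar blocks, assuming numpy):

```python
import numpy as np

# Symmetric block matrix S = [[S11, S12], [S12.T, S22]]
S11 = np.array([[-2.0]])
S12 = np.array([[1.0]])
S22 = np.array([[-2.0]])
S = np.block([[S11, S12], [S12.T, S22]])

def neg_def(M):
    # Negative definiteness via eigenvalues of a symmetric matrix
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

cond1 = neg_def(S)
cond2 = neg_def(S11) and neg_def(S22 - S12.T @ np.linalg.inv(S11) @ S12)
cond3 = neg_def(S22) and neg_def(S11 - S12 @ np.linalg.inv(S22) @ S12.T)
```

Here all three conditions hold simultaneously, as the lemma predicts; the Schur complement is what allows the LMIs of Chapter 2, such as (2.17) and (2.18), to be reduced to smaller blocks.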
1.4 Notes
The materials of Section 1.1 are mainly based on [4, 8, 20, 74, 112, 120, 134,
138, 139]. Section 1.3.2 is mainly based on [48, 134, 138]. Section 1.3.3 is
mainly based on [15, 71, 72, 154].
² D⁺v(t) is defined by D⁺v(t) = lim sup_{h→0⁺} [v(t + h) − v(t)]/h.
2
Consensus Control of Linear Multi-Agent
Systems: Continuous-Time Case
CONTENTS
2.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 State Feedback Consensus Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2.1 Consensus Condition and Consensus Value . . . . . . . . . . . . . . . . . 22
2.2.2 Consensus Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.3 Consensus Protocol Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.3.1 The Special Case with Neutrally Stable Agents . . 30
2.2.3.2 The General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.3.3 Consensus with a Prescribed Convergence Rate . 33
2.3 Observer-Type Consensus Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3.1 Full-Order Observer-Type Protocol I . . . . . . . . . . . . . . . . . . . . . . . 35
2.3.2 Full-Order Observer-Type Protocol II . . . . . . . . . . . . . . . . . . . . . . 40
2.3.3 Reduced-Order Observer-Based Protocol . . . . . . . . . . . . . . . . . . . 41
2.4 Extensions to Switching Communication Graphs . . . . . . . . . . . . . . . . . . . . 43
2.5 Extension to Formation Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
where L is the Laplacian matrix of G. The network (2.3) can also be regarded
as a large-scale interconnected system [151].
Let r = [r_1, · · · , r_N]^T ∈ R^N be the left eigenvector of L associated with
the eigenvalue 0, satisfying r^T 1 = 1. Introduce a new variable δ ∈ R^{Nn}
as follows:
δ = [(I_N − 1r^T) ⊗ I_n] x.   (2.4)
By the definition of r, it is easy to verify that δ satisfies
(rT ⊗ In )δ = 0. (2.5)
Remark 2 From (2.12), we can see that the final consensus value ̟(t) de-
pends on the state matrix A, the communication topology L, and the initial
states of the agents. Here we briefly discuss how the agent dynamics affect
̟(t). If A is Hurwitz, then ̟(t) → 0 as t → ∞, i.e., the consensus problem
in this case is trivial; If A has eigenvalues along the imaginary axis but no
eigenvalues with positive real parts, then ̟(t) is bounded; If A is unstable,
i.e., it has eigenvalues with positive real parts, then ̟(t) will tend to infin-
ity exponentially. The usefulness of studying the last case is that the linear
agents in (2.1) sometimes denote the linearized dynamics of nonlinear agents
(e.g., chaotic systems) whose solution, even though A is unstable, can still be
bounded, implying the final consensus value may also be bounded.
Definition 8 The region S of the parameter σ belonging to the open right half
plane, such that the matrix A + σBK is Hurwitz, is called the (continuous-
time) consensus region of the network (2.3).
By definition, the shape of the consensus region relies only on the agent
dynamics and the feedback gain matrix K of (2.2). For directed communication
graphs, the consensus regions can be roughly classified into five types,
namely, bounded consensus regions, consensus regions with unbounded real
part but bounded imaginary part, consensus regions with unbounded imaginary
part but bounded real part, consensus regions with both unbounded
real and imaginary parts, and disconnected consensus regions. For undirected
communication graphs, there are only three different types of consensus
regions, namely, bounded consensus regions, unbounded consensus regions, and
disconnected consensus regions.
It should be mentioned that the notion of consensus region is similar to
some extent to the synchronization region issue of complex dynamical net-
works, studied in [41, 42, 101, 105, 124]. The types of bounded, unbounded,
and disconnected synchronization regions are discussed in [41, 101, 105] for
undirected graphs and in [42] for directed graphs. For the case where the inner
linking matrix (equivalently, the matrix BK in (2.3)) is an identity matrix,
an unbounded synchronization region naturally arises [101].
The key differences between the consensus region in this chapter and the
synchronization region in [41, 42, 101, 105, 124] are
• For the synchronization region in [41, 42, 101, 105, 124], the inner linking
matrix can be arbitrarily chosen. It is not the case here, since B is a priori
given, implying that the consensus region in this chapter is more complicated
to analyze.
• We are more interested in how the consensus region is related to the consen-
sus protocol and the communication topology, and more importantly, how
to design consensus protocols to yield desirable consensus regions.
The following result is a direct consequence of Theorem 1.
Corollary 1 The agents described by (2.1) reach consensus under the protocol
(2.2) if and only if cλ_i ∈ S, i = 2, · · · , N, where λ_i, i = 2, · · · , N, are the
nonzero eigenvalues of L.¹
In light of the consensus region notion, the design of the consensus protocol
(2.2) can be divided into two steps:
1) Determine the feedback gain matrix K to yield a desirable consensus region;
2) Then adjust the coupling gain c such that cλi , i = 2, · · · , N , belong to the
consensus region.
The consensus protocol design based on the consensus region as above
has a desirable decoupling feature. Specifically, in step 1) the design of the
¹ Actually, cλ_i, i = 2, · · · , N, are the nonzero eigenvalues of the weighted Laplacian
matrix cL.
feedback gain matrix K of the consensus protocol relies on only the agent
dynamics, independent of the communication topology, while the effect of the
communication topology on consensus is handled in step 2) by manipulating
the coupling gain c.
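The two-step procedure reduces to a numerical check of Corollary 1: pick K, then test whether each cλ_i lies in S by testing whether A + cλ_i BK is Hurwitz. A sketch (the single-integrator example below is an illustrative assumption, not from the book):

```python
import numpy as np

def reaches_consensus(A, B, K, c, L):
    """Check the condition of Corollary 1: A + c*lam_i*B*K is Hurwitz
    for every nonzero eigenvalue lam_i of the Laplacian L."""
    eigvals = np.linalg.eigvals(L)
    for lam in (e for e in eigvals if abs(e) > 1e-9):
        M = A + c * lam * (B @ K)
        if np.max(np.linalg.eigvals(M).real) >= 0:
            return False
    return True

# Single integrators (A = 0, B = 1) with K = -1: the consensus region is
# the whole open right half plane, so any c > 0 works for a connected
# undirected graph.
A = np.array([[0.0]]); B = np.array([[1.0]]); K = np.array([[-1.0]])
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])   # path graph on 3 nodes
```

Step 1) fixes K from the agent dynamics alone; step 2) only scales c, which in this sketch amounts to rerunning the check for different values of c.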
It is worth noting that the consensus region S can be seen as the stability
region of the matrix pencil A + σBK with respect to the complex parameter
σ. Thus, tools from the stability of matrix pencils will be utilized to analyze
the consensus region. Before moving on, the lemma below is needed.
Example 3 (Consensus region with unbounded real part but bounded imagi-
nary part)
The agent dynamics and the consensus protocol are given by (2.1) and
(2.2), respectively, with
    A = ( −2  −1 ),   B = ( 1  0 ),   K = ( 0  −1 ),   σ = x + ιy.
        (  2   1 )        ( 0  1 )        ( 1   0 )
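The claimed shape of this region can be probed numerically by testing the Hurwitzness of A + σBK over sample values of σ (a sketch; the helper name is ours):

```python
import numpy as np

A = np.array([[-2., -1.], [2., 1.]])
B = np.eye(2)
K = np.array([[0., -1.], [1., 0.]])

def in_region(sigma):
    # sigma lies in the consensus region S iff A + sigma*B*K is Hurwitz
    return bool(np.max(np.linalg.eigvals(A + sigma * B @ K).real) < 0)
```

Consistent with the example's title, arbitrarily large real σ stays in S, while moving even one unit off the real axis (e.g., σ = 1 + ι) leaves it.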
Example 5 The agent dynamics and the consensus protocol are given by
(2.1) and (2.2), respectively, with
    A = ( −2.4188    1    0.0078 ),   B = ( 1        0       ),
        ( −0.3129   −1    1.1325 )        ( 0        1       )
        ( −0.0576    1   −0.9778 )        ( 3.7867  −0.0644  )

    K = ( −0.85   0   0.35 ).
        ( −9.6    0   6.01 )
For simplicity, we here consider the case where the communication graph G is
undirected. The characteristic polynomial of A + σBK can be obtained as
From the above inequalities, we can obtain that the consensus region is S =
(1.037, 3.989), which is clearly bounded. Assume that the communication graph
G is given by FIGURE 2.3. The Laplacian matrix of G is equal to
    L = (  4  −1  −1  −1  −1   0 )
        ( −1   3  −1   0   0  −1 )
        ( −1  −1   2   0   0   0 ),
        ( −1   0   0   2  −1   0 )
        ( −1   0   0  −1   3  −1 )
        (  0  −1   0   0  −1   2 )
whose nonzero eigenvalues are 1.382, 1.6972, 3.618, 4, 5.3028. It follows from
Corollary 1 that consensus is achieved if and only if the coupling gain c
satisfies 0.7504 < c < 0.7522.
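The coupling-gain interval follows directly from the bounded region and the listed eigenvalues; as a quick sanity check (plain Python, using only the values stated above):

```python
# Bounded consensus region S = (s_lo, s_hi): Corollary 1 requires
# c*lam_i in S for every nonzero Laplacian eigenvalue lam_i.
s_lo, s_hi = 1.037, 3.989
lams = [1.382, 1.6972, 3.618, 4.0, 5.3028]

c_min = s_lo / min(lams)   # the smallest c*lam_i must exceed s_lo
c_max = s_hi / max(lams)   # the largest c*lam_i must stay below s_hi
```

This reproduces 0.7504 < c < 0.7522 and shows how fragile consensus is here: the admissible interval of coupling gains has width below 0.002.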
FIGURE 2.3: The communication graph G.
S̃ + S̃^H = S − (x + ιy)HH^T + S^T − (x − ιy)HH^T = −2xHH^T ≤ 0,  ∀ x > 0.   (2.14)
Theorem 3 Given that A is neutrally stable and that Assumption 2.1 holds,
there exists a distributed protocol in the form of (2.2) that solves the
consensus problem and meanwhile yields an unbounded consensus region
(0, ∞) × (−∞, ∞) if and only if (A, B) is stabilizable. One such consensus
protocol is given by Algorithm 1.
which implies that the matrix A + (x + ιy)BK is Hurwitz for all x > 0 and
y ∈ R, because, by Lemma 22, S − (x + ιy)HH^T is Hurwitz for any x > 0
and y ∈ R. Hence, by Theorem 1, the protocol given by Algorithm 1 solves the
consensus problem with an unbounded consensus region (0, ∞) × (−∞, ∞).
AP + PA^T + BY + Y^T B^T < 0.

AP + PA^T − τBB^T < 0.   (2.17)
P[A + (x + ιy)BK]^H + [A + (x + ιy)BK]P
  = P[A + (x − ιy)BK]^T + [A + (x + ιy)BK]P
  = AP + PA^T − 2xBB^T < 0,

for all x ∈ [1, ∞) and y ∈ (−∞, ∞). That is, A + (x + ιy)BK is Hurwitz for
all x ∈ [1, ∞) and y ∈ (−∞, ∞).
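The same Lyapunov argument underlies a Riccati-based construction: if X > 0 is the stabilizing solution of A^T X + XA − XBB^T X + Q = 0 with Q > 0, then K = −B^T X gives (A + σBK)^H X + X(A + σBK) = −Q − (2 Re(σ) − 1) XBB^T X < 0 whenever Re(σ) ≥ 1/2, which covers the region [1, ∞) × (−∞, ∞). The sketch below (an illustrative scipy-based variant, not the book's algorithm) checks this for a double integrator:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: stabilizable but not Hurwitz
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])

# Stabilizing solution of A^T X + X A - X B B^T X + Q = 0, then K = -B^T X
X = solve_continuous_are(A, B, np.eye(2), np.eye(1))
K = -B.T @ X

def hurwitz(M):
    # M is Hurwitz iff all its eigenvalues have negative real parts
    return bool(np.max(np.linalg.eigvals(M).real) < 0)
```

Although A itself is not Hurwitz, A + σBK is Hurwitz for every tested σ with Re(σ) ≥ 1, including points far from the real axis.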
Theorem 1 and the above proposition together lead to the following result.
Remark 5 Compared to the case when A is neutrally stable, where the
consensus region is the open right half plane, for the case where A is not
neutrally stable the consensus regions that can be achieved are generally smaller.
This is consistent with the intuition that unstable behaviors are more difficult to
synchronize than stable behaviors.
Proof 6 The proof can be completed by following steps similar to those in the
proof of Proposition 1; it is omitted for conciseness.
Theorem 5 Assume that Assumption 2.1 holds. There exists a protocol (2.2)
that solves the consensus problem for the agents in (2.1) with a convergence
rate larger than α and yields an unbounded consensus region [1, ∞)×(−∞, ∞)
if and only if there exists a K such that A + BK is Hurwitz with a decay rate
larger than α.
ξ̃̇ = (I_N ⊗ A + cJ ⊗ H) ξ̃.   (2.26)
Note that ξ̃_1 = (r^T ⊗ I_{2n})ξ ≡ 0 and that the elements of the state matrix
of (2.26) are either block diagonal or block upper-triangular. Hence, ξ̃_i, i =
2, · · · , N, converge asymptotically to zero if and only if the N − 1 subsystems
along the diagonal, i.e.,
ξ̃̇_i = (A + cλ_i H) ξ̃_i,  i = 2, · · · , N,   (2.27)

are asymptotically stable. It is easy to check that the matrices A + cλ_i H are
similar to

    ( A + cλ_i LC      0    ),  i = 2, · · · , N.
    (  −cλ_i LC     A + BF  )

Therefore, the stability of the matrices A + BF and A + cλ_i LC, i = 2, · · · , N,
is equivalent to the state ξ of (2.25) converging asymptotically to zero, i.e.,
to the consensus problem being solved.
By following similar steps as in the proof of Theorem 2, we can obtain the
solution of (2.22) as
implying that

    z_i(t) → (r^T ⊗ e^{At}) [ z_1(0)^T, · · · , z_N(0)^T ]^T,  as t → ∞,   (2.29)

for i = 1, · · · , N. Since A + BK is Hurwitz, (2.29) directly leads to (2.23).
For the observer-type protocol (2.21), the consensus region can be similarly
defined. Assuming that A + BF is Hurwitz, the region of the parameter σ
belonging to the open right half plane, such that A + σLC is Hurwitz, is the
consensus region of the network (2.22).
In what follows, we will show how to design the observer-type protocol
(2.21). First, consider the special case where A is neutrally stable.
Theorem 7 Given that A is neutrally stable and that Assumption 2.1 holds,
there exists an observer-type protocol (2.21) that solves the consensus problem
and meanwhile yields an unbounded consensus region (0, ∞) × (−∞, ∞) if
and only if (A, B, C) is stabilizable and detectable. One such protocol can be
constructed by Algorithm 4.
By Lemma 23 (which is actually dual to Lemma 22), we can get that
Ŝ − (x + ιy)Ĥ^T Ĥ is Hurwitz for any x > 0 and
y ∈ R. Then, it follows from (2.30) that
A + (x + ιy)LC is Hurwitz for all x > 0 and y ∈ R. Hence, by Theorem 6, the
protocol given by Algorithm 4 solves the consensus problem with an unbounded
consensus region (0, ∞) × (−∞, ∞).
Proposition 3 Given the agent dynamics (2.1), there exists a matrix L such
that A + (x + ιy)LC is Hurwitz for all x ∈ [1, ∞) and y ∈ (−∞, ∞), if and
only if (A, C) is detectable.
Theorem 6 and the above proposition together lead to the following result.
to get one solution Q > 0. Then, choose the feedback gain matrix
L = −Q^{−1}C^T.

3) Select the coupling gain c > c_th, with c_th given in (2.19).
From the proof of Theorem 6, it is easy to see that the convergence rate of
the N agents in (2.1) reaching consensus under the protocol (2.21) is equal to
the minimal decay rate of the N − 1 systems in (2.27). Thus, the convergence
rate of the agents (2.1) reaching consensus can be manipulated by properly
assigning the eigenvalues of the matrices A + BF and A + cλ_i LC, i = 2, · · · , N. The
following algorithm presents a multi-step procedure to design (2.21) to solve the
consensus problem with a prescribed convergence rate.
3) Select the coupling gain c > c_th, with c_th given in (2.19).
FIGURE 2.5: The consensus errors: (a) Algorithm 4, (b) Algorithm 6 with
α = 1.
where v̄i ∈ Rn , c > 0 denotes the coupling gain, L̄ ∈ Rn×q and F̄ ∈ Rp×n
are the feedback gain matrices.
Different from the dynamic protocol (2.21), where the protocol states vi
play the role of intermediate variables, the states v̄i of (2.33) can be viewed as
estimates of the relative states Σ_{j=1}^N aij(xi − xj). Let ei = v̄i − Σ_{j=1}^N aij(xi − xj),
i = 1, · · · , N. Then, we can get from (2.33) that ei, i = 1, · · · , N, satisfy the
following dynamics:

    ėi = (A + L̄C)ei, i = 1, · · · , N.

Clearly, if L̄ is chosen such that A + L̄C is Hurwitz, then v̄i asymptotically
converges to Σ_{j=1}^N aij(xi − xj). Let e = [e1T, · · · , eNT]T and x = [x1T, · · · , xNT]T.
Then, we can get from (2.1) and (2.33) that the closed-loop network dynamics
can be written in terms of e and x as

    [ẋ; ė] = [IN ⊗ A + cL ⊗ BF̄   cIN ⊗ BF̄;  0   IN ⊗ (A + L̄C)] [x; e].   (2.34)
Theorem 9 Suppose that Assumption 2.1 holds. Then, the observer-type pro-
tocol (2.33) solves the consensus problem for the agents in (2.1) if all the
matrices A + L̄C and A + cλi B F̄ , i = 2, · · · , N , are Hurwitz.
Note that the design of F̄ and L̄ in Theorem 9 is actually dual to the design
of L and F in Theorem 6. Algorithms 4 and 5 in the previous subsection can
be modified to determine the parameters c, F̄ , and L̄ of the dynamic protocol
(2.33). The details are omitted here for conciseness.
where vi ∈ Rn−q is the protocol state, c > 0 is the coupling gain, aij is the
(i, j)-th entry of the adjacency matrix of G, F ∈ R(n−q)×(n−q) is Hurwitz and
has no eigenvalues in common with those of A, G ∈ R(n−q)×q, and T ∈ R(n−q)×n
is the unique solution to the following Sylvester equation:

    TA − FT = GC,   (2.36)

which further satisfies that [C; T] is nonsingular, Q1 ∈ Rn×q and Q2 ∈ Rn×(n−q)
are given by [Q1 Q2] = [C; T]−1, and K ∈ Rp×n is the feedback gain matrix.
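The Sylvester equation (2.36) can be solved with standard tools. For instance, scipy.linalg.solve_sylvester handles the form aX + Xb = q, so TA − FT = GC is solved by passing −F as the left coefficient; the matrices below are illustrative choices, not data from the book.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Illustrative data with n = 2, q = 1, so F is (n-q) x (n-q) = 1 x 1
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[-2.0]])   # Hurwitz, no eigenvalue shared with A
G = np.array([[1.0]])

# TA - FT = GC  <=>  (-F)T + TA = GC, which is the solve_sylvester form
T = solve_sylvester(-F, A, G @ C)

residual = T @ A - F @ T - G @ C
CT = np.vstack([C, T])   # the stacked matrix [C; T] must be nonsingular
print(np.linalg.norm(residual), np.linalg.det(CT))
```

If det [C; T] turns out to be zero, one reselects F and G, exactly as Remark 14 below prescribes for step 2) of the algorithm.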
ζ̇ = (IN ⊗ M + cL ⊗ R) ζ, (2.37)
3) Solve the LMI (2.18) to get one solution P > 0. Then, choose the matrix
K = −B T P −1 .
4) Select the coupling gains c ≥ cth , with cth given as in (2.19).
Remark 14 By Theorem 8.M6 in [23], a necessary condition for the matrix T
to be the unique solution to (2.36) and further to satisfy that [C; T] is nonsingular is
that (F, G) is controllable, (A, C) is observable, and F and A have no common
eigenvalues. In the case where the agent in (2.1) is single-input single-output
(SISO), this condition is also sufficient. Under such a condition, it is shown
for the general MIMO case that the probability for [C; T] to be nonsingular is 1
[23]. If [C; T] is singular in step 2), we need to go back to step 1) and repeat the
process. As shown in Proposition 1, a necessary and sufficient condition for
the existence of a positive-definite solution to the LMI (2.18) is that (A, B) is
stabilizable. Therefore, a sufficient condition for Algorithm 7 to successfully
construct a protocol (2.35) is that (A, B, C) is stabilizable and observable.
Theorem 10 Assuming that Assumption 2.1 holds, the reduced-order protocol
(2.35) solves the consensus problem for the agents described by (2.1) if
the matrices F and A + cλiBK, i = 2, · · · , N, are Hurwitz. One such protocol
solving the consensus problem is constructed by Algorithm 7. Then, the state
trajectories of (2.37) satisfy

    xi(t) → ϖ(t) ≜ (rT ⊗ eAt)[x1(0)T, · · · , xN(0)T]T,   (2.38)
    vi(t) → Tϖ(t), i = 1, · · · , N, as t → ∞,
For the case where the state matrix A is neutrally stable, Algorithm 7 can
be modified by referring to Algorithm 1. Moreover, Algorithm 7 can also be
modified to construct the protocol (2.35) to reach consensus with a prescribed
convergence rate. We omit the details here for conciseness.
where aij (t) is the (i, j)-th entry of the adjacency matrix associated with Gσ(t)
and the rest of the variables are the same as in (2.2). Using (2.39) for (2.1),
we can obtain the closed-loop network dynamics as
where x = [xT1 , · · · , xTN ]T and Lσ(t) is the Laplacian matrix associated with
Gσ(t) .
Let ς = [(IN − (1/N)11T) ⊗ In]x. Then, as shown in Section 2.2, the consensus
problem of (2.40) is reduced to the asymptotic stability problem of ς, which
satisfies
ς˙ = (IN ⊗ A + cLσ(t) ⊗ BK)ς. (2.41)
Algorithm 2 will be modified as below to design the consensus protocol
(2.39).
1) Step 1) in Algorithm 2.
2) Select the coupling gain c ≥ 1/λ2min, where λ2min ≜ minGσ(t)∈GN {λ2(Lσ(t))}
denotes the minimum of the smallest nonzero eigenvalues of Lσ(t) for Gσ(t) ∈
GN.
The following theorem shows that the protocol (2.39) does solve the consensus
problem.
Because Gσ(t) is connected and (1T ⊗ I)ς = 0, it follows from Lemma 2 that
ςT(Lσ(t) ⊗ I)ς ≥ λ2min ςTς. Since P−1BBTP−1 ≥ 0, we can further get that
where the variables are the same as those in (2.2). It should be noted that
(2.45) reduces to the consensus protocol (2.2), when hi − hj = 0, ∀ i, j =
1, · · · , N .
Definition 9 The agents (2.1) under the protocol (2.45) achieve a given for-
mation He = (h1 , h2 , · · · , hN ), if
Theorem 12 For graph G satisfying Assumption 2.1, the agents (2.1) reach
the formation H e under the protocol (2.45) if all the matrices A + cλi BK, i =
2, · · · , N , are Hurwitz, and Ahi = 0, ∀ i = 1, · · · , N , where λi , i = 2, · · · , N ,
are the nonzero eigenvalues of the Laplacian matrix L.
Remark 16 Note that not all kinds of formation structure can be achieved for
the agents (2.1) by using protocol (2.45). The achievable formation structures
have to satisfy the constraints Ahi = 0, ∀ i = 1, · · · , N . Note that hi can be
replaced by hi − h1 , i = 2, · · · , N , in order to be independent of the reference
coordinate, by simply choosing h1 corresponding to agent 1 as the origin. The
formation protocol (2.45) satisfying Theorem 12 can be constructed by using
Algorithms 1 and 2.
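The constraint Ahi = 0 in Theorem 12 is easy to test for a candidate formation. For double-integrator agents, for instance, it says that only the position components of the offsets hi may be nonzero; the matrices below are an assumed example, not taken from the text.

```python
import numpy as np

# Double-integrator agent: state [position, velocity]
A = np.array([[0.0, 1.0], [0.0, 0.0]])

def formation_achievable(offsets, A):
    """Check the constraint A @ h_i = 0 for every formation offset h_i."""
    return all(np.allclose(A @ h, 0) for h in offsets)

square = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]    # position offsets only
drifting = [np.array([0.0, 0.0]), np.array([1.0, 0.5])]  # nonzero velocity offset
print(formation_achievable(square, A), formation_achievable(drifting, A))
```

The first formation is achievable; the second is rejected because a constant velocity offset between agents cannot be maintained while the velocities reach consensus.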
The other consensus protocols discussed in the previous sections can also
be modified to solve the formation control problem. For instance, a distributed
formation protocol corresponding to the observer-type consensus protocol
(2.21) can be described as follows:

    v̇i = (A + BF)vi + cL Σ_{j=1}^N aij[C(vi − vj) − (yi − yj − C(hi − hj))],
    ui = F vi,   (2.48)
where the variables are defined as in (2.21). Similar results can be accordingly
obtained. The only difference is that the constraints Ahi = 0, ∀ i = 1, · · · , N,
also need to be satisfied.
In the following, we will consider the formation keeping problem for satel-
lites moving in the low earth orbit. The translational dynamics of satellites
in the low earth orbit, different from those of deep-space satellites, cannot be
modeled as double integrators. In order to simplify the analysis, assume that
a virtual reference satellite is moving in a circular orbit of radius R0 . The
relative dynamics of the other satellites with respect to the virtual satellite
will be linearized in the following coordinate system, where the origin is on
the mass center of the virtual satellite, the x-axis is along the velocity vector,
the y-axis is aligned with the position vector, and the z-axis completes the
right-hand coordinate system. The linearized equations of the relative dynam-
ics of the i-th satellite with respect to the virtual satellite are given by the
Hill’s equations [69]:
    x̃¨i − 2ω0ỹ˙i = uxi,
    ỹ¨i + 2ω0x̃˙i − 3ω0²ỹi = uyi,   (2.49)
    z̃¨i + ω0²z̃i = uzi,
where x̃i , ỹi , z̃i are the position components of the i-th satellite in the rotating
coordinate, uxi , uyi , uzi are control inputs, and ω0 denotes the angular rate of
the virtual satellite. The main assumption inherent in Hill’s equations is that
the distance between the i-th satellite and the virtual satellite is very small
in comparison to the orbital radius R0 .
Denote the position vector by ri = [x̃i, ỹi, z̃i]T and the control vector by
ui = [uxi, uyi, uzi]T. Then, (2.49) can be rewritten as

    [ṙi; r̈i] = [0  I3; A1  A2][ri; ṙi] + [0; I3]ui,   (2.50)

where

    A1 = [0 0 0; 0 3ω0² 0; 0 0 −ω0²],  A2 = [0 2ω0 0; −2ω0 0 0; 0 0 0].
The satellites are said to achieve formation keeping, if their velocities converge
to the same value and their positions maintain a prescribed separation, i.e.,
ri − hi → rj − hj , ṙi → ṙj , as t → ∞, where hi − hj ∈ R3 denotes the desired
constant separation between satellite i and satellite j.
Represent the communication topology among the N satellites by a directed
graph G. Assume that measurements of both relative positions and relative
velocities between neighboring satellites are available. The control input to
satellite i is proposed here as

    ui = −A1hi + c Σ_{j=1}^N aij[F1(ri − hi − rj + hj) + F2(ṙi − ṙj)],   (2.51)

where c > 0, and F1, F2 ∈ R3×3 are the feedback gain matrices to be determined.
If satellite k is a leader, i.e., it does not receive information from any
other satellite, then the term A1hk is set to zero. With (2.51), the equation
(2.50) can be reformulated as

    [ṙi; r̈i] = [0  I3; A1  A2][ri − hi; ṙi] + c Σ_{j=1}^N aij [0  0; F1  F2][ri − hi − rj + hj; ṙi − ṙj].   (2.52)
Corollary 2 Assume that graph G has a directed spanning tree. Then, the
protocol (2.51) solves the formation keeping problem if and only if the matrices

    [0  I3; A1  A2] + cλi[0  0; F1  F2]

are Hurwitz for i = 2, · · · , N, where λi, i = 2, · · · , N, denote the nonzero
eigenvalues of the Laplacian matrix of G.
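Corollary 2 can be spot-checked numerically. The sketch below builds A1 and A2 from (2.50), confirms that the open-loop spectrum sits on the imaginary axis (so the relative dynamics are not simple double integrators), and then verifies the Hurwitz condition for one representative value of cλi; the value of ω0 and the gains F1, F2 are assumptions for illustration.

```python
import numpy as np

w0 = 0.0011  # approximate LEO angular rate in rad/s (assumed value)
A1 = np.array([[0, 0, 0], [0, 3 * w0**2, 0], [0, 0, -w0**2]])
A2 = np.array([[0, 2 * w0, 0], [-2 * w0, 0, 0], [0, 0, 0]])
I3, Z3 = np.eye(3), np.zeros((3, 3))

Aol = np.block([[Z3, I3], [A1, A2]])  # open-loop matrix of (2.50)
print(np.max(np.abs(np.linalg.eigvals(Aol).real)))  # close to 0: purely imaginary spectrum

# Assumed gains: at c*lambda_i = 1 they cancel A1, A2 and place poles at -1
F1 = -(A1 + 1.0 * I3)
F2 = -(A2 + 2.0 * I3)
cl = 1.0  # one representative value of c*lambda_i
Acl = Aol + cl * np.block([[Z3, Z3], [F1, F2]])
print(np.max(np.linalg.eigvals(Acl).real))  # negative: Hurwitz at this c*lambda_i
```

Note that this checks only a single cλi; the corollary requires the test to pass for all the nonzero Laplacian eigenvalues of the given graph with the same F1, F2.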
FIGURE 2.8: (a) The relative velocity; (b) the relative positions, t ∈ [0, 5000]s.
2.6 Notes
The materials of this chapter are mainly based on [83, 86, 91, 198]. The full-
order observer-type consensus protocol (2.33) is adopted from [197]; a similar
protocol was also proposed in [169].
For more results about consensus of general continuous-time linear multi-
agent systems, please refer to [106, 116, 143, 146, 171, 173, 178, 186, 195]
and the references therein. In [106, 116, 171, 173], the authors considered the
case where the relative states of neighboring agents are available. Distributed
dynamic consensus protocols based on the local output information were proposed
in [143, 146, 195, 198]. In particular, [195] presented other types of observer-type
protocols besides those in this chapter. The observer-based consensus
protocol in [143] requires the absolute output of each agent but is applicable
to jointly connected switching communication graphs. A low-gain technique
was used in [146, 198] to design output feedback consensus protocols based
only on the relative outputs of neighboring agents. It is worth mentioning
that the convergence rate of reaching consensus under protocols using the
low-gain technique is generally much lower, and a restriction on the
state matrix A is required in [146, 198]. Reduced-order consensus protocols in
[199] do not require the absolute output information, which however are based
on both the relative outputs and inputs of neighboring agents. Necessary and
sufficient conditions were derived in [186] for achieving continuous-time con-
sensus over Markovian switching topologies. The consensus problem of linear
multi-agent systems with uniform constant communication delay was considered
in [178], where an upper bound for delay tolerance was obtained that
explicitly depends on the agent dynamics and the network topology.
3
Consensus Control of Linear Multi-Agent
Systems: Discrete-Time Case
CONTENTS
3.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2 State Feedback Consensus Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2.1 Consensus Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2.2 Discrete-Time Consensus Region . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.2.3 Consensus Protocol Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2.3.1 The Special Case with Neutrally Stable Agents . . 60
3.2.3.2 The General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.3 Observer-Type Consensus Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.3.1 Full-Order Observer-Type Protocol I . . . . . . . . . . . . . . . . . . . . . . . 65
3.3.2 Full-Order Observer-Type Protocol II . . . . . . . . . . . . . . . . . . . . . . 67
3.3.3 Reduced-Order Observer-Based Protocol . . . . . . . . . . . . . . . . . . . 68
3.4 Application to Formation Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.5 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
53
it is shown that there exists a static protocol with a bounded consensus region
in the form of an open unit disk if each agent is stabilizable, implying that
such static protocol can achieve consensus with respect to all communication
topologies containing a directed spanning tree. For the general case where the
state matrix might be unstable, an algorithm is proposed to construct a pro-
tocol with the origin-centered disk of radius δ (0 < δ < 1) as its consensus
region. It is worth noting that δ has to further satisfy a constraint relying on
the unstable eigenvalues of the state matrix for the case where each agent has
at least one eigenvalue outside the unit circle, which shows that the consensus
problem of the discrete-time multi-agent systems is generally more difficult to
solve, compared to the continuous-time case in the previous chapter.
Based on the relative outputs of neighboring agents, distributed full-order
and reduced-order observer-type consensus protocols are proposed in Section
The Separation Principle of traditional observer-based controllers still
holds in the discrete-time multi-agent setting presented in this chapter. Algorithms
are presented to design these observer-type protocols. In Section 3.4,
the consensus protocols are modified to solve the formation control problem for
discrete-time multi-agent systems. The main differences between the discrete-time
consensus in this chapter and the continuous-time consensus in the previous
chapter are further highlighted in Section 3.5.
where K ∈ Rp×n is the feedback gain matrix and dij is the (i, j)-th entry of
the row-stochastic matrix D associated with the graph G. Note that different
from the Laplacian matrix used in the last chapter, the stochastic matrix will
be used to characterize the communication graph in this chapter.
Let x = [xT1 , · · · , xTN ]T . Using (3.2) for (3.1), it is not difficult to get that
the closed-loop network dynamics can be written as
Proof 12 Because the graph G satisfies Assumption 3.1, it follows from Lemma
7 that 0 is a simple eigenvalue of IN − D and the other eigenvalues lie in
the open unit disk centered at 1 + ι0 in the complex plane. Let Y1 ∈ CN×(N−1),
Y2 ∈ C(N−1)×N, T ∈ RN×N, and upper-triangular ∆ ∈ C(N−1)×(N−1) be such
that

    T = [1  Y1],  T−1 = [rT; Y2],  T−1(IN − D)T = J = [0  0; 0  ∆],   (3.6)
are Schur stable. Therefore, the Schur stability of the matrices A + (1 − λ̂i)BK,
i = 2, · · · , N, is equivalent to the state ζ of (3.7) converging asymptotically
to zero, implying that consensus is achieved.
implying that
xi (k) → ̟(k) as k → ∞, i = 1, · · · , N.
Corollary 3 The agents in (3.1) reach consensus under the static protocol
(3.2) if λ̂i ∈ S, i = 2, · · · , N , where λ̂i , i = 2, · · · , N , are the eigenvalues of
D located in the open unit disk.
Example 9 The agent dynamics and the consensus protocol are given by
(3.1) and (3.2), respectively, with
    A = [0  1; −1  1.02],  B = [1  0; 0  1],  K = [0  −1; 1  0].
For simplicity in illustration, assume that the communication graph G is undi-
rected here. Then, the consensus region is a set of intervals on the real axis.
The characteristic equation of A + (1 − σ)BK is
It is well known that, under the bilinear transformation, (3.10) has all roots
within the unit disk if and only if the roots of (3.11) lie in the open left-half
plane (LHP). According to the Hurwitz criterion [23], (3.11) has all roots in
the open LHP if and only if 0.02 < σ 2 < 1. Therefore, the consensus region
in this case is S = (−1, −0.1414) ∪ (0.1414, 1), a union of two disconnected
intervals, which can be obtained from the plot of the eigenvalues of A + (1 −
σ)BK with respect to σ as depicted in FIGURE 3.1.
FIGURE 3.1: The spectral radius of A + (1 − σ)BK with respect to σ.
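The disconnected region S of Example 9 can be recovered by brute force: scan the spectral radius of A + (1 − σ)BK over σ and compare against the Jury-criterion boundaries 0.02 < σ² < 1 derived above.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 1.02]])
B = np.eye(2)
K = np.array([[0.0, -1.0], [1.0, 0.0]])

def spectral_radius(sigma):
    """Spectral radius of A + (1 - sigma) B K."""
    return np.max(np.abs(np.linalg.eigvals(A + (1 - sigma) * B @ K)))

# Schur stability holds iff 0.02 < sigma^2 < 1, i.e. S = (-1, -0.1414) U (0.1414, 1)
inside = [0.5, -0.5, 0.2, 0.95, -0.95]     # expected stable
outside = [0.1, 0.0, -0.1, 1.1, -1.1]      # expected unstable
print([spectral_radius(s) < 1 for s in inside])
print([spectral_radius(s) < 1 for s in outside])
```

Here A + (1 − σ)BK = [0 σ; −σ 1.02], whose characteristic polynomial λ² − 1.02λ + σ² depends only on σ², matching the symmetry of S about the origin.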
FIGURE 3.2: The communication graph.
all belong to S. Thus, it follows from Corollary 3 that the network (3.3) with
graph given in FIGURE 3.2 can achieve consensus.
Let us see how modifications of the communication topology affect the con-
sensus. Consider the following two simple cases:
1) An edge is added between nodes 1 and 5, thus more information exchange
will exist inside the network. Then, the row-stochastic matrix D becomes
    [0.2 0.2 0.2 0.2 0.1 0.1;
     0.2 0.6 0.2 0  0  0;
     0.2 0.2 0.6 0  0  0;
     0.2 0  0  0.4 0.4 0;
     0.1 0  0  0.4 0.2 0.3;
     0.1 0  0  0  0.4 0.5],
whose eigenvalues, in addition to 1, are −0.2346, 0.0352, 0.4, 0.4634, 0.836.
Clearly, the eigenvalue 0.0352 does not belong to S, i.e., consensus cannot be
achieved in this case.
2) The edge between nodes 5 and 6 is removed. The row-stochastic matrix
D becomes

    [0.3 0.2 0.2 0.2 0  0.1;
     0.2 0.6 0.2 0  0  0;
     0.2 0.2 0.6 0  0  0;
     0.2 0  0  0.4 0.4 0;
     0.1 0  0  0.4 0.6 0;
     0.1 0  0  0  0  0.9],
whose eigenvalues, other than 1, are −0.0315, 0.2587, 0.4, 0.8676, 0.9052. In
this case, the eigenvalue −0.0315 does not belong to S, i.e., consensus cannot
be achieved either.
These sample cases imply that, for disconnected consensus regions, con-
sensus can be quite fragile to the variations of the network’s communication
topology. Hence, the consensus protocol should be designed to have a suffi-
ciently large bounded consensus region in order to be robust with respect to the
communication topology, which is the topic of the following subsection.
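The fragility illustrated by the first modified topology can be reproduced directly: compute the spectrum of the modified row-stochastic matrix and test membership in S = (−1, −0.1414) ∪ (0.1414, 1).

```python
import numpy as np

def in_region(lam, lo=0.1414, hi=1.0):
    """Membership of an eigenvalue in S = (-1, -0.1414) U (0.1414, 1)."""
    return lo < abs(lam) < hi

# Case 1 above: edge added between nodes 1 and 5
D1 = np.array([
    [0.2, 0.2, 0.2, 0.2, 0.1, 0.1],
    [0.2, 0.6, 0.2, 0.0, 0.0, 0.0],
    [0.2, 0.2, 0.6, 0.0, 0.0, 0.0],
    [0.2, 0.0, 0.0, 0.4, 0.4, 0.0],
    [0.1, 0.0, 0.0, 0.4, 0.2, 0.3],
    [0.1, 0.0, 0.0, 0.0, 0.4, 0.5],
])
eigs = np.linalg.eigvals(D1)
nonunit = [lam for lam in eigs if abs(lam - 1) > 1e-6]
print(sorted(abs(lam) for lam in nonunit))      # per the text, one is ~0.0352
print(all(in_region(lam) for lam in nonunit))   # consensus fails when this is False
```

The eigenvalue near 0.0352 falls in the gap of S, confirming that adding an edge can break consensus when the consensus region is disconnected.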
Next, an algorithm for the protocol (3.2) is presented, which will be used later.

    A = [U  W][M  0; 0  X][U  W]−1   (3.13)

of A.
c) If the MARE (3.15) has a unique positive-definite solution P, then P =
limk→∞ Pk for any initial condition P0 ≥ 0, where Pk satisfies

    Pk+1 = ATPkA − δATPkB(BTPkB + I)−1BTPkA + Q.
for the case where A has at least one eigenvalue outside the unit circle and B
is of rank one.
Remark 20 Note that Γ≤δ for the general case is a subset of ΓN for the
special case where A is neutrally stable. This is consistent with the intuition
that unstable behaviors are more difficult to synchronize than neutrally stable
ones. By Lemma 26, it follows that a necessary and sufficient condition
for the existence of the consensus protocol (3.2) is that (A, B) is stabilizable for
the case where A has no eigenvalues with magnitude larger than 1. In contrast,
δ has to further satisfy δ < 1/∏i|λiu(A)| for the case where A has at least one
eigenvalue outside the unit circle and B is of rank one, where λiu(A) denote the
unstable eigenvalues of A. This implies that, contrary to
the continuous-time case in the previous chapter, both the eigenvalues of the
communication graph and the unstable eigenvalues of the agent dynamics are
critical for the design of the consensus protocol. In other words, the consensus
problem of discrete-time multi-agent systems in this chapter is generally more
challenging to solve.
Remark 21 For the case where B is of full column rank, the feedback
gain matrix K of (3.2) can be chosen as K = −(BTPB)−1BTPA, where
P > 0 is the unique solution to the simplified MARE: P = ATPA − (1 −
δ2)ATPB(BTPB)−1BTPA + Q.
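The simplified MARE of Remark 21 can be solved by the fixed-point iteration of part c) of the lemma above. The sketch below uses an assumed discrete-time double integrator and Q = I (so the resulting K differs from the specific gain quoted in the example below), then spot-checks that A + (1 − λ)BK is Schur for sample λ inside the disk of radius δ.

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # discrete-time double integrator (assumed)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
delta = 0.95

# Fixed-point iteration for P = A'PA - (1 - delta^2) A'PB (B'PB)^{-1} B'PA + Q
P = np.eye(2)
for _ in range(5000):
    G = B.T @ P @ B
    P = A.T @ P @ A - (1 - delta**2) * A.T @ P @ B @ np.linalg.inv(G) @ B.T @ P @ A + Q

K = -np.linalg.inv(B.T @ P @ B) @ B.T @ P @ A

def schur(lam):
    """Is A + (1 - lam) B K Schur stable?"""
    return np.max(np.abs(np.linalg.eigvals(A + (1 - lam) * B @ K))) < 1

# sample eigenvalues inside the disk of radius delta
samples = [0.0, 0.5, 0.9, 0.5 + 0.5j, -0.9, 0.9j]
print(all(schur(lam) for lam in samples))
```

Since this A has no eigenvalues outside the unit circle, any δ ∈ (0, 1) is admissible here; for genuinely unstable A (with B of rank one), δ would additionally have to satisfy the bound of Remark 20.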
FIGURE 3.3: The communication graph.
that K = [ −0.0511 −1.0510 ]. It follows from Theorem 16 that the agents (3.1)
reach consensus under the protocol (3.2) with K given as above with respect
to Γ≤0.95 . Assume that the communication topology G is given as in FIGURE
3.3, and the corresponding row-stochastic matrix is
    D = [0.4 0  0  0.1 0.3 0.2;
         0.5 0.5 0  0  0  0;
         0.3 0.2 0.5 0  0  0;
         0.5 0  0  0.5 0  0;
         0  0  0  0.4 0.4 0.2;
         0  0  0  0  0.3 0.7],
whose eigenvalues, other than 1, are λi = 0.5, 0.5565, 0.2217±ι0.2531. Clearly,
|λi | < 0.95, for i = 2, · · · , 6. FIGURE 3.4 depicts the state trajectories of
the discrete-time double integrators, which shows that consensus is actually
achieved.
FIGURE 3.4: The state trajectories xi1(k) and xi2(k) of the discrete-time
double integrators.
where

    A = [A  BF; 0  A + BF],  H = [0  0; −LC  LC].
Theorem 17 For any G satisfying Assumption 3.1, the agents in (3.1) reach
consensus under the observer-type protocol (3.17) if all the matrices A + BK
and A + (1 − λ̂i )LC, i = 2, · · · , N , are Schur stable, where λ̂i , i = 2, · · · , N ,
denote the eigenvalues of D located in the open unit disk. Furthermore, the
final consensus value is given by
xi (k + 1) → ̟(k), vi (k + 1) → 0, i = 1, · · · , N, as k → ∞, (3.19)
Proof 17 By following similar steps as in the proof of Theorem 13, we can derive
that the consensus problem of (3.18) can be reduced to the asymptotic stability
problem of the following systems:
For the observer-type protocol (3.17), the consensus region can be similarly
defined as in Definition 10. Assuming that A + BF is Schur stable, the region
of the parameter σ ∈ C such that A + (1 − σ)LC is Schur stable is the
discrete-time consensus region of the network (3.18).
In the following, we shall design the observer-type protocol (3.17). It is
noted that the design of A+(1− λ̂i )LC is dual to the design of A+(1− λ̂i )BK,
the latter of which has been discussed in the preceding section. First, for the
special case where A is neutrally stable, we can modify Algorithm 9 to get the
following.
where Q > 0 and δ ∈ R. Then, the protocol (3.17) designed as above has a
bounded consensus region in the form of an origin-centered disk of radius δ,
i.e., this protocol solves the consensus problem for networks with agents (3.1)
with respect to Γ≤δ, where δ satisfies 0 < δ < 1 for the case where A has no
eigenvalues with magnitude larger than 1 and satisfies 0 < δ < 1/∏i|λiu(A)| for
the case where A has at least one eigenvalue outside the unit circle and C is
of rank one.
Proof 19 The proof of this theorem can be completed by following similar
steps as in the proof of Theorem 16.
where v̄i ∈ Rn, L̄ ∈ Rn×q and F̄ ∈ Rp×n are the feedback gain matrices.
Let ei = v̄i − Σ_{j=1}^N aij(xi − xj), i = 1, · · · , N. Then, we can get from (3.21)
that ei satisfy the following dynamics:

    ei+ = (A + L̄C)ei, i = 1, · · · , N.

Clearly, if L̄ is chosen such that A + L̄C is Schur stable, then v̄i asymptotically
converges to Σ_{j=1}^N aij(xi − xj). Let e = [e1T, · · · , eNT]T and x = [x1T, · · · , xNT]T.
Then, we can get from (3.1) and (3.21) that the closed-loop network dynamics
can be written in terms of e and x as

    [x+; e+] = [IN ⊗ A + cL ⊗ BF̄   cIN ⊗ BF̄;  0   IN ⊗ (A + L̄C)][x; e].
By following similar steps as in the proof of Theorem 17, we obtain the following
result.
Theorem 20 For any G satisfying Assumption 3.1, the agents in (3.1) reach
consensus under the observer-type protocol (3.21) if all the matrices A + L̄C
and A + (1 − λ̂i )B F̄ , i = 2, · · · , N , are Schur stable, where λ̂i , i = 2, · · · , N ,
denote the eigenvalues of D located in the open unit disk.
Note that the design of F̄ and L̄ in Theorem 20 is actually dual to the
design of L and F in Theorem 17. The algorithms in the previous subsection
can be modified to determine the parameters F̄ and L̄ of the dynamic protocol
(3.21). The details are omitted here for conciseness.
where v̂i ∈ Rn−q is the protocol state, F ∈ R(n−q)×(n−q) is Schur stable and
has no eigenvalues in common with those of A, G ∈ R(n−q)×q, T ∈ R(n−q)×n
is the unique solution to (2.36), satisfying that [C; T] is nonsingular, [Q1 Q2] =
[C; T]−1, K ∈ Rp×n is the feedback gain matrix to be designed, and dij is the
(i, j)-th entry of the row-stochastic matrix D associated with the graph G.
Let ẑi = [xiT, v̂iT]T and ẑ = [ẑ1T, · · · , ẑNT]T. Then, the collective network
dynamics can be written as

where

    M = [A  0; GC  F],  R = [BKQ1C  BKQ2; TBKQ1C  TBKQ2].
Theorem 21 For any G satisfying Assumption 3.1, the agents in (3.1) reach
consensus under the observer-type protocol (3.17) if the matrices F and A +
(1− λ̂i )BK, i = 2, · · · , N , are Schur stable, where λ̂i , i = 2, · · · , N , denote the
eigenvalues of D located in the open unit disk. Moreover, the final consensus
value is given by
xi (k + 1) → ̟(k), vi (k + 1) → T ̟(k), i = 1, · · · , N, as k → ∞,
It should be noted that (3.25) reduces to the consensus protocol (3.17), when
hi − hj = 0, ∀ i, j = 1, · · · , N .
Definition 11 The agents (3.1) under the protocol (3.25) achieve a given
e = (h1 , · · · , hN ) if
formation H
Theorem 22 For any G satisfying Assumption 3.1, the agents in (3.1) reach
the formation H̃ under the protocol (3.25) if all the matrices A + BF and
A + (1 − λ̂i)LC, i = 2, · · · , N, are Schur stable, and (A − I)hi = 0, ∀ i =
1, · · · , N, where λ̂i, i = 2, · · · , N, denote the eigenvalues of D located in the
open unit disk.
Proof 21 Let z̃i = [(xi − hi)T, viT]T, i = 1, · · · , N. Then, it follows from (3.1) and
(3.25) that

    z̃+ = [IN ⊗ A + (IN − D) ⊗ H]z̃ + [IN ⊗ (A − I; 0)]h̃,   (3.26)

with h̃ ≜ [h1T, · · · , hNT]T,
where the matrices A and H are defined in (3.18). Note that the formation
He is achieved if the system (3.26) reaches consensus. By following similar
steps in the proof of Theorem 17, it is easy to see that the formation H e is
achieved under the protocol (3.25) if the matrices A + BF and A + (1 − λ̂i )LC,
i = 2, · · · , N , are Schur stable, and (A − I)hi = 0, ∀ i = 1, · · · , N .
    xi+ = xi + ṽi,
    ṽi+ = ṽi + ui,
    yi = xi, i = 1, · · · , 6,
3.5 Discussions
This chapter has addressed the consensus control problem for multi-agent
systems with general discrete-time linear agent dynamics. The results obtained
in this chapter can be regarded as discrete-time extensions of the results in
the preceding chapter.
Regarding the continuous-time consensus in Chapter 2 and the discrete-
time consensus in this chapter, there exist some important differences, which
are summarized as follows.
i) For the continuous-time multi-agent systems in Chapter 2, the existence of
a consensus protocol relies only on the stabilizability or controllability of each
agent if the given communication topology contains a directed spanning
tree. In contrast, for the discrete-time multi-agent systems in this chapter,
the existence condition for the consensus protocols additionally depends on
the unstable eigenvalues of the state matrix of each agent, implying that
the consensus problem of discrete-time multi-agent systems is usually much
harder to tackle than that of continuous-time multi-agent systems.
ii) The consensus region for the continuous-time multi-agent systems in Chap-
ter 2 is the open right half plane or a subset of the open right half plane.
On the contrary, the consensus region for the discrete-time multi-agent systems
in this chapter is the unit disk centered at the origin or a subset of the
unit disk. The continuous-time consensus region is related to the discrete-
time consensus region by a bilinear transformation, similarly to the rela-
tionship between the continuous-time and discrete-time stability of linear
time-invariant systems.
iii) Laplacian matrices are utilized to depict the communication topology for
the continuous-time multi-agent systems in Chapter 2. Differently, stochas-
tic matrices are chosen for the discrete-time multi-agent systems in this
chapter. Because the discrete-time consensus region is the unit disk centered
at the origin or a subset of the unit disk, both the largest and smallest
nonzero eigenvalues of the Laplacian matrix (for undirected graphs) are required
for the design of the consensus protocols when adopting Laplacian
matrices as in related works [185, 50, 55]. The advantage of using stochastic
matrices in this chapter is that only the eigenvalue with the largest modulus
matters, which facilitates the consensus protocol design.
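Point iii) can be made concrete: for the same undirected graph, a Laplacian-based design needs both λ2 and λN, while a row-stochastic normalization leaves only the largest modulus among the non-unit eigenvalues to worry about. A small sketch with an assumed path graph and an assumed normalization:

```python
import numpy as np

# Path graph 1-2-3-4: adjacency and Laplacian matrices
Adj = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
Lap = np.diag(Adj.sum(axis=1)) - Adj
# One common row-stochastic choice (an assumption): D = I - Lap / (max degree + 1)
D = np.eye(4) - Lap / (Adj.sum(axis=1).max() + 1)

lap_eigs = np.sort(np.linalg.eigvalsh(Lap))
d_eigs = np.sort(np.linalg.eigvalsh(D))
print(lap_eigs[1], lap_eigs[-1])  # lambda_2 and lambda_N both enter a Laplacian design
print(max(abs(l) for l in d_eigs if abs(l - 1) > 1e-8))  # only this modulus matters for D
```

For connected undirected graphs the D built this way is row-stochastic with all non-unit eigenvalues strictly inside the unit interval, which is exactly the single quantity the discrete-time design has to bound.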
3.6 Notes
The materials of this chapter are mainly based on [82, 91]. For further re-
sults on consensus of general linear discrete-time multi-agent systems, see,
e.g., [50, 55, 164, 172, 185]. In particular, distributed consensus protocols were
designed in [172] for discrete-time multi-agent systems with each node being
neutrally stable. Reference [185], similar to this chapter, also used the modified
algebraic Riccati equation to design consensus protocols and further considered
the effect of a finite communication data rate. By introducing a properly
designed dynamic filter into the local control protocols, the consensusability
condition in [185] was further relaxed in [50]. Methods based on the H∞ and
H2 type Riccati inequalities were given in [55] to design the consensus pro-
tocols. The effect of random link failures and random switching topology on
discrete-time consensus was investigated in [186]. The consensus problem of
discrete-time multi-agent systems with a directed topology and communica-
tion delay was considered in [164], where a consensus protocol based on the
networked predictive control scheme was proposed to overcome the effect of
delay.
4
H∞ and H2 Consensus Control of Linear
Multi-Agent Systems
CONTENTS
4.1 H∞ Consensus on Undirected Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.1.1 Problem Formulation and Consensus Condition . . . . . . . . . . . . 74
4.1.2 H∞ Consensus Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1.3 H∞ Performance Limit and Protocol Synthesis . . . . . . . . . . . . 80
4.2 H2 Consensus on Undirected Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.3 H∞ Consensus on Directed Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.3.1 Leader-Follower Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.3.2 Strongly Connected Directed Graphs . . . . . . . . . . . . . . . . . . . . . . . 87
4.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
In the previous chapters, we have considered the consensus problems for multi-
agent systems with linear node dynamics. In many circumstances, the agent
dynamics might be subject to external disturbances, for which case the consen-
sus protocols should maintain certain required disturbance attenuation per-
formance. In this chapter, we consider the distributed H∞ and H2 consensus
control problems for linear multi-agent systems in order to examine the dis-
turbance attenuation performance of the consensus protocols with respect to
external disturbances.
In Section 4.1, the H∞ consensus problem for undirected communication
graphs is formulated. A distributed consensus protocol based on the relative
states of neighboring agents is proposed. The distributed H∞ consensus prob-
lem of a linear multi-agent network is converted to the H∞ control problem
of a set of independent systems of the same dimension as a single agent.
The notion of H∞ consensus region is then defined and discussed. The H∞
consensus region can be regarded as an extension of the notion of consensus
region introduced in Chapter 2 in order to evaluate the performance of the
multi-agent network subject to external disturbances. It is pointed out via
several examples that the H∞ consensus regions can serve as a measure for the
robustness of the consensus protocol with respect to the variations of the com-
munication graph. A necessary and sufficient condition for the existence of a
protocol yielding an unbounded H∞ consensus region is derived. A multi-step
procedure for constructing such a protocol is further presented, which main-
tains a favorable decoupling property. Such a design procedure involves no
m1 -dimensional square integrable vector functions over [0, ∞), and A, B, and
D are constant matrices with compatible dimensions.
In this section, the communication graph among the N agents is repre-
sented by an undirected graph G. It is assumed that at each time instant each
agent knows the relative states of its neighboring agents with respect to itself.
Based on the relative states of neighboring agents, the following distributed
consensus protocol is proposed:
u_i = cK Σ_{j=1}^N a_{ij}(x_i − x_j),   i = 1, ⋯, N,   (4.2)
where c > 0 denotes the coupling gain, K ∈ Rp×n is the feedback gain matrix,
and aij denotes the (i, j)-th entry of the adjacency matrix associated with the
graph G.
H∞ and H2 Consensus Control of Linear Multi-Agent Systems 75
The objective here is to find an appropriate protocol (4.2) for the agents
in (4.1) to reach consensus and meanwhile maintain a desirable performance
with respect to external disturbances ωi . To this end, define the performance
variable zi , i = 1, · · · , N , as the average of the weighted relative states of the
agents, described by
z_i = (1/N) Σ_{j=1}^N C(x_i − x_j),   i = 1, ⋯, N,   (4.3)
Definition 12 Given the agents in (4.1) and an allowable γ > 0, the protocol
(4.2) is said to solve the distributed suboptimal H∞ consensus problem if
The H∞ performance limit of the consensus of the network (4.4) is the mini-
mal kTωz (s)k∞ of the network (4.4) achieved by using the protocol (4.2).
The following presents a necessary and sufficient condition for solving the
H∞ consensus problem of (4.4).
Proof 22 By using the fact that 1 is the right eigenvector and also the left
eigenvector of L associated with eigenvalue 0, we have
M L = L = LM. (4.6)
Let ξ = (M ⊗In )x. Then, by invoking (4.6), it follows from (4.4) that ξ evolves
according to the following dynamics:
U^T M U = Ψ ≜ diag(0, 1, ⋯, 1),
U^T L U = Λ ≜ diag(0, λ_2, ⋯, λ_N).   (4.8)
Denote by T̃_ω̂ẑ the transfer function matrix of (4.11). Then, it follows from
(4.12), (4.11), and (4.10) that

T̃_ω̂ẑ = diag(0, T̃_{ω̂_2 ẑ_2}, ⋯, T̃_{ω̂_N ẑ_N}) = (U^T ⊗ I_{m_2}) T̃_ωz (U ⊗ I_{m_1}),   (4.13)

which implies that

‖T̃_ω̂ẑ‖_∞ = max_{i=2,⋯,N} ‖T̃_{ω̂_i ẑ_i}‖_∞ = ‖T̃_ωz‖_∞.   (4.14)
As to ξ̃_1, we have

ξ̃_1 = (1^T/√N ⊗ I_n) ξ = (1^T M/√N ⊗ I_n) x ≡ 0.
Therefore, the suboptimal H∞ consensus problem for the network (4.4) is
solved if and only if the N − 1 systems in (4.5) are simultaneously asymp-
totically stable and kT̃ω̂i ẑi k∞ < γ, i = 2, · · · , N .
Remark 23 The usefulness of this theorem lies in that it converts the dis-
tributed H∞ consensus problem of the high-dimensional multi-agent network
(4.4) into the H∞ control problems of a set of independent systems having the
same dimensions as a single agent in (4.1), thereby significantly reducing the
computational complexity. A unique feature of protocol (4.2) proposed here is
that by introducing a constant scalar c > 0, called the coupling gain, the no-
tions of H∞ and H2 consensus regions can be brought forward, as detailed in
the following subsections.
ζ̇ = (A + σBK)ζ + Dω_i,
z_i = Cζ,   (4.15)
where ζ ∈ Rn and σ ∈ R, with σ depending on c. The transfer function of
system (4.15) is denoted by Tbωi zi . Clearly, the stability and H∞ performance
of the system (4.15) depends on the scalar parameter σ.
The notion of H∞ consensus region is defined as follows.
Corollary 4 For a given γ > 0, the protocol (4.2) solves the suboptimal
H∞ consensus problem for the agents (4.1) if and only if cλi ∈ Sγ , for
i = 2, · · · , N.
For a protocol of the form (4.2), its H∞ consensus region with index γ, if it
exists, is an interval or a union of several intervals on the real axis, where the
intervals themselves can be either bounded or unbounded. The H∞ consensus
region can serve as a measure for the robustness of the consensus protocol (4.2)
with respect to the variations of the communication topology, as illustrated
by the following example.
whose nonzero eigenvalues are 1.382, 1.6972, 3.618, 4, 5.3028. Thus, the pro-
tocol (4.2) given as above solves the suboptimal H∞ consensus problem with
γ = 1.782 for the graph given in FIGURE 4.1(b) if and only if the coupling
gain c lies within the set [0.0381, 0.0383].
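The H∞ consensus region of a given protocol can be traced numerically by sweeping σ and estimating the H∞ norm of (4.15) on a frequency grid. A hedged sketch (the agent data A, B, K, C, D and the index γ below are hypothetical, not the values of this example):

```python
import numpy as np

def hinf_norm(A, D, C, wgrid=np.logspace(-3, 3, 1000)):
    """Frequency-sweep estimate of ||C(sI - A)^(-1) D||_inf; inf if A is not Hurwitz."""
    if np.max(np.linalg.eigvals(A).real) >= 0.0:
        return np.inf
    n = A.shape[0]
    peak = 0.0
    for w in wgrid:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, D)
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

# Hypothetical single-agent data: an undamped oscillator with velocity feedback.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[0.0, -1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0], [1.0]])
gamma = 2.0

# Grid over sigma: the H-inf consensus region with index gamma collects the
# sigma for which A + sigma*B*K is Hurwitz and the norm of (4.15) is < gamma.
sigmas = np.linspace(0.1, 5.0, 50)
region = [s for s in sigmas if hinf_norm(A + s * (B @ K), D, C) < gamma]
```

Refining the σ and frequency grids tightens the estimate from below; certified H∞ norms require dedicated bisection methods based on Hamiltonian matrices.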
Let us see how modifications of the communication graph affect the H∞
consensus problem by considering the following simple cases.
FIGURE 4.1: (a) The H∞ consensus region; (b) the communication graph.
Example 14 The agent dynamics and the protocol are given by (4.1) and
[Figure: the H∞ norm of (4.15) versus σ.]
Theorem 24 For a given γ > 0, there exists a protocol (4.2) having an un-
bounded H∞ consensus region Sγ , [τ, ∞) if and only if there exist a matrix
P > 0 and a scalar τ > 0 satisfying the following linear matrix inequality
(LMI):

[ AP + PA^T − τBB^T     D        PC^T ]
[ D^T                  −γ²I      0    ]  < 0.   (4.16)
[ CP                    0       −I    ]
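Feasibility of (4.16) for a candidate triple (P, τ, γ) can be checked by assembling the block matrix and testing negative definiteness. A sketch with hypothetical scalar data (finding P and τ themselves requires an SDP solver, which is not shown here):

```python
import numpy as np

def lmi_416_maxeig(A, B, C, D, P, tau, gamma):
    """Max eigenvalue of the block matrix in LMI (4.16); feasible iff it is < 0."""
    m1, m2 = D.shape[1], C.shape[0]
    M = np.block([
        [A @ P + P @ A.T - tau * (B @ B.T), D,                      P @ C.T],
        [D.T,                               -gamma**2 * np.eye(m1), np.zeros((m1, m2))],
        [C @ P,                             np.zeros((m2, m1)),     -np.eye(m2)],
    ])
    return float(np.max(np.linalg.eigvalsh(M)))

# Hypothetical scalar data A = -1, B = C = D = 1: (P, tau, gamma) = (1, 1, 2)
# is feasible, while shrinking gamma to 0.1 destroys feasibility.
one = np.array([[1.0]])
feasible = lmi_416_maxeig(-one, one, one, one, one, 1.0, 2.0)
infeasible = lmi_416_maxeig(-one, one, one, one, one, 1.0, 0.1)
```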
implying that ‖C(sI − A − cλ_i BK)^{-1}D‖_∞ < γ, i = 2, ⋯, N, i.e., the protocol
(4.2) with K given as above has an unbounded H∞ consensus region [τ, ∞).
The exact H∞ performance limit of the network (4.4) under the protocol
(4.2) is now obtained as a consequence.
minimize γ
subject to LMI (4.16), with P > 0, τ > 0, γ > 0.   (4.20)
Algorithm 11 For any γ ≥ γmin , where γmin is given by (4.20), the proto-
col (4.2) solving the distributed H∞ consensus problem can be constructed as
follows:
1) Solve the LMI (4.16) for a solution P > 0 and τ > 0. Then, choose the feedback
gain matrix K = −(1/2)B^T P^{-1}.
2) Select the coupling gain c not less than the threshold value c_th = τ/min_{i=2,⋯,N} λ_i,
where λ_i, i = 2, ⋯, N, are the nonzero eigenvalues of L.
Example 15 The agent dynamics (4.1) remain the same as in Example 13,
while matrix K will be redesigned via Algorithm 11. Solving LMI (4.16) with
γ = 1 by using the toolboxes YALMIP [103] and SeDuMi [158] gives the feasible
solutions P = [1.3049 0.0369; 0.0369 0.1384] and τ = 1.7106. Thus, the feedback gain matrix
of (4.2) is chosen as K = [−0.4890 3.7443]. Different from Example 13, the
protocol (4.2) with this matrix K has an unbounded H∞ consensus region with
index γ = 1 in the form of [1.7106, ∞). For the graph in FIGURE 4.1(b),
the protocol (4.2) with K chosen here solves the suboptimal H∞ consensus
problem with γ = 1, if the coupling gain c is not less than the threshold value
cth = 1.2378.
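Step 2) of Algorithm 11 can be verified against the numbers reported above: with τ = 1.7106 and the nonzero Laplacian eigenvalues of the graph in FIGURE 4.1(b), the threshold c_th = τ/min_i λ_i reproduces the value 1.2378. A sketch (B is not reproduced in this excerpt, so K = −(1/2)B^T P^{-1} is not recomputed here):

```python
import numpy as np

# P and tau reported in Example 15 (solutions of LMI (4.16) with gamma = 1).
P = np.array([[1.3049, 0.0369],
              [0.0369, 0.1384]])
tau = 1.7106
# Nonzero Laplacian eigenvalues of the graph in FIGURE 4.1(b).
lams = np.array([1.382, 1.6972, 3.618, 4.0, 5.3028])

# Step 2) of Algorithm 11: threshold on the coupling gain.
c_th = tau / lams.min()
print(round(c_th, 4))  # matches the reported threshold 1.2378
```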
By solving the optimization problem (4.20), the H∞ performance limit of
the consensus of the network (4.4) under the protocol (4.2) can be further
obtained as γmin = 0.0535. The corresponding optimal feedback gain matrix of
(4.2) is obtained as K = [ 48.6183 48.6450 ], and the scalar τ in (4.16) is τ =
9.5328×106 . For the graph in FIGURE 4.1 (b), the threshold cth corresponding
to γmin is cth = 6.8978 × 106 in this numerical example.
Definition 14 Given the agents in (4.1) and an allowable γ > 0, the con-
sensus protocol (4.2) is said to solve the distributed suboptimal H2 consensus
problem if
i) the network (4.4) with ω_i = 0 can reach consensus in the sense of
lim_{t→∞} ‖x_i − x_j‖ = 0, ∀ i, j = 1, ⋯, N;
ii) kTωz k2 < γ.
The H2 performance limit of the consensus of the network (4.4) is the minimal
kTωz (s)k2 of the network (4.4) achieved by using the consensus protocol (4.2).
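For reference, an H2 norm as in Definition 14 can be computed from a Lyapunov equation rather than a frequency sweep: ‖C(sI − A)^{-1}D‖₂² = trace(CPC^T), where AP + PA^T + DD^T = 0. A minimal sketch (the scalar test system is hypothetical):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_sq(A, D, C):
    """||C(sI - A)^(-1) D||_2^2 via the controllability Gramian (A must be Hurwitz)."""
    P = solve_continuous_lyapunov(A, -D @ D.T)  # solves A P + P A^T = -D D^T
    return float(np.trace(C @ P @ C.T))

# Scalar sanity check: xdot = -x + w, z = x gives ||1/(s+1)||_2^2 = 1/2.
val = h2_norm_sq(np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]))
```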
Theorem 25 For a given γ̃ > 0, there exists a protocol (4.2) solving the
suboptimal H2 consensus problem for the agents in (4.1) if and only if the
N − 1 systems in (4.5) are simultaneously asymptotically stable and
√(Σ_{i=2}^N ‖T̃_{ω̂_i ẑ_i}‖₂²) < γ̃, where T̃_{ω̂_i ẑ_i}, i = 2, ⋯, N, are the transfer function
matrices of the systems in (4.5).
which together imply that ‖C(sI − A − cλ_i BK)^{-1}D‖₂ < γ̃/√N, i = 2, ⋯, N, i.e.,
the protocol (4.2) with K chosen as above has an unbounded H2 performance
region [τ̃, ∞).
Corollary 7 The H2 performance limit γ̃min of the network (4.4) under the
consensus protocol (4.2) is given by the optimization problem:
minimize γ̃
subject to LMI (4.21), with Q > 0, τ̃ > 0, γ̃ > 0.   (4.22)
Proof 26 Solving the optimization problem (4.22) gives solutions γ̃_min and
τ̃_min. Select c such that cλ_i ≥ τ̃_min, for i = 2, ⋯, N. Then, the protocol
(4.2) with K given as in the proof of Theorem 26 yields ‖T_{ω̂_i ẑ_i}‖₂ = γ̃_min/√N, for
i = 2, ⋯, N, which by Theorem 25 implies that γ̃_min is the H2 performance
limit of the network (4.4).
Algorithm 12 For any γ̃ ≥ γ̃min , where γ̃min is given by (4.22), the proto-
col (4.2) solving the distributed H2 consensus problem can be constructed as
follows:
1) Solve the LMI (4.21) to get solutions Q > 0 and τ̃ > 0. Then, choose the
feedback gain matrix K = −(1/2)B^T Q^{-1}.
2) Select the coupling gain c > c̃_th, with c̃_th = τ̃/min_{i=2,⋯,N} λ_i, where λ_i,
i = 2, ⋯, N, are the nonzero eigenvalues of L.
to get a matrix X > 0 and a scalar τ > 0, where r_min = min_i r_i and
r_max = max_i r_i. Then, choose the feedback gain matrix K = −(1/2)B^T X^{-1}.
2) Select the coupling gain c > c̃_th, with c̃_th = τ/a(L), where a(L) denotes the
generalized algebraic connectivity of the communication graph G (see Lemma
4 in Chapter 1).
Remark 28 In the last step, the generalized algebraic connectivity a(L) is
used to determine the required coupling gain c. For a given graph, a(L)
can be obtained by using Lemma 8 in [189], which might not be easy to compute,
especially if the graph is of large scale. For the case where the communication
graph is balanced and strongly connected, r^T = 1^T/N and
a(L) = λ_2((1/2)(L + L^T)). In this case, the LMI (4.30) reduces to the LMI
(4.16) in Section 4.1. Further, step 2) of Algorithm 13 is identical to step
2) of Algorithm 11 if the communication graph is undirected and connected.
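For the balanced, strongly connected case just mentioned, a(L) reduces to an ordinary symmetric eigenvalue computation. A sketch on a hypothetical three-node directed cycle (which is balanced and strongly connected):

```python
import numpy as np

# Laplacian of the directed cycle 1 -> 2 -> 3 -> 1; row and column sums are
# both zero, so the graph is balanced, and it is clearly strongly connected.
L = np.array([[ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0]])

# Balanced strongly connected case: a(L) = lambda_2((L + L^T)/2).
eigs = np.sort(np.linalg.eigvalsh(0.5 * (L + L.T)))
aL = eigs[1]  # smallest nonzero eigenvalue of the symmetrized Laplacian
```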
Theorem 28 Assume that the communication graph G is strongly connected.
For a given γ > 0, the consensus protocol (4.2) designed in Algorithm 13 can
solve the suboptimal H∞ consensus problem.
Proof 28 Consider the Lyapunov function candidate
V = (1/2) ζ^T (R ⊗ X^{-1}) ζ,
where R = diag(r_1, r_2, ⋯, r_N). Differentiating V with respect to t along the
trajectories of (4.29) and substituting K = −(1/2)B^T X^{-1} give

V̇ = ζ^T (R ⊗ X^{-1}A − (c/2) RL ⊗ X^{-1}BB^T X^{-1}) ζ + ζ^T (RM̃ ⊗ X^{-1}D) ω
  = ζ̃^T [R ⊗ AX − (c/4)(RL + L^T R) ⊗ BB^T] ζ̃ + ζ̃^T (RM̃ ⊗ D) ω,   (4.31)

where ζ̃ = (I ⊗ X^{-1}) ζ.
Since (r^T ⊗ B^T) ζ̃ = (r^T M̃ ⊗ B^T X^{-1}) x = 0, we can obtain from Lemma 4
that

(1/2) ζ̃^T [(RL + L^T R) ⊗ BB^T] ζ̃ ≥ a(L) ζ̃^T (I_N ⊗ BB^T) ζ̃,   (4.32)
where a(L) > 0. In light of (4.32), it follows from (4.31) and step 2) of
Algorithm 13 that
V̇ ≤ (1/2) ζ̃^T [R ⊗ (AX + XA^T − τBB^T)] ζ̃ + ζ̃^T (RM̃ ⊗ D) ω.   (4.33)
For the case of ω(t) ≡ 0, it follows from the above analysis that
V̇ ≤ (1/2) ζ̃^T [R ⊗ (AX + XA^T − τBB^T)] ζ̃ < 0,
where the last inequality follows from the LMI (4.30). This indicates that
consensus of (4.29) with ω(t) ≡ 0 is achieved. Thus, the first condition in
Definition 12 is satisfied.
Next, analyze the H∞ consensus performance of the multi-agent system
with ω(t) ≠ 0. Observe that

V̇ ≤ (1/2) ζ̃^T [R ⊗ (AX + XA^T − τBB^T)] ζ̃ + ζ̃^T (RM̃ ⊗ D) ω
    + (1/(2r_min)) ζ̃^T (R ⊗ XC^T CX) ζ̃ − (1/2) z^T z
    − (γ²/(2r_max)) ω^T (R ⊗ I_n) ω + (γ²/2) ω^T ω   (4.34)
  = (1/2) [ζ̃^T ω^T] Ω [ζ̃; ω] − (1/2) z^T z + (γ²/2) ω^T ω,

where

Ω = [ R ⊗ Π            RM̃ ⊗ D           ]
    [ M̃^T R ⊗ D^T    −(γ²/r_max) R ⊗ I ],   (4.35)

and Π = AX + XA^T − τBB^T + (1/r_min) XC^T CX.
Note that Ω < 0 if and only if the following inequality holds:

R ⊗ Π + (r_max/γ²) M̃^T R M̃ ⊗ DD^T < 0,   (4.36)

where we have used the fact that RM̃ = M̃^T R.
Further, we have

R ⊗ Π + (r_max/γ²) M̃^T R M̃ ⊗ DD^T ≤ R ⊗ Π + (r_max/γ²) λ_max(M̃^T R M̃) I ⊗ DD^T
                                    ≤ R ⊗ [Π + (r_max/(γ² r_min)) λ_max(M̃^T R M̃) DD^T].   (4.37)

By using Gershgorin's disc theorem [58], it is not difficult to get that
λ_max(M̃^T R M̃) ≤ 1. Then, it follows from (4.37) that

R ⊗ Π + (r_max/γ²) M̃^T R M̃ ⊗ DD^T ≤ R ⊗ (Π + (r_max/(γ² r_min)) DD^T) < 0,   (4.38)

which implies that (4.36) holds, i.e., Ω < 0. Note that the last inequality in
(4.38) follows from the LMI (4.30).
Then, invoking (4.34) gives that

V̇ + (1/2) z^T z − (γ²/2) ω^T ω < 0,   (4.39)

for all z and ω such that |z|² + |ω|² ≠ 0. Integrating inequality (4.39) over the
infinite horizon yields

V(∞) − V(0) + (1/2)‖z‖₂² − (γ²/2)‖ω‖₂² < 0.

Recalling that ζ(0) = 0, we have ‖z‖₂² − γ²‖ω‖₂² < 0, i.e., ‖T̃_ωz‖_∞ = ‖T_ωz‖_∞ < γ.
Thus, the second condition in Definition 12 is satisfied. Therefore, the H∞
consensus problem is solved.
[FIGURE 4.3: a strongly connected directed communication graph with five nodes and weighted edges.]
The external disturbance ω = [2w, 5w, 4w, w, 1.5w]T , where w(t) is a ten-
period square wave starting at t = 0 with width 5 and height 1.
From FIGURE 4.3, it is easy to see that the graph G is strongly connected.
Some simple calculations give that both the minimum and maximum values
of the left eigenvector r are equal to 0.4472. By using Lemma 8 in [189], we
can get that a(L) = 2.2244. Choose the H∞ performance index γ = 1. Solving
the LMI (4.30) by using the LMI toolbox of MATLAB gives a feasible solution
P = [0.21 0.08; 0.08 0.03] and τ = 7.52. Thus, by Algorithm 13, the feedback gain
matrix of the protocol (4.2) is given as K = [−23.52 57.73; 33.78 −83.06]. Then,
according to Theorem 28 and Algorithm 13, the protocol (4.2) with K chosen as
above solves the H∞ consensus problem with performance index γ = 1, if the
coupling gain c ≥ 3.52. For the case of ω = 0, the state trajectories of the agents are
respectively shown in FIGURE 4.4, from which it can be seen that consensus is
indeed achieved. Furthermore, the trajectories of the performance variables zi ,
i = 1, · · · , 5, in the presence of disturbances under the zero initial conditions
are shown in FIGURE 4.5.
FIGURE 4.4: The state trajectories of the agents under (4.2) constructed via
Algorithm 13.
4.4 Notes
The materials of Sections 4.1 and 4.2 are mainly adopted from [81, 85]. The
results in Section 4.3 are mainly based on [177]. For further results on dis-
tributed H∞ consensus and control problems of multi-agent systems, please
FIGURE 4.5: The performance variables under (4.2) constructed via Algorithm 13.
refer to [87, 98, 99, 102, 108, 197]. Specifically, the H∞ consensus problem for
multi-agent systems of first-order and second-order integrators with external
disturbances and parameter uncertainties was considered in [98, 99]. A
decomposition approach was proposed in [108] to solve the distributed H2 and
H∞ control of identical coupled linear systems. An observer-type consensus
protocol was provided by using dynamic output feedback approach in [197]
to deal with the H∞ consensus problem for the case where the disturbances
satisfy the matching condition. Distributed H∞ consensus of linear multi-
agent systems with switching directed and balanced topologies was studied in
[180]. Dynamic controllers were proposed for distributed H∞ control problem
in [87, 102], whose design needs to solve a set of linear matrix inequalities
associated with all the nonzero eigenvalues of the Laplacian matrix.
5
Consensus Control of Linear Multi-Agent
Systems Using Distributed Adaptive
Protocols
CONTENTS
5.1 Distributed Relative-State Adaptive Consensus Protocols . . . . . . . . . . 94
5.1.1 Consensus Using Edge-Based Adaptive Protocols . . . . . . . . . . 96
5.1.2 Consensus Using Node-Based Adaptive Protocols . . . . . . . . . . 100
5.1.3 Extensions to Switching Communication Graphs . . . . . . . . . . . 101
5.2 Distributed Relative-Output Adaptive Consensus Protocols . . . . . . . . 103
5.2.1 Consensus Using Edge-Based Adaptive Protocols . . . . . . . . . . 104
5.2.2 Consensus Using Node-Based Adaptive Protocols . . . . . . . . . . 107
5.2.3 Simulation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.3 Extensions to Leader-Follower Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.4 Robust Redesign of Distributed Adaptive Protocols . . . . . . . . . . . . . . . . . 114
5.4.1 Robust Edge-Based Adaptive Protocols . . . . . . . . . . . . . . . . . . . . 115
5.4.2 Robust Node-Based Adaptive Protocols . . . . . . . . . . . . . . . . . . . . 119
5.4.3 Simulation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.5 Distributed Adaptive Protocols for Graphs Containing Directed
Spanning Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5.5.1 Distributed Adaptive Consensus Protocols . . . . . . . . . . . . . . . . . 123
5.5.2 Robust Redesign in the Presence of External Disturbances 129
5.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
has to know the entire communication graph G to compute it. Therefore, al-
though these consensus protocols are proposed and can be implemented in a
distributed fashion, they cannot be designed by each agent in a distributed
fashion. In other words, these consensus protocols are not fully distributed. As
explained in Chapter 1, due to the large number of agents, limited sensing ca-
pability of sensors, and short wireless communication ranges, fully distributed
consensus protocols, which can be designed and implemented by each agent
using only local information of its own and neighbors, are more desirable and
have been widely recognized. Fully distributed consensus protocols have the
advantages of high robustness, strong adaptivity, and flexible scalability.
In this chapter, we intend to remove the limitation of requiring global in-
formation of the communication graph and introduce some fully distributed
consensus protocols. The main idea stems from the decoupling feature of the
consensus region approach proposed in Chapter 2 for consensus protocol de-
sign. As shown in Algorithm 2 in Chapter 2, the feedback gain matrix of the
consensus protocol (2.2) can be designed by using only the agent dynamics,
and the coupling gain c is used to deal with the effect of the communication
topology on consensus. The coupling gain c is essentially a uniform weight on
the edges in the communication graph. Since the knowledge of global informa-
tion of the communication graph is required for the selection of c, we intend
to implement some adaptive control tools to dynamically update the weights
on the communication graph in order to compensate for the lack of the global
information, and thereby to present fully distributed consensus protocols.
In Sections 5.1 and 5.2, we propose distributed consensus protocols based
on the relative state and output information, combined with an adaptive law
for adjusting the coupling weights between neighboring agents. Two types of
distributed adaptive consensus protocols are proposed, namely, the edge-based
adaptive protocol which assigns a time-varying coupling weight to each edge in
the communication graph and the node-based adaptive protocol which uses a
time-varying coupling weight for each node. For the case with undirected com-
munication graphs, these two classes of adaptive protocols are designed to en-
sure that consensus is reached in a fully distributed fashion for any undirected
connected communication graph without using any global information about
the communication graph. The case with switching communication graphs is
also studied. It is shown that the edge-based adaptive consensus protocol is ap-
plicable to arbitrary switching connected graphs. Extensions of the obtained
results to the case with a leader-follower communication graph are further
discussed in Section 5.3.
The robustness of the proposed adaptive protocols is discussed in Section
5.4. The σ-modification technique in [63] is implemented to present robust
adaptive protocols, which are shown to be able to guarantee the ultimate
boundedness of both the consensus error and the adaptive weights in the presence
of bounded external disturbances. The upper bounds for the consensus error
are explicitly given.
Note that the aforementioned protocols are applicable to only undirected
where a_{ij} denotes the (i, j)-th entry of the adjacency matrix associated with
G, c_{ij}(t) denotes the time-varying coupling weight for the edge (i, j) with
c_{ij}(0) = c_{ji}(0), κ_{ij} = κ_{ji} are positive constants, and K ∈ R^{p×n} and
Γ ∈ R^{n×n} are the feedback gain matrices.
The node-based adaptive consensus protocol assigns a time-varying cou-
pling weight to each node (i.e., each agent) and is described by
u_i = d_i K Σ_{j=1}^N a_{ij}(x_i − x_j),

ḋ_i = ǫ_i [Σ_{j=1}^N a_{ij}(x_i − x_j)]^T Γ [Σ_{j=1}^N a_{ij}(x_i − x_j)],   i = 1, ⋯, N,   (5.3)
where di (t) denotes the coupling weight for agent i, ǫi are positive constants,
and the rest of the variables are defined as in (5.2).
The adaptive protocols (5.2) and (5.3) are actually extensions of the stat-
ic protocol (2.2) in Chapter 2 by dynamically updating the coupling weights
of neighboring agents, which is equivalent to adaptively weighting the com-
munication graph. The edge-based adaptive protocol (5.2) assigns different
time-varying weights for different communication edges while the node-based
adaptive protocol (5.3) assigns the same time-varying weight for all the ingoing
edges of each node, as illustrated in FIGURE 5.1. At each time instant,
the adaptive protocols (5.2) and (5.3) take the more general form of the static
protocol (2.2) with nonidentical constant weights on the communication
graph.
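The qualitative behavior of the edge-based law (5.2) can be illustrated with a minimal Euler simulation for single-integrator agents (A = 0, B = 1, with K = −1, Γ = 1, and κ_ij = 1 as illustrative choices, not values from the text): the states reach consensus while the coupling weights increase monotonically toward constant values.

```python
import numpy as np

# Undirected path graph on three single integrators.
Adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
N = 3
x = np.array([0.0, 1.0, 3.0])
c = 0.5 * Adj.copy()              # c_ij(0) = c_ji(0) = 0.5 on each edge
c0 = c.copy()
dt, steps = 0.01, 2000            # Euler integration up to t = 20

for _ in range(steps):
    # u_i = sum_j c_ij a_ij K (x_i - x_j) with K = -1
    u = np.array([-(c[i] * Adj[i] * (x[i] - x)).sum() for i in range(N)])
    # cdot_ij = kappa_ij a_ij (x_i - x_j)^T Gamma (x_i - x_j), kappa = Gamma = 1
    c += dt * Adj * (x[:, None] - x[None, :]) ** 2
    x += dt * u

spread = x.max() - x.min()
```

Because the weight matrix stays symmetric, the sum of the states is invariant, so in this sketch the agents agree on the initial average.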
FIGURE 5.1: The adaptive consensus protocols (5.2) and (5.3) assign different
time-varying weights on a given graph: (a) the edge-based adaptive protocol;
(b) the node-based adaptive protocol.
In the adaptive protocols (5.2) and (5.3), it can be observed that the adap-
tive coupling weights cij and di are monotonically increasing and will converge
to constant values when consensus is reached. An intuitive explanation for the
reason why (5.2) and (5.3) may solve the consensus problem is that the cou-
pling weights cij and di may be adaptively tuned such that all the nonzero
eigenvalues of the Laplacian matrix of the weighted communication graph are
located within the unbounded consensus region as in Theorem 4 of Chapter 2
(since the coupling weights (5.2) and (5.3) vary very slowly when consensus is
nearly reached, the closed-loop network resulting from (2.1) and (5.2) or (5.3)
may be regarded as a linear time-invariant system). However, how to design
the feedback gain matrices of (5.2) and (5.3) and theoretically prove that they
can solve the consensus problem for the N agents in (2.1) is far from being
easy, which will be addressed in the following subsections.
The following theorem presents the explicit expression for the final con-
sensus value.
V_2 = (1/2) ξ^T (L ⊗ P^{-1}) ξ + Σ_{i=1}^N (d_i − β)²/(2ǫ_i),   (5.16)
ξ̄_1 = (1^T/√N ⊗ P^{-1}) ξ = 0.   (5.20)
Then, we have

V̇_2 = (1/2) ξ̄^T [Λ ⊗ (AP + PA^T) − 2βΛ² ⊗ BB^T] ξ̄
    = (1/2) Σ_{i=2}^N λ_i ξ̄_i^T (AP + PA^T − 2βλ_i BB^T) ξ̄_i.   (5.21)
where c̃ij = cij − δ and δ is a positive scalar. Following similar steps as in the
proof of Theorem 29, we can obtain the time derivative of V3 along (5.23) as
V̇_3 = (1/2) ξ̃^T [I_N ⊗ (AP + PA^T) − 2δL_{σ(t)} ⊗ BB^T] ξ̃,   (5.24)

where L_{σ(t)} is the Laplacian matrix associated with G_{σ(t)} and the rest of the
variables are the same as in (5.10). Since G_{σ(t)} is connected and (1^T ⊗ I)ξ̃ = 0,
it follows from Lemma 2 that ξ̃^T (L_{σ(t)} ⊗ I) ξ̃ ≥ λ₂^{min} ξ̃^T ξ̃, where
λ₂^{min} ≜ min_{G_{σ(t)} ∈ G_N} {λ₂(L_{σ(t)})} denotes the minimum of the smallest
nonzero eigenvalues of L_{σ(t)} for all G_{σ(t)} ∈ G_N. Therefore, we can get from
(5.24) that
ui = F ṽi , i = 1, · · · , N,
where di (t) denotes the coupling weight for agent i, τi are positive constants,
and the rest of the variables are defined as in (5.25).
Note that the terms Σ_{j=1}^N c_{ij} a_{ij} C(v_i − v_j) in (5.25) and Σ_{j=1}^N a_{ij} C(ṽ_i − ṽ_j)
in (5.26) imply that the agents need to use the virtual outputs of the consensus
protocols from their neighbors via the communication topology G.
The adaptive dynamic protocols (5.25) and (5.26) are extensions of the
observer-type consensus protocol (2.21) in Chapter 2 by dynamically tun-
ing the coupling weights of the communication graph. Alternative distributed
The following theorem designs the adaptive protocol (5.25) to achieve con-
sensus.
where

Q̃ = T^{-T} Q T^{-1} = [ ςQ̃   0 ]      M̃ = T M T^{-1} = [ A + BF   BF ]
                       [ 0     Q ],                      [ 0        A  ],

H̃ = T H T^{-1} = [ 0   0  ]           R̃ = T^{-T} R T^{-1} = [ 0   0     ]
                  [ 0   LC ],                                [ 0   C^T C ].

It is easy to see that Q̃H̃ = −R̃. Then, by letting ζ̃ = [ζ̃_1^T, ⋯, ζ̃_N^T]^T, it follows
from (5.31) and (5.32) that

V̇_4 = Σ_{i=1}^N ζ̃_i^T Q̃M̃ ζ̃_i − α Σ_{i=1}^N Σ_{j=1}^N a_{ij} ζ̃_i^T R̃ (ζ̃_i − ζ̃_j)
    = (1/2) ζ̃^T [I_N ⊗ (Q̃M̃ + M̃^T Q̃) − 2αL ⊗ R̃] ζ̃,   (5.33)
where L is the Laplacian matrix associated with G.
By the definitions of ζ and ζ̃, it is easy to see that (1^T ⊗ I)ζ̃ = 0.
converges to zero. It is not difficult to obtain that ζ̄ and d_i satisfy the following
dynamics:

ζ̄̇ = [I_N ⊗ M + ((I_N − (1/N)11^T) DL) ⊗ H] ζ̄,

ḋ_i = τ_i [Σ_{j=1}^N L_{ij} ζ̄_j]^T R [Σ_{j=1}^N L_{ij} ζ̄_j],   i = 1, ⋯, N,   (5.36)
V_5 = (1/2) ζ̄^T (L ⊗ Q) ζ̄ + Σ_{i=1}^N (d_i − β)²/(2τ_i),   (5.37)
V̇_5 = ζ̂^T [L ⊗ Q̃M̃ − LDL ⊗ R̃] ζ̂
     + Σ_{i=1}^N (d_i − β) (Σ_{j=1}^N L_{ij} ζ̂_j)^T R̃ (Σ_{j=1}^N L_{ij} ζ̂_j),   (5.38)

where ζ̂ ≜ [ζ̂_1^T, ⋯, ζ̂_N^T]^T = (I_N ⊗ T) ζ̄, and T, Q̃, M̃, and R̃ are the same as in
(5.31). Observe that

ζ̂^T (LDL ⊗ R̃) ζ̂ = Σ_{i=1}^N d_i (Σ_{j=1}^N L_{ij} ζ̂_j)^T R̃ (Σ_{j=1}^N L_{ij} ζ̂_j).   (5.39)
Remark 36 The adaptive consensus protocols (5.2), (5.3), (5.25), and (5.26)
are extensions of their counterparts with constant coupling weights in Chapter
2 by using adaptive coupling weights. Compared to the consensus protocols
in Chapter 2 whose design generally requires the knowledge of the smallest
nonzero eigenvalue of the Laplacian matrix, one advantage of these adaptive
consensus protocols in the current chapter is that they depend only on local
information and are thereby fully distributed and scalable in the sense that
new agents can be added or existing agents can be removed without redesigning
the protocol. As mentioned earlier, the reason why this can be done lies in the
decoupling property of the consensus region approach.
[Figure: the communication graphs G1 and G2, each with six nodes.]
FIGURE 5.3: (a) The consensus errors x_i − x_1 of third-order integrators under
(5.26); (b) the coupling weights d_i in (5.26).
FIGURE 5.4: (a) The consensus errors x_i − x_1 of third-order integrators under
(5.25); (b) the coupling weights c_{ij} in (5.25).
where d¯i denotes the coupling weight associated with follower i and ǫi > 0.
where Lij denotes the (i, j)-th entry of the Laplacian matrix L.
Consider the Lyapunov function candidate
V_7 = (1/2) Σ_{i=2}^N υ_i^T P^{-1} υ_i + Σ_{i=2}^N (d_i − σ)²/(2ǫ_i),
to zero any more but rather can only be expected to converge into some small
neighborhood of the origin. Since the coupling weights cij and di are integrals
of the nonnegative quadratic functions of the relative states, it is easy to see
that in this case cij and di will grow unbounded, which is called the parameter
drift phenomenon in the classic adaptive control literature [63]. Therefore, the
adaptive protocols (5.2) and (5.3) are fragile in the presence of the bounded
disturbances ωi .
The main objective of this section is to make modifications on (5.2) and
(5.3) in order to present some distributed robust adaptive protocols which can
guarantee the ultimate boundedness of the consensus error and the coupling
weights for the agents in (5.46) in the presence of bounded disturbances ωi .
Motivated by the σ-modification technique in the robust adaptive control
literature [63], we propose a modified edge-based adaptive protocol as follows:
u_i = Σ_{j=1}^N c_{ij} a_{ij} K(x_i − x_j),

ċ_{ij} = κ_{ij} a_{ij} [−φ_{ij} c_{ij} + (x_i − x_j)^T Γ (x_i − x_j)],   i = 1, ⋯, N,   (5.47)
where φij , i, j = 1, · · · , N , are small positive constants and the rest of the
variables are defined as in (5.2).
Similarly, a modified node-based adaptive protocol can be described by
u_i = d_i Σ_{j=1}^N a_{ij} K(x_i − x_j),

ḋ_i = τ_i [−ϕ_i d_i + (Σ_{j=1}^N a_{ij}(x_i − x_j))^T Γ (Σ_{j=1}^N a_{ij}(x_i − x_j))],   i = 1, ⋯, N,   (5.48)
where ϕi , i = 1, · · · , N , are small positive constants and the rest of the vari-
ables are defined as in (5.3).
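A companion sketch of the σ-modified law (5.47) for single integrators (again with K = −1, Γ = 1, κ_ij = 1, and φ_ij = 0.1 as hypothetical choices) illustrates the boundedness claim: under a bounded disturbance, the weights no longer drift, because ċ_ij turns negative once c_ij exceeds (x_i − x_j)^T Γ(x_i − x_j)/φ_ij.

```python
import numpy as np

Adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
N = 3
x = np.array([0.0, 1.0, 3.0])
c = np.zeros((N, N))              # start the adaptive weights at zero
phi, dt, steps = 0.1, 0.01, 5000  # Euler integration up to t = 50

for k in range(steps):
    t = k * dt
    w = 0.5 * np.sin(t) * np.array([1.0, -1.0, 1.0])   # bounded disturbances
    diff2 = (x[:, None] - x[None, :]) ** 2
    u = np.array([-(c[i] * Adj[i] * (x[i] - x)).sum() for i in range(N)])
    # cdot_ij = a_ij [-phi c_ij + (x_i - x_j)^2]  (sigma-modification)
    c += dt * Adj * (-phi * c + diff2)
    x += dt * (u + w)

spread = x.max() - x.min()
max_weight = c.max()
```

Without the leakage term −φ_ij c_ij, the same simulation would exhibit the parameter drift described above, since ċ_ij would then be a nonnegative quadratic form driven persistently by the disturbance.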
where ε > 1. Then, both the consensus error ξ and the adaptive gains c_{ij},
i, j = 1, ⋯, N, in (5.49) are uniformly ultimately bounded, and the following
statements hold.
i) For any φ_{ij}, the parameters ξ and c_{ij} exponentially converge to the residual
set

D_1 ≜ {ξ, c_{ij} : V_8 < (1/(2δλ_min(Q))) Σ_{i=1}^N θ_i² + (α²/(4δ)) Σ_{i=1}^N Σ_{j=1}^N φ_{ij} a_{ij}},   (5.51)

ii) If φ_{ij} is chosen such that ϱ ≜ min_{i,j=1,⋯,N}{κ_{ij} φ_{ij}} < ε − 1, then in addition
to i), ξ exponentially converges to the residual set

D_2 ≜ {ξ : ‖ξ‖² ≤ (λ_max(Q)/(ε − 1 − ϱ)) [(1/λ_min(Q)) Σ_{i=1}^N θ_i² + (α²/2) Σ_{i=1}^N Σ_{j=1}^N φ_{ij} a_{ij}]}.   (5.53)
−2αL ⊗ Q^{-1}BB^T Q^{-1}, and we have used the fact that −c̃_{ij}² − c̃_{ij}α ≤ −(1/2)c̃_{ij}² + (1/2)α².
V̇_8 ≤ (1/2) ξ^T (X + M ⊗ Q^{-1}) ξ + (1/(2λ_min(Q))) Σ_{i=1}^N θ_i²
     + (1/4) Σ_{i=1}^N Σ_{j=1}^N φ_{ij} a_{ij} (−c̃_{ij}² + α²),   (5.57)

where we have used the assertion that ω^T (M ⊗ Q^{-1}) ω ≤ (1/λ_min(Q)) Σ_{i=1}^N θ_i².
Note that (5.57) can be rewritten into
V̇_8 ≤ −δV_8 + δV_8 + (1/2) ξ^T (X + M ⊗ Q^{-1}) ξ
     + (1/(2λ_min(Q))) Σ_{i=1}^N θ_i² + (1/4) Σ_{i=1}^N Σ_{j=1}^N φ_{ij} a_{ij} (−c̃_{ij}² + α²)
   = −δV_8 + (1/2) ξ^T [X + (δI_N + M) ⊗ Q^{-1}] ξ + (1/(2λ_min(Q))) Σ_{i=1}^N θ_i²
     − (1/4) Σ_{i=1}^N Σ_{j=1}^N (φ_{ij} − δ/κ_{ij}) a_{ij} c̃_{ij}² + (α²/4) Σ_{i=1}^N Σ_{j=1}^N φ_{ij} a_{ij}.   (5.58)
eigenvalue of L and all the other eigenvalues are positive. By the definitions
of M and L, we have ML = L = LM. Thus there exists a unitary matrix U ∈
R^{N×N} such that U^T MU and U^T LU are both diagonal [58]. Since L and M
have the same right and left eigenvectors corresponding to the zero eigenvalue,
namely 1, we can choose U = [1/√N  Y] and U^T = [1^T/√N; W], with Y ∈ R^{N×(N−1)}
and W ∈ R^{(N−1)×N}, satisfying

U^T M U = Π = diag(0, I_{N−1}),
U^T L U = Λ = diag(0, λ_2, ⋯, λ_N).   (5.60)
ξ^T [X + (δI_N + M) ⊗ Q^{-1}] ξ = Σ_{i=2}^N ξ̄_i^T [AQ + QA^T + (δ + 1)Q − 2αλ_i BB^T] ξ̄_i
                                 ≤ Σ_{i=2}^N ξ̄_i^T [AQ + QA^T + εQ − 2BB^T] ξ̄_i ≤ 0,   (5.61)
where we have used the fact that αλi ≥ 1, i = 2, · · · , N , and δ ≤ ε − 1. Then,
it follows from (5.59) and (5.61) that
V̇_8 ≤ −δV_8 + (1/(2λ_min(Q))) Σ_{i=1}^N θ_i² + (α²/4) Σ_{i=1}^N Σ_{j=1}^N φ_{ij} a_{ij}.   (5.62)
By using Lemma 16 (the Comparison lemma), we can obtain from (5.62) that
V_8 ≤ [V_8(0) − (1/(2δλ_min(Q))) Σ_{i=1}^N θ_i² − (α²/(4δ)) Σ_{i=1}^N Σ_{j=1}^N φ_{ij} a_{ij}] e^{−δt}
     + (1/(2δλ_min(Q))) Σ_{i=1}^N θ_i² + (α²/(4δ)) Σ_{i=1}^N Σ_{j=1}^N φ_{ij} a_{ij}.   (5.63)
\[
\begin{aligned}
\dot V_8 &\le -\varrho V_8 - \frac{\varepsilon-1-\varrho}{2}\,\xi^T(I_N\otimes Q^{-1})\xi + \frac{1}{2\lambda_{\min}(Q)}\sum_{i=1}^N\theta_i^2\\
&\quad - \frac{1}{4}\sum_{i=1}^N\sum_{j=1}^N\Big(\phi_{ij}-\frac{\varrho}{\kappa_{ij}}\Big)a_{ij}\tilde c_{ij}^2 + \frac{\alpha^2}{4}\sum_{i=1}^N\sum_{j=1}^N\phi_{ij}a_{ij}\\
&\le -\varrho V_8 - \frac{\varepsilon-1-\varrho}{2\lambda_{\max}(Q)}\|\xi\|^2 + \frac{1}{2\lambda_{\min}(Q)}\sum_{i=1}^N\theta_i^2 + \frac{\alpha^2}{4}\sum_{i=1}^N\sum_{j=1}^N\phi_{ij}a_{ij}.
\end{aligned}\tag{5.64}
\]
Obviously, if \|\xi\|^2 > \frac{\lambda_{\max}(Q)}{\varepsilon-1-\varrho}\big[\frac{1}{\lambda_{\min}(Q)}\sum_{i=1}^N\theta_i^2 + \frac{\alpha^2}{2}\sum_{i=1}^N\sum_{j=1}^N\phi_{ij}a_{ij}\big], it follows from (5.64) that \dot V_8 < 0, which implies that ξ exponentially converges to the residual set \mathcal{D}_2 in (5.53) with a convergence rate faster than e^{-\varrho t}.
and d_i satisfy
\[
\begin{aligned}
\dot\xi &= (I_N\otimes A + MDL\otimes BK)\xi + (M\otimes I_n)\omega,\\
\dot d_i &= \tau_i\Big[-\varphi_i d_i + \Big(\sum_{j=1}^N L_{ij}\xi_j^T\Big)\Gamma\Big(\sum_{j=1}^N L_{ij}\xi_j\Big)\Big], \quad i = 1,\cdots,N,
\end{aligned}\tag{5.65}
\]
where D = \operatorname{diag}(d_1, \cdots, d_N) and M = I_N - \frac{1}{N}\mathbf{1}\mathbf{1}^T.
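As a quick numerical sanity check (our own sketch, not from the book), the projector identities ML = L = LM used in the analysis can be verified for any undirected graph Laplacian; the 4-node path graph below is an assumed example.

```python
import numpy as np

# Sketch: verify M L = L = L M for M = I_N - (1/N) 1 1^T and an undirected
# graph Laplacian L (4-node path graph, an assumed example).
N = 4
adj = np.zeros((N, N))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
L = np.diag(adj.sum(axis=1)) - adj        # graph Laplacian, rows sum to zero
M = np.eye(N) - np.ones((N, N)) / N       # projector onto span(1)-orthogonal complement

assert np.allclose(M @ L, L) and np.allclose(L @ M, L)
# Both matrices are symmetric and commute, hence they are simultaneously
# diagonalizable by a single orthogonal matrix, as used in the proofs above.
```

The identities hold because the row and column sums of an undirected Laplacian are zero, so subtracting the rank-one averaging term changes nothing.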
By using (5.18) and the assertion that -\tilde d_i^2 - \tilde d_i\alpha \le -\frac{1}{2}\tilde d_i^2 + \frac{1}{2}\alpha^2, it follows from (5.69) that
\[
\dot V_9 \le \frac{1}{2}\xi^T Y\xi + \xi^T(L\otimes Q^{-1})\omega - \frac{1}{2}\sum_{i=1}^N\varphi_i(\tilde d_i^2 - \alpha^2), \tag{5.70}
\]
Consensus Control Using Distributed Adaptive Protocols 121
where Y \triangleq L\otimes(Q^{-1}A + A^T Q^{-1}) - 2\alpha L^2\otimes Q^{-1}BB^T Q^{-1}. By using the technique of completing the square as in (5.56), we can obtain from (5.70) that
\[
\begin{aligned}
\dot V_9 &\le \frac{1}{2}\xi^T(Y + L\otimes Q^{-1})\xi + \frac{1}{2}\omega^T(L\otimes Q^{-1})\omega - \frac{1}{2}\sum_{i=1}^N\varphi_i(\tilde d_i^2-\alpha^2)\\
&\le \frac{1}{2}\xi^T(Y + L\otimes Q^{-1})\xi + \frac{\lambda_{\max}(L)}{2\lambda_{\min}(Q)}\sum_{i=1}^N\theta_i^2 - \frac{1}{2}\sum_{i=1}^N\varphi_i(\tilde d_i^2-\alpha^2),
\end{aligned}\tag{5.71}
\]
(5.71) into
Remark 40 One weakness of the robust adaptive protocols (5.47) and (5.48) is that in the absence of disturbances we can no longer ensure asymptotic convergence of the consensus error to the origin, which is actually an inherent drawback of the σ-modification technique [63]. Other robust adaptive control techniques, e.g., the classic dead-zone modification and the projection operator techniques [62], or the new low-frequency learning method [191], might also be used to develop alternative robust adaptive consensus protocols.
[Figure: the coupling weights d_i versus t.]
[Figure: (a) the consensus errors x_i − x_1 and (b) the coupling weights d_i, versus t.]
graphs is quite challenging. The main difficulty lies in that the Laplacian matrices of directed graphs are generally asymmetric, which renders the construction of adaptive consensus protocols and the selection of appropriate Lyapunov functions far from easy.
Assumption 5.3 The graph G contains a directed spanning tree with the leader as the root node.
where L_2 ∈ R^{(N−1)×1} and L_1 ∈ R^{(N−1)×(N−1)}. Since G satisfies Assumption 5.3, it follows from Lemma 1 that all eigenvalues of L_1 have positive real parts. It can then be verified that L_1 is a nonsingular M-matrix (see Definition 6 in Chapter 1) and is diagonally dominant.
\[
\begin{aligned}
u_i &= d_i\,\rho_i(\nu_i^T P^{-1}\nu_i)\,K\nu_i,\\
\dot d_i &= \nu_i^T\Gamma\nu_i, \quad i = 2,\cdots,N,
\end{aligned}\tag{5.77}
\]
where \nu_i \triangleq \sum_{j=1}^N a_{ij}(x_i - x_j), d_i(t) denotes the time-varying coupling weight associated with the i-th follower with d_i(0) ≥ 1, P > 0 is a solution to the LMI (5.6), K ∈ R^{p×n} and Γ ∈ R^{n×n} are the feedback gain matrices, and ρ_i(·) are smooth and monotonically nondecreasing functions, to be determined later, which satisfy ρ_i(s) ≥ 1 for s > 0.
Let ν = [\nu_2^T, \cdots, \nu_N^T]^T. Then,
\[
\nu = (L_1\otimes I_n)\begin{bmatrix} x_2 - x_1\\ \vdots\\ x_N - x_1\end{bmatrix}, \tag{5.78}
\]
\[
\begin{aligned}
\dot\nu &= [I_{N-1}\otimes A + L_1\widehat D\hat\rho(\nu)\otimes BK]\nu,\\
\dot d_i &= \nu_i^T\Gamma\nu_i,
\end{aligned}\tag{5.79}
\]
Lemma 29 There exists a positive diagonal matrix G such that GL_1 + L_1^T G > 0. One such G is given by \operatorname{diag}(q_2, \cdots, q_N), where q = [q_2, \cdots, q_N]^T = (L_1^T)^{-1}\mathbf{1}.
Proof 39 The first assertion is well known; see Theorem 4.25 in [127] or Theorem 2.3 in [10]. The second assertion is shown in the following. Note that the specific form of G given here is different from that in [30, 127, 196]. Since L_1 is a nonsingular M-matrix, it follows from Theorem 4.25 in [127] that (L_1^T)^{-1} exists, is nonnegative, and thereby cannot have a zero row. Then, it is easy to verify that q > 0 and hence GL_1\mathbf{1} ≥ 0.¹ By noting that L_1^T G\mathbf{1} = L_1^T q = \mathbf{1}, we can conclude that (GL_1 + L_1^T G)\mathbf{1} > 0, implying that GL_1 + L_1^T G is strictly diagonally dominant. Since the diagonal entries of GL_1 + L_1^T G are positive, it then follows from Gershgorin's disc theorem [58] that every eigenvalue of GL_1 + L_1^T G is positive, implying that GL_1 + L_1^T G > 0.
¹ For a vector x, x > (≥) 0 means that every entry of x is positive (nonnegative).
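Lemma 29 is easy to check numerically. The sketch below (our own example, with the directed chain leader → 2 → 3 → 4, so L_1 is the follower block of the Laplacian) computes q = (L_1^T)^{-1}\mathbf{1} and verifies that G = diag(q) renders GL_1 + L_1^T G positive definite.

```python
import numpy as np

# Assumed example: leader 1 pins follower 2, followers form the chain 2->3->4.
L1 = np.array([[1., 0., 0.],
               [-1., 1., 0.],
               [0., -1., 1.]])           # nonsingular M-matrix
q = np.linalg.solve(L1.T, np.ones(3))    # q = (L1^T)^{-1} 1, here q = [3, 2, 1]
assert np.all(q > 0)
G = np.diag(q)
S = G @ L1 + L1.T @ G                    # strictly diagonally dominant, symmetric
assert np.all(np.linalg.eigvalsh(S) > 0) # G L1 + L1^T G > 0, as Lemma 29 claims
```

The same check works for any follower subgraph satisfying Assumption 5.3, since (L_1^T)^{-1} is then nonnegative with no zero row.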
126 Cooperative Control of Multi-Agent Systems
where we have used the fact that GL_1 + L_1^T G > 0 to get the first inequality. Because ρ_i are monotonically increasing and satisfy ρ_i(s) ≥ 1 for s > 0, it follows that
\[
\begin{aligned}
\sum_{i=2}^N \frac{\dot d_i}{q_i}\int_0^{\nu_i^T P^{-1}\nu_i}\rho_i(s)\,ds
&\le \sum_{i=2}^N \frac{\dot d_i}{q_i}\,\rho_i\,\nu_i^T P^{-1}\nu_i\\
&\le \sum_{i=2}^N \frac{\dot d_i}{3q_i^3\hat\lambda_0^3} + \frac{2}{3}\hat\lambda_0\sum_{i=2}^N \dot d_i\,\rho_i^{3/2}(\nu_i^T P^{-1}\nu_i)^{3/2}\\
&\le \sum_{i=2}^N \frac{\dot d_i}{3q_i^3\hat\lambda_0^3} + \frac{2}{3}\hat\lambda_0\sum_{i=2}^N \dot d_i\,\rho_i^{3/2}(1+\nu_i^T P^{-1}\nu_i)^{3/2},
\end{aligned}\tag{5.83}
\]
where we have used the mean value theorem for integrals to get the first inequality and Lemma 17 (Young's inequality) to get the second inequality. Substituting (5.82) and (5.83) into (5.81) yields
\[
\begin{aligned}
\dot V_{10} &\le \frac{1}{2}\nu^T[\widehat D\rho G\otimes(P^{-1}A + A^T P^{-1})]\nu\\
&\quad - \sum_{i=2}^N\Big[\hat\lambda_0\Big(\frac{1}{2}d_i^2\rho_i^2 - \frac{1}{12}d_i - \frac{1}{3}\rho_i^2\Big) + \frac{1}{12}\Big(\hat\lambda_0\alpha - \frac{2}{q_i^3\hat\lambda_0^2}\Big)\Big]\nu_i^T P^{-1}BB^T P^{-1}\nu_i.
\end{aligned}\tag{5.84}
\]
Choose \alpha \ge \hat\alpha + \max_{i=2,\cdots,N}\frac{2}{q_i^3\hat\lambda_0^3}, where \hat\alpha > 0 will be determined later. Then, by noting that ρ_i ≥ 1 and d_i ≥ 1, i = 2, \cdots, N, it follows from (5.84) that
\[
\begin{aligned}
\dot V_{10} &\le \frac{1}{2}\nu^T[\widehat D\rho G\otimes(P^{-1}A + A^T P^{-1})]\nu - \frac{\hat\lambda_0}{12}\sum_{i=2}^N(d_i^2\rho_i^2 + \hat\alpha)\nu_i^T P^{-1}BB^T P^{-1}\nu_i\\
&\le \frac{1}{2}\nu^T\Big[\widehat D\rho G\otimes(P^{-1}A + A^T P^{-1}) - \frac{1}{3}\sqrt{\hat\alpha\hat\lambda_0}\,\widehat D\hat\rho\otimes P^{-1}BB^T P^{-1}\Big]\nu.
\end{aligned}\tag{5.85}
\]
Let \tilde\nu = (\sqrt{\widehat D\hat\rho G}\otimes I)\nu and choose \hat\alpha to be sufficiently large such that \sqrt{\hat\alpha\hat\lambda_0}\,G^{-1} \ge 6I. Then, we can get from (5.85) that
\[
\dot V_{10} \le \frac{1}{2}\tilde\nu^T[I_{N-1}\otimes(P^{-1}A + A^T P^{-1} - 2P^{-1}BB^T P^{-1})]\tilde\nu \le 0, \tag{5.86}
\]
where to get the last inequality we have used the assertion that P^{-1}A + A^T P^{-1} − 2P^{-1}BB^T P^{-1} < 0, which follows readily from (5.6).
Since V̇10 ≤ 0, V10 (t) is bounded and so is each di . By noting d˙i ≥ 0, it
can be seen from (5.79) that di are monotonically increasing. Then, it follows
that each coupling weight di converges to some finite value. Note that V̇10 ≡ 0
implies that ν̃ = 0 and thereby ν = 0. Hence, by Lemma 11 (LaSalle’s Invari-
ance principle), it follows that the consensus error ν asymptotically converges
to zero. That is, the leader-follower consensus problem is solved.
Remark 41 In comparison to the adaptive protocols in the previous sections,
a distinct feature of (5.77) is that inspired by the changing supply functions
in [157], monotonically increasing functions ρi are introduced into (5.77) to
provide extra freedom for design. As the consensus error ν converges to zero,
the functions ρi will converge to 1, in which case the adaptive protocol (5.77)
will reduce to the adaptive protocol (5.45) for undirected graphs in Section
5.3. It should be mentioned that the Lyapunov function used in the proof of
Theorem 39 is partly inspired by [175].
[Figure: the communication graph, with leader node 1 and follower nodes 2–7.]
[Figure 5.8: the consensus errors ν_i versus t.]
To illustrate Theorem 39, let the initial states d_i(0) and ρ_i(0) be randomly chosen within the interval [1, 3]. The consensus errors ν_i, i = 2, \cdots, 7, of the double integrators, defined as in (5.78), under the adaptive protocol (5.77) with K, Γ, and ρ_i chosen as above, are depicted in FIGURE 5.8, which shows that leader-follower consensus is indeed achieved. The coupling weights d_i associated with the nodes are drawn in FIGURE 5.9, from which it can be observed that the coupling weights converge to finite steady-state values.
[Figure 5.9: the coupling weights d_i(t) versus t.]
AT Q + QA + I − QBB T Q = 0. (5.90)
and the rest of the variables are defined as in (5.77). Different from the previous subsection, where the LMI (5.6) is used to design the adaptive protocol (5.77), here we use the algebraic Riccati equation (5.90) to design the new adaptive protocol (5.89). These two approaches are nearly equivalent.
where we have used the fact that di (0) ≥ 1 to get the last inequality.
where
\[
\Xi \triangleq \frac{\hat\lambda_0}{24}\sum_{i=2}^N \varphi_i(\alpha-1)^2 + \frac{12}{\hat\lambda_0}\bar\sigma^2(GL_1)\sum_{i=2}^N(\theta_i+\theta_1)^2, \tag{5.95}
\]
\alpha = \frac{72}{\hat\lambda_0^2}\max_{i=2,\cdots,N} q_i^2 + \max_{i=2,\cdots,N}\frac{2}{q_i^3\hat\lambda_0^3}, \bar\sigma(GL_1) denotes the largest singular value of GL_1, and q_i, G, and \hat\lambda_0 are chosen as in the proof of Theorem 39.
By following similar steps in the proof of Theorem 39, we can get the
following results:
\[
\begin{aligned}
\sum_{i=2}^N \frac{d_i}{q_i}\rho_i\nu_i^T Q\dot\nu_i &\le \frac{1}{2}\nu^T[\widehat D\hat\rho G\otimes(QA + A^T Q) - \hat\lambda_0\widehat D^2\hat\rho^2\otimes QBB^T Q]\nu\\
&\quad + \nu^T(\widehat D\hat\rho GL_1\otimes QB)\omega,
\end{aligned}\tag{5.97}
\]
and
\[
\begin{aligned}
\sum_{i=2}^N \frac{\dot d_i}{q_i}\int_0^{\nu_i^T Q\nu_i}\rho_i(s)\,ds &\le \sum_{i=2}^N \frac{\dot{\bar d}_i}{q_i}\int_0^{\nu_i^T Q\nu_i}\rho_i(s)\,ds\\
&\le \sum_{i=2}^N\Big(\frac{1}{3q_i^3\hat\lambda_0^3} + \frac{2}{3}\hat\lambda_0\rho_i^2\Big)\dot{\bar d}_i\\
&\le \sum_{i=2}^N\Big(\frac{1}{3q_i^3\hat\lambda_0^3} + \frac{2}{3}\hat\lambda_0\rho_i^2\Big)\nu_i^T QBB^T Q\nu_i,
\end{aligned}\tag{5.98}
\]
where we have used the fact that \dot d_i \le \dot{\bar d}_i \triangleq \nu_i^T\Gamma\nu_i to get the first inequality. Substituting (5.97) and (5.98) into (5.96) yields
\[
\begin{aligned}
\dot V_{10} &\le \frac{1}{2}\nu^T[\widehat D\hat\rho G\otimes(QA + A^T Q)]\nu - \frac{\hat\lambda_0}{12}\sum_{i=2}^N(d_i^2\rho_i^2 + 2\hat\alpha)\nu_i^T QBB^T Q\nu_i\\
&\quad + \frac{\hat\lambda_0}{24}\sum_{i=2}^N \varphi_i[-\tilde d_i^2 + (\alpha-1)^2] + \nu^T(\widehat D\hat\rho GL_1\otimes QB)\omega,
\end{aligned}\tag{5.99}
\]
where we have used the fact that
\[
-(d_i-\alpha)(d_i-1) = -\tilde d_i(\tilde d_i+\alpha-1) \le -\frac{1}{2}\tilde d_i^2 + \frac{1}{2}(\alpha-1)^2.
\]
Note that
\[
\begin{aligned}
2\nu^T(\widehat D\hat\rho GL_1\otimes QB)\omega &= 2\nu^T\Big(\sqrt{\tfrac{\hat\lambda_0}{12}}\,\widehat D\hat\rho\otimes QB\Big)\Big(\sqrt{\tfrac{12}{\hat\lambda_0}}\,GL_1\otimes I\Big)\omega\\
&\le \frac{\hat\lambda_0}{12}\nu^T(\widehat D^2\hat\rho^2\otimes QBB^T Q)\nu + \frac{12}{\hat\lambda_0}\|(GL_1\otimes I)\omega\|^2\\
&\le \frac{\hat\lambda_0}{12}\nu^T(\widehat D^2\hat\rho^2\otimes QBB^T Q)\nu + \frac{12}{\hat\lambda_0}\bar\sigma^2(GL_1)\sum_{i=2}^N(\theta_i+\theta_1)^2,
\end{aligned}\tag{5.100}
\]
where we have used (5.92) to get the last inequality. Then, substituting (5.100) into (5.99) gives
\[
\begin{aligned}
\dot V_{10} &\le \frac{1}{2}\nu^T\Big[\widehat D\hat\rho G\otimes(QA + A^T Q) - \frac{\hat\lambda_0}{24}(\widehat D^2\hat\rho^2 + 4\hat\alpha I)\otimes QBB^T Q\Big]\nu\\
&\quad - \frac{\hat\lambda_0}{24}\sum_{i=2}^N \varphi_i\tilde d_i^2 + \Xi\\
&\le \frac{1}{2}W(\nu) - \frac{\hat\lambda_0}{24}\sum_{i=2}^N \varphi_i\tilde d_i^2 + \Xi,
\end{aligned}\tag{5.101}
\]
where we have used the assertion that \frac{\hat\lambda_0}{12}(\widehat D^2\hat\rho^2 + 4\hat\alpha I) \ge \frac{\sqrt{\hat\alpha\hat\lambda_0}}{3}\widehat D\hat\rho \ge 2\widehat D\hat\rho G if \sqrt{\hat\alpha\hat\lambda_0}\,I \ge 6G to get the last inequality, and Ξ is defined as in (5.95).
Therefore, we can verify that \frac{1}{2}W(\nu) - \frac{\hat\lambda_0}{24}\sum_{i=2}^N \varphi_i\tilde d_i^2 is negative definite. By virtue of Lemma 15, we get that both the consensus error ν and the adaptive gains d_i are uniformly ultimately bounded.
Note that (5.99) can be rewritten into
\[
\dot V_{10} \le -\delta V_{10} + \delta V_{10} + \frac{1}{2}W(\nu) - \frac{\hat\lambda_0}{24}\sum_{i=2}^N \varphi_i\tilde d_i^2 + \Xi. \tag{5.102}
\]
\[
\dot V_{10} \le -\delta V_{10} - \lambda_{\min}(Q)\,\frac{\min_{i=2,\cdots,N}\tau_i-\delta}{2}\,\|\nu\|^2 + \Xi, \tag{5.105}
\]
which implies that the consensus error ν exponentially converges to the residual set \mathcal{D}_5 in (5.94) with a convergence rate faster than e^{-\delta t}.
Remark 42 Theorem 40 shows that the modified adaptive protocol (5.89) can
ensure the ultimate boundedness of the consensus error ν and the adaptive
gains di for the agents in (5.88), implying that (5.89) is indeed robust in
the presence of bounded external disturbances. As discussed in Remark 38, ϕi
should be chosen to be relatively small in order to ensure a smaller bound for
the consensus error ν.
[Figure 5.10: the consensus errors ν_i versus t.]
To illustrate Theorem 40, let ϕ_i = 0.02 in (5.89) and the initial states d_i(0) be randomly chosen within the interval [1, 3]. The consensus errors ν_i, i = 2, \cdots, 7, of the double integrators, defined as in (5.78), and the coupling weights d_i associated with the followers are depicted in FIGURE 5.10 and FIGURE 5.11, respectively, both of which are clearly bounded.
[Figure 5.11: the coupling weights d_i versus t.]
5.6 Notes
The results in this section are mainly based on [78, 80, 93, 95]. There exist
some related works which have proposed similar adaptive protocols to achieve
6
Distributed Tracking with a Leader of Possibly
Nonzero Input
CONTENTS
6.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.2 Distributed Discontinuous Tracking Controllers . . . . . . . . . . . . . . . . . . . . . 139
6.2.1 Discontinuous Static Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
6.2.2 Discontinuous Adaptive Controllers . . . . . . . . . . . . . . . . . . . . . . . . 142
6.3 Distributed Continuous Tracking Controllers . . . . . . . . . . . . . . . . . . . . . . . . 144
6.3.1 Continuous Static Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6.3.2 Adaptive Continuous Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.4 Distributed Output-Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.5 Simulation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
137
control input) of the leader is available to only a subset of the followers. The
interaction graph among the N agents is represented by a directed graph G,
which satisfies the following assumption.
Assumption 6.2 The leader’s control input u1 is bounded, i.e., there exists
a positive constant γ such that ku1 k ≤ γ.
associated with G, and g(·) is a nonlinear function defined such that for w ∈ R^n,
\[
g(w) = \begin{cases} \dfrac{w}{\|w\|} & \text{if } \|w\| \ne 0\\[4pt] 0 & \text{if } \|w\| = 0.\end{cases}\tag{6.4}
\]
Let \xi_i = x_i - x_1, i = 2, \cdots, N, and \xi = [\xi_2^T, \cdots, \xi_N^T]^T. Using (6.3) for (6.1), we obtain the closed-loop network dynamics as
\[
\dot\xi = (I_{N-1}\otimes A + c_1 L_1\otimes BK)\xi + c_2(I_{N-1}\otimes B)G(\xi) - (\mathbf{1}\otimes B)u_1, \tag{6.5}
\]
where G(\xi) \triangleq \big[g(K\textstyle\sum_{j=2}^N L_{2j}\xi_j)^T, \cdots, g(K\sum_{j=2}^N L_{Nj}\xi_j)^T\big]^T.
Theorem 41 Suppose that Assumptions 6.1 and 6.2 hold. The distributed tracking control problem of the agents described by (6.1) is solved under the controller (6.3) with c_1 \ge \frac{1}{\lambda_{\min}(L_1)}, c_2 \ge \gamma, and K = -B^T P^{-1}, where \lambda_{\min}(L_1) denotes the smallest eigenvalue of L_1 and P > 0 is a solution to the following linear matrix inequality (LMI):
\[
= -\sum_{i=2}^N \Big\|B^T P^{-1}\sum_{j=2}^N L_{ij}\xi_j\Big\|. \tag{6.11}
\]
(6.11)
Then, we can get from (6.8), (6.10), and (6.11) that
\[
\begin{aligned}
\dot V_1 &\le \frac{1}{2}\xi^T\mathcal{X}\xi - (c_2-\gamma)\sum_{i=2}^N\Big\|B^T P^{-1}\sum_{j=2}^N L_{ij}\xi_j\Big\|\\
&\le \frac{1}{2}\xi^T\mathcal{X}\xi.
\end{aligned}\tag{6.12}
\]
Because the graph G satisfies Assumption 6.1, by (6.6), we obtain that
\[
\begin{aligned}
(L_1^{-\frac{1}{2}}\otimes P)\mathcal{X}(L_1^{-\frac{1}{2}}\otimes P) &= I_{N-1}\otimes(AP + PA^T) - 2c_1 L_1\otimes BB^T\\
&\le I_{N-1}\otimes[AP + PA^T - 2c_1\lambda_{\min}(L_1)BB^T]\\
&< 0,
\end{aligned}\tag{6.13}
\]
which implies \mathcal{X} < 0 and \dot V_1 < 0. Therefore, the state ξ of (6.5) is asymptotically stable, i.e., the tracking control problem is solved.
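The gain design of Theorem 41 can be sketched numerically. The explicit LMI (6.6) is not visible in this excerpt; consistent with (6.13) and (7.14), we assume it is AP + PA^T − 2BB^T < 0, and we obtain a feasible P by inverting the solution X of the ARE A^T X + XA − 2XBB^T X + I = 0 (then AP + PA^T − 2BB^T = −PP < 0). The system and graph below are our own assumed examples, not the book's.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0., 1.], [0., 0.]])        # double integrator (assumed example)
B = np.array([[0.], [1.]])
# P = X^{-1}, where X solves A^T X + X A - 2 X B B^T X + I = 0 (take R = I/2),
# satisfies the assumed LMI A P + P A^T - 2 B B^T = -P P < 0.
X = solve_continuous_are(A, B, np.eye(2), 0.5 * np.eye(1))
P = np.linalg.inv(X)
assert np.linalg.eigvalsh(A @ P + P @ A.T - 2 * B @ B.T).max() < 0

K = -B.T @ X                               # K = -B^T P^{-1} = -B^T X
# Followers 2-3-4 form an undirected path; the leader (node 1) pins node 2:
L1 = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
c1 = 1.0 / np.linalg.eigvalsh(L1).min()    # c1 >= 1/lambda_min(L1)
gamma = 1.0                                # assumed bound on ||u1||
c2 = gamma                                 # c2 >= gamma
```

Any c_1 and c_2 at least as large as the values computed here also satisfy the conditions of the theorem, reflecting that (6.3) only needs sufficiently large coupling gains.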
where K ∈ Rp×n and aij are defined in (6.3), di (t) denotes the time-varying
coupling gain (weight) associated with the i-th follower, Γ ∈ Rn×n is the
feedback gain matrix, and τi are positive scalars.
Let ξ be defined as in (6.5) and D = diag(d2 , · · · , dN ). From (6.1) and
(6.14), we can get the closed-loop network dynamics as
Theorem 42 Suppose that Assumptions 6.1 and 6.2 hold. Then, the dis-
tributed tracking control problem of the agents described by (6.1) is solved un-
der the adaptive controller (6.14) with K = −B T P −1 and Γ = P −1 BB T P −1 ,
where P > 0 is a solution to the LMI (6.6). Moreover, each coupling gain di
converges to some finite steady-state value.
\[
V_2 = \frac{1}{2}\xi^T(L_1\otimes P^{-1})\xi + \sum_{i=2}^N \frac{1}{2\tau_i}(d_i-\beta)^2, \tag{6.16}
\]
where \mathcal{Y} \triangleq L_1\otimes(P^{-1}A + A^T P^{-1}) - 2\beta L_1^2\otimes P^{-1}BB^T P^{-1}.
As shown in the proof of Theorem 41, by selecting β sufficiently large such that β ≥ γ and βλ_i ≥ 1, i = 2, \cdots, N, we get from (6.20) that
\[
\dot V_2 \le \frac{1}{2}\xi^T\mathcal{Y}\xi, \tag{6.21}
\]
and from (6.13) that \mathcal{Y} < 0. Then, it follows that \dot V_2 \le 0, implying that V_2(t) is nonincreasing. Therefore, in view of (6.16), we know that d_i and ξ are bounded. Since u_1 is bounded by Assumption 6.2, it follows from the first equation in (6.15) that \dot\xi is bounded. As V_2(t) is nonincreasing and bounded from below by zero, it has a finite limit V_2^\infty as t \to \infty. Integrating (6.21), we have \int_0^\infty -\frac{1}{2}\xi^T\mathcal{Y}\xi\,d\tau \le V_2(0) - V_2^\infty. Thus, \int_0^\infty \frac{1}{2}\xi^T\mathcal{Y}\xi\,d\tau exists and is finite. Because ξ and \dot\xi are bounded, it is easy to see that \frac{d}{dt}(\xi^T\mathcal{Y}\xi) is also bounded, which in turn guarantees the uniform continuity of \xi^T\mathcal{Y}\xi. Therefore, by Lemma 13 (Barbalat's Lemma), we get that \xi^T\mathcal{Y}\xi \to 0 as t \to \infty, i.e., ξ \to 0 as t \to \infty. By noting that Γ ≥ 0 and τ_i > 0, it follows from (6.15) that d_i is monotonically increasing. Thus, the boundedness of d_i implies that each d_i converges to some finite value.
Remark 47 Compared to the static controller (6.3), the adaptive controller (6.14) requires neither the smallest eigenvalue λ_min(L_1) of L_1 nor the upper bound γ of u_1, as long as u_1 is bounded. On the other hand, the coupling gains need to be dynamically updated in (6.14), implying that the adaptive controller (6.14) is more complex than the static controller (6.3). The term d_i g(\sum_{j=1}^N a_{ij}K(x_i - x_j)) in (6.14) is used to tackle the effect of the leader's nonzero control input u_1 on consensus. For the special case where u_1 = 0, we can accordingly remove the terms d_i g(\sum_{j=1}^N a_{ij}K(x_i - x_j)) and \tau_i\|\sum_{j=1}^N a_{ij}K(x_i - x_j)\| from (6.14), which will reduce to the continuous protocol (5.45) in the previous chapter.
approach is to use the boundary layer technique [187, 43] to give a continuous
approximation of the discontinuous function g(·).
Using the boundary layer technique, we propose a distributed continuous
static controller as
\[
u_i = c_1 K\sum_{j=1}^N a_{ij}(x_i - x_j) + c_2\,\hat g_i\Big(K\sum_{j=1}^N a_{ij}(x_i - x_j)\Big), \quad i = 2, \cdots, N, \tag{6.22}
\]
where the nonlinear functions \hat g_i(\cdot) are defined such that for w ∈ R^n,
\[
\hat g_i(w) = \begin{cases} \dfrac{w}{\|w\|} & \text{if } \|w\| > \kappa_i\\[4pt] \dfrac{w}{\kappa_i} & \text{if } \|w\| \le \kappa_i\end{cases}\tag{6.23}
\]
with κ_i being small positive scalars denoting the widths of the boundary layers, and the rest of the variables are the same as in (6.3).
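A minimal sketch of (6.4) and (6.23) (the function names are ours): \hat g_i coincides with g outside the boundary layer and replaces the discontinuity at the origin with a linear segment.

```python
import numpy as np

def g(w):
    """Discontinuous unit-vector function g(.) of (6.4)."""
    n = np.linalg.norm(w)
    return w / n if n != 0 else np.zeros_like(w)

def g_hat(w, kappa):
    """Boundary-layer approximation (6.23): linear inside ||w|| <= kappa."""
    n = np.linalg.norm(w)
    return w / n if n > kappa else w / kappa

kappa = 0.1
w_far = np.array([3.0, 4.0])             # ||w|| = 5 > kappa: matches g exactly
assert np.allclose(g_hat(w_far, kappa), g(w_far))
w_near = np.array([0.03, 0.04])          # ||w|| = 0.05 <= kappa: linear segment
assert np.allclose(g_hat(w_near, kappa), w_near / kappa)
assert np.allclose(g_hat(np.zeros(2), kappa), 0)   # continuous at the origin
```

Shrinking κ_i tightens the residual set \mathcal{D}_1 in (6.26) but brings \hat g_i closer to the chattering-prone discontinuous g, which is the usual boundary-layer trade-off.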
Let the tracking error ξ be defined as in (6.5). From (6.1) and (6.22), we can obtain that ξ in this case satisfies (6.24), where
\[
\widehat G(\xi) \triangleq \begin{bmatrix} \hat g_2(K\sum_{j=2}^N L_{2j}\xi_j)\\ \vdots\\ \hat g_N(K\sum_{j=2}^N L_{Nj}\xi_j)\end{bmatrix}. \tag{6.25}
\]
The following theorem states the ultimate boundedness of the tracking error ξ.
Theorem 43 Assume that Assumptions 6.1 and 6.2 hold. Then, the tracking error ξ of (6.24) under the continuous controller (6.22), with c_1, c_2, and K chosen as in Theorem 41, is uniformly ultimately bounded and exponentially converges to the residual set
\[
\mathcal{D}_1 \triangleq \Big\{\xi : \|\xi\|^2 \le \frac{2\lambda_{\max}(P)\gamma}{\alpha\lambda_{\min}(L_1)}\sum_{i=2}^N \kappa_i\Big\}, \tag{6.26}
\]
where
\[
\alpha = \frac{-\lambda_{\max}(AP + PA^T - 2BB^T)}{\lambda_{\max}(P)}. \tag{6.27}
\]
Therefore, by analyzing the above three cases, we get that \dot V_1 satisfies (6.32) for all ξ ∈ R^{Nn}. Note that (6.32) can be rewritten as
\[
\begin{aligned}
\dot V_1 &\le -\alpha V_1 + \alpha V_1 + \frac{1}{2}\xi^T\mathcal{X}\xi + \gamma\sum_{i=2}^N\kappa_i\\
&= -\alpha V_1 + \frac{1}{2}\xi^T(\mathcal{X} + \alpha L_1\otimes P^{-1})\xi + \gamma\sum_{i=2}^N\kappa_i.
\end{aligned}\tag{6.33}
\]
\[
(L_1^{-\frac{1}{2}}\otimes P)(\mathcal{X} + \alpha L_1\otimes P^{-1})(L_1^{-\frac{1}{2}}\otimes P) \le I_{N-1}\otimes[AP + PA^T + \alpha P - 2BB^T] < 0.
\]
By using Lemma 16 (the Comparison lemma), we can obtain from (6.34) that
\[
V_1(\xi) \le \Big[V_1(\xi(0)) - \frac{\gamma\sum_{i=2}^N\kappa_i}{\alpha}\Big]e^{-\alpha t} + \frac{\gamma\sum_{i=2}^N\kappa_i}{\alpha}, \tag{6.35}
\]
which implies that ξ exponentially converges to the residual set \mathcal{D}_1 in (6.26) with a convergence rate not less than e^{-\alpha t}.
that the adaptive weights d_i(t) will slowly grow unbounded. This argument can also be easily verified by simulation results. To tackle this problem, we propose to use the so-called σ-modification technique [63, 183] to modify the discontinuous adaptive controller (6.14).
Using the boundary layer concept and the σ-modification technique, a distributed continuous adaptive controller is proposed as
\[
\begin{aligned}
u_i &= d_i K\sum_{j=1}^N a_{ij}(x_i - x_j) + d_i r_i\Big(K\sum_{j=1}^N a_{ij}(x_i - x_j)\Big),\\
\dot d_i &= \tau_i\Big[-\varphi_i d_i + \Big(\sum_{j=1}^N a_{ij}(x_i - x_j)^T\Big)\Gamma\Big(\sum_{j=1}^N a_{ij}(x_i - x_j)\Big)\\
&\qquad\quad + \Big\|K\sum_{j=1}^N a_{ij}(x_i - x_j)\Big\|\Big], \quad i = 2, \cdots, N,
\end{aligned}\tag{6.36}
\]
where \varphi_i are small positive constants, the nonlinear functions r_i(\cdot) are defined as follows: for w ∈ R^n,
\[
r_i(w) = \begin{cases} \dfrac{w}{\|w\|} & \text{if } d_i\|w\| > \kappa_i\\[4pt] \dfrac{d_i}{\kappa_i}w & \text{if } d_i\|w\| \le \kappa_i\end{cases}\tag{6.37}
\]
and the rest of the variables are defined as in (6.14).
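To see the effect of the leakage term −ϕ_i d_i in (6.36), consider the following scalar toy simulation (our own illustration, not the book's example): with a bounded, non-vanishing driving signal, the σ-modified gain stays bounded, whereas without the leakage term it would grow without bound.

```python
import numpy as np

# Scalar sketch of the sigma-modified adaptive law in (6.36):
#   d_dot = tau * (-phi * d + e^2 + |e|),
# where e(t) stands in for the bounded relative-state signal (an assumption).
tau, phi, dt = 1.0, 0.02, 1e-3
d = 1.0
for k in range(200_000):                 # 200 s of forward-Euler integration
    e = 0.5 * np.sin(0.01 * k * dt)      # bounded, non-vanishing "error"
    d += dt * tau * (-phi * d + e**2 + abs(e))
# The leakage caps d near sup(e^2 + |e|)/phi; dropping -phi*d would make
# d integrate a nonnegative signal forever and diverge.
assert 0 < d <= (0.25 + 0.5) / phi + 1.0
```

This is exactly the boundedness mechanism invoked in Theorem 44: the −ϕ_i d_i term trades exact convergence for uniform ultimate boundedness of the gains.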
Let the tracking error ξ and the coupling gains D(t) be as defined in (6.15). Then, it follows from (6.1) and (6.36) that ξ and D(t) satisfy the following dynamics:
\[
\begin{aligned}
\dot\xi &= (I_{N-1}\otimes A + DL_1\otimes BK)\xi + (D\otimes B)R(\xi) - (\mathbf{1}\otimes B)u_1,\\
\dot d_i &= \tau_i\Big[-\varphi_i d_i + \Big(\sum_{j=2}^N L_{ij}\xi_j^T\Big)\Gamma\Big(\sum_{j=2}^N L_{ij}\xi_j\Big) + \Big\|K\sum_{j=2}^N L_{ij}\xi_j\Big\|\Big], \quad i = 2, \cdots, N,
\end{aligned}\tag{6.38}
\]
where R(\xi) \triangleq \begin{bmatrix} r_2(K\sum_{j=2}^N L_{2j}\xi_j)\\ \vdots\\ r_N(K\sum_{j=2}^N L_{Nj}\xi_j)\end{bmatrix}.
The following theorem presents the ultimate boundedness of the states ξ and d_i of (6.38).
Theorem 44 Suppose that Assumptions 6.1 and 6.2 hold. Then, both the tracking error ξ and the coupling gains d_i, i = 2, \cdots, N, in (6.38), under the continuous adaptive controller (6.36) with K and Γ designed as in Theorem 42, are uniformly ultimately bounded. Moreover, the following two assertions hold.
i) For any \varphi_i, both ξ and d_i exponentially converge to the residual set
\[
\mathcal{D}_2 \triangleq \Big\{\xi, d_i : V_3 < \frac{1}{2\delta}\sum_{i=2}^N\Big(\beta^2\varphi_i + \frac{\kappa_i}{2}\Big)\Big\}, \tag{6.39}
\]
Distributed Tracking with a Leader of Possibly Nonzero Input 149
where
\[
V_3 = \frac{1}{2}\xi^T(L_1\otimes P^{-1})\xi + \sum_{i=2}^N \frac{\tilde d_i^2}{2\tau_i}, \tag{6.40}
\]
ii) ξ exponentially converges to the residual set
\[
\mathcal{D}_3 \triangleq \Big\{\xi : \|\xi\|^2 \le \frac{\lambda_{\max}(P)}{\lambda_{\min}(L_1)(\alpha-\varrho)}\sum_{i=2}^N\Big(\beta^2\varphi_i + \frac{1}{2}\kappa_i\Big)\Big\}. \tag{6.41}
\]
\[
\begin{aligned}
\dot V_3 &= \xi^T(L_1\otimes P^{-1})\dot\xi + \sum_{i=2}^N \frac{\tilde d_i}{\tau_i}\dot{\tilde d}_i\\
&= \xi^T(L_1\otimes P^{-1}A + L_1 DL_1\otimes P^{-1}BK)\xi\\
&\quad + \xi^T(L_1 D\otimes P^{-1}B)R(\xi) - \xi^T(L_1\mathbf{1}\otimes P^{-1}B)u_1\\
&\quad + \sum_{i=2}^N \tilde d_i\Big[-\varphi_i(\tilde d_i+\beta) + \Big(\sum_{j=2}^N L_{ij}\xi_j^T\Big)\Gamma\Big(\sum_{j=2}^N L_{ij}\xi_j\Big) + \Big\|K\sum_{j=2}^N L_{ij}\xi_j\Big\|\Big],
\end{aligned}\tag{6.42}
\]
where D = \operatorname{diag}(\tilde d_2+\beta, \cdots, \tilde d_N+\beta).
Next, consider the following three cases:
i) d_i\|K\sum_{j=2}^N L_{ij}\xi_j\| > \kappa_i, i = 2, \cdots, N.
In this case, we can get from (6.37) that
\[
\xi^T(L_1 D\otimes P^{-1}B)R(\xi) = -\sum_{i=2}^N(\tilde d_i+\beta)\Big\|B^T P^{-1}\sum_{j=2}^N L_{ij}\xi_j\Big\|. \tag{6.43}
\]
where we have used the fact that -\tilde d_i^2 - \tilde d_i\beta \le -\frac{1}{2}\tilde d_i^2 + \frac{1}{2}\beta^2 to get the first inequality and \mathcal{Z} \triangleq L_1\otimes(P^{-1}A + A^T P^{-1}) - 2\beta L_1^2\otimes P^{-1}BB^T P^{-1}.
ii) d_i\|K\sum_{j=2}^N L_{ij}\xi_j\| \le \kappa_i, i = 2, \cdots, N.
In this case, we can get from (6.37) that
\[
\xi^T(L_1 D\otimes P^{-1}B)R(\xi) = -\sum_{i=2}^N \frac{(\tilde d_i+\beta)^2}{\kappa_i}\Big\|B^T P^{-1}\sum_{j=2}^N L_{ij}\xi_j\Big\|^2. \tag{6.44}
\]
Note that to get the last inequality in (6.45), we have used the following fact:
\[
-\frac{d_i^2}{\kappa_i}\Big\|B^T P^{-1}\sum_{j=2}^N L_{ij}\xi_j\Big\|^2 + d_i\Big\|B^T P^{-1}\sum_{j=2}^N L_{ij}\xi_j\Big\| \le \frac{1}{4}\kappa_i,
\]
for d_i\|K\sum_{j=2}^N L_{ij}\xi_j\| \le \kappa_i, i = 2, \cdots, N.
iii) ξ satisfies neither Case i) nor Case ii).
Without loss of generality, assume that d_i\|K\sum_{j=2}^N L_{ij}\xi_j\| \le \kappa_i, i = 2, \cdots, l, and d_i\|K\sum_{j=2}^N L_{ij}\xi_j\| > \kappa_i, i = l+1, \cdots, N, where 3 \le l \le N-1. By following similar steps in the two cases above, it is easy to get that
\[
\dot V_3 \le \frac{1}{2}\xi^T\mathcal{Z}\xi + \frac{1}{2}\sum_{i=2}^N \varphi_i(\beta^2-\tilde d_i^2) + \frac{1}{4}\sum_{i=2}^{N-l}\kappa_i.
\]
Therefore, based on the above three cases, we can get that \dot V_3 satisfies (6.45) for all ξ ∈ R^{Nn}. Note that (6.45) can be rewritten into
\[
\begin{aligned}
\dot V_3 &\le -\delta V_3 + \delta V_3 + \frac{1}{2}\xi^T\mathcal{Z}\xi + \frac{1}{2}\sum_{i=2}^N\varphi_i(\beta^2-\tilde d_i^2) + \frac{1}{4}\sum_{i=2}^N\kappa_i\\
&= -\delta V_3 + \frac{1}{2}\xi^T(\mathcal{Z} + \delta L_1\otimes P^{-1})\xi - \frac{1}{2}\sum_{i=2}^N\Big(\varphi_i - \frac{\delta}{\tau_i}\Big)\tilde d_i^2\\
&\quad + \frac{1}{2}\sum_{i=2}^N\Big(\beta^2\varphi_i + \frac{1}{2}\kappa_i\Big).
\end{aligned}\tag{6.46}
\]
Because βλ_min(L_1) ≥ 1 and 0 < δ ≤ α, by following similar steps in the proof of Theorem 43, we can show that \xi^T(\mathcal{Z} + \delta L_1\otimes P^{-1})\xi \le 0. Further, by noting that \delta \le \min_{i=2,\cdots,N}\varphi_i\tau_i, it follows from (6.46) that
\[
\dot V_3 \le -\delta V_3 + \frac{1}{2}\sum_{i=2}^N\Big(\beta^2\varphi_i + \frac{1}{2}\kappa_i\Big), \tag{6.47}
\]
Obviously, it follows from (6.49) that \dot V_3 \le -\varrho V_3 if \|\xi\|^2 > \frac{\lambda_{\max}(P)}{\lambda_{\min}(L_1)(\alpha-\varrho)}\sum_{i=2}^N(\beta^2\varphi_i + \frac{1}{2}\kappa_i). Then, by noting V_3 \ge \frac{\lambda_{\min}(L_1)}{2\lambda_{\max}(P)}\|\xi\|^2, we can get that if \varrho \le \alpha then ξ exponentially converges to the residual set \mathcal{D}_3 in (6.41) with a convergence rate faster than e^{-\varrho t}.
where \hat v_i ∈ R^n is the estimate of the state of the i-th follower, \hat v_1 ∈ R^n denotes the estimate of the state of the leader, given by \dot{\hat v}_1 = A\hat v_1 + Bu_1 + \hat L(C\hat v_1 - y_1), \varphi_i are small positive constants, \tau_i are positive scalars, d_i denotes the time-varying coupling gain associated with the i-th follower, \hat L ∈ R^{n×q}, \hat F ∈ R^{p×n}, and \hat\Gamma ∈ R^{n×n} are the feedback gain matrices, and r_i(\cdot) are defined as in (6.37). Note that the term \sum_{j=1}^N a_{ij}(\hat v_i - \hat v_j) in (6.50) implies that the agents need to get the estimates of the states of their neighbors, which can be transmitted via the communication graph G.
Let \zeta_i = \begin{bmatrix} x_i - x_1\\ \hat v_i - \hat v_1\end{bmatrix}, i = 2, \cdots, N, and \zeta = [\zeta_2^T, \cdots, \zeta_N^T]^T. Using (6.50) for (6.1), we obtain the closed-loop network dynamics as given in (6.51).
Theorem 45 Suppose that Assumptions 6.1 and 6.2 hold. Then, both the tracking error ζ and the coupling gains d_i, i = 2, \cdots, N, in (6.51) are uniformly ultimately bounded under the distributed continuous adaptive protocol (6.50) with \hat L satisfying that A + \hat LC is Hurwitz, \hat F = -B^T P^{-1}, and \hat\Gamma = P^{-1}BB^T P^{-1}, where P > 0 is a solution to the LMI (6.6).
\[
V_4 = \frac{1}{2}\zeta^T(L_1\otimes\mathcal{P})\zeta + \sum_{i=2}^N \frac{\tilde d_i^2}{2\tau_i},
\]
where \mathcal{P} \triangleq \begin{bmatrix} \vartheta\widehat P & -\vartheta\widehat P\\ -\vartheta\widehat P & \vartheta\widehat P + P^{-1}\end{bmatrix}, and \widehat P > 0 satisfies (A + \hat LC)\widehat P + \widehat P(A + \hat LC)^T < 0,
\[
\begin{aligned}
\dot V_4 &= \hat\zeta^T[(L_1\otimes\mathcal{P}\mathcal{S} + L_1 DL_1\otimes\mathcal{P}\mathcal{F})\hat\zeta - (L_1\mathbf{1}\otimes\mathcal{P}\mathcal{B})u_1]\\
&\quad + \hat\zeta^T(L_1 D\otimes\mathcal{P}\mathcal{B})R(\hat\zeta) + \sum_{i=2}^N \tilde d_i\Big[-\varphi_i(\tilde d_i+\beta)\\
&\qquad + \Big(\sum_{j=2}^N L_{ij}\hat\zeta_j^T\Big)\mathcal{I}\Big(\sum_{j=2}^N L_{ij}\hat\zeta_j\Big) + \Big\|\mathcal{O}\sum_{j=2}^N L_{ij}\hat\zeta_j\Big\|\Big],
\end{aligned}\tag{6.52}
\]
Next, consider the following three cases.
i) d_i\|\mathcal{O}\sum_{j=2}^N L_{ij}\hat\zeta_j\| > \kappa_i, i = 2, \cdots, N.
By observing that \mathcal{P}\mathcal{B} = -\mathcal{O}^T, it is not difficult to get from the definitions of r_i(\cdot) and R(\cdot) that
\[
\hat\zeta^T(L_1 D\otimes\mathcal{P}\mathcal{B})R(\hat\zeta) = -\sum_{i=2}^N(\tilde d_i+\beta)\Big\|\mathcal{B}^T\mathcal{P}\sum_{j=2}^N L_{ij}\hat\zeta_j\Big\|. \tag{6.55}
\]
By noting that \mathcal{P}\mathcal{F} = -\mathcal{I} and substituting (6.53), (6.54), and (6.55) into (6.52), we can get that
\[
\begin{aligned}
\dot V_4 &\le Z(\hat\zeta) - \sum_{i=2}^N \varphi_i(\tilde d_i^2 + \beta\tilde d_i) - (\beta-\gamma)\sum_{i=2}^N\Big\|\mathcal{B}^T\mathcal{P}\sum_{j=2}^N L_{ij}\hat\zeta_j\Big\|\\
&\le Z(\hat\zeta) + \frac{1}{2}\sum_{i=2}^N \varphi_i(\beta^2 - \tilde d_i^2),
\end{aligned}
\]
where
\[
Z(\hat\zeta) \triangleq \frac{1}{2}\hat\zeta^T[L_1\otimes(\mathcal{P}\mathcal{S} + \mathcal{S}^T\mathcal{P}) - 2\beta L_1^2\otimes\mathcal{I}]\hat\zeta. \tag{6.56}
\]
ii) d_i\|\mathcal{O}\sum_{j=2}^N L_{ij}\hat\zeta_j\| \le \kappa_i, i = 2, \cdots, N.
From the definitions of r_i(\cdot) and R(\cdot), in this case we get that
\[
\hat\zeta^T(L_1 D\otimes\mathcal{P}\mathcal{B})R(\hat\zeta) = -\sum_{i=2}^N \frac{(\tilde d_i+\beta)^2}{\kappa_i}\Big\|\mathcal{B}^T\mathcal{P}\sum_{j=2}^N L_{ij}\hat\zeta_j\Big\|^2. \tag{6.57}
\]
Note that to get the last inequality in (6.58), we have used the fact that, for d_i\|\mathcal{O}\sum_{j=2}^N L_{ij}\hat\zeta_j\| \le \kappa_i,
\[
-\frac{d_i^2}{\kappa_i}\Big\|\mathcal{B}^T\mathcal{P}\sum_{j=2}^N L_{ij}\hat\zeta_j\Big\|^2 + d_i\Big\|\mathcal{B}^T\mathcal{P}\sum_{j=2}^N L_{ij}\hat\zeta_j\Big\| \le \frac{1}{4}\kappa_i.
\]
Then, by following the steps in the two cases above, it is not difficult to get that in this case
\[
\dot V_4 \le Z(\hat\zeta) - \sum_{i=2}^N \varphi_i(\tilde d_i^2 + \beta\tilde d_i) + \frac{1}{4}\sum_{i=2}^{N-l}\kappa_i.
\]
Analyzing the above three cases, we get that \dot V_4 satisfies (6.58) for all ζ ∈ R^{2Nn}. Note that
\[
\operatorname{diag}(I, P)[\mathcal{P}\mathcal{S} + \mathcal{S}^T\mathcal{P} - 2\beta\lambda_{\min}(L_1)\mathcal{I}]\operatorname{diag}(I, P) = \begin{bmatrix} \Xi & -C^T\hat L^T\\ -\hat LC & AP + PA^T - 2\beta\lambda_{\min}(L_1)BB^T\end{bmatrix}, \tag{6.59}
\]
For the special case where u_1 = 0, the adaptive consensus protocols (5.25) and (5.26) based on the relative output information of neighboring agents in the previous chapter can also be extended to solve the distributed tracking problem. For instance, a node-based adaptive protocol can be proposed as
\[
\begin{aligned}
\dot{\tilde v}_i &= (A + BF)\tilde v_i + \tilde d_i L\sum_{j=1}^N a_{ij}[C(\tilde v_i - \tilde v_j) - (y_i - y_j)],\\
\dot{\tilde d}_i &= \epsilon_i\Big(\sum_{j=1}^N a_{ij}\begin{bmatrix} y_i - y_j\\ C(\tilde v_i - \tilde v_j)\end{bmatrix}\Big)^T\Gamma\Big(\sum_{j=1}^N a_{ij}\begin{bmatrix} y_i - y_j\\ C(\tilde v_i - \tilde v_j)\end{bmatrix}\Big),\\
u_i &= F\tilde v_i, \quad i = 2, \cdots, N,
\end{aligned}\tag{6.61}
\]
Remark 53 Comparing the protocols (6.60) and (6.61) for solving the distributed tracking problem with u_1 = 0, the latter is preferable for two reasons. First, the protocol (6.61) generally requires a lighter communication load (i.e., a lower dimension of information to be transmitted) than (6.60). Note that the protocol (6.61) exchanges y_i and C\tilde v_i between neighboring agents while (6.60) exchanges \hat v_i. The sum of the dimensions of y_i and C\tilde v_i is generally lower than the dimension of \hat v_i, e.g., for the single-output case with n > 2. Second, the protocol (6.60) requires the absolute measurements of the agents' outputs, which might not be available in some circumstances, e.g., the case where the agents are equipped with only ultrasonic range sensors. On the other hand, an advantage of (6.60) is that it can be modified (specifically, into the protocol (6.50)) to solve the distributed tracking problem for the case where the leader's control input is nonzero and time varying. In contrast, it is not easy to extend (6.61) to achieve consensus for the case of a leader with nonzero input.
[Figure: the communication graph, with leader node 1 and follower nodes 2–7.]
[Figure: (a) the tracking errors x_i − x_1 and (b) the coupling gains d_i, versus t.]
6.6 Notes
The results in this chapter are mainly based on [92, 95]. In the literature, there exist other works on distributed tracking control of multi-agent systems with a leader of nonzero control input, e.g., [21, 109]. Specifically, in [21], discontinuous controllers were studied for multi-agent systems with first- and second-order integrator agent dynamics in the absence of velocity or acceleration measurements. The authors in [109] addressed a distributed coordinated tracking problem for multiple Euler-Lagrange systems with a dynamic leader. It is worthwhile to mention that the distributed tracking controllers proposed in [21, 109] are discontinuous, which will result in the undesirable chattering phenomenon in real applications. In this chapter, to deal with the effect of the leader's nonzero control input on consensus, we actually borrow ideas from the sliding mode control literature and treat the leader's control input as an external disturbance satisfying the matching condition. Another possible approach is to design a learning algorithm using the relative state information to estimate the leader's nonzero control input and then add the estimate into the consensus protocols. Related works along this line include [125, 192].
7
Containment Control of Linear Multi-Agent
Systems with Multiple Leaders
CONTENTS
7.1 Containment of Continuous-Time Multi-Agent Systems with Leaders
of Zero Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
7.1.1 Dynamic Containment Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 161
7.1.2 Static Containment Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
7.2 Containment Control of Discrete-Time Multi-Agent Systems with
Leaders of Zero Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
7.2.1 Dynamic Containment Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . 165
7.2.2 Static Containment Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
7.2.3 Simulation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
7.3 Containment of Continuous-Time Multi-Agent Systems with Leaders
of Nonzero Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
7.3.1 Distributed Continuous Static Controllers . . . . . . . . . . . . . . . . . . 171
7.3.2 Adaptive Continuous Containment Controllers . . . . . . . . . . . . . 176
7.3.3 Simulation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
159
linear dynamics. Both the cases where the leaders have zero control inputs
and have nonzero unknown inputs are considered. For the simple case where
the leaders are of zero control inputs, the containment control problems for
both continuous-time and discrete-time multi-agent systems with general lin-
ear dynamics under directed communication topologies are considered in Sec-
tions 7.1 and 7.2, respectively. Distributed dynamic containment controllers
based on the relative outputs of neighboring agents are proposed for both the
continuous-time and discrete-time cases. In the continuous-time case, a multi-
step algorithm is presented to construct a dynamic containment controller,
under which the states of the followers will asymptotically converge to the
convex hull formed by those of the leaders, if for each follower there exists at
least one leader that has a directed path to that follower. In the discrete-time
case, in light of the modified algebraic Riccati equation, an algorithm is given
to design a dynamic containment controller that solves the containment con-
trol problem. Furthermore, as special cases of the dynamic controllers, static
containment controllers relying on the relative states of neighboring agents
are also discussed for both the continuous-time and discrete-time cases.
The containment control problem for the general case where the leaders
have nonzero, bounded, and time-varying control inputs is further investigated
in Section 7.3. Based on the relative states of neighboring agents, distributed
continuous static and adaptive controllers are designed, under which the con-
tainment error is uniformly ultimately bounded and the upper bound of the
containment error can be made arbitrarily small, if the subgraph associated
with the followers is undirected and for each follower there exists at least one
leader that has a directed path to that follower. Extensions to the case where
only local output information is available are discussed. Based on the rela-
tive estimates of the states of neighboring agents, distributed observer-based
containment controllers are proposed.
agent has no neighbor. An agent is called a follower if the agent has at least
one neighbor. Without loss of generality, we assume that the agents indexed
by 1, · · · , M , are followers, while the agents indexed by M + 1, · · · , N , are
leaders whose control inputs are set to be zero. We use R , {M + 1, · · · , N }
and F , {1, · · · , M } to denote, respectively, the leader set and the follower
set. In this section, we consider the case where the leaders have zero control
inputs, i.e., ui = 0, i ∈ R. The communication topology among the N agents
is represented by a directed graph G. Note that here the leaders do not receive
any information.
Assumption 7.1 Suppose that for each follower, there exists at least one
leader that has a directed path to that follower.
where c and aij are defined as in (7.2), and K ∈ Rp×n is the feedback gain
matrix to be designed. Let xf = [xT1 , · · · , xTM ]T and xl = [xTM +1 , · · · , xTN ]T .
Using (7.13) for (7.1) gives the closed-loop network dynamics as
ẋf = (IM ⊗ A + cL1 ⊗ BK)xf + c(L2 ⊗ BK)xl ,
ẋl = (IN −M ⊗ A)xl ,
where L1 and L2 are defined in (7.3).
The following algorithm is given for designing the control parameters in
(7.13).
Algorithm 15 Under Assumption 7.1, the controller (7.13) can be construct-
ed as follows:
1) Choose the feedback gain matrix K = −B T P −1 , where P > 0 is a solution
to the following LMI:
AP + P AT − 2BB T < 0. (7.14)
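Feasibility of the LMI (7.14) can be checked numerically. The sketch below is not the book's code: the pair (A, B) is a hypothetical double-integrator example, and P is a hand-picked feasible solution (in practice P would come from an LMI/SDP solver such as MATLAB's LMI toolbox). The check also confirms that the resulting K = −B^T P^{-1} makes A + BK Hurwitz, which follows from (7.14) since (A + BK)P + P(A + BK)^T = AP + PA^T − 2BB^T < 0.

```python
import numpy as np

# Hypothetical double-integrator pair (A, B); not from the book's examples.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# A hand-picked feasible P for the LMI (7.14); in practice P is returned
# by an LMI/SDP solver.
P = np.array([[2.0, -1.0], [-1.0, 1.0]])

lmi = A @ P + P @ A.T - 2 * B @ B.T
assert np.all(np.linalg.eigvalsh(P) > 0)      # P > 0
assert np.all(np.linalg.eigvalsh(lmi) < 0)    # AP + PA^T - 2BB^T < 0

K = -B.T @ np.linalg.inv(P)                   # feedback gain of step 1)
# (A + BK)P + P(A + BK)^T = lmi < 0, so A + BK is Hurwitz:
assert np.all(np.linalg.eigvals(A + B @ K).real < 0)
```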
= −R < 0. (7.25)
Then, (7.25) implies that A + (1 − λ̂i )LC, i = 1, · · · , M , are Schur stable.
Therefore, considering (7.23) and (7.24), we obtain that (7.22) is asymptoti-
cally stable, i.e., the containment control problem is solved.
where dij is defined as in (7.16) and K ∈ Rp×n is the feedback gain matrix
to be designed.
Let xf = [xT1 , · · · , xTM ]T and xl = [xTM +1 , · · · , xTN ]T . Using (7.26) for (7.1)
gives the closed-loop network dynamics as
xf^+ = [IM ⊗ A + (IM − D1) ⊗ BK]xf − (D2 ⊗ BK)xl,
xl^+ = (IN−M ⊗ A)xl,
[Figure: the communication graph of the nine vehicles, nodes 1–9.]
where xi1 and xi2 are, respectively, the positions of the i-th vehicle along the
x and y coordinates, xi3 is the orientation of the i-th vehicle. The objective is
to design a static containment controller in the form of (7.13) by using local
state information of neighboring vehicles.
Solving the LMI (7.14) by using the LMI toolbox of MATLAB gives the
feedback gain matrix of (7.13) as
K = [ −0.0089   0.0068   0.0389  −0.0329   0.0180   0.0538
       0.0068  −0.0089  −0.0389   0.0180  −0.0329  −0.0538 ].
FIGURE 7.2: The positions and orientations of the nine vehicles under (7.13).
The solid and dashdotted lines denote, respectively, the trajectories of the
leaders and the followers.
The matrix L1 defined in (7.3) is

L1 = [  3   0   0  −1  −1  −1
       −1   1   0   0   0   0
       −1  −1   2   0   0   0
       −1   0   0   2   0   0
        0   0   0  −1   2   0
        0   0   0   0  −1   2 ],
whose eigenvalues are 0.8213, 1, 2, 2.3329 ± ι0.6708, 3.5129. By Algorithm
15, we choose the coupling gain c ≥ 1.2176. The positions and orientations of
the nine vehicles under the controller (7.13) with K as above and c = 2 are
depicted in FIGURE 7.2, from which it can be observed that the containment
control problem is indeed solved.
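The eigenvalue computation above can be reproduced numerically; the threshold 1.2176 in Algorithm 15 is exactly 1/0.8213, the reciprocal of the smallest real part among the eigenvalues of L1:

```python
import numpy as np

# L1 from (7.3) for the nine-vehicle example (followers 1-6).
L1 = np.array([[ 3,  0,  0, -1, -1, -1],
               [-1,  1,  0,  0,  0,  0],
               [-1, -1,  2,  0,  0,  0],
               [-1,  0,  0,  2,  0,  0],
               [ 0,  0,  0, -1,  2,  0],
               [ 0,  0,  0,  0, -1,  2]], dtype=float)

eigs = np.linalg.eigvals(L1)
min_re = min(eigs.real)      # smallest real part, approximately 0.8213
c_min = 1.0 / min_re         # coupling-gain threshold, approximately 1.2176
assert abs(min_re - 0.8213) < 2e-3
assert abs(c_min - 1.2176) < 2e-3
```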
Lemma 32 ([111]) Under Assumption 7.3, all the eigenvalues of L1 are positive, each entry of −L1^{-1}L2 is nonnegative, and each row of −L1^{-1}L2 has a sum equal to one.
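Lemma 32 is easy to verify numerically on a small example. The five-agent graph below (three followers in an undirected chain, two leaders, each with an arc to one follower) is our own illustration, not one of the book's examples:

```python
import numpy as np

# Hypothetical graph: followers {1,2,3}, leaders {4,5}; follower edges
# 1-2, 2-3 (undirected, as in Assumption 7.3) and leader arcs 4->1, 5->3.
# The Laplacian is partitioned as L = [[L1, L2], [0, 0]].
L1 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]])
L2 = np.array([[-1.0,  0.0],
               [ 0.0,  0.0],
               [ 0.0, -1.0]])

W = -np.linalg.inv(L1) @ L2
assert np.all(np.linalg.eigvals(L1).real > 0)   # eigenvalues of L1 positive
assert np.all(W >= -1e-12)                      # entries of -L1^{-1}L2 nonnegative
assert np.allclose(W.sum(axis=1), 1.0)          # each row sums to one
```

The last two properties are what make x_f = −(L1^{-1}L2 ⊗ I_n)x_l a point in the convex hull of the leaders' states.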
in (7.29) is introduced to deal with the effect of the nonzero ui, i ∈ R. For the special case where ui = 0, i ∈ R, (7.29) can be reduced to (7.13) by removing the term c2 gi(K Σ_{j=1}^{N} aij(xi − xj)).
Remark 59 Note that the nonlinear functions gi(·) in (7.30) are continuous, and are in fact continuous approximations, via the boundary layer approach [26, 71], of the discontinuous function

ĝ(w) = { w/‖w‖  if ‖w‖ ≠ 0,
         0      if ‖w‖ = 0.      (7.31)
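The boundary-layer idea can be illustrated in a few lines. The exact form of gi in (7.30) is given earlier in the chapter; the smoothing below is a common variant (linear inside a layer of assumed width κ, identical to ĝ outside it) and is only a sketch:

```python
import numpy as np

KAPPA = 0.1  # boundary-layer width (assumed value)

def g_hat(w):
    """Discontinuous unit-vector function (7.31)."""
    n = np.linalg.norm(w)
    return w / n if n != 0 else np.zeros_like(w)

def g_cont(w, kappa=KAPPA):
    """A common continuous boundary-layer approximation of g_hat:
    equal to g_hat outside the layer, linear inside it.
    (The book's exact g_i in (7.30) may differ in form.)"""
    n = np.linalg.norm(w)
    return w / n if n > kappa else w / kappa

w_out = np.array([0.3, 0.4])               # norm 0.5 > kappa: no smoothing
assert np.allclose(g_cont(w_out), g_hat(w_out))
w_in = np.array([0.03, 0.04])              # norm 0.05 <= kappa: inside layer
assert np.linalg.norm(g_cont(w_in)) <= 1.0  # bounded and continuous at 0
```

As κ → 0 the smoothed function approaches ĝ, which is why small κi shrink the residual set at the price of approaching chattering.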
ξ ≜ xf + (L1^{-1}L2 ⊗ In)xl,      (7.33)

where ξ = [ξ1^T, · · · , ξM^T]^T. From (7.33), it is easy to see that ξ = 0 if and only if xf = −(L1^{-1}L2 ⊗ In)xl. In virtue of Lemma 32, we can get that the containment control problem of the agents in (7.1) under the controller (7.29) is solved if ξ converges to zero. Hereafter, we refer to ξ as the containment error for (7.1) under (7.29). By (7.33) and (7.32), it is not difficult to obtain that ξ satisfies the following dynamics:
where

Ĝ(ξ) ≜ [ g1(K Σ_{j=1}^{M} L1j ξj); · · · ; gM(K Σ_{j=1}^{M} LMj ξj) ],      (7.35)
Containment Control of Multi-Agent Systems with Multiple Leaders 173
with Lij denoting the (i, j)-th entry of L1. Actually, Ĝ(ξ) = G(x), which follows from the fact that Σ_{j=1}^{N} aij(xi − xj) = Σ_{j=1}^{M} Lij ξj, i = 1, · · · , M.
The following theorem states the ultimate boundedness of the containment
error ξ.
Theorem 48 Assume that Assumptions 7.2 and 7.3 hold. The parameters in the containment controller (7.29) are designed as c1 ≥ 1/λmin(L1), c2 ≥ γmax, and K = −B^T P^{-1}, where γmax ≜ max_{i∈R} γi and P > 0 is a solution to the LMI (7.14). Then, the containment error ξ of (7.34) is uniformly ultimately bounded and exponentially converges to the residual set

D1 ≜ {ξ : ‖ξ‖² ≤ (2λmax(P)γmax)/(αλmin(L1)) Σ_{i=1}^{M} κi},      (7.36)

where

α = −λmax(AP + PA^T − 2BB^T)/λmax(P).      (7.37)
By Assumption 7.2, we have

ξ^T(L2 ⊗ P^{-1}B)ul = ξ^T(L1 ⊗ In)(L1^{-1}L2 ⊗ P^{-1}B)ul
  = −[Σ_{j=1}^{M} L1j ξj^T, · · · , Σ_{j=1}^{M} LMj ξj^T] col(Σ_{k=1}^{N−M} b1k P^{-1}Buk, · · · , Σ_{k=1}^{N−M} bMk P^{-1}Buk)
  = −Σ_{i=1}^{M} (Σ_{j=1}^{M} Lij ξj^T) P^{-1}B (Σ_{k=1}^{N−M} bik uk)
  ≤ Σ_{i=1}^{M} ‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖ Σ_{k=1}^{N−M} bik ‖uk‖
  ≤ γmax Σ_{i=1}^{M} ‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖.      (7.41)
Next, consider the following three cases:
i) ‖K Σ_{j=1}^{M} Lij ξj‖ > κi, i.e., ‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖ > κi, i = 1, · · · , M.
In this case, it follows from (7.30) and (7.35) that
ξ^T(L1 ⊗ P^{-1}B)Ĝ(ξ) = −Σ_{i=1}^{M} ‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖.      (7.42)
V̇1 ≤ (1/2)ξ^T X ξ − (c2 − γmax) Σ_{i=1}^{M} ‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖
   ≤ (1/2)ξ^T X ξ.
ii) ‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖ ≤ κi, i = 1, · · · , M.
From (7.41), we can obtain that
ξ^T(L2 ⊗ P^{-1}B)ul ≤ γmax Σ_{i=1}^{M} κi.      (7.43)
Therefore, by analyzing the above three cases, we get that V̇1 satisfies (7.45) for all ξ ∈ R^{Mn}. Note that (7.45) can be rewritten as

V̇1 ≤ −αV1 + αV1 + (1/2)ξ^T X ξ + γmax Σ_{i=1}^{M} κi
   = −αV1 + (1/2)ξ^T(X + αL1 ⊗ P^{-1})ξ + γmax Σ_{i=1}^{M} κi.      (7.46)
By using Lemma 16 (the Comparison lemma), we can obtain from (7.47) that

V1(ξ) ≤ [V1(ξ(0)) − γmax Σ_{i=1}^{M} κi / α] e^{−αt} + γmax Σ_{i=1}^{M} κi / α,      (7.48)
which implies that ξ exponentially converges to the residual set D1 in (7.36)
with a convergence rate not less than e−αt .
176 Cooperative Control of Multi-Agent Systems
Remark 60 Note that the residual set D1 of the containment error ξ depends on the communication graph G, the number of followers, the upper bounds of the leaders' control inputs, and the widths κi of the boundary layers. By
choosing sufficiently small κi , the containment error ξ under the continuous
controller (7.29) can be arbitrarily small, which is acceptable in most cir-
cumstances. It can be shown that the containment error ξ can asymptotically
converge to zero under the controller (7.29) with gi (·) replaced by ĝ(·). The
advantage of the continuous controller (7.29) is that it avoids the undesirable chattering effect caused by (7.29) with gi(·) replaced by ĝ(·). The tradeoff is that the continuous controller (7.29) does not guarantee asymptotic stability.
where di(t) denotes the time-varying coupling gain (weight) associated with the i-th follower, ϕi are small positive constants, Γ ∈ R^{n×n} is the feedback gain matrix, τi are positive scalars, and the nonlinear functions ri(·) are defined such that for w ∈ R^n,

ri(w) = { w/‖w‖     if di‖w‖ > κi,
          (di/κi)w  if di‖w‖ ≤ κi,      (7.50)
and D(t) ≜ diag(d1(t), · · · , dM(t)). Then, it follows from (7.1) and (7.49) that the containment error ξ and the coupling gains D(t) satisfy the following dynamics:
ξ̇ = ẋf + (L1^{-1}L2 ⊗ In)ẋl
  = (IM ⊗ A + DL1 ⊗ BK)xf + (DL2 ⊗ BK)xl + (D ⊗ B)R(x) + (L1^{-1}L2 ⊗ A)xl + (L1^{-1}L2 ⊗ B)ul
  = (IM ⊗ A + DL1 ⊗ BK)ξ + (D ⊗ B)R(ξ) + (L1^{-1}L2 ⊗ B)ul,

ḋi = τi[−ϕi di + (Σ_{j=1}^{M} Lij ξj^T)Γ(Σ_{j=1}^{M} Lij ξj) + ‖K Σ_{j=1}^{M} Lij ξj‖], i = 1, · · · , M,      (7.51)

where R(ξ) = [ r1(K Σ_{j=1}^{M} L1j ξj); · · · ; rM(K Σ_{j=1}^{M} LMj ξj) ].
The following theorem shows the ultimate boundedness of the states ξ and
di of (7.51).
Theorem 49 Suppose that Assumptions 7.2 and 7.3 hold. The feedback gain
matrices of the adaptive controller (7.49) are designed as K = −B T P −1 and
Γ = P −1 BB T P −1 , where P > 0 is a solution to the LMI (7.14). Then, both
the containment error ξ and the coupling gains di , i = 1, · · · , M , in (7.51)
are uniformly ultimately bounded. Furthermore, if ϕi are chosen such that
̺ ≜ min_{i=1,··· ,M} ϕiτi < α, where α is defined as in (7.37), then ξ exponentially converges to the residual set

D2 ≜ {ξ : ‖ξ‖² ≤ (λmax(P)/(λmin(L1)(α − ̺))) Σ_{i=1}^{M} (β²ϕi + (1/2)κi)},      (7.52)
V2 = (1/2)ξ^T(L1 ⊗ P^{-1})ξ + Σ_{i=1}^{M} d̃i²/(2τi),
where we have used the facts that β ≥ max_{i∈R} γi and −d̃i² − d̃iβ ≤ −(1/2)d̃i² + (1/2)β² to get the last inequality, and Z = L1 ⊗ (P^{-1}A + A^T P^{-1}) − 2βL1² ⊗ P^{-1}BB^T P^{-1}.
For the case where di‖K Σ_{j=1}^{M} Lij ξj‖ ≤ κi, i = 1, · · · , M, we can get from (7.50) that

ξ^T(L1D ⊗ P^{-1}B)R(ξ) = −Σ_{i=1}^{M} ((d̃i + β)²/κi) ‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖².      (7.57)
Note that to get the last inequality in (7.58), we have used the following fact:

−((d̃i + β)²/κi)‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖² + (d̃i + β)‖B^T P^{-1} Σ_{j=1}^{M} Lij ξj‖ ≤ (1/4)κi,

for (d̃i + β)‖K Σ_{j=1}^{M} Lij ξj‖ ≤ κi, i = 1, · · · , M.
For the case where di‖K Σ_{j=1}^{M} Lij ξj‖ ≤ κi, i = 1, · · · , l, and di‖K Σ_{j=1}^{M} Lij ξj‖ > κi, i = l + 1, · · · , M, by following the steps in the two cases above, it is easy to get that

V̇2 ≤ (1/2)ξ^T Zξ + (1/2) Σ_{i=1}^{M} ϕi(−d̃i² + β²) + Σ_{i=1}^{l} (1/4)κi.
For the case where the relative state information is not available, the ideas
in Section 6.4 can be extended to construct dynamic containment controllers
using the relative estimates of the states of neighboring agents. The details are omitted here; interested readers may complete them as an exercise.
[Figure: the communication graph of the eight agents, nodes 1–8.]
FIGURE 7.4: The state trajectories of the agents. The solid and dashdotted
lines denote, respectively, the trajectories of the leaders and the followers.
Solving the LMI (7.14) by using the SeDuMi toolbox [158] gives the gain
matrices K and Γ in (7.49) as
K = −[ 1.6203  4.7567 ],   Γ = [ 2.6255   7.7075
                                 7.7075  22.6266 ].
[Figure: the adaptive coupling gains di of the followers.]
7.4 Notes
The materials of this chapter are mainly based on [88, 94]. For further results
on containment control of multi-agent systems, see [16, 18, 19, 36, 46, 67, 104,
110, 111]. In particular, a hybrid containment control law was proposed in [67]
to drive the followers into the convex hull spanned by the leaders. Distributed containment control problems were studied in [16, 18, 19] for a group of
first-order and second-order integrator agents under fixed and switching di-
rected communication topologies. The containment control was considered in
[104] for second-order multi-agent systems with random switching topologies.
A hybrid model predictive control scheme was proposed in [46] to solve the
containment and distributed sensing problems in leader/follower multi-agent
systems. The authors in [36, 110, 111] studied the containment control problem for a collection of Euler-Lagrange systems. In particular, [36] discussed the case with multiple stationary leaders, [111] studied the case of dynamic leaders with finite-time convergence, and [110] considered the case with
parametric uncertainties. In the above-mentioned works, the agent dynamics
are assumed to be single, double integrators, or second-order Euler-Lagrange
systems, which might be restrictive in some circumstances.
8
Distributed Robust Cooperative Control for
Multi-Agent Systems with Heterogeneous
Matching Uncertainties
CONTENTS
8.1 Distributed Robust Leaderless Consensus . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
8.1.1 Distributed Static Consensus Protocols . . . . . . . . . . . . . . . . . . . . . 185
8.1.2 Distributed Adaptive Consensus Protocols . . . . . . . . . . . . . . . . . 190
8.2 Distributed Robust Consensus with a Leader of Nonzero Control
Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
8.3 Robustness with Respect to Bounded Non-Matching Disturbances . 201
8.4 Distributed Robust Containment Control with Multiple Leaders . . . 205
8.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
input. The case where the communication graph is undirected and connected is considered in Section 8.1. By assuming that there exist time-varying upper bounds on the matching uncertainties, distributed continuous static consensus protocols based on the relative states of neighboring agents are designed, which include a nonlinear term to deal with the effect of the uncertainties. It is shown that the consensus error under this static protocol is uniformly
ultimately bounded and exponentially converges to a small residual set. Note
that the design of this static protocol relies on the eigenvalues of the Lapla-
cian matrix and the upper bounds of the matching uncertainties. In order
to remove these requirements, a fully distributed adaptive protocol is further
designed, under which the residual set of the consensus error is also given.
One desirable feature is that for both the static and adaptive protocols, the
residual sets of the consensus error can be made to be reasonably small by
properly selecting the design parameters of the protocols and the convergence
rates of the consensus error are explicitly given.
The robust consensus problem for the case where there exists a leader with
nonzero control input is investigated in Section 8.2. Here we study the general
case where the leader’s control input is not available to any follower, which
imposes additional difficulty. Distributed adaptive consensus protocols based
on the relative state information are proposed and designed to ensure that the
consensus error can converge to a small adjustable residual set.
The robustness issue of the proposed consensus protocols with respect to
external disturbances which do not satisfy the matching condition is discussed
in Section 8.3, where the proposed consensus protocols are redesigned to guar-
antee the boundedness of the consensus error and the adaptive gains in the
presence of bounded external disturbances. Extensions of the results to the
containment control problem with multiple leaders are discussed in Section
8.4.
Assumption 8.1 There exist functions Ĥi (xi , t) and ν̂i (t) such that
Hi (xi , t) = B Ĥi (xi , t) and νi (t) = B ν̂i (t), i = 1, · · · , N .
In the previous chapters, the agents are identical linear systems and free of uncertainties. In contrast, the agents (8.2) considered in this chapter are subject to nonidentical uncertainties, which make the resulting multi-agent systems essentially heterogeneous. The agents (8.2) reduce to the nominal linear agents of the previous chapters when the uncertainties fi(xi, t) do not exist. Note that the existence of the uncertainties associated with the agents makes the consensus problem quite challenging to solve, as detailed in the sequel.
Regarding the bounds of the uncertainties fi (xi , t), we introduce the fol-
lowing assumption.
Assumption 8.2 There exist continuous scalar valued functions ρi (xi , t), i =
1, · · · , N , such that kfi (xi , t)k ≤ ρi (xi , t), i = 1, · · · , N , for all t ≥ 0 and
xi ∈ Rn .
Let x = [xT1 , · · · , xTN ]T and ρ(x, t) = diag(ρ1 (x1 , t), · · · , ρN (xN , t)). Using
(8.3) for (8.2), we can obtain the closed-loop network dynamics as
where

α = −λmax(AP + PA^T − 2BB^T)/λmax(P).      (8.10)
V̇1 ≤ (1/2)ξ^T X ξ + Σ_{i=1}^{N} ρi(xi, t)‖B^T P^{-1} Σ_{j=1}^{N} Lij ξj‖
   ≤ (1/2)ξ^T X ξ + Σ_{i=1}^{N} κi.      (8.16)
V̇1 ≤ (1/2)ξ^T X ξ + Σ_{i=1}^{N−l} κi.
Therefore, by analyzing the above three cases, we get that V̇1 satisfies (8.16) for all ξ ∈ R^{Nn}. Note that (8.16) can be rewritten as

V̇1 ≤ −αV1 + αV1 + (1/2)ξ^T X ξ + Σ_{i=1}^{N} κi
   = −αV1 + (1/2)ξ^T(X + αL ⊗ P^{-1})ξ + Σ_{i=1}^{N} κi,      (8.18)
where α > 0.
Because G is connected, it follows from Lemma 1 that zero is a simple eigenvalue of L and all the other eigenvalues are positive. Let U = [1/√N, Y1] and U^T = [1^T/√N; Y2], with Y1 ∈ R^{N×(N−1)} and Y2 ∈ R^{(N−1)×N}, be unitary matrices such that U^T L U = Λ ≜ diag(0, λ2, · · · , λN), where λ2 ≤ · · · ≤ λN are the nonzero eigenvalues of L. Let ξ̄ ≜ [ξ̄1^T, · · · , ξ̄N^T]^T = (U^T ⊗ P^{-1})ξ. By the definitions
By using Lemma 16 (the Comparison lemma), we can obtain from (8.20) that
V1(ξ) ≤ [V1(ξ(0)) − Σ_{i=1}^{N} κi/α] e^{−αt} + Σ_{i=1}^{N} κi/α,      (8.21)
Note that the nonlinear functions gi(·) in (8.4) are actually continuous approximations, via the boundary layer concept [43, 71], of the discontinuous function

ĝ(w) = { w/‖w‖  if ‖w‖ ≠ 0,
         0      if ‖w‖ = 0.

The values of κi in (8.4) define the widths of the boundary layers. As κi → 0, the continuous functions gi(·) approach the discontinuous function ĝ(·).
by using the continuous protocol (8.3). The cost is that the protocol (8.3) does not guarantee asymptotic stability but rather uniform ultimate boundedness of the consensus error ξ. Note that the residual set D1 of ξ depends on the
smallest nonzero eigenvalue of L, the number of agents, the largest eigenvalue
of P , and the widths κi of the boundary layers. By choosing sufficiently small
κi , the consensus error ξ under the protocol (8.3) can converge to an arbitrarily
small neighborhood of zero, which is acceptable in most applications.
where

R(ξ) ≜ [ r1(K Σ_{j=1}^{N} L1j ξj); · · · ; rN(K Σ_{j=1}^{N} LNj ξj) ],      (8.26)

and the rest of the variables are defined as in (8.5).
To establish the ultimate boundedness of the states ξ, d̄i, and ēi of (8.25), we use the following Lyapunov function candidate:

V2 = (1/2)ξ^T(L ⊗ P^{-1})ξ + Σ_{i=1}^{N} d̃i²/(2τi) + Σ_{i=1}^{N} ẽi²/(2ǫi),      (8.27)
i) For any ϕi and ψi , the parameters ξ, d˜i , and ẽi exponentially converge to
the residual set
D2 ≜ {ξ, d̃i, ẽi : V2 < (1/(2δ)) Σ_{i=1}^{N} (β²ϕi + ei²ψi + (1/2)κi)},      (8.28)
Therefore, based on the above three cases, we can get that V̇2 satisfies (8.35) for all ξ ∈ R^{Nn}. Note that (8.35) can be rewritten as

V̇2 ≤ −δV2 + δV2 + (1/2)ξ^T Yξ − (1/2) Σ_{i=1}^{N} (ϕi d̃i² + ψi ẽi²) + (1/2) Σ_{i=1}^{N} (β²ϕi + ei²ψi + (1/2)κi)
   = −δV2 + (1/2)ξ^T(Y + δL ⊗ P^{-1})ξ − (1/2) Σ_{i=1}^{N} [(ϕi − δ/τi)d̃i² + (ψi − δ/ǫi)ẽi²] + (1/2) Σ_{i=1}^{N} (β²ϕi + ei²ψi + (1/2)κi).      (8.36)
Obviously, it follows from (8.39) that V̇2 ≤ −̺V2 if ‖ξ‖² > (λmax(P)/(λ2(α − ̺))) Σ_{i=1}^{N} (β²ϕi + ei²ψi + (1/2)κi). Then, by noting V2 ≥ (λ2/(2λmax(P)))‖ξ‖², we can get that if ̺ ≤ α then ξ exponentially converges to the residual set D3 in (8.29) with a convergence rate faster than e^{−̺t}.
Remark 65 Contrary to the static protocol (8.3), the design of the adaptive protocol (8.23) relies only on the agent dynamics and the local state information of each agent and its neighbors, requiring neither the minimal nonzero eigenvalue of L nor the upper bounds of the uncertainties fi(xi, t). Thus, the adaptive protocol (8.23) can be implemented by each agent in a fully distributed fashion without requiring any global information.
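The fully distributed flavor of such adaptive schemes can be illustrated with a deliberately simplified sketch: single-integrator agents (A = 0, B = 1) on an undirected path graph, each agent scaling its local disagreement by a gain that grows with that same disagreement. The graph, initial states, and the simplified adaptation law (no σ-modification, no Γ term) are our own assumptions, not the protocol (8.23) itself:

```python
import numpy as np

# Undirected path graph 1-2-3-4 (assumed example).
A_adj = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
L = np.diag(A_adj.sum(1)) - A_adj

x = np.array([0.0, 1.0, 2.0, 3.0])   # initial states (assumed)
d = np.ones(4)                        # adaptive coupling gains
dt, steps = 0.01, 4000                # Euler integration over t in [0, 40]

for _ in range(steps):
    z = L @ x                         # local disagreement sums
    x += dt * (-d * z)                # u_i = -d_i * sum_j a_ij (x_i - x_j)
    d += dt * (z * z)                 # gain grows with local disagreement only

assert x.max() - x.min() < 0.05       # states reach (near) consensus
assert np.all(d < 100.0)              # gains settle at finite values
```

No eigenvalue of L or global bound enters the update of any agent, which is the point of Remark 65.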
where the nonlinear functions r̄i(·) are defined such that

r̄i(w) = { (d̄i/‖w‖)w  if d̄i‖w‖ > κ,
          (d̄i²/κ)w   if d̄i‖w‖ ≤ κ,

and the rest of the variables are defined as in (8.23).
[Figure: the communication graph of the six agents, nodes 1–6.]
mÿi + ki yi = ui , i = 1, · · · , N, (8.41)
with

A = [ 0  1
      0  0 ],   B = [ 0
                      1/m ],   E = [ −1  0 ].
[Figure: the states xi1 and xi2 of the agents.]
[Figure: the adaptive gains di and ei of the agents.]
Assumption 8.4 The leader’s control input u0 is bounded, i.e., there exists
a positive scalar γ such that ku0 k ≤ γ.
and the state of the leader is available to only a subset of the followers. The communication topology among the N + 1 agents is represented by a directed graph Ĝ, which satisfies the following assumption.
Assumption 8.5 Ĝ contains a directed spanning tree with the leader as the root, and the subgraph associated with the N followers is undirected.
d̂˙i = τi[−ϕi d̂i + (Σ_{j=0}^{N} aij(xi − xj)^T)Γ(Σ_{j=0}^{N} aij(xi − xj)) + ‖K Σ_{j=0}^{N} aij(xi − xj)‖],
ê˙i = ǫi[−ψi êi + ‖K Σ_{j=0}^{N} aij(xi − xj)‖ ‖xi‖], i = 1, · · · , N,      (8.44)

where d̂i(t) and êi(t) are the adaptive gains associated with the i-th follower, aij is the (i, j)-th entry of the adjacency matrix associated with Ĝ, and the rest of the variables are defined as in (8.23).
Let x = [x1^T, · · · , xN^T]^T, ζi = xi − x0, i = 1, · · · , N, and ζ = [ζ1^T, · · · , ζN^T]^T. Then, it follows from (8.2), (8.43), and (8.44) that the closed-loop network dynamics can be obtained as
where D̂ = diag(d̂1, · · · , d̂N), R(·) remains the same as in (8.26), and the rest of the variables are defined as in (8.25).
To present the following theorem, we use a Lyapunov function in the form
Cooperative Control for Multi-Agent Systems with Matching Uncertainties 199
of

V3 = (1/2)ζ^T(L1 ⊗ P^{-1})ζ + Σ_{i=1}^{N} d̆i²/(2τi) + Σ_{i=1}^{N} ĕi²/(2ǫi),
Theorem 52 Supposing that Assumptions 8.3, 8.4, and 8.5 hold, the leader-
follower consensus error ζ and the adaptive gains dˆi and êi , i = 1, · · · , N , in
(8.45) are uniformly ultimately bounded under the distributed adaptive protocol
(8.44) with K and Γ designed as in Theorem 51. Moreover, the following two
assertions hold.
i) For any ϕi and ψi, the parameters ζ, d̆i, and ĕi exponentially converge to
the residual set
D4 ≜ {ζ, d̆i, ĕi : V3 < (1/(2δ)) Σ_{i=1}^{N} (β̂²ϕi + ei²ψi + (1/2)κi)},      (8.46)
D5 ≜ {ζ : ‖ζ‖² ≤ (λmax(P)/(λmin(L1)(α − ̺))) Σ_{i=1}^{N} (β̂²ϕi + ei²ψi + (1/2)κi)}.      (8.47)
where

A = [ −a(m10 + 1)   a   0
       1           −1   1
       0           −b   0 ],   B = [ 1
                                     0
                                     0 ],

f0(x0) = (a/2)(m10 − m20)(|x01 + 1| − |x01 − 1|),
fi(xi) = a(m10 − m1i)xi1 + (a/2)(m1i − m2i)(|xi1 + 1| − |xi1 − 1|), i = 1, · · · , N.
For simplicity, we let u0 = 0 and take f0(x0) as the virtual control input of the leader, which clearly satisfies ‖f0(x0)‖ ≤ (a/2)|m10 − m20|. Let a = 9, b = 18, m10 = −4/3, and m20 = −3/4. In this case, the leader displays a double-scroll chaotic attractor [107]. The parameters m1i and m2i, i = 1, · · · , N, are randomly chosen within the interval [−6, 0). It is easy to see that
[Figure: the communication graph; node 0 is the leader, nodes 1–6 the followers.]
[Figure: the states xi1, xi2, and xi3 of the agents.]
The communication graph is given as in FIGURE 8.4, where the node in-
dexed by 0 is the leader. Solving the LMI (8.8) gives the feedback gain matrices
of (8.44) as
K = −[ 16.9070  16.5791  1.8297 ],   Γ = [ 285.8453  280.3016  30.9344
                                           280.3016  274.8654  30.3344
                                            30.9344   30.3344   3.3477 ].
[Figure: the adaptive gains d̂i and êi of the followers.]
i) For any ϕi and ψi, the parameters ξ, d̃i, and ẽi exponentially converge to the residual set

D6 ≜ {ξ, d̃i, ẽi : V4 < (1/(2σ)) Σ_{i=1}^{N} (β²ϕi + ei²ψi + (1/2)κi) + (λmax(L)/(2σλmin(Q))) Σ_{i=1}^{N} υi²},      (8.53)

where σ ≜ min_{i=1,··· ,N} {ε − 1, ϕiτi, ψiǫi} and
V4 = (1/2)ξ^T(L ⊗ Q^{-1})ξ + Σ_{i=1}^{N} d̃i²/(2τi) + Σ_{i=1}^{N} ẽi²/(2ǫi),      (8.54)
where we have used the fact that σ ≤ min_{i=1,··· ,N}{ϕiτi, ψiǫi} to get the last inequality. Let ξ̂ ≜ [ξ̂1^T, · · · , ξ̂N^T]^T = (U^T ⊗ Q^{-1})ξ, where U is defined as in the proof of Theorem 50. By the definition of ξ̂, it is easy to see that ξ̂1 = 0. Since σ ≤ ε − 1 and β ≥ 1/λ2, we then have
ξ^T[W + (σ + 1)L ⊗ Q^{-1}]ξ ≤ Σ_{i=2}^{N} λi ξ̂i^T[AQ + QA^T + εQ − 2βλiBB^T]ξ̂i
  ≤ Σ_{i=2}^{N} λi ξ̂i^T[AQ + QA^T + εQ − 2BB^T]ξ̂i ≤ 0.      (8.59)
Then, it follows from (8.58) and (8.59) that

V̇4 ≤ −σV4 + (λmax(L)/(2λmin(Q))) Σ_{i=1}^{N} υi² + (1/2) Σ_{i=1}^{N} (β²ϕi + ei²ψi + (1/2)κi),      (8.60)
where xi ∈ Rn is the state and ui ∈ Rp is the control input of the i-th leader.
The communication graph of the agents consisting of the N followers and the
M leaders is assumed to satisfy Assumption 7.3 in Section 7.3.
Distributed static and adaptive containment controllers can be designed by combining the ideas in the previous section and Section 7.3. Details of the derivations are omitted here for brevity; interested readers can complete them as an exercise.
8.5 Notes
The materials of this chapter are mainly based on [77]. The matching uncertainties associated with the agents are assumed to satisfy certain upper bounds in this chapter. Another kind of commonly encountered uncertainty is that which can be linearly parameterized. The distributed cooperative control problems of multi-agent systems with linearly parameterized uncertainties have been considered in, e.g., [30, 31, 126, 193, 196]. Distributed control of multi-agent systems with norm-bounded uncertainties which do not necessarily satisfy the matching condition has been addressed in [89, 169], where the distributed robust stabilization problem of multi-agent systems with parameter uncertainties was considered in [89] and the robust synchronization problem was investigated in [169] for uncertainties in the form of additive perturbations of the transfer matrices of the nominal dynamics.
This page intentionally left blank
9
Global Consensus of Multi-Agent Systems
with Lipschitz Nonlinear Dynamics
CONTENTS
9.1 Global Consensus of Nominal Lipschitz Nonlinear Multi-Agent
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
9.1.1 Global Consensus without Disturbances . . . . . . . . . . . . . . . . . . . . 208
9.1.2 Global H∞ Consensus Subject to External Disturbances . . 211
9.1.3 Extensions to Leader-Follower Graphs . . . . . . . . . . . . . . . . . . . . . . 214
9.1.4 Simulation Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
9.2 Robust Consensus of Lipschitz Nonlinear Multi-Agent Systems with
Matching Uncertainties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
9.2.1 Distributed Static Consensus Protocols . . . . . . . . . . . . . . . . . . . . . 219
9.2.2 Distributed Adaptive Consensus Protocols . . . . . . . . . . . . . . . . . 224
9.2.3 Adaptive Protocols for the Case without Uncertainties . . . . 230
9.2.4 Simulation Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
9.3 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
In the previous chapters, the agent dynamics or the nominal agent dynamics
are assumed to be linear. This assumption might be restrictive in some circum-
stances. In this chapter, we intend to address the global consensus problem for
a class of nonlinear multi-agent systems, namely Lipschitz nonlinear multi-agent systems, where each agent contains a nonlinear function satisfying a global Lipschitz condition. These systems reduce
to general linear multi-agent systems when the nonlinearity does not exist.
In Section 9.1, the global consensus problem of high-order multi-agent sys-
tems with Lipschitz nonlinearity and directed communication graphs is for-
mulated. A distributed consensus protocol is proposed, based on the relative
states of neighboring agents. A two-step algorithm is presented to construct
one such protocol, under which a Lipschitz multi-agent system without distur-
bances can reach global consensus for a strongly connected directed commu-
nication graph. The existence condition of the proposed consensus protocol is
also discussed. For the case where the agents are subject to external distur-
bances, the global H∞ consensus problem is formulated and another algorithm
is then given to design the protocol which achieves global consensus with a
guaranteed H∞ performance for a strongly connected balanced communica-
tion graph. It is worth mentioning that in these two algorithms the feedback
where xi ∈ R^n and ui ∈ R^p are the state and the control input of the i-th agent, respectively, A and B are constant matrices with compatible dimensions, and the nonlinear function g(xi) is assumed to satisfy the Lipschitz condition with a Lipschitz constant γ > 0, i.e.,
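For concreteness, a global Lipschitz condition of this kind is easy to check numerically. The function below is a stand-in example with constant γ = 1/3 (it is not the g used in the book's simulations):

```python
import numpy as np

gamma = 1.0 / 3.0   # assumed Lipschitz constant

def g(x):
    # A simple globally Lipschitz nonlinearity: sin is 1-Lipschitz
    # componentwise, so gamma*sin(x) has Lipschitz constant gamma.
    return gamma * np.sin(x)

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # ||g(x) - g(y)|| <= gamma ||x - y|| for all sampled pairs
    assert np.linalg.norm(g(x) - g(y)) <= gamma * np.linalg.norm(x - y) + 1e-12
```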
proposed:

ui = cK Σ_{j=1}^{N} aij(xi − xj), i = 1, · · · , N,      (9.3)
where c > 0 is the coupling gain, K ∈ R^{p×n} is the feedback gain matrix, and aij is the (i, j)-th entry of the adjacency matrix associated with G.
The objective is to design a consensus protocol (9.3) such that the N agents
in (9.1) can achieve global consensus in the sense of limt→∞ kxi (t)−xj (t)k = 0,
∀ i, j = 1, · · · , N.
Let r = [r1 , · · · , rN ]T ∈ RN be the left eigenvector of L associated with
the zero eigenvalue, satisfying rT 1 = 1 and ri > 0, i = 1, · · · , N . Define
e = (W ⊗ In )x, where W = IN − 1rT and e = [eT1 , · · · , eTN ]T . It is easy to
verify that (rT ⊗ In )e = 0. By the definition of rT , it is easy to see that 0
is a simple eigenvalue of W with 1 as a right eigenvector and 1 is the other
eigenvalue with multiplicity N − 1. Then, it follows that e = 0 if and only
if x1 = · · · = xN . Therefore, the consensus problem under the protocol (9.3)
can be reduced to the asymptotic stability of e. Using (9.3) for (9.1), it can
be verified that e satisfies the following dynamics:
ėi = Aei + g(xi) − Σ_{j=1}^{N} rj g(xj) + c Σ_{j=1}^{N} Lij BKej, i = 1, · · · , N,      (9.4)
where Lij is the (i, j)-th entry of the Laplacian matrix associated with G.
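The construction of r, W, and e above can be checked numerically on a small assumed digraph (the graph below is ours, not from the book):

```python
import numpy as np

# Laplacian of a strongly connected digraph on 3 nodes (assumed example):
# arcs 1->2, 2->3, 3->1, 2->1; row i collects the in-neighbors of node i.
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  1.0,  0.0],
              [ 0.0, -1.0,  1.0]])

# Left eigenvector of L for the zero eigenvalue, scaled so r^T 1 = 1.
w, V = np.linalg.eig(L.T)
r = np.real(V[:, np.argmin(np.abs(w))])
r = r / r.sum()
assert np.all(r > 0)                       # positive for strong connectivity

W = np.eye(3) - np.outer(np.ones(3), r)    # W = I - 1 r^T
x = np.array([0.7, -1.2, 2.5])             # arbitrary agent states (n = 1)
e = W @ x                                  # consensus error e = (W ⊗ I_n) x
assert abs(r @ e) < 1e-12                  # (r^T ⊗ I_n) e = 0
assert np.allclose(W @ np.ones(3), 0.0)    # e = 0 when x1 = ... = xN
```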
Next, an algorithm is presented to select the control parameters in (9.3).
to get a matrix P > 0 and scalars τ > 0 and µ > 0. Then, choose K = −(1/2)B^T P^{-1}.
2) Select the coupling gain c ≥ τ/a(L), where a(L) is the generalized algebraic connectivity of G (defined in Lemma 4 in Chapter 1).
The following theorem presents a sufficient condition for the global con-
sensus of (9.4).
where x̄ = Σ_{j=1}^{N} rj xj. Noting that (r^T ⊗ In)e = 0, we have

Σ_{i=1}^{N} ri ei^T P^{-1}[g(x̄) − Σ_{j=1}^{N} rj g(xj)] = 0.      (9.9)

Using the Lipschitz condition (9.2) gives
Global Consensus of Lipschitz Nonlinear Multi-Agent Systems 211
where a(L) > 0. In light of (9.11), it then follows from (9.10) that

AP + PA^T + µI + µ^{-1}γ²P² − c a(L)BB^T ≤ AP + PA^T + µI + µ^{-1}γ²P² − τBB^T < 0,      (9.13)
where the last inequality follows from (9.5) by using Lemma 19 (the Schur
complement lemma). Therefore, it follows from (9.12) that V̇1 < 0, implying
that e(t) → 0, as t → ∞. That is, the global consensus of the network (9.4) is
achieved.
The objective in this subsection is to design a protocol (9.3) for the agents
in (9.14) to reach global consensus and meanwhile maintain a desirable dis-
turbance rejection performance. To this end, define the performance variable
zi , i = 1, · · · , N , as the average of the weighted relative states of the agents:
N
1 X
zi = C(xi − xj ), i = 1, · · · , N, (9.15)
N j=1
The global H∞ consensus problem for (9.14) under the protocol (9.3) is
first defined.
where z = [z1^T, · · · , zN^T]^T and ω = [ω1^T, · · · , ωN^T]^T.
Algorithm 18 For a given scalar γ > 0 and the agents in (9.14), a consensus
protocol (9.3) can be constructed as follows:
1) Solve the following LMI:

[ AQ + QA^T − ςBB^T + µI    Q               QC^T    D
  Q                        −(γ^{-1})²µI      0      0
  CQ                        0               −I      0
  D^T                       0                0     −η²I ] < 0      (9.18)

to get a matrix Q > 0 and scalars ς > 0 and µ > 0. Then, choose K = −(1/2)B^T Q^{-1}.
2) Select the coupling gain c ≥ ς/λ₂((L + L^T)/2), where λ₂((L + L^T)/2) denotes the smallest nonzero eigenvalue of (L + L^T)/2.
Therefore, the protocol (9.3) solves the global H∞ consensus problem, if the
system (9.19) is asymptotically stable and satisfies (9.17).
Consider the Lyapunov function candidate
V₂ = Σ_{i=1}^N e_i^T Q^{-1} e_i.
i=1
By following similar steps to those in Theorem 54, we can obtain the time
derivative of V2 along the trajectory of (9.19) as
V̇₂ ≤ Σ_{i=1}^N e_i^T[(2Q^{-1}A + γ²µ^{-1}I + µ(Q^{-1})²)e_i + 2c Σ_{j=1}^N L_ij Q^{-1}BKe_j
      + (2/N) Σ_{j=1}^N Q^{-1}D(ω_i − ω_j)]   (9.20)
    = ê^T[I_N ⊗ (AQ + QA^T + γ²µ^{-1}Q² + µI) − (c/2)(L + L^T) ⊗ BB^T]ê + 2ê^T(M ⊗ D)ω,
where ê_i = Q^{-1}e_i, ê = [ê_1^T, · · · , ê_N^T]^T, and M = I_N − (1/N)11^T.
M (L + LT ) = L + LT = (L + LT )M.
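The identity above holds because every row and column of L + L^T sums to zero for a balanced graph, so the projection M leaves it unchanged; a quick numerical confirmation for an assumed six-node ring graph:

```python
import numpy as np

# Laplacian of an assumed undirected 6-node ring graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
N = 6
L = np.zeros((N, N))
for i, j in edges:
    L[i, i] += 1.0
    L[j, j] += 1.0
    L[i, j] -= 1.0
    L[j, i] -= 1.0

M = np.eye(N) - np.ones((N, N)) / N   # projection onto the subspace 1^perp
S = L + L.T                           # rows and columns sum to zero
left_ok = np.allclose(M @ S, S)       # M(L + L^T) = L + L^T
right_ok = np.allclose(S @ M, S)      # (L + L^T)M = L + L^T
```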
1) Solve the LMI (9.5) to get a matrix P > 0 and scalars τ > 0 and µ > 0. Then, choose K = −(1/2)B^T P^{-1}.
2) Select the coupling gain c ≥ τ/(λ_min(H) min_{i=2,··· ,N} q_i), where H and q_i are defined in (9.26).
Remark 72 For the case where the subgraph associated with the followers is balanced and strongly connected, L₁ + L₁^T > 0 [86]. Then, by letting G = I, step 2) can be simplified to c ≥ τ/λ_min((L₁ + L₁^T)/2).
Theorem 56 Suppose that Assumption 9.1 holds and there exists a solution
to (9.5). Then, the consensus protocol (9.25) given by Algorithm 19 solves the
leader-follower global consensus problem for the N agents described by (9.1).
(Figure: the communication graph with nodes 1, · · · , 6.)
Clearly, g(xi ) here satisfies (9.2) with a Lipschitz constant α = 0.333. The
external disturbance here is ω = [w, −w, 1.5w, 3w, −0.6w, 2w]T , where w(t) is
a one-period square wave starting at t = 0 with width 2 and height 1.
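A sketch of the disturbance signal, under the interpretation (assumed here) that a one-period square wave of width 2 and height 1 equals 1 on [0, 2) and 0 elsewhere:

```python
def w(t):
    # One-period square wave starting at t = 0 with width 2 and height 1,
    # interpreted here as w(t) = 1 on [0, 2) and 0 elsewhere.
    return 1.0 if 0.0 <= t < 2.0 else 0.0
```

The six disturbance channels ω_i are then the scalar multiples of w(t) listed above.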
Choose the H∞ performance index η = 2. Solving the LMI (9.18) by using
(Figure: the state trajectories x_{i1}, x_{i2}, x_{i3}, and x_{i4} of the agents versus t.)
FIGURE 9.3: The performance variables z_i, i = 1, · · · , 6.
where
V̇₅ = ξ^T(L ⊗ P^{-1})ξ̇
   = ξ^T(L ⊗ P^{-1}A + cL² ⊗ P^{-1}BK)ξ + ξ^T(L ⊗ P^{-1})G(x)   (9.37)
     + ξ^T(L ⊗ P^{-1}B)F(x, t) + ξ^T[Lρ(x, t) ⊗ P^{-1}B]R(ξ).
By using Assumption 9.2, we can obtain that
ξ^T(L ⊗ P^{-1}B)F(x, t) ≤ Σ_{i=1}^N ||B^T P^{-1} Σ_{j=1}^N L_ij ξ_j|| ||f_i(x_i, t)||
                        ≤ Σ_{i=1}^N ρ_i(x_i, t)||B^T P^{-1} Σ_{j=1}^N L_ij ξ_j||.   (9.38)
Observe that
ξ^T(L ⊗ P^{-1})G(x) = ξ^T(L ⊗ P^{-1})[(M ⊗ I)G(x)]
   = ξ^T(L ⊗ P^{-1})[G(x) − 1 ⊗ g(x̄) + 1 ⊗ g(x̄) − (1/N)(11^T ⊗ I)G(x)]   (9.39)
   = ξ^T(L ⊗ P^{-1})[G(x) − 1 ⊗ g(x̄)],
where x̄ = (1/N)(1^T ⊗ I)x and we have used L1 = 0 to get the last equation.
Next, consider the following three cases.
i) ρ_i(x_i, t)||K Σ_{j=1}^N L_ij ξ_j|| > κ_i, i = 1, · · · , N.
In this case, it follows from (9.31) and (9.33) that
ξ^T[Lρ(x, t) ⊗ P^{-1}B]R(ξ) = − Σ_{i=1}^N ρ_i(x_i, t)||B^T P^{-1} Σ_{j=1}^N L_ij ξ_j||.   (9.40)
ξ^T[Lρ(x, t) ⊗ P^{-1}B]R(ξ) = − Σ_{i=1}^N (ρ_i(x_i, t)²/κ_i)||B^T P^{-1} Σ_{j=1}^N L_ij ξ_j||² ≤ 0.   (9.41)
V̇₅ ≤ (1/2)ξ^T[Xξ + 2(L ⊗ P^{-1})(G(x) − 1 ⊗ g(x̄))] + Σ_{i=1}^N κ_i.   (9.42)
Then, it follows from (9.37), (9.40), (9.43), (9.38), and (9.39) that
V̇₅ ≤ (1/2)ξ^T[Xξ + 2(L ⊗ P^{-1})(G(x) − 1 ⊗ g(x̄))] + Σ_{i=1}^{N−l} κ_i.
Therefore, by analyzing the above three cases, we get that V̇₅ satisfies (9.42), which can be rewritten into
V̇₅ ≤ −αV₅ + αV₅ + (1/2)ξ^T[Xξ + 2(L ⊗ P^{-1})(G(x) − 1 ⊗ g(x̄))] + Σ_{i=1}^N κ_i
   = −αV₅ + (1/2)ξ^T[Xξ + α(L ⊗ P^{-1})ξ + 2(L ⊗ P^{-1})(G(x) − 1 ⊗ g(x̄))] + Σ_{i=1}^N κ_i,   (9.44)
where α > 0.
Because G is connected, it follows from Lemma 1 that zero is a simple eigenvalue of L and all the other eigenvalues are positive. Let U = [1/√N  Y₁] and U^T = [1^T/√N ; Y₂], with Y₁ ∈ R^{N×(N−1)} and Y₂ ∈ R^{(N−1)×N}, be such unitary matrices that U^T LU = Λ ≜ diag(0, λ₂, · · · , λ_N), where λ₂ ≤ · · · ≤ λ_N are the nonzero eigenvalues of L. Let ξ̄ ≜ [ξ̄₁^T, · · · , ξ̄_N^T]^T = (U^T ⊗ I)ξ. By the definitions of ξ and ξ̄, it is easy to see that ξ̄₁ = (1^T/√N ⊗ I)ξ = 0. In light of Lemma 18, it then follows that
2ξ^T(L ⊗ P^{-1})(G(x) − 1 ⊗ g(x̄))
  = 2ξ̄^T(Λ ⊗ P^{-1})(U^T ⊗ I)(G(x) − 1 ⊗ g(x̄))
  ≤ µξ̄^T(Λ ⊗ P^{-1})(Π^{-1} ⊗ I)(Λ ⊗ P^{-1})ξ̄
    + (1/µ)[G(x) − 1 ⊗ g(x̄)]^T(U ⊗ I)(Π ⊗ I)(U^T ⊗ I)[G(x) − 1 ⊗ g(x̄)]   (9.45)
  = µξ̄^T(ΛΠ^{-1}Λ ⊗ (P^{-1})²)ξ̄ + (1/µ) Σ_{i=1}^N π_i ||g(x_i) − g(x̄)||²
  ≤ ξ̄^T[µΛΠ^{-1}Λ ⊗ (P^{-1})² + (γ²/µ)Π ⊗ I]ξ̄,
where Π = diag(π₁, · · · , π_N) is a positive-definite diagonal matrix and µ > 0, and we have used the Lipschitz condition (9.2) to get the last inequality. By choosing Π = diag(1, λ₂, · · · , λ_N) and letting ξ̂ = (I ⊗ P^{-1})ξ̄ = (U^T ⊗ P^{-1})ξ, it then follows from (9.45) and (9.44) that
By using Lemma 16 (the Comparison lemma), we can obtain from (9.47) that
V₅(ξ) ≤ [V₅(ξ(0)) − (1/α) Σ_{i=1}^N κ_i]e^{−αt} + (1/α) Σ_{i=1}^N κ_i.   (9.48)
Remark 73 Note that the nonlinear functions r_i(·) in (9.31) are actually continuous approximations, via the boundary layer concept [43, 71], of the discontinuous function ĝ(w) = w/||w|| if ||w|| ≠ 0 and ĝ(w) = 0 if ||w|| = 0. The values of κ_i in (9.31) define the widths of the boundary layers. From (9.34) we can observe that the residual set D₁ of the consensus error ξ depends on the smallest nonzero eigenvalue of L, the number of agents, the largest eigenvalue of P, and the values of κ_i. By choosing sufficiently small κ_i, the consensus error ξ under the protocol (9.30) can converge to an arbitrarily small neighborhood of zero.
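A sketch of the boundary-layer idea in Remark 73: the smooth function agrees with ĝ(w) = w/||w|| outside the layer ρ||w|| ≤ κ and is linear inside it, so the two branches match on the boundary. The exact form of r_i(·) in (9.31) is not reproduced in this chunk; the form below is inferred from (9.40) and (9.41):

```python
import numpy as np

def g_hat(w):
    # discontinuous unit-vector function from Remark 73
    n = np.linalg.norm(w)
    return w / n if n > 0 else np.zeros_like(w)

def r_smooth(w, rho, kappa):
    # boundary-layer approximation: inside the layer rho*||w|| <= kappa the
    # function is linear in w, and the two branches agree on the boundary
    n = np.linalg.norm(w)
    return w / n if rho * n > kappa else rho * w / kappa
```

Shrinking κ narrows the layer and makes r_smooth approach ĝ, at the cost of a stiffer control law near w = 0.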
where d_i(t) and e_i(t) are the adaptive gains associated with the i-th agent, K ∈ R^{p×n} and Γ ∈ R^{n×n} are the feedback gain matrices to be designed, ν_i and ǫ_i are positive scalars, ϕ_i and ψ_i are small positive constants, the nonlinear functions r̄_i(·) are defined such that for w ∈ R^n,
r̄_i(w) = w(d_i + e_i||x_i||)/||w||,   if (d_i + e_i||x_i||)||w|| > κ_i,
r̄_i(w) = w(d_i + e_i||x_i||)²/κ_i,    if (d_i + e_i||x_i||)||w|| ≤ κ_i,   (9.50)
and the rest of the variables are defined as in (9.30).
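A direct Python transcription of (9.50), with d_i, e_i, x_i, and κ_i passed in as illustrative scalars and vectors:

```python
import numpy as np

def r_bar(w, d_i, e_i, x_i, kappa_i):
    # the nonlinear function r_bar_i in (9.50)
    rho = d_i + e_i * np.linalg.norm(x_i)   # state-dependent gain d_i + e_i||x_i||
    n = np.linalg.norm(w)
    if rho * n > kappa_i:
        return w * rho / n                  # unit-vector branch
    return w * rho**2 / kappa_i             # linear branch inside the layer
```

The two branches coincide when ρ||w|| = κ_i, so r̄_i is continuous.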
Let the consensus error ξ be defined as in (9.32) and D(t) = diag(d₁(t), · · · , d_N(t)). Then, it is not difficult to get from (9.29) and (9.49) that the closed-loop network dynamics can be written as
ξ̇ = (I_N ⊗ A + MDL ⊗ BK)ξ + (M ⊗ I)G(x) + (M ⊗ B)[F(x, t) + R̄(ξ)],
ḋ_i = ν_i[−ϕ_i d_i + (Σ_{j=1}^N L_ij ξ_j^T)Γ(Σ_{j=1}^N L_ij ξ_j) + ||K Σ_{j=1}^N L_ij ξ_j||],   (9.51)
ė_i = ǫ_i[−ψ_i e_i + ||K Σ_{j=1}^N L_ij ξ_j|| ||x_i||],  i = 1, · · · , N,
where
R̄(ξ) = [r̄₁(K Σ_{j=1}^N L_{1j} ξ_j)^T, · · · , r̄_N(K Σ_{j=1}^N L_{Nj} ξ_j)^T]^T,   (9.52)
and the rest of the variables are defined as in (9.32).
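The adaptive laws in (9.51) can be sketched as follows; the function returns the right-hand sides ḋ_i and ė_i for one agent, with all arguments (gains, Laplacian, feedback matrices) supplied as placeholders. The −ϕ_i d_i and −ψ_i e_i terms are the leakage (σ-modification) terms that keep the gains bounded:

```python
import numpy as np

def adaptive_rates(i, d, e, xi, x, L, K, Gamma, nu, eps, phi, psi):
    # Right-hand sides of the adaptive laws for d_i and e_i in (9.51).
    # xi: N x n array of consensus errors; L: Laplacian matrix;
    # all gains are illustrative placeholders.
    s = L[i] @ xi  # relative-state sum: sum_j L_ij * xi_j
    d_dot = nu[i] * (-phi[i] * d[i] + s @ Gamma @ s
                     + np.linalg.norm(K @ s))
    e_dot = eps[i] * (-psi[i] * e[i]
                      + np.linalg.norm(K @ s) * np.linalg.norm(x[i]))
    return d_dot, e_dot
```

A hand-checkable two-agent, scalar-state instance confirms the arithmetic.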
In order to establish the ultimate boundedness of the states ξ, d_i, and e_i of (9.51), we use the following Lyapunov function candidate
V₆ = (1/2)ξ^T(L ⊗ P^{-1})ξ + Σ_{i=1}^N d̃_i²/(2ν_i) + Σ_{i=1}^N ẽ_i²/(2ǫ_i),   (9.53)
V̇₆ = ξ^T(L ⊗ P^{-1})ξ̇ + Σ_{i=1}^N (d̃_i/ν_i)d̃˙_i + Σ_{i=1}^N (ẽ_i/ǫ_i)ẽ˙_i
   = ξ^T(L ⊗ P^{-1}A + LDL ⊗ P^{-1}BK)ξ + ξ^T(L ⊗ P^{-1})G(x)
     + ξ^T(L ⊗ P^{-1}B)[F(x, t) + R̄(ξ)]   (9.56)
     + Σ_{i=1}^N d̃_i[−ϕ_i(d̃_i + β) + (Σ_{j=1}^N L_ij ξ_j^T)Γ(Σ_{j=1}^N L_ij ξ_j) + ||K Σ_{j=1}^N L_ij ξ_j||]
     + Σ_{i=1}^N ẽ_i[−ψ_i(ẽ_i + b_i) + ||K Σ_{j=1}^N L_ij ξ_j|| ||x_i||],
where we have used the fact that
−((d_i + e_i||x_i||)²/κ_i)||B^T P^{-1} Σ_{j=1}^N L_ij ξ_j||² + (d_i + e_i||x_i||)||B^T P^{-1} Σ_{j=1}^N L_ij ξ_j|| ≤ (1/4)κ_i,
for (d_i + e_i||x_i||)||K Σ_{j=1}^N L_ij ξ_j|| ≤ κ_i, i = 1, · · · , N.
iii) (d_i + e_i||x_i||)||K Σ_{j=1}^N L_ij ξ_j|| > κ_i, i = 1, · · · , l, and (d_i + e_i||x_i||)||K Σ_{j=1}^N L_ij ξ_j|| ≤ κ_i, i = l + 1, · · · , N, where 2 ≤ l ≤ N − 1.
By following similar steps in the two cases above, it is not difficult to get that
V̇₆ ≤ (1/2)ξ^T[Yξ + 2(L ⊗ P^{-1})(G(x) − 1 ⊗ g(x̄))] + (1/4) Σ_{i=1}^{N−l} κ_i
     − (1/2) Σ_{i=1}^N (ϕ_i d̃_i² + ψ_i ẽ_i²) + (1/2) Σ_{i=1}^N (β²ϕ_i + b_i²ψ_i).
Therefore, based on the above three cases, we can get that V̇₆ satisfies (9.61) for all ξ ∈ R^{Nn}. Note that (9.61) can be rewritten into
V̇₆ ≤ −δV₆ + δV₆ + (1/2)ξ^T[Yξ + 2(L ⊗ P^{-1})(G(x) − 1 ⊗ g(x̄))]
     − (1/2) Σ_{i=1}^N (ϕ_i d̃_i² + ψ_i ẽ_i²) + (1/2) Σ_{i=1}^N (β²ϕ_i + b_i²ψ_i + (1/2)κ_i)
   = −δV₆ + (1/2)ξ^T[Yξ + δ(L ⊗ P^{-1})ξ + 2(L ⊗ P^{-1})(G(x) − 1 ⊗ g(x̄))]   (9.62)
     − (1/2) Σ_{i=1}^N [(ϕ_i − δ/ν_i)d̃_i² + (ψ_i − δ/ǫ_i)ẽ_i²] + (1/2) Σ_{i=1}^N (β²ϕ_i + b_i²ψ_i + (1/2)κ_i).
Remark 74 In Theorem 58, the design of the adaptive protocol (9.49) relies only on the agent dynamics, requiring neither the minimal nonzero eigenvalue of L nor the upper bounds of the uncertainties f_i(x_i, t). Thus, the adaptive controller (9.49), contrary to the static protocol (9.30), can be implemented in a fully distributed fashion without requiring any global information. As explained in Section 8.1 of Chapter 8, we should choose reasonably small ϕ_i, ψ_i, and κ_i in practical implementations such that the consensus error ξ is satisfactorily small and at the same time the adaptive gains d_i and e_i are tolerable.
where the nonlinear functions r̂_i(·) are defined such that r̂_i(w) = w d̄_i/||w|| if d̄_i||w|| > κ_i and r̂_i(w) = w d̄_i²/κ_i if d̄_i||w|| ≤ κ_i, and the rest of the variables are defined as in (9.49).
Remark 76 The case where there exists a leader whose control input is
bounded and unknown to any follower can also be investigated. Distributed
controllers based on the relative state information can be similarly designed by
following the steps in Section 8.3 of Chapter 8 and in this section, when the
communication graph contains a directed spanning tree with the leader as the
root and the subgraph associated with the followers is undirected. The details
are omitted here for conciseness.
(FIGURE 9.4: the communication graph with nodes 1, · · · , 6.)
Example 26 Consider a network of uncertain agents described by (9.29), with

x_i = [x_{i1}, x_{i2}, x_{i3}, x_{i4}]^T,

A = ⎡  0      1      0     0   ⎤
    ⎢ −48.6  −1.26   48.6  0   ⎥
    ⎢  0      0      0     10  ⎥
    ⎣  1.95   0     −1.95  0   ⎦,

B = [0  21.6  0  0]^T,  g(x_i) = [0  0  0  −0.333 sin(x_{i1})]^T

[129]. Clearly, g(x_i) here satisfies (9.2) with a Lipschitz constant α = 0.333. For illustration, the heterogeneous uncertainties are chosen as f₁ = sin(t), f₂ = sin(x_{25})/2, f₃ = sin(t/2), f₄ = cos(t) + 1, f₅ = sin(t + 1), and f₆ = sin(x_{63} − 4)/2.
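Since |sin a − sin b| ≤ |a − b|, the nonlinearity of Example 26 is globally Lipschitz with constant 0.333; a randomized numerical check:

```python
import numpy as np

def g(x):
    # nonlinearity of Example 26: only the fourth component is nonzero
    return np.array([0.0, 0.0, 0.0, -0.333 * np.sin(x[0])])

rng = np.random.default_rng(3)
alpha = 0.333
ok = True
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    # check ||g(x) - g(y)|| <= alpha * ||x - y|| on random pairs
    if np.linalg.norm(g(x) - g(y)) > alpha * np.linalg.norm(x - y) + 1e-12:
        ok = False
```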
Here we use the adaptive protocol (9.66) to solve the global consensus problem. The communication graph is given as in FIGURE 9.4, which is connected. Solving the LMI (9.5) gives the feedback gain matrices of (9.66) as K =
(Figure: the consensus errors x_i − x₁ versus t.)
(Figure: the adaptive gains d_i versus t.)
9.3 Notes
The materials of this section are mainly based on [90, 96]. For more results on
consensus of Lipschitz-type nonlinear multi-agent systems, please refer to [60,
84, 156, 181, 189]. In [181], the agents are characterized by first-order Lipschitz
nonlinear systems. In [156, 189], the agents were assumed to be second-order
systems with Lipschitz nonlinearity. [84] studied the global leader-follower
consensus of coupled Lur’e systems with certain sector-bound nonlinearity,
where the subgraph associated with the followers is required to be undirected.
[60] was concerned with the second-order consensus problem of multi-agent
systems with a virtual leader, where all agents and the virtual leader share the
same intrinsic dynamics with a locally Lipschitz condition. Consensus tracking
for multi-agent systems with Lipschitz-type nonlinear dynamics and switching
directed topology was studied in [182].
There exists a quite large body of research papers on cooperative control
of other types of nonlinear multi-agent systems. For instance, consensus of
a network of Euler-Lagrange systems was studied in [1, 25, 79, 109, 176], a
passivity-based design framework was proposed to deal with the consensus
problem and other group coordination problems in [5, 8], and cooperative
adaptive output regulation problems were addressed in [37, 163] for several
classes of nonlinear multi-agent systems.
Bibliography
[23] C.T. Chen. Linear System Theory and Design. Oxford University Press,
New York, NY, 1999.
[24] G. Chen and F.L. Lewis. Distributed adaptive tracking control for synchronization of unknown networked Lagrangian systems. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41(3):805–816, 2011.
[40] W.J. Dong and J.A. Farrell. Cooperative control of multiple non-
holonomic mobile agents. IEEE Transactions on Automatic Control,
53(6):1434–1448, 2008.
[41] Z.S. Duan, G.R. Chen, and L. Huang. Disconnected synchronized re-
gions of complex dynamical networks. IEEE Transactions on Automatic
Control, 54(4):845–849, 2009.
[55] K. Hengster-Movric, K.Y. You, F.L. Lewis, and L.H. Xie. Synchroniza-
tion of discrete-time multi-agent systems on graphs using Riccati design.
Automatica, 49(2):414–423, 2013.
[56] Y.G. Hong, J.P. Hu, and L. Gao. Tracking control for multi-agent
consensus with an active leader and variable topology. Automatica,
42(7):1177–1182, 2006.
[57] Y.G. Hong, G.R. Chen, and L. Bushnell. Distributed observers design for
leader-following control of multi-agent networks. Automatica, 44(3):846–
850, 2008.
[58] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University
Press, New York, NY, 1990.
[59] A. Howard, M.J. Matarić, and G.S. Sukhatme. Mobile sensor network
deployment using potential fields: A distributed, scalable solution to the
area coverage problem. Proceedings of the 6th International Symposium
on Distributed Autonomous Robotics Systems, pp. 299–308, 2002.
[60] Y.B. Hu, H.S. Su, and J. Lam. Adaptive consensus with a virtual leader
of multiple agents governed by locally Lipschitz nonlinearity. Interna-
tional Journal of Robust and Nonlinear Control, 23(9):978–990, 2013.
[61] I.I. Hussein and D.M. Stipanovic. Effective coverage control for mobile
sensor networks with guaranteed collision avoidance. IEEE Transactions
on Control Systems Technology, 15(4):642–657, 2007.
[62] P. A. Ioannou and J. Sun. Robust Adaptive Control. Prentice-Hall, New
York, NY, 1996.
[63] P.A. Ioannou and P.V. Kokotovic. Instability analysis and improvement
of robustness of adaptive control. Automatica, 20(5):583–594, 1984.
[64] T. Iwasaki and R.E. Skelton. All controllers for the general H∞ control
problem: LMI existence conditions and state space formulas. Automat-
ica, 30(8):1307–1317, 1994.
[65] A. Jadbabaie, J. Lin, and A.S. Morse. Coordination of groups of mobile
autonomous agents using nearest neighbor rules. IEEE Transactions on
Automatic Control, 48(6):988–1001, 2003.
[66] I.S. Jeon, J.I. Lee, and M.J. Tahk. Homing guidance law for cooper-
ative attack of multiple missiles. Journal of Guidance, Control, and
Dynamics, 33(1):275–280, 2010.
[67] M. Ji, G. Ferrari-Trecate, M. Egerstedt, and A. Buffa. Containment
control in mobile networks. IEEE Transactions on Automatic Control,
53(8):1972–1975, 2008.
[68] F.C. Jiang and L. Wang. Consensus seeking of high-order dynamic
multi-agent systems with fixed and switching topologies. International
Journal of Control, 85(2):404–420, 2010.
[69] V. Kapila, A.G. Sparks, J.M. Buffington, and Q. Yan. Spacecraft forma-
tion flying: Dynamics and control. Journal of Guidance, Control, and
Dynamics, 23(3):561–564, 2000.
[70] O. Katsuhiko. Modern Control Engineering. Prentice Hall, Upper Saddle
River, NJ, 1996.
[71] H.K. Khalil. Nonlinear Systems. Prentice Hall, Englewood Cliffs, NJ,
2002.
[72] M. Krstić, I. Kanellakopoulos, and P.V. Kokotovic. Nonlinear and Adap-
tive Control Design. John Wiley & Sons, New York, 1995.
[73] J. Larson, C. Kammer, K.Y. Liang, and K.H. Johansson. Coordinated
route optimization for heavy-duty vehicle platoons. Proceedings of the
16th International IEEE Annual Conference on Intelligent Transporta-
tion Systems, pp. 1196–1202, 2013.
[74] F.L. Lewis, H.W. Zhang, K. Hengster-Movric, and A. Das. Coopera-
tive Control of Multi-Agent Systems: Optimal and Adaptive Design Ap-
proaches. Springer-Verlag, London, 2014.
[75] T. Li, M.Y. Fu, L.H. Xie, and J.F. Zhang. Distributed consensus with
limited communication data rate. IEEE Transactions on Automatic
Control, 56(2):279–292, 2011.
[76] W. Li and C.G. Cassandras. Distributed cooperative coverage control of
sensor networks. Proceedings of the 44th IEEE Conference on Decision
and Control and 2005 European Control Conference, pp. 2542–2547,
2005.
[77] Z.K. Li, Z.S. Duan, and F.L. Lewis. Distributed robust consensus con-
trol of multi-agent systems with heterogeneous matching uncertainties.
Automatica, 50(3):883–889, 2014.
[78] Z.K. Li, G.H. Wen, Z.S. Duan, and W. Ren. Designing fully distributed
consensus protocols for linear multi-agent systems with directed com-
munication graphs. IEEE Transactions on Automatic Control, in press,
2014.
[82] Z.K. Li, Z.S. Duan, and G.R. Chen. Consensus of discrete-time linear
multi-agent systems with observer-type protocols. Discrete and Contin-
uous Dynamical Systems-Series B, 16(2):489–505, 2011.
[83] Z.K. Li, Z.S. Duan, and G.R. Chen. Dynamic consensus of linear multi-
agent systems. IET Control Theory and Applications, 5(1):19–28, 2011.
[84] Z.K. Li, Z.S. Duan, and G.R. Chen. Global synchronised regions
of linearly coupled Lur’e systems. International Journal of Control,
84(2):216–227, 2011.
[85] Z.K. Li, Z.S. Duan, and G.R. Chen. On H∞ and H2 performance regions
of multi-agent systems. Automatica, 47(4):797–803, 2011.
[86] Z.K. Li, Z.S. Duan, G.R. Chen, and L. Huang. Consensus of multia-
gent systems and synchronization of complex networks: A unified view-
point. IEEE Transactions on Circuits and Systems I: Regular Papers,
57(1):213–224, 2010.
[87] Z.K. Li, Z.S. Duan, and L. Huang. H∞ control of networked multi-agent
systems. Journal of Systems Science and Complexity, 22(1):35–48, 2009.
[88] Z.K. Li, Z.S. Duan, W. Ren, and G. Feng. Containment control of
linear multi-agent systems with multiple leaders of bounded inputs using
distributed continuous controllers. International Journal of Robust and
Nonlinear Control, in press, 2014.
[89] Z.K. Li, Z.S. Duan, L.H. Xie, and X.D. Liu. Distributed robust control of
linear multi-agent systems with parameter uncertainties. International
Journal of Control, 85(8):1039–1050, 2012.
[90] Z.K. Li, X.D. Liu, M.Y. Fu, and L.H. Xie. Global H∞ consensus of
multi-agent systems with Lipschitz non-linear dynamics. IET Control
Theory and Applications, 6(13):2041–2048, 2012.
[91] Z.K. Li, X.D. Liu, P. Lin, and W. Ren. Consensus of linear multi-
agent systems with reduced-order observer-based protocols. Systems
and Control Letters, 60(7):510–516, 2011.
[92] Z.K. Li, X.D. Liu, W. Ren, and L.H. Xie. Distributed tracking control
for linear multi-agent systems with a leader of bounded unknown input.
IEEE Transactions on Automatic Control, 58(2):518–523, 2013.
[93] Z.K. Li, W. Ren, X.D. Liu, and M.Y. Fu. Consensus of multi-agent
systems with general linear and Lipschitz nonlinear dynamics using dis-
tributed adaptive protocols. IEEE Transactions on Automatic Control,
58(7):1786–1791, 2013.
[94] Z.K. Li, W. Ren, X.D. Liu, and M.Y. Fu. Distributed containment con-
trol of multi-agent systems with general linear dynamics in the presence
of multiple leaders. International Journal of Robust and Nonlinear Con-
trol, 23(5):534–547, 2013.
[95] Z.K. Li, W. Ren, X.D. Liu, and L.H. Xie. Distributed consensus of lin-
ear multi-agent systems with adaptive dynamic protocols. Automatica,
49(7):1986–1995, 2013.
[96] Z.K. Li, Y. Zhao, and Z.S. Duan. Distributed robust global consen-
sus of a class of Lipschitz nonlinear multi-agent systems with matching
uncertainties. Asian Journal of Control, in press, 2014.
[100] P. Lin, K.Y. Qin, Z.K. Li, and W. Ren. Collective rotating motions of
second-order multi-agent systems in three-dimensional space. Systems
and Control Letters, 60(6):365–372, 2011.
[101] C. Liu, Z.S. Duan, G.R. Chen, and L. Huang. Analyzing and controlling
the network synchronization regions. Physica A, 386(1):531–542, 2007.
[102] Y. Liu and Y.M. Jia. H∞ consensus control of multi-agent systems with
switching topology: A dynamic output feedback protocol. International
Journal of Control, 83(3):527–537, 2010.
[103] J. Löfberg. YALMIP: A toolbox for modeling and optimization in MAT-
LAB. Proceedings of 2004 IEEE International Symposium on Computer
Aided Control Systems Design, pp. 284–289, 2004.
[104] Y. Lou and Y.G. Hong. Target containment control of multi-agent sys-
tems with random switching interconnection topologies. Automatica,
48(5):879–885, 2012.
[105] J.H. Lv, X.H. Yu, G.R. Chen, and D.Z. Cheng. Characterizing the syn-
chronizability of small-world dynamical networks. IEEE Transactions
on Circuits and Systems I: Regular Papers, 51(4):787–796, 2004.
[106] C.Q. Ma and J.F. Zhang. Necessary and sufficient conditions for consensusability of linear multi-agent systems. IEEE Transactions on Automatic Control, 55(5):1263–1268, 2010.
[107] R.N. Madan. Chua’s Circuit: A Paradigm for Chaos. World Scientific,
Singapore, 1993.
[108] P. Massioni and M. Verhaegen. Distributed control for identical dynam-
ically coupled systems: A decomposition approach. IEEE Transactions
on Automatic Control, 54(1):124–135, 2009.
[109] J. Mei, W. Ren, and G. Ma. Distributed coordinated tracking with a
dynamic leader for multiple Euler-Lagrange systems. IEEE Transactions
on Automatic Control, 56(6):1415–1421, 2011.
[110] J. Mei, W. Ren, and G. Ma. Distributed containment control for La-
grangian networks with parametric uncertainties under a directed graph.
Automatica, 48(4):653–659, 2012.
[111] Z.Y. Meng, W. Ren, and Z. You. Distributed finite-time attitude con-
tainment control for multiple rigid bodies. Automatica, 46(12):2092–
2099, 2010.
[112] M. Mesbahi and M. Egerstedt. Graph Theoretic Methods in Multiagent
Networks. Princeton University Press, Princeton, NJ, 2010.
[113] N. Michael, J. Fink, and V. Kumar. Cooperative manipulation and
transportation with aerial robots. Autonomous Robots, 30(1):73–86,
2011.
[135] W. Ren, K.L. Moore, and Y. Chen. High-order and model reference con-
sensus algorithms in cooperative control of multivehicle systems. Journal
of Dynamic Systems, Measurement, and Control, 129(5):678–688, 2007.
[136] W. Ren. Consensus strategies for cooperative control of vehicle forma-
tions. IET Control Theory and Applications, 1(2):505–512, 2007.
[137] W. Ren. Collective motion from consensus with Cartesian coordinate
coupling. IEEE Transactions on Automatic Control, 54(6):1330–1335,
2009.
[138] W. Ren and R.W. Beard. Distributed Consensus in Multi-Vehicle Co-
operative Control. Springer-Verlag, London, 2008.
[139] W. Ren and Y.C. Cao. Distributed Coordination of Multi-Agent Net-
works: Emergent Problems, Models, and Issues. Springer-Verlag, Lon-
don, 2010.
[141] E.J. Rodrı́guez-Seda, J.J. Troy, C.A. Erignac, P. Murray, D.M. Sti-
panovic, and M.W. Spong. Bilateral teleoperation of multiple mobile
agents: coordinated motion and collision avoidance. IEEE Transactions
on Control Systems Technology, 18(4):984–992, 2010.
[142] A. Sarlette, R. Sepulchre, and N.E. Leonard. Autonomous rigid body
attitude synchronization. Automatica, 45(2):572–577, 2009.
[143] L. Scardovi and R. Sepulchre. Synchronization in networks of identical
linear systems. Automatica, 45(11):2557–2562, 2009.
[144] L. Schenato, B. Sinopoli, M. Franceschetti, K. Poolla, and S.S. Sastry.
Foundations of control and estimation over lossy networks. Proceedings
of the IEEE, 95(1):163–187, 2007.
[145] L. Schenato and F. Fiorentin. Average timesynch: A consensus-based
protocol for clock synchronization in wireless sensor networks. Auto-
matica, 47(9):1878–1886, 2011.
[146] J.H. Seo, H. Shim, and J. Back. Consensus of high-order linear sys-
tems using dynamic output feedback compensator: Low gain approach.
Automatica, 45(11):2659–2664, 2009.
[147] R. Sepulchre, D.A. Paley, and N.E. Leonard. Stabilization of planar
collective motion: All-to-all communication. IEEE Transactions on Au-
tomatic Control, 52(5):811–824, 2007.
[148] R. Sepulchre, D.A. Paley, and N.E. Leonard. Stabilization of planar
collective motion with limited communication. IEEE Transactions on
Automatic Control, 53(3):706–719, 2008.
[149] G.S. Seyboth, D.V. Dimarogonas, and K.H. Johansson. Event-based
broadcasting for multi-agent average consensus. Automatica, 49(1):245–
252, 2013.
[150] D. Shevitz and B. Paden. Lyapunov stability theory of nonsmooth sys-
tems. IEEE Transactions on Automatic Control, 39(9):1910–1914, 1994.
[151] D.D. Šiljak. Decentralized Control of Complex Systems. Academic Press,
New York, NY, 1991.
[152] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M.I. Jordan, and
S.S. Sastry. Kalman filtering with intermittent observations. IEEE
Transactions on Automatic Control, 49(9):1453–1464, 2004.
[153] F. Sivrikaya and B. Yener. Time synchronization in sensor networks: A
survey. IEEE Network, 18(4):45–50, 2004.
[154] J.J.E. Slotine and W. Li. Applied Nonlinear Control. Prentice Hall,
Englewood Cliffs, NJ, 1991.
[159] H.S. Su, G.R. Chen, X.F. Wang, and Z.L. Lin. Adaptive second-order
consensus of networked mobile agents with nonlinear dynamics. Auto-
matica, 47(2):368–375, 2011.
[160] H.S. Su, X.F. Wang, and G.R. Chen. A connectivity-preserving flocking
algorithm for multi-agent systems based only on position measurements.
International Journal of Control, 82(7):1334–1343, 2009.
[161] H.S. Su, X.F. Wang, and W. Yang. Flocking in multi-agent systems
with multiple virtual leaders. Asian Journal of control, 10(2):238–245,
2008.
[162] H.S. Su, X.F. Wang, and Z.L. Lin. Flocking of multi-agents with a
virtual leader. IEEE Transactions on Automatic Control, 54(2):293–
307, 2009.
[163] Y.F. Su and J. Huang. Cooperative adaptive output regulation for a
class of nonlinear uncertain multi-agent systems with unknown leader.
Systems and Control Letters, 62(6):461–467, 2013.
[164] C. Tan and G. P. Liu. Consensus of networked multi-agent systems via
the networked predictive control and relative outputs. Journal of the
Franklin Institute, 349(7):2343–2356, 2012.
[180] G.H. Wen, G.Q. Hu, W.W. Yu, and G.R. Chen. Distributed H∞ consensus of higher order multiagent systems with switching topologies. IEEE Transactions on Circuits and Systems II: Express Briefs, 61(5):359–363, 2014.
[181] G.H. Wen, Z.S. Duan, Z.K. Li, and G.R. Chen. Consensus and its L2-gain performance of multi-agent systems with intermittent information transmissions. International Journal of Control, 85(4):384–396, 2012.
[182] G.H. Wen, Z.S. Duan, G.R. Chen, and W.W. Yu. Consensus tracking of multi-agent systems with Lipschitz-type node dynamics and switching topologies. IEEE Transactions on Circuits and Systems I: Regular Papers, 61(2):499–511, 2014.
[183] G. Wheeler, C.Y. Su, and Y. Stepanenko. A sliding mode controller
with improved adaptation laws for the upper bounds on the norm of
uncertainties. Automatica, 34(12):1657–1661, 1998.
[184] E. Yong. Autonomous drones flock like birds. Nature News,
doi:10.1038/nature.2014.14776, 2014.
[185] K.Y. You and L.H. Xie. Network topology and communication data rate for consensusability of discrete-time multi-agent systems. IEEE Transactions on Automatic Control, 56(10):2262–2275, 2011.
[186] K.Y. You, Z.K. Li, and L.H. Xie. Consensus condition for linear
multi-agent systems over randomly switching topologies. Automatica,
49(10):3125–3132, 2013.
[187] K.D. Young, V.I. Utkin, and U. Ozguner. A control engineer’s guide to
sliding mode control. IEEE Transactions on Control Systems Technol-
ogy, 7(3):328–342, 1999.
[188] C.B. Yu, B.D.O. Anderson, S. Dasgupta, and B. Fidan. Control of
minimally persistent formations in the plane. SIAM Journal on Control
and Optimization, 48(1):206–233, 2009.
[189] W.W. Yu, G.R. Chen, M. Cao, and J. Kurths. Second-order consensus
for multiagent systems with directed topologies and nonlinear dynam-
ics. IEEE Transactions on Systems, Man, and Cybernetics, Part B:
Cybernetics, 40(3):881–891, 2010.
[190] W.W. Yu, W. Ren, W.X. Zheng, G.R. Chen, and J.H. Lv. Distributed
control gains design for consensus in multi-agent systems with second-
order nonlinear dynamics. Automatica, 49(7):2107–2115, 2013.
[191] T. Yucelen and W.M. Haddad. Low-frequency learning and fast adap-
tation in model reference adaptive control. IEEE Transactions on Au-
tomatic Control, 58(4):1080–1085, 2013.
[194] M.M. Zavlanos, H.G. Tanner, A. Jadbabaie, and G.J. Pappas. Hybrid
control for connectivity preserving flocking. IEEE Transactions on Au-
tomatic Control, 54(12):2869–2875, 2009.
[195] H.W. Zhang, F.L. Lewis, and A. Das. Optimal design for synchro-
nization of cooperative systems: State feedback, observer, and output
feedback. IEEE Transactions on Automatic Control, 56(8):1948–1952,
2011.
[196] H.W. Zhang and F.L. Lewis. Adaptive cooperative tracking control of
higher-order nonlinear systems with unknown dynamics. Automatica,
48(7):1432–1439, 2012.
[197] Y. Zhao, Z.S. Duan, G.H. Wen, and G.R. Chen. Distributed H∞ con-
sensus of multi-agent systems: A performance region-based approach.
International Journal of Control, 85(3):332–341, 2012.
[198] Y. Zhao, G.H. Wen, Z.S. Duan, X. Xu, and G.R. Chen. A new observer-
type consensus protocol for linear multi-agent dynamical systems. Asian
Journal of Control, 15(2):571–582, 2013.
[199] B. Zhou, C.C. Xu, and G.R. Duan. Distributed and truncated reduced-order observer based output feedback consensus of multi-agent systems. IEEE Transactions on Automatic Control, 59(8):2264–2270, 2014.
[200] K.M. Zhou and J.C. Doyle. Essentials of Robust Control. Prentice Hall,
Upper Saddle River, NJ, 1998.
[201] F. Zhu and Z. Han. A note on observers for Lipschitz nonlinear systems. IEEE Transactions on Automatic Control, 47(10):1751–1754, 2002.