

Proceedings of the 1st IFAC Workshop on Estimation and Control of Networked Systems, Venice, Italy, September 24-26, 2009 (Preprint)

Distributed Coordination-by-Constraint
Strategies for Networked Control Systems
Emanuele Garone, Francesco Tedesco, Alessandro Casavola

Dipartimento di Elettronica, Informatica e Sistemistica, Università della Calabria, Via P. Bucci 42-C, Rende (CS), 87036, Italy

Abstract: In this paper we present preliminary ideas on how to develop distributed supervision
strategies for networked control systems subject to coordination constraints to be enforced
on-line. Such a coordination paradigm, hereafter referred to as coordination-by-constraint, is
characterized by a set of spatially distributed dynamic systems, connected via communication
channels, with possibly dynamical coupling amongst them which need to be supervised and
coordinated in order to accomplish their overall objective. In order to evaluate the distributed
method here proposed, the distributed coordination of coupled autonomous vehicles under input-
saturation and formation accuracy constraints is presented as an example.

1. INTRODUCTION

The problem of interest here is the design of distributed supervision strategies based on Command Governor (CG) ideas for multi-agent systems where the use of a centralized coordination unit is impracticable because it would require unrealistic or unavailable communication infrastructures. Examples of relevant applications include groups of vehicles cooperatively converging to a desired formation (Keviczky [2005]), large scale chemical processes (Venkat [2006]) and the coordination of generators in networked power systems (Kumar and Kothari [2005]), to mention a few. From a theoretical point of view, distributed control policies for dynamically coupled systems have recently been studied in Camponogara et al. [2002], D'Andrea and Dullerud [2003], Venkat et al. [2005], Magni and Scattolini [2006], Dunbar [2007].

The CG approach (see Gilbert et al. [1995], Casavola et al. [2006]) is a well known and established methodology that provides a simple and effective way to enforce pointwise-in-time constraints along the trajectories of a closed-loop system. Namely, the CG is a nonlinear device added to a pre-compensated control system that, whenever necessary and on the basis of the knowledge of the actual measured state, modifies the reference to the closed-loop system so as to avoid constraint violations. In this paper, we introduce a new solution to the CG problem, hereafter referred to as the Steady-State CG (SS-CG) approach, that, at the price of some additional conservativeness, is able to accomplish the CG task in the absence of an explicit measure of the state. The idea behind such an approach is that, if sufficiently smooth transitions in the set-point modifications are enacted by the CG unit, then the state will not differ too much from the steady-state equilibrium. Clearly, the scheme remains a closed-loop strategy, because the reference modification is undertaken on the basis of the expected value of the current state. Extensions to the case of systems subject to bounded disturbances can be directly obtained by following the standard lines of Casavola et al. [2000] and will be reported in future works.

The peculiarities of the proposed SS-CG scheme make it an attractive solution for distributed frameworks because it alleviates the need to make the entire aggregate state known to all agents at each time instant, a requirement that would be unrealistic or would call for unrealistic communication infrastructures in some large scale applications. In this paper we present two preliminary communication-based distributed supervisory algorithms. Feasibility and stability of the presented approaches will be highlighted.

2. PROBLEM FORMULATION

Consider a set of N subsystems A = {1, . . . , N}. Each subsystem is regulated by a local controller which ensures stability and good closed-loop properties when the constraints are not active (small-signal regimes). Let the i-th closed-loop subsystem be described by the following discrete-time model

  x_i(t+1) = Φ_ii x_i(t) + G_i g_i(t) + Σ_{j∈A\{i}} Φ_ij x_j(t)
  y_i(t)   = H_i^y x_i(t)                                           (1)
  c_i(t)   = H_i^c x(t) + L_i g(t)

where: t ∈ Z_+, x_i ∈ R^{n_i} is the state vector (which includes the controller states under dynamic regulation), g_i ∈ R^{m_i} is the manipulable reference vector which, if no constraints (and no CG) were present, would coincide with the desired reference r_i ∈ R^{m_i}, and y_i ∈ R^{m_i} is the output vector which is required to track r_i. Finally, c_i ∈ R^{n_i^c} represents the local constrained vector which has to fulfill the set-membership constraint

  c_i(t) ∈ C_i,  ∀t ∈ Z_+,                                          (2)

C_i being a convex and compact set. It is worth pointing out that, in order to possibly characterize global (coupling) constraints amongst the states of different subsystems, the vector c_i in (1) is allowed to depend on the aggregate state and manipulable reference vectors x = [x_1^T, . . . , x_N^T]^T ∈ R^n, with n = Σ_{i=1}^N n_i, and g = [g_1^T, . . . , g_N^T]^T ∈ R^m, with m = Σ_{i=1}^N m_i. Moreover, we denote by r = [r_1^T, . . . , r_N^T]^T ∈ R^m, y = [y_1^T, . . . , y_N^T]^T ∈ R^m and c = [c_1^T, . . . , c_N^T]^T ∈ R^{n^c}, with n^c = Σ_{i=1}^N n_i^c, the other relevant aggregate vectors.
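To make the coupled model (1)-(2) concrete, the following is a minimal simulation sketch for N = 2 subsystems with a box-type local constraint set. All numerical matrices and bounds are hypothetical placeholders chosen for illustration only; they are not taken from the paper.

```python
import numpy as np

# Hypothetical closed-loop subsystem data for N = 2 agents (illustrative only).
N = 2
Phi = {  # Phi[i][j] couples the state of agent j into agent i, as in (1)
    0: {0: np.array([[0.8, 0.1], [0.0, 0.7]]), 1: 0.05 * np.eye(2)},
    1: {0: 0.05 * np.eye(2), 1: np.array([[0.8, 0.1], [0.0, 0.7]])},
}
G = {0: np.array([[0.2], [0.1]]), 1: np.array([[0.2], [0.1]])}
Hc = {0: np.array([[1.0, 0.0, 0.0, 0.0]]),      # c_i may depend on the aggregate state x
      1: np.array([[0.0, 0.0, 1.0, 0.0]])}
L = {0: np.array([[0.0]]), 1: np.array([[0.0]])}
c_bound = 1.0                                    # C_i = {c : |c| <= 1}, a compact convex box

x = {i: np.zeros(2) for i in range(N)}           # local states
g = {i: np.array([0.5]) for i in range(N)}       # constant manipulable references

for t in range(50):
    x_agg = np.concatenate([x[i] for i in range(N)])
    # constrained outputs c_i(t) = H_i^c x(t) + L_i g(t), checked against (2)
    for i in range(N):
        c_i = Hc[i] @ x_agg + L[i] @ g[i]
        assert np.all(np.abs(c_i) <= c_bound), f"constraint violated by agent {i} at t={t}"
    # state update x_i(t+1) = Phi_ii x_i + G_i g_i + sum_j Phi_ij x_j, as in (1)
    x = {i: G[i] @ g[i] + sum(Phi[i][j] @ x[j] for j in range(N)) for i in range(N)}
```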


The overall system arising from the composition of the above N subsystems can be described as

  x(t+1) = Φ x(t) + G g(t)
  y(t)   = H^y x(t)                                                 (3)
  c(t)   = H^c x(t) + L g(t)

where

  Φ = [Φ_11 ... Φ_1N; ... ; Φ_N1 ... Φ_NN],   G = blockdiag(G_1, ..., G_N),
  H^y = [H_1^y; ... ; H_N^y],   H^c = [H_1^c; ... ; H_N^c],   L = [L_1; ... ; L_N].

It is further assumed that

A1. The overall system (3) is asymptotically stable.
A2. System (3) is offset-free, i.e. H^y (I_n − Φ)^{-1} G = I_m.

Roughly speaking, the CG design problem we want to solve is that of locally determining, at each time step t and for each agent i ∈ A, a suitable reference signal g_i(t) which is the best approximation of r_i(t) and whose application never produces constraint violations, i.e. c_i(t) ∈ C_i, ∀t ∈ Z_+, ∀i ∈ A.

Classical centralized solutions to the above CG design problem (see Bemporad et al. [1997], Casavola et al. [2000]) have been achieved by finding, at each time t, a CG action g(t) as a function of the current reference r(t) and measured state x(t)

  g(t) := g(r(t), x(t))                                             (4)

such that g(t) is the best approximation of r(t) under the condition c(t) ∈ C, where C ⊆ C_1 × ... × C_N is the global admissible region. Here we focus on a slightly different approach to the CG design problem in which the explicit dependence on the state vector disappears. This is a convenient solution in a decentralized environment because it eliminates the need to share the state vector amongst the agents, which, as is well known (see Negenborn et al. [2008], Dunbar [2007]), is one of the main difficulties in defining decentralized schemes. Such an approach, hereafter referred to as the Steady-State CG (SS-CG), will be described in the next sections. The main idea is that, if the manipulable reference signal g(·) is slow enough w.r.t. the system dynamics, then, because of A1 and A2, the state x(t) will not differ too much from the closed-loop steady-state equilibrium that would correspond to the application of a constant set-point g(t−1) for a sufficient number of steps, i.e.

  x(t) ≈ x_{g(t−1)} := (I_n − Φ)^{-1} G g(t−1)                       (5)

The latter allows us to replace the dependence on the measured state x(t) with x_{g(t−1)} in (4). Moreover, because x_{g(t−1)} univocally depends on the command signal g(t−1) applied at the previous time step, we can finally reformulate the Steady-State CG problem as that of finding a command signal g(t) as a function of r(t) and g(t−1)

  g(t) = g(r(t), g(t−1))                                            (6)

where g(t) is the best approximation of r(t), to be computed as in the standard CG approach with the additional requirement that, at the next sampling time, the state x(t+1) not be far away from the new steady-state solution, i.e. x(t+1) ≈ x_{g(t)}.
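A short numerical sketch, under hypothetical data, of how the aggregate matrices of (3) can be assembled from the blocks of (1), how assumptions A1 (stability) and A2 (offset-free) can be checked, and how the steady-state equilibrium x_g of (5) is computed. The numbers below are placeholders and do not come from the paper.

```python
import numpy as np

# Hypothetical aggregate blocks for N = 2 two-state agents (illustrative only).
Phi = np.block([[np.array([[0.8, 0.1], [0.0, 0.7]]), 0.05 * np.eye(2)],
                [0.05 * np.eye(2), np.array([[0.8, 0.1], [0.0, 0.7]])]])
G = np.block([[np.array([[0.2], [0.1]]), np.zeros((2, 1))],
              [np.zeros((2, 1)), np.array([[0.2], [0.1]])]])
Hy = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])

n, m = Phi.shape[0], G.shape[1]

# A1: asymptotic stability of the aggregate closed loop (spectral radius < 1).
assert max(abs(np.linalg.eigvals(Phi))) < 1.0

# A2: offset-free tracking would require H^y (I - Phi)^{-1} G = I_m.
dc_gain = Hy @ np.linalg.solve(np.eye(n) - Phi, G)
print("DC gain (should be close to the identity if A2 holds):\n", dc_gain)

# Steady-state equilibrium map (5): x_g = (I - Phi)^{-1} G g.
g = np.array([0.5, -0.3])
x_g = np.linalg.solve(np.eye(n) - Phi, G @ g)
print("equilibrium state for constant g:", x_g)
```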
3. THE STEADY-STATE CG APPROACH

In order to make precise statements and comparisons, we first describe the basic centralized CG approach of Bemporad et al. [1997], Casavola et al. [2000]. For a constrained closed-loop system of the form (3) satisfying assumptions A1-A2, the standard CG design problem (4) can be solved by introducing, for a given δ > 0, the sets

  C^δ := C ⊖ B^δ
  W^δ := {g ∈ R^m : c̄_g ∈ C^δ}                                      (7)

where B^δ is the ball of radius δ centered at the origin and A ⊖ E is the Pontryagin set difference defined as {a : a + e ∈ A, ∀e ∈ E}. In particular, W^δ, which we assume non-empty, is the convex and closed set of all constant commands g whose corresponding equilibrium points c̄_g := H^c (I_n − Φ)^{-1} G g + L g satisfy the constraints with margin δ. Let us also introduce the virtual evolutions of the c-variable

  c(k, x(t), g(t)) := H^c ( Φ^k x(t) + Σ_{i=0}^{k−1} Φ^{k−i−1} G g(t) ) + L g(t)      (8)

along the virtual time k, from the initial condition x(t) at time k = 0 and under the application of a constant command g(t), ∀k. Then, for any given state x, we can define

  V(x) = {g ∈ W^δ : c(k, x, g) ∈ C, ∀k ∈ Z_+}.                       (9)

As a consequence, V(x) represents, if non-empty, the set of all constant commands in W^δ whose virtual c-evolutions starting from x at virtual time k = 0 satisfy the constraints also during transients. The standard CG design problem can then be solved by the following algorithm.

The standard CG Algorithm
repeat at each time t
  1.1 solve
        g(t) = arg min_{g ∈ V(x(t))} ||g − r(t)||^2                  (10)
  1.2 apply g(t)
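A minimal sketch of one standard CG step: candidate constant commands are screened over a finite virtual horizon using the prediction (8), the admissibility test (9), and the candidate closest to r(t) is applied as in (10). A coarse grid search stands in for the exact convex program purely for illustration, the δ-margin of W^δ is omitted for brevity, and all numerical data are hypothetical.

```python
import numpy as np
from itertools import product

def c_virtual(Phi, G, Hc, L, x0, g, k_max):
    """Virtual c-evolutions c(k, x0, g), k = 0..k_max, under a constant command g (cf. (8))."""
    out, x = [], x0.copy()
    for _ in range(k_max + 1):
        out.append(Hc @ x + L @ g)
        x = Phi @ x + G @ g              # closed-loop prediction with frozen command g
    return np.array(out)

def cg_step(Phi, G, Hc, L, x_t, r_t, grid, c_bound, k_max=100):
    """Pick the admissible constant command closest to r(t); grid-search stand-in for (10)."""
    best, best_cost = None, np.inf
    for g in product(*grid):                       # enumerate candidate commands
        g = np.array(g)
        preds = c_virtual(Phi, G, Hc, L, x_t, g, k_max)
        if np.all(np.abs(preds) <= c_bound):       # c(k, x_t, g) in C for all k, cf. (9)
            cost = np.sum((g - r_t) ** 2)
            if cost < best_cost:
                best, best_cost = g, cost
    return best                                     # None if no gridded command is admissible

# Hypothetical 2-state, 1-input closed loop with a box constraint |c| <= 1.5.
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])
G = np.array([[0.1], [0.1]])
Hc = np.array([[1.0, 0.0]])
L = np.array([[0.0]])
g_t = cg_step(Phi, G, Hc, L, x_t=np.zeros(2), r_t=np.array([2.0]),
              grid=[np.linspace(-2, 2, 41)], c_bound=1.5)
print("applied command:", g_t)
```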
The idea underlying the SS-CG approach is that of ensuring that any admissible variation of the manipulated reference g(·) always produces a guaranteed bounded perturbation of the actual closed-loop state around a suitable feasible equilibrium state. Such a property is ensured if the following technical expedients are adopted during the SS-CG computation:

(1) the computation of a new SS-CG action g(·) is performed every τ steps, τ being a suitable integer to be determined, rather than at each time t as in the standard CG approach. Moreover, each new SS-CG command is applied for exactly τ steps;
(2) the displacement between the new SS-CG command g(t) and the previous one g(t−τ) is explicitly bounded during the SS-CG computation, i.e.

      g(t) − g(t−τ) ∈ ΔG,                                            (11)

    where the integer τ > 0 and the closed and convex set ΔG ⊂ R^m are computed from the outset, as detailed below.

Let us rewrite the virtual evolution of the c-variable (8) as

  c(k, x(t), g(t)) = c̄_{g(t)} + H^c Φ^k (x(t) − x_{g(t)}).           (12)

The above expression shows that the predictions can be split into two terms: a steady-state component represented by c̄_{g(t)} and the transient evolution H^c Φ^k (x(t) − x_{g(t)}).


Then, because g(t) ∈ W^δ and, in turn, c̄_{g(t)} ∈ C^δ at each time t, the key idea behind the SS-CG approach is that the constraints can be satisfied, although in a quite arbitrary and conservative way, by taking care only of the transient component, on which the condition

  ||H^c Φ^k (x(t) − x_{g(t)})|| ≤ δ,  ∀k ≥ 0                          (13)

has to be enforced. One way to achieve (13) without assuming state availability is based on the idea that, a sufficiently long time τ after the application of a new SS-CG command, the transient contribution decreases and can be bounded within a certain fraction of its initial magnitude. More formally, let us introduce the following notion of dwelling time:

Definition (Dwelling Time) - The integer τ > 0 is said to be the dwelling time with parameter λ, 0 < λ < 1, for the pair (H^c, Φ), if it is the smallest integer such that

  ||H^c Φ^k x|| ≤ M(x), ∀k ≥ 0   ⟹   ||H^c Φ^{τ+k} x|| ≤ λ M(x), ∀k ≥ 0      (14)

holds true for each x ∈ R^n, with M(x) > 0 any arbitrarily chosen upper bound. □

Such a definition implies that if at time t−τ a certain command g(t−τ) such that

  ||H^c Φ^k (x(t−τ) − x_{g(t−τ)})|| ≤ δ,  ∀k ≥ 0                      (15)

is constantly applied to the system, the transient contribution from t onwards can be bounded as

  ||H^c Φ^k (x(t) − x_{g(t−τ)})|| ≤ λδ,  ∀k ≥ 0                       (16)

because the relationship

  x(t) − x_{g(t−τ)} = Φ^τ (x(t−τ) − x_{g(t−τ)})                       (17)

obviously holds true by the above arguments. Consider now the transient contribution to the constrained vector at time t, which depends on the new SS-CG command g(t) to be determined,

  ||H^c Φ^k (x(t) − x_{g(t)})||.                                       (18)

The latter, by introducing the τ-step incremental vector Δx_{g(t)} = x_{g(t)} − x_{g(t−τ)}, can be rewritten as

  ||H^c Φ^k (x(t) − x_{g(t−τ)}) − H^c Φ^k Δx_{g(t)}||.                 (19)

Moreover, if at time instant t−τ an SS-CG command g(t−τ) complying with (15) is applied, then one has

  ||H^c Φ^k (x(t) − x_{g(t−τ)}) − H^c Φ^k Δx_{g(t)}|| ≤ λδ + ||H^c Φ^k Δx_{g(t)}||.    (20)

The latter allows one to simplify the SS-CG design problem into that of selecting, every τ steps, a new command g(t) satisfying

  ||H^c Φ^k Δx_{g(t)}|| ≤ (1 − λ)δ,  ∀k ≥ 0.                           (21)

Finally, because Δx_g univocally depends on Δg(t) = g(t) − g(t−τ), we can formulate the Steady-State CG algorithm as follows.

The SS-CG Algorithm
repeat at each time t = τℓ, ℓ = 0, 1, . . .
  1.1 solve
        g(t) = arg min_g ||g − r(t)||^2_Ψ                             (22)
        subject to:  g ∈ W^δ,   (g − g(t−τ)) ∈ ΔG                     (23)
  1.2 apply g(t)

where Ψ = Ψ^T > 0 is a weighting matrix and ΔG is the closed and convex set of all possible τ-step incremental commands ensuring that inequality (21) holds true:

  ΔG = { Δg : ||H^c Φ^k (I − Φ)^{-1} G Δg|| ≤ (1−λ)δ, ∀k ≥ 0 }.        (24)

It is worth noting that the sets W^δ and ΔG and the dwelling time τ can be computed off-line from the outset. The following main properties can be proved for the above described SS-CG strategy (Casavola et al. [2009]).

Proposition 1. - Let assumptions A1-A2 be fulfilled. Consider system (3) along with the SS-CG selection rule and let an admissible command signal g(0) ∈ W^δ be applied at t = 0 such that (13) holds true. Then:
(1) the minimizer in (22) uniquely exists every τ steps and can be obtained by solving a convex constrained optimization problem;
(2) constraints are fulfilled for all t ∈ Z_+;
(3) the overall system is asymptotically stable and, whenever r(t) ≡ r, the sequence of g(t) converges in finite time either to r or to its best steady-state admissible approximation g(t) → r̂ := arg min_{g∈W^δ} ||g − r||^2_Ψ. □
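Since the dwelling time of (14) and the incremental set ΔG of (24) involve only (H^c, Φ, G), they can indeed be computed off-line. The sketch below gives a rough, sampling-based numerical estimate of τ and a membership test for ΔG; it is an illustration under hypothetical data, not the exact set construction used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical aggregate data (illustrative only).
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])
G = np.array([[0.1], [0.1]])
Hc = np.array([[1.0, 0.0]])
lam, delta, K = 0.5, 0.2, 100            # contraction factor lambda, margin delta, horizon

def transient_profile(x, horizon):
    """||H^c Phi^k x|| for k = 0..horizon."""
    vals, z = [], x.copy()
    for _ in range(horizon + 1):
        vals.append(np.linalg.norm(Hc @ z))
        z = Phi @ z
    return np.array(vals)

def estimate_tau(n_samples=100):
    """Sampling-based estimate of the dwelling time (14): smallest tau such that the
    transient norm after tau steps drops below lambda times its earlier peak, checked
    on random directions (an estimate, not a proof of (14))."""
    tau = 1
    while True:
        ok = True
        for _ in range(n_samples):
            prof = transient_profile(rng.standard_normal(Phi.shape[0]), K + tau)
            if prof[tau:].max() > lam * prof[:K + 1].max() + 1e-12:
                ok = False
                break
        if ok:
            return tau
        tau += 1

def in_DeltaG(dg):
    """Membership test for the incremental command set of (24), over a finite horizon."""
    x_shift = np.linalg.solve(np.eye(Phi.shape[0]) - Phi, G @ dg)   # (I - Phi)^{-1} G dg
    return transient_profile(x_shift, K).max() <= (1.0 - lam) * delta

print("estimated dwelling time:", estimate_tau())
print("is dg = 0.1 an admissible increment?", in_DeltaG(np.array([0.1])))
```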
4. DISTRIBUTED SS-CG

Here we focus on two distributed CG schemes in which the agents are connected by a communication network. Such a network is modeled by means of a communication graph: an undirected graph G = (A, B), where A denotes the set of the N subsystems and B ⊆ A × A the set of edges representing communication links amongst agents. More precisely, the edge (i, j) belongs to B if and only if the agents governing the i-th and the j-th subsystems are able to directly share information within a sampling time. The communication graph is assumed to be connected, i.e. for each couple of agents i ∈ A, j ∈ A there exists at least one sequence of edges connecting i and j; the minimum number of edges connecting two agents will be denoted by d_{i,j}. The set of agents with a direct connection to the i-th agent will be referred to as the neighborhood of the i-th agent, N_i = {j ∈ A : d_{i,j} = 1}.
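The graph quantities used below (the distances d_{i,j}, the neighborhoods N_i and, later, d_{max,i}) can be computed once from the edge set, for instance with a breadth-first search. A small self-contained sketch on a hypothetical four-agent line graph:

```python
from collections import deque

# Hypothetical connected communication graph on A = {0, 1, 2, 3} (a line: 0-1-2-3).
A = [0, 1, 2, 3]
B = {(0, 1), (1, 2), (2, 3)}            # undirected edges

adj = {i: set() for i in A}
for i, j in B:
    adj[i].add(j)
    adj[j].add(i)

def bfs_distances(src):
    """Minimum number of edges d_{src,j} from src to every agent j."""
    d = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                queue.append(v)
    return d

d = {i: bfs_distances(i) for i in A}                 # d[i][j] = d_{i,j}
N = {i: {j for j in A if d[i][j] == 1} for i in A}   # neighborhoods N_i
d_max = {i: max(d[i][j] for j in A) for i in A}      # d_{max,i} = max_j d_{i,j}
print(N, d_max)
```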


4.1 Sequential Procedure (S-SSCG)

Let G be a Hamiltonian graph and, without loss of generality, let the sequence H = {1, 2, ..., N−1, N} be a Hamiltonian cycle. The idea behind the approach is that only one agent per decision time is allowed to manipulate its local command signal g_i(t), while all the others are instructed to hold their previous values. After each decision, the agent in charge updates the global command received from the previous updating agent and forwards this new value to the next updating agent in the cycle. Such a policy implies that, possibly after a preliminary initialization cycle, at each decision instant the agent in charge always knows the whole aggregate vector g(t−τ). By exploiting this observation we can define the following distributed SS-CG algorithm:

Sequential-SSCG Algorithm (S-SSCG) - Agent i
repeat at each time t = τℓ, ℓ = 0, 1, . . .
  1.1 if (ℓ mod N) == i
      1.1.1 receive g(t−τ) from the previous agent in the cycle H
      1.1.2 solve
              g_i(t) = arg min_{g_i} ||g_i − r_i(t)||^2_{Ψ_i}
              subject to:  g(t) = [g_1^T(t−τ), ..., g_i^T, ..., g_N^T(t−τ)]^T ∈ W^δ      (25)
                           (g_i − g_i(t−τ)) ∈ ΔG'_i
      1.1.3 apply g_i(t)
      1.1.4 update g(t) = [g_1^T(t−τ), ..., g_i(t), ..., g_N^T(t−τ)]^T
      1.1.5 transmit g(t) to the next agent in H
  1.2 else
      1.2.1 apply g_i(t) = g_i(t−τ)

where Ψ_i > 0 is a weighting matrix, ℓ mod N is the remainder of the integer division ℓ/N and

  ΔG'_i = { Δg_i : [0, ..., 0, Δg_i^T, 0, ..., 0]^T ∈ ΔG }

is the set of all possible command variations for g_i in the case that the commands of all the other agents are frozen.

Proposition 2. - Let assumptions A1-A2 be fulfilled. Consider system (3) as the composition of N subsystems of the form (1) along with the distributed S-SSCG selection rule, and let an admissible aggregate command signal g(0) = [g_1^T(0), . . . , g_N^T(0)]^T ∈ W^δ be applied at t = 0 such that (13) holds true. Then:
(1) for each agent i ∈ A the algorithm produces at each time step a feasible local command g_i(t);
(2) constraints are fulfilled for all t ∈ Z_+;
(3) the overall system is asymptotically stable. In particular, whenever r(t) ≡ r, the sequence of the aggregate vectors g(t) = [g_1^T(t), . . . , g_N^T(t)]^T converges in finite time either to r or to an admissible approximation in W^δ which is Pareto-optimal w.r.t. the local objective functionals ||g_i − r_i||^2_{Ψ_i}, i = 1, ..., N. □

Remark 3 - The proof of Proposition 2 is omitted here for brevity. See Casavola et al. [2009] for details. □
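A schematic sketch of the sequential protocol: one agent per decision instant updates its local command while the others hold theirs, and the updated aggregate vector is forwarded along the Hamiltonian cycle. The local problem (25) is abstracted by a hypothetical solve_local() placeholder and the message passing is emulated with a shared variable; this is an illustration of the token-passing structure, not of the actual constrained solver.

```python
import numpy as np

N = 3                                    # number of agents on the cycle H = (0, 1, 2)
m_i = 1                                  # local command dimension (hypothetical)
r = np.array([[1.0], [0.5], [-0.5]])     # local references r_i

def solve_local(i, g_aggregate, r_i):
    """Placeholder for the local problem (25): here just a small bounded step toward r_i.
    A real implementation would enforce g in W^delta and (g_i - g_i(t - tau)) in DeltaG'_i."""
    step = np.clip(r_i - g_aggregate[i], -0.1, 0.1)    # crude stand-in for DeltaG'_i
    return g_aggregate[i] + step

token = np.zeros((N, m_i))               # aggregate command carried around the cycle, g(0)
for ell in range(30):                    # decision instants t = tau * ell
    i = ell % N                          # agent in charge at this decision instant
    # 1.1.1 receive g(t - tau) from the previous agent (emulated by `token`)
    # 1.1.2 solve the local problem and 1.1.3 apply the new local command
    token[i] = solve_local(i, token, r[i])
    # 1.1.4 - 1.1.5 update the aggregate vector and transmit it to the next agent in H;
    # all other agents keep g_j(t) = g_j(t - tau)
print(token.ravel())
```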

4.2 Parallel SSCG (P-SSCG)

The main drawback of the S-SSCG algorithm is that, every τ time instants, only one agent at a time is allowed to modify its local command. In order to overcome such a limitation, every agent should be enabled to select its local command every τ time instants. The two key points in building up such a strategy are i) the definition of the information available to each agent and ii) the determination of a set of selection rules such that the composition of all feasible local commands satisfies the overall constraints (23). Here, we assume that each agent acts as a gateway in redistributing data amongst the other, not directly connected, agents. Then, at each time instant t, the i-th agent has knowledge of the following vector:

  ζ_i(t−τ) = [g_1^T(t − τ d_{i,1}), . . . , g_i^T(t−τ), . . . , g_N^T(t − τ d_{i,N})]^T

It is important to highlight that, under the above assumption, the most recent information on the applied commands shared by all agents at each decision time is given by the vector

  ζ(t−τ) = [g_1^T(t − τ d_{max,1}), ..., g_N^T(t − τ d_{max,N})]^T

where d_{max,i} = max_{j∈A} d_{i,j}. In the following, a simple selection rule is presented, based on the idea that agents can autonomously select their own commands whenever their states are in a safe area of the constrained feasible set whereas, on the contrary, they have to resort to the sequential SSCG algorithm when autonomous decisions could yield hazardous situations. More precisely:

• if the shared information ζ(t−τ) is such that any possible (past and present) choice of the agents never makes the aggregate command violate the constraints, i.e. ζ(t−τ) ∈ W^com, where W^com := W^δ ⊖ (d_{max,1} ΔG_1 × . . . × d_{max,N} ΔG_N), then each agent can select its local command according to the solution of the following optimization problem

    g_i(t) = arg min_{g_i} ||g_i − r_i(t)||^2_{Ψ_i}
    subject to:  (g_i − g_i(t−τ)) ∈ ΔG_i                              (26)

  where ΔG_1 ⊂ R^{m_1}, ..., ΔG_N ⊂ R^{m_N} are sets of possible reference variations for each agent, pre-determined so that ΔG_1 × ... × ΔG_N ⊆ ΔG (see Casavola et al. [2009] for details);
• otherwise, after a proper initialization phase, a switch to the sequential S-SSCG Algorithm should be undertaken and the commands selected on its basis.

We can then present the following parallel version of the distributed SS-CG algorithm:

Parallel-SSCG Algorithm (P-SSCG) - Agent i
repeat at each time t = τℓ, ℓ = 0, 1, . . .
  1.1 if ζ(t−τ) ∈ W^com
          solve (26)
          β = 1;
      else if (ℓ mod N) ≠ i
          g_i(t) = g_i(t−τ)
          β = β + 1;
      else
          solve
            g_i(t) = arg min_{g_i} ||g_i − r_i(t)||^2_{Ψ_i}
            subject to:  (g_i − g_i(t−τ)) ∈ ΔG'_i
                         g(t) = [g_1^T, ..., g_i^T, ..., g_N^T(t−τ)]^T ∈ W^δ              (27)
                         g_j ∈ G_{i,j}(ζ_i(t−τ), β),  ∀j ∈ A \ {i}
          β = β + 1;
  1.2 apply g_i(t)
  1.3 update ζ_i(t)
  1.4 transmit ζ_i(t) to the neighborhood N_i
  1.5 receive ζ_j(t) from all neighbors j ∈ N_i

where β is a counter, initialized as β = 1, that is incremented for each time period spent in the sequential mode, and G_{i,j}(ζ_i(t−τ), β) is the set of all the possible values the j-th command could have assumed from the i-th viewpoint:

  G_{i,j}(ζ_i(t−τ), β) = { g_j : g_j − g_j(t − τ d_{i,j}) ∈ (min{0, β − d_{i,j}}) ΔG_i }.    (28)

Please note that the introduction of such a set and of the associated condition in the optimization problem (27) forces the initialization of the sequential algorithm. In fact, for β > d_{i,j}, G_{i,j}(ζ_i(t−τ), β) = {g_j(t − τ d_{i,j})} reduces to a singleton in (28) and the algorithm coincides with the S-SSCG scheme seen in the previous subsection.

Remark 4 - It has been proved in Casavola et al. [2009] that the same properties pertaining to the S-SSCG scheme and reported in Proposition 2 still apply to the P-SSCG algorithm. Details are omitted here for space reasons. Notice also that other, different selection rules can be defined and will be the subject of further investigation. □
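The parallel scheme reduces, per agent, to a mode switch: if the shared (possibly outdated) information ζ(t−τ) certifies membership in the safe set W^com, every agent solves the decoupled problem (26) in parallel; otherwise the agents fall back to the sequential rule and track the counter β used in (27)-(28). A schematic sketch of this branching logic, with hypothetical membership tests and placeholder local solvers:

```python
import numpy as np

def in_W_com(zeta):
    """Placeholder for the safe-set test zeta(t - tau) in W^com (problem-specific)."""
    return np.all(np.abs(zeta) <= 0.8)           # hypothetical box approximation

def solve_parallel(g_i, r_i, DG_i=0.1):
    """Decoupled problem (26): move toward r_i within the local increment set DeltaG_i."""
    return g_i + np.clip(r_i - g_i, -DG_i, DG_i)

def solve_sequential(i, zeta_i, r_i, beta):
    """Placeholder for problem (27); a real solver would also constrain the unknown
    neighbour commands g_j to the sets G_{i,j}(zeta_i, beta) of (28)."""
    return zeta_i[i] + np.clip(r_i - zeta_i[i], -0.1, 0.1)

def p_sscg_step(i, zeta_i, zeta, r_i, ell, beta, N):
    """One decision instant of agent i following the P-SSCG branching logic."""
    if in_W_com(zeta):                            # safe: everyone updates in parallel
        return solve_parallel(zeta_i[i], r_i), 1
    if ell % N != i:                              # sequential mode, not this agent's turn
        return zeta_i[i], beta + 1
    return solve_sequential(i, zeta_i, r_i, beta), beta + 1

# Hypothetical two-agent call for a single decision instant.
zeta_i = np.array([0.9, 0.2])                     # agent i's (delayed) view of the commands
g_new, beta = p_sscg_step(i=0, zeta_i=zeta_i, zeta=zeta_i, r_i=1.0, ell=0, beta=1, N=2)
print(g_new, beta)
```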


Fig. 1. Planar system of two dynamically coupled masses.


5. ILLUSTRATIVE EXAMPLE: COORDINATION OF AUTONOMOUS VEHICLES

In Fig. 1, a system consisting of two dynamically coupled masses is presented. Such a system, which will be referred to as the m2-system, represents in essence a pair of unmanned vehicles transporting an elastic membrane. The system is described by the following equations

  m_i ẍ_i = − Σ_{j∈N_i} k (x_i − x_j) − Σ_{j∈N_i} β (ẋ_i − ẋ_j) + F_i^x
  m_i ÿ_i = − Σ_{j∈N_i} k (y_i − y_j) − Σ_{j∈N_i} β (ẏ_i − ẏ_j) + F_i^y        (29)

where (x_i, y_i), i ∈ A = {1, 2}, are the coordinates of the i-th mass position w.r.t. a Cartesian reference frame and (F_i^x, F_i^y), i ∈ A, are the components, along the same reference frame, of the forces acting as inputs for the two slave systems. N_i denotes the neighborhood of the i-th agent, in this case N_1 = {2} and N_2 = {1}. The following system parameters are assumed: β = 1 [N sec/m], k = 1 [N/m], m_i = 1 [kg], ∀i ∈ A, and a sampling time T_c = 0.1 [sec] is employed in the simulations. Each subsystem has been precompensated by an optimal LQ state-feedback local controller.

The problem we consider here is the coordination of the planar motions of those two masses along two continuously parameterized paths r_1(η_1) ∈ R^2, r_2(η_2) ∈ R^2, where η_1 ∈ [0, ∞) and η_2 ∈ [0, ∞) are real parameters. It is required that each agent tracks its own path by progressively increasing the value of η_i(t) at the maximum speed complying with the following local and coupling constraints

  |F_i^j(t)| ≤ 0.5 [N],           j = x, y,  ∀i ∈ A,
  |ξ_i(t) − r_i(η_i(t))| ≤ 0.05 [m],   ∀i ∈ A,                        (30)
  |η_i(t) − η_j(t)| ≤ 0.06,        ∀i ∈ A,  ∀j ∈ N_i.

The first set of inequalities represents input-saturation constraints on the four forces F_i^x and F_i^y, i ∈ A, acting as inputs of the vehicles. The second set of constraints represents the component-wise accuracy of the vehicle positions ξ_i(t) = [x_i(t), y_i(t)] with respect to the target motion r_i(η_i(t)). Finally, the third group of constraints represents the coordination constraints between the agents: the two agents must never be too far apart in the parameter progression, in order to maintain the formation shape.
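A sketch of how the m2-system (29) can be put in state-space form and discretized for simulation with the parameters above (k = 1 N/m, β = 1 N·sec/m, m = 1 kg, T_c = 0.1 s). Only the x-direction is shown (the y-direction is identical and decoupled); the LQ precompensation and the CG layers are omitted, and the zero-order-hold discretization via the augmented matrix exponential is one standard choice, not necessarily the one used by the authors.

```python
import numpy as np
from scipy.linalg import expm

# m2-system parameters from the example.
k, beta, m, Tc = 1.0, 1.0, 1.0, 0.1

# x-direction dynamics of (29); state [x1, xdot1, x2, xdot2], inputs [F1x, F2x].
A = np.array([[0, 1, 0, 0],
              [-k / m, -beta / m, k / m, beta / m],
              [0, 0, 0, 1],
              [k / m, beta / m, -k / m, -beta / m]])
B = np.array([[0, 0],
              [1 / m, 0],
              [0, 0],
              [0, 1 / m]])

# Zero-order-hold discretization via the augmented matrix exponential.
M = expm(np.block([[A, B], [np.zeros((2, 6))]]) * Tc)
Ad, Bd = M[:4, :4], M[:4, 4:]

# Constraint structure of (30), checked pointwise in time on a simulated trajectory.
F_max, track_tol, sync_tol = 0.5, 0.05, 0.06

def constraints_ok(F, xi, r_eta, eta):
    """F: forces per agent/axis, xi: positions, r_eta: path points r_i(eta_i), eta: parameters."""
    return (np.all(np.abs(F) <= F_max)
            and np.all(np.abs(xi - r_eta) <= track_tol)
            and abs(eta[0] - eta[1]) <= sync_tol)

print(Ad.round(3), Bd.round(3), sep="\n")
```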
In this example we consider the rigid motions r_1(η), r_2(η), for η ∈ [0, 4], reported in Figure 3, and we compare in Figures 3.b-3.d the actual system trajectories achieved by the centralized standard CG strategy (10) and by the two decentralized approaches presented here, over the first 1000 simulation steps. These trajectories correspond to the evolutions of η_i(t), i = 1, 2, shown in Figure 4. As expected, the use of decentralized schemes introduces a certain level of conservativeness w.r.t. the standard centralized CG approach. However, the performance, especially that of the P-SSCG method, could be adequate in certain applications.

In order to evaluate the scalability of the proposed algorithms, formations of three and four masses, hereafter denoted as the m3-system and the m4-system, have been considered, with masses on the vertices of an equilateral triangle for the m3-system (see Figure 2.a) and of a square for the m4-system (see Figure 2.b), and coupling elements on the edges. Simulation results on the trajectories are depicted in Figures 5 and 6, respectively. Finally, in Figure 7 some comparisons regarding CPU usage are reported. It is possible to note that the standard CG algorithm requires a much higher computational time than the two decentralized counterparts, and that increasing the number of agents produces an approximately linear increase in the computational burden.

Fig. 2. a) Planar system of three dynamically coupled masses (m3-system). b) Planar system of four dynamically coupled masses (m4-system).

Fig. 3. Tracking trajectories for the m2-system: a) Reference trajectories, b) Standard CG, c) S-SSCG, d) P-SSCG.

6. CONCLUSIONS

In this paper, a distributed implementation of CG schemes has been developed for dynamically coupled linear systems subject to local and global constraints. The key point was to resort to a new CG scheme that, thanks to the asymptotic stability of the pre-compensated system, does not need an explicit measure of the state. Two algorithms for the case where communication amongst the agents is allowed have been presented, and results on constraint fulfillment and stability have been highlighted. Comparisons with centralized solutions have been presented and commented on in the final illustrative example. The presented results are encouraging and stimulate further research on the topic.


Fig. 4. Computed η_1(t), η_2(t) for the m2-system: Standard CG (-), P-SSCG (-.-), S-SSCG (- -).

Fig. 6. Tracking trajectories for the m4-system: a) Reference trajectories, b) Standard CG, c) S-SSCG, d) P-SSCG.
Fig. 5. Tracking trajectories for the m3-system: a) Reference trajectories, b) Standard CG, c) S-SSCG, d) P-SSCG.

Fig. 7. Mean CPU usage for each agent (sec).

REFERENCES

T. Keviczky. Decentralized Receding Horizon Control of Large Scale Dynamically Decoupled Systems. PhD Thesis, University of Minnesota, USA, 2005.
A.N. Venkat. Distributed Model Predictive Control: Theory and Applications. PhD Thesis, University of Wisconsin, USA, 2006.
Ibraheem, P. Kumar, and D.P. Kothari. Recent Philosophies of Automatic Generation Control Strategies in Power Systems. IEEE Trans. on Power Systems, 20(1), 2005.
E. Camponogara, D. Jia, B.H. Krogh and S. Talukdar. Distributed model predictive control. IEEE Control Systems Magazine, February, pp. 44-52, 2002.
R. D'Andrea and G.E. Dullerud. Distributed Control Design for Spatially Interconnected Systems. IEEE Trans. Automat. Control, 48(9), pp. 1478-1495, 2003.
A.N. Venkat, J.B. Rawlings and S.J. Wright. Stability and optimality of distributed model predictive control. IEEE CDC-ECC '05, Sevilla, Spain, pp. 6680-6685, 2005.
R.R. Negenborn, B. De Schutter, and J. Hellendoorn. Multi-agent model predictive control for transportation networks: Serial versus parallel schemes. Engineering Applications of Artificial Intelligence, 21(3), pp. 353-366, 2008.
L. Magni and R. Scattolini. Stabilizing decentralized model predictive control of nonlinear systems. Automatica, 42, pp. 1231-1236, 2006.
W.B. Dunbar. Distributed receding horizon control of dynamically coupled nonlinear systems. IEEE Trans. Automat. Control, 52(7), pp. 1249-1263, 2007.
E.G. Gilbert, I. Kolmanovsky and K. Tin Tan. Discrete-time Reference Governors and the Nonlinear Control of Systems with State and Control Constraints. International Journal on Robust and Nonlinear Control, 5, pp. 487-504, 1995.
A. Bemporad, A. Casavola and E. Mosca. Nonlinear Control of Constrained Linear Systems via Predictive Reference Management. IEEE Trans. Automat. Control, 42, pp. 340-349, 1997.
A. Casavola, M. Papini and G. Franze. Supervision of networked dynamical systems under coordination constraints. IEEE Trans. Automat. Control, 51(3), pp. 421-437, 2006.
A. Casavola, E. Mosca and D. Angeli. Robust command governors for constrained linear systems. IEEE Trans. Automat. Control, 45, pp. 2071-2077, 2000.
A. Casavola, E. Garone, F. Tedesco. Decentralized Steady-State Command Governor. Technical Report DEIS-18/02, DEIS, University of Calabria, 2009.

