Let $\lambda_{\max}(P)$ and $\lambda_{\min}(P)$ denote the largest and smallest eigenvalues of $P$, respectively. Often, the notation $x$ is understood to mean $x(t)$ at some instant of time $t \in \mathbb{R}$. The objective is to stabilize a group of $N_a$ dynamically coupled agents toward the origin in a cooperative and distributed way using RHC. For each agent $i \in \{1, \ldots, N_a\}$, the state and control vectors are denoted $z_i(t) \in \mathbb{R}^n$ and $u_i(t) \in \mathbb{R}^m$, respectively, at any time $t \geq t_0 \in \mathbb{R}$. The dimensions of every agent's state (control) vector are assumed to be the same, for notational simplicity and without loss of generality. The concatenated vectors are denoted $z = (z_1, \ldots, z_{N_a})$ and $u = (u_1, \ldots, u_{N_a})$.
The dynamic coupling between the agents is topologically identified by a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, \ldots, N_a\}$ is the set of nodes (agents) and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of all directed edges between nodes in the graph. The set $\mathcal{E}$ is defined in the following way. If any components of $z_j$ appear in the dynamic equation for agent $i$, for some $j \in \mathcal{V}$, it is said that $j$ is an upstream neighbor of agent $i$, and $\mathcal{N}_i^u \subseteq \mathcal{V}$ denotes the set of upstream neighbors of any agent $i \in \mathcal{V}$. The set of all directed edges is defined as $\mathcal{E} = \{(i, j) \in \mathcal{V} \times \mathcal{V} \mid j \in \mathcal{N}_i^u, i \in \mathcal{V}\}$. For every $i \in \mathcal{V}$, it is assumed that $z_i$ appears in the dynamic equation for $i$, and so $i \in \mathcal{N}_i^u$ for every $i \in \mathcal{V}$. In the language of graph theory, then, every node has a self-loop edge in $\mathcal{E}$. Note that $j \in \mathcal{N}_i^u$ does not necessarily imply $i \in \mathcal{N}_j^u$.
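The upstream/downstream bookkeeping above can be sketched directly. The three-agent edge set below is a made-up illustration, and `neighbor_sets` is a hypothetical helper, not from the paper:

```python
# Sketch: recovering upstream/downstream neighbor sets from a directed
# edge list E = {(i, j) : j is an upstream neighbor of i}.
def neighbor_sets(num_agents, edges):
    """Return (N_up, N_down) as dicts of sets, including the self-loops."""
    N_up = {i: {i} for i in range(1, num_agents + 1)}    # i is always in N_up[i]
    N_down = {i: {i} for i in range(1, num_agents + 1)}
    for (i, j) in edges:
        N_up[i].add(j)      # j's state enters agent i's dynamics
        N_down[j].add(i)    # equivalently, i is downstream of j
    return N_up, N_down

# Example: agent 2 depends on 1 and 3; agent 3 depends on 2 (plus self-loops).
N_up, N_down = neighbor_sets(3, [(2, 1), (2, 3), (3, 2)])
assert N_up[2] == {1, 2, 3} and N_down[1] == {1, 2}
# j in N_up[i]  <=>  i in N_down[j], as noted in the text
assert all((i in N_down[j]) == (j in N_up[i]) for i in N_up for j in N_up)
```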
It will also be useful to reference the set of agents for which any of the components of $z_i$ arises in their dynamical equation. This set is referred to as the downstream neighbors of agent $i$, and is denoted $\mathcal{N}_i^d$. The set of all directed edges can be equivalently defined as $\mathcal{E} = \{(i, j) \in \mathcal{V} \times \mathcal{V} \mid i \in \mathcal{N}_j^d, i \in \mathcal{V}\}$. Note that $j \in \mathcal{N}_i^u$ if and only if $i \in \mathcal{N}_j^d$, for any $i, j \in \mathcal{V}$. It is assumed in this paper that the graph $\mathcal{G}$ is connected. Consequently, for every $i \in \mathcal{V}$, the set $(\mathcal{N}_i^d \cup \mathcal{N}_i^u) \setminus \{i\} \neq \emptyset$, and every agent is dynamically coupled to at least one other agent. It is also assumed that agents can receive information directly from each and every upstream neighbor, and agents can transmit information directly to each and every downstream neighbor. The coupled time-invariant nonlinear system dynamics for each agent $i \in \mathcal{V}$ are given by
$$\dot{z}_i(t) = f_i(z_i(t), z_{-i}(t), u_i(t)), \quad t \geq t_0, \qquad (1)$$

where $z_{-i} = (z_{j_1}, \ldots, z_{j_l})$, $l = |\mathcal{N}_i^u|$, denotes the concatenated vector of the states of the upstream neighbors of $i$. Each agent $i$ is also subject to the decoupled input constraints $u_i(t) \in \mathcal{U}$, $t \geq t_0$. The set $\mathcal{U}^N$ is the $N$-times Cartesian product $\mathcal{U} \times \cdots \times \mathcal{U}$. In concatenated vector form, the system dynamics are

$$\dot{z}(t) = f(z(t), u(t)), \quad t \geq t_0, \qquad (2)$$

given $z(t_0)$, and $f = (f_1, \ldots, f_{N_a})$.
Assumption 1: The following holds: (a) $f$ is $C^2$ and $0 = f(0, 0)$; (b) system (2) has a unique solution for any $z(t_0)$ and any piecewise right-continuous control $u : [t_0, \infty) \to \mathcal{U}^{N_a}$; (c) $\mathcal{U} \subset \mathbb{R}^m$ is compact, containing the origin in its interior.
Consider now the linearization of (1) around the origin, denoting $A_{il} = \partial f_i/\partial z_l(0, 0)$ and $B_i = \partial f_i/\partial u_i(0, 0)$. As in many RHC formulations, a feedback controller for which the closed-loop system is asymptotically stabilized inside a neighborhood of the origin will be utilized.
Assumption 2: For every agent $i \in \mathcal{V}$, there exists a decoupled static feedback $u_i = K_i z_i$ such that $A_{di} \triangleq A_{ii} + B_i K_i$ is Hurwitz, and the closed-loop linear system $\dot{z} = A_c z$ is asymptotically stable, where $A_c \triangleq [f_z(0, 0) + f_u(0, 0)K]$ and $K = \mathrm{diag}(K_1, \ldots, K_{N_a})$.
The decoupled linear feedbacks above are referred to as terminal controllers. Associated with the closed-loop linearization, denote the block-diagonal Hurwitz matrix $A_d = \mathrm{diag}(A_{d1}, \ldots, A_{dN_a})$ and the off-diagonal matrix $A_o = A_c - A_d$. Assumption 2 inherently presumes decoupled stabilizability and that the coupling between subsystems in the linearization is sufficiently weak, as discussed and quantified in the survey paper [15]. The terminal controllers will be employed in a prescribed neighborhood of the origin.
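A minimal numerical check of Assumption 2 can be sketched as follows. The block values are illustrative (the gain and $A_{11}$ are borrowed from the Section V example), the two-agent coupling strength is made up, and `is_hurwitz` is a hypothetical helper:

```python
import numpy as np

def is_hurwitz(A):
    """All eigenvalues strictly in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Decoupled linearization block A_ii, input matrix B_i, candidate gain K_i
A11 = np.array([[0.0, 1.0], [-1.0, 0.1]])
B1 = np.array([[0.0], [1.0]])
K1 = np.array([[-3.6, -5.3]])

Ad1 = A11 + B1 @ K1                 # A_di = A_ii + B_i K_i
assert is_hurwitz(Ad1)              # decoupled stabilizability

# Two identical agents with weak symmetric coupling eps in A_c:
eps = 0.05
Ac = np.block([[Ad1, eps * np.eye(2)], [eps * np.eye(2), Ad1]])
Ad = np.block([[Ad1, np.zeros((2, 2))], [np.zeros((2, 2)), Ad1]])
Ao = Ac - Ad                        # off-diagonal coupling part
assert is_hurwitz(Ac)               # closed-loop linearization stable
```

For weak enough coupling, the eigenvalues of $A_c$ stay close to those of the block-diagonal $A_d$, which is the intuition behind the assumption.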
III. DISTRIBUTED RECEDING HORIZON CONTROL

In this section, $N_a$ separate optimal control problems are defined, along with the distributed RHC algorithm. In every distributed optimal control problem, the same constant prediction horizon $T \in (0, \infty)$ and constant update period $\delta \in (0, T]$ are used. In practice, the update period $\delta$ is typically the sample interval. In the distributed implementation presented here, additional conditions on $\delta$ are required, as quantified in the next section. Denote the update time $t_k = t_0 + \delta k$, where $k \in \mathbb{N} = \{0, 1, 2, \ldots\}$. In the following implementation, every distributed RHC law is updated globally synchronously, i.e., at the same instant of time $t_k$ for the $k$th update.
At each update, every agent optimizes only for its own predicted open-loop control, given its current state. Since the dynamics of each agent $i$ depend on the states $z_{-i}$, that agent will presume some trajectories for $z_{-i}$ over each prediction horizon. To that end, prior to each update, each agent $i$ receives an assumed state trajectory $\hat{z}_j$ from each upstream neighbor $j \in \mathcal{N}_i^u$. Likewise, agent $i$ transmits an assumed state trajectory $\hat{z}_i$ to every downstream neighbor $j \in \mathcal{N}_i^d$, prior to each update. By design, then, the assumed state trajectory for any agent is the same in the distributed optimal control problem of every downstream neighbor. Since the models are used with assumed trajectories for upstream neighbors, there will be a discrepancy, over each optimization time window, between the predicted open-loop trajectory and the actual trajectory that results from every agent applying the predicted control. This discrepancy is identified by using the following notation. Recall that $z_i(t)$ and $u_i(t)$ are the actual state and control, respectively, for each agent $i \in \mathcal{V}$ at any time $t \geq t_0$. Associated with update time $t_k$, the trajectories for each agent $i \in \mathcal{V}$ are denoted

$z_i^p(t; t_k)$ — the predicted state trajectory,
$\hat{z}_i(t; t_k)$ — the assumed state trajectory,
$u_i^p(t; t_k)$ — the predicted control trajectory,

where $t \in [t_k, t_k + T]$. Consistent with the ordering of $z_{-i}$, let $\hat{z}_{-i}(\cdot; t_k)$ be the assumed open-loop state trajectories of the upstream neighbors of $i$. For any agent $i \in \mathcal{V}$, then, the predicted state trajectory satisfies
$$\dot{z}_i^p(t; t_k) = f_i(z_i^p(t; t_k), \hat{z}_{-i}(t; t_k), u_i^p(t; t_k)), \qquad (3)$$

for all $t \in [t_k, t_k + T]$, given $z_i^p(t_k; t_k) = z_i(t_k)$. The assumed state trajectory for each agent $i \in \mathcal{V}$ is given by

$$\hat{z}_i(t; t_k) = \begin{cases} z_i^p(t; t_{k-1}), & t \in [t_k, t_{k-1} + T) \\ z_i^K(t), & t \in [t_{k-1} + T, t_k + T] \end{cases} \qquad (4)$$
where $z_i^K$ is the solution to $\dot{z}_i^K(t) = A_{di} z_i^K(t)$ with initial condition $z_i^K(t_{k-1} + T) = z_i^p(t_{k-1} + T; t_{k-1})$. By construction, each assumed state trajectory $\hat{z}_i$ is the remainder of the previously predicted trajectory, concatenated with the closed-loop linearization response that ignores coupling. The collective actual state trajectory of the agents over any update window $[t_k, t_{k+1})$ is given by

$$\dot{z}(t) = f(z(t), u^p(t; t_k)), \quad t \in [t_k, t_{k+1}), \qquad (5)$$
given $z(t_k)$. While the actual and predicted state trajectories do have the same initial condition $z_i(t_k)$ for each $i \in \mathcal{V}$, they typically diverge over each update window $[t_k, t_{k+1}]$, and $z^p(t_{k+1}; t_k) \neq z(t_{k+1})$ in general. The reason is that, while the predicted state trajectories in (3) are based on the assumption that neighbors continue along their previous trajectories, neighbors in fact compute and employ their own updated predicted control trajectories. Therefore, the actual state evolves according to (5). The challenge then is to generate a distributed RHC algorithm that has feasibility and stability properties in the presence of the discrepancy between predicted and actual state trajectories. A desirable property of any RHC algorithm is to have feasible state and control trajectories available at any update, as these trajectories can be used to preempt the optimization algorithm used to solve the optimal control problem. In many formulations, the feasible state trajectory is the remainder of the previous trajectory concatenated with the response under a terminal controller [3], [12], [13]. While $\hat{z}_i(\cdot; t_k)$ is such a trajectory, it cannot be used since $\hat{z}_i(t_k; t_k) \neq z_i(t_k)$ in general. Still, a feasible control trajectory exists. Indeed, a contribution of this paper is to show that a feasible control is the remainder of the previous control trajectory concatenated with the terminal controller, with the corresponding feasible state trajectory starting from the true state at each update time. The feasible state and control trajectories at any update $t_k$ are denoted $\bar{z}_i(\cdot; t_k)$ and $\bar{u}_i(\cdot; t_k)$, respectively. The feasible state trajectory satisfies
$$\dot{\bar{z}}_i(t; t_k) = f_i(\bar{z}_i(t; t_k), \hat{z}_{-i}(t; t_k), \bar{u}_i(t; t_k)), \qquad (6)$$

given initial condition $\bar{z}_i(t_k; t_k) = z_i(t_k)$, and the feasible control is given by

$$\bar{u}_i(t; t_k) = \begin{cases} u_i^p(t; t_{k-1}), & t \in [t_k, t_{k-1} + T) \\ K_i \bar{z}_i(t; t_k), & t \in [t_{k-1} + T, t_k + T] \end{cases} \qquad (7)$$

The feasible control trajectory $\bar{u}_i$ is the remainder of the previously predicted control trajectory, concatenated with the linear control applied to the nonlinear model and based on the decoupled linear responses for each upstream neighbor. In the next section, feasibility and stability will be proven.
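The bookkeeping in (4) and (7) can be sketched on a sampled time grid: the assumed trajectory is the tail of the previous prediction extended by the decoupled linear rollout, and the feasible control is the tail of the previous control extended by the terminal feedback. Everything here (grid sizes, the Hurwitz block, the gain) is illustrative, and forward Euler stands in for a proper integrator:

```python
import numpy as np

dt, T, delta = 0.01, 1.0, 0.1
n_T, n_d = int(T / dt), int(delta / dt)        # samples per horizon / update

A_di = np.array([[0.0, 1.0], [-4.6, -5.2]])    # Hurwitz block (illustrative)
K_i = np.array([[-3.6, -5.3]])                 # terminal gain (illustrative)

def assumed_trajectory(z_prev_pred):
    """Eq. (4): tail of the previous prediction over [t_k, t_{k-1}+T),
    then the linear rollout z' = A_di z over [t_{k-1}+T, t_k+T]."""
    tail = z_prev_pred[n_d:]
    z = tail[-1].copy()
    roll = []
    for _ in range(n_d):                       # forward Euler on z' = A_di z
        z = z + dt * (A_di @ z)
        roll.append(z.copy())
    return np.vstack([tail, np.array(roll)])

def feasible_control(u_prev_pred, z_bar_tail):
    """Eq. (7): tail of the previous control, then terminal feedback
    K_i z_bar evaluated along the feasible state tail."""
    return np.concatenate([u_prev_pred[n_d:], (K_i @ z_bar_tail.T).ravel()])

z_prev = np.tile(np.array([0.3, 0.0]), (n_T + 1, 1))   # placeholder prediction
z_hat = assumed_trajectory(z_prev)
u_bar = feasible_control(np.zeros(n_T), z_hat[-n_d:])
assert z_hat.shape == (n_T + 1, 2) and len(u_bar) == n_T
```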
In each local optimal control problem, for any agent $i \in \mathcal{V}$ at update time $t_k$, the cost function $J_i(z_i(t_k), u_i^p(\cdot; t_k))$ is given by

$$\int_{t_k}^{t_k+T} \|z_i^p(s; t_k)\|_{Q_i}^2 + \|u_i^p(s; t_k)\|_{R_i}^2 \, ds + \|z_i^p(t_k + T; t_k)\|_{P_i}^2,$$
where $Q_i = Q_i^T > 0$, $R_i = R_i^T > 0$ and $P_i = P_i^T > 0$. The matrix $P_i = P_i^T > 0$ is chosen to satisfy the Lyapunov equation $P_i A_{di} + A_{di}^T P_i = -\hat{Q}_i$, where $\hat{Q}_i = Q_i + K_i^T R_i K_i$. Denoting $P = \mathrm{diag}(P_1, \ldots, P_{N_a})$ and $\hat{Q} = \mathrm{diag}(\hat{Q}_1, \ldots, \hat{Q}_{N_a})$, it follows that $PA_d + A_d^T P = -\hat{Q}$, and $\Omega_\varepsilon \triangleq \{z \in \mathbb{R}^{nN_a} \mid \|z\|_P \leq \varepsilon\}$ is a positively invariant region of attraction for both the closed-loop linearization $\dot{z}(t) = A_c z(t)$ and the closed-loop nonlinear system $\dot{z}(t) = f(z(t), Kz(t))$. Additionally, $Kz \in \mathcal{U}^{N_a}$ for all $z \in \Omega_\varepsilon$.
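The Lyapunov equation above can be solved by vectorization; a numpy-only sketch follows (`scipy.linalg.solve_continuous_lyapunov` does the same job), using the $Q_i$ and $R_i$ weights from the example in Section V and an illustrative Hurwitz block:

```python
import numpy as np

def lyap(A, Qhat):
    """Solve P A + A^T P = -Qhat via the Kronecker/vec identity:
    vec(PA) = (A^T kron I) vec(P), vec(A^T P) = (I kron A^T) vec(P)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    vecP = np.linalg.solve(M, -Qhat.flatten(order="F"))
    return vecP.reshape((n, n), order="F")

A_di = np.array([[0.0, 1.0], [-4.6, -5.2]])    # Hurwitz block (illustrative)
Ki = np.array([[-3.6, -5.3]])
Qi, Ri = 30.0 * np.eye(2), np.array([[0.1]])
Qhat = Qi + Ki.T @ Ri @ Ki                     # Qhat_i = Q_i + K_i^T R_i K_i

P = lyap(A_di, Qhat)
assert np.allclose(P @ A_di + A_di.T @ P, -Qhat)   # Lyapunov equation holds
assert np.all(np.linalg.eigvals(P).real > 0)       # P = P^T > 0 since A_di Hurwitz
```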
The parameter $\varepsilon \in (0, \infty)$ that satisfies the conditions of the lemma can be found numerically, as described in [6], [13]. In each local optimal control problem, the terminal state constraint set for each $i \in \mathcal{V}$ is

$$\Omega_i(\varepsilon) \triangleq \left\{z_i \in \mathbb{R}^n \mid \|z_i\|_{P_i} \leq \varepsilon/\sqrt{N_a}\right\}. \qquad (8)$$

By construction, if $z \in \Omega_1(\varepsilon) \times \cdots \times \Omega_{N_a}(\varepsilon)$, then the decoupled controllers can stabilize the system to the origin, since $\|z_i\|_{P_i}^2 \leq \varepsilon^2/N_a$ for all $i \in \mathcal{V}$ implies $\sum_{i \in \mathcal{V}} \|z_i\|_{P_i}^2 \leq \varepsilon^2$.
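The containment argument behind (8) is easy to replay numerically; the $P_i$ matrix and the state below are illustrative values, not from the paper:

```python
import numpy as np

def P_norm(z, P):
    """Weighted norm ||z||_P = sqrt(z^T P z)."""
    return float(np.sqrt(z @ P @ z))

eps, Na = 0.2, 3
P_i = np.array([[2.0, 0.3], [0.3, 1.0]])       # some P_i = P_i^T > 0
z_i = np.array([0.02, -0.03])

# Agent-wise membership z_i in Omega_i(eps):
in_Omega_i = P_norm(z_i, P_i) <= eps / np.sqrt(Na)
assert in_Omega_i
# If all N_a agents satisfy it, the concatenated state is in the eps level set:
total = Na * P_norm(z_i, P_i) ** 2             # sum over identical agents here
assert total <= eps ** 2
```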
Suppose then that at some time $t' \geq t_0$, $z_i(t') \in \Omega_i(\varepsilon)$ for every $i \in \mathcal{V}$. Then, from Lemma 1, stabilization is achieved if every agent employs its decoupled static feedback controller $K_i z_i(t)$ for all time $t \geq t'$.

[...] $a = [\cdots/(\cdots N_a)]^2$, $b = [\varepsilon/(2(q + 1)N_a)]^2$, and $z_i^p(\cdot; t_k)$ satisfies the dynamic equation (3) and the terminal constraint $z_i^p(t_k + T; t_k) \in \Omega_i(\varepsilon/2)$, with $\Omega_i$ defined in (8).
Equation (9) is utilized to prove that the distributed RHC algorithm is stabilizing. Equation (10) is referred to as the consistency constraint, which requires that each predicted trajectory remain close to the assumed trajectory (that neighbors assume for that agent). In particular, the predicted trajectory $z_i^p$ must remain nearly as close to the assumed trajectory $\hat{z}_i$ as the feasible trajectory $\bar{z}_i$, with an added margin of freedom parameterized by $(\varepsilon/2(q + 1)N_a)^2$. In the analysis that follows, the consistency constraint (10) is a key equation in proving that $\bar{z}_i$ is a feasible state trajectory at each update. The constant $q \in \{1, 2, 3, \ldots\}$ is a design parameter, and the choice for $q$ will be motivated in Section IV. Before stating the distributed RHC algorithm, an assumption (standard in centralized implementations) is made to facilitate the initialization phase.
Assumption 4: Given $z(t_0)$ at initial time $t_0$, there exists a feasible control $u_i^p(\tau; t_0) \in \mathcal{U}$, $\tau \in [t_0, t_0 + T]$, for each agent $i \in \mathcal{V}$, such that the solution to the full system $\dot{z}(\tau) = f(z(\tau), u^p(\tau; t_0))$, denoted $z^p(\cdot; t_0)$, satisfies $z_i^p(t_0 + T; t_0) \in \Omega_i(\varepsilon/2)$ and results in a bounded cost $J_i(z_i(t_0), u_i^p(\cdot; t_0))$ for every $i \in \mathcal{V}$. Moreover, each agent $i \in \mathcal{V}$ has access to $u_i^p(\cdot; t_0)$.

Let $\mathcal{Z} \subseteq \mathbb{R}^{nN_a}$ denote the set of initial states for which there exists a control satisfying the conditions in Assumption 4.
The control algorithm is now stated.

Algorithm 1: The distributed receding horizon control law for any agent $i \in \mathcal{V}$ is as follows:

Data: $z(t_0)$, $u_i^p(\cdot; t_0)$ satisfying Assumption 4, $T \in (0, \infty)$, $\delta \in (0, T]$, $q \in \{1, 2, 3, \ldots\}$.

Initialization: At time $t_0$, if $z(t_0) \in \Omega_\varepsilon$, then apply the terminal controller $u_i(t') = K_i z_i(t')$, for all $t' \geq t_0$. Else:

[...]

b) Compute $\hat{z}_i(\cdot; t_{k+1})$ according to (4) and transmit it to every downstream neighbor $l \in \mathcal{N}_i^d$.

c) Receive $\hat{z}_j(\cdot; t_{k+1})$ from every upstream neighbor $j \in \mathcal{N}_i^u$ and assemble $\hat{z}_{-i}(\cdot; t_{k+1})$.

d) Apply $u_i^p(\tau; t_k)$, $\tau \in [t_k, t_{k+1})$.

2) At any time $t_{k+1}$, $k \in \mathbb{N}$:

a) Measure $z_i(t_{k+1})$.

b) Compute $\bar{z}_i(\cdot; t_{k+1})$ according to (6).

c) Solve Problem 1, yielding $u_i^p(\cdot; t_{k+1})$.
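The synchronous update cycle above can be sketched on a toy two-agent linear system. Here `solve_problem_1` is only a placeholder that predicts under the terminal feedback (a real implementation solves the constrained local optimal control problem), and all dynamics, gains, and horizons are made up:

```python
import numpy as np

dt, delta, T = 0.01, 0.1, 0.5
steps_T, steps_d = int(T / dt), int(delta / dt)
a_self, a_coup, k_gain = -1.0, 0.2, -1.0      # scalar dynamics per agent

def solve_problem_1(z0, z_hat_nbr):
    """Placeholder optimizer: predict under the terminal feedback using
    the assumed (stale) neighbor trajectory, as in eq. (3)."""
    z, zs, us = z0, [], []
    for s in range(steps_T):
        u = k_gain * z
        z = z + dt * (a_self * z + a_coup * z_hat_nbr[s] + u)
        zs.append(z)
        us.append(u)
    return np.array(zs), np.array(us)

z = np.array([1.0, -0.8])                     # two agents, scalar states
z_hat = {i: np.full(steps_T, z[i]) for i in (0, 1)}   # initial assumption
for k in range(40):                           # globally synchronous updates t_k
    pred = {i: solve_problem_1(z[i], z_hat[1 - i]) for i in (0, 1)}
    # eq.-(4)-style assumed trajectory for the next update: tail of the
    # prediction plus the decoupled closed-loop (exponential) rollout
    z_hat = {i: np.concatenate([
                 pred[i][0][steps_d:],
                 pred[i][0][-1] * np.exp((a_self + k_gain) * dt *
                                         np.arange(1, steps_d + 1))])
             for i in (0, 1)}
    # apply the first portion of the predicted control over [t_k, t_{k+1})
    for s in range(steps_d):
        u = np.array([pred[0][1][s], pred[1][1][s]])
        z = z + dt * (a_self * z + a_coup * z[::-1] + u)

assert np.all(np.abs(z) < 1e-2)               # driven toward the origin
```

Even with each agent optimizing against a stale assumption about its neighbor, the weakly coupled closed loop contracts, which is the behavior the feasibility and stability analysis formalizes.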
Part 1(a) of Algorithm 1 presumes that every agent can obtain the full state $z(t)$. This requirement is a theoretical artifact needed when employing dual-mode control, so that switching occurs only when the conditions of Lemma 1 are satisfied. In the next section, it is shown that the distributed RHC policy drives the state $z(t_l)$ to $\Omega_\varepsilon$ [...] $F(z, \hat{z}, u) = f(z, u)$ whenever $\hat{z} = z$.
Assumption 5: Given $P$ and $R$, there exist positive constants $\kappa$ and $\beta$ such that the Lipschitz bound

$$\|F(z, \hat{z}, u)\|_P \leq \|z\|_P + \kappa\|\hat{z}\|_P + \beta\|u\|$$

holds for all $z, \hat{z} \in \mathcal{Z}$, and $u \in \mathcal{U}^{N_a}$.

More generally, the Lipschitz bound would take the form $\|F(z, \hat{z}, u)\|_P \leq \bar{\alpha}\|z\|_P + \bar{\kappa}\|\hat{z}\|_P + \bar{\beta}\|u\|$ for some positive constants $(\bar{\alpha}, \bar{\kappa}, \bar{\beta})$. Thus, Assumption 5 presumes that one can identify the Lipschitz constants $(\bar{\alpha}, \bar{\kappa}, \bar{\beta})$, and that the differential equation $f$ (or $F$) is already normalized by $\bar{\alpha}$, so that $\kappa = \bar{\kappa}/\bar{\alpha}$ and $\beta = \bar{\beta}/\bar{\alpha}$. The local Lipschitz constant $\kappa$ represents a normalized measure of the amount of coupling in the collective dynamic model. The feasibility result is now stated.
Theorem 1: Suppose that Assumptions 1–5 hold, $z(t_0) \in \mathcal{Z}$, and the following parametric conditions hold:

$$e^{(1+\beta_K)\delta}\,\frac{\ln\left[(q + 1)^2/q^2\right]}{\min_{i \in \mathcal{V}}\{\lambda_{\min}(P_i^{-1/2} \hat{Q}_i P_i^{-1/2})\}} \;\leq\; \delta \;\leq\; e^{-(1+\beta_K)\delta}\,\frac{r/(r + 1)}{c(q + 1)\sqrt{N_a}}, \qquad (11)$$

$$2\kappa T(r + 1) \exp\left[T + (1 + \kappa + \beta_K)\delta\right] \leq 1, \qquad (12)$$

where the constants $c$ and $\beta_K$ are defined as $c = 1/(10\lambda_{\max}(\hat{Q}^{-1/2} P \hat{Q}^{-1/2})) + \lambda_{\max}^{1/2}(P^{-1/2} A_o^T P A_o P^{-1/2})$ and $\beta_K = \lambda_{\max}^{1/2}(P^{-1/2} K^T K P^{-1/2})$, and the constants $q, r \in \{1, 2, 3, \ldots\}$ are chosen design parameters. Then, for every agent $i \in \mathcal{V}$, the control and state pair $(\bar{u}_i(\cdot; t_k), \bar{z}_i(\cdot; t_k))$, defined by equations (6) and (7), is a feasible solution to Problem 1 at every update $k \geq 1$.
The purpose of the design parameters $q, r \in \{1, 2, 3, \ldots\}$ is to shift the bounds on $\delta$ and $\kappa$, as necessary to find feasible values for a specific problem. The larger the chosen value of $q$, for example, the smaller the lower and upper bounds on $\delta$. The ability to shift the feasible range for $\delta$ is useful for design purposes, as will be demonstrated in Section V. Also, larger values of $q$ reduce the margin in the consistency constraint (10) that bounds how much the predicted state can deviate from the assumed state. Equation (12) places an upper bound on the Lipschitz coupling constant $\kappa$. By increasing the design parameter $r$, one can increase the upper bound on $\delta$ at the price of requiring a tighter bound on $\kappa$. The utility of being able to choose $r$ is also demonstrated in Section V. Parametric conditions which ensure the stability of the closed-loop system (5) are now stated.
Theorem 2: Suppose that Assumptions 1–5 hold, $z(t_0) \in \mathcal{Z}$, conditions (11)–(12) are satisfied, and the following parametric conditions hold:

$$T \geq 8\delta, \qquad (q + 1)\delta \geq 2T. \qquad (13)$$

Then, by application of Algorithm 1, the closed-loop system (5) is asymptotically stabilized to the origin.
The feasibility and stability results in this paper are related
to those of Michalska and Mayne [13], who demonstrated
robust feasibility and stability in the presence of model
error by placing parametric bounds on (combinations of)
the update period and a Lipschitz constant. While there is
no model error here, bounds are likewise derived to ensure
robustness to the bounded discrepancy between what agents
do, and what their neighbors believe they will do.
V. COUPLED OSCILLATORS

In this section, the example of three coupled Van der Pol oscillators is considered for application of the distributed RHC algorithm. The three oscillators modeled here are physically meaningful in that they capture the thigh and knee dynamics of a walking robot experiment [8]. In the following, $\theta_1 \in [-\pi/2, \pi/2]$ is the relative angle between the two thighs, $\theta_2 \in [-\pi/2, \pi/2]$ is the right knee angle (relative to the right thigh), and $\theta_3 \in [-\pi/2, \pi/2]$ is the left knee angle (relative to the left thigh). The controlled equations of motion, in units of $(\mathrm{rad/sec})^2$, are
$$\ddot{\theta}_1(t) = 0.1\left[1 - 5.25\,\theta_1^2(t)\right]\dot{\theta}_1(t) - \theta_1(t) + u_1(t)$$

$$\ddot{\theta}_2(t) = 0.01\left[1 - p_2(\theta_2(t) - \theta_{2e})^2\right]\dot{\theta}_2(t) - 4(\theta_2(t) - \theta_{2e}) + 0.057\,\theta_1(t)\dot{\theta}_1(t) + 0.1(\dot{\theta}_2(t) - \dot{\theta}_3(t)) + u_2(t)$$

$$\ddot{\theta}_3(t) = 0.01\left[1 - p_3(\theta_3(t) - \theta_{3e})^2\right]\dot{\theta}_3(t) - 4(\theta_3(t) - \theta_{3e}) + 0.057\,\theta_1(t)\dot{\theta}_1(t) + 0.1(\dot{\theta}_3(t) - \dot{\theta}_2(t)) + u_3(t),$$

subject to $|u_i(t)| \leq 1$, $t \geq 0$, $i = 1, 2, 3$.
Two-phase biped locomotion is generated by these equations with zero control (open-loop) and time-varying parameter values, given by $(\theta_{2e}, \theta_{3e}, p_2, p_3)(t) = (0.227, 0.559, 6070, 192)$ for $t \in [0, \pi)$, and equal to $(0.559, 0.226, 226, 5240)$ for $t \in [\pi, 2\pi)$. Figure 1 shows the resulting open-loop stable limit cycle response, starting from the initial position $(40, 3, 3)$ degrees, with $\dot{\theta}_i(0) = 0$ for $i = 1, 2, 3$. Through perturbation analysis and the method of harmonic balance, the limit cycle is closely approximated by $\theta_1^{lc}(t) = (50\pi/180)\cos(t)$, $\theta_2^{lc}(t) = \theta_{2e} + (3\pi/180 - \theta_{2e})\cos(2t)$, and $\theta_3^{lc}(t) = \theta_{3e} + (3\pi/180 - \theta_{3e})\cos(2t)$.

Fig. 1. Open-loop stable limit cycle, showing the angular positions starting from $(40, 3, 3)$ degrees with zero initial angular velocity.

The chosen initial condition demonstrates the attractivity of the stable
limit cycle. For example, note that the amplitude of $\theta_1(t)$ starts at 0.70 radians and approaches 0.87 radians, the amplitude of $\theta_1^{lc}(t)$. While the robot has 6 total degrees of freedom when walking in accordance with the limit cycle response above, the remaining degrees of freedom (including two ankles and one free foot) can be derived from the three primary degrees of freedom $\theta_1$, $\theta_2$ and $\theta_3$ [8]. With zero control, there are two equilibrium conditions. One is the limit cycle defined above, and the other is the unstable fixed point $(\theta_1, \theta_2, \theta_3) = (\theta_{1e}, \theta_{2e}, \theta_{3e})$ with $\theta_{1e} = 0$ and $\dot{\theta}_i = 0$ for $i = 1, 2, 3$. A reasonable control objective is to command torque motors (controls $u_i$) to drive the three angles from the stable limit cycle response to the fixed point; that is, to stably bring the robot to a stop. To do so within one half-period of the limit cycle response means that one set of parameter values $(\theta_{2e}, \theta_{3e}, p_2, p_3)$ can be considered in the model. As such, for control purposes, these parameters are assumed to take on the values $(0.227, 0.559, 6070, 192)$. In this way, discontinuous dynamic equations are also avoided. Now, through a change of variables, the dynamics and input constraints satisfy the conditions of Assumption 1.
Denoting $z_i = (\theta_i - \theta_{ie}, \dot{\theta}_i)$, the dynamics are linearized around $(z_i, u_i) = (0, 0)$. The matrix $A_{11}$ has unstable eigenvalues $0.05 \pm j$, and the matrices $A_{22}$ and $A_{33}$ are unstable with eigenvalues $0.055 \pm 2j$. For all three oscillators, the dynamics are linearly controllable around the origin. In accordance with Assumption 2, the following gain matrices are used to stabilize the linearized dynamics: $K_1 = [-3.6\ \ -5.3]$, $K_2 = K_3 = [-2.0\ \ -5.0]$. The resulting closed-loop matrix $A_c$ has eigenvalues $(-1.1, -4.1, -3, -2.4 \pm 0.5j, -2)$. For the cost function $J_i$, the chosen weights are $Q_i = \mathrm{diag}(30, 30)$ and $R_i = 0.1$, $i = 1, 2, 3$. Then, each $P_i$ is calculated according to the Lyapunov equation on page 3. Since the maximum eigenvalue of $PA_o + A_o^T P - \hat{Q}/2$ is $-11$, Assumption 3 is satisfied. The constraint parameter $\varepsilon = 0.2$ satisfies the conditions of Lemma 1, with $\varepsilon$ calculated as described in [6]. In accordance with Assumption 4, a centralized optimal control problem is solved at initial time $t_0 = 0$. In this problem, and in the centralized RHC implementation, the sum of the three cost functions $J_1 + J_2 + J_3$ is minimized, enforcing terminal state and input constraints with a horizon time of $T = 6$ seconds. The initial condition is kept the same as that shown in Figure 1.
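The reported linearization and gains can be verified numerically. The negative signs of $K_1$, $K_2$, $K_3$ are restored here on the assumption that the extraction dropped them (positive gains of these magnitudes would not be stabilizing), and $A_{22}$ is built from the reconstructed model (restoring force $-4$, damping $0.01$ plus the $0.1$ velocity-coupling contribution):

```python
import numpy as np

B = np.array([[0.0], [1.0]])
A11 = np.array([[0.0, 1.0], [-1.0, 0.1]])    # eigenvalues 0.05 +/- j
A22 = np.array([[0.0, 1.0], [-4.0, 0.11]])   # eigenvalues 0.055 +/- 2j
K1 = np.array([[-3.6, -5.3]])
K2 = np.array([[-2.0, -5.0]])

# Open-loop blocks are unstable, as stated in the text
assert np.allclose(sorted(np.linalg.eigvals(A11).real), [0.05, 0.05])
assert np.allclose(sorted(np.linalg.eigvals(A22).real), [0.055, 0.055])

# Terminal feedbacks render each A_di = A_ii + B_i K_i Hurwitz
for A, K in [(A11, K1), (A22, K2)]:
    eig = np.linalg.eigvals(A + B @ K)
    assert np.all(eig.real < 0)
```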
To solve the centralized optimal control problem, and each of the distributed optimal control problems, the same approach is used. The computer with MATLAB 7.0 software has a 2.4 GHz Intel Pentium(R) 4 CPU, with 512 MB of RAM. In the spirit of the Nonlinear Trajectory Generation package developed by Milam et al. [14], a collocation technique is employed within MATLAB. First, each angular position trajectory $\theta_i(t)$ is parameterized as a $C^2[t_k, t_k + T]$ 6th-order B-spline polynomial. The constraints and cost functions are evaluated at 121 breakpoints over each 6-second time window. The resulting nonlinear programming problem is solved using the fmincon function, generating the 27 B-spline coefficients for each position $\theta_i(t)$. Using the concept of differential flatness, the control inputs $u_i$ are not parameterized as polynomials for which the coefficients must also be calculated. Instead, each control input is defined in terms of the parameterized positions $\theta_i(t)$ and their derivatives through the dynamics. With an update period of $\delta = 0.15$ seconds, the centralized RHC state and control response is shown in Figure 2.

Fig. 2. The centralized RHC response, showing the angular position trajectories $\theta_{i,C}$ (left plot) and the control trajectories $u_{i,C}$ (right plot), for each $i = 1, 2, 3$.

The position and control trajectories are denoted $\theta_{i,C}$ and $u_{i,C}$, respectively. Note that the positions are brought suitably close to their fixed-point values (shown by dashed lines) within a half-period of $\pi$ seconds, validating the assumption that the model parameters $(\theta_{2e}, \theta_{3e}, p_2, p_3)$ are constant over the time horizon of 6 seconds.
With an initially feasible solution available, the distributed RHC algorithm can be employed. Before presenting the results, the theoretical conditions are evaluated. In total, the parametric conditions that must be satisfied are (11)–(13). In accordance with Assumption 5, the Lipschitz parameters are identified as $\|\tilde{F}(z, \hat{z}, u)\|_P \leq 4\|z\|_P + 0.1\|\hat{z}\|_P + 1\cdot\|u\|$. To facilitate calculation of an update period that satisfies the parametric conditions, time scaling is introduced to normalize the horizon time from $T = 6$ seconds to 1 second. For the dynamics $\tilde{F}$, let $\tau(t) = t/T \in [0, 1]$, such that $\|\frac{d}{d\tau} z(\tau)\|_P = \|T\tilde{F}(z(\tau), \hat{z}, u)\|_P \leq 4T\|z\|_P + 0.1T\|\hat{z}\|_P + T\|u\|$. To get into the normalized form, the dynamics are scaled as $F = \tilde{F}/(4T)$. Then, the normalized Lipschitz bounds become $\|F(z, \hat{z}, u)\|_P \leq \|z\|_P + \kappa\|\hat{z}\|_P + \beta\|u\|$, where $\kappa = 0.1/4 = 0.025$ and $\beta = 1/4 = 0.25$. Choosing the design parameters $q = 90$ and $r = 6$, the lower bound in (11) is 0.025 and the upper bound is 0.028. So, the update period (for the time-scaled dynamics) is chosen to be $\delta = 0.025$ seconds. Also, the left-hand side of (12) is 0.997, so the inequality is satisfied. Lastly, equation (13) is a sufficient condition for stability, and it is satisfied for the values $T = 1$, $\delta = 0.025$ and $q = 90$. Therefore, the parametric conditions of the theory guaranteeing feasibility and stability of the distributed RHC algorithm are satisfied. Scaling time back to a planning horizon of 6 seconds corresponds to an update period of $\delta = 0.15$ seconds, and this is the update period used in the centralized and distributed RHC implementations.
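The arithmetic above can be replayed against the reconstructed forms of conditions (12) and (13). The constant $\beta_K$ is not recoverable from the text here, so a representative value is assumed:

```python
import math

T, q, r = 1.0, 90, 6                 # normalized horizon and design parameters
kappa, beta = 0.1 / 4, 1.0 / 4       # normalized Lipschitz constants (0.025, 0.25)
delta = 0.025                        # chosen update period (scaled time)
beta_K = 0.85                        # assumed; not recoverable from the text

# (12), as reconstructed: 2*kappa*T*(r+1)*exp(T + (1+kappa+beta_K)*delta) <= 1
lhs = 2 * kappa * T * (r + 1) * math.exp(T + (1 + kappa + beta_K) * delta)
assert lhs <= 1.0                    # about 0.997 with this beta_K

# (13), as reconstructed: T >= 8*delta and (q+1)*delta >= 2*T
assert T >= 8 * delta
assert (q + 1) * delta >= 2 * T

# Scaling back to the 6-second horizon gives the implemented update period
assert abs(6 * delta - 0.15) < 1e-12
```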
Distributed RHC is implemented precisely according to Algorithm 1, with one modification to Problem 1. In the optimization code, the constants on the right-hand side of constraints (9) and (10) are set to $a = b = 0.1$. The actual constants in (9) and (10) are small enough ($\approx 10^{-7}$) to cause feasibility problems in each distributed optimization code. The value of 0.1, on the other hand, worked quite well. Of course, the constants defined in constraints (9) and (10) are derived based on the sufficient conditions of the theory, and are likely to be conservative. The closed-loop position and control trajectories generated by applying the distributed RHC algorithm are shown in Figure 3.

Fig. 3. The distributed RHC response, showing the angular position trajectories $\theta_{i,D}$ (left plot) and the control trajectories $u_{i,D}$ (right plot), for each $i = 1, 2, 3$. The response is quite close to the centralized RHC response shown in Figure 2.

The position and control trajectories for this closed-loop solution are denoted $\theta_{i,D}$ and $u_{i,D}$, respectively. While the algorithm and theory suggest switching to the terminal controllers once $z(t)$